An operator logo is a logo which appears on the status screen of a mobile phone . Originally intended as a way for phone companies to brand phones attached to their networks, the operator logo has since become a method by which owners may customise their phones to reflect their own interests. It helped kick off mobile phone content advertising, which became particularly prominent with ring tone adverts. Older mobile phones of the 1990s, with black-and-white LCD screens, had an option to display the telecom operator's logo in place of the operator's name in plain text. Various companies later provided custom logos and designs to set as the operator logo. [ 1 ] These logos can be shared with other people by SMS. When colour mobile phones came to market, colour logos became available as well. An industry has sprung up around the use of these logos, and around ring tones , tailored towards phones, such as those made by Nokia , which can receive new logos in a text message . Several mobile phone companies provide services on their websites where users can design their own logos, and there is also software available which can be used to create them.
https://en.wikipedia.org/wiki/Operator_logo
Operator messaging is a term, analogous to text messaging and voice messaging , for an answering service call center that focuses on one specific scripting style, one that grew out of the history of the alphanumeric pager. In the 1970s and early 1980s, the cost of making a phone call decreased and more business communication was done by phone. As corporations grew and labor rates increased, the ratio of secretaries to employees decreased. The initial solution to the phone communication problem for businesses was the “message center.” A message center or “message desk” was a centralized, manual answering service inside a company staffed by a few people, usually women, answering everyone's phones. Extensions that were busy or rang “no answer” would forward to the message center onto a device called a “call director”. The call director had a button for each extension in the company, which would flash when that person's extension forwarded to the message center; a label next to the button told the operator whose extension it was. As wireless communication technology matured in the late 1980s, pager service providers created subscription services offered in a variety of plans and options to meet the needs of the subscriber and the type of device used. In general, all pagers are given unique telephone numbers so that callers can dial in and send a numeric message, such as a callback number or a numerically coded special message (room numbers to report to, etc.). [ 1 ] However, alphanumeric pagers could only receive text messages when the message sender had installed software on their PC to dial into the publicly accessible modems operated by the paging [ 2 ] service provider, which then transmitted the message over the air through a network of radio towers. [ 3 ] Alpha dispatch service is best described as enhanced numeric paging.
[ 4 ] It is a service that consists of live operators who answer incoming calls, type the callers' messages into a computer, and then transmit each message using the Telocator Alphanumeric Protocol to the paging provider's radio towers. Alphanumeric pagers receive the messages in the form of words and numbers. Messages are sequentially numbered and archived so that they can be re-sent later if required. PageNet was one of the larger paging providers that offered this add-on service to its alphanumeric pager customers. Alpha dispatch was never designed to replace a full-service answering service. Although both services will answer calls in a customer's name and advise the caller that the customer is unavailable, a full-service answering service will usually have additional information about the customer that it is encouraged to share with the caller, such as a business summary, website information, personal schedule, and other details. An alpha dispatch operator usually has no knowledge of the subscriber beyond a first and last name or company name, and serves only as a messaging bridge between the caller and the subscriber, with the caller dictating what the operator should type as a message to the subscriber. Because of this difference, minimal training and supervision are required of the call center employees, and operator messaging is therefore much less expensive than a full-service answering service. The low cost makes operator messaging an affordable alternative to voicemail. As the use of alphanumeric pagers declined in the mid-1990s and cell phone text messaging availability and reliability increased, these well-established alpha dispatch call centers adjusted their technology to allow live operators' messages to be transmitted to cellular service providers in the same way as to pager service providers.
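As a rough illustration of what transmitting a message "using the Telocator Alphanumeric Protocol" involves, the sketch below frames an operator-typed message as a TAP message block. The field layout and checksum rule follow commonly published descriptions of TAP; treat the details as an approximation rather than a reference implementation, and the pager ID and message text as made-up examples.

```python
# Illustrative sketch of framing a one-page message in the style of the
# Telocator Alphanumeric Protocol (TAP). Layout and checksum follow the
# commonly documented format; this is an approximation, not a spec-exact
# implementation.

STX, ETX, CR = "\x02", "\x03", "\r"

def tap_checksum(block: str) -> str:
    """Low 12 bits of the character sum, encoded as three ASCII characters."""
    total = sum(ord(c) for c in block) & 0xFFF
    return "".join(chr(0x30 + ((total >> shift) & 0xF)) for shift in (8, 4, 0))

def tap_block(pager_id: str, message: str) -> str:
    """Frame a pager ID and an operator-typed message as a TAP message block."""
    body = STX + pager_id + CR + message + CR + ETX
    return body + tap_checksum(body) + CR

# Hypothetical pager ID and message, as an operator might type them:
frame = tap_block("5551234", "CALL MR SMITH RE ORDER 17")
```

The checksum lets the paging terminal detect corruption on the dial-up modem link before queueing the message for over-the-air transmission.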
Operators still follow the same answering procedures and have no way of knowing whether the subscriber receives the text message on a cell phone or a pager. The operator still serves as a "relay" or "bridge" for the caller to dictate a message to the operator messaging subscriber's device. Although e-mail capabilities have been extended to alphanumeric pagers and cellular text messaging, operator messaging services are used by individuals who are not located near a computer or who are in situations where sending a text message themselves would be dangerous or impractical. Live operator messaging marries the technologies of voice messaging and text messaging as an alternative to voicemail, using call forwarding features to redirect a subscriber's incoming cell phone calls automatically to the operator messaging service after three or four unanswered rings. Operator messaging service providers remain profitable because the average call length is under 30 seconds and employees are often paid less than full-service answering service employees due to the limited training required.
https://en.wikipedia.org/wiki/Operator_messaging
An operator protection device (OPD) is a device that protects the operator of machinery. The term has been adopted by the Australian Competition and Consumer Commission (ACCC) to describe devices used to protect all-terrain vehicle (ATV) riders from being crushed in the event of an accident. [ 1 ] [ 2 ] [ 3 ] In the event of an ATV rollover, an OPD will flex over or around a person on the ground while bearing the load of the ATV, thus preventing the rider from being crushed. [ 3 ] [ 4 ] In 2017 the ACCC began investigating the mandatory safety standard for ATVs. [ 2 ] This resulted in legislation requiring all new quad bikes to have an operator protection device fitted. [ 5 ] The safety standard issued by the ACCC specified two models: the Quadbar and the ATV Lifeguard . [ 5 ] Since the standard was issued, the makers of the ATV Lifeguard have developed and released the QuadGuard . [ 6 ] OPDs are sometimes referred to as crush protection devices (CPDs) or roll-over protective structures ( ROPS ). [ 7 ]
https://en.wikipedia.org/wiki/Operator_protection_device
In genetics , an operon is a functioning unit of DNA containing a cluster of genes under the control of a single promoter . [ 1 ] The genes are transcribed together into an mRNA strand and either translated together in the cytoplasm or spliced to create monocistronic mRNAs that are translated separately, i.e. several strands of mRNA that each encode a single gene product. The result is that the genes contained in the operon are either expressed together or not at all. Several genes must be co-transcribed to define an operon. [ 2 ] Originally, operons were thought to exist solely in prokaryotes (which here includes organelles, such as plastids , that are derived from bacteria ), but they were discovered in eukaryotes in the early 1990s and are considered rare there. [ 3 ] [ 4 ] [ 5 ] [ 6 ] In general, expression of prokaryotic operons leads to polycistronic mRNAs, while eukaryotic operons lead to monocistronic mRNAs. Operons are also found in viruses such as bacteriophages . [ 7 ] [ 8 ] For example, T7 phages have two operons. The first operon codes for various products, including a special T7 RNA polymerase which can bind to and transcribe the second operon. The second operon includes a lysis gene meant to cause the host cell to burst. [ 9 ] The term "operon" was first proposed in a short paper in the Proceedings of the French Academy of Sciences in 1960. [ 10 ] From this paper, the so-called general theory of the operon was developed. This theory suggested that in all cases, genes within an operon are negatively controlled by a repressor acting at a single operator located before the first gene. Later, it was discovered that genes could be positively regulated, and also regulated at steps that follow transcription initiation. Therefore, it is not possible to speak of a general regulatory mechanism, because different operons have different mechanisms.
Today, the operon is simply defined as a cluster of genes transcribed into a single mRNA molecule. Nevertheless, the development of the concept is considered a landmark event in the history of molecular biology. The first operon to be described was the lac operon in E. coli . [ 10 ] The 1965 Nobel Prize in Physiology or Medicine was awarded to François Jacob , André Michel Lwoff and Jacques Monod for their discoveries concerning the operon and virus synthesis. Operons occur primarily in prokaryotes but also, rarely, in some eukaryotes , including nematodes such as C. elegans and insects such as the fruit fly Drosophila melanogaster . [ 3 ] rRNA genes often exist in operons, which have been found in a range of eukaryotes including chordates . An operon is made up of several structural genes arranged under a common promoter and regulated by a common operator. It is defined as a set of adjacent structural genes, plus the adjacent regulatory signals that affect transcription of the structural genes. [ 12 ] The regulators of a given operon, including repressors , corepressors , and activators , are not necessarily coded for by that operon. The location and condition of the regulators, promoter, operator and structural DNA sequences can determine the effects of common mutations. Operons are related to regulons , stimulons and modulons : whereas operons contain a set of genes regulated by the same operator, regulons contain a set of genes under regulation by a single regulatory protein, and stimulons contain a set of genes under regulation by a single cell stimulus. According to its authors, the term "operon" is derived from the verb "to operate". [ 13 ] An operon contains one or more structural genes which are generally transcribed into one polycistronic mRNA (a single mRNA molecule that codes for more than one protein ). However, the definition of an operon does not require the mRNA to be polycistronic, though in practice it usually is.
[ 6 ] Upstream of the structural genes lies a promoter sequence which provides a site for RNA polymerase to bind and initiate transcription. Close to the promoter lies a section of DNA called an operator . All the structural genes of an operon are turned ON or OFF together, due to the single promoter and operator upstream of them, but sometimes finer control over gene expression is needed. To achieve this, some bacterial genes are located close together, but each has its own promoter; this is called gene clustering . Usually these genes encode proteins which work together in the same pathway, such as a metabolic pathway. Gene clustering helps a prokaryotic cell to produce metabolic enzymes in the correct order. [ 14 ] One study has posited that in the Asgard archaea , ribosomal protein coding genes occur in clusters that are less conserved in their organization than in other Archaea ; the closer an Asgard archaeon is to the eukaryotes , the more dispersed the arrangement of its ribosomal protein coding genes. [ 15 ] An operon is made up of three basic DNA components: the promoter, the operator, and the structural genes. Not always included within the operon, but important in its function, is a regulatory gene , a constantly expressed gene which codes for repressor proteins . The regulatory gene does not need to be in, adjacent to, or even near the operon to control it. [ 17 ] An inducer (small molecule) can displace a repressor (protein) from the operator site (DNA), resulting in an uninhibited operon. Alternatively, a corepressor can bind to the repressor to allow its binding to the operator site. A good example of this type of regulation is seen for the trp operon . Control of an operon is a type of gene regulation that enables organisms to regulate the expression of various genes depending on environmental conditions. Operon regulation can be either negative or positive, by induction or repression.
[ 16 ] Negative control involves the binding of a repressor to the operator to prevent transcription. Operons can also be positively controlled: with positive control, an activator protein stimulates transcription by binding to DNA (usually at a site other than the operator). The lac operon of the model bacterium Escherichia coli was the first operon to be discovered and provides a typical example of operon function. It consists of three adjacent structural genes , a promoter , a terminator , and an operator . The lac operon is regulated by several factors, including the availability of glucose and lactose . It can be activated by allolactose : allolactose binds to the repressor protein and prevents it from repressing gene transcription. It is thus a negative inducible (derepressible) operon, induced by the presence of lactose or allolactose. Discovered in 1953 by Jacques Monod and colleagues, the trp operon in E. coli was the first repressible operon to be discovered. While the lac operon can be activated by a chemical ( allolactose ), the tryptophan (Trp) operon is inhibited by a chemical (tryptophan). This operon contains five structural genes, trp E, trp D, trp C, trp B, and trp A, which encode tryptophan synthetase . It also contains a promoter, which binds RNA polymerase , and an operator, which blocks transcription when bound by the repressor protein encoded by the repressor gene (trp R). In the lac operon, allolactose binds to the repressor protein and prevents it from repressing gene transcription, while in the trp operon, tryptophan binds to the repressor protein and enables it to repress gene transcription. Unlike the lac operon, the trp operon also contains a leader peptide and an attenuator sequence which allow for graded regulation. [ 18 ] This is an example of the corepressible model. The number and organization of operons has been studied most critically in E. coli .
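The two regulation modes just described can be summarized as a small boolean sketch. This is a deliberate simplification for illustration: real control is graded (e.g. catabolite repression via cAMP-CAP in lac, and attenuation in trp), and the function names are made up here.

```python
# Simplified boolean model of negative inducible (lac) versus negative
# repressible (trp) operon control. A teaching sketch, not biology-exact.

def lac_operon_on(lactose_present: bool, glucose_present: bool) -> bool:
    """Negative inducible: allolactose (derived from lactose) inactivates
    the repressor; low glucose activates CAP, the positive control element."""
    repressor_bound = not lactose_present   # inducer displaces the repressor
    cap_active = not glucose_present        # positive control by CAP
    return (not repressor_bound) and cap_active

def trp_operon_on(tryptophan_present: bool) -> bool:
    """Negative repressible: tryptophan acts as a corepressor, enabling the
    TrpR repressor to bind the operator and block transcription."""
    repressor_bound = tryptophan_present
    return not repressor_bound
```

In this sketch the lac operon is fully active only with lactose present and glucose absent, while the trp operon shuts off as soon as its own end product accumulates.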
As a result, predictions can be made based on an organism's genomic sequence. One prediction method uses the intergenic distance between reading frames as a primary predictor of the number of operons in the genome: a short separation often merely changes the frame and suggests efficient read-through, while longer stretches, often up to 40–50 bases, occur where operons start and stop. [ 19 ] An alternative method is to find gene clusters in which gene order and orientation are conserved in two or more genomes. [ 20 ] Operon prediction is even more accurate if the functional class of the molecules is considered. Bacteria have clustered their reading frames into units, sequestered by co-involvement in protein complexes, common pathways, or shared substrates and transporters. Thus, accurate prediction would involve all of these data, a difficult task indeed. Pascale Cossart 's laboratory was the first to experimentally identify all the operons of a microorganism, Listeria monocytogenes . The 517 polycistronic operons are listed in a 2009 study describing the global changes in transcription that occur in L. monocytogenes under different conditions. [ 21 ] Primary promoters are the main controllers of operons, but many operons also have internal promoters; for example, half of all operons of E. coli have internal promoters . What happens when both the primary and the internal promoters are perturbed simultaneously? A recent study followed how each gene in each operon responded to such genome-wide stresses. [ 22 ] It found that many transcription events of operons end prematurely under those conditions, and that internal promoters compensate significantly for those terminations, creating a wave-like response pattern along operons. It was then shown that the same occurs in evolutionarily distant bacteria, such as Bacillus subtilis , Corynebacterium glutamicum , and Helicobacter pylori .
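The intergenic-distance heuristic mentioned above can be sketched in a few lines: adjacent same-strand genes separated by less than some threshold are grouped into one predicted operon. The 50 bp cutoff and the gene coordinates below are illustrative assumptions, not values from the cited studies; real predictors combine distance with conservation and functional-class information.

```python
# Sketch of operon prediction by intergenic distance: consecutive genes on
# the same strand with a small gap are grouped into one predicted operon.
# The threshold and coordinates are illustrative assumptions.

def predict_operons(genes, max_gap=50):
    """genes: list of (name, start, end, strand) tuples sorted by start."""
    operons, current = [], [genes[0]]
    for prev, gene in zip(genes, genes[1:]):
        same_strand = prev[3] == gene[3]
        gap = gene[1] - prev[2]          # intergenic distance in base pairs
        if same_strand and gap <= max_gap:
            current.append(gene)         # short gap: same transcription unit
        else:
            operons.append(current)      # strand switch or long gap: new operon
            current = [gene]
    operons.append(current)
    return [[g[0] for g in op] for op in operons]

genes = [
    ("trpE", 100, 1600, "+"),
    ("trpD", 1620, 3200, "+"),   # 20 bp gap  -> same operon
    ("trpC", 3230, 4500, "+"),   # 30 bp gap  -> same operon
    ("xyzA", 4900, 5800, "-"),   # strand change -> new operon
]
```

Running `predict_operons(genes)` groups the three trp genes together and starts a new operon at the strand change.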
https://en.wikipedia.org/wiki/Operon
ODB ( Operon DataBase ) is a database of conserved operons in sequenced genomes . [ 1 ]
https://en.wikipedia.org/wiki/Operon_database
Opha Pauline Dube (born 1960) is a Botswanan environmental scientist and Associate Professor in the Department of Environmental Science at the University of Botswana . She co-authored the IPCC's Special Report on Global Warming of 1.5 °C . She is one of fifteen scientists creating the 2023 Global Sustainable Development Report for the United Nations . Dube was awarded her MPhil in Applied Remote Sensing at the Cranfield Institute of Technology in the UK in 1989. [ 1 ] She graduated with a PhD from the University of Queensland in 2000. [ 2 ] She earned her doctorate through a collaboration between the University of Botswana and the University of Queensland arranged by the Commonwealth Scientific and Industrial Research Organisation ; the work involved investigating whether remote sensing-based methods used on Australian ranges could be applied to monitor land degradation in Botswana. [ 2 ] Dube is an Associate Professor in the Department of Environmental Science at the University of Botswana. [ 1 ] [ 2 ] Her research and teaching focus on the social and biophysical aspects of global environmental change. In 2012, she held a research fellowship at the Australian National Climate Change Adaptation Research Facility (NCCARF) at Griffith University , and she held a similar position at the Environmental Change Institute at the University of Oxford in 2018. [ 2 ] Dube was Co-Vice Chair of the International Geosphere-Biosphere Programme (IGBP) between 2010 and 2015 [ 3 ] and Deputy Chair of the Botswana National Climate Change Committee between 2017 and 2019. [ 4 ] Dube is currently serving as Co-Chair of the Scientific Advisory Committee of Climate Research for Development in Africa (CR4D)-UNECA [ 4 ] and Vice Chair of the World Meteorological Organisation (WMO) Scientific Advisory Panel.
[ 5 ] She is one of the Editors-in-Chief of the Elsevier Current Opinion in Environmental Sustainability academic journal [ 6 ] and an associate editor of the CSIRO Rangeland Journal. [ 7 ] In 2019, Dube was listed in the top 100 of "The World's Most Influential People in Climate Policy" [ 8 ] and in October 2020, she was appointed by the UN Secretary General to be one of fifteen scientists creating the 2023 Global Sustainable Development Report for the United Nations. [ 9 ] Dube has served as part of the Intergovernmental Panel on Climate Change (IPCC) Working Group II since the Third Assessment Report . [ 4 ] This group "assesses the vulnerability of socio-economic and natural systems to climate change, negative and positive consequences of climate change and options for adapting to it". [ 10 ] She has contributed to the IPCC's Third, [ 11 ] Fourth [ 12 ] and Fifth [ 13 ] Assessment Reports, acting as both an author and a review editor. Her work on the Climate Change 2007: Impacts, Adaptation, and Vulnerability (AR4 WG2) report, as part of the Fourth Assessment Report, led to Dube being awarded an International Nobel Peace Prize Certificate in 2007. [ 14 ] She was also coordinating lead author for two of the IPCC's Special Reports: Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation (SREX) [ 15 ] and Global Warming of 1.5 °C (SR15). [ 16 ] Dube worked as a review editor for the upcoming IPCC Sixth Assessment Report , on the chapter titled "Food, fibre, and other ecosystem products." [ 17 ]
https://en.wikipedia.org/wiki/Opha_Pauline_Dube
Opioid food peptides include:
https://en.wikipedia.org/wiki/Opioid_food_peptides
An opioidergic agent (or drug ) is a chemical which directly or indirectly modulates the function of opioid receptors . Opioidergics comprise opioids , as well as allosteric modulators and enzyme-affecting agents such as enkephalinase inhibitors .
https://en.wikipedia.org/wiki/Opioidergic
Opioid use during pregnancy can have significant implications for both the mother and the developing fetus. Opioids are a class of drugs that include prescription painkillers (e.g., oxycodone , hydrocodone ) and illicit substances like heroin . Opioid use during pregnancy is associated with an increased risk of complications, including an elevated risk of preterm birth , low birth weight , intrauterine growth restriction , and stillbirth . Opioids are substances that can cross the placenta, exposing the developing fetus to the drugs. This exposure can potentially lead to various adverse effects on fetal development, including an increased risk of birth defects . One of the most well-known consequences of maternal opioid use during pregnancy is the risk of neonatal abstinence syndrome (NAS). NAS occurs when the newborn experiences withdrawal symptoms after birth due to exposure to opioids in the womb. Maternal opioid use during pregnancy can also have long-term effects on the child's development. These effects may include cognitive and behavioral problems, as well as an increased risk of substance use disorders later in life. Current guidelines recommend that opioid use disorder in pregnancy be treated with opioid agonist pharmacotherapy consisting of methadone or buprenorphine to substitute for the drug of abuse. [ 1 ] Opioid usage is common among pregnant women and is on the rise. [ 2 ] Opioid drugs are used for various reasons during pregnancy, with pain being a frequent issue. Conditions like pelvic and lower back pain , occurring in around 68 to 72% of pregnancies, are commonly treated with these medications. [ 2 ] [ 3 ] [ 4 ] Moreover, other sources of pain like muscle aches , migraines , and joint pain are commonly reported during pregnancy. 
[ 2 ] [ 5 ] However, when it comes to chronic pain, guidelines from the American Pain Society recommend discussing the advantages and disadvantages of chronic opioid therapy with women and, if possible, limiting or avoiding opioid use during pregnancy due to potential risks to the fetus . [ 2 ] [ 6 ] Even though there is evidence suggesting harmful impacts on fetal development caused by prescription opioids, [ 7 ] [ 8 ] [ 9 ] [ 10 ] research conducted in both Europe and the United States consistently shows elevated levels of prescription opioid use during pregnancy, whether it's for medical reasons or due to opioid dependency . [ 2 ] It's important to note that prescription opioids encompass a range of medications, and the potential effects on the fetus may differ between different medications within the same drug class. [ 2 ] Opioids can cross both the placental and blood-brain barriers, which poses risks to fetuses and newborns exposed to these drugs before birth. This exposure to opioids during pregnancy can lead to potential obstetric complications, including spontaneous abortion , abruption of the placenta , pre-eclampsia , prelabor rupture of membranes , and fetal death . [ 11 ] [ 12 ] There are also adverse outcomes in newborns associated with maternal opioid use during pregnancy, such as sudden infant death syndrome , being smaller than expected for their gestational age, preterm birth , lower birth weight , and reduced head size. [ 11 ] [ 13 ] Neonatal abstinence syndrome is a commonly observed issue in newborns who were exposed to opioids before birth. [ citation needed ] The use of opioids in the early stages of pregnancy is associated with an elevated risk of congenital anomalies . Specifically, there is a two-fold increased likelihood of certain birth defects, including congenital heart defects, gastroschisis , and neural tube defects . 
[ 11 ] [ 10 ] [ 7 ] The risk of preterm birth and neonatal complications is reduced to some extent when dextropropoxyphene or codeine is used in comparison to other opioid analgesics . [ 14 ] [ 15 ] The potential impact on the neurodevelopment of infants exposed to opioids before birth is another significant concern. A recent meta-analysis revealed noteworthy deficiencies in cognitive, psychomotor, and behavioral abilities in infants and preschool-aged children who had experienced chronic intrauterine opioid exposure. [ 11 ] Children who experienced neonatal abstinence syndrome were notably more prone to hospitalizations due to cognitive impairments, communication, speech, or language disorders , autism spectrum disorder , and behavioral problems, particularly those concerning emotional control. [ 16 ] [ 17 ] Neonatal abstinence syndrome occurs when newborns go through withdrawal from opiates and is linked to dysfunction in the central and autonomic nervous systems, the respiratory system, and the gastrointestinal tract. [ 14 ] Additionally, there is an elevated risk of neonatal abstinence syndrome associated with the medical use of certain opioid analgesics , such as tramadol , codeine , and propoxyphene . [ 14 ] Pregnant women with opioid use disorder have treatment options including methadone , naltrexone , or buprenorphine to decrease opioid usage and enhance treatment adherence. [ 18 ] [ 19 ] Current guidelines suggest that methadone and buprenorphine are equally viable choices. Nevertheless, recent research suggests that buprenorphine may offer certain advantages over methadone. [ 20 ] Current guidelines recommend that pregnant women with opioid use disorder be treated with opioid agonist pharmacotherapy consisting of methadone or buprenorphine to substitute for the drug of abuse. [ 1 ]
https://en.wikipedia.org/wiki/Opioids_and_pregnancy
Oportuzumab monatox is an experimental anti-cancer medication . Chemically, oportuzumab is a single chain variable fragment of a monoclonal antibody which binds to the epithelial cell adhesion molecule (EpCAM, the tumor-associated calcium signal transducer 1). Oportuzumab is fused with Pseudomonas aeruginosa exotoxin A (which is reflected by the "monatox" in the medication's name). [ 1 ] The drug was developed by the Canadian company Viventia Bio Inc. The company was acquired by Cambridge, Massachusetts-based Eleven Biotherapeutics in 2016, which then changed its name to Sesen Bio. [ 2 ] In 2019 Sesen Bio reported updated preliminary primary and secondary endpoint data from its Phase 3 VISTA trial, which it said supported the benefit-risk profile of Vicineum for the potential treatment of patients with high-risk, bacillus Calmette-Guérin (BCG) unresponsive, non-muscle invasive bladder cancer (NMIBC). [ 3 ] The company applied for approval of Vicineum by the United States Food and Drug Administration and the European Medicines Agency .
https://en.wikipedia.org/wiki/Oportuzumab_monatox
Oppenauer oxidation , named after Rupert Viktor Oppenauer [ de ] , [ 1 ] is a gentle method for selectively oxidizing secondary alcohols to ketones . The reaction is the opposite of the Meerwein–Ponndorf–Verley reduction . [ 2 ] The alcohol is oxidized with aluminium isopropoxide in excess acetone , which shifts the equilibrium toward the product side. The oxidation is highly selective for secondary alcohols and does not oxidize other sensitive functional groups such as amines and sulfides . [ 3 ] Though primary alcohols can be oxidized under Oppenauer conditions, they seldom are, due to the competing aldol condensation of the aldehyde products. Although the method has been largely displaced by oxidation methods based on chromates (e.g. pyridinium chlorochromate ), dimethyl sulfoxide (e.g. the Swern oxidation ), or the Dess–Martin periodinane, the Oppenauer oxidation is still used for acid-labile substrates because of its relatively mild and non-toxic reagents (e.g. the reaction is run in acetone/benzene mixtures). The Oppenauer oxidation is commonly used in various industrial processes, such as the synthesis of steroids , hormones , alkaloids , and terpenes . In the first step of the mechanism , the alcohol (1) coordinates to the aluminium to form a complex (3), which, in the second step, is deprotonated by an alkoxide ion (4) to generate an alkoxide intermediate (5). In the third step, both the oxidant acetone (7) and the substrate alcohol are bound to the aluminium. The acetone is coordinated to the aluminium, which activates it for the hydride transfer from the alkoxide. The aluminium-catalyzed hydride shift from the α-carbon of the alcohol to the carbonyl carbon of acetone proceeds via a six-membered transition state (8). The desired ketone (9) is formed after the hydride transfer. [ 4 ] An advantage of the Oppenauer oxidation is its use of relatively inexpensive and non-toxic reagents.
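The overall transformation can be written as an equilibrium that the excess of acetone drives to the right (a sketch; R¹ and R² denote the substituents of the secondary alcohol):

```latex
% Oppenauer oxidation: a secondary alcohol and acetone, catalyzed by
% aluminium isopropoxide, give the ketone plus isopropanol.
\[
  \mathrm{R^{1}R^{2}CHOH} + \mathrm{(CH_{3})_{2}CO}
  \;\overset{\mathrm{Al(OiPr)_{3}}}{\rightleftharpoons}\;
  \mathrm{R^{1}R^{2}CO} + \mathrm{(CH_{3})_{2}CHOH}
\]
```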
Reaction conditions are mild and gentle, since the substrates are generally heated in acetone/ benzene mixtures. Another advantage of the Oppenauer oxidation, which distinguishes it from other oxidation methods such as pyridinium chlorochromate (PCC) and Dess–Martin periodinane , is that secondary alcohols are oxidized much faster than primary alcohols, so chemoselectivity can be achieved. Furthermore, there is no over-oxidation of aldehydes to carboxylic acids , as opposed to other oxidation methods such as the Jones oxidation . [ 4 ] In the Wettstein–Oppenauer reaction, discovered by Wettstein in 1945, Δ5-3β-hydroxy steroids are oxidized to Δ4,6-3-ketosteroids with benzoquinone as the hydrogen acceptor. This reaction is useful in that it affords a one-step preparation of Δ4,6-3-ketosteroids. [ 5 ] In the Woodward modification, Woodward substituted potassium tert-butoxide for the aluminium alkoxide. The Woodward modification of the Oppenauer oxidation, also called the Oppenauer–Woodward oxidation , is used when certain alcohol groups do not oxidize under the standard Oppenauer reaction conditions. For example, Woodward used potassium tert-butoxide and benzophenone for the oxidation of quinine to quininone, as the traditional aluminium catalytic system failed to oxidize quinine due to the complex formed by coordination of the Lewis-basic nitrogen to the aluminium centre. [ 6 ] Several modified aluminium alkoxide catalysts have also been reported. For example, a highly active aluminium catalyst reported by Maruoka and co-workers was utilized in the oxidation of carveol to carvone (a member of a family of chemicals called terpenoids ) in excellent yield (94%). [ 7 ] In another modification, [ 8 ] the catalyst is trimethylaluminium and the aldehyde 3-nitrobenzaldehyde is used as the oxidant, for example in the oxidation of isoborneol to camphor . The Oppenauer oxidation is used to prepare analgesics in the pharmaceutical industry, such as morphine and codeine .
For instance, codeinone is prepared by the Oppenauer oxidation of codeine . [ 9 ] The Oppenauer oxidation is also used to synthesize hormones . Progesterone is prepared by the Oppenauer oxidation of pregnenolone . [ 10 ] A slight variation of the Oppenauer oxidation is also used to synthesize steroid derivatives. For example, an efficient catalytic version of the Oppenauer oxidation employing a ruthenium catalyst has been developed for the oxidation of 5-unsaturated 3β-hydroxy steroids to the corresponding 4-en-3-one derivatives. [ 11 ] The Oppenauer oxidation is also used in the synthesis of lactones from 1,4- and 1,5-diols . [ 12 ] A common side reaction of the Oppenauer oxidation is the base -catalyzed aldol condensation of aldehyde products that have α-hydrogens, forming either β-hydroxy aldehydes or α,β-unsaturated aldehydes . [ 13 ] Another side reaction is the Tishchenko reaction of aldehyde products with no α-hydrogen, but this can be prevented by the use of anhydrous solvents. [ 4 ] Another general side reaction is migration of the double bond during the oxidation of allylic alcohol substrates. [ 14 ]
https://en.wikipedia.org/wiki/Oppenauer_oxidation
In Diophantine approximation , a subfield of number theory , the Oppenheim conjecture concerns representations of numbers by real quadratic forms in several variables. It was formulated in 1929 by Alexander Oppenheim , and the conjectured property was later further strengthened by Harold Davenport and Oppenheim. Initial research on this problem took the number n of variables to be large and applied a version of the Hardy-Littlewood circle method . The definitive work of Margulis , settling the conjecture in the affirmative, used methods arising from ergodic theory and the study of discrete subgroups of semisimple Lie groups . Meyer's theorem states that an indefinite integral quadratic form Q in n variables, n ≥ 5, nontrivially represents zero, i.e. there exists a non-zero vector x with integer components such that Q ( x ) = 0. The Oppenheim conjecture can be viewed as an analogue of this statement for forms Q that are not multiples of a rational form. It states that in this case, the set of values of Q on integer vectors is a dense subset of the real line . Several versions of the conjecture were formulated by Oppenheim and Harold Davenport . For n ≥ 5 this was conjectured by Oppenheim in 1929; the stronger version is due to Davenport in 1946. This was conjectured by Oppenheim in 1953 and proved by Birch, Davenport, and Ridout for n at least 21, and by Davenport and Heilbronn for diagonal forms in five variables. Other partial results are due to Oppenheim (for forms in four variables, but under the strong restriction that the form represents zero over Z ), Watson, Iwaniec, and Baker–Schlickewey. Early work used analytic number theory and the reduction theory of quadratic forms. The conjecture was proved in 1987 by Margulis in complete generality using methods of ergodic theory. The geometry of actions of certain unipotent subgroups of the orthogonal group on the homogeneous space of lattices in R^3 plays a decisive role in this approach. 
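In symbols, the statement proved by Margulis asserts that for a real nondegenerate indefinite quadratic form Q in n ≥ 3 variables that is not a multiple of a form with rational coefficients, the values of Q on integer vectors are dense in the reals:

```latex
\overline{\{\, Q(x) \;:\; x \in \mathbb{Z}^{n} \,\}} \;=\; \mathbb{R}.
```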
It is sufficient to establish the case n = 3. The idea of deriving the Oppenheim conjecture from a statement about homogeneous group actions is usually attributed to M. S. Raghunathan , who observed in the 1970s that the conjecture for n = 3 is equivalent to the following property of the space of lattices: However, Margulis later remarked that, in an implicit form, this equivalence occurred already in a 1955 paper of Cassels and H. P. F. Swinnerton-Dyer , albeit in a different language. [ citation needed ] Shortly after Margulis's breakthrough, the proof was simplified and generalized by Dani and Margulis. Quantitative versions of the Oppenheim conjecture were later proved by Eskin–Margulis–Mozes. Borel and Prasad established some S -arithmetic analogues. The study of the properties of unipotent and quasiunipotent flows on homogeneous spaces remains an active area of research, with applications to further questions in the theory of Diophantine approximation.
https://en.wikipedia.org/wiki/Oppenheim_conjecture
The Oppenheimer–Phillips process or strip reaction is a type of deuteron -induced nuclear reaction . In this process the neutron half of an energetic deuteron (a stable isotope of hydrogen with one proton and one neutron) fuses with a target nucleus , transmuting the target to a heavier isotope while ejecting a proton. An example is the nuclear transmutation of carbon-12 to carbon-13 . The process allows a nuclear interaction to take place at lower energies than would be expected from a simple calculation of the Coulomb barrier between a deuteron and a target nucleus. This is because, as the deuteron approaches the positively charged target nucleus, it experiences a charge polarization where the "proton-end" faces away from the target and the "neutron-end" faces towards the target. The fusion proceeds when the binding energy of the neutron and the target nucleus exceeds the binding energy of the deuteron itself; the proton formerly in the deuteron is then repelled from the new, heavier, nucleus. [ 1 ] An explanation of this effect was published by J. Robert Oppenheimer and Melba Phillips in 1935, considering experiments with the Berkeley cyclotron showing that some elements became radioactive under deuteron bombardment. [ 2 ] During the O-P process, the deuteron's positive charge is spatially polarized, and collects preferentially at one end of the deuteron's density distribution , nominally, the "proton end". As the deuteron approaches the target nucleus, the positive charge is repelled by the electrostatic field until, assuming the incident energy is not sufficient for it to surmount the barrier, the "proton end" approaches to a minimum distance having climbed the Coulomb barrier as far as it can. If the "neutron end" is close enough for the strong nuclear force , which only operates over very short distances, to exceed the repulsive electrostatic force on the "proton end", fusion of a neutron with the target nucleus may begin. 
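The scale of the Coulomb barrier mentioned above can be estimated with the standard touching-spheres formula. This is an illustrative sketch, not from the article: the nuclear-radius parameter r0 ≈ 1.3 fm and the function name are assumptions.

```python
# Rough estimate of the Coulomb barrier between a deuteron and a 12C nucleus.
# Assumptions (standard textbook values, not from the article):
#   contact distance r = r0 * (A1^(1/3) + A2^(1/3)) with r0 ~ 1.3 fm,
#   Coulomb constant e^2/(4*pi*eps0) ~ 1.44 MeV*fm.

E2_MEV_FM = 1.44   # e^2 / (4*pi*eps0) in MeV*fm
R0_FM = 1.3        # nuclear radius parameter in fm

def coulomb_barrier_mev(z1, a1, z2, a2):
    """Height of the Coulomb barrier (MeV) when the two nuclei touch."""
    r = R0_FM * (a1 ** (1 / 3) + a2 ** (1 / 3))
    return z1 * z2 * E2_MEV_FM / r

# Deuteron (Z=1, A=2) approaching carbon-12 (Z=6, A=12):
barrier = coulomb_barrier_mev(1, 2, 6, 12)
print(f"Coulomb barrier ~ {barrier:.2f} MeV")
```

The polarization described in the text lets the neutron half reach the nuclear surface without the deuteron having to supply this full barrier energy.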
The reaction proceeds, for the carbon-12 example above, as ²H + ¹²C → ¹³C + ¹H. In the O-P process, as the neutron fuses to the target nucleus, the deuteron binding force pulls the "proton end" closer than a naked proton could otherwise have approached on its own, increasing the potential energy of the positive charge. As the neutron is captured, the proton is stripped from the complex and ejected. The proton at this point can carry away more than the incident kinetic energy of the deuteron, since it has approached the target nucleus more closely than is possible for an isolated proton with the same incident energy. In such instances, the transmuted nucleus is left in an energy state as if it had fused with a neutron of negative kinetic energy . There is an upper bound on the energy with which the proton can be ejected, set by the ground state of the daughter nucleus. [ 1 ] [ 3 ]
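The energy balance of the carbon-12 to carbon-13 transmutation can be checked from atomic masses. The sketch below uses standard tabulated mass values (not from the article); with atomic masses the electron counts cancel on both sides.

```python
# Q-value of the stripping reaction d + 12C -> 13C + p, from the mass
# difference between reactants and products.
# Atomic masses in unified atomic mass units (standard tabulated values);
# 1 u = 931.494 MeV/c^2.

U_TO_MEV = 931.494

masses_u = {
    "d":   2.014101778,   # deuterium atom
    "12C": 12.000000000,  # carbon-12 atom (defines the mass scale)
    "13C": 13.003354838,  # carbon-13 atom
    "p":   1.007825032,   # hydrogen-1 atom
}

def q_value_mev(reactants, products):
    """Q = (sum of reactant masses - sum of product masses) * c^2."""
    dm = sum(masses_u[x] for x in reactants) - sum(masses_u[x] for x in products)
    return dm * U_TO_MEV

q = q_value_mev(["d", "12C"], ["13C", "p"])
print(f"Q(12C(d,p)13C) = {q:.3f} MeV")  # positive Q: energy is released
```

A positive Q-value is consistent with the text: the neutron is more tightly bound in the heavier nucleus than in the deuteron, so the ejected proton can carry away extra energy.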
https://en.wikipedia.org/wiki/Oppenheimer–Phillips_process
In general relativity , the Oppenheimer–Snyder model is a solution to the Einstein field equations based on the Schwarzschild metric describing the collapse of an object of extreme mass into a black hole . [ 1 ] It is named after physicists J. Robert Oppenheimer and Hartland Snyder , who published it in 1939. [ 2 ] During the collapse of a star to a black hole, the geometry outside the sphere is the Schwarzschild geometry. However, the geometry inside is, curiously enough, the same Robertson–Walker geometry as in the rest of the observable universe. [ 3 ] Albert Einstein , who had developed his theory of general relativity in 1915, initially denied the possibility of black holes, [ 4 ] even though they were a genuine implication of the Schwarzschild metric, obtained by Karl Schwarzschild in 1916 as the first known non-trivial exact solution to Einstein's field equations. [ 1 ] In 1939, Einstein published "On a Stationary System with Spherical Symmetry Consisting of Many Gravitating Masses" in the Annals of Mathematics , claiming to provide "a clear understanding as to why these ' Schwarzschild singularities ' do not exist in physical reality." [ 4 ] [ 5 ] Months after the publication of Einstein's article, [ 4 ] J. Robert Oppenheimer and his student Hartland Snyder studied this topic in their paper "On Continued Gravitational Contraction", making the opposite argument to Einstein's. [ 6 ] [ 5 ] They showed that when a sufficiently massive star runs out of thermonuclear fuel, it will undergo continued gravitational contraction and become separated from the rest of the universe by a boundary called the event horizon , from which not even light can escape. This paper predicted the existence of what are today known as black holes. 
[ 1 ] [ 7 ] The term "black hole" was coined decades later, in the fall of 1967, by John Archibald Wheeler at a conference held by the Goddard Institute for Space Studies in New York City; [ 7 ] it appeared for the first time in print the following year. [ 8 ] Oppenheimer and Snyder used Einstein's own theory of gravity to show, for the first time in contemporary physics, how black holes could develop, but without referencing the aforementioned article by Einstein. [ 4 ] Oppenheimer and Snyder did, however, refer to an earlier article by Oppenheimer and Volkoff on neutron stars, improving upon the work of Lev Davidovich Landau . [ 7 ] Previously, and in the same year, Oppenheimer and three colleagues, Richard Tolman , Robert Serber , and George Volkoff , had investigated the stability of neutron stars, obtaining the Tolman–Oppenheimer–Volkoff limit . [ 9 ] [ 10 ] [ 11 ] Oppenheimer would not revisit the topic in future publications. [ 12 ] The Oppenheimer–Snyder model of continued gravitational collapse is described by the line element [ 13 ]

ds^{2} = -d\tau^{2} + A^{2}(\eta)\left( \frac{dR^{2}}{1 - 2M\,\frac{R_{-}^{2}}{R_{b}^{2}}\,\frac{1}{R_{+}}} + R^{2}\,d\Omega^{2} \right)

Among the quantities appearing in this expression, the proper time τ is given by

\tau(\eta, R) = \frac{1}{2}\sqrt{\frac{R_{+}^{3}}{2M}}\,(\eta + \sin\eta).

This expression is valid both in the matter region R < R_b and in the vacuum region R > R_b , and transitions continuously between the two. Kip Thorne recalled that physicists were initially skeptical of the model, viewing it as "truly strange" at the time. 
[ 12 ] He explained further, "It was hard for people of that era to understand the paper because the things that were being smoked out of the mathematics were so different from any mental picture of how things should behave in the universe." [ 14 ] Oppenheimer himself thought little of this discovery. [ 2 ] However, some considered the model's discovery to be more significant than Oppenheimer did, [ 2 ] and the model would later be described as forward-thinking. [ 12 ] Freeman Dyson thought it was Oppenheimer's greatest contribution to science. Lev Davidovich Landau added the Oppenheimer–Snyder paper to his "golden list" of classic papers. [ 2 ] John Archibald Wheeler was initially an opponent of the model until the late 1950s, [ 1 ] [ 12 ] when he was asked to teach a course on general relativity at Princeton University. [ 8 ] Wheeler claimed at a conference in 1958 that the Oppenheimer–Snyder model had neglected many features of a realistic star. However, he later changed his mind completely after being informed by Edward Teller that a computer simulation run by Stirling Colgate and his team at the Lawrence Livermore National Laboratory had shown that a sufficiently heavy star would undergo continued gravitational contraction in a manner similar to the idealized scenario described by Oppenheimer and Snyder. [ 1 ] Wheeler subsequently played a key role in reviving interest in general relativity in the United States, and popularized the term "black hole" in the late 1960s. [ 8 ] Various theoretical physicists pursued this topic, [ 5 ] and by the late 1960s and early 1970s, advances in observational astronomy, such as radio telescopes , changed the attitude of the scientific community. [ 14 ] Pulsars had already been discovered, and black holes were no longer considered mere textbook curiosities. [ 15 ] Cygnus X-1 , the first solid black-hole candidate, was discovered by the Uhuru X-ray space telescope in 1971. 
[ 1 ] Jeremy Bernstein described it as "one of the great papers in twentieth-century physics." [ 14 ] After winning the Nobel Prize in Physics in 2020, Roger Penrose would credit the Oppenheimer–Snyder model as one of his inspirations for research. [ 16 ] [ 12 ] The Hindu wrote in 2023: [ 17 ] The world of physics does indeed remember the paper. While Oppenheimer is remembered in history as the “father of the atomic bomb”, his greatest contribution as a physicist was on the physics of black holes. The work of Oppenheimer and Hartland Snyder helped transform black holes from figments of mathematics to real, physical possibilities – something to be found in the cosmos out there.
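The event-horizon scale that runs through the article is set by the Schwarzschild radius, r_s = 2GM/c². A quick illustrative calculation, using standard constant values (not taken from the article):

```python
# Schwarzschild radius r_s = 2*G*M/c^2: the event-horizon radius of a
# non-rotating mass M.  Constants are standard reference values.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg

def schwarzschild_radius(mass_kg):
    """Event-horizon radius in metres for a non-rotating mass."""
    return 2 * G * mass_kg / C ** 2

r_sun = schwarzschild_radius(M_SUN)
print(f"1 solar mass -> r_s = {r_sun / 1000:.2f} km")
```

The roughly 3 km horizon for a solar mass illustrates why collapse must proceed to extreme densities before the boundary described by Oppenheimer and Snyder forms.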
https://en.wikipedia.org/wiki/Oppenheimer–Snyder_model
The Oppo Find X6 is a series of two Android -based smartphones manufactured by Oppo as part of its flagship Find X series. Successors to the Oppo Find X5 series , both phones were unveiled on 21 March 2023. [ 3 ] [ 4 ] Currently, the Find X6 series is available for sale only in mainland China . [ 5 ] [ 6 ] [ 7 ] The Find X6 series consists of two devices - the regular Find X6 and the top-of-the-line Find X6 Pro. The Find X6 features a curved 6.74 in (171 mm) display with a variable refresh rate from 40 Hz to 120 Hz, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. [ 8 ] The Find X6 Pro flagship comes with a curved 6.82 in (173 mm) LTPO3 display that offers a variable refresh rate starting at 1 Hz and a higher 1440p resolution, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. [ 5 ] Both phones feature 10-bit HDR10+ capable displays, but the Find X6 Pro has the largest battery capacity in the lineup and upgraded cameras compared to the Find X6. Both the Find X6 and the Find X6 Pro feature curved displays and aluminium frames. However, only the Find X6 Pro's screen is protected by Corning Gorilla Glass Victus 2. [ 5 ] The Find X6 comes in Black, Green or Gold colourways. The Green and Gold variants are manufactured with Oppo's patented Oppo Glow process, while the Black variant features a mirrored glass rear. [ 3 ] The Find X6 is also IP64 protected. [ 4 ] The more advanced Find X6 Pro features IP68 water and dust resistance. Its colour options are Black, Green and Brown, with the Brown variant being the only one crafted with a dual-tone vegan leather and glass rear. The Black and Green variants are fitted with matte glass backs. [ 5 ] The Find X6 is powered by the MediaTek Dimensity 9200 and uses an octa-core CPU (1x3.05 GHz Cortex-X3 & 3x2.85 GHz Cortex-A715 & 4x1.80 GHz Cortex-A510), an upgrade from its predecessor the Find X5. 
The flagship Find X6 Pro uses the Snapdragon 8 Gen 2, the highest-specced Snapdragon chip in 2023. It operates on a more advanced octa-core system (1x3.2 GHz Cortex-X3 & 2x2.8 GHz Cortex-A715 & 2x2.8 GHz Cortex-A710 & 3x2.0 GHz Cortex-A510). Both the Find X6 and the Find X6 Pro offer UFS 4.0 without expandable storage, as well as 256 GB or 512 GB of storage paired with either 12 or 16 GB of RAM. Both phones include Dolby Atmos stereo speakers with active noise cancellation , and have no audio jack . Biometric options include an optical fingerprint scanner and facial recognition . While both the Find X6 and the Find X6 Pro are equipped with identical 32 MP front-facing Sony IMX709 cameras, subsequent software updates have enabled the latter to shoot videos in 4K resolution. [ 5 ] [ 9 ] The Find X6 has a slightly inferior rear camera setup, utilising the 50 MP Sony IMX890 as the main sensor and the 50 MP Isocell JN1 as the ultrawide sensor. [ 4 ] The Sony IMX890 is also used for the periscope telephoto lens with 2.8x optical zoom and 6x hybrid zoom. The Find X6 Pro, on the other hand, features the 1-inch type Sony IMX989 main sensor, while both the ultrawide and the 2.8x periscope telephoto lens use the Sony IMX890, giving rise to the claim of having 'Three Main Cameras' that offer parity in image quality across focal lengths. [ 3 ] [ 10 ] Both phones also feature software-based tuning co-developed with Hasselblad and the custom-made MariSilicon X image processing NPU. At the end of 2023, the Find X6 Pro ranked as the 8th best smartphone camera in the world according to DxOMark . [ 11 ] The Find X6 and Find X6 Pro's battery capacities are 4800 mAh and 5000 mAh respectively. The Find X6 supports up to 80 W wired charging, while the Find X6 Pro is capable of up to 100 W wired charging. [ 4 ] [ 5 ] In addition, the Find X6 Pro supports 50 W wireless charging, whereas the Find X6 lacks wireless charging support. 
Oppo claims that its proprietary battery technology allows the Find X6 series to retain 80% of their battery capacity after 1,600 charging cycles. [ 4 ] The Find X6 and Find X6 Pro run on ColorOS 13.1, which is based on Android 13 .
https://en.wikipedia.org/wiki/Oppo_Find_X6
The Oppo Find X6 is a series of two Android -based smartphones manufactured by Oppo as part of its flagship Find X series. Successors to the Oppo Find X5 series , both phones were unveiled on 21 March 2023. [ 3 ] [ 4 ] Currently, the Find X6 series is available for sale only in mainland China . [ 5 ] [ 6 ] [ 7 ] The Find X6 series consists of two devices - the regular Find X6 and the top-of-the-line Find X6 Pro. The Find X6 features a curved 6.74 in (171 mm) display with a variable refresh rate from 40 Hz to 120 Hz, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. [ 8 ] The Find X6 Pro flagship comes with a curved 6.82 in (173 mm) LTPO3 display that offers a variable refresh rate starting at 1 Hz and a higher 1440p resolution, either 12 GB or 16 GB of RAM, and storage options from 256 GB to 512 GB. [ 5 ] Both phones feature 10-bit HDR10+ capable displays, but the Find X6 Pro has the largest battery capacity in the lineup and upgraded cameras compared to the Find X6. Both the Find X6 and the Find X6 Pro feature curved displays and aluminium frames. However, only the Find X6 Pro's screen is protected by Corning Gorilla Glass Victus 2. [ 5 ] The Find X6 comes in Black, Green or Gold colourways. The Green and Gold variants are manufactured with Oppo's patented Oppo Glow process, while the Black variant features a mirrored glass rear. [ 3 ] The Find X6 is also IP64 protected. [ 4 ] The more advanced Find X6 Pro features IP68 water and dust resistance. Its colour options are Black, Green and Brown, with the Brown variant being the only one crafted with a dual-tone vegan leather and glass rear. The Black and Green variants are fitted with matte glass backs. [ 5 ] The Find X6 is powered by the MediaTek Dimensity 9200 and uses an octa-core CPU (1x3.05 GHz Cortex-X3 & 3x2.85 GHz Cortex-A715 & 4x1.80 GHz Cortex-A510), an upgrade from its predecessor the Find X5. 
The flagship Find X6 Pro uses the Snapdragon 8 Gen 2, the highest-specced Snapdragon chip in 2023. It operates on a more advanced octa-core system (1x3.2 GHz Cortex-X3 & 2x2.8 GHz Cortex-A715 & 2x2.8 GHz Cortex-A710 & 3x2.0 GHz Cortex-A510). Both the Find X6 and the Find X6 Pro offer UFS 4.0 without expandable storage, as well as 256 GB or 512 GB of storage paired with either 12 or 16 GB of RAM. Both phones include Dolby Atmos stereo speakers with active noise cancellation , and have no audio jack . Biometric options include an optical fingerprint scanner and facial recognition . While both the Find X6 and the Find X6 Pro are equipped with identical 32 MP front-facing Sony IMX709 cameras, subsequent software updates have enabled the latter to shoot videos in 4K resolution. [ 5 ] [ 9 ] The Find X6 has a slightly inferior rear camera setup, utilising the 50 MP Sony IMX890 as the main sensor and the 50 MP Isocell JN1 as the ultrawide sensor. [ 4 ] The Sony IMX890 is also used for the periscope telephoto lens with 2.8x optical zoom and 6x hybrid zoom. The Find X6 Pro, on the other hand, features the 1-inch type Sony IMX989 main sensor, while both the ultrawide and the 2.8x periscope telephoto lens use the Sony IMX890, giving rise to the claim of having 'Three Main Cameras' that offer parity in image quality across focal lengths. [ 3 ] [ 10 ] Both phones also feature software-based tuning co-developed with Hasselblad and the custom-made MariSilicon X image processing NPU. At the end of 2023, the Find X6 Pro ranked as the 8th best smartphone camera in the world according to DxOMark . [ 11 ] The Find X6 and Find X6 Pro's battery capacities are 4800 mAh and 5000 mAh respectively. The Find X6 supports up to 80 W wired charging, while the Find X6 Pro is capable of up to 100 W wired charging. [ 4 ] [ 5 ] In addition, the Find X6 Pro supports 50 W wireless charging, whereas the Find X6 lacks wireless charging support. 
Oppo claims that its proprietary battery technology allows the Find X6 series to retain 80% of their battery capacity after 1,600 charging cycles. [ 4 ] The Find X6 and Find X6 Pro run on ColorOS 13.1, which is based on Android 13 .
https://en.wikipedia.org/wiki/Oppo_Find_X6_Pro
The Oppo Find X7 is a series of two Android -based smartphones manufactured by Oppo as part of its flagship Find X series. Successors to the Oppo Find X6 series , both phones were launched on 8 January 2024. Just like its predecessor the Find X6 series, the Find X7 series was released exclusively for the mainland Chinese market. [ 3 ] [ 4 ] The Find X7 series consists of two devices - the regular Find X7 and the top-of-the-line Find X7 Ultra. As the first "Ultra" phone in the flagship Find X series, the Find X7 Ultra is touted as the first phone to incorporate two periscope telephoto cameras . [ 3 ] [ 4 ] [ 5 ] Both the Find X7 and the Find X7 Ultra feature curved displays and aluminium frames. The Find X7 features a curved 6.78 in (172 mm) display, while the Find X7 Ultra has a slightly larger 6.82 in (173 mm) screen with a higher 1440p resolution. [ 1 ] [ 2 ] Both display systems are 10-bit HDR10+ capable. The Find X7 is IP65 protected, while the Find X7 Ultra features IP68 water and dust resistance. [ 1 ] [ 2 ] The Find X7 smartphone is available in four colour options, namely Black, Ocean Blue, Sepia Brown and Purple. The Black and Purple variants have all-glass back panels, while the Ocean Blue and Sepia Brown options come with a dual-tone glass and vegan leather design. [ 6 ] The Find X7 Ultra is available in Ocean Blue, Sepia Brown and Tailored Black colourways, all of which feature similar dual-tone glass and leather backs. [ 6 ] [ 7 ] [ 8 ] The Find X7 series is the first within Oppo's Find X lineup to introduce an alert slider, a design adopted from the company's subsidiary OnePlus . [ 3 ] [ 5 ] The Find X7 is powered by the MediaTek Dimensity 9300 and operates on a 1x3.25 GHz Cortex-X4, 3x2.85 GHz Cortex-X4 and 4x2.0 GHz Cortex-A720 octa-core system. [ 1 ] The flagship Find X7 Ultra uses the Snapdragon 8 Gen 3, the highest-specced Snapdragon chip in 2024. 
It operates on a more advanced octa-core system comprising 1x3.3 GHz Cortex-X4, 5x3.2 GHz Cortex-A720 and 2x2.3 GHz Cortex-A520 cores. [ 2 ] Both the Find X7 and the Find X7 Ultra offer UFS 4.0 storage without expandable storage and Dolby Atmos stereo speakers with active noise cancellation . [ 1 ] [ 2 ] Biometric options include an optical fingerprint scanner and facial recognition . While both phones come equipped with either 12 or 16 GB of RAM, the Find X7 is the only phone in the series to offer up to 1 TB of UFS 4.0 flash memory, whereas the Find X7 Ultra has either 256 GB or 512 GB UFS 4.0 flash memory options. [ 1 ] [ 2 ] [ 6 ] The Find X7 has a triple camera setup consisting of a 50 MP main sensor, a 50 MP ultrawide sensor and a 64 MP periscope telephoto lens capable of 3x optical zoom. [ 1 ] The Find X7 Ultra incorporates a quad-camera setup comprising a 50 MP 1-inch type Sony LYT-900 main sensor, a 50 MP Sony LYT-600 ultrawide sensor, a 50 MP Sony IMX890 2.8x periscope telephoto lens and a 50 MP Sony IMX858 6x periscope telephoto lens. [ 3 ] [ 8 ] [ 9 ] Oppo has claimed that the Find X7 Ultra's quad-camera system covers equivalent focal lengths from 14 mm to 270 mm. [ 5 ] Both the Find X7 and the Find X7 Ultra are equipped with a 32 MP front-facing camera capable of 4K selfie video recording. [ 1 ] [ 2 ] GSMArena speculated that either a Sony IMX709 or LYT-506 sensor is used for the front-facing camera. [ 10 ] The Find X7 series also features software-based tuning co-developed with Hasselblad , with a built-in Hypertone Image Engine purportedly designed to enhance computational photography. [ 6 ] [ 11 ] The Find X7 and Find X7 Ultra run on ColorOS 14, which is based on Android 14 . [ 1 ] [ 2 ]
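The claimed 14 mm to 270 mm equivalent coverage implies an end-to-end zoom ratio of roughly 19x. The short sketch below works that out; the 23 mm main-camera equivalent focal length is an assumed typical value for illustration, not an official Oppo figure.

```python
# Zoom ratios implied by the quoted equivalent focal-length coverage.
# Only the 14-270 mm range comes from the claim; 23 mm is an assumed
# typical main-camera equivalent focal length.

wide_mm, tele_mm = 14, 270          # claimed equivalent focal-length coverage
main_mm = 23                        # assumed main-camera equivalent (illustrative)

overall_zoom = tele_mm / wide_mm    # end-to-end coverage ratio
tele_vs_main = tele_mm / main_mm    # longest telephoto relative to the main camera

print(f"overall coverage ~ {overall_zoom:.1f}x, "
      f"telephoto ~ {tele_vs_main:.1f}x vs main")
```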
https://en.wikipedia.org/wiki/Oppo_Find_X7
The Oppo Find X7 is a series of two Android -based smartphones manufactured by Oppo as part of its flagship Find X series. Successors to the Oppo Find X6 series , both phones were launched on 8 January 2024. Just like its predecessor the Find X6 series, the Find X7 series was released exclusively for the mainland Chinese market. [ 3 ] [ 4 ] The Find X7 series consists of two devices - the regular Find X7 and the top-of-the-line Find X7 Ultra. As the first "Ultra" phone in the flagship Find X series, the Find X7 Ultra is touted as the first phone to incorporate two periscope telephoto cameras . [ 3 ] [ 4 ] [ 5 ] Both the Find X7 and the Find X7 Ultra feature curved displays and aluminium frames. The Find X7 features a curved 6.78 in (172 mm) display, while the Find X7 Ultra has a slightly larger 6.82 in (173 mm) screen with a higher 1440p resolution. [ 1 ] [ 2 ] Both display systems are 10-bit HDR10+ capable. The Find X7 is IP65 protected, while the Find X7 Ultra features IP68 water and dust resistance. [ 1 ] [ 2 ] The Find X7 smartphone is available in four colour options, namely Black, Ocean Blue, Sepia Brown and Purple. The Black and Purple variants have all-glass back panels, while the Ocean Blue and Sepia Brown options come with a dual-tone glass and vegan leather design. [ 6 ] The Find X7 Ultra is available in Ocean Blue, Sepia Brown and Tailored Black colourways, all of which feature similar dual-tone glass and leather backs. [ 6 ] [ 7 ] [ 8 ] The Find X7 series is the first within Oppo's Find X lineup to introduce an alert slider, a design adopted from the company's subsidiary OnePlus . [ 3 ] [ 5 ] The Find X7 is powered by the MediaTek Dimensity 9300 and operates on a 1x3.25 GHz Cortex-X4, 3x2.85 GHz Cortex-X4 and 4x2.0 GHz Cortex-A720 octa-core system. [ 1 ] The flagship Find X7 Ultra uses the Snapdragon 8 Gen 3, the highest-specced Snapdragon chip in 2024. 
It operates on a more advanced octa-core system comprising 1x3.3 GHz Cortex-X4, 5x3.2 GHz Cortex-A720 and 2x2.3 GHz Cortex-A520 cores. [ 2 ] Both the Find X7 and the Find X7 Ultra offer UFS 4.0 storage without expandable storage and Dolby Atmos stereo speakers with active noise cancellation . [ 1 ] [ 2 ] Biometric options include an optical fingerprint scanner and facial recognition . While both phones come equipped with either 12 or 16 GB of RAM, the Find X7 is the only phone in the series to offer up to 1 TB of UFS 4.0 flash memory, whereas the Find X7 Ultra has either 256 GB or 512 GB UFS 4.0 flash memory options. [ 1 ] [ 2 ] [ 6 ] The Find X7 has a triple camera setup consisting of a 50 MP main sensor, a 50 MP ultrawide sensor and a 64 MP periscope telephoto lens capable of 3x optical zoom. [ 1 ] The Find X7 Ultra incorporates a quad-camera setup comprising a 50 MP 1-inch type Sony LYT-900 main sensor, a 50 MP Sony LYT-600 ultrawide sensor, a 50 MP Sony IMX890 2.8x periscope telephoto lens and a 50 MP Sony IMX858 6x periscope telephoto lens. [ 3 ] [ 8 ] [ 9 ] Oppo has claimed that the Find X7 Ultra's quad-camera system covers equivalent focal lengths from 14 mm to 270 mm. [ 5 ] Both the Find X7 and the Find X7 Ultra are equipped with a 32 MP front-facing camera capable of 4K selfie video recording. [ 1 ] [ 2 ] GSMArena speculated that either a Sony IMX709 or LYT-506 sensor is used for the front-facing camera. [ 10 ] The Find X7 series also features software-based tuning co-developed with Hasselblad , with a built-in Hypertone Image Engine purportedly designed to enhance computational photography. [ 6 ] [ 11 ] The Find X7 and Find X7 Ultra run on ColorOS 14, which is based on Android 14 . [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Oppo_Find_X7_Ultra
Flexible or opportunistic breeders mate whenever the conditions of their environment become favorable. Their ability and motivation to mate are largely independent of day length ( photoperiod ) and instead rely on cues from short-term changes in local conditions such as rainfall, food abundance and temperature. Another factor is the presence of suitable breeding sites, which may only form with heavy rain or other environmental changes. [ 1 ] Thus, they are distinct from seasonal breeders , which rely on changes in day length to induce entry into estrus and to cue mating, and from continuous breeders, like humans, that can mate year-round. Other categories of breeders that can arguably be grouped under the heading "opportunistic" have been used to describe many species, particularly anurans such as frogs. These include sporadic wet and sporadic dry , describing animals that breed sporadically, not always under favorable conditions of rainfall or its absence. [ 1 ] Many opportunistic breeders are non-mammals; those that are mammals tend to be small rodents . [ 2 ] Since changes in season can coincide with favorable changes in the environment, the distinction between seasonal and opportunistic breeders can be muddled. In equatorial climes, the change in seasons is not always perceptible, and changes in day length are thus unremarkable. Hence the tree kangaroo (Dendrolagus), previously categorized as a seasonal breeder, is now suspected to be an opportunistic breeder. [ 3 ] Additionally, opportunists can have qualities of seasonal breeders. The red crossbill exhibits a preference (not a requirement) for long-day seasonality, but requires other factors, especially food abundance and social interactions, in order to breed. [ 4 ] [ 5 ] Conversely, food availability by itself is not sufficient to promote full reproductive development. Opportunistic breeders are typically capable of breeding at any time or of becoming fertile within a short period of time. 
An example is the golden spiny mouse , in which rainfall-driven changes in dietary salt in its desert habitat appear to cue reproductive function. [ 6 ] Increased levels of salinity in drying vegetation cause females to experience a reproductive hiatus. While reproduction is generally independent of photoperiod, animals can still experience reduced fertility with changes in day length. Frogs and toads including:
https://en.wikipedia.org/wiki/Opportunistic_breeder
An opportunistic infection is an infection that occurs most commonly in individuals with an immunodeficiency disorder and acts more severely in those with a weakened immune system. These infections are considered serious and can be caused by a variety of pathogens, including viruses, bacteria, fungi, and parasites. [ 1 ] Under normal conditions, such as in humans with uncompromised immune systems, an opportunistic pathogen would be less likely to cause significant harm and would typically result in a mild infection or no effect at all. Opportunistic infections can stem from a variety of circumstances, such as a weakened immune system (caused by human immunodeficiency virus and acquired immunodeficiency syndrome ), treatment with immunosuppressive drugs (as in cancer treatment ), [ 2 ] an altered microbiome (such as a disruption in gut microbiota ), or breached integumentary barriers (as in penetrating trauma ). Opportunistic infections can contribute to antimicrobial resistance in an individual, making these infections more severe. Some pathogens that cause these infections possess intrinsic (natural) resistance to many antibiotics, while others acquire resistance over time through mutations or horizontal gene transfer . [ 3 ] Many of these pathogens, such as the bacterium Clostridioides difficile (C. diff), can be present in hosts with uncompromised immune systems without generating any symptoms, and can, in some cases, act as commensals until the balance of the immune system is disrupted. [ 4 ] [ 5 ] [ 6 ] [ 7 ] With C. diff and many other pathogens, the overuse or misuse of antibiotics can disrupt the normal microbiota and lead to an opportunistic infection caused by antibiotic-resistant pathogens. [ 8 ] In some cases, opportunistic infections can be labeled as hospital-acquired infections when individuals contract them within a healthcare or hospital setting. 
[ 9 ] Historically, no single individual can be credited with discovering opportunistic infections. Over time and through medical advancement, many scientists have contributed to the study and treatment options for patients affected by these infections. [ 10 ] [ 11 ] Opportunistic infections can be caused by a wide variety of different types of pathogens. These infections can be caused by viral, bacterial, fungal, as well as parasitic pathogens. [ 12 ] A partial list of opportunistic pathogens and their associated effects is as follows: Human Immunodeficiency Virus is a virus that targets the CD4 cells (a type of white blood cell) within the body's immune system. CD4 counts within a non-affected immune system would range anywhere from 500-1500 cells per cubic millimeter of blood, while an affected immune system would show cell counts below 200. [ 77 ] HIV infection can lead to progressively worsening immunodeficiency, a condition ideal for the development of opportunistic infection. [ 78 ] [ 79 ] As HIV worsens over time, the term AIDS, or acquired immunodeficiency syndrome, has been used to describe the condition and extensive damage to the immune system as well as the onset and susceptibility to other illnesses. The onset of AIDS leads to respiratory and central nervous system opportunistic infections, including but not limited to pneumonia , tuberculosis and meningitis . [ 80 ] [ 81 ] [ 82 ] Kaposi's sarcoma , a virally associated cancer, and non-Hodgkin's lymphoma are two types of cancers that are generally defined as AIDS malignancies. [ 83 ] As immune function declines and HIV infection progresses to AIDS, individuals are at an increased risk of opportunistic infections that their immune systems are no longer capable of responding properly to. Because of this, opportunistic infections are a leading cause of HIV/AIDS-related deaths. 
[ 84 ] Immunodeficiency is characterized by the absence of or the disruption in components of the immune system such as white blood cells (e.g. lymphocytes , phagocytes , etc.). These disruptions cause a decrease in immune function and result in an overall reduction of immunity against pathogens. [ 2 ] They can be caused by a variety of factors, including: Since opportunistic infections can cause severe disease, much emphasis is placed on measures to prevent infection. Such a strategy usually includes restoration of the immune system as soon as possible, avoiding exposures to infectious agents, and using antimicrobial medications ("prophylactic medications") directed against specific infections. [ 105 ] Individuals at higher risk for opportunistic infections are often prescribed prophylactic medication to prevent an infection from occurring. A person's risk level for developing an opportunistic infection is approximated using the person's CD4 T-cell count and other indicators such as current medical treatments, age, and lifestyle choices. The table below provides information regarding the treatment management of common opportunistic infections. [ 118 ] [ 119 ] [ 120 ] [ 121 ] Alternative agents can be used instead of the preferred agents. These alternative agents may be used due to an individual's allergies, availability, or clinical presentation. The alternative agents are listed in the table below. [ 118 ] [ 122 ] [ 120 ] Due to the prevention techniques used with HIV patients, such as prophylactic medications, opportunistic infections in HIV patients have decreased in number over the past few decades. In some circumstances, where individuals are not aware they have HIV and they develop an opportunistic infection, they may be prescribed antivirals , antibiotics , or antifungals . 
After the infection has cleared, they may be advised to remain on that medication, possibly coupled with another medication, to prevent recurrence and maintain drug efficacy. [ 123 ]
https://en.wikipedia.org/wiki/Opportunistic_infection
Opportunistic mesh (OPM) is a wireless networking technology that aims to provide reliable and cost-effective wireless bandwidth when used to build the networking infrastructure of large-scale wireless systems. The OPM technology is based on cognitive networking principles [ 1 ] that advance beyond traditional wireless networking through the opportunistic utilization of both spectrum bandwidth and mesh station/radio availability. Traditional wireless networking assumes that those resources can be predetermined, and the protocol stacks from wire-line networks can be re-used. For example, in the traditional stack, the MAC ( media access control ) layer allocates spectrum resources to wireless links; and the network layer sets up a network routing path from source to destination based on the overall network topology. In large-scale wireless systems, the use of this stack results in a network that is unable to respond to volatile spectrum availability, which can be typical in unlicensed bands where interference prevails. In addition, random station/radio availability is also often encountered due to the dynamic traffic load ( congestion ) and other factors such as radio failure. Bottlenecks along both wireless links and stations are created because the packet forwarding protocol cannot respond quickly to these changes. By adopting the cross-layer architecture that merges network routing into wireless link and RF design, the OPM technology can create a dynamic (fluid) wireless network without predetermined topology and spectrum allocation. In multi-hop wireless communications, every packet opportunistically takes available paths and spectrum in the wireless network on each hop. The network resource utilization can potentially reach its instantaneous maximum, in spite of volatile changes and demand placed on the network. 
Testing results [ 2 ] show that OPM wireless networks can achieve 5–10 times higher throughput (bandwidth) in multi-hop wireless communications and interference environments. The technology can have great applications in current and future smart wireless systems and infrastructures, including location/tracking networks, real-time sensor networks, smart vehicular networks, smart healthcare, smart agriculture, industrial controlling, broadband access and mobile social networking, surveillance, smart utilities, and emergency networks. The promoters and supporters of OPM claim that the technology may make wireless communications ultimately scalable and affordable; and once it potentially becomes ubiquitous, the commercial impact could be comparable to: 1) what packet-switched network technology (the Internet) has brought to personal communications; and 2) what mobile communication technology (cellular) has brought to telephony.
https://en.wikipedia.org/wiki/Opportunistic_mesh
Opportunistic reasoning is a method of selecting a suitable logical inference strategy within artificial intelligence applications. Specific reasoning methods may be used to draw conclusions from a set of given facts in a knowledge base , e.g. forward chaining versus backward chaining . However, in opportunistic reasoning, pieces of knowledge may be applied either forward or backward, at the "most opportune time". [ 1 ] An opportunistic reasoning system may combine elements of both forward and backward reasoning. It is useful when the number of possible inferences is very large and the reasoning system must be responsive to new data that may become known. [ 2 ] Opportunistic reasoning has been used in applications such as blackboard systems and medical applications. [ 3 ] This artificial intelligence -related article is a stub . You can help Wikipedia by expanding it .
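To make the distinction concrete, below is a minimal sketch of forward and backward chaining over a toy rule base. All rule and fact names here are hypothetical illustrations, not drawn from any cited system:

```python
# Toy rule base: each rule maps a set of premise facts to a conclusion.
# (Illustrative names only.)
RULES = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "test_positive"}, "flu_confirmed"),
]

def forward_chain(facts):
    """Data-driven: repeatedly fire any rule whose premises are all known."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts):
    """Goal-driven: try to prove a goal, recursing through rule premises."""
    if goal in facts:
        return True
    return any(all(backward_chain(p, facts) for p in premises)
               for premises, conclusion in RULES if conclusion == goal)
```

An opportunistic reasoner would interleave the two strategies: forward-chain when new facts arrive, backward-chain when a specific goal is posed, applying whichever direction is most opportune at each step.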
https://en.wikipedia.org/wiki/Opportunistic_reasoning
The opposition surge (sometimes known as the opposition effect , opposition spike or Seeliger effect [ 1 ] ) is the brightening of a rough surface, or an object with many particles , when illuminated from directly behind the observer. The term is most widely used in astronomy , where generally it refers to the sudden noticeable increase in the brightness of a celestial body such as a planet , moon , or comet as its phase angle of observation approaches zero. It is so named because the reflected light from the Moon and Mars appears significantly brighter than predicted by simple Lambertian reflectance when at astronomical opposition . Two physical mechanisms have been proposed for this observational phenomenon: shadow hiding and coherent backscatter. The phase angle is defined as the angle between the observer, the observed object and the source of light. In the case of the Solar System, the light source is the Sun, and the observer is generally on Earth. At zero phase angle, the Sun is directly behind the observer and the object is directly ahead, fully illuminated. As the phase angle of an object lit by the Sun decreases, the object's luminous intensity increases. This is partly due to the increased area lit, but is also partly due to the intrinsic brightness (the luminance ) of the part that is sunlit. This is affected by the illuminance of the surface, which is strongest directly under the sun and goes to zero at the parts of the object that face at right angles to the sun. But the luminance is also affected by the angle at which light reflected from the object is observed. For this reason, moonlight at full moon is much brighter than at first or third quarter, even though the visible area illuminated is only twice as large. When the angle of reflection is close to the angle at which the light's rays hit the surface (that is, when the Sun and the object are close to opposition from the viewpoint of the observer), this intrinsic brightness is usually close to its maximum. 
At a phase angle of zero degrees, all shadows disappear and the object is fully illuminated. When phase angles approach zero, there is a sudden increase in apparent brightness, and this sudden increase is referred to as the opposition surge. The effect is particularly pronounced on regolith surfaces of airless bodies in the Solar System . The usual major cause of the effect is that a surface's small pores and pits that would otherwise be in shadow at other incidence angles become lit up when the observer is almost in the same line as the source of illumination. The effect is usually only visible for a very small range of phase angles near zero. For bodies whose reflectance properties have been quantitatively studied, details of the opposition effect – its strength and angular extent – are described by two of the Hapke parameters . In the case of planetary rings (such as Saturn's ), an opposition surge is due to the uncovering of shadows on the ring particles. This explanation was first proposed by Hugo von Seeliger in 1887. [ 2 ] A theory for an additional effect that increases brightness during opposition is that of coherent backscatter. [ 3 ] In the case of coherent backscatter, the reflected light is enhanced at narrow angles if the size of the scatterers in the surface of the body is comparable to the wavelength of light and the distance between scattering particles is greater than a wavelength. The increase in brightness is due to the reflected light combining coherently with the emitted light. Coherent backscatter phenomena have also been observed with radar . In particular, recent observations of Titan at 2.2 cm with Cassini have shown that a strong coherent backscatter effect is required to explain the high albedos at radar wavelengths. [ 4 ] On Earth, water droplets can also create bright spots around the antisolar point in various situations. For more details, see Heiligenschein and Glory (optical phenomenon) . 
The existence of the opposition surge was described in 1956 by Tom Gehrels during his study of the reflected light from an asteroid . [ 5 ] Gehrels' later studies showed that the same effect could be shown in the moon's brightness. [ 6 ] He coined the term "opposition effect" for the phenomenon, but the more intuitive "opposition surge" is now more widely used. Since Gehrels' early studies, an opposition surge has been noted for most airless solar system bodies. No such surge has been reported for bodies with significant atmospheres. In the case of the Moon , B. J. Buratti et al. used observations from the Clementine spacecraft at very low phase angle to find that the moon's brightness increases by more than 40% between a phase angle of 4° and one of 0°. (Observation from Earth cannot be at a phase angle less than about half a degree without there being a lunar eclipse. A phase angle of 4° is achieved about eight hours before or after a lunar eclipse.) This increase is greater for the rougher-surfaced highland areas than for the relatively smooth maria . As for the principal mechanism of the phenomenon, measurements indicate that the opposition effect exhibits only a small wavelength dependence: the surge is 3-4% larger at 0.41 μm than at 1.00 μm. This result suggests that the principal cause of the lunar opposition surge is shadow-hiding rather than coherent backscatter. [ 7 ]
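The angular dependence of the shadow-hiding surge is commonly modeled with a Hapke-style term in which one parameter sets the strength of the surge and the other its angular width. A rough sketch under that assumption (the parameter values below are illustrative defaults, not fitted lunar values):

```python
import math

def surge_factor(alpha_deg, B0=0.4, h=0.05):
    """Relative brightening 1 + B(alpha) from a Hapke-style
    shadow-hiding term B(alpha) = B0 / (1 + tan(alpha/2) / h).
    B0 sets the surge strength, h its angular width.
    Parameter values here are illustrative only."""
    alpha = math.radians(alpha_deg)
    return 1.0 + B0 / (1.0 + math.tan(alpha / 2.0) / h)

# The surge peaks at exact opposition and decays over a few degrees:
ratio = surge_factor(0.0) / surge_factor(4.0)  # brightness ratio, 0 deg vs 4 deg
```

With a small width parameter h, the modeled surge is confined to phase angles near zero, matching the narrow angular range of the effect described above.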
https://en.wikipedia.org/wiki/Opposition_surge
Opposition to the Mauna Kea Observatories has existed since the first telescope was built in the late 1960s. Originally part of research begun by Gerard Kuiper of the University of Arizona , the site has expanded into the world's largest observatory for infrared and submillimeter telescopes. Residents of the city of Hilo, Hawaii were concerned about the visual appearance of the mountain, and Native Hawaiians voiced concerns over the site being sacred to the Hawaiian religion as the home of several deities. Environmental groups and activists have expressed concern over endangered species habitat. The Outrigger Telescopes Project , intended to build from four to six comparatively small telescopes for interferometry, was to surround the Keck telescopes. [ 1 ] It was cancelled in 2006, after a court found NASA's Environmental Impact Statement was improperly limited to just the telescope area. [ 2 ] [ 3 ] An ongoing proposal for one of the world's largest optical telescopes, the Thirty Meter Telescope (TMT), was the focus of protests concerning the continued development of the mountain, which Hawaiians consider the most sacred peak in the island chain. On 30 October 2018, the Supreme Court of Hawaii approved the resumption of construction of the TMT. [ 4 ] [ 5 ] After studying photos for NASA's Apollo program that contained greater detail than any ground-based telescope, Gerard Kuiper began seeking an arid site for infrared studies. [ 6 ] [ 7 ] While he first began looking in Chile, he also made the decision to perform tests in the Hawaiian Islands. Tests on Maui 's Haleakalā were promising but the mountain was too low in the inversion layer and often covered by clouds. On the "Big Island" of Hawaii, Mauna Kea is considered the highest island mountain in the world. While the summit is often covered with snow, the air itself is extremely dry. [ 6 ] Kuiper began looking into the possibility of an observatory on Mauna Kea. 
After testing, he discovered the low humidity was perfect for infrared signals. He persuaded then-Governor John A. Burns to bulldoze a dirt road to the summit where he built a small telescope on Puʻu Poliʻahu, a cinder cone peak. [ 6 ] [ 8 ] [ 9 ] The peak was the second highest on the mountain with the highest peak being holy ground, so Kuiper avoided it. [ 10 ] Next, Kuiper tried enlisting NASA to fund a larger facility with a large telescope, housing and other needed structures. NASA, in turn, decided to make the project open to competition. Professor of physics John Jefferies of the University of Hawaii placed a bid on behalf of the university. [ 6 ] [ 11 ] [ 12 ] Jefferies had gained his reputation through observations at Sacramento Peak Observatory . The proposal was for a two-meter telescope to serve both the needs of NASA and the university. While large telescopes are not ordinarily awarded to universities without well established astronomers, Jefferies and UH were awarded the NASA contract, infuriating Kuiper, who felt that "his mountain" had been "stolen" from "him". [ 6 ] [ 13 ] Kuiper would abandon his site (the very first telescope on Mauna Kea) over the competition and begin work in Arizona on a different NASA project. After considerable testing by Jefferies' team, the best locations were determined to be near the summit at the top of the cinder cones. Testing also determined Mauna Kea to be superb for nighttime viewing due to many factors, including the thin air, constant trade winds and being surrounded by sea. Jefferies would build a 2.24 meter telescope with the State of Hawaii agreeing to build a reliable, all-weather roadway to the summit. Building began in 1967, and first light was seen in 1970. 
[ 6 ] Some of the people on the Big Island were concerned that the whole thing had got out of hand and the University of Hawaii was going to take over the top of the mountain, push all the skiers off and push all the hunters off, and essentially develop, in the worst sense, the side of the mountain. The Big Island is a rural community and there are a lot of people there who are not very sophisticated, as you know. They are nervous about changes in lifestyles, and they see those following on the development of the program of astronomy as just about as far removed from their daily pursuits as they possibly can be. And they do not trust the University, State, or Federal government worth a damn. They feel—and in some cases I’ve had this said to me—that they are going to lose all access to the mountain because of these programs. The Federal government is going to come in and it's going to slowly move down the mountain, taking more and more of the mountain over as more and more programs go up there, and no-one will be able to get there. It is very hard to fight a fear of this kind—a formless, baseless concern—except through the same kind of backwoods interaction, at a grassroots level. In Honolulu, the governor and legislature, enthusiastic about the development, set aside an even larger area for the observatory causing opposition in the main city of the Big Island, Hilo . Native Hawaiians believe the entire site to be sacred and that developing the mountain, even for science, would spoil the area. Environmentalists were concerned about rare native bird populations and other citizens of Hilo were concerned about the sight of the domes from the city. Using town hall meetings, Jefferies was able to overcome opposition by weighing the economic advantage and prestige the island would receive. [ 6 ] There has been substantial opposition to the Mauna Kea observatories that continues to grow. 
[ 15 ] By 1977 Jefferies stated that the Mayor of Hawaii County had joined existing hunting and environmentalist opposition. [ 14 ] Over the years, the opposition to the observatories may have become the most visible example of the conflict western science has encountered over access and use of environmental and culturally significant sites. [ 16 ] Opposition to development grew shortly after expansion of the observatories commenced. Once access was opened up by the roadway to the summit, skiers began using it for recreation and objected when the road was closed as a precaution against vandalism when the telescopes were being built. Hunters voiced concerns, as did the Hawaiian Audubon Society , which was supported by Governor George Ariyoshi . [ 10 ] The Audubon Society objected to further development on Mauna Kea over concerns about the habitat of the endangered palila , a species endemic to only specific parts of this mountain. The bird is the last of the finch-billed honeycreepers existing on the island. Over 50% of native bird species had been killed off due to loss of habitat from early western settlers, or the introduction of non-native species competing for resources. Hunters and sportsmen were concerned that the hunting of feral animals would be affected by the telescope operations. [ 17 ] A "Save Mauna Kea" movement was inspired by the proliferation of telescopes, with opposition believing development of the mountain to be sacrilegious. [ 18 ] Native Hawaiian non-profit groups, such as Kahea (whose goals are the protection of cultural heritage and the environment), oppose development on Mauna Kea as a sacred space to the Hawaiian religion. [ 19 ] Today, Mauna Kea hosts the world's largest location for telescope observations in infrared and submillimeter astronomy. The land itself is protected by the U.S. Historical Preservation Act due to its significance to Hawaiian culture, but this still allowed development. 
[ 20 ] Development of the Mauna Kea observatories is still opposed by environmental groups and Native Hawaiians. A 2006 proposal for the Outrigger Telescopes to become extensions of the Keck Observatory was canceled after a judge's determination that a full environmental impact statement must be prepared before any further development of the site. [ 21 ] The "outrigger" would have linked the Keck I and Keck II telescopes. Environmental groups and Native Hawaiian activists were much stronger in their opposition this time than they had been in the past, but NASA went ahead with the proposal for lack of an alternate site. The group Mauna Kea Anaina Hou made several arguments against the development, including that Mauna Kea was a sacred mountain to Native Hawaiians where many deities live, and that the cinder cone location being proposed was holy in Hawaiian tradition as a burial site for a demi-god. The group raised several other concerns, including environmental issues, the preservation of native insects, the question of Ceded lands , and an audit report critical of the mountain's management. [ 22 ] The Thirty Meter Telescope (TMT) is a proposal for a large, segmented-mirror telescope, planned for the summit of Mauna Kea. The TMT has become a focal point for protests against further development of the observatory site, and a legal battle was fought through the Hawaii court system. The Supreme Court of Hawaii approved the resumption of construction of the telescope on 30 October 2018. [ 4 ] The TMT project is a response to a recommendation in 2000 from the US National Academy of Sciences that a thirty-meter telescope be the top priority and be built within the decade. [ 23 ] Urgency in construction is due to the competitive nature of science, with the European Extremely Large Telescope also under construction. [ 24 ] Mauna Kea's summit is the most sacred of all the mountains in Hawaii to many, but not all, Native Hawaiian people . 
[ 25 ] Hawaiian cultural practitioners cite impacts to indigenous cultural practice , while recreational users have argued that construction harms the scenic view plane. Some environmentalists are concerned that irreparable ecological damage may be done by construction, although this has been disputed by other environmental advocates. [ 26 ] All three groups are represented among the petitioners opposing the TMT. [ 27 ] Under State of Hawaii law HAR 13-5-30, eight key criteria must be met before construction is allowed on conservation lands in Hawaii. Among other criteria, the development may not "cause substantial adverse impact to existing natural resources within the surrounding area, community, or region," and the "existing physical and environmental aspects of the land must be preserved or improved upon." [ 28 ] Native Hawaiian activists such as Kealoha Pisciotta, a former employee of the Mauna Kea Observatories , have raised concerns over the telescopes on Mauna Kea desecrating what some Native Hawaiians consider to be their most sacred mountain. [ 29 ] Pisciotta, a former telescope systems specialist technician at the James Clerk Maxwell Telescope , is one of several people suing to stop the construction, [ 30 ] and is also director of Mauna Kea Anaina Hou. [ 31 ] As of April 2015, two separate appeals were still pending. [ 32 ] The 1998 study Mauna Kea Science Reserve and Hale Pohaku Complex Development Plan Update stated that "...nearly all the interviewees and all others who participated in the consultation process (Appendices B and C) called for a moratorium on any further development on the summit of Mauna Kea." [ 33 ] The Hawaii Board of Land and Natural Resources gave final approval for the project in September 2017 after a protracted hearing process that included a six-month-long contested case hearing . [ 34 ] This decision was challenged in the Hawaii State Supreme Court the following year. 
The court ruled that the DLNR decision was valid and that construction may proceed. [ 35 ] As of late 2021, construction of the Thirty Meter Telescope remains paused due to the controversy and ongoing effects of the COVID-19 pandemic . The 2020 Decadal report of the National Science Foundation has recommended federal investment in the TMT project. [ 36 ] The controversy surrounding construction of the Thirty Meter Telescope continues. Independent polls commissioned by local media organizations [ 37 ] [ 38 ] show consistent support for the project in the islands, with over two thirds of local residents supporting the project. These same polls indicate Native Hawaiian community support remains split, with about half of Hawaiian respondents supporting construction of the new telescope. A July 2022 state law [ 39 ] responds to the protests by removing sole control over the master land lease from the University of Hawaii. After a joint transition period from 2023 to 2028, control will shift to the new Mauna Kea Stewardship and Oversight Authority, which will include representatives from the University, astronomers and native Hawaiians. [ 40 ]
https://en.wikipedia.org/wiki/Opposition_to_the_Mauna_Kea_Observatories
Opsys is an educational adventure video game by Polish studio Lemon Interactive and published by [hyper]media limited in 2000 for Macintosh and Windows. When someone breaks into the Museum of the History of Cypriot Coinage and steals all the ancient coins, the player must travel through time and recover them, from 500 BC to 1960. Players can travel to locations via a map, and can access a clue book to complete the puzzles. Opsys is a 3D virtual reality game with Myst -like graphics and full-motion video. [ 1 ] Lemon Interactive, the game's Polish developer, announced a competition whereby the first player to find all the coins would win 10,000 dollars, but the competition was never finalised. The competition was also extended to the English-speaking world. [ 2 ] The demo version lacked some gameplay elements and only allowed players to walk through the wardrobe in their own apartment to the virtual reality lab, and access the temple VR, tomb VR and theatre VR. [ 3 ] Gamepressure/ Gry-Online praised the artwork of the landscapes that the player traverses. [ 4 ] [ 5 ] Absolute Games deemed it a "boring and tedious game". [ 6 ] Gamezone felt it was a "terrific cerebral challenge". [ 7 ] Quandaryland felt that one of the only reasons someone would play this game is for the chance to win $10,000. [ 8 ] Just Adventure described it as more of a contest than a game. [ 9 ]
https://en.wikipedia.org/wiki/Opsys
Optibo is the product of a collaboration between Swedish firms to solve the housing industry 's problem of availability of space caused by high land prices. Optibo's main architect was inspired when he saw the Disney cartoon Mickey's Trailer on TV. [ 1 ] The project has led to the construction of an apartment which is only 25 square meters (270 square feet) in area. The single-room living space has the furniture built into the floor and the room can be changed from a living room to a bedroom , to a dining room , and back. The kitchen area is affixed to the wall and does not change. [ citation needed ] This real estate article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Optibo
In number theory , the optic equation is an equation that requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c : [ 1 ] {\displaystyle {\tfrac {1}{a}}+{\tfrac {1}{b}}={\tfrac {1}{c}}.} Multiplying both sides by abc shows that the optic equation is equivalent to a Diophantine equation (a polynomial equation in multiple integer variables). All solutions in integers a, b, c are given in terms of positive integer parameters m, n, k by [ 1 ] {\displaystyle a=km(m+n),\quad b=kn(m+n),\quad c=kmn} where m, n are coprime . The optic equation, permitting but not requiring integer solutions, appears in several contexts in geometry . In a bicentric quadrilateral , the inradius r , the circumradius R , and the distance x between the incenter and the circumcenter are related by Fuss' theorem according to {\displaystyle {\tfrac {1}{(R-x)^{2}}}+{\tfrac {1}{(R+x)^{2}}}={\tfrac {1}{r^{2}}},} and the distances of the incenter I from the vertices A, B, C, D are related to the inradius according to {\displaystyle {\tfrac {1}{AI^{2}}}+{\tfrac {1}{CI^{2}}}={\tfrac {1}{BI^{2}}}+{\tfrac {1}{DI^{2}}}={\tfrac {1}{r^{2}}}.} In the crossed ladders problem , [ 2 ] two ladders braced at the bottoms of vertical walls cross at the height h and lean against the opposite walls at heights of A and B . We have {\displaystyle {\tfrac {1}{h}}={\tfrac {1}{A}}+{\tfrac {1}{B}}.} Moreover, the formula continues to hold if the walls are slanted and all three measurements are made parallel to the walls. Let P be a point on the circumcircle of an equilateral triangle △ ABC , on the minor arc AB . Let a be the distance from P to A and b be the distance from P to B . On a line passing through P and the far vertex C , let c be the distance from P to the triangle side AB . Then [ 3 ] : p. 172 {\displaystyle {\tfrac {1}{a}}+{\tfrac {1}{b}}={\tfrac {1}{c}}.} In a trapezoid , draw a segment parallel to the two parallel sides, passing through the intersection of the diagonals and having endpoints on the non-parallel sides. 
Then if we denote the lengths of the parallel sides as a and b and half the length of the segment through the diagonal intersection as c , the sum of the reciprocals of a and b equals the reciprocal of c . [ 4 ] The special case in which the integers whose reciprocals are taken must be square numbers appears in two ways in the context of right triangles . First, the sum of the reciprocals of the squares of the altitudes from the legs (equivalently, of the squares of the legs themselves) equals the reciprocal of the square of the altitude from the hypotenuse. This holds whether or not the numbers are integers; there is a formula that generates all integer cases. [ 5 ] [ 6 ] Second, also in a right triangle the sum of the squared reciprocal of the side of one of the two inscribed squares and the squared reciprocal of the hypotenuse equals the squared reciprocal of the side of the other inscribed square. The sides of a heptagonal triangle , which shares its vertices with a regular heptagon , satisfy the optic equation. For a lens of negligible thickness, and focal length f , the distances from the lens to an object, S 1 , and from the lens to its image, S 2 , are related by the thin lens formula : {\displaystyle {\tfrac {1}{f}}={\tfrac {1}{S_{1}}}+{\tfrac {1}{S_{2}}}.} Components of an electrical circuit or electronic circuit can be connected in what is called a series or parallel configuration. For example, the total resistance value R t of two resistors with resistances R 1 and R 2 connected in parallel follows the optic equation: {\displaystyle {\tfrac {1}{R_{t}}}={\tfrac {1}{R_{1}}}+{\tfrac {1}{R_{2}}}.} Similarly, the total inductance L t of two inductors with inductances L 1 , L 2 connected in parallel is given by: {\displaystyle {\tfrac {1}{L_{t}}}={\tfrac {1}{L_{1}}}+{\tfrac {1}{L_{2}}},} and the total capacitance C t of two capacitors with capacitances C 1 , C 2 connected in series is as follows: {\displaystyle {\tfrac {1}{C_{t}}}={\tfrac {1}{C_{1}}}+{\tfrac {1}{C_{2}}}.} The optic equation of the crossed ladders problem can be applied to folding rectangular paper into three equal parts. One side (the left one illustrated here) is partially folded in half and pinched to leave a mark. 
The intersection of a line from this mark to an opposite corner with a diagonal is exactly one third of the way from the bottom edge. The top edge can then be folded down to meet the intersection. [ 7 ] The harmonic mean of a and b is 2 1 a + 1 b {\displaystyle {\tfrac {2}{{\frac {1}{a}}+{\frac {1}{b}}}}} or 2 c . In other words, c is half the harmonic mean of a and b . Fermat's Last Theorem states that the sum of two integers each raised to the same integer power n cannot equal another integer raised to the power n if n > 2 . This implies that no solutions to the optic equation have all three integers equal to perfect powers with the same power n > 2 . For if 1 x n + 1 y n = 1 z n , {\displaystyle {\tfrac {1}{x^{n}}}+{\tfrac {1}{y^{n}}}={\tfrac {1}{z^{n}}},} then multiplying through by ( x y z ) n {\displaystyle (xyz)^{n}} would give ( y z ) n + ( x z ) n = ( x y ) n , {\displaystyle (yz)^{n}+(xz)^{n}=(xy)^{n},} which is impossible by Fermat's Last Theorem.
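The parametrization a = km(m+n), b = kn(m+n), c = kmn described above is easy to check computationally. The following sketch (the function name is illustrative) enumerates all solutions up to a bound and verifies each against the equivalent Diophantine form c(a + b) = ab:

```python
from math import gcd

def optic_solutions(limit):
    """Enumerate positive-integer solutions of 1/a + 1/b = 1/c with
    a, b, c <= limit, using a = k*m*(m+n), b = k*n*(m+n), c = k*m*n
    over coprime m, n. Since n <= m, a is the largest of the three,
    so bounding a bounds all of them."""
    sols = set()
    for m in range(1, limit):
        for n in range(1, m + 1):          # n <= m avoids (a, b) swaps
            if gcd(m, n) != 1:
                continue
            k = 1
            while k * m * (m + n) <= limit:
                a, b, c = k * m * (m + n), k * n * (m + n), k * m * n
                sols.add((a, b, c))
                k += 1
    return sorted(sols)

# Every generated triple satisfies the equivalent Diophantine form.
for a, b, c in optic_solutions(20):
    assert c * (a + b) == a * b
```

The triple (6, 3, 2), for instance, also illustrates the circuit analogy: resistors of 6 Ω and 3 Ω in parallel give a total resistance of 2 Ω.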
https://en.wikipedia.org/wiki/Optic_equation
Optical Materials Express is a monthly peer-reviewed scientific journal published by Optica . It covers advances in and applications of optical materials, including but not limited to nonlinear optical materials , laser media, nanomaterials , metamaterials and biomaterials . Its editor-in-chief is Andrea Alù ( City University of New York ). [ 1 ] The founding editor-in-chief was David J. Hagan. [ 2 ] According to the Journal Citation Reports , the journal has a 2023 impact factor of 2.8. [ 3 ] This article about an optics journal is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Optical_Materials_Express
An optical amplifier is a device that amplifies an optical signal directly, without the need to first convert it to an electrical signal. An optical amplifier may be thought of as a laser without an optical cavity , or one in which feedback from the cavity is suppressed. Optical amplifiers are important in optical communication and laser physics . They are used as optical repeaters in the long-distance fiber-optic cables which carry much of the world's telecommunication links. There are several different physical mechanisms that can be used to amplify a light signal, which correspond to the major types of optical amplifiers. In doped fiber amplifiers and bulk lasers, stimulated emission in the amplifier's gain medium causes amplification of incoming light. In semiconductor optical amplifiers (SOAs), electron – hole recombination occurs. In Raman amplifiers , Raman scattering of incoming light with phonons in the lattice of the gain medium produces photons coherent with the incoming photons. Parametric amplifiers use parametric amplification. The principle of optical amplification was invented by Gordon Gould on November 13, 1957. [ 2 ] He filed US patent application US80453959A on April 6, 1959, titled "Light Amplifiers Employing Collisions to Produce Population Inversions" [ 3 ] (subsequently amended as a continuation in part and finally issued as U.S. patent 4,746,201A on May 4, 1988). The patent covered “the amplification of light by the stimulated emission of photons from ions, atoms or molecules in gaseous, liquid or solid state.” [ 4 ] In total, Gould obtained 48 patents related to the optical amplifier [ 5 ] that covered 80% of the lasers on the market at the time of issuance. [ 6 ] Gould co-founded an optical telecommunications equipment firm, Optelecom Inc. , which helped start Ciena Corp with David Huber, his former head of Light Optics Research, and Kevin Kimberlin . Huber and Steve Alexander of Ciena invented the dual-stage optical amplifier [ 7 ] ( U.S.
patent 5,159,601 ) that was key to the first dense wavelength-division multiplexing (DWDM) system, which they released in June 1996. This marked the start of optical networking. [ 3 ] Its significance was recognized at the time by the optical authority Shoichi Sudo and the technology analyst George Gilder, when in 1997 Sudo wrote that optical amplifiers “will usher in a worldwide revolution called the Information Age” [ 4 ] and Gilder compared the optical amplifier to the integrated circuit in importance, predicting that it would make possible the Age of Information. [ 8 ] Optically amplified WDM systems are the common basis of all local, metro, national, intercontinental and subsea telecommunications networks [ 9 ] and the technology of choice for the fiber-optic backbones of the Internet. Almost any laser-active gain medium can be pumped to produce gain for light at the wavelength of a laser made with the same material as its gain medium. Such amplifiers are commonly used to produce high-power laser systems. Special types such as regenerative amplifiers and chirped-pulse amplifiers are used to amplify ultrashort pulses . Solid-state amplifiers are optical amplifiers that use a wide range of doped solid-state materials ( Nd:YAG, Yb:YAG, Ti:sapphire ) and different geometries (disk, slab, rod) to amplify optical signals. The variety of materials allows the amplification of different wavelengths, while the shape of the medium can distinguish between those more suitable for energy scaling and those more suitable for average-power scaling. [ 10 ] Besides their use in fundamental research, from gravitational wave detection [ 11 ] to high-energy physics at the National Ignition Facility , they can also be found in many of today's ultrashort pulsed lasers . [ citation needed ] Doped-fiber amplifiers (DFAs) are optical amplifiers that use a doped optical fiber as a gain medium to amplify an optical signal. [ 12 ] They are related to fiber lasers .
The signal to be amplified and a pump laser are multiplexed into the doped fiber, and the signal is amplified through interaction with the doping ions . Amplification is achieved by stimulated emission of photons from dopant ions in the doped fiber. The pump laser excites ions into a higher energy state, from which they can decay via stimulated emission of a photon at the signal wavelength back to a lower energy level. The excited ions can also decay spontaneously (spontaneous emission) or even through nonradiative processes involving interactions with phonons of the glass matrix. These last two decay mechanisms compete with stimulated emission, reducing the efficiency of light amplification. The amplification window of an optical amplifier is the range of optical wavelengths for which the amplifier yields a usable gain. The amplification window is determined by the spectroscopic properties of the dopant ions, the glass structure of the optical fiber, and the wavelength and power of the pump laser. Although the electronic transitions of an isolated ion are very well defined, broadening of the energy levels occurs when the ions are incorporated into the glass of the optical fiber, and thus the amplification window is also broadened. This broadening is both homogeneous (all ions exhibit the same broadened spectrum) and inhomogeneous (different ions in different glass locations exhibit different spectra). Homogeneous broadening arises from the interactions with phonons of the glass, while inhomogeneous broadening is caused by differences in the glass sites where different ions are hosted. Different sites expose ions to different local electric fields, which shift the energy levels via the Stark effect . In addition, the Stark effect removes the degeneracy of energy states having the same total angular momentum (specified by the quantum number J).
Thus, for example, the trivalent erbium ion (Er 3+ ) has a ground state with J = 15/2, which in the presence of an electric field splits into J + 1/2 = 8 sublevels with slightly different energies. The first excited state has J = 13/2 and therefore a Stark manifold with 7 sublevels. Transitions from the J = 13/2 excited state to the J = 15/2 ground state are responsible for the gain at 1500 nm wavelength. The gain spectrum of the EDFA has several peaks that are smeared by the above broadening mechanisms. The net result is a very broad spectrum (30 nm in silica, typically). The broad gain-bandwidth of fiber amplifiers makes them particularly useful in wavelength-division multiplexed communications systems, as a single amplifier can be utilized to amplify all signals carried on a fiber whose wavelengths fall within the gain window. An erbium-doped waveguide amplifier (EDWA) is an optical amplifier that uses a waveguide to boost an optical signal. A relatively high-powered beam of light is mixed with the input signal using a wavelength selective coupler (WSC). The input signal and the excitation light must be at significantly different wavelengths. The mixed light is guided into a section of fiber with erbium ions included in the core. This high-powered light beam excites the erbium ions to their higher-energy state. When photons belonging to the signal (at a different wavelength from the pump light) meet the excited erbium ions, the erbium ions undergo stimulated emission and return to their lower-energy state. A significant point is that the erbium gives up its energy in the form of additional photons which are exactly in the same phase and direction as the signal being amplified. So the signal is amplified along its direction of travel only. This is not unusual – when an atom "lases" it always gives up its energy in the same direction and phase as the incoming light.
Thus all of the additional signal power is guided in the same fiber mode as the incoming signal. An optical isolator is usually placed at the output to prevent reflections returning from the attached fiber. Such reflections disrupt amplifier operation and in the extreme case can cause the amplifier to become a laser. The erbium-doped amplifier is a high-gain amplifier. The principal source of noise in DFAs is amplified spontaneous emission (ASE), which has a spectrum approximately the same as the gain spectrum of the amplifier. The noise figure of an ideal DFA is 3 dB, while practical amplifiers can have noise figures as large as 6–8 dB. As well as decaying via stimulated emission, electrons in the upper energy level can also decay by spontaneous emission, which occurs at random, depending upon the glass structure and inversion level. Photons are emitted spontaneously in all directions, but a proportion of those will be emitted in a direction that falls within the numerical aperture of the fiber and are thus captured and guided by the fiber. Those photons captured may then interact with other dopant ions, and are thus amplified by stimulated emission. The initial spontaneous emission is therefore amplified in the same manner as the signals, hence the term amplified spontaneous emission . ASE is emitted by the amplifier in both the forward and reverse directions, but only the forward ASE is a direct concern to system performance, since that noise will co-propagate with the signal to the receiver, where it degrades system performance. Counter-propagating ASE can, however, lead to degradation of the amplifier's performance, since the ASE can deplete the inversion level and thereby reduce the gain of the amplifier and increase the noise produced relative to the desired signal gain. Noise figure can be analyzed in both the optical domain and in the electrical domain.
[ 13 ] In the optical domain, measurement of the ASE, the optical signal gain, and the signal wavelength using an optical spectrum analyzer permits calculation of the noise figure. For the electrical measurement method, the detected photocurrent noise is evaluated with a low-noise electrical spectrum analyzer, which along with measurement of the amplifier gain permits a noise figure measurement. Generally, the optical technique provides a simpler method, though it is not inclusive of excess noise effects captured by the electrical method, such as multi-path interference (MPI) noise generation. In both methods, attention to effects such as the spontaneous emission accompanying the input signal is critical to accurate measurement of noise figure. Gain is achieved in a DFA due to population inversion of the dopant ions. The inversion level of a DFA is set, primarily, by the power of the pump wavelength and the power at the amplified wavelengths. As the signal power increases, or the pump power decreases, the inversion level will reduce and thereby the gain of the amplifier will be reduced. This effect is known as gain saturation – as the signal level increases, the amplifier saturates and cannot produce any more output power, and therefore the gain reduces. Saturation is also commonly known as gain compression. To achieve optimum noise performance, DFAs are operated under a significant amount of gain compression (10 dB typically), since that reduces the rate of spontaneous emission, thereby reducing ASE. Another advantage of operating the DFA in the gain saturation region is that small fluctuations in the input signal power are reduced in the output amplified signal: smaller input signal powers experience larger (less saturated) gain, while larger input powers see less gain. The leading edge of the pulse is amplified, until the saturation energy of the gain medium is reached. In some conditions, the width ( FWHM ) of the pulse is reduced.
[ 14 ] Due to the inhomogeneous portion of the linewidth broadening of the dopant ions, the gain spectrum has an inhomogeneous component, and gain saturation occurs, to a small extent, in an inhomogeneous manner. This effect is known as spectral hole burning because a high-power signal at one wavelength can 'burn' a hole in the gain for wavelengths close to that signal by saturation of the inhomogeneously broadened ions. Spectral holes vary in width depending on the characteristics of the optical fiber in question and the power of the burning signal, but are typically less than 1 nm at the short wavelength end of the C-band, and a few nm at the long wavelength end of the C-band. The depth of the holes is very small, though, making them difficult to observe in practice. Although the DFA is essentially a polarization-independent amplifier, a small proportion of the dopant ions interact preferentially with certain polarizations, and a small dependence on the polarization of the input signal may occur (typically < 0.5 dB). This is called polarization dependent gain (PDG). The absorption and emission cross sections of the ions can be modeled as ellipsoids with the major axes aligned at random in all directions in different glass sites. The random distribution of the orientation of the ellipsoids in a glass produces a macroscopically isotropic medium, but a strong pump laser induces an anisotropic distribution by selectively exciting those ions that are more aligned with the optical field vector of the pump. Also, those excited ions aligned with the signal field produce more stimulated emission. The change in gain is thus dependent on the alignment of the polarizations of the pump and signal lasers – i.e. whether the two lasers are interacting with the same sub-set of dopant ions or not. In an ideal doped fiber without birefringence , the PDG would be inconveniently large.
Fortunately, in optical fibers small amounts of birefringence are always present and, furthermore, the fast and slow axes vary randomly along the fiber length. A typical DFA is several tens of meters long, long enough to show this randomness of the birefringence axes. These two combined effects (which in transmission fibers give rise to polarization mode dispersion ) produce a misalignment of the relative polarizations of the signal and pump lasers along the fiber, thus tending to average out the PDG. The result is that PDG is very difficult to observe in a single amplifier (but is noticeable in links with several cascaded amplifiers). The erbium-doped fiber amplifier (EDFA) is the most deployed fiber amplifier, as its amplification window coincides with the third transmission window of silica-based optical fiber. The core of a silica fiber is doped with trivalent erbium ions (Er 3+ ) and can be efficiently pumped with a laser at or near wavelengths of 980 nm and 1480 nm, and gain is exhibited in the 1550 nm region. The EDFA amplification region varies from application to application and can be anywhere from a few nm up to ~80 nm. Typical use of EDFAs in telecommunications calls for Conventional , or C-band amplifiers (from ~1525 nm to ~1565 nm) or Long , or L-band amplifiers (from ~1565 nm to ~1610 nm). Both of these bands can be amplified by EDFAs, but it is normal to use two different amplifiers, each optimized for one of the bands. The principal difference between C- and L-band amplifiers is that a longer length of doped fiber is used in L-band amplifiers. The longer length of fiber allows a lower inversion level to be used, thereby giving emission at longer wavelengths (due to the band-structure of erbium in silica) while still providing a useful amount of gain. [ citation needed ] EDFAs have two commonly used pumping bands – 980 nm and 1480 nm.
The 980 nm band has a higher absorption cross-section and is generally used where low-noise performance is required. The absorption band is relatively narrow, and so wavelength-stabilised laser sources are typically needed. The 1480 nm band has a lower, but broader, absorption cross-section and is generally used for higher power amplifiers. A combination of 980 nm and 1480 nm pumping is generally utilised in amplifiers. Gain and lasing in erbium-doped fibers were first demonstrated in 1986–87 by two groups; one including David N. Payne , R. Mears , I. M. Jauncey and L. Reekie, from the University of Southampton [ 15 ] [ 16 ] and one from AT&T Bell Laboratories, consisting of E. Desurvire, P. Becker, and J. Simpson. [ 17 ] The dual-stage optical amplifier which enabled dense wavelength-division multiplexing (DWDM) was invented by Stephen B. Alexander at Ciena Corporation. [ 18 ] [ 19 ] Thulium-doped fiber amplifiers have been used in the S-band (1450–1490 nm) and Praseodymium-doped amplifiers in the 1300 nm region. However, those regions have not seen any significant commercial use so far, and so those amplifiers have not been the subject of as much development as the EDFA. However, Ytterbium-doped fiber lasers and amplifiers, operating near 1 micrometre wavelength, have many applications in industrial processing of materials, as these devices can be made with extremely high output power (tens of kilowatts).
One part has the structure of a Fabry–Pérot laser diode and the other has a tapered geometry in order to reduce the power density on the output facet. Semiconductor optical amplifiers are typically made from group III-V compound semiconductors such as GaAs /AlGaAs, InP / InGaAs , InP /InGaAsP and InP /InAlGaAs, though any direct-bandgap semiconductor, such as the II-VI compounds, could conceivably be used. Such amplifiers are often used in telecommunication systems in the form of fiber-pigtailed components, operating at signal wavelengths between 850 nm and 1600 nm and generating gains of up to 30 dB. The semiconductor optical amplifier is small and electrically pumped. It can be potentially less expensive than the EDFA and can be integrated with semiconductor lasers, modulators, etc. However, the performance is still not comparable with the EDFA. The SOA has higher noise, lower gain, moderate polarization dependence and high nonlinearity with fast transient time. The main advantage of the SOA is that all four types of nonlinear operations (cross gain modulation, cross phase modulation, wavelength conversion and four wave mixing ) can be conducted. Furthermore, the SOA can be run with a low-power laser. [ 21 ] This behavior originates from the short upper-state lifetime (a nanosecond or less), so that the gain reacts rapidly to changes of pump or signal power; the changes of gain also cause phase changes which can distort the signals. This nonlinearity presents the most severe problem for optical communication applications. However, it provides the possibility for gain in different wavelength regions from the EDFA. "Linear optical amplifiers" using gain-clamping techniques have been developed. High optical nonlinearity makes semiconductor amplifiers attractive for all-optical signal processing such as all-optical switching and wavelength conversion.
There has been much research on semiconductor optical amplifiers as elements for optical signal processing, wavelength conversion, clock recovery, signal demultiplexing, and pattern recognition. A recent addition to the SOA family is the vertical-cavity SOA (VCSOA). These devices are similar in structure to, and share many features with, vertical-cavity surface-emitting lasers ( VCSELs ). The major difference when comparing VCSOAs and VCSELs is the reduced mirror reflectivity used in the amplifier cavity. With VCSOAs, reduced feedback is necessary to prevent the device from reaching lasing threshold. Due to the extremely short cavity length, and correspondingly thin gain medium, these devices exhibit very low single-pass gain (typically on the order of a few percent) and also a very large free spectral range (FSR). The small single-pass gain requires relatively high mirror reflectivity to boost the total signal gain. In addition to boosting the total signal gain, the use of the resonant cavity structure results in a very narrow gain bandwidth; coupled with the large FSR of the optical cavity, this effectively limits operation of the VCSOA to single-channel amplification. Thus, VCSOAs can be seen as amplifying filters. Given their vertical-cavity geometry, VCSOAs are resonant cavity optical amplifiers that operate with the input/output signal entering/exiting normal to the wafer surface. In addition to their small size, the surface-normal operation of VCSOAs leads to a number of advantages, including low power consumption, low noise figure, polarization insensitive gain, and the ability to fabricate high fill factor two-dimensional arrays on a single semiconductor chip. These devices are still in the early stages of research, though promising preamplifier results have been demonstrated. A further extension of VCSOA technology is the demonstration of wavelength-tunable devices.
These MEMS-tunable vertical-cavity SOAs utilize a microelectromechanical systems ( MEMS ) based tuning mechanism for wide and continuous tuning of the peak gain wavelength of the amplifier. [ 22 ] SOAs have a more rapid gain response, which is on the order of 1 to 100 ps. For high output power and a broader wavelength range, tapered amplifiers are used. These amplifiers consist of a lateral single-mode section and a section with a tapered structure, where the laser light is amplified. The tapered structure leads to a reduction of the power density at the output facet. Typical parameters are tabulated in the cited source. [ 23 ] In a Raman amplifier, the signal is intensified by Raman amplification . Unlike the EDFA and SOA, the amplification effect is achieved by a nonlinear interaction between the signal and a pump laser within an optical fiber. There are two types of Raman amplifier: distributed and lumped. A distributed Raman amplifier is one in which the transmission fiber is utilised as the gain medium by multiplexing a pump wavelength with the signal wavelength, while a lumped Raman amplifier utilises a dedicated, shorter length of fiber to provide amplification. In the case of a lumped Raman amplifier, a highly nonlinear fiber with a small core is utilised to increase the interaction between signal and pump wavelengths, and thereby reduce the length of fiber required. The pump light may be coupled into the transmission fiber in the same direction as the signal (co-directional pumping), in the opposite direction (contra-directional pumping) or both. Contra-directional pumping is more common, as the transfer of noise from the pump to the signal is reduced. The pump power required for Raman amplification is higher than that required by the EDFA, with in excess of 500 mW being required to achieve useful levels of gain in a distributed amplifier. Lumped amplifiers, where the pump light can be safely contained to avoid safety implications of high optical powers, may use over 1 W of optical power.
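Because the Raman gain peak in fused silica lies roughly 13.2 THz below the pump frequency, the pump wavelength needed for a given signal band follows directly from that frequency offset. The sketch below is an illustrative calculation added for clarity, not part of the original article:

```python
# Peak Raman gain in fused silica occurs about 13.2 THz below the pump
# frequency, so the pump for a given signal band is found by adding the
# Stokes shift to the signal frequency and converting back to wavelength.
C = 299_792_458.0          # speed of light, m/s
STOKES_SHIFT_HZ = 13.2e12  # peak Raman (Stokes) shift in fused silica

def raman_pump_wavelength_nm(signal_nm):
    signal_hz = C / (signal_nm * 1e-9)
    pump_hz = signal_hz + STOKES_SHIFT_HZ
    return C / pump_hz * 1e9

# Amplifying the 1550 nm C-band calls for a pump near 1450 nm.
print(round(raman_pump_wavelength_nm(1550.0), 1))  # ~1451 nm
```

This is why distributed Raman amplifiers for C-band systems are typically pumped around 1450 nm, and why the gain band can be moved simply by choosing different pump wavelengths.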
The principal advantage of Raman amplification is its ability to provide distributed amplification within the transmission fiber, thereby increasing the length of spans between amplifier and regeneration sites. The amplification bandwidth of Raman amplifiers is defined by the pump wavelengths utilised, and so amplification can be provided over wider, and different, regions than may be possible with other amplifier types which rely on dopants and device design to define the amplification 'window'. Raman amplifiers have some fundamental advantages. First, Raman gain exists in every fiber, which provides a cost-effective means of upgrading from the terminal ends. Second, the gain is nonresonant, which means that gain is available over the entire transparency region of the fiber, ranging from approximately 0.3 to 2 μm. A third advantage of Raman amplifiers is that the gain spectrum can be tailored by adjusting the pump wavelengths. For instance, multiple pump lines can be used to increase the optical bandwidth, and the pump distribution determines the gain flatness. Another advantage of Raman amplification is that it is a relatively broad-band amplifier with a bandwidth > 5 THz, and the gain is reasonably flat over a wide wavelength range. [ 24 ] However, a number of challenges for Raman amplifiers prevented their earlier adoption. First, compared to the EDFAs, Raman amplifiers have relatively poor pumping efficiency at lower signal powers. Although a disadvantage, this lack of pump efficiency also makes gain clamping easier in Raman amplifiers. Second, Raman amplifiers require a longer gain fiber. However, this disadvantage can be mitigated by combining gain and dispersion compensation in a single fiber. A third disadvantage of Raman amplifiers is a fast response time, which gives rise to new sources of noise, as further discussed below. Finally, there are concerns about nonlinear penalties in the amplifier for the WDM signal channels.
[ 24 ] Note: The text of an earlier version of this article was taken from the public domain Federal Standard 1037C . An optical parametric amplifier allows the amplification of a weak signal pulse in a nonlinear medium, such as a noncentrosymmetric crystal (e.g. beta barium borate (BBO)) or even a standard fused-silica optical fiber via the Kerr effect . In contrast to the previously mentioned amplifiers, which are mostly used in telecommunication environments, this type finds its main application in expanding the frequency tunability of ultrafast solid-state lasers (e.g. Ti:sapphire ). By using a noncollinear interaction geometry, optical parametric amplifiers are capable of extremely broad amplification bandwidths. In the 21st century, high-power fiber lasers were adopted as an industrial material processing tool and expanded into other markets, including the medical and scientific markets. One key enhancement enabling penetration into the scientific market was improvement in high-finesse fiber amplifiers, which became able to deliver single-frequency linewidths (<5 kHz) together with excellent beam quality and stable linearly polarized output. Systems meeting these specifications steadily progressed from a few watts of output power initially, to tens of watts and later hundreds of watts. This power increase was achieved with developments in fiber technology, such as the adoption of stimulated Brillouin scattering (SBS) suppression/mitigation techniques within the fiber, and improvements in overall amplifier design, including large mode area (LMA) fibers with a low-aperture core, [ 25 ] micro-structured rod-type fiber, [ 26 ] [ 27 ] helical-core [ 28 ] or chirally-coupled-core fibers, [ 29 ] and tapered double-clad fibers (T-DCF).
[ 30 ] As of 2015, high-finesse, high-power and pulsed fiber amplifiers delivered power levels exceeding those available from commercial solid-state single-frequency sources, together with stable, optimized performance, opening up new scientific applications. [ 31 ] There are several simulation tools that can be used to design optical amplifiers. Popular commercial tools have been developed by Optiwave Systems and VPI Systems.
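As a rough numerical illustration of the gain and noise quantities discussed above, the standard ASE-limited noise-figure expression NF = 2·n_sp·(G − 1)/G + 1/G can be evaluated directly. This sketch is an illustrative addition; the 30 dB gain and the spontaneous-emission factors are hypothetical values chosen only as typical EDFA numbers:

```python
import math

def db_to_linear(db):
    return 10 ** (db / 10)

def linear_to_db(x):
    return 10 * math.log10(x)

# A hypothetical EDFA: 30 dB of gain means a power gain of 1000x.
G = db_to_linear(30.0)
assert round(G) == 1000

# Standard noise-figure expression for an ASE-dominated amplifier:
# NF = 2 * n_sp * (G - 1) / G + 1 / G, where n_sp is the spontaneous
# emission factor (n_sp = 1 for a fully inverted, ideal amplifier).
def noise_figure_db(G, n_sp):
    return linear_to_db(2 * n_sp * (G - 1) / G + 1 / G)

# A fully inverted amplifier approaches the 3 dB quantum limit.
print(round(noise_figure_db(G, n_sp=1.0), 2))  # -> 3.01
# Incomplete inversion (n_sp = 2) pushes the noise figure toward 6 dB,
# consistent with the 6-8 dB range quoted for practical devices.
print(round(noise_figure_db(G, n_sp=2.0), 2))  # -> 6.02
```

The point of the sketch is the high-gain limit: once G is large, the noise figure depends almost entirely on the inversion level (through n_sp), which is why DFA noise performance is tied to pump power and operating point.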
https://en.wikipedia.org/wiki/Optical_amplifier
An optical attenuator , or fiber optic attenuator, is a device used to reduce the power level of an optical signal , either in free space or in an optical fiber . The basic types of optical attenuators are fixed, step-wise variable, and continuously variable. Optical attenuators are commonly used in fiber-optic communications , either to test power level margins by temporarily adding a calibrated amount of signal loss, or installed permanently to properly match transmitter and receiver levels. Sharp bends stress optical fibers and can cause losses. If a received signal is too strong, a temporary fix is to wrap the cable around a pencil until the desired level of attenuation is achieved. [ 1 ] However, such arrangements are unreliable, since the stressed fiber tends to break over time. Generally, multimode systems do not need attenuators, as multimode sources rarely have enough power output to saturate receivers. Single-mode systems, in contrast, especially long-haul DWDM network links, often need fiber optic attenuators to adjust the optical power during transmission. The power reduction is done by such means as absorption, reflection, diffusion, scattering, deflection, diffraction, and dispersion. Optical attenuators usually work by absorbing the light, much as sunglasses absorb extra light energy. They typically have a working wavelength range in which they absorb all light energy equally. They should not reflect the light or scatter the light in an air gap, since that could cause unwanted back reflection in the fiber system. Another type of attenuator utilizes a length of high-loss optical fiber that operates upon its input optical signal power level in such a way that its output signal power level is less than the input level. [ 2 ] Optical attenuators can take a number of different forms and are typically classified as fixed or variable attenuators. They can also be classified as LC, SC, ST, FC, MU, E2000, etc.
according to the different types of connectors. [ 2 ] Fixed optical attenuators used in fiber optic systems may use a variety of principles for their functioning. Preferred attenuators use either doped fibers or mis-aligned splices, since both of these are reliable and inexpensive. Inline-style attenuators are incorporated into patch cables. The alternative build-out style attenuator is a small male-female adapter that can be added onto other cables. Non-preferred attenuators often use gap loss or reflective principles. Such devices can be sensitive to modal distribution , wavelength, contamination, vibration, temperature and damage due to power bursts, and may cause back reflections and signal dispersion. A loopback fiber optic attenuator is designed for testing, engineering and the burn-in stage of boards or other equipment. Loopback attenuators are available in SC/UPC, SC/APC, LC/UPC, LC/APC, MTRJ and MPO versions for single-mode applications; the LC and SC types house 900 μm fiber cable inside a black shell, while the MTRJ and MPO types have no shell. Built-in variable optical attenuators may be either manually or electrically controlled. A manual device is useful for one-time set-up of a system, is a near-equivalent to a fixed attenuator, and may be referred to as an "adjustable attenuator". In contrast, an electrically controlled attenuator can provide adaptive power optimization. Attributes of merit for electrically controlled devices include speed of response and avoiding degradation of the transmitted signal. Dynamic range is usually quite restricted, and power feedback may mean that long-term stability is a relatively minor issue. Speed of response is a particularly major issue in dynamically reconfigurable systems, where a delay of one millionth of a second can result in the loss of large amounts of transmitted data. Typical technologies employed for high-speed response include liquid crystal variable attenuators (LCVA) and lithium niobate devices.
There is a class of built-in attenuators that is technically indistinguishable from test attenuators, except they are packaged for rack mounting, and have no test display. Variable optical test attenuators generally use a variable neutral density filter. Despite relatively high cost, this arrangement has the advantages of being stable, wavelength insensitive, mode insensitive, and offering a large dynamic range. Other schemes such as LCD, variable air gap etc. have been tried over the years, but with limited success. They may be either manually or motor controlled. Motor control gives regular users a distinct productivity advantage, since commonly used test sequences can be run automatically. Attenuator instrument calibration is a major issue. The user typically would like an absolute port-to-port calibration. Also, calibration should usually be at a number of wavelengths and power levels, since the device is not always linear. However, a number of instruments do not in fact offer these basic features, presumably in an attempt to reduce cost. The most accurate variable attenuator instruments have thousands of calibration points, resulting in excellent overall accuracy in use. Test sequences that use variable attenuators can be very time-consuming, so automation is likely to achieve useful benefits. Both bench and handheld-style devices are available that offer such features. This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22.
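Since attenuation is specified in dB while optical power is usually measured in mW or dBm, the basic arithmetic used when sizing a fixed attenuator to match transmitter and receiver levels can be sketched in a few lines of Python. This is a minimal illustration; the function names are invented for this example and are not from any instrument vendor's API.

```python
import math

def attenuated_power_mw(input_mw: float, attenuation_db: float) -> float:
    """Output power after an attenuator with the given insertion loss in dB."""
    return input_mw * 10 ** (-attenuation_db / 10)

def required_attenuation_db(input_dbm: float, target_dbm: float) -> float:
    """Loss needed to bring a transmitter level down to a receiver's target level."""
    return input_dbm - target_dbm

# A 10 dB fixed attenuator reduces a 1 mW (0 dBm) signal to 0.1 mW (-10 dBm).
assert math.isclose(attenuated_power_mw(1.0, 10.0), 0.1)
assert required_attenuation_db(0.0, -10.0) == 10.0
```

The same relations explain why a calibrated attenuator is preferred over bending the fiber: the loss it inserts is a known, stable dB value rather than an uncontrolled stress-induced one.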
https://en.wikipedia.org/wiki/Optical_attenuator
Optical bonding refers to a protective glass that is glued in front of a display to enhance its readability when installed in high-humidity outdoor environments. When a normal display is used in an outdoor environment, several factors affect its readability. The most common one is “fog”, or condensation , which forms on the inner surface of the display's vandal shield. Another factor is the reflection of sunlight, which causes a mirror image on the display. Both phenomena can be addressed by optical bonding. There is a wide variety of adhesives used for optical bonding processes; three of the most commonly used are silicone, epoxy, and polyurethane, each with its own pros and cons. [ 1 ] Optical bonding is the use of an optical-grade adhesive to glue a glass to the top surface of a display. The main goal of optical bonding is to improve the display performance in outdoor environments. This method eliminates the air gap between the cover glass and the display. [ 2 ] Moreover, anti-reflective coating is often used on optical bonding glass. The real problem for display readability in outdoor environments is not the display's brightness but its contrast . The contrast ratio of a display is the ratio of the white level to the black level; in other words, the difference in light intensity between the brightest white pixel and the darkest black pixel. The main purpose of optical bonding is to increase the display's contrast ratio by reducing the amount of reflected ambient light. [ 3 ] Optical bonding is also called full lamination; the same process can be used for touch lamination/integration, bonding a touch panel to an LCD module.
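The contrast effect described above can be illustrated numerically. Reflected ambient light adds to both the white and black levels, compressing the ratio between them; bonding reduces the reflectance and so restores contrast. The sketch below uses illustrative luminance and reflectance figures (assumptions for this example, not measured values).

```python
def effective_contrast(white_nits: float, black_nits: float,
                       ambient_nits: float, reflectance: float) -> float:
    """Perceived contrast ratio when reflected ambient light adds to both levels."""
    reflected = ambient_nits * reflectance
    return (white_nits + reflected) / (black_nits + reflected)

# Same panel in bright ambient light: assume ~4% effective reflectance with an
# air gap versus ~1% after optical bonding (illustrative figures only).
air_gap = effective_contrast(500, 0.5, 10000, 0.04)
bonded = effective_contrast(500, 0.5, 10000, 0.01)
assert bonded > air_gap
```

Under these assumed numbers the bonded display's effective contrast is roughly double that of the air-gapped one, even though the panel itself is unchanged.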
https://en.wikipedia.org/wiki/Optical_bonding
Optical brighteners , optical brightening agents ( OBAs ), fluorescent brightening agents ( FBAs ), or fluorescent whitening agents ( FWAs ), are chemical compounds that absorb light in the ultraviolet and violet region (usually 340-370 nm) of the electromagnetic spectrum , and re-emit light in the blue region (typically 420-470 nm) through the phenomenon of fluorescence . These additives are often used to enhance the appearance of color of fabric and paper , causing a "whitening" effect; they make intrinsically yellow/orange materials look less so, by compensating for the deficit in blue and purple light reflected by the material with the blue and purple optical emission of the fluorophore. [ 1 ] The most common classes of compounds with this property are the stilbenes , e.g., 4,4′-diamino-2,2′-stilbenedisulfonic acid . Older, non-commercial fluorescent compounds include umbelliferone , which absorbs in the UV portion of the spectrum and re-emits it in the blue portion of the visible spectrum. A white surface treated with an optical brightener can emit more visible light than that which shines on it, making it appear brighter. The blue light emitted by the brightener compensates for the diminishing blue of the treated material and changes the hue away from yellow or brown and toward white. [ 2 ] Approximately 400 brightener types are listed in the international Colour Index database, [ 4 ] but fewer than 90 are produced commercially, and only a handful are commercially important. The Colour Index Generic Names and Constitution Numbers can be assigned to a specific substance; however, some are duplicated, since manufacturers apply for the index number when they produce it. The global OBA production for paper, textiles, and detergents is dominated by just a few di- and tetra-sulfonated triazole-stilbenes and a di-sulfonated stilbene-biphenyl derivative. 
The stilbene derivatives are subject to fading upon prolonged exposure to UV, due to the formation of optically inactive cis-stilbenes. They are also degraded by oxygen in air, like most dye colorants. All brighteners have extended conjugation and/or aromaticity, allowing for electron movement. Some non-stilbene brighteners are used in more permanent applications such as whitening synthetic fiber. Brighteners can be "boosted" by the addition of certain polyols , such as high molecular weight polyethylene glycol or polyvinyl alcohol . These additives increase the visible blue light emissions significantly. Brighteners can also be "quenched". Excess brightener will often cause a greening effect as emissions start to show above the blue region in the visible spectrum. Brighteners are commonly added to laundry detergents to make the clothes appear cleaner. Normally cleaned laundry appears yellowish, which consumers do not like. [ 2 ] Optical brighteners have replaced bluing which was formerly used to produce the same effect. Brighteners are used in many papers, especially high brightness papers, resulting in their strongly fluorescent appearance under UV illumination. Paper brightness is typically measured at 457 nm, well within the fluorescent activity range of brighteners. [ 5 ] Paper used for banknotes does not contain optical brighteners, so a common method for detecting counterfeit notes is to check for fluorescence. Optical brighteners have also found use in cosmetics . One application is to formulas for washing and conditioning grey or blonde hair, where the brightener can not only increase the luminance and sparkle of the hair, but can also correct dull, yellowish discoloration without darkening the hair. Some advanced face and eye powders contain optical brightener microspheres that brighten shadowed or dark areas of the skin, such as "tired eyes". 
End uses of optical brighteners include: From around 2002 to 2012, chemical brighteners were used by many Chinese farmers to enhance the appearance of their white mushrooms. This illegal use was mostly eliminated by the Chinese Ministry of Agriculture. [ 6 ]
https://en.wikipedia.org/wiki/Optical_brightener
Optical character recognition or optical character reader ( OCR ) is the electronic or mechanical conversion of images of typed, handwritten or printed text into machine-encoded text, whether from a scanned document, a photo of a document, a scene photo (for example the text on signs and billboards in a landscape photo) or from subtitle text superimposed on an image (for example: from a television broadcast). [ 1 ] Widely used as a form of data entry from printed paper data records – whether passport documents, invoices, bank statements , computerized receipts, business cards, mail, printed data, or any suitable documentation – it is a common method of digitizing printed texts so that they can be electronically edited, searched, stored more compactly, displayed online, and used in machine processes such as cognitive computing , machine translation , (extracted) text-to-speech , key data and text mining . OCR is a field of research in pattern recognition , artificial intelligence and computer vision . Early versions needed to be trained with images of each character, and worked on one font at a time. Advanced systems capable of producing a high degree of accuracy for most fonts are now common, with support for a variety of image file format inputs. [ 2 ] Some systems are capable of reproducing formatted output that closely approximates the original page including images, columns, and other non-textual components. Early optical character recognition may be traced to technologies involving telegraphy and creating reading devices for the blind. [ 3 ] In 1914, Emanuel Goldberg developed a machine that read characters and converted them into standard telegraph code. [ 4 ] Concurrently, Edmund Fournier d'Albe developed the Optophone , a handheld scanner that when moved across a printed page, produced tones that corresponded to specific letters or characters. 
[ 5 ] In the late 1920s and into the 1930s, Emanuel Goldberg developed what he called a "Statistical Machine" for searching microfilm archives using an optical code recognition system. In 1931, he was granted US Patent number 1,838,389 for the invention. The patent was acquired by IBM . In 1974, Ray Kurzweil started the company Kurzweil Computer Products, Inc. and continued development of omni- font OCR, which could recognize text printed in virtually any font. (Kurzweil is often credited with inventing omni-font OCR, but it was in use by companies, including CompuScan, in the late 1960s and 1970s. [ 3 ] [ 6 ] ) Kurzweil used the technology to create a reading machine for blind people to have a computer read text to them out loud. The device included a CCD -type flatbed scanner and a text-to-speech synthesizer. On January 13, 1976, the finished product was unveiled during a widely reported news conference headed by Kurzweil and the leaders of the National Federation of the Blind . [ citation needed ] In 1978, Kurzweil Computer Products began selling a commercial version of the optical character recognition computer program. LexisNexis was one of the first customers, and bought the program to upload legal paper and news documents onto its nascent online databases. Two years later, Kurzweil sold his company to Xerox , which eventually spun it off as Scansoft , which merged with Nuance Communications . In the 2000s, OCR was made available online as a service (WebOCR), in a cloud computing environment, and in mobile applications like real-time translation of foreign-language signs on a smartphone . With the advent of smartphones and smartglasses , OCR can be used in internet connected mobile device applications that extract text captured using the device's camera. These devices that do not have built-in OCR functionality will typically use an OCR API to extract the text from the image file captured by the device. 
[ 7 ] [ 8 ] The OCR API returns the extracted text, along with information about the location of the detected text in the original image, back to the device app for further processing (such as text-to-speech) or display. Various commercial and open source OCR systems are available for most common writing systems , including Latin, Cyrillic, Arabic, Hebrew, Indic, Bengali (Bangla), Devanagari, Tamil, Chinese, Japanese, and Korean characters. OCR engines have been developed into software applications specializing in various subjects such as receipts, invoices, checks, and legal billing documents. OCR is generally an offline process, which analyses a static document, though there are cloud-based services that provide an online OCR API. Handwriting movement analysis can be used as input to handwriting recognition . [ 14 ] Instead of merely using the shapes of glyphs and words, this technique is able to capture motion, such as the order in which segments are drawn, the direction, and the pattern of putting the pen down and lifting it. This additional information can make the process more accurate. This technology is also known as "online character recognition", "dynamic character recognition", "real-time character recognition", and "intelligent character recognition". OCR software often pre-processes images to improve the chances of successful recognition. [ 15 ] Segmentation of fixed-pitch fonts is accomplished relatively simply by aligning the image to a uniform grid based on where vertical grid lines will least often intersect black areas. For proportional fonts , more sophisticated techniques are needed because whitespace between letters can sometimes be greater than that between words, and vertical lines can intersect more than one character. [ 22 ] There are two basic types of core OCR algorithm, which may produce a ranked list of candidate characters. 
[ 23 ] Software such as Cuneiform and Tesseract use a two-pass approach to character recognition. The second pass is known as adaptive recognition and uses the letter shapes recognized with high confidence on the first pass to better recognize the remaining letters on the second pass. This is advantageous for unusual fonts or low-quality scans where the font is distorted (e.g. blurred or faded). [ 22 ] As of December 2016 [update] , modern OCR software includes Google Docs OCR, ABBYY FineReader , and Transym. [ 26 ] [ needs update ] Others like OCRopus and Tesseract use neural networks which are trained to recognize whole lines of text instead of focusing on single characters. A technique known as iterative OCR automatically crops a document into sections based on the page layout. OCR is then performed on each section individually using variable character confidence level thresholds to maximize page-level OCR accuracy. A patent from the United States Patent Office has been issued for this method. [ 27 ] The OCR result can be stored in the standardized ALTO format, a dedicated XML schema maintained by the United States Library of Congress . Other common formats include hOCR and PAGE XML . For a list of optical character recognition software, see Comparison of optical character recognition software . OCR accuracy can be increased if the output is constrained by a lexicon – a list of words that are allowed to occur in a document. [ 15 ] This might be, for example, all the words in the English language, or a more technical lexicon for a specific field. This technique can be problematic if the document contains words not in the lexicon, like proper nouns . Tesseract uses its dictionary to influence the character segmentation step, for improved accuracy. 
[ 22 ] The output stream may be a plain text stream or file of characters, but more sophisticated OCR systems can preserve the original layout of the page and produce, for example, an annotated PDF that includes both the original image of the page and a searchable textual representation. Near-neighbor analysis can make use of co-occurrence frequencies to correct errors, by noting that certain words are often seen together. [ 28 ] For example, "Washington, D.C." is generally far more common in English than "Washington DOC". Knowledge of the grammar of the language being scanned can also help determine if a word is likely to be a verb or a noun, for example, allowing greater accuracy. The Levenshtein Distance algorithm has also been used in OCR post-processing to further optimize results from an OCR API. [ 29 ] In recent years, [ when? ] the major OCR technology providers began to tweak OCR systems to deal more efficiently with specific types of input. Beyond an application-specific lexicon, better performance may be had by taking into account business rules, standard expression, [ clarification needed ] or rich information contained in color images. This strategy is called "Application-Oriented OCR" or "Customized OCR", and has been applied to OCR of license plates , invoices , screenshots , ID cards , driver's licenses , and automobile manufacturing . The New York Times has adapted the OCR technology into a proprietary tool they call Document Helper , which enables their interactive news team to accelerate the processing of documents that need to be reviewed. They note that it enables them to process what amounts to as many as 5,400 pages per hour in preparation for reporters to review the contents. [ 30 ] There are several techniques for solving the problem of character recognition by means other than improved OCR algorithms. 
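Levenshtein-based post-correction of the kind mentioned above can be sketched in a few lines of Python. The tiny lexicon and the `correct` helper are invented for illustration; a real system would use a much larger dictionary and confidence thresholds.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def correct(word: str, lexicon: list[str]) -> str:
    """Replace an OCR token with the closest lexicon entry by edit distance."""
    return min(lexicon, key=lambda w: levenshtein(word, w))

lexicon = ["Washington", "invoice", "character"]
assert correct("VVashington", lexicon) == "Washington"
```

A production post-processor would also keep the original token when the best candidate is still far away, to avoid "correcting" proper nouns that are simply absent from the lexicon.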
Special fonts like OCR-A , OCR-B , or MICR fonts, with precisely specified sizing, spacing, and distinctive character shapes, allow a higher accuracy rate during transcription in bank check processing. Several prominent OCR engines were designed to capture text in popular fonts such as Arial or Times New Roman, and may be unable to capture text in specialized fonts that are very different from popularly used fonts. As Google Tesseract can be trained to recognize new fonts, it can recognize OCR-A, OCR-B and MICR fonts. [ 31 ] Comb fields are pre-printed boxes that encourage humans to write more legibly – one glyph per box. [ 28 ] These are often printed in a dropout color which can be easily removed by the OCR system. [ 28 ] Palm OS used a special set of glyphs, known as Graffiti , which are similar to printed English characters but simplified or modified for easier recognition on the platform's computationally limited hardware. Users would need to learn how to write these special glyphs. Zone-based OCR restricts the image to a specific part of a document. This is often referred to as Template OCR . Crowdsourcing humans to perform character recognition can process images nearly as quickly as computer-driven OCR, but with higher accuracy than is obtained via computers. Practical systems include the Amazon Mechanical Turk and reCAPTCHA . The National Library of Finland has developed an online interface for users to correct OCRed texts in the standardized ALTO format. [ 32 ] Crowdsourcing has also been used not to perform character recognition directly but to invite software developers to develop image processing algorithms, for example, through the use of rank-order tournaments . [ 33 ] Commissioned by the U.S. 
Department of Energy (DOE), the Information Science Research Institute (ISRI) had the mission to foster the improvement of automated technologies for understanding machine printed documents, and it conducted the most authoritative of the Annual Tests of OCR Accuracy from 1992 to 1996. [ 35 ] Recognition of typewritten, Latin script text is still not 100% accurate even where clear imaging is available. One study based on recognition of 19th- and early 20th-century newspaper pages concluded that character-by-character OCR accuracy for commercial OCR software varied from 81% to 99%; [ 36 ] total accuracy can be achieved by human review or Data Dictionary Authentication. Other areas – including recognition of hand printing, cursive handwriting, and printed text in other scripts (especially those East Asian language characters which have many strokes for a single character) – are still the subject of active research. The MNIST database is commonly used for testing systems' ability to recognize handwritten digits. Accuracy rates can be measured in several ways, and how they are measured can greatly affect the reported accuracy rate. For example, if word context (a lexicon of words) is not used to correct software finding non-existent words, a character error rate of 1% (99% accuracy) may result in an error rate of 5% or worse if the measurement is based on whether each whole word was recognized with no incorrect letters. [ 37 ] Using a large enough dataset is important in neural-network-based handwriting recognition solutions. On the other hand, producing natural datasets is very complicated and time-consuming. [ 38 ] An example of the difficulties inherent in digitizing old text is the inability of OCR to differentiate between the " long s " and "f" characters. [ 39 ] [ 34 ] Web-based OCR systems for recognizing hand-printed text on the fly have become well known as commercial products in recent years [ when? ] (see Tablet PC history ). 
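The gap between character-level and word-level accuracy can be demonstrated with a toy example. The synthetic data below is deliberately chosen so that a 1% character error rate corresponds to a 5% word error rate, matching the illustration in the text; it is not real OCR output.

```python
# 100 five-letter reference words; in the hypothesis, five of them each
# contain a single wrong character ("cleam" instead of "clean").
ref = ["clean"] * 100
hyp = ["clean"] * 95 + ["cleam"] * 5

total_chars = sum(len(w) for w in ref)                      # 500 characters
char_errors = sum(r != h for rw, hw in zip(ref, hyp)
                  for r, h in zip(rw, hw))                  # 5 wrong characters
word_errors = sum(rw != hw for rw, hw in zip(ref, hyp))     # 5 wrong words

cer = char_errors / total_chars   # 0.01 -> 99% character accuracy
wer = word_errors / len(ref)      # 0.05 -> only 95% word accuracy
assert abs(cer - 0.01) < 1e-12 and abs(wer - 0.05) < 1e-12
```

Since each error lands in a different word, one misrecognized character per word is enough to fail the whole word, which is why word-level accuracy is always the stricter of the two measures.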
Accuracy rates of 80% to 90% on neat, clean hand-printed characters can be achieved by pen computing software, but that accuracy rate still translates to dozens of errors per page, making the technology useful only in very limited applications. [ citation needed ] Recognition of cursive text is an active area of research, with recognition rates even lower than that of hand-printed text . Higher rates of recognition of general cursive script will likely not be possible without the use of contextual or grammatical information. For example, recognizing entire words from a dictionary is easier than trying to parse individual characters from script. Reading the Amount line of a check (which is always a written-out number) is an example where using a smaller dictionary can increase recognition rates greatly. The shapes of individual cursive characters themselves simply do not contain enough information to accurately (greater than 98%) recognize all handwritten cursive script. [ citation needed ] Most programs allow users to set "confidence rates". This means that if the software does not achieve their desired level of accuracy, a user can be notified for manual review. An error introduced by OCR scanning is sometimes termed a scanno (by analogy with the term typo ). [ 40 ] [ 41 ] Characters to support OCR were added to the Unicode Standard in June 1993, with the release of version 1.1. Some of these characters are mapped from fonts specific to MICR , OCR-A or OCR-B .
https://en.wikipedia.org/wiki/Optical_character_recognition
An optical comparator (often called just a comparator in context) or profile projector is a device that applies the principles of optics to the inspection of manufactured parts. In a comparator, the magnified silhouette of a part is projected upon the screen, and the dimensions and geometry of the part are measured against prescribed limits . It is a useful item in a small parts machine shop or production line for the quality control inspection team. The measuring happens in any of several ways. The simplest way is that graduations on the screen, being superimposed over the silhouette, allow the viewer to measure, as if a clear ruler were laid over the image. Another way is that various points on the silhouette are lined up with the reticle at the centerpoint of the screen, one after another, by moving the stage on which the part sits, and a digital read out reports how far the stage moved to reach those points. Finally, the most technologically advanced methods involve software that analyzes the image and reports measurements. The first two methods are the most common; the third is newer and not as widespread, but its adoption is ongoing in the digital era. The first commercial comparator was developed by James Hartness and Russell W. Porter . [ 2 ] Hartness' long-continuing work as the Chairman of the U.S.'s National Screw-Thread Commission led him to apply his familiarity with optics (from his avocations of astronomy and telescope -building) to the problem of screw thread inspection. The Hartness Screw-Thread Comparator was for many years a profitable product for the Jones and Lamson Machine Company, of which he was president. In subsequent decades optical comparators have been made by many companies and have been applied to the inspection of many kinds of parts. Today they may be found in many machine shops. 
[ 3 ] The idea of mixing optics and measurement, and the use of the term comparator for metrological equipment, had existed in other forms prior to Hartness's work; but they had remained in realms of pure science (such as telescopy and microscopy ) and highly specialized applied science (such as comparing master measuring standards). Hartness's comparator, intended for the routine inspection of machined parts, was a natural next step in the era during which applied science became widely integrated into industrial production. The profile projector is widely used for measuring complex-shape stampings , gears, cams, and threads, and for comparing the measured contour against a model. The profile projector is hence widely used in precision machinery manufacturing, including aviation , the aerospace industry , watches and clocks, electronics , the instrumentation industry , research institutes and detection metering stations at all levels, etc. The projector magnifies the profile of the specimen, and displays this on the built-in projection screen. [ 4 ] On this screen there is typically a grid that can be rotated 360 degrees so the X-Y axis of the screen can be aligned with a straight edge of the machined part to examine or measure. This projection screen displays the profile of the specimen, magnified for better ease of calculating linear measurements. An edge of the specimen to examine may be lined up with the grid on the screen. From there, simple measurements may be taken for distances to other points. Because this is done on a magnified profile of the specimen, measuring on the projection screen can be simpler and reduce errors. The typical method for lighting is diascopic illumination, which is lighting from behind. This type of lighting is also called transmitted illumination when the specimen is translucent and light can pass through it. If the specimen is opaque, then the light will not go through it, but will form a profile of the specimen. 
Measuring of the sample can be done on the projection screen. A profile projector may also have episcopic illumination (light shining from above), which is useful in displaying bores or internal areas that may need to be measured. For the simplest type of profile projector, the part's inverted image, also known as its mirror image, will be displayed on the screen. To facilitate measurement, sometimes an image-erecting system is deliberately added, changing the inverted image into a positive one; this increases the cost due to the additional optics used, while somewhat reducing measurement accuracy. As for selection of screen size, one should carefully consider whether the entire part must be imaged on the screen. If the inspection can readily be done at a modest scale, there is no need for a larger screen. Projector manufacturers offer multiple screen sizes to meet various needs. The projection lens magnification is fixed, but different views of measured pieces often require different magnifications. The usual projector factory configuration is with a single lens, so additional lenses may be purchased and used as needed. The work table is used to place and hold the measured piece; its size, X and Y travel, and carrying capacity are critical. For the convenience of holding the workpiece, a precision rotary table, a V-block part holder and other accessories are generally added. The projector must also have a flexible and stable focusing mechanism and a large working distance (the distance from the top surface of the workpiece to the lens). Finally, the user selects appropriate data-processing modes: all modern optical measuring projectors on the market are digitized, so their data-processing capabilities are also a relevant selection criterion.
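The basic screen-to-part arithmetic behind comparator measurement can be sketched as follows. The helper names are hypothetical (instruments with digital readouts do this conversion internally); the point is simply that a screen measurement divided by the lens magnification gives the true dimension, which is then checked against prescribed limits.

```python
def true_dimension(screen_mm: float, magnification: float) -> float:
    """Convert a measurement taken on the projection screen back to part scale."""
    return screen_mm / magnification

def within_limits(measured_mm: float, nominal_mm: float, tol_mm: float) -> bool:
    """Check a measured dimension against prescribed limits (nominal +/- tolerance)."""
    return abs(measured_mm - nominal_mm) <= tol_mm

# A feature spanning 25 mm on screen under a 10x lens is 2.5 mm on the part.
assert true_dimension(25.0, 10.0) == 2.5
assert within_limits(true_dimension(25.0, 10.0), 2.5, 0.05)
```

Magnifying before measuring is what makes the comparator attractive: a 0.1 mm reading error on the screen corresponds to only 0.01 mm at part scale under a 10x lens.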
https://en.wikipedia.org/wiki/Optical_comparator
Optical contact bonding is a glueless process whereby two closely conformal surfaces are joined, being held purely by intermolecular forces . Isaac Newton has been credited with the first description of conformal interaction, observed through the interference phenomenon known as Newton's rings , though it was S. D. Poisson in 1823 who first described the optical characteristics of two identical surfaces in contact. It was not until the 19th century that objects were made with such precision that the bonding phenomenon was observed. The bond was referred to as "ansprengen" in German. By 1900, optical contact bonding was being employed in the construction of optical prisms, and the following century saw further research into the phenomenon at the same time that ideas of inter-atom interactions were first being studied. [ 1 ] Intermolecular forces such as Van der Waals forces , hydrogen bonds , and dipole–dipole interactions are typically not sufficiently strong to hold two apparently conformal rigid bodies together, since the forces drop off rapidly with distance, [ 2 ] and the actual area in contact between the two bodies is small due to surface roughness and minor imperfections. However, if the bodies are conformal to an accuracy of better than 10 angstroms (1 nanometer), then a sufficient surface area is in close enough contact for the intermolecular interactions to have an observable macroscopic effect—that is, the two objects stick together. [ 3 ] Such a condition requires a high degree of accuracy and surface smoothness, which is typically found in optical components, such as prisms. In addition to both surfaces being practically conformal (in practice often completely flat), the surfaces must also be extremely clean and free from any small contamination that would prevent or weaken the bond—including grease films and specks of dust. 
For bonding to occur, the surfaces need only to be brought together; the intermolecular forces draw the bodies into the lowest energy conformation, and no pressure needs to be applied. Since the method requires no binder , balsam or glue, the physical properties of the bound object are the same as those of the objects joined. Typically, glues and binders are more heat-sensitive or have undesirable properties compared to the actual bodies being joined. The use of optical contact bonding allows the production of a final product with properties as good as those of the bulk solid. [ 4 ] This can include temperature and chemical resistance, spectral absorption properties and reduced contamination from bonding materials. Originally the process was confined to optical equipment such as prisms —the earliest examples being made around 1900. Later the range of use was expanded to microelectronics and other miniaturised devices. [ 5 ]
https://en.wikipedia.org/wiki/Optical_contact_bonding
An optical cross-connect ( OXC ) is a device used by telecommunications carriers to switch high-speed optical signals in a fiber optic network, such as an optical mesh network . In the 1980s, when transmission speeds supported by optical fibers increased from 45 Mbit/s to 2.5 Gbit/s , carrier networks developed and introduced digital cross connects to restore 64 kbit/s , 1.5 Mbit/s , and 45 Mbit/s traffic. [ 1 ] There are several ways to realize an OXC. An optical add-drop multiplexer (OADM) can be viewed as a special case of an OXC where the node degree is two.
https://en.wikipedia.org/wiki/Optical_cross-connect
In physics , optical depth or optical thickness is the natural logarithm of the ratio of incident to transmitted radiant power through a material. Thus, the larger the optical depth, the smaller the amount of transmitted radiant power through the material. Spectral optical depth or spectral optical thickness is the natural logarithm of the ratio of incident to transmitted spectral radiant power through a material. [ 1 ] Optical depth is dimensionless , and in particular is not a length, though it is a monotonically increasing function of optical path length , and approaches zero as the path length approaches zero. The use of the term "optical density" for optical depth is discouraged. [ 1 ] In chemistry , a closely related quantity called " absorbance " or "decadic absorbance" is used instead of optical depth: the common logarithm of the ratio of incident to transmitted radiant power through a material. It is the optical depth divided by ln(10), because of the different logarithm bases used. 
The optical depth of a material, denoted $\tau$, is given by: [ 2 ]
$$\tau = \ln\!\left(\frac{\Phi_\mathrm{e}^\mathrm{i}}{\Phi_\mathrm{e}^\mathrm{t}}\right) = -\ln T$$
where $\Phi_\mathrm{e}^\mathrm{i}$ is the radiant power received by the material, $\Phi_\mathrm{e}^\mathrm{t}$ is the radiant power transmitted by the material, and $T$ is the transmittance of the material. The absorbance $A$ is related to optical depth by $\tau = A \ln 10$. The spectral optical depth in frequency (denoted $\tau_\nu$) or in wavelength ($\tau_\lambda$) of a material is given by: [ 1 ]
$$\tau_\nu = \ln\!\left(\frac{\Phi_{\mathrm{e},\nu}^\mathrm{i}}{\Phi_{\mathrm{e},\nu}^\mathrm{t}}\right) = -\ln T_\nu, \qquad \tau_\lambda = \ln\!\left(\frac{\Phi_{\mathrm{e},\lambda}^\mathrm{i}}{\Phi_{\mathrm{e},\lambda}^\mathrm{t}}\right) = -\ln T_\lambda,$$
where the superscripts i and t again denote incident and transmitted spectral radiant power, and $T_\nu$, $T_\lambda$ are the spectral transmittances in frequency and in wavelength. Spectral absorbance is related to spectral optical depth by $\tau_\nu = A_\nu \ln 10$ and $\tau_\lambda = A_\lambda \ln 10$, where $A_\nu$ and $A_\lambda$ are the spectral absorbances. Optical depth measures the attenuation of the transmitted radiant power in a material. Attenuation can be caused by absorption, but also reflection, scattering, and other physical processes.
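The conversions above between optical depth, transmittance, and decadic absorbance can be sketched in a few lines of Python (the function names here are illustrative, not from any standard library):

```python
import math

def optical_depth(incident_power, transmitted_power):
    """Optical depth: tau = ln(Phi_i / Phi_t) = -ln(T)."""
    return math.log(incident_power / transmitted_power)

def transmittance(tau):
    """Transmittance from optical depth: T = e^(-tau)."""
    return math.exp(-tau)

def optical_depth_from_absorbance(absorbance):
    """Decadic absorbance to optical depth: tau = A * ln(10)."""
    return absorbance * math.log(10)

# A filter passing 10% of the incident power has tau = ln(10) ~ 2.303,
# which corresponds to a decadic absorbance of exactly 1.
tau = optical_depth(1.0, 0.1)
```

This makes the base change concrete: a transmittance of 10% is one decade of attenuation, i.e. absorbance 1 but optical depth ln 10.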
Optical depth of a material is approximately equal to its attenuation when both the absorbance is much less than 1 and the emittance of that material (not to be confused with radiant exitance or emissivity ) is much less than the optical depth:
$$\Phi_\mathrm{e}^\mathrm{t} + \Phi_\mathrm{e}^\mathrm{att} = \Phi_\mathrm{e}^\mathrm{i} + \Phi_\mathrm{e}^\mathrm{e}, \qquad T + ATT = 1 + E,$$
where $T$ is the transmittance, $ATT$ the attenuance, and $E$ the emittance, and according to the Beer–Lambert law , $T = e^{-\tau}$, so:
$$ATT = 1 - e^{-\tau} + E \approx \tau + E \approx \tau, \quad \text{if } \tau \ll 1 \text{ and } E \ll \tau.$$
Optical depth of a material is also related to its attenuation coefficient by:
$$\tau = \int_0^l \alpha(z)\,\mathrm{d}z,$$
where $l$ is the thickness of the material and $\alpha(z)$ is the attenuation coefficient at depth $z$; if $\alpha(z)$ is uniform along the path, the attenuation is said to be a linear attenuation and the relation becomes $\tau = \alpha l$. Sometimes the relation is given using the attenuation cross section of the material, that is, its attenuation coefficient divided by its number density :
$$\tau = \int_0^l \sigma n(z)\,\mathrm{d}z,$$
where $\sigma$ is the attenuation cross section and $n(z)$ is the number density at depth $z$; if $n$ is uniform along the path, i.e., $n(z) \equiv N$, the relation becomes $\tau = \sigma N l$. In atomic physics , the spectral optical depth of a cloud of atoms can be calculated from the quantum-mechanical properties of the atoms.
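The integral and cross-section forms of the optical depth can be illustrated numerically; the midpoint-rule integrator and the sample values for sigma, N, and l below are hypothetical, chosen only so both forms give the same tau:

```python
import math

def optical_depth_from_alpha(alpha, length, steps=10_000):
    """tau = integral of alpha(z) dz from 0 to length, by the midpoint rule.

    `alpha` is a callable giving the attenuation coefficient at depth z.
    """
    dz = length / steps
    return sum(alpha((i + 0.5) * dz) for i in range(steps)) * dz

# Uniform attenuation coefficient: tau reduces to alpha * l.
tau_uniform = optical_depth_from_alpha(lambda z: 0.5, 4.0)  # alpha * l = 2.0

# Cross-section form with uniform number density: tau = sigma * N * l
# (sigma, N, l are illustrative values, not measured data).
sigma, N, l = 1.0e-21, 2.5e20, 8.0
tau_cross_section = sigma * N * l
```

For a non-uniform medium one simply passes a depth-dependent `alpha` callable; the uniform case recovers the linear relation tau = alpha * l exactly.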
It is given by
$$\tau_\nu = \frac{d^2 n \nu}{2 \mathrm{c} \hbar \varepsilon_0 \sigma \gamma}.$$
In atmospheric sciences , one often refers to the optical depth of the atmosphere as corresponding to the vertical path from Earth's surface to outer space; at other times the optical path is from the observer's altitude to outer space. The optical depth for a slant path is $\tau = m\tau'$, where $\tau'$ refers to a vertical path, $m$ is called the relative airmass , and for a plane-parallel atmosphere it is determined as $m = \sec\theta$, where $\theta$ is the zenith angle corresponding to the given path. Therefore,
$$T = e^{-\tau} = e^{-m\tau'}.$$
The optical depth of the atmosphere can be divided into several components, ascribed to Rayleigh scattering , aerosols , and gaseous absorption . The optical depth of the atmosphere can be measured with a Sun photometer . The optical depth as a function of height within the atmosphere is given by [ 3 ]
$$\tau(z) = k_\text{a} w_1 \rho_0 H e^{-z/H}$$
and it follows that the total atmospheric optical depth is given by [ 3 ]
$$\tau(0) = k_\text{a} w_1 \rho_0 H.$$
In both equations, $k_\text{a}$ is the absorption coefficient, $w_1$ is the mixing ratio of the absorber, $\rho_0$ is the air density at sea level, $H$ is the scale height of the atmosphere, and $z$ is the height in question. The optical depth of a plane-parallel cloud layer is given by [ 3 ]
$$\tau = Q_\text{e} \left[\frac{9\pi L^2 H N}{16\rho_l^2}\right]^{1/3}$$
where $Q_\text{e}$ is the extinction efficiency, $L$ is the liquid water path, $H$ is the geometric thickness of the cloud, $N$ is the droplet number concentration, and $\rho_l$ is the density of liquid water. So, with a fixed depth and total liquid water path, $\tau \propto N^{1/3}$. [ 3 ] In astronomy , the photosphere of a star is defined as the surface where its optical depth is 2/3. This means that each photon emitted at the photosphere suffers an average of less than one scattering before it reaches the observer. At the temperature at optical depth 2/3, the energy emitted by the star (the original derivation is for the Sun) matches the observed total energy emitted.
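The slant-path relation can be demonstrated directly; a minimal sketch, assuming the plane-parallel sec(theta) airmass stated above:

```python
import math

def airmass(zenith_angle_deg):
    """Plane-parallel relative airmass: m = sec(theta)."""
    return 1.0 / math.cos(math.radians(zenith_angle_deg))

def slant_transmittance(tau_vertical, zenith_angle_deg):
    """T = exp(-m * tau') for a slant path through the atmosphere."""
    return math.exp(-airmass(zenith_angle_deg) * tau_vertical)

# With a vertical optical depth of 0.3 and the Sun 60 degrees from zenith,
# the airmass is sec(60 deg) = 2, so the slant optical depth doubles to 0.6.
m = airmass(60.0)
T = slant_transmittance(0.3, 60.0)
```

Note that the sec(theta) form diverges near the horizon; real airmass tables correct for Earth's curvature at large zenith angles.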
Note that the optical depth of a given medium will be different for different colors ( wavelengths ) of light. For planetary rings , the optical depth is the (negative logarithm of the) proportion of light blocked by the ring when it lies between the source and the observer. This is usually obtained by observation of stellar occultations.
https://en.wikipedia.org/wiki/Optical_depth
Optical depth in astrophysics refers to a specific level of transparency . Optical depth and actual depth, $\tau$ and $z$ respectively, can vary widely depending on the absorptivity of the astrophysical environment. Indeed, $\tau$ shows the relationship between these two quantities and can lead to a greater understanding of the structure inside a star . Optical depth is a measure of the extinction coefficient or absorptivity up to a specific 'depth' of a star's makeup. The assumption here is that either the extinction coefficient $\alpha$ or the column number density $N$ is known. These can generally be calculated from other equations if a fair amount of information is known about the chemical makeup of the star. From the definition, it is also clear that large optical depths correspond to a higher rate of obscuration. Optical depth can therefore be thought of as the opacity of a medium. The extinction coefficient $\alpha$ can be calculated using the transfer equation . In most astrophysical problems, this is exceptionally difficult to solve, since solving the corresponding equations requires the incident radiation as well as the radiation leaving the star. These values are usually theoretical. In some cases the Beer–Lambert law can be useful in finding $\alpha$:
$$\alpha = \frac{4\pi\kappa}{\lambda_0},$$
where $\kappa$ is the refractive index, and $\lambda_0$ is the wavelength of the incident light before being absorbed or scattered. [ 2 ] The Beer–Lambert law is only appropriate when the absorption occurs at a specific wavelength, $\lambda_0$. For a gray atmosphere, for instance, it is most appropriate to use the Eddington approximation. Therefore, $\tau$ is simply a constant that depends on the physical distance from the outside of a star.
To find $\tau$ at a particular depth $z'$, the above equation may be used with $\alpha$ and integrated from $z = 0$ to $z = z'$. Since it is difficult to define where the interior of a star ends and the photosphere begins, astrophysicists usually rely on the Eddington approximation to derive the formal definition of $\tau = 2/3$. Devised by Sir Arthur Eddington , the approximation takes into account the fact that H − produces a "gray" absorption in the atmosphere of a star, that is, it is independent of any specific wavelength and absorbs along the entire electromagnetic spectrum. In that case,
$$T^4 = \frac{3}{4} T_e^4 \left(\tau + \frac{2}{3}\right),$$
where $T_e$ is the effective temperature at that depth and $\tau$ is the optical depth. This illustrates not only that the observable temperature and actual temperature at a certain physical depth of a star vary, but that the optical depth plays a crucial role in understanding the stellar structure. It also serves to demonstrate that the depth of the photosphere of a star is highly dependent upon the absorptivity of its environment. The photosphere extends down to a point where $\tau$ is about 2/3, which corresponds to a state where a photon would experience, in general, less than one scattering before leaving the star. The above equation can be rewritten in terms of $\alpha$ as
$$\tau(z') = \int_0^{z'} \alpha\,\mathrm{d}z,$$
which is useful, for example, when $\tau$ is not known but $\alpha$ is.
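The Eddington gray-atmosphere relation can be evaluated numerically; the sketch below assumes the standard form T^4 = (3/4) T_eff^4 (tau + 2/3) and uses the Sun's effective temperature of 5778 K as an illustrative input:

```python
def eddington_temperature(T_eff, tau):
    """Gray-atmosphere temperature profile: T^4 = (3/4) * T_eff^4 * (tau + 2/3)."""
    return (0.75 * T_eff**4 * (tau + 2.0 / 3.0)) ** 0.25

# At tau = 2/3 the bracket equals 4/3, so (3/4)*(4/3) = 1 and the local
# temperature equals the effective temperature. This is why the photosphere
# is identified with tau = 2/3.
T_surface = eddington_temperature(5778.0, 2.0 / 3.0)
```

At tau = 0 (the very top of the atmosphere) the same formula gives T = (1/2)^(1/4) T_eff, about 84% of the effective temperature.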
https://en.wikipedia.org/wiki/Optical_depth_(astrophysics)
Optical engineering is the field of engineering encompassing the physical phenomena and technologies associated with the generation, transmission, manipulation, detection, and utilization of light . [ 2 ] Optical engineers use the science of optics to solve problems and to design and build devices that make light do something useful. [ 3 ] Drawing on physics and chemistry , they design and operate optical equipment that exploits the properties of light, [ 4 ] such as lenses , microscopes , telescopes , lasers , sensors , fiber-optic communication systems and optical disc systems (e.g. CD , DVD ). Optical engineering metrology uses optical methods to measure either micro-vibrations, with instruments like the laser speckle interferometer , or properties of masses, with instruments that measure refraction . [ 5 ] Nano-measuring and nano-positioning machines are devices designed by optical engineers. These machines, for example microphotolithographic steppers , have nanometer precision and consequently are used in the fabrication of goods at this scale. [ 6 ]
https://en.wikipedia.org/wiki/Optical_engineering
An optical fiber , or optical fibre , is a flexible glass or plastic fiber that can transmit light [ a ] from one end to the other. Such fibers find wide usage in fiber-optic communications , where they permit transmission over longer distances and at higher bandwidths (data transfer rates) than electrical cables. Fibers are used instead of metal wires because signals travel along them with less loss and are immune to electromagnetic interference . [ 1 ] Fibers are also used for illumination and imaging, and are often wrapped in bundles so they may be used to carry light into, or images out of confined spaces, as in the case of a fiberscope . [ 2 ] Specially designed fibers are also used for a variety of other applications, such as fiber optic sensors and fiber lasers . [ 3 ] Glass optical fibers are typically made by drawing , while plastic fibers can be made either by drawing or by extrusion . [ 4 ] [ 5 ] Optical fibers typically include a core surrounded by a transparent cladding material with a lower index of refraction . Light is kept in the core by the phenomenon of total internal reflection which causes the fiber to act as a waveguide . [ 6 ] Fibers that support many propagation paths or transverse modes are called multi-mode fibers , while those that support a single mode are called single-mode fibers (SMF). [ 7 ] Multi-mode fibers generally have a wider core diameter [ 8 ] and are used for short-distance communication links and for applications where high power must be transmitted. [ 9 ] Single-mode fibers are used for most communication links longer than 1,050 meters (3,440 ft). [ 10 ] Being able to join optical fibers with low loss is important in fiber optic communication. [ 11 ] This is more complex than joining electrical wire or cable and involves careful cleaving of the fibers, precise alignment of the fiber cores, and the coupling of these aligned cores. For applications that demand a permanent connection a fusion splice is common. 
In this technique, an electric arc is used to melt the ends of the fibers together. Another common technique is a mechanical splice , where the ends of the fibers are held in contact by mechanical force. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors . [ 12 ] The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics . The term was coined by Indian-American physicist Narinder Singh Kapany . [ 13 ] Daniel Colladon and Jacques Babinet first demonstrated the guiding of light by refraction, the principle that makes fiber optics possible, in Paris in the early 1840s. [ 14 ] John Tyndall included a demonstration of it in his public lectures in London , 12 years later. [ 15 ] Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870: [ 16 ] [ 17 ] When the light passes from air into water, the refracted ray is bent towards the perpendicular ... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for a diamond it is 23°42′. In the late 19th century, a team of Viennese doctors guided light through bent glass rods to illuminate body cavities. [ 18 ] Practical applications such as close internal illumination during dentistry followed, early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. 
In the 1930s, Heinrich Lamm showed that one could transmit images through a bundle of unclad optical fibers and used it for internal medical examinations, but his work was largely forgotten. [ 15 ] [ 19 ] In 1953, Dutch scientist Bram van Heel first demonstrated image transmission through bundles of optical fibers with a transparent cladding. [ 19 ] Later that same year, Harold Hopkins and Narinder Singh Kapany at Imperial College in London succeeded in making image-transmitting bundles with over 10,000 fibers, and subsequently achieved image transmission through a 75 cm long bundle which combined several thousand fibers. [ 19 ] [ 20 ] [ 21 ] The first practical fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz , C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan , in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material. [ 19 ] Kapany coined the term fiber optics after writing a 1960 article in Scientific American that introduced the topic to a wide audience. He subsequently wrote the first book about the new field. [ 19 ] [ 22 ] The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, followed by the first patent application for this technology in 1966. [ 23 ] [ 24 ] In 1968, NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential , and employees handling the cameras had to be supervised by someone with an appropriate security clearance. [ 25 ] Charles K. Kao and George A. 
Hockham of the British company Standard Telephones and Cables (STC) were the first to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer (dB/km), making fibers a practical communication medium, in 1965. [ 26 ] They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering. They correctly and systematically theorized the light-loss properties for optical fiber and pointed out the right material to use for such fibers— silica glass with high purity. This discovery earned Kao the Nobel Prize in Physics in 2009. [ 27 ] The crucial attenuation limit of 20 dB/km was first achieved in 1970 by researchers Robert D. Maurer , Donald Keck , Peter C. Schultz , and Frank Zimar working for American glass maker Corning Glass Works . [ 28 ] They demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium . A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. In 1981, General Electric produced fused quartz ingots that could be drawn into strands 25 miles (40 km) long. [ 29 ] Initially, high-quality optical fibers could only be manufactured at 2 meters per second. Chemical engineer Thomas Mensah joined Corning in 1983 and increased the speed of manufacture to over 50 meters per second, making optical fiber cables cheaper than traditional copper ones. [ 30 ] [ self-published source ] [ 31 ] [ 32 ] These innovations ushered in the era of optical fiber telecommunication. The Italian research center CSELT worked with Corning to develop practical optical fiber cables, resulting in the first metropolitan fiber optic cable being deployed in Turin in 1977. [ 33 ] [ 34 ] CSELT also developed an early technique for splicing optical fibers, called Springroove. 
[ 35 ] Attenuation in modern optical cables is far less than in electrical copper cables, leading to long-haul fiber connections with repeater distances of 70–150 kilometers (43–93 mi). Two teams, led by David N. Payne of the University of Southampton and Emmanuel Desurvire at Bell Labs , developed the erbium-doped fiber amplifier , which reduced the cost of long-distance fiber systems by reducing or eliminating optical-electrical-optical repeaters, in 1986 and 1987 respectively. [ 36 ] [ 37 ] [ 38 ] The emerging field of photonic crystals led to the development in 1991 of photonic-crystal fiber , [ 39 ] which guides light by diffraction from a periodic structure, rather than by total internal reflection. The first photonic crystal fibers became commercially available in 2000. [ 40 ] Photonic crystal fibers can carry higher power than conventional fibers and their wavelength-dependent properties can be manipulated to improve performance. These fibers can have hollow cores. [ 41 ] Optical fiber is used as a medium for telecommunication and computer networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because infrared light propagates through the fiber with much lower attenuation compared to electricity in electrical cables. This allows long distances to be spanned with few repeaters . 10 or 40 Gbit/s is typical in deployed systems. [ 42 ] [ 43 ] Using wavelength-division multiplexing (WDM) enables each fiber to carry many independent channels, each using a different wavelength of light. The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the forward error correction (FEC) overhead, multiplied by the number of channels (usually up to 80 in commercial dense WDM systems as of 2008 [update] ). For short-distance applications, such as a network in an office building (see fiber to the office ), fiber-optic cabling can save space in cable ducts. 
This is because a single fiber can carry much more data than electrical cables such as standard category 5 cable , which typically runs at 100 Mbit/s or 1 Gbit/s speeds. Fibers are often also used for short-distance connections between devices. For example, most high-definition televisions offer a digital audio optical connection. This allows the streaming of audio over light, using the S/PDIF protocol over an optical TOSLINK connection. Fiber optic drones have been used in the Russo-Ukrainian War since March 2024. [ 53 ] [ 54 ] These drones are immune to electromagnetic interference and are not affected by electronic warfare systems. [ 54 ] Fibers have many uses in remote sensing . In some applications, the fiber itself is the sensor (the fibers channel light to a processing device that analyzes changes in the light's characteristics). In other cases, fiber is used to connect a sensor to a measurement system. Optical fibers can be used as sensors to measure strain , temperature , pressure , and other quantities by modifying a fiber so that the property being measured modulates the intensity , phase , polarization , wavelength , or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest since only a simple source and detector are required. A particularly useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing over distances of up to one meter. Distributed acoustic sensing is one example of this. In contrast, highly localized measurements can be provided by integrating miniaturized sensing elements with the tip of the fiber. [ 55 ] These can be implemented by various micro- and nanofabrication technologies, such that they do not exceed the microscopic boundary of the fiber tip, allowing for such applications as insertion into blood vessels via hypodermic needle.
Extrinsic fiber optic sensors use an optical fiber cable , normally a multi-mode one, to transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach otherwise inaccessible places. An example is the measurement of temperature inside jet engines by using a fiber to transmit radiation into a pyrometer outside the engine. Extrinsic sensors can be used in the same way to measure the internal temperature of electrical transformers , where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors measure vibration, rotation, displacement, velocity, acceleration, torque, and torsion. A solid-state version of the gyroscope, using the interference of light, has been developed. The fiber optic gyroscope (FOG) has no moving parts and exploits the Sagnac effect to detect mechanical rotation. Common uses for fiber optic sensors include advanced intrusion detection security systems . The light is transmitted along a fiber optic sensor cable placed on a fence, pipeline, or communication cabling, and the returned signal is monitored and analyzed for disturbances. This return signal is digitally processed to detect disturbances and trip an alarm if an intrusion has occurred. Optical fibers are widely used as components of optical chemical sensors and optical biosensors . [ 56 ] Optical fiber can be used to transmit power using a photovoltaic cell to convert the light into electricity. [ 57 ] While this method of power transmission is not as efficient as conventional ones, it is especially useful in situations where it is desirable not to have a metallic conductor as in the case of use near MRI machines, which produce strong magnetic fields. [ 58 ] Other examples are for powering electronics in high-powered antenna elements and measurement devices used in high-voltage transmission equipment.
Optical fibers are used as light guides in medical and other applications where bright light needs to be shone on a target without a clear line-of-sight path. Many microscopes use fiber-optic light sources to provide intense illumination of samples being studied. Optical fiber is also used in imaging optics. A coherent bundle of fibers is used, sometimes along with lenses, for a long, thin imaging device called an endoscope , which is used to view objects through a small hole. Medical endoscopes are used for minimally invasive exploratory or surgical procedures. Industrial endoscopes (see fiberscope or borescope ) are used for inspecting anything hard to reach, such as jet engine interiors. In some buildings, optical fibers route sunlight from the roof to other parts of the building (see nonimaging optics ). Optical-fiber lamps are used for illumination in decorative applications, including signs , art , toys and artificial Christmas trees . Optical fiber is an intrinsic part of the light-transmitting concrete building product LiTraCon . Optical fiber can also be used in structural health monitoring . This type of sensor can detect stresses that may have a lasting impact on structures . It is based on the principle of measuring analog attenuation. In spectroscopy , optical fiber bundles transmit light from a spectrometer to a substance that cannot be placed inside the spectrometer itself, in order to analyze its composition. A spectrometer analyzes substances by bouncing light off and through them. By using fibers, a spectrometer can be used to study objects remotely. [ 59 ] [ 60 ] [ 61 ] An optical fiber doped with certain rare-earth elements such as erbium can be used as the gain medium of a fiber laser or optical amplifier . Rare-earth-doped optical fibers can be used to provide signal amplification by splicing a short section of doped fiber into a regular (undoped) optical fiber line. 
The doped fiber is optically pumped with a second laser wavelength that is coupled into the line in addition to the signal wave. Both wavelengths of light are transmitted through the doped fiber, which transfers energy from the second pump wavelength to the signal wave. The process that causes the amplification is stimulated emission . Optical fiber is also widely exploited as a nonlinear medium. The glass medium supports a host of nonlinear optical interactions, and the long interaction lengths possible in fiber facilitate a variety of phenomena, which are harnessed for applications and fundamental investigation. [ 62 ] Conversely, fiber nonlinearity can have deleterious effects on optical signals, and measures are often required to minimize such unwanted effects. Optical fibers doped with a wavelength shifter collect scintillation light in physics experiments . Fiber-optic sights for handguns, rifles, and shotguns use pieces of optical fiber to improve the visibility of markings on the sight. Optical fibers are used as components in e-textiles . This was first done by Harry Wainwright in the 1980s. [ 63 ] He used fiber optics to create "a sweatshirt with a dragon spitting flames morphing into a bird." An optical fiber is a cylindrical dielectric waveguide ( nonconducting waveguide) that transmits light along its axis through the process of total internal reflection. The fiber consists of a core surrounded by a cladding layer, both of which are made of dielectric materials. [ 64 ] To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The boundary between the core and cladding may either be abrupt, in step-index fiber , or gradual, in graded-index fiber . Light can be fed into optical fibers using lasers or LEDs . Optical fibers are immune to electrical interference as there is no cross-talk between signals in different cables and no pickup of environmental noise . 
Information traveling inside the optical fiber is even immune to electromagnetic pulses generated by nuclear devices. [ b ] [ 65 ] Fiber cables do not conduct electricity, which makes them useful for protecting communications equipment in high voltage environments such as power generation facilities or applications prone to lightning strikes. The electrical isolation also prevents problems with ground loops . Because there is no electricity in optical cables that could potentially generate sparks, they can be used in environments where explosive fumes are present. Wiretapping (in this case, fiber tapping ) is more difficult compared to electrical connections. Fiber cables are not targeted for metal theft . In contrast, copper cable systems use large amounts of copper and have been targeted since the 2000s commodities boom . The refractive index is a way of measuring the speed of light in a material. Light travels fastest in a vacuum , such as in outer space. The speed of light in vacuum is about 300,000 kilometers (186,000 miles) per second. The refractive index of a medium is calculated by dividing the speed of light in vacuum by the speed of light in that medium. The refractive index of vacuum is therefore 1, by definition. A typical single-mode fiber used for telecommunications has a cladding made of pure silica, with an index of 1.444 at 1500 nm, and a core of doped silica with an index around 1.4475. [ 64 ] The larger the index of refraction, the slower light travels in that medium. From this information, a simple rule of thumb is that a signal using optical fiber for communication will travel at around 200,000 kilometers per second. Thus a phone call carried by fiber between Sydney and New York, a 16,000-kilometer distance, means that there is a minimum delay of 80 milliseconds (about 1/12 of a second) between when one caller speaks and the other hears.
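The delay calculation above follows directly from the refractive index; a minimal sketch, using the core index of 1.4475 quoted in the text:

```python
C_VACUUM_KM_PER_S = 299_792.458  # speed of light in vacuum

def fiber_delay_ms(distance_km, core_index=1.4475):
    """One-way propagation delay in a fiber whose core has the given index."""
    speed_km_per_s = C_VACUUM_KM_PER_S / core_index  # ~207,000 km/s
    return distance_km / speed_km_per_s * 1000.0

# Sydney to New York over ~16,000 km of fiber: roughly 77 ms one way,
# consistent with the ~80 ms figure obtained from the 200,000 km/s rule of thumb.
delay = fiber_delay_ms(16_000)
```

The rule-of-thumb figure of 200,000 km/s slightly underestimates the true group velocity here, which is why the exact calculation comes out a few milliseconds lower.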
[ c ] When light traveling in an optically dense medium hits a boundary at a steep angle of incidence (larger than the critical angle for the boundary), the light is completely reflected. This is called total internal reflection . This effect is used in optical fibers to confine light in the core. Most modern optical fiber is weakly guiding , meaning that the difference in refractive index between the core and the cladding is very small (typically less than 1%). [ 66 ] Light travels through the fiber core, bouncing back and forth off the boundary between the core and cladding. Because the light must strike the boundary with an angle greater than the critical angle, only light that enters the fiber within a certain range of angles can travel down the fiber without leaking out. This range of angles is called the acceptance cone of the fiber. There is a maximum angle from the fiber axis at which light may enter the fiber so that it will propagate, or travel, in the core of the fiber. The sine of this maximum angle is the numerical aperture (NA) of the fiber. Fiber with a larger NA requires less precision to splice and work with than fiber with a smaller NA. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Single-mode fiber has a small NA. Optical fibers with a large core diameter (greater than 10 micrometers) may be analyzed by geometrical optics . Such fibers are called multi-mode fibers , from the electromagnetic analysis (see below). In a step-index multi-mode fiber, rays of light are guided along the fiber core by total internal reflection. Rays that meet the core-cladding boundary at an angle (measured relative to a line normal to the boundary) greater than the critical angle for this boundary, are completely reflected. The critical angle is determined by the difference in the index of refraction between the core and cladding materials. 
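The numerical aperture follows from the core and cladding indices; a minimal sketch using the standard relation NA = sqrt(n_core^2 - n_cladding^2) and the illustrative index values quoted for a typical single-mode telecom fiber:

```python
import math

def numerical_aperture(n_core, n_cladding):
    """NA = sqrt(n_core^2 - n_cladding^2), the sine of the maximum
    acceptance half-angle for light entering the fiber from air."""
    return math.sqrt(n_core**2 - n_cladding**2)

# Indices for a typical single-mode telecom fiber: the small index
# contrast (under 1%) gives a small NA and a narrow acceptance cone.
na = numerical_aperture(1.4475, 1.444)
acceptance_half_angle_deg = math.degrees(math.asin(na))
```

With these indices the NA comes out around 0.1, i.e. an acceptance half-angle of only about 6 degrees, which is why single-mode fiber demands more precise alignment when splicing.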
Rays that meet the boundary at a low angle are refracted from the core into the cladding where they terminate. The critical angle determines the acceptance angle of the fiber, often reported as a numerical aperture . A high numerical aperture allows light to propagate down the fiber in rays both close to the axis and at various angles, allowing efficient coupling of light into the fiber. However, this high numerical aperture increases the amount of dispersion as rays at different angles have different path lengths and therefore take different amounts of time to traverse the fiber. In graded-index fiber, the index of refraction in the core decreases continuously between the axis and the cladding. This causes light rays to bend smoothly as they approach the cladding, rather than reflecting abruptly from the core-cladding boundary. The resulting curved paths reduce multi-path dispersion because high-angle rays pass more through the lower-index periphery of the core, rather than the high-index center. The index profile is chosen to minimize the difference in axial propagation speeds of the various rays in the fiber. This ideal index profile is very close to a parabolic relationship between the index and the distance from the axis. [ citation needed ] Fibers with a core diameter less than about ten times the wavelength of the propagating light cannot be modeled using geometric optics. Instead, they must be analyzed as an electromagnetic waveguide structure, according to Maxwell's equations as reduced to the electromagnetic wave equation . [ d ] As an optical waveguide, the fiber supports one or more confined transverse modes by which light can propagate along the fiber. Fiber supporting only one mode is called single-mode . [ e ] The waveguide analysis shows that the light energy in the fiber is not completely confined in the core. 
Instead, especially in single-mode fibers, a significant fraction of the energy in the bound mode travels in the cladding as an evanescent wave . The most common type of single-mode fiber has a core diameter of 8–10 micrometers and is designed for use in the near infrared . Multi-mode fiber, by comparison, is manufactured with core diameters as small as 50 micrometers and as large as hundreds of micrometers. Some special-purpose optical fiber is constructed with a non-cylindrical core or cladding layer, usually with an elliptical or rectangular cross-section. These include polarization-maintaining fiber used in fiber optic sensors and fiber designed to suppress whispering gallery mode propagation. Photonic-crystal fiber is made with a regular pattern of index variation (often in the form of cylindrical holes that run along the length of the fiber). Such fiber uses diffraction effects instead of or in addition to total internal reflection, to confine light to the fiber's core. The properties of the fiber can be tailored to a wide variety of applications. Attenuation in fiber optics, also known as transmission loss, is the reduction in the intensity of the light signal as it travels through the transmission medium. Attenuation coefficients in fiber optics are usually expressed in units of dB/km. The medium is usually a fiber of silica glass [ f ] that confines the incident light beam within. Attenuation is an important factor limiting the transmission of a digital signal across large distances. Thus, much research has gone into both limiting the attenuation and maximizing the amplification of the optical signal. The four orders of magnitude reduction in the attenuation of silica optical fibers over four decades was the result of constant improvement of manufacturing processes, raw material purity, preform, and fiber designs, which allowed for these fibers to approach the theoretical lower limit of attenuation. 
[ 67 ] Single-mode optical fibers can be made with extremely low loss. Corning's Vascade® EX2500 fiber, a low-loss single-mode fiber for telecommunications wavelengths, has a nominal attenuation of 0.148 dB/km at 1550 nm. [ 68 ] A 10 km length of such fiber transmits nearly 71% of optical energy at 1550 nm. Attenuation in optical fiber is caused primarily by both scattering and absorption . In fibers based on fluoride glasses such as ZBLAN, minimum attenuation is limited by impurity absorption. The vast majority of optical fibers are based on silica glass, where impurity absorption is negligible. In silica fibers, attenuation is determined by intrinsic mechanisms: Rayleigh scattering in the glasses through which the light is propagating, and infrared absorption in the same glasses. Absorption in silica increases steeply at wavelengths above 1570 nm. At the wavelengths most useful for telecommunications, Rayleigh scattering is the dominant loss mechanism. At 1550 nm, the attenuation components for a record low-loss fiber are as follows: Rayleigh scattering loss: 0.1200 dB/km, infrared absorption loss: 0.0150 dB/km, impurity absorption loss: 0.0047 dB/km, waveguide imperfection loss: 0.0010 dB/km. The propagation of light through the core of an optical fiber is based on total internal reflection of the lightwave, in terms of geometric optics, or on guided modes, in terms of electromagnetic waveguide theory. In a typical single-mode optical fiber, about 75% of the light propagates through the core material, which has the higher refractive index, and about 25% propagates through the cladding, which has the lower refractive index. The interface between the core and cladding glasses is exceptionally smooth and does not give rise to a significant scattering loss or a waveguide imperfection loss. The scattering loss originates primarily from the Rayleigh scattering in the bulk of the glasses composing the fiber core and cladding.
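The 71% figure quoted above follows directly from the logarithmic definition of attenuation: the fraction of power remaining after a length L of fiber with attenuation coefficient α (in dB/km) is 10^(−αL/10).

```python
def transmitted_fraction(alpha_db_per_km: float, length_km: float) -> float:
    """Fraction of optical power remaining after length_km of fiber with loss alpha_db_per_km."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

# 0.148 dB/km over 10 km, the figures quoted above for the Vascade EX2500 fiber:
frac = transmitted_fraction(0.148, 10)
print(f"{frac:.1%} of the launched power remains")  # matches the ~71% stated in the text
```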
The scattering of light in optical quality glass fiber is caused by molecular-level irregularities (compositional fluctuations) in the glass structure. Indeed, one emerging school of thought is that glass is simply the limiting case of a polycrystalline solid. Within this framework, domains exhibiting various degrees of short-range order become the building blocks of metals as well as glasses and ceramics. Distributed both between and within these domains are micro-structural defects that provide the most ideal locations for light scattering. Scattering depends on the wavelength of the light being scattered and on the size of the scattering centers. The angular dependence of the light intensity scattered from an optical fiber matches that of Rayleigh scattering, indicating that the scattering centers are much smaller than the wavelength of the propagating light. The scattering originates from the density fluctuations driven by the fictive temperature of the glass, and from the concentration fluctuations of dopants in both the core and the cladding. The Rayleigh scattering coefficient, R , can be presented as R = R_d + R_c , where R_d represents Rayleigh scattering on density fluctuations and R_c represents Rayleigh scattering on dopant concentration fluctuations. Dopants, such as germanium dioxide or fluorine, are used to create the refractive index difference between the core and the cladding, to form a waveguide structure. The density-fluctuation term is R_d = (8π³ / 3λ⁴) n⁸ p² β_c k_B T_f , where λ is the wavelength, n is the refractive index , p is the photo-elastic coefficient, β_c is the isothermal compressibility, k_B is the Boltzmann constant , and T_f is the fictive temperature.
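The density-fluctuation term R_d can be evaluated numerically. The material constants below (refractive index, photo-elastic coefficient, isothermal compressibility, fictive temperature) are rough literature-typical values for silica assumed purely for illustration, not values from this article; with them the formula lands near the ~0.12 dB/km Rayleigh scattering loss quoted earlier for a record low-loss fiber.

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def rayleigh_density_loss_db_per_km(wavelength_m, n, p, beta_c, t_fictive):
    """R_d = (8*pi^3 / 3*lambda^4) * n^8 * p^2 * beta_c * k_B * T_f, converted from 1/m to dB/km."""
    r_d = (8 * math.pi**3 / (3 * wavelength_m**4)) * n**8 * p**2 * beta_c * k_B * t_fictive
    return 10 * math.log10(math.e) * r_d * 1000  # 10*log10(e) converts 1/m to dB/m; *1000 -> dB/km

# Assumed, literature-typical silica parameters (not from the article):
loss = rayleigh_density_loss_db_per_km(
    wavelength_m=1.55e-6,  # 1550 nm
    n=1.444,               # refractive index
    p=0.286,               # photo-elastic coefficient
    beta_c=7e-11,          # isothermal compressibility, 1/Pa
    t_fictive=1450,        # fictive temperature, K
)
print(f"Rayleigh (density) loss ~ {loss:.2f} dB/km")
```

The λ⁻⁴ dependence in the formula is also why operating at longer wavelengths (1550 nm rather than 850 nm) dramatically reduces scattering loss.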
The only physically significant variable affecting scattering on density fluctuations is the fictive temperature of the glass: a lower fictive temperature results in a more homogeneous glass and lower Rayleigh scattering. The fictive temperature may be dramatically reduced by adding about 100 wt. ppm of alkali oxide dopant to the fiber core, as well as by slower cooling of the fiber during the fiber draw process. These approaches are used to produce the optical fibers with the lowest attenuation, especially those for submarine telecom cables. For small dopant concentrations, R_c is proportional to x(dn/dx)², where x is the mole fraction of the dopant in SiO 2 -based glass and n is the refractive index of the glass. When GeO 2 dopant is used to increase the refractive index of the fiber core, it increases the concentration-fluctuation component of Rayleigh scattering, and thus the attenuation of the fiber. This is why the lowest-attenuation fibers do not use GeO 2 in the core, and instead use fluorine in the cladding to reduce the refractive index of the cladding. R_c in pure-silica-core fiber is proportional to the overlap integral between the LP01 mode and the fluorine-induced concentration-fluctuation component in the cladding. In the core of potassium-doped pure-silica-core (KPSC) fiber, only density fluctuations play a significant role, as the concentrations of K 2 O, fluorine and chlorine are very low. The density fluctuations in the core are moderated by the lower fictive temperature resulting from potassium doping, and are further reduced by annealing during the fiber draw process. This differs from the cladding, where higher fluorine dopant levels and the resulting concentration fluctuations add to the loss. In such fibers the light travelling through the core experiences lower scattering and lower attenuation than the light propagating through the cladding segment of the fiber. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber.
[ 69 ] [ 70 ] In addition to light scattering, attenuation or signal loss can also occur due to the selective absorption of specific wavelengths. Primary material considerations include both electronic and molecular absorption mechanisms. The design of any optically transparent device requires the selection of materials based upon knowledge of their properties and limitations. The crystal-structure absorption characteristics observed in the lower-frequency regions (mid- to far-IR wavelength range) define the long-wavelength transparency limit of the material. They are the result of the interactive coupling between the motions of thermally induced vibrations of the constituent atoms and molecules of the solid lattice and the incident light-wave radiation. Hence, all materials are bounded by limiting regions of absorption caused by atomic and molecular vibrations (bond stretching) in the far-infrared (>10 μm). In other words, the selective absorption of IR light by a particular material occurs because the selected frequency of the light wave matches the frequency (or an integer multiple of the frequency, i.e. a harmonic ) at which the particles of that material vibrate. Since different atoms and molecules have different natural frequencies of vibration, they selectively absorb different frequencies (or portions of the spectrum) of IR light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When IR light of these frequencies strikes an object, the energy is either reflected or transmitted. Attenuation over a cable run is significantly increased by the inclusion of connectors and splices. When computing the acceptable attenuation (loss budget) between a transmitter and a receiver, one includes the fiber loss as well as the losses contributed by connectors and splices. Connectors typically introduce 0.3 dB per connector on well-polished connectors. Splices typically introduce less than 0.2 dB per splice.
[ citation needed ] The total loss can be calculated as the fiber's dB loss per kilometer multiplied by the length in kilometers, plus the losses contributed by each connector and each splice, where the dB loss per kilometer is a function of the type of fiber and can be found in the manufacturer's specifications. For example, a typical 1550 nm single-mode fiber has a loss of 0.3 dB per kilometer. [ citation needed ] The calculated loss budget is used when testing to confirm that the measured loss is within the normal operating parameters. Glass optical fibers are almost always made from silica , but some other materials, such as fluorozirconate , fluoroaluminate , and chalcogenide glasses as well as crystalline materials like sapphire , are used for longer-wavelength infrared or other specialized applications. Silica and fluoride glasses usually have refractive indices of about 1.5, but some materials such as the chalcogenides can have indices as high as 3. Typically the index difference between core and cladding is less than one percent. Plastic optical fibers (POF) are commonly step-index multi-mode fibers with a core diameter of 0.5 millimeters or larger. POF typically have higher attenuation coefficients than glass fibers, 1 dB/m or higher, and this high attenuation limits the range of POF-based systems. Silica exhibits fairly good optical transmission over a wide range of wavelengths. In the near-infrared (near IR) portion of the spectrum, particularly around 1.5 μm, silica can have extremely low absorption and scattering losses on the order of 0.2 dB/km. Such low losses depend on using ultra-pure silica. A high transparency in the 1.4-μm region is achieved by maintaining a low concentration of hydroxyl groups (OH). Alternatively, a high OH concentration is better for transmission in the ultraviolet (UV) region. [ 71 ] Silica can be drawn into fibers at reasonably high temperatures and has a fairly broad glass transformation range . One other advantage is that fusion splicing and cleaving of silica fibers is relatively effective.
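The loss-budget arithmetic described earlier is a simple sum. The sketch below uses the per-element figures quoted in the text (0.3 dB per connector, 0.2 dB per splice, 0.3 dB/km fiber loss at 1550 nm); the link geometry is an invented example.

```python
def loss_budget(length_km, n_connectors, n_splices,
                fiber_db_per_km=0.3, connector_db=0.3, splice_db=0.2):
    """Total link loss in dB: fiber loss plus connector and splice contributions."""
    return (fiber_db_per_km * length_km
            + connector_db * n_connectors
            + splice_db * n_splices)

# Example link (assumed): 20 km of 1550 nm single-mode fiber, 2 connectors, 4 splices.
total = loss_budget(20, n_connectors=2, n_splices=4)
print(f"Loss budget: {total:.1f} dB")  # 6.0 dB fiber + 0.6 dB connectors + 0.8 dB splices
```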
Silica fiber also has high mechanical strength against both pulling and even bending, provided that the fiber is not too thick and that the surfaces have been well prepared during processing. Even simple cleaving of the ends of the fiber can provide nicely flat surfaces with acceptable optical quality. Silica is also relatively chemically inert . In particular, it is not hygroscopic (does not absorb water). Silica glass can be doped with various materials. One purpose of doping is to raise the refractive index (e.g. with germanium dioxide (GeO 2 ) or aluminium oxide (Al 2 O 3 )) or to lower it (e.g. with fluorine or boron trioxide (B 2 O 3 )). Doping is also possible with laser-active ions (for example, rare-earth-doped fibers) in order to obtain active fibers to be used, for example, in fiber amplifiers or laser applications. Both the fiber core and cladding are typically doped, so that the entire assembly (core and cladding) is effectively the same compound (e.g. an aluminosilicate , germanosilicate, phosphosilicate or borosilicate glass ). Particularly for active fibers, pure silica is usually not a very suitable host glass, because it exhibits a low solubility for rare-earth ions. This can lead to quenching effects due to the clustering of dopant ions. Aluminosilicates are much more effective in this respect. Silica fiber also exhibits a high threshold for optical damage. This property ensures a low tendency for laser-induced breakdown. This is important for fiber amplifiers when utilized for the amplification of short pulses. Because of these properties, silica fibers are the material of choice in many optical applications, such as communications (except for very short distances with plastic optical fiber), fiber lasers, fiber amplifiers, and fiber-optic sensors. Large efforts put forth in the development of various types of silica fibers have further increased the performance of such fibers over other materials. 
[ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ] [ 77 ] [ 78 ] [ 79 ] Fluoride glass is a class of non-oxide optical-quality glasses composed of fluorides of various metals . Because of the low viscosity of these glasses, it is very difficult to completely avoid crystallization while processing them through the glass transition (or drawing the fiber from the melt). Thus, although heavy-metal fluoride glasses (HMFG) exhibit very low optical attenuation, they are not only difficult to manufacture, but are quite fragile, and have poor resistance to moisture and other environmental attacks. Their best attribute is that they lack the absorption band associated with the hydroxyl (OH) group (3,200–3,600 cm −1 ; i.e., 2,777–3,125 nm or 2.78–3.13 μm), which is present in nearly all oxide-based glasses. The very low losses predicted for these glasses were never realized in practice, and the fragility and high cost of fluoride fibers made them less than ideal as primary candidates. Fluoride fibers are used in mid- IR spectroscopy , fiber optic sensors , thermometry , and imaging . Fluoride fibers can be used for guided lightwave transmission in media such as YAG ( yttrium aluminium garnet ) lasers at 2.9 μm, as required for medical applications (e.g. ophthalmology and dentistry ). [ 80 ] [ 81 ] An example of a heavy-metal fluoride glass is the ZBLAN glass group, composed of zirconium , barium , lanthanum , aluminium , and sodium fluorides. Their main technological application is as optical waveguides in both planar and fiber forms. They are advantageous especially in the mid-infrared (2,000–5,000 nm) range. Phosphate glass is a class of optical glasses composed of metaphosphates of various metals. Instead of the SiO 4 tetrahedra in the network solid structure of silicate glasses, the building block for this glass is phosphorus pentoxide (P 2 O 5 ), which crystallizes in at least four different forms. The most familiar polymorph is the cage-like structure of P 4 O 10 .
Phosphate glasses can be advantageous over silica glasses for optical fibers with a high concentration of doping rare-earth ions. A mix of fluoride glass and phosphate glass is fluorophosphate glass. [ 82 ] [ 83 ] The chalcogens —the elements in group 16 of the periodic table —particularly sulfur (S), selenium (Se) and tellurium (Te)—react with more electropositive elements, such as silver , to form chalcogenides . These are extremely versatile compounds, in that they can be crystalline or amorphous, metallic or semiconducting, and conductors of ions or electrons . Chalcogenide glass can be used to make fibers for far-infrared transmission. [ 84 ] Standard optical fibers are made by first constructing a large-diameter preform with a carefully controlled refractive index profile, and then pulling the preform to form the long, thin optical fiber. The preform is commonly made by one of three chemical vapor deposition methods: inside vapor deposition , outside vapor deposition , and vapor axial deposition . [ 85 ] With inside vapor deposition , the preform starts as a hollow glass tube approximately 40 centimeters (16 in) long, which is placed horizontally and rotated slowly on a lathe . Gases such as silicon tetrachloride (SiCl 4 ) or germanium tetrachloride (GeCl 4 ) are injected with oxygen into the end of the tube. The gases are then heated by means of an external hydrogen burner, bringing the temperature of the gas up to 1,900 K (1,600 °C, 3,000 °F), where the tetrachlorides react with oxygen to produce silica or germanium dioxide particles. When the reaction conditions are chosen to allow this reaction to occur in the gas phase throughout the tube volume, in contrast to earlier techniques where the reaction occurred only on the glass surface, this technique is called modified chemical vapor deposition . [ 86 ] The oxide particles then agglomerate to form large particle chains, which subsequently deposit on the walls of the tube as soot.
The deposition is due to the large difference in temperature between the gas core and the wall causing the gas to push the particles outward in a process known as thermophoresis . The torch is then traversed up and down the length of the tube to deposit the material evenly. After the torch has reached the end of the tube, it is then brought back to the beginning of the tube and the deposited particles are then melted to form a solid layer. This process is repeated until a sufficient amount of material has been deposited. For each layer the composition can be modified by varying the gas composition, resulting in precise control of the finished fiber's optical properties. In outside vapor deposition or vapor axial deposition, the glass is formed by flame hydrolysis , a reaction in which silicon tetrachloride and germanium tetrachloride are oxidized by reaction with water in an oxyhydrogen flame. In outside vapor deposition, the glass is deposited onto a solid rod, which is removed before further processing. In vapor axial deposition, a short seed rod is used, and a porous preform, whose length is not limited by the size of the source rod, is built up on its end. The porous preform is consolidated into a transparent, solid preform by heating to about 1,800 K (1,500 °C, 2,800 °F). Typical communications fiber uses a circular preform. For some applications such as double-clad fibers another form is preferred. [ 87 ] In fiber lasers based on double-clad fiber, an asymmetric shape improves the filling factor for laser pumping . Because of the surface tension, the shape is smoothed during the drawing process, and the shape of the resulting fiber does not reproduce the sharp edges of the preform. Nevertheless, careful polishing of the preform is important, since any defects of the preform surface affect the optical and mechanical properties of the resulting fiber. 
The preform, regardless of construction, is placed in a device known as a drawing tower , where the preform tip is heated and the optical fiber is pulled out as a string. The tension on the fiber can be controlled to maintain the desired fiber thickness. The light is guided down the core of the fiber by an optical cladding with a lower refractive index that traps light in the core through total internal reflection. For some types of fiber, the cladding is made of glass and is drawn along with the core from a preform with a radially varying index of refraction. For other types of fiber, the cladding is made of plastic and is applied like a coating (see below). The cladding is coated by a buffer (not to be confused with an actual buffer tube) that protects it from moisture and physical damage. [ 73 ] These coatings are UV-cured urethane acrylate composite or polyimide materials applied to the outside of the fiber during the drawing process. The coatings protect the very delicate strands of glass fiber—about the size of a human hair—and allow them to survive the rigors of manufacturing, proof testing, cabling, and installation. The buffer coating must be stripped off the fiber for termination or splicing. Today's glass optical fiber draw processes employ a dual-layer coating approach. An inner primary coating is designed to act as a shock absorber to minimize attenuation caused by microbending . An outer secondary coating protects the primary coating against mechanical damage and acts as a barrier to lateral forces, and may be colored to differentiate strands in bundled cable constructions. These fiber optic coating layers are applied during the fiber draw, at speeds approaching 100 kilometers per hour (60 mph). Fiber optic coatings are applied using one of two methods: wet-on-dry and wet-on-wet . In wet-on-dry, the fiber passes through a primary coating application, which is then UV cured, then through the secondary coating application, which is subsequently cured.
In wet-on-wet, the fiber passes through both the primary and secondary coating applications, then goes to UV curing. [ 88 ] The thickness of the coating is taken into account when calculating the stress that the fiber experiences under different bend configurations. [ 89 ] When a coated fiber is wrapped around a mandrel, the stress experienced by the fiber is given by [ 89 ] : 45 σ = E d_f / (d_m + d_c) , where E is the fiber's Young's modulus , d_m is the diameter of the mandrel, d_f is the diameter of the cladding and d_c is the diameter of the coating. In a two-point bend configuration, a coated fiber is bent in a U-shape and placed between the grooves of two faceplates, which are brought together until the fiber breaks. The stress in the fiber in this configuration is given by [ 89 ] : 47 σ = 1.198 E d_f / (d − d_c) , where d is the distance between the faceplates. The coefficient 1.198 is a geometric constant associated with this configuration. Fiber optic coatings protect the glass fibers from scratches that could lead to strength degradation. The combination of moisture and scratches accelerates the aging and deterioration of fiber strength. When fiber is subjected to low stresses over a long period, fiber fatigue can occur. Over time or in extreme conditions, these factors combine to cause microscopic flaws in the glass fiber to propagate, which can ultimately result in fiber failure. Three key characteristics of fiber optic waveguides can be affected by environmental conditions: strength, attenuation, and resistance to losses caused by microbending. External optical fiber cable jackets and buffer tubes protect glass optical fiber from environmental conditions that can affect the fiber's performance and long-term durability.
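The two bend-stress formulas above can be sketched directly in code. The Young's modulus and geometry below are illustrative assumptions (roughly a standard 125 µm silica fiber with a 250 µm coated diameter), not values from the article.

```python
def mandrel_stress(E, d_f, d_m, d_c):
    """Stress (Pa) of a coated fiber wrapped on a mandrel: sigma = E*d_f / (d_m + d_c)."""
    return E * d_f / (d_m + d_c)

def two_point_bend_stress(E, d_f, d, d_c):
    """Stress (Pa) in the two-point bend test: sigma = 1.198*E*d_f / (d - d_c)."""
    return 1.198 * E * d_f / (d - d_c)

# Assumed parameters: E ~ 72 GPa for silica, 125 um cladding, 250 um coating,
# 10 mm mandrel, 5 mm faceplate separation (all SI units).
E = 72e9
sigma_mandrel = mandrel_stress(E, d_f=125e-6, d_m=10e-3, d_c=250e-6)
sigma_2pt = two_point_bend_stress(E, d_f=125e-6, d=5e-3, d_c=250e-6)
print(f"mandrel: {sigma_mandrel/1e6:.0f} MPa, two-point bend: {sigma_2pt/1e6:.0f} MPa")
```

Note how quickly the two-point bend stress climbs as the faceplates close, which is why that geometry is used to bend fibers to failure in strength testing.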
On the inside, coatings ensure the reliability of the signal being carried and help minimize attenuation due to microbending. In practical fibers, the cladding is usually coated with a tough resin and features an additional buffer layer, which may be further surrounded by a jacket layer, usually plastic. These layers add strength to the fiber but do not affect its optical properties. Rigid fiber assemblies sometimes put light-absorbing glass between the fibers, to prevent light that leaks out of one fiber from entering another. This reduces crosstalk between the fibers, or reduces flare in fiber bundle imaging applications. [ 90 ] [ 91 ] Multi-fiber cable usually uses colored buffers to identify each strand. Modern cables come in a wide variety of sheathings and armor, designed for applications such as direct burial in trenches, high-voltage isolation, dual use as power lines, [ 92 ] [ failed verification ] installation in conduit, lashing to aerial telephone poles, submarine installation , and insertion in paved streets. Some fiber optic cable versions are reinforced with aramid yarns or glass yarns as an intermediary strength member . In commercial terms, the use of glass yarns is more cost-effective with no loss of mechanical durability. Glass yarns also protect the cable core against rodents and termites. Fiber cable can be very flexible, but traditional fiber's loss increases greatly if the fiber is bent with a radius smaller than around 30 mm. This creates a problem when the cable is bent around corners. Bendable fibers , targeted toward easier installation in home environments, have been standardized as ITU-T G.657 . This type of fiber can be bent with a radius as low as 7.5 mm without adverse impact. Even more bendable fibers have been developed. [ 93 ] Bendable fiber may also be resistant to fiber hacking, in which the signal in a fiber is surreptitiously monitored by bending the fiber and detecting the leakage.
[ 94 ] Another important feature of a cable is its ability to withstand tension, which determines how much force can be applied to the cable during installation. Optical fibers are connected to terminal equipment by optical fiber connectors . These connectors are usually of a standard type such as FC , SC , ST , LC , MTRJ , MPO or SMA . Optical fibers may be connected by connectors, typically on a patch panel , or permanently by splicing , that is, joining two fibers together to form a continuous optical waveguide. The generally accepted splicing method is fusion splicing , which melts the fiber ends together. For quicker fastening jobs, a mechanical splice is used. All splicing techniques involve installing an enclosure that protects the splice. Fusion splicing is done with a specialized instrument. The fiber ends are first stripped of their protective polymer coating (as well as the more sturdy outer jacket, if present). The ends are cleaved with a precision cleaver to make them perpendicular, and are placed into special holders in the fusion splicer. The splice is usually inspected via a magnified viewing screen, to check the cleaves before the splice and the fusion after it. The splicer uses small motors to align the end faces together, and emits a small spark between electrodes at the gap to burn off dust and moisture. Then the splicer generates a larger spark that raises the temperature above the melting point of the glass, fusing the ends permanently. The location and energy of the spark are carefully controlled so that the molten core and cladding do not mix, which minimizes optical loss. A splice loss estimate is measured by the splicer by directing light through the cladding on one side and measuring the light leaking from the cladding on the other side. A splice loss under 0.1 dB is typical. The complexity of this process makes fiber splicing much more difficult than splicing copper wire.
Mechanical fiber splices are designed to be quicker and easier to install, but there is still the need for stripping, careful cleaning, and precision cleaving. The fiber ends are aligned and held together by a precision sleeve, often using a clear index-matching gel that enhances the transmission of light across the joint. Mechanical splices typically have a higher optical loss and are less robust than fusion splices, especially if the gel is used. Fibers are terminated in connectors that hold the fiber end precisely and securely. An optical fiber connector is a rigid cylindrical barrel surrounded by a sleeve that holds the barrel in its mating socket. The mating mechanism can be push and click , turn and latch ( bayonet mount ), or screw-in ( threaded ). The barrel is typically free to move within the sleeve and may have a key that prevents the barrel and fiber from rotating as the connectors are mated. A typical connector is installed by preparing the fiber end and inserting it into the rear of the connector body. Quick-set adhesive is usually used to hold the fiber securely, and a strain relief is secured to the rear. Once the adhesive sets, the fiber's end is polished. Various polish profiles are used, depending on the type of fiber and the application. With a flat polish, a small air gap can remain between the mated fiber ends; the resulting signal strength loss is called gap loss . For single-mode fiber, fiber ends are typically polished with a slight curvature that makes the mated connectors touch only at their cores. This is called a physical contact (PC) polish. The curved surface may be polished at an angle, to make an angled physical contact (APC) connection. Such connections have higher loss than PC connections but greatly reduced back reflection, because light that reflects from the angled surface leaks out of the fiber core. APC fiber ends have low back reflection even when disconnected.
In the 1990s, the number of parts per connector, the polishing of the fibers, and the need to oven-bake the epoxy in each connector made terminating fiber optic cables difficult. Today, connector types on the market offer easier, less labor-intensive ways of terminating cables. Some of the most popular connectors are pre-polished at the factory and include a gel inside the connector. A cleave is made at the required length, to get as close as possible to the polished piece already inside the connector. The gel surrounds the point where the two pieces meet inside the connector, so that very little light is lost. [ 95 ] For the most demanding installations, factory pre-polished pigtails of sufficient length to reach the first fusion splice enclosure assure good performance and minimize on-site labor. It is often necessary to align an optical fiber with another optical fiber or with an optoelectronic device such as a light-emitting diode , a laser diode , or a modulator . This can involve either carefully aligning the fiber and placing it in contact with the device, or using a lens to allow coupling over an air gap. Typically the size of the fiber mode is much larger than the size of the mode in a laser diode or a silicon optical chip . In this case, a tapered or lensed fiber is used to match the fiber mode field distribution to that of the other element. The lens on the end of the fiber can be formed using polishing, laser cutting [ 96 ] or fusion splicing. In a laboratory environment, a bare fiber end is coupled using a fiber launch system, which uses a microscope objective lens to focus the light down to a fine point. A precision translation stage (micro-positioning table) is used to move the lens, fiber, or device to allow the coupling efficiency to be optimized.
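One way to see why mode-size matching matters for coupling: for two perfectly aligned Gaussian modes, the power coupling efficiency depends only on the ratio of the mode-field radii, η = (2·w1·w2 / (w1² + w2²))². This is the standard overlap-integral result for Gaussian beams, and the radii below are illustrative assumptions, not figures from the article.

```python
def gaussian_coupling_efficiency(w1: float, w2: float) -> float:
    """Power coupling between two aligned Gaussian modes with 1/e^2 radii w1 and w2."""
    return (2 * w1 * w2 / (w1**2 + w2**2)) ** 2

# Assumed mode-field radii: a 5.2 um fiber mode versus a 4.0 um focused source spot.
eta = gaussian_coupling_efficiency(5.2e-6, 4.0e-6)
print(f"coupling efficiency ~ {eta:.0%}")
```

Even a sizable radius mismatch costs only a few percent when alignment is perfect; in practice, angular and lateral misalignment reduce this further, consistent with the 70 to 90% figures achievable with careful optimization.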
Fibers with a connector on the end make this process much simpler: the connector is simply plugged into a pre-aligned fiber-optic collimator, which contains a lens that is either accurately positioned to the fiber or is adjustable. To achieve the best injection efficiency into a single-mode fiber, the direction, position, size, and divergence of the beam must all be optimized. With good optimization, 70 to 90% coupling efficiency can be achieved. With properly polished single-mode fibers, the emitted beam has an almost perfect Gaussian shape—even in the far field—if a good lens is used. The lens needs to be large enough to support the full numerical aperture of the fiber, and must not introduce aberrations in the beam. Aspheric lenses are typically used. At optical intensities above 2 megawatts per square centimeter, when a fiber is subjected to a shock or is otherwise suddenly damaged, a fiber fuse can occur. The reflection from the damage vaporizes the fiber immediately before the break, and this new defect remains reflective so that the damage propagates back toward the transmitter at 1–3 meters per second (4–11 km/h, 2–8 mph). [ 97 ] [ 98 ] The open fiber control system, which ensures laser eye safety in the event of a broken fiber, can also effectively halt propagation of the fiber fuse. [ 99 ] In situations, such as undersea cables, where high power levels might be used without the need for open fiber control, a fiber fuse protection device at the transmitter can break the circuit to minimize damage. The refractive index of fibers varies slightly with the frequency of light, and light sources are not perfectly monochromatic. Modulation of the light source to transmit a signal also slightly widens the frequency band of the transmitted light. This has the effect that, over long distances and at high modulation speeds, different portions of light can take different times to arrive at the receiver, ultimately making the signal impossible to discern. 
[ 100 ] This problem can be overcome in several ways, including the use of extra repeaters and the use of a relatively short length of fiber that has the opposite refractive index gradient .
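The scale of this pulse spreading can be estimated with the standard dispersion-parameter relation Δt ≈ |D|·L·Δλ. The sketch below uses illustrative values (D = 17 ps/(nm·km), typical of standard single-mode fiber near 1550 nm) that are assumptions, not figures from the article:

```python
def pulse_broadening_ps(d_ps_nm_km, length_km, linewidth_nm):
    """Chromatic-dispersion pulse spread: dt = |D| * L * dlambda."""
    return abs(d_ps_nm_km) * length_km * linewidth_nm

# Assumed values: D = 17 ps/(nm*km), 100 km span, 0.1 nm source linewidth
spread = pulse_broadening_ps(17.0, 100.0, 0.1)
print(f"Pulse spread: {spread:.0f} ps")   # 170 ps over the span
```

Once the spread approaches the bit period, adjacent pulses overlap, which is why repeaters or dispersion-compensating fiber of the opposite gradient are inserted.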
https://en.wikipedia.org/wiki/Optical_fiber
The optical force is a phenomenon whereby beams of light can attract and repel each other. The force acts along an axis which is perpendicular to the light beams. Because of this, parallel beams can be induced to converge or diverge. The optical force works on a microscopic scale, and cannot currently be detected at larger scales. It was discovered by a team of Yale researchers led by electrical engineer Hong Tang. [ 1 ]
https://en.wikipedia.org/wiki/Optical_force
Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types ( spherical , aspheric , holographic , diffractive , etc.), as well as radius of curvature , distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it. Performance requirements vary with the application. Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, and physically realizable glass index of refraction and dispersion properties. Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to the increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently a particular glass type is made by a given manufacturer, and can seriously affect manufacturing cost and schedule. Lenses can first be designed using paraxial theory to position images and pupils , with real surfaces then inserted and optimized. Paraxial theory can be skipped in simpler cases and the lens directly optimized using real surfaces. Lenses are first designed using the average index of refraction and dispersion (see Abbe number ) properties published in the glass manufacturer's catalog and through glass model calculations.
However, the properties of the real glass blanks will vary from this ideal; index of refraction values can vary by as much as 0.0003 or more from catalog values, and dispersion can vary slightly. These changes in index and dispersion can sometimes be enough to affect the lens focus location and imaging performance in highly corrected systems. The glass blank pedigree, or "melt data", can be determined for a given glass batch by making small precision prisms from various locations in the batch and measuring their index of refraction on a spectrometer , typically at five or more wavelengths . Lens design programs have curve-fitting routines that can fit the melt data to a selected dispersion curve , from which the index of refraction at any wavelength within the fitted wavelength range can be calculated. A re-optimization, or "melt re-comp", can then be performed on the lens design using measured index of refraction data where available. When manufactured, the resulting lens performance will more closely match the desired requirements than if average glass catalog values for index of refraction were assumed. Delivery schedules are impacted by glass and mirror blank availability and lead times to acquire, the amount of tooling a shop must fabricate prior to starting on a project, the manufacturing tolerances on the parts (tighter tolerances mean longer fabrication times), the complexity of any optical coatings that must be applied to the finished parts, further complexities in mounting or bonding lens elements into cells and in the overall lens system assembly, and any post-assembly alignment and quality control testing and tooling required. Tooling costs and delivery schedules can be reduced by using existing tooling at any given shop wherever possible, and by maximizing manufacturing tolerances to the extent possible.
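The "melt re-comp" described above is, at its core, a curve fit of measured indices to a dispersion model. A minimal sketch using a Cauchy-type model (real lens-design programs offer several dispersion formulas, and the index values below are illustrative, roughly BK7-like, not actual melt data):

```python
import numpy as np

# Hypothetical "melt data": index measured on a spectrometer at five wavelengths (um)
wl = np.array([0.4047, 0.4861, 0.5876, 0.6563, 1.0140])      # micrometres
n_meas = np.array([1.5302, 1.5224, 1.5168, 1.5143, 1.5075])  # illustrative values

# Cauchy model n = A + B/wl^2 + C/wl^4 is linear in (A, B, C): fit by least squares
design = np.column_stack([np.ones_like(wl), wl**-2, wl**-4])
A, B, C = np.linalg.lstsq(design, n_meas, rcond=None)[0]

def n_fit(lam_um):
    """Interpolated index at any wavelength inside the fitted range."""
    return A + B / lam_um**2 + C / lam_um**4

print(f"n(0.55 um) = {n_fit(0.55):.5f}")
```

The fitted curve then supplies the index at the design wavelengths during re-optimization, in place of the catalog averages.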
A simple two-element air-spaced lens has nine variables (four radii of curvature, two thicknesses, one airspace thickness, and two glass types). A multi-configuration lens corrected over a wide spectral band and field of view over a range of focal lengths and over a realistic temperature range can have a complex design volume having over one hundred dimensions. Lens optimization techniques that can navigate this multi-dimensional space and proceed to local minima have been studied since the 1940s, beginning with early work by James G. Baker , and later by Feder, [ 3 ] Wynne, [ 4 ] Glatzel, [ 5 ] Grey [ 6 ] and others. Prior to the development of digital computers , lens optimization was a hand-calculation task using trigonometric and logarithmic tables to plot 2-D cuts through the multi-dimensional space. Computerized ray tracing allows the performance of a lens to be modelled quickly, so that the design space can be searched rapidly. This allows design concepts to be rapidly refined. Popular optical design software includes Zemax 's OpticStudio, Synopsys 's Code V, and Lambda Research's OSLO . In most cases the designer must first choose a viable design for the optical system, and then numerical modelling is used to refine it. [ 7 ] The designer ensures that designs optimized by the computer meet all requirements, and makes adjustments or restarts the process when they do not.
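A toy version of the ray tracing such programs perform, restricted to the paraxial (ABCD-matrix) approximation; this is a sketch of the principle, not of how Code V or OpticStudio work internally:

```python
import numpy as np

def thin_lens(f_mm):
    """Paraxial (ABCD) refraction matrix of a thin lens with focal length f."""
    return np.array([[1.0, 0.0], [-1.0 / f_mm, 1.0]])

def gap(d_mm):
    """Paraxial transfer matrix for free propagation over distance d."""
    return np.array([[1.0, d_mm], [0.0, 1.0]])

# A ray is (height y in mm, slope u). Trace a ray parallel to the axis
# through a 100 mm thin lens, then one focal length of free space.
ray = np.array([5.0, 0.0])
ray = gap(100.0) @ thin_lens(100.0) @ ray
print(ray)   # height ~0 mm: the ray crosses the axis at the focal point
```

Real design codes trace exact (non-paraxial) rays through each surface and feed aggregate error metrics into the optimizer, but the matrix picture above is the paraxial starting point the article mentions.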
https://en.wikipedia.org/wiki/Optical_lens_design
Optical lift is an optical analogue of aerodynamic lift , in which a cambered refractive object with differently shaped top and bottom surfaces experiences a stable transverse lift force when placed in a uniform stream of light . [ 1 ] The ability of light to apply pressure to objects is known as radiation pressure , which was first postulated in 1619 and proven in 1900. This is the principle behind the solar sail , which uses light radiation pressure to move through space . A 2010 study by physicist Grover Swartzlander and colleagues at the Rochester Institute of Technology in Rochester, New York showed that light is also capable of creating the more complex force of " lift ", the force generated by airfoils that makes an airplane rise upwards as it travels forward. The study was published in December 2010 in the journal Nature Photonics . Swartzlander predicted and then experimentally verified at the micrometer scale that a semi-cylindrical refractive rod illuminated by a beam of laser light automatically torques into a stable angle of attack and then exhibits uniform motion . [ 1 ] The work began with computer models suggesting that when light is incident on a tiny object shaped like a wing , a stable lift force is applied to the particle. The researchers then carried out physical experiments in the laboratory, creating tiny, transparent, micrometer-sized rods that were flat on one side and rounded on the other, rather like airplane wings. They immersed the lightfoils in water and illuminated them with 130 mW of infrared laser light from underneath the chamber. Radiation pressure pushes the particles along the direction of propagation (the scattering force ), but the particles were also forced to the side, in a direction perpendicular to the propagating light. This transverse force on the particles is the lift force.
The researchers discovered not only that the rods experienced stable lift, but that, depending on the refractive index, a rod could settle into either of up to two stable angles of attack when exposed to the laser light. Symmetrical spheres tested did not exhibit this lift effect. [ 2 ] In optical lift, created by a "lightfoil", the lift arises within the transparent object as light shines through it and is refracted by its inner surfaces. In the lightfoil rods a greater proportion of light leaves in a direction perpendicular to the beam, and this side therefore experiences a larger radiation pressure and hence lift. [ 2 ] The 2010 discovery of stable optical lift is considered by some physicists to be "most surprising". [ 3 ] Unlike optical tweezers , an intensity gradient is not required to achieve a transverse force. Many rods may therefore be lifted simultaneously in a single quasi-uniform beam of light. Swartzlander and his team propose using optical lift to power micromachines, transport microscopic particles in a liquid, or to help with the self-alignment and steering of solar sails , [ 3 ] a form of spacecraft propulsion for interstellar space travel. Solar sails are generally designed to harness light to "push" a spacecraft, whereas Swartzlander's team designed their lightfoil to generate lift in a perpendicular direction; this is where the idea of steering a future solar sail spacecraft may be applied. [ 4 ] Swartzlander said the next step would be to test lightfoils in air and to experiment with a variety of materials with different refractive properties, and with incoherent light. [ 2 ]
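For a sense of scale, the scattering force itself is tiny: a fully absorbed beam exerts F = P/c, and a perfectly reflected one 2P/c. A sketch using the 130 mW beam power mentioned above (the absorption assumption is illustrative):

```python
C_LIGHT = 299_792_458.0  # speed of light, m/s

def radiation_force_newtons(power_watts, reflectivity=0.0):
    """Radiation-pressure force of a normally incident beam:
    P/c when fully absorbed, up to 2P/c when fully reflected."""
    return (1.0 + reflectivity) * power_watts / C_LIGHT

# 130 mW beam, assumed fully absorbed
force = radiation_force_newtons(0.130)
print(f"{force * 1e9:.2f} nN")   # a fraction of a nanonewton
```

Forces this small are negligible at everyday scales but significant for micrometer-sized rods, which is why the experiments were done on microscopic lightfoils.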
https://en.wikipedia.org/wiki/Optical_lift
Optical manufacturing and testing is the process of manufacturing and testing optical components. It spans a wide range of manufacturing procedures and optical test configurations. The manufacture of a conventional spherical lens typically begins with the generation of the optic's rough shape by grinding a glass blank. [ 1 ] This can be done, for example, with ring tools. Next, the lens surface is polished to its final form. Typically this is done by lapping —rotating and rubbing the rough lens surface against a tool with the desired surface shape, with a mixture of abrasives and fluid in between. Typically a carved pitch tool is used to polish the surface of a lens. The abrasive mixture, called a slurry, is typically made from cerium oxide or zirconium oxide in water, with lubricants added to facilitate movement of the pitch tool without sticking to the lens. The particle size in the slurry is adjusted to get the desired shape and finish. Types of lapping include planetary lapping, double-sided lapping, and cylindrical lapping. [ 2 ] During polishing, the lens may be tested to confirm that the desired shape is being produced, and to ensure that the final shape has the correct form to within the allowed precision. The deviation of an optical surface from the correct shape is typically expressed in fractions of a wavelength , for some convenient wavelength of light (perhaps the wavelength at which the lens is to be used, or a visible wavelength for which a source is available). Inexpensive lenses may have deviations of form as large as several wavelengths (λ, 2λ, etc.). More typical industrial lenses would have deviations no larger than a quarter wavelength (λ/4). Precision lenses for use in applications such as lasers , interferometers , and holography have surfaces with a tenth of a wavelength (λ/10) tolerance or better. In addition to surface profile, a lens must meet requirements for surface quality (scratches, pits, specks, etc.) and accuracy of dimensions.
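Converting a fractional-wavelength form tolerance into physical units is straightforward; the sketch below assumes the 632.8 nm HeNe test wavelength, a common (but here assumed) choice:

```python
HELIUM_NEON_NM = 632.8  # common interferometer test wavelength, in nm (assumed)

def form_tolerance_nm(wave_fraction, test_wavelength_nm=HELIUM_NEON_NM):
    """Surface-form tolerance in nanometres, e.g. wave_fraction=0.25 for lambda/4."""
    return wave_fraction * test_wavelength_nm

for label, frac in [("lambda/1", 1.0), ("lambda/4", 0.25), ("lambda/10", 0.10)]:
    print(f"{label:10s} -> {form_tolerance_nm(frac):6.1f} nm")
```

A λ/10 tolerance at this wavelength is about 63 nm of allowed surface error, which illustrates why precision polishing and interferometric testing go hand in hand.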
Unconventional techniques include single-point diamond turning (SPDT) and magnetorheological finishing (MRF). [ 3 ] Free-abrasive grinding is a technique to grind down the surface of a material before polishing. It involves the use of small particles of grit to grind away small chips of material from the surface of an optical workpiece. The grit particles are known as free abrasives. The particles are added to a liquid slurry, which goes between a grinding plate and the material. Sliding motions between the grinding plate and the material are used. [ 4 ] After grinding, there is a small amount of surface roughness, which depends on the size of the grit. There is also a small amount of fracturing below the surface of the material, known as subsurface damage (SSD). [ 4 ] To reduce the amount of surface roughness and subsurface damage, additional grinding at a smaller grit size can be done. [ 4 ] Typically, two or three stages of grinding are used, with each successive stage using a finer grit. For example, a typical set of grit stages is 30 micrometers, then 15 micrometers, then 9 micrometers. An alternate set of typical grit stages is 20 micrometers, then 12 micrometers, then 5 micrometers. [ 4 ] Types of abrasives include aluminium oxide , industrial diamond, and silicon carbide . Diamond is typically only used for grinding down very hard materials, or for certain crystals. [ 5 ] Optics are polished in a slurry of abrasive particles, a fluid carrier, and optional additives. [ 6 ] Types of abrasive particles that can be used include cerium(IV) oxide , diamond, aluminum oxide, and colloidal silica . [ 6 ] Optional additives include suspension agents, lubricants, and detergents. [ 6 ] There are various materials that can be used for optical components, including various types of glass, fused silica , silicon , and crystal quartz. [ 7 ] Calcium fluoride (CaF2) can be used as an optical material, although it is easily fractured and scratched.
[ 7 ] Materials for infrared optical components include zinc selenide (ZnSe), zinc sulfide (ZnS), and gallium arsenide (GaAs). [ 7 ] The specifications for optical components vary based on their type: Specifications for prisms include pyramidal error, beam path, beam displacement and deviation, base angle, roof edge chips, wavefront, and polarization. [ 8 ] Specifications for aspheric lenses include base radius with tolerance, conic and polynomial coefficients, best-fit sphere reference, sag table reference, sag error tolerance, slope errors versus bandwidth, wavefront per specified test, tilt, and decenter. [ 8 ] Optical coating specifications include apertures, reflection, transmission, absorption, phase shift, adhesion, abrasion resistance, and damage threshold. [ 8 ] Because material ground below the minimum thickness cannot be restored, opticians strive to meet all other specifications for an optical component at the maximum allowable thickness within tolerance. [ 9 ] Surface quality is the condition of the surface of an optical component. It indicates the presence of imperfections, such as scratches and pits. [ 10 ] It is typically rated according to scratch-dig (S-D) specifications. [ 11 ] Standards for specifying surface quality include the U.S. Military Performance Specification MIL-PRF-13830B and ISO 10110. [ 10 ] MIL-PRF-13830B was formerly MIL-O-13830a. Other standards include MIL-C-48497a and MIL-F-48616, which are formally inactive and apply only to coatings. [ 11 ] All three of these military standards lack specifications for statistical surface parameters, such as root-mean-square roughness, slope error, and ripple. [ 11 ] An extension and improvement to MIL-PRF is the ANSI/OEOSC OP1.002 standard. [ 11 ] The Fizeau interferometer is the standard type of interferometer used in optical fabrication. [ 12 ] Stitching interferometry can be used for testing aspheres. It involves performing subaperture tests that are stitched together into a single high-resolution image. [ 13 ]
https://en.wikipedia.org/wiki/Optical_manufacturing_and_testing
Optical mapping [ 1 ] is a technique for constructing ordered, genome-wide, high-resolution restriction maps from single, stained molecules of DNA, called "optical maps". By mapping the location of restriction enzyme sites along the unknown DNA of an organism, the spectrum of resulting DNA fragments collectively serves as a unique "fingerprint" or "barcode" for that sequence. Originally developed by Dr. David C. Schwartz and his lab at NYU in the 1990s, [ 2 ] this method has since been integral to the assembly process of many large-scale sequencing projects for both microbial and eukaryotic genomes. Later technologies use DNA melting, [ 3 ] DNA competitive binding [ 4 ] or enzymatic labelling [ 5 ] [ 6 ] to create the optical mappings. The modern optical mapping platform works as follows: [ 7 ] DNA molecules were fixed on molten agarose developed between a cover slip and a microscope slide. Restriction enzyme was pre-mixed with the molten agarose before DNA placement, and cleavage was triggered by the addition of magnesium. Rather than being immobilized within a gel matrix, DNA molecules were later held in place by electrostatic interactions on a positively charged surface. Resolution improved such that fragments from ~30 kb down to as small as 800 bp could be sized. This involved the development and integration of an automated spotting system to spot multiple single molecules on a slide (like a microarray) for parallel enzymatic processing, automated fluorescence microscopy for image acquisition, image-processing software to handle the images, algorithms for optical map construction, and cluster computing for processing large amounts of data. Observing that microarrays spotted with single molecules did not work well for large genomic DNA molecules, microfluidic devices made by soft lithography and possessing a series of parallel microchannels were developed.
An improvement on optical mapping, called "Nanocoding", [ 8 ] has the potential to boost throughput by trapping elongated DNA molecules in nanoconfinements. The advantage of optical mapping over traditional techniques is that it preserves the order of the DNA fragments, whereas with conventional restriction mapping the order needs to be reconstructed. In addition, since maps are constructed directly from genomic DNA molecules, cloning or PCR artifacts are avoided. However, each optical mapping process is still affected by false positive and false negative sites, because not all restriction sites are cleaved in each molecule and some sites may be incorrectly cut. In practice, multiple optical maps are created from molecules of the same genomic region, and an algorithm is used to determine the best consensus map. [ 9 ] There are a variety of approaches to identifying large-scale genomic variations (such as indels, duplications, inversions, translocations) between genomes. Other categories of methods include using microarrays , pulsed-field gel electrophoresis , cytogenetics and paired-end tags . Initially, the optical mapping system was used to construct whole-genome restriction maps of bacteria, parasites, and fungi. [ 10 ] [ 11 ] [ 12 ] It has also been used to scaffold and validate bacterial genomes. [ 13 ] To serve as scaffolds for assembly, assembled sequence contigs can be scanned for restriction sites in silico using known sequence data and aligned to the assembled genomic optical map. The commercial company OpGen has provided optical maps for microbial genomes. For larger eukaryotic genomes, only the David C. Schwartz lab (now at Madison-Wisconsin) has produced optical maps for mouse, [ 14 ] human, [ 15 ] rice, [ 16 ] and maize. [ 17 ] Optical sequencing is a single-molecule DNA sequencing technique that follows sequencing-by-synthesis and uses optical mapping technology.
[ 18 ] [ 19 ] Similar to other single-molecule sequencing approaches such as SMRT sequencing , this technique analyzes a single DNA molecule, rather than amplifying the initial sample and sequencing multiple copies of the DNA. During synthesis, fluorochrome-labeled nucleotides are incorporated through the use of DNA polymerases and tracked by fluorescence microscopy . This technique was originally proposed by David C. Schwartz and Arvind Ramanathan in 2003. The following is an overview of each cycle in the optical sequencing process. [ 20 ] Step 1: DNA barcoding Cells are lysed to release genomic DNA. These DNA molecules are untangled, placed onto an optical mapping surface containing microfluidic channels, and the DNA is allowed to flow through the channels. These molecules are then barcoded by restriction enzymes to allow for genomic localization through the technique of optical mapping. See the above section on "Technology" for those steps. Step 2: Template nicking DNase I is added to randomly nick the mounted DNA molecules. A wash is then performed to remove the DNase I. The mean number of nicks per template depends on the concentration of DNase I as well as the incubation time. Step 3: Gap formation T7 exonuclease is added, which expands the nicks in the DNA molecules into gaps in a 5'–3' direction. The amount of T7 exonuclease must be carefully controlled to avoid overly high levels of double-stranded breaks. Step 4: Fluorochrome incorporation DNA polymerase is used to incorporate fluorochrome-labeled nucleotides (FdNTPs) into the multiple gapped sites along each DNA molecule. During each cycle, the reaction mixture contains a single type of FdNTP and allows for multiple additions of that nucleotide type. Various washes are then performed to remove unincorporated FdNTPs in preparation for imaging and the next cycle of FdNTP addition.
Step 5: Imaging This step counts the number of incorporated fluorochrome-labeled nucleotides at the gap regions using fluorescence microscopy. Step 6: Photobleaching The laser illumination that is used to excite the fluorochrome is also used here to destroy the fluorochrome signal. This essentially resets the fluorochrome counter and prepares it for the next cycle. This step is a unique aspect of optical sequencing in that it does not actually remove the fluorochrome label of the nucleotide after its incorporation. Not removing the fluorochrome label makes sequencing more economical, but it requires fluorochrome labels to be incorporated consecutively, which can cause problems due to the bulkiness of the labels. Step 7: Repeat steps 4-6 Steps 4-6 are repeated, with step 4 using a reaction mixture that contains a different fluorochrome-labeled nucleotide (FdNTP) each time. This is repeated until the desired region is sequenced. Selection of an appropriate DNA polymerase is critical to the efficiency of the base addition step, and the polymerase must meet several criteria. In addition, differing polymerase preferences for different fluorochromes, the linker length on fluorochrome-nucleotides, and buffer compositions are important factors to consider in optimizing the base addition process and maximizing the number of consecutive FdNTP incorporations. Single-molecule analysis: since only a minimal DNA sample is required, the time-consuming and costly amplification step is avoided, streamlining sample preparation. Large DNA molecule templates (~500 kb) vs. short DNA molecule templates (< 1 kb): while most next-generation sequencing technologies aim to produce massive numbers of short sequence reads, such short reads make de novo sequencing efforts and genomic repeat regions difficult to resolve. Optical sequencing uses large DNA molecule templates (~500 kb), which offer several advantages over small templates.
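The cycling in steps 4-6 can be illustrated with a toy simulation: each cycle offers one FdNTP type, a homopolymer run of matching bases is incorporated and counted at the imaging step, and the signal is then photobleached. This idealized sketch ignores polymerase errors and the label-bulkiness limits discussed above:

```python
def optical_sequencing_cycles(target, cycle_order="ACGT"):
    """Idealized simulation of cyclic FdNTP incorporation at one gap.

    Each cycle floods the gap with one nucleotide type; a run of matching
    bases is incorporated, counted by imaging, then photobleached.
    `target` is the base sequence to be synthesized into the gap.
    """
    counts, pos, cycle = [], 0, 0
    while pos < len(target):
        base = cycle_order[cycle % len(cycle_order)]
        run = 0
        while pos < len(target) and target[pos] == base:
            run += 1
            pos += 1
        counts.append((base, run))  # imaging: count labels, then photobleach
        cycle += 1
    return counts

print(optical_sequencing_cycles("AACGGT"))
```

The list of (base, count) pairs reconstructs the synthesized sequence, including zero-count cycles in which the offered nucleotide did not match the next template position.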
https://en.wikipedia.org/wiki/Optical_mapping
In optics , an optical medium is a material through which light and other electromagnetic waves propagate. It is a form of transmission medium . The permittivity and permeability of the medium define how electromagnetic waves propagate in it. The optical medium has an intrinsic impedance , given by η = E x / H y {\displaystyle \eta =E_{x}/H_{y}} where E x {\displaystyle E_{x}} and H y {\displaystyle H_{y}} are the electric field and magnetic field , respectively. In a region with no electrical conductivity , the expression simplifies to: η = μ / ε {\displaystyle \eta ={\sqrt {\mu /\varepsilon }}} For example, in free space the intrinsic impedance is called the characteristic impedance of vacuum , denoted Z 0 , and Z 0 = μ 0 / ε 0 ≈ 376.7 Ω {\displaystyle Z_{0}={\sqrt {\mu _{0}/\varepsilon _{0}}}\approx 376.7\ \Omega } Waves propagate through a medium with velocity c w = ν λ {\displaystyle c_{w}=\nu \lambda } , where ν {\displaystyle \nu } is the frequency and λ {\displaystyle \lambda } is the wavelength of the electromagnetic waves. This equation may also be put in the form c w = ω / k {\displaystyle c_{w}=\omega /k} where ω {\displaystyle \omega } is the angular frequency of the wave and k {\displaystyle k} is the wavenumber of the wave. In electrical engineering , the symbol β {\displaystyle \beta } , called the phase constant , is often used instead of k {\displaystyle k} . The propagation velocity of electromagnetic waves in free space , an idealized standard reference state (like absolute zero for temperature), is conventionally denoted by c 0 : [ 1 ] c 0 = 1 / μ 0 ε 0 {\displaystyle c_{0}=1/{\sqrt {\mu _{0}\varepsilon _{0}}}} For a general introduction, see Serway. [ 2 ] For a discussion of synthetic media, see Joannopoulus. [ 3 ]
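These relations are simple to evaluate numerically; a sketch computing the characteristic impedance of vacuum and the corresponding propagation velocity from the standard vacuum constants:

```python
import math

MU_0 = 4e-7 * math.pi          # vacuum permeability, H/m (classical defined value)
EPS_0 = 8.8541878128e-12       # vacuum permittivity, F/m

def intrinsic_impedance(eps, mu):
    """Intrinsic impedance of a non-conducting medium: sqrt(mu/eps), in ohms."""
    return math.sqrt(mu / eps)

Z0 = intrinsic_impedance(EPS_0, MU_0)
c0 = 1.0 / math.sqrt(MU_0 * EPS_0)
print(f"Z0 = {Z0:.2f} ohm")    # characteristic impedance of vacuum, ~376.73 ohm
print(f"c0 = {c0:.0f} m/s")    # ~2.998e8 m/s
```

The same `intrinsic_impedance` function applies to any non-conducting medium once its permittivity and permeability are known.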
https://en.wikipedia.org/wiki/Optical_medium
The optical metric was defined by German theoretical physicist Walter Gordon in 1923 [ 1 ] to study geometrical optics in curved space-time filled with moving dielectric materials. Let u a be the normalized (covariant) 4-velocity of the arbitrarily-moving dielectric medium filling the space-time, and assume that the fluid's electromagnetic properties are linear, isotropic, transparent, nondispersive, and can be summarized by two scalar functions: a dielectric permittivity ε and a magnetic permeability μ . [ 2 ] Then the optical metric tensor is defined as g ^ a b = g a b ± ( 1 − 1 ε μ ) u a u b {\displaystyle {\hat {g}}_{ab}=g_{ab}\pm \left(1-{\frac {1}{\varepsilon \mu }}\right)u_{a}u_{b}} where g a b {\displaystyle g_{ab}} is the physical metric tensor . The sign of ± {\displaystyle \pm } is determined by the metric signature convention used: ± {\displaystyle \pm } is replaced with a plus sign (+) for a metric signature (-,+,+,+), while a minus sign (-) is chosen for (+,-,-,-). The inverse (contravariant) optical metric tensor is g ^ a b = g a b ∓ ( ε μ − 1 ) u a u b {\displaystyle {\hat {g}}^{ab}=g^{ab}\mp (\varepsilon \mu -1)u^{a}u^{b}} where u a is the contravariant 4-velocity of the moving fluid. Note that the traditional refractive index is defined as n ( x ) ≡ ε μ {\displaystyle n(x)\equiv {\sqrt {\varepsilon \mu }}} . An important fact about Gordon's optical metric is that in curved space-time filled with dielectric material, electromagnetic waves (under the geometrical optics approximation) follow geodesics of the optical metric instead of the physical metric. Consequently, the study of geometric optics in curved space-time with dielectric material can sometimes be simplified by using the optical metric (note that the dynamics of the physical system is still described by the physical metric). For example, the optical metric can be used to study radiative transfer in stellar atmospheres around compact astrophysical objects such as neutron stars and white dwarfs, and in accretion disks around black holes. [ 3 ] In cosmology, the optical metric can be used to study the distance-redshift relation in cosmological models in which the intergalactic or interstellar medium has a non-vanishing refraction index.
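The covariant and contravariant forms of Gordon's metric (as standardly given in the literature, with n² = εμ) can be checked for mutual consistency; a short verification assuming the upper sign choice, signature (-,+,+,+), and u_a u^a = −1:

```latex
% Covariant optical metric and its inverse (upper signs, signature (-,+,+,+)):
%   \hat{g}_{ab} = g_{ab} + (1 - 1/n^2)\, u_a u_b,
%   \hat{g}^{ab} = g^{ab} - (n^2 - 1)\, u^a u^b,  \quad n^2 = \varepsilon\mu.
\begin{aligned}
\hat{g}_{ab}\,\hat{g}^{bc}
 &= \delta_a{}^{c}
  + \Bigl[\bigl(1-\tfrac{1}{n^{2}}\bigr) - (n^{2}-1)
  + \bigl(1-\tfrac{1}{n^{2}}\bigr)(n^{2}-1)\Bigr]\,u_a u^{c} \\
 &= \delta_a{}^{c}
  + \bigl[(n^{2}-1) - (n^{2}-1)\bigr]\,u_a u^{c}
  \;=\; \delta_a{}^{c},
\end{aligned}
```

where the cross term picks up a sign change from u_b u^b = −1, so the bracket vanishes identically and the two tensors are indeed inverses.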
After the original introduction of the concept of the optical metric by Gordon in 1923, the mathematical formalism of the optical metric was further investigated by Jürgen Ehlers in 1967, [ 4 ] including a detailed discussion of the geometrical optics approximation in curved space-time and the optical scalars transport equation. Gordon's optical metric was extended by Bin Chen and Ronald Kantowski [ 5 ] to include light absorption. The original real optical metric was consequently extended into a complex one. The optical metric was further generalized by Robert Thompson [ 6 ] from simple isotropic media described only by scalar-valued ε and μ to bianisotropic, magnetoelectrically coupled media residing in curved background space-times. The first application of Gordon's optical metric theory to cosmology was also made by Bin Chen and Ronald Kantowski. [ 7 ] The absorption-corrected distance-redshift relation in the homogeneous and isotropic Friedmann-Lemaître-Robertson-Walker (FLRW) universe is called the Gordon-Chen-Kantowski formalism [ 8 ] and can be used to study the absorption of intergalactic medium (or cosmic opacity) in the Universe. For example, the physical metric for a Robertson-Walker spacetime can be written (using the metric signature (-,+,+,+)) d s 2 = − d t 2 + R 2 ( t ) [ d r 2 / ( 1 − k r 2 ) + r 2 d θ 2 + r 2 sin 2 ⁡ θ d ϕ 2 ] {\displaystyle ds^{2}=-dt^{2}+R^{2}(t)\left[dr^{2}/(1-kr^{2})+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\right]} where k = 1 , 0 , − 1 {\displaystyle k=1,0,-1} for a closed, flat, or open universe, and R ( t ) {\displaystyle R(t)} is the scale factor . On the other hand, the optical metric for a Robertson-Walker universe filled with a homogeneous refractive medium at rest is d s ^ 2 = − d t 2 / n 2 ( t ) + R 2 ( t ) [ d r 2 / ( 1 − k r 2 ) + r 2 d θ 2 + r 2 sin 2 ⁡ θ d ϕ 2 ] {\displaystyle d{\hat {s}}^{2}=-dt^{2}/n^{2}(t)+R^{2}(t)\left[dr^{2}/(1-kr^{2})+r^{2}d\theta ^{2}+r^{2}\sin ^{2}\theta \,d\phi ^{2}\right]} where n ( t ) {\displaystyle n(t)} is the cosmic-time dependent refraction index. The luminosity distance - redshift relation in a flat FLRW universe with dark absorption can be written d L ( z ) = c H 0 ( 1 + z ) e τ ( z ) / 2 ∫ 0 z d z ′ h ( z ′ ) {\displaystyle d_{L}(z)={\frac {c}{H_{0}}}(1+z)\,e^{\tau (z)/2}\int _{0}^{z}{\frac {dz'}{h(z')}}} where z is the cosmological redshift, c is the light speed, H 0 the Hubble constant , τ is the optical depth caused by absorption (or the so-called cosmic opacity), and h(z) is the dimensionless Hubble curve.
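The observable effect of a non-zero cosmic opacity is a shift in the distance modulus: flux is attenuated by e^(−τ), so a standard candle appears dimmer by Δm = 2.5 log10(e)·τ ≈ 1.086 τ magnitudes. A minimal sketch of this relation (the formula is standard photometry, not taken from the article):

```python
import math

def opacity_dimming_mag(tau):
    """Extra distance-modulus dimming from cosmic opacity tau:
    flux attenuated by exp(-tau) -> dm = 2.5 * log10(e) * tau (~1.086 tau)."""
    return 2.5 * math.log10(math.e) * tau

# e.g. an optical depth of 0.1 to a supernova mimics ~0.11 mag of extra dimming
print(f"{opacity_dimming_mag(0.1):.3f} mag")
```

This is the quantitative sense in which opacity can mimic the extra dimming otherwise attributed to accelerated expansion.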
A non-zero cosmic opacity will render standard candles such as Type Ia supernovae dimmer than expected from a transparent Universe. This can be used as an alternative explanation of the observed apparent acceleration of the cosmic expansion. In analog models of gravity , the "Gordon form" expresses the metric for a curved spacetime as the sum of a flat (Minkowski) metric and a term built from a 4-velocity field u : g a b = η a b + ( 1 − 1 n 2 ) u a u b {\displaystyle g_{ab}=\eta _{ab}+\left(1-{\frac {1}{n^{2}}}\right)u_{a}u_{b}} where n is the refractive index. This is analogous to the Kerr-Schild form, which uses a null vector field in place of a timelike one. An open question is which spacetimes can be expressed in this way. The challenge is to pick coordinate systems for which the above relationship holds. Schwarzschild spacetime , which describes a non-rotating black hole, can be expressed this way. [ 9 ] There has been progress for Kerr spacetime , which describes a rotating black hole, but this case remains elusive. [ 10 ] The dielectric permittivity ε and magnetic permeability μ are usually understood within the 3-vector representation of electrodynamics via the relations D → = ε E → {\textstyle {\vec {D}}=\varepsilon {\vec {E}}} and B → = μ H → , {\textstyle {\vec {B}}=\mu {\vec {H}},} where E → , B → , D → , {\textstyle {\vec {E}},{\vec {B}},{\vec {D}},} and H → {\textstyle {\vec {H}}} are, respectively, the electric field , magnetic flux density , electric displacement , and magnetic field intensity , and where ε and μ could be matrices. On the other hand, general relativity is formulated in the language of 4-dimensional tensors. To obtain the tensorial optical metric, medium properties such as permittivity, permeability, and magnetoelectric couplings must first be promoted to 4-dimensional covariant tensors, and the electrodynamics of light propagation through such media residing within a background space-time must also be expressed in a compatible 4-dimensional way.
Here, electrodynamic fields will be described in terms of differential forms , exterior algebra , and the exterior derivative . Similar to the way that 3-vectors are denoted with an arrow, as in E → , {\textstyle {\vec {E}},} 4-dimensional tensors will be denoted by bold symbols, for example E . {\displaystyle {\boldsymbol {E}}.} The musical isomorphisms will be used to indicate raising and lowering of indices with the metric, and a dot notation is used to denote contraction on adjacent indices, e.g. u ⋅ F = u α F α β . {\displaystyle {\boldsymbol {u}}\cdot {\boldsymbol {F}}=u^{\alpha }F_{\alpha \beta }.} The speed of light is set to c = 1 , {\displaystyle c=1,} and the vacuum permeability and permittivity are likewise set to 1. The fundamental quantity of electrodynamics is the potential 1-form A , {\displaystyle {\boldsymbol {A}},} from which the field strength tensor is the 2-form F = d A . {\textstyle {\boldsymbol {F}}=d{\boldsymbol {A}}.} From the nilpotency of the exterior derivative one immediately has the homogeneous Maxwell equations d F = 0 , {\displaystyle d{\boldsymbol {F}}=0,} while a variation of the Yang-Mills action S = ∫ 1 2 F ∧ ⋆ F − A ∧ J {\displaystyle S=\int {\frac {1}{2}}{\boldsymbol {F}}\wedge \star {\boldsymbol {F}}-{\boldsymbol {A}}\wedge {\boldsymbol {J}}} with respect to A {\displaystyle {\boldsymbol {A}}} provides the inhomogeneous Maxwell equations d ⋆ F = J {\displaystyle d\star {\mathbf {F} }={\mathbf {J} }} where J {\displaystyle {\boldsymbol {J}}} is the charge-current 3-form. [ 11 ] Within dielectric media there exist charges bound up in otherwise neutral atoms. These charges are not free to move around very much, but distortions to the distribution of charge within the atom can allow dipole (or more generally multipole) moments to form, with which is associated a dipole field. 
Separating bound and free charges in the charge-current three form J = J b o u n d + J f r e e , {\textstyle {\boldsymbol {J}}={\boldsymbol {J}}_{bound}+{\boldsymbol {J}}_{free},} the bound source is associated with a particular solution called the polarization field P {\textstyle {\boldsymbol {P}}} satisfying d ⋆ P = J b o u n d . {\displaystyle d\star {\boldsymbol {P}}={\boldsymbol {J}}_{bound}.} One may then write d G = d ⋆ ( F + P ) = J f r e e {\displaystyle d{\boldsymbol {G}}=d\star ({\boldsymbol {F}}+{\boldsymbol {P}})={\boldsymbol {J}}_{free}} with the constitutive equation G = ⋆ ( F + P ) . {\displaystyle {\boldsymbol {G}}=\star ({\boldsymbol {F}}+{\boldsymbol {P}}).} In linear media, the dipole moment is induced by the incident free field in such a way that the polarization field is linearly proportional to the free field, P = ζ ( F ) {\displaystyle {\boldsymbol {P}}={\boldsymbol {\zeta }}({\boldsymbol {F}})} (in indices this is P α β = ζ α β μ ν F μ ν {\displaystyle P_{\alpha \beta }=\zeta _{\alpha \beta }{}^{\mu \nu }F_{\mu \nu }} ). Then the constitutive equation can be written G = ⋆ χ F . {\displaystyle {\boldsymbol {G}}=\star {\boldsymbol {\chi }}{\boldsymbol {F}}.} The ( 2 2 ) {\textstyle {\binom {2}{2}}} tensor χ = χ α β μ ν {\displaystyle {\boldsymbol {\chi }}=\chi _{\alpha \beta }{}^{\mu \nu }} is antisymmetric in each pair of indices, and the vacuum is seen to be a trivial dielectric such that χ v a c F = F . {\textstyle {\boldsymbol {\chi }}_{vac}{\boldsymbol {F}}={\boldsymbol {F}}.} This means that the distribution of dielectric material within the curved background space-time can be completely described functionally by giving χ {\textstyle \chi } and smooth transitions from vacuum into media can be described. The electric and magnetic fields E → , B → , D → , {\textstyle {\vec {E}},{\vec {B}},{\vec {D}},} and H → , {\textstyle {\vec {H}},} as they are commonly understood in the 3-vector representation, have no independent existence. 
They are merely different parts of the 2-forms F {\textstyle {\boldsymbol {F}}} and G , {\displaystyle {\boldsymbol {G}},} as measured relative to a chosen observer. Let u {\displaystyle {\boldsymbol {u}}} be the contravariant velocity 4-vector of the observer. Then one may define the covariant 1-forms E = u ⋅ F , B = − u ⋅ ⋆ F , {\displaystyle {\boldsymbol {E}}={\boldsymbol {u}}\cdot {\boldsymbol {F}},\quad {\boldsymbol {B}}=-{\boldsymbol {u}}\cdot \star {\boldsymbol {F}},} D = − u ⋅ ⋆ G , H = − u ⋅ G . {\displaystyle \mathbf {D} =-\mathbf {u} \cdot \star \mathbf {G} ,\quad \mathbf {H} =-\mathbf {u} \cdot \mathbf {G} .} The corresponding 3-vectors are obtained in Minkowski space-time by taking the purely spatial (relative to the observer) parts of the contravariant versions of these 1-forms. These 1-form field definitions can be used to re-express the 2-form constitutive equation as a set of two 1-form equations [ 6 ] D = ε c ⋅ E + γ b c ⋅ B , {\displaystyle {\boldsymbol {D}}={\boldsymbol {\varepsilon }}^{c}\cdot {\boldsymbol {E}}+{\boldsymbol {\gamma }}_{b}^{c}\cdot {\boldsymbol {B}},} H = ξ ⋅ B + γ e c ⋅ E . 
{\displaystyle {\boldsymbol {H}}={\boldsymbol {\xi }}\cdot {\boldsymbol {B}}+{\boldsymbol {\gamma }}_{e}^{c}\cdot \mathbf {E} .} where the ( 1 1 ) {\textstyle {\binom {1}{1}}} tensors ε c , ξ , γ b c , {\displaystyle {\boldsymbol {\varepsilon }}^{c},{\boldsymbol {\xi }},{\boldsymbol {\gamma }}_{b}^{c},} and γ e c {\displaystyle {\boldsymbol {\gamma }}_{e}^{c}} are ε c = − 2 ( u ⋅ χ ⋅ u ♭ ) , {\displaystyle {\boldsymbol {\varepsilon }}^{c}=-2({\boldsymbol {u}}\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {u}}^{\flat }),} ξ = 2 ( u ⋅ ⋆ χ ⋆ ⋅ u ♭ ) , {\displaystyle {\boldsymbol {\xi }}=2({\boldsymbol {u}}\cdot \star {\boldsymbol {\chi }}\star \cdot {\boldsymbol {u}}^{\flat }),} γ b c = − 2 ( u ⋅ χ ⋆ ⋅ u ♭ ) , {\displaystyle {\boldsymbol {\gamma }}_{b}^{c}=-2({\boldsymbol {u}}\cdot {\boldsymbol {\chi }}\star \cdot {\boldsymbol {u}}^{\flat }),} γ e c = 2 ( u ⋅ ⋆ χ ⋅ u ♭ ) . {\displaystyle {\boldsymbol {\gamma }}_{e}^{c}=2({\boldsymbol {u}}\cdot \star {\boldsymbol {\chi }}\cdot {\boldsymbol {u}}^{\flat }).} Note that each of these tensors is orthogonal, or transverse, to u , {\displaystyle {\boldsymbol {u}},} meaning that u ⋅ α = α ⋅ u ♭ = 0 {\displaystyle {\boldsymbol {u}}\cdot {\boldsymbol {\alpha }}={\boldsymbol {\alpha }}\cdot {\boldsymbol {u}}^{\flat }=0} for each α ∈ { ε c , ξ , γ b c , γ e c } {\displaystyle {\boldsymbol {\alpha }}\in \{{\boldsymbol {\varepsilon }}^{c},{\boldsymbol {\xi }},{\boldsymbol {\gamma }}_{b}^{c},{\boldsymbol {\gamma }}_{e}^{c}\}} , which can be seen from the antisymmetry of χ {\displaystyle {\boldsymbol {\chi }}} on each pair of indices. Since each of the 1-form fields defined above is also transverse to u , {\displaystyle {\boldsymbol {u}},} we may conclude that each α {\displaystyle {\boldsymbol {\alpha }}} is an automorphism of a subspace of the cotangent space defined by orthogonality with respect to the observer. In other words, everything operates in the observer's purely spatial 3-dimensional space. 
In terms of these parameters, χ {\displaystyle {\boldsymbol {\chi }}} is found to be [ 6 ] χ = 1 2 [ − ( u ♭ ∧ ε c ∧ u ) + ⋆ ( u ♭ ∧ ξ ∧ u ) − ⋆ ( u ♭ ∧ γ e c ∧ u ) + ( u ♭ ∧ γ b c ∧ u ) ⋆ ] . {\displaystyle {\boldsymbol {\chi }}={\frac {1}{2}}\left[-({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\varepsilon }}^{c}\wedge {\boldsymbol {u}})+\star ({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\xi }}\wedge {\boldsymbol {u}})-\star ({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\gamma }}_{e}^{c}\wedge {\boldsymbol {u}})+({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\gamma }}_{b}^{c}\wedge {\boldsymbol {u}})\star \right].} Although the set of 1-form constitutive equations shown above is the one that follows most naturally from the covariant 2-form constitutive equation G = ⋆ χ F {\displaystyle {\boldsymbol {G}}=\star {\boldsymbol {\chi }}{\boldsymbol {F}}} , it is not the only possibility. Indeed, the traditional 3-vector formulation of the constitutive equations usually relates B → {\displaystyle {\vec {B}}} and H → {\displaystyle {\vec {H}}} by B → = μ H → {\displaystyle {\vec {B}}=\mu {\vec {H}}} . 
Therefore, it could be desirable to rearrange the preceding set of relations into D = ε ⋅ E + γ h ⋅ H , {\displaystyle {\boldsymbol {D}}={\boldsymbol {\varepsilon }}\cdot {\boldsymbol {E}}+{\boldsymbol {\gamma }}_{h}\cdot {\boldsymbol {H}},} B = μ ⋅ H + γ e ⋅ E , {\displaystyle {\boldsymbol {B}}={\boldsymbol {\mu }}\cdot {\boldsymbol {H}}+{\boldsymbol {\gamma }}_{e}\cdot {\boldsymbol {E}},} where ε , μ , γ h , γ e {\displaystyle {\boldsymbol {\varepsilon }},{\boldsymbol {\mu }},{\boldsymbol {\gamma }}_{h},{\boldsymbol {\gamma }}_{e}} are related to ε c , ξ , γ b c , γ e c {\displaystyle {\boldsymbol {\varepsilon }}^{c},{\boldsymbol {\xi }},{\boldsymbol {\gamma }}_{b}^{c},{\boldsymbol {\gamma }}_{e}^{c}} by μ = ξ ¯ , {\displaystyle {\boldsymbol {\mu }}={\bar {\boldsymbol {\xi }}},} ε = ε c − γ b c ⋅ μ ⋅ γ e c , {\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}^{c}-{\boldsymbol {\gamma }}_{b}^{c}\cdot {\boldsymbol {\mu }}\cdot {\boldsymbol {\gamma }}_{e}^{c},} γ e = − μ ⋅ γ e c , {\displaystyle {\boldsymbol {\gamma }}_{e}=-{\boldsymbol {\mu }}\cdot {\boldsymbol {\gamma }}_{e}^{c},} γ h = γ b c ⋅ μ . {\displaystyle {\boldsymbol {\gamma }}_{h}={\boldsymbol {\gamma }}_{b}^{c}\cdot {\boldsymbol {\mu }}.} The 4-dimensional inverse of these tensors does not exist, but the bar notation ξ ¯ {\displaystyle {\bar {\boldsymbol {\xi }}}} denotes an inverse defined with respect to the subspace orthogonal to u , {\displaystyle {\boldsymbol {u}},} which exists and is a valid operation since it was noted above that ξ {\displaystyle {\boldsymbol {\xi }}} is an automorphism of this subspace. In Minkowski space-time, the space-space part (relative to observer u {\displaystyle {\boldsymbol {u}}} ) of each of these tensors is equivalent to the traditional 3 × 3 {\displaystyle 3\times 3} constitutive matrices of 3-vector electrodynamics. 
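The "bar" inverse on the subspace orthogonal to u can be illustrated numerically. The sketch below uses a hypothetical diagonal ξ for an observer at rest in Minkowski spacetime (not a physical medium): the Moore-Penrose pseudo-inverse realizes the inverse on the observer's spatial subspace, and composing it with ξ recovers the spatial projector rather than the full 4×4 identity.

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0, 0.0])       # observer at rest
xi = np.zeros((4, 4))                    # hypothetical xi: acts only on the
xi[1:, 1:] = np.diag([2.0, 3.0, 4.0])    # spatial block, annihilates u

xi_bar = np.linalg.pinv(xi)              # inverse on the u-orthogonal subspace
h = np.diag([0.0, 1.0, 1.0, 1.0])        # spatial projector for this observer
check = xi_bar @ xi                      # equals h, not the 4x4 identity
```

This is why the 4-dimensional inverse does not exist (the u-direction is in the kernel), while the subspace inverse is well defined.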
In terms of this alternative set of constitutive tensors, χ {\displaystyle {\boldsymbol {\chi }}} is found to be [ 6 ] χ = 1 2 [ − ( u ♭ ∧ ε ∧ u ) + [ ⋆ ( u ♭ ∧ h ) + u ♭ ∧ γ h ] ⋅ μ ¯ ⋅ [ h ∧ u ⋆ + γ e ∧ u ] ] . {\displaystyle {\boldsymbol {\chi }}={\frac {1}{2}}\left[-({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\varepsilon }}\wedge {\boldsymbol {u}})+[\star ({\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {h}})+{\boldsymbol {u}}^{\flat }\wedge {\boldsymbol {\gamma }}_{h}]\cdot {\bar {\boldsymbol {\mu }}}\cdot [{\boldsymbol {h}}\wedge {\boldsymbol {u}}\star +{\boldsymbol {\gamma }}_{e}\wedge {\boldsymbol {u}}]\right].} Here, h = δ − u ♭ ⊗ u {\displaystyle {\boldsymbol {h}}={\boldsymbol {\delta }}-{\boldsymbol {u}}^{\flat }\otimes {\boldsymbol {u}}} is a projection operator that annihilates any tensor components parallel to u . {\displaystyle {\boldsymbol {u}}.} Since h ⋅ δ = h , {\displaystyle {\boldsymbol {h}}\cdot {\boldsymbol {\delta }}={\boldsymbol {h}},} then h {\displaystyle {\boldsymbol {h}}} also serves as the Kronecker delta on the subspace orthogonal to u . {\displaystyle {\boldsymbol {u}}.} In the vacuum, ε = μ = h , γ e = γ h = 0. {\displaystyle {\boldsymbol {\varepsilon }}={\boldsymbol {\mu }}={\boldsymbol {h}},{\boldsymbol {\gamma }}_{e}={\boldsymbol {\gamma }}_{h}=0.} For light propagating through linear dielectric media, Maxwell's inhomogeneous equation in the absence of free sources represents a wave equation for A {\displaystyle {\boldsymbol {A}}} in the Lorenz gauge , δ A = 0 {\displaystyle \delta {\boldsymbol {A}}=0} (here δ {\displaystyle \delta } is the codifferential ), given by ⋆ d ⋆ χ d A = δ χ d A = 0. 
{\displaystyle \star d\star {\boldsymbol {\chi }}d\mathbf {A} =\delta {\boldsymbol {\chi }}d\mathbf {A} =0.} A JWKB type approximation of plane wave solutions is assumed such that A = A ^ e − ( i λ ) − 1 S {\displaystyle {\boldsymbol {A}}={\hat {\boldsymbol {A}}}e^{-(i\lambda )^{-1}S}} where the amplitude A ^ {\displaystyle {\hat {\boldsymbol {A}}}} is assumed to be slowly varying compared to the phase function S . {\displaystyle S.} Plugging this approximate solution into the wave equation, and retaining only the leading order terms in the limit λ → 0 {\displaystyle \lambda \to 0} leads to − ( k ♯ ⋅ χ ⋅ k ) ⋅ A ^ = 0 {\displaystyle -({\boldsymbol {k}}^{\sharp }\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {k}})\cdot {\hat {\boldsymbol {A}}}=0} where k = d S . {\displaystyle {\boldsymbol {k}}=dS.} The existence of a solution to this equation requires det ( k ♯ ⋅ χ ⋅ k ) = 0. {\displaystyle \det \left({\boldsymbol {k}}^{\sharp }\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {k}}\right)=0.} In fact, this determinant condition is satisfied identically because the antisymmetry in the second pair of indices on χ {\displaystyle {\boldsymbol {\chi }}} shows that A ^ ∝ k {\displaystyle {\hat {\boldsymbol {A}}}\propto {\boldsymbol {k}}} is already a trivial solution. Therefore, any non-trivial solutions must reside in the 3-dimensional subspace orthogonal to k , {\displaystyle {\boldsymbol {k}},} so the tensor k ♯ ⋅ χ ⋅ k {\displaystyle {\boldsymbol {k}}^{\sharp }\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {k}}} is effectively only 3-dimensional. Thus, the determinant condition is insufficient to provide any information. However, the classical adjugate of a matrix M {\displaystyle M} is related to its determinant by M . a d j ( M ) = det ( M ) I {\displaystyle M.\mathrm {adj} (M)=\det(M)I} . Since in this case det ( M ) = 0 {\displaystyle \det(M)=0} but M {\displaystyle M} is arbitrary, one obtains the secondary condition a d j ( k ♯ ⋅ χ ⋅ k ) = 0. 
{\displaystyle \mathrm {adj} \left({\boldsymbol {k}}^{\sharp }\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {k}}\right)=0.} Notice that the adjugate of a matrix is still a matrix, so the scalar determinant condition has now been replaced by a matrix condition. This would appear to add a great deal of complexity to the problem, but it has been shown [ 6 ] that this adjugate has the form a d j ( k ♯ ⋅ χ ⋅ k ) = P ( k ⊗ k ♯ ) , {\displaystyle \mathrm {adj} \left({\boldsymbol {k}}^{\sharp }\cdot {\boldsymbol {\chi }}\cdot {\boldsymbol {k}}\right)=P({\boldsymbol {k}}\otimes {\boldsymbol {k}}^{\sharp }),} where P {\displaystyle P} is a fourth order polynomial in k . {\displaystyle {\boldsymbol {k}}.} The vanishing condition on the adjugate matrix is therefore equivalent to the scalar condition P = 0. {\displaystyle P=0.} The goal now is to demonstrate that the polynomial P {\displaystyle P} takes the form P ∝ [ 1 2 g + − 1 ( k ⊗ k ) ] [ 1 2 g − − 1 ( k ⊗ k ) ] . {\displaystyle P\propto \left[{\frac {1}{2}}{\boldsymbol {\mathfrak {g}}}_{+}^{-1}({\boldsymbol {k}}\otimes {\boldsymbol {k}})\right]\left[{\frac {1}{2}}{\boldsymbol {\mathfrak {g}}}_{-}^{-1}({\boldsymbol {k}}\otimes {\boldsymbol {k}})\right].} Then the condition P = 0 {\displaystyle P=0} is satisfied by either of 1 2 g ± − 1 ( k ⊗ k ) = 0 {\displaystyle {\tfrac {1}{2}}{\boldsymbol {\mathfrak {g}}}_{\pm }^{-1}({\boldsymbol {k}}\otimes {\boldsymbol {k}})=0} (written with indices, 1 2 g ± μ ν k μ k ν = 0 {\displaystyle {\tfrac {1}{2}}{\mathfrak {g}}_{\pm }^{\mu \nu }k_{\mu }k_{\nu }=0} ). What has been shown so far is that wave solutions of Maxwell's equations, in the ray limit, must satisfy one of these two polynomial conditions. The tensors g ± − 1 {\displaystyle {\boldsymbol {\mathfrak {g}}}_{\pm }^{-1}} therefore determine the lightcone structures. The fact that there are two of them implies a double light cone structure - one for each of the two polarization states, i.e. birefringence. 
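The adjugate identity invoked in the argument above, M · adj(M) = det(M) I, can be verified directly with a small cofactor implementation. This is a generic numerical check on an arbitrary 3×3 matrix, not a computation with the specific χ tensor:

```python
import numpy as np

def adjugate(M):
    """Classical adjugate: transpose of the cofactor matrix."""
    dim = M.shape[0]
    C = np.zeros_like(M, dtype=float)
    for i in range(dim):
        for j in range(dim):
            minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

M = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [1.0, 0.0, 5.0]])
# The identity M . adj(M) = det(M) I holds for any square matrix; when
# det(M) = 0 identically for generic entries, adj(M) must itself vanish.
residual = M @ adjugate(M) - np.linalg.det(M) * np.eye(3)
```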
In vacuum, it is readily found that g + − 1 = g − − 1 = g − 1 {\displaystyle {\boldsymbol {\mathfrak {g}}}_{+}^{-1}={\boldsymbol {\mathfrak {g}}}_{-}^{-1}={\boldsymbol {g}}^{-1}} degenerates to the space-time metric. Since the g ± − 1 {\displaystyle {\boldsymbol {\mathfrak {g}}}_{\pm }^{-1}} determine the lightcones in media in the way that g − 1 {\displaystyle {\boldsymbol {g}}^{-1}} does for the vacuum, they are referred to as optical metrics. However, it is perhaps more appropriate to take the point of view that the space-time metric happens to also serve as the optical metric in vacuum, [ 6 ] which is not so surprising considering that the space-time metric is the only available structure in vacuum. So far, no assumptions have been imposed on the form of ε , μ , γ e , {\displaystyle {\boldsymbol {\varepsilon }},{\boldsymbol {\mu }},{\boldsymbol {\gamma }}_{e},} or γ h , {\displaystyle {\boldsymbol {\gamma }}_{h},} so there are currently 36 freely specifiable parameters. To determine the optical metrics, Thompson imposes the conditions that γ e {\displaystyle {\boldsymbol {\gamma }}_{e}} and γ h {\displaystyle {\boldsymbol {\gamma }}_{h}} are antisymmetric with respect to g {\displaystyle {\boldsymbol {g}}} (i.e. antisymmetric when the indices on γ e {\displaystyle {\boldsymbol {\gamma }}_{e}} and γ h {\displaystyle {\boldsymbol {\gamma }}_{h}} are either both up or both down). The antisymmetry condition allows them to be written in the forms γ e = ( h ∧ u ) ⋆ ⋅ γ e 1 , {\displaystyle {\boldsymbol {\gamma }}_{e}=({\boldsymbol {h}}\wedge {\boldsymbol {u}})\star \cdot {\boldsymbol {\gamma }}_{e1},} γ h = ( γ h 1 ) ♯ ⋅ ⋆ ( u ∧ h ) . 
{\displaystyle {\boldsymbol {\gamma }}_{h}=({\boldsymbol {\gamma }}_{h1})^{\sharp }\cdot \star ({\boldsymbol {u}}\wedge {\boldsymbol {h}}).} With this restriction, it is found that P {\displaystyle P} is biquadratic in k ⋅ u {\displaystyle {\boldsymbol {k}}\cdot {\boldsymbol {u}}} and can be factored to P = H + H − {\displaystyle P=H_{+}H_{-}} where H ± = 1 2 ( u . a d j ( ε ) . u ♭ ) [ ( u μ u ν − 1 2 W α α μ ν ) k μ k ν ± ( 1 2 W α β μ ν W β α σ ρ − 1 4 W α α μ ν W β β σ ρ ) k μ k ν k σ k ρ ] {\displaystyle H_{\pm }={\frac {1}{2}}({\boldsymbol {u}}.\mathrm {adj} \left({\boldsymbol {\varepsilon }}\right).{\boldsymbol {u}}^{\flat })\left[(u^{\mu }u^{\nu }-{\frac {1}{2}}W_{\alpha }{}^{\alpha \mu \nu })k_{\mu }k_{\nu }\pm {\sqrt {\left({\frac {1}{2}}W_{\alpha }{}^{\beta \mu \nu }W_{\beta }{}^{\alpha \sigma \rho }-{\frac {1}{4}}W_{\alpha }{}^{\alpha \mu \nu }W_{\beta }{}^{\beta \sigma \rho }\right)k_{\mu }k_{\nu }k_{\sigma }k_{\rho }}}\right]} with W α κ μ ν = u θ u π δ θ ψ β φ π λ κ ρ g λ τ ε ¯ σ τ g σ ψ μ ¯ α β g η φ ( δ ρ μ + ( γ e 1 ) ρ u μ ) ( δ η ν + ( γ h 1 ) η u ν ) . {\displaystyle W_{\alpha }{}^{\kappa \mu \nu }=u^{\theta }u_{\pi }\delta _{\theta \psi \beta \varphi }^{\pi \lambda \kappa \rho }g_{\lambda \tau }{\bar {\varepsilon }}_{\sigma }{}^{\tau }g^{\sigma \psi }{\bar {\mu }}_{\alpha }{}^{\beta }g^{\eta \varphi }(\delta _{\rho }^{\mu }+(\gamma _{e1})_{\rho }{}u^{\mu })(\delta _{\eta }^{\nu }+(\gamma _{h1})_{\eta }u^{\nu }).} Finally, the optical metrics correspond to g ± μ ν = ∂ 2 H ± ∂ k μ ∂ k ν . {\displaystyle {\boldsymbol {\mathfrak {g}}}_{\pm }^{\mu \nu }={\frac {\partial ^{2}H_{\pm }}{\partial k_{\mu }\partial k_{\nu }}}.} The presence of the square root in H ± , {\displaystyle H_{\pm },} and consequently in g ± − 1 , {\displaystyle {\boldsymbol {\mathfrak {g}}}_{\pm }^{-1},} shows that the birefringent optical metrics are of the pseudo-Finslerian type. 
A key feature here is that the optical metric is not only a function of position, but also retains a dependency on k {\displaystyle {\boldsymbol {k}}} . These pseudo-Finslerian optical metrics degenerate to a common, non-birefringent, pseudo-Riemannian optical metric for media that obey a curved space-time generalization of the Post conditions. [ 12 ] [ 6 ]
https://en.wikipedia.org/wiki/Optical_metric
An optical modulator is an optical device used to modulate a beam of light. In fiber-optic communication it acts as a transmitter, converting information into an optical binary signal carried through an optical fiber ( optical waveguide ) or another transmission medium at optical frequencies. Modulators are classified by the parameter of the light beam they manipulate: amplitude modulators (the majority), phase modulators , polarization modulators , and so on. The easiest way to obtain modulation is to modulate the intensity of the light through the current driving the light source ( laser diode ). This sort of modulation is called direct modulation, as opposed to the external modulation performed by a separate light modulator; for this reason, light modulators are called external light modulators. According to the material property being manipulated, modulators are divided into two groups: absorptive modulators ( absorption coefficient ) and refractive modulators ( refractive index of the material). The absorption coefficient can be manipulated via the Franz-Keldysh effect, the Quantum-Confined Stark Effect , excitonic absorption, or changes in the free-carrier concentration. When several such effects act together, the modulator is called an electro-absorption modulator. Refractive modulators most often make use of the electro-optic effect (amplitude and phase modulation); other modulators are based on the acousto-optic effect or magneto-optic effects such as the Faraday and Cotton-Mouton effects. A further class is the spatial light modulator (SLM), which modifies a two-dimensional distribution of the amplitude and phase of an optical wave. Optical modulators can be implemented using semiconductor nanostructures to improve performance: high-speed response, high stability, and highly compact systems. Highly compact electro-optical modulators have been demonstrated in compound semiconductors. 
[ 1 ] However, in silicon photonics , electro-optical modulation has been demonstrated only in large structures, and is therefore inappropriate for effective on-chip integration. Electro-optical control of light on silicon is challenging owing to its weak electro-optical properties. The large dimensions of previously demonstrated structures were necessary to achieve a significant modulation of the transmission in spite of the small change of refractive index of silicon. Liu et al. have recently demonstrated a high-speed silicon optical modulator based on a metal–oxide–semiconductor (MOS) configuration. [ 2 ] Their work showed a high-speed optical active device on silicon—a critical milestone towards optoelectronic integration on silicon. An electro-optic modulator is a device which can be used for controlling the power, phase or polarization of a laser beam with an electrical control signal. It typically contains one or two Pockels cells , and possibly additional optical elements such as polarizers. The principle of operation is based on the linear electro-optic effect (the Pockels effect , the modification of the refractive index of a nonlinear crystal by an electric field in proportion to the field strength). The crystal, which is covered by electrodes, may be considered a voltage-variable wave plate. When a voltage is applied, the polarization retardation of the light is changed as the beam passes through the crystal (for example, an ADP crystal). This variation in polarization results in intensity modulation downstream from the output polarizer: the output polarizer converts the phase shift into an amplitude modulation . Micrometre-scale silicon electro-optic modulator [ 3 ] This device was fabricated in the shape of a p-i-n ring resonator on a silicon-on-insulator substrate with a 3-μm-thick buried oxide layer. Both the waveguide coupling to the ring and the waveguide forming the ring have a width of 450 nm and a height of 250 nm. 
The diameter of the ring is 12 μm, and the spacing between the ring and the straight waveguide is 200 nm. Acousto-optic modulators are used to vary and control laser beam intensity. A Bragg configuration gives a single first-order output beam, whose intensity is directly linked to the power of the RF control signal. The rise time of the modulator is set by the time required for the acoustic wave to travel across the laser beam. For the highest speeds the laser beam is focused down, forming a beam waist as it passes through the modulator. In an AOM a laser beam interacts with a high-frequency ultrasonic sound wave inside an optically polished block of crystal or glass (the interaction medium). By carefully orienting the laser with respect to the sound waves, the beam can be made to reflect off the acoustic wave-fronts ( Bragg diffraction ). Therefore, when the sound field is present the beam is deflected, and when it is absent the beam passes through undeviated. By switching the sound field on and off very rapidly, the deflected beam appears and disappears in response (digital modulation). By varying the amplitude of the acoustic waves, the intensity of the deflected beam can similarly be modulated (analogue modulation). Acoustic solitons in semiconductor nanostructures [ 4 ] Acoustic solitons strongly influence the electron states in a semiconductor nanostructure. The amplitude of the soliton pulses is so high that the electron states in a quantum well make temporal excursions in energy of up to 10 meV. The subpicosecond duration of the solitons is less than the coherence time of the optical transition between the electron states, and a frequency modulation of the emitted light during the coherence time (chirping) is observed. This system allows ultrafast control of electron states in semiconductor nanostructures. 
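The conversion of phase retardation into intensity modulation described above for a Pockels cell between crossed polarizers can be sketched with the idealized textbook transfer function T = sin²(πV / 2Vπ). The half-wave voltage Vπ used here is a hypothetical parameter, and losses, bias retardation, and crystal specifics are ignored:

```python
import math

def pockels_transmission(V, V_pi):
    """Ideal Pockels-cell amplitude modulator between crossed polarizers.

    The field-induced retardation is Gamma = pi * V / V_pi; the output
    polarizer converts this phase shift into a transmitted intensity
    fraction T = sin^2(Gamma / 2).
    """
    gamma = math.pi * V / V_pi
    return math.sin(gamma / 2.0) ** 2

# V = 0 blocks the beam; V = V_pi (the half-wave voltage) transmits fully,
# and the half-transmission point V_pi / 2 is the usual linear bias point.
```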
A dc magnetic field H dc is applied perpendicular to the light propagation direction to produce a single domain with a transversely directed magnetization 4πM s . The rf modulation field H rf , applied by means of a coil along the light propagation direction, wobbles 4πM s through a small angle and produces a time-varying magnetization component in the longitudinal direction. This component then produces an ac variation in the plane of polarization via the longitudinal Faraday effect. Conversion to amplitude modulation is accomplished by an analyzer. Wideband magneto-optic modulation in a bismuth-substituted yttrium iron garnet waveguide [ 5 ] The current transient creates a time-varying magnetic field that has a component along the direction of optical propagation. This component (underneath the microstrip line) acts to tip the magnetization, M, along the propagation direction of the optical beam. A static in-plane magnetic field is applied perpendicular to the light propagation direction, thus ensuring the return of M to its initial orientation after the passage of the current transient. Depending on the component of the magnetization along the z-direction, M z , the optical beam experiences a rotation of its polarization due to the Faraday effect. The polarization modulation is converted into an intensity modulation via a polarization analyzer and detected by a high-speed photodiode . Modulation of THz radiation by semiconductor nanostructures [ 6 ] As a result of the increased demand for bandwidth, wireless short-range communication systems are expected to extend into the THz frequency range. Therefore, the fundamental interactions between THz radiation and semiconductors are receiving increasing attention. This quantum structure is based on the well-established technology for producing high-electron-mobility transistors, in which an electron gas is confined at a GaAs/Al x Ga 1−x As interface. 
The electron density at the hetero-interface can be controlled by applying an external gate voltage, which in turn alters the transmission/reflection characteristics of the device for an incident THz beam. 40 Gbit/s phase modulator The 40 Gbit/s phase modulator is a high-performance, low-drive-voltage external optical modulator designed for customers developing next-generation 40G transmission systems. The increased bandwidth allows for chirp control in high-speed data communications. Applications: chirp control for high-speed communications (SONET OC-768 interfaces, SDH STM-256 interfaces), coherent communications, C- and L-band operation, optical sensing, and all-optical frequency shifting. Applications of acousto-optic modulators include laser printing, video disc recording, and laser projection systems.
https://en.wikipedia.org/wiki/Optical_modulators_using_semiconductor_nano-structures
Optical molasses ( OM ) is a laser cooling technique that can cool neutral atoms to as low as a few microkelvins, depending on the atomic species. An optical molasses consists of three pairs of counter-propagating orthogonally polarized laser beams intersecting in the region where the atoms are present. The main difference between an optical molasses and a magneto-optical trap (MOT) is the absence of a magnetic field in the former. Unlike a MOT, an OM provides only cooling and no trapping. When laser cooling was proposed in 1975, a theoretical limit on the lowest possible temperature was predicted. [ 1 ] Known as the Doppler limit , T D = ℏ Γ / ( 2 k B ) {\displaystyle T_{\text{D}}=\hbar \Gamma /(2k_{\text{B}})} , this is the lowest temperature attainable when the Doppler cooling of two-level atoms is balanced against the heating due to momentum diffusion from the scattering of laser photons. Here Γ {\displaystyle \Gamma } is the natural line-width of the atomic transition, ℏ {\displaystyle \hbar } is the reduced Planck constant , and k B {\displaystyle k_{\text{B}}} is the Boltzmann constant . The first experimental realization of optical molasses was achieved in 1985 by Chu et al. at AT&T Bell Laboratories. [ 2 ] The authors measured laser cooling of neutral sodium atoms down to the theoretical Doppler cooling limit by observing the fluorescence of a hot atomic beam. By temporarily switching off the laser beams for a fixed time interval, the authors first measured the average kinetic energy of the atoms by a time-of-flight technique. The fraction of atoms that left the region while it was in the dark was measured by comparing the brightness of the fluorescence before and after the turnoff. The velocity distribution and temperature were then inferred from the dependence of this fraction on the light-off time. 
The kinetic temperature they obtained was T ≈ 240 μK, not very different from the Doppler cooling limit in the two-level approximation. The size of the optical molasses region was a limiting factor. Experiments at the National Institute of Standards and Technology in Gaithersburg found the temperature of cooled atoms to be well below the theoretical limit. [ 3 ] In 1988, Lett et al. [ 3 ] directed sodium atoms through an optical molasses and found the temperatures to be as low as ~40 μK, six times lower than the expected 240 μK Doppler cooling limit. Other experiments [ 4 ] found additional unexpected properties, including a significant insensitivity to the alignment of the counter-propagating laser beams. These observations led to the development of more sophisticated models [ 5 ] of laser cooling that took into account the Zeeman and hyperfine sublevels of the atomic structure. The dynamics of optical pumping between these sublevels allow the cooling of atoms below the Doppler limit . The best explanation of the phenomenon of optical molasses is based on the principle of polarization gradient cooling . [ 6 ] For one-dimensional optical molasses: Suppose two laser beams approach an atom from opposite directions. Counter-propagating beams of circularly polarized light cause a standing wave, where the light polarization is linear but the direction rotates along the direction of the beams at a very fast rate. Atoms moving in the spatially varying linear polarization have a higher probability density of being in a state that is more susceptible to absorption of light from the beam coming head-on, rather than the beam from behind. This results in a velocity-dependent damping force [ 7 ] F = − α v , {\displaystyle F=-\alpha v,} where α = 4 ℏ k 2 I I 0 2 δ / Γ [ 1 + ( 2 δ / Γ ) 2 ] 2 . 
{\displaystyle \alpha =4\hbar k^{2}{\frac {I}{I_{0}}}{\frac {2\delta /\Gamma }{[1+(2\delta /\Gamma )^{2}]^{2}}}.} The variable ℏ {\displaystyle \hbar } is the reduced Planck constant, I 0 {\displaystyle I_{0}} is the saturation intensity, δ {\displaystyle \delta } is the laser detuning, and Γ {\displaystyle \Gamma } is the linewidth of the atom-cooling transition. For sodium, the cooling (cycling) transition is the 3 S 1 / 2 ( F = 2 ) ↔ 3 P 3 / 2 ( F ′ = 3 ) {\displaystyle {}^{3}S_{1/2}(F=2)\leftrightarrow {}^{3}P_{3/2}(F'=3)} transition, driven by laser light at 589 nm. The optical molasses can reduce the atom temperature to the recoil limit T r {\displaystyle T_{\text{r}}} , which is set by the energy of the photon emitted in the decay from the J ′ to the J state, where the J state is the ground-state angular momentum and the J ′ state is the excited-state angular momentum. This temperature is given by k B T r = h 2 M λ 2 , {\displaystyle k_{\text{B}}T_{\text{r}}={\frac {h^{2}}{M\lambda ^{2}}},} though practically the limit is a few times this value because of the extreme sensitivity of this cooling scheme to external magnetic fields. Atoms typically reach temperatures on the order of microkelvins, as compared to the Doppler limit T D ≃ 240 {\displaystyle T_{D}\simeq 240} μK. The one-dimensional optical molasses can be extended to three dimensions with six counter-propagating laser beams. The total force is the sum of the forces from each beam. For example, a study [ 8 ] using cesium atoms achieved temperatures as low as ~3 μK, approximately 40 times below the Doppler limit and only slightly above the recoil temperature limit of Cs. The temperatures obtained vary with the configuration of the laser polarization and are all higher than the theoretical estimate. Thus the extension has been proven effective, despite a few caveats. In 3D experiments, the transverse nature of light leads to the limitation that there will always be polarization gradients. 
The atoms also see different gradients along different directions, and they may change dramatically during the atom's diffusive movement in the molasses. [ 9 ] The trajectories are not straight either, but severely affected by the cooling process. [ 10 ] Quantum treatments are needed due to these limitations. An optical molasses slows down the atoms but does not provide any trapping force to confine them spatially. A magneto-optical trap employs a 3-dimensional optical molasses along with a spatially varying magnetic field to slow down and confine the atoms.
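The Doppler and recoil limits discussed above can be reproduced numerically for sodium. The D2-line parameters below (linewidth ~2π × 9.79 MHz, λ = 589 nm) are approximate assumed values, so treat this as an order-of-magnitude sketch rather than a precision calculation:

```python
import math

hbar = 1.054571817e-34      # reduced Planck constant, J s
h = 6.62607015e-34          # Planck constant, J s
kB = 1.380649e-23           # Boltzmann constant, J/K

# Approximate sodium D2-line parameters (assumed values)
Gamma = 2 * math.pi * 9.79e6    # natural linewidth, rad/s
lam = 589e-9                    # cooling-transition wavelength, m
M = 23 * 1.66053907e-27         # mass of Na-23, kg (approximate)

T_doppler = hbar * Gamma / (2 * kB)    # Doppler limit, ~240 uK
T_recoil = h**2 / (M * lam**2 * kB)    # recoil limit from k_B T_r = h^2/(M lambda^2)
```

These evaluate to roughly 235 μK and 2.4 μK, consistent with the ~240 μK Doppler limit and the few-microkelvin temperatures quoted in the text.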
https://en.wikipedia.org/wiki/Optical_molasses
An optical mount is a device used to join a normal camera and another optical instrument, such as a microscope or telescope . The optical mount generally attaches to the camera on one end, as a lens would, and fastens to the other instrument in a similar fashion. Optical mounts are used extensively in scientific imaging applications in biology and astronomy . This photography-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Optical_mount
The optical properties of a material define how it interacts with light . The optical properties of matter are studied in optical physics (a subfield of optics ) and applied in materials science . The optical properties of matter include: A basic distinction is between isotropic materials, which exhibit the same properties regardless of the direction of the light, and anisotropic ones, which exhibit different properties when light passes through them in different directions. The optical properties of matter can lead to a variety of interesting optical phenomena . This optics -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Optical_properties
The refractive index of water at 20 °C for visible light is 1.33. [ 1 ] The refractive index of normal ice is 1.31 (from List of refractive indices ). In general, an index of refraction is a complex number with real and imaginary parts, where the latter indicates the strength of absorption loss at a particular wavelength. In the visible part of the electromagnetic spectrum, the imaginary part of the refractive index is very small. However, water and ice absorb in the infrared and close the infrared atmospheric window , thereby contributing to the greenhouse effect . The absorption spectrum of pure water is used in numerous applications, including light scattering and absorption by ice crystals and cloud water droplets , theories of the rainbow , determination of the single-scattering albedo , ocean color , and many others. Over wavelengths from 0.2 μm to 1.2 μm, and over temperatures from −12 °C to 500 °C, the real part of the index of refraction of water can be calculated by the following empirical expression: [ 2 ] (n² − 1) / [(n² + 2) ρ̄] = a₀ + a₁ρ̄ + a₂T̄ + a₃λ̄²T̄ + a₄/λ̄² + a₅/(λ̄² − λ̄²_UV) + a₆/(λ̄² − λ̄²_IR) + a₇ρ̄², where ρ̄ = ρ/ρ*, T̄ = T/T*, and λ̄ = λ/λ* are the reduced density, temperature, and wavelength, and the appropriate constants are a₀ = 0.244257733, a₁ = 0.00974634476, a₂ = −0.00373234996, a₃ = 0.000268678472, a₄ = 0.0015892057, a₅ = 0.00245934259, a₆ = 0.90070492, a₇ = −0.0166626219, T* = 273.15 K, ρ* = 1000 kg/m³, λ* = 589 nm, λ̄_IR = 5.432937, and λ̄_UV = 0.229202. In the above expression, T is the absolute temperature of water (in K), λ is the wavelength of light in nm, ρ is the density of the water in kg/m³, and n is the real part of the index of refraction of water.
In the above formula, the density of water also varies with temperature and is defined by: [ 3 ] [ 4 ] ρ(t) = a₅ (1 − (t + a₁)²(t + a₂) / (a₃(t + a₄))), with constants (distinct from those above) given in the cited references. The total refractive index of water is given as m = n + ik . The absorption coefficient α′ is used in the Beer–Lambert law , with the prime here signifying the base-e convention. Values are for water at 25 °C, and were obtained through various sources in the cited literature review. [ 5 ]
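As an illustration, the empirical expression for the real refractive index can be evaluated directly. The sketch below assumes the IAPWS-style formulation (n² − 1)/[(n² + 2)ρ̄] = a₀ + a₁ρ̄ + …, which is consistent with the constants listed in the article; the 20 °C density of 998.2 kg/m³ is a standard value assumed for the check:

```python
import math

# Constants from the article (IAPWS-style formulation, assumed)
a = [0.244257733, 9.74634476e-3, -3.73234996e-3, 2.68678472e-4,
     1.58920570e-3, 2.45934259e-3, 0.90070492, -1.66626219e-2]
T_STAR, RHO_STAR, LAM_STAR = 273.15, 1000.0, 589.0   # K, kg/m^3, nm
LAM_UV, LAM_IR = 0.229202, 5.432937                  # reduced wavelengths

def n_water(T, lam_nm, rho):
    """Real refractive index of water at absolute temperature T (K),
    wavelength lam_nm (nm), and density rho (kg/m^3)."""
    Tb, rb, lb = T / T_STAR, rho / RHO_STAR, lam_nm / LAM_STAR
    # Right-hand side of (n^2 - 1) / ((n^2 + 2) * rb) = rhs
    rhs = (a[0] + a[1] * rb + a[2] * Tb + a[3] * lb**2 * Tb + a[4] / lb**2
           + a[5] / (lb**2 - LAM_UV**2) + a[6] / (lb**2 - LAM_IR**2)
           + a[7] * rb**2)
    A = rb * rhs
    # Invert the Lorentz-Lorenz-type relation for n
    return math.sqrt((1 + 2 * A) / (1 - A))

# Water at 20 C (rho ~ 998.2 kg/m^3), sodium D line:
n = n_water(293.15, 589.0, 998.2)
print(f"n = {n:.4f}")   # ~1.3334
```

The value agrees with the 1.33 quoted at the start of the article, which is a useful consistency check on the constants.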
https://en.wikipedia.org/wiki/Optical_properties_of_water_and_ice
Optical rotation , also known as polarization rotation or circular birefringence , is the rotation of the orientation of the plane of polarization about the optical axis of linearly polarized light as it travels through certain materials. Circular birefringence and circular dichroism are the manifestations of optical activity . Optical activity occurs only in chiral materials, those lacking microscopic mirror symmetry. Unlike other sources of birefringence which alter a beam's state of polarization, optical activity can be observed in fluids . This can include gases or solutions of chiral molecules such as sugars, molecules with helical secondary structure such as some proteins, and also chiral liquid crystals . It can also be observed in chiral solids such as certain crystals with a rotation between adjacent crystal planes (such as quartz ) or metamaterials . When looking at the source of light, the rotation of the plane of polarization may be either to the right ( dextrorotatory or dextrorotary — d -rotary, represented by (+), clockwise), or to the left ( levorotatory or levorotary — l -rotary, represented by (−), counter-clockwise) depending on which stereoisomer is dominant. For instance, sucrose and camphor are d -rotary whereas cholesterol is l -rotary. For a given substance, the angle by which the polarization of light of a specified wavelength is rotated is proportional to the path length through the material and (for a solution) proportional to its concentration. Optical activity is measured using a polarized source and polarimeter . This is a tool particularly used in the sugar industry to measure the sugar concentration of syrup, and generally in chemistry to measure the concentration or enantiomeric ratio of chiral molecules in solution. Modulation of a liquid crystal's optical activity, viewed between two sheet polarizers , is the principle of operation of liquid-crystal displays (used in most modern televisions and computer monitors). 
Dextrorotation and laevorotation (also spelled levorotation ) [ 1 ] [ 2 ] in chemistry and physics are the optical rotation of plane-polarized light . From the point of view of the observer, dextrorotation refers to clockwise or right-handed rotation, and laevorotation refers to counterclockwise or left-handed rotation. [ 3 ] [ 4 ] A chemical compound that causes dextrorotation is dextrorotatory or dextrorotary , while a compound that causes laevorotation is laevorotatory or laevorotary . [ 5 ] Compounds with these properties consist of chiral molecules and are said to have optical activity. If a chiral molecule is dextrorotary, its enantiomer (geometric mirror image) will be laevorotary, and vice versa. Enantiomers rotate plane-polarized light the same number of degrees, but in opposite directions. A compound may be labeled as dextrorotary by using the "(+)-" or " d -" prefix. Likewise, a levorotary compound may be labeled using the "(−)-" or " l -" prefix. The International Union of Pure and Applied Chemistry , the authority on chemical nomenclature, strongly discourages use of the " d -" and " l -" prefixes. [ 6 ] The lowercase " d -" and " l -" prefixes are distinct from the SMALL CAPS " D -" and " L -" prefixes. The " D -" and " L -" prefixes are used to specify the enantiomer of chiral organic compounds in biochemistry and are based on the compound's absolute configuration relative to (+)- glyceraldehyde , which is the D -form by definition. The prefix used to indicate absolute configuration is not directly related to the (+) or (−) prefix used to indicate optical rotation in the same molecule. For example, nine of the nineteen L - amino acids naturally occurring in proteins are, despite the L - prefix, actually dextrorotary (at a wavelength of 589 nm), and D - fructose is sometimes called "levulose" because it is levorotary. The two naming systems can be combined to indicate both absolute configuration and optical rotation, as in D -(+)-glyceraldehyde. 
The D - and L - prefixes describe the molecule as a whole, as do the (+) and (−) prefixes for optical rotation. In contrast, the ( R )- and ( S )- prefixes from the Cahn–Ingold–Prelog priority rules characterize the absolute configuration of each specific chiral stereocenter within the molecule, rather than a property of the molecule as a whole. A molecule having exactly one chiral stereocenter (usually an asymmetric carbon atom) can be labeled ( R ) or ( S ), but a molecule having multiple stereocenters needs more than one label. For example, the essential amino acid L -threonine contains two chiral stereocenters and is written (2 S ,3 S )-threonine. There is no strict relationship between the R / S , the D / L , and (+)/(−) designations, although some correlations exist. For example, of the naturally occurring amino acids, all are L , and most are ( S ). For some molecules the ( R )-enantiomer is the dextrorotary (+) enantiomer, and in other cases it is the levorotary (−) enantiomer. The relationship must be determined on a case-by-case basis with experimental measurements or detailed computer modeling. [ 7 ] The rotation of the orientation of linearly polarized light was first observed in 1811 in quartz by French physicist François Arago . [ 8 ] In 1820, the English astronomer Sir John F. W. Herschel discovered that different individual quartz crystals, whose crystalline structures are mirror images of each other (see illustration), rotate linear polarization by equal amounts but in opposite directions. [ 9 ] Jean-Baptiste Biot also observed the rotation of the axis of polarization in certain liquids [ 10 ] and vapors of organic substances such as turpentine .
[ 11 ] In 1822, Augustin-Jean Fresnel found that optical rotation could be explained as a species of birefringence : whereas previously known cases of birefringence were due to the different speeds of light polarized in two perpendicular planes, optical rotation was due to the different speeds of right-hand and left-hand circularly polarized light. [ 12 ] Simple polarimeters have been used since this time to measure the concentrations of simple sugars, such as glucose , in solution. In fact, one name for D -glucose (the biological isomer) is dextrose , referring to the fact that it causes linearly polarized light to rotate to the right or dexter side. In a similar manner, levulose, more commonly known as fructose , causes the plane of polarization to rotate to the left. Fructose is even more strongly levorotatory than glucose is dextrorotatory. Invert sugar syrup , commercially formed by the hydrolysis of sucrose syrup to a mixture of its component simple sugars (fructose and glucose), gets its name from the fact that the conversion causes the direction of rotation to "invert" from right to left. In 1849, Louis Pasteur resolved a problem concerning the nature of tartaric acid . [ 13 ] A solution of this compound derived from living things (to be specific, wine lees ) rotates the plane of polarization of light passing through it, but tartaric acid derived by chemical synthesis has no such effect, even though its reactions are identical and its elemental composition is the same. Pasteur noticed that crystals of this compound come in two asymmetric forms that are mirror images of one another. Sorting the crystals by hand gave two forms of the compound: solutions of one form rotate polarized light clockwise, while solutions of the other form rotate it counterclockwise. An equal mix of the two has no polarizing effect on light.
Pasteur deduced that the molecule in question is asymmetric and could exist in two different forms that resemble one another as would left- and right-hand gloves, and that the organic form of the compound consists of purely the one type. In 1874, Jacobus Henricus van 't Hoff [ 14 ] and Joseph Achille Le Bel [ 15 ] independently proposed that this phenomenon of optical activity in carbon compounds could be explained by assuming that the 4 saturated chemical bonds between carbon atoms and their neighbors are directed towards the corners of a regular tetrahedron. If the 4 neighbors are all different, then there are two possible orderings of the neighbors around the tetrahedron, which will be mirror images of each other. This led to a better understanding of the three-dimensional nature of molecules. [ 16 ] In 1898, Jagadish Chandra Bose described the ability of twisted artificial structures to rotate the polarization of microwaves . [ 17 ] In 1914, Karl F. Lindman showed the same effect for an artificial composite consisting of randomly-dispersed left- or right-handed wire helices in cotton. [ 18 ] [ 19 ] [ 20 ] [ 21 ] Since the early 21st century, the development of artificial materials has led to the prediction [ 22 ] and realization [ 23 ] [ 24 ] of chiral metamaterials with optical activity exceeding that of natural media by orders of magnitude in the optical part of the spectrum. Extrinsic chirality associated with oblique illumination of metasurfaces lacking two-fold rotational symmetry has been observed to lead to large linear optical activity in transmission [ 25 ] and reflection, [ 26 ] as well as nonlinear optical activity exceeding that of lithium iodate by 30 million times. [ 27 ] In 1945, Charles William Bunn [ 28 ] predicted optical activity of achiral structures, if the wave's propagation direction and the achiral structure form an experimental arrangement that is different from its mirror image. 
Such optical activity due to extrinsic chirality was observed in the 1960s in liquid crystals. [ 29 ] [ 30 ] In 1950, Sergey Vavilov [ 31 ] predicted optical activity that depends on the intensity of light and the effect of nonlinear optical activity was observed in 1979 in lithium iodate crystals. [ 32 ] Optical activity is normally observed for transmitted light. However, in 1988, M. P. Silverman discovered that polarization rotation can also occur for light reflected from chiral substances. [ 33 ] Shortly after, it was observed that chiral media can also reflect left-handed and right-handed circularly polarized waves with different efficiencies. [ 34 ] These phenomena of specular circular birefringence and specular circular dichroism are jointly known as specular optical activity. Specular optical activity is very weak in natural materials. Optical activity occurs due to molecules dissolved in a fluid or due to the fluid itself only if the molecules are one of two (or more) stereoisomers ; this is known as an enantiomer . The structure of such a molecule is such that it is not identical to its mirror image (which would be that of a different stereoisomer, or the "opposite enantiomer"). In mathematics, this property is also known as chirality . For instance, a metal rod is not chiral, since its appearance in a mirror is not distinct from itself. However a screw or light bulb base (or any sort of helix ) is chiral; an ordinary right-handed screw thread, viewed in a mirror, would appear as a left-handed screw (very uncommon) which could not possibly screw into an ordinary (right-handed) nut. A human viewed in a mirror would have their heart on the right side, clear evidence of chirality, whereas the mirror reflection of a doll might well be indistinguishable from the doll itself. In order to display optical activity, a fluid must contain only one, or a preponderance of one, stereoisomer. 
If two enantiomers are present in equal proportions, then their effects cancel out and no optical activity is observed; this is termed a racemic mixture. But when there is an enantiomeric excess , more of one enantiomer than the other, the cancellation is incomplete and optical activity is observed. Many naturally occurring molecules are present as only one enantiomer (such as many sugars). Chiral molecules produced within the fields of organic chemistry or inorganic chemistry are racemic unless a chiral reagent was employed in the same reaction. At the fundamental level, polarization rotation in an optically active medium is caused by circular birefringence, and can best be understood in that way. Whereas linear birefringence in a crystal involves a small difference in the phase velocity of light of two different linear polarizations, circular birefringence implies a small difference in the velocities between right and left-handed circular polarizations . [ 12 ] Think of one enantiomer in a solution as a large number of little helices (or screws), all right-handed, but in random orientations. Birefringence of this sort is possible even in a fluid because the handedness of the helices is not dependent on their orientation: even when the direction of one helix is reversed, it still appears right handed. And circularly polarized light itself is chiral: as the wave proceeds in one direction the electric (and magnetic) fields composing it are rotating clockwise (or counterclockwise for the opposite circular polarization), tracing out a right (or left) handed screw pattern in space. In addition to the bulk refractive index which substantially lowers the phase velocity of light in any dielectric (transparent) material compared to the speed of light (in vacuum), there is an additional interaction between the chirality of the wave and the chirality of the molecules. 
Where their chiralities are the same, there will be a small additional effect on the wave's velocity, but the opposite circular polarization will experience an opposite small effect as its chirality is opposite that of the molecules. Unlike linear birefringence, however, natural optical rotation (in the absence of a magnetic field) cannot be explained in terms of a local material permittivity tensor (i.e., a charge response that only depends on the local electric field vector), as symmetry considerations forbid this. Rather, circular birefringence only appears when considering nonlocality of the material response, a phenomenon known as spatial dispersion . [ 35 ] Nonlocality means that electric fields in one location of the material drive currents in another location of the material. Light travels at a finite speed, and even though it is much faster than the electrons, it makes a difference whether the charge response naturally wants to travel along with the electromagnetic wavefront, or opposite to it. Spatial dispersion means that light travelling in different directions (different wavevectors) sees a slightly different permittivity tensor. Natural optical rotation requires a special material, but it also relies on the fact that the wavevector of light is nonzero, and a nonzero wavevector bypasses the symmetry restrictions on the local (zero-wavevector) response. However, there is still reversal symmetry, which is why the direction of natural optical rotation must be 'reversed' when the direction of the light is reversed, in contrast to magnetic Faraday rotation . All optical phenomena have some nonlocality/wavevector influence but it is usually negligible; natural optical rotation, rather uniquely, absolutely requires it. [ 35 ] The phase velocity of light in a medium is commonly expressed using the index of refraction n , defined as the speed of light (in free space) divided by its speed in the medium. 
The difference in the refractive indices between the two circular polarizations quantifies the strength of the circular birefringence (polarization rotation). While Δn is small in natural materials, examples of giant circular birefringence resulting in a negative refractive index for one circular polarization have been reported for chiral metamaterials. [ 36 ] [ 37 ] The familiar rotation of the axis of linear polarization relies on the understanding that a linearly polarized wave can equally be described as the superposition (addition) of a left and a right circularly polarized wave in equal proportion. The phase difference between these two waves depends on the orientation of the linear polarization, which we'll call θ₀, and their electric fields have a relative phase difference of 2θ₀, which then add to produce linear polarization: where E_θ₀ is the electric field of the net wave, while E_RHC and E_LHC are the two circularly polarized basis functions (having zero phase difference). Assuming propagation in the +z direction, we could write E_RHC and E_LHC in terms of their x and y components as follows: where x̂ and ŷ are unit vectors, and i is the imaginary unit , in this case representing the 90-degree phase shift between the x and y components into which each circular polarization has been decomposed. As usual when dealing with phasor notation, it is understood that such quantities are to be multiplied by e^(−iωt) and then the actual electric field at any instant is given by the real part of that product.
Substituting these expressions for E_RHC and E_LHC into the equation for E_θ₀, we obtain the net field. The last equation shows that the resulting vector has the x and y components in phase and oriented exactly in the θ₀ direction, as we had intended, justifying the representation of any linearly polarized state at angle θ as the superposition of right and left circularly polarized components with a relative phase difference of 2θ. Now let us assume transmission through an optically active material which induces an additional phase difference of 2Δθ between the right and left circularly polarized waves. Let us call E_out the result of passing the original wave linearly polarized at angle θ through this medium. This applies additional phase factors of −Δθ and Δθ to the right and left circularly polarized components of E_θ₀. Using similar math as above, we find a wave linearly polarized at angle θ₀ + Δθ, thus rotated by Δθ relative to the incoming wave E_θ₀. We defined above the difference Δn in the refractive indices for right and left circularly polarized waves. Considering propagation through a length L in such a material, there will be an additional phase difference induced between them of 2Δθ (as we used above) given by 2Δθ = 2πΔnL/λ, where λ is the wavelength of the light (in vacuum).
This will cause a rotation of the linear axis of polarization by Δθ, as we have shown. In general, the refractive index depends on wavelength (see dispersion ) and the differential refractive index Δn will also be wavelength dependent. The resulting variation in rotation with the wavelength of the light is called optical rotatory dispersion (ORD). ORD spectra and circular dichroism spectra are related through the Kramers–Kronig relations : complete knowledge of one spectrum allows the calculation of the other. So we find that the degree of rotation depends on the color of the light (the yellow sodium D line near 589 nm wavelength is commonly used for measurements) and is directly proportional to the path length L through the substance and to the amount of circular birefringence Δn of the material, which, for a solution, may be computed from the substance's specific rotation and its concentration in solution. Although optical activity is normally thought of as a property of fluids, particularly aqueous solutions , it has also been observed in crystals such as quartz (SiO₂). Although quartz has a substantial linear birefringence, that effect is cancelled when propagation is along the optic axis . In that case, rotation of the plane of polarization is observed due to the relative rotation between crystal planes, thus making the crystal formally chiral as we have defined it above. The rotation of the crystal planes can be right- or left-handed, again producing opposite optical activities. On the other hand, amorphous forms of silica such as fused quartz , like a racemic mixture of chiral molecules, have no net optical activity, since no one crystal structure dominates the substance's internal molecular structure.
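The derivation above can be checked numerically. The sketch below builds a linear polarization as an equal superposition of two circular phasor components (the handedness sign convention is one common choice; optics texts differ), applies opposite phase factors ±Δθ to them, and recovers a polarization axis rotated by Δθ = πΔnL/λ. The quartz circular birefringence Δn ≈ 7.1 × 10⁻⁵ at 589 nm is a standard literature value, not taken from this article:

```python
import cmath
import math

SQ2 = math.sqrt(2)
# Circular basis Jones vectors, (x, y) components; the sign of the
# imaginary part is a handedness convention.
E_P = (1 / SQ2, 1j / SQ2)
E_M = (1 / SQ2, -1j / SQ2)

def superpose(phase):
    """Equal superposition of the two circular components with relative
    phase 2*phase -> linear polarization at angle `phase` (rad)."""
    ex = (cmath.exp(-1j * phase) * E_P[0] + cmath.exp(1j * phase) * E_M[0]) / SQ2
    ey = (cmath.exp(-1j * phase) * E_P[1] + cmath.exp(1j * phase) * E_M[1]) / SQ2
    return ex, ey

theta0 = 0.30                            # incoming polarization angle (rad)
ex, ey = superpose(theta0)
angle_in = math.atan2(ey.real, ex.real)  # x and y components are in phase

# An optically active medium adds phases -dtheta/+dtheta to the circular
# components, i.e. the output is superpose(theta0 + dtheta), with
# dtheta = pi * dn * L / lam from the circular birefringence.
dn = 7.1e-5                              # quartz at 589 nm (literature value)
L = 1e-3                                 # 1 mm path
lam = 589e-9
dtheta = math.pi * dn * L / lam
ex2, ey2 = superpose(theta0 + dtheta)
angle_out = math.atan2(ey2.real, ex2.real)

print(f"rotation: {math.degrees(angle_out - angle_in):.1f} deg/mm")  # ~21.7
```

The ~21.7°/mm result matches the well-known rotatory power of quartz at the sodium D line, tying the phasor picture to a measurable quantity.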
For a pure substance in solution, if the color and path length are fixed and the specific rotation is known, the observed rotation can be used to calculate the concentration. This usage makes a polarimeter a tool of great importance to those trading in or using sugar syrups in bulk. Rotation of light's plane of polarization may also occur through the Faraday effect , which involves a static magnetic field . However, this is a distinct phenomenon and is not classified as "optical activity". Optical activity is reciprocal, i.e. it is the same for opposite directions of wave propagation through an optically active medium, for example, clockwise polarization rotation from the point of view of an observer. In case of optically active isotropic media, the rotation is the same for any direction of wave propagation. In contrast, the Faraday effect is non-reciprocal, i.e. opposite directions of wave propagation through a Faraday medium will result in clockwise and anti-clockwise polarization rotation from the point of view of an observer. Faraday rotation depends on the propagation direction relative to that of the applied magnetic field. All compounds can exhibit polarization rotation in the presence of an applied magnetic field, provided that (a component of) the magnetic field is oriented in the direction of light propagation. The Faraday effect is one of the first discoveries of the relationship between light and electromagnetic effects.
https://en.wikipedia.org/wiki/Optical_rotation
In optics , optical rotatory dispersion is the variation of the specific rotation of a medium with respect to the wavelength of light . It is usually described by German physicist Paul Drude 's empirical relation: [ 1 ] [α]_λ^T = Σ_{n=0}^{∞} A_n / (λ² − λ_n²), where [α]_λ^T is the specific rotation at temperature T and wavelength λ, and A_n and λ_n are constants that depend on the properties of the medium. Optical rotatory dispersion has applications in organic chemistry for determining the structure of organic compounds. [ 2 ] When white light passes through a polarizer and then an optically active sample, the extent of rotation of the light depends on its wavelength : short wavelengths are rotated more than longer wavelengths, per unit of distance. Because the wavelength of light determines its color, a variation of color with distance through the tube is observed. [ citation needed ] This dependence of specific rotation on wavelength is called optical rotatory dispersion. In all materials the rotation varies with wavelength. The variation is caused by two quite different phenomena. The first accounts in most cases for the majority of the variation in rotation and should not strictly be termed rotatory dispersion. It depends on the fact that optical activity is actually circular birefringence : a substance which is optically active transmits right circularly polarized light with a different velocity from left circularly polarized light. In addition to this pseudodispersion, which depends on the material thickness, there is a true rotatory dispersion, which depends on the variation with wavelength of the indices of refraction for right and left circularly polarized light.
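A one-term truncation of the Drude relation, [α](λ) = A/(λ² − λ₀²), already reproduces the behavior described: rotation grows steeply as λ approaches an absorption wavelength λ₀ and falls off at long wavelengths. The constants below are the often-quoted one-term fit for sucrose (λ in micrometres), treated here as an assumption for illustration:

```python
# One-term Drude fit; constants are the often-quoted sucrose values
# (wavelength in micrometres), assumed here for illustration.
A = 21.648
LAM0_SQ = 0.0213  # lambda_0^2 in um^2

def specific_rotation(lam_um):
    """Specific rotation [alpha] at wavelength lam_um (micrometres)
    from a single-term Drude relation."""
    return A / (lam_um**2 - LAM0_SQ)

# Hg blue, Hg green, and Na D lines:
for lam in (0.4358, 0.5461, 0.5893):
    print(f"{lam * 1000:.0f} nm: [a] = {specific_rotation(lam):+.1f}")
```

The Na D value comes out near +66.5°, the familiar specific rotation of sucrose, and the shorter mercury lines give markedly larger rotations, illustrating the dispersion.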
For wavelengths that are absorbed by the optically active sample, the two circularly polarized components will be absorbed to differing extents. This unequal absorption is known as circular dichroism . Circular dichroism causes incident linearly polarized light to become elliptically polarized . The two phenomena are closely related, just as are ordinary absorption and dispersion. If the entire optical rotatory dispersion spectrum is known, the circular dichroism spectrum can be calculated, and vice versa. In order for a molecule (or crystal) to exhibit circular birefringence and circular dichroism, it must be distinguishable from its mirror image . An object that cannot be superimposed on its mirror image is said to be chiral , and optical rotatory dispersion and circular dichroism are known as chiroptical properties. Most biological molecules have one or more chiral centers and undergo enzyme-catalyzed transformations that either maintain or invert the chirality at one or more of these centers. Still other enzymes produce new chiral centers, always with a high specificity. These properties account for the fact that optical rotatory dispersion and circular dichroism are widely used in organic and inorganic chemistry and in biochemistry. In the absence of magnetic fields, only chiral substances exhibit optical rotatory dispersion and circular dichroism. In a magnetic field, even substances that lack chirality rotate the plane of polarized light, as shown by Michael Faraday . Magnetic optical rotation is known as the Faraday effect , and its wavelength dependence is known as magnetic optical rotatory dispersion. In regions of absorption, magnetic circular dichroism is observable.
https://en.wikipedia.org/wiki/Optical_rotatory_dispersion
Optical sectioning is the process by which a suitably designed microscope can produce clear images of focal planes deep within a thick sample. This is used to reduce the need for thin sectioning using instruments such as the microtome . Many different techniques for optical sectioning are used and several microscopy techniques are specifically designed to improve the quality of optical sectioning. Good optical sectioning, often referred to as good depth or z resolution, is popular in modern microscopy as it allows the three-dimensional reconstruction of a sample from images captured at different focal planes. In an ideal microscope, only light from the focal plane would be allowed to reach the detector (typically an observer or a CCD ) producing a clear image of the plane of the sample the microscope is focused on. Unfortunately a microscope is not this specific and light from sources outside the focal plane also reaches the detector; in a thick sample there may be a significant amount of material, and so spurious signal, between the focal plane and the objective lens . With no modification to the microscope, i.e. with a simple wide field light microscope , the quality of optical sectioning is governed by the same physics as the depth of field effect in photography . For a high numerical aperture lens, equivalent to a wide aperture , the depth of field is small ( shallow focus ) and gives good optical sectioning. High magnification objective lenses typically have higher numerical apertures (and so better optical sectioning) than low magnification objectives. Oil immersion objectives typically have even larger numerical apertures so improved optical sectioning. 
The resolution in the depth direction (the "z resolution") of a standard wide-field microscope depends on the numerical aperture and the wavelength of the light and can be approximated as: D_z = λn / NA², where λ is the wavelength, n the refractive index of the objective lens immersion medium, and NA the numerical aperture. [ 2 ] In comparison, the lateral resolution can be approximated as: [ 3 ] D_x = D_y = 0.61 λ / NA. Beyond increasing numerical aperture, there are few techniques available to improve optical sectioning in bright-field light microscopy. Most microscopes with oil immersion objectives are reaching the limits of numerical aperture possible due to refraction limits. Differential interference contrast (DIC) provides modest improvements to optical sectioning. In DIC the sample is effectively illuminated by two slightly offset light sources which then interfere to produce an image resulting from the phase differences between the two sources. As the offset in the light sources is small, the only difference in phase results from the material close to the focal plane. In fluorescence microscopy , objects out of the focal plane only interfere with the image if they are illuminated and fluoresce. This adds an extra way in which optical sectioning can be improved: by making illumination specific to the focal plane alone. Confocal microscopy uses a scanning point or points of light to illuminate the sample. In conjunction with a pinhole at a conjugate focal plane, this acts to filter out light from sources outside the focal plane, improving optical sectioning. [ 4 ] Lightsheet-based fluorescence microscopy illuminates the sample with excitation light at an angle of 90° to the direction of observation, i.e. only the focal plane is illuminated, using a laser that is focused in only one direction (a lightsheet).
[ 5 ] This method effectively reduces out-of-focus light and may in addition lead to a modest improvement in longitudinal resolution, compared to epifluorescence microscopy. Dual and multi-photon excitation techniques take advantage of the fact that fluorophores can be excited not just by a single photon of the correct energy but also by multiple photons, which together provide the correct energy. The additional " concentration "-dependent effect of requiring multiple photons to simultaneously interact with a fluorophore gives stimulation only very close to the focal plane. These techniques are normally used in conjunction with confocal microscopy. [ 6 ] Further improvements in optical sectioning are under active development; these principally work through methods to circumvent the diffraction limit of light. Examples include single photon interferometry through two objective lenses to give extremely accurate depth information about a single fluorophore [ 7 ] and three-dimensional structured illumination microscopy . [ 8 ] The optical sectioning of normal wide field microscopes can be improved significantly by deconvolution , an image processing technique to remove blur from the image according to a measured or calculated point spread function . [ 9 ] Optical sectioning can be enhanced by the use of clearing agents possessing a high refractive index (>1.4), such as benzyl alcohol/benzyl benzoate (BABB) or benzyl ether, [ 10 ] which render specimens transparent and therefore allow for observation of internal structures. Optical sectioning is underdeveloped in non-light microscopes. [ citation needed ] X-ray and electron microscopes typically have a large depth of field (poor optical sectioning), and thus thin sectioning of samples is still widely used. 
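The axial and lateral resolution approximations given earlier are easy to evaluate numerically. The following is a minimal sketch; the objective parameters used in the example are illustrative values, not taken from the article:

```python
def axial_resolution_nm(wavelength_nm, n, na):
    """Approximate z (depth) resolution of a wide-field microscope: D_z = lambda * n / NA^2."""
    return wavelength_nm * n / na ** 2

def lateral_resolution_nm(wavelength_nm, na):
    """Approximate lateral resolution: D_x = D_y = 0.61 * lambda / NA."""
    return 0.61 * wavelength_nm / na

# Illustrative example: 550 nm light, oil-immersion objective (n = 1.515, NA = 1.4)
dz = axial_resolution_nm(550, 1.515, 1.4)   # ~425 nm
dxy = lateral_resolution_nm(550, 1.4)       # ~240 nm
```

As the 1/NA² dependence suggests, depth resolution degrades faster than lateral resolution as NA decreases, which is why high-NA objectives give noticeably better optical sectioning.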
Although similar physics guides the focusing process, [ 11 ] scanning probe microscopes and scanning electron microscopes are not typically discussed in the context of optical sectioning, as these microscopes only interact with the surface of the sample. Total internal reflection microscopy is a fluorescence microscopy technique that intentionally restricts observation to either the top or bottom surface of a sample, but with extremely high depth resolution. 3D imaging using a combination of focal sectioning and tilting has been demonstrated theoretically and experimentally in order to provide exceptional 3D resolution over large fields of view. [ 12 ] The primary alternatives to optical sectioning are:
https://en.wikipedia.org/wiki/Optical_sectioning
Optical sorting (sometimes called digital sorting ) is the automated process of sorting solid products using cameras and/or lasers . Depending on the types of sensors used and the software-driven intelligence of the image processing system, optical sorters can recognize an object's color, size, shape, structural properties and chemical composition. [ 1 ] The sorter compares objects to user-defined accept/reject criteria to identify and remove defective products and foreign material (FM) from the production line, or to separate product of different grades or types of materials. Optical sorters are in widespread use in the food industry worldwide, with the highest adoption in processing harvested foods such as potatoes, fruits, vegetables and nuts where it achieves non-destructive, 100 percent inspection in-line at full production volumes. [ citation needed ] The technology is also used in pharmaceutical manufacturing and nutraceutical manufacturing, tobacco processing, waste recycling and other industries. Compared to manual sorting, which is subjective and inconsistent, optical sorting helps improve product quality, maximize throughput and increase yields while reducing labor costs. [ 2 ] Optical sorting is an idea that first came out of the desire to automate industrial sorting of agricultural goods like fruits and vegetables. [ 3 ] Before automated optical sorting technology was conceived in the 1930s, companies like Unitec were producing wooden machinery to assist in the mechanical sorting of fruit processing. [ 3 ] In 1931, a company known as “the Electric Sorting Company” was incorporated and began the creation of the world’s first color sorters, which were being installed and used in Michigan’s bean industry by 1932. [ 4 ] In 1937, optical sorting technology had advanced to allow for systems based on a two-color principle of selection. 
[ 4 ] The next few decades saw the installation of new and improved sorting mechanisms, like gravity feed systems and the implementation of optical sorting in more agricultural industries. [ 5 ] In the late 1960s, optical sorting began to be implemented to new industries beyond agriculture, like the sorting of ferrous and non-ferrous metals. [ 6 ] By the 1990s, optical sorting was being used heavily in the sorting of solid wastes. [ 6 ] With the large technological revolution happening in the late 1990s and early 2000s, optical sorters were being made more efficient via the implementation of new optical sensors, like CCD, UV , and IR cameras. [ 5 ] Today, optical sorting is used in a wide variety of industries and, as such, is implemented with a varying selection of mechanisms to assist in that specific sorter’s task. In general, optical sorters feature four major components: the feed system, the optical system, image processing software, and the separation system. [ 7 ] The objective of the feed system is to spread products into a uniform monolayer so products are presented to the optical system evenly, without clumps, at a constant velocity. The optical system includes lights and sensors housed above and/or below the flow of the objects being inspected. The image processing system compares objects to user-defined accept/reject thresholds to classify objects and actuate the separation system. The separation system — usually compressed air for small products and mechanical devices for larger products, like whole potatoes — pinpoints objects while in-air and deflects the objects to remove into a reject chute while the good product continues along its normal trajectory. The ideal sorter to use depends on the application. Therefore, the product's characteristics and the user's objectives determine the ideal sensors, software-driven capabilities and mechanical platform. 
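The accept/reject step described above reduces each detected object to measured features and compares them with user-defined thresholds that actuate the separation system. The function, units, and threshold value below are hypothetical, used only to illustrate the idea:

```python
def classify_object(total_area_mm2, defect_area_mm2, max_defect_fraction=0.05):
    """Reject an object when its defective surface fraction exceeds a
    user-defined threshold (the 5% default here is illustrative)."""
    if total_area_mm2 <= 0:
        raise ValueError("object area must be positive")
    defect_fraction = defect_area_mm2 / total_area_mm2
    return "reject" if defect_fraction > max_defect_fraction else "accept"

# An object with 2% defective surface passes; one with 12% triggers the ejector.
print(classify_object(100.0, 2.0))   # accept
print(classify_object(100.0, 12.0))  # reject
```

In a real sorter this decision runs per object at line speed, and the "reject" outcome fires a pinpointed burst of compressed air or a mechanical deflector.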
Optical sorters require a combination of lights and sensors to illuminate and capture images of the objects so the images can be processed. The processed images determine whether the material should be accepted or rejected. There are camera sorters, laser sorters and sorters that feature a combination of the two on one platform. Lights, cameras, lasers and laser sensors can be designed to function within visible light wavelengths as well as the infrared (IR) and ultraviolet (UV) spectra . The optimal wavelengths for each application maximize the contrast between the objects to be separated. Cameras and laser sensors can differ in spatial resolution, with higher resolutions enabling the sorter to detect and remove smaller defects. Monochromatic cameras detect shades of gray from black to white and can be effective when sorting products with high-contrast defects. Sophisticated color cameras with high color resolution are capable of detecting millions of colors to better distinguish more subtle color defects. Trichromatic color cameras (also called three-channel cameras) divide light into three bands, which can include red, green and/or blue within the visible spectrum as well as IR and UV. The interaction of different materials with different parts of the electromagnetic spectrum makes these contrasts more evident than they appear to the naked human eye. Coupled with intelligent software, sorters that feature cameras are capable of recognizing each object's color, size and shape, as well as the color, size, shape and location of a defect on a product. Some intelligent sorters even allow the user to define a defective product based on the total defective surface area of any given object. While cameras capture product information based primarily on material reflectance, lasers and their sensors are able to distinguish a material's structural properties along with its color. 
This structural property inspection allows lasers to detect a wide range of organic and inorganic foreign material such as insects, glass, metal, sticks, rocks and plastic, even if they are the same color as the good product. Lasers can be designed to operate within specific wavelengths of light, whether on the visible spectrum or beyond. [ 8 ] For example, lasers can detect chlorophyll by stimulating fluorescence using specific wavelengths, a process that is very effective for removing foreign material from green vegetables. [ 9 ] Sorters equipped with cameras and lasers on one platform are generally capable of identifying the widest variety of attributes. Cameras are often better at recognizing color, size and shape, while laser sensors identify differences in structural properties to maximize foreign material detection and removal. Driven by the need to solve previously impossible sorting challenges, a new generation of sorters featuring multispectral and hyperspectral imaging has emerged. [ 10 ] Like trichromatic cameras, multispectral and hyperspectral cameras collect data from the electromagnetic spectrum. Unlike trichromatic cameras, which divide light into three bands, hyperspectral systems can divide light into hundreds of narrow bands over a continuous range that covers a vast portion of the electromagnetic spectrum. This opens the door for more detailed analysis that leads to a more consistent product. Using IR alone might detect some defects, but combining it with a broader range of the spectrum makes it more effective. Compared to the three data points per pixel collected by trichromatic cameras, hyperspectral cameras can collect hundreds of data points per pixel, which are combined to create a unique spectral signature (also called a fingerprint) for each object. When complemented by capable software intelligence, a hyperspectral sorter processes those fingerprints to enable sorting on the chemical composition of the product. 
This is an emerging area of chemometrics . Once the sensors capture the object's response to the energy source, image processing is used to manipulate the raw data. The image processing extracts and categorizes information about specific features. The user then defines accept/reject thresholds that are used to determine what is good and bad in the raw data flow. The art and science of image processing lies in developing algorithms that maximize the effectiveness of the sorter while presenting a simple user-interface to the operator. Object-based recognition is a classic example of software-driven intelligence. It allows the user to define a defective product based on where a defect lies on the product and/or the total defective surface area of an object. It offers more control in defining a wider range of defective products. When used to control the sorter's ejection system, it can improve the accuracy of ejecting defective products. This improves product quality and increases yields. New software-driven capabilities are constantly being developed to address the specific needs of various applications. As computing hardware becomes more powerful, new software-driven advancements become possible. Some of these advancements enhance the effectiveness of sorters to achieve better results while others enable completely new sorting decisions to be made. The considerations that determine the ideal platform for a specific application include the nature of the product – large or small, wet or dry, fragile or unbreakable, round or easy to stabilize – and the user's objectives. In general, products smaller than a grain of rice and as large as whole potatoes can be sorted. Throughputs range from less than 2 metric tons of product per hour on low-capacity sorters to more than 35 metric tons of product per hour on high-capacity sorters. 
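One standard way to compare such per-pixel spectral signatures (though not necessarily the method any particular sorter implements) is the spectral angle: each spectrum is treated as a vector, and the angle between a pixel's spectrum and a reference fingerprint measures similarity. The five-band signatures below are made-up illustrative values:

```python
import math

def spectral_angle_rad(pixel, reference):
    """Angle between two spectra treated as vectors; smaller = more similar.
    Insensitive to overall brightness, since only the direction matters."""
    dot = sum(p * r for p, r in zip(pixel, reference))
    norm = math.sqrt(sum(p * p for p in pixel)) * math.sqrt(sum(r * r for r in reference))
    cos_theta = max(-1.0, min(1.0, dot / norm))
    return math.acos(cos_theta)

# Hypothetical 5-band signatures: a scaled copy of the reference matches exactly,
# because brightness changes do not alter the spectral "shape".
reference     = [0.2, 0.4, 0.9, 0.5, 0.1]
same_material = [0.4, 0.8, 1.8, 1.0, 0.2]   # same shape, twice as bright
foreign       = [0.9, 0.8, 0.2, 0.1, 0.1]   # different shape -> large angle
```

Thresholding this angle gives a simple accept/reject rule based on chemical composition rather than color alone.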
The simplest optical sorters are channel sorters, a type of color sorter that can be effective for products that are small, hard, and dry with a consistent size and shape, such as rice and seeds. For these products, channel sorters offer an affordable solution and ease of use with a small footprint. Channel sorters feature monochromatic or color cameras and remove defects and foreign material based only on differences in color. For products that cannot be handled by a channel sorter – such as soft, wet, or nonhomogeneous products – and for processors that want more control over the quality of their product, freefall sorters (also called waterfall or gravity-fed sorters), chute-fed sorters, or belt sorters are better suited. These more sophisticated sorters often feature advanced cameras and/or lasers that, when complemented by capable software intelligence, detect objects' size, shape, color, structural properties, and chemical composition. Freefall sorters inspect product in-air during the freefall, while chute-fed sorters stabilize product on a chute prior to in-air inspection. The major advantages of freefall and chute-fed sorters, compared to belt sorters, are a lower price point and lower maintenance. These sorters are often most suitable for nuts and berries as well as frozen and dried fruits, vegetables, potato strips and seafood, in addition to waste recycling applications that require mid-volume throughputs. Belt sorting platforms are often preferred for higher capacity applications such as vegetable and potato products prior to canning, freezing or drying. The products are often stabilized on a conveyor belt prior to inspection. Some belt sorters inspect products from above the belt, while other sorters also send products off of the belt for an in-air inspection. These sorters can achieve either traditional two-way sorting or, if two ejector systems with three outfeed streams are equipped, three-way sorting. 
A fifth type of sorting platform, called an automated defect removal (ADR) system, is specifically for potato strips (French fries). Unlike other sorters that eject products with defects from the production line, ADR systems identify defects and actually cut the defects from the strips. The combination of an ADR system followed by a mechanical nubbin grader is another type of optical sorting system because it uses optical sensors to identify and remove defects. The platforms described above all operate with materials in bulk; meaning they do not need the materials to be in a single-file to be inspected. In contrast, a sixth type of platform, used in the pharmaceutical industry, is a single-file optical inspection system. These sorters are effective in removing foreign objects based on differences in size, shape and color. They are not as popular as the other platforms due to decreased efficiency. For products that require sorting only by size, mechanical grading systems are used because sensors and image processing software is not necessary. These mechanical grading systems are sometimes referred to as sorting systems, but should not be confused with optical sorters that feature sensors and image processing systems. Optical sorting machines can be used to identify and discard manufacturing waste, such as metals, drywall, cardboard, and various plastics. [ 11 ] In the metal industry, optical sorting machines are used to discard plastics, glass, wood, and other non-needed metals. [ 12 ] The plastic industry uses optical sorting machines to not only discard various materials like those listed, but also different types of plastics. Optical sorting machines discard different types of plastics by distinguishing resin types. Resin types that optical sorting machines can identify are: PET, HDPE, PP, PVC, LDPE, and others. Most recyclables are in the form of bottles. [ 13 ] [ 12 ] Optical sorting also aids in recycling since the discarded materials are stored in bins. 
Once a bin is full of a given material, it can be sent to the appropriate recycling facility. [ 14 ] Optical sorting machines’ ability to distinguish between resin types also aids in the process of plastic recycling because there are different methods used for each plastic type. [ 15 ] In the coffee industry, optical sorting machines are used to identify and remove underdeveloped coffee beans called quakers; quakers are beans that contain mostly carbohydrates and sugars. [ 16 ] A more accurate calibration offers a lower total number of defective products. [ 16 ] Some coffee companies like Counter Culture use these machines in addition to pre-existing sorting methods in order to create a better tasting cup of coffee. [ 16 ] One limitation is that someone has to program these machines by hand to identify defective products. [ 16 ] However, this science is not limited to coffee beans; food items such as mustard seeds, fruits, wheat, and hemp can all be processed through optical sorting machines. [ 17 ] In the wine manufacturing process, grapes and berries are sorted like coffee beans. [ 18 ] Grape sorting is used to ensure no unripe/green parts to the plant are involved in the wine making process. [ 18 ] In the past, manual sorting via sorting tables was used to separate the defective grapes from the more effective grapes. [ 18 ] Now, mechanical harvesting provides a higher effectiveness rate compared to manual sorting. [ 18 ] At different points in the line, materials are sorted out via several optical sorting machines. [ 18 ] Each machine is looking for various materials of differing shapes and sizes. [ 18 ] The berries or grapes can then be sorted accordingly using a camera, a laser, or a form of LED technology with regard to the shape and form of the given fruit. The sorting machine then discards any unnecessary elements. [ 19 ] [ 20 ] [ 21 ] In the pharmaceutical sector, optical sorting ensures the production of high-quality and safe medications. 
The technology meticulously inspects tablets and capsules to detect and remove defects such as cracks, chips, discoloration, and size deviations. It also eliminates foreign contaminants like metal particles or plastic fragments that may have entered during manufacturing. By automating the inspection process, optical sorters reduce human error and labor costs while maintaining compliance with stringent regulatory standards, ultimately safeguarding consumer health and brand reputation. [ 22 ] Additionally, in medical laboratories, optical sorters aid in the sorting and analysis of biological samples, such as cells or bacteria cultures. The high-speed analysis and sorting capabilities of these machines improve diagnostic accuracy, research efficiency, and overall laboratory productivity. [ 23 ]
https://en.wikipedia.org/wiki/Optical_sorting
An optical spectrometer ( spectrophotometer , spectrograph or spectroscope ) is an instrument used to measure properties of light over a specific portion of the electromagnetic spectrum , typically used in spectroscopic analysis to identify materials. [ 1 ] The variable measured is most often the irradiance of the light but could also, for instance, be the polarization state. The independent variable is usually the wavelength of the light or a closely derived physical quantity, such as the corresponding wavenumber or the photon energy, in units of measurement such as centimeters, reciprocal centimeters , or electron volts , respectively. A spectrometer is used in spectroscopy for producing spectral lines and measuring their wavelengths and intensities. Spectrometers may operate over a wide range of non-optical wavelengths, from gamma rays and X-rays into the far infrared . If the instrument is designed to measure the spectrum on an absolute scale rather than a relative one, then it is typically called a spectrophotometer . The majority of spectrophotometers are used in spectral regions near the visible spectrum. A spectrometer that is calibrated for measurement of the incident optical power is called a spectroradiometer . [ 2 ] In general, any particular instrument will operate over a small portion of this total range because of the different techniques used to measure different portions of the spectrum. Below optical frequencies (that is, at microwave and radio frequencies), the spectrum analyzer is a closely related electronic device. Spectrometers are used in many fields. For example, they are used in astronomy to analyze the radiation from objects and deduce their chemical composition. The spectrometer uses a prism or a grating to spread the light into a spectrum. This allows astronomers to detect many of the chemical elements by their characteristic spectral lines. 
These lines are named for the elements which cause them, such as the hydrogen alpha , beta, and gamma lines. A glowing object will show bright spectral lines. Dark lines are made by absorption, for example by light passing through a gas cloud, and these absorption lines can also identify chemical compounds. Much of our knowledge of the chemical makeup of the universe comes from spectra. Spectroscopes are often used in astronomy and some branches of chemistry . Early spectroscopes were simply prisms with graduations marking wavelengths of light. Modern spectroscopes generally use a diffraction grating , a movable slit , and some kind of photodetector , all automated and controlled by a computer . Recent advances have seen increasing reliance on computational algorithms in a range of miniaturised spectrometers without diffraction gratings, for example, through the use of quantum dot-based filter arrays on a CCD chip [ 3 ] or a series of photodetectors realised on a single nanostructure. [ 4 ] Joseph von Fraunhofer developed the first modern spectroscope by combining a prism, diffraction slit and telescope in a manner that increased the spectral resolution and was reproducible in other laboratories. Fraunhofer also went on to invent the first diffraction spectroscope. [ 5 ] Gustav Robert Kirchhoff and Robert Bunsen discovered the application of spectroscopes to chemical analysis and used this approach to discover caesium and rubidium . [ 6 ] [ 7 ] Kirchhoff and Bunsen's analysis also enabled a chemical explanation of stellar spectra , including Fraunhofer lines . [ 8 ] When a material is heated to incandescence it emits light that is characteristic of the atomic makeup of the material. Particular light frequencies give rise to sharply defined bands on the scale which can be thought of as fingerprints. 
For example, the element sodium has a very characteristic double yellow band known as the Sodium D-lines at 588.9950 and 589.5924 nanometers, the color of which will be familiar to anyone who has seen a low pressure sodium vapor lamp . In the original spectroscope design in the early 19th century, light entered a slit and a collimating lens transformed the light into a thin beam of parallel rays. The light then passed through a prism (in hand-held spectroscopes, usually an Amici prism ) that refracted the beam into a spectrum because different wavelengths were refracted different amounts due to dispersion . This image was then viewed through a tube with a scale that was transposed upon the spectral image, enabling its direct measurement. With the development of photographic film , the more accurate spectrograph was created. It was based on the same principle as the spectroscope, but it had a camera in place of the viewing tube. In recent years, the electronic circuits built around the photomultiplier tube have replaced the camera, allowing real-time spectrographic analysis with far greater accuracy. Arrays of photosensors are also used in place of film in spectrographic systems. Such spectral analysis, or spectroscopy, has become an important scientific tool for analyzing the composition of unknown material and for studying astronomical phenomena and testing astronomical theories. In modern spectrographs in the UV, visible, and near-IR spectral ranges, the spectrum is generally given in the form of photon number per unit wavelength (nm or μm), wavenumber (μm −1 , cm −1 ), frequency (THz), or energy (eV), with the units indicated by the abscissa . In the mid- to far-IR, spectra are typically expressed in units of Watts per unit wavelength (μm) or wavenumber (cm −1 ). In many cases, the spectrum is displayed with the units left implied (such as "digital counts" per spectral channel). 
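The conversions between the spectral units mentioned above follow directly from ν = c/λ and E = hc/λ. A quick sketch, using the sodium D-line near 589 nm as the example:

```python
H = 6.62607015e-34    # Planck constant, J*s
C = 299792458.0       # speed of light in vacuum, m/s
EV = 1.602176634e-19  # joules per electronvolt

def wavenumber_inv_cm(wavelength_nm):
    """Wavelength (nm) -> wavenumber (cm^-1)."""
    return 1.0e7 / wavelength_nm

def frequency_thz(wavelength_nm):
    """Wavelength (nm) -> frequency (THz), via nu = c / lambda."""
    return C / (wavelength_nm * 1e-9) / 1e12

def photon_energy_ev(wavelength_nm):
    """Wavelength (nm) -> photon energy (eV), via E = h*c / lambda."""
    return H * C / (wavelength_nm * 1e-9) / EV

# Sodium D-line near 589 nm: ~16978 cm^-1, ~509 THz, ~2.1 eV
```

These are the same quantities a spectrograph may report on its abscissa, so a spectrum plotted against any one of them can be re-expressed in the others.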
Gemologists frequently use spectroscopes to determine the absorption spectra of gemstones, thereby allowing them to make inferences about what kind of gem they are examining. [ 9 ] A gemologist may compare the absorption spectrum they observe with a catalogue of spectra for various gems to help narrow down the exact identity of the gem. A spectrograph is an instrument that separates light into its wavelengths and records the data. [ 11 ] A spectrograph typically has a multi-channel detector system or camera that detects and records the spectrum of light. [ 11 ] [ 12 ] The term was first used in 1876 by Dr. Henry Draper when he invented the earliest version of this device, and which he used to take several photographs of the spectrum of Vega . This earliest version of the spectrograph was cumbersome to use and difficult to manage. [ 13 ] There are several kinds of machines referred to as spectrographs , depending on the precise nature of the waves. The first spectrographs used photographic paper as the detector. The plant pigment phytochrome was discovered using a spectrograph that used living plants as the detector. More recent spectrographs use electronic detectors, such as CCDs which can be used for both visible and UV light. The exact choice of detector depends on the wavelengths of light to be recorded. A spectrograph is sometimes called polychromator , as an analogy to monochromator . The star spectral classification and discovery of the main sequence , Hubble's law and the Hubble sequence were all made with spectrographs that used photographic paper. James Webb Space Telescope contains both a near-infrared spectrograph ( NIRSpec ) and a mid-infrared spectrograph ( MIRI ). An echelle -based spectrograph uses two diffraction gratings , rotated 90 degrees with respect to each other and placed close to one another. Therefore, an entrance point and not a slit is used and a CCD-chip records the spectrum. 
Both gratings have a wide spacing, and one is blazed so that only the first order is visible and the other is blazed with many higher orders visible, so a very fine spectrum is presented to the CCD. In conventional spectrographs, a slit is inserted into the beam to limit the image extent in the dispersion direction. A slitless spectrograph omits the slit; this results in images that convolve the image information with spectral information along the direction of dispersion. If the field is not sufficiently sparse, then spectra from different sources in the image field will overlap. The trade is that slitless spectrographs can produce spectral images much more quickly than scanning a conventional spectrograph. That is useful in applications such as solar physics where time evolution is important.
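The dispersion in grating-based spectrographs such as these is governed by the grating equation d(sin θ_m − sin θ_i) = mλ. The sketch below evaluates it for illustrative numbers (a 600 grooves/mm grating at normal incidence, not a parameter from the article):

```python
import math

def diffraction_angle_deg(groove_spacing_nm, wavelength_nm, order=1, incidence_deg=0.0):
    """Angle of the m-th diffraction order from the grating equation
    d*(sin(theta_m) - sin(theta_i)) = m*lambda (normal incidence by default)."""
    s = order * wavelength_nm / groove_spacing_nm + math.sin(math.radians(incidence_deg))
    if abs(s) > 1.0:
        raise ValueError("this order does not propagate for the given geometry")
    return math.degrees(math.asin(s))

# Illustrative: 600 grooves/mm grating (d ~ 1666.7 nm), sodium D-line (589 nm).
# The first order emerges near 20.7 degrees; higher orders disperse more strongly,
# which is what an echelle grating exploits.
angle = diffraction_angle_deg(1666.7, 589.0)
```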
https://en.wikipedia.org/wiki/Optical_spectrometer
The optical stretcher is a dual-beam optical trap that is used for trapping and deforming ("stretching") micrometre-sized soft matter particles, such as biological cells in suspension. The forces used for trapping and deforming objects arise from photon momentum transfer on the surface of the objects, making the optical stretcher – unlike atomic force microscopy or micropipette aspiration – a tool for contact-free rheology measurements. The trapping of micrometre-sized particles by two laser beams was first demonstrated by Arthur Ashkin in 1970, [ 1 ] before he developed the single-beam trap now known as optical tweezers . With the single-beam design it is no longer necessary to exactly match the optical axes of two lasers. From the late 1980s on, optical tweezers have been used to trap and hold biological dielectrics, such as cells or viruses . [ 2 ] In order to ensure trap stability, the single beam must be highly focused, with the particle trapped close to the focus point. Preventing damage to biological material (see Opticution ) by the high local light intensities in the focus limits the laser powers that can be used in optical tweezers to a force range too low for rheology experiments, so that optical tweezers are suitable for trapping biological particles, but unsuitable for deforming them. The optical stretcher, developed at the end of the 1990s by Jochen Guck and Josef A. Käs , [ 3 ] circumvents this problem by going back to the dual-beam design originally developed by Ashkin. This allows for weakly divergent laser beams, thus preventing damage from localized light intensities and increasing the possible stretching forces to a range that is sufficient for the deformation of soft matter. The laser powers used in stretching cells are typically on the order of 1 W, generating stretch forces on the order of 100 pN. The resulting relative cellular deformation then usually lies in the range of 1%–10%. 
The optical stretcher has since been developed into a versatile biophysical tool used by many groups worldwide for contact-free, marker-free measurements of whole-cell rheology. Using automated setups, high throughput rates of more than 100 cells/hour have been achieved, allowing for statistical analysis of the data. Cell mechanics and cell rheology play a crucial role in cellular development and also in many diseases. Due to its high throughput, the optical stretcher has in many biomechanical studies been the tool of choice to investigate the development of or changes in cell mechanics, among them studies on the development of cancer and stem cell differentiation. An exemplary study in stem cell research sheds light on the process of cell differentiation: Hematopoietic stem cells residing in the bone marrow differentiate into different types of blood cells to produce human blood – i.e., red blood cells and different types of white blood cells. In this study, it was shown that the white blood cell types show different mechanical behaviour depending on their later physiological function and that these differences arise during the process of stem cell differentiation. [ 4 ] Using the optical stretcher, it was also shown that cancerous cells differ significantly in their mechanical properties from their healthy counterparts. [ 5 ] The authors claim that the 'optical deformability' can be used as a biomechanical marker to distinguish cancerous from healthy cells, and even that higher stages of malignancy can be detected. A typical optical stretcher setup consists of the following main parts: Objects trapped in the optical stretcher usually have diameters on the scale of 10 μm, which is very large compared to the laser wavelengths used (often 1064 nm). It is thus sufficient to consider the interaction with the laser light in terms of ray optics . When a ray enters the object, it is refracted due to the different refractive index according to Snell's law . 
Because photons carry momentum , a change in the direction of propagation of a light ray implies a momentum change, i.e. a force. According to Newton's third law , a corresponding force pointing in the opposite direction acts on the surface of the object. These surface forces due to photon momentum change are the origin of the optical stretcher's ability to trap and stretch objects. [ 6 ] All surface forces can be added up to a resulting force pulling on the center of mass of the object, which is used to trap objects. Usually, Gaussian laser beams are used to trap particles. The most important thing to note is that Gaussian beams have a light intensity gradient, i.e. the light intensity is high in the center of the beam (on the optical axis ) and decreases off the axis. It can be illustrative to decompose the trapping force into two components called the scattering force and the gradient force : The rays on the inner side are mostly refracted away from the beam axis (see figure on the right), leading to a corresponding force towards the beam axis on the object. The gradient force thus pulls the object onto the beam axis. This requires the refractive index of that object to be higher than the index of the surrounding medium – otherwise the refraction would lead to the opposite result, pushing particles out of the beam. However, the refractive index of biological matter is always higher than that of water or cell medium due to the additional protein content. In the optical stretcher, two counterpropagating laser beams are used in order to cancel their corresponding scattering forces. Because their gradient forces point in the same direction, pulling particles towards their common beam axis, they add up, and one arrives at a stable trap position. An alternative approach to understanding the trapping mechanism is to consider the interaction of the particle with the electric fields of the laser beam. 
This leads to the known fact that electric dipoles (or dielectric, polarizable media like cells) are pulled to the region of highest field intensities, i.e. to the center of the beam. See Optical trap § Electric dipole approximation for details. Once the particle is stably trapped, there is no net force on the center of mass of the particle. However, the forces appearing at the surface of the particle do not cancel, and contrary to what one might naively expect, the light does not squeeze the cell but stretches it: The magnitude of the photon momentum is given by p = n h λ {\displaystyle p={\frac {nh}{\lambda }}} where h is the Planck constant , n is the refractive index of the medium, and λ is the wavelength of the light. The photon momentum increases when the photon enters a medium of higher refractive index. Conservation of momentum then leads to a surface force acting on the particle, pointing in the opposite direction, i.e. outwards . When a photon leaves the trapped object, its momentum decreases and again conservation of momentum requires that an outward-pointing force be exerted. Thus, as all surface forces point outwards, they do not cancel but add up. The highest stretching forces can be found on the beam axis, where the light intensity is highest and the rays are incident at right angles. Near the poles of the cell, where virtually no rays impinge, the surface forces vanish. Different mathematical models have been developed to calculate the stretching forces, based on ray optics [ 7 ] [ 8 ] or the solution of Maxwell's equations. [ 9 ]
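The surface-force argument can be made concrete with a rough numerical sketch. The following is an illustration only — the power and refractive indices are assumed, typical values, not taken from the article — estimating the outward force on the entry surface of a cell at normal incidence from the photon momentum p = nh/λ:

```python
# Illustrative estimate (assumed parameter values): outward surface force on
# a cell boundary at normal incidence, using photon momentum p = n*h/lambda.
h = 6.626e-34          # Planck constant (J s)
c = 3.0e8              # speed of light (m/s)
lam = 1064e-9          # trapping laser wavelength (m)
n_medium = 1.335       # refractive index of cell medium (assumed)
n_cell = 1.37          # refractive index of cytoplasm (assumed)
P = 0.1                # laser power reaching the cell (W), assumed

def photon_momentum(n):
    """Momentum of a photon inside a medium of refractive index n."""
    return n * h / lam

# Each photon gains momentum on entering the cell; by conservation of
# momentum the entry surface recoils outwards with the difference.
dp = photon_momentum(n_cell) - photon_momentum(n_medium)

photon_flux = P / (h * c / lam)   # photons per second carried by power P
force = photon_flux * dp          # net outward force on the surface (N)
print(f"outward surface force ≈ {force * 1e12:.1f} pN")
```

The flux factor cancels h, so the result reduces to F = P(n_cell − n_medium)/c, on the order of ten piconewtons here — consistent with the picture that both surfaces are pulled outwards, stretching the cell.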
https://en.wikipedia.org/wiki/Optical_stretcher
In physics , the optical theorem is a general law of wave scattering theory , which relates the zero-angle scattering amplitude to the total cross section of the scatterer. [ 1 ] It is usually written in the form σ t o t = 4 π k Im ⁡ f ( 0 ) {\displaystyle \sigma _{\mathrm {tot} }={\frac {4\pi }{k}}\operatorname {Im} f(0)} where f (0) is the scattering amplitude with an angle of zero, that is the amplitude of the wave scattered to the center of a distant screen and k is the wave vector in the incident direction. Because the optical theorem is derived using only conservation of energy , or in quantum mechanics from conservation of probability , the optical theorem is widely applicable and, in quantum mechanics , σ t o t {\displaystyle \sigma _{\mathrm {tot} }} includes both elastic and inelastic scattering. The generalized optical theorem , first derived by Werner Heisenberg , follows from the unitarity condition and is given by [ 2 ] f ( n , n ′ ) − f ∗ ( n ′ , n ) = i k 2 π ∫ f ( n , n ″ ) f ∗ ( n ′ , n ″ ) d Ω ″ {\displaystyle f(\mathbf {n} ,\mathbf {n} ')-f^{*}(\mathbf {n} ',\mathbf {n} )={\frac {ik}{2\pi }}\int f(\mathbf {n} ,\mathbf {n} '')f^{*}(\mathbf {n} ',\mathbf {n} '')\,d\Omega ''} where f ( n , n ′ ) {\displaystyle f(\mathbf {n} ,\mathbf {n} ')} is the scattering amplitude that depends on the direction n {\displaystyle \mathbf {n} } of the incident wave and the direction n ′ {\displaystyle \mathbf {n} '} of scattering and d Ω ″ {\displaystyle d\Omega ''} is the differential solid angle . When n = n ′ {\displaystyle \mathbf {n} =\mathbf {n} '} , the above relation yields the optical theorem since the left-hand side is just twice the imaginary part of f ( n , n ) {\displaystyle f(\mathbf {n} ,\mathbf {n} )} and since σ = ∫ | f ( n , n ″ ) | 2 d Ω ″ {\displaystyle \sigma =\int |f(\mathbf {n} ,\mathbf {n} '')|^{2}\,d\Omega ''} . For scattering in a centrally symmetric field, f {\displaystyle f} depends only on the angle θ {\displaystyle \theta } between n {\displaystyle \mathbf {n} } and n ′ {\displaystyle \mathbf {n} '} , in which case, the above relation reduces to Im ⁡ f ( θ ) = k 4 π ∫ f ( γ ) f ∗ ( γ ′ ) d Ω ″ {\displaystyle \operatorname {Im} f(\theta )={\frac {k}{4\pi }}\int f(\gamma )f^{*}(\gamma ')\,d\Omega ''} where γ {\displaystyle \gamma } and γ ′ {\displaystyle \gamma '} are the angles between some direction n ″ {\displaystyle \mathbf {n} ''} and, respectively, n {\displaystyle \mathbf {n} } and n ′ {\displaystyle \mathbf {n} '} . 
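The theorem can be checked numerically. The sketch below is illustrative (the phase shifts are invented for the demonstration): it builds a purely elastic amplitude from a few partial waves, f(θ) = (1/k) Σ (2l+1) e^{iδ_l} sin δ_l P_l(cos θ), and verifies that (4π/k) Im f(0) reproduces the cross section obtained by integrating |f|² over all solid angle, as unitarity demands.

```python
import numpy as np
from numpy.polynomial.legendre import legval

# Illustrative check of the optical theorem for a partial-wave amplitude.
# The phase shifts below are assumed, not taken from any real potential.
k = 2.0                       # wavenumber (arbitrary units)
deltas = [0.8, 0.4, 0.1]      # phase shifts for l = 0, 1, 2 (invented)

def f(theta):
    """Elastic scattering amplitude from the partial-wave series."""
    out = 0j
    for l, d in enumerate(deltas):
        P_l = legval(np.cos(theta), [0.0] * l + [1.0])   # Legendre P_l
        out = out + (2 * l + 1) * np.exp(1j * d) * np.sin(d) * P_l
    return out / k

# Total elastic cross section by direct integration of |f|^2 over solid angle
theta = np.linspace(0.0, np.pi, 20001)
integrand = 2 * np.pi * np.abs(f(theta))**2 * np.sin(theta)
sigma_integrated = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(theta))

# Optical theorem: sigma_tot = (4 pi / k) * Im f(0)
sigma_optical = (4 * np.pi / k) * f(0.0).imag

assert np.isclose(sigma_integrated, sigma_optical, rtol=1e-6)
```

Because P_l(1) = 1, both sides equal (4π/k²) Σ (2l+1) sin²δ_l here, which is the familiar partial-wave form of the total cross section.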
The optical theorem was originally developed independently by Wolfgang Sellmeier [ 3 ] and Lord Rayleigh in 1871. [ 4 ] Lord Rayleigh recognized the zero-angle scattering amplitude in terms of the index of refraction as n = 1 + 2 π N f ( 0 ) k 2 {\displaystyle n=1+2\pi {\frac {Nf(0)}{k^{2}}}} (where N is the number density of scatterers), which he used in a study of the color and polarization of the sky. The equation was later extended to quantum scattering theory by several individuals, and came to be known as the Bohr–Peierls–Placzek relation after a 1939 paper. It was first referred to as the "optical theorem" in print in 1955 by Hans Bethe and Frederic de Hoffmann , after it had been known as a "well known theorem of optics" for some time. The theorem can be derived rather directly from a treatment of a scalar wave . If a plane wave is incident along positive z axis on an object, then the wave scattering amplitude a great distance away from the scatterer is approximately given by ψ ( r ) ≈ e i k z + f ( θ ) e i k r r . {\displaystyle \psi (\mathbf {r} )\approx e^{ikz}+f(\theta ){\frac {e^{ikr}}{r}}.} All higher terms, when squared, vanish more quickly than 1 / r 2 {\displaystyle 1/r^{2}} , and so are negligible a great distance away. For large values of z {\displaystyle z} and for small angles, a Taylor expansion gives us r = x 2 + y 2 + z 2 ≈ z + x 2 + y 2 2 z . {\displaystyle r={\sqrt {x^{2}+y^{2}+z^{2}}}\approx z+{\frac {x^{2}+y^{2}}{2z}}.} We would now like to use the fact that the intensity is proportional to the square of the amplitude ψ {\displaystyle \psi } . Approximating 1 / r {\displaystyle 1/r} as 1 / z {\displaystyle 1/z} , we have | ψ | 2 ≈ | e i k z + f ( θ ) z e i k z e i k ( x 2 + y 2 ) / 2 z | 2 = 1 + f ( θ ) z e i k ( x 2 + y 2 ) / 2 z + f ∗ ( θ ) z e − i k ( x 2 + y 2 ) / 2 z + | f ( θ ) | 2 z 2 . {\displaystyle |\psi |^{2}\approx \left|e^{ikz}+{\frac {f(\theta )}{z}}e^{ikz}e^{ik(x^{2}+y^{2})/2z}\right|^{2}=1+{\frac {f(\theta )}{z}}e^{ik(x^{2}+y^{2})/2z}+{\frac {f^{*}(\theta )}{z}}e^{-ik(x^{2}+y^{2})/2z}+{\frac {|f(\theta )|^{2}}{z^{2}}}.} If we drop the 1 / z 2 {\displaystyle 1/z^{2}} term and use the fact that c + c ∗ = 2 Re ⁡ c {\displaystyle c+c^{*}=2\operatorname {Re} {c}} , we have | ψ | 2 ≈ 1 + 2 z Re ⁡ [ f ( θ ) e i k ( x 2 + y 2 ) / 2 z ] . {\displaystyle |\psi |^{2}\approx 1+{\frac {2}{z}}\operatorname {Re} \left[f(\theta )e^{ik(x^{2}+y^{2})/2z}\right].} Now suppose we integrate over a screen far away in the xy plane, which is small enough for the small-angle approximations to be appropriate, but large enough that we can integrate the intensity over − ∞ {\displaystyle -\infty } to ∞ {\displaystyle \infty } in x and y with negligible error. In optics , this is equivalent to summing over many fringes of the diffraction pattern. 
By the method of stationary phase , we can approximate f ( θ ) = f ( 0 ) {\displaystyle f(\theta )=f(0)} in the below integral. We obtain ∫ | ψ | 2 d x d y ≈ A + 2 z Re ⁡ [ f ( 0 ) ∫ ∫ e i k ( x 2 + y 2 ) / 2 z d x d y ] {\displaystyle \int |\psi |^{2}\,dx\,dy\approx A+{\frac {2}{z}}\operatorname {Re} \left[f(0)\int \!\!\int e^{ik(x^{2}+y^{2})/2z}\,dx\,dy\right]} where A is the area of the surface integrated over. Although these are improper integrals, by suitable substitutions the exponentials can be transformed into complex Gaussians and the definite integrals evaluated, resulting in: ∫ | ψ | 2 d x d y ≈ A − 4 π k Im ⁡ f ( 0 ) . {\displaystyle \int |\psi |^{2}\,dx\,dy\approx A-{\frac {4\pi }{k}}\operatorname {Im} f(0).} This is the probability of reaching the screen if no light were scattered, lessened by an amount ( 4 π / k ) Im ⁡ [ f ( 0 ) ] {\displaystyle (4\pi /k)\operatorname {Im} [f(0)]} , which is therefore the effective scattering cross section of the scatterer.
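The complex-Gaussian step can be sanity-checked numerically. The sketch below (illustrative; the parameter values are arbitrary) verifies that for a > 0 the improper integral of e^{iax²} over the real line equals √(π/a)·e^{iπ/4}, the one-dimensional factor of the screen integral with a = k/2z, by adding a small damping factor ε to regularize the tails:

```python
import numpy as np

# Illustrative check: integral of exp(i*a*x^2) dx over the real line equals
# sqrt(pi/a) * exp(i*pi/4). A small damping eps makes the integral absolutely
# convergent; the analytic value sqrt(pi/(eps - i*a)) includes that damping.
a = 1.0
eps = 5e-3
x = np.linspace(-80.0, 80.0, 2_000_001)
integrand = np.exp((1j * a - eps) * x**2)

# trapezoidal rule on a grid fine enough to resolve the fastest oscillations
numeric = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(x))
analytic = np.sqrt(np.pi / (eps - 1j * a))

assert np.allclose(numeric, analytic, rtol=1e-4)
```

As ε → 0 the phase of the result tends to π/4 and its modulus to √(π/a), which is exactly what turns the screen integral into the −(4π/k) Im f(0) correction above.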
https://en.wikipedia.org/wiki/Optical_theorem
Optical transfection is a biomedical technique that entails introducing nucleic acids (i.e. genetic material such as DNA ) into cells using light. All cells are surrounded by a plasma membrane , which prevents many substances from entering or exiting the cell. Lasers can be used to burn a tiny hole in this membrane, allowing substances to enter. This is tremendously useful to biologists who are studying disease, as a common experimental requirement is to put things (such as DNA) into cells. Typically, a laser is focussed to a diffraction limited spot (~ 1 μm diameter) using a high numerical aperture microscope objective. The plasma membrane of a cell is then exposed to this highly focussed light for a small amount of time (typically tens of milliseconds to seconds), generating a transient pore on the membrane. The generation of a photopore allows exogenous plasmid DNA , RNA , organic fluorophores , or larger objects such as semiconductor quantum nanodots to enter the cell. In this technique, one cell at a time is treated, making it particularly useful for single cell analysis. This technique was first demonstrated in 1984 by Tsukakoshi et al., who used a frequency tripled Nd:YAG to generate stable and transient transfection of normal rat kidney cells. [ 1 ] Since this time, the optical transfection of a host of mammalian cell types has been demonstrated using a variety of laser sources, including the 405 nm continuous wave (cw), [ 2 ] 488 nm cw, [ 3 ] or pulsed sources such as the 800 nm femtosecond pulsed Ti:Sapphire [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] or 1064 nm nanosecond pulsed Nd:YAG. [ 14 ] [ 15 ] The meaning of the term transfection has evolved. [ 16 ] The original meaning of transfection was "infection by transformation", i.e. introduction of DNA (or RNA) from a prokaryote-infecting virus or bacteriophage into cells, resulting in an infection. 
Because the term transformation had another sense in animal cell biology (a genetic change allowing long-term propagation in culture, or acquisition of properties typical of cancer cells), the term transfection acquired, for animal cells, its present meaning of a change in cell properties caused by introduction of DNA (or other nucleic acid species such as RNA or siRNA ). Because of this strict definition of transfection , optical transfection also refers only to the introduction of nucleic acid species. The introduction of other impermeable compounds into a cell, such as organic fluorophores or semiconductor quantum nanodots, is not strictly speaking "transfection," and is therefore referred to as "optical injection" or one of the many other terms now outlined. The lack of a unified name for this technology makes reviewing the literature on the subject very difficult. [ 17 ] Optical injection has been described using over a dozen different names or phrases (see bulleted lists below). Some trends in the literature are clear. The first term of the technique is invariably a derivation of the word laser, optical, or photo, and the second term is usually in reference to injection, transfection, poration, perforation or puncture. Like many cellular perturbations, when a single cell or group of cells is treated with a laser, three things can happen: the cell dies (overdose), the cell membrane is permeabilised, substances enter, and the cell recovers (therapeutic dose), or nothing happens (underdose). There have been suggestions in the literature to reserve the term optoinjection for when a therapeutic dose is delivered upon a single cell, [ 18 ] [ 19 ] [ 20 ] and the term optoporation for when a laser-generated shockwave treats a cluster of many (10s to 100s) cells. [ 18 ] [ 19 ] [ 14 ] [ 20 ] The first definition of optoinjection is uncontroversial. 
The definition of optoporation , however, has failed to be adopted, with a similar number of references using the term to denote the dosing of single cells [ 3 ] [ 5 ] [ 15 ] [ 21 ] as those using the term to denote the simultaneous dosing of clusters of many cells. [ 18 ] [ 19 ] [ 14 ] [ 20 ] As the field stands, it is the opinion of the authors of a review article on the subject [ 17 ] that the term optoinjection should always be included as a keyword in future publications, regardless of the authors' own naming preferences. Terms agreed by consensus
Terms under deliberation
Some of the above was reproduced with permission from [ 17 ]. A typical optical transfection protocol is as follows: [ 11 ]
1) Build an optical tweezers system with a high NA objective
2) Culture cells to 50-60% confluency
3) Expose cells to at least 10 μg/mL of plasmid DNA
4) Dose the plasma membrane of each cell with 10-40 ms of focussed laser light, at a power of <100 mW at focus
5) Observe transient transfection 24-96 h later
6) Add selective medium if the generation of stable colonies is desired
https://en.wikipedia.org/wiki/Optical_transfection
Optical tweezers (originally called single-beam gradient force trap ) are scientific instruments that use a highly focused laser beam to hold and move microscopic and sub-microscopic objects like atoms , nanoparticles and droplets, in a manner similar to tweezers . If the object is held in air or vacuum without additional support, it can be called optical levitation . The laser light provides an attractive or repulsive force (typically on the order of piconewtons ), depending on the relative refractive index between particle and surrounding medium. Levitation is possible if the force of the light counters the force of gravity . The trapped particles are usually micron -sized, or even smaller. Dielectric and absorbing particles can be trapped, too. Optical tweezers are used in biology and medicine (for example to grab and hold a single bacterium , a cell like a sperm cell or a blood cell , or a molecule like DNA ), nanoengineering and nanochemistry (to study and build materials from single molecules ), quantum optics and quantum optomechanics (to study the interaction of single particles with light). The development of optical tweezing by Arthur Ashkin was lauded with the 2018 Nobel Prize in Physics . The detection of optical scattering and the gradient forces on micron-sized particles was first reported in 1970 by Arthur Ashkin, a scientist working at Bell Labs . [ 1 ] Years later, Ashkin and colleagues reported the first observation of what is now commonly referred to as an optical tweezer: a tightly focused beam of light capable of holding microscopic particles stable in three dimensions. [ 2 ] In 2018, Ashkin was awarded the Nobel Prize in Physics for this development. One author of this seminal 1986 paper, Steven Chu , would go on to use optical tweezing in his work on cooling and trapping neutral atoms. [ 3 ] This research earned Chu the 1997 Nobel Prize in Physics along with Claude Cohen-Tannoudji and William D. Phillips . 
[ 4 ] In an interview, Steven Chu described how Ashkin had first envisioned optical tweezing as a method for trapping atoms. [ 5 ] Ashkin was able to trap larger particles (10 to 10,000 nanometers in diameter) but it fell to Chu to extend these techniques to the trapping of neutral atoms (0.1 nanometers in diameter) using resonant laser light and a magnetic gradient trap (cf. Magneto-optical trap ). In the late 1980s, Arthur Ashkin and Joseph M. Dziedzic demonstrated the first application of the technology to the biological sciences, using it to trap an individual tobacco mosaic virus and Escherichia coli bacterium. [ 6 ] Throughout the 1990s and afterwards, researchers like Carlos Bustamante , James Spudich , and Steven Block pioneered the use of optical trap force spectroscopy to characterize molecular-scale biological motors. These molecular motors are ubiquitous in biology, and are responsible for locomotion and mechanical action within the cell. Optical traps allowed these biophysicists to observe the forces and dynamics of nanoscale motors at the single-molecule level; optical trap force-spectroscopy has since led to greater understanding of the stochastic nature of these force-generating molecules. Optical tweezers have proven useful in other areas of biology as well. They are used in synthetic biology to construct tissue-like networks of artificial cells, [ 7 ] and to fuse synthetic membranes together [ 8 ] to initiate biochemical reactions. [ 7 ] They are also widely employed in genetic studies [ 9 ] and research on chromosome structure and dynamics. [ 10 ] In 2003 the techniques of optical tweezers were applied in the field of cell sorting; by creating a large optical intensity pattern over the sample area, cells can be sorted by their intrinsic optical characteristics. [ 11 ] [ 12 ] Optical tweezers have also been used to probe the cytoskeleton , measure the visco-elastic properties of biopolymers , [ 13 ] and study cell motility . 
A bio-molecular assay in which clusters of ligand coated nano-particles are both optically trapped and optically detected after target molecule induced clustering was proposed in 2011 [ 14 ] and experimentally demonstrated in 2013. [ 15 ] Optical tweezers are also used to trap laser-cooled atoms in vacuum, mainly for applications in quantum science. Some achievements in this area include trapping of a single atom in 2001, [ 16 ] trapping of 2D arrays of atoms in 2002, [ 17 ] trapping of strongly interacting entangled pairs in 2010, [ 18 ] [ 19 ] [ 20 ] trapping precisely assembled 2-dimensional arrays of atoms in 2016 [ 21 ] [ 22 ] and 3-dimensional arrays in 2018. [ 23 ] [ 24 ] These techniques have been used in quantum simulators to obtain programmable arrays of 196 and 256 atoms in 2021 [ 25 ] [ 26 ] [ 27 ] and represent a promising platform for quantum computing. [ 17 ] [ 28 ] Researchers have worked to convert optical tweezers from large, complex instruments to smaller, simpler ones, for use by those with smaller research budgets. [ 3 ] [ 29 ] Optical tweezers are capable of manipulating nanometer and micron-sized dielectric particles, and even individual atoms, by exerting extremely small forces via a highly focused laser beam. The beam is typically focused by sending it through a microscope objective . Near the narrowest point of the focused beam, known as the beam waist , the amplitude of the oscillating electric field varies rapidly in space. Dielectric particles are attracted along the gradient to the region of strongest electric field, which is the center of the beam. The laser light also tends to apply a force on particles in the beam along the direction of beam propagation. This is due to conservation of momentum : photons that are absorbed or scattered by the tiny dielectric particle impart momentum to the dielectric particle. 
This is known as the scattering force and results in the particle being displaced slightly downstream from the exact position of the beam waist, as seen in the figure. Optical traps are very sensitive instruments and are capable of the manipulation and detection of sub-nanometer displacements for sub-micron dielectric particles. [ 30 ] For this reason, they are often used to manipulate and study single molecules by interacting with a bead that has been attached to that molecule. DNA and the proteins [ 31 ] and enzymes that interact with it are commonly studied in this way. For quantitative scientific measurements, most optical traps are operated in such a way that the dielectric particle rarely moves far from the trap center. The reason for this is that the force applied to the particle is linear with respect to its displacement from the center of the trap as long as the displacement is small. In this way, an optical trap can be compared to a simple spring, which follows Hooke's law . Proper explanation of optical trapping behavior depends upon the size of the trapped particle relative to the wavelength of light used to trap it. In cases where the dimensions of the particle are much greater than the wavelength, a simple ray optics treatment is sufficient. If the wavelength of light far exceeds the particle dimensions, the particles can be treated as electric dipoles in an electric field. For optical trapping of dielectric objects of dimensions within an order of magnitude of the trapping beam wavelength, the only accurate models involve the treatment of either time dependent or time harmonic Maxwell equations using appropriate boundary conditions. In cases where the diameter of a trapped particle is significantly greater than the wavelength of light, the trapping phenomenon can be explained using ray optics. As shown in the figure, individual rays of light emitted from the laser will be refracted as they enter and exit the dielectric bead. 
As a result, the ray will exit in a direction different from that in which it originated. Since light has a momentum associated with it, this change in direction indicates that its momentum has changed. Due to Newton's third law , there should be an equal and opposite momentum change on the particle. Most optical traps operate with a Gaussian beam (TEM 00 mode) intensity profile. In this case, if the particle is displaced from the center of the beam, as in the right part of the figure, the particle has a net force returning it to the center of the trap because more intense beams impart a larger momentum change towards the center of the trap than less intense beams, which impart a smaller momentum change away from the trap center. The net momentum change, or force, returns the particle to the trap center. If the particle is located at the center of the beam, then individual rays of light are refracting through the particle symmetrically, resulting in no net lateral force. The net force in this case is along the axial direction of the trap, which cancels out the scattering force of the laser light. The cancellation of this axial gradient force with the scattering force is what causes the bead to be stably trapped slightly downstream of the beam waist. The standard tweezers works with the trapping laser propagated in the direction of gravity [ 32 ] and the inverted tweezers works against gravity. In cases where the diameter of a trapped particle is significantly smaller than the wavelength of light, the conditions for Rayleigh scattering are satisfied and the particle can be treated as a point dipole in an inhomogeneous electromagnetic field . The force applied on a single charge in an electromagnetic field is known as the Lorentz force , F 1 = q ( E 1 + d x 1 d t × B ) . {\displaystyle \mathbf {F} _{1}=q\left(\mathbf {E} _{1}+{\frac {d\mathbf {x} _{1}}{dt}}\times \mathbf {B} \right).} The force on the dipole can be calculated by substituting two terms for the electric field in the equation above, one for each charge. 
The polarization of a dipole is p = q d , {\displaystyle \mathbf {p} =q\mathbf {d} ,} where d {\displaystyle \mathbf {d} } is the distance between the two charges. For a point dipole, the distance x 1 − x 2 {\displaystyle \mathbf {x} _{1}-\mathbf {x} _{2}} is infinitesimal . Taking into account that the two charges have opposite signs, the force takes the form F = q ( E 1 ( x , t ) − E 2 ( x , t ) + d ( x 1 − x 2 ) d t × B ) = q ( E 1 ( x , t ) − E 1 ( x , t ) − ( ( x 2 − x 1 ) ⋅ ∇ ) E + d ( x 1 − x 2 ) d t × B ) . {\displaystyle \mathbf {F} =q\left(\mathbf {E} _{1}(\mathbf {x} ,t)-\mathbf {E} _{2}(\mathbf {x} ,t)+{\frac {d(\mathbf {x} _{1}-\mathbf {x} _{2})}{dt}}\times \mathbf {B} \right)=q\left(\mathbf {E} _{1}(\mathbf {x} ,t)-\mathbf {E} _{1}(\mathbf {x} ,t)-((\mathbf {x} _{2}-\mathbf {x} _{1})\cdot \nabla )\mathbf {E} +{\frac {d(\mathbf {x} _{1}-\mathbf {x} _{2})}{dt}}\times \mathbf {B} \right).} Notice that the E 1 {\displaystyle \mathbf {E_{1}} } cancel out. Multiplying through by the charge, q {\displaystyle q} , converts position, x {\displaystyle \mathbf {x} } , into polarization, p {\displaystyle \mathbf {p} } : F = ( p ⋅ ∇ ) E + d p d t × B = α [ ( E ⋅ ∇ ) E + d E d t × B ] , {\displaystyle \mathbf {F} =(\mathbf {p} \cdot \nabla )\mathbf {E} +{\frac {d\mathbf {p} }{dt}}\times \mathbf {B} =\alpha \left[(\mathbf {E} \cdot \nabla )\mathbf {E} +{\frac {d\mathbf {E} }{dt}}\times \mathbf {B} \right],} where in the second equality, it has been assumed that the dielectric particle is linear (i.e. p = α E {\displaystyle \mathbf {p} =\alpha \mathbf {E} } ). In the final steps, two equalities will be used: (1) a vector analysis equality , ( E ⋅ ∇ ) E = ∇ ( 1 2 E 2 ) − E × ( ∇ × E ) ; {\displaystyle (\mathbf {E} \cdot \nabla )\mathbf {E} =\nabla \left({\tfrac {1}{2}}E^{2}\right)-\mathbf {E} \times (\nabla \times \mathbf {E} );} (2) Faraday's law of induction , ∇ × E = − ∂ B ∂ t . {\displaystyle \nabla \times \mathbf {E} =-{\frac {\partial \mathbf {B} }{\partial t}}.} First, the vector equality will be inserted for the first term in the force equation above. Maxwell's equation will be substituted in for the second term in the vector equality. Then the two terms which contain time derivatives can be combined into a single term, giving F = α [ 1 2 ∇ E 2 + d d t ( E × B ) ] . {\displaystyle \mathbf {F} =\alpha \left[{\frac {1}{2}}\nabla E^{2}+{\frac {d}{dt}}(\mathbf {E} \times \mathbf {B} )\right].} [ 33 ] The second term in the last equality is the time derivative of a quantity that is related through a multiplicative constant to the Poynting vector , which describes the power per unit area passing through a surface. 
Since the power of the laser is constant when averaged over times much longer than the period of the laser's light (which oscillates at ~10 14 Hz), the derivative of this term averages to zero and the force can be written as [ 34 ] F = 1 2 α ∇ E 2 , {\displaystyle \mathbf {F} ={\frac {1}{2}}\alpha \nabla E^{2},} where we have included the induced dipole moment (in MKS units) of a spherical dielectric particle: p = α E ( r , t ) = 4 π n 1 2 ϵ 0 a 3 ( m 2 − 1 ) / ( m 2 + 2 ) E ( r , t ) {\displaystyle \mathbf {p} =\alpha \mathbf {E} (\mathbf {r} ,t)=4\pi n_{1}^{2}\epsilon _{0}a^{3}(m^{2}-1)/(m^{2}+2)\mathbf {E} (\mathbf {r} ,t)} , where a {\displaystyle a} is the particle radius, n 0 {\displaystyle n_{0}} is the index of refraction of the particle, n 1 {\displaystyle n_{1}} that of the surrounding medium, and m = n 0 / n 1 {\displaystyle m=n_{0}/n_{1}} is the relative refractive index between the particle and the medium. The square of the magnitude of the electric field is equal to the intensity of the beam as a function of position. Therefore, the result indicates that the force on the dielectric particle, when treated as a point dipole, is proportional to the gradient along the intensity of the beam. In other words, the gradient force described here tends to attract the particle to the region of highest intensity. In reality, the scattering force of the light works against the gradient force in the axial direction of the trap, resulting in an equilibrium position that is displaced slightly downstream of the intensity maximum. Under the Rayleigh approximation, we can also write the scattering force as F s c a t = n 1 σ c ⟨ S ⟩ , {\displaystyle \mathbf {F} _{\mathrm {scat} }={\frac {n_{1}\sigma }{c}}\langle \mathbf {S} \rangle ,} where ⟨ S ⟩ {\displaystyle \langle \mathbf {S} \rangle } is the time-averaged Poynting vector and σ = 8 3 π k 4 a 6 ( m 2 − 1 m 2 + 2 ) 2 {\displaystyle \sigma ={\frac {8}{3}}\pi k^{4}a^{6}\left({\frac {m^{2}-1}{m^{2}+2}}\right)^{2}} is the Rayleigh scattering cross section. Since the scattering is isotropic, the net momentum is transferred in the forward direction. On the quantum level, we picture the gradient force as forward Rayleigh scattering in which identical photons are created and annihilated concurrently, while in the scattering (radiation) force the incident photons travel in the same direction and ‘scatter’ isotropically. By conservation of momentum, the particle must accumulate the photons' original momenta, causing a forward force in the latter. 
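As an illustration of the point-dipole result, the sketch below (with assumed, typical values for the bead, power, and beam waist) evaluates the polarizability quoted above for a small polystyrene bead and the resulting radial gradient force in a Gaussian beam, using I = n₁ε₀c⟨E²⟩ to convert the intensity gradient into a field-squared gradient:

```python
import numpy as np

# Illustrative Rayleigh-regime gradient force F = (1/2) * alpha * grad<E^2>.
# All parameter values below are assumed, typical numbers.
eps0 = 8.854e-12       # vacuum permittivity (F/m)
c = 3.0e8              # speed of light (m/s)
n1 = 1.33              # refractive index of water (medium)
n0 = 1.57              # refractive index of polystyrene (particle)
m = n0 / n1            # relative refractive index
a = 50e-9              # bead radius, well below the wavelength (m)
P = 0.1                # laser power (W), assumed
w0 = 0.5e-6            # beam waist (m), assumed

# Polarizability of a small dielectric sphere (expression quoted in the text)
alpha = 4 * np.pi * n1**2 * eps0 * a**3 * (m**2 - 1) / (m**2 + 2)

# Gaussian beam I(r) = I0 * exp(-2 r^2 / w0^2): radial gradient at r = w0/2
I0 = 2 * P / (np.pi * w0**2)
r = w0 / 2
dI_dr = I0 * np.exp(-2 * r**2 / w0**2) * (-4 * r / w0**2)

# With I = n1 * eps0 * c * <E^2>, the gradient force becomes
# F = alpha / (2 * n1 * eps0 * c) * dI/dr (negative: directed toward the axis)
F_grad = alpha / (2 * n1 * eps0 * c) * dI_dr
print(f"radial gradient force ≈ {abs(F_grad) * 1e12:.2f} pN")
```

For these numbers the force comes out at a fraction of a piconewton, directed toward the beam axis, consistent with the statement that the gradient force attracts the particle to the region of highest intensity.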
[ 35 ] A useful way to study the interaction of an atom in a Gaussian beam is to look at the harmonic potential approximation of the intensity profile the atom experiences. In the case of the two-level atom, the potential experienced is related to its AC Stark Shift , Δ E = 3 π c 2 2 ω o 3 Γ δ I ( r ) , {\displaystyle \Delta E={\frac {3\pi c^{2}}{2\omega _{o}^{3}}}{\frac {\Gamma }{\delta }}I(\mathbf {r} ),} where Γ {\displaystyle \Gamma } is the natural line width of the excited state (determined by the electric dipole coupling μ {\displaystyle \mu } through Γ = ω o 3 μ 2 / 3 π ϵ 0 ℏ c 3 {\displaystyle \Gamma =\omega _{o}^{3}\mu ^{2}/3\pi \epsilon _{0}\hbar c^{3}} ), ω o {\displaystyle \omega _{o}} is the frequency of the transition, and δ {\displaystyle \delta } is the detuning or difference between the laser frequency and the transition frequency. The intensity of a Gaussian beam profile is characterized by the wavelength ( λ ) {\displaystyle (\lambda )} , minimum waist ( w o ) {\displaystyle (w_{o})} , and power of the beam ( P o ) {\displaystyle (P_{o})} . The following formulas define the beam profile: I ( r , z ) = 2 P o π w ( z ) 2 e − 2 r 2 / w ( z ) 2 , w ( z ) = w o 1 + ( z / z R ) 2 , z R = π w o 2 λ . {\displaystyle I(r,z)={\frac {2P_{o}}{\pi w(z)^{2}}}e^{-2r^{2}/w(z)^{2}},\qquad w(z)=w_{o}{\sqrt {1+(z/z_{R})^{2}}},\qquad z_{R}={\frac {\pi w_{o}^{2}}{\lambda }}.} To approximate this Gaussian potential in both the radial and axial directions of the beam, the intensity profile must be expanded to second order in z {\displaystyle z} and r {\displaystyle r} for r = 0 {\displaystyle r=0} and z = 0 {\displaystyle z=0} respectively and equated to the harmonic potential 1 2 m ( ω z 2 z 2 + ω r 2 r 2 ) {\displaystyle {\frac {1}{2}}m(\omega _{z}^{2}z^{2}+\omega _{r}^{2}r^{2})} . These expansions are evaluated assuming fixed power. This means that when solving for the harmonic frequencies (or trap frequencies when considering optical traps for atoms), the frequencies are given as: ω r = 4 U o m w o 2 , ω z = 2 U o m z R 2 , {\displaystyle \omega _{r}={\sqrt {\frac {4U_{o}}{mw_{o}^{2}}}},\qquad \omega _{z}={\sqrt {\frac {2U_{o}}{mz_{R}^{2}}}},} where U o {\displaystyle U_{o}} is the potential depth at the focus, so that the relative trap frequencies for the radial and axial directions as a function of only beam waist scale as: ω r ∝ w o − 2 , ω z ∝ w o − 3 . {\displaystyle \omega _{r}\propto w_{o}^{-2},\qquad \omega _{z}\propto w_{o}^{-3}.} In order to levitate the particle in air, the downward force of gravity must be countered by the forces stemming from photon momentum transfer. 
Typically, photon radiation pressure of a focused laser beam of sufficient intensity counters the downward force of gravity while also preventing lateral (side to side) and vertical instabilities to allow for a stable optical trap capable of holding small particles in suspension. Micrometer-sized (from several to 50 micrometers in diameter) transparent dielectric spheres such as fused silica spheres, oil or water droplets, are used in this type of experiment. The laser radiation can be fixed in wavelength such as that of an argon ion laser or that of a tunable dye laser . Laser power required is of the order of 1 Watt focused to a spot size of several tens of micrometers. Phenomena related to morphology-dependent resonances in a spherical optical cavity have been studied by several research groups. For a shiny object, such as a metallic micro-sphere, stable optical levitation has not been achieved. Optical levitation of a macroscopic object is also theoretically possible, [ 36 ] and can be enhanced with nano-structuring. [ 37 ] Materials that have been successfully levitated include black liquor, aluminum oxide, tungsten, and nickel. [ 38 ] In the last two decades, optical forces have been combined with thermophoretic forces to enable trapping at reduced laser powers, thus resulting in minimized photon damage. By introducing light-absorbing elements (either particles or substrates), microscale temperature gradients are created, resulting in thermophoresis . [ 39 ] Typically, particles (including biological objects such as cells, bacteria, DNA/RNA) drift towards the cold region, resulting in particle repulsion in optical tweezers. Overcoming this limitation, different techniques such as beam shaping and solution modification with electrolytes and surfactants [ 40 ] were used to successfully trap the objects. Laser cooling was also achieved with ytterbium-doped yttrium lithium fluoride crystals to generate cold spots using lasers to achieve trapping with reduced photobleaching . 
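The levitation condition amounts to a power-balance estimate. The sketch below is illustrative only — the momentum-transfer efficiency Q and the sphere parameters are assumed, not taken from the article — and estimates the laser power needed to support a fused-silica sphere against gravity, taking the radiation-pressure force as F ≈ Q·P/c:

```python
import numpy as np

# Rough power balance for optical levitation in air. Q (the fraction of
# photon momentum transferred to the sphere) and the sphere parameters are
# assumed values for illustration.
g = 9.81               # gravitational acceleration (m/s^2)
c = 3.0e8              # speed of light (m/s)
rho = 2200.0           # density of fused silica (kg/m^3)
radius = 5e-6          # sphere radius: 10 um diameter (m)
Q = 0.1                # assumed momentum-transfer efficiency

mass = rho * (4.0 / 3.0) * np.pi * radius**3
weight = mass * g                  # downward gravitational force (N)

# Radiation-pressure force of a beam of power P is F ~ Q * P / c,
# so the power required to balance gravity is:
P_required = weight * c / Q
print(f"levitation power ≈ {P_required * 1e3:.0f} mW")
```

For these numbers the answer is a few tens of milliwatts; larger spheres (weight grows as the cube of the radius) quickly push the requirement toward the watt-level powers quoted in the text.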
[ 41 ] The sample temperature has also been reduced to achieve optical trapping for a significantly increased selection of particles using optothermal tweezers for drug delivery applications. [ 42 ] The most basic optical tweezer setup will likely include the following components: a laser (usually Nd:YAG ), a beam expander, some optics used to steer the beam location in the sample plane, a microscope objective and condenser to create the trap in the sample plane, a position detector (e.g. quadrant photodiode ) to measure beam displacements and a microscope illumination source coupled to a CCD camera . An Nd:YAG laser (1064 nm wavelength) is a common choice of laser for working with biological specimens. This is because such specimens (being mostly water) have a low absorption coefficient at this wavelength. [ 43 ] A low absorption is advisable so as to minimise damage to the biological material, sometimes referred to as opticution . Perhaps the most important consideration in optical tweezer design is the choice of the objective. A stable trap requires that the gradient force, which is dependent upon the numerical aperture (NA) of the objective, be greater than the scattering force. Suitable objectives typically have an NA between 1.2 and 1.4. [ 44 ] While alternatives are available, perhaps the simplest method for position detection involves imaging the trapping laser exiting the sample chamber onto a quadrant photodiode. Lateral deflections of the beam are measured similarly to how it is done using atomic force microscopy (AFM) . Expanding the beam emitted from the laser to fill the aperture of the objective will result in a tighter, diffraction-limited spot. [ 45 ] While lateral translation of the trap relative to the sample can be accomplished by translation of the microscope slide, most tweezer setups have additional optics designed to translate the beam to give an extra degree of translational freedom. 
This can be done by translating the first of the two lenses labelled as "Beam Steering" in the figure. For example, translation of that lens in the lateral plane will result in a laterally deflected beam from what is drawn in the figure. If the distance between the beam steering lenses and the objective is chosen properly, this will correspond to a similar deflection before entering the objective and a resulting lateral translation in the sample plane. The position of the beam waist, that is the focus of the optical trap, can be adjusted by an axial displacement of the initial lens. Such an axial displacement causes the beam to diverge or converge slightly, the result of which is an axially displaced position of the beam waist in the sample chamber. [ 46 ] Visualization of the sample plane is usually accomplished through illumination via a separate light source coupled into the optical path in the opposite direction using dichroic mirrors . This light is incident on a CCD camera and can be viewed on an external monitor or used for tracking the trapped particle position via video tracking . The majority of optical tweezers make use of conventional TEM 00 Gaussian beams . However a number of other beam types have been used to trap particles, including high order laser beams i.e. Hermite-Gaussian beams (TEM xy ), Laguerre-Gaussian (LG) beams (TEM pl ) and Bessel beams . Optical tweezers based on Laguerre-Gaussian beams have the unique capability of trapping particles that are optically reflective and absorptive. [ 47 ] [ 48 ] [ 49 ] Laguerre-Gaussian beams also possess a well-defined orbital angular momentum that can rotate particles. [ 50 ] [ 51 ] This is accomplished without external mechanical or electrical steering of the beam. Both zero and higher order Bessel Beams also possess a unique tweezing ability. They can trap and rotate multiple particles that are millimeters apart and even around obstacles. 
[ 52 ] Micromachines can be driven by these unique optical beams due to their intrinsic rotating mechanism due to the spin and orbital angular momentum of light. [ 53 ] A typical setup uses one laser to create one or two traps. Commonly, two traps are generated by splitting the laser beam into two orthogonally polarized beams. Optical tweezing operations with more than two traps can be realized either by time-sharing a single laser beam among several optical tweezers, [ 54 ] or by diffractively splitting the beam into multiple traps. With acousto-optic deflectors or galvanometer -driven mirrors, a single laser beam can be shared among hundreds of optical tweezers in the focal plane, or else spread into an extended one-dimensional trap. Specially designed diffractive optical elements can divide a single input beam into hundreds of continuously illuminated traps in arbitrary three-dimensional configurations. The trap-forming hologram also can specify the mode structure of each trap individually, thereby creating arrays of optical vortices, optical tweezers, and holographic line traps, for example. [ 55 ] When implemented with a spatial light modulator , such holographic optical traps also can move objects in three dimensions. [ 56 ] Advanced forms of holographic optical traps with arbitrary spatial profiles, where smoothness of the intensity and the phase are controlled, find applications in many areas of science, from micromanipulation to ultracold atoms . [ 57 ] Ultracold atoms could also be used for realization of quantum computers. [ 58 ] The standard fiber optical trap relies on the same principle as the optical trapping, but with the Gaussian laser beam delivered through an optical fiber . If one end of the optical fiber is molded into a lens -like facet, the nearly gaussian beam carried by a single mode standard fiber will be focused at some distance from the fiber tip. 
The effective numerical aperture of such an assembly is usually not enough to allow for a full 3D optical trap but only for a 2D trap (optical trapping and manipulation of objects will be possible only when, e.g., they are in contact with a surface ). [ 59 ] A true 3D optical trap based on a single fiber, with a trapping point that is not nearly in contact with the fiber tip, has been realized based on a non-standard annular-core fiber arrangement and a total-internal-reflection geometry. [ 60 ] On the other hand, if the ends of the fiber are not moulded, the laser exiting the fiber will be diverging and thus a stable optical trap can only be realised by balancing the gradient and the scattering force from two opposing ends of the fiber. The gradient force traps the particles in the transverse direction, while the axial optical force comes from the scattering force of the two counter-propagating beams emerging from the two fibers. The equilibrium z-position of such a trapped bead is where the two scattering forces equal each other. This work was pioneered by A. Constable et al. , Opt. Lett. 18 , 1867 (1993), and followed by J. Guck et al. , Phys. Rev. Lett. 84 , 5451 (2000), who made use of this technique to stretch microparticles. By manipulating the input power at the two ends of the fiber, an "optical stretching" can be applied that can be used to measure the viscoelastic properties of cells, with sensitivity sufficient to distinguish between different individual cytoskeletal phenotypes, e.g. human erythrocytes and mouse fibroblasts. A recent test has seen great success in differentiating cancerous cells from non-cancerous ones using the two opposed, non-focused laser beams. [ 61 ] While earlier versions of fiber-based laser traps exclusively used single-mode beams, M.
Kreysing and colleagues recently showed that the careful excitation of further optical modes in a short piece of optical fiber allows the realization of non-trivial trapping geometries. In this way the researchers were able to orient various human cell types (individual cells and clusters) on a microscope. The main advantage of the so-called "optical cell rotator" technology over standard optical tweezers is the decoupling of trapping from imaging optics. This, its modular design, and the high compatibility of divergent laser traps with biological material indicate the great potential of this new generation of laser traps in medical research and life science. [ 62 ] Recently, the optical cell rotator technology was implemented on the basis of adaptive optics , allowing the optical trap to be dynamically reconfigured during operation and adapted to the sample. [ 63 ] One of the more common cell-sorting systems makes use of flow cytometry through fluorescence imaging . In this method, a suspension of biologic cells is sorted into two or more containers, based upon specific fluorescent characteristics of each cell during an assisted flow. By using an electrical charge in which the cell is "trapped", the cells are then sorted based on the fluorescence intensity measurements. The sorting process is undertaken by an electrostatic deflection system that diverts cells into containers based upon their charge. In the optically actuated sorting process, the cells flow into an optical landscape, i.e. 2D or 3D optical lattices. Without any induced electrical charge, the cells are sorted based on their intrinsic refractive index properties, and the lattice can be reconfigured for dynamic sorting. An optical lattice can be created using diffractive optics and optical elements. [ 11 ] On the other hand, K. Ladavac et al. used a spatial light modulator to project an intensity pattern to enable the optical sorting process. [ 64 ] K. Xiao and D. G.
Grier applied holographic video microscopy to demonstrate that this technique can sort colloidal spheres with part-per-thousand resolution for size and refractive index. [ 65 ] The main mechanism for sorting is the arrangement of the optical lattice points. As the cells flow through the optical lattice, the viscous drag force on each particle competes directly with the optical gradient force (see Physics of optical tweezers) from the optical lattice points. By shifting the arrangement of the optical lattice points, a preferred optical path can be created along which the optical forces are dominant. With the aid of the flow of the cells, there is a resultant force that is directed along that preferred optical path. Hence, there is a relationship between the flow rate and the optical gradient force, and by balancing the two forces one can obtain a good optical sorting efficiency. The competition between the drag force due to fluid flow and the optical gradient force due to the arrangement of the intensity spots must be finely tuned to achieve highly efficient optical sorting. Scientists at the University of St. Andrews have received considerable funding from the UK Engineering and Physical Sciences Research Council ( EPSRC ) for an optical sorting machine. This new technology could rival conventional fluorescence-activated cell sorting. [ 66 ] An evanescent field [ 67 ] is a residual optical field that "leaks" during total internal reflection . This "leaking" of light fades off at an exponential rate. The evanescent field has found a number of applications in nanometer-resolution imaging (microscopy); its use in optical micromanipulation (optical tweezers) is becoming ever more relevant in research. In optical tweezers, a continuous evanescent field can be created when light is propagating through an optical waveguide (multiple total internal reflection ).
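The drag-versus-gradient-force balance described above can be illustrated with a small numerical sketch. All values below are hypothetical and not taken from the article; the sketch simply compares Stokes drag on a sphere against a characteristic optical gradient force to estimate the highest flow speed at which a lattice site can still deflect a particle.

```python
import math

# Hypothetical numbers for the drag/gradient-force balance in optical
# sorting; none of these values come from the article.

def stokes_drag(radius, velocity, viscosity=1e-3):
    """Stokes drag on a sphere, F = 6*pi*eta*r*v (SI units; the
    default viscosity is that of water, ~1 mPa*s)."""
    return 6 * math.pi * viscosity * radius * velocity

def max_sorting_speed(radius, f_gradient, viscosity=1e-3):
    """Flow speed at which the drag force equals the optical gradient
    force; above this speed the lattice site can no longer deflect
    the particle."""
    return f_gradient / (6 * math.pi * viscosity * radius)

# Example: a 5-um-radius cell held by a 2 pN gradient force in water.
v_max = max_sorting_speed(5e-6, 2e-12)
print(f"max flow speed: {v_max * 1e6:.1f} um/s")
```

For piconewton-scale gradient forces and micrometre-scale particles, the tolerable flow speeds come out in the tens of micrometres per second, which shows why the flow rate must be tuned to the lattice intensity.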
The resulting evanescent field has a directional sense and will propel microparticles along its propagating path. This work was pioneered by S. Kawata and T. Sugiura, in 1992, who showed that the field can be coupled to particles in proximity on the order of 100 nanometers. [ 68 ] This direct coupling of the field is treated as a type of photon tunnelling across the gap from prism to microparticles. The result is a directional optical propelling force. A recent updated version of the evanescent field optical tweezers makes use of extended optical landscape patterns to simultaneously guide a large number of particles into a preferred direction without using a waveguide . It is termed Lensless Optical Trapping ("LOT"). The orderly movement of the particles is aided by the introduction of a Ronchi ruling that creates well-defined optical potential wells (replacing the waveguide). This means that particles are propelled by the evanescent field while being trapped by the linear bright fringes. At the moment, there are also scientists working on focused evanescent fields. In recent studies, the evanescent field generated by a mid-infrared laser has been used to sort particles selectively by molecular vibrational resonance. Mid-infrared light is commonly used to identify the molecular structures of materials because the vibrational modes exist in the mid-infrared region. A study by Statsenko et al. described optical force enhancement by molecular vibrational resonance by exciting the stretching mode of the Si-O-Si bond at 9.3 μm. [ 69 ] It is shown that silica microspheres containing significant Si-O-Si bonds move up to ten times faster than polystyrene microspheres due to molecular vibrational resonance. Moreover, the same group also investigated the possibility of optical force chromatography based on molecular vibrational resonance.
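The exponential fall-off of the evanescent field mentioned above has a characteristic length that is easy to compute. The sketch below uses illustrative parameters (a glass/water interface and a 1064 nm wavelength are assumptions, not values from the article) to give the 1/e intensity penetration depth for total internal reflection.

```python
import math

# Penetration depth of an evanescent field at a totally internally
# reflecting interface. The indices, wavelength and angle below are
# illustrative assumptions.

def penetration_depth(wavelength, n1, n2, theta_deg):
    """1/e intensity decay length of the evanescent field:

        d_p = lambda / (4*pi*sqrt(n1^2*sin^2(theta) - n2^2))

    valid only above the critical angle asin(n2/n1).
    """
    theta = math.radians(theta_deg)
    arg = n1 ** 2 * math.sin(theta) ** 2 - n2 ** 2
    if arg <= 0:
        raise ValueError("angle below the critical angle: no evanescent field")
    return wavelength / (4 * math.pi * math.sqrt(arg))

# Example: 1064 nm light at a glass (n1 = 1.52) / water (n2 = 1.33)
# interface, incident at 70 degrees.
d = penetration_depth(1064e-9, 1.52, 1.33, 70.0)
print(f"penetration depth: {d * 1e9:.0f} nm")
```

For the values chosen, the depth comes out on the order of 100 nm, consistent with the coupling range quoted above.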
[ 70 ] Another approach that has recently been proposed makes use of surface plasmons, an enhanced evanescent wave localized at a metal/dielectric interface. The enhanced force field experienced by colloidal particles exposed to surface plasmons at a flat metal/dielectric interface was measured for the first time using a photonic force microscope, the total force magnitude being found to be 40 times stronger than that of a normal evanescent wave. [ 71 ] By patterning the surface with gold microscopic islands it is possible to have selective and parallel trapping in these islands. The forces of the latter optical tweezers lie in the femtonewton range. [ 72 ] The evanescent field can also be used to trap cold atoms and molecules near the surface of an optical waveguide or optical nanofiber . [ 73 ] [ 74 ] Ming Wu, a UC Berkeley professor of electrical engineering and computer sciences, invented the new optoelectronic tweezers. Wu transformed the optical energy from low-powered light-emitting diodes (LEDs) into electrical energy via a photoconductive surface. The idea is to allow the LED to switch the photoconductive material on and off via its fine projection. As the optical pattern can easily be transformed through optical projection, this method allows a high flexibility of switching between different optical landscapes. The manipulation/tweezing process is done by the variations in the electric field actuated by the light pattern. The particles will be either attracted to or repelled from the actuated point due to their induced electrical dipoles. Particles suspended in a liquid will be susceptible to the electrical field gradient; this is known as dielectrophoresis . One clear advantage is that the electrical conductivity differs between different kinds of cells: living cells have a lower conductive medium while dead ones have minimal or no conductive medium. The system may be able to manipulate roughly 10,000 cells or particles at the same time.
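The dielectrophoretic discrimination between cell types described above hinges on the sign of the real part of the Clausius-Mossotti factor. The sketch below uses hypothetical effective permittivities and conductivities (none of these values come from the article) to show how a more-conductive particle experiences positive DEP while a less-conductive one experiences negative DEP at low frequency.

```python
import math

# Sign of the Clausius-Mossotti factor decides whether a particle is
# attracted to (positive DEP) or repelled from (negative DEP) the
# high-field region. All material parameters here are hypothetical.

def clausius_mossotti(eps_p, sigma_p, eps_m, sigma_m, freq_hz):
    """Complex Clausius-Mossotti factor for a sphere in a medium.

    eps_*: absolute permittivities (F/m), sigma_*: conductivities (S/m).
    """
    omega = 2 * math.pi * freq_hz
    ep = complex(eps_p, -sigma_p / omega)   # complex permittivity, particle
    em = complex(eps_m, -sigma_m / omega)   # complex permittivity, medium
    return (ep - em) / (ep + 2 * em)

EPS0 = 8.854e-12  # vacuum permittivity, F/m

# Hypothetical effective values: a "live" cell more conductive than the
# medium, a "dead" cell less conductive, probed at 100 kHz.
live = clausius_mossotti(60 * EPS0, 0.5, 78 * EPS0, 0.01, 1e5)
dead = clausius_mossotti(60 * EPS0, 1e-4, 78 * EPS0, 0.01, 1e5)
print(f"Re(f_CM) live: {live.real:+.2f}, dead: {dead.real:+.2f}")
```

At low frequency the conductivities dominate, so the factor reduces to roughly (σp − σm)/(σp + 2σm), which is what flips the sign between the two cases.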
See comments by Professor Kishan Dholakia on this new technique, K. Dholakia, Nature Materials 4, 579–580 (01 Aug 2005) News and Views. "The system was able to move live E. coli bacteria and 20-micrometre-wide particles, using an optical power output of less than 10 microwatts. This is one-hundred-thousandth of the power needed for [direct] optical tweezers". [ 75 ] Another notable new type of optical tweezers is the optothermal tweezers invented by Yuebing Zheng at The University of Texas at Austin . The strategy is to use light to create a temperature gradient and exploit the thermophoretic migration of matter for optical trapping. [ 76 ] The team further integrated thermophoresis with laser cooling to develop opto-refrigerative tweezers that avoid thermal damage, enabling noninvasive optical trapping and manipulation. [ 77 ] When a cluster of microparticles is trapped within a monochromatic laser beam, the organization of the microparticles within the optical trap is heavily dependent on the redistribution of the optical trapping forces amongst the microparticles. This redistribution of light forces amongst the cluster of microparticles provides a new force equilibrium on the cluster as a whole. As such, one can say that the cluster of microparticles is somewhat bound together by light. One of the first experimental demonstrations of optical binding was reported by Michael M. Burns, Jean-Marc Fournier, and Jene A. Golovchenko, [ 78 ] though it was originally predicted by T. Thirunamachandran. [ 79 ] One of the many recent studies on optical binding has shown that for a system of chiral nanoparticles, the magnitude of the binding forces is dependent on the polarisation of the laser beam and the handedness of the interacting particles themselves, [ 80 ] with potential applications in areas such as enantiomeric separation and optical nanomanipulation.
In order to simultaneously manipulate and image samples that exhibit fluorescence , optical tweezers can be built alongside a fluorescence microscope . [ 81 ] Such instruments are particularly useful when it comes to studying single or small numbers of biological molecules that have been fluorescently labelled, or in applications in which fluorescence is used to track and visualize objects that are to be trapped. This approach has been extended for simultaneous sensing and imaging of dynamic protein complexes using long and strong tethers generated by a highly efficient multi-step enzymatic approach [ 82 ] and applied to investigations of disaggregation machines in action. [ 83 ] Beyond 'standard' fluorescence, optical tweezers are now being built with multiple-colour confocal, widefield, STED, FRET, TIRF or IRM modalities. This allows applications such as measuring protein/DNA binding localization, protein folding, condensation, motor-protein force generation, visualization of cytoskeletal filaments and motor dynamics, microtubule dynamics, and the manipulation of liquid droplets (rheology) or their fusion. These setups are difficult to build and have traditionally been found in uncorrelated 'academic' setups. In recent years even home builders (both biophysicists and general biologists) have been converting to the alternative, acquiring fully correlated solutions with easy data acquisition and data analysis.
https://en.wikipedia.org/wiki/Optical_tweezers
Optical units are dimensionless units of length used in optical microscopy . They are used to express distances in terms of the numerical aperture of the system and the wavelength of the light used for observation. Using these units allows comparison of the properties of different microscopes. [ 1 ] For example, the diameter of the first minimum of the Airy disk is always 7.6 optical units in the image plane of a diffraction-limited microscope. There are two types of optical units. Radial optical units are measured in the image plane, and axial optical units are used to measure distances between the image plane and the observer. The number of optical units v in a given radial length r is given by:

v_radial = (2π/λ) · (n sin α / M_tot) · r

where λ is the wavelength of the light, n sin α is the numerical aperture of the objective, M_tot is the total magnification of the system, and r is the radial distance in the image plane. Axial optical units are more complicated, as there is no simple definition of resolution in the axial direction. There are two forms of the optical unit for the axial direction. For the case of a system with high numerical aperture, the axial optical units in a distance z are given by:

u_z = (2π/λ) · ((n sin α)² / η) · z

where z is the axial distance and η is the refractive index of the medium. For systems with low numerical aperture, the axial optical unit is:

u_z = (8πη/λ) · sin²(α/2) · z

This optics -related article is a stub . You can help Wikipedia by expanding it .
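The claim that the Airy-disk first minimum spans 7.6 optical units follows directly from the radial formula. A short sketch (the wavelength, numerical aperture and magnification are arbitrary, since they cancel) verifies this:

```python
import math

# Check of the Airy-disk diameter in optical units using the radial
# formula; the parameters below are arbitrary because they cancel out
# of the final, dimensionless number.

def radial_optical_units(r_image, wavelength, na, m_tot):
    """v = (2*pi/lambda) * (NA / M_tot) * r, with r measured in the
    image plane."""
    return (2 * math.pi / wavelength) * (na / m_tot) * r_image

wavelength, na, m_tot = 500e-9, 1.4, 100.0

# The Airy first-minimum diameter is 1.22*lambda/NA in the object
# plane, hence M_tot times larger in the image plane.
d_image = m_tot * 1.22 * wavelength / na
v = radial_optical_units(d_image, wavelength, na, m_tot)
print(f"Airy first-minimum diameter: {v:.2f} optical units")
```

Independent of the chosen parameters, the result is 2π × 1.22 ≈ 7.7, in line with the approximate figure of 7.6 quoted above.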
https://en.wikipedia.org/wiki/Optical_unit
Optical wireless communications ( OWC ) is a form of optical communication in which unguided light is used "in the air" (or in outer space ), without an optical fiber . Visible , infrared (IR), or ultraviolet (UV) light is used to carry a wireless signal. It is generally used in short-range communication; extensions exist for long-range and ultra-long-range use. OWC systems operating in the visible band (390–750 nm) are commonly referred to as visible light communication (VLC). VLC systems take advantage of light-emitting diodes (LEDs), which can be pulsed at very high speeds without a noticeable effect on the lighting output or the human eye. VLC can potentially be used in a wide range of applications including wireless local area networks , wireless personal area networks and vehicular networks , among others. [ 1 ] On the other hand, terrestrial point-to-point OWC systems, also known as free space optical (FSO) systems, [ 2 ] operate at near-IR frequencies (750–1600 nm). These systems typically use laser transmitters and offer a cost-effective, protocol-transparent link with high data rates , e.g. 10 Gbit/s per wavelength, and provide a potential solution for the backhaul bottleneck. There has also been growing interest in ultraviolet communication (UVC) as a result of recent progress in solid-state optical sources/detectors operating within the solar-blind UV spectrum (200–280 nm). In this so-called deep-UV band, solar radiation is negligible at ground level, which makes it possible to design photon-counting detectors with wide field-of-view receivers that increase the received energy with little additional background noise. Such designs are particularly useful for outdoor non-line-of-sight configurations to support low-power short-range UVC, such as in wireless sensor and ad-hoc networks. Wireless communications technologies proliferated and became essential very quickly during the last few decades of the 20th century and the early 21st century.
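As a small illustration of the spectral bands quoted above, the helper below (an illustrative sketch, not part of any OWC standard) maps a carrier wavelength to the corresponding OWC category:

```python
# Illustrative classifier for the OWC bands quoted in the text. The
# band edges are the approximate values given above, not normative
# limits from any standard.

def owc_band(wavelength_nm):
    """Map a wavelength in nanometres to an OWC band name."""
    if 200 <= wavelength_nm <= 280:
        return "UVC (solar-blind ultraviolet)"
    if 390 <= wavelength_nm <= 750:
        return "VLC (visible light communication)"
    if 750 < wavelength_nm <= 1600:
        return "FSO (near-infrared free space optics)"
    return "outside the bands discussed here"

print(owc_band(650))    # red LED -> VLC
print(owc_band(1550))   # telecom-band laser -> FSO
print(owc_band(265))    # deep-UV LED -> UVC
```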
The wide-scale deployment of radio-frequency technologies was a key factor in the expansion of wireless devices and systems. However, the portion of the electromagnetic spectrum used by wireless systems is limited in capacity, and licenses to use parts of the spectrum are expensive. With the rise in data-heavy wireless communications, the demand for RF spectrum is outstripping supply, causing companies to consider options for using parts of the electromagnetic spectrum other than radio frequencies. Optical wireless communication (OWC) refers to transmission in unguided propagation media through the use of optical carriers: visible , infrared (IR), and ultraviolet (UV) radiation. Signalling through beacon fires , smoke , ship flags and semaphore telegraph can be considered the historical forms of OWC. [ 3 ] Sunlight has also been used for long-distance signaling since very early times. The earliest use of sunlight for communication purposes is attributed to ancient Greeks and Romans who used polished shields to send signals by reflecting sunlight during battles. [ 4 ] In 1810, Carl Friedrich Gauss invented the heliograph which uses a pair of mirrors to direct a controlled beam of sunlight to a distant station. Although the original heliograph was designed for the geodetic survey, it was used extensively for military purposes during the late 19th and early 20th century. In 1880, Alexander Graham Bell invented the photophone , the world’s first wireless telephone system. Military interest in photophones continued after Bell's time. For example, in 1935, the German Army developed a photophone where a tungsten filament lamp with an IR transmitting filter was used as a light source. Also, American and German military laboratories continued the development of high-pressure arc lamps for optical communication until the 1950s. [ 5 ] Modern OWC uses either lasers or light-emitting diodes (LEDs) as transmitters. 
In 1962, MIT Lincoln Labs built an experimental OWC link using a light-emitting GaAs diode and was able to transmit TV signals over a distance of 30 miles. After the invention of the laser, OWC was envisioned to be the main deployment area for lasers, and many trials were conducted using different types of lasers and modulation schemes. [ 6 ] However, the results were in general disappointing due to the large divergence of laser beams and the inability to cope with atmospheric effects. With the development of low-loss fiber optics in the 1970s, they became the obvious choice for long-distance optical transmission and shifted the focus away from OWC systems. Over the decades, interest in OWC was mainly limited to covert military applications, [ 7 ] and space applications including inter-satellite and deep-space links. [ 8 ] OWC's mass-market penetration has so far been limited, with the exception of IrDA, which is a highly successful wireless short-range transmission solution. Variations of OWC can be potentially employed in a diverse range of communication applications ranging from optical interconnects within integrated circuits through outdoor inter-building links to satellite communications. OWC can be divided into five categories based on the transmission range:
https://en.wikipedia.org/wiki/Optical_wireless_communications
Optically active additive (OAA) is an organic or inorganic material which, when added to a coating , makes that coating react to ultraviolet light. This effect enables quick, non-invasive inspection of very large coated areas during the application process, allowing the coating inspector to identify and concentrate on defective areas, thus reducing inspection time while helping assure good application and coverage. It works by highlighting holidays and pin-holes and areas of over- and under-application, as well as giving the opportunity for crack detection and identification of early coating deterioration through life. The use of optically active additives or fluorescing additives is specified in US Military Specification MIL-SPEC-23236C. [ 1 ] The use of OAAs and the inspection technique is described in the SSPC document Technology Up-date 11. There are two common types of optically active additives available commercially: inorganic and organic. Inorganic OAAs exhibit large particle sizes of 5–10 μm (no mobility), are light-stable, are available in a choice of colours, are useful in a wide range of coating systems, and are more expensive. Some inorganic OAAs can exhibit some degree of afterglow, aiding inspection. Organic OAAs require low addition levels, are soluble in solvents and organic liquids (mobile), are blue under UV (emitting the same colour as lint, oil, grease etc.), can fade quickly, have limited use in a range of coating systems and are less expensive. They are also indistinguishable from old tar epoxy-type coatings still seen on some structures and vessels. Organic OAAs have no afterglow. If a single photon approaches an atom which is receptive to it, the photon can be absorbed by the atom in a manner very similar to a radio wave being picked up by an aerial. At the moment of absorption the photon ceases to exist and the total energy contained within the atom increases.
This increase in energy is usually described symbolically by saying that one of the outermost electrons "jumps" to a "higher orbit". This new atomic configuration is unstable, and the tendency is for the electron to fall back to its lower orbit or energy level, emitting a new photon as it goes. The entire process may take no more than 10 −9 seconds. The result is much the same as with reflective colour, but because of the process of absorption and emission, the substance emits a glow. According to Planck , the energy of each photon is given by multiplying its frequency by a constant (the Planck constant , 6.626 × 10 −34 J⋅Hz −1 [ 2 ] ). It follows that the wavelength of a photon emitted from a luminescent system is directly related to the difference between the energies of the two atomic levels involved. In terms of wavelength , this relationship is an inverse one, so that if an emitted photon is to be of short wavelength (high energy), the gap to be jumped by the electron must be a large one. Numerically, the frequency of the emitted photon equals the energy gap divided by the Planck constant. Chemical engineers are able to devise molecules with these energy levels in mind, so as to adjust the wavelength of the emitted photons to produce a specific colour.
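The Planck relation described above can be turned into a one-line calculation. In this sketch, the 2.5 eV energy gap is an illustrative assumption, not a value from the article; the emitted wavelength is computed from the gap between the two atomic levels.

```python
# Emitted photon wavelength from an energy gap via the Planck relation
# E = h*nu, i.e. lambda = h*c / E. The 2.5 eV gap is illustrative.

H = 6.626e-34   # Planck constant, J*Hz^-1
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def emission_wavelength(gap_ev):
    """lambda = h*c / E: larger gaps give shorter wavelengths."""
    return H * C / (gap_ev * EV)

# A ~2.5 eV gap emits visible (green-blue) light; halving the gap
# doubles the wavelength, pushing the emission toward the infrared.
wl = emission_wavelength(2.5)
print(f"{wl * 1e9:.0f} nm")
```

This makes the inverse relationship in the text concrete: doubling the energy gap halves the emitted wavelength.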
https://en.wikipedia.org/wiki/Optically_active_additive
In physics, optically detected magnetic resonance ( ODMR ) is a technique for detecting quantum objects that are both paramagnetic and optically active. In the case of photoluminescent point defects ( color centers ) in crystals, the “ODMR signal” usually means a decrease in the defect’s fluorescence intensity under continuous illumination due to a simultaneously applied AC magnetic field . The AC magnetic field induces Rabi oscillations of the fluorescing electrons, which as a result rapidly transition between an optically active state and an optically inactive state, decreasing the overall fluorescence signal. By varying the frequency of the AC magnetic field (often referred to as the RF field due to the typical frequencies used), the resonance frequency of a particular transition can be measured, since the resonant RF field induces a marked decrease in fluorescence intensity. There may be many such transitions, and their characteristics as observed with ODMR (principally frequency and linewidth) depend sensitively on the conditions of the measurement, motivating the use of ODMR as a technique for quantum sensing . [ 1 ] Like electron paramagnetic resonance (EPR), ODMR makes use of the Zeeman effect in unpaired electrons. The negatively charged nitrogen-vacancy centre (NV − ) has been the target of considerable interest with regard to performing experiments using ODMR. [ 2 ] ODMR of NV − s in diamond has applications in magnetometry [ 3 ] and sensing, biomedical imaging, quantum information and the exploration of fundamental physics . The nitrogen-vacancy defect in diamond consists of a single substitutional nitrogen atom (replacing one carbon atom) and an adjacent gap, or vacancy, in the lattice where normally a carbon atom would be located. The nitrogen vacancy occurs in three possible charge states: positive (NV + ), neutral (NV 0 ) and negative (NV − ).
[ 4 ] As NV − is the only one of these charge states which has been shown to be ODMR active, it is often referred to simply as the NV. The energy level structure of the NV − consists of a triplet ground state, a triplet excited state and two singlet states. Under resonant optical excitation, the NV may be raised from the triplet ground state to the triplet excited state. The centre may then return to the ground state via two routes: by the emission of a photon of 637 nm in the zero-phonon line (ZPL) (or of longer wavelength from the phonon sideband), or alternatively via the aforementioned singlet states through intersystem crossing and the emission of a 1042 nm photon. A return to the ground state via the latter route will preferentially result in the m_s = 0 state. Relaxation to the m_s = 0 state via this route necessarily results in a decrease in visible-wavelength fluorescence (as the emitted photon is in the infrared range). Microwave pumping at the resonant frequency ν = 2.87 GHz places the centre in the degenerate m_s = ±1 state. The application of a magnetic field lifts this degeneracy , causing Zeeman splitting and a decrease of fluorescence at two resonant frequencies, given by hν = g_e μ_B B_0 , where h is the Planck constant , g_e is the electron g-factor and μ_B is the Bohr magneton . Sweeping the microwave field through these frequencies results in two characteristic dips in the observed fluorescence, the separation between which enables determination of the strength of the magnetic field B_0 . Further splitting in the fluorescence spectrum may occur due to the hyperfine interaction , which leads to further resonance conditions and corresponding spectral lines.
In NV ODMR, this detailed structure usually originates from nitrogen and carbon-13 atoms near the defect. These atoms have small magnetic fields which interact with the spectral lines of the NV, causing further splitting. Hyperfine interactions in nitrogen-vacancy (NV) centres arise from nearby nuclear spins, primarily due to nitrogen (14N or 15N) and, in some cases, 13C atoms near the defect. These interactions are significant because they further split the energy levels of the NV centre, resulting in additional resonances in the ODMR spectrum. The nitrogen atom in the NV centre can exist as either 14N (with nuclear spin I = 1) or 15N (with nuclear spin I = 1/2). The most common isotope, 14N, couples with the electron spin of the NV centre, leading to a hyperfine splitting of the m_s = ±1 states into three sub-levels. The interaction of the NV electron spin with the 14N nuclear spin can be described by a hyperfine Hamiltonian of the form H_hf = A_∥ S_z I_z + A_⊥ (S_x I_x + S_y I_y), where S represents the NV electron spin and I the nitrogen nuclear spin. This splitting typically depends upon the constants A_∥ = 2.14 MHz and A_⊥ = 2.14 MHz. The splitting can be observed as three peaks in the hyperfine-resolved ODMR spectrum. In NV centres, hyperfine splitting arises due to the interaction between the NV electron spin magnetic moment and nuclear spin magnetic moments. NV spin magnetic moments also depend upon the external magnetic field magnitude and orientation. [ 5 ] To perform hyperfine-resolved ODMR, a single-NV ODMR experiment is generally preferable. If 15N is present instead of 14N, it will split the m_s = ±1 states into two sublevels. [ 6 ] Nearby 13C atoms (with nuclear spin I = 1/2) can also interact with the NV centre. 13C atoms are randomly distributed in diamond and have a natural abundance of about 1.1%.
When located near the NV center, they induce additional fine structures in the ODMR signal. The coupling strength varies with the position of the 13C nuclei relative to the NV center. [ 7 ]
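The magnetometry principle described earlier, two fluorescence dips whose separation grows with the Zeeman splitting, can be sketched numerically. The 56 MHz dip separation below is an illustrative assumption, not a measured value.

```python
# Recovering the magnetic field strength from the separation of the two
# ODMR fluorescence dips, using h*nu = g_e * mu_B * B0 for the Zeeman
# shift of each m_s = +/-1 level. The 56 MHz example is illustrative.

H = 6.626e-34     # Planck constant, J*Hz^-1
MU_B = 9.274e-24  # Bohr magneton, J/T
G_E = 2.003       # electron g-factor of the NV centre (approximate)

def field_from_dip_separation(delta_nu_hz):
    """B0 from the frequency gap between the two ODMR dips.

    Each dip shifts by g_e*mu_B*B0/h away from the 2.87 GHz zero-field
    resonance (for a field along the NV axis), so the dip separation
    is twice that shift.
    """
    return H * delta_nu_hz / (2 * G_E * MU_B)

# Example: dips observed 56 MHz apart correspond to a field of ~1 mT.
b0 = field_from_dip_separation(56e6)
print(f"B0 = {b0 * 1e3:.2f} mT")
```

The conversion factor works out to roughly 28 MHz of splitting per millitesla per dip, which is why NV ODMR is sensitive to quite small fields.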
https://en.wikipedia.org/wiki/Optically_detected_magnetic_resonance
Optics is the branch of physics that studies the behaviour and properties of light , including its interactions with matter and the construction of instruments that use or detect it. [ 1 ] Optics usually describes the behaviour of visible , ultraviolet , and infrared light. Light is a type of electromagnetic radiation , and other forms of electromagnetic radiation such as X-rays , microwaves , and radio waves exhibit similar properties. [ 1 ] Most optical phenomena can be accounted for by using the classical electromagnetic description of light; however, complete electromagnetic descriptions of light are often difficult to apply in practice. Practical optics is usually done using simplified models. The most common of these, geometric optics , treats light as a collection of rays that travel in straight lines and bend when they pass through or reflect from surfaces. Physical optics is a more comprehensive model of light, which includes wave effects such as diffraction and interference that cannot be accounted for in geometric optics. Historically, the ray-based model of light was developed first, followed by the wave model of light. Progress in electromagnetic theory in the 19th century led to the discovery that light waves were in fact electromagnetic radiation. Some phenomena depend on light having both wave-like and particle-like properties . Explanation of these effects requires quantum mechanics . When considering light's particle-like properties, the light is modelled as a collection of particles called " photons ". Quantum optics deals with the application of quantum mechanics to optical systems. Optical science is relevant to and studied in many related disciplines including astronomy , various engineering fields, photography , and medicine (particularly ophthalmology and optometry , in which it is called physiological optics).
Practical applications of optics are found in a variety of technologies and everyday objects, including mirrors , lenses , telescopes , microscopes , lasers , and fibre optics . Optics began with the development of lenses by the ancient Egyptians and Mesopotamians . The earliest known lenses, made from polished crystal , often quartz , date from as early as 2000 BC from Crete (Archaeological Museum of Heraclion, Greece). Lenses from Rhodes date around 700 BC, as do Assyrian lenses such as the Nimrud lens . [ 2 ] The ancient Romans and Greeks filled glass spheres with water to make lenses. These practical developments were followed by the development of theories of light and vision by ancient Greek and Indian philosophers, and the development of geometrical optics in the Greco-Roman world . The word optics comes from the ancient Greek word ὀπτική , optikē ' appearance, look ' . [ 3 ] Greek philosophy on optics broke down into two opposing theories on how vision worked, the intromission theory and the emission theory . [ 4 ] The intromission approach saw vision as coming from objects casting off copies of themselves (called eidola) that were captured by the eye. With many propagators including Democritus , Epicurus , Aristotle and their followers, this theory seems to have some contact with modern theories of what vision really is, but it remained only speculation lacking any experimental foundation. Plato first articulated the emission theory , the idea that visual perception is accomplished by rays emitted by the eyes. He also commented on the parity reversal of mirrors in Timaeus . [ 5 ] Some hundred years later, Euclid (4th–3rd century BC) wrote a treatise entitled Optics where he linked vision to geometry , creating geometrical optics . 
[ 6 ] He based his work on Plato's emission theory wherein he described the mathematical rules of perspective and described the effects of refraction qualitatively, although he questioned that a beam of light from the eye could instantaneously light up the stars every time someone blinked. [ 7 ] Euclid stated the principle of shortest trajectory of light, and considered multiple reflections on flat and spherical mirrors. Ptolemy , in his treatise Optics , held an extramission-intromission theory of vision: the rays (or flux) from the eye formed a cone, the vertex being within the eye, and the base defining the visual field. The rays were sensitive, and conveyed information back to the observer's intellect about the distance and orientation of surfaces. He summarized much of Euclid and went on to describe a way to measure the angle of refraction , though he failed to notice the empirical relationship between it and the angle of incidence. [ 8 ] Plutarch (1st–2nd century AD) described multiple reflections on spherical mirrors and discussed the creation of magnified and reduced images, both real and imaginary, including the case of chirality of the images. During the Middle Ages , Greek ideas about optics were resurrected and extended by writers in the Muslim world . One of the earliest of these was Al-Kindi ( c. 801 –873) who wrote on the merits of Aristotelian and Euclidean ideas of optics, favouring the emission theory since it could better quantify optical phenomena. [ 9 ] In 984, the Persian mathematician Ibn Sahl wrote the treatise "On burning mirrors and lenses", correctly describing a law of refraction equivalent to Snell's law. [ 10 ] He used this law to compute optimum shapes for lenses and curved mirrors . In the early 11th century, Alhazen (Ibn al-Haytham) wrote the Book of Optics ( Kitab al-manazir ) in which he explored reflection and refraction and proposed a new system for explaining vision and light based on observation and experiment. 
[ 11 ] He rejected the "emission theory" of Ptolemaic optics with its rays being emitted by the eye, and instead put forward the idea that light reflected in all directions in straight lines from all points of the objects being viewed and then entered the eye, although he was unable to correctly explain how the eye captured the rays. [ 12 ] Alhazen's work was largely ignored in the Arabic world but it was anonymously translated into Latin around 1200 A.D. and further summarised and expanded on by the Polish monk Witelo [ 13 ] making it a standard text on optics in Europe for the next 400 years. [ 14 ] In the 13th century in medieval Europe, English bishop Robert Grosseteste wrote on a wide range of scientific topics, and discussed light from four different perspectives: an epistemology of light, a metaphysics or cosmogony of light, an etiology or physics of light, and a theology of light, [ 15 ] basing it on the works of Aristotle and Platonism. Grosseteste's most famous disciple, Roger Bacon , wrote works citing a wide range of recently translated optical and philosophical works, including those of Alhazen, Aristotle, Avicenna , Averroes , Euclid, al-Kindi, Ptolemy, Tideus, and Constantine the African . Bacon was able to use parts of glass spheres as magnifying glasses to demonstrate that light reflects from objects rather than being released from them. The first wearable eyeglasses were invented in Italy around 1286. [ 16 ] This was the start of the optical industry of grinding and polishing lenses for these "spectacles", first in Venice and Florence in the thirteenth century, [ 17 ] and later in the spectacle making centres in both the Netherlands and Germany. 
[ 18 ] Spectacle makers created improved types of lenses for the correction of vision based more on empirical knowledge gained from observing the effects of the lenses rather than using the rudimentary optical theory of the day (theory which for the most part could not even adequately explain how spectacles worked). [ 19 ] [ 20 ] This practical development, mastery, and experimentation with lenses led directly to the invention of the compound optical microscope around 1595, and the refracting telescope in 1608, both of which appeared in the spectacle making centres in the Netherlands. [ 21 ] [ 22 ] In the early 17th century, Johannes Kepler expanded on geometric optics in his writings, covering lenses, reflection by flat and curved mirrors, the principles of pinhole cameras , inverse-square law governing the intensity of light, and the optical explanations of astronomical phenomena such as lunar and solar eclipses and astronomical parallax . He was also able to correctly deduce the role of the retina as the actual organ that recorded images, finally being able to scientifically quantify the effects of different types of lenses that spectacle makers had been observing over the previous 300 years. [ 24 ] After the invention of the telescope, Kepler set out the theoretical basis on how they worked and described an improved version, known as the Keplerian telescope , using two convex lenses to produce higher magnification. [ 25 ] Optical theory progressed in the mid-17th century with treatises written by philosopher René Descartes , which explained a variety of optical phenomena including reflection and refraction by assuming that light was emitted by objects which produced it. [ 26 ] This differed substantively from the ancient Greek emission theory. 
In the late 1660s and early 1670s, Isaac Newton expanded Descartes's ideas into a corpuscle theory of light , famously determining that white light was a mix of colours that can be separated into its component parts with a prism . In 1690, Christiaan Huygens proposed a wave theory for light based on suggestions that had been made by Robert Hooke in 1664. Hooke himself publicly criticised Newton's theories of light and the feud between the two lasted until Hooke's death. In 1704, Newton published Opticks and, at the time, partly because of his success in other areas of physics, he was generally considered to be the victor in the debate over the nature of light. [ 26 ] Newtonian optics was generally accepted until the early 19th century when Thomas Young and Augustin-Jean Fresnel conducted experiments on the interference of light that firmly established light's wave nature. Young's famous double slit experiment showed that light followed the superposition principle , which is a wave-like property not predicted by Newton's corpuscle theory. This work led to a theory of diffraction for light and opened an entire area of study in physical optics. [ 27 ] Wave optics was successfully unified with electromagnetic theory by James Clerk Maxwell in the 1860s. [ 28 ] The next development in optical theory came in 1899 when Max Planck correctly modelled blackbody radiation by assuming that the exchange of energy between light and matter only occurred in discrete amounts he called quanta . [ 29 ] In 1905, Albert Einstein published the theory of the photoelectric effect that firmly established the quantization of light itself. [ 30 ] [ 31 ] In 1913, Niels Bohr showed that atoms could only emit discrete amounts of energy, thus explaining the discrete lines seen in emission and absorption spectra . 
[ 32 ] The understanding of the interaction between light and matter that followed from these developments not only formed the basis of quantum optics but also was crucial for the development of quantum mechanics as a whole. The ultimate culmination, the theory of quantum electrodynamics , explains all optics and electromagnetic processes in general as the result of the exchange of real and virtual photons. [ 33 ] Quantum optics gained practical importance with the inventions of the maser in 1953 and of the laser in 1960. [ 34 ] Following the work of Paul Dirac in quantum field theory , George Sudarshan , Roy J. Glauber , and Leonard Mandel applied quantum theory to the electromagnetic field in the 1950s and 1960s to gain a more detailed understanding of photodetection and the statistics of light. Classical optics is divided into two main branches: geometrical (or ray) optics and physical (or wave) optics. In geometrical optics, light is considered to travel in straight lines, while in physical optics, light is considered as an electromagnetic wave. Geometrical optics can be viewed as an approximation of physical optics that applies when the wavelength of the light used is much smaller than the size of the optical elements in the system being modelled. Geometrical optics , or ray optics , describes the propagation of light in terms of "rays" which travel in straight lines, and whose paths are governed by the laws of reflection and refraction at interfaces between different media. [ 35 ] These laws were discovered empirically as far back as 984 AD [ 10 ] and have been used in the design of optical components and instruments from then until the present day. They can be summarised as follows: When a ray of light hits the boundary between two transparent materials, it is divided into a reflected and a refracted ray. 
The laws of reflection and refraction can be derived from Fermat's principle which states that the path taken between two points by a ray of light is the path that can be traversed in the least time. [ 36 ] Geometric optics is often simplified by making the paraxial approximation , or "small angle approximation". The mathematical behaviour then becomes linear, allowing optical components and systems to be described by simple matrices. This leads to the techniques of Gaussian optics and paraxial ray tracing , which are used to find basic properties of optical systems, such as approximate image and object positions and magnifications . [ 37 ] Reflections can be divided into two types: specular reflection and diffuse reflection . Specular reflection describes the gloss of surfaces such as mirrors, which reflect light in a simple, predictable way. This allows for the production of reflected images that can be associated with an actual ( real ) or extrapolated ( virtual ) location in space. Diffuse reflection describes non-glossy materials, such as paper or rock. The reflections from these surfaces can only be described statistically, with the exact distribution of the reflected light depending on the microscopic structure of the material. Many diffuse reflectors are described or can be approximated by Lambert's cosine law , which describes surfaces that have equal luminance when viewed from any angle. Glossy surfaces can give both specular and diffuse reflection. In specular reflection, the direction of the reflected ray is determined by the angle the incident ray makes with the surface normal , a line perpendicular to the surface at the point where the ray hits. The incident and reflected rays and the normal lie in a single plane, and the angle between the reflected ray and the surface normal is the same as that between the incident ray and the normal. [ 38 ] This is known as the Law of Reflection . 
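The "simple matrices" of the paraxial approximation mentioned above can be illustrated with a short sketch. This is not from the article itself; it assumes the standard ABCD ray-transfer-matrix convention, in which a paraxial ray is a (height, angle) vector and free-space propagation and a thin lens are each 2×2 matrices:

```python
import numpy as np

# Paraxial (ABCD) ray tracing: a ray is the vector (height y, angle theta).
# Free-space propagation over distance d and a thin lens of focal length f
# are each represented by a 2x2 matrix acting on that vector.
def propagate(d):
    return np.array([[1.0, d], [0.0, 1.0]])

def thin_lens(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# A ray parallel to the axis at height 1 mm passes through a lens with
# f = 100 mm and then 100 mm of free space: it lands on the axis, since a
# converging lens focuses parallel rays to its focal point.
ray = np.array([1.0, 0.0])                     # [y in mm, angle in rad]
system = propagate(100.0) @ thin_lens(100.0)   # rightmost matrix acts first
y_out, theta_out = system @ ray
print(y_out, theta_out)                        # y_out is 0.0 at the focus
```

Composing the matrices in the order the ray meets the elements (rightmost first) is what makes multi-element Gaussian-optics calculations reduce to a single matrix product.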
For flat mirrors , the law of reflection implies that images of objects are upright and the same distance behind the mirror as the objects are in front of the mirror. The image size is the same as the object size. The law also implies that mirror images are parity inverted, which we perceive as a left-right inversion. Images formed from reflection in two (or any even number of) mirrors are not parity inverted. Corner reflectors produce reflected rays that travel back in the direction from which the incident rays came. [ 39 ] This is called retroreflection . Mirrors with curved surfaces can be modelled by ray tracing and using the law of reflection at each point on the surface. For mirrors with parabolic surfaces , parallel rays incident on the mirror produce reflected rays that converge at a common focus . Other curved surfaces may also focus light, but with aberrations due to the diverging shape causing the focus to be smeared out in space. In particular, spherical mirrors exhibit spherical aberration . Curved mirrors can form images with a magnification greater than or less than one, and the magnification can be negative, indicating that the image is inverted. An upright image formed by reflection in a mirror is always virtual, while an inverted image is real and can be projected onto a screen. [ 40 ] Refraction occurs when light travels through an area of space that has a changing index of refraction; this principle allows for lenses and the focusing of light. The simplest case of refraction occurs when there is an interface between a uniform medium with index of refraction n₁ and another medium with index of refraction n₂. In such situations, Snell's Law describes the resulting deflection of the light ray: $n_{1}\sin \theta _{1}=n_{2}\sin \theta _{2}$, where θ₁ and θ₂ are the angles between the normal (to the interface) and the incident and refracted waves, respectively. 
[ 38 ] The index of refraction of a medium is related to the speed, v , of light in that medium by $n=c/v$, where c is the speed of light in vacuum . Snell's Law can be used to predict the deflection of light rays as they pass through linear media as long as the indexes of refraction and the geometry of the media are known. For example, the propagation of light through a prism results in the light ray being deflected depending on the shape and orientation of the prism. In most materials, the index of refraction varies with the frequency of the light, a phenomenon known as dispersion . Taking this into account, Snell's Law can be used to predict how a prism will disperse light into a spectrum. [ 41 ] The discovery of this phenomenon when passing light through a prism is famously attributed to Isaac Newton. Some media have an index of refraction which varies gradually with position and, therefore, light rays in the medium are curved. This effect is responsible for mirages seen on hot days: a change in the index of refraction of air with height causes light rays to bend, creating the appearance of specular reflections in the distance (as if on the surface of a pool of water). Optical materials with varying indexes of refraction are called gradient-index (GRIN) materials. Such materials are used to make gradient-index optics . [ 42 ] For light rays travelling from a material with a high index of refraction to a material with a low index of refraction, Snell's law predicts that there is no θ₂ when θ₁ is large. In this case, no transmission occurs; all the light is reflected. This phenomenon is called total internal reflection and allows for fibre optics technology. As light travels down an optical fibre, it undergoes total internal reflection, allowing essentially no light to be lost over the length of the cable. [ 43 ] A device that produces converging or diverging light rays due to refraction is known as a lens . 
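Snell's law and the onset of total internal reflection can be sketched numerically. The refractive indices below (air/crown glass, and a silica fibre core against its cladding) are illustrative textbook values, not figures from the article:

```python
import math

# Snell's law: n1*sin(theta1) = n2*sin(theta2), angles measured from the
# surface normal. When n1*sin(theta1)/n2 exceeds 1 there is no refracted
# ray: total internal reflection.
def refraction_angle_deg(n1, n2, theta1_deg):
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None                    # total internal reflection
    return math.degrees(math.asin(s))

theta2 = refraction_angle_deg(1.0, 1.5, 30.0)     # air into glass: ~19.47 deg
critical = math.degrees(math.asin(1.46 / 1.475))  # fibre core->cladding: ~81.8 deg
tir = refraction_angle_deg(1.5, 1.0, 60.0)        # glass->air past critical: None
print(theta2, critical, tir)
```

The large critical angle of the fibre example shows why only rays travelling nearly parallel to the fibre axis stay guided.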
Lenses are characterized by their focal length : a converging lens has positive focal length, while a diverging lens has negative focal length. Smaller focal length indicates that the lens has a stronger converging or diverging effect. The focal length of a simple lens in air is given by the lensmaker's equation . [ 44 ] Ray tracing can be used to show how images are formed by a lens. For a thin lens in air, the location of the image is given by the simple equation ${\frac {1}{S_{1}}}+{\frac {1}{S_{2}}}={\frac {1}{f}}$, where S₁ is the distance from the object to the lens, S₂ is the distance from the lens to the image, and f is the focal length of the lens. In the sign convention used here, the object and image distances are positive if the object and image are on opposite sides of the lens. [ 45 ] Incoming parallel rays are focused by a converging lens onto a spot one focal length from the lens, on the far side of the lens. This is called the rear focal point of the lens. Rays from an object at a finite distance are focused further from the lens than the focal distance; the closer the object is to the lens, the further the image is from the lens. With diverging lenses, incoming parallel rays diverge after going through the lens, in such a way that they seem to have originated at a spot one focal length in front of the lens. This is the lens's front focal point. Rays from an object at a finite distance are associated with a virtual image that is closer to the lens than the focal point, and on the same side of the lens as the object. The closer the object is to the lens, the closer the virtual image is to the lens. As with mirrors, upright images produced by a single lens are virtual, while inverted images are real. [ 46 ] Lenses suffer from aberrations that distort images. 
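The thin-lens equation above can be solved directly for the image distance. The object distance and focal length below are illustrative values, and the magnification sign convention (negative for an inverted real image) is the common textbook one, stated here as an assumption:

```python
# Thin-lens equation: 1/S1 + 1/S2 = 1/f, solved for the image distance S2.
def image_distance(s1, f):
    return 1.0 / (1.0 / f - 1.0 / s1)

s1, f = 300.0, 100.0          # mm; object 300 mm from a 100 mm converging lens
s2 = image_distance(s1, f)    # 150.0 mm: a real image beyond the focal point
m = -s2 / s1                  # -0.5: inverted, half the object's size
print(s2, m)
```

Moving the object towards the focal point sends S2 to infinity, matching the statement that closer objects produce images further from the lens.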
Monochromatic aberrations occur because the geometry of the lens does not perfectly direct rays from each object point to a single point on the image, while chromatic aberration occurs because the index of refraction of the lens varies with the wavelength of the light. [ 47 ] In physical optics, light is considered to propagate as waves. This model predicts phenomena such as interference and diffraction, which are not explained by geometric optics. The speed of light waves in air is approximately 3.0×10⁸ m/s (exactly 299,792,458 m/s in vacuum ). The wavelength of visible light waves varies between 400 and 700 nm, but the term "light" is also often applied to infrared (0.7–300 μm) and ultraviolet radiation (10–400 nm). The wave model can be used to make predictions about how an optical system will behave without requiring an explanation of what is "waving" in what medium. Until the middle of the 19th century, most physicists believed in an "ethereal" medium in which the light disturbance propagated. [ 48 ] The existence of electromagnetic waves was predicted in 1865 by Maxwell's equations . These waves propagate at the speed of light and have varying electric and magnetic fields which are orthogonal to one another, and also to the direction of propagation of the waves. [ 49 ] Light waves are now generally treated as electromagnetic waves except when quantum mechanical effects have to be considered. Many simplified approximations are available for analysing and designing optical systems. Most of these use a single scalar quantity to represent the electric field of the light wave, rather than using a vector model with orthogonal electric and magnetic vectors. [ 50 ] The Huygens–Fresnel equation is one such model. This was derived empirically by Fresnel in 1815, based on Huygens' hypothesis that each point on a wavefront generates a secondary spherical wavefront, which Fresnel combined with the principle of superposition of waves. 
The Kirchhoff diffraction equation , which is derived using Maxwell's equations, puts the Huygens-Fresnel equation on a firmer physical foundation. Examples of the application of Huygens–Fresnel principle can be found in the articles on diffraction and Fraunhofer diffraction . More rigorous models, involving the modelling of both electric and magnetic fields of the light wave, are required when dealing with materials whose electric and magnetic properties affect the interaction of light with the material. For instance, the behaviour of a light wave interacting with a metal surface is quite different from what happens when it interacts with a dielectric material. A vector model must also be used to model polarised light. Numerical modeling techniques such as the finite element method , the boundary element method and the transmission-line matrix method can be used to model the propagation of light in systems which cannot be solved analytically. Such models are computationally demanding and are normally only used to solve small-scale problems that require accuracy beyond that which can be achieved with analytical solutions. [ 51 ] All of the results from geometrical optics can be recovered using the techniques of Fourier optics which apply many of the same mathematical and analytical techniques used in acoustic engineering and signal processing . Gaussian beam propagation is a simple paraxial physical optics model for the propagation of coherent radiation such as laser beams. This technique partially accounts for diffraction, allowing accurate calculations of the rate at which a laser beam expands with distance, and the minimum size to which the beam can be focused. Gaussian beam propagation thus bridges the gap between geometric and physical optics. [ 52 ] In the absence of nonlinear effects, the superposition principle can be used to predict the shape of interacting waveforms through the simple addition of the disturbances. 
[ 53 ] This interaction of waves to produce a resulting pattern is generally termed "interference" and can result in a variety of outcomes. If two waves of the same wavelength and frequency are in phase , both the wave crests and wave troughs align. This results in constructive interference and an increase in the amplitude of the wave, which for light is associated with a brightening of the waveform in that location. Alternatively, if the two waves of the same wavelength and frequency are out of phase, then the wave crests will align with wave troughs and vice versa. This results in destructive interference and a decrease in the amplitude of the wave, which for light is associated with a dimming of the waveform at that location. See below for an illustration of this effect. [ 54 ] Since the Huygens–Fresnel principle states that every point of a wavefront is associated with the production of a new disturbance, it is possible for a wavefront to interfere with itself constructively or destructively at different locations producing bright and dark fringes in regular and predictable patterns. [ 55 ] Interferometry is the science of measuring these patterns, usually as a means of making precise determinations of distances or angular resolutions . [ 56 ] The Michelson interferometer was a famous instrument which used interference effects to accurately measure the speed of light. [ 57 ] The appearance of thin films and coatings is directly affected by interference effects. Antireflective coatings use destructive interference to reduce the reflectivity of the surfaces they coat, and can be used to minimise glare and unwanted reflections. The simplest case is a single layer with a thickness of one-fourth the wavelength of incident light. The reflected wave from the top of the film and the reflected wave from the film/material interface are then exactly 180° out of phase, causing destructive interference. 
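The quarter-wave condition for such a coating fixes its physical thickness at λ/(4n), where λ is the design wavelength in vacuum and n the film's index. The example below assumes a magnesium fluoride film (n ≈ 1.38, a common textbook value) at 550 nm; both numbers are illustrative, not from the article:

```python
# Quarter-wave antireflective coating: a film one quarter-wavelength thick
# (measured inside the film, hence the division by the film index) puts the
# two reflected waves exactly 180 degrees out of phase.
def quarter_wave_thickness_nm(wavelength_nm, n_film):
    return wavelength_nm / (4.0 * n_film)

t = quarter_wave_thickness_nm(550.0, 1.38)   # ~99.6 nm of MgF2 (illustrative)
print(t)
```

Only about a hundred nanometres of material suffices, which is why such coatings are deposited as thin films rather than machined layers.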
The waves are only exactly out of phase for one wavelength, which would typically be chosen to be near the centre of the visible spectrum, around 550 nm. More complex designs using multiple layers can achieve low reflectivity over a broad band, or extremely low reflectivity at a single wavelength. Constructive interference in thin films can create a strong reflection of light in a range of wavelengths, which can be narrow or broad depending on the design of the coating. These films are used to make dielectric mirrors , interference filters , heat reflectors , and filters for colour separation in colour television cameras. This interference effect is also what causes the colourful rainbow patterns seen in oil slicks. [ 58 ] Diffraction is the process by which light interference is most commonly observed. The effect was first described in 1665 by Francesco Maria Grimaldi , who also coined the term from the Latin diffringere ' to break into pieces ' . [ 59 ] [ 60 ] Later that century, Robert Hooke and Isaac Newton also described phenomena now known to be diffraction in Newton's rings [ 61 ] while James Gregory recorded his observations of diffraction patterns from bird feathers. [ 62 ] The first physical optics model of diffraction that relied on the Huygens–Fresnel principle was developed in 1803 by Thomas Young in his interference experiments with the interference patterns of two closely spaced slits. Young showed that his results could only be explained if the two slits acted as two unique sources of waves rather than corpuscles. [ 63 ] In 1815 and 1818, Augustin-Jean Fresnel firmly established the mathematics of how wave interference can account for diffraction. [ 64 ] The simplest physical models of diffraction use equations that describe the angular separation of light and dark fringes due to light of a particular wavelength ( λ ). 
In general, the equation takes the form $m\lambda =d\sin \theta$, where d is the separation between two wavefront sources (in the case of Young's experiments, it was two slits ), θ is the angular separation between the central fringe and the m -th order fringe, where the central maximum is m = 0 . [ 65 ] This equation is modified slightly to take into account a variety of situations such as diffraction through a single gap, diffraction through multiple slits, or diffraction through a diffraction grating that contains a large number of slits at equal spacing. [ 66 ] More complicated models of diffraction require working with the mathematics of Fresnel or Fraunhofer diffraction . [ 67 ] X-ray diffraction makes use of the fact that atoms in a crystal have regular spacing at distances that are on the order of one angstrom . To see diffraction patterns, x-rays with similar wavelengths to that spacing are passed through the crystal. Since crystals are three-dimensional objects rather than two-dimensional gratings, the associated diffraction pattern varies in two directions according to Bragg reflection , with the associated bright spots occurring in unique patterns and d being twice the spacing between atoms. [ 68 ] Diffraction effects limit the ability of an optical detector to optically resolve separate light sources. In general, light that is passing through an aperture will experience diffraction and the best images that can be created (as described in diffraction-limited optics ) appear as a central spot with surrounding bright rings, separated by dark nulls; this pattern is known as an Airy pattern , and the central bright lobe as an Airy disk . [ 69 ] The size of such a disk is given by $\sin \theta =1.22\,{\frac {\lambda }{D}}$, where θ is the angular resolution, λ is the wavelength of the light, and D is the diameter of the lens aperture. 
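Both formulas can be evaluated directly. The slit spacing, wavelength, and aperture diameter below are illustrative values chosen for the sketch:

```python
import math

wavelength = 550e-9   # green light, metres (illustrative)

# Double-slit fringes: m*lambda = d*sin(theta) gives the angle of the
# m-th bright fringe. First-order fringe for slits 10 micrometres apart:
d = 10e-6
theta_fringe = math.degrees(math.asin(1 * wavelength / d))   # ~3.15 degrees

# Airy disk: sin(theta) = 1.22*lambda/D gives the angular radius of the
# central bright lobe for a circular aperture of diameter D = 100 mm:
D = 0.1
theta_airy = math.asin(1.22 * wavelength / D)                # ~6.71e-6 rad

print(theta_fringe, theta_airy)
```

The microradian-scale Airy radius for a 100 mm aperture illustrates why larger telescopes resolve finer detail.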
If the angular separation of the two points is significantly less than the Airy disk angular radius, then the two points cannot be resolved in the image, but if their angular separation is much greater than this, distinct images of the two points are formed and they can therefore be resolved. Rayleigh defined the somewhat arbitrary " Rayleigh criterion " that two points whose angular separation is equal to the Airy disk radius (measured to first null, that is, to the first place where no light is seen) can be considered to be resolved. It can be seen that the greater the diameter of the lens or its aperture, the finer the resolution. [ 70 ] Interferometry , with its ability to mimic extremely large baseline apertures, allows for the greatest angular resolution possible. [ 56 ] For astronomical imaging, the atmosphere prevents optimal resolution from being achieved in the visible spectrum due to the atmospheric scattering and dispersion which cause stars to twinkle . Astronomers refer to this effect as the quality of astronomical seeing . Techniques known as adaptive optics have been used to eliminate the atmospheric disruption of images and achieve results that approach the diffraction limit. [ 71 ] Refractive processes take place in the physical optics limit, where the wavelength of light is similar to other distances, as a kind of scattering. The simplest type of scattering is Thomson scattering which occurs when electromagnetic waves are deflected by single particles. In the limit of Thomson scattering, in which the wavelike nature of light is evident, light is dispersed independent of the frequency, in contrast to Compton scattering which is frequency-dependent and strictly a quantum mechanical process, involving the nature of light as particles. 
In a statistical sense, elastic scattering of light by numerous particles much smaller than the wavelength of the light is a process known as Rayleigh scattering , while the similar process for scattering by particles of a size comparable to or larger than the wavelength is known as Mie scattering , with the Tyndall effect being a commonly observed result. A small proportion of light scattering from atoms or molecules may undergo Raman scattering , wherein the frequency changes due to excitation of the atoms and molecules. Brillouin scattering occurs when the frequency of light changes due to local changes with time and movements of a dense material. [ 72 ] Dispersion occurs when different frequencies of light have different phase velocities , due either to material properties ( material dispersion ) or to the geometry of an optical waveguide ( waveguide dispersion ). The most familiar form of dispersion is a decrease in index of refraction with increasing wavelength, which is seen in most transparent materials. This is called "normal dispersion". It occurs in all dielectric materials , in wavelength ranges where the material does not absorb light. [ 73 ] In wavelength ranges where a medium has significant absorption, the index of refraction can increase with wavelength. This is called "anomalous dispersion". [ 73 ] The separation of colours by a prism is an example of normal dispersion. At the surfaces of the prism, Snell's law predicts that light incident at an angle θ to the normal will be refracted at an angle arcsin(sin(θ)/ n ). Thus, blue light, with its higher refractive index, is bent more strongly than red light, resulting in the well-known rainbow pattern. [ 41 ] Material dispersion is often characterised by the Abbe number , which gives a simple measure of dispersion based on the index of refraction at three specific wavelengths. Waveguide dispersion is dependent on the propagation constant . 
[ 74 ] Both kinds of dispersion cause changes in the group characteristics of the wave, the features of the wave packet that change with the same frequency as the amplitude of the electromagnetic wave. "Group velocity dispersion" manifests as a spreading-out of the signal "envelope" of the radiation and can be quantified with a group dispersion delay parameter: $D={\frac {1}{v_{\mathrm {g} }^{2}}}{\frac {dv_{\mathrm {g} }}{d\lambda }}$, where v_g is the group velocity. [ 75 ] For a uniform medium, the group velocity is $v_{\mathrm {g} }=c\left(n-\lambda {\frac {dn}{d\lambda }}\right)^{-1}$, where n is the index of refraction and c is the speed of light in a vacuum. [ 76 ] This gives a simpler form for the dispersion delay parameter: $D=-{\frac {\lambda }{c}}\,{\frac {d^{2}n}{d\lambda ^{2}}}$. If D is less than zero, the medium is said to have positive dispersion or normal dispersion. If D is greater than zero, the medium has negative dispersion . If a light pulse is propagated through a normally dispersive medium, the result is the higher frequency components slow down more than the lower frequency components. The pulse therefore becomes positively chirped , or up-chirped , increasing in frequency with time. This causes the spectrum coming out of a prism to appear with red light the least refracted and blue/violet light the most refracted. Conversely, if a pulse travels through an anomalously (negatively) dispersive medium, high-frequency components travel faster than the lower ones, and the pulse becomes negatively chirped , or down-chirped , decreasing in frequency with time. [ 77 ] The result of group velocity dispersion, whether negative or positive, is ultimately temporal spreading of the pulse. 
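The dispersion delay parameter D = −(λ/c)·d²n/dλ² can be estimated numerically. The two-term Cauchy model n(λ) = A + B/λ² below is an illustrative stand-in for a real glass; the coefficients A and B are assumptions, not data from the article:

```python
# Estimate the group-dispersion delay parameter D = -(lambda/c) * d^2n/dlambda^2
# with a central finite difference on a toy refractive-index model.
c = 299792458.0                  # speed of light in vacuum, m/s

def n(lam):
    """Two-term Cauchy model; A and B are illustrative assumptions."""
    A, B = 1.5, 4.0e-15          # B in m^2
    return A + B / lam**2

def dispersion_parameter(lam, h=1e-9):
    d2n = (n(lam + h) - 2.0 * n(lam) + n(lam - h)) / h**2
    return -(lam / c) * d2n

D = dispersion_parameter(550e-9)
print(D)   # negative for this model: normal dispersion, as in the text
```

Since B > 0 makes d²n/dλ² positive, D comes out negative, matching the article's statement that D < 0 corresponds to normal dispersion.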
This makes dispersion management extremely important in optical communications systems based on optical fibres , since if dispersion is too high, a group of pulses representing information will each spread in time and merge, making it impossible to extract the signal. [ 75 ] Polarisation is a general property of waves that describes the orientation of their oscillations. For transverse waves such as many electromagnetic waves, it describes the orientation of the oscillations in the plane perpendicular to the wave's direction of travel. The oscillations may be oriented in a single direction ( linear polarisation ), or the oscillation direction may rotate as the wave travels ( circular or elliptical polarisation ). Circularly polarised waves can rotate rightward or leftward in the direction of travel, and which of those two rotations is present in a wave is called the wave's chirality . [ 78 ] The typical way to consider polarisation is to keep track of the orientation of the electric field vector as the electromagnetic wave propagates. The electric field vector of a plane wave may be arbitrarily divided into two perpendicular components labeled x and y (with z indicating the direction of travel). The shape traced out in the x-y plane by the electric field vector is a Lissajous figure that describes the polarisation state . [ 79 ] The following figures show some examples of the evolution of the electric field vector (blue), with time (the vertical axes), at a particular point in space, along with its x and y components (red/left and green/right), and the path traced by the vector in the plane (purple): The same evolution would occur when looking at the electric field at a particular time while evolving the point in space, along the direction opposite to propagation. In the leftmost figure above, the x and y components of the light wave are in phase. 
In this case, the ratio of their strengths is constant, so the direction of the electric vector (the vector sum of these two components) is constant. Since the tip of the vector traces out a single line in the plane, this special case is called linear polarisation. The direction of this line depends on the relative amplitudes of the two components. [ 80 ] In the middle figure, the two orthogonal components have the same amplitudes and are 90° out of phase. In this case, one component is zero when the other component is at maximum or minimum amplitude. There are two possible phase relationships that satisfy this requirement: the x component can be 90° ahead of the y component or it can be 90° behind the y component. In this special case, the electric vector traces out a circle in the plane, so this polarisation is called circular polarisation. The rotation direction in the circle depends on which of the two phase relationships exists and corresponds to right-hand circular polarisation and left-hand circular polarisation . [ 81 ] In all other cases, where the two components either do not have the same amplitudes and/or their phase difference is neither zero nor a multiple of 90°, the polarisation is called elliptical polarisation because the electric vector traces out an ellipse in the plane (the polarisation ellipse ). [ 82 ] This is shown in the above figure on the right. Detailed mathematics of polarisation is done using Jones calculus and is characterised by the Stokes parameters . [ 83 ] Media that have different indexes of refraction for different polarisation modes are called birefringent . [ 84 ] Well-known manifestations of this effect appear in optical wave plates /retarders (linear modes) and in Faraday rotation / optical rotation (circular modes). [ 85 ] If the path length in the birefringent medium is sufficient, plane waves will exit the material with a significantly different propagation direction, due to refraction.
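The linear and circular cases described above can be traced numerically. This short Python sketch samples the tip of the electric-field vector over one cycle for the in-phase and 90°-out-of-phase cases:

```python
import math

def trace(amp_x, amp_y, phase_deg, steps=360):
    """Sample the tip of the electric-field vector over one optical cycle:
    E_x = amp_x*cos(wt), E_y = amp_y*cos(wt - phase)."""
    d = math.radians(phase_deg)
    return [(amp_x * math.cos(t), amp_y * math.cos(t - d))
            for t in (2 * math.pi * k / steps for k in range(steps))]

# In-phase components -> linear polarisation: E_y / E_x is constant,
# so the tip traces a straight line through the origin.
linear = trace(1.0, 0.5, 0.0)

# Equal amplitudes, 90 deg out of phase -> circular polarisation:
# the magnitude |E| stays constant while the direction rotates.
circular = trace(1.0, 1.0, 90.0)

print(max(abs(math.hypot(x, y) - 1.0) for x, y in circular))  # effectively zero
```

Any other amplitude ratio or phase difference produces an ellipse, in line with the elliptical-polarisation case above.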
For example, this is the case with macroscopic crystals of calcite , which present the viewer with two offset, orthogonally polarised images of whatever is viewed through them. It was this effect that provided the first discovery of polarisation, by Erasmus Bartholinus in 1669. In addition, the phase shift, and thus the change in polarisation state, is usually frequency dependent, which, in combination with dichroism , often gives rise to bright colours and rainbow-like effects. In mineralogy , such properties, known as pleochroism , are frequently exploited for the purpose of identifying minerals using polarisation microscopes. Additionally, many plastics that are not normally birefringent will become so when subject to mechanical stress , a phenomenon which is the basis of photoelasticity . [ 86 ] Non-birefringent methods of rotating the linear polarisation of light beams include the use of prismatic polarisation rotators , which use total internal reflection in a prism set designed for efficient collinear transmission. [ 87 ] Media that reduce the amplitude of certain polarisation modes are called dichroic , with devices that block nearly all of the radiation in one mode known as polarising filters or simply " polarisers ". Malus' law, which is named after Étienne-Louis Malus , says that when a perfect polariser is placed in a linearly polarised beam of light, the intensity, I , of the light that passes through is given by I = I₀ cos²(θᵢ), where I₀ is the initial intensity, and θᵢ is the angle between the light's initial polarisation direction and the axis of the polariser. [ 88 ] A beam of unpolarised light can be thought of as containing a uniform mixture of linear polarisations at all possible angles. Since the average value of cos²θ is 1/2, the transmission coefficient becomes I/I₀ = 1/2.
In practice, some light is lost in the polariser and the actual transmission of unpolarised light will be somewhat lower than this, around 38% for Polaroid-type polarisers but considerably higher (>49.9%) for some birefringent prism types. [ 89 ] In addition to birefringence and dichroism in extended media, polarisation effects can also occur at the (reflective) interface between two materials of different refractive index. These effects are treated by the Fresnel equations . Part of the wave is transmitted and part is reflected, with the ratio depending on the angle of incidence and the angle of refraction. In this way, physical optics recovers Brewster's angle . [ 90 ] When light reflects from a thin film on a surface, interference between the reflections from the film's surfaces can produce polarisation in the reflected and transmitted light. Most sources of electromagnetic radiation contain a large number of atoms or molecules that emit light. The orientation of the electric fields produced by these emitters may not be correlated , in which case the light is said to be unpolarised . If there is partial correlation between the emitters, the light is partially polarised . If the polarisation is consistent across the spectrum of the source, partially polarised light can be described as a superposition of a completely unpolarised component, and a completely polarised one. One may then describe the light in terms of the degree of polarisation , and the parameters of the polarisation ellipse. [ 79 ] Light reflected by shiny transparent materials is partly or fully polarised, except when the light is normal (perpendicular) to the surface. It was this effect that allowed the mathematician Étienne-Louis Malus to make the measurements that allowed for his development of the first mathematical models for polarised light. Polarisation occurs when light is scattered in the atmosphere .
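Malus' law and the factor-of-two transmission for unpolarised light lend themselves to a quick numerical check. This is a sketch of an ideal polariser; the ~38% Polaroid figure mentioned earlier reflects absorption losses that this model ignores.

```python
import math, random

def malus(i0, theta_deg):
    """Transmitted intensity through an ideal polariser: I = I0 * cos^2(theta)."""
    return i0 * math.cos(math.radians(theta_deg)) ** 2

print(malus(1.0, 0.0))    # aligned with the polariser axis: all light passes
print(malus(1.0, 90.0))   # crossed polarisers: essentially nothing passes

# Unpolarised light modelled as a uniform mixture of polarisation angles:
# the average of cos^2 over all angles is 1/2.
random.seed(0)
avg = sum(malus(1.0, random.uniform(0.0, 360.0)) for _ in range(100000)) / 100000
print(avg)  # close to 0.5
```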
The scattered light produces the brightness and colour in clear skies . This partial polarisation of scattered light can be taken advantage of using polarising filters to darken the sky in photographs . Optical polarisation is principally of importance in chemistry due to circular dichroism and optical rotation ( circular birefringence ) exhibited by optically active ( chiral ) molecules . [ 91 ] Modern optics encompasses the areas of optical science and engineering that became popular in the 20th century. These areas of optical science typically relate to the electromagnetic or quantum properties of light but do include other topics. A major subfield of modern optics, quantum optics , deals with specifically quantum mechanical properties of light. Quantum optics is not just theoretical; some modern devices, such as lasers, have principles of operation that depend on quantum mechanics. Light detectors, such as photomultipliers and channeltrons , respond to individual photons. Electronic image sensors , such as CCDs , exhibit shot noise corresponding to the statistics of individual photon events. Light-emitting diodes and photovoltaic cells , too, cannot be understood without quantum mechanics. In the study of these devices, quantum optics often overlaps with quantum electronics . [ 92 ] Specialty areas of optics research include the study of how light interacts with specific materials as in crystal optics and metamaterials . Other research focuses on the phenomenology of electromagnetic waves as in singular optics , non-imaging optics , non-linear optics , statistical optics, and radiometry . Additionally, computer engineers have taken an interest in integrated optics , machine vision , and photonic computing as possible components of the "next generation" of computers. [ 93 ] Today, the pure science of optics is called optical science or optical physics to distinguish it from applied optical sciences, which are referred to as optical engineering . 
Prominent subfields of optical engineering include illumination engineering , photonics , and optoelectronics with practical applications like lens design , fabrication and testing of optical components , and image processing . Some of these fields overlap, with nebulous boundaries between the subjects' terms that mean slightly different things in different parts of the world and in different areas of industry. A professional community of researchers in nonlinear optics has developed in the last several decades due to advances in laser technology. [ 94 ] A laser is a device that emits light, a kind of electromagnetic radiation, through a process called stimulated emission . The term laser is an acronym for ' Light Amplification by Stimulated Emission of Radiation ' . [ 95 ] Laser light is usually spatially coherent , which means that the light either is emitted in a narrow, low-divergence beam , or can be converted into one with the help of optical components such as lenses. Because the microwave equivalent of the laser, the maser , was developed first, devices that emit microwave and radio frequencies are usually called masers . [ 96 ] The first working laser was demonstrated on 16 May 1960 by Theodore Maiman at Hughes Research Laboratories . [ 98 ] When first invented, they were called "a solution looking for a problem". [ 99 ] Since then, lasers have become a multibillion-dollar industry, finding utility in thousands of highly varied applications. The first application of lasers visible in the daily lives of the general population was the supermarket barcode scanner, introduced in 1974. [ 100 ] The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982. [ 101 ] These optical storage devices use a semiconductor laser less than a millimetre wide to scan the surface of the disc for data retrieval. 
Fibre-optic communication relies on lasers to transmit large amounts of information at the speed of light. Other common applications of lasers include laser printers and laser pointers . Lasers are used in medicine in areas such as bloodless surgery , laser eye surgery , and laser capture microdissection and in military applications such as missile defence systems , electro-optical countermeasures (EOCM) , and lidar . Lasers are also used in holograms , bubblegrams , laser light shows , and laser hair removal . [ 102 ] The Kapitsa–Dirac effect causes beams of particles to diffract as the result of meeting a standing wave of light. Light can be used to position matter using various phenomena (see optical tweezers ). Optics is part of everyday life. The ubiquity of visual systems in biology indicates the central role optics plays as the science of one of the five senses . Many people benefit from eyeglasses or contact lenses , and optics are integral to the functioning of many consumer goods including cameras . Rainbows and mirages are examples of optical phenomena. Optical communication provides the backbone for both the Internet and modern telephony . The human eye functions by focusing light onto a layer of photoreceptor cells called the retina, which forms the inner lining of the back of the eye. The focusing is accomplished by a series of transparent media. Light entering the eye passes first through the cornea, which provides much of the eye's optical power. The light then continues through the fluid just behind the cornea—the anterior chamber , then passes through the pupil . The light then passes through the lens , which focuses the light further and allows adjustment of focus. The light then passes through the main body of fluid in the eye—the vitreous humour , and reaches the retina. The cells in the retina line the back of the eye, except for where the optic nerve exits; this results in a blind spot . 
There are two types of photoreceptor cells, rods and cones, which are sensitive to different aspects of light. [ 103 ] Rod cells are sensitive to the intensity of light over a wide frequency range, thus are responsible for black-and-white vision . Rod cells are not present on the fovea, the area of the retina responsible for central vision, and are not as responsive as cone cells to spatial and temporal changes in light. There are, however, twenty times more rod cells than cone cells in the retina because the rod cells are present across a wider area. Because of their wider distribution, rods are responsible for peripheral vision . [ 104 ] In contrast, cone cells are less sensitive to the overall intensity of light, but come in three varieties that are sensitive to different frequency-ranges and thus are used in the perception of colour and photopic vision . Cone cells are highly concentrated in the fovea and have a high visual acuity meaning that they are better at spatial resolution than rod cells. Since cone cells are not as sensitive to dim light as rod cells, most night vision is limited to rod cells. Likewise, since cone cells are in the fovea, central vision (including the vision needed to do most reading, fine detail work such as sewing, or careful examination of objects) is done by cone cells. [ 104 ] Ciliary muscles around the lens allow the eye's focus to be adjusted. This process is known as accommodation . The near point and far point define the nearest and farthest distances from the eye at which an object can be brought into sharp focus. For a person with normal vision, the far point is located at infinity. The near point's location depends on how much the muscles can increase the curvature of the lens, and how inflexible the lens has become with age. Optometrists , ophthalmologists , and opticians usually consider an appropriate near point to be closer than normal reading distance—approximately 25 cm. 
[ 103 ] Defects in vision can be explained using optical principles. As people age, the lens becomes less flexible and the near point recedes from the eye, a condition known as presbyopia . Similarly, people suffering from hyperopia cannot decrease the focal length of their lens enough to allow for nearby objects to be imaged on their retina. Conversely, people who cannot increase the focal length of their lens enough to allow for distant objects to be imaged on the retina suffer from myopia and have a far point that is considerably closer than infinity. A condition known as astigmatism results when the cornea is not spherical but instead is more curved in one direction. This causes horizontally extended objects to be focused on different parts of the retina than vertically extended objects, and results in distorted images. [ 103 ] All of these conditions can be corrected using corrective lenses . For presbyopia and hyperopia, a converging lens provides the extra curvature necessary to bring the near point closer to the eye while for myopia a diverging lens provides the curvature necessary to send the far point to infinity. Astigmatism is corrected with a cylindrical surface lens that curves more strongly in one direction than in another, compensating for the non-uniformity of the cornea. [ 105 ] The optical power of corrective lenses is measured in diopters , a value equal to the reciprocal of the focal length measured in metres; with a positive focal length corresponding to a converging lens and a negative focal length corresponding to a diverging lens. For lenses that correct for astigmatism as well, three numbers are given: one for the spherical power, one for the cylindrical power, and one for the angle of orientation of the astigmatism. [ 105 ] Optical illusions (also called visual illusions) are characterized by visually perceived images that differ from objective reality. 
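The corrective-lens arithmetic above can be made concrete. In this hedged sketch (thin-lens approximation, ignoring the lens-to-eye distance; the example far and near points are assumed, not from the text), a myopic far point of 0.5 m calls for a −2 diopter diverging lens, and a hyperopic near point of 1 m needs +3 diopters to read at the conventional 25 cm:

```python
def power_diopters(focal_length_m):
    """Optical power in diopters = 1 / focal length (metres)."""
    return 1.0 / focal_length_m

def myopia_correction(far_point_m):
    """Diverging lens that images objects at infinity at the eye's far
    point: focal length = -far_point, so the power is negative."""
    return -1.0 / far_point_m

def hyperopia_correction(near_point_m, desired_near_m=0.25):
    """Converging lens that images an object at the desired reading
    distance onto the eye's actual near point (thin-lens formula)."""
    return 1.0 / desired_near_m - 1.0 / near_point_m

print(myopia_correction(0.5))     # far point 50 cm -> -2.0 diopters
print(hyperopia_correction(1.0))  # near point 1 m  -> +3.0 diopters
```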
The information gathered by the eye is processed in the brain to give a percept that differs from the object being imaged. Optical illusions can be the result of a variety of phenomena including physical effects that create images that are different from the objects that make them, the physiological effects on the eyes and brain of excessive stimulation (e.g. brightness, tilt, colour, movement), and cognitive illusions where the eye and brain make unconscious inferences . [ 106 ] Cognitive illusions include some which result from the unconscious misapplication of certain optical principles. For example, the Ames room , Hering , Müller-Lyer , Orbison , Ponzo , Sander , and Wundt illusions all rely on the suggestion of the appearance of distance by using converging and diverging lines, in the same way that parallel light rays (or indeed any set of parallel lines) appear to converge at a vanishing point at infinity in two-dimensionally rendered images with artistic perspective. [ 107 ] This suggestion is also responsible for the famous moon illusion where the moon, despite having essentially the same angular size, appears much larger near the horizon than it does at zenith . [ 108 ] This illusion so confounded Ptolemy that he incorrectly attributed it to atmospheric refraction when he described it in his treatise, Optics . [ 8 ] Another type of optical illusion exploits broken patterns to trick the mind into perceiving symmetries or asymmetries that are not present. Examples include the café wall , Ehrenstein , Fraser spiral , Poggendorff , and Zöllner illusions . Related, but not strictly illusions, are patterns that occur due to the superimposition of periodic structures. For example, transparent tissues with a grid structure produce shapes known as moiré patterns , while the superimposition of periodic transparent patterns comprising parallel opaque lines or curves produces line moiré patterns. 
[ 109 ] Single lenses have a variety of applications including photographic lenses , corrective lenses, and magnifying glasses while single mirrors are used in parabolic reflectors and rear-view mirrors . Combining a number of mirrors, prisms, and lenses produces compound optical instruments which have practical uses. For example, a periscope is simply two plane mirrors aligned to allow for viewing around obstructions. The most famous compound optical instruments in science are the microscope and the telescope which were both invented by the Dutch in the late 16th century. [ 110 ] Microscopes were first developed with just two lenses: an objective lens and an eyepiece . The objective lens is essentially a magnifying glass and was designed with a very small focal length while the eyepiece generally has a longer focal length. This has the effect of producing magnified images of close objects. Generally, an additional source of illumination is used since magnified images are dimmer due to the conservation of energy and the spreading of light rays over a larger surface area. Modern microscopes, known as compound microscopes have many lenses in them (typically four) to optimize the functionality and enhance image stability. [ 111 ] A slightly different variety of microscope, the comparison microscope , looks at side-by-side images to produce a stereoscopic binocular view that appears three dimensional when used by humans. [ 112 ] The first telescopes, called refracting telescopes, were also developed with a single objective and eyepiece lens. In contrast to the microscope, the objective lens of the telescope was designed with a large focal length to avoid optical aberrations. The objective focuses an image of a distant object at its focal point which is adjusted to be at the focal point of an eyepiece of a much smaller focal length. 
The main goal of a telescope is not necessarily magnification, but rather the collection of light, which is determined by the physical size of the objective lens. Thus, telescopes are normally indicated by the diameters of their objectives rather than by the magnification, which can be changed by switching eyepieces. Because the magnification of a telescope is equal to the focal length of the objective divided by the focal length of the eyepiece, smaller focal-length eyepieces cause greater magnification. [ 113 ] Since crafting large lenses is much more difficult than crafting large mirrors, most modern telescopes are reflecting telescopes , that is, telescopes that use a primary mirror rather than an objective lens. The same general optical considerations apply to reflecting telescopes that applied to refracting telescopes, namely, the larger the primary mirror, the more light collected, and the magnification is still equal to the focal length of the primary mirror divided by the focal length of the eyepiece. Professional telescopes generally do not have eyepieces and instead place an instrument (often a charge-coupled device) at the focal point. [ 114 ] The optics of photography involves both lenses and the medium in which the electromagnetic radiation is recorded, whether it be a plate , film , or charge-coupled device. Photographers must consider the reciprocity of the camera and the shot, which is summarized by the relation between aperture and exposure time. In other words, the smaller the aperture (giving greater depth of focus), the less light coming in, so the length of time has to be increased (leading to possible blurriness if motion occurs). An example of the use of the law of reciprocity is the Sunny 16 rule, which gives a rough estimate for the settings needed to estimate the proper exposure in daylight.
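The magnification and light-gathering relations above can be sketched as follows. The 7 mm dark-adapted pupil and the 1000 mm/200 mm telescope are assumed illustrative values, not figures from the text:

```python
def telescope_magnification(f_objective_mm, f_eyepiece_mm):
    """Angular magnification = objective focal length / eyepiece focal length."""
    return f_objective_mm / f_eyepiece_mm

def light_grasp_ratio(diameter_mm, pupil_mm=7.0):
    """Light-gathering power relative to a dark-adapted eye pupil
    (assumed 7 mm); it scales with aperture area, i.e. diameter squared."""
    return (diameter_mm / pupil_mm) ** 2

# A hypothetical telescope: 1000 mm focal length, 200 mm aperture.
print(telescope_magnification(1000, 25))  # 40x with a 25 mm eyepiece
print(telescope_magnification(1000, 10))  # 100x with a 10 mm eyepiece
print(light_grasp_ratio(200))             # hundreds of times the naked eye
```

Swapping eyepieces changes the magnification, but the aperture, and hence the light collected, stays fixed, which is why telescopes are specified by objective diameter.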
[ 116 ] A camera's aperture is measured by a unitless number called the f-number or f-stop, f / #, often notated as N , and given by N = f / D , where f is the focal length, and D is the diameter of the entrance pupil. By convention, " f / #" is treated as a single symbol, and specific values of f / # are written by replacing the number sign with the value. The two ways to increase the f-stop are to either decrease the diameter of the entrance pupil or change to a longer focal length (in the case of a zoom lens , this can be done by simply adjusting the lens). Higher f-numbers also have a larger depth of field due to the lens approaching the limit of a pinhole camera, which is able to focus all images perfectly, regardless of distance, but requires very long exposure times. [ 117 ] The field of view that the lens will provide changes with the focal length of the lens. There are three basic classifications based on the relationship of the diagonal size of the film or sensor size of the camera to the focal length of the lens: [ 118 ] Modern zoom lenses may have some or all of these attributes. The absolute value for the exposure time required depends on how sensitive to light the medium being used is (measured by the film speed , or, for digital media, by the quantum efficiency ). [ 123 ] Early photography used media that had very low light sensitivity, and so exposure times had to be long even for very bright shots. As technology has improved, so has the sensitivity through film cameras and digital cameras. [ 124 ] Other results from physical and geometrical optics apply to camera optics. For example, the maximum resolution capability of a particular camera set-up is determined by the diffraction limit associated with the pupil size and given, roughly, by the Rayleigh criterion. [ 125 ] The unique optical properties of the atmosphere cause a wide range of spectacular optical phenomena.
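Since N = f/D and the admitted light scales with the pupil area (that is, with D squared), consecutive "full stops" differ by a factor of √2 in f-number. A minimal sketch (note that marketed stop values such as 5.6 and 11 are rounded forms of √2 powers):

```python
import math

def f_number(focal_length_mm, pupil_diameter_mm):
    """N = f / D: focal length divided by entrance-pupil diameter."""
    return focal_length_mm / pupil_diameter_mm

# Each full stop halves the light: area ~ D^2, so f-numbers step by sqrt(2).
stops = [round(math.sqrt(2) ** k, 1) for k in range(9)]
print(stops)  # [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0]

print(f_number(50, 25))  # a 50 mm lens with a 25 mm pupil -> f/2
```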
The blue colour of the sky is a direct result of Rayleigh scattering, which redirects higher-frequency (blue) sunlight back into the field of view of the observer. Because blue light is scattered more easily than red light, the sun takes on a reddish hue when it is observed through a thick atmosphere, as during a sunrise or sunset . Additional particulate matter in the sky can scatter different colours at different angles, creating colourful glowing skies at dusk and dawn. Scattering off of ice crystals and other particles in the atmosphere is responsible for halos , afterglows , coronas , rays of sunlight , and sun dogs . The variation in these kinds of phenomena is due to different particle sizes and geometries. [ 126 ] Mirages are optical phenomena in which light rays are bent due to thermal variations in the refraction index of air, producing displaced or heavily distorted images of distant objects. Other dramatic optical phenomena associated with this include the Novaya Zemlya effect , where the sun appears to rise earlier than predicted with a distorted shape. A spectacular form of refraction occurs with a temperature inversion called the Fata Morgana , where objects on the horizon or even beyond the horizon, such as islands, cliffs, ships or icebergs, appear elongated and elevated, like "fairy tale castles". [ 127 ] Rainbows are the result of a combination of internal reflection and dispersive refraction of light in raindrops. A single reflection off the backs of an array of raindrops produces a rainbow with an angular size on the sky that ranges from 40° to 42°, with red on the outside. Double rainbows are produced by two internal reflections with angular size of 50.5° to 54°, with violet on the outside. Because rainbows are seen with the sun 180° away from the centre of the rainbow, rainbows are more prominent the closer the sun is to the horizon. [ 128 ]
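The 40° to 42° primary-bow geometry quoted above follows from minimising the deviation of a ray that refracts into a spherical droplet, reflects once internally, and refracts out. A numerical sketch (the water indices 1.331 and 1.343 for red and violet are assumed illustrative values):

```python
import math

def primary_rainbow_angle(n):
    """Rainbow angle (degrees from the antisolar point) for one internal
    reflection in a spherical droplet: minimise the total deviation
    D(i) = 180 + 2i - 4r over incidence angles i, with sin(i) = n sin(r)."""
    best = 360.0
    for k in range(1, 9000):                 # scan i from 0.01 to 89.99 degrees
        i = math.radians(k / 100.0)
        r = math.asin(math.sin(i) / n)       # Snell's law inside the droplet
        deviation = math.degrees(math.pi + 2 * i - 4 * r)
        best = min(best, deviation)
    return 180.0 - best

# Approximate refractive indices of water (assumed values):
print(primary_rainbow_angle(1.331))  # red: ~42.4 degrees
print(primary_rainbow_angle(1.343))  # violet: ~40.6 degrees
```

Red emerging at the larger angle is what puts red on the outside of the primary bow.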
https://en.wikipedia.org/wiki/Optics
OptimFROG is a proprietary , lossless audio codec developed by Florin Ghido. OptimFROG is optimized for high compression (small file sizes) at the expense of encoding and decoding speed, and consistently measures among the highest-compressing lossless codecs. [ 3 ] [ 4 ] OptimFROG comes with three compressors: a lossless codec for integer LPCM format in WAV files, one for IEEE 754 floating-point WAV files, and a third codec called DualStream. OptimFROG DualStream is lossy , but fills the gap between perceptual coding and lossless coding by producing a correction file. In combination with the main, lossy-encoded file, the correction file provides for lossless decoding. The lossless decoding is computationally intense and cannot be done in real time on contemporary hardware. [ 5 ] The rival audio codecs WavPack , MPEG-4 SLS , and DTS-HD Master Audio also offer correction file generation. The OptimFROG file formats use APEv2 tags to store the metadata . ID3 is also possible. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/OptimFROG
In the design of experiments , optimal experimental designs (or optimum designs [ 2 ] ) are a class of experimental designs that are optimal with respect to some statistical criterion . The creation of this field of statistics has been credited to Danish statistician Kirstine Smith . [ 3 ] [ 4 ] In the design of experiments for estimating statistical models , optimal designs allow parameters to be estimated without bias and with minimum variance . A non-optimal design requires a greater number of experimental runs to estimate the parameters with the same precision as an optimal design. In practical terms, optimal experiments can reduce the costs of experimentation. The optimality of a design depends on the statistical model and is assessed with respect to a statistical criterion, which is related to the variance-matrix of the estimator. Specifying an appropriate model and specifying a suitable criterion function both require understanding of statistical theory and practical knowledge with designing experiments . Optimal designs offer three advantages over sub-optimal experimental designs : [ 5 ] Experimental designs are evaluated using statistical criteria. [ 6 ] It is known that the least squares estimator minimizes the variance of mean-unbiased estimators (under the conditions of the Gauss–Markov theorem ). In the estimation theory for statistical models with one real parameter , the reciprocal of the variance of an ("efficient") estimator is called the " Fisher information " for that estimator. [ 7 ] Because of this reciprocity, minimizing the variance corresponds to maximizing the information . When the statistical model has several parameters , however, the mean of the parameter-estimator is a vector and its variance is a matrix . The inverse matrix of the variance-matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated.
Using statistical theory , statisticians compress the information-matrix using real-valued summary statistics ; being real-valued functions, these "information criteria" can be maximized. [ 8 ] The traditional optimality-criteria are invariants of the information matrix; algebraically, the traditional optimality-criteria are functionals of the eigenvalues of the information matrix. Other optimality-criteria are concerned with the variance of predictions : In many applications, the statistician is most concerned with a "parameter of interest" rather than with "nuisance parameters" . More generally, statisticians consider linear combinations of parameters, which are estimated via linear combinations of treatment-means in the design of experiments and in the analysis of variance ; such linear combinations are called contrasts . Statisticians can use appropriate optimality-criteria for such parameters of interest and for contrasts . [ 12 ] Catalogs of optimal designs occur in books and in software libraries. In addition, major statistical systems like SAS and R have procedures for optimizing a design according to a user's specification. The experimenter must specify a model for the design and an optimality-criterion before the method can compute an optimal design. [ 13 ] Some advanced topics in optimal design require more statistical theory and practical knowledge in designing experiments. Since the optimality criterion of most optimal designs is based on some function of the information matrix, the 'optimality' of a given design is model dependent : While an optimal design is best for that model , its performance may deteriorate on other models . On other models , an optimal design can be either better or worse than a non-optimal design. [ 14 ] Therefore, it is important to benchmark the performance of designs under alternative models . 
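As an illustration of maximizing the information, the following toy sketch (an example constructed here, not from the text) compares two six-run designs for a quadratic polynomial on [−1, 1] using the D-criterion, the determinant of the information matrix X′X; a larger determinant means a smaller generalized variance of the parameter estimates, and the classical optimal support {−1, 0, +1} wins:

```python
# Toy comparison of two candidate designs for the quadratic model
# y = b0 + b1*x + b2*x^2 on [-1, 1], using the D-criterion det(X'X).
# This is an illustrative sketch, not a design-construction algorithm.

def model_row(x):
    """Regressor row for the quadratic model: (1, x, x^2)."""
    return [1.0, x, x * x]

def information_matrix(xs):
    """X'X for the design whose runs are at the points xs."""
    rows = [model_row(x) for x in xs]
    return [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Six runs spread uniformly vs. six runs on the D-optimal support
# points {-1, 0, +1} for a quadratic (a classical result):
uniform = [-1.0, -0.6, -0.2, 0.2, 0.6, 1.0]
optimal = [-1.0, -1.0, 0.0, 0.0, 1.0, 1.0]

print(det3(information_matrix(uniform)))
print(det3(information_matrix(optimal)))  # larger determinant: the better design
```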
[ 15 ] The choice of an appropriate optimality criterion requires some thought, and it is useful to benchmark the performance of designs with respect to several optimality criteria. Cornell writes that since the [traditional optimality] criteria . . . are variance-minimizing criteria, . . . a design that is optimal for a given model using one of the . . . criteria is usually near-optimal for the same model with respect to the other criteria. Indeed, there are several classes of designs for which all the traditional optimality-criteria agree, according to the theory of "universal optimality" of Kiefer . [ 17 ] The experience of practitioners like Cornell and the "universal optimality" theory of Kiefer suggest that robustness with respect to changes in the optimality-criterion is much greater than is robustness with respect to changes in the model . High-quality statistical software provides a combination of libraries of optimal designs or iterative methods for constructing approximately optimal designs, depending on the model specified and the optimality criterion. Users may use a standard optimality-criterion or may program a custom-made criterion. All of the traditional optimality-criteria are convex (or concave) functions , and therefore optimal designs are amenable to the mathematical theory of convex analysis and their computation can use specialized methods of convex minimization . [ 18 ] The practitioner need not select exactly one traditional optimality-criterion, but can specify a custom criterion. In particular, the practitioner can specify a convex criterion using the maxima of convex optimality-criteria and nonnegative combinations of optimality criteria (since these operations preserve convex functions ). For convex optimality criteria, the Kiefer–Wolfowitz equivalence theorem allows the practitioner to verify that a given design is globally optimal.
[ 19 ] The Kiefer–Wolfowitz equivalence theorem is related to the Legendre–Fenchel conjugacy for convex functions. [ 20 ] If an optimality-criterion lacks convexity, then finding a global optimum and verifying its optimality often are difficult. When scientists wish to test several theories, a statistician can design an experiment that allows optimal tests between specified models. Such "discrimination experiments" are especially important in the biostatistics supporting pharmacokinetics and pharmacodynamics, following the work of Cox and Atkinson. [ 21 ] When practitioners need to consider multiple models, they can specify a probability-measure on the models and then select a design maximizing the expected value of such an experiment. Such probability-based optimal-designs are called optimal Bayesian designs. Such Bayesian designs are used especially for generalized linear models (where the response follows an exponential-family distribution). [ 22 ] The use of a Bayesian design does not force statisticians to use Bayesian methods to analyze the data, however. Indeed, the "Bayesian" label for probability-based experimental-designs is disliked by some researchers. [ 23 ] Alternative terminology for "Bayesian" optimality includes "on-average" optimality or "population" optimality. Scientific experimentation is an iterative process, and statisticians have developed several approaches to the optimal design of sequential experiments. Sequential analysis was pioneered by Abraham Wald. [ 24 ] In 1972, Herman Chernoff wrote an overview of optimal sequential designs, [ 25 ] while adaptive designs were surveyed later by S. Zacks. [ 26 ] Of course, much work on the optimal design of experiments is related to the theory of optimal decisions, especially the statistical decision theory of Abraham Wald.
[ 27 ] Optimal designs for response-surface models are discussed in the textbook by Atkinson, Donev and Tobias, in the survey of Gaffke and Heiligers, and in the mathematical text of Pukelsheim. The blocking of optimal designs is discussed in the textbook of Atkinson, Donev and Tobias and also in the monograph by Goos. The earliest optimal designs were developed to estimate the parameters of regression models with continuous variables, for example, by J. D. Gergonne in 1815 (Stigler). In English, two early contributions were made by Charles S. Peirce and Kirstine Smith. Pioneering designs for multivariate response-surfaces were proposed by George E. P. Box. However, Box's designs have few optimality properties. Indeed, the Box–Behnken design requires excessive experimental runs when the number of variables exceeds three. [ 28 ] Box's "central-composite" designs require more experimental runs than do the optimal designs of Kôno. [ 29 ] The optimization of sequential experimentation is studied also in stochastic programming and in systems and control. Popular methods include stochastic approximation and other methods of stochastic optimization. Much of this research has been associated with the subdiscipline of system identification. [ 30 ] In computational optimal control, D. Judin & A. Nemirovskii and Boris Polyak have described methods that are more efficient than the (Armijo-style) step-size rules introduced by G. E. P. Box in response-surface methodology. [ 31 ] Adaptive designs are used in clinical trials, and optimal adaptive designs are surveyed in the Handbook of Experimental Designs chapter by Shelemyahu Zacks. There are several methods of finding an optimal design, given an a priori restriction on the number of experimental runs or replications. Some of these methods are discussed by Atkinson, Donev and Tobias and in the paper by Hardin and Sloane. Of course, fixing the number of experimental runs a priori would be impractical.
Prudent statisticians examine the other optimal designs, whose numbers of experimental runs differ. In the mathematical theory of optimal experiments, an optimal design can be a probability measure that is supported on an infinite set of observation-locations. Such optimal probability-measure designs solve a mathematical problem that neglects the cost of observations and experimental runs. Nonetheless, such optimal probability-measure designs can be discretized to furnish approximately optimal designs. [ 32 ] In some cases, a finite set of observation-locations suffices to support an optimal design. Such a result was proved by Kôno and Kiefer in their works on response-surface designs for quadratic models. The Kôno–Kiefer analysis explains why optimal designs for response-surfaces can have discrete supports, which are very similar to those of the less efficient designs that have been traditional in response surface methodology. [ 33 ] In 1815, an article on optimal designs for polynomial regression was published by Joseph Diaz Gergonne, according to Stigler. Charles S. Peirce proposed an economic theory of scientific experimentation in 1876, which sought to maximize the precision of the estimates. Peirce's optimal allocation immediately improved the accuracy of gravitational experiments and was used for decades by Peirce and his colleagues. In his 1882 published lecture at Johns Hopkins University, Peirce introduced experimental design with these words: Logic will not undertake to inform you what kind of experiments you ought to make in order best to determine the acceleration of gravity, or the value of the Ohm; but it will tell you how to proceed to form a plan of experimentation. [....] Unfortunately practice generally precedes theory, and it is the usual fate of mankind to get things done in some boggling way first, and find out afterward how they could have been done much more easily and perfectly.
[ 34 ] Kirstine Smith proposed optimal designs for polynomial models in 1918. (Kirstine Smith had been a student of the Danish statistician Thorvald N. Thiele and was working with Karl Pearson in London.) The textbook by Atkinson, Donev and Tobias has been used for short courses for industrial practitioners as well as university courses. Optimal block designs are discussed by Bailey and by Bapat. The first chapter of Bapat's book reviews the linear algebra used by Bailey (or the advanced books below). Bailey's exercises and discussion of randomization both emphasize statistical concepts (rather than algebraic computations). Optimal block designs are discussed in the advanced monograph by Shah and Sinha and in the survey-articles by Cheng and by Majumdar.
https://en.wikipedia.org/wiki/Optimal_experimental_design
Optimal network design is a problem in combinatorial optimization. It is an abstract representation of the problem faced by states and municipalities when they plan their road network. Given a set of locations to connect by roads, the objective is to have a short traveling distance between every two points. More specifically, the goal is to minimize the sum of shortest distances, where the sum is taken over all pairs of points. For each two locations, there is a number representing the cost of building a direct road between them. A decision must be made about which roads to build with a fixed budget. The input to the optimal network design problem is a weighted graph G = (V,E), where the weight of each edge (u,v) in the graph represents the cost of building a road from u to v, and a budget B. A feasible network is a subset S of E such that the sum of w(u,v) for all (u,v) in S is at most B, and there is a path between every two nodes u and v (that is, S contains a spanning tree of G). For each feasible network S, the total cost of S is the sum, over all pairs of nodes u and v, of the length of the shortest path from u to v that uses only edges in S. The objective is to find a feasible network with a minimum total cost. Johnson, Lenstra and Rinnooy Kan proved that the problem is NP-hard, even for the simple case where all edge weights are equal and the budget restricts the choice to spanning trees. [ 1 ] Dionne and Florian studied branch and bound algorithms, and showed that they work in reasonable time on medium-sized inputs, but not on large inputs. Therefore, they presented heuristic approximation algorithms. [ 2 ] Anshelevich, Dasgupta, Tardos and Wexler study a game of network design, where every agent has a set of terminals and wants to build a network in which their terminals are connected, while paying as little as possible. They study the computational problem of checking whether a Nash equilibrium exists.
For some special cases, they give a polynomial time algorithm that finds a (1+ε)-approximate Nash equilibrium. [ 3 ] Boffey and Hinxman present a heuristic method and show that it yields high-quality results. They also study solution methods based on branch-and-bound, and evaluate the effects of making various approximations when calculating lower bounds. They also generalize the problem to networks with link construction costs not proportional to length, and with trip demands that are not all equal. [ 4 ]
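The problem as defined above can be sketched with a brute-force solver (all helper names are hypothetical). Exhaustive search over edge subsets is only feasible for very small instances, consistent with the NP-hardness result:

```python
from itertools import combinations

def total_cost(nodes, edges):
    """Sum of shortest-path lengths over all node pairs (Floyd-Warshall)."""
    INF = float("inf")
    d = {u: {v: (0 if u == v else INF) for v in nodes} for u in nodes}
    for (u, v), w in edges.items():
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in nodes:
        for i in nodes:
            for j in nodes:
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return sum(d[u][v] for u in nodes for v in nodes if u < v)

def optimal_network(nodes, edges, budget):
    """Exhaustive search over edge subsets within the budget."""
    best, best_cost = None, float("inf")
    for r in range(len(nodes) - 1, len(edges) + 1):
        for subset in combinations(edges, r):
            if sum(edges[e] for e in subset) > budget:
                continue
            cost = total_cost(nodes, {e: edges[e] for e in subset})
            if cost < best_cost:   # infinite cost means S is disconnected
                best, best_cost = set(subset), cost
    return best, best_cost

# Example: 4 locations on a cycle plus a diagonal; budget 3 forces a spanning tree.
nodes = [1, 2, 3, 4]
edges = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (1, 4): 1, (1, 3): 2}
print(optimal_network(nodes, edges, 3))
```

With budget 3, only the weight-1 spanning trees are affordable, each giving a total cost of 10 over the six node pairs.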
https://en.wikipedia.org/wiki/Optimal_network_design
In mathematics and computer science, optimal radix choice is the problem of choosing the base, or radix, that is best suited for representing numbers. Various proposals have been made to quantify the relative costs of using different radices in representing numbers, especially in computer systems. One formula is the number of digits needed to express a number in that base, multiplied by the base (the number of possible values each digit could have). This expression also arises in questions regarding organizational structure, networking, and other fields. The cost of representing a number N in a given base b can be defined as E(b, N) = b ⌊log_b(N) + 1⌋, where we use the floor function ⌊ ⌋ and the base-b logarithm log_b. If both b and N are positive integers, then the quantity E(b, N) is equal to the number of digits needed to express the number N in base b, multiplied by the base b. [ 1 ] This quantity thus measures the cost of storing or processing the number N in base b if the cost of each "digit" is proportional to b. A base with a lower average E(b, N) is therefore, in some senses, more efficient than a base with a higher average value. For example, 100 in decimal has three digits, so its cost of representation is 10×3 = 30, while its binary representation has seven digits (1100100 in base 2), so the analogous calculation gives 2×7 = 14. Likewise, in base 3 its representation has five digits (10201 in base 3), for a value of 3×5 = 15, and in base 36 (2S in base 36) one finds 36×2 = 72. If the number is imagined to be represented by a combination lock or a tally counter, in which each wheel has b digit faces, labeled 0, 1, ..., b − 1, and there are ⌊log_b(N) + 1⌋ wheels, then E(b, N) is the total number of digit faces needed to inclusively represent any integer from 0 to N.
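The cost function and the worked examples above can be checked with a short sketch (integer digit counting is used rather than floating-point logarithms, to avoid rounding hazards at exact powers):

```python
def E(b, N):
    """Radix economy: the number of base-b digits of N, multiplied by b."""
    digits = 0
    while N > 0:
        N //= b
        digits += 1
    return b * digits

print(E(10, 100))  # 3 decimal digits         -> 30
print(E(2, 100))   # 1100100, 7 binary digits -> 14
print(E(3, 100))   # 10201, 5 ternary digits  -> 15
print(E(36, 100))  # 2S, 2 base-36 digits     -> 72
```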
The quantity E(b, N) for large N can be approximated as E(b, N) ≈ b log_b(N) = b ln(N) / ln(b). The asymptotically best value is obtained for base 3, since b/ln(b) attains its minimum over the positive integers at b = 3 (3/ln 3 ≈ 2.731, compared with 2/ln 2 ≈ 2.885 and 4/ln 4 ≈ 2.885). For base 10, we have 10/ln 10 ≈ 4.343. The closely related continuous optimization problem of finding the maximum of the function f(x) = x^(1/x), or equivalently, on taking logs and inverting, minimizing x/ln(x) for continuous rather than integer values of x, was posed and solved by Jakob Steiner in 1850. [ 2 ] The solution is Euler's number e ≈ 2.71828, the base of the natural logarithm, for which e/ln(e) = e ≈ 2.71828. Translating this solution back to Steiner's formulation, e^(1/e) ≈ 1.44467 is the unique maximum of f(x) = x^(1/x). [ 3 ] This analysis has sometimes been used to argue that, in some sense, "base e is the most economical base for the representation and storage of numbers", despite the difficulty in understanding what that might mean in practice. [ 4 ] This topic appears in Underwood Dudley's Mathematical Cranks. One of the eccentrics discussed in the book argues that e is the best base, based on a muddled understanding of Steiner's calculus problem, and with a greatly exaggerated sense of how important the choice of radix is. [ 5 ] The values of E(b₁, N) and E(b₂, N) for two bases b₁ and b₂ may be compared for a large value of N: E(b₁, N) / E(b₂, N) ≈ (b₁ ln b₂) / (b₂ ln b₁). Choosing e for b₂ gives E(b, N) / E(e, N) ≈ b / (e ln b). The average E(b, N) of various bases up to several arbitrary numbers (avoiding proximity to powers of 2 through 12 and e) are given in the table below.
Also shown are the values relative to that of base e. E(1, N) of any number N is just N, making unary the most economical for the first few integers, but this no longer holds as N climbs to infinity. [Table of average values over N = 1 to 6, N = 1 to 43, N = 1 to 182, and N = 1 to 5329 omitted.] One result of the relative economy of base 3 is that ternary search trees offer an efficient strategy for retrieving elements of a database. [ 6 ] A similar analysis suggests that the optimum design of a large telephone menu system to minimise the number of menu choices that the average customer must listen to (i.e. the product of the number of choices per menu and the number of menu levels) is to have three choices per menu. [ 1 ] In a d-ary heap, a priority queue data structure based on d-ary trees, the worst-case number of comparisons per operation in a heap containing n elements is d log_d(n) (up to lower-order terms), the same formula used above. It has been suggested that choosing d = 3 or d = 4 may offer optimal performance in practice. [ 7 ] Brian Hayes suggests that E(b, N) may be the appropriate measure for the complexity of an interactive voice response menu: in a tree-structured phone menu with n outcomes and r choices per step, the time to traverse the menu is proportional to the product of r (the time to present the choices at each step) with log_r(n) (the number of choices that need to be made to determine the outcome). From this analysis, the optimal number of choices per step in such a menu is three. [ 1 ] The 1950 reference High-Speed Computing Devices describes a particular situation using contemporary technology. Each digit of a number would be stored as the state of a ring counter composed of several triodes.
Whether vacuum tubes or thyratrons, the triodes were the most expensive part of a counter. For small radices r (less than about 7), a single digit required r triodes. [ 8 ] (Larger radices required 2r triodes arranged as r flip-flops, as in ENIAC's decimal counters.) [ 9 ] So the number of triodes in a numerical register with n digits was rn. In order to represent numbers up to 10^6, the following numbers of tubes were needed: [Table of tube counts by radix omitted.] The authors conclude, Under these assumptions, the radix 3, on the average, is the most economical choice, closely followed by radices 2 and 4. These assumptions are, of course, only approximately valid, and the choice of 2 as a radix is frequently justified on more complete analysis. Even with the optimistic assumption that 10 triodes will yield a decimal ring, radix 10 leads to about one and one-half times the complexity of radix 2, 3, or 4. This is probably significant despite the shallow nature of the argument used here. [ 10 ]
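The tube counts implied by the rn model can be reconstructed with a short sketch. It applies the optimistic assumption of r triodes per digit to every radix (as in the quote's "10 triodes will yield a decimal ring"); the values are computed here, not copied from the 1950 table:

```python
def digits_needed(r, limit=10**6):
    """Smallest n such that an n-digit base-r register covers all values below limit."""
    n, capacity = 1, r
    while capacity < limit:
        capacity *= r
        n += 1
    return n

for r in (2, 3, 4, 5, 10):
    n = digits_needed(r)
    print(f"radix {r}: {n} digits, {r * n} triodes")
```

This reproduces the conclusions quoted above: radix 3 is cheapest (13 digits, 39 triodes), radices 2 and 4 follow closely (40 triodes each), and radix 10 (6 digits, 60 triodes) costs about one and one-half times as much as radix 2, 3, or 4.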
https://en.wikipedia.org/wiki/Optimal_radix_choice
Optimal solutions for the Rubik's Cube are solutions that are the shortest in some sense. There are two common ways to measure the length of a solution. The first is to count the number of quarter turns. The second, and more popular, is to count the number of outer-layer twists, called "face turns". A move to turn an outer layer two quarter (90°) turns in the same direction would be counted as two moves in the quarter-turn metric (QTM), but as one turn in the face-turn metric (FTM, or HTM, "half-turn metric"). [ 1 ] This means that the length of an optimal solution in HTM ≤ the length of an optimal solution in QTM. The maximal number of face turns needed to solve any instance of the Rubik's Cube is 20, [ 2 ] and the maximal number of quarter turns is 26. [ 3 ] These numbers are also the diameters of the corresponding Cayley graphs of the Rubik's Cube group. In STM (slice-turn metric) the minimal number of turns is unknown, with a lower bound of 18 and an upper bound of 20. A randomly scrambled Rubik's Cube will most likely be optimally solvable in 18 moves (~67.0%), 17 moves (~26.7%), 19 moves (~3.4%), 16 moves (~2.6%) or 15 moves (~0.2%) in HTM. [ 4 ] By the same token, it is estimated that there is approximately 1 configuration which needs 20 moves to be solved optimally in every 90 billion random scrambles. The exact number of configurations requiring 20 optimal moves to solve the cube is still unknown. To denote a sequence of moves on the 3×3×3 Rubik's Cube, this article uses "Singmaster notation", [ 5 ] which was developed by David Singmaster. The letters L, R, F, B, U, and D indicate a clockwise quarter turn of the left, right, front, back, up, and down face respectively. A half-turn (i.e. 2 quarter turns in the same direction) is indicated by appending a 2. A counterclockwise turn is indicated by appending a prime symbol ( ' ). Computer solvers can find both optimal and non-optimal solutions in a given turn metric.
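The two metrics can be illustrated with a small counter for Singmaster notation (a minimal sketch; the function name is hypothetical). The sequence below is the scramble used for the example solves later in the article:

```python
def move_counts(sequence):
    """Length of a Singmaster-notation sequence in the face-turn (HTM)
    and quarter-turn (QTM) metrics."""
    htm = qtm = 0
    for move in sequence.split():
        htm += 1                                # any turn of one face is 1 move in HTM
        qtm += 2 if move.endswith("2") else 1   # a half-turn counts as 2 moves in QTM
    return htm, qtm

# The scrambling move sequence used for the example solves in this article:
scramble = "U2 B2 R' F2 R' U2 L2 B2 R' B2 R2 U2 B2 U' L R2 U L F D2 R' F'"
print(move_counts(scramble))  # -> (22, 34)
```

The 22-move scramble contains 12 half-turns, so it is 12 moves longer in QTM than in HTM.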
To distinguish between these cases, an asterisk ( * ) is used. For example, a solution followed by (18f) means that an 18-move solution in the face-turn metric was found, but that specific solution was not proved to be optimal. Conversely, a solution followed by (18f*) means that an 18-move solution in the face-turn metric was found and that specific solution was proved to be optimal. It can be proven by counting arguments that there exist positions needing at least 18 moves to solve. To show this, first count the number of cube positions that exist in total, then count the number of positions achievable using at most 17 moves starting from a solved cube. It turns out that the latter number is smaller. This argument was not improved upon for many years. Also, it is not a constructive proof: it does not exhibit a concrete position that needs this many moves. It was conjectured that the so-called superflip would be a position that is very difficult. A Rubik's Cube is in the superflip pattern when each corner piece is in the correct position, but each edge piece is incorrectly oriented. [ 6 ] In 1992, a solution for the superflip with 20 face turns was found by Dik T. Winter, of which the minimality was shown in 1995 by Michael Reid, providing a new lower bound for the diameter of the cube group. Also in 1995, a solution for superflip in 24 quarter turns was found by Michael Reid, with its minimality proven by Jerry Bryan. [ 6 ] In 1998, a new position requiring more than 24 quarter turns to solve was found. The position, which was called a 'superflip composed with four spot', needs 26 quarter turns. [ 7 ] The first upper bounds were based on the 'human' algorithms. By combining the worst-case scenarios for each part of these algorithms, the typical upper bound was found to be around 100. Perhaps the first concrete value for an upper bound was the 277 moves mentioned by David Singmaster in early 1979.
He simply counted the maximum number of moves required by his cube-solving algorithm. [ 8 ] [ 9 ] Later, Singmaster reported that Elwyn Berlekamp, John Conway, and Richard K. Guy had come up with a different algorithm that took at most 160 moves. [ 8 ] [ 10 ] Soon after, Conway's Cambridge Cubists reported that the cube could be restored in at most 94 moves. [ 8 ] [ 11 ] Five computer algorithms (four of which can find an optimal Rubik's Cube solution in the half-turn metric) are briefly described below in chronological order. An animated example solve has been made for each of them. The scrambling move sequence used in all example solves is: U2 B2 R' F2 R' U2 L2 B2 R' B2 R2 U2 B2 U' L R2 U L F D2 R' F'. Thistlethwaite's four-phase algorithm is not designed to search for an optimal solution, its average move count being about 31 moves. [ 12 ] Nevertheless, it is an interesting solving method from a theoretical standpoint. The breakthrough in determining an upper bound, known as "descent through nested sub-groups", was found by Morwen Thistlethwaite; details of Thistlethwaite's algorithm were published in Scientific American in 1981 by Douglas Hofstadter. The approaches to the cube that led to algorithms with very few moves are based on group theory and on extensive computer searches. Thistlethwaite's idea was to divide the problem into subproblems. Where algorithms up to that point divided the problem by looking at the parts of the cube that should remain fixed, he divided it by restricting the type of moves that could be executed. In particular he divided the cube group into the following chain of subgroups: G0 = ⟨L, R, F, B, U, D⟩ ⊃ G1 = ⟨L, R, F, B, U2, D2⟩ ⊃ G2 = ⟨L, R, F2, B2, U2, D2⟩ ⊃ G3 = ⟨L2, R2, F2, B2, U2, D2⟩ ⊃ G4 = {1}. Next he prepared tables for each of the right coset spaces G_{i+1}\G_i. For each element he found a sequence of moves that took it to the next smaller group.
After these preparations he worked as follows. A random cube is in the general cube group G0. Next he found this element in the right coset space G1\G0. He applied the corresponding process to the cube. This took it to a cube in G1. Next he looked up a process that takes the cube to G2, next to G3, and finally to G4. Although the whole cube group G0 is very large (≈4.3×10^19), the right coset spaces G1\G0, G2\G1, G3\G2 and G4\G3 are much smaller. The coset space G2\G1 is the largest and contains only 1,082,565 elements. The number of moves required by this algorithm is the sum of the largest process in each step. Initially, Thistlethwaite showed that any configuration could be solved in at most 85 moves. In January 1980 he improved his strategy to yield a maximum of 80 moves. Later that same year, he reduced the number to 63 using a new approach, and then again to 52 using an entirely different approach, which is now known as Thistlethwaite's algorithm. [ 13 ] By exhaustively searching the coset spaces it was later found that the worst possible number of moves for each phase was 7, 10, 13, and 15, giving a total of 45 moves at most. There have been implementations of Thistlethwaite's algorithm in various computer languages. The main idea behind the 4-list algorithm (sometimes denoted as Shamir's algorithm) is a bidirectional search, also known as a meet-in-the-middle approach.
A group of researchers— Adi Shamir , Amos Fiat , Shahar Mozes , Ilan Shimshoni and Gábor Tardos —demonstrated how to apply the algorithm to the Rubik's Cube in 1989, [ 14 ] based on earlier work by Richard Schroeppel and Adi Shamir from January 1980 (which was published in 1981). [ 15 ] The 4-list algorithm is not designed to search for an optimal solution as quickly as possible. Its purpose is to find a solution at most 20 moves long, but without any guarantee that the solution found is optimal. If the algorithm is not terminated upon finding the first solution, it can find all solutions including optimal ones. However, the first report of optimal solutions for randomly scrambled cubes came from Richard E. Korf in 1997, using his own algorithm. The search time required for the 4-list algorithm to find an optimal solution is considerably longer compared to Kociemba's or Feather's algorithm. Bidirectional search works by searching forward from the scrambled state, and backward from the solved state simultaneously, until a common state is reached from both directions. The solution is then found by combining the forward search path with the inverse of the backward search path. To find a solution using the 4-list algorithm, a list of all 621,649 permutations that reach depths 0-5 is first stored in RAM . By cleverly multiplying all possible pairs of elements from that list and sorting the resulting products, all permutations that reach depths 0-10 are then generated in lexicographical order from both scrambled and solved states, ultimately leading to a match being found at their intersection. [ 16 ] As a consequence of the lexicographic ordering, it is possible to eliminate duplicate permutations and to count the number of unique permutations without storing any of the created products in RAM. Thistlethwaite's algorithm was improved by Herbert Kociemba in 1992. 
He reduced the number of intermediate groups to only two: G1 = ⟨U, D, L2, R2, F2, B2⟩ and G2 = {1}. Kociemba's two-phase algorithm is not designed to search for an optimal solution; its purpose is to quickly find a reasonably short suboptimal solution. A randomly scrambled cube would typically be solved in a fraction of a second in 20 moves or less, but without any guarantee that the solution found is optimal. While it is technically possible to search for an optimal solution using Kociemba's algorithm by reducing the two-phase solver to a one-phase solver (only phase 1 would be used until the cube is completely solved, no phase 2 operation being done at all), more hardware utilization would be necessary in that case because a fast optimal solver requires significantly more computing resources than an equally fast suboptimal solver. As with Thistlethwaite's algorithm, he would search through the right coset space G1\G0 to take the cube to group G1. Next he searched for the optimal solution for group G1. The searches in G1\G0 and G1 were both done with a method equivalent to iterative deepening A* (IDA*). The search in G1\G0 needs at most 12 moves and the search in G1 at most 18 moves, as Michael Reid showed in 1995. By also generating suboptimal solutions that take the cube to group G1 and looking for short solutions in G1, much shorter overall solutions are usually obtained. In 1995 Michael Reid proved that, by using these two groups, every position can be solved in at most 29 face turns, or in 42 quarter turns. This result was improved by Silviu Radu in 2005 to 40 quarter turns.
At first glance, this algorithm appears to be practically inefficient: if G0 contains 18 possible moves (each move, its prime, and its 180-degree rotation), that leaves 18^12 (over 1 quadrillion) cube states to be searched. Even with a heuristic-based computer algorithm like IDA*, which may narrow it down considerably, searching through that many states is likely not practical. To solve this problem, Kociemba devised a lookup table that provides an exact heuristic for G0. [ 17 ] When the exact number of moves needed to reach G1 is available, the search for suboptimal solutions becomes virtually instantaneous (note that the search for optimal solutions takes much longer): one need only generate 18 cube states for each of the 12 moves and choose the one with the lowest heuristic each time. This allows the second heuristic, that for G1, to be less precise and still allow for a solution to be computed in reasonable time on a modern computer. In 1997 Richard E. Korf wrote the first program to solve randomly scrambled cubes optimally. Of the ten random cubes he solved, none required more than 18 face turns. The method he used is called IDA* and is described in his paper "Finding Optimal Solutions to Rubik's Cube Using Pattern Databases". [ 18 ] It works roughly as follows. First he identified a number of subproblems that are small enough to be solved optimally. He used: the eight corner cubies alone; six of the twelve edge cubies; and the remaining six edge cubies. Clearly the number of moves required to solve any of these subproblems is a lower bound for the number of moves needed to solve the entire cube. Given a random cube C, it is solved by iterative deepening. First all cubes are generated that are the result of applying 1 move to it. That is C * F, C * U, ... Next, from this list, all cubes are generated that are the result of applying two moves, then three moves, and so on.
If at any point a cube is found that needs too many moves based on the lower bounds to still be optimal, it can be eliminated from the list. Although this algorithm will always find an optimal solution, its search time is considerably longer than that of Kociemba's or Feather's algorithm. In 2015, Michael Feather introduced a unique two-phase algorithm on his website. It is capable of generating both suboptimal and optimal solutions in reasonable time on a modern device. [ 19 ] Unlike Thistlethwaite's or Kociemba's algorithm, Feather's algorithm is not heavily based on group theory. The Rubik's Cube can be simplified by using only 3 colors instead of the usual 6 colors. Generally, opposite faces would share the same color. On a 6-color cube, a solved 3-color cube would be represented by a state in which only opposite colors appear on opposite faces. In a nutshell, Feather's algorithm works like this: any 3-color solutions (in phase 1) that arise from the nodes being generated are then (in phase 2) looked up in an array with a total of 3,981,312 configurations, containing distances from intermediate 3-color solutions to the final 6-color solution; if phase 2 is at most 8 moves long (of which there are 117,265 configurations), then a solution is generated. [ 20 ] One can think of it as a brute-force search enhanced by using distance arrays to prune the search tree where possible, and also reducing the size of the distance arrays effectively by using cube symmetry. [ 21 ] When searching for an optimal solution, Feather's algorithm finds suboptimal solutions along the way. This is an advantage because in many cases it does not have to search at the depth of the optimal solution length: having already found a suboptimal solution whose length equals the optimal solution length, it only has to complete search depth n − 1 to prove that solution length n is optimal.
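Korf's search (and, with different heuristics, Kociemba's and Feather's) follows the IDA* scheme described above: iterative deepening on a cost bound, with an admissible lower-bound heuristic used to prune. A minimal generic sketch, with a toy state space standing in for the cube and all names hypothetical:

```python
def ida_star(start, is_goal, moves, apply_move, heuristic):
    """Generic IDA*: deepen a cost bound; prune any node whose depth plus
    lower bound exceeds it. `heuristic` must never overestimate
    (e.g. the maximum over several pattern databases)."""
    def search(state, depth, bound, path):
        f = depth + heuristic(state)
        if f > bound:
            return f            # smallest overrun becomes the next bound
        if is_goal(state):
            return path
        best = float("inf")
        for m in moves:
            result = search(apply_move(state, m), depth + 1, bound, path + [m])
            if isinstance(result, list):
                return result
            best = min(best, result)
        return best

    bound = heuristic(start)
    while True:
        result = search(start, 0, bound, [])
        if isinstance(result, list):
            return result       # the first solution found is optimal
        bound = result

# Toy domain standing in for the cube: reach 0 from an integer by +-1 steps,
# with the exact distance |s| as the (admissible) heuristic.
sol = ida_star(5, lambda s: s == 0, [-1, +1],
               lambda s, m: s + m, lambda s: abs(s))
print(sol)  # -> [-1, -1, -1, -1, -1]
```

Because the heuristic never overestimates, no bound below the optimal length can yield a solution, so the first solution returned is provably optimal, which is how Korf's program certified its 18-face-turn results.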
Feather's algorithm was implemented in the first online optimal Rubik's Cube solver, more specifically in the first client-side ( JavaScript ) solver with a graphical user interface running in a web browser and able to generate optimal solutions in a timely manner. That includes computing 19-move optimal solutions, as they occur in roughly 3.4% of all cases in a batch of randomly scrambled cubes. The solver has multiple options for different-size distance arrays to maximize use of available RAM, and it also uses all available processors to get the best performance from a wide variety of platforms. [ 22 ] The 4-list, Kociemba's, Korf's and Feather's algorithms can all be adjusted to always find an optimal solution in HTM; Thistlethwaite's algorithm cannot. The 4-list, Kociemba's and Korf's algorithms always search at depth n to prove that the solution found is optimal, while Feather's algorithm can often search only to depth n − 1 to prove that the solution found is optimal, where n is the solution length. Korf's, Kociemba's and Feather's algorithms all use IDA*; they differ in the heuristic functions used. The branching factor for all three algorithms is about 13.5, meaning that it takes approximately 13.5 times longer to complete the search at depth n than the search at depth n − 1. Thistlethwaite's, two-phase (suboptimal) Kociemba's and two-phase (suboptimal) Feather's algorithms are all reduction-based algorithms: while Thistlethwaite's and two-phase Kociemba's algorithms become more move-restricted with each successive phase, Feather's algorithm is not move-restricted in phase 2. Also, there is a substantial difference between HTR and 3-color cube reduction, even though they might seem the same at first sight.
Similarly to Thistlethwaite's HTR, phase 2 of Feather's algorithm can also be solved using only half-turns, but in that case not all configurations would be solvable in at most 8 moves. Some people regard the one-phase (optimal) and the two-phase (suboptimal) Kociemba's algorithm as two different algorithms. In the "3x3x3 Fewest Moves" event governed by the World Cube Association there are known cases in which humans found 16-, 17- and 18-move-long optimal solutions to randomly scrambled cubes. To achieve this feat, most competitors currently mimic the steps of Thistlethwaite's algorithm, combined with advanced solving techniques such as premoves, normal-inverse scramble switch, insertions, and others. Two terms, God's number and God's algorithm , are closely related to the optimal solution for the Rubik's Cube. God's number refers to the fewest moves required to solve a scrambled cube in a given turn metric; it also refers to the greatest such number among all scrambled cubes. God's algorithm refers to the shortest move sequence required to solve a particular scrambled cube in a given turn metric. For instance, God's number for the scrambling move sequence given in the Computer solving section above is 18 in FTM, and each of the four example solves from that section, being 18 moves long in FTM, is God's algorithm for that particular scrambled cube. In 2006, Silviu Radu proved that every position can be solved in at most 27 face turns or 35 quarter turns. [ 23 ] In 2007, Daniel Kunkle and Gene Cooperman used a supercomputer to show that all unsolved cubes can be solved in no more than 26 face turns. Instead of attempting to solve each of the billions of variations explicitly, the computer was programmed to bring the cube to one of 15,752 states, each of which could be solved within a few extra moves. All were proved solvable in 29 moves, with most solvable in 26.
Those that could not initially be solved in 26 moves were then solved explicitly and shown to be solvable in 26 moves as well. [ 24 ] [ 25 ] Tomas Rokicki reported in a 2008 computational proof that all unsolved cubes could be solved in 25 moves or fewer. [ 26 ] This was later reduced to 23 moves. [ 27 ] In August 2008, Rokicki announced that he had a proof for 22 moves. [ 28 ] Finally, in 2010, Tomas Rokicki, Herbert Kociemba, Morley Davidson, and John Dethridge gave the final computer-assisted proof that all cube positions could be solved with a maximum of 20 face turns. [ 2 ] In 2009, Tomas Rokicki proved that 29 moves in the quarter-turn metric is enough to solve any scrambled cube. [ 29 ] In 2014, Tomas Rokicki and Morley Davidson proved that the maximum number of quarter turns needed to solve the cube is 26. [ 3 ] The face-turn and quarter-turn metrics differ in the nature of their antipodes. [ 3 ] An antipode is a scrambled cube that is maximally far from solved, one that requires the maximum number of moves to solve. In the half-turn metric, where God's number is 20, there are hundreds of millions of such positions. In the quarter-turn metric, where God's number is 26, only a single position (and its two rotations) is known that requires the maximum of 26 moves. Despite significant effort, no additional quarter-turn distance-26 positions have been found. Even at distance 25, only two positions (and their rotations) are known to exist. [ 3 ] At distance 24, perhaps 150,000 positions exist.
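The IDA* search pattern shared by Korf's, Kociemba's and Feather's solvers, iterative deepening on a cost bound with an admissible lower-bound heuristic such as a distance array, can be sketched as follows. This is a generic illustration on a toy one-dimensional puzzle, not a model of any of the actual cube solvers; the state space, move set and heuristic are all stand-ins.

```python
# Illustrative sketch of IDA*: deepen the cost bound iteratively and prune
# any node whose g + h exceeds it. Only the control flow mirrors the cube
# solvers; the "puzzle" here is a number line stepped by +/-1 or +/-2.

def ida_star(start, goal, moves, heuristic):
    """Return an optimal move list from start to goal, or None."""
    bound = heuristic(start)
    while True:
        found, next_bound = _search(start, goal, moves, heuristic, 0, bound, [])
        if found is not None:
            return found              # optimal: found at the smallest bound
        if next_bound == float("inf"):
            return None               # goal unreachable
        bound = next_bound            # deepen and retry

def _search(state, goal, moves, heuristic, g, bound, path):
    f = g + heuristic(state)
    if f > bound:
        return None, f                # prune: cannot beat the current bound
    if state == goal:
        return list(path), bound
    minimum = float("inf")
    for label, step in moves:
        path.append(label)
        found, t = _search(step(state), goal, moves, heuristic, g + 1, bound, path)
        path.pop()
        if found is not None:
            return found, bound
        minimum = min(minimum, t)     # smallest f that exceeded the bound
    return None, minimum

# Toy instance: reach 7 from 0 with steps of +/-1 and +/-2 (optimum: 4 moves).
MOVES = [("+2", lambda x: x + 2), ("+1", lambda x: x + 1),
         ("-1", lambda x: x - 1), ("-2", lambda x: x - 2)]
h = lambda x: abs(7 - x) // 2         # admissible lower bound, like a distance array
solution = ida_star(0, 7, MOVES, h)
print(solution)
```

The bound starts at the heuristic estimate of the root (here 3) and grows to the optimal length (here 4), exactly as the cube solvers deepen from the distance-array estimate of the scramble.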
https://en.wikipedia.org/wiki/Optimal_solutions_for_the_Rubik's_Cube
Optimal virulence is a concept relating to the ecology of hosts and parasites . One definition of virulence is the host's parasite-induced loss of fitness . The parasite's fitness is determined by its success in transmitting offspring to other hosts. For about 100 years, the consensus was that virulence decreased and parasitic relationships evolved toward symbiosis . This was even called the law of declining virulence, although it was a hypothesis, not even a theory. It has been challenged since the 1980s and has been disproved. [ 1 ] [ 2 ] A pathogen that is too restrained will lose out in competition to a more aggressive strain that diverts more host resources to its own reproduction. However, the host, being the parasite's resource and habitat in a way, suffers from this higher virulence . Higher virulence might induce faster host death and act against the parasite's fitness by reducing the probability of encountering another host (killing the host too fast to allow for transmission). Thus, there is a natural force providing pressure on the parasite to "self-limit" virulence. The idea is, then, that there exists an equilibrium point of virulence, where the parasite's fitness is highest. Any movement on the virulence axis, towards higher or lower virulence, will result in lower fitness for the parasite, and thus will be selected against. Paul W. Ewald has explored the relationship between virulence and mode of transmission. He came to the conclusion that virulence tends to remain especially high in waterborne and vector-borne infections, such as cholera and dengue . Cholera is spread through sewage and dengue through mosquitoes . In the case of respiratory infections, the pathogen depends on an ambulatory host to survive. It must spare the host long enough to find a new host. Water- or vector-borne transmission circumvents the need for a mobile host.
Ewald is convinced that the crowding of field hospitals and trench warfare provided an easy transmission route that drove the evolution of the extreme virulence of the 1918 influenza pandemic . In such immobilized, crowded conditions pathogens can make individuals very sick and still jump to healthy individuals. Other epidemiologists have expanded on the idea of a tradeoff between the costs and benefits of virulence. One factor is the time or distance between potential hosts. Airplane travel, crowded factory farms , and urbanization have all been suggested as possible sources of virulence. Another factor is the presence of multiple infections in a single host, leading to increased competition among pathogens. In this scenario, the host can survive only as long as it resists the most virulent strains, and the advantage of a low-virulence strategy becomes moot. Multiple infections can also result in gene swapping among pathogens, increasing the likelihood of lethal combinations. There are three main hypotheses about why a pathogen evolves as it does. These three models help to explain the life-history strategies of parasites, including reproduction, migration within the host, virulence, etc. The three hypotheses are the trade-off hypothesis, the short-sighted evolution hypothesis, and the coincidental evolution hypothesis. All of these offer ultimate explanations for virulence in pathogens. At one time, some biologists argued that pathogens would tend to evolve toward ever-decreasing virulence because the death of the host (or even serious disability) is ultimately harmful to the pathogen living inside. For example, if the host dies, the pathogen population inside may die out entirely. Therefore, it was believed that less virulent pathogens that allowed the host to move around and interact with other hosts should have greater success reproducing and dispersing. But this is not necessarily the case.
Pathogen strains that kill the host can increase in virulence as long as the pathogen can transmit itself to a new host, whether before or after the host dies. The evolution of virulence in pathogens is a balance between the costs and benefits of virulence to the pathogen. For example, studies of the malaria parasite using rodent [ 3 ] and chicken [ 4 ] models found that there was a trade-off between transmission success and virulence, as defined by host mortality. Short-sighted evolution suggests that traits which increase the reproduction rate and transmission to a new host will rise to high frequency within the pathogen population. These traits include the ability to reproduce sooner, reproduce faster, reproduce in higher numbers, live longer, survive against antibodies, or survive in parts of the body the pathogen does not normally infiltrate. These traits typically arise due to mutations, which occur more frequently in pathogen populations than in host populations due to the pathogens' rapid generation time and immense numbers. After only a few generations, the mutations that enhance rapid reproduction or dispersal will increase in frequency. The same mutations that enhance the reproduction and dispersal of the pathogen also enhance its virulence in the host, causing much harm (disease and death). If the pathogen's virulence kills the host and interferes with its own transmission to a new host, virulence will be selected against. But as long as transmission continues despite the virulence, virulent pathogens will have the advantage. So, for example, virulence often increases within families, where transmission from one host to the next is likely, no matter how sick the host. Similarly, in crowded conditions such as refugee camps, virulence tends to increase over time since new hosts cannot escape the likelihood of infection. Some forms of pathogenic virulence do not co-evolve with the host. For example, tetanus is caused by the soil bacterium Clostridium tetani .
After C. tetani bacteria enter a human wound, the bacteria may grow and divide rapidly, even though the human body is not their normal habitat. While dividing, C. tetani produce a neurotoxin that is lethal to humans. But it is selection in the bacterium's normal life cycle in the soil that leads it to produce this toxin, not any coevolution with a human host. The bacterium finds itself inside a human instead of in the soil by mere happenstance . We can say that the neurotoxin is not directed at the human host. More generally, the virulence of many pathogens in humans may not be a target of selection itself, but rather an accidental by-product of selection that operates on other traits, as is the case with antagonistic pleiotropy . A potential for virulence exists whenever a pathogen invades a new environment, host or tissue. The new host is likely to be poorly adapted to the intruder, either because it has not built up an immunological defense or because of a fortuitous vulnerability. In times of change, natural selection favors mutations that exploit the new host more effectively than the founder strain, providing an opportunity for virulence to erupt. Host susceptibility contributes to virulence. Once transmission occurs, the pathogen must establish an infection to continue. The more competent the host immune system, the less chance there is for the parasite to survive. It may require multiple transmission events to find a suitably vulnerable host. During this time, the invader is dependent upon the survival of its current host. The optimum conditions for high virulence would be a community with immune dysfunction (and/or poor hygiene and sanitation ) that was in all other ways as healthy as possible (e.g., optimal nutrition ).
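The trade-off between transmission and host death described in this article can be made concrete with a deliberately simple, hypothetical model. The functional forms and parameter values below are textbook-style assumptions, not taken from the article: transmission is assumed to rise with virulence with diminishing returns, beta(v) = c * v**a with 0 < a < 1, while the host is lost at rate mu + v, giving the fitness proxy R0(v) = beta(v) / (mu + v), which has an interior maximum.

```python
# Minimal, hypothetical trade-off model of the "equilibrium point of
# virulence": more virulence raises transmission (with diminishing
# returns) but shortens the infectious period. None of these functional
# forms or numbers come from the article; they are illustration choices.

def r0(v, c=1.0, a=0.5, mu=1.0):
    """Parasite fitness proxy: transmission rate over total loss rate."""
    return c * v**a / (mu + v)

# Numerically locate the fitness peak on a fine grid of virulence values.
vs = [i / 1000 for i in range(1, 20000)]
v_star = max(vs, key=r0)

# Setting dR0/dv = 0 gives v* = a*mu/(1 - a); with a = 0.5, mu = 1, v* = 1.
print(v_star)
```

Moving in either direction from v_star lowers R0, which is the selection-against-deviation argument made in the text: intermediate, not zero, virulence is optimal.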
https://en.wikipedia.org/wiki/Optimal_virulence
The Optimized Link State Routing Protocol ( OLSR ) [ 1 ] is an IP routing protocol optimized for mobile ad hoc networks , which can also be used on other wireless ad hoc networks . OLSR is a proactive link-state routing protocol , which uses hello and topology control (TC) messages to discover and then disseminate link-state information throughout the mobile ad hoc network. Individual nodes use this topology information to compute next-hop destinations for all nodes in the network using shortest-hop forwarding paths. Link-state routing protocols such as Open Shortest Path First (OSPF) and Intermediate System to Intermediate System (IS-IS) elect a designated router on every link to perform flooding of topology information. In wireless ad hoc networks the notion of a link is different: packets can and do go out the same interface; hence, a different approach is needed in order to optimize the flooding process. Using hello messages, the OLSR protocol at each node discovers 2-hop neighbor information and performs a distributed election of a set of multipoint relays (MPRs). Nodes select MPRs such that there exists a path to each of their 2-hop neighbors via a node selected as an MPR. These MPR nodes then source and forward TC messages that contain the MPR selectors. This functioning of MPRs makes OLSR unique from other link-state routing protocols in a few different ways: the forwarding path for TC messages is not shared among all nodes but varies depending on the source; only a subset of nodes source link-state information; and not all links of a node are advertised, but only those that represent MPR selections. Since link-state routing requires the topology database to be synchronized across the network, OSPF and IS-IS perform topology flooding using a reliable algorithm.
Such an algorithm is very difficult to design for ad hoc wireless networks, so OLSR doesn't bother with reliability; it simply floods topology data often enough to make sure that the database does not remain unsynchronized for extended periods of time. Multipoint relays (MPRs) relay messages between nodes. They also have the main role in routing and selecting the proper route from any source to any desired destination node. MPRs advertise link-state information for their MPR selectors (the nodes that have selected them as an MPR) periodically in their control messages. MPRs are also used to form a route from a given node to any destination in route calculation. Each node periodically broadcasts a hello message for the link sensing, neighbor detection and MPR selection processes. [ 2 ] Being a proactive protocol, routes to all destinations within the network are known and maintained before use. Having the routes available within the standard routing table can be useful for some systems and network applications, as there is no route-discovery delay associated with finding a new route. The routing overhead generated, while generally greater than that of a reactive protocol, does not increase with the number of routes being created. Default and network routes can be injected into the system by Host and Network Association (HNA) messages, allowing for connection to the internet or other networks within the OLSR MANET cloud. Network routes are something reactive protocols do not currently execute well. Timeout values and validity information are contained within the messages, allowing differing timer values to be used at differing nodes. The original definition of OLSR does not include any provisions for sensing of link quality; it simply assumes that a link is up if a number of hello packets have been received recently.
This assumes that links are bimodal (either working or failed), which is not necessarily the case on wireless networks, where links often exhibit intermediate rates of packet loss. Implementations such as the open source OLSRd (commonly used on Linux -based mesh routers) have been extended (as of v. 0.4.8) with link-quality sensing. Being a proactive protocol, OLSR uses power and network resources in order to propagate data about possibly unused routes. While this is not a problem for wired access points and laptops, it makes OLSR unsuitable for sensor networks that try to sleep most of the time. The open source OLSRd project showed, however, that large-scale mesh networks of thousands of nodes can run OLSRd with very little CPU power on 200 MHz embedded devices. [ citation needed ] Being a link-state protocol, OLSR requires a reasonably large amount of bandwidth and CPU power to compute optimal paths in the network. In the typical networks where OLSR is used (which rarely exceed a few hundred nodes), this does not appear to be a problem. By only using MPRs to flood topology information, OLSR removes some of the redundancy of the flooding process, which may be a problem in networks with moderate to large packet-loss rates [ 3 ] – however, the MPR mechanism is self-pruning (which means that in case of packet losses, some nodes that would otherwise have retransmitted a packet may not do so). OLSR makes use of hello messages to find its one-hop neighbors and its two-hop neighbors through their responses. The sender can then select its multipoint relays (MPRs) based on the one-hop nodes that offer the best routes to the two-hop nodes. Each node also has an MPR selector set, which enumerates the nodes that have selected it as an MPR node. OLSR uses topology control (TC) messages along with MPR forwarding to disseminate neighbor information throughout the network.
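The hello-based MPR selection just described can be sketched as a greedy set-cover heuristic: repeatedly pick the 1-hop neighbor that covers the most still-uncovered 2-hop neighbors. This is a simplified illustration; the full RFC 3626 procedure also considers node willingness, first selects neighbors that are the only path to some 2-hop node, and breaks ties by node degree.

```python
# Simplified greedy MPR selection: choose a subset of 1-hop neighbors so
# that every strict 2-hop neighbor is reachable through some chosen MPR.

def select_mprs(one_hop, two_hop_via):
    """one_hop: set of 1-hop neighbor ids.
    two_hop_via: dict mapping each neighbor to the set of strict 2-hop
    nodes reachable through it."""
    uncovered = set().union(*(two_hop_via[n] for n in one_hop)) if one_hop else set()
    mprs = set()
    while uncovered:
        candidates = one_hop - mprs
        if not candidates:
            break                      # remaining 2-hop nodes are unreachable
        # Greedily take the neighbor covering the most uncovered 2-hop nodes.
        best = max(candidates, key=lambda n: len(two_hop_via[n] & uncovered))
        gained = two_hop_via[best] & uncovered
        if not gained:
            break
        mprs.add(best)
        uncovered -= gained
    return mprs

# Example: B reaches {D, E}, C reaches {E, F}; both are needed to cover
# all of {D, E, F}, so both become MPRs.
neighbors = {"B", "C"}
reach = {"B": {"D", "E"}, "C": {"E", "F"}}
print(sorted(select_mprs(neighbors, reach)))
```

Only the selected MPRs then retransmit flooded TC messages, which is exactly the redundancy reduction the article attributes to the MPR mechanism.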
Host and network association (HNA) messages are used by OLSR to disseminate network route advertisements in the same way TC messages advertise host routes. The problem of routing in ad hoc wireless networks is actively being researched, and OLSR is but one of several proposed solutions. To many, it is not clear whether a whole new protocol is needed, or whether OSPF could be extended with support for wireless interfaces. [ 4 ] [ 5 ] In bandwidth- and power-starved environments, it is interesting to keep the network silent when there is no traffic to be routed. Reactive routing protocols do not maintain routes, but build them on demand. Since link-state protocols require database synchronisation, reactive protocols typically use the distance-vector approach instead, as in AODV and DSDV , or more ad hoc approaches that do not necessarily build optimal paths, such as Dynamic Source Routing . For more information see the list of ad hoc routing protocols . OLSRv2 was published by the IETF in April 2014 as a standards-track protocol. [ 6 ] It maintains many of the key features of the original, including MPR selection and dissemination. Key differences are the flexibility and modular design using shared components: the packet format packetbb and the neighborhood discovery protocol NHDP. These components are designed to be common among next-generation IETF MANET protocols. Differences also exist between OLSR and OLSRv2 in the handling of nodes with multiple addresses and interfaces.
https://en.wikipedia.org/wiki/Optimized_Link_State_Routing_Protocol
The optimized effective potential method ( OEP ) [ 1 ] [ 2 ] in Kohn-Sham (KS) density functional theory (DFT) [ 3 ] [ 4 ] is a method to determine the potentials as functional derivatives of the corresponding KS orbital-dependent energy density functionals . This can in principle be done for any orbital-dependent functional, [ 5 ] but it is most common for the exchange energy, as the so-called exact exchange method (EXX) , [ 6 ] [ 7 ] which will be considered here. The OEP method was developed in 1953 by R. T. Sharp and G. K. Horton, [ 8 ] more than ten years before the work of Pierre Hohenberg , [ 3 ] Walter Kohn and Lu Jeu Sham , [ 4 ] in order to investigate what happens to Hartree-Fock (HF) theory [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] when, instead of the regular nonlocal exchange potential, a local exchange potential is demanded. Only much later, after 1990, was it found that this ansatz is useful in density functional theory . In density functional theory the exchange-correlation (xc) potential is defined as the functional derivative of the exchange-correlation (xc) energy with respect to the electron density ρ ( r ) {\displaystyle \rho (r)} , [ citation needed ] {\displaystyle v_{xc}(r)={\frac {\delta E_{xc}}{\delta \rho (r)}}} The problem is that, although the xc energy is in principle (due to the Hohenberg-Kohn (HK) theorem [ 3 ] ) a functional of the density, its explicit dependence on the density is unknown (it is known only in the simple local density approximation (LDA) [ 3 ] case); only its implicit dependence through the KS orbitals is known. That motivates the use of the chain rule {\displaystyle v_{xc}(r)=\int dr'\sum _{s}{\bigg [}{\frac {\delta E_{xc}[\{\phi _{s}\}]}{\delta \phi _{s}(r')}}{\frac {\delta \phi _{s}(r')}{\delta \rho (r)}}+c.c.{\bigg ]}} where the index s {\displaystyle s} denotes both occupied and unoccupied KS orbitals and eigenvalues. Unfortunately the functional derivative δ ϕ s / δ ρ {\displaystyle \delta \phi _{s}/\delta \rho } , despite its existence, is also unknown. So one needs to invoke the chain rule once more, now with respect to the Kohn-Sham (KS) potential v S ( r ) {\displaystyle v_{S}(r)} : {\displaystyle v_{xc}(r)=\iint dr'dr''\sum _{s}{\bigg [}{\frac {\delta E_{xc}[\{\phi _{s}\}]}{\delta \phi _{s}(r')}}{\frac {\delta \phi _{s}(r')}{\delta v_{S}(r'')}}\underbrace {\frac {\delta v_{S}(r'')}{\delta \rho (r)}} _{\equiv X_{S}^{-1}(r'',r)}+c.c.{\bigg ]}} where X S − 1 {\displaystyle X_{S}^{-1}} is the inverse of the static Kohn-Sham (KS) response function. [ citation needed ] The KS orbital-dependent exact exchange energy (EXX) is given in chemist's notation as {\displaystyle E_{x}[\{\phi _{i}\}]=-{\frac {1}{2}}\sum _{i}\sum _{j}(ij|ji)\equiv -{\frac {1}{2}}\sum _{i}\sum _{j}\iint drdr'{\frac {\phi _{i}^{\dagger }(r)\phi _{j}(r)\phi _{j}^{\dagger }(r')\phi _{i}(r')}{|r-r'|}}} where r , r ′ {\displaystyle r,r'} denote electronic coordinates and † {\displaystyle \dagger } the Hermitian conjugate . The static Kohn-Sham (KS) response function is given as {\displaystyle X_{S}(r,r')=\sum _{i}\sum _{a}{\frac {\phi _{i}^{\dagger }(r)\phi _{a}(r)\phi _{a}^{\dagger }(r')\phi _{i}(r')}{\varepsilon _{i}-\varepsilon _{a}}}+c.c.} where the index i {\displaystyle i} denotes occupied and a {\displaystyle a} unoccupied KS orbitals, and c . c . {\displaystyle c.c.} the complex conjugate. The right hand side (r.h.s.)
of the OEP equation is {\displaystyle t(r)=\sum _{i}\int dr'\,\phi _{i}^{\dagger }(r')\,{\hat {v}}_{x}^{\text{NL}}\,G_{i}(r',r)\,\phi _{i}(r)+c.c.} where v ^ x NL {\displaystyle {\hat {v}}_{x}^{\text{NL}}} is the nonlocal exchange operator from Hartree-Fock (HF) theory, but evaluated with KS orbitals, stemming from the functional derivative δ E x c [ { ϕ i } ] / δ ϕ i ( r ′ ) {\displaystyle \delta E_{xc}[\{\phi _{i}\}]/\delta \phi _{i}(r')} . Lastly, note that the following functional derivative is given exactly by first-order static perturbation theory: {\displaystyle {\frac {\delta \phi _{i}(r')}{\delta v_{S}(r)}}=\phi _{i}(r)\underbrace {\sum _{t,t\neq i}{\frac {\phi _{t}(r')\phi _{t}^{\dagger }(r)}{\varepsilon _{i}-\varepsilon _{t}}}} _{G_{i}(r',r)}} where G i ( r ′ , r ) {\displaystyle G_{i}(r',r)} is a Green's function . Combining the relations above leads to the optimized effective potential (OEP) integral equation ∫ d r ′ v x ( r ′ ) X S ( r , r ′ ) = t ( r ) {\displaystyle \int dr'v_{x}(r')X_{S}(r,r')=t(r)} Usually the exchange potential is expanded in an auxiliary basis set (RI basis) { f μ } {\displaystyle \{f_{\mu }\}} as v x ( r ) = ∑ ν v x , ν f ν ( r ) {\displaystyle v_{x}(r)=\sum _{\nu }v_{x,\nu }f_{\nu }(r)} together with the regular orbital basis { χ λ } {\displaystyle \{\chi _{\lambda }\}} , requiring the so-called 3-index integrals of the form ( f ν | χ λ χ κ ) {\displaystyle (f_{\nu }|\chi _{\lambda }\chi _{\kappa })} , which turns the integral equation into the linear algebra problem X S v x = t {\displaystyle {\textbf {X}}_{\text{S}}{\textbf {v}}_{\text{x}}={\textbf {t}}} It shall be noted that many OEP codes suffer from numerical issues. [ 14 ] There are two main causes. The first is that the Hohenberg-Kohn theorem is violated, since for practical reasons a finite basis set is used; the second is that different spatial regions of the potential have different influence on the optimized energy, leading e.g. to oscillations in the convergence due to poor conditioning .
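The final linear-algebra step X_S v_x = t, together with the conditioning problems noted above, can be illustrated with a truncated-SVD (pseudoinverse) solve, one common way of discarding the numerically null directions of the response matrix. The matrix below is a small synthetic stand-in, not an actual response matrix built from 3-index integrals.

```python
import numpy as np

# Sketch of solving the ill-conditioned OEP system X v = t by truncating
# small singular values. X here is a synthetic rank-deficient symmetric
# matrix; a real calculation would assemble it from KS orbitals.

def solve_oep(X, t, rcond=1e-10):
    """Least-squares solution of X v = t, dropping singular values below
    rcond * s_max (the numerically null directions)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > rcond * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ t) / s[keep])

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 4))
X = A @ A.T                    # symmetric, rank <= 4: singular by construction
v_exact = rng.standard_normal(6)
t = X @ v_exact                # consistent right-hand side (lies in range of X)
v = solve_oep(X, t)
print(np.allclose(X @ v, t))  # residual vanishes despite the singular matrix
```

The truncation threshold plays the same role as the basis-set balancing used in practical OEP codes: directions of the potential that barely affect the energy are simply not determined by the equation and must be projected out.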
https://en.wikipedia.org/wiki/Optimized_effective_potential_method
In finance , an option symbol is a code by which options are identified on an options exchange or a futures exchange . Before 2010, the ticker (trading) symbols for US options typically looked like this: IBMAF . This consisted of a root symbol ('IBM') + month code ('A') + strike price code ('F'). The root symbol is the symbol of the stock on the stock exchange. After this comes the month code: A-L mean January–December calls , M-X mean January–December puts . The strike price code is a letter corresponding with a certain strike price (which letter corresponds with which strike price depends on the stock). On February 12, 2010, the five-character ticker format stopped being used in the US and Canada. The new standard is now fully in place, as in the first few months after February 12 the LEAP roots and the additional roots needed to handle large numbers of options for a given issuer were consolidated into a single root ticker for a given underlying symbol. Options Clearing Corporation 's (OCC) Options Symbology Initiative (OSI) mandated an industry-wide change to a new option symbol structure, resulting in option symbols 21 characters in length. March 2010 to May 2010 was the symbol consolidation period, in which all outgoing option roots were replaced with the underlying stock symbol. [ 1 ] On March 18, 2013, CBOE Mini Options became available for trading on a select group of securities (AMZN, AAPL, GOOG, GLD, and SPY). These options represent a deliverable of 10 shares of an underlying security, whereas standard equity options represent a deliverable of 100 shares. [ 2 ] CBOE appended a "7" to the end of the security symbol to represent the mini option contracts. These options series were discontinued on December 17, 2014, not long after their introduction, and mini options on stocks and ETFs no longer trade.
[ 3 ] The OCC option symbol consists of four parts: the root symbol of the underlying security, padded with spaces to 6 characters; the expiration date, as 6 digits in yymmdd format; the option type, C (call) or P (put); and the strike price, given as the price times 1000, front-padded with zeros to 8 digits. Examples: [ 4 ] The symbol SPX   141122P00019500 represents a put on SPX, expiring on 11/22/2014, with a strike price of $19.50. The symbol LAMR  150117C00052500 represents a call on LAMR, expiring on 1/17/2015, with a strike price of $52.50. The OCC option symbol can be mapped to other identifiers, such as a Financial Instrument Global Identifier (FIGI). [ 5 ] Mini-options contracts trade under a different trading symbol than standard-sized options contracts. Mini-options carry the number "7" at the end of the security symbol. For example, the Apple mini-options symbol is AAPL7. [ 6 ] Examples: The symbol AAPL7 131101C00470000 represents a mini call option (10 shares) on AAPL, with a strike price of $470, expiring on November 1, 2013. The symbol AAPL  131101C00470000 represents the standard call option (100 shares), with the same strike and expiration date.
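The 21-character OSI layout decodes mechanically: a root padded with spaces to 6 characters, a yymmdd expiration, C or P, and the strike times 1000 zero-padded to 8 digits. A small sketch of building and parsing such symbols:

```python
# Build and parse 21-character OCC/OSI option symbols.

def make_osi(root, yymmdd, cp, strike):
    """root: up to 6 chars; yymmdd: 6-digit expiration; cp: 'C' or 'P';
    strike: dollar strike price."""
    assert cp in ("C", "P") and len(root) <= 6 and len(yymmdd) == 6
    return f"{root:<6}{yymmdd}{cp}{int(round(strike * 1000)):08d}"

def parse_osi(symbol):
    assert len(symbol) == 21
    return {
        "root": symbol[:6].rstrip(),      # trailing pad spaces removed
        "expiry": symbol[6:12],           # yymmdd
        "type": symbol[12],               # 'C' call, 'P' put
        "strike": int(symbol[13:]) / 1000,
    }

# The SPX put from the example above: expiring 2014-11-22, strike $19.50.
sym = make_osi("SPX", "141122", "P", 19.50)
print(sym)
print(parse_osi(sym))
```

Multiplying the strike by 1000 lets the fixed-width field represent fractional strikes (e.g. $19.50 becomes 00019500) without a decimal point.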
https://en.wikipedia.org/wiki/Option_symbol
In probability theory , the optional stopping theorem (or sometimes Doob's optional sampling theorem , for American probabilist Joseph Doob ) says that, under certain conditions, the expected value of a martingale at a stopping time is equal to its initial expected value. Since martingales can be used to model the wealth of a gambler participating in a fair game, the optional stopping theorem says that, on average, nothing can be gained by stopping play based on the information obtainable so far (i.e., without looking into the future). Certain conditions are necessary for this result to hold true. In particular, the theorem applies to doubling strategies . The optional stopping theorem is an important tool of mathematical finance in the context of the fundamental theorem of asset pricing . A discrete-time version of the theorem is given below, with N 0 {\displaystyle \mathbb {N} _{0}} denoting the set of natural numbers, including zero. Let X = ( X t ) t ∈ N 0 {\displaystyle X=(X_{t})_{t\in \mathbb {N} _{0}}} be a discrete-time martingale and τ a stopping time with values in N 0 ∪ { ∞ } {\displaystyle \mathbb {N} _{0}\cup \{\infty \}} , both with respect to a filtration ( F t ) t ∈ N 0 {\displaystyle ({\mathcal {F}}_{t})_{t\in \mathbb {N} _{0}}} . Assume that one of the following three conditions holds: ( a ) the stopping time τ is almost surely bounded, i.e., there exists a constant c such that τ ≤ c a.s.; ( b ) the stopping time τ has finite expectation, E [ τ ] < ∞ {\displaystyle \mathbb {E} [\tau ]<\infty } , and the conditional expectations of the absolute values of the martingale increments are almost surely bounded, i.e., there exists a constant c such that E [ | X t + 1 − X t | | F t ] ≤ c {\displaystyle \mathbb {E} {\big [}|X_{t+1}-X_{t}|\,{\big |}\,{\mathcal {F}}_{t}{\big ]}\leq c} almost surely on the event { τ > t } for all t ∈ N 0 {\displaystyle t\in \mathbb {N} _{0}} ; ( c ) there exists a constant c such that | X t ∧ τ | ≤ c {\displaystyle |X_{t\wedge \tau }|\leq c} a.s. for all t ∈ N 0 {\displaystyle t\in \mathbb {N} _{0}} , where ∧ denotes the minimum. Then X τ is an almost surely well defined random variable and E [ X τ ] = E [ X 0 ] . {\displaystyle \mathbb {E} [X_{\tau }]=\mathbb {E} [X_{0}].} Similarly, if the stochastic process X = ( X t ) t ∈ N 0 {\displaystyle X=(X_{t})_{t\in \mathbb {N} _{0}}} is a submartingale or a supermartingale and one of the above conditions holds, then E [ X τ ] ≥ E [ X 0 ] {\displaystyle \mathbb {E} [X_{\tau }]\geq \mathbb {E} [X_{0}]} for a submartingale, and E [ X τ ] ≤ E [ X 0 ] {\displaystyle \mathbb {E} [X_{\tau }]\leq \mathbb {E} [X_{0}]} for a supermartingale. Under condition ( c ) it is possible that τ = ∞ happens with positive probability. On this event X τ is defined as the almost surely existing pointwise limit of ( X t ) t ∈ N 0 {\displaystyle (X_{t})_{t\in \mathbb {N} _{0}}} , see the proof below for details. Let X τ = ( X t ∧ τ ) t ∈ N 0 {\displaystyle X^{\tau }=(X_{t\wedge \tau })_{t\in \mathbb {N} _{0}}} denote the stopped process ; it is also a martingale (or a submartingale or supermartingale, respectively).
Under condition ( a ) or ( b ), the random variable X τ is well defined. Under condition ( c ) the stopped process X τ is bounded, hence by Doob's martingale convergence theorem it converges a.s. pointwise to a random variable which we call X τ . If condition ( c ) holds, then the stopped process X τ is bounded by the constant random variable M := c . Otherwise, writing the stopped process as {\displaystyle X_{t}^{\tau }=X_{0}+\sum _{s=0}^{\tau \wedge t-1}(X_{s+1}-X_{s}),\quad t\in \mathbb {N} _{0},} gives | X t τ | ≤ M for all t ∈ N 0 {\displaystyle t\in \mathbb {N} _{0}} , where {\displaystyle M:=|X_{0}|+\sum _{s=0}^{\tau -1}|X_{s+1}-X_{s}|=|X_{0}|+\sum _{s=0}^{\infty }|X_{s+1}-X_{s}|\cdot \mathbf {1} _{\{\tau >s\}}.} By the monotone convergence theorem {\displaystyle \mathbb {E} [M]=\mathbb {E} [|X_{0}|]+\sum _{s=0}^{\infty }\mathbb {E} {\big [}|X_{s+1}-X_{s}|\cdot \mathbf {1} _{\{\tau >s\}}{\big ]}.} If condition ( a ) holds, then this series only has a finite number of non-zero terms, hence M is integrable. If condition ( b ) holds, then we continue by inserting a conditional expectation and using that the event { τ > s } is known at time s (note that τ is assumed to be a stopping time with respect to the filtration), hence {\displaystyle \mathbb {E} [M]=\mathbb {E} [|X_{0}|]+\sum _{s=0}^{\infty }\mathbb {E} {\big [}\mathbb {E} {\big [}|X_{s+1}-X_{s}|\,{\big |}\,{\mathcal {F}}_{s}{\big ]}\cdot \mathbf {1} _{\{\tau >s\}}{\big ]}\leq \mathbb {E} [|X_{0}|]+c\sum _{s=0}^{\infty }\mathbb {P} (\tau >s)=\mathbb {E} [|X_{0}|]+c\,\mathbb {E} [\tau ]<\infty ,} where a representation of the expected value of non-negative integer-valued random variables is used for the last equality. Therefore, under any one of the three conditions in the theorem, the stopped process is dominated by an integrable random variable M . Since the stopped process X τ converges almost surely to X τ , the dominated convergence theorem implies {\displaystyle \mathbb {E} [X_{\tau }]=\lim _{t\to \infty }\mathbb {E} [X_{t}^{\tau }].} By the martingale property of the stopped process, E [ X t τ ] = E [ X 0 ] {\displaystyle \mathbb {E} [X_{t}^{\tau }]=\mathbb {E} [X_{0}]} , hence {\displaystyle \mathbb {E} [X_{\tau }]=\mathbb {E} [X_{0}].} Similarly, if X is a submartingale or supermartingale, respectively, change the equality in the last two formulas to the appropriate inequality.
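The theorem can also be checked numerically. The sketch below simulates a fair ±1 coin-flip game (a martingale started at X_0 = 0) with a gambler who quits when up 3, down 3, or after 50 rounds; the horizon bounds the stopping time, so condition ( a ) holds and the sample mean of X_τ should be near E[X_0] = 0. The stopping rule and all parameters are arbitrary illustration choices.

```python
import random

# Monte Carlo check of the optional stopping theorem under condition (a):
# a "quit while ahead" rule cannot shift the average outcome of a fair game.

def play(rng, up=3, down=-3, horizon=50):
    x = 0
    for _ in range(horizon):          # tau <= horizon: bounded stopping time
        if x >= up or x <= down:
            break                     # gambler's stopping rule
        x += rng.choice((-1, 1))      # fair game: martingale increment
    return x

rng = random.Random(42)
n = 20000
mean = sum(play(rng) for _ in range(n)) / n
print(mean)   # close to E[X_0] = 0, as the theorem predicts
```

Dropping the horizon (and the down-side stop) would turn this into a doubling-style strategy whose stopping time has unbounded increments in wealth terms, which is exactly where the theorem's conditions, and the guarantee, break down.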
https://en.wikipedia.org/wiki/Optional_stopping_theorem
An optode or optrode is an optical sensor device that optically measures a specific substance, usually with the aid of a chemical transducer . [ 1 ] An optode requires three components to function: a chemical that responds to an analyte , a polymer to immobilise the chemical transducer, and instrumentation ( optical fibre , light source , detector and other electronics). Optodes usually have the polymer matrix coated onto the tip of an optical fibre, but in the case of evanescent-wave optodes the polymer is coated on a section of fibre that has been unsheathed. [ citation needed ] Optodes can apply various optical measurement schemes, such as reflection , absorption , evanescent wave, luminescence ( fluorescence and phosphorescence ), chemiluminescence , and surface plasmon resonance . By far the most popular methodology is luminescence. Luminescence in solution obeys the linear Stern–Volmer relationship . Fluorescence of a molecule is quenched by specific analytes, e.g., ruthenium complexes are quenched by oxygen. When a fluorophore is immobilised within a polymer matrix, myriad micro-environments are created. The micro-environments reflect varying diffusion coefficients for the analyte. This leads to a non-linear relationship between the fluorescence and the quencher (analyte). This relationship is modelled in various ways; the most popular model is the two-site model created by James Demas (University of Virginia). The signal (fluorescence) to oxygen ratio is not linear, and an optode is most sensitive at low oxygen concentration, i.e., the sensitivity decreases as oxygen concentration increases. The optode sensors can however work in the whole region 0–100% oxygen saturation in water, and the calibration is done the same way as with the Clark-type sensor . No oxygen is consumed, and hence the sensor is insensitive to stirring, but the signal will stabilize more quickly if the sensor is stirred after being put into the sample.
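The two-site model mentioned above can be written as I/I0 = f1/(1 + K1·[Q]) + f2/(1 + K2·[Q]) with f1 + f2 = 1: two micro-environment fractions, each with its own Stern–Volmer constant. The parameter values in this sketch are invented for illustration, not calibration data for any real optode.

```python
# Two-site quenching model: an immobilised fluorophore sits in two
# micro-environments with fractions f1 and 1 - f1 and distinct
# Stern-Volmer constants k1, k2. All constants are made-up examples.

def two_site_ratio(q, f1=0.8, k1=0.6, k2=0.05):
    """Normalised intensity I/I0 at quencher concentration q (e.g. % O2)."""
    return f1 / (1 + k1 * q) + (1 - f1) / (1 + k2 * q)

# Sensitivity (signal change per unit O2) is largest at low oxygen, which
# is the non-linear behaviour described in the text:
low = two_site_ratio(0) - two_site_ratio(5)     # signal drop over 0 -> 5 % O2
high = two_site_ratio(50) - two_site_ratio(55)  # same-width step at high O2
print(low, high)
```

A single-site Stern–Volmer law would make I0/I linear in [Q]; the weighted sum of two quenching environments reproduces the downward curvature seen with polymer-immobilised dyes.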
Optical sensors are growing in popularity due to their low cost, low power requirements and long-term stability. They provide viable alternatives to electrode-based sensors or more complicated analytical instrumentation, especially in the field of environmental monitoring, [ 2 ] although in the case of oxygen optrodes they do not have the resolution of the most recent cathodic microsensors . [ 3 ]
https://en.wikipedia.org/wiki/Optode
Optoelectrowetting (OEW) is a method of liquid droplet manipulation used in microfluidics applications. This technique builds on the principle of electrowetting, which has proven useful in liquid actuation due to fast switching response times and low power consumption. Where traditional electrowetting runs into challenges, however, such as in the simultaneous manipulation of multiple droplets, OEW presents a lucrative alternative that is both simpler and cheaper to produce. OEW surfaces are easy to fabricate, since they require no lithography, and offer real-time, reconfigurable, large-scale manipulation control, since they respond to light intensity. The traditional electrowetting mechanism has been receiving increasing interest due to its ability to control tension forces on a liquid droplet. As surface tension acts as the dominant liquid actuation force in nano-scale applications, electrowetting has been used to modify this tension at the solid-liquid interface through the application of an external voltage. The applied electric field causes a change in the contact angle of the liquid droplet, and in turn changes the surface tensions across the droplet. Precise manipulation of the electric field allows control of the droplets. The droplet is placed on an insulating substrate located above an electrode. The optoelectrowetting mechanism adds a photoconductor underneath the conventional electrowetting circuit, with an AC power source attached. Under normal (dark) conditions, the majority of the system's impedance lies in the photoconducting region, and therefore the majority of the voltage drop occurs here. However, when light is shone on the system, carrier generation and recombination cause the conductivity of the photoconductor to spike, shifting the voltage drop to the insulating layer and changing the contact angle as a function of the voltage.
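The dark/light switching described above amounts to an AC voltage divider between the photoconductor and the insulating layer. A minimal Python sketch, using purely hypothetical impedance values:

```python
# Toy voltage-divider view of optoelectrowetting switching.
# The photoconductor and the insulating layer are in series, so the
# applied voltage divides in proportion to their impedances.
# All numbers are hypothetical, for illustration only.

def voltage_across_insulator(V_applied, Z_photo, Z_insulator):
    """Series divider: the insulator sees V * Z_ins / (Z_photo + Z_ins)."""
    return V_applied * Z_insulator / (Z_photo + Z_insulator)

V = 100.0              # applied AC voltage (arbitrary units)
Z_ins = 1.0e6          # insulator impedance (ohms, hypothetical)

Z_photo_dark = 1.0e8   # dark: the photoconductor dominates the impedance
Z_photo_light = 1.0e4  # illuminated: conductivity spikes, impedance drops

v_dark = voltage_across_insulator(V, Z_photo_dark, Z_ins)    # ~1% of V
v_light = voltage_across_insulator(V, Z_photo_light, Z_ins)  # ~99% of V
```

In the dark, almost the entire voltage drop sits across the photoconductor; under illumination it shifts to the insulating layer, which is what modulates the contact angle.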
The contact angle between a liquid and an electrode can be described by the Young–Lippmann relation: [ 1 ] cos θ(V_A) = cos θ_0 + εV_A^2/(2dγ_LV), where V_A, d, ε and γ_LV are the applied voltage, the thickness of the insulation layer, the dielectric constant of the insulation layer, and the interfacial tension constant between liquid and gas. In AC situations, such as OEW, V_A is replaced with the RMS voltage. The frequency of the AC power source is adjusted so that the impedance of the photoconductor dominates in the dark state. The shift in the voltage drop across the insulating layer therefore reduces the contact angle of the droplet as a function of the light intensity. By shining an optical beam on one edge of a liquid droplet, the reduced contact angle creates a pressure difference throughout the droplet and pushes the droplet's center of mass towards the illuminated side. Control of the optical beam thus results in control of the droplet's movement. Using 4 mW laser beams, OEW has been shown to move droplets of deionized water at speeds of 7 mm/s. Traditional electrowetting runs into problems because it requires a two-dimensional array of electrodes for droplet actuation. The large number of electrodes leads to complexity in both the control and the packaging of these chips, especially for smaller droplet sizes. While this problem can be solved through the integration of electronic decoders, the cost of the chip would increase significantly. [ 2 ] [ 3 ] Droplet manipulation in electrowetting-based devices is usually accomplished using two parallel plates which sandwich the droplet, actuated by digital electrodes. The minimum droplet size that can be manipulated is determined by the size of the pixelated electrodes. Single-sided continuous optoelectrowetting (SCOEW) provides a solution to the size limitation of physical pixelated electrodes by utilizing dynamic and reconfigurable optical patterns, and enables operations such as continuous transport, splitting, merging, and mixing of droplets. SCOEW is conducted on open, featureless, and photoconductive surfaces.
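The Young–Lippmann relation can be evaluated numerically. In the sketch below, the insulator thickness, relative permittivity, interfacial tension and zero-voltage contact angle are hypothetical device parameters chosen only for illustration, and the text's ε is interpreted as a relative permittivity multiplied by the vacuum permittivity ε0:

```python
import math

# Young-Lippmann relation for electrowetting on a dielectric:
#   cos(theta_V) = cos(theta_0) + eps_r * eps0 * V_rms**2 / (2 * d * gamma_lv)
# All parameter values below are hypothetical, for illustration only.

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def contact_angle(theta0_deg, V_rms, d, eps_r, gamma_lv):
    """Return the voltage-dependent contact angle in degrees."""
    cos_theta = (math.cos(math.radians(theta0_deg))
                 + eps_r * EPS0 * V_rms**2 / (2.0 * d * gamma_lv))
    # clamp: real devices show contact-angle saturation near cos(theta) = 1
    return math.degrees(math.acos(min(cos_theta, 1.0)))

# 1 um insulator, eps_r = 3, water-air interfacial tension ~0.072 N/m
angle_off = contact_angle(110.0, 0.0, 1e-6, 3.0, 0.072)   # no voltage
angle_on = contact_angle(110.0, 30.0, 1e-6, 3.0, 0.072)   # 30 V RMS applied
assert angle_on < angle_off  # applied voltage reduces the contact angle
```

Shifting the voltage drop onto the insulator (by illuminating the photoconductor) raises the effective V_rms seen by the dielectric and lowers the contact angle on the lit side, producing the pressure imbalance that moves the droplet.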
The SCOEW configuration creates a flexible interface that allows simple integration with other microfluidic components, such as sample reservoirs, through simple tubing. [ 4 ] It is also known as open optoelectrowetting (O-OEW). [ 5 ] Optoelectrowetting can also be achieved using the photocapacitance of a liquid–insulator–semiconductor junction. [ 6 ] The photo-sensitive electrowetting is achieved via optical modulation of carriers in the space charge region at the insulator-semiconductor junction, which acts as a photodiode – similar to a charge-coupled device based on a metal–oxide–semiconductor structure. Electrowetting presents a solution to one of the most challenging tasks in lab-on-a-chip systems: the handling and manipulation of complete physiological compounds. [ 7 ] Conventional microfluidic systems are not easily adaptable to handle different compounds, requiring reconfiguration that often renders the device impractical as a whole. Through OEW, a chip with one power source can be readily used with a variety of substances, with potential for multiplexed detection. Photoactuation in microelectromechanical systems (MEMS) has been demonstrated in proof-of-concept experiments. [ 8 ] [ 9 ] Instead of a typical substrate, a specialized cantilever is placed on top of the liquid-insulator-photoconductor stack. As light is shone on the photoconductor, the capillary force from the drop on the cantilever changes with the contact angle and deflects the beam. This wireless actuation can be used as a substitute for the complex circuit-based systems currently used for optical addressing and control of autonomous wireless sensors. [ 10 ]
https://en.wikipedia.org/wiki/Optoelectrowetting
The optogalvanic effect is the change in the conductivity of a gas discharge induced by a light source (typically a laser). This effect has found many applications in atomic spectroscopy and laser stabilization. [ 1 ] In general, neutral atoms/molecules are generated in a gas discharge, and some of these neutral particles are subsequently ionized, thereby creating a plasma. Incoming light will excite electronic transitions if the energy difference between pairs of atomic/molecular levels is in resonance with a frequency component of this light. The probability and the cross-section for ionizing a neutral particle depend on its initial energetic state, which means that light can change the rate at which neutral particles are ionized. Since the medium contains charged particles, the electrical properties of the gas discharge change as a result. [ 2 ] [ 3 ] This electromagnetism-related article is a stub. You can help Wikipedia by expanding it.
https://en.wikipedia.org/wiki/Optogalvanic_effect
Optogenetics began with methods to alter neuronal activity with light, using e.g. channelrhodopsins. In a broader sense, optogenetic approaches also include the use of genetically encoded biosensors to monitor the activity of neurons or other cell types by measuring fluorescence or bioluminescence. Genetically encoded calcium indicators (GECIs) are used frequently to monitor neuronal activity, but other cellular parameters such as membrane voltage or second messenger activity can also be recorded optically. The use of optogenetic sensors is not restricted to neuroscience, but plays increasingly important roles in immunology, cardiology and cancer research. The first experiments to measure intracellular calcium levels via protein expression were based on aequorin, a bioluminescent protein from the jellyfish Aequorea. To produce light, however, this protein needs the 'fuel' compound coelenterazine, which has to be added to the preparation. This is not practical in intact animals, and in addition, the temporal resolution of bioluminescence imaging is relatively poor (seconds to minutes). The first genetically encoded fluorescent calcium indicator (GECI) to be used to image activity in an animal was cameleon, designed by Atsushi Miyawaki, Roger Tsien and coworkers in 1997. [ 1 ] Cameleon was first used successfully in an animal by Rex Kerr, William Schafer and coworkers to record from neurons and muscle cells of the nematode C. elegans. [ 2 ] Cameleon was subsequently used to record neural activity in flies [ 3 ] and zebrafish. [ 4 ] In mammals, the first GECI to be used in vivo was GCaMP, [ 5 ] first developed by Junichi Nakai and coworkers in 2001. [ 6 ] GCaMP has undergone numerous improvements, notably by a team of scientists at the Janelia Farm Research Campus (GENIE project, HHMI), and GCaMP6 [ 7 ] in particular has become widely used in neuroscience.
Very recently, G protein-coupled receptors have been harnessed to generate a series of highly specific indicators for various neurotransmitters. [ 8 ] [ 9 ] Genetically encoded sensors are fusion proteins, consisting of a ligand-binding domain (sensor) and a fluorescent protein, attached by a short linker (flexible peptide). When the sensor domain binds the correct ligand, it changes conformation. This movement is transferred to the fluorescent protein, and the resulting deformation leads to a change in fluorescence. The efficiency of this process depends critically on the length of the linker region, which has to be optimized in a labor-intensive process. The fluorescent protein is often circularly permuted, i.e. new C-terminal and N-terminal ends are created. Single-wavelength sensors are easy to use for qualitative measurements, but difficult to calibrate for quantitative measurements of ligand concentration. A second class of sensors relies on Förster resonance energy transfer (FRET) between two fluorescent proteins (FPs) of different colors. The shorter-wavelength FP (donor) is excited with blue light from a laser or LED. If the second FP (acceptor) is very close, the energy is transferred to the acceptor, resulting in yellow or red fluorescence. When the acceptor FP moves further away, the donor emits green fluorescence. The sensor domain is typically spliced between the two FPs, resulting in a hinge-type movement upon ligand binding that changes the distance between donor and acceptor. The imaging procedure is more complex for FRET sensors, but the fluorescence ratio can be calibrated to measure the absolute concentration of a ligand. Read-out via fluorescence lifetime imaging (FLIM) of donor fluorescence is also possible, as the FRET process speeds up the fluorescence decay. Indicators have been designed to measure ion concentrations, membrane potential, neurotransmitters, and various intracellular signaling molecules.
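The sensitivity of FRET sensors to small conformational changes follows from the standard Förster distance dependence, E = 1/(1 + (r/R0)^6). A short Python sketch with a hypothetical Förster radius and hypothetical bound/unbound distances:

```python
# Förster resonance energy transfer (FRET) efficiency vs. donor-acceptor
# distance r, relative to the Förster radius R0 (the distance at which
# transfer efficiency is 0.5). The sixth-power dependence is why small
# hinge-type movements of the sensor domain produce large changes in the
# donor/acceptor emission ratio. Values below are illustrative only.

def fret_efficiency(r, R0):
    return 1.0 / (1.0 + (r / R0) ** 6)

R0 = 5.0  # nm, typical order of magnitude for FP donor/acceptor pairs
close = fret_efficiency(3.0, R0)  # ligand bound: FPs close, high transfer
far = fret_efficiency(8.0, R0)    # ligand free: FPs apart, low transfer
assert close > 0.5 > far
```

Moving the acceptor from 3 nm to 8 nm drops the transfer efficiency from roughly 95% to roughly 6%, which is the contrast that makes the ratiometric read-out calibratable.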
The following list provides only examples for each class; many more have been published. A recent review covers GPCR-based genetically encoded fluorescent indicators for neuromodulators. [ 9 ]
https://en.wikipedia.org/wiki/Optogenetic_methods_to_record_cellular_activity
Optogenetics is a biological technique to control the activity of neurons or other cell types with light. This is achieved by expression of light-sensitive ion channels, pumps or enzymes specifically in the target cells. On the level of individual cells, light-activated enzymes and transcription factors allow precise control of biochemical signaling pathways. [ 1 ] In systems neuroscience, the ability to control the activity of a genetically defined set of neurons has been used to understand their contribution to decision making, [ 2 ] learning, [ 3 ] fear memory, [ 4 ] mating, [ 5 ] addiction, [ 6 ] feeding, [ 7 ] and locomotion. [ 8 ] In a first medical application of optogenetic technology, vision was partially restored in a blind patient with retinitis pigmentosa. [ 9 ] Optogenetic techniques have also been introduced to map the functional connectivity of the brain. [ 10 ] [ 11 ] By altering the activity of genetically labelled neurons with light and by using imaging and electrophysiology techniques to record the activity of other cells, researchers can identify the statistical dependencies between cells and brain regions. [ 12 ] [ 13 ] In a broader sense, the field of optogenetics also includes methods to record cellular activity with genetically encoded indicators. In 2010, optogenetics was chosen as the "Method of the Year" across all fields of science and engineering by the interdisciplinary research journal Nature Methods. [ 14 ] In the same year, an article on "Breakthroughs of the Decade" in the academic research journal Science highlighted optogenetics. [ 15 ] [ 16 ] [ 17 ] In 1979, Francis Crick suggested that controlling all cells of one type in the brain, while leaving the others more or less unaltered, was a real challenge for neuroscience. Crick speculated that a technology using light might be useful to control neuronal activity with temporal and spatial precision, but at the time there was no technique to make neurons responsive to light.
By the early 1990s, LC Katz and E Callaway had shown that light could uncage glutamate. [ 18 ] Heberle and Büldt in 1994 had already shown functional heterologous expression of a bacteriorhodopsin for light-activated ion flow in yeast. [ 19 ] In 1995, Georg Nagel and Ernst Bamberg tried the heterologous expression of microbial rhodopsins (again bacteriorhodopsin, and again in a non-neural system, Xenopus oocytes) and showed light-induced currents (Nagel et al., 1995, FEBS Lett.). The earliest genetically targeted method that used light to control rhodopsin-sensitized neurons was reported in January 2002 by Boris Zemelman and Gero Miesenböck, who employed Drosophila rhodopsin in cultured mammalian neurons. [ 20 ] In 2003, Zemelman and Miesenböck developed a second method for light-dependent activation of neurons, in which the single ionotropic channels TRPV1, TRPM8 and P2X2 were gated by photocaged ligands in response to light. [ 21 ] Beginning in 2004, the Kramer and Isacoff groups, in collaboration with the Trauner group, developed organic photoswitches or "reversibly caged" compounds that could interact with genetically introduced ion channels. [ 22 ] [ 23 ] TRPV1 methodology, albeit without the illumination trigger, was subsequently used by several laboratories to alter feeding, locomotion and behavioral resilience in laboratory animals. [ 24 ] [ 25 ] [ 26 ] However, light-based approaches for altering neuronal activity were not applied outside the original laboratories, likely because the easier-to-employ channelrhodopsin was cloned soon thereafter. [ 27 ] Peter Hegemann, studying the light response of green algae at the University of Regensburg, had discovered photocurrents that were too fast to be explained by the classic G-protein-coupled animal rhodopsins.
[ 28 ] Teaming up with the electrophysiologist Georg Nagel at the Max Planck Institute in Frankfurt, they demonstrated that a single gene from the alga Chlamydomonas produced large photocurrents when expressed in the oocyte of a frog. [ 29 ] To identify expressing cells, they replaced the cytoplasmic tail of the algal protein with the fluorescent protein YFP, generating the first generally applicable optogenetic tool. [ 27 ] They stated in the 2003 paper that "expression of ChR2 in oocytes or mammalian cells may be used as a powerful tool to increase cytoplasmic Ca 2+ concentration or to depolarize the cell membrane, simply by illumination". Karl Deisseroth in the Bioengineering Department at Stanford published the notebook pages from early July 2004 of his initial experiment showing light activation of neurons expressing a channelrhodopsin. [ 30 ] In August 2005, his laboratory staff, including graduate students Ed Boyden and Feng Zhang, in collaboration with Georg Nagel, published the first demonstration of a single-component optogenetic system in neurons, [ 31 ] using the channelrhodopsin-2(H134R)-eYFP mutant from Nagel, the first mutant of channelrhodopsin-2 created since its functional characterization by Nagel and Hegemann. [ 27 ] Zhuo-Hua Pan of Wayne State University, researching how to restore sight to the blind, tried channelrhodopsin out in ganglion cells—the neurons in the eye that connect directly to the brain. Pan's first observation of optical activation of retinal neurons with channelrhodopsin was in February 2004, according to Pan, [ 32 ] five months before Deisseroth's initial observation in July 2004. [ 33 ] Indeed, the transfected neurons became electrically active in response to light, and in 2005 Zhuo-Hua Pan reported successful in vivo transfection of channelrhodopsin in retinal ganglion cells of mice, and electrical responses to photostimulation in retinal slice culture.
[ 34 ] This approach was eventually realized in a human patient by Botond Roska and coworkers in 2021. [ 9 ] In April 2005, Susana Lima and Miesenböck reported the first use of genetically targeted P2X2 photostimulation to control the behaviour of an animal. [ 35 ] They showed that photostimulation of genetically circumscribed groups of neurons, such as those of the dopaminergic system, elicited characteristic behavioural changes in fruit flies. In October 2005, Lynn Landmesser and Stefan Herlitze also published the use of channelrhodopsin-2 to control neuronal activity in cultured hippocampal neurons and chicken spinal cord circuits in intact developing embryos. [ 36 ] In addition, they introduced for the first time vertebrate rhodopsin, a light-activated G protein-coupled receptor, as a tool to inhibit neuronal activity via the recruitment of intracellular signaling pathways, both in hippocampal neurons and in the intact developing chicken embryo. [ 36 ] The groups of Alexander Gottschalk and Georg Nagel made the first ChR2 mutant (H134R) and were the first to use channelrhodopsin-2 for controlling neuronal activity in an intact animal, showing that motor patterns in the roundworm C. elegans could be evoked by light stimulation of genetically selected neural circuits (published in December 2005). [ 37 ] In mice, controlled expression of optogenetic tools is often achieved with cell-type-specific Cre/loxP methods developed for neuroscience by Joe Z. Tsien in the 1990s [ 38 ] to activate or inhibit specific brain regions and cell types in vivo. [ 39 ] In 2007, the labs of Boyden and Deisseroth (together with the groups of Gottschalk and Georg Nagel) simultaneously reported successful optogenetic inhibition of activity in neurons. [ 40 ] [ 41 ] In 2007, the groups of Georg Nagel and Hegemann started the optogenetic manipulation of cAMP. [ 42 ] In 2014, Avelar et al. reported the first rhodopsin-guanylyl cyclase gene, from a fungus. In 2015, Scheib et al. and Gao et al.
characterized the activity of the rhodopsin-guanylyl cyclase, and Shiqiang Gao et al., together with Georg Nagel and Alexander Gottschalk, identified it as the first 8-TM rhodopsin. [ 43 ] Optogenetics provides millisecond-scale temporal precision, which allows the experimenter to keep pace with fast biological information processing (for example, in probing the causal role of specific action potential patterns in defined neurons). Indeed, to probe the neural code, optogenetics by definition must operate on the millisecond timescale to allow addition or deletion of precise activity patterns within specific cells in the brains of intact animals, including mammals (see Figure 1). By comparison, the temporal precision of traditional genetic manipulations (employed to probe the causal role of specific genes within cells, via "loss-of-function" or "gain-of-function" changes in these genes) is rather slow, from hours or days to months. It is important to also have fast readouts in optogenetics that can keep pace with the optical control. This can be done with electrical recordings ("optrodes") or with reporter proteins that are biosensors, where scientists have fused fluorescent proteins to detector proteins. Additionally, beyond its scientific impact, optogenetics represents an important case study both in the value of ecological conservation (as many of the key tools of optogenetics arise from microbial organisms occupying specialized environmental niches) and in the importance of pure basic science, as these opsins were studied for decades for their own sake by biophysicists and microbiologists, without consideration of their potential value in delivering insights into neuroscience and neuropsychiatric disease.
[ 47 ]

Light-activated proteins: channels, pumps and enzymes

The hallmark of optogenetics is therefore the introduction of fast light-activated channels, pumps, and enzymes that allow temporally precise manipulation of electrical and biochemical events while maintaining cell-type resolution through the use of specific targeting mechanisms. Among the microbial opsins which can be used to investigate the function of neural systems are the channelrhodopsins (ChR2, ChR1, VChR1, and SFOs) to excite neurons, and anion-conducting channelrhodopsins for light-induced inhibition. Indirectly light-controlled potassium channels have recently been engineered to prevent action potential generation in neurons during blue light illumination. [ 48 ] [ 49 ] Light-driven ion pumps are also used to inhibit neuronal activity, e.g. halorhodopsin (NpHR), [ 50 ] enhanced halorhodopsins (eNpHR2.0 and eNpHR3.0, see Figure 2), [ 51 ] archaerhodopsin (Arch), fungal opsins (Mac) and enhanced bacteriorhodopsin (eBR). [ 52 ] Optogenetic control of well-defined biochemical events within behaving mammals is now also possible. Building on prior work fusing vertebrate opsins to specific G-protein coupled receptors, [ 53 ] a family of chimeric single-component optogenetic tools was created that allowed researchers to manipulate, within behaving mammals, the concentration of defined intracellular messengers such as cAMP and IP3 in targeted cells. [ 54 ] Other biochemical approaches to optogenetics (crucially, with tools that displayed low activity in the dark) followed soon thereafter, when optical control over small GTPases and adenylyl cyclase was achieved in cultured cells using novel strategies from several different laboratories. [ 55 ] [ 56 ] [ 57 ] Photoactivated adenylyl cyclases have been discovered in fungi and successfully used to control cAMP levels in mammalian neurons.
[ 58 ] [ 59 ] This emerging repertoire of optogenetic actuators now allows cell-type-specific and temporally precise control of multiple axes of cellular function within intact animals. [ 60 ]

Hardware for light application

Another necessary factor is hardware (e.g. integrated fiberoptic and solid-state light sources) to allow specific cell types, even deep within the brain, to be controlled in freely behaving animals. Most commonly, the latter is now achieved using the fiberoptic-coupled diode technology introduced in 2007, [ 61 ] [ 62 ] [ 63 ] though, to avoid use of implanted electrodes, researchers have engineered ways to inscribe a "window" made of zirconia that has been modified to be transparent and implanted in mouse skulls, to allow optical waves to penetrate more deeply to stimulate or inhibit individual neurons. [ 64 ] To stimulate superficial brain areas such as the cerebral cortex, optical fibers or LEDs can be directly mounted to the skull of the animal. More deeply implanted optical fibers have been used to deliver light to deeper brain areas. [ 65 ] Complementary to fiber-tethered approaches, completely wireless techniques have been developed utilizing wirelessly delivered power to headborne LEDs for unhindered study of complex behaviors in freely behaving organisms. [ 66 ]

Expression of optogenetic actuators

Optogenetics also necessarily includes the development of genetic targeting strategies, such as cell-specific promoters or other customized conditionally active viruses, to deliver the light-sensitive probes to specific populations of neurons in the brain of living animals (e.g. worms, fruit flies, mice, rats, and monkeys). In invertebrates such as worms and fruit flies, some amount of all-trans-retinal (ATR) is supplemented in the food. A key advantage of microbial opsins, as noted above, is that they are fully functional in vertebrates without the addition of exogenous co-factors.
[ 63 ] The technique of using optogenetics is flexible and adaptable to the experimenter's needs. Cation-selective channelrhodopsins (e.g. ChR2) are used to excite neurons, while anion-conducting channelrhodopsins (e.g. GtACR2) inhibit neuronal activity. Combining these tools into a single construct (e.g. BiPOLES) allows for both inhibition and excitation, depending on the wavelength of illumination. [ 68 ] Introducing the microbial opsin into a specific subset of cells is challenging. A popular approach is to introduce an engineered viral vector that contains the optogenetic actuator gene attached to a specific promoter such as CaMKIIα, which is active in excitatory neurons. This allows for some level of specificity, preventing, for example, expression in glial cells. [ 69 ] A more specific approach is based on transgenic "driver" mice which express Cre recombinase, an enzyme that catalyzes recombination between two lox-P sites, in a specific subset of cells, e.g. parvalbumin-expressing interneurons. By introducing an engineered viral vector containing the optogenetic actuator gene in between two lox-P sites, only the cells producing Cre recombinase will express the microbial opsin. This technique has allowed multiple modified optogenetic actuators to be used without the need to create a whole line of transgenic animals every time a new microbial opsin is needed. [ 70 ] After the introduction and expression of the microbial opsin, a computer-controlled light source has to be optically coupled to the brain region in question. Light-emitting diodes (LEDs) or fiber-coupled diode-pumped solid-state lasers (DPSS) are frequently used. Recent advances include the advent of wireless head-mounted devices that apply LEDs to the targeted areas and, as a result, give the animals more freedom to move. [ 71 ] [ 72 ] Fiber-based approaches can also be used to combine optical stimulation and calcium imaging.
[ 65 ] This enables researchers to visualize and manipulate the activity of single neurons in awake behaving animals. [ 73 ] It is also possible to record from multiple deep brain regions at the same time using GRIN lenses connected via optical fiber to an externally positioned photodetector and photostimulator. [ 74 ] [ 75 ] One of the main problems of optogenetics is that not all the cells in question may express the microbial opsin gene at the same level. Thus, even illumination with a defined light intensity will have variable effects on individual cells. Optogenetic stimulation of neurons in the brain is even less controlled, as the light intensity drops exponentially with distance from the light source (e.g. an implanted optical fiber). It remains difficult to target opsins to defined subcellular compartments, e.g. the plasma membrane, synaptic vesicles, or mitochondria. [ 51 ] [ 76 ] Restricting the opsin to specific regions of the plasma membrane such as dendrites, somata or axon terminals provides a more robust understanding of neuronal circuitry. [ 76 ] Mathematical modelling shows that selective expression of opsin in specific cell types can dramatically alter the dynamical behavior of the neural circuitry. In particular, optogenetic stimulation that preferentially targets inhibitory cells can transform the excitability of the neural tissue, affecting non-transfected neurons as well. [ 77 ] The original channelrhodopsin-2 closed more slowly than typical cation channels of cortical neurons, leading to prolonged depolarization and calcium influx. [ 78 ] Many channelrhodopsin variants with more favorable kinetics have since been engineered. [ 55 ] [ 56 ] A difference between natural spike patterns and optogenetic activation is that pulsed light stimulation produces synchronous activation of expressing neurons, which removes the possibility of sequential activity in the stimulated population.
Therefore, it is difficult to understand how the affected cells in the population communicate with one another, or how their phasic properties of activation relate to circuit function. Optogenetic activation has been combined with functional magnetic resonance imaging (ofMRI) to elucidate the connectome, a thorough map of the brain's neural connections. [ 76 ] [ 79 ] Precisely timed optogenetic activation is used to calibrate the delayed hemodynamic signal (BOLD) on which fMRI is based. The opsin proteins currently in use have absorption peaks across the visual spectrum, but remain considerably sensitive to blue light. [ 76 ] This spectral overlap makes it very difficult to combine opsin activation with genetically encoded indicators (GEVIs, GECIs, GluSnFR, synapto-pHluorin), most of which need blue light excitation. Opsins with infrared activation would, at a standard irradiance value, increase light penetration and augment resolution through reduction of light scattering. Due to scattering, a narrow light beam used to stimulate neurons in a patch of neural tissue can evoke a response profile that is much broader than the stimulation beam. [ 80 ] In this case, neurons may be activated (or inhibited) unintentionally. Computational simulation tools [ 81 ] [ 82 ] are used to estimate the volume of stimulated tissue for different wavelengths of light. The field of optogenetics has furthered the fundamental scientific understanding of how specific cell types contribute to the function of biological tissues such as neural circuits in vivo. On the clinical side, optogenetics-driven research has led to insights into the restoration of vision with light, [ 83 ] Parkinson's disease, [ 84 ] [ 85 ] and other neurological and psychiatric disorders such as autism, schizophrenia, drug abuse, anxiety, and depression.
[ 52 ] [ 86 ] [ 87 ] [ 88 ] An experimental treatment for blindness involves a channelrhodopsin expressed in ganglion cells, stimulated with light patterns from engineered goggles. [ 89 ] [ 9 ] Optogenetic approaches have been used to map neural circuits in the amygdala that contribute to fear conditioning. [ 90 ] [ 91 ] [ 92 ] [ 93 ] One such example of a neural circuit is the connection made from the basolateral amygdala to the dorsal-medial prefrontal cortex, where neuronal oscillations of 4 Hz have been observed in correlation with fear-induced freezing behaviors in mice. Transgenic mice were introduced with channelrhodopsin-2 attached to a parvalbumin-Cre promoter that selectively infected interneurons located both in the basolateral amygdala and in the dorsal-medial prefrontal cortex responsible for the 4 Hz oscillations. The interneurons were optically stimulated, generating a freezing behavior, providing evidence that these 4 Hz oscillations may be responsible for the basic fear response produced by the neuronal populations along the dorsal-medial prefrontal cortex and basolateral amygdala. [ 94 ] Optogenetic activation of olfactory sensory neurons was critical for demonstrating timing in odor processing [ 95 ] and for the mechanism of neuromodulator-mediated olfactory-guided behaviors (e.g. aggression, mating). [ 96 ] In addition, with the aid of optogenetics, evidence has been reproduced to show that the "afterimage" of odors is concentrated more centrally around the olfactory bulb rather than on the periphery where the olfactory receptor neurons are located. Transgenic mice expressing channelrhodopsin under the Thy1 promoter (Thy1-ChR2) were stimulated with a 473 nm laser positioned transcranially over the dorsal section of the olfactory bulb.
Longer photostimulation of mitral cells in the olfactory bulb led to observations of longer-lasting neuronal activity in the region after the photostimulation had ceased, meaning the olfactory sensory system is able to undergo long-term changes and recognize differences between old and new odors. [ 97 ] Optogenetics, freely moving mammalian behavior, in vivo electrophysiology, and slice physiology have been integrated to probe the cholinergic interneurons of the nucleus accumbens by direct excitation or inhibition. Despite representing less than 1% of the total population of accumbal neurons, these cholinergic cells are able to control the activity of the dopaminergic terminals that innervate medium spiny neurons (MSNs) in the nucleus accumbens. [ 98 ] These accumbal MSNs are known to be involved in the neural pathway through which cocaine exerts its effects, because decreasing cocaine-induced changes in the activity of these neurons has been shown to inhibit cocaine conditioning. The few cholinergic neurons present in the nucleus accumbens may prove viable targets for pharmacotherapy in the treatment of cocaine dependence. [ 52 ] In vivo and in vitro recordings from the Optophysiology Laboratory of Donald C. Cooper at the University of Colorado, Boulder showed individual CaMKII AAV-ChR2-expressing pyramidal neurons within the prefrontal cortex demonstrating high-fidelity action potential output in response to short pulses of blue light at 20 Hz (Figure 1). [ 44 ]

Motor cortex

In vivo, repeated optogenetic stimulation in healthy animals was eventually able to induce seizures. [ 99 ] This model has been termed optokindling.

Piriform cortex

In vivo, repeated optogenetic stimulation of pyramidal cells of the piriform cortex in healthy animals was eventually able to induce seizures. [ 100 ] In vitro studies have revealed a loss of feedback inhibition in the piriform circuit due to impaired GABA synthesis.
[ 100 ] Optogenetics has been applied to atrial cardiomyocytes to terminate, with light, the spiral wave arrhythmias found to occur in atrial fibrillation . [ 101 ] This method is still in the development stage. A recent study explored the possibilities of optogenetics as a method to correct arrhythmias and resynchronize cardiac pacing. The study introduced channelrhodopsin-2 into cardiomyocytes in ventricular areas of hearts of transgenic mice and performed in vitro studies of photostimulation on both open-cavity and closed-cavity mice. Photostimulation led to increased activation of cells and thus increased ventricular contractions, resulting in increased heart rates. In addition, this approach has been applied in cardiac resynchronization therapy ( CRT ) as a new biological pacemaker, substituting for electrode-based CRT. [ 102 ] More recently, optogenetics has been used in the heart to defibrillate ventricular arrhythmias with local epicardial illumination, [ 103 ] generalized whole-heart illumination, [ 104 ] or customized stimulation patterns based on arrhythmogenic mechanisms in order to lower defibrillation energy. [ 105 ] Optogenetic stimulation of the spiral ganglion in deaf mice restored auditory activity. [ 106 ] Optogenetic application to the cochlear region allows for the stimulation or inhibition of the spiral ganglion neurons (SGNs). In addition, due to the characteristics of the resting potentials of SGNs, different variants of the protein channelrhodopsin-2 have been employed, such as Chronos, [ 107 ] CatCh and f-Chrimson. [ 108 ] The Chronos and CatCh variants are particularly useful in that they spend less time in their deactivated states, which allows for more activity with fewer bursts of blue light emitted. Additionally, using engineered red-shifted channels such as f-Chrimson allows for stimulation at longer wavelengths, which decreases the potential risk of long-term phototoxicity without compromising gating speed. 
[ 109 ] As a result, the LED producing the light would require less energy, and the idea of cochlear prosthetics based on photostimulation would be more feasible. [ 110 ] Optogenetic stimulation of a modified red-light-excitable channelrhodopsin (ReaChR) expressed in the facial motor nucleus enabled minimally invasive activation of motoneurons effective in driving whisker movements in mice. [ 111 ] One novel study employed optogenetics on the dorsal raphe nucleus to both activate and inhibit dopaminergic release onto the ventral tegmental area. To produce activation, transgenic mice expressed channelrhodopsin-2 under a TH-Cre promoter, and to produce inhibition, the hyperpolarizing opsin NpHR was expressed under the same TH-Cre promoter. Results showed that optically activating dopaminergic neurons led to an increase in social interactions, and that their inhibition decreased the need to socialize only after a period of isolation. [ 112 ] Studying the visual system using optogenetics can be challenging. Indeed, the light used for optogenetic control may lead to the activation of photoreceptors, as a result of the proximity between primary visual circuits and these photoreceptors. In this case, spatial selectivity is difficult to achieve (particularly in the case of the fly optic lobe). Thus, the study of the visual system requires spectral separation, using channels that are activated by different wavelengths of light than the rhodopsins within the photoreceptors (peak activation at 480 nm for Rhodopsin 1 in Drosophila ). Red-shifted CsChrimson [ 113 ] or bistable channelrhodopsin [ 114 ] are used for optogenetic activation of neurons (i.e. depolarization ), as both allow spectral separation. In order to achieve neuronal silencing (i.e. hyperpolarization ), an anion channelrhodopsin discovered in the cryptophyte alga Guillardia theta (named GtACR1) [ 115 ] can be used. 
GtACR1 is more light-sensitive than other inhibitory channels such as the halorhodopsin class of chloride pumps and imparts a strong conductance. As its activation peak (515 nm) is close to that of Rhodopsin 1, it is necessary to carefully calibrate the optogenetic illumination as well as the visual stimulus. The factors to take into account are the wavelength of the optogenetic illumination (possibly longer than the activation peak of GtACR1), the size of the stimulus (in order to avoid activation of the channels by the stimulus light) and the intensity of the optogenetic illumination. It has been shown that GtACR1 can be a useful inhibitory tool in optogenetic studies of Drosophila 's visual system, for example by silencing T4/T5 neurons. [ 116 ] These studies can also be conducted on intact behaving animals, for instance to probe the optomotor response . Optogenetically inhibiting or activating neurons tests their necessity and sufficiency, respectively, in generating a behavior. [ 117 ] Using this approach, researchers can dissect the neural circuitry controlling motor output. By perturbing neurons at various places in the sensorimotor system, researchers have learned about the role of descending neurons in eliciting stereotyped behaviors, [ 118 ] how localized tactile sensory input [ 119 ] and activity of interneurons [ 120 ] alter locomotion, and the role of Purkinje cells in generating and modulating movement. [ 121 ] This is a powerful technique for understanding the neural underpinnings of animal locomotion and movement more broadly. The currently available optogenetic actuators allow for accurate temporal control of the required intervention (i.e. inhibition or excitation of the target neurons), with precision routinely going down to the millisecond level. [ 122 ] The temporal precision varies, however, across optogenetic actuators, [ 123 ] and depends on the frequency and intensity of the stimulation. 
[ 80 ] Experiments can now be devised where the light used for the intervention is triggered by a particular element of behavior (to inhibit the behavior), a particular unconditioned stimulus (to associate something to that stimulus) or a particular oscillatory event in the brain (to inhibit the event). [ 124 ] [ 125 ] This kind of approach has already been used in several brain regions: Sharp wave-ripple complexes (SWRs) are distinct high-frequency oscillatory events in the hippocampus thought to play a role in memory formation and consolidation. These events can be readily detected by following the oscillatory cycles of the on-line recorded local field potential . In this way the onset of the event can be used as a trigger signal for a light flash that is guided back into the hippocampus to inhibit neurons specifically during the SWRs and also to optogenetically inhibit the oscillation itself. [ 126 ] These kinds of "closed-loop" experiments are useful to study SWR complexes and their role in memory. Analogously to how natural light-gated ion channels such as channelrhodopsin-2 allow optical control of ion flux, which is especially useful in neuroscience, natural light-controlled signal transduction proteins also allow optical control of biochemical pathways, including both second-messenger generation and protein-protein interactions, which is especially useful in studying cell and developmental biology. [ 128 ] In 2002, the first example of using photoproteins from another organism for controlling a biochemical pathway was demonstrated using the light-induced interaction between plant phytochrome and phytochrome-interacting factor (PIF) to control gene transcription in yeast. [ 1 ] By fusing phytochrome to a DNA-binding domain and PIF to a transcriptional activation domain, transcriptional activation of genes recognized by the DNA-binding domain could be induced by light. 
[ 1 ] This study anticipated aspects of the later development of optogenetics in the brain, for example, by suggesting that "Directed light delivery by fiber optics has the potential to target selected cells or tissues, even within larger, more-opaque organisms." [ 1 ] The literature has been inconsistent as to whether control of cellular biochemistry with photoproteins should be subsumed within the definition of optogenetics, as optogenetics in common usage refers specifically to the control of neuronal firing with opsins, [ 129 ] [ 130 ] [ 17 ] [ 131 ] and as control of neuronal firing with opsins postdates and uses distinct mechanisms from control of cellular biochemistry with photoproteins. [ 128 ] In addition to phytochromes, which are found in plants and cyanobacteria, LOV ( light-oxygen-voltage-sensing ) domains from plants and yeast and cryptochrome domains from plants are other natural photosensory domains that have been used for optical control of biochemical pathways in cells. [ 132 ] [ 128 ] In addition, a synthetic photosensory domain has been engineered from the fluorescent protein Dronpa for optical control of biochemical pathways. [ 128 ] In photosensory domains, light absorption is coupled either to a change in protein-protein interactions (in the case of phytochromes, some LOV domains, cryptochromes, and Dronpa mutants) or to a conformational change that exposes a linked protein segment or alters the activity of a linked protein domain (in the case of phytochromes and some LOV domains). [ 128 ] Light-regulated protein-protein interactions can then be used to recruit proteins to DNA, for example to induce gene transcription or DNA modifications, or to the plasma membrane, for example to activate resident signaling proteins. [ 127 ] [ 133 ] [ 134 ] [ 135 ] [ 136 ] [ 137 ] CRY2 also clusters when active, and so has been fused with signaling domains and subsequently photoactivated to allow for clustering-based activation. 
[ 138 ] The LOV2 domain of Avena sativa (common oat) has been used to expose short peptides or an active protein domain in a light-dependent manner. [ 139 ] [ 140 ] [ 141 ] Introduction of this LOV domain into another protein can regulate function through light-induced peptide disorder. [ 142 ] The asLOV2 protein, which optogenetically exposes a peptide, has also been used as a scaffold for several synthetic light-induced dimerization and light-induced dissociation systems (iLID and LOVTRAP, respectively). [ 143 ] [ 144 ] These systems can be used to control proteins through a protein-splitting strategy. [ 145 ] Photodissociable Dronpa domains have also been used to cage a protein active site in the dark, uncage it after cyan light illumination, and recage it after violet light illumination. [ 146 ] The ability to optically control signals for various time durations is being explored to elucidate how cell signaling pathways convert differences in signal duration and dynamics into different outputs. [ 147 ] Natural signaling cascades are capable of responding with different outputs to differences in stimulus timing, duration and dynamics. [ 148 ] For example, treating PC12 cells with epidermal growth factor (EGF, inducing a transient profile of ERK activity) leads to cellular proliferation, whereas introduction of nerve growth factor (NGF, inducing a sustained profile of ERK activity) leads to differentiation into neuron-like cells. [ 149 ] This behavior was initially characterized using EGF and NGF application, but the finding has been partially replicated with optical inputs. [ 150 ] In addition, a rapid negative feedback loop in the RAF-MEK-ERK pathway was discovered using pulsatile activation of a photoswitchable RAF engineered with photodissociable Dronpa domains. [ 146 ] Professor Elias Manjarrez's research group introduced optogenetic noise-photostimulation. [ 151 ] [ 152 ] [ 153 ] This is a technique that uses random noisy light to activate neurons expressing ChR2. 
An optimal level of optogenetic noise-photostimulation of the brain can increase the somatosensory evoked field potentials, the firing frequency response of pyramidal neurons to somatosensory stimulation, and the sodium current amplitude. The powerful impact of optogenetic technology on brain research has been recognized by numerous awards to key players in the field. In 2010, Georg Nagel , Peter Hegemann and Ernst Bamberg were awarded the Wiley Prize in Biomedical Sciences [ 154 ] and they were also among those awarded the Karl Heinz Beckurts Prize in 2010. [ 155 ] In the same year, Karl Deisseroth was awarded the inaugural HFSP Nakasone Award for "his pioneering work on the development of optogenetic methods for studying the function of neuronal networks underlying behavior". [ 156 ] In 2012, Bamberg, Deisseroth, Hegemann and Georg Nagel were awarded the Zülch Prize by the Max Planck Society , [ 157 ] and Miesenböck was awarded the Baillet Latour Health Prize for "having pioneered optogenetic approaches to manipulate neuronal activity and to control animal behaviour." [ 158 ] In 2013, Georg Nagel and Hegemann were among those awarded the Louis-Jeantet Prize for Medicine . [ 159 ] Also that year, Bamberg, Boyden, Deisseroth, Hegemann, Miesenböck and Georg Nagel were jointly awarded The Brain Prize for "their invention and refinement of optogenetics." [ 160 ] [ 161 ] In 2017, Deisseroth was awarded the Else Kröner Fresenius Research Prize for "his discoveries in optogenetics and hydrogel-tissue chemistry, as well as his research into the neural circuit basis of depression." [ 162 ] In 2018, the Inamori Foundation presented Deisseroth with the Kyoto Prize for "spearheading optogenetics" and "revolutionizing systems neuroscience research." 
[ 163 ] In 2019, Bamberg, Boyden, Deisseroth, Hegemann, Miesenböck and Georg Nagel were awarded the Rumford Prize by the American Academy of Arts and Sciences in recognition of "their extraordinary contributions related to the invention and refinement of optogenetics." [ 164 ] In 2020, Deisseroth was awarded the Heineken Prize for Medicine from the Royal Netherlands Academy of Arts and Sciences , for developing optogenetics and hydrogel-tissue chemistry. [ 165 ] In 2020, Miesenböck, Hegemann and Georg Nagel jointly received the Shaw Prize in Life Science and Medicine. [ 166 ] In 2021, Hegemann, Deisseroth and Dieter Oesterhelt received the Albert Lasker Award for Basic Medical Research .
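The "closed-loop" SWR experiments described earlier reduce to a simple real-time rule: estimate ripple-band power from the streamed local field potential and fire the light source the moment power crosses a threshold. The sketch below illustrates that rule only; it is not any published detector. The function name, window size, and thresholds are hypothetical, and the power estimate (variance over a short sliding window) stands in for a proper band-pass filter.

```python
# Minimal closed-loop trigger sketch (hypothetical parameters): detect a
# rise in oscillatory power in a streamed LFP and emit a trigger index,
# as in SWR-triggered optogenetic inhibition.

from collections import deque

def ripple_trigger(samples, window=20, threshold=5.0):
    """Return sample indices at which a light pulse would be triggered.

    Crude power estimate: variance of the last `window` samples.
    A refractory flag ensures one trigger per detected event.
    """
    buf = deque(maxlen=window)
    triggers = []
    armed = True
    for i, s in enumerate(samples):
        buf.append(s)
        mean = sum(buf) / len(buf)
        power = sum((x - mean) ** 2 for x in buf) / len(buf)
        if power > threshold and armed:
            triggers.append(i)   # in a real rig: raise the laser/LED TTL line
            armed = False        # refractory: no re-trigger within the event
        elif power < threshold / 2:
            armed = True         # re-arm once power has clearly subsided
    return triggers
```

In an actual experiment the trigger would drive the light source with sub-millisecond latency, and the hysteresis (re-arming only below half the threshold) prevents repeated triggering within a single oscillatory event.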
https://en.wikipedia.org/wiki/Optogenetics
The optometer was a device used for measuring the necessary spherical and/or cylindrical corrections to be prescribed for eyeglasses , from the middle of the 18th century until around 1922, when modern instruments were developed. [ 1 ] [ 2 ] [ 3 ] The term, coined in 1738 by W. Porterfield to describe his Scheiner slit optometer, [ 4 ] and used for 200 years to describe many different inventions to measure refractive error of the eye, has completely fallen out of usage today as the task of measuring eyes for spectacles is done with modern instruments, such as the phoropter . "Phoropter" is one of several generic names for modern instruments containing an optometer for each eye (battery of lenses for determination of optical error), combined with prisms and other attachments for measuring binocularity . The term refractor is another such term, and "vision tester" or other descriptive terms are used because "phoroptor", spelled with "-or", is a trademark of one company. [ 5 ] In the middle of the 19th century, doctors tested for optical error using single hand-held lenses, held one at a time in front of the patient's eye, or in a trial frame. A wooden case with dozens or hundreds of lenses was held on the doctor's lap, or in a case near the patient's chair, as he or she examined the patient. In the later part of the 19th century, the United States, Germany, France and the UK were actively inventing numerous mechanical optometers, to speed up the process of bringing lenses before the patients' eyes. Various patented or unpatented optometers were sold throughout the later 19th and the start of the 20th centuries, some containing rotating batteries of lenses in various arrangements, usually with the name of the inventor at the front. Around 1910, binocularity was tested using trial frames which sat on the patient's face or on a support bar, with extra testing devices added to the front of the frames, such as Maddox rods, rotating prisms, and phorometers. 
The refraction part of the exam was done with trial lenses that fit into the back of the same trial frame. Optometer was the generic name for such devices, crude and simple, with rotating batteries of sphere and cylinder lenses placed in front of each eye, one at a time; there was no testing for binocularity. When the optometer and phorometer were combined into a single instrument, the modern refractor/phoropter was born. This happened in the mid-1910s, when two companies in the New York City area began to market competing versions. [ 12 ] A third US company, Bausch & Lomb, joined the competition in 1934, while the other two made improvements. Around that time, many companies in Europe and Asia began making phoropters of their own design, as well as copying American models.
https://en.wikipedia.org/wiki/Optometer_(ophthalmic_instrument)
Optomux [ 1 ] is a serial ( RS-422 / RS-485 ) network protocol originally developed by Opto 22 in 1982 which is used for industrial automation applications. Optomux is an ASCII protocol consisting of command messages and response messages containing data from an Optomux unit; messages carry a checksum to guard against communication errors. The serial data link is reliable over distances up to 4,000 feet, making it suitable for demanding applications. An Optomux system is typically made up of three main elements: The primary performance limitation of the Optomux system is the slow serial data link. The maximum data rate supported by the Optomux brain boards is 38.4 kbit/s (also dependent on the length of the communication lines). In theory, at maximum speed, the Optomux system should be capable of polling roughly 3,400 digital positions per second, or roughly 600 analog positions per second. This assumes that all the positions are on the same brain board, which is not possible with Optomux. A more realistic figure would be about half of the previous numbers. For faster serial communication, Opto 22's Mistic protocol and hardware may be used at speeds up to 115.2 kbit/s. Alternatively, a B3000 brain using the Optomux protocol can communicate at similar high speeds.
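As a hedged sketch of the framing described above: Optomux messages are ASCII, delimited by a start character and a carriage return, with a two-hex-digit 8-bit checksum over the message body. The `frame` helper and its field layout below are illustrative rather than a faithful Optomux command, and the throughput figure simply restates the serial arithmetic (38.4 kbit/s at roughly 10 bits per ASCII character).

```python
# Illustrative Optomux-style framing (hypothetical field layout).
# An ASCII body is wrapped with a start character, an 8-bit checksum
# rendered as two hex digits, and a carriage-return terminator.

def optomux_checksum(body: str) -> str:
    """8-bit sum of the body's characters, as two uppercase hex digits."""
    return format(sum(body.encode("ascii")) & 0xFF, "02X")

def frame(address: int, command: str) -> str:
    """Wrap a command for one unit: '>' + hex address + command + checksum + CR."""
    body = format(address, "02X") + command
    return ">" + body + optomux_checksum(body) + "\r"

# Serial budget: 38.4 kbit/s with ~10 bits per ASCII character
# (start bit + 8 data bits + stop bit) gives 3,840 characters/second,
# which bounds how many I/O positions can be polled each second.
chars_per_second = 38_400 // 10
```

On receive, the same checksum is recomputed over the body and compared with the transmitted hex digits; a mismatch means the message was corrupted in transit and should be discarded or retried.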
https://en.wikipedia.org/wiki/Optomux
Optomyography (OMG) was proposed in 2015 as a technique for monitoring muscular activity. [ 1 ] OMG can be used for the same applications as electromyography (EMG) and mechanomyography (MMG). However, OMG offers a superior signal-to-noise ratio and improved robustness against the disturbing factors and limitations of EMG and MMG. The basic principle of OMG is to use active near-infrared optical sensors to measure variations in the signals reflected from the surface of the skin while the muscles below and around the monitored skin spot are activated. [ 2 ] A glasses-based optomyography device was patented [ 3 ] for measuring facial expressions and emotional responses, particularly for mental health monitoring. Generating proper control signals is the most important task in controlling any kind of prosthesis, computer game or other system containing a human-computer interaction unit or module. For this purpose, surface electromyographic (s-EMG) and mechanomyographic (MMG) signals are measured during muscular activity and used not only for monitoring and assessing this activity, but also to help provide efficient rehabilitation treatment for patients with disabilities and to construct and control sophisticated prostheses for various types of amputees. However, while the existing s-EMG- and MMG-based systems have compelling benefits, many engineering challenges remain unsolved, especially with regard to the sensory control system. This article about biomedical engineering is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Optomyography
Oracle Reports [ 1 ] is a tool for developing reports against data stored in an Oracle database . Oracle Reports consists of Oracle Reports Developer (a component of the Oracle Developer Suite ) and Oracle Application Server Reports Services (a component of the Oracle Application Server ). The report output can be delivered directly to a printer or saved in the following formats: HTML , RTF , PDF , XML and Microsoft Excel . This software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Oracle_Reports
Oral and maxillofacial pathology refers to the diseases of the mouth ("oral cavity" or "stoma"), jaws ("maxillae" or "gnath") and related structures such as salivary glands , temporomandibular joints , facial muscles and perioral skin (the skin around the mouth). [ 1 ] [ 2 ] The mouth is an important organ with many different functions. It is also prone to a variety of medical and dental disorders. [ 3 ] The specialty oral and maxillofacial pathology is concerned with diagnosis and study of the causes and effects of diseases affecting the oral and maxillofacial region. It is sometimes considered to be a specialty of dentistry and pathology . [ 4 ] Sometimes the term head and neck pathology is used instead, which may indicate that the pathologist deals with otorhinolaryngologic disorders (i.e. ear, nose and throat) in addition to maxillofacial disorders. In this role there is some overlap between the expertise of head and neck pathologists and that of endocrine pathologists . The key to any diagnosis is a thorough medical, dental, social and psychological history, as well as assessment of any lifestyle risk factors that may be involved in disease processes. This is followed by a thorough clinical investigation of the extra-oral and intra-oral hard and soft tissues. [ 5 ] Sometimes a diagnosis and treatment regimen can be determined from history and examination alone; however, it is good practice to compile a list of differential diagnoses . Differential diagnosis allows for decisions on what further investigations are needed in each case. [ 5 ] There are many types of investigations in diagnosis of oral and maxillofacial diseases, including screening tests, imaging ( radiographs , CBCT , CT , MRI , ultrasound ) and histopathology ( biopsy ). [ 5 ] A biopsy is indicated when the patient's clinical presentation, past history or imaging studies do not allow a definitive diagnosis . 
A biopsy is a surgical procedure that involves the removal of a tissue sample from the living organism for the purpose of microscopic examination . In most cases, biopsies are carried out under local anaesthesia. Some biopsies are carried out endoscopically, others under image guidance, for instance ultrasound, computed tomography (CT) or magnetic resonance imaging (MRI), in the radiology suite. Examples of the most common tissues examined by means of a biopsy include oral and sinus mucosa, bone, soft tissue, skin and lymph nodes. [ 6 ] Types of biopsies typically used for diagnosing oral and maxillofacial pathology are: Excisional biopsy: A small lesion is totally excised. This method is preferred if the lesion is approximately 1 cm or less in diameter, clinically benign in appearance and surgically accessible. Large lesions which are more diffuse and dispersed in nature, or those which appear more clinically malignant, are not conducive to total removal. [ 7 ] Incisional biopsy: A small portion of the tissue is removed from an abnormal-looking area for examination. This method is useful in dealing with large lesions. If the abnormal region is easily accessed, the sample may be taken at the doctor's office. If the tumour is deeper inside the mouth or throat, the biopsy may need to be performed in an operating room, with general anaesthesia administered to eliminate any pain. [ 7 ] Exfoliative cytology: A suspected area is gently scraped to collect a sample of cells for examination. These cells are placed on a glass slide and stained with dye so that they can be viewed under a microscope. If any cells appear abnormal, a deeper biopsy will be performed. [ 7 ] Oral and maxillofacial pathology can involve many different types of tissues of the head. Different disease processes affect different tissues within this region, with various outcomes. A great many diseases involve the mouth, jaws and orofacial skin. 
The following list is a general outline of pathologies that can affect the oral and maxillofacial region; some are more common than others. This list is by no means exhaustive. Cleft lip and palate is one of the most common multifactorial congenital disorders, occurring in several forms in 1 in 500–1,000 live births. [ 8 ] [ 9 ] [ 10 ] The most common form is combined cleft lip and palate, which accounts for approximately 50% of cases, whereas isolated cleft lip accounts for 20%. [ 11 ] People with cleft lip and palate malformation tend to be less social and report lower self-esteem, and more anxiety and depression, related to their facial malformation. [ 12 ] [ 13 ] One of the major goals in the treatment of patients with clefts is to enhance social acceptance by surgical reconstruction. A cleft lip is an opening of the upper lip, mainly due to the failure of fusion of the medial nasal processes with the palatal processes; a cleft palate is an opening of the soft and hard palate in the mouth, due to the failure of the palatal shelves to fuse together. [ 10 ] The palate's main function is to demarcate the nasal and oral cavities; without it the patient will have problems with swallowing, eating and speech, affecting quality of life and, in some cases, certain functions. [ 10 ] Some examples include food going up into the nasal cavity during swallowing, as the soft palate is not present to close the cavity during the process. Speech is also affected, as the nasal cavity is a source of resonance during speech, and failure to manipulate the spaces in the cavities results in an inability to produce certain consonants in audible language. [ 10 ] Macroglossia is a rare condition, characterised by tongue enlargement, which will eventually create a crenated border in relation to the embrasures between the teeth. 
[ 14 ] Hereditary causes include vascular malformations , Down syndrome , Beckwith–Wiedemann syndrome , Duchenne muscular dystrophy , and Neurofibromatosis type I . [ 14 ] Acquired causes include carcinoma , [ 14 ] lingual thyroid , [ 5 ] myxedema , [ 14 ] and amyloidosis . [ 14 ] Consequences may include noisy breathing (airway obstruction in severe cases), drooling, difficulty eating, lisping speech, open bite , and a protruding tongue, which may ulcerate and undergo necrosis. [ 14 ] For mild cases, surgical treatment is not mandatory, but if speech is affected, speech therapy may be useful. Reduction glossectomy may be required for severe cases. [ 14 ] Ankyloglossia (also known as tongue-tie) may decrease the mobility of the tongue tip [ 15 ] and is caused by an unusually short, thick lingual frenulum , a membrane connecting the underside of the tongue to the floor of the mouth. [ 16 ] Stafne defect is a depression of the mandible , most commonly located on the lingual surface (the side nearest the tongue). [ citation needed ] Torus palatinus is a bony protrusion on the palate , usually present on the midline of the hard palate. [ citation needed ] Torus mandibularis is a bony growth in the mandible along the surface nearest to the tongue. Mandibular tori are usually present near the premolars, above the attachment of the mylohyoid muscle to the mandible. [ 17 ] Eagle syndrome is a condition in which there is abnormal ossification of the stylohyoid ligament, leading to an increase in the thickness and length of the stylohyoid process and the ligament. Pain is felt due to pressure applied to the internal jugular vein. Eagle syndrome occurs due to elongation of the styloid process or calcification of the stylohyoid ligament; however, the cause of the elongation is not clearly known. It can occur spontaneously or be present from birth. 
The normal stylohyoid process is usually 2.5–3 cm in length; if it is longer than 3 cm, it is classified as an elongated stylohyoid process. [ 18 ] Sjögren syndrome is an autoimmune chronic inflammatory disorder characterised by some of the body's own immune cells infiltrating and destroying the lacrimal and salivary glands (and other exocrine glands). There are two types of Sjögren syndrome: primary and secondary. In primary Sjögren syndrome (pSS), individuals have dry eyes (keratoconjunctivitis sicca) and a dry mouth (xerostomia). Based on a meta-analysis, the prevalence of pSS worldwide is estimated at 0.06%, with 90% of the patients being female. [ 26 ] In secondary Sjögren syndrome (sSS), individuals have a dry mouth, dry eyes and a connective tissue disorder such as rheumatoid arthritis (prevalence 7% in the UK), systemic lupus erythematosus (prevalence 6.5%–19%) or systemic sclerosis (prevalence 14%–20.5%). [ 27 ] Additional features and symptoms include: Tests used to diagnose Sjögren syndrome include: There is no cure for Sjögren syndrome; however, there are treatments used to help with the associated symptoms. Complications of Sjögren syndrome include ulcers that can develop on the surface of the eyes if the dryness is not treated. These ulcers can then cause more serious problems, such as loss of eyesight and life-long damage. Individuals with Sjögren syndrome have a slightly increased risk of developing non-Hodgkin lymphoma , a type of cancer. Other conditions such as peripheral neuropathy, Raynaud's phenomenon, kidney problems, an underactive thyroid gland and irritable bowel syndrome have been linked to Sjögren syndrome. [ 28 ] There are many oral and maxillofacial pathologies which are not fully understood. Oral and maxillofacial pathology, previously termed oral pathology, is a speciality involved with the diagnosis and study of the causes and effects of diseases affecting the oral and maxillofacial regions (i.e. the mouth, the jaws and the face). 
It can be considered a speciality of dentistry and pathology. [ 4 ] Oral pathology is a speciality closely allied with oral and maxillofacial surgery and oral medicine . The clinical evaluation and diagnosis of oral mucosal diseases are in the scope of oral and maxillofacial pathology specialists and oral medicine practitioners, [ 33 ] both disciplines of dentistry . When a microscopic evaluation is needed, a biopsy is taken and microscopically examined by a pathologist . The American Dental Association uses the term oral and maxillofacial pathology , and describes it as "the specialty of dentistry and pathology which deals with the nature, identification, and management of diseases affecting the oral and maxillofacial regions. It is a science that investigates the causes, processes and effects of these diseases." [ 34 ] In some parts of the world, oral and maxillofacial pathologists take on responsibilities in forensic odontology . There are approximately 30 consultant oral and maxillofacial pathologists in the UK. A dental degree is mandatory, but a medical degree is not. The shortest pathway to becoming an oral pathologist in the UK is completion of two years' general professional training and then five years in a diagnostic histopathology training course. After passing the required Royal College of Pathologists exams and gaining a Certificate of Completion of Specialist Training, the trainee is entitled to apply for registration as a specialist. [ 35 ] Many oral and maxillofacial pathologists in the UK are clinical academics, having undertaken a PhD either prior to or during training. Generally, oral and maxillofacial pathologists in the UK are employed by dental or medical schools and undertake their clinical work at university hospital departments. There are five practising oral pathologists in New Zealand (as of May 2013). [ 36 ] Oral pathologists in New Zealand also take part in forensic evaluations. [ 36 ]
https://en.wikipedia.org/wiki/Oral_and_maxillofacial_pathology
Oral ecology is the microbial ecology of the microorganisms found in mouths . Oral ecology, like all forms of ecology , involves the study of the living things found in oral cavities as well as their interactions with each other and with their environment. Oral ecology is frequently investigated from the perspective of oral disease prevention, often focusing on conditions such as dental caries (or "cavities"), candidiasis ("thrush"), gingivitis , periodontal disease , and others. However, many of the interactions between the microbiota and oral environment protect from disease and support a healthy oral cavity. Interactions between microbes and their environment can result in the stabilization or destabilization of the oral microbiome , with destabilization believed to result in disease states. Destabilization of the microbiome can be influenced by several factors, including diet changes, drugs or immune system disorders . Bacteria were first detected under the microscope of Dutch scientist Anton van Leeuwenhoek in the late 17th century from his own healthy human oral sample. [ 1 ] After using this technology on a healthy sample, Leeuwenhoek applied his tool to the decayed tooth matter of his wife, where he noted that the organisms present were highly similar to those found in cheese. [ 1 ] These were likely lactic acid bacteria; however, the link between bacterial acid production and tooth decay was not uncovered until much later. After this discovery and the further development of microscopy, bacteria were found within tooth cavities by multiple scientists throughout the 19th century. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] Willoughby Miller was the first recorded oral microbiologist, and he performed much of his foundational microbiology research in the laboratory of famed microbiologist Robert Koch .
During this time, Miller developed the chemo-parasitic (also referred to as "acidogenic") theory of caries, which proposed that tooth decay is initiated by bacterial acid production on the surface of teeth. [ 8 ] This theory is considered foundational to the field of dentistry as well as oral ecology, as it drew connections between the activities of microbial entities and their effects on the non-living microscopic environment. [ 2 ] [ 9 ] In ecological terms, early work in oral microbiology largely falls into a category of microbial research now described as "reductionist", generally meaning it focused heavily on the isolation of individual microbes before observation or testing. [ 10 ] It was not until the late 20th century that "holistic" approaches to oral microbiology entered the mainstream and microbial ecology began to be studied deliberately. Holistic microbiology considers not only an organism of interest but also the biological and abiotic context in which the organism is naturally found. Scientist Philip Marsh is credited with developing the ecological plaque hypothesis in 1994, in which he proposed that dental plaque can be both normal and healthy as well as "cariogenic" (cavity-creating), depending on the microbial community (or " consortia ") present in the biofilm and the community's stability. [ 11 ] Furthermore, Marsh's theory links nonliving environmental influences on the microbial community to the selection of, and changes in, microbial constituents that can create cariogenic conditions. Teeth, saliva, and oral tissues are the major components of the oral environment in which the oral microbiome resides. Like most environments, some oral environments, such as teeth and saliva , are abiotic (non-living), and some are living, such as the host immune system or host mouth mucosal tissues, including the gums, cheek ("buccal") and tongue. Saliva holds multiple roles in oral ecology.
For example, it creates a physical disturbance to microbes through a washing action. An increase in saliva flow via stimulation (e.g. chewing gum) has been shown to diminish cariogenic plaque formation. [ 12 ] Saliva is also largely responsible for environmental pH, water content, nutrients, and host-produced immune cells and antimicrobials . One major antimicrobial found in saliva (as well as mucus) is lysozyme , an enzyme that shears bacterial cells. Another critical role that saliva plays in the microscopic environment is supplying the glycoproteins bacteria use to cling to the surface of teeth. [ 12 ] [ 13 ] [ 14 ] Teeth are another example of the abiotic environmental factors involved in oral ecology. Bacteria settle on the tooth surface as a solid substrate on which they grow. Compared to floating in saliva, bacteria on teeth gain environmental stability, experiencing a consistent environment of temperature, relative oxygen exposure, nutrient density, physical disturbances, etc. While teeth provide stability to the microbial community, the overgrowth of bacteria is known to result in tooth decay, primarily due to acid production from sugar-consuming fermentative metabolisms. Some organisms associated with this condition are lactobacilli , which produce the lactic acid that breaks down tooth enamel. Host diet also influences the ecology of the mouth by altering saliva pH and nutrient content. In these ways, microbial life interacts with the oral environment. Oxygen content is a major variable that can influence the type of microbial flora present in the oral cavity. This variable is somewhat unique to the oral cavity due to its exposure to the outside of the host body. In ecology, niches are sets of conditions that can be associated with the presence of a certain organism. Thus, oxygen concentration variation throughout the mouth can be a factor in niche differentiation within this environment.
At the microscopic scale, oxygen concentration can dictate where in the mouth aerobic , anaerobic , facultative anaerobic , aerotolerant , or microaerophilic microbes grow or form biofilm. Biofilms themselves can help regulate oxygen exposure and keep anaerobic organisms at the interior, adding to the complexity of the niches within the oral cavity. Another abiotic environmental influence on oral ecology is the use of drugs, especially orally administered antibiotics. Antibiotics can kill oral bacteria as well as cause secondary environmental effects such as a decrease in saliva, leading to further changes in the abiotic microenvironment. [ 15 ] Destabilization of the bacterial community in a microbiome that results in disease is known as bacterial dysbiosis . For example, the destabilization of the bacterial community in the mouth can lead to a bloom in fungal communities, resulting in diseases such as thrush. [ 16 ] Furthermore, the development of antibiotic-resistant populations in response to treatment can result in an overpopulation of the resistant bacteria after treatment is completed, disturbing the relative abundances found pre-treatment. The host of the oral cavity is also important; this is an example of a biotic, or living, environmental factor. General host health and immune system function are critical to the oral microflora, as they determine which microbes are able to survive in the mouth. The innate immune system , which operates in animals continuously regardless of the presence of disease, is most relevant due to its constant role in oral ecology in both healthy and unhealthy hosts. This includes the production of free-floating antibodies , macrophages, and other immune cells present in saliva. At a healthy, stable state, the host immune system permits the colonization of certain microbes by not targeting them.
This can be described as "immune equilibrium": the condition in which the host and the microbiota of the oral microbiome exist in symbiosis . [ 17 ] In microbial ecology, the principle of the priority effect refers to the competitive advantage some microorganisms gain by colonizing a surface first. [ 18 ] It is generally believed that primary colonization occurs by transmission from the mother or her breastmilk ( vertical transmission ), as well as from the environment of the newborn ( horizontal transmission ). [ 18 ] [ 19 ] It has been found that different microbes are early colonizers at different locations in the oral cavity. [ 17 ] [ 18 ] [ 20 ] The initial colonizers of teeth are considered to be Streptococcus , a genus of bacteria that are usually facultative anaerobes, able to grow in both aerobic and anaerobic conditions. This is advantageous in an environment that is variably exposed to oxygen throughout the day as well as throughout the oral cavity. Although over 700 unique species of bacteria are associated with the human mouth, only seven to nine "major players" have been repeatedly identified as early colonizers in tooth plaque, including Actinomyces , Streptococcus , Neisseria , and Veillonella species. [ 21 ] [ 2 ] It is believed that the colonization of these specific genera of bacteria influences the stability and homeostasis of the resulting oral microflora. [ 22 ] This colonization occurs by the construction of and adhesion to a pellicle made of glycoproteins from host saliva. [ 12 ] [ 13 ] [ 14 ] Upon adhesion to the pellicle, early colonizing bacteria begin to produce the biofilm intended to anchor the colony to the tooth. As is common in microbiomes, this biofilm does not remain a single genus or species. In fact, the vast majority of relevant microbes undergo co-aggregation within a biofilm.
[ 23 ] [ 20 ] [ 24 ] However, it is understood that not all microbes will co-aggregate together, and amensal activity does occur between specific species, such as S. mutans and P. gingivalis . [ 14 ] These interbacterial interactions, together with the interactions with the host teeth, oxygen conditions, and saliva, compose bacterial oral ecology. Bacteria, while the most abundant, are not the only kind of microbiota present in the oral cavity. Fungal/yeast cells are also present, particularly the genus Candida . The yeast species C. albicans and C. tropicalis are known as commensals in the human mouth, meaning they are part of the normal flora and ordinarily live on the host without causing harm. [ 25 ] They are the most abundant non-bacterial microbes isolated from the human mouth. As described in the section above, co-aggregation within a biofilm is not uncommon, including the cohabitation of yeasts with bacteria. [ 26 ] Candida albicans is known to selectively participate in "dual-species" biofilms with certain species of Streptococcus bacteria through direct attachment of the yeast to the bacterial cell surface. [ 27 ] [ 28 ] This indirectly anchors the yeast to the tooth surface, providing stability. Some other, but significantly less abundant, non-bacterial microbes in the human mouth include the fungal genera Cryptococcus , Aspergillus , and Fusarium . [ 29 ]
https://en.wikipedia.org/wiki/Oral_ecology
An oral food challenge is a method for determining whether a person has a specific food allergy . It involves giving increasing amounts of a food and watching to see if an allergic reaction occurs. Oral food challenges are potentially dangerous. [ 1 ] : 15
https://en.wikipedia.org/wiki/Oral_food_challenge
Oral microbiology is the study of the microorganisms (microbiota) of the oral cavity and their interactions with each other and with the host. [ 1 ] The environment present in the human mouth is suited to the growth of the characteristic microorganisms found there. It provides a source of water and nutrients, as well as a moderate temperature. [ 2 ] Resident microbes of the mouth adhere to the teeth and gums to resist mechanical flushing from the mouth to the stomach, where acid-sensitive microbes are destroyed by hydrochloric acid . [ 2 ] [ 3 ] Anaerobic bacteria in the oral cavity include: Actinomyces , Arachnia ( Propionibacterium propionicus ), Bacteroides , Bifidobacterium , Eubacterium , Fusobacterium , Lactobacillus , Leptotrichia , Peptococcus , Peptostreptococcus , Propionibacterium , Selenomonas , Treponema , and Veillonella . [ 4 ] The most commonly found protists are Entamoeba gingivalis and Trichomonas tenax . [ 5 ] Genera of fungi that are frequently found in the mouth include Candida , Cladosporium , Aspergillus , Fusarium , Glomus , Alternaria , Penicillium , and Cryptococcus , among others. [ 6 ] Bacteria accumulate on both the hard and soft oral tissues in biofilms . Bacterial adhesion is particularly important for oral bacteria. Oral bacteria have evolved mechanisms to sense their environment and evade or modify the host. Bacteria occupy the ecological niche provided by both the tooth surface and mucosal epithelium . [ 7 ] [ 8 ] Factors of note that have been found to affect the microbial colonization of the oral cavity include the pH, oxygen concentration and its availability at specific oral surfaces, mechanical forces acting upon oral surfaces, salivary and fluid flow through the oral cavity, and age. [ 8 ] The oral microbiota has been observed to differ between men and women in conditions of oral health, but especially during periodontitis .
[ 9 ] However, a highly efficient innate host defense system constantly monitors the bacterial colonization and prevents bacterial invasion of local tissues. A dynamic equilibrium exists between dental plaque bacteria and the innate host defense system. [ 7 ] Of particular interest is the role of oral microorganisms in the two major dental diseases: dental caries and periodontal disease . [ 7 ] The oral microbiome, mainly comprising bacteria which have developed resistance to the human immune system, has been known to impact the host for its own benefit, as seen with dental cavities . The oral cavity of a new-born baby does not contain bacteria but rapidly becomes colonized with bacteria such as Streptococcus salivarius . With the appearance of the teeth during the first year, colonization by Streptococcus mutans and Streptococcus sanguinis occurs as these organisms colonise the dental surface and gingiva. Other strains of streptococci adhere strongly to the gums and cheeks but not to the teeth. The gingival crevice area (supporting structures of the teeth) provides a habitat for a variety of anaerobic species.
Bacteroides and spirochetes colonize the mouth around puberty. [ 7 ] As a diverse environment, the oral cavity offers a variety of unique ecological niches that organisms can inhabit, including the teeth, gingiva, tongue, cheeks, and palates. [ 12 ] Dental plaque is made up of the microbial community that adheres to the tooth surface; this plaque is also recognized as a biofilm . While the plaque is said to adhere to the tooth surface, its microbial community is not directly in contact with the enamel of the tooth. Instead, bacteria able to attach to the acquired pellicle , a layer containing certain salivary proteins on the surface of the teeth, begin the establishment of the biofilm. Upon dental plaque maturation, in which the microbial community grows and diversifies, the plaque becomes covered in an interbacterial matrix. [ 8 ] The calculus of the oral cavity is the result of mineralization of and around dead microorganisms; this calculus can then be colonized by living bacteria. Dental calculus can be present on supragingival and subgingival surfaces. [ 8 ] The mucosa of the oral cavity provides a unique ecological site for microbiota to inhabit. Unlike the teeth, the mucosa of the oral cavity is frequently shedding; its microbial inhabitants are therefore kept at a lower relative abundance than those of the teeth and must be able to overcome the obstacle of the shedding epithelium. [ 8 ] Unlike other mucosal surfaces of the oral cavity, the nature of the top surface of the tongue, due in part to the presence of numerous papillae, provides a unique ecological niche for its microbial inhabitants. One important characteristic of this habitat is that the spaces between the papillae tend not to receive much, if any, oxygenated saliva, which creates an environment suitable for microaerophilic and obligate anaerobic microbiota.
[ 13 ] Acquisition of the oral microbiota depends heavily on the route of delivery as an infant, vaginal versus caesarean ; upon comparing infants three months after birth, infants born vaginally were reported to have higher oral taxonomic diversity than their caesarean-born counterparts. [ 14 ] [ 12 ] Further acquisition is determined by diet, developmental accomplishments, general lifestyle habits, hygiene, and the use of antibiotics. [ 14 ] Breastfed infants are noted to have higher oral lactobacilli colonization than their formula-fed counterparts. [ 12 ] Diversity of the oral microbiome is also shown to flourish upon the eruption of primary teeth and later adult teeth, as new ecological niches are introduced to the oral cavity. [ 12 ] [ 14 ] Saliva plays a considerable role in influencing the oral microbiome. [ 15 ] More than 800 species of bacteria colonize oral mucus, 1,300 species are found in the gingival crevice, and nearly 1,000 species comprise dental plaque. The mouth is a rich environment for hundreds of species of bacteria, since saliva is mostly water and plenty of nutrients pass through the mouth each day. When kissing, it takes only 10 seconds for no less than 80 million bacteria to be exchanged by the passing of saliva. However, the effect is transitory, as each individual quickly returns to their own equilibrium. [ 16 ] [ 17 ] Due to progress in molecular biology techniques, scientific understanding of oral ecology is improving. Oral ecology is being more comprehensively mapped, including the tongue, teeth, gums and salivary glands, which are home to these communities of different microorganisms. [ 18 ] The host's immune system controls the bacterial colonization of the mouth and prevents local infection of tissues. A dynamic equilibrium exists notably between the bacteria of dental plaque and the host's immune system, enabling the plaque to remain in the mouth when other biofilms are washed away.
[ 19 ] In equilibrium, the bacterial biofilm produced by the fermentation of sugar in the mouth is quickly swept away by the saliva, except for dental plaque. When this equilibrium is disturbed, oral microorganisms grow out of control and cause oral diseases such as tooth decay and periodontal disease. Several studies have also linked poor oral hygiene to infection by pathogenic bacteria. [ 20 ] The oral microbiota is closely related to systemic health, and disturbances in the oral microbiota can lead to diseases in both the oral cavity and the rest of the body. [ 21 ] There are many factors that influence the diversity of the oral microbiota, such as age, diet, hygiene practices, and genetics. [ 22 ] Of particular interest is the role of oral microorganisms in the two major dental diseases: dental caries and periodontal disease . [ 7 ] There are many aspects of oral health which need to be preserved in order to prevent pathogenesis of the oral microbiota or diseases of the mouth. Dental plaque is the material that adheres to the teeth and consists of bacterial cells (mainly S. mutans and S. sanguis ), salivary polymers and bacterial extracellular products. Plaque is a biofilm on the surfaces of the teeth. This accumulation of microorganisms subjects the teeth and gingival tissues to high concentrations of bacterial metabolites, which results in dental disease. If not removed by brushing or flossing, the plaque can turn into tartar (its hardened form) and lead to gingivitis or periodontal disease . In the case of dental cavities , proteins involved in the colonization of teeth by Streptococcus mutans can elicit antibodies that inhibit the cariogenic process, which can be used to create vaccines . [ 19 ] Bacterial species typically associated with the oral microbiota have been found to be present in women with bacterial vaginosis .
[ 23 ] Additionally, research has correlated poor oral health, and the resulting ability of the oral microbiota to invade the body, with effects on cardiac health as well as cognitive function. [ 20 ] High levels of circulating antibodies to the oral pathogens Campylobacter rectus , Veillonella parvula and Prevotella melaninogenica are associated with hypertension in humans. [ 24 ] One of the most important factors in promoting optimal oral microbiota health is the use of good oral hygiene practices. To prevent any possible complication from an altered oral microbiota, it is important to brush and floss every day, schedule regular cleanings, eat a healthy diet, and replace toothbrushes frequently. [ 25 ] Dental plaque is associated with two extremely common oral diseases, dental caries and periodontal disease. [ 26 ] Consistent toothbrushing and flossing are essential for disrupting harmful plaque formation. Research has shown that flossing is associated with a decrease in the bacterium Streptococcus mutans , which has been shown to be involved in cavity formation. [ 27 ] Insufficient brushing and flossing can lead to gum and tooth disease , and eventually tooth loss . [ 25 ] In addition, poor dental hygiene has been linked to conditions such as osteoporosis , diabetes and cardiovascular diseases . [ 25 ] The oral environment (temperature, humidity, pH, nutrients, etc.) impacts the selection of adapted (and sometimes pathogenic) populations of microorganisms. [ 28 ] For a young person or an adult in good health and with a healthy diet, the microbes living in the mouth adhere to mucus, teeth and gums to resist removal by saliva. Eventually, they are mostly washed away and destroyed during their passage through the stomach.
[ 28 ] [ 29 ] Salivary flow and oral conditions vary from person to person, and also with the time of day and whether or not an individual sleeps with their mouth open. From youth to old age, the entire mouth interacts with and affects the oral microbiome. [ 30 ] Via the larynx , numerous bacteria can travel through the respiratory tract to the lungs . There, mucus is responsible for their removal. Pathogenic oral microflora have been linked to the production of factors which favor autoimmune diseases such as psoriasis and arthritis , as well as cancers of the colon , lungs and breasts . [ 31 ] Most of the bacterial species found in the mouth belong to microbial communities, called biofilms , a feature of which is inter-bacterial communication. Cell–cell contact is mediated by specific protein adhesins and often, as in the case of inter-species aggregation, by complementary polysaccharide receptors. Another method of communication involves cell–cell signalling molecules, which are of two classes: those used for intra-species and those used for inter-species signalling. An example of intra-species communication is quorum sensing . Oral bacteria have been shown to produce small peptides, such as competence stimulating peptides , which can help promote single-species biofilm formation. A common form of inter-species signalling is mediated by 4,5-dihydroxy-2,3-pentanedione (DPD), also known as autoinducer-2 (AI-2). [ 32 ] The evolution of the human oral microbiome can be traced through time via the sequencing of dental calculus (essentially fossilized dental plaque). [ 33 ] As mentioned in prior sections, the human oral microbiome has important implications for the health and wellness of human beings overall, and is often the only surviving health record for ancient populations. The oral microbiome has evolved over time alongside humans, in response to changes in diet, lifestyle, environment, and even the advent of cooking .
[ 33 ] There are also similarities in oral microbiota across hominins, as well as other primate species. While a core microbiome consisting of specific bacteria exists across most individuals, significant variation can arise depending on an individual's unique environment, lifestyle, physiology, and heritage. [ 34 ] Considering that oral bacteria are transferred vertically from primary caregivers in early childhood, and horizontally between family members later in life, archaeological dental calculus is a unique way to trace population structure, movement, and admixture between ancient cultures, as well as the spread of disease. [ 33 ] Ancient humans are thought to have maintained a much different oral microbiome landscape than non-human primates, despite having a shared environment. Existing data show that chimpanzees maintain higher levels of Bacteroidetes and Fusobacteria , while humans have greater proportions of Firmicutes and Proteobacteria . [ 33 ] Human oral microbiota have also been found to be less diverse when compared with other primates. [ 33 ] Of the hominins ( Homo erectus , Neanderthals , Denisovans ), Neanderthal oral microbiomes have been studied in the greatest detail. A cluster of oral microbiota has been found to be shared across Spanish Neanderthals, foraging humans from ~3000 years ago, and a single wild-caught chimpanzee . Similarities have also been found between a meat-eating Neanderthal in Belgium and human hunters in Europe and Africa. Ozga et al. (2019) found that Neanderthals and humans share similar oral microbiota, and are more similar to each other than to chimpanzees . Weyrich (2021) finds that these observations suggest humans shared an oral microbiota with Neanderthals until at least 3000 years ago.
While it is possible that humans and Neanderthals shared oral microbiota from the moment of separation (~700,000 years ago) until their extinction , Weyrich finds that an equally likely hypothesis is that convergent evolution accounted for similar oral microbiotas across Neanderthals and humans for that period. [ 35 ] The human oral microbiome has been a subject of increasing scientific scrutiny, especially in understanding its evolutionary journey. The oral microbiome has undergone significant shifts in composition, particularly during key historical periods like the Neolithic and the Industrial Revolution . The Neolithic period began around 10,000 years ago and marked a significant turning point in human history. This era saw the shift from a hunter-gatherer lifestyle to agriculture and farming. One of the most significant changes during this period was the adoption of carbohydrate-rich diets, particularly the consumption of domesticated cereals like wheat and barley . This shift had a profound impact on the oral microbiome. The increase in fermentable carbohydrates led to a surge in dental caries , a common oral health issue. Additionally, the Neolithic period witnessed a reduction in microbial diversity in the oral environment. [ 33 ] Between the Neolithic and the Medieval period , there was little change in the composition of the oral microbiota. This stability suggests that despite advancements in agriculture and societal structures, the oral microbiome remained relatively constant, indicating that a sort of equilibrium had been reached. [ 33 ] The Industrial Revolution , starting around 1850, brought about another significant shift in human lifestyle and, consequently, the oral microbiome.
The widespread availability of industrially processed flour and sugar led to a predominance of cariogenic bacteria in the oral environment. This shift has persisted to the present day, making the modern oral microbiome less diverse than ever before and rendering it less resilient to perturbations in the form of dietary imbalances or invasion by pathogenic bacterial species. [ 33 ] The shifts in the oral microbiome through time have significant implications for modern health. The current lack of diversity in the oral microbiome makes it more susceptible to imbalances and pathogenic invasions. This, in turn, can lead to a range of oral and systemic health issues, from dental caries to cardiovascular disease . Dental caries affects between 60 and 90% of children and adults in industrialized countries, and has an even more severe effect in less industrialized countries with less capable healthcare systems. [ 36 ] An understanding of how the oral microbiome has evolved can help inform sustainable healthcare interventions that work proactively with the body's natural systems, rather than fighting them with intermittent reactive interventions.
https://en.wikipedia.org/wiki/Oral_microbiology
Tissue engineering of oral mucosa combines cells, materials and engineering to produce a three-dimensional reconstruction of oral mucosa. It is meant to simulate the real anatomical structure and function of oral mucosa. Tissue-engineered oral mucosa shows promise for clinical use, such as the replacement of soft tissue defects in the oral cavity. [ 1 ] These defects can be divided into two major categories: gingival recessions ( receding gums ), which are tooth-related defects, and non-tooth-related defects. Non-tooth-related defects can be the result of trauma, chronic infection or defects caused by tumor resection or ablation (in the case of oral cancer ). Common approaches for replacing damaged oral mucosa are the use of autologous grafts and cultured epithelial sheets. Autologous grafts are used to transfer tissue from one site to another on the same body. The use of autologous grafts prevents transplantation rejection reactions . Grafts used for oral reconstruction are preferably taken from the oral cavity itself (such as gingival and palatal grafts). However, their limited availability and small size lead to the use of either skin transplants or intestinal mucosa to cover bigger defects. [ 2 ] Other than tissue shortage, donor site morbidity is a common problem that may occur when using autologous grafts. When tissue is obtained from somewhere other than the oral cavity (such as the intestine or skin), there is a risk that the graft will retain its original donor tissue characteristics. For example, skin grafts are often taken from the radial forearm or lateral upper arm when covering more extensive defects. A positive aspect of using skin grafts is the large availability of skin. However, skin grafts differ from oral mucosa in consistency, color and keratinization pattern. The transplanted skin graft often continues to grow hair in the oral cavity.
To better understand the challenges of building full-thickness engineered oral mucosa, it is important to first understand the structure of normal oral mucosa. Normal oral mucosa consists of two layers, the top stratified squamous epithelial layer and the bottom lamina propria . The epithelial layer itself consists of four layers. Depending on the region of the mouth, the epithelium may be keratinized or non-keratinized. Non-keratinized squamous epithelium covers the soft palate , lips, cheeks and the floor of the mouth. Keratinized squamous epithelium is present in the gingiva and hard palate . [ 3 ] Keratinization is the differentiation of keratinocytes in the granular layer into dead surface cells to form the stratum corneum. The cells terminally differentiate as they migrate to the surface (from the basal layer, where the progenitor cells are located, to the dead superficial surface). The lamina propria is a fibrous connective tissue layer that consists of a network of type I and III collagen and elastin fibers. The main cells of the lamina propria are the fibroblasts , which are responsible for the production of the extracellular matrix . The basement membrane forms the border between the epithelial layer and the lamina propria. Cell culture techniques make it possible to produce epithelial sheets for the replacement of damaged oral mucosa. Partial-thickness tissue engineering uses one type of cell layer, either as a monolayer or as multilayers. Monolayer epithelial sheets suffice for the study of the basic biology of oral mucosa, for example its responses to stimuli such as mechanical stress, growth factor addition and radiation damage . Oral mucosa, however, is a complex multilayer structure with proliferating and differentiating cells, and monolayer epithelial sheets have been shown to be fragile, difficult to handle and likely to contract without a supporting extracellular matrix. Monolayer epithelial sheets can be used to manufacture multilayer cultures.
These multilayer epithelial sheets show signs of differentiation such as the formation of a basement membrane and keratinization. [ 1 ] Fibroblasts are the most common cells in the extracellular matrix and are important for epithelial morphogenesis . If fibroblasts are absent from the matrix, the epithelium stops proliferating but continues to differentiate. The structures obtained by partial-thickness oral mucosa engineering form the basis for full-thickness oral mucosa engineering. With the advancement of tissue engineering an alternative approach was developed: full-thickness engineered oral mucosa. Full-thickness engineered oral mucosa is a better simulation of the in vivo situation because it takes the anatomical structure of native oral mucosa into account. Problems such as tissue shortage and donor site morbidity do not occur when using full-thickness engineered oral mucosa. The main goal when producing full-thickness engineered oral mucosa is to make it resemble normal oral mucosa as closely as possible. This is achieved by using a combination of different cell types and scaffolds . To obtain the best results, the type and origin of the fibroblasts and keratinocytes used in oral mucosa tissue engineering are important factors to take into account. Fibroblasts are usually taken from the dermis of the skin or oral mucosa. Keratinocytes can be isolated from different areas of the oral cavity (such as the palate or gingiva). It is important that the fibroblasts and keratinocytes are used at the earliest stage possible, as the function of these cells decreases with time. The transplanted keratinocytes and fibroblasts should adapt to their new environment and take on the function of the native tissue. There is a risk of losing the transplanted tissue if the cells do not adapt properly. This adaptation goes more smoothly when the donor tissue cells resemble the cells of the native tissue. 
A scaffold or matrix serves as a temporary supporting structure (an artificial extracellular matrix), the initial architecture on which the cells can grow three-dimensionally into the desired tissue. A scaffold must provide the environment needed for cellular growth and differentiation; it must be strong enough to withstand mechanical stress and guide tissue growth. Moreover, scaffolds should be biodegradable and degrade at the same rate as the tissue regenerates, so that they are optimally replaced by the host tissue. [ citation needed ] There are numerous scaffolds to choose from, and when choosing a scaffold its biocompatibility, porosity and stability should also be taken into account. [ 4 ] Available scaffolds for oral mucosa tissue engineering are: Fibroblast-populated skin substitutes are scaffolds which contain fibroblasts that are able to proliferate and produce extracellular matrix and growth factors within 2 to 3 weeks. This creates a matrix similar to that of a dermis. Commercially available examples include: Gelatin is the denatured form of collagen. Gelatin possesses several advantages for tissue-engineering applications: it attracts fibroblasts, is non-immunogenic, is easy to manipulate and boosts the formation of epithelium. There are three types of gelatin-based scaffolds: Glucan is a polysaccharide with antibacterial , antiviral and anticoagulant properties. Hyaluronic acid is added to improve the biological and mechanical properties of the matrix. [ 1 ] Collagen is the primary component of the extracellular matrix. Collagen scaffolds efficiently support fibroblast growth, which in turn allows keratinocytes to grow into well-formed multilayers. Collagen (mainly collagen type I) is often used as a scaffold because it is biocompatible, non-immunogenic and readily available. However, collagen biodegrades relatively rapidly and is not good at withstanding mechanical forces. 
Improved characteristics can be created by cross-linking collagen-based matrices: this is an effective method to correct their instability and mechanical properties. [ 6 ] Compound collagen-based scaffolds have been developed in an attempt to improve the function of these scaffolds for tissue engineering. An example of a compound collagen scaffold is the collagen-chitosan matrix. Chitosan is a polysaccharide that is chemically similar to cellulose . Unlike collagen, chitosan biodegrades relatively slowly. However, chitosan is not very biocompatible with fibroblasts. Crosslinking gelatin or collagen with chitosan improves both the stability of the gelatin- or collagen-based scaffold and the biocompatibility of the chitosan; the two materials compensate for each other's shortcomings. [ 4 ] [ 6 ] The collagen-elastin membrane, the collagen-glycosaminoglycan (C-GAG) matrix, the cross-linked collagen matrix Integra and Terudermis are other examples of compound collagen scaffolds. [ 7 ] Allogeneic cultured keratinocytes and fibroblasts in bovine collagen (Gintuit) is the first cell-based product made from allogeneic human cells and bovine collagen approved by the US Food and Drug Administration (FDA). [ 8 ] It is an allogeneic cellularized scaffold product and was approved for medical use in the United States in March 2012. [ 9 ] Fibrin-based scaffolds contain fibrin, which gives the keratinocytes stability. Moreover, they are simple to reproduce and handle. [ 1 ] A hybrid scaffold is a skin substitute based on a combination of synthetic and natural materials. Examples of hybrid scaffolds are HYAFF and Laserskin. These hybrid scaffolds have been shown to have good in-vitro and in-vivo biocompatibility, and their biodegradability is controllable. [ 7 ] The use of natural materials in scaffolds has its disadvantages. Usually they are expensive, not available in large quantities, and they carry a risk of disease transmission. This has led to the development of synthetic scaffolds. 
When producing synthetic scaffolds there is full control over their properties. For example, they can be made to have good mechanical properties and the right biodegradability . For synthetic scaffolds, thickness, porosity and pore size are important factors for controlling connective tissue formation. Examples of synthetic scaffolds are: The use of electrospinning to produce synthetic scaffolds dates back to at least the late 1980s, when Simon showed that the technology could be used to produce nano- and submicron-scale fibrous scaffolds from polymer solutions specifically intended for use as in vitro cell and tissue substrates. This early use of electrospun lattices for cell culture and tissue engineering showed that various cell types would adhere to and proliferate upon polycarbonate fibers. It was noted that, as opposed to the flattened morphology typically seen in 2D culture, cells grown on the electrospun fibers exhibited the more rounded three-dimensional morphology generally observed in tissues in vivo . [ 10 ] Although full-thickness engineered oral mucosa has not yet been commercialized for clinical use, clinical studies have been done on intra- and extra-oral treatments with it. Full-thickness engineered oral mucosa is mainly used in maxillofacial reconstructive surgery and periodontal and peri-implant reconstruction. Good clinical and histological results have been obtained. For example, there is vascular ingrowth and the transplanted keratinocytes integrate well into the native epithelium. Full-thickness engineered oral mucosa has also shown good results for extra-oral applications such as urethral reconstruction, ocular surface reconstruction and eyelid reconstruction. [ 1 ]
https://en.wikipedia.org/wiki/Oral_mucosa_tissue_engineering
Orange carotenoid protein ( OCP ) is a water-soluble protein which plays a role in photoprotection in diverse cyanobacteria . [ 1 ] It is the only photoactive protein known to use a carotenoid as the photoresponsive chromophore . The protein consists of two domains, with a single keto-carotenoid molecule non-covalently bound between the two domains. It is a very efficient quencher of excitation energy absorbed by the primary light-harvesting antenna complexes of cyanobacteria, the phycobilisomes . The quenching is induced by blue-green light. It is also capable of preventing oxidative damage by directly scavenging singlet oxygen ( 1 O 2 ). OCP was first described in 1981 by Holt and Krogmann [ 2 ] who isolated it from the unicellular cyanobacterium Arthrospira maxima , [ 3 ] [ 4 ] although its function would remain obscure until 2006. The crystal structure of the OCP was reported in 2003. [ 5 ] At the same time the protein was shown to be an effective quencher of singlet oxygen and was suggested to be involved in photoprotection, or carotenoid transport. [ 6 ] [ 7 ] [ 8 ] In 2000, it was demonstrated that cyanobacteria could perform photoprotective fluorescence quenching independent of lipid phase transitions, differential transmembrane pH, and inhibitors. [ 9 ] The action spectrum for this quenching process suggested the involvement of carotenoids, [ 10 ] and the specific involvement of the OCP was later demonstrated by Kirilovsky and coworkers in 2006. [ 11 ] In 2008, OCP was shown to require photoactivation by strong blue-green light for its photoprotective quenching function. [ 12 ] Photoactivation is accompanied by a pronounced color change, from orange to red, which had been previously observed by Kerfeld et al. in the initial structural studies. [ 7 ] [ 8 ] [ 5 ] In 2015 a combination of biophysical methods by researchers in Berkeley showed that the visible color change is the consequence of a 12 Å translocation of the carotenoid . 
[ 13 ] [ 14 ] [ 15 ] For a long time, cyanobacteria were considered incapable of performing non-photochemical quenching (NPQ) as a photoprotective mechanism, relying instead on a mechanism of energy redistribution between the two photosynthetic reaction centers , PSII and PSI , known as "state transitions" . [ 16 ] OCP is found in a majority of cyanobacterial genomes, [ 1 ] [ 17 ] with remarkable conservation of its amino acid sequence, implying evolutionary constraints to preserve an important function. Mutant cells engineered to lack OCP photobleach under high light [ 11 ] and become photoinhibited more rapidly under fluctuating light. [ 18 ] Under nutrient stress conditions, which are expected to be the norm in marine environments, photoprotective mechanisms such as OCP become important even at lower irradiances. [ 19 ] This protein is not found in chloroplasts, and appears to be specific to cyanobacteria. [ 20 ] Upon illumination with blue-green light, OCP switches from an orange form (OCP O ) to a red form (OCP R ). The reversion of OCP R to OCP O is light-independent and occurs slowly in darkness. OCP O is considered the dark, stable form of the protein, and does not contribute to phycobilisome quenching. OCP R is considered to be essential for induction of the photoprotection mechanism. The photoconversion from the orange to the red form has a poor light efficiency (very low quantum yield), which helps ensure that the protein's photoprotective role functions only under high-light conditions; otherwise, the dissipative NPQ process could unproductively divert light energy away from photosynthesis under light-limiting conditions. [ 12 ] [ 15 ] As evidenced by decreased fluorescence, OCP in its red form is capable of dissipating absorbed light energy from the phycobilisome antenna complex. According to Rakhimberdieva and coworkers, about 30–40% of the energy absorbed by phycobilisomes does not reach the reaction centers when the carotenoid-induced NPQ is active. 
[ 21 ] The exact mechanism and quenching site, in both the carotenoid and the phycobilisome, remain uncertain. The linker polypeptide ApcE in the allophycocyanin (APC) core of the phycobilisomes is known to be important, [ 11 ] [ 22 ] but is not the site of quenching. [ 23 ] Several lines of evidence suggest that it is the 660 nm fluorescence emission band of the APC core which is quenched by OCP R . [ 21 ] [ 23 ] [ 24 ] The temperature dependence of the rate of fluorescence quenching is similar to that of soluble protein folding, [ 25 ] supporting the hypothesis that OCP O slightly unfolds when it converts to OCP R . As first shown in 2003, [ 5 ] the auxiliary function of the carotenoid as a quencher of singlet oxygen also contributes to the photoprotective role of OCP; this has been demonstrated under strong orange-red light, conditions under which OCP cannot be photoactivated for its energy-quenching role. [ 26 ] This is significant because all oxygenic phototrophs are at particular risk of oxidative damage initiated by singlet oxygen ( 1 O 2 ), which is produced when their own light-harvesting pigments act as photosensitizers. [ 27 ] The three-dimensional protein structure of OCP (in the OCP O form) was solved in 2003, before its photoprotective role had been defined. [ 6 ] The 35 kDa protein contains two structural domains : an all- α-helical N-terminal domain (NTD) consisting of two interleaved 4-helix bundles, and a mixed α/β C-terminal domain (CTD). The two domains are connected by an extended linker. In OCP O , the carotenoid spans both domains, which are tightly associated in this form of the protein. In 2013 Kerfeld and co-workers showed that the NTD is the effector (quencher) domain of the protein while the CTD plays a regulatory role. [ 28 ] The OCP participates in key protein–protein interactions that are critical to its photoprotective function. 
The activated OCP R form binds to allophycocyanin in the core of the phycobilisome and initiates the OCP-dependent photoprotective quenching mechanism. Another protein, the fluorescence recovery protein (FRP), interacts with the CTD in OCP R and catalyzes the reaction which reverts it back to the OCP O form. [ 29 ] Because OCP O cannot bind to the phycobilisome antenna, FRP effectively can detach OCP from the antenna and restore full light-harvesting capacity. The primary structure (amino acid sequence) is highly conserved among OCP sequences, and the full-length protein is usually co-located on the chromosome with a second open reading frame [ 7 ] [ 8 ] that was later characterized as the FRP. [ 1 ] Often, biosynthetic genes for ketocarotenoid synthesis (e.g., CrtW) are nearby. These conserved functional linkages underscore the evolutionary importance of the OCP style of photoprotection for many cyanobacteria. The first structure determination of the OCP coincided with the beginning of the genome sequencing era, and it was already apparent in 2003 that there is also a variety of evolutionarily related genes which encode proteins with only one of the two domains present in OCP. [ 5 ] [ 7 ] [ 8 ] The N-terminal domain (NTD), "Carot_N" , is found only in cyanobacteria, but exhibits a considerable amount of gene duplication. The C-terminal domain (CTD), however, is homologous with the widespread NTF2 superfamily, which shares a protein fold with its namesake, nuclear transport factor 2 , as well as around 20 other subfamilies of proteins with functions as diverse as limonene-1,2-epoxide hydrolase, SnoaL polyketide cyclase, and delta-5-3-ketosteroid isomerase (KSI). Most, if not all, of the members of the NTF2 superfamily form oligomers, often using the surface of their beta sheet to interact with another monomer or other protein. 
Bioinformatic analyses carried out over the past 15 years have resulted in the identification of new groups of carotenoid proteins: [ 30 ] in addition to new families of the OCP, [ 17 ] there are HCPs [ 31 ] and CCPs, which correspond to the NTD and CTD of the OCP, respectively. Based on the primary structure, the HCPs can be subdivided into at least nine evolutionarily distinct clades, each of which binds carotenoid. [ 32 ] [ 33 ] The CCPs resolve into two major groups, and these proteins also bind carotenoid. [ 34 ] These data, together with the ability to devolve OCP into its two component domains while retaining function, [ 35 ] have led to a reconstruction of the evolution of the OCP. [ 36 ] [ 35 ] Its water-solubility, together with its status as the only known photoactive protein containing a carotenoid, makes the OCP a valuable model for studying solution-state energetic and photophysical properties of carotenoids, which are a diverse class of molecules found across all domains of life. Moreover, carotenoids are widely investigated for their properties as anti-oxidants, and thus the protein may serve as a template for delivery of carotenoids for therapeutic purposes in human medicine. Because of its high efficiency of fluorescence quenching, coupled with its low quantum yield of photoactivation by specific wavelengths of light, OCP has ideal properties as a photoswitch and has been proposed as a novel system for developing optogenetics technologies [ 1 ] and may have other applications in optofluidics and biophotonics .
https://en.wikipedia.org/wiki/Orange_carotenoid_protein
In geometry , orbifold notation (or orbifold signature ) is a system, invented by the mathematician William Thurston and promoted by John Conway , for representing types of symmetry groups in two-dimensional spaces of constant curvature. The advantage of the notation is that it describes these groups in a way which indicates many of the groups' properties: in particular, it follows William Thurston in describing the orbifold obtained by taking the quotient of Euclidean space by the group under consideration. Groups representable in this notation include the point groups on the sphere ( S 2 {\displaystyle S^{2}} ), the frieze groups and wallpaper groups of the Euclidean plane ( E 2 {\displaystyle E^{2}} ), and their analogues on the hyperbolic plane ( H 2 {\displaystyle H^{2}} ). The following types of Euclidean transformation can occur in a group described by orbifold notation: All translations which occur are assumed to form a discrete subgroup of the group of symmetries being described. Each group is denoted in orbifold notation by a finite string made up from the following symbols: A string written in boldface represents a group of symmetries of Euclidean 3-space. A string not written in boldface represents a group of symmetries of the Euclidean plane, which is assumed to contain two independent translations. Each symbol corresponds to a distinct transformation: An orbifold symbol is called good if it is not one of the following: p , pq , * p , * pq , for p , q ≥ 2, and p ≠ q . An object is chiral if its symmetry group contains no reflections; otherwise it is called achiral . The corresponding orbifold is orientable in the chiral case and non-orientable otherwise. The Euler characteristic of an orbifold can be read from its Conway symbol, as follows. Each feature has a value: a digit n before any star counts ( n − 1)/ n ; a digit n after a star counts ( n − 1)/2 n ; a star or a cross counts 1; and a handle (the symbol o ) counts 2. Subtracting the sum of these values from 2 gives the Euler characteristic. 
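This bookkeeping can be sketched in code. The sketch below uses the standard Conway feature values (a handle o counts 2, a mirror * or cross-cap × counts 1, a gyration digit n counts (n − 1)/n, and a corner digit n after a * counts (n − 1)/2n); the function names are hypothetical, and a symbol is given as a Python list, with '*' and 'x' standing for mirrors and cross-caps:

```python
from fractions import Fraction

def orbifold_euler_characteristic(symbol):
    """Euler characteristic of an orbifold symbol, e.g. ['*', 4, 4, 2] for *442.

    Feature values (Conway): handle 'o' = 2; mirror '*' or cross-cap 'x' = 1;
    a gyration digit n (before any '*') = (n-1)/n; a corner digit n
    (after a '*') = (n-1)/(2n).  chi = 2 - sum of feature values.
    """
    total = Fraction(0)
    after_star = False
    for feature in symbol:
        if feature == 'o':
            total += 2
        elif feature in ('*', 'x'):
            total += 1
            if feature == '*':
                after_star = True
        else:                      # an integer n
            n = Fraction(feature)
            total += (n - 1) / (2 * n) if after_star else (n - 1) / n
    return 2 - total

def group_order(symbol):
    """Order of the group: infinite when chi = 0, otherwise 2 / chi."""
    chi = orbifold_euler_characteristic(symbol)
    return float('inf') if chi == 0 else 2 / chi
```

For example, the spherical symbol *432 (full octahedral symmetry) gives order 48, while *442 sums its feature values to exactly 2, so its Euler characteristic is 0 and it is one of the 17 wallpaper groups.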
If the sum of the feature values is 2, the order is infinite, i.e., the notation represents a wallpaper group or a frieze group. Indeed, Conway's "Magic Theorem" indicates that the 17 wallpaper groups are exactly those with the sum of the feature values equal to 2. Otherwise, the order is 2 divided by the Euler characteristic. The following groups are isomorphic: This is because 1-fold rotation is the "empty" rotation. The symmetry of a 2D object without translational symmetry can be described by the 3D symmetry type by adding a third dimension to the object which does not add or spoil symmetry. For example, for a 2D image we can consider a piece of cardboard with that image displayed on one side; the shape of the cardboard should be such that it does not spoil the symmetry, or it can be imagined to be infinite. Thus we have n • and * n •. The bullet (•) is added on one- and two-dimensional groups to imply the existence of a fixed point. (In three dimensions these groups exist in an n-fold digonal orbifold and are represented as nn and * nn .) Similarly, a 1D image can be drawn horizontally on a piece of cardboard, with a provision to avoid additional symmetry with respect to the line of the image, e.g. by drawing a horizontal bar under the image. Thus the discrete symmetry groups in one dimension are *•, *1•, ∞• and *∞•. Another way of constructing a 3D object from a 1D or 2D object for describing the symmetry is taking the Cartesian product of the object and an asymmetric 2D or 1D object, respectively. The first few hyperbolic groups, ordered by their Euler characteristic, are:
https://en.wikipedia.org/wiki/Orbifold_notation
orbit@home [ 1 ] was a BOINC -based volunteer computing project of the Planetary Science Institute . It used the " Orbit Reconstruction, Simulation and Analysis" [ 2 ] framework to optimize the search strategies that are used to find near-Earth objects . On March 4, 2008, orbit@home completed the installation of its new server and officially opened to new members. On April 11, orbit@home launched a Windows version of its client . On February 16, 2013, the project was halted due to lack of grant funding . [ 3 ] However, on July 23, 2013, the orbit@home project was selected for funding by NASA 's Near Earth Object Observation program, and it was announced that orbit@home would resume operations sometime in 2014 or 2015. [ 4 ] As of July 13, 2018, orbit@home is offline according to its website, and the upgrade announcement has been removed. This astronomy -related article is a stub . You can help Wikipedia by expanding it . This network -related software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Orbit@home
In mathematics , specifically in the study of dynamical systems , an orbit is a collection of points related by the evolution function of the dynamical system. It can be understood as the subset of phase space covered by the trajectory of the dynamical system under a particular set of initial conditions , as the system evolves. As a phase space trajectory is uniquely determined for any given set of phase space coordinates, it is not possible for different orbits to intersect in phase space, therefore the set of all orbits of a dynamical system is a partition of the phase space. Understanding the properties of orbits by using topological methods is one of the objectives of the modern theory of dynamical systems. For discrete-time dynamical systems , the orbits are sequences ; for real dynamical systems , the orbits are curves ; and for holomorphic dynamical systems, the orbits are Riemann surfaces . Given a dynamical system ( T , M , Φ) with T a group , M a set and Φ the evolution function, the set γ x := { Φ( t , x ) : t ∈ I ( x ) } is called the orbit through x . An orbit which consists of a single point is called a constant orbit . A non-constant orbit is called closed or periodic if there exists a t ≠ 0 {\displaystyle t\neq 0} in I ( x ) {\displaystyle I(x)} such that Φ( t , x ) = x . Given a real dynamical system ( R , M , Φ), I ( x ) is an open interval in the real numbers , that is I ( x ) = ( t x − , t x + ) {\displaystyle I(x)=(t_{x}^{-},t_{x}^{+})} . For any x in M , the set γ x + := { Φ( t , x ) : t ∈ [0, t x + ) } is called the positive semi-orbit through x , and γ x − := { Φ( t , x ) : t ∈ ( t x − , 0] } is called the negative semi-orbit through x . For a discrete time dynamical system with a time-invariant evolution function f {\displaystyle f} : The forward orbit of x is the set O + ( x ) := { f n ( x ) : n ≥ 0 } . If the function is invertible, the backward orbit of x is the set O − ( x ) := { f − n ( x ) : n ≥ 0 } , and the orbit of x is the set O ( x ) := O + ( x ) ∪ O − ( x ) , where f n denotes the n -fold composition of f with itself. For a general dynamical system, especially in homogeneous dynamics, when one has a "nice" group G {\displaystyle G} acting on a probability space X {\displaystyle X} in a measure-preserving way, an orbit G . 
x ⊂ X {\displaystyle G.x\subset X} will be called periodic (or equivalently, closed) if the stabilizer S t a b G ( x ) {\displaystyle Stab_{G}(x)} is a lattice inside G {\displaystyle G} . In addition, a related term is a bounded orbit, when the set G . x {\displaystyle G.x} is pre-compact inside X {\displaystyle X} . The classification of orbits can lead to interesting questions with relations to other mathematical areas; for example, the Oppenheim conjecture (proved by Margulis) and the Littlewood conjecture (partially proved by Lindenstrauss) deal with the question of whether every bounded orbit of some natural action on the homogeneous space S L 3 ( R ) ∖ S L 3 ( Z ) {\displaystyle SL_{3}(\mathbb {R} )\backslash SL_{3}(\mathbb {Z} )} is indeed a periodic one; this observation is due to Raghunathan and, in different language, to Cassels and Swinnerton-Dyer . Such questions are intimately related to deep measure-classification theorems. It is often the case that the evolution function can be understood to compose the elements of a group , in which case the group-theoretic orbits of the group action are the same thing as the dynamical orbits. A basic classification of orbits is: An orbit can fail to be closed in two ways. It could be an asymptotically periodic orbit if it converges to a periodic orbit. Such orbits are not closed because they never truly repeat, but they become arbitrarily close to a repeating orbit. An orbit can also be chaotic . These orbits come arbitrarily close to the initial point, but fail to ever converge to a periodic orbit. They exhibit sensitive dependence on initial conditions , meaning that small differences in the initial value will cause large differences in future points of the orbit. There are other properties of orbits that allow for different classifications. An orbit can be hyperbolic if nearby points approach or diverge from the orbit exponentially fast.
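For a discrete-time system, the forward orbit and the periodicity test described above are easy to compute directly. A minimal sketch (the function names are hypothetical, and exact periodicity detection as written only makes sense for maps on discrete state spaces, where equality of points is exact):

```python
def forward_orbit(f, x0, n_steps):
    """Initial segment of the forward orbit O+(x0) = {x0, f(x0), f(f(x0)), ...}."""
    orbit = [x0]
    for _ in range(n_steps):
        orbit.append(f(orbit[-1]))
    return orbit

def least_period(f, x0, max_period):
    """Least t > 0 with f^t(x0) == x0 (a periodic orbit through x0),
    or None if no period up to max_period is found."""
    x = x0
    for t in range(1, max_period + 1):
        x = f(x)
        if x == x0:
            return t
    return None
```

For instance, under the doubling map x ↦ 2x mod 7 the point 1 has the periodic orbit 1 → 2 → 4 → 1, of period 3, while x ↦ x + 1 on the integers has no periodic orbits at all.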
https://en.wikipedia.org/wiki/Orbit_(dynamics)
Orbit modeling is the process of creating mathematical models to simulate motion of a massive body as it moves in orbit around another massive body due to gravity . Other forces such as gravitational attraction from tertiary bodies, air resistance , solar pressure , or thrust from a propulsion system are typically modeled as secondary effects. Directly modeling an orbit can push the limits of machine precision due to the need to model small perturbations to very large orbits. Because of this, perturbation methods are often used to model the orbit in order to achieve better accuracy. The study of orbital motion and mathematical modeling of orbits began with the first attempts to predict planetary motions in the sky, although in ancient times the causes remained a mystery. Newton , at the time he formulated his laws of motion and of gravitation , applied them to the first analysis of perturbations, [ 1 ] recognizing the complex difficulties of their calculation. [ 1 ] Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for purposes of navigation at sea. The complex motions of orbits can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is typically a conic section , and can be readily modeled with the methods of geometry . This is called a two-body problem , or an unperturbed Keplerian orbit . The differences between the Keplerian orbit and the actual motion of the body are caused by perturbations . These perturbations are caused by forces other than the gravitational effect between the primary and secondary body and must be modeled to create an accurate orbit simulation. Most orbit modeling approaches model the two-body problem and then add models of these perturbing forces and simulate these models over time. 
Perturbing forces may include gravitational attraction from other bodies besides the primary, solar wind, drag, magnetic fields, and propulsive forces. Analytical solutions (mathematical expressions to predict the positions and motions at any future time) for simple two-body and three-body problems exist; none have been found for the n -body problem except for certain special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape. [ 2 ] Due to the difficulty in finding analytic solutions to most problems of interest, computer modeling and simulation is typically used to analyze orbital motion. A wide variety of software is available to simulate orbits and trajectories of spacecraft. In its simplest form, an orbit model can be created by assuming that only two bodies are involved, both behave as spherical point-masses, and that no other forces act on the bodies. For this case the model is simplified to a Kepler orbit . Keplerian orbits follow conic sections . The mathematical model of the orbit which gives the distance r between a central body and an orbiting body can be expressed as: r = a (1 − e 2 ) / (1 + e cos θ ), where a is the semi-major axis , e is the orbital eccentricity and θ is the true anomaly . Alternately, the equation can be expressed as: r = p / (1 + e cos θ ), where p = a (1 − e 2 ) is called the semi-latus rectum of the curve. This form of the equation is particularly useful when dealing with parabolic trajectories, for which the semi-major axis is infinite. An alternate approach uses Isaac Newton 's law of universal gravitation , F = G m 1 m 2 / r 2 , where F is the magnitude of the gravitational force between the two point masses, G is the gravitational constant , m 1 and m 2 are the two masses, and r is the distance between them. Making an additional assumption that the mass of the primary body is much greater than the mass of the secondary body and substituting in Newton's second law of motion results in the differential equation r̈ + ( μ / r 3 ) r = 0, where r is the position vector of the secondary body relative to the primary and μ = G ( M + m ) ≈ GM is the gravitational parameter. Solving this differential equation results in Keplerian motion for an orbit. In practice, Keplerian orbits are typically only useful for first-order approximations, special cases, or as the base model for a perturbed orbit. 
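The conic-section equation for a Kepler orbit can be evaluated directly. A minimal sketch (the function name is hypothetical) using the semi-latus rectum form, which remains finite for parabolic trajectories:

```python
import math

def kepler_radius(a, e, theta):
    """Distance from the focus on a Keplerian conic section,
    r = a(1 - e^2) / (1 + e cos(theta)),
    with a the semi-major axis, e the eccentricity, theta the true anomaly."""
    p = a * (1.0 - e**2)               # semi-latus rectum
    return p / (1.0 + e * math.cos(theta))
```

At theta = 0 (periapsis) this reduces to r = a(1 − e), and at theta = π (apoapsis) to r = a(1 + e), a quick sanity check for any implementation.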
Orbit models are typically propagated in time and space using special perturbation methods. This is performed by first modeling the orbit as a Keplerian orbit. Then perturbations are added to the model to account for the various perturbations that affect the orbit. [ 1 ] Special perturbations can be applied to any problem in celestial mechanics , as the approach is not limited to cases where the perturbing forces are small. [ 2 ] Special perturbation methods are the basis of the most accurate machine-generated planetary ephemerides [ 1 ] (see, for instance, the Jet Propulsion Laboratory Development Ephemeris ). Cowell's method is a special perturbation method; [ 3 ] mathematically, for n {\displaystyle n} mutually interacting bodies, the Newtonian forces on body i {\displaystyle i} from the other bodies j {\displaystyle j} are simply summed thus: r̈ i = Σ j ≠ i G m j ( r j − r i ) / r ij 3 , where r ij is the distance between bodies i and j , with all vectors being referred to the barycenter of the system. This equation is resolved into components in x {\displaystyle x} , y {\displaystyle y} , z {\displaystyle z} and these are integrated numerically to form the new velocity and position vectors as the simulation moves forward in time. The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large. [ 4 ] Another disadvantage is that in systems with a dominant central body, such as the Sun , it is necessary to carry many significant digits in the arithmetic because of the large difference in the forces of the central body and the perturbing bodies. [ 5 ] Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time. 
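Cowell's method as described, summing the pairwise Newtonian accelerations and integrating the components numerically, can be sketched as follows. This is a simplified illustration with hypothetical function names; a leapfrog (kick-drift-kick) step stands in for the higher-order integrators that production ephemeris codes use:

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(positions, masses):
    """Cowell-style summed Newtonian accelerations on each of n bodies.

    positions: (n, 3) array of barycentric position vectors [m];
    masses: length-n array [kg]. Returns an (n, 3) array [m/s^2].
    """
    n = len(masses)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rij = positions[j] - positions[i]       # vector from i to j
            acc[i] += G * masses[j] * rij / np.linalg.norm(rij) ** 3
    return acc

def leapfrog_step(positions, velocities, masses, dt):
    """Advance all bodies by one kick-drift-kick step of size dt."""
    v_half = velocities + 0.5 * dt * accelerations(positions, masses)
    new_pos = positions + dt * v_half
    new_vel = v_half + 0.5 * dt * accelerations(new_pos, masses)
    return new_pos, new_vel
```

The double loop makes the ease of programming evident; it also makes the method's cost O(n²) per step, and, as noted above, close approaches force the step size down sharply.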
[ 6 ] Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with resulting lesser errors), and the method is much less affected by extreme perturbations than Cowell's method. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification . [ 4 ] [ 7 ] Letting ρ {\displaystyle {\boldsymbol {\rho }}} be the radius vector of the osculating orbit , r {\displaystyle \mathbf {r} } the radius vector of the perturbed orbit, and δ r {\displaystyle \delta \mathbf {r} } the variation from the osculating orbit, we have δ r = r − ρ ( 1 ), and hence δ r̈ = r̈ − ρ̈ ( 2 ). r ¨ {\displaystyle \mathbf {\ddot {r}} } and ρ ¨ {\displaystyle {\boldsymbol {\ddot {\rho }}}} are just the equations of motion of r {\displaystyle \mathbf {r} } and ρ {\displaystyle {\boldsymbol {\rho }}} : r̈ = a per − μ r / r 3 ( 3 ) and ρ̈ = − μ ρ / ρ 3 ( 4 ), where μ = G ( M + m ) {\displaystyle \mu =G(M+m)} is the gravitational parameter with M {\displaystyle M} and m {\displaystyle m} the masses of the central body and the perturbed body, a per {\displaystyle \mathbf {a} _{\text{per}}} is the perturbing acceleration , and r {\displaystyle r} and ρ {\displaystyle \rho } are the magnitudes of r {\displaystyle \mathbf {r} } and ρ {\displaystyle {\boldsymbol {\rho }}} . Substituting from equations ( 3 ) and ( 4 ) into equation ( 2 ) gives δ r̈ = a per + μ ( ρ / ρ 3 − r / r 3 ), which, in theory, could be integrated twice to find δ r {\displaystyle \delta \mathbf {r} } . Since the osculating orbit is easily calculated by two-body methods, ρ {\displaystyle {\boldsymbol {\rho }}} and δ r {\displaystyle \delta \mathbf {r} } are accounted for and r {\displaystyle \mathbf {r} } can be solved. In practice, the quantity in the brackets, ρ ρ 3 − r r 3 {\displaystyle {{\boldsymbol {\rho }} \over \rho ^{3}}-{\mathbf {r} \over r^{3}}} , is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits . [ 8 ] [ 9 ] In 1991 Victor R. 
Bond and Michael F. Fraietta created an efficient and highly accurate method for solving the two-body perturbed problem. [ 10 ] This method uses the linearized and regularized differential equations of motion derived by Hans Sperling and a perturbation theory based on these equations developed by C.A. Burdet in the year 1864. In 1973, Bond and Hanssen improved Burdet's set of differential equations by using the total energy of the perturbed system as a parameter instead of the two-body energy and by reducing the number of elements to 13. In 1989 Bond and Gottlieb embedded the Jacobian integral, which is a constant when the potential function is explicitly dependent upon time as well as position, in the Newtonian equations. The Jacobian constant was used as an element to replace the total energy in a reformulation of the differential equations of motion. In this process, another element, proportional to a component of the angular momentum, is introduced. This brought the total number of elements back to 14. In 1991, Bond and Fraietta made further revisions by replacing the Laplace vector with another vector integral as well as another scalar integral which removed small secular terms that appeared in the differential equations for some of the elements. [ 11 ] The Sperling–Burdet method is executed in a five-step process as follows: [ 11 ] Perturbing forces cause orbits to deviate from a perfect Keplerian orbit. Models for each of these forces are created and executed during the orbit simulation so their effects on the orbit can be determined. The Earth is not a perfect sphere, nor is its mass evenly distributed within it. This results in the point-mass gravity model being inaccurate for orbits around the Earth, particularly low Earth orbits .
To account for variations in gravitational potential around the surface of the Earth, the gravitational field of the Earth is modeled with spherical harmonics . [ 12 ] When modeling perturbations of an orbit around a primary body, only the sum of the f n , m {\displaystyle {\mathbf {f} }_{n,m}} terms needs to be included in the perturbation, since the point-mass gravity model is accounted for in the − μ R 2 r ^ {\displaystyle -{\frac {\mu }{R^{2}}}\mathbf {\hat {r}} } term. Gravitational forces from third bodies can cause perturbations to an orbit. For example, the Sun and Moon cause perturbations to orbits around the Earth. [ 13 ] These forces are modeled in the same way that gravity is modeled for the primary body, by means of direct gravitational N-body simulations . Typically, only a spherical point-mass gravity model is used for modeling effects from these third bodies. [ 14 ] Some special cases of third-body perturbations have approximate analytic solutions; for example, closed-form expressions exist for the perturbations of the right ascension of the ascending node and the argument of perigee of a circular Earth orbit. [ 13 ] Solar radiation pressure causes perturbations to orbits. The magnitude of the acceleration it imparts to a spacecraft in Earth orbit is modeled in terms of the solar radiation pressure and the spacecraft's reflectivity and area-to-mass ratio. [ 13 ] For orbits around the Earth, solar radiation pressure becomes a stronger force than drag above 800 km (500 mi) altitude. [ 13 ] There are many different types of spacecraft propulsion , of which rocket engines are among the most widely used; the force of a rocket engine is modeled in terms of the propellant mass flow rate and exhaust velocity. [ 15 ] Another possible method is a solar sail . Solar sails use radiation pressure to achieve a desired propulsive force. [ 16 ] The perturbation model due to the solar wind can be used as a model of the propulsive force from a solar sail. The primary non-gravitational force acting on satellites in low Earth orbit is atmospheric drag.
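The article's exact radiation-pressure equation was not preserved here; a common stand-in is the "cannonball" model from standard astrodynamics texts, in which the acceleration scales with the solar radiation pressure at 1 AU, a reflectivity coefficient, and the area-to-mass ratio. The helper name and spacecraft values below are illustrative assumptions:

```python
def srp_acceleration(cr, area_m2, mass_kg, p_sr=4.56e-6):
    """Solar radiation pressure acceleration magnitude (m/s^2), cannonball model.

    a = p_SR * C_R * A / m, where p_SR ~ 4.56e-6 N/m^2 is the mean solar
    radiation pressure at 1 AU, C_R the reflectivity coefficient (1 <= C_R <= 2),
    A the sun-facing area, and m the spacecraft mass.
    """
    return p_sr * cr * area_m2 / mass_kg

# Illustrative case: a 1000 kg spacecraft with 20 m^2 sun-facing area, C_R = 1.3.
a = srp_acceleration(1.3, 20.0, 1000.0)
# a is on the order of 1e-7 m/s^2 -- tiny, yet the dominant non-gravitational
# force above ~800 km, where drag has fallen off.
```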
[ 13 ] Drag acts in opposition to the direction of velocity and removes energy from an orbit. The force due to drag is proportional to the atmospheric density, the square of the velocity, the drag coefficient, and the cross-sectional area of the spacecraft. Orbits with an altitude below 120 km (75 mi) generally have such high drag that they decay too rapidly to give a satellite a sufficient lifetime to accomplish any practical mission. On the other hand, orbits with an altitude above 600 km (370 mi) have relatively small drag, so that the orbit decays slowly enough to have no real impact on the satellite over its useful life. [ 13 ] The density of air can vary significantly in the thermosphere , where most low Earth orbiting satellites reside. The variation is primarily due to solar activity, which can therefore greatly influence the force of drag on a spacecraft and complicate long-term orbit simulation. [ 13 ] Magnetic fields can play a significant role as a source of orbit perturbation, as was seen with the Long Duration Exposure Facility . [ 12 ] Like gravity, the magnetic field of the Earth can be expressed through spherical harmonics . [ 12 ]
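Since the drag equation itself was lost in transcription, the sketch below uses the standard form a = −(1/2) ρ |v| v C_D A / m, acting opposite to the velocity relative to the atmosphere. The helper name and the ISS-like numbers are illustrative assumptions, and the density value in particular swings with solar activity as noted above:

```python
import numpy as np

def drag_acceleration(vel, rho, cd, area_m2, mass_kg):
    """Atmospheric drag acceleration vector (m/s^2).

    Standard model: a = -(1/2) * rho * |v| * v * C_D * A / m, directed
    opposite to the velocity relative to the atmosphere.
    """
    v = np.linalg.norm(vel)
    return -0.5 * rho * cd * area_m2 / mass_kg * v * vel

# Illustrative ISS-like case: ~7.66 km/s at ~400 km altitude,
# rho ~ 4e-12 kg/m^3 (strongly solar-activity dependent), C_D ~ 2.2.
a = drag_acceleration(np.array([7660.0, 0.0, 0.0]), 4e-12, 2.2, 1000.0, 420000.0)
# The result opposes the velocity (negative x-component here).
```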
https://en.wikipedia.org/wiki/Orbit_modeling
In mathematics , an orbit portrait is a combinatorial tool used in complex dynamics for understanding the behavior of one-complex dimensional quadratic maps . Given a quadratic map f {\displaystyle f} from the complex plane to itself and a repelling or parabolic periodic orbit O = { z 1 , … z n } {\displaystyle {\mathcal {O}}=\{z_{1},\ldots z_{n}\}} of f {\displaystyle f} , so that f ( z j ) = z j + 1 {\displaystyle f(z_{j})=z_{j+1}} (where subscripts are taken modulo n {\displaystyle n} ), let A j {\displaystyle A_{j}} be the set of angles whose corresponding external rays land at z j {\displaystyle z_{j}} . Then the set P = P ( O ) = { A 1 , … A n } {\displaystyle {\mathcal {P}}={\mathcal {P}}({\mathcal {O}})=\{A_{1},\ldots A_{n}\}} is called the orbit portrait of the periodic orbit O {\displaystyle {\mathcal {O}}} . All of the sets A j {\displaystyle A_{j}} must have the same number of elements, which is called the valence of the portrait. P = { ( 1 3 , 2 3 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {1}{3}},{\frac {2}{3}}\right)\right\rbrace } P = { ( 3 7 , 4 7 ) , ( 6 7 , 1 7 ) , ( 5 7 , 2 7 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {3}{7}},{\frac {4}{7}}\right),\left({\frac {6}{7}},{\frac {1}{7}}\right),\left({\frac {5}{7}},{\frac {2}{7}}\right)\right\rbrace } P = { ( 4 9 , 5 9 ) , ( 8 9 , 1 9 ) , ( 7 9 , 2 9 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {4}{9}},{\frac {5}{9}}\right),\left({\frac {8}{9}},{\frac {1}{9}}\right),\left({\frac {7}{9}},{\frac {2}{9}}\right)\right\rbrace } P = { ( 11 31 , 12 31 ) , ( 22 31 , 24 31 ) , ( 13 31 , 17 31 ) , ( 26 31 , 3 31 ) , ( 21 31 , 6 31 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {11}{31}},{\frac {12}{31}}\right),\left({\frac {22}{31}},{\frac {24}{31}}\right),\left({\frac {13}{31}},{\frac {17}{31}}\right),\left({\frac {26}{31}},{\frac {3}{31}}\right),\left({\frac {21}{31}},{\frac {6}{31}}\right)\right\rbrace } Valence is 3, so 3 rays land on each orbit
point. P = { ( 1 7 , 2 7 , 4 7 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {1}{7}},{\frac {2}{7}},{\frac {4}{7}}\right)\right\rbrace } For the complex quadratic polynomial with c = −0.03111 + 0.79111 i , the portrait of the parabolic period-3 orbit is: [ 1 ] P = { ( 74 511 , 81 511 , 137 511 ) , ( 148 511 , 162 511 , 274 511 ) , ( 296 511 , 324 511 , 37 511 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {74}{511}},{\frac {81}{511}},{\frac {137}{511}}\right),\left({\frac {148}{511}},{\frac {162}{511}},{\frac {274}{511}}\right),\left({\frac {296}{511}},{\frac {324}{511}},{\frac {37}{511}}\right)\right\rbrace } Rays for the above angles land on points of that orbit. The parameter c is the center of a period-9 hyperbolic component of the Mandelbrot set. For the parabolic Julia set with c = −1.125 + 0.21650635094611 i , a root point between the period-2 and period-6 components of the Mandelbrot set, the orbit portrait of the period-2 orbit with valence 3 is: [ 2 ] P = { ( 22 63 , 25 63 , 37 63 ) , ( 11 63 , 44 63 , 50 63 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {22}{63}},{\frac {25}{63}},{\frac {37}{63}}\right),\left({\frac {11}{63}},{\frac {44}{63}},{\frac {50}{63}}\right)\right\rbrace } P = { ( 1 15 , 2 15 , 4 15 , 8 15 ) } {\displaystyle {\mathcal {P}}=\left\{\left({\frac {1}{15}},{\frac {2}{15}},{\frac {4}{15}},{\frac {8}{15}}\right)\right\rbrace } Every orbit portrait P {\displaystyle {\mathcal {P}}} has the following properties: each A j {\displaystyle A_{j}} is a finite subset of the circle; the doubling map carries A j {\displaystyle A_{j}} bijectively onto A j + 1 {\displaystyle A_{j+1}} , preserving cyclic order; all of the angles are periodic under doubling with a common period; and the sets A 1 , … , A n {\displaystyle A_{1},\ldots ,A_{n}} are pairwise unlinked. Any collection { A 1 , … , A n } {\displaystyle \{A_{1},\ldots ,A_{n}\}} of subsets of the circle which satisfies these four properties is called a formal orbit portrait . It is a theorem of John Milnor that every formal orbit portrait is realized by the actual orbit portrait of a periodic orbit of some quadratic one-complex-dimensional map. Orbit portraits contain dynamical information about how external rays and their landing points map in the plane, but formal orbit portraits are no more than combinatorial objects.
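External angles for z ↦ z² + c transform under the angle-doubling map t ↦ 2t (mod 1), so a claimed portrait can be checked mechanically: doubling must carry each A_j onto A_{j+1}. A small sketch (the helper name is an illustrative assumption) using two of the portraits listed above:

```python
from fractions import Fraction

def double(angles):
    """Apply the angle-doubling map t -> 2t (mod 1) to a set of angles."""
    return {(2 * t) % 1 for t in angles}

# Period-3 portrait of valence 3 quoted above: a single set A_1 = {1/7, 2/7, 4/7}.
A1 = {Fraction(1, 7), Fraction(2, 7), Fraction(4, 7)}
assert double(A1) == A1  # doubling permutes the set, so the portrait is consistent

# Period-3 portrait {(3/7, 4/7), (6/7, 1/7), (5/7, 2/7)}: doubling carries
# each A_j onto A_{j+1} (indices cyclic).
A = [{Fraction(3, 7), Fraction(4, 7)},
     {Fraction(6, 7), Fraction(1, 7)},
     {Fraction(5, 7), Fraction(2, 7)}]
for j in range(3):
    assert double(A[j]) == A[(j + 1) % 3]
```

Exact rational arithmetic via `fractions.Fraction` avoids any floating-point ambiguity in the mod-1 reduction.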
Milnor's theorem states that, in truth, there is no distinction between the two. Orbit portraits where all of the sets A j {\displaystyle A_{j}} have only a single element are called trivial, except for the orbit portrait 0 {\displaystyle {0}} . An alternative definition is that an orbit portrait is nontrivial if it is maximal, which in this case means that there is no orbit portrait that strictly contains it (i.e. there does not exist an orbit portrait { A 1 ′ , … , A n ′ } {\displaystyle \{A_{1}^{\prime },\ldots ,A_{n}^{\prime }\}} such that A j ⊊ A j ′ {\displaystyle A_{j}\subsetneq A_{j}^{\prime }} ). It is easy to see that every trivial formal orbit portrait is realized as the orbit portrait of some orbit of the map f 0 ( z ) = z 2 {\displaystyle f_{0}(z)=z^{2}} , since every external ray of this map lands, and they all land at distinct points of the Julia set . Trivial orbit portraits are pathological in some respects, and in the sequel we will refer only to nontrivial orbit portraits. In an orbit portrait { A 1 , … , A n } {\displaystyle \{A_{1},\ldots ,A_{n}\}} , each A j {\displaystyle A_{j}} is a finite subset of the circle R / Z {\displaystyle \mathbb {R} /\mathbb {Z} } , so each A j {\displaystyle A_{j}} divides the circle into a number of disjoint intervals, called complementary arcs based at the point z j {\displaystyle z_{j}} . The length of each interval is referred to as its angular width. Each z j {\displaystyle z_{j}} has a unique largest arc based at it, which is called its critical arc. The critical arc always has length greater than 1 2 {\displaystyle {\frac {1}{2}}} . These arcs have the property that every arc based at z j {\displaystyle z_{j}} , except for the critical arc, maps diffeomorphically to an arc based at z j + 1 {\displaystyle z_{j+1}} , and the critical arc covers every arc based at z j + 1 {\displaystyle z_{j+1}} once, except for a single arc, which it covers twice.
The arc that it covers twice is called the critical value arc for z j + 1 {\displaystyle z_{j+1}} . This is not necessarily distinct from the critical arc. When c {\displaystyle c} escapes to infinity under iteration of f c {\displaystyle f_{c}} , or when c {\displaystyle c} is in the Julia set, then c {\displaystyle c} has a well-defined external angle. Call this angle θ c {\displaystyle \theta _{c}} . θ c {\displaystyle \theta _{c}} is in every critical value arc. Also, the two inverse images of c {\displaystyle c} under the doubling map ( θ c 2 {\displaystyle {\frac {\theta _{c}}{2}}} and θ c + 1 2 {\displaystyle {\frac {\theta _{c}+1}{2}}} ) are both in every critical arc. Among all of the critical value arcs for all of the A j {\displaystyle A_{j}} 's, there is a unique smallest critical value arc I P {\displaystyle {\mathcal {I}}_{\mathcal {P}}} , called the characteristic arc , which is strictly contained within every other critical value arc. The characteristic arc is a complete invariant of an orbit portrait, in the sense that two orbit portraits are identical if and only if they have the same characteristic arc. Much as the rays landing on the orbit divide up the circle, they divide up the complex plane. For every point z j {\displaystyle z_{j}} of the orbit, the external rays landing at z j {\displaystyle z_{j}} divide the plane into v {\displaystyle v} open sets called sectors based at z j {\displaystyle z_{j}} . Sectors are naturally identified with the complementary arcs based at the same point. The angular width of a sector is defined as the length of its corresponding complementary arc. Sectors are called critical sectors or critical value sectors when the corresponding arcs are, respectively, critical arcs and critical value arcs. [ 4 ] Sectors also have the interesting property that 0 {\displaystyle 0} is in the critical sector of every point, and c {\displaystyle c} , the critical value of f c {\displaystyle f_{c}} , is in the critical value sector.
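The arc bookkeeping above can be made concrete: for a finite set of angles, the complementary arcs are the gaps between consecutive points of R/Z, and the critical arc is the unique one of width greater than 1/2. A sketch (helper name assumed for illustration) using the set A_1 = {1/7, 2/7, 4/7} from the examples earlier:

```python
from fractions import Fraction

def complementary_arcs(A):
    """Angular widths of the complementary arcs cut out of R/Z by a finite set A."""
    pts = sorted(A)
    return [(pts[(i + 1) % len(pts)] - pts[i]) % 1 for i in range(len(pts))]

widths = complementary_arcs({Fraction(1, 7), Fraction(2, 7), Fraction(4, 7)})
# widths == [1/7, 2/7, 4/7]: the arcs partition the circle, and the unique
# critical arc has angular width 4/7 > 1/2, as the text asserts.
assert sum(widths) == 1
assert max(widths) > Fraction(1, 2)
```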
Two parameter rays with angles t − {\displaystyle t_{-}} and t + {\displaystyle t_{+}} land at the same point of the Mandelbrot set in parameter space if and only if there exists an orbit portrait P {\displaystyle {\mathcal {P}}} with the interval [ t − , t + ] {\displaystyle [t_{-},t_{+}]} as its characteristic arc. For any orbit portrait P {\displaystyle {\mathcal {P}}} let r P {\displaystyle r_{\mathcal {P}}} be the common landing point of the two external angles in parameter space corresponding to the characteristic arc of P {\displaystyle {\mathcal {P}}} . These two parameter rays, along with their common landing point, split the parameter space into two open components. Let the component that does not contain the point 0 {\displaystyle 0} be called the P {\displaystyle {\mathcal {P}}} -wake and denoted as W P {\displaystyle {\mathcal {W}}_{\mathcal {P}}} . A quadratic polynomial f c ( z ) = z 2 + c {\displaystyle f_{c}(z)=z^{2}+c} realizes the orbit portrait P {\displaystyle {\mathcal {P}}} with a repelling orbit exactly when c ∈ W P {\displaystyle c\in {\mathcal {W}}_{\mathcal {P}}} . P {\displaystyle {\mathcal {P}}} is realized with a parabolic orbit only for the single value c = r P {\displaystyle c=r_{\mathcal {P}}} . Other than the zero portrait, there are two types of orbit portraits: primitive and satellite. If v {\displaystyle v} is the valence of an orbit portrait P {\displaystyle {\mathcal {P}}} and r {\displaystyle r} is the recurrent ray period, then these two types may be characterized as follows: Orbit portraits turn out to be useful combinatorial objects in studying the connection between the dynamics and the parameter spaces of other families of maps as well. In particular, they have been used to study the patterns of all periodic dynamical rays landing on a periodic cycle of a unicritical anti-holomorphic polynomial. [ 5 ]
https://en.wikipedia.org/wiki/Orbit_portrait
In computational chemistry , orbital-free density functional theory ( OFDFT ) is a quantum mechanical approach to electronic structure determination which is based on functionals of the electronic density . It is most closely related to the Thomas–Fermi model . Orbital-free density functional theory is, at present, less accurate than Kohn–Sham density functional theory models, but it has the advantage of being fast, so that it can be applied to large systems. The Hohenberg–Kohn theorems [ 1 ] guarantee that, for a system of atoms, there exists a functional of the electron density that yields the total energy. Minimization of this functional with respect to the density gives the ground-state density from which all of the system's properties can be obtained. Although the Hohenberg–Kohn theorems tell us that such a functional exists, they do not give us guidance on how to find it. In practice, the density functional is known exactly except for two terms. These are the electronic kinetic energy and the exchange – correlation energy. The lack of the true exchange–correlation functional is a well-known problem in DFT, and there exists a huge variety of approaches to approximate this crucial component. In general, there is no known form for the interacting kinetic energy in terms of the electron density. In practice, instead of deriving approximations for the interacting kinetic energy, much effort has been devoted to deriving approximations for the non-interacting ( Kohn–Sham ) kinetic energy, which is defined as (in atomic units) T S = ∑ i ⟨ ϕ i | − 1 2 ∇ 2 | ϕ i ⟩ {\displaystyle T_{S}=\sum _{i}\langle \phi _{i}|-{\frac {1}{2}}\nabla ^{2}|\phi _{i}\rangle } where | ϕ i ⟩ {\displaystyle |\phi _{i}\rangle } is the i -th Kohn–Sham orbital. The summation is performed over all the occupied Kohn–Sham orbitals.
One of the first attempts to do this (even before the formulation of the Hohenberg–Kohn theorem) was the Thomas–Fermi model (1927), which wrote the kinetic energy as [ 2 ] [ 3 ] [ 4 ] T TF [ n ] = C TF ∫ n 5 / 3 ( r ) d 3 r {\displaystyle T_{\text{TF}}[n]=C_{\text{TF}}\int n^{5/3}(\mathbf {r} )\,d^{3}r} with C TF = 3 10 ( 3 π 2 ) 2 / 3 {\displaystyle C_{\text{TF}}={\frac {3}{10}}(3\pi ^{2})^{2/3}} . This expression is based on the homogeneous electron gas (HEG) and a Local Density Approximation (LDA) and thus is not very accurate for most physical systems. By formulating the Kohn–Sham kinetic energy in terms of the electron density, one avoids diagonalizing the Kohn–Sham Hamiltonian to solve for the Kohn–Sham orbitals, thereby saving computational cost. Since no Kohn–Sham orbital is involved in orbital-free density functional theory, one only needs to minimize the system's energy with respect to the electron density. An important bound for the TF kinetic energy is the Lieb-Thirring inequality . A notable historical improvement of the Thomas-Fermi model is the von Weizsäcker (vW) kinetic energy (1935), [ 5 ] T vW [ n ] = 1 8 ∫ | ∇ n ( r ) | 2 n ( r ) d 3 r {\displaystyle T_{\text{vW}}[n]={\frac {1}{8}}\int {\frac {|\nabla n(\mathbf {r} )|^{2}}{n(\mathbf {r} )}}\,d^{3}r} which is exactly the kinetic energy for noninteracting bosons and can be regarded as a Generalized Gradient Approximation (GGA). A conceptually important quantity in OFDFT is the Pauli kinetic energy. [ 6 ] [ 7 ] [ 8 ] [ 9 ] As the Kohn-Sham correlation energy links the real system of interacting electrons to the artificial Kohn-Sham (KS) system of noninteracting electrons, the Pauli kinetic energy links the KS system to a fictitious system of noninteracting model bosons . Like the KS correlation energy, it is highly KS-orbital dependent and must in practice be approximated. The term Pauli comes from the fact that the functional is related to the Pauli exclusion principle . T P [ n ] = 0 {\displaystyle T_{\text{P}}[n]=0} for an electron number of N ≤ 2 {\displaystyle N\leq 2} . The exchange energy in orbital-free density functional theory (OFDFT) is the Dirac exchange [ 10 ] as a Local Density Approximation (LDA) (1930), E x LDA [ n ] = − 3 4 ( 3 π ) 1 / 3 ∫ n 4 / 3 ( r ) d 3 r {\displaystyle E_{\text{x}}^{\text{LDA}}[n]=-{\frac {3}{4}}\left({\frac {3}{\pi }}\right)^{1/3}\int n^{4/3}(\mathbf {r} )\,d^{3}r} . It is related to the homogeneous electron gas (HEG) . An important bound for the LDA exchange energy is the Lieb-Oxford inequality .
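The gap between the Thomas–Fermi and von Weizsäcker approximations can be illustrated numerically with the hydrogen 1s density n(r) = e^(−2r)/π in atomic units, for which the exact kinetic energy is 0.5 hartree. The vW functional, being exact for N ≤ 2 electrons, reproduces this value, while the Thomas–Fermi LDA misses badly. Grid parameters below are illustrative assumptions:

```python
import numpy as np

# Radial grid (atomic units); hydrogen 1s density n(r) = exp(-2r)/pi.
r = np.linspace(1e-6, 30.0, 200_000)
dr = r[1] - r[0]
n = np.exp(-2 * r) / np.pi
dn_dr = -2 * np.exp(-2 * r) / np.pi   # analytic radial derivative of n
shell = 4 * np.pi * r**2              # spherical shell volume element

# Thomas-Fermi kinetic energy: T_TF = C_TF * integral of n^(5/3) d^3r.
c_tf = (3 / 10) * (3 * np.pi**2) ** (2 / 3)
t_tf = np.sum(c_tf * n ** (5 / 3) * shell) * dr

# von Weizsacker kinetic energy: T_vW = (1/8) * integral of |grad n|^2 / n d^3r.
t_vw = np.sum(dn_dr**2 / n * shell) * dr / 8

# t_vw comes out ~0.5 hartree (the exact one-electron value);
# t_tf comes out ~0.29 hartree, illustrating the LDA's error here.
```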
State-of-the-art kinetic energy density functionals for orbital-free density functional theory, and still the subject of ongoing research, are the so-called nonlocal (NL) kinetic energy density functionals, such as the Huang–Carter (HC) functional [ 11 ] [ 12 ] (2010), the Mi–Genova–Pavanello (MGP) functional [ 13 ] (2018) or the Wang–Teter (WT) functional [ 14 ] (1992). They admit the general form T NL [ n ] = C NL ∬ d 3 r d 3 r ′ n ( r ) α K [ n ] ( r , r ′ ) n ( r ′ ) β {\displaystyle T_{\text{NL}}[n]=C_{\text{NL}}\iint d^{3}rd^{3}r'n(\mathbf {r} )^{\alpha }K[n](r,r')n(\mathbf {r} ')^{\beta }} where α {\displaystyle \alpha } and β {\displaystyle \beta } are arbitrary fractional exponents, K [ n ] ( r , r ′ ) {\displaystyle K[n](r,r')} is a nonlocal KEDF kernel and C NL {\displaystyle C_{\text{NL}}} some constant. The analogue of the Kohn-Sham (KS) equations (1965) in orbital-free density functional theory (OFDFT) is the Levy-Perdew-Sahni (LPS) equation [ 15 ] [ 16 ] (1984), an effectively bosonic Schrödinger equation ( − 1 2 Δ + v S ( r ) + v P ( r ) ) n ( r ) = μ n ( r ) {\displaystyle {\bigg (}-{\frac {1}{2}}\Delta +v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} ){\bigg )}{\sqrt {n(\mathbf {r} )}}=\mu {\sqrt {n(\mathbf {r} )}}} where v S ( r ) {\displaystyle v_{S}(\mathbf {r} )} is the Kohn-Sham (KS) potential, v P ( r ) {\displaystyle v_{P}(\mathbf {r} )} the Pauli potential, μ {\displaystyle \mu } the eigenvalue of the highest occupied KS orbital and n ( r ) {\displaystyle {\sqrt {n(\mathbf {r} )}}} the square root of the density. One big benefit of the LPS equation being so intimately related to the KS equations is that an existing KS code can easily be modified into an OF code by ejecting all orbitals except one in the self-consistent-field (SCF) cycle. Starting from the Euler-Lagrange equation of density functional theory δ T S [ n ] δ n ( r ) + v S ( r ) = μ {\displaystyle {\frac {\delta T_{S}[n]}{\delta n(\mathbf {r} )}}+v_{S}(\mathbf {r} )=\mu } , simultaneously adding and subtracting the von Weizsäcker potential, i.e.
the functional derivative of the vW functional, and acknowledging the definition of the Pauli kinetic energy, while the functional derivative of the Pauli kinetic energy with respect to the density is the Pauli potential δ T v W [ n ] δ n ( r ) ⏟ v v W ( r ) + v S ( r ) + δ T P [ n ] δ n ( r ) ⏟ v P ( r ) = μ {\displaystyle \underbrace {\frac {\delta T_{vW}[n]}{\delta n(\mathbf {r} )}} _{v_{vW}(\mathbf {r} )}+v_{S}(\mathbf {r} )+\underbrace {\frac {\delta T_{P}[n]}{\delta n(\mathbf {r} )}} _{v_{P}(\mathbf {r} )}=\mu } . Expanding the functional derivative via the chain rule δ n ( r ) δ n ( r ) ⏟ 1 / n ( r ) δ δ n ( r ) ∫ n ( r ) ( − 1 2 Δ ) n ( r ) d 3 r ⏟ − 1 2 Δ n ( r ) + v S ( r ) + v P ( r ) = μ {\displaystyle \underbrace {\frac {\delta {\sqrt {n(\mathbf {r} )}}}{\delta n(\mathbf {r} )}} _{1/{\sqrt {n(\mathbf {r} )}}}\underbrace {{\frac {\delta }{\delta {\sqrt {n(\mathbf {r} )}}}}\int {\sqrt {n(\mathbf {r} )}}(-{\frac {1}{2}}\Delta ){\sqrt {n(\mathbf {r} )}}d^{3}r} _{-{\frac {1}{2}}\Delta {\sqrt {n(\mathbf {r} )}}}+v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} )=\mu } and as a last step multiplying both sides by the square root of the density n ( r ) {\displaystyle {\sqrt {n(\mathbf {r} )}}} yields the LPS equation. With the linear transformation n ( r ) ↦ 1 N ϕ B ( r ) {\displaystyle {\sqrt {n(\mathbf {r} )}}\mapsto {\frac {1}{\sqrt {N}}}\phi _{B}(\mathbf {r} )} and by defining the bosonic potential as v B ( r ) ≡ v S ( r ) + v P ( r ) {\displaystyle v_{B}(\mathbf {r} )\equiv v_{S}(\mathbf {r} )+v_{P}(\mathbf {r} )} the LPS equation evolves to the bosonic Schrödinger equation ( − 1 2 Δ + v B ( r ) ) ϕ B ( r ) = μ ϕ B ( r ) {\displaystyle {\bigg (}-{\frac {1}{2}}\Delta +v_{B}(\mathbf {r} ){\bigg )}\phi _{B}(\mathbf {r} )=\mu \phi _{B}(\mathbf {r} )} . Note that the normalization constraint in Bra–ket notation ⟨ ϕ B | ϕ B ⟩ = 1 {\displaystyle \langle \phi _{B}|\phi _{B}\rangle =1} holds, since N [ n ] = ∫ n ( r ) d 3 r {\displaystyle N[n]=\int n(\mathbf {r} )d^{3}r} .
A time-dependent version of OFDFT has also been developed recently, [ 17 ] and it too is implemented in DFTpy. DFTpy, a free open-source software package for OFDFT, has been developed by the Pavanello Group. [ 18 ] [ 19 ] It was launched in 2020. [ 20 ] The most recent version is 2.1.1.
https://en.wikipedia.org/wiki/Orbital-free_density_functional_theory
The Orbital Debris Co-ordination Working Group ( ODCWG ) is one of the working groups of the International Organization for Standardization 's Technical Committee 20/Subcommittee 14 TC20/SC14 "Spacecraft Systems and Operations". The Orbital Debris Co-ordination Working Group was formed by unanimous agreement at the May 2003 Plenary meeting of TC20/SC14. [ 1 ] The ODCWG recognizes that the mitigation of orbital space debris is an international concern, and thus that international, comprehensive and cohesive standards (namely ISO TC20/SC14) must be adopted to address the issue. [ 1 ] Currently six standards projects are in development, and a further seven project proposals are being prepared. The first debris mitigation standards were expected in 2008, with more International Standards, technical specifications or technical reports expected to be published through 2011–2012. [ citation needed ]
https://en.wikipedia.org/wiki/Orbital_Debris_Co-ordination_Working_Group
In quantum mechanics , orbital magnetization , M orb , refers to the magnetization induced by orbital motion of charged particles , usually electrons in solids . The term "orbital" distinguishes it from the contribution of spin degrees of freedom, M spin , to the total magnetization. A nonzero orbital magnetization requires broken time-reversal symmetry, which can occur spontaneously in ferromagnetic and ferrimagnetic materials, or can be induced in a non- magnetic material by an applied magnetic field . The orbital magnetic moment of a finite system, such as a molecule, is given classically by [ 1 ] m orb = 1 2 ∫ r × J ( r ) d 3 r {\displaystyle \mathbf {m} _{\text{orb}}={\frac {1}{2}}\int \mathbf {r} \times \mathbf {J} (\mathbf {r} )\,d^{3}r} where J ( r ) is the current density at point r . (Here SI units are used; in Gaussian units , the prefactor would be 1/2 c instead, where c is the speed of light .) In a quantum-mechanical context, this can also be written as m orb = − e 2 m e ⟨ Ψ | L | Ψ ⟩ {\displaystyle \mathbf {m} _{\text{orb}}=-{\frac {e}{2m_{e}}}\langle \Psi |\mathbf {L} |\Psi \rangle } where − e and m e are the charge and mass of the electron , Ψ is the ground-state wave function , and L is the angular momentum operator. The total magnetic moment is m = m orb + m spin {\displaystyle \mathbf {m} =\mathbf {m} _{\text{orb}}+\mathbf {m} _{\text{spin}}} where the spin contribution is intrinsically quantum-mechanical and is given by m spin = − g s μ B ℏ ⟨ Ψ | S | Ψ ⟩ {\displaystyle \mathbf {m} _{\text{spin}}=-{\frac {g_{s}\mu _{B}}{\hbar }}\langle \Psi |\mathbf {S} |\Psi \rangle } where g s is the electron spin g-factor , μ B is the Bohr magneton , ħ is the reduced Planck constant , and S is the electron spin operator . The orbital magnetization M is defined as the orbital moment density; i.e., orbital moment per unit volume. For a crystal of volume V composed of isolated entities (e.g., molecules) labelled by an index j having magnetic moments m orb, j , this is M orb = 1 V ∑ j m orb , j {\displaystyle \mathbf {M} _{\text{orb}}={\frac {1}{V}}\sum _{j}\mathbf {m} _{{\text{orb}},j}} However, real crystals are made up of atomic or molecular constituents whose charge clouds overlap, so that the above formula cannot be taken as a fundamental definition of orbital magnetization. [ 2 ] Only recently have theoretical developments led to a proper theory of orbital magnetization in crystals, as explained below. For a magnetic crystal, it is tempting to try to define M orb = 1 2 V ∫ r × J ( r ) d 3 r {\displaystyle \mathbf {M} _{\text{orb}}={\frac {1}{2V}}\int \mathbf {r} \times \mathbf {J} (\mathbf {r} )\,d^{3}r} where the limit is taken as the volume V of the system becomes large.
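The classical definition can be sanity-checked numerically: for a filamentary circular loop, J(r) d³r reduces to I dl, and the integral (1/2) ∮ r × I dl should reproduce the familiar m = I·A along the loop normal. The helper name and loop values below are illustrative assumptions:

```python
import numpy as np

def loop_moment(current, radius, n_seg=10_000):
    """Discretize m = (1/2) * integral of r x J d^3r for a circular current loop.

    The volume integral collapses to a line integral: J d^3r -> I dl.
    """
    theta = np.linspace(0.0, 2 * np.pi, n_seg, endpoint=False)
    # Points on the loop in the xy-plane.
    r = radius * np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
    # Tangent line elements dl of length radius * dtheta.
    dl = radius * (2 * np.pi / n_seg) * np.stack(
        [-np.sin(theta), np.cos(theta), np.zeros_like(theta)], axis=1)
    return 0.5 * np.sum(np.cross(r, current * dl), axis=0)

m = loop_moment(current=2.0, radius=0.1)
# Expected: m = I * pi * R^2 along z, i.e. 2 * pi * 0.01 ~ 0.0628; x and y vanish.
```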
However, because of the factor of r in the integrand, the integral has contributions from surface currents that cannot be neglected, and as a result the above equation does not lead to a bulk definition of orbital magnetization. [ 2 ] Another way to see the difficulty is to try to write down the quantum-mechanical expression for the orbital magnetization in terms of the occupied single-particle Bloch functions | ψ n k ⟩ of band n and crystal momentum k , where p is the momentum operator , L = r × p , and the integral is evaluated over the Brillouin zone (BZ). However, because the Bloch functions are extended, the matrix element of a quantity containing the r operator is ill-defined, and such a formula cannot be used as written. [ 3 ] In practice, orbital magnetization is often computed by decomposing space into non-overlapping spheres centered on atoms (similar in spirit to the muffin-tin approximation ), computing the integral of r × J ( r ) inside each sphere, and summing the contributions. [ 4 ] This approximation neglects the contributions from currents in the interstitial regions between the atomic spheres. Nevertheless, it is often a good approximation because the orbital currents associated with partially filled d and f shells are typically strongly localized inside these atomic spheres. It remains, however, an approximate approach. A general and exact formulation of the theory of orbital magnetization was developed in the mid-2000s by several authors, first based on a semiclassical approach, [ 5 ] then on a derivation from the Wannier representation , [ 6 ] [ 7 ] and finally from a long-wavelength expansion.
[ 8 ] In the resulting formula for the orbital magnetization, specialized to zero temperature, f n k is 0 or 1 respectively as the band energy E n k falls above or below the Fermi energy μ , H k = e − i k ⋅ r H e i k ⋅ r {\displaystyle H_{\mathbf {k} }=e^{-i\mathbf {k} \cdot \mathbf {r} }He^{i\mathbf {k} \cdot \mathbf {r} }} is the effective Hamiltonian at wavevector k , and | u n k ⟩ = e − i k ⋅ r | ψ n k ⟩ {\displaystyle |u_{n\mathbf {k} }\rangle =e^{-i\mathbf {k} \cdot \mathbf {r} }|\psi _{n\mathbf {k} }\rangle } is the cell-periodic Bloch function satisfying H k | u n k ⟩ = E n k | u n k ⟩ {\displaystyle H_{\mathbf {k} }|u_{n\mathbf {k} }\rangle =E_{n\mathbf {k} }|u_{n\mathbf {k} }\rangle } . A generalization to finite temperature is also available. [ 3 ] [ 8 ] Note that the term involving the band energy E n k in this formula is really just an integral of the band energy times the Berry curvature . Results computed using this formula have appeared in the literature. [ 9 ] A recent review summarizes these developments. [ 10 ] The orbital magnetization of a material can be determined accurately by measuring the gyromagnetic ratio γ , i.e., the ratio between the magnetic dipole moment of a body and its angular momentum. The gyromagnetic ratio is determined jointly by the spin and orbital magnetization. The two main experimental techniques are based either on the Barnett effect or the Einstein–de Haas effect . Experimental data for Fe, Co, Ni, and their alloys have been compiled. [ 11 ]
https://en.wikipedia.org/wiki/Orbital_magnetization
An orbital node is either of the two points where an orbit intersects a plane of reference to which it is inclined. [ 1 ] A non-inclined orbit , which is contained in the reference plane, has no nodes. Common planes of reference include the following: If a reference direction from one side of the plane of reference to the other is defined, the two nodes can be distinguished. For geocentric and heliocentric orbits, the ascending node (or north node ) is where the orbiting object moves north through the plane of reference, and the descending node (or south node ) is where it moves south through the plane. [ 4 ] In the case of objects outside the Solar System, the ascending node is the node where the orbiting secondary passes away from the observer, and the descending node is the node where it moves towards the observer. [ 5 ] : p.137 The position of the node may be used as one of a set of parameters, called orbital elements , which describe the orbit. This is done by specifying the longitude of the ascending node (or, sometimes, the longitude of the node ). The line of nodes is the straight line resulting from the intersection of the object's orbital plane with the plane of reference; it passes through the two nodes. [ 2 ] The symbol of the ascending node is ( Unicode : U+260A, ☊), and the symbol of the descending node is ( Unicode : U+260B, ☋). In medieval and early modern times, the ascending and descending nodes of the Moon in the ecliptic plane were called the "dragon's head" ( Latin : caput draconis , Arabic : رأس الجوزهر ) and "dragon's tail" ( Latin : cauda draconis ), respectively. [ 6 ] : p.141, [ 7 ] : p.245 These terms originally referred to the times when the Moon crossed the apparent path of the sun in the sky (as in a solar eclipse ). Also, corruptions of the Arabic term such as ganzaar , genzahar , geuzaar and zeuzahar were used in the medieval West to denote either of the nodes.
[ 8 ] : pp.196–197, [ 9 ] : p.65, [ 10 ] : pp.95–96 The Koine Greek terms αναβιβάζων and καταβιβάζων were also used for the ascending and descending nodes, giving rise to the English terms anabibazon and catabibazon . [ 11 ] [ 12 ] :  ¶27 For the orbit of the Moon around Earth , the plane is taken to be the ecliptic , not the equatorial plane . The gravitational pull of the Sun upon the Moon causes its nodes to gradually precess westward, completing a cycle in approximately 18.6 years. [ 1 ] [ 13 ] The image of the ascending and descending orbital nodes as the head and tail of a dragon, 180 degrees apart in the sky, goes back to the Chaldeans; it was used by the Zoroastrians, and then by Arabic astronomers and astrologers. In Middle Persian, its head and tail were respectively called gōzihr sar and gōzihr dumb ; in Arabic, al-ra's al-jawzihr and al-dhanab al-jawzihr — or in the case of the Moon, ___ al-tennin . [ 14 ] Among the arguments against astrologers made by Ibn Qayyim al-Jawziyya (1292–1350) in his Miftah Dar al-Sa'adah : "Why is it that you have given an influence to al-Ra's [the head] and al-Dhanab [the tail], which are two imaginary points [ascending and descending nodes]?" [ 15 ]
https://en.wikipedia.org/wiki/Orbital_node