TemplateData for PH indicator
No description.
Template parameters
This template prefers block formatting of parameters.
All parameters are currently undocumented ("no description"). | https://en.wikipedia.org/wiki/Template:PH_indicator/doc
This photosynthesis article is a stub . You can help Wikipedia by expanding it .
This template is used to identify a photosynthesis stub. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Photosynthesis-stub}} produces the message shown at the beginning, and adds the article to the following categories:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Photosynthesis-stub |
This template's initial visibility currently defaults to autocollapse , meaning that if there is another collapsible item on the page (a navbox, sidebar , or table with the collapsible attribute ), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used: | https://en.wikipedia.org/wiki/Template:Phylogenetics |
This article about a physical chemistry journal is a stub . You can help Wikipedia by expanding it .
See tips for writing articles about academic journals . Further suggestions might be found on the article's talk page .
This template is used to identify a stub about a physical chemistry journal. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Physical-chemistry-journal-stub}} produces the message shown at the beginning, and adds the article to the following categories:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Physical-chemistry-journal-stub |
This physical chemistry -related article is a stub . You can help Wikipedia by expanding it .
This template is used to identify a physical chemistry -related stub. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Physical-chemistry-stub}} produces the message shown at the beginning, and adds the article to the following category:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Physical-chemistry-stub |
This template's initial visibility currently defaults to autocollapse , meaning that if there is another collapsible item on the page (a navbox, sidebar , or table with the collapsible attribute ), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used: | https://en.wikipedia.org/wiki/Template:Poisoning_and_toxicity |
This radioactivity-related article is a stub . You can help Wikipedia by expanding it .
This template is used to identify a radioactivity-related stub. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Radioactivity-stub}} produces the message shown at the beginning, and adds the article to the following categories:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Radioactivity-stub |
This article about a rail accident is a stub . You can help Wikipedia by expanding it .
This template is used to identify a stub about a rail accident. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Rail-accident-stub}} produces the message shown at the beginning, and adds the article to the following category:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Rail-accident-stub |
This chemical reaction article is a stub . You can help Wikipedia by expanding it .
This template is used to identify a chemical reaction stub. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Reaction-stub}} produces the message shown at the beginning, and adds the article to the following category:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Reaction-stub |
This is a navigational template created using {{ navbox }} . It can be transcluded on pages by placing {{Reproductive Systems navbox}} below the standard article appendices .
This template's initial visibility currently defaults to autocollapse , meaning that if there is another collapsible item on the page (a navbox, sidebar , or table with the collapsible attribute ), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used:
Templates using the classes class=navbox ( {{ navbox }} ) or class=nomobile ( {{ sidebar }} ) are not displayed in article space on the mobile web site of English Wikipedia. Mobile page views account for approximately 68% of all page views (90-day average as of September 2024). Briefly, these templates are not included in articles because 1) they are not well designed for mobile, and 2) they significantly increase page sizes—bad for mobile downloads—in a way that is not useful for the mobile use case. You can review/watch phab:T124168 for further discussion.
A navigational box that can be placed at the bottom of articles.
Template parameters
The initial visibility of the navbox | https://en.wikipedia.org/wiki/Template:Reproductive_Systems_navbox |
This is a navigational template created using {{ navbox }} . It can be transcluded on pages by placing {{Smartwatch}} below the standard article appendices .
This template's initial visibility currently defaults to autocollapse , meaning that if there is another collapsible item on the page (a navbox, sidebar , or table with the collapsible attribute ), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used:
Templates using the classes class=navbox ( {{ navbox }} ) or class=nomobile ( {{ sidebar }} ) are not displayed in article space on the mobile web site of English Wikipedia. Mobile page views account for approximately 68% of all page views (90-day average as of September 2024). Briefly, these templates are not included in articles because 1) they are not well designed for mobile, and 2) they significantly increase page sizes—bad for mobile downloads—in a way that is not useful for the mobile use case. You can review/watch phab:T124168 for further discussion.
A navigational box that can be placed at the bottom of articles.
Template parameters
The initial visibility of the navbox | https://en.wikipedia.org/wiki/Template:Smartwatch |
This article about a software book or series of books is a stub . You can help Wikipedia by expanding it .
This template is used to identify a stub about a software book or series of books. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Software-book-stub}} produces the message shown at the beginning, and adds the article to the following categories:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Software-book-stub |
This template's initial visibility currently defaults to autocollapse , meaning that if there is another collapsible item on the page (a navbox, sidebar , or table with the collapsible attribute ), it is hidden apart from its title bar; if not, it is fully visible.
To change this template's initial visibility, the |state= parameter may be used: | https://en.wikipedia.org/wiki/Template:Sphingolipids |
This stereochemistry article is a stub . You can help Wikipedia by expanding it .
This template is used to identify a stereochemistry stub. It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Stereochemistry-stub}} produces the message shown at the beginning, and adds the article to the following category:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Stereochemistry-stub |
This article about theoretical chemistry is a stub . You can help Wikipedia by expanding it .
This template is used to identify a stub about theoretical chemistry . It uses {{ article stub box }}, which is a meta-template designed to ease the process of creating and maintaining stub templates.
Typing {{Theoretical-chem-stub}} produces the message shown at the beginning, and adds the article to the following category:
This is a stub template . A brief explanation of these templates follows; for full details please consult Wikipedia:Stub .
A stub is an article containing only a few sentences of text which is too short to provide encyclopedic coverage of a subject.
Further information can be found at:
New stub templates and categories (collectively "stub types") should not be created without prior proposal at Wikipedia:WikiProject Stub sorting/Proposals . This allows for the proper coordination of all stub types across Wikipedia, and for the checking of any new stub type for possible problems prior to its creation. | https://en.wikipedia.org/wiki/Template:Theoretical-chem-stub |
In bioinformatics , the template modeling score or TM-score is a measure of similarity between two protein structures . The TM-score is intended as a more accurate measure of the global similarity of full-length protein structures than the often used RMSD measure. The TM-score indicates the similarity between two structures by a score in the interval $(0,1]$, where 1 indicates a perfect match between two structures (thus the higher the better). [ 1 ] Generally, scores below 0.20 correspond to randomly chosen unrelated proteins, whereas structures with a score higher than 0.5 assume roughly the same fold. [ 2 ] A quantitative study [ 3 ] shows that proteins with TM-score = 0.5 have a posterior probability of 37% of being in the same CATH topology family and of 13% of being in the same SCOP fold family. The probabilities increase rapidly when TM-score > 0.5. The TM-score is designed to be independent of protein length.
The TM-score between two protein structures (e.g., a template structure and a target structure) is defined by

$$\mathrm{TM\text{-}score}=\max\left[\frac{1}{L_{\text{target}}}\sum_{i=1}^{L_{\text{common}}}\frac{1}{1+\left(d_{i}/d_{0}(L_{\text{target}})\right)^{2}}\right]$$
where $L_{\text{target}}$ is the length of the amino acid sequence of the target protein, and $L_{\text{common}}$ is the number of residues that appear in both the template and target structures. $d_{i}$ is the distance between the $i$-th pair of residues in the template and target structures, and $d_{0}(L_{\text{target}})=1.24\sqrt[3]{L_{\text{target}}-15}-1.8$ is a distance scale that normalizes distances. The maximum is taken over all possible structure superpositions of the model and template (or some sample thereof).
When comparing two protein structures that have the same residue order, $L_{\text{common}}$ is read from the C-alpha residue numbers in the structure files (i.e., columns 23–26 of the Protein Data Bank file format). When comparing two protein structures that have different sequences and/or different residue orders, a structural alignment is usually performed first, and the TM-score is then calculated over the commonly aligned residues from the structural alignment.
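The scoring sum itself is straightforward to evaluate once a superposition and residue correspondence are fixed. The following Python sketch is a minimal illustration, not the reference TM-score program; the search over superpositions required by the max in the definition is omitted, and the inputs are assumed to be pre-aligned C-alpha coordinate arrays:

```python
import numpy as np

def tm_score_fixed(model: np.ndarray, target: np.ndarray, L_target: int) -> float:
    """TM-score for one fixed superposition.

    `model` and `target` are (L_common, 3) arrays of C-alpha coordinates for
    the commonly aligned residues; row i of each array is the i-th residue pair.
    Assumes L_target > 15 so that d0 is well defined.
    """
    d = np.linalg.norm(model - target, axis=1)        # per-residue distances d_i
    d0 = 1.24 * (L_target - 15) ** (1.0 / 3.0) - 1.8  # length-dependent scale d_0
    return float(np.sum(1.0 / (1.0 + (d / d0) ** 2)) / L_target)
```

A full implementation would repeat this evaluation over many candidate superpositions (e.g., from iterative least-squares fits) and report the maximum.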
An often used structural similarity measure is root-mean-square deviation (RMSD).
Because $\mathrm{RMSD}=\sqrt{\textstyle\sum_{i=1}^{L}d_{i}^{2}/L}$ is calculated as an average of the distance errors $d_{i}$ with equal weight over all residue pairs, a large local error on a few residue pairs can result in a quite large RMSD.
On the other hand, by putting $d_{i}$ in the denominator, the TM-score naturally weights smaller distance errors more strongly than larger distance errors. Compared to RMSD, the TM-score is therefore more sensitive to global structural similarity than to local structural errors. Another advantage of the TM-score is the introduction of the scale $d_{0}(L_{\text{target}})=1.24\sqrt[3]{L_{\text{target}}-15}-1.8$, which makes the magnitude of the TM-score length-independent for random structure pairs, while RMSD and most other measures are length-dependent metrics.
The Global Distance Test (GDT) algorithm, with its GDT TS ("total score"), is another measure of similarity between two protein structures with known amino acid correspondences (e.g. identical amino acid sequences ) but different tertiary structures . [ 4 ] The GDT score has the same length-dependence issue as RMSD, because the average GDT score for random structure pairs has a power-law dependence on the protein size. [ 1 ] | https://en.wikipedia.org/wiki/Template_modeling_score
In chemistry , a template reaction is any of a class of ligand -based reactions that occur between two or more adjacent coordination sites on a metal center. In the absence of the metal ion, the same organic reactants produce different products. The term is mainly used in coordination chemistry . The term template effect emphasizes the pre-organization provided by the coordination sphere, although the coordination also modifies the electronic properties (acidity, electrophilicity, etc.) of the ligands. [ 1 ]
An early example is the dialkylation of a nickel dithiolate: [ 2 ]
The corresponding alkylation in the absence of a metal ion would yield polymers. Crown ethers arise from dialkylations that are templated by alkali metals. [ 3 ] Other template reactions include the Mannich and Schiff base condensations. [ 4 ] The condensation of formaldehyde , ammonia, and tris(ethylenediamine)cobalt(III) to give a clathrochelate complex is one example.
The phosphorus analogue of an aza crown can be prepared by a template reaction in cases where it is not possible to isolate the free phosphine itself. [ 6 ]
Many template reactions are only stoichiometric, and the decomplexation of the "templating ion" can be difficult. The alkali metal-templated syntheses of crown ethers are notable exceptions. Metal phthalocyanines are generated by metal-templated condensations of phthalonitriles , but the liberation of metal-free phthalocyanine is difficult.
Some so-called template reactions proceed similarly in the absence of the templating ion. One example is the condensation of acetone and ethylenediamine, which yields isomeric 14-membered tetraaza rings. [ 7 ] Similarly, porphyrins , which feature 16-membered central rings, form in the absence of metal templates.
In a general sense, transition metal-based catalysis can be viewed as a template reaction: reactants coordinate to adjacent sites on the metal ion and, owing to their adjacency, the two reactants interconnect (insert or couple) either directly or via the action of another reagent. In the area of homogeneous catalysis , the cyclo-oligomerization of acetylene to cyclooctatetraene at a nickel(II) centre reflects the templating effect of the nickel: it is supposed that four acetylene molecules occupy four sites around the metal and react simultaneously to give the product. This simplistic mechanistic hypothesis was influential in the development of these catalytic reactions. For example, if a competing ligand such as triphenylphosphine is added to occupy one coordination site, then only three molecules of acetylene can bind, and these come together to form benzene (see Reppe chemistry ). [ 8 ] | https://en.wikipedia.org/wiki/Template_reaction
A temple is an adjustable stretcher used on a loom to maintain the width and improve the edges of the woven fabric .
During the process of weaving , fabrics can decrease in width (draw in) due to the interlacement of the weft material. Temples prevent this decrease by keeping fabrics at a fixed width, thus requiring more weft to enter the weave with each pass of the shuttle . Fabric produced without draw-in has a smoother selvage , weft can be packed in more evenly, and warp threads are less likely to break from excessive friction in the reed . [ 1 ] [ 2 ] [ 3 ]
There are two main types of temples: metal and wood. Both types have a shaft, whose length can be adjusted, and sharp prongs at each end to attach to the fabric. Wooden temples tend to be lighter and have straight, fine teeth. The teeth on metal temples are angled and are wider at the base than the teeth on wooden temples. Metal temples are often recommended for rugs because the size and angle of the teeth are better for gripping the thick edges. [ 4 ]
To use a temple, the length is first adjusted so that it matches the total width (or spread) of warp threads in the reed. The prongs are then inserted into the fabric, on each side, at the very edges of the cloth. The temple must be moved frequently to keep it close to the fell of the fabric, where the weaving is taking place. [ 2 ]
This textile arts article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Temple_(weaving) |
Temple Reef is an artificial reef off the coast of Pondicherry, India . [ 1 ] It was constructed of fully recycled materials such as concrete blocks , rocks , trees , palm leaves , and iron bars by the Temple Adventures team starting from October 2013. [ 2 ] Temple Reef Foundation currently maintains and monitors the reef.
The reef was named both after its creator, Temple Adventures, and the shape of the site on the ocean floor. The dive site is now divided into four sites:
1) Original Temple Reef
2) Parking Lot [ 3 ]
3) Beer Garden
4) Temple 2 aka Wreck City
It is located 18 m (60 ft) below the surface, 5 km west, off the Coromandel coast of Pondicherry , India in the Bay of Bengal . [ 4 ]
Within a short span of time, the reef became home to diverse aquatic life . There is a vast range of corals and fishes such as groupers , lion fish , kingfish , eagle and manta rays , moray eels , sea snakes , triggerfish , parrot fish , angelfish , bannerfish , butterflyfish and crustaceans . [ 5 ] [ 6 ] Overall, more than 75 different species have been recorded at this site. Other marine life includes: Malabar Grouper, Red Snapper, Blue line Grouper, Coral Banded Shrimp, Dancing Durban Shrimp, Spearing Mantis Shrimp, Humphead Batfish, Roundface Batfish, Zebra Batfish, Chevron Barracuda, Yellowtail Barracuda, Yellow Boxfish, Blue Spot Toby, Titan Triggerfish, Indian Vagabond Butterfly fish, Harlequin Sweetlips, Longfin Bannerfish, Blue tang surgeonfish, Bronzelined Rabbitfish, Eyestripe Surgeonfish, Gold-lined spinefoot, Cleaner wrasse, Three spot Dascyllus, Blue ring angel fish, Yellowtail Chromis, Sargent fish, Copper Sweepers, Ring tailed Cardinalfish, Brown Lionfish, Chinese Trumpetfish, Salmacis Belli, Honeycomb Moray Eel, Moray Eels, Garden Eels, Porcupine Puffer fish, Blackspotted pufferfish, Peacock sole, Yellowspot Goatfish, Jackfish, Mackerels, Valenciennea Goby, Amblyeleotris Goby, Yellow Prawn Goby, Red Lionfish, Clearfin Lionfish, Pterois mombasae Lionfish. | https://en.wikipedia.org/wiki/Temple_Reef
Temporal Key Integrity Protocol ( TKIP / t iː ˈ k ɪ p / ) is a security protocol used in the IEEE 802.11 wireless networking standard. TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as an interim solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left Wi-Fi networks without viable link-layer security, and a solution was required for already deployed hardware. However, TKIP itself is no longer considered secure, and was deprecated in the 2012 revision of the 802.11 standard. [ 1 ]
On October 31, 2002, the Wi-Fi Alliance endorsed TKIP under the name Wi-Fi Protected Access (WPA) . [ 2 ] The IEEE endorsed the final version of TKIP, along with more robust solutions such as 802.1X and the AES based CCMP , when they published IEEE 802.11i-2004 on 23 July 2004. [ 3 ] The Wi-Fi Alliance soon afterwards adopted the full specification under the marketing name WPA2 . [ 4 ]
The IEEE resolved to deprecate TKIP in January 2009. [ 1 ]
TKIP and the related WPA standard implement three new security features to address security problems encountered in WEP-protected networks. First, TKIP implements a key mixing function that combines the secret root key with the initialization vector before passing it to the RC4 cipher initialization. WEP, in comparison, merely concatenated the initialization vector to the root key, and passed this value to the RC4 routine. This permitted the vast majority of the RC4-based WEP-related key attacks . [ 5 ] Second, WPA implements a sequence counter to protect against replay attacks. Packets received out of order will be rejected by the access point. Finally, TKIP implements a 64-bit Message Integrity Check (MIC) and re-initializes the sequence number each time a new key (Temporal Key) is used. [ 6 ]
To be able to run on legacy WEP hardware with minor upgrades, TKIP uses RC4 as its cipher. TKIP also provides a rekeying mechanism. TKIP ensures that every data packet is sent with a unique encryption key (Interim Key/Temporal Key + Packet Sequence Counter). [ citation needed ]
Key mixing increases the complexity of decoding the keys by giving an attacker substantially less data that has been encrypted using any one key. WPA also implements a new message integrity code, the MIC. The message integrity check prevents forged packets from being accepted. Under WEP it was possible to alter a packet whose content was known even if it had not been decrypted.
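The last point follows from the fact that WEP's integrity value, CRC32, is linear (strictly, affine) over XOR. The short Python sketch below is purely illustrative (it manipulates byte strings, not real 802.11 frames); it verifies the identity the bit-flipping attack exploits: the checksum of a modified message can be computed from the original checksum and the flipped bits alone, without knowing the message.

```python
import binascii
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

n = 16
plain = os.urandom(n)               # stands in for the unknown packet payload
delta = b"\x00" * 8 + b"\xff" * 8   # the bits an attacker wants to flip
zeros = b"\x00" * n

# CRC32 is affine over GF(2), so for equal-length messages:
#   crc(plain ^ delta) == crc(plain) ^ crc(delta) ^ crc(zeros)
lhs = binascii.crc32(xor(plain, delta))
rhs = binascii.crc32(plain) ^ binascii.crc32(delta) ^ binascii.crc32(zeros)
assert lhs == rhs
print("checksum of tampered payload predicted without seeing the payload")
```

Because RC4 encryption is itself an XOR with a keystream, the same XOR fix-up can be applied directly to the encrypted integrity value, which is why a cryptographic MIC rather than a CRC was needed.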
TKIP uses the same underlying mechanism as WEP, and consequently is vulnerable to a number of similar attacks. The message integrity check, per-packet key hashing , broadcast key rotation, and a sequence counter discourage many attacks. The key mixing function also eliminates the WEP key recovery attacks.
Notwithstanding these changes, the weaknesses of some of these additions have allowed for new, although narrower, attacks.
TKIP is vulnerable to a MIC key recovery attack that, if successfully executed, permits an attacker to transmit and decrypt arbitrary packets on the network being attacked. [ 7 ] The current publicly available TKIP-specific attacks do not reveal the Pairwise Master Key or the Pairwise Temporal Keys. On November 8, 2008, Martin Beck and Erik Tews released a paper detailing how to recover the MIC key and transmit a few packets. [ 8 ] This attack was improved by Mathy Vanhoef and Frank Piessens in 2013, where they increase the amount of packets an attacker can transmit, and show how an attacker can also decrypt arbitrary packets. [ 7 ]
The basis of the attack is an extension of the WEP chop-chop attack . Because WEP uses a cryptographically insecure checksum mechanism ( CRC32 ), an attacker can guess individual bytes of a packet, and the wireless access point will confirm or deny whether or not the guess is correct. If the guess is correct, the attacker will be able to detect the guess is correct and continue to guess other bytes of the packet. However, unlike the chop-chop attack against a WEP network, the attacker must wait for at least 60 seconds after an incorrect guess (a successful circumvention of the CRC32 mechanism) before continuing the attack. This is because although TKIP continues to use the CRC32 checksum mechanism, it implements an additional MIC code named Michael. If two incorrect Michael MIC codes are received within 60 seconds, the access point will implement countermeasures, meaning it will rekey the TKIP session key , thus changing future keystreams. Accordingly, attacks on TKIP will wait an appropriate amount of time to avoid these countermeasures. Because ARP packets are easily identified by their size, and the vast majority of the contents of this packet would be known to an attacker, the number of bytes an attacker must guess using the above method is rather small (approximately 14 bytes). Beck and Tews estimate recovery of 12 bytes is possible in about 12 minutes on a typical network, which would allow an attacker to transmit 3–7 packets of at most 28 bytes. [ 8 ] Vanhoef and Piessens improved this technique by relying on fragmentation , allowing an attacker to transmit arbitrarily many packets, each at most 112 bytes in size. [ 7 ] The Vanhoef–Piessens attacks also can be used to decrypt arbitrary packets of the attack's choice.
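The quoted 12-minute figure is essentially the countermeasure wait time multiplied by the number of bytes to recover. A trivial back-of-the-envelope check (my simplification: one Michael MIC failure, and hence one 60-second wait, per recovered byte):

```python
BYTES_TO_RECOVER = 12       # unknown bytes of a typical ARP packet
WAIT_PER_MIC_FAILURE = 60   # seconds between MIC failures, to avoid countermeasures

total_s = BYTES_TO_RECOVER * WAIT_PER_MIC_FAILURE
print(f"{total_s} s = {total_s // 60} minutes")   # 720 s = 12 minutes
```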
An attacker already has access to the entire ciphertext packet. Upon retrieving the entire plaintext of the same packet, the attacker has access to the keystream of the packet, as well as the MIC code of the session. Using this information the attacker can construct a new packet and transmit it on the network. To circumvent the WPA implemented replay protection, the attacks use QoS channels to transmit these newly constructed packets. An attacker able to transmit these packets may be able to implement any number of attacks, including ARP poisoning attacks, denial of service, and other similar attacks, with no need of being associated with the network.
A group of security researchers at the Information Security Group at Royal Holloway, University of London reported a theoretical attack on TKIP which exploits the underlying RC4 encryption mechanism. TKIP uses a similar key structure to WEP, with the low 16-bit value of a sequence counter (used to prevent replay attacks) being expanded into the 24-bit "IV", and this sequence counter always increments on every new packet. An attacker can use this key structure to improve existing attacks on RC4. In particular, if the same data is encrypted multiple times, an attacker can learn this information from only 2^24 connections. [ 9 ] [ 10 ] [ 11 ] While they claim that this attack is on the verge of practicality, only simulations were performed, and the attack has not been demonstrated in practice.
In 2015, security researchers from KU Leuven presented new attacks against RC4 in both TLS and WPA-TKIP. Dubbed the Numerous Occurrence MOnitoring & Recovery Exploit (NOMORE) attack, it is the first attack of its kind that was demonstrated in practice. The attack against WPA-TKIP can be completed within an hour, and allows an attacker to decrypt and inject arbitrary packets. [ 12 ]
ZDNet reported on June 18, 2010, that WEP and TKIP would soon be disallowed on Wi-Fi devices by the Wi-Fi Alliance. [ 13 ] However, a survey in 2013 showed that it was still in widespread use. [ 7 ]
The IEEE 802.11n standard prohibits data rates exceeding 54 Mbit/s if TKIP is used as the Wi-Fi cipher. [ 14 ] | https://en.wikipedia.org/wiki/Temporal_Key_Integrity_Protocol
Temporal Analysis of Products (TAP, TAP-2, TAP-3) is an experimental technique for studying the kinetics of physico-chemical interactions between gases and complex solid materials, primarily heterogeneous catalysts . The TAP methodology is based on short pulse-response experiments at low background pressure (10⁻⁶–10² Pa), which are used to probe different steps in a catalytic process on the surface of a porous material, including diffusion , adsorption , surface reactions , and desorption .
Since its invention by Dr. John T. Gleaves (then at Monsanto Company ) in the late 1980s, [ 1 ] TAP has been used to study a variety of industrially and academically relevant catalytic reactions, bridging the gap between surface science experiments and applied catalysis. [ 2 ] State-of-the-art TAP installations (TAP-3) not only provide a better signal-to-noise ratio than the first-generation TAP machines (TAP-1), but also allow for advanced automation and direct coupling with other techniques.
A TAP instrument consists of a heated packed-bed microreactor connected to a high-throughput vacuum system, a pulsing manifold with fast electromagnetically driven gas injectors, and a quadrupole mass spectrometer (QMS) located in the vacuum system below the microreactor outlet.
In a typical TAP pulse-response experiment, very small (~10⁻⁹ mol) and narrow (~100 μs) gas pulses are introduced into the evacuated (~10⁻⁶ torr ) microreactor containing a catalytic sample. While the injected gas molecules traverse the microreactor packing through the interstitial voids, they encounter the catalyst, on which they may undergo chemical transformations. Unconverted and newly formed gas molecules eventually reach the reactor's outlet and escape into an adjacent vacuum chamber, where they are detected with millisecond time resolution by the QMS. The exit-flow rates of reactants, products and inert molecules recorded by the QMS are then used to quantify catalytic properties and deduce reaction mechanisms. The same TAP instrument can typically accommodate other types of kinetic measurements, including atmospheric-pressure flow experiments (10⁵ Pa ), temperature-programmed desorption (TPD) , and steady-state isotopic transient kinetic analysis (SSITKA).
The general methodology of TAP data analysis, developed in a series of papers by Grigoriy (Gregory) Yablonsky, [ 3 ] [ 4 ] [ 5 ] is based on comparing an inert-gas response, which is controlled only by Knudsen diffusion , with a reactive-gas response, which is controlled by diffusion as well as adsorption and chemical reactions on the catalyst sample.
TAP pulse-response experiments can be effectively modeled by a one-dimensional (1D) diffusion equation with a uniquely simple combination of boundary conditions.
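As a concrete illustration, the Python sketch below integrates such a 1D diffusion equation for an inert-gas pulse and records the outlet flux, i.e., the baseline "standard diffusion curve" against which a reactive-gas response would be compared. All parameter values are my own illustrative assumptions, not values from the text, and the boundary treatment is deliberately simplified.

```python
import numpy as np

# 1D Knudsen diffusion of an inert pulse through the microreactor:
#   dC/dt = D * d2C/dx2,  pulse at the inlet, vacuum (C = 0) at the outlet.
L = 0.04                        # reactor length [m] (assumed)
D = 2.0e-3                      # effective Knudsen diffusivity [m^2/s] (assumed)
N = 201                         # spatial grid points
dx = L / (N - 1)
dt = 0.4 * dx ** 2 / D          # step satisfying the explicit stability limit
T = 0.5                         # simulated time window [s]

C = np.zeros(N)
C[0] = 1.0 / dx                 # delta-like inlet pulse carrying unit amount of gas

times, outlet_flux = [], []
t = 0.0
while t < T:
    Cn = C.copy()
    # explicit finite-difference Laplacian on interior points
    Cn[1:-1] = C[1:-1] + D * dt / dx ** 2 * (C[2:] - 2 * C[1:-1] + C[:-2])
    Cn[0] = Cn[1]               # closed (zero-flux) inlet after the pulse
    Cn[-1] = 0.0                # outlet held at vacuum
    C = Cn
    t += dt
    times.append(t)
    outlet_flux.append(D * C[-2] / dx)   # flux escaping into the vacuum chamber

print(f"peak outlet flux at t = {times[int(np.argmax(outlet_flux))]:.4f} s")
```
| https://en.wikipedia.org/wiki/Temporal_analysis_of_products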
The temporal dynamics of music and language describes how the brain coordinates its different regions to process musical and vocal sounds. Both music and language feature rhythmic and melodic structure. Both employ a finite set of basic elements (such as tones or words) that are combined in ordered ways to create complete musical or lingual ideas.
Key areas of the brain are used in both music processing and language processing , such as Broca's area, which is devoted to language production and comprehension. Patients with lesions, or damage, in Broca's area often exhibit poor grammar, slow speech production and poor sentence comprehension. The inferior frontal gyrus is a gyrus of the frontal lobe that is involved in timing events and reading comprehension, particularly the comprehension of verbs . Wernicke's area is located on the posterior section of the superior temporal gyrus and is important for understanding vocabulary and written language.
The primary auditory cortex is located on the temporal lobe of the cerebral cortex . This region is important in music processing and plays an important role in determining the pitch and volume of a sound. [ 1 ] Brain damage to this region often results in a loss of the ability to hear any sounds at all. The frontal cortex has been found to be involved in processing melodies and harmonies of music. For example, when a patient is asked to tap out a beat or try to reproduce a tone, this region is very active on fMRI and PET scans. [ 2 ] The cerebellum is the "mini" brain at the rear of the skull. Similar to the frontal cortex, brain imaging studies suggest that the cerebellum is involved in processing melodies and determining tempos . The medial prefrontal cortex along with the primary auditory cortex has also been implicated in tonality, or determining pitch and volume. [ 1 ]
In addition to the specific regions mentioned above many "information switch points" are active in language and music processing. These regions are believed to act as transmission routes that conduct information. These neural impulses allow the above regions to communicate and process information correctly. These structures include the thalamus and the basal ganglia . [ 2 ]
Some of the above-mentioned areas have been shown to be active in both music and language processing through PET and fMRI studies. These areas include the primary motor cortex, Broca's area, the cerebellum, and the primary auditory cortices. [ 2 ]
The imaging techniques best suited for studying temporal dynamics provide information in real time. The methods most utilized in this research are functional magnetic resonance imaging, or fMRI, and positron emission tomography known as PET scans. [ 3 ]
Positron emission tomography involves injecting a short-lived radioactive tracer isotope into the blood. When the radioisotope decays, it emits positrons which are detected by the machine sensor. The isotope is chemically incorporated into a biologically active molecule, such as glucose , which powers metabolic activity. Whenever brain activity occurs in a given area these molecules are recruited to the area. Once the concentration of the biologically active molecule, and its radioactive "dye", rises enough, the scanner can detect it. [ 3 ] About one second elapses from when brain activity begins to when the activity is detected by the PET device. This is because it takes a certain amount of time for the dye to reach concentrations that can be detected. [ 4 ]
Functional magnetic resonance imaging or fMRI is a form of the traditional MRI imaging device that allows for brain activity to be observed in real time. An fMRI device works by detecting changes in neural blood flow that is associated with brain activity. fMRI devices use a strong, static magnetic field to align nuclei of atoms within the brain. An additional magnetic field, often called the gradient field , is then applied to elevate the nuclei to a higher energy state. [ 5 ] When the gradient field is removed, the nuclei revert to their original state and emit energy. The emitted energy is detected by the fMRI machine and is used to form an image. When neurons become active blood flow to those regions increases. This oxygen-rich blood displaces oxygen depleted blood in these areas. Hemoglobin molecules in the oxygen-carrying red blood cells have different magnetic properties depending on whether it is oxygenated. [ 5 ] By focusing the detection on the magnetic disturbances created by hemoglobin, the activity of neurons can be mapped in near real time. [ 5 ] Few other techniques allow for researchers to study temporal dynamics in real time.
Another important tool for analyzing temporal dynamics is magnetoencephalography , known as MEG. It is used to map brain activity by detecting and recording magnetic fields produced by electrical currents generated by neural activity. The device uses a large array of superconducting quantum interference devices, called SQUIDs , to detect magnetic activity. Because the magnetic fields generated by the human brain are so small, the entire device must be placed in a specially designed room that is built to shield the device from external magnetic fields. [ 5 ]
Another common method for studying brain activity when processing language and music is transcranial magnetic stimulation or TMS. TMS uses induction to create weak electromagnetic currents within the brain by using a rapidly changing magnetic field. The changes depolarize or hyper-polarize neurons. This can produce or inhibit activity in different regions. The effect of the disruptions on function can be used to assess brain interconnections. [ 6 ]
Many aspects of language and musical melodies are processed by the same brain areas. In 2006, Brown, Martinez and Parsons found that listening to a melody or a sentence resulted in activation of many of the same areas, including the primary motor cortex , the supplementary motor area , Broca's area, the anterior insula, the primary auditory cortex, the thalamus, the basal ganglia and the cerebellum. [ 7 ]
A 2008 study by Koelsch, Sallat and Friederici found that language impairment may also affect the ability to process music. Children with specific language impairments (SLIs) were not as proficient at matching tones to one another or at keeping tempo with a simple metronome as children with no language disabilities. This highlights the fact that neurological disorders that affect language may also affect musical processing ability. [ 8 ]
Walsh, Stewart, and Frith in 2001 investigated which regions process melodies and language by asking subjects to create a melody on a simple keyboard or write a poem, and applying TMS to the locations where musical and lingual data are processed. The research found that TMS applied to the left frontal lobe affected the ability to write or produce language material, while TMS applied to the auditory cortex and Broca's area most inhibited the research subjects' ability to play musical melodies. This suggests that some differences exist between music and language creation. [ 9 ]
The basic elements of musical and lingual processing appear to be present at birth. For example, a 2011 French study that monitored fetal heartbeats found that past the age of 28 weeks, fetuses respond to changes in musical pitch and tempo. Baseline heart rates were determined by 2 hours of monitoring before any stimulus. Descending and ascending frequencies at different tempos were played near the womb . The study also investigated fetal response to lingual patterns, such as playing sound clips of different syllables, but found no response to the different lingual stimuli. Heart rates increased in response to high-pitched loud sounds compared to low-pitched soft sounds. This suggests that the basic elements of sound processing, such as discerning pitch, tempo and loudness, are present at birth, while the processes that discern speech patterns develop after birth. [ 10 ]
A 2010 study researched the development of lingual skills in children with speech difficulties. It found that musical stimulation improved the outcome of traditional speech therapy . Children aged 3.5 to 6 years old were separated into two groups. One group heard lyric-free music at each speech therapy session while the other group was given traditional speech therapy. The study found that both phonological capacity and the children's ability to understand speech increased faster in the group that was exposed to regular musical stimulation. [ 11 ]
Recent studies have found that music is beneficial to individuals with brain disorders. [ 12 ] [ 13 ] [ 14 ] [ 15 ] Stegemöller discusses the underlying principles of music therapy: increased dopamine , neural synchrony, and a clear signal, which are important features for normal brain functioning. [ 15 ] This combination of effects induces the brain's neuroplasticity , which is suggested to increase an individual's potential for learning and adaptation. [ 16 ] Existing literature examines the effect of music therapy on those with Parkinson's disease, Huntington's disease and dementia, among others.
Individuals with Parkinson's disease experience gait and postural disorders caused by decreased dopamine in the brain. [ 17 ] One of the hallmarks of this disease is shuffling gait , in which the individual leans forward while walking and progressively increases speed, which can result in a fall or contact with a wall. Parkinson's patients also have difficulty changing direction when walking. The principle of increased dopamine in music therapy would therefore ease parkinsonian symptoms. [ 15 ] These effects were observed in Ghai's study of various auditory feedback cues, wherein patients with Parkinson's disease experienced increased walking speed and stride length, as well as decreased cadence. [ 12 ]
Huntington's disease affects a person's movement, cognitive and psychiatric functions, which severely affects his or her quality of life. [ 18 ] Patients with Huntington's disease most commonly experience chorea , lack of impulse control, social withdrawal and apathy. Schwarz et al. conducted a review of the published literature concerning the effects of music and dance therapy on patients with Huntington's disease. The fact that music is able to enhance cognitive and motor abilities for activities other than music-related ones suggests that music may be beneficial to patients with this disease. [ 13 ] Although studies concerning the effects of music on physiologic functions are essentially inconclusive, studies find that music therapy enhances patient participation and long-term engagement in therapy, [ 13 ] which are important in achieving the maximum potential of a patient's abilities.
Individuals with dementia caused by Alzheimer's disease almost always become animated immediately when hearing a familiar song. [ 14 ] Särkämo et al. discuss the effects of music found through a systematic literature review of studies in those with this disease. Experimental studies on music and dementia find that although higher-level auditory functions such as melodic contour perception and auditory analysis are diminished in these individuals, they retain basic auditory awareness involving pitch, timbre and rhythm. [ 14 ] Music-induced emotions and memories were also found to be preserved even in patients suffering from severe dementia. Studies demonstrate beneficial effects of music on agitation, anxiety and social behaviors and interactions. [ 14 ] Cognitive tasks, such as episodic memory and verbal fluency, are affected by music as well. [ 14 ] Experimental studies on singing for individuals in this population found enhanced memory storage, verbal working memory , remote episodic memory and executive functions . [ 14 ] | https://en.wikipedia.org/wiki/Temporal_dynamics_of_music_and_language
Within molecular and cell biology , temporal feedback , also referred to as interlinked or interlocked feedback, is a biological regulatory motif in which fast and slow positive feedback loops are interlinked to create "all or none" switches. This interlinking produces separate, adjustable activation and de-activation times. This type of feedback is thought to be important in cellular processes in which an "all or none" decision is a necessary response to a specific input. The mitotic trigger, polarization in budding yeast , mammalian calcium signal transduction, EGF receptor signaling, platelet activation , and Xenopus oocyte maturation are examples for interlinked fast and slow multiple positive feedback systems. [ 1 ]
In biological systems, temporal feedback is a ubiquitous signal transduction motif that allows systems to convert graded inputs into decisive, all-or-none digital outputs. A system with interlinked fast and slow feedback loops produces a dual-time switch that is rapidly inducible and robust to noise during stimulus; the fast and slow loops are separately responsible for the speed of switching and the stability of the switch. Computer simulation studies have shown that linking two loops of the same kind brings no overall advantage over having a single loop; however, the dual-loop switch performs in a monostable regime. Both single and dual loops can behave as bistable switches. [ 1 ] Several computational models have been produced to demonstrate the responses of single and dual positive feedback loop switches to stimuli. [ 2 ] [ 3 ]
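The qualitative behavior such models capture can be reproduced with a few lines of code. Below is a toy dual-time positive-feedback switch in Python (my own minimal construction, loosely in the spirit of the computational studies cited above, not a reproduction of any published model): output O is driven by a transient stimulus S and reinforced by a fast loop A and a slow loop B.

```python
import numpy as np

def hill(x, K=0.5, n=4):
    """Steep activation nonlinearity used by both feedback loops."""
    return x ** n / (K ** n + x ** n)

def simulate(stimulus, t_end=200.0, dt=0.01, tau_fast=1.0, tau_slow=50.0):
    steps = int(t_end / dt)
    O = A = B = 0.0
    trace = np.empty(steps)
    for i in range(steps):
        S = stimulus(i * dt)
        dO = S + 0.5 * hill(A) + 0.5 * hill(B) - O   # feedback minus decay
        dA = (hill(O) - A) / tau_fast                # fast positive loop
        dB = (hill(O) - B) / tau_slow                # slow positive loop
        O += dt * dO; A += dt * dA; B += dt * dB
        trace[i] = O
    return trace

# Stimulus on for 20 time units, then removed: the fast loop switches the
# output on quickly, and the slow loop keeps it on after the stimulus ends.
trace = simulate(lambda t: 0.4 if t < 20 else 0.0)
print(trace[::2000].round(2))
```

With these (arbitrary) parameters the system is bistable: once both loops are engaged, the high state persists without the stimulus, which is the "all or none" memory the text describes.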
The transcription factor NF-κB regulates various genes that play essential roles in signaling, stress responses, cell growth and apoptosis . The temporal control of NF-κB activation by the degradation and synthesis of its inhibitor isoforms IκBα, -β and -ε has been computationally modeled. The model suggested that IκBα provides robust negative feedback that leads to a fast turn-off of the NF-κB response. On the other hand, IκBβ and -ε have been shown to reduce the oscillatory potential and stabilize NF-κB responses during long stimulations. [ 4 ]
The outgrowth and progression of limb organogenesis are controlled by a self-regulatory, robust signalling system that involves interlinked feedback mechanisms instead of independent morphogen signals. Studies of the morphogenesis of limb buds have focused on one particular axis of the limb bud. [ 5 ] However, it has long been noted that the zone of polarizing activity (ZPA) requires maintenance of the apical ectodermal ridge (AER); the dependence of the ZPA on the AER indicates the linkage between them. Three phases have been observed during the interplay between the AER and the ZPA. The initiation phase involves Grem1 expression in a fast initiator loop (~2 h loop time) due to upregulation by BMP4 ; Shh signalling is activated independently of GREM1 and AER-FGFs. The propagation phase involves the control of distal progression during limb bud development. Finally, the signalling system terminates owing to the widening gap between ZPA SHH signalling and the Grem1 expression domain. [ 5 ] In mouse limb patterning, limb development is regulated by linking a fast GREM1 module to the slower SHH/FGF epithelial-mesenchymal feedback loop. [ 6 ]
Circadian rhythms , which regulate physiology and behavior in organisms, are dependent upon a system of interlinked feedback mechanisms as well. In mammals, this process is driven by the suprachiasmatic nuclei (SCN) in the hypothalamus , composed of the two negative feedback loops Per-Cry and Clock-Bmal . Transcription of the period ( Per ) and cryptochrome ( Cry ) genes cannot proceed until CLOCK and BMAL1 have dimerized and bound to the E-box element , a process initiated by CREB-binding protein (CBP) . Once CLOCK-BMAL1 is bound to the E-box elements of Per and Cry , mRNA transcripts are produced and the proteins PER and CRY are synthesized. PER and CRY then dimerize and repress the transcription of the gene Rev-Erb , whose protein product, REV-ERB, represses transcription of Bmal . The repression of BMAL in vivo prevents the transactivation of Per-Cry , thereby completing the cycle in just over 24 hours. [ 7 ]
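A delayed negative feedback loop of this kind can be made to oscillate with a very small model. The sketch below is a classic Goodwin-type oscillator in Python, offered as a generic stand-in for the transcriptional clockwork described above (the variables and parameters are illustrative, not the published mammalian model; real circadian models are tuned so the period comes out near 24 hours).

```python
import numpy as np

# Goodwin oscillator: X ~ clock-gene mRNA, Y ~ cytoplasmic protein,
# Z ~ nuclear repressor that shuts down transcription of X.
n = 10            # repression steepness; n > 8 is needed for sustained cycling
a = c = e = 1.0   # production rates (arbitrary units)
b = d = f = 0.1   # first-order degradation rates

dt, T = 0.01, 600.0
X = Y = Z = 0.1
trace = np.empty(int(T / dt))
for i in range(trace.size):
    dX = a / (1.0 + Z ** n) - b * X   # transcription, repressed by Z
    dY = c * X - d * Y                # translation
    dZ = e * Y - f * Z                # nuclear accumulation of the repressor
    X += dt * dX; Y += dt * dY; Z += dt * dZ
    trace[i] = X

# Peaks in `trace` recur with a fixed period: a self-sustained rhythm arising
# purely from the delayed negative feedback, as in the Per/Cry loop.
print(trace[::5000].round(3))
```
| https://en.wikipedia.org/wiki/Temporal_feedback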
Temporal finitism is the doctrine that time is finite in the past . The philosophy of Aristotle , expressed in such works as his Physics , held that although space was finite, with only void existing beyond the outermost sphere of the heavens, time was infinite. This caused problems for mediaeval Islamic , Jewish , and Christian philosophers who, primarily creationist, were unable to reconcile the Aristotelian conception of the eternal with the Genesis creation narrative . [ 1 ]
In contrast to ancient Greek philosophers who believed that the universe had an infinite past with no beginning, medieval philosophers and theologians developed the concept of the universe having a finite past with a beginning. This view was inspired by the creation myth shared by the three Abrahamic religions : Judaism , Christianity and Islam . [ 2 ]
Prior to Maimonides , it was held that it was possible to prove creation philosophically; the Kalam cosmological argument , for example, held that creation was provable. Maimonides himself held that neither creation nor Aristotle's infinite time was provable, or at least that no proof was available. (According to scholars of his work, he did not make a formal distinction between unprovability and the simple absence of proof.) Thomas Aquinas was influenced by this belief, and held in his Summa Theologica that neither hypothesis was demonstrable. Some of Maimonides' Jewish successors, including Gersonides and Crescas , conversely held that the question was philosophically decidable. [ 3 ]
John Philoponus was probably the first to use the argument that infinite time is impossible in order to establish temporal finitism. He was followed by many others including St. Bonaventure .
Philoponus ' arguments for temporal finitism were severalfold. His Contra Aristotelem has been lost and is chiefly known through the citations used by Simplicius of Cilicia in his commentaries on Aristotle's Physics and De Caelo . Philoponus' refutation of Aristotle extended to six books, the first five addressing De Caelo and the sixth addressing Physics ; from comments on Philoponus made by Simplicius, it can be deduced to have been quite lengthy. [ 4 ]
A full exposition of Philoponus' several arguments, as reported by Simplicius, can be found in Sorabji. [ 5 ]
One such argument was based upon Aristotle's own theorem that there were not multiple infinities, and ran as follows: If time were infinite, then as the universe continued in existence for another hour, the infinity of its age since creation at the end of that hour must be one hour greater than the infinity of its age since creation at the start of that hour. But since Aristotle holds that such treatments of infinity are impossible and ridiculous, the world cannot have existed for infinite time.
The most sophisticated medieval arguments against an infinite past were later developed by the early Muslim philosopher , Al-Kindi (Alkindus); the Jewish philosopher , Saadia Gaon (Saadia ben Joseph); and the Muslim theologian , Al-Ghazali (Algazel). They developed two logical arguments against an infinite past, the first being the "argument from the impossibility of the existence of an actual infinite", which states: [ 6 ]
"An actual infinite cannot exist."
"An infinite temporal regress of events is an actual infinite."
"Thus, an infinite temporal regress of events cannot exist."
This argument depends on the (unproved) assertion that an actual infinite cannot exist; and that an infinite past implies an infinite succession of "events", a word not clearly defined. The second argument, the "argument from the impossibility of completing an actual infinite by successive addition", states: [ 2 ]
"An actual infinite cannot be completed by successive addition."
"The temporal series of past events has been completed by successive addition."
"Thus, the temporal series of past events cannot be an actual infinite."
The first statement correctly notes that a finite number cannot be made into an infinite one by the finite addition of more finite numbers. The second skirts around this: the analogous idea in mathematics, that the (infinite) sequence of negative integers "..., -3, -2, -1" may be extended by appending zero, then one, and so forth, is perfectly valid.
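The asymmetry at issue can be stated precisely in Cantor's ordinal arithmetic (a modern gloss on the point, not part of the medieval argument itself):

```latex
% Ordinal addition is not commutative: prepending one element to an
% infinite ascending series changes nothing, while appending one does.
\[ 1 + \omega = \omega, \qquad \omega + 1 > \omega. \]
% The completed past series ..., -3, -2, -1 has the reverse order type
% \omega^{*}; extending it by 0, 1, 2, ... yields the order type
% \omega^{*} + \omega, which is perfectly well defined.
```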
Both arguments were adopted by later Christian philosophers and theologians, and the second argument in particular became more famous after it was adopted by Immanuel Kant in his thesis of the first antinomy concerning time. [ 2 ]
Immanuel Kant 's argument for temporal finitism, from his First Antinomy, runs as follows: [ 7 ] [ 8 ]
If we assume that the world has no beginning in time, then up to every given moment an eternity has elapsed, and there has passed away in that world an infinite series of successive states of things. Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away, and that a beginning of the world is therefore a necessary condition of the world's existence.
Modern mathematics generally incorporates infinity. For most purposes it is simply used where convenient; when considered more carefully it is incorporated, or not, according to whether the axiom of infinity is included. This is the mathematical concept of infinity; while it may provide useful analogies or ways of thinking about the physical world, it says nothing directly about the physical world. Georg Cantor recognized two different kinds of infinity. The first, used in calculus, he called the variable finite, or potential infinite, represented by the sign $\infty$ (known as the lemniscate ), and the second the actual infinite , which Cantor called the "true infinite." His notion of transfinite arithmetic became the standard system for working with infinity within set theory . David Hilbert thought that the role of the actual infinite was relegated only to the abstract realm of mathematics: "The infinite is nowhere to be found in reality. It neither exists in nature nor provides a legitimate basis for rational thought... The role that remains for the infinite to play is solely that of an idea." [ 9 ] Philosopher William Lane Craig argues that if the past were infinitely long, it would entail the existence of actual infinites in reality. [ 10 ]
Craig and Sinclair also argue that an actual infinite cannot be formed by successive addition. Quite independently of the absurdities arising from an actual infinite number of past events, the formation of an actual infinite has its own problems. For any finite number n, n+1 equals a finite number. An actual infinity has no immediate predecessor. [ 11 ]
The Tristram Shandy paradox is an attempt to illustrate the absurdity of an infinite past. Imagine Tristram Shandy, an immortal man who writes his biography so slowly that for every day that he lives, it takes him a year to record that day. Suppose that Shandy had always existed. Since there is a one-to-one correspondence between the number of past days and the number of past years on an infinite past, one could reason that Shandy could write his entire autobiography. [ 12 ] From another perspective, Shandy would only get farther and farther behind, and given a past eternity, would be infinitely far behind. [ 13 ]
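A minimal Python sketch (illustrative only, not from the cited sources) makes the two perspectives concrete: recording day n during year n pairs days with years one-to-one, yet the backlog of unrecorded days grows without bound.

# Tristram Shandy: day n of his life is written up during year n,
# so days and years are paired one-to-one; yet after d days of living
# only d // 365 of them have been recorded.

def backlog(days_lived: int, days_per_year: int = 365) -> int:
    """Days lived but not yet recorded after `days_lived` days."""
    days_recorded = days_lived // days_per_year
    return days_lived - days_recorded

for d in (365, 3650, 36500):
    print(d, "days lived ->", backlog(d), "days still unrecorded")
# The backlog grows linearly, diverging on an infinite past even
# though the bijection between days and years always exists.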
Craig asks us to suppose that we met a man who claims to have been counting down from infinity and is now just finishing. We could ask why he did not finish counting yesterday or the day before, since eternity would have been over by then. In fact, for any day n in the past, if the man would have finished his countdown by day n, he would equally have finished it by day n-1. It follows that the man could not have finished his countdown at any point in the finite past, since he would always already have been done. [ 14 ]
In 1984 physicist Paul Davies deduced a finite-time origin of the universe in a quite different way, from physical grounds: "the universe will eventually die, wallowing, as it were, in its own entropy . This is known among physicists as the 'heat death' of the universe... The universe cannot have existed for ever, otherwise it would have reached its equilibrium end state an infinite time ago. Conclusion: the universe did not always exist." [ 15 ]
More recently, though, physicists have proposed various ideas for how the universe could have existed for an infinite time, such as eternal inflation . But in 2012, Alexander Vilenkin and Audrey Mithani of Tufts University wrote a paper claiming that in any such scenario past time could not have been infinite. [ 16 ] It could, however, have been "before any nameable time", according to Leonard Susskind . [ 17 ] There are also very exotic but consistent physical scenarios under which the universe has existed from eternity. [ 18 ]
Kant's argument for finitism has been widely discussed. For instance, Jonathan Bennett [ 19 ] points out that Kant's argument is not a sound logical proof: his assertion that "Now the infinity of a series consists in the fact that it can never be completed through successive synthesis. It thus follows that it is impossible for an infinite world-series to have passed away" assumes that the universe was created at a beginning and then progressed from there, which seems to assume the conclusion. A universe that simply existed and had not been created, or a universe that was created as an infinite progression, for instance, would still be possible. Bennett quotes Strawson:
"A temporal process both completed and infinite in duration appears to be impossible only on the assumption that it has a beginning. If ... it is urged that we cannot conceive of a process of surveying which does not have a beginning, then we must inquire with what relevance and by what right the notion of surveying is introduced into the discussion at all."
Some of the criticism of William Lane Craig's argument for temporal finitism has been discussed and expanded on by Stephen Puryear. [ 20 ] [ 21 ]
In this, he writes Craig's argument as:
1. If the universe did not have a beginning, then the past would consist in an infinite temporal sequence of events.
2. An infinite temporal sequence of past events would be actually and not merely potentially infinite.
3. It is impossible for a sequence formed by successive addition to be actually infinite.
4. The temporal sequence of past events was formed by successive addition.
5. Therefore, the universe had a beginning.
Puryear points out that Aristotle and Aquinas had an opposing view to point 2, but that the most contentious premise is point 3. Puryear says that many philosophers have disagreed with point 3, and adds his own objection to it.
Puryear then points out that Craig has defended his position by saying that time might or must be naturally divided and so there is not an actual infinity of instants between two times. Puryear then goes on to argue that if Craig is willing to turn an infinity of points into a finite number of divisions, then points 1, 2 and 4 are not true.
An article by Louis J. Swingrover makes a number of points relating to the idea that Craig's "absurdities" are not contradictions in themselves: they are all either mathematically consistent (like Hilbert's hotel or the man counting down to today), or do not lead to inescapable conclusions. He argues that if one makes the assumption that any mathematically coherent model is metaphysically possible, then it can be shown that an infinite temporal chain is metaphysically possible, since one can show that there exist mathematically coherent models of an infinite progression of times. He also says that Craig might be making a cardinality error similar to assuming that because an infinitely extended temporal series would contain an infinite number of times, then it would have to contain the number "infinity".
Quentin Smith [ 22 ] attacks "their supposition that an infinite series of past events must contain some events separated from the present event by an infinite number of intermediate events, and consequently that from one of these infinitely distant past events the present could never have been reached".
Smith asserts that Craig and Wiltrow are making a cardinality error by confusing an unending sequence with a sequence whose members must be separated by an infinity: none of the integers is separated from any other integer by an infinite number of integers, so why assert that an infinite series of times must contain a time infinitely far back in the past?
Smith then says that Craig uses false presuppositions when he makes statements about infinite collections (in particular the ones relating to Hilbert's Hotel and infinite sets being equivalent to proper subsets of them), often based on Craig finding things "unbelievable", when they are actually mathematically correct. He also points out that the Tristram Shandy paradox is mathematically coherent, but some of Craig's conclusions about when the biography would be finished are incorrect.
Ellery Eells [ 23 ] expands on this last point by showing that the Tristram Shandy paradox is internally consistent and fully compatible with an infinite universe.
Graham Oppy , [ 24 ] embroiled in debate with Oderberg, points out that the Tristram Shandy story has been used in many versions. For it to be useful to the temporal finitist side, a version must be found that is logically consistent and not compatible with an infinite universe. To see this, note that the argument runs as follows:
1. If an infinite past is possible, then the Tristram Shandy story must be possible.
2. The Tristram Shandy story leads to contradiction.
3. Therefore, an infinite past is impossible.
The problem for the finitist is that point 1 is not necessarily true. If a version of the Tristram Shandy story is internally inconsistent, for instance, then the infinitist could just assert that an infinite past is possible, but that particular Tristram Shandy is not because it's not internally consistent. Oppy then lists the different versions of the Tristram Shandy story that have been put forward and shows that they are all either internally inconsistent or they don't lead to contradiction. | https://en.wikipedia.org/wiki/Temporal_finitism |
In computer science , temporal isolation is the capability of a set of processes running on the same system to run without interfering with one another's temporal constraints.
Specifically, there is temporal isolation among processes whenever the ability for each process to respect its own timing constraints (e.g. terminating a computation within a specified time ) does not depend on the temporal behavior of other unrelated processes running on the same system, thus sharing with it a set of resources such as the CPU, disk, network, etc.
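On Linux, one concrete mechanism that provides this kind of guarantee is the cgroup v2 CPU bandwidth controller, which caps how much CPU time a group of processes may consume per period. The sketch below is illustrative only; the group name and budget values are assumptions, and it needs a mounted cgroup v2 hierarchy and root privileges.

# Minimal sketch: cap a process group at 20% of one CPU using cgroup v2,
# so its CPU demand cannot disturb the timing of other groups.
import os

CGROUP = "/sys/fs/cgroup/rt_app"          # hypothetical group name
os.makedirs(CGROUP, exist_ok=True)

# "20000 100000" = 20 ms of CPU time allowed per 100 ms period.
with open(os.path.join(CGROUP, "cpu.max"), "w") as f:
    f.write("20000 100000")

# Move the current process into the capped group.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))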
Operating systems able to provide such guarantees to running processes are suitable for hosting real-time applications. | https://en.wikipedia.org/wiki/Temporal_isolation |
Temporal isolation or performance isolation among virtual machines (VMs) refers to the capability of isolating the temporal behavior (or limiting the temporal interferences) of multiple VMs among each other, despite them running on the same physical host and sharing a set of physical resources such as processors, memory, and disks.
One of the key advantages of using virtualization in server consolidation is the possibility to seamlessly "pack" multiple under-utilized systems into a single physical host, thus achieving a better overall utilization of the available hardware resources. In fact, an entire operating system (OS), along with the applications running within it, can be run in a virtual machine (VM).
However, when multiple VMs concurrently run on the same physical host, they share the available physical resources, including CPU (s), network adapter (s), disk (s) and memory. This adds a level of unpredictability to the performance exhibited by each individual VM, as compared to what is expected. For example, a VM with a temporary compute-intensive peak might disturb the other running VMs, causing a significant and undesirable temporary drop in their performance. In a world of computing that is shifting towards cloud computing paradigms, where resources (computing, storage, networking) may be remotely rented in virtualized form under precise service-level agreements, it is highly desirable that the performance of the virtualized resources be as stable and predictable as possible.
Multiple techniques may be used to address the aforementioned problem. They aim to achieve some degree of temporal isolation across the concurrently running VMs, at the various critical levels of scheduling : CPU scheduling, network scheduling and disk scheduling.
For the CPU, it is possible to use proper scheduling techniques at the hypervisor level to contain the amount of computing each VM may impose on a shared physical CPU or core. For example, on the Xen hypervisor, the BVT, Credit-based and S-EDF schedulers have been proposed for controlling how the computing power is distributed among competing VMs. [ 1 ] To get stable performance in virtualized applications, it is necessary to use scheduler configurations that are not work-conserving .
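For example, with Xen's credit scheduler a per-VM cap can be set through the xl toolstack; a non-zero cap makes the configuration non-work-conserving, since the VM cannot exceed its share even when the CPU is otherwise idle. The domain name and values below are illustrative assumptions, not a recommended configuration.

# Sketch: cap the VM "guest1" at 50% of one physical CPU under Xen's
# credit scheduler (weight 256, cap 50). A cap of 0 means "no cap",
# i.e. a work-conserving configuration.
import subprocess

subprocess.run(
    ["xl", "sched-credit", "-d", "guest1", "-w", "256", "-c", "50"],
    check=True,
)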
Also, on the KVM hypervisor, some have proposed using EDF-based scheduling strategies [ 2 ] to maintain stable and predictable performance of virtualized applications. [ 3 ] [ 4 ] Finally, with a multi-core or multi-processor physical host , it is possible to deploy each VM on a separate processor or core to temporally isolate the performance of various VMs.
For the network, it is possible to use traffic shaping techniques to limit the amount of traffic that each VM can impose on the host. Also, it is possible to install multiple network adapters on the same physical host, and configure the virtualization layer so that each VM may grant exclusive access to each one of them. For example, this is possible with the driver domains of the Xen hypervisor. Multi-queue network adapters exist which support multiple VMs at the hardware level, having separate packet queues associated to the different hosted VMs (by means of the IP addresses of the VMs), such as the Virtual Machine Device Queue (VMDq) devices by Intel . [ 5 ] Finally, real-time scheduling of the CPU may also be used for enhancing temporal isolation of network traffic from multiple VMs deployed on the same CPU. [ 6 ]
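As an illustration of the traffic-shaping approach, the Linux tc tool can attach a token bucket filter to the virtual interface backing a VM, bounding the bandwidth that VM can impose on the host. The interface name and rate below are assumptions for the example.

# Sketch: limit traffic on the (hypothetical) VM interface "vif1.0"
# to 100 Mbit/s with a token bucket filter (tbf) queueing discipline.
import subprocess

subprocess.run(
    ["tc", "qdisc", "add", "dev", "vif1.0", "root",
     "tbf", "rate", "100mbit", "burst", "32kbit", "latency", "50ms"],
    check=True,
)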
When using real-time scheduling to control the amount of CPU resources reserved for each VM, one challenging problem is properly accounting for the CPU time consumed by system-wide activities. For example, in the case of the Xen scheduler, the services of Dom0 and of the driver domains might be shared across multiple VMs accessing them. Similarly, in the case of the KVM hypervisor, the workload imposed on the host OS by serving network traffic for each individual guest OS might not be easily distinguishable, because it mainly involves kernel-level device drivers and the networking infrastructure (on the host OS). Some techniques for mitigating such problems have been proposed for the Xen case. [ 7 ]
Along the lines of adaptive reservations , it is possible to apply feedback-control strategies to dynamically adapt the amount of resources reserved to each virtual machine to maintain stable performance for the virtualized application(s). [ 8 ] Following the trend of adaptiveness, in those cases in which a virtualized system is not fulfilling the expected performance levels (either due to unforeseen interferences of other concurrently running VMs, or due to a bad deployment strategy that simply picked up a machine with insufficient hardware resources), it is possible to live-migrate virtual machines while they are running, so as to host them on a more capable (or less loaded) physical host. | https://en.wikipedia.org/wiki/Temporal_isolation_among_virtual_machines |
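A minimal sketch of such a feedback loop, purely illustrative and not tied to any system cited above; `measure_response_time` and `apply_reservation` are hypothetical hooks standing in for system-specific code.

# Illustrative proportional controller for an adaptive CPU reservation.
TARGET = 0.100   # desired response time: 100 ms
GAIN = 0.5       # proportional gain

def control_step(measured: float, budget: float) -> float:
    """Grow the reservation when too slow, shrink it when there is slack."""
    error = (measured - TARGET) / TARGET
    budget *= 1.0 + GAIN * error
    return min(max(budget, 0.05), 1.0)   # clamp to sane bounds

# In a real system this would run periodically, e.g.:
#   budget = control_step(measure_response_time(vm), budget)
#   apply_reservation(vm, budget)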
Temporal monotonicity is a normative principle in decision theory and psychology that postulates that adding a period of neutral or unpleasant stimulus to an experience can only logically make the experience "worse overall." [ 1 ] [ 2 ] According to this rule, evaluations of experiences should remain consistent over time, preserving a logical relationship between duration, stimulus intensity, and the final subjective assessment. [ 2 ] Empirical research, however, has consistently demonstrated systematic deviations from temporal monotonicity. Cognitive heuristics , notably the peak–end rule , often significantly influence judgments in ways not predicted by temporal monotonicity, revealing discrepancies between normative principles and actual psychological processes. [ 2 ]
Temporal monotonicity is a normative rule in decision theory that describes how experiences ought to be evaluated over time. Specifically, it assumes that extending an episode by adding additional neutral or unpleasant moments cannot improve its overall subjective value.
As Kahneman and Frederick (2002) write,
"The most obvious rule is temporal monotonicity: there is a compelling intuition that adding an extra period of pain to an episode of discomfort can only make it worse overall." [ 1 ]
This principle plays a foundational role in normative models of rational evaluation, where judgments are expected to follow consistent, logical structures. [ 3 ]
The Peak-end rule , a psychological heuristic, provides a notable example of a violation of temporal monotonicity. Kahneman et al. (1993) found that participants were more likely to wish to repeat a trial in which they exposed their hands to 14°C water for one minute, followed by 30 seconds during which the water warmed to 15°C (1 minute 30 seconds of exposure in total), than a shorter trial of 14°C water for 60 seconds alone. [ 2 ]
According to temporal monotonicity, the longer exposure, containing more total discomfort, should be evaluated more negatively. However, participants tended to prefer the longer trial because it ended less unpleasantly, illustrating how retrospective evaluations are influenced more by the ending than by total duration or intensity, [ 4 ] and further demonstrating how preferences for certain sequences can violate normative models like temporal monotonicity.
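The violation is easy to state numerically. In the following minimal Python sketch, discomfort is scored as degrees below an assumed 15.5°C comfort threshold; the scale is an illustrative assumption, not the study's actual measure.

# Illustrative scoring: discomfort of a moment = 15.5 - temperature (deg C),
# so colder water is scored as more unpleasant. The scale is an assumption.
def discomfort(temp_c: float) -> float:
    return 15.5 - temp_c

short_trial = [14.0] * 60                  # 60 s at 14 deg C
long_trial = [14.0] * 60 + [15.0] * 30     # same, plus 30 s at 15 deg C

def total(trial):                          # what temporal monotonicity tracks
    return sum(discomfort(t) for t in trial)

def peak_end(trial):                       # what the heuristic tracks
    pains = [discomfort(t) for t in trial]
    return (max(pains) + pains[-1]) / 2

print(total(short_trial), total(long_trial))        # 90.0 105.0
print(peak_end(short_trial), peak_end(long_trial))  # 1.5 1.0

The longer trial carries more total discomfort (105 vs. 90) yet a milder peak-end summary (1.0 vs. 1.5), matching the preference observed in the experiment.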
Similar to the peak–end rule, duration neglect plays an important role in cognitive appraisal that seems to ignore temporal monotonicity. According to temporal monotonicity, longer experiences (even if the additional duration is neutral or mildly unpleasant) should logically be judged as less pleasant overall. [ 2 ] However, due to duration neglect, the actual length of an experience is often disregarded in retrospective evaluations. [ 5 ] Instead, individuals tend to base their judgments on momentary peaks and endings, rather than the total time elapsed.
Despite consistent empirical violations, temporal monotonicity remains a foundational principle within normative models of rational evaluation. As Kahneman and Frederick (2002) note, these violations should be viewed as an "expendable flourish." [ 1 ] In this view, the rule represents an ideal of consistent judgment, even if human behavior frequently departs from it. | https://en.wikipedia.org/wiki/Temporal_monotonicity |
In contemporary metaphysics , temporal parts are the parts of an object that exist in time. A temporal part would be something like "the first year of a person's life", or "all of a table from between 10:00 a.m. on June 21, 1994 to 11:00 p.m. on July 23, 1996". The term is used in the debate over the persistence of material objects. Objects typically have parts that exist in space—a human body, for example, has spatial parts like hands, feet, and legs. Some metaphysicians believe objects have temporal parts as well. [ 1 ]
Originally it was argued that those who believe in temporal parts believe in perdurantism , that persisting objects are wholes composed entirely of temporal parts. This view was contrasted with endurantism , the claim that objects are wholly present at any one time (thus not having different temporal parts at different times). [ 1 ] [ 2 ] This claim is still commonplace, but philosophers like Ted Sider believe that even endurantists should accept temporal parts.
Not everyone was happy with the definition by analogy: some philosophers, such as Peter van Inwagen , argued that—even given the definition by analogy—they still had no real idea what a temporal part was meant to be, [ 3 ] : 131 whilst others have felt that whether temporal parts existed or not is merely a verbal dispute ( Eli Hirsch holds this view). [ 4 ]
Gallois surveys some of the attempts to create a more specific definition. [ 5 ] : 256 The early attempts included identifying temporal parts with ordered pairs of times and objects; but while it seems relatively unproblematic that temporal parts exist given that definition, ordered pairs seem unsuitable to play the role that perdurantists demand of them, such as being parts of persisting wholes: how can a set be a part of a material object? Later perdurantists identified persisting objects with events, and since events unproblematically have temporal parts (for example, the first and second halves of a football match), it was imagined that persisting objects could have temporal parts too. There was a reluctance from many to identify objects with events, however, and this definition has long since fallen out of fashion.
Of the definitions closest to those commonly used in the literature, the earliest was Thomson:
x is a cross-sectional temporal part of y = df (∃ T )[ y and x exist through T & no part of x exists outside T & (∀ t )( t is in T ⊃ (∀ P )( y exactly occupies P at t ⊃ x exactly occupies P at t ))]. [ 6 ] : 207
Later, Sider tried to combat the fears of endurantists who could not understand what a temporal part is by defining it in terms of "part at a time" or "parthood at a time", a relation that the endurantist should accept, unlike parthood simpliciter —which an endurantist may say makes no sense, given that all parts are had at a time . (However, McDaniel argues [ 7 ] : 146–7 that even endurantists should accept that notion). Sider gave the following definition, which is widely used:
x is an instantaneous temporal part of y at instant t = df (i) x is a part of y ; (ii) x exists at, but only at, t ; and (iii) x overlaps every part of y that exists at t . [ 8 ] : 60
Sider also gave an alternative definition that is compatible with presentism , using the tensed operators "WILL" and "WAS":
x is an instantaneous temporal part of y = df (i) x is a part of y ; (ii) x overlaps every part of y ; (iii) it is not the case that WILL ( x exists); (iv) it is not the case that WAS ( x exists). [ 8 ] : 71
While Sider's definition is most commonly used, Zimmerman—troubled by the demand for instants (which may not exist in a gunky space-time that is such that every region has a sub-region)—gives the following:
x is a temporal part of y throughout T = df (i) x exists during and only during T ; (ii) for every subinterval T * of T , there is a z such that (a) z is a part of x , and (b) for all u , u has a part in common with z during T * if and only if u has a part in common with y during T *; and (iii) y exists at times outside of T . [ 9 ] : 122
Temporal parts are sometimes used to account for change. The problem of change is just that if an object x and an object y have different properties, then by Leibniz's Law , one ought to conclude that they are different. For example, if a person changes from having long hair to short hair, then the temporal-parts theorist can say that change is the difference between the temporal parts of a temporally extended object (the person). So, the person changes by having a temporal part with long hair, and a temporal part with short hair; the temporal parts are different, which is consistent with Leibniz's Law.
However, those who reject the notion that ordinary objects, like people, have temporal parts usually adopt a more common-sense view. They say that an object has properties at times. In this view, the person changes by having long hair at t , to short hair at t' . To them, there is no contradiction in thinking an object is capable of having different properties at different times.
A widely discussed argument in favor of temporal parts is David Lewis ' argument from temporary intrinsics, which he first advanced in On the Plurality of Worlds . [ 8 ] [ 10 ] [ citation needed ] The outline of the argument is as follows:
P1. Some properties of objects, such as shape, are intrinsic rather than relational.
P2. If all properties were had relative to times, then no properties would be intrinsic.
C1. Therefore, some properties are had simpliciter, not relative to times.
P3. In a world with a temporal dimension, only temporal parts can have properties simpliciter.
C2. Therefore, persisting objects change by having temporal parts with different intrinsic properties.
Premise P1 is an intuitive premise; generally we distinguish between properties and relations. An intrinsic property is just a property that something has independently of anything else; an extrinsic property is had only in relation to something. An example of an extrinsic property is "fatherhood": something is a father only if that something is a male and has a child. An example of an alleged intrinsic property is "shape".
According to Lewis, [ 8 ] if we know what "shapes" are, we know them to be properties, not relations. However, if properties are had relative to times, as endurantists say, then no property is intrinsic. Even if a ball is round throughout its existence, the endurantist must say "for all times in which the ball exists, the ball is round, i.e., it is round at those times; it has the property 'being round at a time'." So, if all properties are had relative to times, then there are no intrinsic properties (premise P2).
However, if we think that Lewis is right and some properties are intrinsic, then some properties are not had relative to times; they are had simpliciter (C1).
It might be said that premise P3 is more controversial. For instance, suppose a timeless world is possible. If that were so, then in that world, even if there were intrinsic properties, they would not be had by temporal parts—since by definition a timeless world has no temporal dimension, and therefore in such a world there cannot be temporal parts. However, our world is not timeless, and the possibility of timeless worlds is questionable, so it seems reasonable to think that in worlds with a temporal dimension, only temporal parts can have properties simpliciter .
This is so because temporal parts exist only at an instant, and therefore it makes no sense to speak of them as having properties at a time. Temporal parts have properties, and have a temporal location. So if person A changes from having long hair to having short hair, then that can be paraphrased by saying that there is a temporal part of A that has long hair simpliciter and another that has short hair simpliciter , and the latter is after the former in the temporal sequence; that supports premise P3.
C2 follows, so long as one is not considering empty worlds (if such worlds are even possible). An empty world doesn't have objects that change by having a temporal part with a certain property and another temporal part with a certain other property.
Premise P1, the key premise of the argument, can be coherently denied even if the resulting view—the abandonment of intrinsic properties—is counterintuitive. There are, however, ways to support the argument if one accepts relationalism about space-time. | https://en.wikipedia.org/wiki/Temporal_parts |
Temporal plasticity , also known as fine-grained environmental adaptation, [ 1 ] is a type of phenotypic plasticity that involves the phenotypic change of organisms in response to changes in the environment over time. Animals can respond to short-term environmental changes with physiological (reversible) and behavioral changes; plants, which are sedentary, respond to short-term environmental changes with both physiological and developmental (non-reversible) changes. [ 2 ]
Temporal plasticity takes place over a time scale of minutes, days, or seasons, and in environments that are both variable and predictable within the lifespan of an individual. Temporal plasticity is considered adaptive if the phenotypic response results in increased fitness . [ 3 ] Non-reversible phenotypic changes can be observed in metameric organisms such as plants that depend on the environmental condition(s) each metamer was developed under. [ 1 ] Under some circumstances early exposure to specific stressors can affect how an individual plant is capable of responding to future environmental changes ( Metaplasticity ). [ 4 ]
A reversible change is defined as one that is expressed in response to an environmental stressor but returns to a normal state after the stress is no longer present. [ 5 ] Reversible changes are more likely to be adaptive for an organism when the stress driving the change is temporary and the organism is likely to be exposed to it again within its lifetime. [ 6 ] Reversible plasticity often involves changes in physiology or behavior. Perennial plants, which often experience recurring stresses in their environment due to lack of mobility, benefit greatly from reversible physiological plasticity such as changes in resource uptake and allocation. [ 7 ] When essential nutrients are low, root and leaf resorption rates can increase, persisting at a high rate until there are more nutrients available in the soil and resorption rates can return to their normal state. [ 8 ]
Irreversible changes are described as changes that remain expressed in an organism after the environmental stress has ceased. [ 5 ] Environmental shifts that drive irreversible plasticity in an organism tend to be less rapidly changing, such as gradually increasing temperatures. This often leads to permanent changes in morphology or in the developmental process of an organism (developmental plasticity). [ 9 ] Plants are highly plastic and tend to express many irreversible developmental changes, such as shifts in timing of bud and flower development. [ 10 ] In animals, many organisms benefit from having multiple persisting morphs in a population that arise during development in response to environmental conditions. For example, freshwater snails will form more spherical shells when in the presence of a predator ( bluegill sunfish ) and conical shells when predators are absent. [ 11 ] These shell shapes are permanent and cannot be reverted, even if the predator status of the snail's environment changes.
Morphologically and developmentally plastic traits can be reversible in some cases, and some physiological responses can be irreversible, contrary to the typical trend. One example of developmental plasticity that is reversible is the shift in mouth form of the roundworm Pristionchus pacificus when exposed to changes in food type and availability. [ 12 ] A second example of reversible developmental plasticity is the length of Galapagos marine iguanas, Amblyrhynchus cristatus , in response to El Niño weather conditions. During El Niño seasons the algal food supply decreases, but it increases during La Niña seasons. This change in food availability coincides with the changes in iguana size during the season. [ 13 ]
A unique and complex example of plasticity is camouflage, an adaption that allows animals to avoid predators by hiding in plain sight. [ 14 ] The mechanisms behind camouflage are not the same in all species - they can be morphological, physiological, behavioral, or even a combination of traits. [ 15 ] Camouflage can also be irreversible or reversible, depending on the species. Camouflage can be irreversible when color patterns or other morphological traits are set during development. However, camouflage can also be reversible, with color, texture, and behavioral changes occurring in response to immediate threats (e.g., Mimic octopus ).
In some cases, the exact same change in phenotype can be reversible in one species and irreversible in another. For example, both pea and wheat plants express changes in root growth due to environmental cues, but the changes are permanent only for wheat. [ 16 ] Sometimes this can even occur within the same species, due to the largely unpredictable results of interactions between an individual's genetic make-up and their specific environmental experiences. [ 17 ]
Dicerandra linearifolia leaves grown at the beginning of its development, at lower ambient temperature, are thicker and wider and possess fewer stomata than those grown later in the same year. [ 1 ]
Leaf structure in plants is often affected by high-light and low-light conditions. After being exposed to varying levels of light, Aechmea aquilega underwent significant changes in the development of its leaf characteristics: under heavy light exposure, the leaves of the plant were observed to be smaller and more rigid than those in low-light conditions. [ 18 ]
In times of sporadic nutrient availability, fine root density increases in order to absorb nutrients more efficiently. In times of water inundation, plants will increase root mass in response, to make use of the excess water in the environment. [ 4 ]
Plants are capable of adjusting the degree to which nutrients are reabsorbed from their leaves. Resorption tends to be incomplete in nutrient-rich environments; conversely, nutrient-poor environments often trigger complete resorption in plants. [ 8 ]
Leaves grown during the dry season differ from those grown in wetter seasons: they are longer and narrower, have a higher trichome density, and contain lower anthocyanin levels. [ 19 ] | https://en.wikipedia.org/wiki/Temporal_plasticity
Temporal resolution ( TR ) refers to the discrete resolution of a measurement with respect to time . It is defined as the amount of time needed to revisit and acquire data for exactly the same location. When applied to remote sensing , this amount of time is influenced by the sensor platform's orbital characteristics and the features of the sensor itself. The temporal resolution is low when the revisiting delay is high and vice-versa. Temporal resolution is typically expressed in days. [ 1 ]
Often there is a trade-off between the temporal resolution of a measurement and its spatial resolution , due to Heisenberg's uncertainty principle . In some contexts, such as particle physics , this trade-off can be attributed to the finite speed of light and the fact that it takes a certain period of time for the photons carrying information to reach the observer. In this time, the system might have undergone changes itself. Thus, the longer the light has to travel, the lower the temporal resolution.
In another context, there is often a tradeoff between temporal resolution and computer storage . A transducer may be able to record data every millisecond , [ 2 ] [ 3 ] [ 4 ] but available storage may not allow this, and in the case of 4D PET imaging the resolution may be limited to several minutes. [ 5 ]
In some applications, temporal resolution may instead be equated with the sampling period, or its inverse, the refresh rate (or update frequency in hertz) of a TV, for example.
The temporal resolution is distinct from temporal uncertainty. This would be analogous to conflating image resolution with optical resolution . One is discrete, the other, continuous.
The temporal resolution is, roughly, the 'time' dual of the 'space' resolution of an image. In a similar way, the sample rate is equivalent to the pixel pitch on a display screen, whereas the optical resolution of a display screen is equivalent to temporal uncertainty.
Note that both image (space) resolution and time resolution are orthogonal to measurement resolution, even though space and time are also orthogonal to each other. Both an image and an oscilloscope capture can have a signal-to-noise ratio , since both also have a measurement resolution.
An oscilloscope is the temporal equivalent of a microscope, and it is limited by temporal uncertainty the same way a microscope is limited by optical resolution. A digital sampling oscilloscope also has a limitation analogous to image resolution , which is the sample rate. A non-digital, non-sampling oscilloscope is still limited by temporal uncertainty.
The temporal uncertainty can be related to the maximum frequency of continuous signal the oscilloscope can respond to, called the bandwidth and given in hertz. But for oscilloscopes this figure is not the temporal resolution; to reduce confusion, oscilloscope manufacturers use 'Sa/s' instead of 'Hz' when specifying the temporal resolution.
Two cases exist for oscilloscopes: either the probe settling time is much shorter than the real-time sampling period, or it is much longer. The case where the settling time is about the same as the sampling period is usually undesirable in an oscilloscope; it is more typical to prefer a clearly larger ratio either way or, failing that, a settling time somewhat longer than two sample periods.
In the case where it is much longer, the most typical case, the settling time dominates the temporal resolution. The shape of the response during the settling time also has a strong effect on the temporal resolution. For this reason probe leads usually offer an arrangement to 'compensate' the leads, altering the trade-off between minimal settling time and minimal overshoot .
If it is much shorter, the oscilloscope may be prone to aliasing from radio frequency interference, but this can be removed by repeatedly sampling a repetitive signal and averaging the results together. If the relationship between the 'trigger' time and the sample clock can be controlled with greater accuracy than the sampling time, then it is possible to make a measurement of a repetitive waveform with much higher temporal resolution than the sample period by upsampling each record before averaging. In this case the temporal uncertainty may be limited by clock jitter . | https://en.wikipedia.org/wiki/Temporal_resolution |
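The averaging scheme just described is easy to simulate. The following Python sketch (illustrative values only, not from the article's sources) shows uncorrelated interference shrinking roughly as the square root of the number of averaged records.

# Sketch: average many records of a repetitive signal to suppress
# uncorrelated interference. All values here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0, 1e-6, 1e-9)                   # 1 us record at 1 GSa/s
signal = np.sin(2 * np.pi * 5e6 * t)           # repetitive 5 MHz waveform

def capture():
    noise = 0.5 * rng.standard_normal(t.size)  # crude interference model
    return signal + noise

records = np.stack([capture() for _ in range(1000)])
averaged = records.mean(axis=0)

print(np.std(records[0] - signal))   # ~0.5  : single-shot noise
print(np.std(averaged - signal))     # ~0.016: reduced ~sqrt(1000)-fold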
The Temporally Ordered Routing Algorithm ( TORA ) is an algorithm for routing data across Wireless Mesh Networks or Mobile ad hoc networks . [ 1 ]
It was developed by Vincent Park and Scott Corson at the University of Maryland and the Naval Research Laboratory . Park has patented his work, and it was licensed by Nova Engineering , who are marketing a wireless router product based on Park's algorithm.
The TORA attempts to achieve a high degree of scalability using a "flat", non-hierarchical routing algorithm. In its operation the algorithm attempts to suppress, to the greatest extent possible, the generation of far-reaching control message propagation. In order to achieve this, the TORA does not use a shortest path solution, an approach which is unusual for routing algorithms of this type.
TORA builds and maintains a Directed Acyclic Graph (DAG) rooted at a destination. No two nodes may have the same height.
Information may flow from nodes with higher heights to nodes with lower heights. Information can therefore be thought of as a fluid that may only flow downhill. By maintaining a set of totally ordered heights at all times, TORA achieves loop-free multipath routing, as information cannot 'flow uphill' and so cross back on itself.
The key design concept of TORA is the localization of control messages to a very small set of nodes near the occurrence of a topological change. To accomplish this, nodes maintain routing information about their adjacent (one-hop) nodes. The protocol performs three basic functions: route creation, route maintenance, and route erasure.
During the route creation and maintenance phases, nodes use a height metric to establish a directed acyclic graph (DAG) rooted at the destination. Thereafter, links are assigned a direction based on the relative height metric of neighboring nodes. In times of mobility the DAG is broken, and route maintenance comes into play to re-establish a DAG rooted at the destination.
Timing is an important factor for TORA because the height metric depends on the logical time of a link failure. TORA's route erasure phase essentially involves flooding a broadcast clear packet (CLR) throughout the network to erase invalid routes.
A node which requires a route to a destination, because it has no downstream neighbours for it, broadcasts a QRY (query) packet and sets its (formerly unset) route-required flag. A QRY packet contains the id of the destination node a route is sought to. The reply to a query is an UPD (update) packet; it contains the height quintuple of the neighbour node answering the query and a destination field telling which destination the update is meant for.
A node receiving a QRY packet does one of the following:
If its route-required flag is already set, it discards the QRY, since it has already issued or forwarded a query for that destination.
If it has no downstream links and its route-required flag is unset, it sets the flag and re-broadcasts the QRY.
If it has at least one downstream link and a NULL height, it sets its height to one more (in the δ component) than the minimum height of its non-NULL neighbours and broadcasts an UPD.
If it has at least one downstream link and a non-NULL height, it broadcasts an UPD, unless it has already done so since the link on which the QRY arrived became active.
A node receiving an update packet updates the height value of its neighbour in the table and takes one of the following actions:
If its route-required flag is set, it adopts a height one greater (in the δ component) than that of its minimum-height neighbour, clears the flag, and broadcasts an UPD of its own.
Otherwise, if the update leaves it with no remaining downstream links, it initiates the route maintenance procedure; if downstream links remain, no further action is taken.
Each node maintains a neighbour table containing the height of the neighbour nodes. Initially the height of all nodes is NULL (this is not zero "0" but NULL "-"), so their quintuple is (-,-,-,-,i). The height of a destination neighbour is (0,0,0,0,dest).
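The following Python sketch is an illustrative reading of this description, not the published protocol code: heights are quintuples compared lexicographically, NULL is modelled as None, and the comparison orients each link downstream or upstream.

# Height quintuple: (tau, oid, r, delta, i), compared lexicographically.
# tau, oid, r form the reference level; delta orders nodes within it;
# i (the node id) breaks ties so no two nodes share the same height.
from typing import Optional, Tuple

Height = Optional[Tuple[int, int, int, int, int]]  # None models NULL

def destination_height(dest_id: int) -> Height:
    return (0, 0, 0, 0, dest_id)

def is_downstream(own: Height, neighbour: Height) -> bool:
    """A link points from the higher node to the lower one."""
    if own is None or neighbour is None:
        return False                 # NULL heights carry no direction
    return neighbour < own           # tuples compare lexicographically

me = (0, 0, 0, 2, 7)                 # higher in the DAG
nbr = (0, 0, 0, 1, 4)                # closer to the destination
print(is_downstream(me, nbr))        # True: traffic may flow me -> nbr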
Node C requires a route, so it broadcasts a QRY.
The QRY propagates until it hits a node which has a route to the destination, this node then sends an UPD message.
The UPD is also propagated, while node E sends a new UPD.
Route maintenance in TORA distinguishes five different cases, depending on how a node loses its last downstream link and on the reference levels held by its neighbours.
Partition Detection and Route Erasure
When a node has detected a partition, it sets its own height and the heights of all its neighbours for the destination in its table to NULL, and it issues a CLR (clear) packet. The CLR packet consists of the reflected reference level (t,oid,1) and the destination id.
If a node receives a CLR packet and the reference level matches its own, it sets all the heights of its neighbours and its own height for the destination to NULL and broadcasts the CLR packet. If the reference level does not match its own, it only sets to NULL the heights of the neighbours in its table that match the reflected reference level, and updates their link status. | https://en.wikipedia.org/wiki/Temporally_ordered_routing_algorithm
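Continuing the illustrative representation above (reference levels as (t, oid, r) triples, NULL as None), here is a sketch of this clearing rule; the node object and its helper methods are assumptions of the sketch, not part of the protocol specification.

# Illustrative CLR handling; `node` and its helpers are hypothetical.
def handle_clr(node, clr_ref, destination):
    """Process a CLR carrying reflected reference level `clr_ref`."""
    if node.reference_level(destination) == clr_ref:
        # Matching reference level: erase own and neighbours' heights,
        # then keep flooding the CLR.
        node.height[destination] = None
        for nbr in node.neighbours:
            node.neighbour_height[nbr, destination] = None
        node.broadcast_clr(clr_ref, destination)
    else:
        # Non-matching level: invalidate only the neighbours whose
        # reported heights carry the reflected reference level.
        for nbr in node.neighbours:
            h = node.neighbour_height[nbr, destination]
            if h is not None and h[:3] == clr_ref:
                node.neighbour_height[nbr, destination] = None
                node.update_link_status(nbr)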
Temporary adjustments are a set of operations performed on a theodolite to make it ready for taking observations. These include its initial setting up on a tripod or other stand, centering, levelling up and focusing of the eyepiece.
Exact centering is done by using the shifting head of the instrument. During this, first the screw-clamping ring of the sliding head is loosened and the upper plate of the shifting head is slid over the lower one until the plumb bob is exactly over the station mark. After the exact centering, the screw-clamping ring is tightened. This can be done by means of a forced centering plate or tribrach . An optical or laser plummet is normally used for the most accurate setting. The centering and levelling of the instrument are interactive and iterative; a re-levelling may change the centering, so each error is eliminated successively until it is negligible.
Leveling of an instrument is done to align its vertical axis with the apparent force of gravity at the station.
For two spirit vials at right angles: turn the instrument until one vial is parallel to the line joining two foot screws and centre its bubble by turning those two screws equally in opposite directions; then centre the bubble of the second vial using the third foot screw alone; repeat until both bubbles stay centred in all positions.
The same principle applies for a bulls-eye level: two foot screws turned equally in opposite directions move the bubble along one axis, and the third screw moves it along the perpendicular axis; alternate until the bubble stays centred.
To obtain an accurate clear sighting, the cross hairs should be in focus; adjust the eyepiece to do this.
To clearly view the object being sighted, focus the objective lens. | https://en.wikipedia.org/wiki/Temporary_adjustments_of_theodolites
Temporary appropriation refers to the action in which a person or a group of people realises an activity in a public space for which that space was not designed. According to Lara-Hernandez and Melis , [ 1 ] it is a process that implies dynamism, similar to what Graumann called the humanisation of the space, that is, the fundamental societally defined meanings interiorised by the individual. [ 2 ] Representative activities of temporary appropriation can be grouped in three main categories: 1) sports , leisure and cultural activities ; 2) activities related to the economy, such as work and services; and 3) activities related to sacralisation or worship . Authors stress two main factors that encourage the temporary appropriation phenomenon : on the one hand the cultural factor (also known as the synthetic psychological environment), [ 3 ] and on the other the configuration or design of the built environment . The former refers to the group of symbols, values, attitudes, skills, knowledge, meanings, communication ways, social structure and physical objects that make possible the life of a determinate society , [ 4 ] while the latter refers to human-made structures, features, and facilities viewed collectively as an environment in which people live and work. [ 5 ] Temporary appropriation is an example of Architectural Exaptation in the urban environment.
The term appropriation was first introduced by Korosec-Serfaty [ 6 ] in the proceedings of the Strasbourg conference in 1976. Within the field of environmental psychology , the term appropriation describes a temporary phenomenon that implies a dynamic process of interaction between the individual and its surroundings, a process similar to that of humanisation . [ 7 ] Since then, several authors, such as Purcell , [ 8 ] Pol, [ 9 ] and Yory [ 10 ] with the theory of topophilia , have used the term to explain the theoretical link between people and places. These authors consider appropriation an inborn necessity of humans that can be expressed through activities that occur in the urban landscape. Public spaces are an essential part of the urban landscape, and their design is therefore strongly linked to the possibility of activities related to temporary appropriation occurring. In other words, while appropriation is a broader term, its temporary variation refers more specifically to public spaces. [ 11 ] [ 12 ] The accent in the latter has always been placed on the informality of this action (for more details see Temporary appropriation and urban informality: Exploring the subtle distinction ). Dr. Lara-Hernandez conceptualises temporary appropriation instead as a consequence of the necessity of adapting human needs to a city that deprives the population of reference points due to sudden and unexpected changes. [ 13 ] Additionally, it has been claimed that temporary appropriation plays a key role in enhancing urban resilience (see Temporary Appropriation in Cities: Human Spatialisation in Public Spaces and Community Resilience ). | https://en.wikipedia.org/wiki/Temporary_appropriation
The abstract machine TDF (originally the Ten15 Distribution Format , but more recently redefined as the TenDRA Distribution Format ) evolved at the Royal Signals and Radar Establishment in the UK as a successor to Ten15 . Its design allowed support for the C programming language . TDF is the basis for the Architecture Neutral Distribution Format .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/TenDRA_Distribution_Format |
In mineralogy , tenacity is a mineral 's behavior when deformed or broken.
Brittle: the mineral breaks or powders easily. Most ionic-bonded minerals are brittle. [ 1 ]
Malleable: the mineral may be pounded out into thin sheets. Metallic-bonded minerals are usually malleable.
Ductile: the mineral may be drawn into a wire. Ductile materials have to be malleable as well as tough .
Sectile: the mineral may be cut smoothly with a knife. Relatively few minerals are sectile . Sectility is a form of tenacity and can be used to distinguish minerals of similar appearance. [ 2 ] Gold , for example, is sectile but pyrite ("fool's gold") is not.
Elastic: if bent by an external force, an elastic mineral will spring back to its original shape and size when the stress, that is, the external force, is released.
Plastic: if bent by an external force, a plastic mineral will not spring back to its original shape and size when the stress is released. It stays bent.
This mineralogy article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Tenacity_(mineralogy) |
The specific strength is a material's (or muscle's) strength (force per unit area at failure) divided by its density . It is also known as the strength-to-weight ratio or strength/weight ratio or strength-to-mass ratio . In fiber or textile applications, tenacity is the usual measure of specific strength. The SI unit for specific strength is Pa ⋅ m 3 / kg , or N ⋅m/kg, which is dimensionally equivalent to m 2 /s 2 , though the latter form is rarely used. Specific strength has the same units as specific energy , and is related to the maximum specific energy of rotation that an object can have without flying apart due to centrifugal force .
Another way to describe specific strength is breaking length , also known as self support length : the maximum length of a vertical column of the material (assuming a fixed cross-section) that could suspend its own weight when supported only at the top. For this measurement, the definition of weight is the force of gravity at the Earth's surface ( standard gravity , 9.80665 m/s 2 ) applying to the entire length of the material, not diminishing with height. This usage is more common with certain specialty fiber or textile applications.
The materials with the highest specific strengths are typically fibers such as carbon fiber , glass fiber and various polymers, and these are frequently used to make composite materials (e.g. carbon fiber-epoxy ). These materials and others such as titanium , aluminium , magnesium and high strength steel alloys are widely used in aerospace and other applications where weight savings are worth the higher material cost.
Note that strength and stiffness are distinct. Both are important in design of efficient and safe structures.
The breaking length is given by L = T s /(ρ g ), where L is the length, T s is the tensile strength, ρ is the density and g is the acceleration due to gravity (≈ 9.8 m/s²)
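A minimal Python sketch of this formula; the material figures are illustrative round numbers, not values from the cited sources.

# Breaking length L = T_s / (rho * g): the longest vertical column of
# the material that could hang from its top without snapping.
G = 9.80665                      # standard gravity, m/s^2

def breaking_length_km(tensile_strength_pa: float, density_kg_m3: float) -> float:
    return tensile_strength_pa / (density_kg_m3 * G) / 1000.0

# Illustrative round numbers for a high-strength steel wire:
print(breaking_length_km(2.0e9, 7850.0))   # ~26 km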
The data in this table are from best cases, and are intended to give only a rough figure.
Note: Multiwalled carbon nanotubes have the highest tensile strength of any material yet measured, with labs producing them at a tensile strength of 63 GPa, [ 36 ] still well below their theoretical limit of 300 GPa. The first nanotube ropes (20 mm long) whose tensile strength was published (in 2000) had a strength of 3.6 GPa, still well below their theoretical limit. [ 41 ] The density is different depending on the manufacturing method, and the lowest value is 0.037 or 0.55 (solid). [ 37 ]
The International Space Elevator Consortium uses the "Yuri" as a name for the SI units describing specific strength. Specific strength is of fundamental importance in the description of space elevator cable materials. One Yuri is conceived to be the SI unit for yield stress (or breaking stress) per unit of density of a material under tension. One Yuri equals 1 Pa⋅m 3 /kg or 1 N ⋅ m / kg , which is the breaking/yielding force per linear density of the cable under tension. [ 42 ] [ 43 ] A functional Earth space elevator would require a tether of 30–80 megaYuri (corresponding to 3100–8200 km of breaking length). [ 44 ]
The null energy condition places a fundamental limit on the specific strength of any material. [ 40 ] The specific strength is bounded to be no greater than c 2 ≈ 9 × 10 13 kN ⋅ m / kg , where c is the speed of light .
This limit is achieved by electric and magnetic field lines, QCD flux tubes , and the fundamental strings hypothesized by string theory . [ citation needed ]
Tenacity is the customary measure of strength of a fiber or yarn . It is usually defined as the ultimate (breaking) force of the fiber (in gram -force units) divided by the denier .
Because denier is a measure of linear density, the tenacity works out to be not a measure of force per unit area, but rather a quasi-dimensionless measure analogous to specific strength. [ 45 ] A tenacity of 1 gram-force per denier corresponds to: [ citation needed ]
(1 g × 9.80665 m s⁻²) / (1 g / 9000 m) = 9.80665 m s⁻² × 9000 m = 88259.85 m² s⁻²
Tenacity is most often reported in cN/tex. | https://en.wikipedia.org/wiki/Tenacity_(textile_strength)
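A small Python sketch of the unit arithmetic above (unit definitions only, no source data):

# Tenacity unit conversions implied by the arithmetic above.
G = 9.80665                           # standard gravity, m/s^2

# 1 gram-force per denier as specific strength (N*m/kg):
GF_PER_DEN = G * 9000                 # = 88259.85

# 1 cN/tex = 0.01 N per (1 g / 1000 m) = 1e4 N*m/kg:
CN_PER_TEX = 0.01 / (0.001 / 1000)    # = 10000.0

def gf_den_to_cn_tex(tenacity_gf_den: float) -> float:
    return tenacity_gf_den * GF_PER_DEN / CN_PER_TEX

print(GF_PER_DEN)                     # 88259.85
print(gf_den_to_cn_tex(1.0))          # ~8.83 cN/tex per gf/den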
A tenaja or tinaja is a water basin or retention area. The term usually implies a natural or geologic cistern in rock which retains water. They are often created by erosional processes within intermittent streams.
Before European settlers came to America , tenajas were a valuable source of water for early Native Americans traveling in the desert areas of the Southwest . Today, tenajas are an integral part of sustaining life in the arid Southwest. For example, tenajas at the Santa Rosa Plateau in southern California allow western pond turtles , California newts and red-legged frogs to survive through dry summer months. [ 1 ]
During prolonged dry spells, deep tinajas may trap desert animals who cannot climb out due to the smooth walls. [ 2 ] [ 3 ] [ 4 ]
From the Spanish tinaja: a clay pot or earthenware jar. [ 5 ]
This article related to topography is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Tenaja |
Tencent Cloud ( Chinese : 腾讯云 ) is a cloud computing service operated by Tencent , a Chinese multinational technology company. It provides cloud services to domestic and international clients and operates data centers globally.
As of Q4 2024, Tencent Cloud held approximately 2% of the global cloud infrastructure service market by revenue, ranking 8th globally, according to Synergy Research Group. In China, it was the third-largest cloud service provider with a 15% market share in Q3 2024, according to Canalys. [ 2 ] [ 3 ]
Tencent Cloud operates 56 availability zones across 21 regions globally. [ 4 ] It has nine technical support centers across Indonesia, the Philippines, Malaysia, Singapore, Thailand, Japan, South Korea, the USA, and Germany. [ 5 ]
Tencent Cloud was launched in September 2013 as Tencent 's cloud computing division. [ 6 ]
Beginning in 2016, Tencent Cloud expanded its global footprint by establishing partnerships in Asia, Europe, and the Americas. In 2018, it created Tencent Cloud and Smart Industries Group (CSIG). [ 7 ] [ 8 ]
In 2020, Tencent Cloud partnered with Huawei to develop GameMatrix, a cloud-based gaming platform. [ 9 ]
In 2021, Tencent Cloud opened new data centers in Bangkok , Frankfurt , Hong Kong , Jakarta , Tokyo , and Sao Paulo . By then, it operated data centers in 27 geographic areas across five continents with 66 availability zones. [ 10 ]
Tencent Cloud was ranked the second-largest cloud service provider in China by IDC in the same year, based on market share and year-on-year growth. [ 11 ]
From 2022 onward, Tencent Cloud entered into several regional and technological partnerships, including with the Indonesian game streaming service GOX and with Web3 entities such as Ankr, Avalanche, Scroll, Sui, and Chainlink Labs. [ 12 ] [ 13 ] [ 14 ] In 2024, it became the cloud server provider for Pocketpair 's multiplayer game Palworld and launched Alto Cloud, a data center in Cyberjaya, Malaysia, with Global Resources Management Sdn. Bhd. (GRM). [ 15 ] [ 16 ]
In 2025, Tencent Cloud announced plans for a new cloud region in Saudi Arabia with two availability zones , set to begin operations within the year. [ 17 ] [ 18 ] That same year, the company closed its South Asia Pacific (Mumbai) Availability Zone. [ citation needed ]
Tencent Cloud provides over 400 cloud-based services across areas such as compute, storage, networking and databases. [ 19 ]
On April 8, 2024, Tencent Cloud experienced disruptions due to irregularities with its cloud programming interface, reportedly affecting at least 1,957 clients. [ 20 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Tencent_Cloud |
Tendril perversion is a geometric phenomenon sometimes observed in helical structures in which the direction of the helix transitions between left-handed and right-handed. [ 1 ] [ 2 ] Such a reversal of chirality is commonly seen in helical plant tendrils and telephone handset cords. [ 3 ]
The phenomenon was known to Charles Darwin , [ 4 ] who wrote in 1865,
A tendril ... invariably becomes twisted in one part in one direction, and in another part in the opposite direction ... This curious and symmetrical structure has been noticed by several botanists, but has not been sufficiently explained. [ 5 ]
The term "tendril perversion" was coined by Alain Goriely and Michael Tabor in 1998 based on the word perversion found in 19th-century science literature. [ 6 ] [ 7 ] "Perversion" is a transition from one chirality to another and was known to James Clerk Maxwell , who attributed it to topologist J. B. Listing . [ 4 ] [ 8 ]
Tendril perversion can be viewed as an example of spontaneous symmetry breaking , in which the strained structure of the tendril adopts a configuration of minimum energy while preserving zero overall twist. [ 2 ]
Tendril perversion has been studied both experimentally and theoretically. Gerbode et al. have made experimental studies of the coiling of cucumber tendrils. [ 9 ] [ 10 ] A detailed study of a simple model of the physics of tendril perversion was made by McMillen and Goriely in the early 2000s. [ 2 ] Liu et al. showed in 2014 that "the transition from a helical to a hemihelical shape, as well as the number of perversions, depends on the height to width ratio of the strip's cross-section." [ 3 ]
Generalized tendril perversions were put forward by Silva et al., to include perversions that can be intrinsically produced in elastic filaments, leading to a multiplicity of geometries and dynamical properties. [ 11 ] | https://en.wikipedia.org/wiki/Tendril_perversion |
Tengion, Inc. is an American development-stage regenerative medicine company founded in 2003 with financing from J&J Development Corporation, HealthCap and Oak Investment Partners , which is headquartered in Winston-Salem, North Carolina . [ 1 ] Its goals are discovering, developing, manufacturing and commercializing a range of replacement organs and tissues, or neo-organs and neo-tissues, to address unmet medical needs in urologic, renal, gastrointestinal, and vascular diseases and disorders. The company creates these human neo-organs from a patient’s own cells or autologous cells, in conjunction with its Organ Regeneration Platform.
Tengion declared Chapter 7 bankruptcy in December 2014 and liquidated its assets. [ 2 ] In March 2015 its assets, including tissue engineering samples, were bought back by its creditors and former executives; the purchase was expedited. [ 3 ] The new owners then formed the Winston-Salem based RegenMedTX . [ 4 ]
Founded in 2003 and formerly headquartered in East Norriton Township, Pennsylvania before moving to Winston-Salem, North Carolina in 2012, [ 5 ] Tengion went public in 2010, after its stock had been approved for listing on the NASDAQ , through a $26 million IPO to help advance its research and development activities. [ 6 ] Some of the groundbreaking regenerative medicine technologies of Dr. Anthony Atala , director of the Wake Forest Institute for Regenerative Medicine , formed the core from which those research and development activities grew. [ 7 ] [ 8 ]
On September 4, 2012, Tengion received a notice from NASDAQ stating that the company had not regained compliance with NASDAQ Listing Rule 5550(b)(1) and that its common stock would cease trading on the NASDAQ Capital Market effective on September 6, 2012, and would begin trading on the OTCQB tier of the OTC Marketplace . [ 9 ] The company was bought by former executives and creditors after declaring bankruptcy in 2014. [ 3 ]
All of Tengion's current regenerative medicine product candidates are investigational and will not be commercially available until the completion of clinical trials and the review and approval of associated marketing applications by the Food and Drug Administration .
Its most advanced candidate is the Neo-Urinary Conduit. A Phase I clinical trial of the Tengion Neo-Urinary Conduit was completed at several health care institutions, in patients with bladder cancer who require a total cystectomy . The trial ended in December 2014; however, information on the results has not yet been made publicly available. [ 10 ] | https://en.wikipedia.org/wiki/Tengion
Tengiz Beridze ( Georgian : თენგიზ გიორგის ძე ბერიძე; 26 October 1939 – 3 December 2024) was a Georgian biochemist. [ 1 ]
In 1967 Beridze discovered satellite DNA in plants. [ 2 ] Through his research from 1972 to 1975, he found that closely related species within a genus differ in satellite DNA content. In 1986 he published the monograph Satellite DNA with Springer. In 2013 the monograph was reissued as an e-book.
In 2011–17 he determined the complete nucleotide sequences (nuclear, chloroplast and mitochondrial) of four Georgian grape varieties.
In 2015–21 he determined the complete chloroplast DNA sequences of Georgian wheat species. [ 3 ]
In 1967, he defended his Candidate's Dissertation. In 1980, he defended his doctoral dissertation at the Bach Institute of Biochemistry in Moscow. He was elected a corresponding member of the Academy of Sciences of Georgia in 1987 and a full member in 1993. [ 4 ]
Beridze held various positions in Soviet and Georgian institutions from the 1960s onward. [ 5 ]
Beridze died on 3 December 2024, at the age of 85. [ 6 ]
Beridze was awarded the Order of Honour of Georgia in 1999. He was awarded the Serge Durmishidze prize in Biochemistry in 2009. | https://en.wikipedia.org/wiki/Tengiz_Beridze |
Tennenbaum's theorem , named for Stanley Tennenbaum, who presented the theorem in 1959, is a result in mathematical logic that states that no countable nonstandard model of first-order Peano arithmetic (PA) can be recursive (Kaye 1991:153ff).
A structure $M$ in the language of PA is recursive if there are recursive functions $\oplus$ and $\otimes$ from $\mathbb{N}\times\mathbb{N}$ to $\mathbb{N}$, a recursive two-place relation $<_{M}$ on $\mathbb{N}$, and distinguished constants $n_{0},n_{1}$ such that $$(\mathbb{N},\oplus,\otimes,<_{M},n_{0},n_{1})\cong M,$$
where $\cong$ indicates isomorphism and $\mathbb{N}$ is the set of (standard) natural numbers . Because the isomorphism must be a bijection , every recursive model is countable. There are many nonisomorphic countable nonstandard models of PA.
Tennenbaum's theorem states that no countable nonstandard model of PA is recursive. Moreover, neither the addition nor the multiplication of such a model can be recursive.
This sketch follows the argument presented by Kaye (1991). The first step in the proof is to show that, if M is any countable nonstandard model of PA, then the standard system of M (defined below) contains at least one nonrecursive set S . The second step is to show that, if either the addition or multiplication operation on M were recursive, then this set S would be recursive, which is a contradiction.
Through the methods used to code ordered tuples, each element $x\in M$ can be viewed as a code for a set $S_{x}$ of elements of $M$. In particular, if we let $p_{i}$ be the $i$th prime in $M$, then $z\in S_{x}\leftrightarrow M\vDash p_{z}\mid x$. Each set $S_{x}$ will be bounded in $M$, but if $x$ is nonstandard then the set $S_{x}$ may contain infinitely many standard natural numbers. The standard system of the model is the collection $\{S_{x}\cap\mathbb{N}:x\in M\}$. It can be shown that the standard system of any nonstandard model of PA contains a nonrecursive set, either by appealing to the incompleteness theorem or by directly considering a pair of recursively inseparable r.e. sets (Kaye 1991:154). These are disjoint r.e. sets $A,B\subseteq\mathbb{N}$ so that there is no recursive set $C\subseteq\mathbb{N}$ with $A\subseteq C$ and $B\cap C=\emptyset$.
For the latter construction, begin with a pair of recursively inseparable r.e. sets $A$ and $B$. For each natural number $x$ there is a $y$ such that, for all $i<x$, if $i\in A$ then $p_{i}\mid y$ and if $i\in B$ then $p_{i}\nmid y$. By the overspill property, this means that there is some nonstandard $x$ in $M$ for which there is a (necessarily nonstandard) $y$ in $M$ so that, for every $m\in M$ with $m<_{M}x$, we have $$M\vDash (m\in A\to p_{m}\mid y)\ \text{ and }\ (m\in B\to p_{m}\nmid y),$$ where membership in $A$ and $B$ is expressed in $M$ by their defining formulas.
Let $S=\mathbb{N}\cap S_{y}$ be the corresponding set in the standard system of $M$. Because $A$ and $B$ are r.e., one can show that $A\subseteq S$ and $B\cap S=\emptyset$. Hence $S$ is a separating set for $A$ and $B$, and by the choice of $A$ and $B$ this means $S$ is nonrecursive.
Now, to prove Tennenbaum's theorem, begin with a nonstandard countable model $M$ and an element $a$ in $M$ so that $S=\mathbb{N}\cap S_{a}$ is nonrecursive. The proof method shows that, because of the way the standard system is defined, it is possible to compute the characteristic function of the set $S$ using the addition function $\oplus$ of $M$ as an oracle. In particular, if $n_{0}$ is the element of $M$ corresponding to 0, and $n_{1}$ is the element of $M$ corresponding to 1, then for each $i\in\mathbb{N}$ we can compute $n_{i}=n_{1}\oplus\cdots\oplus n_{1}$ ($i$ times). To decide if a number $n$ is in $S$, first compute $p$, the $n$th prime in $\mathbb{N}$. Then, search for an element $y$ of $M$ so that $$a=\underbrace{y\oplus y\oplus\cdots\oplus y}_{p\text{ times}}\oplus n_{i}$$
for some $i<p$. This search will halt because the Euclidean algorithm can be applied to any model of PA. Finally, we have $n\in S$ if and only if the $i$ found in the search was 0. Because $S$ is not recursive, this means that the addition operation on $M$ is nonrecursive.
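The oracle computation in the preceding paragraph can be sketched in code. The following Python fragment is purely illustrative: `oplus` and `n_elem` are hypothetical oracles standing in for the model's addition and numerals (no recursive nonstandard model exists, which is exactly the theorem's content), and `sympy.prime` supplies the standard primes.

```python
from itertools import count
from sympy import prime   # prime(k) is the k-th prime, 1-indexed: prime(1) == 2

def p_times(oplus, y, p):
    """y ⊕ y ⊕ ... ⊕ y with p summands (p >= 1)."""
    acc = y
    for _ in range(p - 1):
        acc = oplus(acc, y)
    return acc

def in_standard_set(n, a, oplus, n_elem):
    """Decide n ∈ S = N ∩ S_a using the model's addition as an oracle.

    Hypothetical oracles (they cannot actually be supplied; this mirrors
    the proof only):
      oplus(u, v) -- the model's addition, on a domain coded by naturals
      n_elem(i)   -- the element of M representing the standard number i
    """
    p = prime(n + 1)                # the n-th prime of the standard naturals
    for y in count():               # enumerate the countable domain of M
        py = p_times(oplus, y, p)   # y ⊕ ... ⊕ y, p times
        for i in range(p):          # candidate remainders 0, ..., p-1
            val = py if i == 0 else oplus(py, n_elem(i))
            if val == a:            # a = p·y + i holds in M
                return i == 0       # n ∈ S iff p divides a in M
```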
A similar argument shows that it is possible to compute the characteristic function of S using the multiplication of M as an oracle, so the multiplication operation on M is also nonrecursive (Kaye 1991:154).
Jockusch and Soare have shown that there exists a model of PA of low degree . [ 1 ] | https://en.wikipedia.org/wiki/Tennenbaum's_theorem
In structural engineering , a tensile structure is a construction of elements carrying only tension and no compression or bending . The term tensile should not be confused with tensegrity , which is a structural form with both tension and compression elements. Tensile structures are the most common type of thin-shell structures .
Most tensile structures are supported by some form of compression or bending elements, such as masts (as in The O 2 , formerly the Millennium Dome ), compression rings or beams.
A tensile membrane structure is most often used as a roof , as they can economically and attractively span large distances. Tensile membrane structures may also be used as complete buildings, with a few common applications being sports facilities, warehousing and storage buildings, and exhibition venues.
This form of construction has only become more rigorously analyzed and widespread in large structures in the latter part of the twentieth century. Tensile structures have long been used in tents , where the guy ropes and tent poles provide pre-tension to the fabric and allow it to withstand loads.
Russian engineer Vladimir Shukhov was one of the first to develop practical calculations of stresses and deformations of tensile structures, shells and membranes. Shukhov designed eight tensile and thin-shell exhibition pavilions for the Nizhny Novgorod Fair of 1896 , covering an area of 27,000 square metres. A more recent large-scale use of a membrane-covered tensile structure is the Sidney Myer Music Bowl , constructed in 1958.
Antoni Gaudí used the concept in reverse to create a compression-only structure for the Church of Colònia Güell . He created a hanging tensile model of the church to calculate the compression forces and to experimentally determine the column and vault geometries.
The concept was later championed by German architect and engineer Frei Otto , whose first use of the idea was in the construction of the West German pavilion at Expo 67 in Montreal. Otto next used the idea for the roof of the Olympic Stadium for the 1972 Summer Olympics in Munich .
Since the 1960s, tensile structures have been promoted by designers and engineers such as Ove Arup , Buro Happold , Frei Otto , Mahmoud Bodo Rasch , Eero Saarinen , Horst Berger , Matthew Nowicki , Jörg Schlaich , and David Geiger .
Steady technological progress has increased the popularity of fabric-roofed structures. The low weight of the materials makes construction easier and cheaper than standard designs, especially when vast open spaces have to be covered.
Common materials for doubly curved fabric structures are PTFE -coated fiberglass and PVC -coated polyester . These are woven materials with different strengths in different directions. The warp fibers (those fibers which are originally straight—equivalent to the starting fibers on a loom) can carry greater load than the weft or fill fibers, which are woven between the warp fibers.
Other structures make use of ETFE film, either as single layer or in cushion form (which can be inflated, to provide good insulation properties or for aesthetic effect—as on the Allianz Arena in Munich ). ETFE cushions can also be etched with patterns in order to let different levels of light through when inflated to different levels.
In daylight, fabric membrane translucency offers soft diffused naturally lit spaces, while at night, artificial lighting can be used to create an ambient exterior luminescence. They are most often supported by a structural frame as they cannot derive their strength from double curvature. [ 1 ]
Cables can be of mild steel , high strength steel (drawn carbon steel), stainless steel , polyester or aramid fibres . Structural cables are made of a series of small strands twisted or bound together to form a much larger cable. Steel cables are either spiral strand, where circular rods are twisted together and "glued" using a polymer, or locked coil strand, where individual interlocking steel strands form the cable (often with a spiral strand core).
Spiral strand is slightly weaker than locked coil strand. Steel spiral strand cables have a Young's modulus , E, of 150±10 kN/mm² (or 150±10 GPa ) and come in sizes from 3 to 90 mm diameter. [ citation needed ] Spiral strand suffers from construction stretch, where the strands compact when the cable is loaded. This is normally removed by pre-stretching the cable and cycling the load up and down to 45% of the ultimate tensile load.
Locked coil strand typically has a Young's modulus of 160±10 kN/mm² and comes in sizes from 20 mm to 160 mm diameter.
The properties of the individual strands of different materials are shown in the table below, where UTS is ultimate tensile strength , or the breaking load:
Air-supported structures are a form of tensile structures where the fabric envelope is supported by pressurised air only.
The majority of fabric structures derive their strength from their doubly curved shape. By forcing the fabric to take on double-curvature the fabric gains sufficient stiffness to withstand the loads it is subjected to (for example wind and snow loads). In order to induce an adequately doubly curved form it is most often necessary to pretension or prestress the fabric or its supporting structure.
The behaviour of structures which depend upon prestress to attain their strength is non-linear, so anything other than a very simple cable has, until the 1990s, been very difficult to design. The most common way to design doubly curved fabric structures was to construct scale models of the final buildings in order to understand their behaviour and to conduct form-finding exercises. Such scale models often employed stocking material or tights, or soap film, as they behave in a very similar way to structural fabrics (they cannot carry shear).
Soap films have uniform stress in every direction and require a closed boundary to form. They naturally form a minimal surface—the form with minimal area and embodying minimal energy. They are however very difficult to measure. For a large film, its weight can seriously affect its form.
For a membrane with curvature in two directions, the basic equation of equilibrium is: $$w=\frac{t_{1}}{R_{1}}+\frac{t_{2}}{R_{2}}$$
where: $t_{1}$ and $t_{2}$ are the tensions per unit length in the two principal directions, $R_{1}$ and $R_{2}$ are the corresponding radii of principal curvature, and $w$ is the applied load per unit area.
Lines of principal curvature have no twist and intersect other lines of principal curvature at right angles.
A geodesic or geodetic line is usually the shortest line between two points on the surface. These lines are typically used when defining the cutting pattern seam-lines. This is due to their relative straightness after the planar cloths have been generated, resulting in lower cloth wastage and closer alignment with the fabric weave.
In a pre-stressed but unloaded surface $w=0$, so $\frac{t_{1}}{R_{1}}=-\frac{t_{2}}{R_{2}}$.
In a soap film surface tensions are uniform in both directions, so $R_{1}=-R_{2}$.
It is now possible to use powerful non-linear numerical analysis programs (or finite element analysis ) to formfind and design fabric and cable structures. The programs must allow for large deflections.
The final shape, or form, of a fabric structure depends upon:
It is important that the final form will not allow ponding of water, as this can deform the membrane and lead to local failure or progressive failure of the entire structure.
Snow loading can be a serious problem for membrane structure, as the snow often will not flow off the structure as water will. For example, this has in the past caused the (temporary) collapse of the Hubert H. Humphrey Metrodome , an air-inflated structure in Minneapolis, Minnesota . Some structures prone to ponding use heating to melt snow which settles on them.
There are many different doubly curved forms, many of which have special mathematical properties. The most basic doubly curved form is the saddle shape, which can be a hyperbolic paraboloid (not all saddle shapes are hyperbolic paraboloids). This is a doubly ruled surface and is often used in lightweight shell structures (see hyperboloid structures ). True ruled surfaces are rarely found in tensile structures. Other forms are anticlastic saddles, various radial and conical tent forms, and any combination of them.
Pretension is tension artificially induced in the structural elements in addition to any self-weight or imposed loads they may carry. It is used to ensure that the normally very flexible structural elements remain stiff under all possible loads. [ 2 ] [ 3 ]
A day-to-day example of pretension is a shelving unit supported by wires running from floor to ceiling. The wires hold the shelves in place because they are tensioned; if the wires were slack, the system would not work.
Pretension can be applied to a membrane by stretching it from its edges or by pretensioning cables which support it and hence changing its shape. The level of pretension applied determines the shape of a membrane structure.
The alternative approximated approach to the form-finding problem solution is based on the total energy balance of a grid-nodal system. Due to its physical meaning this approach is called the stretched grid method (SGM).
A uniformly loaded cable spanning between two supports forms a curve intermediate between a catenary curve and a parabola . The simplifying assumption can be made that it approximates a circular arc (of radius R ).
By equilibrium :
The horizontal and vertical reactions :
By geometry :
The length of the cable:
The tension in the cable:
By substitution:
The tension is also equal to:
The extension of the cable upon being loaded is (from Hooke's law , where the axial stiffness, $k$, is equal to $k=\frac{EA}{L}$): $$e=\frac{TL}{EA}$$
where E is the Young's modulus of the cable and A is its cross-sectional area .
If an initial pretension, $T_{0}$, is added to the cable, the extension becomes: $$e=\frac{(T-T_{0})L}{EA}$$
Combining the above equations gives:
By plotting the left hand side of this equation against $T$, and plotting the right hand side on the same axes, also against $T$, the intersection will give the actual equilibrium tension in the cable for a given loading $w$ and a given pretension $T_{0}$.
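To make the intersection concrete, the equilibrium tension can also be found numerically. The sketch below assumes the standard shallow-cable compatibility relation, equating the geometric extra length $w^{2}L^{3}/24T^{2}$ with the elastic stretch $(T-T_{0})L/EA$; all numeric values are invented for illustration.

```python
from scipy.optimize import brentq

# Illustrative cable data (assumed values, not from the article)
E = 160e9        # Young's modulus, Pa (locked coil strand)
A = 1e-3         # cross-sectional area, m^2
L = 50.0         # span, m
T0 = 100e3       # initial pretension, N
w = 2e3          # uniform load, N/m

def geometric_extension(T):
    """Extra length demanded by the sagged shape: w^2 L^3 / (24 T^2),
    the shallow-arc expansion for a uniformly loaded cable."""
    return w**2 * L**3 / (24.0 * T**2)

def elastic_extension(T):
    """Elastic stretch from Hooke's law with pretension T0."""
    return (T - T0) * L / (E * A)

# Equilibrium: the tension at which both extensions agree
T_eq = brentq(lambda T: elastic_extension(T) - geometric_extension(T),
              T0 + 1.0, 100 * T0)
print(f"equilibrium tension ≈ {T_eq/1e3:.0f} kN")
```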
A similar solution to that above can be derived where:
By equilibrium:
By geometry:
This gives the following relationship:
As before, plotting the left hand side and right hand side of the equation against the tension, $T$, will give the equilibrium tension for a given pretension, $T_{0}$, and load, $W$.
The fundamental natural frequency , $f_{1}$, of tensioned cables is given by: $$f_{1}=\frac{1}{2L}\sqrt{\frac{T}{m}}$$
where T = tension in newtons , m = mass per unit length in kilograms per metre and L = span length in metres.
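A quick numerical check of this formula (with m taken as mass per unit length, as the units require; the cable data are invented):

```python
from math import sqrt

def cable_fundamental_frequency(T, m, L):
    """f1 = (1 / 2L) * sqrt(T / m) for a taut cable.

    T: tension (N), m: mass per unit length (kg/m), L: span (m).
    """
    return sqrt(T / m) / (2.0 * L)

# Example: 50 m span, 100 kN tension, 8 kg/m cable
print(f"f1 = {cable_fundamental_frequency(100e3, 8.0, 50.0):.2f} Hz")
```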
The Construction Specifications Institute (CSI) and Construction Specifications Canada (CSC), MasterFormat 2018 Edition, Division 05 and 13:
CSI/CSC MasterFormat 1995 Edition: | https://en.wikipedia.org/wiki/Tensile_structure |
Tensile testing , also known as tension testing , [ 1 ] is a fundamental materials science and engineering test in which a sample is subjected to a controlled tension until failure. Properties that are directly measured via a tensile test are ultimate tensile strength , breaking strength , maximum elongation and reduction in area. [ 2 ] From these measurements the following properties can also be determined: Young's modulus , Poisson's ratio , yield strength , and strain-hardening characteristics. [ 3 ] Uniaxial tensile testing is the most commonly used method for obtaining the mechanical characteristics of isotropic materials. For some materials, biaxial tensile testing is used. The main difference between these testing machines is how load is applied to the materials.
Tensile testing might have a variety of purposes, such as:
The preparation of test specimens depends on the purposes of testing and on the governing test method or specification . A tensile specimen usually has a standardized sample cross-section. It has two shoulders and a gauge (section) in between. The shoulders and grip section are generally larger than the gauge section by 33% [ 4 ] so they can be easily gripped. The gauge section's smaller diameter also allows the deformation and failure to occur in this area. [ 2 ] [ 5 ]
The shoulders of the test specimen can be manufactured in various ways to mate to various grips in the testing machine (see the image below). Each system has advantages and disadvantages; for example, shoulders designed for serrated grips are easy and cheap to manufacture, but the alignment of the specimen is dependent on the skill of the technician. On the other hand, a pinned grip assures good alignment. Threaded shoulders and grips also assure good alignment, but the technician must know to thread each shoulder into the grip at least one diameter's length, otherwise the threads can strip before the specimen fractures. [ 6 ]
In large castings and forgings it is common to add extra material, which is designed to be removed from the casting so that test specimens can be made from it. These specimens may not be an exact representation of the whole workpiece because the grain structure may differ throughout. In smaller workpieces or when critical parts of the casting must be tested, a workpiece may be sacrificed to make the test specimens. [ 7 ] For workpieces that are machined from bar stock , the test specimen can be made from the same piece as the bar stock.
For soft and porous materials, like electrospun nonwovens made of nanofibers, the specimen is usually a sample strip supported by a paper frame to favour its mounting on the machine and to avoid membrane damaging. [ 8 ] [ 9 ]
A. A threaded shoulder for use with a threaded grip
B. A round shoulder for use with serrated grips
C. A butt end shoulder for use with a split collar
D. A flat shoulder for use with serrated grips
The repeatability of a testing machine can be found by using special test specimens meticulously made to be as similar as possible. [ 7 ]
A standard specimen is prepared with a round or a square section along the gauge length, depending on the standard used. Both ends of the specimen should have sufficient length and a surface condition such that they are firmly gripped during testing. The initial gauge length $L_{0}$ is standardized (in several countries) and varies with the diameter ($D_{0}$) or the cross-sectional area ($A_{0}$) of the specimen, as listed in the relevant standards.
The following table gives examples of test specimen dimensions and tolerances per standard ASTM E8.
The most common testing machine used in tensile testing is the universal testing machine . This type of machine has two crossheads; one is adjusted for the length of the specimen and the other is driven to apply tension to the test specimen. Testing machines are either electromechanical or hydraulic . [ 5 ]
The electromechanical machine uses an electric motor, gear reduction system and one, two or four screws to move the crosshead up or down. A range of crosshead speeds can be achieved by changing the speed of the motor. The speed of the crosshead, and consequently the load rate, can be controlled by a microprocessor in the closed-loop servo controller. A hydraulic testing machine uses either a single- or dual-acting piston to move the crosshead up or down. Manually operated testing systems are also available. Manual configurations require the operator to adjust a needle valve in order to control the load rate. A general comparison shows that the electromechanical machine is capable of a wide range of test speeds and long crosshead displacements, whereas the hydraulic machine is a cost-effective solution for generating high forces. [ 11 ]
The machine must have the proper capabilities for the test specimen being tested. There are four main parameters: force capacity, speed, precision and accuracy . Force capacity refers to the fact that the machine must be able to generate enough force to fracture the specimen. The machine must be able to apply the force quickly or slowly enough to properly mimic the actual application. Finally, the machine must be able to accurately and precisely measure the gauge length and forces applied; for instance, a large machine that is designed to measure long elongations may not work with a brittle material that experiences short elongations prior to fracturing. [ 6 ]
Alignment of the test specimen in the testing machine is critical, because if the specimen is misaligned, either at an angle or offset to one side, the machine will exert a bending force on the specimen. This is especially bad for brittle materials, because it will dramatically skew the results. This situation can be minimized by using spherical seats or U-joints between the grips and the test machine. [ 6 ] If the initial portion of the stress–strain curve is curved and not linear, it indicates the specimen is misaligned in the testing machine. [ 12 ]
The strain measurements are most commonly made with an extensometer , but strain gauges are also frequently used on small test specimens or when Poisson's ratio is being measured. [ 6 ] Newer test machines have digital time, force, and elongation measurement systems consisting of electronic sensors connected to a data collection device (often a computer) and software to manipulate and output the data. However, analog machines continue to meet and exceed ASTM, NIST, and ASM metal tensile testing accuracy requirements, and they continue to be used today. [ citation needed ]
The test process involves placing the test specimen in the testing machine and slowly extending it until it fractures. During this process, the elongation of the gauge section is recorded against the applied force. The data is manipulated so that it is not specific to the geometry of the test sample. The elongation measurement is used to calculate the engineering strain , $\varepsilon$, using the following equation: [ 5 ] $$\varepsilon=\frac{\Delta L}{L_{0}}=\frac{L-L_{0}}{L_{0}}$$
where $\Delta L$ is the change in gauge length, $L_{0}$ is the initial gauge length, and $L$ is the final length. The force measurement is used to calculate the engineering stress , $\sigma$, using the following equation: [ 5 ] $$\sigma=\frac{F}{A}$$
where $F$ is the tensile force and $A$ is the nominal cross-section of the specimen. The machine does these calculations as the force increases, so that the data points can be graphed into a stress–strain curve . [ 5 ]
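These two conversions are straightforward to script. The following sketch uses invented load–elongation data for a hypothetical 10 mm round specimen:

```python
import numpy as np

# Illustrative raw data (assumed values): force in N, elongation in mm
force = np.array([0.0, 5e3, 10e3, 15e3, 18e3])
elongation = np.array([0.0, 0.05, 0.10, 0.16, 0.30])

L0 = 50.0          # initial gauge length, mm
A0 = 78.5          # nominal cross-section, mm^2 (10 mm diameter round bar)

strain = elongation / L0          # engineering strain (dimensionless)
stress = force / A0               # engineering stress, N/mm^2 = MPa

# Young's modulus from the initial linear region (first few points)
E = np.polyfit(strain[:3], stress[:3], 1)[0]
print(f"E ≈ {E/1e3:.0f} GPa, UTS ≈ {stress.max():.0f} MPa")
```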
When dealing with porous and soft materials, such as electrospun nanofibrous membranes, the application of the above stress formula is problematic. The membrane thickness, indeed, depends on the pressure applied during its measurement, leading to variable thickness values. As a consequence, the obtained stress-strain curves show high variability. In this case, normalizing the load with respect to the specimen mass instead of the cross-section area (A) is recommended to obtain reliable tensile results. [ 13 ]
Tensile testing can be used to test creep in materials, a slow plastic deformation of the material from constant applied stresses over extended periods of time. Creep is generally aided by diffusion and dislocation movement. While there are many ways to test creep, tensile testing is useful for materials such as concrete and ceramics that behave differently in tension and compression, and thus possess different tensile and compressive creep rates. As such, understanding the tensile creep is important in the design of concrete for structures that experience tension, such as water holding containers, or for general structural integrity. [ 14 ]
Tensile creep testing generally follows the same process as standard testing, albeit generally at lower stresses so as to remain in the creep domain rather than plastic deformation. Additionally, specialized tensile creep testing equipment may include an integrated high-temperature furnace to aid diffusion. [ 15 ] The sample is held at constant temperature and tension, and strain on the material is measured using strain gauges or laser gauges. The measured strain can be fitted with equations governing different mechanisms of creep, such as power law creep or diffusion creep (see creep for more information). Further analysis can be obtained from examining the sample post fracture. Understanding the creep mechanism and rate can aid materials selection and design.
Sample alignment is particularly important for tensile creep testing. Off-centered loading will result in a bending stress being applied to the sample. Bending can be measured by tracking strain on all sides of the sample. The percent bending can then be defined as the difference between the strain on one face ($\varepsilon_{1}$) and the average strain ($\varepsilon_{0}$): [ 16 ]
$$\text{Percent Bending}=\frac{\varepsilon_{1}-\varepsilon_{0}}{\varepsilon_{0}}\times 100$$
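A minimal helper for this check might look as follows (invented gauge readings; the acceptance limits are stated after the sketch):

```python
def percent_bending(strain_face, strain_avg):
    """Percent bending from one face's strain and the average strain."""
    return (strain_face - strain_avg) / strain_avg * 100.0

# Gauges on four faces of the sample (assumed readings)
strains = [1.02e-3, 0.98e-3, 1.01e-3, 0.99e-3]
avg = sum(strains) / len(strains)
worst = max(abs(percent_bending(s, avg)) for s in strains)
print(f"worst-case bending: {worst:.1f}%")   # compare against the limits below
```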
Percent bending should be under 1% on the wider face of loaded samples, and under 2% on the thinner face. Bending can be caused by misalignment on the loading clamp and asymmetric machining of samples. [ 16 ] | https://en.wikipedia.org/wiki/Tensile_testing |
In surface science , a tensiometer is a measuring instrument used to measure the surface tension ( γ ) of liquids or surfaces . Tensiometers are used in research and development laboratories to determine the surface tension of liquids like coatings , lacquers or adhesives . A further application field of tensiometers is the monitoring of industrial production processes like parts cleaning or electroplating .
Surface scientists commonly use an optical goniometer /tensiometer to measure the surface tension and interfacial tension of a liquid using the pendant or sessile drop methods. A drop is produced and captured using a CCD camera . The drop profile is subsequently extracted, and sophisticated software routines then fit the theoretical Young-Laplace equation to the experimental drop profile. The surface tension can then be calculated from the fitted parameters. Unlike other methods, this technique requires only a small amount of liquid, making it suitable for measuring interfacial tensions of expensive liquids. [ 1 ]
This type of tensiometer uses a platinum ring which is submersed in a liquid. As the ring is pulled out of the liquid, the force required is precisely measured in order to determine the surface tension of the liquid.
The method is well established, as shown by a number of international standards such as ASTM D971. It is widely used for interfacial tension measurement between two liquids, but care should be taken to keep the platinum ring undeformed.
The Wilhelmy plate tensiometer requires a plate to make contact with the liquid surface. It is widely considered the simplest and most accurate method for surface tension measurement. Due to a large wetted length of the platinum plate, the surface tension reading is typically very stable compared to alternative methods. As an additional benefit, the Wilhelmy plate can also be made from paper for disposable use. For interfacial tension measurements, buoyancy of the probe needs to be taken into account which complicates the measurement.
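A sketch of the underlying Wilhelmy calculation, assuming the standard equation $\gamma = F/(P\cos\theta)$ with wetted perimeter $P$ (the plate dimensions and force below are invented):

```python
from math import cos, radians

def wilhelmy_surface_tension(force, plate_width, plate_thickness,
                             contact_angle_deg=0.0):
    """Surface tension from the Wilhelmy plate equation γ = F / (P cos θ).

    force: measured pull force on the plate (N)
    plate_width, plate_thickness: plate dimensions (m); the wetted
        perimeter is P = 2 * (width + thickness)
    contact_angle_deg: usually taken as 0° for a roughened platinum plate
    Returns γ in N/m.
    """
    perimeter = 2.0 * (plate_width + plate_thickness)
    return force / (perimeter * cos(radians(contact_angle_deg)))

# Example: 19.9 mm x 0.1 mm platinum plate pulling 2.9 mN in water
gamma = wilhelmy_surface_tension(2.9e-3, 19.9e-3, 0.1e-3)
print(f"γ ≈ {gamma*1000:.1f} mN/m")   # ≈ 72 mN/m, near water at 25 °C
```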
This method uses a rod which is lowered into a test liquid. The rod is then pulled out of the liquid and the force required to pull it is precisely measured. The method is not standardized but is sometimes used. The Du Noüy-Padday rod pull tensiometer takes measurements quickly and works with liquids of a wide range of viscosities. Interfacial tensions cannot be measured.
Due to the internal attractive forces of a liquid, air bubbles within the liquid are compressed. The resulting pressure (bubble pressure) rises as the bubble radius decreases. The bubble pressure method makes use of this bubble pressure, which is higher than the pressure in the surrounding liquid. A gas stream is pumped into a capillary that is immersed in the fluid. The bubble that forms at the capillary tip continually grows in surface area, and its radius of curvature decreases.
The pressure rises to a maximum level. At this point the bubble has achieved its smallest radius (the capillary radius) and begins to form a hemisphere. Beyond this point the bubble quickly increases in size and soon bursts, tearing away from the capillary, thereby allowing a new bubble to develop at the capillary tip. During this process a characteristic pressure pattern develops, which is evaluated to determine the surface tension.
Because of the easy handling and the low cleaning effort of the capillary, bubble pressure tensiometers are a common alternative for monitoring the detergent concentration in cleaning or electroplating processes.
Media related to Tensiometer (surface tension) at Wikimedia Commons | https://en.wikipedia.org/wiki/Tensiometer_(surface_tension) |
A tension-leg platform ( TLP ) or extended tension leg platform ( ETLP ) is a vertically moored floating structure normally used for the offshore production of oil or gas , and is particularly suited for water depths greater than 300 metres (about 1000 ft) and less than 1500 metres (about 4900 ft). Use of tension-leg platforms has also been proposed for offshore wind turbines .
The platform is permanently moored by means of tethers or tendons grouped at each of the structure's corners. A group of tethers is called a tension leg. A feature of the design of the tethers is that they have relatively high axial stiffness (low elasticity ), such that virtually all vertical motion of the platform is eliminated. This allows the platform to have the production wellheads on deck (connected directly to the subsea wells by rigid risers), instead of on the seafloor . This allows a simpler well completion and gives better control over the production from the oil or gas reservoir , and easier access for downhole intervention operations.
TLPs have been in use since the early 1980s. The first tension leg platform [ 1 ] was built for Conoco's Hutton field in the North Sea in the early 1980s. The hull was built in the dry-dock at Highland Fabricator's Nigg yard in the north of Scotland, with the deck section built nearby at McDermott's yard at Ardersier. The two parts were mated in the Moray Firth in 1984.
The Hutton TLP was originally designed for a service life of 25 years in North Sea depth of 100 to 1000 metres. It had 16 tension legs. Its weight varied between 46,500 and 55,000 tons when moored to the seabed, but up to 61,580 tons when floating freely. [ 1 ] The total area of its living quarters was about 3,500 square metres and accommodated over 100 cabins though only 40 people were necessary to maintain the structure in place. [ 1 ]
The hull of the Hutton TLP has been separated from the topsides. Topsides have been redeployed to the Prirazlomnoye field in the Barents Sea , while the hull was reportedly sold to a project in the Gulf of Mexico (although the hull has been moored in Cromarty Firth since 2009). [ 2 ]
Larger TLPs will normally have a full drilling rig on the platform with which to drill and intervene on the wells. The smaller TLPs may have a workover rig, or with most recent TLPs, production wellheads located at remote drillcentres subsea.
The deepest (E)TLPs measured from the sea floor to the surface are: [ 3 ]
Although the Massachusetts Institute of Technology and the National Renewable Energy Laboratory explored the concept of TLPs for offshore wind turbines in September 2006, architects had studied the idea as early as 2003. [ 1 ] Earlier offshore wind turbines cost more to produce, stood on towers dug deep into the ocean floor, were only possible in depths of at most 50 feet (15 m), and generated 1.5 megawatts for onshore units and 3.5 megawatts for conventional offshore setups. In contrast, TLP installation was calculated to cost a third as much. TLPs float, and researchers estimate they can operate in depths between 100 and 650 feet (30 and 200 m) and farther from land, and that they can generate 5.0 megawatts. [ 5 ]
MIT and NREL researchers planned a half-scale prototype south of Cape Cod to prove the concept. Computer simulations project that in a hurricane TLPs would shift 0.9 m to 1.8 m and the turbine blades would cycle above wave peaks. Dampers could be used to reduce motion in the event of a natural disaster . [ 5 ]
Blue H Technologies of the Netherlands deployed the world's first floating wind turbine on a tension-leg platform, 21.3 kilometres (13.2 mi) off the coast of Apulia , Italy in December 2007. [ 6 ] [ 7 ] The prototype was installed in waters 113 metres (371 ft) deep in order to gather test data on wind and sea conditions, and was decommissioned at the end of 2008. [ 8 ] The turbine utilized a tension-leg platform design and a two-bladed turbine. [ 8 ] Seawind Ocean Technology B.V., which was established by Martin Jakubowski and Silvestro Caruso (the founders of Blue H Technologies), acquired the proprietary rights to the two-bladed floating turbine technology developed by Blue H Technologies. [ 6 ] [ 9 ] [ 10 ]
A fictitious tension-leg platform anchored in the Gulf of Mexico is at the centre of the plot of the novel Seawitch (1977) by Alistair MacLean . At the time of publication there were no commercially active TLPs, and the plot involves a conspiracy to destroy Seawitch by competing oil companies. The prologue to the novel explains the principles of operation. | https://en.wikipedia.org/wiki/Tension-leg_platform |
Tension fabric buildings or tension fabric structures are constructed using a rigid frame—which can consist of timber , steel , rigid plastic, or aluminum —and a sturdy fabric outer membrane . Once the frame is erected, the fabric cover is stretched over the frame. The fabric cover is tensioned to provide the stable structural support of the building . The fabric is tensioned using multiple methods, varying by manufacturer, to create a tight fitting cover membrane.
Compared to traditional or conventional buildings, tension fabric buildings may have lower operational costs due to the daylight that comes through the fabric roof when light-coloured fabrics are used. This natural lighting process is known as daylighting and can improve both energy use and life-cycle costs, as well as occupant health. [ 1 ] [ 2 ]
Tension fabric structures may be more quickly installed than traditional structures as they use fewer materials and therefore usually require less ground works to install.
Some tension fabric structures, particularly those with aluminium frames, may be easily relocated.
Tension fabric buildings have gained popularity over the last few decades in industries using: indoor practice facilities , commercial structures, industrial buildings, manufacturing, warehousing , sand and salt storage for road maintenance departments, environmental management , aviation , airplane hangars , marine , government , military , remediation and emergency shelters, hay and feed storage, and horse riding arenas. [ 3 ]
These structures are suitable for quickly expanding existing facilities, by attaching the fabric structures to extend warehouses or workspaces. They can also be used as covered loading/unloading areas. [ 4 ]
Tension fabric buildings are often used for sports due to the natural light that permeates light-coloured fabrics. These buildings provide covered indoor spaces that allow teams to train under natural daylight when weather is inclement, combating a common problem in sports known as rainout .
The light weight of the fabric roofs enables the construction of tension fabric structures up to 100 m (330 ft) clear span without supporting pillars or columns , contributing to the use of these buildings for applications that require large open spaces. One example is Phase 2 of the Sport Ireland National Indoor Arena project which includes a tension fabric building that will be 18,480 m 2 (198,900 sq ft) in size, to be used for gaelic games , rugby and soccer . [ 5 ] [ 6 ]
These buildings may also be used for holding livestock or as indoor riding arenas, due to the controlled interior climate and the existence of tension structures that run over 1 mi (1.6 km) long. [ 7 ]
Building sizes are usually standardized by the nature of being a pre-engineered building . Some manufacturers produce tension fabric buildings spanning up to 300 feet wide and of almost any length. Buildings can be designed to be portable, mounted on wheels or other rolling crane-type designs fitted to the base plates, or lifted in modules by overhead cranes .
Industrial-strength fabrics, which can have life expectancies of 20–30 years, have been used for many applications. Fabric life expectancy is affected by local environmental factors (e.g. sunlight, temperature, wind, air quality) and occupancy conditions (e.g. humidity, chemical vapours). The structural membranes available as of 2020 are made of PVC or polyethylene . Some fabrics are sufficiently translucent to allow sunlight to pass through, creating a naturally lit environment inside the building. Fabric selection influences project capital cost and maintenance.
In some jurisdictions tension fabric buildings may qualify as temporary structures which benefit from a shorter capital depreciation period, relative to a permanent structure, for tax purposes. Buildings classified as temporary structures may have significant limitations on occupancy, applied load and fire safety considerations and period of installation.
Whilst a common application of tension fabric buildings is temporary use, they are not exempt from regulatory requirements, including compliance with building codes, occupancy classifications, aesthetics and building permits. Tension fabric buildings are required to meet the same building code safety requirements and applicable design standards as any other structure.
Tension fabric buildings may also be permanent structures with structural longevity varying according to manufacturer. | https://en.wikipedia.org/wiki/Tension_fabric_building |
TensorFloat-32 ( TF32 ) is a numeric floating point format designed for Tensor Core running on certain Nvidia GPUs.
The binary format is:
Sign: 1 bit
Exponent: 8 bits (the same range as standard 32-bit IEEE 754 floats)
Significand: 10 explicitly stored bits (the same precision as half-precision FP16)
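The precision loss can be simulated by clearing the fraction bits that TF32 discards from an FP32 value. This is a minimal sketch assuming simple truncation, whereas the hardware rounds to nearest:

```python
import struct

def round_to_tf32(x: float) -> float:
    """Approximate TF32 by keeping only 10 of FP32's 23 fraction bits.

    TF32 keeps FP32's 8-bit exponent but only 10 fraction bits, so the
    low 13 fraction bits are cleared here (truncation, for simplicity).
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    bits &= ~((1 << 13) - 1)      # clear the 13 low fraction bits
    return struct.unpack('<f', struct.pack('<I', bits))[0]

print(round_to_tf32(0.1))   # 0.0999755859375 -- visibly coarser than FP32
```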
The total 19-bit format fits within a double word (32 bits), and while it lacks precision compared with a normal 32-bit IEEE 754 floating-point number, it provides much faster computation, up to 8 times faster on an A100 (compared to a V100 using FP32 ). [ 1 ] | https://en.wikipedia.org/wiki/TensorFloat-32
In mathematics , the modern component-free approach to the theory of a tensor views a tensor as an abstract object , expressing some definite type of multilinear concept. Their properties can be derived from their definitions, as linear maps or more generally; and the rules for manipulations of tensors arise as an extension of linear algebra to multilinear algebra .
In differential geometry , an intrinsic [ definition needed ] geometric statement may be described by a tensor field on a manifold , and then doesn't need to make reference to coordinates at all. The same is true in general relativity , of tensor fields describing a physical property . The component-free approach is also used extensively in abstract algebra and homological algebra , where tensors arise naturally.
Given a finite set { V 1 , ..., V n } of vector spaces over a common field F , one may form their tensor product V 1 ⊗ ... ⊗ V n , an element of which is termed a tensor .
A tensor on the vector space $V$ is then defined to be an element of (i.e., a vector in) a vector space of the form $V\otimes\cdots\otimes V\otimes V^{*}\otimes\cdots\otimes V^{*}$, where $V^{*}$ is the dual space of $V$.
If there are $m$ copies of $V$ and $n$ copies of $V^{*}$ in our product, the tensor is said to be of type $(m,n)$ and contravariant of order $m$ and covariant of order $n$ and of total order $m+n$. The tensors of order zero are just the scalars (elements of the field $F$), those of contravariant order 1 are the vectors in $V$, and those of covariant order 1 are the one-forms in $V^{*}$ (for this reason, the elements of the last two spaces are often called the contravariant and covariant vectors). The space of all tensors of type $(m,n)$ is denoted $$T_{n}^{m}(V)=\underbrace{V\otimes\dots\otimes V}_{m}\otimes\underbrace{V^{*}\otimes\dots\otimes V^{*}}_{n}.$$
Example 1. The space of type $(1,1)$ tensors, $T_{1}^{1}(V)=V\otimes V^{*}$, is isomorphic in a natural way to the space of linear transformations from $V$ to $V$.
Example 2. A bilinear form on a real vector space $V$, $V\times V\to F$, corresponds in a natural way to a type $(0,2)$ tensor in $T_{2}^{0}(V)=V^{*}\otimes V^{*}$. An example of such a bilinear form may be defined, [ clarification needed ] termed the associated metric tensor , and is usually denoted $g$.
A simple tensor (also called a tensor of rank one, elementary tensor or decomposable tensor [ 1 ] ) is a tensor that can be written as a product of tensors of the form $T=a\otimes b\otimes\cdots\otimes d$, where $a,b,\ldots,d$ are nonzero and in $V$ or $V^{*}$ – that is, if the tensor is nonzero and completely factorizable . Every tensor can be expressed as a sum of simple tensors. The rank of a tensor $T$ is the minimum number of simple tensors that sum to $T$. [ 2 ]
The zero tensor has rank zero. A nonzero order 0 or 1 tensor always has rank 1. The rank of a non-zero order 2 or higher tensor is less than or equal to the product of the dimensions of all but the highest-dimensioned vectors in (a sum of products of) which the tensor can be expressed, which is d n −1 when each product is of n vectors from a finite-dimensional vector space of dimension d .
The term rank of a tensor extends the notion of the rank of a matrix in linear algebra, although the term is also often used to mean the order (or degree) of a tensor. The rank of a matrix is the minimum number of column vectors needed to span the range of the matrix . A matrix thus has rank one if it can be written as an outer product of two nonzero vectors: $$A=vw^{\mathrm{T}}.$$
The rank of a matrix $A$ is the smallest number of such outer products that can be summed to produce it: $$A=v_{1}w_{1}^{\mathrm{T}}+\cdots+v_{k}w_{k}^{\mathrm{T}}.$$
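This characterization is easy to check numerically; a brief NumPy illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
v1, w1 = rng.standard_normal(4), rng.standard_normal(5)
v2, w2 = rng.standard_normal(4), rng.standard_normal(5)

A1 = np.outer(v1, w1)            # a single outer product: rank 1
A2 = A1 + np.outer(v2, w2)       # sum of two outer products: rank <= 2

print(np.linalg.matrix_rank(A1))  # 1
print(np.linalg.matrix_rank(A2))  # 2 (generically)
```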
In indices, a tensor of rank 1 is a tensor of the form $$T_{ij\dots}^{k\ell\dots}=a_{i}b_{j}\cdots c^{k}d^{\ell}\cdots.$$
The rank of a tensor of order 2 agrees with the rank when the tensor is regarded as a matrix , [ 3 ] and can be determined from Gaussian elimination for instance. The rank of an order 3 or higher tensor is however often very difficult to determine, and low rank decompositions of tensors are sometimes of great practical interest. [ 4 ] In fact, the problem of finding the rank of an order 3 tensor over any finite field is NP-complete, and over the rationals, NP-hard. [ 5 ] Computational tasks such as the efficient multiplication of matrices and the efficient evaluation of polynomials can be recast as the problem of simultaneously evaluating a set of bilinear forms $$z_{k}=\sum_{ij}T_{ijk}x_{i}y_{j}$$ for given inputs $x_{i}$ and $y_{j}$. If a low-rank decomposition of the tensor $T$ is known, then an efficient evaluation strategy is known. [ 6 ]
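With a small order-3 example, the simultaneous evaluation of the bilinear forms is a single contraction; a brief NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 4, 2))   # order-3 tensor T_ijk
x = rng.standard_normal(3)
y = rng.standard_normal(4)

# Evaluate all bilinear forms z_k = sum_ij T_ijk x_i y_j at once
z = np.einsum('ijk,i,j->k', T, x, y)
print(z)        # two values, one per bilinear form
```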
The space T n m ( V ) {\displaystyle T_{n}^{m}(V)} can be characterized by a universal property in terms of multilinear mappings . Amongst the advantages of this approach are that it gives a way to show that many linear mappings are "natural" or "geometric" (in other words are independent of any choice of basis). Explicit computational information can then be written down using bases, and this order of priorities can be more convenient than proving a formula gives rise to a natural mapping. Another aspect is that tensor products are not used only for free modules , and the "universal" approach carries over more easily to more general situations.
A scalar-valued function on a Cartesian product (or direct sum ) of vector spaces $f:V_{1}\times\cdots\times V_{N}\to F$ is multilinear if it is linear in each argument. The space of all multilinear mappings from $V_{1}\times\ldots\times V_{N}$ to $W$ is denoted $L^{N}(V_{1},\ldots,V_{N};W)$. When $N=1$, a multilinear mapping is just an ordinary linear mapping, and the space of all linear mappings from $V$ to $W$ is denoted $L(V;W)$.
The universal characterization of the tensor product implies that, for each multilinear function $$f\in L^{m+n}(\underbrace{V^{*},\ldots,V^{*}}_{m},\underbrace{V,\ldots,V}_{n};W)$$ (where $W$ can represent the field of scalars, a vector space, or a tensor space) there exists a unique linear function $$T_{f}\in L(\underbrace{V^{*}\otimes\cdots\otimes V^{*}}_{m}\otimes\underbrace{V\otimes\cdots\otimes V}_{n};W)$$ such that $$f(\alpha_{1},\ldots,\alpha_{m},v_{1},\ldots,v_{n})=T_{f}(\alpha_{1}\otimes\cdots\otimes\alpha_{m}\otimes v_{1}\otimes\cdots\otimes v_{n})$$ for all $v_{i}$ in $V$ and $\alpha_{i}$ in $V^{*}$.
Using the universal property, it follows, when $V$ is finite dimensional , that the space of $(m,n)$-tensors admits a natural isomorphism $$T_{n}^{m}(V)\cong L(\underbrace{V^{*}\otimes\cdots\otimes V^{*}}_{m}\otimes\underbrace{V\otimes\cdots\otimes V}_{n};F)\cong L^{m+n}(\underbrace{V^{*},\ldots,V^{*}}_{m},\underbrace{V,\ldots,V}_{n};F).$$
Each $V$ in the definition of the tensor corresponds to a $V^{*}$ inside the argument of the linear maps, and vice versa. (Note that in the former case, there are $m$ copies of $V$ and $n$ copies of $V^{*}$, and in the latter case vice versa.) In particular, one has $$\begin{aligned}T_{0}^{1}(V)&\cong L(V^{*};F)\cong V,\\T_{1}^{0}(V)&\cong L(V;F)=V^{*},\\T_{1}^{1}(V)&\cong L(V;V).\end{aligned}$$
Differential geometry , physics and engineering must often deal with tensor fields on smooth manifolds . The term tensor is sometimes used as a shorthand for tensor field . A tensor field expresses the concept of a tensor that varies from point to point on the manifold. | https://en.wikipedia.org/wiki/Tensor_(intrinsic_definition) |
In machine learning , the term tensor informally refers to two different concepts (i) a way of organizing data and (ii) a multilinear (tensor) transformation. Data may be organized in a multidimensional array ( M -way array), informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an M -way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Tensor decomposition factorizes data tensors into smaller tensors. [ 1 ] [ 6 ] Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product . [ 7 ] The computation of gradients, a crucial aspect of backpropagation , can be performed using software libraries such as PyTorch and TensorFlow . [ 8 ] [ 9 ]
Computations are often performed on graphics processing units (GPUs) using CUDA , and on dedicated hardware such as Google 's Tensor Processing Unit or Nvidia 's Tensor core . These developments have greatly accelerated neural network architectures, and increased the size and complexity of models that can be trained.
A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields , considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity . In machine learning, the exact use of tensors depends on the statistical approach being used.
In 2001, the fields of signal processing and statistics were already making use of tensor methods. Pierre Comon surveys the early adoption of tensor methods in the fields of telecommunications, radio surveillance, chemometrics and sensor processing. Linear tensor rank methods (such as Parafac/CANDECOMP) analyzed M-way arrays ("data tensors") composed of higher order statistics that were employed in blind source separation problems to compute a linear model of the data. He noted several early limitations in determining the tensor rank and performing efficient tensor rank decomposition. [ 10 ]
In the early 2000s, multilinear tensor methods [ 1 ] [ 11 ] crossed over into computer vision, computer graphics and machine learning with papers by Vasilescu or in collaboration with Terzopoulos, such as Human Motion Signatures, [ 12 ] [ 13 ] TensorFaces [ 14 ] [ 15 ] TensorTexures [ 16 ] and Multilinear Projection. [ 17 ] [ 18 ] Multilinear algebra, the algebra of higher-order tensors, is a suitable and transparent framework for analyzing the multifactor structure of an ensemble of observations and for addressing the difficult problem of disentangling the causal factors based on second order [ 14 ] or higher order statistics associated with each causal factor. [ 15 ]
Tensor (multilinear) factor analysis disentangles and reduces the influence of different causal factors with multilinear subspace learning. [ 19 ] When treating an image or a video as a 2- or 3-way array, i.e., "data matrix/tensor", tensor methods reduce spatial or time redundancies as demonstrated by Wang and Ahuja. [ 20 ]
Yoshua Bengio, [ 21 ] [ 22 ] Geoff Hinton [ 23 ] [ 24 ] and their collaborators briefly discuss the relationship between deep neural networks and tensor factor analysis [ 14 ] [ 15 ] beyond the use of M-way arrays ("data tensors") as inputs. One of the early uses of tensors for neural networks appeared in natural language processing . A single word can be expressed as a vector via Word2vec . [ 5 ] Thus a relationship between two words can be encoded in a matrix. However, for more complex relationships such as subject-object-verb, it is necessary to build higher-dimensional networks. In 2009, the work of Sutskever introduced Bayesian Clustered Tensor Factorization to model relational concepts while reducing the parameter space. [ 25 ] From 2014 to 2015, tensor methods became more common in convolutional neural networks (CNNs). Tensor methods organize neural network weights in a "data tensor", and are used to analyze and reduce the number of neural network weights. [ 26 ] [ 27 ] Lebedev et al. accelerated CNN networks for character classification (the recognition of letters and digits in images) by using 4D kernel tensors. [ 28 ]
Let $\mathbb{F}$ be a field such as the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$. A tensor $\mathcal{T}\in\mathbb{F}^{I_{0}\times I_{1}\times\ldots\times I_{C}}$ is a multilinear transformation from a set of domain vector spaces to a range vector space:
$$\mathcal{T}:\{\mathbb{F}^{I_{1}}\times\mathbb{F}^{I_{2}}\times\ldots\mathbb{F}^{I_{C}}\}\mapsto\mathbb{F}^{I_{0}}$$
Here, $C$ and $I_{0},I_{1},\ldots,I_{C}$ are positive integers, and $(C+1)$ is the number of modes of a tensor (also known as the number of ways of a multi-way array). The dimensionality of mode $c$ is $I_{c}$, for $0\leq c\leq C$. [ 14 ] [ 15 ] [ 29 ] [ 5 ]
In statistics and machine learning, an image is vectorized when viewed as a single observation, and a collection of vectorized images is organized as a "data tensor". For example, a set of facial images $\{\mathbf{d}_{i_{p},i_{e},i_{l},i_{v}}\in\mathbb{R}^{I_{X}}\}$ with $I_{X}$ pixels that are the consequences of multiple causal factors, such as facial geometry $i_{p}$ ($1\leq i_{p}\leq I_{P}$), expression $i_{e}$ ($1\leq i_{e}\leq I_{E}$), illumination condition $i_{l}$ ($1\leq i_{l}\leq I_{L}$), and viewing condition $i_{v}$ ($1\leq i_{v}\leq I_{V}$), may be organized into a data tensor (i.e., multi-way array) $\mathcal{D}\in\mathbb{R}^{I_{X}\times I_{P}\times I_{E}\times I_{L}\times I_{V}}$, where $I_{P}$ is the total number of facial geometries, $I_{E}$ the total number of expressions, $I_{L}$ the total number of illumination conditions, and $I_{V}$ the total number of viewing conditions. Tensor factorization methods such as TensorFaces and multilinear (tensor) independent component analysis factorize the data tensor into a set of vector spaces that span the causal factor representations, where an image is the result of a tensor transformation $\mathcal{T}$ that maps a set of causal factor representations to the pixel space.
Another approach to using tensors in machine learning is to embed various data types directly. For example, a grayscale image, commonly represented as a discrete 2-way array D ∈ R I R X × I C X {\displaystyle {\mathbf {D} }\in {\mathbb {R} }^{I_{RX}\times I_{CX}}} with dimensionality I R X × I C X {\displaystyle I_{RX}\times I_{CX}} where I R X {\displaystyle I_{RX}} are the number of rows and I C X {\displaystyle I_{CX}} are the number of columns. When an image is treated as 2-way array or 2nd order tensor (i.e. as a collection of column/row observations), tensor factorization methods compute the image column space, the image row space and the normalized PCA coefficients or the ICA coefficients.
Similarly, a color image with RGB channels, \(\mathcal{D}\in\mathbb{R}^{N\times M\times 3}\), may be viewed as a 3rd-order data tensor or 3-way array.
In natural language processing, a word might be expressed as a vector \(v\) via the Word2vec algorithm, so \(v\) becomes a mode-1 tensor. The embedding of subject-object-verb semantics requires embedding relationships among three words. Because a word is itself a vector, subject-object-verb semantics could be expressed using mode-3 tensors.
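A hedged sketch of this idea in NumPy: score a (subject, verb, object) triple by contracting the subject and object vectors against a mode-3 tensor with one slice per relation, in the spirit of neural tensor networks. The tensor here is randomly initialized purely for illustration; in a real model it would be learned:

```python
import numpy as np

d, r = 50, 8                        # embedding size and relation slices (assumed)
rng = np.random.default_rng(1)
W = rng.standard_normal((r, d, d))  # mode-3 tensor of relation parameters

subject = rng.standard_normal(d)    # stand-ins for Word2vec vectors
obj = rng.standard_normal(d)

# Bilinear score per relation slice: s_k = subject^T W_k obj
scores = np.einsum('i,kij,j->k', subject, W, obj)
print(scores.shape)  # (8,)
```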
In practice the neural network designer is primarily concerned with the specification of embeddings, the connection of tensor layers, and the operations performed on them in a network. Modern machine learning frameworks manage the optimization, tensor factorization and backpropagation automatically.
Tensors may be used as the unit values of neural networks which extend the concept of scalar, vector and matrix values to multiple dimensions.
The output value of a single-layer unit \(y_{m}\) is the sum-product of its input units and the connection weights filtered through the activation function \(f\):

\(y_{m} = f\!\left(\sum_{n} x_{n}\,u_{n,m}\right),\)

where \(x_{n}\) are the input values and \(u_{n,m}\) are the connection weights.
If each output element of \(y_{m}\) is a scalar, then we have the classical definition of an artificial neural network . By replacing each unit component with a tensor, the network is able to express higher-dimensional data such as images or videos.
This use of tensors to replace unit values is common in convolutional neural networks where each unit might be an image processed through multiple layers. By embedding the data in tensors such network structures enable learning of complex data types.
Tensors may also be used to compute the layers of a fully connected neural network, where the tensor is applied to the entire layer instead of individual unit values.
As before, the output value \(y_{m}\) of each unit is the sum-product of its input units and the connection weights filtered through the activation function \(f\).
The vectors \(x\) and \(y\) of input and output values can be expressed as mode-1 tensors, while the hidden weights can be expressed as a mode-2 tensor. In this example the unit values are scalars while the tensor takes on the dimensions of the network layers: \(x\in\mathbb{R}^{n}\), \(\mathbf{U}\in\mathbb{R}^{n\times m}\), \(y\in\mathbb{R}^{m}\). In this notation, the output values can be computed as a tensor product of the input and weight tensors,

\(y = f(x\,\mathbf{U}),\)

which computes the sum-product as a tensor multiplication (similar to matrix multiplication).
This formulation of tensors enables the entire layer of a fully connected network to be efficiently computed by mapping the units and weights to tensors.
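A minimal NumPy sketch of a fully connected layer computed as a single tensor contraction; the layer sizes and the ReLU activation are assumptions for the example:

```python
import numpy as np

n, m = 4, 3                          # input and output layer sizes (assumed)
rng = np.random.default_rng(2)
x = rng.standard_normal(n)           # mode-1 input tensor
U = rng.standard_normal((n, m))      # mode-2 weight tensor u_{n,m}

# y_m = f(sum_n x_n u_{n,m}) as one contraction over the shared index n.
y = np.maximum(np.einsum('n,nm->m', x, U), 0.0)  # f = ReLU (assumed)
print(y.shape)  # (3,)
```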
A different reformulation of neural networks allows tensors to express the convolution layers of a neural network. A convolutional layer has multiple inputs, each of which is a spatial structure such as an image or volume. The inputs are convolved by filtering before being passed to the next layer. A typical use is to perform feature detection or isolation in image recognition.
Convolution is often computed as the multiplication of an input signal \(g\) with a filter kernel \(f\). In two dimensions the discrete, finite form is

\((f * g)(i, j) = \sum_{m=0}^{w-1}\sum_{n=0}^{w-1} f(m, n)\, g(i - m,\, j - n),\)

where \(w\) is the width of the kernel.
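A direct (unoptimized) NumPy implementation of this discrete form, assuming "valid" boundary handling; production frameworks use much faster algorithms, but the explicit loop makes the sum-product visible:

```python
import numpy as np

def conv2d_valid(g: np.ndarray, f: np.ndarray) -> np.ndarray:
    """Direct 2-D convolution of signal g with a w x w kernel f ('valid' region)."""
    w = f.shape[0]
    H, W_ = g.shape
    out = np.zeros((H - w + 1, W_ - w + 1))
    f_flip = f[::-1, ::-1]  # flip the kernel, per the (i - m, j - n) indexing
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(f_flip * g[i:i + w, j:j + w])
    return out

g = np.arange(25, dtype=float).reshape(5, 5)
f = np.ones((3, 3)) / 9.0            # a simple box-blur kernel (assumed)
print(conv2d_valid(g, f).shape)      # (3, 3)
```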
This definition can be rephrased as a matrix-vector product in terms of tensors \(\mathcal{A}\), \(\mathcal{B}\) and \(\mathcal{C}\) that express the inverse transform, data and kernel, respectively. [ 31 ] The derivation is more complex when the filtering kernel also includes a non-linear activation function such as sigmoid or ReLU.
The hidden weights of the convolution layer are the parameters to the filter. These can be reduced with a pooling layer which reduces the resolution (size) of the data, and can also be expressed as a tensor operation.
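A pooling layer expressed as a tensor reshape-and-reduce, sketched in NumPy under the assumption that the input height and width are divisible by the pool size:

```python
import numpy as np

def max_pool2d(x: np.ndarray, p: int) -> np.ndarray:
    """p x p max pooling via reshape: (H, W) -> (H/p, W/p)."""
    H, W = x.shape
    return x.reshape(H // p, p, W // p, p).max(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(max_pool2d(x, 2))  # [[ 5.  7.] [13. 15.]]
```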
An important contribution of tensors in machine learning is the ability to factorize tensors, either to decompose data into constituent factors or to reduce the learned parameters. Data tensor modeling techniques stem from the linear tensor decomposition (CANDECOMP/PARAFAC decomposition) and the multilinear tensor decompositions (Tucker).
Tucker decomposition , for example, takes a 3-way array \(\mathcal{X}\in\mathbb{R}^{I\times J\times K}\) and decomposes the tensor into three matrices \(\mathbf{A},\mathbf{B},\mathbf{C}\) and a smaller tensor \(\mathcal{G}\). The shapes of the matrices and the new tensor are such that the total number of elements is reduced. The new tensors have shapes

\(\mathbf{A}\in\mathbb{R}^{I\times P},\qquad \mathbf{B}\in\mathbb{R}^{J\times Q},\qquad \mathbf{C}\in\mathbb{R}^{K\times R},\qquad \mathcal{G}\in\mathbb{R}^{P\times Q\times R}.\)

Then the original tensor can be expressed as the tensor product of these four tensors:

\(\mathcal{X} = \mathcal{G}\times_{1}\mathbf{A}\times_{2}\mathbf{B}\times_{3}\mathbf{C},\qquad x_{ijk} = \sum_{p=1}^{P}\sum_{q=1}^{Q}\sum_{r=1}^{R} g_{pqr}\,a_{ip}\,b_{jq}\,c_{kr}.\)
In the example shown in the figure, the dimensions of the tensors satisfy \(I\cdot J\cdot K = 144\). The total number of elements in the Tucker factorization is

\(I\cdot P + J\cdot Q + K\cdot R + P\cdot Q\cdot R,\)

which for the figure's dimensions comes to 110. The number of elements in the original \(\mathcal{X}\) is 144, so the factorization reduces the data from 144 down to 110 elements, a reduction of 23% in parameters or data size. For much larger initial tensors, and depending on the rank (redundancy) of the tensor, the gains can be more significant.
The work of Rabanser et al. provides an introduction to tensors with more details on the extension of Tucker decomposition to N-dimensions beyond the mode-3 example given here. [ 5 ]
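A compact NumPy sketch of Tucker decomposition via the higher-order SVD (HOSVD): each factor matrix is taken from the leading left singular vectors of a mode unfolding, and the core is the original tensor contracted with the factor transposes. The ranks are assumptions for the example, and HOSVD is only a quasi-optimal way of fitting the Tucker model:

```python
import numpy as np

def unfold(T: np.ndarray, mode: int) -> np.ndarray:
    """Mode-n unfolding: move `mode` to the front, flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T: np.ndarray, ranks):
    """Truncated HOSVD: returns core G and factor matrices [A, B, C, ...]."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    G = T
    for mode, U in enumerate(factors):         # G = T x_1 A^T x_2 B^T x_3 C^T
        G = np.moveaxis(np.tensordot(U.T, G, axes=(1, mode)), 0, mode)
    return G, factors

rng = np.random.default_rng(3)
X = rng.standard_normal((8, 6, 3))
G, (A, B, C) = hosvd(X, ranks=(5, 4, 2))
# Reconstruct: X_hat = G x_1 A x_2 B x_3 C
X_hat = G
for mode, U in enumerate((A, B, C)):
    X_hat = np.moveaxis(np.tensordot(U, X_hat, axes=(1, mode)), 0, mode)
print(G.shape, np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```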
Another technique for decomposing tensors rewrites the initial tensor as a sequence (train) of smaller sized tensors. A tensor-train (TT) is a sequence of tensors of reduced rank, called canonical factors . The original tensor can be expressed as the sum-product of the sequence.
The method was developed in 2011 by Ivan Oseledets, who observes that Tucker decomposition is "suitable for small dimensions, especially for the three-dimensional case. For large d it is not suitable." [ 32 ] Thus tensor-trains can be used to factorize larger tensors in higher dimensions.
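A minimal TT-SVD sketch in NumPy: repeatedly reshape and take a truncated SVD, keeping each left factor as a tensor-train core. The fixed maximum rank used here is an assumption; Oseledets' algorithm chooses ranks adaptively from singular-value tolerances:

```python
import numpy as np

def tt_svd(T: np.ndarray, max_rank: int):
    """Decompose T into a tensor-train of cores G_k with rank <= max_rank."""
    dims = T.shape
    cores, r_prev = [], 1
    M = T.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))   # TT core k
        M = (s[:r, None] * Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))             # last core
    return cores

X = np.random.default_rng(4).standard_normal((4, 5, 6, 3))
cores = tt_svd(X, max_rank=3)
print([c.shape for c in cores])  # [(1, 4, 3), (3, 5, 3), (3, 6, 3), (3, 3, 1)]
```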
The unified data architecture and automatic differentiation of tensors has enabled higher-level designs of machine learning in the form of tensor graphs. This leads to new architectures, such as tensor-graph convolutional networks (TGCN), which identify highly non-linear associations in data, combine multiple relations, and scale gracefully, while remaining robust and performant. [ 33 ]
These developments are impacting all areas of machine learning, such as text mining and clustering, time varying data, and neural networks wherein the input data is a social graph and the data changes dynamically. [ 34 ] [ 35 ] [ 36 ] [ 37 ]
Tensors provide a unified way to train neural networks for more complex data sets. However, training is computationally expensive on classical CPU hardware.
In 2014, Nvidia developed cuDNN , the CUDA Deep Neural Network library, a library of optimized primitives written in the parallel CUDA language. [ 38 ] CUDA, and thus cuDNN, runs on dedicated GPUs that implement unified massive parallelism in hardware. These GPUs were not yet dedicated chips for tensors, but rather existing hardware adapted for parallel computation in machine learning.
In the period 2015–2017, Google invented the Tensor Processing Unit (TPU). [ 39 ] TPUs are dedicated, fixed-function hardware units that specialize in the matrix multiplications needed for tensor products. Specifically, they implement an array of 65,536 multiply units that can perform a 256×256 matrix sum-product in just one global instruction cycle. [ 40 ]
Later in 2017, Nvidia released its own Tensor Core with the Volta GPU architecture. Each Tensor Core is a microunit that can perform a 4×4 matrix sum-product. There are eight tensor cores for each streaming multiprocessor (SM). [ 41 ] The first GV100 GPU has 84 SMs, resulting in 672 tensor cores. This device accelerated machine learning by 12× over the previous Tesla GPUs. [ 42 ] The number of tensor cores scales as the number of cores and SM units continues to grow in each new generation of cards.
The development of GPU hardware, combined with the unified architecture of tensor cores, has enabled the training of much larger neural networks. In 2022, the largest neural network was Google's PaLM with 540 billion learned parameters (network weights). [ 43 ] The older GPT-3 language model has over 175 billion learned parameters and produces human-like text; size is not everything, however: Stanford's much smaller 2023 Alpaca model claims to perform better, [ 44 ] having been fine-tuned from the 7-billion-parameter variant of Meta/Facebook's 2023 model LLaMA . The widely popular chatbot ChatGPT is built on top of GPT-3.5 (and, after an update, GPT-4 ) using supervised and reinforcement learning. | https://en.wikipedia.org/wiki/Tensor_(machine_learning) |
The Tensor Contraction Engine (TCE) is a compiler for a domain-specific language that allows chemists to specify computations in a high-level Mathematica -style language. It transforms tensor summation expressions into low-level code (C/Fortran) for specific hardware, taking account of memory availability, communication costs, loop fusion and loop ordering, and so on. It is used primarily in computational chemistry .
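TCE's own input is a Mathematica-style summation language, so the following is only a loose NumPy analogy: it expresses a coupled-cluster-like tensor summation with np.einsum and lets optimize=True choose a contraction order, the kind of cost-based decision a TCE-style compiler automates. All array names and sizes are illustrative assumptions:

```python
import numpy as np

# Illustrative dimensions: o = occupied orbitals, v = virtual orbitals.
o, v = 6, 10
rng = np.random.default_rng(5)
T2 = rng.standard_normal((o, o, v, v))   # a doubles-amplitude-like tensor
V4 = rng.standard_normal((v, v, v, v))   # a two-electron-integral-like tensor

# R[i,j,a,b] = sum_{c,d} V4[a,b,c,d] * T2[i,j,c,d]
# optimize=True asks NumPy to choose a low-cost contraction path,
# analogous to the loop-ordering/fusion choices a TCE-style compiler makes.
R = np.einsum('abcd,ijcd->ijab', V4, T2, optimize=True)
print(R.shape)  # (6, 6, 10, 10)
```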
| https://en.wikipedia.org/wiki/Tensor_Contraction_Engine |
In mathematics , the tensor algebra of a vector space V , denoted T ( V ) or T • ( V ), is the algebra of tensors on V (of any rank) with multiplication being the tensor product . It is the free algebra on V , in the sense of being left adjoint to the forgetful functor from algebras to vector spaces: it is the "most general" algebra containing V , in the sense of the corresponding universal property (see below ).
The tensor algebra is important because many other algebras arise as quotient algebras of T ( V ). These include the exterior algebra , the symmetric algebra , Clifford algebras , the Weyl algebra and universal enveloping algebras .
The tensor algebra also has two coalgebra structures: one simple one, which does not make it a bialgebra, but does lead to the concept of a cofree coalgebra , and a more complicated one, which yields a bialgebra , and can be extended by giving an antipode to create a Hopf algebra structure.
Note : In this article, all algebras are assumed to be unital and associative . The unit is explicitly required to define the coproduct .
Let V be a vector space over a field K . For any nonnegative integer k , we define the k th tensor power of V to be the tensor product of V with itself k times:

\(T^{k}V = V^{\otimes k} = V \otimes V \otimes \cdots \otimes V.\)

That is, \(T^{k}V\) consists of all tensors on V of order k . By convention \(T^{0}V\) is the ground field K (as a one-dimensional vector space over itself). We then construct T ( V ) as the direct sum of \(T^{k}V\) for k = 0, 1, 2, …:

\(T(V) = \bigoplus_{k=0}^{\infty} T^{k}V = K \oplus V \oplus (V\otimes V) \oplus (V\otimes V\otimes V) \oplus \cdots.\)
The multiplication in T ( V ) is determined by the canonical isomorphism

\(T^{k}V \otimes T^{\ell}V \to T^{k+\ell}V\)

given by the tensor product, which is then extended by linearity to all of T ( V ). This multiplication rule implies that the tensor algebra T ( V ) is naturally a graded algebra with \(T^{k}V\) serving as the grade- k subspace. This grading can be extended to a Z -grading by appending subspaces \(T^{k}V=\{0\}\) for negative integers k .
The construction generalizes in a straightforward manner to the tensor algebra of any module M over a commutative ring . If R is a non-commutative ring , one can still perform the construction for any R - R bimodule M . (It does not work for ordinary R -modules because the iterated tensor products cannot be formed.)
The tensor algebra T ( V ) is also called the free algebra on the vector space V , and is functorial ; this means that the map \(V\mapsto T(V)\) extends to a map sending linear maps between K -vector spaces to algebra homomorphisms, forming a functor from the category of K -vector spaces to the category of associative algebras . Similarly to other free constructions , the functor T is left adjoint to the forgetful functor that sends each associative K -algebra to its underlying vector space.
Explicitly, the tensor algebra satisfies the following universal property , which formally expresses the statement that it is the most general algebra containing V : every linear map \(f : V \to A\) from V to an associative unital algebra A over K extends uniquely to an algebra homomorphism \(\bar f : T(V)\to A\) such that \(f = \bar f\circ i\). Here i is the canonical inclusion of V into T ( V ) . As for other universal properties, the tensor algebra T ( V ) can be defined as the unique algebra satisfying this property (specifically, it is unique up to a unique isomorphism), but this definition requires one to prove that an object satisfying this property exists.
The above universal property implies that T is a functor from the category of vector spaces over K , to the category of K -algebras. This means that any linear map between K -vector spaces U and W extends uniquely to a K -algebra homomorphism from T ( U ) to T ( W ) .
If V has finite dimension n , another way of looking at the tensor algebra is as the "algebra of polynomials over K in n non-commuting variables". If we take basis vectors for V , those become non-commuting variables (or indeterminates ) in T ( V ), subject to no constraints beyond associativity , the distributive law and K -linearity.
Note that the algebra of polynomials on V is not T ( V ) {\displaystyle T(V)} , but rather T ( V ∗ ) {\displaystyle T(V^{*})} : a (homogeneous) linear function on V is an element of V ∗ , {\displaystyle V^{*},} for example coordinates x 1 , … , x n {\displaystyle x^{1},\dots ,x^{n}} on a vector space are covectors , as they take in a vector and give out a scalar (the given coordinate of the vector).
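As an informal illustration of "polynomials in non-commuting variables", elements of T ( V ) can be modeled in code as linear combinations of words over basis indices, with multiplication given by concatenation of words. This is a toy sketch, not a library API:

```python
from collections import defaultdict

def tensor_mul(a: dict, b: dict) -> dict:
    """Multiply elements of T(V) given as {word: coeff}, word = tuple of basis indices.
    The empty tuple () is the unit 1 in K = T^0 V; multiplication concatenates words."""
    out = defaultdict(float)
    for wa, ca in a.items():
        for wb, cb in b.items():
            out[wa + wb] += ca * cb
    return dict(out)

x = {(0,): 1.0}                 # basis vector e_0 in V = T^1 V
y = {(1,): 1.0}                 # basis vector e_1
p = tensor_mul(x, y)            # e_0 (x) e_1
q = tensor_mul(y, x)            # e_1 (x) e_0
print(p, q, p == q)             # distinct words: multiplication is non-commutative
```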
Because of the generality of the tensor algebra, many other algebras of interest can be constructed by starting with the tensor algebra and then imposing certain relations on the generators, i.e. by constructing certain quotient algebras of T ( V ). Examples of this are the exterior algebra , the symmetric algebra , Clifford algebras , the Weyl algebra and universal enveloping algebras .
The tensor algebra has two different coalgebra structures. One is compatible with the tensor product, and thus can be extended to a bialgebra , and can further be extended with an antipode to a Hopf algebra structure. The other structure, although simpler, cannot be extended to a bialgebra. The first structure is developed immediately below; the second structure is given in the section on the cofree coalgebra , further down.
The development provided below applies equally well to the exterior algebra , using the wedge symbol \(\wedge\) in place of the tensor symbol \(\otimes\); one must also keep track of a sign when permuting elements of the exterior algebra. This correspondence carries through the definition of the bialgebra, and on to the definition of a Hopf algebra. That is, the exterior algebra can also be given a Hopf algebra structure.
Similarly, the symmetric algebra can also be given the structure of a Hopf algebra, in exactly the same fashion, by replacing everywhere the tensor product ⊗ {\displaystyle \otimes } by the symmetrized tensor product ⊗ S y m {\displaystyle \otimes _{\mathrm {Sym} }} , i.e. that product where v ⊗ S y m w = w ⊗ S y m v . {\displaystyle v\otimes _{\mathrm {Sym} }w=w\otimes _{\mathrm {Sym} }v.}
In each case, this is possible because the alternating product ∧ {\displaystyle \wedge } and the symmetric product ⊗ S y m {\displaystyle \otimes _{\mathrm {Sym} }} obey the required consistency conditions for the definition of a bialgebra and Hopf algebra; this can be explicitly checked in the manner below. Whenever one has a product obeying these consistency conditions, the construction goes through; insofar as such a product gave rise to a quotient space, the quotient space inherits the Hopf algebra structure.
In the language of category theory , one says that there is a functor T from the category of K -vector spaces to the category of K -associative algebras. But there is also a functor Λ taking vector spaces to the category of exterior algebras, and a functor Sym taking vector spaces to symmetric algebras. There is a natural map from T to each of these. Verifying that quotienting preserves the Hopf algebra structure is the same as verifying that the maps are indeed natural.
The coalgebra is obtained by defining a coproduct or diagonal operator

\(\Delta : TV \to TV \boxtimes TV.\)
Here, T V {\displaystyle TV} is used as a short-hand for T ( V ) {\displaystyle T(V)} to avoid an explosion of parentheses. The ⊠ {\displaystyle \boxtimes } symbol is used to denote the "external" tensor product, needed for the definition of a coalgebra. It is being used to distinguish it from the "internal" tensor product ⊗ {\displaystyle \otimes } , which is already being used to denote multiplication in the tensor algebra (see the section Multiplication , below, for further clarification on this issue). In order to avoid confusion between these two symbols, most texts will replace ⊗ {\displaystyle \otimes } by a plain dot, or even drop it altogether, with the understanding that it is implied from context. This then allows the ⊗ {\displaystyle \otimes } symbol to be used in place of the ⊠ {\displaystyle \boxtimes } symbol. This is not done below, and the two symbols are used independently and explicitly, so as to show the proper location of each. The result is a bit more verbose, but should be easier to comprehend.
The definition of the operator \(\Delta\) is most easily built up in stages, first by defining it for elements \(v\in V\subset TV\) and then by homomorphically extending it to the whole algebra. A suitable choice for the coproduct is then

\(\Delta(v) = v \boxtimes 1 + 1 \boxtimes v\)

and

\(\Delta(1) = 1 \boxtimes 1,\)

where \(1\in K=T^{0}V\subset TV\) is the unit of the field \(K\). By linearity, one obviously has

\(\Delta(k) = k\,(1\boxtimes 1)\)
for all \(k\in K\). It is straightforward to verify that this definition satisfies the axioms of a coalgebra: that is, that

\((\mathrm{id}_{TV} \boxtimes \Delta)\circ\Delta = (\Delta \boxtimes \mathrm{id}_{TV})\circ\Delta,\)

where \(\mathrm{id}_{TV}: x\mapsto x\) is the identity map on \(TV\). Indeed, for \(v\in V\) one gets

\(((\mathrm{id}_{TV}\boxtimes\Delta)\circ\Delta)(v) = v\boxtimes 1\boxtimes 1 + 1\boxtimes v\boxtimes 1 + 1\boxtimes 1\boxtimes v\)
and likewise for the other side. At this point, one could invoke a lemma, and say that \(\Delta\) extends trivially, by linearity, to all of \(TV\), because \(TV\) is a free object , \(V\) is a generator of the free algebra, and \(\Delta\) is a homomorphism. However, it is insightful to provide explicit expressions. So, for \(v\otimes w\in T^{2}V\), one has (by definition) the homomorphism

\(\Delta(v\otimes w) = \Delta(v)\,\Delta(w).\)

Expanding, one has

\(\Delta(v\otimes w) = (v\otimes w)\boxtimes 1 + v\boxtimes w + w\boxtimes v + 1\boxtimes(v\otimes w).\)
In the above expansion, there is no need to ever write 1 ⊗ v {\displaystyle 1\otimes v} as this is just plain-old scalar multiplication in the algebra; that is, one trivially has that 1 ⊗ v = 1 ⋅ v = v . {\displaystyle 1\otimes v=1\cdot v=v.}
The extension above preserves the algebra grading. That is,

\(\Delta : T^{m}V \to \bigoplus_{p=0}^{m}\left(T^{p}V \boxtimes T^{m-p}V\right).\)
Continuing in this fashion, one can obtain an explicit expression for the coproduct acting on a homogeneous element of order m :

\(\Delta(v_{1}\otimes\cdots\otimes v_{m}) = \sum_{p=0}^{m}\;\sum_{\sigma\in\mathrm{Sh}(p,\,m-p)}\left(v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(p)}\right)\boxtimes\left(v_{\sigma(p+1)}\otimes\cdots\otimes v_{\sigma(m)}\right),\)

where ш (the Cyrillic sha) denotes the shuffle product , expressed here in the second summation, which is taken over all ( p , m − p )-shuffles . The shuffle is

\(\mathrm{Sh}(p,\,m-p) = \left\{\sigma\in S_{m} : \sigma(1)<\cdots<\sigma(p)\ \text{ and }\ \sigma(p+1)<\cdots<\sigma(m)\right\}.\)
By convention, one takes Sh( m , 0) and Sh(0, m ) to equal {id: {1, ..., m } → {1, ..., m }}. It is also convenient to take the pure tensor products \(v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(p)}\) and \(v_{\sigma(p+1)}\otimes\cdots\otimes v_{\sigma(m)}\) to equal 1 for p = 0 and p = m , respectively (the empty product in \(TV\)). The shuffle follows directly from the first axiom of a co-algebra: the relative order of the elements \(v_{k}\) is preserved in the riffle shuffle: the riffle shuffle merely splits the ordered sequence into two ordered sequences, one on the left and one on the right.
Equivalently,

\(\Delta(v_{1}\otimes\cdots\otimes v_{m}) = \sum_{S\subseteq\{1,\dots,m\}}\left(\prod_{i\in S}v_{i}\right)\boxtimes\left(\prod_{j\notin S}v_{j}\right),\)

where the products are in \(TV\) (taken in increasing index order), and where the sum is over all subsets of \(\{1,\dots,m\}\).
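A small Python sketch of this coproduct on a pure tensor: each pure tensor is a tuple of symbols, each ⊠-term a pair of tuples, and the subsets are enumerated with itertools, exactly as in the sum above (toy code, not a library API):

```python
from itertools import combinations

def coproduct(word):
    """Delta(v_1 (x) ... (x) v_m) as a list of (left, right) pairs of tuples."""
    m = len(word)
    terms = []
    for p in range(m + 1):
        for left_idx in combinations(range(m), p):       # a (p, m-p)-shuffle
            right_idx = [i for i in range(m) if i not in left_idx]
            terms.append((tuple(word[i] for i in left_idx),
                          tuple(word[i] for i in right_idx)))
    return terms

# Prints the four terms of Delta(v (x) w): 1, v, w, and v(x)w split across the sides.
for left, right in coproduct(('v', 'w')):
    print(left, '[x]', right)
```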
As before, the algebra grading is preserved:

\(\Delta : T^{m}V \to \bigoplus_{p=0}^{m}\left(T^{p}V \boxtimes T^{m-p}V\right).\)
The counit \(\epsilon: TV\to K\) is given by the projection of the field component out from the algebra. This can be written as \(\epsilon: v\mapsto 0\) for \(v\in V\) and \(\epsilon: k\mapsto k\) for \(k\in K=T^{0}V\). By homomorphism under the tensor product \(\otimes\), this extends to

\(\epsilon : x \mapsto 0\)

for all \(x\in T^{1}V\oplus T^{2}V\oplus\cdots\). It is a straightforward matter to verify that this counit satisfies the needed axiom for the coalgebra:

\((\mathrm{id}_{TV}\boxtimes\epsilon)\circ\Delta = \mathrm{id}_{TV} = (\epsilon\boxtimes\mathrm{id}_{TV})\circ\Delta.\)
Working this explicitly, one has, for \(v\in V\),

\(((\mathrm{id}_{TV}\boxtimes\epsilon)\circ\Delta)(v) = (\mathrm{id}_{TV}\boxtimes\epsilon)(v\boxtimes 1 + 1\boxtimes v) = v\boxtimes 1 \cong v,\)

where, for the last step, one has made use of the isomorphism \(TV\boxtimes K\cong TV\), as is appropriate for the defining axiom of the counit.
A bialgebra defines both multiplication and comultiplication, and requires them to be compatible.
Multiplication is given by an operator

\(\nabla : TV \boxtimes TV \to TV,\)

which, in this case, was already given as the "internal" tensor product. That is,

\(\nabla(x\boxtimes y) = x\otimes y.\)

The above should make it clear why the \(\boxtimes\) symbol needs to be used: the \(\otimes\) was actually one and the same thing as \(\nabla\), and notational sloppiness here would lead to utter chaos. To strengthen this: the tensor product \(\otimes\) of the tensor algebra corresponds to the multiplication \(\nabla\) used in the definition of an algebra, whereas the tensor product \(\boxtimes\) is the one required in the definition of comultiplication in a coalgebra. These two tensor products are not the same thing!
The unit for the algebra

\(\eta : K \to TV\)

is just the embedding, so that

\(\eta : k \mapsto k.\)
That the unit is compatible with the tensor product \(\otimes\) is "trivial": it is just part of the standard definition of the tensor product of vector spaces. That is, \(k\otimes x = kx\) for field element k and any \(x\in TV\). More verbosely, the axioms for an associative algebra require the two homomorphisms (or commuting diagrams)

\(\nabla\circ(\eta\boxtimes\mathrm{id}_{TV})\,(k\boxtimes x) = kx\)

on \(K\boxtimes TV\), and, symmetrically, on \(TV\boxtimes K\), that

\(\nabla\circ(\mathrm{id}_{TV}\boxtimes\eta)\,(x\boxtimes k) = kx,\)

where the right-hand side of these equations should be understood as the scalar product.
The unit and counit, and multiplication and comultiplication, all have to satisfy compatibility conditions. It is straightforward to see that

\(\epsilon\circ\eta = \mathrm{id}_{K}.\)

Similarly, the unit is compatible with comultiplication:

\(\Delta\circ\eta = \eta\boxtimes\eta.\)

The above requires the use of the isomorphism \(K\boxtimes K\cong K\) in order to work; without this, one loses linearity. Component-wise,

\((\Delta\circ\eta)(k) = k\,(1\boxtimes 1) = (\eta\boxtimes\eta)(k),\)
with the right-hand side making use of the isomorphism.
Multiplication and the counit are compatible:

\(\epsilon(x\otimes y) = \epsilon(x)\,\epsilon(y).\)

This vanishes whenever x or y is not an element of \(K\), and otherwise one has scalar multiplication on the field: \(k_{1}\otimes k_{2}=k_{1}k_{2}\). The most difficult to verify is the compatibility of multiplication and comultiplication:

\(\Delta\circ\nabla = (\nabla\boxtimes\nabla)\circ(\mathrm{id}_{TV}\boxtimes\tau\boxtimes\mathrm{id}_{TV})\circ(\Delta\boxtimes\Delta),\)
where \(\tau(x\boxtimes y)=y\boxtimes x\) exchanges elements. The compatibility condition only needs to be verified on \(V\subset TV\); the full compatibility follows as a homomorphic extension to all of \(TV\). The verification is verbose but straightforward; it is not given here, except for the final result: for \(v,w\in V\),

\(\Delta(v\otimes w) = (v\otimes w)\boxtimes 1 + v\boxtimes w + w\boxtimes v + 1\boxtimes(v\otimes w),\)

an explicit expression for which was given in the coalgebra section, above.
The Hopf algebra adds an antipode to the bialgebra axioms. The antipode \(S\) on \(k\in K=T^{0}V\) is given by

\(S(k) = k.\)

This is sometimes called the "anti-identity". The antipode on \(v\in V=T^{1}V\) is given by

\(S(v) = -v,\)

and on \(v\otimes w\in T^{2}V\) by

\(S(v\otimes w) = S(w)\otimes S(v) = w\otimes v.\)

This extends homomorphically to

\(S(v_{1}\otimes\cdots\otimes v_{m}) = (-1)^{m}\, v_{m}\otimes\cdots\otimes v_{1}.\)
Compatibility of the antipode with multiplication and comultiplication requires that

\(\nabla\circ(S\boxtimes\mathrm{id}_{TV})\circ\Delta = \eta\circ\epsilon = \nabla\circ(\mathrm{id}_{TV}\boxtimes S)\circ\Delta.\)

This is straightforward to verify componentwise on \(k\in K\):

\(\nabla\circ(S\boxtimes\mathrm{id}_{TV})\circ\Delta(k) = k\,\nabla(1\boxtimes 1) = k = \eta(\epsilon(k)).\)

Similarly, on \(v\in V\):

\(\nabla\circ(S\boxtimes\mathrm{id}_{TV})\circ\Delta(v) = \nabla(S(v)\boxtimes 1 + S(1)\boxtimes v) = -v + v = 0 = \eta(\epsilon(v)).\)
Recall that

\(\Delta(1) = 1\boxtimes 1\)

and that

\(\epsilon(x) = 0\)

for any \(x\in TV\) that is not in \(K\).
One may proceed in a similar manner, by homomorphism, verifying that the antipode inserts the appropriate cancellative signs in the shuffle, starting with the compatibility condition on T 2 V {\displaystyle T^{2}V} and proceeding by induction.
One may define a different coproduct on the tensor algebra, simpler than the one given above. It is given by

\(\Delta(v_{1}\otimes\cdots\otimes v_{k}) = \sum_{p=0}^{k}\left(v_{1}\otimes\cdots\otimes v_{p}\right)\boxtimes\left(v_{p+1}\otimes\cdots\otimes v_{k}\right).\)
Here, as before, one uses the notational trick v 0 = v k + 1 = 1 ∈ K {\displaystyle v_{0}=v_{k+1}=1\in K} (recalling that v ⊗ 1 = v {\displaystyle v\otimes 1=v} trivially).
This coproduct gives rise to a coalgebra. It describes a coalgebra that is dual to the algebra structure on T ( V ∗ ), where V ∗ denotes the dual vector space of linear maps V → F . In the same way that the tensor algebra is a free algebra , the corresponding coalgebra is termed cocomplete cofree. With the usual product this is not a bialgebra. It can be turned into a bialgebra with the product \(v_{i}\cdot v_{j} = (i,j)\,v_{i+j}\), where ( i , j ) denotes the binomial coefficient \(\tbinom{i+j}{i}\). This bialgebra is known as the divided power Hopf algebra .
The difference between this and the other coalgebra is most easily seen in the \(T^{2}V\) term. Here, one has that

\(\Delta(v\otimes w) = (v\otimes w)\boxtimes 1 + v\boxtimes w + 1\boxtimes(v\otimes w)\)

for \(v,w\in V\), which is clearly missing the shuffled term \(w\boxtimes v\), as compared to before. | https://en.wikipedia.org/wiki/Tensor_algebra |
In mathematics , Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold , with or without a metric tensor or connection . [ a ] [ 1 ] [ 2 ] [ 3 ] It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), tensor calculus or tensor analysis developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. [ 4 ] Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century. [ 5 ] The basis of modern tensor analysis was developed by Bernhard Riemann in a paper from 1861. [ 6 ]
A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays .
A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space . The number of indices equals the degree (or order) of the tensor.
For compactness and convenience, the Ricci calculus incorporates Einstein notation , which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.
Tensor calculus has many applications in physics , engineering and computer science including elasticity , continuum mechanics , electromagnetism (see mathematical descriptions of the electromagnetic field ), general relativity (see mathematics of general relativity ), quantum field theory , and machine learning .
Working with Élie Cartan , a main proponent of the exterior calculus, the influential geometer Shiing-Shen Chern summarizes the role of tensor calculus: [ 7 ]
In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus.
Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows: [ 8 ] lowercase Latin indices (such as a , b , c or i , j , k ) are taken to run over the three space-like values 1, 2, 3, while lowercase Greek indices (such as α , β , γ or μ , ν ) run over all four values 0, 1, 2, 3, with 0 labelling the time-like element.
Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.
The author(s) will usually make it clear whether a subscript is intended as an index or as a label.
For example, in 3-D Euclidean space and using Cartesian coordinates ; the coordinate vector A = ( A 1 , A 2 , A 3 ) = ( A x , A y , A z ) shows a direct correspondence between the subscripts 1, 2, 3 and the labels x , y , z . In the expression A i , i is interpreted as an index ranging over the values 1, 2, 3, while the x , y , z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t .
Indices themselves may be labelled using diacritic -like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′), as in \(X_{\hat\alpha},\ X_{\bar\alpha},\ X_{\tilde\alpha},\ X_{\alpha'}\), to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:

\(x^{\mu'} = \Lambda^{\mu'}{}_{\nu}\, x^{\nu}.\)
This is not to be confused with van der Waerden notation for spinors , which uses hats and overdots on indices to reflect the chirality of a spinor.
Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are not exponents, even though they may look as such to the reader only familiar with other parts of mathematics.
In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as a i j b j k {\displaystyle a_{ij}b_{jk}} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
A lower index (subscript) indicates covariance of the components with respect to that index:

\(A_{\bar\alpha} = \dfrac{\partial x^{\alpha}}{\partial x^{\bar\alpha}}\,A_{\alpha}.\)

An upper index (superscript) indicates contravariance of the components with respect to that index:

\(A^{\bar\alpha} = \dfrac{\partial x^{\bar\alpha}}{\partial x^{\alpha}}\,A^{\alpha}.\)

A tensor may have both upper and lower indices:

\(A^{\bar\alpha}{}_{\bar\beta} = \dfrac{\partial x^{\bar\alpha}}{\partial x^{\alpha}}\,\dfrac{\partial x^{\beta}}{\partial x^{\bar\beta}}\,A^{\alpha}{}_{\beta}.\)
Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta ).
The number of each upper and lower indices of a tensor gives its type : a tensor with p upper and q lower indices is said to be of type ( p , q ) , or to be a type- ( p , q ) tensor.
The number of indices of a tensor, regardless of variance, is called the degree of the tensor (alternatively, its valence , order or rank , although rank is ambiguous). Thus, a tensor of type ( p , q ) has degree p + q .
The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:

\(A_{\alpha}B^{\alpha} \equiv \sum_{\alpha} A_{\alpha}B^{\alpha}.\)

The operation implied by such a summation is called tensor contraction :

\(A_{\alpha}B^{\beta}\ \to\ A_{\alpha}B^{\alpha}.\)

This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:

\(A_{\alpha}{}^{\gamma}B^{\alpha}C_{\gamma} \equiv \sum_{\alpha}\sum_{\gamma} A_{\alpha}{}^{\gamma}B^{\alpha}C_{\gamma}.\)

Other combinations of repeated indices within a term are considered to be ill-formed, such as \(A_{\alpha\alpha}\) (both occurrences of \(\alpha\) are lower) or \(A_{\alpha}{}^{\alpha}B^{\alpha}\) (\(\alpha\) occurs three times).
The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list: [ 9 ]

\(A_{I}{}^{J} = A_{i_{1}i_{2}\cdots i_{n}}{}^{j_{1}j_{2}\cdots j_{m}},\)

where I = i 1 i 2 ⋅⋅⋅ i n and J = j 1 j 2 ⋅⋅⋅ j m .
A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices, [ 10 ] as in

\(A_{|\alpha\beta\gamma|}\,B^{\alpha\beta\gamma} = \sum_{\alpha<\beta<\gamma} A_{\alpha\beta\gamma}\,B^{\alpha\beta\gamma},\)

means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way.
When using multi-index notation, an underarrow placed underneath the block of indices indicates such a restricted (strictly increasing) sum over that block of indices. [ 11 ]
By contracting an index with a non-singular metric tensor , the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

\(B^{\gamma} = g^{\gamma\beta}\,A_{\beta}\qquad\text{and}\qquad B_{\gamma} = g_{\gamma\beta}\,A^{\beta}.\)
The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. [ 12 ]
The Kronecker delta is used; see also below .
Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if

\(A^{\alpha}{}_{\beta\gamma} = B^{\alpha}{}_{\beta\gamma}\)
for all α , β , γ . Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis ).
Indices not involved in contractions are called free indices . Indices used in contractions are termed dummy indices , or summation indices .
The components of tensors (like A α , B β γ etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m , the equality represents m n equations: each index takes on every value of a specific set of values.
For instance, if

\(A^{\alpha}{}_{\beta\delta} = B^{\alpha}{}_{\gamma}\,C^{\gamma}{}_{\beta\delta}\)

is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices ( α , β , δ ), there are 4 3 = 64 equations, each a sum of four terms over the dummy index γ . Three of these are:

\(A^{0}{}_{10} = B^{0}{}_{0}\,C^{0}{}_{10} + B^{0}{}_{1}\,C^{1}{}_{10} + B^{0}{}_{2}\,C^{2}{}_{10} + B^{0}{}_{3}\,C^{3}{}_{10},\)
\(A^{1}{}_{00} = B^{1}{}_{0}\,C^{0}{}_{00} + B^{1}{}_{1}\,C^{1}{}_{00} + B^{1}{}_{2}\,C^{2}{}_{00} + B^{1}{}_{3}\,C^{3}{}_{00},\)
\(A^{1}{}_{23} = B^{1}{}_{0}\,C^{0}{}_{23} + B^{1}{}_{1}\,C^{1}{}_{23} + B^{1}{}_{2}\,C^{2}{}_{23} + B^{1}{}_{3}\,C^{3}{}_{23}.\)
This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
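The correspondence with array code is direct: a NumPy einsum expression names the free and dummy indices exactly as the index notation does. A small sketch of the equation above, with illustrative dimensions and arrays:

```python
import numpy as np

n = 4                                # dimension of the underlying space
rng = np.random.default_rng(6)
B = rng.standard_normal((n, n))      # B^alpha_gamma
C = rng.standard_normal((n, n, n))   # C^gamma_{beta delta}

# A^alpha_{beta delta} = B^alpha_gamma C^gamma_{beta delta}:
# 'g' is the dummy (summed) index; 'a', 'b', 'd' are free.
A = np.einsum('ag,gbd->abd', B, C)
print(A.shape)   # (4, 4, 4): one value per choice of the 3 free indices
```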
Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:

\(A^{\alpha}{}_{\beta\delta} = B^{\alpha}{}_{\gamma}\,C^{\gamma}{}_{\beta\delta}\ \to\ A^{\lambda}{}_{\beta\delta} = B^{\lambda}{}_{\mu}\,C^{\mu}{}_{\beta\delta},\)

whereas an erroneous change is:

\(A^{\alpha}{}_{\beta\delta} = B^{\alpha}{}_{\gamma}\,C^{\gamma}{}_{\beta\delta}\ \to\ A^{\lambda}{}_{\beta\delta} = B^{\alpha}{}_{\mu}\,C^{\gamma}{}_{\beta\delta}.\)
In the first replacement, λ replaced α and μ replaced γ everywhere , so the expression still has the same meaning. In the second, λ did not fully replace α , and μ did not fully replace γ (incidentally, the contraction on the γ index became a tensor product), which is entirely inconsistent for reasons shown next.
The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example:

\(A^{\alpha}{}_{\beta\delta} = B^{\alpha}{}_{\gamma}\,C^{\gamma}{}_{\beta\delta} + D^{\alpha}{}_{\lambda}\,E^{\lambda}{}_{\beta\delta},\)

as for an erroneous expression:

\(A^{\alpha}{}_{\beta\delta} = B^{\gamma}{}_{\beta\gamma}\,C_{\alpha}{}^{\delta} + D^{\alpha}{}_{\beta\gamma}.\)
In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, α , β , δ line up throughout and γ occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while β lines up, α and δ do not, and γ appears twice in one term (contraction) and once in another term, which is inconsistent.
When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply.
If the brackets enclose covariant indices – the rule applies only to all covariant indices enclosed in the brackets , not to any contravariant indices which happen to be placed intermediately between the brackets.
Similarly if brackets enclose contravariant indices – the rule applies only to all enclosed contravariant indices , not to intermediately placed covariant indices.
Parentheses, ( ) , around multiple indices denote the symmetrized part of the tensor. When symmetrizing p indices using σ to range over permutations of the numbers 1 to p , one takes a sum over the permutations of those indices \(\alpha_{\sigma(i)}\) for i = 1, 2, 3, ..., p , and then divides by the number of permutations:

\(A_{(\alpha_{1}\alpha_{2}\cdots\alpha_{p})\alpha_{p+1}\cdots\alpha_{q}} = \dfrac{1}{p!}\sum_{\sigma} A_{\alpha_{\sigma(1)}\alpha_{\sigma(2)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_{q}}.\)

For example, two symmetrizing indices mean there are two indices to permute and sum over:

\(A_{(\alpha\beta)\gamma\cdots} = \tfrac{1}{2!}\left(A_{\alpha\beta\gamma\cdots} + A_{\beta\alpha\gamma\cdots}\right),\)

while for three symmetrizing indices, there are three indices to sum over and permute:

\(A_{(\alpha\beta\gamma)\delta\cdots} = \tfrac{1}{3!}\left(A_{\alpha\beta\gamma\delta\cdots} + A_{\alpha\gamma\beta\delta\cdots} + A_{\beta\alpha\gamma\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} + A_{\gamma\beta\alpha\delta\cdots}\right).\)
The symmetrization is distributive over addition:

\(A_{(\alpha}\left(B_{\beta)} + C_{\beta)}\right) = A_{(\alpha}B_{\beta)} + A_{(\alpha}C_{\beta)}.\)
Indices are not part of the symmetrization when they are not on the same level as the symmetrized indices, or when they are enclosed between vertical bars, as in

\(A_{(\alpha|\beta|\gamma)} = \tfrac{1}{2!}\left(A_{\alpha\beta\gamma} + A_{\gamma\beta\alpha}\right).\)

Here the α and γ indices are symmetrized, β is not.
Square brackets, [ ] , around multiple indices denote the anti symmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices \(\alpha_{\sigma(i)}\) multiplied by the signature of the permutation sgn( σ ) is taken, then divided by the number of permutations:

\(A_{[\alpha_{1}\cdots\alpha_{p}]\alpha_{p+1}\cdots\alpha_{q}} = \dfrac{1}{p!}\sum_{\sigma}\operatorname{sgn}(\sigma)\,A_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_{q}} = \dfrac{1}{p!}\,\delta_{\alpha_{1}\cdots\alpha_{p}}^{\beta_{1}\cdots\beta_{p}}\,A_{\beta_{1}\cdots\beta_{p}\alpha_{p+1}\cdots\alpha_{q}},\)

where \(\delta_{\alpha_{1}\cdots\alpha_{p}}^{\beta_{1}\cdots\beta_{p}}\) is the generalized Kronecker delta of degree 2 p , with scaling as defined below.
For example, two antisymmetrizing indices imply:

\(A_{[\alpha\beta]\gamma\cdots} = \tfrac{1}{2!}\left(A_{\alpha\beta\gamma\cdots} - A_{\beta\alpha\gamma\cdots}\right),\)

while three antisymmetrizing indices imply:

\(A_{[\alpha\beta\gamma]\delta\cdots} = \tfrac{1}{3!}\left(A_{\alpha\beta\gamma\delta\cdots} - A_{\alpha\gamma\beta\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} - A_{\beta\alpha\gamma\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} - A_{\gamma\beta\alpha\delta\cdots}\right),\)
as for a more specific example, if F represents the electromagnetic tensor , then the equation

\(0 = \partial_{[\gamma}F_{\alpha\beta]}\)

represents Gauss's law for magnetism and Faraday's law of induction .
As before, the antisymmetrization is distributive over addition:

\(A_{[\alpha}\left(B_{\beta]} + C_{\beta]}\right) = A_{[\alpha}B_{\beta]} + A_{[\alpha}C_{\beta]}.\)
As with symmetrization, indices are not antisymmetrized when they are not on the same level, or when they are enclosed between vertical bars, as in

\(A_{[\alpha|\beta|\gamma]} = \tfrac{1}{2!}\left(A_{\alpha\beta\gamma} - A_{\gamma\beta\alpha}\right).\)

Here the α and γ indices are antisymmetrized, β is not.
Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:

\(A_{\alpha\beta\gamma\cdots} = A_{(\alpha\beta)\gamma\cdots} + A_{[\alpha\beta]\gamma\cdots},\)
as can be seen by adding the above expressions for A ( αβ ) γ ⋅⋅⋅ and A [ αβ ] γ ⋅⋅⋅ . This does not hold for other than two indices.
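A quick NumPy check of this two-index decomposition on a random 3-index array (illustrative only):

```python
import numpy as np

A = np.random.default_rng(7).standard_normal((3, 3, 3))

sym  = 0.5 * (A + A.swapaxes(0, 1))   # A_((alpha beta) gamma)
asym = 0.5 * (A - A.swapaxes(0, 1))   # A_([alpha beta] gamma)

print(np.allclose(A, sym + asym))               # True: the parts sum to A
print(np.allclose(asym, -asym.swapaxes(0, 1)))  # True: antisymmetry
```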
For compactness, derivatives may be indicated by adding indices after a comma or semicolon. [ 13 ] [ 14 ]
While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis : a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by x μ , but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of differences in coordinates, Δ x μ , can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.
To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable \(x^{\gamma}\), a comma is placed before an appended lower index of the coordinate variable:

\(A_{\alpha,\gamma} = \dfrac{\partial A_{\alpha}}{\partial x^{\gamma}}.\)

This may be repeated (without adding further commas):

\(A_{\alpha,\gamma\delta} = \dfrac{\partial}{\partial x^{\delta}}\dfrac{\partial A_{\alpha}}{\partial x^{\gamma}}.\)
These components do not transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates

\(x^{\alpha}{}_{,\gamma} = \delta^{\alpha}{}_{\gamma},\)

where δ is the Kronecker delta .
The covariant derivative is only defined if a connection is defined. For any tensor field, a semicolon ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a forward slash ( / ) [ 15 ] or in three-dimensional curved space a single vertical bar ( | ). [ 16 ]
The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:

\(\phi_{;\beta} = \phi_{,\beta}\,,\qquad A^{\alpha}{}_{;\beta} = A^{\alpha}{}_{,\beta} + \Gamma^{\alpha}{}_{\gamma\beta}A^{\gamma}\,,\qquad A_{\alpha;\beta} = A_{\alpha,\beta} - \Gamma^{\gamma}{}_{\alpha\beta}A_{\gamma}\,,\)

where \(\Gamma^{\alpha}{}_{\gamma\beta}\) are the connection coefficients.
For an arbitrary tensor: [ 17 ]

\(T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s};\gamma} = T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s},\gamma} + \sum_{i=1}^{r}\Gamma^{\alpha_{i}}{}_{\delta\gamma}\,T^{\alpha_{1}\cdots\delta\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s}} - \sum_{j=1}^{s}\Gamma^{\delta}{}_{\beta_{j}\gamma}\,T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\delta\cdots\beta_{s}}.\)
An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol \(\nabla_{\beta}\). For the case of a vector field \(A^{\alpha}\): [ 18 ]

\(\nabla_{\beta}A^{\alpha} = A^{\alpha}{}_{;\beta}.\)

The covariant formulation of the directional derivative of any tensor field along a vector \(v^{\gamma}\) may be expressed as its contraction with the covariant derivative, e.g.:

\(v^{\gamma}A_{\alpha;\gamma}.\)
The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly.
This derivative is characterized by the product rule:

\((A^{\alpha}{}_{\beta}\,B^{\gamma})_{;\delta} = A^{\alpha}{}_{\beta;\delta}\,B^{\gamma} + A^{\alpha}{}_{\beta}\,B^{\gamma}{}_{;\delta}.\)
A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection .
A connection is a metric connection when the covariant derivative of the metric tensor vanishes:

\(g_{\alpha\beta;\gamma} = 0.\)
An affine connection that is also a metric connection is called a Riemannian connection . A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: T α βγ = 0 ) is a Levi-Civita connection .
The Γ α βγ for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.
The exterior derivative of a totally antisymmetric type (0, s ) tensor field with components \(A_{\alpha_{1}\cdots\alpha_{s}}\) (also called a differential form ) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components: [ 19 ] : 232–233

\((\mathrm{d}A)_{\gamma\alpha_{1}\cdots\alpha_{s}} = (s+1)\,\partial_{[\gamma}A_{\alpha_{1}\cdots\alpha_{s}]}.\)
This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.
The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type ( r , s ) tensor field T along (the flow of) a contravariant vector field \(X^{\rho}\) may be expressed using a coordinate basis as [ 20 ]

\((\mathcal{L}_{X}T)^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s}} = X^{\gamma}\,T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s},\gamma} - \sum_{i=1}^{r} X^{\alpha_{i}}{}_{,\gamma}\,T^{\alpha_{1}\cdots\gamma\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s}} + \sum_{j=1}^{s} X^{\gamma}{}_{,\beta_{j}}\,T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\gamma\cdots\beta_{s}}.\)
This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:

\((\mathcal{L}_{X}X)^{\alpha} = X^{\gamma}X^{\alpha}{}_{,\gamma} - X^{\gamma}X^{\alpha}{}_{,\gamma} = 0.\)
The Kronecker delta is like the identity matrix when multiplied and contracted:

\(\delta^{\alpha}{}_{\beta}\,A^{\beta} = A^{\alpha},\qquad \delta^{\alpha}{}_{\beta}\,B_{\alpha} = B_{\beta}.\)

The components \(\delta^{\alpha}{}_{\beta}\) are the same in any basis and form an invariant tensor of type (1, 1) , i.e. the identity of the tangent bundle over the identity mapping of the base manifold , and so its trace is an invariant. [ 21 ] Its trace is the dimensionality of the space; for example, in four-dimensional spacetime ,

\(\delta^{\alpha}{}_{\alpha} = 4.\)
The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2 p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p ! on the right):

\(\delta_{\beta_{1}\cdots\beta_{p}}^{\alpha_{1}\cdots\alpha_{p}} = \delta^{\alpha_{1}}{}_{[\beta_{1}}\cdots\delta^{\alpha_{p}}{}_{\beta_{p}]},\)

and acts as an antisymmetrizer on p indices:

\(\delta_{\beta_{1}\cdots\beta_{p}}^{\alpha_{1}\cdots\alpha_{p}}\,A^{\beta_{1}\cdots\beta_{p}} = A^{[\alpha_{1}\cdots\alpha_{p}]}.\)
An affine connection has a torsion tensor \(T^{\alpha}{}_{\beta\gamma}\):

\(T^{\alpha}{}_{\beta\gamma} = \Gamma^{\alpha}{}_{\beta\gamma} - \Gamma^{\alpha}{}_{\gamma\beta} - \gamma^{\alpha}{}_{\beta\gamma},\)

where \(\gamma^{\alpha}{}_{\beta\gamma}\) are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis.
For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations

\(\Gamma^{\alpha}{}_{\beta\gamma} = \Gamma^{\alpha}{}_{\gamma\beta}.\)
If the Riemann curvature tensor is defined as

\(R^{\rho}{}_{\sigma\mu\nu} = \Gamma^{\rho}{}_{\nu\sigma,\mu} - \Gamma^{\rho}{}_{\mu\sigma,\nu} + \Gamma^{\rho}{}_{\mu\lambda}\Gamma^{\lambda}{}_{\nu\sigma} - \Gamma^{\rho}{}_{\nu\lambda}\Gamma^{\lambda}{}_{\mu\sigma},\)

then it is the commutator of the covariant derivative with itself: [ 22 ] [ 23 ]

\(A_{\nu;\rho\sigma} - A_{\nu;\sigma\rho} = A_{\beta}\,R^{\beta}{}_{\nu\rho\sigma},\)
since the connection is torsionless, which means that the torsion tensor vanishes.
This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:

\(T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s};\gamma\delta} - T^{\alpha_{1}\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s};\delta\gamma} = -R^{\alpha_{1}}{}_{\rho\gamma\delta}\,T^{\rho\cdots\alpha_{r}}{}_{\beta_{1}\cdots\beta_{s}} - \cdots + R^{\rho}{}_{\beta_{1}\gamma\delta}\,T^{\alpha_{1}\cdots\alpha_{r}}{}_{\rho\cdots\beta_{s}} + \cdots,\)
which are often referred to as the Ricci identities . [ 24 ]
The metric tensor \(g_{\alpha\beta}\) is used for lowering indices and gives the length of any space-like curve

\(\text{length} = \int_{\gamma_{1}}^{\gamma_{2}}\sqrt{g_{\alpha\beta}\,\dfrac{dx^{\alpha}}{d\gamma}\,\dfrac{dx^{\beta}}{d\gamma}}\;d\gamma,\)

where γ is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve

\(\text{duration} = \int_{\gamma_{1}}^{\gamma_{2}}\sqrt{-\dfrac{1}{c^{2}}\,g_{\alpha\beta}\,\dfrac{dx^{\alpha}}{d\gamma}\,\dfrac{dx^{\beta}}{d\gamma}}\;d\gamma,\)

where γ is any smooth strictly monotone parameterization of the trajectory. See also Line element .
The inverse matrix \(g^{\alpha\beta}\) of the metric tensor is another important tensor, used for raising indices:

\(g^{\alpha\beta}g_{\beta\gamma} = \delta^{\alpha}{}_{\gamma},\qquad A^{\alpha} = g^{\alpha\beta}A_{\beta}.\) | https://en.wikipedia.org/wiki/Tensor_calculus |
In multilinear algebra , a tensor contraction is an operation on a tensor that arises from the canonical pairing of a vector space and its dual . In components, it is expressed as a sum of products of scalar components of the tensor(s) caused by applying the summation convention to a pair of dummy indices that are bound to each other in an expression. The contraction of a single mixed tensor occurs when a pair of literal indices (one a subscript, the other a superscript) of the tensor are set equal to each other and summed over. In Einstein notation this summation is built into the notation. The result is another tensor with order reduced by 2.
Tensor contraction can be seen as a generalization of the trace .
Let V be a vector space over a field k . The core of the contraction operation, and the simplest case, is the canonical pairing of V with its dual vector space V ∗ . The pairing is the linear map from the tensor product of these two spaces to the field k :

\(C : V^{*}\otimes V \to k,\qquad C(f\otimes v) = f(v),\)

corresponding to the bilinear form

\(\langle f, v\rangle = f(v),\)
where f is in V ∗ and v is in V . The map C defines the contraction operation on a tensor of type (1, 1) , which is an element of V ⊗ V ∗ {\displaystyle V\otimes V^{*}} . Note that the result is a scalar (an element of k ). In finite dimensions , using the natural isomorphism between V ⊗ V ∗ {\displaystyle V\otimes V^{*}} and the space of linear maps from V to V , [ 1 ] one obtains a basis-free definition of the trace .
In general, a tensor of type ( m , n ) (with m ≥ 1 and n ≥ 1 ) is an element of the vector space

\(V\otimes\cdots\otimes V\otimes V^{*}\otimes\cdots\otimes V^{*}\)
(where there are m factors V and n factors V ∗ ). [ 2 ] [ 3 ] Applying the canonical pairing to the k th V factor and the l th V ∗ factor, and using the identity on all other factors, defines the ( k , l ) contraction operation, which is a linear map that yields a tensor of type ( m − 1, n − 1) . [ 2 ] By analogy with the (1, 1) case, the general contraction operation is sometimes called the trace.
In tensor index notation , the basic contraction of a vector and a dual vector is denoted by
which is shorthand for the explicit coordinate summation [ 4 ]
(where v i are the components of v in a particular basis and f i are the components of f in the corresponding dual basis).
Since a general mixed dyadic tensor is a linear combination of decomposable tensors of the form \(f\otimes v\), the explicit formula for the dyadic case follows: let

\(\mathbf{T} = T^{i}{}_{j}\, e_{i}\otimes e^{j}\)

be a mixed dyadic tensor. Then its contraction is

\(T^{i}{}_{i} = T^{1}{}_{1} + T^{2}{}_{2} + \cdots + T^{n}{}_{n},\)

the trace of the array of components.
A general contraction is denoted by labeling one covariant index and one contravariant index with the same letter, summation over that index being implied by the summation convention . The resulting contracted tensor inherits the remaining indices of the original tensor. For example, contracting a tensor T of type (2,2) on the second and third indices to create a new tensor U of type (1,1) is written as

\(T^{ab}{}_{bc} = U^{a}{}_{c}.\)
By contrast, let

\(\mathbf{T} = T^{ij}\, e_{i}\otimes e_{j}\)

be an unmixed dyadic tensor. This tensor does not contract; if its base vectors are dotted, the result is the contravariant metric tensor ,

\(g^{ij} = e^{i}\cdot e^{j},\)
whose rank is 2.
As in the previous example, contraction on a pair of indices that are either both contravariant or both covariant is not possible in general. However, in the presence of an inner product (also known as a metric ) g , such contractions are possible. One uses the metric to raise or lower one of the indices, as needed, and then one uses the usual operation of contraction. The combined operation is known as metric contraction . [ 5 ]
Contraction is often applied to tensor fields over spaces (e.g. Euclidean space , manifolds , or schemes ). Since contraction is a purely algebraic operation, it can be applied pointwise to a tensor field, e.g. if T is a (1,1) tensor field on Euclidean space, then in any coordinates, its contraction (a scalar field) U at a point x is given by

\(U(x) = T^{i}{}_{i}(x) = \sum_{i} T^{i}{}_{i}(x).\)
Since the role of x is not complicated here, it is often suppressed, and the notation for tensor fields becomes identical to that for purely algebraic tensors.
Over a Riemannian manifold , a metric (field of inner products) is available, and both metric and non-metric contractions are crucial to the theory. For example, the Ricci tensor is a non-metric contraction of the Riemann curvature tensor , and the scalar curvature is the unique metric contraction of the Ricci tensor.
One can also view contraction of a tensor field in the context of modules over an appropriate ring of functions on the manifold [ 5 ] or the context of sheaves of modules over the structure sheaf; [ 6 ] see the discussion at the end of this article.
As an application of the contraction of a tensor field, let V be a vector field on a Riemannian manifold (for example, Euclidean space ). Let \(V^{\alpha}{}_{\beta}\) be the covariant derivative of V (in some choice of coordinates). In the case of Cartesian coordinates in Euclidean space , one can write

\(V^{\alpha}{}_{\beta} = \dfrac{\partial V^{\alpha}}{\partial x^{\beta}}.\)

Then changing index β to α causes the pair of indices to become bound to each other, so that the derivative contracts with itself to obtain the following sum:

\(V^{\alpha}{}_{\alpha} = \dfrac{\partial V^{\alpha}}{\partial x^{\alpha}},\)

which is the divergence div V . Then

\(\dfrac{\partial V^{\alpha}}{\partial x^{\alpha}} = 0\)

is a continuity equation for V .
In general, one can define various divergence operations on higher-rank tensor fields , as follows. If T is a tensor field with at least one contravariant index, taking the covariant differential and contracting the chosen contravariant index with the new covariant index corresponding to the differential results in a new tensor of rank one lower than that of T . [ 5 ]
One can generalize the core contraction operation (vector with dual vector) in a slightly different way, by considering a pair of tensors T and U . The tensor product T ⊗ U {\displaystyle T\otimes U} is a new tensor, which, if it has at least one covariant and one contravariant index, can be contracted. The case where T is a vector and U is a dual vector is exactly the core operation introduced first in this article.
In tensor index notation, to contract two tensors with each other, one places them side by side (juxtaposed) as factors of the same term. This implements the tensor product, yielding a composite tensor. Contracting two indices in this composite tensor implements the desired contraction of the two tensors.
For example, matrices can be represented as tensors of type (1,1) with the first index being contravariant and the second index being covariant. Let \(\Lambda^{\alpha}{}_{\beta}\) be the components of one matrix and let \(\mathrm{M}^{\beta}{}_{\gamma}\) be the components of a second matrix. Then their multiplication is given by the following contraction, an example of the contraction of a pair of tensors:

\(N^{\alpha}{}_{\gamma} = \Lambda^{\alpha}{}_{\beta}\,\mathrm{M}^{\beta}{}_{\gamma}.\)
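In array code this pairing of tensor product and contraction is a single einsum call; a short NumPy sketch with illustrative arrays:

```python
import numpy as np

rng = np.random.default_rng(8)
L = rng.standard_normal((3, 3))   # Lambda^alpha_beta
M = rng.standard_normal((3, 3))   # M^beta_gamma

# Contract the covariant index of L with the contravariant index of M.
N = np.einsum('ab,bg->ag', L, M)
print(np.allclose(N, L @ M))      # True: this contraction is matrix multiplication
```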
Also, the interior product of a vector with a differential form is a special case of the contraction of two tensors with each other.
Let R be a commutative ring and let M be a finite free module over R . Then contraction operates on the full (mixed) tensor algebra of M in exactly the same way as it does in the case of vector spaces over a field. (The key fact is that the canonical pairing is still perfect in this case.)
More generally, let O X be a sheaf of commutative rings over a topological space X , e.g. O X could be the structure sheaf of a complex manifold , analytic space , or scheme . Let M be a locally free sheaf of modules over O X of finite rank. Then the dual of M is still well-behaved [ 6 ] and contraction operations make sense in this context. | https://en.wikipedia.org/wiki/Tensor_contraction |
In differential geometry , a tensor density or relative tensor is a generalization of the tensor field concept. A tensor density transforms as a tensor field when passing from one coordinate system to another (see tensor field ), except that it is additionally multiplied or weighted by a power W {\displaystyle W} of the Jacobian determinant of the coordinate transition function or its absolute value. A tensor density with a single index is called a vector density . A distinction is made among (authentic) tensor densities, pseudotensor densities, even tensor densities and odd tensor densities. Sometimes tensor densities with a negative weight W {\displaystyle W} are called tensor capacity. [ 1 ] [ 2 ] [ 3 ] A tensor density can also be regarded as a section of the tensor product of a tensor bundle with a density bundle .
In physics and related fields, it is often useful to work with the components of an algebraic object rather than the object itself. An example would be decomposing a vector into a sum of basis vectors weighted by some coefficients, such as \({\vec v} = c_{1}{\vec e}_{1} + c_{2}{\vec e}_{2} + c_{3}{\vec e}_{3}\), where \({\vec v}\) is a vector in 3-dimensional Euclidean space , \(c_{i}\in\mathbb{R}\), and the \({\vec e}_{i}\) are the usual standard basis vectors in Euclidean space. This is usually necessary for computational purposes, and can often be insightful when algebraic objects represent complex abstractions but their components have concrete interpretations. However, with this identification, one has to be careful to track changes of the underlying basis in which the quantity is expanded; it may in the course of a computation become expedient to change the basis while the vector \({\vec v}\) remains fixed in physical space. More generally, if an algebraic object represents a geometric object, but is expressed in terms of a particular basis, then it is necessary, when the basis is changed, to also change the representation. Physicists will often call this representation of a geometric object a tensor if it transforms under a sequence of linear maps given a linear change of basis (although confusingly others call the underlying geometric object, which has not changed under the coordinate transformation, a "tensor", a convention this article strictly avoids). In general there are representations which transform in arbitrary ways depending on how the geometric invariant is reconstructed from the representation. In certain special cases it is convenient to use representations which transform almost like tensors, but with an additional, nonlinear factor in the transformation. A prototypical example is a matrix representing the cross product (area of the spanned parallelogram) on \(\mathbb{R}^{2}\). The representation is given in the standard basis by

\({\vec u}\times{\vec v} = \begin{bmatrix}u_{1}&u_{2}\end{bmatrix}\begin{bmatrix}0&1\\-1&0\end{bmatrix}\begin{bmatrix}v_{1}\\v_{2}\end{bmatrix} = u_{1}v_{2}-u_{2}v_{1}.\)
If we now try to express this same expression in a basis other than the standard basis, then the components of the vectors will change, say according to \({\begin{bmatrix}u'_{1}&u'_{2}\end{bmatrix}}^{\mathsf T} = A\,{\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}^{\mathsf T}\), where A is some 2 × 2 matrix of real numbers. Given that the area of the spanned parallelogram is a geometric invariant, it cannot have changed under the change of basis, and so the new representation of this matrix must be

\(\left(A^{-1}\right)^{\mathsf T}\begin{bmatrix}0&1\\-1&0\end{bmatrix}A^{-1},\)

which, when expanded, is just the original expression but multiplied by the determinant of \(A^{-1}\), which is also \(\frac{1}{\det A}\). In fact this representation could be thought of as a two-index tensor transformation, but instead it is computationally easier to think of the tensor transformation rule as multiplication by \(\frac{1}{\det A}\) rather than as two matrix multiplications (in fact in higher dimensions, the natural extension of this is \(n\) matrix multiplications of \(n\times n\) matrices, which for large \(n\) is completely infeasible). Objects which transform in this way are called tensor densities because they arise naturally when considering problems regarding areas and volumes, and so are frequently used in integration.
Some authors classify tensor densities into the two types called (authentic) tensor densities and pseudotensor densities in this article. Other authors classify them differently, into the types called even tensor densities and odd tensor densities. When a tensor density weight is an integer there is an equivalence between these approaches that depends upon whether the integer is even or odd.
Note that these classifications elucidate the different ways that tensor densities may transform somewhat pathologically under orientation-reversing coordinate transformations. Regardless of their classifications into these types, there is only one way that tensor densities transform under orientation-preserving coordinate transformations.
In this article we have chosen the convention that assigns a weight of +2 to g = det ( g ρ σ ) {\displaystyle g=\det \left(g_{\rho \sigma }\right)} , the determinant of the metric tensor expressed with covariant indices. With this choice, classical densities, like charge density, will be represented by tensor densities of weight +1. Some authors use a sign convention for weights that is the negation of that presented here. [ 4 ]
In contrast to the meaning used in this article, in general relativity " pseudotensor " sometimes means an object that does not transform like a tensor or relative tensor of any weight.
For example, a mixed rank-two (authentic) tensor density of weight W {\displaystyle W} transforms as: [ 5 ] [ 6 ] T ν μ = ( det [ ∂ x ¯ ι ∂ x γ ] ) W ∂ x μ ∂ x ¯ κ T ¯ λ κ ∂ x ¯ λ ∂ x ν {\displaystyle {\mathfrak {T}}_{~\nu }^{\mu }=\left(\det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right)^{W}\,{\frac {\partial x^{\mu }}{\partial {\bar {x}}^{\kappa }}}\,{\bar {\mathfrak {T}}}_{~\lambda }^{\kappa }\,{\frac {\partial {\bar {x}}^{\lambda }}{\partial x^{\nu }}}}
where T ¯ {\displaystyle {\bar {\mathfrak {T}}}} is the rank-two tensor density in the x ¯ {\displaystyle {\bar {x}}} coordinate system, T {\displaystyle {\mathfrak {T}}} is the transformed tensor density in the x {\displaystyle {x}} coordinate system; and we use the Jacobian determinant . Because the determinant can be negative, which it is for an orientation-reversing coordinate transformation, this formula is applicable only when W {\displaystyle W} is an integer. (However, see even and odd tensor densities below.)
We say that a tensor density is a pseudotensor density when there is an additional sign flip under an orientation-reversing coordinate transformation. A mixed rank-two pseudotensor density of weight W {\displaystyle W} transforms as T ν μ = sgn ⁡ ( det [ ∂ x ¯ ι ∂ x γ ] ) ( det [ ∂ x ¯ ι ∂ x γ ] ) W ∂ x μ ∂ x ¯ κ T ¯ λ κ ∂ x ¯ λ ∂ x ν {\displaystyle {\mathfrak {T}}_{~\nu }^{\mu }=\operatorname {sgn} \left(\det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right)\left(\det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right)^{W}\,{\frac {\partial x^{\mu }}{\partial {\bar {x}}^{\kappa }}}\,{\bar {\mathfrak {T}}}_{~\lambda }^{\kappa }\,{\frac {\partial {\bar {x}}^{\lambda }}{\partial x^{\nu }}}}
where sgn ( ⋅ {\displaystyle \cdot } ) is a function that returns +1 when its argument is positive or −1 when its argument is negative.
The transformations for even and odd tensor densities have the benefit of being well defined even when W {\displaystyle W} is not an integer. Thus one can speak of, say, an odd tensor density of weight +2 or an even tensor density of weight −1/2.
When W {\displaystyle W} is an even integer the above formula for an (authentic) tensor density can be rewritten as T ν μ = | det [ ∂ x ¯ ι ∂ x γ ] | W ∂ x μ ∂ x ¯ κ T ¯ λ κ ∂ x ¯ λ ∂ x ν {\displaystyle {\mathfrak {T}}_{~\nu }^{\mu }=\left\vert \det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right\vert ^{W}\,{\frac {\partial x^{\mu }}{\partial {\bar {x}}^{\kappa }}}\,{\bar {\mathfrak {T}}}_{~\lambda }^{\kappa }\,{\frac {\partial {\bar {x}}^{\lambda }}{\partial x^{\nu }}}} which is the transformation rule for an even tensor density.
Similarly, when W {\displaystyle W} is an odd integer the formula for an (authentic) tensor density can be rewritten as T ν μ = sgn ⁡ ( det [ ∂ x ¯ ι ∂ x γ ] ) | det [ ∂ x ¯ ι ∂ x γ ] | W ∂ x μ ∂ x ¯ κ T ¯ λ κ ∂ x ¯ λ ∂ x ν {\displaystyle {\mathfrak {T}}_{~\nu }^{\mu }=\operatorname {sgn} \left(\det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right)\left\vert \det \left[{\frac {\partial {\bar {x}}^{\iota }}{\partial x^{\gamma }}}\right]\right\vert ^{W}\,{\frac {\partial x^{\mu }}{\partial {\bar {x}}^{\kappa }}}\,{\bar {\mathfrak {T}}}_{~\lambda }^{\kappa }\,{\frac {\partial {\bar {x}}^{\lambda }}{\partial x^{\nu }}}} which is the transformation rule for an odd tensor density.
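The following sketch (illustrative only; the Jacobian chosen is a simple reflection) compares the four determinant prefactors for an orientation-reversing transformation, making the even/odd correspondence above concrete:

```python
# Compare the determinant prefactors of the four density types for an
# orientation-reversing Jacobian (det J < 0).
import numpy as np

J = np.array([[-1.0, 0.0], [0.0, 1.0]])   # a reflection: det J = -1
detJ = np.linalg.det(J)
W = 2   # try 1, 2, ... ; the even/odd forms also allow non-integer W

authentic = detJ**W                          # requires integer W
pseudo    = np.sign(detJ) * detJ**W          # extra sign flip
even      = abs(detJ)**W                     # well defined for any real W
odd       = np.sign(detJ) * abs(detJ)**W

print(authentic, pseudo, even, odd)
# For even integer W: authentic == even and pseudo == odd;
# for odd integer W:  authentic == odd  and pseudo == even.
```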
A tensor density of any type that has weight zero is also called an absolute tensor . An (even) authentic tensor density of weight zero is also called an ordinary tensor .
If a weight is not specified but the word "relative" or "density" is used in a context where a specific weight is needed, it is usually assumed that the weight is +1.
If T α β {\displaystyle {\mathfrak {T}}_{\alpha \beta }} is a non-singular matrix and a rank-two tensor density of weight W {\displaystyle W} with covariant indices then its matrix inverse will be a rank-two tensor density of weight − W {\displaystyle -W} with contravariant indices. Similar statements apply when the two indices are contravariant or are mixed covariant and contravariant.
If T α β {\displaystyle {\mathfrak {T}}_{\alpha \beta }} is a rank-two tensor density of weight W {\displaystyle W} with covariant indices then the matrix determinant det T α β {\displaystyle \det {\mathfrak {T}}_{\alpha \beta }} will have weight N W + 2 , {\displaystyle NW+2,} where N {\displaystyle N} is the number of space-time dimensions. If T α β {\displaystyle {\mathfrak {T}}^{\alpha \beta }} is a rank-two tensor density of weight W {\displaystyle W} with contravariant indices then the matrix determinant det T α β {\displaystyle \det {\mathfrak {T}}^{\alpha \beta }} will have weight N W − 2. {\displaystyle NW-2.} The matrix determinant det T β α {\displaystyle \det {\mathfrak {T}}_{~\beta }^{\alpha }} will have weight N W . {\displaystyle NW.}
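This weight count can be verified numerically. The sketch below (an added illustration; random matrices stand in for the Jacobian and the density components) checks that the determinant of a weight- W {\displaystyle W} covariant rank-two density scales with weight N W + 2 {\displaystyle NW+2} :

```python
# Check: det of a rank-two covariant tensor density of weight W is a
# scalar density of weight N*W + 2 in N dimensions.
import numpy as np

rng = np.random.default_rng(1)
N, W = 4, 3
J = rng.normal(size=(N, N))        # Jacobian matrix of the coordinate change
T_bar = rng.normal(size=(N, N))    # density components in the barred coordinates

detJ = np.linalg.det(J)
T = detJ**W * J.T @ T_bar @ J      # weight-W transformation, covariant indices

print(np.isclose(np.linalg.det(T), detJ**(N*W + 2) * np.linalg.det(T_bar)))  # True
```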
Any non-singular ordinary tensor T μ ν {\displaystyle T_{\mu \nu }} transforms as T μ ν = ∂ x ¯ κ ∂ x μ T ¯ κ λ ∂ x ¯ λ ∂ x ν , {\displaystyle T_{\mu \nu }={\frac {\partial {\bar {x}}^{\kappa }}{\partial {x}^{\mu }}}{\bar {T}}_{\kappa \lambda }{\frac {\partial {\bar {x}}^{\lambda }}{\partial {x}^{\nu }}}\,,}
where the right-hand side can be viewed as the product of three matrices. Taking the determinant of both sides of the equation (using that the determinant of a matrix product is the product of the determinants), dividing both sides by det ( T ¯ κ λ ) , {\displaystyle \det \left({\bar {T}}_{\kappa \lambda }\right),} and taking their square root gives | det [ ∂ x ¯ ι ∂ x γ ] | = det ( T μ ν ) det ( T ¯ κ λ ) . {\displaystyle \left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {\frac {\det({T}_{\mu \nu })}{\det \left({\bar {T}}_{\kappa \lambda }\right)}}}\,.}
When the tensor T {\displaystyle T} is the metric tensor , g κ λ , {\displaystyle {g}_{\kappa \lambda },} and x ¯ ι {\displaystyle {\bar {x}}^{\iota }} is a locally inertial coordinate system where g ¯ κ λ = η κ λ = {\displaystyle {\bar {g}}_{\kappa \lambda }=\eta _{\kappa \lambda }=} diag(−1,+1,+1,+1), the Minkowski metric , then det ( g ¯ κ λ ) = det ( η κ λ ) = {\displaystyle \det \left({\bar {g}}_{\kappa \lambda }\right)=\det(\eta _{\kappa \lambda })=} −1 and so | det [ ∂ x ¯ ι ∂ x γ ] | = − g , {\displaystyle \left\vert \det {\left[{\frac {\partial {\bar {x}}^{\iota }}{\partial {x}^{\gamma }}}\right]}\right\vert ={\sqrt {-{g}}}\,,}
where g = det ( g μ ν ) {\displaystyle {g}=\det \left({g}_{\mu \nu }\right)} is the determinant of the metric tensor g μ ν . {\displaystyle {g}_{\mu \nu }.}
Consequently, an even tensor density, T ν … μ … , {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots },} of weight W {\displaystyle W} , can be written in the form T ν … μ … = − g W T ν … μ … , {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots }^{\mu \dots }\,,}
where T ν … μ … {\displaystyle T_{\nu \dots }^{\mu \dots }\,} is an ordinary tensor. In a locally inertial coordinate system, where g κ λ = η κ λ , {\displaystyle g_{\kappa \lambda }=\eta _{\kappa \lambda },} it will be the case that T ν … μ … {\displaystyle {\mathfrak {T}}_{\nu \dots }^{\mu \dots }} and T ν … μ … {\displaystyle T_{\nu \dots }^{\mu \dots }\,} will be represented with the same numbers.
When using the metric connection ( Levi-Civita connection ), the covariant derivative of an even tensor density is defined as T ν … ; α μ … = − g W T ν … ; α μ … = − g W ( − g − W T ν … μ … ) ; α . {\displaystyle {\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}T_{\nu \dots ;\alpha }^{\mu \dots }={\sqrt {-g}}\;^{W}\left({\sqrt {-g}}\;^{-W}{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\right)_{;\alpha }\,.}
For an arbitrary connection, the covariant derivative is defined by adding an extra term, namely − W Γ δ α δ T ν … μ … {\displaystyle -W\,\Gamma _{~\delta \alpha }^{\delta }\,{\mathfrak {T}}_{\nu \dots }^{\mu \dots }} to the expression that would be appropriate for the covariant derivative of an ordinary tensor.
Equivalently, the product rule is obeyed ( T ν … μ … S τ … σ … ) ; α = ( T ν … ; α μ … ) S τ … σ … + T ν … μ … ( S τ … ; α σ … ) , {\displaystyle \left({\mathfrak {T}}_{\nu \dots }^{\mu \dots }{\mathfrak {S}}_{\tau \dots }^{\sigma \dots }\right)_{;\alpha }=\left({\mathfrak {T}}_{\nu \dots ;\alpha }^{\mu \dots }\right){\mathfrak {S}}_{\tau \dots }^{\sigma \dots }+{\mathfrak {T}}_{\nu \dots }^{\mu \dots }\left({\mathfrak {S}}_{\tau \dots ;\alpha }^{\sigma \dots }\right)\,,}
where, for the metric connection, the covariant derivative of any function of g κ λ {\displaystyle g_{\kappa \lambda }} is always zero, g κ λ ; α = 0 ( − g W ) ; α = ( − g W ) , α − W Γ δ α δ − g W = W 2 g κ λ g κ λ , α − g W − W Γ δ α δ − g W = 0 . {\displaystyle {\begin{aligned}g_{\kappa \lambda ;\alpha }&=0\\\left({\sqrt {-g}}\;^{W}\right)_{;\alpha }&=\left({\sqrt {-g}}\;^{W}\right)_{,\alpha }-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}={\frac {W}{2}}g^{\kappa \lambda }g_{\kappa \lambda ,\alpha }{\sqrt {-g}}\;^{W}-W\Gamma _{~\delta \alpha }^{\delta }{\sqrt {-g}}\;^{W}=0\,.\end{aligned}}}
The expression − g {\displaystyle {\sqrt {-g}}} is a scalar density. By the convention of this article it has a weight of +1.
The density of electric current J μ {\displaystyle {\mathfrak {J}}^{\mu }} (for example, J 2 {\displaystyle {\mathfrak {J}}^{2}} is the amount of electric charge crossing the 3-volume element d x 3 d x 4 d x 1 {\displaystyle dx^{3}\,dx^{4}\,dx^{1}} divided by that element — do not use the metric in this calculation) is a contravariant vector density of weight +1. It is often written as J μ = J μ − g {\displaystyle {\mathfrak {J}}^{\mu }=J^{\mu }{\sqrt {-g}}} or J μ = ε μ α β γ J α β γ / 3 ! , {\displaystyle {\mathfrak {J}}^{\mu }=\varepsilon ^{\mu \alpha \beta \gamma }{\mathcal {J}}_{\alpha \beta \gamma }/3!,} where J μ {\displaystyle J^{\mu }\,} and the differential form J α β γ {\displaystyle {\mathcal {J}}_{\alpha \beta \gamma }} are absolute tensors, and where ε μ α β γ {\displaystyle \varepsilon ^{\mu \alpha \beta \gamma }} is the Levi-Civita symbol ; see below.
The density of Lorentz force f μ {\displaystyle {\mathfrak {f}}_{\mu }} (that is, the linear momentum transferred from the electromagnetic field to matter within a 4-volume element d x 1 d x 2 d x 3 d x 4 {\displaystyle dx^{1}\,dx^{2}\,dx^{3}\,dx^{4}} divided by that element — do not use the metric in this calculation) is a covariant vector density of weight +1.
In N {\displaystyle N} -dimensional space-time, the Levi-Civita symbol may be regarded as either a rank- N {\displaystyle N} contravariant (odd) authentic tensor density of weight +1 ( ϵ α 1 ⋯ α N {\displaystyle \epsilon ^{\alpha _{1}\cdots \alpha _{N}}} ) or a rank- N {\displaystyle N} covariant (odd) authentic tensor density of weight −1 ( ϵ α 1 ⋯ α N {\displaystyle \epsilon _{\alpha _{1}\cdots \alpha _{N}}} ) : ϵ α 1 ⋯ α N = ϵ ¯ β 1 ⋯ β N ∂ x α 1 ∂ x ¯ β 1 ⋯ ∂ x α N ∂ x ¯ β N ( det [ ∂ x ¯ β ∂ x α ] ) + 1 {\displaystyle \epsilon ^{\alpha _{1}\cdots \alpha _{N}}={\bar {\epsilon }}^{\beta _{1}\cdots \beta _{N}}{\frac {\partial x^{\alpha _{1}}}{\partial {\bar {x}}^{\beta _{1}}}}\cdots {\frac {\partial x^{\alpha _{N}}}{\partial {\bar {x}}^{\beta _{N}}}}\left(\det \left[{\frac {\partial {\bar {x}}^{\beta }}{\partial x^{\alpha }}}\right]\right)^{+1}} ϵ α 1 ⋯ α N = ϵ ¯ β 1 ⋯ β N ∂ x ¯ β 1 ∂ x α 1 ⋯ ∂ x ¯ β N ∂ x α N ( det [ ∂ x ¯ β ∂ x α ] ) − 1 . {\displaystyle \epsilon _{\alpha _{1}\cdots \alpha _{N}}={\bar {\epsilon }}_{\beta _{1}\cdots \beta _{N}}{\frac {\partial {\bar {x}}^{\beta _{1}}}{\partial x^{\alpha _{1}}}}\cdots {\frac {\partial {\bar {x}}^{\beta _{N}}}{\partial x^{\alpha _{N}}}}\left(\det \left[{\frac {\partial {\bar {x}}^{\beta }}{\partial x^{\alpha }}}\right]\right)^{-1}\,.} Notice that the Levi-Civita symbol (so regarded) does not obey the usual convention for raising or lowering of indices with the metric tensor. That is, it is true that ε α β γ δ g α κ g β λ g γ μ g δ ν = ε κ λ μ ν g , {\displaystyle \varepsilon ^{\alpha \beta \gamma \delta }\,g_{\alpha \kappa }\,g_{\beta \lambda }\,g_{\gamma \mu }g_{\delta \nu }\,=\,\varepsilon _{\kappa \lambda \mu \nu }\,g\,,} but in general relativity, where g = det ( g ρ σ ) {\displaystyle g=\det \left(g_{\rho \sigma }\right)} is always negative, this is never equal to ε κ λ μ ν . {\displaystyle \varepsilon _{\kappa \lambda \mu \nu }.}
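The identity ε α β γ δ g α κ g β λ g γ μ g δ ν = ε κ λ μ ν g {\displaystyle \varepsilon ^{\alpha \beta \gamma \delta }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }=\varepsilon _{\kappa \lambda \mu \nu }\,g} can be spot-checked numerically; the sketch below (illustrative, with an arbitrary symmetric invertible matrix playing the role of the metric) builds the Levi-Civita symbol and performs the contraction:

```python
# Spot check: contracting the contravariant Levi-Civita symbol with four
# copies of the metric yields det(g) times the covariant symbol.
import numpy as np
from itertools import permutations

N = 4
eps = np.zeros((N,) * N)
for p in permutations(range(N)):
    sign, q = 1, list(p)
    for i in range(N):          # compute the permutation's parity by cycle sort
        while q[i] != i:
            j = q[i]
            q[i], q[j] = q[j], q[i]
            sign = -sign
    eps[p] = sign

rng = np.random.default_rng(2)
M = rng.normal(size=(N, N))
g = M + M.T + 5 * np.eye(N)     # some symmetric, invertible "metric"

lhs = np.einsum('abcd,ak,bl,cm,dn->klmn', eps, g, g, g, g)
print(np.allclose(lhs, np.linalg.det(g) * eps))   # True
```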
The determinant of the metric tensor, g = det ( g ρ σ ) = 1 4 ! ε α β γ δ ε κ λ μ ν g α κ g β λ g γ μ g δ ν , {\displaystyle g=\det \left(g_{\rho \sigma }\right)={\frac {1}{4!}}\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon ^{\kappa \lambda \mu \nu }g_{\alpha \kappa }g_{\beta \lambda }g_{\gamma \mu }g_{\delta \nu }\,,} is an (even) authentic scalar density of weight +2, being the contraction of the product of 2 (odd) authentic tensor densities of weight +1 and four (even) authentic tensor densities of weight 0. | https://en.wikipedia.org/wiki/Tensor_density |
The derivatives of scalars , vectors , and second-order tensors with respect to second-order tensors are of considerable use in continuum mechanics . These derivatives are used in the theories of nonlinear elasticity and plasticity , particularly in the design of algorithms for numerical simulations . [ 1 ]
The directional derivative provides a systematic way of finding these derivatives. [ 2 ]
The definitions of directional derivatives for various situations are given below. It is assumed that the functions are sufficiently smooth that derivatives can be taken.
Let f ( v ) be a real valued function of the vector v . Then the derivative of f ( v ) with respect to v (or at v ) is the vector defined through its dot product with any vector u being
∂ f ∂ v ⋅ u = D f ( v ) [ u ] = [ d d α f ( v + α u ) ] α = 0 {\displaystyle {\frac {\partial f}{\partial \mathbf {v} }}\cdot \mathbf {u} =Df(\mathbf {v} )[\mathbf {u} ]=\left[{\frac {d}{d\alpha }}~f(\mathbf {v} +\alpha ~\mathbf {u} )\right]_{\alpha =0}}
for all vectors u . The above dot product yields a scalar, and if u is a unit vector gives the directional derivative of f at v , in the u direction.
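As a concrete illustration (a sketch added here, assuming the simple choice f ( v ) = v ⋅ v {\displaystyle f(\mathbf {v} )=\mathbf {v} \cdot \mathbf {v} } , whose derivative is 2 v {\displaystyle 2\mathbf {v} } ), the definition can be evaluated by differencing in α {\displaystyle \alpha } :

```python
# Directional derivative of a scalar-valued function of a vector, computed
# directly from the definition d/d(alpha) f(v + alpha*u) at alpha = 0.
import numpy as np

def f(v):
    return v @ v          # f(v) = |v|^2, so df/dv = 2v

def directional_derivative(f, v, u, h=1e-6):
    return (f(v + h*u) - f(v - h*u)) / (2*h)   # central difference in alpha

v = np.array([1.0, 2.0, 3.0])
u = np.array([0.5, -1.0, 2.0])
print(directional_derivative(f, v, u))   # ~9.0, which equals (2v) . u
```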
Properties:
Let f ( v ) be a vector valued function of the vector v . Then the derivative of f ( v ) with respect to v (or at v ) is the second order tensor defined through its dot product with any vector u being
∂ f ∂ v ⋅ u = D f ( v ) [ u ] = [ d d α f ( v + α u ) ] α = 0 {\displaystyle {\frac {\partial \mathbf {f} }{\partial \mathbf {v} }}\cdot \mathbf {u} =D\mathbf {f} (\mathbf {v} )[\mathbf {u} ]=\left[{\frac {d}{d\alpha }}~\mathbf {f} (\mathbf {v} +\alpha ~\mathbf {u} )\right]_{\alpha =0}}
for all vectors u . The above dot product yields a vector, and if u is a unit vector gives the directional derivative of f at v , in the direction u .
Properties:
Let f ( S ) {\displaystyle f({\boldsymbol {S}})} be a real valued function of the second order tensor S {\displaystyle {\boldsymbol {S}}} . Then the derivative of f ( S ) {\displaystyle f({\boldsymbol {S}})} with respect to S {\displaystyle {\boldsymbol {S}}} (or at S {\displaystyle {\boldsymbol {S}}} ) in the direction T {\displaystyle {\boldsymbol {T}}} is the second order tensor defined as ∂ f ∂ S : T = D f ( S ) [ T ] = [ d d α f ( S + α T ) ] α = 0 {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=Df({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {d}{d\alpha }}~f({\boldsymbol {S}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}} for all second order tensors T {\displaystyle {\boldsymbol {T}}} .
Properties:
Let F ( S ) {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} be a second order tensor valued function of the second order tensor S {\displaystyle {\boldsymbol {S}}} . Then the derivative of F ( S ) {\displaystyle {\boldsymbol {F}}({\boldsymbol {S}})} with respect to S {\displaystyle {\boldsymbol {S}}} (or at S {\displaystyle {\boldsymbol {S}}} ) in the direction T {\displaystyle {\boldsymbol {T}}} is the fourth order tensor defined as ∂ F ∂ S : T = D F ( S ) [ T ] = [ d d α F ( S + α T ) ] α = 0 {\displaystyle {\frac {\partial {\boldsymbol {F}}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}=D{\boldsymbol {F}}({\boldsymbol {S}})[{\boldsymbol {T}}]=\left[{\frac {d}{d\alpha }}~{\boldsymbol {F}}({\boldsymbol {S}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}} for all second order tensors T {\displaystyle {\boldsymbol {T}}} .
Properties:
The gradient , ∇ T {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {T}}} , of a tensor field T ( x ) {\displaystyle {\boldsymbol {T}}(\mathbf {x} )} in the direction of an arbitrary constant vector c is defined as: ∇ T ⋅ c = lim α → 0 d d α T ( x + α c ) {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {T}}\cdot \mathbf {c} =\lim _{\alpha \rightarrow 0}\quad {\cfrac {d}{d\alpha }}~{\boldsymbol {T}}(\mathbf {x} +\alpha \mathbf {c} )} The gradient of a tensor field of order n is a tensor field of order n +1.
If e 1 , e 2 , e 3 {\displaystyle \mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}} are the basis vectors in a Cartesian coordinate system, with coordinates of points denoted by ( x 1 , x 2 , x 3 {\displaystyle x_{1},x_{2},x_{3}} ), then the gradient of the tensor field T {\displaystyle {\boldsymbol {T}}} is given by ∇ T = ∂ T ∂ x i ⊗ e i {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {T}}={\cfrac {\partial {\boldsymbol {T}}}{\partial x_{i}}}\otimes \mathbf {e} _{i}}
The vectors x and c can be written as x = x i e i {\displaystyle \mathbf {x} =x_{i}~\mathbf {e} _{i}} and c = c i e i {\displaystyle \mathbf {c} =c_{i}~\mathbf {e} _{i}} . Let y := x + α c . In that case the gradient is given by ∇ T ⋅ c = d d α T ( x 1 + α c 1 , x 2 + α c 2 , x 3 + α c 3 ) | α = 0 ≡ d d α T ( y 1 , y 2 , y 3 ) | α = 0 = [ ∂ T ∂ y 1 ∂ y 1 ∂ α + ∂ T ∂ y 2 ∂ y 2 ∂ α + ∂ T ∂ y 3 ∂ y 3 ∂ α ] α = 0 = [ ∂ T ∂ y 1 c 1 + ∂ T ∂ y 2 c 2 + ∂ T ∂ y 3 c 3 ] α = 0 = ∂ T ∂ x 1 c 1 + ∂ T ∂ x 2 c 2 + ∂ T ∂ x 3 c 3 ≡ ∂ T ∂ x i c i = ∂ T ∂ x i ( e i ⋅ c ) = [ ∂ T ∂ x i ⊗ e i ] ⋅ c ◻ {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {T}}\cdot \mathbf {c} &=\left.{\cfrac {d}{d\alpha }}~{\boldsymbol {T}}(x_{1}+\alpha c_{1},x_{2}+\alpha c_{2},x_{3}+\alpha c_{3})\right|_{\alpha =0}\equiv \left.{\cfrac {d}{d\alpha }}~{\boldsymbol {T}}(y_{1},y_{2},y_{3})\right|_{\alpha =0}\\&=\left[{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{1}}}~{\cfrac {\partial y_{1}}{\partial \alpha }}+{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{2}}}~{\cfrac {\partial y_{2}}{\partial \alpha }}+{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{3}}}~{\cfrac {\partial y_{3}}{\partial \alpha }}\right]_{\alpha =0}=\left[{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{1}}}~c_{1}+{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{2}}}~c_{2}+{\cfrac {\partial {\boldsymbol {T}}}{\partial y_{3}}}~c_{3}\right]_{\alpha =0}\\&={\cfrac {\partial {\boldsymbol {T}}}{\partial x_{1}}}~c_{1}+{\cfrac {\partial {\boldsymbol {T}}}{\partial x_{2}}}~c_{2}+{\cfrac {\partial {\boldsymbol {T}}}{\partial x_{3}}}~c_{3}\equiv {\cfrac {\partial {\boldsymbol {T}}}{\partial x_{i}}}~c_{i}={\cfrac {\partial {\boldsymbol {T}}}{\partial x_{i}}}~(\mathbf {e} _{i}\cdot \mathbf {c} )=\left[{\cfrac {\partial {\boldsymbol {T}}}{\partial x_{i}}}\otimes \mathbf {e} _{i}\right]\cdot \mathbf {c} \qquad \square \end{aligned}}}
Since the basis vectors do not vary in a Cartesian coordinate system we have the following relations for the gradients of a scalar field ϕ {\displaystyle \phi } , a vector field v , and a second-order tensor field S {\displaystyle {\boldsymbol {S}}} . ∇ ϕ = ∂ ϕ ∂ x i e i = ϕ , i e i ∇ v = ∂ ( v j e j ) ∂ x i ⊗ e i = ∂ v j ∂ x i e j ⊗ e i = v j , i e j ⊗ e i ∇ S = ∂ ( S j k e j ⊗ e k ) ∂ x i ⊗ e i = ∂ S j k ∂ x i e j ⊗ e k ⊗ e i = S j k , i e j ⊗ e k ⊗ e i {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\phi &={\cfrac {\partial \phi }{\partial x_{i}}}~\mathbf {e} _{i}=\phi _{,i}~\mathbf {e} _{i}\\{\boldsymbol {\nabla }}\mathbf {v} &={\cfrac {\partial (v_{j}\mathbf {e} _{j})}{\partial x_{i}}}\otimes \mathbf {e} _{i}={\cfrac {\partial v_{j}}{\partial x_{i}}}~\mathbf {e} _{j}\otimes \mathbf {e} _{i}=v_{j,i}~\mathbf {e} _{j}\otimes \mathbf {e} _{i}\\{\boldsymbol {\nabla }}{\boldsymbol {S}}&={\cfrac {\partial (S_{jk}\mathbf {e} _{j}\otimes \mathbf {e} _{k})}{\partial x_{i}}}\otimes \mathbf {e} _{i}={\cfrac {\partial S_{jk}}{\partial x_{i}}}~\mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{i}=S_{jk,i}~\mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{i}\end{aligned}}}
If g 1 , g 2 , g 3 {\displaystyle \mathbf {g} ^{1},\mathbf {g} ^{2},\mathbf {g} ^{3}} are the contravariant basis vectors in a curvilinear coordinate system, with coordinates of points denoted by ( ξ 1 , ξ 2 , ξ 3 {\displaystyle \xi ^{1},\xi ^{2},\xi ^{3}} ), then the gradient of the tensor field T {\displaystyle {\boldsymbol {T}}} is given by [ 3 ] ∇ T = ∂ T ∂ ξ i ⊗ g i {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {T}}={\frac {\partial {\boldsymbol {T}}}{\partial \xi ^{i}}}\otimes \mathbf {g} ^{i}}
From this definition we have the following relations for the gradients of a scalar field ϕ {\displaystyle \phi } , a vector field v , and a second-order tensor field S {\displaystyle {\boldsymbol {S}}} . ∇ ϕ = ∂ ϕ ∂ ξ i g i ∇ v = ∂ ( v j g j ) ∂ ξ i ⊗ g i = ( ∂ v j ∂ ξ i + v k Γ i k j ) g j ⊗ g i = ( ∂ v j ∂ ξ i − v k Γ i j k ) g j ⊗ g i ∇ S = ∂ ( S j k g j ⊗ g k ) ∂ ξ i ⊗ g i = ( ∂ S j k ∂ ξ i − S l k Γ i j l − S j l Γ i k l ) g j ⊗ g k ⊗ g i {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\phi &={\frac {\partial \phi }{\partial \xi ^{i}}}~\mathbf {g} ^{i}\\[1.2ex]{\boldsymbol {\nabla }}\mathbf {v} &={\frac {\partial \left(v^{j}\mathbf {g} _{j}\right)}{\partial \xi ^{i}}}\otimes \mathbf {g} ^{i}\\&=\left({\frac {\partial v^{j}}{\partial \xi ^{i}}}+v^{k}~\Gamma _{ik}^{j}\right)~\mathbf {g} _{j}\otimes \mathbf {g} ^{i}=\left({\frac {\partial v_{j}}{\partial \xi ^{i}}}-v_{k}~\Gamma _{ij}^{k}\right)~\mathbf {g} ^{j}\otimes \mathbf {g} ^{i}\\[1.2ex]{\boldsymbol {\nabla }}{\boldsymbol {S}}&={\frac {\partial \left(S_{jk}~\mathbf {g} ^{j}\otimes \mathbf {g} ^{k}\right)}{\partial \xi ^{i}}}\otimes \mathbf {g} ^{i}\\&=\left({\frac {\partial S_{jk}}{\partial \xi ^{i}}}-S_{lk}~\Gamma _{ij}^{l}-S_{jl}~\Gamma _{ik}^{l}\right)~\mathbf {g} ^{j}\otimes \mathbf {g} ^{k}\otimes \mathbf {g} ^{i}\end{aligned}}}
where the Christoffel symbol Γ i j k {\displaystyle \Gamma _{ij}^{k}} is defined using Γ i j k g k = ∂ g i ∂ ξ j ⟹ Γ i j k = ∂ g i ∂ ξ j ⋅ g k = − g i ⋅ ∂ g k ∂ ξ j {\displaystyle \Gamma _{ij}^{k}~\mathbf {g} _{k}={\frac {\partial \mathbf {g} _{i}}{\partial \xi ^{j}}}\quad \implies \quad \Gamma _{ij}^{k}={\frac {\partial \mathbf {g} _{i}}{\partial \xi ^{j}}}\cdot \mathbf {g} ^{k}=-\mathbf {g} _{i}\cdot {\frac {\partial \mathbf {g} ^{k}}{\partial \xi ^{j}}}}
In cylindrical coordinates , the gradient is given by ∇ ϕ = ∂ ϕ ∂ r e r + 1 r ∂ ϕ ∂ θ e θ + ∂ ϕ ∂ z e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\phi ={}\quad &{\frac {\partial \phi }{\partial r}}~\mathbf {e} _{r}+{\frac {1}{r}}~{\frac {\partial \phi }{\partial \theta }}~\mathbf {e} _{\theta }+{\frac {\partial \phi }{\partial z}}~\mathbf {e} _{z}\\\end{aligned}}}
∇ v = ∂ v r ∂ r e r ⊗ e r + 1 r ( ∂ v r ∂ θ − v θ ) e r ⊗ e θ + ∂ v r ∂ z e r ⊗ e z + ∂ v θ ∂ r e θ ⊗ e r + 1 r ( ∂ v θ ∂ θ + v r ) e θ ⊗ e θ + ∂ v θ ∂ z e θ ⊗ e z + ∂ v z ∂ r e z ⊗ e r + 1 r ∂ v z ∂ θ e z ⊗ e θ + ∂ v z ∂ z e z ⊗ e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} ={}\quad &{\frac {\partial v_{r}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\frac {1}{r}}\left({\frac {\partial v_{r}}{\partial \theta }}-v_{\theta }\right)~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial v_{r}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\\{}+{}&{\frac {\partial v_{\theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\frac {1}{r}}\left({\frac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial v_{\theta }}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\{}+{}&{\frac {\partial v_{z}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\frac {1}{r}}{\frac {\partial v_{z}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial v_{z}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\\\end{aligned}}}
∇ S = ∂ S r r ∂ r e r ⊗ e r ⊗ e r + ∂ S r r ∂ z e r ⊗ e r ⊗ e z + 1 r [ ∂ S r r ∂ θ − ( S θ r + S r θ ) ] e r ⊗ e r ⊗ e θ + ∂ S r θ ∂ r e r ⊗ e θ ⊗ e r + ∂ S r θ ∂ z e r ⊗ e θ ⊗ e z + 1 r [ ∂ S r θ ∂ θ + ( S r r − S θ θ ) ] e r ⊗ e θ ⊗ e θ + ∂ S r z ∂ r e r ⊗ e z ⊗ e r + ∂ S r z ∂ z e r ⊗ e z ⊗ e z + 1 r [ ∂ S r z ∂ θ − S θ z ] e r ⊗ e z ⊗ e θ + ∂ S θ r ∂ r e θ ⊗ e r ⊗ e r + ∂ S θ r ∂ z e θ ⊗ e r ⊗ e z + 1 r [ ∂ S θ r ∂ θ + ( S r r − S θ θ ) ] e θ ⊗ e r ⊗ e θ + ∂ S θ θ ∂ r e θ ⊗ e θ ⊗ e r + ∂ S θ θ ∂ z e θ ⊗ e θ ⊗ e z + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ ⊗ e θ ⊗ e θ + ∂ S θ z ∂ r e θ ⊗ e z ⊗ e r + ∂ S θ z ∂ z e θ ⊗ e z ⊗ e z + 1 r [ ∂ S θ z ∂ θ + S r z ] e θ ⊗ e z ⊗ e θ + ∂ S z r ∂ r e z ⊗ e r ⊗ e r + ∂ S z r ∂ z e z ⊗ e r ⊗ e z + 1 r [ ∂ S z r ∂ θ − S z θ ] e z ⊗ e r ⊗ e θ + ∂ S z θ ∂ r e z ⊗ e θ ⊗ e r + ∂ S z θ ∂ z e z ⊗ e θ ⊗ e z + 1 r [ ∂ S z θ ∂ θ + S z r ] e z ⊗ e θ ⊗ e θ + ∂ S z z ∂ r e z ⊗ e z ⊗ e r + ∂ S z z ∂ z e z ⊗ e z ⊗ e z + 1 r ∂ S z z ∂ θ e z ⊗ e z ⊗ e θ {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {S}}={}\quad &{\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\frac {\partial S_{rr}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{rr}}{\partial \theta }}-(S_{\theta r}+S_{r\theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\frac {\partial S_{r\theta }}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{r\theta }}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\frac {\partial S_{rz}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{rz}}{\partial \theta }}-S_{\theta z}\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{\theta r}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\frac {\partial S_{\theta r}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{\theta r}}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{\theta \theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\frac {\partial S_{\theta \theta }}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{\theta z}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\frac {\partial S_{\theta z}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{zr}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{zr}}{\partial \theta }}-S_{z\theta }\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{z\theta }}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}+{\frac {1}{r}}\left[{\frac {\partial S_{z\theta }}{\partial \theta }}+S_{zr}\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\\{}+{}&{\frac {\partial S_{zz}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}+{\frac {1}{r}}~{\frac {\partial S_{zz}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\end{aligned}}}
The divergence of a tensor field T ( x ) {\displaystyle {\boldsymbol {T}}(\mathbf {x} )} is defined using the recursive relation ( ∇ ⋅ T ) ⋅ c = ∇ ⋅ ( c ⋅ T T ) ; ∇ ⋅ v = tr ( ∇ v ) {\displaystyle ({\boldsymbol {\nabla }}\cdot {\boldsymbol {T}})\cdot \mathbf {c} ={\boldsymbol {\nabla }}\cdot \left(\mathbf {c} \cdot {\boldsymbol {T}}^{\textsf {T}}\right)~;\qquad {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\text{tr}}({\boldsymbol {\nabla }}\mathbf {v} )}
where c is an arbitrary constant vector and v is a vector field. If T {\displaystyle {\boldsymbol {T}}} is a tensor field of order n > 1 then the divergence of the field is a tensor of order n − 1.
In a Cartesian coordinate system we have the following relations for a vector field v and a second-order tensor field S {\displaystyle {\boldsymbol {S}}} . ∇ ⋅ v = ∂ v i ∂ x i = v i , i ∇ ⋅ S = ∂ S i k ∂ x i e k = S i k , i e k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \mathbf {v} &={\frac {\partial v_{i}}{\partial x_{i}}}=v_{i,i}\\{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&={\frac {\partial S_{ik}}{\partial x_{i}}}~\mathbf {e} _{k}=S_{ik,i}~\mathbf {e} _{k}\end{aligned}}}
where tensor index notation for partial derivatives is used in the rightmost expressions. Note that ∇ ⋅ S ≠ ∇ ⋅ S T . {\displaystyle {\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}\neq {\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}^{\textsf {T}}.}
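The difference between the two contractions can be made explicit symbolically. The sketch below (an added illustration with a made-up nonsymmetric field S {\displaystyle {\boldsymbol {S}}} ) computes both S i k , i {\displaystyle S_{ik,i}} and S k i , i {\displaystyle S_{ki,i}} :

```python
# The two divergence conventions for a second-order tensor field differ
# when S is not symmetric: contraction on the first index (S_{ik,i})
# versus on the second index (S_{ki,i}).
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = (x1, x2, x3)
S = sp.Matrix([[x1*x2, x2**2, 0],
               [x3,    x1*x3, x2],
               [x1**2, 0,     x2*x3]])   # an arbitrary nonsymmetric field

div_first  = sp.Matrix([sum(sp.diff(S[i, k], X[i]) for i in range(3)) for k in range(3)])
div_second = sp.Matrix([sum(sp.diff(S[k, i], X[i]) for i in range(3)) for k in range(3)])
print(div_first.T, div_second.T)   # [x2, 0, x2] versus [3*x2, 0, 2*x1 + x2]
```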
For a symmetric second-order tensor, the divergence is also often written as [ 4 ]
∇ ⋅ S = ∂ S k i ∂ x i e k = S k i , i e k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&={\cfrac {\partial S_{ki}}{\partial x_{i}}}~\mathbf {e} _{k}=S_{ki,i}~\mathbf {e} _{k}\end{aligned}}}
The above expression is sometimes used as the definition of ∇ ⋅ S {\displaystyle {\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}} in Cartesian component form (often also written as div S {\displaystyle \operatorname {div} {\boldsymbol {S}}} ). Note that such a definition is not consistent with the rest of this article (see the section on curvilinear co-ordinates).
The difference stems from whether the differentiation is performed with respect to the rows or columns of S {\displaystyle {\boldsymbol {S}}} , and is conventional. This is demonstrated by an example. In a Cartesian coordinate system the second order tensor (matrix) S {\displaystyle \mathbf {S} } is the gradient of a vector function v {\displaystyle \mathbf {v} } .
∇ ⋅ ( ∇ v ) = ∇ ⋅ ( v i , j e i ⊗ e j ) = v i , j i e i ⋅ e i ⊗ e j = ( ∇ ⋅ v ) , j e j = ∇ ( ∇ ⋅ v ) ∇ ⋅ [ ( ∇ v ) T ] = ∇ ⋅ ( v j , i e i ⊗ e j ) = v j , i i e i ⋅ e i ⊗ e j = ∇ 2 v j e j = ∇ 2 v {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \left({\boldsymbol {\nabla }}\mathbf {v} \right)&={\boldsymbol {\nabla }}\cdot \left(v_{i,j}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\right)=v_{i,ji}~\mathbf {e} _{i}\cdot \mathbf {e} _{i}\otimes \mathbf {e} _{j}=\left({\boldsymbol {\nabla }}\cdot \mathbf {v} \right)_{,j}~\mathbf {e} _{j}={\boldsymbol {\nabla }}\left({\boldsymbol {\nabla }}\cdot \mathbf {v} \right)\\{\boldsymbol {\nabla }}\cdot \left[\left({\boldsymbol {\nabla }}\mathbf {v} \right)^{\textsf {T}}\right]&={\boldsymbol {\nabla }}\cdot \left(v_{j,i}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\right)=v_{j,ii}~\mathbf {e} _{i}\cdot \mathbf {e} _{i}\otimes \mathbf {e} _{j}={\boldsymbol {\nabla }}^{2}v_{j}~\mathbf {e} _{j}={\boldsymbol {\nabla }}^{2}\mathbf {v} \end{aligned}}}
The last equation is equivalent to the alternative definition / interpretation [ 4 ]
( ∇ ⋅ ) alt ( ∇ v ) = ( ∇ ⋅ ) alt ( v i , j e i ⊗ e j ) = v i , j j e i ⊗ e j ⋅ e j = ∇ 2 v i e i = ∇ 2 v {\displaystyle {\begin{aligned}\left({\boldsymbol {\nabla }}\cdot \right)_{\text{alt}}\left({\boldsymbol {\nabla }}\mathbf {v} \right)=\left({\boldsymbol {\nabla }}\cdot \right)_{\text{alt}}\left(v_{i,j}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\right)=v_{i,jj}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\cdot \mathbf {e} _{j}={\boldsymbol {\nabla }}^{2}v_{i}~\mathbf {e} _{i}={\boldsymbol {\nabla }}^{2}\mathbf {v} \end{aligned}}}
In curvilinear coordinates, the divergences of a vector field v and a second-order tensor field S {\displaystyle {\boldsymbol {S}}} are ∇ ⋅ v = ( ∂ v i ∂ ξ i + v k Γ i k i ) ∇ ⋅ S = ( ∂ S i k ∂ ξ i − S l k Γ i i l − S i l Γ i k l ) g k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \mathbf {v} &=\left({\cfrac {\partial v^{i}}{\partial \xi ^{i}}}+v^{k}~\Gamma _{ik}^{i}\right)\\{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\left({\cfrac {\partial S_{ik}}{\partial \xi ^{i}}}-S_{lk}~\Gamma _{ii}^{l}-S_{il}~\Gamma _{ik}^{l}\right)~\mathbf {g} ^{k}\end{aligned}}}
More generally, ∇ ⋅ S = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] g i k b j = [ ∂ S i j ∂ q i + Γ i l i S l j + Γ i l j S i l ] b j = [ ∂ S j i ∂ q i + Γ i l i S j l − Γ i j l S l i ] b j = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k l j S i l ] g i k b j {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\left[{\cfrac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~g^{ik}~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S^{ij}}{\partial q^{i}}}+\Gamma _{il}^{i}~S^{lj}+\Gamma _{il}^{j}~S^{il}\right]~\mathbf {b} _{j}\\[8pt]&=\left[{\cfrac {\partial S_{~j}^{i}}{\partial q^{i}}}+\Gamma _{il}^{i}~S_{~j}^{l}-\Gamma _{ij}^{l}~S_{~l}^{i}\right]~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S_{i}^{~j}}{\partial q^{k}}}-\Gamma _{ik}^{l}~S_{l}^{~j}+\Gamma _{kl}^{j}~S_{i}^{~l}\right]~g^{ik}~\mathbf {b} _{j}\end{aligned}}}
In cylindrical polar coordinates ∇ ⋅ v = ∂ v r ∂ r + 1 r ( ∂ v θ ∂ θ + v r ) + ∂ v z ∂ z ∇ ⋅ S = ∂ S r r ∂ r e r + ∂ S r θ ∂ r e θ + ∂ S r z ∂ r e z + 1 r [ ∂ S θ r ∂ θ + ( S r r − S θ θ ) ] e r + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ + 1 r [ ∂ S θ z ∂ θ + S r z ] e z + ∂ S z r ∂ z e r + ∂ S z θ ∂ z e θ + ∂ S z z ∂ z e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \mathbf {v} =\quad &{\frac {\partial v_{r}}{\partial r}}+{\frac {1}{r}}\left({\frac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)+{\frac {\partial v_{z}}{\partial z}}\\{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}=\quad &{\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}+{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{\theta }+{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{z}\\{}+{}&{\frac {1}{r}}\left[{\frac {\partial S_{\theta r}}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}+{\frac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }+{\frac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{z}\\{}+{}&{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{r}+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{\theta }+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\end{aligned}}}
The curl of an order- n > 1 tensor field T ( x ) {\displaystyle {\boldsymbol {T}}(\mathbf {x} )} is also defined using the recursive relation ( ∇ × T ) ⋅ c = ∇ × ( c ⋅ T ) ; ( ∇ × v ) ⋅ c = ∇ ⋅ ( v × c ) {\displaystyle ({\boldsymbol {\nabla }}\times {\boldsymbol {T}})\cdot \mathbf {c} ={\boldsymbol {\nabla }}\times (\mathbf {c} \cdot {\boldsymbol {T}})~;\qquad ({\boldsymbol {\nabla }}\times \mathbf {v} )\cdot \mathbf {c} ={\boldsymbol {\nabla }}\cdot (\mathbf {v} \times \mathbf {c} )} where c is an arbitrary constant vector and v is a vector field.
Consider a vector field v and an arbitrary constant vector c . In index notation, the cross product is given by v × c = ε i j k v j c k e i {\displaystyle \mathbf {v} \times \mathbf {c} =\varepsilon _{ijk}~v_{j}~c_{k}~\mathbf {e} _{i}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the permutation symbol , otherwise known as the Levi-Civita symbol. Then, ∇ ⋅ ( v × c ) = ε i j k v j , i c k = ( ε i j k v j , i e k ) ⋅ c = ( ∇ × v ) ⋅ c {\displaystyle {\boldsymbol {\nabla }}\cdot (\mathbf {v} \times \mathbf {c} )=\varepsilon _{ijk}~v_{j,i}~c_{k}=(\varepsilon _{ijk}~v_{j,i}~\mathbf {e} _{k})\cdot \mathbf {c} =({\boldsymbol {\nabla }}\times \mathbf {v} )\cdot \mathbf {c} } Therefore, ∇ × v = ε i j k v j , i e k {\displaystyle {\boldsymbol {\nabla }}\times \mathbf {v} =\varepsilon _{ijk}~v_{j,i}~\mathbf {e} _{k}}
For a second-order tensor S {\displaystyle {\boldsymbol {S}}} c ⋅ S = c m S m j e j {\displaystyle \mathbf {c} \cdot {\boldsymbol {S}}=c_{m}~S_{mj}~\mathbf {e} _{j}} Hence, using the definition of the curl of a first-order tensor field, ∇ × ( c ⋅ S ) = ε i j k c m S m j , i e k = ( ε i j k S m j , i e k ⊗ e m ) ⋅ c = ( ∇ × S ) ⋅ c {\displaystyle {\boldsymbol {\nabla }}\times (\mathbf {c} \cdot {\boldsymbol {S}})=\varepsilon _{ijk}~c_{m}~S_{mj,i}~\mathbf {e} _{k}=(\varepsilon _{ijk}~S_{mj,i}~\mathbf {e} _{k}\otimes \mathbf {e} _{m})\cdot \mathbf {c} =({\boldsymbol {\nabla }}\times {\boldsymbol {S}})\cdot \mathbf {c} } Therefore, we have ∇ × S = ε i j k S m j , i e k ⊗ e m {\displaystyle {\boldsymbol {\nabla }}\times {\boldsymbol {S}}=\varepsilon _{ijk}~S_{mj,i}~\mathbf {e} _{k}\otimes \mathbf {e} _{m}}
The most commonly used identity involving the curl of a tensor field, T {\displaystyle {\boldsymbol {T}}} , is ∇ × ( ∇ T ) = 0 {\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}{\boldsymbol {T}})={\boldsymbol {0}}} This identity holds for tensor fields of all orders. For the important case of a second-order tensor, S {\displaystyle {\boldsymbol {S}}} , this identity implies that ∇ × ( ∇ S ) = 0 ⟹ S m i , j − S m j , i = 0 {\displaystyle {\boldsymbol {\nabla }}\times ({\boldsymbol {\nabla }}{\boldsymbol {S}})={\boldsymbol {0}}\quad \implies \quad S_{mi,j}-S_{mj,i}=0}
The derivative of the determinant of a second order tensor A {\displaystyle {\boldsymbol {A}}} is given by ∂ ∂ A det ( A ) = det ( A ) [ A − 1 ] T . {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\det({\boldsymbol {A}})=\det({\boldsymbol {A}})~\left[{\boldsymbol {A}}^{-1}\right]^{\textsf {T}}~.}
In an orthonormal basis, the components of A {\displaystyle {\boldsymbol {A}}} can be written as a matrix A . In that case, the right-hand side corresponds to the matrix of cofactors of A .
Let A {\displaystyle {\boldsymbol {A}}} be a second order tensor and let f ( A ) = det ( A ) {\displaystyle f({\boldsymbol {A}})=\det({\boldsymbol {A}})} . Then, from the definition of the derivative of a scalar valued function of a tensor, we have ∂ f ∂ A : T = d d α det ( A + α T ) | α = 0 = d d α det [ α A ( 1 α I + A − 1 ⋅ T ) ] | α = 0 = d d α [ α 3 det ( A ) det ( 1 α I + A − 1 ⋅ T ) ] | α = 0 . {\displaystyle {\begin{aligned}{\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}&=\left.{\cfrac {d}{d\alpha }}\det({\boldsymbol {A}}+\alpha ~{\boldsymbol {T}})\right|_{\alpha =0}\\&=\left.{\cfrac {d}{d\alpha }}\det \left[\alpha ~{\boldsymbol {A}}\left({\cfrac {1}{\alpha }}~{\boldsymbol {\mathit {I}}}+{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)\right]\right|_{\alpha =0}\\&=\left.{\cfrac {d}{d\alpha }}\left[\alpha ^{3}~\det({\boldsymbol {A}})~\det \left({\cfrac {1}{\alpha }}~{\boldsymbol {\mathit {I}}}+{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)\right]\right|_{\alpha =0}.\end{aligned}}}
The determinant of a tensor can be expressed in the form of a characteristic equation in terms of the invariants I 1 , I 2 , I 3 {\displaystyle I_{1},I_{2},I_{3}} using det ( λ I + A ) = λ 3 + I 1 ( A ) λ 2 + I 2 ( A ) λ + I 3 ( A ) . {\displaystyle \det(\lambda ~{\boldsymbol {\mathit {I}}}+{\boldsymbol {A}})=\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}}).}
Using this expansion we can write ∂ f ∂ A : T = d d α [ α 3 det ( A ) ( 1 α 3 + I 1 ( A − 1 ⋅ T ) 1 α 2 + I 2 ( A − 1 ⋅ T ) 1 α + I 3 ( A − 1 ⋅ T ) ) ] | α = 0 = det ( A ) d d α [ 1 + I 1 ( A − 1 ⋅ T ) α + I 2 ( A − 1 ⋅ T ) α 2 + I 3 ( A − 1 ⋅ T ) α 3 ] | α = 0 = det ( A ) [ I 1 ( A − 1 ⋅ T ) + 2 I 2 ( A − 1 ⋅ T ) α + 3 I 3 ( A − 1 ⋅ T ) α 2 ] | α = 0 = det ( A ) I 1 ( A − 1 ⋅ T ) . {\displaystyle {\begin{aligned}{\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}&=\left.{\cfrac {d}{d\alpha }}\left[\alpha ^{3}~\det({\boldsymbol {A}})~\left({\cfrac {1}{\alpha ^{3}}}+I_{1}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~{\cfrac {1}{\alpha ^{2}}}+I_{2}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~{\cfrac {1}{\alpha }}+I_{3}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)\right)\right]\right|_{\alpha =0}\\&=\left.\det({\boldsymbol {A}})~{\cfrac {d}{d\alpha }}\left[1+I_{1}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~\alpha +I_{2}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~\alpha ^{2}+I_{3}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~\alpha ^{3}\right]\right|_{\alpha =0}\\&=\left.\det({\boldsymbol {A}})~\left[I_{1}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}})+2~I_{2}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~\alpha +3~I_{3}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~\alpha ^{2}\right]\right|_{\alpha =0}\\&=\det({\boldsymbol {A}})~I_{1}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)~.\end{aligned}}}
Recall that the invariant I 1 {\displaystyle I_{1}} is given by I 1 ( A ) = tr A . {\displaystyle I_{1}({\boldsymbol {A}})={\text{tr}}{\boldsymbol {A}}.}
Hence, ∂ f ∂ A : T = det ( A ) tr ( A − 1 ⋅ T ) = det ( A ) [ A − 1 ] T : T . {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}=\det({\boldsymbol {A}})~{\text{tr}}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)=\det({\boldsymbol {A}})~\left[{\boldsymbol {A}}^{-1}\right]^{\textsf {T}}:{\boldsymbol {T}}.}
Invoking the arbitrariness of T {\displaystyle {\boldsymbol {T}}} we then have ∂ f ∂ A = det ( A ) [ A − 1 ] T . {\displaystyle {\frac {\partial f}{\partial {\boldsymbol {A}}}}=\det({\boldsymbol {A}})~\left[{\boldsymbol {A}}^{-1}\right]^{\textsf {T}}~.}
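This result is easy to check with finite differences; the following sketch (illustrative, using random matrices) compares d d α det ( A + α T ) | α = 0 {\textstyle {\frac {d}{d\alpha }}\det({\boldsymbol {A}}+\alpha \,{\boldsymbol {T}}){\big |}_{\alpha =0}} against det ( A ) tr ( A − 1 ⋅ T ) {\displaystyle \det({\boldsymbol {A}})\,{\text{tr}}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\right)} :

```python
# Finite-difference check of d/d(alpha) det(A + alpha*T) at alpha = 0
# against det(A) * tr(A^{-1} T) = det(A) [A^{-1}]^T : T.
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(3, 3)) + 3*np.eye(3)   # shifted to keep A well conditioned
T = rng.normal(size=(3, 3))

h = 1e-6
lhs = (np.linalg.det(A + h*T) - np.linalg.det(A - h*T)) / (2*h)
rhs = np.linalg.det(A) * np.trace(np.linalg.inv(A) @ T)
print(np.isclose(lhs, rhs))   # True, up to finite-difference error
```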
The principal invariants of a second order tensor are I 1 ( A ) = tr A I 2 ( A ) = 1 2 [ ( tr A ) 2 − tr A 2 ] I 3 ( A ) = det ( A ) {\displaystyle {\begin{aligned}I_{1}({\boldsymbol {A}})&={\text{tr}}{\boldsymbol {A}}\\I_{2}({\boldsymbol {A}})&={\tfrac {1}{2}}\left[({\text{tr}}{\boldsymbol {A}})^{2}-{\text{tr}}{{\boldsymbol {A}}^{2}}\right]\\I_{3}({\boldsymbol {A}})&=\det({\boldsymbol {A}})\end{aligned}}}
The derivatives of these three invariants with respect to A {\displaystyle {\boldsymbol {A}}} are ∂ I 1 ∂ A = 1 ∂ I 2 ∂ A = I 1 1 − A T ∂ I 3 ∂ A = det ( A ) [ A − 1 ] T = I 2 1 − A T ( I 1 1 − A T ) = ( A 2 − I 1 A + I 2 1 ) T {\displaystyle {\begin{aligned}{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&={\boldsymbol {\mathit {1}}}\\[3pt]{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=I_{1}\,{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\\[3pt]{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=\det({\boldsymbol {A}})~\left[{\boldsymbol {A}}^{-1}\right]^{\textsf {T}}\\&=I_{2}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}~\left(I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\right)=\left({\boldsymbol {A}}^{2}-I_{1}~{\boldsymbol {A}}+I_{2}~{\boldsymbol {\mathit {1}}}\right)^{\textsf {T}}\end{aligned}}}
From the derivative of the determinant we know that ∂ I 3 ∂ A = det ( A ) [ A − 1 ] T . {\displaystyle {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}=\det({\boldsymbol {A}})~\left[{\boldsymbol {A}}^{-1}\right]^{\textsf {T}}~.}
For the derivatives of the other two invariants, let us go back to the characteristic equation det ( λ 1 + A ) = λ 3 + I 1 ( A ) λ 2 + I 2 ( A ) λ + I 3 ( A ) . {\displaystyle \det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})=\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}})~.}
Using the same approach as for the determinant of a tensor, we can show that ∂ ∂ A det ( λ 1 + A ) = det ( λ 1 + A ) [ ( λ 1 + A ) − 1 ] T . {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~\left[(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{-1}\right]^{\textsf {T}}~.}
Now the left hand side can be expanded as ∂ ∂ A det ( λ 1 + A ) = ∂ ∂ A [ λ 3 + I 1 ( A ) λ 2 + I 2 ( A ) λ + I 3 ( A ) ] = ∂ I 1 ∂ A λ 2 + ∂ I 2 ∂ A λ + ∂ I 3 ∂ A . {\displaystyle {\begin{aligned}{\frac {\partial }{\partial {\boldsymbol {A}}}}\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})&={\frac {\partial }{\partial {\boldsymbol {A}}}}\left[\lambda ^{3}+I_{1}({\boldsymbol {A}})~\lambda ^{2}+I_{2}({\boldsymbol {A}})~\lambda +I_{3}({\boldsymbol {A}})\right]\\&={\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~.\end{aligned}}}
Hence ∂ I 1 ∂ A λ 2 + ∂ I 2 ∂ A λ + ∂ I 3 ∂ A = det ( λ 1 + A ) [ ( λ 1 + A ) − 1 ] T {\displaystyle {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~\left[(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{-1}\right]^{\textsf {T}}} or, ( λ 1 + A ) T ⋅ [ ∂ I 1 ∂ A λ 2 + ∂ I 2 ∂ A λ + ∂ I 3 ∂ A ] = det ( λ 1 + A ) 1 . {\displaystyle (\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})^{\textsf {T}}\cdot \left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right]=\det(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}})~{\boldsymbol {\mathit {1}}}~.}
Expanding the right hand side and separating terms on the left hand side gives ( λ 1 + A T ) ⋅ [ ∂ I 1 ∂ A λ 2 + ∂ I 2 ∂ A λ + ∂ I 3 ∂ A ] = [ λ 3 + I 1 λ 2 + I 2 λ + I 3 ] 1 {\displaystyle \left(\lambda ~{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{\textsf {T}}\right)\cdot \left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right]=\left[\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}}
or, [ ∂ I 1 ∂ A λ 3 + ∂ I 2 ∂ A λ 2 + ∂ I 3 ∂ A λ ] 1 + A T ⋅ ∂ I 1 ∂ A λ 2 + A T ⋅ ∂ I 2 ∂ A λ + A T ⋅ ∂ I 3 ∂ A = [ λ 3 + I 1 λ 2 + I 2 λ + I 3 ] 1 . {\displaystyle {\begin{aligned}\left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}\right.&\left.+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~\lambda \right]{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\\&=\left[\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}~.\end{aligned}}}
If we define I 0 := 1 {\displaystyle I_{0}:=1} and I 4 := 0 {\displaystyle I_{4}:=0} , we can write the above as [ ∂ I 1 ∂ A λ 3 + ∂ I 2 ∂ A λ 2 + ∂ I 3 ∂ A λ + ∂ I 4 ∂ A ] 1 + A T ⋅ ∂ I 0 ∂ A λ 3 + A T ⋅ ∂ I 1 ∂ A λ 2 + A T ⋅ ∂ I 2 ∂ A λ + A T ⋅ ∂ I 3 ∂ A = [ I 0 λ 3 + I 1 λ 2 + I 2 λ + I 3 ] 1 . {\displaystyle {\begin{aligned}\left[{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}\right.&\left.+{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~\lambda +{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}\right]{\boldsymbol {\mathit {1}}}+{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}~\lambda ^{3}+{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~\lambda ^{2}+{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~\lambda +{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\\&=\left[I_{0}~\lambda ^{3}+I_{1}~\lambda ^{2}+I_{2}~\lambda +I_{3}\right]{\boldsymbol {\mathit {1}}}~.\end{aligned}}}
Collecting terms containing various powers of λ, we get λ 3 ( I 0 1 − ∂ I 1 ∂ A 1 − A T ⋅ ∂ I 0 ∂ A ) + λ 2 ( I 1 1 − ∂ I 2 ∂ A 1 − A T ⋅ ∂ I 1 ∂ A ) + λ ( I 2 1 − ∂ I 3 ∂ A 1 − A T ⋅ ∂ I 2 ∂ A ) + ( I 3 1 − ∂ I 4 ∂ A 1 − A T ⋅ ∂ I 3 ∂ A ) = 0 . {\displaystyle {\begin{aligned}\lambda ^{3}&\left(I_{0}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}\right)+\lambda ^{2}\left(I_{1}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}\right)+\\&\qquad \qquad \lambda \left(I_{2}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}\right)+\left(I_{3}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}\right)=0~.\end{aligned}}}
Then, invoking the arbitrariness of λ, we have I 0 1 − ∂ I 1 ∂ A 1 − A T ⋅ ∂ I 0 ∂ A = 0 I 1 1 − ∂ I 2 ∂ A 1 − A T ⋅ ∂ I 1 ∂ A = 0 I 2 1 − ∂ I 3 ∂ A 1 − A T ⋅ ∂ I 2 ∂ A = 0 I 3 1 − ∂ I 4 ∂ A 1 − A T ⋅ ∂ I 3 ∂ A = 0 . {\displaystyle {\begin{aligned}I_{0}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{0}}{\partial {\boldsymbol {A}}}}&=0\\I_{1}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&=0\\I_{2}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=0\\I_{3}~{\boldsymbol {\mathit {1}}}-{\frac {\partial I_{4}}{\partial {\boldsymbol {A}}}}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\cdot {\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=0~.\end{aligned}}}
This implies that ∂ I 1 ∂ A = 1 ∂ I 2 ∂ A = I 1 1 − A T ∂ I 3 ∂ A = I 2 1 − A T ( I 1 1 − A T ) = ( A 2 − I 1 A + I 2 1 ) T {\displaystyle {\begin{aligned}{\frac {\partial I_{1}}{\partial {\boldsymbol {A}}}}&={\boldsymbol {\mathit {1}}}\\{\frac {\partial I_{2}}{\partial {\boldsymbol {A}}}}&=I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\\{\frac {\partial I_{3}}{\partial {\boldsymbol {A}}}}&=I_{2}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}~\left(I_{1}~{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}\right)=\left({\boldsymbol {A}}^{2}-I_{1}~{\boldsymbol {A}}+I_{2}~{\boldsymbol {\mathit {1}}}\right)^{\textsf {T}}\end{aligned}}}
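The same finite-difference idea verifies these formulas; the sketch below (illustrative) checks ∂ I 2 / ∂ A = I 1 1 − A T {\displaystyle \partial I_{2}/\partial {\boldsymbol {A}}=I_{1}\,{\boldsymbol {\mathit {1}}}-{\boldsymbol {A}}^{\textsf {T}}} through the double contraction with an arbitrary direction T {\displaystyle {\boldsymbol {T}}} :

```python
# Check dI2/dA = I1*1 - A^T via the contraction (dI2/dA) : T.
import numpy as np

def I2(A):
    return 0.5 * (np.trace(A)**2 - np.trace(A @ A))

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 3))
T = rng.normal(size=(3, 3))

h = 1e-6
lhs = (I2(A + h*T) - I2(A - h*T)) / (2*h)     # directional derivative of I2
dI2 = np.trace(A)*np.eye(3) - A.T
print(np.isclose(lhs, np.sum(dI2 * T)))       # True: sum_ij (dI2)_ij T_ij
```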
Let 1 {\displaystyle {\boldsymbol {\mathit {1}}}} be the second order identity tensor. Then the derivative of this tensor with respect to a second order tensor A {\displaystyle {\boldsymbol {A}}} is given by ∂ 1 ∂ A : T = 0 : T = 0 {\displaystyle {\frac {\partial {\boldsymbol {\mathit {1}}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathsf {0}}}:{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}} This is because 1 {\displaystyle {\boldsymbol {\mathit {1}}}} is independent of A {\displaystyle {\boldsymbol {A}}} .
Let A {\displaystyle {\boldsymbol {A}}} be a second order tensor. Then ∂ A ∂ A : T = [ ∂ ∂ α ( A + α T ) ] α = 0 = T = I : T {\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}=\left[{\frac {\partial }{\partial \alpha }}({\boldsymbol {A}}+\alpha ~{\boldsymbol {T}})\right]_{\alpha =0}={\boldsymbol {T}}={\boldsymbol {\mathsf {I}}}:{\boldsymbol {T}}}
Therefore, ∂ A ∂ A = I {\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}={\boldsymbol {\mathsf {I}}}}
Here I {\displaystyle {\boldsymbol {\mathsf {I}}}} is the fourth order identity tensor. In index notation with respect to an orthonormal basis I = δ i k δ j l e i ⊗ e j ⊗ e k ⊗ e l {\displaystyle {\boldsymbol {\mathsf {I}}}=\delta _{ik}~\delta _{jl}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}
This result implies that ∂ A T ∂ A : T = I T : T = T T {\displaystyle {\frac {\partial {\boldsymbol {A}}^{\textsf {T}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathsf {I}}}^{\textsf {T}}:{\boldsymbol {T}}={\boldsymbol {T}}^{\textsf {T}}} where I T = δ j k δ i l e i ⊗ e j ⊗ e k ⊗ e l {\displaystyle {\boldsymbol {\mathsf {I}}}^{\textsf {T}}=\delta _{jk}~\delta _{il}~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}
Therefore, if the tensor A {\displaystyle {\boldsymbol {A}}} is symmetric, then the derivative is also symmetric and we get ∂ A ∂ A = I ( s ) = 1 2 ( I + I T ) {\displaystyle {\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}={\boldsymbol {\mathsf {I}}}^{(s)}={\frac {1}{2}}~\left({\boldsymbol {\mathsf {I}}}+{\boldsymbol {\mathsf {I}}}^{\textsf {T}}\right)} where the symmetric fourth order identity tensor is I ( s ) = 1 2 ( δ i k δ j l + δ i l δ j k ) e i ⊗ e j ⊗ e k ⊗ e l {\displaystyle {\boldsymbol {\mathsf {I}}}^{(s)}={\frac {1}{2}}~(\delta _{ik}~\delta _{jl}+\delta _{il}~\delta _{jk})~\mathbf {e} _{i}\otimes \mathbf {e} _{j}\otimes \mathbf {e} _{k}\otimes \mathbf {e} _{l}}
Let A {\displaystyle {\boldsymbol {A}}} and T {\displaystyle {\boldsymbol {T}}} be two second order tensors, then ∂ ∂ A ( A − 1 ) : T = − A − 1 ⋅ T ⋅ A − 1 {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-1}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\cdot {\boldsymbol {A}}^{-1}} In index notation with respect to an orthonormal basis ∂ A i j − 1 ∂ A k l T k l = − A i k − 1 T k l A l j − 1 ⟹ ∂ A i j − 1 ∂ A k l = − A i k − 1 A l j − 1 {\displaystyle {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}~T_{kl}=-A_{ik}^{-1}~T_{kl}~A_{lj}^{-1}\implies {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}=-A_{ik}^{-1}~A_{lj}^{-1}} We also have ∂ ∂ A ( A − T ) : T = − A − T ⋅ T T ⋅ A − T {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-{\textsf {T}}}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-{\textsf {T}}}\cdot {\boldsymbol {T}}^{\textsf {T}}\cdot {\boldsymbol {A}}^{-{\textsf {T}}}} In index notation ∂ A j i − 1 ∂ A k l T k l = − A j k − 1 T l k A l i − 1 ⟹ ∂ A j i − 1 ∂ A k l = − A l i − 1 A j k − 1 {\displaystyle {\frac {\partial A_{ji}^{-1}}{\partial A_{kl}}}~T_{kl}=-A_{jk}^{-1}~T_{lk}~A_{li}^{-1}\implies {\frac {\partial A_{ji}^{-1}}{\partial A_{kl}}}=-A_{li}^{-1}~A_{jk}^{-1}} If the tensor A {\displaystyle {\boldsymbol {A}}} is symmetric then ∂ A i j − 1 ∂ A k l = − 1 2 ( A i k − 1 A j l − 1 + A i l − 1 A j k − 1 ) {\displaystyle {\frac {\partial A_{ij}^{-1}}{\partial A_{kl}}}=-{\cfrac {1}{2}}\left(A_{ik}^{-1}~A_{jl}^{-1}+A_{il}^{-1}~A_{jk}^{-1}\right)}
Recall that ∂ 1 ∂ A : T = 0 {\displaystyle {\frac {\partial {\boldsymbol {\mathit {1}}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}}
Since A − 1 ⋅ A = 1 {\displaystyle {\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}={\boldsymbol {\mathit {1}}}} , we can write ∂ ∂ A ( A − 1 ⋅ A ) : T = 0 {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}\right):{\boldsymbol {T}}={\boldsymbol {\mathit {0}}}}
Using the product rule for second order tensors ∂ ∂ S [ F 1 ( S ) ⋅ F 2 ( S ) ] : T = ( ∂ F 1 ∂ S : T ) ⋅ F 2 + F 1 ⋅ ( ∂ F 2 ∂ S : T ) {\displaystyle {\frac {\partial }{\partial {\boldsymbol {S}}}}[{\boldsymbol {F}}_{1}({\boldsymbol {S}})\cdot {\boldsymbol {F}}_{2}({\boldsymbol {S}})]:{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {F}}_{1}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {F}}_{2}+{\boldsymbol {F}}_{1}\cdot \left({\frac {\partial {\boldsymbol {F}}_{2}}{\partial {\boldsymbol {S}}}}:{\boldsymbol {T}}\right)}
we get ∂ ∂ A ( A − 1 ⋅ A ) : T = ( ∂ A − 1 ∂ A : T ) ⋅ A + A − 1 ⋅ ( ∂ A ∂ A : T ) = 0 {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}({\boldsymbol {A}}^{-1}\cdot {\boldsymbol {A}}):{\boldsymbol {T}}=\left({\frac {\partial {\boldsymbol {A}}^{-1}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {A}}+{\boldsymbol {A}}^{-1}\cdot \left({\frac {\partial {\boldsymbol {A}}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)={\boldsymbol {\mathit {0}}}} or, ( ∂ A − 1 ∂ A : T ) ⋅ A = − A − 1 ⋅ T {\displaystyle \left({\frac {\partial {\boldsymbol {A}}^{-1}}{\partial {\boldsymbol {A}}}}:{\boldsymbol {T}}\right)\cdot {\boldsymbol {A}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}}
Therefore, ∂ ∂ A ( A − 1 ) : T = − A − 1 ⋅ T ⋅ A − 1 {\displaystyle {\frac {\partial }{\partial {\boldsymbol {A}}}}\left({\boldsymbol {A}}^{-1}\right):{\boldsymbol {T}}=-{\boldsymbol {A}}^{-1}\cdot {\boldsymbol {T}}\cdot {\boldsymbol {A}}^{-1}}
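As a quick sanity check of this result, the closed form can be compared against a numerical directional (Gateaux) derivative of the matrix inverse. The following sketch uses NumPy; the particular matrices, the random seed, and the step size h are arbitrary choices made for illustration and are not part of the derivation above.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # well-conditioned, invertible
T = rng.standard_normal((3, 3))                    # arbitrary second order "direction"
h = 1e-6

# central finite difference of eps -> inv(A + eps*T) at eps = 0
numeric = (np.linalg.inv(A + h * T) - np.linalg.inv(A - h * T)) / (2.0 * h)

# closed form derived above: -inv(A) . T . inv(A)
Ainv = np.linalg.inv(A)
exact = -Ainv @ T @ Ainv

assert np.allclose(numeric, exact, atol=1e-6)
```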
Another important operation related to tensor derivatives in continuum mechanics is integration by parts. The formula for integration by parts can be written as ∫ Ω F ⊗ ∇ G d Ω = ∫ Γ n ⊗ ( F ⊗ G ) d Γ − ∫ Ω G ⊗ ∇ F d Ω {\displaystyle \int _{\Omega }{\boldsymbol {F}}\otimes {\boldsymbol {\nabla }}{\boldsymbol {G}}\,d\Omega =\int _{\Gamma }\mathbf {n} \otimes ({\boldsymbol {F}}\otimes {\boldsymbol {G}})\,d\Gamma -\int _{\Omega }{\boldsymbol {G}}\otimes {\boldsymbol {\nabla }}{\boldsymbol {F}}\,d\Omega }
where F {\displaystyle {\boldsymbol {F}}} and G {\displaystyle {\boldsymbol {G}}} are differentiable tensor fields of arbitrary order, n {\displaystyle \mathbf {n} } is the unit outward normal to the domain over which the tensor fields are defined, ⊗ {\displaystyle \otimes } represents a generalized tensor product operator, and ∇ {\displaystyle {\boldsymbol {\nabla }}} is a generalized gradient operator. When F {\displaystyle {\boldsymbol {F}}} is equal to the identity tensor, we get the divergence theorem ∫ Ω ∇ G d Ω = ∫ Γ n ⊗ G d Γ . {\displaystyle \int _{\Omega }{\boldsymbol {\nabla }}{\boldsymbol {G}}\,d\Omega =\int _{\Gamma }\mathbf {n} \otimes {\boldsymbol {G}}\,d\Gamma \,.}
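The divergence theorem just stated can be checked symbolically in a low-order instance: a vector field G on the unit square, where the generalized tensor product reduces to a contraction. The SymPy sketch below uses an arbitrarily chosen smooth field; both the field and the domain are illustrative assumptions, not data from the text.

```python
import sympy as sp

x, y = sp.symbols('x y')
G = sp.Matrix([x**2 * y + sp.sin(y), x * y**2])   # arbitrary smooth vector field

# volume integral of the divergence over the unit square
div_G = sp.diff(G[0], x) + sp.diff(G[1], y)
volume = sp.integrate(div_G, (x, 0, 1), (y, 0, 1))

# boundary integral of n . G over the four edges
flux  = sp.integrate(G[0].subs(x, 1), (y, 0, 1))  # right edge,  n = (+1, 0)
flux -= sp.integrate(G[0].subs(x, 0), (y, 0, 1))  # left edge,   n = (-1, 0)
flux += sp.integrate(G[1].subs(y, 1), (x, 0, 1))  # top edge,    n = (0, +1)
flux -= sp.integrate(G[1].subs(y, 0), (x, 0, 1))  # bottom edge, n = (0, -1)

assert sp.simplify(volume - flux) == 0
```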
We can express the formula for integration by parts in Cartesian index notation as ∫ Ω F i j k . . . G l m n . . . , p d Ω = ∫ Γ n p F i j k . . . G l m n . . . d Γ − ∫ Ω G l m n . . . F i j k . . . , p d Ω . {\displaystyle \int _{\Omega }F_{ijk...}\,G_{lmn...,p}\,d\Omega =\int _{\Gamma }n_{p}\,F_{ijk...}\,G_{lmn...}\,d\Gamma -\int _{\Omega }G_{lmn...}\,F_{ijk...,p}\,d\Omega \,.}
For the special case where the tensor product operation is a contraction of one index and the gradient operation is a divergence, and both F {\displaystyle {\boldsymbol {F}}} and G {\displaystyle {\boldsymbol {G}}} are second order tensors, we have ∫ Ω F ⋅ ( ∇ ⋅ G ) d Ω = ∫ Γ n ⋅ ( G ⋅ F T ) d Γ − ∫ Ω ( ∇ F ) : G T d Ω . {\displaystyle \int _{\Omega }{\boldsymbol {F}}\cdot ({\boldsymbol {\nabla }}\cdot {\boldsymbol {G}})\,d\Omega =\int _{\Gamma }\mathbf {n} \cdot \left({\boldsymbol {G}}\cdot {\boldsymbol {F}}^{\textsf {T}}\right)\,d\Gamma -\int _{\Omega }({\boldsymbol {\nabla }}{\boldsymbol {F}}):{\boldsymbol {G}}^{\textsf {T}}\,d\Omega \,.}
In index notation, ∫ Ω F i j G p j , p d Ω = ∫ Γ n p F i j G p j d Γ − ∫ Ω G p j F i j , p d Ω . {\displaystyle \int _{\Omega }F_{ij}\,G_{pj,p}\,d\Omega =\int _{\Gamma }n_{p}\,F_{ij}\,G_{pj}\,d\Gamma -\int _{\Omega }G_{pj}\,F_{ij,p}\,d\Omega \,.} | https://en.wikipedia.org/wiki/Tensor_derivative_(continuum_mechanics) |
In mathematics and physics , a tensor field is a function assigning a tensor to each point of a region of a mathematical space (typically a Euclidean space or manifold ) or of the physical space . Tensor fields are used in differential geometry , algebraic geometry , general relativity , in the analysis of stress and strain in materials, and in numerous applications in the physical sciences . As a tensor is a generalization of a scalar (a pure number representing a value, for example speed) and a vector (a magnitude and a direction, like velocity), a tensor field is a generalization of a scalar field and a vector field that assigns, respectively, a scalar or vector to each point of space. If a tensor A is defined on a set of vector fields X(M) over a module M , we call A a tensor field on M . [ 1 ] A tensor field, in common usage, is often referred to in the shorter form "tensor". For example, the Riemann curvature tensor refers to a tensor field , as it associates a tensor to each point of a Riemannian manifold , a topological space .
Let M {\displaystyle M} be a manifold , for instance the Euclidean space R n {\displaystyle \mathbb {R} ^{n}} .
Definition. A tensor field of type ( p , q ) {\displaystyle (p,q)} is a section
T ∈ Γ ( M , V ⊗ p ⊗ ( V ∗ ) ⊗ q ) {\displaystyle T\ \in \ \Gamma (M,V^{\otimes p}\otimes (V^{*})^{\otimes q})} where V {\displaystyle V} is a vector bundle on M {\displaystyle M} , V ∗ {\displaystyle V^{*}} is its dual and ⊗ {\displaystyle \otimes } is the tensor product of vector bundles.
Equivalently, it is a collection of elements T x ∈ V x ⊗ p ⊗ ( V x ∗ ) ⊗ q {\displaystyle T_{x}\in V_{x}^{\otimes p}\otimes (V_{x}^{*})^{\otimes q}} for every point x ∈ M {\displaystyle x\in M} , such that it constitutes a smooth map T : M → V ⊗ p ⊗ ( V ∗ ) ⊗ q {\displaystyle T:M\rightarrow V^{\otimes p}\otimes (V^{*})^{\otimes q}} . The elements T x {\displaystyle T_{x}} are called tensors .
Often we take V = T M {\displaystyle V=TM} to be the tangent bundle of M {\displaystyle M} .
Intuitively, a vector field is best visualized as an "arrow" attached to each point of a region, with variable length and direction. One example of a vector field on a curved space is a weather map showing horizontal wind velocity at each point of the Earth's surface.
Now consider more complicated fields. For example, if the manifold is Riemannian, then it has a metric field g {\displaystyle g} , such that given any two vectors v , w {\displaystyle v,w} at point x {\displaystyle x} , their inner product is g x ( v , w ) {\displaystyle g_{x}(v,w)} . The field g {\displaystyle g} could be given in matrix form, but it depends on a choice of coordinates. It could instead be given as an ellipsoid of radius 1 at each point, which is coordinate-free. Applied to the Earth's surface, this is Tissot's indicatrix .
In general, we want to specify tensor fields in a coordinate-independent way: It should exist independently of latitude and longitude, or whatever particular "cartographic projection" we are using to introduce numerical coordinates.
Following Schouten (1951) and McConnell (1957) , the concept of a tensor relies on a concept of a reference frame (or coordinate system ), which may be fixed (relative to some background reference frame), but in general may be allowed to vary within some class of transformations of these coordinate systems. [ 2 ]
For example, coordinates belonging to the n -dimensional real coordinate space R n {\displaystyle \mathbb {R} ^{n}} may be subjected to arbitrary affine transformations: x i ↦ A j i x j + b i {\displaystyle x^{i}\mapsto A_{j}^{i}x^{j}+b^{i}}
(with n -dimensional indices, summation implied ). A covariant vector, or covector, is a system of functions v k {\displaystyle v_{k}} that transforms under this affine transformation by the rule v k ↦ A k i v i {\displaystyle v_{k}\mapsto A_{k}^{i}v_{i}}
The list of Cartesian coordinate basis vectors e k {\displaystyle \mathbf {e} _{k}} transforms as a covector, since under the affine transformation e k ↦ A k i e i {\displaystyle \mathbf {e} _{k}\mapsto A_{k}^{i}\mathbf {e} _{i}} . A contravariant vector is a system of functions v k {\displaystyle v^{k}} of the coordinates that, under such an affine transformation, undergoes the transformation v k ↦ ( A − 1 ) i k v i {\displaystyle v^{k}\mapsto \left(A^{-1}\right)_{i}^{k}v^{i}}
This is precisely the requirement needed to ensure that the quantity v k e k {\displaystyle v^{k}\mathbf {e} _{k}} is an invariant object that does not depend on the coordinate system chosen. More generally, the coordinates of a tensor of valence ( p , q ) have p upper indices and q lower indices, with the transformation law being T j 1 … j q i 1 … i p ↦ ( A − 1 ) k 1 i 1 ⋯ ( A − 1 ) k p i p A j 1 l 1 ⋯ A j q l q T l 1 … l q k 1 … k p {\displaystyle T_{j_{1}\dots j_{q}}^{i_{1}\dots i_{p}}\mapsto \left(A^{-1}\right)_{k_{1}}^{i_{1}}\cdots \left(A^{-1}\right)_{k_{p}}^{i_{p}}A_{j_{1}}^{l_{1}}\cdots A_{j_{q}}^{l_{q}}\,T_{l_{1}\dots l_{q}}^{k_{1}\dots k_{p}}}
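The interplay between the two rules - basis vectors transforming with A, components with its inverse - is easy to verify numerically. In the NumPy sketch below the matrix A and the components v are arbitrary illustrative choices; the check is that the assembled vector v^k e_k is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
E = np.eye(3)                                      # rows are the basis vectors e_k
v = rng.standard_normal(3)                         # contravariant components v^k
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # invertible transformation

E_new = A @ E                    # covariant rule:      e_k -> A_k^i e_i
v_new = np.linalg.inv(A).T @ v   # contravariant rule:  v^k -> (A^{-1})_i^k v^i

# the geometric object v^k e_k does not depend on the coordinate system
assert np.allclose(v @ E, v_new @ E_new)
```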
The concept of a tensor field may be obtained by specializing the allowed coordinate transformations to be smooth (or differentiable , analytic , etc.). A covector field is a function v k {\displaystyle v_{k}} of the coordinates that transforms by the Jacobian of the transition functions (in the given class). Likewise, a contravariant vector field v k {\displaystyle v^{k}} transforms by the inverse Jacobian.
A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle .
A vector bundle captures the idea of a "vector space depending continuously (or smoothly) on parameters" – the parameters being the points of a manifold M . For example, a vector space of one dimension depending on an angle could look like a Möbius strip or alternatively like a cylinder . Given a vector bundle V over M , the corresponding field concept is called a section of the bundle: for m varying over M , a choice of vector v m ∈ V m {\displaystyle v_{m}\in V_{m}}
where V m is the vector space "at" m .
Since the tensor product concept is independent of any choice of basis, taking the tensor product of two vector bundles on M is routine. Starting with the tangent bundle (the bundle of tangent spaces ) the whole apparatus explained at component-free treatment of tensors carries over in a routine way – again independently of coordinates, as mentioned in the introduction.
We therefore can give a definition of tensor field , namely as a section of some tensor bundle . (There are vector bundles that are not tensor bundles: the Möbius band for instance.) This is then guaranteed geometric content, since everything has been done in an intrinsic way. More precisely, a tensor field assigns to any given point of the manifold a tensor in the space
V ⊗ ⋯ ⊗ V ⊗ V ∗ ⊗ ⋯ ⊗ V ∗ {\displaystyle V\otimes \cdots \otimes V\otimes V^{*}\otimes \cdots \otimes V^{*}}
where V is the tangent space at that point and V ∗ is the cotangent space . See also tangent bundle and cotangent bundle .
Given two tensor bundles E → M and F → M , a linear map A : Γ( E ) → Γ( F ) from the space of sections of E to sections of F can be considered itself as a tensor section of E ∗ ⊗ F {\displaystyle \scriptstyle E^{*}\otimes F} if and only if it satisfies A ( fs ) = fA ( s ), for each section s in Γ( E ) and each smooth function f on M . Thus a tensor section is not only a linear map on the vector space of sections, but a C ∞ ( M )-linear map on the module of sections. This property is used to check, for example, that even though the Lie derivative and covariant derivative are not tensors, the torsion and curvature tensors built from them are.
The notation for tensor fields can sometimes be confusingly similar to the notation for tensor spaces. Thus, the tangent bundle TM = T ( M ) might sometimes be written as
T 0 1 ( M ) {\displaystyle T_{0}^{1}(M)}
to emphasize that the tangent bundle is the range space of the (1,0) tensor fields (i.e., vector fields) on the manifold M . This should not be confused with the very similar looking notation T 0 1 ( V ) {\displaystyle T_{0}^{1}(V)} ;
in the latter case, we just have one tensor space, whereas in the former, we have a tensor space defined for each point in the manifold M .
Curly (script) letters are sometimes used to denote the set of infinitely-differentiable tensor fields on M . Thus,
T n m ( M ) {\displaystyle {\mathcal {T}}_{n}^{m}(M)}
are the sections of the ( m , n ) tensor bundle on M that are infinitely-differentiable. A tensor field is an element of this set.
There is another more abstract (but often useful) way of characterizing tensor fields on a manifold M , which makes tensor fields into honest tensors (i.e. single multilinear mappings), though of a different type (although this is not usually why one often says "tensor" when one really means "tensor field"). First, we may consider the set of all smooth ( C ∞ ) vector fields on M , X ( M ) := T 0 1 ( M ) {\displaystyle {\mathfrak {X}}(M):={\mathcal {T}}_{0}^{1}(M)} (see the section on notation above) as a single space – a module over the ring of smooth functions, C ∞ ( M ), by pointwise scalar multiplication. The notions of multilinearity and tensor products extend easily to the case of modules over any commutative ring .
As a motivating example, consider the space Ω 1 ( M ) = T 1 0 ( M ) {\displaystyle \Omega ^{1}(M)={\mathcal {T}}_{1}^{0}(M)} of smooth covector fields ( 1-forms ), also a module over the smooth functions. These act on smooth vector fields to yield smooth functions by pointwise evaluation, namely, given a covector field ω and a vector field X , we define ( ω ~ ( X ) ) ( p ) = ω ( p ) ( X ( p ) ) {\displaystyle ({\tilde {\omega }}(X))(p)=\omega (p)(X(p))}
Because of the pointwise nature of everything involved, the action of ω ~ {\displaystyle {\tilde {\omega }}} on X is a C ∞ ( M )-linear map, that is,
( ω ~ ( f X ) ) ( p ) = f ( p ) ( ω ~ ( X ) ) ( p ) {\displaystyle ({\tilde {\omega }}(fX))(p)=f(p)\,({\tilde {\omega }}(X))(p)}
for any p in M and smooth function f . Thus we can regard covector fields not just as sections of the cotangent bundle, but also as linear mappings of vector fields into functions. By the double-dual construction, vector fields can similarly be expressed as mappings of covector fields into functions (namely, we could start "natively" with covector fields and work up from there).
In a complete parallel to the construction of ordinary single tensors (not tensor fields!) on M as multilinear maps on vectors and covectors, we can regard general ( k , l ) tensor fields on M as C ∞ ( M )-multilinear maps defined on k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C ∞ ( M ).
Now, given any arbitrary mapping T from a product of k copies of X ( M ) {\displaystyle {\mathfrak {X}}(M)} and l copies of Ω 1 ( M ) {\displaystyle \Omega ^{1}(M)} into C ∞ ( M ), it turns out that it arises from a tensor field on M if and only if it is multilinear over C ∞ ( M ). Namely, the C ∞ ( M )-module of tensor fields of type ( k , l ) {\displaystyle (k,l)} over M is canonically isomorphic to the C ∞ ( M )-module of C ∞ ( M )- multilinear forms X ( M ) × ⋯ × X ( M ) × Ω 1 ( M ) × ⋯ × Ω 1 ( M ) ⟶ C ∞ ( M ) . {\displaystyle {\mathfrak {X}}(M)\times \cdots \times {\mathfrak {X}}(M)\times \Omega ^{1}(M)\times \cdots \times \Omega ^{1}(M)\longrightarrow C^{\infty }(M).}
This kind of multilinearity implicitly expresses the fact that we're really dealing with a pointwise-defined object, i.e. a tensor field, as opposed to a function which, even when evaluated at a single point, depends on all the values of vector fields and 1-forms simultaneously.
A frequent example application of this general rule is showing that the Levi-Civita connection , which is a mapping of smooth vector fields ( X , Y ) ↦ ∇ X Y {\displaystyle (X,Y)\mapsto \nabla _{X}Y} taking a pair of vector fields to a vector field, does not define a tensor field on M . This is because it is only R {\displaystyle \mathbb {R} } -linear in Y (in place of full C ∞ ( M )-linearity, it satisfies the Leibniz rule, ∇ X ( f Y ) = ( X f ) Y + f ∇ X Y {\displaystyle \nabla _{X}(fY)=(Xf)Y+f\nabla _{X}Y} ). Nevertheless, it must be stressed that even though it is not a tensor field, it still qualifies as a geometric object with a component-free interpretation.
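This failure of C ∞ ( M )-linearity can be made concrete in flat coordinates on the plane, where the Levi-Civita connection reduces to the componentwise directional derivative. In the SymPy sketch below, the fields X, Y and the function f are arbitrary illustrative choices; the first assertion confirms C ∞ -linearity in the X slot, the second exhibits the extra ( X f ) Y Leibniz term in the Y slot.

```python
import sympy as sp

x, y = sp.symbols('x y')

def nabla(X, Y):
    # flat (Euclidean) connection on R^2: (nabla_X Y)^j = X^i d_i Y^j
    return sp.Matrix([X[0] * sp.diff(Y[j], x) + X[1] * sp.diff(Y[j], y)
                      for j in range(2)])

X = sp.Matrix([y, x**2])            # arbitrary smooth vector fields
Y = sp.Matrix([x * y, sp.exp(y)])
f = sp.sin(x) * y                   # arbitrary smooth function
Xf = X[0] * sp.diff(f, x) + X[1] * sp.diff(f, y)   # X acting on f

# C-infinity linear in the X slot: nabla_{fX} Y = f nabla_X Y
assert sp.simplify(nabla(f * X, Y) - f * nabla(X, Y)) == sp.zeros(2, 1)

# only R-linear in the Y slot: the Leibniz rule produces an extra (Xf) Y term
assert sp.simplify(nabla(X, f * Y) - Xf * Y - f * nabla(X, Y)) == sp.zeros(2, 1)
```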
The curvature tensor is discussed in differential geometry and the stress–energy tensor is important in physics, and these two tensors are related by Einstein's theory of general relativity .
In electromagnetism , the electric and magnetic fields are combined into an electromagnetic tensor field .
Differential forms , used in defining integration on manifolds, are a type of tensor field.
In theoretical physics and other fields, differential equations posed in terms of tensor fields provide a very general way to express relationships that are both geometric in nature (guaranteed by the tensor nature) and conventionally linked to differential calculus . Even to formulate such equations requires a fresh notion, the covariant derivative . This handles the formulation of variation of a tensor field along a vector field . The original absolute differential calculus notion, which was later called tensor calculus , led to the isolation of the geometric concept of connection .
An extension of the tensor field idea incorporates an extra line bundle L on M . If W is the tensor product bundle of V with L , then W is a bundle of vector spaces of just the same dimension as V . This allows one to define the concept of tensor density , a 'twisted' type of tensor field. A tensor density is the special case where L is the bundle of densities on a manifold , namely the determinant bundle of the cotangent bundle . (To be strictly accurate, one should also apply the absolute value to the transition functions – this makes little difference for an orientable manifold .) For a more traditional explanation see the tensor density article.
One feature of the bundle of densities (again assuming orientability) L is that L s is well-defined for real number values of s ; this can be read from the transition functions, which take strictly positive real values. This means for example that we can take a half-density , the case where s = 1 / 2 . In general we can take sections of W , the tensor product of V with L s , and consider tensor density fields with weight s .
Half-densities are applied in areas such as defining integral operators on manifolds, and geometric quantization .
When M is a Euclidean space and all the fields are taken to be invariant by translations by the vectors of M , we get back to a situation where a tensor field is synonymous with a tensor 'sitting at the origin'. This does no great harm, and is often used in applications. As applied to tensor densities, it does make a difference. The bundle of densities cannot seriously be defined 'at a point'; and therefore a limitation of the contemporary mathematical treatment of tensors is that tensor densities are defined in a roundabout fashion.
As an advanced explanation of the tensor concept, one can interpret the chain rule in the multivariable case, as applied to coordinate changes, also as the requirement for self-consistent concepts of tensor giving rise to tensor fields.
Abstractly, we can identify the chain rule as a 1- cocycle . It gives the consistency required to define the tangent bundle in an intrinsic way. The other vector bundles of tensors have comparable cocycles, which come from applying functorial properties of tensor constructions to the chain rule itself; this is why they also are intrinsic (read, 'natural') concepts.
What is usually spoken of as the 'classical' approach to tensors tries to read this backwards – and is therefore a heuristic, post hoc approach rather than truly a foundational one. Implicit in defining tensors by how they transform under a coordinate change is the kind of self-consistency the cocycle expresses. The construction of tensor densities is a 'twisting' at the cocycle level. Geometers have not been in any doubt about the geometric nature of tensor quantities ; this kind of descent argument justifies abstractly the whole theory.
The concept of a tensor field can be generalized by considering objects that transform differently. An object that transforms as an ordinary tensor field under coordinate transformations, except that it is also multiplied by the determinant of the Jacobian of the inverse coordinate transformation to the w th power, is called a tensor density with weight w . [ 4 ] Invariantly, in the language of multilinear algebra, one can think of tensor densities as multilinear maps taking their values in a density bundle such as the (1-dimensional) space of n -forms (where n is the dimension of the space), as opposed to taking their values in just R . Higher "weights" then just correspond to taking additional tensor products with this space in the range.
A special case is that of the scalar densities. Scalar 1-densities are especially important because it makes sense to define their integral over a manifold. They appear, for instance, in the Einstein–Hilbert action in general relativity. The most common example of a scalar 1-density is the volume element , which in the presence of a metric tensor g is the square root of its determinant in coordinates, denoted det g {\displaystyle {\sqrt {\det g}}} . The metric tensor is a covariant tensor of order 2, and so its determinant scales by the square of the coordinate transition: det g ¯ = ( det ∂ x ∂ x ¯ ) 2 det g , {\displaystyle \det {\bar {g}}=\left(\det {\frac {\partial x}{\partial {\bar {x}}}}\right)^{2}\det g\,,}
which is the transformation law for a scalar density of weight +2.
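This scaling can be verified numerically: transforming the components of a covariant rank-2 tensor with a Jacobian J multiplies the determinant by (det J)². In the NumPy sketch below the positive-definite g and the matrix J standing in for ∂x/∂x̄ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(2)
g = rng.standard_normal((3, 3))
g = g @ g.T + 3.0 * np.eye(3)                      # symmetric positive definite metric
J = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)  # invertible Jacobian J^i_k = dx^i/dxbar^k

g_bar = J.T @ g @ J   # covariant rank-2 rule: gbar_kl = J^i_k J^j_l g_ij

# the determinant transforms as a scalar density of weight +2
assert np.isclose(np.linalg.det(g_bar), np.linalg.det(J) ** 2 * np.linalg.det(g))
```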
More generally, any tensor density is the product of an ordinary tensor with a scalar density of the appropriate weight. In the language of vector bundles , the determinant bundle of the tangent bundle is a line bundle that can be used to 'twist' other bundles w times. While locally the more general transformation law can indeed be used to recognise these tensors, there is a global question that arises, reflecting that in the transformation law one may write either the Jacobian determinant, or its absolute value. Non-integral powers of the (positive) transition functions of the bundle of densities make sense, so that the weight of a density, in that sense, is not restricted to integer values. Restricting to changes of coordinates with positive Jacobian determinant is possible on orientable manifolds , because there is a consistent global way to eliminate the minus signs; but otherwise the line bundle of densities and the line bundle of n -forms are distinct. For more on the intrinsic meaning, see Density on a manifold . | https://en.wikipedia.org/wiki/Tensor_field |
Tensor networks or tensor network states are a class of variational wave functions used in the study of many-body quantum systems [ 1 ] and fluids. [ 2 ] [ 3 ] Tensor networks extend one-dimensional matrix product states to higher dimensions while preserving some of their useful mathematical properties. [ 4 ]
The wave function is encoded as a tensor contraction of a network of individual tensors . [ 5 ] The structure of the individual tensors can impose global symmetries on the wave function (such as antisymmetry under exchange of fermions ) or restrict the wave function to specific quantum numbers , like total charge , angular momentum , or spin . It is also possible to derive strict bounds on quantities like entanglement and correlation length using the mathematical structure of the tensor network. [ 6 ] This has made tensor networks useful in theoretical studies of quantum information in many-body systems . They have also proved useful in variational studies of ground states , excited states , and dynamics of strongly correlated many-body systems . [ 7 ]
In general, a tensor network diagram (Penrose diagram) can be viewed as a graph where nodes (or vertices) represent individual tensors, while edges represent summation over an index. Free indices are depicted as edges (or legs ) attached to a single vertex only. [ 8 ] Sometimes, there is also additional meaning to a node's shape. For instance, one can use trapezoids for unitary matrices or tensors with similar behaviour. This way, flipped trapezoids would be interpreted as complex conjugates to them.
Foundational research on tensor networks began in 1971 with a paper by Roger Penrose . [ 9 ] In “Applications of negative dimensional tensors” Penrose developed tensor diagram notation , describing how the diagrammatic language of tensor networks could be used in applications in physics. [ 10 ]
In 1992, Steven R. White developed the Density matrix renormalization group (DMRG) for quantum lattice systems. [ 11 ] [ 4 ] The DMRG was the first successful tensor network and associated algorithm. [ 12 ]
In 2002, Guifré Vidal and Reinhard Werner attempted to quantify entanglement, laying the groundwork for quantum resource theories. [ 13 ] [ 14 ] This was also the first description of the use of tensor networks as mathematical tools for describing quantum systems. [ 10 ]
In 2004, Frank Verstraete and Ignacio Cirac developed the theory of matrix product states, projected entangled pair states, and variational renormalization group methods for quantum spin systems. [ 15 ] [ 4 ]
In 2006, Vidal developed the multi-scale entanglement renormalization ansatz (MERA). [ 16 ] In 2007 he developed entanglement renormalization for quantum lattice systems. [ 17 ]
In 2010, Ulrich Schollwöck developed the density-matrix renormalization group for the simulation of one-dimensional strongly correlated quantum lattice systems. [ 18 ]
In 2014, Román Orús introduced tensor networks for complex quantum systems and machine learning, as well as tensor network theories of symmetries, fermions, entanglement and holography. [ 1 ] [ 19 ]
Tensor networks have been adapted for supervised learning , [ 20 ] taking advantage of similar mathematical structure in variational studies in quantum mechanics and large-scale machine learning . This crossover has spurred collaboration between researchers in artificial intelligence and quantum information science . In June 2019, Google , the Perimeter Institute for Theoretical Physics , and X (formerly Google X) released TensorNetwork, [ 21 ] an open-source library for efficient tensor calculations. [ 22 ]
The main interest in tensor networks and their study from the perspective of machine learning is to reduce the number of trainable parameters (in a layer) by approximating a high-order tensor with a network of lower-order ones. Using the so-called tensor train technique (TT), [ 23 ] one can reduce an N-order tensor (containing exponentially many trainable parameters) to a chain of N tensors of order 2 or 3, which gives us a polynomial number of parameters. | https://en.wikipedia.org/wiki/Tensor_network |
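A minimal sketch of such a tensor-train factorization via successive SVDs is given below; it assumes NumPy and a dense tensor small enough to hold in memory, and it is an illustrative implementation rather than the algorithm of any particular library. With no truncation the factorization is exact; passing a finite max_rank yields the compressed, polynomial-parameter chain described above.

```python
import numpy as np

def tt_decompose(tensor, max_rank=np.inf):
    """TT-SVD sketch: split an order-N tensor into a chain of order-3 cores."""
    dims, cores, r = tensor.shape, [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(mat, full_matrices=False)
        rank = int(min(max_rank, len(s)))
        cores.append(u[:, :rank].reshape(r, dims[k], rank))
        mat, r = np.diag(s[:rank]) @ vt[:rank], rank
        if k < len(dims) - 2:
            mat = mat.reshape(r * dims[k + 1], -1)
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_contract(cores):
    """Contract the chain back into a dense tensor (for verification only)."""
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])

T = np.random.default_rng(3).standard_normal((4, 4, 4, 4))
cores = tt_decompose(T)                  # no truncation: exact factorization
assert np.allclose(tt_contract(cores), T)
print([c.shape for c in cores])          # [(1, 4, 4), (4, 4, 16), (16, 4, 4), (4, 4, 1)]
```

For a generic tensor the intermediate ranks grow, but for the low-rank tensors targeted in machine-learning applications, truncated cores of mode size d and rank r carry on the order of N·d·r² parameters instead of d^N.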
In pure and applied mathematics , quantum mechanics and computer graphics , a tensor operator generalizes the notion of operators which are scalars and vectors . A special class of these are spherical tensor operators which apply the notion of the spherical basis and spherical harmonics . The spherical basis closely relates to the description of angular momentum in quantum mechanics and spherical harmonic functions. The coordinate-free generalization of a tensor operator is known as a representation operator . [ 1 ]
In quantum mechanics, physical observables that are scalars, vectors, and tensors, must be represented by scalar, vector, and tensor operators, respectively. Whether something is a scalar, vector, or tensor depends on how it is viewed by two observers whose coordinate frames are related to each other by a rotation. Alternatively, one may ask how, for a single observer, a physical quantity transforms if the state of the system is rotated. Consider, for example, a system consisting of a molecule of mass M {\displaystyle M} , traveling with a definite center of mass momentum, p z ^ {\displaystyle p{\mathbf {\hat {z}} }} , in the z {\displaystyle z} direction. If we rotate the system by 90 ∘ {\displaystyle 90^{\circ }} about the y {\displaystyle y} axis, the momentum will change to p x ^ {\displaystyle p{\mathbf {\hat {x}} }} , which is in the x {\displaystyle x} direction. The center-of-mass kinetic energy of the molecule will, however, be unchanged at p 2 / 2 M {\displaystyle p^{2}/2M} . The kinetic energy is a scalar and the momentum is a vector, and these two quantities must be represented by a scalar and a vector operator, respectively. By the latter in particular, we mean an operator whose expected values in the initial and the rotated states are p z ^ {\displaystyle p{\mathbf {\hat {z}} }} and p x ^ {\displaystyle p{\mathbf {\hat {x}} }} . The kinetic energy on the other hand must be represented by a scalar operator, whose expected value must be the same in the initial and the rotated states.
In the same way, tensor quantities must be represented by tensor operators. An example of a tensor quantity (of rank two) is the electrical quadrupole moment of the above molecule. Likewise, the octupole and hexadecapole moments would be tensors of rank three and four, respectively.
Other examples of scalar operators are the total energy operator (more commonly called the Hamiltonian ), the potential energy, and the dipole-dipole interaction energy of two atoms. Examples of vector operators are the momentum, the position, the orbital angular momentum, L {\displaystyle {\mathbf {L} }} , and the spin angular momentum, S {\displaystyle {\mathbf {S} }} . (Fine print: Angular momentum is a vector as far as rotations are concerned, but unlike position or momentum it does not change sign under space inversion, and when one wishes to provide this information, it is said to be a pseudovector.)
Scalar, vector and tensor operators can also be formed by products of operators. For example, the scalar product L ⋅ S {\displaystyle {\mathbf {L} }\cdot {\mathbf {S} }} of the two vector operators, L {\displaystyle {\mathbf {L} }} and S {\displaystyle {\mathbf {S} }} , is a scalar operator, which figures prominently in discussions of the spin–orbit interaction . Similarly, the quadrupole moment tensor of our example molecule has the nine components
Q i j = ∑ α q α ( 3 r α , i r α , j − r α 2 δ i j ) . {\displaystyle Q_{ij}=\sum _{\alpha }q_{\alpha }\left(3r_{\alpha ,i}r_{\alpha ,j}-r_{\alpha }^{2}\delta _{ij}\right).} Here, the indices i {\displaystyle i} and j {\displaystyle j} can independently take on the values 1, 2, and 3 (or x {\displaystyle x} , y {\displaystyle y} , and z {\displaystyle z} ) corresponding to the three Cartesian axes, the index α {\displaystyle \alpha } runs over all particles (electrons and nuclei) in the molecule, q α {\displaystyle q_{\alpha }} is the charge on particle α {\displaystyle \alpha } , and r α , i {\displaystyle r_{\alpha ,i}} is the i {\displaystyle i} -th component of the position of this particle. Each term in the sum is a tensor operator. In particular, the nine products r α , i r α , j {\displaystyle r_{\alpha ,i}r_{\alpha ,j}} together form a second rank tensor, formed by taking the outer product of the vector operator r α {\displaystyle {\mathbf {r} }_{\alpha }} with itself.
The rotation operator about the unit vector n (defining the axis of rotation) through angle θ is
U [ R ( θ , n ^ ) ] = exp ( − i θ ℏ n ^ ⋅ J ) {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right)}
where J = ( J x , J y , J z ) are the rotation generators (also the angular momentum matrices):
J x = ℏ 2 ( 0 1 0 1 0 1 0 1 0 ) J y = ℏ 2 ( 0 i 0 − i 0 i 0 − i 0 ) J z = ℏ ( − 1 0 0 0 0 0 0 0 1 ) {\displaystyle J_{x}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&1&0\\1&0&1\\0&1&0\end{pmatrix}}\,\quad J_{y}={\frac {\hbar }{\sqrt {2}}}{\begin{pmatrix}0&i&0\\-i&0&i\\0&-i&0\end{pmatrix}}\,\quad J_{z}=\hbar {\begin{pmatrix}-1&0&0\\0&0&0\\0&0&1\end{pmatrix}}}
and let R ^ = R ^ ( θ , n ^ ) {\displaystyle {\widehat {R}}={\widehat {R}}(\theta ,{\hat {\mathbf {n} }})} be a rotation matrix . According to the Rodrigues' rotation formula , the rotation operator then amounts to U [ R ( θ , n ^ ) ] = 1 1 − i sin θ ℏ n ^ ⋅ J − 1 − cos θ ℏ 2 ( n ^ ⋅ J ) 2 . {\displaystyle U[R(\theta ,{\hat {\mathbf {n} }})]=1\!\!1-{\frac {i\sin \theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} -{\frac {1-\cos \theta }{\hbar ^{2}}}({\hat {\mathbf {n} }}\cdot \mathbf {J} )^{2}.}
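For the spin-1 matrices above (note that the given J z corresponds to the basis ordering | 1,−1 ⟩, | 1,0 ⟩, | 1,+1 ⟩), the Rodrigues form can be checked against the matrix exponential directly. The sketch below uses NumPy and SciPy, with ħ set to 1 and an arbitrarily chosen axis and angle; these choices are for illustration only.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / np.sqrt(2) * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]])
Jz = hbar * np.diag([-1.0, 0.0, 1.0]).astype(complex)

theta = 0.7                            # arbitrary angle
n = np.array([1.0, 2.0, 2.0]) / 3.0    # arbitrary unit axis
K = (n[0] * Jx + n[1] * Jy + n[2] * Jz) / hbar   # n.J / hbar, eigenvalues -1, 0, +1

U_exact = expm(-1j * theta * K)
U_rodrigues = np.eye(3) - 1j * np.sin(theta) * K - (1.0 - np.cos(theta)) * (K @ K)
assert np.allclose(U_exact, U_rodrigues)
```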
An operator Ω ^ {\displaystyle {\widehat {\Omega }}} is invariant under a unitary transformation U if Ω ^ = U † Ω ^ U ; {\displaystyle {\widehat {\Omega }}={U}^{\dagger }{\widehat {\Omega }}U;} in this case for the rotation U ^ ( R ) {\displaystyle {\widehat {U}}(R)} , Ω ^ = U ( R ) † Ω ^ U ( R ) = exp ( i θ ℏ n ^ ⋅ J ) Ω ^ exp ( − i θ ℏ n ^ ⋅ J ) . {\displaystyle {\widehat {\Omega }}={U(R)}^{\dagger }{\widehat {\Omega }}U(R)=\exp \left({\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right){\widehat {\Omega }}\exp \left(-{\frac {i\theta }{\hbar }}{\hat {\mathbf {n} }}\cdot \mathbf {J} \right).}
The orthonormal basis set for total angular momentum is | j , m ⟩ {\displaystyle |j,m\rangle } , where j is the total angular momentum quantum number and m is the magnetic angular momentum quantum number, which takes values − j , − j + 1, ..., j − 1, j . A general state within the j subspace
| ψ ⟩ = ∑ m c j m | j , m ⟩ {\displaystyle |\psi \rangle =\sum _{m}c_{jm}|j,m\rangle }
rotates to a new state by:
| ψ ¯ ⟩ = U ( R ) | ψ ⟩ = ∑ m c j m U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =U(R)|\psi \rangle =\sum _{m}c_{jm}U(R)|j,m\rangle }
Using the completeness condition :
I = ∑ m ′ | j , m ′ ⟩ ⟨ j , m ′ | {\displaystyle I=\sum _{m'}|j,m'\rangle \langle j,m'|}
we have
| ψ ¯ ⟩ = I U ( R ) | ψ ⟩ = ∑ m m ′ c j m | j , m ′ ⟩ ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle |{\bar {\psi }}\rangle =IU(R)|\psi \rangle =\sum _{mm'}c_{jm}|j,m'\rangle \langle j,m'|U(R)|j,m\rangle }
Introducing the Wigner D matrix elements:
D ( R ) m ′ m ( j ) = ⟨ j , m ′ | U ( R ) | j , m ⟩ {\displaystyle {D(R)}_{m'm}^{(j)}=\langle j,m'|U(R)|j,m\rangle }
gives the matrix multiplication:
| ψ ¯ ⟩ = ∑ m m ′ c j m D m ′ m ( j ) | j , m ′ ⟩ ⇒ | ψ ¯ ⟩ = D ( j ) | ψ ⟩ {\displaystyle |{\bar {\psi }}\rangle =\sum _{mm'}c_{jm}D_{m'm}^{(j)}|j,m'\rangle \quad \Rightarrow \quad |{\bar {\psi }}\rangle =D^{(j)}|\psi \rangle }
For one basis ket:
| j , m ¯ ⟩ = ∑ m ′ D ( R ) m ′ m ( j ) | j , m ′ ⟩ {\displaystyle |{\overline {j,m}}\rangle =\sum _{m'}{D(R)}_{m'm}^{(j)}|j,m'\rangle }
For the case of orbital angular momentum, the eigenstates | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } of the orbital angular momentum operator L and solutions of Laplace's equation on a 3d sphere are spherical harmonics :
Y ℓ m ( θ , ϕ ) = ⟨ θ , ϕ | ℓ , m ⟩ = ( 2 ℓ + 1 ) 4 π ( ℓ − m ) ! ( ℓ + m ) ! P ℓ m ( cos θ ) e i m ϕ {\displaystyle Y_{\ell }^{m}(\theta ,\phi )=\langle \theta ,\phi |\ell ,m\rangle ={\sqrt {{(2\ell +1) \over 4\pi }{(\ell -m)! \over (\ell +m)!}}}\,P_{\ell }^{m}(\cos {\theta })\,e^{im\phi }}
where P ℓ m is an associated Legendre polynomial , ℓ is the orbital angular momentum quantum number, and m is the orbital magnetic quantum number which takes the values −ℓ, −ℓ + 1, ..., ℓ − 1, ℓ. The formalism of spherical harmonics has wide applications in applied mathematics, and is closely related to the formalism of spherical tensors, as shown below.
Spherical harmonics are functions of the polar and azimuthal angles, θ and ϕ respectively, which can be conveniently collected into a unit vector n ( θ , ϕ ) pointing in the direction of those angles; in the Cartesian basis it is:
n ^ ( θ , ϕ ) = cos ϕ sin θ e x + sin ϕ sin θ e y + cos θ e z {\displaystyle {\hat {\mathbf {n} }}(\theta ,\phi )=\cos \phi \sin \theta \mathbf {e} _{x}+\sin \phi \sin \theta \mathbf {e} _{y}+\cos \theta \mathbf {e} _{z}}
So a spherical harmonic can also be written Y ℓ m = ⟨ n | ℓ m ⟩ {\displaystyle Y_{\ell }^{m}=\langle \mathbf {n} |\ell m\rangle } . Spherical harmonic states | ℓ , m ⟩ {\displaystyle |\ell ,m\rangle } rotate according to the inverse rotation matrix U ( R − 1 ) {\displaystyle U(R^{-1})} , while | n ^ ⟩ {\displaystyle |{\hat {\mathbf {n} }}\rangle } rotates by the initial rotation matrix U ( R ) {\displaystyle U(R)} .
| ℓ , m ¯ ⟩ = ∑ m ′ D m ′ m ( ℓ ) [ U ( R − 1 ) ] | ℓ , m ′ ⟩ , | n ^ ¯ ⟩ = U ( R ) | n ^ ⟩ {\displaystyle |{\overline {\ell ,m}}\rangle =\sum _{m'}D_{m'm}^{(\ell )}[U(R^{-1})]|\ell ,m'\rangle \,,\quad |{\overline {\hat {\mathbf {n} }}}\rangle =U(R)|{\hat {\mathbf {n} }}\rangle }
We define the rotation of an operator by requiring that the expectation value of the original operator A ^ {\displaystyle {\widehat {\mathbf {A} }}} with respect to the initial state be equal to the expectation value of the rotated operator with respect to the rotated state,
⟨ ψ ′ | A ′ ^ | ψ ′ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi '|{\widehat {A'}}|\psi '\rangle =\langle \psi |{\widehat {A}}|\psi \rangle }
Now as,
| ψ ⟩ → | ψ ′ ⟩ = U ( R ) | ψ ⟩ , ⟨ ψ | → ⟨ ψ ′ | = ⟨ ψ | U † ( R ) {\displaystyle |\psi \rangle ~\rightarrow ~|\psi '\rangle =U(R)|\psi \rangle \,,\quad \langle \psi |~\rightarrow ~\langle \psi '|=\langle \psi |U^{\dagger }(R)}
we have,
⟨ ψ | U † ( R ) A ^ ′ U ( R ) | ψ ⟩ = ⟨ ψ | A ^ | ψ ⟩ {\displaystyle \langle \psi |U^{\dagger }(R){\widehat {A}}'U(R)|\psi \rangle =\langle \psi |{\widehat {A}}|\psi \rangle }
since | ψ ⟩ {\displaystyle |\psi \rangle } is arbitrary,
U † ( R ) A ^ ′ U ( R ) = A ^ {\displaystyle U^{\dagger }(R){\widehat {A}}'U(R)={\widehat {A}}}
A scalar operator is invariant under rotations: [ 2 ]
U ( R ) † S ^ U ( R ) = S ^ {\displaystyle U(R)^{\dagger }{\widehat {S}}U(R)={\widehat {S}}}
This is equivalent to saying a scalar operator commutes with the rotation generators:
[ S ^ , J ^ ] = 0 {\displaystyle \left[{\widehat {S}},{\widehat {\mathbf {J} }}\right]=0}
Examples of scalar operators include the energy (Hamiltonian) operator, the potential energy operator of a central potential, and scalar products of two vector operators such as L ⋅ S {\displaystyle {\mathbf {L} }\cdot {\mathbf {S} }} .
Vector operators (as well as pseudovector operators) are a set of 3 operators that can be rotated according to: [ 2 ]
U ( R ) † V ^ i U ( R ) = ∑ j R i j V ^ j {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{i}U(R)=\sum _{j}R_{ij}{\widehat {V}}_{j}} Any observable vector quantity of a quantum mechanical system should be independent of the choice of frame of reference. The transformation of the expectation-value vector, which applies for any wavefunction, ensures the above equality. In Dirac notation: ⟨ ψ ¯ | V ^ a | ψ ¯ ⟩ = ⟨ ψ | U ( R ) † V ^ a U ( R ) | ψ ⟩ = ∑ b R a b ⟨ ψ | V ^ b | ψ ⟩ {\displaystyle \langle {\bar {\psi }}|{\widehat {V}}_{a}|{\bar {\psi }}\rangle =\langle \psi |{U(R)}^{\dagger }{\widehat {V}}_{a}U(R)|\psi \rangle =\sum _{b}R_{ab}\langle \psi |{\widehat {V}}_{b}|\psi \rangle } where the RHS is due to the rotation transformation acting on the vector formed by expectation values. Since | Ψ ⟩ is any quantum state, the same result follows: U ( R ) † V ^ a U ( R ) = ∑ b R a b V ^ b {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{a}U(R)=\sum _{b}R_{ab}{\widehat {V}}_{b}} Note that here, the term "vector" is used in two different ways: kets such as | ψ ⟩ are elements of abstract Hilbert spaces, while the vector operator is defined as a quantity whose components transform in a certain way under rotations.
From the above relation for infinitesimal rotations and the Baker Hausdorff lemma , by equating coefficients of order δ θ {\displaystyle \delta \theta } , one can derive the commutation relation with the rotation generator: [ 2 ]
[ V ^ a , J ^ b ] = ∑ c i ℏ ε a b c V ^ c {\displaystyle {\left[{\widehat {V}}_{a},{\widehat {J}}_{b}\right]=\sum _{c}i\hbar \varepsilon _{abc}{\widehat {V}}_{c}}} where ε ijk is the Levi-Civita symbol , which all vector operators must satisfy, by construction. The above commutator rule can also be used as an alternative definition for vector operators which can be shown by using the Baker Hausdorff lemma . As the symbol ε ijk is a pseudotensor , pseudovector operators are invariant up to a sign: +1 for proper rotations and −1 for improper rotations .
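Since the angular momentum components are themselves a (pseudo)vector operator, they provide a self-contained numerical test of this commutation relation. The sketch below redefines the spin-1 matrices from above so that it runs on its own (ħ = 1; the polynomial formula for the Levi-Civita symbol is a standard identity used only for brevity):

```python
import numpy as np

hbar = 1.0
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / np.sqrt(2) * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]])
Jz = hbar * np.diag([-1.0, 0.0, 1.0]).astype(complex)
J = [Jx, Jy, Jz]

def eps(a, b, c):
    # Levi-Civita symbol on indices 0, 1, 2
    return (a - b) * (b - c) * (c - a) / 2

for a in range(3):
    for b in range(3):
        commutator = J[a] @ J[b] - J[b] @ J[a]
        expected = 1j * hbar * sum(eps(a, b, c) * J[c] for c in range(3))
        assert np.allclose(commutator, expected)
```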
Since a set of operators can be shown to form a vector operator by its commutation relations with the angular momentum components (which are the generators of rotation), examples of vector operators include the position operator and the momentum operator,
and pseudovector operators include the orbital angular momentum L {\displaystyle {\mathbf {L} }} , the spin angular momentum S {\displaystyle {\mathbf {S} }} , and the total angular momentum J {\displaystyle {\mathbf {J} }} .
If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two vector operators, the dot product between the two vector operators can be defined as:
V → ⋅ W → = ∑ i = 1 3 V i ^ W i ^ {\displaystyle {\vec {V}}\cdot {\vec {W}}=\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}}
Under rotation of coordinates, the newly defined operator transforms as: U ( R ) † ( V → ⋅ W → ) U ( R ) = U ( R ) † ( ∑ i = 1 3 V i ^ W i ^ ) U ( R ) = ∑ i = 1 3 ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ i U ( R ) ) = ∑ i = 1 3 ( ∑ j = 1 3 R i j V ^ j ⋅ ∑ k = 1 3 R i k W ^ k ) {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)={U(R)}^{\dagger }\left(\sum _{i=1}^{3}{\hat {V_{i}}}{\hat {W_{i}}}\right)U(R)=\sum _{i=1}^{3}({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{i}U(R))=\sum _{i=1}^{3}\left(\sum _{j=1}^{3}R_{ij}{\widehat {V}}_{j}\cdot \sum _{k=1}^{3}R_{ik}{\widehat {W}}_{k}\right)} Rearranging terms and using transpose of rotation matrix as its inverse property: U ( R ) † ( V → ⋅ W → ) U ( R ) = ∑ k = 1 3 ∑ j = 1 3 ( ∑ i = 1 3 R j i T R i k ) V ^ j W ^ k = ∑ k = 1 3 ∑ j = 1 3 δ j , k V ^ j W ^ k = ∑ i = 1 3 V ^ i W ^ i {\displaystyle {U(R)}^{\dagger }({\vec {V}}\cdot {\vec {W}})U(R)=\sum _{k=1}^{3}\sum _{j=1}^{3}\left(\sum _{i=1}^{3}R_{ji}^{T}R_{ik}\right){\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{k=1}^{3}\sum _{j=1}^{3}\delta _{j,k}{\widehat {V}}_{j}{\widehat {W}}_{k}=\sum _{i=1}^{3}{\widehat {V}}_{i}{\widehat {W}}_{i}} Where the RHS is the V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} operator originally defined. Since the dot product defined is invariant under rotation transformation, it is said to be a scalar operator.
A vector operator in the spherical basis is V = ( V +1 , V 0 , V −1 ) where the components are: [ 2 ] V + 1 = − 1 2 ( V x + i V y ) V − 1 = 1 2 ( V x − i V y ) , V 0 = V z , {\displaystyle V_{+1}=-{\frac {1}{\sqrt {2}}}(V_{x}+iV_{y})\,\quad V_{-1}={\frac {1}{\sqrt {2}}}(V_{x}-iV_{y})\,,\quad V_{0}=V_{z}\,,} using J ± = J x ± i J y , {\textstyle J_{\pm }=J_{x}\pm iJ_{y}\,,} the various commutators with the rotation generators and ladder operators are: [ J z , V + 1 ] = + ℏ V + 1 [ J z , V 0 ] = 0 V 0 [ J z , V − 1 ] = − ℏ V − 1 [ J + , V + 1 ] = 0 [ J + , V 0 ] = 2 ℏ V + 1 [ J + , V − 1 ] = 2 ℏ V 0 [ J − , V + 1 ] = 2 ℏ V 0 [ J − , V 0 ] = 2 ℏ V − 1 [ J − , V − 1 ] = 0 {\displaystyle {\begin{aligned}\left[J_{z},V_{+1}\right]&=+\hbar V_{+1}\\[1ex]\left[J_{z},V_{0}\right]&=0V_{0}\\[1ex]\left[J_{z},V_{-1}\right]&=-\hbar V_{-1}\\[2ex]\left[J_{+},V_{+1}\right]&=0\\[1ex]\left[J_{+},V_{0}\right]&={\sqrt {2}}\hbar V_{+1}\\[1ex]\left[J_{+},V_{-1}\right]&={\sqrt {2}}\hbar V_{0}\\[2ex]\left[J_{-},V_{+1}\right]&={\sqrt {2}}\hbar V_{0}\\[1ex]\left[J_{-},V_{0}\right]&={\sqrt {2}}\hbar V_{-1}\\[1ex]\left[J_{-},V_{-1}\right]&=0\\[1ex]\end{aligned}}}
which are of similar form of J z | 1 , + 1 ⟩ = + ℏ | 1 , + 1 ⟩ J z | 1 , 0 ⟩ = 0 | 1 , 0 ⟩ J z | 1 , − 1 ⟩ = − ℏ | 1 , − 1 ⟩ J + | 1 , + 1 ⟩ = 0 J + | 1 , 0 ⟩ = 2 ℏ | 1 , + 1 ⟩ J + | 1 , − 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , + 1 ⟩ = 2 ℏ | 1 , 0 ⟩ J − | 1 , 0 ⟩ = 2 ℏ | 1 , − 1 ⟩ J − | 1 , − 1 ⟩ = 0 {\displaystyle {\begin{aligned}J_{z}|1,+1\rangle &=+\hbar |1,+1\rangle \\[1ex]J_{z}|1,0\rangle &=0|1,0\rangle \\[1ex]J_{z}|1,-1\rangle &=-\hbar |1,-1\rangle \\[2ex]J_{+}|1,+1\rangle &=0\\[1ex]J_{+}|1,0\rangle &={\sqrt {2}}\hbar |1,+1\rangle \\[1ex]J_{+}|1,-1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[2ex]J_{-}|1,+1\rangle &={\sqrt {2}}\hbar |1,0\rangle \\[1ex]J_{-}|1,0\rangle &={\sqrt {2}}\hbar |1,-1\rangle \\[1ex]J_{-}|1,-1\rangle &=0\\[1ex]\end{aligned}}}
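The parallel between the two sets of relations can be checked numerically by taking the vector operator V to be J itself in the spin-1 representation; that choice, and ħ = 1, are made only so the sketch is self-contained.

```python
import numpy as np

hbar = 1.0
Jx = hbar / np.sqrt(2) * np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=complex)
Jy = hbar / np.sqrt(2) * np.array([[0, 1j, 0], [-1j, 0, 1j], [0, -1j, 0]])
Jz = hbar * np.diag([-1.0, 0.0, 1.0]).astype(complex)
Jp = Jx + 1j * Jy   # raising operator J_+

# spherical components of the vector operator V = J
V = {+1: -(Jx + 1j * Jy) / np.sqrt(2),
      0: Jz,
     -1: (Jx - 1j * Jy) / np.sqrt(2)}

def comm(A, B):
    return A @ B - B @ A

for q in (+1, 0, -1):
    assert np.allclose(comm(Jz, V[q]), hbar * q * V[q])               # [J_z, V_q] = hbar q V_q
for q in (0, -1):
    assert np.allclose(comm(Jp, V[q]), np.sqrt(2) * hbar * V[q + 1])  # raising relations
assert np.allclose(comm(Jp, V[+1]), np.zeros((3, 3)))                 # [J_+, V_{+1}] = 0
```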
In the spherical basis, the generators of rotation are: J ± 1 = ∓ 1 2 J ± , J 0 = J z {\displaystyle J_{\pm 1}=\mp {\frac {1}{\sqrt {2}}}J_{\pm }\,,\quad J_{0}=J_{z}}
From the transformation of operators and Baker Hausdorff lemma :
U ( R ) † V ^ q U ( R ) = V ^ q + i θ ℏ [ n ^ ⋅ J → , V ^ q ] + ∑ k = 2 ∞ ( i θ ℏ [ n ^ ⋅ J → , . ] ) k k ! V ^ q = e x p ( i θ ℏ n ^ ⋅ A d J → ) V ^ q {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)={\widehat {V}}_{q}+i{\frac {\theta }{\hbar }}\left[{\hat {n}}\cdot {\vec {J}},{\widehat {V}}_{q}\right]+\sum _{k=2}^{\infty }{\frac {\left(i{\frac {\theta }{\hbar }}[{\hat {n}}\cdot {\vec {J}},.]\right)^{k}}{k!}}{\widehat {V}}_{q}=exp\left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot Ad_{\vec {J}}}\right){\widehat {V}}_{q}}
compared to
U ( R ) | j , k ⟩ = | j , k ⟩ − i θ ℏ n ^ ⋅ J → | j , k ⟩ + ∑ k = 2 ∞ ( − i θ ℏ n ^ ⋅ J → ) k k ! | j , k ⟩ = e x p ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ {\displaystyle U(R)|j,k\rangle =|j,k\rangle -i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}|j,k\rangle +\sum _{k=2}^{\infty }{\frac {\left(-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}\right)^{k}}{k!}}|j,k\rangle =exp\left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle }
it can be argued that the commutator with operator replaces the action of operator on state for transformations of operators as compared with that of states:
U ( R ) | j , k ⟩ = exp ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ j ′ , k ′ | j ′ , k ′ ⟩ ⟨ j ′ , k ′ | exp ( − i θ ℏ n ^ ⋅ J → ) | j , k ⟩ = ∑ k ′ D k ′ k ( j ) ( R ) | j , k ′ ⟩ {\displaystyle U(R)|j,k\rangle =\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{j',k'}|j',k'\rangle \langle j',k'|\exp \left({-i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,k\rangle =\sum _{k'}D_{k'k}^{(j)}(R)|j,k'\rangle }
The rotation transformation in the spherical basis (originally written in the Cartesian basis) is then, due to similarity of commutation and operator shown above: U ( R ) † V ^ q U ( R ) = ∑ q ′ D q ′ q ( 1 ) ( R − 1 ) V ^ q ′ {\displaystyle {U(R)}^{\dagger }{\widehat {V}}_{q}U(R)=\sum _{q'}{{D_{q'q}^{(1)}}(R^{-1})}{\widehat {V}}_{q'}}
One can generalize the vector operator concept easily to tensorial operators , shown next.
In general, a tensor operator is one that transforms according to a tensor: U ( R ) † T ^ p q r ⋯ a b c ⋯ U ( R ) = R p , α R q , β R r , γ ⋯ T ^ i j k ⋯ α β γ ⋯ R i , a − 1 R j , b − 1 R k , c − 1 ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }^{abc\cdots }U(R)=R_{p,\alpha }R_{q,\beta }R_{r,\gamma }\cdots {\widehat {T}}_{ijk\cdots }^{\alpha \beta \gamma \cdots }R_{i,a}^{-1}R_{j,b}^{-1}R_{k,c}^{-1}\cdots } where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or, equivalently, the vector components are transformed by R {\displaystyle R} .
In the subsequent discussion surrounding tensor operators, the index notation regarding covariant/contravariant behavior is ignored entirely. Instead, contravariant components are implied by context. Hence for an n times contravariant tensor: [ 2 ]
U ( R ) † T ^ p q r ⋯ U ( R ) = R p i R q j R r k ⋯ T ^ i j k ⋯ {\displaystyle U(R)^{\dagger }{\widehat {T}}_{pqr\cdots }U(R)=R_{pi}R_{qj}R_{rk}\cdots {\widehat {T}}_{ijk\cdots }}
Note: In general, a tensor operator cannot be written as the tensor product of other tensor operators as given in the above example.
If V → {\displaystyle {\vec {V}}} and W → {\displaystyle {\vec {W}}} are two three dimensional vector operators, then a rank 2 Cartesian dyadic tensor can be formed from the nine operators of the form T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , U ( R ) † T ^ i j U ( R ) = U ( R ) † ( V i ^ W j ^ ) U ( R ) = ( U ( R ) † V ^ i U ( R ) ) ( U ( R ) † W ^ j U ( R ) ) = ( ∑ l = 1 3 R i l V ^ l ⋅ ∑ k = 1 3 R j k W ^ k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)={U(R)}^{\dagger }({\hat {V_{i}}}{\hat {W_{j}}})U(R)=({U(R)}^{\dagger }{\hat {V}}_{i}U(R))({U(R)}^{\dagger }{\hat {W}}_{j}U(R))=\left(\sum _{l=1}^{3}R_{il}{\hat {V}}_{l}\cdot \sum _{k=1}^{3}R_{jk}{\hat {W}}_{k}\right)} Rearranging terms, we get: U ( R ) † T ^ i j U ( R ) = ∑ k = 1 3 ∑ l = 1 3 ( R i l R j k T ^ l k ) {\displaystyle {U(R)}^{\dagger }{\hat {T}}_{ij}U(R)=\sum _{k=1}^{3}\sum _{l=1}^{3}\left(R_{il}R_{jk}{\hat {T}}_{lk}\right)} The RHS is the change-of-basis equation for twice contravariant tensors, where the basis vectors are transformed by R − 1 {\displaystyle R^{-1}} or, equivalently, the vector components transform by R {\displaystyle R} , which matches the transformation of vector operator components. Hence the operators described form a rank 2 tensor; in tensor representation, T ^ = V → ⊗ W → = ( V ^ i W ^ j ) ( e i ⊗ e j ) {\displaystyle {\hat {\mathbf {T} }}={\vec {V}}\otimes {\vec {W}}=({\hat {V}}_{i}{\hat {W}}_{j})(\mathbf {e} _{i}\otimes \mathbf {e} _{j})} Similarly, an n-times contravariant tensor operator can be formed from n vector operators.
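At the level of components - or of expectation values, which transform like classical vectors - this dyadic law reduces to T′ = R T Rᵀ. The NumPy sketch below, with a randomly generated rotation and vectors (purely illustrative stand-ins for the operator statement), checks that rotating the two factors is the same as rotating the dyad on both indices:

```python
import numpy as np

def random_rotation(rng):
    # QR of a random matrix gives an orthogonal matrix; flip a column if det = -1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(4)
R = random_rotation(rng)
v, w = rng.standard_normal(3), rng.standard_normal(3)

T = np.outer(v, w)   # dyad T_ij = v_i w_j

# T'_{ij} = R_{il} R_{jk} T_{lk}, i.e. T' = R T R^T
assert np.allclose(np.outer(R @ v, R @ w), R @ T @ R.T)
```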
We observe that the subspace spanned by linear combinations of the rank two tensor components forms an invariant subspace, i.e. the subspace does not change under rotation, since the transformed components are themselves linear combinations of the tensor components. However, this subspace is not irreducible, i.e. it can be further divided into invariant subspaces under rotation. (A subspace that cannot be divided further is called irreducible; otherwise it is called reducible.) In other words, there exist specific sets of different linear combinations of the components such that each set transforms into linear combinations of the same set under rotation. [ 3 ] In the above example, we will show that the 9 independent tensor components can be divided into sets of 1, 3 and 5 combinations of operators that each form irreducible invariant subspaces.
The subspace spanned by { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be divided into two subspaces: three independent antisymmetric components { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and six independent symmetric components { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} , defined as A ^ i j = 1 2 ( T ^ i j − T ^ j i ) {\displaystyle {\hat {A}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}-{\hat {T}}_{ji})} and S ^ i j = 1 2 ( T ^ i j + T ^ j i ) {\displaystyle {\hat {S}}_{ij}={\frac {1}{2}}({\hat {T}}_{ij}+{\hat {T}}_{ji})} . Using the transformation-under-rotation formula for { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} , it can be shown that both { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} and { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} are transformed into linear combinations of members of their own sets. Although { A ^ i j } {\displaystyle \{{\hat {A}}_{ij}\}} is irreducible, the same cannot be said about { S ^ i j } {\displaystyle \{{\hat {S}}_{ij}\}} .
The six independent symmetric components can be divided into five independent traceless symmetric components, while the invariant trace forms its own one-dimensional subspace.
Hence, the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} are formed respectively by: the one-dimensional invariant trace t ^ = ∑ i T ^ i i {\displaystyle {\hat {t}}=\sum _{i}{\hat {T}}_{ii}} , the three independent antisymmetric components A ^ i j {\displaystyle {\hat {A}}_{ij}} , and the five independent traceless symmetric components S ^ i j − 1 3 t ^ δ i j {\displaystyle {\hat {S}}_{ij}-{\frac {1}{3}}{\hat {t}}\delta _{ij}} .
If T ^ i j = V i ^ W j ^ {\displaystyle {\hat {T}}_{ij}={\hat {V_{i}}}{\hat {W_{j}}}} , the invariant subspaces of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} are represented respectively by: [ 4 ] the scalar V → ⋅ W → {\displaystyle {\vec {V}}\cdot {\vec {W}}} , the three antisymmetric combinations 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})} (equivalently, the components of 1 2 ( V → × W → ) {\displaystyle {\frac {1}{2}}({\vec {V}}\times {\vec {W}})} ), and the five traceless symmetric combinations 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j {\displaystyle {\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}} , written out explicitly below.
From the above examples, the nine component { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} are split into subspaces formed by one, three and five components. These numbers add up to the number of components of the original tensor in a manner similar to the dimension of vector subspaces adding to the dimension of the space that is a direct sum of these subspaces. Similarly, every element of { T ^ i j } {\displaystyle \{{\hat {T}}_{ij}\}} can be expressed in terms of a linear combination of components from its invariant subspaces:
T ^ i j = 1 3 t ^ δ i j + A ^ i j + S ^ i j {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}{\hat {t}}\delta _{ij}+{\hat {A}}_{ij}+{\hat {S}}_{ij}}
or
T ^ i j = 1 3 ( V → ⋅ W → ) δ i j + ( 1 2 ( V ^ i W ^ j − V ^ j W ^ i ) ) + ( 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 ( V → ⋅ W → ) δ i j ) = T ( 0 ) + T ( 1 ) + T ( 2 ) {\displaystyle {\hat {T}}_{ij}={\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}-{\hat {V}}_{j}{\hat {W}}_{i})\right)+\left({\frac {1}{2}}({\hat {V}}_{i}{\hat {W}}_{j}+{\hat {V}}_{j}{\hat {W}}_{i})-{\frac {1}{3}}({\vec {V}}\cdot {\vec {W}})\delta _{ij}\right)=\mathbf {T} ^{(0)}+\mathbf {T} ^{(1)}+\mathbf {T} ^{(2)}}
where: T ^ i j ( 0 ) = V ^ k W ^ k 3 δ i j {\displaystyle {\widehat {T}}_{ij}^{(0)}={\frac {{\widehat {V}}_{k}{\widehat {W}}_{k}}{3}}\delta _{ij}} T ^ i j ( 1 ) = 1 2 [ V ^ i W ^ j − V ^ j W ^ i ] = V ^ [ i W ^ j ] {\displaystyle {\widehat {T}}_{ij}^{(1)}={\frac {1}{2}}\left[{\widehat {V}}_{i}{\widehat {W}}_{j}-{\widehat {V}}_{j}{\widehat {W}}_{i}\right]={\widehat {V}}_{[i}{\widehat {W}}_{j]}} T ^ i j ( 2 ) = 1 2 ( V ^ i W ^ j + V ^ j W ^ i ) − 1 3 V ^ k W ^ k δ i j = V ^ ( i W ^ j ) − T i j ( 0 ) {\displaystyle {\widehat {T}}_{ij}^{(2)}={\tfrac {1}{2}}\left({\widehat {V}}_{i}{\widehat {W}}_{j}+{\widehat {V}}_{j}{\widehat {W}}_{i}\right)-{\tfrac {1}{3}}{\widehat {V}}_{k}{\widehat {W}}_{k}\delta _{ij}={\widehat {V}}_{(i}{\widehat {W}}_{j)}-T_{ij}^{(0)}}
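The splitting into these three parts, and the fact that each part keeps its type under rotation (so that each spans an invariant subspace), can be checked on component matrices. The NumPy sketch below uses a random dyad and rotation, chosen only for illustration:

```python
import numpy as np

def random_rotation(rng):
    # QR of a random matrix gives an orthogonal matrix; flip a column if det = -1
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]
    return q

rng = np.random.default_rng(6)
T = np.outer(rng.standard_normal(3), rng.standard_normal(3))

trace_part = np.trace(T) / 3.0 * np.eye(3)     # 1 component  ("spin 0")
antisym = (T - T.T) / 2.0                      # 3 components ("spin 1")
sym_traceless = (T + T.T) / 2.0 - trace_part   # 5 components ("spin 2")
assert np.allclose(trace_part + antisym + sym_traceless, T)

R = random_rotation(rng)
P = R @ antisym @ R.T
Q = R @ sym_traceless @ R.T
assert np.allclose(R @ trace_part @ R.T, trace_part)          # trace part is invariant
assert np.allclose(P, -P.T)                                   # stays antisymmetric
assert np.allclose(Q, Q.T) and np.isclose(np.trace(Q), 0.0)   # stays symmetric traceless
```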
In general, Cartesian tensors of rank greater than 1 are reducible. In quantum mechanics, this particular example bears resemblance to the addition of two spin-one particles: both are 3-dimensional, so the total 9-dimensional space can be formed from spin 0, spin 1 and spin 2 systems, having 1-dimensional, 3-dimensional and 5-dimensional spaces respectively. [ 4 ] These three terms are irreducible, which means they cannot be decomposed further and still be tensors satisfying the defining transformation laws under which they must be invariant. Each of the irreducible representations T (0) , T (1) , T (2) ... transforms like an angular momentum eigenstate according to the number of independent components.
It is possible that a given tensor may have one or more of these components vanish. For example, the quadrupole moment tensor is already symmetric and traceless, and hence has only 5 independent components to begin with. [ 3 ]
Spherical tensor operators are generally defined as operators with the following transformation rule, under rotation of coordinate system:
T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}}
The commutation relations can be found by expanding LHS and RHS as: [ 4 ]
U ( R ) † T ^ m ( j ) U ( R ) = ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) T ^ m ( j ) ( 1 − i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) = ∑ m ′ ⟨ j , m ′ | ( 1 + i ϵ n ^ ⋅ J → ℏ + O ( ϵ 2 ) ) | j , m ⟩ T ^ m ′ ( j ) {\displaystyle U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right){\widehat {T}}_{m}^{(j)}\left(1-{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)=\sum _{m'}\langle j,m'|\left(1+{\frac {i\epsilon {\hat {n}}\cdot {\vec {J}}}{\hbar }}+{\mathcal {O}}(\epsilon ^{2})\right)|j,m\rangle {\widehat {T}}_{m'}^{(j)}}
Simplifying and keeping only terms of first order in ϵ {\displaystyle \epsilon } , we get:
[ n ^ ⋅ J → , T ^ m ( j ) ] = ∑ m ′ T ^ m ′ ( j ) ⟨ j , m ′ | J → ⋅ n ^ | j , m ⟩ {\displaystyle {[{\hat {n}}\cdot {\vec {J}}},{\widehat {T}}_{m}^{(j)}]=\sum _{m'}{\widehat {T}}_{m'}^{(j)}\langle j,m'|{\vec {J}}\cdot {\hat {n}}|j,m\rangle }
For choices of n ^ = x ^ ± i y ^ {\displaystyle {\hat {n}}={\hat {x}}\pm i{\hat {y}}} or n ^ = z ^ {\displaystyle {\hat {n}}={\hat {z}}} , we get: [ J ± , T ^ m ( j ) ] = ℏ ( j ∓ m ) ( j ± m + 1 ) T ^ m ± 1 ( j ) [ J z , T ^ m ( j ) ] = ℏ m T ^ m ( j ) {\displaystyle {\begin{aligned}\left[J_{\pm },{\widehat {T}}_{m}^{(j)}\right]&=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}{\widehat {T}}_{m\pm 1}^{(j)}\\[1ex]\left[J_{z},{\widehat {T}}_{m}^{(j)}\right]&=\hbar m{\widehat {T}}_{m}^{(j)}\end{aligned}}} Note the similarity of the above to: J ± | j , m ⟩ = ℏ ( j ∓ m ) ( j ± m + 1 ) | j , m ± 1 ⟩ J z | j , m ⟩ = ℏ m | j , m ⟩ {\displaystyle {\begin{aligned}J_{\pm }|j,m\rangle &=\hbar {\sqrt {(j\mp m)(j\pm m+1)}}|j,m\pm 1\rangle \\[1ex]J_{z}|j,m\rangle &=\hbar m|j,m\rangle \end{aligned}}} Since J x {\displaystyle J_{x}} and J y {\displaystyle J_{y}} are linear combinations of J ± {\displaystyle J_{\pm }} , they share the same similarity due to linearity.
If only the commutation relations hold, then using the following relation, | j , m ⟩ → U ( R ) | j , m ⟩ = exp ( − i θ ℏ n ^ ⋅ J → ) | j , m ⟩ = ∑ m ′ D m ′ m ( j ) ( R ) | j , m ′ ⟩ {\displaystyle |j,m\rangle \rightarrow U(R)|j,m\rangle =\exp \left(-{i{\frac {\theta }{\hbar }}{\hat {n}}\cdot {\vec {J}}}\right)|j,m\rangle =\sum _{m'}D_{m'm}^{(j)}(R)|j,m'\rangle }
we find due to similarity of actions of J {\displaystyle J} on wavefunction | j , m ⟩ {\displaystyle |j,m\rangle } and the commutation relations on T ^ m ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}} , that:
T ^ m ( j ) → U ( R ) † T ^ m ( j ) U ( R ) = e x p ( i θ ℏ n ^ ⋅ a d J → ) T ^ m ( j ) = ∑ m ′ D m ′ m ( j ) ( R − 1 ) T ^ m ′ ( j ) {\displaystyle {\widehat {T}}_{m}^{(j)}\rightarrow U(R)^{\dagger }{\widehat {T}}_{m}^{(j)}U(R)=exp\left({i{\frac {\theta }{\hbar }}{\hat {n}}\cdot ad_{\vec {J}}}\right){\widehat {T}}_{m}^{(j)}=\sum _{m'}D_{m'm}^{(j)}(R^{-1}){\widehat {T}}_{m'}^{(j)}}
where the exponential form is given by Baker–Hausdorff lemma . Hence, the above commutation relations and the transformation property are equivalent definitions of spherical tensor operators. It can also be shown that { a d J ^ i } {\displaystyle \{ad_{{\hat {J}}_{i}}\}} transform like a vector due to their commutation relation.
In the following section, the construction of spherical tensors is discussed. For example, since spherical vector operators have been exhibited above, they can be used to construct higher-order spherical tensor operators. In general, spherical tensor operators can be constructed from two perspectives. [ 5 ] One way is to specify how spherical tensors transform under a physical rotation - a group-theoretical definition. A rotated angular momentum eigenstate can be decomposed into a linear combination of the initial eigenstates: the coefficients in the linear combination consist of Wigner rotation matrix entries. The other way is to continue the previous example of the second-order dyadic tensor T = a ⊗ b : casting each of a and b into the spherical basis and substituting into T gives the spherical tensor operators of the second order. [ citation needed ]
Combination of two spherical tensors A q 1 ( k 1 ) {\displaystyle A_{q_{1}}^{(k_{1})}} and B q 2 ( k 2 ) {\displaystyle B_{q_{2}}^{(k_{2})}} in the following manner involving the Clebsch–Gordan coefficients can be proved to give another spherical tensor of the form: [ 4 ] T q ( k ) = ∑ q 1 , q 2 ⟨ k 1 , k 2 ; q 1 , q 2 | k 1 , k 2 ; k , q ⟩ A q 1 ( k 1 ) B q 2 ( k 2 ) {\displaystyle T_{q}^{(k)}=\sum _{q_{1},q_{2}}\langle k_{1},k_{2};q_{1},q_{2}|k_{1},k_{2};k,q\rangle A_{q_{1}}^{(k_{1})}B_{q_{2}}^{(k_{2})}}
This equation can be used to construct higher order spherical tensor operators, for example, second order spherical tensor operators using two first order spherical tensor operators, say A and B, discussed previously:
T ^ ± 2 ( 2 ) = a ^ ± 1 b ^ ± 1 T ^ ± 1 ( 2 ) = 1 2 ( a ^ ± 1 b ^ 0 + a ^ 0 b ^ ± 1 ) T ^ 0 ( 2 ) = 1 6 ( a ^ + 1 b ^ − 1 + a ^ − 1 b ^ + 1 + 2 a ^ 0 b ^ 0 ) {\displaystyle {\begin{aligned}{\widehat {T}}_{\pm 2}^{(2)}&={\widehat {a}}_{\pm 1}{\widehat {b}}_{\pm 1}\\[1ex]{\widehat {T}}_{\pm 1}^{(2)}&={\tfrac {1}{\sqrt {2}}}\left({\widehat {a}}_{\pm 1}{\widehat {b}}_{0}+{\widehat {a}}_{0}{\widehat {b}}_{\pm 1}\right)\\[1ex]{\widehat {T}}_{0}^{(2)}&={\tfrac {1}{\sqrt {6}}}\left({\widehat {a}}_{+1}{\widehat {b}}_{-1}+{\widehat {a}}_{-1}{\widehat {b}}_{+1}+2{\widehat {a}}_{0}{\widehat {b}}_{0}\right)\end{aligned}}}
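These formulas can be checked symbolically. The following is a minimal sketch, assuming the SymPy library; the symbol names a_m1, a_0, a_p1 (and likewise for b) and the helper T are illustrative, and CG(j1, m1, j2, m2, j3, m3) is SymPy's Clebsch–Gordan coefficient ⟨j1 m1; j2 m2 | j3 m3⟩.

```python
from sympy import Symbol, simplify
from sympy.physics.quantum.cg import CG

# Spherical components of the two rank-1 tensors (hypothetical symbol names).
labels = {-1: 'm1', 0: '0', 1: 'p1'}
a = {q: Symbol('a_' + labels[q]) for q in (-1, 0, 1)}
b = {q: Symbol('b_' + labels[q]) for q in (-1, 0, 1)}

def T(k, q):
    """T^(k)_q = sum over q1, q2 of <1 q1; 1 q2 | k q> * a_{q1} * b_{q2}."""
    return sum(CG(1, q1, 1, q2, k, q).doit() * a[q1] * b[q2]
               for q1 in (-1, 0, 1) for q2 in (-1, 0, 1))

print(simplify(T(2, 2)))  # a_p1*b_p1
print(simplify(T(2, 1)))  # sqrt(2)*(a_0*b_p1 + a_p1*b_0)/2
print(simplify(T(2, 0)))  # (a_m1*b_p1 + 2*a_0*b_0 + a_p1*b_m1)/sqrt(6)
```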
Using the infinitesimal rotation operator and its Hermitian conjugate, one can derive the commutation relation in the spherical basis: [ J a , T ^ q ( 2 ) ] = ∑ q ′ D ( J a ) q q ′ ( 2 ) T ^ q ′ ( 2 ) = ∑ q ′ ⟨ j = 2 , m = q | J a | j = 2 , m = q ′ ⟩ T ^ q ′ ( 2 ) {\displaystyle \left[J_{a},{\widehat {T}}_{q}^{(2)}\right]=\sum _{q'}{D(J_{a})}_{qq'}^{(2)}{\widehat {T}}_{q'}^{(2)}=\sum _{q'}\langle j{=}2,m{=}q|J_{a}|j{=}2,m{=}q'\rangle {\widehat {T}}_{q'}^{(2)}} and the finite rotation transformation in the spherical basis can be verified: U ( R ) † T ^ q ( 2 ) U ( R ) = ∑ q ′ D ( R ) q q ′ ( 2 ) ∗ T ^ q ′ ( 2 ) {\displaystyle {U(R)}^{\dagger }{\widehat {T}}_{q}^{(2)}U(R)=\sum _{q'}{{D(R)}_{qq'}^{(2)}}^{*}{\widehat {T}}_{q'}^{(2)}}
Define an operator by its spectrum: Υ l m | r ⟩ = r l Y l m ( θ , ϕ ) | r ⟩ = Υ l m ( r → ) | r ⟩ {\displaystyle \Upsilon _{l}^{m}|r\rangle =r^{l}Y_{l}^{m}(\theta ,\phi )|r\rangle =\Upsilon _{l}^{m}({\vec {r}})|r\rangle } Since for spherical harmonics under rotation: Y ℓ = k m = q ( n ) = ⟨ n | k , q ⟩ → U ( R ) † Y ℓ = k m = q ( n ) U ( R ) = Y ℓ = k m = q ( R n ) = ⟨ n | D ( R ) † | k , q ⟩ = ∑ q ′ D q ′ , q ( k ) ( R − 1 ) Y ℓ = k m = q ′ ( n ) {\displaystyle Y_{\ell =k}^{m=q}(\mathbf {n} )=\langle \mathbf {n} |k,q\rangle \rightarrow U(R)^{\dagger }Y_{\ell =k}^{m=q}(\mathbf {n} )U(R)=Y_{\ell =k}^{m=q}(R\mathbf {n} )=\langle \mathbf {n} |D(R)^{\dagger }|k,q\rangle =\sum _{q'}D_{q',q}^{(k)}(R^{-1})Y_{\ell =k}^{m=q'}(\mathbf {n} )} It can also be shown that: Υ l m ( r → ) → U ( R ) † Υ l m ( r → ) U ( R ) = ∑ m ′ D m ′ , m ( l ) ( R − 1 ) Υ l m ′ ( r → ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})\rightarrow U(R)^{\dagger }\Upsilon _{l}^{m}({\vec {r}})U(R)=\sum _{m'}D_{m',m}^{(l)}(R^{-1})\Upsilon _{l}^{m'}({\vec {r}})} Then Υ l m ( V → ) {\displaystyle \Upsilon _{l}^{m}({\vec {V}})} , where V → {\displaystyle {\vec {V}}} is a vector operator, also transforms in the same manner, i.e., it is a spherical tensor operator. The process involves expressing Υ l m ( r → ) = r l Y l m ( θ , ϕ ) = Υ l m ( x , y , z ) {\displaystyle \Upsilon _{l}^{m}({\vec {r}})=r^{l}Y_{l}^{m}(\theta ,\phi )=\Upsilon _{l}^{m}(x,y,z)} in terms of x , y and z , and replacing x , y and z with the operators V x , V y and V z , which form a vector operator. The resulting operator is hence a spherical tensor operator T ^ m ( l ) {\displaystyle {\hat {T}}_{m}^{(l)}} . (It may differ by a constant factor arising from the normalization of the spherical harmonics, which is immaterial in the context of operators.)
The Hermitian adjoint of a spherical tensor may be defined as ( T † ) q ( k ) = ( − 1 ) k − q ( T − q ( k ) ) † . {\displaystyle (T^{\dagger })_{q}^{(k)}=(-1)^{k-q}(T_{-q}^{(k)})^{\dagger }.} There is some arbitrariness in the choice of the phase factor: any factor containing (−1) ± q will satisfy the commutation relations. [ 6 ] The above choice of phase has the advantages of being real and that the tensor product of two commuting Hermitian operators is still Hermitian. [ 7 ] Some authors define it with a different sign on q , without the k , or use only the floor of k . [ 8 ]
Orbital angular momentum operators have the ladder operators :
L ± = L x ± i L y {\displaystyle L_{\pm }=L_{x}\pm iL_{y}}
which raise or lower the orbital magnetic quantum number m ℓ by one unit. This has almost exactly the same form as the spherical basis, aside from constant multiplicative factors.
Spherical tensors can also be formed from algebraic combinations of the spin operators S x , S y , S z , as matrices, for a spin system with total quantum number j = ℓ + s (and ℓ = 0). Spin operators have the ladder operators:
S ± = S x ± i S y {\displaystyle S_{\pm }=S_{x}\pm iS_{y}}
which raise or lower the spin magnetic quantum number m s by one unit.
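As a concrete numerical illustration, consider the spin-1/2 matrices assembled into spherical components T^(1)_q. A minimal numpy sketch, assuming the standard Pauli-matrix representation, ħ = 1, and the usual convention T^(1)_{±1} = ∓(S_x ± iS_y)/√2, T^(1)_0 = S_z:

```python
import numpy as np

hbar = 1.0
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)
Sy = hbar / 2 * np.array([[0, -1j], [1j, 0]], dtype=complex)
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
Sp = Sx + 1j * Sy  # ladder operator S_+

# Spherical components of the vector operator S (standard convention):
T = {+1: -(Sx + 1j * Sy) / np.sqrt(2), 0: Sz, -1: (Sx - 1j * Sy) / np.sqrt(2)}
comm = lambda X, Y: X @ Y - Y @ X

for q, Tq in T.items():
    assert np.allclose(comm(Sz, Tq), hbar * q * Tq)  # [J_z, T^(1)_q] = hbar q T^(1)_q

# [J_+, T^(1)_0] = hbar sqrt((1-0)(1+0+1)) T^(1)_{+1} = hbar sqrt(2) T^(1)_{+1}
assert np.allclose(comm(Sp, T[0]), hbar * np.sqrt(2) * T[+1])
```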
Spherical bases have broad applications in pure and applied mathematics and physical sciences where spherical geometries occur.
The transition amplitude is proportional to matrix elements of the dipole operator between the initial and final states. We use an electrostatic, spinless model for the atom and we consider the transition from the initial energy level E nℓ to final level E n′ℓ′ . These levels are degenerate, since the energy does not depend on the magnetic quantum number m or m′. The wave functions have the form,
ψ n ℓ m ( r , θ , ϕ ) = R n ℓ ( r ) Y ℓ m ( θ , ϕ ) {\displaystyle \psi _{n\ell m}(r,\theta ,\phi )=R_{n\ell }(r)Y_{\ell m}(\theta ,\phi )}
The dipole operator is proportional to the position operator of the electron, so we must evaluate matrix elements of the form,
⟨ n ′ ℓ ′ m ′ | r | n ℓ m ⟩ {\displaystyle \langle n'\ell 'm'|\mathbf {r} |n\ell m\rangle }
where, the initial state is on the right and the final one on the left. The position operator r has three components, and the initial and final levels consist of 2ℓ + 1 and 2ℓ′ + 1 degenerate states, respectively. Therefore if we wish to evaluate the intensity of a spectral line as it would be observed, we really have to evaluate 3(2ℓ′+ 1)(2ℓ+ 1) matrix elements, for example, 3×3×5 = 45 in a 3d → 2p transition. This is actually an exaggeration, as we shall see, because many of the matrix elements vanish, but there are still many non-vanishing matrix elements to be calculated.
A great simplification can be achieved by expressing the components of r, not with respect to the Cartesian basis, but with respect to the spherical basis. First we define,
r q = e ^ q ⋅ r {\displaystyle r_{q}={\hat {\mathbf {e} }}_{q}\cdot \mathbf {r} }
Next, by inspecting a table of the Y ℓm ′s, we find that for ℓ = 1 we have,
r Y 11 ( θ , ϕ ) = − r 3 8 π sin ( θ ) e i ϕ = 3 4 π ( − x + i y 2 ) r Y 10 ( θ , ϕ ) = r 3 4 π cos ( θ ) = 3 4 π z r Y 1 − 1 ( θ , ϕ ) = r 3 8 π sin ( θ ) e − i ϕ = 3 4 π ( x − i y 2 ) {\displaystyle {\begin{aligned}rY_{11}(\theta ,\phi )&=&&-r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left(-{\frac {x+iy}{\sqrt {2}}}\right)\\rY_{10}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{4\pi }}}\cos(\theta )&=&{\sqrt {\frac {3}{4\pi }}}z\\rY_{1-1}(\theta ,\phi )&=&&r{\sqrt {\frac {3}{8\pi }}}\sin(\theta )e^{-i\phi }&=&{\sqrt {\frac {3}{4\pi }}}\left({\frac {x-iy}{\sqrt {2}}}\right)\end{aligned}}}
where, we have multiplied each Y 1 m by the radius r . On the right hand side we see the spherical components r q of the position vector r . The results can be summarized by,
r Y 1 q ( θ , ϕ ) = 3 4 π r q {\displaystyle rY_{1q}(\theta ,\phi )={\sqrt {\frac {3}{4\pi }}}r_{q}}
for q = 1, 0, −1, where q appears explicitly as a magnetic quantum number. This equation reveals a relationship between vector operators and the angular momentum value ℓ = 1, something we will have more to say about presently. Now the matrix elements become a product of a radial integral times an angular integral, ⟨ n ′ ℓ ′ m ′ | r q | n ℓ m ⟩ = ( ∫ 0 ∞ r 2 d r R n ′ ℓ ′ ∗ ( r ) r R n ℓ ( r ) ) ( 4 π 3 ∫ d Ω Y ℓ ′ m ′ ∗ ( θ , ϕ ) Y 1 q ( θ , ϕ ) Y ℓ m ( θ , ϕ ) ) {\displaystyle \langle n'\ell 'm'|r_{q}|n\ell m\rangle =\left(\int _{0}^{\infty }r^{2}drR_{n'\ell '}^{*}(r)rR_{n\ell }(r)\right)\left({\sqrt {\frac {4\pi }{3}}}\int d\Omega \,Y_{\ell 'm'}^{*}(\theta ,\phi )Y_{1q}(\theta ,\phi )Y_{\ell m}(\theta ,\phi )\right)} where d Ω = sin ( θ ) d θ d ϕ {\displaystyle d\Omega =\sin(\theta )\,d\theta \,d\phi } is the element of solid angle.
We see that all the dependence on the three magnetic quantum numbers (m′,q,m) is contained in the angular part of the integral. Moreover, the angular integral can be evaluated by the three- Y ℓm formula, whereupon it becomes proportional to the Clebsch-Gordan coefficient,
⟨ ℓ ′ m ′ | ℓ 1 m q ⟩ {\displaystyle \langle \ell 'm'|\ell 1mq\rangle }
The radial integral is independent of the three magnetic quantum numbers ( m ′, q , m ), and the trick we have just used does not help us to evaluate it. But it is only one integral, and after it has been done, all the other integrals can be evaluated just by computing or looking up Clebsch–Gordan coefficients.
The selection rule m ′ = q + m in the Clebsch–Gordan coefficient means that many of the integrals vanish, so we have exaggerated the total number of integrals that need to be done. But had we worked with the Cartesian components r i of r , this selection rule might not have been obvious. In any case, even with the selection rule, there may still be many nonzero integrals to be done (nine, in the case 3d → 2p).
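The count of nine can be reproduced symbolically. A small sketch assuming SymPy, whose CG class encodes the Clebsch–Gordan coefficient ⟨ℓ m; 1 q | ℓ′ m′⟩ together with its selection rule:

```python
from sympy.physics.quantum.cg import CG

l, lp = 2, 1  # 3d -> 2p: initial orbital l = 2, final l' = 1
nonzero = sum(1
              for m in range(-l, l + 1)
              for q in (-1, 0, 1)
              for mp in range(-lp, lp + 1)
              if CG(l, m, 1, q, lp, mp).doit() != 0)
print(nonzero, "of", 3 * (2 * l + 1) * (2 * lp + 1))  # 9 of 45
```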
The example we have just given of simplifying the calculation of matrix elements for a dipole transition is really an application of the Wigner–Eckart theorem, which we take up later in these notes.
The spherical tensor formalism provides a common platform for treating coherence and relaxation in nuclear magnetic resonance . In NMR and EPR , spherical tensor operators are employed to express the quantum dynamics of particle spin , by means of an equation of motion for the density matrix entries, or to formulate dynamics in terms of an equation of motion in Liouville space . The Liouville space equation of motion governs the observable averages of spin variables. When relaxation is formulated using a spherical tensor basis in Liouville space, insight is gained because the relaxation matrix exhibits the cross-relaxation of spin observables directly. [ 5 ] | https://en.wikipedia.org/wiki/Tensor_operator |
In mathematics , and in particular functional analysis , the tensor product of Hilbert spaces is a way to extend the tensor product construction so that the result of taking a tensor product of two Hilbert spaces is another Hilbert space. Roughly speaking, the tensor product is the metric space completion of the ordinary tensor product. This is an example of a topological tensor product . The tensor product allows Hilbert spaces to be collected into a symmetric monoidal category . [ 1 ]
Since Hilbert spaces have inner products , one would like to introduce an inner product, and thereby a topology, on the tensor product that arises naturally from the inner products on the factors. Let H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} be two Hilbert spaces with inner products ⟨ ⋅ , ⋅ ⟩ 1 {\displaystyle \langle \cdot ,\cdot \rangle _{1}} and ⟨ ⋅ , ⋅ ⟩ 2 , {\displaystyle \langle \cdot ,\cdot \rangle _{2},} respectively. Construct the tensor product of H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} as vector spaces as explained in the article on tensor products . We can turn this vector space tensor product into an inner product space by defining ⟨ ϕ 1 ⊗ ϕ 2 , ψ 1 ⊗ ψ 2 ⟩ = ⟨ ϕ 1 , ψ 1 ⟩ 1 ⟨ ϕ 2 , ψ 2 ⟩ 2 for all ϕ 1 , ψ 1 ∈ H 1 and ϕ 2 , ψ 2 ∈ H 2 {\displaystyle \left\langle \phi _{1}\otimes \phi _{2},\psi _{1}\otimes \psi _{2}\right\rangle =\left\langle \phi _{1},\psi _{1}\right\rangle _{1}\,\left\langle \phi _{2},\psi _{2}\right\rangle _{2}\quad {\mbox{for all }}\phi _{1},\psi _{1}\in H_{1}{\mbox{ and }}\phi _{2},\psi _{2}\in H_{2}} and extending by linearity. That this inner product is the natural one is justified by the identification of scalar-valued bilinear maps on H 1 × H 2 {\displaystyle H_{1}\times H_{2}} and linear functionals on their vector space tensor product. Finally, take the completion under this inner product. The resulting Hilbert space is the tensor product of H 1 {\displaystyle H_{1}} and H 2 . {\displaystyle H_{2}.}
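A minimal finite-dimensional sanity check of this defining identity, using np.kron to represent simple tensors (an illustrative sketch only; np.vdot conjugates its first argument, one of the two possible conventions):

```python
import numpy as np

rng = np.random.default_rng(0)
cvec = lambda n: rng.standard_normal(n) + 1j * rng.standard_normal(n)
phi1, psi1 = cvec(3), cvec(3)   # vectors in H1 = C^3
phi2, psi2 = cvec(4), cvec(4)   # vectors in H2 = C^4

lhs = np.vdot(np.kron(phi1, phi2), np.kron(psi1, psi2))  # <phi1 ⊗ phi2, psi1 ⊗ psi2>
rhs = np.vdot(phi1, psi1) * np.vdot(phi2, psi2)          # <phi1,psi1>_1 <phi2,psi2>_2
assert np.isclose(lhs, rhs)
```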
The tensor product can also be defined without appealing to the metric space completion. If H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are two Hilbert spaces, one associates to every simple tensor product x 1 ⊗ x 2 {\displaystyle x_{1}\otimes x_{2}} the rank one operator from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 {\displaystyle H_{2}} that maps a given x ∗ ∈ H 1 ∗ {\displaystyle x^{*}\in H_{1}^{*}} as x ∗ ↦ x ∗ ( x 1 ) x 2 . {\displaystyle x^{*}\mapsto x^{*}(x_{1})\,x_{2}.}
This extends to a linear identification between H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} and the space of finite rank operators from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 . {\displaystyle H_{2}.} The finite rank operators are embedded in the Hilbert space H S ( H 1 ∗ , H 2 ) {\displaystyle HS(H_{1}^{*},H_{2})} of Hilbert–Schmidt operators from H 1 ∗ {\displaystyle H_{1}^{*}} to H 2 . {\displaystyle H_{2}.} The scalar product in H S ( H 1 ∗ , H 2 ) {\displaystyle HS(H_{1}^{*},H_{2})} is given by ⟨ T 1 , T 2 ⟩ = ∑ n ⟨ T 1 e n ∗ , T 2 e n ∗ ⟩ , {\displaystyle \langle T_{1},T_{2}\rangle =\sum _{n}\left\langle T_{1}e_{n}^{*},T_{2}e_{n}^{*}\right\rangle ,} where ( e n ∗ ) {\displaystyle \left(e_{n}^{*}\right)} is an arbitrary orthonormal basis of H 1 ∗ . {\displaystyle H_{1}^{*}.}
Under the preceding identification, one can define the Hilbertian tensor product of H 1 {\displaystyle H_{1}} and H 2 , {\displaystyle H_{2},} that is isometrically and linearly isomorphic to H S ( H 1 ∗ , H 2 ) . {\displaystyle HS(H_{1}^{*},H_{2}).}
The Hilbert tensor product H 1 ⊗ H 2 {\displaystyle H_{1}\otimes H_{2}} is characterized by the following universal property ( Kadison & Ringrose 1997 , Theorem 2.6.4):
Theorem — There is a weakly Hilbert–Schmidt mapping p : H 1 × H 2 → H 1 ⊗ H 2 {\displaystyle p:H_{1}\times H_{2}\to H_{1}\otimes H_{2}} such that, given any weakly Hilbert–Schmidt mapping L : H 1 × H 2 → K {\displaystyle L:H_{1}\times H_{2}\to K} to a Hilbert space K , {\displaystyle K,} there is a unique bounded operator T : H 1 ⊗ H 2 → K {\displaystyle T:H_{1}\otimes H_{2}\to K} such that L = T p . {\displaystyle L=Tp.}
A weakly Hilbert-Schmidt mapping L : H 1 × H 2 → K {\displaystyle L:H_{1}\times H_{2}\to K} is defined as a bilinear map for which a real number d {\displaystyle d} exists, such that ∑ i , j = 1 ∞ | ⟨ L ( e i , f j ) , u ⟩ | 2 ≤ d 2 ‖ u ‖ 2 {\displaystyle \sum _{i,j=1}^{\infty }{\bigl |}\left\langle L(e_{i},f_{j}),u\right\rangle {\bigr |}^{2}\leq d^{2}\,\|u\|^{2}} for all u ∈ K {\displaystyle u\in K} and one (hence all) orthonormal bases e 1 , e 2 , … {\displaystyle e_{1},e_{2},\ldots } of H 1 {\displaystyle H_{1}} and f 1 , f 2 , … {\displaystyle f_{1},f_{2},\ldots } of H 2 . {\displaystyle H_{2}.}
As with any universal property, this characterizes the tensor product H uniquely, up to isomorphism. The same universal property, with obvious modifications, also applies for the tensor product of any finite number of Hilbert spaces. It is essentially the same universal property shared by all definitions of tensor products, irrespective of the spaces being tensored: this implies that any space with a tensor product is a symmetric monoidal category , and Hilbert spaces are a particular example thereof.
Two different definitions have historically been proposed for the tensor product of an arbitrary-sized collection { H n } n ∈ N {\textstyle \{H_{n}\}_{n\in N}} of Hilbert spaces. Von Neumann 's traditional definition simply takes the "obvious" tensor product: to compute ⨂ n H n {\textstyle \bigotimes _{n}{H_{n}}} , first collect all simple tensors of the form ⨂ n ∈ N e n {\textstyle \bigotimes _{n\in N}{e_{n}}} such that ∏ n ∈ N ‖ e n ‖ < ∞ {\textstyle \prod _{n\in N}{\|e_{n}\|}<\infty } . The latter describes a pre-inner product through the polarization identity , so take the closed span of such simple tensors modulo that inner product's isotropy subspaces. The space this definition produces is almost never separable, in part because, in physical applications , "most" of the space describes impossible states. Modern authors typically use instead a definition due to Guichardet: to compute ⨂ n H n {\textstyle \bigotimes _{n}{H_{n}}} , first select a unit vector v n ∈ H n {\textstyle v_{n}\in H_{n}} in each Hilbert space, and then collect all simple tensors of the form ⨂ n ∈ N e n {\textstyle \bigotimes _{n\in N}{e_{n}}} , in which only finitely many e n {\textstyle e_{n}} are not v n {\textstyle v_{n}} . Then take the L 2 {\displaystyle L^{2}} completion of the span of these simple tensors. [ 2 ] [ 3 ]
Let A i {\displaystyle {\mathfrak {A}}_{i}} be the von Neumann algebra of bounded operators on H i {\displaystyle H_{i}} for i = 1 , 2. {\displaystyle i=1,2.} Then the von Neumann tensor product of the von Neumann algebras is the strong completion of the set of all finite linear combinations of simple tensor products A 1 ⊗ A 2 {\displaystyle A_{1}\otimes A_{2}} where A i ∈ A i {\displaystyle A_{i}\in {\mathfrak {A}}_{i}} for i = 1 , 2. {\displaystyle i=1,2.} This is exactly equal to the von Neumann algebra of bounded operators of H 1 ⊗ H 2 . {\displaystyle H_{1}\otimes H_{2}.} Unlike for Hilbert spaces, one may take infinite tensor products of von Neumann algebras, and for that matter C*-algebras of operators, without defining reference states. [ 3 ] This is one advantage of the "algebraic" method in quantum statistical mechanics.
If H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} have orthonormal bases { ϕ k } {\displaystyle \left\{\phi _{k}\right\}} and { ψ l } , {\displaystyle \left\{\psi _{l}\right\},} respectively, then { ϕ k ⊗ ψ l } {\displaystyle \left\{\phi _{k}\otimes \psi _{l}\right\}} is an orthonormal basis for H 1 ⊗ H 2 . {\displaystyle H_{1}\otimes H_{2}.} In particular, the Hilbert dimension of the tensor product is the product (as cardinal numbers ) of the Hilbert dimensions.
The following examples show how tensor products arise naturally.
Given two measure spaces X {\displaystyle X} and Y {\displaystyle Y} , with measures μ {\displaystyle \mu } and ν {\displaystyle \nu } respectively, one may look at L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} the space of functions on X × Y {\displaystyle X\times Y} that are square integrable with respect to the product measure μ × ν . {\displaystyle \mu \times \nu .} If f {\displaystyle f} is a square integrable function on X , {\displaystyle X,} and g {\displaystyle g} is a square integrable function on Y , {\displaystyle Y,} then we can define a function h {\displaystyle h} on X × Y {\displaystyle X\times Y} by h ( x , y ) = f ( x ) g ( y ) . {\displaystyle h(x,y)=f(x)g(y).} The definition of the product measure ensures that all functions of this form are square integrable, so this defines a bilinear mapping L 2 ( X ) × L 2 ( Y ) → L 2 ( X × Y ) . {\displaystyle L^{2}(X)\times L^{2}(Y)\to L^{2}(X\times Y).} Linear combinations of functions of the form f ( x ) g ( y ) {\displaystyle f(x)g(y)} are also in L 2 ( X × Y ) . {\displaystyle L^{2}(X\times Y).} It turns out that the set of linear combinations is in fact dense in L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} if L 2 ( X ) {\displaystyle L^{2}(X)} and L 2 ( Y ) {\displaystyle L^{2}(Y)} are separable. [ 4 ] This shows that L 2 ( X ) ⊗ L 2 ( Y ) {\displaystyle L^{2}(X)\otimes L^{2}(Y)} is isomorphic to L 2 ( X × Y ) , {\displaystyle L^{2}(X\times Y),} and it also explains why we need to take the completion in the construction of the Hilbert space tensor product.
Similarly, we can show that L 2 ( X ; H ) {\displaystyle L^{2}(X;H)} , denoting the space of square integrable functions X → H , {\displaystyle X\to H,} is isomorphic to L 2 ( X ) ⊗ H {\displaystyle L^{2}(X)\otimes H} if this space is separable. The isomorphism maps f ( x ) ⊗ ϕ ∈ L 2 ( X ) ⊗ H {\displaystyle f(x)\otimes \phi \in L^{2}(X)\otimes H} to f ( x ) ϕ ∈ L 2 ( X ; H ) . {\displaystyle f(x)\phi \in L^{2}(X;H).} We can combine this with the previous example and conclude that L 2 ( X ) ⊗ L 2 ( Y ) {\displaystyle L^{2}(X)\otimes L^{2}(Y)} and L 2 ( X × Y ) {\displaystyle L^{2}(X\times Y)} are both isomorphic to L 2 ( X ; L 2 ( Y ) ) . {\displaystyle L^{2}\left(X;L^{2}(Y)\right).}
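A toy discrete analogue makes both points concrete: with counting measure on finite sets, the L² spaces are just coordinate spaces, h(x, y) = f(x)g(y) is the outer product, and the SVD exhibits an explicit expansion of an arbitrary h into separable functions. A hedged numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(3)   # f in L^2(X), |X| = 3, counting measure
g = rng.standard_normal(4)   # g in L^2(Y), |Y| = 4
h = np.outer(f, g)           # h(x, y) = f(x) g(y) in L^2(X x Y)

# The L^2 norm factors, matching ||f ⊗ g|| = ||f|| ||g||:
assert np.isclose(np.linalg.norm(h), np.linalg.norm(f) * np.linalg.norm(g))

# Any h in L^2(X x Y) is a finite sum of separable functions (SVD gives one):
H = rng.standard_normal((3, 4))
U, s, Vt = np.linalg.svd(H)
assert np.allclose(H, sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(len(s))))
```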
Tensor products of Hilbert spaces arise often in quantum mechanics . If some particle is described by the Hilbert space H 1 , {\displaystyle H_{1},} and another particle is described by H 2 , {\displaystyle H_{2},} then the system consisting of both particles is described by the tensor product of H 1 {\displaystyle H_{1}} and H 2 . {\displaystyle H_{2}.} For example, the state space of a quantum harmonic oscillator is L 2 ( R ) , {\displaystyle L^{2}(\mathbb {R} ),} so the state space of two oscillators is L 2 ( R ) ⊗ L 2 ( R ) , {\displaystyle L^{2}(\mathbb {R} )\otimes L^{2}(\mathbb {R} ),} which is isomorphic to L 2 ( R 2 ) . {\displaystyle L^{2}\left(\mathbb {R} ^{2}\right).} Therefore, the two-particle system is described by wave functions of the form ψ ( x 1 , x 2 ) . {\displaystyle \psi \left(x_{1},x_{2}\right).} A more intricate example is provided by the Fock spaces , which describe a variable number of particles. | https://en.wikipedia.org/wiki/Tensor_product_of_Hilbert_spaces |
In mathematics , the tensor product of quadratic forms is most easily understood when one views the quadratic forms as quadratic spaces . [ 1 ] If R is a commutative ring where 2 is invertible , and if ( V 1 , q 1 ) {\displaystyle (V_{1},q_{1})} and ( V 2 , q 2 ) {\displaystyle (V_{2},q_{2})} are two quadratic spaces over R , then their tensor product ( V 1 ⊗ V 2 , q 1 ⊗ q 2 ) {\displaystyle (V_{1}\otimes V_{2},q_{1}\otimes q_{2})} is the quadratic space whose underlying R - module is the tensor product V 1 ⊗ V 2 {\displaystyle V_{1}\otimes V_{2}} of R -modules and whose quadratic form is the quadratic form associated to the tensor product of the bilinear forms associated to q 1 {\displaystyle q_{1}} and q 2 {\displaystyle q_{2}} .
In particular, the form q 1 ⊗ q 2 {\displaystyle q_{1}\otimes q_{2}} satisfies
( q 1 ⊗ q 2 ) ( v 1 ⊗ v 2 ) = q 1 ( v 1 ) q 2 ( v 2 ) {\displaystyle (q_{1}\otimes q_{2})(v_{1}\otimes v_{2})=q_{1}(v_{1})\,q_{2}(v_{2})}
(which does not uniquely characterize it, however). It follows from this that if the quadratic forms are diagonalizable (which is always possible if 2 is invertible in R ), i.e., q 1 ≅ ⟨ a 1 , … , a n ⟩ , q 2 ≅ ⟨ b 1 , … , b m ⟩ {\displaystyle q_{1}\cong \langle a_{1},\ldots ,a_{n}\rangle ,\quad q_{2}\cong \langle b_{1},\ldots ,b_{m}\rangle }
then the tensor product has diagonalization q 1 ⊗ q 2 ≅ ⟨ a 1 b 1 , a 1 b 2 , … , a 1 b m , a 2 b 1 , … , a n b m ⟩ . {\displaystyle q_{1}\otimes q_{2}\cong \langle a_{1}b_{1},a_{1}b_{2},\ldots ,a_{1}b_{m},a_{2}b_{1},\ldots ,a_{n}b_{m}\rangle .}
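In matrix terms, the Gram matrix of q1 ⊗ q2 is the Kronecker product of the Gram matrices of q1 and q2, so diagonal forms multiply coefficientwise. A small numpy sketch (the coefficients below are illustrative only):

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])  # example diagonalization <a1, a2, a3> of q1
b = np.array([5.0, 7.0])        # example diagonalization <b1, b2> of q2
G1, G2 = np.diag(a), np.diag(b)

G = np.kron(G1, G2)             # Gram matrix of q1 ⊗ q2
assert np.allclose(np.diag(G), np.kron(a, b))  # diagonal entries a_i * b_j
```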
In multilinear algebra , the tensor rank decomposition [ 1 ] or rank- R decomposition is the decomposition of a tensor as a sum of R rank-1 tensors, where R is minimal. Computing this decomposition efficiently remains an open problem.
Canonical polyadic decomposition (CPD) is a variant of the tensor rank decomposition, in which the tensor is approximated as a sum of K rank-1 tensors for a user-specified K . The CP decomposition has found some applications in linguistics and chemometrics . It was introduced by Frank Lauren Hitchcock in 1927 [ 2 ] and later rediscovered several times, notably in psychometrics. [ 3 ] [ 4 ] The CP decomposition is referred to as CANDECOMP, [ 3 ] PARAFAC, [ 4 ] or CANDECOMP/PARAFAC (CP). Note that the PARAFAC2 rank decomposition is a variation of the CP decomposition. [ 5 ]
Another popular generalization of the matrix SVD known as the higher-order singular value decomposition computes orthonormal mode matrices and has found applications in econometrics , signal processing , computer vision , computer graphics , and psychometrics .
A scalar variable is denoted by a lower-case italic letter, a {\displaystyle a} , and an upper-bound scalar by an upper-case italic letter, A {\displaystyle A} .
Indices are denoted by a combination of lowercase and upper case italic letters, 1 ≤ i ≤ I {\displaystyle 1\leq i\leq I} . Multiple indices that one might encounter when referring to the multiple modes of a tensor are conveniently denoted by 1 ≤ i m ≤ I m {\displaystyle 1\leq i_{m}\leq I_{m}} where 1 ≤ m ≤ M {\displaystyle 1\leq m\leq M} .
A vector is denoted by a lower case bold Times Roman, a {\displaystyle \mathbf {a} } and a matrix is denoted by bold upper case letters A {\displaystyle \mathbf {A} } .
A higher order tensor is denoted by calligraphic letters, A {\displaystyle {\mathcal {A}}} . An element of an M {\displaystyle M} -order tensor A ∈ C I 1 × I 2 × … I m × … I M {\displaystyle {\mathcal {A}}\in \mathbb {C} ^{I_{1}\times I_{2}\times \dots I_{m}\times \dots I_{M}}} is denoted by a i 1 , i 2 , … , i m , … i M {\displaystyle a_{i_{1},i_{2},\dots ,i_{m},\dots i_{M}}} or A i 1 , i 2 , … , i m , … i M {\displaystyle {\mathcal {A}}_{i_{1},i_{2},\dots ,i_{m},\dots i_{M}}} .
A data tensor A ∈ F I 0 × I 1 × … × I C {\displaystyle {\mathcal {A}}\in {\mathbb {F} }^{I_{0}\times I_{1}\times \ldots \times I_{C}}} is a collection of multivariate observations organized into an M -way array where M = C +1. Every tensor may be represented with a suitably large R {\displaystyle R} as a linear combination of R {\displaystyle R} rank-1 tensors: A = ∑ r = 1 R λ r a 1 , r ⊗ a 2 , r ⊗ ⋯ ⊗ a M , r {\displaystyle {\mathcal {A}}=\sum _{r=1}^{R}\lambda _{r}\,\mathbf {a} _{1,r}\otimes \mathbf {a} _{2,r}\otimes \cdots \otimes \mathbf {a} _{M,r}}
where λ r ∈ F {\displaystyle \lambda _{r}\in {\mathbb {F} }} and a m , r ∈ F I m {\displaystyle \mathbf {a} _{m,r}\in {\mathbb {F} }^{I_{m}}} for 1 ≤ m ≤ M {\displaystyle 1\leq m\leq M} . When the number of terms R {\displaystyle R} is minimal in the above expression, then R {\displaystyle R} is called the rank of the tensor, and the decomposition is often referred to as a (tensor) rank decomposition , minimal CP decomposition , or Canonical Polyadic Decomposition (CPD) . If the number of terms is not minimal, then the above decomposition is often referred to simply as CANDECOMP/PARAFAC or a polyadic decomposition .
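A minimal numpy illustration of such a (not necessarily minimal) polyadic combination for a third-order tensor; the factor names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
R, I1, I2, I3 = 2, 3, 4, 5
lam = rng.standard_normal(R)
A = rng.standard_normal((I1, R))
B = rng.standard_normal((I2, R))
C = rng.standard_normal((I3, R))

# T[i,j,k] = sum_r lam[r] * A[i,r] * B[j,r] * C[k,r]
T = np.einsum('r,ir,jr,kr->ijk', lam, A, B, C)
assert T.shape == (I1, I2, I3)
```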
Contrary to the case of matrices, computing the rank of a tensor is NP-hard . [ 6 ] The only notable well-understood case consists of tensors in F I m ⊗ F I n ⊗ F 2 {\displaystyle F^{I_{m}}\otimes F^{I_{n}}\otimes F^{2}} , whose rank can be obtained from the Kronecker – Weierstrass normal form of the linear matrix pencil that the tensor represents. [ 7 ] A simple polynomial-time algorithm exists for certifying that a tensor is of rank 1, namely the higher-order singular value decomposition .
The rank of the tensor of zeros is zero by convention. The rank of a tensor a 1 ⊗ ⋯ ⊗ a M {\displaystyle \mathbf {a} _{1}\otimes \cdots \otimes \mathbf {a} _{M}} is one, provided that a m ∈ F I m ∖ { 0 } {\displaystyle \mathbf {a} _{m}\in F^{I_{m}}\setminus \{0\}} .
The rank of a tensor depends on the field over which the tensor is decomposed. It is known that some real tensors may admit a complex decomposition whose rank is strictly less than the rank of a real decomposition of the same tensor. As an example, [ 8 ] consider the following real tensor
where x i , y j ∈ R 2 {\displaystyle \mathbf {x} _{i},\mathbf {y} _{j}\in \mathbb {R} ^{2}} . The rank of this tensor over the reals is known to be 3, while its complex rank is only 2 because it is the sum of a complex rank-1 tensor with its complex conjugate , namely
where z k = x k + i y k {\displaystyle \mathbf {z} _{k}=\mathbf {x} _{k}+i\mathbf {y} _{k}} .
In contrast, the rank of real matrices will never decrease under a field extension to C {\displaystyle \mathbb {C} } : real matrix rank and complex matrix rank coincide for real matrices.
The generic rank r ( I 1 , … , I M ) {\displaystyle r(I_{1},\ldots ,I_{M})} is defined as the least rank r {\displaystyle r} such that the closure in the Zariski topology of the set of tensors of rank at most r {\displaystyle r} is the entire space F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} . In the case of complex tensors, tensors of rank at most r ( I 1 , … , I M ) {\displaystyle r(I_{1},\ldots ,I_{M})} form a dense set S {\displaystyle S} : every tensor in the aforementioned space is either of rank less than the generic rank, or it is the limit in the Euclidean topology of a sequence of tensors from S {\displaystyle S} . In the case of real tensors, the set of tensors of rank at most r ( I 1 , … , I M ) {\displaystyle r(I_{1},\ldots ,I_{M})} only forms an open set of positive measure in the Euclidean topology. There may exist Euclidean-open sets of tensors of rank strictly higher than the generic rank. All ranks appearing on open sets in the Euclidean topology are called typical ranks . The smallest typical rank is called the generic rank; this definition applies to both complex and real tensors. The generic rank of tensor spaces was initially studied in 1983 by Volker Strassen . [ 9 ]
As an illustration of the above concepts, it is known that both 2 and 3 are typical ranks of R 2 ⊗ R 2 ⊗ R 2 {\displaystyle \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}} while the generic rank of C 2 ⊗ C 2 ⊗ C 2 {\displaystyle \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}\otimes \mathbb {C} ^{2}} is 2. Practically, this means that a randomly sampled real tensor (from a continuous probability measure on the space of tensors) of size 2 × 2 × 2 {\displaystyle 2\times 2\times 2} will be a rank-1 tensor with probability zero, a rank-2 tensor with positive probability, and rank-3 with positive probability. On the other hand, a randomly sampled complex tensor of the same size will be a rank-1 tensor with probability zero, a rank-2 tensor with probability one, and a rank-3 tensor with probability zero. It is even known that the generic rank-3 real tensor in R 2 ⊗ R 2 ⊗ R 2 {\displaystyle \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}\otimes \mathbb {R} ^{2}} will be of complex rank equal to 2.
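These typical ranks can be probed numerically. The sketch below assumes the known criterion (following de Silva and Lim) that the sign of Cayley's 2×2×2 hyperdeterminant Δ classifies real 2×2×2 tensors: Δ > 0 gives rank 2, Δ < 0 gives rank 3, and Δ = 0 is the degenerate boundary. A Monte Carlo estimate in numpy:

```python
import numpy as np

def hyperdet(a):
    """Cayley's hyperdeterminant of a 2x2x2 array a[i, j, k]."""
    s1 = (a[0,0,0]**2 * a[1,1,1]**2 + a[0,0,1]**2 * a[1,1,0]**2
          + a[0,1,0]**2 * a[1,0,1]**2 + a[1,0,0]**2 * a[0,1,1]**2)
    s2 = (a[0,0,0]*a[0,0,1]*a[1,1,0]*a[1,1,1] + a[0,0,0]*a[0,1,0]*a[1,0,1]*a[1,1,1]
          + a[0,0,0]*a[1,0,0]*a[0,1,1]*a[1,1,1] + a[0,0,1]*a[0,1,0]*a[1,0,1]*a[1,1,0]
          + a[0,0,1]*a[1,0,0]*a[0,1,1]*a[1,1,0] + a[0,1,0]*a[1,0,0]*a[0,1,1]*a[1,0,1])
    s3 = a[0,0,0]*a[0,1,1]*a[1,0,1]*a[1,1,0] + a[0,0,1]*a[0,1,0]*a[1,0,0]*a[1,1,1]
    return s1 - 2 * s2 + 4 * s3

rng = np.random.default_rng(0)
signs = [np.sign(hyperdet(rng.standard_normal((2, 2, 2)))) for _ in range(20000)]
print("P(rank 2) ~", signs.count(1.0) / len(signs))   # positive probability
print("P(rank 3) ~", signs.count(-1.0) / len(signs))  # positive probability
```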
The generic rank of tensor spaces depends on the distinction between balanced and unbalanced tensor spaces. A tensor space F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} , where I 1 ≥ I 2 ≥ ⋯ ≥ I M {\displaystyle I_{1}\geq I_{2}\geq \cdots \geq I_{M}} ,
is called unbalanced whenever
and it is called balanced otherwise.
When the first factor is very large with respect to the other factors in the tensor product, then the tensor space essentially behaves as a matrix space. The generic rank of tensors living in an unbalanced tensor space is known to equal
almost everywhere . More precisely, the rank of every tensor in an unbalanced tensor space F I 1 × ⋯ × I M ∖ Z {\displaystyle F^{I_{1}\times \cdots \times I_{M}}\setminus Z} , where Z {\displaystyle Z} is some indeterminate closed set in the Zariski topology, equals the above value. [ 10 ]
The expected generic rank of tensors living in a balanced tensor space is equal to r E ( I 1 , … , I M ) = ⌈ Π Σ + 1 ⌉ {\displaystyle r_{E}(I_{1},\ldots ,I_{M})=\left\lceil {\frac {\Pi }{\Sigma +1}}\right\rceil }
almost everywhere for complex tensors and on a Euclidean-open set for real tensors, where Π = ∏ m = 1 M I m and Σ = ∑ m = 1 M ( I m − 1 ) . {\displaystyle \Pi =\prod _{m=1}^{M}I_{m}\quad {\text{and}}\quad \Sigma =\sum _{m=1}^{M}(I_{m}-1).}
More precisely, the rank of every tensor in C I 1 × ⋯ × I M ∖ Z {\displaystyle \mathbb {C} ^{I_{1}\times \cdots \times I_{M}}\setminus Z} , where Z {\displaystyle Z} is some indeterminate closed set in the Zariski topology , is expected to equal the above value. [ 11 ] For real tensors, r E ( I 1 , … , I M ) {\displaystyle r_{E}(I_{1},\ldots ,I_{M})} is the least rank that is expected to occur on a set of positive Euclidean measure. The value r E ( I 1 , … , I M ) {\displaystyle r_{E}(I_{1},\ldots ,I_{M})} is often referred to as the expected generic rank of the tensor space F I 1 × ⋯ × I M {\displaystyle F^{I_{1}\times \cdots \times I_{M}}} because it is only conjecturally correct. It is known that the true generic rank always satisfies r ( I 1 , … , I M ) ≥ r E ( I 1 , … , I M ) . {\displaystyle r(I_{1},\ldots ,I_{M})\geq r_{E}(I_{1},\ldots ,I_{M}).}
The Abo–Ottaviani–Peterson conjecture [ 11 ] states that equality is expected, i.e., r ( I 1 , … , I M ) = r E ( I 1 , … , I M ) {\displaystyle r(I_{1},\ldots ,I_{M})=r_{E}(I_{1},\ldots ,I_{M})} , with the following exceptional cases:
In each of these exceptional cases, the generic rank is known to be r ( I 1 , … , I m , … , I M ) = r E ( I 1 , … , I M ) + 1 {\displaystyle r(I_{1},\ldots ,I_{m},\ldots ,I_{M})=r_{E}(I_{1},\ldots ,I_{M})+1} . Note that while the set of tensors of rank 3 in F 2 × 2 × 2 × 2 {\displaystyle F^{2\times 2\times 2\times 2}} is defective (of dimension 13 rather than the expected 14), the generic rank in that space is still the expected one, 4. Similarly, the set of tensors of rank 5 in F 4 × 4 × 3 {\displaystyle F^{4\times 4\times 3}} is defective (of dimension 44 rather than the expected 45), but the generic rank in that space is still the expected 6.
The AOP conjecture has been proved completely in a number of special cases. Lickteig showed already in 1985 that r ( n , n , n ) = r E ( n , n , n ) {\displaystyle r(n,n,n)=r_{E}(n,n,n)} , provided that n ≠ 3 {\displaystyle n\neq 3} . [ 12 ] In 2011, a major breakthrough was established by Catalisano, Geramita, and Gimigliano, who proved that the dimension of the set of rank- s {\displaystyle s} tensors of format 2 × 2 × ⋯ × 2 {\displaystyle 2\times 2\times \cdots \times 2} is the expected one except for rank-3 tensors in the 4-factor case, although the generic rank in that case is still the expected 4. As a consequence, r ( 2 , 2 , … , 2 ) = r E ( 2 , 2 , … , 2 ) {\displaystyle r(2,2,\ldots ,2)=r_{E}(2,2,\ldots ,2)} for all binary tensors. [ 13 ]
The maximum rank that can be admitted by any of the tensors in a tensor space is unknown in general; even a conjecture about this maximum rank is missing. Presently, the best general upper bound states that the maximum rank r max ( I 1 , … , I M ) {\displaystyle r_{\mbox{max}}(I_{1},\ldots ,I_{M})} of F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} , where I 1 ≥ I 2 ≥ ⋯ ≥ I M {\displaystyle I_{1}\geq I_{2}\geq \cdots \geq I_{M}} , satisfies r max ( I 1 , … , I M ) ≤ 2 r ( I 1 , … , I M ) {\displaystyle r_{\mbox{max}}(I_{1},\ldots ,I_{M})\leq 2\,r(I_{1},\ldots ,I_{M})}
where r ( I 1 , … , I M ) {\displaystyle r(I_{1},\ldots ,I_{M})} is the (least) generic rank of F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} . [ 14 ] It is well-known that the foregoing inequality may be strict. For instance, the generic rank of tensors in R 2 × 2 × 2 {\displaystyle \mathbb {R} ^{2\times 2\times 2}} is two, so that the above bound yields r max ( 2 , 2 , 2 ) ≤ 4 {\displaystyle r_{\mbox{max}}(2,2,2)\leq 4} , while it is known that the maximum rank equals 3. [ 8 ]
A rank- s {\displaystyle s} tensor A {\displaystyle {\mathcal {A}}} is called a border tensor if there exists a sequence of tensors of rank at most r < s {\displaystyle r<s} whose limit is A {\displaystyle {\mathcal {A}}} . If r {\displaystyle r} is the least value for which such a convergent sequence exists, then it is called the border rank of A {\displaystyle {\mathcal {A}}} . For order-2 tensors, i.e., matrices, rank and border rank always coincide, however, for tensors of order ≥ 3 {\displaystyle \geq 3} they may differ. Border tensors were first studied in the context of fast approximate matrix multiplication algorithms by Bini, Lotti, and Romani in 1980. [ 15 ]
A classic example of a border tensor is the rank-3 tensor A = x ⊗ x ⊗ y + x ⊗ y ⊗ x + y ⊗ x ⊗ x {\displaystyle {\mathcal {A}}=\mathbf {x} \otimes \mathbf {x} \otimes \mathbf {y} +\mathbf {x} \otimes \mathbf {y} \otimes \mathbf {x} +\mathbf {y} \otimes \mathbf {x} \otimes \mathbf {x} } where x {\displaystyle \mathbf {x} } and y {\displaystyle \mathbf {y} } are linearly independent vectors.
It can be approximated arbitrarily well by the following sequence of rank-2 tensors A m = m ( x + 1 m y ) ⊗ ( x + 1 m y ) ⊗ ( x + 1 m y ) − m x ⊗ x ⊗ x {\displaystyle {\mathcal {A}}_{m}=m\left(\mathbf {x} +{\tfrac {1}{m}}\mathbf {y} \right)\otimes \left(\mathbf {x} +{\tfrac {1}{m}}\mathbf {y} \right)\otimes \left(\mathbf {x} +{\tfrac {1}{m}}\mathbf {y} \right)-m\,\mathbf {x} \otimes \mathbf {x} \otimes \mathbf {x} }
as m → ∞ {\displaystyle m\to \infty } . Therefore, its border rank is 2, which is strictly less than its rank. When the two vectors are orthogonal, this example is also known as a W state .
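The convergence just described is easy to check numerically. A small numpy sketch with an explicit orthonormal choice of x and y (illustrative only):

```python
import numpy as np

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
rank1 = lambda u, v, w: np.einsum('i,j,k->ijk', u, v, w)

W = rank1(x, x, y) + rank1(x, y, x) + rank1(y, x, x)  # rank-3 W tensor

for m in (1, 10, 100, 1000):
    A_m = m * rank1(x + y / m, x + y / m, x + y / m) - m * rank1(x, x, x)
    print(m, np.linalg.norm(A_m - W))  # error decays like O(1/m)
```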
It follows from the definition of a pure tensor that A = a 1 ⊗ a 2 ⊗ ⋯ ⊗ a M = b 1 ⊗ b 2 ⊗ ⋯ ⊗ b M {\displaystyle {\mathcal {A}}=\mathbf {a} _{1}\otimes \mathbf {a} _{2}\otimes \cdots \otimes \mathbf {a} _{M}=\mathbf {b} _{1}\otimes \mathbf {b} _{2}\otimes \cdots \otimes \mathbf {b} _{M}} if and only if there exist λ k {\displaystyle \lambda _{k}} such that λ 1 λ 2 ⋯ λ M = 1 {\displaystyle \lambda _{1}\lambda _{2}\cdots \lambda _{M}=1} and a m = λ m b m {\displaystyle \mathbf {a} _{m}=\lambda _{m}\mathbf {b} _{m}} for all m . For this reason, the parameters { a m } m = 1 M {\displaystyle \{\mathbf {a} _{m}\}_{m=1}^{M}} of a rank-1 tensor A {\displaystyle {\mathcal {A}}} are called identifiable or essentially unique. A rank- r {\displaystyle r} tensor A ∈ F I 1 ⊗ F I 2 ⊗ ⋯ ⊗ F I M {\displaystyle {\mathcal {A}}\in F^{I_{1}}\otimes F^{I_{2}}\otimes \cdots \otimes F^{I_{M}}} is called identifiable if every one of its tensor rank decompositions is the sum of the same set of r {\displaystyle r} distinct tensors { A 1 , A 2 , … , A r } {\displaystyle \{{\mathcal {A}}_{1},{\mathcal {A}}_{2},\ldots ,{\mathcal {A}}_{r}\}} where the A i {\displaystyle {\mathcal {A}}_{i}} 's are of rank 1. An identifiable rank- r {\displaystyle r} tensor thus has only one essentially unique decomposition A = ∑ i = 1 r A i , {\displaystyle {\mathcal {A}}=\sum _{i=1}^{r}{\mathcal {A}}_{i},} and all r ! {\displaystyle r!} tensor rank decompositions of A {\displaystyle {\mathcal {A}}} can be obtained by permuting the order of the summands. Observe that in a tensor rank decomposition all the A i {\displaystyle {\mathcal {A}}_{i}} 's are distinct, for otherwise the rank of A {\displaystyle {\mathcal {A}}} would be at most r − 1 {\displaystyle r-1} .
Order-2 tensors in F I 1 ⊗ F I 2 ≃ F I 1 × I 2 {\displaystyle F^{I_{1}}\otimes F^{I_{2}}\simeq F^{I_{1}\times I_{2}}} , i.e., matrices, are not identifiable for r > 1 {\displaystyle r>1} . This follows essentially from the observation A = ∑ i = 1 r a i ⊗ b i = ∑ i = 1 r a i b i T = A B T = ( A X − 1 ) ( B X T ) T = ∑ i = 1 r c i d i T = ∑ i = 1 r c i ⊗ d i , {\displaystyle {\mathcal {A}}=\sum _{i=1}^{r}\mathbf {a} _{i}\otimes \mathbf {b} _{i}=\sum _{i=1}^{r}\mathbf {a} _{i}\mathbf {b} _{i}^{T}=AB^{T}=(AX^{-1})(BX^{T})^{T}=\sum _{i=1}^{r}\mathbf {c} _{i}\mathbf {d} _{i}^{T}=\sum _{i=1}^{r}\mathbf {c} _{i}\otimes \mathbf {d} _{i},} where X ∈ G L r ( F ) {\displaystyle X\in \mathrm {GL} _{r}(F)} is an invertible r × r {\displaystyle r\times r} matrix, A = [ a i ] i = 1 r {\displaystyle A=[\mathbf {a} _{i}]_{i=1}^{r}} , B = [ b i ] i = 1 r {\displaystyle B=[\mathbf {b} _{i}]_{i=1}^{r}} , A X − 1 = [ c i ] i = 1 r {\displaystyle AX^{-1}=[\mathbf {c} _{i}]_{i=1}^{r}} and B X T = [ d i ] i = 1 r {\displaystyle BX^{T}=[\mathbf {d} _{i}]_{i=1}^{r}} . It can be shown [ 16 ] that for every X ∈ G L n ( F ) ∖ Z {\displaystyle X\in \mathrm {GL} _{n}(F)\setminus Z} , where Z {\displaystyle Z} is a closed set in the Zariski topology, the decomposition on the right-hand side is a sum of a different set of rank-1 tensors than the decomposition on the left-hand side, entailing that order-2 tensors of rank r > 1 {\displaystyle r>1} are generically not identifiable.
The situation changes completely for higher-order tensors in F I 1 ⊗ F I 2 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes F^{I_{2}}\otimes \cdots \otimes F^{I_{M}}} with M > 2 {\displaystyle M>2} and all I m ≥ 2 {\displaystyle I_{m}\geq 2} . For simplicity in notation, assume without loss of generality that the factors are ordered such that I 1 ≥ I 2 ≥ ⋯ ≥ I M ≥ 2 {\displaystyle I_{1}\geq I_{2}\geq \cdots \geq I_{M}\geq 2} . Let S r ⊂ F I 1 ⊗ ⋯ F I m ⊗ ⋯ ⊗ F I M {\displaystyle S_{r}\subset F^{I_{1}}\otimes \cdots F^{I_{m}}\otimes \cdots \otimes F^{I_{M}}} denote the set of tensors of rank bounded by r {\displaystyle r} . Then, the following statement was proved to be correct using a computer-assisted proof for all spaces of dimension Π < 15000 {\displaystyle \Pi <15000} , [ 17 ] and it is conjectured to be valid in general: [ 17 ] [ 18 ] [ 19 ]
There exists a closed set Z r {\displaystyle Z_{r}} in the Zariski topology such that every tensor A ∈ S r ∖ Z r {\displaystyle {\mathcal {A}}\in S_{r}\setminus Z_{r}} is identifiable ( S r {\displaystyle S_{r}} is called generically identifiable in this case), unless either one of the following exceptional cases holds:
In these exceptional cases, the generic (and also minimum) number of complex decompositions is
In summary, the generic tensor of order M > 2 {\displaystyle M>2} and rank r < Π Σ + 1 {\textstyle r<{\frac {\Pi }{\Sigma +1}}} that is not identifiability-unbalanced is expected to be identifiable (modulo the exceptional cases in small spaces).
The rank approximation problem asks for the rank- r {\displaystyle r} decomposition closest (in the usual Euclidean topology) to some rank- s {\displaystyle s} tensor A {\displaystyle {\mathcal {A}}} , where r < s {\displaystyle r<s} . That is, one seeks to solve min rank ( B ) ≤ r ‖ A − B ‖ F {\displaystyle \min _{\operatorname {rank} ({\mathcal {B}})\leq r}\|{\mathcal {A}}-{\mathcal {B}}\|_{F}}
where ‖ ⋅ ‖ F {\displaystyle \|\cdot \|_{F}} is the Frobenius norm .
It was shown in a 2008 paper by de Silva and Lim [ 8 ] that the above standard approximation problem may be ill-posed . A solution to the aforementioned problem may fail to exist because the set over which one optimizes is not closed; as such, a minimizer may not exist even though the infimum exists. In particular, it is known that certain so-called border tensors may be approximated arbitrarily well by a sequence of tensors of rank at most r {\displaystyle r} , even though that sequence converges to a tensor of rank strictly higher than r {\displaystyle r} . The rank-3 tensor A = x ⊗ x ⊗ y + x ⊗ y ⊗ x + y ⊗ x ⊗ x {\displaystyle {\mathcal {A}}=\mathbf {x} \otimes \mathbf {x} \otimes \mathbf {y} +\mathbf {x} \otimes \mathbf {y} \otimes \mathbf {x} +\mathbf {y} \otimes \mathbf {x} \otimes \mathbf {x} }
can be approximated arbitrarily well by the following sequence of rank-2 tensors A n = n ( x + 1 n y ) ⊗ ( x + 1 n y ) ⊗ ( x + 1 n y ) − n x ⊗ x ⊗ x {\displaystyle {\mathcal {A}}_{n}=n\left(\mathbf {x} +{\tfrac {1}{n}}\mathbf {y} \right)\otimes \left(\mathbf {x} +{\tfrac {1}{n}}\mathbf {y} \right)\otimes \left(\mathbf {x} +{\tfrac {1}{n}}\mathbf {y} \right)-n\,\mathbf {x} \otimes \mathbf {x} \otimes \mathbf {x} }
as n → ∞ {\displaystyle n\to \infty } . This example neatly illustrates the general principle that a sequence of rank- r {\displaystyle r} tensors that converges to a tensor of strictly higher rank needs to admit at least two individual rank-1 terms whose norms become unbounded. Stated formally, whenever a sequence
has the property that A n → A {\displaystyle {\mathcal {A}}_{n}\to {\mathcal {A}}} (in the Euclidean topology) as n → ∞ {\displaystyle n\to \infty } , then there should exist at least 1 ≤ i ≠ j ≤ r {\displaystyle 1\leq i\neq j\leq r} such that
as n → ∞ {\displaystyle n\to \infty } . This phenomenon is often encountered when attempting to approximate a tensor using numerical optimization algorithms. It is sometimes called the problem of diverging components . It was shown, in addition, that with positive probability a random low-rank tensor over the reals admits no best rank-2 approximation, leading to the understanding that the ill-posedness problem is an important consideration when employing the tensor rank decomposition.
A common partial solution to the ill-posedness problem consists of imposing an additional inequality constraint that bounds the norm of the individual rank-1 terms by some constant. Other constraints that result in a closed set, and, thus, well-posed optimization problem, include imposing positivity or a bounded inner product strictly less than unity between the rank-1 terms appearing in the sought decomposition.
Alternating algorithms (a minimal sketch of this class follows the list below):
Direct algorithms:
General optimization algorithms:
General polynomial system solving algorithms:
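Of the classes above, the alternating class is typified by CP alternating least squares (CP-ALS). The following is a hedged, minimal numpy sketch for third-order tensors; the helper names unfold, khatri_rao, and cp_als are illustrative, and no convergence checks or safeguards against the ill-posedness discussed below are included.

```python
import numpy as np

def unfold(T, mode):
    """Mode-m matrixization of a 3-way array (remaining modes kept in order)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: column r is kron(A[:, r], B[:, r])."""
    R = A.shape[1]
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, R)

def cp_als(T, rank, n_iter=200, seed=0):
    """Fit T[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r] by alternating least squares."""
    rng = np.random.default_rng(seed)
    F = [rng.standard_normal((s, rank)) for s in T.shape]
    for _ in range(n_iter):
        for m in range(3):
            # Solve for factor m with the other two fixed: a linear LS problem.
            others = [F[i] for i in range(3) if i != m]
            KR = khatri_rao(others[0], others[1])  # matches unfold's column order
            F[m] = np.linalg.lstsq(KR, unfold(T, m).T, rcond=None)[0].T
    return F

# Usage: recover a synthetic rank-2 tensor (typically to near machine precision).
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((5, 2)), rng.standard_normal((4, 2)), rng.standard_normal((3, 2))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
print(np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', A, B, C)) / np.linalg.norm(T))
```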
In machine learning, the CP-decomposition is the central ingredient in learning probabilistic latent variable models via the technique of moment-matching. For example, consider the multi-view model, [ 32 ] which is a probabilistic latent variable model. In this model, the generation of samples is posited as follows: there exists a hidden random variable that is not observed directly; given it, there are several conditionally independent random variables known as the different "views" of the hidden variable. For example, assume there are three views x 1 , x 2 , x 3 {\displaystyle x_{1},x_{2},x_{3}} of a k {\displaystyle k} -state categorical hidden variable h {\displaystyle h} . Then the empirical third moment of this latent variable model, E [ x 1 ⊗ x 2 ⊗ x 3 ] {\displaystyle E[x_{1}\otimes x_{2}\otimes x_{3}]} , is a third-order tensor of rank at most k {\displaystyle k} and can be decomposed as: E [ x 1 ⊗ x 2 ⊗ x 3 ] = ∑ i = 1 k P r ( h = i ) E [ x 1 | h = i ] ⊗ E [ x 2 | h = i ] ⊗ E [ x 3 | h = i ] {\displaystyle E[x_{1}\otimes x_{2}\otimes x_{3}]=\sum _{i=1}^{k}Pr(h=i)E[x_{1}|h=i]\otimes E[x_{2}|h=i]\otimes E[x_{3}|h=i]} .
In applications such as topic modeling , this can be interpreted as the co-occurrence of words in a document. Then the coefficients in the decomposition of this empirical moment tensor can be interpreted as the probability of choosing a specific topic and each column of the factor matrix E [ x | h = i ] {\displaystyle E[x|h=i]} corresponds to probabilities of words in the vocabulary in the corresponding topic. | https://en.wikipedia.org/wiki/Tensor_rank_decomposition |
In mathematics , the tensor representations of the general linear group are those that are obtained by taking finitely many tensor products of the fundamental representation and its dual. The irreducible factors of such a representation are also called tensor representations, and can be obtained by applying Schur functors (associated to Young tableaux ). These coincide with the rational representations of the general linear group.
More generally, a matrix group is any subgroup of the general linear group. A tensor representation of a matrix group is any representation that is contained in a tensor representation of the general linear group. For example, the orthogonal group O( n ) admits a tensor representation on the space of all trace-free symmetric tensors of order two. For orthogonal groups, the tensor representations are contrasted with the spin representations .
The classical groups , like the symplectic group , have the property that all finite-dimensional representations are tensor representations (by Weyl's construction ), while other representations (like the metaplectic representation ) exist in infinite dimensions. | https://en.wikipedia.org/wiki/Tensor_representation |
In multilinear algebra , a reshaping of tensors is any bijection between the set of indices of an order - M {\displaystyle M} tensor and the set of indices of an order- L {\displaystyle L} tensor, where L < M {\displaystyle L<M} . The use of indices presupposes tensors in coordinate representation with respect to a basis. The coordinate representation of a tensor can be regarded as a multi-dimensional array, and a bijection from one set of indices to another therefore amounts to a rearrangement of the array elements into an array of a different shape. Such a rearrangement constitutes a particular kind of linear map between the vector space of order- M {\displaystyle M} tensors and the vector space of order- L {\displaystyle L} tensors.
Given a positive integer M {\displaystyle M} , the notation [ M ] {\displaystyle [M]} refers to the set { 1 , … , M } {\displaystyle \{1,\dots ,M\}} of the first M positive integers.
For each integer m {\displaystyle m} where 1 ≤ m ≤ M {\displaystyle 1\leq m\leq M} for a positive integer M {\displaystyle M} , let V m {\displaystyle V_{m}} denote an I m {\displaystyle I_{m}} - dimensional vector space over a field F {\displaystyle F} . Then there are vector space isomorphisms (linear maps)
V 1 ⊗ ⋯ ⊗ V M ≃ F I 1 ⊗ ⋯ ⊗ F I M ≃ F I π 1 ⊗ ⋯ ⊗ F I π M ≃ F I π 1 I π 2 ⊗ F I π 3 ⊗ ⋯ ⊗ F I π M ≃ F I π 1 I π 3 ⊗ F I π 2 ⊗ F I π 4 ⊗ ⋯ ⊗ F I π M ⋮ ≃ F I 1 I 2 … I M , {\displaystyle {\begin{aligned}V_{1}\otimes \cdots \otimes V_{M}&\simeq F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}\\&\simeq F^{I_{\pi _{1}}}\otimes \cdots \otimes F^{I_{\pi _{M}}}\\&\simeq F^{I_{\pi _{1}}I_{\pi _{2}}}\otimes F^{I_{\pi _{3}}}\otimes \cdots \otimes F^{I_{\pi _{M}}}\\&\simeq F^{I_{\pi _{1}}I_{\pi _{3}}}\otimes F^{I_{\pi _{2}}}\otimes F^{I_{\pi _{4}}}\otimes \cdots \otimes F^{I_{\pi _{M}}}\\&\,\,\,\vdots \\&\simeq F^{I_{1}I_{2}\ldots I_{M}},\end{aligned}}}
where π ∈ S M {\displaystyle \pi \in {\mathfrak {S}}_{M}} is any permutation and S M {\displaystyle {\mathfrak {S}}_{M}} is the symmetric group on M {\displaystyle M} elements. Via these (and other) vector space isomorphisms, a tensor can be interpreted in several ways as an order- L {\displaystyle L} tensor where L ≤ M {\displaystyle L\leq M} .
The first vector space isomorphism on the list above, V 1 ⊗ ⋯ ⊗ V M ≃ F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle V_{1}\otimes \cdots \otimes V_{M}\simeq F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} , gives the coordinate representation of an abstract tensor. Assume that each of the M {\displaystyle M} vector spaces V m {\displaystyle V_{m}} has a basis { v 1 m , v 2 m , … , v I m m } {\displaystyle \{v_{1}^{m},v_{2}^{m},\ldots ,v_{I_{m}}^{m}\}} . The expression of a tensor with respect to this basis has the form A = ∑ i 1 = 1 I 1 … ∑ i M = 1 I M a i 1 , i 2 , … , i M v i 1 1 ⊗ v i 2 2 ⊗ ⋯ ⊗ v i M M , {\displaystyle {\mathcal {A}}=\sum _{i_{1}=1}^{I_{1}}\ldots \sum _{i_{M}=1}^{I_{M}}a_{i_{1},i_{2},\ldots ,i_{M}}v_{i_{1}}^{1}\otimes v_{i_{2}}^{2}\otimes \cdots \otimes v_{i_{M}}^{M},} where the coefficients a i 1 , i 2 , … , i M {\displaystyle a_{i_{1},i_{2},\ldots ,i_{M}}} are elements of F {\displaystyle F} . The coordinate representation of A {\displaystyle {\mathcal {A}}} is ∑ i 1 = 1 I 1 … ∑ i M = 1 I M a i 1 , i 2 , … , i M e i 1 1 ⊗ e i 2 2 ⊗ ⋯ ⊗ e i M M , {\displaystyle \sum _{i_{1}=1}^{I_{1}}\ldots \sum _{i_{M}=1}^{I_{M}}a_{i_{1},i_{2},\ldots ,i_{M}}\mathbf {e} _{i_{1}}^{1}\otimes \mathbf {e} _{i_{2}}^{2}\otimes \cdots \otimes \mathbf {e} _{i_{M}}^{M},} where e i m {\displaystyle \mathbf {e} _{i}^{m}} is the i th {\displaystyle i^{\text{th}}} standard basis vector of F I m {\displaystyle F^{I_{m}}} . This can be regarded as an M -way array whose elements are the coefficients a i 1 , i 2 , … , i M {\displaystyle a_{i_{1},i_{2},\ldots ,i_{M}}} .
For any permutation π ∈ S M {\displaystyle \pi \in {\mathfrak {S}}_{M}} there is a canonical isomorphism between the two tensor products of vector spaces V 1 ⊗ V 2 ⊗ ⋯ ⊗ V M {\displaystyle V_{1}\otimes V_{2}\otimes \cdots \otimes V_{M}} and V π ( 1 ) ⊗ V π ( 2 ) ⊗ ⋯ ⊗ V π ( M ) {\displaystyle V_{\pi (1)}\otimes V_{\pi (2)}\otimes \cdots \otimes V_{\pi (M)}} . Parentheses are usually omitted from such products due to the natural isomorphism between V i ⊗ ( V j ⊗ V k ) {\displaystyle V_{i}\otimes (V_{j}\otimes V_{k})} and ( V i ⊗ V j ) ⊗ V k {\displaystyle (V_{i}\otimes V_{j})\otimes V_{k}} , but may, of course, be reintroduced to emphasize a particular grouping of factors. In the grouping, ( V π ( 1 ) ⊗ ⋯ ⊗ V π ( r 1 ) ) ⊗ ( V π ( r 1 + 1 ) ⊗ ⋯ ⊗ V π ( r 2 ) ) ⊗ ⋯ ⊗ ( V π ( r L − 1 + 1 ) ⊗ ⋯ ⊗ V π ( r L ) ) , {\displaystyle (V_{\pi (1)}\otimes \cdots \otimes V_{\pi (r_{1})})\otimes (V_{\pi (r_{1}+1)}\otimes \cdots \otimes V_{\pi (r_{2})})\otimes \cdots \otimes (V_{\pi (r_{L-1}+1)}\otimes \cdots \otimes V_{\pi (r_{L})}),} there are L {\displaystyle L} groups with r l − r l − 1 {\displaystyle r_{l}-r_{l-1}} factors in the l th {\displaystyle l^{\text{th}}} group (where r 0 = 0 {\displaystyle r_{0}=0} and r L = M {\displaystyle r_{L}=M} ).
Letting S l = ( π ( r l − 1 + 1 ) , π ( r l − 1 + 2 ) , … , π ( r l ) ) {\displaystyle S_{l}=(\pi (r_{l-1}+1),\pi (r_{l-1}+2),\ldots ,\pi (r_{l}))} for each l {\displaystyle l} satisfying 1 ≤ l ≤ L {\displaystyle 1\leq l\leq L} , an ( S 1 , S 2 , … , S L ) {\displaystyle (S_{1},S_{2},\ldots ,S_{L})} -flattening of a tensor A {\displaystyle {\mathcal {A}}} , denoted A ( S 1 , S 2 , … , S L ) {\displaystyle {\mathcal {A}}_{(S_{1},S_{2},\ldots ,S_{L})}} , is obtained by applying the two processes above within each of the L {\displaystyle L} groups of factors. That is, the coordinate representation of the l th {\displaystyle l^{\text{th}}} group of factors is obtained using the isomorphism ( V π ( r l − 1 + 1 ) ⊗ V π ( r l − 1 + 2 ) ⊗ ⋯ ⊗ V π ( r l ) ) ≃ ( F I π ( r l − 1 + 1 ) ⊗ F I π ( r l − 1 + 2 ) ⊗ ⋯ ⊗ F I π ( r l ) ) {\displaystyle (V_{\pi (r_{l-1}+1)}\otimes V_{\pi (r_{l-1}+2)}\otimes \cdots \otimes V_{\pi (r_{l})})\simeq (F^{I_{\pi (r_{l-1}+1)}}\otimes F^{I_{\pi (r_{l-1}+2)}}\otimes \cdots \otimes F^{I_{\pi (r_{l})}})} , which requires specifying bases for all of the vector spaces V k {\displaystyle V_{k}} . The result is then vectorized using a bijection μ l : [ I π ( r l − 1 + 1 ) ] × [ I π ( r l − 1 + 2 ) ] × ⋯ × [ I π ( r l ) ] → [ I S l ] {\displaystyle \mu _{l}:[I_{\pi (r_{l-1}+1)}]\times [I_{\pi (r_{l-1}+2)}]\times \cdots \times [I_{\pi (r_{l})}]\to [I_{S_{l}}]} to obtain an element of F I S l {\displaystyle F^{I_{S_{l}}}} , where I S l := ∏ i = r l − 1 + 1 r l I π ( i ) {\textstyle I_{S_{l}}:=\prod _{i=r_{l-1}+1}^{r_{l}}I_{\pi (i)}} , the product of the dimensions of the vector spaces in the l th {\displaystyle l^{\text{th}}} group of factors. The result of applying these isomorphisms within each group of factors is an element of F I S 1 ⊗ ⋯ ⊗ F I S L {\displaystyle F^{I_{S_{1}}}\otimes \cdots \otimes F^{I_{S_{L}}}} , which is a tensor of order L {\displaystyle L} .
By means of a bijective map μ : [ I 1 ] × ⋯ × [ I M ] → [ I 1 ⋯ I M ] {\displaystyle \mu :[I_{1}]\times \cdots \times [I_{M}]\to [I_{1}\cdots I_{M}]} , a vector space isomorphism between F I 1 ⊗ ⋯ ⊗ F I M {\displaystyle F^{I_{1}}\otimes \cdots \otimes F^{I_{M}}} and F I 1 ⋯ I M {\displaystyle F^{I_{1}\cdots I_{M}}} is constructed via the mapping e i 1 1 ⊗ ⋯ e i m m ⊗ ⋯ ⊗ e i M M ↦ e μ ( i 1 , i 2 , … , i M ) , {\displaystyle \mathbf {e} _{i_{1}}^{1}\otimes \cdots \mathbf {e} _{i_{m}}^{m}\otimes \cdots \otimes \mathbf {e} _{i_{M}}^{M}\mapsto \mathbf {e} _{\mu (i_{1},i_{2},\ldots ,i_{M})},} where for every natural number i {\displaystyle i} such that 1 ≤ i ≤ I 1 ⋯ I M {\displaystyle 1\leq i\leq I_{1}\cdots I_{M}} , the vector e i {\displaystyle \mathbf {e} _{i}} denotes the i th standard basis vector of F I 1 ⋯ I M {\displaystyle F^{I_{1}\cdots I_{M}}} . In such a reshaping, the tensor is simply interpreted as a vector in F I 1 ⋯ I M {\displaystyle F^{I_{1}\cdots I_{M}}} . This is known as vectorization , and is analogous to vectorization of matrices . A standard choice of bijection μ {\displaystyle \mu } is such that
vec ( A ) = [ a 1 , 1 , … , 1 a 2 , 1 , … , 1 ⋯ a I 1 , 1 , … , 1 a 1 , 2 , 1 , … , 1 ⋯ a I 1 , I 2 , … , I M ] T , {\displaystyle \operatorname {vec} ({\mathcal {A}})={\begin{bmatrix}a_{1,1,\ldots ,1}&a_{2,1,\ldots ,1}&\cdots &a_{I_{1},1,\ldots ,1}&a_{1,2,1,\ldots ,1}&\cdots &a_{I_{1},I_{2},\ldots ,I_{M}}\end{bmatrix}}^{T},}
which is consistent with the way in which the colon operator in Matlab and GNU Octave reshapes a higher-order tensor into a vector. In general, the vectorization of A {\displaystyle {\mathcal {A}}} is the vector [ a μ − 1 ( i ) ] i = 1 I 1 ⋯ I M {\displaystyle [a_{\mu ^{-1}(i)}]_{i=1}^{I_{1}\cdots I_{M}}} .
The vectorization of A {\displaystyle {\mathcal {A}}} denoted with v e c ( A ) {\displaystyle vec({\mathcal {A}})} or A [ : ] {\displaystyle {\mathcal {A}}_{[:]}} is an [ S 1 , S 2 ] {\displaystyle [S_{1},S_{2}]} -reshaping where S 1 = ( 1 , 2 , … , M ) {\displaystyle S_{1}=(1,2,\ldots ,M)} and S 2 = ∅ {\displaystyle S_{2}=\emptyset } .
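For intuition, the Matlab/GNU Octave colon-operator convention mentioned above corresponds to numpy's column-major flattening (order='F'), in which the first index varies fastest. A small illustrative check:

```python
import numpy as np

A = np.arange(24).reshape(2, 3, 4)   # an order-3 tensor
v = A.reshape(-1, order='F')         # vec(A): first index varies fastest
assert v[0] == A[0, 0, 0] and v[1] == A[1, 0, 0] and v[2] == A[0, 1, 0]
```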
Let A ∈ F I 1 ⊗ F I 2 ⊗ ⋯ ⊗ F I M {\displaystyle {\mathcal {A}}\in F^{I_{1}}\otimes F^{I_{2}}\otimes \cdots \otimes F^{I_{M}}} be the coordinate representation of an abstract tensor with respect to a basis. Mode- m matrixizing (a.k.a. flattening ) of A {\displaystyle {\mathcal {A}}} is an [ S 1 , S 2 ] {\displaystyle [S_{1},S_{2}]} -reshaping in which S 1 = ( m ) {\displaystyle S_{1}=(m)} and S 2 = ( 1 , 2 , … , m − 1 , m + 1 , … , M ) {\displaystyle S_{2}=(1,2,\ldots ,m-1,m+1,\ldots ,M)} . Usually, a standard matrixizing is denoted by
A [ m ] = A [ S 1 , S 2 ] {\displaystyle {\mathbf {A} }_{[m]}={\mathcal {A}}_{[S_{1},S_{2}]}}
This reshaping is sometimes called matrixizing , matricizing , flattening or unfolding in the literature. A standard choice for the bijections μ 1 , μ 2 {\displaystyle \mu _{1},\ \mu _{2}} is the one that is consistent with the reshape function in Matlab and GNU Octave, namely
A [ m ] := [ a 1 , 1 , … , 1 , 1 , 1 , … , 1 a 2 , 1 , … , 1 , 1 , 1 , … , 1 ⋯ a I 1 , I 2 , … , I m − 1 , 1 , I m + 1 , … , I M a 1 , 1 , … , 1 , 2 , 1 , … , 1 a 2 , 1 , … , 1 , 2 , 1 , … , 1 ⋯ a I 1 , I 2 , … , I m − 1 , 2 , I m + 1 , … , I M ⋮ ⋮ ⋮ a 1 , 1 , … , 1 , I m , 1 , … , 1 a 2 , 1 , … , 1 , I m , 1 , … , 1 ⋯ a I 1 , I 2 , … , I m − 1 , I m , I m + 1 , … , I M ] {\displaystyle {\mathbf {A} }_{[m]}:={\begin{bmatrix}a_{1,1,\ldots ,1,1,1,\ldots ,1}&a_{2,1,\ldots ,1,1,1,\ldots ,1}&\cdots &a_{I_{1},I_{2},\ldots ,I_{m-1},1,I_{m+1},\ldots ,I_{M}}\\a_{1,1,\ldots ,1,2,1,\ldots ,1}&a_{2,1,\ldots ,1,2,1,\ldots ,1}&\cdots &a_{I_{1},I_{2},\ldots ,I_{m-1},2,I_{m+1},\ldots ,I_{M}}\\\vdots &\vdots &&\vdots \\a_{1,1,\ldots ,1,I_{m},1,\ldots ,1}&a_{2,1,\ldots ,1,I_{m},1,\ldots ,1}&\cdots &a_{I_{1},I_{2},\ldots ,I_{m-1},I_{m},I_{m+1},\ldots ,I_{M}}\end{bmatrix}}}
Definition Mode- m Matrixizing: [ 1 ] [ A [ m ] ] j k = a i 1 … i m … i M , where j = i m and k = 1 + ∑ n = 0 n ≠ m M ( i n − 1 ) ∏ l = 0 l ≠ m n − 1 I l . {\displaystyle [{\mathbf {A} }_{[m]}]_{jk}=a_{i_{1}\dots i_{m}\dots i_{M}},\;\;{\text{ where }}j=i_{m}{\text{ and }}k=1+\sum _{n=0 \atop n\neq m}^{M}(i_{n}-1)\prod _{l=0 \atop l\neq m}^{n-1}I_{l}.} The mode- m matrixizing of a tensor A ∈ F I 1 × . . . I M , {\displaystyle {\mathcal {A}}\in F^{I_{1}\times ...I_{M}},} is defined as the matrix A [ m ] ∈ F I m × ( I 1 … I m − 1 I m + 1 … I M ) {\displaystyle {\mathbf {A} }_{[m]}\in F^{I_{m}\times (I_{1}\dots I_{m-1}I_{m+1}\dots I_{M})}} . As the parenthetical ordering indicates, the mode- m column vectors are arranged by
sweeping all the other mode indices through their ranges, with smaller mode indices varying more rapidly than larger ones.
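The following NumPy sketch (an illustration under the conventions above, not code from the references) implements this mode-m unfolding by moving mode m to the front and flattening the remaining modes in column-major order, so that smaller modes vary fastest:

```python
import numpy as np

def mode_m_matrixize(A: np.ndarray, m: int) -> np.ndarray:
    """Mode-m unfolding: row j is index i_m, and the columns sweep the
    remaining modes with smaller mode indices varying faster (0-based m)."""
    return np.moveaxis(A, m, 0).reshape(A.shape[m], -1, order="F")

A = np.arange(24).reshape(2, 3, 4)
print(mode_m_matrixize(A, 1).shape)   # (3, 8): I_2 rows, I_1 * I_3 columns
```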
In statistics , machine learning and algorithms , a tensor sketch is a type of dimensionality reduction that is particularly efficient when applied to vectors that have tensor structure. [ 1 ] [ 2 ] Such a sketch can be used to speed up explicit kernel methods , bilinear pooling in neural networks and is a cornerstone in many numerical linear algebra algorithms. [ 3 ]
Mathematically, a dimensionality reduction or sketching matrix is a matrix M ∈ R k × d {\displaystyle M\in \mathbb {R} ^{k\times d}} , where k < d {\displaystyle k<d} , such that for any vector x ∈ R d {\displaystyle x\in \mathbb {R} ^{d}} ‖ M x ‖ 2 = ( 1 ± ε ) ‖ x ‖ 2 {\displaystyle \|Mx\|_{2}=(1\pm \varepsilon )\|x\|_{2}} with high probability.
In other words, M {\displaystyle M} preserves the norm of vectors up to a small error.
A tensor sketch has the extra property that if x = y ⊗ z {\displaystyle x=y\otimes z} for some vectors y ∈ R d 1 , z ∈ R d 2 {\displaystyle y\in \mathbb {R} ^{d_{1}},z\in \mathbb {R} ^{d_{2}}} such that d 1 d 2 = d {\displaystyle d_{1}d_{2}=d} , the transformation M ( y ⊗ z ) {\displaystyle M(y\otimes z)} can be computed more efficiently. Here ⊗ {\displaystyle \otimes } denotes the Kronecker product , rather than the outer product , though the two are related by a flattening .
The speedup is achieved by first rewriting M ( y ⊗ z ) = M ′ y ∘ M ″ z {\displaystyle M(y\otimes z)=M'y\circ M''z} , where ∘ {\displaystyle \circ } denotes the elementwise ( Hadamard ) product.
Each of M ′ y {\displaystyle M'y} and M ″ z {\displaystyle M''z} can be computed in time O ( k d 1 ) {\displaystyle O(kd_{1})} and O ( k d 2 ) {\displaystyle O(kd_{2})} , respectively; including the Hadamard product gives overall time O ( d 1 d 2 + k d 1 + k d 2 ) {\displaystyle O(d_{1}d_{2}+kd_{1}+kd_{2})} . In most use cases this method is significantly faster than the full M ( y ⊗ z ) {\displaystyle M(y\otimes z)} requiring O ( k d ) = O ( k d 1 d 2 ) {\displaystyle O(kd)=O(kd_{1}d_{2})} time.
For higher-order tensors, such as x = y ⊗ z ⊗ t {\displaystyle x=y\otimes z\otimes t} , the savings are even more impressive.
The term tensor sketch was coined in 2013 [ 4 ] describing a technique by Rasmus Pagh [ 5 ] from the same year.
Originally it was understood using the fast Fourier transform to do fast convolution of count sketches .
Later research works generalized it to a much larger class of dimensionality reductions via Tensor random embeddings.
Tensor random embeddings were introduced in 2010 in a paper [ 6 ] on differential privacy and were first analyzed by Rudelson et al. in 2012 in the context of sparse recovery. [ 7 ]
Avron et al. [ 8 ] were the first to study the subspace embedding properties of tensor sketches, particularly focused on applications to polynomial kernels .
In this context, the sketch is required not only to preserve the norm of each individual vector with a certain probability but to preserve the norm of all vectors in each individual linear subspace .
This is a much stronger property, and it requires larger sketch sizes, but it allows the kernel methods to be used very broadly as explored in the book by David Woodruff. [ 3 ]
The face-splitting product of two matrices takes the tensor product of their corresponding rows; it was proposed by V. Slyusar [ 9 ] in 1996 [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] for radar and digital antenna array applications. Concretely, let C ∈ R 3 × 3 {\displaystyle \mathbf {C} \in \mathbb {R} ^{3\times 3}} and D ∈ R 3 × 3 {\displaystyle \mathbf {D} \in \mathbb {R} ^{3\times 3}} be two matrices.
Then the face-splitting product C ∙ D {\displaystyle \mathbf {C} \bullet \mathbf {D} } is [ 10 ] [ 11 ] [ 12 ] [ 13 ] C ∙ D = [ C 1 ⊗ D 1 C 2 ⊗ D 2 C 3 ⊗ D 3 ] = [ C 1 , 1 D 1 , 1 C 1 , 1 D 1 , 2 C 1 , 1 D 1 , 3 C 1 , 2 D 1 , 1 C 1 , 2 D 1 , 2 C 1 , 2 D 1 , 3 C 1 , 3 D 1 , 1 C 1 , 3 D 1 , 2 C 1 , 3 D 1 , 3 C 2 , 1 D 2 , 1 C 2 , 1 D 2 , 2 C 2 , 1 D 2 , 3 C 2 , 2 D 2 , 1 C 2 , 2 D 2 , 2 C 2 , 2 D 2 , 3 C 2 , 3 D 2 , 1 C 2 , 3 D 2 , 2 C 2 , 3 D 2 , 3 C 3 , 1 D 3 , 1 C 3 , 1 D 3 , 2 C 3 , 1 D 3 , 3 C 3 , 2 D 3 , 1 C 3 , 2 D 3 , 2 C 3 , 2 D 3 , 3 C 3 , 3 D 3 , 1 C 3 , 3 D 3 , 2 C 3 , 3 D 3 , 3 ] . {\displaystyle \mathbf {C} \bullet \mathbf {D} =\left[{\begin{array}{c }\mathbf {C} _{1}\otimes \mathbf {D} _{1}\\\hline \mathbf {C} _{2}\otimes \mathbf {D} _{2}\\\hline \mathbf {C} _{3}\otimes \mathbf {D} _{3}\\\end{array}}\right]=\left[{\begin{array}{c c c c c c c c c }\mathbf {C} _{1,1}\mathbf {D} _{1,1}&\mathbf {C} _{1,1}\mathbf {D} _{1,2}&\mathbf {C} _{1,1}\mathbf {D} _{1,3}&\mathbf {C} _{1,2}\mathbf {D} _{1,1}&\mathbf {C} _{1,2}\mathbf {D} _{1,2}&\mathbf {C} _{1,2}\mathbf {D} _{1,3}&\mathbf {C} _{1,3}\mathbf {D} _{1,1}&\mathbf {C} _{1,3}\mathbf {D} _{1,2}&\mathbf {C} _{1,3}\mathbf {D} _{1,3}\\\hline \mathbf {C} _{2,1}\mathbf {D} _{2,1}&\mathbf {C} _{2,1}\mathbf {D} _{2,2}&\mathbf {C} _{2,1}\mathbf {D} _{2,3}&\mathbf {C} _{2,2}\mathbf {D} _{2,1}&\mathbf {C} _{2,2}\mathbf {D} _{2,2}&\mathbf {C} _{2,2}\mathbf {D} _{2,3}&\mathbf {C} _{2,3}\mathbf {D} _{2,1}&\mathbf {C} _{2,3}\mathbf {D} _{2,2}&\mathbf {C} _{2,3}\mathbf {D} _{2,3}\\\hline \mathbf {C} _{3,1}\mathbf {D} _{3,1}&\mathbf {C} _{3,1}\mathbf {D} _{3,2}&\mathbf {C} _{3,1}\mathbf {D} _{3,3}&\mathbf {C} _{3,2}\mathbf {D} _{3,1}&\mathbf {C} _{3,2}\mathbf {D} _{3,2}&\mathbf {C} _{3,2}\mathbf {D} _{3,3}&\mathbf {C} _{3,3}\mathbf {D} _{3,1}&\mathbf {C} _{3,3}\mathbf {D} _{3,2}&\mathbf {C} _{3,3}\mathbf {D} _{3,3}\end{array}}\right].} The reason this product is useful is the following identity:
( C ∙ D ) ( x ⊗ y ) = ( C x ) ∘ ( D y ) , {\displaystyle (\mathbf {C} \bullet \mathbf {D} )(x\otimes y)=(\mathbf {C} x)\circ (\mathbf {D} y),} where ∘ {\displaystyle \circ } is the element-wise ( Hadamard ) product.
Since this operation can be computed in linear time, C ∙ D {\displaystyle \mathbf {C} \bullet \mathbf {D} } can be multiplied on vectors with tensor structure much faster than normal matrices.
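As a concrete illustration (a minimal sketch, not taken from the cited papers), the following NumPy code builds the face-splitting product row by row and checks the identity above:

```python
import numpy as np

def face_splitting(C: np.ndarray, D: np.ndarray) -> np.ndarray:
    # Row i of C • D is the Kronecker product of row i of C and row i of D.
    return np.stack([np.kron(C[i], D[i]) for i in range(C.shape[0])])

rng = np.random.default_rng(0)
C, D = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
x, y = rng.normal(size=3), rng.normal(size=4)

lhs = face_splitting(C, D) @ np.kron(x, y)   # direct: O(k d1 d2) time
rhs = (C @ x) * (D @ y)                      # via the identity: O(k (d1 + d2))
assert np.allclose(lhs, rhs)
```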
The tensor sketch of Pham and Pagh [ 4 ] computes C ( 1 ) x ∗ C ( 2 ) y {\displaystyle C^{(1)}x\ast C^{(2)}y} , where C ( 1 ) {\displaystyle C^{(1)}} and C ( 2 ) {\displaystyle C^{(2)}} are independent count sketch matrices and ∗ {\displaystyle \ast } is vector convolution .
They show that, amazingly, this equals C ( x ⊗ y ) {\displaystyle C(x\otimes y)} – a count sketch of the tensor product!
It turns out that this relation can be seen in terms of the face-splitting product as C ( 1 ) x ∗ C ( 2 ) y = F − 1 ( ( F C ( 1 ) x ) ∘ ( F C ( 2 ) y ) ) = F − 1 ( ( F C ( 1 ) ∙ F C ( 2 ) ) ( x ⊗ y ) ) , {\displaystyle C^{(1)}x\ast C^{(2)}y={\mathcal {F}}^{-1}(({\mathcal {F}}C^{(1)}x)\circ ({\mathcal {F}}C^{(2)}y))={\mathcal {F}}^{-1}(({\mathcal {F}}C^{(1)}\bullet {\mathcal {F}}C^{(2)})(x\otimes y)),} where F {\displaystyle {\mathcal {F}}} is the discrete Fourier transform matrix.
Since F {\displaystyle {\mathcal {F}}} is an orthonormal matrix, F − 1 {\displaystyle {\mathcal {F}}^{-1}} doesn't impact the norm of C x {\displaystyle Cx} and may be ignored.
What's left is that C ∼ C ( 1 ) ∙ C ( 2 ) {\displaystyle C\sim {\mathcal {C}}^{(1)}\bullet {\mathcal {C}}^{(2)}} .
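The following NumPy sketch (an illustrative implementation under the construction above, not the authors' code) verifies the Pham–Pagh relation numerically: the FFT-based circular convolution of two count sketches coincides with a count sketch of the tensor product whose hash is (h1(i) + h2(j)) mod k and whose sign is s1(i)s2(j).

```python
import numpy as np

rng = np.random.default_rng(1)
d, k = 8, 64
h1, h2 = rng.integers(0, k, size=d), rng.integers(0, k, size=d)
s1, s2 = rng.choice([-1.0, 1.0], size=d), rng.choice([-1.0, 1.0], size=d)
x, y = rng.normal(size=d), rng.normal(size=d)

def count_sketch(v, h, s):
    # Bucket t accumulates the sum of s(i) * v_i over all i with h(i) = t.
    out = np.zeros(k)
    np.add.at(out, h, s * v)
    return out

# FFT-based circular convolution of the two individual sketches ...
conv = np.fft.ifft(np.fft.fft(count_sketch(x, h1, s1)) *
                   np.fft.fft(count_sketch(y, h2, s2))).real

# ... equals the count sketch of x ⊗ y with hash (h1(i) + h2(j)) mod k
# and sign s1(i) * s2(j).
H = (h1[:, None] + h2[None, :]) % k
S = s1[:, None] * s2[None, :]
direct = count_sketch((x[:, None] * y[None, :]).ravel(), H.ravel(), S.ravel())
assert np.allclose(conv, direct)
```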
The problem with the original tensor sketch algorithm was that it used count sketch matrices, which aren't always very good dimensionality reductions.
In 2020 [ 15 ] it was shown that any matrices with sufficiently random, independent rows suffice to create a tensor sketch. This allows using matrices with stronger guarantees, such as real Gaussian Johnson–Lindenstrauss matrices.
In particular, a general bound is obtained on the number of rows m {\displaystyle m} required of the combined sketch matrix T {\displaystyle T} in terms of ε {\displaystyle \varepsilon } , δ {\displaystyle \delta } , and the order c {\displaystyle c} .
In particular, if the entries of T {\displaystyle T} are ± 1 {\displaystyle \pm 1} we get m = O ( ε − 2 log 1 / δ + ε − 1 ( 1 c log 1 / δ ) c ) {\displaystyle m=O(\varepsilon ^{-2}\log 1/\delta +\varepsilon ^{-1}({\tfrac {1}{c}}\log 1/\delta )^{c})} which matches the normal Johnson Lindenstrauss theorem of m = O ( ε − 2 log 1 / δ ) {\displaystyle m=O(\varepsilon ^{-2}\log 1/\delta )} when ε {\displaystyle \varepsilon } is small.
The paper [ 15 ] also shows that the dependency on ε − 1 ( 1 c log 1 / δ ) c {\displaystyle \varepsilon ^{-1}({\tfrac {1}{c}}\log 1/\delta )^{c}} is necessary for constructions using tensor randomized projections with Gaussian entries.
Because of the exponential dependency on c {\displaystyle c} in tensor sketches based on the face-splitting product , a different approach was developed in 2020 [ 15 ] which applies the sketch recursively, combining the factors of the tensor product two at a time. With this method, we only apply the general tensor sketch method to order 2 tensors, which avoids the exponential dependency in the number of rows.
It can be proved [ 15 ] that combining c {\displaystyle c} dimensionality reductions like this only increases ε {\displaystyle \varepsilon } by a factor c {\displaystyle {\sqrt {c}}} .
Given a dense matrix M ∈ R k × d {\displaystyle M\in \mathbb {R} ^{k\times d}} , computing the matrix–vector product M x {\displaystyle Mx} takes O ( k d ) {\displaystyle O(kd)} time. The Fast Johnson–Lindenstrauss Transform (FJLT), [ 16 ] introduced by Ailon and Chazelle in 2006, is a dimensionality reduction matrix that can be applied much faster.
A version of this method takes M = SHD {\displaystyle M=\operatorname {SHD} } , where D {\displaystyle D} is a d × d {\displaystyle d\times d} diagonal matrix with independent random ± 1 {\displaystyle \pm 1} entries, H {\displaystyle H} is a d × d {\displaystyle d\times d} Walsh–Hadamard matrix , and S {\displaystyle S} is a k × d {\displaystyle k\times d} matrix that samples k {\displaystyle k} coordinates at random. The matrix–vector product D x {\displaystyle Dx} can be computed in O ( d ) {\displaystyle O(d)} time, H x {\displaystyle Hx} in O ( d log d ) {\displaystyle O(d\log d)} time using the fast Walsh–Hadamard transform , and S x {\displaystyle Sx} in O ( k ) {\displaystyle O(k)} time, so the full product SHD x {\displaystyle \operatorname {SHD} x} takes O ( d log d ) {\displaystyle O(d\log d)} time.
If the diagonal matrix is replaced by one which has a tensor product of ± 1 {\displaystyle \pm 1} values on the diagonal, instead of being fully independent, it is possible to compute SHD ( x ⊗ y ) {\displaystyle \operatorname {SHD} (x\otimes y)} fast.
For an example of this, let ρ , σ ∈ { − 1 , 1 } 2 {\displaystyle \rho ,\sigma \in \{-1,1\}^{2}} be two independent ± 1 {\displaystyle \pm 1} vectors and let D {\displaystyle D} be a diagonal matrix with ρ ⊗ σ {\displaystyle \rho \otimes \sigma } on the diagonal.
We can then split up SHD ( x ⊗ y ) {\displaystyle \operatorname {SHD} (x\otimes y)} as SHD = S ( 1 ) H D ( 1 ) ∙ S ( 2 ) H D ( 2 ) {\displaystyle \operatorname {SHD} =S^{(1)}HD^{(1)}\bullet S^{(2)}HD^{(2)}} . In other words, the transformation splits into two fast Johnson–Lindenstrauss transformations, and the total reduction takes time O ( d 1 log d 1 + d 2 log d 2 ) {\displaystyle O(d_{1}\log d_{1}+d_{2}\log d_{2})} rather than O ( d 1 d 2 log ( d 1 d 2 ) ) {\displaystyle O(d_{1}d_{2}\log(d_{1}d_{2}))} as with the direct approach.
The same approach can be extended to compute higher-degree products, such as SHD ( x ⊗ y ⊗ z ) {\displaystyle \operatorname {SHD} (x\otimes y\otimes z)} .
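The degree-2 split can be checked numerically. The sketch below (an illustration assuming Sylvester-construction Hadamard matrices, for which the large Hadamard matrix is the Kronecker product of the two smaller ones, and a hypothetical row-sampling scheme) confirms that a sampled row of the large transform applied to x ⊗ y factors into two smaller transforms:

```python
import numpy as np
from scipy.linalg import hadamard

# Check SHD(x ⊗ y) = (S1 H D1 x) ∘ (S2 H D2 y) for the degree-2 case.
# Assumes Sylvester Hadamard matrices, so hadamard(d1*d2) = kron(H1, H2),
# and a diagonal D whose signs are the tensor product rho ⊗ sigma.
rng = np.random.default_rng(4)
d1 = d2 = 4
k = 6
rho = rng.choice([-1, 1], size=d1)
sigma = rng.choice([-1, 1], size=d2)
H1, H2 = hadamard(d1), hadamard(d2)
r1 = rng.integers(0, d1, size=k)         # rows sampled by S1
r2 = rng.integers(0, d2, size=k)         # rows sampled by S2

x, y = rng.normal(size=d1), rng.normal(size=d2)

# Left side: the big transform, then sample the Kronecker-indexed rows.
big = hadamard(d1 * d2) @ (np.kron(rho, sigma) * np.kron(x, y))
lhs = big[r1 * d2 + r2]

# Right side: two small transforms combined entrywise.
rhs = (H1 @ (rho * x))[r1] * (H2 @ (sigma * y))[r2]
assert np.allclose(lhs, rhs)
```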
Ahle et al. [ 15 ] show that if SHD {\displaystyle \operatorname {SHD} } has ε − 2 ( log 1 / δ ) c + 1 {\displaystyle \varepsilon ^{-2}(\log 1/\delta )^{c+1}} rows, then | ‖ SHD x ‖ 2 − ‖ x ‖ 2 | ≤ ε ‖ x ‖ 2 {\displaystyle |\|\operatorname {SHD} x\|_{2}-\|x\|_{2}|\leq \varepsilon \|x\|_{2}} for any vector x ∈ R d c {\displaystyle x\in \mathbb {R} ^{d^{c}}} with probability 1 − δ {\displaystyle 1-\delta } , while allowing fast multiplication with degree c {\displaystyle c} tensors.
Jin et al., [ 17 ] the same year, showed a similar result for a more general class of matrices called RIP matrices, which includes the subsampled Hadamard matrices.
They showed that these matrices allow splitting into tensors provided the number of rows is ε − 2 ( log 1 / δ ) 2 c − 1 log d {\displaystyle \varepsilon ^{-2}(\log 1/\delta )^{2c-1}\log d} .
In the case c = 2 {\displaystyle c=2} this matches the previous result.
These fast constructions can again be combined with the recursion approach mentioned above, giving the fastest overall tensor sketch.
It is also possible to do so-called "data aware" tensor sketching.
Instead of multiplying a random matrix on the data, the data points are sampled independently with a certain probability depending on the norm of the point. [ 18 ]
Kernel methods are popular in machine learning as they give the algorithm designer the freedom to design a "feature space" in which to measure the similarity of their data points.
A simple kernel-based binary classifier is based on the following computation: y ^ ( x ′ ) = sgn ⁡ ( ∑ i = 1 n y i k ( x i , x ′ ) ) , {\displaystyle {\hat {y}}(\mathbf {x'} )=\operatorname {sgn} \left(\sum _{i=1}^{n}y_{i}\,k(\mathbf {x} _{i},\mathbf {x'} )\right),}
where x i ∈ R d {\displaystyle \mathbf {x} _{i}\in \mathbb {R} ^{d}} are the data points, y i {\displaystyle y_{i}} is the label of the i {\displaystyle i} th point (either −1 or +1), and y ^ ( x ′ ) {\displaystyle {\hat {y}}(\mathbf {x'} )} is the prediction of the class of x ′ {\displaystyle \mathbf {x'} } .
The function k : R d × R d → R {\displaystyle k:\mathbb {R} ^{d}\times \mathbb {R} ^{d}\to \mathbb {R} } is the kernel.
Typical examples are the radial basis function kernel , k ( x , x ′ ) = exp ( − ‖ x − x ′ ‖ 2 2 ) {\displaystyle k(x,x')=\exp(-\|x-x'\|_{2}^{2})} , and polynomial kernels such as k ( x , x ′ ) = ( 1 + ⟨ x , x ′ ⟩ ) 2 {\displaystyle k(x,x')=(1+\langle x,x'\rangle )^{2}} .
When used this way, the kernel method is called "implicit".
Sometimes it is faster to do an "explicit" kernel method, in which a pair of functions f , g : R d → R D {\displaystyle f,g:\mathbb {R} ^{d}\to \mathbb {R} ^{D}} are found, such that k ( x , x ′ ) = ⟨ f ( x ) , g ( x ′ ) ⟩ {\displaystyle k(x,x')=\langle f(x),g(x')\rangle } .
This allows the above computation to be expressed as y ^ ( x ′ ) = sgn ⁡ ( ∑ i = 1 n y i ⟨ f ( x i ) , g ( x ′ ) ⟩ ) = sgn ⁡ ( ⟨ ∑ i = 1 n y i f ( x i ) , g ( x ′ ) ⟩ ) , {\displaystyle {\hat {y}}(\mathbf {x'} )=\operatorname {sgn} \left(\sum _{i=1}^{n}y_{i}\,\langle f(\mathbf {x} _{i}),g(\mathbf {x'} )\rangle \right)=\operatorname {sgn} \left(\left\langle \sum _{i=1}^{n}y_{i}\,f(\mathbf {x} _{i}),g(\mathbf {x'} )\right\rangle \right),}
where the value ∑ i = 1 n y i f ( x i ) {\displaystyle \sum _{i=1}^{n}y_{i}f(\mathbf {x} _{i})} can be computed in advance.
The problem with this method is that the feature space can be very large; that is, D ≫ d {\displaystyle D\gg d} .
For example, for the polynomial kernel k ( x , x ′ ) = ⟨ x , x ′ ⟩ 3 {\displaystyle k(x,x')=\langle x,x'\rangle ^{3}} we get f ( x ) = x ⊗ x ⊗ x {\displaystyle f(x)=x\otimes x\otimes x} and g ( x ′ ) = x ′ ⊗ x ′ ⊗ x ′ {\displaystyle g(x')=x'\otimes x'\otimes x'} , where ⊗ {\displaystyle \otimes } is the tensor product and f ( x ) , g ( x ′ ) ∈ R D {\displaystyle f(x),g(x')\in \mathbb {R} ^{D}} where D = d 3 {\displaystyle D=d^{3}} .
If d {\displaystyle d} is already large, D {\displaystyle D} can be much larger than the number of data points ( n {\displaystyle n} ) and so the explicit method is inefficient.
The idea of tensor sketch is that we can compute approximate functions f ′ , g ′ : R d → R t {\displaystyle f',g':\mathbb {R} ^{d}\to \mathbb {R} ^{t}} where t {\displaystyle t} can even be smaller than d {\displaystyle d} , and which still have the property that ⟨ f ′ ( x ) , g ′ ( x ′ ) ⟩ ≈ k ( x , x ′ ) {\displaystyle \langle f'(x),g'(x')\rangle \approx k(x,x')} .
This method was shown in 2020 [ 15 ] to work even for high degree polynomials and radial basis function kernels.
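As a rough numerical illustration (a sketch under the assumption of Gaussian sketching matrices, not the exact construction of [ 15 ]), the following code approximates the degree-2 polynomial kernel ⟨x, x′⟩² using the fast identity from above; the estimate is unbiased and concentrates as the sketch size k grows:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 32, 20000
x, xp = rng.normal(size=d), rng.normal(size=d)

# f'(x) = (M1 x) ∘ (M2 x) / sqrt(k) sketches x ⊗ x, so that
# <f'(x), f'(x')> is an unbiased estimate of <x, x'>^2.
M1 = rng.normal(size=(k, d))
M2 = rng.normal(size=(k, d))
f  = (M1 @ x)  * (M2 @ x)  / np.sqrt(k)
fp = (M1 @ xp) * (M2 @ xp) / np.sqrt(k)

print(np.dot(x, xp) ** 2)   # exact kernel value
print(np.dot(f, fp))        # sketched estimate, close for large k
```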
Assume we have two large datasets, represented as matrices X , Y ∈ R n × d {\displaystyle X,Y\in \mathbb {R} ^{n\times d}} , and we want to find the rows i , j {\displaystyle i,j} with the largest inner products ⟨ X i , Y j ⟩ {\displaystyle \langle X_{i},Y_{j}\rangle } .
We could compute Z = X Y T ∈ R n × n {\displaystyle Z=XY^{T}\in \mathbb {R} ^{n\times n}} and simply look at all n 2 {\displaystyle n^{2}} possibilities.
However, this would take at least n 2 {\displaystyle n^{2}} time, and probably closer to n 2 d {\displaystyle n^{2}d} using standard matrix multiplication techniques.
The idea of compressed matrix multiplication is the general identity X Y T = ∑ c = 1 d X : , c ⊗ Y : , c , {\displaystyle XY^{T}=\sum _{c=1}^{d}X_{:,c}\otimes Y_{:,c},} where ⊗ {\displaystyle \otimes } is the tensor product and X : , c {\displaystyle X_{:,c}} denotes the c {\displaystyle c} th column of X {\displaystyle X} , the tensor product of two columns being identified with their outer product . Since we can compute a ( linear ) sketch of each term X : , c ⊗ Y : , c {\displaystyle X_{:,c}\otimes Y_{:,c}} efficiently, and the sketch is linear, we can sum those up to get an approximation for the complete product.
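A minimal sketch of this idea (an illustrative count-sketch implementation, not Pagh's original code): each rank-one term is sketched with the FFT trick, the sketches are summed, and a single entry of XYᵀ can then be estimated from the compressed representation. A single hash repetition is used for brevity; in practice one would take a median over several repetitions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 20, 5, 512
X, Y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
h1, h2 = rng.integers(0, k, size=n), rng.integers(0, k, size=n)
s1, s2 = rng.choice([-1.0, 1.0], size=n), rng.choice([-1.0, 1.0], size=n)

def cs_fft(v, h, s):
    # FFT of the count sketch of v (bucket t sums s(i) v_i over h(i) = t).
    out = np.zeros(k)
    np.add.at(out, h, s * v)
    return np.fft.fft(out)

# Sum the sketches of the d rank-one terms X_{:,c} ⊗ Y_{:,c}.
sketch = np.fft.ifft(sum(cs_fft(X[:, c], h1, s1) * cs_fft(Y[:, c], h2, s2)
                         for c in range(d))).real

# Estimate entry (i, j) of Z = X Yᵀ from the compressed representation.
i, j = 4, 7
print(s1[i] * s2[j] * sketch[(h1[i] + h2[j]) % k])   # estimate
print(X[i] @ Y[j])                                   # exact value
```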
Bilinear pooling is the technique of taking two input vectors, x , y {\displaystyle x,y} from different sources, and using the tensor product x ⊗ y {\displaystyle x\otimes y} as the input layer to a neural network.
In [ 19 ] the authors considered using tensor sketch to reduce the number of variables needed.
In 2017 another paper [ 20 ] took the FFT of the input features before combining them with the element-wise product.
This again corresponds to the original tensor sketch. | https://en.wikipedia.org/wiki/Tensor_sketch |
Curvilinear coordinates can be formulated in tensor calculus , with important applications in physics and engineering , particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mechanics .
Elementary vector and tensor algebra in curvilinear coordinates is used in some of the older scientific literature in mechanics and physics and can be indispensable to understanding work from the early and mid 1900s, for example the text by Green and Zerna. [ 1 ] Some useful relations in the algebra of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, [ 2 ] Naghdi, [ 3 ] Simmonds, [ 4 ] Green and Zerna, [ 1 ] Basar and Weichert, [ 5 ] and Ciarlet. [ 6 ]
Consider two coordinate systems with coordinate variables ( Z 1 , Z 2 , Z 3 ) {\displaystyle (Z^{1},Z^{2},Z^{3})} and ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) {\displaystyle (Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})} , which we shall represent in short as just Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} respectively, always assuming that our index i {\displaystyle i} runs from 1 through 3. We shall assume that these coordinate systems are embedded in three-dimensional Euclidean space. The coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} may be used to explain each other, because as we move along a coordinate line in one coordinate system we can use the other to describe our position. In this way, the coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} are functions of each other
Z i = f i ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) {\displaystyle Z^{i}=f^{i}(Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})} for i = 1 , 2 , 3 {\displaystyle i=1,2,3}
which can be written as
Z i = Z i ( Z 1 ´ , Z 2 ´ , Z 3 ´ ) = Z i ( Z i ´ ) {\displaystyle Z^{i}=Z^{i}(Z^{\acute {1}},Z^{\acute {2}},Z^{\acute {3}})=Z^{i}(Z^{\acute {i}})} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3}
These three equations together are also called a coordinate transformation from Z i ´ {\displaystyle Z^{\acute {i}}} to Z i {\displaystyle Z^{i}} . Let us denote this transformation by T {\displaystyle T} . We will therefore represent the transformation from the coordinate system with coordinate variables Z i ´ {\displaystyle Z^{\acute {i}}} to the coordinate system with coordinates Z i {\displaystyle Z^{i}} as:
Z = T ( z ´ ) {\displaystyle Z=T({\acute {z}})}
Similarly we can represent Z i ´ {\displaystyle Z^{\acute {i}}} as a function of Z i {\displaystyle Z^{i}} as follows:
Z i ´ = g i ´ ( Z 1 , Z 2 , Z 3 ) {\displaystyle Z^{\acute {i}}=g^{\acute {i}}(Z^{1},Z^{2},Z^{3})} for i ´ = 1 , 2 , 3 {\displaystyle {\acute {i}}=1,2,3}
and we can write the three equations more compactly as
Z i ´ = Z i ´ ( Z 1 , Z 2 , Z 3 ) = Z i ´ ( Z i ) {\displaystyle Z^{\acute {i}}=Z^{\acute {i}}(Z^{1},Z^{2},Z^{3})=Z^{\acute {i}}(Z^{i})} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3}
These three equations together are also called a coordinate transformation from Z i {\displaystyle Z^{i}} to Z i ´ {\displaystyle Z^{\acute {i}}} . Let us denote this transformation by S {\displaystyle S} . We will represent the transformation from the coordinate system with coordinate variables Z i {\displaystyle Z^{i}} to the coordinate system with coordinates Z i ´ {\displaystyle Z^{\acute {i}}} as:
z ´ = S ( z ) {\displaystyle {\acute {z}}=S(z)}
If the transformation T {\displaystyle T} is bijective then we call the image of the transformation, namely Z i {\displaystyle Z^{i}} , a set of admissible coordinates for Z i ´ {\displaystyle Z^{\acute {i}}} . If T {\displaystyle T} is linear the coordinate system Z i {\displaystyle Z^{i}} will be called an affine coordinate system , otherwise Z i {\displaystyle Z^{i}} is called a curvilinear coordinate system .
Since the coordinates Z i {\displaystyle Z^{i}} and Z i ´ {\displaystyle Z^{\acute {i}}} are functions of each other, we can take the derivative of the coordinate variable Z i {\displaystyle Z^{i}} with respect to the coordinate variable Z i ´ {\displaystyle Z^{\acute {i}}} .
Consider
∂ Z i ∂ Z i ´ = d e f J i ´ i {\displaystyle {\frac {\partial {Z^{i}}}{\partial {Z^{\acute {i}}}}}\;{\overset {\underset {\mathrm {def} }{}}{=}}\;J_{\acute {i}}^{i}} for i ´ , i = 1 , 2 , 3 {\displaystyle {\acute {i}},i=1,2,3} , these derivatives can be arranged in a matrix, say J {\displaystyle J} , in which J i ´ i {\displaystyle J_{\acute {i}}^{i}} is the element in the i {\displaystyle i} -th row and i ´ {\displaystyle {\acute {i}}} -th column
J = ( J 1 ´ 1 J 2 ´ 1 J 3 ´ 1 J 1 ´ 2 J 2 ´ 2 J 3 ´ 2 J 1 ´ 3 J 2 ´ 3 J 3 ´ 3 ) = ( ∂ Z 1 ∂ Z 1 ´ ∂ Z 1 ∂ Z 2 ´ ∂ Z 1 ∂ Z 3 ´ ∂ Z 2 ∂ Z 1 ´ ∂ Z 2 ∂ Z 2 ´ ∂ Z 2 ∂ Z 3 ´ ∂ Z 3 ∂ Z 1 ´ ∂ Z 3 ∂ Z 2 ´ ∂ Z 3 ∂ Z 3 ´ ) {\displaystyle J={\begin{pmatrix}J_{\acute {1}}^{1}&J_{\acute {2}}^{1}&J_{\acute {3}}^{1}\\J_{\acute {1}}^{2}&J_{\acute {2}}^{2}&J_{\acute {3}}^{2}\\J_{\acute {1}}^{3}&J_{\acute {2}}^{3}&J_{\acute {3}}^{3}\end{pmatrix}}={\begin{pmatrix}{\partial {Z^{1}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{1}} \over \partial {Z^{\acute {2}}}}&{\partial {Z^{1}} \over \partial {Z^{\acute {3}}}}\\{\partial {Z^{2}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{2}} \over \partial {Z^{\acute {2}}}}&{\partial {Z^{2}} \over \partial {Z^{\acute {3}}}}\\{\partial {Z^{3}} \over \partial {Z^{\acute {1}}}}&{\partial {Z^{3}} \over \partial {Z^{\acute {2}}}}&{\partial {Z^{3}} \over \partial {Z^{\acute {3}}}}\end{pmatrix}}}
The resultant matrix is called the Jacobian matrix.
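As a concrete illustration (not from the cited texts), the Jacobian matrix of the familiar spherical-to-Cartesian transformation can be computed symbolically:

```python
import sympy as sp

# Jacobian of (Z^1, Z^2, Z^3) = (r sinθ cosφ, r sinθ sinφ, r cosθ),
# i.e. Cartesian coordinates as functions of spherical ones.
r, theta, phi = sp.symbols('r theta phi', positive=True)
Z = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(theta)])
J = Z.jacobian([r, theta, phi])   # entry (i, j) is dZ^i / dq^j
print(sp.simplify(J.det()))       # r**2*sin(theta)
```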
Let ( b 1 , b 2 , b 3 ) {\displaystyle (\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3})} be an arbitrary basis for three-dimensional Euclidean space. In general, the basis vectors are neither unit vectors nor mutually orthogonal . However, they are required to be linearly independent. Then a vector v {\displaystyle \mathbf {v} } can be expressed as [ 4 ] : 27 v = v k b k {\displaystyle \mathbf {v} =v^{k}\,\mathbf {b} _{k}} The components v k {\displaystyle v^{k}} are the contravariant components of the vector v {\displaystyle \mathbf {v} } .
The reciprocal basis ( b 1 , b 2 , b 3 ) {\displaystyle (\mathbf {b} ^{1},\mathbf {b} ^{2},\mathbf {b} ^{3})} is defined by the relation [ 4 ] : 28–29 b i ⋅ b j = δ j i {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} _{j}=\delta _{j}^{i}} where δ j i {\displaystyle \delta _{j}^{i}} is the Kronecker delta .
The vector v {\displaystyle \mathbf {v} } can also be expressed in terms of the reciprocal basis: v = v k b k {\displaystyle \mathbf {v} =v_{k}~\mathbf {b} ^{k}} The components v k {\displaystyle v_{k}} are the covariant components of the vector v {\displaystyle \mathbf {v} } .
A second-order tensor can be expressed as S = S i j b i ⊗ b j = S j i b i ⊗ b j = S i j b i ⊗ b j = S i j b i ⊗ b j {\displaystyle {\boldsymbol {S}}=S^{ij}~\mathbf {b} _{i}\otimes \mathbf {b} _{j}=S_{~j}^{i}~\mathbf {b} _{i}\otimes \mathbf {b} ^{j}=S_{i}^{~j}~\mathbf {b} ^{i}\otimes \mathbf {b} _{j}=S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}} The components S i j {\displaystyle S^{ij}} are called the contravariant components, S j i {\displaystyle S_{~j}^{i}} the mixed right-covariant components, S i j {\displaystyle S_{i}^{~j}} the mixed left-covariant components, and S i j {\displaystyle S_{ij}} the covariant components of the second-order tensor.
The quantities g i j {\displaystyle g_{ij}} , g i j {\displaystyle g^{ij}} are defined as [ 4 ] : 39
g i j = b i ⋅ b j = g j i ; g i j = b i ⋅ b j = g j i {\displaystyle g_{ij}=\mathbf {b} _{i}\cdot \mathbf {b} _{j}=g_{ji}~;~~g^{ij}=\mathbf {b} ^{i}\cdot \mathbf {b} ^{j}=g^{ji}} From the above equations we have v i = g i k v k ; v i = g i k v k ; b i = g i j b j ; b i = g i j b j {\displaystyle v^{i}=g^{ik}~v_{k}~;~~v_{i}=g_{ik}~v^{k}~;~~\mathbf {b} ^{i}=g^{ij}~\mathbf {b} _{j}~;~~\mathbf {b} _{i}=g_{ij}~\mathbf {b} ^{j}}
The components of a vector are related by [ 4 ] : 30–32 v ⋅ b i = v k b k ⋅ b i = v k δ k i = v i {\displaystyle \mathbf {v} \cdot \mathbf {b} ^{i}=v^{k}~\mathbf {b} _{k}\cdot \mathbf {b} ^{i}=v^{k}~\delta _{k}^{i}=v^{i}} v ⋅ b i = v k b k ⋅ b i = v k δ i k = v i {\displaystyle \mathbf {v} \cdot \mathbf {b} _{i}=v_{k}~\mathbf {b} ^{k}\cdot \mathbf {b} _{i}=v_{k}~\delta _{i}^{k}=v_{i}} Also, v ⋅ b i = v k b k ⋅ b i = g k i v k {\displaystyle \mathbf {v} \cdot \mathbf {b} _{i}=v^{k}~\mathbf {b} _{k}\cdot \mathbf {b} _{i}=g_{ki}~v^{k}} v ⋅ b i = v k b k ⋅ b i = g k i v k {\displaystyle \mathbf {v} \cdot \mathbf {b} ^{i}=v_{k}~\mathbf {b} ^{k}\cdot \mathbf {b} ^{i}=g^{ki}~v_{k}}
The components of the second-order tensor are related by S i j = g i k S k j = g j k S k i = g i k g j l S k l {\displaystyle S^{ij}=g^{ik}~S_{k}^{~j}=g^{jk}~S_{~k}^{i}=g^{ik}~g^{jl}~S_{kl}}
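These algebraic relations are easy to check numerically. The sketch below (an illustration, not from the cited texts) builds the reciprocal basis from an arbitrary linearly independent basis and uses the metric components to raise and lower vector components:

```python
import numpy as np

B = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0]])       # rows are the covariant basis b_1, b_2, b_3
B_rec = np.linalg.inv(B).T            # rows are the reciprocal basis b^1, b^2, b^3
assert np.allclose(B_rec @ B.T, np.eye(3))    # b^i · b_j = δ^i_j

g = B @ B.T                           # metric components g_ij = b_i · b_j
g_inv = np.linalg.inv(g)              # g^ij
v_contra = np.array([1.0, 2.0, 3.0])  # contravariant components v^k
v_cov = g @ v_contra                  # v_i = g_ik v^k
assert np.allclose(g_inv @ v_cov, v_contra)   # v^i = g^ik v_k
```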
In an orthonormal right-handed basis, the third-order alternating tensor is defined as E = ε i j k e i ⊗ e j ⊗ e k {\displaystyle {\boldsymbol {\mathcal {E}}}=\varepsilon _{ijk}~\mathbf {e} ^{i}\otimes \mathbf {e} ^{j}\otimes \mathbf {e} ^{k}} In a general curvilinear basis the same tensor may be expressed as E = E i j k b i ⊗ b j ⊗ b k = E i j k b i ⊗ b j ⊗ b k {\displaystyle {\boldsymbol {\mathcal {E}}}={\mathcal {E}}_{ijk}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}={\mathcal {E}}^{ijk}~\mathbf {b} _{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} _{k}} It can be shown that E i j k = [ b i , b j , b k ] = ( b i × b j ) ⋅ b k ; E i j k = [ b i , b j , b k ] {\displaystyle {\mathcal {E}}_{ijk}=\left[\mathbf {b} _{i},\mathbf {b} _{j},\mathbf {b} _{k}\right]=(\mathbf {b} _{i}\times \mathbf {b} _{j})\cdot \mathbf {b} _{k}~;~~{\mathcal {E}}^{ijk}=\left[\mathbf {b} ^{i},\mathbf {b} ^{j},\mathbf {b} ^{k}\right]} Now, b i × b j = J ε i j p b p = g ε i j p b p {\displaystyle \mathbf {b} _{i}\times \mathbf {b} _{j}=J~\varepsilon _{ijp}~\mathbf {b} ^{p}={\sqrt {g}}~\varepsilon _{ijp}~\mathbf {b} ^{p}} Hence, E i j k = J ε i j k = g ε i j k {\displaystyle {\mathcal {E}}_{ijk}=J~\varepsilon _{ijk}={\sqrt {g}}~\varepsilon _{ijk}} Similarly, we can show that E i j k = 1 J ε i j k = 1 g ε i j k {\displaystyle {\mathcal {E}}^{ijk}={\cfrac {1}{J}}~\varepsilon ^{ijk}={\cfrac {1}{\sqrt {g}}}~\varepsilon ^{ijk}}
The identity map I {\displaystyle \mathbf {I} } defined by I ⋅ v = v {\displaystyle \mathbf {I} \cdot \mathbf {v} =\mathbf {v} } can be shown to be: [ 4 ] : 39
I = g i j b i ⊗ b j = g i j b i ⊗ b j = b i ⊗ b i = b i ⊗ b i {\displaystyle \mathbf {I} =g^{ij}\mathbf {b} _{i}\otimes \mathbf {b} _{j}=g_{ij}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=\mathbf {b} _{i}\otimes \mathbf {b} ^{i}=\mathbf {b} ^{i}\otimes \mathbf {b} _{i}}
The scalar product of two vectors in curvilinear coordinates is [ 4 ] : 32
u ⋅ v = u i v i = u i v i = g i j u i v j = g i j u i v j {\displaystyle \mathbf {u} \cdot \mathbf {v} =u^{i}v_{i}=u_{i}v^{i}=g_{ij}u^{i}v^{j}=g^{ij}u_{i}v_{j}}
The cross product of two vectors is given by: [ 4 ] : 32–34
u × v = ε i j k u j v k e i {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}u_{j}v_{k}\mathbf {e} _{i}}
where ε i j k {\displaystyle \varepsilon _{ijk}} is the permutation symbol and e i {\displaystyle \mathbf {e} _{i}} is a Cartesian basis vector. In curvilinear coordinates, the equivalent expression is:
u × v = [ ( b m × b n ) ⋅ b s ] u m v n b s = E s m n u m v n b s {\displaystyle \mathbf {u} \times \mathbf {v} =[(\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}]u^{m}v^{n}\mathbf {b} ^{s}={\mathcal {E}}_{smn}u^{m}v^{n}\mathbf {b} ^{s}}
where E i j k {\displaystyle {\mathcal {E}}_{ijk}} is the third-order alternating tensor . To derive this, recall that in Cartesian components the cross product is given by:
u × v = ε i j k u ^ j v ^ k e i {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}{\hat {u}}_{j}{\hat {v}}_{k}\mathbf {e} _{i}}
where ε i j k {\displaystyle \varepsilon _{ijk}} is the permutation symbol and e i {\displaystyle \mathbf {e} _{i}} is a Cartesian basis vector. Therefore,
e p × e q = ε i p q e i {\displaystyle \mathbf {e} _{p}\times \mathbf {e} _{q}=\varepsilon _{ipq}\mathbf {e} _{i}}
and
b m × b n = ∂ x ∂ q m × ∂ x ∂ q n = ∂ ( x p e p ) ∂ q m × ∂ ( x q e q ) ∂ q n = ∂ x p ∂ q m ∂ x q ∂ q n e p × e q = ε i p q ∂ x p ∂ q m ∂ x q ∂ q n e i . {\displaystyle \mathbf {b} _{m}\times \mathbf {b} _{n}={\frac {\partial \mathbf {x} }{\partial q^{m}}}\times {\frac {\partial \mathbf {x} }{\partial q^{n}}}={\frac {\partial (x_{p}\mathbf {e} _{p})}{\partial q^{m}}}\times {\frac {\partial (x_{q}\mathbf {e} _{q})}{\partial q^{n}}}={\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}\mathbf {e} _{p}\times \mathbf {e} _{q}=\varepsilon _{ipq}{\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}\mathbf {e} _{i}.}
Hence,
( b m × b n ) ⋅ b s = ε i p q ∂ x p ∂ q m ∂ x q ∂ q n ∂ x i ∂ q s {\displaystyle (\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}=\varepsilon _{ipq}{\frac {\partial x_{p}}{\partial q^{m}}}{\frac {\partial x_{q}}{\partial q^{n}}}{\frac {\partial x_{i}}{\partial q^{s}}}}
Returning to the vector product and using the relations:
u ^ j = ∂ x j ∂ q m u m , v ^ k = ∂ x k ∂ q n v n , e i = ∂ x i ∂ q s b s , {\displaystyle {\hat {u}}_{j}={\frac {\partial x_{j}}{\partial q^{m}}}u^{m},\quad {\hat {v}}_{k}={\frac {\partial x_{k}}{\partial q^{n}}}v^{n},\quad \mathbf {e} _{i}={\frac {\partial x_{i}}{\partial q^{s}}}\mathbf {b} ^{s},}
gives us:
u × v = ε i j k u ^ j v ^ k e i = ε i j k ∂ x j ∂ q m ∂ x k ∂ q n ∂ x i ∂ q s u m v n b s = [ ( b m × b n ) ⋅ b s ] u m v n b s = E s m n u m v n b s {\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{ijk}{\hat {u}}_{j}{\hat {v}}_{k}\mathbf {e} _{i}=\varepsilon _{ijk}{\frac {\partial x_{j}}{\partial q^{m}}}{\frac {\partial x_{k}}{\partial q^{n}}}{\frac {\partial x_{i}}{\partial q^{s}}}u^{m}v^{n}\mathbf {b} ^{s}=[(\mathbf {b} _{m}\times \mathbf {b} _{n})\cdot \mathbf {b} _{s}]u^{m}v^{n}\mathbf {b} ^{s}={\mathcal {E}}_{smn}u^{m}v^{n}\mathbf {b} ^{s}}
The action v = S u {\displaystyle \mathbf {v} ={\boldsymbol {S}}\mathbf {u} } can be expressed in curvilinear coordinates as
v i b i = S i j u j b i = S j i u j b i ; v i b i = S i j u i b i = S i j u j b i {\displaystyle v^{i}\mathbf {b} _{i}=S^{ij}u_{j}\mathbf {b} _{i}=S_{j}^{i}u^{j}\mathbf {b} _{i};\qquad v_{i}\mathbf {b} ^{i}=S_{ij}u^{i}\mathbf {b} ^{i}=S_{i}^{j}u_{j}\mathbf {b} ^{i}}
The inner product of two second-order tensors U = S ⋅ T {\displaystyle {\boldsymbol {U}}={\boldsymbol {S}}\cdot {\boldsymbol {T}}} can be expressed in curvilinear coordinates as
U i j b i ⊗ b j = S i k T . j k b i ⊗ b j = S i . k T k j b i ⊗ b j {\displaystyle U_{ij}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=S_{ik}T_{.j}^{k}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}=S_{i}^{.k}T_{kj}\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}}
Alternatively,
U = S i j T . n m g j m b i ⊗ b n = S . m i T . n m b i ⊗ b n = S i j T j n b i ⊗ b n {\displaystyle {\boldsymbol {U}}=S^{ij}T_{.n}^{m}g_{jm}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}=S_{.m}^{i}T_{.n}^{m}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}=S^{ij}T_{jn}\mathbf {b} _{i}\otimes \mathbf {b} ^{n}}
If S {\displaystyle {\boldsymbol {S}}} is a second-order tensor, then the determinant is defined by the relation
[ S u , S v , S w ] = det S [ u , v , w ] {\displaystyle \left[{\boldsymbol {S}}\mathbf {u} ,{\boldsymbol {S}}\mathbf {v} ,{\boldsymbol {S}}\mathbf {w} \right]=\det {\boldsymbol {S}}\left[\mathbf {u} ,\mathbf {v} ,\mathbf {w} \right]}
where u , v , w {\displaystyle \mathbf {u} ,\mathbf {v} ,\mathbf {w} } are arbitrary vectors and
[ u , v , w ] := u ⋅ ( v × w ) . {\displaystyle \left[\mathbf {u} ,\mathbf {v} ,\mathbf {w} \right]:=\mathbf {u} \cdot (\mathbf {v} \times \mathbf {w} ).}
Let ( e 1 , e 2 , e 3 ) {\displaystyle (\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3})} be the usual Cartesian basis vectors for the Euclidean space of interest and let b i = F e i {\displaystyle \mathbf {b} _{i}={\boldsymbol {F}}\mathbf {e} _{i}} where F {\displaystyle {\boldsymbol {F}}} is a second-order transformation tensor that maps e i {\displaystyle \mathbf {e} _{i}} to b i {\displaystyle \mathbf {b} _{i}} . Then, b i ⊗ e i = ( F e i ) ⊗ e i = F ( e i ⊗ e i ) = F . {\displaystyle \mathbf {b} _{i}\otimes \mathbf {e} _{i}=({\boldsymbol {F}}\mathbf {e} _{i})\otimes \mathbf {e} _{i}={\boldsymbol {F}}(\mathbf {e} _{i}\otimes \mathbf {e} _{i})={\boldsymbol {F}}~.} From this relation we can show that b i = F − T e i ; g i j = [ F − 1 F − T ] i j ; g i j = [ g i j ] − 1 = [ F T F ] i j {\displaystyle \mathbf {b} ^{i}={\boldsymbol {F}}^{-{\rm {T}}}\mathbf {e} ^{i}~;~~g^{ij}=[{\boldsymbol {F}}^{-{\rm {1}}}{\boldsymbol {F}}^{-{\rm {T}}}]_{ij}~;~~g_{ij}=[g^{ij}]^{-1}=[{\boldsymbol {F}}^{\rm {T}}{\boldsymbol {F}}]_{ij}} Let J := det F {\displaystyle J:=\det {\boldsymbol {F}}} be the Jacobian of the transformation. Then, from the definition of the determinant, [ b 1 , b 2 , b 3 ] = det F [ e 1 , e 2 , e 3 ] . {\displaystyle \left[\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\right]=\det {\boldsymbol {F}}\left[\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\right]~.} Since [ e 1 , e 2 , e 3 ] = 1 {\displaystyle \left[\mathbf {e} _{1},\mathbf {e} _{2},\mathbf {e} _{3}\right]=1} we have J = det F = [ b 1 , b 2 , b 3 ] = b 1 ⋅ ( b 2 × b 3 ) {\displaystyle J=\det {\boldsymbol {F}}=\left[\mathbf {b} _{1},\mathbf {b} _{2},\mathbf {b} _{3}\right]=\mathbf {b} _{1}\cdot (\mathbf {b} _{2}\times \mathbf {b} _{3})} A number of interesting results can be derived using the above relations.
First, consider g := det [ g i j ] {\displaystyle g:=\det[g_{ij}]} Then g = det [ F T ] ⋅ det [ F ] = J ⋅ J = J 2 {\displaystyle g=\det[{\boldsymbol {F}}^{\rm {T}}]\cdot \det[{\boldsymbol {F}}]=J\cdot J=J^{2}} Similarly, we can show that det [ g i j ] = 1 J 2 {\displaystyle \det[g^{ij}]={\cfrac {1}{J^{2}}}} Therefore, using the fact that [ g i j ] = [ g i j ] − 1 {\displaystyle [g^{ij}]=[g_{ij}]^{-1}} , ∂ g ∂ g i j = 2 J ∂ J ∂ g i j = g g i j {\displaystyle {\cfrac {\partial g}{\partial g_{ij}}}=2~J~{\cfrac {\partial J}{\partial g_{ij}}}=g~g^{ij}}
Another interesting relation is derived below. Recall that b i ⋅ b j = δ j i ⇒ b 1 ⋅ b 1 = 1 , b 1 ⋅ b 2 = b 1 ⋅ b 3 = 0 ⇒ b 1 = A ( b 2 × b 3 ) {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} _{j}=\delta _{j}^{i}\quad \Rightarrow \quad \mathbf {b} ^{1}\cdot \mathbf {b} _{1}=1,~\mathbf {b} ^{1}\cdot \mathbf {b} _{2}=\mathbf {b} ^{1}\cdot \mathbf {b} _{3}=0\quad \Rightarrow \quad \mathbf {b} ^{1}=A~(\mathbf {b} _{2}\times \mathbf {b} _{3})} where A {\displaystyle A} is a, yet undetermined, constant. Then b 1 ⋅ b 1 = A b 1 ⋅ ( b 2 × b 3 ) = A J = 1 ⇒ A = 1 J {\displaystyle \mathbf {b} ^{1}\cdot \mathbf {b} _{1}=A~\mathbf {b} _{1}\cdot (\mathbf {b} _{2}\times \mathbf {b} _{3})=AJ=1\quad \Rightarrow \quad A={\cfrac {1}{J}}} This observation leads to the relations b 1 = 1 J ( b 2 × b 3 ) ; b 2 = 1 J ( b 3 × b 1 ) ; b 3 = 1 J ( b 1 × b 2 ) {\displaystyle \mathbf {b} ^{1}={\cfrac {1}{J}}(\mathbf {b} _{2}\times \mathbf {b} _{3})~;~~\mathbf {b} ^{2}={\cfrac {1}{J}}(\mathbf {b} _{3}\times \mathbf {b} _{1})~;~~\mathbf {b} ^{3}={\cfrac {1}{J}}(\mathbf {b} _{1}\times \mathbf {b} _{2})} In index notation, ε i j k b k = 1 J ( b i × b j ) = 1 g ( b i × b j ) {\displaystyle \varepsilon _{ijk}~\mathbf {b} ^{k}={\cfrac {1}{J}}(\mathbf {b} _{i}\times \mathbf {b} _{j})={\cfrac {1}{\sqrt {g}}}(\mathbf {b} _{i}\times \mathbf {b} _{j})} where ε i j k {\displaystyle \varepsilon _{ijk}} is the usual permutation symbol .
We have not identified an explicit expression for the transformation tensor F {\displaystyle {\boldsymbol {F}}} because an alternative form of the mapping between curvilinear and Cartesian bases is more useful. Assuming a sufficient degree of smoothness in the mapping (and a bit of abuse of notation), we have b i = ∂ x ∂ q i = ∂ x ∂ x j ∂ x j ∂ q i = e j ∂ x j ∂ q i {\displaystyle \mathbf {b} _{i}={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}={\cfrac {\partial \mathbf {x} }{\partial x_{j}}}~{\cfrac {\partial x_{j}}{\partial q^{i}}}=\mathbf {e} _{j}~{\cfrac {\partial x_{j}}{\partial q^{i}}}} Similarly, e i = b j ∂ q j ∂ x i {\displaystyle \mathbf {e} _{i}=\mathbf {b} _{j}~{\cfrac {\partial q^{j}}{\partial x_{i}}}} From these results we have e k ⋅ b i = ∂ x k ∂ q i ⇒ ∂ x k ∂ q i b i = e k ⋅ ( b i ⊗ b i ) = e k {\displaystyle \mathbf {e} ^{k}\cdot \mathbf {b} _{i}={\frac {\partial x_{k}}{\partial q^{i}}}\quad \Rightarrow \quad {\frac {\partial x_{k}}{\partial q^{i}}}~\mathbf {b} ^{i}=\mathbf {e} ^{k}\cdot (\mathbf {b} _{i}\otimes \mathbf {b} ^{i})=\mathbf {e} ^{k}} and b k = ∂ q k ∂ x i e i {\displaystyle \mathbf {b} ^{k}={\frac {\partial q^{k}}{\partial x_{i}}}~\mathbf {e} ^{i}}
Simmonds, [ 4 ] in his book on tensor analysis , quotes Albert Einstein saying [ 7 ]
The magic of this theory will hardly fail to impose itself on anybody who has truly understood it; it represents a genuine triumph of the method of absolute differential calculus, founded by Gauss, Riemann, Ricci, and Levi-Civita.
Vector and tensor calculus in general curvilinear coordinates is used in tensor analysis on four-dimensional curvilinear manifolds in general relativity , [ 8 ] in the mechanics of curved shells , [ 6 ] in examining the invariance properties of Maxwell's equations which has been of interest in metamaterials [ 9 ] [ 10 ] and in many other fields.
Some useful relations in the calculus of vectors and second-order tensors in curvilinear coordinates are given in this section. The notation and contents are primarily from Ogden, [ 2 ] Simmonds, [ 4 ] Green and Zerna, [ 1 ] Basar and Weichert, [ 5 ] and Ciarlet. [ 6 ]
Let the position of a point in space be characterized by three coordinate variables ( q 1 , q 2 , q 3 ) {\displaystyle (q^{1},q^{2},q^{3})} .
The coordinate curve q 1 {\displaystyle q^{1}} represents a curve on which q 2 {\displaystyle q^{2}} and q 3 {\displaystyle q^{3}} are constant. Let x {\displaystyle \mathbf {x} } be the position vector of the point relative to some origin. Then, assuming that such a mapping and its inverse exist and are continuous, we can write [ 2 ] : 55 x = φ ( q 1 , q 2 , q 3 ) ; q i = ψ i ( x ) = [ φ − 1 ( x ) ] i {\displaystyle \mathbf {x} ={\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})~;~~q^{i}=\psi ^{i}(\mathbf {x} )=[{\boldsymbol {\varphi }}^{-1}(\mathbf {x} )]^{i}} The fields ψ i ( x ) {\displaystyle \psi ^{i}(\mathbf {x} )} are called the curvilinear coordinate functions of the curvilinear coordinate system ψ ( x ) = φ − 1 ( x ) {\displaystyle {\boldsymbol {\psi }}(\mathbf {x} )={\boldsymbol {\varphi }}^{-1}(\mathbf {x} )} .
The q i {\displaystyle q^{i}} coordinate curves are defined by the one-parameter family of functions given by x i ( α ) = φ ( α , q j , q k ) , i ≠ j ≠ k {\displaystyle \mathbf {x} _{i}(\alpha )={\boldsymbol {\varphi }}(\alpha ,q^{j},q^{k})~,~~i\neq j\neq k} with q j {\displaystyle q^{j}} , q k {\displaystyle q^{k}} fixed.
The tangent vector to the curve x i {\displaystyle \mathbf {x} _{i}} at the point x i ( α ) {\displaystyle \mathbf {x} _{i}(\alpha )} (or to the coordinate curve q i {\displaystyle q_{i}} at the point x {\displaystyle \mathbf {x} } ) is d x i d α ≡ ∂ x ∂ q i {\displaystyle {\cfrac {\rm {{d}\mathbf {x} _{i}}}{\rm {{d}\alpha }}}\equiv {\cfrac {\partial \mathbf {x} }{\partial q^{i}}}}
Let f ( x ) {\displaystyle f(\mathbf {x} )} be a scalar field in space. Then f ( x ) = f [ φ ( q 1 , q 2 , q 3 ) ] = f φ ( q 1 , q 2 , q 3 ) {\displaystyle f(\mathbf {x} )=f[{\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})]=f_{\varphi }(q^{1},q^{2},q^{3})} The gradient of the field f {\displaystyle f} is defined by [ ∇ f ( x ) ] ⋅ c = d d α f ( x + α c ) | α = 0 {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\rm {d}}{\rm {{d}\alpha }}}f(\mathbf {x} +\alpha \mathbf {c} ){\biggr |}_{\alpha =0}} where c {\displaystyle \mathbf {c} } is an arbitrary constant vector. If we define the components c i {\displaystyle c^{i}} of c {\displaystyle \mathbf {c} } such that q i + α c i = ψ i ( x + α c ) {\displaystyle q^{i}+\alpha ~c^{i}=\psi ^{i}(\mathbf {x} +\alpha ~\mathbf {c} )} then [ ∇ f ( x ) ] ⋅ c = d d α f φ ( q 1 + α c 1 , q 2 + α c 2 , q 3 + α c 3 ) | α = 0 = ∂ f φ ∂ q i c i = ∂ f ∂ q i c i {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\rm {d}}{\rm {{d}\alpha }}}f_{\varphi }(q^{1}+\alpha ~c^{1},q^{2}+\alpha ~c^{2},q^{3}+\alpha ~c^{3}){\biggr |}_{\alpha =0}={\cfrac {\partial f_{\varphi }}{\partial q^{i}}}~c^{i}={\cfrac {\partial f}{\partial q^{i}}}~c^{i}}
If we set f ( x ) = ψ i ( x ) {\displaystyle f(\mathbf {x} )=\psi ^{i}(\mathbf {x} )} , then since q i = ψ i ( x ) {\displaystyle q^{i}=\psi ^{i}(\mathbf {x} )} , we have [ ∇ ψ i ( x ) ] ⋅ c = ∂ ψ i ∂ q j c j = c i {\displaystyle [{\boldsymbol {\nabla }}\psi ^{i}(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial \psi ^{i}}{\partial q^{j}}}~c^{j}=c^{i}} which provides a means of extracting the contravariant component of a vector c {\displaystyle \mathbf {c} } .
If b i {\displaystyle \mathbf {b} _{i}} is the covariant (or natural) basis at a point, and if b i {\displaystyle \mathbf {b} ^{i}} is the contravariant (or reciprocal) basis at that point, then [ ∇ f ( x ) ] ⋅ c = ∂ f ∂ q i c i = ( ∂ f ∂ q i b i ) ( c i b i ) ⇒ ∇ f ( x ) = ∂ f ∂ q i b i {\displaystyle [{\boldsymbol {\nabla }}f(\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial f}{\partial q^{i}}}~c^{i}=\left({\cfrac {\partial f}{\partial q^{i}}}~\mathbf {b} ^{i}\right)\left(c^{i}~\mathbf {b} _{i}\right)\quad \Rightarrow \quad {\boldsymbol {\nabla }}f(\mathbf {x} )={\cfrac {\partial f}{\partial q^{i}}}~\mathbf {b} ^{i}} A brief rationale for this choice of basis is given in the next section.
A similar process can be used to arrive at the gradient of a vector field f ( x ) {\displaystyle \mathbf {f} (\mathbf {x} )} . The gradient is given by [ ∇ f ( x ) ] ⋅ c = ∂ f ∂ q i c i {\displaystyle [{\boldsymbol {\nabla }}\mathbf {f} (\mathbf {x} )]\cdot \mathbf {c} ={\cfrac {\partial \mathbf {f} }{\partial q^{i}}}~c^{i}} If we consider the gradient of the position vector field r ( x ) = x {\displaystyle \mathbf {r} (\mathbf {x} )=\mathbf {x} } , then we can show that c = ∂ x ∂ q i c i = b i ( x ) c i ; b i ( x ) := ∂ x ∂ q i {\displaystyle \mathbf {c} ={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}~c^{i}=\mathbf {b} _{i}(\mathbf {x} )~c^{i}~;~~\mathbf {b} _{i}(\mathbf {x} ):={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}} The vector field b i {\displaystyle \mathbf {b} _{i}} is tangent to the q i {\displaystyle q^{i}} coordinate curve and forms a natural basis at each point on the curve. This basis, as discussed at the beginning of this article, is also called the covariant curvilinear basis. We can also define a reciprocal basis , or contravariant curvilinear basis, b i {\displaystyle \mathbf {b} ^{i}} . All the algebraic relations between the basis vectors, as discussed in the section on tensor algebra, apply for the natural basis and its reciprocal at each point x {\displaystyle \mathbf {x} } .
Since c {\displaystyle \mathbf {c} } is arbitrary, we can write ∇ f ( x ) = ∂ f ∂ q i ⊗ b i {\displaystyle {\boldsymbol {\nabla }}\mathbf {f} (\mathbf {x} )={\cfrac {\partial \mathbf {f} }{\partial q^{i}}}\otimes \mathbf {b} ^{i}}
Note that the contravariant basis vector b i {\displaystyle \mathbf {b} ^{i}} is perpendicular to the surface of constant ψ i {\displaystyle \psi ^{i}} and is given by b i = ∇ ψ i {\displaystyle \mathbf {b} ^{i}={\boldsymbol {\nabla }}\psi ^{i}}
The Christoffel symbols of the first kind are defined as b i , j = ∂ b i ∂ q j := Γ i j k b k ⇒ b i , j ⋅ b l = Γ i j l {\displaystyle \mathbf {b} _{i,j}={\frac {\partial \mathbf {b} _{i}}{\partial q^{j}}}:=\Gamma _{ijk}~\mathbf {b} ^{k}\quad \Rightarrow \quad \mathbf {b} _{i,j}\cdot \mathbf {b} _{l}=\Gamma _{ijl}} To express Γ i j k {\displaystyle \Gamma _{ijk}} in terms of g i j {\displaystyle g_{ij}} we note that g i j , k = ( b i ⋅ b j ) , k = b i , k ⋅ b j + b i ⋅ b j , k = Γ i k j + Γ j k i g i k , j = ( b i ⋅ b k ) , j = b i , j ⋅ b k + b i ⋅ b k , j = Γ i j k + Γ k j i g j k , i = ( b j ⋅ b k ) , i = b j , i ⋅ b k + b j ⋅ b k , i = Γ j i k + Γ k i j {\displaystyle {\begin{aligned}g_{ij,k}&=(\mathbf {b} _{i}\cdot \mathbf {b} _{j})_{,k}=\mathbf {b} _{i,k}\cdot \mathbf {b} _{j}+\mathbf {b} _{i}\cdot \mathbf {b} _{j,k}=\Gamma _{ikj}+\Gamma _{jki}\\g_{ik,j}&=(\mathbf {b} _{i}\cdot \mathbf {b} _{k})_{,j}=\mathbf {b} _{i,j}\cdot \mathbf {b} _{k}+\mathbf {b} _{i}\cdot \mathbf {b} _{k,j}=\Gamma _{ijk}+\Gamma _{kji}\\g_{jk,i}&=(\mathbf {b} _{j}\cdot \mathbf {b} _{k})_{,i}=\mathbf {b} _{j,i}\cdot \mathbf {b} _{k}+\mathbf {b} _{j}\cdot \mathbf {b} _{k,i}=\Gamma _{jik}+\Gamma _{kij}\end{aligned}}} Since b i , j = b j , i {\displaystyle \mathbf {b} _{i,j}=\mathbf {b} _{j,i}} we have Γ i j k = Γ j i k {\displaystyle \Gamma _{ijk}=\Gamma _{jik}} . Using these to rearrange the above relations gives Γ i j k = 1 2 ( g i k , j + g j k , i − g i j , k ) = 1 2 [ ( b i ⋅ b k ) , j + ( b j ⋅ b k ) , i − ( b i ⋅ b j ) , k ] {\displaystyle \Gamma _{ijk}={\frac {1}{2}}(g_{ik,j}+g_{jk,i}-g_{ij,k})={\frac {1}{2}}[(\mathbf {b} _{i}\cdot \mathbf {b} _{k})_{,j}+(\mathbf {b} _{j}\cdot \mathbf {b} _{k})_{,i}-(\mathbf {b} _{i}\cdot \mathbf {b} _{j})_{,k}]}
The Christoffel symbols of the second kind are defined as Γ i j k = Γ j i k {\displaystyle \Gamma _{ij}^{k}=\Gamma _{ji}^{k}} in which
∂ b i ∂ q j = Γ i j k b k {\displaystyle {\cfrac {\partial \mathbf {b} _{i}}{\partial q^{j}}}=\Gamma _{ij}^{k}~\mathbf {b} _{k}}
This implies that Γ i j k = ∂ b i ∂ q j ⋅ b k = − b i ⋅ ∂ b k ∂ q j {\displaystyle \Gamma _{ij}^{k}={\cfrac {\partial \mathbf {b} _{i}}{\partial q^{j}}}\cdot \mathbf {b} ^{k}=-\mathbf {b} _{i}\cdot {\cfrac {\partial \mathbf {b} ^{k}}{\partial q^{j}}}} Other relations that follow are ∂ b i ∂ q j = − Γ j k i b k ; ∇ b i = Γ i j k b k ⊗ b j ; ∇ b i = − Γ j k i b k ⊗ b j {\displaystyle {\cfrac {\partial \mathbf {b} ^{i}}{\partial q^{j}}}=-\Gamma _{jk}^{i}~\mathbf {b} ^{k}~;~~{\boldsymbol {\nabla }}\mathbf {b} _{i}=\Gamma _{ij}^{k}~\mathbf {b} _{k}\otimes \mathbf {b} ^{j}~;~~{\boldsymbol {\nabla }}\mathbf {b} ^{i}=-\Gamma _{jk}^{i}~\mathbf {b} ^{k}\otimes \mathbf {b} ^{j}}
Another particularly useful relation, which shows that the Christoffel symbol depends only on the metric tensor and its derivatives, is Γ i j k = g k m 2 ( ∂ g m i ∂ q j + ∂ g m j ∂ q i − ∂ g i j ∂ q m ) {\displaystyle \Gamma _{ij}^{k}={\frac {g^{km}}{2}}\left({\frac {\partial g_{mi}}{\partial q^{j}}}+{\frac {\partial g_{mj}}{\partial q^{i}}}-{\frac {\partial g_{ij}}{\partial q^{m}}}\right)}
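This formula lends itself directly to computation. The following SymPy sketch (illustrative, not from the cited texts) evaluates the Christoffel symbols of the second kind for plane polar coordinates from the metric alone:

```python
import sympy as sp

# Christoffel symbols of the second kind from the metric g = diag(1, r^2)
# of plane polar coordinates (q^1, q^2) = (r, θ).
r, th = sp.symbols('r theta', positive=True)
q = [r, th]
g = sp.Matrix([[1, 0], [0, r**2]])
g_inv = g.inv()

def Gamma(k, i, j):
    # Γ^k_ij = (g^{km}/2) (∂g_mi/∂q^j + ∂g_mj/∂q^i − ∂g_ij/∂q^m)
    return sp.simplify(sum(g_inv[k, m]*(sp.diff(g[m, i], q[j])
                                        + sp.diff(g[m, j], q[i])
                                        - sp.diff(g[i, j], q[m]))
                           for m in range(2))/2)

print(Gamma(0, 1, 1))   # Γ^r_θθ = -r
print(Gamma(1, 0, 1))   # Γ^θ_rθ = 1/r
```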
The following expressions for the gradient of a vector field in curvilinear coordinates are quite useful. ∇ v = [ ∂ v i ∂ q k + Γ l k i v l ] b i ⊗ b k = [ ∂ v i ∂ q k − Γ k i l v l ] b i ⊗ b k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} &=\left[{\cfrac {\partial v^{i}}{\partial q^{k}}}+\Gamma _{lk}^{i}~v^{l}\right]~\mathbf {b} _{i}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial v_{i}}{\partial q^{k}}}-\Gamma _{ki}^{l}~v_{l}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} ^{k}\end{aligned}}}
The vector field v {\displaystyle \mathbf {v} } can be represented as v = v i b i = v ^ i b ^ i {\displaystyle \mathbf {v} =v_{i}~\mathbf {b} ^{i}={\hat {v}}_{i}~{\hat {\mathbf {b} }}^{i}} where v i {\displaystyle v_{i}} are the covariant components of the field, v ^ i {\displaystyle {\hat {v}}_{i}} are the physical components, and (no summation ) b ^ i = b i g i i {\displaystyle {\hat {\mathbf {b} }}^{i}={\cfrac {\mathbf {b} ^{i}}{\sqrt {g^{ii}}}}} is the normalized contravariant basis vector.
The gradient of a second order tensor field can similarly be expressed as ∇ S = ∂ S ∂ q i ⊗ b i {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {S}}={\frac {\partial {\boldsymbol {S}}}{\partial q^{i}}}\otimes \mathbf {b} ^{i}}
If we consider the expression for the tensor in terms of a contravariant basis, then ∇ S = ∂ ∂ q k [ S i j b i ⊗ b j ] ⊗ b k = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] b i ⊗ b j ⊗ b k {\displaystyle {\boldsymbol {\nabla }}{\boldsymbol {S}}={\frac {\partial }{\partial q^{k}}}[S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}]\otimes \mathbf {b} ^{k}=\left[{\frac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}} We may also write ∇ S = [ ∂ S i j ∂ q k + Γ k l i S l j + Γ k l j S i l ] b i ⊗ b j ⊗ b k = [ ∂ S j i ∂ q k + Γ k l i S j l − Γ k j l S l i ] b i ⊗ b j ⊗ b k = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k l j S i l ] b i ⊗ b j ⊗ b k {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {S}}&=\left[{\cfrac {\partial S^{ij}}{\partial q^{k}}}+\Gamma _{kl}^{i}~S^{lj}+\Gamma _{kl}^{j}~S^{il}\right]~\mathbf {b} _{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial S_{~j}^{i}}{\partial q^{k}}}+\Gamma _{kl}^{i}~S_{~j}^{l}-\Gamma _{kj}^{l}~S_{~l}^{i}\right]~\mathbf {b} _{i}\otimes \mathbf {b} ^{j}\otimes \mathbf {b} ^{k}\\[8pt]&=\left[{\cfrac {\partial S_{i}^{~j}}{\partial q^{k}}}-\Gamma _{ik}^{l}~S_{l}^{~j}+\Gamma _{kl}^{j}~S_{i}^{~l}\right]~\mathbf {b} ^{i}\otimes \mathbf {b} _{j}\otimes \mathbf {b} ^{k}\end{aligned}}}
The physical components of a second-order tensor field can be obtained by using a normalized contravariant basis, i.e., S = S i j b i ⊗ b j = S ^ i j b ^ i ⊗ b ^ j {\displaystyle {\boldsymbol {S}}=S_{ij}~\mathbf {b} ^{i}\otimes \mathbf {b} ^{j}={\hat {S}}_{ij}~{\hat {\mathbf {b} }}^{i}\otimes {\hat {\mathbf {b} }}^{j}} where the hatted basis vectors have been normalized. This implies that (again no summation)
S ^ i j = S i j g i i g j j {\displaystyle {\hat {S}}_{ij}=S_{ij}~{\sqrt {g^{ii}~g^{jj}}}}
The divergence of a vector field v {\displaystyle \mathbf {v} } is defined as div v = ∇ ⋅ v = tr ( ∇ v ) {\displaystyle \operatorname {div} ~\mathbf {v} ={\boldsymbol {\nabla }}\cdot \mathbf {v} ={\text{tr}}({\boldsymbol {\nabla }}\mathbf {v} )} In terms of components with respect to a curvilinear basis ∇ ⋅ v = ∂ v i ∂ q i + Γ ℓ i i v ℓ = [ ∂ v i ∂ q j − Γ j i ℓ v ℓ ] g i j {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {\partial v^{i}}{\partial q^{i}}}+\Gamma _{\ell i}^{i}~v^{\ell }=\left[{\cfrac {\partial v_{i}}{\partial q^{j}}}-\Gamma _{ji}^{\ell }~v_{\ell }\right]~g^{ij}}
An alternative equation for the divergence of a vector field is frequently used. To derive this relation recall that ∇ ⋅ v = ∂ v i ∂ q i + Γ ℓ i i v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+\Gamma _{\ell i}^{i}~v^{\ell }} Now, Γ ℓ i i = Γ i ℓ i = g m i 2 [ ∂ g i m ∂ q ℓ + ∂ g ℓ m ∂ q i − ∂ g i l ∂ q m ] {\displaystyle \Gamma _{\ell i}^{i}=\Gamma _{i\ell }^{i}={\cfrac {g^{mi}}{2}}\left[{\frac {\partial g_{im}}{\partial q^{\ell }}}+{\frac {\partial g_{\ell m}}{\partial q^{i}}}-{\frac {\partial g_{il}}{\partial q^{m}}}\right]} Noting that, due to the symmetry of g {\displaystyle {\boldsymbol {g}}} , g m i ∂ g ℓ m ∂ q i = g m i ∂ g i ℓ ∂ q m {\displaystyle g^{mi}~{\frac {\partial g_{\ell m}}{\partial q^{i}}}=g^{mi}~{\frac {\partial g_{i\ell }}{\partial q^{m}}}} we have ∇ ⋅ v = ∂ v i ∂ q i + g m i 2 ∂ g i m ∂ q ℓ v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {g^{mi}}{2}}~{\frac {\partial g_{im}}{\partial q^{\ell }}}~v^{\ell }} Recall that if [ g i j ] {\displaystyle [g_{ij}]} is the matrix whose components are g i j {\displaystyle g_{ij}} , then the inverse of the matrix is [ g i j ] − 1 = [ g i j ] {\displaystyle [g_{ij}]^{-1}=[g^{ij}]} . The inverse of the matrix is given by [ g i j ] = [ g i j ] − 1 = A i j g ; g := det ( [ g i j ] ) = det g {\displaystyle [g^{ij}]=[g_{ij}]^{-1}={\cfrac {A^{ij}}{g}}~;~~g:=\det([g_{ij}])=\det {\boldsymbol {g}}} where A i j {\displaystyle A^{ij}} is the cofactor matrix of the components g i j {\displaystyle g_{ij}} . From matrix algebra we have g = det ( [ g i j ] ) = ∑ i g i j A i j ⇒ ∂ g ∂ g i j = A i j {\displaystyle g=\det([g_{ij}])=\sum _{i}g_{ij}~A^{ij}\quad \Rightarrow \quad {\frac {\partial g}{\partial g_{ij}}}=A^{ij}} Hence, [ g i j ] = 1 g ∂ g ∂ g i j {\displaystyle [g^{ij}]={\cfrac {1}{g}}~{\frac {\partial g}{\partial g_{ij}}}} Plugging this relation into the expression for the divergence gives ∇ ⋅ v = ∂ v i ∂ q i + 1 2 g ∂ g ∂ g m i ∂ g i m ∂ q ℓ v ℓ = ∂ v i ∂ q i + 1 2 g ∂ g ∂ q ℓ v ℓ {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial g_{mi}}}~{\frac {\partial g_{im}}{\partial q^{\ell }}}~v^{\ell }={\frac {\partial v^{i}}{\partial q^{i}}}+{\cfrac {1}{2g}}~{\frac {\partial g}{\partial q^{\ell }}}~v^{\ell }} A little manipulation leads to the more compact form ∇ ⋅ v = 1 g ∂ ∂ q i ( v i g ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}(v^{i}~{\sqrt {g}})}
The divergence of a second-order tensor field is defined using ( ∇ ⋅ S ) ⋅ a = ∇ ⋅ ( S a ) {\displaystyle ({\boldsymbol {\nabla }}\cdot {\boldsymbol {S}})\cdot \mathbf {a} ={\boldsymbol {\nabla }}\cdot ({\boldsymbol {S}}\mathbf {a} )} where a {\displaystyle \mathbf {a} } is an arbitrary constant vector. [ 11 ] In curvilinear coordinates, ∇ ⋅ S = [ ∂ S i j ∂ q k − Γ k i l S l j − Γ k j l S i l ] g i k b j = [ ∂ S i j ∂ q i + Γ i l i S l j + Γ i l j S i l ] b j = [ ∂ S j i ∂ q i + Γ i l i S j l − Γ i j l S l i ] b j = [ ∂ S i j ∂ q k − Γ i k l S l j + Γ k l j S i l ] g i k b j {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&=\left[{\cfrac {\partial S_{ij}}{\partial q^{k}}}-\Gamma _{ki}^{l}~S_{lj}-\Gamma _{kj}^{l}~S_{il}\right]~g^{ik}~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S^{ij}}{\partial q^{i}}}+\Gamma _{il}^{i}~S^{lj}+\Gamma _{il}^{j}~S^{il}\right]~\mathbf {b} _{j}\\[8pt]&=\left[{\cfrac {\partial S_{~j}^{i}}{\partial q^{i}}}+\Gamma _{il}^{i}~S_{~j}^{l}-\Gamma _{ij}^{l}~S_{~l}^{i}\right]~\mathbf {b} ^{j}\\[8pt]&=\left[{\cfrac {\partial S_{i}^{~j}}{\partial q^{k}}}-\Gamma _{ik}^{l}~S_{l}^{~j}+\Gamma _{kl}^{j}~S_{i}^{~l}\right]~g^{ik}~\mathbf {b} _{j}\end{aligned}}}
The Laplacian of a scalar field φ ( x ) {\displaystyle \varphi (\mathbf {x} )} is defined as ∇ 2 φ := ∇ ⋅ ( ∇ φ ) {\displaystyle \nabla ^{2}\varphi :={\boldsymbol {\nabla }}\cdot ({\boldsymbol {\nabla }}\varphi )} Using the alternative expression for the divergence of a vector field gives us ∇ 2 φ = 1 g ∂ ∂ q i ( [ ∇ φ ] i g ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}([{\boldsymbol {\nabla }}\varphi ]^{i}~{\sqrt {g}})} Now ∇ φ = ∂ φ ∂ q l b l = g l i ∂ φ ∂ q l b i ⇒ [ ∇ φ ] i = g l i ∂ φ ∂ q l {\displaystyle {\boldsymbol {\nabla }}\varphi ={\frac {\partial \varphi }{\partial q^{l}}}~\mathbf {b} ^{l}=g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}~\mathbf {b} _{i}\quad \Rightarrow \quad [{\boldsymbol {\nabla }}\varphi ]^{i}=g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}} Therefore, ∇ 2 φ = 1 g ∂ ∂ q i ( g l i ∂ φ ∂ q l g ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{\sqrt {g}}}~{\frac {\partial }{\partial q^{i}}}\left(g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}~{\sqrt {g}}\right)}
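A quick symbolic check of this formula (illustrative, not from the cited texts): in polar coordinates √g = r, g^{rr} = 1, and g^{θθ} = 1/r², so the formula reproduces the familiar polar Laplacian:

```python
import sympy as sp

# ∇²φ = (1/√g) ∂_i(√g g^{li} ∂_l φ) in polar coordinates (r, θ).
r, th = sp.symbols('r theta', positive=True)
phi = sp.Function('phi')(r, th)
sqrt_g = r                          # √g = r for g = diag(1, r²)

lap = (sp.diff(sqrt_g * 1 * sp.diff(phi, r), r)
       + sp.diff(sqrt_g * (1/r**2) * sp.diff(phi, th), th)) / sqrt_g
print(sp.expand(lap))   # φ_rr + φ_r/r + φ_θθ/r²
```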
The curl of a vector field v {\displaystyle \mathbf {v} } in covariant curvilinear coordinates can be written as ∇ × v = E r s t v s | r b t {\displaystyle {\boldsymbol {\nabla }}\times \mathbf {v} ={\mathcal {E}}^{rst}v_{s|r}~\mathbf {b} _{t}} where v s | r = v s , r − Γ s r i v i {\displaystyle v_{s|r}=v_{s,r}-\Gamma _{sr}^{i}~v_{i}}
Assume, for the purposes of this section, that the curvilinear coordinate system is orthogonal , i.e., b i ⋅ b j = { g i i if i = j 0 if i ≠ j , {\displaystyle \mathbf {b} _{i}\cdot \mathbf {b} _{j}={\begin{cases}g_{ii}&{\text{if }}i=j\\0&{\text{if }}i\neq j,\end{cases}}} or equivalently, b i ⋅ b j = { g i i if i = j 0 if i ≠ j , {\displaystyle \mathbf {b} ^{i}\cdot \mathbf {b} ^{j}={\begin{cases}g^{ii}&{\text{if }}i=j\\0&{\text{if }}i\neq j,\end{cases}}} where g i i = g i i − 1 {\displaystyle g^{ii}=g_{ii}^{-1}} . As before, b i , b j {\displaystyle \mathbf {b} _{i},\mathbf {b} _{j}} are covariant basis vectors and b i {\displaystyle \mathbf {b} ^{i}} , b j {\displaystyle \mathbf {b} ^{j}} are contravariant basis vectors. Also, let ( e 1 , e 2 , e 3 ) {\displaystyle (\mathbf {e} ^{1},\mathbf {e} ^{2},\mathbf {e} ^{3})} be a background, fixed, Cartesian basis. A list of orthogonal curvilinear coordinates is given below.
Let r ( x ) {\displaystyle \mathbf {r} (\mathbf {x} )} be the position vector of the point x {\displaystyle \mathbf {x} } with respect to the origin of the coordinate system. The notation can be simplified by noting that x {\displaystyle \mathbf {x} } = r ( x ) {\displaystyle \mathbf {r} (\mathbf {x} )} . At each point we can construct a small line element d x {\displaystyle \mathrm {d} \mathbf {x} } . The square of the length of the line element is the scalar product d x ⋅ d x {\displaystyle \mathrm {d} \mathbf {x} \cdot \mathrm {d} \mathbf {x} } and is called the metric of the space . Recall that the space of interest is assumed to be Euclidean when we talk of curvilinear coordinates. Let us express the position vector in terms of the background, fixed, Cartesian basis, i.e., x = ∑ i = 1 3 x i e i {\displaystyle \mathbf {x} =\sum _{i=1}^{3}x_{i}~\mathbf {e} _{i}}
Using the chain rule , we can then express d x {\displaystyle \mathrm {d} \mathbf {x} } in terms of three-dimensional orthogonal curvilinear coordinates ( q 1 , q 2 , q 3 ) {\displaystyle (q^{1},q^{2},q^{3})} as d x = ∑ i = 1 3 ∑ j = 1 3 ( ∂ x i ∂ q j e i ) d q j {\displaystyle \mathrm {d} \mathbf {x} =\sum _{i=1}^{3}\sum _{j=1}^{3}\left({\cfrac {\partial x_{i}}{\partial q^{j}}}~\mathbf {e} _{i}\right)\mathrm {d} q^{j}} Therefore, the metric is given by d x ⋅ d x = ∑ i = 1 3 ∑ j = 1 3 ∑ k = 1 3 ∂ x i ∂ q j ∂ x i ∂ q k d q j d q k {\displaystyle \mathrm {d} \mathbf {x} \cdot \mathrm {d} \mathbf {x} =\sum _{i=1}^{3}\sum _{j=1}^{3}\sum _{k=1}^{3}{\cfrac {\partial x_{i}}{\partial q^{j}}}~{\cfrac {\partial x_{i}}{\partial q^{k}}}~\mathrm {d} q^{j}~\mathrm {d} q^{k}}
The symmetric quantity g i j ( q i , q j ) = ∑ k = 1 3 ∂ x k ∂ q i ∂ x k ∂ q j = b i ⋅ b j {\displaystyle g_{ij}(q^{i},q^{j})=\sum _{k=1}^{3}{\cfrac {\partial x_{k}}{\partial q^{i}}}~{\cfrac {\partial x_{k}}{\partial q^{j}}}=\mathbf {b} _{i}\cdot \mathbf {b} _{j}} is called the fundamental (or metric) tensor of the Euclidean space in curvilinear coordinates.
Note also that g i j = ∂ x ∂ q i ⋅ ∂ x ∂ q j = ( ∑ k h k i e k ) ⋅ ( ∑ m h m j e m ) = ∑ k h k i h k j {\displaystyle g_{ij}={\cfrac {\partial \mathbf {x} }{\partial q^{i}}}\cdot {\cfrac {\partial \mathbf {x} }{\partial q^{j}}}=\left(\sum _{k}h_{ki}~\mathbf {e} _{k}\right)\cdot \left(\sum _{m}h_{mj}~\mathbf {e} _{m}\right)=\sum _{k}h_{ki}~h_{kj}} where h i j {\displaystyle h_{ij}} are the Lamé coefficients.
If we define the scale factors, h i {\displaystyle h_{i}} , using b i ⋅ b i = g i i = ∑ k h k i 2 =: h i 2 ⇒ | ∂ x ∂ q i | = | b i | = g i i = h i {\displaystyle \mathbf {b} _{i}\cdot \mathbf {b} _{i}=g_{ii}=\sum _{k}h_{ki}^{2}=:h_{i}^{2}\quad \Rightarrow \quad \left|{\cfrac {\partial \mathbf {x} }{\partial q^{i}}}\right|=\left|\mathbf {b} _{i}\right|={\sqrt {g_{ii}}}=h_{i}} we get a relation between the fundamental tensor and the Lamé coefficients.
If we consider polar coordinates for R 2 {\displaystyle \mathbb {R} ^{2}} , note that ( x , y ) = ( r cos θ , r sin θ ) {\displaystyle (x,y)=(r\cos \theta ,r\sin \theta )} , where ( r , θ ) {\displaystyle (r,\theta )} are the curvilinear coordinates, and the Jacobian determinant of the transformation ( r , θ ) → ( r cos θ , r sin θ ) {\displaystyle (r,\theta )\to (r\cos \theta ,r\sin \theta )} is r {\displaystyle r} .
The orthogonal basis vectors are b r = ( cos θ , sin θ ) {\displaystyle \mathbf {b} _{r}=(\cos \theta ,\sin \theta )} , b θ = ( − r sin θ , r cos θ ) {\displaystyle \mathbf {b} _{\theta }=(-r\sin \theta ,r\cos \theta )} . The normalized basis vectors are e r = ( cos θ , sin θ ) {\displaystyle \mathbf {e} _{r}=(\cos \theta ,\sin \theta )} , e θ = ( − sin θ , cos θ ) {\displaystyle \mathbf {e} _{\theta }=(-\sin \theta ,\cos \theta )} and the scale factors are h r = 1 {\displaystyle h_{r}=1} and h θ = r {\displaystyle h_{\theta }=r} . The fundamental tensor is g 11 = 1 {\displaystyle g_{11}=1} , g 22 = r 2 {\displaystyle g_{22}=r^{2}} , g 12 = g 21 = 0 {\displaystyle g_{12}=g_{21}=0} .
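This worked example is easy to check symbolically. The following sketch (illustrative only; the variable names are assumptions of this sketch, not part of the article) recomputes the basis vectors, scale factors, metric components, and Jacobian determinant with Python's sympy directly from the coordinate map:

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
x = sp.Matrix([r*sp.cos(theta), r*sp.sin(theta)])   # (x, y) = (r cos t, r sin t)

b_r = x.diff(r)        # covariant basis vector b_r
b_t = x.diff(theta)    # covariant basis vector b_theta

h_r = sp.simplify(b_r.norm())    # scale factor h_r  -> 1
h_t = sp.simplify(b_t.norm())    # scale factor h_th -> r

g11 = sp.simplify(b_r.dot(b_r))  # -> 1
g22 = sp.simplify(b_t.dot(b_t))  # -> r**2
g12 = sp.simplify(b_r.dot(b_t))  # -> 0  (the basis is orthogonal)

J = sp.simplify(x.jacobian([r, theta]).det())  # Jacobian determinant -> r
print(h_r, h_t, g11, g22, g12, J)
```

Running this prints 1, r, 1, r**2, 0, r, matching the values quoted above.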
If we wish to use curvilinear coordinates for vector calculus calculations, adjustments need to be made in the calculation of line, surface and volume integrals. For simplicity, we again restrict the discussion to three dimensions and orthogonal curvilinear coordinates. However, the same arguments apply for n {\displaystyle n} -dimensional problems though there are some additional terms in the expressions when the coordinate system is not orthogonal.
Normally in the calculation of line integrals we are interested in calculating ∫ C f d s = ∫ a b f ( x ( t ) ) | ∂ x ∂ t | d t {\displaystyle \int _{C}f\,ds=\int _{a}^{b}f(\mathbf {x} (t))\left|{\partial \mathbf {x} \over \partial t}\right|\;dt} where x ( t ) {\displaystyle \mathbf {x} (t)} parametrizes C {\displaystyle C} in Cartesian coordinates.
In curvilinear coordinates, the term
| ∂ x ∂ t | = | ∑ i = 1 3 ∂ x ∂ q i ∂ q i ∂ t | {\displaystyle \left|{\partial \mathbf {x} \over \partial t}\right|=\left|\sum _{i=1}^{3}{\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial t}\right|}
by the chain rule . And from the definition of the Lamé coefficients,
∂ x ∂ q i = ∑ k h k i e k {\displaystyle {\partial \mathbf {x} \over \partial q^{i}}=\sum _{k}h_{ki}~\mathbf {e} _{k}}
and thus
| ∂ x ∂ t | = | ∑ k ( ∑ i h k i ∂ q i ∂ t ) e k | = ∑ i ∑ j ∑ k h k i h k j ∂ q i ∂ t ∂ q j ∂ t = ∑ i ∑ j g i j ∂ q i ∂ t ∂ q j ∂ t {\displaystyle {\begin{aligned}\left|{\partial \mathbf {x} \over \partial t}\right|&=\left|\sum _{k}\left(\sum _{i}h_{ki}~{\cfrac {\partial q^{i}}{\partial t}}\right)\mathbf {e} _{k}\right|\\[8pt]&={\sqrt {\sum _{i}\sum _{j}\sum _{k}h_{ki}~h_{kj}{\cfrac {\partial q^{i}}{\partial t}}{\cfrac {\partial q^{j}}{\partial t}}}}={\sqrt {\sum _{i}\sum _{j}g_{ij}~{\cfrac {\partial q^{i}}{\partial t}}{\cfrac {\partial q^{j}}{\partial t}}}}\end{aligned}}}
Now, since g i j = 0 {\displaystyle g_{ij}=0} when i ≠ j {\displaystyle i\neq j} , we have | ∂ x ∂ t | = ∑ i g i i ( ∂ q i ∂ t ) 2 = ∑ i h i 2 ( ∂ q i ∂ t ) 2 {\displaystyle \left|{\partial \mathbf {x} \over \partial t}\right|={\sqrt {\sum _{i}g_{ii}~\left({\cfrac {\partial q^{i}}{\partial t}}\right)^{2}}}={\sqrt {\sum _{i}h_{i}^{2}~\left({\cfrac {\partial q^{i}}{\partial t}}\right)^{2}}}} and we can proceed normally.
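As a numeric illustration of this arc-length formula (a sketch with assumed example values, not taken from the article), the code below computes the length of the spiral r = t, θ = t for t in [0, 2π] using the polar scale factors h_r = 1 and h_θ = r, and checks it against the same integral evaluated from the Cartesian parametrization:

```python
import numpy as np
from scipy.integrate import quad

# Curve given in polar coordinates: r(t) = t, theta(t) = t
r   = lambda t: t
dr  = lambda t: 1.0     # dr/dt
dth = lambda t: 1.0     # dtheta/dt

# |dx/dt| = sqrt(h_r^2 (dr/dt)^2 + h_theta^2 (dtheta/dt)^2), with h_r = 1, h_theta = r
speed_polar = lambda t: np.sqrt(dr(t)**2 + r(t)**2 * dth(t)**2)

# Same curve in Cartesian coordinates: x = t cos t, y = t sin t
speed_cart = lambda t: np.sqrt((np.cos(t) - t*np.sin(t))**2 +
                               (np.sin(t) + t*np.cos(t))**2)

L_polar, _ = quad(speed_polar, 0.0, 2*np.pi)
L_cart,  _ = quad(speed_cart,  0.0, 2*np.pi)
print(L_polar, L_cart)   # both ~ 21.256
```

Both integrands reduce to sqrt(1 + t^2), so the two results agree, as the general argument above guarantees.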
Likewise, if we are interested in a surface integral , the relevant calculation, with the parameterization of the surface in Cartesian coordinates is: ∫ S f d S = ∬ T f ( x ( s , t ) ) | ∂ x ∂ s × ∂ x ∂ t | d s d t {\displaystyle \int _{S}f\,dS=\iint _{T}f(\mathbf {x} (s,t))\left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|\,ds\,dt} Again, in curvilinear coordinates, we have | ∂ x ∂ s × ∂ x ∂ t | = | ( ∑ i ∂ x ∂ q i ∂ q i ∂ s ) × ( ∑ j ∂ x ∂ q j ∂ q j ∂ t ) | {\displaystyle \left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|=\left|\left(\sum _{i}{\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial s}\right)\times \left(\sum _{j}{\partial \mathbf {x} \over \partial q^{j}}{\partial q^{j} \over \partial t}\right)\right|} and we make use of the definition of curvilinear coordinates again to yield ∂ x ∂ q i ∂ q i ∂ s = ∑ k ( ∑ i = 1 3 h k i ∂ q i ∂ s ) e k ; ∂ x ∂ q j ∂ q j ∂ t = ∑ m ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e m {\displaystyle {\partial \mathbf {x} \over \partial q^{i}}{\partial q^{i} \over \partial s}=\sum _{k}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\mathbf {e} _{k}~;~~{\partial \mathbf {x} \over \partial q^{j}}{\partial q^{j} \over \partial t}=\sum _{m}\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{m}}
Therefore, | ∂ x ∂ s × ∂ x ∂ t | = | ∑ k ∑ m ( ∑ i = 1 3 h k i ∂ q i ∂ s ) ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e k × e m | = | ∑ p ∑ k ∑ m E k m p ( ∑ i = 1 3 h k i ∂ q i ∂ s ) ( ∑ j = 1 3 h m j ∂ q j ∂ t ) e p | {\displaystyle {\begin{aligned}\left|{\partial \mathbf {x} \over \partial s}\times {\partial \mathbf {x} \over \partial t}\right|&=\left|\sum _{k}\sum _{m}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{k}\times \mathbf {e} _{m}\right|\\[8pt]&=\left|\sum _{p}\sum _{k}\sum _{m}{\mathcal {E}}_{kmp}\left(\sum _{i=1}^{3}h_{ki}~{\partial q^{i} \over \partial s}\right)\left(\sum _{j=1}^{3}h_{mj}~{\partial q^{j} \over \partial t}\right)\mathbf {e} _{p}\right|\end{aligned}}} where E {\displaystyle {\mathcal {E}}} is the permutation symbol .
In determinant form, the cross product in terms of curvilinear coordinates will be: | e 1 e 2 e 3 ∑ i h 1 i ∂ q i ∂ s ∑ i h 2 i ∂ q i ∂ s ∑ i h 3 i ∂ q i ∂ s ∑ j h 1 j ∂ q j ∂ t ∑ j h 2 j ∂ q j ∂ t ∑ j h 3 j ∂ q j ∂ t | {\displaystyle {\begin{vmatrix}\mathbf {e} _{1}&\mathbf {e} _{2}&\mathbf {e} _{3}\\&&\\\sum _{i}h_{1i}{\partial q^{i} \over \partial s}&\sum _{i}h_{2i}{\partial q^{i} \over \partial s}&\sum _{i}h_{3i}{\partial q^{i} \over \partial s}\\&&\\\sum _{j}h_{1j}{\partial q^{j} \over \partial t}&\sum _{j}h_{2j}{\partial q^{j} \over \partial t}&\sum _{j}h_{3j}{\partial q^{j} \over \partial t}\end{vmatrix}}}
In orthogonal curvilinear coordinates of 3 dimensions, where b i = ∑ k g i k b k ; g i i = 1 g i i = 1 h i 2 {\displaystyle \mathbf {b} ^{i}=\sum _{k}g^{ik}~\mathbf {b} _{k}~;~~g^{ii}={\cfrac {1}{g_{ii}}}={\cfrac {1}{h_{i}^{2}}}} one can express the gradient of a scalar or vector field as ∇ φ = ∑ i ∂ φ ∂ q i b i = ∑ i ∑ j ∂ φ ∂ q i g i j b j = ∑ i 1 h i 2 ∂ f ∂ q i b i ; ∇ v = ∑ i 1 h i 2 ∂ v ∂ q i ⊗ b i {\displaystyle \nabla \varphi =\sum _{i}{\partial \varphi \over \partial q^{i}}~\mathbf {b} ^{i}=\sum _{i}\sum _{j}{\partial \varphi \over \partial q^{i}}~g^{ij}~\mathbf {b} _{j}=\sum _{i}{\cfrac {1}{h_{i}^{2}}}~{\partial f \over \partial q^{i}}~\mathbf {b} _{i}~;~~\nabla \mathbf {v} =\sum _{i}{\cfrac {1}{h_{i}^{2}}}~{\partial \mathbf {v} \over \partial q^{i}}\otimes \mathbf {b} _{i}} For an orthogonal basis g = g 11 g 22 g 33 = h 1 2 h 2 2 h 3 2 ⇒ g = h 1 h 2 h 3 {\displaystyle g=g_{11}~g_{22}~g_{33}=h_{1}^{2}~h_{2}^{2}~h_{3}^{2}\quad \Rightarrow \quad {\sqrt {g}}=h_{1}h_{2}h_{3}} The divergence of a vector field can then be written as ∇ ⋅ v = 1 h 1 h 2 h 3 ∂ ∂ q i ( h 1 h 2 h 3 v i ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{h_{1}h_{2}h_{3}}}~{\frac {\partial }{\partial q^{i}}}(h_{1}h_{2}h_{3}~v^{i})} Also, v i = g i k v k ⇒ v 1 = g 11 v 1 = v 1 h 1 2 ; v 2 = g 22 v 2 = v 2 h 2 2 ; v 3 = g 33 v 3 = v 3 h 3 2 {\displaystyle v^{i}=g^{ik}~v_{k}\quad \Rightarrow v^{1}=g^{11}~v_{1}={\cfrac {v_{1}}{h_{1}^{2}}}~;~~v^{2}=g^{22}~v_{2}={\cfrac {v_{2}}{h_{2}^{2}}}~;~~v^{3}=g^{33}~v_{3}={\cfrac {v_{3}}{h_{3}^{2}}}} Therefore, ∇ ⋅ v = 1 h 1 h 2 h 3 ∑ i ∂ ∂ q i ( h 1 h 2 h 3 h i 2 v i ) {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\cfrac {1}{h_{1}h_{2}h_{3}}}~\sum _{i}{\frac {\partial }{\partial q^{i}}}\left({\cfrac {h_{1}h_{2}h_{3}}{h_{i}^{2}}}~v_{i}\right)} We can get an expression for the Laplacian in a similar manner by noting that g l i ∂ φ ∂ q l = { g 11 ∂ φ ∂ q 1 , g 22 ∂ φ ∂ q 2 , g 33 ∂ φ ∂ q 3 } = { 1 h 1 2 ∂ φ ∂ q 1 , 1 h 2 2 ∂ φ ∂ q 2 , 1 h 3 2 ∂ φ ∂ q 3 } {\displaystyle g^{li}~{\frac {\partial \varphi }{\partial q^{l}}}=\left\{g^{11}~{\frac {\partial \varphi }{\partial q^{1}}},g^{22}~{\frac {\partial \varphi }{\partial q^{2}}},g^{33}~{\frac {\partial \varphi }{\partial q^{3}}}\right\}=\left\{{\cfrac {1}{h_{1}^{2}}}~{\frac {\partial \varphi }{\partial q^{1}}},{\cfrac {1}{h_{2}^{2}}}~{\frac {\partial \varphi }{\partial q^{2}}},{\cfrac {1}{h_{3}^{2}}}~{\frac {\partial \varphi }{\partial q^{3}}}\right\}} Then we have ∇ 2 φ = 1 h 1 h 2 h 3 ∑ i ∂ ∂ q i ( h 1 h 2 h 3 h i 2 ∂ φ ∂ q i ) {\displaystyle \nabla ^{2}\varphi ={\cfrac {1}{h_{1}h_{2}h_{3}}}~\sum _{i}{\frac {\partial }{\partial q^{i}}}\left({\cfrac {h_{1}h_{2}h_{3}}{h_{i}^{2}}}~{\frac {\partial \varphi }{\partial q^{i}}}\right)} The expressions for the gradient, divergence, and Laplacian can be directly extended to n {\displaystyle n} -dimensions.
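These orthogonal-coordinate expressions lend themselves to direct symbolic implementation. The sympy sketch below (illustrative; the helper name is an assumption of this sketch) builds the Laplacian from the three scale factors and evaluates it for the cylindrical choice h = (1, r, 1):

```python
import sympy as sp

def laplacian(phi, q, h):
    """Orthogonal-coordinate Laplacian built from scale factors h = (h1, h2, h3)."""
    H = h[0]*h[1]*h[2]   # sqrt(g) = h1 h2 h3 for an orthogonal basis
    return sp.simplify(sum(sp.diff(H/h[i]**2 * sp.diff(phi, q[i]), q[i])
                           for i in range(3)) / H)

r, th, z = sp.symbols('r theta z', positive=True)
f = sp.Function('f')(r, th, z)

lap = laplacian(f, (r, th, z), (1, r, 1))   # cylindrical scale factors
print(lap)
# equivalent to (1/r) d/dr(r df/dr) + (1/r**2) d2f/dtheta2 + d2f/dz2
```

The printed result is the familiar cylindrical Laplacian, which also appears explicitly later in this article.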
The curl of a vector field is given by ∇ × v = 1 h 1 h 2 h 3 ∑ i = 1 n e i ∑ j k ε i j k h i ∂ ( h k v k ) ∂ q j {\displaystyle \nabla \times \mathbf {v} ={\frac {1}{h_{1}h_{2}h_{3}}}\sum _{i=1}^{n}\mathbf {e} _{i}\sum _{jk}\varepsilon _{ijk}h_{i}{\frac {\partial (h_{k}v_{k})}{\partial q^{j}}}} where ε i j k {\displaystyle \varepsilon _{ijk}} is the Levi-Civita symbol .
For cylindrical coordinates we have ( x 1 , x 2 , x 3 ) = x = φ ( q 1 , q 2 , q 3 ) = φ ( r , θ , z ) = { r cos θ , r sin θ , z } {\displaystyle (x_{1},x_{2},x_{3})=\mathbf {x} ={\boldsymbol {\varphi }}(q^{1},q^{2},q^{3})={\boldsymbol {\varphi }}(r,\theta ,z)=\{r\cos \theta ,r\sin \theta ,z\}} and { ψ 1 ( x ) , ψ 2 ( x ) , ψ 3 ( x ) } = ( q 1 , q 2 , q 3 ) ≡ ( r , θ , z ) = { x 1 2 + x 2 2 , tan − 1 ( x 2 / x 1 ) , x 3 } {\displaystyle \{\psi ^{1}(\mathbf {x} ),\psi ^{2}(\mathbf {x} ),\psi ^{3}(\mathbf {x} )\}=(q^{1},q^{2},q^{3})\equiv (r,\theta ,z)=\{{\sqrt {x_{1}^{2}+x_{2}^{2}}},\tan ^{-1}(x_{2}/x_{1}),x_{3}\}} where 0 < r < ∞ , 0 < θ < 2 π , − ∞ < z < ∞ {\displaystyle 0<r<\infty ~,~~0<\theta <2\pi ~,~~-\infty <z<\infty }
Then the covariant and contravariant basis vectors are b 1 = e r = b 1 b 2 = r e θ = r 2 b 2 b 3 = e z = b 3 {\displaystyle {\begin{aligned}\mathbf {b} _{1}&=\mathbf {e} _{r}=\mathbf {b} ^{1}\\\mathbf {b} _{2}&=r~\mathbf {e} _{\theta }=r^{2}~\mathbf {b} ^{2}\\\mathbf {b} _{3}&=\mathbf {e} _{z}=\mathbf {b} ^{3}\end{aligned}}} where e r , e θ , e z {\displaystyle \mathbf {e} _{r},\mathbf {e} _{\theta },\mathbf {e} _{z}} are the unit vectors in the r , θ , z {\displaystyle r,\theta ,z} directions.
Note that the components of the metric tensor are such that g i j = g i j = 0 ( i ≠ j ) ; g 11 = 1 , g 22 = 1 r , g 33 = 1 {\displaystyle g^{ij}=g_{ij}=0(i\neq j)~;~~{\sqrt {g^{11}}}=1,~{\sqrt {g^{22}}}={\cfrac {1}{r}},~{\sqrt {g^{33}}}=1} which shows that the basis is orthogonal.
The non-zero components of the Christoffel symbol of the second kind are Γ 12 2 = Γ 21 2 = 1 r ; Γ 22 1 = − r {\displaystyle \Gamma _{12}^{2}=\Gamma _{21}^{2}={\cfrac {1}{r}}~;~~\Gamma _{22}^{1}=-r}
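These values can be verified from the cylindrical metric g = diag(1, r², 1) using the standard formula Γ^k_ij = ½ g^{kl} (∂_i g_jl + ∂_j g_il − ∂_l g_ij). A short sympy sketch (illustrative only) that prints the non-zero symbols:

```python
import sympy as sp

r, th, z = sp.symbols('r theta z', positive=True)
q = (r, th, z)
g = sp.diag(1, r**2, 1)      # cylindrical metric g_ij
ginv = g.inv()

def christoffel(k, i, j):
    # Gamma^k_ij = (1/2) g^{kl} (d_i g_jl + d_j g_il - d_l g_ij)
    return sp.simplify(sum(ginv[k, l]*(sp.diff(g[j, l], q[i])
                                       + sp.diff(g[i, l], q[j])
                                       - sp.diff(g[i, j], q[l]))
                           for l in range(3)) / 2)

for k in range(3):
    for i in range(3):
        for j in range(3):
            G = christoffel(k, i, j)
            if G != 0:
                print(f"Gamma^{k+1}_{{{i+1}{j+1}}} =", G)
# prints Gamma^2_{12} = Gamma^2_{21} = 1/r and Gamma^1_{22} = -r
```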
The normalized contravariant basis vectors in cylindrical polar coordinates are b ^ 1 = e r ; b ^ 2 = e θ ; b ^ 3 = e z {\displaystyle {\hat {\mathbf {b} }}^{1}=\mathbf {e} _{r}~;~~{\hat {\mathbf {b} }}^{2}=\mathbf {e} _{\theta }~;~~{\hat {\mathbf {b} }}^{3}=\mathbf {e} _{z}} and the physical components of a vector v {\displaystyle \mathbf {v} } are ( v ^ 1 , v ^ 2 , v ^ 3 ) = ( v 1 , v 2 / r , v 3 ) =: ( v r , v θ , v z ) {\displaystyle ({\hat {v}}_{1},{\hat {v}}_{2},{\hat {v}}_{3})=(v_{1},v_{2}/r,v_{3})=:(v_{r},v_{\theta },v_{z})}
The gradient of a scalar field, f ( x ) {\displaystyle f(\mathbf {x} )} , in cylindrical coordinates can now be computed from the general expression in curvilinear coordinates and has the form ∇ f = ∂ f ∂ r e r + 1 r ∂ f ∂ θ e θ + ∂ f ∂ z e z {\displaystyle {\boldsymbol {\nabla }}f={\cfrac {\partial f}{\partial r}}~\mathbf {e} _{r}+{\cfrac {1}{r}}~{\cfrac {\partial f}{\partial \theta }}~\mathbf {e} _{\theta }+{\cfrac {\partial f}{\partial z}}~\mathbf {e} _{z}}
Similarly, the gradient of a vector field, v ( x ) {\displaystyle \mathbf {v} (\mathbf {x} )} , in cylindrical coordinates can be shown to be ∇ v = ∂ v r ∂ r e r ⊗ e r + 1 r ( ∂ v r ∂ θ − v θ ) e r ⊗ e θ + ∂ v r ∂ z e r ⊗ e z + ∂ v θ ∂ r e θ ⊗ e r + 1 r ( ∂ v θ ∂ θ + v r ) e θ ⊗ e θ + ∂ v θ ∂ z e θ ⊗ e z + ∂ v z ∂ r e z ⊗ e r + 1 r ∂ v z ∂ θ e z ⊗ e θ + ∂ v z ∂ z e z ⊗ e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\mathbf {v} &={\cfrac {\partial v_{r}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{r}}{\partial \theta }}-v_{\theta }\right)~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\cfrac {\partial v_{r}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\cfrac {\partial v_{\theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\cfrac {\partial v_{\theta }}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\cfrac {\partial v_{z}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}{\cfrac {\partial v_{z}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\cfrac {\partial v_{z}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\end{aligned}}}
Using the equation for the divergence of a vector field in curvilinear coordinates, the divergence in cylindrical coordinates can be shown to be ∇ ⋅ v = ∂ v r ∂ r + 1 r ( ∂ v θ ∂ θ + v r ) + ∂ v z ∂ z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot \mathbf {v} &={\cfrac {\partial v_{r}}{\partial r}}+{\cfrac {1}{r}}\left({\cfrac {\partial v_{\theta }}{\partial \theta }}+v_{r}\right)+{\cfrac {\partial v_{z}}{\partial z}}\end{aligned}}}
The Laplacian is more easily computed by noting that ∇ 2 f = ∇ ⋅ ∇ f {\displaystyle {\boldsymbol {\nabla }}^{2}f={\boldsymbol {\nabla }}\cdot {\boldsymbol {\nabla }}f} . In cylindrical polar coordinates v = ∇ f = [ v r v θ v z ] = [ ∂ f ∂ r 1 r ∂ f ∂ θ ∂ f ∂ z ] {\displaystyle \mathbf {v} ={\boldsymbol {\nabla }}f=\left[v_{r}~~v_{\theta }~~v_{z}\right]=\left[{\cfrac {\partial f}{\partial r}}~~{\cfrac {1}{r}}{\cfrac {\partial f}{\partial \theta }}~~{\cfrac {\partial f}{\partial z}}\right]} Hence, ∇ ⋅ v = ∇ 2 f = ∂ 2 f ∂ r 2 + 1 r ( 1 r ∂ 2 f ∂ θ 2 + ∂ f ∂ r ) + ∂ 2 f ∂ z 2 = 1 r [ ∂ ∂ r ( r ∂ f ∂ r ) ] + 1 r 2 ∂ 2 f ∂ θ 2 + ∂ 2 f ∂ z 2 {\displaystyle {\boldsymbol {\nabla }}\cdot \mathbf {v} ={\boldsymbol {\nabla }}^{2}f={\cfrac {\partial ^{2}f}{\partial r^{2}}}+{\cfrac {1}{r}}\left({\cfrac {1}{r}}{\cfrac {\partial ^{2}f}{\partial \theta ^{2}}}+{\cfrac {\partial f}{\partial r}}\right)+{\cfrac {\partial ^{2}f}{\partial z^{2}}}={\cfrac {1}{r}}\left[{\cfrac {\partial }{\partial r}}\left(r{\cfrac {\partial f}{\partial r}}\right)\right]+{\cfrac {1}{r^{2}}}{\cfrac {\partial ^{2}f}{\partial \theta ^{2}}}+{\cfrac {\partial ^{2}f}{\partial z^{2}}}}
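A quick symbolic cross-check of this identity (a sketch with an assumed sample function, not from the article): apply the cylindrical formula to a scalar field and compare with the Cartesian Laplacian of the same field.

```python
import sympy as sp

x, y, z, r, th = sp.symbols('x y z r theta', positive=True)
f_cart = x**2*y + z*sp.sqrt(x**2 + y**2)        # sample scalar field

lap_cart = sum(sp.diff(f_cart, v, 2) for v in (x, y, z))

# Same field expressed in cylindrical coordinates
f_cyl = f_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)})
lap_cyl = (sp.diff(r*sp.diff(f_cyl, r), r)/r     # (1/r) d/dr(r df/dr)
           + sp.diff(f_cyl, th, 2)/r**2          # (1/r^2) d2f/dtheta2
           + sp.diff(f_cyl, z, 2))               # d2f/dz2

residual = sp.simplify(lap_cyl - lap_cart.subs({x: r*sp.cos(th), y: r*sp.sin(th)}))
print(residual)   # -> 0
```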
The physical components of a second-order tensor field are those obtained when the tensor is expressed in terms of a normalized contravariant basis. In cylindrical polar coordinates these components are:
S ^ 11 = S 11 =: S r r , S ^ 12 = S 12 r =: S r θ , S ^ 13 = S 13 =: S r z S ^ 21 = S 21 r =: S θ r , S ^ 22 = S 22 r 2 =: S θ θ , S ^ 23 = S 23 r =: S θ z S ^ 31 = S 31 =: S z r , S ^ 32 = S 32 r =: S z θ , S ^ 33 = S 33 =: S z z {\displaystyle {\begin{aligned}{\hat {S}}_{11}&=S_{11}=:S_{rr},&{\hat {S}}_{12}&={\frac {S_{12}}{r}}=:S_{r\theta },&{\hat {S}}_{13}&=S_{13}=:S_{rz}\\[6pt]{\hat {S}}_{21}&={\frac {S_{21}}{r}}=:S_{\theta r},&{\hat {S}}_{22}&={\frac {S_{22}}{r^{2}}}=:S_{\theta \theta },&{\hat {S}}_{23}&={\frac {S_{23}}{r}}=:S_{\theta z}\\[6pt]{\hat {S}}_{31}&=S_{31}=:S_{zr},&{\hat {S}}_{32}&={\frac {S_{32}}{r}}=:S_{z\theta },&{\hat {S}}_{33}&=S_{33}=:S_{zz}\end{aligned}}}
Using the above definitions we can show that the gradient of a second-order tensor field in cylindrical polar coordinates can be expressed as ∇ S = ∂ S r r ∂ r e r ⊗ e r ⊗ e r + 1 r [ ∂ S r r ∂ θ − ( S θ r + S r θ ) ] e r ⊗ e r ⊗ e θ + ∂ S r r ∂ z e r ⊗ e r ⊗ e z + ∂ S r θ ∂ r e r ⊗ e θ ⊗ e r + 1 r [ ∂ S r θ ∂ θ + ( S r r − S θ θ ) ] e r ⊗ e θ ⊗ e θ + ∂ S r θ ∂ z e r ⊗ e θ ⊗ e z + ∂ S r z ∂ r e r ⊗ e z ⊗ e r + 1 r [ ∂ S r z ∂ θ − S θ z ] e r ⊗ e z ⊗ e θ + ∂ S r z ∂ z e r ⊗ e z ⊗ e z + ∂ S θ r ∂ r e θ ⊗ e r ⊗ e r + 1 r [ ∂ S θ r ∂ θ + ( S r r − S θ θ ) ] e θ ⊗ e r ⊗ e θ + ∂ S θ r ∂ z e θ ⊗ e r ⊗ e z + ∂ S θ θ ∂ r e θ ⊗ e θ ⊗ e r + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ ⊗ e θ ⊗ e θ + ∂ S θ θ ∂ z e θ ⊗ e θ ⊗ e z + ∂ S θ z ∂ r e θ ⊗ e z ⊗ e r + 1 r [ ∂ S θ z ∂ θ + S r z ] e θ ⊗ e z ⊗ e θ + ∂ S θ z ∂ z e θ ⊗ e z ⊗ e z + ∂ S z r ∂ r e z ⊗ e r ⊗ e r + 1 r [ ∂ S z r ∂ θ − S z θ ] e z ⊗ e r ⊗ e θ + ∂ S z r ∂ z e z ⊗ e r ⊗ e z + ∂ S z θ ∂ r e z ⊗ e θ ⊗ e r + 1 r [ ∂ S z θ ∂ θ + S z r ] e z ⊗ e θ ⊗ e θ + ∂ S z θ ∂ z e z ⊗ e θ ⊗ e z + ∂ S z z ∂ r e z ⊗ e z ⊗ e r + 1 r ∂ S z z ∂ θ e z ⊗ e z ⊗ e θ + ∂ S z z ∂ z e z ⊗ e z ⊗ e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}{\boldsymbol {S}}&={\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{rr}}{\partial \theta }}-(S_{\theta r}+S_{r\theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{rr}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{r\theta }}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{r\theta }}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{rz}}{\partial \theta }}-S_{\theta z}\right]~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{rz}}{\partial z}}~\mathbf {e} _{r}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta r}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta r}}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta r}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta \theta }}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta \theta }}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{\theta z}}{\partial r}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{\theta z}}{\partial z}}~\mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zr}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{zr}}{\partial \theta }}-S_{z\theta }\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{r}\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{z\theta }}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{z\theta }}{\partial \theta }}+S_{zr}\right]~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{\theta }\otimes \mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zz}}{\partial r}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{r}+{\cfrac {1}{r}}~{\frac {\partial S_{zz}}{\partial \theta }}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{\theta }+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\otimes \mathbf {e} _{z}\otimes \mathbf {e} _{z}\end{aligned}}}
The divergence of a second-order tensor field in cylindrical polar coordinates can be obtained from the expression for the gradient by collecting terms where the scalar product of the two outer vectors in the dyadic products is nonzero. Therefore, ∇ ⋅ S = ∂ S r r ∂ r e r + ∂ S r θ ∂ r e θ + ∂ S r z ∂ r e z + 1 r [ ∂ S r θ ∂ θ + ( S r r − S θ θ ) ] e r + 1 r [ ∂ S θ θ ∂ θ + ( S r θ + S θ r ) ] e θ + 1 r [ ∂ S θ z ∂ θ + S r z ] e z + ∂ S z r ∂ z e r + ∂ S z θ ∂ z e θ + ∂ S z z ∂ z e z {\displaystyle {\begin{aligned}{\boldsymbol {\nabla }}\cdot {\boldsymbol {S}}&={\frac {\partial S_{rr}}{\partial r}}~\mathbf {e} _{r}+{\frac {\partial S_{r\theta }}{\partial r}}~\mathbf {e} _{\theta }+{\frac {\partial S_{rz}}{\partial r}}~\mathbf {e} _{z}\\[8pt]&+{\cfrac {1}{r}}\left[{\frac {\partial S_{r\theta }}{\partial \theta }}+(S_{rr}-S_{\theta \theta })\right]~\mathbf {e} _{r}+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta \theta }}{\partial \theta }}+(S_{r\theta }+S_{\theta r})\right]~\mathbf {e} _{\theta }+{\cfrac {1}{r}}\left[{\frac {\partial S_{\theta z}}{\partial \theta }}+S_{rz}\right]~\mathbf {e} _{z}\\[8pt]&+{\frac {\partial S_{zr}}{\partial z}}~\mathbf {e} _{r}+{\frac {\partial S_{z\theta }}{\partial z}}~\mathbf {e} _{\theta }+{\frac {\partial S_{zz}}{\partial z}}~\mathbf {e} _{z}\end{aligned}}} | https://en.wikipedia.org/wiki/Tensors_in_curvilinear_coordinates |
Tensor–vector–scalar gravity ( TeVeS ), [ 1 ] developed by Jacob Bekenstein in 2004, is a relativistic generalization of Mordehai Milgrom 's Modified Newtonian dynamics (MOND) paradigm. [ 2 ] [ 3 ]
The main features of TeVeS can be summarized as follows:
The theory is based on the following ingredients:
These components are combined into a relativistic Lagrangian density , which forms the basis of TeVeS theory.
MOND [ 2 ] is a phenomenological modification of the Newtonian acceleration law. In Newtonian gravity theory, the gravitational acceleration in the spherically symmetric, static field of a point mass M {\displaystyle M} at distance r {\displaystyle r} from the source can be written as
where G {\displaystyle G} is Newton's constant of gravitation. The corresponding force acting on a test mass m {\displaystyle m} is
To account for the anomalous rotation curves of spiral galaxies, Milgrom proposed a modification of this force law in the form
where μ ( x ) {\displaystyle \mu (x)} is an arbitrary function subject to the following conditions:
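One common choice in the MOND literature that satisfies these conditions is the so-called simple interpolation function μ(x) = x/(1 + x). The sketch below uses it to compare Newtonian and MOND rotation velocities for a point mass; the specific μ and the example mass are assumptions of this sketch, not fixed by the theory:

```python
import numpy as np

G  = 6.674e-11          # m^3 kg^-1 s^-2
a0 = 1.2e-10            # m s^-2, Milgrom's acceleration constant
M  = 1.0e41             # kg, roughly 5e10 solar masses (assumed example galaxy)

r = np.logspace(19, 21.5, 200)          # radii in metres
aN = G*M/r**2                           # Newtonian acceleration

# With mu(x) = x/(1+x), the MOND relation mu(a/a0)*a = aN has the
# closed-form solution below for the true acceleration a:
a = 0.5*(aN + np.sqrt(aN**2 + 4*aN*a0))

v_newton = np.sqrt(aN*r)/1e3            # km/s, falls off as r**-1/2
v_mond   = np.sqrt(a*r)/1e3             # km/s, flattens at (G*M*a0)**0.25
print(v_newton[-1], v_mond[-1], (G*M*a0)**0.25/1e3)
```

At large radii the MOND velocity approaches the constant (G M a0)^(1/4), reproducing the flat rotation curves the modification was designed to explain.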
In this form, MOND is not a complete theory: for instance, it violates the law of momentum conservation .
However, such conservation laws are automatically satisfied for physical theories that are derived using an action principle. This led Bekenstein [ 1 ] to a first, nonrelativistic generalization of MOND. This theory, called AQUAL (for A QUAdratic Lagrangian) is based on the Lagrangian
where Φ {\displaystyle \Phi } is the Newtonian gravitational potential, ρ {\displaystyle \rho } is the mass density, and f ( y ) {\displaystyle f(y)} is a dimensionless function.
In the case of a spherically symmetric, static gravitational field, this Lagrangian reproduces the MOND acceleration law after the substitutions a = − ∇ Φ {\displaystyle a=-\nabla \Phi } and μ ( y ) = d f ( y ) / d y {\displaystyle \mu ({\sqrt {y}})=df(y)/dy} are made.
Bekenstein further found that AQUAL can be obtained as the nonrelativistic limit of a relativistic field theory. This theory is written in terms of a Lagrangian that contains, in addition to the Einstein–Hilbert action for the metric field g μ ν {\displaystyle g_{\mu \nu }} , terms pertaining to a unit vector field u α {\displaystyle u^{\alpha }} and two scalar fields σ {\displaystyle \sigma } and ϕ {\displaystyle \phi } , of which only ϕ {\displaystyle \phi } is dynamical. The TeVeS action, therefore, can be written as
The terms in this action include the Einstein–Hilbert Lagrangian (using a metric signature [ + , − , − , − ] {\displaystyle [+,-,-,-]} and setting the speed of light, c = 1 {\displaystyle c=1} ):
where R {\displaystyle R} is the Ricci scalar and g {\displaystyle g} is the determinant of the metric tensor.
The scalar field Lagrangian is
where h α β = g α β − u α u β , l {\displaystyle h^{\alpha \beta }=g^{\alpha \beta }-u^{\alpha }u^{\beta },l} is a constant length, k {\displaystyle k} is the dimensionless parameter and F {\displaystyle F} an unspecified dimensionless function; while the vector field Lagrangian is
where B α β = ∂ α u β − ∂ β u α , {\displaystyle B_{\alpha \beta }=\partial _{\alpha }u_{\beta }-\partial _{\beta }u_{\alpha },} while K {\displaystyle K} is a dimensionless parameter. k {\displaystyle k} and K {\displaystyle K} are respectively called the scalar and vector coupling constants of the theory. The consistency between the Gravitoelectromagnetism of the TeVeS theory and that predicted and measured by the Gravity Probe B leads to K = k 2 π {\displaystyle K={\frac {k}{2\pi }}} , [ 4 ] and requiring consistency between the near horizon geometry of a black hole in TeVeS and that of the Einstein theory, as observed by the Event Horizon Telescope leads to K = − 30 + 72 π k . {\displaystyle K=-30+{\frac {72\pi }{k}}.} [ 5 ] So the coupling constants read:
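The specific values are not reproduced here, but they follow from imposing the two conditions above simultaneously. A minimal sympy sketch of that algebra (an illustration derived from the two stated constraints, not a quotation from the cited papers):

```python
import sympy as sp

k = sp.symbols('k', positive=True)
# Impose K = k/(2*pi) (Gravity Probe B) and K = -30 + 72*pi/k (EHT) together
sol = sp.solve(sp.Eq(k/(2*sp.pi), -30 + 72*sp.pi/k), k)
k_val = [s for s in sol if s.is_positive][0]
K_val = sp.simplify(k_val/(2*sp.pi))
print(sp.simplify(k_val), float(k_val), K_val, float(K_val))
# k = 2*pi*(sqrt(261) - 15) ~ 7.26 and K = sqrt(261) - 15 ~ 1.16
```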
The function F {\displaystyle F} in TeVeS is unspecified.
TeVeS also introduces a "physical metric" in the form
The action of ordinary matter is defined using the physical metric:
where covariant derivatives with respect to g ^ μ ν {\displaystyle {\hat {g}}_{\mu \nu }} are denoted by | . {\displaystyle |.}
TeVeS solves problems associated with earlier attempts to generalize MOND, such as superluminal propagation. In his paper, Bekenstein also investigated the consequences of TeVeS in relation to gravitational lensing and cosmology.
In addition to its ability to account for the flat rotation curves of galaxies (which is what MOND was originally designed to address), TeVeS is claimed to be consistent with a range of other phenomena, such as gravitational lensing and cosmological observations. However, Seifert [ 6 ] shows that with Bekenstein's proposed parameters, a TeVeS star is highly unstable, on the scale of approximately 10^6 seconds (two weeks). The ability of the theory to simultaneously account for galactic dynamics and lensing is also challenged. [ 7 ] A possible resolution may be in the form of massive (around 2 eV) neutrinos . [ 8 ]
A study in August 2006 reported an observation of a pair of colliding galaxy clusters, the Bullet Cluster , whose behavior, it was reported, was not compatible with any current modified gravity theory. [ 9 ]
A quantity E G {\displaystyle E_{G}} [ 10 ] probing general relativity (GR) on large scales (a hundred billion times the size of the Solar System ) for the first time has been measured with data from the Sloan Digital Sky Survey to be [ 11 ] E G = 0.392 ± 0.065 {\displaystyle E_{G}=0.392\pm {0.065}} (~16%) consistent with GR, GR plus Lambda CDM and the extended form of GR known as f ( R ) {\displaystyle f(R)} theory , but ruling out a particular TeVeS model predicting E G = 0.22 {\displaystyle E_{G}=0.22} . This estimate should improve to ~1% with the next generation of sky surveys and may put tighter constraints on the parameter space of all modified gravity theories.
TeVeS appears inconsistent with recent measurements made by LIGO of gravitational waves. [ 12 ] | https://en.wikipedia.org/wiki/Tensor–vector–scalar_gravity |
Tenuifolins are bioactive terpenoids . They inhibit beta-amyloid synthesis in vitro and show nootropic activity in vivo via acetylcholinesterase inhibition and increased norepinephrine and dopamine production.
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Tenuifolin |
Ferdinando 'Teo' Mora [ a ] is an Italian mathematician , and from 1990 until 2019 was a professor of algebra at the University of Genoa .
Mora received his degree in mathematics from the University of Genoa in 1974. His publications span forty years; his notable contributions in computer algebra are the tangent cone algorithm [ 1 ] [ 2 ] and its extension of the Buchberger theory of Gröbner bases and the related algorithm, first [ 3 ] to non-commutative polynomial rings [ 4 ] and more recently [ 5 ] to effective rings; less significant [ 6 ] is his notion of the Gröbner fan ; marginal, with respect to the other authors, is his contribution to the FGLM algorithm .
Mora is on the managing editorial board of the journal Applicable Algebra in Engineering, Communication and Computing published by Springer , [ 7 ] and was formerly an editor of the Bulletin of the Iranian Mathematical Society .
He is the author of the tetralogy Solving Polynomial Equation Systems :
Mora lives in Genoa . [ 10 ] Mora published a book trilogy in 1977-1978 (reprinted 2001-2003) called Storia del cinema dell'orrore on the history of horror films . [ 10 ] Italian television said in 2014 that the books are an "authoritative guide with in-depth detailed descriptions and analysis." [ 11 ] | https://en.wikipedia.org/wiki/Teo_Mora |
Tephrochronology is a geochronological technique for dating archaeological, geological and palaeoenvironmental sequences and events by their location between upper and lower layers of tephra (volcanic ejecta) of known date, and for correlating such sequences and events at separate locations between the same layers. The premise of the technique is that each volcanic event produces a "tephra horizon", a layer of ash with a unique chemical "fingerprint" that allows the deposit to be identified across the area affected by fallout. Thus, once the volcanic event has been independently dated, the tephra horizon will act as time marker. It is a variant of the basic geological technique of stratigraphy .
The main advantages of the technique are that the volcanic ash layers can be relatively easily identified in many sediments and that the tephra layers are deposited relatively instantaneously over a wide spatial area. This means they provide accurate temporal marker layers which can be used to verify or corroborate other dating techniques, linking sequences widely separated by location into a unified chronology that correlates climatic sequences and events. This results in " age-equivalent dating ". [ 1 ]
Effective tephrochronology requires accurate geochemical fingerprinting (usually via an electron microprobe ). [ 2 ] An important recent advance is the use of LA-ICP-MS (i.e. laser ablation ICP-MS ) to measure trace-element abundances in individual tephra shards. [ 3 ] One problem in tephrochronology is that tephra chemistry can become altered over time, at least for basaltic tephras. [ 4 ] Some tephra horizons, and the use of zircon-directed techniques, are more useful than others in linking layers over wide areas and determining eruption details. [ 5 ] For example, the often very explosive nature of rhyolitic eruptions causes wider distribution, the higher potassium content of rhyolite allows more accurate time determinations, and the location of a deposit influences its potential for chemical alteration after being laid down. [ 5 ] Zircon techniques applied to tephra and other samples from the same eruption may allow magma sources, magma residence times and the geochemical conditions of magma formation to be better understood, dating not just the eruption itself but also when the magma first evolved separately or incorporated other rocks. [ 5 ]
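In practice, correlating an unknown tephra with a reference layer often comes down to comparing major-element compositions shard by shard. A minimal sketch of one common screening approach, a Borchardt-style similarity coefficient, is shown below; the oxide values are invented for illustration and are not real data:

```python
# Major-element compositions (wt% oxides) of an unknown shard and two
# reference tephras -- the numbers are illustrative, not measured data.
unknown = {'SiO2': 70.1, 'TiO2': 0.3, 'Al2O3': 13.9, 'FeO': 3.1, 'CaO': 1.2, 'K2O': 4.4}
ref_a   = {'SiO2': 70.4, 'TiO2': 0.3, 'Al2O3': 13.7, 'FeO': 3.0, 'CaO': 1.3, 'K2O': 4.3}
ref_b   = {'SiO2': 49.8, 'TiO2': 4.6, 'Al2O3': 12.9, 'FeO': 14.2, 'CaO': 9.6, 'K2O': 0.7}

def similarity(x, y):
    """Similarity coefficient: mean of min/max ratios over shared elements."""
    ratios = [min(x[k], y[k]) / max(x[k], y[k]) for k in x]
    return sum(ratios) / len(ratios)

for name, ref in [('ref_a', ref_a), ('ref_b', ref_b)]:
    print(name, round(similarity(unknown, ref), 3))
# values near 1 suggest a match; the rhyolitic ref_a scores far higher
# than the basaltic ref_b for this rhyolitic unknown
```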
The term tephrochronology appears to have been used by Sigurdur Thórarinsson as early as 1944. [ 6 ] A key point in the establishment of the field, and of what evolved into a unique geoscientific method, came in 1961, when a proposal supported by Thórarinsson and led by Japanese researchers including Professor Kunio Kobayashi resulted in the establishment of an international scientific group. Much work had preceded this but was limited by the geological techniques available at the time: tephra formations were left uncorrelated, and eruption timings were too inaccurate to be related to events with worldwide traces.
What would now be known as cryptotephra studies occurred in sea floor samples in the 1940s, but Christer Persson in Scandinavia was the first to publish articles in this field in the 1960s. [ 6 ] Andrew Dugmore in 1989 was the first to use modern systematic methodology. [ 6 ] Since then researchers have targeted stratigraphic archives of peat , lake sediment, ice cores, marine sediments, loess , floors of caves and rock shelters, and stalagmites , as well as contemporary eruption deposits. [ 6 ]
Early tephra horizons were identified with the Saksunarvatn tephra (Icelandic origin, c. 10.2 cal. ka BP), forming a horizon in the late Pre-Boreal of Northern Europe, the Vedde ash (also Icelandic in origin, c. 12.0 cal. ka BP) and the Laacher See tephra (in the Eifel volcanic field, c. 12.9 cal. ka BP). Major volcanoes which have been used in tephrochronological studies include Vesuvius , Hekla and Santorini . Minor volcanic events may also leave their fingerprint in the geological record: Hayes Volcano is responsible for a series of six major tephra layers in the Cook Inlet region of Alaska. Tephra horizons provide a synchronous check against which to correlate the palaeoclimatic reconstructions that are obtained from terrestrial records, like fossil pollen studies ( palynology ), from varves in lake sediments or from marine deposits and ice-core records , and to extend the limits of carbon-14 dating .
A pioneer in the use of tephra layers as marker horizons to establish chronology was Sigurdur Thorarinsson , who began by studying the layers he found in his native Iceland. [ 7 ] Since the late 1990s, techniques developed by Chris S. M. Turney ( QUB , Belfast; now University of Exeter ) and others for extracting tephra horizons invisible to the naked eye ("cryptotephra") [ 8 ] have revolutionised the application of tephrochronology. This technique relies upon the difference between the specific gravity of the microtephra shards and the host sediment matrix. It has led to the first discovery of the Vedde ash on the mainland of Britain, in Sweden, in the Netherlands , in the Swiss Lake Soppensee and in two sites on the Karelian Isthmus of Baltic Russia.
It has also revealed previously undetected ash layers, such as the Borrobol Tephra first discovered in northern Scotland , dated to c. 14.4 cal. ka BP, [ 8 ] the microtephra horizons of equivalent geochemistry from southern Sweden , dated at 13,900 Cariaco varve yrs BP [ 9 ] and from northwest Scotland, dated at 13.6 cal. ka BP. [ 10 ]
Since 2010, Bayesian age modelling built around ever-improving 14C-calibration curves and other age-related data, such as zircon double dating, has continued to better define tephrochronology. [ 6 ] | https://en.wikipedia.org/wiki/Tephrochronology
In organic chemistry , nitro compounds are organic compounds that contain one or more nitro functional groups ( −NO 2 ). The nitro group is one of the most common explosophores (functional group that makes a compound explosive) used globally. The nitro group is also strongly electron-withdrawing . Because of this property, C−H bonds alpha (adjacent) to the nitro group can be acidic. For similar reasons, the presence of nitro groups in aromatic compounds retards electrophilic aromatic substitution but facilitates nucleophilic aromatic substitution . Nitro groups are rarely found in nature. They are almost invariably produced by nitration reactions starting with nitric acid . [ 1 ]
Aromatic nitro compounds are typically synthesized by nitration. Nitration is achieved using a mixture of nitric acid and sulfuric acid , which produces the nitronium ion ( NO + 2 ), the electrophile:
The nitration product produced on the largest scale, by far, is nitrobenzene . Many explosives are produced by nitration including trinitrophenol (picric acid), trinitrotoluene (TNT), and trinitroresorcinol (styphnic acid). [ 3 ] Another, more specialized method for making the aryl–NO 2 group, the Zinke nitration , starts from halogenated phenols.
Aliphatic nitro compounds can be synthesized by various methods; notable examples include:
In nucleophilic aliphatic substitution , sodium nitrite (NaNO 2 ) replaces an alkyl halide . In the so-called Ter Meer reaction (1876) named after Edmund ter Meer , [ 14 ] the reactant is a 1,1-halonitroalkane:
The proposed reaction mechanism, supported by an experimentally observed hydrogen kinetic isotope effect of 3.3, begins with a slow, rate-determining step in which a proton is abstracted from nitroalkane 1 to give carbanion 2 , followed by protonation to the aci-nitro form 3 and finally nucleophilic displacement of chlorine. [ 15 ] When the same reactant is treated with potassium hydroxide, the reaction product is the 1,2-dinitro dimer. [ 16 ]
Chloramphenicol is a rare example of a naturally occurring nitro compound. At least some naturally occurring nitro groups arose by the oxidation of amino groups. [ 17 ] 2-Nitrophenol is an aggregation pheromone of ticks .
Examples of nitro compounds are rare in nature. 3-Nitropropionic acid is found in fungi and plants ( Indigofera ). Nitropentadecene is a defense compound found in termites . Aristolochic acids are found in the flowering plant family Aristolochiaceae . Nitrophenylethane is found in Aniba canelilla [ 18 ] and also in members of the Annonaceae , Lauraceae and Papaveraceae . [ 19 ]
Despite the occasional use in pharmaceuticals, the nitro group is associated with mutagenicity and genotoxicity and therefore is often regarded as a liability in the drug discovery process. [ 20 ]
Nitro compounds participate in several organic reactions , the most important being reduction of nitro compounds to the corresponding amines:
Virtually all aromatic amines (e.g. aniline ) are derived from nitroaromatics through such catalytic hydrogenation . A variation is formation of a dimethylaminoarene with palladium on carbon and formaldehyde : [ 21 ]
The α-carbon of nitroalkanes is somewhat acidic. The p K a values of nitromethane and 2-nitropropane are respectively 17.2 and 16.9 in dimethyl sulfoxide (DMSO) solution, suggesting an aqueous p K a of around 11. [ 22 ] In other words, these carbon acids can be deprotonated in aqueous solution. The conjugate base is called a nitronate , and behaves similar to an enolate . In the nitroaldol reaction , it adds directly to aldehydes , and, with enones , can serve as a Michael donor . Conversely, a nitroalkene reacts with enols as a Michael acceptor. [ 23 ] [ 24 ] Nitrosating a nitronate gives a nitrolic acid . [ 25 ]
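To put the quoted acidities in concrete terms, a Henderson–Hasselbalch estimate of the deprotonated (nitronate) fraction at a given pH can be computed as below; this is a sketch, and the aqueous pKa of ~11 is the article's rough figure rather than a precise measurement:

```python
# Fraction of a carbon acid present as its conjugate base (the nitronate)
# at a given pH, from the Henderson-Hasselbalch relation.
def fraction_deprotonated(pka, ph):
    return 1.0 / (1.0 + 10.0**(pka - ph))

pka = 11.0   # approximate aqueous pKa of a nitroalkane, per the text
for ph in (7.0, 11.0, 14.0):
    print(f"pH {ph}: {fraction_deprotonated(pka, ph):.2%} deprotonated")
# pH 7: ~0.01%; pH 11: 50%; pH 14: ~99.9%
```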
Nitronates are also key intermediates in the Nef reaction : when exposed to acids or oxidants, a nitronate hydrolyzes to a carbonyl and azanone . [ 26 ]
Grignard reagents combine with nitro compounds to give a nitrone ; but a Grignard reagent with an α hydrogen will then add again to the nitrone to give a hydroxylamine salt. [ 27 ]
The Leimgruber–Batcho , Bartoli and Baeyer–Emmerling indole syntheses begin with aromatic nitro compounds. Indigo can be synthesized in a condensation reaction from ortho -nitrobenzaldehyde and acetone in strongly basic conditions in a reaction known as the Baeyer–Drewson indigo synthesis .
Many flavin -dependent enzymes are capable of oxidizing aliphatic nitro compounds to less-toxic aldehydes and ketones. Nitroalkane oxidase and 3-nitropropionate oxidase oxidize aliphatic nitro compounds exclusively, whereas other enzymes such as glucose oxidase have other physiological substrates. [ 28 ]
The explosive decomposition of organic nitro compounds is a redox reaction, wherein both the oxidant (the nitro group) and the fuel (the hydrocarbon substituent) are bound within the same molecule. The explosion process generates heat by forming highly stable products including molecular nitrogen (N 2 ), carbon dioxide, and water. The explosive power of this redox reaction is enhanced because these stable products are gases at mild temperatures. Many contact explosives contain the nitro group. | https://en.wikipedia.org/wiki/Ter_Meer_reaction
TeraChem is a computational chemistry software program designed for CUDA -enabled Nvidia GPUs . The initial development started at the University of Illinois at Urbana-Champaign and was subsequently commercialized. It is currently distributed by PetaChem, LLC, located in Silicon Valley . [ 1 ] As of 2020, the software package is still under active development.
TeraChem is capable of fast ab initio molecular dynamics and can utilize density functional theory (DFT) methods for nanoscale biomolecular systems with hundreds of atoms . [ 2 ] All the methods used are based on Gaussian orbitals , in order to improve performance on contemporary (2010s) computer hardware . [ 3 ]
| https://en.wikipedia.org/wiki/TeraChem
TeraGrid was an e-Science grid computing infrastructure combining resources at eleven partner sites. The project started in 2001 and operated from 2004 through 2011.
The TeraGrid integrated high-performance computers, data resources and tools, and experimental facilities. Resources included more than a petaflops of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance computer network connections. Researchers could also access more than 100 discipline-specific databases.
TeraGrid was coordinated through the Grid Infrastructure Group (GIG) at the University of Chicago , working in partnership with the resource provider sites in the United States.
The US National Science Foundation (NSF) issued a solicitation asking for a "distributed terascale facility" from program director Richard L. Hilderbrandt. [ 1 ] The TeraGrid project was launched in August 2001 with $53 million in funding to four sites: the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign , the San Diego Supercomputer Center (SDSC) at the University of California, San Diego , the University of Chicago Argonne National Laboratory , and the Center for Advanced Computing Research (CACR) at the California Institute of Technology in Pasadena, California .
The design was meant to be an extensible distributed open system from the start. [ 2 ] In October 2002, the Pittsburgh Supercomputing Center (PSC) at Carnegie Mellon University and the University of Pittsburgh joined the TeraGrid as major new partners when NSF announced $35 million in supplementary funding. The TeraGrid network was transformed through the ETF project from a 4-site mesh to a dual-hub backbone network with connection points in Los Angeles and at the Starlight facilities in Chicago .
In October 2003, NSF awarded $10 million to add four sites to TeraGrid as well as to establish a third network hub, in Atlanta . These new sites were Oak Ridge National Laboratory (ORNL), Purdue University , Indiana University , and the Texas Advanced Computing Center (TACC) at The University of Texas at Austin .
TeraGrid construction was also made possible through corporate partnerships with Sun Microsystems , IBM , Intel Corporation , Qwest Communications , Juniper Networks , Myricom , Hewlett-Packard Company , and Oracle Corporation .
TeraGrid construction was completed in October 2004, at which time the TeraGrid facility began full production.
In August 2005, NSF's newly created office of cyberinfrastructure extended support for another five years with a $150 million set of awards. It included $48 million for coordination and user support to the Grid Infrastructure Group at the University of Chicago led by Charlie Catlett . [ 3 ] Using high-performance network connections, the TeraGrid featured high-performance computers, data resources and tools, and high-end experimental facilities around the USA. The work supported by the project is sometimes called e-Science .
In 2006, the University of Michigan 's School of Information began a study of TeraGrid. [ 4 ]
In May 2007, TeraGrid integrated resources included more than 250 teraflops of computing capability and more than 30 petabytes (quadrillions of bytes) of online and archival data storage with rapid access and retrieval over high-performance networks. Researchers could access more than 100 discipline-specific databases. In late 2009, The TeraGrid resources had grown to 2 petaflops of computing capability and more than 60 petabytes storage. In mid 2009, NSF extended the operation of TeraGrid to 2011.
A follow-on project was approved in May 2011. [ 5 ] In July 2011, a partnership of 17 institutions announced the Extreme Science and Engineering Discovery Environment (XSEDE). NSF announced funding the XSEDE project for five years, at $121 million. [ 6 ] XSEDE is led by John Towns at the University of Illinois 's National Center for Supercomputing Applications . [ 6 ]
TeraGrid resources were integrated through a service-oriented architecture , in that each resource provided a "service" defined in terms of interface and operation. Computational resources ran a set of software packages called "Coordinated TeraGrid Software and Services" (CTSS). CTSS provided a familiar user environment on all TeraGrid systems, allowing scientists to more easily port code from one system to another. CTSS also provided integrative functions such as single-signon, remote job submission, workflow support, data movement tools, etc. CTSS included the Globus Toolkit, Condor, distributed accounting and account management software, verification and validation software, and a set of compilers, programming tools, and environment variables .
TeraGrid used a 10-gigabit-per-second dedicated fiber-optic backbone network, with hubs in Chicago, Denver, and Los Angeles. All resource provider sites connected to a backbone node at 10 gigabits per second. Users accessed the facility through national research networks such as the Internet2 Abilene backbone and National LambdaRail .
TeraGrid users primarily came from U.S. universities. There were roughly 4,000 users at over 200 universities. Academic researchers in the United States could obtain exploratory, or development, allocations (roughly, in "CPU hours") based on an abstract describing the work to be done. More extensive allocations involved a proposal that was reviewed during a quarterly peer-review process. All allocation proposals were handled through the TeraGrid website. Proposers selected a scientific discipline that most closely described their work, which enabled reporting on the allocation, and use, of TeraGrid by scientific discipline. As of July 2006 the scientific profile of TeraGrid allocations and usage was:
Each of these discipline categories corresponds to a specific program area of the National Science Foundation .
Starting in 2006, TeraGrid provided application-specific services to Science Gateway partners, who served (generally via a web portal) discipline-specific scientific and education communities. Through the Science Gateways program TeraGrid aimed to broaden access by at least an order of magnitude in terms of the number of scientists, students, and educators able to use TeraGrid. | https://en.wikipedia.org/wiki/TeraGrid
The byte is a unit of digital information that most commonly consists of eight bits . Historically, the byte was the number of bits used to encode a single character of text in a computer [ 1 ] [ 2 ] and for this reason it is the smallest addressable unit of memory in many computer architectures . To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol ( RFC 791 ) refer to an 8-bit byte as an octet . [ 3 ] Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness .
The size of the byte has historically been hardware -dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes, and persisted, in legacy systems, into the twenty-first century. In this era, bit groupings in the instruction stream were often referred to as syllables [ a ] or slab , before the term byte became common.
The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. [ 8 ] The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. [ 9 ] Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.
The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electrical and Electronics Engineers (IEEE). [ 10 ] Internationally, the unit octet explicitly defines a sequence of eight bits, eliminating the potential ambiguity of the term "byte". [ 11 ] [ 12 ] The symbol for octet, 'o', also conveniently eliminates the ambiguity in the symbol 'B' between byte and bel .
The term byte was coined by Werner Buchholz in June 1956, [ 4 ] [ 13 ] [ 14 ] [ b ] during the early design phase for the IBM Stretch [ 15 ] [ 16 ] [ 1 ] [ 13 ] [ 14 ] [ 17 ] [ 18 ] computer, which had addressing to the bit and variable field length (VFL) instructions with a byte size encoded in the instruction. [ 13 ] It is a deliberate respelling of bite to avoid accidental mutation to bit . [ 1 ] [ 13 ] [ 19 ] [ c ]
Another origin of byte for bit groups smaller than a computer's word size, and in particular groups of four bits , is on record by Louis G. Dooley, who claimed he coined the term while working with Jules Schwartz and Dick Beeler on an air defense system called SAGE at MIT Lincoln Laboratory in 1956 or 1957, which was jointly developed by Rand , MIT, and IBM. [ 20 ] [ 21 ] Later on, Schwartz's language JOVIAL actually used the term, but the author recalled vaguely that it was derived from AN/FSQ-31 . [ 22 ] [ 21 ]
Early computers used a variety of four-bit binary-coded decimal (BCD) representations and the six-bit codes for printable graphic patterns common in the U.S. Army ( FIELDATA ) and Navy . These representations included alphanumeric characters and special graphical symbols. These sets were expanded in 1963 to seven bits of coding, called the American Standard Code for Information Interchange (ASCII) as the Federal Information Processing Standard , which replaced the incompatible teleprinter codes in use by different branches of the U.S. government and universities during the 1960s. ASCII included the distinction of upper- and lowercase alphabets and a set of control characters to facilitate the transmission of written language as well as printing device functions, such as page advance and line feed, and the physical or logical control of data flow over the transmission media. [ 18 ] During the early 1960s, while also active in ASCII standardization, IBM simultaneously introduced in its product line of System/360 the eight-bit Extended Binary Coded Decimal Interchange Code (EBCDIC), an expansion of their six-bit binary-coded decimal (BCDIC) representations [ d ] used in earlier card punches. [ 23 ] The prominence of the System/360 led to the ubiquitous adoption of the eight-bit storage size, [ 18 ] [ 16 ] [ 13 ] while in detail the EBCDIC and ASCII encoding schemes are different.
In the early 1960s, AT&T introduced digital telephony on long-distance trunk lines . These used the eight-bit μ-law encoding . This large investment promised to reduce transmission costs for eight-bit data.
In Volume 1 of The Art of Computer Programming (first published in 1968), Donald Knuth uses byte in his hypothetical MIX computer to denote a unit which "contains an unspecified amount of information ... capable of holding at least 64 distinct values ... at most 100 distinct values. On a binary computer a byte must therefore be composed of six bits". [ 24 ] He notes that "Since 1975 or so, the word byte has come to mean a sequence of precisely eight binary digits...When we speak of bytes in connection with MIX we shall confine ourselves to the former sense of the word, harking back to the days when bytes were not yet standardized." [ 24 ]
The development of eight-bit microprocessors in the 1970s popularized this storage size. Microprocessors such as the Intel 8080 , the direct predecessor of the 8086 , could also perform a small number of operations on the four-bit pairs in a byte, such as the decimal-add-adjust (DAA) instruction. A four-bit quantity is often called a nibble , also nybble , which is conveniently represented by a single hexadecimal digit.
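The correspondence between nibbles and hexadecimal digits is easy to see in code (a small illustrative snippet):

```python
value = 0xA7                 # one byte: binary 1010 0111
high = (value >> 4) & 0xF    # upper nibble -> 0xA (decimal 10)
low  = value & 0xF           # lower nibble -> 0x7 (decimal 7)
print(f"{value:#04x} -> high nibble {high:#x}, low nibble {low:#x}")
# each nibble corresponds to exactly one hexadecimal digit
```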
The term octet unambiguously specifies a size of eight bits. [ 18 ] [ 12 ] It is used extensively in protocol definitions.
Historically, the term octad or octade was used to denote eight bits as well at least in Western Europe; [ 25 ] [ 26 ] however, this usage is no longer common. The exact origin of the term is unclear, but it can be found in British, Dutch, and German sources of the 1960s and 1970s, and throughout the documentation of Philips mainframe computers.
The unit symbol for the byte is specified in IEC 80000-13 , IEEE 1541 and the Metric Interchange Format [ 10 ] as the upper-case character B.
In the International System of Quantities (ISQ), B is also the symbol of the bel , a unit of logarithmic power ratio named after Alexander Graham Bell , creating a conflict with the IEC specification. However, little danger of confusion exists, because the bel is a rarely used unit. It is used primarily in its decadic fraction, the decibel (dB), for signal strength and sound pressure level measurements, while a unit for one-tenth of a byte, the decibyte, and other fractions, are only used in derived units, such as transmission rates.
The lowercase letter o for octet is defined as the symbol for octet in IEC 80000-13 and is commonly used in languages such as French [ 27 ] and Romanian , and is also combined with metric prefixes for multiples, for example ko and Mo.
More than one system exists to define unit multiples based on the byte. Some systems are based on powers of 10 , following the International System of Units (SI), which defines for example the prefix kilo as 1000 (10³); other systems are based on powers of two . Nomenclature for these systems has led to confusion. Systems based on powers of 10 use standard SI prefixes ( kilo , mega , giga , ...) and their corresponding symbols (k, M, G, ...). Systems based on powers of 2, however, might use binary prefixes ( kibi , mebi , gibi , ...) and their corresponding symbols (Ki, Mi, Gi, ...) or they might use the prefixes K, M, and G, creating ambiguity when the prefixes M or G are used.
While the difference between the decimal and binary interpretations is relatively small for the kilobyte (about 2% smaller than the kibibyte), the systems deviate increasingly as units grow larger (the relative deviation grows by 2.4% for each three orders of magnitude). For example, a power-of-10-based terabyte is about 9% smaller than a power-of-2-based tebibyte.
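The quoted deviations can be verified directly; a small sketch comparing the decimal and binary interpretations of each prefix:

```python
names = ["kilo vs kibi", "mega vs mebi", "giga vs gibi", "tera vs tebi"]

for n, name in enumerate(names, start=1):
    decimal = 1000 ** n   # SI (power-of-10) unit
    binary = 1024 ** n    # IEC (power-of-2) unit
    print(f"{name}: decimal unit is {(binary - decimal) / binary:.1%} smaller")

# kilo vs kibi: decimal unit is 2.3% smaller
# ...
# tera vs tebi: decimal unit is 9.1% smaller
```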
Definition of prefixes using powers of 10—in which 1 kilobyte (symbol kB) is defined to equal 1,000 bytes—is recommended by the International Electrotechnical Commission (IEC). [ 28 ] The IEC standard defines eight such multiples, up to 1 yottabyte (YB), equal to 1000⁸ bytes. [ 29 ] The additional prefixes ronna- for 1000⁹ and quetta- for 1000¹⁰ were adopted by the International Bureau of Weights and Measures (BIPM) in 2022. [ 30 ] [ 31 ]
This definition is most commonly used for data-rate units in computer networks , internal bus, hard drive and flash media transfer speeds, and for the capacities of most storage media , particularly hard drives , [ 32 ] flash -based storage, [ 33 ] and DVDs . [ citation needed ] Operating systems that use this definition include macOS , [ 34 ] iOS , [ 34 ] Ubuntu , [ 35 ] and Debian . [ 36 ] It is also consistent with the other uses of the SI prefixes in computing, such as CPU clock speeds or measures of performance .
As prior art, the IBM System/360 and its related tape systems set the byte at 8 bits. [ 37 ] Early 5.25-inch disks used decimal capacities [ dubious – discuss ] even though they used 128-byte and 256-byte sectors. [ 38 ] Hard disks mostly used 256-byte and then 512-byte sectors before 4096-byte blocks became standard. [ 39 ] RAM has always been sold in powers of 2. [ citation needed ]
A system of units based on powers of 2 in which 1 kibibyte (KiB) is equal to 1,024 (i.e., 2¹⁰) bytes is defined by international standard IEC 80000-13 and is supported by national and international standards bodies ( BIPM , IEC , NIST ). The IEC standard defines eight such multiples, up to 1 yobibyte (YiB), equal to 1024⁸ bytes. The natural binary counterparts to ronna- and quetta- were given in a consultation paper of the International Committee for Weights and Measures' Consultative Committee for Units (CCU) as robi- (Ri, 1024⁹) and quebi- (Qi, 1024¹⁰), but have not yet been adopted by the IEC or ISO. [ 40 ]
An alternative system of nomenclature for the same units (referred to here as the customary convention ), in which 1 kilobyte (KB) is equal to 1,024 bytes, [ 41 ] [ 42 ] [ 43 ] 1 megabyte (MB) is equal to 1024² bytes and 1 gigabyte (GB) is equal to 1024³ bytes, is mentioned by a 1990s JEDEC standard. Only the first three multiples (up to GB) are mentioned by the JEDEC standard, which makes no mention of TB and larger. While confusing and incorrect, [ 44 ] the customary convention is used by the Microsoft Windows operating system [ 45 ] [ better source needed ] and for random-access memory capacity, such as main memory and CPU cache size, and in marketing and billing by telecommunication companies, such as Vodafone , [ 46 ] AT&T , [ 47 ] Orange [ 48 ] and Telstra . [ 49 ]
For storage capacity, the customary convention was used by macOS and iOS through Mac OS X 10.5 Leopard and iOS 10, after which they switched to units based on powers of 10. [ 34 ]
Various computer vendors have coined terms for data of various sizes, sometimes with different sizes for the same term even within a single vendor. These terms include double word , half word , long word , quad word , slab , superword and syllable . There are also informal terms, e.g., half byte and nybble for 4 bits, and octal K for 1000₈ (512).
When I see a disk advertised as having a capacity of one megabyte, what is this telling me? There are three plausible answers, and I wonder if anybody knows which one is correct ... Now this is not a really vital issue, as there is just under 5% difference between the smallest and largest alternatives. Nevertheless, it would [be] nice to know what the standard measure is, or if there is one.
Contemporary [ e ] computer memory has a binary architecture, making a definition of memory units based on powers of 2 most practical. The use of the metric prefix kilo for binary multiples arose as a convenience, because 1024 is approximately 1000 . [ 27 ] This definition was popular in early decades of personal computing , with products like the Tandon 5¼-inch DD floppy format (holding 368 640 bytes) being advertised as "360 KB", following the 1024-byte convention. It was not universal, however. The Shugart SA-400 5¼-inch floppy disk held 109,375 bytes unformatted, [ 51 ] and was advertised as "110 Kbyte", using the 1000 convention. [ 52 ] Likewise, the 8-inch DEC RX01 floppy (1975) held 256 256 bytes formatted, and was advertised as "256k". [ 53 ] Some devices were advertised using a mixture of the two definitions: most notably, floppy disks advertised as "1.44 MB" have an actual capacity of 1440 KiB , the equivalent of 1.47 MB or 1.41 MiB.
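The mixed convention behind the "1.44 MB" floppy is easy to verify; a minimal sketch:

```python
capacity = 1440 * 1024        # "1.44 MB" actually means 1440 KiB
print(capacity)               # 1474560 bytes
print(capacity / 1000**2)     # ~1.47 decimal megabytes (MB)
print(capacity / 1024**2)     # ~1.41 mebibytes (MiB)
```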
In 1995, the International Union of Pure and Applied Chemistry 's (IUPAC) Interdivisional Committee on Nomenclature and Symbols attempted to resolve this ambiguity by proposing a set of binary prefixes for the powers of 1024, including kibi (kilobinary), mebi (megabinary), and gibi (gigabinary). [ 54 ] [ 55 ]
In December 1998, the IEC addressed such multiple usages and definitions by adopting the IUPAC's proposed prefixes (kibi, mebi, gibi, etc.) to unambiguously denote powers of 1024. [ 56 ] Thus one kibibyte (1 KiB) is 1024¹ bytes = 1024 bytes, one mebibyte (1 MiB) is 1024² bytes = 1 048 576 bytes, and so on.
In 1999, Donald Knuth suggested calling the kibibyte a "large kilobyte" ( KKB ). [ 57 ]
The IEC adopted the IUPAC proposal and published the standard in January 1999. [ 58 ] [ 59 ] The IEC prefixes are part of the International System of Quantities . The IEC further specified that the kilobyte should only be used to refer to 1000 bytes. [ 60 ]
Lawsuits arising from alleged consumer confusion over the binary and decimal definitions of multiples of the byte have generally ended in favor of the manufacturers, with courts holding that the legal definition of gigabyte or GB is 1 GB = 1 000 000 000 (10⁹) bytes (the decimal definition), rather than the binary definition (2³⁰, i.e., 1 073 741 824 ). Specifically, the United States District Court for the Northern District of California held that "the U.S. Congress has deemed the decimal definition of gigabyte to be the 'preferred' one for the purposes of 'U.S. trade and commerce' [...] The California Legislature has likewise adopted the decimal system for all 'transactions in this state. ' " [ 61 ]
Earlier lawsuits had ended in settlement with no court ruling on the question, such as a lawsuit against drive manufacturer Western Digital . [ 62 ] [ 63 ] Western Digital settled the challenge and added explicit disclaimers to products that the usable capacity may differ from the advertised capacity. [ 62 ] Seagate was sued on similar grounds and also settled. [ 62 ] [ 64 ]
Many programming languages define the data type byte .
The C and C++ programming languages define byte as an "addressable unit of data storage large enough to hold any member of the basic character set of the execution environment" (clause 3.6 of the C standard). The C standard requires that the integral data type unsigned char must hold at least 256 different values, and is represented by at least eight bits (clause 5.2.4.2.1). Various implementations of C and C++ reserve 8, 9, 16, 32, or 36 bits for the storage of a byte. [ 71 ] [ 72 ] [ f ] In addition, the C and C++ standards require that there be no gaps between two bytes. This means every bit in memory is part of a byte. [ 73 ]
Java's primitive data type byte is defined as eight bits. It is a signed data type, holding values from −128 to 127.
.NET programming languages, such as C# , define byte as an unsigned type and sbyte as a signed data type, holding values from 0 to 255 and −128 to 127 , respectively.
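The Java and .NET ranges describe the same eight bits read two ways; a minimal sketch of the two's-complement reinterpretation (the helper name is illustrative):

```python
def as_signed_byte(bits: int) -> int:
    """Reinterpret an unsigned 8-bit value (0..255) as a signed byte (-128..127)."""
    assert 0 <= bits <= 0xFF
    return bits - 256 if bits >= 128 else bits

print(as_signed_byte(0xFF))  # -1: the byte value 255 reads as -1 when signed
print(as_signed_byte(0x7F))  # 127: values below 128 are the same either way
```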
In data transmission systems, the byte is used as a contiguous sequence of bits in a serial data stream, representing the smallest distinguished unit of data. For asynchronous communication a full transmission unit usually additionally includes a start bit, 1 or 2 stop bits, and possibly a parity bit , and thus its size may vary from seven to twelve bits for five to eight bits of actual data. [ 74 ] For synchronous communication the error checking usually uses bytes at the end of a frame .
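As a sketch of the asynchronous framing described above, assuming 8 data bits sent LSB-first, even parity, and one start and one stop bit (a common but not universal layout):

```python
def frame(data: int) -> list[int]:
    """Build one asynchronous frame: start bit, eight data bits (LSB first),
    even parity bit, one stop bit -- eleven bits carrying eight bits of payload."""
    assert 0 <= data <= 0xFF
    bits = [(data >> i) & 1 for i in range(8)]  # data bits, least significant first
    parity = sum(bits) % 2                      # even parity over the data bits
    return [0] + bits + [parity] + [1]          # start bit is 0, stop bit is 1

print(frame(0x41))  # framing the byte 0x41 ('A'): [0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1]
```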
Terms used here to describe the structure imposed by the machine design, in addition to bit , are listed below. Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (i.e., different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit .) A word consists of the number of data bits transmitted in parallel from or to memory in one memory cycle. Word size is thus defined as a structural property of the memory. (The term catena was coined for this purpose by the designers of the Bull GAMMA 60 [ fr ] computer.) Block refers to the number of words transmitted to or from an input-output unit in response to a single input-output instruction. Block size is a structural property of an input-output unit; it may have been fixed by the design or left to be varied by the program.
[...] Most important, from the point of view of editing, will be the ability to handle any characters or digits, from 1 to 6 bits long. Figure 2 shows the Shift Matrix to be used to convert a 60-bit word , coming from Memory in parallel, into characters , or 'bytes' as we have called them, to be sent to the Adder serially. The 60 bits are dumped into magnetic cores on six different levels. Thus, if a 1 comes out of position 9, it appears in all six cores underneath. Pulsing any diagonal line will send the six bits stored along that line to the Adder. The Adder may accept all or only some of the bits. Assume that it is desired to operate on 4 bit decimal digits , starting at the right. The 0-diagonal is pulsed first, sending out the six bits 0 to 5, of which the Adder accepts only the first four (0-3). Bits 4 and 5 are ignored. Next, the 4 diagonal is pulsed. This sends out bits 4 to 9, of which the last two are again ignored, and so on. It is just as easy to use all six bits in alphanumeric work, or to handle bytes of only one bit for logical analysis, or to offset the bytes by any number of bits. All this can be done by pulling the appropriate shift diagonals. An analogous matrix arrangement is used to change from serial to parallel operation at the output of the adder. [...]
byte: A string that consists of a number of bits, treated as a unit, and usually representing a character or a part of a character. NOTES: 1 The number of bits in a byte is fixed for a given data processing system. 2 The number of bits in a byte is usually 8.
We received the following from W Buchholz, one of the individuals who was working on IBM's Project Stretch in the mid 1950s. His letter tells the story. Not being a regular reader of your magazine, I heard about the question in the November 1976 issue regarding the origin of the term "byte" from a colleague who knew that I had perpetrated this piece of jargon [see page 77 of November 1976 BYTE, "Olde Englishe"] . I searched my files and could not locate a birth certificate. But I am sure that "byte" is coming of age in 1977 with its 21st birthday. Many have assumed that byte, meaning 8 bits, originated with the IBM System/360, which spread such bytes far and wide in the mid-1960s. The editor is correct in pointing out that the term goes back to the earlier Stretch computer (but incorrect in that Stretch was the first, not the last, of IBM's second-generation transistorized computers to be developed). The first reference found in the files was contained in an internal memo written in June 1956 during the early days of developing Stretch . A byte was described as consisting of any number of parallel bits from one to six. Thus a byte was assumed to have a length appropriate for the occasion. Its first use was in the context of the input-output equipment of the 1950s, which handled six bits at a time. The possibility of going to 8-bit bytes was considered in August 1956 and incorporated in the design of Stretch shortly thereafter . The first published reference to the term occurred in 1959 in a paper ' Processing Data in Bits and Pieces ' by G A Blaauw , F P Brooks Jr and W Buchholz in the IRE Transactions on Electronic Computers , June 1959, page 121. The notions of that paper were elaborated in Chapter 4 of Planning a Computer System (Project Stretch) , edited by W Buchholz, McGraw-Hill Book Company (1962). The rationale for coining the term was explained there on page 40 as follows: Byte denotes a group of bits used to encode a character, or the number of bits transmitted in parallel to and from input-output units. A term other than character is used here because a given character may be represented in different applications by more than one code, and different codes may use different numbers of bits (ie, different byte sizes). In input-output transmission the grouping of bits may be completely arbitrary and have no relation to actual characters. (The term is coined from bite , but respelled to avoid accidental mutation to bit. ) System/360 took over many of the Stretch concepts, including the basic byte and word sizes, which are powers of 2. For economy, however, the byte size was fixed at the 8 bit maximum, and addressing at the bit level was replaced by byte addressing. Since then the term byte has generally meant 8 bits, and it has thus passed into the general vocabulary. Are there any other terms coined especially for the computer field which have found their way into general dictionaries of English language?
1956 Summer: Gerrit Blaauw , Fred Brooks , Werner Buchholz , John Cocke and Jim Pomerene join the Stretch team. Lloyd Hunter provides transistor leadership. 1956 July [ sic ]: In a report Werner Buchholz lists the advantages of a 64-bit word length for Stretch. It also supports NSA 's requirement for 8-bit bytes. Werner's term "Byte" first popularized in this memo.
NB. This timeline erroneously specifies the birth date of the term "byte" as July 1956 , while Buchholz actually used the term as early as June 1956 .
[...] 60 is a multiple of 1, 2, 3, 4, 5, and 6. Hence bytes of length from 1 to 6 bits can be packed efficiently into a 60-bit word without having to split a byte between one word and the next. If longer bytes were needed, 60 bits would, of course, no longer be ideal. With present applications, 1, 4, and 6 bits are the really important cases. With 64-bit words, it would often be necessary to make some compromises, such as leaving 4 bits unused in a word when dealing with 6-bit bytes at the input and output. However, the LINK Computer can be equipped to edit out these gaps and to permit handling of bytes which are split between words. [...]
[...] The maximum input-output byte size for serial operation will now be 8 bits, not counting any error detection and correction bits. Thus, the Exchange will operate on an 8-bit byte basis, and any input-output units with less than 8 bits per byte will leave the remaining bits blank. The resultant gaps can be edited out later by programming [...]
I came to work for IBM , and saw all the confusion caused by the 64-character limitation. Especially when we started to think about word processing, which would require both upper and lower case. Add 26 lower case letters to 47 existing, and one got 73 -- 9 more than 6 bits could represent. I even made a proposal (in view of STRETCH , the very first computer I know of with an 8-bit byte) that would extend the number of punch card character codes to 256 [1] . Some folks took it seriously. I thought of it as a spoof. So some folks started thinking about 7-bit characters, but this was ridiculous. With IBM's STRETCH computer as background, handling 64-character words divisible into groups of 8 (I designed the character set for it, under the guidance of Dr. Werner Buchholz , the man who DID coin the term "byte" for an 8-bit grouping). [2] It seemed reasonable to make a universal 8-bit character set, handling up to 256. In those days my mantra was "powers of 2 are magic". And so the group I headed developed and justified such a proposal [3]. That was a little too much progress when presented to the standards group that was to formalize ASCII, so they stopped short for the moment with a 7-bit set, or else an 8-bit set with the upper half left for future work. The IBM 360 used 8-bit characters, although not ASCII directly. Thus Buchholz's "byte" caught on everywhere. I myself did not like the name for many reasons. The design had 8 bits moving around in parallel. But then came a new IBM part, with 9 bits for self-checking, both inside the CPU and in the tape drives . I exposed this 9-bit byte to the press in 1973. But long before that, when I headed software operations for Cie. Bull in France in 1965-66, I insisted that 'byte' be deprecated in favor of " octet ". You can notice that my preference then is now the preferred term. It is justified by new communications methods that can carry 16, 32, 64, and even 128 bits in parallel. But some foolish people now refer to a "16-bit byte" because of this parallel transfer, which is visible in the UNICODE set. I'm not sure, but maybe this should be called a " hextet ". But you will notice that I am still correct. Powers of 2 are still magic!
The word byte was coined around 1956 to 1957 at MIT Lincoln Laboratories within a project called SAGE (the North American Air Defense System), which was jointly developed by Rand , Lincoln Labs, and IBM . In that era, computer memory structure was already defined in terms of word size . A word consisted of x number of bits ; a bit represented a binary notational position in a word. Operations typically operated on all the bits in the full word. We coined the word byte to refer to a logical set of bits less than a full word size. At that time, it was not defined specifically as x bits but typically referred to as a set of 4 bits , as that was the size of most of our coded data items. Shortly afterward, I went on to other responsibilities that removed me from SAGE. After having spent many years in Asia, I returned to the U.S. and was bemused to find out that the word byte was being used in the new microcomputer technology to refer to the basic addressable memory unit.
A question-and-answer session at an ACM conference on the history of programming languages included this exchange: [ John Goodenough : You mentioned that the term "byte" is used in JOVIAL . Where did the term come from? ] [ Jules Schwartz (inventor of JOVIAL): As I recall, the AN/FSQ-31 , a totally different computer than the 709 , was byte oriented. I don't recall for sure, but I'm reasonably certain the description of that computer included the word "byte," and we used it. ] [ Fred Brooks : May I speak to that? Werner Buchholz coined the word as part of the definition of STRETCH , and the AN/FSQ-31 picked it up from STRETCH, but Werner is very definitely the author of that word. ] [ Schwartz: That's right. Thank you. ] | https://en.wikipedia.org/wiki/Terabyte |
A terahertz metamaterial is a class of composite metamaterials designed to interact at terahertz (THz) frequencies. The terahertz frequency range used in materials research is usually defined as 0.1 to 10 THz . [ note 1 ]
This bandwidth is also known as the terahertz gap because it is noticeably underutilized. [ note 2 ] This is because terahertz waves are electromagnetic waves with frequencies higher than microwaves but lower than infrared radiation and visible light . These characteristics mean that it is difficult to influence terahertz radiation with conventional electronic components and devices. Electronics technology controls the flow of electrons , and is well developed for microwaves and radio frequencies . Likewise, the terahertz gap also borders optical or photonic wavelengths ; the infrared , visible , and ultraviolet ranges (or spectrums ), where well developed lens technologies also exist. However, the terahertz wavelength , or frequency range , appears to be useful for security screening, medical imaging , wireless communications systems, non-destructive evaluation , and chemical identification, as well as submillimeter astronomy . Finally, as a non-ionizing radiation it does not have the risks inherent in X-ray screening . [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Currently, a fundamental lack of naturally occurring materials that allow for the desired electromagnetic response has led to the construction of new artificial composite materials, termed metamaterials . The metamaterials are based on a lattice structure which mimics crystal structures . However, the lattice structure of this new material consists of rudimentary elements much larger than atoms or single molecules; it is an artificial, rather than a naturally occurring, structure. Yet the interaction achieved occurs below the dimensions of the terahertz radiation wave . In addition, the desired results are based on the resonant frequency of fabricated fundamental elements . [ 5 ] The appeal and usefulness is derived from a resonant response that can be tailored for specific applications, and can be controlled electrically or optically. Alternatively, the response can be that of a passive material . [ 6 ] [ 7 ] [ 8 ] [ 9 ]
The development of electromagnetic, artificial-lattice structured materials, termed metamaterials, has led to the realization of phenomena that cannot be obtained with natural materials . This is observed, for example, with a natural glass lens , which interacts with light (the electromagnetic wave ) in a way that appears to be one-handed, while light is delivered in a two-handed manner. In other words, light consists of an electric field and magnetic field . The interaction of a conventional lens , or other natural materials, with light is heavily dominated by the interaction with the electric field (one-handed). The magnetic interaction in lens material is essentially nil. This results in common optical limitations such as a diffraction barrier . Moreover, there is a fundamental lack of natural materials that strongly interact with light's magnetic field. Metamaterials, a synthetic composite structure, overcomes this limitation. In addition, the choice of interactions can be invented and re-invented during fabrication, within the laws of physics . Hence, the capabilities of interaction with the electromagnetic spectrum , which is light, are broadened. [ 8 ]
Terahertz frequencies, or submillimeter wavelengths, which exist between microwave frequencies and infrared wavelengths, are virtually unused in the commercial sector, primarily due to limits on propagating the terahertz band through the atmosphere. However, terahertz devices have been useful in scientific applications, such as remote sensing and spectroscopy . [ 10 ]
Development of metamaterials has traversed the electromagnetic spectrum up to terahertz and infrared frequencies, but does not yet include the visible light spectrum. This is because, for example, it is easier to build a structure with larger fundamental elements that can control microwaves . The fundamental elements for terahertz and infrared frequencies have been progressively scaled to smaller sizes. In the future, visible light will require elements to be scaled even smaller before metamaterials can control it effectively. [ 11 ] [ 12 ] [ 13 ]
Along with the ability to now interact at terahertz frequencies is the desire to build, deploy, and integrate THz metamaterial applications universally into society. This is because, as explained above, components and systems with terahertz capabilities will fill a technologically relevant void. Because no known natural materials are available that can accomplish this, artificially constructed materials must now take their place.
Research began with the first demonstration of a practical terahertz metamaterial. Moreover, since many materials do not respond to THz radiation naturally, it is necessary to build the electromagnetic devices which enable the construction of useful applied technologies operating within this range. These are devices such as directed light sources , lenses , switches , [ note 3 ] modulators and sensors . This void also includes phase-shifting and beam-steering devices. [ note 4 ] Real-world applications in the THz band are still in their infancy. [ 8 ] [ 11 ] [ 13 ] [ 14 ]
Moderate progress has been achieved. Terahertz metamaterial devices have been demonstrated in the laboratory as tunable far-infrared filters, optical switching modulators, and metamaterial absorbers . Recently available terahertz radiating sources include THz quantum cascade lasers , optically pumped THz lasers, backward wave oscillators (BWO) and frequency-multiplied sources. However, technologies to control and manipulate THz waves are lagging behind other frequency domains of the spectrum of light. [ 11 ] [ 13 ] [ 14 ]
Furthermore, research into technologies which utilize THz frequencies shows the capabilities for advanced sensing techniques . In areas where other wavelengths are limited, THz frequencies appear poised to fill the gap in the near future for advancements in security, public health , biomedicine , defense , communication , and quality control in manufacturing. This terahertz band has the distinction of being non-invasive and will therefore not disrupt or perturb the structure of the object being radiated. At the same time this frequency band demonstrates capabilities such as passing through and imaging the contents of a plastic container , penetrating a few millimeters of human skin tissue without ill effects, passing through clothing to detect hidden objects on personnel, and the detection of chemical and biological agents as novel approaches for counter-terrorism . [ 9 ] Terahertz metamaterials, because they interact at the appropriate THz frequencies, seem to be one answer in developing materials which use THz radiation. [ 9 ]
Researchers believe that artificial magnetic (paramagnetic) structures, or hybrid structures that combine natural and artificial magnetic materials, can play a key role in terahertz devices. Some THz metamaterial devices are compact cavities, adaptive optics and lenses, tunable mirrors, isolators , and converters . [ 8 ] [ 12 ] [ 15 ]
Without available terahertz sources, other applications are held back. In contrast, semiconductor devices have become integrated into everyday living. This means that commercial and scientific applications for generating the appropriate frequency bands of light commensurate with the semiconductor application or device are in wide use. Visible and infrared lasers are at the core of information technology . Moreover, at the other end of the spectrum, microwave and radio-frequency emitters enable wireless communications. [ 16 ]
However, the terahertz regime, previously defined as the terahertz gap of 0.1 to 10 THz, is an impoverished regime by comparison. Sources for generating the required THz frequencies (or wavelengths ) exist, but other challenges hinder their usefulness. Terahertz laser devices are not compact and therefore lack portability and are not easily integrated into systems . In addition, low-power-consumption, solid state terahertz sources are lacking. Furthermore, the current devices also have one or more shortcomings, such as low power output and poor tuning ability , and some require cryogenic liquids for operation ( liquid helium ). [ 16 ] Additionally, this lack of appropriate sources hinders opportunities in spectroscopy , remote sensing , free space communications, and medical imaging . [ 16 ]
Meanwhile, potential terahertz frequency applications are being researched globally. Two recently developed technologies, Terahertz time-domain spectroscopy and quantum cascade lasers could possibly be part of a multitude of development platforms worldwide. However, the devices and components necessary to effectively manipulate terahertz radiation require much more development beyond what has been accomplished to date (2012). [ 6 ] [ 14 ] [ 15 ] [ 17 ]
As briefly mentioned above, naturally occurring materials such as conventional lenses and glass prisms are unable to significantly interact with the magnetic field of light . The significant interaction ( permittivity ) occurs with the electric field . In natural materials , any useful magnetic interaction will taper off in the gigahertz range of frequencies . Compared to interaction with the electric field, the magnetic component is imperceptible at terahertz , infrared , and visible light frequencies. So, a notable step occurred with the invention of a practical metamaterial at microwave frequencies, [ note 5 ] because the rudimentary elements of metamaterials have demonstrated a coupling and inductive response to the magnetic component commensurate with the electric coupling and response. This demonstrated the occurrence of an artificial magnetism, [ note 6 ] which was later applied to terahertz and infrared electromagnetic waves (or light). In the terahertz and infrared domain, it is a response that has not been discovered in nature. [ 12 ] [ 18 ] [ 19 ]
Moreover, because the metamaterial is artificially fabricated during each step and phase of construction, this gives the ability to choose how light, or the terahertz electromagnetic wave , will travel through the material and be transmitted . This degree of choice is not possible with conventional materials . The control is also derived from the electrical-magnetic coupling and response of rudimentary elements that are smaller than the length of the electromagnetic wave travelling through the assembled metamaterial. [ 18 ] [ 19 ]
Electromagnetic radiation , which includes light, carries energy and momentum that may be imparted to matter with which it interacts. The radiation and matter have a symbiotic relationship. Radiation does not simply act on a material, nor is it simply acted upon by a given material; radiation interacts with matter.
The magnetic interaction, or induced coupling, of any material can be translated into permeability . The permeability of naturally occurring materials is a positive value. A unique ability of metamaterials is to achieve permeability values less than zero (or negative values), which are not accessible in nature. Negative permeability was first achieved at microwave frequencies with the first metamaterials. A few years later, negative permeability was demonstrated in the terahertz regime. [ 12 ] [ 20 ]
Materials which can couple magnetically are particularly rare at terahertz or optical frequencies.
Published research pertaining to some natural magnetic materials states that these materials do respond to frequencies above the microwave range, but the response is usually weak, and limited to a narrow band of frequencies. This reduces the possible useful terahertz devices. It was noted that the realization of magnetism at THz and higher frequencies will substantially affect terahertz optics and their applications. [ 12 ]
This has to do with magnetic coupling at the atomic level. This drawback can be overcome by using metamaterials that mirror atomic magnetic coupling on a scale orders of magnitude larger than the atom. [ 12 ] [ 21 ]
The first terahertz metamaterials able to achieve a desired magnetic response, which included negative values for permeability , were passive materials . Because of this, "tuning" was achieved by fabricating a new material, with slightly altered dimensions to create a new response. However, the notable advance, or practical achievement, is actually demonstrating the manipulation of terahertz radiation with metamaterials .
For the first demonstration, more than one metamaterial structure was fabricated. However, the demonstration showed a range of 0.6 to 1.8 terahertz. The results were believed to also show that the effect can be tuned throughout the terahertz frequency regime by scaling the dimensions of the structure. This was followed by demonstrations at 6 THz and 100 THz.
With the first demonstration, scaling of elements and spacing allowed for success within the terahertz range of frequencies. As with metamaterials in lower frequency ranges, these elements were non-magnetic, but conducting, elements. The design allows a resonance that occurs with the electric and magnetic components simultaneously. Notable is the strong magnetic response of these artificially constructed materials.
The elements respond at resonance, at specified frequencies, by virtue of their specific design. The elements are then placed in a repeating pattern, as is common for metamaterials. In this case, the combined and arrayed elements, along with attention to spacing, comprise a flat, rectangular (planar) structured metamaterial. Since it was designed to operate at terahertz frequencies, photolithography is used to etch the elements onto a substrate. [ 12 ]
The split-ring resonator (SRR) is a common metamaterial in use for a variety of experiments. [ 6 ] Magnetic responses ( permeability ) at terahertz frequencies can be achieved with a structure composed of non-magnetic elements, such as copper-wire SRRs, which demonstrate different responses centered around a resonant frequency. Split-ring resonators show a capability for tuning across the terahertz regime. Furthermore, the repeating structure made up of the constituent materials follows the same strategy of averaging the electromagnetic field as it manipulates and transmits the terahertz radiation. This averaging technique is called an effective medium response . [ 12 ]
Effective permeability μ eff is boosted by the inductance of the rings and the capacitance at the gaps of the split rings. In this terahertz experiment ellipsometry is applied, rather than waveguides. In other words, a light source in free space emits a polarized beam of radiation which is then reflected off the sample. The emitted polarization is intended, and the angle of polarization is known. The change in polarization of the beam reflected off the sample material is then measured, and information on the phase difference (if any) and the reflected polarization is considered. [ 12 ]
The local magnetic field of the cell material can be understood as a magnetic response . Below resonance the local magnetic field increases. This magnetic response stays in phase with the electric field. Because the SRR cell is actually a non-magnetic material, this local magnetic response is temporary and will retain magnetic characteristics only so long as there is an externally applied magnetic field. Thus the total magnetization will drop to zero when the applied field is removed. In addition, the local magnetic response is actually a fraction of the total magnetic field. This fraction is proportional to the field strength, which explains the linear dependency. Likewise, there is an aggregate linear response over the whole material. This tends to mimic alignments and spins at the atomic level. [ 12 ]
As the frequency increases toward resonance, the induced currents in the looped wire can no longer keep up with the applied field, and the local response begins to lag. As the frequency increases further, the induced local field response lags further until it is completely out of phase with the excitation field. This results in a magnetic permeability that falls below unity, including values less than zero. The linear coupling between the induced local field and the fluctuating applied field is in contrast to the non-linear characteristics of ferromagnetism . [ 12 ]
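The behavior described above, permeability enhanced below resonance and driven below zero just above it, is commonly captured by a Lorentzian effective-permeability model for SRR arrays; the following sketch assumes that model, with purely illustrative parameter values:

```python
import numpy as np

def mu_eff(f, f0=1.0e12, F=0.5, gamma=5.0e10):
    """Lorentzian effective-permeability model often used for SRR arrays:
    mu(f) = 1 - F*f**2 / (f**2 - f0**2 + 1j*gamma*f)."""
    return 1 - F * f**2 / (f**2 - f0**2 + 1j * gamma * f)

for f in np.linspace(0.5e12, 2.0e12, 7):      # sweep around a 1 THz resonance
    print(f"{f / 1e12:.2f} THz  Re(mu) = {mu_eff(f).real:+.2f}")
# Re(mu) rises above 1 below the resonance and dips below zero just above it.
```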
Later, a magnetic response in these materials was demonstrated at 100 terahertz and in the infrared regime. Proving the magnetic response was an important step towards later controlling the refractive index . [ 15 ] [ 22 ] Finally, a negative index of refraction was achieved for terahertz wavelengths at 200 terahertz using paired layers of parallel metallic nanorods. [ 23 ] This work is also complemented by surface plasmon studies in the terahertz regime. [ 24 ]
Work also continues with studies of applying external controls such as electronic switching and semiconductor structures to control transmission and reflection properties. [ 25 ] [ 26 ] [ 27 ] [ 28 ]
Electromagnetic metamaterials show promise to fill the terahertz gap (0.1–10 THz). The terahertz gap is caused by two general shortfalls. First, almost no naturally occurring materials are available for applications which would utilize terahertz frequency sources . Second is the inability to translate the successes with EM metamaterials in the microwave and optical domains to the terahertz domain. [ 26 ] [ 27 ]
Moreover, the majority of research has focused on the passive properties of artificial periodic THz transmission , as determined by the patterning of the metamaterial elements, e.g., the effects of the size and shape of inclusions, metal film thickness, hole geometry, periodicity, etc. It has been shown that the resonance can also be affected by depositing a dielectric layer on the metal hole arrays and by doping a semiconductor substrate, both of which result in significant shifting of the resonance frequency. However, little work has focused on the "active" manipulation of the extraordinary optical transmission, though it is essential to realizing many applications. [ 25 ]
Answering this need, there are proposals for "active metamaterials" which can proactively control the proportion of transmission and reflection components of the source (EM) radiation. Strategies include illuminating the structure with laser light, varying an external static magnetic field where the current does not vary, and using an external bias voltage supply (semiconductor controlled). These methods open possibilities for highly sensitive spectroscopy, higher-power terahertz generation, short-range secure THz communication, and even more sensitive detection through terahertz capabilities, as well as the development of techniques for more effective control and manipulation of terahertz waves. [ 26 ] [ 27 ]
Combining metamaterial elements – specifically, split-ring resonators – with microelectromechanical systems technology has enabled the creation of non-planar flexible composites and micromechanically active structures in which the orientation of the electromagnetically resonant elements can be precisely controlled with respect to the incident field. [ 29 ]
The theory, simulation, and demonstration of a dynamic response of metamaterial parameters were shown for the first time with a planar array of split ring resonators (SRRs). [ 30 ]
Terahertz metamaterials are making possible the study of novel devices. [ 31 ] [ 32 ]
In the terahertz regime, compact moderate-power amplifiers are not available. This results in a region that is underutilized, and the lack of suitable amplifiers is one of the direct causes.
Research work has involved investigating, creating, and designing light-weight slow-wave vacuum electronics devices based on traveling wave tube amplifiers . These designs involve folded-waveguide slow-wave circuits, in which the terahertz wave meanders through a serpentine path while interacting with a linear electron beam. Designs of folded-waveguide traveling-wave tubes exist at frequencies of 670, 850, and 1030 GHz. In order to ameliorate the power limitations due to small dimensions and high attenuation, novel planar circuit designs are also being investigated. [ 2 ]
In-house work at the NASA Glenn Research Center has investigated the use of metamaterials—engineered materials with unique electromagnetic properties—to increase the power and efficiency of terahertz amplification in two types of vacuum electronics slow-wave circuits. The first type of circuit has a folded-waveguide geometry employing anisotropic dielectrics and holey metamaterials, which consist of arrays of subwavelength holes. [ 33 ]
The second type of circuit has a planar geometry with a meander transmission line to carry the electromagnetic wave and a metamaterial structure embedded in the substrate. Computational results are more promising with this circuit. Preliminary results suggest that the metamaterial structure is effective in decreasing the electric field magnitude in the substrate and increasing the magnitude in the region above the meander line, where it can interact with an electron sheet beam. In addition, the planar circuit is less difficult to fabricate and can enable a higher current. More work is needed to investigate other planar geometries, optimize the electric-field/electron-beam interaction, and design focusing magnet geometries for the sheet beam. [ 33 ] [ 34 ]
The possibility of controlling radiation in the terahertz regime is leading to analysis of designs for sensing devices and phase modulators. Devices that can apply this radiation would be particularly useful. Various strategies are analyzed or tested for tuning metamaterials that may function as sensors. [ 35 ] [ 36 ] Likewise, a linear phase shift can be accomplished by using control devices. [ 14 ] It is also necessary to have sensors that can detect certain battlefield hazards. [ 37 ]
Terahertz nondestructive evaluation pertains to devices and techniques of analysis occurring in the terahertz domain of electromagnetic radiation . These devices and techniques evaluate the properties of a material, component or system without causing damage. [ 1 ]
Terahertz imaging is an emerging and significant nondestructive evaluation (NDE) technique used for dielectric (nonconducting, i.e., an insulator ) materials analysis and quality control in the pharmaceutical , biomedical , security, materials characterization , and aerospace industries. [ 3 ] [ 4 ] It has proved to be effective in the inspection of layers in paints and coatings, [ 5 ] detecting structural defects in ceramic and composite materials [ 6 ] and imaging the physical structure of paintings [ 7 ] and manuscripts. [ 8 ] [ 9 ] The use of THz waves for non-destructive evaluation enables inspection of multi-layered structures and can identify abnormalities from foreign material inclusions, disbond and delamination, mechanical impact damage, heat damage, and water or hydraulic fluid ingression. [ 10 ] This new method can play a significant role in a number of industries for materials characterization applications where precision thickness mapping (to assure product dimensional tolerances within a product and from product-to-product) and density mapping (to assure product quality within a product and from product-to-product) are required. [ 11 ]
Sensors and instruments are employed in the 0.1 to the 10 THz range for nondestructive evaluation , which includes detection. [ 11 ] [ 12 ]
The Terahertz Density Thickness Imager is a nondestructive inspection method that employs terahertz energy for density and thickness mapping in dielectric , ceramic , and composite materials . This non-contact, single-sided terahertz electromagnetic measurement and imaging method characterizes micro-structure and thickness variation in dielectric ( insulating ) materials. This method was demonstrated for the Space Shuttle external tank sprayed-on foam insulation and has been designed for use as an inspection method for current and future NASA thermal protection systems and other dielectric material inspection applications where no contact can be made with the sample due to fragility and it is impractical to use ultrasonic methods. [ 11 ]
Rotational spectroscopy uses electromagnetic radiation in the frequency range from 0.1 to 4 terahertz (THz). This range includes millimeter-range wavelengths and is particularly sensitive to chemical molecules. The resulting THz absorption produces a unique and reproducible spectral pattern that identifies the material. THz spectroscopy can detect trace amounts of explosives in less than one second. Because explosives continually emit trace amounts of vapor, it should be possible to use these methods to detect concealed explosives from a distance. [ 12 ]
THz-wave radar can sense gas leaks, chemicals and nuclear materials. In field tests, THz-wave radar detected chemicals at the 10-ppm level from 60 meters away. This method can be used in a fence-line or aircraft-mounted system that works day or night in any weather. It can locate and track chemical and radioactive plumes. THz-wave radar that can sense radioactive plumes from nuclear plants has detected plumes several kilometers away, based on radiation-induced ionization effects in air. [ 12 ]
THz tomography techniques are nondestructive methods that can use THz pulsed beam or millimeter-range sources to locate objects in 3D. [ 13 ] These techniques include tomography, tomosynthesis, synthetic aperture radar and time of flight. Such techniques can resolve details on scales of less than one millimeter in objects that are several tens of centimeters in size.
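For the time-of-flight variant, depth follows from the round-trip delay of the reflected pulse; a minimal sketch, assuming a known refractive index (the values are illustrative):

```python
c = 2.998e8          # speed of light in vacuum, m/s
n = 1.5              # assumed refractive index of the inspected layer
delta_t = 2e-12      # measured round-trip delay between echoes: 2 ps

depth = c * delta_t / (2 * n)   # factor 2: the pulse travels down and back
print(f"{depth * 1e6:.0f} um")  # ~200 um
```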
Security imaging is currently being done by both active and passive methods. Active systems illuminate the subject with THz radiation whereas passive systems merely view the naturally occurring radiation from the subject.
Evidently passive systems are inherently safe, whereas an argument can be made that any form of "irradiation" of a person is undesirable. In technical and scientific terms, however, the active illumination schemes are safe according to all current legislation and standards.
The purpose of using active illumination sources is primarily to make the signal-to-noise ratio better. This is analogous to using a flash on a standard optical light camera when the ambient lighting level is too low.
For security imaging purposes the operating frequencies are typically in the range 0.1 THz to 0.8 THz (100 GHz to 800 GHz). In this range skin is not transparent so the imaging systems can look through clothing and hair, but not inside the body. There are privacy issues associated with such activities, especially surrounding the active systems since the active systems, with their higher quality images, can show very detailed anatomical features.
Active systems such as the L3 Provision and the Smiths eqo are actually mm-wave imaging systems rather than Terahertz imaging systems like Millitech systems. These widely deployed systems do not display images, avoiding any privacy issues. Instead they display generic "mannequin" outlines with any anomalous regions highlighted.
Since security screening is looking for anomalous images, items like false legs, false arms, colostomy bags, body-worn urinals, body-worn insulin pumps, and external breast augmentations will show up. Note that breast implants, being under the skin, will not be revealed.
Active imaging techniques can be used to perform medical imaging. Because THz radiation is biologically safe (non-ionizing), it can be used in high-resolution imaging to detect skin cancer. [ 12 ]
NASA Space Shuttle inspections are an example of this technology's application.
After the Shuttle Columbia accident in 2003, Columbia Accident Investigation Board recommendation R3.2.1 stated “Initiate an aggressive program to eliminate all External Tank Thermal Protection System debris-shedding at the source….” To support this recommendation, inspection methods for flaws in foam are being evaluated, developed, and refined at NASA. [ 1 ] [ 11 ] [ 12 ]
STS-114 employed Space Shuttle Discovery , and was the first "Return to Flight" Space Shuttle mission following the Space Shuttle Columbia disaster . It launched at 10:39 EDT , 26 July 2005. During the STS-114 flight significant foam shedding was observed. Therefore, the ability to nondestructively detect and characterize crushed foam after that flight became a significant priority when it was believed that the staff processing the tank had crushed foam by walking on it or from hail damage when the shuttle was on the launch pad or during other preparations for launch.
Additionally, density variations in the foam were also potential points of flaw initiation causing foam shedding. The innovation described below answered the call to develop a nondestructive, totally non-contact, non-liquid-coupled method that could simultaneously and precisely characterize thickness variation (from crushed foam due to worker handling and hail damage) and density variation in foam materials. It was critical to have a method that did not require fluid (water) coupling, since ultrasonic testing methods require water coupling.
There are millions of dollars of ultrasonic equipment in the field and on the market that are used as thickness gauges and density meters . When terahertz nondestructive evaluation is fully commercialized into a more portable form and becomes less expensive, it will be able to replace the ultrasonic instruments for structural plastic , ceramic , and foam materials. The new instruments will not require liquid coupling, thereby enhancing their usefulness in field applications and possibly for high-temperature in-situ applications where liquid coupling is not possible. A potential new market segment can be developed with this technology. [ 11 ] [ 12 ]
Terahertz spectroscopy detects and controls properties of matter with electromagnetic fields that are in the frequency range between a few hundred gigahertz and several terahertz (abbreviated as THz). In many-body systems, several of the relevant states have an energy difference that matches with the energy of a THz photon . Therefore, THz spectroscopy provides a particularly powerful method in resolving and controlling individual transitions between different many-body states. By doing this, one gains new insights about many-body quantum kinetics and how that can be utilized in developing new technologies that are optimized up to the elementary quantum level.
Different electronic excitations within semiconductors are already widely used in lasers , electronic components and computers . At the same time, they constitute an interesting many-body system whose quantum properties can be modified, e.g., via a nanostructure design. Consequently, THz spectroscopy on semiconductors is relevant in revealing both new technological potentials of nanostructures as well as in exploring the fundamental properties of many-body systems in a controlled fashion.
There are a great variety of techniques to generate THz radiation and to detect THz fields. One can, e.g., use an antenna , a quantum-cascade laser , a free-electron laser , or optical rectification to produce well-defined THz sources. The resulting THz field can be characterized via its electric field E THz ( t ). Present-day experiments can already output E THz ( t ) that has a peak value in the range of MV/cm (megavolts per centimeter). [ 1 ] To estimate how strong such fields are, one can compute the level of energy change such fields induce in an electron over the microscopic distance of one nanometer (nm), i.e., L = 1 nm. One simply multiplies the peak E THz ( t ) by the elementary charge e and L to obtain e E THz ( t ) L = 100 meV. In other words, such fields have a major effect on electronic systems because the mere field strength of E THz ( t ) can induce electronic transitions over microscopic scales . One possibility is to use such THz fields to study Bloch oscillations , [ 2 ] [ 3 ] where semiconductor electrons move through the Brillouin zone , just to return to where they started.
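The 100 meV estimate can be reproduced with elementary arithmetic; a minimal sketch:

```python
E_field = 1e6 * 100   # peak field of 1 MV/cm expressed in V/m
L = 1e-9              # one nanometer, in meters

delta_V = E_field * L                                    # potential drop across 1 nm, in volts
print(f"{delta_V * 1e3:.0f} meV per elementary charge")  # 100 meV
```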
The THz sources can be also extremely short, [ 4 ] down to single cycle of THz field's oscillation. For one THz, that means duration in the range of one picosecond (ps). Consequently, one can use THz fields to monitor and control ultrafast processes in semiconductors or to produce ultrafast switching in semiconductor components. Obviously, the combination of ultrafast duration and strong peak E THz ( t ) provides vast new possibilities to systematic studies in semiconductors.
Besides the strength and duration of E THz ( t ), the THz field's photon energy plays a vital role in semiconductor investigations because it can be made resonant with several intriguing many-body transitions. For example, electrons in the conduction band and holes , i.e., electronic vacancies, in the valence band attract each other via the Coulomb interaction . Under suitable conditions, electrons and holes can be bound into excitons , which are hydrogen-like states of matter. At the same time, the exciton binding energy is a few to hundreds of meV, which can be matched energetically with a THz photon. Therefore, the presence of excitons can be uniquely detected [ 5 ] [ 6 ] based on the absorption spectrum of a weak THz field. [ 7 ] [ 8 ] Also simple states, such as plasma and correlated electron–hole plasma, [ 9 ] can be monitored or modified by THz fields.
In optical spectroscopy, the detectors typically measure the intensity of the light field rather than the electric field because there are no detectors that can directly measure electromagnetic fields in the optical range. However, there are multiple techniques, such as antennas and electro-optical sampling , that can be applied to measure the time evolution of E THz ( t ) directly. For example, one can propagate a THz pulse through a semiconductor sample and measure the transmitted and reflected fields as function of time. Therefore, one collects information of semiconductor excitation dynamics completely in time domain, which is the general principle of the terahertz time-domain spectroscopy .
By using short THz pulses, [ 4 ] a great variety of physical phenomena have already been studied. For unexcited, intrinsic semiconductors one can determine the complex permittivity or the THz-absorption coefficient and refractive index, respectively. [ 11 ] The frequency of transversal-optical phonons , to which THz photons can couple, lies for most semiconductors at several THz. [ 12 ] Free carriers in doped semiconductors or optically excited semiconductors lead to a considerable absorption of THz photons. [ 13 ] Since THz pulses pass through non-metallic materials, they can be used for inspection and transmission of packaged items.
The THz fields can be applied to accelerate electrons out of their equilibrium. If this is done fast enough, one can measure the elementary processes, such as how fast the screening of the Coulomb interaction is built up. This was experimentally explored in Ref. [ 14 ] where it was shown that screening is complete within tens of femtoseconds in semiconductors. These insights are very important to understand how electronic plasma behaves in solids .
The Coulomb interaction can also pair electrons and holes into excitons, as discussed above. Due to their analogy to the hydrogen atom , excitons have bound states that can be uniquely identified by the usual quantum numbers 1s, 2s, 2p, and so on. In particular, the 1s-to-2p transition is dipole-allowed and can be directly generated by E THz ( t ) if the photon energy matches the transition energy. In gallium arsenide -type systems, this transition energy is roughly 4 meV, which corresponds to 1 THz photons. At resonance, the dipole d 1s,2p defines the Rabi energy Ω Rabi = d 1s,2p E THz ( t ) that determines the time scale at which the 1s-to-2p transition proceeds.
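The statement that a roughly 4 meV transition matches a roughly 1 THz photon follows from Planck's relation E = hν; a minimal sketch:

```python
h = 4.135667e-15     # Planck constant in eV*s
E = 4e-3             # 1s-to-2p exciton transition energy, ~4 meV in GaAs

nu = E / h                     # photon frequency that matches the transition
print(f"{nu / 1e12:.2f} THz")  # ~0.97 THz, i.e. roughly a 1 THz photon
```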
For example, one can excite the excitonic transition with an additional optical pulse which is synchronized with the THz pulse. This technique is called transient THz spectroscopy. [ 4 ] Using this technique one can follow the formation dynamics of excitons [ 7 ] [ 8 ] or observe THz gain arising from intraexcitonic transitions. [ 15 ] [ 16 ]
Since a THz pulse can be both intense and short, e.g., single-cycle, it is experimentally possible to realize situations where the pulse duration, the time scale set by the Rabi energy, and the THz photon energy ħω become comparable. In this situation, one enters the realm of extreme nonlinear optics , [ 17 ] where the usual approximations, such as the rotating-wave approximation (RWA) or the conditions for complete state transfer, break down. As a result, the Rabi oscillations become strongly distorted by non-RWA contributions, multiphoton absorption or emission processes, and the dynamic Franz–Keldysh effect , as measured in Refs. [ 18 ] [ 19 ]
By using a free-electron laser, one can generate longer THz pulses that are more suitable for detecting Rabi oscillations directly. This technique has indeed demonstrated the Rabi oscillations, or more precisely the related Autler–Townes splitting , in experiments. [ 20 ] The Rabi splitting has also been measured with a short THz pulse, [ 21 ] and the onset of multi-THz-photon ionization has been detected as the THz fields are made stronger. [ 22 ] Recently, it has also been shown that the Coulomb interaction causes nominally dipole-forbidden intra-excitonic transitions to become partially allowed. [ 23 ]
Terahertz transitions in solids can be approached systematically by generalizing the semiconductor Bloch equations [ 9 ] and the related many-body correlation dynamics. At this level, one finds that THz fields are directly absorbed by two-particle correlations that modify the quantum kinetics of electron and hole distributions. A systematic THz analysis must therefore include the quantum kinetics of many-body correlations, which can be treated systematically, e.g., with the cluster-expansion approach . At this level, one can explain and predict a wide range of effects with the same theory, ranging from the Drude -like response [ 13 ] of plasma to extreme nonlinear effects of excitons.
In physics , terahertz time-domain spectroscopy ( THz-TDS ) is a spectroscopic technique in which the properties of matter are probed with short pulses of terahertz radiation . The generation and detection scheme is sensitive to the sample's effect on both the amplitude and the phase of the terahertz radiation.
Typically, an ultrashort pulsed laser is used in the terahertz pulse generation process. In the use of low-temperature grown GaAs as an antenna, the ultrashort pulse creates charge carriers that are accelerated to create the terahertz pulse. In the use of non-linear crystals as a source, a high-intensity ultrashort pulse produces THz radiation from the crystal. A single terahertz pulse can contain frequency components covering much of the terahertz range, often from 0.05 to 4 THz, though the use of an air plasma can yield frequency components up to 40 THz. [ 1 ] After THz pulse generation, the pulse is directed by optical techniques, focused through a sample, then measured.
THz-TDS requires generation of an ultrafast (and thus large-bandwidth) terahertz pulse from an even faster femtosecond optical pulse, typically from a Ti-sapphire laser . That optical pulse is first split to provide a probe pulse whose path length is adjusted using an optical delay line . The probe pulse strobes the detector, which is sensitive to the electric field of the terahertz signal at the instant the optical probe pulse arrives. By varying the path length traversed by the probe pulse, the test signal is measured as a function of time, on the same principle as a sampling oscilloscope (technically, the measurement obtains the convolution of the test signal and the time-domain response of the strobed detector). To obtain the frequency-domain response via the Fourier transform , the measurement must cover each point in time (delay-line offset) of the test pulse. The response of a test sample can be calibrated by dividing its spectrum by the spectrum of the terahertz pulse obtained with the sample removed.
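The calibration step can be written compactly. Below is a minimal sketch, assuming two time-domain traces of equal length measured with and without the sample; the function and variable names are placeholders, not a specific instrument's API.

```python
import numpy as np

def complex_transmission(E_sample, E_reference, dt):
    """Complex transmission T(nu) = S_sample(nu) / S_reference(nu).

    E_sample, E_reference: equal-length time-domain field traces,
    sampled at interval dt (s). The result carries both amplitude
    and phase and is meaningful only inside the usable bandwidth,
    where the reference spectrum is well above the noise floor.
    """
    S_sample = np.fft.rfft(E_sample)
    S_reference = np.fft.rfft(E_reference)
    freqs = np.fft.rfftfreq(len(E_sample), dt)
    return freqs, S_sample / S_reference
```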
Components of a typical THz-TDS instrument, as illustrated in the figure, include an infrared laser , optical beamsplitters , beam steering mirrors , delay stages, a terahertz generator , terahertz beam focusing and collimating optics such as parabolic mirrors , and a detector.
Constructing a THz-TDS experiment using antennas based on low-temperature-grown GaAs (LT-GaAs) requires a laser whose photon energy exceeds the band gap of the material. Ti:sapphire lasers tuned to around 800 nm, just above the band gap of LT-GaAs, are ideal, as they can generate optical pulses as short as 10 fs . These lasers are available as commercial, turnkey systems.
Silver-coated mirrors are optimum for use as steering mirrors for infrared pulses around 800 nm. Their reflectivity is higher than gold and much higher than aluminum at that wavelength.
A beamsplitter is used to divide a single ultrashort optical pulse into two separate beams. A 50/50 beamsplitter is often used, supplying equal optical power to the terahertz generator and detector, though it is common to provide the terahertz generation path with more power given the inefficiency of the terahertz generation process compared to the detection efficiency of infrared (typically 800 nm wavelength) light.
An optical delay line is implemented using a movable stage to vary the path length of one of the two beam paths. A delay stage uses a moving retroreflector to redirect the beam along a well-defined output path, but with an added delay. Movement of the stage holding the retroreflector corresponds to an adjustment of path length and, consequently, of the time at which the terahertz detector is gated relative to the source terahertz pulse.
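Since the beam travels to the retroreflector and back, a stage displacement Δx adds 2Δx of path and hence a delay Δt = 2Δx/c. A minimal sketch of this conversion:

```python
C = 299792458.0  # speed of light, m/s

def stage_delay_ps(displacement_um):
    """Delay in ps added by moving the retroreflector stage by
    displacement_um micrometers (double pass: out and back)."""
    return 2 * displacement_um * 1e-6 / C * 1e12

# Moving the stage by 150 um shifts the gating time by ~1 ps.
print(stage_delay_ps(150.0))  # ~1.0007
```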
A purge box is typically used so that absorption of THz radiation by gaseous water molecules is minimized. A dry air source is often used for this purpose; however, a nitrogen gas source may also be used.
Water has many discrete absorption lines in the THz region, which correspond to rotational modes of the water molecule. Nitrogen, by contrast, being a diatomic molecule with no electric dipole moment, does not (for the purposes of typical THz-TDS) absorb THz radiation. Thus, a purge box may be filled with nitrogen gas so that no unintended discrete absorptions occur in the THz frequency range.
Off-axis parabolic mirrors are commonly used to collimate and focus THz radiation. Radiation from an effective point source, such as a low-temperature gallium arsenide (LT-GaAs) antenna (active region ~5 μm), becomes collimated after reflecting off an off-axis parabolic mirror, while collimated radiation incident on a parabolic mirror is focused to a point (see diagram). Terahertz radiation can thus be manipulated spatially using optical components such as metal-coated mirrors, as well as lenses made from materials that are transparent at THz wavelengths. Samples for spectroscopy are commonly placed at a focus, where the terahertz beam is most concentrated.
THz radiation has several distinct advantages for use in spectroscopy . Many materials are transparent at terahertz wavelengths, and the radiation is safe for biological tissue, being non-ionizing (unlike X-rays ). Many interesting materials have unique spectral fingerprints in the terahertz range that may be used for identification. Demonstrated examples include several different types of explosives , dynamic fingerprinting of DNA and protein molecules using polarization-varying anisotropic terahertz microspectroscopy, [ 2 ] polymorphic forms of many compounds used as active pharmaceutical ingredients (APIs) in commercial medications, and several illegal narcotic substances. [ 3 ]
Since many materials are transparent to THz radiation, underlying materials can be accessed through visually opaque intervening layers.
Though not strictly a spectroscopic technique, the ultrashort duration of THz radiation pulses allows for measurements (e.g., thickness, density, defect location) on difficult-to-probe materials such as foam. These measurement capabilities share many similarities with those of pulsed ultrasonic systems, as the depth of buried structures can be inferred from the timing of their reflections of the short terahertz pulses.
There are three widely used techniques for generating terahertz pulses, all based on ultrashort pulses from titanium-sapphire lasers or mode-locked fiber lasers .
When an ultrashort (100 femtoseconds or shorter) optical pulse illuminates a semiconductor and its photon energy is above the energy band gap of the material, it photogenerates mobile carriers. Most carriers are generated near the surface of the material (typically within 1 micrometre), because the pulse is absorbed exponentially with depth. This has two main effects. First, band bending at the surface accelerates carriers of opposite signs in opposite directions (normal to the surface), creating a dipole; this effect is known as surface field emission. Second, the presence of the surface breaks the symmetry, causing carriers to move (on average) only into the bulk of the semiconductor. This phenomenon, combined with the difference between electron and hole mobilities, also produces a dipole; this is known as the photo-Dember effect and is particularly strong in high-mobility semiconductors such as indium arsenide .
When generating THz radiation via a photoconductive emitter, an ultrafast pulse (typically 100 femtoseconds or shorter) creates charge carriers (electron–hole pairs) in a semiconductor material. The incident laser pulse abruptly switches the antenna from an insulating to a conducting state. Because an electric bias is applied across the antenna, a sudden electric current flows across it. This transient current lasts for about a picosecond and thus emits terahertz radiation, since the Fourier transform of a picosecond-length signal contains THz components.
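That a picosecond current transient indeed contains THz frequency components can be checked numerically, using the fact that the radiated far field is proportional to the time derivative of the current. The current model below, a ~50 fs rise with a ~0.5 ps carrier-lifetime decay, is purely illustrative.

```python
import numpy as np

dt = 5e-15
t = np.arange(0.0, 10e-12, dt)
# Illustrative photocurrent: ~50 fs rise, ~0.5 ps carrier-lifetime decay.
J = (1 - np.exp(-t / 50e-15)) * np.exp(-t / 0.5e-12)

# The radiated far field is proportional to the time derivative dJ/dt.
E_rad = np.gradient(J, dt)

freqs = np.fft.rfftfreq(len(E_rad), dt)
amp = np.abs(np.fft.rfft(E_rad))
# Highest frequency where the amplitude still exceeds 10% of its peak:
cutoff = freqs[np.nonzero(amp > 0.1 * amp.max())[0][-1]]
print(f"spectral content extends to ~{cutoff / 1e12:.1f} THz")
```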
Typically the two antenna electrodes are patterned on a low temperature gallium arsenide (LT-GaAs), semi-insulating gallium arsenide (SI-GaAs), or other semiconductor (such as InP ) substrate .
In a commonly used scheme, the electrodes are formed into the shape of a simple dipole antenna with a gap of a few micrometers and have a bias voltage up to 40 V between them. The ultrafast laser pulse must have a wavelength that is short enough to excite electrons across the bandgap of the semiconductor substrate. This scheme is suitable for illumination with a Ti:sapphire oscillator laser with photon energies of 1.55 eV and pulse energies of about 10 nJ. For use with amplified Ti:sapphire lasers with pulse energies of about 1 mJ, the electrode gap can be increased to several centimeters with a bias voltage of up to 200 kV.
More recent advances towards cost-efficient and compact THz-TDS systems are based on mode-locked fiber laser sources emitting at a center wavelength of 1550 nm. Therefore, the photoconductive emitters must be based on semiconductor materials with smaller band gaps of approximately 0.74 eV such as Fe -doped indium gallium arsenide [ 4 ] or indium gallium arsenide / indium aluminum arsenide heterostructures . [ 5 ]
The short duration of the generated THz pulses (typically ~2 ps ) is primarily due to the rapid rise of the photo-induced current in the semiconductor and the short carrier lifetimes of semiconductor materials such as LT-GaAs. This current may persist for only a few hundred femtoseconds to several nanoseconds, depending on the substrate material. This is not the only means of generation, but it is currently (as of 2008) the most common. [ citation needed ]
Pulses produced by this method have average power levels on the order of several tens of microwatts . [ 5 ] The peak power during the pulses can be many orders of magnitude higher because of the low duty cycle (typically well below 1%), which depends on the repetition rate of the laser source. The maximum bandwidth of the resulting THz pulse is primarily limited by the duration of the laser pulse, while the frequency position of the maximum of the Fourier spectrum is determined by the carrier lifetime of the semiconductor. [ 6 ]
In optical rectification , a high-intensity ultrashort laser pulse passes through a transparent crystal material that emits a terahertz pulse without any applied voltages. It is a nonlinear-optical process, where an appropriate crystal material is quickly electrically polarized at high optical intensities. This changing electrical polarization emits terahertz radiation.
Because of the high laser intensities that are necessary, this technique is mostly used with amplified Ti:sapphire lasers . Typical crystal materials are zinc telluride , gallium phosphide , and gallium selenide.
The bandwidth of pulses generated by optical rectification is limited by the laser pulse duration, terahertz absorption in the crystal, the crystal thickness, and the mismatch between the propagation speeds of the laser pulse and the terahertz pulse inside the crystal. Typically, a thicker crystal generates higher intensities but lower THz frequencies. With this technique, it is possible to extend the generated frequencies to 40 THz (7.5 μm) or higher, although 2 THz (150 μm) is more commonly used since it requires a less complex optical setup.
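The velocity mismatch can be quantified by a coherence length, commonly written as l_c = c / (2ν |n_g,opt − n_THz|), where n_g,opt is the optical group index and n_THz the THz refractive index. A minimal sketch follows; the index values are illustrative assumptions loosely typical of ZnTe, not tabulated material data.

```python
C = 299792458.0  # m/s

def coherence_length_mm(freq_thz, n_group_optical, n_thz):
    """Coherence length (mm) for optical rectification, set by the
    group-velocity mismatch between the pump and the THz wave:
    l_c = c / (2 * nu * |n_g,opt - n_THz|)."""
    mismatch = abs(n_group_optical - n_thz)
    return C / (2 * freq_thz * 1e12 * mismatch) * 1e3

# Index values below are illustrative assumptions loosely typical of
# ZnTe pumped at 800 nm, not tabulated material data:
print(coherence_length_mm(2.0, n_group_optical=3.24, n_thz=3.17))
# ~1 mm: crystals much thicker than this add little usable bandwidth.
```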
The electric field of the terahertz pulse is measured in a detector that is simultaneously illuminated with an ultrashort laser pulse. Two common detection schemes are used in THz-TDS: photoconductive sampling and electro-optical sampling. The power of THz pulses can be detected by bolometers (heat detectors cooled to liquid-helium temperatures), but since bolometers measure only the total energy of a terahertz pulse, rather than its electric field over time, they are unsuitable for THz-TDS.
Because the measurement technique is coherent, it naturally rejects incoherent radiation. Additionally, because the time slice of the measurement is extremely narrow, the noise contribution to the measurement is extremely low.
The signal-to-noise ratio (S/N) of the resulting time-domain waveform depends on experimental conditions (e.g., averaging time). However, due to the coherent sampling techniques described, high S/N values (>70 dB) are routinely observed with averaging times of one minute.
The original problem responsible for the " terahertz gap " (the colloquial term for the historical lack of techniques in the THz frequency range) was that electronics routinely have limited operation at frequencies of 10^12 Hz and above. Two experimental parameters make such measurements possible in THz-TDS with LT-GaAs antennas: the femtosecond "gating" pulses and the < 1 ps lifetimes of the charge carriers in the antenna (which effectively determine the antenna's "on" time). When all optical path lengths are fixed, an effectively dc current results at the detection electronics because of their low time resolution. Picosecond time resolution therefore comes not from fast electronic or optical techniques, but from the ability to adjust optical path lengths on the micrometer (μm) scale. To measure a particular segment of a THz pulse, the optical path lengths are fixed and the (effectively dc) current at the detector, produced by that particular segment of the THz pulse's electric field, is recorded.
THz-TDS measurements are typically not single-shot measurements.
Photoconductive detection is similar to photoconductive generation. Here, the voltage bias across the antenna leads is generated by the electric field of the THz pulse focused onto the antenna, rather than by an external source. The THz electric field drives a current across the antenna leads, which is usually amplified with a low-bandwidth amplifier. This amplified current is the measured parameter corresponding to the THz field strength. Again, the carriers in the semiconductor substrate have an extremely short lifetime, so the THz electric field strength is sampled only for an extremely narrow slice ( femtoseconds ) of the entire electric-field waveform.
The materials used for generation of terahertz radiation by optical rectification can also be used for its detection by using the Pockels effect , where particular crystalline materials become birefringent in the presence of an electric field. The birefringence caused by the electric field of a terahertz pulse leads to a change in the optical polarization of the detection pulse, proportional to the terahertz electric-field strength. With the help of polarizers and photodiodes , this polarization change is measured.
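For small signals, the detected polarization change is linear in the THz field. One common small-signal form for a zinc-blende crystal such as ZnTe is ΔI/I ≈ Γ = (2πL/λ) n³ r₄₁ E_THz. The sketch below uses illustrative, textbook-scale material constants; these values are assumptions, and the exact geometric prefactor depends on crystal orientation.

```python
import math

def eo_signal(E_thz, L=1e-3, wavelength=800e-9, n=2.85, r41=4e-12):
    """Balanced-detector signal dI/I from electro-optic sampling.

    Small-signal retardation in a zinc-blende crystal (orientation
    factor omitted): Gamma = (2*pi*L/lambda) * n**3 * r41 * E_thz.
    Default constants are illustrative values for ZnTe (assumed).
    """
    gamma = 2 * math.pi * L / wavelength * n**3 * r41 * E_thz
    return math.sin(gamma)  # ~gamma for small fields

# A 1 kV/cm (1e5 V/m) field in a 1 mm crystal:
print(eo_signal(1e5))  # ~0.07, i.e. a few percent modulation
```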
As with the generation, the bandwidth of the detection is dependent on the laser pulse duration, material properties, and crystal thickness.
The sensitivity of THz-TDS detection via electro-optical sampling can be enhanced beyond the classical shot noise limit through the use of squeezed light generated by an optical parametric amplifier . [ 7 ]
THz-TDS measures the electric field of a pulse, not just its power. Thus, THz-TDS yields both the amplitude and the phase of each frequency component the pulse contains. In contrast, measuring only the power at each frequency is essentially a photon-counting technique; information regarding the phase of the light is not obtained, and a waveform is therefore not uniquely determined by such a power measurement.
Even when measuring only the power reflected from a sample, the complex optical constants of the material can be obtained, because the real and imaginary parts of an optical constant are not independent: they are related by the Kramers–Kronig relations . A difficulty in applying the Kramers–Kronig relations as written is that information about the sample (reflected power, for example) must be obtained at all frequencies. In practice, far-separated frequency regions do not have significant influence on each other, and reasonable limiting conditions can be applied at high and low frequency, outside the measured range.
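In standard form, the Kramers–Kronig relations for a causal response function χ(ω) = χ₁(ω) + iχ₂(ω) read (with P denoting the Cauchy principal value):

```latex
\chi_1(\omega) = \frac{2}{\pi}\,\mathcal{P}\!\int_0^{\infty}
  \frac{\omega'\,\chi_2(\omega')}{\omega'^2-\omega^2}\,\mathrm{d}\omega',
\qquad
\chi_2(\omega) = -\frac{2\omega}{\pi}\,\mathcal{P}\!\int_0^{\infty}
  \frac{\chi_1(\omega')}{\omega'^2-\omega^2}\,\mathrm{d}\omega'.
```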
THz-TDS, in contrast, does not require the Kramers–Kronig relations. By measuring the electric field of a THz pulse in the time domain, the amplitude and phase of each frequency component of the THz pulse are known (in contrast to the single piece of information provided by a power measurement). Thus the real and imaginary parts of an optical constant can be determined at every frequency within the usable bandwidth of a THz pulse, without the need for frequencies outside the usable bandwidth or for the Kramers–Kronig relations.
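In the simplest textbook approximation, which neglects Fresnel losses and multiple reflections (etalon echoes), the refractive index and absorption coefficient follow directly from the amplitude and phase of the measured complex transmission of a slab of thickness d. A minimal sketch of this extraction (not a specific software package) follows.

```python
import numpy as np

C = 299792458.0  # m/s

def optical_constants(freqs, T_complex, d):
    """Refractive index n and absorption coefficient alpha (1/m) of a
    slab of thickness d (m), from its complex transmission T(nu).

    Simplest approximation: Fresnel losses and multiple reflections
    (etalon echoes) are ignored; freqs (Hz) must exclude zero, and the
    sign of the phase depends on the FFT convention used.
    """
    phase = np.unwrap(np.angle(T_complex))
    n = 1 + C * phase / (2 * np.pi * freqs * d)
    alpha = -2.0 / d * np.log(np.abs(T_complex))
    return n, alpha
```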
Teratology is the study of abnormalities of physiological development in organisms during their life span. It is a sub-discipline in medical genetics which focuses on the classification of congenital abnormalities in dysmorphology caused by teratogens and also in pharmacology and toxicology . Teratogens are substances that may cause non-heritable birth defects via a toxic effect on an embryo or fetus . [ 1 ] Defects include malformations, disruptions, deformations, and dysplasia that may cause stunted growth, delayed mental development, or other congenital disorders that lack structural malformations. [ 2 ] These defects can be recognized prior to or at birth as well as later during early childhood. [ 3 ] The related term developmental toxicity includes all manifestations of abnormal development that are caused by environmental insult . [ 4 ] The extent to which teratogens will impact an embryo is dependent on several factors, such as how long the embryo has been exposed, the stage of development the embryo was in when exposed (gestational timing), the genetic makeup of the embryo, and the transfer rate of the teratogen. [ 5 ] [ 6 ] The dose of the teratogen, the route of exposure to the teratogen, and the chemical nature of the teratogenic agent also contribute to the level of teratogenicity. [ 6 ]
The term was borrowed in 1842 from the French tératologie , where it was formed in 1830 from the Greek τέρας teras ( word stem τέρατ- terat- ), meaning "sign sent by the gods, portent, marvel, monster", and -ologie ( -ology ), used to designate a discourse, treaty, science, theory, or study of some topic. [ 7 ]
Older literature referred to abnormalities of all kinds under the Latin term lusus naturae ( lit. ' freak of nature ' ). As early as the 17th century, teratology referred to a discourse on prodigies and marvels, of anything so extraordinary as to seem abnormal. In the 19th century, it acquired a meaning more closely related to biological deformities, mostly in the field of botany. Currently, its most instrumental meaning is the medical study of teratogenesis, congenital malformations, or individuals with significant malformations. Historically, many pejorative terms were used to label cases of significant physical malformation. In the 1960s, David W. Smith of the University of Washington Medical School (one of the researchers who became known in 1973 for the discovery of fetal alcohol syndrome ), [ 8 ] popularized the term teratology . With the growth of understanding of the origins of birth defects, the field of teratology as of 2015 overlaps with other fields of science, including developmental biology , embryology , and genetics .
Until the 1940s, teratologists regarded birth defects as primarily hereditary. In 1941, the first well-documented cases of environmental agents being the cause of severe birth defects were reported. [ 9 ]
Teratogenesis occurs when the development of an embryo is negatively altered by the presence of teratogens, which are its causes. Common examples of teratogens include genetic disorders , maternal nutrition and health, and chemical agents such as drugs and alcohol . [ 10 ] Lesser-known examples discussed below include stress, [ 11 ] caffeine, [ 12 ] and deficiencies in diet and nutrition. [ 13 ] Although teratogens can affect a fetus at any time during pregnancy, one of the most sensitive windows for the developing embryo is the embryonic period, which extends from about the fourteenth day after conception, when the fertilized egg has implanted in the uterus, until about the sixtieth day after conception. [ 14 ] Teratogens cause abnormal defects through specific mechanisms that act throughout the development of the embryo.
In 1959 and in his 1973 monograph Environment and Birth Defects , embryologist James Wilson put forth six principles of teratogenesis to guide the study and understanding of teratogenic agents and their effects on developing organisms. [ 15 ] These principles were derived from, and expanded on, those laid out by zoologist Camille Dareste in the late 1800s: [ 15 ] [ 16 ]
The mechanisms of these teratogens lie in specific alterations to genes, cells, and tissues within the developing organism that cause deviation from normal development and can result in functional defects, growth stunting, malformation, and even death. Finally, susceptibility to teratogens is elevated during specific, critical periods of development. [ 17 ]
The natural metabolic processes of the human body produce highly reactive oxygen-containing molecules called reactive oxygen species (ROS). [ 18 ] Being highly reactive, these molecules can oxidatively damage fats, proteins, and DNA, and alter signal transduction. Teratogens such as thalidomide, methamphetamine, and phenytoin are known to enhance ROS formation, potentially leading to teratogenesis. [ 18 ]
ROS interfere with redox reactions, chemical processes in which substances change their oxidation states by donating or accepting electrons. [ 19 ] In these reactions, ROS act as strong oxidizing agents: they accept electrons from other molecules, causing those molecules to become oxidized. This shifts the redox balance in cells and, when ROS levels are high, induces oxidative stress, leading to cellular damage. [ 18 ]
Developmental processes such as rapid cell division, cell differentiation, and apoptosis rely on pathways that involve communication between cells through signal transduction. The proper functioning of these pathways is highly dependent on redox reactions, and many of them are vulnerable to disruption by oxidative stress. [ 20 ] Therefore, one mechanism by which teratogens induce teratogenesis is by triggering oxidative stress and derailing redox-dependent signal-transduction pathways in early development. [ 20 ]
Folate plays key roles in DNA methylation and in the synthesis of the nitrogenous bases found in DNA and RNA. These processes are crucial for cell division, cell growth, gene regulation, protein synthesis, and cell differentiation. [ 21 ] All these processes ensure normal fetal development. Since the developing fetus requires rapid cell growth and division, the demand for folate increases during pregnancy; if this demand is not met, teratogenic complications can result. [ 21 ]
Epigenetic modifications are heritable changes in gene expression that do not involve alteration of the underlying DNA sequence. These modifications can include heritable alterations in the transcriptional and translational processes of certain genes, and even in their interactions with other genes. [ 22 ] Many known teratogens affect fetal development by inducing such epigenetic modifications, including switching the transcription of certain genes on or off, regulating the location and distribution of proteins inside the cell, and regulating cell differentiation by modifying which mRNA molecules are translated into protein. [ 22 ]
During embryonic development, a temporary organ called the placenta forms in the womb, connecting the mother to the fetus. The placenta provides oxygen and nutrients to the developing fetus throughout the pregnancy. Environmental influences such as under-nutrition, drugs, alcohol, tobacco smoke, and abnormal hormonal activity can lead to epigenetic changes in placental cells and harm the fetus in the long term, though the specific mechanisms by which the developmental damage takes place remain unclear. [ 23 ]
Common causes of teratogenesis include: [ 24 ] [ 25 ]
In humans , congenital disorders resulted in about 510,000 deaths globally in 2010. [ 32 ]
About 3% of newborns have a "major physical anomaly", meaning a physical anomaly that has cosmetic or functional significance. [ 33 ] Developmental defects manifest in approximately 3% to 5% of newborns in the United States, of which 2% to 3% are teratogen-induced. [ 34 ] Congenital disorders are responsible for 20% of infant deaths. [ 35 ] The most common congenital diseases are heart defects, Down syndrome , and neural tube defects. Trisomy 21, in which cells carry three separate copies of chromosome 21, is the most common form of Down syndrome, accounting for about 95% of cases; translocation Down syndrome is much less common, diagnosed in only about 3% of infants with Down syndrome. [ 36 ] A ventricular septal defect (VSD) is the most common type of heart defect in infants; a large VSD can result in heart failure. [ 37 ] Infants with a small VSD have about a 96% survival rate, and those with a moderate VSD about an 86% survival rate. [ citation needed ] Lastly, a neural tube defect (NTD) is a defect of the brain and spine that forms during early development. If the spinal cord is exposed and in contact with the skin, surgery may be required to prevent infection. [ 38 ]
Though many pregnancies are accompanied by prescription drugs, there is limited knowledge regarding their potential teratogenic risks. Only medications commonly taken during pregnancy that are known to cause structural birth defects are considered teratogenic agents. [ 39 ] One common teratogenic drug is isotretinoin, widely known as Accutane, which became popular through its success in the treatment of skin cancer and severe acne. Over time, however, it has become clear that it causes severe teratogenic effects, with 20–35% of exposed embryos experiencing developmental defects. Exposure to isotretinoin has led to severe skull, facial, cardiovascular, and neurological defects, to name a few. [ 40 ] Another drug, carbamazepine, is sometimes prescribed during pregnancy if the mother has serious concerns regarding epilepsy or bipolar disorder. [ 41 ] Unfortunately, this drug can also cause birth and developmental defects, especially during the early stages of pregnancy, such as defects of the neural tube, which develops into the brain and spinal cord. [ 42 ] An example of this is spina bifida. [ 43 ] Oral and topical antifungal agents such as fluconazole, ketoconazole, and terbinafine are also commonly prescribed in pregnancy. Some fungal infections are asymptomatic and cause little discomfort, but others are more severe and can negatively affect a pregnant woman's quality of life and even the fetus; it is primarily in such cases that antifungal agents are prescribed during pregnancy. Unfortunately, the use of antifungal agents can lead to spontaneous abortions and to defects mainly of the cardiovascular and musculoskeletal systems, as well as some eye defects. [ 44 ] It is safer to avoid taking medications during pregnancy to keep the likelihood of teratogenicity low, as the chance of any pregnancy resulting in birth defects is only 3–5%. [ 45 ] However, medication is necessary and unavoidable in certain cases. As with any medical concern, a doctor should always be consulted so that the pregnancy has the best possible outcome for both mother and baby.
Acitretin is a retinoid and vitamin A derivative used in the treatment of psoriasis. [ 46 ] Acitretin is highly teratogenic and noted for the possibility of severe birth defects. It was initially suggested as a replacement for etretinate. [ 47 ] It should not be used by pregnant women or by women planning to become pregnant within three years following its use. Sexually active women of childbearing age who use acitretin should also use at least two forms of birth control concurrently. Men and women who use it should not donate blood for three years after use, because of the possibility that the blood might be given to a pregnant patient and cause birth defects. In addition, it may cause nausea, headache, itching; dry, red, or flaky skin; dry or red eyes; dry, chapped, or swollen lips; dry mouth; thirst; cystic acne; or hair loss. [ 48 ] [ 49 ] [ 50 ]
Etretinate (trade name Tegison) is a medication developed by Hoffmann–La Roche that was approved by the FDA in 1986 to treat severe psoriasis . It is a second-generation retinoid . [ 51 ] It was subsequently removed from the Canadian market in 1996 and the United States market in 1998 due to the high risk of birth defects. It remains on the market in Japan as Tigason . [ 52 ]
Isotretinoin is classified as a retinoid drug and is used to treat severe acne, other skin conditions, and some cancer types. [ 53 ] In the treatment of acne, it functions by hindering the activity of the skin's sebaceous glands. [ 54 ] It is extremely effective against severe acne but has negative side effects such as dry skin, nausea, joint and muscle pain, blistering skin, and the development of sores on mucous membranes. [ 53 ] Some brand names for isotretinoin are Accutane, Absorica, Claravis, and Myorisan; Accutane itself is no longer on the market, but many generic alternatives are available. [ 53 ]
Prenatal exposure to isotretinoin can cause neurocognitive impairment in some children. [ 55 ] Isotretinoin is able to cross the placenta, potentially harming the developing fetus. If a fetus is exposed to isotretinoin during the first trimester of pregnancy, craniofacial, cardiac, and central nervous system malformations can occur. [ 56 ] Some prenatal exposures to isotretinoin can result in still births or spontaneous abortions. [ 56 ] The use of isotretinoin during pregnancy can increase cell apoptosis, leading to malformations, as well as heart defects. [ 57 ]
In humans , vaccination has become readily available and is important for the prevention of communicable diseases such as polio and rubella , among others. No association between congenital malformations and vaccination has been found; for example, a population-wide study in Finland in which expectant mothers received the oral polio vaccine found no difference in infant outcomes compared with mothers from reference cohorts who had not received the vaccine. [ 58 ] However, on grounds of theoretical risk, it is still not recommended to vaccinate against polio during pregnancy unless there is a risk of infection. [ 59 ] An important exception concerns the influenza vaccine: during the 1918 and 1957 influenza pandemics, mortality from influenza in pregnant women was 45%. In a 2005 study of vaccination during pregnancy, Munoz et al. demonstrated that no adverse outcome was observed in the new infants or mothers, suggesting that the balance of risk between infection and vaccination favors preventive vaccination. [ 60 ]
A fetus can be affected in a number of ways by exposure to various substances during pregnancy. Female reproductive hormones such as estrogen and progesterone are essential for reproductive health, but their synthetic replacements pose concerns: they can cause a multitude of congenital abnormalities and deformities that may affect the fetus, and even the mother's reproductive system, in the long term. A study conducted from 2015 to 2018 found an increased risk of both maternal and neonatal complications as a result of hormone replacement therapy cycles conducted during pregnancy, especially with hormones such as estrogen, testosterone, and thyroid hormone. [ 61 ] [ 62 ] [ 63 ] When hormones such as estrogen and testosterone are replaced, the fetus may be stunted in growth, born prematurely with a lower birth weight, or develop mental retardation, while the mother's ovarian reserve may be depleted and her ovarian follicular recruitment increased. [ 64 ]
It is rare for cancer and pregnancy to coincide: this occurs in only about 1 in 1,000 pregnancies and makes up less than 0.1% of all recorded malignant tumors. [ 65 ] When it does occur, however, there are many complications and considerable, although not well understood, risks to the fetus if chemotherapy drugs are used. The majority of these drugs are cytotoxic, meaning that they have the potential to be carcinogenic , mutagenic, and teratogenic. [ 66 ] If used during the first two weeks of pregnancy, they may inhibit implantation of the embryo and lead to miscarriage. [ 65 ] They may act as teratogenic agents particularly if used from the second to the eighth week, as this is a critical stage for tissue differentiation. The risk remains high throughout the first trimester, with a 14% rate of major malformations. [ 67 ] Chemotherapeutic drugs are considered safer to use during the second and third trimesters, but there is limited research to fully support this.
Thalidomide , also known as Thalomid, was used in the mid-1900s primarily as a sedative. [ 69 ] It was first introduced in Germany and spread to other countries as a therapeutic prescription from the 1950s to the early 1960s in Europe, used as an anti-nausea medication to alleviate morning sickness among pregnant women. [ 70 ] While the exact mechanism of action of thalidomide is not known, it is thought to be related to inhibition of angiogenesis through interaction with the insulin-like growth factor (IGF-1) and fibroblast growth factor 2 (FGF-2) pathways. [ 68 ] The drug also acted on the immune system, reducing the overall blood cell count after repeated use and hindering the generation of new cells. [ 70 ] In the 1960s, it became apparent that thalidomide altered embryonic development and led to limb deformities such as thumb absence, underdevelopment of entire limbs, or phocomelia . [ 68 ] It is among the first drugs for which research pointed to the possibility of causing birth defects. [ 70 ] Thalidomide may have caused teratogenic effects in over 10,000 babies worldwide. [ 71 ] [ 72 ] As its dangers became better known, other uses were found, such as in the treatment of leprosy, cancer, and HIV infections. [ 70 ]
In the US, alcohol is subject to the FDA drug labeling Pregnancy Category X ( Contraindicated in pregnancy ). Alcohol is known to cause fetal alcohol spectrum disorder . [ citation needed ]
Prenatal alcohol exposure (PAE) can have a wide range of effects on a developing fetus. Some of the most prominent possible outcomes include the development of fetal alcohol syndrome, a reduction in brain volume, stillbirths, spontaneous abortions, and impairments of the nervous system, among others. [ 73 ] Fetal alcohol syndrome has numerous symptoms, which may include cognitive impairments and impairment of facial features. [ 73 ] PAE remains the leading cause of birth defects and neurodevelopmental abnormalities in the United States, affecting 9.1 to 50 per 1,000 live births in the U.S. and 68.0 to 89.2 per 1,000 in populations with high levels of alcohol use. [ 74 ]
Consuming tobacco products while pregnant or breastfeeding can have significant negative impacts on the health and development of the unborn child and newborn infant. [ 75 ] A research study conducted in 1957 examined the relationship between tobacco consumption during pregnancy and premature birth, and found significant evidence that tobacco consumption during pregnancy can cause the mother to go into labor and deliver earlier than the expected due date. [ 76 ] Some of the data showed conflicting evidence, because tobacco reduces premature birth via gestational hypertension but increases other risks. [ 76 ] From 1957 to 1986, studies covering over 500,000 births showed that babies born to mothers who used tobacco during pregnancy were more likely to weigh less than babies born to non-smoking mothers. [ 76 ] A study of six-year-olds found a correlation between lower birth weight and lower IQ. [ 76 ] Lower birth weight can thus harm the child by affecting brain development over time, as the fetus is unable to develop the neurological pathways needed to grow. [ 76 ] Tobacco use during pregnancy can also cause stillbirth, increasing the risk up to threefold compared with non-users, [ 76 ] and research shows that the earlier in the pregnancy the exposure occurs, the higher the chance of stillbirth. [ 76 ] Babies exposed to nicotine and tobacco can develop an addiction to the substance while still developing, causing addiction-like behavioral patterns after birth. [ 76 ]
E-cigarettes are electronic devices that contain a heating element and a cartridge holding liquid. [ 77 ] The liquid in the cartridges contains nicotine at about one-third to two-thirds the amount found in regular cigarettes. [ 75 ] The nicotine therefore still crosses the placenta and can be detected in the fetus's blood and plasma at levels higher than the maternal concentrations. [ 76 ] It can be harmful to the developing fetus's brain and lungs. [ 77 ] The liquid also contains artificial flavoring agents that can be harmful to the body. [ 77 ] Nicotine exposure during development can cause problems such as birth deformities or retardation , [ 75 ] including incomplete formation of the skull, partially formed limbs, and cardiovascular issues. [ 75 ]
Cocaine can act as a teratogen , having various effects on the developing fetus. [ 78 ] Common teratogenic defects attributed to cocaine include hydronephrosis , cleft palate , polydactyly , and Down syndrome . [ 78 ] Cocaine has a low molecular weight and high water and lipid solubility, which enable it to cross the placenta and the fetal blood–brain barrier. [ 79 ] Because cocaine can pass through the placenta and enter the fetus, fetal circulation can be negatively affected. With restricted fetal circulation, the development of organs in the fetus can be impaired, in some cases even resulting in intestines developing outside the fetus's body. [ 78 ] Cocaine use during pregnancy can also result in obstetric complications such as preterm birth, uterine rupture , miscarriage , and stillbirth . [ 78 ]
There is currently no reliable evidence that marijuana consistently acts as a teratogen. However, some studies show that it may have negative effects on fetal development and consequent neurobehavioral outcomes. Frequent use of marijuana during pregnancy has been related to a reduction in birth weight, although the association is not strong. [ 80 ] Reported neurodevelopmental effects include sleep disturbances, hyperactivity, increased delinquency, and worsened problem-solving. [ 80 ] However, these data are not conclusive, because a variety of other factors tend to be associated with prenatal use of marijuana, including lower economic status and exposure to other illicit drugs. Complications with maternal use of cannabis also stem from the fact that it is excreted into breast milk in small quantities and may harm motor development if exposure is regular. [ 81 ] Mothers are advised to refrain from using any products containing THC while they are breastfeeding or pregnant.
Caffeine consumption during pregnancy has been linked to intrauterine growth retardation and to spontaneous abortion during the first trimester. Other teratogenic effects include low birth weight, [ 82 ] problems with neural tube development, decreased head circumference, excessive infant growth, and cognitive impairments at birth. Caffeine's chemical structure allows it to cross biological membranes, including the placental barrier, [ 83 ] through which it reaches the developing embryo.
The inability of the embryo to break down caffeine results in a buildup of caffeine, which can produce teratogenic effects by blocking adenosine receptors that regulate several neurotransmitters, including dopamine, serotonin, norepinephrine, and GABA. [ 84 ] The teratogenic effects of caffeine are variable and affect individuals differently depending on their sensitivity to caffeine. [ 85 ] One mother may experience no teratogenic effects from caffeine consumption during pregnancy, while another could have significant complications. [ 86 ]
One example of a physical agent that may give rise to developmental complications is heat. Women may be exposed to heat from external sources such as extreme heat conditions and hot tubs. External temperatures exceeding 102 °F (39 °C) can give rise to fetal complications via neural tube malformation. [ 87 ] The exact mechanisms relating heat to neural tube defects are not well known. One potential theory connects heat to multiple cell-related disturbances, including in cell movement, cell division, and apoptosis; the disruption of these normal processes may ultimately contribute to neural tube malformation. [ 88 ]
Another source of heat exposure is the pregnancy itself, through maternal weight gain as well as the heat produced by fetal metabolism, both of which may impair the dissipation of heat. The exact mechanisms beyond these surface-level causes are not clear. One theory holds that this heat induces heat-shock proteins, which then disrupt the normal protein balance; this deviation may interfere with fetal development. Another theory draws potential connections between elevated temperature, oxidative stress, and inflammation, together with restricted blood flow to the fetus. [ 89 ]
Although large exposures to radiation during pregnancy are rare, when they occur the resulting teratogenic complications arise through various mechanisms. The negative effects of radiation generally stem from its interaction with the stem cells of the developing fetus; there are also associations with DNA damage, oxidative stress responses, and changes in protein expression. Ionizing radiation in particular often causes chemical changes that yield abnormal chemical species. These can then act on two different targets: they can alter specific tissue-level structures in a predictable way, or act on DNA in a more random fashion. [ 90 ]
While some ranges of sound are kept from reaching the fetus by the mother's abdomen and uterus, which act as a barrier of sorts, there is still evidence that both high-intensity sounds and continuous exposure to sound can be harmful to the fetus. Such sounds may bring about many potential problems, including chromosomal abnormalities, altered social behavior after birth, and hearing problems. [ 91 ] Regarding hearing damage specifically, external sounds are thought to damage the developing fetal cochlea and its constituent parts, particularly the inner and outer hair cells. [ 92 ]
Long before modern science, it was understood that heavy metals could harm those exposed to them. The Greek physician Pedanius Dioscorides described the effects of lead exposure as something that "makes the mind give way". Lead exposure in adults can lead to cardiological, renal, reproductive, and cognitive issues that are often irreversible; lead exposure during pregnancy, however, can be detrimental to the long-term health of the fetus. [ 93 ] Exposure to lead during pregnancy is well known to have teratogenic effects on fetal development. [ 94 ] Specifically, fetal exposure to lead can cause cognitive impairment, premature birth, unplanned abortion, ADHD, and more. [ 95 ] Lead exposure during the first trimester of pregnancy is the strongest predictor of cognitive development issues after birth. [ 94 ]
Low socioeconomic status correlates with a higher probability of lead exposure. [ 96 ] A well-known recent example of lead poisoning, and of the impact it can have on a community, is the 2014 water crisis in Flint, Michigan . Researchers found that female births occurred at a higher rate than male births in Flint compared with surrounding areas. The higher rate of female births indicated a problem, because male fetuses are more sensitive to pregnancy hazards than female fetuses. [ 97 ]
Phthalate acid esters (PAEs) are a class of chemical plasticizers used to increase flexibility in commercial plastics such as polyethylene terephthalate (PET) and polyvinyl chloride (PVC). Phthalates are currently used in several consumer goods, including food packaging, cosmetics, clothing, fragrances, and toys. [ 98 ] Additionally, they have widespread use in pharmaceutical and medical products, including in coatings and fillers of extended-release medications, [ 99 ] blood bag packaging, tubes used in blood transfers, and hemodialysis units. [ 100 ]
The most common phthalates include di(2-ethylhexyl) phthalate and di-n-butyl phthalate. As of 2017, di(2-ethylhexyl) phthalate is estimated to make up 30% of plastic produced in the United States and European Union, [ 98 ] and up to 80% of plastic produced in China. [ 98 ]
Several animal studies, including in rats, mice, and chick embryos, have been conducted to observe the specific effects of DEHP. Observed effects of high phthalate exposure in utero included neural tube malformations, [ 100 ] encephalopathy, [ 98 ] limb malformations, [ 100 ] decreased vasculature, [ 101 ] vascular malformations, [ 102 ] decreased body weight, [ 98 ] and intrauterine death at high concentrations. [ 98 ] Higher concentrations of phthalates and phthalate metabolites have also been observed in the urine of mothers of children with neural tube malformations.
Phthalate exposure induces teratogenic effects through multiple mechanisms of action. High levels of di-(2-ethylhexyl) phthalate create oxidative stress in utero, which results in cellular apoptosis in developing fetuses. [ 101 ] In vivo, di-(2-ethylhexyl) phthalate is hydrolyzed, yielding 2-ethylhexanol. It is hypothesized that the metabolic byproduct of 2-ethylhexanol, ethylhexanoic acid, is the primary teratogen responsible for developmental defects in embryos exposed to di-(2-ethylhexyl) phthalate. [ 103 ]
The use of di-n-butyl phthalate in children's products was restricted in the United States in 2008, and it is restricted in cosmetics in the European Union. Several phthalates, including di-n-butyl phthalate, di-n-hexyl phthalate, and butyl benzyl phthalate, were issued a Proposition 65 warning by the state of California in March 2005 following evidence of reproductive toxicity and teratogenic effects. [ 104 ]
Maternal stress has been associated with an increased risk of various birth defects, though a direct causal relationship has not been conclusively established. Studies suggest that exposure to significant psychological stress or traumatic events during pregnancy may correlate with a higher incidence of congenital anomalies, such as orofacial clefts (cleft lip and palate), neural tube defects, and conotruncal heart defects. [ 105 ] One proposed mechanism involves the dysregulation of maternal stress hormones, particularly glucocorticoids, which include cortisol and other corticosteroids. These "stress hormones" are capable of crossing the placental barrier, but their effects on the fetus depend on the timing, duration, and intensity of exposure. [ 106 ] The placenta expresses various enzymes that metabolize active cortisol into its inactive form, protecting the fetus; however, extreme physiological responses or chronic stress could overwhelm this protective mechanism. Additionally, stress-induced changes in maternal physiology, such as reduced uteroplacental blood flow, inflammation, and oxidative stress, may further contribute to developmental disruptions. [ 107 ] Although corticosteroids are sometimes used therapeutically to promote fetal lung maturation in preterm labor, excessive or prolonged exposure has been linked to intrauterine growth restriction and altered fetal programming. [ 108 ] Further research is needed to clarify the exact role of maternal stress in teratogenesis and to determine the potential long-term impacts on offspring health.
Micronutrient deficiencies during pregnancy can contribute to teratogenesis by disrupting essential developmental processes. Deficiencies in folate, iodine, vitamin A, and other key nutrients have been linked to congenital anomalies, miscarriage, and impaired fetal growth. These deficiencies impair cellular differentiation, gene expression, and organogenesis, making proper maternal nutrition crucial for fetal development. Prevention strategies include dietary supplementation and food fortification programs to reduce the incidence of birth defects worldwide. [ 109 ]
Folate deficiency increases the risk of neural tube defects. It has been shown that supplementation of folate before, during, and after conception is able to reduce the risk of a fetus developing neural tube defects, cardiovascular malformations, cleft lip and palate, urogenital abnormalities, and reduced limb size. [ 110 ]
In mothers, iodine deficiency can lead to hypothyroidism, increasing the chance of miscarriage. [ 111 ] Hypothyroidism can also cause growth problems in the baby and increase the chance of preterm delivery. If the iodine deficiency is severe, the likelihood of stillbirth is increased, as is the child's potential for hearing problems. [ 111 ] Iodine deficiency has been associated with craniofacial and heart defects. [ 109 ] The most severe cases of iodine-deficiency-induced hypothyroidism can result in cretinism . [ 112 ]
Zinc deficiency can result in fetal death, intrauterine growth retardation, and teratogenesis. [ 113 ] It can also have postnatal effects, such as behavioral abnormalities, elevated risk of high blood pressure, or impaired cognitive abilities. [ 113 ]
Evidence for congenital deformities found in the fossil record is studied by paleopathologists, specialists in ancient disease and injury. Fossils bearing evidence of congenital deformity are scientifically significant because they can help scientists infer the evolutionary history of life's developmental processes. For instance, the discovery of a Tyrannosaurus rex specimen with a block vertebra implies that vertebrae have been developing in the same basic way since at least the most recent common ancestor of dinosaurs and mammals. Other notable fossil deformities include a hatchling specimen of the bird-like dinosaur Troodon , the tip of whose jaw was twisted, [ 114 ] and a specimen of the choristodere Hyphalosaurus that had two heads, the oldest known example of polycephaly . [ 115 ]
Thalidomide is a teratogen known to be significantly detrimental to organ and limb development during embryogenesis. [ 116 ] It has been observed in chick embryos that exposure to thalidomide can induce limb outgrowth deformities, due to increased oxidative stress interfering with the Wnt signaling pathway , increasing apoptosis, and damaging immature blood vessels in developing limb buds . [ 27 ] [ 117 ]
Retinoic acid (RA) is significant in embryonic development; it governs limb patterning in species such as mice and other vertebrates. [ 118 ] For example, during newt limb regeneration, increased RA proximalizes the regenerating limb, and the extent of proximalization increases with the amount of RA present during regeneration. [ 118 ] A study examined intracellular RA activity in mice in relation to the CYP26 enzymes, which play a critical role in metabolizing RA, [ 118 ] and showed that while RA is significant in various aspects of embryonic limb development, irregular control or excess amounts of RA can have teratogenic impacts, causing malformations of the limbs. The study focused on CYP26B1, which is highly expressed in regions of limb development in mice. [ 118 ] Lack of CYP26B1 was shown to cause the RA signal to spread toward the distal part of the limb, producing proximodistal patterning irregularities. [ 118 ] CYP26B1 deficiency also induced apoptosis in the developing mouse limb and delayed the maturation of chondrocytes, the cells that secrete the cartilage matrix essential for limb structure. [ 118 ] The researchers also examined limb development in wild-type mice (mice without a CYP26B1 deficiency) whose embryos were exposed to excess RA; the results showed effects on limb patterning similar to those in CYP26B1-deficient mice, i.e., a proximodistal patterning defect was still observed when excess RA was present. [ 118 ] This indicates that RA acts as a morphogen specifying proximodistal patterning in mouse limb development, and that CYP26B1 is needed to prevent apoptosis of limb tissues and allow proper limb development in vivo. [ citation needed ]
There is evidence of teratogenic effects of lead in rats as well. In one experiment, pregnant rats were given drinking water containing lead before and during pregnancy. Many detrimental effects and signs of teratogenesis were found, such as negative impacts on the formation of the cerebellum, fetal mortality, and developmental issues in various parts of the body. [ 119 ]
In botany , teratology investigates the theoretical implications of abnormal specimens. For example, the discovery of abnormal flowers, such as flowers with leaves instead of petals or flowers with staminoid pistils, furnished important evidence for the " foliar theory ", the theory that all flower parts are highly specialised leaves. [ 120 ] In plants, such specimens are denoted as 'lusus naturae' (' sports of nature', abbreviated as 'lus.'); and occasionally as 'ter.', 'monst.', or 'monstr.'. [ 121 ]
Plants can have mutations that lead to various types of deformation. [ citation needed ]
Studies designed to test the teratogenic potential of environmental agents use animal model systems (e.g., rat, mouse, rabbit, dog, and monkey). Early teratologists exposed pregnant animals to environmental agents and observed the fetuses for gross visceral and skeletal abnormalities. While this is still part of the teratological evaluation procedures today, the field of teratology is moving to a more molecular level, seeking the mechanism(s) of action by which these agents act. One example of this is the use of mammalian animal models to evaluate the molecular role of teratogens in the development of embryonic populations, such as the neural crest , [ 122 ] which can lead to the development of neurocristopathies . Genetically modified mice are commonly used for this purpose. In addition, pregnancy registries are large, prospective studies that monitor exposures women receive during their pregnancies and record the outcome of their births. These studies provide information about possible risks of medications or other exposures in human pregnancies. Prenatal alcohol exposure (PAE) can produce craniofacial malformations, a phenotype that is visible in fetal alcohol syndrome . Current evidence suggests that craniofacial malformations occur via apoptosis of neural crest cells, [ 123 ] interference with neural crest cell migration, [ 124 ] [ 125 ] and disruption of sonic hedgehog (shh) signaling . [ 126 ]
Understanding how a teratogen causes its effect is not only important in preventing congenital abnormalities but also has the potential for developing new therapeutic drugs safe for use with pregnant women. [ citation needed ] | https://en.wikipedia.org/wiki/Teratology |
A teratoma is a tumor made up of several types of tissue , such as hair , muscle , teeth , or bone . [ 4 ] Teratomata typically form in the tailbone (where it is known as a sacrococcygeal teratoma ), ovary , or testicle . [ 4 ]
Symptoms may be minimal if the tumor is small. [ 2 ] A testicular teratoma may present as a painless lump. [ 1 ] Complications may include ovarian torsion , testicular torsion , or hydrops fetalis . [ 1 ] [ 2 ] [ 3 ]
They are a type of germ cell tumor (a tumor that begins in the cells that give rise to sperm or eggs ). [ 4 ] [ 8 ] They are divided into two types: mature and immature. [ 4 ] Mature teratomas include dermoid cysts and are generally benign . [ 8 ] Immature teratomas may be cancerous . [ 4 ] [ 9 ] Most ovarian teratomas are mature. [ 10 ] In adults, testicular teratomas are generally cancerous. [ 11 ] Definitive diagnosis is based on a tissue biopsy . [ 2 ]
Treatment of coccyx, testicular, and ovarian teratomas is generally by surgery. [ 5 ] [ 6 ] [ 12 ] Testicular and immature ovarian teratomas are also frequently treated with chemotherapy . [ 6 ] [ 10 ]
Teratomas occur in the coccyx in about one in 30,000 newborns, making them one of the most common tumors in this age group. [ 5 ] [ 7 ] Females are affected more often than males. [ 5 ] Ovarian teratomas represent about a quarter of ovarian tumors and are typically noticed during middle age. [ 10 ] Testicular teratomas represent almost half of testicular cancers . [ 13 ] They can occur in both children and adults. [ 14 ] The term comes from the Greek word for "monster" [ 15 ] plus the "-oma" suffix used for tumors.
Teratomas can cause an autoimmune illness called Anti-NMDA receptor encephalitis . In this condition, the teratomas may contain B cells with NMDA-receptor specificities. [ 16 ]
After teratoma removal surgery, a risk exists of regrowth in place, or in nearby organs. [ 17 ]
A mature teratoma is a grade 0 teratoma. They are highly variable in form and histology, and may be solid, cystic, or a combination of the two. A mature teratoma often contains several different types of tissue such as skin , muscle , and bone . Skin may surround a cyst and grow abundant hair (see: § Dermoid cyst ) . Mature teratomas generally are benign, with 0.17–2% of mature cystic teratomas becoming malignant. [ 18 ]
Immature teratoma is the malignant counterpart of the mature teratoma and contains immature tissues which typically show primitive or embryonal neuroectodermal histopathology. Immature teratoma has one of the lowest rates of somatic mutation of any tumor type and results from one of five mechanisms of meiotic failure . [ 19 ]
Gliomatosis peritonei, which presents as a deposition of mature glial cells in the peritoneum, is almost exclusively seen in conjunction with cases of ovarian teratoma. Through exome sequencing studies, it was found that gliomatosis is genetically identical to the parent ovarian tumor and develops from cells that disseminate from the ovarian teratoma. [ 19 ]
A dermoid cyst is a mature cystic teratoma containing hair (sometimes very abundant) and other structures characteristic of normal skin and other tissues derived from the ectoderm . The term is most often applied to teratoma on the skull sutures and in the ovaries of females. [ citation needed ]
Fetus in fetu and fetiform teratoma are rare forms of mature teratomas that include one or more components resembling a malformed fetus. Both forms may contain or appear to contain complete organ systems, even major body parts, such as a torso or limbs. Fetus in fetu differs from fetiform teratoma in having an apparent spine and bilateral symmetry . [ 20 ]
Most authorities agree that fetiform teratomas are highly developed mature teratomas; the natural history of fetus in fetu is controversial. [ 20 ] It has been noted that fetiform teratoma is reported more often (by gynecologists) in ovarian teratomas, and fetus in fetu is reported more often (by general surgeons) in retroperitoneal teratomas. Fetus in fetu has often been interpreted as a fetus growing within its twin . As such, this interpretation assumes a special complication of twinning , one of several grouped under the term parasitic twin . In many cases, the fetus in fetu is reported to occupy a fluid-filled cyst within a mature teratoma. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Cysts within mature teratomas may have partially-developed organ systems: reports include cases of partial cranial bones , long bones and a rudimentary, beating heart. [ 25 ] [ 26 ]
Regardless of whether fetus in fetu and fetiform teratoma are one entity or two, they are distinct from and not to be confused with ectopic pregnancy .
A struma ovarii (also known as goitre of the ovary or ovarian goiter) is a rare form of mature teratoma that contains mostly thyroid tissue. [ 27 ]
Epignathus is a rare teratoma originating in the oropharyngeal area that occurs in utero . It presents with a mass protruding from the mouth at birth. If untreated, it makes breathing impossible. An EXIT procedure is the recommended initial treatment.
Teratomas may be found in babies, children, and adults. Teratomas of embryonal origin are most often found in babies at birth, in young children, and, since the advent of ultrasound imaging , in fetuses.
The most commonly diagnosed fetal teratomas are sacrococcygeal teratoma (Altman types I, II, and III) and cervical (neck) teratoma. Because these teratomas project from the fetal body into the surrounding amniotic fluid , they can be seen during routine prenatal ultrasound exams. Teratomas within the fetal body are less easily seen with ultrasound; for these, MRI of the pregnant uterus is more informative. [ 28 ] [ 29 ]
Teratomas are not dangerous for the fetus unless either a mass effect occurs or a large amount of blood flows through the tumor (known as vascular steal). The mass effect frequently consists of obstruction of the normal passage of fluids from surrounding organs. The vascular steal can place a strain on the growing heart of the fetus, even resulting in heart failure, and thus must be monitored by fetal echocardiography .
Teratomas belong to a class of tumors known as nonseminomatous germ cell tumor . All tumors of this class are the result of abnormal development of pluripotent cells: germ cells and embryonal cells . Teratomas of embryonic origin are congenital ; teratomas of germ cell origin may or may not be congenital. The kind of pluripotent cell appears to be unimportant, apart from constraining the location of the teratoma in the body.
Teratomas derived from germ cells occur in the testicle and ovaries . Teratomas derived from embryonic cells usually occur on the subject's midline: in the brain, elsewhere in the skull , in the nose, in the tongue, under the tongue, and in the neck (cervical teratoma), mediastinum , retroperitoneum , and attached to the coccyx . Teratomas may also occur elsewhere: very rarely in solid organs (most notably the heart and liver) and hollow organs (such as the stomach and bladder), and more commonly on the skull sutures .
Teratomas rarely include more complicated body parts such as teeth , brain matter , [ 30 ] eyes , [ 31 ] [ 32 ] or torso . [ 33 ]
Concerning the origin of teratomas, numerous hypotheses exist. [ 20 ] These hypotheses are not to be confused with the unrelated hypothesis that fetus in fetu (see below) is not a teratoma at all, but rather a parasitic twin .
Teratomas are thought to originate in utero , so can be considered congenital tumors. Many teratomas are not diagnosed until much later in childhood or in adulthood. Large tumors are more likely to be diagnosed early on. Sacrococcygeal and cervical teratomas are often detected by prenatal ultrasound . Additional diagnostic methods may include prenatal magnetic resonance imaging . In rare circumstances, the tumor is so large that the fetus may be damaged or die. In the case of large sacrococcygeal teratomas, a significant portion of the fetus' blood flow is redirected toward the teratoma (a phenomenon called steal syndrome ), causing heart failure , or hydrops , of the fetus. In certain cases, fetal surgery may be indicated.
Beyond the newborn period, symptoms of a teratoma depend on its location and organ of origin. Ovarian teratomas often present with abdominal or pelvic pain , caused by torsion of the ovary or irritation of its ligaments. A recently discovered condition in which ovarian teratomas cause encephalitis associated with antibodies against the N-methyl-D-aspartate receptor (NMDAR), often referred to as " anti-NMDA receptor encephalitis ", was identified as a serious complication. Patients develop a multistage illness that progresses from psychosis, memory deficits, seizures, and language disintegration into a state of unresponsiveness with catatonic features, often associated with abnormal movements and autonomic and breathing instability. [ 34 ] Testicular teratomas present as a palpable mass in the testis; mediastinal teratomas often cause compression of the lungs or the airways and may present with chest pain and/or respiratory symptoms.
Some teratomas contain yolk sac elements, which secrete alpha-fetoprotein . Its detection may help to confirm the diagnosis and is often used as a marker for recurrence or treatment efficacy, but is rarely the method of initial diagnosis. (Maternal serum alpha-fetoprotein is a useful screening test for other fetal conditions, including Down syndrome , spina bifida , and abdominal wall defects such as gastroschisis .)
Regardless of location in the body, a teratoma is classified according to a cancer staging system. This indicates whether chemotherapy or radiation therapy may be needed in addition to surgery. Teratomas commonly are classified using the Gonzalez-Crussi [ 20 ] grading system: 0 or mature ( benign ); 1 or immature, probably benign; 2 or immature, possibly malignant (cancerous); and 3 or frankly malignant. If frankly malignant, the tumor is a cancer for which additional cancer staging applies. [ citation needed ]
Teratomas are also classified by their content; a solid teratoma contains only tissues (perhaps including more complex structures); a cystic teratoma contains only pockets of fluid or semifluid such as cerebrospinal fluid , sebum , or fat; a mixed teratoma contains both solid and cystic parts. Cystic teratomas usually are grade 0 and, conversely, grade 0 teratomas usually are cystic.
Grades 0, 1, and 2 pure teratomas have the potential to become malignant (grade 3), and malignant pure teratomas have the potential to metastasize . These rare forms of teratoma with malignant transformation may contain elements of somatic (not germ cell) malignancy such as leukemia , carcinoma , or sarcoma . [ 35 ] A teratoma may contain elements of other germ cell tumors, in which case it is not a pure teratoma, but rather is a mixed germ cell tumor and is malignant. In infants and young children, these elements usually are endodermal sinus tumor , followed by choriocarcinoma . Finally, a teratoma can be pure and not malignant yet highly aggressive; this is exemplified by growing teratoma syndrome, in which chemotherapy eliminates the malignant elements of a mixed tumor, leaving pure teratoma, which paradoxically begins to grow very rapidly. [ 36 ]
A "benign" grade 0 (mature) teratoma nonetheless has a risk of malignancy. Recurrence with malignant endodermal sinus tumor has been reported in cases of formerly benign mature teratoma, [ 37 ] [ 38 ] even in fetiform teratoma and fetus in fetu. [ 39 ] [ 40 ] Squamous cell carcinoma has been found in a mature cystic teratoma at the time of initial surgery. [ 41 ] A grade 1 immature teratoma that appears to be benign (e.g., because AFP is not elevated) has a much higher risk of malignancy, and requires adequate follow-up. [ 42 ] [ 43 ] [ 44 ] [ 45 ] This grade of teratoma also may be difficult to diagnose correctly. It can be confused with other small round cell neoplasms such as neuroblastoma, small cell carcinoma of hypercalcemic type, primitive neuroectodermal tumor, Wilm's tumor, desmoplastic small round cell tumor, and non-Hodgkin lymphoma . [ 46 ]
A teratoma with malignant transformation (TMT) is a very rare form of teratoma that may contain elements of somatic malignant tumors such as leukemia, carcinoma, or sarcoma. [ 35 ] Of 641 children with pure teratoma, nine developed TMT: [ 47 ] five carcinomas, two gliomas , and two embryonal carcinomas (here, these last are classified among germ cell tumors).
In November 2024, the SCT-study consortium reported that the risk of malignancy at initial resection of sacrococcygeal teratoma increases with age, reaching a plateau at six years of age. [ 48 ]
Extraspinal ependymoma , usually considered to be a glioma (a type of nongerm cell tumor), may be an unusual form of mature teratoma. [ 49 ]
The treatment of choice is complete surgical removal ( i.e., complete resection). [ 50 ] [ 51 ] Teratomas are normally well-encapsulated and noninvasive of surrounding tissues, hence they are relatively easy to resect from surrounding tissues. Exceptions include teratomas in the brain, and very large, complex teratomas that have pushed into and become interlaced with adjacent muscles and other structures.
Prevention of recurrence does not require en bloc resection of surrounding tissues.
For malignant teratomas, surgery is usually followed by chemotherapy.
Teratomas that are in surgically inaccessible locations, or are very complex, or are likely to be malignant (due to late discovery and/or treatment) sometimes are treated first with chemotherapy. [ citation needed ]
Although often described as benign, a teratoma does have malignant potential. A UK study of 351 infants and children diagnosed with "benign" teratoma reported 227 with mature teratoma (MT) and 124 with immature teratoma (IT). Five years after surgery, event-free survival was 92.2% and 85.9%, respectively, and overall survival was 99% and 95.1%. [ 52 ] A similar study in Italy reported on 183 infants and children diagnosed with teratoma. At 10 years after surgery, event-free and overall survival were 90.4% and 98%, respectively. [ 53 ]
Depending on which tissue(s) it contains, a teratoma may secrete a variety of chemicals with systemic effects. Some teratomas secrete the "pregnancy hormone" human chorionic gonadotropin (βhCG), which can be used in clinical practice to monitor the successful treatment or relapse in patients with a known HCG-secreting teratoma. This hormone is not recommended as a diagnostic marker, because most teratomas do not secrete it. Some teratomas secrete thyroxine , in some cases to such a degree that it can lead to clinical hyperthyroidism in the patient. Of special concern is the secretion of alpha-fetoprotein (AFP); under some circumstances, AFP can be used as a diagnostic marker specific for the presence of yolk sac cells within the teratoma. These cells can develop into a frankly malignant tumor known as yolk sac tumor or endodermal sinus tumor .
Adequate follow-up requires close observation, involving repeated physical examination, scanning (ultrasound, MRI, or CT), and measurement of AFP and/or βhCG. [ 54 ] [ 55 ]
Embryonal teratomas most commonly occur in the sacrococcygeal region; sacrococcygeal teratoma is the single most common tumor found in newborn humans.
Of teratomas on the skull sutures, about 50% are found in or adjacent to the orbit . [ 57 ] Limbal dermoid is a choristoma , not a teratoma.
Teratoma qualifies as a rare disease , but is not extremely rare. Sacrococcygeal teratoma alone is diagnosed at birth in one out of 40,000 humans. Given the current human population and birth rate, this equals five per day or 1800 per year. Add to that number sacrococcygeal teratomas diagnosed later in life, and teratomas in other locales, and the incidence approaches 10,000 new diagnoses of teratoma per year. [ citation needed ]
Ovarian teratomas have been reported in mares , [ 58 ] mountain lions , [ 59 ] [ 60 ] and canines. [ 61 ] Teratomas also occur, rarely, in other species. [ 62 ]
Pluripotent stem cells , including human induced pluripotent stem cells , have a unique property of being able to generate teratomas when injected into rodents in the research laboratory. [ 63 ] The roots of this observation have been attributed to Leroy Stevens of the Jackson Laboratory . [ 64 ] In 1970, Stevens noticed that the cell populations that gave rise to teratomas were very similar to the cells of very early embryos.
For this reason, the so-called "teratoma assay" is one of the gold-standard validation assays for pluripotent stem cells. [ 65 ] Because differentiated human pluripotent stem cells are being developed as the basis for numerous regenerative medicine therapies, there is concern that residual undifferentiated stem cells could lead to teratoma formation in injected patients, and researchers are working to develop methods to address this concern. [ 66 ]
New research has looked at utilizing the human teratoma in chimeric animal studies as a promising platform for modeling multi-lineage human development, pan-tissue functional genetic screening, and tissue engineering. [ 67 ]
This article incorporates public domain material from Dictionary of Cancer Terms . U.S. National Cancer Institute . | https://en.wikipedia.org/wiki/Teratoma |
Terbium(III) oxide , also known as terbium sesquioxide , is a sesquioxide of the rare earth metal terbium , having chemical formula Tb 2 O 3 . It is a p-type semiconductor and a proton conductor; this conductivity is enhanced when the oxide is doped with calcium . [ 3 ] It may be prepared by the reduction of Tb 4 O 7 in hydrogen at 1300 °C for 24 hours. [ 4 ]
It is a basic oxide that dissolves easily in dilute acids, forming almost colourless terbium salts.
The crystal structure is cubic and the lattice constant is a = 1057 pm. [ 5 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terbium(III)_oxide |
Tercica, Inc. , was a biopharmaceutical company based in Brisbane, California , United States. It developed Increlex (mecasermin [rDNA origin] injection), also known as recombinant human Insulin-like Growth Factor-1 ( rhIGF-1 ). Tercica applied to the Food and Drug Administration (FDA) for approval of Increlex as a long-term therapy for growth failure in children with severe primary IGF-1 deficiency ( Primary IGFD ), which is characterized by growth failure, and as a treatment for children with growth hormone (GH) gene deletion who have developed neutralizing antibodies to growth hormone. [ citation needed ]
Tercica licensed rights to develop, manufacture, and market Increlex from Genentech , Inc. Tercica conducted Phase III clinical trials to evaluate the safety and efficacy of Increlex in children with Primary IGFD . [ citation needed ]
In 2007, a case between Insmed and Tercica was settled when the jury found that Insmed infringed patents licensed to Tercica for Increlex. In the settlement, Insmed agreed to stop selling Iplex in the United States as a treatment for growth deficiencies and to withdraw an application to have the drug approved for such use in Europe. [ 1 ]
In 2008, the Ipsen Group acquired Tercica and changed its name to Ipsen Biopharmaceuticals, Inc. [ 2 ]
This biotechnology article is a stub . You can help Wikipedia by expanding it .
This United States corporation or company article is a stub . You can help Wikipedia by expanding it .
This article about a medical , pharmaceutical or biotechnological corporation or company is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Tercica |
Terephthalaldehyde (TA) is an organic compound with the formula C 6 H 4 (CHO) 2 . It is one of three isomers of benzenedicarboxaldehyde, the one in which the aldehyde moieties occupy the para positions of the benzene ring. Terephthalaldehyde appears as a white to beige solid, typically in the form of a powder. It is soluble in many organic solvents , such as alcohols (e.g., methanol or ethanol ) and ethers (e.g., tetrahydrofuran or diethyl ether ).
Terephthalaldehyde can be synthesised from p-xylene in two steps. [ 2 ] First, p-xylene is reacted with bromine to give α,α,α′,α′-tetrabromo-p-xylene. Next, sulphuric acid is introduced to convert this intermediate into terephthalaldehyde. Alternative procedures also describe the conversion of similar p-xylene derivatives into terephthalaldehyde.
Terephthalaldehyde is used in the preparation of imines , which are also commonly referred to as Schiff bases , following a condensation reaction with amines . During this reaction, water is also formed. This reaction is by definition reversible, thus creating an equilibrium between aldehyde and amine on one side, and the imine and water on the other. However, due to aromatic conjugation between the imine group and benzene ring, the imines are relatively stable and will not easily hydrolyse back to the aldehyde. [ 3 ] In an acidic aqueous environment, however, imines will start to hydrolyse more easily. [ 4 ] Typically, an equilibrium between the imine and aldehyde is formed, which depends on the concentrations of the relevant compounds and the pH of the solution.
Imines from terephthalaldehyde find use in the preparation of metal-organic coordination complexes. In addition, terephthalaldehyde is a commonly used monomer in the production of imine polymers , also called polyimines . [ 5 ] It finds further use in the synthesis of covalent organic frameworks (COFs), [ 6 ] and it is used as a precursor for the preparation of paramagnetic microporous polymeric organic frameworks (POFs) through copolymerization with pyrrole , indole , and carbazole . Due to the characteristic metal-coordinating properties of imines, terephthalaldehyde finds common use in the synthesis of molecular cages . [ 7 ]
Terephthalaldehyde is also a commonly used intermediate or starting material in the preparation of a broad variety of organic compounds, such as pharmaceuticals, dyes and fluorescent whitening agents. | https://en.wikipedia.org/wiki/Terephthalaldehyde |
This page provides supplementary chemical data on Terephthalic acid , the organic compound and one of three isomeric phthalic acids , all with formula C 6 H 4 (CO 2 H) 2 .
The handling of this chemical may require notable safety precautions, which are set forth in its Material Safety Data Sheet (MSDS).
Teresa Lyn Head-Gordon ( née Teresa Lyn Gordon ) is an American chemist and the Chancellor's Professor of Chemistry, Bioengineering, and Chemical and Biomolecular Engineering at the University of California, Berkeley . [ 4 ] She is also a faculty scientist in the Chemical Sciences Division at the Lawrence Berkeley National Laboratory and a fellow of both the American Institute for Medical and Biological Engineering (AIMBE ) and the American Chemical Society (ACS). [ 5 ]
Head-Gordon was born in Akron, Ohio. [ 2 ] She completed her bachelor's degree in chemistry at Case Western Reserve University in 1983. [ 6 ] She worked as a waitress for a year before starting a PhD in 1984, and in 1989 she earned her doctorate degree in Theoretical Chemistry from Carnegie Mellon University under the supervision of Charles L. Brooks III . [ 1 ] [ 3 ] [ 6 ] [ 7 ]
From 1990 to 1992 Head-Gordon worked as a postdoctoral member of technical staff at Bell Labs , studying protein folding and the perturbation theories of water with Frank Stillinger . [ 2 ] [ 3 ] She joined Lawrence Berkeley National Laboratory in 1992, where she worked as a staff scientist until 2001. [ 2 ] In 2001 Head-Gordon was awarded the IBM-SUR Award. [ 6 ] That year she became a faculty member in Bioengineering at the University of California, Berkeley . [ 6 ] She was the 2005 Schlumberger Fellow at the University of Cambridge . [ 8 ] In 2011 she became a member of the Chemical and Biomolecular Engineering department; in 2012 she joined the chemistry department at the University of California, Berkeley, [ 6 ] and joined the chemical sciences division as a faculty scientist at Lawrence Berkeley National Laboratory . [ 9 ] In 2012 she was made Chancellor's Professor at the University of California, Berkeley . [ 6 ] She is a member of the Pitzer Center for Theoretical Chemistry. [ 10 ]
Head-Gordon develops theoretical models that are used in chemical physics and biophysics. [ 11 ] The Head-Gordon group studies condensed phase systems, including biomolecular systems, molecular liquids, and complex interfaces. [ 12 ] [ 13 ] [ 14 ] [ 15 ] Her group develops software packages for molecular simulations. [ 16 ] [ 17 ] [ 18 ]
She is on the Board of Directors of the Molecular Sciences Software Institute. [ 19 ] She became co-director of CalSov in 2016. [ 6 ] In 2016 she was elected a fellow of the American Institute for Medical and Biological Engineering for her contributions to the computational methodologies for macromolecular assemblies. [ 5 ] In 2018 she was elected a Fellow of the American Chemical Society . [ 20 ] | https://en.wikipedia.org/wiki/Teresa_Head-Gordon |
Terfenol-D , an alloy of the formula Tb x Dy 1− x Fe 2 ( x ≈ 0.3), is a magnetostrictive material. It was initially developed in the 1970s by the Naval Ordnance Laboratory in the United States. The technology for manufacturing the material efficiently was developed in the 1980s at Ames Laboratory under a U.S. Navy-funded program. [ 1 ] It is named after terbium , iron (Fe), and the Naval Ordnance Laboratory (NOL), with the D coming from dysprosium .
The alloy has the highest magnetostriction of any alloy , up to 0.002 m/m at saturation; it expands and contracts in a magnetic field. Terfenol-D has a large magnetostriction force, high energy density , low sound velocity , and a low Young's modulus . In its purest form, it also has low ductility and low fracture resistance. Terfenol-D is a gray alloy whose elemental components may be present in different ratios, always following the formula Tb x Dy 1− x Fe 2 . The addition of dysprosium made magnetostrictive responses easier to induce, by lowering the magnetic field strength the alloy requires. When the ratio of Tb to Dy is increased, the resulting alloy's magnetostrictive properties operate at temperatures as low as −200 °C; when it is decreased, the alloy may operate at a maximum of 200 °C. The composition of Terfenol-D gives it a large magnetostriction and magnetic flux when a magnetic field is applied. This holds over a large range of compressive stresses , with a trend of decreasing magnetostriction as the compressive stress increases. Crush strength has been shown (unpublished) to be quite high under certain conditions. [ 2 ] There is also a relationship between magnetic flux and compression: as the compressive stress increases, the magnetic flux changes less drastically. [ 3 ] Terfenol-D is mostly used for its magnetostrictive properties, whereby it changes shape when exposed to magnetic fields, a process called magnetization . Magnetic heat treatment has been shown to improve the magnetostrictive properties of Terfenol-D at low compressive stress for certain ratios of Tb and Dy. [ 4 ]
Due to its material properties, Terfenol-D is excellent for use in the manufacture of low-frequency, high-power underwater acoustic devices. Its initial application was in naval sonar systems. It sees application in magnetomechanical sensors, actuators , and acoustic and ultrasonic transducers due to its high energy density and large bandwidth capabilities, e.g. in the SoundBug device (its first commercial application by FeONIC ). Its strain is also larger than that of another commonly used material ( PZT8 ), which allows Terfenol-D transducers to reach greater depths for ocean exploration than past transducers. [ 5 ] Its low Young's modulus brings some complications due to compression at large depths, which are overcome in transducer designs that may reach 1000 ft in depth while losing only a small amount of accuracy, around 1 dB. [ 6 ] Due to its high temperature range, Terfenol-D is also useful in deep-hole acoustic transducers, where the environment may reach high pressures and temperatures, such as in oil boreholes. Terfenol-D may also be used for hydraulic valve drivers due to its high strain and high force properties. [ 6 ] Similarly, magnetostrictive actuators have also been considered for use in fuel injectors for diesel engines because of the high stresses that can be produced. [ 7 ] Terfenol-D uniquely combines key characteristics that enable advanced diesel fuel injection. First, the quantum mechanical origin of magnetostriction means this effect does not degrade, giving it robustness and durability. Second, it makes good use of the compression available from diesel fuel pressure. Finally, its mechanical expansion tends to be proportional to the imposed magnetic field, making injector needle position continuously controllable. An injector needle directly operated by Terfenol-D can have lifetime durability on an engine cylinder head while enabling unprecedented control over each injection event throughout its entire duration. These properties can be used for in-cylinder treatment of efficiency, emissions, and noise while enabling fuel flexibility. [ 8 ]
The increase in use of Terfenol-D in transducers required new production techniques that increased production rates and quality because the original methods were unreliable and small scale. There are four methods that are used to produce Terfenol-D, which are free stand zone melting, modified Bridgman, sintered powder compact, and polymer matrix composites.
The first two methods, free stand zone melting (FSZM) and modified Bridgman (MB), are capable of producing Terfenol-D that has high magnetostrictive properties and energy densities. However, FSZM cannot produce a rod larger than 8 mm in diameter due to the surface tension of the Terfenol-D and how the FSZM process has no container to restrict the material. The MB process offers a minimum of 10 mm diameter size and is only restricted due to the wall interfering with the crystal growth . [ 9 ] Both methods create solid crystals that require later manufacturing if a geometry other than a right-angle cylinder is needed. The solid crystals produced have a fine lamellar structure . [ 10 ]
The other two techniques, sintered powder compact and polymer matrix composites , are powder based. These techniques allow for intricate geometry and detail. However, the size is limited to 10 mm in diameter and 100 mm in length due to the molds used. [ 9 ] The resulting microstructures of these powder-based methods differ from the solid crystal ones because they do not have a lamellar structure and have a lower density . However, all methods have similar magnetostrictive properties. [ 10 ]
Due to size restriction, MB is the best process to produce Terfenol-D; however, it is labor-intensive. A newer, MB-like process, the Etrema crystal grower (ECG), yields Terfenol-D crystals of larger diameter and increased magnetostrictive performance. Using ECG also increases the reliability of the magnetostrictive properties of the Terfenol-D throughout the life of the material. [ 9 ]
Terfenol-D has some minor drawbacks which stem from its material properties. Terfenol-D has low ductility and low fracture resistance. To solve this, Terfenol-D has been added to polymers and other metals to create composites. When added to polymers, the stiffness of the resulting composite is low. When composites of Terfenol-D with ductile metal binders are created, the resulting material has increased stiffness and ductility with reduced magnetostrictive properties. These metal composites may be formed by explosion compaction . In a study done on processing Terfenol-D alloys, the resulting alloys created using copper and Terfenol-D had increased strength and hardness values, which supports the theory that the composites of ductile metal binders and Terfenol-D result in a stronger and more ductile material. [ 11 ] | https://en.wikipedia.org/wiki/Terfenol-D |
Teri W. Odom is an American chemist and materials scientist. She is the chair of the chemistry department, the Joan Husting Madden and William H. Madden, Jr. Professor of Chemistry, and a professor of materials science and engineering at Northwestern University . [ 2 ] [ 3 ] She is affiliated with the university's International Institute for Nanotechnology , Chemistry of Life Processes Institute, Northwestern Initiative for Manufacturing Science and Innovation, Interdisciplinary Biological Sciences Graduate Program, and department of applied physics. [ 2 ] [ 4 ]
Odom attended Stanford University , where she earned a BS in chemistry, was elected to Phi Beta Kappa , and received Stanford's Marsden Memorial Prize for Chemistry Research (1996). She obtained her PhD in chemical physics from Harvard University in 2001 under the guidance of Charles M. Lieber , then conducted post-doctoral research at Harvard with George M. Whitesides from 2001 to 2002. [ 5 ] [ 4 ]
Odom joined Northwestern University's department of chemistry in 2002 [ 5 ] and became the department chair in 2018. [ 2 ] In 2010, she became the founding chair of the Noble Metal Nanoparticles Gordon Research Conference. [ 6 ] [ 7 ] Between 2016 and 2018, she was associate director of the International Institute for Nanotechnology . [ 8 ] [ 2 ] Odom has served on the editorial advisory boards of ACS Nano , [ 9 ] [ 10 ] Bioconjugate Chemistry , Materials Horizons , Annual Review of Physical Chemistry , [ 10 ] Natural Sciences , Nano Futures , and Accounts of Chemical Research . [ citation needed ] Odom became an inaugural associate editor for the Royal Society of Chemistry 's Chemical Science journal in 2009, a position she held until 2013. [ 11 ] [ 4 ] [ 12 ] She was on the editorial advisory board of Nano Letters beginning in 2010 and became editor-in-chief in 2019. [ 3 ] [ 10 ] In 2013, she became a founding executive editor for ACS Photonics . [ 9 ] [ 10 ]
Research in the Odom group focuses on controlling materials at the 100 nm scale and investigating their size- and shape-dependent properties. The group has developed parallel, multi-scale patterning tools to generate hierarchical, anisotropic, and 3D hard and soft materials with applications in imaging, sensing, wetting, and cancer therapeutics. Building on these nanofabrication tools, Odom has developed flat optics that can manipulate light at the nanoscale beyond the diffraction limit, as well as tunable plasmon-based lasers. Odom also conducts research into nanoparticle-cell interactions using new biological nanoconstructs, such as the gold nanostar, whose shape provides both imaging and therapeutic functions. [ 3 ] [ 2 ] [ 7 ]
Odom's husband Brian, now a physicist and astronomer at Northwestern University, piqued her interest in science by introducing her to the double-slit experiment while they were dating. He encouraged her to pursue undergraduate summer research, an experience that inspired her to continue studying physics and chemistry. [ 13 ] [ 14 ] | https://en.wikipedia.org/wiki/Teri_W._Odom |
In argumentation theory , a term (or notion ) is that part of a statement in an argument which refers to a specific thing. A term is usually, but not always, expressed as a noun . According to Essentials of Logic , the word is derived from the Latin "terminus." [ 1 ]
One of the requirements to informally prove a conclusion with a deductive argument is for all its terms to be used unambiguously . The ambiguous use of a term in a deductive argument may be an instance of the fallacy of four terms . [ 1 ]
This logic -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Term_(argumentation) |
In mathematical logic , a term denotes a mathematical object while a formula denotes a mathematical fact. In particular, terms appear as components of a formula. This is analogous to natural language, where a noun phrase refers to an object and a whole sentence refers to a fact.
A first-order term is recursively constructed from constant symbols, variable symbols , and function symbols .
An expression formed by applying a predicate symbol to an appropriate number of terms is called an atomic formula , which evaluates to true or false in bivalent logics , given an interpretation .
For example, ( x + 1) ∗ ( x + 1) is a term built from the constant 1, the variable x , and the binary function symbols + and ∗; it is part of the atomic formula ( x + 1) ∗ ( x + 1) ≥ 0, which evaluates to true for each real-numbered value of x .
Besides logic , terms play important roles in universal algebra and rewriting systems .
Given a set V of variable symbols, a set C of constant symbols and sets F n of n -ary function symbols, also called operator symbols, for each natural number n ≥ 1, the set of (unsorted first-order) terms T is recursively defined to be the smallest set with the following properties: [ 1 ] every variable symbol is a term ( V ⊆ T ); every constant symbol is a term ( C ⊆ T ); and from every n terms t 1 , …, t n and every n -ary function symbol f ∈ F n , a larger term f ( t 1 , …, t n ) can be built.
Using an intuitive, pseudo- grammatical notation, this is sometimes written as: t ::= x | c | f ( t 1 , …, t n ), where x ranges over V , c over C , and f over F n .
The signature of the term language describes which function symbol sets F n are inhabited. Well-known examples are the unary function symbols sin , cos ∈ F 1 , and the binary function symbols +, −, ⋅, / ∈ F 2 . Ternary operations and higher-arity functions are possible but uncommon in practice. Many authors consider constant symbols as 0-ary function symbols F 0 , thus needing no special syntactic class for them.
A term denotes a mathematical object from the domain of discourse . A constant c denotes a named object from that domain, a variable x ranges over the objects in that domain, and an n -ary function f maps n - tuples of objects to objects. For example, if n ∈ V is a variable symbol, 1 ∈ C is a constant symbol, and add ∈ F 2 is a binary function symbol, then n ∈ T , 1 ∈ T , and (hence) add ( n , 1) ∈ T by the first, second, and third term building rule, respectively. The latter term is usually written as n +1, using infix notation and the more common operator symbol + for convenience.
Originally, logicians defined a term to be a character string adhering to certain building rules. [ 2 ] However, since the concept of tree became popular in computer science, it turned out to be more convenient to think of a term as a tree. For example, several distinct character strings, like " ( n ⋅( n +1))/2 ", " (( n ⋅( n +1)))/2 ", and the two-dimensional fraction notation for n ( n +1)/2, denote the same term and correspond to the same tree, viz. the left tree in the above picture.
Separating the tree structure of a term from its graphical representation on paper, it is also easy to account for parentheses (being only representation, not structure) and invisible multiplication operators (existing only in structure, not in representation).
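This tree view translates directly into a data structure. The following is a minimal sketch in C (the language the article's later examples also refer to); the type and function names are illustrative, not taken from any standard library.

```c
#include <stdlib.h>

/* A first-order term as a tree: a variable, a constant, or a
   function symbol applied to arity-many subterms. */
enum TermKind { VARIABLE, CONSTANT, FUNCTION };

struct Term {
    enum TermKind kind;
    const char *symbol;   /* variable, constant, or function name */
    int arity;            /* 0 for variables and constants */
    struct Term **args;   /* the arity subterms of a function term */
};

static struct Term *mk(enum TermKind k, const char *s,
                       int arity, struct Term **args) {
    struct Term *t = malloc(sizeof *t);
    t->kind = k; t->symbol = s; t->arity = arity; t->args = args;
    return t;
}

/* Builds add(n, 1), i.e. n+1 in infix notation, as in the text. */
struct Term *example(void) {
    struct Term **as = malloc(2 * sizeof *as);
    as[0] = mk(VARIABLE, "n", 0, NULL);
    as[1] = mk(CONSTANT, "1", 0, NULL);
    return mk(FUNCTION, "add", 2, as);
}
```

In this representation, parentheses and invisible multiplication operators simply do not exist; only the tree structure is stored.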
Two terms are said to be structurally , literally , or syntactically equal if they correspond to the same tree. For example, the left and the right tree in the above picture are structurally unequal terms, although they might be considered " semantically equal " as they always evaluate to the same value in rational arithmetic . While structural equality can be checked without any knowledge about the meaning of the symbols, semantic equality cannot. If the function / is e.g. interpreted not as rational but as truncating integer division, then at n = 2 the left and right terms evaluate to 3 and 2, respectively.
Structurally equal terms need to agree in their variable names.
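Structural equality is then just a recursive tree comparison; a brief sketch, reusing the hypothetical struct Term from the earlier sketch:

```c
#include <string.h>

/* Two terms are structurally equal iff they are the same tree.
   No interpretation of the symbols is consulted, which is exactly
   why this check can never establish semantic equality. */
int structurally_equal(const struct Term *s, const struct Term *t) {
    if (s->kind != t->kind || s->arity != t->arity ||
        strcmp(s->symbol, t->symbol) != 0)
        return 0;
    for (int i = 0; i < s->arity; i++)
        if (!structurally_equal(s->args[i], t->args[i]))
            return 0;
    return 1;
}
```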
In contrast, a term t is called a renaming , or a variant , of a term u if the latter resulted from consistently renaming all variables of the former, i.e. if u = tσ for some renaming substitution σ. In that case, u is a renaming of t , too, since a renaming substitution σ has an inverse σ −1 , and t = uσ −1 . Both terms are then also said to be equal modulo renaming . In many contexts, the particular variable names in a term don't matter, e.g. the commutativity axiom for addition can be stated as x + y = y + x or as a + b = b + a ; in such cases the whole formula may be renamed, while an arbitrary subterm usually may not, e.g. x + y = b + a is not a valid version of the commutativity axiom. [ note 1 ] [ note 2 ]
The set of variables of a term t is denoted by vars ( t ).
A term that doesn't contain any variables is called a ground term ; a term that doesn't contain multiple occurrences of a variable is called a linear term .
For example, 2+2 is a ground term and hence also a linear term, x ⋅( n +1) is a linear term, n ⋅( n +1) is a non-linear term. These properties are important in, for example, term rewriting .
Given a signature for the function symbols, the set of all terms forms the free term algebra . The set of all ground terms forms the initial term algebra .
Abbreviating the number of constants as f 0 , and the number of i -ary function symbols as f i , the number θ h of distinct ground terms of a height up to h can be computed by the following recursion formula: θ 0 = f 0 , since a ground term of height 0 can only be a constant; and θ h +1 = Σ i ≥0 f i ⋅ ( θ h ) i , since a ground term of height up to h + 1 is obtained by applying an i -ary function symbol to i ground terms, each of height up to h .
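As a sanity check of this recursion, the following C sketch computes θ h for a signature given as an array of symbol counts. The names are illustrative, and double is used only to postpone overflow in this rapidly growing sequence.

```c
#include <stdio.h>

/* Number of distinct ground terms of height up to h, where f[i]
   counts the i-ary function symbols (f[0] counts the constants). */
double theta(const double *f, int max_arity, int h) {
    double count = f[0];                 /* height 0: constants only */
    while (h-- > 0) {
        double next = 0.0, power = 1.0;  /* power = count^i */
        for (int i = 0; i <= max_arity; i++) {
            next += f[i] * power;
            power *= count;
        }
        count = next;
    }
    return count;
}

int main(void) {
    /* one constant and one binary function symbol, e.g. {1} and {+} */
    double f[] = { 1, 0, 1 };
    for (int h = 0; h <= 4; h++)
        printf("theta(%d) = %.0f\n", h, theta(f, 2, h));
    /* prints 1, 2, 5, 26, 677 */
    return 0;
}
```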
Given a set R n of n -ary relation symbols for each natural number n ≥ 1, an (unsorted first-order) atomic formula is obtained by applying an n -ary relation symbol to n terms. As for function symbols, a relation symbol set R n is usually non-empty only for small n . In mathematical logic, more complex formulas are built from atomic formulas using logical connectives and quantifiers . For example, letting ℝ denote the set of real numbers , ∀ x : x ∈ ℝ ⇒ ( x +1)⋅( x +1) ≥ 0 is a mathematical formula evaluating to true in the algebra of complex numbers .
An atomic formula is called ground if it is built entirely from ground terms; all ground atomic formulas composable from a given set of function and predicate symbols make up the Herbrand base for these symbol sets.
When the domain of discourse contains elements of basically different kinds, it is useful to split the set of all terms accordingly. To this end, a sort (sometimes also called type ) is assigned to each variable and each constant symbol, and a declaration [ note 3 ] of domain sorts and range sort to each function symbol. A sorted term f ( t 1 ,..., t n ) may be composed from sorted subterms t 1 ,..., t n only if the i th subterm's sort matches the declared i th domain sort of f . Such a term is also called well-sorted ; any other term (i.e. obeying the unsorted rules only) is called ill-sorted .
For example, a vector space comes with an associated field of scalar numbers. Let W and N denote the sorts of vectors and numbers, respectively, let V W and V N be the sets of vector and number variables, respectively, and C W and C N the sets of vector and number constants, respectively. Then e.g. 0→ ∈ C W and 0 ∈ C N , and the vector addition, the scalar multiplication, and the inner product are declared as + : W × W → W , ∗ : W × N → W , and ⟨.,.⟩ : W × W → N , respectively. Assuming variable symbols v→, w→ ∈ V W and a , b ∈ V N , the term ⟨( v→ + 0→ ) ∗ a , w→ ∗ b ⟩ is well-sorted, while v→ + a is not (since + doesn't accept a term of sort N as its 2nd argument). In order to make a ∗ v→ a well-sorted term, an additional declaration ∗ : N × W → W is required. Function symbols having several declarations are called overloaded .
See many-sorted logic for more information, including extensions of the many-sorted framework described here.
Mathematical notations as shown in the table do not fit into the scheme of a first-order term as defined above , as they all introduce their own local , or bound , variable that may not appear outside the notation's scope; e.g. t ⋅ ∫ a b sin( k ⋅ t ) d t doesn't make sense.
In contrast, the other variables, referred to as free , behave like ordinary first-order term variables; e.g. k ⋅ ∫ a b sin( k ⋅ t ) d t does make sense.
All these operators can be viewed as taking a function rather than a value term as one of their arguments. For example, the lim operator is applied to a sequence, i.e. to a mapping from positive integer to e.g. real numbers. As another example, a C function to implement the second example from the table, Σ, would have a function pointer argument (see box below).
Lambda terms can be used to denote anonymous functions to be supplied as arguments to lim , Σ, ∫, etc.
For example, the function square from the C program below can be written anonymously as a lambda term λ i . i 2 . The general sum operator Σ can then be considered as a ternary function symbol taking a lower bound value, an upper bound value and a function to be summed-up. Due to its latter argument, the Σ operator is called a second-order function symbol .
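The code boxes that this passage refers to did not survive into this copy of the text. The following is a plausible reconstruction of the kind of C program meant, in which sum receives the function to be summed as a function pointer, making it a second-order function in the sense just described.

```c
#include <stdio.h>

/* Sigma as a ternary operation: lower bound, upper bound, and the
   function to be summed up, passed as a function pointer. */
int sum(int lower, int upper, int (*f)(int)) {
    int s = 0;
    for (int i = lower; i <= upper; i++)
        s += f(i);
    return s;
}

int square(int i) { return i * i; }   /* the lambda term  lambda i. i*i */

int main(void) {
    /* Sigma_{i=1}^{4} i^2 = 1 + 4 + 9 + 16 = 30 */
    printf("%d\n", sum(1, 4, square));
    return 0;
}
```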
As another example, the lambda term λ n . x / n denotes a function that maps 1, 2, 3, ... to x /1, x /2, x /3, ..., respectively, that is, it denotes the sequence ( x /1, x /2, x /3, ...). The lim operator takes such a sequence and returns its limit (if defined).
The rightmost column of the table indicates how each mathematical notation example can be represented by a lambda term, also converting common infix operators into prefix form.
Given a set V of variable symbols, the set of lambda terms is defined recursively as follows: every variable symbol x ∈ V is a lambda term; if x ∈ V is a variable symbol and t is a lambda term, then the abstraction (λ x . t ) is also a lambda term; and if t 1 and t 2 are lambda terms, then the application ( t 1 t 2 ) is also a lambda term.
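Mirroring the three clauses of this definition, lambda terms can also be represented as a tagged union. The sketch below is illustrative C with made-up names, not a standard API.

```c
#include <stdlib.h>

/* One variant per clause: variable, abstraction, application. */
enum LKind { LVAR, LABS, LAPP };

struct Lambda {
    enum LKind kind;
    const char *var;          /* LVAR: the name; LABS: the bound variable */
    struct Lambda *t1, *t2;   /* LABS: body in t1; LAPP: function and argument */
};

/* Builds the identity function, lambda x. x */
struct Lambda *identity(void) {
    struct Lambda *x = calloc(1, sizeof *x);
    x->kind = LVAR; x->var = "x";
    struct Lambda *abs = calloc(1, sizeof *abs);
    abs->kind = LABS; abs->var = "x"; abs->t1 = x;
    return abs;
}
```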
The above motivating examples also used some constants like div , power , etc. which are, however, not admitted in pure lambda calculus.
Intuitively, the abstraction λ x . t denotes a unary function that returns t when given x , while the application ( t 1 t 2 ) denotes the result of calling the function t 1 with the input t 2 . For example, the abstraction λ x . x denotes the identity function, while λ x . y denotes the constant function always returning y . The lambda term λ x .( x x ) takes a function x and returns the result of applying x to itself. | https://en.wikipedia.org/wiki/Term_(logic) |
In universal algebra and mathematical logic , a term algebra is a freely generated algebraic structure over a given signature . [ 1 ] [ 2 ] For example, in a signature consisting of a single binary operation , the term algebra over a set X of variables is exactly the free magma generated by X . Other synonyms for the notion include absolutely free algebra and anarchic algebra . [ 3 ]
From a category theory perspective, a term algebra is the initial object for the category of all X -generated algebras of the same signature , and this object, unique up to isomorphism , is called an initial algebra ; it generates by homomorphic projection all algebras in the category. [ 4 ] [ 5 ]
A similar notion is that of a Herbrand universe in logic , usually used under this name in logic programming , [ 6 ] which is (absolutely freely) defined starting from the set of constants and function symbols in a set of clauses . That is, the Herbrand universe consists of all ground terms : terms that have no variables in them.
An atomic formula or atom is commonly defined as a predicate applied to a tuple of terms; a ground atom is then a predicate in which only ground terms appear. The Herbrand base is the set of all ground atoms that can be formed from predicate symbols in the original set of clauses and terms in its Herbrand universe. [ 7 ] [ 8 ] These two concepts are named after Jacques Herbrand .
Term algebras also play a role in the semantics of abstract data types , where an abstract data type declaration provides the signature of a multi-sorted algebraic structure and the term algebra is a concrete model of the abstract declaration.
A type τ is a set of function symbols, with each having an associated arity (i.e. number of inputs). For any non-negative integer n , let τ n denote the function symbols in τ of arity n . A constant is a function symbol of arity 0.
Let τ be a type, and let X be a non-empty set of symbols, representing the variable symbols. (For simplicity, assume X and τ are disjoint.) Then the set of terms T ( X ) of type τ over X is the set of all well-formed strings that can be constructed using the variable symbols of X and the constants and operations of τ. Formally, T ( X ) is the smallest set such that: X ∪ τ 0 ⊆ T ( X ), i.e. each variable symbol and each constant symbol is a term by itself; and for all n ≥ 1, all f ∈ τ n , and all t 1 , …, t n ∈ T ( X ), the string f t 1 … t n is also in T ( X ).
The term algebra 𝒯( X ) of type τ over X is, in summary, the algebra of type τ that maps each expression to its string representation. Formally, 𝒯( X ) is defined as follows: [ 9 ] the domain of 𝒯( X ) is T ( X ); each constant c ∈ τ 0 is interpreted as the string " c "; and for each n ≥ 1 and each f ∈ τ n , the operation f^𝒯( X ) maps the terms p 1 , …, p n to the string f p 1 … p n .
A term algebra is called absolutely free because for any algebra 𝒜 of type τ, and for any function g : X → 𝒜, g extends to a unique homomorphism g ∗ : 𝒯( X ) → 𝒜, which simply evaluates each term t ∈ T ( X ) to its corresponding value g ∗( t ) ∈ 𝒜. Formally, for each t ∈ T ( X ): if t ∈ X , then g ∗( t ) = g ( t ); if t = c for some constant c ∈ τ 0 , then g ∗( t ) = c ^𝒜; and if t = f t 1 … t n for some f ∈ τ n , then g ∗( t ) = f ^𝒜( g ∗( t 1 ), …, g ∗( t n )).
As an example, a type inspired by integer arithmetic can be defined by τ 0 = {0, 1}, τ 1 = {}, τ 2 = {+, ∗}, and τ i = {} for each i > 2.
The best-known algebra of type τ has the natural numbers as its domain and interprets 0, 1, +, and ∗ in the usual way; we refer to it as 𝒜 nat .
For the example variable set X = { x , y }, we are going to investigate the term algebra 𝒯( X ) of type τ over X .
First, the set T ( X ) of terms of type τ over X is considered.
Its members are written here in quotation marks, since they otherwise may be hard to recognize due to their uncommon syntactic form.
We have e.g. " x ", " y ", " 0 ", and " 1 " in T ( X ), since each variable symbol and each constant symbol is a term by itself; " +x1 " ∈ T ( X ), obtained by applying + to the terms " x " and " 1 "; and " ∗+x1x " ∈ T ( X ), obtained by applying ∗ to the terms " +x1 " and " x ".
More generally, each string in T ( X ) corresponds to a mathematical expression built from the admitted symbols and written in Polish prefix notation ;
for example, the term " ∗+x1x " corresponds to the expression ( x +1)∗ x in usual infix notation . No parentheses are needed to avoid ambiguities in Polish notation; e.g. the infix expression x +(1∗ x ) corresponds to the term " +x∗1x ".
To give some counter-examples, we have e.g. " x +1 " ∉ T ( X ), since infix notation is not admitted by the above definition; " +x " ∉ T ( X ), since + always needs two argument terms; and " −x1 " ∉ T ( X ), since no symbol − belongs to the type τ.
Now that the term set T ( X ) is established, we consider the term algebra 𝒯( X ) of type τ over X .
This algebra uses T ( X ) as its domain, on which addition and multiplication need to be defined.
The addition function +^𝒯( X ) takes two terms p and q and returns the term " +pq "; similarly, the multiplication function ∗^𝒯( X ) maps given terms p and q to the term " ∗pq ".
For example, ∗^𝒯( X )( " +x1 ", " x " ) evaluates to the term " ∗+x1x ".
Informally, the operations +^𝒯( X ) and ∗^𝒯( X ) are both "sluggards" in that they just record what computation should be done, rather than doing it.
As an example of the unique extendability of a homomorphism, consider g : X → 𝒜 nat defined by g ( x ) = 7 and g ( y ) = 3.
Informally, g defines an assignment of values to variable symbols, and once this is done, every term from T ( X ) can be evaluated in a unique way in 𝒜 nat .
For example, g ∗( " +x1 " ) = g ∗( " x " ) + g ∗( " 1 " ) = g ( x ) + 1 = 7 + 1 = 8.
In a similar way, one obtains g ∗( " ∗+x1x " ) = g ∗( " +x1 " ) ∗ g ∗( " x " ) = 8 ∗ g ( x ) = 8 ∗ 7 = 56.
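The unique evaluation described here is plain structural recursion on the term string. The following C sketch (illustrative names, no error handling beyond a sentinel) reads a member of T ( X ) in Polish prefix notation and evaluates it in 𝒜 nat under the assignment g ( x ) = 7, g ( y ) = 3, reproducing the values 8 and 56 computed above.

```c
#include <stdio.h>

/* A sketch of the unique homomorphism g* : T(X) -> A_nat for the
   running example, driven by a read position in the term string. */
static const char *p;

int eval(void) {
    char c = *p++;
    switch (c) {
    case '0': return 0;
    case '1': return 1;
    case 'x': return 7;                              /* g(x) */
    case 'y': return 3;                              /* g(y) */
    case '+': { int a = eval(); return a + eval(); }
    case '*': { int a = eval(); return a * eval(); }
    }
    return -1;  /* ill-formed string: not a member of T(X) */
}

int main(void) {
    p = "+x1";   printf("g*(+x1)   = %d\n", eval());  /* 7+1     = 8  */
    p = "*+x1x"; printf("g*(*+x1x) = %d\n", eval());  /* (7+1)*7 = 56 */
    return 0;
}
```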
The signature σ of a language is a triple < O , F , P > consisting of the alphabet of constants O , function symbols F , and predicates P . The Herbrand base [ 10 ] of a signature σ consists of all ground atoms of σ : of all formulas of the form R ( t 1 , ..., t n ), where t 1 , ..., t n are terms containing no variables (i.e. elements of the Herbrand universe) and R is an n -ary relation symbol ( i.e. predicate ). In the case of logic with equality, it also contains all equations of the form t 1 = t 2 , where t 1 and t 2 contain no variables.
Term algebras can be shown decidable using quantifier elimination . The complexity of the decision problem is in NONELEMENTARY because binary constructors are injective and thus pairing functions. [ 11 ] | https://en.wikipedia.org/wiki/Term_algebra |
In atomic physics , a term symbol is an abbreviated description of the total spin and orbital angular momentum quantum numbers of the electrons in a multi-electron atom . So while the word symbol suggests otherwise, it represents an actual value of a physical quantity .
For a given electron configuration of an atom, its state depends also on its total angular momentum, including spin and orbital components, which are specified by the term symbol. The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling) in which the all-electron total quantum numbers for orbital ( L ), spin ( S ) and total ( J ) angular momenta are good quantum numbers .
In the terminology of atomic spectroscopy , L and S together specify a term ; L , S , and J specify a level ; and L , S , J and the magnetic quantum number M J specify a state . The conventional term symbol has the form ^{2S+1}L_J , where J is written optionally in order to specify a level. L is written using spectroscopic notation : for example, it is written "S", "P", "D", or "F" to represent L = 0, 1, 2, or 3 respectively. For coupling schemes other than LS coupling, such as the jj coupling that applies to some heavy elements, other notations are used to specify the term.
Term symbols apply to both neutral and charged atoms, and to their ground and excited states. Term symbols usually specify the total for all electrons in an atom, but are sometimes used to describe electrons in a given subshell or set of subshells, for example to describe each open subshell in an atom having more than one. The ground state term symbol for neutral atoms is described, in most cases, by Hund's rules . Neutral atoms of the chemical elements have the same term symbol for each column in the s-block and p-block elements, but differ in d-block and f-block elements where the ground-state electron configuration changes within a column, where exceptions to Hund's rules occur. Ground state term symbols for the chemical elements are given below .
Term symbols are also used to describe angular momentum quantum numbers for atomic nuclei and for molecules. For molecular term symbols , Greek letters are used to designate the component of orbital angular momenta along the molecular axis.
The use of the word term for an atom's electronic state is based on the Rydberg–Ritz combination principle , an empirical observation that the wavenumbers of spectral lines can be expressed as the difference of two terms . This was later summarized by the Bohr model , which identified the terms with quantized energy levels, and the spectral wavenumbers of these levels with photon energies.
Tables of atomic energy levels identified by their term symbols are available for atoms and ions in ground and excited states from the National Institute of Standards and Technology (NIST). [ 1 ]
The usual atomic term symbols assume LS coupling (also known as Russell–Saunders coupling), in which the atom's total spin quantum number S and the total orbital angular momentum quantum number L are " good quantum numbers ". (Russell–Saunders coupling is named after Henry Norris Russell and Frederick Albert Saunders , who described it in 1925 [ 2 ] ). The spin-orbit interaction then couples the total spin and orbital moments to give the total electronic angular momentum quantum number J . Atomic states are then well described by term symbols of the form:
^{2S+1}L_J
where
The orbital symbols S, P, D and F are derived from the characteristics of the spectroscopic lines corresponding to s, p, d, and f orbitals: sharp , principal , diffuse , and fundamental ; the rest are named in alphabetical order from G onwards (omitting J, S and P). When used to describe electronic states of an atom, the term symbol is often written following the electron configuration . For example, 1s 2 2s 2 2p 2 3 P 0 represents the ground state of a neutral carbon atom. The superscript 3 indicates that the spin multiplicity 2 S + 1 is 3 (it is a triplet state ), so S = 1; the letter "P" is spectroscopic notation for L = 1; and the subscript 0 is the value of J (in this case J = L − S ). [ 1 ]
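The letter sequence is mechanical enough to generate in a few lines. A Python sketch (the helper term_letter is ours): after S, P, D, F, letters continue alphabetically from G, skipping J and the already-used S and P:

```python
def term_letter(L):
    """Spectroscopic letter for total orbital angular momentum L."""
    letters = ["S", "P", "D", "F"]
    code = ord("G")
    while len(letters) <= L:
        candidate = chr(code)
        if candidate not in ("J", "S", "P"):  # these letters are skipped
            letters.append(candidate)
        code += 1
    return letters[L]

assert [term_letter(L) for L in range(8)] == ["S", "P", "D", "F", "G", "H", "I", "K"]
```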
Small letters refer to individual orbitals or one-electron quantum numbers, whereas capital letters refer to many-electron states or their quantum numbers.
For a given electron configuration,
The product (2S+1)(2L+1) as the number of possible states |S, M_S, L, M_L⟩ with given S and L is also the number of basis states in the uncoupled representation, where S , M_S , L , M_L ( M_S and M_L being the z-axis components of total spin and total orbital angular momentum respectively) are good quantum numbers whose corresponding operators mutually commute. With given S and L , the eigenstates |S, M_S, L, M_L⟩ in this representation span a function space of dimension (2S+1)(2L+1) , as M_S = S, S−1, ..., −S+1, −S and M_L = L, L−1, ..., −L+1, −L . In the coupled representation where total angular momentum (spin + orbital) is treated, the associated eigenstates are |J, M_J, S, L⟩ , and these states span a function space of dimension

∑_{J=|L−S|}^{L+S} (2J + 1) = (2S+1)(2L+1),
as M_J = J, J−1, ..., −J+1, −J . Obviously, the dimension of the function space in both representations must be the same.
As an example, for S = 1, L = 2 , there are (2×1+1)(2×2+1) = 15 different states (= eigenstates in the uncoupled representation) corresponding to the ³D term , of which (2×3+1) = 7 belong to the ³D₃ ( J = 3) level. The sum of (2J+1) for all levels in the same term equals (2S+1)(2L+1) , as the dimensions of both representations must be equal as described above. In this case, J can be 1, 2, or 3, so 3 + 5 + 7 = 15.
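This dimension count is easy to verify numerically. A Python sketch (the function name is ours; Fraction handles half-integer spins):

```python
from fractions import Fraction

def dimensions_match(S, L):
    """Check sum of (2J+1) for J = |L-S|..L+S equals (2S+1)(2L+1)."""
    S, L = Fraction(S), Fraction(L)
    n_levels = int(L + S - abs(L - S)) + 1
    Js = [abs(L - S) + k for k in range(n_levels)]
    return sum(2 * J + 1 for J in Js) == (2 * S + 1) * (2 * L + 1)

assert dimensions_match(1, 2)               # the 3D example: 3 + 5 + 7 = 15
assert dimensions_match(Fraction(1, 2), 3)  # half-integer spins work too
```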
The parity of a term symbol is calculated as

P = (−1)^{∑_i ℓ_i},
where ℓ_i is the orbital quantum number for each electron. P = 1 means even parity, while P = −1 is for odd parity. In fact, only electrons in odd orbitals (with ℓ odd) contribute to the total parity: an odd number of electrons in odd orbitals (those with odd ℓ , such as p, f, ...) corresponds to an odd term symbol, while an even number of electrons in odd orbitals corresponds to an even term symbol. The number of electrons in even orbitals is irrelevant, as any sum of even numbers is even. For any closed subshell, the number of electrons is 2(2ℓ+1) , which is even, so the summation of ℓ_i over closed subshells is always an even number. The summation of quantum numbers ∑_i ℓ_i over open (unfilled) subshells of odd orbitals ( ℓ odd) determines the parity of the term symbol. If the number of electrons in this reduced summation is odd (even), then the parity is also odd (even).
When it is odd, the parity of the term symbol is indicated by a superscript letter "o", otherwise it is omitted:
Alternatively, parity may be indicated with a subscript letter "g" or "u", standing for gerade (German for "even") or ungerade ("odd"):
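A small Python sketch (our helper, not from the article) computes the parity of the open subshells of a configuration as described above:

```python
ELL = {"s": 0, "p": 1, "d": 2, "f": 3}

def parity(*subshells):
    """Each subshell is a (letter, electron_count) pair, e.g. ('p', 2)."""
    total = sum(ELL[letter] * count for letter, count in subshells)
    return +1 if total % 2 == 0 else -1

assert parity(("p", 2)) == +1  # e.g. carbon 2p^2: even parity, no 'o'
assert parity(("p", 3)) == -1  # e.g. nitrogen 2p^3: odd parity, superscript 'o'
```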
It is relatively easy to predict the term symbol for the ground state of an atom using Hund's rules . It corresponds to a state with maximum S and L .
As an example, in the case of fluorine , the electronic configuration is 1s 2 2s 2 2p 5 .
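For a single open subshell, Hund's rules reduce to a short computation. The following Python sketch (ours; it assumes exactly one open subshell and the standard less/more-than-half-filled rule for J) reproduces the ²P₃/₂ ground state of fluorine's 2p⁵:

```python
def ground_term(l, n_electrons):
    """Hund's-rules ground term (S, L, J) for one open subshell."""
    orbitals = list(range(l, -l - 1, -1))  # m_l = l, l-1, ..., -l
    up = min(n_electrons, len(orbitals))   # fill all spins up first
    down = n_electrons - up                # then pair spins down
    S = (up - down) / 2
    L = abs(sum(orbitals[:up]) + sum(orbitals[:down]))
    # J = |L - S| if at most half-filled, L + S if more than half-filled.
    J = abs(L - S) if n_electrons <= len(orbitals) else L + S
    return S, L, J

assert ground_term(1, 5) == (0.5, 1, 1.5)  # fluorine 2p^5: 2P_3/2
assert ground_term(1, 2) == (1.0, 1, 0.0)  # carbon 2p^2: 3P_0
assert ground_term(2, 6) == (2.0, 2, 4.0)  # iron 3d^6: 5D_4
```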
In the periodic table, because atoms of elements in a column usually have the same outer electron structure, and always have the same electron structure in the "s-block" and "p-block" elements (see block (periodic table) ), all elements may share the same ground state term symbol for the column. Thus, hydrogen and the alkali metals are all 2 S 1 ⁄ 2 , the alkaline earth metals are 1 S 0 , the boron column elements are 2 P 1 ⁄ 2 , the carbon column elements are 3 P 0 , the pnictogens are 4 S 3 ⁄ 2 , the chalcogens are 3 P 2 , the halogens are 2 P 3 ⁄ 2 , and the inert gases are 1 S 0 , per the rule for full shells and subshells stated above.
Term symbols for the ground states of most chemical elements [ 3 ] are given in the collapsed table below. [ 4 ] In the d-block and f-block, the term symbols are not always the same for elements in the same column of the periodic table, because open shells of several d or f electrons have several closely spaced terms whose energy ordering is often perturbed by the addition of an extra complete shell to form the next element in the column.
For example, the table shows that the first pair of vertically adjacent atoms with different ground-state term symbols is V and Nb. The ⁶D₁/₂ ground state of Nb corresponds to an excited state of V (2112 cm⁻¹ above the ⁴F₃/₂ ground state of V), which in turn corresponds to an excited state of Nb (1143 cm⁻¹ above the Nb ground state). [ 1 ] These energy differences are small compared to the 15158 cm⁻¹ difference between the ground and first excited state of Ca, [ 1 ] which is the last element before V with no d electrons.
The process to calculate all possible term symbols for a given electron configuration is somewhat longer.
As an example, consider the carbon electron structure: 1s² 2s² 2p². After removing full subshells, there are 2 electrons in a p-level ( ℓ = 1 ), so there are

C(6, 2) = 6! / (2! 4!) = 15

different states (two electrons distributed over the 2(2ℓ+1) = 6 spin-orbitals of the subshell).
(In the formulas of the full tabulation procedure, the floor function ⌊x⌋ denotes the greatest integer not exceeding x .)
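The whole tabulation can be automated. The Python sketch below (ours; a compact version of the usual microstate-table procedure, not the article's exact steps) enumerates the 15 Pauli-allowed microstates of p² and peels terms off the (M_L, M_S) table, recovering ¹D, ³P and ¹S:

```python
from itertools import combinations
from collections import Counter

# Six p spin-orbitals: m_l in {-1, 0, 1}, m_s in {-1/2, +1/2}.
spin_orbitals = [(ml, ms) for ml in (-1, 0, 1) for ms in (-0.5, 0.5)]

# Pauli-allowed microstates of p^2: choose two distinct spin-orbitals.
micro = Counter()
for (ml1, ms1), (ml2, ms2) in combinations(spin_orbitals, 2):
    micro[(ml1 + ml2, ms1 + ms2)] += 1
assert sum(micro.values()) == 15

letters = "SPDFGHIK"
terms = []
while micro:
    L = max(ML for ML, MS in micro)             # largest remaining M_L
    S = max(MS for ML, MS in micro if ML == L)  # largest M_S at that M_L
    terms.append(f"{int(2 * S + 1)}{letters[L]}")
    for ML in range(-L, L + 1):                 # remove that term's block
        for k in range(int(2 * S + 1)):
            key = (ML, S - k)
            micro[key] -= 1
            if micro[key] == 0:
                del micro[key]

assert sorted(terms) == ["1D", "1S", "3P"]
```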
For configurations with at most two electrons (or holes) per subshell, an alternative and much quicker method of arriving at the same result can be obtained from group theory . The configuration 2p² has the symmetry of the following direct product in the full rotation group:

Γ(1) × Γ(1) = Γ(0) + [Γ(1)] + Γ(2),
which, using the familiar labels Γ (0) = S , Γ (1) = P and Γ (2) = D , can be written as

P × P = S + [P] + D.
The square brackets enclose the anti-symmetric square. Hence the 2p² configuration has components with the following symmetries: S + D (from the symmetric square, so with spatially symmetric wavefunctions) and P (from the anti-symmetric square, with a spatially anti-symmetric wavefunction).
The Pauli principle and the requirement for electrons to be described by anti-symmetric wavefunctions imply that only the following combinations of spatial and spin symmetry are allowed: ¹S and ¹D (spatially symmetric, spin anti-symmetric) and ³P (spatially anti-symmetric, spin symmetric).
Then one can move to step five in the procedure above, applying Hund's rules.
The group theory method can be carried out for other such configurations, like 3d², using the general formula

Γ(j) × Γ(j) = Γ(2j) + [Γ(2j−1)] + Γ(2j−2) + [Γ(2j−3)] + ... + Γ(0).
The symmetric square will give rise to singlets (such as 1 S, 1 D, & 1 G), while the anti-symmetric square gives rise to triplets (such as 3 P & 3 F).
More generally, one can use

Γ(j₁) × Γ(j₂) = Γ(j₁+j₂) + Γ(j₁+j₂−1) + ... + Γ(|j₁−j₂|),
where, since the product is not a square, it is not split into symmetric and anti-symmetric parts. Where two electrons come from inequivalent orbitals, both a singlet and a triplet are allowed in each case. [ 6 ]
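The decomposition of the square into symmetric and anti-symmetric parts alternates from Γ(2j) downward, which is easy to spell out in code. A Python sketch (ours):

```python
LETTERS = "SPDFGHIK"

def square(j):
    """Split Gamma(j) x Gamma(j) into symmetric / anti-symmetric parts."""
    symmetric, antisymmetric = [], []
    for k in range(2 * j, -1, -1):  # Gamma(2j) is symmetric, then alternate
        (symmetric if (2 * j - k) % 2 == 0 else antisymmetric).append(LETTERS[k])
    return symmetric, antisymmetric

assert square(1) == (["D", "S"], ["P"])            # p^2: 1S, 1D and 3P
assert square(2) == (["G", "D", "S"], ["F", "P"])  # d^2: 1S, 1D, 1G and 3P, 3F
```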
Basic concepts for all coupling schemes:
The most famous coupling schemes are introduced here, but these schemes can be mixed to express the energy state of an atom. This summary is based on [1] .
These are notations for describing states of singly excited atoms, especially noble gas atoms. Racah notation is basically a combination of LS (Russell–Saunders) coupling and J 1 L 2 coupling: LS coupling is used for the parent ion, and J 1 L 2 coupling for the coupling of the parent ion and the excited electron. The parent ion is the unexcited part of the atom. For example, in an Ar atom excited from the ground-state configuration ...3p⁶ to the excited configuration ...3p⁵4p, 3p⁵ describes the parent ion while 4p describes the excited electron. [ 8 ]
In Racah notation, states of excited atoms are denoted as (^{2S₁+1}L₁_{J₁}) nℓ[K]^o_J . Quantities with a subscript 1 are for the parent ion; n and ℓ are the principal and orbital quantum numbers of the excited electron; and K and J are quantum numbers for K = J₁ + ℓ and J = K + s , where ℓ and s are the orbital angular momentum and spin of the excited electron respectively. The “ o ” represents the parity of the excited atom. For an inert (noble) gas atom, the usual excited states are N p⁵ nℓ , where N = 2, 3, 4, 5, 6 for Ne, Ar, Kr, Xe, Rn respectively. Since the parent ion can only be ²P₁/₂ or ²P₃/₂, the notation can be shortened to nℓ[K]^o_J or nℓ′[K]^o_J , where nℓ means the parent ion is in the ²P₃/₂ state while nℓ′ is for the parent ion in the ²P₁/₂ state.
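Since K = J₁ + ℓ and J = K + s are angular-momentum sums, the allowed quantum numbers follow the usual triangle rules, which the following Python sketch (ours) enumerates for an excited p electron on a ²P₃/₂ parent ion:

```python
from fractions import Fraction

def racah_levels(J1, l):
    """Allowed (K, J): K = |J1-l|..J1+l, then J = K -/+ 1/2 (if J >= 0)."""
    J1 = Fraction(J1)
    n_K = int(J1 + l - abs(J1 - l)) + 1
    Ks = [abs(J1 - l) + k for k in range(n_K)]
    half = Fraction(1, 2)
    return [(K, J) for K in Ks for J in (K - half, K + half) if J >= 0]

# Ar 3p^5(2P_3/2)4p: K = 1/2, 3/2, 5/2, giving six levels in total.
assert len(racah_levels(Fraction(3, 2), 1)) == 6
```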
Paschen notation is an older, somewhat idiosyncratic notation, devised to fit the emission spectrum of neon to a hydrogen-like theory. It has a rather simple structure for indicating the energy levels of an excited atom. The energy levels are denoted as n′ℓ# . ℓ is just the orbital quantum number of the excited electron. n′ℓ is written so that 1s stands for ( n = N + 1, ℓ = 0) , 2p for ( n = N + 1, ℓ = 1) , 2s for ( n = N + 2, ℓ = 0) , 3p for ( n = N + 2, ℓ = 1) , 3s for ( n = N + 3, ℓ = 0) , etc. The rules for writing n′ℓ from the lowest electronic configuration of the excited electron are: (1) ℓ is written first; (2) n′ is written consecutively from 1, and the relation ℓ = n′ − 1, n′ − 2, ... , 0 (like the relation between n and ℓ ) is kept. n′ℓ is an attempt to describe the electronic configuration of the excited electron in the way the electronic configuration of the hydrogen atom is described. # is an additional number assigned to each energy level of a given n′ℓ (there can be multiple energy levels for a given electronic configuration, denoted by the term symbol). # numbers the levels in order; for example, # = 10 is for a lower energy level than the # = 9 level, and # = 1 is for the highest level of a given n′ℓ . An example of Paschen notation is below. | https://en.wikipedia.org/wiki/Term_symbol |
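From the examples just listed, the Paschen label maps to the true quantum numbers as n = N + n′ − ℓ, a rule we infer here from those examples (a sketch, not a formula stated in the article):

```python
ELL = {"s": 0, "p": 1, "d": 2}

def paschen_to_config(n_prime, letter, N):
    """Map a Paschen n'l label to the excited electron's true (n, l)."""
    l = ELL[letter]
    return N + n_prime - l, l  # inferred rule: n = N + n' - l

# For neon (N = 2): Paschen "1s" is the 3s electron and "2p" is 3p.
assert paschen_to_config(1, "s", 2) == (3, 0)
assert paschen_to_config(2, "p", 2) == (3, 1)
```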
A terminal is the point at which a conductor from a component , device or network comes to an end. [ 1 ] Terminal may also refer to an electrical connector at this endpoint, acting as the reusable interface to a conductor and creating a point where external circuits can be connected. [ 2 ] [ 3 ] A terminal may simply be the end of a wire or it may be fitted with a connector or fastener . [ citation needed ]
In network analysis , terminal means a point at which connections can be made to a network in theory and does not necessarily refer to any physical object. In this context, especially in older documents, it is sometimes called a pole . On circuit diagrams, terminals for external connections are denoted by empty circles. [ 4 ] They are distinguished from nodes or junctions which are entirely internal to the circuit and are denoted by solid circles. [ 5 ]
All electrochemical cells have two terminals ( electrodes ) which are referred to as the anode and cathode or positive (+) and negative (–). On many dry batteries , the positive terminal (cathode) is a protruding metal cap, and the negative terminal (anode) is a flat metal disc (see Battery terminal ) . In a galvanic cell such as a common AA battery , electrons flow from the negative terminal to the positive terminal, while the conventional current is opposite to this. [ 6 ] | https://en.wikipedia.org/wiki/Terminal_(electronics) |
Terminal Productivity Executive ( TPX ) is a multiple session manager for IBM mainframes . It allows connected users to access resources with a single sign-on. [ 1 ] [ 2 ] It holds several sessions concurrently, allowing a person to switch among them via the single connection on their physical terminal or terminal emulator application, [ 3 ] e.g. telnet . For each session, TPX uses a virtual terminal ; users can use it to switch between ISPF and SDSF under the Time Sharing Option .
TPX is presently a product of CA Technologies , [ 4 ] having been originally developed by Morgan Stanley , and later acquired by Duquesne Systems. [ 5 ] [ 6 ] TPX is primarily used on z/OS , but a version also exists for z/VM . [ 7 ]
This software article is a stub . You can help Wikipedia by expanding it .
This mainframe computer -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_Productivity_Executive |
A terminal access controller (TAC) is a host computer that accepts terminal connections, usually from dial-up lines, and that allows the user to invoke Internet remote log-on procedures, such as Telnet . [ 1 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_access_controller |
Terminal cisternae (singular: terminal cisterna) are enlarged areas of the sarcoplasmic reticulum surrounding the transverse tubules . [ 1 ]
Terminal cisternae are discrete regions within the muscle cell. They store calcium (increasing the capacity of the sarcoplasmic reticulum to release calcium) and release it when an action potential courses down the transverse tubules, eliciting muscle contraction . [ 2 ] Because terminal cisternae ensure rapid calcium delivery, they are well developed in muscles that contract quickly, such as fast twitch skeletal muscle . The released calcium binds to troponin ; this shifts tropomyosin , exposing the active sites of the thin filament, actin .
There are several mechanisms directly linked to the terminal cisternae which facilitate excitation-contraction coupling . When excitation of the membrane arrives at the T-tubule nearest the muscle fiber , a dihydropyridine channel ( DHP channel ) is activated. [ 2 ] This is similar to a voltage-gated calcium channel , but does not act as an ionotropic channel here. Instead, it serves to activate the ryanodine receptor , which lets calcium ions pass out of the sarcoplasmic reticulum, triggering calcium release into the muscle fiber itself.
A T-tubule surrounded by two terminal cisternae is called a triad . The terminal cisternae, along with the transverse tubules, are the mechanisms of transduction from a nervous impulse to an actual muscle contraction .
This cell biology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_cisternae |
Terminal concentrators , also known as terminal multiplexers , were hardware devices used to multiplex multiple serial terminals to a single hardware computer connection. Examples of terminal multiplexers were the IBM 3299 [ 1 ] and the terminal multiplexers made by Gandalf Technologies .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_concentrator |
A terminal controller is a device that collects traffic from a set of terminals and directs them to a concentrator .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_controller |
In medicine, the terminal drop hypothesis is a hypothesis that a sharp reduction in cognitive capacity in older people is often correlated with impending death, typically within five years. [ 1 ] [ 2 ]
This medical article is a stub . You can help Wikipedia by expanding it .
This death -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_drop_hypothesis |
The terminal investment hypothesis is the idea in life history theory that as an organism's residual reproductive value (the total reproductive value minus the reproductive value of the current breeding attempt) decreases, its reproductive effort will increase. Thus, as an organism's prospects for survival decrease (through age or an immune challenge, for example), it will invest more in reproduction. This hypothesis is generally supported in animals, although results contrary to it do exist.
The terminal investment hypothesis posits that as residual reproductive value (measured as the total reproductive value minus the reproductive value of the current breeding attempt [ 1 ] ) decreases, reproductive effort increases. [ 2 ] This is based on the cost of reproduction hypothesis , which says that an increase in resources dedicated to current reproduction decreases the potential for future reproduction. But, as the residual reproductive value decreases, the importance of this trade-off decreases, leading to increased investment in the current reproductive attempt. [ 3 ] This terminal investment hypothesis can be illustrated by the equation
ĉ = (a + b)φ / (Φ − φ),
where Φ is the total reproductive value, φ the reproductive value of the current breeding attempt, a the proportionate increase in φ resulting from a positive decision (where a yes-no decision must be made regarding whether or not to increase reproductive effort), and ĉ the cost of a positive decision when there is no selective pressure for either a positive or a negative decision (this variable is also known as the "barely-justified cost"). The variable b is the proportionate loss in φ from a negative decision. The barely-justified cost is thus inversely proportional to the residual reproductive value. When the level of reproductive investment has not reached the point where the equation above holds, more positive decisions about reproductive effort will be made. Thus, as the residual reproductive value decreases, more positive decisions are needed to bring the equation into balance. [ 1 ]
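A quick numerical sketch (the parameter values are hypothetical, chosen only for illustration) shows the qualitative behaviour: as residual reproductive value Φ − φ shrinks, the barely-justified cost ĉ rises:

```python
def barely_justified_cost(a, b, phi, Phi):
    """c_hat = (a + b) * phi / (Phi - phi); requires Phi > phi."""
    return (a + b) * phi / (Phi - phi)

young = barely_justified_cost(a=0.1, b=0.1, phi=1.0, Phi=10.0)  # much future value
old = barely_justified_cost(a=0.1, b=0.1, phi=1.0, Phi=2.0)     # little future value
assert old > young  # older organisms justify costlier reproductive decisions
```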
In animals, most tests of the terminal investment hypothesis are correlations of age and reproductive effort, immune challenges on all age stages, and immune challenges on older ages versus younger ages. The last type of test is considered to be a more reliable measure of senescence's effect on reproductive effort, as younger individuals should reduce reproductive effort to reduce their chance of death because of their high future reproductive prospects, while older animals should increase effort because of their low future prospects. [ 2 ] Overall, the terminal investment hypothesis is generally supported in a variety of animals. [ 4 ]
A study on blue tits published in 2000 found that individuals injected with a human diphtheria – tetanus vaccine fed their nestlings less than those injected with a control solution. [ 5 ] In a study published in 2004, house sparrows that were injected with a Newcastle disease vaccine were more likely to lay a replacement clutch after their first clutch had been artificially removed than those that were injected with a control solution. [ 6 ] In a study published in 2006, old blue-footed boobies injected with lipopolysaccharides (to challenge the immune system) before laying fledged more young than normal, whereas young individuals fledged less than normal. [ 2 ] An increase in maternal effort in immune challenged birds may be mediated by the hormone corticosterone ; a study published in 2015 found that house wrens injected with lipopolysaccharides increased foraging, and that measurements of corticosterone from eggs laid after injection found a positive correlation of this hormone with maternal foraging rates. [ 7 ]
A study published in 2009 supported the cost of reproduction and terminal investment hypotheses in the burying beetle . It found that beetles manipulated to overproduce young (by replacing a 30-gram (1.1 oz) mouse carcass with a 20-gram (0.71 oz) carcass) had shorter lifespans than those that bred on just 30-gram (1.1 oz) carcasses, followed by those that had a 20-gram (0.71 oz) carcass. In turn, non-breeding beetles had a significantly longer lifespan than those that bred. This supports the cost of reproduction hypothesis. Another experiment from the same study found that beetles that first bred at 65 days had a larger brood size before dispersal (before the larvae start to pupate in the soil) than those that initially bred at 28 days. This supports the terminal investment hypothesis, and avoids the confound of an increased average brood size in older animals due to differential survival of quality individuals. [ 3 ]
A study published in 2004 on the flatworm Diplostomum spathaceum found that as its intermediate host, a snail, aged, production of cercariae (which are passed on to the final host, a fish) decreased. This is in line with the bet hedging hypothesis, which, in this case, says that the flatworm should attempt to keep its host alive longer so that more young can be produced; it does not support the terminal investment hypothesis. [ 8 ]
A study published in 2002 found results contrary to the terminal investment hypothesis in reindeer . Calf weight peaked at the mother's seventh year of age, and declined thereafter. However, this would only be opposed to the hypothesis if reproductive costs did not increase with age. An alternative hypothesis, the senescence hypothesis, positing that reproductive output declines with age-related loss of function, was supported by the study. [ 9 ] These two hypotheses are not necessarily mutually exclusive; a study on rhesus macaques published in 2010 strongly supported the senescence hypothesis and weakly supported the terminal investment hypothesis. It found that older mothers were lighter, less active, and had lighter infants with reduced survival rates compared to younger mothers (supporting the senescence hypothesis), but that older individuals spent more time in contact with their young (supporting the terminal investment hypothesis). [ 10 ] Additionally, a study published in 1982 on red deer on the island of Rhum found that while older mothers produced less offspring (and lighter offspring, when they did) than expected for a given body weight, they had longer suckling bouts (which had previously been correlated with milk yield, calf body condition in early winter, and calf survival to spring) compared to younger mothers. [ 11 ]
A study on spotted turtles published in 2008 found that individuals in very poor condition sometimes did not breed. This is consistent with the bet hedging hypothesis, and indicates decision making on a large temporal scale (as spotted turtles may live for 65 to 110 years). However, individuals in poor condition generally produced a relatively large amount of small eggs; consistent with the terminal investment hypothesis. [ 12 ]
Although the terminal investment hypothesis has been relatively widely studied in animals, there have been few studies of the hypothesis' application to plants. One study on members of the long-lived oak genus Quercus found that trees declined in condition towards the end of their lifespan, and did not invest an increasing proportion of their decreasing resources in reproduction. [ 4 ] | https://en.wikipedia.org/wiki/Terminal_investment_hypothesis |
A terminal mode is one of a set of possible states of a terminal or pseudo terminal character device in Unix-like systems and determines how characters written to the terminal are interpreted. In cooked mode data is preprocessed before being given to a program, while raw mode passes the data as-is to the program without interpreting any of the special characters.
The system intercepts special characters in cooked mode and interprets special meaning from them. Backspace , delete , and Control-D are typically used to enable line-editing for the input to the running programs, and other control characters such as Control-C and Control-Z are used for job control or associated with other signals . The precise definition of what constitutes a cooked mode is operating system -specific. [ 1 ]
For example, if "ABC<Backspace>D" is given as an input to a program through a terminal character device in cooked mode, the program gets "ABD". But, if the terminal is in raw mode, the program gets the characters "ABC" followed by the Backspace character and followed by "D". In cooked mode, the terminal line discipline processes the characters "ABC<Backspace>D" and presents only the result ("ABD") to the program.
Technically, the term "cooked mode" should be associated only with streams that have a terminal line discipline , but generally it is applied to any system that does some amount of preprocessing. [ 2 ]
cbreak mode (sometimes called rare mode ) is a mode between raw mode and cooked mode. Unlike cooked mode it works with single characters at a time, rather than forcing a wait for a whole line and then feeding the line in all at once. Unlike raw mode, keystrokes like abort (usually Control-C ) are still processed by the terminal and will interrupt the process.
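On POSIX systems these modes can be toggled from user code. A short Python sketch using the standard termios and tty modules puts the terminal into cbreak mode, reads one keystroke, and restores cooked mode:

```python
import sys
import termios
import tty

fd = sys.stdin.fileno()
saved = termios.tcgetattr(fd)  # remember the cooked-mode settings
try:
    tty.setcbreak(fd)          # character-at-a-time; Ctrl-C still signals
    ch = sys.stdin.read(1)     # returns after one keystroke, no Enter needed
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, saved)  # restore cooked mode
print(f"read: {ch!r}")
```

Replacing tty.setcbreak with tty.setraw would give raw mode instead, in which even Ctrl-C arrives as an ordinary byte rather than raising an interrupt.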
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Terminal_mode |