Backbone network
A backbone network is part of a computer network infrastructure that provides a path for the exchange of information between different LANs or subnetworks. A backbone can tie together diverse networks within the same building, across different buildings, or over a wide area. When designing a network backbone, network performance and network congestion are critical factors to take into account. Normally, the backbone network's capacity is greater than that of the individual networks connected to it.
For example, a large company might implement a backbone network to connect departments that are located around the world. The equipment that ties together the departmental networks constitutes the network backbone. Another example of a backbone network is the Internet backbone, a massive, global system of fiber-optic cable and optical networking that carries the bulk of data between wide area networks (WANs) and metro, regional, national, and transoceanic networks.
Metropolitan area network
A metropolitan area network (MAN) is a large computer network that interconnects users with computer resources in a geographic region of the size of a metropolitan area.
Wide area network
A wide area network (WAN) is a computer network that covers a large geographic area, such as a city or country, or even spans intercontinental distances. A WAN uses a communications channel that combines many types of media, such as telephone lines, cables, and airwaves. A WAN often makes use of transmission facilities provided by common carriers, such as telephone companies. WAN technologies generally function at the lower three layers of the OSI model: the physical layer, the data link layer, and the network layer.
Enterprise private network
An enterprise private network is a network that a single organization builds to interconnect its office locations (e.g., production sites, head offices, remote offices, shops) so they can share computer resources.
Virtual private network
A virtual private network (VPN) is an overlay network in which some of the links between nodes are carried by open connections or virtual circuits in some larger network (e.g., the Internet) instead of by physical wires. The data link layer protocols of the virtual network are said to be tunneled through the larger network. One common application is secure communications through the public Internet, but a VPN need not have explicit security features, such as authentication or content encryption. VPNs, for example, can be used to separate the traffic of different user communities over an underlying network with strong security features.
VPN may have best-effort performance or may have a defined service level agreement (SLA) between the VPN customer and the VPN service provider.
Global area network
A global area network (GAN) is a network used for supporting mobile users across an arbitrary number of wireless LANs, satellite coverage areas, etc. The key challenge in mobile communications is handing off communications from one local coverage area to the next. In IEEE Project 802, this involves a succession of terrestrial wireless LANs.
Organizational scope
Networks are typically managed by the organizations that own them. Private enterprise networks may use a combination of intranets and extranets. They may also provide network access to the Internet, which has no single owner and permits virtually unlimited global connectivity.
Intranet
An intranet is a set of networks that are under the control of a single administrative entity. An intranet typically uses the Internet Protocol and IP-based tools such as web browsers and file transfer applications. The administrative entity limits the use of the intranet to its authorized users. Most commonly, an intranet is the internal LAN of an organization. A large intranet typically has at least one web server to provide users with organizational information.
Extranet
An extranet is a network that is under the administrative control of a single organization but supports a limited connection to a specific external network. For example, an organization may provide access to some aspects of its intranet to share data with its business partners or customers. These other entities are not necessarily trusted from a security standpoint. The network connection to an extranet is often, but not always, implemented via WAN technology.
Internet
An internetwork is the connection of multiple computer networks, which may be of different types, into a single larger network by means of higher-layer network protocols, with routers connecting the constituent networks together.
The Internet is the largest example of an internetwork. It is a global system of interconnected governmental, academic, corporate, public, and private computer networks. It is based on the networking technologies of the Internet protocol suite. It is the successor of the Advanced Research Projects Agency Network (ARPANET) developed by DARPA of the United States Department of Defense. The Internet utilizes copper communications and an optical networking backbone to enable the World Wide Web (WWW), the Internet of things, video transfer, and a broad range of information services.
Participants on the Internet use a diverse array of several hundred documented, and often standardized, protocols compatible with the Internet protocol suite and the IP addressing system administered by the Internet Assigned Numbers Authority and address registries. Service providers and large enterprises exchange information about the reachability of their address spaces through the Border Gateway Protocol (BGP), forming a redundant worldwide mesh of transmission paths.
Darknet
A darknet is an overlay network, typically running on the Internet, that is only accessible through specialized software. It is an anonymizing network where connections are made only between trusted peers — sometimes called friends (F2F) — using non-standard protocols and ports.
Darknets are distinct from other distributed peer-to-peer networks as sharing is anonymous (that is, IP addresses are not publicly shared), and therefore users can communicate with little fear of governmental or corporate interference.
Network service
Network services are applications hosted by servers on a computer network, to provide some functionality for members or users of the network, or to help the network itself to operate.
The World Wide Web, e-mail, printing, and network file sharing are examples of well-known network services. Network services such as the Domain Name System (DNS) give names for IP and MAC addresses (people remember names like nm.lan better than numbers), while the Dynamic Host Configuration Protocol (DHCP) ensures that the equipment on the network has a valid IP address.
Services are usually based on a service protocol that defines the format and sequencing of messages between clients and servers of that network service.
Network performance
Bandwidth
Bandwidth in bit/s may refer to consumed bandwidth, corresponding to achieved throughput or goodput, i.e., the average rate of successful data transfer through a communication path. The throughput is affected by processes such as bandwidth shaping, bandwidth management, bandwidth throttling, bandwidth cap and bandwidth allocation (using, for example, bandwidth allocation protocol and dynamic bandwidth allocation).
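The distinction between raw link rate and goodput can be made concrete with a small calculation. The framing figures below (a 1460-byte payload carried in a 1538-byte on-the-wire Ethernet frame) are illustrative assumptions, not values from this article:

```python
def goodput(link_rate_bps, payload_bytes, frame_bytes, loss_rate=0.0):
    # Goodput: the share of the raw link rate that carries useful payload,
    # reduced further by any packet loss on the path.
    efficiency = payload_bytes / frame_bytes
    return link_rate_bps * efficiency * (1.0 - loss_rate)

# 100 Mbit/s link, 1460-byte payload in a 1538-byte on-the-wire frame:
rate = goodput(100e6, 1460, 1538)   # just under 95 Mbit/s of useful data
```

The same function shows why consumed bandwidth and goodput diverge under loss: with a 10% loss rate the useful rate drops proportionally.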
Network delay
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several components, the sum of which is the total delay:
Processing delay: time it takes a router to process the packet header
Queuing delay: time the packet spends in routing queues
Transmission delay: time it takes to push the packet's bits onto the link
Propagation delay: time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from less than a microsecond to several hundred milliseconds.
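The four delay components above sum to the total delay, which can be sketched as follows. The packet size, link rate, and propagation speed are illustrative assumptions (roughly two-thirds of the speed of light is typical for signals in fiber or copper):

```python
def total_delay(processing_s, queuing_s, packet_bits, link_bps,
                distance_m, prop_speed_m_s=2e8):
    transmission = packet_bits / link_bps       # serialization onto the link
    propagation = distance_m / prop_speed_m_s   # travel time through the medium
    return processing_s + queuing_s + transmission + propagation

# A 1500-byte (12,000-bit) packet on a 1 Gbit/s link over 100 km,
# ignoring processing and queuing delay: propagation dominates here.
d = total_delay(0.0, 0.0, 12_000, 1e9, 100_000)   # 0.000512 s
```

Note how the fixed transmission and propagation terms set the minimum delay described above, while queuing delay supplies the variable, congestion-dependent part.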
Performance metrics
The parameters that affect performance typically can include throughput, jitter, bit error rate and latency.
In circuit-switched networks, network performance is synonymous with the grade of service. The number of rejected calls is a measure of how well the network is performing under heavy traffic loads. Other types of performance measures can include the level of noise and echo.
In an Asynchronous Transfer Mode (ATM) network, performance can be measured by line rate, quality of service (QoS), data throughput, connect time, stability, technology, modulation technique, and modem enhancements.
There are many ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled instead of measured. For example, state transition diagrams are often used to model queuing performance in a circuit-switched network. The network planner uses these diagrams to analyze how the network performs in each state, ensuring that the network is optimally designed.
Network congestion
Network congestion occurs when a link or node is subjected to a greater data load than it is rated for, resulting in a deterioration of its quality of service. When networks are congested and queues become too full, packets have to be discarded, and participants must rely on retransmission to maintain reliable communications. Typical effects of congestion include queueing delay, packet loss or the blocking of new connections. A consequence of these latter two is that incremental increases in offered load lead either to only a small increase in the network throughput or to a potential reduction in network throughput.
Network protocols that use aggressive retransmissions to compensate for packet loss tend to keep systems in a state of network congestion even after the initial load is reduced to a level that would not normally induce network congestion. Thus, networks using these protocols can exhibit two stable states under the same level of load. The stable state with low throughput is known as congestive collapse.
Modern networks use congestion control, congestion avoidance and traffic control techniques where endpoints typically slow down or sometimes even stop transmission entirely when the network is congested to try to avoid congestive collapse. Specific techniques include: exponential backoff in protocols such as 802.11's CSMA/CA and the original Ethernet, window reduction in TCP, and fair queueing in devices such as routers.
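Exponential backoff, one of the techniques mentioned above, can be sketched in a few lines. This mirrors the truncated binary exponential backoff of classic Ethernet, where after the n-th collision a station waits a random number of slot times drawn from [0, 2^min(n, 10) − 1]; it is a simplified illustration, not a full MAC implementation:

```python
import random

def backoff_slots(collision_count, max_exponent=10, rng=None):
    # After the n-th collision, choose a uniform random wait in
    # [0, 2**min(n, max_exponent) - 1] slot times. Randomizing the wait
    # de-synchronizes the colliding stations; doubling the range on each
    # collision backs the offered load off an overloaded medium.
    if rng is None:
        rng = random.Random()
    k = min(collision_count, max_exponent)
    return rng.randrange(2 ** k)

waits = [backoff_slots(n) for n in range(1, 16)]
```

Capping the exponent (here at 10) bounds the maximum wait while still spreading retransmissions widely under sustained congestion.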
Another method to avoid the negative effects of network congestion is implementing quality of service priority schemes allowing selected traffic to bypass congestion. Priority schemes do not solve network congestion by themselves, but they help to alleviate the effects of congestion for critical services. A third method to avoid network congestion is the explicit allocation of network resources to specific flows. One example of this is the use of Contention-Free Transmission Opportunities (CFTXOPs) in the ITU-T G.hn home networking standard.
For the Internet, addresses the subject of congestion control in detail.
Network resilience
Network resilience is "the ability to provide and maintain an acceptable level of service in the face of faults and challenges to normal operation."
Security
Computer networks are also used by security hackers to deploy computer viruses or computer worms on devices connected to the network, or to prevent these devices from accessing the network via a denial-of-service attack.
Network security
Network security consists of provisions and policies adopted by the network administrator to prevent and monitor unauthorized access, misuse, modification, or denial of the computer network and its network-accessible resources. Network security is used on a variety of computer networks, both public and private, to secure daily transactions and communications among businesses, government agencies, and individuals.
Network surveillance
Network surveillance is the monitoring of data being transferred over computer networks such as the Internet. The monitoring is often done surreptitiously and may be done by or at the behest of governments, by corporations, criminal organizations, or individuals. It may or may not be legal and may or may not require authorization from a court or other independent agency.
Computer and network surveillance programs are widespread today, and almost all Internet traffic is or could potentially be monitored for clues to illegal activity.
Surveillance is very useful to governments and law enforcement to maintain social control, recognize and monitor threats, and prevent or investigate criminal activity. With the advent of programs such as the Total Information Awareness program, technologies such as high-speed surveillance computers and biometrics software, and laws such as the Communications Assistance For Law Enforcement Act, governments now possess an unprecedented ability to monitor the activities of citizens.
However, many civil rights and privacy groups—such as Reporters Without Borders, the Electronic Frontier Foundation, and the American Civil Liberties Union—have expressed concern that increasing surveillance of citizens may lead to a mass surveillance society, with limited political and personal freedoms. Fears such as this have led to lawsuits such as Hepting v. AT&T. The hacktivist group Anonymous has hacked into government websites in protest of what it considers "draconian surveillance".
End-to-end encryption
End-to-end encryption (E2EE) is a digital communications paradigm of uninterrupted protection of data traveling between two communicating parties. It involves the originating party encrypting data so only the intended recipient can decrypt it, with no dependency on third parties. End-to-end encryption prevents intermediaries, such as Internet service providers or application service providers, from reading or tampering with communications. End-to-end encryption generally protects both confidentiality and integrity.
Examples of end-to-end encryption include HTTPS for web traffic, PGP for email, OTR for instant messaging, ZRTP for telephony, and TETRA for radio.
Typical server-based communications systems do not include end-to-end encryption. These systems can only guarantee the protection of communications between clients and servers, not between the communicating parties themselves. Examples of non-E2EE systems are Google Talk, Yahoo Messenger, Facebook, and Dropbox.
The end-to-end encryption paradigm does not directly address risks at the endpoints of the communication themselves, such as the technical exploitation of clients, poor quality random number generators, or key escrow. E2EE also does not address traffic analysis, which relates to things such as the identities of the endpoints and the times and quantities of messages that are sent.
SSL/TLS
The introduction and rapid growth of e-commerce on the World Wide Web in the mid-1990s made it obvious that some form of authentication and encryption was needed. Netscape took the first shot at a new standard. At the time, the dominant web browser was Netscape Navigator. Netscape created a standard called Secure Sockets Layer (SSL). SSL requires a server with a certificate. When a client requests access to an SSL-secured server, the server sends a copy of the certificate to the client. The SSL client checks this certificate (all web browsers come with an exhaustive list of root certificates preloaded), and if the certificate checks out, the server is authenticated and the client negotiates a symmetric-key cipher for use in the session. The session then takes place inside an encrypted tunnel between the SSL server and the SSL client.
Views of networks
Users and network administrators typically have different views of their networks. Users can share printers and some servers from a workgroup, which usually means they are in the same geographic location and are on the same LAN, whereas a network administrator is responsible for keeping that network up and running. A community of interest has less of a connection to a single local area and should be thought of as a set of arbitrarily located users who share a set of servers, and who possibly also communicate via peer-to-peer technologies.
Network administrators can see networks from both physical and logical perspectives. The physical perspective involves geographic locations, physical cabling, and the network elements (e.g., routers, bridges and application-layer gateways) that interconnect via the transmission media. Logical networks, called subnets in the TCP/IP architecture, map onto one or more transmission media. For example, a common practice in a campus of buildings is to make a set of LAN cables in each building appear to be a common subnet, using VLANs.
Users and administrators are aware, to varying extents, of a network's trust and scope characteristics. Again using TCP/IP architectural terminology, an intranet is a community of interest under private administration usually by an enterprise, and is only accessible by authorized users (e.g. employees). Intranets do not have to be connected to the Internet, but generally have a limited connection. An extranet is an extension of an intranet that allows secure communications to users outside of the intranet (e.g. business partners, customers).
Unofficially, the Internet is the set of users, enterprises, and content providers that are interconnected by Internet Service Providers (ISP). From an engineering viewpoint, the Internet is the set of subnets, and aggregates of subnets, that share the registered IP address space and exchange information about the reachability of those IP addresses using the Border Gateway Protocol. Typically, the human-readable names of servers are translated to IP addresses, transparently to users, via the directory function of the Domain Name System (DNS).
Over the Internet, there can be business-to-business, business-to-consumer and consumer-to-consumer communications. When money or sensitive information is exchanged, the communications are apt to be protected by some form of communications security mechanism. Intranets and extranets can be securely superimposed onto the Internet, without any access by general Internet users and administrators, using secure VPN technology.
A redox indicator (also called an oxidation-reduction indicator) is an indicator which undergoes a definite color change at a specific electrode potential.
The requirement for fast and reversible color change means that the oxidation-reduction equilibrium for an indicator redox system needs to be established very quickly. Therefore, only a few classes of organic redox systems can be used for indicator purposes.
There are two common classes of redox indicators:
metal complexes of phenanthroline and bipyridine. In these systems, the metal changes oxidation state.
organic redox systems such as methylene blue. In these systems, a proton participates in the redox reaction. Therefore, sometimes redox indicators are also divided into two general groups: independent or dependent on pH.
The most common redox indicators are organic compounds.
Example: the molecule 2,2′-bipyridine is a redox indicator. In solution, it changes from light blue to red at an electrode potential of 0.97 V.
Chromate salts contain the chromate anion, CrO42−. Dichromate salts contain the dichromate anion, Cr2O72−. They are oxyanions of chromium in the +6 oxidation state and are moderately strong oxidizing agents. In aqueous solution, chromate and dichromate ions are interconvertible.
Chemical properties
Chromates react with hydrogen peroxide, giving products in which peroxide, O22−, replaces one or more oxygen atoms. In acid solution the unstable blue peroxo complex chromium(VI) oxide peroxide, CrO(O2)2, is formed; it is an uncharged covalent molecule which may be extracted into ether. Addition of pyridine results in the formation of the more stable complex CrO(O2)2py.
Acid–base properties
In aqueous solution, chromate and dichromate anions exist in a chemical equilibrium.
The predominance diagram shows that the position of the equilibrium depends on both pH and the analytical concentration of chromium. The chromate ion is the predominant species in alkaline solutions, but dichromate can become the predominant ion in acidic solutions.
Further condensation reactions can occur in strongly acidic solution with the formation of trichromates, Cr3O102−, and tetrachromates, Cr4O132−. All polyoxyanions of chromium(VI) have structures made up of tetrahedral CrO4 units sharing corners.
The hydrogen chromate ion, HCrO4−, is a weak acid:
HCrO4− ⇌ CrO42− + H+; pKa ≈ 5.9
It is also in equilibrium with the dichromate ion:
2 HCrO4− ⇌ Cr2O72− + H2O
This equilibrium does not involve a change in hydrogen ion concentration, implying that the position of the equilibrium is independent of pH. The red line on the predominance diagram is not quite horizontal due to the simultaneous equilibrium with the chromate ion. The hydrogen chromate ion may be protonated, with the formation of molecular chromic acid, H2CrO4, but the pKa for the equilibrium
H2CrO4 ⇌ HCrO4− + H+
is not well characterized. Reported values vary between about −0.8 and 1.6.
The dichromate ion is a somewhat weaker base than the chromate ion:
HCr2O7− ⇌ Cr2O72− + H+; pKa = 1.18
The pKa value for this reaction shows that it can be ignored at pH > 4.
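Taking the pKa ≈ 5.9 quoted above and ignoring dimerization to dichromate (a simplification that holds at low chromium concentration), the pH dependence of the HCrO4−/CrO42− pair follows directly from the acid-dissociation equilibrium:

```python
def fraction_chromate(pH, pKa=5.9):
    # Fraction of the monomeric Cr(VI) species present as CrO4^2-
    # (the remainder is HCrO4-); dimerization to dichromate is ignored,
    # so this sketch applies only to dilute chromium solutions.
    ratio = 10 ** (pH - pKa)   # [CrO4^2-] / [HCrO4-]
    return ratio / (1 + ratio)

alkaline = fraction_chromate(9.0)   # chromate dominates
acidic = fraction_chromate(3.0)     # hydrogen chromate dominates
```

This reproduces the behavior of the predominance diagram: chromate is the predominant monomeric species in alkaline solution, hydrogen chromate (and, at higher concentration, dichromate) in acidic solution.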
Oxidation–reduction properties
The chromate and dichromate ions are fairly strong oxidizing agents. Commonly three electrons are added to a chromium atom, reducing it to oxidation state +3. In acid solution the aquated Cr3+ ion is produced.
Cr2O72− + 14 H+ + 6 e− → 2 Cr3+ + 7 H2O, ε0 = 1.33 V
In alkaline solution chromium(III) hydroxide is produced. The redox potential shows that chromates are weaker oxidizing agents in alkaline solution than in acid solution.
CrO42− + 4 H2O + 3 e− → Cr(OH)3 + 5 OH−, ε0 = −0.13 V
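The strong pH dependence of the acidic half-reaction (14 protons per six electrons) can be made quantitative with the Nernst equation. The sketch below assumes the ε0 = 1.33 V value above, 25 °C, and unit concentrations of the chromium species:

```python
import math

def dichromate_potential(pH, cr3_molar=1.0, cr2o7_molar=1.0, E0=1.33):
    # Nernst equation for Cr2O7^2- + 14 H+ + 6 e- -> 2 Cr3+ + 7 H2O at 25 degC:
    #   E = E0 - (0.05916 / 6) * log10([Cr3+]^2 / ([Cr2O7^2-] [H+]^14))
    h = 10.0 ** (-pH)
    Q = cr3_molar ** 2 / (cr2o7_molar * h ** 14)
    return E0 - (0.05916 / 6) * math.log10(Q)

e_standard = dichromate_potential(0.0)   # 1.33 V at standard conditions
e_neutral = dichromate_potential(7.0)    # a much weaker oxidant near pH 7
```

The calculation shows the potential falling by about 0.14 V per pH unit, consistent with the qualitative statement that chromates oxidize less strongly as the solution becomes less acidic.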
Applications
Approximately of hexavalent chromium, mainly sodium dichromate, were produced in 1985. Chromates and dichromates are used in chrome plating to protect metals from corrosion and to improve paint adhesion. Chromate and dichromate salts of heavy metals, lanthanides and alkaline earth metals are only very slightly soluble in water and are thus used as pigments. The lead-containing pigment chrome yellow was used for a very long time before environmental regulations discouraged its use. When used as oxidizing agents or titrants in a redox chemical reaction, chromates and dichromates convert into trivalent chromium, Cr3+, salts of which typically have a distinctively different blue-green color.
Natural occurrence and production
The primary chromium ore is the mixed metal oxide chromite, FeCr2O4, found as brittle metallic black crystals or granules. Chromite ore is heated with a mixture of calcium carbonate and sodium carbonate in the presence of air. The chromium is oxidized to the hexavalent form, while the iron forms iron(III) oxide, Fe2O3:
4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2
Subsequent leaching of this material at higher temperatures dissolves the chromates, leaving a residue of insoluble iron oxide. Normally the chromate solution is further processed to make chromium metal, but a chromate salt may be obtained directly from the liquor.
Chromate containing minerals are rare. Crocoite, PbCrO4, which can occur as spectacular long red crystals, is the most commonly found chromate mineral. Rare potassium chromate minerals and related compounds are found in the Atacama Desert. Among them is lópezite – the only known dichromate mineral.
Toxicity
Hexavalent chromium compounds can be toxic and carcinogenic (IARC Group 1). Inhaling particles of hexavalent chromium compounds can cause lung cancer. Positive associations have also been observed between exposure to chromium(VI) compounds and cancer of the nose and nasal sinuses. The use of chromate compounds in manufactured goods is restricted in the EU (and, by market commonality, the rest of the world) by the EU Parliament directive on the Restriction of Hazardous Substances (RoHS) (2002/95/EC).
In electrochemistry, a half-cell is a structure that contains a conductive electrode and a surrounding conductive electrolyte separated by a naturally occurring Helmholtz double layer. Chemical reactions within this layer momentarily pump electric charges between the electrode and the electrolyte, resulting in a potential difference between the electrode and the electrolyte. The typical anode reaction involves a metal atom in the electrode being dissolved and transported as a positive ion across the double layer, causing the electrolyte to acquire a net positive charge while the electrode acquires a net negative charge. The growing potential difference creates an intense electric field within the double layer, and the potential rises in value until the field halts the net charge-pumping reactions. This self-limiting action occurs almost instantly in an isolated half-cell; in applications two dissimilar half-cells are appropriately connected to constitute a Galvanic cell.
A standard half-cell consists of a metal electrode in an aqueous solution where the concentration of the metal ions is 1 molar (1 mol/L) at 298 K (25 °C). In the case of the standard hydrogen electrode (SHE), a platinum electrode is used, immersed in an acidic solution with a hydrogen ion concentration of 1 M and with hydrogen gas at 1 atm bubbled through the solution. The electrochemical series, which consists of standard electrode potentials and is closely related to the reactivity series, was generated by measuring the potential difference between a metal half-cell and a standard hydrogen half-cell, connected by a salt bridge.
The standard hydrogen half-cell:
2H+(aq) + 2e− → H2(g)
The half-cells of a Daniell cell:
Overall reaction
Zn + Cu2+ → Zn2+ + Cu
Half-cell (anode) of Zn
Zn → Zn2+ + 2e−
Half-cell (cathode) of Cu
Cu2+ + 2e− → Cu
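Standard electrode potentials obtained against the SHE let one predict the EMF of any cell built from two half-cells: the cell potential is the cathode potential minus the anode potential. The sketch below uses common textbook values for the Daniell cell's couples (assumed here, not quoted in this text):

```python
# Standard electrode potentials in volts vs. the SHE
# (typical textbook values; assumptions, not taken from this article).
STANDARD_POTENTIALS = {
    "Zn2+/Zn": -0.76,
    "Cu2+/Cu": +0.34,
    "2H+/H2": 0.00,   # the SHE reference itself, defined as zero
}

def cell_emf(cathode_couple, anode_couple):
    # E_cell = E_cathode - E_anode
    return STANDARD_POTENTIALS[cathode_couple] - STANDARD_POTENTIALS[anode_couple]

emf = cell_emf("Cu2+/Cu", "Zn2+/Zn")   # Daniell cell: about 1.10 V
```

Because zinc's potential is the more negative, zinc is the anode (oxidized) and copper the cathode, matching the half-cell equations above.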
Hypochlorous acid is an inorganic compound with the chemical formula HOCl (also written as HClO or ClHO); its structure is H–O–Cl. It is an acid that forms when chlorine dissolves in water, and itself partially dissociates, forming a hypochlorite anion, ClO−. HClO and ClO− are oxidizers, and the primary disinfection agents of chlorine solutions. HClO cannot be isolated from these solutions due to rapid equilibration with its precursor, chlorine.
Because of its strong antimicrobial properties, the related compounds sodium hypochlorite (NaOCl) and calcium hypochlorite (Ca(ClO)2) are ingredients in many commercial bleaches, deodorants, and disinfectants. The white blood cells of mammals, such as humans, also contain hypochlorous acid as a tool against foreign bodies. In living organisms, HOCl is generated by the reaction of hydrogen peroxide with chloride ions under the catalysis of the heme enzyme myeloperoxidase (MPO).
Like many other disinfectants, hypochlorous acid solutions will destroy pathogens, such as the virus that causes COVID-19, present on surfaces. In low concentrations, such solutions can serve to disinfect open wounds.
History
Hypochlorous acid was discovered in 1834 by the French chemist Antoine Jérôme Balard (1802–1876) by adding, to a flask of chlorine gas, a dilute suspension of mercury(II) oxide in water. He also named the acid and its compounds.
Despite being relatively easy to make, a hypochlorous acid solution is difficult to keep stable. Only in recent years have scientists been able to cost-effectively produce and maintain hypochlorous acid water for stable commercial use.
Uses
In organic synthesis, HClO converts alkenes to chlorohydrins.
In biology, hypochlorous acid is generated in activated neutrophils by myeloperoxidase-mediated peroxidation of chloride ions, and contributes to the destruction of bacteria.
In medicine, hypochlorous acid water has been used as a disinfectant and sanitiser.
In wound care, the U.S. Food and Drug Administration has, as of early 2016, approved products whose main active ingredient is hypochlorous acid for use in treating wounds and various infections in humans and pets. It is also FDA-approved as a preservative for saline solutions.
In disinfection, it has been used in the form of liquid spray, wet wipes and aerosolised application. Recent studies have shown hypochlorous acid water to be suitable for fog and aerosolised application for disinfection chambers and suitable for disinfecting indoor settings such as offices, hospitals and healthcare clinics.
In food service and water distribution, specialized equipment to generate weak solutions of HClO from water and salt is sometimes used to generate adequate quantities of safe (unstable) disinfectant to treat food preparation surfaces and water supplies. It is also commonly used in restaurants due to its non-flammable and nontoxic characteristics.
In water treatment, hypochlorous acid is the active sanitizer in hypochlorite-based products (e.g. used in swimming pools).
Similarly, in ships and yachts, marine sanitation devices use electricity to convert seawater into hypochlorous acid to disinfect macerated faecal waste before discharge into the sea.
In deodorization, hypochlorous acid has been tested to remove up to 99% of foul odours including garbage, rotten meat, toilet, stool, and urine odours.
Formation, stability and reactions
Addition of chlorine to water gives both hydrochloric acid (HCl) and hypochlorous acid (HClO): | Hypochlorous acid | Wikipedia | 437 | 578099 | https://en.wikipedia.org/wiki/Hypochlorous%20acid | Physical sciences | Specific acids | Chemistry |
When acids are added to aqueous salts of hypochlorous acid (such as sodium hypochlorite in commercial bleach solution), the resultant reaction is driven to the left, and chlorine gas is formed. Thus, the formation of stable hypochlorite bleaches is facilitated by dissolving chlorine gas into basic water solutions, such as sodium hydroxide.
The acid can also be prepared by dissolving dichlorine monoxide in water; under standard aqueous conditions, anhydrous hypochlorous acid cannot be isolated because of the readily reversible equilibrium between it and its anhydride:
2 HClO <=> Cl2O + H2O, K = 3.55 × 10−3 dm3/mol (at 0 °C)
The presence of light or transition metal oxides of copper, nickel, or cobalt accelerates the exothermic decomposition into hydrochloric acid and oxygen:

2 HClO -> 2 HCl + O2
Fundamental reactions
In aqueous solution, hypochlorous acid partially dissociates into the hypochlorite anion, ClO−:

HClO <=> H+ + ClO−
Salts of hypochlorous acid are called hypochlorites. One of the best-known hypochlorites is NaClO, the active ingredient in bleach.
HClO is a stronger oxidant than chlorine under standard conditions.
2 HClO + 2 H+ + 2 e− <=> Cl2 + 2 H2O, E = +1.63 V
HClO reacts with HCl to form chlorine:

HClO + HCl -> Cl2 + H2O
HClO reacts with ammonia to form monochloramine:

NH3 + HClO -> NH2Cl + H2O
HClO can also react with organic amines, forming N-chloroamines.
Hypochlorous acid exists in equilibrium with its anhydride, dichlorine monoxide.
2 HClO <=> Cl2O + H2O, K = 3.55 × 10−3 dm3/mol (at 0 °C)
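As a rough numerical illustration of this equilibrium constant, and assuming ideal behaviour (activity approximated by molar concentration, unit water activity), the anhydride concentration follows directly from K. The function name below is illustrative, not from any library:

```python
# Sketch: equilibrium concentration of the anhydride Cl2O in aqueous HClO,
# assuming activity ~ molarity and unit water activity (idealised).
K_ANHYDRIDE = 3.55e-3  # dm^3/mol at 0 °C, for 2 HClO <=> Cl2O + H2O

def cl2o_concentration(hclo_molar: float) -> float:
    """[Cl2O] = K * [HClO]^2 from the equilibrium expression K = [Cl2O]/[HClO]^2."""
    return K_ANHYDRIDE * hclo_molar ** 2
```

Even in a 1 M solution this gives only about 3.6 mM of the anhydride, consistent with the equilibrium lying far toward HClO.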
Reactivity of HClO with biomolecules
Hypochlorous acid reacts with a wide variety of biomolecules, including DNA, RNA, fatty acid groups, cholesterol and proteins.
Reaction with protein sulfhydryl groups
Knox et al. first noted that HClO is a sulfhydryl inhibitor that, in sufficient quantity, can completely inactivate proteins containing sulfhydryl groups. This is because HClO oxidises sulfhydryl groups, leading to the formation of disulfide bonds that can result in crosslinking of proteins. The HClO mechanism of sulfhydryl oxidation is similar to that of monochloramine, and may only be bacteriostatic, because once the residual chlorine is dissipated, some sulfhydryl function can be restored. One sulfhydryl-containing amino acid can scavenge up to four molecules of HClO. Consistent with this, it has been proposed that sulfhydryl groups of sulfur-containing amino acids can be oxidized a total of three times by three HClO molecules, with the fourth reacting with the α-amino group. The first reaction yields sulfenic acid (R–SOH), then sulfinic acid (R–SO2H), and finally sulfonic acid (R–SO3H). Sulfenic acids form disulfides with another protein sulfhydryl group, causing cross-linking and aggregation of proteins. Sulfinic acid and sulfonic acid derivatives are produced only at high molar excesses of HClO, and disulfides are formed primarily at bactericidal levels. Disulfide bonds can also be oxidized by HClO to sulfinic acid. Because the oxidation of sulfhydryls and disulfides evolves hydrochloric acid, this process results in the depletion of HClO.
Reaction with protein amino groups
Hypochlorous acid reacts readily with amino acids that have amino group side-chains, with the chlorine from HClO displacing a hydrogen, resulting in an organic chloramine. Chlorinated amino acids rapidly decompose, but protein chloramines are longer-lived and retain some oxidative capacity. Thomas et al. concluded from their results that most organic chloramines decayed by internal rearrangement and that fewer available NH2 groups promoted attack on the peptide bond, resulting in cleavage of the protein. McKenna and Davies found that 10 mM or greater HClO is necessary to fragment proteins in vivo. Consistent with these results, it was later proposed that the chloramine undergoes a molecular rearrangement, releasing HCl and ammonia to form an aldehyde. The aldehyde group can further react with another amino group to form a Schiff base, causing cross-linking and aggregation of proteins.
Reaction with DNA and nucleotides
Hypochlorous acid reacts slowly with DNA and RNA as well as all nucleotides in vitro. GMP is the most reactive because HClO reacts with both the heterocyclic NH group and the amino group. Similarly, TMP, with only a heterocyclic NH group that is reactive with HClO, is the second-most reactive. AMP and CMP, which have only a slowly reactive amino group, are less reactive with HClO. UMP has been reported to be reactive only at a very slow rate. The heterocyclic NH groups are more reactive than amino groups, and their secondary chloramines are able to donate the chlorine. These reactions likely interfere with DNA base pairing, and, consistent with this, Prütz has reported a decrease in viscosity of DNA exposed to HClO similar to that seen with heat denaturation. The sugar moieties are nonreactive and the DNA backbone is not broken. NADH can react with chlorinated TMP and UMP as well as HClO. This reaction can regenerate UMP and TMP and results in the 5-hydroxy derivative of NADH. The reaction with TMP or UMP is slowly reversible to regenerate HClO. A second slower reaction that results in cleavage of the pyridine ring occurs when excess HClO is present. NAD+ is inert to HClO.
Reaction with lipids
Hypochlorous acid reacts with unsaturated bonds in lipids, but not saturated bonds, and the ClO− ion does not participate in this reaction. This reaction occurs by hydrolysis with addition of chlorine to one of the carbons and a hydroxyl to the other. The resulting compound is a chlorohydrin. The polar chlorine disrupts lipid bilayers and could increase permeability. When chlorohydrin formation occurs in the lipid bilayers of red blood cells, their permeability increases; if enough chlorohydrin is formed, disruption could occur. The addition of preformed chlorohydrin to red blood cells can affect permeability as well. Cholesterol chlorohydrins have also been observed, but do not greatly affect permeability. Hypochlorous acid also reacts with a subclass of glycerophospholipids called plasmalogens, yielding chlorinated fatty aldehydes that are capable of protein modification and may play a role in inflammatory processes such as platelet aggregation and the formation of neutrophil extracellular traps.
Mode of disinfectant action
E. coli exposed to hypochlorous acid lose viability in less than 0.1 seconds due to inactivation of many vital systems. Hypochlorous acid has a reported of 0.0104–0.156 ppm and 2.6 ppm caused 100% growth inhibition in 5 minutes. However, the concentration required for bactericidal activity is also highly dependent on bacterial concentration.
In 1948, Knox et al. proposed the idea that inhibition of glucose oxidation is a major factor in the bactericidal nature of chlorine solutions. They proposed that the active agent or agents diffuse across the cytoplasmic membrane to inactivate key sulfhydryl-containing enzymes in the glycolytic pathway. This group was also the first to note that chlorine solutions (HClO) inhibit sulfhydryl enzymes. Later studies have shown that, at bactericidal levels, the cytosol components do not react with HClO. In agreement with this, McFeters and Camper found that aldolase, an enzyme that Knox et al. proposed would be inactivated, was unaffected by HClO in vivo. It has been further shown that loss of sulfhydryls does not correlate with inactivation. That leaves the question concerning what causes inhibition of glucose oxidation. The discovery that HClO blocks induction of β-galactosidase by added lactose led to a possible answer to this question. The uptake of radiolabeled substrates by both ATP hydrolysis and proton co-transport may be blocked by exposure to HClO preceding loss of viability. From this observation, it was proposed that HClO blocks uptake of nutrients by inactivating transport proteins. The question of loss of glucose oxidation has been further explored in terms of loss of respiration. Venkobachar et al. found that succinic dehydrogenase was inhibited in vitro by HClO, which led to the investigation of the possibility that disruption of electron transport could be the cause of bacterial inactivation. Albrich et al. subsequently found that HClO destroys cytochromes and iron-sulfur clusters and observed that oxygen uptake is abolished by HClO and adenine nucleotides are lost. It was also observed that irreversible oxidation of cytochromes paralleled the loss of respiratory activity. One way of addressing the loss of oxygen uptake was by studying the effects of HClO on succinate-dependent electron transport. Rosen et al.
found that levels of reducible cytochromes in HClO-treated cells were normal, but these cells were unable to reduce them. Succinate dehydrogenase was also inhibited by HClO, stopping the flow of electrons to oxygen.
Later studies revealed that ubiquinol oxidase activity ceases first, and the still-active cytochromes reduce the remaining quinone. The cytochromes then pass the electrons to oxygen, which explains why the cytochromes cannot be reoxidized, as observed by Rosen et al. However, this line of inquiry was ended when Albrich et al. found that cellular inactivation precedes loss of respiration, by using a flow mixing system that allowed evaluation of viability on much smaller time scales. This group found that cells capable of respiring could not divide after exposure to HClO.
Depletion of adenine nucleotides
Having eliminated loss of respiration, Albrich et al. proposed that the cause of death may be metabolic dysfunction caused by depletion of adenine nucleotides. Barrette et al. studied the loss of adenine nucleotides by studying the energy charge of HClO-exposed cells and found that cells exposed to HClO were unable to step up their energy charge after addition of nutrients. The conclusion was that exposed cells have lost the ability to regulate their adenylate pool, based on the fact that metabolite uptake was only 45% deficient after exposure to HClO and the observation that HClO causes intracellular ATP hydrolysis. It was also confirmed that, at bactericidal levels of HClO, cytosolic components are unaffected. So it was proposed that modification of some membrane-bound protein results in extensive ATP hydrolysis, and this, coupled with the cell's inability to remove AMP from the cytosol, depresses metabolic function. One protein involved in the loss of ability to regenerate ATP has been found to be ATP synthase. Much of this research on respiration reconfirms the observation that relevant bactericidal reactions take place at the cell membrane.
Inhibition of DNA replication
Recently it has been proposed that bacterial inactivation by HClO is the result of inhibition of DNA replication. When bacteria are exposed to HClO, there is a precipitous decline in DNA synthesis that precedes inhibition of protein synthesis and closely parallels loss of viability. During bacterial genome replication, the origin of replication (oriC in E. coli) binds to proteins that are associated with the cell membrane, and it was observed that HClO treatment decreases the affinity of extracted membranes for oriC, with this decreased affinity also paralleling loss of viability. A study by Rosen et al. compared the rate of HClO inhibition of DNA replication of plasmids with different replication origins and found that certain plasmids exhibited a delay in the inhibition of replication when compared to plasmids containing oriC. Rosen's group proposed that inactivation of membrane proteins involved in DNA replication is the mechanism of action of HClO.
Protein unfolding and aggregation
HClO is known to cause post-translational modifications to proteins, the notable ones being cysteine and methionine oxidation. A recent examination of HClO's bactericidal role revealed it to be a potent inducer of protein aggregation. Hsp33, a chaperone known to be activated by oxidative heat stress, protects bacteria from the effects of HClO by acting as a holdase, effectively preventing protein aggregation. Strains of Escherichia coli and Vibrio cholerae lacking Hsp33 were rendered especially sensitive to HClO. Hsp33 protected many essential proteins from aggregation and inactivation due to HClO, which is a probable mediator of HClO's bactericidal effects.
Hypochlorites
Hypochlorites are the salts of hypochlorous acid; commercially important hypochlorites are calcium hypochlorite and sodium hypochlorite.
Production of hypochlorites using electrolysis
Solutions of hypochlorites can be produced in situ by electrolysis of an aqueous sodium chloride solution in both batch and flow processes. The composition of the resulting solution depends on the pH at the anode. In acidic conditions the solution produced has a high hypochlorous acid concentration, but also contains dissolved chlorine gas, which can be corrosive; at neutral pH the solution is around 75% hypochlorous acid and 25% hypochlorite. Some of the chlorine gas produced will dissolve, forming hypochlorite ions. Hypochlorites are also produced by the disproportionation of chlorine gas in alkaline solutions.
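The acid/hypochlorite split quoted above follows from the HClO dissociation equilibrium. A minimal sketch, assuming an approximate pKa of 7.53 at 25 °C and ideal behaviour (the function name is illustrative):

```python
# Sketch: fraction of available chlorine present as undissociated HClO,
# from the Henderson-Hasselbalch relation for HClO <=> H+ + ClO-.
PKA_HCLO = 7.53  # approximate pKa of hypochlorous acid at 25 °C (assumption)

def hclo_fraction(ph: float, pka: float = PKA_HCLO) -> float:
    """Return [HClO] / ([HClO] + [ClO-]) at the given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))
```

At pH 7 this gives roughly 0.77, consistent with the ~75% figure above; at a pH equal to the pKa the mixture is 50:50, and in alkaline solution the hypochlorite ion dominates.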
Safety
HClO is classified as non-hazardous by the Environmental Protection Agency in the US. As an oxidising agent, it can be corrosive or irritant depending on its concentration and pH.
In a clinical test, hypochlorous acid water was tested for eye irritation, skin irritation, and toxicity. The test concluded that it was non-toxic and non-irritating to the eye and skin.
In a 2017 study, a saline hygiene solution preserved with pure hypochlorous acid was shown to reduce the bacterial load significantly without altering the diversity of bacterial species on the eyelids. After 20 minutes of treatment, there was more than 99% reduction of the Staphylococci bacteria.
Commercialisation
Commercial disinfection applications remained elusive for a long time after the discovery of hypochlorous acid because the stability of its solution in water is difficult to maintain. The active compounds quickly deteriorate back into salt water, causing the solution to lose its disinfecting capability and making it difficult to transport for wide use. Despite its stronger disinfecting capability, it is less commonly used as a disinfectant than bleach and alcohol because of cost.
Technological developments have reduced manufacturing costs and allow for the manufacturing and bottling of hypochlorous acid water for home and commercial use. However, most hypochlorous acid water has a short shelf life; storing it away from heat and direct sunlight can help slow the deterioration. The further development of continuous-flow electrochemical cells has allowed the commercialisation of domestic and industrial continuous-flow devices for the in-situ generation of hypochlorous acid for disinfection purposes.
In electrochemistry, the standard hydrogen electrode (abbreviated SHE) is a redox electrode which forms the basis of the thermodynamic scale of oxidation–reduction potentials. Its absolute electrode potential is estimated to be 4.44 ± 0.02 V at 25 °C, but to form a basis for comparison with all other electrochemical reactions, hydrogen's standard electrode potential (E°) is declared to be zero volts at any temperature. Potentials of all other electrodes are compared with that of the standard hydrogen electrode at the same temperature.
Nernst equation for SHE
The hydrogen electrode is based on the redox half cell corresponding to the reduction of two hydrated protons, 2 H+(aq), into one molecule of gaseous hydrogen, H2(g).
General equation for a reduction reaction:

oxidant + n e− <=> reductant
The reaction quotient (Qr) of the half-reaction is the ratio between the chemical activities (a) of the reduced form (the reductant, aRed) and the oxidized form (the oxidant, aOx).
Considering the redox couple:
2H_{(aq)}+ + 2e- <=> H2_{(g)}
at chemical equilibrium, the ratio of the reaction products to the reagents is equal to the equilibrium constant of the half-reaction:

K = aRed / aOx = aH2 / (aH+)²
where
aRed and aOx correspond to the chemical activities of the reduced and oxidized species involved in the redox reaction
aH+ represents the activity of H+.
aH2 denotes the chemical activity of gaseous hydrogen (H2), which is approximated here by its fugacity
pH2 denotes the partial pressure of gaseous hydrogen, expressed without unit: pH2 = x(H2) · P / P°; where
x(H2) is the mole fraction of gaseous hydrogen
P is the total gas pressure in the system
P° is the standard pressure (1 bar = 10⁵ pascal) introduced here simply to overcome the pressure unit and to obtain an equilibrium constant without unit.
More details on managing gas fugacity to get rid of the pressure unit in thermodynamic calculations can be found at thermodynamic activity#Gases. The approach followed is the same as for the chemical activity and molar concentration of solutes in solution. In the SHE, pure hydrogen gas (H2) at the standard pressure of 1 bar is engaged in the system. Meanwhile, the general SHE equation can also be applied to other thermodynamic systems with a different mole fraction or total pressure of hydrogen. | Standard hydrogen electrode | Wikipedia | 455 | 578150 | https://en.wikipedia.org/wiki/Standard%20hydrogen%20electrode | Physical sciences | Electrochemistry | Chemistry |
This redox reaction occurs at a platinized platinum electrode.
The electrode is immersed in the acidic solution and pure hydrogen gas is bubbled over its surface. The activities of both the reduced and oxidised forms of hydrogen are maintained at unity. That implies that the pressure of hydrogen gas is 1 bar (100 kPa) and the activity of hydrogen ions in the solution is unity. The activity of hydrogen ions is their effective concentration, which is equal to the formal concentration times the activity coefficient. These unit-less activity coefficients are close to 1.00 for very dilute aqueous solutions, but usually lower for more concentrated solutions.
As the general form of the Nernst equation at equilibrium is the following:

E = E° − (RT / nF) ln Qr
and as by definition in the case of the SHE, E° = 0,
The Nernst equation for the SHE becomes:

E = −(RT / 2F) ln( aH2 / (aH+)² )
Simply neglecting the pressure unit present in pH2, this last equation can often be directly written as:

E = −(RT / 2F) ln( pH2 / (aH+)² )
And by solving the numerical values for the term (RT ln 10) / (2F) at 25 °C, the practical formula commonly used in the calculations of this Nernst equation is:

E = −0.0296 log10( pH2 / (aH+)² ) = −0.0591 pH − 0.0296 log10 pH2 (unit: volt)
As pH2 = 1 bar under standard conditions, the equation simplifies to:

E = −0.0591 pH (unit: volt)
This last equation describes the straight line with a negative slope of −0.0591 volt per pH unit delimiting the lower stability region of water in a Pourbaix diagram, where gaseous hydrogen is evolving because of water decomposition.
where:
aH+ is the activity of the hydrogen ions (H+) in aqueous solution, with aH+ = γH+ · [H+] / C°, where:
γH+ is the activity coefficient of hydrogen ions (H+) in aqueous solution
[H+] is the molar concentration of hydrogen ions (H+) in aqueous solution
C° is the standard concentration (1 M) used to overcome the concentration unit
pH2 is the partial pressure of the hydrogen gas, in bar
R is the universal gas constant: 8.3145 J⋅K−1⋅mol−1 (rounded here to 4 decimals)
T is the absolute temperature, in kelvin (at 25 °C: 298.15 K)
F is the Faraday constant (the charge per mole of electrons), equal to 96,485 C⋅mol−1
P° is the standard pressure: 1 bar = 10⁵ Pa
Note: as the system is at chemical equilibrium, hydrogen gas, H2(g), is also in equilibrium with dissolved hydrogen, H2(aq), and the Nernst equation implicitly takes into account Henry's law for gas dissolution. Therefore, there is no need to independently consider the gas dissolution process in the system, as it is already de facto included.
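As a numerical check, the hydrogen-electrode potential can be sketched directly from the Nernst equation above; the function name and structure here are illustrative, not from any standard library:

```python
import math

R = 8.3145   # universal gas constant, J K^-1 mol^-1
F = 96485.0  # Faraday constant, C mol^-1

def she_potential(ph: float, p_h2: float = 1.0, t: float = 298.15) -> float:
    """E = -(RT / 2F) ln(p_H2 / a_H+^2) for the couple 2 H+ + 2 e- <=> H2."""
    a_h = 10.0 ** (-ph)  # activity of H+ recovered from the pH
    return -(R * t) / (2.0 * F) * math.log(p_h2 / a_h ** 2)
```

At pH 0 with 1 bar of H2 the potential is 0 V by definition, and each additional pH unit lowers it by about 0.0591 V, reproducing the slope of the lower water-stability line in a Pourbaix diagram.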
SHE vs NHE vs RHE
During the early development of electrochemistry, researchers used the normal hydrogen electrode as their standard for zero potential. This was convenient because it could actually be constructed by "[immersing] a platinum electrode into a solution of 1 N strong acid and [bubbling] hydrogen gas through the solution at about 1 atm pressure". However, this electrode/solution interface was later changed. What replaced it was a theoretical electrode/solution interface, where the concentration of H+ was 1 M, but the H+ ions were assumed to have no interaction with other ions (a condition not physically attainable at those concentrations). To differentiate this new standard from the previous one, it was given the name 'standard hydrogen electrode'.
Finally, there are also reversible hydrogen electrodes (RHEs), which are practical hydrogen electrodes whose potential depends on the pH of the solution.
In summary,
NHE (normal hydrogen electrode): potential of a platinum electrode in 1 M acid solution with 1 bar of hydrogen bubbled through
SHE (standard hydrogen electrode): potential of a platinum electrode in a theoretical ideal solution (the current standard for zero potential for all temperatures)
RHE (reversible hydrogen electrode): a practical hydrogen electrode whose potential depends on the pH of the solution
Choice of platinum
The choice of platinum for the hydrogen electrode is due to several factors:
inertness of platinum (it does not corrode)
the capability of platinum to catalyze the reaction of proton reduction
a high intrinsic exchange current density for proton reduction on platinum
excellent reproducibility of the potential (bias of less than 10 μV when two well-made hydrogen electrodes are compared with one another)
The surface of platinum is platinized (i.e., covered with a layer of fine powdered platinum also known as platinum black) to:
Increase total surface area. This improves reaction kinetics and maximum possible current
Use a surface material that adsorbs hydrogen well at its interface. This also improves reaction kinetics
Other metals can be used for fabricating electrodes with a similar function such as the palladium-hydrogen electrode. | Standard hydrogen electrode | Wikipedia | 436 | 578150 | https://en.wikipedia.org/wiki/Standard%20hydrogen%20electrode | Physical sciences | Electrochemistry | Chemistry |
Interference
Because of the high adsorption activity of the platinized platinum electrode, it is very important to protect the electrode surface and the solution from the presence of organic substances, as well as from atmospheric oxygen. Inorganic ions that can be reduced to a lower valency state at the electrode also have to be avoided. A number of organic substances are also reduced by hydrogen on a platinum surface, and these also have to be avoided.
Cations that can be reduced and deposited on the platinum can be a source of interference: silver, mercury, copper, lead, cadmium and thallium.
Substances that can inactivate ("poison") the catalytic sites include arsenic, sulfides and other sulfur compounds, colloidal substances, alkaloids, and material found in biological systems.
Isotopic effect
The standard redox potential of the deuterium couple is slightly different from that of the proton couple (ca. −0.0044 V vs SHE). Various values in this range have been obtained: −0.0061 V, −0.00431 V, −0.0074 V.
2 D_{(aq)}+ + 2 e- -> D2_{(g)}
A difference also occurs when hydrogen deuteride (HD, or deuterated hydrogen, DH) is used instead of hydrogen in the electrode.
Experimental setup
The scheme of the standard hydrogen electrode:
platinized platinum electrode
hydrogen gas
solution of the acid with activity of H+ = 1 mol dm−3
hydroseal for preventing oxygen interference
reservoir through which the second half-element of the galvanic cell should be attached. The connection can be direct, through a narrow tube to reduce mixing, or through a salt bridge, depending on the other electrode and solution. This creates an ionically conductive path to the working electrode of interest.
The bush dog (Speothos venaticus) is a canine found in Central and South America. In spite of its extensive range, it is very rare in most areas except in Suriname, Guyana and Peru; it was first described by Peter Wilhelm Lund from fossils in Brazilian caves and was believed to be extinct.
The bush dog is the only extant species in the genus Speothos, and genetic evidence suggests that its closest living relative is the maned wolf of central South America or the African wild dog. The species is listed as Near Threatened by the IUCN.
In Brazil, it is called ('vinegar dog') and ('bush dog'). In Spanish-speaking countries, it is called ('vinegar dog'), ('vinegar fox'), ('water dog'), and ('shrub or woodland dog').
Description
Adult bush dogs have soft long brownish-tan fur, with a lighter reddish tinge on the head, neck and back and a bushy tail, while the underside is dark, sometimes with a lighter throat patch. Younger individuals, however, have black fur over their entire bodies. Adults typically have a head-body length of , with a tail. They have a shoulder height of and weigh . They have short legs relative to their body, as well as a short snout and relatively small ears.
The teeth are adapted for its carnivorous habits. Uniquely for an American canid, the dental formula is for a total of 38 teeth. The bush dog is one of three canid species (the other two being the dhole and the African wild dog) with trenchant heel dentition, having a single cusp on the talonid of the lower carnassial tooth that increases the cutting blade length. Females have four pairs of teats and both sexes have large scent glands on either side of the anus. Bush dogs have partially webbed toes, which allow them to swim more efficiently.
Genetics
Speothos has a diploid chromosome number of 74, in contrast to the 78 chromosomes of most other canids, and so it is unable to produce fertile hybrids with other canids.
Distribution and habitat | Bush dog | Wikipedia | 427 | 578550 | https://en.wikipedia.org/wiki/Bush%20dog | Biology and health sciences | Canines | Animals |
Bush dogs are found from Costa Rica in Central America and through much of South America east of the Andes, as far south as central Bolivia, Paraguay, and southern Brazil. They primarily inhabit lowland forests up to elevation, wet savannas and other habitats near rivers, but may also be found in drier cerrado and open pasture. The historic range of this species may have extended as far north as Costa Rica where the species may still be found in suitable habitat. New, repeated observations of bush dog groups have been recorded in east-central (Barbilla National Park) and south-eastern (La Amistad International Park) Costa Rica, and a substantial portion of the Talamanca Mountains up to to the north-northwest and at elevations up to . Very recent fossils dating from 300 AD to 900 AD (the Late Ceramic Age) have been found in the Manzanilla site on the eastern coast of Trinidad.
There are three recognised subspecies:
The South American bush dog (Speothos venaticus venaticus), with a range including southern Colombia and Venezuela, the Guyanas, most of Brazil, eastern Ecuador and Peru, Bolivia, and northern Paraguay.
The Panamanian bush dog (Speothos venaticus panamensis), with a range including Panama, northern Colombia and Venezuela, western Ecuador.
The southern bush dog (Speothos venaticus wingei), with a range including southern Brazil and Paraguay, as well as extreme northeastern Argentina. The first camera trap photos of this species in Argentina were obtained in April 2016 from the Selva Paranaense Don Otto Ecological Private Reserve, located in Eldorado Department of the Misiones province of Argentina.
Behavior
Bush dogs are carnivores and hunt during the day. Their typical prey are pacas, agoutis, acouchis and capybaras, all large rodents. Although they can hunt alone, bush dogs are usually found in small packs. The dogs can bring down much larger prey, including peccaries and rheas, and a pack of six dogs has even been reported hunting a tapir, where they trailed the animal and nipped at its legs until it was felled. When hunting paca, part of the pack chases it on land and part wait for it in the water, where it often retreats.
Bush dogs appear to be the most gregarious South American canid species. They use hollow logs and cavities such as armadillo burrows for shelter. Packs consist of a single mated pair and their immediate relations, and have a home range of . Only the adult pair breed, while the other members of the pack are subordinate, and help with rearing and guarding any pups. Packmates keep in contact with frequent whines, perhaps because visibility is poor in the undergrowth where they typically hunt. While eating large prey, parents position themselves at either end of the animal, making it easier for the pups to disembowel it.
Reproduction
Bush dogs mate throughout the year; oestrus lasts up to twelve days and occurs every 15 to 44 days. Like many other canids, bush dog mating includes a copulatory tie, during which the animals are locked together. Urine-marking plays a significant role in their pre-copulatory behavior.
Gestation lasts from 65 to 83 days and normally results in the birth of a litter of three to six pups, although larger litters of up to 10 have been reported. The young are born blind and helpless and initially weigh . The eyes open after 14 to 19 days and the pups first emerge from the natal den shortly thereafter. The young are weaned at around four weeks and reach sexual maturity at one year. They can live for up to 10 years in captivity.
Conservation
Bush dogs are among the least-studied canines, and their conservation efforts are still in early stages. Due to their rarity, when bush dog bones were discovered in a cave in 1839, paleontologist Peter Wilhelm Lund mistakenly believed they were extinct. Living individuals were later found.
Research shows they are generalists capable of thriving in diverse habitats. However, conservation is challenging due to their dense habitats and sparse, scattered populations, making them difficult to locate. Bush dogs require large, undisturbed territories to support their pack-based lifestyle, and they are notably shy.
The International Union for Conservation of Nature (IUCN) lists bush dogs as Near Threatened due to a population decline of approximately 20-25% over the past 12 years. The main threats include habitat loss (particularly from deforestation for wood, cattle farming, and palm oil production), loss of prey due to human hunting, and diseases contracted from domestic dogs. Habitat loss, especially through Amazonian clear-cutting, is the most significant threat, while disease transmission from unvaccinated domestic dogs has also become a growing concern due to human encroachment.
Hunting bush dogs is illegal in most of their range, including countries like Colombia, Ecuador, Brazil, French Guiana, Paraguay, Peru, Bolivia, Panama, and Argentina. However, Guyana and Suriname lack explicit hunting bans for bush dogs, and many countries in the bush dog’s range have limited resources to enforce existing wildlife laws.
To better understand and protect bush dogs, scientists are experimenting with various monitoring methods. Traditional camera traps have proven ineffective due to bush dogs' elusive nature, so researchers are now using scent-detection dogs to locate bush dog burrows. This approach aims to provide valuable insights into their habitat use, prey preferences, and pack dynamics, including when cubs leave the pack. Protected areas such as the Yasuni Biosphere Reserve may support stable populations.
In a positive development, bush dogs were captured on camera traps in Costa Rica's Talamanca Mountains in 2020, suggesting they may be expanding their range northward and into higher elevations. This could indicate that with dedicated conservation efforts, bush dog numbers may stabilize or even increase.
The anterior cruciate ligament (ACL) is one of a pair of cruciate ligaments (the other being the posterior cruciate ligament) in the human knee. The two ligaments are also called "cruciform" ligaments, as they are arranged in a crossed formation. In the quadruped stifle joint (analogous to the knee), based on its anatomical position, it is also referred to as the cranial cruciate ligament. The term cruciate derives from the Latin word for cross; the name is fitting because the ACL crosses the posterior cruciate ligament to form an "X". It is composed of strong, fibrous material and assists in controlling excessive motion by limiting mobility of the joint. The anterior cruciate ligament is one of the four main ligaments of the knee, providing 85% of the restraining force to anterior tibial displacement at 30° and 90° of knee flexion. The ACL is the most frequently injured ligament in the knee.
Structure
The ACL originates from deep within the notch of the distal femur. Its proximal fibers fan out along the medial wall of the lateral femoral condyle. The two bundles of the ACL are the anteromedial and the posterolateral, named according to where the bundles insert into the tibial plateau. The tibial plateau is a critical weight-bearing region on the upper extremity of the tibia. The ACL attaches in front of the intercondyloid eminence of the tibia, where it blends with the anterior horn of the medial meniscus.
Purpose
The purpose of the ACL is to resist anterior tibial translation and internal tibial rotation, which is important for rotational stability. This function prevents anterior tibial subluxation of the lateral and medial tibiofemoral joints, which is important in the pivot-shift phenomenon. The ACL has mechanoreceptors that detect changes in direction of movement, position of the knee joint, acceleration, speed, and tension. A key factor in instability after ACL injuries is altered neuromuscular function secondary to diminished somatosensory information. For athletes who participate in sports involving cutting, jumping, and rapid deceleration, the knee must be stable in terminal extension, a function known as the screw-home mechanism.
Clinical significance
Injury
An ACL tear is one of the most common knee injuries, with over 100,000 tears occurring annually in the US. Most ACL tears result from a non-contact mechanism, such as a sudden change in direction that causes the knee to rotate inward. As the knee rotates inward, additional strain is placed on the ACL, since the femur and tibia, the two bones that articulate to form the knee joint, move in opposite directions, causing the ACL to tear. Most athletes require reconstructive surgery on the ACL, in which the torn or ruptured ACL is completely removed and replaced with a piece of tendon or ligament tissue from the patient (autograft) or from a donor (allograft). Conservative treatment has poor outcomes in ACL injury, since the ACL is unable to form a fibrous clot: it receives most of its nutrients from synovial fluid, which washes away the reparative cells, making the formation of fibrous tissue difficult. The two most common sources for tissue are the patellar ligament and the hamstrings tendon. The patellar ligament is often used, since bone plugs on each end of the graft are extracted, which helps integrate the graft into the bone tunnels during reconstruction. The surgery is arthroscopic, meaning that a tiny camera is inserted through a small surgical cut. The camera sends video to a large monitor so the surgeon can see any damage to the ligaments. In the event of an autograft, the surgeon makes a larger cut to get the needed tissue. In the event of an allograft, in which material is donated, this is not necessary, since no tissue is taken directly from the patient's own body. The surgeon drills holes forming the tibial and femoral bone tunnels, allowing the patient's new ACL graft to be guided through. Once the graft is pulled through the bone tunnels, two screws are placed, one in the tibial and one in the femoral bone tunnel.
Recovery time usually ranges between one and two years, sometimes longer, depending on whether the patient chose an autograft or allograft. A week or so after the injury, athletes are often misled by the fact that they are walking normally and feeling little pain.
This is dangerous: some athletes resume activities such as jogging, and a wrong move or twist could damage the bones, as the graft has not yet fully integrated into the bone tunnels. Injured athletes must understand the significance of each step of ACL recovery to avoid complications and ensure a proper recovery.
Nonoperative treatment of the ACL
ACL reconstruction is the most common treatment for an ACL tear, but it is not the only option. Some individuals may find it more beneficial to complete a nonoperative rehabilitation program. Candidates for the nonoperative route include both individuals who intend to continue physical activity involving cutting and pivoting and those who no longer participate in such activities. In comparisons of operative and nonoperative approaches to ACL tears, few differences were noted between surgical and nonsurgical groups, with no significant differences in patient-reported knee function or muscle strength.
The main goals during rehabilitation (rehab) of an ACL tear are to regain sufficient functional stability, maximize muscle strength, and decrease the risk of reinjury. Nonoperative treatment typically involves three phases: the acute phase, the neuromuscular training phase, and the return-to-sport phase. During the acute phase, rehab focuses on the acute symptoms that occur right after the injury and cause impairment. Therapeutic exercises and appropriate therapeutic modalities are crucial during this phase to address the impairments from the injury. The neuromuscular training phase focuses on the patient regaining full strength in both the lower extremity and the core muscles. This phase begins when the patient has regained full range of motion, has no effusion, and has adequate lower extremity strength. During this phase, the patient completes advanced balance, proprioception, cardiovascular conditioning, and neuromuscular interventions. In the final, return-to-sport phase, the patient focuses on sport-specific activities and agility. A functional performance brace may be used during this phase to assist with stability during pivoting and cutting activities.
Operative treatment of the ACL
Anterior cruciate ligament surgery is a complex operation that requires expertise in the field of orthopedic and sports medicine. Many factors should be considered when discussing surgery, including the athlete's level of competition, age, previous knee injury, other injuries sustained, leg alignment, and graft choice. Typically, four graft types are possible: the bone-patellar tendon-bone graft; the semitendinosus and gracilis tendons (quadrupled hamstring tendon); the quadriceps tendon; and an allograft. Although extensive research has been conducted on which grafts perform best, the surgeon typically chooses the type of graft with which he or she is most comfortable. If rehabilitated correctly, the reconstruction should last; 92.9% of patients report being happy with their graft choice.
Prehabilitation has become an integral part of the ACL reconstruction process. This means that the patient exercises before getting surgery to maintain factors such as range of motion and strength. Based on a single leg hop test and self-reported assessment, prehab improved function; these effects were sustained 12 weeks postoperatively.
Postsurgical rehabilitation is essential to recovery from the reconstruction. It typically takes a patient 6 to 12 months to return to life as it was before the injury. Rehab can be divided into protecting the graft, improving range of motion, decreasing swelling, and regaining muscle control. Each phase has different exercises based on the patient's needs. For example, while the ligament is healing, the joint should not bear full weight, but the patient should strengthen the quadriceps and hamstrings with quad sets and weight-shifting drills. Phase two requires full weight-bearing and correction of gait patterns, so exercises such as core strengthening and balance work are appropriate. In phase three, the patient begins running and can do aquatic workouts to reduce joint stress and build cardiorespiratory endurance. Phase four includes multiplanar movements, enhancing the running program and beginning agility and plyometric drills. Lastly, phase five focuses on sport- or life-specific motions, depending on the patient.
A 2010 Los Angeles Times review of two medical studies discussed whether ACL reconstruction was advisable. One study found that children under 14 who had ACL reconstruction fared better after early surgery than those who underwent a delayed surgery. For adults 18 to 35, though, patients who underwent early surgery followed by rehabilitation fared no better than those who had rehabilitative therapy and a later surgery.
The first report focused on children and the timing of an ACL reconstruction. ACL injuries in children are a challenge because children have open growth plates in the bottom of the femur or thigh bone and on the top of the tibia or shin. An ACL reconstruction typically crosses the growth plates, posing a theoretical risk of injury to the growth plate, stunting leg growth, or causing the leg to grow at an unusual angle.
The second study focused on adults. It found no statistically significant difference in performance and pain outcomes for patients who receive early ACL reconstruction vs. those who receive physical therapy with an option for later surgery. This would suggest that many patients without instability, buckling, or giving way after a course of rehabilitation can be managed nonoperatively, but the study was limited to outcomes after two years and did not involve patients who were serious athletes. Patients involved in sports requiring significant cutting, pivoting, twisting, or rapid acceleration or deceleration may not be able to participate in these activities without ACL reconstruction.
ACL injuries in women
Risk differences between outcomes in men and women can be attributed to a combination of multiple factors, including anatomical, hormonal, genetic, positional, neuromuscular, and environmental factors. The size of the anterior cruciate ligament is the most frequently reported difference. Studies examine the length, cross-sectional area, and volume of ACLs. Researchers use cadavers and in vivo measurements to study these factors, and most studies confirm that women have smaller anterior cruciate ligaments. Other factors that could contribute to higher risks of ACL tears in women include patient weight and height, the size and depth of the intercondylar notch, the diameter of the ACL, the magnitude of the tibial slope, the volume of the tibial spines, the convexity of the lateral tibiofemoral articular surfaces, and the concavity of the medial tibial plateau. While anatomical factors are the most discussed, extrinsic factors, including dynamic movement patterns, may be the most important risk factor for ACL injury.
Gallery
The agouti or common agouti is any of several rodent species of the genus Dasyprocta. They are native to Central America, northern and central South America, and the southern Lesser Antilles. Some species have also been introduced elsewhere in the West Indies. They are related to guinea pigs and look quite similar, but they are larger and have longer legs. The species vary considerably in colour, being brown, reddish, dull orange, greyish, or blackish, but typically with lighter underparts. Their bodies are covered with coarse hair, which is raised when alarmed, and they have short, hairless tails.
The related pacas were formerly included in genus Agouti, but these animals were reclassified in 1998 as genus Cuniculus.
The Spanish term is agutí; the animal goes by different local names in Mexico, Panama, and eastern Ecuador.
Etymology
The name "agouti" is derived from either Guarani or Tupi, both South American indigenous languages, in which the name is written variously as agutí, agoutí, acutí, akuti and akuri. The Portuguese term for these animals, cutia, is derived from this original naming.
Description
Agoutis have five toes on their front feet and three on their hind feet; the first toe is very small. The tail is very short or nonexistent and hairless. The molar teeth have cylindrical crowns, with several islands and a single lateral fold of enamel. Most species are brown on their backs and whitish or buff on their bellies; the fur may have a glossy appearance that glimmers orange. Reports differ as to whether they are diurnal or nocturnal animals.
Behaviour and habits
In the wild, they are shy animals and flee from humans, while in captivity they may become trusting. In Trinidad, they are renowned for being very fast runners, able to keep hunting dogs occupied with chasing them for hours.
Agoutis are found in forested and wooded areas in Central and South America. Their habitats include rainforests, savannas, and cultivated fields. They conceal themselves at night in hollow tree trunks or in burrows among roots. Active and graceful in their movements, their pace is either a kind of trot or a series of springs following one another so rapidly as to look like a gallop. They take readily to water, in which they swim well.
When feeding, agoutis sit on their hind legs and hold food between their forepaws. They may gather in groups of up to 100 to feed. They eat fallen fruit, leaves and roots, although they may sometimes climb trees to eat green fruit. They hoard food in small, buried stores. They sometimes eat the eggs of ground-nesting birds and even shellfish on the seashore. They may cause damage to sugarcane and banana plantations. They are regarded as one of the few species (along with macaws) that can open Brazil nuts without tools, mainly thanks to their strength and exceptionally sharp teeth. In southern Brazil, their main source of energy is the nut of Araucaria angustifolia.
Breeding
Agoutis give birth to litters of two to four young (pups) after a gestation period of three months. Some species have two litters a year in May and October, while others breed year round. The pups are born in burrows lined with leaves, roots and hair. They are well developed at birth and may be up and eating within an hour. Fathers are barred from the nest while the young are very small, but the parents pair bond for the rest of their lives. They can live for as long as 20 years, a remarkably long time for a rodent.
Species
Azara's agouti, Dasyprocta azarae
Coiban agouti, Dasyprocta coibae
Crested agouti, Dasyprocta cristata
Black agouti, Dasyprocta fuliginosa
Orinoco agouti, Dasyprocta guamara
Kalinowski's agouti, Dasyprocta kalinowskii
Red-rumped agouti, Dasyprocta leporina
Mexican agouti, Dasyprocta mexicana
Black-rumped agouti, Dasyprocta prymnolopha
Central American agouti, Dasyprocta punctata
Ruatan Island agouti, Dasyprocta ruatanica
Brown agouti, Dasyprocta variegata (previously lumped with D. punctata)
In classical mechanics, the gravitational potential is a scalar potential associating with each point in space the work (energy transferred) per unit mass that would be needed to move an object to that point from a fixed reference point in the conservative gravitational field. It is analogous to the electric potential with mass playing the role of charge. The reference point, where the potential is zero, is by convention infinitely far away from any mass, resulting in a negative potential at any finite distance. This similarity arises because both associated fields are conservative.
Mathematically, the gravitational potential is also known as the Newtonian potential and is fundamental in the study of potential theory. It may also be used for solving the electrostatic and magnetostatic fields generated by uniformly charged or polarized ellipsoidal bodies.
Potential energy
The gravitational potential (V) at a location is the gravitational potential energy (U) at that location per unit mass:
V = U/m,
where m is the mass of the object. Potential energy is equal (in magnitude, but negative) to the work done by the gravitational field moving a body to its given position in space from infinity. If the body has a mass of 1 kilogram, then the potential energy to be assigned to that body is equal to the gravitational potential. So the potential can be interpreted as the negative of the work done by the gravitational field moving a unit mass in from infinity.
In some situations, the equations can be simplified by assuming a field that is nearly independent of position. For instance, in a region close to the surface of the Earth, the gravitational acceleration, g, can be considered constant. In that case, the difference in potential energy from one height to another is, to a good approximation, linearly related to the difference in height Δh:
ΔU = m g Δh.
Mathematical form
The gravitational potential V at a distance x from a point mass of mass M can be defined as the work W that needs to be done by an external agent to bring a unit mass in from infinity to that point:
V(x) = W/m = (1/m) ∫ from ∞ to x of F · dx = −GM/x,
where G is the gravitational constant, and F is the gravitational force. The product GM is the standard gravitational parameter and is often known to higher precision than G or M separately. The potential has units of energy per mass, e.g., J/kg in the MKS system. By convention, it is always negative where it is defined, and as x tends to infinity, it approaches zero.
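The point-mass formula V(x) = −GM/x can be illustrated with a short numeric sketch. This is a minimal example, not part of the original text; the variable names and the use of Earth's standard gravitational parameter GM (which, as noted above, is known more precisely than G or M separately) are illustrative.

```python
G = 6.674e-11              # gravitational constant, m^3 kg^-1 s^-2
GM_EARTH = 3.986004418e14  # Earth's standard gravitational parameter, m^3 s^-2

def potential(GM, x):
    """Gravitational potential V = -GM/x (J/kg) at distance x (m) from a point mass."""
    return -GM / x

# Potential at Earth's mean radius: negative, roughly -6.26e7 J/kg.
R_EARTH = 6.371e6  # m
V_surface = potential(GM_EARTH, R_EARTH)

# As x tends to infinity, the potential approaches zero from below.
V_far = potential(GM_EARTH, 1e15)
```

The two evaluations make the sign convention concrete: the potential is negative everywhere it is defined and shrinks in magnitude with distance.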
The gravitational field, and thus the acceleration of a small body in the space around the massive object, is the negative gradient of the gravitational potential. Thus the negative of a negative gradient yields positive acceleration toward a massive object. Because the potential has no angular components, its gradient gives
a = −∇V = −(GM/x³) x = −(GM/x²) x̂,
where x is a vector of length x pointing from the point mass toward the small body and x̂ = x/x is a unit vector pointing from the point mass toward the small body. The magnitude of the acceleration therefore follows an inverse square law:
‖a‖ = GM/x².
The potential associated with a mass distribution is the superposition of the potentials of point masses. If the mass distribution is a finite collection of point masses, and if the point masses are located at the points x1, ..., xn and have masses m1, ..., mn, then the potential of the distribution at the point x is
V(x) = −G Σ from i = 1 to n of m_i/‖x − x_i‖.
If the mass distribution is given as a mass measure dm on three-dimensional Euclidean space R3, then the potential is the convolution of −G/‖r‖ with dm. In good cases this equals the integral
V(x) = −∫ over R3 of G/‖x − r‖ dm(r),
where ‖x − r‖ is the distance between the points x and r. If there is a function ρ(r) representing the density of the distribution at r, so that dm(r) = ρ(r) dv(r), where dv(r) is the Euclidean volume element, then the gravitational potential is the volume integral
V(x) = −∫ over R3 of G ρ(r)/‖x − r‖ dv(r).
If V is a potential function coming from a continuous mass distribution ρ(r), then ρ can be recovered using the Laplace operator Δ:
ρ(x) = (1/(4πG)) ΔV(x).
This holds pointwise whenever ρ is continuous and is zero outside of a bounded set. In general, the mass measure dm can be recovered in the same way if the Laplace operator is taken in the sense of distributions. As a consequence, the gravitational potential satisfies Poisson's equation.
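The superposition of point-mass potentials, V(x) = −G Σ m_i/‖x − x_i‖, can be sketched in a few lines of Python. This is an illustrative example; the function and variable names are not from the original text.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def potential(masses, positions, x):
    """Superposition: V(x) = -G * sum_i m_i / |x - x_i| over a finite
    collection of point masses at positions x_i (3-vectors, metres)."""
    return -sum(G * m / math.dist(x, xi) for m, xi in zip(masses, positions))

# Two equal 1000 kg point masses straddling the origin at unit distance:
# the potential at the midpoint is twice that of a single mass.
masses = [1.0e3, 1.0e3]
positions = [(-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
V_mid = potential(masses, positions, (0.0, 0.0, 0.0))
```

By symmetry, the potential takes equal values at points mirrored across the axis joining the two masses, which is a quick sanity check on the sum.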
β-Carotene (beta-carotene) is an organic, strongly colored red-orange pigment abundant in fungi, plants, and fruits. It is a member of the carotenes, which are terpenoids (isoprenoids), synthesized biochemically from eight isoprene units and thus having 40 carbons.
Dietary β-carotene is a provitamin A compound, converting in the body to retinol (vitamin A). It is present at high levels in foods such as carrots, pumpkin, spinach, and sweet potato. It is used as a dietary supplement and may be prescribed to treat erythropoietic protoporphyria, an inherited condition of sunlight sensitivity.
β-carotene is the most common carotenoid in plants. When used as a food coloring, it has the E number E160a. The structure was deduced in 1930.
Isolation of β-carotene from fruits abundant in carotenoids is commonly done using column chromatography. It is industrially extracted from richer sources such as the algae Dunaliella salina. The separation of β-carotene from the mixture of other carotenoids is based on polarity: β-carotene is a non-polar compound, so it is separated with a non-polar solvent such as hexane. Being highly conjugated, it is deeply colored, and as a hydrocarbon lacking functional groups, it is lipophilic.
Provitamin A activity
Plant carotenoids are the primary dietary source of provitamin A worldwide, with β-carotene as the best-known provitamin A carotenoid. Others include α-carotene and β-cryptoxanthin. Carotenoid absorption is restricted to the duodenum of the small intestine. One molecule of β-carotene can be cleaved by the intestinal enzyme β,β-carotene 15,15'-monooxygenase into two molecules of vitamin A.
Absorption, metabolism and excretion
As part of the digestive process, food-sourced carotenoids must be separated from plant cells and incorporated into lipid-containing micelles to be bioaccessible to intestinal enterocytes. β-Carotene that has already been extracted (or synthesized) and is presented in an oil-filled dietary supplement capsule has greater bioavailability than β-carotene from foods.
At the enterocyte cell wall, β-carotene is taken up by the membrane transporter protein scavenger receptor class B, type 1 (SCARB1). Absorbed β-carotene is then either incorporated as such into chylomicrons or first converted to retinal and then retinol, bound to retinol binding protein 2, before being incorporated into chylomicrons. The conversion process consists of one molecule of β-carotene cleaved by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene, into two molecules of retinal. When plasma retinol is in the normal range the gene expression for SCARB1 and BCO1 are suppressed, creating a feedback loop that suppresses β-carotene absorption and conversion.
The majority of chylomicrons are taken up by the liver, then secreted into the blood repackaged into low density lipoproteins (LDLs). From these circulating lipoproteins and the chylomicrons that bypassed the liver, β-carotene is taken into cells via receptor SCARB1. Human tissues differ in expression of SCARB1, and hence β-carotene content. Examples expressed as ng/g, wet weight: liver=479, lung=226, prostate=163 and skin=26.
Once taken up by peripheral tissue cells, the major usage of absorbed β-carotene is as a precursor to retinal via symmetric cleavage by the enzyme beta-carotene 15,15'-dioxygenase, which is encoded by the BCO1 gene. A lesser amount is metabolized by the mitochondrial enzyme beta-carotene 9',10'-dioxygenase, which is encoded by the BCO2 gene. The products of this asymmetric cleavage are two beta-ionone molecules and rosafluene. BCO2 appears to be involved in preventing excessive accumulation of carotenoids; a BCO2 defect in chickens results in yellow skin color due to accumulation in subcutaneous fat.
Conversion factors
For quantifying dietary vitamin A intake, β-carotene may be converted using either the newer retinol activity equivalents (RAE) or the older international unit (IU).
Retinol activity equivalents (RAEs)
Since 2001, the US Institute of Medicine uses retinol activity equivalents (RAE) for their Dietary Reference Intakes, defined as follows:
1 μg RAE = 1 μg retinol from food or supplements
1 μg RAE = 2 μg all-trans-β-carotene from supplements
1 μg RAE = 12 μg of all-trans-β-carotene from food
1 μg RAE = 24 μg α-carotene or β-cryptoxanthin from food
RAE accounts for carotenoids' variable absorption and conversion to vitamin A in humans better than the older retinol equivalent (RE), which it replaces (1 μg RE = 1 μg retinol, 6 μg β-carotene, or 12 μg α-carotene or β-cryptoxanthin). RE was developed in 1967 by the United Nations Food and Agriculture Organization and World Health Organization (FAO/WHO).
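The RAE factors listed above can be collected into a small lookup table. The sketch below is illustrative, assuming the Institute of Medicine factors given in the list; the dictionary key names are my own.

```python
# Micrograms RAE per microgram of each provitamin A source,
# per the Institute of Medicine factors listed above.
UG_RAE_PER_UG = {
    "retinol": 1.0,
    "beta_carotene_supplement": 1.0 / 2.0,
    "beta_carotene_food": 1.0 / 12.0,
    "alpha_carotene_food": 1.0 / 24.0,
    "beta_cryptoxanthin_food": 1.0 / 24.0,
}

def to_rae(micrograms, source):
    """Convert micrograms of a provitamin A source to micrograms RAE."""
    return micrograms * UG_RAE_PER_UG[source]
```

For example, 12 μg of all-trans-β-carotene from food yields 1 μg RAE, while only 2 μg is needed when the compound comes from a supplement.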
International Units
Another older unit of vitamin A activity is the international unit (IU). Like the retinol equivalent, the international unit does not account for humans' variable absorption and conversion of carotenoids to vitamin A as well as the more modern retinol activity equivalent does. Food and supplement labels still generally use IU, but IU can be converted to the more useful retinol activity equivalent as follows:
1 μg RAE = 3.33 IU retinol
1 IU retinol = 0.3 μg RAE
1 IU β-carotene from supplements = 0.3 μg RAE
1 IU β-carotene from food = 0.05 μg RAE
1 IU α-carotene or β-cryptoxanthin from food = 0.025 μg RAE
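The IU-to-RAE factors above can likewise be expressed as a small conversion table. This is an illustrative sketch assuming the factors listed; the key names are my own.

```python
# Micrograms RAE per international unit (IU), per the list above.
UG_RAE_PER_IU = {
    "retinol": 0.3,
    "beta_carotene_supplement": 0.3,
    "beta_carotene_food": 0.05,
    "alpha_carotene_food": 0.025,
    "beta_cryptoxanthin_food": 0.025,
}

def iu_to_rae(iu, source):
    """Convert international units of a vitamin A source to micrograms RAE."""
    return iu * UG_RAE_PER_IU[source]
```

Note the round trip: 3.33 IU of retinol converts back to roughly 1 μg RAE, matching the first factor in the list.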
Dietary sources
The average daily intake of β-carotene is in the range 2–7 mg, as estimated from a pooled analysis of 500,000 women living in the US, Canada, and some European countries. Beta-carotene is found in many foods and is sold as a dietary supplement. β-Carotene contributes to the orange color of many different fruits and vegetables. Vietnamese gac (Momordica cochinchinensis Spreng.) and crude palm oil are particularly rich sources, as are yellow and orange fruits, such as cantaloupe, mangoes, pumpkin, and papayas, and orange root vegetables such as carrots and sweet potatoes.
The color of β-carotene is masked by chlorophyll in green leaf vegetables such as spinach, kale, sweet potato leaves, and sweet gourd leaves.
The U.S. Department of Agriculture lists foods high in β-carotene content.
No dietary requirement
Government and non-government organizations have not set a dietary requirement for β-carotene.
Side effects
Excess β-carotene is predominantly stored in the fat tissues of the body. The most common side effect of excessive β-carotene consumption is carotenodermia, a physically harmless condition that presents as a conspicuous orange skin tint arising from deposition of the carotenoid in the outermost layer of the epidermis.
Carotenosis
Carotenoderma, also referred to as carotenemia, is a benign and reversible medical condition in which an excess of dietary carotenoids results in orange discoloration of the outermost skin layer. It is associated with a high blood β-carotene value. This can occur after a month or two of consumption of beta-carotene-rich foods, such as carrots, carrot juice, tangerine juice, mangos, or, in Africa, red palm oil. β-Carotene dietary supplements can have the same effect. The discoloration extends to the palms and soles of the feet, but not to the whites of the eyes, which helps distinguish the condition from jaundice. Carotenodermia is reversible upon cessation of excessive intake. Consumption of greater than 30 mg/day for a prolonged period has been confirmed as leading to carotenemia.
No risk for hypervitaminosis A
As described above under Absorption, metabolism and excretion, when plasma retinol is in the normal range, gene expression of the transporter SCARB1 and the cleavage enzyme gene BCO1 is suppressed, creating a feedback loop that limits both β-carotene absorption and its conversion to retinal and retinol. Because of these two mechanisms, high β-carotene intake will not lead to hypervitaminosis A.
Drug interactions
β-Carotene can interact with medication used for lowering cholesterol. Taking them together can lower the effectiveness of these medications and is considered only a moderate interaction. Bile acid sequestrants and proton-pump inhibitors can decrease absorption of β-carotene. Consuming alcohol with β-carotene can decrease its ability to convert to retinol and could possibly result in hepatotoxicity.
β-Carotene and lung cancer in smokers
Chronic high-dose β-carotene supplementation increases the probability of lung cancer in smokers, while its natural vitamer, retinol, increases lung cancer in smokers and nonsmokers alike. The effect is specific to the supplementation dose: no lung damage has been detected in those who are exposed to cigarette smoke and ingest a physiological dose of β-carotene (6 mg), in contrast to the high pharmacological dose (30 mg).
Increases in lung cancer have been attributed to the tendency of β-carotene to oxidize, yet based on the pharmacokinetics of β-carotene absorption and transport through the intestine and the lack of specific β-carotene transporters, it is unlikely that β-carotene reaches the lungs of smokers in sufficient quantities. Additional research is required to understand the link between β-carotene supplementation and the increased risk of cancer and all-cause mortality.
Additionally, supplemental, high-dose β-carotene may increase the risk of prostate cancer, intracerebral hemorrhage, and cardiovascular and total mortality irrespective of smoking status.
Industrial sources
β-Carotene is made industrially either by total synthesis or by extraction from biological sources such as vegetables, microalgae (especially Dunaliella salina), and genetically engineered microbes. The synthetic path is low-cost and high-yield.
Research
Medical authorities generally recommend obtaining beta-carotene from food rather than dietary supplements. A 2013 meta-analysis of randomized controlled trials concluded that high-dosage (≥9.6 mg/day) beta-carotene supplementation is associated with a 6% increase in the risk of all-cause mortality, while low-dosage (<9.6 mg/day) supplementation does not have a significant effect on mortality. Research is insufficient to determine whether a minimum level of beta-carotene consumption is necessary for human health and to identify what problems might arise from insufficient beta-carotene intake. However, a 2018 meta-analysis mostly of prospective cohort studies found that both dietary and circulating beta-carotene are associated with a lower risk of all-cause mortality. The highest circulating beta-carotene category, compared to the lowest, correlated with a 37% reduction in the risk of all-cause mortality, while the highest dietary beta-carotene intake category, compared to the lowest, was linked to an 18% decrease in the risk of all-cause mortality.
Macular degeneration
Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in elderly people. AMD is an oxidative-stress-related retinal disease that affects the macula, causing progressive loss of central vision. β-Carotene has been confirmed to be present in the human retinal pigment epithelium. Reviews reported mixed results for observational studies, with some reporting that diets higher in β-carotene correlated with a decreased risk of AMD, whereas others reported no benefit. Reviews of intervention trials using only β-carotene reported no change in the risk of developing AMD.
Cancer
A meta-analysis concluded that supplementation with β-carotene does not appear to decrease the risk of cancer overall, nor of specific cancers, including pancreatic, colorectal, prostate, breast, melanoma, or skin cancer generally. High levels of β-carotene may increase the risk of lung cancer in current and former smokers. Results are not clear for thyroid cancer.
Cataract
A Cochrane review looked at supplementation of β-carotene, vitamin C, and vitamin E, independently and combined, to examine differences in the risk of cataract, cataract extraction, progression of cataract, and slowing of the loss of visual acuity. These studies found no evidence of any protective effect of β-carotene supplementation in preventing or slowing age-related cataract. A second meta-analysis compiled data from studies that measured diet-derived serum beta-carotene and reported a statistically non-significant 10% decrease in cataract risk.
Erythropoietic protoporphyria
High doses of β-carotene (up to 180 mg per day) may be used as a treatment for erythropoietic protoporphyria, a rare inherited disorder of sunlight sensitivity, without toxic effects.
Food drying
Foods rich in carotenoid dyes show discoloration upon drying. This is due to thermal degradation of carotenoids, possibly via isomerization and oxidation reactions.
Land reclamation, often known as reclamation, and also known as land fill (not to be confused with a waste landfill), is the process of creating new land from oceans, seas, riverbeds or lake beds. The land reclaimed is known as reclamation ground, reclaimed land, or land fill.
History
In Ancient Egypt, the rulers of the Twelfth Dynasty (c. 2000–1800 BC) undertook a far-sighted land reclamation scheme to increase agricultural output. They constructed levees and canals to connect the Faiyum with the Bahr Yussef waterway, diverting water that would have flowed into Lake Moeris and causing gradual evaporation around the lake's edges, creating new farmland from the reclaimed land. A similar land reclamation system using dams and drainage canals was used in the Greek Copaic Basin during the Middle Helladic Period (c. 1900–1600 BC). Another early large-scale project was the Beemster Polder in the Netherlands, realized in 1612, which added new land. In Hong Kong the Praya Reclamation Scheme added new land in 1890 during the second phase of construction. It was one of the most ambitious projects ever undertaken during the colonial Hong Kong era. Some 20% of land in the Tokyo Bay area has been reclaimed, most notably Odaiba artificial island. The city of Rio de Janeiro was largely built on reclaimed land, as was Wellington, New Zealand.
Methods
Land reclamation can be achieved by a number of different methods. The simplest involves filling the area with large amounts of heavy rock and/or cement, then filling with clay and dirt until the desired height is reached. This process is called "infilling", and the material used to fill the space is generally called "infill". Draining of submerged wetlands is often used to reclaim land for agricultural use. Deep cement mixing is typically used where the material displaced by dredging or draining may be contaminated and hence needs to be contained. Dredging is another method of land reclamation: the removal of sediments and debris from the bottom of a body of water. It is commonly used for maintaining reclaimed land masses, as sedimentation, a natural process, fills channels and harbors.
Notable instances
Africa
The Hassan II Mosque is built on reclaimed land.
The Eko Atlantic in Lagos.
Gracefield Island in Lekki, Lagos.
The Foreshore in Cape Town.
Stone Town in Zanzibar.
Asia
Parts of the coastlines of Mainland China, Hong Kong, North Korea and South Korea. It is estimated that nearly 65% of tidal flats around the Yellow Sea have been reclaimed.
The north of Bahrain.
Inland lowlands in the Yangtze valley, China, including the areas of important cities like Wuhan.
Nanhui New City in Shanghai.
Haikou Bay, Hainan Province, China, where the west side of Haidian Island is being extended, and off the coast of Haikou, where new land for a marina is being created.
The Cotai area of Macau, where many casinos are located.
Parts of Shekou in Shenzhen, Guangdong province.
Much of the coastline of Mumbai, India. It took over 150 years to join the original Seven Islands of Bombay. These seven islands were lush, green, thickly wooded, and dotted with 22 hills, with the Arabian Sea washing through them at high tide. The original Isle of Bombay stretched from Dongri to Malabar Hill (at its broadest point); the other six were Colaba, Old Woman's Island, Mahim, Parel, Worli and Mazgaon.
A data center is a building, a dedicated space within a building, or a group of buildings used to house computer systems and associated components, such as telecommunications and storage systems.
Since IT operations are crucial for business continuity, a data center generally includes redundant or backup components and infrastructure for power supply, data communication connections, environmental controls (e.g., air conditioning, fire suppression), and various security devices. A large data center is an industrial-scale operation using as much electricity as a medium-sized town. Estimated global data center electricity consumption in 2022 was 240–340 TWh, or roughly 1–1.3% of global electricity demand. This excludes energy used for cryptocurrency mining, which was estimated to be around 110 TWh in 2022, or another 0.4% of global electricity demand. The IEA projects that data center electricity use could double between 2022 and 2026. High demand for electricity from data centers, including by cryptomining and artificial intelligence, has also increased strain on local electric grids and increased electricity prices in some markets.
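The quoted TWh figures and percentage shares can be cross-checked against each other; the implied global-demand figure below is back-calculated from the article's percentages, not stated in the text:

```python
# Sanity-check the quoted shares: 240-340 TWh given as 1-1.3% of global demand.
# Implied global electricity demand is derived here, not stated in the text.
low_twh, high_twh = 240, 340
low_share, high_share = 0.01, 0.013

implied_low = low_twh / low_share      # 24,000 TWh
implied_high = high_twh / high_share   # ~26,154 TWh
print(f"implied global demand: {implied_low:,.0f}-{implied_high:,.0f} TWh")

# The crypto figure (110 TWh as ~0.4%) implies ~27,500 TWh, the same order,
# so the two percentage claims are mutually consistent.
crypto_implied = 110 / 0.004
print(f"crypto figure implies: {crypto_implied:,.0f} TWh")
```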
Data centers can vary widely in terms of size, power requirements, redundancy, and overall structure. Four common categories used to segment types of data centers are onsite data centers, colocation facilities, hyperscale data centers, and edge data centers.
History
Data centers have their roots in the huge computer rooms of the 1940s, typified by ENIAC, one of the earliest examples of a data center. Early computer systems, complex to operate and maintain, required a special environment in which to operate. Many cables were necessary to connect all the components, and methods to accommodate and organize these were devised such as standard racks to mount equipment, raised floors, and cable trays (installed overhead or under the elevated floor). A single mainframe required a great deal of power and had to be cooled to avoid overheating. Security became important – computers were expensive, and were often used for military purposes. Basic design guidelines for controlling access to the computer room were therefore devised.
During the microcomputer industry boom of the 1980s, users started to deploy computers everywhere, in many cases with little or no care about operating requirements. However, as information technology (IT) operations started to grow in complexity, organizations grew aware of the need to control IT resources. The availability of inexpensive networking equipment, coupled with new standards for the network structured cabling, made it possible to use a hierarchical design that put the servers in a specific room inside the company. The use of the term data center, as applied to specially designed computer rooms, started to gain popular recognition about this time.
A boom of data centers came during the dot-com bubble of 1997–2000. Companies needed fast Internet connectivity and non-stop operation to deploy systems and to establish a presence on the Internet. Installing such equipment was not viable for many smaller companies. Many companies started building very large facilities, called internet data centers (IDCs), which provide enhanced capabilities, such as crossover backup: "If a Bell Atlantic line is cut, we can transfer them to ... to minimize the time of outage."
The term cloud data center (CDC) has also been used. Increasingly, the distinction between these terms has almost disappeared, and they are being subsumed into the term data center.
The global data center market saw steady growth in the 2010s, with a notable acceleration in the latter half of the decade. According to Gartner, worldwide data center infrastructure spending reached $200 billion in 2021, representing a 6% increase from 2020 despite the economic challenges posed by the COVID-19 pandemic.
The latter part of the 2010s and early 2020s saw a significant shift towards AI and machine learning applications, generating a global boom for more powerful and efficient data center infrastructure. As of March 2021, global data creation was projected to grow to more than 180 zettabytes by 2025, up from 64.2 zettabytes in 2020.
The United States is currently the foremost leader in data center infrastructure, hosting 5,381 data centers as of March 2024, the highest number of any country worldwide. According to global consultancy McKinsey & Co., U.S. market demand is expected to double to 35 gigawatts (GW) by 2030, up from 17 GW in 2022. As of 2023, the U.S. accounts for roughly 40 percent of the global market.
A study published by the Electric Power Research Institute (EPRI) in May 2024 estimates U.S. data center power consumption could range from 4.6% to 9.1% of the country’s generation by 2030. As of 2023, about 80% of U.S. data center load was concentrated in 15 states, led by Virginia and Texas.
Requirements for modern data centers
Modernization and data center transformation enhance performance and energy efficiency.
Information security is also a concern, and for this reason, a data center has to offer a secure environment that minimizes the chances of a security breach. A data center must, therefore, keep high standards for assuring the integrity and functionality of its hosted computer environment.
Industry research company International Data Corporation (IDC) puts the average age of a data center at nine years old. Gartner, another research company, says data centers older than seven years are obsolete. The growth in data (163 zettabytes by 2025) is one factor driving the need for data centers to modernize.
Focus on modernization is not new: concern about obsolete equipment was decried in 2007, and in 2011 Uptime Institute was concerned about the age of the equipment therein. By 2018 concern had shifted once again, this time to the age of the staff: "data center staff are aging faster than the equipment."
Meeting standards for data centers
The Telecommunications Industry Association's Telecommunications Infrastructure Standard for Data Centers specifies the minimum requirements for telecommunications infrastructure of data centers and computer rooms including single tenant enterprise data centers and multi-tenant Internet hosting data centers. The topology proposed in this document is intended to be applicable to any size data center.
Telcordia GR-3160, NEBS Requirements for Telecommunications Data Center Equipment and Spaces, provides guidelines for data center spaces within telecommunications networks, and environmental requirements for the equipment intended for installation in those spaces. These criteria were developed jointly by Telcordia and industry representatives. They may be applied to data center spaces housing data processing or Information Technology (IT) equipment. The equipment may be used to:
Operate and manage a carrier's telecommunication network
Provide data center based applications directly to the carrier's customers
Provide hosted applications for a third party to provide services to their customers
Provide a combination of these and similar data center applications
Data center transformation
Data center transformation takes a step-by-step approach through integrated projects carried out over time. This differs from a traditional method of data center upgrades that takes a serial and siloed approach. The typical projects within a data center transformation initiative include standardization/consolidation, virtualization, automation and security.
Standardization/consolidation: Reducing the number of data centers and avoiding server sprawl (both physical and virtual) often includes replacing aging data center equipment, and is aided by standardization.
Virtualization: Lowers capital and operational expenses and reduces energy consumption. Virtualized desktops can be hosted in data centers and rented out on a subscription basis. Investment bank Lazard Capital Markets estimated in 2008 that 48 percent of enterprise operations would be virtualized by 2012. Gartner views virtualization as a catalyst for modernization.
Automating: Automating tasks such as provisioning, configuration, patching, release management, and compliance is needed, not least because fewer skilled IT workers are available.
Securing: Protection of virtual systems is integrated with the existing security of physical infrastructures.
Raised floor
A raised floor standards guide named GR-2930 was developed by Telcordia Technologies, a subsidiary of Ericsson.
Although the first raised-floor computer room was built by IBM in 1956, and raised floors have "been around since the 1960s", it was in the 1970s that it became common for computer centers to use them to let cool air circulate more efficiently.
The first purpose of the raised floor was to allow access for wiring.
Lights out
The lights-out data center, also known as a darkened or a dark data center, is a data center that, ideally, has all but eliminated the need for direct access by personnel, except under extraordinary circumstances. Because of the lack of need for staff to enter the data center, it can be operated without lighting. All of the devices are accessed and managed by remote systems, with automation programs used to perform unattended operations. In addition to the energy savings, reduction in staffing costs and the ability to locate the site further from population centers, implementing a lights-out data center reduces the threat of malicious attacks upon the infrastructure.
Noise levels
Generally speaking, local authorities prefer noise levels at data centers to be "10 dB below the existing night-time background noise level at the nearest residence."
OSHA regulations require monitoring of noise levels inside data centers if noise exceeds 85 decibels. The average noise level in server areas of a data center may reach as high as 92–96 dB(A).
Residents living near data centers have described the sound as "a high-pitched whirring noise 24/7", saying “It’s like being on a tarmac with an airplane engine running constantly ... Except that the airplane keeps idling and never leaves.”
External sources of noise include HVAC equipment and energy generators.
Data center design
The field of data center design has been growing for decades in various directions, including new construction big and small along with the creative re-use of existing facilities, like abandoned retail space, old salt mines and war-era bunkers.
A 65-story data center has already been proposed, and the number of data centers as of 2016 had grown beyond 3 million USA-wide, with more than triple that number worldwide.
Local building codes may govern the minimum ceiling heights and other parameters. Some of the considerations in the design of data centers are:
Size - one room of a building, one or more floors, or an entire building.
Capacity - can hold up to or past 1,000 servers
Other considerations - Space, power, cooling, and costs in the data center.
Mechanical engineering infrastructure - heating, ventilation and air conditioning (HVAC); humidification and dehumidification equipment; pressurization.
Electrical engineering infrastructure design - utility service planning; distribution, switching and bypass from power sources; uninterruptible power source (UPS) systems; and more.
Design criteria and trade-offs
Availability expectations: The costs of avoiding downtime should not exceed the cost of the downtime itself
Site selection: Location factors include proximity to power grids, telecommunications infrastructure, networking services, transportation lines and emergency services. Other considerations should include flight paths, neighboring power drains, geological risks, and climate (associated with cooling costs).
Often, power availability is the hardest to change.
High availability
Various metrics exist for measuring the data availability that results from data-center availability beyond 95% uptime, with the top of the scale counting how many nines can be placed after 99%.
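The "nines" shorthand maps directly to permitted downtime per year. A minimal sketch, assuming a 365.25-day year:

```python
# Allowed annual downtime for common availability levels ("counting nines").
HOURS_PER_YEAR = 365.25 * 24

def downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted at a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Each additional nine cuts the allowed downtime by a factor of ten.
for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% uptime -> {downtime_hours(pct):8.2f} h/year")
```

For example, "three nines" (99.9%) allows under nine hours of downtime per year, while "five nines" allows only about five minutes.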
Modularity and flexibility
Modularity and flexibility are key elements in allowing for a data center to grow and change over time. Data center modules are pre-engineered, standardized building blocks that can be easily configured and moved as needed.
A modular data center may consist of data center equipment contained within shipping containers or similar portable containers. Components of the data center can be prefabricated and standardized which facilitates moving if needed.
Environmental control
Temperature and humidity are controlled via:
Air conditioning
indirect cooling, such as using outside air, Indirect Evaporative Cooling (IDEC) units, and also using sea water.
It is important that computers are not exposed to high humidity and do not overheat. High humidity can lead to dust clogging the fans, which causes overheating, or can cause components to malfunction, ruining the board and creating a fire hazard. Overheating can cause components, usually the silicon or copper of the wires or circuits, to melt, loosening connections and creating fire hazards.
Electrical power
Backup power consists of one or more uninterruptible power supplies, battery banks, and/or diesel / gas turbine generators.
To prevent single points of failure, all elements of the electrical systems, including backup systems, are typically given redundant copies, and critical servers are connected to both the A-side and B-side power feeds. This arrangement is often made to achieve N+1 redundancy in the systems. Static transfer switches are sometimes used to ensure instantaneous switchover from one supply to the other in the event of a power failure.
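The benefit of dual A-side/B-side feeds can be sketched with the standard parallel-availability formula, assuming independent failures; the 99.9% per-feed availability below is an illustrative figure, not one from the text:

```python
# Illustrative effect of redundant A/B power feeds, assuming independent
# failures. The 99.9% per-feed availability is an example value only.
def parallel_availability(*feed_availabilities: float) -> float:
    """Availability of a load that stays up if any one independent feed is up."""
    unavailability = 1.0
    for a in feed_availabilities:
        unavailability *= (1.0 - a)  # all feeds must fail simultaneously
    return 1.0 - unavailability

single = 0.999                               # one feed: 0.1% downtime
dual = parallel_availability(0.999, 0.999)   # A-side + B-side
print(f"single feed: {single:.6f}, dual feed: {dual:.6f}")
```

Under these assumptions, dual feeds turn three nines per feed into six nines for the load, which is why critical servers are cabled to both sides.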
Low-voltage cable routing
Options include:
Data cabling can be routed through overhead cable trays
Raised floor cabling, both for security reasons and to avoid the extra cost of cooling systems over the racks.
Smaller/less expensive data centers may use anti-static tiles instead for a flooring surface.
Air flow
Air flow management addresses the need to improve data center computer cooling efficiency by preventing the recirculation of hot air exhausted from IT equipment and reducing bypass airflow. There are several methods of separating hot and cold airstreams, such as hot/cold aisle containment and in-row cooling units.
Aisle containment
Cold aisle containment is done by exposing the rear of equipment racks, while the fronts of the servers are enclosed with doors and covers. This is similar to how large-scale food companies refrigerate and store their products.
Computer cabinets/Server farms are often organized for containment of hot/cold aisles. Proper air duct placement prevents the cold and hot air from mixing. Rows of cabinets are paired to face each other so that the cool and hot air intakes and exhausts don't mix air, which would severely reduce cooling efficiency.
Alternatively, a range of underfloor panels can create efficient cold air pathways directed to the raised-floor vented tiles. Either the cold aisle or the hot aisle can be contained.
Another option is fitting cabinets with vertical exhaust duct chimneys. Hot exhaust pipes/vents/ducts can direct the air into a plenum space above a dropped ceiling and back to the cooling units or to outside vents. With this configuration, a traditional hot/cold aisle configuration is not a requirement.
Fire protection
Data centers feature fire protection systems, including passive and active design elements, as well as fire prevention programs in operations. Smoke detectors are usually installed to provide early warning of a fire at its incipient stage.
Although the main room usually does not allow wet-pipe-based systems, due to the fragile nature of circuit boards, there still exist systems that can be used in the rest of the facility or in closed cold/hot aisle air circulation systems, such as:
Sprinkler systems
Misting, using high pressure to create extremely small water droplets, which can be used in sensitive rooms due to the nature of the droplets.
However, there are also other means to put out fires, especially in sensitive areas, usually using gaseous fire suppression, of which Halon gas was the most popular until the negative effects of producing and using it were discovered.
Security
Physical access is usually restricted. Layered security often starts with fencing, bollards and mantraps. Video camera surveillance and permanent security guards are almost always present if the data center is large or contains sensitive information. Fingerprint recognition mantraps are starting to be commonplace.
Logging access is required by some data protection regulations; some organizations tightly link this to access control systems. Multiple log entries can occur at the main entrance, entrances to internal rooms, and at equipment cabinets. Access control at cabinets can be integrated with intelligent power distribution units, so that locks are networked through the same appliance.
Energy use
Energy use is a central issue for data centers. Power draw ranges from a few kW for a rack of servers in a closet to several tens of MW for large facilities. Some facilities have power densities more than 100 times that of a typical office building. For higher power density facilities, electricity costs are a dominant operating expense and account for over 10% of the total cost of ownership (TCO) of a data center.
Greenhouse gas emissions
In 2020, data centers (excluding cryptocurrency mining) and data transmission each used about 1% of world electricity. Although some of this electricity was low carbon, the IEA called for more "government and industry efforts on energy efficiency, renewables procurement and RD&D", as some data centers still use electricity generated by fossil fuels. They also said that lifecycle emissions should be considered, that is including embodied emissions, such as in buildings. Data centers are estimated to have been responsible for 0.5% of US greenhouse gas emissions in 2018. Some Chinese companies, such as Tencent, have pledged to be carbon neutral by 2030, while others such as Alibaba have been criticized by Greenpeace for not committing to become carbon neutral. Google and Microsoft now each consume more power than some fairly large countries, each surpassing the consumption of more than 100 individual countries.
Energy efficiency and overhead
The most commonly used energy efficiency metric for data centers is power usage effectiveness (PUE), calculated as the ratio of total power entering the data center divided by the power used by IT equipment.
PUE reflects the share of power consumed by overhead devices (cooling, lighting, etc.). The average USA data center has a PUE of 2.0, meaning two watts of total power (overhead + IT equipment) for every watt delivered to IT equipment. State-of-the-art data centers are estimated to have a PUE of roughly 1.2. Google publishes quarterly efficiency metrics from its data centers in operation. PUEs as low as 1.01 have been achieved with two-phase immersion cooling.
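The definition above translates directly into code. The kW figures below are illustrative, chosen only to reproduce the PUE values quoted in the text:

```python
# PUE = total facility power / IT equipment power.
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    return total_facility_kw / it_equipment_kw

def overhead_fraction(p: float) -> float:
    """Share of total power going to overhead (cooling, lighting, ...)."""
    return (p - 1.0) / p

# Example inputs chosen to match the quoted averages, not real measurements.
avg = pue(total_facility_kw=2000, it_equipment_kw=1000)     # 2.0, US average
modern = pue(total_facility_kw=1200, it_equipment_kw=1000)  # 1.2, state of the art
print(f"average PUE {avg:.2f}: {overhead_fraction(avg):.0%} of power is overhead")
print(f"modern  PUE {modern:.2f}: {overhead_fraction(modern):.0%} of power is overhead")
```

A PUE of 2.0 means half the facility's power is overhead; at 1.2, overhead drops to about a sixth.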
The U.S. Environmental Protection Agency has an Energy Star rating for standalone or large data centers. To qualify for the ecolabel, a data center must be within the top quartile in energy efficiency of all reported facilities. The Energy Efficiency Improvement Act of 2015 (United States) requires federal facilities — including data centers — to operate more efficiently. California's Title 24 (2014) of the California Code of Regulations mandates that every newly constructed data center must have some form of airflow containment in place to optimize energy efficiency.
The European Union also has a similar initiative: EU Code of Conduct for Data Centres.
Energy use analysis and projects
The focus of measuring and analyzing energy use goes beyond what is used by IT equipment; facility support hardware such as chillers and fans also use energy.
In 2011, server racks in data centers were designed for more than 25 kW and the typical server was estimated to waste about 30% of the electricity it consumed. The energy demand for information storage systems is also rising. A high-availability data center is estimated to have a 1 megawatt (MW) demand and consume $20,000,000 in electricity over its lifetime, with cooling representing 35% to 45% of the data center's total cost of ownership. Calculations show that in two years, the cost of powering and cooling a server could be equal to the cost of purchasing the server hardware. Research in 2018 has shown that a substantial amount of energy could still be conserved by optimizing IT refresh rates and increasing server utilization.
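The claim that powering and cooling a server for about two years can equal its purchase price can be illustrated with a quick calculation. All inputs below (server price, power draw, tariff, PUE) are assumed example values, not figures from the text:

```python
# Illustrative break-even between a server's purchase price and its
# power + cooling bill. All input values are assumptions for the sketch.
def annual_energy_cost(draw_kw: float, pue: float, price_per_kwh: float) -> float:
    """Yearly cost of powering a server plus its share of facility overhead."""
    hours_per_year = 24 * 365
    return draw_kw * pue * hours_per_year * price_per_kwh

server_price = 3000.0   # USD, assumed purchase price
cost_per_year = annual_energy_cost(draw_kw=0.5, pue=2.0, price_per_kwh=0.15)
years_to_match = server_price / cost_per_year
print(f"energy cost/year: ${cost_per_year:,.0f}; matches purchase in {years_to_match:.1f} years")
```

With these assumptions the energy bill matches the hardware cost in roughly two and a half years, consistent with the two-year order of magnitude cited above.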
In 2011, Facebook, Rackspace and others founded the Open Compute Project (OCP) to develop and publish open standards for greener data center computing technologies. As part of the project, Facebook published the designs of its server, which it had built for its first dedicated data center in Prineville. Making servers taller left space for more effective heat sinks and enabled the use of fans that moved more air with less energy. By not buying commercial off-the-shelf servers, energy consumption due to unnecessary expansion slots on the motherboard and unneeded components, such as a graphics card, was also saved. In 2016, Google joined the project and published the designs of its 48V DC shallow data center rack. This design had long been part of Google data centers. By eliminating the multiple transformers usually deployed in data centers, Google had achieved a 30% increase in energy efficiency. In 2017, sales for data center hardware built to OCP designs topped $1.2 billion and are expected to reach $6 billion by 2021.
Power and cooling analysis
Power is the largest recurring cost to the user of a data center. Cooling equipment below the necessary temperature wastes money and energy. Furthermore, overcooling equipment in environments with a high relative humidity can expose equipment to a high amount of moisture that facilitates the growth of salt deposits on conductive filaments in the circuitry.
A power and cooling analysis, also referred to as a thermal assessment, measures the relative temperatures in specific areas as well as the capacity of the cooling systems to handle specific ambient temperatures. A power and cooling analysis can help to identify hot spots, over-cooled areas that can handle greater power use density, the breakpoint of equipment loading, the effectiveness of a raised-floor strategy, and optimal equipment positioning (such as AC units) to balance temperatures across the data center. Power cooling density is a measure of how much square footage the center can cool at maximum capacity. The cooling of data centers is the second largest power consumer after servers. The cooling energy varies from 10% of the total energy consumption in the most efficient data centers and goes up to 45% in standard air-cooled data centers.
Energy efficiency analysis
An energy efficiency analysis measures the energy use of data center IT and facilities equipment. A typical energy efficiency analysis measures factors such as a data center's power usage effectiveness (PUE) against industry standards, identifies mechanical and electrical sources of inefficiency, and identifies air-management metrics. However, a limitation of most current metrics and approaches is that they do not include IT in the analysis. Case studies have shown that by addressing energy efficiency holistically in a data center, major efficiencies can be achieved that are not possible otherwise.
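As an illustration, PUE can be computed directly from facility and IT energy meters. The function and figures below are a hypothetical sketch, not a standard tool:

```python
def power_usage_effectiveness(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """PUE = total facility energy / IT equipment energy.

    1.0 is the theoretical ideal, where every watt delivered to the
    facility reaches the IT equipment; cooling and power-distribution
    overhead push the value higher.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# hypothetical monthly meter readings, in kWh
print(power_usage_effectiveness(1_300_000, 1_000_000))  # 1.3
```

A facility with this reading would meet, for example, the PUE < 1.3 threshold mentioned later for new data centers in Singapore.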
Computational Fluid Dynamics (CFD) analysis
This type of analysis uses sophisticated tools and techniques to understand the unique thermal conditions present in each data center—predicting the temperature, airflow, and pressure behavior of a data center to assess performance and energy consumption, using numerical modeling. By predicting the effects of these environmental conditions, CFD analysis of a data center can be used to predict the impact of high-density racks mixed with low-density racks and the onward impact on cooling resources, poor infrastructure management practices, and AC failure or AC shutdown for scheduled maintenance.
Thermal zone mapping
Thermal zone mapping uses sensors and computer modeling to create a three-dimensional image of the hot and cool zones in a data center.
This information can help to identify optimal positioning of data center equipment. For example, critical servers might be placed in a cool zone that is serviced by redundant AC units.
Green data centers
Data centers use a lot of power, consumed by two main usages: the power required to run the actual equipment, and the power required to cool that equipment. Power efficiency reduces the first category.
Cooling cost reduction through natural means includes location decisions: when proximity to good fiber connectivity, power grid connections, and concentrations of people to manage the equipment is not required, a data center can be miles away from its users. Mass data centers like those of Google or Facebook don't need to be near population centers. Arctic locations, where outside air can be used for cooling, are becoming more popular.
Renewable electricity sources are another plus. Thus countries with favorable conditions, such as Canada, Finland, Sweden, Norway, and Switzerland, are trying to attract cloud computing data centers.
Singapore, a major data center hub for the Asia-Pacific region, lifted its three-year moratorium on new data center projects in April 2022, granting four new projects and rejecting more than 16 of the over 20 applications received. Singapore's new data centers must meet strict green technology criteria, including a water usage effectiveness (WUE) of 2.0/MWh, a power usage effectiveness (PUE) of less than 1.3, and Platinum certification under Singapore's BCA-IMDA Green Mark for New Data Centres, criteria that address decarbonization and the use of hydrogen fuel cells or solar panels.
Direct current data centers
Direct current data centers are data centers that produce direct current on site with solar panels and store the electricity on site in a battery storage power station. Computers run on direct current, so the need to convert AC power from the grid would be eliminated. The site could still use the AC grid as a backup. DC data centers could be 10% more efficient, and eliminating the conversion components also saves floor space.
Energy reuse
It is very difficult to reuse the heat which comes from air-cooled data centers. For this reason, data center infrastructures are more often equipped with heat pumps. An alternative to heat pumps is the adoption of liquid cooling throughout a data center. Different liquid cooling techniques are mixed and matched to allow for a fully liquid-cooled infrastructure that captures all heat in water. Liquid cooling technologies fall into three main groups: indirect liquid cooling (water-cooled racks), direct liquid cooling (direct-to-chip cooling), and total liquid cooling (complete immersion in liquid; see server immersion cooling). This combination of technologies allows the creation of a thermal cascade, as part of temperature chaining scenarios, to create high-temperature water outputs from the data center.
Impact on electricity prices
Cryptomining and the artificial intelligence boom of the 2020s have also increased demand for electricity, which the IEA expects could double global data center electricity demand between 2022 and 2026. The US could see its share of the electricity market going to data centers increase from 4% to 6% over those four years. Bitcoin used up 2% of US electricity in 2023. This has led to increased electricity prices in some regions, particularly regions with many data centers such as Santa Clara, California, and upstate New York. Data centers have also generated concerns in Northern Virginia about whether residents will have to foot the bill for future power lines. It has also made it harder to develop housing in London. A Bank of America Institute report in July 2024 found that the increase in demand for electricity, due in part to AI, has been pushing electricity prices higher and is a significant contributor to electricity inflation.
Dynamic infrastructure
Dynamic infrastructure provides the ability to intelligently, automatically, and securely move workloads within a data center anytime, anywhere, whether for migration, provisioning, performance enhancement, or building co-location facilities. It also facilitates performing routine maintenance on either physical or virtual systems, all while minimizing interruption. A related concept is composable infrastructure, which allows for the dynamic reconfiguration of the available resources to suit needs, only when needed.
Side benefits include
reducing cost
facilitating business continuity and high availability
enabling cloud and grid computing.
Network infrastructure
Communications in data centers today are most often based on networks running the Internet protocol suite. Data centers contain a set of routers and switches that transport traffic between the servers and to the outside world, connected according to the data center network architecture. Redundancy of the Internet connection is often provided by using two or more upstream service providers (see Multihoming).
Some of the servers at the data center are used for running the basic internet and intranet services needed by internal users in the organization, e.g., e-mail servers, proxy servers, and DNS servers.
Network security elements are also usually deployed: firewalls, VPN gateways, intrusion detection systems, and so on. Also common are monitoring systems for the network and some of the applications. Additional off-site monitoring systems are also typical, in case of a failure of communications inside the data center.
Software/data backup
Non-mutually exclusive options for data backup are:
Onsite
Offsite
Onsite is traditional, and one of its major advantages is immediate availability.
Offsite backup storage
Data backup techniques include having an encrypted copy of the data offsite. Methods used for transporting data are:
Having the customer write the data to a physical medium, such as magnetic tape, and then transporting the tape elsewhere.
Directly transferring the data to another site during the backup, using appropriate links.
Uploading the data "into the cloud".
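The transport options above share a common first step: producing an integrity-checked archive. The sketch below is illustrative only (paths are hypothetical, and a real deployment would also encrypt the archive, for example with GPG, before it leaves the site):

```python
import hashlib
import tarfile
import time
from pathlib import Path

def make_backup_archive(source: Path, dest_dir: Path) -> Path:
    """Create a dated .tar.gz of `source` in `dest_dir` and record a
    SHA-256 checksum alongside it, so the copy can be verified after
    transport (by courier, direct transfer, or cloud upload)."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = dest_dir / f"{source.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    digest = hashlib.sha256(archive.read_bytes()).hexdigest()
    checksum_file = dest_dir / (archive.name + ".sha256")
    checksum_file.write_text(f"{digest}  {archive.name}\n")
    return archive
```

The checksum file lets the receiving site confirm that the archive survived transport intact before it is catalogued.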
Modular data center
For quick deployment or IT disaster recovery, several large hardware vendors have developed mobile/modular solutions that can be installed and made operational in a very short amount of time.
Micro data center
Micro data centers (MDCs) are access-level data centers which are smaller in size than traditional data centers but provide the same features. They are typically located near the data source to reduce communication delays, as their small size allows several MDCs to be spread out over a wide area. MDCs are well suited to user-facing, front end applications. They are commonly used in edge computing and other areas where low latency data processing is needed.
A Reuleaux triangle is a curved triangle with constant width, the simplest and best known curve of constant width other than the circle. It is formed from the intersection of three circular disks, each having its center on the boundary of the other two. Constant width means that the separation of every two parallel supporting lines is the same, independent of their orientation. Because its width is constant, the Reuleaux triangle is one answer to the question "Other than a circle, what shape can a manhole cover be made so that it cannot fall down through the hole?"
The shape is named after Franz Reuleaux, a 19th-century German engineer who pioneered the study of machines for translating one type of motion into another, and who used Reuleaux triangles in his designs. However, the shape was known before his time, for instance to the designers of Gothic church windows, to Leonardo da Vinci, who used it for a map projection, and to Leonhard Euler in his study of constant-width shapes. Other applications of the Reuleaux triangle include giving the shape to guitar picks, fire hydrant nuts, pencils, and drill bits for drilling filleted square holes, as well as in graphic design in the shapes of some signs and corporate logos.
Among constant-width shapes with a given width, the Reuleaux triangle has the minimum area and the sharpest (smallest) possible angle (120°) at its corners. By several numerical measures it is the farthest from being centrally symmetric. It provides the largest constant-width shape avoiding the points of an integer lattice, and is closely related to the shape of the quadrilateral maximizing the ratio of perimeter to diameter. It can perform a complete rotation within a square while at all times touching all four sides of the square, and has the smallest possible area of shapes with this property. However, although it covers most of the square in this rotation process, it fails to cover a small fraction of the square's area, near its corners. Because of this property of rotating within a square, the Reuleaux triangle is also sometimes known as the Reuleaux rotor.
The Reuleaux triangle is the first of a sequence of Reuleaux polygons whose boundaries are curves of constant width formed from regular polygons with an odd number of sides. Some of these curves have been used as the shapes of coins. The Reuleaux triangle can also be generalized into three dimensions in multiple ways: the Reuleaux tetrahedron (the intersection of four balls whose centers lie on a regular tetrahedron) does not have constant width, but can be modified by rounding its edges to form the Meissner tetrahedron, which does. Alternatively, the surface of revolution of the Reuleaux triangle also has constant width.
Construction
The Reuleaux triangle may be constructed either directly from three circles, or by rounding the sides of an equilateral triangle.
The three-circle construction may be performed with a compass alone, not even needing a straightedge. By the Mohr–Mascheroni theorem the same is true more generally of any compass-and-straightedge construction, but the construction for the Reuleaux triangle is particularly simple.
The first step is to mark two arbitrary points of the plane (which will eventually become vertices of the triangle), and use the compass to draw a circle centered at one of the marked points, through the other marked point. Next, one draws a second circle, of the same radius, centered at the other marked point and passing through the first marked point.
Finally, one draws a third circle, again of the same radius, with its center at one of the two crossing points of the two previous circles, passing through both marked points. The central region in the resulting arrangement of three circles will be a Reuleaux triangle.
Alternatively, a Reuleaux triangle may be constructed from an equilateral triangle T by drawing three arcs of circles, each centered at one vertex of T and connecting the other two vertices.
Or, equivalently, it may be constructed as the intersection of three disks centered at the vertices of T, with radius equal to the side length of T.
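The disk-intersection description gives a simple membership test: a point lies in the Reuleaux triangle exactly when it is within distance s of all three vertices of T. A sketch (coordinates assume an equilateral triangle of side 1 with one vertex at the origin):

```python
import math

def in_reuleaux_triangle(p, s=1.0):
    """True if p lies in the Reuleaux triangle: the intersection of three
    disks of radius s centered at the vertices of an equilateral triangle."""
    verts = [(0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)]
    return all(math.dist(p, v) <= s for v in verts)

assert in_reuleaux_triangle((0.5, 0.3))      # interior point
assert in_reuleaux_triangle((0.0, 0.0))      # a corner lies on the boundary
assert not in_reuleaux_triangle((0.5, 0.9))  # above the top arc
assert not in_reuleaux_triangle((1.2, 0.0))  # outside to the right
```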
Mathematical properties
The most basic property of the Reuleaux triangle is that it has constant width, meaning that for every pair of parallel supporting lines (two lines of the same slope that both touch the shape without crossing through it) the two lines have the same Euclidean distance from each other, regardless of the orientation of these lines. In any pair of parallel supporting lines, one of the two lines will necessarily touch the triangle at one of its vertices. The other supporting line may touch the triangle at any point on the opposite arc, and their distance (the width of the Reuleaux triangle) equals the radius of this arc.
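The constant-width property can be checked numerically: for a convex shape, the width in a direction is the spread of the boundary points' projections onto a unit vector in that direction. The sketch below places the underlying equilateral triangle at (0,0), (1,0), (1/2, √3/2) and samples the three 60° arcs:

```python
import math

def reuleaux_points(s=1.0, n_per_arc=400):
    """Sample the boundary: three 60-degree arcs of radius s, each centered
    at one vertex of the equilateral triangle and spanning the other two."""
    verts = [(0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)]
    starts = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]   # arc start angles
    pts = []
    for (cx, cy), t0 in zip(verts, starts):
        for k in range(n_per_arc):
            t = t0 + (math.pi / 3) * k / n_per_arc
            pts.append((cx + s * math.cos(t), cy + s * math.sin(t)))
    return pts

def width(pts, theta):
    """Distance between the two supporting lines perpendicular to direction theta."""
    ux, uy = math.cos(theta), math.sin(theta)
    proj = [x * ux + y * uy for x, y in pts]
    return max(proj) - min(proj)

pts = reuleaux_points()
widths = [width(pts, math.pi * i / 180) for i in range(180)]
assert max(widths) - min(widths) < 1e-3   # constant width, up to sampling error
```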
The first mathematician to discover the existence of curves of constant width, and to observe that the Reuleaux triangle has constant width, may have been Leonhard Euler. In a paper that he presented in 1771 and published in 1781 entitled De curvis triangularibus, Euler studied curvilinear triangles as well as the curves of constant width, which he called orbiforms.
Extremal measures
By many different measures, the Reuleaux triangle is one of the most extreme curves of constant width.
By the Blaschke–Lebesgue theorem, the Reuleaux triangle has the smallest possible area of any curve of given constant width. This area is (π − √3)s²/2 ≈ 0.70477s², where s is the constant width. One method for deriving this area formula is to partition the Reuleaux triangle into an inner equilateral triangle and three curvilinear regions between this inner triangle and the arcs forming the Reuleaux triangle, and then add the areas of these four sets. At the other extreme, the curve of constant width that has the maximum possible area is a circular disk, which has area πs²/4.
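The closed form (π − √3)s²/2 ≈ 0.70477s² can be confirmed numerically by applying the shoelace formula to a fine sampling of the boundary (a sketch):

```python
import math

def reuleaux_area(s=1.0, n_per_arc=20000):
    """Approximate the area with the shoelace formula on a sampled boundary."""
    verts = [(0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)]
    starts = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    pts = []
    for (cx, cy), t0 in zip(verts, starts):
        for k in range(n_per_arc):
            t = t0 + (math.pi / 3) * k / n_per_arc
            pts.append((cx + s * math.cos(t), cy + s * math.sin(t)))
    twice_area = sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))
    return abs(twice_area) / 2

exact = (math.pi - math.sqrt(3)) / 2        # closed form for width s = 1
assert abs(reuleaux_area(1.0) - exact) < 1e-6
```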
The angles made by each pair of arcs at the corners of a Reuleaux triangle are all equal to 120°. This is the sharpest possible angle at any vertex of any curve of constant width. Additionally, among the curves of constant width, the Reuleaux triangle is the one with both the largest and the smallest inscribed equilateral triangles. The largest equilateral triangle inscribed in a Reuleaux triangle is the one connecting its three corners, and the smallest one is the one connecting the three midpoints of its sides. The subset of the Reuleaux triangle consisting of points belonging to three or more diameters is the interior of the larger of these two triangles; it has a larger area than the set of three-diameter points of any other curve of constant width.
Although the Reuleaux triangle has sixfold dihedral symmetry, the same as an equilateral triangle, it does not have central symmetry.
The Reuleaux triangle is the least symmetric curve of constant width according to two different measures of central asymmetry, the Kovner–Besicovitch measure (ratio of area to the largest centrally symmetric shape enclosed by the curve) and the Estermann measure (ratio of area to the smallest centrally symmetric shape enclosing the curve). For the Reuleaux triangle, the two centrally symmetric shapes that determine the measures of asymmetry are both hexagonal, although the inner one has curved sides. The Reuleaux triangle has diameters that split its area more unevenly than any other curve of constant width. That is, the maximum ratio of areas on either side of a diameter, another measure of asymmetry, is bigger for the Reuleaux triangle than for other curves of constant width.
Among all shapes of constant width that avoid all points of an integer lattice, the one with the largest width is a Reuleaux triangle. It has one of its axes of symmetry parallel to the coordinate axes on a half-integer line. Its width, approximately 1.54, is the root of a degree-6 polynomial with integer coefficients.
Just as it is possible for a circle to be surrounded by six congruent circles that touch it, it is also possible to arrange seven congruent Reuleaux triangles so that they all make contact with a central Reuleaux triangle of the same size. This is the maximum number possible for any curve of constant width.
Among all quadrilaterals, the shape that has the greatest ratio of its perimeter to its diameter is an equidiagonal kite that can be inscribed into a Reuleaux triangle.
Other measures
By Barbier's theorem all curves of the same constant width including the Reuleaux triangle have equal perimeters. In particular this perimeter equals the perimeter of the circle with the same width, which is πs.
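For the Reuleaux triangle this is easy to verify directly: each of the three arcs has radius s and central angle 60° (π/3), so the perimeter is 3 · s · π/3 = πs, the circumference of a circle of diameter s:

```python
import math

def reuleaux_perimeter(s: float) -> float:
    """Perimeter of a Reuleaux triangle of width s:
    three circular arcs, each of radius s and central angle pi/3."""
    return 3 * s * (math.pi / 3)

# agrees with the circumference of the circle of the same width
assert math.isclose(reuleaux_perimeter(2.0), math.pi * 2.0)
```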
The radii of the largest inscribed circle of a Reuleaux triangle with width s, and of the circumscribed circle of the same triangle, are (1 − 1/√3)s ≈ 0.423s and s/√3 ≈ 0.577s, respectively; the sum of these radii equals the width of the Reuleaux triangle. More generally, for every curve of constant width, the largest inscribed circle and the smallest circumscribed circle are concentric, and their radii sum to the constant width of the curve.
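The values (1 − 1/√3)s and s/√3 can be recovered from a sampled boundary: both circles are centered at the centroid of the underlying equilateral triangle, so the inradius and circumradius are the minimum and maximum distances from the centroid to the boundary (a numerical sketch):

```python
import math

def reuleaux_radii(s=1.0, n_per_arc=2000):
    """Inradius and circumradius as min/max distance from the centroid
    of the underlying equilateral triangle to the sampled boundary."""
    verts = [(0.0, 0.0), (s, 0.0), (s / 2, s * math.sqrt(3) / 2)]
    starts = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
    center = (s / 2, s / (2 * math.sqrt(3)))    # triangle centroid
    dists = []
    for (cx, cy), t0 in zip(verts, starts):
        for k in range(n_per_arc):
            t = t0 + (math.pi / 3) * k / n_per_arc
            dists.append(math.dist(center, (cx + s * math.cos(t), cy + s * math.sin(t))))
    return min(dists), max(dists)

r_in, r_out = reuleaux_radii()
assert math.isclose(r_in, 1 - 1 / math.sqrt(3), rel_tol=1e-6)
assert math.isclose(r_out, 1 / math.sqrt(3), rel_tol=1e-6)
assert math.isclose(r_in + r_out, 1.0, rel_tol=1e-6)   # radii sum to the width
```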
The optimal packing density of the Reuleaux triangle in the plane remains unproven, but is conjectured to be approximately 0.9229, which is the density of one possible double lattice packing for these shapes. The best proven upper bound on the packing density is approximately 0.947. It has also been conjectured, but not proven, that Reuleaux triangles have the highest packing density of any curve of constant width.
Rotation within a square
Any curve of constant width can form a rotor within a square, a shape that can perform a complete rotation while staying within the square and at all times touching all four sides of the square. However, the Reuleaux triangle is the rotor with the minimum possible area. As it rotates, its axis does not stay fixed at a single point, but instead follows a curve formed by the pieces of four ellipses. Because of its 120° angles, the rotating Reuleaux triangle cannot reach some points near the sharper angles at the square's vertices, but rather covers a shape with slightly rounded corners, also formed by elliptical arcs.
At any point during this rotation, two of the corners of the Reuleaux triangle touch two adjacent sides of the square, while the third corner of the triangle traces out a curve near the opposite vertex of the square. The shape traced out by the rotating Reuleaux triangle covers approximately 98.8% of the area of the square.
As a counterexample
Reuleaux's original motivation for studying the Reuleaux triangle was as a counterexample, showing that three single-point contacts may not be enough to fix a planar object into a single position. The existence of Reuleaux triangles and other curves of constant width shows that diameter measurements alone cannot verify that an object has a circular cross-section.
In connection with the inscribed square problem, it has been observed that the Reuleaux triangle provides an example of a constant-width shape in which no regular polygon with more than four sides can be inscribed, except the regular hexagon; a small modification of this shape preserves its constant width but also prevents regular hexagons from being inscribed in it. This result generalizes to three dimensions using a cylinder with the same shape as its cross section.
Applications
Reaching into corners
Several types of machinery take the shape of the Reuleaux triangle, based on its property of being able to rotate within a square.
The Watts Brothers Tool Works square drill bit has the shape of a Reuleaux triangle, modified with concavities to form cutting surfaces. When mounted in a special chuck which allows for the bit not having a fixed centre of rotation, it can drill a hole that is nearly square. Although patented by Henry Watts in 1914, similar drills invented by others were used earlier. Other Reuleaux polygons are used to drill pentagonal, hexagonal, and octagonal holes.
Panasonic's RULO robotic vacuum cleaner has its shape based on the Reuleaux triangle in order to ease cleaning up dust in the corners of rooms.
Rolling cylinders
Another class of applications of the Reuleaux triangle involves cylindrical objects with a Reuleaux triangle cross section. Several pencils are manufactured in this shape, rather than the more traditional round or hexagonal barrels. They are usually promoted as being more comfortable or encouraging proper grip, as well as being less likely to roll off tables (since the center of gravity moves up and down more than that of a rolling hexagon).
A Reuleaux triangle (along with all other curves of constant width) can roll but makes a poor wheel because it does not roll about a fixed center of rotation. An object on top of rollers that have Reuleaux triangle cross-sections would roll smoothly and flatly, but an axle attached to Reuleaux triangle wheels would bounce up and down three times per revolution. This concept was used in a science fiction short story by Poul Anderson titled "The Three-Cornered Wheel". A bicycle with floating axles and a frame supported by the rim of its Reuleaux triangle shaped wheel was built and demonstrated in 2009 by Chinese inventor Guan Baihua, who was inspired by pencils with the same shape.
Mechanism design
Another class of applications of the Reuleaux triangle involves using it as a part of a mechanical linkage that can convert rotation around a fixed axis into reciprocating motion. These mechanisms were studied by Franz Reuleaux. With the assistance of the Gustav Voigt company, Reuleaux built approximately 800 models of mechanisms, several of which involved the Reuleaux triangle. Reuleaux used these models in his pioneering scientific investigations of their motion. Although most of the Reuleaux–Voigt models have been lost, 219 of them have been collected at Cornell University, including nine based on the Reuleaux triangle. However, the use of Reuleaux triangles in mechanism design predates the work of Reuleaux; for instance, some steam engines from as early as 1830 had a cam in the shape of a Reuleaux triangle.
One application of this principle arises in a film projector. In this application, it is necessary to advance the film in a jerky, stepwise motion, in which each frame of film stops for a fraction of a second in front of the projector lens, and then much more quickly the film is moved to the next frame. This can be done using a mechanism in which the rotation of a Reuleaux triangle within a square is used to create a motion pattern for an actuator that pulls the film quickly to each new frame and then pauses the film's motion while the frame is projected.
The rotor of the Wankel engine is shaped as a curvilinear triangle that is often cited as an example of a Reuleaux triangle. However, its curved sides are somewhat flatter than those of a Reuleaux triangle and so it does not have constant width.
Architecture
In Gothic architecture, beginning in the late 13th century or early 14th century, the Reuleaux triangle became one of several curvilinear forms frequently used for windows, window tracery, and other architectural decorations. For instance, in English Gothic architecture, this shape was associated with the decorated period, both in its geometric style of 1250–1290 and continuing into its curvilinear style of 1290–1350. It also appears in some of the windows of the Milan Cathedral. In this context, the shape is sometimes called a spherical triangle, which should not be confused with spherical triangle meaning a triangle on the surface of a sphere. In its use in Gothic church architecture, the three-cornered shape of the Reuleaux triangle may be seen both as a symbol of the Trinity, and as "an act of opposition to the form of the circle".
The Reuleaux triangle has also been used in other styles of architecture. For instance, Leonardo da Vinci sketched this shape as the plan for a fortification. Modern buildings that have been claimed to use a Reuleaux triangle shaped floorplan include the MIT Kresge Auditorium, the Kölntriangle, the Donauturm, the Torre de Collserola, and the Mercedes-Benz Museum. However, in many cases these are merely rounded triangles, with different geometry than the Reuleaux triangle.
Mapmaking
Another early application of the Reuleaux triangle was da Vinci's world map from circa 1514, in which the spherical surface of the earth was divided into eight octants, each flattened into the shape of a Reuleaux triangle.
Similar maps also based on the Reuleaux triangle were published by Oronce Finé in 1551 and by John Dee in 1580.
Other objects
Many guitar picks employ the Reuleaux triangle, as its shape combines a sharp point to provide strong articulation, with a wide tip to produce a warm timbre. Because all three points of the shape are usable, it is easier to orient and wears less quickly compared to a pick with a single tip.
The Reuleaux triangle has been used as the shape for the cross section of a fire hydrant valve nut. The constant width of this shape makes it difficult to open the fire hydrant using standard parallel-jawed wrenches; instead, a wrench with a special shape is needed. This property allows the fire hydrants to be opened only by firefighters (who have the special wrench) and not by other people trying to use the hydrant as a source of water for other activities.
The antennae of the Submillimeter Array, a radio-wave astronomical observatory on Mauna Kea in Hawaii, are arranged on four nested Reuleaux triangles. Placing the antennae on a curve of constant width causes the observatory to have the same spatial resolution in all directions, and provides a circular observation beam. As the most asymmetric curve of constant width, the Reuleaux triangle leads to the most uniform coverage of the plane for the Fourier transform of the signal from the array. The antennae may be moved from one Reuleaux triangle to another for different observations, according to the desired angular resolution of each observation. The precise placement of the antennae on these Reuleaux triangles was optimized using a neural network. In some places the constructed observatory departs from the preferred Reuleaux triangle shape because that shape was not possible within the given site.
Signs and logos
The shield shapes used for many signs and corporate logos feature rounded triangles. However, only some of these are Reuleaux triangles.
The corporate logo of Petrofina (Fina), a Belgian oil company with major operations in Europe, North America and Africa, used a Reuleaux triangle with the Fina name from 1950 until Petrofina's merger with Total S.A. (today TotalEnergies) in 2000.
Another corporate logo framed in the Reuleaux triangle, the south-pointing compass of Bavaria Brewery, was part of a makeover by design company Total Identity that won the SAN 2010 Advertiser of the Year award. The Reuleaux triangle is also used in the logo of Colorado School of Mines.
In the United States, the National Trails System and United States Bicycle Route System both mark routes with Reuleaux triangles on signage.
In nature
According to Plateau's laws, the circular arcs in two-dimensional soap bubble clusters meet at 120° angles, the same angle found at the corners of a Reuleaux triangle. Based on this fact, it is possible to construct clusters in which some of the bubbles take the form of a Reuleaux triangle.
The shape was first isolated in crystal form in 2014 as Reuleaux triangle disks. Basic bismuth nitrate disks with the Reuleaux triangle shape were formed from the hydrolysis and precipitation of bismuth nitrate in an ethanol–water system in the presence of 2,3-bis(2-pyridyl)pyrazine.
Generalizations
Triangular curves of constant width with smooth rather than sharp corners may be obtained as the locus of points at a fixed distance from the Reuleaux triangle. Other generalizations of the Reuleaux triangle include surfaces in three dimensions, curves of constant width with more than three sides, and the Yanmouti sets which provide extreme examples of an inequality between width, diameter, and inradius.
Three-dimensional version
The intersection of four balls of radius s centered at the vertices of a regular tetrahedron with side length s is called the Reuleaux tetrahedron, but its surface is not a surface of constant width. It can, however, be made into a surface of constant width, called Meissner's tetrahedron, by replacing three of its edge arcs by curved surfaces, the surfaces of rotation of a circular arc. Alternatively, the surface of revolution of a Reuleaux triangle through one of its symmetry axes forms a surface of constant width, with minimum volume among all known surfaces of revolution of given constant width.
Reuleaux polygons
The Reuleaux triangle can be generalized to regular or irregular polygons with an odd number of sides, yielding a Reuleaux polygon, a curve of constant width formed from circular arcs of constant radius. The constant width of these shapes allows their use as coins that can be used in coin-operated machines. Although coins of this type in general circulation usually have more than three sides, a Reuleaux triangle has been used for a commemorative coin from Bermuda.
Similar methods can be used to enclose an arbitrary simple polygon within a curve of constant width, whose width equals the diameter of the given polygon. The resulting shape consists of circular arcs (at most as many as sides of the polygon), can be constructed algorithmically in linear time, and can be drawn with compass and straightedge. Although the Reuleaux polygons all have an odd number of circular-arc sides, it is possible to construct constant-width shapes with an even number of circular-arc sides of varying radii.
Yanmouti sets
The Yanmouti sets are defined as the convex hulls of an equilateral triangle together with three circular arcs, centered at the triangle vertices and spanning the same angle as the triangle, with equal radii that are at most equal to the side length of the triangle. Thus, when the radius is small enough, these sets degenerate to the equilateral triangle itself, but when the radius is as large as possible they equal the corresponding Reuleaux triangle. Every shape with width w, diameter d, and inradius r (the radius of the largest possible circle contained in the shape) obeys the inequality w − r ≤ d/√3, and this inequality becomes an equality for the Yanmouti sets, showing that it cannot be improved.
Related figures
In the classical presentation of a three-set Venn diagram as three overlapping circles, the central region (representing elements belonging to all three sets) takes the shape of a Reuleaux triangle. The same three circles form one of the standard drawings of the Borromean rings, three mutually linked rings that cannot, however, be realized as geometric circles. Parts of these same circles are used to form the triquetra, a figure of three overlapping semicircles (each two of which form a vesica piscis symbol) that again has a Reuleaux triangle at its center; just as the three circles of the Venn diagram may be interlaced to form the Borromean rings, the three circular arcs of the triquetra may be interlaced to form a trefoil knot.
Relatives of the Reuleaux triangle arise in the problem of finding the minimum perimeter shape that encloses a fixed amount of area and includes three specified points in the plane. For a wide range of choices of the area parameter, the optimal solution to this problem will be a curved triangle whose three sides are circular arcs with equal radii. In particular, when the three points are equidistant from each other and the area is that of the Reuleaux triangle, the Reuleaux triangle is the optimal enclosure.
Circular triangles are triangles with circular-arc edges, including the Reuleaux triangle as well as other shapes.
The deltoid curve is another type of curvilinear triangle, but one in which the curves replacing each side of an equilateral triangle are concave rather than convex. It is not composed of circular arcs, but may be formed by rolling one circle within another of three times the radius. Other planar shapes with three curved sides include the arbelos, which is formed from three semicircles with collinear endpoints, and the Bézier triangle.
The Reuleaux triangle may also be interpreted as the stereographic projection of one triangular face of a spherical tetrahedron, the Schwarz triangle of parameters with spherical angles of measure and sides of spherical length
A trowel is a small hand tool used for digging, applying, smoothing, or moving small amounts of viscous or particulate material. Common varieties include the masonry trowel, garden trowel, and float trowel.
A power trowel is a much larger gasoline or electrically powered walk-behind device with rotating paddles used to finish concrete floors.
Hand trowel
Numerous forms of trowel are used in masonry, concrete, and drywall construction, as well as applying adhesives such as those used in tiling and laying synthetic flooring. Masonry trowels are traditionally made of forged carbon steel, but some newer versions are made of cast stainless steel, which has longer wear and is rust-free. These include:
Bricklayer's trowel has an elongated triangular-shaped flat metal blade, used by masons for leveling, spreading, and shaping cement, plaster, and mortar.
Pointing trowel, a scaled-down version of a bricklayer's trowel, for small jobs and repair work.
Tuck pointing trowel is long and thin, designed for packing mortar between bricks.
Float trowel or finishing trowel is usually rectangular, used to smooth, level, or texture the top layer of hardening concrete. A flooring trowel has one rectangular end and one pointed end, made to fit corners. A grout float is used for applying and working grout into gaps in floor and wall tile.
Gauging trowel has a rounded tip, used to mix measured proportions of the different ingredients for quick set plaster.
Pool trowel is a flat-bladed tool with rounded ends used to apply coatings to concrete, especially on swimming pool decks.
Margin trowel is a small rectangular bladed tool used to move, apply, and smooth small amounts of masonry or adhesive material.
Notched trowel is a rectangular shaped tool with regularly spaced notches along one or more sides used to apply adhesive when adhering tile, or laying synthetic floor surfaces.
Other forms of trowel include:
Garden trowel, a hand tool with a pointed, scoop-shaped metal blade and wooden, metal, or plastic handle. It is comparable to a spade or shovel, but is generally much smaller, being designed for use with one hand. It is used for breaking up earth, digging small holes, especially for planting and weeding, mixing in fertilizer or other additives, and transferring plants to pots.
Camping trowel, a hand tool used in the outdoors to securely stake and prop up a tent, channel a small stream of water, level a sleeping surface, dig a cathole for burying waste, and perform many other outdoor chores. Camping trowels can be made of lighter-weight materials than gardening trowels, to make them easier to carry in a backpack, or of heavier materials for chopping kindling or shoveling soil without having to awkwardly reach or bend over. Camping trowels may incorporate a side ruler to measure digging depth; however, the ruler may become defaced prematurely by coarse soil particles. Camping trowels sometimes have features such as a pointed tip and a serrated side edge to cut easily through tree roots or frozen soil. These serrated camping trowels may include a cover guard to protect the user from cuts and to save backpacks from puncture holes and tears. They may also fold up for added protection and easy storage. A few allow items such as toilet paper to be stored on or inside the handle.[2]
In archaeology, brick or pointing trowels (usually 4" or 5" steel trowels) are used to scratch the strata in an excavation and allow the colors of the soil to be clear, so that the different strata can be identified, processed and excavated. In the United States, there are several preferred brands of pointing trowels, including the Marshalltown trowel, while in the British Isles the WHS 4" pointing trowel is the traditional tool.
Tonkinese is a domestic cat breed produced by crossbreeding between the Siamese and Burmese. Members of the breed are distinguished by a pointed coat pattern in a variety of colors. In addition to the modified coat colors of the "mink" pattern, which is a dilution of the point color, the breed is now being shown in the foundation-like Siamese and Burmese colors: pointed with white and solid overall (sepia).
The best known variety is the short-haired Tonkinese, but there is a semi-longhaired (sometimes called Tibetan) which tends to be more popular in Europe, mainly in the Netherlands, Germany, Belgium, Luxembourg, and France.
History
Origin
The modern Tonkinese breed is a reconstruction of a breed brought to the West in the 19th century. These cats were originally known as 'chocolate Siamese'. Breeders working with imported cats from Malaysia noticed that some cats had aquamarine eyes and darker coats than the Siamese. In 1901 the Siamese Cat Club recognised them as a Siamese of the 'chocolate' type.
Many of the cats used to found the Siamese and Burmese in the West are believed to be Tonkinese, including Wong Mau. Tonkinese continued to be bred, but were registered as either Burmese or Siamese; it was not until the 1950s that breeders took an interest in the cat in its own right. These breeders worked together on developing breeding lines, and by 1965 the Tonkinese was recognised in Canada as a distinct breed, which is where the name originated.
More modern Tonkinese cats are the result of the crossbreeding programs of two breeders working independently of each other. Margaret Conroy, of Canada, and Jane Barletta, of the United States, crossed the Siamese and Burmese breeds, with the aim of creating the ideal combination of both parent breeds' distinctive appearance and lively personalities. The cats thus produced were moved from crossbreed classification to an established breed in 2001. The name is a reference to the Tonkin region of Indochina, though it is suggestive only, as the cats have no connection with the area.
In the West, Tonkinese cats under the age of six months have historically been referred to as "small-cats" rather than "kittens" to reflect a more direct translation from Burmese, although this term has become almost obsolete since the mid-20th century.
Breed recognition
The breed received championship status with the Cat Fanciers' Association in 1984. The Governing Council of the Cat Fancy (GCCF) recognised the breed in 1991. Today the breed is recognised in most of Europe, Australia, New Zealand, Hong Kong, Japan, and South Africa. Over 30 countries have Tonkinese cats featured on postage stamps.
Description
Appearance
Tonkinese are a medium-sized cat, considered an intermediate type between the slender, long-bodied modern Siamese and European Burmese and the more "cobby", or substantially-built American Burmese. Like their Burmese ancestors, they are deceptively muscular and typically seem much heavier than expected when picked up. Tail and legs are slim but proportionate to the body, with distinctive oval paws. They have a gently rounded, slightly wedge-shaped head and blunted muzzle, with moderately almond-shaped eyes and ears set towards the outside of their head. The American style is a rounder but sculpted head with a shorter body and sturdier appearance to reflect the old-fashioned Siamese and rounded Burmese from which it was originally bred in the United States. While many American breeders avoided using the extreme "contemporary" Burmese in favor of the more moderate "traditional" Burmese, the original Tonkinese breed standard was based on the extreme spherical style of the Burmese descended from Wong Mau.
Coat and color
The Tonkinese comes in several colours, listed below.
Black (also referred to as “brown”, “seal”, or “natural” by different fanciers or organizations)
Blue
Chocolate (also called “champagne”)
Lilac (also called “platinum”)
Cinnamon
Fawn
Red
Cream
Additional dilute modifiers (including “caramel”, “apricot”)
Each color also has three variations of colorpoint coat pattern:
"point", the classic Siamese-style dark face, ears, legs and tail on a contrasting white or cream base, and blue eyes;
"solid" or "sepia", similar to the Burmese, in which the color is essentially uniform over the body with only faintly visible points and golden-amber or green eyes; and
"mink", a unique intermediate between the other two, in which the base is a lighter shade but still harmonious with the point color, and the eyes are a lighter blue-green, called aquamarine. They can be anywhere on the entire blue-green to green-blue spectrum.
Additionally, all colors can present in the tortoiseshell or tabby patterns.
Color and pattern recognition
Depending on the cat registry, not all colors and patterns are allowed for the Tonkinese cat breed. The Tonkinese is currently officially recognized by the Cat Fanciers' Association (CFA) and the World Cat Federation (WCF) in only four base colors: black (brown, seal, natural), blue, chocolate (champagne), and lilac (platinum). All four base colors are allowed in the three colorpoint patterns. The GCCF accepts brown, blue, chocolate, lilac, cinnamon, fawn, red, and cream, plus caramel and apricot. These colors are allowed in the tortoiseshell and tabby patterns, in addition to the three colorpoint patterns. Like the GCCF, The International Cat Association (TICA) accepts all of the genetically possible colors and patterns.
Temperament
Like both parent breeds, Tonkinese are active, vocal and generally people-oriented cats, playful and interested in everything going on around them; however, this also means they are easily susceptible to becoming lonesome or bored. Their voice is similar in tone to the Burmese, persistent but softer and sweeter than the Siamese, similar to the gentle quacking of a duck. Like Burmese, Tonkinese are reputed to sometimes engage in such dog-like behaviors as fetching, and to enjoy jumping to great heights.
Health
In a 2012 review of over 5,000 cases of urate urolithiasis the Tonkinese was significantly under-represented, with only one of the recorded cases belonging to the breed against a population of 365.
Genetics
The Tonkinese is a crossbreed, with coat color and pattern wholly dependent on whether individuals carry the Siamese or Burmese gene. Breeding two mink Tonkinese cats does not usually yield a full litter of mink kittens, as this intermediate pattern is the result of having one gene for the Burmese solid pattern and one for the Siamese pointed pattern.
Colors and patterns in any litter depend both on statistical chance and the color genetics and patterns of the parents. Breeding between two mink-patterned cats will, on average, produce half mink kittens and one quarter each pointed and sepia kittens. A pointed and a sepia bred together will always produce all mink patterned kittens. A pointed bred to a mink will produce half pointed and half mink kittens, and a sepia bred to a mink will produce half sepia and half mink kittens.
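These ratios follow directly from a Punnett-square enumeration of the colorpoint locus. In the sketch below, the allele labels `cs` and `cb` are illustrative stand-ins for the Siamese (pointed) and Burmese (sepia) alleles, with mink as the heterozygote:

```python
from itertools import product
from collections import Counter

# Genotypes at the colorpoint locus (labels are illustrative):
#   pointed = cs/cs, sepia = cb/cb, mink = cs/cb (heterozygous)
def phenotype(genotype):
    alleles = frozenset(genotype)
    if alleles == {'cs'}:
        return 'pointed'
    if alleles == {'cb'}:
        return 'sepia'
    return 'mink'

def cross(parent1, parent2):
    # Punnett square: each parent contributes one of its two alleles
    offspring = Counter(phenotype((a, b)) for a, b in product(parent1, parent2))
    total = sum(offspring.values())
    return {k: v / total for k, v in offspring.items()}

mink = ('cs', 'cb')
print(cross(mink, mink))            # half mink, quarter pointed, quarter sepia
print(cross(('cs', 'cs'), ('cb', 'cb')))  # pointed x sepia: all mink
```

The enumeration reproduces the expected litter ratios described above.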
Sulfur trioxide (alternative spelling sulphur trioxide) is the chemical compound with the formula SO3. It has been described as "unquestionably the most [economically] important sulfur oxide". It is prepared on an industrial scale as a precursor to sulfuric acid.
Sulfur trioxide exists in several forms: gaseous monomer, crystalline trimer, and solid polymer. Sulfur trioxide is a solid at just below room temperature with a relatively narrow liquid range. Gaseous SO3 is the primary precursor to acid rain.
Molecular structure and bonding
Monomer
The molecule SO3 is trigonal planar. As predicted by VSEPR theory, its structure belongs to the D3h point group. The sulfur atom has an oxidation state of +6 and may be assigned a formal charge value as low as 0 (if all three sulfur-oxygen bonds are assumed to be double bonds) or as high as +2 (if the octet rule is assumed). When the formal charge is non-zero, the S-O bonding is assumed to be delocalized. In any case the three S-O bond lengths are equal to one another, at 1.42 Å. The electrical dipole moment of gaseous sulfur trioxide is zero.
Trimer
Both liquid and gaseous SO3 exist in an equilibrium between the monomer and the cyclic trimer. The nature of solid SO3 is complex and at least three polymorphs are known, with conversion between them being dependent on traces of water.
Absolutely pure SO3 freezes at 16.8 °C to give the γ-SO3 form, which adopts the cyclic trimer configuration [S(=O)2(μ-O)]3.
Polymer
If SO3 is condensed above 27 °C, then α-SO3 forms, which has a melting point of 62.3 °C. α-SO3 is fibrous in appearance. Structurally, it is the polymer [S(=O)2(μ-O)]n. Each end of the polymer is terminated with OH groups. β-SO3, like the alpha form, is fibrous but of different molecular weight, consisting of a hydroxyl-capped polymer, but melts at 32.5 °C. Both the gamma and the beta forms are metastable, eventually converting to the stable alpha form if left standing for sufficient time. This conversion is caused by traces of water.
Relative vapor pressures of solid SO3 are alpha < beta < gamma at identical temperatures, indicative of their relative molecular weights. Liquid sulfur trioxide has a vapor pressure consistent with the gamma form. Thus heating a crystal of α-SO3 to its melting point results in a sudden increase in vapor pressure, which can be forceful enough to shatter a glass vessel in which it is heated. This effect is known as the "alpha explosion".
Chemical reactions
Sulfur trioxide undergoes many reactions.
Hydration and hydrofluorination
SO3 is the anhydride of H2SO4. Thus, it is susceptible to hydration:
SO3 + H2O → H2SO4 (ΔfH = −200 kJ/mol)
Gaseous sulfur trioxide fumes profusely even in a relatively dry atmosphere owing to formation of a sulfuric acid mist.
SO3 is aggressively hygroscopic. The heat of hydration is sufficient that mixtures of SO3 and wood or cotton can ignite. In such cases, SO3 dehydrates these carbohydrates.
Akin to the behavior of H2O, hydrogen fluoride adds to give fluorosulfuric acid:
SO3 + HF → FSO3H
Deoxygenation
SO3 reacts with dinitrogen pentoxide to give the nitronium salt of pyrosulfate:
2 SO3 + N2O5 → [NO2]2S2O7
Oxidant
Sulfur trioxide is an oxidant. It oxidizes sulfur dichloride to thionyl chloride.
SO3 + SCl2 → SOCl2 + SO2
Lewis acid
SO3 is a strong Lewis acid readily forming adducts with Lewis bases. With pyridine, it gives the sulfur trioxide pyridine complex. Related adducts form from dioxane and trimethylamine.
Sulfonating agent
Sulfur trioxide is a potent sulfonating agent, i.e. it adds SO3 groups to substrates. Often the substrates are organic, as in aromatic sulfonation. For activated substrates, Lewis base adducts of sulfur trioxide are effective sulfonating agents.
Preparation
The direct oxidation of sulfur dioxide to sulfur trioxide in air proceeds very slowly:
2 SO2 + O2 → 2 SO3 (ΔH = −198.4 kJ/mol)
Industrial
Industrially SO3 is made by the contact process. Sulfur dioxide is produced by the burning of sulfur or iron pyrite (a sulfide ore of iron). After being purified by electrostatic precipitation, the SO2 is then oxidised by atmospheric oxygen at between 400 and 600 °C over a catalyst. A typical catalyst consists of vanadium pentoxide (V2O5) activated with potassium oxide K2O on kieselguhr or silica support. Platinum also works very well but is too expensive and is poisoned (rendered ineffective) much more easily by impurities.
The majority of sulfur trioxide made in this way is converted into sulfuric acid.
Laboratory
Sulfur trioxide can be prepared in the laboratory by the two-stage pyrolysis of sodium bisulfate. Sodium pyrosulfate is an intermediate product:
Dehydration at 315 °C:
2 NaHSO4 → Na2S2O7 + H2O
Cracking at 460 °C:
Na2S2O7 → Na2SO4 + SO3
The latter occurs at much lower temperatures (45–60 °C) in the presence of catalytic H2SO4. In contrast, KHSO4 undergoes the same reactions at a higher temperature.
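As a worked example of the stoichiometry of the sodium bisulfate route: summing the two stages gives the net reaction 2 NaHSO4 → Na2SO4 + H2O + SO3, so two moles of bisulfate yield one mole of SO3. The sketch below computes the theoretical yield from an arbitrary 100 g starting amount, using standard atomic masses:

```python
# Net reaction over both stages: 2 NaHSO4 -> Na2SO4 + H2O + SO3
# Atomic masses in g/mol (standard values)
M = {'Na': 22.99, 'H': 1.008, 'S': 32.06, 'O': 16.00}

M_NaHSO4 = M['Na'] + M['H'] + M['S'] + 4 * M['O']   # ~120.06 g/mol
M_SO3 = M['S'] + 3 * M['O']                          # ~80.06 g/mol

mass_NaHSO4 = 100.0                                  # grams of starting salt
mol_NaHSO4 = mass_NaHSO4 / M_NaHSO4
mass_SO3 = (mol_NaHSO4 / 2) * M_SO3                  # 2:1 mole ratio
print(round(mass_SO3, 1))  # theoretical yield in grams, ~33.3
```

In practice the recovered amount is lower, since some SO3 is lost to the vapor phase and to reaction with traces of moisture.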
Another two-step method involving salt pyrolysis starts with concentrated sulfuric acid and anhydrous tin tetrachloride:
Reaction between tin tetrachloride and sulfuric acid in a 1:2 molar mixture at near reflux (114 °C):
SnCl4 + 2 H2SO4 → Sn(SO4)2 + 4 HCl
Pyrolysis of anhydrous tin(IV) sulfate at 150–200 °C:
Sn(SO4)2 → SnO2 + 2 SO3
The advantage of this method over the sodium bisulfate one is that it requires much lower temperatures and can be done using normal borosilicate laboratory glassware without the risk of shattering. A disadvantage is that it generates significant quantities of hydrogen chloride gas which needs to be captured as well.
SO3 may also be prepared by dehydrating sulfuric acid with phosphorus pentoxide.
Applications
Sulfur trioxide is a reagent in sulfonation reactions. Dimethyl sulfate is produced commercially by the reaction of dimethyl ether with sulfur trioxide:
(CH3)2O + SO3 → (CH3)2SO4
Sulfate esters are used as detergents, dyes, and pharmaceuticals. Sulfur trioxide is generated in situ from sulfuric acid or is used as a solution in the acid.
B2O3 stabilized sulfur trioxide was traded by Baker & Adamson under the tradename "Sulfan" in the 20th century.
Safety
Along with being an oxidizing agent, sulfur trioxide is highly corrosive. It reacts violently with water to produce highly corrosive sulfuric acid.
Auscultation (based on the Latin verb auscultare "to listen") is listening to the internal sounds of the body, usually using a stethoscope. Auscultation is performed for the purposes of examining the circulatory and respiratory systems (heart and breath sounds), as well as the alimentary canal.
The term was introduced by René Laennec. The act of listening to body sounds for diagnostic purposes has its origin further back in history, possibly as early as Ancient Egypt. Auscultation and palpation go together in physical examination and are alike in that both have ancient roots, both require skill, and both are still important today. Laennec's contributions were refining the procedure, linking sounds with specific pathological changes in the chest, and inventing a suitable instrument (the stethoscope) to mediate between the patient's body and the clinician's ear.
Auscultation is a skill that requires substantial clinical experience, a fine stethoscope and good listening skills. Health professionals (doctors, nurses, etc.) listen to three main organs and organ systems during auscultation: the heart, the lungs, and the gastrointestinal system. When auscultating the heart, doctors listen for abnormal sounds, including heart murmurs, gallops, and other extra sounds coinciding with heartbeats. Heart rate is also noted. When listening to lungs, breath sounds such as wheezes, crepitations and crackles are identified. The gastrointestinal system is auscultated to note the presence of bowel sounds.
Electronic stethoscopes can be recording devices, and can provide noise reduction and signal enhancement. This is helpful for purposes of telemedicine (remote diagnosis) and teaching. This opened the field to computer-aided auscultation. Ultrasonography (US) inherently provides capability for computer-aided auscultation, and portable US, especially portable echocardiography, replaces some stethoscope auscultation (especially in cardiology), although not nearly all of it (stethoscopes are still essential in basic checkups, listening to bowel sounds, and other primary care contexts).
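A toy sketch of the kind of noise reduction an electronic stethoscope might apply: a synthetic low-frequency "heart sound" buried in random noise is smoothed with a short moving-average (FIR low-pass) filter. The 50 Hz tone, noise level, and window size are illustrative assumptions, not specifications of any real device:

```python
import math
import random

fs = 1000                                     # samples per second (assumed)
clean = [math.sin(2 * math.pi * 50 * i / fs) for i in range(fs)]
random.seed(0)
noisy = [c + random.gauss(0, 0.5) for c in clean]

def moving_average(x, w):
    # simple FIR low-pass: mean over a sliding window of ~w samples
    half = w // 2
    out = []
    for i in range(len(x)):
        window = x[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def rms_error(a, b):
    return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)) / len(a))

smoothed = moving_average(noisy, 5)
# Smoothing brings the signal closer to the clean reference:
print(rms_error(noisy, clean), '->', rms_error(smoothed, clean))
```

Real devices use far more sophisticated filtering, but the principle is the same: attenuate out-of-band noise while preserving the low-frequency heart and breath sounds.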
Auscultogram
The sounds of auscultation can be depicted using symbols to produce an auscultogram. It is used in cardiology training.
Mediate and immediate auscultation
Mediate auscultation is an antiquated medical term for listening (auscultation) to the internal sounds of the body using an instrument (mediate), usually a stethoscope. It is opposed to immediate auscultation, directly placing the ear on the body.
Doppler auscultation
It was demonstrated in the 2000s that Doppler auscultation using a handheld ultrasound transducer enables the auscultation of valvular movements and blood flow sounds that are undetected during cardiac examination with a stethoscope. Doppler auscultation presented a sensitivity of 84% for the detection of aortic regurgitations, while classic stethoscope auscultation presented a sensitivity of 58%. Moreover, Doppler auscultation was superior in the detection of impaired ventricular relaxation. Since the physics of Doppler auscultation and classic auscultation are different, it has been suggested that the two methods could complement each other.
The littoral zone, also called litoral or nearshore, is the part of a sea, lake, or river that is close to the shore. In coastal ecology, the littoral zone includes the intertidal zone (also known as the foreshore), extending from the high water mark, which is rarely inundated, to coastal areas that are permanently submerged, and the terms are often used interchangeably. However, the geographical meaning of littoral zone extends well beyond the intertidal zone to include all neritic waters within the bounds of continental shelves.
Etymology
The word littoral may be used both as a noun and as an adjective. It derives from the Latin noun litus, litoris, meaning "shore". (The doubled t is a late-medieval innovation, and the word is sometimes seen in the more classical-looking spelling litoral.)
Description
The term has no single definition. What is regarded as the full extent of the littoral zone, and the way the littoral zone is divided into subregions, varies in different contexts. For lakes, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. The use of the term also varies from one part of the world to another, and between different disciplines. For example, military commanders speak of the littoral in ways that are quite different from the definition used by marine biologists.
The adjacency of water gives a number of distinctive characteristics to littoral regions. The erosive power of water results in particular types of landforms, such as sand dunes, and estuaries. The natural movement of the littoral along the coast is called the littoral drift. Biologically, the ready availability of water enables a greater variety of plant and animal life, and particularly the formation of extensive wetlands. In addition, the additional local humidity due to evaporation usually creates a microclimate supporting unique types of organisms.
In oceanography and marine biology
In oceanography and marine biology, the idea of the littoral zone is extended roughly to the edge of the continental shelf. Starting from the shoreline, the littoral zone begins at the spray region just above the high tide mark. From here, it moves to the intertidal region between the high and low water marks, and then out as far as the edge of the continental shelf. These three subregions are called, in order, the supralittoral zone, the eulittoral zone, and the sublittoral zone.
Supralittoral zone
The supralittoral zone (also called the splash, spray or supratidal zone) is the area above the spring high tide line that is regularly splashed, but not submerged by ocean water. Seawater penetrates these elevated areas only during storms with high tides. Organisms that live here must cope with exposure to fresh water from rain, cold, heat, dryness and predation by land animals and seabirds. At the top of this area, patches of dark lichens can appear as crusts on rocks. Some types of periwinkles, Neritidae and detritus feeding Isopoda commonly inhabit the lower supralittoral.
Eulittoral zone
The eulittoral zone (also called the midlittoral or mediolittoral zone) is the intertidal zone, known also as the foreshore. It extends from the spring high tide line, which is rarely inundated, to the spring low tide line, which is rarely not inundated. It is alternately exposed and submerged once or twice daily. Organisms living here must be able to withstand the varying conditions of temperature, light, and salinity. Despite this, productivity is high in this zone. The wave action and turbulence of recurring tides shape and reform cliffs, gaps and caves, offering a huge range of habitats for sedentary organisms. Protected rocky shorelines usually show a narrow, almost homogenous, eulittoral strip, often marked by the presence of barnacles. Exposed sites show a wider extension and are often divided into further zones. For more on this, see intertidal ecology.
Sublittoral zone
The sublittoral zone starts immediately below the eulittoral zone. This zone is permanently covered with seawater and is approximately equivalent to the neritic zone.
In physical oceanography, the sublittoral zone refers to coastal regions with significant tidal flows and energy dissipation, including non-linear flows, internal waves, river outflows and oceanic fronts. In practice, this typically extends to the edge of the continental shelf, with depths around 200 meters.
In marine biology, the sublittoral zone refers to the areas where sunlight reaches the ocean floor, that is, where the water is never so deep as to take it out of the photic zone. This results in high primary production and makes the sublittoral zone the location of the majority of sea life. As in physical oceanography, this zone typically extends to the edge of the continental shelf. The benthic zone in the sublittoral is much more stable than in the intertidal zone; temperature, water pressure, and the amount of sunlight remain fairly constant. Sublittoral corals do not have to deal with as much change as intertidal corals. Corals can live in both zones, but they are more common in the sublittoral zone.
Within the sublittoral, marine biologists also identify the following:
The infralittoral zone is the algal dominated zone, which may extend to five metres below the low water mark.
The circalittoral zone is the region beyond the infralittoral, that is, below the algal zone and dominated by sessile animals such as mussels and oysters.
Shallower regions of the sublittoral zone, extending not far from the shore, are sometimes referred to as the subtidal zone.
Habitats in littoral zones
Many vertebrates (e.g., mammals, waterfowl, reptiles) and invertebrates (insects, etc.) use both the littoral zone as well as the terrestrial ecosystem for food and habitat. Biota that are commonly assumed to reside in the pelagic zone often rely heavily on resources from the littoral zone. Littoral areas of ponds and lakes are typically better oxygenated, structurally more complex, and afford more abundant and diverse food resources than do profundal sediments. All these factors lead to a high diversity of insects and very complex trophic interactions.
The great lakes of the world represent a global heritage of surface freshwater and aquatic biodiversity. Species lists for 14 of the world's largest lakes reveal that 15% of the global diversity (the total number of species) of freshwater fishes, 9% of non-insect freshwater invertebrate diversity, and 2% of aquatic insect diversity live in this handful of lakes. The vast majority (more than 93%) of species inhabit the shallow, nearshore littoral zone, and 72% are completely restricted to the littoral zone, even though littoral habitats are a small fraction of total lake areas.
Because the littoral zone is important for many recreational and industrial purposes, it is often severely affected by many human activities that increase nutrient loading, spread invasive species, cause acidification and climate change, and produce increased fluctuations in water level. Littoral zones are both more negatively affected by human activity and less intensively studied than offshore waters. Conservation of the remarkable biodiversity and biotic integrity of large lakes will require better integration of littoral zones into our understanding of lake ecosystem functioning and focused efforts to alleviate human impacts along the shoreline.
In freshwater ecosystems
In fresh water, the littoral zone is the nearshore habitat where photosynthetically active radiation penetrates to the lake bottom in sufficient quantities to support photosynthesis. Other definitions are sometimes used. For example, the Minnesota Department of Natural Resources defines the littoral zone as that portion of a lake less than 15 feet in depth. Such fixed-depth definitions often do not accurately represent the true ecological zonation, but are sometimes used because they are simple to apply from bathymetric maps, or because no measurements of light penetration are available. The littoral zone comprises an estimated 78% of Earth's total lake area.
The littoral zone may form a narrow or broad fringing wetland, with extensive areas of aquatic plants sorted by their tolerance to different water depths. Typically, four zones are recognized, from higher to lower on the shore: wooded wetland, wet meadow, marsh and aquatic vegetation. The relative areas of these four types depend not only on the profile of the shoreline, but also on past water levels. The area of wet meadow is particularly dependent upon past water levels; in general, the area of wet meadows along lakes and rivers increases with natural water level fluctuations. Many of the animals in lakes and rivers are dependent upon the wetlands of littoral zones, since the rooted plants provide habitat and food. Hence, a large and productive littoral zone is considered an important characteristic of a healthy lake or river.
Littoral zones are at particular risk for two reasons. First, human settlement is often attracted to shorelines, and settlement often disrupts breeding habitats for littoral zone species. For example, many turtles are killed on roads when they leave the water to lay their eggs in upland sites. Fish can be negatively affected by docks and retaining walls, which remove breeding habitat in shallow water. Some shoreline communities even deliberately try to remove wetlands, since they may interfere with activities like swimming. Overall, the presence of human settlement has a demonstrated negative impact upon adjoining wetlands. An equally serious problem is the tendency to stabilize lake or river levels with dams. Dams remove the spring flood, which carries nutrients into littoral zones, and reduce the natural fluctuation of water levels upon which many wetland plants and animals depend. Hence, over time, dams can reduce the area of wetland from a broad littoral zone to a narrow band of vegetation. Marshes and wet meadows are at particular risk.
Other definitions
For the purposes of naval operations, the US Navy divides the littoral zone in the ways shown on the diagram at the top of this article. The US Army Corps of Engineers and the US Environmental Protection Agency have their own definitions, which have legal implications.
The UK Ministry of Defence defines the littoral as those land areas (and their adjacent areas and associated air space) that are susceptible to engagement and influence from the sea.
Oviraptor (; ) is a genus of oviraptorid dinosaur that lived in Asia during the Late Cretaceous period. The first remains were collected from the Djadokhta Formation of Mongolia in 1923 during a paleontological expedition led by Roy Chapman Andrews, and in the following year the genus and type species Oviraptor philoceratops were named by Henry Fairfield Osborn. The genus name refers to the initially assumed egg-stealing habits, and the specific name was intended to reinforce this view, indicating a preference for ceratopsian eggs. Although numerous specimens have been referred to the genus, Oviraptor is only known from a single partial skeleton regarded as the holotype, as well as a nest of about fifteen eggs and several small fragments from a juvenile.
Oviraptor was a rather small feathered oviraptorid, estimated at long with a weight between . It had a wide lower jaw and a skull that likely bore a crest. Both upper and lower jaws were toothless and developed a horny beak, which, along with the robust lower jaws, was used during feeding. The arms were well-developed and elongated, ending in three fingers with curved claws. Like other oviraptorids, Oviraptor had long hindlimbs with four-toed feet, the first toe being reduced. The tail was likely not very elongated, and ended in a pygostyle that supported large feathers.
The relationships of Oviraptor were poorly understood at the time of its discovery, and it was assigned to the unrelated Ornithomimidae by its original describer, Henry Osborn. However, re-examinations by Rinchen Barsbold showed that Oviraptor was distinct enough to warrant a separate family, the Oviraptoridae. When first described, Oviraptor was interpreted as an egg-stealing, egg-eating dinosaur, given the close association of the holotype with a dinosaur nest. However, the discovery of numerous oviraptorosaurs in nesting poses has demonstrated that this specimen was actually brooding the nest, not stealing or feeding on the eggs. Moreover, remains of a small juvenile or nestling have been reported in association with the holotype specimen, further supporting parental care.
History of discovery
The first remains of Oviraptor were discovered in reddish sandstones of the Late Cretaceous Djadokhta Formation of Mongolia, at the Bayn Dzak locality (also known as the Flaming Cliffs), during the Third Central Asiatic Expedition in 1923. This expedition was led by the North American naturalist Roy Chapman Andrews and resulted in the discovery of the fossil remains of three theropods new to science, including those of Oviraptor. These were formally described by the North American paleontologist Henry Fairfield Osborn in 1924, who, on the basis of the new material, named the genera Oviraptor, Saurornithoides and Velociraptor. Oviraptor was erected with the type species O. philoceratops based on the holotype AMNH 6517, a partial individual lacking the back of the skeleton but including a badly crushed skull, partial cervical and dorsal vertebrae, pectoral elements including the furcula, the left arm and partial hands, the left ilium, and some ribs. Notably, this specimen was found lying over a nest of approximately 15 eggs (catalogued as AMNH 6508), with the skull separated from the eggs by only of sediment. Given the close proximity of the two specimens, Osborn interpreted Oviraptor as a dinosaur with egg-eating habits, and explained that the generic name, Oviraptor, is Latin for "egg seizer" or "egg thief", a reference to the association of the fossils. The specific name, philoceratops, means "fondness for ceratopsian eggs", reflecting the initial belief that the nest belonged to Protoceratops or another ceratopsian. Osborn nevertheless suggested that the name Oviraptor could reflect an incorrect perception of this dinosaur. He also found Oviraptor to be similar to the fast-running ornithomimids (considered related at the time, although now known to be unrelated) based on the toothless jaws, and assigned Oviraptor to the Ornithomimidae.
Osborn had previously reported the taxon under the name "Fenestrosaurus philoceratops", but this name was later discarded.
In 1976, the Mongolian paleontologist Rinchen Barsbold noted some inconsistencies regarding the taxonomic placement of Oviraptor and concluded that this taxon was quite distinct from ornithomimids based on anatomical traits. Under this consideration, he erected the Oviraptoridae to contain Oviraptor and close relatives. After Osborn's initial description of Oviraptor, the egg nest associated with the holotype was accepted to have belonged to Protoceratops, and oviraptorids were largely considered to have been egg-eating theropods. Nevertheless, in the 1990s, the discovery of numerous nesting and nestling oviraptorid specimens proved that Osborn was correct in his caution regarding the name of Oviraptor. These findings showed that oviraptorids brooded and protected their nests by crouching on them. This new line of evidence showed that the nest associated with the holotype of Oviraptor belonged to it and the specimen was actually brooding the eggs at the time of death, not preying on them.
Referred specimens
After the naming of Oviraptoridae in 1976, Barsbold referred six additional specimens to Oviraptor, including two under the numbers MPC-D 100/20 and 100/21. In 1986, Barsbold realized that these two did not belong to the genus and instead represented a new oviraptorid: Conchoraptor. Most of the other specimens are also unlikely to belong to Oviraptor itself, and they have been assigned to other oviraptorids. In 1996, Dong Zhiming and Philip J. Currie referred to Oviraptor a partial individual, also associated with eggs, from the Bayan Mandahu Formation of China: the specimen IVPP V9608. However, in 2010 Nicholas R. Longrich and the latter two paleontologists expressed uncertainty about this referral, citing several anatomical differences such as the hand phalangeal proportions, and concluded that the specimen represents a different, indeterminate species not referable to this taxon. In 1981, Barsbold referred the specimen MPC-D 100/42 to Oviraptor, a very well-preserved and rather complete individual from the Djadokhta Formation. Since the known elements of Oviraptor were so fragmentary compared to those of other members of the family, MPC-D 100/42 became the primary reference for depictions of this taxon, prominently labelled as Oviraptor philoceratops in the scientific literature.
This conception was refuted by James M. Clark and colleagues in 2002, who noted that this tall-crested specimen shares more skull features with Citipati than with Oviraptor (which, in fact, does not preserve a crest), and that it may represent a second species of the former, or an entirely new genus. In 1986, Barsbold described a second species of Oviraptor, "O. mongoliensis", based on specimen MPC-D 100/32a, which hails from the Nemegt Formation. A re-examination by Barsbold in 1997 found enough differences in this specimen to warrant the new genus Rinchenia, but he did not describe it formally, and the new oviraptorid remained a nomen nudum. This was amended in 2004 by the Polish paleontologist Halszka Osmólska and team, who formally named the taxon Rinchenia mongoliensis. In 2018, the North American paleontologist Mark A. Norell and colleagues reported a new specimen of Oviraptor, AMNH 33092, composed of a tibia and two metatarsals of a nestling or very small juvenile. AMNH 33092 was found in association with the holotype and was likely part of the nest. Oviraptor is now known from the holotype with associated eggs, and a juvenile/nestling.
Description
The holotype specimen has been estimated at in length with a weight ranging from . Though the holotype largely lacks the posterior region of the skeleton, it is likely that Oviraptor had two well-developed hindlimbs that ended in three functional toes, with the first toe vestigial, as well as a relatively reduced tail. As evidenced in related oviraptorids, the arms were covered by elongated feathers, and the tail ended in a pygostyle, which is known to support a fan of feathers.
Skull
The skull of Oviraptor was deep and shortened, with large fenestrae (openings) compared to other dinosaurs, and measures about long as preserved. The actual length may have been greater, given that the holotype skull lacks several regions, such as the premaxilla. The holotype skull lacks a crest almost in its entirety; however, the top surfaces of the fused parietal and frontal bones indicate that it likely had a well-developed crest, supported by the nasal and (mainly) premaxilla bones of the rostrum. Oviraptor had an elongated maxilla and dentary, which may have resulted in a longer snout compared to the stockier jaws of other oviraptorids. The palate is rigid, extended below the jaw line, and formed by the premaxillae, vomers, and maxillae. As in other oviraptorids, it may have had a pair of downward-directed, tooth-like projections on the palate, and the nares (external nostrils) would have been relatively small and placed high on the skull. Oviraptor had toothless jaws that ended in a robust, parrot-like rhamphotheca (horny beak). The curvature of the dentary tip was down-turned but less pronounced than in other oviraptorids, such as Citipati. As a whole, the lower jaw is a short and deep bone that covers .
Postcranial skeleton
As in most oviraptorids, the neural spines of the holotype cervical vertebrae were short, and the neural arches were X-shaped; however, the spines become more pronounced in posterior vertebrae. The zygapophyses of the first cervical vertebrae are parallel to each other, and the postzygapophyses do not appear to diverge significantly from the midline, much as in Citipati. The cervical ribs are fused to the vertebrae in the holotype. In the anterior dorsal vertebrae, the neural spines are rectangular in lateral view and larger than those of the cervicals. The anteriormost dorsal vertebra bears several pleurocoels (small air-spaced holes), similar to those of Khaan.