https://en.wikipedia.org/wiki/Internet%20of%20things
Internet of things
Internet of things (IoT) describes devices with sensors, processing ability, software and other technologies that connect and exchange data with other devices and systems over the Internet or other communication networks. The Internet of things encompasses electronics, communication, and computer science engineering. "Internet of things" has been considered a misnomer because devices do not need to be connected to the public internet; they only need to be connected to a network and be individually addressable. The field has evolved due to the convergence of multiple technologies, including ubiquitous computing, commodity sensors, and increasingly powerful embedded systems, as well as machine learning. Older fields of embedded systems, wireless sensor networks, control systems, and automation (including home and building automation) independently and collectively enable the Internet of things. In the consumer market, IoT technology is most synonymous with "smart home" products, including devices and appliances (lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common ecosystems and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems. There are a number of concerns about the risks in the growth of IoT technologies and products, especially in the areas of privacy and security, and consequently there have been industry and government moves to address these concerns, including the development of international and local standards, guidelines, and regulatory frameworks. Because of their interconnected nature, IoT devices are vulnerable to security breaches and privacy concerns. At the same time, the way these devices communicate wirelessly creates regulatory ambiguities, complicating the jurisdictional boundaries of data transfer.

Background

Around 1972, for its remote-site use, the Stanford Artificial Intelligence Laboratory developed a computer-controlled vending machine, adapted from a machine rented from Canteen Vending, which sold for cash or, through a computer terminal (Teletype Model 33 KSR), on credit. Products included, at least, beer, yogurt, and milk. It was called the Prancing Pony, after the name of the room, itself named after an inn in Tolkien's Lord of the Rings, as each room at the Stanford Artificial Intelligence Laboratory was named after a place in Middle-earth. A successor version still operates in the Computer Science Department at Stanford, with both hardware and software having been updated.

History

In 1982, an early concept of a network-connected smart device was built: an Internet interface for sensors installed in the Carnegie Mellon University Computer Science Department's Coca-Cola vending machine. Supplied by graduate student volunteers, the machine provided a temperature model and an inventory status, inspired by the computer-controlled vending machine in the Prancing Pony room at the Stanford Artificial Intelligence Laboratory. First accessible only on the CMU campus, it became the first ARPANET-connected appliance. Mark Weiser's 1991 paper on ubiquitous computing, "The Computer of the 21st Century", as well as academic venues such as UbiComp and PerCom, produced the contemporary vision of the IoT. In 1994, Reza Raji described the concept in IEEE Spectrum as "[moving] small packets of data to a large set of nodes, so as to integrate and automate everything from home appliances to entire factories".
Between 1993 and 1997, several companies proposed solutions like Microsoft's at Work or Novell's NEST. The field gained momentum when Bill Joy envisioned device-to-device communication as a part of his "Six Webs" framework, presented at the World Economic Forum at Davos in 1999. The concept of the "Internet of things" and the term itself first appeared in a speech by Peter T. Lewis to the Congressional Black Caucus Foundation Legislative Weekend in Washington, D.C., published in September 1985. According to Lewis, "The Internet of Things, or IoT, is the integration of people, processes and technology with connectable devices and sensors to enable remote monitoring, status, manipulation and evaluation of trends of such devices." The term "Internet of things" was coined independently by Kevin Ashton of Procter & Gamble, later of MIT's Auto-ID Center, in 1999, though he prefers the phrase "Internet for things". At that point, he viewed radio-frequency identification (RFID) as essential to the Internet of things, which would allow computers to manage all individual things. The main theme of the Internet of things is to embed short-range mobile transceivers in various gadgets and daily necessities to enable new forms of communication between people and things, and between things themselves. In 2004, Cornelius "Pete" Peterson, CEO of NetSilicon, predicted that "The next era of information technology will be dominated by [IoT] devices, and networked devices will ultimately gain in popularity and significance to the extent that they will far exceed the number of networked computers and workstations." Peterson believed that medical devices and industrial controls would become dominant applications of the technology. Defining the Internet of things as "simply the point in time when more 'things or objects' were connected to the Internet than people", Cisco Systems estimated that the IoT was "born" between 2008 and 2009, with the things/people ratio growing from 0.08 in 2003 to 1.84 in 2010.

Applications

The extensive set of applications for IoT devices is often divided into consumer, commercial, industrial, and infrastructure spaces.

Consumers

A growing portion of IoT devices is created for consumer use, including connected vehicles, home automation, wearable technology, connected health, and appliances with remote monitoring capabilities.

Home automation

IoT devices are a part of the larger concept of home automation, which can include lighting, heating and air conditioning, media and security systems, and camera systems. Long-term benefits could include energy savings by automatically ensuring lights and electronics are turned off, or by making the residents of the home aware of usage. A smart home or automated home could be based on a platform or hub that controls smart devices and appliances. For instance, using Apple's HomeKit, manufacturers can have their home products and accessories controlled by an application on iOS devices such as the iPhone and the Apple Watch. This could be a dedicated app or iOS native applications such as Siri. This can be demonstrated in the case of Lenovo's Smart Home Essentials, a line of smart home devices that are controlled through Apple's Home app or Siri without the need for a Wi-Fi bridge. There are also dedicated smart home hubs that are offered as standalone platforms to connect different smart home products. These include the Amazon Echo, Google Home, Apple's HomePod, and Samsung's SmartThings Hub.
In addition to the commercial systems, there are many non-proprietary, open-source ecosystems, including Home Assistant, OpenHAB and Domoticz.

Elder care

One key application of a smart home is to assist the elderly and disabled. These home systems use assistive technology to accommodate an owner's specific disabilities. Voice control can assist users with sight and mobility limitations, while alert systems can be connected directly to cochlear implants worn by hearing-impaired users. They can also be equipped with additional safety features, including sensors that monitor for medical emergencies such as falls or seizures. Smart home technology applied in this way can provide users with more freedom and a higher quality of life.

Organizations

The term "Enterprise IoT" refers to devices used in business and corporate settings.

Medical and healthcare

The Internet of Medical Things (IoMT) is an application of the IoT for medical and health-related purposes, data collection and analysis for research, and monitoring. The IoMT has been referenced as "Smart Healthcare", as the technology for creating a digitized healthcare system, connecting available medical resources and healthcare services. IoT devices can be used to enable remote health monitoring and emergency notification systems. These health monitoring devices can range from blood pressure and heart rate monitors to advanced devices capable of monitoring specialized implants, such as pacemakers, Fitbit electronic wristbands, or advanced hearing aids. Some hospitals have begun implementing "smart beds" that can detect when they are occupied and when a patient is attempting to get up. These beds can also adjust themselves to ensure appropriate pressure and support are applied to the patient without the manual interaction of nurses. A 2015 Goldman Sachs report indicated that healthcare IoT devices "can save the United States more than $300 billion in annual healthcare expenditures by increasing revenue and decreasing cost." Moreover, the use of mobile devices to support medical follow-up led to the creation of "m-health", which uses analyzed health statistics. Specialized sensors can also be equipped within living spaces to monitor the health and general well-being of senior citizens, while also ensuring that proper treatment is being administered and assisting people in regaining lost mobility via therapy. These sensors create a network of intelligent sensors that are able to collect, process, transfer, and analyze valuable information in different environments, such as connecting in-home monitoring devices to hospital-based systems. Other consumer devices to encourage healthy living, such as connected scales or wearable heart monitors, are also a possibility with the IoT. End-to-end health monitoring IoT platforms are also available for antenatal and chronic patients, helping one manage health vitals and recurring medication requirements. Advances in plastic and fabric electronics fabrication methods have enabled ultra-low-cost, use-and-throw IoMT sensors. These sensors, along with the required RFID electronics, can be fabricated on paper or e-textiles for wirelessly powered disposable sensing devices. Applications have been established for point-of-care medical diagnostics, where portability and low system complexity are essential. The IoMT is applied not only in the clinical laboratory industry but also in the healthcare and health insurance industries.
IoMT in the healthcare industry now permits doctors, patients, and others, such as guardians of patients, nurses, and families, to be part of a system in which patient records are saved in a database, allowing doctors and the rest of the medical staff to have access to patient information. IoT devices within healthcare are structured in a multi-layered architecture. Initially, data is collected from the devices and sensors within the IoT. Subsequently, data is processed and stored within the institution's network. At this point, data becomes accessible for further internal and external procedures. Simultaneously, data is protected via various cybersecurity measures, such as the principle of least privilege (PoLP), data encryption (for example, with the Advanced Encryption Standard, AES), intrusion detection systems (IDS), and intrusion prevention systems (IPS). Lastly, data can be accessed by users on workstations or portable devices through applications like patient management software (PMS).
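As a concrete illustration of the encryption step in such a pipeline, the following is a minimal sketch of protecting a single monitor reading with AES in its authenticated AES-GCM mode, using Python's third-party cryptography package. The device identifier, payload, and associated data are hypothetical, and real deployments would add key management and transport security on top.

```python
# Minimal sketch: encrypting a patient-monitor reading with AES-GCM before it
# leaves the institution's network. Device ID and reading are hypothetical;
# key distribution and storage are out of scope here.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # in practice, from a key store
aesgcm = AESGCM(key)

reading = json.dumps({"device": "bed-42-hr-monitor", "heart_rate_bpm": 72}).encode()
nonce = os.urandom(12)  # 96-bit nonce, must be unique per message under a key
ciphertext = aesgcm.encrypt(nonce, reading, b"ward-7")  # third arg: associated data

# The receiving system decrypts with the same key, nonce, and associated data;
# any tampering with the ciphertext or associated data makes this call fail.
plaintext = aesgcm.decrypt(nonce, ciphertext, b"ward-7")
assert json.loads(plaintext)["heart_rate_bpm"] == 72
```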
IoMT in the insurance industry provides access to better and new types of dynamic information. This includes sensor-based solutions such as biosensors, wearables, connected health devices, and mobile apps to track customer behavior. This can lead to more accurate underwriting and new pricing models. The application of the IoT in healthcare plays a fundamental role in managing chronic diseases and in disease prevention and control. Remote monitoring is made possible through the connection of powerful wireless solutions. The connectivity enables health practitioners to capture patients' data and apply complex algorithms in health data analysis.

Transportation

The IoT can assist in the integration of communications, control, and information processing across various transportation systems. Application of the IoT extends to all aspects of transportation systems (i.e., the vehicle, the infrastructure, and the driver or user). Dynamic interaction between these components of a transport system enables inter- and intra-vehicular communication, smart traffic control, smart parking, electronic toll collection systems, logistics and fleet management, vehicle control, safety, and road assistance.

V2X communications

In vehicular communication systems, vehicle-to-everything communication (V2X) consists of three main components: vehicle-to-vehicle communication (V2V), vehicle-to-infrastructure communication (V2I), and vehicle-to-pedestrian communication (V2P). V2X is the first step to autonomous driving and connected road infrastructure.

Buildings and home automation

IoT devices can be used to monitor and control the mechanical, electrical and electronic systems used in various types of buildings (e.g., public and private, industrial, institutional, or residential) in home automation and building automation systems. In this context, three main areas are covered in the literature:
The integration of the Internet with building energy management systems to create energy-efficient and IoT-driven "smart buildings".
The possible means of real-time monitoring for reducing energy consumption and monitoring occupant behaviors.
The integration of smart devices in the built environment and how they might be used in future applications.

Industrial

Also known as IIoT, industrial IoT devices acquire and analyze data from connected equipment, operational technology (OT), locations, and people. Combined with OT monitoring devices, IIoT helps regulate and monitor industrial systems. The same implementation can also be carried out for automated record updates of asset placement in industrial storage units, as the size of the assets can vary from a small screw to a whole motor spare part, and misplacement of such assets can cause a loss of manpower, time, and money.

Manufacturing

The IoT can connect various manufacturing devices equipped with sensing, identification, processing, communication, actuation, and networking capabilities. Network control and management of manufacturing equipment, asset and situation management, and manufacturing process control allow the IoT to be used for industrial applications and smart manufacturing. IoT intelligent systems enable rapid manufacturing and optimization of new products and rapid response to product demands. Digital control systems to automate process controls, operator tools, and service information systems to optimize plant safety and security are within the purview of the IIoT. The IoT can also be applied to asset management via predictive maintenance, statistical evaluation, and measurements to maximize reliability. Industrial management systems can be integrated with smart grids, enabling energy optimization. Measurements, automated controls, plant optimization, health and safety management, and other functions are provided by networked sensors. In addition to general manufacturing, the IoT is also used for processes in the industrialization of construction.

Agriculture

There are numerous IoT applications in farming, such as collecting data on temperature, rainfall, humidity, wind speed, pest infestation, and soil content. This data can be used to automate farming techniques, make informed decisions to improve quality and quantity, minimize risk and waste, and reduce the effort required to manage crops. For example, farmers can now monitor soil temperature and moisture from afar and even apply IoT-acquired data to precision fertilization programs. The overall goal is that data from sensors, coupled with the farmer's knowledge and intuition about his or her farm, can help increase farm productivity and also help reduce costs. In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for IoT technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The FarmBeats project from Microsoft Research, which uses TV white space to connect farms, is now also a part of the Azure Marketplace.

Maritime

IoT devices are in use to monitor the environments and systems of boats and yachts. Many pleasure boats are left unattended for days in summer and months in winter, so such devices provide valuable early alerts of boat flooding, fire, and deep discharge of batteries. The use of global Internet data networks such as Sigfox, combined with long-life batteries and microelectronics, allows the engine rooms, bilge, and batteries to be constantly monitored and reported, for example, to connected Android and Apple applications.

Infrastructure

Monitoring and controlling operations of sustainable urban and rural infrastructures like bridges, railway tracks, and on- and offshore wind farms is a key application of the IoT.
The IoT infrastructure can be used for monitoring any events or changes in structural conditions that can compromise safety and increase risk. The IoT can benefit the construction industry through cost savings, time reduction, a better-quality workday, paperless workflow, and an increase in productivity. It can help in making faster decisions and saving money through real-time data analytics. It can also be used for scheduling repair and maintenance activities efficiently, by coordinating tasks between different service providers and users of these facilities. IoT devices can also be used to control critical infrastructure like bridges to provide access to ships. The usage of IoT devices for monitoring and operating infrastructure is likely to improve incident management and emergency response coordination, as well as quality of service and up-times, and to reduce the costs of operation in all infrastructure-related areas. Even areas such as waste management can benefit.

Metropolitan-scale deployments

There are several planned or ongoing large-scale deployments of the IoT, to enable better management of cities and systems. For example, Songdo, South Korea, the first of its kind, is a fully equipped and wired smart city that is gradually being built, with approximately 70 percent of the business district completed. Much of the city is planned to be wired and automated, with little or no human intervention. In 2014, another deployment was underway in Santander, Spain. For this deployment, two approaches were adopted. This city of 180,000 inhabitants has already seen 18,000 downloads of its city smartphone app. The app is connected to 10,000 sensors that enable services like parking search and environmental monitoring. City context information is used in this deployment so as to benefit merchants through a spark deals mechanism based on city behavior that aims at maximizing the impact of each notification. Other examples of large-scale deployments underway include the Sino-Singapore Guangzhou Knowledge City; work on improving air and water quality, reducing noise pollution, and increasing transportation efficiency in San Jose, California; and smart traffic management in western Singapore. Using its RPMA (Random Phase Multiple Access) technology, San Diego–based Ingenu has built a nationwide public network for low-bandwidth data transmissions using the same unlicensed 2.4 gigahertz spectrum as Wi-Fi. Ingenu's "Machine Network" covers more than a third of the US population across 35 major cities, including San Diego and Dallas. The French company Sigfox commenced building an ultra-narrowband wireless data network in the San Francisco Bay Area in 2014, the first business to achieve such a deployment in the U.S. It subsequently announced it would set up a total of 4,000 base stations to cover a total of 30 cities in the U.S. by the end of 2016, making it the largest IoT network coverage provider in the country thus far. Cisco also participates in smart cities projects, having deployed technologies for Smart Wi-Fi, Smart Safety & Security, Smart Lighting, Smart Parking, Smart Transports, Smart Bus Stops, Smart Kiosks, Remote Expert for Government Services (REGS), and Smart Education in a five-km area in the city of Vijayawada, India. Another example of a large deployment is the one completed by New York Waterways in New York City to connect all the city's vessels and be able to monitor them live 24/7.
The network was designed and engineered by Fluidmesh Networks, a Chicago-based company developing wireless networks for critical applications. The NYWW network currently provides coverage on the Hudson River, East River, and Upper New York Bay. With the wireless network in place, NY Waterway is able to take control of its fleet and passengers in a way that was not previously possible. New applications can include security, energy and fleet management, digital signage, public Wi-Fi, paperless ticketing, and others.

Energy management

Significant numbers of energy-consuming devices (e.g., lamps, household appliances, motors, pumps, etc.) already integrate Internet connectivity, which can allow them to communicate with utilities not only to balance power generation but also to optimize energy consumption as a whole. These devices allow for remote control by users, or central management via a cloud-based interface, and enable functions like scheduling (e.g., remotely powering on or off heating systems, controlling ovens, changing lighting conditions, etc.). The smart grid is a utility-side IoT application; systems gather and act on energy and power-related information to improve the efficiency of the production and distribution of electricity. Using advanced metering infrastructure (AMI) Internet-connected devices, electric utilities not only collect data from end-users but also manage distribution automation devices like transformers.

Environmental monitoring

Environmental monitoring applications of the IoT typically use sensors to assist in environmental protection by monitoring air or water quality, atmospheric or soil conditions, and can even include areas like monitoring the movements of wildlife and their habitats. The development of resource-constrained devices connected to the Internet also means that other applications, like earthquake or tsunami early-warning systems, can be used by emergency services to provide more effective aid. IoT devices in this application typically span a large geographic area and can also be mobile. It has been argued that the standardization the IoT brings to wireless sensing will revolutionize this area.

Living Lab

Another example of integrating the IoT is the Living Lab, which integrates and combines research and innovation processes, established within a public-private-people partnership. Between 2006 and January 2024, there were over 440 Living Labs (though not all are currently active) that use the IoT to collaborate and share knowledge between stakeholders to co-create innovative and technological products. For companies to implement and develop IoT services for smart cities, they need to have incentives. Governments play key roles in smart city projects, as changes in policy will help cities to implement the IoT, which provides effectiveness, efficiency, and accuracy of the resources being used. For instance, a government may provide tax incentives and cheap rent, improve public transport, and offer an environment where start-up companies, creative industries, and multinationals can co-create, share a common infrastructure and labor markets, and take advantage of locally embedded technologies, production processes, and transaction costs.

Military

The Internet of Military Things (IoMT) is the application of IoT technologies in the military domain for the purposes of reconnaissance, surveillance, and other combat-related objectives.
It is heavily influenced by the future prospects of warfare in an urban environment and involves the use of sensors, munitions, vehicles, robots, human-wearable biometrics, and other smart technology that is relevant on the battlefield. One example of an IoT device used in the military is the Xaver 1000 system. The Xaver 1000 was developed by Israel's Camero Tech and is the latest in the company's line of "through-wall imaging systems". The Xaver line uses millimeter wave (MMW) radar, or radar in the range of 30–300 gigahertz. It is equipped with an AI-based life-target tracking system as well as its own 3D "sense-through-the-wall" technology.

Internet of Battlefield Things

The Internet of Battlefield Things (IoBT) is a project initiated and executed by the U.S. Army Research Laboratory (ARL) that focuses on the basic science related to the IoT that enhances the capabilities of Army soldiers. In 2017, ARL launched the Internet of Battlefield Things Collaborative Research Alliance (IoBT-CRA), establishing a working collaboration between industry, university, and Army researchers to advance the theoretical foundations of IoT technologies and their applications to Army operations.

Ocean of Things

The Ocean of Things project is a DARPA-led program designed to establish an Internet of things across large ocean areas for the purposes of collecting, monitoring, and analyzing environmental and vessel activity data. The project entails the deployment of about 50,000 floats that house a passive sensor suite that autonomously detects and tracks military and commercial vessels as part of a cloud-based network.

Product digitalization

There are several applications of smart or active packaging in which a QR code or NFC tag is affixed on a product or its packaging. The tag itself is passive; however, it contains a unique identifier (typically a URL) which enables a user to access digital content about the product via a smartphone. Strictly speaking, such passive items are not part of the Internet of things, but they can be seen as enablers of digital interactions. The term "Internet of Packaging" has been coined to describe applications in which unique identifiers are used to automate supply chains and are scanned on a large scale by consumers to access digital content. Authentication of the unique identifiers, and thereby of the product itself, is possible via a copy-sensitive digital watermark or copy detection pattern when scanning a QR code, while NFC tags can encrypt communication.
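To make the unique-identifier idea concrete, here is a minimal sketch, assuming Python and the third-party qrcode package, of generating a per-item URL and encoding it as a printable QR code. The domain and path scheme are invented for illustration.

```python
# Sketch: giving each packaged item a unique identifier (a URL) and encoding
# it as a QR code, as in "Internet of Packaging" applications.
# The domain and path scheme below are hypothetical.
import uuid

import qrcode  # pip install "qrcode[pil]"

item_id = uuid.uuid4()                    # unique identifier per item
url = f"https://example.com/p/{item_id}"  # resolves to the product's digital content

img = qrcode.make(url)             # build the QR code image
img.save(f"label-{item_id}.png")   # printed onto the product or its packaging
print(url)
```

A copy-sensitive watermark or copy detection pattern, as mentioned above, would be layered onto the printed code itself; the URL alone only identifies, it does not authenticate.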
Trends and characteristics

The IoT's major trend in recent years has been the growth of devices connected to and controlled via the Internet. The wide range of applications for IoT technology means that the specifics can be very different from one device to the next, but there are basic characteristics shared by most. The IoT creates opportunities for more direct integration of the physical world into computer-based systems, resulting in efficiency improvements, economic benefits, and reduced human exertion. IoT Analytics reported there were 16.6 billion IoT devices connected in 2023. In 2020, the same firm projected there would be 30 billion devices connected by 2025. As of October 2024, there were around 17 billion.

Intelligence

Ambient intelligence and autonomous control are not part of the original concept of the Internet of things. Ambient intelligence and autonomous control do not necessarily require Internet structures, either. However, there is a shift in research (by companies such as Intel) to integrate the concepts of the IoT and autonomous control, with initial outcomes in this direction considering objects as the driving force for autonomous IoT. An approach in this context is deep reinforcement learning, in which most IoT systems provide a dynamic and interactive environment. Training an agent (i.e., an IoT device) to behave smartly in such an environment cannot be addressed by conventional machine learning algorithms such as supervised learning. With a reinforcement learning approach, a learning agent can sense the environment's state (e.g., sensing home temperature), perform actions (e.g., turning the HVAC on or off), and learn by maximizing the accumulated rewards it receives in the long term. IoT intelligence can be offered at three levels: IoT devices, edge/fog nodes, and cloud computing. The need for intelligent control and decision-making at each level depends on the time sensitivity of the IoT application. For example, an autonomous vehicle's camera needs to perform real-time obstacle detection to avoid an accident. This fast decision-making would not be possible by transferring data from the vehicle to cloud instances and returning the predictions to the vehicle. Instead, all of the operations should be performed locally in the vehicle. Integrating advanced machine learning algorithms, including deep learning, into IoT devices is an active research area aimed at making smart objects closer to reality. Moreover, it is possible to get the most value out of IoT deployments by analyzing IoT data, extracting hidden information, and predicting control decisions. A wide variety of machine learning techniques have been used in the IoT domain, ranging from traditional methods such as regression, support vector machines, and random forests to advanced ones such as convolutional neural networks, LSTMs, and variational autoencoders.
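The sense-act-reward loop described above can be sketched with tabular Q-learning, the simplest reinforcement learning method. The toy environment below (a room that warms when the HVAC heats and cools otherwise, with comfort near a 21 °C setpoint as the reward) is an invented stand-in for a real device interface, not any particular product's API.

```python
# Tabular Q-learning sketch for the thermostat example: the agent senses a
# coarse temperature state, turns the HVAC on or off, and learns by
# maximizing accumulated reward. Environment dynamics are a toy assumption.
import random

STATES = range(10, 31)   # indoor temperature in whole degrees C
ACTIONS = [0, 1]         # 0 = HVAC off, 1 = heating on
SETPOINT = 21            # comfort target, assumed for the example
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def step(temp, action):
    """Toy room dynamics: heating warms by one degree, otherwise it cools."""
    temp = min(max(temp + (1 if action else -1), 10), 30)
    reward = -abs(temp - SETPOINT)   # closer to the setpoint is better
    return temp, reward

temp = random.choice(list(STATES))
for _ in range(10_000):
    # epsilon-greedy: mostly exploit the learned values, sometimes explore
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(temp, a)])
    new_temp, reward = step(temp, action)
    best_next = max(Q[(new_temp, a)] for a in ACTIONS)
    Q[(temp, action)] += alpha * (reward + gamma * best_next - Q[(temp, action)])
    temp = new_temp

# learned policy: which action the agent prefers at each temperature
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES})
```

After enough iterations the printed policy typically switches heating on below the setpoint and off above it; a deep reinforcement learning method would replace the Q table with a neural network when the state space is too large to enumerate.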
In the future, the Internet of things may be a non-deterministic and open network in which auto-organized or intelligent entities (web services, SOA components) and virtual objects (avatars) will be interoperable and able to act independently (pursuing their own objectives or shared ones) depending on the context, circumstances, or environment. Autonomous behavior through the collection and reasoning of context information, as well as the object's ability to detect changes in the environment (faults affecting sensors) and introduce suitable mitigation measures, constitutes a major research trend, clearly needed to provide credibility to IoT technology. Modern IoT products and solutions in the marketplace use a variety of different technologies to support such context-aware automation, but more sophisticated forms of intelligence are required to permit sensor units and intelligent cyber-physical systems to be deployed in real environments.

Architecture

IoT system architecture, in its simplified view, consists of three tiers: Tier 1: Devices, Tier 2: the Edge Gateway, and Tier 3: the Cloud. Devices include networked things, such as the sensors and actuators found in IoT equipment, particularly those that use protocols such as Modbus, Bluetooth, Zigbee, or proprietary protocols to connect to an Edge Gateway. The Edge Gateway layer consists of sensor data aggregation systems called Edge Gateways that provide functionality such as pre-processing of the data, securing connectivity to the cloud using systems such as WebSockets, the event hub, and, in some cases, edge analytics or fog computing. The Edge Gateway layer is also required to give a common view of the devices to the upper layers, to facilitate easier management. The final tier includes the cloud application built for the IoT using a microservices architecture; such applications are usually polyglot and inherently secure, using HTTPS and OAuth. They include various database systems that store sensor data, such as time series databases or asset stores using backend data storage systems (e.g., Cassandra, PostgreSQL). The cloud tier in most cloud-based IoT systems features an event queuing and messaging system that handles the communication that transpires in all tiers. Some experts classify the three tiers in an IoT system as edge, platform, and enterprise, connected by the proximity network, access network, and service network, respectively. Building on the Internet of things, the web of things is an architecture for the application layer of the Internet of things, looking at the convergence of data from IoT devices into Web applications to create innovative use-cases. In order to program and control the flow of information in the Internet of things, a predicted architectural direction is called BPM Everywhere, a blending of traditional process management with process mining and special capabilities to automate the control of large numbers of coordinated devices.

Network architecture

The Internet of things requires huge scalability in the network space to handle the surge of devices. IETF 6LoWPAN can be used to connect devices to IP networks. With billions of devices being added to the Internet space, IPv6 will play a major role in handling the network-layer scalability. IETF's Constrained Application Protocol, ZeroMQ, and MQTT can provide lightweight data transport. In practice, many groups of IoT devices are hidden behind gateway nodes and may not have unique addresses. Also, the vision of everything interconnected is not needed for most applications, as it is mainly the data that needs interconnecting at a higher layer. Fog computing is a viable alternative to prevent such a large burst of data flow through the Internet. The edge devices' computation power to analyze and process data is extremely limited. Limited processing power is a key attribute of IoT devices, as their purpose is to supply data about physical objects while remaining autonomous. Heavy processing requirements use more battery power, harming an IoT device's ability to operate. Scalability is easy because IoT devices simply supply data through the Internet to a server with sufficient processing power.
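As an illustration of the lightweight transports named above, the following is a minimal sketch of an edge device publishing a reading over MQTT with the paho-mqtt client (version 2.x of the library is assumed). The broker hostname, topic, and payload are invented for the example; a local Mosquitto broker would serve the same role.

```python
# Sketch: an edge device publishing one sensor reading over MQTT.
# Broker address and topic hierarchy are hypothetical.
import json
import time

import paho.mqtt.client as mqtt  # pip install paho-mqtt

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)  # paho-mqtt >= 2.0 API
client.connect("broker.example.com", 1883, keepalive=60)
client.loop_start()  # background thread handles network traffic

payload = json.dumps({"sensor": "greenhouse-7", "temp_c": 23.4, "ts": time.time()})
# Topics are hierarchical strings; QoS 1 asks the broker to acknowledge delivery.
info = client.publish("site/greenhouse-7/temperature", payload, qos=1)
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```

The same pattern, with a subscriber on the gateway or cloud side, is what feeds the event queuing and messaging tier described above.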
Decentralized IoT

The decentralized Internet of things, or decentralized IoT, is a modified IoT that utilizes fog computing to handle and balance the requests of connected IoT devices, in order to reduce the load on cloud servers and improve responsiveness for latency-sensitive IoT applications such as vital-signs monitoring of patients, vehicle-to-vehicle communication for autonomous driving, and critical failure detection of industrial devices. Performance is improved, especially for huge IoT systems with millions of nodes. Conventional IoT is connected via a mesh network and led by a major head node (a centralized controller). The head node decides how data is created, stored, and transmitted. In contrast, decentralized IoT attempts to divide IoT systems into smaller divisions. The head node authorizes partial decision-making power to lower-level sub-nodes under a mutually agreed policy. Some approaches to decentralized IoT attempt to address the limited bandwidth and hashing capacity of battery-powered or wireless IoT devices via blockchain.

Complexity

In semi-open or closed loops (i.e., value chains, whenever a global finality can be settled), the IoT will often be considered and studied as a complex system due to the huge number of different links, interactions between autonomous actors, and its capacity to integrate new actors. At the overall stage (full open loop) it will likely be seen as a chaotic environment (since systems always have finality). As a practical approach, not all elements of the Internet of things run in a global, public space. Subsystems are often implemented to mitigate the risks of privacy, control, and reliability. For example, domestic robotics (domotics) running inside a smart home might only share data within and be available via a local network. Managing and controlling a highly dynamic ad hoc network of IoT things/devices is a tough task with traditional network architecture; Software-Defined Networking (SDN) provides an agile, dynamic solution that can cope with the special requirements of the diversity of innovative IoT applications.

Size considerations

The exact scale of the Internet of things is unknown, with figures of billions or trillions often quoted at the beginning of IoT articles. In 2015 there were 83 million smart devices in people's homes. This number was expected to grow to 193 million devices by 2020. The number of online-capable devices grew 31% from 2016 to 2017, reaching 8.4 billion.

Space considerations

In the Internet of things, the precise geographic location of a thing—and also the precise geographic dimensions of a thing—can be critical. Historically, facts about a thing, such as its location in time and space, have been less critical to track because the person processing the information can decide whether or not that information was important to the action being taken, and if so, add the missing information (or decide not to take the action). (Note that some things on the Internet of things will be sensors, and sensor location is usually important.) The GeoWeb and Digital Earth are applications that become possible when things can become organized and connected by location. However, the challenges that remain include the constraints of variable spatial scales, the need to handle massive amounts of data, and indexing for fast search and neighbour operations. On the Internet of things, if things are able to take actions on their own initiative, this human-centric mediation role is eliminated. Thus, the time-space context that we as humans take for granted must be given a central role in this information ecosystem. Just as standards play a key role on the Internet and the Web, geo-spatial standards will play a key role on the Internet of things.

A solution to the "basket of remotes"

Many IoT devices have the potential to take a piece of this market. Jean-Louis Gassée (a member of the initial Apple alumni team and co-founder of BeOS) has addressed this topic in an article on Monday Note, where he predicts that the most likely problem will be what he calls the "basket of remotes" problem, where we'll have hundreds of applications to interface with hundreds of devices that don't share protocols for speaking with one another. For improved user interaction, some technology leaders are joining forces to create standards for communication between devices to solve this problem.
Others are turning to the concept of predictive interaction of devices, "where collected data is used to predict and trigger actions on the specific devices" while making them work together.

Social Internet of things

The Social Internet of Things (SIoT) is a new kind of IoT that focuses on the importance of social interaction and relationships between IoT devices. The SIoT is a pattern in which cross-domain IoT devices enable application-to-application communication and collaboration without human intervention, in order to serve their owners with autonomous services; this can only be realized with low-level architectural support from both IoT software and hardware engineering.

Social network for IoT devices (not humans)

The IoT gives a device an identity, like a citizen in a community, and connects it to the Internet to provide services to its users. The SIoT defines a social network for IoT devices only, in which devices interact with each other toward different goals in the service of humans.

How is the SIoT different from the IoT?

The SIoT differs from the original IoT in terms of its collaboration characteristics. The IoT is passive: it is set up to serve dedicated purposes with existing IoT devices in a predetermined system. The SIoT is active: it is programmed and managed by AI to serve unplanned purposes, mixing and matching potential IoT devices from different systems to benefit its users.

How does the SIoT work?

IoT devices built with sociability broadcast their abilities or functionalities, and at the same time discover, share information with, monitor, navigate, and group with other IoT devices in the same or nearby networks, realizing the SIoT and facilitating useful service compositions to help their users proactively in everyday life, especially during emergencies (a minimal sketch of such a capability broadcast appears at the end of this section).

Social IoT examples

IoT-based smart home technology monitors the health data of patients or aging adults by analyzing their physiological parameters, and prompts nearby health facilities when emergency medical services are needed. In an emergency, an ambulance from the nearest available hospital is called automatically, with the pickup location provided and a ward assigned; the patient's health data are transmitted to the emergency department and displayed on the doctor's computer immediately for further action. IoT sensors on vehicles, roads, and traffic lights monitor the condition of vehicles and drivers, raise alerts when attention is needed, and coordinate themselves automatically to ensure that autonomous driving is working normally. If an accident happens, an IoT camera informs the nearest hospital and police station.

Social IoT challenges

The Internet of things is multifaceted and complicated. One of the main factors hindering people from adopting and using Internet of things-based products and services is its complexity. Installation and setup are a challenge, and therefore there is a need for IoT devices to mix, match, and configure themselves automatically to provide different services in different situations. System security is always a concern for any technology, and it is even more crucial for the SIoT, as not only must the security of the individual devices be considered, but also the mutual trust mechanism between collaborating IoT devices, which changes from time to time and from place to place. Another critical challenge for the SIoT is the accuracy and reliability of the sensors. In many circumstances, IoT sensors need to respond almost instantaneously to avoid accidents, injury, and loss of life.
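The capability broadcast mentioned above can be illustrated with a deliberately simplified sketch: a device announcing its abilities by UDP broadcast so that peers on the same network can discover it. The port and message format are invented for the example; real discovery normally relies on established protocols such as mDNS/DNS-SD or SSDP.

```python
# Illustrative sketch only: one way a "sociable" device might broadcast its
# abilities so nearby devices can discover and group with it. The port and
# JSON message format are invented for this example.
import json
import socket

ANNOUNCE_PORT = 47474  # hypothetical discovery port

def announce(device_id, abilities):
    """Broadcast this device's abilities to the local network."""
    msg = json.dumps({"id": device_id, "abilities": abilities}).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(msg, ("255.255.255.255", ANNOUNCE_PORT))

def listen_once(timeout=5.0):
    """Receive one peer announcement, if any arrives within the timeout."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.bind(("", ANNOUNCE_PORT))
        s.settimeout(timeout)
        data, addr = s.recvfrom(4096)
        return json.loads(data), addr

announce("hallway-camera", ["motion-detect", "video-stream"])
```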
Enabling technologies

There are many technologies that enable the IoT. Crucial to the field is the network used to communicate between devices of an IoT installation, a role that several wireless or wired technologies may fulfill.

Addressability

The original idea of the Auto-ID Center is based on RFID tags and distinct identification through the Electronic Product Code. This has evolved into objects having an IP address or URI. An alternative view, from the world of the Semantic Web, focuses instead on making all things (not just those that are electronic, smart, or RFID-enabled) addressable by the existing naming protocols, such as URI. The objects themselves do not converse, but they may now be referred to by other agents, such as powerful centralised servers acting for their human owners. Integration with the Internet implies that devices will use an IP address as a distinct identifier. Due to the limited address space of IPv4 (which allows for 4.3 billion different addresses), objects in the IoT will have to use the next generation of the Internet protocol (IPv6) to scale to the extremely large address space required. IoT devices will additionally benefit from the stateless address auto-configuration present in IPv6, as it reduces the configuration overhead on the hosts, and from the IETF 6LoWPAN header compression. To a large extent, the future of the Internet of things will not be possible without the support of IPv6; consequently, the global adoption of IPv6 in the coming years will be critical for the successful development of the IoT.
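A small sketch of what stateless address auto-configuration buys a device: the classic modified EUI-64 construction from RFC 4291 derives an IPv6 link-local interface identifier directly from a MAC address, so no per-device configuration is needed. The MAC address below is a made-up example; modern systems often prefer the privacy-preserving identifiers of RFC 7217 instead.

```python
# Sketch of the classic modified EUI-64 step in IPv6 stateless address
# autoconfiguration (RFC 4291): derive a link-local address from a MAC.
import ipaddress

def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02  # flip the universal/local bit of the first octet
    iid = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert FFFE in the middle
    # prepend the fe80::/64 link-local prefix to the 8-byte interface ID
    return ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + iid)

print(eui64_link_local("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```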
Application layer

ADRC defines an application layer protocol and supporting framework for implementing IoT applications.

Short-range wireless

Bluetooth mesh networking – Specification providing a mesh networking variant to Bluetooth Low Energy (BLE), with an increased number of nodes and a standardized application layer (Models).
Li-Fi (light fidelity) – Wireless communication technology similar to the Wi-Fi standard, but using visible-light communication for increased bandwidth.
Near-field communication (NFC) – Communication protocols enabling two electronic devices to communicate within a 4 cm range.
Radio-frequency identification (RFID) – Technology using electromagnetic fields to read data stored in tags embedded in other items.
Wi-Fi – Technology for local area networking based on the IEEE 802.11 standard, where devices may communicate through a shared access point or directly between individual devices.
Zigbee – Communication protocols for personal area networking based on the IEEE 802.15.4 standard, providing low power consumption, low data rate, and low cost.
Z-Wave – Wireless communications protocol used primarily for home automation and security applications.

Medium-range wireless

LTE-Advanced – High-speed communication specification for mobile networks. Provides enhancements to the LTE standard with extended coverage, higher throughput, and lower latency.
5G – 5G wireless networks can be used to achieve the high communication requirements of the IoT and connect a large number of IoT devices, even when they are on the move. Three features of 5G are each considered useful for supporting particular elements of the IoT: enhanced mobile broadband (eMBB), massive machine-type communications (mMTC), and ultra-reliable low-latency communications (URLLC).
LoRa – Range of up to several kilometres in urban areas, and more in rural areas (line of sight).
DASH7 – Range of up to 2 km.

Long-range wireless

Low-power wide-area networking (LPWAN) – Wireless networks designed to allow long-range communication at a low data rate, reducing the power and cost of transmission. Available LPWAN technologies and protocols include LoRaWAN, Sigfox, NB-IoT, Weightless, RPMA, MIoTy, and IEEE 802.11ah.
Very-small-aperture terminal (VSAT) – Satellite communication technology using small dish antennas for narrowband and broadband data.

Wired

Ethernet – General-purpose networking standard using twisted-pair and fiber-optic links in conjunction with hubs or switches.
Power-line communication (PLC) – Communication technology using electrical wiring to carry power and data. Specifications such as HomePlug or G.hn utilize PLC for networking IoT devices.

Comparison of technologies by layer

Different technologies have different roles in a protocol stack, and each of the popular communication technologies listed above fills a particular role in IoT applications.

Standards and standards organizations

Numerous technical standards exist for the IoT, most of which are open standards, along with standards organizations that aspire to set them successfully.

Politics and civic engagement

Some scholars and activists argue that the IoT can be used to create new models of civic engagement if device networks can be open to user control and interoperable platforms. Philip N. Howard, a professor and author, writes that political life in both democracies and authoritarian regimes will be shaped by the way the IoT will be used for civic engagement. For that to happen, he argues that any connected device should be able to divulge a list of the "ultimate beneficiaries" of its sensor data, and that individual citizens should be able to add new organisations to the beneficiary list. In addition, he argues that civil society groups need to start developing their IoT strategy for making use of data and engaging with the public.

Government regulation

One of the key drivers of the IoT is data. The success of the idea of connecting devices to make them more efficient depends upon access to and the storage and processing of data. For this purpose, companies working on the IoT collect data from multiple sources and store it in their cloud network for further processing. This leaves the door wide open for privacy and security dangers and single-point vulnerability of multiple systems. Other issues pertain to consumer choice, ownership of data, and how it is used. Though still in their infancy, regulations and governance regarding these issues of privacy, security, and data ownership continue to develop. IoT regulation depends on the country. Some examples of legislation relevant to privacy and data collection are: the US Privacy Act of 1974, the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data of 1980, and the EU Directive 95/46/EC of 1995. A report published by the Federal Trade Commission (FTC) in January 2015 made the following three recommendations:

Data security – At the time of designing the IoT, companies should ensure that data collection, storage, and processing are secure at all times. Companies should adopt a "defense in depth" approach and encrypt data at each stage.
Data consent – Users should have a choice as to what data they share with IoT companies, and users must be informed if their data gets exposed.
Data minimisation – IoT companies should collect only the data they need and retain the collected information only for a limited time.

However, the FTC stopped at just making recommendations for now. According to an FTC analysis, the existing framework, consisting of the FTC Act, the Fair Credit Reporting Act, and the Children's Online Privacy Protection Act, along with developing consumer education and business guidance, participation in multi-stakeholder efforts, and advocacy to other agencies at the federal, state, and local level, is sufficient to protect consumer rights. A resolution passed by the Senate in March 2015 is already being considered by Congress. This resolution recognized the need for formulating a national policy on the IoT and the matter of privacy, security, and spectrum. Furthermore, to provide an impetus to the IoT ecosystem, in March 2016, a bipartisan group of four senators proposed a bill, the Developing Innovation and Growing the Internet of Things (DIGIT) Act, to direct the Federal Communications Commission to assess the need for more spectrum to connect IoT devices. Approved on 28 September 2018, California Senate Bill No. 327 went into effect on 1 January 2020. The bill requires "a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorized access, destruction, use, modification, or disclosure." Several standards for the IoT industry are actually being established relating to automobiles, because most concerns arising from the use of connected cars apply to healthcare devices as well. In fact, the National Highway Traffic Safety Administration (NHTSA) is preparing cybersecurity guidelines and a database of best practices to make automotive computer systems more secure. A recent report from the World Bank examines the challenges and opportunities in government adoption of the IoT. These include:

Still early days for the IoT in government
Underdeveloped policy and regulatory frameworks
Unclear business models, despite a strong value proposition
A clear institutional and capacity gap in government and the private sector
Inconsistent data valuation and management
Infrastructure as a major barrier
Government as an enabler
Most successful pilots share common characteristics (public-private partnership, local, leadership)

In early December 2021, the U.K. government introduced the Product Security and Telecommunications Infrastructure bill (PST), an effort to require IoT distributors, manufacturers, and importers to meet certain cybersecurity standards. The bill also seeks to improve the security credentials of consumer IoT devices.

Criticism, problems and controversies

Platform fragmentation

The IoT suffers from platform fragmentation, a lack of interoperability, and an absence of common technical standards: a situation where the variety of IoT devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across inconsistent technology ecosystems hard.
For example, wireless connectivity for IoT devices can be provided using Bluetooth, Wi-Fi, Wi-Fi HaLow, Zigbee, Z-Wave, LoRa, NB-IoT, or Cat M1, as well as completely custom proprietary radios – each with its own advantages, disadvantages, and unique support ecosystem. The IoT's amorphous computing nature is also a problem for security, since patches to bugs found in the core operating system often do not reach users of older and lower-priced devices. One set of researchers says that the failure of vendors to support older devices with patches and updates leaves more than 87% of active Android devices vulnerable.

Privacy, autonomy, and control

Philip N. Howard, a professor and author, writes that the Internet of things offers immense potential for empowering citizens, making government transparent, and broadening information access. Howard cautions, however, that privacy threats are enormous, as is the potential for social control and political manipulation. Concerns about privacy have led many to consider the possibility that big-data infrastructures such as the Internet of things and data mining are inherently incompatible with privacy. Key challenges of increased digitalization in the water, transport, or energy sectors are related to privacy and cybersecurity, which necessitate an adequate response from researchers and policymakers alike. Writer Adam Greenfield claims that IoT technologies are not only an invasion of public space but are also being used to perpetuate normative behavior, citing an instance of billboards with hidden cameras that tracked the demographics of passersby who stopped to read the advertisement. The Internet of Things Council compared the increased prevalence of digital surveillance due to the Internet of things to the concept of the panopticon described by Jeremy Bentham in the 18th century. The assertion is supported by the works of French philosophers Michel Foucault and Gilles Deleuze. In Discipline and Punish: The Birth of the Prison, Foucault asserts that the panopticon was a central element of the discipline society developed during the Industrial Era. Foucault also argued that the discipline systems established in factories and schools reflected Bentham's vision of panopticism. In his 1992 paper "Postscript on the Societies of Control", Deleuze wrote that the discipline society had transitioned into a control society, with the computer replacing the panopticon as an instrument of discipline and control while still maintaining qualities similar to those of panopticism. Peter-Paul Verbeek, a professor of philosophy of technology at the University of Twente, Netherlands, writes that technology already influences our moral decision-making, which in turn affects human agency, privacy, and autonomy. He cautions against viewing technology merely as a human tool and advocates instead considering it as an active agent. Justin Brookman, of the Center for Democracy and Technology, expressed concern regarding the impact of the IoT on consumer privacy, saying that "There are some people in the commercial space who say, 'Oh, big data – well, let's collect everything, keep it around forever, we'll pay for somebody to think about security later.' The question is whether we want to have some sort of policy framework in place to limit that."
Tim O'Reilly believes that the way companies sell IoT devices to consumers is misplaced, disputing the notion that the IoT is about gaining efficiency from putting all kinds of devices online and postulating that the "IoT is really about human augmentation. The applications are profoundly different when you have sensors and data driving the decision-making." Editorials at WIRED have also expressed concern, one stating "What you're about to lose is your privacy. Actually, it's worse than that. You aren't just going to lose your privacy, you're going to have to watch the very concept of privacy be rewritten under your nose." The American Civil Liberties Union (ACLU) expressed concern regarding the ability of the IoT to erode people's control over their own lives. The ACLU wrote that "There's simply no way to forecast how these immense powers – disproportionately accumulating in the hands of corporations seeking financial advantage and governments craving ever more control – will be used. Chances are big data and the Internet of Things will make it harder for us to control our own lives, as we grow increasingly transparent to powerful corporations and government institutions that are becoming more opaque to us." In response to rising concerns about privacy and smart technology, in 2007 the British Government stated it would follow formal Privacy by Design principles when implementing its smart metering program. The program would lead to the replacement of traditional power meters with smart power meters, which could track and manage energy usage more accurately. However, the British Computer Society is doubtful these principles were ever actually implemented. In 2009 the Dutch Parliament rejected a similar smart metering program, basing its decision on privacy concerns. The Dutch program was later revised and passed in 2011.

Data storage

A challenge for producers of IoT applications is to clean, process, and interpret the vast amount of data gathered by the sensors. One proposed solution for the analytics of this information is the use of wireless sensor networks. These networks share data among sensor nodes, which is sent to a distributed system for the analytics of the sensory data. Another challenge is the storage of this bulk data. Depending on the application, there could be high data acquisition requirements, which in turn lead to high storage requirements. In 2013, the Internet was estimated to be responsible for consuming 5% of the total energy produced, and a "daunting challenge to power" IoT devices to collect and even store data still remains. Data silos, although a common challenge of legacy systems, still commonly occur with the implementation of IoT devices, particularly within manufacturing. As there are a lot of benefits to be gained from IoT and IIoT devices, the way in which the data is stored can present serious challenges if the principles of autonomy, transparency, and interoperability are not considered. The challenges do not arise from the devices themselves, but from the way in which databases and data warehouses are set up. These challenges were commonly identified in manufacturers and enterprises that have begun digital transformation, and are part of the digital foundation, indicating that in order to receive the optimal benefits from IoT devices and for decision-making, enterprises will first have to realign their data storage methods.
These challenges were identified by Keller (2021) when investigating the IT and application landscape of I4.0 implementation within German M&E manufacturers.
Security
Security is the biggest concern in adopting Internet of things technology, with concerns that rapid development is happening without appropriate consideration of the profound security challenges involved and the regulatory changes that might be necessary. The rapid development of the Internet of Things (IoT) has allowed billions of devices to connect to the network. Because of the sheer number of connected devices and the limitations of communication security technology, various security issues have gradually appeared in the IoT. Most of the technical security concerns are similar to those of conventional servers, workstations and smartphones. These concerns include using weak authentication, forgetting to change default credentials, unencrypted messages sent between devices, SQL injections, man-in-the-middle attacks, and poor handling of security updates. However, many IoT devices have severe operational limitations on the computational power available to them. These constraints often make them unable to directly use basic security measures such as implementing firewalls or using strong cryptosystems to encrypt their communications with other devices – and the low price and consumer focus of many devices makes a robust security patching system uncommon. Beyond conventional security vulnerabilities, fault injection attacks are on the rise and target IoT devices. A fault injection attack is a physical attack on a device that purposefully introduces faults in the system to change its intended behavior. Faults can also happen unintentionally, caused by environmental noise and electromagnetic fields. Ideas stemming from control-flow integrity (CFI) have been proposed to prevent fault injection attacks and to recover the system to a healthy state before the fault. Internet of things devices also have access to new areas of data, and can often control physical devices, so that even by 2014 it was possible to say that many Internet-connected appliances could already "spy on people in their own homes", including televisions, kitchen appliances, cameras, and thermostats. Computer-controlled devices in automobiles such as brakes, engine, locks, hood and trunk releases, horn, heat, and dashboard have been shown to be vulnerable to attackers who have access to the on-board network. In some cases, vehicle computer systems are Internet-connected, allowing them to be exploited remotely. By 2008 security researchers had shown the ability to remotely control pacemakers without authority. Later hackers demonstrated remote control of insulin pumps and implantable cardioverter defibrillators. Poorly secured Internet-accessible IoT devices can also be subverted to attack others. In 2016, a distributed denial-of-service attack powered by Internet of things devices running the Mirai malware took down a DNS provider and major web sites. The Mirai botnet had infected roughly 65,000 IoT devices within the first 20 hours. Eventually the infections increased to around 200,000 to 300,000. Brazil, Colombia and Vietnam made up 41.5% of the infections. The Mirai botnet had singled out specific IoT devices, consisting of DVRs, IP cameras, routers and printers. The top vendors with the most infected devices were identified as Dahua, Huawei, ZTE, Cisco, ZyXEL and MikroTik.
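To make the constrained-device point above concrete, the following sketch (my illustration, not from the source; the device name and payload are hypothetical) shows that even modest hardware can use lightweight authenticated encryption. It assumes the third-party Python "cryptography" package; AES-CCM is often suggested for constrained devices because a single AES core provides both confidentiality and integrity:

# A minimal sketch of authenticated encryption for a constrained
# IoT device, using AES-CCM from the "cryptography" package.
# In practice the 128-bit key would be provisioned per device at
# manufacture rather than generated here.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # provisioned per device
aesccm = AESCCM(key)

nonce = os.urandom(13)                      # must never repeat for a key
sensor_reading = b'{"temp_c": 21.5}'        # hypothetical payload
header = b"device-42"                       # authenticated, not encrypted

ciphertext = aesccm.encrypt(nonce, sensor_reading, header)

# The receiver decrypts and verifies in one step; a tampered
# message raises cryptography.exceptions.InvalidTag.
plaintext = aesccm.decrypt(nonce, ciphertext, header)
assert plaintext == sensor_reading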
In May 2017, Junade Ali, a computer scientist at Cloudflare, noted that native DDoS vulnerabilities exist in IoT devices due to poor implementations of the publish–subscribe pattern. These sorts of attacks have caused security experts to view IoT as a real threat to Internet services. The U.S. National Intelligence Council in an unclassified report maintains that it would be hard to deny "access to networks of sensors and remotely-controlled objects by enemies of the United States, criminals, and mischief makers... An open market for aggregated sensor data could serve the interests of commerce and security no less than it helps criminals and spies identify vulnerable targets. Thus, massively parallel sensor fusion may undermine social cohesion, if it proves to be fundamentally incompatible with Fourth-Amendment guarantees against unreasonable search." In general, the intelligence community views the Internet of things as a rich source of data. On 31 January 2019, The Washington Post wrote an article regarding the security and ethical challenges that can occur with IoT doorbells and cameras: "Last month, Ring got caught allowing its team in Ukraine to view and annotate certain user videos; the company says it only looks at publicly shared videos and those from Ring owners who provide consent. Just last week, a California family's Nest camera let a hacker take over and broadcast fake audio warnings about a missile attack, not to mention peer in on them, when they used a weak password." There have been a range of responses to concerns over security. The Internet of Things Security Foundation (IoTSF) was launched on 23 September 2015 with a mission to secure the Internet of things by promoting knowledge and best practice. Its founding board is made up of technology providers and telecommunications companies. In addition, large IT companies are continually developing innovative solutions to ensure the security of IoT devices. In 2017, Mozilla launched Project Things, which allows IoT devices to be routed through a safe Web of Things gateway. As per estimates from KBV Research, the overall IoT security market would grow at a 27.9% rate during 2016–2022 as a result of growing infrastructural concerns and diversified usage of the Internet of things. Governmental regulation is argued by some to be necessary to secure IoT devices and the wider Internet, as market incentives to secure IoT devices are insufficient. It has been found that, owing to the nature of most IoT development boards, they generate predictable and weak keys, making them easy targets for man-in-the-middle attacks. However, various hardening approaches have been proposed by researchers to resolve the issue of weak SSH implementations and weak keys. IoT security within the field of manufacturing presents different challenges and varying perspectives. Within the EU and Germany, data protection is constantly referenced throughout manufacturing and digital policy, particularly that of I4.0. However, the attitude towards data security differs from the enterprise perspective: there is less emphasis on data protection in the form of GDPR, as the data being collected from IoT devices in the manufacturing sector does not display personal details. Yet, research has indicated that manufacturing experts are concerned about "data security for protecting machine technology from international competitors with the ever-greater push for interconnectivity".
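The weak-key problem mentioned above typically arises when a device generates its SSH host key at first boot, before the board has gathered enough entropy. One commonly suggested mitigation, sketched below (my illustration, not from the source; the entropy check is Linux-specific and the output path is hypothetical), is to delay key generation until the kernel's entropy pool looks healthy, then generate a modern Ed25519 key with the third-party Python "cryptography" package:

# A minimal sketch: wait for kernel entropy before generating an
# SSH host key, so keys are not predictable across identical devices.
import time
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def wait_for_entropy(minimum_bits=256):
    # Linux-specific: poll until the entropy pool reports enough bits.
    while True:
        with open("/proc/sys/kernel/random/entropy_avail") as f:
            if int(f.read()) >= minimum_bits:
                return
        time.sleep(1)

wait_for_entropy()
key = Ed25519PrivateKey.generate()

private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.OpenSSH,
    encryption_algorithm=serialization.NoEncryption(),
)
public_line = key.public_key().public_bytes(
    encoding=serialization.Encoding.OpenSSH,
    format=serialization.PublicFormat.OpenSSH,
)

# Illustrative path; a real firmware image would write to /etc/ssh/.
with open("ssh_host_ed25519_key", "wb") as f:
    f.write(private_pem)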
Safety
IoT systems are typically controlled by event-driven smart apps that take as input either sensed data, user inputs, or other external triggers (from the Internet) and command one or more actuators to provide different forms of automation. Examples of sensors include smoke detectors, motion sensors, and contact sensors. Examples of actuators include smart locks, smart power outlets, and door controls. Popular control platforms on which third-party developers can build smart apps that interact wirelessly with these sensors and actuators include Samsung's SmartThings, Apple's HomeKit, and Amazon's Alexa, among others. A problem specific to IoT systems is that buggy apps, unforeseen bad app interactions, or device/communication failures can cause unsafe and dangerous physical states, e.g., "unlock the entrance door when no one is at home" or "turn off the heater when the temperature is below 0 degrees Celsius and people are sleeping at night". Detecting flaws that lead to such states requires a holistic view of installed apps, component devices, their configurations, and more importantly, how they interact. Researchers from the University of California, Riverside have proposed IotSan, a novel practical system that uses model checking as a building block to reveal "interaction-level" flaws by identifying events that can lead the system to unsafe states (see the sketch below). They evaluated IotSan on the Samsung SmartThings platform: from 76 manually configured systems, IotSan detected 147 vulnerabilities (i.e., violations of safe physical states/properties).
Design
Given widespread recognition of the evolving nature of the design and management of the Internet of things, sustainable and secure deployment of IoT solutions must design for "anarchic scalability". Application of the concept of anarchic scalability can be extended to physical systems (i.e. controlled real-world objects), by virtue of those systems being designed to account for uncertain management futures. This hard anarchic scalability thus provides a pathway forward to fully realize the potential of Internet-of-things solutions by selectively constraining physical systems to allow for all management regimes without risking physical failure. Brown University computer scientist Michael Littman has argued that successful execution of the Internet of things requires consideration of the interface's usability as well as the technology itself. These interfaces need to be not only more user-friendly but also better integrated: "If users need to learn different interfaces for their vacuums, their locks, their sprinklers, their lights, and their coffeemakers, it's tough to say that their lives have been made any easier."
Environmental sustainability impact
A concern regarding Internet-of-things technologies pertains to the environmental impacts of the manufacture, use, and eventual disposal of all these semiconductor-rich devices. Modern electronics are replete with a wide variety of heavy metals and rare-earth metals, as well as highly toxic synthetic chemicals. This makes them extremely difficult to properly recycle. Electronic components are often incinerated or placed in regular landfills. Furthermore, the human and environmental cost of mining the rare-earth metals that are integral to modern electronic components continues to grow. This leads to societal questions concerning the environmental impacts of IoT devices over their lifetime.
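Returning to the Safety subsection above: the sketch below (my illustration, far simpler than IotSan, whose internals are not described in this article) conveys the core idea of model checking by exhaustively enumerating the joint states a small set of devices can reach under automation rules, and flagging states that violate declared safety properties. All device, attribute, and rule names are hypothetical:

# A minimal sketch of interaction-level safety checking: enumerate
# every joint device state, apply the installed automation rules,
# and report reached states that violate declared safety invariants.
from itertools import product

ATTRIBUTES = {
    "door": ["locked", "unlocked"],
    "occupancy": ["home", "away"],
    "heater": ["on", "off"],
    "temp_below_0": [True, False],
}

# Hypothetical (and deliberately buggy) automation rules.
RULES = [
    (lambda s: s["occupancy"] == "away", {"heater": "off"}),
    (lambda s: s["temp_below_0"], {"door": "unlocked"}),  # buggy app
]

# Safety invariants that every reachable state must satisfy.
INVARIANTS = [
    ("door stays locked when nobody is home",
     lambda s: not (s["door"] == "unlocked" and s["occupancy"] == "away")),
    ("heater stays on in freezing weather",
     lambda s: not (s["heater"] == "off" and s["temp_below_0"])),
]

def apply_rules(state):
    new_state = dict(state)
    for condition, effect in RULES:
        if condition(new_state):
            new_state.update(effect)
    return new_state

names = list(ATTRIBUTES)
for values in product(*ATTRIBUTES.values()):
    reached = apply_rules(dict(zip(names, values)))
    for label, holds in INVARIANTS:
        if not holds(reached):
            print("violation:", label, "in state", reached)

Real systems like IotSan additionally model event ordering and app source code; this brute-force enumeration only illustrates why a holistic view of interacting apps is required.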
Intentional obsolescence of devices
The Electronic Frontier Foundation has raised concerns that companies can use the technologies necessary to support connected devices to intentionally disable or "brick" their customers' devices via a remote software update or by disabling a service necessary to the operation of the device. In one example, home automation devices sold with the promise of a "Lifetime Subscription" were rendered useless after Nest Labs acquired Revolv and made the decision to shut down the central servers the Revolv devices had used to operate. As Nest is a company owned by Alphabet (Google's parent company), the EFF argues this sets a "terrible precedent for a company with ambitions to sell self-driving cars, medical devices, and other high-end gadgets that may be essential to a person's livelihood or physical safety." Owners should be free to point their devices to a different server or collaborate on improved software. But such action violates the United States DMCA section 1201, which only has an exemption for "local use". This forces tinkerers who want to keep using their own equipment into a legal grey area. The EFF thinks buyers should refuse electronics and software that prioritize the manufacturer's wishes above their own. Examples of post-sale manipulations include Google Nest Revolv, disabled privacy settings on Android, Sony disabling Linux on PlayStation 3, and enforced EULAs on the Wii U.
Confusing terminology
Kevin Lonergan at Information Age, a business technology magazine, has referred to the terms surrounding the IoT as a "terminology zoo". The lack of clear terminology is not "useful from a practical point of view" and is a "source of confusion for the end user". A company operating in the IoT space could be working in anything related to sensor technology, networking, embedded systems, or analytics. According to Lonergan, the term IoT was coined before smartphones, tablets, and devices as we know them today existed, and there is a long list of terms with varying degrees of overlap and technological convergence: Internet of things, Internet of everything (IoE), Internet of goods (supply chain), industrial Internet, pervasive computing, pervasive sensing, ubiquitous computing, cyber-physical systems (CPS), wireless sensor networks (WSN), smart objects, digital twin, cyberobjects or avatars, cooperating objects, machine to machine (M2M), ambient intelligence (AmI), operational technology (OT), and information technology (IT). Regarding IIoT, an industrial sub-field of IoT, the Industrial Internet Consortium's Vocabulary Task Group has created a "common and reusable vocabulary of terms" to ensure "consistent terminology" across publications issued by the Industrial Internet Consortium. IoT One has created an IoT Terms Database, including a New Term Alert to notify users when a new term is published. This database aggregates 807 IoT-related terms, while keeping material "transparent and comprehensive".
Adoption barriers
Lack of interoperability and unclear value propositions
Despite a shared belief in the potential of the IoT, industry leaders and consumers are facing barriers to adopting IoT technology more widely. Mike Farley argued in Forbes that while IoT solutions appeal to early adopters, they either lack interoperability or a clear use case for end-users. A study by Ericsson regarding the adoption of IoT among Danish companies suggests that many struggle "to pinpoint exactly where the value of IoT lies for them".
Privacy and security concerns
In IoT, especially consumer IoT, information about a user's daily routine is collected so that the "things" around the user can cooperate to provide better services that fulfill personal preferences. When the collected information describing a user in detail travels through multiple hops in a network, due to the diverse integration of services, devices and networks, the information stored on a device is vulnerable to privacy violation by compromised nodes existing in the IoT network. For example, on 21 October 2016, multiple distributed denial-of-service (DDoS) attacks targeted systems operated by the domain name system provider Dyn, which caused the inaccessibility of several websites, such as GitHub, Twitter, and others. This attack was executed through a botnet consisting of a large number of IoT devices, including IP cameras, gateways, and even baby monitors. Fundamentally there are four security objectives that an IoT system requires: (1) data confidentiality: unauthorised parties cannot have access to the transmitted and stored data; (2) data integrity: intentional and unintentional corruption of transmitted and stored data must be detected; (3) non-repudiation: the sender cannot deny having sent a given message; (4) data availability: the transmitted and stored data should be available to authorised parties even in the face of denial-of-service (DoS) attacks (a minimal sketch of the integrity objective appears below). Information privacy regulations also require organisations to practice "reasonable security". California's SB-327 Information privacy: connected devices "would require a manufacturer of a connected device, as those terms are defined, to equip the device with a reasonable security feature or features that are appropriate to the nature and function of the device, appropriate to the information it may collect, contain, or transmit, and designed to protect the device and any information contained therein from unauthorised access, destruction, use, modification, or disclosure, as specified". As each organisation's environment is unique, it can prove challenging to demonstrate what "reasonable security" is and what potential risks could be involved for the business. Oregon's HB 2395 similarly requires a "person that manufactures, sells or offers to sell [a] connected device" to equip it with "reasonable security features that protect the connected device and information that the connected device collects, contains, stores or transmits from access, destruction, modification, use or disclosure that the consumer does not authorise." According to antivirus provider Kaspersky, there were 639 million data breaches of IoT devices in 2020 and 1.5 billion breaches in the first six months of 2021.
Traditional governance structure
A study issued by Ericsson regarding the adoption of Internet of things among Danish companies identified a "clash between IoT and companies' traditional governance structures, as IoT still presents both uncertainties and a lack of historical precedence." Among the respondents interviewed, 60 percent stated that they "do not believe they have the organizational capabilities, and three of four do not believe they have the processes needed, to capture the IoT opportunity." This has led to a need to understand organizational culture in order to facilitate organizational design processes and to test new innovation management practices.
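As promised above, here is a minimal sketch (my illustration, not from the source; key handling and field names are hypothetical) of the data-integrity objective, using only Python's standard-library hmac module. The device attaches an HMAC-SHA256 tag to each message; the receiver recomputes the tag with the shared key and compares in constant time, so any corruption in transit is detected:

# A minimal sketch of the data-integrity objective: HMAC-SHA256
# tags detect intentional and unintentional corruption of messages.
# The shared key would be provisioned securely per device.
import hmac
import hashlib

SHARED_KEY = b"per-device-secret"          # illustrative only

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time compare

reading = b'{"device": "thermostat-7", "temp_c": 19.0}'
tag = sign(reading)

assert verify(reading, tag)                     # intact message passes
assert not verify(b'{"temp_c": 99.0}', tag)     # tampering is detected

Note that a symmetric MAC like this covers integrity and authenticity but not non-repudiation, since both parties hold the same key; that objective would call for asymmetric signatures, and confidentiality would additionally require encryption.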
A lack of digital leadership in the age of digital transformation has also stifled innovation and IoT adoption to a degree that many companies, in the face of uncertainty, "were waiting for the market dynamics to play out", or further action in regards to IoT "was pending competitor moves, customer pull, or regulatory requirements". Some of these companies risk being "kodaked" – "Kodak was a market leader until digital disruption eclipsed film photography with digital photos" – failing to "see the disruptive forces affecting their industry" and "to truly embrace the new business models the disruptive change opens up". Scott Anthony has written in Harvard Business Review that Kodak "created a digital camera, invested in the technology, and even understood that photos would be shared online" but ultimately failed to realize that "online photo sharing was the new business, not just a way to expand the printing business."
Business planning and project management
According to a 2018 study, 70–75% of IoT deployments were stuck in the pilot or prototype stage, unable to reach scale due in part to a lack of business planning. Even though scientists, engineers, and managers across the world are continuously working to create and exploit the benefits of IoT products, there are some flaws in the governance, management and implementation of such projects. Despite tremendous forward momentum in the field of information and other underlying technologies, IoT still remains a complex area and the problem of how IoT projects are managed still needs to be addressed. IoT projects must be run differently than simple and traditional IT, manufacturing or construction projects. Because IoT projects have longer project timelines, a lack of skilled resources and several security/legal issues, there is a need for new and specifically designed project processes. The following management techniques should improve the success rate of IoT projects:
A separate research and development phase
A proof-of-concept/prototype before the actual project begins
Project managers with interdisciplinary technical knowledge
Universally defined business and technical jargon
https://en.wikipedia.org/wiki/Bronchitis
Bronchitis
Bronchitis is inflammation of the bronchi (large and medium-sized airways) in the lungs that causes coughing. Bronchitis usually begins as an infection in the nose, ears, throat, or sinuses. The infection then makes its way down to the bronchi. Symptoms include coughing up sputum, wheezing, shortness of breath, and chest pain. Bronchitis can be acute or chronic. Acute bronchitis usually has a cough that lasts around three weeks, and is also known as a chest cold. In more than 90% of cases, the cause is a viral infection. These viruses may be spread through the air when people cough or by direct contact. A small number of cases are caused by a bacterial infection such as Mycoplasma pneumoniae or Bordetella pertussis. Risk factors include exposure to tobacco smoke, dust, and other air pollution. Treatment of acute bronchitis typically involves rest, paracetamol (acetaminophen), and nonsteroidal anti-inflammatory drugs (NSAIDs) to help with the fever. Chronic bronchitis is defined as a productive cough – one that produces sputum – that lasts for three months or more per year for at least two years. Many people with chronic bronchitis also have chronic obstructive pulmonary disease (COPD). Tobacco smoking is the most common cause, with a number of other factors such as air pollution and genetics playing a smaller role. Treatments include quitting smoking, vaccinations, rehabilitation, and often inhaled bronchodilators and steroids. Some people may benefit from long-term oxygen therapy. Acute bronchitis is one of the more common diseases. About 5% of adults and 6% of children have at least one episode a year. Acute bronchitis is the most common type of bronchitis. By contrast, in the United States in 2018, 9.3 million people were diagnosed with the less common chronic bronchitis.
Acute bronchitis
Acute bronchitis, also known as a chest cold, is a short-term inflammation of the bronchi of the lungs. The most common symptom is a cough that may or may not produce sputum. Other symptoms may include coughing up mucus, wheezing, shortness of breath, fever, and chest discomfort. Fever, when present, is mild. The infection may last from a few days to ten days. The cough may persist for several weeks afterwards, with the total duration of symptoms usually around three weeks. Symptoms may last for up to six weeks.
Cause
In more than 90% of cases, the cause is a viral infection. These viruses may spread through the air when people cough or by direct contact. Risk factors include exposure to tobacco smoke, dust, and other air pollutants. A small number of cases are due to bacteria such as Mycoplasma pneumoniae or Bordetella pertussis.
Diagnosis
Diagnosis is typically based on a person's signs and symptoms. The color of the sputum does not indicate whether the infection is viral or bacterial. Determining the underlying organism is usually not required. Other causes of similar symptoms include asthma, pneumonia, bronchiolitis, bronchiectasis, and COPD. A chest X-ray may be useful to detect pneumonia. Another common sign of bronchitis is a cough lasting ten days to three weeks. If the cough lasts longer than a month, it may become chronic bronchitis. In addition, a fever may be present. Acute bronchitis is normally caused by a viral infection. Typically, these infections are rhinovirus, adenovirus, parainfluenza, or influenza. No specific testing is normally needed to diagnose acute bronchitis.
Treatment
One form of prevention is to avoid smoking and other lung irritants. Frequent hand washing may also be protective.
Treatment for acute bronchitis usually involves rest, paracetamol (acetaminophen), and NSAIDs to help with the fever. Cough medicine has little support for its use, and is not recommended in children under the age of six. There is tentative evidence that salbutamol may be useful in treating wheezing; however, it may result in nervousness and tremors. Antibiotics should generally not be used. An exception is when acute bronchitis is due to pertussis. Tentative evidence supports honey and pelargonium to help with symptoms. Getting plenty of rest and drinking enough fluids are often recommended as well. Chinese medicinal herbs are of unclear effect.
Epidemiology
Acute bronchitis is one of the most common diseases and the most common type of bronchitis. About 5% of adults are affected, and about 6% of children have at least one episode yearly. It occurs more often in the winter. More than 10 million people in the U.S. visit a healthcare provider each year for this condition, with about 70% receiving antibiotics that are mostly unnecessary. There are efforts to decrease the use of antibiotics in acute bronchitis.
Chronic bronchitis
Chronic bronchitis is a lower respiratory tract disease, defined by a productive cough that lasts for three months or more per year for at least two years. The cough is sometimes referred to as a smoker's cough, since it often results from smoking. When chronic bronchitis occurs together with decreased airflow it is known as chronic obstructive pulmonary disease (COPD). Many people with chronic bronchitis have COPD; however, most people with COPD do not also have chronic bronchitis. Estimates of the proportion of people with COPD who have chronic bronchitis range from 7% to 40%. An estimated 60% of smokers with chronic bronchitis also have COPD. The term "chronic bronchitis" was used in previous definitions of COPD but is no longer included in the definition. The term is still used clinically. While both chronic bronchitis and emphysema are often associated with COPD, neither is needed to make the diagnosis. A Chinese consensus commented on symptomatic types of COPD that include chronic bronchitis with frequent exacerbations. Chronic bronchitis is marked by hypersecretion of mucus and mucins. The excess mucus is produced by an increased number of goblet cells and enlarged submucosal glands in response to long-term irritation. The mucous glands in the submucosa secrete more mucus than the goblet cells. Mucins thicken mucus, and their concentration has been found to be high in cases of chronic bronchitis, and also to correlate with the severity of the disease. Excess mucus can narrow the airways, thereby limiting airflow and accelerating the decline in lung function, and can result in COPD. Excess mucus shows itself as a chronic productive cough, and its severity and the volume of sputum can fluctuate in periods of acute exacerbations. In COPD, those with the chronic bronchitic phenotype with associated chronic excess mucus experience a worse quality of life than those without. The increased secretions are initially cleared by coughing. The cough is often worse soon after awakening, and the sputum produced may have a yellow or green color and may be streaked with specks of blood. In the early stages, a cough can maintain mucus clearance. However, with continued excessive secretion, mucus clearance is impaired, and when the airways become obstructed a cough becomes ineffective.
Effective mucociliary clearance depends on airway hydration, ciliary beating, and the rates of mucin secretion. Each of these factors is impaired in chronic bronchitis. Chronic bronchitis can lead to a higher number of exacerbations and a faster decline in lung function. The ICD-11 lists chronic bronchitis with emphysema (emphysematous bronchitis) as a "certain specified COPD".
Cause
Most cases of chronic bronchitis are caused by tobacco smoking. Chronic bronchitis in young adults who smoke is associated with a greater chance of developing COPD. There is an association between smoking cannabis and chronic bronchitis. In addition, chronic inhalation of air pollution, or of irritating fumes or dust from hazardous exposures in occupations such as coal mining, grain handling, textile manufacturing, livestock farming, and metal moulding, may also be a risk factor for the development of chronic bronchitis. Bronchitis caused in this way is often referred to as industrial bronchitis or occupational bronchitis. Rarely, genetic factors also play a role. Air quality can also affect the respiratory system, with higher levels of nitrogen dioxide and sulfur dioxide contributing to bronchial symptoms. Sulfur dioxide can cause inflammation, which can aggravate chronic bronchitis and make infections more likely. Air pollution in the workplace is the cause of several non-communicable diseases (NCDs), including chronic bronchitis.
Treatment
The decline in lung function in chronic bronchitis may be slowed by stopping smoking. Chronic bronchitis may be treated with a number of medications and occasionally oxygen therapy. Pulmonary rehabilitation may also be used. A distinction has been made between exacerbations (sudden worsenings) of chronic bronchitis and otherwise stable chronic bronchitis. Stable chronic bronchitis can be defined as the normal definition of chronic bronchitis plus the absence of an acute exacerbation in the previous four weeks. A Cochrane review found that mucolytics in chronic bronchitis may slightly decrease the chance of developing an exacerbation. The mucolytic guaifenesin is a safe and effective treatment for stable chronic bronchitis, with the advantage of being available as an extended-release tablet that lasts for twelve hours. Erdosteine is a mucolytic recommended by NICE. GOLD also supports the use of some mucolytics that are advised against when inhaled corticosteroids are being used, and singles out erdosteine as having good effects regardless of corticosteroid use. Erdosteine also has antioxidant properties. Erdosteine has been shown to significantly reduce the risk of exacerbations, shorten their duration, and shorten hospital stays. In those with the chronic bronchitic phenotype of COPD, the phosphodiesterase-4 inhibitor roflumilast may decrease significant exacerbations.
Epidemiology
Chronic bronchitis affects about 3.4–22% of the general population. Individuals over 45 years of age, smokers, those who live or work in areas with high air pollution, and anybody with asthma all have a higher risk of developing chronic bronchitis. This wide range is due to the different definitions of chronic bronchitis, which can be diagnosed based on signs and symptoms or on the clinical diagnosis of the disorder. Chronic bronchitis tends to affect men more often than women. While the primary risk factor for chronic bronchitis is smoking, there is still a 4–22% chance that non-smokers can get chronic bronchitis.
This might suggest other risk factors such as the inhalation of fuels, dusts, and fumes, and genetic factors. In the United States, in 2016, 8.6 million people were diagnosed with chronic bronchitis, and there were 518 reported deaths. The death rate from chronic bronchitis was 0.2 per 100,000 people.
History
The condition of bronchitis has been recognised for many centuries, in several different cultures including the Ancient Greek, Chinese, and Indian, with the presence of excess phlegm and cough noted in recognition of the same condition. Early treatments of chronic bronchitis included garlic, cinnamon and ipecac, among others. Modern treatments were developed during the second half of the 20th century. The British physician Charles Badham was the first person to describe the condition and name the acute form as acute bronchitis, in his book Observations on the inflammatory affections of the mucous membrane of the bronchiæ, published in 1808. In this book, Badham distinguished three forms of bronchitis, including acute and chronic. A second, expanded edition of the book was published in 1814 with the title An essay on bronchitis. Badham used the term catarrh to refer to the cardinal symptoms of chronic cough and mucus hypersecretion of chronic bronchitis, and described chronic bronchitis as a disabling disorder. In 1901 an article was published on the treatment of chronic bronchitis in the elderly. The symptoms described have remained unchanged. The cause was thought to be brought on by dampness, cold weather, and foggy conditions, and treatments were aimed towards various cough mixtures, respiratory stimulants, and tonics. It was noted that something other than the weather was thought to be at play. Exacerbations of the condition were also described at this time. Reference was made to another physician, Harry Campbell, who had written in the British Medical Journal a week before. Campbell had suggested that the cause of chronic bronchitis was toxic substances, and recommended pure air, simple food, and exercise to remove them from the body. A joint research programme was undertaken in Chicago and London from 1951 to 1953, in which the clinical features of one thousand cases of chronic bronchitis were detailed. The findings were published in the Lancet in 1953. It was stated that since its introduction by Badham, chronic bronchitis had become an increasingly popular diagnosis. The study had looked at various associations such as the weather, conditions at home and at work, age of onset, childhood illnesses, smoking habits, and breathlessness. It was concluded that chronic bronchitis invariably led to emphysema, particularly when the bronchitis had persisted for a long time. In 1957 it was noted that at the time there were many investigations being carried out into chronic bronchitis and emphysema in general, and among industrial workers exposed to dust. Excerpts were published dating from 1864, in which Charles Parsons had noted the occurring consequence of the development of emphysema from bronchitis. This was seen to be not always applicable. His findings were in association with his studies on chronic bronchitis among pottery workers. A CIBA (now Novartis) meeting in 1959, and a meeting of the American Thoracic Society in 1962, defined chronic bronchitis as a component of COPD, in terms that have not changed.
Eosinophilic bronchitis
Eosinophilic bronchitis is a chronic dry cough, defined by the presence of an increased number of a type of white blood cell known as eosinophils.
Findings on X-ray are normal, and there is no airflow limitation.
Protracted bacterial bronchitis
Protracted bacterial bronchitis in children is defined as a chronic productive cough with a positive bronchoalveolar lavage that resolves with antibiotics. Protracted bacterial bronchitis is usually caused by Streptococcus pneumoniae, non-typable Haemophilus influenzae, or Moraxella catarrhalis. Protracted bacterial bronchitis (lasting more than 4 weeks) in children may be helped by antibiotics.
Plastic bronchitis
Plastic bronchitis is a rarely found condition in which thickened secretions plug the bronchi. The plugs are rubbery or plastic-feeling (hence the name). The light-colored plugs take the branching shape of the bronchi that they fill, and are known as bronchial casts. When these casts are coughed up, they are firmer in texture than typical phlegm or the short, softer mucus plugs seen in some people with asthma. However, some people with asthma have larger, firmer, and more complex plugs. These differ from the casts seen in people whose plastic bronchitis is associated with congenital heart disease or lymphatic vessel abnormalities, mainly because eosinophils and Charcot–Leyden crystals are present in the asthma-associated casts but not in the others. Casts obstruct the airflow, and can result in the overinflation of the opposite lung. Plastic bronchitis usually occurs in children. Some cases may result from abnormalities in the lymphatic vessels. Advanced cases may show imaging similarities to bronchiectasis.
Eosinophilic plastic bronchitis
Eosinophilic plastic bronchitis is a subtype of plastic bronchitis that is more often found in children. Symptoms may include a cough and wheezing, and imaging may reveal a lung that is completely collapsed. Depending on the size and location of the casts, the condition may present with mild symptoms or prove fatal.
Aspergillus bronchitis
Aspergillus bronchitis is a type of aspergillosis, a fungal infection caused by Aspergillus, a common mold that affects the bronchi. Unlike other types of pulmonary aspergillosis, it can affect individuals who are not immunocompromised. In immunocompetent individuals, Aspergillus bronchitis may manifest as persistent respiratory infections or symptoms that do not respond to antibiotics but may improve with antifungals.
https://en.wikipedia.org/wiki/Street%20gutter
Street gutter
A street gutter is a depression that runs parallel to a road and is designed to collect rainwater that flows along the street, diverting it into a storm drain. A gutter alleviates water buildup on a street, allows pedestrians to pass without walking through puddles, and reduces the risk of hydroplaning by road vehicles. When a curbstone is present, a gutter may be formed by the convergence of the road surface and the vertical face of the sidewalk; otherwise, a dedicated gutter surface made of concrete may be present. Depending on local regulations, a gutter usually discharges, as a nonpoint pollution source, into a storm drain whose final discharge falls into a detention pond (in order to remove some pollutants by sedimentation) or into a body of water. Street gutters are most often found in areas of a city which have high pedestrian traffic. In rural areas, gutters are seldom used and are frequently replaced by a borrow ditch. In past centuries, when urban streets did not have sanitary sewers, street gutters were made deep enough to serve that purpose as well; responsibility for operation and maintenance of the dual-purpose street gutter was cooperatively shared between the local government and the inhabitants. A now obsolete word meaning a street gutter is a kennel.
https://en.wikipedia.org/wiki/Polysulfane
Polysulfane
A polysulfane is a chemical compound of formula H2Sn, where n > 1 (although disulfane, H2S2, is sometimes excluded). Compounds containing 2–8 sulfur atoms have been isolated; longer chain compounds have been detected, but only in solution. H2S2 is colourless; higher members are yellow, with the colour deepening as the sulfur content increases. In the chemical literature the term polysulfanes is sometimes used for compounds containing chains of sulfur atoms, e.g. organic polysulfanes RSnR.
Structures
Polysulfanes consist of unbranched chains of sulfur atoms terminated with hydrogen atoms. The branched isomer of tetrasulfane (H2S4), in which the fourth sulfur is bonded to the central sulfur, would be described as trithiosulfurous acid, S(=S)(SH)2. Computations suggest that it is less stable than the linear isomer HS–S–S–SH. The S–S–S angles approach 90° in trisulfane and higher polysulfanes.
Reactions and properties
Polysulfanes can easily be oxidised, and are thermodynamically unstable with respect to decomposition (disproportionation), readily forming H2S and sulfur:
8 H2Sn → 8 H2S + (n − 1) S8
(in this chemical reaction, S8 is cyclo-octasulfur, one of the allotropes of sulfur). This decomposition reaction is catalyzed by alkali. To suppress this behavior, containers for polysulfanes are often pretreated with acid to remove traces of alkali. In contrast to the thermodynamic instability of the polysulfanes, polysulfide anions form spontaneously by treatment of hydrosulfide (HS−) with elemental sulfur:
8 HS− + (n − 1) S8 → 8 HSn−
Beyond H2S and H2S2, many higher polysulfanes (n = 3–8) are known. They have unbranched sulfur chains. Starting with disulfane (H2S2), all known polysulfanes are liquids at room temperature. The density, boiling point and viscosity correlate with chain length. They also react with sulfite and cyanide, producing thiosulfate and thiocyanate respectively. Polysulfanes can be made from polysulfides by pouring a solution of a polysulfide salt into cooled concentrated hydrochloric acid. A mixture of metastable polysulfanes separates as a yellow oil, from which individual compounds may be separated by fractional distillation. Other, more selective syntheses exist for n = 4, 5, 6. The reaction of polysulfanes with sulfur dichloride or disulfur dichloride produces long-chain dichloropolysulfanes:
H2Sn + 2 SmCl2 → 2 HCl + Sn+2mCl2
The reaction with a sulfite salt (a base) quantitatively decomposes the polysulfane, producing thiosulfate and hydrogen sulfide:
H2Sn + (n − 1) SO32− → (n − 1) S2O32− + H2S
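As a consistency check (my addition, not in the source), the atom balance of the disproportionation reconstructed above works out in LaTeX notation as:

\[
8\,\mathrm{H_2S_n} \;\longrightarrow\; 8\,\mathrm{H_2S} \;+\; (n-1)\,\mathrm{S_8}
\]

Sulfur atoms: \(8n\) on the left, and \(8 + 8(n-1) = 8n\) on the right; all 16 hydrogens are retained in the \(\mathrm{H_2S}\).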
https://en.wikipedia.org/wiki/Tint%2C%20shade%20and%20tone
Tint, shade and tone
In color theory, a tint is a mixture of a color with white, which increases lightness, while a shade is a mixture with black, which increases darkness. Both processes affect the resulting color mixture's relative saturation. A tone is produced either by mixing a color with gray, or by both tinting and shading. Mixing a color with any neutral color (including black, gray, and white) reduces the chroma, or colorfulness, while the hue (the relative mixture of red, green, blue, etc., depending on the colorspace) remains unchanged. In the graphic arts, especially printmaking and drawing, "tone" has a different meaning, referring to areas of continuous color, produced by various means, as opposed to the linear marks made by an engraved or drawn line. In common language, the term shade can be generalized to encompass any varieties of a particular color, whether technically they are shades, tints, tones, or slightly different hues. Meanwhile, the term tint can be generalized to refer to any lighter or darker variation of a color (e.g. "tinted windows"). When mixing colored light (additive color models), the achromatic mixture of spectrally balanced red, green, and blue (RGB) is always white, not gray or black. When we mix colorants, such as the pigments in paint mixtures, a color is produced which is always darker and lower in chroma, or saturation, than the parent colors. This moves the mixed color toward a neutral color—a gray or near-black. Lights are made brighter or dimmer by adjusting their brightness, i.e., energy level; in painting, lightness is adjusted through mixture with white, black, or a color's complement. The Color Triangle depicting tint, shade, and tone was proposed in 1937 by Faber Birren.
In art
It is common among some artistic painters to darken a paint color by adding black paint—producing colors called shades—or to lighten a color by adding white—producing colors called tints. However, this is not always the best way for representational painting, since one result is that colors also shift in their hues. For instance, darkening a color by adding black can cause colors such as yellows, reds and oranges to shift toward the greenish or bluish part of the spectrum. Lightening a color by adding white can cause a shift towards blue when mixed with reds and oranges (see Abney effect). Another practice when darkening a color is to use its opposite, or complementary, color (e.g. violet-purple added to yellowish-green) in order to neutralize it without a shift in hue, and darken it if the added color is darker than the parent color. When lightening a color this hue shift can be corrected with the addition of a small amount of an adjacent color to bring the hue of the mixture back in line with the parent color (e.g. adding a small amount of orange to a mixture of red and white will correct the tendency of this mixture to shift slightly towards the blue end of the spectrum).
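Numerically, tinting, shading, and toning can be modeled as per-channel linear interpolation of an RGB triple toward white, black, or gray. The sketch below (my illustration, not from the source) assumes 8-bit sRGB channels and ignores gamma; real pigment mixing is not linear, so treat it only as a model of the definitions above. The helper names are hypothetical:

# A minimal sketch: per-channel linear interpolation of an 8-bit
# sRGB color toward white (tint), black (shade), or gray (tone).
def mix(color, target, amount):
    """Interpolate each channel of `color` toward `target` by `amount` (0..1)."""
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, target))

def tint(color, amount):   # mix with white: lighter, less saturated
    return mix(color, (255, 255, 255), amount)

def shade(color, amount):  # mix with black: darker, less saturated
    return mix(color, (0, 0, 0), amount)

def tone(color, amount, gray=(128, 128, 128)):  # mix with gray
    return mix(color, gray, amount)

red = (200, 30, 40)
print(tint(red, 0.5), shade(red, 0.5), tone(red, 0.5))

In this model a tone is equivalent to applying a tint and a shade together, matching the definition given above, and every mix with a neutral target leaves the red:green:blue proportions (the hue) drifting toward neutral while chroma falls.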
https://en.wikipedia.org/wiki/Flower%20mantis
Flower mantis
Flower mantises are praying mantises that use a special form of camouflage referred to as aggressive mimicry, which they use not only to attract prey but also to avoid predators. These insects have specific colorations and behaviors that mimic flowers in their surrounding habitats. This strategy has been observed in other mantises including the stick mantis and dead-leaf mantis. The observed behavior of these mantises includes positioning themselves on a plant, either within the inflorescence or on the foliage, until a prey insect comes within range. Many species of flower mantises are popular as pets. The flower mantises are a diurnal group with a single ancestry (a clade), and the majority of the known species belong to the family Hymenopodidae.
Example species: Orchid mantis
The orchid mantis, Hymenopus coronatus, of southeast Asia mimics orchid flowers. There is no evidence that suggests that they mimic a specific orchid, but their bodies are often white with pink markings, and they have green eyes. These insects display different body morphologies depending on their life stage; juveniles are able to bend their abdomens upwards, allowing them to easily resemble a flower. However, the adult's wings are too large, inhibiting its ability to bend as the juveniles do. This dichotomy suggests that there must be other processes involved in attracting insect prey species. Since Hymenopus coronatus does not mimic one orchid in particular, its coloration often does not match the coloration of a single orchid species.
Antipredator behaviour
One mechanism displayed by the orchid mantis to attract prey is the ability to absorb UV light the same way that flowers do. This makes the mantis appear flower-like to UV-sensitive insects, which are often pollinators. To an insect, the mantis and the surrounding flowers appear blue; this contrasts with the foliage in the background, which appears red. In his 1940 book Adaptive Coloration in Animals, Hugh Cott quotes an account by Nelson Annandale, saying that the mantis hunts on the flowers of the "Straits Rhododendron", Melastoma polyanthum. The nymph has what Cott calls "special alluring coloration" (aggressive mimicry), where the animal itself is the "decoy". The insect is pink and white, with flattened limbs with "that semiopalescent, semicrystalline appearance that is caused in flower petals by a purely structural arrangement of liquid globules or empty cells". The mantis climbs up the twigs of the plant, stands imitating a flower, and waits for its prey patiently. It then sways from side to side, and soon small flies land on and around it, attracted by the small black spot on the end of its abdomen, which resembles a fly. When a larger dipteran fly, as big as a house fly, landed nearby, the mantis at once seized and ate it. More recently (2015), the orchid mantis's coloration has been shown to mimic tropical flowers effectively, attracting pollinators and catching them. Juvenile mantises secrete a mixture of the chemicals 3HOA and 10HDA, attracting their top prey species, the oriental honey bee. This method of deception is aggressive chemical mimicry, imitating the chemical composition of the bee's pheromones. The chemicals are stored in the mandibles and released when H. coronatus is hunting. Adult mantises do not produce these chemicals.
Taxonomic range
The flower mantises include species from several genera, many of which are popularly kept as pets. Seven of the genera are in the family Hymenopodidae.
https://en.wikipedia.org/wiki/Hotspot%20%28geology%29
Hotspot (geology)
In geology, hotspots (or hot spots) are volcanic locales thought to be fed by underlying mantle that is anomalously hot compared with the surrounding mantle. Examples include the Hawaii, Iceland, and Yellowstone hotspots. A hotspot's position on the Earth's surface is independent of tectonic plate boundaries, and so hotspots may create a chain of volcanoes as the plates move above them. There are two hypotheses that attempt to explain their origins. One suggests that hotspots are due to mantle plumes that rise as thermal diapirs from the core–mantle boundary. The alternative plate theory is that the mantle source beneath a hotspot is not anomalously hot; rather, the crust above is unusually weak or thin, so that lithospheric extension permits the passive rising of melt from shallow depths.
Origin
The origins of the concept of hotspots lie in the work of J. Tuzo Wilson, who postulated in 1963 that the formation of the Hawaiian Islands resulted from the slow movement of a tectonic plate across a hot region beneath the surface. It was later postulated that hotspots are fed by streams of hot mantle rising from the Earth's core–mantle boundary in a structure called a mantle plume. Whether or not such mantle plumes exist has been the subject of a major controversy in Earth science, but seismic images consistent with evolving theory now exist. At any place where volcanism is not linked to a constructive or destructive plate margin, the concept of a hotspot has been used to explain its origin. A review article by Courtillot et al. listing possible hotspots makes a distinction between primary hotspots, fed by plumes from the deep mantle, and secondary hotspots, derived from plumes arising at shallower depths. The primary hotspots originate from the core/mantle boundary and create large volcanic provinces with linear tracks (Easter Island, Iceland, Hawaii, Afar, Louisville, Réunion, and Tristan confirmed; Galápagos, Kerguelen and Marquesas likely). The secondary hotspots originate at the upper/lower mantle boundary, and do not form large volcanic provinces but island chains (Samoa, Tahiti, Cook, Pitcairn, Caroline and MacDonald confirmed, with up to 20 or so more possible). Other potential hotspots are the result of shallow mantle material surfacing in areas of lithospheric break-up caused by tension, and are thus a very different type of volcanism. Estimates for the number of hotspots postulated to be fed by mantle plumes have ranged from about 20 to several thousand, with most geologists considering a few tens to exist. Hawaii, Réunion, Yellowstone, Galápagos, and Iceland are some of the most active volcanic regions to which the hypothesis is applied. The plumes imaged to date vary widely in width and other characteristics, and are tilted; they are not the simple, relatively narrow, purely thermal plumes many expected. Only one (Yellowstone) has as yet been consistently modelled and imaged from deep mantle to surface.
Composition
Most hotspot volcanoes are basaltic (e.g., Hawaii, Tahiti). As a result, they are less explosive than subduction zone volcanoes, in which water is trapped under the overriding plate. Where hotspots occur in continental regions, basaltic magma rises through the continental crust, which melts to form rhyolites. These rhyolites can produce violent eruptions. For example, the Yellowstone Caldera was formed by some of the most powerful volcanic explosions in geologic history.
However, when the rhyolite is completely erupted, it may be followed by eruptions of basaltic magma rising through the same lithospheric fissures (cracks in the lithosphere). An example of this activity is the Ilgachuz Range in British Columbia, which was created by an early complex series of trachyte and rhyolite eruptions, and late extrusion of a sequence of basaltic lava flows. The hotspot hypothesis is now closely linked to the mantle plume hypothesis. The detailed compositional studies now possible on hotspot basalts have allowed samples to be linked over the wider areas often implicated in the latter hypothesis and its seismic imaging developments.
Contrast with subduction zone island arcs
Hotspot volcanoes are considered to have a fundamentally different origin from island arc volcanoes. The latter form over subduction zones, at converging plate boundaries. When one oceanic plate meets another, the denser plate is forced downward into a deep ocean trench. This plate, as it is subducted, releases water into the base of the over-riding plate, and this water mixes with the rock, changing its composition and causing some rock to melt and rise. It is this that fuels a chain of volcanoes, such as the Aleutian Islands, near Alaska.
Hotspot volcanic chains
The joint mantle plume/hotspot hypothesis originally envisaged the feeder structures to be fixed relative to one another, with the continents and seafloor drifting overhead. The hypothesis thus predicts that time-progressive chains of volcanoes are developed on the surface. An example is Yellowstone, which lies at the end of a chain of extinct calderas that become progressively older to the west. Another example is the Hawaiian archipelago, where islands become progressively older and more deeply eroded to the northwest. Geologists have tried to use hotspot volcanic chains to track the movement of the Earth's tectonic plates. This effort has been vexed by the lack of very long chains, by the fact that many are not time-progressive (e.g. the Galápagos), and by the fact that hotspots do not appear to be fixed relative to one another (e.g. Hawaii and Iceland). That mantle plumes are much more complex than originally hypothesised, and move independently of each other and of the plates, is now used to explain such observations. In 2020, Wei et al. used seismic tomography to detect the oceanic plateau, formed about 100 million years ago by the hypothesized mantle plume head of the Hawaii-Emperor seamount chain, now subducted to a depth of 800 km under eastern Siberia.
Postulated hotspot volcano chains
Hawaiian–Emperor seamount chain (Hawaii hotspot)
Louisville Ridge (Louisville hotspot)
Walvis Ridge (Gough and Tristan hotspot)
Kodiak–Bowie Seamount chain (Bowie hotspot)
Cobb–Eickelberg Seamount chain (Cobb hotspot)
New England Seamounts (New England hotspot)
Anahim Volcanic Belt (Anahim hotspot)
Mackenzie dike swarm (Mackenzie hotspot)
Great Meteor hotspot track (New England hotspot)
St. Helena Seamount Chain–Cameroon Volcanic Line (Saint Helena hotspot)
Southern Mascarene Plateau–Chagos-Maldives-Laccadive Ridge (Réunion hotspot)
Ninety East Ridge (Kerguelen hotspot)
Tuamotu–Line Island chain (Easter hotspot)
Austral–Gilbert–Marshall chain (Macdonald hotspot)
Juan Fernández Ridge (Juan Fernández hotspot)
Tasmantid Seamount Chain (Tasmantid hotspot)
Canary Islands (Canary hotspot)
Cape Verde (Cape Verde hotspot)
List of volcanic regions postulated to be hotspots
Eurasian plate
Eifel hotspot (8): w=1, az=082° ±8°, rate=12 ±2 mm/yr
Iceland hotspot (14): Eurasian Plate, w=0.8, az=075° ±10°, rate=5 ±3 mm/yr; North American Plate, w=0.8, az=287° ±10°, rate=15 ±5 mm/yr. Possibly related to the North Atlantic continental rifting (62 Ma), Greenland.
Azores hotspot (1): Eurasian Plate, w=0.5, az=110° ±12°; North American Plate, w=0.3, az=280° ±15°
Jan Mayen hotspot (15)
Hainan hotspot (46): az=000° ±15°
African plate
Mount Etna (47)
Hoggar hotspot (13): w=0.3, az=046° ±12°
Tibesti hotspot (40): w=0.2, az=030° ±15°
Jebel Marra/Darfur hotspot (6): w=0.5, az=045° ±8°
Afar hotspot (29, misplaced in map): w=0.2, az=030° ±15°, rate=16 ±8 mm/yr. Possibly related to the Afar triple junction, 30 Ma.
Cameroon hotspot (17): w=0.3, az=032° ±3°, rate=15 ±5 mm/yr
Madeira hotspot (48): w=0.3, az=055° ±15°, rate=8 ±3 mm/yr
Canary hotspot (18): w=1, az=094° ±8°, rate=20 ±4 mm/yr
New England/Great Meteor hotspot (28): w=0.8, az=040° ±10°
Cape Verde hotspot (19): w=0.2, az=060° ±30°
Sierra Leone hotspot
St. Helena hotspot (34): w=1, az=078° ±5°, rate=20 ±3 mm/yr
Gough hotspot (49), at 40°19′ S 9°56′ W: w=0.8, az=079° ±5°, rate=18 ±3 mm/yr
Tristan hotspot (42), at 37°07′ S 12°17′ W
Vema hotspot (Vema Seamount, 43), at 31°38′ S 8°20′ E. Possibly related to the Paraná and Etendeka traps (c. 132 Ma) through the Walvis Ridge.
Discovery hotspot (50) (Discovery Seamounts): w=1, az=068° ±3°
Bouvet hotspot (51)
Shona/Meteor hotspot (27): w=0.3, az=074° ±6°
Réunion hotspot (33): w=0.8, az=047° ±10°, rate=40 ±10 mm/yr. Possibly related to the Deccan Traps (main events: 68.5–66 Ma).
Comoros hotspot (21): w=0.5, az=118° ±10°, rate=35 ±10 mm/yr
Antarctic plate
Marion hotspot (25): w=0.5, az=080° ±12°
Crozet hotspot (52): w=0.8, az=109° ±10°, rate=25 ±13 mm/yr. Possibly related to the Karoo-Ferrar geologic province (183 Ma).
Kerguelen hotspot (20): w=0.2, az=050° ±30°, rate=3 ±1 mm/yr. Related to the Kerguelen Plateau (130 Ma).
Heard hotspot (53), possibly part of Kerguelen hotspot: w=0.2, az=030° ±20°
Île Saint-Paul and Île Amsterdam could be part of the Kerguelen hotspot trail (St. Paul is possibly not another hotspot)
Balleny hotspot (2): w=0.2, az=325° ±7°
Erebus hotspot (54)
South American plate
Trindade/Martin Vaz hotspot (41): w=1, az=264° ±5°
Fernando hotspot (9): w=1, az=266° ±7°. Possibly related to the Central Atlantic Magmatic Province (c. 200 Ma).
Ascension hotspot (55)
North American plate
Bermuda hotspot (56): w=0.3, az=260° ±15°
Yellowstone hotspot (44): w=0.8, az=235° ±5°, rate=26 ±5 mm/yr. Possibly related to the Columbia River Basalt Group (17–14 Ma).
Raton hotspot (32): w=1, az=240° ±4°, rate=30 ±20 mm/yr
Anahim hotspot (45) (Nazko Cone)
Australian plate
Lord Howe hotspot (22): w=0.8, az=351° ±10°
Tasmantid hotspot (39): w=0.8, az=007° ±5°, rate=63 ±5 mm/yr
East Australia hotspot (30): w=0.3, az=000° ±15°, rate=65 ±3 mm/yr
Nazca plate
Juan Fernández hotspot (16): w=1, az=084° ±3°, rate=80 ±20 mm/yr
San Felix hotspot (36): w=0.3, az=083° ±8°
Easter hotspot (7): w=1, az=087° ±3°, rate=95 ±5 mm/yr
Galápagos hotspot (10): Nazca Plate, w=1, az=096° ±5°, rate=55 ±8 mm/yr; Cocos Plate, w=0.5, az=045° ±6°. Possibly related to the Caribbean large igneous province (main events: 95–88 Ma).
Pacific plate
Louisville hotspot (23): w=1, az=316° ±5°, rate=67 ±5 mm/yr. Possibly related to the Ontong Java Plateau (125–120 Ma).
Foundation hotspot/Ngatemato seamounts (57): w=1, az=292° ±3°, rate=80 ±6 mm/yr
Macdonald hotspot (24): w=1, az=289° ±6°, rate=105 ±10 mm/yr
North Austral/President Thiers (President Thiers Bank, 58): w=1.0, az=293° ±3°, rate=75 ±15 mm/yr
Arago hotspot (Arago Seamount, 59): w=1, az=296° ±4°, rate=120 ±20 mm/yr
Maria/Southern Cook hotspot (Îles Maria, 60): w=0.8, az=300° ±4°
Samoa hotspot (35): w=0.8, az=285° ±5°, rate=95 ±20 mm/yr
Crough hotspot (Crough Seamount, 61): w=0.8, az=284° ±2°
Pitcairn hotspot (31): w=1, az=293° ±3°, rate=90 ±15 mm/yr
Society/Tahiti hotspot (38): w=0.8, az=295° ±5°, rate=109 ±10 mm/yr
Marquesas hotspot (26): w=0.5, az=319° ±8°, rate=93 ±7 mm/yr
Caroline hotspot (4): w=1, az=289° ±4°, rate=135 ±20 mm/yr
Hawaii hotspot (12): w=1, az=304° ±3°, rate=92 ±3 mm/yr
Socorro/Revillagigedos hotspot (37)
Guadalupe hotspot (11): w=0.8, az=292° ±5°, rate=80 ±10 mm/yr
Cobb hotspot (5): w=1, az=321° ±5°, rate=43 ±3 mm/yr
Bowie/Pratt-Welker hotspot (3): w=0.8, az=306° ±4°, rate=40 ±20 mm/yr
Former hotspots
Euterpe/Musicians hotspot (Musicians Seamounts)
Mackenzie hotspot
Matachewan hotspot
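Since time-progressive chains like Hawaii record plate motion, the average plate speed can be estimated as distance along the chain divided by the age difference of its volcanoes. The sketch below (my illustration, not from the source) shows the arithmetic using approximate published ages and distances for three Hawaiian–Emperor volcanoes; the figures are rounded and for illustration only:

# A minimal sketch: estimate Pacific plate speed from the age and
# distance of Hawaiian-chain volcanoes relative to the active hotspot.
# Ages (million years) and distances (km from Kilauea) are rounded,
# approximate values used only for illustration.
seamounts = [
    ("Necker", 10.3, 1058),
    ("Midway", 27.7, 2432),
    ("Suiko", 64.7, 4860),   # Emperor chain
]

for name, age_ma, dist_km in seamounts:
    # 1 km/Myr is numerically equal to 1 mm/yr.
    rate = dist_km / age_ma
    print(f"{name}: ~{rate:.0f} mm/yr")

The results (roughly 75–105 mm/yr) bracket the ~92 mm/yr listed for the Hawaii hotspot above; the spread itself hints at why hotspots cannot be treated as perfectly fixed reference points.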
Physical sciences
Volcanic landforms
null
1048518
https://en.wikipedia.org/wiki/Supramolecular%20chemistry
Supramolecular chemistry
Supramolecular chemistry refers to the branch of chemistry concerning chemical systems composed of a discrete number of molecules. The strength of the forces responsible for the spatial organization of the system ranges from weak intermolecular forces, electrostatic charge, or hydrogen bonding to strong covalent bonding, provided that the electronic coupling strength remains small relative to the energy parameters of the component. While traditional chemistry concentrates on the covalent bond, supramolecular chemistry examines the weaker and reversible non-covalent interactions between molecules. These forces include hydrogen bonding, metal coordination, hydrophobic forces, van der Waals forces, pi–pi interactions and electrostatic effects. Important concepts advanced by supramolecular chemistry include molecular self-assembly, molecular folding, molecular recognition, host–guest chemistry, mechanically-interlocked molecular architectures, and dynamic covalent chemistry. The study of non-covalent interactions is crucial to understanding many biological processes that rely on these forces for structure and function. Biological systems are often the inspiration for supramolecular research. History The existence of intermolecular forces was first postulated by Johannes Diderik van der Waals in 1873. However, Nobel laureate Hermann Emil Fischer developed supramolecular chemistry's philosophical roots. In 1894, Fischer suggested that enzyme–substrate interactions take the form of a "lock and key", an idea that anticipated the fundamental principles of molecular recognition and host–guest chemistry. In the early twentieth century non-covalent bonds came to be understood in progressively greater detail, with the hydrogen bond being described by Latimer and Rodebush in 1920. As this understanding of non-covalent interactions deepened, for example through the elucidation of the structure of DNA, chemists began to emphasize their importance. In 1967, Charles J. Pedersen discovered crown ethers, which are ring-like structures capable of chelating certain metal ions. Then, in 1969, Jean-Marie Lehn discovered a class of molecules similar to crown ethers, called cryptands. After that, Donald J. Cram synthesized many variants of crown ethers, as well as separate molecules capable of selective interaction with certain chemicals. The three scientists were awarded the Nobel Prize in Chemistry in 1987 for "development and use of molecules with structure-specific interactions of high selectivity". In 2016, Bernard L. Feringa, Sir J. Fraser Stoddart, and Jean-Pierre Sauvage were awarded the Nobel Prize in Chemistry "for the design and synthesis of molecular machines". The term supermolecule (or supramolecule) was introduced by Karl Lothar Wolf et al. (Übermoleküle) in 1937 to describe hydrogen-bonded acetic acid dimers. The term supermolecule is also used in biochemistry to describe complexes of biomolecules, such as peptides and oligonucleotides composed of multiple strands. Eventually, chemists applied these concepts to synthetic systems. One breakthrough came in the 1960s with the synthesis of crown ethers by Charles J. Pedersen. Following this work, other researchers such as Donald J. Cram, Jean-Marie Lehn and Fritz Vögtle reported a variety of three-dimensional receptors, and throughout the 1980s research in the area gathered pace rapidly, with concepts such as mechanically interlocked molecular architectures emerging.
The influence of supramolecular chemistry was established by the 1987 Nobel Prize for Chemistry, which was awarded to Donald J. Cram, Jean-Marie Lehn, and Charles J. Pedersen in recognition of their work in this area. The development of selective "host–guest" complexes in particular, in which a host molecule recognizes and selectively binds a certain guest, was cited as an important contribution. Concepts Molecular self-assembly Molecular self-assembly is the construction of systems without guidance or management from an outside source (other than to provide a suitable environment). The molecules are directed to assemble through non-covalent interactions. Self-assembly may be subdivided into intermolecular self-assembly (to form a supramolecular assembly) and intramolecular self-assembly (or folding, as demonstrated by foldamers and polypeptides). Molecular self-assembly also allows the construction of larger structures such as micelles, membranes, vesicles, and liquid crystals, and is important to crystal engineering. Molecular recognition and complexation Molecular recognition is the specific binding of a guest molecule to a complementary host molecule to form a host–guest complex. Often, the definition of which species is the "host" and which is the "guest" is arbitrary. The molecules are able to identify each other using non-covalent interactions. Key applications of this field include the construction of molecular sensors and catalysis. Template-directed synthesis Molecular recognition and self-assembly may be used with reactive species in order to pre-organize a system for a chemical reaction (to form one or more covalent bonds). It may be considered a special case of supramolecular catalysis. Non-covalent bonds between the reactants and a "template" hold the reactive sites of the reactants close together, facilitating the desired chemistry. This technique is particularly useful for situations where the desired reaction conformation is thermodynamically or kinetically unlikely, such as in the preparation of large macrocycles. This pre-organization also serves purposes such as minimizing side reactions, lowering the activation energy of the reaction, and producing desired stereochemistry. After the reaction has taken place, the template may remain in place, be forcibly removed, or may be "automatically" decomplexed on account of the different recognition properties of the reaction product. The template may be as simple as a single metal ion or may be extremely complex. Mechanically interlocked molecular architectures Mechanically interlocked molecular architectures consist of molecules that are linked only as a consequence of their topology. Some non-covalent interactions may exist between the different components (often those that were used in the construction of the system), but the components are not linked by covalent bonds. Supramolecular chemistry, and template-directed synthesis in particular, is key to the efficient synthesis of these compounds. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, molecular Borromean rings and ravels. Dynamic covalent chemistry In dynamic covalent chemistry, covalent bonds are broken and formed in a reversible reaction under thermodynamic control. While covalent bonds are key to the process, the system is directed by non-covalent forces to form the lowest energy structures. Biomimetics Many synthetic supramolecular systems are designed to copy functions of biological systems.
These biomimetic architectures can be used to learn about both the biological model and the synthetic implementation. Examples include photoelectrochemical systems, catalytic systems, protein design and self-replication. Imprinting Molecular imprinting describes a process by which a host is constructed from small molecules using a suitable molecular species as a template. After construction, the template is removed, leaving only the host. The template for host construction may be subtly different from the guest that the finished host binds to. In its simplest form, imprinting uses only steric interactions, but more complex systems also incorporate hydrogen bonding and other interactions to improve binding strength and specificity. Molecular machinery Molecular machines are molecules or molecular assemblies that can perform functions such as linear or rotational movement, switching, and entrapment. These devices exist at the boundary between supramolecular chemistry and nanotechnology, and prototypes have been demonstrated using supramolecular concepts. Jean-Pierre Sauvage, Sir J. Fraser Stoddart and Bernard L. Feringa shared the 2016 Nobel Prize in Chemistry for the "design and synthesis of molecular machines". Building blocks Supramolecular systems are rarely designed from first principles. Rather, chemists have a range of well-studied structural and functional building blocks that they are able to use to build up larger functional architectures. Many of these exist as whole families of similar units, from which the analog with the exact desired properties can be chosen. Synthetic recognition motifs The pi–pi charge-transfer interactions of bipyridinium with dioxyarenes or diaminoarenes have been used extensively for the construction of mechanically interlocked systems and in crystal engineering. The use of crown ether binding with metal or ammonium cations is ubiquitous in supramolecular chemistry. The formation of carboxylic acid dimers and other simple hydrogen-bonding interactions is another widely used motif. The complexation of bipyridines or terpyridines with ruthenium, silver or other metal ions is of great utility in the construction of complex architectures of many individual molecules. The complexation of porphyrins or phthalocyanines around metal ions gives access to catalytic, photochemical and electrochemical properties in addition to the complexation itself. These units are used a great deal by nature. Macrocycles Macrocycles are very useful in supramolecular chemistry, as they provide whole cavities that can completely surround guest molecules and may be chemically modified to fine-tune their properties. Cyclodextrins, calixarenes, cucurbiturils and crown ethers are readily synthesized in large quantities, and are therefore convenient for use in supramolecular systems. More complex cyclophanes and cryptands can be synthesised to provide more tailored recognition properties. Supramolecular metallocycles are macrocyclic aggregates with metal ions in the ring, often formed from angular and linear modules. Common metallocycle shapes in these types of applications include triangles, squares, and pentagons, each bearing functional groups that connect the pieces via "self-assembly". Metallacrowns are metallomacrocycles generated via a similar self-assembly approach from fused chelate rings. Structural units Many supramolecular systems require their components to have suitable spacing and conformations relative to each other, and therefore easily employed structural units are required.
Commonly used spacers and connecting groups include polyether chains, biphenyls and triphenyls, and simple alkyl chains. The chemistry for creating and connecting these units is very well understood. Nanoparticles, nanorods, fullerenes and dendrimers offer nanometer-sized structures and encapsulation units. Surfaces can be used as scaffolds for the construction of complex systems and also for interfacing electrochemical systems with electrodes. Regular surfaces can be used for the construction of self-assembled monolayers and multilayers. The understanding of intermolecular interactions in solids has undergone a major renaissance via inputs from different experimental and computational methods in the last decade. This includes high-pressure studies in solids and "in situ" crystallization of compounds which are liquids at room temperature, along with the use of electron density analysis, crystal structure prediction and DFT calculations in the solid state, enabling a quantitative understanding of the nature, energetics and topological properties associated with such interactions in crystals. Photo-chemically and electro-chemically active units Porphyrins and phthalocyanines have highly tunable photochemical and electrochemical activity as well as the potential to form complexes. Photochromic and photoisomerizable groups can change their shapes and properties, including binding properties, upon exposure to light. Tetrathiafulvalene (TTF) and quinones have multiple stable oxidation states, and therefore can be used in redox reactions and electrochemistry. Other units, such as benzidine derivatives, viologens, and fullerenes, are useful in supramolecular electrochemical devices. Biologically-derived units The extremely strong complexation between avidin and biotin, one of the strongest known non-covalent interactions, has been used as the recognition motif to construct synthetic systems. The binding of enzymes with their cofactors has been used as a route to produce modified enzymes, electrically contacted enzymes, and even photoswitchable enzymes. DNA has been used both as a structural and as a functional unit in synthetic supramolecular systems. Applications Materials technology Supramolecular chemistry has found many applications; in particular, molecular self-assembly processes have been applied to the development of new materials. Large structures can be readily accessed using bottom-up synthesis, as they are composed of small molecules requiring fewer steps to synthesize. Thus most of the bottom-up approaches to nanotechnology are based on supramolecular chemistry. Many smart materials are based on molecular recognition. Catalysis A major application of supramolecular chemistry is the design and understanding of catalysts and catalysis. Non-covalent interactions influence the binding of reactants. Medicine Design based on supramolecular chemistry has led to numerous applications in the creation of functional biomaterials and therapeutics. Supramolecular biomaterials afford a number of modular and generalizable platforms with tunable mechanical, chemical and biological properties. These include systems based on supramolecular assembly of peptides, host–guest macrocycles, high-affinity hydrogen bonding, and metal–ligand interactions. A supramolecular approach has been used extensively to create artificial ion channels for the transport of sodium and potassium ions into and out of cells.
Supramolecular chemistry is also important to the development of new pharmaceutical therapies through an understanding of the interactions at drug binding sites. The area of drug delivery has also made critical advances as a result of supramolecular chemistry providing encapsulation and targeted release mechanisms. In addition, supramolecular systems have been designed to disrupt protein–protein interactions that are important to cellular function. Data storage and processing Supramolecular chemistry has been used to demonstrate computational functions on a molecular scale. In many cases, photonic or chemical signals have been used in these components, but electrical interfacing of these units has also been shown by supramolecular signal transduction devices. Data storage has been accomplished by the use of molecular switches with photochromic and photoisomerizable units, by electrochromic and redox-switchable units, and even by molecular motion. Synthetic molecular logic gates have been demonstrated on a conceptual level. Even full-scale computations have been achieved by semi-synthetic DNA computers.
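As a worked illustration of the host–guest complexation described above: for a 1:1 equilibrium H + G ⇌ HG with association constant Ka = [HG]/([H][G]), mass balance turns the equilibrium condition into a quadratic whose physical root gives the concentration of bound complex. The Python sketch below carries out that arithmetic; the function name and all numerical values are illustrative assumptions, not data from this article.

import math

def bound_complex(Ka, H0, G0):
    """Equilibrium concentration of HG for 1:1 binding H + G <=> HG.

    Ka: association constant in M^-1, defined as [HG]/([H][G]).
    H0, G0: total host and guest concentrations in M.
    Substituting [H] = H0 - x and [G] = G0 - x into the definition of Ka gives
    Ka*x**2 - (Ka*(H0 + G0) + 1)*x + Ka*H0*G0 = 0; the smaller root is the
    physically meaningful one (0 <= x <= min(H0, G0)).
    """
    b = Ka * (H0 + G0) + 1.0
    disc = b * b - 4.0 * Ka * Ka * H0 * G0
    return (b - math.sqrt(disc)) / (2.0 * Ka)

# Illustrative numbers only: a 1 mM host of modest affinity (Ka = 1e4 M^-1)
# mixed with one equivalent of guest.
Ka, H0, G0 = 1e4, 1e-3, 1e-3
HG = bound_complex(Ka, H0, G0)
print(f"[HG] = {HG * 1e3:.2f} mM, fraction of host bound = {HG / H0:.0%}")
# prints: [HG] = 0.73 mM, fraction of host bound = 73%

Estimates of this kind are a routine first step when planning a binding titration or judging whether a given Ka is strong enough for a sensor application.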
Physical sciences
Supramolecular chemistry
Chemistry
1050386
https://en.wikipedia.org/wiki/Audiology
Audiology
Audiology (from Latin audire, "to hear", and from Greek -logia, "branch of learning") is a branch of science that studies hearing, balance, and related disorders. Audiologists treat those with hearing loss and proactively prevent related damage. By employing various testing strategies (e.g. behavioral hearing tests, otoacoustic emission measurements, and electrophysiologic tests), audiologists aim to determine whether someone has normal sensitivity to sounds. If hearing loss is identified, audiologists determine which portions of hearing (high, middle, or low frequencies) are affected, to what degree (severity of loss), and where the lesion causing the hearing loss is found (outer ear, middle ear, inner ear, auditory nerve and/or central nervous system). If an audiologist determines that a hearing loss or vestibular abnormality is present, they will provide recommendations for interventions or rehabilitation (e.g. hearing aids, cochlear implants, appropriate medical referrals). In addition to diagnosing audiologic and vestibular pathologies, audiologists can also specialize in rehabilitation of tinnitus, hyperacusis, misophonia, auditory processing disorders, cochlear implant use and/or hearing aid use. Audiologists can provide hearing health care from birth to end-of-life. Audiologist An audiologist is a health care provider specializing in identifying, diagnosing, treating, and monitoring disorders of the auditory and vestibular systems. Audiologists are trained to diagnose, manage and/or treat hearing, tinnitus, or balance problems. They dispense, manage, and rehabilitate hearing aids and assess candidacy for and map hearing implants, such as cochlear implants, middle ear implants and bone conduction implants. They counsel families through a new diagnosis of hearing loss in infants, and help teach coping and compensation skills to late-deafened adults. They also help design and implement personal and industrial hearing safety programs, newborn hearing screening programs, and school hearing screening programs, and provide special or custom-fitted ear plugs and other hearing protection devices to help prevent hearing loss. Audiologists are trained to evaluate peripheral vestibular disorders originating from pathologies of the vestibular portion of the inner ear. They also provide treatment for certain vestibular and balance disorders, such as benign paroxysmal positional vertigo (BPPV). In addition, many audiologists work as auditory or acoustic scientists in a research capacity. Audiologists are trained in anatomy and physiology, hearing aids, cochlear implants, electrophysiology, acoustics, psychophysics and psychoacoustics, neurology, vestibular function and assessment, balance disorders, counseling and communication options such as sign language. Audiologists may also run a neonatal hearing screening program, which has been made compulsory in many hospitals in the US, UK and India. An audiologist usually graduates with one of the following qualifications: BSc, MSc (Audiology), AuD, STI, PhD, or ScD, depending on the program and country attended. In 2018, a report by CareerCast found the occupation of audiologist to be the third least stressful job surveyed. History The use of the terms audiology and audiologist in publications has been traced back only as far as 1946. The creator of the term remains unknown, but Berger identified possible originators as Mayer BA Schier, Willard B Hargrave, Stanley Nowak, Norman Canfield, or Raymond Carhart.
In a biographical profile by Robert Galambos, Hallowell Davis is credited with coining the term in the 1940s, saying the then-prevalent term "auricular training" sounded like a method of teaching people how to wiggle their ears. The first US university course for audiologists was offered by Carhart at Northwestern University in 1946. Audiology was born of interdisciplinary collaboration. The substantial prevalence of hearing loss observed in the veteran population after World War II inspired the creation of the field as it is known today. The International Society of Audiology (ISA) was founded in 1952 to "...facilitate the knowledge, protection and rehabilitation of human hearing" and to "...serve as an advocate for the profession and for the hearing impaired throughout the world." It promotes interactions among national societies, associations and organizations that have similar missions, through the organization of a biannual world congress, through the publication of the scientific peer-reviewed International Journal of Audiology, and by offering support to the World Health Organization's efforts towards addressing the needs of the hearing impaired and deaf community. Requirements The International Society of Audiology maintains Global Audiology, a portal in Wikiversity that provides information on audiology education and practice around the world. Summary information is provided below: Australia In Australia, audiologists must hold a Master of Audiology, Master of Clinical Audiology, Master of Audiology Studies or, alternatively, a bachelor's degree from overseas certified by the private agency Vocational Education, Training and Assessment Services (VETASSESS). Although audiologists in Australia are not required to be members of any professional body, audiology graduates can undergo a clinical training program or internship leading to accreditation with Audiology Australia (AudA) or the Australian College of Audiology (ACAud), which involves supervised practice and professional development and typically lasts one year. To provide rehabilitative services to eligible pensioners, war veterans, and children and young adults under the age of 26 as part of the Hearing Services Program, an audiologist must hold a qualified practitioner (QP) number, which can be sought by first obtaining accreditation. Brazil In Brazil, audiology training is part of speech pathology and audiology undergraduate, four-year courses. The University of São Paulo was the first university to offer a bachelor's degree in the field, with its course starting in 1977. At the federal level, the recognition of the educational programs and the profession of speech pathologist and audiologist took place on December 9, 1981, signed by President João Figueiredo (law no. 6965). The terms audiology and audiologist can be traced in Brazilian publications since 1946. The work of audiologists in Brazil was described in 2007. Canada In Canada, a Master of Science (MSc) is the minimum requirement to practice audiology in the country. The profession is regulated in New Brunswick, Quebec, Ontario, Manitoba, Saskatchewan, Alberta, and British Columbia, where it is illegal to practice without being registered as a full member in the appropriate provincial regulatory body. Bangladesh A BSc (Hons) in audiology and speech language pathology is required. India To practice audiology, professionals need to have either a bachelor's or a master's degree in audiology and be registered with the Rehabilitation Council of India (RCI).
Malaysia Three Malaysian educational institutions offer degrees in audiology. United Kingdom There are currently five routes to becoming a registered audiologist: an FdSc in hearing aid audiology; a BSc in audiology; an MSc in audiology; a fast-track conversion diploma for those with a BSc in another relevant science subject, available at Southampton, Manchester, UCL, London, and Edinburgh; and a BSc (Hons) in clinical physiology (audiology), available at Glasgow Caledonian University (all applicants must be NHS employees). United States In the United States, audiologists are regulated by state licensure or registration in all 50 states and the District of Columbia. Starting in 2007, the doctor of audiology (AuD) became the entry-level degree for clinical practice in some states, with most states expected to follow this requirement soon, as there are no longer any professional programs in audiology which offer the master's degree. Minimum requirements for the AuD degree include a minimum of 75 semester hours of post-baccalaureate study, meeting prescribed competencies, passing a national exam offered by the Praxis Series of the Educational Testing Service, and practicum experience that is equivalent to a minimum of 12 months of full-time, supervised experience. Most states have continuing education renewal requirements that must be met to stay licensed. Audiologists can also earn certification from the American Speech-Language-Hearing Association or through the American Board of Audiology (ABA). Currently, there are over 70 AuD programs in the United States. In the past, audiologists typically held a master's degree and the appropriate healthcare license. However, in the 1990s the profession began to transition to a doctoral level as a minimum requirement. In the United States, starting in 2007, audiologists were required to receive a doctoral degree (AuD or PhD) in audiology from an accredited university graduate or professional program before practicing. All states require licensing, and audiologists may also carry national board certification from the American Board of Audiology or a certificate of clinical competence in audiology (CCC-A) from the American Speech-Language-Hearing Association. Pakistan In Pakistan, a master's or doctoral degree in audiology is required to practice this profession. The degree must come from a recognised institute, most of which are government-run; otherwise the person cannot obtain a license to practice audiology. The Pakistan Medical and Dental Council (PMDC) issues practicing licenses to medical graduates. In addition, anyone who supplies medical instruments to these practitioners must hold a certificate of accreditation issued by the Pakistan National Accreditation Council (PNAC). Portugal Practicing the audiologist profession in Portugal requires a degree in audiology or a legally equivalent qualification, as defined in Article 4 of Decree-Law 320/99 of August 11. South Africa In South Africa, there are currently five institutions offering training in audiology. The institutions offer different qualifications that make one eligible to practice audiology in South Africa. The qualifications are as follows: I) B. Audiology, II) BSc. Audiology, III) B. Communication Pathology (Audiology), and IV) B. Speech Language Pathology and Audiology (BSLP&A). All practicing audiologists are required to be registered with the Health Professions Council of South Africa (HPCSA).
Turkey Audiology in Turkey started in 1968 as an audiology master's degree program at Hacettepe University Faculty of Medicine, Department of Ear, Nose and Throat. The program operated under the name Audiology until 1989, when it was revised and has since continued as master's and doctoral education in "Audiology and Speech Disorders". The first undergraduate program was opened in 2011, and as of that year, audiology became a profession defined and officially recognized by the state of the Republic of Turkey.
Biology and health sciences
Fields of medicine
Health
1051355
https://en.wikipedia.org/wiki/Sawfish
Sawfish
Sawfish, also known as carpenter sharks, are a family of rays characterized by a long, narrow, flattened rostrum, or nose extension, lined with sharp transverse teeth, arranged in a way that resembles a saw. They are among the largest fish, with some species reaching lengths of about . They are found worldwide in tropical and subtropical regions in coastal marine and brackish estuarine waters, as well as freshwater rivers and lakes. All species are critically endangered or endangered. They should not be confused with sawsharks (order Pristiophoriformes) or the extinct sclerorhynchoids (order Rajiformes), which have a similar appearance, or swordfish (family Xiphiidae), which have a similar name but a very different appearance. Sawfishes are relatively slow breeders and the females give birth to live young. They feed on fish and invertebrates that are detected and captured with the use of their saw. They are generally harmless to humans, but can inflict serious injuries with the saw when captured and defending themselves. Sawfish have been known and hunted for thousands of years, and play an important mythological and spiritual role in many societies around the world. Once common, sawfish have experienced a drastic decline in recent decades, and the only remaining strongholds are in Northern Australia and Florida, United States. All five species are rated as Critically Endangered or Endangered by the IUCN. They are hunted for their fins (shark fin soup), use of parts as traditional medicine, and their teeth and saw. They also face habitat loss. Sawfish have been listed by CITES since 2007, restricting international trade in them and their parts. They are protected in Australia, the United States and several other countries, meaning that sawfish caught by accident have to be released and violations can be punished with hefty fines. Taxonomy and etymology The scientific names of the sawfish family Pristidae and its type genus Pristis are derived from the Greek pristis, meaning "saw". Despite their appearance, sawfish are rays (superorder Batoidea). The sawfish family has traditionally been considered the sole living member of the order Pristiformes, but recent authorities have generally subsumed it into Rhinopristiformes, an order that now includes the sawfish family, as well as families containing guitarfish, wedgefish, banjo rays and the like. Sawfish closely resemble guitarfish, except that the latter group lacks a saw, and their common ancestor was likely similar to guitarfish. Living species The species-level taxonomy of the sawfish family has historically caused considerable confusion and was often described as chaotic. Only in 2013 was it firmly established that there are five living species in two genera. Anoxypristis contains a single living species that historically was included in Pristis, but the two genera are morphologically and genetically highly distinct. Today Pristis contains four living, valid species divided into two species groups. Three species are in the smalltooth group, and there is only a single species in the largetooth group. Three poorly defined species were formerly recognized in the largetooth group, but in 2013 it was shown that P. pristis, P. microdon and P. perotteti do not differ in morphology or genetics. As a consequence, recent authorities treat P. microdon and P. perotteti as junior synonyms of P. pristis. Extinct (fossil) species In addition to the living sawfish, there are several extinct species that are known only from fossil remains, found on all continents.
Peyeria from the Cenomanian age (Late Cretaceous) was once considered the oldest known pristid, though it may represent a rhinid rather than a sawfish, or may be a junior synonym of the sclerorhynchoid Onchopristis. Indisputable sawfish genera emerged in the Cenozoic era about 60 million years ago, relatively soon after the Cretaceous–Paleogene mass extinction. Among these are Propristis, a monotypic genus known only from fossil remains, as well as several extinct Pristis species and several extinct Anoxypristis species (both of these genera are also represented by living species). Historically, palaeontologists have not separated Anoxypristis from Pristis. In contrast, several additional extinct genera are occasionally listed, including Dalpiazia, Onchopristis, Oxypristis, and Mesopristis, but recent authorities generally include the first two genera within Sclerorhynchoidei, and the last two are synonyms of Anoxypristis. The extinct order Sclerorhynchoidei had long rostra with large denticles, similar to sawfishes and sawsharks. This feature evolved convergently, a process recently dubbed "pristification", and the closest living relatives of sclerorhynchoids are actually skates. While they are often called "sawfishes", the more accurate common name for sclerorhynchoids is "sawskates". Appearance and anatomy Sawfish are dull brownish, greyish, greenish or yellowish above, but the shade varies and dark individuals can be almost black. The underside is pale, and typically whitish. Saw The most distinctive feature of sawfish is their saw-like rostrum with a row of whitish teeth (rostral teeth) on either side of it. The rostrum is an extension of the chondrocranium ("skull"), made of cartilage and covered in skin. The rostrum length is typically about one quarter to one third of the total length of the fish, but it varies depending on species, and sometimes with age and sex. The rostral teeth are not teeth in the traditional sense, but heavily modified dermal denticles. The rostral teeth grow in size throughout the life of the sawfish and a tooth is not replaced if it is lost. In Pristis sawfish, the teeth are found along the entire length of the rostrum, but in adult Anoxypristis there are no teeth on the basal one-quarter of the rostrum (about one-sixth in juvenile Anoxypristis). The number of teeth varies depending on the species and can range from 14 to 37 on each side of the rostrum. It is common for a sawfish to have slightly different tooth counts on the two sides of its rostrum, although the difference typically does not exceed three. In some species, females on average have fewer teeth than males. Each tooth is peg-like in Pristis sawfish, and flattened and broadly triangular in Anoxypristis. A combination of features, including fins and rostrum, is typically used to separate the species, but it is possible to do so by the rostrum alone. Head, body and fins Sawfish have a strong shark-like body, a flat underside and a flat head. Pristis sawfish have a rough sandpaper-like skin texture because of the covering of dermal denticles, but in Anoxypristis the skin is largely smooth. The mouth and nostrils are placed on the underside of the head. There are about 88–128 small, blunt-edged teeth in the upper jaw of the mouth and about 84–176 in the lower jaw (not to be confused with the teeth on the saw). These are arranged in 10–12 rows on each jaw, and somewhat resemble a cobblestone road. They have small eyes and behind each is a spiracle, which is used to draw water past the gills.
The gill slits, five on each side, are placed on the underside of the body near the base of the pectoral fins. The position of the gill openings separates them from the superficially similar yet generally much smaller (up to long) sawsharks, in which the slits are on the side of the neck. Unlike sawfish, sawsharks also have a pair of long barbels on the rostrum ("saw"). Sawfish have two relatively high and distinct dorsal fins, wing-like pectoral and pelvic fins, and a tail with a distinct upper lobe and a variably sized lower lobe (lower lobe relatively large in Anoxypristis; small to absent in Pristis sawfish). The position of the first dorsal fin compared to the pelvic fins varies and is a useful feature for separating some of the species. There are no anal fins. Like other elasmobranchs, sawfish lack a swim bladder (instead controlling their buoyancy with a large oil-rich liver), and have a skeleton consisting of cartilage. Males have claspers, a pair of elongated structures used for mating and positioned on the underside at the pelvic fins. The claspers are small and indistinct in young males. Their small intestines contain an internal partition shaped like a corkscrew, called a spiral valve, which increases the surface area available for food absorption. Size Sawfish are large to very large fish, but the maximum size of each species is generally uncertain. The smalltooth sawfish, largetooth sawfish and green sawfish are among the world's largest fish. They can certainly all reach about in total length and there are reports of individuals larger than , but these are often labeled with some uncertainty. Typically reported maximum total lengths of these three are from . Large individuals may weigh as much as , or possibly even more. Old unconfirmed and highly questionable reports of much larger individuals do exist, including one that reputedly had a length of , another that had a weight of , and a third that was long and weighed . The two remaining species, the dwarf sawfish and narrow sawfish, are considerably smaller, but are still large fish with a maximum total length of at least and respectively. In the past it was often reported that the dwarf sawfish only reaches about , but this is now known to be incorrect. Distribution Range Sawfish are found worldwide in tropical and subtropical waters. Historically they ranged in the East Atlantic from Morocco to South Africa, and in the West Atlantic from New York (United States) to Uruguay, including the Caribbean and Gulf of Mexico. There are old reports (last in the late 1950s or shortly after) from the Mediterranean and these have typically been regarded as vagrants, but a review of records strongly suggests that this sea had a breeding population. In the East Pacific they ranged from Mazatlán (Mexico) to northern Peru. Although the Gulf of California has occasionally been included in their range, the only known Pacific Mexican records of sawfish are from south of its mouth. They were widespread in the western and central Indo-Pacific, ranging from South Africa to the Red Sea and Persian Gulf, east and north to Korea and southern Japan, through Southeast Asia to Papua New Guinea and Australia. Today sawfish have disappeared from much of their historical range. Habitat Sawfish are primarily found in coastal marine and estuarine brackish waters, but they are euryhaline (can adapt to various salinities) and also found in freshwater.
The largetooth sawfish, alternatively called the freshwater sawfish, has the greatest affinity for freshwater. For example, it has been reported as far as up the Amazon River and in Lake Nicaragua, and its young spend the first years of their life in freshwater. In contrast, the smalltooth, green and dwarf sawfish typically avoid pure freshwater, but may occasionally move far up rivers, especially during periods of increased salinity. There are reports of narrow sawfish seen far upriver, but these need confirmation and may involve misidentifications of other species of sawfish. Sawfish are mostly found in relatively shallow waters, typically at depths less than , and occasionally less than . Young prefer very shallow places and are often found in water only deep. Sawfish can occur offshore, but are rare deeper than . An unidentified sawfish (either a largetooth or smalltooth sawfish) was captured off Central America at a depth in excess of . The dwarf and largetooth sawfish are strictly warm-water species that generally live in waters that are and respectively. The green and smalltooth sawfish also occur in colder waters, in the latter down to , as illustrated by their (original) distributions, which ranged further north and south than those of the strictly warm-water species. Sawfish are bottom-dwellers, but in captivity it has been noted that at least the largetooth and green sawfish readily take food from the water surface. Sawfish are mostly found in places with soft bottoms such as mud or sand, but may also occur over hard rocky bottoms or at coral reefs. They are often found in areas with seagrass or mangroves. Behavior Breeding and life cycle Relatively little is known about the reproductive habits of the sawfish, but all species are ovoviviparous, with the adult females giving birth to live young once a year or every second year. In general, males appear to reach sexual maturity at a slightly younger age and smaller size than females. As far as is known, sexual maturity is reached at an age of 7–12 years in Pristis and 2–3 years in Anoxypristis. In the smalltooth and green sawfish this equals a total length of , in the largetooth sawfish at , in the dwarf sawfish about , and in the narrow sawfish at . This means that the generation length is about 4.6 years in the narrow sawfish and 14.6–17.2 years in the remaining species. Mating involves the male inserting a clasper, one of a pair of organs at the pelvic fins, into the female to fertilize the eggs. As known from many elasmobranchs, the mating appears to be rough, with the sawfish often sustaining lacerations from its partner's saw. However, through genetic testing it has been shown that at least the smalltooth sawfish can also reproduce by parthenogenesis, in which no male is involved and the offspring are clones of their mother. In Florida, United States, it appears that about 3% of the smalltooth sawfish offspring are the result of parthenogenesis. It is speculated that this may be a response to being unable to find a partner, allowing the females to reproduce anyway. The pregnancy lasts several months. There are 1–23 young in each sawfish litter, which are long at birth. In the embryos the rostrum is flexible and it only hardens shortly before birth. To protect the mother, the saws of the young have a soft cover, which falls off shortly after birth.
The pupping grounds are in coastal and estuarine waters. In most species the young generally stay there for the first part of their lives, occasionally moving upriver when there is an increase in salinity. The exception is the largetooth sawfish, whose young move upriver into freshwater, where they stay for 3–5 years, sometimes as much as from the sea. In at least the smalltooth sawfish the young show a degree of site fidelity, generally staying in the same fairly small area in the first part of their lives. In the green and dwarf sawfish there are indications that both sexes remain in the same overall region throughout their lives, with little mixing between the subpopulations. In the largetooth sawfish the males appear to move more freely between the subpopulations, while mothers return to the region where they were born to give birth to their own young. The full lifespan of sawfish is known only with considerable uncertainty. A green sawfish caught as a juvenile lived for 35 years in captivity, and a smalltooth sawfish lived for more than 42 years in captivity. In the narrow sawfish it has been estimated that the lifespan is about 9 years, and in the Pristis sawfish it has been estimated that it varies from about 30 to more than 50 years depending on the exact species. Electrolocation The rostrum (saw), unique among jawed fish, plays a significant role in both locating and capturing prey. The head and rostrum contain thousands of sensory organs, the ampullae of Lorenzini, that allow the sawfish to detect and monitor the movements of other organisms by measuring the electric fields they emit. Electroreception is found in all cartilaginous fishes and some bony fishes. In sawfish the sensory organs are packed most densely on the upper- and underside of the rostrum, varying in position and numbers depending on the species. Utilizing their saw as an extended sensing device, sawfish are able to examine their entire surroundings from a position close to the seafloor. It appears that sawfish can detect potential prey by electroreception from a distance of about . Some waters where sawfish live are very murky, limiting the possibility of hunting by sight. Feeding Sawfish are predators that feed on fish, crustaceans and molluscs. Old stories of sawfish attacking large prey such as whales and dolphins by cutting out pieces of flesh are now considered to be wholly unsubstantiated. Humans are far too large to be considered potential prey. In captivity they are typically fed ad libitum or in set amounts that equal 1–4% of the sawfish's total weight per week, but there are indications that captives grow considerably faster than their wild counterparts. Exactly how they use their saw after the prey has been located has been debated, and some scholarship on the subject has been based on speculation rather than actual observation. In 2012 it was shown that there are three primary techniques, informally called "saw in water", "saw on substrate" and "pin". If a prey item such as a fish is located in the open water, the sawfish uses the first method, making a rapid swipe at the prey with its saw to incapacitate it. It is then brought to the seabed and eaten. The "saw on substrate" is similar, but used on prey at the seabed. The saw is highly streamlined and when swiped it causes very little water movement. The final method involves pinning the prey against the seabed with the underside of the saw, in a manner similar to that seen in guitarfish.
The "pin" is also used to manipulate the position of the prey, allowing fish to be swallowed head-first and thus without engaging any possible fin spines. The spines of catfish, a common prey, have been found imbedded in the rostrum of sawfish. Schools of mullets have been observed trying to escape sawfish. Prey fish are typically swallowed whole and not cut into small pieces with the saw, although on occasion one may be split in half during capture by the slashing motion. Prey choice is therefore limited by the size of the mouth. A sawfish had a catfish in its stomach. It had been suggested that sawfish use their saw to dig/rake in the bottom for prey, but this was not observed during a 2012 study, or supported by later hydrodynamic studies. Large sawfish often have rostral teeth with tips that are notably worn. Saw and self-defense Old stories often describe sawfish as highly dangerous to humans, sinking ships and cutting people in half, but today these are considered myths and not factual. Sawfish are actually docile and harmless to humans, except when captured; they can inflict serious injuries when defending themselves, by thrashing the saw from side to side. The saw is also used in self-defense against predators, such as sharks, that may eat sawfish. In captivity, they have been seen using their saws during fights over hierarchy or food. Relationship with humans In history, culture and mythology The largetooth sawfish was among the species formally described by Carl Linnaeus (as "Squalus pristis") in Systema Naturae in 1758, but sawfish were already known thousands of years earlier. Sawfish were occasionally mentioned in antiquity, in works such as Pliny's Natural History (77–79 AD). Pristis, the scientific name formalised for sawfish by Linnaeus in 1758, was also in use as a name even before his publication. For example, sawfish or "priste" were included in Libri de piscibus marinis in quibus verae piscium effigies expressae sunt by Guillaume Rondelet in 1554, and "pristi" were included in De piscibus libri V, et De cetis lib. vnus by Ulisse Aldrovandi in 1613. Outside Europe, sawfish are mentioned in old Persian texts, such as 13th century writings by Zakariya al-Qazwini. Sawfish have been found among archaeological remains in several parts of the world, including the Persian Gulf region, the Pacific coast of Panama, coastal Brazil and elsewhere. The cultural significance of sawfish varies significantly. The Aztecs, in what is currently Mexico, often included depictions of sawfish rostra (saws), notably as the striker/sword of the monster Cipactli. Numerous sawfish rostra have been found buried at the Templo Mayor, and two locations in coastal Veracruz had Aztec names referring to sawfish. In the same general region, sawfish teeth have been found in Mayan graves. The sawfish saw is part of the dancing masks of the Huave and Zapotecs in Oaxaca, Mexico. The Kuna people on the Caribbean coast of Panama and Colombia consider sawfish as rescuers of drowning people, and protectors against dangerous sea creatures. Also in Panama, sawfish were recognized as containing powerful spirits that could protect humans against supernatural enemies. In the Bissagos Islands off West Africa, dancing dressed as sawfish and other sea creatures is part of men's coming-of-age ceremonies. In Gambia, the saws indicate courage; the more saws are on display in a house, the more courageous the owner is seen to be. In Senegal, the Lebu people believe the saw can protect their family, house and livestock. 
In the same general region, they are recognized as ancestral spirits who use the saw as a magic weapon. The Akan people of Ghana see sawfish as an authority symbol. There are proverbs featuring sawfish in the African language Duala. In some other parts of coastal Africa, sawfish are considered extremely dangerous and supernatural, but their powers can be used by humans, as their saw is seen to retain powers against disease, bad luck and evil. Among most African groups, consumption of sawfish meat is entirely acceptable, but among some (the Fula, Serer and Wolof people) it is taboo. In the Niger Delta region of southern Nigeria, the saws of sawfish (known as oki in Ijaw and neighbouring languages) are often used in masquerades. In Asia, sawfish are a powerful symbol in many cultures. Asian shamans use sawfish rostrums for exorcisms and other ceremonies to repel demons and disease. They are believed to protect houses from ghosts when hung over doorways. Illustrations of sawfish are often found at Buddhist temples in Thailand. In the Sepik region of New Guinea, locals admire sawfish, but also see them as punishers, who will unleash heavy rainstorms on anyone breaking fishing taboos. Among the Warnindhilyagwa, a group of Indigenous Australians, the ancestral sawfish, Yukwurrirrindangwa, and rays created the land. The ancestral sawfish carved out the river of Groote Eylandt with their saws. Among European sailors, sawfish were often feared as animals that could sink ships by piercing or sawing the hull with their saw (claims now known to be entirely untrue), but there are also stories of them saving people. In one case, it was described how a ship almost sank during a storm in Italy in 1573. The sailors prayed and made it safely ashore, where they discovered a sawfish that had "plugged" a hole in the ship with its saw. A sawfish rostrum said to be from this miraculous event is kept at the sanctuary of Carmine Maggiore in Naples. Sawfish have been used as symbols in recent history. During World War II, illustrations of sawfish were placed on navy ships, and used as symbols by both American and Nazi German submarines. Sawfish served as the emblem of the German U-96 submarine, known for its portrayal in Das Boot, and was later the symbol of the 9th U-boat Flotilla. The German World War II Battle Badge of Small Combat Units depicted a sawfish. In cartoons and humorous popular culture, the sawfish, particularly its rostrum ("nose"), has been employed as a sort of living tool. Examples of this can be found in Vicke Viking and the Fighting Fantasy volume "Demons of the Deep". A stylized sawfish was chosen by the Central Bank of the West African States to appear on coins and banknotes of the CFA currency. This was because it is a mythological representation of fecundity and prosperity. The image takes its form from an Akan and Baoule bronze weight, used for exchanges in the trade of gold powder. In aquariums Sawfish are popular in public aquariums, but require very large tanks. In a review of 10 North American and European public aquariums that kept sawfish, their tanks were all very large and ranged from about . Individuals in public aquariums often function as "ambassadors" for sawfish and their conservation plight. In captivity they are quite robust and appear to grow faster than their wild counterparts (perhaps due to consistent access to food). Some individuals have lived for decades, but breeding them has proven difficult.
In 2012, four smalltooth sawfish pups were born at Atlantis Paradise Island in the Bahamas and, in 2023, another three were born at SeaWorld Orlando in Florida; these remain the only times a member of this family has been successfully bred in captivity. Unsuccessful breeding attempts had taken place earlier at the same facility, including a miscarriage in 2003. Nevertheless, it is hoped that this success may be the first step in a captive breeding program for the threatened sawfish. It is speculated that seasonal variations in water temperature, salinity and photoperiod are necessary to encourage breeding. Artificial insemination, as already practiced with a few captive sharks, is also being considered. Tracking studies indicate that if sawfish are released to the wild after spending a period in captivity (for example, if they outgrow their exhibit), they rapidly adopt a movement pattern similar to that of fully wild sawfish. Among the five sawfish species, only the four Pristis species are known to be kept in public aquariums. The most common is the largetooth sawfish, with studbooks including 16 individuals in North America in 2014, 5 individuals in Europe in 2013 and 13 individuals in Australia in 2017; this was followed by the green sawfish, with 13 individuals in North America, and 6 in Europe. Both of these species are also kept at public aquariums in Asia, and the only captive dwarf sawfish are in Japan. In 2014, studbooks included 12 smalltooth sawfish in North America, and the only ones kept elsewhere are at a public aquarium in Colombia. Decline and conservation Sawfish were once common, with habitat along the coastlines of 90 countries, and locally they were even abundant, but they have declined drastically and are now among the most threatened groups of marine fish. Fishing for various uses Sawfish and their parts have been used for numerous things. In approximate order of impact, the four most serious threats today are the use of fins in shark fin soup, of body parts in traditional medicine, of rostral teeth as cockfighting spurs, and of the saw as a novelty item. Despite being rays rather than sharks, sawfish have some of the most prized fins for use in shark fin soup, on a level with tiger, mako, blue, porbeagle, thresher, hammerhead, blacktip, sandbar and bull shark. As traditional medicine (especially Chinese medicine, but also known from Mexico, Brazil, Kenya, Eritrea, Yemen, Iran, India and Bangladesh), sawfish parts, oil or powder have been claimed to work against respiratory ailments, eye problems, rheumatism, pain, inflammation, scabies, skin ulcers, diarrhea and stomach problems, but there is no evidence supporting any of these uses. The saws are used in ceremonies and as curiosities. Until relatively recently many saws were sold to visiting tourists, or through antique stores or shell shops, but they are now mostly sold online, often illegally. In 2007 it was estimated that the fins and saw from a single sawfish potentially could earn a fisher more than US$5,000 in Kenya, and in 2014 a single rostral tooth sold as a cockfighting spur in Peru or Ecuador had a value of up to US$220. Secondary uses are the meat for consumption and the skin for leather. Historically the saws were used as weapons (large saws) and combs (small saws). Oil from the liver was prized for use in boat repairs and street lights, and as recently as the 1920s in Florida it was regarded as the best fish oil for consumption.
Sawfish fishing goes back several thousand years, but until relatively recently it typically involved traditional low-intensity methods such as simple hook-and-line or spearing. In most regions the major population decline in sawfish started in the 1960s–1980s. This coincided with a major growth in demand for fins for shark fin soup, the expansion of the international shark finning fishing fleet, and a proliferation of modern nylon fishing nets. The exception is the dwarf sawfish, which was relatively widespread in the Indo-Pacific, but by the early 1900s it had already disappeared from most of its range, only surviving for certain in Australia (there is a single recent possible record from the Arabian region). The saw has been described as the sawfish's Achilles' heel, as it easily becomes entangled in fishing nets. Sawfish can also be difficult or dangerous to release from nets, meaning that some fishers will kill them even before bringing them aboard the boat, or cut off the saw to keep it and release the fish. Because it is their main hunting device, the long-term survival of saw-less sawfish is highly questionable. In Australia, where sawfish have to be released if caught, the narrow sawfish has the highest mortality rate, but it is still almost 50% for dwarf sawfish caught in gill nets. In an attempt to lower this, a guide to sawfish release has been published. Habitat destruction and vulnerability to predators Although fishing is the main cause of the drastic decline in sawfish, another serious problem is habitat destruction. Coastal and estuarine habitats, including mangrove and seagrass meadows, are often degraded by human developments and pollution, and these are important habitats for sawfish, especially their young. In a study of juvenile sawfish in Western Australia's Fitzroy River, about 60% had bite marks from bull sharks or crocodiles. Changes to river flows, such as by dams or droughts, can increase the risk faced by sawfish young by bringing them into more contact with predators. Endangered sawfish and other fish in Florida are showing strange behaviors and dying because of environmental toxins. These toxins, produced by microalgae near the sea bottom, affect the neurological systems of fish. 21st century status The combined range of the five sawfish species encompassed 90 countries, but today they have certainly disappeared entirely from 20 of these and possibly disappeared from several others. Many more have lost at least one of their species, leaving only one or two remaining. Of the five species of sawfish, three are critically endangered and two are endangered according to the International Union for Conservation of Nature's Red List of Threatened Species. The sawfish is now presumed extinct in 55 nations (including China, Iraq, Haiti, Japan, Timor-Leste, El Salvador, Taiwan, Djibouti and Brunei), with 18 countries missing at least one species of sawfish and 28 countries missing at least two. The United States and Australia appear to be the last strongholds of the species, where sawfish are better protected. Science Advances identifies Cuba, Tanzania, Colombia, Madagascar, Panama, Brazil, Mexico and Sri Lanka as the nations where urgent action could make a big contribution to saving the species. Australia The only remaining stronghold of the four species in the Indo-Pacific region (narrow, dwarf, largetooth and green sawfish) is in Northern Australia, but they have also experienced a decline there.
Pristis sawfish are protected in Australia and only Indigenous Australians can legally catch them; violations can result in a fine of up to AU$121,900. The narrow sawfish does not receive the same level of protection as the Pristis sawfish. Under CITES regulations, Australia was the only country permitted to export wild-caught sawfish for the aquarium trade from 2007 to 2013 (no country has been permitted since). This strictly involved the largetooth sawfish, whose Australian population remains relatively robust, and only living individuals destined "to appropriate and acceptable aquaria for primarily conservation purposes". Numbers traded were very low (eight between 2007 and 2011), and following a review Australia did not export any after 2011. Largetooth sawfish have been monitored in the Fitzroy River, Western Australia, a primary stronghold for the species, since 2000. In December 2018, the largest recorded mass fish death in the river occurred when more than 40 sawfish died, mainly because of heat and a severe lack of rainfall during a poor wet season. A 14-day research expedition in Far North Queensland in October 2019 did not spot a single sawfish. Sawfish expert Dr Peter Kyne of Charles Darwin University said that habitat change in the south and gillnet fishing in the north had contributed to the decline in numbers, but now that fishers had started working with conservationists, dams and water diversions affecting river flows had become a bigger problem in the north. The success of saltwater crocodile conservation has also had a negative impact on sawfish populations. However, there were still good populations in the Adelaide River and Daly River in the Northern Territory, and the Fitzroy River in the Kimberley. A study by Murdoch University researchers and Indigenous rangers, which captured more than 500 sawfish between 2002 and 2018, concluded that the survival of the sawfish could be at risk from dams or major water diversions on the Fitzroy River. It found that the fish are completely reliant on the Kimberley's wet-season floods to complete their breeding cycle; in recent drier years, the population has suffered. There has been debate about using water from the river for agriculture and to grow fodder crops for cattle in the region. Sharks and Rays Australia (SARA) is conducting a citizen science investigation to understand the sawfish's historical habitats, and citizens can report their sawfish sightings online.

Rest of the world

Outside Australia, sawfish have been extirpated from, or survive only in very low numbers in, the Indo-Pacific region. For example, among the four species only two (narrow and largetooth sawfish) certainly survive in South Asia, and only two (narrow and green sawfish) certainly survive in Southeast Asia. The status of the two species of the Atlantic region, the smalltooth and largetooth sawfish, is comparable to that in the Indo-Pacific. For example, sawfish have been entirely extirpated from most of the Atlantic coast of Africa (surviving for certain only in Guinea-Bissau and Sierra Leone), as well as from South Africa. The only relatively large remaining population of the largetooth sawfish in the Atlantic region is at the Amazon estuary in Brazil, but there are smaller populations in Central America and West Africa, and this species is also found in the Pacific and Indian Oceans. The smalltooth sawfish is only found in the Atlantic region and it is possibly the most threatened of all the species, as it had the smallest original range and has experienced the greatest contraction, having disappeared from c. 81% of its original range.
It only survives for certain in six countries, and it is possible that the only remaining viable population is in the United States. In the United States the smalltooth sawfish once occurred from Texas to New York, but its numbers have declined by at least 95% and today it is essentially restricted to Florida. However, the Florida population retains high genetic diversity, has now stabilised and appears to be slowly increasing. A Recovery Plan for the smalltooth sawfish has been in effect since 2002. It has been strictly protected in the United States since 2003, when it became the first marine fish to be added to the Endangered Species Act. This makes it "illegal to harm, harass, hook, or net sawfish in any way, except with a permit or in a permitted fishery"; the fine is up to US$10,000 for the first violation alone. If accidentally caught, the sawfish has to be released as carefully as possible, and a basic how-to guide has been published. In 2003, an attempt to add the largetooth sawfish to the Endangered Species Act was denied, in part because this species no longer occurs in the United States (the last confirmed US record was in 1961). However, it was added in 2011, and all the remaining sawfish species were added in 2014, restricting trade in them and their parts in the United States. In 2020, a Florida fisherman used a power saw to remove a smalltooth sawfish's rostrum and then released the maimed fish; he received a fine, community service and probation. Since 2007, all sawfish species have been listed on CITES Appendix I, which prohibits international trade in them and their parts. The only exception was the relatively robust Australian population of the largetooth sawfish, which was listed on CITES Appendix II, allowing trade to public aquariums only. Following reviews Australia did not use this option after 2011, and in 2013 this population too was moved to Appendix I. In addition to Australia and the United States, sawfish are protected in the European Union, Mexico, Nicaragua, Costa Rica, Ecuador, Brazil, Indonesia, Malaysia, Bangladesh, India, Pakistan, Bahrain, Qatar, the United Arab Emirates, Guinea, Senegal and South Africa, but they are likely already functionally or entirely extirpated from several of these countries. Illegal fishing continues, and in many countries enforcement of fishing laws is lacking. Even in Australia, where sawfish are relatively well protected, people are occasionally caught illegally trying to sell sawfish parts, especially the saw. The saw is distinctive, but flesh or fins cut up for sale at fish markets can be difficult to identify as originating from sawfish; this can be resolved with DNA testing. Even when protected, their relatively low reproduction rates make these animals especially slow to recover from overfishing. An example is the largetooth sawfish in Lake Nicaragua, where it was once abundant: the population crashed rapidly during the 1970s when tens of thousands were caught, and although it was protected by the Nicaraguan government in the early 1980s, it remains rare today. Nevertheless, there are indications that at least the smalltooth sawfish population may be able to recover at a faster pace than formerly believed, if well protected.
Uniquely in this family, the narrow sawfish has a relatively fast reproduction rate (generation length about 4.6 years, less than one-third that of the other species), it has experienced the smallest contraction of its range (30%), and it is one of only two species rated Endangered rather than Critically Endangered by the IUCN. The other species rated Endangered is the dwarf sawfish, but this primarily reflects the fact that its main decline happened at least 100 years ago, and IUCN ratings are based on the time period of the last three generations (estimated at about 49 years in the dwarf sawfish). There are several research projects focused on sawfish in Australia and North America, and a few on other continents. The Florida Museum of Natural History maintains the International Sawfish Encounter Database, where people worldwide are encouraged to report any sawfish encounter, whether a living animal or a rostrum seen for sale in a shop or online. Its data are used by biologists and conservationists to evaluate the habitat, range and abundance of sawfish around the world. To raise awareness of their plight, the first "Sawfish Day" was held on 17 October 2017 and repeated on the same date in 2018.
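Returning to the IUCN ratings above, the generation-length arithmetic can be made explicit. A minimal sketch using only the approximate figures quoted in the text; the ~16-year dwarf sawfish generation length is derived here, not stated:

```python
# IUCN assessments consider trends over the last three generations.
# Figures below are the approximate values quoted in the text.

narrow_generation = 4.6                 # years, narrow sawfish (stated)
narrow_window = 3 * narrow_generation   # three-generation window
print(f"Narrow sawfish assessment window: ~{narrow_window:.0f} years")  # ~14

dwarf_window = 49                       # years, dwarf sawfish (stated)
dwarf_generation = dwarf_window / 3     # implied generation length (derived)
print(f"Implied dwarf sawfish generation: ~{dwarf_generation:.0f} years")  # ~16
# So the dwarf sawfish's main decline, over a century ago, falls outside
# its ~49-year assessment window, which is why its rating is 'only' Endangered.
```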
Biology and health sciences
Batoidea
null
1051892
https://en.wikipedia.org/wiki/Io%20%28moon%29
Io (moon)
Io, or Jupiter I, is the innermost and second-smallest of the four Galilean moons of the planet Jupiter. Slightly larger than Earth's Moon, Io is the fourth-largest moon in the Solar System, has the highest density of any moon, the strongest surface gravity of any moon, and the lowest amount of water by atomic ratio of any known astronomical object in the Solar System. It was discovered in 1610 by Galileo Galilei and was named after the mythological character Io, a priestess of Hera who became one of Zeus's lovers.

With over 400 active volcanoes, Io is the most geologically active object in the Solar System. This extreme geologic activity is the result of tidal heating from friction generated within Io's interior as it is pulled between Jupiter and the other Galilean moons: Europa, Ganymede and Callisto. Several volcanoes produce plumes of sulfur and sulfur dioxide that climb as high as above the surface. Io's surface is also dotted with more than 100 mountains that have been uplifted by extensive compression at the base of Io's silicate crust. Some of these peaks are taller than Mount Everest, the highest point on Earth's surface. Unlike most moons in the outer Solar System, which are mostly composed of water ice, Io is primarily composed of silicate rock surrounding a molten iron or iron sulfide core. Most of Io's surface is composed of extensive plains with a frosty coating of sulfur and sulfur dioxide.

Io's volcanism is responsible for many of its unique features. Its volcanic plumes and lava flows produce large surface changes and paint the surface in various subtle shades of yellow, red, white, black, and green, largely due to allotropes and compounds of sulfur. Numerous extensive lava flows, several more than in length, also mark the surface. The materials produced by this volcanism make up Io's thin, patchy atmosphere, and they also greatly affect the nature and radiation levels of Jupiter's extensive magnetosphere. Io's volcanic ejecta also produce a large, intense plasma torus around Jupiter, creating a hostile radiation environment on and around the moon.

Io played a significant role in the development of astronomy in the 17th and 18th centuries. Its discovery in January 1610 by Galileo Galilei, along with the other Galilean satellites, furthered the adoption of the Copernican model of the Solar System, the development of Kepler's laws of planetary motion, and the first measurement of the speed of light. In 1979, the two Voyager spacecraft revealed Io to be a geologically active world, with numerous volcanic features, large mountains, and a young surface with no obvious impact craters. The Galileo spacecraft performed several close flybys in the 1990s and early 2000s, obtaining data about Io's interior structure and surface composition. These spacecraft also revealed the relationship between Io and Jupiter's magnetosphere and the existence of a belt of high-energy radiation centered on Io's orbit. Further observations have been made by Cassini–Huygens in 2000, New Horizons in 2007, and Juno since 2017, as well as from Earth-based telescopes and the Hubble Space Telescope.

Nomenclature

Although Simon Marius is not credited with the sole discovery of the Galilean satellites, his names for the moons were adopted. In his 1614 publication Mundus Iovialis anno M.DC.IX Detectus Ope Perspicilli Belgici, he proposed several alternative names for the innermost of the large moons of Jupiter, including "The Mercury of Jupiter" and "The First of the Jovian Planets".
Based on a suggestion from Johannes Kepler in October 1613, he also devised a naming scheme whereby each moon was named for a lover of the Greek god Zeus or his Roman equivalent, Jupiter. He named the innermost large moon of Jupiter after the Greek Io. Marius's names were not widely adopted until centuries later, in the mid-20th century. In much of the earlier astronomical literature, Io was generally referred to by its Roman numeral designation (a system introduced by Galileo) as "Jupiter I", or as "the first satellite of Jupiter". The customary English pronunciation of the name is , though sometimes people attempt a more 'authentic' pronunciation, . The name has two competing stems in Latin: Īō and (rarely) Īōn. The latter is the basis of the English adjectival form, Ionian.

Features on Io are named after characters and places from the Io myth, as well as deities of fire, volcanoes, the Sun, and thunder from various myths, and characters and places from Dante's Inferno: names appropriate to the volcanic nature of the surface. Since the surface was first seen up close by Voyager 1, the International Astronomical Union has approved 249 names for Io's volcanoes, mountains, plateaus, and large albedo features. The approved feature categories used for Io's various volcanic features include patera ('saucer'; volcanic depression), fluctus ('flow'; lava flow), vallis ('valley'; lava channel), and active eruptive center (a location where plume activity was the first sign of volcanic activity at a particular volcano). Named mountains, plateaus, layered terrain, and shield volcanoes include the terms mons, mensa ('table'), planum, and tholus ('rotunda'), respectively. Named bright albedo regions use the term regio. Examples of named features are Prometheus, Pan Mensa, Tvashtar Paterae, and Tsũi Goab Fluctus.

Observational history

The first reported observation of Io was made by Galileo Galilei on 7 January 1610, using a 20×-power refracting telescope at the University of Padua. However, in that observation Galileo could not separate Io and Europa due to the low power of his telescope, so the two were recorded as a single point of light. Io and Europa were seen for the first time as separate bodies during Galileo's observations of the Jovian system the following day, 8 January 1610 (used as the discovery date for Io by the IAU). The discovery of Io and the other Galilean satellites of Jupiter was published in Galileo's Sidereus Nuncius in March 1610. In his Mundus Jovialis, published in 1614, Simon Marius claimed to have discovered Io and the other moons of Jupiter in 1609, one week before Galileo's discovery. Galileo doubted this claim and dismissed the work of Marius as plagiarism. Regardless, Marius's first recorded observation came from 29 December 1609 in the Julian calendar, which equates to 8 January 1610 in the Gregorian calendar used by Galileo. Given that Galileo published his work before Marius, Galileo is credited with the discovery.

For the next two and a half centuries, Io remained an unresolved, 5th-magnitude point of light in astronomers' telescopes. During the 17th century, Io and the other Galilean satellites served a variety of purposes, including early methods to determine longitude, validating Kepler's third law of planetary motion, and determining the time required for light to travel between Jupiter and Earth.
Based on ephemerides produced by astronomer Giovanni Cassini and others, Pierre-Simon Laplace created a mathematical theory to explain the resonant orbits of Io, Europa, and Ganymede. This resonance was later found to have a profound effect on the geologies of the three moons. Improved telescope technology in the late 19th and 20th centuries allowed astronomers to resolve (that is, see as distinct objects) large-scale surface features on Io. In the 1890s, Edward E. Barnard was the first to observe variations in Io's brightness between its equatorial and polar regions, correctly determining that this was due to differences in color and albedo between the two regions, and not due to Io being egg-shaped, as proposed at the time by fellow astronomer William Pickering, or two separate objects, as initially proposed by Barnard himself. Later telescopic observations confirmed Io's distinct reddish-brown polar regions and yellow-white equatorial band.

Telescopic observations in the mid-20th century began to hint at Io's unusual nature. Spectroscopic observations suggested that Io's surface was devoid of water ice (a substance found to be plentiful on the other Galilean satellites). The same observations suggested a surface dominated by evaporites composed of sodium salts and sulfur. Radio telescope observations revealed Io's influence on the Jovian magnetosphere, as demonstrated by decametric wavelength bursts tied to the orbital period of Io.

Pioneer

The first spacecraft to pass by Io were the Pioneer 10 and 11 probes, on 3 December 1973 and 2 December 1974, respectively. Radio tracking provided an improved estimate of Io's mass, which, along with the best available information on its size, suggested it had the highest density of the Galilean satellites and was composed primarily of silicate rock rather than water ice. The Pioneers also revealed the presence of a thin atmosphere and intense radiation belts near the orbit of Io. The camera on board Pioneer 11 took the only good image of the moon obtained by either spacecraft, showing its north polar region and its yellow tint. Close-up images were planned during Pioneer 10's encounter, but those were lost because of the high-radiation environment.

Voyager

When the twin probes Voyager 1 and Voyager 2 passed by Io in 1979, their more advanced imaging systems allowed for far more detailed images. Voyager 1 flew past Io on 5 March 1979 from a distance of . The images returned during the approach revealed a strange, multi-colored landscape devoid of impact craters. The highest-resolution images showed a relatively young surface punctuated by oddly shaped pits, mountains taller than Mount Everest, and features resembling volcanic lava flows. Shortly after the encounter, Voyager navigation engineer Linda A. Morabito noticed a plume emanating from the surface in one of the images. Analysis of other Voyager 1 images showed nine such plumes scattered across the surface, proving that Io was volcanically active. This conclusion had been predicted in a paper published shortly before the Voyager 1 encounter by Stan Peale, Patrick Cassen, and R. T. Reynolds. The authors calculated that Io's interior must experience significant tidal heating caused by its orbital resonance with Europa and Ganymede (see the "Tidal heating" section for a more detailed explanation of the process). Data from this flyby showed that the surface of Io is dominated by sulfur and sulfur dioxide frosts.
These compounds also dominate its thin atmosphere and the torus of plasma centered on Io's orbit (also discovered by Voyager). Voyager 2 passed Io on 9 July 1979 at a distance of . Though it did not approach nearly as close as Voyager 1, comparisons between images taken by the two spacecraft showed several surface changes that had occurred in the four months between the encounters. In addition, observations of Io as a crescent as Voyager 2 departed the Jovian system revealed that seven of the nine plumes observed in March were still active in July 1979, with only the volcano Pele shutting down between flybys.

Galileo

The Galileo spacecraft arrived at Jupiter in 1995, after a six-year journey from Earth, to follow up on the discoveries of the two Voyager probes and the ground-based observations made in the intervening years. Io's location within one of Jupiter's most intense radiation belts precluded a prolonged close flyby, but Galileo did pass close by shortly before entering orbit for its two-year primary mission studying the Jovian system. Although no images were taken during the close flyby on 7 December 1995, the encounter did yield significant results, such as the discovery of a large iron core, similar to that found in the rocky planets of the inner Solar System. Despite the lack of close-up imaging and mechanical problems that greatly restricted the amount of data returned, several significant discoveries were made during Galileo's primary mission. Galileo observed the effects of a major eruption at Pillan Patera and confirmed that volcanic eruptions are composed of silicate magmas with magnesium-rich mafic and ultramafic compositions. Distant imaging of Io was acquired for almost every orbit during the primary mission, revealing large numbers of active volcanoes (both thermal emission from cooling magma on the surface and volcanic plumes), numerous mountains with widely varying morphologies, and several surface changes that had taken place both between the Voyager and Galileo eras and between Galileo orbits. The Galileo mission was twice extended, in 1997 and 2000. During these extended missions, the probe flew by Io three times in late 1999 and early 2000, and three times in late 2001 and early 2002. Observations during these encounters revealed the geologic processes occurring at Io's volcanoes and mountains, excluded the presence of a magnetic field, and demonstrated the extent of volcanic activity.

Cassini

In December 2000, the Cassini spacecraft had a distant and brief encounter with the Jovian system en route to Saturn, allowing for joint observations with Galileo. These observations revealed a new plume at Tvashtar Paterae and provided insights into Io's aurorae.

New Horizons

The New Horizons spacecraft, en route to Pluto and the Kuiper belt, flew by the Jovian system and Io on 28 February 2007. During the encounter, numerous distant observations of Io were obtained. These included images of a large plume at Tvashtar, providing the first detailed observations of the largest class of Ionian volcanic plume since observations of Pele's plume in 1979. New Horizons also captured images of a volcano near Girru Patera in the early stages of an eruption, and of several volcanic eruptions that have occurred since Galileo.

Juno

The Juno spacecraft was launched in 2011 and entered orbit around Jupiter on 5 July 2016. Juno's mission is primarily focused on improving our understanding of Jupiter's interior, magnetic field, aurorae, and polar atmosphere.
Juno's 54-day orbit is highly inclined and highly eccentric in order to better characterize Jupiter's polar regions and to limit the spacecraft's exposure to the planet's harsh inner radiation belts, which also limits close encounters with Jupiter's moons. The closest approach to Io during the initial prime mission occurred in February 2020, at a distance of 195,000 kilometers. Juno's extended mission, begun in June 2021, allowed for closer encounters with Jupiter's Galilean satellites thanks to Juno's orbital precession. After a series of increasingly close encounters with Io in 2022 and 2023, Juno performed a pair of close flybys on 30 December 2023 and 3 February 2024, both at altitudes of 1,500 kilometers. The primary goals of these encounters were to improve our understanding of Io's gravity field using Doppler tracking and to image Io's surface to look for changes since Io was last seen up close in 2007. During several orbits, Juno has observed Io from a distance using JunoCam, a wide-angle, visible-light camera, to look for volcanic plumes, and JIRAM, a near-infrared spectrometer and imager, to monitor thermal emission from Io's volcanoes. JIRAM near-infrared spectroscopy has so far allowed for the coarse mapping of sulfur dioxide frost across Io's surface, as well as the mapping of minor surface components that weakly absorb sunlight at 2.1 and 2.65 μm.

Future missions

There are two forthcoming missions planned for the Jovian system. The Jupiter Icy Moons Explorer (JUICE) is a European Space Agency mission to the Jovian system that is intended to end up in Ganymede orbit. JUICE launched in April 2023, with arrival at Jupiter planned for July 2031. JUICE will not fly by Io, but it will use its instruments, such as a narrow-angle camera, to monitor Io's volcanic activity and measure its surface composition during the two-year Jupiter-tour phase of the mission prior to Ganymede orbit insertion. Europa Clipper is a NASA mission to the Jovian system focused on Jupiter's moon Europa. Like JUICE, Europa Clipper will not perform any flybys of Io, but distant volcano monitoring is likely. Europa Clipper launched in October 2024, with arrival at Jupiter expected in 2030. The Io Volcano Observer (IVO) was a proposal to NASA for a low-cost, Discovery-class mission, selected for a Phase A study along with three other proposals in 2020. IVO would have launched in January 2029 and performed ten flybys of Io while in orbit around Jupiter beginning in the early 2030s. However, the Venus missions DAVINCI+ and VERITAS were selected instead.

Orbit and rotation

Io orbits Jupiter at a distance of from Jupiter's center and from its cloudtops. It is the innermost of the Galilean satellites of Jupiter, its orbit lying between those of Thebe and Europa. Including Jupiter's inner satellites, Io is the fifth moon out from Jupiter. It takes Io about 42.5 hours (1.77 days) to complete one orbit around Jupiter, fast enough for its motion to be observed over a single night of observation. Io is in a 2:1 mean-motion orbital resonance with Europa and a 4:1 mean-motion orbital resonance with Ganymede, completing two orbits of Jupiter for every orbit completed by Europa, and four orbits for every orbit completed by Ganymede. This resonance helps maintain Io's orbital eccentricity (0.0041), which in turn provides the primary heating source for its geologic activity. Without this forced eccentricity, Io's orbit would circularize through tidal dissipation, leading to a less geologically active world.
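As a quick illustration of the 2:1 and 4:1 resonances, Europa's and Ganymede's orbital periods can be approximated from Io's 42.5-hour period. A minimal sketch using only the figures given above; the true periods differ by a few percent, since the resonance constrains the conjunction geometry rather than exact period ratios:

```python
# Approximate orbital periods implied by Io's Laplace-type resonance.

io_period_h = 42.5                      # Io's period, quoted above
europa_period_h = 2 * io_period_h       # 2:1 resonance -> ~85 h
ganymede_period_h = 4 * io_period_h     # 4:1 resonance -> ~170 h

for name, period in [("Io", io_period_h),
                     ("Europa", europa_period_h),
                     ("Ganymede", ganymede_period_h)]:
    print(f"{name}: {period:.1f} h ({period / 24:.2f} days)")
```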
Like the other Galilean satellites and Earth's Moon, Io rotates synchronously with its orbital period, keeping one face nearly pointed toward Jupiter. This synchrony provides the definition for Io's longitude system: Io's prime meridian intersects the equator at the sub-Jovian point. The side of Io that always faces Jupiter is known as the subjovian hemisphere, whereas the side that always faces away is known as the antijovian hemisphere. The side of Io that always faces in the direction that Io travels in its orbit is known as the leading hemisphere, whereas the side that always faces in the opposite direction is known as the trailing hemisphere. From the surface of Io, Jupiter would subtend an arc of 19.5°, making Jupiter appear 39 times the apparent diameter of Earth's Moon.

Interaction with Jupiter's magnetosphere

Io plays a significant role in shaping Jupiter's magnetic field, acting as an electric generator that can develop 400,000 volts across itself and create an electric current of 3 million amperes, releasing ions that give Jupiter a magnetic field inflated to more than twice the size it would otherwise have. The magnetosphere of Jupiter sweeps up gases and dust from Io's thin atmosphere at a rate of 1 tonne per second. This material is mostly composed of ionized and atomic sulfur, oxygen and chlorine; atomic sodium and potassium; molecular sulfur dioxide and sulfur; and sodium chloride dust. These materials originate from Io's volcanic activity, with the material that escapes to Jupiter's magnetic field and into interplanetary space coming directly from Io's atmosphere. Depending on their ionized state and composition, these materials end up in various neutral (non-ionized) clouds and radiation belts in Jupiter's magnetosphere and, in some cases, are eventually ejected from the Jovian system.

Surrounding Io (at a distance of up to six Io radii from its surface) is a cloud of neutral sulfur, oxygen, sodium, and potassium atoms. These particles originate in Io's upper atmosphere and are excited by collisions with ions in the plasma torus (discussed below) and by other processes, filling Io's Hill sphere, the region where Io's gravity is dominant over Jupiter's. Some of this material escapes Io's gravitational pull and goes into orbit around Jupiter. Over a 20-hour period, these particles spread out from Io to form a banana-shaped neutral cloud that can reach as far as six Jovian radii from Io, either inside Io's orbit and ahead of it or outside Io's orbit and behind it. The collision process that excites these particles also occasionally provides sodium ions in the plasma torus with an electron, removing the resulting "fast" neutrals from the torus. These particles retain their velocity (70 km/s, compared to the 17 km/s orbital velocity at Io), and are thus ejected in jets leading away from Io.

Io orbits within a belt of intense radiation known as the Io plasma torus. The plasma in this doughnut-shaped ring of ionized sulfur, oxygen, sodium, and chlorine originates when neutral atoms in the "cloud" surrounding Io are ionized and carried along by the Jovian magnetosphere. Unlike the particles in the neutral cloud, these particles co-rotate with Jupiter's magnetosphere, revolving around Jupiter at 74 km/s. Like the rest of Jupiter's magnetic field, the plasma torus is tilted with respect to Jupiter's equator (and Io's orbital plane), so that Io is at times below and at other times above the core of the plasma torus.
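Several of the figures in this section can be cross-checked with one-line arithmetic. A minimal sketch; Jupiter's 9.925-hour rotation period, Io's 421,700 km orbital radius, and the Moon's ~0.5° apparent diameter are standard reference values assumed here, not taken from the text:

```python
import math

# 1. Io as an electric generator: power = voltage x current.
power_w = 400_000 * 3_000_000
print(f"Dissipated power: {power_w / 1e12:.1f} TW")          # ~1.2 TW

# 2. Jupiter's apparent size from Io vs. the Moon's from Earth (~0.5 deg).
print(f"Apparent-size ratio: {19.5 / 0.5:.0f}x")             # ~39x, as quoted

# 3. Plasma co-rotation vs. Io's orbital speed (assumed reference values).
r_io_km = 421_700                 # Io's orbital radius
jupiter_spin_s = 9.925 * 3600     # Jupiter's rotation period
corotation = 2 * math.pi * r_io_km / jupiter_spin_s
orbital = 2 * math.pi * r_io_km / (42.5 * 3600)   # from the 42.5 h period
print(f"Co-rotation: {corotation:.0f} km/s; orbital: {orbital:.0f} km/s")
# -> ~74 km/s vs ~17 km/s, matching the quoted figures.
```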
As noted above, these ions' higher velocity and energy levels are partly responsible for the removal of neutral atoms and molecules from Io's atmosphere and more extended neutral clouds. The torus is composed of three sections: an outer, "warm" torus that resides just outside Io's orbit; a vertically extended region known as the "ribbon", composed of the neutral source region and cooling plasma, located at around Io's distance from Jupiter; and an inner, "cold" torus, composed of particles that are slowly spiraling in toward Jupiter. After residing an average of 40 days in the torus, particles in the "warm" torus escape and are partially responsible for Jupiter's unusually large magnetosphere, their outward pressure inflating it from within. Particles from Io, detected as variations in magnetospheric plasma, have been detected far into the long magnetotail by New Horizons. To study similar variations within the plasma torus, researchers measured the ultraviolet light it emits. Although such variations have not been definitively linked to variations in Io's volcanic activity (the ultimate source for material in the plasma torus), this link has been established for the neutral sodium cloud.

During an encounter with Jupiter in 1992, the Ulysses spacecraft detected a stream of dust-sized particles being ejected from the Jovian system. The dust in these discrete streams travels away from Jupiter at speeds upwards of several hundred kilometers per second, has an average particle size of 10 μm, and consists primarily of sodium chloride. Dust measurements by Galileo showed that these dust streams originate at Io, but exactly how they form, whether from Io's volcanic activity or from material removed from the surface, is unknown.

Jupiter's magnetic field, which Io crosses, couples Io's atmosphere and neutral cloud to Jupiter's polar upper atmosphere by generating an electric current known as the Io flux tube. This current produces an auroral glow in Jupiter's polar regions known as the Io footprint, as well as aurorae in Io's atmosphere. Particles from this auroral interaction darken the Jovian polar regions at visible wavelengths. The location of Io and its auroral footprint with respect to Earth and Jupiter has a strong influence on Jovian radio emissions as seen from our vantage point: when Io is visible, radio signals from Jupiter increase considerably. The Juno mission, currently in orbit around Jupiter, should help shed light on these processes. The Jovian magnetic field lines that do get past Io's ionosphere also induce an electric current, which in turn creates an induced magnetic field within Io's interior. Io's induced magnetic field is thought to be generated within a partially molten, silicate magma ocean 50 kilometers beneath Io's surface. Similar induced fields were found at the other Galilean satellites by Galileo, possibly generated within liquid water oceans in the interiors of those moons. However, according to an international study published in the journal Nature in 2024, Io may not possess a magma ocean after all, despite its large number of volcanoes and its tidal interaction with Jupiter, contrary to what historical data from the Galileo probe had suggested. The scientists used data from two recent flybys by the Juno probe and concluded that an "almost" solid mantle lies beneath Io's surface, rather than an ocean of magma as previously thought.

Geology

Io is slightly larger than Earth's Moon.
It has a mean radius of (about 5% greater than the Moon's) and a mass of 8.9319×10^22 kg (about 21% greater than the Moon's). It is a slight ellipsoid in shape, with its longest axis directed toward Jupiter. Among the Galilean satellites, in both mass and volume, Io ranks behind Ganymede and Callisto but ahead of Europa.

Interior

Composed primarily of silicate rock and iron, Io and Europa are closer in bulk composition to the terrestrial planets than to other satellites in the outer Solar System, which are mostly composed of a mix of water ice and silicates. Io has a density of , the highest of any regular moon in the Solar System; significantly higher than those of the other Galilean satellites (Ganymede and Callisto in particular, whose densities are around ) and slightly higher (~5.5%) than those of the Moon and Europa. Models based on the Voyager and Galileo measurements of Io's mass, radius, and quadrupole gravitational coefficients (numerical values related to how mass is distributed within an object) suggest that its interior is differentiated between a silicate-rich crust and mantle and an iron- or iron-sulfide-rich core. Io's metallic core makes up approximately 20% of its mass. Depending on the amount of sulfur in the core, the core has a radius between if it is composed almost entirely of iron, or between for a core consisting of a mix of iron and sulfur. Galileo's magnetometer failed to detect an internal, intrinsic magnetic field at Io, suggesting that the core is not convecting.

Modeling of Io's interior composition suggests that the mantle is composed of at least 75% of the magnesium-rich mineral forsterite, and has a bulk composition similar to that of L-chondrite and LL-chondrite meteorites, with a higher iron content (relative to silicon) than the Moon or Earth, but lower than Mars. To support the heat flow observed on Io, 10–20% of Io's mantle may be molten, though regions where high-temperature volcanism has been observed may have higher melt fractions. However, re-analysis of Galileo magnetometer data in 2009 revealed the presence of an induced magnetic field at Io, requiring a magma ocean below the surface. Further analysis published in 2011 provided direct evidence of such an ocean. This layer is estimated to be 50 km thick and to make up about 10% of Io's mantle, with a temperature estimated to reach 1,200 °C. It is not known whether the 10–20% partial melting percentage for Io's mantle is consistent with the requirement for a significant amount of molten silicates in this possible magma ocean. The lithosphere of Io, composed of basalt and sulfur deposited by Io's extensive volcanism, is at least thick, and likely less than thick.

Tidal heating

Unlike Earth and the Moon, Io's main source of internal heat comes from tidal dissipation rather than radioactive isotope decay, a result of Io's orbital resonance with Europa and Ganymede. Such heating is dependent on Io's distance from Jupiter, its orbital eccentricity, the composition of its interior, and its physical state. Its Laplace resonance with Europa and Ganymede maintains Io's eccentricity and prevents tidal dissipation within Io from circularizing its orbit. The resonant orbit also helps to maintain Io's distance from Jupiter; otherwise tides raised on Jupiter would cause Io to slowly spiral outward from its parent planet. The tidal forces experienced by Io are about 20,000 times stronger than the tidal forces Earth experiences due to the Moon, and the vertical differences in its tidal bulge, between the times Io is at periapsis and apoapsis in its orbit, could be as much as . The friction or tidal dissipation produced in Io's interior due to this varying tidal pull, which, without the resonant orbit, would have gone into circularizing Io's orbit instead, creates significant tidal heating within Io's interior, melting a significant amount of Io's mantle and core. The amount of energy produced is up to 200 times greater than that produced solely by radioactive decay. This heat is released in the form of volcanic activity, generating Io's observed high heat flow (global total: 0.6 to 1.6×10^14 W).
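For scale, the quoted global heat output can be converted into a mean surface heat flux. A minimal sketch; Io's radius is not given numerically in the text, so it is reconstructed from the "about 5% greater than the Moon's" statement using the Moon's standard 1,737.4 km radius (an assumption):

```python
import math

# Mean tidal heat flux from the quoted 0.6-1.6 x 10^14 W global output.
# Io's radius reconstructed as ~5% larger than the Moon's (assumption).
radius_m = 1.05 * 1_737_400           # ~1.82e6 m
area_m2 = 4 * math.pi * radius_m ** 2

for total_w in (0.6e14, 1.6e14):
    print(f"{total_w:.1e} W -> {total_w / area_m2:.1f} W/m^2")
# ~1.4-3.8 W/m^2, versus roughly 0.09 W/m^2 for Earth's mean geothermal
# heat flow (standard comparison value, not from the text).
```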
Models of its orbit suggest that the amount of tidal heating within Io changes with time; however, the current amount of tidal dissipation is consistent with the observed heat flow. Models of tidal heating and convection have not found consistent planetary viscosity profiles that simultaneously match tidal energy dissipation and mantle convection of heat to the surface. Although there is general agreement that the origin of the heat manifested in Io's many volcanoes is tidal heating from the pull of gravity from Jupiter and its moon Europa, the volcanoes are not in the positions predicted by tidal heating; they are shifted 30 to 60 degrees to the east. A study published by Tyler et al. (2015) suggests that this eastern shift may be caused by an ocean of molten rock under the surface. The movement of this magma would generate extra heat through friction due to its viscosity. The study's authors believe that this subsurface ocean is a mixture of molten and solid rock. Other moons in the Solar System are also tidally heated, and they too may generate additional heat through the friction of subsurface magma or water oceans. This ability to generate heat in a subsurface ocean increases the chance of life on bodies like Europa and Enceladus.

Surface

Based on their experience with the ancient surfaces of the Moon, Mars, and Mercury, scientists expected to see numerous impact craters in Voyager 1's first images of Io, as the density of impact craters across Io's surface would have given clues to Io's age. However, they were surprised to discover that the surface was almost completely lacking in impact craters and was instead covered in smooth plains dotted with tall mountains, pits of various shapes and sizes, and volcanic lava flows. Compared to most worlds observed to that point, Io's surface was covered in a variety of colorful materials (leading Io to be compared to a rotten orange or to a pizza) from various sulfurous compounds. The lack of impact craters indicated that Io's surface is geologically young, like Earth's surface; volcanic materials continuously bury craters as they are produced. This result was spectacularly confirmed when at least nine active volcanoes were observed by Voyager 1.

Surface composition

Io's colorful appearance is the result of materials deposited by its extensive volcanism, including silicates (such as orthopyroxene), sulfur, and sulfur dioxide. Sulfur dioxide frost is ubiquitous across the surface of Io, forming large regions covered in white or grey materials. Sulfur is also seen in many places across Io, forming yellow to yellow-green regions. Sulfur deposited in the mid-latitude and polar regions is often damaged by radiation, breaking up the normally stable cyclic 8-chain sulfur.
This radiation damage produces Io's red-brown polar regions. Explosive volcanism, often taking the form of umbrella-shaped plumes, paints the surface with sulfurous and silicate materials. Plume deposits on Io are often colored red or white, depending on the amount of sulfur and sulfur dioxide in the plume. Generally, plumes formed at volcanic vents from degassing lava contain a greater amount of , producing a red "fan" deposit or, in extreme cases, large red rings (often reaching beyond from the central vent). A prominent example of a red-ring plume deposit is located at Pele. These red deposits consist primarily of sulfur (generally 3- and 4-chain molecular sulfur), sulfur dioxide, and perhaps sulfuryl chloride. Plumes formed at the margins of silicate lava flows (through the interaction of lava and pre-existing deposits of sulfur and sulfur dioxide) produce white or gray deposits.

Compositional mapping and Io's high density suggest that Io contains little to no water, though small pockets of water ice or hydrated minerals have been tentatively identified, most notably on the northwest flank of the mountain Gish Bar Mons. Io has the least amount of water of any known body in the Solar System. This lack of water is likely due to Jupiter being hot enough early in the evolution of the Solar System to drive off volatile materials like water in the vicinity of Io, but not hot enough to do so farther out.

Volcanism

The tidal heating produced by Io's forced orbital eccentricity has made it the most volcanically active world in the Solar System, with hundreds of volcanic centers and extensive lava flows. During a major eruption, lava flows tens or even hundreds of kilometers long can be produced, consisting mostly of basaltic silicate lavas with either mafic or ultramafic (magnesium-rich) compositions. As a by-product of this activity, sulfur, sulfur dioxide gas and silicate pyroclastic material (like ash) are blown up to into space, producing large, umbrella-shaped plumes, painting the surrounding terrain in red, black, and white, and providing material for Io's patchy atmosphere and Jupiter's extensive magnetosphere.

Io's surface is dotted with volcanic depressions known as paterae, which generally have flat floors bounded by steep walls. These features resemble terrestrial calderas, but it is unknown whether they are produced through collapse over an emptied lava chamber like their terrestrial cousins. One hypothesis suggests that these features are produced through the exhumation of volcanic sills, with the overlying material either blasted out or integrated into the sill. Examples of paterae in various stages of exhumation have been mapped using Galileo images of the Chaac–Camaxtli region. Unlike similar features on Earth and Mars, these depressions generally do not lie at the peak of shield volcanoes and are normally larger, with an average diameter of , the largest being Loki Patera at . Loki is also consistently the strongest volcano on Io, contributing on average 25% of Io's global heat output. Whatever the formation mechanism, the morphology and distribution of many paterae suggest that these features are structurally controlled, with at least half bounded by faults or mountains. These features are often the site of volcanic eruptions, either from lava flows spreading across the floors of the paterae, as at an eruption at Gish Bar Patera in 2001, or in the form of a lava lake.
Lava lakes on Io either have a continuously overturning lava crust, such as at Pele, or an episodically overturning crust, such as at Loki. Lava flows represent another major volcanic terrain on Io. Magma erupts onto the surface from vents on the floors of paterae or from fissures on the plains, producing inflated, compound lava flows similar to those seen at Kilauea in Hawaii. Images from the Galileo spacecraft revealed that many of Io's major lava flows, like those at Prometheus and Amirani, are produced by the build-up of small breakouts of lava on top of older flows. Larger outbreaks of lava have also been observed on Io. For example, the leading edge of the Prometheus flow moved between Voyager in 1979 and the first Galileo observations in 1996. A major eruption in 1997 produced more than of fresh lava and flooded the floor of the adjacent Pillan Patera.

Analysis of the Voyager images led scientists to believe that these flows were composed mostly of various compounds of molten sulfur. However, subsequent Earth-based infrared studies and measurements from the Galileo spacecraft indicate that these flows are composed of basaltic lava with mafic to ultramafic compositions. This conclusion is based on temperature measurements of Io's "hotspots", or thermal-emission locations, which suggest temperatures of at least 1,300 K, and some as high as 1,600 K. Initial estimates suggesting eruption temperatures approaching 2,000 K have since proven to be overestimates, because the wrong thermal models were used to model the temperatures.

The discovery of plumes at the volcanoes Pele and Loki was the first sign that Io is geologically active. Generally, these plumes are formed when volatiles like sulfur and sulfur dioxide are ejected skyward from Io's volcanoes at speeds reaching , creating umbrella-shaped clouds of gas and dust. Additional materials that might be found in these volcanic plumes include sodium, potassium, and chlorine. These plumes appear to be formed in one of two ways. Io's largest plumes, such as those emitted by Pele, are created when dissolved sulfur and sulfur dioxide gas are released from erupting magma at volcanic vents or lava lakes, often dragging silicate pyroclastic material with them. These plumes form red (from the short-chain sulfur) and black (from the silicate pyroclastics) deposits on the surface. Plumes formed in this manner are among the largest observed at Io, forming red rings more than in diameter; examples of this plume type include Pele, Tvashtar, and Dazhbog. Another type of plume is produced when encroaching lava flows vaporize underlying sulfur dioxide frost, sending the sulfur skyward. This type of plume often forms bright circular deposits consisting of sulfur dioxide. These plumes are often less than tall and are among the most long-lived plumes on Io; examples include Prometheus, Amirani, and Masubi. The erupted sulfurous compounds are concentrated in the upper crust by a decrease in sulfur solubility at greater depths in Io's lithosphere, and this can be a determinant of the eruption style of a hot spot.

Mountains

Io has 100 to 150 mountains. These structures average in height and reach a maximum of at South Boösaule Montes. Mountains often appear as large (the average mountain is long), isolated structures with no apparent global tectonic pattern, in contrast to the case on Earth. To support the tremendous topography observed at these mountains requires compositions consisting mostly of silicate rock, as opposed to sulfur.
Despite the extensive volcanism that gives Io its distinctive appearance, nearly all of its mountains are tectonic structures, not products of volcanoes. Instead, most Ionian mountains form as the result of compressive stresses at the base of the lithosphere, which uplift and often tilt chunks of Io's crust through thrust faulting. The compressive stresses leading to mountain formation are the result of subsidence from the continuous burial of volcanic materials. The global distribution of mountains appears to be opposite that of volcanic structures; mountains dominate areas with fewer volcanoes and vice versa. This suggests large-scale regions in Io's lithosphere where compression (supportive of mountain formation) and extension (supportive of patera formation) dominate. Locally, however, mountains and paterae often abut one another, suggesting that magma often exploits faults formed during mountain formation to reach the surface.

Mountains on Io (generally, structures rising above the surrounding plains) have a variety of morphologies. Plateaus are most common; these structures resemble large, flat-topped mesas with rugged surfaces. Other mountains appear to be tilted crustal blocks, with a shallow slope from the formerly flat surface and a steep slope consisting of formerly sub-surface materials uplifted by compressive stresses. Both types of mountains often have steep scarps along one or more margins. Only a handful of mountains on Io appear to have a volcanic origin. These resemble small shield volcanoes, with steep slopes (6–7°) near a small central caldera and shallow slopes along their margins. These volcanic mountains are often smaller than the average mountain on Io, averaging only in height and wide. Other shield volcanoes with much shallower slopes are inferred from the morphology of several of Io's volcanoes, where thin flows radiate out from a central patera, such as at Ra Patera. Nearly all mountains appear to be in some stage of degradation. Large landslide deposits are common at the base of Ionian mountains, suggesting that mass wasting is the primary form of degradation. Scalloped margins are common among Io's mesas and plateaus, the result of sulfur dioxide sapping from Io's crust, producing zones of weakness along mountain margins.

Atmosphere

Io has an extremely thin atmosphere consisting mainly of sulfur dioxide (SO2), with minor constituents including sulfur monoxide (SO), sodium chloride (NaCl), and atomic sulfur and oxygen. The atmosphere has significant variations in density and temperature with time of day, latitude, volcanic activity, and surface frost abundance. The maximum atmospheric pressure on Io ranges from 3.3×10^-5 to 3×10^-4 pascals (Pa), or 0.3 to 3 nbar, found on Io's anti-Jupiter hemisphere and along the equator, and occurring in the early afternoon when the temperature of surface frost peaks. Localized pressure peaks at volcanic plumes have also been seen, at 5×10^-4 to 4×10^-3 Pa (5 to 40 nbar). Io's atmospheric pressure is lowest on its night side, where it dips to 1×10^-8 to 1×10^-7 Pa (0.0001 to 0.001 nbar). Io's atmospheric temperature ranges from the temperature of the surface at low altitudes, where sulfur dioxide is in vapor pressure equilibrium with frost on the surface, to 1,800 K at higher altitudes, where the lower atmospheric density permits heating from plasma in the Io plasma torus and from Joule heating in the Io flux tube.
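The pascal ranges above follow directly from the nanobar figures, since 1 bar = 10^5 Pa and hence 1 nbar = 10^-4 Pa. A minimal conversion sketch:

```python
# Converting the quoted nanobar pressures to pascals (1 nbar = 1e-4 Pa).

def nbar_to_pa(nbar: float) -> float:
    return nbar * 1e-4

ranges = {
    "dayside maximum": (0.3, 3),
    "volcanic plumes": (5, 40),
    "night side": (1e-4, 1e-3),
}
for label, (lo, hi) in ranges.items():
    print(f"{label}: {nbar_to_pa(lo):.0e} to {nbar_to_pa(hi):.0e} Pa")
# dayside maximum: 3e-05 to 3e-04 Pa
# volcanic plumes: 5e-04 to 4e-03 Pa
# night side:      1e-08 to 1e-07 Pa
```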
The low pressure limits the atmosphere's effect on the surface, except for temporarily redistributing sulfur dioxide from frost-rich to frost-poor areas and expanding the size of plume deposit rings when plume material re-enters the thicker dayside atmosphere. Gas in Io's atmosphere is stripped away by Jupiter's magnetosphere, escaping to either the neutral cloud that surrounds Io or the Io plasma torus, a ring of ionized particles that shares Io's orbit but co-rotates with the magnetosphere of Jupiter. Approximately one tonne of material is removed from the atmosphere every second through this process, so it must be constantly replenished. The most dramatic sources of SO2 are volcanic plumes, which pump kg of sulfur dioxide per second into Io's atmosphere on average, though most of this condenses back onto the surface. Much of the sulfur dioxide in Io's atmosphere is sustained by sunlight-driven sublimation of frozen SO2 on the surface. The dayside atmosphere is largely confined to within 40° of the equator, where the surface is warmest and most active volcanic plumes reside. A sublimation-driven atmosphere is also consistent with observations that Io's atmosphere is densest over the anti-Jupiter hemisphere, where frost is most abundant, and is densest when Io is closer to the Sun. However, some contributions from volcanic plumes are required, as the highest observed densities have been seen near volcanic vents. Because the density of sulfur dioxide in the atmosphere is tied directly to surface temperature, Io's atmosphere partially collapses at night, or when Io is in the shadow of Jupiter (with an ~80% drop in column density). The collapse during eclipse is limited somewhat by the formation of a diffusion layer of sulfur monoxide in the lowest portion of the atmosphere, but the pressure of Io's nightside atmosphere is two to four orders of magnitude lower than at its peak just past noon. The minor constituents of Io's atmosphere, such as SO, NaCl, and atomic sulfur and oxygen, derive either from direct volcanic outgassing, from photodissociation (chemical breakdown caused by solar ultraviolet radiation) of SO2, or from the sputtering of surface deposits by charged particles from Jupiter's magnetosphere.

Various researchers have proposed that the atmosphere of Io freezes onto the surface when it passes into the shadow of Jupiter. Evidence for this is a "post-eclipse brightening", where the moon sometimes appears a bit brighter, as if covered with frost, immediately after eclipse. After about 15 minutes the brightness returns to normal, presumably because the frost has disappeared through sublimation. Besides being seen through ground-based telescopes, post-eclipse brightening was found at near-infrared wavelengths using an instrument aboard the Cassini spacecraft. Further support for this idea came in 2013, when the Gemini Observatory was used to directly measure the collapse of Io's atmosphere during, and its reformation after, eclipse by Jupiter.

High-resolution images of Io acquired while it is in eclipse reveal an aurora-like glow. As on Earth, this is due to particle radiation hitting the atmosphere, though in this case the charged particles come from Jupiter's magnetic field rather than the solar wind. Aurorae usually occur near the magnetic poles of planets, but Io's are brightest near its equator. Io lacks an intrinsic magnetic field of its own; therefore, electrons traveling along Jupiter's magnetic field near Io directly impact Io's atmosphere.
The brightest aurorae occur where the magnetic field lines are tangent to Io (i.e., near the equator), because there the column of gas that the electrons pass through is longest. The aurorae associated with these tangent points are observed to rock with the changing orientation of Jupiter's tilted magnetic dipole. Fainter aurorae from oxygen atoms along the limb of Io (visible as red glows) and from sodium atoms on Io's night side (green glows) have also been observed.
Physical sciences
Solar System
null
2268160
https://en.wikipedia.org/wiki/Hip
Hip
In vertebrate anatomy, the hip, or coxa (plural: coxae) in medical terminology, refers to either an anatomical region or a joint on the outer (lateral) side of the pelvis. The hip region is located lateral and anterior to the gluteal region, inferior to the iliac crest, and lateral to the obturator foramen, with muscle tendons and soft tissues overlying the greater trochanter of the femur. In adults, the three pelvic bones (ilium, ischium and pubis) have fused into one hip bone, which forms the superomedial/deep wall of the hip region.

The hip joint, scientifically referred to as the acetabulofemoral joint (art. coxae), is the ball-and-socket joint between the pelvic acetabulum and the femoral head. Its primary function is to support the weight of the torso in both static (e.g. standing) and dynamic (e.g. walking or running) postures. The hip joints also have very important roles in retaining balance and in maintaining the pelvic inclination angle. Pain in the hip may be the result of numerous causes, including nervous, osteoarthritic, infectious, traumatic, and genetic ones.

Structure

Region

The hip joint, a ball-and-socket joint, is formed by the acetabulum of the pelvis and the femoral head, the top portion of the thigh bone (femur). It allows for a wide range of movement and stability in the lower body. The proximal femur is largely covered by muscles and, as a consequence, the greater trochanter is often the only palpable bony structure in the hip region.

Articulation

The hip joint, or coxofemoral joint, is a ball-and-socket synovial joint formed by the articulation of the rounded head of the femur and the cup-like acetabulum of the pelvis. The socket of the acetabulum points downwards and anterolaterally, and is also turned such that the outer edge of its roof lies more lateral than the outer edge of its floor. The joint forms the primary connection between the bones of the lower limb and the axial skeleton of the trunk and pelvis. Both joint surfaces are covered with a strong but lubricated layer called articular hyaline cartilage. The cuplike acetabulum forms at the union of three pelvic bones: the ilium, pubis, and ischium. The Y-shaped growth plate that separates them, the triradiate cartilage, is fused definitively at ages 14–16. The hip is a special type of spheroidal or ball-and-socket joint in which the roughly spherical femoral head is largely contained within the acetabulum and has an average radius of curvature of 2.5 cm. The acetabulum grasps almost half the femoral ball, a grip deepened by a ring-shaped fibrocartilaginous lip, the acetabular labrum, which extends the joint beyond the equator. The centre of the acetabulum (the fovea) does not articulate with anything; instead, it is lined with a fat pad and attached to the ligamentum teres. The acetabular labrum is horseshoe-shaped, and its inferior notch is bridged by the transverse acetabular ligament. The joint space between the femoral head and the superior acetabulum is normally between 2 and 7 mm. The head of the femur is attached to the shaft by a thin neck region that is often prone to fracture in the elderly, mainly due to the degenerative effects of osteoporosis. The acetabulum is oriented inferiorly, laterally and anteriorly, while the femoral neck is directed superiorly, medially, and slightly anteriorly.
Articular angles Acetabular angle (or Sharp's angle) is the angle between the horizontal line passing through the inferior aspects of the triradiate cartilages (Hilgenreiner's line) and a line passing from the inferior angle of the triradiate cartilage to the superior acetabular rim. The angle measures 35 degrees at birth, 25 degrees at one year of age, and less than 10 degrees by 15 years of age. In adults the angle can vary from 33 to 38 degrees. The sagittal angle of the acetabular inlet is the angle between a line passing from the anterior to the posterior acetabular rim and the sagittal plane. It measures 7° at birth and increases to 17° in adults. Wiberg's centre-edge angle (CE angle) is the angle between a vertical line and a line from the centre of the femoral head to the most lateral part of the acetabulum, as seen on an anteroposterior radiograph. The vertical-centre-anterior margin angle (VCA) is the angle formed by a vertical line (V) and a line from the centre of the femoral head (C) to the anterior (A) edge of the dense shadow of the subchondral bone slightly posterior to the anterior edge of the acetabulum, with the radiograph taken in the false-profile view, that is, a lateral view rotated 25 degrees towards becoming frontal. The articular cartilage angle (AC angle, also called acetabular index or Hilgenreiner angle) is the angle between the horizontal plane (or a line connecting the corner of the triradiate cartilage and the lateral acetabular rim) and the weight-bearing dome, that is, the acetabular sourcil or "roof". In normal hips in children aged between 11 and 24 months, it has been estimated to average 20°, ranging between 18° and 25°. It becomes progressively lower with age. Suggested cutoff values to classify the angle as abnormally increased include: 30° up to 4 months of age. 25° up to 2 years of age. Femoral neck angle The angle between the longitudinal axes of the femoral neck and shaft, called the caput-collum-diaphyseal angle or CCD angle, normally measures approximately 150° in newborns and 126° in adults (coxa norma). An abnormally small angle is known as coxa vara and an abnormally large angle as coxa valga. Because changes in the shape of the femur naturally affect the knee, coxa valga is often combined with genu varum (bow-leggedness), while coxa vara leads to genu valgum (knock-knees). Changes in the CCD angle are the result of changes in the stress patterns applied to the hip joint. Such changes, caused for example by a dislocation, change the trabecular patterns inside the bones. Two continuous trabecular systems emerging on the auricular surface of the sacroiliac joint meander and criss-cross each other down through the hip bone, the femoral head, neck, and shaft. In the hip bone, one system arises on the upper part of the auricular surface to converge onto the posterior surface of the greater sciatic notch, from where its trabeculae are reflected to the inferior part of the acetabulum. The other system emerges on the lower part of the auricular surface, converges at the level of the superior gluteal line, and is reflected laterally onto the upper part of the acetabulum. In the femur, the first system lines up with a system arising from the lateral part of the femoral shaft to stretch to the inferior portion of the femoral neck and head. The other system lines up with a system in the femur stretching from the medial part of the femoral shaft to the superior part of the femoral head.
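To make the angle definitions above concrete, here is a minimal Python sketch that classifies a CCD angle and computes Wiberg's centre-edge angle from two image-plane landmarks. The vara/valga cutoffs are assumed illustrative values around the adult mean of about 126° quoted in the text, not clinical criteria, and the function and landmark names are hypothetical.

```python
import math

def classify_ccd(ccd_deg, normal_range=(120.0, 135.0)):
    """Classify a caput-collum-diaphyseal (CCD) angle.

    The text gives ~126 degrees as the normal adult value; the cutoff
    band used here is an assumed illustrative range, not a clinical one."""
    lo, hi = normal_range
    if ccd_deg < lo:
        return "coxa vara"
    if ccd_deg > hi:
        return "coxa valga"
    return "coxa norma"

def wiberg_ce_angle(head_center, lateral_rim):
    """Wiberg's centre-edge angle on an AP radiograph: the angle between
    a vertical line through the femoral head centre and the line from the
    head centre to the most lateral point of the acetabulum.
    Points are (x, y) image coordinates with y pointing superiorly."""
    dx = lateral_rim[0] - head_center[0]
    dy = lateral_rim[1] - head_center[1]
    return math.degrees(math.atan2(abs(dx), dy))

print(classify_ccd(126))  # coxa norma (adult mean in the text)
print(classify_ccd(150))  # flagged as valga by these adult-only cutoffs,
                          # though 150 degrees is normal in newborns
print(round(wiberg_ce_angle((0, 0), (2, 5)), 1))  # ~21.8 degrees
```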
On the lateral side of the hip joint the fascia lata is strengthened to form the iliotibial tract, which functions as a tension band and reduces the bending loads on the proximal part of the femur. Capsule Proximally, the capsule of the hip joint is attached to the edge of the acetabulum, the acetabular labrum, and the transverse acetabular ligament. Distally, it is attached to the trochanters of the femur and the intertrochanteric line anteriorly. Posteriorly, it is attached at the junction between the medial two-thirds and lateral one-third of the femoral neck, one finger breadth away from the intertrochanteric crest. From its attachment at the femoral neck, the fibres of the capsule are reflected backwards towards the acetabulum, carrying the retinacular vessels that supply the femoral head. The part of the femoral neck outside the capsule is shorter in front than posteriorly. The strong but loose fibrous capsule of the hip joint permits the hip joint to have the second largest range of movement (second only to the shoulder) and yet support the weight of the body, arms and head. The capsule has two sets of fibers: longitudinal and circular. The circular fibers form a collar around the femoral neck called the zona orbicularis. The longitudinal retinacular fibers travel along the neck and carry blood vessels. Ligaments The hip joint is reinforced by four ligaments, of which three are extracapsular and one intracapsular. The extracapsular ligaments are the iliofemoral, ischiofemoral, and pubofemoral ligaments, attached to the bones of the pelvis (the ilium, ischium, and pubis respectively). All three strengthen the capsule and prevent an excessive range of movement in the joint. Of these, the Y-shaped and twisted iliofemoral ligament is the strongest ligament in the human body, with a tensile strength of around 350 kg. The iliofemoral ligament is a thickening of the anterior capsule extending from the anterior inferior iliac spine to the intertrochanteric line. The ischiofemoral ligament is a thickening of the posterior capsule of the hip, and the pubofemoral ligament is a thickening of the inferior capsule. In the upright position, the iliofemoral ligament prevents the trunk from falling backward without the need for muscular activity, thus preventing excessive hyperextension. In the sitting position, it becomes relaxed, permitting the pelvis to tilt backward into its sitting position. The ischiofemoral ligament prevents excessive extension, and the pubofemoral ligament prevents excess abduction and extension. The zona orbicularis, which lies like a collar around the narrowest part of the femoral neck, is covered by the other ligaments, which partly radiate into it. The zona orbicularis acts like a buttonhole on the femoral head and assists in maintaining contact in the joint. All three ligaments become taut when the joint is extended; this stabilises the joint and reduces the energy demand of muscles when standing. The intracapsular ligament, the ligamentum teres, is attached to a depression in the acetabulum (the acetabular notch) and a depression on the femoral head (the fovea of the head). It is only stretched when the hip is dislocated, and may then prevent further displacement. It is not that important as a ligament but can often be vitally important as a conduit of a small artery to the head of the femur, the foveal artery. This artery is not present in everyone but can become the only blood supply to the bone in the head of the femur when the neck of the femur is fractured or disrupted by injury in childhood.
Blood supply The hip joint is supplied with blood from the medial circumflex femoral and lateral circumflex femoral arteries, which are both usually branches of the deep artery of the thigh (profunda femoris), but there are numerous variations and one or both may also arise directly from the femoral artery. There is also a small contribution from the foveal artery, a small vessel in the ligament of the head of the femur which is a branch of the posterior division of the obturator artery; this vessel becomes important in avoiding avascular necrosis of the head of the femur when the blood supply from the medial and lateral circumflex arteries is disrupted (e.g. through fracture of the neck of the femur along their course). The hip has two anatomically important anastomoses, the cruciate and the trochanteric anastomoses, the latter of which provides most of the blood to the head of the femur. These anastomoses exist between the femoral artery or profunda femoris and the gluteal vessels. Muscles and movements The hip muscles act on three mutually perpendicular main axes, all of which pass through the center of the femoral head, resulting in three degrees of freedom and three pairs of principal directions: flexion and extension around a transverse axis (left-right); lateral rotation and medial rotation around a longitudinal axis (along the thigh); abduction and adduction around a sagittal axis (forward-backward); and a combination of these movements (i.e. circumduction, a compound movement in which the leg describes the surface of an irregular cone). Some of the hip muscles also act on either the vertebral joints or the knee joint; because of their extensive areas of origin and/or insertion, different parts of individual muscles participate in very different movements, and the range of movement varies with the position of the hip joint. Additionally, the inferior and superior gemelli muscles assist the obturator internus, and the three muscles together form the three-headed muscle known as the triceps coxae. The movements of the hip joint are thus performed by a series of muscles, presented here in order of importance, with the range of motion from the neutral zero-degree position indicated: Lateral or external rotation (30° with the hip extended, 50° with the hip flexed): gluteus maximus; quadratus femoris; obturator internus; dorsal fibers of gluteus medius and minimus; iliopsoas (including psoas major from the vertebral column); obturator externus; adductor magnus, longus, brevis, and minimus; piriformis; and sartorius. The iliofemoral ligament inhibits lateral rotation and extension; this is why the hip can rotate laterally to a greater degree when it is flexed. Medial or internal rotation (40°): anterior fibers of gluteus medius and minimus; tensor fasciae latae; the part of adductor magnus inserted into the adductor tubercle; and, with the leg abducted, also the pectineus. Extension or retroversion (20°): gluteus maximus (if put out of action, active standing from a sitting position is not possible, but standing and walking on a flat surface is); dorsal fibers of gluteus medius and minimus; adductor magnus; and piriformis. Additionally, the following thigh muscles extend the hip: semimembranosus, semitendinosus, and the long head of biceps femoris. Maximal extension is inhibited by the iliofemoral ligament. Flexion or anteversion (140°): the hip flexors: iliopsoas (with psoas major from the vertebral column); tensor fasciae latae, pectineus, adductor longus, adductor brevis, and gracilis.
Thigh muscles acting as hip flexors: rectus femoris and sartorius. Maximal flexion is inhibited by the thigh coming into contact with the chest. Abduction (50° with hip extended, 80° with hip flexed): gluteus medius; tensor fasciae latae; gluteus maximus with its attachment at the fascia lata; gluteus minimus; piriformis; and obturator internus. Maximal abduction is inhibited by the neck of the femur coming into contact with the lateral pelvis. When the hips are flexed, this delays the impingement until a greater angle. Adduction (30° with hip extended, 20° with hip flexed): adductor magnus with adductor minimus; adductor longus, adductor brevis, gluteus maximus with its attachment at the gluteal tuberosity; gracilis (extends to the tibia); pectineus; quadratus femoris; and obturator externus. Of the thigh muscles, semitendinosus is especially involved in hip adduction. Maximal adduction is impeded by the thighs coming into contact with one another. This can be avoided by abducting the opposite leg, or by having the legs alternately flexed/extended at the hip so they travel in different planes and do not intersect. Clinical significance A hip fracture is a break that occurs in the upper part of the femur. Symptoms may include pain around the hip, particularly with movement, and shortening of the leg. The hip joint can be replaced by a prosthesis in a hip replacement operation, performed because of fractures or illnesses such as osteoarthritis. Hip pain can have multiple sources and can also be associated with lower back pain. At the 2022 Consumer Electronics Show, a company named Safeware announced an airbag belt designed to prevent hip fractures among users such as the elderly and hospital patients. Abnormal orientation of the acetabular socket, as seen in hip dysplasia, can lead to hip subluxation (partial dislocation) and degeneration of the acetabular labrum. Excessive coverage of the femoral head by the acetabulum can lead to pincer-type femoro-acetabular impingement (FAI). Sexual dimorphism and cultural significance In humans, unlike other animals, the hip bones are substantially different in the two sexes. The hips of human females widen during puberty. The femora are also more widely spaced in females, so as to widen the opening in the hip bone and thus facilitate childbirth. Finally, the ilium and its muscle attachment are shaped so as to situate the buttocks away from the birth canal, where contraction of the buttocks could otherwise damage the baby. The female hips have long been associated with both fertility and general expression of sexuality. Since broad hips facilitate childbirth and also serve as an anatomical cue of sexual maturity, they have been seen as an attractive trait for women for thousands of years. Many of the classical poses women take when sculpted, painted or photographed, such as the Grande Odalisque, serve to emphasize the prominence of their hips. Similarly, women's fashion through the ages has often drawn attention to the girth of the wearer's hips.
Biology and health sciences
Human anatomy
Health
2269463
https://en.wikipedia.org/wiki/Thermonuclear%20weapon
Thermonuclear weapon
A thermonuclear weapon, fusion weapon or hydrogen bomb (H-bomb) is a second-generation nuclear weapon design. Its greater sophistication affords it vastly greater destructive power than first-generation nuclear bombs, a more compact size, a lower mass, or a combination of these benefits. Characteristics of nuclear fusion reactions make possible the use of non-fissile depleted uranium as the weapon's main fuel, thus allowing more efficient use of scarce fissile material such as uranium-235 or plutonium-239. The first full-scale thermonuclear test (Ivy Mike) was carried out by the United States in 1952, and the concept has since been employed by most of the world's nuclear powers in the design of their weapons. Modern fusion weapons essentially consist of two main components: a nuclear fission primary stage (fueled by uranium-235 or plutonium-239) and a separate nuclear fusion secondary stage containing thermonuclear fuel: heavy isotopes of hydrogen (deuterium and tritium) as pure elements or, in modern weapons, in the form of lithium deuteride. For this reason, thermonuclear weapons are often colloquially called hydrogen bombs or H-bombs. A fusion explosion begins with the detonation of the fission primary stage. Its temperature soars past 100 million kelvin, causing it to glow intensely with thermal ("soft") X-rays. These X-rays flood the void (the "radiation channel", often filled with polystyrene foam) between the primary and secondary assemblies placed within an enclosure called a radiation case, which confines the X-ray energy and resists its outward pressure. The distance separating the two assemblies ensures that debris fragments from the fission primary (which move much more slowly than X-ray photons) cannot disassemble the secondary before the fusion explosion runs to completion. The secondary fusion stage—consisting of outer pusher/tamper, fusion fuel filler and central plutonium spark plug—is imploded by the X-ray energy impinging on its pusher/tamper. This compresses the entire secondary stage and drives up the density of the plutonium spark plug. The density of the plutonium fuel rises to such an extent that the spark plug is driven into a supercritical state, and it begins a nuclear fission chain reaction. The fission products of this chain reaction heat the highly compressed (and thus superdense) thermonuclear fuel surrounding the spark plug to around 300 million kelvin, igniting fusion reactions between fusion fuel nuclei. In modern weapons fueled by lithium deuteride, the fissioning plutonium spark plug also emits free neutrons that collide with lithium nuclei and supply the tritium component of the thermonuclear fuel. The secondary's relatively massive tamper (which resists outward expansion as the explosion proceeds) also serves as a thermal barrier to keep the fusion fuel filler from becoming too hot, which would spoil the compression. If made of uranium, enriched uranium or plutonium, the tamper captures fast fusion neutrons and undergoes fission itself, increasing the overall explosive yield. Additionally, in most designs the radiation case is also constructed of a material that undergoes fission driven by fast thermonuclear neutrons. Such bombs are classified as two-stage weapons. Fast fission of the tamper and radiation case is the main contribution to the total yield and is the dominant process that produces radioactive fission product fallout.
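The energy bookkeeping behind the lithium-deuteride fuel cycle described above can be checked directly from published atomic masses. The Python sketch below computes the energy released by the tritium-breeding and deuterium-tritium reactions; the mass values are standard table constants, and the two reactions shown are only the simplified chain named in the text.

```python
U_TO_MEV = 931.494          # energy equivalent of one atomic mass unit, MeV

MASS = {                    # approximate atomic masses in u
    "n":   1.008665,        # neutron
    "D":   2.014102,        # deuterium
    "T":   3.016049,        # tritium
    "He4": 4.002602,        # helium-4 (alpha particle)
    "Li6": 6.015123,        # lithium-6
}

def q_value(reactants, products):
    """Energy released (MeV) from the reactant/product mass difference."""
    dm = sum(MASS[s] for s in reactants) - sum(MASS[s] for s in products)
    return dm * U_TO_MEV

# Tritium breeding inside the secondary: 6Li + n -> T + 4He
print(round(q_value(["Li6", "n"], ["T", "He4"]), 2))  # ~4.78 MeV

# The main thermonuclear burn: D + T -> 4He + n
print(round(q_value(["D", "T"], ["He4", "n"]), 2))    # ~17.59 MeV
```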
Before Ivy Mike, Operation Greenhouse in 1951 was the first American nuclear test series to explore principles that led to the development of thermonuclear weapons. Sufficient fission was achieved to boost the associated fusion device, and enough was learned to achieve a full-scale device within a year. The design of all modern thermonuclear weapons in the United States is known as the Teller–Ulam configuration for its two chief contributors, Edward Teller and Stanisław Ulam, who developed it in 1951 for the United States, with certain concepts developed with the contribution of physicist John von Neumann. Similar devices were developed by the Soviet Union, United Kingdom, France, China and India. The thermonuclear Tsar Bomba was the most powerful bomb ever detonated. As thermonuclear weapons represent the most efficient design for weapon energy yield in weapons with yields above , virtually all the nuclear weapons of this size deployed by the five nuclear-weapon states under the Non-Proliferation Treaty today are thermonuclear weapons using the Teller–Ulam design. Public knowledge concerning nuclear weapon design Detailed knowledge of fission and fusion weapons is classified to some degree in virtually every industrialized country. In the United States, such knowledge can by default be classified as "Restricted Data", even if it is created by persons who are not government employees or associated with weapons programs, in a legal doctrine known as "born secret" (though the constitutional standing of the doctrine has at times been called into question; see United States v. Progressive, Inc.). Born secret is rarely invoked for cases of private speculation. The official policy of the United States Department of Energy has been not to acknowledge the leaking of design information, as such acknowledgment would potentially validate the information as accurate. In a small number of prior cases, the U.S. government has attempted to censor weapons information in the public press, with limited success. According to The New York Times, physicist Kenneth W. Ford defied government orders to remove classified information from his book Building the H Bomb: A Personal History. Ford claims he used only pre-existing information and even submitted a manuscript to the government, which wanted to remove entire sections of the book out of concern that foreign states could use the information. Though large quantities of vague data have been officially released—and larger quantities of vague data have been unofficially leaked by former bomb designers—most public descriptions of nuclear weapon design details rely to some degree on speculation, reverse engineering from known information, or comparison with similar fields of physics (inertial confinement fusion is the primary example). Such processes have resulted in a body of unclassified knowledge about nuclear bombs that is generally consistent with official unclassified information releases and related physics and is thought to be internally consistent, though there are some points of interpretation that are still considered open. The state of public knowledge about the Teller–Ulam design has been mostly shaped by a few specific incidents outlined in a section below. Basic principle Primary and secondary stages The basic principle of the Teller–Ulam configuration is the idea that different parts of a thermonuclear weapon can be chained together in stages, with the detonation of each stage providing the energy to ignite the next stage.
At a minimum, this implies a primary section consisting of an implosion-type fission bomb (a "trigger") and a secondary section consisting of fusion fuel. The energy released by the primary compresses the secondary through the process of radiation implosion, at which point it is heated and undergoes nuclear fusion. This process could be continued, with energy from the secondary igniting a third fusion stage; the Soviet Union's AN602 "Tsar Bomba" is thought to have been a three-stage fission-fusion-fusion device. Theoretically, by continuing this process thermonuclear weapons with arbitrarily high yield could be constructed. This contrasts with fission weapons, which are limited in yield because only so much fission fuel can be amassed in one place before the danger of its accidentally becoming supercritical becomes too great. Surrounding the other components is a hohlraum or radiation case, a container that temporarily traps the first stage or primary's energy inside. The outside of this radiation case, which is also normally the outside casing of the bomb, is the only direct visual evidence publicly available of any thermonuclear bomb component's configuration. Numerous photographs of various thermonuclear bomb exteriors have been declassified. The primary is thought to be a standard implosion-method fission bomb, though likely with a core boosted by small amounts of fusion fuel (usually 1:1 deuterium:tritium gas) for extra efficiency; the fusion fuel releases excess neutrons when heated and compressed, inducing additional fission. When fired, the uranium-235 or plutonium-239 core would be compressed to a smaller sphere by special layers of conventional high explosives arranged around it in an explosive lens pattern, initiating the nuclear chain reaction that powers the conventional "atomic bomb". The secondary is usually shown as a column of fusion fuel and other components wrapped in many layers. Around the column is first a "pusher-tamper", a heavy layer of uranium-238 or lead that helps compress the fusion fuel (and, in the case of uranium, may eventually undergo fission itself). Inside this is the fusion fuel, usually a form of lithium deuteride, which is used because it is easier to weaponize than liquefied tritium/deuterium gas. This dry fuel, when bombarded by neutrons, produces tritium, a heavy isotope of hydrogen that can undergo nuclear fusion, along with the deuterium present in the mixture. (See the article on nuclear fusion for a more detailed technical discussion of fusion reactions.) Inside the layer of fuel is the "spark plug", a hollow column of fissile material (uranium-235 or plutonium-239) often boosted by deuterium gas. The spark plug, when compressed, can undergo nuclear fission (because of its shape, it is not a critical mass without compression). The tertiary, if one is present, would be set below the secondary and probably be made of the same materials. Interstage Separating the secondary from the primary is the interstage. The fissioning primary produces four types of energy: 1) expanding hot gases from the high explosive charges that implode the primary; 2) superheated plasma that was originally the bomb's fissile material and its tamper; 3) electromagnetic radiation; and 4) neutrons from the primary's nuclear detonation. The interstage is responsible for accurately modulating the transfer of energy from the primary to the secondary. It must direct the hot gases, plasma, electromagnetic radiation and neutrons toward the right place at the right time.
Less than optimal interstage designs have resulted in the secondary failing to work entirely on multiple shots, a failure known as a "fissile fizzle". The Castle Koon shot of Operation Castle is a good example; a small flaw allowed the neutron flux from the primary to prematurely begin heating the secondary, weakening the compression enough to prevent any fusion. There is very little detailed information in the open literature about the mechanism of the interstage. One of the best sources is a simplified diagram of a British thermonuclear weapon similar to the American W80 warhead. It was released by Greenpeace in a report titled "Dual Use Nuclear Technology". The major components and their arrangement are in the diagram, though details are almost absent; what scattered details it does include likely have intentional omissions or inaccuracies. They are labeled "End-cap and Neutron Focus Lens" and "Reflector Wrap"; the former channels neutrons to the fissile spark plug, while the latter refers to an X-ray reflector, typically a cylinder made of an X-ray-opaque material such as uranium, with the primary and secondary at either end. It does not reflect like a mirror; instead, it is heated to a high temperature by the X-ray flux from the primary, then emits more evenly spread X-rays that travel to the secondary, causing what is known as radiation implosion. In Ivy Mike, gold was used as a coating over the uranium to enhance the blackbody effect. Next comes the "Reflector/Neutron Gun Carriage". The reflector seals the gap between the Neutron Focus Lens (in the center) and the outer casing near the primary. It separates the primary from the secondary and performs the same function as the previous reflector. There are about six neutron guns, each protruding through the outer edge of the reflector with one end in each section; all are clamped to the carriage and arranged more or less evenly around the casing's circumference. The neutron guns are tilted so the neutron-emitting end of each gun is pointed towards the central axis of the bomb. Neutrons from each neutron gun pass through and are focused by the neutron focus lens towards the centre of the primary in order to boost the initial fissioning of the plutonium. A "polystyrene Polarizer/Plasma Source" is also shown (see below). The first U.S. government document to mention the interstage was released to the public only recently, promoting the 2004 initiation of the Reliable Replacement Warhead (RRW) Program. A graphic includes blurbs describing the potential advantage of a RRW on a part-by-part level, with the interstage blurb saying a new design would replace "toxic, brittle material" and "expensive 'special' material... [that require] unique facilities". The "toxic, brittle material" is widely assumed to be beryllium, which fits that description and would also moderate the neutron flux from the primary. Some material to absorb and re-radiate the X-rays in a particular manner may also be used. Candidates for the "special material" are polystyrene and a substance called "Fogbank", an unclassified codename. Fogbank's composition is classified, though aerogel has been suggested as a possibility. It was first used in the W76 thermonuclear warhead and produced at a plant in the Y-12 Complex at Oak Ridge, Tennessee. Production of Fogbank lapsed after the W76 production run ended. The W76 Life Extension Program required more Fogbank to be made.
This was complicated by the fact that the original Fogbank's properties were not fully documented, so a massive effort was mounted to re-invent the process. An impurity crucial to the properties of the old Fogbank had been omitted from the new process; only close analysis of new and old batches revealed the nature of that impurity. The manufacturing process used acetonitrile as a solvent, which led to at least three evacuations of the Fogbank plant in 2006. Widely used in the petroleum and pharmaceutical industries, acetonitrile is flammable and toxic. Y-12 is the sole producer of Fogbank. Summary A simplified summary of the above explanation is: A (relatively) small fission bomb known as the "primary" explodes. Energy released in the primary is transferred to the "secondary" (or fusion) stage. This energy compresses the fusion fuel and spark plug; the compressed spark plug becomes supercritical and undergoes a fission chain reaction, further heating the compressed fusion fuel to a high enough temperature to induce fusion. Energy released by the fusion events continues heating the fuel, keeping the reaction going. The fusion fuel of the secondary stage may be surrounded by a layer of additional fuel that undergoes fission when hit by the neutrons from the reactions within. These fission events account for about half of the total energy released in typical designs. Compression of the secondary How exactly the energy is "transported" from the primary to the secondary has been the subject of some disagreement in the open press but is thought to be transmitted through the X-rays and gamma rays that are emitted from the fissioning primary. This energy is then used to compress the secondary. The crucial detail of how the X-rays create the pressure is the main remaining disputed point in the unclassified press. There are three proposed theories: Radiation pressure exerted by the X-rays. This was the first idea put forth by Howard Morland in an article in The Progressive. X-rays creating a plasma in the radiation channel's filler (a polystyrene or "Fogbank" plastic foam). This was a second idea put forward by Chuck Hansen and later by Howard Morland. Tamper/pusher ablation. This is the concept best supported by physical analysis. Radiation pressure The radiation pressure exerted by the large quantity of X-ray photons inside the closed casing might be enough to compress the secondary. Electromagnetic radiation such as X-rays or light carries momentum and exerts a force on any surface it strikes. The pressure of radiation at the intensities seen in everyday life, such as sunlight striking a surface, is usually imperceptible, but at the extreme intensities found in a thermonuclear bomb the pressure is enormous. For two thermonuclear bombs for which the general size and primary characteristics are well understood, the Ivy Mike test bomb and the modern W-80 cruise missile warhead variant of the B-61 design, the radiation pressure was calculated to be for the Ivy Mike design and for the W-80. Foam plasma pressure Foam plasma pressure is the concept that Chuck Hansen introduced during the Progressive case, based on research that located declassified documents listing special foams as liner components within the radiation case of thermonuclear weapons. The sequence of firing the weapon (with the foam) would be as follows: The high explosives surrounding the core of the primary fire, compressing the fissile material into a supercritical state and beginning the fission chain reaction.
The fissioning primary emits thermal X-rays, which "reflect" along the inside of the casing, irradiating the polystyrene foam. The irradiated foam becomes a hot plasma, pushing against the tamper of the secondary, compressing it tightly, and beginning the fission chain reaction in the spark plug. Pushed from both sides (from the primary and the spark plug), the lithium deuteride fuel is highly compressed and heated to thermonuclear temperatures. Also, on being bombarded with neutrons, each lithium-6 (6Li) nucleus absorbs a neutron and splits into one tritium nucleus and one alpha particle. Then begins a fusion reaction between the tritium and the deuterium, releasing even more neutrons and a huge amount of energy. The fuel undergoing the fusion reaction emits a large flux of high-energy neutrons (around 14 MeV), which irradiates the tamper (or the bomb casing), causing it to undergo a fast fission reaction and providing about half of the total energy. This would complete the fission-fusion-fission sequence. Fusion, unlike fission, is relatively "clean"—it releases energy but no harmful radioactive products or large amounts of nuclear fallout. The fission reactions, though, especially the last fission reactions, release a tremendous amount of fission products and fallout. If the last fission stage is omitted, by replacing the uranium tamper with one made of lead, for example, the overall explosive force is reduced by approximately half but the amount of fallout is relatively low. The neutron bomb is a hydrogen bomb with an intentionally thin tamper, allowing as many of the fast fusion neutrons as possible to escape. Current technical criticism of the idea of "foam plasma pressure" focuses on unclassified analysis from similar high-energy physics fields indicating that the pressure produced by such a plasma would only be a small multiplier of the basic photon pressure within the radiation case, and also that the known foam materials intrinsically have a very low absorption efficiency of the gamma ray and X-ray radiation from the primary. Most of the energy produced would be absorbed by either the walls of the radiation case or the tamper around the secondary. Analyzing the effects of that absorbed energy led to the third mechanism: ablation. Tamper-pusher ablation The outer casing of the secondary assembly is called the "tamper-pusher". The purpose of a tamper in an implosion bomb is to delay the expansion of the reacting fuel supply (which is very hot, dense plasma) until the fuel is fully consumed and the explosion runs to completion. The same tamper material also serves as a pusher in that it is the medium by which the outside pressure (force acting on the surface area of the secondary) is transferred to the mass of fusion fuel. The proposed tamper-pusher ablation mechanism posits that the outer layers of the thermonuclear secondary's tamper-pusher are heated so extremely by the primary's X-ray flux that they expand violently and ablate away (fly off). Because total momentum is conserved, this mass of high-velocity ejecta impels the rest of the tamper-pusher to recoil inwards with tremendous force, crushing the fusion fuel and the spark plug. The tamper-pusher is built robustly enough to insulate the fusion fuel from the extreme heat outside; otherwise, the compression would be spoiled.
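Two of the quantities in this debate lend themselves to a back-of-the-envelope check: the pressure of a blackbody photon gas at primary temperatures, and the momentum balance of an ablating pusher. The Python sketch below uses only textbook formulas, anticipating the rough calculations described next; the input temperature, ablated fraction and exhaust speed are assumed round numbers for illustration, not figures for any real device.

```python
A_RAD = 7.5657e-16   # radiation constant a, J m^-3 K^-4

def photon_gas_pressure(temp_kelvin):
    """Isotropic blackbody photon-gas pressure, P = a*T^4 / 3."""
    return A_RAD * temp_kelvin ** 4 / 3.0

def implosion_speed(ablated_fraction, exhaust_speed):
    """Newtonian momentum balance for ablation: the fraction f leaving
    at v_exhaust gives the remaining (1 - f) an equal, opposite momentum."""
    f = ablated_fraction
    return f * exhaust_speed / (1.0 - f)

# At ~100 million kelvin the photon pressure alone is already enormous:
print(f"{photon_gas_pressure(1.0e8):.1e} Pa")           # ~2.5e16 Pa

# Assumed example: half the pusher mass ablating outward at 300 km/s
# drives the remaining half inward at the same 300 km/s.
print(f"{implosion_speed(0.5, 3.0e5) / 1e3:.0f} km/s")  # 300 km/s
```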
Rough calculations for the basic ablation effect are relatively simple: the energy from the primary is distributed evenly onto all of the surfaces within the outer radiation case, with the components coming to a thermal equilibrium, and the effects of that thermal energy are then analyzed. The energy is mostly deposited within about one X-ray optical thickness of the tamper/pusher outer surface, and the temperature of that layer can then be calculated. The velocity at which the surface then expands outwards is calculated and, from a basic Newtonian momentum balance, the velocity at which the rest of the tamper implodes inwards. Applying the more detailed form of those calculations to the Ivy Mike device yields a vaporized pusher gas expansion velocity of and an implosion velocity of perhaps if of the total tamper/pusher mass is ablated off, the most energy-efficient proportion. For the W-80 the gas expansion velocity is roughly and the implosion velocity . The pressure due to the ablating material is calculated to be in the Ivy Mike device and in the W-80 device. Comparing implosion mechanisms Comparing the three mechanisms proposed, it can be seen that: The calculated ablation pressure is one order of magnitude greater than the higher proposed plasma pressures and nearly two orders of magnitude greater than the calculated radiation pressure. No mechanism to avoid the absorption of energy into the radiation case wall and the secondary tamper has been suggested, making ablation apparently unavoidable. The other mechanisms appear to be unneeded. United States Department of Defense official declassification reports indicate that foamed plastic materials are or may be used in radiation case liners, and despite the low direct plasma pressure they may be of use in delaying the ablation until energy has distributed evenly and a sufficient fraction has reached the secondary's tamper/pusher. Richard Rhodes' book Dark Sun states that a layer of plastic foam was fixed to the lead liner of the inside of the Ivy Mike steel casing using copper nails. Rhodes quotes several designers of that bomb explaining that the plastic foam layer inside the outer case is to delay ablation and thus recoil of the outer case: if the foam were not there, metal would ablate from the inside of the outer case with a large impulse, causing the casing to recoil outwards rapidly. The purpose of the casing is to contain the explosion for as long as possible, allowing as much X-ray ablation of the metallic surface of the secondary stage as possible, so that it compresses the secondary efficiently, maximizing the fusion yield. Plastic foam has a low density, so it causes a smaller impulse when it ablates than metal does. Design variations Possible variations to the weapon design have been proposed: Either the tamper or the casing has been proposed to be made of uranium-235 (highly enriched uranium) in the final fission jacket. The far more expensive uranium-235 is also fissionable with fast neutrons like the uranium-238 in depleted or natural uranium, but its fission-efficiency is higher. This is because uranium-235 nuclei also undergo fission by slow neutrons (uranium-238 nuclei require a minimum neutron energy of about 1 MeV), and because these slower neutrons are produced by other fissioning nuclei in the jacket (in other words, uranium-235 supports the nuclear chain reaction whereas uranium-238 does not). Furthermore, a uranium-235 jacket fosters neutron multiplication, whereas uranium-238 nuclei consume fusion neutrons in the fast-fission process.
Using a final fissionable/fissile jacket of uranium-235 would thus increase the yield of a Teller–Ulam bomb above that of a depleted-uranium or natural-uranium jacket. This has been proposed specifically for the W87 warheads retrofitted to currently deployed LGM-30 Minuteman III ICBMs. In some descriptions, additional internal structures exist to protect the secondary from receiving excessive neutrons from the primary. The inside of the casing may or may not be specially machined to "reflect" the X-rays. X-ray "reflection" is not like light reflecting off a mirror; rather, the reflector material is heated by the X-rays, causing the material itself to emit X-rays, which then travel to the secondary. Most bombs do not apparently have tertiary "stages"—that is, third compression stage(s), which are additional fusion stages compressed by a previous fusion stage. The fissioning of the last blanket of uranium, which provides about half the yield in large bombs, does not count as a "stage" in this terminology. The U.S. tested three-stage bombs in several explosions during Operation Redwing but is thought to have fielded only one such tertiary model, i.e., a bomb in which a fission stage, followed by a fusion stage, finally compresses yet another fusion stage. This U.S. design was the heavy but highly efficient (i.e., in nuclear weapon yield per unit bomb weight) B41 nuclear bomb. The Soviet Union is thought to have used multiple stages (including more than one tertiary fusion stage) in their 50-megaton (100 megatons in intended use) Tsar Bomba. The fissionable jacket could be replaced with lead, as was done with the Tsar Bomba. If any hydrogen bombs have been made from configurations other than those based on the Teller–Ulam design, the fact of it is not publicly known. A possible exception to this is the early Soviet Sloika design. In essence, the Teller–Ulam configuration relies on at least two instances of implosion occurring: first, the conventional (chemical) explosives in the primary would compress the fissile core, resulting in a fission explosion many times more powerful than that which chemical explosives could achieve alone (first stage). Second, the radiation from the fissioning of the primary would be used to compress and ignite the secondary fusion stage, resulting in a fusion explosion many times more powerful than the fission explosion alone. This chain of compression could conceivably be continued with an arbitrary number of tertiary fusion stages, each igniting more fusion fuel in the next stage, although this is debated. Finally, efficient bombs (but not so-called neutron bombs) end with the fissioning of the final natural uranium tamper, something that could not normally be achieved without the neutron flux provided by the fusion reactions in the secondary or tertiary stages. Such designs are suggested to be capable of being scaled up to arbitrarily large yield (with apparently as many fusion stages as desired), potentially to the level of a "doomsday device". However, such weapons were usually not more than a dozen megatons, which was generally considered enough to destroy even the most hardened practical targets (for example, a control facility such as the Cheyenne Mountain Complex). Even such large bombs have been replaced by smaller-yield nuclear bunker buster bombs.
For destruction of cities and non-hardened targets, breaking the mass of a single missile payload down into smaller MIRV bombs in order to spread the energy of the explosions into a "pancake" area is far more efficient in terms of area destruction per unit of bomb energy. This also applies to single bombs deliverable by cruise missile or another system, such as a bomber, with the result that most operational warheads in the U.S. program have yields of less than . Ivy Mike In his 1995 book Dark Sun: The Making of the Hydrogen Bomb, author Richard Rhodes describes in detail the internal components of the "Ivy Mike" Sausage device, based on information obtained from extensive interviews with the scientists and engineers who assembled it. According to Rhodes, the actual mechanism for the compression of the secondary was a combination of the radiation pressure, foam plasma pressure, and tamper-pusher ablation theories; the radiation from the primary heated the polyethylene foam lining of the casing to a plasma, which then re-radiated X-rays into the secondary's pusher, causing its surface to ablate and driving it inwards, compressing the secondary, igniting the spark plug, and causing the fusion reaction. The general applicability of this principle is unclear. W88 In 1999 a reporter for the San Jose Mercury News reported that the U.S. W88 nuclear warhead, a small MIRVed warhead used on the Trident II SLBM, had a prolate primary (code-named Komodo) and a spherical secondary (code-named Cursa) inside a specially shaped radiation case (known as the "peanut" for its shape). The value of an egg-shaped primary apparently lies in the fact that a MIRV warhead is limited by the diameter of the primary: if an egg-shaped primary can be made to work properly, then the MIRV warhead can be made considerably smaller yet still deliver a high-yield explosion. A W88 warhead manages to yield up to with a physics package long, with a maximum diameter of , and by different estimates weighing in a range from . The smaller warhead allows more of them to fit onto a single missile and improves basic flight properties such as speed and range. History United States The idea of a thermonuclear fusion bomb ignited by a smaller fission bomb was first proposed by Enrico Fermi to his colleague Edward Teller when they were talking at Columbia University in September 1941, at the start of what would become the Manhattan Project. Teller spent much of the Manhattan Project attempting to figure out how to make the design work, preferring it over work on the atomic bomb, and over the last year of the project he was assigned exclusively to the task. However, once World War II ended, there was little impetus to devote many resources to the Super, as it was then known. The first atomic bomb test by the Soviet Union in August 1949 came earlier than expected by Americans, and over the next several months there was an intense debate within the U.S. government, military, and scientific communities over whether to proceed with development of the far more powerful Super. The debate covered matters that were alternately strategic, pragmatic, and moral. In their Report of the General Advisory Committee, Robert Oppenheimer and colleagues concluded that "[t]he extreme danger to mankind inherent in the proposal [to develop thermonuclear weapons] wholly outweighs any military advantage." Despite the objections raised, on 31 January 1950, President Harry S. Truman made the decision to go forward with the development of the new weapon.
Teller and other U.S. physicists struggled to find a workable design. Stanislaw Ulam, a co-worker of Teller, made the first key conceptual leaps towards a workable fusion design. Ulam's two innovations that rendered the fusion bomb practical were the realization that compression of the thermonuclear fuel before extreme heating was a practical path towards the conditions needed for fusion, and the idea of staging, or placing a separate thermonuclear component outside a fission primary component and somehow using the primary to compress the secondary. Teller then realized that the gamma and X-ray radiation produced in the primary could transfer enough energy into the secondary to create a successful implosion and fusion burn, if the whole assembly were wrapped in a hohlraum or radiation case. The "George" shot of Operation Greenhouse of 9 May 1951 tested the basic concept for the first time on a very small scale. As the first successful (uncontrolled) release of nuclear fusion energy, which made up a small fraction of the total yield, it raised expectations to a near certainty that the concept would work. On 1 November 1952, the Teller–Ulam configuration was tested at full scale in the "Ivy Mike" shot at an island in the Enewetak Atoll, with a yield of 10.4 megatons (over 450 times more powerful than the bomb dropped on Nagasaki during World War II). The device, dubbed the Sausage, used an extra-large fission bomb as a "trigger" and liquid deuterium—kept in its liquid state by of cryogenic equipment—as its fusion fuel, and weighed around altogether. The liquid deuterium fuel of Ivy Mike was impractical for a deployable weapon, and the next advance was to use a solid lithium deuteride fusion fuel instead. In 1954 this was tested in the "Castle Bravo" shot (the device was code-named Shrimp), which had a yield of 15 megatons (2.5 times the expected yield) and is the largest U.S. bomb ever tested. Efforts shifted towards developing miniaturized Teller–Ulam weapons that could fit into intercontinental ballistic missiles and submarine-launched ballistic missiles. By 1960, with the W47 warhead deployed on Polaris ballistic missile submarines, megaton-class warheads were as small as in diameter and in weight. Further innovation in miniaturizing warheads was accomplished by the mid-1970s, when versions of the Teller–Ulam design were created that could fit ten or more warheads on the end of a small MIRVed missile. Soviet Union The first Soviet fusion design, developed by Andrei Sakharov and Vitaly Ginzburg in 1949 (before the Soviets had a working fission bomb), was dubbed the Sloika, after a Russian layer cake, and was not of the Teller–Ulam configuration. It used alternating layers of fissile material and lithium deuteride fusion fuel spiked with tritium (this was later dubbed Sakharov's "First Idea"). Though nuclear fusion might have been technically achievable, it did not have the scaling property of a "staged" weapon. Thus, such a design could not produce thermonuclear weapons whose explosive yields could be made arbitrarily large (unlike U.S. designs at that time). The fusion layer wrapped around the fission core could only moderately multiply the fission energy (modern Teller–Ulam designs can multiply it 30-fold). Additionally, the whole fusion stage had to be imploded by conventional explosives, along with the fission core, substantially increasing the amount of chemical explosives needed. The first Sloika design test, RDS-6s, was detonated in 1953 with a yield equivalent to ( from fusion).
Attempts to use a Sloika design to achieve megaton-range results proved unfeasible. After the United States tested the "Ivy Mike" thermonuclear device in November 1952, proving that a multimegaton bomb could be created, the Soviets searched for an alternative design. The "Second Idea", as Sakharov referred to it in his memoirs, was a previous proposal by Ginzburg in November 1948 to use lithium deuteride in the bomb, which would, in the course of being bombarded by neutrons, produce tritium and free deuterium. In late 1953 physicist Viktor Davidenko achieved the first breakthrough, that of staging the reactions. The next breakthrough, radiation implosion, was discovered and developed by Sakharov and Yakov Zel'dovich in early 1954. Sakharov's "Third Idea", as the Teller–Ulam design was known in the USSR, was tested in the "RDS-37" shot in November 1955 with a yield of . The Soviets demonstrated the power of the staging concept in October 1961, when they detonated the massive and unwieldy Tsar Bomba, the largest nuclear weapon developed and tested by any country. United Kingdom In 1954 work began at Aldermaston to develop the British fusion bomb, with Sir William Penney in charge of the project. British knowledge of how to make a thermonuclear fusion bomb was rudimentary, and at the time the United States was not exchanging any nuclear knowledge because of the Atomic Energy Act of 1946. The United Kingdom had worked closely with the Americans on the Manhattan Project, but British access to nuclear weapons information was cut off by the United States at one point due to concerns about Soviet espionage. Full cooperation was not reestablished until an agreement governing the handling of secret information and other issues was signed. However, the British were allowed to observe the U.S. Castle tests and used sampling aircraft in the mushroom clouds, providing them with clear, direct evidence of the compression produced in the secondary stages by radiation implosion. Because of these difficulties, in 1955 Prime Minister Anthony Eden agreed to a secret plan whereby, if the Aldermaston scientists failed or were greatly delayed in developing the fusion bomb, it would be replaced by an extremely large fission bomb. In 1957 the Operation Grapple tests were carried out. The first test, Green Granite, was a prototype fusion bomb that failed to produce yields comparable to those of the U.S. and Soviet tests, achieving only approximately . The second test, Orange Herald, was the modified fission bomb and produced 720 kilotons—making it the largest fission explosion ever. At the time almost everyone (including the pilots of the plane that dropped it) thought that this was a fusion bomb. This bomb was put into service in 1958. A second prototype fusion bomb, Purple Granite, was used in the third test, but only produced approximately . A second set of tests was scheduled, with testing recommencing in September 1957. The first test was based on a "… new simpler design. A two-stage thermonuclear bomb that had a much more powerful trigger". This test, Grapple X Round C, was exploded on 8 November and yielded approximately . On 28 April 1958 a bomb was dropped that yielded —Britain's most powerful test. Two final air-burst tests on 2 and 11 September 1958 dropped smaller bombs that yielded around each. American observers had been invited to these kinds of tests.
After Britain's successful detonation of a megaton-range device (and thus a demonstrated practical understanding of the Teller–Ulam design "secret"), the United States agreed to exchange some of its nuclear designs with the United Kingdom, leading to the 1958 US–UK Mutual Defence Agreement. Instead of continuing with its own design, the British were given access to the design of the smaller American Mk 28 warhead and were able to manufacture copies. China Mao Zedong decided to begin a Chinese nuclear-weapons program during the First Taiwan Strait Crisis of 1954–1955. The People's Republic of China detonated its first thermonuclear bomb on 17 June 1967, 32 months after detonating its first fission weapon, with a yield of 3.31 Mt. It took place at the Lop Nor Test Site in northwest China. China had received extensive technical help from the Soviet Union to jump-start its nuclear program, but by 1960 the rift between the Soviet Union and China had become so great that the Soviet Union ceased all assistance to China. A story in The New York Times by William Broad reported that in 1995 a supposed Chinese double agent delivered information indicating that China knew secret details of the U.S. W88 warhead, supposedly through espionage. (This line of investigation eventually resulted in the abortive trial of Wen Ho Lee.) France The "Canopus" test in the Fangataufa atoll in French Polynesia on 24 August 1968 was the country's first multistage thermonuclear weapon test. The bomb was detonated from a balloon at a height of . The result of this test was significant atmospheric contamination. Very little is known about France's development of the Teller–Ulam design, beyond the fact that France detonated a device in the "Canopus" test. France reportedly had great difficulty with its initial development of the Teller–Ulam design, but it later overcame these difficulties and is believed to have nuclear weapons equal in sophistication to those of the other major nuclear powers. France and China did not sign or ratify the Partial Nuclear Test Ban Treaty of 1963, which banned nuclear test explosions in the atmosphere, underwater, or in outer space. Between 1966 and 1996 France carried out more than 190 nuclear tests. France's final nuclear test took place on 27 January 1996, after which the country dismantled its Polynesian test sites. France signed the Comprehensive Nuclear-Test-Ban Treaty that same year and ratified it within two years. In 2015 France confirmed that its nuclear arsenal contains about 300 warheads, carried by submarine-launched ballistic missiles and fighter-bombers. France has four Triomphant-class ballistic missile submarines. One ballistic missile submarine is deployed in the deep ocean, but a total of three must be in operational use at all times. The three older submarines are armed with 16 M45 missiles. The newest submarine, "Le Terrible", was commissioned in 2010, and it has M51 missiles capable of carrying TN 75 thermonuclear warheads. The air component comprises four squadrons at four different bases. In total, there are 23 Mirage 2000N aircraft and 20 Rafales capable of carrying nuclear warheads. The M51.1 missiles are intended to be replaced beginning in 2016 with the new M51.2, which has a greater range than the M51.1. France has about 60 air-launched missiles tipped with TN 80/TN 81 warheads with a yield of about each. France's nuclear program has been carefully designed to ensure that these weapons remain usable decades into the future.
Currently, France is no longer deliberately producing critical-mass materials such as plutonium and enriched uranium, but it still relies on nuclear energy for electricity, with plutonium as a byproduct. India On 11 May 1998, India announced that it had detonated a thermonuclear bomb in its Operation Shakti tests ("Shakti-I" specifically; in Hindi, the word "Shakti" means power). Samar Mubarakmand, a Pakistani nuclear physicist, asserted that if Shakti-I had been a thermonuclear test, the device failed to fire. However, Harold M. Agnew, former director of the Los Alamos National Laboratory, said that India's assertion of having detonated a staged thermonuclear bomb was believable. India says that its thermonuclear device was tested at a controlled yield of 45 kt because of the close proximity of the Khetolai village, to ensure that the houses in that village did not suffer significant damage. Another cited reason was that radioactivity released from yields significantly greater than 45 kt might not have been contained fully. After the Pokhran-II tests, Rajagopala Chidambaram, former chairman of the Atomic Energy Commission of India, said that India has the capability to build thermonuclear bombs of any yield at will. India officially maintains that it can build thermonuclear weapons of various yields on the basis of the Shakti-1 thermonuclear test. The yield of India's hydrogen bomb test remains highly debated among the Indian science community and international scholars. Politicisation and disputes between Indian scientists further complicated the matter. In an interview in August 2009, the director for the 1998 test site preparations, K. Santhanam, claimed that the yield of the thermonuclear explosion was lower than expected and that India should therefore not rush into signing the Comprehensive Nuclear-Test-Ban Treaty. Other Indian scientists involved in the test have disputed Santhanam's claim, arguing that his claims are unscientific. British seismologist Roger Clarke argued that the measured magnitudes suggested a combined yield of up to , consistent with the announced Indian total yield of . U.S. seismologist Jack Evernden has argued that for correct estimation of yields, one should "account properly for geological and seismological differences between test sites". Israel Israel is alleged to possess thermonuclear weapons of the Teller–Ulam design, but it is not known to have tested any nuclear devices, although it is widely speculated that the Vela incident of 1979 may have been a joint Israeli–South African nuclear test. It is well established that Edward Teller advised and guided the Israeli establishment on general nuclear matters for some 20 years. Between 1964 and 1967, Teller made six visits to Israel, where he lectured at Tel Aviv University on general topics in theoretical physics. It took him a year to convince the CIA about Israel's capability, and finally, in 1976, Carl Duckett of the CIA testified to the U.S. Congress, after receiving credible information from an "American scientist" (Teller), on Israel's nuclear capability. During the 1990s, Teller eventually confirmed speculation in the media that it was during his visits in the 1960s that he concluded that Israel was in possession of nuclear weapons. After he conveyed the matter to the higher levels of the U.S. government, Teller reportedly said: "They [Israel] have it, and they were clever enough to trust their research and not to test; they know that to test would get them into trouble."
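Both the Indian yield dispute above and the North Korean seismic estimates below turn on converting a seismic body-wave magnitude (mb) into an explosive yield. A minimal Python sketch follows, using one commonly cited empirical relation for well-coupled underground explosions in hard rock, mb ≈ 4.45 + 0.75·log10(Y in kilotons); the constants vary considerably with test-site geology, so the outputs are indicative only.

```python
def yield_kt(mb, a=4.45, b=0.75):
    """Invert mb = a + b*log10(Y) for the yield Y in kilotons.
    The default constants are one published hard-rock calibration;
    treat them as assumptions, not site-specific values."""
    return 10 ** ((mb - a) / b)

for magnitude in (5.1, 6.1, 6.3):
    print(f"mb {magnitude}: ~{yield_kt(magnitude):,.0f} kt")
# mb 5.1 -> ~7 kt; mb 6.1 -> ~159 kt; mb 6.3 -> ~293 kt
```

Note that the output for magnitude 6.3 is broadly consistent with the "in excess of 300 kilotons" estimate cited in the North Korea section below, while the 5.1 figure suggests a single-digit-kiloton device.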
North Korea
North Korea claimed to have tested its miniaturised thermonuclear bomb on 6 January 2016. North Korea's first three nuclear tests (2006, 2009 and 2013) were relatively low yield and do not appear to have been of a thermonuclear weapon design. In 2013, the South Korean Defense Ministry speculated that North Korea may be trying to develop a "hydrogen bomb" and that such a device might be North Korea's next weapons test. In January 2016, North Korea claimed to have successfully tested a hydrogen bomb, although only a magnitude 5.1 seismic event was detected at the time of the test, a magnitude similar to that of the 2013 test of an atomic bomb. These seismic recordings cast doubt upon North Korea's claim that a hydrogen bomb was tested and suggest it was a non-fusion nuclear test. On 3 September 2017, the country's state media reported that a hydrogen bomb test had been conducted that resulted in "perfect success". According to the U.S. Geological Survey (USGS), the blast released energy equivalent to an earthquake with a seismic magnitude of 6.3, 10 times more powerful than previous nuclear tests conducted by North Korea. U.S. Intelligence released an early assessment that the yield estimate was , with an uncertainty range of . On 12 September, NORSAR revised its estimate of the explosion's magnitude upward to 6.1, matching that of the CTBTO but less powerful than the USGS estimate of 6.3. Its yield estimate was revised to , while noting the estimate had some uncertainty and an undisclosed margin of error. On 13 September, an analysis of before-and-after synthetic-aperture radar satellite imagery of the test site was published, suggesting the test occurred under of rock and that the yield "could have been in excess of 300 kilotons".
Public knowledge
The Teller–Ulam design was for many years considered one of the top nuclear secrets, and even today it is not discussed in any detail by official publications with origins "behind the fence" of classification. United States Department of Energy (DOE) policy has been, and continues to be, not to acknowledge when "leaks" occur, because doing so would acknowledge the accuracy of the supposedly leaked information. Aside from images of the warhead casing, most information in the public domain about this design is relegated to a few terse statements by the DOE and the work of a few individual investigators.
U.S. Department of Energy statements
In 1972 the United States government declassified a document stating "[I]n thermonuclear (TN) weapons, a fission 'primary' is used to trigger a TN reaction in thermonuclear fuel referred to as a 'secondary'", and in 1979 added, "[I]n thermonuclear weapons, radiation from a fission explosive can be contained and used to transfer energy to compress and ignite a physically separate component containing thermonuclear fuel." To this latter sentence the US government specified that "Any elaboration of this statement will be classified." The only information that may pertain to the spark plug was declassified in 1991: "Fact that fissile or fissionable materials are present in some secondaries, material unidentified, location unspecified, use unspecified, and weapons undesignated." In 1998 the DOE declassified the statement that "The fact that materials may be present in channels and the term 'channel filler', with no elaboration", which may refer to the polystyrene foam (or an analogous substance). Whether these statements vindicate some or all of the models presented above is up for interpretation, and official U.S.
government releases about the technical details of nuclear weapons have been purposely equivocal in the past (e.g., the Smyth Report). Other information, such as the types of fuel used in some of the early weapons, has been declassified, though precise technical information has not been.
United States v. The Progressive
Most of the current ideas on the workings of the Teller–Ulam design came into public awareness after the DOE attempted to censor a magazine article by U.S. anti-weapons activist Howard Morland in 1979 on the "secret of the hydrogen bomb". In 1978, Morland had decided that discovering and exposing this "last remaining secret" would focus attention onto the arms race and allow citizens to feel empowered to question official statements on the importance of nuclear weapons and nuclear secrecy. Most of Morland's ideas about how the weapon worked were compiled from accessible sources: the drawings that most inspired his approach came from the Encyclopedia Americana. Morland also interviewed (often informally) many former Los Alamos scientists (including Teller and Ulam, though neither gave him any useful information), and he used a variety of interpersonal strategies to encourage informative responses from them (for example, asking questions such as "Do they still use spark plugs?" even when he was not aware what the latter term specifically referred to). Morland eventually concluded that the "secret" was that the primary and secondary were kept separate and that radiation pressure from the primary compressed the secondary before igniting it. When an early draft of the article, to be published in The Progressive magazine, was sent to the DOE after falling into the hands of a professor who was opposed to Morland's goal, the DOE requested that the article not be published and pressed for a temporary injunction. The DOE argued that Morland's information was (1) likely derived from classified sources, (2) if not derived from classified sources, itself counted as "secret" information under the "born secret" clause of the 1954 Atomic Energy Act, and (3) dangerous and likely to encourage nuclear proliferation. Morland and his lawyers disputed all three points, but the injunction was granted, as the judge in the case felt that it was safer to grant the injunction and allow Morland and his co-defendants to appeal. Through a variety of more complicated circumstances, the DOE case began to wane as it became clear that some of the data it was attempting to claim as "secret" had been published in a students' encyclopedia a few years earlier. After another H-bomb speculator, Chuck Hansen, had his own ideas about the "secret" (quite different from Morland's) published in a Wisconsin newspaper, the DOE claimed that The Progressive case was moot, dropped its suit, and allowed the magazine to publish its article, which it did in November 1979. Morland had by then, however, changed his opinion of how the bomb worked, suggesting that a foam medium (the polystyrene) rather than radiation pressure was used to compress the secondary, and that in the secondary there was a spark plug of fissile material as well. He published these changes, based in part on the proceedings of the appeals trial, as a short erratum in The Progressive a month later. In 1981, Morland published a book about his experience, describing in detail the train of thought that led him to his conclusions about the "secret".
Morland's work is interpreted as being at least partially correct because the DOE had sought to censor it, one of the few times the DOE departed from its usual approach of not acknowledging "secret" material that had been released; however, to what degree it omits information, or contains incorrect information, is not known with any confidence. The difficulty that other countries had in developing the Teller–Ulam design (even when they apparently understood the design, such as with the United Kingdom) makes it somewhat unlikely that this simple information alone is what provides the ability to manufacture thermonuclear weapons. Nevertheless, the ideas put forward by Morland in 1979 have been the basis for all the current speculation on the Teller–Ulam design.
Notable accidents
On 5 February 1958, during a training mission flown by a B-47, a Mark 15 nuclear bomb, also known as the Tybee Bomb, was lost off the coast of Tybee Island near Savannah, Georgia. The US Air Force maintains that the bomb was unarmed and did not contain the live plutonium core necessary to initiate a nuclear explosion. The bomb was thought by the Department of Energy to lie buried under several feet of silt at the bottom of Wassaw Sound. On 17 January 1966, a fatal collision occurred between a B-52G and a KC-135 Stratotanker over Palomares, Spain. The conventional explosives in two of the Mk28-type hydrogen bombs detonated upon impact with the ground, dispersing plutonium over nearby farms. A third bomb landed intact near Palomares, while the fourth fell 12 miles (19 km) off the coast into the Mediterranean Sea and was recovered a few months later. On 21 January 1968, a B-52G with four B28FI thermonuclear bombs aboard as part of Operation Chrome Dome crashed on the ice of North Star Bay while attempting an emergency landing at Thule Air Base in Greenland. The resulting fire caused extensive radioactive contamination. Personnel involved in the cleanup failed to recover all the debris from three of the bombs, and one bomb was never recovered.
Giant Pacific octopus
The giant Pacific octopus (Enteroctopus dofleini), also known as the North Pacific giant octopus, is a large marine cephalopod belonging to the genus Enteroctopus and the family Enteroctopodidae. Its spatial distribution encompasses much of the coastal North Pacific, from the Mexican state of Baja California, north along the United States' West Coast (California, Oregon, Washington and Alaska, including the Aleutian Islands), and British Columbia, Canada; across the northern Pacific to the Russian Far East (Kamchatka, Sea of Okhotsk), south to the East China Sea, the Yellow Sea, the Sea of Japan, Japan's Pacific east coast, and around the Korean Peninsula. It can be found from the intertidal zone down to , and is best adapted to colder, oxygen- and nutrient-rich waters. It is the largest octopus species on Earth and can often be found in aquariums and research facilities in addition to the ocean. E. dofleini plays an important role in maintaining the health and biodiversity of deep-sea ecosystems, and it is also significant to cognitive research and the fishing industry.
Etymology
The giant Pacific octopus was first described in 1910 by Gerhard Wülker of Leipzig University in Über Japanische Cephalopoden. He described the species' morphology in detail and mentioned that there seems to be much variation within the species. The specific name dofleini was chosen by Wülker in honor of German scientist Franz Theodor Doflein. The species was moved to the genus Enteroctopus by Eric Hochberg in 1998.
Description
Size
E. dofleini is distinguished from other species by its large size; it is the largest octopus species. Adults usually weigh around , with an arm span up to . Some larger individuals have weighed in at , with a radial span of . American zoologist G. H. Parker found that the largest suckers on a giant Pacific octopus are about and can support each. The only other possible contender for the largest species of octopus is the seven-arm octopus (Haliphron atlanticus), based on a , incomplete carcass estimated to have a live mass of .
Ecology
Diet
E. dofleini preys on shrimp, crabs, scallops, abalones, cockles, snails, clams, lobsters, fish, squid, and other octopuses. Food is procured with its suckers and then bitten using its tough chitinous beak. It has also been observed catching spiny dogfish (Squalus acanthias) up to in length while in captivity. Additionally, consumed carcasses of this same shark species have been found in giant Pacific octopus middens in the wild, providing strong evidence that these octopuses prey on small sharks in their natural habitat. In May 2012, amateur photographer Ginger Morneau was widely reported to have photographed a wild giant Pacific octopus attacking and drowning a seagull, demonstrating that this species is not above eating any available source of food within its size range, even birds.
Predators
Scavengers and other organisms often attempt to eat octopus eggs, even when the female is present to protect them. Giant Pacific octopus paralarvae are preyed upon by many other zooplankton and filter feeders. Marine mammals such as harbor seals, sea otters, and sperm whales depend upon the giant Pacific octopus as a source of food. Pacific sleeper sharks are also confirmed predators of this species. In addition, the octopus (along with cuttlefish and squid) is a significant source of protein for human consumption. About are commercially fished, worth $6 billion annually. Over thousands of years, humans have caught them using lures, spears, pot traps, nets, and bare hands.
The octopus is parasitized by the mesozoan , which lives in its renal appendages.
Movement patterns
E. dofleini move through the open water using jet propulsion, achieved by drawing water into the mantle cavity and then forcefully expelling it through a siphon, creating a powerful thrust that propels the octopus through the water at high speed. When moving on the seafloor, however, the octopus crawls using its arms. E. dofleini remain stationary or in hiding 94% of the time, usually concealed within dens or kelp, or camouflaged in their environment. Otherwise, they exhibit activity throughout the day, increasingly so from midnight to the early morning. While stationary, E. dofleini hide, groom, eat, sleep, and maintain dens. E. dofleini are capable of moving vast distances to occupy new areas or habitats, with large octopuses moving further than smaller ones. Their movements are not random; they demonstrate a preference for habitats with dense kelp cover and rocky terrain, suggesting a sophisticated level of habitat selection, likely optimizing foraging efficiency and minimizing exposure to predators. Furthermore, their movement patterns include direct relocations to new areas and central-tendency movements to return to familiar habitats. This navigation behavior is informed by familiar cliff edges, substrates, and topography, as well as visual landmarks. E. dofleini migration patterns vary depending on the population. In the Pacific waters off the coast of Japan, migration coincides with seasonal temperature changes in the winter and summer. Here, E. dofleini migrate to shallower waters in the early summer and winter and offshore in the late summer and winter. There is no evidence of these migration patterns in the Alaskan and northeast Pacific populations of E. dofleini.
Shelter
E. dofleini are den dwellers; the den serves as a central point from which they forage while also providing protection, shelter, and privacy. After hunting, they bring food back to the den to feed in a safer environment and avoid predators. Shells, bones, and other feeding debris pile up outside the den, creating "den litter" that is commonly used by scientists and divers to find E. dofleini. Dens range across depths and substratum types, including caves, holes dug beneath rock, and even trash on the ocean floor such as bottles, tires, pipes, and barrels. Den selection is greatly influenced by foraging behavior and preferred prey: dens in soft substrata may be preferred in areas where bivalves are abundant, while dens near rocky areas might be chosen where crab populations are higher. The den itself is small, usually just large enough for the octopus to fit inside and turn around. E. dofleini beak size determines the size of the space the animal can fit inside, its body being able to compress through spaces as small as two inches. E. dofleini prefer to occupy the same shelter for at least one month, often longer if possible. It is common for these octopuses to leave their den for short periods of time and eventually return to re-use the same den. Over longer periods of time, however, E. dofleini relocate to new dens situated relatively nearby, within an average distance of 13.2 meters.
Lifespan and reproduction
Unlike most other octopus species, whose lifespans normally span only one year, the giant Pacific octopus has a lifespan of three to five years. They reach sexual maturity at one to two years of age.
Gonadal maturation has been linked to the optic gland of octopuses, which has been compared functionally to the vertebrate pituitary gland. These optic glands are the only endocrine glands identified in octopuses, and their secretions have been found to contribute to behaviors linked with reproduction and senescence. When the glands are removed, females no longer brood their eggs; they resume feeding, gain weight, and experience longer lifespans than sexually mature, brooding females that retain their optic glands. To help compensate for its relatively short lifespan, the octopus is extremely prolific. It can lay between 120,000 and 400,000 eggs, which are coated in chorion and attached to a hard surface by the female. The spawn is intensively cared for exclusively by the female, who continuously blows water over it and grooms it to remove algae and other growths. While she fulfills her duty of parental care, the female stays close to her spawn, never leaving to feed, and she dies soon after the young have hatched. Her death is the result of starvation, as she subsists on her own body fat during this period of approximately six months. Hatchlings are about the size of a grain of rice, and very few survive to adulthood. Their growth rate is quite rapid: starting from and growing to at adulthood, an increase of around 0.9% per day. The giant Pacific octopus' growth over the course of a year has two phases: a faster one, from July to December, and a slower one, from January to June. Because they are cold-blooded, they are able to devote most of their consumed energy to body mass, respiration, physical activity, and reproduction. During reproduction, the male octopus uses his hectocotylus (a specialized arm) to deposit a spermatophore (or sperm packet) more than long in the female's mantle. The hectocotylus is found on the third arm of male octopuses and occupies the last four inches of the arm; this part of the male's arm anatomy bears no suckers. Large spermatophores are characteristic of octopuses in this genus. The female stores the spermatophore in her spermatheca until she is ready to fertilize her eggs. One female at the Seattle Aquarium was observed to retain a spermatophore for seven months before laying fertilized eggs. Both male and female giant Pacific octopuses are semelparous, meaning they go through only one breeding cycle in their life. Analysis of egg clutches has shown evidence of polygyny and polyandry in giant Pacific octopuses, with males and females mating with multiple partners. This multiple paternity potentially allows a female to increase the odds that at least one of the males she mates with will produce fit offspring. After mating, both males and females stop eating and ultimately die. After reproduction, they enter senescence, which involves obvious changes in behavior and appearance, including a reduced appetite, retraction of the skin around the eyes (giving the eyes a more pronounced appearance), increased activity in uncoordinated patterns, and white lesions all over the body. While the duration of this stage is variable, it typically lasts about one to two months. Despite active senescence primarily occurring over the period immediately following reproduction, research has shown that changes related to senescence may begin as early as the onset of reproductive behavior.
In the early stages of senescence, which begin as the octopus enters the stage of reproduction, hyper-sensitivity is noted, with individuals overreacting to both noxious and non-noxious touch. As they enter late senescence, insensitivity is observed, along with the dramatic physical changes described above. Changes in sensitivity to touch are attributed to decreasing cellular density in nerve and epithelial cells as the nervous system degrades. Death is typically attributed to starvation, as the females have stopped hunting in order to protect their eggs; males often spend more time in the open, making them more likely to be preyed upon.
Intelligence
Octopuses are ranked as the most intelligent invertebrates. Giant Pacific octopuses are commonly kept on display at aquariums due to their size and interesting physiology, and they have demonstrated the ability to recognize humans with whom they frequently come in contact. These responses include jetting water, changing body texture, and other behaviors that are consistently directed at specific individuals. They have the ability to solve simple puzzles, open childproof bottles, and use tools. The octopus brain has folded lobes (a distinct characteristic of complexity) as well as visual and tactile memory centers. Octopuses have about 300 million neurons. They have been known to open tank valves, disassemble expensive equipment, and generally wreak havoc in labs and aquaria. Some researchers even claim that they are capable of motor play and that they have personalities.
Conservation and climate change
Conservation
Giant Pacific octopuses are not currently protected under the Convention on International Trade in Endangered Species of Wild Fauna and Flora, nor are they evaluated in the IUCN Red List. DNA techniques have assisted in genetic and phylogenetic analysis of the species' evolutionary past. Following DNA analysis, the giant Pacific octopus may actually prove to be three subspecies (one in Japan, another in Alaska, and a third in Puget Sound). In Puget Sound, the Washington Fish and Wildlife Commission adopted rules protecting giant Pacific octopuses from harvest at seven sites, after a legal harvest caused a public outcry. Populations in Puget Sound are not considered threatened. Regardless of the data gaps in abundance estimates, future climate change scenarios may affect these organisms in different ways. Climate change is complex, with predicted biotic and abiotic changes to multiple processes, including oxygen limitation, reproduction, ocean acidification, toxins, effects on other trophic levels, and RNA editing.
The seafood industry
Fraud is an issue in the seafood industry, with species names being switched by accident or on purpose, as when the name of a more expensive species is used for a cheaper one. Cephalopods, in particular, lose distinguishing characteristics during food processing, making them much harder to identify. One study developed a multiplex PCR assay to distinguish between three prevalent octopus species in the Eastern Pacific, namely the giant Pacific octopus, the big blue octopus, and the common octopus, in order to identify these species accurately and help prevent seafood fraud; an illustration of the idea follows below. Combined with the lack of stock assessment and with mislabeling, tracking the species' abundance is nearly impossible. Scientists have relied on catch numbers to estimate stock abundance, but the animals are solitary and difficult to find.
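The assay's underlying logic can be sketched in a few lines: each species-specific primer pair is designed to produce an amplicon of a distinctive length, so the band pattern in a single gel lane identifies the species. The amplicon sizes below are invented placeholders for illustration, not the published assay's values.

    # Toy sketch of multiplex-PCR species identification by amplicon length.
    # The base-pair sizes are hypothetical, not the real assay's values.
    EXPECTED_AMPLICONS_BP = {
        "Enteroctopus dofleini (giant Pacific octopus)": 150,
        "Octopus cyanea (big blue octopus)": 320,
        "Octopus vulgaris (common octopus)": 480,
    }

    def identify(band_bp, tolerance=10):
        """Match an observed gel band length to the closest expected amplicon."""
        for species, size in EXPECTED_AMPLICONS_BP.items():
            if abs(band_bp - size) <= tolerance:
                return species
        return "no match"

    print(identify(317))  # -> Octopus cyanea (big blue octopus)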
Sites like the Monterey Bay Aquarium's Seafood Watch can help people to responsibly consume seafood, including the giant Pacific octopus. Seafood Watch lists giant Pacific octopus in either the "Buy" or "Buy, but be aware of concerns" categories, depending on the geographical location of the catch.
Oxygen limitation
Octopuses have been found to migrate for a variety of reasons. Using tag-and-recapture methods, scientists found they move from den to den in response to decreased food availability, changes in water quality, increased predation, or increased population density (that is, decreased available habitat and den space). Because their blue blood is copper-based (hemocyanin) and not an efficient oxygen carrier, octopuses favor and move toward cooler, oxygen-rich water. This dependency limits octopus habitat, typically to temperate waters . If seawater temperatures continue to rise, these organisms may be forced to move to deeper, cooler water. Each fall in Washington's Hood Canal, a habitat for many octopuses, phytoplankton and macroalgae die and create a dead zone. As these micro-organisms decompose, oxygen is used up in the process; concentrations have been measured as low as 2 parts per million (ppm), a state of hypoxia. Normal levels are 7–9 ppm. Fish and octopuses move from the deep towards shallow water for more oxygen. Females do not leave, and die with their eggs at nesting sites. Warming seawater temperatures promote phytoplankton growth, and annual dead zones have been found to be increasing in size. To avoid these dead zones, octopuses must move to shallower waters, which may be warmer and less oxygen-rich, trapping them between two low-oxygen zones.
Reproduction
Increased seawater temperatures also speed up metabolic processes. The warmer the water, the faster octopus eggs develop and hatch. After hatching, the paralarvae swim to the surface to join other plankton, where they are often preyed upon by birds, fish, and other plankton feeders. Quicker hatching times may also disrupt the critical timing of food availability. One study found that higher water temperatures accelerated all aspects of reproduction and even shortened lifespan by up to 20%. Other studies concur that warming climate scenarios should result in higher embryo and paralarvae mortalities.
Ocean acidification
The burning of fossil fuels, deforestation, industrialization, and other land-use changes cause increased carbon dioxide levels in the atmosphere. The ocean absorbs an estimated 30% of emitted anthropogenic CO2. As the ocean absorbs CO2, it becomes more acidic and its pH falls. Ocean acidification lowers the availability of carbonate ions, a building block of calcium carbonate (CaCO3). Calcifying organisms use calcium carbonate to produce shells, skeletons, and tests. The prey base that octopuses prefer (crabs, clams, scallops, mussels, etc.) is negatively impacted by ocean acidification and may decrease in abundance. Shifts in available prey may force a change in octopus diets toward other, non-shelled organisms. Because octopus blood relies on copper-based hemocyanin, a small change in pH can reduce its oxygen-carrying capacity. A pH change from 8.0 to 7.7 or 7.5 can have life-or-death effects on cephalopods.
Toxins
Researchers have found high concentrations of heavy metals and PCBs in tissues and digestive glands, which may have come from these octopuses' preferred prey, the red rock crab (Cancer productus). These crabs bury themselves in contaminated sediments and eat prey that live nearby.
What effects these toxins have on octopuses is unknown, but other exposed animals have been known to show liver damage, changes in immune systems, and death.
Effects on other trophic levels
Potential changes in octopus populations will affect upper and lower trophic levels. Lower trophic levels include all prey items, and these may fluctuate inversely with octopus abundance. Higher trophic levels include all predators of octopuses, and these may fluctuate with octopus abundance, although many prey upon a variety of organisms. Protection of other threatened species may also affect octopus populations (the sea otter, for example), as such species may rely on octopuses for food. Some research suggests that fishing of other species has aided octopus populations by taking out predators and competitors; a toy model of this predator-prey coupling is sketched below.
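The inverse fluctuation between a predator and its prey described above is the behavior of the classic Lotka-Volterra equations. The following minimal sketch uses arbitrary illustrative parameters, not measured rates for E. dofleini or its prey.

    # Minimal Lotka-Volterra predator-prey sketch with Euler integration.
    # All parameter values are arbitrary illustrations, not empirical rates.
    a, b, c, d = 0.6, 0.025, 0.5, 0.005   # prey growth, predation, predator death, conversion
    prey, predators = 40.0, 9.0           # initial abundances (arbitrary units)
    dt, steps = 0.01, 5000

    for step in range(steps):
        dprey = a * prey - b * prey * predators        # prey falls as predators rise
        dpred = d * prey * predators - c * predators   # predators fall as prey thins out
        prey += dprey * dt
        predators += dpred * dt
        if step % 1000 == 0:
            print(f"t={step * dt:5.1f}  prey={prey:6.1f}  predators={predators:5.1f}")
    # The two series oscillate out of phase: prey peaks precede predator peaks.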
Axonopus
Axonopus is a genus of plants in the grass family, known generally as carpet grass. They are native primarily to the tropical and subtropical regions of the Americas, with one species in tropical Africa and another on Easter Island. They are sometimes rhizomatous, and many are tolerant of periodic submersion.
Species
As of , Plants of the World Online accepted 78 species.
Hydrothermal vent
Hydrothermal vents are fissures on the seabed from which geothermally heated water discharges. They are commonly found near volcanically active places, areas where tectonic plates are moving apart at mid-ocean ridges, ocean basins, and hotspots. The dispersal of hydrothermal fluids throughout the global ocean at active vent sites creates hydrothermal plumes. Hydrothermal deposits are rocks and mineral ore deposits formed by the action of hydrothermal vents. Hydrothermal vents exist because the Earth is both geologically active and has large amounts of water on its surface and within its crust. Under the sea, they may form features called black smokers or white smokers, which deliver a wide range of elements to the world's oceans, thus contributing to global marine biogeochemistry. Relative to the majority of the deep sea, the areas around hydrothermal vents are biologically more productive, often hosting complex communities fueled by the chemicals dissolved in the vent fluids. Chemosynthetic bacteria and archaea found around hydrothermal vents form the base of the food chain, supporting diverse organisms including giant tube worms, clams, limpets, and shrimp. Active hydrothermal vents are thought to exist on Jupiter's moon Europa and Saturn's moon Enceladus, and it is speculated that ancient hydrothermal vents once existed on Mars. Hydrothermal vents have been hypothesized to have been a significant factor in starting abiogenesis and in the survival of primitive life. The conditions of these vents have been shown to support the synthesis of molecules important to life. Some evidence suggests that certain vents, such as alkaline hydrothermal vents or those containing supercritical CO2, are more conducive to the formation of these organic molecules. However, the origin of life is a widely debated topic, and there are many conflicting viewpoints.
Physical properties
Hydrothermal vents in the deep ocean typically form along the mid-ocean ridges, such as the East Pacific Rise and the Mid-Atlantic Ridge. These are locations where two tectonic plates are diverging and new crust is being formed. The water that issues from seafloor hydrothermal vents consists mostly of seawater drawn into the hydrothermal system close to the volcanic edifice through faults and porous sediments or volcanic strata, plus some magmatic water released by the upwelling magma. On land, the majority of water circulated within fumarole and geyser systems is meteoric water and ground water that has percolated down into the hydrothermal system from the surface, but it also commonly contains some portion of metamorphic water, magmatic water, and sedimentary formational brine released by the magma. The proportion of each varies from location to location. In contrast to the approximately ambient water temperature at these depths, water emerges from these vents at temperatures ranging from up to as high as . Due to the high hydrostatic pressure at these depths, water may exist in either its liquid form or as a supercritical fluid at such temperatures. The critical point of (pure) water is at a pressure of 218 atmospheres. However, introducing salinity into the fluid raises the critical point to higher temperatures and pressures. The critical point of seawater (3.2 wt. % NaCl) is and 298.5 bars, corresponding to a depth of ~ below sea level. Accordingly, if a hydrothermal fluid with a salinity of 3.2 wt. % NaCl vents above and 298.5 bars, it is supercritical.
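The salinity dependence of the critical point can be made concrete with a small classification sketch. The critical pressures below are those given above; the critical temperatures, which are elided above, are filled in from commonly cited literature values and should be read as assumptions for illustration only.

    # Sketch: is a vent fluid supercritical for its salinity?
    # Critical pressures (bar) follow the text; critical temperatures (deg C)
    # are assumed literature values: ~374 C for pure water, ~407 C for
    # 3.2 wt% NaCl seawater, and ~390 C for 2.24 wt% NaCl (an assumption).
    CRITICAL_POINTS = {
        0.00: (374.0, 220.9),   # pure water (218 atm is about 220.9 bar)
        2.24: (390.0, 280.5),   # low-salinity vent fluid
        3.20: (407.0, 298.5),   # average seawater
    }

    def is_supercritical(temp_c, pressure_bar, salinity_wt):
        """True if (T, P) exceeds the critical point tabulated for this salinity."""
        t_crit, p_crit = CRITICAL_POINTS[salinity_wt]
        return temp_c > t_crit and pressure_bar > p_crit

    # Hypothetical reading just beyond the 3.2 wt% seawater critical point:
    print(is_supercritical(410.0, 305.0, 3.20))   # -> True

A real treatment would interpolate the critical curve continuously across salinity rather than use a three-entry table.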
The salinity of vent fluids has also been shown to vary widely due to phase separation in the crust. The critical point for lower-salinity fluids lies at lower temperature and pressure conditions than that for seawater, but higher than that for pure water. For example, a vent fluid with a 2.24 wt. % NaCl salinity has its critical point at and 280.5 bars. Thus, water emerging from the hottest parts of some hydrothermal vents can be a supercritical fluid, possessing physical properties between those of a gas and those of a liquid. Examples of supercritical venting are found at several sites. Sister Peak (Comfortless Cove Hydrothermal Field, , depth ) vents low-salinity, phase-separated, vapor-type fluids. Sustained venting was not found to be supercritical, but a brief injection of was well above supercritical conditions. A nearby site, Turtle Pits, was found to vent low-salinity fluid at , which is above the critical point of the fluid at that salinity. A vent site in the Cayman Trough named Beebe, the world's deepest known hydrothermal site at ~ below sea level, has shown sustained supercritical venting at and 2.3 wt% NaCl. Although supercritical conditions have been observed at several sites, it is not yet known what significance, if any, supercritical venting has in terms of hydrothermal circulation, mineral deposit formation, geochemical fluxes, or biological activity. The initial stages of a vent chimney begin with the deposition of the mineral anhydrite. Sulfides of copper, iron, and zinc then precipitate in the chimney gaps, making it less porous over the course of time. Vent growths on the order of per day have been recorded. An April 2007 exploration of the deep-sea vents off the coast of Fiji found those vents to be a significant source of dissolved iron (see iron cycle).
Black smokers and white smokers
Some hydrothermal vents form roughly cylindrical chimney structures. These form from minerals that are dissolved in the vent fluid. When the superheated water contacts the near-freezing sea water, the minerals precipitate out to form particles which add to the height of the stacks. Some of these chimney structures can reach heights of . An example of such a towering vent was "Godzilla", a structure on the Pacific Ocean deep seafloor near Oregon that rose to before it fell over in 1996.
Black smokers
A black smoker or deep-sea vent is a type of hydrothermal vent found on the seabed, typically in the bathyal zone (with the largest frequency at depths from ), but also at lesser depths as well as deeper, in the abyssal zone. They appear as black, chimney-like structures that emit a cloud of black material. Black smokers typically emit particles with high levels of sulfur-bearing minerals, or sulfides. Black smokers are formed in fields hundreds of meters wide when superheated water from below Earth's crust comes through the ocean floor (water may attain temperatures above ). This water is rich in dissolved minerals from the crust, most notably sulfides. When it comes in contact with cold ocean water, many minerals precipitate, forming a black, chimney-like structure around each vent. The deposited metal sulfides can become massive sulfide ore deposits in time. Some black smokers on the Azores portion of the Mid-Atlantic Ridge are extremely rich in metal content, such as Rainbow, with 24,000 μM concentrations of iron. Black smokers were first discovered in 1979 on the East Pacific Rise by scientists from Scripps Institution of Oceanography during the RISE Project.
They were observed using the deep submergence vehicle Alvin from the Woods Hole Oceanographic Institution. Black smokers are now known to exist in the Atlantic and Pacific Oceans, at an average depth of . The most northerly black smokers are a cluster of five named Loki's Castle, discovered in 2008 by scientists from the University of Bergen at 73°N, on the Mid-Atlantic Ridge between Greenland and Norway. These black smokers are of interest because they lie in a more stable area of the Earth's crust, where tectonic forces are weaker and, consequently, fields of hydrothermal vents are less common. The world's deepest known black smokers are located in the Cayman Trough, 5,000 m (3.1 miles) below the ocean's surface.
White smokers
White smoker vents emit lighter-hued minerals, such as those containing barium, calcium, and silicon. These vents also tend to have lower-temperature plumes, probably because they are generally distant from their heat source. Black and white smokers may coexist in the same hydrothermal field, but they generally represent proximal (close) and distal (distant) vents relative to the main upflow zone, respectively. However, white smokers correspond mostly to the waning stages of such hydrothermal fields, as magmatic heat sources become progressively more distant from the source (due to magma crystallization) and hydrothermal fluids become dominated by seawater instead of magmatic water. Mineralizing fluids from this type of vent are rich in calcium, and they form dominantly sulfate-rich (i.e., barite and anhydrite) and carbonate deposits.
Hydrothermal plumes
Hydrothermal plumes are fluid entities that manifest where hydrothermal fluids are expelled into the overlying water column at active hydrothermal vent sites. As hydrothermal fluids typically harbor physical (e.g., temperature, density) and chemical (e.g., pH, Eh, major ions) properties distinct from those of seawater, hydrothermal plumes embody physical and chemical gradients that promote several types of chemical reactions, including oxidation-reduction reactions and precipitation reactions. Because of these reactions, hydrothermal plumes are dynamic entities whose physical and chemical properties evolve over both space and time within the ocean. Hydrothermal vent fluids have temperatures (~40 to >400 °C) well above that of ocean-floor seawater (~4 °C), meaning that hydrothermal fluid is less dense than the surrounding seawater and will rise through the water column due to buoyancy, forming a hydrothermal plume; the phase during which hydrothermal plumes rise through the water column is therefore known as the "buoyant plume" phase. During this phase, shear forces between the hydrothermal plume and the surrounding seawater generate turbulent flow that facilitates mixing between the two types of fluids, progressively diluting the hydrothermal plume with seawater. Eventually, the coupled effects of dilution and rising into progressively warmer (less dense) overlying seawater cause the hydrothermal plume to become neutrally buoyant at some height above the seafloor; this stage of hydrothermal plume evolution is therefore known as the "nonbuoyant plume" phase. Once the plume is neutrally buoyant, it can no longer continue to rise through the water column and instead begins to spread laterally throughout the ocean, potentially over several thousands of kilometers. Chemical reactions occur concurrently with the physical evolution of hydrothermal plumes.
While seawater is a relatively oxidizing fluid, hydrothermal vent fluids are typically reducing in nature. Consequently, reduced chemicals such as hydrogen gas, hydrogen sulfide, methane, Fe2+, and Mn2+ that are common in many vent fluids will react upon mixing with seawater. In fluids with high concentrations of H2S, dissolved metal ions such as Fe2+ and Mn2+ readily precipitate as dark-colored metal sulfide minerals (see "Black smokers"). Furthermore, Fe2+ and Mn2+ entrained within the hydrothermal plume will eventually oxidize to form insoluble Fe and Mn (oxy)hydroxide minerals. For this reason, the hydrothermal "near field" has been proposed to refer to the plume region undergoing active oxidation of metals, while the term "far field" refers to the plume region within which complete metal oxidation has occurred.
Identification and dating
Several chemical tracers found in hydrothermal plumes are used to locate deep-sea hydrothermal vents during discovery cruises. Useful tracers of hydrothermal activity should be chemically unreactive, so that changes in tracer concentration subsequent to venting are due solely to dilution. The noble gas helium fits this criterion and is a particularly useful tracer of hydrothermal activity, because hydrothermal venting releases elevated concentrations of helium-3 relative to seawater; 3He is a rare, naturally occurring He isotope derived exclusively from the Earth's interior. Thus, the dispersal of 3He throughout the oceans via hydrothermal plumes creates anomalous seawater He isotope compositions that signify hydrothermal venting. Another noble gas that can serve as a tracer of hydrothermal activity is radon. As all naturally occurring isotopes of Rn are radioactive, Rn concentrations in seawater can also provide information on hydrothermal plume ages when combined with He isotope data. The isotope radon-222 is utilized for this purpose, as 222Rn has the longest half-life of all naturally occurring radon isotopes, roughly 3.82 days. Dissolved gases such as H2, H2S, and CH4, and metals such as Fe and Mn, present at high concentrations in hydrothermal vent fluids relative to seawater, may also be diagnostic of hydrothermal plumes and thus of active venting; however, these components are reactive and are therefore less suitable as tracers of hydrothermal activity.
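The dating idea can be made concrete: once dilution is accounted for (for example, by normalizing to a stable tracer such as 3He), the remaining decline in 222Rn reflects pure radioactive decay and can be inverted for a plume age. The following sketch uses made-up ratio values purely for illustration.

    import math

    RN222_HALF_LIFE_DAYS = 3.82  # approximate half-life of radon-222

    def plume_age_days(ratio_now, ratio_at_vent):
        """Age from the decay of 222Rn relative to a stable tracer (e.g., 3He).

        Assumes the 222Rn/3He ratio at the vent is known and that, after
        correcting for dilution, the ratio changes only by radioactive decay.
        """
        decay_const = math.log(2) / RN222_HALF_LIFE_DAYS
        return -math.log(ratio_now / ratio_at_vent) / decay_const

    # A hypothetical sample holding one quarter of the vent ratio is about
    # two half-lives old:
    print(round(plume_age_days(0.25, 1.0), 1))   # -> ~7.6 days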
Ocean biogeochemistry
Hydrothermal plumes represent an important mechanism through which hydrothermal systems influence marine biogeochemistry. Hydrothermal vents emit a wide variety of trace metals into the ocean, including Fe, Mn, Cr, Cu, Zn, Co, Ni, Mo, Cd, V, and W, many of which have biological functions. Numerous physical and chemical processes control the fate of these metals once they are expelled into the water column. Based on thermodynamic theory, Fe2+ and Mn2+ should oxidize in seawater to form insoluble metal (oxy)hydroxide precipitates; however, complexation with organic compounds and the formation of colloids and nanoparticles can keep these redox-sensitive elements suspended in solution far from the vent site. Fe and Mn often have the highest concentrations among metals in acidic hydrothermal vent fluids, and both have biological significance, particularly Fe, which is often a limiting nutrient in marine environments. Therefore, far-field transport of Fe and Mn via organic complexation may constitute an important mechanism of ocean metal cycling. Additionally, hydrothermal vents deliver significant concentrations of other biologically important trace metals to the ocean, such as Mo, which may have been important in the early chemical evolution of the Earth's oceans and in the origin of life (see "Theory of hydrothermal origin of life"). However, Fe and Mn precipitates can also influence ocean biogeochemistry by removing trace metals from the water column. The charged surfaces of iron (oxy)hydroxide minerals effectively adsorb elements such as phosphorus, vanadium, arsenic, and rare earth metals from seawater; therefore, although hydrothermal plumes may represent a net source of metals such as Fe and Mn to the oceans, they can also scavenge other metals and non-metalliferous nutrients such as P from seawater, representing a net sink of these elements.
Biology of hydrothermal vents
Life has traditionally been seen as driven by energy from the sun, but deep-sea organisms have no access to sunlight, so biological communities around hydrothermal vents must depend on nutrients found in the dusty chemical deposits and hydrothermal fluids in which they live. Previously, benthic oceanographers assumed that vent organisms were dependent on marine snow, as other deep-sea organisms are. This would leave them dependent on plant life and thus on the sun. Some hydrothermal vent organisms do consume this "rain", but with only such a system, life forms would be sparse. Compared to the surrounding sea floor, however, hydrothermal vent zones have a density of organisms 10,000 to 100,000 times greater. These organisms include yeti crabs, which extend their long hairy arms over the vent to collect food. Hydrothermal vents are recognized as a type of chemosynthesis-based ecosystem (CBE), where primary productivity is fuelled by chemical compounds as energy sources instead of light (chemoautotrophy). Hydrothermal vent communities are able to sustain such vast amounts of life because vent organisms depend on chemosynthetic bacteria for food. The water from the hydrothermal vent is rich in dissolved minerals and supports a large population of chemoautotrophic bacteria. These bacteria use sulfur compounds, particularly hydrogen sulfide, a chemical highly toxic to most known organisms, to produce organic material through the process of chemosynthesis. The vents' impact on the living environment goes beyond the organisms that live around them, as the vents act as a significant source of iron in the oceans, providing iron for phytoplankton.
Biological communities
The oldest confirmed record of a "modern" biological community associated with a vent is the Figueroa Sulfide, from the Early Jurassic of California. The ecosystem so formed is reliant upon the continued existence of the hydrothermal vent field as the primary source of energy, which differs from most surface life on Earth, which is based on solar energy. However, although it is often said that these communities exist independently of the sun, some of the organisms are actually dependent upon oxygen produced by photosynthetic organisms, while others are anaerobic. The chemosynthetic bacteria grow into a thick mat which attracts other organisms, such as amphipods and copepods, which graze upon the bacteria directly. Larger organisms, such as snails, shrimp, crabs, tube worms, fish (especially eelpout, cutthroat eel, Ophidiiformes and Symphurus thermophilus), and octopuses (notably Vulcanoctopus hydrothermalis), form a food chain of predator and prey relationships above the primary consumers.
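One commonly cited overall reaction for this sulfide-driven chemosynthesis, a textbook simplification in which CH2O stands for the carbohydrate building blocks the bacteria synthesize, is:

    CO2 + 4 H2S + O2 → CH2O + 4 S + 3 H2O

The bacteria oxidize hydrogen sulfide to obtain the energy for carbon fixation, depositing elemental sulfur as a byproduct.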
The main families of organisms found around seafloor vents are annelids, pogonophorans, gastropods, and crustaceans, with large bivalves, vestimentiferan worms, and "eyeless" shrimp making up the bulk of nonmicrobial organisms. Siboglinid tube worms, which may grow to over tall in the largest species, often form an important part of the community around a hydrothermal vent. They have no mouth or digestive tract, and, like parasitic worms, absorb nutrients produced by the bacteria in their tissues. About 285 billion bacteria are found per ounce of tubeworm tissue. Tubeworms have red plumes which contain hemoglobin. The hemoglobin combines with hydrogen sulfide and transfers it to the bacteria living inside the worm. In return, the bacteria nourish the worm with carbon compounds. Two of the species that inhabit a hydrothermal vent are Tevnia jerichonana and Riftia pachyptila. One discovered community, dubbed "Eel City", consists predominantly of the eel Dysommina rugosa. Though eels are not uncommon, invertebrates typically dominate hydrothermal vents. Eel City is located near Nafanua volcanic cone, American Samoa. By 1993, more than 100 gastropod species were already known to occur at hydrothermal vents. Over 300 new species have been discovered at hydrothermal vents, many of them "sister species" to others found in geographically separated vent areas. It has been proposed that before the North American Plate overrode the mid-ocean ridge, there was a single biogeographic vent region in the eastern Pacific. The subsequent barrier to travel began the evolutionary divergence of species in different locations. The examples of convergent evolution seen between distinct hydrothermal vents are regarded as major support for the theory of natural selection and of evolution as a whole. Although life is very sparse at these depths, black smokers are the centers of entire ecosystems. Sunlight is nonexistent, so many organisms, such as archaea and extremophiles, convert the heat, methane, and sulfur compounds provided by black smokers into energy through a process called chemosynthesis. More complex life forms, such as clams and tubeworms, feed on these organisms. The organisms at the base of the food chain also deposit minerals into the base of the black smoker, thereby completing the life cycle. A species of phototrophic bacterium has been found living near a black smoker off the coast of Mexico at a depth of . No sunlight penetrates that far into the waters. Instead, the bacteria, part of the Chlorobiaceae family, use the faint glow from the black smoker for photosynthesis. This is the first organism discovered in nature to use a light other than sunlight exclusively for photosynthesis. New and unusual species are constantly being discovered in the neighborhood of black smokers. The Pompeii worm Alvinella pompejana, which is capable of withstanding temperatures up to , was found in the 1980s, and the scaly-foot gastropod Chrysomallon squamiferum in 2001, during an expedition to the Indian Ocean's Kairei hydrothermal vent field. The latter uses iron sulfides (pyrite and greigite) for the structure of its dermal sclerites (hardened body parts), instead of calcium carbonate. The extreme pressure of 2,500 m of water (approximately 25 megapascals or 250 atmospheres) is thought to play a role in stabilizing iron sulfide for biological purposes. This armor plating probably serves as a defense against the venomous radula (teeth) of predatory snails in that community.
In March 2017, researchers reported evidence of possibly the oldest forms of life on Earth. Putative fossilized microorganisms were discovered in hydrothermal vent precipitates in the Nuvvuagittuq Belt of Quebec, Canada, that may have lived as early as 4.280 billion years ago, not long after the oceans formed 4.4 billion years ago, and not long after the formation of the Earth 4.54 billion years ago.
Animal-bacterial symbiosis
Hydrothermal vent ecosystems have enormous biomass and productivity, but this rests on the symbiotic relationships that have evolved at vents. Deep-sea hydrothermal vent ecosystems differ from their shallow-water and terrestrial hydrothermal counterparts due to the symbiosis that occurs in the former between macroinvertebrate hosts and chemoautotrophic microbial symbionts. Since sunlight does not reach deep-sea hydrothermal vents, organisms there cannot obtain energy from the sun to perform photosynthesis. Instead, the microbial life found at hydrothermal vents is chemosynthetic; it fixes carbon by using energy from chemicals such as sulfide, as opposed to light energy from the sun. In other words, the symbiont converts inorganic molecules (H2S, CO2, O2) to organic molecules that the host then uses as nutrition. However, sulfide is an extremely toxic substance to most life on Earth. For this reason, scientists were astounded when they first found hydrothermal vents teeming with life in 1977. What was discovered was the ubiquitous symbiosis of chemoautotrophs living in the vent animals' gills (endosymbiosis); this is the reason multicellular life is capable of surviving the toxicity of vent systems. Scientists are therefore now studying how the microbial symbionts aid in sulfide detoxification (thereby allowing the host to survive the otherwise toxic conditions). Work on microbiome function shows that host-associated microbiomes are also important in host development, nutrition, defense against predators, and detoxification. In return, the host provides the symbiont with chemicals required for chemosynthesis, such as carbon, sulfide, and oxygen. In the early stages of studying life at hydrothermal vents, there were differing theories regarding the mechanisms by which multicellular organisms were able to acquire nutrients from these environments, and how they were able to survive in such extreme conditions. In 1977, it was hypothesized that the chemoautotrophic bacteria at hydrothermal vents might be responsible for contributing to the diet of suspension-feeding bivalves. Finally, in 1981, it was understood that the giant tubeworm acquires its nutrition from chemoautotrophic bacterial endosymbionts. As scientists continued to study life at hydrothermal vents, it was understood that symbiotic relationships between chemoautotrophs and macrofaunal invertebrate species were ubiquitous. For instance, in 1983, clam gill tissue was confirmed to contain bacterial endosymbionts; in 1984, vent bathymodiolid mussels and vesicomyid clams were also found to carry endosymbionts. However, the mechanisms by which organisms acquire their symbionts differ, as do the metabolic relationships. For instance, tubeworms have no mouth and no gut, but they do have a "trophosome", which is where they deal with nutrition and where their endosymbionts are found. They also have a bright red plume, which they use to take up compounds such as O2, H2S, and CO2, which feed the endosymbionts in their trophosome.
Remarkably, the tubeworm's hemoglobin (which, incidentally, is the reason for the bright red color of the plume) is capable of carrying oxygen without interference or inhibition from sulfide, despite the fact that oxygen and sulfide are typically very reactive. In 2005, it was discovered that this is possible due to zinc ions that bind the hydrogen sulfide in the tubeworm's hemoglobin, thereby preventing the sulfide from reacting with the oxygen. This binding also reduces the exposure of the tubeworm's tissue to the sulfide and provides the bacteria with the sulfide to perform chemoautotrophy. It has also been discovered that tubeworms can metabolize CO2 in two different ways and can alternate between the two as environmental conditions change. In 1988, research confirmed thiotrophic (sulfide-oxidizing) bacteria in Alviniconcha hessleri, a large vent mollusk. In order to circumvent the toxicity of sulfide, mussels first convert it to thiosulfate before carrying it over to the symbionts. In the case of motile organisms such as alvinocarid shrimp, they must track oxic (oxygen-rich) and anoxic (oxygen-poor) environments as these fluctuate. Organisms living at the edge of hydrothermal vent fields, such as pectinid scallops, also carry endosymbionts in their gills, and as a result their bacterial density is low relative to that of organisms living nearer to the vent. However, the scallop's dependence on the microbial endosymbiont for obtaining its nutrition is thereby also lessened. Furthermore, not all host animals have endosymbionts; some have episymbionts, symbionts living on the animal as opposed to inside it. Shrimp found at vents in the Mid-Atlantic Ridge were once thought of as an exception to the necessity of symbiosis for macroinvertebrate survival at vents. That changed in 1988, when they were discovered to carry episymbionts. Since then, other organisms at vents have been found to carry episymbionts as well, such as Lepetodrilus fucensis. Furthermore, while some symbionts oxidize sulfur compounds, others, known as "methanotrophs", oxidize carbon compounds, namely methane. Bathymodiolid mussels are an example of a host that contains methanotrophic endosymbionts; however, the latter mostly occur at cold seeps as opposed to hydrothermal vents. While chemosynthesis occurring in the deep ocean allows organisms to live without sunlight in the immediate sense, they technically still rely on the sun for survival, since oxygen in the ocean is a byproduct of photosynthesis. However, if the sun were to suddenly disappear and photosynthesis ceased to occur on our planet, life at the deep-sea hydrothermal vents could continue for millennia (until the oxygen was depleted).
Theory of hydrothermal origin of life
The chemical and thermal dynamics in hydrothermal vents make such environments highly suitable thermodynamically for chemical evolution processes to take place. Therefore, thermal energy flux is a permanent agent and is hypothesized to have contributed to the evolution of the planet, including prebiotic chemistry. Günter Wächtershäuser proposed the iron-sulfur world theory and suggested that life might have originated at hydrothermal vents. Wächtershäuser proposed that an early form of metabolism predated genetics. By metabolism he meant a cycle of chemical reactions that release energy in a form that can be harnessed by other processes.
It has been proposed that amino acid synthesis could have occurred deep in the Earth's crust and that these amino acids were subsequently shot up along with hydrothermal fluids into cooler waters, where lower temperatures and the presence of clay minerals would have fostered the formation of peptides and protocells. This is an attractive hypothesis because of the abundance of CH4 (methane) and NH3 (ammonia) present in hydrothermal vent regions, a condition that was not provided by the Earth's primitive atmosphere. A major limitation of this hypothesis is the lack of stability of organic molecules at high temperatures, but some have suggested that life would have originated outside the zones of highest temperature. There are numerous species of extremophiles and other organisms currently living immediately around deep-sea vents, suggesting that this is indeed a possible scenario. Experimental research and computer modeling indicate that the surfaces of mineral particles inside hydrothermal vents have catalytic properties similar to those of enzymes and are able to create simple organic molecules, such as methanol (CH3OH) and formic acid (HCO2H), out of the dissolved CO2 in the water. Additionally, the discovery of supercritical CO2 at some sites has been used to further support the theory of a hydrothermal origin of life, given that it can increase organic reaction rates. Its high solvation power and diffusion rate allow it to promote amino and formic acid synthesis, as well as the synthesis of other organic compounds, polymers, and the amino acids alanine, arginine, aspartic acid, and glycine. In situ experiments have revealed the convergence of high N2 content and supercritical CO2 at some sites, as well as evidence for complex organic material (amino acids) within supercritical CO2 bubbles. Proponents of this theory for the origin of life also propose the presence of supercritical CO2 as a solution to the "water paradox" that pervades theories on the origin of life in aquatic settings. This paradox encompasses the fact that water is both required for life and will, in abundance, hydrolyze organic molecules and prevent the dehydration synthesis reactions necessary for chemical and biological evolution. Supercritical CO2, being hydrophobic, acts as a solvent that facilitates an environment conducive to dehydration synthesis. Therefore, it has been hypothesized that the presence of supercritical CO2 in Hadean hydrothermal vents played an important role in the origin of life. There is some evidence linking the origin of life to alkaline hydrothermal vents in particular. The pH conditions of these vents may have made them more suitable for emerging life. One current theory is that the naturally occurring proton gradients at these deep-sea vents made up for the lack of phospholipid bilayer membranes and proton pumps in early organisms, allowing ion gradients to form despite the absence of the cellular machinery and components present in modern cells. There is some debate around this topic. It has been argued that a role for the natural pH gradients of these vents in the origin of life is actually implausible. The counterargument relies, among other points, on what its author describes as the unlikelihood that machinery producing energy from the pH gradients found in hydrothermal vents could form without, or before, the existence of genetic information. This counterpoint has been answered by Nick Lane, one of the researchers whose work it focuses on, who argues that it largely misinterprets both his work and the work of others.
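For a sense of scale, the free energy available per mole of protons crossing a pH difference can be estimated from standard chemiosmotic thermodynamics; the temperature and the pH difference used below are illustrative assumptions, not measurements from any particular vent.

    import math

    R = 8.314   # gas constant, J/(mol*K)
    T = 298.0   # temperature in kelvin (illustrative choice)

    def gradient_energy_kj_per_mol(delta_ph):
        """Free energy per mole of protons from a pH difference alone,
        ignoring any electrical (membrane potential) contribution."""
        return math.log(10) * R * T * delta_ph / 1000.0

    # An assumed ~3 pH units between alkaline vent fluid and a more acidic
    # early ocean yields roughly 17 kJ per mole of protons:
    print(round(gradient_energy_kj_per_mol(3.0), 1))   # -> ~17.1

Each pH unit contributes about 5.7 kJ/mol at this temperature, which is the sense in which a natural gradient could, in principle, do the work that proton pumps perform in modern cells.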
He argues that the counterpoint largely misinterprets both his work and the work of others. Another reason that the view of deep-sea hydrothermal vents as an ideal environment for the origin of life remains controversial is the absence of wet-dry cycles and exposure to UV light, which promote the formation of membranous vesicles and the synthesis of many biomolecules. The ionic concentrations of hydrothermal vents also differ from the intracellular fluid of most life. It has instead been suggested that terrestrial freshwater environments are more likely to be an ideal environment for the formation of early cells. Meanwhile, proponents of the deep-sea hydrothermal vent hypothesis suggest that thermophoresis in mineral cavities provides an alternative compartment for the polymerization of biopolymers. How thermophoresis within mineral cavities could promote coding and metabolism is unknown. Nick Lane suggests that nucleotides could polymerize at high concentrations within self-replicating protocells, where "molecular crowding and phosphorylation in such confined, high-energy protocells could potentially promote the polymerization of nucleotides to form RNA". Acetyl phosphate could possibly promote polymerization at mineral surfaces or at low water activity. A computational simulation of nucleotide catalysis shows that "the energy currency pathway is favored, as energy is limiting; favoring this pathway feeds forward into a greater nucleotide synthesis". Fast nucleotide catalysis of carbon fixation lowers nucleotide concentration, since rapid protocell growth and division halves the nucleotide concentration, whereas weak nucleotide catalysis of fixation contributes little to protocell growth and division. In biochemistry, reactions with CO2 and H2 produce precursors to biomolecules that are also produced by the acetyl-CoA pathway and Krebs cycle, which would support an origin of life at deep-sea alkaline vents. Acetyl phosphate produced from these reactions is capable of phosphorylating ADP to ATP, with maximum synthesis occurring at high water activity and low concentrations of ions; the Hadean ocean likely had lower concentrations of ions than modern oceans. The concentrations of Mg2+ and Ca2+ at alkaline hydrothermal systems are lower than those in the ocean. The high concentration of potassium within most life forms could be readily explained if protocells evolved sodium-hydrogen antiporters to pump out Na+, as prebiotic lipid membranes are less permeable to Na+ than to H+. If cells originated in these environments, they would have been autotrophs with a Wood-Ljungdahl pathway and an incomplete reverse Krebs cycle. Mathematical modelling suggests that the organic synthesis of carboxylic acids into lipids, nucleotides, amino acids, and sugars, as well as the corresponding polymerization reactions, is favorable at alkaline hydrothermal vents. The Deep Hot Biosphere At the beginning of his 1992 paper The Deep Hot Biosphere, Thomas Gold referred to ocean vents in support of his theory that the lower levels of the earth are rich in living biological material that finds its way to the surface. He further expanded his ideas in the book The Deep Hot Biosphere.
An article on abiogenic hydrocarbon production in the February 2008 issue of the journal Science used data from experiments at the Lost City hydrothermal field to report how the abiotic synthesis of low-molecular-mass hydrocarbons from mantle-derived carbon dioxide may occur in the presence of ultramafic rocks, water, and moderate amounts of heat. Discovery and exploration In 1949, a deep-water survey reported anomalously hot brines in the central portion of the Red Sea. Later work in the 1960s confirmed the presence of hot, saline brines and associated metalliferous muds. The hot solutions were emanating from an active subseafloor rift. The highly saline character of the waters was not hospitable to living organisms. The brines and associated muds are currently under investigation as a source of mineable precious and base metals. In June 1976, scientists from the Scripps Institution of Oceanography obtained the first evidence for submarine hydrothermal vents along the Galápagos Rift, a spur of the East Pacific Rise, on the Pleiades II expedition, using the Deep-Tow seafloor imaging system. In 1977, the first scientific papers on hydrothermal vents were published by scientists from the Scripps Institution of Oceanography; research scientist Peter Lonsdale published photographs taken from deep-towed cameras, and PhD student Kathleen Crane published maps and temperature anomaly data. Transponders were deployed at the site, which was nicknamed "Clambake", to enable an expedition to return the following year for direct observations with the DSV Alvin. Chemosynthetic ecosystems surrounding the Galápagos Rift submarine hydrothermal vents were first directly observed in 1977, when a group of marine geologists funded by the National Science Foundation returned to the Clambake sites. The principal investigator for the submersible study was Jack Corliss of Oregon State University. Corliss and Tjeerd van Andel from Stanford University observed and sampled the vents and their ecosystem on February 17, 1977, while diving in the DSV Alvin, a research submersible operated by the Woods Hole Oceanographic Institution (WHOI). Other scientists on the research cruise included Richard (Dick) Von Herzen and Robert Ballard of WHOI, Jack Dymond and Louis Gordon of Oregon State University, John Edmond and Tanya Atwater of the Massachusetts Institute of Technology, Dave Williams of the U.S. Geological Survey, and Kathleen Crane of Scripps Institution of Oceanography. This team published their observations of the vents, organisms, and the composition of the vent fluids in the journal Science. In 1979, a team of biologists led by J. Frederick Grassle, at the time at WHOI, returned to the same location to investigate the biological communities discovered two years earlier. High-temperature hydrothermal vents, the "black smokers", were discovered in spring 1979 by a team from the Scripps Institution of Oceanography using the submersible Alvin. The RISE expedition explored the East Pacific Rise at 21° N with the goals of testing geophysical mapping of the sea floor with the Alvin and finding another hydrothermal field beyond the Galápagos Rift vents. The expedition was led by Fred Spiess and Ken Macdonald and included participants from the U.S., Mexico and France. The dive region was selected based on the discovery of sea floor mounds of sulfide minerals by the French CYAMEX expedition in 1978.
Prior to dive operations, expedition member Robert Ballard located near-bottom water temperature anomalies using a deeply towed instrument package. The first dive was targeted at one of those anomalies. On Easter Sunday, April 15, 1979, during a dive of Alvin to 2,600 meters, Roger Larson and Bruce Luyendyk found a hydrothermal vent field with a biological community similar to the Galápagos vents. On a subsequent dive on April 21, William Normark and Thierry Juteau discovered the high-temperature vents emitting jets of black mineral particles from chimneys: the black smokers. Following this, Macdonald and Jim Aiken rigged a temperature probe to Alvin to measure the water temperature at the black smoker vents. These measurements yielded the highest temperatures then recorded at deep-sea hydrothermal vents (380±30 °C). Analysis of the black smoker material and of the chimneys that fed them revealed that iron sulfide precipitates are the common minerals in the "smoke" and in the walls of the chimneys. In 2005, Neptune Resources NL, a mineral exploration company, applied for and was granted 35,000 km2 of exploration rights over the Kermadec Arc in New Zealand's Exclusive Economic Zone to explore for seafloor massive sulfide deposits, a potential new source of lead-zinc-copper sulfides formed from modern hydrothermal vent fields. The discovery of a vent in the Pacific Ocean offshore of Costa Rica, named the Medusa hydrothermal vent field (after the serpent-haired Medusa of Greek mythology), was announced in April 2007. The Ashadze hydrothermal field (13°N on the Mid-Atlantic Ridge, elevation -4200 m) was the deepest known high-temperature hydrothermal field until 2010, when a hydrothermal plume emanating from the Beebe site (elevation -5000 m) was detected by a group of scientists from NASA Jet Propulsion Laboratory and Woods Hole Oceanographic Institution. This site is located on the 110 km long, ultraslow-spreading Mid-Cayman Rise within the Cayman Trough. In early 2013, the deepest known hydrothermal vents were discovered in the Caribbean Sea at a depth of almost 5,000 m. Oceanographers are studying the volcanoes and hydrothermal vents of the Juan de Fuca mid-ocean ridge, where tectonic plates are moving away from each other. Hydrothermal vents and other geothermal manifestations are currently being explored in the Bahía de Concepción, Baja California Sur, Mexico. Distribution Hydrothermal vents are distributed along the Earth's plate boundaries, although they may also be found at intra-plate locations such as hotspot volcanoes. As of 2009 there were approximately 500 known active submarine hydrothermal vent fields, with about half visually observed at the seafloor and the other half suspected from water column indicators and/or seafloor deposits. Rogers et al. (2012) recognized at least 11 biogeographic provinces of hydrothermal vent systems: the Mid-Atlantic Ridge province, East Scotia Ridge province, northern East Pacific Rise province, central East Pacific Rise province, southern East Pacific Rise province, a province south of the Easter Microplate, the Indian Ocean province, and four provinces in the western Pacific, with more likely to be recognized. Exploitation Hydrothermal vents, in some instances, have led to the formation of exploitable mineral resources via the deposition of seafloor massive sulfide deposits. The Mount Isa orebody, located in Queensland, Australia, is an excellent example. Many hydrothermal vents are rich in cobalt, gold, copper, and rare earth metals essential for electronic components.
Hydrothermal venting on the Archean seafloor is considered to have formed Algoma-type banded iron formations, which have been a source of iron ore. Recently, mineral exploration companies, driven by elevated prices in the base metals sector during the mid-2000s, have turned their attention to the extraction of mineral resources from hydrothermal fields on the seafloor. Significant cost reductions are, in theory, possible. In countries such as Japan, where mineral resources are primarily derived from international imports, there is a particular push for the extraction of seafloor mineral resources. The world's first "large-scale" mining of hydrothermal vent mineral deposits was carried out by Japan Oil, Gas and Metals National Corporation (JOGMEC) in August–September 2017. JOGMEC carried out this operation using the research vessel Hakurei. This mining was carried out at the 'Izena hole/cauldron' vent field within the hydrothermally active back-arc basin known as the Okinawa Trough, which contains 15 confirmed vent fields according to the InterRidge Vents Database. Two companies are currently in the late stages of preparing to mine seafloor massive sulfides (SMS). Nautilus Minerals is in the advanced stages of commencing extraction from its Solwara deposit, in the Bismarck Archipelago, and Neptune Minerals is at an earlier stage with its Rumble II West deposit, located on the Kermadec Arc, near the Kermadec Islands. Both companies propose using modified existing technology. Nautilus Minerals, in partnership with Placer Dome (now part of Barrick Gold), succeeded in 2006 in returning over 10 metric tons of mined SMS to the surface using modified drum cutters mounted on an ROV, a world first. Neptune Minerals in 2007 succeeded in recovering SMS sediment samples using a modified oil-industry suction pump mounted on an ROV, also a world first. Seafloor mining has potential environmental impacts, including dust plumes from mining machinery affecting filter-feeding organisms, collapsing or reopening vents, methane clathrate release, or even sub-oceanic landslides. There are also potential environmental effects from the tools needed for mining these hydrothermal vent ecosystems, including noise pollution and anthropogenic light. Hydrothermal vent system mining would require the use of both submerged mining tools on the seafloor, including remotely operated underwater vehicles (ROVs), and surface support vessels on the ocean surface. Inevitably, the operation of these machines will create some level of noise, which presents a problem for hydrothermal vent organisms because, living up to 12,000 feet below the surface of the ocean, they experience very little sound. As a result, these organisms have evolved highly sensitive hearing organs, so a sudden increase in noise, such as that created by mining machinery, has the potential to damage these auditory organs and harm the vent organisms. Many studies have shown that a large percentage of benthic organisms communicate using very low-frequency sounds; increasing ambient noise levels on the seafloor could therefore mask communication between the organisms and alter behavioral patterns. Just as deep-sea SMS mining tools create noise pollution, they also create anthropogenic light sources on the seafloor (from mining tools) and the ocean surface (from surface support vessels).
Organisms at these hydrothermal vent systems are in the aphotic zone of the ocean and have adapted to very low light conditions. Studies on deep-sea shrimp have shown the potential for floodlights used on the sea floor in studying the vent systems to cause permanent retinal damage, warranting further research into the potential risk to other vent organisms. On top of the risk presented to deep-sea organisms, the surface support vessels use nocturnal anthropogenic lighting. Research has shown that this type of lighting on the ocean surface can disorient seabirds and cause fallout, where they fly toward the anthropogenic light and become exhausted or collide with man-made objects, resulting in injury or death. Both aquatic and land organisms must therefore be considered when evaluating the environmental effects of hydrothermal vent mining. Three mining waste processes, known as side cast sediment release, the dewatering process, and sediment shift or disturbance, would be expected with deep-sea mining and could result in the accumulation of a sediment plume or cloud, which can have substantial environmental implications. Side cast sediment release is a process that would occur at the seafloor; it involves the movement of material at the seafloor by the submerged ROVs and would most likely contribute to the formation of sediment plumes at the seafloor. The idea of side cast release is that the ROVs would discard economically worthless material to the side of the mining site before transporting the sulfide material to the supporting vessel at the surface. The goal of this process is to reduce the amount of material being transferred to the surface and to minimize land-based waste. The dewatering process is a mining waste process that would most likely contribute to the formation of sediment plumes from the surface. This method of mine waste disposal releases water from the ship that may have been taken up during the extraction and transport of the material from the seafloor to the surface. The third contribution to the formation of the sediment plume or cloud would be sediment disturbance and release. This mining waste contribution is mainly associated with the mining activity on the seafloor: the movement of the ROVs and the destructive disturbance of the seafloor as part of the mining process itself. The two main environmental concerns arising from these waste mining processes that contribute to the formation of the sediment plume are the release of heavy metals and the increased amount of sediment released. The release of heavy metals is mainly associated with the dewatering process that would take place on board the ship at the surface of the water. The main problem associated with dewatering is that it is not just seawater re-entering the water column: heavy metals such as copper and cobalt, sourced from the material extracted on the seafloor, are also mixed in with the water that is released. The first environmental concern associated with the release of heavy metals is that it has the potential to change ocean chemistry within that localized water column area. The second concern is that some of the heavy metals that could be released can have some level of toxicity not only to organisms inhabiting that area but also to organisms passing through the mining site area.
The concerns surrounding increased sediment release are mainly related to the other two mining waste processes, side cast sediment release and seafloor sediment disturbance. The main environmental concern is the smothering of organisms below as a result of redistributing large amounts of sediment to other areas on the seafloor, which could potentially threaten the population of organisms inhabiting the area. Redistribution of large quantities of sediment can also affect the feeding and gas exchange processes of organisms, posing a serious threat to the population. Finally, these processes can also increase the sedimentation rate on the seafloor, with a predicted minimum of 500 m per every 1–10 km. A large amount of work is currently being undertaken by both of the above-mentioned companies to ensure that the potential environmental impacts of seafloor mining are well understood and control measures are implemented before exploitation commences. However, this process has arguably been hindered by the disproportionate distribution of research effort among vent ecosystems; the best-studied and best-understood hydrothermal vent ecosystems are not representative of those targeted for mining. Attempts have been made in the past to exploit minerals from the seafloor. The 1960s and 1970s saw a great deal of activity (and expenditure) in the recovery of manganese nodules from the abyssal plains, with varying degrees of success. This does demonstrate, however, that the recovery of minerals from the seafloor is possible and has been possible for some time. Mining of manganese nodules served as a cover story for the elaborate attempt in 1974 by the CIA to raise the sunken Soviet submarine K-129 using the Glomar Explorer, a ship purpose-built for the task by Howard Hughes. The operation was known as Project Azorian, and the cover story of seafloor mining of manganese nodules may have served as the impetus to propel other companies to make the attempt. Conservation The conservation of hydrothermal vents has been the subject of sometimes heated discussion in the oceanographic community for the last 20 years. It has been pointed out that those causing the most damage to these fairly rare habitats may be the scientists themselves. There have been attempts to forge agreements over the behaviour of scientists investigating vent sites, but, although there is an agreed code of practice, there is as yet no formal international and legally binding agreement. Conservation of hydrothermal vent ecosystems after mining of an active system would depend on the recolonization of chemosynthetic bacteria, and therefore on the continued flow of hydrothermal vent fluid, as it is the main hydrothermal energy source. It is very difficult to gauge the effects of mining on the hydrothermal vent fluid because no large-scale studies have been done. However, there have been studies on the recolonization of these vent ecosystems after volcanic destruction. From these we can develop insight into the potential effects of mining destruction: it took 3–5 years for bacteria to recolonize the area, and around 10 years for megafauna to return. It was also found that there was a shift in the composition of species in the ecosystem compared to before the destruction, and that immigrant species were present. Further research into the effects of sustained seafloor SMS mining on species recolonization is still needed.
Geochronological dating Common methods for determining the ages of hydrothermal vents are to date sulfide minerals (e.g., pyrite) and sulfate minerals (e.g., barite). Common dating methods include radiometric dating and electron spin resonance dating. Different dating methods have their own limitations, assumptions and challenges. General challenges include the high purity of extracted minerals required for dating, the age range of each dating method, heating above closure temperatures erasing the ages of older minerals, and multiple episodes of mineral formation resulting in a mixture of ages. In environments with multiple phases of mineral formation, electron spin resonance dating generally gives the average age of the bulk mineral, while radiometric dates are biased toward the ages of younger phases because of the decay of parent nuclei. These effects explain why different methods can give different ages for the same sample and why the same hydrothermal chimney can yield samples with different ages.
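For orientation, the radiometric methods mentioned above all rest on the same standard age equation; a minimal statement (with symbols defined here for illustration) is:

```latex
% Radiometric age from daughter-nuclide accumulation:
%   t      = age of the mineral since closure
%   lambda = decay constant of the parent, related to the half-life t_{1/2}
%   D, P   = present-day abundances of daughter and parent nuclides
\[
t = \frac{1}{\lambda}\,\ln\!\left(1 + \frac{D}{P}\right),
\qquad
\lambda = \frac{\ln 2}{t_{1/2}}
\]
```

The closure-temperature caveat above corresponds to the assumption, implicit in this equation, that the mineral neither gains nor loses parent or daughter nuclides after it forms; reheating or a second episode of mineral growth violates that assumption and mixes the apparent ages.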
Physical sciences
Geologic features
Earth science
354618
https://en.wikipedia.org/wiki/Suspension%20%28chemistry%29
Suspension (chemistry)
In chemistry, a suspension is a heterogeneous mixture of a fluid that contains solid particles sufficiently large for sedimentation. The particles may be visible to the naked eye, usually must be larger than one micrometer, and will eventually settle, although the mixture is only classified as a suspension when and while the particles have not settled out. Properties A suspension is a heterogeneous mixture in which the solid particles do not dissolve, but get suspended throughout the bulk of the solvent, left floating around freely in the medium. The internal phase (solid) is dispersed throughout the external phase (fluid) through mechanical agitation, with the use of certain excipients or suspending agents. An example of a suspension would be sand in water. The suspended particles are visible under a microscope and will settle over time if left undisturbed. This distinguishes a suspension from a colloid, in which the colloid particles are smaller and do not settle. Colloids and suspensions are different from solutions, in which the dissolved substance (solute) does not exist as a solid, and solvent and solute are homogeneously mixed. A suspension of liquid droplets or fine solid particles in a gas is called an aerosol. In the atmosphere, the suspended particles are called particulates and consist of fine dust and soot particles, sea salt, biogenic and volcanogenic sulfates, nitrates, and cloud droplets. Suspensions are classified on the basis of the dispersed phase and the dispersion medium, where the former is essentially solid while the latter may either be a solid, a liquid, or a gas. In modern chemical process industries, high-shear mixing technology has been used to create many novel suspensions. Suspensions are unstable from a thermodynamic point of view but can be kinetically stable over long periods of time, which in turn can determine a suspension's shelf life. This time span needs to be measured in order to provide accurate information to the consumer and ensure the best product quality. "Dispersion stability refers to the ability of a dispersion to resist change in its properties over time." Techniques for monitoring physical stability Multiple light scattering coupled with vertical scanning is the most widely used technique to monitor the dispersion state of a product, hence identifying and quantifying destabilization phenomena. It works on concentrated dispersions without dilution. When light is sent through the sample, it is backscattered by the particles. The backscattering intensity is directly proportional to the size and volume fraction of the dispersed phase. Therefore, local changes in concentration (sedimentation) and global changes in size (flocculation, aggregation) are detected and monitored. Of primary importance in the analysis of stability in particle suspensions is the value of the zeta potential exhibited by suspended solids. This parameter indicates the magnitude of interparticle electrostatic repulsion and is commonly analyzed to determine how the use of adsorbates and pH modification affect particle repulsion and suspension stabilization or destabilization. Accelerating methods for shelf-life prediction The kinetic process of destabilisation can be rather long (up to several months or even years for some products), and the formulator is often required to use further accelerating methods in order to reach a reasonable development time for new product design.
Thermal methods are the most commonly used; they consist of increasing the temperature to accelerate destabilisation (while remaining below the critical temperatures of phase inversion or chemical degradation). Temperature affects not only the viscosity, but also the interfacial tension in the case of non-ionic surfactants, or more generally the interaction forces within the system. Storing a dispersion at high temperature enables the simulation of real-life conditions for a product (e.g. a tube of sunscreen cream in a car in the summer), but also accelerates destabilisation processes up to 200 times. Mechanical methods, including vibration, centrifugation and agitation, are sometimes used. They subject the product to different forces that push the particles together and drain the film between them. However, some emulsions would never coalesce in normal gravity, while they do under artificial gravity. Moreover, segregation of different populations of particles has been highlighted when using centrifugation and vibration. Examples Common examples of suspensions include: Mud or muddy water: where soil, clay, or silt particles are suspended in water. Flour suspended in water. Kimchi suspended in vinegar. Chalk suspended in water. Sand suspended in water.
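The settling behaviour that separates suspensions from colloids can be estimated from Stokes' law. The sketch below is a minimal illustration, assuming dilute, non-interacting spheres in the laminar (Stokes) regime; the material values and function name are illustrative, not taken from the text above:

```python
def stokes_settling_velocity(radius_m, rho_particle, rho_fluid, viscosity_pa_s, g=9.81):
    """Terminal settling velocity (m/s) of a small sphere in laminar flow:
    v = 2 * (rho_p - rho_f) * g * r^2 / (9 * mu)."""
    return 2.0 * (rho_particle - rho_fluid) * g * radius_m**2 / (9.0 * viscosity_pa_s)

# Example: a silica particle of 1 micrometre radius in water at ~20 degrees C
v = stokes_settling_velocity(1e-6, 2650.0, 998.0, 1.0e-3)
print(f"{v:.2e} m/s")  # ~3.6e-6 m/s, i.e. roughly 30 cm of settling per day
```

Because the velocity scales with the square of the radius, a 10 nm colloidal particle would settle about ten thousand times more slowly, which is why Brownian motion keeps colloids dispersed while micrometre-scale suspensions sediment on practical timescales.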
Physical sciences
Chemical mixtures: General
null
354666
https://en.wikipedia.org/wiki/Shanghai%20Pudong%20International%20Airport
Shanghai Pudong International Airport
Shanghai Pudong International Airport is the busier of the two international airports serving Shanghai, the largest city by population in China, and is a major aviation hub of East Asia and a major world airport. Pudong Airport serves both international flights and a smaller number of domestic flights, while the city's other major airport, Shanghai–Hongqiao, mainly serves domestic and regional flights in East Asia. Located east of the city center, Pudong Airport occupies a site adjacent to the coastline in eastern Pudong. The airport is operated by Shanghai Airport Authority. The airport is the main hub for China Eastern Airlines and Shanghai Airlines, and a major international hub for Air China, as well as a secondary hub for China Southern Airlines. It is also the hub for the privately owned Juneyao Air and Spring Airlines, and an Asia-Pacific cargo hub for FedEx, UPS and DHL. The DHL hub, opened in July 2012, is reportedly the largest express hub in Asia. Pudong Airport has two main passenger terminals, flanked on both sides by four operational parallel runways. A third passenger terminal was started in 2021, in addition to a satellite terminal and two additional runways, which will raise its annual capacity from 60 million passengers to 80 million, along with the ability to handle six million tons of freight. Pudong Airport is a fast-growing hub for both passenger and cargo traffic. With 3,440,084 metric tons handled in 2024, the airport is the world's third-busiest airport by cargo traffic. Pudong Airport also served a total of 54,476,397 passengers in 2023, making it the second-busiest airport in China after Guangzhou Baiyun Airport, the sixth-busiest in Asia, and the twenty-first-busiest in the world. It is also the busiest international gateway of mainland China, with 35.25 million international passengers; about half of its total passenger traffic is international. Pudong Airport is connected to Shanghai Hongqiao Airport by Shanghai Metro Line 2, and to the city by the Shanghai Maglev Train via Pudong International Airport Station. There are also airport buses connecting it with the rest of the city. History Early development Prior to the establishment of Pudong International Airport, Shanghai Hongqiao International Airport was the primary airport of Shanghai. During the 1990s, the expansion of Hongqiao Airport to meet growing demand became impossible as the surrounding urban area was developing significantly, and an alternative to take over all international flights had to be sought. After deliberation, the municipal government decided to adopt the suggestion of Professor Chen Jiyu of East China Normal University, who wrote a letter to the Mayor of Shanghai, Xu Kuangdi, suggesting that the new airport be constructed on the tidal flats of the south bank of the Yangtze River estuary, on the coast of the Pudong development zone to the east of Shanghai. Construction of the first phase of the new Shanghai Pudong International Airport began in October 1997, took two years at a cost of RMB 12 billion (US$1.67 billion), and was completed on 16 September 1999. It covers an area of and is from downtown Shanghai. The first phase of the airport has one 4E-category runway along with two parallel taxiways, an apron, seventy-six aircraft positions and a cargo warehouse. Shanghai Pudong International Airport was officially opened to the public on 1 October 1999.
A second runway opened on 17 March 2005, and construction of phase two (including a second terminal, a third runway and a cargo terminal) began in December 2005; it entered operation on 26 March 2008, in time for the Beijing 2008 Summer Olympics. In November 2011, Pudong Airport received approval from the national government for a new round of expansion including two runways. The fourth runway, along with an auxiliary taxiway and traffic control facilities, was projected to cost 2.58 billion yuan (US$403 million). The fifth runway, along with a new traffic control tower, was to cost 4.65 billion yuan (US$726.6 million). Construction was completed in 2015 and doubled the capacity of the airport. Ongoing expansion Pudong International Airport officially started the third phase of its expansion with construction of a new south satellite terminal on 29 December 2015. The satellite terminal is the world's largest single satellite terminal, with a total construction area of 622,000 square metres, larger than the Pudong International Airport T2 terminal building. The satellite terminal is composed of two halls, S1 and S2, forming an H-shaped structure. It has an annual design capacity of 38 million passengers. The total cost of the project is estimated to be about 20.6 billion yuan. Halls S1 and S2 have 83 gates. A high-capacity people mover connecting T1 to S1 and T2 to S2 was also constructed. With the completion of the satellite terminal in 2019, Pudong International Airport has an annual passenger capacity of 80 million passengers, ranking among the top ten airports in the world by capacity. As of October 2019, the satellite terminal is in operation and connected by people movers to the main Terminals 1 and 2. By 6 January 2021, work on Pudong Airport T3 had begun on the south side of the airfield. The new terminal is anticipated to serve 50 million annual passengers when it opens, according to city officials, while the entire airport is expected to handle 130 million passengers by 2030. Several public transport lines will be extended to T3. Facilities The airport has 162 boarding bridges (31 at T1, 41 at T2, and 90 at the satellite terminal) along with 189 remote gates. Five runways run parallel to the terminals (four operational): one runway rated 4E (capable of accommodating aircraft up to the Boeing 747-400) and four runways rated 4F (capable of accommodating aircraft up to the Airbus A380, Boeing 747-8, and Antonov An-225). Of the operational runways, Rwy 35L/17R and Rwy 34R/16L are mostly used for landing, while Rwy 35R/17L and Rwy 34L/16R are mostly used for takeoff. Runway 15/33 is not in operation. Terminal 1 Terminal 1 officially opened to the public on 1 October 1999, along with a 4,000-metre runway and a cargo hub. It was built to handle the demand for traffic and to relieve Shanghai Hongqiao International Airport's traffic. The exterior of Terminal 1 is shaped like a seagull, and the terminal has 28 gates, 13 of which are double-decker gates. The capacity of Terminal 1 is 20 million passengers. It currently has 204 check-in counters and thirteen luggage conveyor belts, and covers an area of 280,000 square metres. The gates for Terminal 1 are 1–12 and 14–32 (linked with jetways), while the remote gates are 200–203 and 251–258 (domestic), and 208–212 and 213–216 (international).
Terminal 2 Terminal 2 officially opened to the public on 26 March 2008, along with the third runway, making the entire airport capable of handling 60 million passengers and 4.2 million tons of cargo annually. Terminal 2 is shaped like Terminal 1, although it more closely resembles a wave, and it is slightly larger than Terminal 1, with more floor area. Terminal 2 is primarily used by Air China and other Star Alliance members. The gates for Terminal 2 are 50–98; gates 58–90 are used both as C gates (for domestic flights) and as D gates (for international flights). Between gates 65 and 79, only odd-numbered gates exist (65, 67, 69, 71, 73, 75, 77, 79). Gates 50–57 and 92–98 are used for domestic flights only. The remote gates for Terminal 2 are C219–C224 for domestic flights and D228–D232 for international flights. Satellite concourses Construction of an additional satellite concourse facility that could accommodate further gates and terminal space started on 29 December 2015, and the facility officially opened in September 2019. It is the largest stand-alone satellite airport terminal building in the world, at 622,000 square meters. It supports 38 million passengers annually through 90 departure gates across the two concourses, S1 and S2. Gates for domestic flights are labelled H, while gates for international flights and flights to Hong Kong, Macau, and Taiwan are labelled G in both satellite concourses. Automatic people mover S1 and S2 are connected to each other and, since their opening in September 2019, to the existing T1 and T2 terminals by the underground Shanghai Pudong Airport APM, operated by Shanghai Keolis under a 20-year contract and comprising an East Line and a West Line. The East Line connects Terminal 2 and Satellite 2, and the West Line connects Terminal 1 and Satellite 1. A380/B747-8 stands Gates that can accommodate the A380/B747-8 are 24 (T1); 71 and 75 (T2); 119 and 121 (S1); 504–507 (remote stands near S1, on taxiway L02, between taxiway P3 and south of P2); and 168, 170 and 173 (S2). A-CDM implementation The airport has been using the Airport Collaborative Decision Making (A-CDM) system developed by the aviation data service company VariFlight since January 2017. The system aims to improve the on-time performance and safety of the airport's operations. By June 2017, Shanghai Pudong airport recorded a 62.7% punctuality rate, a 15% increase compared to the same period the previous year. Airlines and destinations Passenger Pudong Airport mainly serves international flights, along with flights to Baotou, Changchun, Dalian, Zhangjiajie and some smaller cities, while most domestic flights operate at Hongqiao Airport. However, some domestic routes currently served only at Hongqiao Airport may move to Pudong Airport, operating only there instead of at both. Cargo Statistics Ground transportation Highway North: S1 Yingbin Expressway and Huaxia Elevated Road South: Shanghai–Jiaxing–Huzhou Expressway and G1501 Shanghai Ring Expressway Rail transit Maglev train Starting service on 29 January 2004, as the first commercial high-speed maglev railway in the world, the Shanghai Maglev Train links Pudong International Airport with Longyang Road Metro Station, where transfer to Line 2, Line 7, Line 16 and Line 18 is possible. The ride from Longyang Road Metro Station to Pudong International Airport typically takes around eight minutes, with the maximum speed reaching 431 km/h.
Trains operate every 15 minutes; passengers can therefore expect to arrive in less than 25 minutes, waiting time included. All cars are equipped with racks and space designated for luggage. Shanghai Metro Line 2 Shanghai Metro Line 2 also provides service between Pudong International Airport and Longyang Road, Lujiazui, People's Square, and Hongqiao International Airport, Shanghai's primary domestic airport, as well as Shanghai Hongqiao Railway Station. Line 2 is part of the Shanghai Metro system; unlike with the Maglev, free in-system transfers to other lines are therefore possible. Prices and speeds are substantially lower than the Maglev's. A ride to People's Square, the city center, typically takes just over one hour, while a trip to Hongqiao International Airport takes about 1.5 hours. Shanghai Suburban Railway Airport Link Line The Airport Link Line of the Shanghai Suburban Railway commenced operation on 27 December 2024. The line connects Hongqiao International Airport, Hongqiao Railway Station and Shanghai East Railway Station via Shanghai International Resort and Pudong International Airport. The initial operational section is between Hongqiao Airport and Pudong Airport; the maximum travel time between the two airports is about 40 minutes. Initially, same-ticket transfer to the Shanghai Metro system is possible only at Zhongchun Road station, with Line 9. As this line belongs to the suburban railway system, its fares differ from those of the metro lines. High-speed rail The airport will be directly linked to Shanghai East railway station in 2027. Airport buses Eight airport bus lines serve the airport, providing rapid links to various destinations. Airport Bus Route 1: To Shanghai Hongqiao railway station via Shanghai Hongqiao International Airport Airport Bus Route 2: To Jing'an Temple (City Terminal Hub) Airport Bus Route 4: To Hongkou Stadium Hub (Huayuan Road), via Deping Road at Pudong Avenue, Wujiaochang and Dabaishu Airport Bus Route 5: To Shanghai railway station, via Longyang Road Metro Station, Century Avenue at South Pudong Road (Lujiazui) and East Yan'an Road at Middle Zhejiang Road (People's Square) Airport Bus Route 7: To Shanghai South railway station, via West Huaxia Road at Shangnan Road and East Huaxia Road at Chuansha Road (Chuansha) Airport Bus Route 8: To Nanhui Coach and Bus Station Airport Bus Route 9: To Xinzhuang Metro Station Airport Bus Ring Route 1: To Hangchengyuan (Shiwan), via stops in the Airport Workplace Accidents and incidents On 28 November 2009, an Avient Aviation McDonnell Douglas MD-11F cargo plane registered in Zimbabwe (registration: Z-BAV), departing for Bishkek, Kyrgyzstan, suffered a tailstrike during takeoff, crashed into a warehouse near the runway, caught fire and broke into several pieces. Of the seven people on board, three died and four were injured. On 22 July 2020, an Ethiopian Airlines Cargo Boeing 777 freighter caught fire while parked at Pudong International Airport as it was being prepared for a trip to São Paulo, Brazil and Santiago, Chile via Addis Ababa, shortly after landing from Brussels, Belgium. No injuries were reported.
Technology
Asia
null
354860
https://en.wikipedia.org/wiki/Flying%20squirrel
Flying squirrel
Flying squirrels (scientifically known as Pteromyini or Petauristini) are a tribe of 50 species of squirrels in the family Sciuridae. Despite their name, they are not in fact capable of full flight in the same way as birds or bats, but they are able to glide from one tree to another with the aid of a patagium, a furred skin membrane that stretches from wrist to ankle. Their long tails also provide stability as they glide. Anatomically they are very similar to other squirrels but have a number of adaptations to suit their lifestyle; their limb bones are longer and their hand bones, foot bones, and distal vertebrae are shorter. Flying squirrels are able to steer and exert control over their glide path with their limbs and tail. Molecular studies have shown that flying squirrels are monophyletic (having a common ancestor with no non-flying descendants) and originated some 18–20 million years ago. The genus Paracitellus is the earliest lineage related to the flying squirrels, dating back to the late Oligocene. Most are nocturnal and omnivorous, eating fruit, seeds, buds, flowers, insects, gastropods, spiders, fungi, birds' eggs, tree sap and young birds. The young are born in a nest and are at first naked and helpless. They are cared for by their mother, and by five weeks they are able to practice gliding skills, so that by ten weeks they are ready to leave the nest. Some captive-bred southern flying squirrels have become domesticated as small household pets, a type of "pocket pet". Description Flying squirrels are not capable of flight like birds or bats; instead, they glide between trees. They are capable of obtaining lift within the course of these flights, and notably long glides have been recorded. The direction and speed of the animal in midair are varied by changing the positions of its limbs, largely controlled by small cartilaginous wrist bones. A cartilage projection from the wrist is held upwards during a glide. This specialized cartilage is only present in flying squirrels and not in other gliding mammals. Possible origins for the styliform cartilage have been explored, and the data suggest that it is most likely homologous to carpal structures found in other squirrels. This cartilage, along with the manus, forms a wing tip used during gliding. After being extended, the wing tip may adjust to various angles, controlling aerodynamic movements. The wrist also changes the tautness of the patagium, a furry parachute-like membrane that stretches from wrist to ankle. The fluffy tail stabilizes the animal in flight, acting as an adjunct airfoil and working as an air brake before landing on a tree trunk. Similar gliding animals The colugos, Petauridae, and Anomaluridae are gliding mammals similar to flying squirrels through convergent evolution, although they are not closely related. Like the flying squirrel, they are scansorial mammals that use their patagium to glide, unpowered, to move quickly through their environment. Evolutionary history Prior to the 21st century, the evolutionary history of the flying squirrel was frequently debated. This debate was clarified greatly as a result of two molecular studies. These studies found support that flying squirrels originated 18–20 million years ago, are monophyletic, and have a sister relationship with tree squirrels. Due to their close ancestry, the morphological differences between flying squirrels and tree squirrels reveal insight into the formation of the gliding mechanism.
Compared to tree squirrels of similar size, the northern and southern flying squirrels show lengthening of the bones of the lumbar vertebrae and forearm, whereas the bones of the feet, hands, and distal vertebrae are reduced in length. Such differences in body proportions reveal the flying squirrels' adaptation to minimize wing loading and to increase maneuverability while gliding. The consequence of these differences is that, unlike regular squirrels, flying squirrels are not well adapted for quadrupedal locomotion and therefore must rely more heavily on their gliding abilities. Several hypotheses have attempted to explain the evolution of gliding in flying squirrels. One possible explanation is related to energy efficiency and foraging. Gliding is an energetically efficient way to progress from one tree to another while foraging, as opposed to climbing down trees and maneuvering on the forest floor or executing dangerous leaps in the air. By gliding at high speeds, flying squirrels can rummage through a greater area of forest more quickly than tree squirrels. Flying squirrels can glide long distances by increasing their aerial speed and increasing their lift. Other hypotheses state that the mechanism evolved to avoid nearby predators and prevent injuries. If a dangerous situation arises on a specific tree, flying squirrels can glide to another, thereby typically escaping the previous danger. Furthermore, take-off and landing procedures during leaps, implemented for safety purposes, may explain the gliding mechanism. While leaps at high speeds are important to escape danger, the high-force impact of landing on a new tree could be detrimental to a squirrel's health. Yet the gliding mechanism of flying squirrels involves structures and techniques during flight that allow for great stability and control. If a leap is miscalculated, a flying squirrel may easily steer back onto the original course by using its gliding ability. A flying squirrel also creates a large glide angle when approaching its target tree, decreasing its velocity due to an increase in air resistance and allowing all four limbs to absorb the impact of the target. Fluorescence In 2019 it was observed, by chance, that a flying squirrel fluoresced pink under UV light. Subsequent research by biologists at Northland College in northern Wisconsin found that this is true for all three species of North American flying squirrels. At this time it is unknown what purpose this serves. Non-flying squirrels do not fluoresce under UV light. Taxonomy Recent species New World flying squirrels belong to the genus Glaucomys (Greek for gleaming mouse). Old World flying squirrels belong to the genus Pteromys (Greek for winged mouse). The three species of the genus Glaucomys (Glaucomys sabrinus, Glaucomys volans and Glaucomys oregonensis) are native to North America and Central America; many other taxa are found throughout Asia, with the range of the Siberian flying squirrel (Pteromys volans) reaching into parts of northeastern Europe (Russia, Finland and Estonia). Thorington and Hoffman (2005) recognize 15 genera of flying squirrels in two subtribes. Tribe Pteromyini – flying squirrels Subtribe Glaucomyina Genus Eoglaucomys Kashmir flying squirrel, Eoglaucomys fimbriatus Afghan flying squirrel, E. f.
baberi Genus Glaucomys – New World flying squirrels (American flying squirrels), North America Southern flying squirrel, Glaucomys volans Northern flying squirrel, Glaucomys sabrinus Humboldt's flying squirrel, Glaucomys oregonensis Genus Hylopetes, Southeast Asia Particolored flying squirrel, Hylopetes alboniger Bartel's flying squirrel, Hylopetes bartelsi Gray-cheeked flying squirrel, Hylopetes lepidus Palawan flying squirrel, Hylopetes nigripes Indochinese flying squirrel, Hylopetes phayrei Jentink's flying squirrel, Hylopetes platyurus Sipora flying squirrel, Hylopetes sipora Red-cheeked flying squirrel, Hylopetes spadiceus Sumatran flying squirrel, Hylopetes winstoni Genus Iomys, Malaysia and Indonesia Javanese flying squirrel (Horsfield's flying squirrel), Iomys horsfieldi Mentawi flying squirrel, Iomys sipora Genus Petaurillus – pygmy flying squirrels, Borneo and the Malay Peninsula Lesser pygmy flying squirrel, Petaurillus emiliae Hose's pygmy flying squirrel, Petaurillus hosei Selangor pygmy flying squirrel, Petaurillus kinlochii Genus Petinomys, Southeast Asia Basilan flying squirrel, Petinomys crinitus Travancore flying squirrel, Petinomys fuscocapillus Whiskered flying squirrel, Petinomys genibarbis Hagen's flying squirrel, Petinomys hageni Siberut flying squirrel, Petinomys lugens Mindanao flying squirrel, Petinomys mindanensis Arrow flying squirrel, Petinomys sagitta Temminck's flying squirrel, Petinomys setosus Vordermann's flying squirrel, Petinomys vordermanni Genus Priapomys, western Yunnan in China and adjoining regions of Myanmar Himalayan large-eared flying squirrel, P. leonardi Subtribe Pteromyina Genus Aeretes, northeastern China Groove-toothed flying squirrel (North Chinese flying squirrel), Aeretes melanopterus Genus Aeromys – large black flying squirrels, Thailand to Borneo Black flying squirrel, Aeromys tephromelas Thomas's flying squirrel, Aeromys thomasi Genus Belomys, Southeast Asia Hairy-footed flying squirrel, Belomys pearsonii Genus Biswamoyopterus, northeastern India to southern China and southeast Asia Namdapha flying squirrel, Biswamoyopterus biswasi Laotian giant flying squirrel, Biswamoyopterus laoensis Mount Gaoligong flying squirrel, Biswamoyopterus gaoligongensis Genus Eupetaurus, Pakistan to China; rare Western woolly flying squirrel, Eupetaurus cinereus Yunnan woolly flying squirrel, Eupetaurus nivamons Tibetan woolly flying squirrel, Eupetaurus tibetensis Genus Petaurista – giant flying squirrels, Southeast and East Asia Red and white giant flying squirrel, Petaurista alborufus Spotted giant flying squirrel, Petaurista elegans Hodgson's giant flying squirrel, Petaurista magnificus Bhutan giant flying squirrel, Petaurista nobilis Indian giant flying squirrel, Petaurista philippensis Chinese giant flying squirrel, Petaurista xanthotis Japanese giant flying squirrel, Petaurista leucogenys Red giant flying squirrel, Petaurista petaurista Mechuka giant flying squirrel, Petaurista mechukaensis Mishmi Hills giant flying squirrel, Petaurista mishmiensis Mebo giant flying squirrel, Petaurista siangensis Genus Pteromys – Old World flying squirrels, Finland to Japan Siberian flying squirrel, Pteromys volans Japanese dwarf flying squirrel, Pteromys momonga Genus Pteromyscus, southern Thailand to Borneo Smoky flying squirrel, Pteromyscus pulverulentus Genus Trogopterus, China Complex-toothed flying squirrel, Trogopterus xanthipes The Mechuka, Mishmi Hills, and Mebo giant flying squirrels were discovered in the northeastern Indian state of Arunachal Pradesh
in the late 2000s. Their holotypes are preserved in the collection of the Zoological Survey of India, Kolkata, India. Fossil species Flying squirrels have a well-documented fossil record from the Oligocene onwards. Some fossil genera go as far back as the Eocene, and given that the flying squirrels are thought to have diverged later, these are likely misidentifications. Miopetaurista Miopetaurista crusafonti Miopetaurista dehmi Miopetaurista diescalidus Miopetaurista gaillardi Miopetaurista gibberosa Miopetaurista lappi Miopetaurista neogrivensis Miopetaurista thaleri Miopetaurista tobieni Pliopetaurista Pliopetaurista kollmanni Daxner-Höck, 2004 Neopetes Neopetes hoeckarum (De Bruijn, 1998) Neopetes macedoniensis (Bouwens and De Bruijn, 1986) Neopetes debruijni (Reumer & Hoek Ostende, 2003) Life cycles The life expectancy of flying squirrels in the wild is about six years, though flying squirrels can live up to fifteen years in zoos. The mortality rate in young flying squirrels is high because of predators and diseases. Predators of flying squirrels include tree snakes, raccoons, owls, martens, fishers, coyotes, bobcats, and feral cats. In the Pacific Northwest of North America, the northern spotted owl (Strix occidentalis) is a common predator of flying squirrels. Flying squirrels are usually nocturnal, since they are not adept at escaping birds of prey that hunt during the daytime. They eat according to their environment; they are omnivorous and will eat whatever food they can find. The North American southern flying squirrel eats seeds, insects, gastropods (slugs and snails), spiders, shrubs, flowers, fungi, and tree sap. Reproduction The mating season for flying squirrels is during February and March. When the infants are born, the female squirrels live with them in maternal nest sites. The mothers nurture and protect them until they leave the nest. The males do not participate in nurturing their offspring. At birth, flying squirrels are mostly hairless, apart from their whiskers, and most of their senses are not yet developed. Their internal organs are visible through the skin, and their sex can be determined. By week five, they are almost fully developed. At that point, they can respond to their environment and start to develop minds of their own. Through the following weeks of their lives, they practice leaping and gliding. After two and a half months, their gliding skills are perfected, they are ready to leave the nest, and they are capable of independent survival. Diet Flying squirrels can easily forage for food in the night, given their highly developed sense of smell. They harvest fruits, nuts, fungi, and birds' eggs. Many gliders have specialized diets, and there is evidence that gliders may be able to take advantage of scattered, protein-deficient food. Additionally, gliding is a fast form of locomotion and, by reducing travel time between patches, increases the amount of time available for foraging.
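The glide geometry implicit in these descriptions can be summarised with a simple relation; the numbers below are indicative, not measurements from the studies mentioned above:

```latex
% A glide from height h at a steady glide angle theta (measured from the
% horizontal) covers a horizontal range R:
\[
R = \frac{h}{\tan\theta}
\]
% Example: h = 20 m and theta = 27 degrees give R = 20 / tan(27 deg) ~ 39 m,
% a glide ratio (R/h) of about 2.
```

A shallower glide angle, aided by the low wing loading described earlier, therefore translates directly into a longer glide from the same launch height.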
Biology and health sciences
Rodents
Animals
355155
https://en.wikipedia.org/wiki/Polar%20orbit
Polar orbit
A polar orbit is one in which a satellite passes above or nearly above both poles of the body being orbited (usually a planet such as the Earth, but possibly another body such as the Moon or Sun) on each revolution. It therefore has an inclination close to 90 degrees to the body's equator. Launching satellites into polar orbit requires a larger launch vehicle to launch a given payload to a given altitude than for a near-equatorial orbit at the same altitude, because it cannot take advantage of the Earth's rotational velocity. Depending on the location of the launch site and the inclination of the polar orbit, the launch vehicle may lose up to 460 m/s of delta-v, approximately 5% of the delta-v required to attain low Earth orbit. Usage Polar orbits are used for Earth-mapping and reconnaissance satellites, as well as for some weather satellites. The Iridium satellite constellation uses a polar orbit to provide telecommunications services. Near-polar orbiting satellites commonly choose a Sun-synchronous orbit, in which each successive orbital pass occurs at the same local time of day. For some applications, such as remote sensing, it is important that changes over time are not aliased by changes in local time. Keeping the same local time on a given pass requires that the orbital period be kept as short as possible, which requires a low orbit. However, very low orbits rapidly decay due to drag from the atmosphere. Commonly used altitudes are between 700 and 800 km, producing an orbital period of about 100 minutes. The half-orbit on the Sun side then takes only 50 minutes, during which the local time of day does not vary greatly. To retain a Sun-synchronous orbit as the Earth revolves around the Sun during the year, the orbit must precess about the Earth's axis at the same rate, about one degree per day, which is not possible if the satellite passes directly over the pole. Because of Earth's equatorial bulge, an orbit inclined at a slight angle away from the pole is subject to a torque, which causes precession. An angle of about 8° from the pole produces the desired precession in a 100-minute orbit.
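The two quantitative claims above can be checked directly. The sketch below is a minimal calculation, assuming a circular orbit and keeping only Earth's J2 oblateness term in the nodal precession rate; the constants and function name are illustrative:

```python
import math

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RE = 6.378137e6       # Earth's equatorial radius, m
J2 = 1.08263e-3       # Earth's second zonal harmonic (oblateness)

# Surface rotation speed forfeited by a polar launch vs. a due-east one:
v_eq = 2 * math.pi * RE / 86164.1   # one sidereal day -> ~465 m/s at the equator

def sun_sync_inclination(alt_km):
    """Inclination (deg) at which J2 nodal precession matches 360 deg/year."""
    a = RE + alt_km * 1e3                    # semi-major axis of a circular orbit
    n = math.sqrt(MU / a**3)                 # mean motion, rad/s
    target = 2 * math.pi / (365.25 * 86400)  # required precession rate, rad/s
    cos_i = -target / (1.5 * J2 * n * (RE / a) ** 2)
    return math.degrees(math.acos(cos_i))

print(f"equatorial rotation speed: {v_eq:.0f} m/s")
for alt in (700, 800):
    print(f"{alt} km -> inclination {sun_sync_inclination(alt):.1f} deg")
# ~98.2-98.6 deg, i.e. roughly 8 deg past the pole, matching the text
```

The negative cosine shows why the orbit must be slightly retrograde (inclination past 90°): the J2 torque then pushes the orbital plane eastward at the same rate the Earth moves around the Sun.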
Physical sciences
Orbital mechanics
Astronomy
355377
https://en.wikipedia.org/wiki/Photonic%20crystal
Photonic crystal
A photonic crystal is an optical nanostructure in which the refractive index changes periodically. This affects the propagation of light in the same way that the structure of natural crystals gives rise to X-ray diffraction and that the atomic lattices (crystal structure) of semiconductors affect their conductivity of electrons. Photonic crystals occur in nature in the form of structural coloration and animal reflectors, and, as artificially produced, promise to be useful in a range of applications. Photonic crystals can be fabricated in one, two, or three dimensions. One-dimensional photonic crystals can be made of thin film layers deposited on each other. Two-dimensional ones can be made by photolithography, or by drilling holes in a suitable substrate. Fabrication methods for three-dimensional ones include drilling under different angles, stacking multiple 2-D layers on top of each other, direct laser writing, or, for example, instigating self-assembly of spheres in a matrix and dissolving the spheres. Photonic crystals can, in principle, find uses wherever light must be manipulated. For example, dielectric mirrors are one-dimensional photonic crystals which can produce ultra-high-reflectivity mirrors at a specified wavelength. Two-dimensional photonic crystals called photonic-crystal fibers are used for fiber-optic communication, among other applications. Three-dimensional crystals may one day be used in optical computers, and could lead to more efficient photovoltaic cells. Although the energy of light (and all electromagnetic radiation) is quantized in units called photons, the analysis of photonic crystals requires only classical physics. "Photonic" in the name is a reference to photonics, a modern designation for the study of light (optics) and optical engineering. Indeed, the first research into what we now call photonic crystals may have been as early as 1887, when the English physicist Lord Rayleigh experimented with periodic multi-layer dielectric stacks, showing they can produce a photonic band gap in one dimension. Research interest grew with work in 1987 by Eli Yablonovitch and Sajeev John on periodic optical structures with more than one dimension—now called photonic crystals. Introduction Photonic crystals are composed of periodic dielectric, metallo-dielectric, or even superconductor microstructures or nanostructures that affect electromagnetic wave propagation in the same way that the periodic potential in a semiconductor crystal affects the propagation of electrons, determining allowed and forbidden electronic energy bands. Photonic crystals contain regularly repeating regions of high and low refractive index. Light waves may propagate through this structure, or propagation may be disallowed, depending on their wavelength. Wavelengths that may propagate in a given direction are called modes, and the ranges of wavelengths which propagate are called bands. Disallowed bands of wavelengths are called photonic band gaps. This gives rise to distinct optical phenomena, such as inhibition of spontaneous emission, highly reflective omnidirectional mirrors, and low-loss waveguiding. The band gap of photonic crystals can be understood as arising from the destructive interference of multiple reflections of light propagating in the crystal at each interface between layers of high- and low-refractive-index regions, akin to the band gaps of electrons in solids. There are two strategies for opening up a complete photonic band gap.
The first is to increase the refractive index contrast, so that the band gap in each direction becomes wider; the second is to make the Brillouin zone more nearly spherical. The former is limited by the available technologies and materials, and the latter is restricted by the crystallographic restriction theorem. For this reason, the photonic crystals with a complete band gap demonstrated to date have a face-centered cubic lattice, which has the most nearly spherical Brillouin zone, and are made of high-refractive-index semiconductor materials. Another approach is to exploit quasicrystalline structures, which are not subject to the crystallographic restriction theorem. A complete photonic band gap was reported for low-index polymer quasicrystalline samples manufactured by 3D printing.

The periodicity of the photonic crystal structure must be around or greater than half the wavelength (in the medium) of the light waves in order for interference effects to be exhibited. Visible light ranges in wavelength from about 400 nm (violet) to about 700 nm (red), and the wavelength inside a material is that value divided by the average index of refraction. For example, for 530 nm light in a material with an average refractive index of 1.5, half the wavelength in the medium is (530 nm / 1.5) / 2 ≈ 180 nm. The repeating regions of high and low dielectric constant must, therefore, be fabricated at this scale. In one dimension, this is routinely accomplished using the techniques of thin-film deposition.

History

Photonic crystals have been studied in one form or another since 1887, but no one used the term photonic crystal until over 100 years later—after Eli Yablonovitch and Sajeev John published two milestone papers on photonic crystals in 1987. The early history is well documented in the form of a story, as it was identified as one of the landmark developments in physics by the American Physical Society.

Before 1987, one-dimensional photonic crystals in the form of periodic multi-layer dielectric stacks (such as the Bragg mirror) were studied extensively. Lord Rayleigh started their study in 1887, by showing that such systems have a one-dimensional photonic band gap, a spectral range of large reflectivity, known as a stop-band. Today, such structures are used in a diverse range of applications—from reflective coatings to enhancing LED efficiency to highly reflective mirrors in certain laser cavities (see, for example, VCSEL). The pass-bands and stop-bands in photonic crystals were first reduced to practice by Melvin M. Weiner, who called those crystals "discrete phase-ordered media". Weiner achieved those results by extending Darwin's dynamical theory for X-ray Bragg diffraction to arbitrary wavelengths, angles of incidence, and cases where the incident wavefront at a lattice plane is scattered appreciably in the forward-scattered direction. A detailed theoretical study of one-dimensional optical structures was performed by Vladimir P. Bykov, who was the first to investigate the effect of a photonic band gap on the spontaneous emission from atoms and molecules embedded within the photonic structure. Bykov also speculated as to what could happen if two- or three-dimensional periodic optical structures were used. The concept of three-dimensional photonic crystals was then discussed by Ohtaka in 1979, who also developed a formalism for the calculation of the photonic band structure. However, these ideas did not take off until after the publication of two milestone papers in 1987 by Yablonovitch and John. Both these papers concerned high-dimensional periodic optical structures, i.e., photonic crystals.
Yablonovitch's main goal was to engineer the photonic density of states in order to control the spontaneous emission of materials embedded in the photonic crystal. John's idea was to use photonic crystals to affect the localisation and control of light. After 1987, the number of research papers concerning photonic crystals began to grow exponentially. However, due to the difficulty of fabricating these structures at optical scales (see Fabrication challenges), early studies were either theoretical or in the microwave regime, where photonic crystals can be built on the more accessible centimetre scale. (This fact is due to a property of the electromagnetic fields known as scale invariance. In essence, electromagnetic fields, as the solutions to Maxwell's equations, have no natural length scale, so solutions for centimetre-scale structures at microwave frequencies are the same as for nanometre-scale structures at optical frequencies.)

By 1991, Yablonovitch had demonstrated the first three-dimensional photonic band gap in the microwave regime. The structure that Yablonovitch was able to produce involved drilling an array of holes in a transparent material, where the holes of each layer form an inverse diamond structure – today it is known as Yablonovite. In 1996, Thomas Krauss demonstrated a two-dimensional photonic crystal at optical wavelengths. This opened the way to fabricating photonic crystals in semiconductor materials by borrowing methods from the semiconductor industry.

Pavel Cheben demonstrated a new type of photonic crystal waveguide – the subwavelength grating (SWG) waveguide. The SWG waveguide operates in the subwavelength region, away from the band gap. It allows the waveguide properties to be controlled directly by nanoscale engineering of the resulting metamaterial while mitigating wave-interference effects. This provided "a missing degree of freedom in photonics" and resolved an important limitation of silicon photonics: its restricted set of available materials, insufficient to achieve complex optical on-chip functions.

Today, such techniques use photonic crystal slabs, which are two-dimensional photonic crystals "etched" into slabs of semiconductor. Total internal reflection confines light to the slab and allows photonic crystal effects, such as engineering the photonic dispersion in the slab. Researchers around the world are looking for ways to use photonic crystal slabs in integrated computer chips, to improve optical processing of communications—both on-chip and between chips.

The autocloning fabrication technique, proposed for infrared and visible-range photonic crystals by Sato et al. in 2002, uses electron-beam lithography and dry etching: lithographically formed layers of periodic grooves are stacked by regulated sputter deposition and etching, resulting in "stationary corrugations" and periodicity. Titanium dioxide/silica and tantalum pentoxide/silica devices were produced, exploiting their dispersion characteristics and suitability for sputter deposition.

Such techniques have yet to mature into commercial applications, but two-dimensional photonic crystals are commercially used in photonic crystal fibres (otherwise known as holey fibres, because of the air holes that run through them). Photonic crystal fibres were first developed by Philip Russell in 1998, and can be designed to possess enhanced properties over (normal) optical fibres.

Study has proceeded more slowly in three-dimensional than in two-dimensional photonic crystals, because of the more difficult fabrication.
Three-dimensional photonic crystal fabrication had no inheritable semiconductor-industry techniques to draw on. Attempts have been made, however, to adapt some of the same techniques, and quite advanced examples have been demonstrated, for example in the construction of "woodpile" structures built on a planar layer-by-layer basis. Another strand of research has tried to construct three-dimensional photonic structures by self-assembly—essentially letting a mixture of dielectric nano-spheres settle from solution into three-dimensionally periodic structures that have photonic band gaps. Vasily Astratov's group from the Ioffe Institute realized in 1995 that natural and synthetic opals are photonic crystals with an incomplete band gap. The first demonstration of an "inverse opal" structure with a complete photonic band gap came in 2000, from researchers at the University of Toronto and the Institute of Materials Science of Madrid (ICMM-CSIC), Spain.

The ever-expanding field of natural photonics, bioinspiration and biomimetics—the study of natural structures to better understand and use them in design—is also helping researchers in photonic crystals. For example, in 2006 a naturally occurring photonic crystal was discovered in the scales of a Brazilian beetle. Analogously, in 2012 a diamond crystal structure was found in a weevil and a gyroid-type architecture in a butterfly. More recently, gyroid photonic crystals have been found in the feather barbs of blue-winged leafbirds and are responsible for the bird's shimmery blue coloration. Some publications suggest the feasibility of a complete photonic band gap in the visible range in photonic crystals with optically saturated media, which can be implemented by using laser light as an external optical pump.

Construction strategies

The fabrication method depends on the number of dimensions in which the photonic band gap must exist.

One-dimensional photonic crystals

To produce a one-dimensional photonic crystal, thin film layers of different dielectric constant may be periodically deposited on a surface, which leads to a band gap in a particular propagation direction (such as normal to the surface). A Bragg grating is an example of this type of photonic crystal. One-dimensional photonic crystals can include layers of non-linear optical materials in which the non-linear behaviour is accentuated due to field enhancement at wavelengths near a so-called degenerate band edge. This field enhancement (in terms of intensity) can reach N², where N is the total number of layers. However, by using layers which include an optically anisotropic material, it has been shown that the field enhancement can reach N⁴, which, in conjunction with non-linear optics, has potential applications such as the development of an all-optical switch.

A one-dimensional photonic crystal can be implemented using repeated alternating layers of a metamaterial and vacuum. If the metamaterial is such that the relative permittivity and permeability follow the same wavelength dependence, then the photonic crystal behaves identically for TE and TM modes, that is, for both s and p polarizations of light incident at an angle.

Recently, researchers fabricated a graphene-based Bragg grating (one-dimensional photonic crystal) and demonstrated that it supports excitation of surface electromagnetic waves in the periodic structure, using a 633 nm He–Ne laser as the light source. In addition, a novel type of one-dimensional graphene-dielectric photonic crystal has also been proposed.
This structure can act as a far-IR filter and can support low-loss surface plasmons for waveguide and sensing applications. 1D photonic crystals doped with bio-active metals (e.g., silver) have also been proposed as sensing devices for bacterial contaminants. Similar planar 1D photonic crystals made of polymers have been used to detect volatile organic compound vapors in the atmosphere. In addition to solid-phase photonic crystals, some liquid crystals with defined ordering can demonstrate photonic color. For example, studies have shown that several liquid crystals with short- or long-range one-dimensional positional ordering can form photonic structures.

Two-dimensional photonic crystals

In two dimensions, holes may be drilled in a substrate that is transparent to the wavelength of radiation that the band gap is designed to block. Triangular and square lattices of holes have been successfully employed. The holey fiber or photonic crystal fiber can be made by taking cylindrical rods of glass in a hexagonal lattice and then heating and stretching them; the triangle-like air gaps between the glass rods become the holes that confine the modes.

Three-dimensional photonic crystals

There are several structure types that have been constructed:

Spheres in a diamond lattice
Yablonovite
The woodpile structure – "rods" are repeatedly etched with beam lithography, filled in, and covered with a layer of new material. As the process repeats, the channels etched in each layer are perpendicular to the layer below, and parallel to and out of phase with the channels two layers below. The process repeats until the structure is of the desired height. The fill-in material is then dissolved using an agent that dissolves the fill-in material but not the deposition material. It is generally hard to introduce defects into this structure.
Inverse opals or inverse colloidal crystals – spheres (such as polystyrene or silicon dioxide) are allowed to deposit into a cubic close-packed lattice suspended in a solvent. Then a hardener is introduced that makes a transparent solid out of the volume occupied by the solvent. The spheres are then dissolved with an acid such as hydrochloric acid. The colloids can be either spherical or nonspherical. One such fabricated structure, acting as a beam splitter, contains in excess of 750,000 polymer nanorods; light focused on this beam splitter penetrates or is reflected, depending on polarization.

Photonic crystal cavities

Beyond the band gap, photonic crystals can exhibit another effect if the symmetry is partially removed through the creation of a nanosize cavity. This defect allows light to be guided or trapped, with the same function as a nanophotonic resonator, and it is characterized by the strong dielectric modulation in the photonic crystal. For the waveguide, the propagation of light depends on the in-plane control provided by the photonic band gap and on the long-range confinement of light induced by the dielectric mismatch. For the light trap, the light is strongly confined in the cavity, resulting in further interactions with the materials. First, if a pulse of light is put inside the cavity, it is delayed by nano- or picoseconds, in proportion to the quality factor of the cavity. Finally, if an emitter is placed inside the cavity, the emitted light can also be enhanced significantly, and the resonant coupling can even undergo Rabi oscillation. This is related to cavity quantum electrodynamics, and the interactions are defined by the weak and strong coupling regimes of the emitter and the cavity.
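The statement that a stored pulse is delayed in proportion to the quality factor can be made concrete with a short calculation. The following Python sketch was written for this article and is not part of the source; it assumes the standard cavity relation τ = Q/ω₀ between photon lifetime and quality factor, with purely illustrative wavelength and Q values:

    import math

    # Photon lifetime of a resonant cavity (assumption: tau = Q / omega0,
    # the standard relation; the wavelength and Q values are illustrative).
    c = 2.998e8                             # speed of light, m/s
    wavelength = 1.55e-6                    # telecom-band wavelength, m
    omega0 = 2 * math.pi * c / wavelength   # resonant angular frequency, rad/s

    for Q in (1e4, 1e6):
        tau = Q / omega0                    # photon lifetime in the cavity, s
        print(f"Q = {Q:.0e}: tau ~ {tau * 1e12:.0f} ps")

A Q of order 10⁴ gives a lifetime of a few picoseconds, while a Q of order 10⁶ gives nearly a nanosecond, consistent with the nano- to picosecond delays mentioned above.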
The first studies of cavities in one-dimensional photonic slabs were usually of grating or distributed-feedback structures. Two-dimensional photonic crystal cavities are useful for making efficient photonic devices in telecommunication applications, as they can provide very high quality factors, up to millions, with smaller-than-wavelength mode volumes. For three-dimensional photonic crystal cavities, several methods have been developed, including the lithographic layer-by-layer approach, surface ion-beam lithography, and micromanipulation techniques. All the photonic crystal cavities mentioned above tightly confine light and offer very useful functionality for integrated photonic circuits, but it is challenging to produce them in a manner that allows them to be easily relocated. There is no full control over cavity creation, cavity location, or emitter position relative to the maximum field of the cavity, and studies to solve those problems are ongoing. A movable cavity formed by a nanowire in a photonic crystal is one solution for tailoring this light–matter interaction.

Fabrication challenges

Higher-dimensional photonic crystal fabrication faces two major challenges:

Making them with enough precision to prevent scattering losses blurring the crystal properties
Designing processes that can robustly mass-produce the crystals

One promising fabrication method for two-dimensionally periodic photonic crystals is the photonic-crystal fiber, such as a holey fiber. Using fiber-draw techniques developed for communications fiber, it meets these two requirements, and photonic crystal fibres are commercially available. Another promising method for developing two-dimensional photonic crystals is the so-called photonic crystal slab. These structures consist of a slab of material—such as silicon—that can be patterned using techniques from the semiconductor industry. Such chips offer the potential to combine photonic processing with electronic processing on a single chip. For three-dimensional photonic crystals, various techniques have been used—including photolithography and etching techniques similar to those used for integrated circuits. Some of these techniques are already commercially available. To avoid the complex machinery of nanotechnological methods, some alternate approaches involve growing photonic crystals from colloidal crystals as self-assembled structures. Mass-scale 3D photonic crystal films and fibres can now be produced using a shear-assembly technique that stacks 200–300 nm colloidal polymer spheres into perfect films of fcc lattice. Because the particles have a softer transparent rubber coating, the films can be stretched and molded, tuning the photonic band gaps and producing striking structural color effects.

Computing photonic band structure

The photonic band gap (PBG) is essentially the gap between the air-line and the dielectric-line in the dispersion relation of the PBG system. To design photonic crystal systems, it is essential to engineer the location and size of the band gap by computational modeling using any of the following methods:

Plane wave expansion method
Inverse dispersion method
Finite element method
Finite-difference time-domain method
Order-N spectral method
KKR method
Bloch wave – MoM method

Essentially, these methods solve for the frequencies (normal modes) of the photonic crystal for each value of the propagation direction given by the wave vector, or vice versa.
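As an illustration of the first method in the list above, the band structure of a simple one-dimensional crystal can be computed in a few lines. The following Python sketch was written for this article and is not part of the source; it uses the parameters of the DBR example discussed in the next paragraph (relative permittivity 12.25 against air, 101 plane waves), and reading the quoted d/a = 0.8 ratio as an air fill fraction of 0.8 is an assumption:

    import numpy as np
    from scipy.linalg import eigh

    # 1D plane wave expansion at normal incidence, lattice period a = 1.
    eps_hi, eps_lo, f_hi = 12.25, 1.0, 0.2   # f_hi: high-index layer fraction
    N = 101                                   # number of plane waves
    n = np.arange(-(N // 2), N // 2 + 1)      # reciprocal-lattice indices

    # Fourier coefficients eps_m of the step-profile permittivity.
    def eps_fourier(m):
        return (eps_hi - eps_lo) * f_hi * np.sinc(m * f_hi) + eps_lo * (m == 0)

    E = eps_fourier(n[:, None] - n[None, :])  # Toeplitz matrix eps_{G-G'}

    # For each Bloch wave vector k in the irreducible Brillouin zone [0, pi/a],
    # solve (k+G)^2 c_G = (omega/c)^2 sum_G' eps_{G-G'} c_G' as a generalized
    # eigenvalue problem; the eigenvalues give the band frequencies at that k.
    for k in np.linspace(0.0, np.pi, 9):
        M = np.diag((k + 2.0 * np.pi * n) ** 2)
        w2 = eigh(M, E, eigvals_only=True)[:4]      # lowest four bands
        f = np.sqrt(np.abs(w2)) / (2.0 * np.pi)     # normalized frequency a/lambda
        print(f"k*a/pi = {k / np.pi:.2f}  bands (a/lambda): {np.round(f, 3)}")

Frequency ranges that no band enters for any k are the photonic band gaps (stop-bands) of the structure.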
The various lines in the band structure correspond to the different cases of n, the band index. For an introduction to photonic band structure, see the books by K. Sakoda and by Joannopoulos.

The plane wave expansion method can be used to calculate the band structure using an eigen formulation of Maxwell's equations, solving for the eigenfrequencies for each propagation direction given by the wave vectors. It directly solves for the dispersion diagram. Electric field strength values can also be calculated over the spatial domain of the problem using the eigenvectors of the same problem. As an example (the case computed in the sketch above), the band structure of a 1D distributed Bragg reflector (DBR) with an air core interleaved with a dielectric material of relative permittivity 12.25, and a lattice-period to air-core-thickness ratio (d/a) of 0.8, can be solved using 101 plane waves over the first irreducible Brillouin zone.

The inverse dispersion method also exploits plane wave expansion, but formulates Maxwell's equation as an eigenproblem for the wave vector k, while the frequency is treated as a parameter. Thus, it solves for the dispersion relation k(ω) instead of ω(k), which is what the plane wave method finds. The inverse dispersion method makes it possible to find complex values of the wave vector, e.g. in the band gap, which allows one to distinguish photonic crystals from metamaterials. Moreover, the method readily accommodates the frequency dispersion of the permittivity.

To speed calculation of the frequency band structure, the Reduced Bloch Mode Expansion (RBME) method can be used. The RBME method applies "on top" of any of the primary expansion methods mentioned above. For large unit-cell models, the RBME method can reduce the time for computing the band structure by up to two orders of magnitude.

Applications

Photonic crystals are attractive optical materials for controlling and manipulating the flow of light. One-dimensional photonic crystals are already in widespread use, in the form of thin-film optics, with applications from low- and high-reflection coatings on lenses and mirrors to colour-changing paints and inks. Higher-dimensional photonic crystals are of great interest for both fundamental and applied research, and the two-dimensional ones are beginning to find commercial applications. The first commercial products involving two-dimensionally periodic photonic crystals are already available in the form of photonic-crystal fibers, which use a microscale structure to confine light with radically different characteristics compared to conventional optical fiber, for applications in nonlinear devices and guiding exotic wavelengths. The three-dimensional counterparts are still far from commercialization but may offer additional features, such as the optical nonlinearity required for the operation of optical transistors used in optical computers, once technological aspects such as manufacturability and principal difficulties such as disorder are under control. SWG photonic crystal waveguides have enabled new integrated photonic devices for controlling the transmission of light signals in photonic integrated circuits, including fibre-chip couplers, waveguide crossovers, wavelength and mode multiplexers, ultra-fast optical switches, athermal waveguides, biochemical sensors, polarization management circuits, broadband interference couplers, planar waveguide lenses, anisotropic waveguides, nanoantennas, and optical phased arrays.
SWG nanophotonic couplers permit highly efficient and polarization-independent coupling between photonic chips and external devices. They have been adopted for fibre-chip coupling in volume optoelectronic chip manufacturing. These coupling interfaces are particularly important because every photonic chip needs to be optically connected to the external world, and the chips themselves appear in many established and emerging applications, such as 5G networks, data center interconnects, chip-to-chip interconnects, metro and long-haul telecommunication systems, and automotive navigation. In addition to the foregoing, photonic crystals have been proposed as platforms for the development of solar cells and optical sensors, including chemical sensors and biosensors.
Physical sciences
Optics
Physics
355522
https://en.wikipedia.org/wiki/Trematoda
Trematoda
Trematoda is a class of flatworms known as flukes or trematodes. They are obligate internal parasites with a complex life cycle requiring at least two hosts. The intermediate host, in which asexual reproduction occurs, is usually a snail. The definitive host, where the flukes sexually reproduce, is a vertebrate. Infection by trematodes can cause disease in all five traditional vertebrate classes: mammals, birds, amphibians, reptiles, and fish.

Etymology

Trematodes are commonly referred to as flukes. This term can be traced back to the Old English name for flounder, and refers to the flattened, rhomboidal shape of the organisms.

Taxonomy

There are 18,000 to 24,000 known species of trematodes, divided into two subclasses — the Aspidogastrea and the Digenea. Aspidogastrea is the smaller subclass, comprising 61 species. These flukes mainly infect bivalves and bony fishes. Digenea — which comprise the majority of trematodes — are found in certain molluscs and vertebrates.

Trematodes of medical importance

Flukes that cause disease in humans are often classified based on the organ system they infect. For example:

Blood flukes inhabit the blood in some stages of their life cycle. Blood flukes that cause disease in humans include Trichobilharzia regenti, which causes swimmer's itch, and seven species of the genus Schistosoma, which cause schistosomiasis: S. guineensis, S. haematobium, S. intercalatum, S. japonicum, S. malayensis, S. mansoni, and S. mekongi. Humans, as definitive hosts, are infected when cercariae (the larval forms of trematodes) penetrate the skin. Any contact with water containing these cercariae can potentially result in infection. Adult blood flukes can live for years in human or animal reservoir hosts. S. haematobium and S. japonicum are of particular importance, as these are carcinogenic parasites. S. haematobium, which infects the urinary bladder, is among the most important causes of bladder cancer in humans. This organism is classified by the International Agency for Research on Cancer (IARC) as a Group 1 (extensively proven) carcinogen. S. japonicum is associated with the development of liver cancer, and is classified as a Group 2B (possibly carcinogenic to humans) carcinogen.

Liver flukes are commonly found within the bile ducts, liver, and gallbladder of certain mammalian and avian species. They include Clonorchis sinensis, Dicrocoelium dendriticum, Dicrocoelium hospes, Fasciola gigantica, Fasciola hepatica, Opisthorchis felineus, and Opisthorchis viverrini. Clonorchis and Opisthorchis are carcinogenic parasites that are strongly associated with the development of cancer of the bile ducts.

Lung flukes: there are ten species of lung flukes that infect humans, causing paragonimiasis. Of these, the most common cause of human paragonimiasis is Paragonimus westermani, the oriental lung fluke. Lung flukes require three different hosts in order to complete their life cycle: the first intermediate host is a snail, the second intermediate host is a crab or crayfish, and the definitive host is an animal or human.

Intestinal flukes inhabit the epithelium of the small intestine. These include Fasciolopsis buski (which causes fasciolopsiasis), Metagonimus miyatai, Metagonimus takahashii, and Metagonimus yokogawai (which cause metagonimiasis), and Heterophyes heterophyes and Heterophyes nocens (which cause heterophyiasis).

Anatomy

Trematodes are flattened oval or worm-like animals, usually no more than a few centimeters in length, although some much smaller species are known.
Their most distinctive external feature is the presence of two suckers, one close to the mouth and the other on the underside of the animal.

The body surface of trematodes comprises a tough syncytial tegument, which helps protect against digestive enzymes in those species that inhabit the gut of larger animals. It is also the surface for gas exchange; there are no respiratory organs.

The mouth is located at the forward end of the animal and opens into a muscular, pumping pharynx. The pharynx connects, via a short oesophagus, to one or two blind-ending caeca, which occupy most of the length of the body. In some species, the caeca are themselves branched. As in other flatworms, there is no anus, and waste material must be egested through the mouth.

Although the excretion of nitrogenous waste occurs mostly through the tegument, trematodes do possess an excretory system, which is instead mainly concerned with osmoregulation. This consists of two or more protonephridia, with those on each side of the body opening into a collecting duct. The two collecting ducts typically meet up at a single bladder, opening to the exterior through one or two pores near the posterior end of the animal.

The brain consists of a pair of ganglia in the head region, from which two or three pairs of nerve cords run down the length of the body. The nerve cords running along the ventral surface are always the largest, while the dorsal cords are present only in the Aspidogastrea. Trematodes generally lack specialized sense organs, although some ectoparasitic species do possess one or two pairs of simple ocelli.

Body wall musculature: the body wall is formed of three different muscle layers: circular, longitudinal, and diagonal. The outermost layer is formed by the circular muscle fibers; directly beneath these are the longitudinal muscle fibers; and the innermost layer is formed by the diagonal muscle fibers. Together these muscle fibers form the body wall of trematodes.

Oral sucker and acetabulum: in some species of Trematoda, such as T. bragai, there is an acetabulum. This saucer-shaped organ is attached to the oral sucker in some trematodes and other parasitic worms, and allows the worms to attach to their host by penetrating the host's tissue with the spines lining the acetabulum. In trematodes, the oral sucker is linked to the pharynx via a canal composed of meridional, equatorial, and radial muscle fibers. Together, the mouth, pharynx, and esophagus form the foregut in trematodes.

Reproductive system of blood flukes

Most trematodes are hermaphrodites, as are many internal parasites. Blood flukes (Schistosoma) are the only trematodes that are dioecious (having separate male and female sexes). Blood flukes are unique in that they can undergo both asexual and sexual reproduction. Asexual reproduction occurs in the hepatopancreas of a freshwater snail, which serves as an intermediate host. Sexual reproduction occurs later in the life cycle, in the definitive (vertebrate) host.

The male reproductive system usually includes two testes, though some species may have more. The testes are located posterior and dorsal to the ventral sucker. Spermatogenesis produces biflagellate sperm (sperm with two tails). Sperm are stored in the seminal vesicles, which are connected to the testes by the vas deferens. The male reproductive system varies considerably in structure between species; this can be very useful in species identification.
The female reproductive system consists of one ovary connected to an elongated uterus by a ciliated oviduct. The uterus opens to the exterior at the genital pore (the common external opening of the male and female reproductive systems). The location of the ovary varies among species, making the female reproductive system useful in species identification. At the base of the oviduct is a copulatory duct — termed Laurer's canal — which is analogous to a vagina. Oocytes are released from the ovary into the oocapt (the dilated proximal end of the oviduct). Sperm cells travel from the seminal vesicles through the uterus to reach the ootype (the dilated distal part of the oviduct), where fertilization occurs. The ootype is connected via a pair of ducts to a number of vitelline glands that produce yolk. After the egg is surrounded by yolk, its shell is formed from the secretions of Mehlis' glands, the ducts of which also open into the ootype. From the ootype, the fertilized egg travels back into the uterus and is ultimately released from the genital atrium.

Life cycles

Trematodes have very complex life cycles; depending on the taxon, the life cycle can be completed with as few as one host, compared with the typical three hosts. When there is a single host, it is normally a specific species of snail of the family Lymnaeidae. Almost all trematodes infect molluscs as the first host in the life cycle, and most have a complex life cycle involving other hosts. Most trematodes are monoecious and alternately reproduce sexually and asexually. The two main exceptions are the Aspidogastrea, which have no asexual reproduction, and the schistosomes, which are dioecious.

In the definitive host, in which sexual reproduction occurs, eggs are commonly shed along with host feces. Eggs shed in water release free-swimming larval forms (miracidia) that are infective to the intermediate host, in which asexual reproduction occurs.

A species that exemplifies the remarkable life history of the trematodes is the bird fluke, Leucochloridium paradoxum. The definitive hosts, in which the parasite reproduces, are various woodland birds, while the hosts in which the parasite multiplies (the intermediate hosts) are various species of snail. The adult parasite in the bird's gut produces eggs, which eventually end up on the ground in the bird's feces. Some eggs may be swallowed by a snail and hatch into larvae (miracidia). These larvae grow and take on a sac-like appearance. This stage is known as the sporocyst, and it forms a central body in the snail's digestive gland that extends into a brood sac in the snail's head, muscular foot, and eye-stalks. It is in the central body of the sporocyst that the parasite replicates itself, producing many tiny embryos (rediae). These embryos move to the brood sac and mature into cercariae.

Life cycle adaptations

Trematodes take a large variety of forms throughout their life cycles; individual trematode parasites' life cycles may vary from the list below. They have five larval stages along with the encysted and fully matured adult phases.

Trematodes are released from the definitive host as eggs, which have evolved to withstand the harsh external environment. The egg hatches into the miracidium, which infects the first intermediate host in one of two ways, by either active or passive transmission. The first host is normally a mollusc.
a) Active transmission is adapted for dispersal in space: a free-swimming ciliated miracidium with adaptations for recognizing and penetrating the first intermediate host. b) Passive transmission is adapted for dispersal in time: the miracidium infects the first intermediate host while still contained within the egg.

The sporocyst forms inside the snail (the first intermediate host) and feeds by diffusion across the tegument. The rediae also form inside the snail and feed through a developed pharynx. Either the rediae or the sporocyst develops into cercariae through polyembryony in the snail. The cercariae are adapted for dispersal in space and exhibit a large variety of morphologies. They are adapted to recognize and penetrate the second intermediate host, and contain behavioural and physiological adaptations not present in earlier life stages. The metacercariae are an adapted cystic form, dormant in the second intermediate host. The adult is the fully developed form, which infects the definitive host.

The first stage is the miracidium, which is triangular in shape and covered by a ciliated ectoderm, the outermost of the three germ layers; the epidermis and epidermal tissues of the parasite will develop from the miracidium. Miracidia also have an anterior spine which helps them drill into the snail. The miracidium develops into the sporocyst, a sac-like structure in which the larvae begin to develop and the cells multiply. The rediae and cercariae develop from the larvae, which are then released and encyst as metacercariae, for instance on aquatic plants. Humans as well as larger sea creatures become infected when they eat these plants. When they infect humans, it can take 3–4 months for the metacercariae to mature into adult flukes and lay eggs.

Example: liver flukes

Liver flukes are responsible for liver fluke disease, also known as fasciolosis. They are hermaphroditic internal parasites. The disease is caused by the migration of large numbers of immature flukes through the liver, or by adult flukes that migrate to the bile ducts. Liver flukes infect all grazing animals, and infect humans when they eat raw or undercooked fish. Like other flukes, liver flukes need intermediate hosts, and as a result the transmission from animals to humans happens in three phases. The first phase is the infection of the snail (the first intermediate host) via feces. The parasites complete their development in the snail and emerge as cercariae, which leave their snail hosts and infect fish, the second intermediate host. Lastly, larger animals ingest the metacercariae in raw or undercooked fish. In humans or grazing animals, the metacercariae complete their life cycle and become full-grown liver flukes.

Eusociality

One species of trematode, Haplorchis pumilio, has evolved eusociality, with colonies producing a class of sterile soldiers. One fluke invades a host and establishes a colony of dozens to thousands of clones that work together to take it over. Since rival trematode species might also invade and replace them, a specialized caste of sterile soldier trematodes protects the colony. Soldiers are smaller, more mobile, and develop along a different pathway from the sexually mature reproductives. One big difference is their mouthparts (pharynx), which are five times as big as those of the reproductives and make up nearly a quarter of the volume of the soldier.
These soldiers do not have a germinal mass, cannot metamorphose into reproductives, and are therefore obligately sterile. Soldiers are readily distinguished from the immature and mature reproductive worms. Soldiers are more aggressive than reproductives, attacking heterospecific trematodes that infect their host in vitro; however, H. pumilio soldiers do not attack conspecifics from other colonies. The soldiers are not evenly distributed throughout the host body: they are found in the highest numbers in the basal visceral mass, where competing trematodes tend to multiply during the early phase of infection. This strategic positioning allows them to defend effectively against invaders, similar to the soldier distribution patterns seen in other animals with defensive castes. They "appear to be an obligately sterile physical caste, akin to that of the most advanced social insects". Reflecting on their use for understanding the evolution of animal social castes, one review commented, "trematodes are a lineage for sociobiologists to keep a careful watch on!"

Infections

Trematodes can cause disease in many types of vertebrates, including mammals, birds, reptiles, and fish. Cattle and sheep can become infected by eating contaminated food. These infections lead to a reduction in milk or meat production, which can be of significant economic importance to the livestock industry. Human trematode infections are most common in Asia, Africa, and Latin America. However, trematodes can be found anywhere untreated human waste is used as fertilizer. Humans can be infected by trematodes through immersion in or ingestion of contaminated water, or by consuming raw or undercooked contaminated animals or plants.

Treatment

Albendazole can be used to treat clonorchiasis and opisthorchiasis. Triclabendazole is often used to treat fasciolosis, and may also be useful in the treatment of paragonimiasis and dicrocoeliasis. Praziquantel is effective in the treatment of all diseases caused by flukes (clonorchiasis, dicrocoeliasis, echinostomiasis, fasciolopsiasis, fasciolosis, gastrodiscoidiasis, heterophyiasis, metagonimiasis, opisthorchiasis, paragonimiasis, and schistosomiasis).
Biology and health sciences
Platyzoa
Animals
355547
https://en.wikipedia.org/wiki/London%20dispersion%20force
London dispersion force
London dispersion forces (LDF, also known as dispersion forces, London forces, instantaneous dipole–induced dipole forces, fluctuating induced dipole bonds, or loosely as van der Waals forces) are a type of intermolecular force acting between atoms and molecules that are normally electrically symmetric; that is, the electrons are symmetrically distributed with respect to the nucleus. They are part of the van der Waals forces. The LDF is named after the German physicist Fritz London. They are the weakest intermolecular force.

Introduction

The electron distribution around an atom or molecule undergoes fluctuations in time. These fluctuations create instantaneous electric fields which are felt by other nearby atoms and molecules, which in turn adjust the spatial distribution of their own electrons. The net effect is that the fluctuations in electron positions in one atom induce a corresponding redistribution of electrons in other atoms, such that the electron motions become correlated. While the detailed theory requires a quantum-mechanical explanation (see quantum mechanical theory of dispersion forces), the effect is frequently described as the formation of instantaneous dipoles that (when separated by vacuum) attract each other.

The magnitude of the London dispersion force is frequently described in terms of a single parameter called the Hamaker constant, typically symbolized A. For atoms that are located closer together than the wavelength of light, the interaction is essentially instantaneous and is described in terms of a "non-retarded" Hamaker constant. For entities that are farther apart, the finite time required for the fluctuation at one atom to be felt at a second atom ("retardation") requires use of a "retarded" Hamaker constant.

While the London dispersion force between individual atoms and molecules is quite weak and decreases quickly with separation R, as 1/R⁶, in condensed matter (liquids and solids) the effect is cumulative over the volume of materials, or within and between organic molecules, such that London dispersion forces can be quite strong in bulk solids and liquids and decay much more slowly with distance. For example, the total force per unit area between two bulk solids decreases as 1/z³, where z is the separation between them. The effects of London dispersion forces are most obvious in systems that are very non-polar (e.g., that lack ionic bonds), such as hydrocarbons and highly symmetric molecules like bromine (Br2, a liquid at room temperature) or iodine (I2, a solid at room temperature). In hydrocarbons and waxes, the dispersion forces are sufficient to cause condensation from the gas phase into the liquid or solid phase. Sublimation heats of hydrocarbon crystals, for example, reflect the dispersion interaction. Liquefaction of oxygen and nitrogen gases into liquid phases is also dominated by attractive London dispersion forces.

When atoms or molecules are separated by a third medium (rather than vacuum), the situation becomes more complex. In aqueous solutions, the effects of dispersion forces between atoms or molecules are frequently less pronounced due to competition with polarizable solvent molecules. That is, the instantaneous fluctuations in one atom or molecule are felt both by the solvent (water) and by other molecules.

Larger and heavier atoms and molecules exhibit stronger dispersion forces than smaller and lighter ones. This is due to the increased polarizability of molecules with larger, more dispersed electron clouds.
The polarizability is a measure of how easily electrons can be redistributed; a large polarizability implies that the electrons are more easily redistributed. This trend is exemplified by the halogens (from smallest to largest: F2, Cl2, Br2, I2). The same increase of dispersive attraction occurs within and between organic molecules in the order RF, RCl, RBr, RI (from smallest to largest), or with other more polarizable heteroatoms. Fluorine and chlorine are gases at room temperature, bromine is a liquid, and iodine is a solid. The London forces are thought to arise from the motion of electrons.

Quantum mechanical theory

The first explanation of the attraction between noble gas atoms was given by Fritz London in 1930. He used a quantum-mechanical theory based on second-order perturbation theory. The perturbation is the Coulomb interaction V between the electrons and nuclei of the two moieties (atoms or molecules). The second-order perturbation expression of the interaction energy contains a sum over states. The states appearing in this sum are simple products of the excited electronic states of the monomers. Thus, no intermolecular antisymmetrization of the electronic states is included, and the Pauli exclusion principle is only partially satisfied.

London wrote a Taylor series expansion of the perturbation V in 1/R, where R is the distance between the nuclear centers of mass of the moieties. This expansion is known as the multipole expansion, because the terms in this series can be regarded as energies of two interacting multipoles, one on each monomer. Substitution of the multipole-expanded form of V into the second-order energy yields an expression that resembles an expression describing the interaction between instantaneous multipoles (see the qualitative description above). Additionally, an approximation named after Albrecht Unsöld must be introduced in order to obtain a description of London dispersion in terms of polarizability volumes α′ and ionization energies I (an older term: ionization potentials). In this manner, the following approximation is obtained for the dispersion interaction E_AB between two atoms A and B:

E_AB = −(3/2) · [I_A I_B / (I_A + I_B)] · [α′_A α′_B / R⁶]

Here α′_A and α′_B are the polarizability volumes of the respective atoms, I_A and I_B are the first ionization energies of the atoms, and R is the intermolecular distance.

Note that this final London equation does not contain instantaneous dipoles (see molecular dipoles). The "explanation" of the dispersion force as the interaction between two such dipoles was invented after London arrived at the proper quantum mechanical theory. The authoritative work contains a criticism of the instantaneous dipole model and a modern and thorough exposition of the theory of intermolecular forces.

The London theory has much similarity to the quantum mechanical theory of light dispersion, which is why London coined the phrase "dispersion effect". In physics, the term "dispersion" describes the variation of a quantity with frequency, which in the case of London dispersion is the fluctuation of the electrons.

Relative magnitude

Dispersion forces are usually the dominant of the three van der Waals contributions (orientation, induction, dispersion) between atoms and molecules, with the exception of molecules that are small and highly polar, such as water. Estimates of the contribution of dispersion to the total intermolecular interaction energy have been given for various molecules.
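As a rough numerical illustration of London's formula above, the following Python sketch (written for this article, not part of the source) evaluates the dispersion energy for a pair of argon atoms; the polarizability volume of about 1.64 Å³ and first ionization energy of about 15.76 eV are approximate literature values used only for illustration:

    # London's approximate dispersion formula:
    #   E_disp(R) = -(3/2) * (I_A*I_B / (I_A+I_B)) * alpha_A*alpha_B / R**6
    def london_energy(alpha_a, alpha_b, i_a, i_b, r):
        """Dispersion energy in eV, for polarizability volumes in cubic
        angstroms, ionization energies in eV, and separation r in angstroms."""
        return -1.5 * (i_a * i_b / (i_a + i_b)) * alpha_a * alpha_b / r**6

    for r in (3.8, 5.0, 10.0):                     # separations in angstroms
        e = london_energy(1.64, 1.64, 15.76, 15.76, r)
        print(f"R = {r:5.1f} A: E_disp = {e * 1e3:9.4f} meV")

The rapid weakening with distance reflects the 1/R⁶ dependence noted earlier; near the argon dimer's equilibrium separation (about 3.8 Å), the estimate is of the order of 10 meV, comparable to the experimentally known well depth.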
Physical sciences
Supramolecular chemistry
Chemistry
355765
https://en.wikipedia.org/wiki/Compound%20eye
Compound eye
A compound eye is a visual organ found in arthropods such as insects and crustaceans. It may consist of thousands of ommatidia, tiny independent photoreception units that consist of a cornea, a lens, and photoreceptor cells which distinguish brightness and color. The image perceived by this arthropod eye is a combination of inputs from the numerous ommatidia, which are oriented to point in slightly different directions. Compared with single-aperture eyes, compound eyes have poor image resolution; however, they possess a very large view angle and the ability to detect fast movement and, in some cases, the polarization of light.

Because a compound eye is made up of a collection of ommatidia, each with its own lens, light enters through many points rather than a single aperture. The individual light receptors behind each lens are turned on and off by changes in light intensity during movement, or when an object is moving, creating a flicker effect; the rate at which the ommatidia are turned on and off is known as the flicker frequency. This facilitates faster reaction to movement: honey bees respond in 0.01 s, compared with 0.05 s for humans.

Types

Compound eyes are typically classified as either apposition eyes, which form multiple inverted images, or superposition eyes, which form a single erect image.

Apposition eyes

Apposition eyes can be divided into two groups. The typical apposition eye has a lens focusing light from one direction onto the rhabdom, while light from other directions is absorbed by the dark wall of the ommatidium. The mantis shrimp is the most advanced example of an animal with this type of eye. In the other kind of apposition eye, found in the Strepsiptera, each lens forms an image, and the images are combined in the brain. This is called the schizochroal compound eye or the neural superposition eye (which, despite its name, is a form of the apposition eye).

Superposition eyes

The superposition eye is divided into three subtypes: the refracting, the reflecting, and the parabolic superposition eye. The refracting superposition eye has a gap between the lens and the rhabdom, and no side wall. Each lens takes light at an angle to its axis and redirects it to the same angle on the other side. The result is an image at half the radius of the eye, which is where the tips of the rhabdoms are. This kind is used mostly by nocturnal insects. In the parabolic superposition eye, seen in arthropods such as mayflies, the parabolic surfaces of the inside of each facet focus light from a reflector onto a sensor array. Long-bodied decapod crustaceans such as shrimp, prawns, crayfish, and lobsters are alone in having reflecting superposition eyes, which also have a transparent gap but use corner mirrors instead of lenses.

Other

Good fliers like flies or honey bees, and prey-catching insects like praying mantises or dragonflies, have specialized zones of ommatidia organized into a fovea which gives acute vision. In the acute zone, the eye is flattened and the facets are larger. The flattening allows more ommatidia to receive light from a spot, and therefore gives higher resolution.

There are some exceptions to the types mentioned above. Some insects have a so-called single-lens compound eye, a transitional type between the superposition type of multi-lens compound eye and the single-lens eye found in animals with simple eyes. Then there is the mysid shrimp, Dioptromysis paucispinosa.
The shrimp has an eye of the refracting superposition type; at the rear of each eye, behind this, there is a single large facet, three times the diameter of the others in the eye, and behind it an enlarged crystalline cone. This projects an upright image on a specialized retina. The resulting eye is a mixture of a simple eye within a compound eye.

Another version is the pseudofaceted eye, as seen in Scutigera. This type of eye consists of a cluster of numerous ocelli on each side of the head, organized in a way that resembles a true compound eye.

Asymmetries in compound eyes may be associated with asymmetries in behaviour. For example, Temnothorax albipennis ant scouts show behavioural lateralization when exploring unknown nest sites, with a population-level bias to prefer left turns. One possible reason for this is that their environment is partly maze-like, and consistently turning in one direction is a good way to search and exit mazes without getting lost. This turning bias is correlated with slight asymmetries in the ants' compound eyes (differential ommatidia counts).

The body of Ophiomastix wendtii, a type of brittle star, was previously thought to be covered with ommatidia, turning its whole skin into a compound eye, but this has since been found to be erroneous; the system does not rely on lenses or image formation.

Cultural references

"Dragonfly eyes" (Chinese: 蜻蜓眼, qingting yan) is a term for knobbly multi-coloured glass beads made in Western and Eastern Asia 2,000–2,500 years ago. Owing to the multiple views and stimuli they evoke, compound eyes or dragonfly eyes have become a feature in art, film, and literature, particularly in the 2010s. For example:

The Man with the Compound Eyes, novel (2011) by Wu Ming-yi, English translation (2013) by Darryl Sterk
Dragonfly Eyes, movie (2017) by Xu Bing
Dragonfly Eyes, novel (2016) by Cao Wenxuan, English translation (2021) by Helen Wang
Biology and health sciences
Visual system
Biology
355776
https://en.wikipedia.org/wiki/Pellagra
Pellagra
Pellagra is a disease caused by a lack of the vitamin niacin (vitamin B3). Symptoms include inflamed skin, diarrhea, dementia, and sores in the mouth. Areas of the skin exposed to friction or sunlight are typically affected first. Over time, affected skin may become darker, stiffen, peel, or bleed.

There are two main types of pellagra, primary and secondary. Primary pellagra is due to a diet that does not contain enough niacin and tryptophan. Secondary pellagra is due to a poor ability to use the niacin within the diet. This can occur as a result of alcoholism, long-term diarrhea, carcinoid syndrome, Hartnup disease, and a number of medications such as isoniazid. Diagnosis is typically based on symptoms and may be assisted by urine testing.

Treatment is with either nicotinic acid or nicotinamide supplementation. Improvements typically begin within a couple of days. General improvements in diet are also frequently recommended. Decreasing sun exposure via sunscreen and proper clothing is important while the skin heals. Without treatment, death may occur. The disease occurs most commonly in the developing world, often as a disease of poverty associated with malnutrition, specifically in sub-Saharan Africa.

Signs and symptoms

The classic symptoms of pellagra are diarrhea, dermatitis, dementia, and death ("the four Ds"). A more comprehensive list of symptoms includes:

Dermatitis (the characteristic "broad collar" rash known as the Casal collar)
Hair loss
Swelling
Smooth, beefy red glossitis (tongue inflammation)
Trouble sleeping
Weakness
Mental confusion or aggression
Ataxia (lack of coordination), paralysis of extremities, peripheral neuritis (nerve damage)
Diarrhea
Dilated cardiomyopathy (enlarged, weakened heart)
Sensitivity to sunlight
Eventually dementia

J. Frostigs and Tom Spies—according to Cleary and Cleary—described the more specific psychological symptoms of pellagra as:

Psychosensory disturbances (sensory impressions experienced as painful, annoyance by bright lights, intolerance of odors causing nausea and vomiting, dizziness after sudden movements)
Psychomotor disturbances (restlessness, tenseness and a desire to quarrel, increased preparedness for motor action)
Emotional disturbances

Independently of clinical symptoms, the blood level of tryptophan, or of urinary metabolites such as a 2-pyridone/N-methylnicotinamide ratio < 2 or the NAD/NADP ratio in red blood cells, can be used to diagnose pellagra. The diagnosis is confirmed by rapid improvement of symptoms after doses of nicotinamide (250–500 mg/day) or nicotinamide-enriched food.

Pathophysiology

Pellagra can develop according to several mechanisms, classically as a result of niacin (vitamin B3) deficiency, which results in decreased nicotinamide adenine dinucleotide (NAD). Since NAD and its phosphorylated form NADP are cofactors required in many body processes, the pathological impact of pellagra is broad and results in death if not treated. The first mechanism is simple dietary lack of niacin. Second, it may result from deficiency of tryptophan, an essential amino acid found in meat, poultry, fish, eggs, and peanuts, which the body uses to make niacin. Third, it may be caused by excess leucine, which inhibits quinolinate phosphoribosyltransferase (QPRT) and thereby the conversion of nicotinic acid to nicotinamide mononucleotide (NMN), causing pellagra-like symptoms to occur. Some conditions can prevent the absorption of dietary niacin or tryptophan and lead to pellagra.
Inflammation of the jejunum or ileum can prevent nutrient absorption, leading to pellagra; this can in turn be caused by Crohn's disease. Gastroenterostomy can also cause pellagra. Chronic alcoholism can also cause poor absorption, which, combined with a diet already low in niacin and tryptophan, produces pellagra. Hartnup disease is a genetic disorder that reduces tryptophan absorption, leading to pellagra.

Alterations in protein metabolism may also produce pellagra-like symptoms. An example is carcinoid syndrome, a disease in which neuroendocrine tumors along the GI tract use tryptophan as the source for serotonin production, which limits the tryptophan available for niacin synthesis. In normal patients, only one percent of dietary tryptophan is converted to serotonin; in patients with carcinoid syndrome, however, this value may increase to 70%. Carcinoid syndrome thus may produce niacin deficiency and the clinical manifestations of pellagra. Anti-tuberculosis medication tends to bind to vitamin B6 and reduce niacin synthesis, since B6 (pyridoxine) is a required cofactor in the tryptophan-to-niacin reaction.

Several therapeutic drugs can provoke pellagra. These include the antibiotics isoniazid (which decreases available B6 by binding to it and making it inactive, so it cannot be used in niacin synthesis) and chloramphenicol; the anti-cancer agent fluorouracil; and the immunosuppressant mercaptopurine.

Treatment

If untreated, pellagra can kill within four or five years. Treatment is with nicotinamide, which has the same vitamin function as nicotinic acid and a similar chemical structure, but lower toxicity. The frequency and amount of nicotinamide administered depend on the degree to which the condition has progressed.

Epidemiology

Pellagra can be common in people who obtain most of their food energy from maize (corn), notably in rural South America, where maize is a staple food. If maize is not nixtamalized, it is a poor source of tryptophan, as well as niacin. Nixtamalization corrects the niacin deficiency, and is a common practice in Native American cultures that grow corn, most especially in Mexico and the countries of Central America. Following the corn cycle, the symptoms usually appear during spring, increase in the summer due to greater sun exposure, and return the following spring. Indeed, pellagra was once endemic in the poorer states of the U.S. South, such as Mississippi and Alabama, where its cyclical appearance in the spring after meat-heavy winter diets led to it being known as "spring sickness" (particularly when it appeared among more vulnerable children), as well as among the residents of jails and orphanages, as studied by Dr. Joseph Goldberger.

Pellagra is common in Africa, Indonesia, and China. In affluent societies, the majority of patients with clinical pellagra are poor, homeless, alcohol-dependent, or psychiatric patients who refuse food. Pellagra was common among prisoners of Soviet labor camps (the Gulag). In addition, pellagra, as a micronutrient deficiency disease, frequently affects populations of refugees and other displaced people, owing to their unique, long-term residential circumstances and dependence on food aid. Refugees typically rely on limited sources of vitamin B3 provided to them, often peanuts (which, in Africa, may be supplied in place of local groundnut staples, such as the Bambara or Hausa groundnut); the instability in the nutritional content and distribution of food aid can be the cause of pellagra in displaced populations.
In the 2000s, there were outbreaks in countries such as Angola, Zimbabwe, and Nepal. In Angola specifically, reports show a steady incidence of pellagra since 2002, with clinical pellagra in 0.3% of women and 0.2% of children and niacin deficiency in 29.4% of women and 6% of children, related to high consumption of untreated corn. Cases have also been reported in other countries, such as the Netherlands and Denmark, even with sufficient intake of niacin. In such cases, deficiency may arise not from poverty or malnutrition but secondary to alcoholism, drug interactions (psychotropic, cytostatic, tuberculostatic, or analgesic drugs), HIV, vitamin B2 and B6 deficiency, or malabsorption syndromes such as Hartnup disease and carcinoid tumors. Etymology The word is known to come from Lombard, but its exact origins are disputed. "Pell-" certainly arises from the classical Latin pellis, meaning "skin". "-agra" may arise from a Lombard word meaning "like serum or holly juice", or from the Latinate -agra, a suffix for maladies itself borrowed from the Greek ágra, meaning "a catch-point, a hunting trap". History Native American cultivators who first domesticated corn (maize) prepared it by nixtamalization, in which the grain is treated with a solution of alkali such as lime. Nixtamalization makes the niacin nutritionally available and prevents pellagra. When maize was cultivated worldwide and eaten as a staple without nixtamalization, pellagra became common. Pellagra was first described for its dermatological effect in Spain in 1735 by Gaspar Casal. He explained that the disease causes dermatitis in exposed skin areas such as the hands, feet, and neck, and that its origin is poor diet and atmospheric influences. His work, published in 1762 by his friend Juan Sevillano, was titled Historia Natural y Medicina del Principado de Asturias (Natural and Medical History of the Principality of Asturias). This led to the disease being known as "Asturian leprosy", and it is recognized as the first modern pathological description of a syndrome. It was an endemic disease in northern Italy, where it was given its Lombard name by Francesco Frapolli of Milan. With pellagra affecting over 100,000 people in Italy by the 1880s, debates raged as to how to classify the disease (as a form of scurvy, elephantiasis, or as something new) and over its causation. In the 19th century, Roussel started a campaign in France to restrict consumption of maize, which eradicated the disease in France, but it remained endemic in many rural areas of Europe. Because pellagra outbreaks occurred in regions where maize was a dominant food crop, the most convincing hypothesis during the late 19th century, as espoused by Cesare Lombroso, was that the maize either carried a toxic substance or was a carrier of disease. Louis Sambon, an Anglo-Italian doctor working at the London School of Tropical Medicine, was convinced that pellagra was carried by an insect, along the lines of malaria. Later, the lack of pellagra outbreaks in Mesoamerica, where maize is a major food crop, led researchers to investigate processing techniques in that region. Pellagra was studied mostly in Europe until the late 19th century, when it became epidemic, especially in the southern United States. In the early 1900s, pellagra reached epidemic proportions in the American South. Between 1906 and 1940, more than 3 million Americans were affected by pellagra, with more than 100,000 deaths, yet the epidemic resolved soon after dietary niacin fortification was introduced. 
Pellagra deaths in South Carolina numbered 1,306 during the first ten months of 1915; 100,000 Southerners were affected in 1916. At this time, the scientific community held that pellagra was probably caused by a germ or some unknown toxin in corn. The Spartanburg Pellagra Hospital in Spartanburg, South Carolina, was the nation's first facility dedicated to discovering the cause of pellagra. It was established in 1914 with a special Congressional appropriation to the U.S. Public Health Service (PHS) and set up primarily for research. In 1915, Dr. Joseph Goldberger, assigned to study pellagra by the Surgeon General of the United States, showed it was linked to diet by observing outbreaks of pellagra in orphanages and mental hospitals. Goldberger noted that children between the ages of 6 and 12 (but not older or younger children at the orphanages) and patients at the mental hospitals (but not doctors or nurses) were the ones who seemed most susceptible to pellagra. Goldberger theorized that a lack of meat, milk, eggs, and legumes made those particular populations susceptible to pellagra. By modifying the diet served in these institutions with "a marked increase in the fresh animal and the leguminous protein foods," Goldberger was able to show that pellagra could be prevented. By 1926, Goldberger established that a diet that included these foods, or a small amount of brewer's yeast, prevented pellagra. Goldberger experimented on 11 prisoners (one was dismissed because of prostatitis). Before the experiment, the prisoners were eating the prison fare fed to all inmates at Rankin Prison Farm in Mississippi. Goldberger started feeding them a restricted diet of grits, syrup, mush, biscuits, cabbage, sweet potatoes, rice, collards, and coffee with sugar (no milk). Healthy white male volunteers were selected because the typical skin lesions were easier to see in Caucasians and because this population was felt to be the least susceptible to the disease, thus providing the strongest evidence that the disease was caused by a nutritional deficiency. Subjects experienced mild but typical cognitive and gastrointestinal symptoms, and within five months of this cereal-based diet, 6 of the 11 subjects broke out in the skin lesions that are necessary for a definitive diagnosis of pellagra. The lesions appeared first on the scrotum. Goldberger was not given the opportunity to experimentally reverse the effects of diet-induced pellagra, as the prisoners were released shortly after the diagnoses of pellagra were confirmed. In the 1920s, he connected pellagra to the corn-based diets of rural areas rather than to infection, as contemporary medical opinion suggested. Goldberger believed that the root cause of pellagra amongst Southern farmers was a limited diet resulting from poverty, and that social and land reform would cure epidemic pellagra. His reform efforts were not realized, but crop diversification in the Southern United States, and the accompanying improvement in diet, dramatically reduced the risk of pellagra. Goldberger is remembered as the "unsung hero of American clinical epidemiology". Though he identified that a missing nutritional element was responsible for pellagra, he did not discover the specific vitamin responsible. In 1937, Conrad Elvehjem, a biochemistry professor at the University of Wisconsin-Madison, showed that vitamin B3 cured pellagra (manifested as black tongue) in dogs. Later studies by Dr. 
Tom Spies, Marion Blankenhorn, and Clark Cooper established that niacin also cured pellagra in humans, for which Time magazine dubbed them its 1938 Men of the Year in comprehensive science. Research conducted between 1900 and 1950 found that the number of cases of women with pellagra was consistently double the number of cases in men. This is thought to be due to the inhibitory effect of estrogen on the conversion of the amino acid tryptophan to nicotinic acid mononucleotide (NaMN). Researchers of the time offered several explanations for the difference. Gillman and Gillman related skeletal tissue and pellagra in their research on South Africans. They provide some of the best evidence for skeletal manifestations of pellagra and the reaction of bone in malnutrition. They claimed radiological studies of adult pellagrins demonstrated marked osteoporosis. A negative mineral balance in pellagrins was noted, which indicated active mobilization and excretion of endogenous mineral substances, and undoubtedly impacted the turnover of bone. Extensive dental caries were present in over half of pellagra patients. In most cases, caries were associated with "severe gingival retraction, sepsis, exposure of cementum, and loosening of teeth". United States Pellagra was first reported in 1902 in the United States, and has "caused more deaths than any other nutrition-related disease in American history", reaching epidemic proportions in the American South during the early 1900s. Poverty and consumption of corn were the most frequently observed risk factors, but the exact cause was not known until groundbreaking work by Joseph Goldberger. A 2017 National Bureau of Economic Research paper explored the role of cotton production in the emergence of the disease; one prominent theory is that "widespread cotton production had displaced local production of niacin-rich foods and driven poor Southern farmers and mill workers to consume milled Midwestern corn, which was relatively cheap but also devoid of the niacin necessary to prevent pellagra." The study provided evidence in favor of the theory: there were lower pellagra rates in areas where farmers had been forced to abandon cotton production (a highly profitable crop) in favor of less profitable food crops due to boll weevil infestation of the cotton crop (which occurred essentially at random). Pellagra developed especially among vulnerable populations in institutions such as orphanages and prisons, because of their monotonous and restricted diets. Soon pellagra began to occur in epidemic proportions in states south of the Potomac and Ohio rivers. The pellagra epidemic lasted for nearly four decades, beginning in 1906. It was estimated that there were 3 million cases and 100,000 deaths due to pellagra during the epidemic. The pellagra epidemic in the American South subsided in periods of low cotton production (the late 1910s to early 1920s, and the Great Depression), but it consistently rebounded as cotton production recovered. The cause would not be understood until 1937, when the relation with niacin was discovered. Voluntary food fortification, and periods of mandatory fortification at the state and federal levels, soon followed, coinciding with a continuous drop in pellagra deaths. By the 1950s, the disease was virtually eliminated from the US. Corn processing The whole dried corn kernel contains a nutritious germ and a thin seed coat that provides some fiber. There are two important considerations for using ground whole-grain corn. 
The germ contains oil that is exposed by grinding, so whole-grain cornmeal and grits turn rancid quickly at room temperature and should be refrigerated. Whole-grain cornmeal and grits also require extended cooking times. The milling of corn removes the aleurone and germ layers, and with them much of the (already low) amount of bioavailable niacin and tryptophan found within. The milling and degerming of corn in the preparation of cornmeal became feasible with the development of the Beall degerminator, which was originally patented in 1901 and was used to separate the grit from the germ in corn processing. Casimir Funk, who helped elucidate the role of thiamin in the etiology of beriberi, was an early investigator of the problem of pellagra. Funk suggested that a change in the method of milling corn was responsible for the outbreak of pellagra, but no attention was paid to his article on this subject. In popular culture George Sessions Perry's 1941 novel Hold Autumn in Your Hand, and Jean Renoir's 1945 film adaptation of it, The Southerner, incorporate pellagra ("spring sickness") as a major plot element in the story of an impoverished Texas farm family. Satirical singer and pianist Tom Lehrer referred to pellagra as an endemic disease of the Southern states in the lyrics of his 1960 song "I Wanna Go Back to Dixie": "I'll go back to the Swanee / Where pellagra makes ya scrawny / And the honeysuckle clutters up the vine." Season 2, Episode 22 of House, titled "Forever", focuses on a new mother diagnosed with pellagra who experiences aggression and homicidal tendencies.
Biology and health sciences
Health and fitness: General
Health
355802
https://en.wikipedia.org/wiki/Egg%20white
Egg white
Egg white is the clear liquid (also called the albumen or the glair/glaire) contained within an egg. In chickens, it is formed from the layers of secretions of the anterior section of the hen's oviduct during the passage of the egg. It forms around fertilized or unfertilized egg yolks. The primary natural purpose of egg white is to protect the yolk and provide additional nutrition for the growth of the embryo (when fertilized). Egg white consists primarily of about 90% water into which about 10% proteins (including albumins, mucoproteins, and globulins) are dissolved. Unlike the yolk, which is high in lipids (fats), egg white contains almost no fat, and its carbohydrate content is less than 1%. Egg whites contain about 56% of the protein in the egg. Egg white has many uses in food (e.g. meringue, mousse) as well as many other uses (e.g. in the preparation of vaccines such as those for influenza). Composition Egg white makes up around two-thirds of a chicken egg by weight. Water constitutes about 90% of this, with protein, trace minerals, fatty material, vitamins, and glucose contributing the remainder. A raw U.S. large egg contains around 33 grams of egg white with 3.6 grams of protein, 0.24 grams of carbohydrate, and 55 milligrams of sodium. It contains no cholesterol, and the energy content is about 17 calories. Egg white is an alkaline solution and contains around 149 proteins. The major proteins in egg white and their natural functions are described below. Ovalbumin is the most abundant protein in albumen. Classed as a phosphoglycoprotein, it converts during storage into s-ovalbumin (5% at the time of laying), which can reach up to 80% after six months of cold storage. Ovalbumin in solution is heat-resistant; its denaturation temperature is around 84 °C, but it can easily be denatured by physical stresses. Conalbumin/ovotransferrin is a glycoprotein which has the capacity to bind bi- and trivalent metal cations into a complex and is more heat-sensitive than ovalbumin. At its isoelectric pH (6.5), it can bind two cations and assume a red or yellow color. These metal complexes are more heat-stable than the native state. Ovomucoid is the major allergen from egg white and is a heat-resistant glycoprotein found to be a trypsin inhibitor. Lysozyme is a holoprotein which can lyse the wall of certain Gram-positive bacteria and is found at high levels in the chalaziferous layer and the chalazae, which anchor the yolk towards the middle of the egg. Ovomucin is a glycoprotein which may contribute to the gel-like structure of thick albumen. The amount of ovomucin in the thick albumen is four times as great as in the thin albumen. Foam The physical stress of beating egg whites can create a foam. Two types of physical stress are caused by beating them with a whisk: denaturation and coagulation. Denaturation occurs as the whisk drags the liquid through itself, creating a force that unfolds the protein molecules. Coagulation comes from the mixing of air into the whites, which causes the proteins to come out of their natural state. Because the proteins consist of amino acids, some hydrophilic (attracted to water) and some hydrophobic (repelled by water), these denatured proteins gather where the air and water meet, create multiple bonds with the other unraveled proteins, and thus become a foam, holding the incorporated air in place. 
Beaten egg whites are classified in three stages according to the peaks they form when the beater is lifted: soft, firm, and stiff peaks. Overbeaten egg whites take on a dry appearance and eventually collapse. Egg whites will not whip properly if they are exposed to any form of fat, such as cooking oils or the fats contained in egg yolk. Copper bowls have been used in France since the 18th century to stabilize egg foams. The copper in the bowl assists in forming tighter bonds with reactive sulfur groups in the egg proteins. The bonds created are so tight that the sulfur atoms are prevented from reacting with any other material. A silver-plated bowl has the same result as the copper bowl, as will a pinch of powdered copper supplement from a health store used in a glass bowl. Drawbacks of the copper bowl include the expense of the bowl itself and the difficulty of keeping it clean. Copper contamination from the bowl is minimal, as a cup of foam contains a tenth of a human's normal daily intake level. Health issues Although egg whites are prized as a source of low-fat, high-protein nutrition, a small number of people cannot eat them. Egg allergy is more common among infants than adults, and most children will outgrow it by the age of five. Allergic reactions against egg white are more common than reactions against egg yolk. In addition to true allergic reactions, some people experience a food intolerance to egg whites. Eggs are susceptible to Salmonella contamination. Thorough cooking eliminates the direct threat (i.e. cooked egg whites that are solid and not runny), but the threat of cross-contamination remains if people handle contaminated eggs and then touch other foods or items in the kitchen, thus spreading the bacteria. In August 2010, the FDA ordered the recall of 380 million eggs because of possible Salmonella contamination. Cooked eggs are a good source of biotin. However, daily consumption of raw egg whites for several months may result in biotin deficiency, due to their avidin content, as the avidin tightly binds biotin and prevents its absorption. Uses Egg white is a fining agent that can be used in the clarification and stabilization of wine. Egg white can also be added to shaken cocktails to create a delicate froth. Some protein powders also use egg whites as a primary source of protein. The albumen from egg white was used as a binding agent in early photography, from about 1855 to 1890; such prints were called albumen prints. In the 1750s, egg whites were believed to prevent swelling and were used for that purpose: egg white mixed with Armenian bole was applied to soothe afflicted areas of skin and was thought to help restore the fibers. Egg whites are also used in bookbinding during the gilding process, where it is referred to as 'glaire', and to give a book cover shine.
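The composition figures quoted earlier for a raw U.S. large egg white (about 33 grams with 3.6 grams of protein, 0.24 grams of carbohydrate, and roughly 17 calories) can be cross-checked with simple arithmetic. The sketch below does so in Python; the 4 kcal/g energy factors for protein and carbohydrate are the standard Atwater values from general nutrition practice, not figures from this article.

```python
# Cross-checking the quoted composition of a raw U.S. large egg white.
# The 4 kcal/g Atwater factors for protein and carbohydrate are a
# standard nutrition assumption, not a figure from this article.

white_g, protein_g, carb_g, quoted_kcal = 33.0, 3.6, 0.24, 17.0

protein_fraction = protein_g / white_g
print(f"protein: {protein_fraction:.1%} of the white's mass")  # ~10.9%

estimated_kcal = 4.0 * (protein_g + carb_g)  # virtually no fat present
print(f"estimated energy: {estimated_kcal:.1f} kcal (quoted: {quoted_kcal})")
```

The result, roughly 11% protein by mass and about 15 of the quoted 17 calories accounted for, is consistent with the description of egg white as about 90% water and 10% dissolved protein with almost no fat.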
Biology and health sciences
Animal reproduction
Biology
355859
https://en.wikipedia.org/wiki/Epoxide
Epoxide
In organic chemistry, an epoxide is a cyclic ether in which the ether forms a three-atom ring: two atoms of carbon and one atom of oxygen. This triangular structure has substantial ring strain, making epoxides highly reactive, more so than other ethers. They are produced on a large scale for many applications. In general, low-molecular-weight epoxides are colourless and nonpolar, and often volatile. Nomenclature A compound containing the epoxide functional group can be called an epoxy, epoxide, oxirane, or ethoxyline. Simple epoxides are often referred to as oxides. Thus, the epoxide of ethylene (C2H4) is ethylene oxide (C2H4O). Many compounds have alternative names; for instance, ethylene oxide is also called "oxirane". Some names emphasize the presence of the epoxide functional group, as in the compound 1,2-epoxyheptane, which can also be called 1,2-heptene oxide. A polymer formed from epoxide precursors is called an epoxy. However, few if any of the epoxide groups in the resin survive the curing process. Synthesis The dominant epoxides industrially are ethylene oxide and propylene oxide, which are produced on scales of approximately 15 and 3 million tonnes/year, respectively. Aside from ethylene oxide, most epoxides are generated when peroxide reagents donate a single oxygen atom to an alkene. Safety considerations weigh on these reactions because organic peroxides are prone to spontaneous decomposition or even combustion. Both tert-butyl hydroperoxide and ethylbenzene hydroperoxide can be used as oxygen sources during propylene oxidation (although a catalyst is required as well, and most industrial producers use dehydrochlorination instead). Ethylene oxidation The ethylene oxide industry generates its product from the reaction of ethylene and oxygen. Modified heterogeneous silver catalysts are typically employed. According to a reaction mechanism suggested in 1974, at least one ethylene molecule is totally oxidized for every six that are converted to ethylene oxide: 7 H2C=CH2 + 6 O2 → 6 C2H4O + 2 CO2 + 2 H2O Only ethylene produces an epoxide during incomplete combustion. Other alkenes fail to react usefully, even propylene, though TS-1-supported Au catalysts can selectively epoxidize propylene. Organic peroxides and metal catalysts Metal complexes are useful catalysts for epoxidations involving hydrogen peroxide and alkyl hydroperoxides. Metal-catalyzed epoxidations were first explored using tert-butyl hydroperoxide (TBHP). Association of TBHP with the metal (M) generates the active metal peroxy complex containing the MOOR group, which then transfers an O center to the alkene. Vanadium(II) oxide specifically catalyzes the epoxidation of less-substituted alkenes. Nucleophilic epoxidation Electron-deficient olefins, such as enones and acryl derivatives, can be epoxidized using nucleophilic oxygen compounds such as peroxides. The reaction proceeds by a two-step mechanism. First, the oxygen performs a nucleophilic conjugate addition to give a stabilized carbanion. This carbanion then attacks the same oxygen atom, displacing a leaving group from it, to close the epoxide ring. Transfer from peroxycarboxylic acids Peroxycarboxylic acids, which are more electrophilic than other peroxides, convert alkenes to epoxides without the intervention of metal catalysts. In specialized applications, dioxirane reagents (e.g. dimethyldioxirane) perform similarly, but are more explosive. Typical laboratory operations employ the Prilezhaev reaction. This approach involves the oxidation of the alkene with a peroxyacid such as mCPBA. 
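Returning to the overall equation for ethylene oxidation given above, a quick atom count confirms the stoichiometry and makes the implied selectivity explicit: six of every seven ethylene molecules end up as ethylene oxide, about 86%. The short Python check below is illustrative only; the equation itself is the one quoted from the 1974 mechanism.

```python
# Sanity check of the overall equation quoted above:
#   7 C2H4 + 6 O2 -> 6 C2H4O + 2 CO2 + 2 H2O
# Atom counts on the two sides should match, and 6 of every 7 ethylene
# molecules become ethylene oxide (~86% selectivity).

from collections import Counter

def atoms(formula_counts, coeff):
    """Scale a per-molecule atom count by its stoichiometric coefficient."""
    return Counter({el: n * coeff for el, n in formula_counts.items()})

C2H4  = {"C": 2, "H": 4}
O2    = {"O": 2}
C2H4O = {"C": 2, "H": 4, "O": 1}
CO2   = {"C": 1, "O": 2}
H2O   = {"H": 2, "O": 1}

lhs = atoms(C2H4, 7) + atoms(O2, 6)
rhs = atoms(C2H4O, 6) + atoms(CO2, 2) + atoms(H2O, 2)

assert lhs == rhs, "equation is not balanced"
print("balanced:", dict(lhs))     # {'C': 14, 'H': 28, 'O': 12}
print(f"selectivity: {6/7:.1%}")  # 85.7%
```

The passing assertion confirms that carbon, hydrogen, and oxygen all balance, at 14, 28, and 12 atoms per side respectively.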
Illustrative is the epoxidation of styrene with perbenzoic acid to styrene oxide. The stereochemistry of the reaction is quite sensitive: depending on the mechanism of the reaction and the geometry of the alkene starting material, cis and/or trans epoxide diastereomers may be formed. In addition, if other stereocenters are present in the starting material, they can influence the stereochemistry of the epoxidation. The reaction proceeds via what is commonly known as the "butterfly mechanism". The peroxide is viewed as an electrophile, and the alkene as a nucleophile. The reaction is considered to be concerted. The butterfly mechanism allows ideal positioning of the O–O σ* orbital for attack by the alkene's π electrons. Because two bonds are broken and formed to the epoxide oxygen, this is formally an example of a coarctate transition state. Asymmetric epoxidations Chiral epoxides can often be derived enantioselectively from prochiral alkenes. Many metal complexes give active catalysts, and the most important involve titanium, vanadium, or molybdenum. Hydroperoxides are also employed in catalytic enantioselective epoxidations, such as the Sharpless epoxidation and the Jacobsen epoxidation. Together with the Shi epoxidation, these reactions are useful for the enantioselective synthesis of chiral epoxides. Oxaziridine reagents may also be used to generate epoxides from alkenes. The Sharpless epoxidation reaction is one of the premier enantioselective chemical reactions. It is used to prepare 2,3-epoxyalcohols from primary and secondary allylic alcohols. The above reactions all use electrophilic reagents, but some asymmetric nucleophilic epoxidations are possible. Dehydrohalogenation and other γ eliminations Halohydrins react with base to give epoxides. The reaction is spontaneous because the energetic cost of introducing the ring strain (13 kcal/mol) is offset by the larger bond enthalpy of the newly introduced C-O bond (when compared to that of the cleaved C-halogen bond). Formation of epoxides from secondary halohydrins is predicted to occur faster than from primary halohydrins due to increased entropic effects in the secondary halohydrin, and tertiary halohydrins react (if at all) extremely slowly due to steric crowding. Starting with propylene chlorohydrin, most of the world's supply of propylene oxide arises via this route. An intramolecular epoxide formation reaction is one of the key steps in the Darzens reaction. In the Johnson–Corey–Chaykovsky reaction, epoxides are generated from carbonyl groups and sulfonium ylides. In this reaction, a sulfonium is the leaving group instead of chloride. Biosynthesis Epoxides are uncommon in nature. They usually arise via oxygenation of alkenes by the action of cytochrome P450 (but see also the short-lived epoxyeicosatrienoic acids, which act as signalling molecules, and the similar epoxydocosapentaenoic and epoxyeicosatetraenoic acids). Arene oxides are intermediates in the oxidation of arenes by cytochrome P450. For prochiral arenes (naphthalene, toluene, benzoates, benzopyrene), the epoxides are often obtained in high enantioselectivity. Reactions Ring-opening reactions dominate the reactivity of epoxides. Hydrolysis and addition of nucleophiles Epoxides react with a broad range of nucleophiles, for example, alcohols, water, amines, thiols, and even halides. With two often-nearly-equivalent sites of attack, epoxides exemplify "ambident substrates". 
Ring-opening regioselectivity in asymmetric epoxides generally follows the SN2 pattern of attack at the least-substituted carbon, but can be affected by carbocation stability under acidic conditions. This class of reactions is the basis of epoxy glues and the production of glycols. Both lithium aluminium hydride and aluminium hydride reduce epoxides through simple nucleophilic addition of hydride (H−), producing the corresponding alcohol. Polymerization and oligomerization Polymerization of epoxides gives polyethers. For example, ethylene oxide polymerizes to give polyethylene glycol, also known as polyethylene oxide. The reaction of an alcohol or a phenol with ethylene oxide, ethoxylation, is widely used to produce surfactants: ROH + n C2H4O → R(OC2H4)nOH With anhydrides, epoxides give polyesters. Metallation and deoxygenation Lithiation cleaves the ring to give β-lithioalkoxides. Epoxides can be deoxygenated using oxophilic reagents, with loss or retention of configuration. The combination of tungsten hexachloride and n-butyllithium gives the alkene. When treated with thiourea, epoxides convert to episulfides (thiiranes). Other reactions Epoxides undergo ring-expansion reactions, illustrated by the insertion of carbon dioxide to give cyclic carbonates. An epoxide adjacent to an alcohol can undergo the Payne rearrangement in base. Uses Ethylene oxide is widely used to generate detergents and surfactants by ethoxylation. Its hydrolysis affords ethylene glycol. It is also used for the sterilisation of medical instruments and materials. The reaction of epoxides with amines is the basis for the formation of epoxy glues and structural materials. A typical amine hardener is triethylenetetramine (TETA). Safety Epoxides are alkylating agents, making many of them highly toxic.
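As a numerical footnote to the ethoxylation reaction shown in the polymerization section above (ROH + n C2H4O → R(OC2H4)nOH), each ethylene oxide unit adds one oxyethylene repeat of about 44 g/mol to the product. The sketch below estimates product molar masses for a few chain lengths; the choice of lauryl alcohol (molar mass about 186.3 g/mol) as the starting alcohol is an illustrative assumption, not an example taken from this article.

```python
# Back-of-the-envelope molar masses for ethoxylation products:
#   ROH + n C2H4O -> R(OC2H4)nOH
# Each ethylene oxide adds one -OC2H4- repeat (~44.05 g/mol).
# Lauryl alcohol (~186.3 g/mol) is an assumed, illustrative substrate.

EO_UNIT_MW = 44.05  # g/mol per -OC2H4- repeat

def ethoxylate_mw(alcohol_mw: float, n_units: int) -> float:
    """Molar mass of the ethoxylated product R(OC2H4)nOH."""
    return alcohol_mw + n_units * EO_UNIT_MW

lauryl_alcohol_mw = 186.3
for n in (3, 7, 23):
    print(f"n = {n:2d}: {ethoxylate_mw(lauryl_alcohol_mw, n):.1f} g/mol")
```

Varying n in this way is exactly how surfactant manufacturers tune hydrophilicity: longer oxyethylene chains give a more water-soluble product from the same alcohol.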
Physical sciences
Esters and ethers
Chemistry
355989
https://en.wikipedia.org/wiki/Nudibranch
Nudibranch
Nudibranchs belong to the order Nudibranchia, a group of soft-bodied marine gastropod molluscs that shed their shells after their larval stage. They are noted for their often extraordinary colours and striking forms, and they have been given colourful nicknames to match, such as "clown", "marigold", "splendid", "dancer", "dragon", and "sea rabbit". Currently, about 3,000 valid species of nudibranchs are known. The word nudibranch comes from the Latin nudus, 'naked', and the Ancient Greek bránkhia, 'gills'. Nudibranchs are often casually called sea slugs, as they are a group of opisthobranchs (sea slugs) within the phylum Mollusca (molluscs), but many sea slugs belong to taxonomic groups that are not closely related to nudibranchs. A number of these other sea slugs, such as the photosynthetic Sacoglossa and the colourful Aglajidae, are often confused with nudibranchs. Distribution and habitat Nudibranchs occur in seas worldwide, ranging from the Arctic, through temperate and tropical regions, to the Southern Ocean around Antarctica. They are almost entirely restricted to salt water, although a few species are known to inhabit lower salinities in brackish water. Nudibranchs live at virtually all depths, from the intertidal zone to depths well over . The greatest diversity of nudibranchs is seen in warm, shallow reefs, although one nudibranch species was discovered at a depth near . This nudibranch, described in 2024 as Bathydevius, is the only known nudibranch with a bathypelagic lifestyle and is one of the very few to be bioluminescent. Nudibranchs are benthic animals, found crawling over the substrate. The only exceptions to this are the neustonic Glaucus nudibranchs, which float upside down just under the ocean's surface; the pelagic nudibranch Cephalopyge trematoides, which swims in the water column; the two pelagic species of Phylliroe; and the evolutionarily distinct, bathypelagic Bathydevius. Anatomical description The body forms of nudibranchs vary greatly. As opisthobranchs, unlike most other gastropods, they are apparently bilaterally symmetrical externally (but not internally), because they have undergone secondary detorsion. In all nudibranchs, the male and female sexual openings are on the right side of the body, reflecting their asymmetrical origins. They lack a mantle cavity. Some species have venomous appendages (cerata) on their sides, which deter predators. Many also have a simple gut and a mouth with a radula. The eyes in nudibranchs are simple and able to discern little more than light and dark. The eyes are set into the body, are about a quarter of a millimeter in diameter, and consist of a lens and five photoreceptors. Nudibranchs vary in adult size from . The adult form is without a shell or operculum (in shelled gastropods, the operculum is a bony or horny plate that can cover the opening of the shell when the body is withdrawn). In most species, there is a swimming veliger larva with a coiled shell, but the shell is shed at metamorphosis, when the larva transforms into the adult form. Some species have direct development, and the shell is shed before the animal emerges from the egg mass. The name nudibranch is appropriate, since the dorids (infraclass Anthobranchia) breathe through a "naked gill" shaped into branchial plumes in a rosette on their backs. By contrast, on the back of the aeolids in the clade Cladobranchia, brightly coloured sets of protruding organs called cerata are present. 
Nudibranchs have cephalic (head) tentacles, which are sensitive to touch, taste, and smell. Club-shaped rhinophores detect odors. Defence mechanisms In the course of their evolution, nudibranchs have lost their shells while developing alternative defence mechanisms. Some species evolved an external anatomy with textures and colours that mimic surrounding sessile invertebrate animals (often their prey sponges or soft corals), evading predators through camouflage. Other nudibranchs, as seen especially well on Chromodoris quadricolor, have an intensely bright and contrasting colour pattern that makes them especially conspicuous in their surroundings. Nudibranch molluscs are the most commonly cited examples of aposematism in marine ecosystems, but the evidence for this has been contested, mostly because few examples of mimicry are seen among species, many species are nocturnal or cryptic, and bright colours at the red end of the spectrum are rapidly attenuated as a function of water depth. For example, the Spanish dancer nudibranch (genus Hexabranchus), among the largest of tropical marine slugs, potently chemically defended, and brilliantly red and white, is nocturnal and has no known mimics. Other studies of nudibranch molluscs have concluded they are aposematically coloured, for example, the slugs of the family Phyllidiidae from Indo-Pacific coral reefs. Nudibranchs that feed on hydrozoids can store the hydrozoids' nematocysts (stinging cells) in the dorsal body wall, the cerata. These stolen nematocysts, called kleptocnidae, wander through the alimentary tract without harming the nudibranch. Once further into the organ, the cells are assimilated by intestinal protuberances and brought to specific placements on the creature's hind body. The specific mechanism by which nudibranchs protect themselves from the hydrozoids and their nematocysts is yet unknown, but special cells with large vacuoles probably play an important role. Similarly, some nudibranchs can also take in plant cells (symbiotic algae from soft corals) and reuse these to make food for themselves. The related group of sacoglossan sea slugs feed on algae and retain just the chloroplasts for their own photosynthetic use, a process known as kleptoplasty. Some of these species have been observed practising autotomy, severing portions of their body to remove parasites, and have been observed to regrow their heads if decapitated. Nudibranchs use a variety of chemical defences to aid in protection, but the strategy need not be lethal to be effective; in fact, good arguments exist that chemical defences should evolve to be distasteful rather than toxic. Some sponge-eating nudibranchs concentrate the chemical defences from their prey sponge in their bodies, rendering themselves distasteful to predators. One method of chemical defence used by nudibranchs is the production of secondary metabolites, which play an important role in mediating relationships among marine communities. The evidence that the chemical compounds used by dorid nudibranchs do in fact come from dietary sponges lies in the similarities between the metabolites of the prey and of the nudibranchs. Furthermore, nudibranchs contain a mixture of sponge chemicals when they are in the presence of multiple food sources, and they change defence chemicals with a concurrent change in diet. This, however, is not the only way for nudibranchs to develop chemical defences. 
The defence mechanisms of certain Antarctic marine species are believed to be controlled by biological factors like predation, competition, and selective pressures. Certain species can produce their own chemicals de novo, without dietary influence. Evidence for the different methods of chemical production comes from the characteristic uniformity of chemical composition across drastically different environments and geographic locations in de novo-producing species, compared to the wide variety of dietary and environmentally dependent chemical compositions in sequestering species. Another protection method is the release of an acidic mucus from the skin. When the animal is physically irritated or touched by another creature, it releases the mucus automatically to deter the attacker. Apparent production of sound In 1884, Philip Henry Gosse reported observations by "Professor Grant" (possibly Robert Edmond Grant) that two species of nudibranchs emit sounds that are audible to humans. Two very elegant species of Sea-slug, viz., Eolis punctata [i.e. Facelina annulicornis], and Tritonia arborescens [i.e. Dendronotus frondosus], certainly produce audible sounds. Professor Grant, who first observed the interesting fact in some specimens of the latter, which he was keeping in an aquarium, says of the sounds that 'they resemble very much the clink of a steel wire on the side of the jar, one stroke only being given at a time, and repeated at intervals of a minute or two; when placed in a large basin of water, the sound is much obscured and is like that of a watch, one stroke being repeated, as before, at intervals. The sound is longest and most often repeated when the Tritonia are lively and moving about and is not heard when they are cold and without any motion; in the dark, I have not observed any light emitted at the time of the stroke; no globule of air escapes to the surface of the water, nor is any ripple produced on the surface at the instant of the stroke; the sound, when in a glass vessel, is mellow and distinct.' The Professor has kept these Tritonia alive in his room for a month. During the whole period of their confinement, they have continued to produce the sounds with very little diminution of their original intensity. In a small apartment, they are audible at a distance of twelve feet. The sounds obviously proceed from the mouth of the animal, and at the instant of the stroke, we observe the lips suddenly separate as if to allow the water to rush into a small vacuum formed within. As these animals are hermaphrodites, requiring mutual impregnation, the sounds may possibly be a means of communication between them, or, if they are of an electric nature, they may be the means of defending from foreign enemies, one of the most delicate, defenceless, and beautiful Gasteropods that inhabit the deep. Life cycle Nudibranchs are hermaphroditic, thus having a set of reproductive organs for both sexes, but they cannot fertilize themselves. Mating usually takes a few minutes and involves a dance-like courtship. Nudibranchs typically deposit their eggs within a gelatinous spiral, which is often described as looking like a ribbon. The number of eggs varies; it can be as few as just 1 or 2 eggs (Vayssierea felis) or as many as an estimated 25 million (Aplysia fasciata). The eggs contain toxins from sea sponges as a means of deterring predators. After hatching, the juveniles look almost identical to their adult counterparts, albeit smaller. Juveniles may also have fewer cerata. 
The lifespan of nudibranchs can range from a few weeks to a year, depending on the species. Feeding and ecological role All known nudibranchs are carnivorous. Some feed on sponges, others on hydroids (e.g. Cuthona), others on bryozoans (phanerobranchs such as Tambja, Limacia, Plocamopherus and Triopha), and some eat other sea slugs or their eggs (e.g. Favorinus) or, on some occasions, are cannibals and prey on members of their own species. Other groups feed on tunicates (e.g. Nembrotha, Goniodoris), other nudibranchs (Roboastra, which are descended from tunicate-feeding species), barnacles (e.g. Onchidoris bilamellata), and anemones (e.g. the Aeolidiidae and other Cladobranchia). The surface-dwelling nudibranch Glaucus atlanticus is a specialist predator of siphonophores, such as the Portuguese man o' war. This predatory mollusc sucks air into its stomach to keep itself afloat and, using its muscular foot, clings to the surface film. If it finds a small victim, Glaucus simply envelops it with its capacious mouth, but if the prey is a larger siphonophore, the mollusc nibbles off its fishing tentacles, the ones carrying the most potent nematocysts. Like some others of its kind, Glaucus does not digest the nematocysts; instead, it uses them to defend itself by passing them from its gut to the surface of its skin. Taxonomy Nudibranchs are commonly divided into two main kinds, dorid and aeolid (also spelled eolid) nudibranchs: Dorids (clade Anthobranchia, Doridacea, or Doridoidea) are recognised by having an intact digestive gland and a feather-like branchial (gill) plume, which forms a cluster on the posterior part of the body, around the anus. Fringes on the mantle do not contain any intestines. Additionally, dorid nudibranchs commonly have distinct pockets, bumps, and/or mantle dermal formations, which are distortions on their skin used to store bioactive defence chemicals. Aeolids (clade Cladobranchia) have cerata (spread across the back) instead of the branchial plume. The cerata function in place of gills and facilitate gas exchange through the epidermis. Additionally, aeolids possess a branched digestive gland, which may extend into the cerata and often has tips that contain cnidosacs (stinging cells absorbed from prey species and then used by the nudibranch). They lack a mantle. Some are hosts to zooxanthellae. The exact systematics of nudibranchs are a topic of recent revision. Traditionally, nudibranchs have been treated as the order Nudibranchia, located in the gastropod mollusc subclass Opisthobranchia (the marine slugs, which consisted of nudibranchs, sidegill slugs, bubble snails, algae sap-sucking sea slugs, and sea hares). Since 2005, pleurobranchs (which had previously been grouped among sidegill slugs) have been placed alongside nudibranchs in the clade Nudipleura (recognising them as more closely related to each other than to other opisthobranchs). Since 2010, Opisthobranchia has been recognised as not a valid clade (it is paraphyletic), and instead Nudipleura has been placed as the first offshoot of Euthyneura (which is the dominant clade of gastropods). In 2024, a new family of deep-sea pelagic nudibranchs, Bathydeviidae, was described, containing the single genus Bathydevius. This family does not appear to be closely related to any other extant nudibranch and is the only known bathypelagic nudibranch taxon. Traditional hierarchy This classification was based on the work of Johannes Thiele (1931), built on the concepts of Henri Milne-Edwards (1848). 
Order Nudibranchia: Infraorder Anthobranchia Férussac, 1819 (dorids) Superfamily Doridoidea Rafinesque, 1815 Superfamily Doridoxoidea Bergh, 1900 Superfamily Onchidoridoidea Alder & Hancock, 1845 Superfamily Polyceroidea Alder & Hancock, 1845 Infraorder Cladobranchia Willan & Morton, 1984 (aeolids) Superfamily Aeolidioidea J. E. Gray, 1827 Superfamily Arminoidea Rafinesque, 1814 Superfamily Dendronotoidea Allman, 1845 Superfamily Metarminoidea Odhner in Franc, 1968 Early revisions Newer insights derived from morphological data and gene-sequence research seemed to confirm those ideas. On the basis of investigation of 18S rDNA sequence data, strong evidence supports the monophyly of the Nudibranchia and its two major groups, the Anthobranchia/Doridoidea and Cladobranchia. A study published in May 2001 again revised the taxonomy of the Nudibranchia. They were thus divided into two major clades: Anthobranchia (= Bathydoridoidea + Doridoidea) Dexiarchia nom. nov. (= Doridoxoidea + Dendronotoidea + Aeolidoidea + "Arminoidea"). However, according to the taxonomy of Bouchet & Rocroi (2005), currently the most up-to-date system for classifying the gastropods, the Nudibranchia are a subclade within the clade Nudipleura. The Nudibranchia are then divided into two clades, with a third described in 2024: Bathydeviidae Euctenidiacea (= Holohepatica) Gnathodoridacea (contains only Bathydorididae) Doridacea Doridoidea Phyllidioidea Onchidoridoidea Polyceroidea (= Phanerobranchiata Non Suctoria) Dexiarchia (= Actenidiacea) Pseudoeuctenidiacea (= Doridoxida) Cladobranchia (= Cladohepatica) Euarminida Dendronotida Aeolidida Unassigned Cladobranchia (previously Metarminoidea) Charcotiidae Dironidae Goniaeolididae Heroidae Proctonotidae Madrellidae Pinufiidae Embletoniidae Gallery This gallery shows some of the great variability in the color and form of nudibranchs, and nudibranch egg ribbons.
Biology and health sciences
Gastropods
Animals
356046
https://en.wikipedia.org/wiki/Ghost%20crab
Ghost crab
Ghost crabs are semiterrestrial crabs of the subfamily Ocypodinae. They are common shore crabs in tropical and subtropical regions throughout the world, inhabiting deep burrows in the intertidal zone. They are generalist scavengers and predators of small animals. The name "ghost crab" derives from their nocturnality and their generally pale coloration. They are also sometimes called sand crabs, though that name refers to various other crabs that do not belong to the subfamily. Characteristics of the subfamily include one claw being larger than the other, thick and elongated eyestalks, and a box-like body. The differences in claw sizes, however, are not as marked as in male fiddler crabs. The subfamily includes 22 species in two genera. Taxonomy Ocypodinae is one of two subfamilies in the family Ocypodidae, the other being the fiddler crab subfamily, Ucinae. Both subfamilies have members in which one of the claw-bearing legs (the chelipeds) is much larger than the other. However, only male fiddler crabs exhibit this, while both male and female ghost crabs have unequally sized claws. The difference is also much more pronounced among fiddler crab males. The fiddler crabs' carapaces are broadened at the front, while the carapaces of ghost crabs are more or less box-like. Lastly, the eyes of ghost crabs have large and elongated eyestalks with the corneas occupying the entire lower part, while in fiddler crabs the eyestalks are long and thin, with the corneas small and located at the tip. Ocypodinae was previously regarded as monotypic, with only one genus, Ocypode, classified under it. In 2013, though, Katsushi Sakai and Michael Türkay reclassified the gulf ghost crab into a separate genus, Hoplocypode, based on the differences of its gonopods (appendages modified into copulatory organs) from those of the rest of the members of the genus Ocypode. Genera The subfamily Ocypodinae currently contains 22 species in these two genera: Ocypode Weber, 1795 Hoplocypode Sakai & Türkay, 2013 Description Most ghost crabs have pale-colored bodies that blend in well with the sand, though they are capable of gradually changing body coloration to match their environments and the time of day. Some species are brightly colored, such as Ocypode gaudichaudii and Ocypode ryderi. Ghost crabs have elongated and swollen eyestalks with very large corneas on the bottom half. Their carapaces are deep and box-like, squarish when viewed from the top with straight or slightly curving sides. The regions of the carapace are usually not clearly defined. The "whips" of their antennules (antennular flagella) are small or rudimentary. They fold back into the body diagonally or almost vertically. The plate between them (the interantennular septum) is broad. The third pair of mouth appendages (maxillipeds) completely cover the mouth opening. A small orifice with edges thickly fringed with hair is found between the bases of the second and third pairs of walking legs. Ghost crab species can be most reliably identified by means of the area where they were recovered, the presence of "horns" (styles) on their eyestalks (exophthalmy), the pattern of stridulating (sound-producing) ridges on the inside surface of the palms of their larger claws, and the shape of the gonopods in males. Exophthalmy is exhibited by seven species in the subfamily: Ocypode brevicornis, Ocypode ceratophthalma, Ocypode gaudichaudii, Ocypode macrocera, Ocypode mortoni, Ocypode rotundata, and Ocypode saratan. 
All of them are found in the Indo-Pacific region with more or less limited ranges, with the exception of Ocypode ceratophthalma, which is found widely. Ocypode cursor, found in the Atlantic Ocean and the Mediterranean Sea, also possesses a tuft of bristles at the end of its eyestalks. Stridulating ridges also differ from species to species, with some displaying rows of tubercles, others displaying rows of smaller ridges (striae), or a combination of both. These are very important in identifying species, as they are displayed by both adults and juveniles (though they may be absent in newly regenerated claws). They are used by ghost crabs for communication. Both exophthalmy and stridulating ridges, however, cannot be reliably used to determine phylogenetic relationships between different species of ghost crabs. Sakai & Türkay (2013) regarded the shape of the gonopods as more suited for this. As gonopods are important in sexual reproduction, they are less likely to evolve randomly in response to the environment. Ghost crabs of the genus Hoplocypode can be distinguished from those in Ocypode by examining their gonopods. In the former, the first gonopod has a complex, hoof-shaped tip, while in the latter it is simple and curved. Ecology Ghost crabs dig deep burrows near the intertidal zone of open sandy beaches. The burrows are usually composed of a long shaft with a chamber at the end, occasionally with a second entrance shaft. They are semi-terrestrial and breathe oxygen from the air through moistened gills. They must periodically wet their gills with seawater, usually by taking water from moist sand or by running into the surf and letting the waves wash over them. However, they can remain underwater only for a limited time, as they would eventually drown. Ghost crabs are generalists, scavenging carrion and debris, as well as preying on small animals, including sea turtle eggs and hatchlings, clams, and other crabs. They are predominantly nocturnal. They remain in their burrows during the hottest part of the day, and throughout the coldest part of the winter. Ghost crabs are swift runners, darting away at the slightest sign of danger. They either head back to their burrows or into the sea to escape intruders. The gaits of ghost crabs alter as their speed increases. Observations on O. ceratophthalma show it can walk indefinitely using all four pairs of walking legs, occasionally alternating which side leads. At higher speeds, the fourth pair of legs is raised off the ground, and at the highest speeds, the crab runs using only the first and second pairs of walking legs. Ghost crabs utilize a varied acoustic communication system. They can create different sounds by striking the ground with their claws, rubbing their claws together to make a rasping sound, rubbing their legs to make a bubbly noise, and rubbing the teeth inside their stomachs to make a growling sound. The lateral teeth of the gastric mill possess a series of comb-like structures that rub against the median tooth to produce sounds with dominant frequencies below 2 kHz. Ghost crabs also have the ability to change colors to match their surroundings by adjusting the concentration and dispersal of pigments within their chromatophores. They can even match the specific colors of the grains of sand in their habitats. However, unlike metachrosis (which is a rapid change of colors), ghost crabs are only capable of morphological color change, which occurs over a longer span of time. In a study in 1964, individuals of O. 
ceratophthalma in Hawaii from two populations (one from a black-sand beach and the other from a white-sand beach) were observed to differ markedly in pigmentation, although they remained morphologically indistinguishable. Specimens taken from the black-sand beach were much darker than the specimens taken from the white-sand beach, exhibiting nearly 12 times as many black chromatophores. When the black-sand specimens were subsequently exposed to a white background, they were observed to gradually become lighter over the course of about a month. The opposite happened to white-sand specimens when placed on black backgrounds. In a 2013 study in Singapore, Ocypode ceratophthalma was also discovered to change color in response to the time of day. In a span of 24 hours, they were observed to alternate between lighter and darker coloration, being lightest at midday and darkest at night. However, when placed in a dark background, the crabs showed no significant changes in coloration. This suggests that, at least for short-term color changes, they follow a day-night cycle to determine body coloration, basing it on their biological clocks rather than on the brightness or darkness of their environments. The researchers believe relying on the time of day rather than on ambient light is more advantageous for survival in ghost crabs. It lets ghost crabs avoid changing color when in temporary shadow (for example, within their burrows) and thus still remain inconspicuous when they are once again illuminated by daylight. However, this behavior is only exhibited by juveniles. Ghost crabs lay their eggs in the sea, which develop into planktonic marine larvae. Distribution Ghost crabs dominate sandy shores in tropical and subtropical regions of the world, replacing the sandhoppers that predominate in cooler areas. Three species are found in the Atlantic Ocean and the Mediterranean Sea, and two occur on the eastern Pacific coast of the Americas. The rest of the species are found in the western Pacific and the Indian Ocean to the tip of southern Africa. Conservation Ghost crabs are negatively affected by human activity on sandy beaches, such as sand trampling by foot traffic, the building of seawalls, or the presence of inorganic pollutants. Due to their worldwide distribution and the ease with which their burrows can be surveyed, ghost crab burrows are regarded as valuable ecological indicators for quickly assessing the impact of human disturbance on beach habitats.
Biology and health sciences
Crabs and hermit crabs
Animals
356244
https://en.wikipedia.org/wiki/New%20World%20monkey
New World monkey
New World monkeys are the five families of primates that are found in the tropical regions of Mexico, Central America, and South America: Callitrichidae, Cebidae, Aotidae, Pitheciidae, and Atelidae. The five families are ranked together as the Ceboidea, the only extant superfamily in the parvorder Platyrrhini. Platyrrhini is derived from the Greek for "broad nosed", and their noses are flatter than those of other simians, with sideways-facing nostrils. Monkeys in the family Atelidae, such as the spider monkey, are the only primates to have prehensile tails. New World monkeys' closest relatives are the other simians, the Catarrhini ("down-nosed"), comprising Old World monkeys and apes. New World monkeys descend from African simians that colonized South America, a line that split off about 40 million years ago. Evolutionary history About 40 million years ago, the Simiiformes infraorder split into the parvorders Platyrrhini (New World monkeys) and Catarrhini (apes and Old World monkeys) somewhere on the African continent. Platyrrhini are currently conjectured to have dispersed to South America on a raft of vegetation across the Atlantic Ocean during the Eocene epoch, possibly via several intermediate, now submerged, islands. Several other groups of animals made the same journey across the Atlantic, notably including caviomorph rodents. At the time the New World monkeys dispersed to South America, the Isthmus of Panama had not yet formed, so ocean currents, unlike today, favoured westward dispersal; the climate was quite different; and the width of the Atlantic Ocean was about a third less than it is at present (possibly even narrower, based on current estimates of the Atlantic mid-ocean ridge spreading rate). The non-platyrrhine Ucayalipithecus of Amazonian Peru, which might have rafted across the Atlantic between ~35 and 32 million years ago, is nested within the extinct Parapithecoidea from the Eocene of Afro-Arabia, suggesting that there were at least two separate dispersal events of primates to South America. Parvimico and Perupithecus from Peru appear to be at the base of the Platyrrhini, as are Szalatavus, Lagonimico, and Canaanimico. Possible evidence for a third transatlantic dispersal event comes from a fossil molar belonging to Ashaninkacebus simpsoni, which has strong affinities with stem anthropoid primates of South Asian origin, the Eosimiidae. The chromosomal content of the ancestor species appears to have been 2n = 54. In extant species, the 2n value varies from 16 in the titi monkey to 62 in the woolly monkey. A Bayesian estimate of the most recent common ancestor of the extant species has a 95% credible interval of -. Classification The following is the listing of the various platyrrhine families, as defined by Rylands & Mittermeier (2009), and their position in the Order Primates: Order Primates Suborder Strepsirrhini: lemurs, lorises, galagos, etc. 
Suborder Haplorrhini: tarsiers + monkeys, including apes Infraorder Tarsiiformes: tarsiers Infraorder Simiiformes Parvorder Platyrrhini: New World monkeys Family Callitrichidae: marmosets and tamarins Family Cebidae: capuchins and squirrel monkeys Family Aotidae: night or owl monkeys (douroucoulis) Family Pitheciidae: titis, sakis, and uakaris Family Atelidae: howler, spider, woolly spider, and woolly monkeys Parvorder Catarrhini: Old World monkeys, apes (including humans) The arrangement of the New World monkey families, indeed the listing of which groups consist of families and which consist of lower taxonomic groupings, has changed over the years. McKenna & Bell (1997) used two families: Callitrichidae and Atelidae, with Atelidae divided into Cebinae, Pitheciinae, and Atelinae. Rosenberger (2002, following Horowitz 1999) demoted Callitrichidae to a subfamily, putting it under the newly raised Cebidae family. Groves (2005) used four families, but as a flat structure. One possible arrangement of the five families and their subfamilies of Rylands & Mittermeier can be seen in Silvestro et al. (2017). Characteristics New World monkeys are small to mid-sized primates, ranging from the pygmy marmoset (the world's smallest monkey), at and a weight of , to the southern muriqui, at and a weight of . New World monkeys differ slightly from Old World monkeys in several aspects. The most prominent phenotypic distinction is the nose, which is the feature used most commonly to distinguish between the two groups. The clade for New World monkeys, Platyrrhini, means "flat nosed". The noses of New World monkeys are flatter than the narrow noses of Old World monkeys, and have side-facing nostrils. New World monkeys are the only monkeys with prehensile tails, in comparison with the shorter, non-grasping tails of the anthropoids of the Old World. Prehensility has evolved at least twice in platyrrhines: in the family Atelidae (spider monkeys, woolly spider monkeys, howler monkeys, and woolly monkeys), and in capuchin monkeys (Cebus). Although prehensility is present in all of these primate species, skeletal and muscular-based morphological differences between the two groups indicate that the trait evolved separately through convergent evolution. The fully prehensile tails that have evolved in Atelidae allow the primates to suspend their entire body weight by only their tails, with arms and legs free for other foraging and locomotive activities. Semi-prehensile tails in Cebus can be used for balance by wrapping the tail around branches and supporting a large portion of the body's weight. New World monkeys (except for the howler monkeys of genus Alouatta) also typically lack the trichromatic vision of Old World monkeys. Colour vision in New World primates relies on a single gene on the X chromosome to produce pigments that absorb medium- and long-wavelength light, which is contrasted with short-wavelength light. As a result, males rely on a single medium/long pigment gene and are dichromatic, as are homozygous females. Heterozygous females may possess two alleles with different sensitivities within this range, and so can display trichromatic vision. Platyrrhines also differ from Old World monkeys in that they have twelve premolars instead of eight, giving a dental formula of 2.1.3.3 or 2.1.3.2 (2 incisors, 1 canine, 3 premolars, and 3 or 2 molars in each quadrant). 
This is in contrast with Old World anthropoids, including gorillas, chimpanzees, bonobos, siamangs, gibbons, orangutans, and most humans, which share a dental formula of 2.1.2.3. Many New World monkeys are small and almost all are arboreal, so knowledge of them is less comprehensive than that of the more easily observed Old World monkeys. Unlike most Old World monkeys, many New World monkeys form monogamous pair bonds and show substantial paternal care of young. They eat fruits, nuts, insects, flowers, bird eggs, spiders, and small mammals. Unlike humans and most Old World monkeys, their thumbs are not opposable (except in some cebids).
Biology and health sciences
Primates
null
356259
https://en.wikipedia.org/wiki/Old%20World%20monkey
Old World monkey
Old World monkeys are primates in the family Cercopithecidae. Twenty-four genera and 138 species are recognized, making it the largest primate family. Old World monkey genera include baboons (genus Papio), red colobus (genus Piliocolobus), and macaques (genus Macaca). Common names for other Old World monkeys include the talapoin, guenon, colobus, douc (douc langur, genus Pygathrix), vervet, gelada, mangabey (a group of genera), langur, mandrill, drill, surili (Presbytis), patas, and proboscis monkey. Phylogenetically, they are more closely related to apes than to New World monkeys, with the Old World monkeys and apes diverging from a common ancestor between 25 million and 30 million years ago. This clade, containing the Old World monkeys and the apes, diverged from a common ancestor with the New World monkeys around 45 to 55 million years ago. The individual species of Old World monkey are more closely related to each other than to apes or any other grouping, with a common ancestor around 14 million years ago. The smallest Old World monkey is the talapoin, with a head and body 34–37 cm in length, and weighing between 0.7 and 1.3 kg. The largest is the male mandrill, around 70 cm in length, and weighing up to 50 kg. Old World monkeys have a variety of facial features; some have snouts, some are flat-nosed, and many exhibit coloration. Most have tails, but they are not prehensile. Old World monkeys are native to Africa and Asia today, inhabiting numerous environments: tropical rain forests, savannas, shrublands, and mountainous terrain. They inhabited much of Europe in the past; today, the only survivors in Europe are the Barbary macaques of Gibraltar. Whether they were native to Gibraltar or were brought by humans is unknown. Some Old World monkeys are arboreal, such as the colobus monkeys; others are terrestrial, such as the baboons. Most are at least partially omnivorous, but all prefer plant matter, which forms the bulk of their diets. Most are highly opportunistic, primarily eating fruit, but also consuming almost any food item available, such as flowers, leaves, bulbs and rhizomes, insects, snails, small mammals, and garbage and handouts from humans.

Taxonomic classification and phylogeny

Two subfamilies are recognized: the Cercopithecinae, which are mainly African but include the diverse genus of macaques, which are Asian and North African; and the Colobinae, which includes most of the Asian genera but also the African colobus monkeys.
The Linnaean classification beginning with the superfamily is:

Superfamily Cercopithecoidea
  Family Cercopithecidae: Old World monkeys
    Subfamily Cercopithecinae
      Tribe Cercopithecini
        Genus Allenopithecus – Allen's swamp monkey
        Genus Miopithecus – talapoins
        Genus Erythrocebus – patas monkeys
        Genus Chlorocebus
        Genus Cercopithecus – guenons
        Genus Allochrocebus – terrestrial guenons
      Tribe Papionini
        Genus Macaca – macaques
        Genus Lophocebus – crested mangabeys
        Genus Rungwecebus – kipunji
        Genus Papio – baboons
        Genus Theropithecus – gelada
        Genus Cercocebus – white-eyelid mangabeys
        Genus Mandrillus – mandrill and drill
    Subfamily Colobinae
      African group
        Genus Colobus – black-and-white colobuses
        Genus Piliocolobus – red colobuses
        Genus Procolobus – olive colobus
      Langur (leaf monkey) group
        Genus Semnopithecus – gray langurs or Hanuman langurs
        Genus Trachypithecus – lutungs
        Genus Presbytis – surilis
      Odd-nosed group
        Genus Pygathrix – doucs
        Genus Rhinopithecus – snub-nosed monkeys
        Genus Nasalis – proboscis monkey
        Genus Simias – pig-tailed langur

The distinction between apes and monkeys is complicated by the traditional paraphyly of monkeys: apes emerged as a sister group of Old World monkeys within the catarrhines, which are a sister group of the New World monkeys. Therefore, cladistically, apes, catarrhines and related contemporary extinct groups, such as Parapithecidae, are monkeys as well, for any consistent definition of "monkey". "Old World monkey" may also legitimately be taken to mean all the catarrhines, including apes and extinct species such as Aegyptopithecus, in which case the apes, Cercopithecoidea and Aegyptopithecus, as well as (under an even more expanded definition) even the Platyrrhini, emerged within the Old World monkeys. Historically, monkeys from the "Old World" (Afro-Arabia) somehow drifted to the "New World" some 40 million years ago, forming the "New World monkeys" (platyrrhines). Apes would emerge later within the Afro-Arabian group.

Characteristics

Old World monkeys are medium to large in size, and range from arboreal forms, such as the colobus monkeys, to fully terrestrial forms, such as the baboons. The smallest is the talapoin, with a head and body 34–37 cm in length, and weighing between 0.7 and 1.3 kilograms, while the largest is the male mandrill (the females of the species being significantly smaller), at around 70 cm in length, and weighing up to 50 kilograms. Most Old World monkeys have tails (the family name means "tailed ape"), unlike the tailless apes. The tails of Old World monkeys are not prehensile, unlike those of the New World monkeys (platyrrhines). The distinction of catarrhines from platyrrhines depends on the structure of the rhinarium, and the distinction of Old World monkeys from apes depends on dentition (the number of teeth is the same in both, but they are shaped differently). In platyrrhines, the nostrils face sideways, while in catarrhines, they face downward. Other distinctions include both a tubular ectotympanic (ear bone) and eight, not twelve, premolars in catarrhines, giving them a dental formula of 2.1.2.3. Several Old World monkeys have anatomical oddities. For example, the colobus monkeys have stubs for thumbs to assist with their arboreal movement, the proboscis monkey has an extraordinary nose, while the snub-nosed monkeys have almost no nose at all. The penis of the male mandrill is crimson and the scrotum is lilac; the face is also brightly colored. The coloration is more pronounced in dominant males.
Habitat and distribution

The Old World monkeys are native to Africa and Asia today, inhabiting numerous environments: tropical rain forests, savannas, shrublands, and mountainous terrain. They inhabited much of Europe during the Neogene period; today the only survivors in Europe are the Barbary macaques of Gibraltar.

Behaviour and ecology

Diet

Most Old World monkeys are at least partially omnivorous, but all prefer plant matter, which forms the bulk of their diet. Leaf monkeys are the most vegetarian, subsisting primarily on leaves and eating only a small number of insects, while the other species are highly opportunistic, primarily eating fruit, but also consuming almost any food items available, such as flowers, leaves, bulbs and rhizomes, insects, snails, and even small vertebrates. The Barbary macaque's diet consists mostly of leaves and roots, though it will also eat insects and uses cedar trees as a water source.

Reproduction

Gestation in the Old World monkeys lasts between five and seven months. Births are usually single, although, as with humans, twins occur occasionally. The young are born relatively well-developed, and are able to cling onto their mother's fur with their hands from birth. Compared with most other mammals, they take a long time to reach sexual maturity, with four to six years being typical of most species.

Social systems

In most species, daughters remain with their mothers for life, so that the basic social group among Old World monkeys is a matrilineal troop. Males leave the group on reaching adolescence, and find a new troop to join. In many species, only a single adult male lives with each group, driving off all rivals, but others are more tolerant, establishing hierarchical relationships between dominant and subordinate males. Group sizes are highly variable, even within species, depending on the availability of food and other resources.
Biology and health sciences
Primates
null
356988
https://en.wikipedia.org/wiki/Pleural%20effusion
Pleural effusion
A pleural effusion is an accumulation of excessive fluid in the pleural space, the potential space that surrounds each lung. Under normal conditions, pleural fluid is secreted by the parietal pleural capillaries at a rate of 0.6 millilitres per kilogram of body weight per hour, and is cleared by lymphatic absorption, leaving behind only 5–15 millilitres of fluid, which helps to maintain a functional vacuum between the parietal and visceral pleurae. Excess fluid within the pleural space can impair inspiration by upsetting the functional vacuum and hydrostatically increasing the resistance against lung expansion, resulting in a fully or partially collapsed lung. Various kinds of fluid can accumulate in the pleural space, such as serous fluid (hydrothorax), blood (hemothorax), pus (pyothorax, more commonly known as pleural empyema), chyle (chylothorax), or very rarely urine (urinothorax) or feces (coprothorax). When unspecified, the term "pleural effusion" normally refers to hydrothorax. A pleural effusion can also be compounded by a pneumothorax (accumulation of air in the pleural space), leading to a hydropneumothorax.

Types

Various methods can be used to classify pleural fluid.

By the origin of the fluid:
  Serous fluid (hydrothorax)
  Blood (haemothorax)
  Chyle (chylothorax)
  Pus (pyothorax or empyema)
  Urine (urinothorax)

By pathophysiology:
  Transudative pleural effusion
  Exudative pleural effusion

By the underlying cause (see next section).

Causes

Transudative

The most common causes of transudative pleural effusion in the United States are heart failure and cirrhosis. Nephrotic syndrome, leading to the loss of large amounts of albumin in urine and resultant low albumin levels in the blood and reduced colloid osmotic pressure, is another less common cause of pleural effusion. Pulmonary emboli were once thought to cause transudative effusions, but have been recently shown to be exudative. The mechanism for the exudative pleural effusion in pulmonary thromboembolism is probably related to increased permeability of the capillaries in the lung, which results from the release of cytokines or inflammatory mediators (e.g. vascular endothelial growth factor) from the platelet-rich blood clots. The excessive interstitial lung fluid traverses the visceral pleura and accumulates in the pleural space.

Conditions associated with transudative pleural effusions include:
  Congestive heart failure
  Liver cirrhosis
  Severe hypoalbuminemia
  Nephrotic syndrome
  Acute atelectasis
  Myxedema
  Peritoneal dialysis
  Meigs's syndrome
  Obstructive uropathy
  End-stage kidney disease

Exudative

When a pleural effusion has been determined to be exudative, additional evaluation is needed to determine its cause, and amylase, glucose, pH and cell counts should be measured. Red blood cell counts are elevated in cases of bloody effusions (for example after heart surgery or hemothorax from incomplete evacuation of blood). Amylase levels are elevated in cases of esophageal rupture, pancreatic pleural effusion, or cancer. Glucose is decreased with cancer, bacterial infections, or rheumatoid pleuritis. pH is low in empyema (<7.2) and may be low in cancer. If cancer is suspected, the pleural fluid is sent for cytology. If cytology is negative, and cancer is still suspected, either a thoracoscopy or a needle biopsy of the pleura may be performed. Gram staining and culture should also be done. If tuberculosis is possible, examination for Mycobacterium tuberculosis (either a Ziehl–Neelsen or Kinyoun stain, and mycobacterial cultures) should be done.
A polymerase chain reaction for tuberculous DNA may be done, or adenosine deaminase or interferon gamma levels may also be checked. The most common causes of exudative pleural effusions are bacterial pneumonia, cancer (with lung cancer, breast cancer, and lymphoma causing approximately 75% of all malignant pleural effusions), viral infection, and pulmonary embolism. Another common cause is after heart surgery, when incompletely drained blood can lead to an inflammatory response that causes exudative pleural fluid.

Conditions associated with exudative pleural effusions include:
  Parapneumonic effusion due to pneumonia
  Malignancy (either lung cancer or metastases to the pleura from elsewhere)
  Infection (empyema due to bacterial pneumonia)
  Trauma
  Pulmonary infarction
  Pulmonary embolism
  Autoimmune disorders
  Pancreatitis
  Ruptured esophagus (Boerhaave's syndrome)
  Rheumatoid pleurisy
  Drug-induced lupus

Other/ungrouped

Other causes of pleural effusion include tuberculosis (though stains of pleural fluid are only rarely positive for acid-fast bacilli, this is the most common cause of pleural effusions in some developing countries), autoimmune disease such as systemic lupus erythematosus, bleeding (often due to chest trauma), chylothorax (most commonly caused by trauma), and accidental infusion of fluids. Less common causes include esophageal rupture or pancreatic disease, intra-abdominal abscesses, rheumatoid arthritis, asbestos pleural effusion, mesothelioma, Meigs's syndrome (ascites and pleural effusion due to a benign ovarian tumor), and ovarian hyperstimulation syndrome. Pleural effusions may also occur through medical or surgical interventions, including the use of medications (pleural fluid is usually eosinophilic), coronary artery bypass surgery, abdominal surgery, endoscopic variceal sclerotherapy, radiation therapy, liver or lung transplantation, insertion of a ventricular shunt as a treatment for hydrocephalus, and intra- or extravascular insertion of central lines.

Pathophysiology

Pleural fluid is secreted by the parietal layer of the pleura by way of bulk flow and reabsorbed by the lymphatics in the most dependent parts of the parietal pleura, primarily the diaphragmatic and mediastinal regions. Exudative pleural effusions occur when the pleura is damaged, e.g., by trauma, infection, or malignancy, and transudative pleural effusions develop when there is either excessive production of pleural fluid or the resorption capacity is reduced. Light's criteria can be used to differentiate between exudative and transudative pleural effusions.

Diagnosis

A pleural effusion is usually diagnosed on the basis of medical history and physical exam, and confirmed by a chest X-ray. Once the accumulated fluid exceeds 300 mL, there are usually detectable clinical signs, such as decreased movement of the chest on the affected side, dullness to percussion over the fluid, diminished breath sounds on the affected side, decreased vocal resonance and fremitus (though this is an inconsistent and unreliable sign), and pleural friction rub. Above the effusion, where the lung is compressed, there may be bronchial breathing sounds and egophony. A large effusion may cause tracheal deviation away from the effusion.
A systematic review (2009) published as part of the Rational Clinical Examination Series in the Journal of the American Medical Association showed that dullness to conventional percussion was most accurate for diagnosing pleural effusion (summary positive likelihood ratio, 8.7; 95% confidence interval, 2.2–33.8), while the absence of reduced tactile vocal fremitus made pleural effusion less likely (negative likelihood ratio, 0.21; 95% confidence interval, 0.12–0.37).

Imaging

A pleural effusion appears as an area of whiteness on a standard posteroanterior chest X-ray. Normally, the space between the visceral pleura and the parietal pleura cannot be seen. A pleural effusion infiltrates the space between these layers. Because the pleural effusion has a density similar to water, it can be seen on radiographs. Since the effusion has greater density than the rest of the lung, it gravitates towards the lower portions of the pleural cavity. The pleural effusion behaves according to basic fluid dynamics, conforming to the shape of the pleural space, which is determined by the lung and chest wall. If the pleural space contains both air and fluid, then a horizontal air–fluid level will be present, instead of the fluid conforming to the lung space. Chest radiographs in the lateral decubitus position (with the patient lying on the side of the pleural effusion) are more sensitive and can detect as little as 50 mL of fluid. Between 250 and 600 mL of fluid must be present before upright chest X-rays can detect a pleural effusion (e.g., blunted costophrenic angles). Chest computed tomography is more accurate for diagnosis and may be obtained to better characterize the presence, size, and characteristics of a pleural effusion. Lung ultrasound, nearly as accurate as CT and more accurate than chest X-ray, is increasingly being used at the point of care to diagnose pleural effusions, with the advantage that it is a safe, dynamic, and repeatable imaging modality. To increase the diagnostic accuracy of sonographic detection of pleural effusion, markers such as the boomerang and VIP signs can be utilized.

Thoracentesis

Once a pleural effusion is diagnosed, its cause must be determined. Pleural fluid is drawn out of the pleural space in a process called thoracentesis, and it should be done in almost all patients who have pleural fluid that is at least 10 mm in thickness on CT, ultrasonography, or lateral decubitus X-ray and that is new or of uncertain etiology. In general, the only patients who do not require thoracentesis are those who have heart failure with symmetric pleural effusions and no chest pain or fever; in these patients, diuresis can be tried, and thoracentesis is avoided unless effusions persist for more than 3 days. In a thoracentesis, a needle is inserted through the back of the chest wall in the sixth, seventh, or eighth intercostal space on the midaxillary line, into the pleural space. The use of ultrasound to guide the procedure is now the standard of care, as it increases accuracy and decreases complications.
After removal, the fluid may then be evaluated for:
  Chemical composition, including protein, lactate dehydrogenase (LDH), albumin, amylase, pH, and glucose
  Gram stain and culture to identify possible bacterial infections
  White and red blood cell counts and differential white blood cell counts
  Cytopathology to identify cancer cells, which may also identify some infective organisms
  Other tests as suggested by the clinical situation – lipids, fungal culture, viral culture, tuberculosis cultures, lupus cell prep, specific immunoglobulins

Light's criteria

Definitions of the terms "transudate" and "exudate" are the source of much confusion. Briefly, transudate is produced through pressure filtration without capillary injury, while exudate is "inflammatory fluid" leaking between cells. Transudative pleural effusions are defined as effusions that are caused by systemic factors that alter the pleural equilibrium, or Starling forces. The components of the Starling forces – hydrostatic pressure, permeability, and oncotic pressure (effective pressure due to the composition of the pleural fluid and blood) – are altered in many diseases, e.g., left ventricular failure, kidney failure, liver failure, and cirrhosis. Exudative pleural effusions, by contrast, are caused by alterations in local factors that influence the formation and absorption of pleural fluid (e.g., bacterial pneumonia, cancer, pulmonary embolism, and viral infection). An accurate diagnosis of the cause of the effusion, transudate versus exudate, relies on a comparison of the chemistries in the pleural fluid to those in the blood, using Light's criteria. According to Light's criteria (Light, et al. 1972), a pleural effusion is likely exudative if at least one of the following exists:
  The ratio of pleural fluid protein to serum protein is greater than 0.5
  The ratio of pleural fluid LDH to serum LDH is greater than 0.6
  Pleural fluid LDH is greater than 0.6 times (two-thirds of) the normal upper limit for serum. Different laboratories have different values for the upper limit of serum LDH, but examples include 200 and 300 IU/l.

The sensitivity and specificity of Light's criteria for detection of exudates have been measured in many studies and are usually reported to be around 98% and 80%, respectively. This means that although Light's criteria are relatively accurate, twenty percent of patients who are identified by Light's criteria as having exudative pleural effusions actually have transudative pleural effusions. Therefore, if a patient identified by Light's criteria as having an exudative pleural effusion appears clinically to have a condition that usually produces transudative effusions, additional testing is needed. In such cases, albumin levels in blood and pleural fluid are measured. If the difference between the albumin level in the blood and the pleural fluid is greater than 1.2 g/dL (12 g/L), this suggests that the patient has a transudative pleural effusion. However, pleural fluid testing is not perfect, and the final decision about whether a fluid is a transudate or an exudate is based not on chemical analysis of the fluid, but on an accurate diagnosis of the disease that produces the fluid. The traditional definitions of transudate as a pleural effusion due to systemic factors and an exudate as a pleural effusion due to local factors have been used since 1940 or earlier (Light et al., 1972).
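Because Light's criteria reduce to three threshold comparisons, they are straightforward to express in code. The following is a minimal, illustrative Python sketch, added here for clarity and not part of the source article, of the criteria and the albumin-gradient fallback described above; the function names and example values are hypothetical, and the upper limit of normal serum LDH is laboratory-dependent.

# Illustrative sketch of Light's criteria as described above.
# All names and example values are hypothetical; laboratory
# reference ranges (e.g. upper limit of normal serum LDH) vary.

def is_exudate_by_lights_criteria(
    pleural_protein: float, serum_protein: float,   # g/dL
    pleural_ldh: float, serum_ldh: float,           # IU/L
    serum_ldh_upper_limit: float = 222.0,           # IU/L, lab-dependent
) -> bool:
    """Return True if any one of Light's three criteria is met."""
    return (
        pleural_protein / serum_protein > 0.5
        or pleural_ldh / serum_ldh > 0.6
        or pleural_ldh > (2.0 / 3.0) * serum_ldh_upper_limit
    )

def albumin_gradient_suggests_transudate(
    serum_albumin: float, pleural_albumin: float    # g/dL
) -> bool:
    """Fallback check described above: a serum-minus-pleural albumin
    gradient greater than 1.2 g/dL suggests a transudate, even when
    Light's criteria label the fluid exudative."""
    return serum_albumin - pleural_albumin > 1.2

if __name__ == "__main__":
    # Hypothetical values: protein ratio 4.1/7.0 > 0.5, so exudative.
    print(is_exudate_by_lights_criteria(
        pleural_protein=4.1, serum_protein=7.0,
        pleural_ldh=320.0, serum_ldh=180.0))  # True

As the article notes, such a mechanical classification is only a starting point; the final transudate-versus-exudate decision rests on the clinical diagnosis, not the chemistry alone.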
Prior to Light's landmark study, which was based on work by Chandrasekhar, investigators unsuccessfully attempted to use other criteria, such as specific gravity, pH, and protein content of the fluid, to differentiate between transudates and exudates. Light's criteria are highly statistically sensitive for exudates (although not very statistically specific). More recent studies have examined other characteristics of pleural fluid that may help to determine whether the process producing the effusion is local (exudate) or systemic (transudate). However, it should be borne in mind that Light's criteria are still the most widely used criteria. The Rational Clinical Examination Series review found that bilateral effusions, symmetric and asymmetric, are the most common distribution in heart failure (60% of effusions in heart failure will be bilateral). When there is asymmetry in heart failure-associated pleural effusions (either unilateral or one side larger than the other), the right side is usually more involved than the left. Most hospitals now use safer single-use disposable trocars; because these are single use, they are always sharp and carry a much smaller risk of cross-patient contamination.

Treatment

Treatment depends on the underlying cause of the pleural effusion. Therapeutic aspiration may be sufficient; larger effusions may require insertion of an intercostal drain (either pigtail or surgical). When managing these chest tubes, it is important to make sure the chest tubes do not become occluded or clogged. A clogged chest tube in the setting of continued production of fluid will result in residual fluid being left behind when the chest tube is removed. This fluid can lead to complications such as hypoxia due to lung collapse from the fluid, or fibrothorax if scarring occurs. Repeated effusions may require chemical (talc, bleomycin, tetracycline/doxycycline) or surgical pleurodesis, in which the two pleural surfaces are scarred to each other so that no fluid can accumulate between them. This is a surgical procedure that involves inserting a chest tube, then either mechanically abrading the pleura or inserting the chemicals to induce a scar. This requires the chest tube to stay in until the fluid drainage stops. This can take days to weeks and can require prolonged hospitalization. If the chest tube becomes clogged, fluid will be left behind and the pleurodesis will fail. Pleurodesis fails in as many as 30% of cases. An alternative is to place a PleurX Pleural Catheter or Aspira Drainage Catheter. This is a 15 Fr chest tube with a one-way valve. Each day the patient or caregivers connect it to a simple vacuum tube and remove from 600 to 1000 mL of fluid; this can be repeated daily. When not in use, the tube is capped. This allows patients to be outside the hospital. For patients with malignant pleural effusions, it allows them to continue chemotherapy if indicated. Generally, the tube stays in for about 30 days, and is then removed when the space undergoes spontaneous pleurodesis. Tubercular pleural effusion is one of the common extrapulmonary forms of tuberculosis. Treatment consists of antituberculosis treatment (ATT). The currently recommended ATT regimen is two months of isoniazid, rifampicin, ethambutol and pyrazinamide, followed by four months of isoniazid, rifampicin and ethambutol.
Biology and health sciences
Specific diseases
Health
357346
https://en.wikipedia.org/wiki/Hundred%20%28county%20division%29
Hundred (county division)
A hundred is an administrative division that is geographically part of a larger region. It was formerly used in England, Wales, some parts of the United States, Denmark, Sweden, Finland, Norway, and in Cumberland County in the British Colony of New South Wales. It is still used in other places, including in Australia (in South Australia and the Northern Territory). Other terms for the hundred in English and other languages include wapentake, herred (Danish and Bokmål Norwegian), herad (Nynorsk Norwegian), härad or hundare (Swedish), Harde (German), hiird (North Frisian), kihlakunta (Finnish), and cantref (Welsh). In Ireland, a similar subdivision of counties is referred to as a barony, and a hundred is a subdivision of a particularly large townland (most townlands are not divided into hundreds).

Etymology

The origin of the division of counties into hundreds is described by the Oxford English Dictionary (OED) as "exceedingly obscure". It may once have referred to an area of 100 hides; in early Anglo-Saxon England a hide was the amount of land farmed by and required to support a peasant family, but by the eleventh century in many areas it supported four families. Alternatively, the hundred may have been an area originally settled by one "hundred" men at arms, or the area liable to provide one "hundred" men under arms. In this early medieval use, the number term "hundred" can itself be unclear, meaning the "short" hundred (100) or, in some contexts, the long hundred of 120. There was an equivalent traditional Germanic system. In Old High German, a huntari was a division of a gau, but the OED believes that the link between the two is not established.

England

Administrative functions

From the 11th century in England, and to a lesser extent from the 16th century in Wales, until the middle of the 19th century, the annual assemblies had varying degrees of power at a local level in the feudal system. Of chief importance was their more regular use for taxation, and six centuries of taxation returns for the divisions survive to this day. Groupings of divisions, small shires, were used to define parliamentary constituencies from 1832 to 1885. On the redistribution of seats in 1885, a different county subdivision, the petty sessional division, was used. Hundreds were also used to administer the first five national censuses, from 1801 to 1841. The system of county divisions was not as stable as the system of counties being established at the time, and lists frequently differ on how many hundreds a county had. In many parts of the country, the Domesday Book contained a radically different set of divisions from that which later became established. The numbers of divisions in each county varied widely. Leicestershire had six (up from four at Domesday), whereas Devon, nearly three times the size, had 32. By the end of the 19th century, several single-purpose subdivisions of counties, such as poor law unions, sanitary districts, and highway districts, had sprung up, which, together with the introduction of urban districts and rural districts in 1894, mostly replaced the role of the parishes, and to a lesser extent the less extensive role of the hundreds. The division names gave their names to multiple modern local government districts.

Hundred

In south and western England, a hundred was the division of a shire for military and judicial purposes under the common law, which could have varying extent of common feudal ownership, from complete suzerainty to minor royal or ecclesiastical prerogatives and rights of ownership.
Until the introduction of districts by the Local Government Act 1894, hundreds were the only widely used assessment unit intermediate in size between the parish, with its various administrative functions, and the county, with its formal, ceremonial functions. The term "hundred" is first recorded in the laws of Edmund I (939–46) as a measure of land and the area served by a hundred court. In the Midlands, they often covered an area of about 100 hides, but this did not apply in the south; this may suggest that it was an ancient West Saxon measure that was applied rigidly when Mercia became part of the newly established English kingdom in the 10th century. The Hundred Ordinance, which dates to the middle of the century, provided that the court was to meet monthly, and thieves were to be pursued by all the leading men of the district. During Norman times, the hundred would pay geld based on the number of hides. To assess how much everyone had to pay, a clerk and a knight were sent by the king to each county; they sat with the shire-reeve (or sheriff) of the county and a select group of local knights. There would be two knights from each hundred. After it was determined what geld had to be paid, the bailiff and knights of the hundred were responsible for getting the money to the sheriff, and the sheriff for getting it to the Exchequer. Above the hundred was the shire, under the control of a sheriff. Hundred boundaries were independent of both parish and county boundaries, although often aligned, meaning that a hundred could be split between counties, or a parish could be split between hundreds. Exceptionally, in the counties of Kent and Sussex, there was a sub-division intermediate in size between the hundred and the shire: several hundreds were grouped together to form lathes in Kent and rapes in Sussex. At the time of the Norman conquest of England, Kent was divided into seven lathes and Sussex into four rapes.

Hundred courts

Over time, the principal functions of the hundred became the administration of law and the keeping of the peace. By the 12th century, the hundred court was held twelve times a year. This was later increased to fortnightly, although an ordinance of 1234 reduced the frequency to once every three weeks. In some hundreds, courts were held at a fixed place; in others, courts moved with each sitting to a different location. The main duty of the hundred court was the maintenance of the frankpledge system. The court was formed of twelve freeholders, or freemen. According to a 13th-century statute, freeholders did not have to attend their lord's manorial courts, thus any suits involving them would be heard in a hundred court. For especially serious crimes, the hundred was under the jurisdiction of the Crown; the chief magistrate was a sheriff, and his circuit was called the sheriff's tourn. However, many hundreds came into private hands, with the lordship of the hundred being attached to the principal manor of the area and becoming hereditary. Helen Cam estimated that even before the Conquest, over 130 hundreds were in private hands, while an inquest of 1316 found that by that date 388 of 628 named hundreds were held not by the Crown, but by its subjects. Where a hundred was under a lord, a steward, acting as a judge and the chief official of the lord of the manor, was appointed in place of a sheriff. The importance of the hundred courts declined from the 17th century, and most of their powers were extinguished with the establishment of county courts in 1867.
The remaining duty of the inhabitants of a hundred to make good damages caused by riot was ended by the Riot (Damages) Act 1886, when the cost was transferred to the county police rate. The jurisdiction of hundred courts was curtailed by the Administration of Justice Act 1977.

Chiltern Hundreds

The steward of the Chiltern Hundreds is notable as a legal fiction, owing to a quirk of British parliamentary law. A Crown Steward was appointed to maintain law and order in the area, but these duties ceased to be performed in the 16th century, and the holder ceased to gain any benefits during the 17th century. The position has since been used as a procedural device to allow resignation from the British House of Commons, as a (formerly) remunerated office of the Crown.

Wapentake

A wapentake, an Old Norse-derived term common in Northern England, was the equivalent of the Anglo-Saxon hundred in the northern Danelaw. In the Domesday Book of 1086, the term is used instead of hundreds in Yorkshire, the Five Boroughs of Derby, Leicester, Lincoln, Nottingham and Stamford, and also sometimes in Northamptonshire. The laws in wapentakes were similar to those in hundreds, with minor variations. According to the first-century historian Tacitus, in Scandinavia the wapentake referred to a vote passed at an assembly by the brandishing of weapons. In some counties, such as Leicestershire, the wapentakes recorded at the time of the Domesday Book later evolved into hundreds. In others, such as Lincolnshire, the term remained in use. Although no longer part of local government, there is some correspondence between the rural deanery and the former wapentake or hundred, especially in the East Midlands, the Buckingham Archdeaconry and the York Diocese.

Wales

In Wales, an ancient Celtic system of division called cantrefi (a hundred farmsteads; singular cantref) had existed for centuries and was of particular importance in the administration of the Welsh law. The antiquity of the cantrefi is demonstrated by the fact that they often mark the boundary between dialects. Some were originally kingdoms in their own right; others may have been artificial units created later. With the coming of Christianity, the llan (similar to the parish) based Celtic churches often took the borders of the older cantrefi, and the same happened when Norman "hundreds" were enforced on the people of Wales. Each cantref had its own court, which was an assembly of the uchelwyr, the main landowners of the cantref. This would be presided over by the king if he happened to be present, or, if he was not, by his representative. Apart from the judges, there would be a clerk, an usher and sometimes two professional pleaders. The cantref court dealt with crimes, the determination of boundaries, and inheritance.

Nordic countries

The term hundare (hundred) was used in Svealand and present-day Finland. The name is assumed to mean an area that should organise 100 men to crew four rowed war boats, each of which had 12 pairs of oars and a commander. Eventually, that division was superseded by the introduction of the härad or herred, which was the term in the rest of the Nordic countries. This word was derived either from Proto-Norse *harja-raiðō (warband) or Proto-Germanic *harja-raiða (war equipment, cf. wapentake). Similar is the skipreide, a part of the coast where the inhabitants were responsible for equipping and manning a war ship. Hundreds were not organized in Norrland, the northern, sparsely populated part of Sweden.
In Sweden, the countryside was typically divided into socken units (parishes), where the ecclesiastical and worldly administrative units often coincided. This began losing its basic significance through the municipal reform of 1862. A härad was originally a subdivision of a landskap (province), but since the government reform of 1634 the län ("county") took over all administrative roles of the province. A härad also functioned as the electoral district for one peasant representative during the Riksdag of the Estates (the Swedish parliament, 1436–1866). The häradsrätt (assize court) was the court of first instance in the countryside; it was abolished in 1971 and superseded by the tingsrätt (modern district court). Today, the hundreds serve no administrative role in Sweden, although some judicial district courts still bear the name, and the hundreds are occasionally used in expressions, e.g. Sjuhäradsbygden (district of seven hundreds). It is not entirely clear when hundreds were organised in the western part of Finland. The name of the province of Satakunta, roughly meaning hundred (sata meaning "one hundred" in Finnish), hints at influences from the times before the Northern Crusades, Christianization, and incorporation into Sweden. As kihlakunta, hundreds remained the fundamental administrative division for the state authorities in Finland until 2009. Each kihlakunta was subordinated to a lääni (province/county) and had its own police department, district court and prosecutors. Typically, cities would comprise an urban kihlakunta by themselves, while several rural municipalities would belong to a rural one. In a rural hundred, the lensmann (chief of local state authorities) was called nimismies ("appointed man"), or archaically vallesmanni (from Swedish). In the Swedish era (up to 1809), his main responsibilities were maintenance of stagecoach stations and coaching inns, supplying travelling government personnel with food and lodging, transport of criminal prisoners, police responsibilities, arranging district court (tingsrätt) proceedings, collection of taxes, and sometimes arranging hunts to cull the wolf and bear population. Following the abolition of the provinces as an administrative unit in 2009, the territory for each authority could be demarcated separately, i.e. police districts need not equal court districts in number. The title "härad" survives in the honorary title of herastuomari (Finnish) or häradsdomare (Swedish), which can be given to lay judges after 8–10 years of service. The term herred or herad was used in Norway between 1863 and 1992 for rural municipalities, alongside the term kommune (heradskommune). Today, only four municipalities in western Norway call themselves herad, among them Ulvik and Kvam. Some Norwegian districts have the word herad in their name for historical reasons, among them Krødsherad and Heradsbygd in eastern Norway.

United States

Counties in Delaware, New Jersey and Pennsylvania were divided into hundreds in the 17th century, following the English practice familiar to the colonists. They survive in Delaware (see List of hundreds of Delaware), and were used as tax reporting and voting districts until the 1960s, but now serve no administrative role: their only official legal use is in real estate title descriptions. Following American independence, the term "hundred" fell out of favour and was replaced by "election district". However, the names of the old hundreds continued to show up in deeds for another 50 years. Some plantations in early colonial Virginia used the term hundred in their names, such as Martin's Hundred, Flowerdew Hundred, and West and Shirley Hundred.
Bermuda Hundred was the first incorporated town in the English colony of Virginia. It was founded by Sir Thomas Dale in 1613, six years after Jamestown. While debating what became the Land Ordinance of 1785, Thomas Jefferson's committee wanted to divide the public lands in the west into "hundreds of ten geographical miles square, each mile containing 6086 and 4-10ths of a foot". The legislation instead introduced the six-mile-square township of the Public Land Survey System.

Maryland

The hundred was also used as a division of the county in Maryland. Carroll County, Maryland was formed in 1836 by taking the following hundreds from Baltimore County: North Hundred, Pipe Creek Hundred, Delaware Upper Hundred, and Delaware Lower Hundred; and from Frederick County: Pipe Creek Hundred, Westminster Hundred, Unity Hundred, Burnt House Hundred, Piney Creek Hundred, and Taneytown Hundred. Maryland's Somerset County, which was established in 1666, was initially divided into six hundreds: Mattapony, Pocomoke, Boquetenorton, Wicomico, and Baltimore Hundreds; later subdivisions of the hundreds added five more: Pitts Creek, Acquango, Queponco, Buckingham, and Worcester Hundreds. The original borders of Talbot County (founded at some point prior to 12 February 1661) contained nine hundreds: Treadhaven Hundred, Bolenbroke Hundred, Mill Hundred, Tuckahoe Hundred, Worrell Hundred, Bay Hundred, Island Hundred, Lower Kent Island Hundred, and Chester Hundred. In 1669, Chester Hundred was given to Kent County. In 1707, Queen Anne's County was created from the northern parts of Talbot County, reducing the latter to seven hundreds (Lower Kent Island Hundred becoming a part of the former). Of these, only Bay Hundred legally remains in existence, as District 5 in Talbot County. The geographic region, which includes several unincorporated communities and part of present-day Saint Michaels, continues to be known by the name Bay Hundred, with state and local governments using the name in ways ranging from water trail guides to community pools, while local newspapers regularly use the name in reporting news.

Australia

In South Australia, land titles record in which hundred a parcel of land is located. Similar to the South Australian counties listed on the system of titles, hundreds are not generally used when referring to a district and are little known by the general population, except when transferring land title. When the land in the region of present-day Darwin, in the Northern Territory, was first surveyed, the territory was administered by South Australia, and the surveyed land was divided up into hundreds. Cumberland County (Sydney) was also allocated hundreds in the nineteenth century, although these were later repealed. A hundred is traditionally one hundred square miles (about 259 square kilometres), although this is often not exact, as boundaries often follow local topography.
Physical sciences
Area
Basics and measurement
357353
https://en.wikipedia.org/wiki/Large%20Hadron%20Collider
Large Hadron Collider
The Large Hadron Collider (LHC) is the world's largest and highest-energy particle accelerator. It was built by the European Organization for Nuclear Research (CERN) between 1998 and 2008 in collaboration with over 10,000 scientists and hundreds of universities and laboratories across more than 100 countries. It lies in a tunnel 27 kilometres (17 mi) in circumference and as deep as 175 metres (574 ft) beneath the France–Switzerland border near Geneva. The first collisions were achieved in 2010 at an energy of 3.5 teraelectronvolts (TeV) per beam, about four times the previous world record. The discovery of the Higgs boson at the LHC was announced in 2012. Between 2013 and 2015, the LHC was shut down and upgraded; after those upgrades it reached 6.5 TeV per beam (13.0 TeV total collision energy). At the end of 2018, it was shut down for maintenance and further upgrades, and reopened over three years later, in April 2022. The collider has four crossing points where the accelerated particles collide. Nine detectors, each designed to detect different phenomena, are positioned around the crossing points. The LHC primarily collides proton beams, but it can also accelerate beams of heavy ions, such as in lead–lead collisions and proton–lead collisions. The LHC's goal is to allow physicists to test the predictions of different theories of particle physics, including measuring the properties of the Higgs boson, searching for the large family of new particles predicted by supersymmetric theories, and studying other unresolved questions in particle physics.

Background

The term hadron refers to subatomic composite particles composed of quarks held together by the strong force (analogous to the way that atoms and molecules are held together by the electromagnetic force). The best-known hadrons are the baryons, such as protons and neutrons; hadrons also include mesons, such as the pion and kaon, which were discovered during cosmic ray experiments in the late 1940s and early 1950s. A collider is a type of particle accelerator that brings two opposing particle beams together such that the particles collide. In particle physics, colliders, though harder to construct, are a powerful research tool because they reach a much higher center of mass energy than fixed target setups (a worked comparison appears after the list of open questions below). Analysis of the byproducts of these collisions gives scientists good evidence of the structure of the subatomic world and the laws of nature governing it. Many of these byproducts are produced only by high-energy collisions, and they decay after very short periods of time. Thus many of them are hard or nearly impossible to study in other ways.

Purpose

Many physicists hope that the Large Hadron Collider will help answer some of the fundamental open questions in physics, which concern the basic laws governing the interactions and forces among elementary particles and the deep structure of space and time, particularly the interrelation between quantum mechanics and general relativity. These high-energy particle experiments can provide data to support different scientific models. For example, the Standard Model and the Higgsless model required high-energy particle experiment data to validate their predictions and allow further theoretical development. The Standard Model was completed by the detection of the Higgs boson by the LHC in 2012. LHC collisions have explored other questions, including:

Do all known particles have supersymmetric partners, as part of supersymmetry in an extension of the Standard Model and Poincaré symmetry?
Are there extra dimensions, as predicted by various models based on string theory, and can we detect them?

What is the nature of dark matter, a hypothetical form of matter which appears to account for 27% of the mass-energy of the universe?

Other open questions that may be explored using high-energy particle collisions include:

It is already known that electromagnetism and the weak nuclear force are different manifestations of a single force called the electroweak force. The LHC may clarify whether the electroweak force and the strong nuclear force are similarly just different manifestations of one universal unified force, as predicted by various Grand Unification Theories.

Why is the fourth fundamental force (gravity) so many orders of magnitude weaker than the other three fundamental forces?
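As promised in the Background section, the advantage of colliding beams over fixed targets can be made concrete with a standard relativistic kinematics estimate (a textbook result added here for illustration, not taken from this article). For two beams of energy $E$ colliding head-on, the available center-of-mass energy is

$$\sqrt{s_{\text{collider}}} = 2E,$$

whereas for a beam of energy $E$ striking a stationary target particle of rest energy $mc^2$ (with $E \gg mc^2$),

$$\sqrt{s_{\text{fixed}}} \approx \sqrt{2\,E\,mc^2}.$$

For the LHC's post-upgrade proton beams, $E = 6.5~\text{TeV}$ and $mc^2 \approx 0.938~\text{GeV}$, so colliding beams give $\sqrt{s} = 13~\text{TeV}$, while the same beam on a fixed target would yield only $\sqrt{2 \times 6500 \times 0.938}~\text{GeV} \approx 110~\text{GeV}$. This roughly hundredfold gain is why the collider geometry is worth its added complexity.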
Physical sciences
Particle physics: General
null
357367
https://en.wikipedia.org/wiki/Granny%20Smith
Granny Smith
The Granny Smith, also known as a green apple or sour apple, is an apple cultivar that originated in Australia in 1868. It is named after Maria Ann Smith, who propagated the cultivar from a chance seedling. The tree is thought to be a hybrid of Malus sylvestris, the European wild apple, with the domesticated apple Malus domestica as the polleniser. The fruit is hard and firm, with a light green skin and crisp, juicy flesh. The flavour is tart and acidic. It remains firm when baked, making it a popular cooking apple used in pies, where it can be sweetened. The apple goes from being completely green to turning yellow when overripe. The US Apple Association reported in 2019 that the Granny Smith was the third most popular apple in the United States of America.

History

The Granny Smith cultivar originated in Eastwood, New South Wales, Australia (now a suburb of Sydney) in 1868. Its discoverer, Maria Ann Smith (née Sherwood), had emigrated to the district from Beckley, East Sussex in 1839 with her husband Thomas. They purchased a small orchard in the area in 1855–1856 and began cultivating fruit, for which the area was a well-known centre in colonial Australia. Smith had eight children and was a prominent figure in the district, earning the nickname "Granny" Smith in her advanced years. The first description of the origin of the Granny Smith apple was not published until 1924. In that year, Farmer and Settler published the account of a local historian who had interviewed two men who had known Smith. One of those interviewed recalled that, in 1868, he (then twelve years old) and his father had been invited to Smith's farm to inspect a chance seedling that had sprung up near a creek. Smith had dumped there, among the ferns, the remains of French crab-apples that had been grown in Tasmania. Another story recounted that Smith had been testing French crab-apples for cooking and, throwing the apple cores out her window as she worked, had found that the new cultivar had sprung up underneath her kitchen windowsill. Whatever the case, Smith took it upon herself to propagate the new cultivar on her property, finding the apples good for cooking and for general consumption. Having "all the appearances of a cooking apple", they were not tart but instead were "sweet and crisp to eat". She took a stall at Sydney's George Street market, where she sold her produce once a week; the apples stored "exceptionally well and became popular". Smith died only a couple of years after her discovery (in 1870), but her work had been noticed by other local planters. Edward Gallard was one such planter, who extensively planted Granny Smith trees on his property and bought the Smith farm when Thomas died in 1876. Gallard was successful in marketing the apple locally, but it did not receive widespread attention until 1890. In that year, it was exhibited as "Smith's Seedling" at the Castle Hill Agricultural and Horticultural Show, and the following year it won the prize for cooking apples under the name "Granny Smith's Seedling". The apple was so successful that the following year many growers were exhibiting Granny Smith apples at horticultural shows. In 1895, the New South Wales Department of Agriculture recognised the cultivar and had begun growing the trees at the Government Experimental Station in Bathurst, New South Wales, recommending that its properties as a late-picking cooking apple be gazetted for potential export.
Over the following years, the government actively promoted the apple, leading to its widespread adoption. Its worldwide fame grew from the fact that it could be picked from March and stored until November. Enterprising fruit merchants in the 1890s and 1900s experimented with methods to transport the apples overseas in cold storage. Because of its excellent shelf life, the Granny Smith could be exported long distances and at most times of the year, at a time when Australian food exports were growing dramatically on the back of international demand. Granny Smiths were exported in enormous quantities after the First World War, and by 1975, 40 percent of Australia's apple crop was Granny Smith. By this time, it was being grown intensively elsewhere in the Southern Hemisphere, as well as in France and the United States. The advent of the Granny Smith apple is now celebrated annually in Eastwood with the Granny Smith Festival.

Uses

Culinary

Granny Smith apples are light green in colour. The tart flavor of these apples makes them one of the most versatile varieties of apple to cook with. They are popularly used in many apple dishes, such as apple pie, apple cobbler, apple crumble, and apple cake. They are also commonly eaten raw as table apples, and at least one company (Woodchuck Hard Cider) makes a Granny Smith varietal cider.

Properties

The cultivar is moderately susceptible to fire blight and very prone to scab, powdery mildew, and cedar apple rust. Granny Smith is much more easily preserved in storage than other apples, a factor which has greatly contributed to its success in export markets. Its long storage life has been attributed to its fairly low levels of ethylene production, and in the right conditions Granny Smiths can be stored without loss of quality for as long as a year. This cultivar needs fewer winter chill hours and a longer season to mature the fruit, so it is favoured for the milder areas of the apple-growing regions. However, the fruit is susceptible to superficial scald and bitter pit. Superficial scald may be controlled by treatment with diphenylamine before storage, and can also be controlled with low-oxygen storage. Bitter pit can be controlled with calcium sprays during the growing season and with postharvest calcium dips. According to the US Apple Association website, it is one of the fifteen most popular apple cultivars in the United States.

Cultural references

In 1968, the rock band The Beatles used an image of a Granny Smith apple as the logo for their corporation, Apple Corps Limited. For their record label, Apple Records, one side of vinyl albums featured the exterior of the fruit, while the other side featured a cross-section of the apple. Yoko Ono's 1966 artwork Apple used a Granny Smith apple in its 2015 recreation at New York City's Museum of Modern Art. John Lennon had taken a bite from the apple on display in its 1966 incarnation at the Indica Gallery in London. The Granny Smith was one of four apples honored by the United States Postal Service in a 2013 set of four 33¢ stamps commemorating historic strains, joined by Northern Spy, Baldwin, and Golden Delicious.
Biology and health sciences
Pomes
Plants
357416
https://en.wikipedia.org/wiki/Monomial
Monomial
In mathematics, a monomial is, roughly speaking, a polynomial which has only one term. Two definitions of a monomial may be encountered:

A monomial, also called a power product or primitive monomial, is a product of powers of variables with nonnegative integer exponents, or, in other words, a product of variables, possibly with repetitions. For example, $x^2yz^3 = xxyzzz$ is a monomial. The constant $1$ is a primitive monomial, being equal to the empty product and to $x^0$ for any variable $x$. If only a single variable $x$ is considered, this means that a monomial is either $1$ or a power $x^n$ of $x$, with $n$ a positive integer. If several variables are considered, say $x$, $y$, $z$, then each can be given an exponent, so that any monomial is of the form $x^a y^b z^c$ with $a$, $b$, $c$ non-negative integers (taking note that any exponent $0$ makes the corresponding factor equal to $1$).

A monomial is a monomial in the first sense multiplied by a nonzero constant, called the coefficient of the monomial. A primitive monomial is a special case of a monomial in this second sense, where the coefficient is $1$. For example, in this interpretation $-7x^5$ and $(3-4i)x^4yz^{13}$ are monomials (in the second example, the variables are $x$, $y$, $z$ and the coefficient is a complex number).

In the context of Laurent polynomials and Laurent series, the exponents of a monomial may be negative, and in the context of Puiseux series, the exponents may be rational numbers. In mathematical analysis, it is common to consider polynomials written in terms of a shifted variable $x - c$ for some constant $c$, rather than the variable $x$ alone, as in the study of Taylor series. By a slight abuse of notation, monomials of shifted variables, for instance $(x-c)^n$, may be called monomials in the sense of shifted monomials or centered monomials, where $c$ is the center or shift. Since the word "monomial", as well as the word "polynomial", comes from the late Latin word "binomium" (binomial), by changing the prefix "bi-" (two in Latin), a monomial should theoretically be called a "mononomial". "Monomial" is a syncope by haplology of "mononomial".

Comparison of the two definitions

With either definition, the set of monomials is a subset of all polynomials that is closed under multiplication. Both uses of this notion can be found, and in many cases the distinction is simply ignored; see for instance examples for the first and second meaning. In informal discussions the distinction is seldom important, and the tendency is towards the broader second meaning. When studying the structure of polynomials, however, one often definitely needs a notion with the first meaning. This is for instance the case when considering a monomial basis of a polynomial ring, or a monomial ordering of that basis. An argument in favor of the first meaning is that no obvious other notion is available to designate these values, though primitive monomial is in use and does make the absence of constants clear. The remainder of this article assumes the first meaning of "monomial".

Monomial basis

The most obvious fact about monomials (first meaning) is that any polynomial is a linear combination of them, so they form a basis of the vector space of all polynomials, called the monomial basis, a fact of constant implicit use in mathematics.

Number

The number of monomials of degree $d$ in $n$ variables is the number of multicombinations of $d$ elements chosen among the $n$ variables (a variable can be chosen more than once, but order does not matter), which is given by the multiset coefficient $\left(\!\!\binom{n}{d}\!\!\right)$.
This expression can also be given in the form of a binomial coefficient, as a polynomial expression in $d$, or using a rising factorial power of $d+1$: $\left(\!\!\binom{n}{d}\!\!\right) = \binom{n+d-1}{d} = \frac{(d+1)(d+2)\cdots(d+n-1)}{(n-1)!} = \frac{(d+1)^{\overline{n-1}}}{(n-1)!}.$ The latter forms are particularly useful when one fixes the number of variables and lets the degree vary. From these expressions one sees that for fixed $n$, the number of monomials of degree $d$ is a polynomial expression in $d$ of degree $n-1$ with leading coefficient $\frac{1}{(n-1)!}$. For example, the number of monomials in three variables ($n=3$) of degree $d$ is $\frac{(d+1)(d+2)}{2}$; these numbers form the sequence 1, 3, 6, 10, 15, ... of triangular numbers. The Hilbert series is a compact way to express the number of monomials of a given degree: the number of monomials of degree $d$ in $n$ variables is the coefficient of degree $d$ of the formal power series expansion of $\frac{1}{(1-t)^n}.$ The number of monomials of degree at most $d$ in $n$ variables is $\binom{n+d}{n} = \binom{n+d}{d}$. This follows from the one-to-one correspondence between the monomials of degree $d$ in $n+1$ variables and the monomials of degree at most $d$ in $n$ variables, which consists in substituting by 1 the extra variable. Multi-index notation The multi-index notation is often useful for having a compact notation, especially when there are more than two or three variables. If the variables being used form an indexed family like $x_1, x_2, x_3, \ldots$, one can set $x = (x_1, x_2, x_3, \ldots)$ and $\alpha = (\alpha_1, \alpha_2, \alpha_3, \ldots)$. Then the monomial $x_1^{\alpha_1} x_2^{\alpha_2} x_3^{\alpha_3} \cdots$ can be compactly written as $x^\alpha$. With this notation, the product of two monomials is simply expressed by using the addition of exponent vectors: $x^\alpha x^\beta = x^{\alpha+\beta}$. Degree The degree of a monomial is defined as the sum of all the exponents of the variables, including the implicit exponents of 1 for the variables which appear without exponent; e.g., in the example of the previous section, the degree is $\alpha_1 + \alpha_2 + \alpha_3 + \cdots$. The degree of $xyz^2$ is $1+1+2=4$. The degree of a nonzero constant is 0. For example, the degree of −7 is 0. The degree of a monomial is sometimes called order, mainly in the context of series. It is also called total degree when it is needed to distinguish it from the degree in one of the variables. Monomial degree is fundamental to the theory of univariate and multivariate polynomials. Explicitly, it is used to define the degree of a polynomial and the notion of homogeneous polynomial, as well as for graded monomial orderings used in formulating and computing Gröbner bases. Implicitly, it is used in grouping the terms of a Taylor series in several variables. Geometry In algebraic geometry the varieties defined by monomial equations $x^\alpha = 0$ for some set of $\alpha$ have special properties of homogeneity. This can be phrased in the language of algebraic groups, in terms of the existence of a group action of an algebraic torus (equivalently by a multiplicative group of diagonal matrices). This area is studied under the name of torus embeddings.
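The counting formulas above are straightforward to verify computationally. The following Python sketch (function names are illustrative, not from any source) checks the multiset-coefficient formula against direct enumeration of exponent vectors, reproduces the triangular numbers for $n = 3$, and confirms the count of monomials of degree at most $d$:

from math import comb
from itertools import combinations_with_replacement

def num_monomials(n: int, d: int) -> int:
    """Number of monomials of degree d in n variables: C(n+d-1, d)."""
    return comb(n + d - 1, d)

def enumerate_monomials(n: int, d: int):
    """Yield each degree-d monomial as a multiset of chosen variable indices."""
    return combinations_with_replacement(range(n), d)

# Cross-check the closed form against direct enumeration.
for d in range(6):
    assert num_monomials(3, d) == sum(1 for _ in enumerate_monomials(3, d))
    print(d, num_monomials(3, d))  # 1, 3, 6, 10, 15, 21: the triangular numbers

# Monomials of degree at most d in n variables: C(n+d, n).
n, d = 3, 5
assert sum(num_monomials(n, k) for k in range(d + 1)) == comb(n + d, n)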
Mathematics
Basics
null
357632
https://en.wikipedia.org/wiki/Mating
Mating
In biology, mating is the pairing of either opposite-sex or hermaphroditic organisms for the purposes of sexual reproduction. Fertilization is the fusion of two gametes. Copulation is the union of the sex organs of two sexually reproducing animals for insemination and subsequent internal fertilization. Mating may also lead to external fertilization, as seen in amphibians, fishes and plants. For most species, mating is between two individuals of opposite sexes. However, for some hermaphroditic species, copulation is not required because the parent organism is capable of self-fertilization (autogamy); for example, banana slugs. The term mating is also applied to related processes in bacteria, archaea and viruses. Mating in these cases involves the pairing of individuals, accompanied by the pairing of their homologous chromosomes and then exchange of genomic information leading to formation of recombinant progeny (see mating systems). Animals For animals, mating strategies include random mating, disassortative mating, assortative mating, or a mating pool. In some birds, it includes behaviors such as nest-building and feeding offspring. The human practice of mating and artificially inseminating domesticated animals is part of animal husbandry. In some terrestrial arthropods, including insects representing basal (primitive) phylogenetic clades, the male deposits spermatozoa on the substrate, sometimes stored within a special structure. Courtship involves inducing the female to take up the sperm package into her genital opening without actual copulation. Courtship is often facilitated through forming groups, called leks, in flies and many other insects. For example, males of Tokunagayusurika akamusi form swarms dancing in the air to attract females. In groups such as dragonflies and many spiders, males extrude sperm into secondary copulatory structures removed from their genital opening, which are then used to inseminate the female (in dragonflies, it is a set of modified sternites on the second abdominal segment; in spiders, it is the male pedipalps). In advanced groups of insects, the male uses its aedeagus, a structure formed from the terminal segments of the abdomen, to deposit sperm directly (though sometimes in a capsule called a "spermatophore") into the female's reproductive tract. Other animals reproduce sexually with external fertilization, including many basal vertebrates. Vertebrates reproduce with internal fertilization through cloacal copulation (in reptiles, some fish, and most birds) or ejaculation of semen through the penis into the female's vagina (in mammals). In domesticated animals, various types of mating methods are employed, such as pen mating (where the female is moved to the desired male in a pen) or paddock mating (where one male is let loose in the paddock with several females). Plants and fungi As in animals, mating in other eukaryotes, such as plants and fungi, denotes the pairing of individuals or cells for sexual reproduction. However, in vascular plants this is mostly achieved without physical contact between mating individuals (see pollination), and in some cases, e.g. in fungi, no distinguishable male or female organs exist (see isogamy); however, mating types in some fungal species are somewhat analogous to sexual dimorphism in animals, and determine whether or not two individual isolates can mate. Yeasts are eukaryotic microorganisms classified in the kingdom Fungi, with 1,500 species currently described.
In general, under high stress conditions like nutrient starvation, haploid cells will die; under the same conditions, however, diploid cells of Saccharomyces cerevisiae can undergo sporulation, entering sexual reproduction (meiosis) and producing a variety of haploid spores, which can go on to mate (conjugate) and reform the diploid. Protists Protists are a large group of diverse eukaryotic microorganisms, mainly unicellular animals and plants, that do not form tissues. The earliest eukaryotes were likely protists. Mating and sexual reproduction are widespread among extant eukaryotes, including protists such as Paramecium and Chlamydomonas. In many eukaryotic species, mating is promoted by sex pheromones, including in the protist Blepharisma japonicum. Based on a phylogenetic analysis, Dacks and Roger proposed that facultative sex was present in the common ancestor of all eukaryotes. However, until recently it seemed unlikely to many biologists that mating and sex could be a primordial and fundamental characteristic of eukaryotes. A principal reason for this view was that mating and sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these protists are now known to be capable of, or to have recently had, the capability for meiosis and hence mating. To cite one example, the common intestinal parasite Giardia intestinalis was once considered to be a descendant of a protist lineage that predated the emergence of meiosis and sex. However, G. intestinalis was recently found to have a core set of genes that function in meiosis and that are widely present among sexual eukaryotes. These results suggested that G. intestinalis is capable of meiosis and thus mating and sexual reproduction. Furthermore, direct evidence for meiotic recombination, indicative of mating and sexual reproduction, was also found in G. intestinalis. Other protists for which evidence of mating and sexual reproduction has recently been described are parasitic protozoa of the genus Leishmania, Trichomonas vaginalis, and Acanthamoeba. Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock.
Biology and health sciences
Ethology
Biology
357636
https://en.wikipedia.org/wiki/Flight%20control%20surfaces
Flight control surfaces
Aircraft flight control surfaces are aerodynamic devices allowing a pilot to adjust and control the aircraft's flight attitude. Development of an effective set of flight control surfaces was a critical advance in the development of aircraft. Early efforts at fixed-wing aircraft design succeeded in generating sufficient lift to get the aircraft off the ground, but once aloft, the aircraft proved uncontrollable, often with disastrous results. The development of effective flight controls is what allowed stable flight. This article describes the control surfaces used on a fixed-wing aircraft of conventional design. Other fixed-wing aircraft configurations may use different control surfaces but the basic principles remain. The controls (stick and rudder) for rotary wing aircraft (helicopter or autogyro) accomplish the same motions about the three axes of rotation, but manipulate the rotating flight controls (main rotor disk and tail rotor disk) in a completely different manner. Flight control surfaces are operated by aircraft flight control systems. Considered as generalized fluid control surfaces, rudders in particular are shared between aircraft and watercraft. Development The Wright brothers are credited with developing the first practical control surfaces. This was a main part of their patent on flying. Unlike modern control surfaces, they used wing warping. In an attempt to circumvent the Wright patent, Glenn Curtiss made hinged control surfaces, the same type of concept first patented some four decades earlier in the United Kingdom. Hinged control surfaces have the advantage of not causing the stresses that are a problem with wing warping, and are easier to build into structures. Axes of motion An aircraft is free to rotate around three axes that are perpendicular to each other and intersect at its center of gravity (CG). To control position and direction a pilot must be able to control rotation about each of them. Transverse axis The transverse axis, also known as lateral axis, passes through an aircraft from wingtip to wingtip. Rotation about this axis is called pitch. Pitch changes the vertical direction that the aircraft's nose is pointing. The elevators are the primary control surfaces for pitch. Longitudinal axis The longitudinal axis passes through the aircraft from nose to tail. Rotation about this axis is called roll. The angular displacement about this axis is called bank. The pilot changes bank angle by increasing the lift on one wing and decreasing it on the other. This differential lift causes rotation around the longitudinal axis. The ailerons are the primary control of bank. The rudder also has a secondary effect on bank. Vertical axis The vertical axis passes through an aircraft from top to bottom. Rotation about this axis is called yaw. Yaw changes the direction the aircraft's nose is pointing, left or right. The primary control of yaw is with the rudder. Ailerons also have a secondary effect on yaw. These axes move with the aircraft and change relative to the earth as the aircraft moves. For example, for an aircraft whose left wing is pointing straight down, its "vertical" axis is parallel with the ground, while its "transverse" axis is perpendicular to the ground. Main control surfaces The main control surfaces of a fixed-wing aircraft are attached to the airframe on hinges or tracks so they may move and thus deflect the air stream passing over them. This redirection of the air stream generates an unbalanced force to rotate the plane about the associated axis.
Ailerons Ailerons are mounted on the trailing edge of each wing near the wingtips and move in opposite directions. When the pilot moves the aileron control to the left, or turns the wheel counter-clockwise, the left aileron goes up and the right aileron goes down. A raised aileron reduces lift on that wing and a lowered one increases lift, so moving the aileron control in this way causes the left wing to drop and the right wing to rise. This causes the aircraft to roll to the left and begin to turn to the left. Centering the control returns the ailerons to the neutral position, maintaining the bank angle. The aircraft will continue to turn until opposite aileron motion returns the bank angle to zero to fly straight. Elevator The elevator is a moveable part of the horizontal stabilizer, hinged to the back of the fixed part of the horizontal tail. The elevators move up and down together. When the pilot pulls the stick backward, the elevators go up. Pushing the stick forward causes the elevators to go down. Raised elevators push down on the tail and cause the nose to pitch up. This makes the wings fly at a higher angle of attack, which generates more lift and more drag. Centering the stick returns the elevators to neutral and stops the change of pitch. Some aircraft, such as an MD-80, use a servo tab within the elevator surface to aerodynamically move the main surface into position. The direction of travel of the control tab will thus be in a direction opposite to the main control surface. It is for this reason that an MD-80 tail looks like it has a 'split' elevator system. In the canard arrangement, the elevators are hinged to the rear of a foreplane and move in the opposite sense, for example when the pilot pulls the stick back the elevators go down to increase the lift at the front and lift the nose up. Rudder The rudder is typically mounted on the trailing edge of the vertical stabilizer, part of the empennage. When the pilot pushes the left pedal, the rudder deflects left. Pushing the right pedal causes the rudder to deflect right. Deflecting the rudder right pushes the tail left and causes the nose to yaw to the right. Centering the rudder pedals returns the rudder to neutral and stops the yaw. Secondary effects of controls Ailerons The ailerons primarily cause roll. Whenever lift is increased, induced drag is also increased so when the aileron control is moved to roll the aircraft to the left, the right aileron is lowered which increases lift on the right wing and therefore increases induced drag on the right wing. Using ailerons causes adverse yaw, meaning the nose of the aircraft yaws in a direction opposite to the aileron application. When moving the aileron control to bank the wings to the left, adverse yaw moves the nose of the aircraft to the right. Adverse yaw is most pronounced in low-speed aircraft with long wings, such as gliders. It is counteracted by the pilot using the rudder pedals. Differential ailerons are ailerons which have been rigged such that the downgoing aileron deflects less than the upward-moving one, causing less adverse yaw. Rudder The rudder is a fundamental control surface which is typically controlled by pedals rather than at the stick. It is the primary means of controlling yaw—the rotation of an airplane about its vertical axis. The rudder may also be called upon to counter-act the adverse yaw produced by the roll-control surfaces. 
If rudder is continuously applied in level flight, the aircraft will yaw initially in the direction of the applied rudder – the primary effect of rudder. After a few seconds the aircraft will tend to bank in the direction of yaw. This arises initially from the increased speed of the wing opposite to the direction of yaw and the reduced speed of the other wing. The faster wing generates more lift and so rises, while the other wing tends to go down because of generating less lift. Continued application of rudder sustains the rolling tendency because the aircraft is flying at an angle to the airflow, skidding towards the forward wing. When applying right rudder in an aircraft with dihedral, the left-hand wing will have an increased angle of attack and the right-hand wing will have a decreased angle of attack, which will result in a roll to the right. An aircraft with anhedral will show the opposite effect. This effect of the rudder is commonly used in model aircraft where, if sufficient dihedral or polyhedral is included in the wing design, primary roll control such as ailerons may be omitted altogether. Turning the aircraft Unlike turning a boat, changing the direction of an aircraft normally must be done with the ailerons rather than the rudder. The rudder turns (yaws) the aircraft but has little effect on its direction of travel. With aircraft, the change in direction is caused by the horizontal component of lift, acting on the wings. The pilot tilts the lift force, which is perpendicular to the wings, in the direction of the intended turn by rolling the aircraft into the turn. As the bank angle is increased, the lifting force can be split into two components: one acting vertically and one acting horizontally. If the total lift is kept constant, the vertical component of lift will decrease. As the weight of the aircraft is unchanged, this would result in the aircraft descending if not countered. To maintain level flight requires increased positive (up) elevator to increase the angle of attack, increase the total lift generated and keep the vertical component of lift equal to the weight of the aircraft. This cannot continue indefinitely. The total load factor required to maintain level flight is directly related to the bank angle. This means that for a given airspeed, level flight can only be maintained up to a certain given angle of bank. Beyond this angle of bank, the aircraft will suffer an accelerated stall if the pilot attempts to generate enough lift to maintain level flight. Alternate main control surfaces Some aircraft configurations have non-standard primary controls. For example, instead of elevators at the back of the stabilizers, the entire tailplane may change angle. Some aircraft have a tail in the shape of a V, and the moving parts at the back of those combine the functions of elevators and rudder. Delta wing aircraft may have "elevons" at the back of the wing, which combine the functions of elevators and ailerons. Secondary control surfaces Spoilers On low drag aircraft such as sailplanes, spoilers are used to disrupt airflow over the wing and greatly reduce lift. This allows a glider pilot to lose altitude without gaining excessive airspeed. Spoilers are sometimes called "lift dumpers". Spoilers that can be used asymmetrically are called spoilerons and can affect an aircraft's roll. Flaps Flaps are mounted on the trailing edge on the inboard section of each wing (near the wing roots). They are deflected down to increase the effective curvature of the wing.
Flaps raise the maximum lift coefficient of the aircraft and therefore reduce its stalling speed. They are used during low speed, high angle of attack flight including take-off and descent for landing. Some aircraft are equipped with "flaperons", which are more commonly called "inboard ailerons". These devices function primarily as ailerons, but on some aircraft, will "droop" when the flaps are deployed, thus acting as both a flap and a roll-control inboard aileron. Slats Slats, also known as leading edge devices, are extensions to the front of a wing for lift augmentation, and are intended to reduce the stalling speed by altering the airflow over the wing. Slats may be fixed or retractable - fixed slats (e.g. as on the Fieseler Fi 156 Storch) give excellent slow speed and STOL capabilities, but compromise higher speed performance. Retractable slats, as seen on most airliners, provide reduced stalling speed for take-off and landing, but are retracted for cruising. Air brakes Air brakes are used to increase drag. Spoilers might act as air brakes, but are not pure air brakes as they also function as lift-dumpers or in some cases as roll control surfaces. Air brakes are usually surfaces that deflect outwards from the fuselage (in most cases symmetrically on opposing sides) into the airstream in order to increase form-drag. As they are in most cases located elsewhere on the aircraft, they do not directly affect the lift generated by the wing. Their purpose is to slow down the aircraft. They are particularly useful when a high rate of descent is required. They are common on high performance military aircraft as well as civilian aircraft, especially those lacking reverse thrust capability. Control trimming surfaces Trimming controls allow a pilot to balance the lift and drag being produced by the wings and control surfaces over a wide range of load and airspeed. This reduces the effort required to adjust or maintain a desired flight attitude. Elevator trim Elevator trim balances the control force necessary to maintain the correct aerodynamic force on the tail to balance the aircraft. Whilst carrying out certain flight exercises, a lot of trim could be required to maintain the desired angle of attack. This mainly applies to slow flight, where a nose-up attitude is required, in turn requiring a lot of trim causing the tailplane to exert a strong downforce. Elevator trim is correlated with the speed of the airflow over the tail, thus airspeed changes to the aircraft require re-trimming. An important design parameter for aircraft is the stability of the aircraft when trimmed for level flight. Any disturbances such as gusts or turbulence will be damped over a short period of time and the aircraft will return to its level flight trimmed airspeed. Trimming tail plane Except for very light aircraft, trim tabs on the elevators are unable to provide the force and range of motion desired. To provide the appropriate trim force the entire horizontal tail plane is made adjustable in pitch. This allows the pilot to select exactly the right amount of positive or negative lift from the tail plane while reducing drag from the elevators. Control horn A control horn is a section of control surface which projects ahead of the pivot point. It generates a force which tends to increase the surface's deflection thus reducing the control pressure experienced by the pilot. Control horns may also incorporate a counterweight which helps to balance the control and prevent it from fluttering in the airstream. 
Some designs feature separate anti-flutter weights. (In radio controlled model aircraft, the term "control horn" has a different meaning) Spring trim In the simplest arrangement, trimming is done by a mechanical spring (or bungee) which adds appropriate force to augment the pilot's control input. The spring is usually connected to an elevator trim lever to allow the pilot to set the spring force applied. Rudder and aileron trim Most fixed-wing aircraft have a trimming control surface on the elevator, but larger aircraft also have a trim control for the rudder, and another for the ailerons. The rudder trim is to counter any asymmetric thrust from the engines. Aileron trim is to counter the effects of the centre of gravity being displaced from the aircraft centerline. This can be caused by fuel or an item of payload being loaded more on one side of the aircraft compared to the other, such as when one fuel tank has more fuel than the other.
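The relation between bank angle and load factor described under "Turning the aircraft" can be made explicit (a standard result, stated here as a worked illustration). In a coordinated level turn at bank angle $\phi$, the vertical component of lift $L \cos\phi$ must balance the weight $W$, so the load factor $n = L/W$ satisfies

$$n = \frac{1}{\cos\phi}.$$

For example, a 60° bank gives $n = 1/\cos 60° = 2$: the wings must produce twice the aircraft's weight in lift. Since stalling speed grows with the square root of the load factor, the stalling speed in such a turn is about $\sqrt{2} \approx 1.4$ times the level-flight stalling speed, which is why, for a given airspeed, level flight can be maintained only up to a certain bank angle.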
Technology
Aircraft components
null
357657
https://en.wikipedia.org/wiki/Bloch%27s%20theorem
Bloch's theorem
In condensed matter physics, Bloch's theorem states that solutions to the Schrödinger equation in a periodic potential can be expressed as plane waves modulated by periodic functions. The theorem is named after the Swiss physicist Felix Bloch, who discovered the theorem in 1929. Mathematically, they are written as $\psi(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u(\mathbf{r})$, where $\mathbf{r}$ is position, $\psi$ is the wave function, $u$ is a periodic function with the same periodicity as the crystal, the wave vector $\mathbf{k}$ is the crystal momentum vector, $e$ is Euler's number, and $i$ is the imaginary unit. Functions of this form are known as Bloch functions or Bloch states, and serve as a suitable basis for the wave functions or states of electrons in crystalline solids. The description of electrons in terms of Bloch functions, termed Bloch electrons (or less often Bloch waves), underlies the concept of electronic band structures. These eigenstates are written with subscripts as $\psi_{n\mathbf{k}}$, where $n$ is a discrete index, called the band index, which is present because there are many different wave functions with the same $\mathbf{k}$ (each has a different periodic component $u$). Within a band (i.e., for fixed $n$), $\psi_{n\mathbf{k}}$ varies continuously with $\mathbf{k}$, as does its energy. Also, $\psi_{n\mathbf{k}}$ is unique only up to a constant reciprocal lattice vector $\mathbf{K}$, or, $\psi_{n\mathbf{k}} = \psi_{n(\mathbf{k}+\mathbf{K})}$. Therefore, the wave vector $\mathbf{k}$ can be restricted to the first Brillouin zone of the reciprocal lattice without loss of generality. Applications and consequences Applicability The most common example of Bloch's theorem is describing electrons in a crystal, especially in characterizing the crystal's electronic properties, such as electronic band structure. However, a Bloch-wave description applies more generally to any wave-like phenomenon in a periodic medium. For example, a periodic dielectric structure in electromagnetism leads to photonic crystals, and a periodic acoustic medium leads to phononic crystals. It is generally treated in the various forms of the dynamical theory of diffraction. Wave vector Suppose an electron is in a Bloch state $\psi(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}}\, u(\mathbf{r})$, where $u$ is periodic with the same periodicity as the crystal lattice. The actual quantum state of the electron is entirely determined by $\psi$, not $\mathbf{k}$ or $u$ directly. This is important because $\mathbf{k}$ and $u$ are not unique. Specifically, if $\psi$ can be written as above using $\mathbf{k}$, it can also be written using $\mathbf{k} + \mathbf{K}$, where $\mathbf{K}$ is any reciprocal lattice vector. Therefore, wave vectors that differ by a reciprocal lattice vector are equivalent, in the sense that they characterize the same set of Bloch states. The first Brillouin zone is a restricted set of values of $\mathbf{k}$ with the property that no two of them are equivalent, yet every possible $\mathbf{k}$ is equivalent to one (and only one) vector in the first Brillouin zone. Therefore, if we restrict $\mathbf{k}$ to the first Brillouin zone, then every Bloch state has a unique $\mathbf{k}$. Therefore, the first Brillouin zone is often used to depict all of the Bloch states without redundancy, for example in a band structure, and it is used for the same reason in many calculations. When $\mathbf{k}$ is multiplied by the reduced Planck constant, it equals the electron's crystal momentum. Related to this, the group velocity of an electron can be calculated based on how the energy of a Bloch state varies with $\mathbf{k}$; for more details see crystal momentum. Detailed example For a detailed example in which the consequences of Bloch's theorem are worked out in a specific situation, see the article Particle in a one-dimensional lattice (periodic potential).
Statement A second and equivalent way to state the theorem is the following: the energy eigenstates can be chosen so that, for every lattice translation vector $\mathbf{R}$, $\psi(\mathbf{r}+\mathbf{R}) = e^{i\mathbf{k}\cdot\mathbf{R}}\,\psi(\mathbf{r})$. Proof Using lattice periodicity Since Bloch's theorem is a statement about lattice periodicity, in this proof all the symmetries are encoded as translation symmetries of the wave function itself. Using operators In this proof all the symmetries are encoded as commutation properties of the translation operators. Using group theory Apart from the group theory technicalities, this proof is interesting because it becomes clear how to generalize the Bloch theorem for groups that are not only translations. This is typically done for space groups, which are a combination of a translation and a point group, and it is used for computing the band structure, spectrum and specific heats of crystals given a specific crystal group symmetry like FCC or BCC and eventually an extra basis. In this proof it is also possible to notice how it is key that the extra point group is driven by a symmetry in the effective potential, but it must commute with the Hamiltonian. In the generalized version of the Bloch theorem, the Fourier transform, i.e. the wave function expansion, gets generalized from a discrete Fourier transform, which is applicable only for cyclic groups and therefore translations, into a character expansion of the wave function, where the characters are given from the specific finite point group. Here it is also possible to see how the characters (as the invariants of the irreducible representations) can be treated as the fundamental building blocks instead of the irreducible representations themselves. Velocity and effective mass If we apply the time-independent Schrödinger equation to the Bloch wave function we obtain $H_{\mathbf{k}}\, u_{\mathbf{k}}(\mathbf{r}) = E_{\mathbf{k}}\, u_{\mathbf{k}}(\mathbf{r})$ with boundary conditions $u_{\mathbf{k}}(\mathbf{r}) = u_{\mathbf{k}}(\mathbf{r}+\mathbf{R})$. Given that this is defined in a finite volume we expect an infinite family of eigenvalues; here $\mathbf{k}$ is a parameter of the Hamiltonian and therefore we arrive at a "continuous family" of eigenvalues $E_n(\mathbf{k})$ dependent on the continuous parameter $\mathbf{k}$, and thus at the basic concept of an electronic band structure. This shows how the effective momentum can be seen as composed of two parts, a standard momentum $-i\hbar\nabla$ and a crystal momentum $\hbar\mathbf{k}$. More precisely the crystal momentum is not a momentum, but it stands for the momentum in the same way as the electromagnetic momentum in the minimal coupling, and as part of a canonical transformation of the momentum. For the effective velocity we can derive $\mathbf{v}_n(\mathbf{k}) = \frac{1}{\hbar}\,\nabla_{\mathbf{k}} E_n(\mathbf{k})$. For the effective mass we can derive $\left(M^{-1}\right)_{ij} = \frac{1}{\hbar^2}\,\frac{\partial^2 E_n(\mathbf{k})}{\partial k_i\,\partial k_j}$; the matrix on the right, inverted and multiplied by the factor $\hbar^2$, is called the effective mass tensor, and we can use it to write a semi-classical equation for a charge carrier in a band, $\mathbf{F} = M(\mathbf{k})\,\mathbf{a}$, where $\mathbf{a}$ is an acceleration. This equation is analogous to the de Broglie wave type of approximation. As an intuitive interpretation, both of the previous two equations formally resemble, and are in a semi-classical analogy with, Newton's second law for an electron in an external Lorentz force. History and related equations The concept of the Bloch state was developed by Felix Bloch in 1928 to describe the conduction of electrons in crystalline solids. The same underlying mathematics, however, was also discovered independently several times: by George William Hill (1877), Gaston Floquet (1883), and Alexander Lyapunov (1892). As a result, a variety of nomenclatures are common: applied to ordinary differential equations, it is called Floquet theory (or occasionally the Lyapunov–Floquet theorem). The general form of a one-dimensional periodic potential equation is Hill's equation: $\frac{d^2 y}{dt^2} + f(t)\, y = 0$, where $f(t)$ is a periodic potential.
Specific periodic one-dimensional equations include the Kronig–Penney model and Mathieu's equation. Mathematically Bloch's theorem is interpreted in terms of unitary characters of a lattice group, and is applied to spectral geometry.
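As a check, the equivalence of the two statements of the theorem given above follows in one line from the lattice periodicity of $u$: for any lattice vector $\mathbf{R}$,

$$\psi(\mathbf{r}+\mathbf{R}) = e^{i\mathbf{k}\cdot(\mathbf{r}+\mathbf{R})}\, u(\mathbf{r}+\mathbf{R}) = e^{i\mathbf{k}\cdot\mathbf{R}}\, e^{i\mathbf{k}\cdot\mathbf{r}}\, u(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{R}}\, \psi(\mathbf{r}),$$

so a Bloch function acquires only the phase $e^{i\mathbf{k}\cdot\mathbf{R}}$ under a lattice translation. Conversely, any wave function with this translation property can be written in the Bloch form by defining $u(\mathbf{r}) = e^{-i\mathbf{k}\cdot\mathbf{r}}\,\psi(\mathbf{r})$, which the same computation shows to be lattice-periodic.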
Physical sciences
Basics_2
Physics
357672
https://en.wikipedia.org/wiki/Posterior%20probability
Posterior probability
The posterior probability is a type of conditional probability that results from updating the prior probability with information summarized by the likelihood via an application of Bayes' rule. From an epistemological perspective, the posterior probability contains everything there is to know about an uncertain proposition (such as a scientific hypothesis, or parameter values), given prior knowledge and a mathematical model describing the observations available at a particular time. After the arrival of new information, the current posterior probability may serve as the prior in another round of Bayesian updating. In the context of Bayesian statistics, the posterior probability distribution usually describes the epistemic uncertainty about statistical parameters conditional on a collection of observed data. From a given posterior distribution, various point and interval estimates can be derived, such as the maximum a posteriori (MAP) or the highest posterior density interval (HPDI). But while conceptually simple, the posterior distribution is generally not tractable and therefore needs to be either analytically or numerically approximated. Definition in the distributional case In Bayesian statistics, the posterior probability is the probability of the parameters $\theta$ given the evidence $X$, and is denoted $p(\theta \mid X)$. It contrasts with the likelihood function, which is the probability of the evidence given the parameters: $p(X \mid \theta)$. The two are related as follows: Given a prior belief that a probability distribution function is $p(\theta)$ and that the observations $X$ have a likelihood $p(X \mid \theta)$, then the posterior probability is defined as $p(\theta \mid X) = \frac{p(X \mid \theta)}{p(X)}\, p(\theta)$, where $p(X)$ is the normalizing constant and is calculated as $p(X) = \int p(X \mid \theta)\, p(\theta)\, d\theta$ for continuous $\theta$, or by summing $p(X \mid \theta)\, p(\theta)$ over all possible values of $\theta$ for discrete $\theta$. The posterior probability is therefore proportional to the product Likelihood · Prior probability. Example Suppose there is a school with 60% boys and 40% girls as students. The girls wear trousers or skirts in equal numbers; all boys wear trousers. An observer sees a (random) student from a distance; all the observer can see is that this student is wearing trousers. What is the probability this student is a girl? The correct answer can be computed using Bayes' theorem. The event $G$ is that the student observed is a girl, and the event $T$ is that the student observed is wearing trousers. To compute the posterior probability $P(G \mid T)$, we first need to know: $P(G)$, or the probability that the student is a girl regardless of any other information. Since the observer sees a random student, meaning that all students have the same probability of being observed, and the percentage of girls among the students is 40%, this probability equals 0.4. $P(B)$, or the probability that the student is not a girl (i.e. a boy) regardless of any other information ($B$ is the complementary event to $G$). This is 60%, or 0.6. $P(T \mid G)$, or the probability of the student wearing trousers given that the student is a girl. As they are as likely to wear skirts as trousers, this is 0.5. $P(T \mid B)$, or the probability of the student wearing trousers given that the student is a boy. This is given as 1. $P(T)$, or the probability of a (randomly selected) student wearing trousers regardless of any other information. Since $P(T) = P(T \mid G)\,P(G) + P(T \mid B)\,P(B)$ (via the law of total probability), this is $P(T) = 0.5 \times 0.4 + 1 \times 0.6 = 0.8$. Given all this information, the posterior probability of the observer having spotted a girl given that the observed student is wearing trousers can be computed by substituting these values in the formula: $P(G \mid T) = \frac{P(T \mid G)\, P(G)}{P(T)} = \frac{0.5 \times 0.4}{0.8} = 0.25$. An intuitive way to solve this is to assume the school has N students.
Number of boys = 0.6N and number of girls = 0.4N. If N is sufficiently large, total number of trouser wearers = 0.6N + 50% of 0.4N. And number of girl trouser wearers = 50% of 0.4N. Therefore, in the population of trouser wearers, girls are (50% of 0.4N)/(0.6N + 50% of 0.4N) = 25%. In other words, if you separated out the group of trouser wearers, a quarter of that group will be girls. Therefore, if you see trousers, the most you can deduce is that you are looking at a single sample from a subset of students where 25% are girls. And by definition, the chance of this random student being a girl is 25%. Every Bayes-theorem problem can be solved in this way. Calculation The posterior probability distribution of one random variable given the value of another can be calculated with Bayes' theorem by multiplying the prior probability distribution by the likelihood function, and then dividing by the normalizing constant, as follows: $f_{X \mid Y=y}(x) = \frac{f_X(x)\, L_{X \mid Y=y}(x)}{\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du}$ gives the posterior probability density function for a random variable $X$ given the data $Y = y$, where $f_X(x)$ is the prior density of $X$, $L_{X \mid Y=y}(x) = f_{Y \mid X=x}(y)$ is the likelihood function as a function of $x$, $\int_{-\infty}^{\infty} f_X(u)\, L_{X \mid Y=y}(u)\, du$ is the normalizing constant, and $f_{X \mid Y=y}(x)$ is the posterior density of $X$ given the data $Y = y$. Credible interval Posterior probability is a conditional probability conditioned on randomly observed data. Hence it is a random variable. For a random variable, it is important to summarize its amount of uncertainty. One way to achieve this goal is to provide a credible interval of the posterior probability. Classification In classification, posterior probabilities reflect the uncertainty of assigning an observation to a particular class; see also class-membership probabilities. While statistical classification methods by definition generate posterior probabilities, machine learning models usually supply membership values which do not induce any probabilistic confidence. It is desirable to transform or rescale membership values to class-membership probabilities, since they are comparable and additionally more easily applicable for post-processing.
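The trousers example above is small enough to check directly. The following Python sketch (variable names are illustrative) reproduces the posterior $P(G \mid T) = 0.25$:

# Events: G = girl, B = boy, T = wearing trousers.
p_g = 0.4          # P(G): 40% of students are girls
p_b = 0.6          # P(B) = 1 - P(G)
p_t_given_g = 0.5  # girls wear trousers or skirts in equal numbers
p_t_given_b = 1.0  # all boys wear trousers

# Law of total probability: P(T) = P(T|G) P(G) + P(T|B) P(B)
p_t = p_t_given_g * p_g + p_t_given_b * p_b   # 0.8

# Bayes' rule: P(G|T) = P(T|G) P(G) / P(T)
p_g_given_t = p_t_given_g * p_g / p_t
print(round(p_g_given_t, 4))  # 0.25, matching the counting argument above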
Mathematics
Statistics
null
9607933
https://en.wikipedia.org/wiki/Handshaking%20lemma
Handshaking lemma
In graph theory, the handshaking lemma is the statement that, in every finite undirected graph, the number of vertices that touch an odd number of edges is even. For example, if there is a party of people who shake hands, the number of people who shake an odd number of other people's hands is even. The handshaking lemma is a consequence of the degree sum formula, also sometimes called the handshaking lemma, according to which the sum of the degrees (the numbers of times each vertex is touched) equals twice the number of edges in the graph. Both results were proven by Leonhard Euler in his famous paper on the Seven Bridges of Königsberg that began the study of graph theory. Beyond the Seven Bridges of Königsberg Problem, which subsequently formalized Eulerian Tours, other applications of the degree sum formula include proofs of certain combinatorial structures. For example, in the proofs of Sperner's lemma and the mountain climbing problem the parity consequences of the formula commonly arise. The complexity class PPA encapsulates the difficulty of finding a second odd vertex, given one such vertex in a large implicitly-defined graph. Definitions and statement An undirected graph consists of a system of vertices, and edges connecting unordered pairs of vertices. In any graph, the degree $\deg(v)$ of a vertex $v$ is defined as the number of edges that have $v$ as an endpoint. For graphs that are allowed to contain loops connecting a vertex to itself, a loop should be counted as contributing two units to the degree of its endpoint for the purposes of the handshaking lemma. Then, the handshaking lemma states that, in every finite graph, there must be an even number of vertices $v$ for which $\deg(v)$ is an odd number. The vertices of odd degree in a graph are sometimes called odd nodes (or odd vertices); in this terminology, the handshaking lemma can be rephrased as the statement that every graph has an even number of odd nodes. The degree sum formula states that $\sum_{v \in V} \deg(v) = 2|E|$, where $V$ is the set of nodes (or vertices) in the graph and $E$ is the set of edges in the graph. That is, the sum of the vertex degrees equals twice the number of edges. In directed graphs, another form of the degree-sum formula states that the sum of in-degrees of all vertices, and the sum of out-degrees, both equal the number of edges. Here, the in-degree is the number of incoming edges, and the out-degree is the number of outgoing edges. A version of the degree sum formula also applies to finite families of sets or, equivalently, multigraphs: the sum of the degrees of the elements (where the degree equals the number of sets containing it) always equals the sum of the cardinalities of the sets. Both results also apply to any subgraph of the given graph and in particular to its connected components. A consequence is that, for any odd vertex, there must exist a path connecting it to another odd vertex. Applications Euler paths and tours Leonhard Euler first proved the handshaking lemma in his work on the Seven Bridges of Königsberg, asking for a walking tour of the city of Königsberg (now Kaliningrad) crossing each of its seven bridges once. This can be translated into graph-theoretic terms as asking for an Euler path or Euler tour of a connected graph representing the city and its bridges: a walk through the graph that traverses each edge once, either ending at a different vertex than it starts in the case of an Euler path or returning to its starting point in the case of an Euler tour.
Euler stated the fundamental results for this problem in terms of the number of odd vertices in the graph, which the handshaking lemma restricts to be an even number. If this number is zero, an Euler tour exists, and if it is two, an Euler path exists. Otherwise, the problem cannot be solved. In the case of the Seven Bridges of Königsberg, the graph representing the problem has four odd vertices, and has neither an Euler path nor an Euler tour. It was therefore impossible to tour all seven bridges in Königsberg without repeating a bridge. In the Christofides–Serdyukov algorithm for approximating the traveling salesperson problem, the parity implications of the degree sum formula play a vital role, allowing the algorithm to connect odd-degree vertices in pairs in order to construct a graph on which an Euler tour forms an approximate TSP tour. Combinatorial enumeration Several combinatorial structures may be shown to be even in number by relating them to the odd vertices in an appropriate "exchange graph". For instance, as C. A. B. Smith proved, in any cubic graph $G$ there must be an even number of Hamiltonian cycles through any fixed edge $uv$; these are cycles that pass through each vertex exactly once. Thomason used a proof based on the handshaking lemma to extend this result to graphs in which all vertices have odd degree. Thomason defines an exchange graph $H$, the vertices of which are in one-to-one correspondence with the Hamiltonian paths in $G$ beginning at $u$ and continuing through edge $uv$. Two such paths $p_1$ and $p_2$ are defined as being connected by an edge in $H$ if one may obtain $p_2$ by adding a new edge to the end of $p_1$ and removing another edge from the middle of $p_1$. This operation is reversible, forming a symmetric relation, so $H$ is an undirected graph. If path $p$ ends at vertex $w$, then the vertex corresponding to $p$ in $H$ has degree equal to the number of ways that $p$ may be extended by an edge that does not connect back to $u$; that is, the degree of this vertex in $H$ is either $\deg(w) - 1$ (an even number) if $p$ does not form part of a Hamiltonian cycle through $uv$, or $\deg(w) - 2$ (an odd number) if $p$ is part of a Hamiltonian cycle through $uv$. Since $H$ has an even number of odd vertices, $G$ must have an even number of Hamiltonian cycles through $uv$. Other applications The handshaking lemma (or degree sum formula) is also used in proofs of several other results in mathematics. These include the following: Sperner's lemma states that, if a big triangle is subdivided into smaller triangles meeting edge-to-edge, and the vertices are labeled with three colors so that only two of the colors are used along each edge of the big triangle, then at least one of the smaller triangles has vertices of all three colors; it has applications in fixed-point theorems, root-finding algorithms, and fair division. One proof of this lemma forms an exchange graph whose vertices are the triangles (both small and large) and whose edges connect pairs of triangles that share two vertices of some particular two colors. The big triangle necessarily has odd degree in this exchange graph, as does a small triangle with all three colors, but not the other small triangles. By the handshaking lemma, there must be an odd number of small triangles with all three colors, and therefore at least one such triangle must exist.
The mountain climbing problem states that, for sufficiently well-behaved functions on a unit interval, with equal values at the ends of the interval, it is possible to coordinate the motion of two points, starting from opposite ends of the interval, so that they meet somewhere in the middle while remaining at points of equal value throughout the motion. One proof of this involves approximating the function by a piecewise linear function with the same extreme points, parameterizing the position of the two moving points by the coordinates of a single point in the unit square, and showing that the available positions for the two points form a finite graph, embedded in this square, with only the starting position and its reversal as odd vertices. By the handshaking lemma, these two positions belong to the same connected component of the graph, and a path from one to the other necessarily passes through the desired meeting point. The reconstruction conjecture concerns the problem of uniquely determining the structure of a graph from the multiset of subgraphs formed by removing a single vertex from it. Given this information, the degree-sum formula can be used to recover the number of edges in the given graph and the degrees of each vertex. From this, it is possible to determine whether the given graph is a regular graph, and if so to determine it uniquely from any vertex-deleted subgraph by adding a new neighbor for all the subgraph vertices of too-low degree. Therefore, all regular graphs can be reconstructed. The game of Hex is played by two players, who place pieces of their color on a tiling of a parallelogram-shaped board by hexagons until one player has a connected path of adjacent pieces from one side of the board to the other. It can never end in a draw: by the time the board has been completely filled with pieces, one of the players will have formed a winning path. One proof of this forms a graph from a filled game board, with vertices at the corners of the hexagons, and with edges on sides of hexagons that separate the two players' colors. This graph has four odd vertices at the corners of the board, and even vertices elsewhere, so it must contain a path connecting two corners, which necessarily has a winning path for one player on one of its sides. Proof Euler's proof of the degree sum formula uses the technique of double counting: he counts the number of incident pairs $(v, e)$, where $e$ is an edge and vertex $v$ is one of its endpoints, in two different ways. Vertex $v$ belongs to $\deg(v)$ pairs, where $\deg(v)$ (the degree of $v$) is the number of edges incident to it. Therefore, the number of incident pairs is the sum of the degrees. However, each edge in the graph belongs to exactly two incident pairs, one for each of its endpoints; therefore, the number of incident pairs is $2|E|$. Since these two formulas count the same set of objects, they must have equal values. The same proof can be interpreted as summing the entries of the incidence matrix of the graph in two ways, by rows to get the sum of degrees and by columns to get twice the number of edges. For graphs, the handshaking lemma follows as a corollary of the degree sum formula. In a sum of integers, the parity of the sum is not affected by the even terms in the sum; the overall sum is even when there is an even number of odd terms, and odd when there is an odd number of odd terms. Since one side of the degree sum formula is the even number $2|E|$, the sum on the other side must have an even number of odd terms; that is, there must be an even number of odd-degree vertices.
Alternatively, it is possible to use mathematical induction to prove the degree sum formula, or to prove directly that the number of odd-degree vertices is even, by removing one edge at a time from a given graph and using a case analysis on the degrees of its endpoints to determine the effect of this removal on the parity of the number of odd-degree vertices. In special classes of graphs Regular graphs The degree sum formula implies that every $r$-regular graph with $n$ vertices has $\frac{nr}{2}$ edges. Because the number of edges must be an integer, it follows that when $r$ is odd the number of vertices must be even. Additionally, for odd values of $r$ the number of edges must be divisible by $r$. Bipartite and biregular graphs A bipartite graph has its vertices split into two subsets, with each edge having one endpoint in each subset. It follows from the same double counting argument that, in each subset, the sum of degrees equals the number of edges in the graph. In particular, both subsets have equal degree sums. For biregular graphs, with a partition of the vertices into subsets $V_1$ and $V_2$ with every vertex in subset $V_i$ having degree $r_i$, it must be the case that $r_1 |V_1| = r_2 |V_2|$; both equal the number of edges. Infinite graphs The handshaking lemma does not apply in its usual form to infinite graphs, even when they have only a finite number of odd-degree vertices. For instance, an infinite path graph with one endpoint has only a single odd-degree vertex rather than having an even number of such vertices. However, it is possible to formulate a version of the handshaking lemma using the concept of an end, an equivalence class of semi-infinite paths ("rays") considering two rays as equivalent when there exists a third ray that uses infinitely many vertices from each of them. The degree of an end is the maximum number of edge-disjoint rays that it contains, and an end is odd if its degree is finite and odd. More generally, it is possible to define an end as being odd or even, regardless of whether it has infinite degree, in graphs for which all vertices have finite degree. Then, in such graphs, the number of odd vertices and odd ends, added together, is either even or infinite. Subgraphs By a theorem of Gallai the vertices of any graph can be partitioned as $V = V_1 \cup V_2$, where in the two resulting induced subgraphs, $G[V_1]$ has all degrees even and $G[V_2]$ has all degrees odd. Here, $|V_2|$ must be even by the handshaking lemma. It is also possible to find even-degree and odd-degree induced subgraphs with many vertices. An induced subgraph of even degree can be found with at least half of the vertices, and an induced subgraph of odd degree (in a graph with no isolated vertices) can be found with at least a constant fraction of the vertices. Computational complexity In connection with the exchange graph method for proving the existence of combinatorial structures, it is of interest to ask how efficiently these structures may be found. For instance, suppose one is given as input a Hamiltonian cycle in a cubic graph; it follows from Smith's theorem that there exists a second cycle. How quickly can this second cycle be found? Christos Papadimitriou investigated the computational complexity of questions such as this, or more generally of finding a second odd-degree vertex when one is given a single odd vertex in a large implicitly-defined graph. He defined the complexity class PPA to encapsulate problems such as this one; a closely related class defined on directed graphs, PPAD, has attracted significant attention in algorithmic game theory because computing a Nash equilibrium is computationally equivalent to the hardest problems in this class.
Computational problems proven to be complete for the complexity class PPA include computational tasks related to Sperner's lemma and to fair subdivision of resources according to the Hobby–Rice theorem.
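Both the degree sum formula and the handshaking lemma are easy to test empirically. The following Python sketch (function names are illustrative, not from any cited source) checks them on random multigraphs, counting a loop as contributing two units to its endpoint's degree, as in the definition above:

import random
from collections import Counter

def degrees(n_vertices, edges):
    """Degree of each vertex; a loop (v, v) contributes 2 to deg(v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return [deg[v] for v in range(n_vertices)]

# Random multigraphs (loops and parallel edges allowed).
for _ in range(1000):
    n = random.randint(1, 12)
    edges = [(random.randrange(n), random.randrange(n))
             for _ in range(random.randint(0, 30))]
    deg = degrees(n, edges)
    assert sum(deg) == 2 * len(edges)                  # degree sum formula
    assert sum(1 for d in deg if d % 2 == 1) % 2 == 0  # even count of odd vertices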
Mathematics
Graph theory
null
16351567
https://en.wikipedia.org/wiki/Mass%20fraction%20%28chemistry%29
Mass fraction (chemistry)
In chemistry, the mass fraction of a substance within a mixture is the ratio $w_i$ (alternatively denoted $Y_i$) of the mass $m_i$ of that substance to the total mass $m_\text{tot}$ of the mixture. Expressed as a formula, the mass fraction is: $w_i = \frac{m_i}{m_\text{tot}}.$ Because the individual masses of the ingredients of a mixture sum to $m_\text{tot}$, their mass fractions sum to unity: $\sum_{i=1}^{n} w_i = 1.$ Mass fraction can also be expressed, with a denominator of 100, as percentage by mass (in commercial contexts often called percentage by weight, abbreviated wt.% or % w/w; see mass versus weight). It is one way of expressing the composition of a mixture as a dimensionless quantity; mole fraction (percentage by moles, mol%) and volume fraction (percentage by volume, vol%) are others. When the prevalences of interest are those of individual chemical elements, rather than of compounds or other substances, the term mass fraction can also refer to the ratio of the mass of an element to the total mass of a sample. In these contexts an alternative term is mass percent composition. The mass fraction of an element in a compound can be calculated from the compound's empirical formula or its chemical formula. Terminology Percent concentration does not refer to this quantity. This improper name persists, especially in elementary textbooks. In biology, the unit "%" is sometimes (incorrectly) used to denote mass concentration, also called mass/volume percentage. A solution with 1 g of solute dissolved in a final volume of 100 mL of solution would be labeled as "1%" or "1% m/v" (mass/volume). This is incorrect because the unit "%" can only be used for dimensionless quantities. Instead, the concentration should simply be given in units of g/mL. Percent solution or percentage solution are thus terms best reserved for mass percent solutions (m/m, m%, or mass solute/mass total solution after mixing), or volume percent solutions (v/v, v%, or volume solute per volume of total solution after mixing). The very ambiguous terms percent solution and percentage solutions with no other qualifiers continue to occasionally be encountered. In thermal engineering, vapor quality is used for the mass fraction of vapor in the steam. In alloys, especially those of noble metals, the term fineness is used for the mass fraction of the noble metal in the alloy. Properties The mass fraction is independent of temperature. Related quantities Mixing ratio The mixing of two pure components can be expressed introducing the (mass) mixing ratio of them, $r_w = \frac{m_2}{m_1}$. Then the mass fractions of the components will be $w_1 = \frac{1}{1+r_w}$ and $w_2 = \frac{r_w}{1+r_w}$. The mass ratio equals the ratio of mass fractions of components: $\frac{m_2}{m_1} = \frac{w_2}{w_1}$, due to division of both numerator and denominator by the sum of masses of components. Mass concentration The mass fraction of a component in a solution is the ratio of the mass concentration of that component $\rho_i$ (density of that component in the mixture) to the density of the solution $\rho$: $w_i = \frac{\rho_i}{\rho}.$ Molar concentration The relation to molar concentration is like that from above, substituting the relation between mass and molar concentration: $w_i = \frac{c_i M_i}{\rho}$, where $c_i$ is the molar concentration and $M_i$ is the molar mass of the component $i$. Mass percentage Mass percentage is defined as the mass fraction multiplied by 100. Mole fraction The mole fraction $x_i$ can be calculated using the formula $x_i = w_i \cdot \frac{\bar{M}}{M_i}$, where $M_i$ is the molar mass of the component $i$ and $\bar{M}$ is the average molar mass of the mixture. Replacing the expression of the molar-mass products, $x_i = \frac{w_i / M_i}{\sum_j w_j / M_j}.$ Spatial variation and gradient In a spatially non-uniform mixture, the mass fraction gradient gives rise to the phenomenon of diffusion.
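As a worked illustration of the formulas above, the following Python sketch (the 20/80 ethanol–water mixture is an invented example) computes mass fractions, checks that they sum to unity, and converts them to mole fractions via $x_i = (w_i/M_i)/\sum_j (w_j/M_j)$:

# Hypothetical mixture: 20 g ethanol in 80 g water.
masses = {"ethanol": 20.0, "water": 80.0}          # grams
molar_mass = {"ethanol": 46.07, "water": 18.015}   # g/mol

m_tot = sum(masses.values())
w = {s: m / m_tot for s, m in masses.items()}      # mass fractions w_i = m_i / m_tot
assert abs(sum(w.values()) - 1.0) < 1e-12          # mass fractions sum to unity

moles_per_gram = {s: w[s] / molar_mass[s] for s in w}
n_tot = sum(moles_per_gram.values())
x = {s: v / n_tot for s, v in moles_per_gram.items()}  # mole fractions

print({s: round(v, 3) for s, v in w.items()})  # {'ethanol': 0.2, 'water': 0.8}
print({s: round(v, 3) for s, v in x.items()})  # ethanol ~= 0.089, water ~= 0.911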
Physical sciences
Ratio
Basics and measurement
12071235
https://en.wikipedia.org/wiki/Integrated%20multi-trophic%20aquaculture
Integrated multi-trophic aquaculture
Integrated multi-trophic aquaculture (IMTA) is a type of aquaculture where the byproducts, including waste, from one aquatic species are used as inputs (fertilizers, food) for another. Farmers combine fed aquaculture (e.g., fish, shrimp) with inorganic extractive (e.g., seaweed) and organic extractive (e.g., shellfish) aquaculture to create balanced systems for environmental remediation (biomitigation), economic stability (improved output, lower cost, product diversification and risk reduction) and social acceptability (better management practices). Selecting appropriate species and sizing the various populations to provide necessary ecosystem functions allows the biological and chemical processes involved to achieve a stable balance, mutually benefiting the organisms and improving ecosystem health. Ideally, the co-cultured species each yield valuable commercial "crops". IMTA can synergistically increase total output, even if some of the crops yield less than they would, short-term, in a monoculture. Terminology and related approaches "Integrated" refers to intensive and synergistic cultivation, using water-borne nutrient and energy transfer. "Multi-trophic" means that the various species occupy different trophic levels, i.e., different (but adjacent) links in the food chain. IMTA is a specialized form of the age-old practice of aquatic polyculture, which is the co-culture of various species, often without regard to trophic level. In this broader case, the organisms may share biological and chemical processes that may be minimally complementary, potentially leading to reduced production of both species due to competition for the same food resource. However, some traditional systems such as polyculture of carps in China employ species that occupy multiple niches within the same pond, or the culture of fish that is integrated with a terrestrial agricultural species, can be considered forms of IMTA. The more general term "Integrated Aquaculture" is used to describe the integration of monocultures through water transfer between the culture systems. The terms "IMTA" and "integrated aquaculture" differ primarily in their precision and are sometimes interchanged. Aquaponics, fractionated aquaculture, integrated agriculture-aquaculture systems, integrated peri-urban-aquaculture systems, and integrated fisheries-aquaculture systems are all variations of the IMTA concept. Range of approaches Today, low-intensity traditional/incidental multi-trophic aquaculture is much more common than modern IMTA. Most such systems are relatively simple combinations, such as fish with seaweed or shellfish. True IMTA can be land-based, using ponds or tanks, or even open-water marine or freshwater systems. Implementations have included species combinations such as shellfish/shrimp, fish/seaweed/shellfish, fish/seaweed, fish/shrimp and seaweed/shrimp. IMTA in open water (offshore cultivation) can be done by the use of buoys with lines on which the seaweed grows. The buoys/lines are placed next to the fishnets or cages in which the fish grow. In some tropical Asian countries some traditional forms of aquaculture of finfish in floating cages, nearby fish and shrimp ponds, and oyster farming integrated with some capture fisheries in estuaries can be considered a form of IMTA. Since 2010, IMTA has been used commercially in Norway, Scotland, and Ireland. In the future, systems with other components for additional functions, or with similar functions applied to different size brackets of particles, are likely. Multiple regulatory issues remain open.
Modern history of land-based systems Ryther and co-workers created modern, integrated, intensive, land mariculture. They originated, both theoretically and experimentally, the integrated use of extractive organisms—shellfish, microalgae and seaweeds—in the treatment of household effluents, descriptively and with quantitative results. A domestic wastewater effluent, mixed with seawater, was the nutrient source for phytoplankton, which in turn became food for oysters and clams. They cultivated other organisms in a food chain rooted in the farm's organic sludge. Dissolved nutrients in the final effluent were filtered by seaweed (mainly Gracilaria and Ulva) biofilters. The value of the original organisms grown on human waste effluents was minimal. In 1976, Huguenin proposed adaptations to the treatment of intensive aquaculture effluents in both inland and coastal areas. Tenore followed by integrating with their system of carnivorous fish and the macroalgivore abalone. In 1977, Hughes-Games described the first practical marine fish/shellfish/phytoplankton culture, followed by Gordin, et al., in 1981. By 1989, a semi-intensive (1 kg fish/m−3) seabream and grey mullet pond system by the Gulf of Aqaba (Eilat) on the Red Sea supported dense diatom populations, excellent for feeding oysters. Hundreds of kilos of fish and oysters cultured here were sold. Researchers also quantified the water quality parameters and nutrient budgets in (5 kg fish m−3) green water seabream ponds. The phytoplankton generally maintained reasonable water quality and converted on average over half the waste nitrogen into algal biomass. Experiments with intensive bivalve cultures yielded high bivalve growth rates. This technology supported a small farm in southern Israel. Sustainability IMTA promotes economic and environmental sustainability by converting byproducts and uneaten feed from fed organisms into harvestable crops, thereby reducing eutrophication, and increasing economic diversification. Properly managed multi-trophic aquaculture accelerates growth without detrimental side-effects. This increases the site's ability to assimilate the cultivated organisms, thereby reducing negative environmental impacts. IMTA enables farmers to diversify their output by replacing purchased inputs with byproducts from lower trophic levels, often without new sites. Initial economic research suggests that IMTA can increase profits and can reduce financial risks due to weather, disease and market fluctuations. Over a dozen studies have investigated the economics of IMTA systems since 1985. Nutrient flow Typically, carnivorous fish or shrimp occupy IMTA's higher trophic levels. They excrete soluble ammonia and phosphorus (orthophosphate). Seaweeds and similar species can extract these inorganic nutrients directly from their environment. Fish and shrimp also release organic nutrients which feed shellfish and deposit feeders. Species such as shellfish that occupy intermediate trophic levels often play a dual role, both filtering organic bottom-level organisms from the water and generating some ammonia. Waste feed may also provide additional nutrients; either by direct consumption or via decomposition into individual nutrients. In some projects, the waste nutrients are also gathered and reused in the food given to the fish in cultivation. This can happen by processing the seaweed grown into food. 
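The nutrient flows just described can be illustrated with a simple nitrogen mass balance. The following Python fragment is a minimal sketch only: the function and every coefficient in it (nitrogen retention by the fed fish, the split between dissolved and particulate waste, and the uptake shares of the extractive species) are hypothetical placeholders, not values drawn from the studies cited here.

# Illustrative IMTA nitrogen budget; all coefficients are hypothetical.
def imta_nitrogen_budget(feed_n_kg,
                         retained_by_fish=0.35,     # N kept in fish biomass
                         dissolved_fraction=0.40,   # N excreted as ammonia etc.
                         particulate_fraction=0.25, # N in faeces and waste feed
                         seaweed_uptake=0.50,       # share of dissolved N captured
                         shellfish_uptake=0.40):    # share of particulate N captured
    fish_n = feed_n_kg * retained_by_fish
    seaweed_n = feed_n_kg * dissolved_fraction * seaweed_uptake
    shellfish_n = feed_n_kg * particulate_fraction * shellfish_uptake
    lost_n = feed_n_kg - fish_n - seaweed_n - shellfish_n
    recovery = (fish_n + seaweed_n + shellfish_n) / feed_n_kg
    return fish_n, seaweed_n, shellfish_n, lost_n, recovery

print(imta_nitrogen_budget(100.0))
# With these placeholder numbers, 100 kg of feed nitrogen ends up as 35 kg in
# fish, 20 kg in seaweed and 10 kg in shellfish, with 35 kg lost, i.e. a
# nutrient recovery efficiency of 65%; a comparable monoculture would recover
# only the fish fraction.

The point of such a toy budget is simply that routing waste streams to extractive species raises the share of input nutrients leaving the farm as product rather than as pollution.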
Recovery efficiency Nutrient recovery efficiency is a function of technology, harvest schedule, management, spatial configuration, production, species selection, trophic level biomass ratios, natural food availability, particle size, digestibility, season, light, temperature, and water flow. Since these factors vary significantly by site and region, recovery efficiency also varies. In a hypothetical family-scale fish/microalgae/bivalve/seaweed farm, based on pilot-scale data, at least 60% of nutrient input reached commercial products, nearly three times more than in modern net pen farms. Expected average annual yields of the hypothetical system covered seabream, bivalves and seaweeds. These results required precise water quality control and attention to suitability for bivalve nutrition, due to the difficulty of maintaining consistent phytoplankton populations. Seaweeds' nitrogen uptake efficiency ranges from 2% to 100% in land-based systems. Uptake efficiency in open-water IMTA is unknown. Food safety and quality Feeding the wastes of one species to another has the potential for contamination, although this has yet to be observed in IMTA systems. Mussels and kelp growing adjacent to Atlantic salmon cages in the Bay of Fundy have been monitored since 2001 for contamination by medicines, heavy metals, arsenic, PCBs and pesticides. Concentrations are consistently either non-detectable or well below regulatory limits established by the Canadian Food Inspection Agency, the United States Food and Drug Administration and European Community Directives. Taste testers found these mussels free of "fishy" taste and aroma and could not distinguish them from "wild" mussels. The mussels' meat yield is significantly higher, reflecting the increase in nutrient availability. Recent findings suggest mussels grown adjacent to salmon farms are advantageous for winter harvest because they maintain high meat weight and condition index (meat-to-shell ratio). This finding is of particular interest because the Bay of Fundy, where this research was conducted, produces low condition index mussels during winter months in monoculture situations, and the seasonal presence of paralytic shellfish poisoning (PSP) typically restricts mussel harvest to the winter months. Selected projects Historic and ongoing research projects include: Asia Japan, China, South Korea, Thailand, Vietnam, Indonesia, Bangladesh and others have co-cultured aquatic species for centuries in marine, brackish and fresh water environments. Fish, shellfish and seaweeds have been cultured together in bays, lagoons and ponds. Trial and error has improved integration over time. The proportion of Asian aquaculture production that occurs in IMTA systems is unknown. After the 2004 tsunami, many of the shrimp farmers in Aceh Province of Indonesia and Ranong Province of Thailand were trained in IMTA. This has been especially important as the monoculture of marine shrimp was widely recognized as unsustainable. Production of tilapia, mud crabs, seaweeds, milkfish, and mussels has been incorporated. AquaFish Collaborative Research Support Program Canada Bay of Fundy Industry, academia and government are collaborating here to expand production to commercial scale. The current system integrates Atlantic salmon, blue mussels and kelp; deposit feeders are under consideration. AquaNet (one of Canada's Networks of Centres of Excellence) funded phase one. The Atlantic Canada Opportunities Agency is funding phase two.
The project leaders are Thierry Chopin (University of New Brunswick in Saint John) and Shawn Robinson (Department of Fisheries and Oceans, St. Andrews Biological Station). Pacific SEA-lab Pacific SEA-lab is researching and is licensed for the co-culture of sablefish, scallops, oysters, blue mussels, urchins and kelp. "SEA" stands for Sustainable Ecological Aquaculture. The project aims to balance four species. The project is headed by Stephen Cross under a British Columbia Innovation Award at the University of Victoria Coastal Aquaculture Research & Training (CART) network. Chile The i-mar Research Center at the Universidad de Los Lagos, in Puerto Montt is working to reduce the environmental impact of intensive salmon culture. Initial research involved trout, oysters and seaweeds. Present research is focusing on open waters with salmon, seaweeds and abalone. The project leader is Alejandro Buschmann. Israel SeaOr Marine Enterprises Ltd. SeaOr Marine Enterprises Ltd., which operated for several years on the Israeli Mediterranean coast, north of Tel Aviv, cultured marine fish (gilthead seabream), seaweeds (Ulva and Gracilaria) and Japanese abalone. Its approach leveraged local climate and recycled fish waste products into seaweed biomass, which was fed to the abalone. It also effectively purified the water sufficiently to allow the water to be recycled to the fishponds and to meet point-source effluent environmental regulations. PGP Ltd. PGP Ltd. is a small farm in Southern Israel. It cultures marine fish, microalgae, bivalves and Artemia. Effluents from seabream and seabass collect in sedimentation ponds, where dense populations of microalgae—mostly diatoms—develop. Clams, oysters and sometimes Artemia filter the microalgae from the water, producing a clear effluent. The farm sells the fish, bivalves and Artemia. The Netherlands In the Netherlands, Willem Brandenburg of UR Wageningen (Plant Sciences Group) has established the first seaweed farm in the Netherlands. The farm is called "De Wierderij" and is used for research. South Africa Three farms grow seaweeds for feed in abalone effluents in land-based tanks. Up to 50% of re-circulated water passes through the seaweed tanks. Somewhat uniquely, neither fish nor shrimp comprise the upper trophic species. The motivation is to avoid over-harvesting natural seaweed beds and red tides, rather than nutrient abatement. These commercial successes developed from research collaboration between Irvine and Johnson Cape Abalone and scientists from the University of Cape Town and the University of Stockholm. United Kingdom The Scottish Association for Marine Science, in Oban is developing co-cultures of salmon, oysters, sea urchins, and brown and red seaweeds via several projects. Research focuses on biological and physical processes, as well as production economics and implications for coastal zone management. Researchers include: M. Kelly, A. Rodger, L. Cook, S. Dworjanyn, and C. Sanderson. Bangladesh Indian carps and stinging catfish are cultured in Bangladesh, but the methods could be more productive. The pond and cage cultures used are based only on the fish. They don't take advantage of the productivity increases that could take place if other trophic levels were included. Expensive artificial feeds are used, partly to supply the fish with protein. These costs could be reduced if freshwater snails, such as Viviparus bengalensis, were simultaneously cultured, thus increasing the available protein. 
The organic and inorganic wastes produced as byproducts of culturing could also be minimized by integrating freshwater snails and aquatic plants, such as water spinach, which take up the organic and inorganic fractions respectively.
Technology
Aquaculture
null
12074071
https://en.wikipedia.org/wiki/Ei%20mechanism
Ei mechanism
In organic chemistry, the Ei mechanism (Elimination Internal/Intramolecular), also known as a thermal syn elimination or a pericyclic syn elimination, is a special type of elimination reaction in which two vicinal (adjacent) substituents on an alkane framework leave simultaneously via a cyclic transition state to form an alkene in a syn elimination. This type of elimination is unique because it is thermally activated and does not require additional reagents, unlike regular eliminations, which require an acid or base, or would in many cases involve charged intermediates. This reaction mechanism is often found in pyrolysis. General features Compounds that undergo elimination through cyclic transition states upon heating, with no other reagents present, are designated Ei reactions. Depending on the compound, elimination takes place through a four-, five-, or six-membered transition state. The elimination must be syn and the atoms coplanar for four- and five-membered transition states, but coplanarity is not required for six-membered transition states. There is a substantial amount of evidence to support the existence of the Ei mechanism: 1) the kinetics of the reactions were found to be first order; 2) the use of free-radical inhibitors did not affect the rate of the reactions, indicating that no free-radical mechanisms are involved; 3) isotope studies for the Cope elimination indicate that the C-H and C-N bonds are partially broken in the transition state, which is also supported by computations showing bond lengthening in the transition state; and 4) without the intervention of other mechanisms, the Ei mechanism gives exclusively syn elimination products. Many factors affect the product composition of Ei reactions, but typically they follow Hofmann's rule and lose a β-hydrogen from the least substituted position, giving the alkene that is less substituted (the opposite of Zaitsev's rule). Factors affecting product composition include steric effects, conjugation, and the stability of the forming alkene. For acyclic substrates, the Z-isomer is typically the minor product due to the destabilizing gauche interaction in the transition state, but the selectivity is not usually high. The pyrolysis of N,N-dimethyl-2-phenylcyclohexylamine-N-oxide shows how conformational effects and the stability of the transition state affect product composition for cyclic substrates. In the trans isomer, there are two cis-β-hydrogens that can eliminate. The major product is the alkene that is in conjugation with the phenyl ring, presumably due to the stabilizing effect on the transition state. In the cis isomer, there is only one cis-β-hydrogen that can eliminate, giving the nonconjugated regioisomer as the major product. Ester (acetate) pyrolysis The pyrolytic decomposition of esters is an example of a thermal syn elimination. When subjected to temperatures above 400 °C, esters containing β-hydrogens can eliminate a carboxylic acid through a 6-membered transition state, resulting in an alkene. Isotopic labeling was used to confirm that syn elimination occurs during ester pyrolysis in the formation of stilbene. Sulfur-based Sulfoxide elimination β-Hydroxy phenyl sulfoxides were found to undergo thermal elimination through a 5-membered cyclic transition state, yielding, after tautomerization, β-keto esters and methyl ketones, together with a sulfenic acid.
Allylic alcohols can be formed from β-hydroxy phenyl sulfoxides that contain a β’-hydrogen through an Ei mechanism, tending to give the β,γ-unsaturation. 1,3-Dienes were found to be formed upon the treatment of an allylic alcohol with an aryl sulfide in the presence of triethylamine. Initially, a sulfenate ester is formed followed by a [2,3]-sigmatropic rearrangement to afford an allylic sulfoxide which undergoes thermal syn elimination to yield the 1,3-diene. Chugaev elimination The Chugaev elimination is the pyrolysis of a xanthate ester, resulting in an olefin. To form the xanthate ester, an alcohol reacts with carbon disulfide in the presence of a base, resulting in a metal xanthate which is trapped with an alkylating agent (typically methyl iodide). The olefin is formed through the thermal syn elimination of the β-hydrogen and xanthate ester. The reaction is irreversible because the resulting by-products, carbonyl sulfide and methanethiol, are very stable. The Chugaev elimination is very similar to the ester pyrolysis, but requires significantly lower temperatures to achieve the elimination, thus making it valuable for rearrangement-prone substrates. Burgess dehydration reaction The dehydration of secondary and tertiary alcohols to yield an olefin through a sulfamate ester intermediate is called the Burgess dehydration reaction. The reaction conditions used are typically very mild, giving it some advantage over other dehydration methods for sensitive substrates. This reaction was used during the first total synthesis of taxol to install an exo-methylene group on the C ring. First, the alcohol displaces the triethylamine on the Burgess reagent, forming the sulfamate ester intermediate. β-hydrogen abstraction and elimination of the sulfamate ester through a 6-membered cyclic transition state yields the alkene. Thiosulfinate elimination Thiosulfinates can eliminate in the manner analogous to sulfoxides. Representative is the fragmentation of allicin to thioacrolein, which will go on to form vinyldithiins. Such reactions are important in the antioxidant chemistry of garlic and other plants of the genus Allium. Selenium-based Selenoxide elimination The selenoxide elimination has been used in converting ketones, esters, and aldehydes to their α,β-unsaturated derivatives. The mechanism for this reaction is analogous to the sulfoxide elimination, which is a thermal syn elimination through a 5-membered cyclic transition state. Selenoxides are preferred for this type of transformation over sulfoxides due to their increased reactivity toward β-elimination, in some cases allowing the elimination to take place at room temperature. The areneselenic acid generated after the elimination step is in equilibrium with the diphenyl diselenide which can react with olefins to yield β-hydroxy selenides under acidic or neutral conditions. Under basic conditions, this side reaction is suppressed. Grieco elimination The one-pot dehydration of a primary alcohol to give an alkene through an o-nitrophenyl selenoxide intermediate is called the Grieco elimination. The reaction begins with the formation of a selenophosphonium salt which reacts with the alcohol to form an oxaphosphonium salt. The aryl selenium anion displaces tributylphosphine oxide forming the alkyl aryl selenide species. The selenide is then treated with excess hydrogen peroxide leading to the selenoxide which eliminates the β-hydrogen through a 5-member cyclic transition state, yielding an alkene. 
The electron-withdrawing nitro group was found to increase both the rate of elimination and the final yield of the olefin. Nitrogen-based Cope elimination The Cope elimination (Cope reaction) is the elimination of a tertiary amine oxide to yield an alkene and a hydroxylamine through an Ei mechanism. The Cope elimination was used in the synthesis of a mannopyranosylamine mimic. The tertiary amine was oxidized to the amine oxide using m-chloroperoxybenzoic acid (mCPBA) and subjected to high temperatures for thermal syn elimination of the β-hydrogen and amine oxide through a cyclic transition state, yielding the alkene. It is worth noting that the indicated hydrogen (in green) is the only hydrogen available for syn elimination. Cyclic amine oxides (5, 7-10-membered nitrogen containing rings) can also undergo internal syn elimination to yield acyclic hydroxylamines containing terminal alkenes. Special cases for the Hofmann elimination The mechanism for the Hofmann elimination is generally E2, but can go through an Ei pathway under certain circumstances. For some sterically hindered molecules the base deprotonates a methyl group on the amine instead of the β-hydrogen directly, forming an ylide intermediate which eliminates trimethylamine through a 5-membered transition state, forming the alkene. Deuterium labeling studies confirmed this mechanism by observing the formation of deuterated trimethylamine (and no deuterated water, which would form from the E2 mechanism). The Wittig modified Hofmann elimination goes through the same Ei mechanism, but instead of using silver oxide and water as base, the Wittig modification uses strong bases like alkylithiums or KNH2/liquid NH3. Iodoso elimination Secondary and tertiary alkyl iodides with strongly electron-withdrawing groups at the α-carbon were found to undergo a pericyclic syn elimination when exposed to m-chloroperbenzoic acid (mCPBA). It is proposed that the reaction goes through an iodoso intermediate before the syn elimination of hypoiodous acid. The scope of this reaction does not include primary alkyl iodides because the iodoso intermediate rearranges to the hypoiodite intermediate, which, under the reaction conditions, is converted to an alcohol. Strongly electron-withdrawing groups suppress the rearrangement pathway, allowing the pericyclic syn elimination pathway to predominate.
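The first-order kinetics cited as evidence under "General features" correspond to a standard unimolecular rate law. In LaTeX notation, a sketch of the expected behaviour is shown below; [S] for the substrate concentration, A for the pre-exponential factor and E_a for the activation energy are generic textbook symbols, not values from any study cited here.

\[
-\frac{d[\mathrm{S}]}{dt} = k[\mathrm{S}], \qquad
[\mathrm{S}]_t = [\mathrm{S}]_0\, e^{-kt}, \qquad
k = A\, e^{-E_a/RT}
\]

The Arrhenius form of k is consistent with a purely thermally activated, reagent-free elimination: raising the pyrolysis temperature, rather than adding an acid or base, is what drives these reactions.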
Physical sciences
Organic reactions
Chemistry
1615787
https://en.wikipedia.org/wiki/Ipomoea%20aquatica
Ipomoea aquatica
Ipomoea aquatica, widely known as water spinach, is a semi-aquatic, tropical plant grown as a vegetable for its tender shoots. I. aquatica is generally believed to have been first domesticated in Southeast Asia. It is widely cultivated in Southeast Asia, East Asia, and South Asia. It grows abundantly near waterways and requires little to no care. Description Ipomoea aquatica grows in water or on moist soil. Its stems are or longer, rooting at the nodes. The hollow cavity within the stem makes the plant buoyant. The leaves vary from typically sagittate (arrowhead-shaped) to lanceolate, long and broad. The flowers are trumpet-shaped, in diameter, and usually white in colour with a mauve centre. Propagation is either by planting cuttings of the stem shoots, which will root along nodes, or by planting the seeds from flowers that produce seed pods. Names Ipomoea aquatica is most widely known as kangkong (also spelled kangkung), its common name in Maritime Southeast Asia, which likely originates from either Malay or one of the languages of the Philippines. It is also known as water spinach, river spinach, water morning glory, water convolvulus, or by the more ambiguous names Chinese spinach, Chinese watercress, Chinese convolvulus or swamp cabbage. It is known as () in Mandarin, () in Cantonese and in Hawaii, and () in modern Cantonese. In Tamil–speaking parts of South India and Sri Lanka, this spinach is known as (), In Vietnamese, this spinach is called rau muống. In Bengali, it is called Burma saag (বর্মা শাক) (Burma is another term for the country of Myanmar). In Thai it is known as pak bung. In Korean, it is called gongsimchae (공심채). Origin and distribution The origin of Ipomoea aquatica is not quite clear, but it is generally believed to be native to Southeast Asia and was first cultivated there. This is supported by phylogenetic studies, its ideal climatic conditions, and the number of native pathogens in the region (like Albugo spp.); as well as its predominant cultivation range, the prevalence in usage as food and traditional medicine, and the number of distinct native names in Southeast Asian languages and language families. Several sources have also cited China or India as the location of the plant's domestication. However, these claims have no supporting evidence other than the appearance of the plant's name in historical records. The first clear mention of I. aquatica in Chinese records is in the Nanfang Caomu Zhuang written by the Chinese botanist Ji Han (AD 263-307). Ji Han specifically identifies I. aquatica as being "a strange vegetable of the south" with a foreign origin brought over by "western countries". The claim for an Indian origin is based on the presence of the old name for the plant in Sanskrit, presumed to be from around 200 BC, but this is putative. Ipomoea aquatica is also found in Africa, the southwestern Pacific Islands, and northern Australia. However, in Africa and the Pacific Islands, the number of native common names isn't as varied as in Southeast Asia, and there are very few references to the local use of I. aquatica for any purpose. Similarly, in Australia, it does not have indigenous names at all and is entirely absent in the traditional diet of Indigenous Australians. These imply that I. aquatica weren't native to these regions and were likely introduced relatively late from tropical Asia. Composition Nutrition Safety Health risk Many of the waters where water spinach grows are fed by domestic or other waste. 
Pigs in southeast Asia are a natural reservoir for the parasite Fasciolopsis buski. Infections in the Mekong regions resulted from feeding on water spinach. Infections of F. buski in humans through water spinach can be anticipated. The infection can be prevented by proper preparation such as frying or boiling. Contamination with thermotolerant coliforms (ThC) or protozoan parasites with fecal origin, are very likely when the water spinach is planted in wastewater fed urban systems. Water spinach has great potential as a purifier of aquatic habitats. It is an efficient accumulator of cadmium, lead, and mercury. This characteristic can be dangerous if water spinach is planted for human or animal feed in polluted aquatic systems. Mercury in water spinach is composed mostly as methylmercury and has the highest potential of becoming a threat to human health. The edible parts of the plant have a lower heavy metal concentration. The stems and bottom of the edible portion of the plant are higher in concentration and should be removed to minimize the heavy metal intake. Uses Culinary The vegetable is a common ingredient in East, South and Southeast Asian dishes, such as in stir-fried water spinach. In Singapore, Indonesia, and Malaysia, the tender shoots along with the leaves are usually stir-fried with chili pepper, garlic, ginger, dried shrimp paste (belacan/terasi) and other spices. In Penang and Ipoh, it is cooked with cuttlefish and a sweet and spicy sauce. Also known as in the Hokkien dialect, it can also be boiled with preserved cuttlefish, then rinsed and mixed with spicy paste to become . Boiled also can be served with fermented krill noodles – – and prawn mi. In Burmese cuisine, water spinach is the primary ingredient in a Burmese salad called (), made with blanched water spinach, lime juice, fried garlic and garlic oil, roasted rice flour and dried shrimp. In Indonesian cuisine it is called ; boiled or blanched together with other vegetables it forms the ingredient of gado-gado or pecel salads in peanut sauce. Some recipes that use include plecing kangkung from Lombok, mie kangkung (kangkong noodles) from Jakarta, and petis kangkung from Semarang. In Thailand, where it is called (), it is eaten raw, often along with green papaya salad or nam phrik, in stir-fries and in curries such as kaeng som. In the Philippines, where it is called , the tender shoots are cut into segments and cooked, together with the leaves, in fish and meat stews, such as sinigang. The vegetable is also commonly eaten alone. In adobong kangkóng (also called ), it is sautéed in cooking oil, onions, garlic, vinegar, and soy sauce. In (or ), it is blanched and served in vinegar or calamansi juice and fresh tomatoes and onions with salt and pepper to taste. In (or ), it is sautéed with garlic and topped with bagoong alamang (shrimp paste) or bagoong isda (fermented fish) and sliced fresh tomatoes and onions, commonly also with cubed crispy (pork belly) or pork adobo. It can also be spiced with siling haba or siling labuyo peppers, soy sauce, black pepper, and sugar. It differs from in that it does not use vinegar. A local appetiser called crispy kangkóng has the leaves coated in a flour-based batter and fried until crisp, similar to Japanese vegetable tempura. Phytoremediation Using aquatic macrophytes to remove nutrients from wastewater and to control freshwater eutrophication has been reported to be a feasible way of phytoremediation. Various plants, including I. aquatica, have been tested for this use. 
Owing to its being edible and thus marketable, it could be an attractive option for this use. Animal feed Water spinach is fed to livestock as green fodder with high nutritive value—especially the leaves, for they are a good source of carotene. It is fed to cattle, pigs, fish, ducks, and chicken. In limited quantities, I. aquatica can have a somewhat laxative effect. Medical Medicinal use I. aquatica is used in the traditional medicine of southeast Asia and in the traditional medicine of some countries in Africa. In southeast Asian medicine it is used against piles, and nosebleeds, as an anthelmintic, and to treat high blood pressure. In Ayurveda, leaf extracts are used against jaundice and nervous debility. In indigenous medicine in Sri Lanka, water spinach is supposed to have insulin-like properties. Dr. Christophe Wiart cites several promising studies showing improvements in blood glucose levels in humans and rats and concludes that clinical trials are warranted. Antioxidant bioactive compounds and anti-microbial substances can be detected in water spinach. Furthermore, plant extracts of water spinach inhibit cancer cell growth of Vero, Hep-2 and A-549 cells, though they have moderate anti-cancer properties. Cultivation Ipomoea aquatica is most commonly grown in east, south, and southeast Asia. It flourishes naturally in waterways, and requires little if any care. It is used extensively in Indonesian, Burmese, Thai, Lao, Cambodian, Malay, Vietnamese, Filipino, and Chinese cuisine, especially in rural or kampung (village) areas. The vegetable is also extremely popular in Taiwan, where it grows well. During the Japanese occupation of Singapore in World War II, the vegetable grew remarkably easily in many areas, and became a popular wartime crop. Water spinach has been found to be cultivated in the following countries: Australia Bangladesh Burma Cambodia China Fiji India Maldives Indonesia Japan Malaysia Nepal New Guinea Philippines Sri Lanka Taiwan Thailand Vietnam In the United States, it is cultivated in California, Florida, Hawaii, Texas, Arizona, and the U.S. Virgin Islands. It is also found in Africa and in its wild form; it is collected and used by the Sambaa people in Tanzania. Water spinach is also potentially suitable for cultivation in greenhouses in more temperate regions. In non-tropical areas, it is easily grown in containers given enough water in a bright sunny location. It readily roots from cuttings. Requirements for climate and soil Water spinach is ideal for sub-tropical and tropical climate, as it does not grow well below and is sensitive to frost. High soil moisture is beneficial for growth. Clay soils and marshy soils rich in organic matter are suitable for water spinach. The ideal pH range for the growth is from 5 to 7. The provision of shade has been shown to have a positive influence on the yield of water spinach. Traditional cultivation methods Water spinach is cultivated in a variety of systems. In Hong Kong, two methods are traditionally used: the dryland method and the wetland method. In the dryland method, water spinach is grown on raised beds which are separated by irrigation ditches. The seeds can be sown directly onto the beds. Alternatively, a nursery may be used and the seedlings are transplanted when they reach a sufficient size. In either case, the distance between the plants should be about by the time they are tall. Regular irrigation is crucial in the dryland system and so is sufficient fertilization. 
Water spinach cultivated with the dryland method is ready for harvest 50 to 60 days after sowing. Harvesting is done by pulling up the whole plant. The wetland method is the traditionally more common and important method for cultivation in Hong Kong: In the wetland method, water spinach is cultivated on flat fields surrounded by raised banks, which have oftentimes been used as rice paddies in the past. These former rice paddies have a heavy clay soil with an iron-pan. This helps to retain water for the water spinach. The seedlings to be used in this method are usually grown in a nursery on a dry field, as germination under water is quite poor. Six weeks after sowing, cuttings can be taken from the seedlings for transplantation. One cutting is an approximately long cut from the stem containing seven or eight nodes. This is then planted in the field with a spacing of about . The field is prepared beforehand by flooding it to a depth of . The soil itself is tramped into a liquid mud so that the cuttings can root easily. Once the plants are established, the depth of the flooding is increased to . The first harvest in the wetland method can usually be done at around 30 days after transplantation. Also, the harvesting differs from the dryland system: In the wetland, the upper part of the main shoot is cut at about water level. This stimulates lateral growth and produces horizontal shoots carrying vertical branches. After the first harvests, every seven to ten days throughout the summer, these vertical branches can be harvested. After the planting period, the fields are drained and once the fruit of the water spinach is ripe, it is harvested, dried, then trodden to release the seeds which are to be used for the following season. Use of fertilizer How much fertilizer is used for the cultivation strongly depends on the region. Most research is from the 1980s and 1990s. Generally, it has been shown that a dose of N/ha is sufficient and that the application of K can be beneficial on the yield. Also, the application of plant growth regulators, for example Adenine and Zetanine, has been found to be an effective means to promote water spinach growth. One study has determined, that the highest yields are produced with the application of 60 kg/ha of N, 90 kg/ha of P2O5 and 50 kg/ha of K2O for the first harvest. For the second harvest the optimal fertilization was determined as 120 kg/ha of N, 45 kg/ha of P2O5 and 100 kg/ha of K2O. Taiwan: In Taiwan, the usual fertilization includes the basic application of about 10 t/ha of cow manure followed by 50 kg/ha of ammonium sulfate after each harvest. Bangkok: In Bangkok, it is common to apply about 300 kg/ha of NPK fertilizer twice a month. Indonesia: In Indonesia, usually 150 kg to 300 kg of NPK are applied per hectare. Pathogens and pests There are several pathogens and pests reported, affecting I. aquatica. Pathogens include Pythium, causing problems like damping-off, Cercospora leaf spot and root nematodes. Also, aphids may be problems in fields. Additionally, there are several polyphagous insects feeding on I. aquatica. Lepidoptera species include Diacrisia strigatula Walker and Spodoptera litura. The "woolly-bear" caterpillars (D. virginica [Fabricius]) of the eastern United States and D. strigatula (Chinese tiger moth) are other species with wide food preferences. A specialist pathogen on I. aquatica is the oomycete Albugo ipomoeae-aquaticae, though its range is restricted to southern and southeast Asia. 
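Returning to the fertilizer doses quoted above, converting a per-hectare recommendation to a small plot is simple arithmetic (1 ha = 10,000 m2, so 1 kg/ha = 0.1 g/m2). A minimal Python sketch follows, using the first-harvest doses from the study cited above; the 25 m2 bed is an arbitrary example, not a recommended plot size.

# Convert kg/ha fertilizer recommendations to grams for a given plot area.
def dose_grams(kg_per_ha, plot_m2):
    return kg_per_ha * 0.1 * plot_m2  # 1 kg/ha = 0.1 g per m2

for nutrient, kg_ha in (("N", 60), ("P2O5", 90), ("K2O", 50)):
    print(nutrient, dose_grams(kg_ha, plot_m2=25.0), "g per 25 m2 bed")
# -> 150 g N, 225 g P2O5 and 125 g K2O for a 25 m2 bed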
Invasiveness Ipomoea aquatica is listed by the USDA as a noxious weed, especially in the states of Florida, California, and Hawaii, where it can be observed growing in the wild. In the US, water spinach has mainly become a problem in Florida. It is unclear why it is a problem there in particular, although its fast growth rate has been cited as a threat to native plants in certain areas of Florida; the difference could be owing to the time since introduction, or to climatic factors. I. aquatica has been extensively cultivated in Texas for over 30 years, having been originally brought there by Asian immigrants. Because no evidence indicates the plant has escaped into the wild, Texas lifted its ban on cultivation for personal use with no restrictions or requirements, noting its importance as a vegetable in many cultures, and also began permitting cultivation for commercial sales with the requirement of an exotic species permit. Possession of I. aquatica has been prohibited in Florida since 1973, but it is still grown and sold illegally. Attempts have been made to eradicate infestations in Florida public lakes, and some have been successful. In Sri Lanka it invades wetlands, where its long, floating stems form dense mats which can block the flow of water and prevent the passage of boats.
Biology and health sciences
Leafy vegetables
Plants
1616775
https://en.wikipedia.org/wiki/Dissociation%20%28chemistry%29
Dissociation (chemistry)
Dissociation in chemistry is a general process in which molecules (or ionic compounds such as salts, or complexes) separate or split into smaller species such as atoms, ions, or radicals, usually in a reversible manner. For instance, when an acid dissolves in water, a covalent bond between an electronegative atom and a hydrogen atom is broken by heterolytic fission, which gives a proton (H+) and a negative ion. Dissociation is the opposite of association or recombination. Dissociation constant For reversible dissociations in a chemical equilibrium AB <=> A + B the dissociation constant Kd is the ratio of dissociated to undissociated compound, Kd = [A][B] / [AB], where the brackets denote the equilibrium concentrations of the species. Dissociation degree The dissociation degree is the fraction of original solute molecules that have dissociated. It is usually indicated by the Greek symbol α. More accurately, the degree of dissociation refers to the amount of solute dissociated into ions or radicals per mole. In the case of very strong acids and bases, the degree of dissociation will be close to 1. Weaker acids and bases will have a lower degree of dissociation. There is a simple relationship between this parameter and the van 't Hoff factor i: if the solute substance dissociates into n ions, then i = 1 + α(n − 1). For instance, for the following dissociation KCl <=> K+ + Cl- as n = 2, we would have i = 1 + α. Salts The dissociation of salts by solvation in a solution, such as water, means the separation of the anions and cations. The salt can be recovered by evaporation of the solvent. An electrolyte refers to a substance that contains free ions and can be used as an electrically conductive medium. Most of the solute does not dissociate in a weak electrolyte, whereas in a strong electrolyte a higher ratio of solute dissociates to form free ions. A weak electrolyte is a substance whose solute exists in solution mostly in the form of molecules (which are said to be "undissociated"), with only a small fraction in the form of ions. Simply because a substance does not readily dissolve does not make it a weak electrolyte. Acetic acid (CH3COOH) and ammonium (NH4+) are good examples. Acetic acid is extremely soluble in water, but most of the compound dissolves into molecules, rendering it a weak electrolyte. Weak bases and weak acids are generally weak electrolytes. In an aqueous solution there will be some undissociated CH3COOH as well as some CH3COO- and H+ ions. A strong electrolyte is a solute that exists in solution completely or nearly completely as ions. Again, the strength of an electrolyte is defined as the percentage of solute that is ions, rather than molecules. The higher the percentage, the stronger the electrolyte. Thus, even if a substance is not very soluble, but does dissociate completely into ions, the substance is defined as a strong electrolyte. Similar logic applies to a weak electrolyte. Strong acids and bases are good examples, such as HCl. These will all exist as ions in an aqueous medium. Gases The degree of dissociation in gases is denoted by the symbol α, where α refers to the fraction of gas molecules which dissociate. Various relationships between Kp and α exist depending on the stoichiometry of the equation. The example of dinitrogen tetroxide (N2O4) dissociating to nitrogen dioxide (NO2) will be taken. If the initial concentration of dinitrogen tetroxide is 1 mole per litre, this will decrease by α at equilibrium giving, by stoichiometry, 2α moles of NO2. The equilibrium constant (in terms of pressure) is given by the equation Kp = (pNO2)^2 / pN2O4, where p represents the partial pressure.
Hence, through the definition of partial pressure, and using P to represent the total pressure and x to represent the mole fraction, each partial pressure is p = xP. The total number of moles at equilibrium is (1 − α) + 2α, which is equivalent to 1 + α. Thus, substituting the mole fractions with actual values in terms of α and simplifying, Kp = 4α^2P / (1 − α^2). This equation is in accordance with Le Chatelier's principle. Kp will remain constant with temperature. The addition of pressure to the system will increase the value of P, so α must decrease to keep Kp constant. In fact, increasing the pressure of the equilibrium favours a shift to the left, favouring the formation of dinitrogen tetroxide (as on this side of the equilibrium there is less pressure, since pressure is proportional to number of moles), hence decreasing the extent of dissociation α. Acids in aqueous solution The reaction of an acid in water solvent is often described as a dissociation HA <=> H+ + A- where HA is a proton acid such as acetic acid, CH3COOH. The double arrow means that this is an equilibrium process, with dissociation and recombination occurring at the same time. This implies the acid dissociation constant Ka = [H+][A-] / [HA]. However, a more explicit description is provided by the Brønsted–Lowry acid–base theory, which specifies that the proton H+ does not exist as such in solution but is instead accepted by (bonded to) a water molecule to form the hydronium ion H3O+. The reaction can therefore be written as HA + H2O <=> H3O+ + A- and better described as an ionization or formation of ions (for the case when HA has no net charge). The equilibrium constant is then Ka = [H3O+][A-] / [HA], where [H2O] is not included because in dilute solution the solvent is essentially a pure liquid with a thermodynamic activity of one. Ka is variously named a dissociation constant, an acid ionization constant, an acidity constant or an ionization constant. It serves as an indicator of the acid strength: stronger acids have a higher Ka value (and a lower pKa value). Fragmentation Fragmentation of a molecule can take place by a process of heterolysis or homolysis. Receptors Receptors are proteins that bind small ligands. The dissociation constant Kd is used as an indicator of the affinity of the ligand to the receptor. The higher the affinity of the ligand for the receptor, the lower the Kd value (and the higher the pKd value).
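The relation Kp = 4α^2P / (1 − α^2) derived in the section on gases can be inverted to give the degree of dissociation directly: α = sqrt(Kp / (Kp + 4P)). A small Python sketch follows; the numerical Kp used is only an illustrative room-temperature order of magnitude for N2O4, not a reference value.

# Degree of dissociation for N2O4 <=> 2 NO2, from Kp = 4*a**2*P/(1 - a**2),
# rearranged to a = sqrt(Kp / (Kp + 4*P)). P in the same units as Kp.
from math import sqrt

def alpha_n2o4(Kp, P_total):
    return sqrt(Kp / (Kp + 4.0 * P_total))

for P in (0.1, 1.0, 10.0):
    print(f"P = {P:4.1f} -> alpha = {alpha_n2o4(0.14, P):.3f}")
# alpha falls from about 0.51 to about 0.06 as the total pressure rises,
# exactly the Le Chatelier behaviour described above.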
Physical sciences
Basics_3
Chemistry
1616901
https://en.wikipedia.org/wiki/Pterocarpus%20indicus
Pterocarpus indicus
Pterocarpus indicus (commonly known as Amboyna wood, Malay padauk, Papua New Guinea rosewood, Philippine mahogany, Andaman redwood, Burmese rosewood, narra (from Tagalog) and asana in the Philippines, angsana, or Pashu padauk) is a species of Pterocarpus of the Sweet Pea Family (Papilionaceae) native to southeastern Asia, northern Australasia, and the western Pacific Ocean islands, in Cambodia, southernmost China, East Timor, Indonesia, Malaysia, Papua New Guinea, the Philippines, the Ryukyu Islands, the Solomon Islands, Thailand, and Vietnam. Pterocarpus indicus was one of two species (the other being Eysenhardtia polystachya) used as a source for the 16th- to 18th-century traditional diuretic known as lignum nephriticum. Many populations of Pterocarpus indicus are seriously threatened. It is extinct in Vietnam and possibly in Sri Lanka and Peninsular Malaysia. It was declared the national tree of the Philippines in 1934 by Governor-General Frank Murphy of the Insular Government of the Philippine Islands through Proclamation No. 652. Description It is a large deciduous tree growing to 30–40 m tall, with a trunk up to 2 m diameter. The leaves are 12–22 cm long, pinnate, with 5–11 leaflets, the girth is 12–34 m wide. Most Pterocarpus species prefer seasonal weather but P. indicus prefer rainforests. The flowers are produced in panicles 6–13 cm long containing a few to numerous flowers; flowering is from February to May in the Philippines, Borneo and the Malay peninsula. They are slightly fragrant and have yellow or orange-yellow petals. The fruit is a semiorbicular pod 2–3 cm diameter, surrounded by a flat 4–6 cm diameter membranaceous wing (wing-like structure) which aids dispersal by the wind. It contains one or two seeds, and does not split open at maturity; it ripens within 4–6 years, and becomes purple when dry. The central part of the pod can be smooth (f. indica), bristly (f. echinatus (Pers.) Rojo) or intermediate. Note: Pterocarpus macrocarpus, a similar species native to Burma, is referred to as "Rosewood" throughout South East Asia. P. macrocarpus is usually harder than P. indicus. When in burl form both are referred to as Amboyna Burl. Uses The hardwood, which is purplish, is termite-resistant and rose-scented. The wood known in Indonesia as amboyna is the burl of the tree, named after Ambon, where much of this material was originally found. Often amboyna is finely sliced to produce an extremely decorative veneer, used for decoration and in making of furniture and keys on a marimba. It is a premium timber species suitable for high grade furniture, lumber and plywood for light construction purposes. It is also used for cartwheels, wood carving and musical instruments. The flower is used as a honey source while leaf infusions are used as shampoos. Both flowers and leaves were said to be eaten. The leaves are supposedly good for waxing and polishing brass and copper. It is also a source of kino or resin. The leaves of narra are also used in traditional medicine to treat a variety of health problems. Narra leaves contain flavonoids. Flavonoids are antioxidants that provide health benefits to humans, such as anti-inflammatory and anti-allergic benefits. Flavonoids in narra leaves may be capable of preventing damage to your kidneys. In folk medicine, it is used to combat tumors. This property might be due to an acidic polypeptide found in its leaves that inhibited growth of Ehrlich ascites carcinoma cells by disruption of cell and nuclear membranes. 
It was also one of the sources of lignum nephriticum, a diuretic in Europe during the 16th to 18th centuries. Its reputation is due to its wood infusions, which are fluorescent. The tree is recommended as an ornamental tree for avenues and is sometimes planted in Puerto Rico as a shade and ornament. The tall, dome-shaped crown, with long, drooping branches is very attractive and the flowers are spectacular in areas with a dry season. It is very easily propagated from seed or large stem cuttings, but suffers from disease problems. It is widely planted as a roadside, park, and parking lot tree. In agroforestry, it maintains ecosystem fertility and soil stability. Narra is a leguminous plant that is capable of fixing nitrogen by forming endosymbiotic relationships with nitrogen-fixing bacteria that lives in its root nodules. Nodulating leguminous plants, such as narra, are responsible for transforming atmospheric nitrogen into a plant-usable form. In the Philippines, a permit is required to cut the narra (cf. Tagalog and Cebuano nára, Maranao nara), but nevertheless the popular sturdy wood is widely used for construction and furniture projects. In Singapore, the ease to propagate the tree made it a favourite for the urban planners in Singapore to plant new trees via monoculture in a campaign to transform the rapidly urbaning city into a green city in between 1969 and 1982. In 1985, 1,400 trees died due to "Angsana Wilt Disease," and were cut down. It was found that the fusarium oxysporum fungi species was the cause of the disease. The fungus was carried by ambrosia beetles boring into the trees. The infection was eventually controlled by a combination of monitoring, removal of lightning-damaged trees, and replanting with identified disease-resistant varieties. Symbolism It is the national tree of the Philippines, as well as the provincial tree of Chonburi and Phuket in Thailand.
Biology and health sciences
Fabales
Plants
1617971
https://en.wikipedia.org/wiki/Pteridophyte
Pteridophyte
A pteridophyte is a vascular plant (with xylem and phloem) that reproduces by means of spores. Because pteridophytes produce neither flowers nor seeds, they are sometimes referred to as "cryptogams", meaning that their means of reproduction is hidden. They are also the ancestors of the plants we see today. Ferns, horsetails (often treated as ferns), and lycophytes (clubmosses, spikemosses, and quillworts) are all pteridophytes. However, they do not form a monophyletic group because ferns (and horsetails) are more closely related to seed plants than to lycophytes. "Pteridophyta" is thus no longer a widely accepted taxon, but the term pteridophyte remains in common parlance, as do pteridology and pteridologist as a science and its practitioner, for example by the International Association of Pteridologists and the Pteridophyte Phylogeny Group. Description Pteridophytes (ferns and lycophytes) are free-sporing vascular plants that have a life cycle with alternating, free-living gametophyte and sporophyte phases that are independent at maturity. The body of the sporophyte is well differentiated into roots, stem and leaves. The root system is always adventitious. The stem is either underground or aerial. The leaves may be microphylls or megaphylls. Their other common characteristics include vascular plant apomorphies (e.g., vascular tissue) and land plant plesiomorphies (e.g., spore dispersal and the absence of seeds). Taxonomy Phylogeny Of the pteridophytes, ferns account for nearly 90% of the extant diversity. Smith et al. (2006), the first higher-level pteridophyte classification published in the molecular phylogenetic era, considered the ferns as monilophytes, as follows: Division Tracheophyta (tracheophytes) - vascular plants Subdivision Lycopodiophyta (lycophytes) - less than 1% of extant vascular plants Sub division Euphyllophytina (euphyllophytes) Infradivision Moniliformopses (monilophytes) Infradivision Spermatophyta - seed plants, ~260,000 species where the monilophytes comprise about 9,000 species, including horsetails (Equisetaceae), whisk ferns (Psilotaceae), and all eusporangiate and all leptosporangiate ferns. Historically both lycophytes and monilophytes were grouped together as pteridophytes (ferns and fern allies) on the basis of being spore-bearing ("seed-free"). In Smith's molecular phylogenetic study the ferns are characterised by lateral root origin in the endodermis, usually mesarch protoxylem in shoots, a pseudoendospore, plasmodial tapetum, and sperm cells with 30-1000 flagella. The term "moniliform" as in Moniliformopses and monilophytes means "bead-shaped" and was introduced by Kenrick and Crane (1997) as a scientific replacement for "fern" (including Equisetaceae) and became established by Pryer et al. (2004). Christenhusz and Chase (2014) in their review of classification schemes provide a critique of this usage, which they discouraged as irrational. In fact the alternative name Filicopsida was already in use. By comparison "lycopod" or lycophyte (club moss) means wolf-plant. The term "fern ally" included under Pteridophyta generally refers to vascular spore-bearing plants that are not ferns, including lycopods, horsetails, whisk ferns and water ferns (Marsileaceae, Salviniaceae and Ceratopteris). This is not a natural grouping but rather a convenient term for non-fern, and is also discouraged, as is eusporangiate for non-leptosporangiate ferns. 
However both Infradivision and Moniliformopses are also invalid names under the International Code of Botanical Nomenclature. Ferns, despite forming a monophyletic clade, are formally only considered as four classes (Psilotopsida; Equisetopsida; Marattiopsida; Polypodiopsida), 11 orders and 37 families, without assigning a higher taxonomic rank. Furthermore, within the Polypodiopsida, the largest grouping, a number of informal clades were recognised, including leptosporangiates, core leptosporangiates, polypods (Polypodiales), and eupolypods (including Eupolypods I and Eupolypods II). In 2014 Christenhusz and Chase, summarising the known knowledge at that time, treated this group as two separate unrelated taxa in a consensus classification; Lycopodiophyta (lycopods) 1 subclass, 3 orders, each with one family, 5 genera, approx. 1,300 species Polypodiophyta (ferns) 4 subclasses, 11 orders, 21 families, approx. 212 genera, approx. 10,535 species Subclass Equisetidae Warm. Subclass Ophioglossidae Klinge Subclass Marattiidae Klinge Subclass Polypodiidae Cronquist, Takht. & Zimmerm. These subclasses correspond to Smith's four classes, with Ophioglossidae corresponding to Psilotopsida. The two major groups previously included in Pteridophyta are phylogenetically related as follows: Subdivision Pteridophytes consist of two separate but related classes, whose nomenclature has varied. The system put forward by the Pteridophyte Phylogeny Group in 2016, PPG I, is: Class Lycopodiopsida Bartl. – lycophytes: clubmosses, quillworts and spikemosses; 3 extant orders Order Lycopodiales DC. ex Bercht. & J.Presl – clubmosses; 1 extant family Order Isoetales Prantl – quillworts; 1 extant family Order Selaginellales Prantl – spikemosses; 1 extant family Class Polypodiopsida Cronquist, Takht. & W.Zimm. – ferns; 11 extant orders Subclass Equisetidae Warm. – horsetails; 1 extant order, family and genus (Equisetum) Order Equisetales DC. ex Bercht. & J.Presl – 1 extant family Subclass Ophioglossidae Klinge – 2 extant orders Order Psilotales Prant – whisk ferns; 1 extant family Order Ophioglossales Link – grape ferns; 1 extant family Subclass Marattiidae Klinge – marattioid ferns; 1 extant order Order Marattiales Link – 1 extant family Subclass Polypodiidae Cronquist, Takht. & W.Zimm. – leptosporangiate ferns; 7 extant orders Order Osmundales Link – 1 extant family Order Hymenophyllales A.B.Frank – 1 extant family Order Gleicheniales Schimp – 3 extant families Order Schizaeales Schimp. – 3 extant families Order Salviniales Link – 2 extant families Order Cyatheales A.B.Frank – 8 extant families Order Polypodiales Link – 26 extant families In addition to these living groups, several groups of pteridophytes are now extinct and known only from fossils. These groups include the Rhyniopsida, Zosterophyllopsida, Trimerophytopsida, the Lepidodendrales and the Progymnospermopsida. Modern studies of the land plants agree that seed plants emerged from pteridophytes more closer to ferns than lycophytes. Therefore, pteridophytes do not form a clade but constitute a paraphyletic grade. Life cycle Just as with bryophytes and spermatophytes (seed plants), the life cycle of pteridophytes involves alternation of generations. This means that a diploid generation (the sporophyte, which produces spores) is followed by a haploid generation (the gametophyte or prothallus, which produces gametes). 
Pteridophytes differ from bryophytes in that the sporophyte is branched and generally much larger and more conspicuous, and from seed plants in that both generations are independent and free-living. The sexuality of pteridophyte gametophytes can be classified as follows: Dioicous: each individual gametophyte is either male (producing antheridia and hence sperm) or female (producing archegonia and hence egg cells). Monoicous: each individual gametophyte produces both antheridia and archegonia and can function both as a male and as a female. Protandrous: the antheridia mature before the archegonia (male first, then female). Protogynous: the archegonia mature before the antheridia (female first, then male). These terms are not the same as monoecious and dioecious, which refer to whether a seed plant's sporophyte bears both male and female gametophytes, i.e., produces both pollen and seeds, or just one of the sexes.
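Since the PPG I scheme above is a strict hierarchy, it maps naturally onto a nested data structure. The Python dictionary below is one possible representation, with names transcribed from the classification given earlier; the layout itself is an arbitrary illustration, not an established exchange format.

# PPG I extant orders, keyed by class (and, for ferns, by subclass).
ppg1 = {
    "Lycopodiopsida": {
        "orders": ["Lycopodiales", "Isoetales", "Selaginellales"],
    },
    "Polypodiopsida": {
        "Equisetidae": ["Equisetales"],
        "Ophioglossidae": ["Psilotales", "Ophioglossales"],
        "Marattiidae": ["Marattiales"],
        "Polypodiidae": ["Osmundales", "Hymenophyllales", "Gleicheniales",
                         "Schizaeales", "Salviniales", "Cyatheales",
                         "Polypodiales"],
    },
}

lyco_orders = len(ppg1["Lycopodiopsida"]["orders"])
fern_orders = sum(len(v) for v in ppg1["Polypodiopsida"].values())
print(lyco_orders, fern_orders)  # -> 3 11, matching the counts given above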
Biology and health sciences
Pteridophytes
null
1618377
https://en.wikipedia.org/wiki/Oceanic%20basin
Oceanic basin
In hydrology, an oceanic basin (or ocean basin) is anywhere on Earth that is covered by seawater. Geologically, most of the ocean basins are large geologic basins that are below sea level. Most commonly, the ocean is divided into basins following the distribution of the continents: the North and South Atlantic (together approximately 75 million km2 / 29 million mi2), North and South Pacific (together approximately 155 million km2 / 59 million mi2), Indian Ocean (68 million km2 / 26 million mi2) and Arctic Ocean (14 million km2 / 5.4 million mi2). Also recognized is the Southern Ocean (20 million km2 / 7 million mi2). All ocean basins collectively cover 71% of the Earth's surface, and together they contain almost 97% of all water on the planet. They have an average depth of almost 4 km (about 2.5 miles). Definitions of boundaries Boundaries based on continents "Limits of Oceans and Seas", published by the International Hydrographic Office in 1953, is a document that defined the ocean's basins as they are largely known today. The main ocean basins are the ones named in the previous section. These main basins are divided into smaller parts. Some examples are: the Baltic Sea (with three subdivisions), the North Sea, the Greenland Sea, the Norwegian Sea, the Laptev Sea, the Gulf of Mexico, the South China Sea, and many more. The limits were set for convenience of compiling sailing directions but had no geographical or physical basis, and to this day they have no political significance. For instance, the line between the North and South Atlantic is set at the equator. The Antarctic or Southern Ocean, which reaches from 60° south to Antarctica, had been omitted until 2000, but is now also recognized by the International Hydrographic Office. Nevertheless, since ocean basins are interconnected, many oceanographers prefer to refer to one single ocean basin instead of multiple ones. Older references (e.g., Littlehales 1930) consider the oceanic basins to be the complement to the continents, with erosion dominating the latter, and the sediments so derived ending up in the ocean basins. This vision is supported by the fact that the oceans lie lower than the continents, so the former serve as sedimentary basins that collect sediment eroded from the continents, known as clastic sediments, as well as precipitation sediments. Ocean basins also serve as repositories for the skeletons of carbonate- and silica-secreting organisms such as coral reefs, diatoms, radiolarians, and foraminifera. More modern sources (e.g., Floyd 1991) regard the ocean basins more as basaltic plains than as sedimentary depositories, since most sedimentation occurs on the continental shelves and not in the geologically defined ocean basins. Definition based on surface connectivity The flow in the ocean is not uniform but varies with depth. Vertical circulation in the ocean is very slow compared to horizontal flow, and observing the deep ocean is difficult. Defining the ocean basins based on connectivity of the entire ocean (depth and width) is therefore not possible. Froyland et al. (2014) defined ocean basins based on surface connectivity. This is achieved by creating a Markov chain model of the surface ocean dynamics using short-term trajectory data from a global ocean model. These trajectories are of particles that move only on the surface of the ocean. The model outcome gives the probability that a particle at a given grid point will end up at another point on the ocean's surface.
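A minimal sketch of this construction is given below, assuming the trajectory data has already been binned into (start cell, end cell) pairs on a grid of surface boxes. Everything here (grid size, toy data, normalisation choices) is illustrative only and is not the actual model of Froyland et al.

# Estimate a surface-transport transition matrix from binned trajectories,
# then inspect its spectrum for weakly connected groups of cells.
import numpy as np

def transition_matrix(pairs, n_cells):
    # Count one-step transitions, then row-normalise into probabilities.
    P = np.zeros((n_cells, n_cells))
    for i, j in pairs:
        P[i, j] += 1.0
    rows = P.sum(axis=1, keepdims=True)
    return np.divide(P, rows, out=np.zeros_like(P), where=rows > 0)

# Toy data: particles stay within cells {0, 1} or within cells {2, 3},
# with no exchange between the two groups.
pairs = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 2), (2, 3), (3, 3), (3, 2)]
P = transition_matrix(pairs, n_cells=4)
eigvals = np.sort(np.linalg.eigvals(P.T).real)[::-1]
print(eigvals)  # two eigenvalues at 1 -> two disconnected surface basins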
From the model outcome a matrix can be created, from which the eigenvectors and eigenvalues are taken. These eigenvectors show regions of attraction, that is, regions where material on the surface of the ocean (plastic, biomass, water, etc.) becomes trapped; one example of such a region is the Atlantic garbage patch. With this approach the five main ocean basins are still the North and South Atlantic, the North and South Pacific, and the Arctic Ocean, but with different boundaries between the basins. These boundaries mark lines of very little surface connectivity between the different regions, which means that a particle on the ocean surface in a certain region is more likely to stay in the same region than to pass over to a different one. Formation of oceanic crusts and basins Earth's structure Depending on chemical composition and physical state, the Earth can be divided into three major components: the mantle, the core, and the crust. The crust is the outermost layer of the Earth. It is made of solid rock, mostly basalt and granite. The crust that lies below sea level is known as oceanic crust, while on land it is known as continental crust. The former is thinner and is composed of relatively dense basalt, while the latter is less dense and mainly composed of granite. The lithosphere is composed of the crust (oceanic and continental) and the uppermost part of the mantle, and is broken into sections called plates. Processes of tectonic plates Tectonic plates move very slowly (5 to 10 cm, or 2 to 4 inches, per year) relative to each other and interact along their boundaries. This movement is responsible for most of the Earth's seismic and volcanic activity. Depending on how the plates interact with each other, there are three types of boundaries. Convergent boundary: the plates collide, and eventually the denser one slides underneath the lighter one, a process known as subduction. This type of interaction can take place between two oceanic crusts, creating an oceanic trench; between an oceanic and a continental crust, forming a mountain range on the continent, like the Andes; or between two continental crusts, resulting in large mountain chains, like the Himalayas. Divergent boundary: the plates move apart from each other. If this occurs on land, a rift is formed, which eventually becomes a rift valley. The most active divergent boundaries lie under the sea: in the ocean, when magma (molten rock) ascends from the mantle and fills the gap created by two diverging plates, a mid-ocean ridge is formed. Transform boundary: also called a transform fault, this occurs when the movement between the plates is horizontal, so no crust is created or destroyed. It can happen both on land and in the sea, but most such faults are in the oceanic crust. Size of trenches The Earth's deepest trench is the Mariana Trench, which extends for about 2,500 km (1,600 miles) across the seabed. It is near the Mariana Islands, a volcanic archipelago in the western Pacific. Its deepest point is 10,994 m (nearly 7 miles) below the surface of the sea. The Earth's longest trench runs alongside the coast of Peru and Chile, reaching a depth of 8,065 m (26,460 feet) and extending for approximately 5,900 km (3,700 miles). It occurs where the oceanic Nazca plate slides under the continental South American plate and is associated with the upthrust and volcanic activity of the Andes.
History and age of oceanic crust The oldest oceanic crust is in the far western equatorial Pacific, east of the Mariana Islands. It is located far away from the oceanic spreading centers where oceanic crust is constantly created or destroyed. The oldest crust is estimated to be only around 200 million years old, compared with the age of the Earth, 4.6 billion years. 200 million years ago nearly all land mass formed one large continent, called Pangea, which then started to split up. During the breakup of Pangea, some ocean basins shrank, such as the Pacific, while others were created, such as the Atlantic and Arctic basins. The Atlantic Basin began to form around 180 million years ago, when the continent Laurasia (North America and Eurasia) started to drift away from Africa and South America. The Pacific plate grew, and subduction led to a shrinking of its bordering plates. The Pacific plate continues to move northward. Around 130 million years ago the South Atlantic started to form, as South America and Africa started to separate. At around this time India and Madagascar rifted northwards, away from Australia and Antarctica, creating seafloor around Western Australia and East Antarctica. When Madagascar and India separated between 90 and 80 million years ago, the spreading ridges in the Indian Ocean were reorganized. The northernmost part of the Atlantic Ocean was also formed at this time, when Europe and Greenland separated. About 60 million years ago a new rift and oceanic ridge formed between Greenland and Europe, separating them and initiating the formation of oceanic crust in the Norwegian Sea and the Eurasian Basin in the eastern Arctic Ocean. Changes in ocean basins State of the current ocean basins The area occupied by the individual ocean basins has fluctuated in the past due to, among other things, tectonic plate movements. Therefore, an oceanic basin can be actively changing in size and/or depth, or it can be relatively inactive. The elements of an active and growing oceanic basin include an elevated mid-ocean ridge, flanking abyssal hills leading down to abyssal plains, and an oceanic trench. Changes in biodiversity, flooding, and other climate variations are linked to sea level, which is reconstructed with different models and observations (e.g., the age of the oceanic crust). Sea level is affected not only by the volume of the ocean basins, but also by the volume of water in them. Factors that influence the volume of the ocean basins are: Plate tectonics and the volume of mid-ocean ridges: the depth of the seafloor increases with distance from a ridge, as the oceanic lithosphere cools and thickens. The volume of ocean basins can be modeled using reconstructions of plate tectonics together with an age-depth relationship (see also Seafloor depth vs age; a numerical sketch of this relationship is given at the end of this article). Marine sedimentation: this influences the global mean depth and volume of the ocean, but it is difficult to determine and reconstruct. Passive margins and crustal extension: to compensate for the extension of continents due to continental rifting, oceanic crust decreases, and therefore so does the volume of the ocean basin. However, the increase in continental area leads to a stretching and thinning of the continental crust, much of which ends up below sea level, thus again leading to an increase in ocean basin volume. The Atlantic Ocean and the Arctic Ocean are good examples of active, growing oceanic basins, whereas the Mediterranean Sea is shrinking.
The Pacific Ocean is also an active, shrinking oceanic basin, even though it has both a spreading ridge and oceanic trenches. Perhaps the best example of an inactive oceanic basin is the Gulf of Mexico, which formed in Jurassic times and has done little but collect sediments since then. The Aleutian Basin is another example of a relatively inactive oceanic basin. The Japan Basin in the Sea of Japan, which formed in the Miocene, is still tectonically active, although recent changes have been relatively mild.
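As a minimal sketch of the age-depth relationship referred to above: the half-space cooling approximation d(t) ≈ d_ridge + k·√t is a standard textbook form, and the constants used below (a ridge depth of about 2,500 m and k ≈ 350 m per √Myr) are common illustrative values, not figures taken from this article.

```python
import math

# Half-space cooling approximation for seafloor depth as a function of
# crustal age. Constants are common textbook values (assumed here, not
# sourced from this article); the form is valid to roughly 80 Myr.
def seafloor_depth_m(age_myr: float, ridge_depth_m: float = 2500.0,
                     k: float = 350.0) -> float:
    """Approximate depth (m) of oceanic crust of a given age (Myr)."""
    return ridge_depth_m + k * math.sqrt(age_myr)

for age in (0, 10, 50, 80):
    print(f"{age:>2} Myr old crust -> ~{seafloor_depth_m(age):,.0f} m deep")
```

This reproduces the qualitative statement above: the seafloor deepens with distance (and hence age) from a ridge as the oceanic lithosphere cools and thickens.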
Physical sciences
Oceanic and coastal landforms
Earth science
1618406
https://en.wikipedia.org/wiki/Diamondback%20terrapin
Diamondback terrapin
The diamondback terrapin or simply terrapin (Malaclemys terrapin) is a species of terrapin native to the brackish coastal tidal marshes of the East Coast of the United States and the Gulf of Mexico coast, as well as Bermuda. It belongs to the monotypic genus Malaclemys. It has one of the largest ranges of all turtles in North America, stretching as far south as the Florida Keys and as far north as Cape Cod. The name "terrapin" is derived from an Algonquian word. It applies to Malaclemys terrapin in both British English and American English. The name originally was used by early European settlers in North America to describe these brackish-water turtles, which inhabited neither freshwater habitats nor the sea. It retains this primary meaning in American English. In British English, however, other semi-aquatic turtle species, such as the red-eared slider, might also be called terrapins. Description The common name refers to the diamond pattern on top of its shell (carapace), but the overall pattern and coloration vary greatly. No two diamondback terrapins look alike. The shell is usually wider at the back than at the front, and from above it appears wedge-shaped. The shell coloring can vary from brown to grey, and the body color can be grey, brown, yellow, or white. All have a unique pattern of wiggly, black markings or spots on the body and head. The diamondback terrapin has large webbed feet. The species is sexually dimorphic in that the males grow to a carapace length of approximately , while the females grow to an average carapace length of around , though they are capable of growing larger. The largest female on record was just over in carapace length. Specimens from regions that are consistently warmer in temperature tend to be larger than those from cooler, more northern areas. Male diamondback terrapins weigh on average, while females weigh around . The largest females can weigh up to . Diamondback terrapins can live up to 40 years in captivity, but scientists estimate they typically live for about 25 years in the wild. Adaptations to their environment Terrapins look much like their freshwater relatives, but are well adapted to the nearshore marine environment. They have several adaptations that allow them to survive in varying salinities. They can live in full-strength salt water for extended periods, and their skin is largely impermeable to salt. Terrapins have lachrymal salt glands, not present in their relatives, which are used primarily when the turtle is dehydrated. They can distinguish between drinking water of different salinities. Terrapins also exhibit unusual and sophisticated behavior to obtain fresh water, including drinking the freshwater surface layer that can accumulate on top of salt water during rainfall and raising their heads into the air with mouths open to catch falling raindrops. Terrapins are strong swimmers. They have strongly webbed hind feet, but not flippers as sea turtles have. Like their relatives, the map turtles (Graptemys), they have strong jaws for crushing the shells of prey, such as clams and snails. This is especially true of females, who have larger and more muscular jaws than males. Subspecies Seven subspecies are recognized, including the nominate race. M. t. centrata (Latreille, 1801) – Carolina diamondback terrapin (Georgia, Florida, North Carolina, South Carolina) M. t. littoralis (Hay, 1904) – Texas diamondback terrapin (Texas) M. t. macrospilota (Hay, 1904) – ornate diamondback terrapin (Florida) M. t.
pileata (Wied, 1865) – Mississippi diamondback terrapin (Alabama, Florida, Louisiana, Mississippi, Texas) M. t. rhizophorarum Fowler, 1906 – mangrove diamondback terrapin (Florida) M. t. tequesta Schwartz, 1955 – East Florida diamondback terrapin (Florida) M. t. terrapin (Schoepff, 1793) – northern diamondback terrapin (Connecticut, Delaware, Maryland, Massachusetts, New Jersey, New York, North Carolina, Rhode Island, Virginia) The isolated Bermudan population, which arrived in Bermuda on its own rather than being introduced by humans, has not yet been officially assigned to a subspecies, but based on mtDNA it is closely related to the population from the Carolinas. Distribution and habitat Diamondback terrapins live in the very narrow strip of coastal habitats on the Atlantic and Gulf Coasts of the United States, from as far north as Cape Cod, Massachusetts, to the southern tip of Florida and around the Gulf Coast to Texas. In most of their range, terrapins live in Spartina marshes that are flooded at high tide, but in Florida they also live in mangrove swamps. This turtle can survive in fresh water as well as full-strength ocean water, but adults prefer intermediate salinities. Despite its preference for salt water, it is not a true sea turtle and is not fully marine. They have no competition from other turtles, although common snapping turtles do occasionally make use of salty marshes. It is unclear why terrapins do not inhabit the upper reaches of rivers within their range, as in captivity they tolerate fresh water; it is possible they are limited by the distribution of their prey. Terrapins live quite close to shore, unlike sea turtles, which wander far out to sea; even so, the population of terrapins on Bermuda has been determined to be self-established rather than introduced by humans. Terrapins tend to live in the same areas for most or all of their lives, and do not make long-distance migrations. Life cycle Adult diamondback terrapins mate in the early spring, and clutches of 4–22 eggs are laid in sand dunes in the early summer. They hatch in late summer or early fall. Maturity in males is reached in 2–3 years at around in length; it takes longer for females: 6–7 years (8–10 years for northern diamondback terrapins) at a length of around . Reproduction As in all reptiles, fertilization in terrapins occurs internally. Courtship has been seen in May and June, and is similar to that of the closely related red-eared slider (Trachemys scripta). Female terrapins can mate with multiple males and store sperm for years, resulting in some clutches of eggs with more than one father. Like many turtles, terrapins have temperature-dependent sex determination, meaning that the sex of hatchlings is the result of incubation temperature. Females can lay up to three clutches of eggs per year in the wild, and up to five clutches per year in captivity. It is not known how often they may skip reproduction, so true clutch frequency is unknown. Females may wander considerable distances on land before nesting. Nests are usually laid in sand dunes or scrub vegetation near the ocean in June and July, but nesting may start as early as late April in Florida. Females will quickly abandon a nest attempt if they are disturbed while nesting. Clutch sizes vary latitudinally, with average clutch sizes as low as 5.8 eggs/clutch in southern Florida and as high as 10.9 in New York. After covering the nest, terrapins quickly return to the ocean and do not return except to nest again.
The eggs usually hatch in 60–85 days, depending on the temperature and the depth of the nest. Hatchlings usually emerge from the nest in August and September, but may overwinter in the nest after hatching. Nest predation is a major threat to diamondback terrapins: one study of 3,159 nests found that the greatest predator was the raccoon, and that depredated nests were completely emptied of eggs. Hatchlings sometimes stay on land in the nesting areas in both fall and spring, and they may remain terrestrial for much or all of the winter in some places. Hatchling terrapins are freeze-tolerant, which may facilitate overwintering on land. Hatchlings have lower salt tolerance than adults, and Gibbons et al. provided strong evidence that one- and two-year-old terrapins use different habitats than older individuals do. Growth rates, age of maturity, and maximum age are not well known for terrapins in the wild, but males reach sexual maturity before females because of their smaller adult size. In females at least, sexual maturity is dependent on size rather than age. Estimations of age based on counts of growth rings on the shell are as yet untested, so it is not clear how to determine the ages of wild terrapins. Seasonal activities Because nesting is the only terrapin activity that occurs on land, most other aspects of terrapin behavior are poorly known. Limited data suggest that terrapins hibernate in the colder months in most of their range, in the mud of creeks and marshes. Diet The diamondback terrapin typically feeds on fish, crustaceans (such as shrimp and crabs), marine worms, marine snails (especially the saltmarsh periwinkle), clams, barnacles, mussels, other mollusks, insects, and carrion, and sometimes ingests small amounts of plant material, such as algae. At high densities the terrapin may eat enough invertebrates to have ecosystem-level effects, partially because periwinkles themselves can overgraze important marsh plants, such as cordgrass (Spartina alterniflora). Sex and age can greatly affect the diet of the diamondback terrapin: males and juvenile females tend to have less diversity in their diet, while adult females, with their powerful, well-defined jaws, will occasionally feed on crustaceans such as crabs and are more likely to consume hard-shelled mollusks. Conservation Status In the early 1900s, the species was considered a delicacy and was hunted almost to extinction. The population also decreased due to the development of coastal areas and to terrapins being wounded by the propellers of motorboats. Another common cause of death is the trapping of the turtles in recreational crab traps, as the turtles are attracted to the same bait as the crabs. The Wetlands Institute estimates that a minimum of 14,000 to 15,000 terrapins drown annually in crab traps set along the New Jersey coast alone. Countless more drown or starve to death in ghost traps (abandoned or lost crab traps) and discarded nets throughout their habitat. In two such abandoned crab pots in a tidal marsh in Georgia, a study found 133 dead terrapins, according to the Center for Biological Diversity. Further, the increase in seawalls and bulkheads being built for storm and erosion control, exacerbated by climate change and sea level rise, inadvertently eliminates terrapins' nesting habitat on beaches and upland areas with soft shorelines.
Due to these factors, the diamondback terrapin is listed as an endangered species in Rhode Island, a threatened species in Massachusetts, and a "species of concern" in Georgia, Delaware, Alabama, Louisiana, North Carolina, and Virginia. The diamondback terrapin is listed as a "high priority species" under the South Carolina Wildlife Action Plan. In New Jersey, it was recommended for listing as a Species of Special Concern in 2001; in July 2016, the species was removed from the New Jersey game list and is now listed as non-game with no hunting season. In Connecticut, there is no open hunting season for this animal. However, it holds no federal conservation status. In 2021, a terrapin was hatched in Massachusetts with two heads, two gastrointestinal systems, and two spines that are fused together. Conservation status The species is classified as Vulnerable by the IUCN due to decreasing population numbers in most of its range. There is limited protection for terrapins on a state-by-state level throughout the range; it is listed as Endangered in Rhode Island and Threatened in Massachusetts. The Diamondback Terrapin Working Group deals with regional protection issues. There is no national protection except through the Lacey Act, and little international protection. Diamondback terrapins are the only U.S. turtles that inhabit the brackish waters of estuaries, tidal creeks, and salt marshes. With a historic range stretching from Massachusetts to Texas, terrapin populations have been severely depleted by land development and other human impacts along the Atlantic coast. Earthwatch Institute, a global non-profit that teams volunteers with scientists to conduct environmental research, supports a research program called "Tagging the Terrapins of the Jersey Shore." This program allows volunteers to explore the coastal sprawl of New Jersey's Ocean County on Barnegat Bay, one of the most extensive salt marsh ecosystems on the East Coast, in search of this ornate turtle. On this project, volunteers contribute to environmental sustainability in the face of rampant development. Veteran turtle scientists Dr. Hal Avery, Dr. Jim Spotila, Dr. Walter Bien, and Dr. Ed Standora oversee this program and monitor the viability of terrapin populations in the face of growing environmental change. Threats The conservation status was heavily impacted by the consumption of diamondback terrapins in the early 1900s, when their sweet meat supported a multi-million-dollar industry supplying gourmet restaurants. Around the 1920s, with the enforcement of Prohibition, the consumption of diamondback terrapins declined; since then, however, the population has never fully recovered. Another threat is exposure to paralytic shellfish toxins (PST), such as saxitoxins, which may contaminate mussels and other shellfish in terrapins' diets and are a result of the harmful algal blooms that have been prominent along the coastline where diamondback terrapins reside. Gastrointestinal studies have identified ingested PST in their tissues, which causes the muscle weakness, paralysis, and other symptoms that eventually lead to death. The major threats to diamondback terrapins are all associated with humans and probably differ in different parts of the range. People tend to build their cities on ocean coasts near the mouths of large rivers, and in doing so they have destroyed many of the huge marshes that terrapins inhabited. Nationwide, probably more than 75% of the salt marshes where terrapins lived have been destroyed or altered.
Currently, ocean level rise threatens the remainder. Traps used to catch crabs, both commercially and privately, have commonly caught and drowned many diamondback terrapins, which can result in male-biased populations, local population declines, and even extinctions. When these traps are lost or abandoned ("ghost traps"), they can kill terrapins for many years. Terrapin-excluding devices are available to retrofit crab traps; these reduce the number of terrapins captured while having little or no impact on crab capture rates. In some states (NJ, DE, MD), these devices are required by law. Nests, hatchlings, and sometimes adults are commonly eaten by raccoons, foxes, rats, and many species of birds, especially crows and gulls. Densities of these predators are often increased because of their association with humans. Predation rates can be extremely high; predation by raccoons on terrapin nests at Jamaica Bay Wildlife Refuge in New York varied from 92 to 100% each year from 1998 to 2008 (Burke, unpubl. data). Terrapins are killed by cars when nesting females cross roads, and mortality can be high enough to seriously affect populations. Terrapins are still harvested for food in some states. Terrapins may be affected by pollutants such as metals and organic compounds, but this has not been demonstrated in wild populations. There is an active casual and professional pet trade in terrapins, and it is unknown how many are removed from the wild for this purpose. Some people breed the species in captivity, and some color variants are considered especially desirable. In Europe, Malaclemys are widely kept as pets, as are many closely related species. Relationship with humans In Maryland, diamondback terrapins were so plentiful in the 18th century that slaves protested the excessive use of this food source as their main protein. Late in the 19th century, demand for turtle soup claimed a harvest of 89,150 pounds from Chesapeake Bay in one year. In 1899, terrapin was offered on the dinner menu of the renowned Delmonico's Restaurant in New York City as its third most expensive item: either Maryland or Baltimore terrapin was available for $2.50. Although demand was high, over-harvesting diminished the catch to a mere 823 pounds by 1920. According to the FAA National Wildlife Strike Database, a total of 18 strikes between diamondback terrapins and civil aircraft were reported in the US from 1990 to 2007, none of which caused damage to the aircraft. On July 8, 2009, flights at John F. Kennedy Airport in New York City were delayed for up to one and a half hours after 78 diamondback terrapins had invaded one of the runways. The turtles, which according to airport authorities were believed to have entered the runway in order to nest, were removed and released back into the wild. A similar incident happened on June 29, 2011, when over 150 turtles crossed runway 4, closing the runway and disrupting air traffic; those terrapins were also relocated safely. The Port Authority of New York and New Jersey installed a turtle barrier along runway 4L at JFK to reduce the number of terrapins on the runway and encourage them to nest elsewhere. Nevertheless, on June 26, 2014, 86 terrapins made it onto the same runway after a high tide carried them over the barrier. Their numbers are influenced by the raccoon population: it has been shown that as raccoons decrease in number, mating terrapins increase, leading to increased turtle activity at the airport. Many human activities threaten the safety of diamondback terrapins.
The terrapins get caught and drown in crab nets that humans put out, are suffocated by pollution that humans greatly contribute to, and lose their marsh and estuarine habitats to urban development. Recent anthropogenic habitat modification, such as the construction of docks, roads, and housing developments, as well as activities such as crab-trapping, likely plays a role in low annual survivorship. History as a delicacy Diamondback terrapins were heavily harvested for food in colonial America and probably before that by Native Americans. Terrapins were so abundant and easily obtained that slaves and even the Continental Army ate large numbers of them. In the 19th century, a dish called "Terrapin à la Maryland", a stew with cream and sherry, was a canonical element, along with canvasback duck, of the elegant and regional "Maryland Feast" menu, an "elite standard...that lasted for decades". By 1917, terrapins sold for as much as $5 each. Huge numbers of terrapins were harvested from marshes and marketed in cities. By the early 1900s, populations in the northern part of the range were severely depleted, and those in the southern part were greatly reduced as well. As early as 1902, the U.S. Bureau of Fisheries (which later became the U.S. Fish and Wildlife Service) recognized that terrapin populations were declining and started building large research facilities, centered at the Beaufort, North Carolina, Fisheries Laboratory, to investigate methods for captive breeding of terrapins for food. People tried (unsuccessfully) to establish them in many other locations, including San Francisco. Use as a symbol Maryland named the diamondback terrapin its official state reptile in 1994. The University of Maryland, College Park has used the species as its nickname (the Maryland Terrapins) and mascot (Testudo) since 1933, and the school newspaper has been named The Diamondback since 1921. Accordingly, the athletic teams are often known as "Terps" for short. The Baltimore baseball club that played in the Federal League during 1914 and 1915 was called the Baltimore Terrapins. The terrapin has also been a symbol of the Grateful Dead since the release of their studio album Terrapin Station; as a result, many images of the terrapin dancing with a tambourine appear on posters and t-shirts in Grateful Dead memorabilia. Inspired by the song, the Terrapin Beer Company also uses a terrapin as its namesake and logo on its packaging.
Biology and health sciences
Turtles
Animals
1618887
https://en.wikipedia.org/wiki/Square%20yard
Square yard
The square yard (Northern India: gaj; Pakistan: gaz) is an imperial unit and U.S. customary unit of area. It is in widespread use in most of the English-speaking world, particularly the United States, United Kingdom, Canada, Pakistan, and India. It is defined as the area of a square with sides of one yard (three feet, thirty-six inches, 0.9144 metres) in length. Symbols There is no universally agreed symbol, but the following are used: square yards, square yard, square yds, square yd; sq yards, sq yard, sq yds, sq yd, sq.yd.; yards/-2, yard/-2, yds/-2, yd/-2; yards^2, yard^2, yds^2, yd^2; yards², yard², yds², yd². Conversions One square yard is equivalent to: 1,296 square inches; 9 square feet; ≈ 0.00020661157 acres; ≈ 0.000000322830579 square miles; 836,127.36 square millimetres; 8,361.2736 square centimetres; 0.83612736 square metres; 0.000083612736 hectares; 0.00000083612736 square kilometres; 1.00969 gaj.
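As a quick programmatic check of the conversions above, here is a minimal sketch; the function name and unit selection are illustrative, while the metric values follow exactly from the definition 1 yd = 0.9144 m and the identity 1 acre = 4,840 square yards.

```python
from fractions import Fraction

# 1 yd = 0.9144 m exactly, so 1 sq yd = 0.9144^2 = 0.83612736 sq m exactly.
SQ_M_PER_SQ_YD = Fraction(9144, 10000) ** 2

def square_yards_to(value: float) -> dict:
    """Convert an area in square yards to the units listed above."""
    sq_m = value * float(SQ_M_PER_SQ_YD)
    return {
        "square inches": value * 1296,        # 36 in per yd, squared
        "square feet": value * 9,             # 3 ft per yd, squared
        "acres": value / 4840,                # 4,840 sq yd per acre
        "square miles": value / (1760 ** 2),  # 1,760 yd per mile, squared
        "square metres": sq_m,
        "hectares": sq_m / 10_000,
        "square kilometres": sq_m / 1_000_000,
    }

for unit, amount in square_yards_to(1).items():
    print(f"1 sq yd = {amount:.12g} {unit}")
```

Running this reproduces the figures in the list, e.g. 1/4840 ≈ 0.00020661157 acres.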
Physical sciences
Area
Basics and measurement
1619420
https://en.wikipedia.org/wiki/White-necked%20jacobin
White-necked jacobin
The white-necked jacobin (Florisuga mellivora) is a medium-sized hummingbird that ranges from Mexico south through Central America and northern South America into Brazil, Peru, and Bolivia. It is also found in Trinidad and Tobago. Other common names are great jacobin and collared hummingbird. Taxonomy In 1743 the English naturalist George Edwards included a picture and a description of the white-necked jacobin in his A Natural History of Uncommon Birds. He used the English name "white-belly'd huming bird". Edwards based his etching on a specimen owned by the Duke of Richmond that had been collected in Suriname. When in 1758 the Swedish naturalist Carl Linnaeus updated his Systema Naturae for the tenth edition, he placed the white-necked jacobin with the other hummingbirds in the genus Trochilus. Linnaeus included a brief description, coined the binomial name Trochilus mellivorus, and cited Edwards' work. The specific epithet combines the Latin mel, meaning "honey", and -vorus, meaning "eating". The type locality is Suriname. The white-necked jacobin is now placed in the genus Florisuga, which was introduced in 1850 by Charles Bonaparte. Two subspecies are recognised: F. m. mellivora Linnaeus (1758) and F. m. flabellifera Gould (1846); the latter has sometimes been called F. m. tobagensis. Description The white-necked jacobin is long. Males weigh and females . The male is unmistakable with its dark blue head and chest and white belly and tail; the tail feathers have black tips. A white band on the nape separates the blue head from the bright green back and long uppertail coverts. Females are highly variable, and may resemble adult or immature males. The majority of females have green upperparts, a blue-green throat and breast with white "scales", a white belly, and a mostly green tail with a blue end. Immature males vary from female-like, but with more white in the tail, to male-like with more black there. Immature females also vary but usually have less white in the tail and are somewhat bronzy on the throat and chest. Distribution and habitat The nominate subspecies of the white-necked jacobin, F. m. mellivora, is found from southern Veracruz and northern Oaxaca, Mexico, through southern Belize, northern Guatemala, eastern Honduras and Nicaragua, eastern and western Costa Rica, and Panama into South America. In that continent it is found in much of Colombia and Ecuador, eastern Peru, northern Bolivia, most of Venezuela, the Guianas, the northwestern half of Brazil, and the island of Trinidad. F. m. flabellifera is found only on the island of Tobago. The nominate has been recorded as a vagrant in Argentina and on the islands of Aruba and Curaçao. The white-necked jacobin inhabits the canopy and edges of humid forest and also semi-open landscapes such as tall secondary forest, gallery forest, and coffee and cacao plantations. It is usually seen high in trees but comes lower at edges and in clearings. In elevation it usually ranges from sea level to about but has also rarely been seen as high as . Behavior Movement The white-necked jacobin's movement pattern is not well understood. It apparently moves seasonally as flower abundance changes, but details are lacking. Feeding The white-necked jacobin feeds on nectar at the flowers of tall trees, epiphytes, shrubs, and Heliconia plants. Several may feed in one tree and are aggressive to each other, but they are otherwise seldom territorial. Both sexes hawk small insects, mostly by hovering, darting, or sallying from perches.
Breeding The white-necked jacobin breeds in the dry to early wet seasons, the timing of which varies across its range. The nest is a shallow cup of plant down and cobweb placed on the upper surface of a leaf where another leaf provides a "roof". It is typically above ground and sometimes near a stream. Males display and chase in the canopy and along edges during the breeding season. Females use a fluttering flight to distract predators. Vocalization The white-necked jacobin is not highly vocal. Its song is "a long series of high-pitched single notes, repeated at a rate of c. 0.7–1 note/second 'tseee....tseee....tseee....tseee....'." Calls include "a short 'tsik',...a longer, high-pitched 'sweet', and a descending 'swee-swee-swee-swee' in antagonistic interactions." Status The IUCN has assessed the white-necked jacobin as being of Least Concern. It has an extremely large range, but its population has not been quantified and its trend is unknown. It is deemed uncommon to common in most of its range. It occurs in many protected areas and appears able to use human-altered landscapes such as tree plantations.
Biology and health sciences
Apodiformes
Animals
60411
https://en.wikipedia.org/wiki/Steller%27s%20sea%20cow
Steller's sea cow
Steller's sea cow (Hydrodamalis gigas) is an extinct sirenian described by Georg Wilhelm Steller in 1741. At that time, it was found only around the Commander Islands in the Bering Sea between Alaska and Russia; its range had extended across the North Pacific during the Pleistocene epoch, and likely contracted to such an extreme degree due to the glacial cycle. It is possible indigenous populations interacted with the animal before Europeans did. Steller first encountered it on Vitus Bering's Great Northern Expedition when the crew became shipwrecked on Bering Island. Much of what is known about its behavior comes from Steller's observations on the island, documented in his posthumous publication On the Beasts of the Sea. Within 27 years of its discovery by Europeans, the slow-moving and easily caught mammal was hunted into extinction for its meat, fat, and hide. Some 18th-century adults would have reached weights of and lengths up to . It was a member of the family Dugongidae, of which the dugong (Dugong dugon) is the sole living member. It had a thicker layer of blubber than other members of the order, an adaptation to the cold waters of its environment. Its tail was forked, like that of whales or dugongs. Lacking true teeth, it had an array of white bristles on its upper lip and two keratinous plates within its mouth for chewing. It fed mainly on kelp and communicated with sighs and snorting sounds. Steller believed it was a monogamous and social animal living in small family groups and raising its young, similar to modern sirenians. Description Steller's sea cows are reported to have grown to long as adults, much larger than extant sirenians. In 1987, a rather complete skeleton was found on Bering Island measuring . In 2017, another such skeleton was found on Bering Island measuring , and in life the animal was probably about . Georg Steller's writings contain two contradictory estimates of weight: . The true value is estimated to fall between these figures, at about . This size made the sea cow one of the largest mammals of the Holocene epoch, along with baleen whales and a few toothed whales, and was likely an adaptation to reduce its surface-area-to-volume ratio and conserve heat. Unlike other sirenians, Steller's sea cow was positively buoyant, meaning that it was unable to submerge completely. It had a very thick outer skin, , to prevent injury from sharp rocks and ice and possibly to prevent unsubmerged skin from drying out. The sea cow's blubber was thick, another adaptation to the frigid climate of the Bering Sea. Its skin was brownish-black, with white patches on some individuals. It was smooth along its back and rough on its sides, with crater-like depressions most likely caused by parasites. This rough texture led to the animal being nicknamed the "bark animal". Hair on its body was sparse, but the insides of the sea cow's flippers were covered in bristles. The fore limbs were roughly long, and the tail fluke was forked. The sea cow's head was small and short in comparison to its huge body. The animal's upper lip was large and broad, extending so far beyond the lower jaw that the mouth appeared to be located underneath the skull. Unlike other sirenians, Steller's sea cow was toothless and instead had a dense array of interlacing white bristles on its upper lip. The bristles were about in length and were used to tear seaweed stalks and hold food. The sea cow also had two keratinous plates, called ceratodontes, located on its palate and mandible, used for chewing.
According to Steller, these plates (or "masticatory pads") were held together by interdental papillae, a part of the gums, and had many small holes containing nerves and arteries. As with all sirenians, the sea cow's snout pointed downwards, which allowed it to better grasp kelp. The sea cow's nostrils were roughly long and wide. In addition to those within its mouth, the sea cow also had stiff bristles long protruding from its muzzle. Steller's sea cow had small eyes located halfway between its nostrils and ears, with black irises, livid eyeballs, and canthi which were not externally visible. The animal had no eyelashes, but like other diving creatures such as sea otters, Steller's sea cow had a nictitating membrane, which covered its eyes to prevent injury while feeding. The tongue was small and remained in the back of the mouth, unable to reach the masticatory (chewing) pads. The sea cow's spine is believed to have had seven cervical (neck), 17 thoracic, three lumbar, and 34 caudal (tail) vertebrae. Its ribs were large, with five of the 17 pairs making contact with the sternum; it had no clavicles. As in all sirenians, the scapula of Steller's sea cow was fan-shaped, being larger on the posterior side and narrower towards the neck. The anterior border of the scapula was nearly straight, whereas those of modern sirenians are curved. Like other sirenians, the bones of Steller's sea cow were pachyosteosclerotic, meaning they were both bulky (pachyostotic) and dense (osteosclerotic). In all collected skeletons of the sea cow, the manus is missing; since Dusisiren, the sister taxon of Hydrodamalis, had reduced phalanges (finger bones), Steller's sea cow possibly did not have a manus at all. The sea cow's heart was in weight; its stomach measured long and wide. The full length of its intestinal tract was about , equaling more than 20 times the animal's length. The sea cow had no gallbladder, but did have a wide common bile duct. Its anus was in width, with its feces resembling those of horses. The male's penis was long. Genetic evidence indicates convergent evolution with other marine mammals in genes related to metabolic and immune function, including leptin, which is associated with energy homeostasis and reproductive regulation. Ecology and behavior Whether Steller's sea cow had any natural predators is unknown. It may have been hunted by killer whales and sharks, though its buoyancy may have made it difficult for killer whales to drown, and the rocky kelp forests in which the sea cow lived may have deterred sharks. According to Steller, the adults guarded the young from predators. Steller described an ectoparasite on the sea cows that was similar to the whale louse (Cyamus ovalis), but the parasite remains unidentified because of the host's extinction and the loss of all original specimens collected by Steller. It was first formally described as Sirenocyamus rhytinae in 1846 by Johann Friedrich von Brandt, although it has since been placed in the genus Cyamus as Cyamus rhytinae. It was the only species of cyamid amphipod reported to inhabit a sirenian. Steller also identified an endoparasite in the sea cows, which was likely an ascarid nematode. Like other sirenians, Steller's sea cow was an obligate herbivore and spent most of the day feeding, lifting its head only every 4–5 minutes to breathe. Kelp was its main food source, making it an algivore.
The sea cow likely fed on several species of kelp, which have been identified as Agarum spp., Alaria praelonga, Halosaccion glandiforme, Laminaria saccharina, Nereocystis luetkeana, and Thalassiophyllum clathrus. Steller's sea cow fed directly only on the soft parts of the kelp, which caused the tougher stem and holdfast to wash up on the shore in heaps. The sea cow may have also fed on seagrass, but the plant was not common enough to support a viable population and could not have been the sea cow's primary food source. Further, the available seagrasses in the sea cow's range (Phyllospadix spp. and Zostera marina) may have grown too deep underwater or been too tough for the animal to consume. Since the sea cow floated, it likely fed on canopy kelp, as it is believed to have had access only to food no deeper than below the tide. Kelp releases a chemical deterrent to protect it from grazing, but canopy kelp releases a lower concentration of the chemical, allowing the sea cow to graze safely. Steller noted that the sea cow grew thin during the frigid winters, indicating a period of fasting due to low kelp growth. Fossils show that Pleistocene sea cows from the Aleutian Islands were larger than those from the Commander Islands, indicating that the growth of Commander Island sea cows may have been stunted by a less favorable habitat and less food than were available in the warmer Aleutian Islands. Steller described the sea cow as being highly social (gregarious). It lived in small family groups and helped injured members, and was also apparently monogamous. Steller's sea cow may have exhibited parental care, and the young were kept at the front of the herd for protection against predators. Steller reported that as a female was being captured, a group of other sea cows attacked the hunting boat by ramming and rocking it, and after the hunt, her mate followed the boat to shore, even after the captured animal had died. Mating season occurred in early spring and gestation took a little over a year, with calves likely delivered in autumn, as Steller observed a greater number of calves in autumn than at any other time of the year. Since female sea cows had only one set of mammary glands, they likely had one calf at a time. The sea cow used its fore limbs for swimming, feeding, walking in shallow water, defending itself, and holding on to its partner during copulation. According to Steller, the fore limbs were also used to anchor the sea cow down to prevent it from being swept away by the strong nearshore waves. While grazing, the sea cow progressed slowly by moving its tail (fluke) from side to side; more rapid movement was achieved by strong vertical beating of the tail. Sea cows often slept on their backs after feeding. According to Steller, the sea cow was nearly mute and made only heavy breathing sounds, raspy snorting similar to a horse's, and sighs. Despite their large size, and as with much other marine megafauna in the region, Steller's sea cows may have been prey for the local transient orcas (Orcinus orca); it is likely that they experienced predation, as Steller observed that foraging sea cows with calves would always keep their calves between themselves and the shore, and orcas would have been the most likely cause of this behavior. In addition, early indigenous peoples of the North Pacific may have depended on the sea cow for food, and it is possible that this dependency extirpated the sea cow from portions of the North Pacific outside the Commander Islands.
Steller's sea cows may have also had a mutualistic (or possibly even parasitic) relationship with local seabird species: Steller often observed birds perching on the exposed backs of the sea cows, feeding on the parasitic Cyamus rhytinae. This relationship, which disappeared with the sea cows, may have been a food source for many birds, and is similar to the recorded interactions between oxpeckers (Buphagus) and extant African megafauna. Taxonomy Phylogeny Steller's sea cow was a member of the genus Hydrodamalis, a group of large sirenians whose sister taxon was Dusisiren. Like those of Steller's sea cow, the ancestors of Dusisiren lived in tropical mangroves before adapting to the cold climates of the North Pacific. Hydrodamalis and Dusisiren are classified together in the subfamily Hydrodamalinae, which diverged from other sirenians around 4 to 8 mya. Steller's sea cow is a member of the family Dugongidae, the sole surviving member of which, and thus Steller's sea cow's closest living relative, is the dugong (Dugong dugon). Steller's sea cow was a direct descendant of the Cuesta sea cow (H. cuestae), an extinct tropical sea cow that lived off the coast of western North America, particularly California. The Cuesta sea cow is thought to have become extinct due to the onset of the Quaternary glaciation and the subsequent cooling of the oceans. Many populations died out, but the lineage of Steller's sea cow was able to adapt to the colder temperatures. The Takikawa sea cow (H. spissa) of Japan is thought by some researchers to be a taxonomic synonym of the Cuesta sea cow, but based on a comparison of endocasts, the Takikawa and Steller's sea cows are more derived than the Cuesta sea cow. This has led some to believe that the Takikawa sea cow is its own species. The evolution of the genus Hydrodamalis was characterized by increasing size and a loss of teeth and phalanges, as a response to the onset of the Quaternary glaciation. Research history Steller's sea cow was discovered in 1741 by Georg Wilhelm Steller, and was named after him. Steller researched the wildlife of Bering Island while he was shipwrecked there for about a year; the animals on the island included relict populations of sea cows, sea otters, Steller sea lions, and northern fur seals. As the crew hunted the animals to survive, Steller described them in detail. Steller's account was included in his posthumous publication De bestiis marinis, or The Beasts of the Sea, which was published in 1751 by the Russian Academy of Sciences in Saint Petersburg. The zoologist Eberhard von Zimmermann formally described Steller's sea cow in 1780 as Manati gigas. The biologist Anders Jahan Retzius in 1794 placed the sea cow in the new genus Hydrodamalis, with the specific name stelleri, in honor of Steller. In 1811, the naturalist Johann Karl Wilhelm Illiger reclassified Steller's sea cow into the genus Rytina, which many writers at the time adopted. The name Hydrodamalis gigas, the correct combinatio nova if a separate genus is recognised, was first used in 1895 by Theodore Sherman Palmer. For decades after its discovery, no skeletal remains of a Steller's sea cow were known. This may have been due to rising and falling sea levels over the course of the Quaternary period, which could have left many sea cow bones hidden. The first bones of a Steller's sea cow were unearthed in about 1840, over 70 years after it was presumed to have become extinct.
The first partial sea cow skull was discovered in 1844 by Ilya Voznesensky while on the Commander Islands, and the first skeleton was discovered in 1855 on northern Bering Island. These specimens were sent to Saint Petersburg in 1857, and another nearly complete skeleton arrived in Moscow around 1860. Until recently, all of the full skeletons had been found during the 19th century; the most productive period in terms of unearthed skeletal remains was 1878 to 1883, during which 12 of the 22 skeletons with known dates of collection were discovered. Some authors did not believe that further significant skeletal material could be recovered from the Commander Islands after this period, but a skeleton was found in 1983, and two zoologists collected about 90 bones in 1991. Only two to four of the sea cow skeletons exhibited in various museums of the world originate from a single individual each. It is known that Adolf Erik Nordenskiöld, Benedykt Dybowski, and Leonhard Hess Stejneger unearthed many skeletal remains from different individuals in the late 1800s, from which composite skeletons were assembled. As of 2006, 27 nearly complete skeletons and 62 complete skulls had been found, but most of them are assemblages of bones from two to 16 different individuals. Illustrations The Pallas Picture is the only known drawing of Steller's sea cow believed to be from a complete specimen. It was published by Peter Simon Pallas in his 1840 work Icones ad Zoographia Rosso-Asiatica. Pallas did not specify a source; Stejneger suggested it may have been one of the original illustrations produced by Friedrich Plenisner, a member of Vitus Bering's crew serving as a painter and surveyor, who drew a figure of a female sea cow at Steller's request. Most of Plenisner's depictions were lost during transit from Siberia to Saint Petersburg. Another drawing of Steller's sea cow similar to the Pallas Picture appeared on a 1744 map drawn by Sven Waxell and Sofron Chitrow. This picture may also have been based upon a specimen, and was published in 1893 by Pekarski. The map depicted Vitus Bering's route during the Great Northern Expedition, and featured illustrations of Steller's sea cow and Steller's sea lion in the upper-left corner. The drawing contains some inaccurate features, such as the inclusion of eyelids and fingers, leading to doubt that it was drawn from a specimen. Johann Friedrich von Brandt, director of the Russian Academy of Sciences, had the "Ideal Image" drawn in 1846 based upon the Pallas Picture, and then the "Ideal Picture" in 1868 based upon collected skeletons. Two other possible drawings of Steller's sea cow were found in 1891 in Waxell's manuscript diary. One was a map depicting a sea cow, as well as a Steller sea lion and a northern fur seal. The sea cow was depicted with large eyes, a large head, claw-like hands, exaggerated folds on the body, and a tail fluke drawn in perspective, lying horizontally rather than vertically. The drawing may have been a distorted depiction of a juvenile, as the figure bears a resemblance to a manatee calf. Another similar image was found by Alexander von Middendorff in 1867 in the library of the Russian Academy of Sciences, and is probably a copy of the Tsarskoye Selo Picture. Range The range of Steller's sea cow at the time of its discovery was apparently restricted to the shallow seas around the Commander Islands, which include Bering and Copper Islands.
The Commander Islands remained uninhabited until 1825, when the Russian-American Company relocated Aleuts there from Attu Island and Atka Island. The first fossils discovered outside the Commander Islands were found in interglacial Pleistocene deposits on Amchitka, and further fossils dating to the late Pleistocene were found in Monterey Bay, California, and on Honshu, Japan. This suggests that the sea cow had a far more extensive range in prehistoric times, though it cannot be excluded that these fossils belong to other Hydrodamalis species. The southernmost find is a Middle Pleistocene rib bone from the Bōsō Peninsula of Japan. The remains of three individuals were found preserved in the South Bight Formation of Amchitka; as late Pleistocene interglacial deposits are rare in the Aleutians, the discovery suggests that sea cows were abundant in that era. According to Steller, the sea cow often resided along shallow, sandy shorelines and in the mouths of freshwater rivers. Genetic evidence suggests that Steller's sea cow, as well as the modern dugong, suffered a population bottleneck (a significant reduction in population) bottoming out roughly 400,000 years ago. Bone fragments and accounts by native Aleut people suggest that sea cows also historically inhabited the Near Islands, potentially with viable populations that were in contact with humans in the western Aleutian Islands prior to Steller's discovery in 1741. A sea cow rib discovered in 1998 on Kiska Island was dated to around 1,000 years old, and is now in the possession of the Burke Museum in Seattle. The dating may be skewed by the marine reservoir effect, which causes radiocarbon-dated marine specimens to appear several hundred years older than they are: the ocean stores large amounts of old carbon, already depleted in carbon-14, which marine animals incorporate into their tissues. Accounting for this, it is more likely that the animal died between 1710 and 1785. A 2004 study reported that sea cow bones discovered on Adak Island were around 1,700 years old, and sea cow bones discovered on Buldir Island were found to be around 1,600 years old; it is possible these bones were from cetaceans and were misclassified. Rib bones of a Steller's sea cow have also been found on St. Lawrence Island, and the specimen is thought to have lived between 800 and 920 CE. Interactions with humans Extinction Genetic evidence suggests the Steller's sea cows around the Commander Islands were the last remnant of a much more ubiquitous population dispersed across the North Pacific coastal zones. They had genetic diversity as low as that of the last, rather inbred population of woolly mammoths on Wrangel Island. During glacial periods, with their reductions in sea levels and temperatures, suitable habitat substantially regressed, fragmenting the population. By the time sea levels stabilized around 5,000 years ago, the population had already plummeted. Together, these indicate that even without human influence, the Steller's sea cow would have been a dead clade walking: the vast majority of the population had already gone extinct from natural climatic and sea-level shifts, and the tiny remaining population was at major risk from a genetic extinction vortex. The presence of Steller's sea cows in the Aleutian Islands may have caused the Aleut people to migrate westward to hunt them. This possibly led to the sea cow's extirpation in that area, assuming it had not already happened, but the archaeological evidence is inconclusive. One factor potentially leading to the extinction of Steller's sea cow, specifically off the coast of St.
Lawrence Island, was the Siberian Yupik people, who have inhabited St. Lawrence Island for 2,000 years. They may have hunted the sea cows to extinction there, as the natives have a dietary culture heavily dependent upon marine mammals. The onset of the Medieval Warm Period, which reduced the availability of kelp, may also have contributed to their local extinction in that area. It has also been argued that the decline of Steller's sea cow may have been an indirect effect of the harvesting of sea otters by the area's aboriginal people. With the otter population reduced, the sea urchin population would have increased, in turn reducing the stock of kelp, the sea cow's principal food. In historic times, though, aboriginal hunting had depleted sea otter populations only in localized areas, and as the sea cow would have been easy prey for aboriginal hunters, accessible populations may have been exterminated with or without simultaneous otter hunting. In any event, the range of the sea cow was limited to coastal areas off uninhabited islands by the time Bering arrived, and the animal was already endangered. When Europeans discovered them, there may have been only 2,000 individuals left. This small population was quickly wiped out by fur traders, seal hunters, and others who followed Vitus Bering's route past its habitat to Alaska. It was also hunted to collect its valuable subcutaneous fat. The animal was hunted and used by Ivan Krassilnikov in 1754 and Ivan Korovin in 1762, but Dimitri Bragin, in 1772, and others later, did not see it. Brandt thus concluded that by 1768, twenty-seven years after it had been discovered by Europeans, the species was extinct. In 1887, Stejneger estimated that there had been fewer than 1,500 individuals remaining at the time of Steller's discovery, and argued that there was already an immediate danger of the sea cow's extinction. The first attempt to hunt the animal by Steller and the other crew members was unsuccessful due to its strength and thick hide: they attempted to impale it and haul it to shore using a large hook and heavy cable, but the crew could not pierce its skin. In a second attempt a month later, a harpooner speared an animal, and men on shore hauled it in while others repeatedly stabbed it with bayonets. It was dragged into shallow waters, and the crew waited until the tide receded and it was beached before butchering it. After this, the animals were hunted with relative ease, the challenge being in hauling them back to shore. This bounty inspired maritime fur traders to detour to the Commander Islands and restock their food supplies during North Pacific expeditions. Impact of extinction While not a keystone species, Steller's sea cows likely influenced the community composition of the kelp forests they inhabited. They also boosted the forests' productivity and resilience to environmental stressors by allowing more light into the kelp forests, so that more kelp could grow, and by enhancing the recruitment and dispersal of kelp through their feeding behavior. In the modern day, the flow of nutrients from kelp forests to adjacent ecosystems is regulated by the seasons, with seasonal storms and currents being the primary factor; the Steller's sea cow may have allowed this flow to continue year-round, thus allowing for more productivity in adjacent habitats. The disturbance caused by the Steller's sea cow may have facilitated the dispersal of kelp, most notably Nereocystis species, to other habitats, allowing recruitment and colonization of new areas, and facilitating genetic exchange.
Their presence may have also allowed sea otters and large marine invertebrates to coexist, indicating that the commonly documented decline in marine invertebrate populations driven by sea otters (for example, in populations of the black leather chiton) may be due to lost ecosystem functions associated with the Steller's sea cow. This suggests that, due to the sea cow's extinction, the ecosystem dynamics and resilience of North Pacific kelp forests may have been compromised well before better-known modern stressors such as overharvesting and climate change.

Later reported sightings

Sea cow sightings have been reported after Brandt's official 1768 date of extinction. Lucien Turner, an American ethnologist and naturalist, said the natives of Attu Island reported that the sea cows survived into the 1800s, and were sometimes hunted. In 1963, the official journal of the Academy of Sciences of the USSR published an article announcing a possible sighting. The previous year, the whaling ship Buran had reported a group of large marine mammals grazing on seaweed in shallow water off Kamchatka, in the Gulf of Anadyr. The crew reported seeing six of these animals, with trunks and split lips. There have also been alleged sightings by local fishermen in the northern Kuril Islands, and around the Kamchatka and Chukchi peninsulas.

Uses

Steller's sea cow was described as being "tasty" by Steller; the meat was said to have a taste similar to corned beef, though it was tougher, redder, and needed to be cooked longer. The meat was abundant on the animal and slow to spoil, perhaps because the high amount of salt in the animal's diet effectively cured it. The fat could be used for cooking and as an odorless lamp oil. The crew of the St. Peter drank the fat in cups, and Steller described it as having a taste like almond oil. The thick, sweet milk of female sea cows could be drunk or made into butter, and the thick, leathery hide could be used to make clothing, such as shoes and belts, and large skin boats sometimes called baidarkas or umiaks. Towards the end of the 19th century, bones and fossils from the extinct animal were valuable and often sold to museums at high prices. Most were collected during this time, limiting trade after 1900. Some are still sold commercially, as the highly dense cortical bone is well-suited for making items such as knife handles and decorative carvings. Because the sea cow is extinct, native artisan products made in Alaska from this "mermaid ivory" are legal to sell in the United States and do not fall under the jurisdiction of the Marine Mammal Protection Act (MMPA) or the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES), which restrict the trade of marine mammal products. Although such distribution is legal, the sale of unfossilized bones is generally prohibited, and trade in products made of the bones is regulated, because some of the material is unlikely to be authentic and probably comes from arctic cetaceans. The ethnographer Elizabeth Porfirevna Orlova, from the Russian Museum of Ethnography, conducted fieldwork among the Commander Islands Aleuts from August to September 1961. Her research includes notes about a game of accuracy, called kakan ("stones"), played with the bones of the Steller's sea cow. Kakan was usually played at home between adults during bad weather, at least during Orlova's fieldwork.
In media and folklore

In the story "The White Seal" from The Jungle Book by Rudyard Kipling, which takes place in the Bering Sea, Kotick the rare white seal consults Sea Cow during his journey to find a new home. Tales of a Sea Cow is a 2012 docufiction film by the Icelandic-French artist Etienne de France about a fictional 2006 discovery of Steller's sea cows off the coast of Greenland. The film has been exhibited in art museums and universities in Europe. Steller's sea cows appear in two books of poetry: Nach der Natur (1995) by Winfried Georg Sebald, and Species Evanescens (2009) by the Russian poet Andrei Bronnikov. Bronnikov's book depicts the events of the Great Northern Expedition through the eyes of Steller; Sebald's book looks at the conflict between man and nature, including the extinction of Steller's sea cow. The novel Elolliset (Living Things, 2023) by the Finnish author and literary scholar Iida Turpeinen uses Steller's sea cow and its demise as a central theme. It features multiple characters at different times in history who were involved with the animal, beginning with Steller's expedition and telling how the complete skeleton was conserved and ended up in the natural history museum in Helsinki.

Genetic research and potential revival

In 2021, the nuclear genome of the species was sequenced from skeletal remains. The reconstructed genome showed that the species was already declining due to low genetic diversity caused by climate change and hunting by Paleolithic humans prior to its discovery by Steller, similar to the final populations of woolly mammoths on Wrangel Island. A year later, in late 2022, a group of Russian scientists funded by Sergi Bachin began research to potentially revive the species and reintroduce it to the Bering Sea. Arctic Sirenia plans to revive the species through genome editing of the dugong, but an artificial womb would be necessary to produce a living sea cow because no appropriate living surrogate species exists. Ben Lamm of Colossal Biosciences has also stated that he and his company want to revive the species after they complete their first four projects (woolly mammoth, dodo, thylacine, and northern white rhinoceros) and have an artificial animal womb developed.
Biology and health sciences
Sirenia
Animals
60424
https://en.wikipedia.org/wiki/Hallucigenia
Hallucigenia
Hallucigenia is a genus of lobopodian known from Cambrian-aged fossils in Burgess Shale-type deposits in Canada and China, and from isolated spines around the world. The generic name reflects the type species' unusual appearance and eccentric history of study; when it was erected as a genus, H. sparsa was reconstructed as an enigmatic animal upside down and back to front. Lobopodians are a grade of Paleozoic panarthropods from which the velvet worms, water bears, and arthropods arose.

Description

Hallucigenia is a long tubular animal with up to ten pairs of slender legs (lobopods). The first 2 or 3 leg pairs are slender and featureless, while the remaining 7 or 8 pairs each terminate in 1 or 2 claws. Above the trunk region are 7 pairs of rigid conical sclerites (spines) corresponding to the 3rd–9th leg pairs. The trunk is either featureless (H. sparsa) or divided by heteronomous annulations (H. fortis and H. hongmeia). The "head" and "tail" ends of the animal are difficult to identify; one end extends some distance beyond the legs and often droops down as if to reach the substrate. Some specimens display traces of a simple gut. Research in the mid-2010s clarified that the longer end is a head with an anteroventral mouth and at least a pair of simple eyes. The shape of the head differs between species – elongated in H. sparsa, rounded in H. fortis – while that of H. hongmeia remains unknown. At least in H. sparsa, the head possesses radial teeth and pharyngeal teeth within the front of the gut. Hallucigenia spines are made up of one to four nested elements. The spine surface of H. sparsa is covered in an ornament of minute triangular 'scales', while the spine surface of H. hongmeia has a net-like texture of microscopic circular openings, which can be interpreted as the remains of papillae.

History of study

Hallucigenia sparsa was originally described by Charles Walcott as a species of the polychaete worm Canadia. In his 1977 redescription of the organism, Simon Conway Morris recognized the animal as something quite distinct, for which he proposed the name Hallucigenia because of the "bizarre and dream-like appearance of the animal." No specimen was available that showed both rows of legs, so Conway Morris reconstructed the animal walking on its spines, with its single row of legs interpreted as tentacles on the animal's back. A dark stain at one end of the animal was interpreted as a featureless head. Only the forward tentacles could easily reach the 'head', meaning that a mouth on the head would have to be fed by passing food along the line of tentacles. Conway Morris suggested that a hollow tube within each of the tentacles might be a mouth. This raised questions, such as how the animal would walk on its stiff legs, but it was accepted (with reservations) as the best available interpretation. An alternative interpretation considered Hallucigenia to be an appendage of a larger, unknown animal. There had been precedent for this, as Anomalocaris had originally been identified as three separate creatures before being recognized as a single, huge (for its time) creature. In 1991, Lars Ramskold and Hou Xianguang, working with additional specimens of a "hallucigenid", Microdictyon, from the lower Cambrian Maotianshan shales of China, reinterpreted Hallucigenia as a lobopodian, a group of legged worm-like animals that at the time was still thought to be related exclusively to the onychophorans (velvet worms).
They inverted it, interpreting the tentacles, which they believed to be paired, as walking structures and the spines as protective. Further preparation of fossil specimens showed that the 'second legs' were buried at an angle to the plane along which the rock had split, and could be revealed by removing the overlying sediment. Ramskold and Hou also argued that the blob-like 'head' is actually a stain that appears in many specimens, not a preserved portion of the anatomy. This stain may be an artifact of decomposition.

Affinity

Since the revisions of the 1990s, Hallucigenia has been unquestionably recognized as a lobopodian panarthropod, although its relationship to other panarthropods remains controversial. Hallucigenia has long been interpreted as a stem-group onychophoran (velvet worm) – a position that has found support from multiple phylogenetic analyses. A key character demonstrating this affinity is the cone-in-cone construction of Hallucigenia claws, a feature shared only with modern onychophorans. On the other hand, some analyses instead support a position for Hallucigenia as a basal panarthropod outside the onychophoran stem-group. Under this scenario, the cone-in-cone structure shared between Hallucigenia and onychophorans would represent a panarthropod plesiomorphy. Hallucigenia also exhibits certain characters inherited from the ancestral ecdysozoan but lost in modern onychophorans – in particular its distinctive foregut armature. Yang et al. (2015) published a cladogram resolving the position of Hallucigenia among the lobopodians.

Diversity

In 2002, Desmond Collins informally suggested that new Hallucigenia fossils from the Burgess Shale showed male and female forms, one with "a rigid trunk, robust neck and a globular head" and the other thinner, and with a small head. Three species of Hallucigenia have been described. The first specimen, Hallucigenia sparsa, was discovered in Canada. Two other species, H. fortis and H. hongmeia, are represented by fossils from the Maotianshan Shales of Chengjiang.

Distribution

Hallucigenia was first described from the Burgess Shale in southeastern British Columbia, Canada. 109 specimens of Hallucigenia are known from the Greater Phyllopod bed, where they comprise 0.3% of the community. Hallucigenia also forms a minor component of Chinese lagerstätten. Isolated hallucigeniid spines, however, are widely distributed in a range of Cambrian deposits, preserved both as carbonaceous and mineralized fossils.
Biology and health sciences
Ecdysozoa
Animals
60426
https://en.wikipedia.org/wiki/Symbiogenesis
Symbiogenesis
Symbiogenesis (endosymbiotic theory, or serial endosymbiotic theory) is the leading evolutionary theory of the origin of eukaryotic cells from prokaryotic organisms. The theory holds that mitochondria, plastids such as chloroplasts, and possibly other organelles of eukaryotic cells are descended from formerly free-living prokaryotes (more closely related to the Bacteria than to the Archaea) taken one inside the other in endosymbiosis. Mitochondria appear to be phylogenetically related to Rickettsiales bacteria, while chloroplasts are thought to be related to cyanobacteria. The idea that chloroplasts were originally independent organisms that merged into a symbiotic relationship with other one-celled organisms dates back to the 19th century, when it was espoused by researchers such as Andreas Schimper. The endosymbiotic theory was articulated in 1905 and 1910 by the Russian botanist Konstantin Mereschkowski, and advanced and substantiated with microbiological evidence by Lynn Margulis in 1967. Among the many lines of evidence supporting symbiogenesis are that mitochondria and plastids contain their own chromosomes and reproduce by splitting in two, parallel but separate from the sexual reproduction of the rest of the cell; that the chromosomes of some mitochondria and plastids are single circular DNA molecules similar to the circular chromosomes of bacteria; that the transport proteins called porins are found in the outer membranes of mitochondria and chloroplasts, and also in bacterial cell membranes; and that cardiolipin is found only in the inner mitochondrial membrane and bacterial cell membranes.

History

The Russian botanist Konstantin Mereschkowski first outlined the theory of symbiogenesis (from Greek: σύν syn "together", βίος bios "life", and γένεσις genesis "origin, birth") in his 1905 work, The nature and origins of chromatophores in the plant kingdom, and then elaborated it in his 1910 The Theory of Two Plasms as the Basis of Symbiogenesis, a New Study of the Origins of Organisms. Mereschkowski proposed that complex life-forms had originated by two episodes of symbiogenesis, the incorporation of symbiotic bacteria to form successively nuclei and chloroplasts. Mereschkowski knew of the work of botanist Andreas Schimper. In 1883, Schimper had observed that the division of chloroplasts in green plants closely resembled that of free-living cyanobacteria. Schimper had tentatively proposed (in a footnote) that green plants had arisen from a symbiotic union of two organisms. In 1918 the French scientist Paul Jules Portier published Les Symbiotes, in which he claimed that the mitochondria originated from a symbiosis process. Ivan Wallin advocated the idea of an endosymbiotic origin of mitochondria in the 1920s. The Russian botanist Boris Kozo-Polyansky became the first to explain the theory in terms of Darwinian evolution. In his 1924 book A New Principle of Biology. Essay on the Theory of Symbiogenesis, he wrote, "The theory of symbiogenesis is a theory of selection relying on the phenomenon of symbiosis." These theories did not gain traction until more detailed electron-microscopic comparisons between cyanobacteria and chloroplasts were made, such as by Hans Ris in 1961 and 1962. These, combined with the discovery that plastids and mitochondria contain their own DNA, led to a resurrection of the idea of symbiogenesis in the 1960s. Lynn Margulis advanced and substantiated the theory with microbiological evidence in a 1967 paper, On the origin of mitosing cells.
In her 1981 work Symbiosis in Cell Evolution she argued that eukaryotic cells originated as communities of interacting entities, including endosymbiotic spirochaetes that developed into eukaryotic flagella and cilia. This last idea has not received much acceptance, because flagella lack DNA and do not show ultrastructural similarities to bacteria or to archaea (see also: Evolution of flagella and Prokaryotic cytoskeleton). According to Margulis and Dorion Sagan, "Life did not take over the globe by combat, but by networking" (i.e., by cooperation). Christian de Duve proposed that the peroxisomes may have been the first endosymbionts, allowing cells to withstand growing amounts of free molecular oxygen in the Earth's atmosphere. However, it now appears that peroxisomes may be formed de novo, contradicting the idea that they have a symbiotic origin. The fundamental theory of symbiogenesis as the origin of mitochondria and chloroplasts is now widely accepted.

From endosymbionts to organelles

Biologists usually distinguish organelles from endosymbionts – whole organisms living inside other organisms – by their reduced genome sizes. As an endosymbiont evolves into an organelle, most of its genes are transferred to the host cell genome. The host cell and organelle therefore need to develop a transport mechanism that enables the return of the protein products needed by the organelle but now manufactured by the cell.

Free-living ancestors

Alphaproteobacteria were formerly thought to be the free-living organisms most closely related to mitochondria. Later research indicates that mitochondria are most closely related to Pelagibacterales bacteria, in particular those in the SAR11 clade. Nitrogen-fixing filamentous cyanobacteria are the free-living organisms most closely related to plastids. Both cyanobacteria and alphaproteobacteria maintain a large (>6 Mb) genome encoding thousands of proteins. Plastids and mitochondria exhibit a dramatic reduction in genome size when compared with their bacterial relatives. Chloroplast genomes in photosynthetic organisms are normally 120–200 kb, encoding 20–200 proteins, and mitochondrial genomes in humans are approximately 16 kb, encoding 37 genes, 13 of which encode proteins. However, using the example of the freshwater amoeboid Paulinella chromatophora, which contains chromatophores that evolved from cyanobacteria, Keeling and Archibald argue that this is not the only possible criterion; another is that the host cell has assumed control of the regulation of the former endosymbiont's division, thereby synchronizing it with the cell's own division. Nowack and her colleagues sequenced the chromatophore genome (1.02 Mb) and found that only 867 proteins were encoded by these photosynthetic cells. Comparisons with their closest free-living cyanobacteria of the genus Synechococcus (with a genome of 3 Mb and 3,300 genes) revealed that chromatophores had undergone a drastic genome shrinkage. Chromatophores contained genes that were responsible for photosynthesis but were deficient in genes that could carry out other biosynthetic functions; this observation suggests that these endosymbiotic cells are highly dependent on their hosts for their survival and growth mechanisms. Thus, these chromatophores were found to be non-functional for organelle-specific purposes when compared with mitochondria and plastids. This distinction could have promoted the early evolution of photosynthetic organelles.
The loss of genetic autonomy, that is, the loss of many genes from endosymbionts, occurred very early in evolutionary time. Taking into account the entire original endosymbiont genome, there are three main possible fates for genes over evolutionary time. The first is the loss of functionally redundant genes, in which genes that are already represented in the nucleus are eventually lost. The second is the transfer of genes to the nucleus, while the third is that genes remain in the organelle that was once an organism. The loss of autonomy and integration of the endosymbiont with its host can be primarily attributed to nuclear gene transfer. As organelle genomes have been greatly reduced over evolutionary time, nuclear genes have expanded and become more complex. As a result, many plastid and mitochondrial processes are driven by nuclear-encoded gene products. In addition, many nuclear genes originating from endosymbionts have acquired novel functions unrelated to their organelles.

Gene transfer mechanisms

The mechanisms of gene transfer are not fully known; however, multiple hypotheses exist to explain this phenomenon. The possible mechanisms include the complementary DNA (cDNA) hypothesis and the bulk flow hypothesis. The cDNA hypothesis involves the use of messenger RNAs (mRNAs) to transport genes from organelles to the nucleus, where they are converted to cDNA and incorporated into the genome. The cDNA hypothesis is based on studies of the genomes of flowering plants. Protein-coding RNAs in mitochondria are spliced and edited using organelle-specific splice and editing sites. Nuclear copies of some mitochondrial genes, however, do not contain organelle-specific splice sites, suggesting a processed mRNA intermediate. The cDNA hypothesis has since been revised, as edited mitochondrial cDNAs are unlikely to recombine with the nuclear genome and are more likely to recombine with their native mitochondrial genome. If the edited mitochondrial sequence recombines with the mitochondrial genome, mitochondrial splice sites would no longer exist in the mitochondrial genome. Any subsequent nuclear gene transfer would therefore also lack mitochondrial splice sites. The bulk flow hypothesis is the alternative to the cDNA hypothesis, stating that escaped DNA, rather than mRNA, is the mechanism of gene transfer. According to this hypothesis, disturbances to organelles, including autophagy (normal cell destruction), gametogenesis (the formation of gametes), and cell stress, release DNA, which is imported into the nucleus and incorporated into the nuclear DNA using non-homologous end joining (repair of double-stranded breaks). For example, in the initial stages of endosymbiosis, due to a lack of major gene transfer, the host cell had little to no control over the endosymbiont. The endosymbiont underwent cell division independently of the host cell, resulting in many "copies" of the endosymbiont within the host cell. Some of the endosymbionts lysed (burst), and high levels of DNA were incorporated into the nucleus. A similar mechanism is thought to occur in tobacco plants, which show a high rate of gene transfer and whose cells contain multiple chloroplasts. In addition, the bulk flow hypothesis is also supported by the presence of non-random clusters of organelle genes, suggesting the simultaneous movement of multiple genes. Ford Doolittle proposed that (whatever the mechanism) gene transfer behaves like a ratchet, resulting in unidirectional transfer of genes from the organelle to the nuclear genome.
When genetic material from an organelle is incorporated into the nuclear genome, either the organelle copy or the nuclear copy of the gene may subsequently be lost from the population. If the organelle copy is lost and this loss becomes fixed, the gene has been successfully transferred to the nucleus. If the nuclear copy is lost, horizontal gene transfer can occur again, and the cell can 'try again' to achieve successful transfer of genes to the nucleus. In this ratchet-like way, genes from an organelle would be expected to accumulate in the nuclear genome over evolutionary time.

Endosymbiosis of protomitochondria

The endosymbiotic theory for the origin of mitochondria suggests that the proto-eukaryote engulfed a protomitochondrion, and this endosymbiont became an organelle, a major step in eukaryogenesis, the creation of the eukaryotes.

Mitochondria

Mitochondria are organelles that synthesize the energy-carrying molecule ATP for the cell by metabolizing carbon-based macromolecules. The presence of DNA in mitochondria, and of proteins derived from mtDNA, suggests that this organelle may have been a prokaryote prior to its integration into the proto-eukaryote. Mitochondria are regarded as organelles rather than endosymbionts because mitochondria and their host cells share some parts of their genome, undergo division simultaneously, and provide each other with means to produce energy. The endomembrane system and the nuclear membrane have been hypothesized to derive from the protomitochondria.

Nuclear membrane

The presence of a nucleus is one major difference between eukaryotes and prokaryotes. Some conserved nuclear proteins shared between eukaryotes and prokaryotes suggest that these two types had a common ancestor. Another theory behind nucleation is that early nuclear membrane proteins caused the cell membrane to fold and form a sphere with pores like the nuclear envelope. As a way of forming a nuclear membrane, endosymbiosis could be expected to use less energy than developing a metabolic process to fold the cell membrane for the purpose. Digesting engulfed cells without energy-producing mitochondria would have been challenging for the host cell. On this view, membrane-bound bubbles or vesicles leaving the protomitochondria may have formed the nuclear envelope. The process of symbiogenesis by which the early eukaryotic cell integrated the proto-mitochondrion likely included protection of the archaeal host genome from the release of reactive oxygen species. These would have been formed during oxidative phosphorylation and ATP production by the proto-mitochondrion. The nuclear membrane may have evolved as an adaptive innovation for protecting against nuclear genome DNA damage caused by reactive oxygen species. Substantial transfer of genes from the ancestral proto-mitochondrial genome to the nuclear genome likely occurred during early eukaryotic evolution. The greater protection of the nuclear genome against reactive oxygen species afforded by the nuclear membrane may explain the adaptive benefit of this gene transfer.

Endomembrane system

Modern eukaryotic cells use the endomembrane system to transport products and wastes into, within, and out of cells. The membranes of the nuclear envelope and of endomembrane vesicles are composed of similar membrane proteins. These vesicles also share similar membrane proteins with the organelle they originated from or are traveling towards. This suggests that what formed the nuclear membrane also formed the endomembrane system.
Prokaryotes do not have a complex internal membrane network like eukaryotes, but they can produce extracellular vesicles from their outer membrane. After the early prokaryote was consumed by a proto-eukaryote, the prokaryote would have continued to produce vesicles that accumulated within the cell. Interaction of the internal components of these vesicles may have led to the endoplasmic reticulum and the Golgi apparatus, both being parts of the endomembrane system.

Cytoplasm

The syntrophy hypothesis, proposed by López-García and Moreira around the year 2000, suggested that eukaryotes arose by combining the metabolic capabilities of an archaean, a fermenting deltaproteobacterium, and a methanotrophic alphaproteobacterium which became the mitochondrion. In 2020, the same team updated their syntrophy proposal to cover an Asgard archaean that produced hydrogen, together with a deltaproteobacterium that oxidised sulphur. A third organism, an alphaproteobacterium able to respire both aerobically and anaerobically and to oxidise sulphur, developed into the mitochondrion; it may possibly also have been able to photosynthesise.

Date

The question of when the transition from prokaryotic to eukaryotic form occurred, and when the first crown-group eukaryotes appeared on Earth, is unresolved. The oldest known body fossils that can be positively assigned to the Eukaryota are acanthomorphic acritarchs from the 1.631 Gya Deonar Formation of India. These fossils can still be identified as derived post-nuclear eukaryotes with a sophisticated, morphology-generating cytoskeleton sustained by mitochondria. This fossil evidence indicates that the endosymbiotic acquisition of alphaproteobacteria must have occurred before 1.6 Gya. Molecular clocks have also been used to estimate the age of the last eukaryotic common ancestor; however, these methods have large inherent uncertainty and give a wide range of dates. Reasonable results include the estimate of c. 1.8 Gya. A 2.3 Gya estimate also seems reasonable, and has the added attraction of coinciding with one of the most pronounced biogeochemical perturbations in Earth history, the early Palaeoproterozoic Great Oxygenation Event. The marked increase in atmospheric oxygen concentrations at that time has been suggested as a contributing cause of eukaryogenesis, inducing the evolution of oxygen-detoxifying mitochondria. Alternatively, the Great Oxidation Event might be a consequence of eukaryogenesis, and of its impact on the export and burial of organic carbon.

Organellar genomes

Plastomes and mitogenomes

Some endosymbiont genes remain in the organelles. Plastids and mitochondria retain genes encoding rRNAs, tRNAs, proteins involved in redox reactions, and proteins required for transcription, translation, and replication. There are many hypotheses to explain why organelles retain a small portion of their genome; however, no single hypothesis applies to all organisms, and the topic is still quite controversial. The hydrophobicity hypothesis states that highly hydrophobic (water-hating) proteins (such as the membrane-bound proteins involved in redox reactions) are not easily transported through the cytosol, and therefore these proteins must be encoded in their respective organelles. The code disparity hypothesis states that the limit on transfer is due to differing genetic codes and RNA editing between the organelle and the nucleus. The redox control hypothesis states that genes encoding redox reaction proteins are retained in order to effectively couple the need for repair with the synthesis of these proteins.
For example, if one of the photosystems is lost from the plastid, the intermediate electron carriers may lose or gain too many electrons, signalling the need for repair of a photosystem. The time delay involved in signalling the nucleus and transporting a cytosolic protein to the organelle results in the production of damaging reactive oxygen species. The final hypothesis states that the assembly of membrane proteins, particularly those involved in redox reactions, requires coordinated synthesis and assembly of subunits; however, translation and protein transport coordination is more difficult to control in the cytoplasm.

Non-photosynthetic plastid genomes

The majority of the genes in the mitochondria and plastids are related to the expression (transcription, translation and replication) of genes encoding proteins involved in either photosynthesis (in plastids) or cellular respiration (in mitochondria). One might predict that the loss of photosynthesis or cellular respiration would allow for the complete loss of the plastid genome or the mitochondrial genome respectively. While there are numerous examples of mitochondrial descendants (mitosomes and hydrogenosomes) that have lost their entire organellar genome, non-photosynthetic plastids tend to retain a small genome. There are two main hypotheses to explain this occurrence:

The essential tRNA hypothesis notes that there have been no documented functional plastid-to-nucleus gene transfers of genes encoding RNA products (tRNAs and rRNAs). As a result, plastids must make their own functional RNAs or import nuclear counterparts. The genes encoding tRNA-Glu and tRNA-fmet, however, appear to be indispensable. The plastid is responsible for haem biosynthesis, which requires plastid-encoded tRNA-Glu (from the gene trnE) as a precursor molecule. Like other genes encoding RNAs, trnE cannot be transferred to the nucleus. In addition, it is unlikely trnE could be replaced by a cytosolic tRNA-Glu, as trnE is highly conserved; single base changes in trnE have resulted in the loss of haem synthesis. The gene for tRNA-formylmethionine (tRNA-fmet) is also encoded in the plastid genome and is required for translation initiation in both plastids and mitochondria. A plastid is required to continue expressing the gene for tRNA-fmet so long as the mitochondrion is translating proteins.

The limited window hypothesis offers a more general explanation for the retention of genes in non-photosynthetic plastids. According to this hypothesis, genes are transferred to the nucleus following the disturbance of organelles. Disturbance was common in the early stages of endosymbiosis; however, once the host cell gained control of organelle division, eukaryotes could evolve to have only one plastid per cell. Having only one plastid severely limits gene transfer, as the lysis of the single plastid would likely result in cell death. Consistent with this hypothesis, organisms with multiple plastids show an 80-fold increase in plastid-to-nucleus gene transfer compared with organisms with single plastids.

Evidence

There are many lines of evidence that mitochondria and plastids, including chloroplasts, arose from bacteria. New mitochondria and plastids are formed only through binary fission, the form of cell division used by bacteria and archaea. If a cell's mitochondria or chloroplasts are removed, the cell does not have the means to create new ones.
In some algae, such as Euglena, the plastids can be destroyed by certain chemicals or prolonged absence of light without otherwise affecting the cell: the plastids do not regenerate. Transport proteins called porins are found in the outer membranes of mitochondria and chloroplasts, and are also found in bacterial cell membranes. The membrane lipid cardiolipin is found exclusively in the inner mitochondrial membrane and bacterial cell membranes. Some mitochondria and some plastids contain single circular DNA molecules that are similar to the DNA of bacteria both in size and structure. Genome comparisons suggest a close relationship between mitochondria and Alphaproteobacteria. Genome comparisons suggest a close relationship between plastids and cyanobacteria. Many genes in the genomes of mitochondria and chloroplasts have been lost or transferred to the nucleus of the host cell. Consequently, the chromosomes of many eukaryotes contain genes that originated from the genomes of mitochondria and plastids. Mitochondria and plastids contain their own ribosomes; these are more similar to those of bacteria (70S) than those of eukaryotes. Proteins created by mitochondria and chloroplasts use N-formylmethionine as the initiating amino acid, as do proteins created by bacteria, but not proteins created by eukaryotic nuclear genes or archaea.

Secondary endosymbiosis

Primary endosymbiosis involves the engulfment of a cell by another free-living organism. Secondary endosymbiosis occurs when the product of primary endosymbiosis is itself engulfed and retained by another free-living eukaryote. Secondary endosymbiosis has occurred several times and has given rise to extremely diverse groups of algae and other eukaryotes. Some organisms can take opportunistic advantage of a similar process, in which they engulf an alga and use the products of its photosynthesis, but once the prey item dies (or is lost) the host returns to a free-living state. Obligate secondary endosymbionts become dependent on their organelles and are unable to survive in their absence. A secondary endosymbiosis event involving an ancestral red alga and a heterotrophic eukaryote resulted in the evolution and diversification of several other photosynthetic lineages, including Cryptophyta, Haptophyta, Stramenopiles (or Heterokontophyta), and Alveolata. A possible secondary endosymbiosis has been observed in process in the heterotrophic protist Hatena. This organism behaves like a predator until it ingests a green alga, which loses its flagella and cytoskeleton but continues to live as a symbiont. Hatena, meanwhile, now a host, switches to photosynthetic nutrition, gains the ability to move towards light, and loses its feeding apparatus. Despite the diversity of organisms containing plastids, the morphology, biochemistry, genomic organisation, and molecular phylogeny of plastid RNAs and proteins suggest a single origin of all extant plastids – although this theory was still being debated in 2008.

Nitroplasts

A unicellular marine alga, Braarudosphaera bigelowii (a coccolithophore, which is a eukaryote), has been found with a cyanobacterium as an endosymbiont. The cyanobacterium forms a nitrogen-fixing structure, dubbed the nitroplast. It divides evenly when the host cell undergoes mitosis, and many of its proteins derive from the host alga, implying that the endosymbiont has proceeded far along the path towards becoming an organelle. The cyanobacterium is named Candidatus Atelocyanobacterium thalassa, and is abbreviated UCYN-A.
The alga is the first eukaryote known to have the ability to fix nitrogen.
Biology and health sciences
Organelles and other cell parts
null
60491
https://en.wikipedia.org/wiki/Abstraction%20%28computer%20science%29
Abstraction (computer science)
In software engineering and computer science, abstraction is the process of generalizing concrete details, such as attributes, away from the study of objects and systems to focus attention on details of greater importance. Abstraction is a fundamental concept in computer science and software engineering, especially within the object-oriented programming paradigm. Examples of this include: the usage of abstract data types to separate usage from working representations of data within programs; the concept of functions or subroutines which represent a specific way of implementing control flow; and the process of reorganizing common behavior from groups of non-abstract classes into abstract classes using inheritance and sub-classes, as seen in object-oriented programming languages.

Rationale

Computing mostly operates independently of the concrete world. The hardware implements a model of computation that is interchangeable with others. The software is structured in architectures to enable humans to create enormous systems by concentrating on a few issues at a time. These architectures are made of specific choices of abstractions. Greenspun's tenth rule is an aphorism on how such an architecture is both inevitable and complex. Language abstraction is a central form of abstraction in computing: new artificial languages are developed to express specific aspects of a system. Modeling languages help in planning. Computer languages can be processed with a computer. An example of this abstraction process is the generational development of programming languages from the first generation (machine language) to the second generation (assembly language) and the third generation (high-level programming language). Each stage can be used as a stepping stone for the next stage. Language abstraction continues, for example, in scripting languages and domain-specific languages. Within a programming language, some features let the programmer create new abstractions. These include subroutines, modules, polymorphism, and software components. Some other abstractions, such as software design patterns and architectural styles, remain invisible to a translator and operate only in the design of a system. Some abstractions try to limit the range of concepts a programmer needs to be aware of, by completely hiding the abstractions they are built on. The software engineer and writer Joel Spolsky has criticized these efforts by claiming that all abstractions are leaky – that they can never completely hide the details below; however, this does not negate the usefulness of abstraction. Some abstractions are designed to inter-operate with other abstractions – for example, a programming language may contain a foreign function interface for making calls to the lower-level language.

Abstraction features

Programming languages

Different programming languages provide different types of abstraction, depending on the intended applications for the language. For example: In object-oriented programming languages such as C++, Object Pascal, or Java, the concept of abstraction has become a declarative statement – using the syntax function(parameters) = 0; (in C++) or the reserved words (keywords) abstract and interface (in Java). After such a declaration, it is the responsibility of the programmer to implement a class to instantiate the object of the declaration.
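A minimal Java sketch of this declare-then-implement pattern (the class and method names here are invented for illustration, not taken from any standard library):

    // An abstract class declares behaviour without implementing it.
    abstract class Shape {
        abstract double area(); // declared here, implemented by subclasses

        void describe() {
            System.out.println("Area: " + area());
        }
    }

    // The programmer must supply a concrete class before objects can exist.
    class Circle extends Shape {
        private final double radius;

        Circle(double radius) { this.radius = radius; }

        @Override
        double area() { return Math.PI * radius * radius; }
    }

Calling new Shape() would be a compile-time error; only the concrete Circle can be instantiated, which is exactly the division of responsibility described above.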
Functional programming languages commonly exhibit abstractions related to functions, such as lambda abstractions (making a term into a function of some variable) and higher-order functions (whose parameters are functions). Modern members of the Lisp programming language family such as Clojure, Scheme and Common Lisp support macro systems to allow syntactic abstraction. Other programming languages such as Scala also have macros, or very similar metaprogramming features (for example, Haskell has Template Haskell, and OCaml has MetaOCaml). These can allow programs to omit boilerplate code, abstract away tedious function call sequences, implement new control flow structures, and implement domain-specific languages (DSLs), which allow domain-specific concepts to be expressed in concise and elegant ways. All of these, when used correctly, improve both the programmer's efficiency and the clarity of source code by making the intended purpose more explicit. A consequence of syntactic abstraction is also that any Lisp dialect, and almost any programming language, can in principle be implemented in any modern Lisp with significantly reduced (but still non-trivial in most cases) effort when compared to "more traditional" programming languages such as Python, C or Java.

Specification methods

Analysts have developed various methods to formally specify software systems. Some known methods include: abstract-model based methods (VDM, Z); algebraic techniques (Larch, CLEAR, OBJ, ACT ONE, CASL); process-based techniques (LOTOS, SDL, Estelle); trace-based techniques (SPECIAL, TAM); and knowledge-based techniques (Refine, Gist).

Specification languages

Specification languages generally rely on abstractions of one kind or another, since specifications are typically defined earlier in a project (and at a more abstract level) than an eventual implementation. The Unified Modeling Language (UML) specification language, for example, allows the definition of abstract classes, which in a waterfall project remain abstract during the architecture and specification phase of the project.

Control abstraction

Programming languages offer control abstraction as one of the main purposes of their use. Computer machines understand operations at a very low level, such as moving some bits from one location of memory to another location and producing the sum of two sequences of bits. Programming languages allow this to be done at a higher level. For example, consider this statement written in a Pascal-like fashion:

    a := (1 + 2) * 5

To a human, this seems a fairly simple and obvious calculation ("one plus two is three, times five is fifteen"). However, the low-level steps necessary to carry out this evaluation, return the value "15", and then assign that value to the variable "a" are actually quite subtle and complex. The values need to be converted to binary representation (often a much more complicated task than one would think) and the calculations decomposed (by the compiler or interpreter) into assembly instructions (again, which are much less intuitive to the programmer: operations such as shifting a binary register left, or adding the binary complement of the contents of one register to another, are simply not how humans think about the abstract arithmetical operations of addition or multiplication).
Finally, assigning the resulting value of "15" to the variable labeled "a", so that "a" can be used later, involves additional 'behind-the-scenes' steps of looking up the variable's label and the resultant location in physical or virtual memory, storing the binary representation of "15" to that memory location, etc. Without control abstraction, a programmer would need to specify all the register/binary-level steps each time they simply wanted to add or multiply a couple of numbers and assign the result to a variable. Such duplication of effort has two serious negative consequences: it forces the programmer to constantly repeat fairly common tasks every time a similar operation is needed; and it forces the programmer to program for the particular hardware and instruction set.

Structured programming

Structured programming involves the splitting of complex program tasks into smaller pieces with clear flow-control and interfaces between components, with a reduction of the potential for complexity and side-effects. In a simple program, this may aim to ensure that loops have single or obvious exit points and (where possible) to have single exit points from functions and procedures. In a larger system, it may involve breaking down complex tasks into many different modules. Consider a system which handles payroll on ships and at shore offices: The uppermost level may feature a menu of typical end-user operations. Within that could be standalone executables or libraries for tasks such as signing on and off employees or printing checks. Within each of those standalone components there could be many different source files, each containing the program code to handle a part of the problem, with only selected interfaces available to other parts of the program. A sign-on program could have source files for each data entry screen and the database interface (which may itself be a standalone third-party library or a statically linked set of library routines). Either the database or the payroll application also has to initiate the process of exchanging data between ship and shore, and that data transfer task will often contain many other components. These layers produce the effect of isolating the implementation details of one component and its assorted internal methods from the others. Object-oriented programming embraces and extends this concept.

Data abstraction

Data abstraction enforces a clear separation between the abstract properties of a data type and the concrete details of its implementation. The abstract properties are those that are visible to client code that makes use of the data type—the interface to the data type—while the concrete implementation is kept entirely private, and indeed can change, for example to incorporate efficiency improvements over time. The idea is that such changes are not supposed to have any impact on client code, since they involve no difference in the abstract behaviour. For example, one could define an abstract data type called lookup table which uniquely associates keys with values, and in which values may be retrieved by specifying their corresponding keys. Such a lookup table may be implemented in various ways: as a hash table, a binary search tree, or even a simple linear list of (key:value) pairs. As far as client code is concerned, the abstract properties of the type are the same in each case. Of course, this all relies on getting the details of the interface right in the first place, since any changes there can have major impacts on client code.
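As a minimal Java sketch of this lookup-table example (the interface and class names are invented for illustration), the interface fixes the abstract properties while the concrete representation stays hidden and swappable:

    // The abstract properties visible to client code: put and get.
    interface LookupTable<K, V> {
        void put(K key, V value);
        V get(K key); // returns null if the key is absent
    }

    // One concrete representation: a simple linear list of (key, value) pairs.
    // It could be replaced by a hash table or a search tree without any
    // change to client code, since clients depend only on the interface.
    class ListLookupTable<K, V> implements LookupTable<K, V> {
        private final java.util.List<java.util.Map.Entry<K, V>> entries =
                new java.util.ArrayList<>();

        public void put(K key, V value) {
            entries.removeIf(e -> e.getKey().equals(key)); // keep keys unique
            entries.add(java.util.Map.entry(key, value));
        }

        public V get(K key) {
            for (var e : entries) {
                if (e.getKey().equals(key)) return e.getValue();
            }
            return null;
        }
    }

Client code written against LookupTable cannot tell which representation it is using, which is precisely the separation described above.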
As one way to look at this: the interface forms a contract on agreed behaviour between the data type and client code; anything not spelled out in the contract is subject to change without notice.

Manual data abstraction

While much of data abstraction occurs through computer science and automation, there are times when this process is done manually and without programming intervention. One way this can be understood is through data abstraction within the process of conducting a systematic review of the literature. In this methodology, data is abstracted by one or several abstractors when conducting a meta-analysis, with errors reduced through dual data abstraction followed by independent checking, known as adjudication.

Abstraction in object-oriented programming

In object-oriented programming theory, abstraction involves the facility to define objects that represent abstract "actors" that can perform work, report on and change their state, and "communicate" with other objects in the system. The term encapsulation refers to the hiding of state details, but extending the concept of data type from earlier programming languages to associate behavior most strongly with the data, and standardizing the way that different data types interact, is the beginning of abstraction. When abstraction proceeds into the operations defined, enabling objects of different types to be substituted, it is called polymorphism. When it proceeds in the opposite direction, inside the types or classes, structuring them to simplify a complex set of relationships, it is called delegation or inheritance. Various object-oriented programming languages offer similar facilities for abstraction, all to support a general strategy of polymorphism in object-oriented programming, which includes the substitution of one data type for another in the same or similar role. Although not as generally supported, a configuration or image or package may predetermine a great many of these bindings at compile time, link time, or load time. This would leave only a minimum of such bindings to change at run-time. Common Lisp Object System or Self, for example, feature less of a class-instance distinction and more use of delegation for polymorphism. Individual objects and functions are abstracted more flexibly to better fit with a shared functional heritage from Lisp. C++ exemplifies another extreme: it relies heavily on templates and overloading and other static bindings at compile-time, which in turn has certain flexibility problems. Although these examples offer alternate strategies for achieving the same abstraction, they do not fundamentally alter the need to support abstract nouns in code – all programming relies on an ability to abstract verbs as functions, nouns as data structures, and either as processes. Consider for example a sample Java fragment to represent some common farm "animals" to a level of abstraction suitable to model simple aspects of their hunger and feeding.
It defines an Animal class to represent both the state of the animal and its functions:

    public class Animal extends LivingThing {
        private Location loc;
        private double energyReserves;

        public boolean isHungry() {
            return energyReserves < 2.5;
        }

        public void eat(Food food) {
            // Consume food
            energyReserves += food.getCalories();
        }

        public void moveTo(Location location) {
            // Move to new location
            this.loc = location;
        }
    }

With the above definition, one could create objects of type Animal and call their methods like this:

    Animal thePig = new Animal();
    Animal theCow = new Animal();
    if (thePig.isHungry()) {
        thePig.eat(tableScraps);
    }
    if (theCow.isHungry()) {
        theCow.eat(grass);
    }
    theCow.moveTo(theBarn);

In the above example, the class Animal is an abstraction used in place of an actual animal, and LivingThing is a further abstraction (in this case a generalisation) of Animal. If one requires a more differentiated hierarchy of animals – to differentiate, say, those who provide milk from those who provide nothing except meat at the end of their lives – that is an intermediary level of abstraction, probably DairyAnimal (cows, goats) who would eat foods suitable to giving good milk, and MeatAnimal (pigs, steers) who would eat foods to give the best meat quality. Such an abstraction could remove the need for the application coder to specify the type of food, so they could concentrate instead on the feeding schedule. The two classes could be related using inheritance or stand alone, and the programmer could define varying degrees of polymorphism between the two types. These facilities tend to vary drastically between languages, but in general each can achieve anything that is possible with any of the others. A great many operation overloads, data type by data type, can have the same effect at compile-time as any degree of inheritance or other means to achieve polymorphism. The class notation is simply a coder's convenience.

Object-oriented design

Decisions regarding what to abstract and what to keep under the control of the coder become the major concern of object-oriented design and domain analysis—actually determining the relevant relationships in the real world is the concern of object-oriented analysis or legacy analysis. In general, to determine appropriate abstraction, one must make many small decisions about scope (domain analysis), determine what other systems one must cooperate with (legacy analysis), then perform a detailed object-oriented analysis which is expressed within project time and budget constraints as an object-oriented design. In our simple example, the domain is the barnyard, the live pigs and cows and their eating habits are the legacy constraints, the detailed analysis is that coders must have the flexibility to feed the animals what is available and thus there is no reason to code the type of food into the class itself, and the design is a single simple Animal class of which pigs and cows are instances with the same functions. A decision to differentiate DairyAnimal would change the detailed analysis but the domain and legacy analysis would be unchanged—thus it is entirely under the control of the programmer, and it is called an abstraction in object-oriented programming as distinct from abstraction in domain or legacy analysis.

Considerations

When discussing formal semantics of programming languages, formal methods or abstract interpretation, abstraction refers to the act of considering a less detailed, but safe, definition of the observed program behaviors.
For instance, one may observe only the final result of program executions instead of considering all the intermediate steps of executions. Abstraction is defined with respect to a concrete (more precise) model of execution. Abstraction may be exact or faithful with respect to a property if one can answer a question about the property equally well on the concrete or abstract model. For instance, if one wishes to know what the result of evaluating a mathematical expression involving only integers and the operations +, -, and × is worth modulo n, then one need only perform all operations modulo n (a familiar form of this abstraction is casting out nines). Abstractions, however, though not necessarily exact, should be sound. That is, it should be possible to get sound answers from them—even though the abstraction may simply yield a result of undecidability. For instance, students in a class may be abstracted by their minimal and maximal ages; if one asks whether a certain person belongs to that class, one may simply compare that person's age with the minimal and maximal ages; if the age lies outside the range, one may safely answer that the person does not belong to the class; otherwise, one may only answer "I don't know". The level of abstraction included in a programming language can influence its overall usability. The Cognitive dimensions framework includes the concept of abstraction gradient in a formalism. This framework allows the designer of a programming language to study the trade-offs between abstraction and other characteristics of the design, and how changes in abstraction influence the language usability. Abstractions can prove useful when dealing with computer programs, because non-trivial properties of computer programs are essentially undecidable (see Rice's theorem). As a consequence, automatic methods for deriving information on the behavior of computer programs either have to drop termination (on some occasions, they may fail, crash, or never yield a result), soundness (they may provide false information), or precision (they may answer "I don't know" to some questions). Abstraction is the core concept of abstract interpretation. Model checking generally takes place on abstract versions of the studied systems.

Levels of abstraction

Computer science commonly presents levels (or, less commonly, layers) of abstraction, wherein each level represents a different model of the same information and processes, but with varying amounts of detail. Each level uses a system of expression involving a unique set of objects and compositions that apply only to a particular domain. Each relatively abstract, "higher" level builds on a relatively concrete, "lower" level, which tends to provide an increasingly "granular" representation. For example, gates build on electronic circuits, binary on gates, machine language on binary, programming languages on machine language, and applications and operating systems on programming languages. Each level is embodied, but not determined, by the level beneath it, making it a language of description that is somewhat self-contained.

Database systems

Since many users of database systems lack in-depth familiarity with computer data-structures, database developers often hide complexity through the following levels:

Physical level – The lowest level of abstraction describes how a system actually stores data. The physical level describes complex low-level data structures in detail.
Logical level – The next higher level of abstraction describes what data the database stores, and what relationships exist among those data. The logical level thus describes an entire database in terms of a small number of relatively simple structures. Although implementation of the simple structures at the logical level may involve complex physical-level structures, the user of the logical level does not need to be aware of this complexity. This is referred to as physical data independence. Database administrators, who must decide what information to keep in a database, use the logical level of abstraction.

View level – The highest level of abstraction describes only part of the entire database. Even though the logical level uses simpler structures, complexity remains because of the variety of information stored in a large database. Many users of a database system do not need all this information; instead, they need to access only a part of the database. The view level of abstraction exists to simplify their interaction with the system. The system may provide many views for the same database.

Layered architecture

The ability to provide a design at different levels of abstraction can: simplify the design considerably; enable different role players to work effectively at various levels of abstraction; and support the portability of software artifacts (ideally model-based). Systems design and business process design can both use this. Some design processes specifically generate designs that contain various levels of abstraction. Layered architecture partitions the concerns of the application into stacked groups (layers). It is a technique used in designing computer software, hardware, and communications in which system or network components are isolated in layers so that changes can be made in one layer without affecting the others.
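As a minimal Java sketch of a layered design (all names here are invented for illustration): the upper layer depends only on an interface to the layer beneath it, so the storage layer can change without touching the payroll logic.

    // Lower layer: how employee data is stored, behind an interface.
    interface EmployeeStore {
        String nameOf(int employeeId);
    }

    // One concrete lower layer; a database-backed version could replace
    // it without any change to the layer above.
    class InMemoryEmployeeStore implements EmployeeStore {
        private final java.util.Map<Integer, String> names = new java.util.HashMap<>();

        InMemoryEmployeeStore() {
            names.put(1, "Ada");
        }

        public String nameOf(int employeeId) {
            return names.getOrDefault(employeeId, "unknown");
        }
    }

    // Upper layer: written purely against the EmployeeStore abstraction.
    class PayrollService {
        private final EmployeeStore store;

        PayrollService(EmployeeStore store) {
            this.store = store;
        }

        void printCheck(int employeeId) {
            System.out.println("Pay to the order of: " + store.nameOf(employeeId));
        }
    }

Swapping InMemoryEmployeeStore for another implementation changes only the line where the store is constructed, which is the isolation property the definition above describes.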
Technology
Software development: General
null
60492
https://en.wikipedia.org/wiki/Abstract%20machine
Abstract machine
In computer science, an abstract machine is a theoretical model that allows for a detailed and precise analysis of how a computer system functions. It is similar to a mathematical function in that it receives inputs and produces outputs based on predefined rules. Abstract machines differ from literal machines in that they are expected to perform correctly and independently of hardware. Abstract machines are "machines" because they allow step-by-step execution of programs; they are "abstract" because they ignore many aspects of actual (hardware) machines. A typical abstract machine consists of a definition in terms of input, output, and the set of allowable operations used to turn the former into the latter. They can be used for purely theoretical reasons as well as models for real-world computer systems. In the theory of computation, abstract machines are often used in thought experiments regarding computability or to analyse the complexity of algorithms. This use of abstract machines is fundamental to the field of computational complexity theory, which studies machines such as finite-state machines, Mealy machines, push-down automata, and Turing machines. Classification Abstract machines are typically categorized into two types based on the quantity of operations they can execute simultaneously at any given moment: deterministic abstract machines and non-deterministic abstract machines. A deterministic abstract machine is a system in which a particular beginning state or condition always yields the same outputs. There is no randomness or variation in how inputs are transformed into outputs. In contrast, a non-deterministic abstract machine can provide various outputs for the same input on different executions. Unlike a deterministic algorithm, which gives the same result for the same input regardless of the number of iterations, a non-deterministic algorithm may take different paths and arrive at different outputs. Non-deterministic algorithms are helpful for obtaining approximate answers when deriving a precise solution using a deterministic approach is difficult or costly. Turing machines, for example, are some of the most fundamental abstract machines in computer science. These machines conduct operations on a tape (a string of symbols) of any length. Their instructions provide both for modifying the symbols and for moving the machine's pointer along the tape. For example, a rudimentary Turing machine could have a single command, "convert symbol to 1 then move right", and this machine would only produce a string of 1s. This basic Turing machine is deterministic; however, nondeterministic Turing machines that can execute several actions given the same input may also be built. Implementation A physical implementation (in hardware) of an abstract machine uses some kind of physical device (mechanical or electronic) to execute the instructions of a programming language. An abstract machine, however, can also be implemented in software or firmware at levels between the abstract machine and underlying physical device. Implementation in hardware: The direct implementation of an abstract machine in hardware is a matter of using physical devices such as memory, arithmetic and logic circuits, buses, etc., to implement a physical machine whose machine language coincides with the programming language. Once constructed, such a machine is virtually impossible to change.
A CPU may be thought of as a concrete hardware realisation of an abstract machine, particularly the processor's design. Simulation using software: Implementing an abstract machine with software entails writing programs in a different language to implement the data structures and algorithms needed by the abstract machine. This provides the most flexibility since programs implementing abstract machine constructs can be easily changed. An abstract machine implemented as a software simulation, or for which an interpreter exists, is called a virtual machine. Emulation using firmware: Firmware implementation sits between hardware and software implementation. It consists of microcode simulations of data structures and algorithms for abstract machines. Microcode allows a computer programmer to write machine instructions without needing to fabricate electrical circuitry. Programming language implementation An abstract machine is, intuitively, just an abstraction of the idea of a physical computer. For actual execution, algorithms must be properly formalised using the constructs offered by a programming language. This implies that the algorithms to be executed must be expressed using programming language instructions. The syntax of a programming language enables the construction of programs using a finite set of constructs known as instructions. Most abstract machines share a program store and a state, which often includes a stack and registers. In digital computers, the stack is simply a memory unit with an address register that can count only positive integers (after an initial value is loaded into it). The address register for the stack is known as a stack pointer because its value always refers to the top item on the stack. The program consists of a series of instructions, with a program counter indicating the next instruction to be performed; when an instruction completes, the program counter advances. This fundamental control mechanism of an abstract machine is also known as its execution loop. Thus, an abstract machine for a programming language is any collection of data structures and algorithms capable of storing and running programs written in the programming language. It bridges the gap between the high level of a programming language and the low level of an actual machine by providing an intermediate language step for compilation. An abstract machine's instructions are adapted to the unique operations necessary to implement operations of a certain source language or set of source languages. Imperative languages In the late 1950s, the Association for Computing Machinery (ACM) and other allied organisations developed many proposals for a Universal Computer Oriented Language (UNCOL), such as Conway's machine. The UNCOL concept was sound, but it was not widely adopted because the generated code performed poorly; even after the development of the Java Virtual Machine in the late 1990s, the performance of such intermediate code has remained an issue in many areas of computing. Algol Object Code (1964), P4-machine (1976), UCSD P-machine (1977), and Forth (1970) are some successful abstract machines of this kind. Object-oriented languages Abstract machines for object-oriented programming languages are often stack-based and have special access instructions for object fields and methods. In these machines, memory management is often implicitly performed by a garbage collector (a memory-recovery feature built into the programming language).
Smalltalk-80 (1980), Self (1989), and Java (1994) are examples of this implementation. String processing languages A string processing language is a computer language that focuses on processing strings rather than numbers. There have been string processing languages in the form of command shells, programming tools, macro processors, and scripting languages for decades. Using a suitable abstract machine has two benefits: increased execution speed and enhanced portability. SNOBOL4 and ML/I are two notable instances of early string processing languages that use an abstract machine to gain machine independence. Functional programming languages The early abstract machines for functional languages, including the SECD machine (1964) and Cardelli's Functional Abstract Machine (1983), defined strict evaluation, also known as eager or call-by-value evaluation, in which function arguments are evaluated before the call and precisely once. Recently, the majority of research has been on lazy (or call-by-need) evaluation, such as the G-machine (1984), Krivine machine (1985), and Three Instruction Machine (1986), in which function arguments are evaluated only if necessary and at most once. One reason is that the efficient implementation of strict evaluation is now well understood, so the need for a dedicated abstract machine has diminished. Logical languages Predicate calculus (first-order logic) is the foundation of logic programming languages. The most well-known logic programming language is Prolog. The rules in Prolog are written in a uniform format known as universally quantified 'Horn clauses'; a computation begins with a goal and attempts to discover a proof of that goal. Most study has focused on the Warren Abstract Machine (WAM, 1983), which has become the de facto standard in Prolog program compilation. It provides special-purpose instructions such as data unification instructions and control flow instructions to support backtracking (a search algorithm). Structure A generic abstract machine is made up of a memory and an interpreter. The memory is used to store data and programs, while the interpreter is the component that executes the instructions included in programs. The interpreter must carry out the operations that are unique to the language it is interpreting. However, given the variety of languages, it is possible to identify categories of operations and an "execution mechanism" shared by all interpreters. The interpreter's operations and accompanying data structures are divided into the following categories: operations for processing primitive data; operations and data structures for controlling the sequence of execution of operations; operations and data structures for controlling data transfers; and operations and data structures for memory management. Processing primitive data An abstract machine must contain operations for manipulating primitive data types such as strings and integers. For example, integers are nearly universally considered a basic data type for both physical abstract machines and the abstract machines used by many programming languages. The machine carries out the arithmetic operations necessary, such as addition and multiplication, within a single time step. Sequence control Operations and structures for "sequence control" allow controlling the execution flow of program instructions. When certain conditions are met, it is necessary to change the typical sequential execution of a program.
Therefore, the interpreter employs data structures (such as those used to store the address of the next instruction to execute) that are modified by operations distinct from those used for data manipulation (for example, operations to update the address of the next instruction to execute). Controlling data transfers Data transfer operations are used to control how operands and data are transported from memory to the interpreter and vice versa. These operations deal with the order in which operands are stored to and retrieved from the store. Memory management Memory management is concerned with the operations performed in memory to allocate memory for data and programs. In the abstract machine, data and programs can be held indefinitely, or, in the case of programming languages, memory can be allocated or deallocated using a more complex mechanism. Hierarchies Abstract machine hierarchies are often employed, in which each machine uses the functionality of the level immediately below and adds additional functionality of its own to meet the level immediately above. A hardware computer, constructed with physical electronic devices, sits at the most basic level. Above this level, the abstract microprogrammed machine level may be introduced. The abstract machine supplied by the operating system, which is implemented by a program written in machine language, is located immediately above (or directly above the hardware if the firmware level is absent). The operating system extends the capability of the physical machine by providing higher-level primitives that are not available on the physical machine (for example, primitives that act on files). The abstract machine provided by the operating system forms the host machine on which a high-level programming language is implemented, often via an intermediary machine such as the Java Virtual Machine and its bytecode language. The level given by the abstract machine for the high-level language (for example, Java) is not usually the final level of the hierarchy. At this point, one or more applications that deliver additional services together may be introduced. A "web machine" level, for example, can be added to implement the functionalities necessary to handle Web communications (communications protocols or HTML code presentation). The "Web Service" level is located above this, and it provides the functionalities necessary to make web services communicate, both in terms of interaction protocols and the behaviour of the processes involved. At this level, entirely new languages that specify the behaviour of so-called "business processes" based on Web services may be developed (an example is the Business Process Execution Language). Finally, at the highest level sits a specialised application (for example, e-commerce) with very specific and limited functionality.
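To make this generic structure concrete, here is a minimal sketch in C of a stack-based abstract machine with a program store, an operand stack, a program counter for sequence control, and an interpreter built around an execution loop. The opcodes and names are invented for illustration and are not taken from any real machine.

#include <stdio.h>

/* Illustrative opcodes; any resemblance to a real machine is accidental. */
enum { PUSH, ADD, MUL, PRINT, HALT };

typedef struct {
    int program[64];   /* program store */
    int stack[64];     /* operand stack (data transfers go through it) */
    int sp;            /* stack pointer: index one past the top item */
    int pc;            /* program counter: next instruction to fetch */
} Machine;

/* The execution loop: fetch an instruction, advance the program
 * counter, and dispatch to the operation it names (sequence control). */
void run(Machine *m) {
    for (;;) {
        switch (m->program[m->pc++]) {
        case PUSH:  m->stack[m->sp++] = m->program[m->pc++]; break;
        case ADD:   m->sp--; m->stack[m->sp - 1] += m->stack[m->sp]; break;
        case MUL:   m->sp--; m->stack[m->sp - 1] *= m->stack[m->sp]; break;
        case PRINT: printf("%d\n", m->stack[m->sp - 1]); break;
        case HALT:  return;
        }
    }
}

int main(void) {
    /* Computes (2 + 3) * 4 and prints 20. */
    Machine m = { .program = { PUSH, 2, PUSH, 3, ADD, PUSH, 4, MUL, PRINT, HALT } };
    run(&m);
    return 0;
}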
Mathematics
Discrete mathematics
null
60558
https://en.wikipedia.org/wiki/ARM%20architecture%20family
ARM architecture family
ARM (stylised in lowercase as arm, formerly an acronym for Advanced RISC Machines and originally Acorn RISC Machine) is a family of RISC instruction set architectures (ISAs) for computer processors. Arm Holdings develops the ISAs and licenses them to other companies, who build the physical devices that use the instruction set. It also designs and licenses cores that implement these ISAs. Due to their low costs, low power consumption, and low heat generation, ARM processors are useful for light, portable, battery-powered devices, including smartphones, laptops, and tablet computers, as well as embedded systems. However, ARM processors are also used for desktops and servers, including Fugaku, the world's fastest supercomputer from 2020 to 2022. With over 230 billion ARM chips produced, ARM is the most widely used family of instruction set architectures. There have been several generations of the ARM design. The original ARM1 used a 32-bit internal structure but had a 26-bit address space that limited it to 64 MB of main memory. This limitation was removed in the ARMv3 series, which has a 32-bit address space, and several additional generations up to ARMv7 remained 32-bit. Released in 2011, the ARMv8-A architecture added support for a 64-bit address space and 64-bit arithmetic with its new 32-bit fixed-length instruction set. Arm Holdings has also released a series of additional instruction sets for different roles; the "Thumb" extension adds both 32- and 16-bit instructions for improved code density, while Jazelle added instructions for directly handling Java bytecode. More recent changes include the addition of simultaneous multithreading (SMT) for improved performance or fault tolerance. History BBC Micro Acorn Computers' first widely successful design was the BBC Micro, introduced in December 1981. This was a relatively conventional machine based on the MOS Technology 6502 CPU but ran at roughly double the performance of competing designs like the Apple II due to its use of faster dynamic random-access memory (DRAM). Typical DRAM of the era ran at about 2 MHz; Acorn arranged a deal with Hitachi for a supply of faster 4 MHz parts. Machines of the era generally shared memory between the processor and the framebuffer, which allowed the processor to quickly update the contents of the screen without having to perform separate input/output (I/O). As the timing of the video display is exacting, the video hardware had to have priority access to that memory. Due to a quirk of the 6502's design, the CPU left the memory untouched for half of the time. Thus by running the CPU at 1 MHz, the video system could read data during those down times, taking up the total 2 MHz bandwidth of the RAM. In the BBC Micro, the use of 4 MHz RAM allowed the same technique to be used, but running at twice the speed. This allowed it to outperform any similar machine on the market. Acorn Business Computer 1981 was also the year that the IBM Personal Computer was introduced. Using the recently introduced Intel 8088, a 16-bit CPU compared to the 6502's 8-bit design, it offered higher overall performance. Its introduction changed the desktop computer market radically: what had been largely a hobby and gaming market emerging over the prior five years began to change into a must-have business tool where the earlier 8-bit designs simply could not compete. Even newer 32-bit designs were also coming to market, such as the Motorola 68000 and National Semiconductor NS32016.
Acorn began considering how to compete in this market and produced a new paper design named the Acorn Business Computer. They set themselves the goal of producing a machine with ten times the performance of the BBC Micro, but at the same price. This would outperform and underprice the PC. At the same time, the recent introduction of the Apple Lisa brought the graphical user interface (GUI) concept to a wider audience and suggested the future belonged to machines with a GUI. The Lisa, however, cost $9,995, as it was packed with support chips, large amounts of memory, and a hard disk drive, all very expensive then. The engineers then began studying all of the CPU designs available. Their conclusion about the existing 16-bit designs was that they were a lot more expensive and were still "a bit crap", offering only slightly higher performance than their BBC Micro design. They also almost always demanded a large number of support chips to operate even at that level, which drove up the cost of the computer as a whole. These systems would simply not hit the design goal. They also considered the new 32-bit designs, but these cost even more and had the same issues with support chips. According to Sophie Wilson, all the processors tested at that time performed about the same, with about a 4 Mbit/s bandwidth. Two key events led Acorn down the path to ARM. One was the publication of a series of reports from the University of California, Berkeley, which suggested that a simple chip design could nevertheless have extremely high performance, much higher than the latest 32-bit designs on the market. The second was a visit by Steve Furber and Sophie Wilson to the Western Design Center, a company run by Bill Mensch and his sister, which had become the logical successor to the MOS team and was offering new versions like the WDC 65C02. The Acorn team saw high school students producing chip layouts on Apple II machines, which suggested that anyone could do it. In contrast, a visit to another design firm working on a modern 32-bit CPU revealed a team with over a dozen members who were already on revision H of their design and yet it still contained bugs. This cemented their late 1983 decision to begin their own CPU design, the Acorn RISC Machine. Design concepts The original Berkeley RISC designs were in some sense teaching systems, not designed specifically for outright performance. To the RISC's basic register-heavy and load/store concepts, ARM added a number of the well-received design notes of the 6502. Primary among them was the ability to quickly serve interrupts, which allowed the machines to offer reasonable input/output performance with no added external hardware. To offer interrupts with performance similar to the 6502's, the ARM design limited its physical address space to 64 MB of total addressable space, requiring 26 bits of address. As instructions were 4 bytes (32 bits) long, and required to be aligned on 4-byte boundaries, the lower 2 bits of an instruction address were always zero. This meant the program counter (PC) only needed to be 24 bits, allowing it to be stored along with the eight processor status bits in a single 32-bit register. That meant that upon receiving an interrupt, the entire machine state could be saved in a single operation, whereas had the PC been a full 32-bit value, it would require separate operations to store the PC and the status flags. This decision halved the interrupt overhead.
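A rough illustration of that packing in C, under the usual description of the 26-bit R15 layout (mode bits in bits 0–1, the word-aligned program counter in bits 2–25, and the interrupt-disable and condition flags in bits 26–31); the mask and function names are invented for this sketch:

#include <stdint.h>
#include <stdio.h>

/* One 32-bit word holds the whole visible machine state: since
 * instructions are word-aligned, the 26-bit PC needs only 24 stored
 * bits, leaving 8 bits for mode, interrupt-disable, and condition flags. */
#define MODE_MASK  0x00000003u  /* bits 0-1: processor mode */
#define PC_MASK    0x03FFFFFCu  /* bits 2-25: program counter */
#define FLAGS_MASK 0xFC000000u  /* bits 26-31: F, I, V, C, Z, N */

static uint32_t pack_r15(uint32_t pc, uint32_t mode, uint32_t flags) {
    return (pc & PC_MASK) | (mode & MODE_MASK) | (flags & FLAGS_MASK);
}

int main(void) {
    uint32_t r15 = pack_r15(0x8000, 3, 0x80000000u /* N flag set */);
    /* Saving this one register preserves PC, mode, and flags together. */
    printf("pc=0x%06x flags=0x%02x\n",
           (unsigned)(r15 & PC_MASK), (unsigned)(r15 >> 26));
    return 0;
}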
Another change, and among the most important in terms of practical real-world performance, was the modification of the instruction set to take advantage of page mode DRAM. Recently introduced, page mode allowed subsequent accesses of memory to run twice as fast if they were roughly in the same location, or "page", in the DRAM chip. Berkeley's design did not consider page mode and treated all memory equally. The ARM design added special vector-like memory access instructions, the "S-cycles", that could be used to fill or save multiple registers in a single page using page mode. This doubled memory performance when they could be used, and was especially important for graphics performance. The Berkeley RISC designs used register windows to reduce the number of register saves and restores performed in procedure calls; the ARM design did not adopt this. Wilson developed the instruction set, writing a simulation of the processor in BBC BASIC that ran on a BBC Micro with a second 6502 processor. This convinced Acorn engineers they were on the right track. Wilson approached Acorn's CEO, Hermann Hauser, and requested more resources. Hauser gave his approval and assembled a small team to design the actual processor based on Wilson's ISA. The official Acorn RISC Machine project started in October 1983. ARM1 Acorn chose VLSI Technology as the "silicon partner", as they were a source of ROMs and custom chips for Acorn. Acorn provided the design and VLSI provided the layout and production. The first samples of ARM silicon worked properly when first received and tested on 26 April 1985. Known as ARM1, these versions ran at 6 MHz. The first ARM application was as a second processor for the BBC Micro, where it helped in developing simulation software to finish development of the support chips (VIDC, IOC, MEMC), and sped up the CAD software used in ARM2 development. Wilson subsequently rewrote BBC BASIC in ARM assembly language. The in-depth knowledge gained from designing the instruction set enabled the code to be very dense, making ARM BBC BASIC an extremely good test for any ARM emulator. ARM2 The result of the simulations on the ARM1 boards led to the late 1986 introduction of the ARM2 design running at 8 MHz, and the early 1987 speed-bumped version at 10 to 12 MHz. A significant change in the underlying architecture was the addition of a Booth multiplier, whereas formerly multiplication had to be carried out in software. Further, a new Fast Interrupt reQuest mode, FIQ for short, allowed registers 8 through 14 to be replaced as part of the interrupt itself. This meant FIQ requests did not have to save out their registers, further speeding interrupts. The first use of the ARM2 was the Acorn Archimedes personal computer models A305, A310, and A440 launched in 1987. According to the Dhrystone benchmark, the ARM2 was roughly seven times the performance of a typical 7 MHz 68000-based system like the Amiga or Macintosh SE. It was twice as fast as an Intel 80386 running at 16 MHz, and about the same speed as a multi-processor VAX-11/784 superminicomputer. The only systems that beat it were the Sun SPARC and MIPS R2000 RISC-based workstations. Further, as the CPU was designed for high-speed I/O, it dispensed with many of the support chips seen in these machines; notably, it lacked any dedicated direct memory access (DMA) controller which was often found on workstations. The graphics system was also simplified based on the same set of underlying assumptions about memory and timing. 
The result was a dramatically simplified design, offering performance on par with expensive workstations but at a price point similar to contemporary desktops. The ARM2 featured a 32-bit data bus, 26-bit address space and 27 32-bit registers, of which 16 are accessible at any one time (including the PC). The ARM2 had a transistor count of just 30,000, compared to Motorola's six-year-older 68000 model with around 68,000. Much of this simplicity came from the lack of microcode, which represents about one-quarter to one-third of the 68000's transistors, and the lack of a cache (like most CPUs of the day). This simplicity enabled the ARM2 to have low power consumption and simpler thermal packaging by having fewer powered transistors. Nevertheless, ARM2 offered better performance than the contemporary 1987 IBM PS/2 Model 50, which initially utilised an Intel 80286 offering 1.8 MIPS @ 10 MHz, and, later in 1987, the PS/2 70, whose Intel 386 DX @ 16 MHz offered 2 MIPS. A successor, ARM3, was produced with a 4 KB cache, which further improved performance. The address bus was extended to 32 bits in the ARM6, but program code still had to lie within the first 64 MB of memory in 26-bit compatibility mode, due to the reserved bits for the status flags. Advanced RISC Machines Ltd. – ARM6 In the late 1980s, Apple Computer and VLSI Technology started working with Acorn on newer versions of the ARM core. In 1990, Acorn spun off the design team into a new company named Advanced RISC Machines Ltd., which became ARM Ltd. when its parent company, Arm Holdings plc, floated on the London Stock Exchange and Nasdaq in 1998. The new Apple–ARM work would eventually evolve into the ARM6, first released in early 1992. Apple used the ARM6-based ARM610 as the basis for their Apple Newton PDA. Early licensees In 1994, Acorn used the ARM610 as the main central processing unit (CPU) in their RiscPC computers. DEC licensed the ARMv4 architecture and produced the StrongARM. At 233 MHz, this CPU drew only one watt (newer versions draw far less). This work was later passed to Intel as part of a lawsuit settlement, and Intel took the opportunity to supplement their i960 line with the StrongARM. Intel later developed its own high performance implementation named XScale, which it has since sold to Marvell. Transistor count of the ARM core remained essentially the same throughout these changes; ARM2 had 30,000 transistors, while ARM6 grew only to 35,000. Market share In 2005, about 98% of all mobile phones sold used at least one ARM processor. In 2010, producers of chips based on ARM architectures reported shipments of 6.1 billion ARM-based processors, representing 95% of smartphones, 35% of digital televisions and set-top boxes, and 10% of mobile computers. In 2011, the 32-bit ARM architecture was the most widely used architecture in mobile devices and the most popular 32-bit one in embedded systems. In 2013, 10 billion were produced and "ARM-based chips are found in nearly 60 percent of the world's mobile devices". Licensing Core licence Arm Holdings's primary business is selling IP cores, which licensees use to create microcontrollers (MCUs), CPUs, and systems-on-chips based on those cores. The original design manufacturer combines the ARM core with other parts to produce a complete device, typically one that can be built in existing semiconductor fabrication plants (fabs) at low cost and still deliver substantial performance. The most successful implementation has been the ARM7TDMI with hundreds of millions sold.
Atmel was a precursor design centre for ARM7TDMI-based embedded systems. The ARM architectures used in smartphones, PDAs and other mobile devices range upward from ARMv5. In 2009, some manufacturers introduced netbooks based on ARM architecture CPUs, in direct competition with netbooks based on Intel Atom. Arm Holdings offers a variety of licensing terms, varying in cost and deliverables. Arm Holdings provides to all licensees an integratable hardware description of the ARM core as well as a complete software development toolset (compiler, debugger, software development kit), and the right to sell manufactured silicon containing the ARM CPU. SoC packages integrating ARM's core designs include Nvidia Tegra's first three generations, CSR plc's Quatro family, ST-Ericsson's Nova and NovaThor, Silicon Labs's Precision32 MCU, Texas Instruments's OMAP products, Samsung's Hummingbird and Exynos products, Apple's A4, A5, and A5X, and NXP's i.MX. Fabless licensees, who wish to integrate an ARM core into their own chip design, are usually only interested in acquiring a ready-to-manufacture verified semiconductor intellectual property core. For these customers, Arm Holdings delivers a gate netlist description of the chosen ARM core, along with an abstracted simulation model and test programs to aid design integration and verification. More ambitious customers, including integrated device manufacturers (IDM) and foundry operators, choose to acquire the processor IP in synthesizable RTL (Verilog) form. With the synthesizable RTL, the customer has the ability to perform architectural level optimisations and extensions. This allows the designer to achieve exotic design goals not otherwise possible with an unmodified netlist (high clock speed, very low power consumption, instruction set extensions, etc.). While Arm Holdings does not grant the licensee the right to resell the ARM architecture itself, licensees may freely sell manufactured products such as chip devices, evaluation boards and complete systems. Merchant foundries can be a special case; not only are they allowed to sell finished silicon containing ARM cores, they generally hold the right to re-manufacture ARM cores for other customers. Arm Holdings prices its IP based on perceived value. Lower performing ARM cores typically have lower licence costs than higher performing cores. In implementation terms, a synthesisable core costs more than a hard macro (blackbox) core. Complicating price matters, a merchant foundry that holds an ARM licence, such as Samsung or Fujitsu, can offer fab customers reduced licensing costs. In exchange for acquiring the ARM core through the foundry's in-house design services, the customer can reduce or eliminate payment of ARM's upfront licence fee. Compared to dedicated semiconductor foundries (such as TSMC and UMC) without in-house design services, Fujitsu/Samsung charge two- to three-times more per manufactured wafer. For low to mid volume applications, a design service foundry offers lower overall pricing (through subsidisation of the licence fee). For high volume mass-produced parts, the long term cost reduction achievable through lower wafer pricing reduces the impact of ARM's NRE (non-recurring engineering) costs, making the dedicated foundry a better choice.
Companies that have developed chips with cores designed by Arm include Amazon.com's Annapurna Labs subsidiary, Analog Devices, Apple, AppliedMicro (now: MACOM Technology Solutions), Atmel, Broadcom, Cavium, Cypress Semiconductor, Freescale Semiconductor (now NXP Semiconductors), Huawei, Intel, Maxim Integrated, Nvidia, NXP, Qualcomm, Renesas, Samsung Electronics, ST Microelectronics, Texas Instruments, and Xilinx. Built on ARM Cortex Technology licence In February 2016, ARM announced the Built on ARM Cortex Technology licence, often shortened to Built on Cortex (BoC) licence. This licence allows companies to partner with ARM and make modifications to ARM Cortex designs. These design modifications will not be shared with other companies. These semi-custom core designs also have brand freedom, for example Kryo 280. Companies that are current licensees of Built on ARM Cortex Technology include Qualcomm. Architectural licence Companies can also obtain an ARM architectural licence for designing their own CPU cores using the ARM instruction sets. These cores must comply fully with the ARM architecture. Companies that have designed cores that implement an ARM architecture include Apple, AppliedMicro (now: Ampere Computing), Broadcom, Cavium (now: Marvell), Digital Equipment Corporation, Intel, Nvidia, Qualcomm, Samsung Electronics, Fujitsu, and NUVIA Inc. (acquired by Qualcomm in 2021). ARM Flexible Access On 16 July 2019, ARM announced ARM Flexible Access. ARM Flexible Access provides unlimited access to included ARM intellectual property (IP) for development. Per-product licence fees are required once a customer reaches foundry tapeout or prototyping. 75% of the IP ARM has released over the last two years is included in ARM Flexible Access. As of October 2019: CPUs: Cortex-A5, Cortex-A7, Cortex-A32, Cortex-A34, Cortex-A35, Cortex-A53, Cortex-R5, Cortex-R8, Cortex-R52, Cortex-M0, Cortex-M0+, Cortex-M3, Cortex-M4, Cortex-M7, Cortex-M23, Cortex-M33 GPUs: Mali-G52, Mali-G31. Includes Mali Driver Development Kits (DDK). Interconnect: CoreLink NIC-400, CoreLink NIC-450, CoreLink CCI-400, CoreLink CCI-500, CoreLink CCI-550, ADB-400 AMBA, XHB-400 AXI-AHB System Controllers: CoreLink GIC-400, CoreLink GIC-500, PL192 VIC, BP141 TrustZone Memory Wrapper, CoreLink TZC-400, CoreLink L2C-310, CoreLink MMU-500, BP140 Memory Interface Security IP: CryptoCell-312, CryptoCell-712, TrustZone True Random Number Generator Peripheral Controllers: PL011 UART, PL022 SPI, PL031 RTC Debug & Trace: CoreSight SoC-400, CoreSight SDC-600, CoreSight STM-500, CoreSight System Trace Macrocell, CoreSight Trace Memory Controller Design Kits: Corstone-101, Corstone-201 Physical IP: Artisan PIK for Cortex-M33 TSMC 22ULL including memory compilers, logic libraries, GPIOs and documentation Tools & Materials: Socrates IP Tooling, ARM Design Studio, Virtual System Models Support: Standard ARM Technical support, ARM online training, maintenance updates, credits toward onsite training and design reviews Cores Arm provides a list of vendors who implement ARM cores in their design (application-specific standard products (ASSPs), microprocessors, and microcontrollers). Example applications of ARM cores ARM cores are used in a number of products, particularly PDAs and smartphones. Some computing examples are Microsoft's first generation Surface, Surface 2 and Pocket PC devices (following 2002), Apple's iPads, Asus's Eee Pad Transformer tablet computers, and several Chromebook laptops.
Others include Apple's iPhone smartphones and iPod portable media players, Canon PowerShot digital cameras, the Nintendo Switch hybrid console, the Wii security processor, the 3DS handheld game console, and TomTom turn-by-turn navigation systems. In 2005, Arm took part in the development of Manchester University's computer SpiNNaker, which used ARM cores to simulate the human brain. ARM chips are also used in Raspberry Pi, BeagleBoard, BeagleBone, PandaBoard, and other single-board computers, because they are very small, inexpensive, and consume very little power. 32-bit architecture The 32-bit ARM architecture (ARM32), such as ARMv7-A (implementing AArch32; see section on Armv8-A for more on it), was the most widely used architecture in mobile devices. Since 1995, various versions of the ARM Architecture Reference Manual have been the primary source of documentation on the ARM processor architecture and instruction set, distinguishing interfaces that all ARM processors are required to support (such as instruction semantics) from implementation details that may vary. The architecture has evolved over time, and version seven of the architecture, ARMv7, defines three architecture "profiles": A-profile, the "Application" profile, implemented by 32-bit cores in the Cortex-A series and by some non-ARM cores R-profile, the "Real-time" profile, implemented by cores in the Cortex-R series M-profile, the "Microcontroller" profile, implemented by most cores in the Cortex-M series Although the architecture profiles were first defined for ARMv7, ARM subsequently defined the ARMv6-M architecture (used by the Cortex M0/M0+/M1) as a subset of the ARMv7-M profile with fewer instructions. CPU modes Except in the M-profile, the 32-bit ARM architecture specifies several CPU modes, depending on the implemented architecture features. At any moment in time, the CPU can be in only one mode, but it can switch modes due to external events (interrupts) or programmatically. User mode: The only non-privileged mode. FIQ mode: A privileged mode that is entered whenever the processor accepts a fast interrupt request. IRQ mode: A privileged mode that is entered whenever the processor accepts an interrupt. Supervisor (svc) mode: A privileged mode entered whenever the CPU is reset or when an SVC instruction is executed. Abort mode: A privileged mode that is entered whenever a prefetch abort or data abort exception occurs. Undefined mode: A privileged mode that is entered whenever an undefined instruction exception occurs. System mode (ARMv4 and above): The only privileged mode that is not entered by an exception. It can only be entered by executing an instruction that explicitly writes to the mode bits of the Current Program Status Register (CPSR) from another privileged mode (not from user mode). Monitor mode (ARMv6 and ARMv7 Security Extensions, ARMv8 EL3): A monitor mode is introduced to support the TrustZone extension in ARM cores. Hyp mode (ARMv7 Virtualization Extensions, ARMv8 EL2): A hypervisor mode that supports Popek and Goldberg virtualization requirements for the non-secure operation of the CPU. Thread mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode which can be specified as either privileged or unprivileged. Whether the Main Stack Pointer (MSP) or Process Stack Pointer (PSP) is used can also be specified in the CONTROL register with privileged access. This mode is designed for user tasks in an RTOS environment, but it is typically used in bare-metal applications for a super-loop.
Handler mode (ARMv6-M, ARMv7-M, ARMv8-M): A mode dedicated to exception handling (except reset, which is handled in Thread mode). Handler mode always uses the MSP and runs at privileged level. Instruction set The original (and subsequent) ARM implementation was hardwired without microcode, like the much simpler 8-bit 6502 processor used in prior Acorn microcomputers. The 32-bit ARM architecture (and the 64-bit architecture for the most part) includes the following RISC features: Load–store architecture. No support for unaligned memory accesses in the original version of the architecture. ARMv6 and later, except some microcontroller versions, support unaligned accesses for half-word and single-word load/store instructions with some limitations, such as no guaranteed atomicity. Uniform 16 × 32-bit register file (including the program counter, stack pointer and the link register). Fixed instruction width of 32 bits to ease decoding and pipelining, at the cost of decreased code density. Later, the Thumb instruction set added 16-bit instructions and increased code density. Mostly single clock-cycle execution. To compensate for the simpler design, compared with processors like the Intel 80286 and Motorola 68020, some additional design features were used: Conditional execution of most instructions reduces branch overhead and compensates for the lack of a branch predictor in early chips. Arithmetic instructions alter condition codes only when desired. A 32-bit barrel shifter can be used without performance penalty with most arithmetic instructions and address calculations. Powerful indexed addressing modes. A link register supports fast leaf function calls. A simple but fast 2-priority-level interrupt subsystem with switched register banks. Arithmetic instructions ARM includes integer arithmetic operations for add, subtract, and multiply; some versions of the architecture also support divide operations. ARM supports 32-bit × 32-bit multiplies with either a 32-bit result or 64-bit result, though Cortex-M0 / M0+ / M1 cores do not support 64-bit results. Some ARM cores also support 16-bit × 16-bit and 32-bit × 16-bit multiplies. The divide instructions are only included in the following ARM architectures: Armv7-M and Armv7E-M architectures always include divide instructions. Armv7-R architecture always includes divide instructions in the Thumb instruction set, but optionally in its 32-bit instruction set. Armv7-A architecture optionally includes the divide instructions. The instructions might not be implemented, or implemented only in the Thumb instruction set, or implemented in both the Thumb and ARM instruction sets, or implemented if the Virtualization Extensions are included. Registers Registers R0 through R7 are the same across all CPU modes; they are never banked. Registers R8 through R12 are the same across all CPU modes except FIQ mode. FIQ mode has its own distinct R8 through R12 registers. R13 and R14 are banked across all privileged CPU modes except system mode. That is, each mode that can be entered because of an exception has its own R13 and R14. These registers generally contain the stack pointer and the return address from function calls, respectively. Aliases: R13 is also referred to as SP, the stack pointer. R14 is also referred to as LR, the link register. R15 is also referred to as PC, the program counter. The Current Program Status Register (CPSR) has the following 32 bits. M (bits 0–4) is the processor mode bits. T (bit 5) is the Thumb state bit.
F (bit 6) is the FIQ disable bit. I (bit 7) is the IRQ disable bit. A (bit 8) is the imprecise data abort disable bit. E (bit 9) is the data endianness bit. IT (bits 10–15 and 25–26) is the if-then state bits. GE (bits 16–19) is the greater-than-or-equal-to bits. DNM (bits 20–23) is the do not modify bits. J (bit 24) is the Java state bit. Q (bit 27) is the sticky overflow bit. V (bit 28) is the overflow bit. C (bit 29) is the carry/borrow/extend bit. Z (bit 30) is the zero bit. N (bit 31) is the negative/less than bit. Conditional execution Almost every ARM instruction has a conditional execution feature called predication, which is implemented with a 4-bit condition code selector (the predicate). To allow for unconditional execution, one of the four-bit codes causes the instruction to be always executed. Most other CPU architectures only have condition codes on branch instructions. Though the predicate takes up four of the 32 bits in an instruction code, and thus cuts down significantly on the encoding bits available for displacements in memory access instructions, it avoids branch instructions when generating code for small if statements. Apart from eliminating the branch instructions themselves, this preserves the fetch/decode/execute pipeline at the cost of only one cycle per skipped instruction. An algorithm that provides a good example of conditional execution is the subtraction-based Euclidean algorithm for computing the greatest common divisor. In the C programming language, the algorithm can be written as:

int gcd(int a, int b) {
    while (a != b)  // We enter the loop when a < b or a > b, but not when a == b
        if (a > b)  // When a > b we do this
            a -= b;
        else        // When a < b we do that (no "if (a < b)" needed since a != b is checked in the while condition)
            b -= a;
    return a;
}

The same algorithm can be rewritten in a way closer to target ARM instructions as:

loop:
    // Compare a and b
    GT = a > b;
    LT = a < b;
    NE = a != b;
    // Perform operations based on flag results
    if (GT) a -= b;     // Subtract *only* if greater-than
    if (LT) b -= a;     // Subtract *only* if less-than
    if (NE) goto loop;  // Loop *only* if compared values were not equal
return a;

and coded in assembly language as:

; assign a to register r0, b to r1
loop:   CMP    r0, r1      ; set condition "NE" if (a ≠ b),
                           ; "GT" if (a > b),
                           ; or "LT" if (a < b)
        SUBGT  r0, r0, r1  ; if "GT" (Greater Than), then a = a − b
        SUBLT  r1, r1, r0  ; if "LT" (Less Than), then b = b − a
        BNE    loop        ; if "NE" (Not Equal), then loop
        BX     lr          ; return

which avoids the branches around the then and else clauses. If r0 and r1 are equal then neither of the SUB instructions will be executed, eliminating the need for a conditional branch to implement the while check at the top of the loop, which would otherwise have been needed had, for example, SUBLE (less than or equal) been used. One of the ways that Thumb code provides a more dense encoding is to remove the four-bit selector from non-branch instructions. Other features Another feature of the instruction set is the ability to fold shifts and rotates into the data processing (arithmetic, logical, and register-register move) instructions, so that, for example, the statement in C language:

a += (j << 2);

could be rendered as a one-word, one-cycle instruction:

ADD Ra, Ra, Rj, LSL #2

This results in the typical ARM program being denser than expected with fewer memory accesses; thus the pipeline is used more efficiently.
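For illustration, the CPSR fields listed above can be picked out of a saved 32-bit status value with ordinary masks and shifts; the helper below is our own sketch in C, not part of any ARM toolchain:

#include <stdint.h>
#include <stdio.h>

/* Decode a saved CPSR image using the bit positions listed above:
 * N=31, Z=30, C=29, V=28, T=5, and the mode field M in bits 0-4. */
static void print_cpsr(uint32_t cpsr) {
    printf("N=%u Z=%u C=%u V=%u thumb=%u mode=0x%02x\n",
           (unsigned)((cpsr >> 31) & 1),  /* negative/less-than */
           (unsigned)((cpsr >> 30) & 1),  /* zero */
           (unsigned)((cpsr >> 29) & 1),  /* carry/borrow/extend */
           (unsigned)((cpsr >> 28) & 1),  /* overflow */
           (unsigned)((cpsr >> 5) & 1),   /* Thumb state */
           (unsigned)(cpsr & 0x1F));      /* processor mode */
}

int main(void) {
    print_cpsr(0x60000010u);  /* Z and C set, mode 0x10 */
    return 0;
}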
The ARM processor also has features rarely seen in other RISC architectures, such as PC-relative addressing (indeed, on the 32-bit ARM the PC is one of its 16 registers) and pre- and post-increment addressing modes. The ARM instruction set has increased over time. Some early ARM processors (before ARM7TDMI), for example, have no instruction to store a two-byte quantity. Pipelines and other implementation issues The ARM7 and earlier implementations have a three-stage pipeline; the stages being fetch, decode, and execute. Higher-performance designs, such as the ARM9, have deeper pipelines: Cortex-A8 has thirteen stages. Additional implementation changes for higher performance include a faster adder and more extensive branch prediction logic. The difference between the ARM7DI and ARM7DMI cores, for example, was an improved multiplier; hence the added "M". Coprocessors The ARM architecture (pre-Armv8) provides a non-intrusive way of extending the instruction set using "coprocessors" that can be addressed using MCR, MRC, MRRC, MCRR, and similar instructions. The coprocessor space is divided logically into 16 coprocessors with numbers from 0 to 15, coprocessor 15 (cp15) being reserved for some typical control functions like managing the caches and MMU operation on processors that have one. In ARM-based machines, peripheral devices are usually attached to the processor by mapping their physical registers into ARM memory space, into the coprocessor space, or by connecting to another device (a bus) that in turn attaches to the processor. Coprocessor accesses have lower latency, so some peripherals—for example, an XScale interrupt controller—are accessible in both ways: through memory and through coprocessors. In other cases, chip designers only integrate hardware using the coprocessor mechanism. For example, an image processing engine might be a small ARM7TDMI core combined with a coprocessor that has specialised operations to support a specific set of HDTV transcoding primitives. Debugging All modern ARM processors include hardware debugging facilities, allowing software debuggers to perform operations such as halting, stepping, and breakpointing of code starting from reset. These facilities are built using JTAG support, though some newer cores optionally support ARM's own two-wire "SWD" protocol. In ARM7TDMI cores, the "D" represented JTAG debug support, and the "I" represented presence of an "EmbeddedICE" debug module. For ARM7 and ARM9 core generations, EmbeddedICE over JTAG was a de facto debug standard, though not architecturally guaranteed. The ARMv7 architecture defines basic debug facilities at an architectural level. These include breakpoints, watchpoints and instruction execution in a "Debug Mode"; similar facilities were also available with EmbeddedICE. Both "halt mode" and "monitor" mode debugging are supported. The actual transport mechanism used to access the debug facilities is not architecturally specified, but implementations generally include JTAG support. There is a separate ARM "CoreSight" debug architecture, which is not architecturally required by ARMv7 processors. Debug Access Port The Debug Access Port (DAP) is an implementation of an ARM Debug Interface. There are two different supported implementations, the Serial Wire JTAG Debug Port (SWJ-DP) and the Serial Wire Debug Port (SW-DP). 
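As an illustration of the coprocessor access mechanism described above, the following C fragment reads the Main ID Register from cp15 with an MRC instruction, using GCC inline assembly. It assumes a privileged, bare-metal 32-bit ARM context (such registers are not accessible from user mode), and the wrapper name is ours:

#include <stdint.h>

/* Read the cp15 Main ID Register, which encodes the implementer,
 * architecture, part number, and revision of the core. */
static inline uint32_t read_main_id(void)
{
    uint32_t midr;
    __asm__ volatile("mrc p15, 0, %0, c0, c0, 0" : "=r"(midr));
    return midr;
}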
CMSIS-DAP is a standard interface that describes how various debugging software on a host PC can communicate over USB to firmware running on a hardware debugger, which in turn talks over SWD or JTAG to a CoreSight-enabled ARM Cortex CPU. DSP enhancement instructions To improve the ARM architecture for digital signal processing and multimedia applications, DSP instructions were added to the instruction set. These are signified by an "E" in the name of the ARMv5TE and ARMv5TEJ architectures. E-variants also imply T, D, M, and I. The new instructions are common in digital signal processor (DSP) architectures. They include variations on signed multiply–accumulate, saturated add and subtract, and count leading zeros. First introduced in 1999, this extension of the core instruction set contrasted with ARM's earlier DSP coprocessor known as Piccolo, which employed a distinct, incompatible instruction set whose execution involved a separate program counter. Piccolo instructions employed a distinct register file of sixteen 32-bit registers, with some instructions combining registers for use as 48-bit accumulators and other instructions addressing 16-bit half-registers. Some instructions were able to operate on two such 16-bit values in parallel. Communication with the Piccolo register file involved load to Piccolo and store from Piccolo coprocessor instructions via two buffers of eight 32-bit entries. Described as reminiscent of other approaches, notably Hitachi's SH-DSP and Motorola's 68356, Piccolo did not employ dedicated local memory and relied on the bandwidth of the ARM core for DSP operand retrieval, impacting concurrent performance. Piccolo's distinct instruction set also proved not to be a "good compiler target". SIMD extensions for multimedia Introduced in the ARMv6 architecture, this was a precursor to Advanced SIMD, also named Neon. Jazelle Jazelle DBX (Direct Bytecode eXecution) is a technique that allows Java bytecode to be executed directly in the ARM architecture as a third execution state (and instruction set) alongside the existing ARM and Thumb modes. Support for this state is signified by the "J" in the ARMv5TEJ architecture, and in ARM9EJ-S and ARM7EJ-S core names. Support for this state is required starting in ARMv6 (except for the ARMv7-M profile), though newer cores only include a trivial implementation that provides no hardware acceleration. Thumb To improve compiled code density, processors since the ARM7TDMI (released in 1994) have featured the Thumb compressed instruction set, which has its own state. (The "T" in "TDMI" indicates the Thumb feature.) When in this state, the processor executes the Thumb instruction set, a compact 16-bit encoding for a subset of the ARM instruction set. Most of the Thumb instructions are directly mapped to normal ARM instructions. The space saving comes from making some of the instruction operands implicit and limiting the number of possibilities compared to the ARM instructions executed in the ARM instruction set state. In Thumb, the 16-bit opcodes have less functionality. For example, only branches can be conditional, and many opcodes are restricted to accessing only half of all of the CPU's general-purpose registers. The shorter opcodes give improved code density overall, even though some operations require extra instructions.
In situations where the memory port or bus width is constrained to less than 32 bits, the shorter Thumb opcodes allow increased performance compared with 32-bit ARM code, as less program code may need to be loaded into the processor over the constrained memory bandwidth. Unlike processor architectures with variable length (16- or 32-bit) instructions, such as the Cray-1 and Hitachi SuperH, the ARM and Thumb instruction sets exist independently of each other. Embedded hardware, such as the Game Boy Advance, typically has a small amount of RAM accessible with a full 32-bit datapath; the majority is accessed via a 16-bit or narrower secondary datapath. In this situation, it usually makes sense to compile Thumb code and hand-optimise a few of the most CPU-intensive sections using full 32-bit ARM instructions, placing these wider instructions into the 32-bit bus accessible memory. The first processor with a Thumb instruction decoder was the ARM7TDMI. All processors supporting 32-bit instruction sets, starting with ARM9, and including XScale, have included a Thumb instruction decoder. It includes instructions adopted from the Hitachi SuperH (1992), which was licensed by ARM. ARM's smallest processor families (Cortex M0 and M1) implement only the 16-bit Thumb instruction set for maximum performance in lowest cost applications. ARM processors that don't support 32-bit addressing also omit Thumb. Thumb-2 Thumb-2 technology was introduced in the ARM1156 core, announced in 2003. Thumb-2 extends the limited 16-bit instruction set of Thumb with additional 32-bit instructions to give the instruction set more breadth, thus producing a variable-length instruction set. A stated aim for Thumb-2 was to achieve code density similar to Thumb with performance similar to the ARM instruction set on 32-bit memory. Thumb-2 extends the Thumb instruction set with bit-field manipulation, table branches and conditional execution. At the same time, the ARM instruction set was extended to maintain equivalent functionality in both instruction sets. A new "Unified Assembly Language" (UAL) supports generation of either Thumb or ARM instructions from the same source code; versions of Thumb seen on ARMv7 processors are essentially as capable as ARM code (including the ability to write interrupt handlers). This requires a bit of care, and use of a new "IT" (if-then) instruction, which permits up to four successive instructions to execute based on a tested condition, or on its inverse. When compiling into ARM code, this is ignored, but when compiling into Thumb it generates an actual instruction. For example:

; if (r0 == r1)
        CMP    r0, r1
        ITE    EQ        ; ARM: no code ... Thumb: IT instruction
; then r0 = r2;
        MOVEQ  r0, r2    ; ARM: conditional; Thumb: condition via ITE 'T' (then)
; else r0 = r3;
        MOVNE  r0, r3    ; ARM: conditional; Thumb: condition via ITE 'E' (else)
; recall that the Thumb MOV instruction has no bits to encode "EQ" or "NE".

All ARMv7 chips support the Thumb instruction set. All chips in the Cortex-A series that support ARMv7, all Cortex-R series, and all ARM11 series support both "ARM instruction set state" and "Thumb instruction set state", while chips in the Cortex-M series support only the Thumb instruction set. Thumb Execution Environment (ThumbEE) ThumbEE (erroneously called Thumb-2EE in some ARM documentation), which was marketed as Jazelle RCT (Runtime Compilation Target), was announced in 2005 and deprecated in 2011. It first appeared in the Cortex-A8 processor.
ThumbEE is a fourth instruction set state, making small changes to the Thumb-2 extended instruction set. These changes make the instruction set particularly suited to code generated at runtime (e.g. by JIT compilation) in managed Execution Environments. ThumbEE is a target for languages such as Java, C#, Perl, and Python, and allows JIT compilers to output smaller compiled code without reducing performance. New features provided by ThumbEE include automatic null pointer checks on every load and store instruction, an instruction to perform an array bounds check, and special instructions that call a handler. In addition, because it utilises Thumb-2 technology, ThumbEE provides access to registers r8–r15 (where the Jazelle/DBX Java VM state is held). Handlers are small sections of frequently called code, commonly used to implement high level languages, such as allocating memory for a new object. These changes come from repurposing a handful of opcodes, and knowing the core is in the new ThumbEE state. On 23 November 2011, Arm deprecated any use of the ThumbEE instruction set, and Armv8 removes support for ThumbEE. Floating-point (VFP) VFP (Vector Floating Point) technology is a floating-point unit (FPU) coprocessor extension to the ARM architecture (implemented differently in Armv8 – coprocessors not defined there). It provides low-cost single-precision and double-precision floating-point computation fully compliant with the ANSI/IEEE Std 754-1985 Standard for Binary Floating-Point Arithmetic. VFP provides floating-point computation suitable for a wide spectrum of applications such as PDAs, smartphones, voice compression and decompression, three-dimensional graphics and digital audio, printers, set-top boxes, and automotive applications. The VFP architecture was intended to support execution of short "vector mode" instructions but these operated on each vector element sequentially and thus did not offer the performance of true single instruction, multiple data (SIMD) vector parallelism. This vector mode was therefore removed shortly after its introduction, to be replaced with the much more powerful Advanced SIMD, also named Neon. Some devices such as the ARM Cortex-A8 have a cut-down VFPLite module instead of a full VFP module, and require roughly ten times more clock cycles per float operation. Pre-Armv8 architecture implemented floating-point/SIMD with the coprocessor interface. Other floating-point and/or SIMD units found in ARM-based processors using the coprocessor interface include FPA, FPE, iwMMXt, some of which were implemented in software by trapping but could have been implemented in hardware. They provide some of the same functionality as VFP but are not opcode-compatible with it. FPA10 also provides extended precision, but implements correct rounding (required by IEEE 754) only in single precision. VFPv1 Obsolete VFPv2 An optional extension to the ARM instruction set in the ARMv5TE, ARMv5TEJ and ARMv6 architectures. VFPv2 has 16 64-bit FPU registers. VFPv3 or VFPv3-D32 Implemented on most Cortex-A8 and A9 ARMv7 processors. It is backward-compatible with VFPv2, except that it cannot trap floating-point exceptions. VFPv3 has 32 64-bit FPU registers as standard, adds VCVT instructions to convert between scalar, float and double, adds immediate mode to VMOV such that constants can be loaded into FPU registers. VFPv3-D16 As above, but with only 16 64-bit FPU registers. Implemented on Cortex-R4 and R5 processors and the Tegra 2 (Cortex-A9). 
Advanced SIMD (Neon)
The Advanced SIMD extension (also known as Neon or "MPE", Media Processing Engine) is a combined 64- and 128-bit SIMD instruction set that provides standardised acceleration for media and signal processing applications. Neon is included in all Cortex-A8 devices, but is optional in Cortex-A9 devices. Neon can execute MP3 audio decoding on CPUs running at 10 MHz, and can run the GSM adaptive multi-rate (AMR) speech codec at 13 MHz. It features a comprehensive instruction set, separate register files, and independent execution hardware. Neon supports 8-, 16-, 32-, and 64-bit integer and single-precision (32-bit) floating-point data, and SIMD operations for handling audio and video processing as well as graphics and gaming processing. In Neon, the SIMD supports up to 16 operations at the same time. The Neon hardware shares the same floating-point registers as used in VFP. Devices such as the ARM Cortex-A8 and Cortex-A9 support 128-bit vectors but execute only 64 bits at a time, whereas newer Cortex-A15 devices can execute 128 bits at a time.

A quirk of Neon in Armv7 devices is that it flushes all subnormal numbers to zero, and as a result the GCC compiler will not use it unless -funsafe-math-optimizations, which allows losing denormals, is turned on. The "enhanced" Neon defined since Armv8 does not have this quirk, but the same flag is still required to enable Neon instructions on 32-bit targets. On the other hand, GCC does consider Neon safe on AArch64 for Armv8.

ProjectNe10 is ARM's first open-source project (from its inception; while they acquired an older project, now named Mbed TLS). The Ne10 library is a set of common, useful functions written in both Neon and C (for compatibility). The library was created to allow developers to use Neon optimisations without learning Neon, but it also serves as a set of highly optimised Neon intrinsic and assembly code examples for common DSP, arithmetic, and image processing routines. The source code is available on GitHub.
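In the same spirit as the Ne10 examples, here is a brief sketch of Neon used through the standard arm_neon.h intrinsics (compiled, for instance, with -mfpu=neon on an Armv7 toolchain); the function and variable names are illustrative, not from any particular library:

    #include <arm_neon.h>
    #include <stddef.h>

    /* dst[i] += src[i] * scale, four single-precision lanes per iteration. */
    void scale_accumulate(float *dst, const float *src, float scale, size_t n) {
        float32x4_t vscale = vdupq_n_f32(scale);   /* broadcast scale to 4 lanes */
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            float32x4_t d = vld1q_f32(dst + i);    /* load 4 floats */
            float32x4_t s = vld1q_f32(src + i);
            d = vmlaq_f32(d, s, vscale);           /* d += s * vscale, lane-wise */
            vst1q_f32(dst + i, d);                 /* store 4 floats */
        }
        for (; i < n; i++)                         /* scalar tail */
            dst[i] += src[i] * scale;
    }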
ARM Helium technology
Helium is the M-Profile Vector Extension (MVE). It adds more than 150 scalar and vector instructions.

Security extensions
TrustZone (for Cortex-A profile)
The Security Extensions, marketed as TrustZone technology, are found in ARMv6KZ and later application-profile architectures. They provide a low-cost alternative to adding another dedicated security core to an SoC, by providing two virtual processors backed by hardware-based access control. This lets the application core switch between two states, referred to as worlds (to reduce confusion with other names for capability domains), in order to prevent information leaking from the more-trusted world to the less-trusted world. This world switch is generally orthogonal to all other capabilities of the processor; thus each world can operate independently of the other while using the same core. Memory and peripherals are made aware of the operating world of the core and may use this to provide access control to secrets and code on the device.

Typically, a rich operating system is run in the less-trusted world, with smaller security-specialised code in the more-trusted world, aiming to reduce the attack surface. Typical applications include DRM functionality for controlling the use of media on ARM-based devices, and preventing any unapproved use of the device. In practice, since the specific implementation details of proprietary TrustZone implementations have not been publicly disclosed for review, it is unclear what level of assurance is provided for a given threat model, but they are not immune from attack. Open Virtualization is an open-source implementation of the trusted-world architecture for TrustZone.

AMD has licensed and incorporated TrustZone technology into its Secure Processor Technology. AMD's APUs include a Cortex-A5 processor for handling secure processing, which is enabled in some, but not all, products. In fact, the Cortex-A5 TrustZone core had been included in earlier AMD products, but was not enabled due to time constraints. Samsung Knox uses TrustZone for purposes such as detecting modifications to the kernel, storing certificates and attesting keys.

TrustZone for Armv8-M (for Cortex-M profile)
The Security Extension, marketed as TrustZone for Armv8-M technology, was introduced in the Armv8-M architecture. While containing similar concepts to TrustZone for Armv8-A, it has a different architectural design, as world switching is performed using branch instructions instead of exceptions. It also supports safe interleaved interrupt handling from either world regardless of the current security state. Together these features provide low-latency calls to the secure world and responsive interrupt handling. ARM provides a reference stack of secure-world code in the form of Trusted Firmware for M and PSA Certified.
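On Armv8-M, this branch-based world switch is exposed to C through the Cortex-M Security Extensions (CMSE) support found in toolchains such as GCC and Arm Compiler (code built with -mcmse). A minimal, hedged sketch of a secure-world entry point follows; the function name and counter are illustrative only:

    #include <arm_cmse.h>

    /* Callable from the non-secure world. The compiler emits the secure
       gateway (SG) veneer and, on return, clears registers that could
       otherwise leak secure state. */
    int __attribute__((cmse_nonsecure_entry)) secure_increment(int delta) {
        static int counter;     /* resides in secure memory */
        counter += delta;
        return counter;
    }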
No-execute page protection
As of ARMv6, the ARM architecture supports no-execute page protection, which is referred to as XN, for eXecute Never.

Large Physical Address Extension (LPAE)
The Large Physical Address Extension (LPAE), which extends the physical address size from 32 bits to 40 bits, was added to the Armv7-A architecture in 2011. The physical address size may be even larger in processors based on the 64-bit (Armv8-A) architecture; for example, it is 44 bits in the Cortex-A75 and Cortex-A65AE.

Armv8-R and Armv8-M
The Armv8-R and Armv8-M architectures, announced after the Armv8-A architecture, share some features with Armv8-A. However, Armv8-M does not include any 64-bit AArch64 instructions, and Armv8-R originally did not include any AArch64 instructions; those instructions were added to Armv8-R later.

Armv8.1-M
The Armv8.1-M architecture, announced in February 2019, is an enhancement of the Armv8-M architecture. It brings new features including:
A new vector instruction set extension: the M-Profile Vector Extension (MVE), or Helium, for signal processing and machine learning applications.
Additional instruction set enhancements for loops and branches (Low Overhead Branch Extension).
Instructions for half-precision floating-point support.
Instruction set enhancement for TrustZone management of the Floating Point Unit (FPU).
A new memory attribute in the Memory Protection Unit (MPU).
Enhancements in debug, including the Performance Monitoring Unit (PMU), the Unprivileged Debug Extension, and additional debug support focused on signal-processing application development.
Reliability, Availability and Serviceability (RAS) extension.

64/32-bit architecture
Armv8
Armv8-A
Announced in October 2011, Armv8-A (often called simply ARMv8, although the Armv8-R profile also exists) represents a fundamental change to the ARM architecture. It supports two execution states: a 64-bit state named AArch64 and a 32-bit state named AArch32. In the AArch64 state, a new 64-bit A64 instruction set is supported; in the AArch32 state, two instruction sets are supported: the original 32-bit instruction set, named A32, and the 32-bit Thumb-2 instruction set, named T32. AArch32 provides user-space compatibility with Armv7-A. The processor state can change on an exception-level change; this allows 32-bit applications to be executed in AArch32 state under a 64-bit OS whose kernel executes in AArch64 state, and allows a 32-bit OS to run in AArch32 state under the control of a 64-bit hypervisor running in AArch64 state. ARM announced their Cortex-A53 and Cortex-A57 cores on 30 October 2012. Apple was the first to release an Armv8-A compatible core in a consumer product (the Apple A7 in the iPhone 5S). AppliedMicro, using an FPGA, was the first to demo Armv8-A. The first Armv8-A SoC from Samsung is the Exynos 5433 used in the Galaxy Note 4, which features two clusters of four Cortex-A57 and Cortex-A53 cores in a big.LITTLE configuration, but it runs only in AArch32 mode.

For both AArch32 and AArch64, Armv8-A makes VFPv3/v4 and Advanced SIMD (Neon) standard. It also adds cryptography instructions supporting AES, SHA-1/SHA-256 and finite field arithmetic. AArch64 was introduced in Armv8-A and its subsequent revisions. AArch64 is not included in the 32-bit Armv8-R and Armv8-M architectures. An Armv8-A processor can support one or both of AArch32 and AArch64; it may support AArch32 and AArch64 at lower exception levels and only AArch64 at higher exception levels. For example, the ARM Cortex-A32 supports only AArch32, the ARM Cortex-A34 supports only AArch64, and the ARM Cortex-A72 supports both AArch64 and AArch32. An Armv9-A processor must support AArch64 at all exception levels, and may support AArch32 at EL0.

Armv8-R
Optional AArch64 support was added to the Armv8-R profile, with the first ARM core implementing it being the Cortex-R82. It adds the A64 instruction set.

Armv9
Armv9-A
Announced in March 2021, the updated architecture places a focus on secure execution and compartmentalisation.
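Because the Armv8-A cryptography extensions mentioned above are optional, portable software usually probes for them at runtime. On AArch64 Linux this can be done through the auxiliary vector, as in the following sketch (the available HWCAP_* macros vary with kernel headers, hence the #ifdef guards):

    #include <stdio.h>
    #include <sys/auxv.h>     /* getauxval */
    #include <asm/hwcap.h>    /* HWCAP_* bits on AArch64 Linux */

    int main(void) {
        unsigned long caps = getauxval(AT_HWCAP);
    #ifdef HWCAP_AES
        printf("AES instructions:     %s\n", (caps & HWCAP_AES)  ? "yes" : "no");
    #endif
    #ifdef HWCAP_SHA2
        printf("SHA-256 instructions: %s\n", (caps & HWCAP_SHA2) ? "yes" : "no");
    #endif
        return 0;
    }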
Arm SystemReady
Arm SystemReady is a compliance program that helps ensure the interoperability of operating systems on Arm-based hardware, from datacenter servers to industrial edge and IoT devices. The key building blocks of the program are specifications for the minimum hardware and firmware requirements that operating systems and hypervisors can rely upon. These specifications are:
Base System Architecture (BSA) and the market-segment-specific supplements (e.g., the Server BSA supplement)
Base Boot Requirements (BBR) and Base Boot Security Requirements (BBSR)
These specifications are co-developed by Arm and its partners in the System Architecture Advisory Committee (SystemArchAC). The Architecture Compliance Suite (ACS) is the set of test tools that helps to check compliance with these specifications. The Arm SystemReady Requirements Specification documents the requirements of the certifications.

This program was introduced by Arm in 2020 at the first DevSummit event. Its predecessor, Arm ServerReady, was introduced in 2018 at the Arm TechCon event. The program currently includes two bands:
SystemReady Band: this band focuses on operating-system interoperability for Advanced Configuration and Power Interface (ACPI) environments, where generic operating systems can be installed on either new or old hardware without modification. It is relevant for systems using Windows, Linux, VMware, and BSD environments.
SystemReady Devicetree Band: this band optimizes install and boot for embedded systems where devicetree is the preferred method of describing hardware, with a focus on forward compatibility. It applies specifically to Linux distributions and BSD environments.

PSA Certified
PSA Certified, formerly named Platform Security Architecture, is an architecture-agnostic security framework and evaluation scheme. It is intended to help secure Internet of things (IoT) devices built on system-on-a-chip (SoC) processors. It was introduced to increase security where a full trusted execution environment is too large or complex. The architecture was introduced by Arm in 2017 at the annual TechCon event. Although the scheme is architecture-agnostic, it was first implemented on Arm Cortex-M processor cores intended for microcontroller use. PSA Certified includes freely available threat models and security analyses that demonstrate the process for deciding on security features in common IoT products. It also provides freely downloadable application programming interface (API) packages, architectural specifications, open-source firmware implementations, and related test suites.

Following the development of the architecture security framework in 2017, the PSA Certified assurance scheme launched two years later, at Embedded World in 2019. PSA Certified offers a multi-level security evaluation scheme for chip vendors, OS providers and IoT device makers. The Embedded World presentation introduced chip vendors to Level 1 certification, and a draft of Level 2 protection was presented at the same time. Level 2 certification became a usable standard in February 2020. The certification was created by the PSA Joint Stakeholders to enable a security-by-design approach for a diverse set of IoT products. PSA Certified specifications are implementation- and architecture-agnostic; as a result, they can be applied to any chip, software or device. The certification also reduces industry fragmentation for IoT product manufacturers and developers.

Operating system support
32-bit operating systems
Historical operating systems
The first 32-bit ARM-based personal computer, the Acorn Archimedes, was originally intended to run an ambitious operating system called ARX. The machines shipped with RISC OS, which was also used on later ARM-based systems from Acorn and other vendors. Some early Acorn machines were also able to run a Unix port called RISC iX. (Neither is to be confused with RISC/os, a contemporary Unix variant for the MIPS architecture.)
Embedded operating systems
The 32-bit ARM architecture is supported by a large number of embedded and real-time operating systems, including A2, Android, ChibiOS/RT, Deos, DRYOS, eCos, embOS, FreeBSD, FreeRTOS, INTEGRITY, Linux, Micro-Controller Operating Systems, Mbed, MINIX 3, MQX, Nucleus PLUS, NuttX, OKL4, Operating System Embedded (OSE), OS-9, Pharos, Plan 9, PikeOS, QNX, RIOT, RTEMS, RTXC Quadros, SCIOPTA, ThreadX, TizenRT, T-Kernel, VxWorks, Windows Embedded Compact, Windows 10 IoT Core, and Zephyr.

Mobile device operating systems
The 32-bit ARM architecture was formerly the primary hardware environment for most mobile device operating systems, although many of these platforms, such as Android and Apple iOS, have since moved to the 64-bit ARM architecture. Systems that have run on 32-bit ARM include Android, ChromeOS, Mobian, Sailfish, postmarketOS, Tizen, Ubuntu Touch, and webOS. Formerly supported, but now discontinued: Bada, BlackBerry OS/BlackBerry 10, Firefox OS, MeeGo, Newton OS, iOS 10 and earlier, Symbian, Windows 10 Mobile, Windows RT, Windows Phone, and Windows Mobile.

Desktop and server operating systems
The 32-bit ARM architecture is supported by RISC OS and by multiple Unix-like operating systems, including FreeBSD, NetBSD, OpenBSD, OpenSolaris, and several Linux distributions, such as Debian, Armbian, Gentoo, Ubuntu, Raspberry Pi OS (formerly Raspbian), and Slackware.

64-bit operating systems
Embedded operating systems
Embedded operating systems with 64-bit ARM support include INTEGRITY, OSE, SCIOPTA, seL4, Pharos, FreeRTOS, QNX, VxWorks, and Zephyr.

Mobile device operating systems
Android supports Armv8-A in Android Lollipop (5.0) and later. iOS supports Armv8-A in iOS 7 and later on 64-bit Apple SoCs; iOS 11 and later, and iPadOS, support only 64-bit ARM processors and applications. HarmonyOS NEXT was developed specifically for ARM processors from its launch in 2024. Other 64-bit ARM mobile systems include Mobian, postmarketOS, Arch Linux ARM, and Manjaro.

Desktop and server operating systems
Support for Armv8-A was merged into Linux kernel version 3.7 in late 2012. Armv8-A is supported by a number of Linux distributions, such as Debian, Armbian, Alpine Linux, Ubuntu, Fedora, openSUSE, SUSE Linux Enterprise, RHEL, and Raspberry Pi OS (formerly Raspbian). Support for Armv8-A was merged into FreeBSD in late 2014. OpenBSD also supports Armv8. NetBSD has had Armv8 support since early 2018. Windows 10 runs 32-bit x86 and 32-bit ARM applications, as well as native ARM64 desktop apps; Windows 11 runs native ARM64 apps and can also run x86 and x86-64 apps via emulation. Support for 64-bit ARM apps in the Microsoft Store has been available since November 2018. macOS has supported ARM since late 2020; the first release to support it was macOS Big Sur. Rosetta 2 adds support for x86-64 applications, but not virtualization of x86-64 computer platforms.

Porting to 32- or 64-bit ARM operating systems
Windows applications recompiled for ARM and linked with Winelib, from the Wine project, can run on 32-bit or 64-bit ARM in Linux, FreeBSD, or other compatible operating systems. x86 binaries, i.e. those not specially compiled for ARM, have been demonstrated on ARM using QEMU with Wine (on Linux and elsewhere), but they do not run at full speed or with the same capability as with Winelib.
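As a concrete (and hedged) illustration of the cross-compilation route, a trivial C program can be built with a Debian-style armhf toolchain and executed on a non-ARM host under QEMU's user-mode emulator; the toolchain name and library path in the comments are typical Debian locations and may differ on other systems:

    /* hello.c - sanity check for an ARM user-space toolchain.
     *
     * Hypothetical build and run (adjust paths to your system):
     *   arm-linux-gnueabihf-gcc -o hello hello.c
     *   qemu-arm -L /usr/arm-linux-gnueabihf ./hello
     */
    #include <stdio.h>

    int main(void) {
        puts("hello from 32-bit ARM user space");
        return 0;
    }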
Tetrapod
A tetrapod (from Ancient Greek τετρα- (tetra-) 'four' and πούς (poús) 'foot') is any four-limbed vertebrate animal of the superclass Tetrapoda. Tetrapods include all extant and extinct amphibians and amniotes, with the latter in turn evolving into two major clades, the sauropsids (reptiles, including dinosaurs and therefore birds) and synapsids (extinct pelycosaurs, therapsids and all extant mammals, including humans). Hox gene mutations have resulted in some tetrapods becoming limbless (snakes, legless lizards, and caecilians) or two-limbed (cetaceans, moas, and some lizards). Nevertheless, these limbless groups still qualify as tetrapods through their ancestry, and some retain a pair of vestigial spurs that are remnants of the hindlimbs.

Tetrapods evolved from a group of primitive semiaquatic animals known as the Tetrapodomorpha which, in turn, evolved from ancient lobe-finned fish (sarcopterygians) in the Middle Devonian period. Tetrapodomorphs were transitional between lobe-finned fishes and true four-limbed tetrapods, though most still fit the body plan expected of other lobe-finned fishes. The oldest fossils of four-limbed vertebrates (tetrapods in the broad sense of the word) are trackways from the Middle Devonian, and body fossils became common near the end of the Late Devonian, around 370–360 million years ago. These Devonian species all belonged to the tetrapod stem group, meaning that they were not directly related to any modern tetrapod group. Broad anatomical descriptors like "tetrapod" and "amphibian" can approximate some members of the stem group, but a few paleontologists opt for more specific terms such as Stegocephali. Limbs evolved prior to terrestrial locomotion, but by the start of the Carboniferous Period, 360 million years ago, a few stem-tetrapods were experimenting with a semiaquatic lifestyle to exploit food and shelter on land. The first crown-tetrapods (those descended from the last common ancestors of extant tetrapods) appeared by the Visean age of the Early Carboniferous.

The specific aquatic ancestors of the tetrapods, and the process by which they colonized Earth's land after emerging from water, remain unclear. The transition from a body plan for gill-based aquatic respiration and tail-propelled aquatic locomotion to one that enables the animal to survive out of water and move around on land is one of the most profound evolutionary changes known. Tetrapods have numerous anatomical and physiological features that are distinct from their aquatic fish ancestors. These include distinct head and neck structures for feeding and movement, appendicular skeletons (shoulder and pelvic girdles in particular) for weight bearing and locomotion, more versatile eyes for seeing, middle ears for hearing, and more efficient heart and lungs for oxygen circulation and exchange outside water.

Stem-tetrapods and "fish-a-pods" were primarily aquatic. Modern amphibians, which evolved from earlier groups, are generally semiaquatic; the first stages of their lives are as waterborne eggs and fish-like larvae known as tadpoles, which later undergo metamorphosis to grow limbs and become partly terrestrial and partly aquatic. However, most tetrapod species today are amniotes, most of which are terrestrial tetrapods whose branch evolved from earlier tetrapods early in the Late Carboniferous. The key innovation in amniotes over amphibians is the amnion, which enables the eggs to retain their aqueous contents on land, rather than needing to stay in water.
(Some amniotes later evolved internal fertilization, although many aquatic species outside the tetrapod tree had evolved such before the tetrapods appeared, e.g. Materpiscis.) Some tetrapods, such as snakes and caecilians, have lost some or all of their limbs through further speciation and evolution; some have only concealed vestigial bones as a remnant of the limbs of their distant ancestors. Others returned to being amphibious or otherwise living partially or fully aquatic lives, the first during the Carboniferous period, others as recently as the Cenozoic. One fundamental subgroup of amniotes, the sauropsids, diverged into the reptiles: lepidosaurs (lizards, snakes, and the tuatara), archosaurs (crocodilians and dinosaurs, of which birds are a subset), turtles, and various other extinct forms. The remaining group of amniotes, the synapsids, includes mammals and their extinct relatives. Amniotes include the only tetrapods that further evolved for flight—such as birds from among the dinosaurs, the extinct pterosaurs from earlier archosaurs, and bats from among the mammals.

Definitions
The precise definition of "tetrapod" is a subject of strong debate among paleontologists who work with the earliest members of the group.

Apomorphy-based definitions
A majority of paleontologists use the term "tetrapod" to refer to all vertebrates with four limbs and distinct digits (fingers and toes), as well as legless vertebrates with limbed ancestors. Limbs and digits are major apomorphies (newly evolved traits) which define tetrapods, though they are far from the only skeletal or biological innovations inherent to the group. The first vertebrates with limbs and digits evolved in the Devonian, including the Late Devonian-age Ichthyostega and Acanthostega, as well as the trackmakers of the Middle Devonian-age Zachelmie trackways.

Defining tetrapods based on one or two apomorphies can present a problem if these apomorphies were acquired by more than one lineage through convergent evolution. To resolve this potential concern, the apomorphy-based definition is often supported by an equivalent cladistic definition. Cladistics is a modern branch of taxonomy which classifies organisms through evolutionary relationships, as reconstructed by phylogenetic analyses. A cladistic definition would define a group based on how closely related its constituents are. Tetrapoda is widely considered a monophyletic clade, a group with all of its component taxa sharing a single common ancestor. In this sense, Tetrapoda can also be defined as the "clade of limbed vertebrates", including all vertebrates descended from the first limbed vertebrates.

Crown group tetrapods
A portion of tetrapod workers, led by French paleontologist Michel Laurin, prefer to restrict the definition of tetrapod to the crown group. A crown group is a subset of a category of animal defined by the most recent common ancestor of living representatives. This cladistic approach defines "tetrapods" as the nearest common ancestor of all living amphibians (the lissamphibians) and all living amniotes (reptiles, birds, and mammals), along with all of the descendants of that ancestor. In effect, "tetrapod" is a name reserved solely for animals which lie among living tetrapods, so-called crown tetrapods. This is a node-based clade, a group with a common ancestry descended from a single "node" (the node being the nearest common ancestor of living species).
Defining tetrapods based on the crown group would exclude many four-limbed vertebrates which would otherwise be defined as tetrapods. Devonian "tetrapods", such as Ichthyostega and Acanthostega, certainly evolved prior to the split between lissamphibians and amniotes, and thus lie outside the crown group. They would instead lie along the stem group, a subset of animals related to, but not within, the crown group. The stem and crown group together are combined into the total group, given the name Tetrapodomorpha, which refers to all animals closer to living tetrapods than to Dipnoi (lungfishes), the next closest group of living animals. Many early tetrapodomorphs are clearly fish in ecology and anatomy, but later tetrapodomorphs are much more similar to tetrapods in many regards, such as the presence of limbs and digits.

Laurin's approach to the definition of tetrapods is rooted in the belief that the term has more relevance for neontologists (zoologists specializing in living animals) than paleontologists (who primarily use the apomorphy-based definition). In 1998, he re-established the defunct historical term Stegocephali to replace the apomorphy-based definition of tetrapod used by many authors. Other paleontologists use the term stem-tetrapod to refer to those tetrapod-like vertebrates that are not members of the crown group, including both early limbed "tetrapods" and tetrapodomorph fishes. The term "fishapod" was popularized after the discovery and 2006 publication of Tiktaalik, an advanced tetrapodomorph fish which was closely related to limbed vertebrates and showed many apparently transitional traits.

The two subclades of crown tetrapods are Batrachomorpha and Reptiliomorpha. Batrachomorphs are all animals sharing a more recent common ancestry with living amphibians than with living amniotes (reptiles, birds, and mammals). Reptiliomorphs are all animals sharing a more recent common ancestry with living amniotes than with living amphibians. Gaffney (1979) provided the name Neotetrapoda to the crown group of tetrapods, though few subsequent authors followed this proposal.

Biodiversity
Tetrapoda includes three living classes: amphibians, reptiles, and mammals. Overall, the biodiversity of lissamphibians, as well as of tetrapods generally, has grown exponentially over time; the more than 30,000 species living today are descended from a single amphibian group in the Early to Middle Devonian. However, that diversification process was interrupted at least a few times by major biological crises, such as the Permian–Triassic extinction event, which at least affected amniotes. The overall composition of biodiversity was driven primarily by amphibians in the Palaeozoic, dominated by reptiles in the Mesozoic and expanded by the explosive growth of birds and mammals in the Cenozoic. As biodiversity has grown, so has the number of species and the number of niches that tetrapods have occupied. The first tetrapods were aquatic and fed primarily on fish. Today, the Earth supports a great diversity of tetrapods that live in many habitats and subsist on a variety of diets. The following table shows summary estimates for each tetrapod class from the IUCN Red List of Threatened Species, 2014.3, for the number of extant species that have been described in the literature, as well as the number of threatened species.

Classification
The classification of tetrapods has a long history. Traditionally, tetrapods are divided into four classes based on gross anatomical and physiological traits.
Snakes and other legless reptiles are considered tetrapods because they are sufficiently like other reptiles that have a full complement of limbs. Similar considerations apply to caecilians and aquatic mammals. Newer taxonomy is frequently based on cladistics instead, giving a variable number of major "branches" (clades) of the tetrapod family tree.

As is the case throughout evolutionary biology today, there is debate over how to properly classify the groups within Tetrapoda. Traditional biological classification sometimes fails to recognize evolutionary transitions between older groups and descendant groups with markedly different characteristics. For example, the birds, which evolved from the dinosaurs, are defined as a separate group from them, because they represent a distinct new type of physical form and functionality. In phylogenetic nomenclature, in contrast, the newer group is always included in the old. For this school of taxonomy, dinosaurs and birds are not groups in contrast to each other, but rather birds are a sub-type of dinosaurs.

History of classification
The tetrapods, including all large- and medium-sized land animals, have been among the best understood animals since earliest times. By Aristotle's time, the basic division between mammals, birds and egg-laying tetrapods (the "herptiles") was well known, and the inclusion of the legless snakes into this group was likewise recognized. With the birth of modern biological classification in the 18th century, Linnaeus used the same division, with the tetrapods occupying the first three of his six classes of animals. While reptiles and amphibians can be quite similar externally, the French zoologist Pierre André Latreille recognized the large physiological differences at the beginning of the 19th century and split the herptiles into two classes, giving the four familiar classes of tetrapods: amphibians, reptiles, birds and mammals.

Modern classification
With the basic classification of tetrapods settled, half a century followed in which the classification of living and fossil groups was predominantly done by experts working within classes. In the early 1930s, American vertebrate palaeontologist Alfred Romer (1894–1973) produced an overview, drawing together taxonomic work from the various subfields to create an orderly taxonomy in his Vertebrate Paleontology. This classical scheme, with minor variations, is still used in works where systematic overview is essential, e.g. Benton (1998) and Knobill and Neill (2006). While mostly seen in general works, it is also still used in some specialist works like Fortuny et al. (2011).
The taxonomy down to subclass level shown here is from Hildebrand and Goslow (2001):
Superclass Tetrapoda – four-limbed vertebrates
  Class Amphibia – amphibians
    Subclass Ichthyostegalia – early fish-like amphibians (a paraphyletic group leading to the crown-clade Neotetrapoda)
    Subclass Anthracosauria – reptile-like amphibians (often thought to be the ancestors of the amniotes)
    Subclass Temnospondyli – large-headed Paleozoic and Mesozoic amphibians
    Subclass Lissamphibia – modern amphibians
  Class Reptilia – reptiles
    Subclass Diapsida – diapsids, including crocodiles, dinosaurs, birds, lizards, snakes and turtles
    Subclass Euryapsida – euryapsids
    Subclass Synapsida – synapsids, including mammal-like reptiles, now a separate group (often thought to be the ancestors of mammals)
    Subclass Anapsida – anapsids
  Class Mammalia – mammals
    Subclass Prototheria – egg-laying mammals, including monotremes
    Subclass Allotheria – multituberculates
    Subclass Theria – live-bearing mammals, including marsupials and placentals

This classification is the one most commonly encountered in school textbooks and popular works. While orderly and easy to use, it has come under critique from cladistics. The earliest tetrapods are grouped under class Amphibia, although several of the groups are more closely related to amniotes than to modern-day amphibians. Traditionally, birds are not considered a type of reptile, but crocodiles are more closely related to birds than they are to other reptiles, such as lizards. Birds themselves are thought to be descendants of theropod dinosaurs. Basal non-mammalian synapsids ("mammal-like reptiles") traditionally also sort under class Reptilia as a separate subclass, but they are more closely related to mammals than to living reptiles. Considerations like these have led some authors to argue for a new classification based purely on phylogeny, disregarding anatomy and physiology.

Evolution
Tetrapods evolved from early bony fishes (Osteichthyes), specifically from the tetrapodomorph branch of lobe-finned fishes (Sarcopterygii), living in the early to middle Devonian period. The first tetrapods probably evolved in the Emsian stage of the Early Devonian from tetrapodomorph fish living in shallow-water environments. The very earliest tetrapods would have been animals similar to Acanthostega, with legs and lungs as well as gills, but still primarily aquatic and unsuited to life on land. The earliest tetrapods inhabited saltwater, brackish-water, and freshwater environments, as well as environments of highly variable salinity. These traits were shared with many early lobe-finned fishes. As early tetrapods are found on two Devonian continents, Laurussia (Euramerica) and Gondwana, as well as the island of North China, it is widely supposed that early tetrapods were capable of swimming across the shallow (and relatively narrow) continental-shelf seas that separated these landmasses.

Since the early 20th century, several families of tetrapodomorph fishes have been proposed as the nearest relatives of tetrapods, among them the rhizodonts (notably Sauripterus), the osteolepidids, the tristichopterids (notably Eusthenopteron), and more recently the elpistostegalians (also known as Panderichthyida), notably the genus Tiktaalik. A notable feature of Tiktaalik is the absence of bones covering the gills. These bones would otherwise connect the shoulder girdle with the skull, making the shoulder girdle part of the skull.
With the loss of the gill-covering bones, the shoulder girdle is separated from the skull, connected to the torso by muscle and other soft-tissue connections. The result is the appearance of the neck. This feature appears only in tetrapods and Tiktaalik, not in other tetrapodomorph fishes. Tiktaalik also had a pattern of bones in the skull roof (upper half of the skull) that is similar to the end-Devonian tetrapod Ichthyostega. The two also shared a semi-rigid ribcage of overlapping ribs, which may have substituted for a rigid spine. In conjunction with robust forelimbs and shoulder girdle, both Tiktaalik and Ichthyostega may have had the ability to locomote on land in the manner of a seal, with the forward portion of the torso elevated, the hind part dragging behind. Finally, Tiktaalik fin bones are somewhat similar to the limb bones of tetrapods.

However, there are issues with positing Tiktaalik as a tetrapod ancestor. For example, it had a long spine with far more vertebrae than any known tetrapod or other tetrapodomorph fish. Also, the oldest tetrapod trace fossils (tracks and trackways) predate it by a considerable margin. Several hypotheses have been proposed to explain this date discrepancy:
1) The nearest common ancestor of tetrapods and Tiktaalik dates to the Early Devonian. By this hypothesis, the Tiktaalik lineage is the closest to tetrapods, but Tiktaalik itself was a late-surviving relic.
2) Tiktaalik represents a case of parallel evolution.
3) Tetrapods evolved more than once.

History
Palaeozoic
Devonian stem-tetrapods
The oldest evidence for the existence of tetrapods comes from trace fossils: tracks (footprints) and trackways found in Zachełmie, Poland, dated to the Eifelian stage of the Middle Devonian, although these traces have also been interpreted as the ichnogenus Piscichnus (fish nests/feeding traces). The adult tetrapods had an estimated length of 2.5 m (8 feet), and lived in a lagoon with an average depth of 1–2 m, although it is not known at what depth the underwater tracks were made. The lagoon was inhabited by a variety of marine organisms and was apparently salt water. The average water temperature was 30 °C (86 °F). The second oldest evidence for tetrapods, also tracks and trackways, dates from ca. 385 Mya (Valentia Island, Ireland).

The oldest partial fossils of tetrapods date from the Frasnian, beginning about 380 Mya. These include Elginerpeton and Obruchevichthys. Some paleontologists dispute their status as true (digit-bearing) tetrapods. All known forms of Frasnian tetrapods became extinct in the Late Devonian extinction, also known as the end-Frasnian extinction. This marked the beginning of a gap in the tetrapod fossil record known as the Famennian gap, occupying roughly the first half of the Famennian stage.

The oldest near-complete tetrapod fossils, Acanthostega and Ichthyostega, date from the second half of the Famennian. Although both were essentially four-footed fish, Ichthyostega is the earliest known tetrapod that may have had the ability to pull itself onto land and drag itself forward with its forelimbs. There is no evidence that it did so, only that it may have been anatomically capable of doing so. The publication in 2018 of Tutusius umlambo and Umzantsia amazana, from a high-latitude Gondwana setting, indicates that the tetrapods enjoyed a global distribution by the end of the Devonian, extending even into high latitudes.
The end of the Famennian marked another extinction, known as the end-Famennian extinction or the Hangenberg event, which was followed by another gap in the tetrapod fossil record, Romer's gap, also known as the Tournaisian gap. This gap, initially thought to span 30 million years but gradually reduced over time, currently occupies much of the 13.9-million-year Tournaisian, the first stage of the Carboniferous period. Tetrapod-like vertebrates first appeared in the Early Devonian period, and species with limbs and digits were around by the Late Devonian. These early "stem-tetrapods" included animals such as Ichthyostega, with legs and lungs as well as gills, but still primarily aquatic and poorly adapted for life on land. The Devonian stem-tetrapods went through two major population bottlenecks during the Late Devonian extinctions, also known as the end-Frasnian and end-Famennian extinctions. These extinction events led to the disappearance of stem-tetrapods with fish-like features. When stem-tetrapods reappear in the fossil record in early Carboniferous deposits, some 10 million years later, the adult forms of some were somewhat adapted to a terrestrial existence. Why they went to land in the first place is still debated.

Carboniferous
During the early Carboniferous, the number of digits on hands and feet of stem-tetrapods became standardized at no more than five, as lineages with more digits died out (exceptions within crown-group tetrapods arose among some secondarily aquatic members). By mid-Carboniferous times, the stem-tetrapods had radiated into two branches of true ("crown group") tetrapods, one ancestral to modern amphibians and the other ancestral to amniotes. Modern amphibians are most likely derived from the temnospondyls, a particularly diverse and long-lasting group of tetrapods. A less popular proposal draws comparisons to the "lepospondyls", an eclectic mixture of various small tetrapods, including burrowing, limbless, and other bizarrely-shaped forms. The reptiliomorphs (sometimes known as "anthracosaurs") were the relatives and ancestors of the amniotes (reptiles, mammals, and kin). The first amniotes are known from the early part of the Late Carboniferous. All basal amniotes had a small body size, like many of their contemporaries, though some Carboniferous tetrapods evolved into large crocodile-like predators, informally known as "labyrinthodonts". Amphibians must return to water to lay eggs; in contrast, amniote eggs have a membrane ensuring gas exchange out of water and can therefore be laid on land.

Amphibians and amniotes were affected by the Carboniferous rainforest collapse (CRC), an extinction event that occurred around 307 million years ago. The sudden collapse of a vital ecosystem shifted the diversity and abundance of major groups. Amniotes and temnospondyls in particular were more suited to the new conditions. They invaded new ecological niches and began diversifying their diets to include plants and other tetrapods, previously having been limited to insects and fish.

Permian
In the Permian period, amniotes became particularly well-established, and two important clades filled in most terrestrial niches: the sauropsids and the synapsids. The latter were the most important and successful Permian land animals, establishing complex terrestrial ecosystems of predators and prey while acquiring various adaptations retained by their modern descendants, the mammals.
Sauropsid diversity was more subdued during the Permian, but the sauropsids did begin to fracture into several lineages ancestral to modern reptiles. Amniotes were not the only tetrapods to experiment with prolonged life on land. Some temnospondyls, seymouriamorphs, and diadectomorphs also successfully filled terrestrial niches in the earlier part of the Permian. Non-amniote tetrapods declined in the later part of the Permian. The end of the Permian saw a major turnover in fauna during the Permian–Triassic extinction event. There was a protracted loss of species, due to multiple extinction pulses. Many of the once large and diverse groups died out or were greatly reduced.

Mesozoic
The diapsid reptiles (a subgroup of the sauropsids) strongly diversified during the Triassic, giving rise to the turtles, pseudosuchians (crocodilian ancestors), dinosaurs, pterosaurs, and lepidosaurs, along with many other reptile groups on land and sea. Some of the new Triassic reptiles would not survive into the Jurassic, but others would flourish during the Jurassic. Lizards, turtles, dinosaurs, pterosaurs, crocodylomorphs, and plesiosaurs were particular beneficiaries of the Triassic-Jurassic transition. Birds, a particular subset of theropod dinosaurs capable of flight via feathered wings, evolved in the Late Jurassic. In the Cretaceous, snakes developed from lizards, rhynchocephalians (tuataras and kin) declined, and modern birds and crocodilians started to establish themselves.

Among the characteristic Paleozoic non-amniote tetrapods, few survived into the Mesozoic. Temnospondyls briefly recovered in the Triassic, spawning the large aquatic stereospondyls and the small terrestrial lissamphibians (the earliest frogs, salamanders, and caecilians). However, stereospondyl diversity would crash at the end of the Triassic. By the Late Cretaceous, the only surviving amphibians were lissamphibians. Many groups of synapsids, such as anomodonts and therocephalians, that once comprised the dominant terrestrial fauna of the Permian, also became extinct during the Triassic. During the Jurassic, one synapsid group (Cynodontia) gave rise to the modern mammals, which survived through the rest of the Mesozoic to later diversify during the Cenozoic. The Cretaceous-Paleogene extinction event at the end of the Mesozoic killed off many organisms, including all the non-avian dinosaurs and nearly all marine reptiles. Birds survived and diversified during the Cenozoic, similar to mammals.

Cenozoic
Following the great extinction event at the end of the Mesozoic, representatives of seven major groups of tetrapods persisted into the Cenozoic era. One of them, a group of semiaquatic reptiles known as the Choristodera, became extinct 11 million years ago for unclear reasons. The seven Cenozoic tetrapod groups are:
Lissamphibia: frogs, salamanders, and caecilians
Mammalia: monotremes, marsupials, placentals, and †multituberculates
Lepidosauria: tuataras and lizards (including amphisbaenians and snakes)
Testudines: turtles
Crocodilia: crocodiles, alligators, caimans and gharials
Aves: birds
†Choristodera (extinct)

Phylogeny
Stem group
Stem tetrapods are all animals more closely related to tetrapods than to lungfish, but excluding the tetrapod crown group. The cladogram below illustrates the relationships of stem-tetrapods.
All these lineages are extinct except for Dipnomorpha and Tetrapoda (cladogram from Swartz, 2012).

Crown group
Crown tetrapods are defined as the nearest common ancestor of all living tetrapods (amphibians, reptiles, birds, and mammals) along with all of the descendants of that ancestor. The inclusion of certain extinct groups in the crown Tetrapoda depends on the relationships of modern amphibians, or lissamphibians. There are currently three major hypotheses on the origins of lissamphibians. In the temnospondyl hypothesis (TH), lissamphibians are most closely related to dissorophoid temnospondyls, which would make temnospondyls tetrapods. In the lepospondyl hypothesis (LH), lissamphibians are the sister taxon of lysorophian lepospondyls, making lepospondyls tetrapods and temnospondyls stem-tetrapods. In the polyphyletic hypothesis (PH), frogs and salamanders evolved from dissorophoid temnospondyls while caecilians come out of microsaur lepospondyls, making both lepospondyls and temnospondyls true tetrapods.

Origins of modern amphibians
Temnospondyl hypothesis (TH)
This hypothesis comes in a number of variants, most of which have lissamphibians coming out of the dissorophoid temnospondyls, usually with the focus on amphibamids and branchiosaurids. The temnospondyl hypothesis is the currently favored or majority view, supported by Ruta et al. (2003a, b), Ruta and Coates (2007), Coates et al. (2008), Sigurdsen and Green (2011), and Schoch (2013, 2014). Cladogram modified after Coates, Ruta and Friedman (2008).

Lepospondyl hypothesis (LH)
Cladogram modified after Laurin, How Vertebrates Left the Water (2010).

Polyphyly hypothesis (PH)
This hypothesis has batrachians (frogs and salamanders) coming out of dissorophoid temnospondyls, with caecilians out of microsaur lepospondyls. There are two variants, one developed by Carroll, the other by Anderson. Cladogram modified after Schoch and Fröbisch (2009).

Anatomy and physiology
The tetrapods' ancestral fish, the tetrapodomorphs, possessed similar traits to those inherited by the early tetrapods, including internal nostrils and a large fleshy fin built on bones that could give rise to the tetrapod limb. To propagate in the terrestrial environment, animals had to overcome certain challenges. Their bodies needed additional support, because buoyancy was no longer a factor. Water retention was now important, since water was no longer the living matrix and could be lost easily to the environment. Finally, animals needed new sensory input systems to have any ability to function reasonably on land.

Skull
The brain only filled half of the skull in the early tetrapods. The rest was filled with fatty tissue or fluid, which gave the brain space to grow as tetrapods adapted to life on land. The palatal and jaw structures of tetrapodomorphs were similar to those of early tetrapods, and their dentition was similar too, with labyrinthine teeth fitting in a pit-and-tooth arrangement on the palate. A major difference between early tetrapodomorph fishes and early tetrapods was in the relative development of the front and back skull portions; the snout is much less developed than in most early tetrapods, and the post-orbital skull is exceptionally long compared with an amphibian's. A notable characteristic that makes a tetrapod's skull different from a fish's is the relative length of the frontal and rear portions. Fish had a long rear portion while the front was short; the orbital vacuities were thus located towards the anterior end.
In the tetrapod, the front of the skull lengthened, positioning the orbits farther back on the skull.

Neck
In tetrapodomorph fishes such as Eusthenopteron, the part of the body that would later become the neck was covered by a number of gill-covering bones known as the opercular series. These bones functioned as part of a pump mechanism for forcing water through the mouth and past the gills. When the mouth opened to take in water, the gill flaps closed (including the gill-covering bones), thus ensuring that water entered only through the mouth. When the mouth closed, the gill flaps opened and water was forced through the gills. In Acanthostega, a basal tetrapod, the gill-covering bones have disappeared, although the underlying gill arches are still present. Besides the opercular series, Acanthostega also lost the throat-covering bones (gular series). The opercular series and gular series combined are sometimes known as the operculo-gular or operculogular series. Other bones in the neck region lost in Acanthostega (and later tetrapods) include the extrascapular series and the supracleithral series. Both sets of bones connect the shoulder girdle to the skull. With the loss of these bones, tetrapods acquired a neck, allowing the head to rotate somewhat independently of the torso. This, in turn, required stronger soft-tissue connections between head and torso, including muscles and ligaments connecting the skull with the spine and shoulder girdle. Bones and groups of bones were also consolidated and strengthened.

In Carboniferous tetrapods, the neck joint (occiput) provided a pivot point for the spine against the back of the skull. In tetrapodomorph fishes such as Eusthenopteron, no such neck joint existed. Instead, the notochord (a rod made of proto-cartilage) entered a hole in the back of the braincase and continued to the middle of the braincase. Acanthostega had the same arrangement as Eusthenopteron, and thus no neck joint. The neck joint evolved independently in different lineages of early tetrapods. All tetrapods appear to hold their necks at the maximum possible vertical extension when in a normal, alert posture.

Dentition
Tetrapods had a tooth structure known as "plicidentine", characterized by infolding of the enamel as seen in cross-section. The more extreme version found in early tetrapods is known as "labyrinthodont" or "labyrinthodont plicidentine". This type of tooth structure has evolved independently in several types of bony fishes, both ray-finned and lobe-finned, some modern lizards, and in a number of tetrapodomorph fishes. The infolding appears to evolve when a fang or large tooth grows in a small jaw, erupting when it is still weak and immature. The infolding provides added strength to the young tooth, but offers little advantage when the tooth is mature. Such teeth are associated with feeding on soft prey in juveniles.

Axial skeleton
With the move from water to land, the spine had to resist the bending caused by body weight and had to provide mobility where needed. Previously, it could bend along its entire length. Likewise, the paired appendages had not formerly been connected to the spine, but the slowly strengthening limbs now transmitted their support to the axis of the body.

Girdles
The shoulder girdle was disconnected from the skull, resulting in improved terrestrial locomotion. The early sarcopterygians' cleithrum was retained as the clavicle, and the interclavicle was well-developed, lying on the underside of the chest.
In primitive forms, the two clavicles and the interclavicle could have grown ventrally in such a way as to form a broad chest plate. The upper portion of the girdle had a flat, scapular blade (shoulder bone), with the glenoid cavity situated below serving as the articulation surface for the humerus, while ventrally there was a large, flat coracoid plate turning in toward the midline.

The pelvic girdle was also much larger than the simple plate found in fishes, accommodating more muscles. It extended far dorsally and was joined to the backbone by one or more specialized sacral ribs. The hind legs were somewhat specialized in that they not only supported weight, but also provided propulsion. The dorsal extension of the pelvis was the ilium, while the broad ventral plate was composed of the pubis in front and the ischium behind. The three bones met at a single point in the center of the pelvic triangle, called the acetabulum, providing a surface of articulation for the femur.

Limbs
Fleshy lobe-fins supported on bones seem to have been an ancestral trait of all bony fishes (Osteichthyes). The ancestors of the ray-finned fishes (Actinopterygii) evolved their fins in a different direction. The tetrapodomorph ancestors of the tetrapods further developed their lobe fins. The paired fins had bones distinctly homologous to the humerus, ulna, and radius in the fore-fins and to the femur, tibia, and fibula in the pelvic fins. The paired fins of the early sarcopterygians were smaller than tetrapod limbs, but the skeletal structure was very similar in that the early sarcopterygians had a single proximal bone (analogous to the humerus or femur), two bones in the next segment (forearm or lower leg), and an irregular subdivision of the fin, roughly comparable to the structure of the carpus/tarsus and phalanges of a hand.

Locomotion
In typical early tetrapod posture, the upper arm and upper leg extended nearly straight horizontally from the body, and the forearm and the lower leg extended downward from the upper segment at a near right angle. The body weight was not centered over the limbs, but was rather transferred 90 degrees outward and down through the lower limbs, which touched the ground. Most of the animal's strength was used just to lift its body off the ground for walking, which was probably slow and difficult. With this sort of posture, it could only make short, broad strides. This has been confirmed by fossilized footprints found in Carboniferous rocks.

Feeding
Early tetrapods had a wide gaping jaw with weak muscles to open and close it. In the jaw were moderate-sized palatal and vomerine (upper) and coronoid (lower) fangs, as well as rows of smaller teeth. This was in contrast to the larger fangs and small marginal teeth of earlier tetrapodomorph fishes such as Eusthenopteron. Although this indicates a change in feeding habits, the exact nature of the change is unknown. Some scholars have suggested a change to bottom-feeding or feeding in shallower waters (Ahlberg and Milner 1994). Others have suggested a mode of feeding comparable to that of the Japanese giant salamander, which uses both suction feeding and direct biting to eat small crustaceans and fish. A study of these jaws shows that they were used for feeding underwater, not on land.

In later terrestrial tetrapods, two methods of jaw closure emerge: static and kinetic inertial (also known as snapping). In the static system, the jaw muscles are arranged in such a way that the jaws have maximum force when shut or nearly shut.
In the kinetic inertial system, maximum force is applied when the jaws are wide open, resulting in the jaws snapping shut with great velocity and momentum. Although the kinetic inertial system is occasionally found in fish, it requires special adaptations (such as very narrow jaws) to deal with the high viscosity and density of water, which would otherwise impede rapid jaw closure.

The tetrapod tongue is built from muscles that once controlled gill openings. The tongue is anchored to the hyoid bone, which was once the lower half of a pair of gill bars (the second pair after the ones that evolved into jaws). The tongue did not evolve until the gills began to disappear; Acanthostega still had gills, so this would have been a later development. In aquatically feeding animals, the food is supported by water and can literally float (or get sucked in) to the mouth. On land, the tongue becomes important.

Respiration
The evolution of early tetrapod respiration was influenced by an event known as the "charcoal gap", a period of more than 20 million years, in the middle and late Devonian, when atmospheric oxygen levels were too low to sustain wildfires. During this time, fish inhabiting anoxic waters (very low in oxygen) would have been under evolutionary pressure to develop their air-breathing ability. Early tetrapods probably relied on four methods of respiration: with lungs, with gills, cutaneous respiration (skin breathing), and breathing through the lining of the digestive tract, especially the mouth.

Gills
The early tetrapod Acanthostega had at least three and probably four pairs of gill bars, each containing deep grooves in the place where one would expect to find the afferent branchial artery. This strongly suggests that functional gills were present. Some aquatic temnospondyls retained internal gills at least into the early Jurassic. Evidence of clear fish-like internal gills is present in Archegosaurus.

Lungs
Lungs originated as an extra pair of pouches in the throat, behind the gill pouches. They were probably present in the last common ancestor of bony fishes. In some fishes they evolved into swim bladders for maintaining buoyancy. Lungs and swim bladders are homologous (descended from a common ancestral form), as is the case for the pulmonary artery (which delivers de-oxygenated blood from the heart to the lungs) and the arteries that supply swim bladders. Air was introduced into the lungs by a process known as buccal pumping. In the earliest tetrapods, exhalation was probably accomplished with the aid of the muscles of the torso (the thoracoabdominal region). Inhaling with the ribs was either primitive for amniotes, or evolved independently in at least two different lineages of amniotes. It is not found in amphibians. The muscularized diaphragm is unique to mammals.

Recoil aspiration
Although tetrapods are widely thought to have inhaled through buccal pumping (mouth pumping), according to an alternative hypothesis aspiration (inhalation) occurred through passive recoil of the exoskeleton, in a manner similar to the contemporary primitive ray-finned fish Polypterus. This fish inhales through its spiracle (blowhole), an anatomical feature present in early tetrapods. Exhalation is powered by muscles in the torso. During exhalation, the bony scales in the upper chest region become indented. When the muscles are relaxed, the bony scales spring back into position, generating considerable negative pressure within the torso, resulting in a very rapid intake of air through the spiracle.
Cutaneous respiration
Skin breathing, known as cutaneous respiration, is common in fish and amphibians, and occurs both in and out of water. In some animals waterproof barriers impede the exchange of gases through the skin. For example, keratin in human skin, the scales of reptiles, and modern proteinaceous fish scales impede the exchange of gases. However, early tetrapods had scales made of highly vascularized bone covered with skin. For this reason, it is thought that early tetrapods could engage in a significant amount of skin breathing.

Carbon dioxide metabolism
Although air-breathing fish can absorb oxygen through their lungs, the lungs tend to be ineffective for discharging carbon dioxide. In tetrapods, the ability of lungs to discharge CO2 came about gradually, and was not fully attained until the evolution of amniotes. The same limitation applies to gut air breathing (GUT), i.e., breathing with the lining of the digestive tract. Tetrapod skin would have been effective for both absorbing oxygen and discharging CO2, but only up to a point. For this reason, early tetrapods may have experienced chronic hypercapnia (high levels of blood CO2). This is not uncommon in fish that inhabit waters high in CO2. According to one hypothesis, the "sculpted" or "ornamented" dermal skull roof bones found in early tetrapods may have been related to a mechanism for relieving respiratory acidosis (acidic blood caused by excess CO2) through compensatory metabolic alkalosis.

Circulation
Early tetrapods probably had a three-chambered heart, as do modern amphibians and lepidosaurian and chelonian reptiles, in which oxygenated blood from the lungs and de-oxygenated blood from the respiring tissues enter by separate atria and are directed via a spiral valve to the appropriate vessel: the aorta for oxygenated blood and the pulmonary artery for deoxygenated blood. The spiral valve is essential to keeping the mixing of the two types of blood to a minimum, enabling the animal to have higher metabolic rates and be more active than it otherwise could be.

Senses

Olfaction
The difference in density between air and water causes smells (certain chemical compounds detectable by chemoreceptors) to behave differently. An animal first venturing out onto land would have difficulty in locating such chemical signals if its sensory apparatus had evolved in the context of aquatic detection. The vomeronasal organ also evolved in the nasal cavity for the first time, for detecting pheromones from biological substrates on land, though it was subsequently lost or reduced to vestigial form in some lineages, like archosaurs and catarrhines, but expanded in others, like lepidosaurs.

Lateral line system
Fish have a lateral line system that detects pressure fluctuations in the water. Such pressure fluctuations are undetectable in air, but grooves for the lateral line sense organs were found on the skulls of early tetrapods, suggesting either an aquatic or largely aquatic habitat. Modern amphibians, which are semi-aquatic, exhibit this feature, whereas it has been lost in the higher vertebrates.

Vision
Changes in the eye came about because the behavior of light at the surface of the eye differs between an air and a water environment due to the difference in refractive index, so the focal length of the lens altered to function in air. The eye was now exposed to a relatively dry environment rather than being bathed by water, so eyelids developed and tear ducts evolved to produce a liquid to moisten the eyeball.
Early tetrapods inherited a set of five rod and cone opsins known as the vertebrate opsins. Four cone opsins were present in the first vertebrate, inherited from invertebrate ancestors:

LWS/MWS (long-to-medium-wave sensitive) - green, yellow, or red
SWS1 (short-wave sensitive) - ultraviolet or violet - lost in monotremes (platypus, echidna)
SWS2 (short-wave sensitive) - violet or blue - lost in therians (placental mammals and marsupials)
RH2 (rhodopsin-like cone opsin) - green - lost separately in amphibians and mammals, retained in reptiles and birds

A single rod opsin, rhodopsin, was present in the first jawed vertebrate, inherited from a jawless vertebrate ancestor:

RH1 (rhodopsin) - blue-green - used for night vision and color correction in low-light environments

Balance
Tetrapods retained the balancing function of the inner ear from fish ancestry.

Hearing
Air vibrations could not set up pulsations through the skull as in a proper auditory organ. The spiracle was retained as the otic notch, eventually closed in by the tympanum, a thin, tight membrane of connective tissue also called the eardrum (however, this and the otic notch were lost in the ancestral amniotes, and eardrums were later regained independently). The hyomandibula of fish migrated upwards from its jaw-supporting position and was reduced in size to form the columella. Situated between the tympanum and the braincase in an air-filled cavity, the columella was now capable of transmitting vibrations from the exterior of the head to the interior. Thus the columella became an important element in an impedance matching system, coupling airborne sound waves to the receptor system of the inner ear. This system evolved independently within several different amphibian lineages. The impedance matching ear had to meet certain conditions to work: the columella had to be perpendicular to the tympanum, small and light enough to reduce its inertia, and suspended in an air-filled cavity. In modern species that are sensitive to frequencies over 1 kHz, the footplate of the columella is 1/20th the area of the tympanum. However, in early amphibians the columella was too large, making the footplate area oversized and preventing the hearing of high frequencies. So it appears they could only hear high-intensity, low-frequency sounds, and the columella more probably just supported the braincase against the cheek. Only in the Early Triassic, about a hundred million years after they conquered land, did the tympanic middle ear evolve (independently) in all the tetrapod lineages. About fifty million years later (in the Late Triassic), in mammals, the columella was reduced even further to become the stapes.
https://en.wikipedia.org/wiki/Rectifier
Rectifier
A rectifier is an electrical device that converts alternating current (AC), which periodically reverses direction, to direct current (DC), which flows in only one direction. The process is known as rectification, since it "straightens" the direction of current. Physically, rectifiers take a number of forms, including vacuum tube diodes, wet chemical cells, mercury-arc valves, stacks of copper and selenium oxide plates, semiconductor diodes, silicon-controlled rectifiers and other silicon-based semiconductor switches. Historically, even synchronous electromechanical switches and motor-generator sets have been used. Early radio receivers, called crystal radios, used a "cat's whisker" of fine wire pressing on a crystal of galena (lead sulfide) to serve as a point-contact rectifier or "crystal detector". Rectifiers have many uses, but are often found serving as components of DC power supplies and high-voltage direct current power transmission systems. Rectification may serve in roles other than to generate direct current for use as a source of power. As noted, rectifiers can serve as detectors of radio signals. In gas heating systems, flame rectification is used to detect the presence of a flame. Depending on the type of alternating current supply and the arrangement of the rectifier circuit, the output voltage may require additional smoothing to produce a uniform steady voltage. Many applications of rectifiers, such as power supplies for radio, television and computer equipment, require a steady constant DC voltage (as would be produced by a battery). In these applications the output of the rectifier is smoothed by an electronic filter, which may be a capacitor, choke, or set of capacitors, chokes and resistors, possibly followed by a voltage regulator to produce a steady voltage. A device that performs the opposite function, that is converting DC to AC, is called an inverter.

Rectifier devices
Before the development of silicon semiconductor rectifiers, vacuum tube thermionic diodes and copper oxide- or selenium-based metal rectifier stacks were used. The first vacuum tube diodes designed for rectifier application in power supply circuits were introduced in April 1915 by Saul Dushman of General Electric. With the introduction of semiconductor electronics, vacuum tube rectifiers became obsolete, except for some enthusiasts of vacuum tube audio equipment. For power rectification from very low to very high current, semiconductor diodes of various types (junction diodes, Schottky diodes, etc.) are widely used. Other devices that have control electrodes as well as acting as unidirectional current valves are used where more than simple rectification is required, e.g., where variable output voltage is needed. High-power rectifiers, such as those used in high-voltage direct current power transmission, employ silicon semiconductor devices of various types. These are thyristors or other controlled switching solid-state switches, which effectively function as diodes to pass current in only one direction.

Rectifier circuits
Rectifier circuits may be single-phase or multi-phase. Most low-power rectifiers for domestic equipment are single-phase, but three-phase rectification is very important for industrial applications and for the transmission of energy as DC (HVDC).

Single-phase rectifiers

Half-wave rectification
In half-wave rectification of a single-phase supply, either the positive or negative half of the AC wave is passed, while the other half is blocked.
Because only one half of the input waveform reaches the output, the mean voltage is lower. Half-wave rectification requires a single diode in a single-phase supply, or three in a three-phase supply. Rectifiers yield a unidirectional but pulsating direct current; half-wave rectifiers produce far more ripple than full-wave rectifiers, and much more filtering is needed to eliminate harmonics of the AC frequency from the output. The no-load output DC voltage of an ideal half-wave rectifier for a sinusoidal input voltage is:

V_dc = V_av = V_peak / π ≈ 0.318 V_peak
V_rms = V_peak / 2

where:

V_dc, V_av – the DC or average output voltage,
V_peak – the peak value of the phase input voltages,
V_rms – the root mean square (RMS) value of the output voltage.

Full-wave rectification
A full-wave rectifier converts the whole of the input waveform to one of constant polarity (positive or negative) at its output. Mathematically, this corresponds to the absolute value function. Full-wave rectification converts both polarities of the input waveform to pulsating DC (direct current), and yields a higher average output voltage. Two diodes and a center-tapped transformer, or four diodes in a bridge configuration and any AC source (including a transformer without center tap), are needed. Single semiconductor diodes, double diodes with a common cathode or common anode, and four- or six-diode bridges are manufactured as single components. For single-phase AC, if the transformer is center-tapped, then two diodes back-to-back (cathode-to-cathode or anode-to-anode, depending on output polarity required) can form a full-wave rectifier. Twice as many turns are required on the transformer secondary to obtain the same output voltage as for a bridge rectifier, but the power rating is unchanged. The average and RMS no-load output voltages of an ideal single-phase full-wave rectifier are:

V_dc = V_av = 2 V_peak / π ≈ 0.637 V_peak
V_rms = V_peak / √2 ≈ 0.707 V_peak

Very common double-diode rectifier vacuum tubes contained a single common cathode and two anodes inside a single envelope, achieving full-wave rectification with positive output. The 5U4 and the 80/5Y3 (4 pin)/(octal) were popular examples of this configuration.
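As a quick numerical sanity check on the single-phase formulas above, the short Python sketch below (an illustration added for this discussion, not part of the original text) samples one cycle of a unit-amplitude sine wave and computes the average and RMS values of its half-wave and full-wave rectified forms:

import numpy as np

# Sample one full cycle of a unit-amplitude sine (V_peak = 1).
theta = np.linspace(0, 2 * np.pi, 1_000_000, endpoint=False)
v_in = np.sin(theta)

half_wave = np.clip(v_in, 0, None)  # negative half-cycle blocked by the diode
full_wave = np.abs(v_in)            # both half-cycles folded to one polarity

print(f"half-wave V_dc  = {half_wave.mean():.4f} (1/pi     = {1 / np.pi:.4f})")
print(f"half-wave V_rms = {np.sqrt(np.mean(half_wave**2)):.4f} (1/2      = 0.5000)")
print(f"full-wave V_dc  = {full_wave.mean():.4f} (2/pi     = {2 / np.pi:.4f})")
print(f"full-wave V_rms = {np.sqrt(np.mean(full_wave**2)):.4f} (1/sqrt 2 = {1 / np.sqrt(2):.4f})")

Each printed value should match its closed-form counterpart to about four decimal places.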
Three-phase rectifiers
Single-phase rectifiers are commonly used for power supplies for domestic equipment. However, for most industrial and high-power applications, three-phase rectifier circuits are the norm. As with single-phase rectifiers, three-phase rectifiers can take the form of a half-wave circuit, a full-wave circuit using a center-tapped transformer, or a full-wave bridge circuit. Thyristors are commonly used in place of diodes to create a circuit that can regulate the output voltage. Many devices that provide direct current actually 'generate' three-phase AC. For example, an automobile alternator contains nine diodes, six of which function as a full-wave rectifier for battery charging.

Three-phase, half-wave circuit
An uncontrolled three-phase, half-wave midpoint circuit requires three diodes, one connected to each phase. This is the simplest type of three-phase rectifier but suffers from relatively high harmonic distortion on both the AC and DC connections. This type of rectifier is said to have a pulse-number of three, since the output voltage on the DC side contains three distinct pulses per cycle of the grid frequency. The peak values V_peak of this three-pulse DC voltage are calculated from the RMS value V_LN of the input phase voltage (line-to-neutral voltage, 120 V in North America, 230 V within Europe at mains operation):

V_peak = √2 · V_LN ≈ 1.414 V_LN

The average no-load output voltage results from the integral under the graph of a positive half-wave with a period duration of 2π/3 (from 30° to 150°):

V_dc = V_av = (3 / 2π) · ∫[30°..150°] V_peak · sin(θ) dθ = (3√3 / 2π) · V_peak ≈ 0.827 V_peak

Three-phase, full-wave circuit using center-tapped transformer
If the AC supply is fed via a transformer with a center tap, a rectifier circuit with improved harmonic performance can be obtained. This rectifier now requires six diodes, one connected to each end of each transformer secondary winding. This circuit has a pulse-number of six, and in effect, can be thought of as a six-phase, half-wave circuit. Before solid-state devices became available, the half-wave circuit and the full-wave circuit using a center-tapped transformer were very commonly used in industrial rectifiers using mercury-arc valves. This was because the three or six AC supply inputs could be fed to a corresponding number of anode electrodes on a single tank, sharing a common cathode. With the advent of diodes and thyristors, these circuits have become less popular and the three-phase bridge circuit has become the most common circuit.

Three-phase bridge rectifier uncontrolled
For an uncontrolled three-phase bridge rectifier, six diodes are used, and the circuit again has a pulse number of six. For this reason, it is also commonly referred to as a six-pulse bridge. The B6 circuit can be seen simplified as a series connection of two three-pulse center circuits. For low-power applications, double diodes in series, with the anode of the first diode connected to the cathode of the second, are manufactured as a single component for this purpose. Some commercially available double diodes have all four terminals available so the user can configure them for single-phase split supply use, half a bridge, or three-phase rectifier. For higher-power applications, a single discrete device is usually used for each of the six arms of the bridge. For the very highest powers, each arm of the bridge may consist of tens or hundreds of separate devices in parallel (where very high current is needed, for example in aluminium smelting) or in series (where very high voltages are needed, for example in high-voltage direct current power transmission). The pulsating DC voltage results from the differences of the instantaneous positive and negative phase voltages, phase-shifted by 30°. The ideal, no-load average output voltage of the B6 circuit results from the integral under the graph of a DC voltage pulse with a period duration of π/3 (from 60° to 120°) with the peak value √3 · V_peak:

V_dc = V_av = (3 / π) · ∫[60°..120°] √3 · V_peak · sin(θ) dθ = (3√3 / π) · V_peak ≈ 1.654 V_peak
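The two three-phase averages just derived can be verified numerically as well. The Python sketch below (again an illustration, not from the article) builds three unit-amplitude phase voltages and averages the most positive phase (three-pulse midpoint circuit) and the difference between the most positive and most negative poles (six-pulse B6 bridge):

import numpy as np

# Three unit-amplitude phase voltages, 120 degrees apart (V_peak = 1).
theta = np.linspace(0, 2 * np.pi, 1_000_000, endpoint=False)
phases = np.stack([np.sin(theta),
                   np.sin(theta - 2 * np.pi / 3),
                   np.sin(theta + 2 * np.pi / 3)])

v_three_pulse = phases.max(axis=0)                     # most positive phase only
v_six_pulse = phases.max(axis=0) - phases.min(axis=0)  # positive minus negative pole

print(f"3-pulse V_dc = {v_three_pulse.mean():.4f} (3*sqrt(3)/(2*pi) = {3 * np.sqrt(3) / (2 * np.pi):.4f})")
print(f"6-pulse V_dc = {v_six_pulse.mean():.4f} (3*sqrt(3)/pi     = {3 * np.sqrt(3) / np.pi:.4f})")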
If the three-phase bridge rectifier is operated symmetrically (as positive and negative supply voltage), the center point of the rectifier on the output side (or the so-called isolated reference potential) opposite the center point of the transformer (or the neutral conductor) has a potential difference in the form of a triangular common-mode voltage. For this reason, these two centers must never be connected to each other, otherwise short-circuit currents would flow. The ground of the three-phase bridge rectifier in symmetrical operation is thus decoupled from the neutral conductor or the earth of the mains voltage. Powered by a transformer, earthing of the center point of the bridge is possible, provided that the secondary winding of the transformer is electrically isolated from the mains voltage and the star point of the secondary winding is not on earth. In this case, however, (negligible) leakage currents flow through the transformer windings. The common-mode voltage is formed out of the respective average values of the differences between the positive and negative phase voltages, which form the pulsating DC voltage. The peak value of this delta voltage amounts to 1/4 of the peak value V_peak of the phase input voltage, calculated as V_peak minus half of the pulsating DC voltage at the 60° point of the period:

ΔV_peak = V_peak − (1/2) · (3/2) · V_peak = V_peak / 4

The RMS value of the common-mode voltage is calculated from the form factor for triangular oscillations:

V_rms = ΔV_peak / √3 = V_peak / (4√3) ≈ 0.144 V_peak

If the circuit is operated asymmetrically (as a simple supply voltage with just one positive pole), both the positive and negative poles (or the isolated reference potential) pulsate opposite the center (or the ground) of the input voltage, analogously to the positive and negative waveforms of the phase voltages. However, the differences in the phase voltages result in the six-pulse DC voltage (over the duration of a period). The strict separation of the transformer center from the negative pole (otherwise short-circuit currents will flow), or a possible grounding of the negative pole when powered by an isolating transformer, applies correspondingly to the symmetrical operation.

Three-phase bridge rectifier controlled
The controlled three-phase bridge rectifier uses thyristors in place of diodes. The output voltage is reduced by the factor cos(α):

V_dc = (3√3 / π) · V_peak · cos(α)

Or, expressed in terms of the line-to-line input voltage:

V_dc = (3 / π) · V_LLpeak · cos(α)

where:

V_LLpeak – the peak value of the line-to-line input voltages,
V_peak – the peak value of the phase (line-to-neutral) input voltages,
α – the firing angle of the thyristor (0 if diodes are used to perform rectification).

The above equations are only valid when no current is drawn from the AC supply or in the theoretical case when the AC supply connections have no inductance. In practice, the supply inductance causes a reduction of DC output voltage with increasing load, typically in the range 10–20% at full load. The effect of supply inductance is to slow down the transfer process (called commutation) from one phase to the next. As a result, at each transition between a pair of devices there is a period of overlap during which three (rather than two) devices in the bridge are conducting simultaneously. The overlap angle is usually referred to by the symbol μ (or u), and may be 20–30° at full load. With supply inductance taken into account, the output voltage of the rectifier is reduced to:

V_dc = (3√3 / 2π) · V_peak · [cos(α) + cos(α + μ)]

The overlap angle μ is directly related to the DC current, and the above equation may be re-expressed as:

V_dc = (3√3 / π) · V_peak · cos(α) − (3 / π) · ω · L_c · I_d

where:

L_c – the commutating inductance per phase,
I_d – the direct current,
ω – the angular frequency of the supply.
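To illustrate how the firing angle and the commutation voltage drop interact, the following Python sketch evaluates the controlled-bridge equation above for a few firing angles. The supply voltage, frequency, commutating inductance, and load current are illustrative assumptions, not values taken from the article:

import numpy as np

# Controlled six-pulse bridge: V_dc = V_d0 * cos(alpha) - (3/pi) * omega * Lc * Id
v_peak = np.sqrt(2) * 230.0   # assumed peak phase voltage (230 V RMS supply)
omega = 2 * np.pi * 50.0      # assumed 50 Hz supply
l_c = 1e-3                    # assumed commutating inductance per phase, in henries
i_d = 50.0                    # assumed DC load current, in amperes

v_d0 = 3 * np.sqrt(3) / np.pi * v_peak  # ideal no-load voltage at alpha = 0
drop = 3 / np.pi * omega * l_c * i_d    # commutation voltage drop under load

for alpha_deg in (0, 15, 30, 45):
    v_dc = v_d0 * np.cos(np.radians(alpha_deg)) - drop
    print(f"alpha = {alpha_deg:2d} deg: V_dc ≈ {v_dc:6.1f} V")

With these assumed values the commutation drop is about 15 V on top of the cos(α) reduction, illustrating why a heavily loaded rectifier delivers noticeably less than its no-load voltage.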
Twelve-pulse bridge
Although better than single-phase rectifiers or three-phase half-wave rectifiers, six-pulse rectifier circuits still produce considerable harmonic distortion on both the AC and DC connections. For very high-power rectifiers the twelve-pulse bridge connection is usually used. A twelve-pulse bridge consists of two six-pulse bridge circuits connected in series, with their AC connections fed from a supply transformer that produces a 30° phase shift between the two bridges. This cancels many of the characteristic harmonics the six-pulse bridges produce. The 30-degree phase shift is usually achieved by using a transformer with two sets of secondary windings, one in star (wye) connection and one in delta connection.

Voltage-multiplying rectifiers
The simple half-wave rectifier can be built in two electrical configurations with the diodes pointing in opposite directions: one version connects the negative terminal of the output directly to the AC supply, and the other connects the positive terminal of the output directly to the AC supply. By combining both of these with separate output smoothing, it is possible to get an output voltage of nearly double the peak AC input voltage. This also provides a tap in the middle, which allows use of such a circuit as a split-rail power supply. A variant of this is to use two capacitors in series for the output smoothing on a bridge rectifier, then place a switch between the midpoint of those capacitors and one of the AC input terminals. With the switch open, this circuit acts like a normal bridge rectifier. With the switch closed, it acts like a voltage-doubling rectifier. In other words, this makes it easy to derive a voltage of roughly 320 V (±15%, approx.) DC from any 120 V or 230 V mains supply in the world (doubling 120 V RMS gives about 2 × 120 × √2 ≈ 340 V, while bridge-rectifying 230 V RMS gives about 230 × √2 ≈ 325 V); this can then be fed into a relatively simple switched-mode power supply. However, for a given desired ripple, the value of both capacitors must be twice the value of the single one required for a normal bridge rectifier; when the switch is closed, each one must filter the output of a half-wave rectifier, and when the switch is open, the two capacitors are connected in series with an equivalent value of half one of them. In a Cockcroft-Walton voltage multiplier, stages of capacitors and diodes are cascaded to amplify a low AC voltage to a high DC voltage. These circuits are capable of producing a DC output voltage potential up to about ten times the peak AC input voltage, in practice limited by current capacity and voltage-regulation issues. Diode voltage multipliers, frequently used as a trailing boost stage or primary high-voltage (HV) source, are used in HV laser power supplies, powering devices such as cathode-ray tubes (CRTs) (like those used in CRT-based television, radar and sonar displays), photon-amplifying devices found in image-intensifying and photomultiplier tubes (PMTs), and magnetron-based radio frequency (RF) devices used in radar transmitters and microwave ovens. Before the introduction of semiconductor electronics, transformerless vacuum tube receivers powered directly from AC power sometimes used voltage doublers to generate roughly 300 VDC from a 100–120 V power line.

Quantification of rectifiers
Several ratios are used to quantify the function and performance of rectifiers or their output, including transformer utilization factor (TUF), conversion ratio (η), ripple factor, form factor, and peak factor. The two primary measures are DC voltage (or offset) and peak-peak ripple voltage, which are constituent components of the output voltage.

Conversion ratio
Conversion ratio (also called "rectification ratio" and, confusingly, "efficiency") η is defined as the ratio of DC output power to the input power from the AC supply. Even with ideal rectifiers, the ratio is less than 100% because some of the output power is AC power rather than DC, which manifests as ripple superimposed on the DC waveform. The ratio can be improved with the use of smoothing circuits, which reduce the ripple and hence reduce the AC content of the output. Conversion ratio is reduced by losses in transformer windings and power dissipation in the rectifier element itself.
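The conversion ratio can also be estimated directly from sampled waveforms. The Python sketch below (an illustrative check assuming an ideal rectifier and a purely resistive load, with R = 1 and V_peak = 1) anticipates the closed-form values derived in the following paragraphs:

import numpy as np

theta = np.linspace(0, 2 * np.pi, 1_000_000, endpoint=False)
waveforms = {
    "half-wave": np.clip(np.sin(theta), 0, None),
    "full-wave": np.abs(np.sin(theta)),
}

for name, v_out in waveforms.items():
    p_dc = v_out.mean() ** 2    # DC power into the load: V_dc^2 / R, with R = 1
    p_ac = np.mean(v_out ** 2)  # total power drawn:      V_rms^2 / R, with R = 1
    print(f"{name}: eta = {p_dc / p_ac:.1%}")

This prints roughly 40.5% and 81.1%, matching the closed forms 4/π² and 8/π² below.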
This ratio is of little practical significance because a rectifier is almost always followed by a filter to increase DC voltage and reduce ripple. In some three-phase and multi-phase applications the conversion ratio is high enough that smoothing circuitry is unnecessary. In other circuits, like filament heater circuits in vacuum tube electronics where the load is almost entirely resistive, smoothing circuitry may be omitted because resistors dissipate both AC and DC power, so no power is lost. For a half-wave rectifier the ratio is very modest. The DC output power is:

P_dc = V_dc · I_dc = (V_peak / π) · (I_peak / π)

while the input power is:

P_ac = V_rms · I_rms = (V_peak / 2) · (I_peak / 2)

(the divisors are 2 rather than √2 because no power is delivered on the negative half-cycle). Thus the maximum conversion ratio for a half-wave rectifier is:

η = P_dc / P_ac = 4 / π² ≈ 40.5%

Similarly, for a full-wave rectifier:

η = [(2 V_peak / π) · (2 I_peak / π)] / [(V_peak / √2) · (I_peak / √2)] = 8 / π² ≈ 81%

Three-phase rectifiers, especially three-phase full-wave rectifiers, have much greater conversion ratios because the ripple is intrinsically smaller. For a three-phase half-wave rectifier, the maximum conversion ratio is approximately 96.8%; for a three-phase full-wave rectifier, it is approximately 99.8%.

Transformer utilization ratio
The transformer utilization factor (TUF) of a rectifier circuit is defined as the ratio of the DC power available at the load resistor to the AC rating of the transformer secondary winding. The rating of the transformer can be defined as:

P_ac,rated = V_rms,secondary · I_rms,secondary

Rectifier voltage drop
https://en.wikipedia.org/wiki/Cardiac%20arrest
Cardiac arrest
Cardiac arrest (also known as sudden cardiac arrest [SCA]) is when the heart suddenly and unexpectedly stops beating. When the heart stops beating, blood cannot properly circulate around the body and the blood flow to the brain and other organs is decreased. When the brain does not receive enough blood, this can cause a person to lose consciousness, and brain cells can start to die due to lack of oxygen. Coma and persistent vegetative state may result from cardiac arrest. Cardiac arrest is also identified by a lack of central pulses and abnormal or absent breathing. Cardiac arrest and resultant hemodynamic collapse often occur due to arrhythmias (irregular heart rhythms). Ventricular fibrillation and ventricular tachycardia are most commonly recorded. However, as many incidents of cardiac arrest occur out-of-hospital or when a person is not having their cardiac activity monitored, it is difficult to identify the specific mechanism in each case. Structural heart disease, such as coronary artery disease, is a common underlying condition in people who experience cardiac arrest. The most common risk factors include age and cardiovascular disease. Additional underlying cardiac conditions include heart failure and inherited arrhythmias. Additional factors that may contribute to cardiac arrest include major blood loss, lack of oxygen, electrolyte disturbance (such as very low potassium), electrical injury, and intense physical exercise. Cardiac arrest is diagnosed by the inability to find a pulse in an unresponsive patient. The goal of treatment for cardiac arrest is to rapidly achieve return of spontaneous circulation using a variety of interventions including CPR, defibrillation, and/or cardiac pacing. Two protocols have been established for CPR: basic life support (BLS) and advanced cardiac life support (ACLS). If return of spontaneous circulation is achieved with these interventions, the event is classified as a sudden cardiac arrest; by contrast, if the person does not survive the event, it is referred to as a sudden cardiac death. Among those whose pulses are re-established, the care team may initiate measures to protect the person from brain injury and preserve neurological function. Some methods may include airway management and mechanical ventilation, maintenance of blood pressure and end-organ perfusion via fluid resuscitation and vasopressor support, correction of electrolyte imbalance, EKG monitoring and management of reversible causes, and temperature management. Targeted temperature management may improve outcomes. In post-resuscitation care, an implantable cardiac defibrillator may be considered to reduce the chance of death from recurrence. Per the 2015 American Heart Association Guidelines, there were approximately 535,000 incidents of cardiac arrest annually in the United States (about 13 per 10,000 people). Of these, 326,000 (61%) occurred outside of a hospital setting, while 209,000 (39%) occurred within a hospital. Cardiac arrest becomes more common with age and affects males more often than females. Black people are twice as likely to die from cardiac arrest as white people. Asian and Hispanic people are not as frequently affected as white people.

Signs and symptoms
Cardiac arrest is not preceded by any warning symptoms in approximately 50 percent of people. For individuals who do experience symptoms, the symptoms are usually nonspecific to the cardiac arrest.
Examples include new or worsening chest pain, fatigue, blackouts, dizziness, shortness of breath, weakness, and vomiting. When cardiac arrest is suspected by a layperson (due to signs of unconsciousness, abnormal breathing, and/or no pulse), it should be assumed that the victim is in cardiac arrest. Bystanders should call emergency medical services (such as 911, 999 or 112) and initiate CPR.

Risk factors
Major risk factors for cardiac arrest include age and underlying cardiovascular disease. A prior episode of sudden cardiac arrest increases the likelihood of future episodes. A 2021 meta-analysis assessing the recurrence of cardiac arrest in out-of-hospital cardiac arrest survivors identified that 15% of survivors experienced a second event, most often in the first year. Furthermore, of those who experienced recurrence, 35% had a third episode. Additional significant risk factors include cigarette smoking, high blood pressure, high cholesterol, history of arrhythmia, lack of physical exercise, obesity, diabetes, family history, cardiomyopathy, alcohol use, and possibly caffeine intake. Current cigarette smokers with coronary artery disease were found to have a two- to threefold increase in the risk of sudden death between ages 30 and 59. Furthermore, it was found that former smokers' risk was closer to that of those who had never smoked. A statistical analysis of many of these risk factors determined that approximately 50% of all cardiac arrests occur in the 10% of the population perceived to be at greatest risk, due to the aggregate harm of multiple risk factors, demonstrating that the cumulative risk of multiple comorbidities exceeds the sum of each risk individually.

Causes and mechanisms
The underlying causes of sudden cardiac arrest can result from cardiac and non-cardiac etiologies. The most common underlying causes are different, depending on the patient's age. Common cardiac causes include coronary artery disease, non-atherosclerotic coronary artery abnormalities, structural heart damage, and inherited arrhythmias. Common non-cardiac causes include respiratory arrest, diabetes, medications, and trauma. The most common mechanism underlying sudden cardiac arrest is an arrhythmia (an irregular rhythm). Without organized electrical activity in the heart muscle, there is inconsistent contraction of the ventricles, which prevents the heart from generating adequate cardiac output (forward pumping of blood from the heart to the rest of the body). This hemodynamic collapse results in poor blood flow to the brain and other organs, which if prolonged causes persistent damage. There are many different types of arrhythmias, but the ones most frequently recorded in sudden cardiac arrest are ventricular tachycardia and ventricular fibrillation. Both ventricular tachycardia and ventricular fibrillation can prevent the heart from generating coordinated ventricular contractions, thereby failing to sustain adequate blood circulation. Less common types of arrhythmias occurring in cardiac arrest include pulseless electrical activity, bradycardia, and asystole. These rhythms are seen when there is prolonged cardiac arrest, progression of ventricular fibrillation, or efforts like defibrillation executed to resuscitate the person.

Cardiac causes

Coronary artery disease
Coronary artery disease (CAD), also known as atherosclerotic cardiovascular disease, involves the deposition of cholesterol and subsequent inflammation-driven formation of atherosclerotic plaques in the arteries.
CAD involves the accumulation and remodeling of the coronary vessels along with other systemic blood vessels. When an atherosclerotic plaque dislodges, it can block the flow of blood and oxygen through small arteries, such as the coronary arteries, resulting in ischemic injury. In the heart, this results in myocardial tissue damage, which can lead to structural and functional changes that disrupt normal conduction patterns and alter heart rate and contraction. CAD underlies 68 percent of sudden cardiac deaths in the United States. Indeed, postmortem examinations have shown that the most common finding in cases of sudden cardiac death is chronic, high-grade stenosis of at least one segment of a major coronary artery. While CAD is a leading contributing factor, it is an age-dependent one, with CAD being a less common cause of sudden cardiac death in people under the age of 40.

Non-atherosclerotic coronary artery abnormalities
Abnormalities of the coronary arteries not related to atherosclerosis include inflammation (known as coronary arteritis), embolism, vasospasm, mechanical abnormalities related to connective tissue diseases or trauma, and congenital coronary artery anomalies (most commonly anomalous origin of the left coronary artery from the pulmonary artery). These conditions account for 10–15% of cardiac arrests and sudden cardiac deaths. Coronary arteritis commonly results from a pediatric febrile inflammatory condition known as Kawasaki disease. Other types of vasculitis can also contribute to an increased risk of sudden cardiac death. Embolism, or clotting, of the coronary arteries most commonly occurs from septic emboli secondary to endocarditis with involvement of the aortic valve, tricuspid valve, or prosthetic valves. Coronary vasospasm may result in cardiac arrhythmias, altering the heart's electrical conduction with a risk of complete cardiac arrest from severe or prolonged rhythm changes. Mechanical abnormalities with an associated risk of cardiac arrest may arise from coronary artery dissection, which can be attributed to Marfan syndrome or trauma.

Structural heart disease
Examples of structural heart diseases include: cardiomyopathies (hypertrophic, dilated, or arrhythmogenic), cardiac rhythm disturbances, myocarditis, and congestive heart failure. Left ventricular hypertrophy is a leading cause of sudden cardiac death in the adult population. This is most commonly the result of longstanding high blood pressure, or hypertension, which has led to maladaptive overgrowth of the muscular tissue of the left ventricle, the heart's main pumping chamber. This is because elevated blood pressure over the course of several years requires the heart to adapt to the requirement of pumping harder to adequately circulate blood throughout the body. If the heart does this for a prolonged period of time, the left ventricle can experience hypertrophy (grow larger) in a way that decreases the heart's effectiveness. Left ventricular hypertrophy can be demonstrated on an echocardiogram and electrocardiogram (EKG). Abnormalities of the cardiac conduction system (notably the atrioventricular node and His-Purkinje system) may predispose an individual to arrhythmias with a risk of progressing to sudden cardiac arrest, although this risk remains low. Many of these conduction blocks can be treated with internal cardiac defibrillators for those determined to be at high risk due to severity of fibrosis or severe electrophysiologic disturbances.
Structural heart diseases unrelated to coronary artery disease account for 10% of all sudden cardiac deaths. A 1999 review of sudden cardiac deaths in the United States found that structural heart diseases accounted for over 30% of sudden cardiac arrests in those under 30 years.

Inherited arrhythmia syndromes
Arrhythmias not due to structural heart disease account for 5 to 10% of sudden cardiac arrests. These are frequently caused by genetic disorders. The genetic mutations often affect specialized proteins known as ion channels that conduct electrically charged particles across the cell membrane, and this group of conditions is therefore often referred to as channelopathies. Examples of these inherited arrhythmia syndromes include long QT syndrome (LQTS), Brugada syndrome, catecholaminergic polymorphic ventricular tachycardia, and short QT syndrome. Many are also associated with environmental or neurogenic triggers, such as a response to loud sounds, that can initiate lethal arrhythmias. LQTS, a condition often mentioned in young people's deaths, occurs in one of every 5000 to 7000 newborns and is estimated to be responsible for 3000 deaths annually, compared to the approximately 300,000 cardiac arrests seen by emergency services. These conditions are a fraction of the overall deaths related to cardiac arrest but represent conditions that may be detected prior to arrest and may be treatable. The symptomatic expression of LQTS is quite broad and more often presents with syncope rather than cardiac arrest. The risk of cardiac arrest is still present, and people with family histories of sudden cardiac arrest should be screened for LQTS and other treatable causes of lethal arrhythmia. Higher levels of risk for cardiac arrest are associated with female sex, more significant QT prolongation, history of unexplained syncope (fainting spells), or premature sudden cardiac death. Additionally, individuals with LQTS should avoid certain medications that carry the risk of increasing the severity of this conduction abnormality, such as certain anti-arrhythmics, anti-depressants, and quinolone or macrolide antibiotics. Another condition that promotes arrhythmias is Wolff-Parkinson-White syndrome, in which an accessory conduction pathway bypassing the atrioventricular node is present and can cause abnormal conduction patterns leading to supraventricular tachycardia and cardiac arrest.

Non-cardiac causes
Non-cardiac causes account for 15 to 25% of cardiac arrests. Common non-cardiac causes include respiratory arrest, diabetes, certain medications, and blunt trauma (especially to the chest). Respiratory arrest will be followed by cardiac arrest unless promptly treated. Respiratory arrest can be caused by pulmonary embolus, choking, drowning, trauma, drug overdose, and poisoning. Pulmonary embolus carries a high mortality rate and may be the triggering cause for up to 5% of cardiac arrests, according to a retrospective study from an urban tertiary care emergency department. Diabetes-related factors contributing to cardiac arrest include silent myocardial ischemia, nervous system dysfunction, and electrolyte disturbances leading to abnormal cardiac repolarization. Certain medications can worsen an existing arrhythmia. Some examples include antibiotics like macrolides, diuretics, and heart medications such as anti-arrhythmic medications.
Additional non-cardiac causes include hemorrhage, aortic rupture, hypovolemic shock, pulmonary embolism, poisoning such as from the stings of certain jellyfish, and electrical injury. Circadian patterns are also recognized as triggering factors in cardiac arrest. Per a 2021 systematic review, there are two main peak times during the day at which cardiac arrest occurs: the first is in the morning hours and the second is in the afternoon. Moreover, survival rates following cardiac arrest were lowest when it occurred between midnight and 6 am. Many of these non-cardiac causes of cardiac arrest are reversible. A common mnemonic used to recall the reversible causes of cardiac arrest is the Hs and Ts. The Hs are hypovolemia, hypoxia, hydrogen cation excess (acidosis), hyperkalemia, hypokalemia, hypothermia, and hypoglycemia. The Ts are toxins, (cardiac) tamponade, tension pneumothorax, thrombosis (myocardial infarction), thromboembolism, and trauma.

Mechanism
The definitive electrical mechanisms of cardiac arrest, which may arise from any of the functional, structural, or physiologic abnormalities mentioned above, are characterized by arrhythmias. Ventricular fibrillation and pulseless or sustained ventricular tachycardia are the most commonly recorded arrhythmias preceding cardiac arrest. These are rapid and erratic arrhythmias that alter the circulatory pathway such that adequate blood flow cannot be sustained to meet the body's needs. The mechanism responsible for the majority of sudden cardiac deaths is ventricular fibrillation. Ventricular fibrillation is a tachyarrhythmia characterized by turbulent electrical activity in the ventricular myocardium, leading to a heart rate too disorganized and rapid to produce any meaningful cardiac output, thus resulting in insufficient perfusion of the brain and essential organs. Some of the electrophysiologic mechanisms underpinning ventricular fibrillation include ectopic automaticity, re-entry, and triggered activity. However, structural changes in the diseased heart as a result of inherited factors (mutations in ion-channel coding genes, for example) cannot explain the sudden onset of cardiac arrest. In ventricular tachycardia, the heart also beats faster than normal, which may prevent the heart chambers from properly filling with blood. Ventricular tachycardia is characterized by an altered QRS complex and a heart rate greater than 100 beats per minute. When V-tach is sustained (lasting for at least 30 seconds), inadequate blood flow to heart tissue can lead to cardiac arrest. Bradyarrhythmias occur following dissociation of spontaneous electrical conduction and the mechanical function of the heart, resulting in pulseless electrical activity (PEA), or through complete absence of electrical activity of the heart, resulting in asystole. Similar to the result of tachyarrhythmias, these conditions lead to an inability to sustain adequate blood flow, though in the case of bradyarrhythmias the underlying cause is an absence of mechanical activity rather than rapid beats leading to disorganization.

Diagnosis
Cardiac arrest is synonymous with clinical death. The physical examination to diagnose cardiac arrest focuses on the absence of a pulse. In many cases, lack of a central pulse (carotid arteries or subclavian arteries) is the gold standard. Lack of a pulse in the periphery (radial/pedal) may also result from other conditions (e.g. shock) or be the rescuer's misinterpretation.
Obtaining a thorough history can help inform the potential cause and prognosis. The provider taking the person's clinical history should try to learn whether the episode was observed by anyone else, when it happened, what the patient was doing (in particular, whether there was any trauma), and whether drugs were involved. During resuscitation efforts, continuous monitoring equipment including EKG leads should be attached to the patient so that providers can analyze the electrical activity of the cardiac cycle and use this information to guide the management efforts. EKG readings will help to identify the arrhythmia present and allow the team to monitor any changes that occur with the administration of CPR and defibrillation. Clinicians classify cardiac arrest into "shockable" versus "non-shockable", as determined by the EKG rhythm. This refers to whether a particular class of cardiac dysrhythmia is treatable using defibrillation. The two "shockable" rhythms are ventricular fibrillation and pulseless ventricular tachycardia, while the two "non-shockable" rhythms are asystole and pulseless electrical activity. Moreover, in the post-resuscitation patient, a 12-lead EKG can help identify some causes of cardiac arrest, such as STEMI, which may require specific treatments. Point-of-care ultrasound (POCUS) is a tool that can be used to examine the movement of the heart and its force of contraction at the patient's bedside. POCUS can accurately diagnose cardiac arrest in hospital settings, as well as visualize cardiac wall motion contractions. Using POCUS, clinicians can have limited, two-dimensional views of different parts of the heart during arrest. These images can help clinicians determine whether electrical activity within the heart is pulseless or pseudo-pulseless, as well as help them diagnose the potentially reversible causes of an arrest. Published guidelines from the American Society of Echocardiography, American College of Emergency Physicians, European Resuscitation Council, and the American Heart Association, as well as the 2018 preoperative Advanced Cardiac Life Support guidelines, have recognized the potential benefits of using POCUS in diagnosing and managing cardiac arrest. POCUS can help predict outcomes in resuscitation efforts. Specifically, use of transthoracic ultrasound can be a helpful tool in predicting mortality in cases of cardiac arrest, with a systematic review from 2020 finding a significant positive correlation between the presence of cardiac motion and short-term survival with CPR. Owing to the inaccuracy of diagnosis based solely on central pulse detection, some bodies like the European Resuscitation Council have de-emphasized its importance. Instead, the current guidelines prompt individuals to begin CPR on any unconscious person with absent or abnormal breathing. The Resuscitation Council in the United Kingdom stands in line with the European Resuscitation Council's recommendations and those of the American Heart Association. They have suggested that the technique to check carotid pulses should be used only by healthcare professionals with specific training and expertise, and even then that it should be viewed in conjunction with other indicators like agonal respiration. Various other methods for detecting circulation and therefore diagnosing cardiac arrest have been proposed. Guidelines following the 2000 International Liaison Committee on Resuscitation recommendations were for rescuers to look for "signs of circulation", but not specifically the pulse.
These signs included coughing, gasping, color, twitching, and movement. In light of evidence that these guidelines were ineffective, the current International Liaison Committee on Resuscitation recommendation is that cardiac arrest should be diagnosed in all casualties who are unconscious and not breathing normally, a protocol similar to the one the European Resuscitation Council has adopted. In a non-acute setting, where the patient has died, the cause of cardiac arrest can be investigated via molecular autopsy or postmortem molecular testing, which uses a set of molecular techniques to find defects in cardiac ion channels. This can help elucidate the cause of death in the patient. Other physical signs or symptoms, and the clinical findings associated with them, can also help determine the potential cause of the cardiac arrest.

Prevention

Primary prevention
Given the poor outcomes following cardiac arrest, effort has been spent finding effective strategies to prevent cardiac arrest events. The approach to primary prevention promotes a healthy diet, exercise, limited alcohol consumption, and smoking cessation. Exercise is an effective preventative measure for cardiac arrest in the general population but may be risky for those with pre-existing conditions. The risk of a transient catastrophic cardiac event increases in individuals with heart disease during and immediately after exercise. The lifetime and acute risks of cardiac arrest are decreased in people with heart disease who perform regular exercise, perhaps suggesting that the benefits of exercise outweigh the risks. A 2021 study found that diet may be a modifiable risk factor for a lower incidence of sudden cardiac death. The study found that those who fell under the category of having "Southern [United States] diets", representing those of "added fats, fried food, eggs, organ and processed meats, and sugar-sweetened beverages", had a positive association with an increased risk of cardiac arrest, while those deemed to follow "Mediterranean diets" had an inverse relationship regarding the risk of cardiac arrest. According to a 2012 review, omega-3 PUFA supplementation is not associated with a lower risk of sudden cardiac death. A Cochrane review published in 2016 found moderate-quality evidence to show that blood pressure-lowering drugs do not reduce the risk of sudden cardiac death.

Secondary prevention
An implantable cardioverter-defibrillator (ICD) is a battery-powered device that monitors electrical activity in the heart and, when an arrhythmia is detected, can deliver an electrical shock to terminate the abnormal rhythm. ICDs are used to prevent sudden cardiac death (SCD) in those who have survived a prior episode of sudden cardiac arrest (SCA) due to ventricular fibrillation or ventricular tachycardia. Numerous studies have been conducted on the use of ICDs for the secondary prevention of SCD. These studies have shown improved survival with ICDs compared to the use of anti-arrhythmic drugs. ICD therapy is associated with a 50% relative risk reduction in death caused by an arrhythmia and a 25% relative risk reduction in all-cause mortality. Prevention of SCD with ICD therapy for high-risk patient populations has similarly shown improved survival rates in several large studies.
The high-risk patient populations in these studies were defined as those with severe ischemic cardiomyopathy (determined by a reduced left ventricular ejection fraction (LVEF)). The LVEF criteria used in these trials ranged from less than or equal to 30% in MADIT-II to less than or equal to 40% in MUSTT. In certain high-risk patient populations (such as patients with LQTS), ICDs are also used to prevent sudden cardiac death (primary prevention).

Crash teams
In hospital, a cardiac arrest is referred to as a "crash" or a "code". This typically refers to code blue on the hospital emergency codes. A dramatic drop in vital sign measurements is referred to as "coding" or "crashing", though coding is usually used when it results in cardiac arrest, while crashing might not. Treatment for cardiac arrest is sometimes referred to as "calling a code". Patients in general wards often deteriorate for several hours or even days before a cardiac arrest occurs. This has been attributed to a lack of knowledge and skill amongst ward-based staff, in particular, a failure to measure the respiratory rate, which is often the major predictor of a deterioration and can often change up to 48 hours prior to a cardiac arrest. In response, many hospitals now have increased training for ward-based staff. A number of "early warning" systems also exist that aim to quantify the person's risk of deterioration based on their vital signs and thus provide a guide to staff. In addition, specialist staff are being used more effectively to augment the work already being done at the ward level. These include:

Crash teams (or code teams) – These are designated staff members with particular expertise in resuscitation who are called to the scene of all arrests within the hospital. This usually involves a specialized cart of equipment (including a defibrillator) and drugs called a "crash cart" or "crash trolley".
Medical emergency teams – These teams respond to all emergencies with the aim of treating people in the acute phase of their illness in order to prevent a cardiac arrest. These teams have been found to decrease the rates of in-hospital cardiac arrest (IHCA) and improve survival.
Critical care outreach – In addition to providing the services of the other two types of teams, these teams are responsible for educating non-specialist staff. In addition, they help to facilitate transfers between intensive care/high dependency units and the general hospital wards. This is particularly important as many studies have shown that a significant percentage of patients discharged from critical care environments quickly deteriorate and are re-admitted; the outreach team offers support to ward staff to prevent this from happening.

Management
Sudden cardiac arrest may be treated via attempts at resuscitation. This is usually carried out based on basic life support (BLS), advanced cardiac life support (ACLS), pediatric advanced life support (PALS), or neonatal resuscitation program (NRP) guidelines.

Cardiopulmonary resuscitation
Early cardiopulmonary resuscitation (CPR) is essential to surviving cardiac arrest with good neurological function. It is recommended that it be started as soon as possible, with minimal interruptions once begun. The components of CPR that make the greatest difference in survival are chest compressions and defibrillating shockable rhythms. After defibrillation, chest compressions should be continued for two minutes before another rhythm check.
This is based on a compression rate of 100–120 compressions per minute, a compression depth of 5–6 centimeters into the chest, full chest recoil, and a ventilation rate of 10 breaths per minute. Mechanical chest compressions (as performed by a machine) are no better than chest compressions performed by hand. It is unclear if a few minutes of CPR before defibrillation results in different outcomes than immediate defibrillation. Correctly performed bystander CPR has been shown to increase survival; however, it is performed in fewer than 30% of out-of-hospital cardiac arrests (OHCAs). A 2019 meta-analysis found that use of dispatcher-assisted CPR improved outcomes, including survival, when compared with undirected bystander CPR. Likewise, a 2022 systematic review on exercise-related cardiac arrests supported early intervention with bystander CPR and AED use (for shockable rhythms), as they improve survival outcomes. If high-quality CPR has not resulted in return of spontaneous circulation and the person's heart rhythm is in asystole, stopping CPR and pronouncing the person's death is generally reasonable after 20 minutes. Exceptions to this include certain cases with hypothermia or drowning victims. Some of these cases should have longer and more sustained CPR until they are nearly normothermic. If cardiac arrest occurs after 20 weeks of pregnancy, the uterus should be pulled or pushed to the left during CPR. If a pulse has not returned by four minutes, an emergency Cesarean section is recommended.

Airway management
High levels of oxygen are generally given during CPR. Either a bag valve mask or an advanced airway may be used to help with breathing, particularly since vomiting and regurgitation are common, especially in OHCA. If this occurs, then modification to existing oropharyngeal suction may be required, such as using suction-assisted airway management. Tracheal intubation has not been found to improve survival rates or neurological outcomes in cardiac arrest, and in the prehospital environment may worsen them. Endotracheal tubes and supraglottic airways appear equally useful. Mouth-to-mouth as a means of providing respirations to the person has been phased out due to the risk of contracting infectious diseases from the affected person. When done by emergency medical personnel, 30 compressions followed by two breaths appear to be better than continuous chest compressions with breaths being given while compressions are ongoing. For bystanders, CPR that involves only chest compressions results in better outcomes as compared to standard CPR for those who have gone into cardiac arrest due to heart issues.

Defibrillation
Defibrillation is indicated if an electrically shockable heart rhythm is present. The two shockable rhythms are ventricular fibrillation and pulseless ventricular tachycardia. These shockable rhythms have a 25–40% likelihood of survival, compared with a significantly lower rate (less than 5%) for non-shockable rhythms. The non-shockable rhythms include asystole and pulseless electrical activity. Ventricular fibrillation involves the ventricles of the heart (the lower chambers responsible for pumping blood) rapidly contracting in a disorganized pattern, thereby limiting blood flow from the heart. This can be due to uncoordinated electrical activity. The electrocardiogram (ECG) generally shows irregular QRS complexes without P-waves. By contrast, the ECG for ventricular tachycardia will generally show a wide-complex QRS with more than 100 beats occurring per minute.
If sustained, ventricular tachycardia can also lead to hemodynamic instability and compromise, resulting in pulselessness and poor perfusion to vital organs. A defibrillator delivers an electrical current through a pair of electrodes placed on the person's chest. This is thought to depolarize myocardial tissue, thereby stopping the arrhythmia. Defibrillators can deliver energy as monophasic or biphasic waveforms, although biphasic defibrillators are the most common. For ventricular fibrillation, defibrillation techniques may utilize either monophasic or biphasic waveforms. Prior studies suggest that a biphasic shock is more likely to produce successful defibrillation after a single shock; however, the rate of survival is comparable between the methods. In out-of-hospital arrests, defibrillation is performed by an automated external defibrillator (AED), a portable machine that can be used by any user. The AED provides voice instructions that guide the process, automatically checks the person's condition, and applies the appropriate electric shocks. Some defibrillators even provide feedback on the quality of CPR compressions, encouraging the lay rescuer to press the person's chest hard enough to circulate blood. In addition, there is increasing use of public access defibrillation. This involves placing AEDs in public places and training staff in these areas on how to use them. This allows defibrillation to occur prior to the arrival of emergency services, which has been shown to increase chances of survival. People who have cardiac arrests in remote locations have worse outcomes following cardiac arrest. Defibrillation is applied to certain arrhythmias such as ventricular fibrillation and pulseless ventricular tachycardia. Defibrillation cannot be applied to asystole; CPR must be initiated first in this case. Moreover, defibrillation is different from synchronized cardioversion. In synchronized cardioversion, a similar approach is utilized in that electrical current is applied to correct an arrhythmia; however, it is used in cases where a pulse is present but the patient is hemodynamically unstable, such as in supraventricular tachycardia. Defibrillators may also be used as part of post-cardiac arrest management. These include the wearable defibrillator (such as LifeVest), the subcutaneous cardiac defibrillator, and the implantable cardiac defibrillator.

Medications
Medications recommended in the ACLS protocol include epinephrine, amiodarone, and lidocaine. The timing and administration of these medications depend on the underlying arrhythmia of the arrest. Epinephrine acts on the alpha-1 receptor, which in turn increases the blood flow that supplies the heart. Epinephrine in adults improves survival but does not appear to improve neurologically normal survival. In ventricular fibrillation and pulseless ventricular tachycardia, 1 mg of epinephrine is given every 3–5 minutes, following an initial round of CPR and defibrillation. Doses higher than 1 mg of epinephrine are not recommended for routine use in cardiac arrest. If the person has a non-shockable rhythm, such as asystole, following an initial round of CPR, 1 mg of epinephrine should be given every 3–5 minutes, with the goal of obtaining a shockable rhythm. Amiodarone and lidocaine are anti-arrhythmic medications. Amiodarone is a class III anti-arrhythmic. Amiodarone may be used in cases of ventricular fibrillation, pulseless ventricular tachycardia, and wide-complex tachycardia.
Lidocaine is a class Ib anti-arrhythmic, also used to manage acute arrhythmias. Anti-arrhythmic medications may be used after an unsuccessful defibrillation attempt. However, in those who remain in ventricular tachycardia or ventricular fibrillation despite defibrillation, neither lidocaine nor amiodarone improves survival to hospital discharge, although both equally improve survival to hospital admission. Following an additional round of CPR and defibrillation, amiodarone can also be administered. The first dose is given as a 300 mg bolus; the second dose is given as a 150 mg bolus. Additional medications Bicarbonate, given as sodium bicarbonate, works to counteract acidosis and hyperkalemia, both of which can contribute to and exacerbate cardiac arrest. If acid-base or electrolyte disturbance is evident, bicarbonate may be used. However, if there is little suspicion that these imbalances are occurring and contributing to the arrest, routine use of bicarbonate is not recommended as it does not provide additional benefit. Calcium, given as calcium chloride, works as an inotrope and vasopressor. Calcium is used in specific circumstances such as electrolyte disturbances (hyperkalemia) and calcium-channel blocker toxicity. Overall, calcium is not routinely used during cardiac arrest as it does not provide additional benefit (compared to non-use) and may even cause harm (poor neurologic outcomes). Vasopressin overall does not improve or worsen outcomes compared to epinephrine. The combination of epinephrine, vasopressin, and methylprednisolone appears to improve outcomes. The use of atropine, lidocaine, and amiodarone has not been shown to improve survival from cardiac arrest. Atropine is used for symptomatic bradycardia. It is given at a dose of 1 mg IV, and additional 1 mg IV doses can be given every 3–5 minutes for a total of 3 mg. However, the 2010 guidelines from the American Heart Association removed the recommendation for atropine use in pulseless electrical activity and asystole for lack of evidence supporting its use. Special considerations Hemodialysis patients carry a greater risk of cardiac arrest events. Multiple factors contribute, including increased cardiovascular risk factors, electrolyte disturbances (calcium and potassium, caused by accumulation and aggressive removal), and acid-base disturbances. Calcium levels are considered a key factor contributing to cardiac arrests in this population. Tricyclic antidepressant (TCA) overdose can lead to cardiac arrest, with typical ECG findings including a wide QRS and prolonged QTc. Treatment for this condition includes activated charcoal and sodium bicarbonate. Magnesium can be given at a dose of 2 g as an IV bolus to manage torsades de pointes. However, without a specific indication, magnesium is not generally given in cardiac arrest. In people with a confirmed pulmonary embolism as the cause of arrest, thrombolytics may be of benefit. Evidence for use of naloxone in those with cardiac arrest due to opioids is unclear, but it may still be used. In people with cardiac arrest due to a local anesthetic, lipid emulsion may be used. Targeted temperature management Current international guidelines suggest cooling adults after cardiac arrest using targeted temperature management (TTM) with the goal of improving neurological outcomes. The process involves cooling for a 24-hour period, with a target temperature of 32–36 °C, followed by gradual rewarming over the next 12 to 24 hours. 
There are several methods used to lower the body temperature, such as applying ice packs or cold-water circulating pads directly to the body or infusing cold saline. The effectiveness of TTM after OHCA is an area of ongoing study. Several recent reviews have found that patients treated with TTM have more favorable neurological outcomes. However, pre-hospital TTM after OHCA has been shown to increase the risk of adverse outcomes. The rates of re-arrest may be higher in people who were treated with pre-hospital TTM. Moreover, TTM may have adverse neurological effects in people who survive post-cardiac arrest. Osborn waves on ECG are frequent during TTM, particularly in patients treated at 33 °C. Osborn waves are not associated with increased risk of ventricular arrhythmia, and may be considered a benign physiological phenomenon, associated with lower mortality in univariable analyses. Do not resuscitate Some people choose to avoid aggressive measures at the end of life. A do not resuscitate order (DNR) in the form of an advance health care directive makes it clear that in the event of cardiac arrest, the person does not wish to receive cardiopulmonary resuscitation. Other directives may be made to stipulate the desire for intubation in the event of respiratory failure or, if comfort measures are all that are desired, by stipulating that healthcare providers should "allow natural death". Chain of survival Several organizations promote the idea of a chain of survival. The chain consists of the following "links":
Early recognition. If possible, recognition of illness before the person develops a cardiac arrest will allow the rescuer to prevent its occurrence. Early recognition that a cardiac arrest has occurred is key to survival; for every minute a patient stays in cardiac arrest, their chances of survival drop by roughly 10%.
Early CPR. CPR improves the flow of blood and of oxygen to vital organs, an essential component of treating a cardiac arrest. In particular, by keeping the brain supplied with oxygenated blood, the chances of neurological damage are decreased.
Early defibrillation. Defibrillation is effective for the management of ventricular fibrillation and pulseless ventricular tachycardia.
Early advanced care.
Early post-resuscitation care, which may include percutaneous coronary intervention.
If one or more links in the chain are missing or delayed, then the chances of survival drop significantly. These protocols are often initiated by a code blue, which usually denotes impending or acute onset of cardiac arrest or respiratory failure. Other Resuscitation with extracorporeal membrane oxygenation devices has been attempted, with better results for in-hospital cardiac arrest (29% survival) than OHCA (4% survival) in populations selected to benefit most. Cardiac catheterization in those who have survived an OHCA appears to improve outcomes, although high-quality evidence is lacking. It is recommended to be done as soon as possible in those who have had a cardiac arrest with ST elevation due to underlying heart problems. The precordial thump may be considered in those with witnessed, monitored, unstable ventricular tachycardia (including pulseless VT) if a defibrillator is not immediately ready for use, but it should not delay CPR and shock delivery or be used in those with unwitnessed OHCA. Prognosis The overall rate of survival among those who have OHCA is 10%. Among those who have an OHCA, 70% occur at home, and their survival rate is 6%. 
For those who have an in-hospital cardiac arrest (IHCA), the survival rate at one year from the occurrence of cardiac arrest is estimated to be 13%. For IHCA, survival to discharge is around 22%. Those who survive to return of spontaneous circulation and hospital admission frequently present with post-cardiac arrest syndrome, which usually involves neurological injury that can range from mild memory problems to coma. One-year survival is estimated to be higher in people with cardiac admission diagnoses (39%) when compared to those with non-cardiac admission diagnoses (11%). A 1997 review found rates of survival to discharge of 14%, although different studies varied from 0 to 28%. In those over the age of 70 who have a cardiac arrest while in hospital, survival to hospital discharge is less than 20%. How well these individuals manage after leaving the hospital is not clear. The global rate of people who were able to recover from OHCA after receiving CPR has been found to be approximately 30%, and the rate of survival to discharge from the hospital has been estimated at 9%. Survival to discharge from the hospital is more likely among people whose cardiac arrest was witnessed by a bystander or emergency medical services, who received bystander CPR, and who live in Europe and North America. Relatively lower survival to hospital discharge rates have been observed in Asian countries. Prognosis is typically assessed 72 hours or more after cardiac arrest. Rates of survival are better in those who had someone witness their collapse, received bystander CPR, and/or had either V-fib or V-tach when assessed. Survival among those with V-fib or V-tach is 15 to 23%. Women are more likely to survive cardiac arrest and leave the hospital than men. Hypoxic ischemic brain injury is a concerning outcome for people suffering a cardiac arrest. Most improvements in cognition occur during the first three months following cardiac arrest, with some individuals reporting improvement up to one year post-cardiac arrest. Between 50% and 70% of cardiac arrest survivors report fatigue as a symptom. Epidemiology United States The risk of cardiac arrest varies with geographical region, age, and gender. The lifetime risk is three times greater in men (12.3%) than women (4.2%) based on analysis of the Framingham Heart Study. This gender difference disappeared beyond 85 years of age. Around half of these individuals are younger than 65 years of age. Based on death certificates, sudden cardiac death accounts for about 20% of all deaths in the United States. In the United States, approximately 326,000 cases of out-of-hospital cardiac arrest and 209,000 cases of IHCA occur among adults annually, which works out to an incidence of approximately 110.8 per 100,000 adults per year. In the United States, cardiac arrest during pregnancy occurs in about one in twelve thousand deliveries, or 1.8 per 10,000 live births. Rates are lower in Canada. Other regions Non-Western regions of the world have differing incidences. The incidence of sudden cardiac death in China is 41.8 per 100,000 and in South India is 39.7 per 100,000. Society and culture Names In many publications, the stated or implicit meaning of "sudden cardiac death" is sudden death from cardiac causes. Some physicians call cardiac arrest "sudden cardiac death" even if the person survives. Thus one can hear mentions of "prior episodes of sudden cardiac death" in a living person. 
In 2021, the American Heart Association clarified that "heart attack" is often mistakenly used to describe cardiac arrest. While a heart attack refers to death of heart muscle tissue as a result of blood supply loss, cardiac arrest is caused when the heart's electrical system malfunctions. Furthermore, the American Heart Association explains that "if corrective measures are not taken rapidly, this condition progresses to sudden death. Cardiac arrest should be used to signify an event as described above, that is reversed, usually by CPR and/or defibrillation or cardioversion, or cardiac pacing. Sudden cardiac death should not be used to describe events that are not fatal". Slow code A "slow code" is a slang term for the practice of deceptively delivering sub-optimal CPR to a person in cardiac arrest, when CPR is considered to have no medical benefit. A "show code" is the practice of faking the response altogether for the sake of the person's family. Such practices are ethically controversial and are banned in some jurisdictions. The European Resuscitation Council stated in its 2021 guidelines that clinicians should not take part in "slow codes". According to the American College of Physicians, half-hearted resuscitation efforts are deceptive and should not be performed by physicians or nurses. Children In children, the most common cause of cardiac arrest is shock or respiratory failure that has not been treated. Cardiac arrhythmias are another possible cause. Arrhythmias such as asystole or bradycardia are more likely in children, in contrast to the ventricular fibrillation or tachycardia seen in adults. Additional causes of sudden unexplained cardiac arrest in children include hypertrophic cardiomyopathy and coronary artery abnormalities. In childhood hypertrophic cardiomyopathy, previous adverse cardiac events, non-sustained ventricular tachycardia, syncope, and left ventricular hypertrophy have been shown to predict sudden cardiac death. Other causes can include drugs, such as cocaine and methamphetamine, or overdose of medications, such as antidepressants. For management of pediatric cardiac arrest, CPR should be initiated if cardiac arrest is suspected. Guidelines provide algorithms for pediatric cardiac arrest management. Recommended medications during pediatric resuscitation include epinephrine, lidocaine, and amiodarone. However, the use of sodium bicarbonate or calcium is not recommended; the use of calcium in children has been associated with poor neurological function as well as decreased survival. Correct dosing of medications in children is dependent on weight, and to minimize time spent calculating medication doses, the use of a Broselow tape is recommended. Rates of survival in children with cardiac arrest are 3 to 16% in North America.
Biology and health sciences
Injury
null
60600
https://en.wikipedia.org/wiki/Barcode
Barcode
A barcode or bar code is a method of representing data in a visual, machine-readable form. Initially, barcodes represented data by varying the widths, spacings and sizes of parallel lines. These barcodes, now commonly referred to as linear or one-dimensional (1D), can be scanned by special optical scanners, called barcode readers, of which there are several types. Later, two-dimensional (2D) variants were developed, using rectangles, dots, hexagons and other patterns, called 2D barcodes or matrix codes, although they do not use bars as such. Both can be read using purpose-built 2D optical scanners, which exist in a few different forms. Matrix codes can also be read by a digital camera connected to a microcomputer running software that takes a photographic image of the barcode and analyzes the image to deconstruct and decode the code. A mobile device with a built-in camera, such as a smartphone, can function as the latter type of barcode reader using specialized application software and is suitable for both 1D and 2D codes. The barcode was invented by Norman Joseph Woodland and Bernard Silver and patented in the US in 1952. The invention was based on Morse code, extended to thin and thick bars. However, it took over twenty years before this invention became commercially successful. The UK magazine Modern Railways (December 1962, pages 387–389) records how British Railways had already perfected a barcode-reading system capable of reading rolling stock travelling at speed with no mistakes. An early use of one type of barcode in an industrial context was sponsored by the Association of American Railroads in the late 1960s. Developed by General Telephone and Electronics (GTE) and called KarTrak ACI (Automatic Car Identification), this scheme involved placing colored stripes in various combinations on steel plates which were affixed to the sides of railroad rolling stock. Two plates were used per car, one on each side, with the arrangement of the colored stripes encoding information such as ownership, type of equipment, and identification number. The plates were read by a trackside scanner located, for instance, at the entrance to a classification yard, while the car was moving past. The project was abandoned after about ten years because the system proved unreliable after long-term use. Barcodes became commercially successful when they were used to automate supermarket checkout systems, a task for which they have become almost universal. The Uniform Grocery Product Code Council had chosen, in 1973, the barcode design developed by George Laurer. Laurer's barcode, with vertical bars, printed better than the circular barcode developed by Woodland and Silver. Their use has spread to many other tasks that are generically referred to as automatic identification and data capture (AIDC). The first successful system using barcodes was in the UK supermarket group Sainsbury's in 1972, using shelf-mounted barcodes which were developed by Plessey. In June 1974, Marsh supermarket in Troy, Ohio used a scanner made by Photographic Sciences Corporation to scan the Universal Product Code (UPC) barcode on a pack of Wrigley's chewing gum. QR codes, a specific type of 2D barcode, rose in popularity in the second decade of the 2000s due to the growth in smartphone ownership. 
Other systems have made inroads in the AIDC market, but the simplicity, universality and low cost of barcodes has limited the role of these other systems, particularly before technologies such as radio-frequency identification (RFID) became widely available in the mid-1990s. History In 1948, Bernard Silver, a graduate student at Drexel Institute of Technology in Philadelphia, Pennsylvania, US overheard the president of the local food chain, Food Fair, asking one of the deans to research a system to automatically read product information during checkout. Silver told his friend Norman Joseph Woodland about the request, and they started working on a variety of systems. Their first working system used ultraviolet ink, but the ink faded too easily and was expensive. Convinced that the system was workable with further development, Woodland left Drexel, moved into his father's apartment in Florida, and continued working on the system. His next inspiration came from Morse code, and he formed his first barcode from sand on the beach. "I just extended the dots and dashes downwards and made narrow lines and wide lines out of them." To read them, he adapted technology from optical soundtracks in movies, using a 500-watt incandescent light bulb shining through the paper onto an RCA935 photomultiplier tube (from a movie projector) on the far side. He later decided that the system would work better if it were printed as a circle instead of a line, allowing it to be scanned in any direction. On 20 October 1949 Woodland and Silver filed a patent application for "Classifying Apparatus and Method", in which they described both the linear and bull's eye printing patterns, as well as the mechanical and electronic systems needed to read the code. The patent was issued on 7 October 1952 as US Patent 2,612,994. In 1951, Woodland moved to IBM and continually tried to interest the company in developing the system. IBM eventually commissioned a report on the idea, which concluded that it was both feasible and interesting, but that processing the resulting information would require equipment that was some time off in the future. IBM offered to buy the patent, but the offer was not accepted. Philco purchased the patent in 1962 and then sold it to RCA sometime later. Collins at Sylvania During his time as an undergraduate, David Jarrett Collins worked at the Pennsylvania Railroad and became aware of the need to automatically identify railroad cars. Immediately after receiving his master's degree from MIT in 1959, he started work at GTE Sylvania and began addressing the problem. He developed a system called KarTrak using blue, white and red reflective stripes attached to the side of the cars, encoding a four-digit company identifier and a six-digit car number. Light reflected off the colored stripes was read by photomultiplier vacuum tubes. The Boston and Maine Railroad tested the KarTrak system on their gravel cars in 1961. The tests continued until 1967, when the Association of American Railroads (AAR) selected it as a standard, automatic car identification, across the entire North American fleet. The installations began on 10 October 1967. However, the economic downturn and rash of bankruptcies in the industry in the early 1970s greatly slowed the rollout, and it was not until 1974 that 95% of the fleet was labeled. To add to its woes, the system was found to be easily fooled by dirt in certain applications, which greatly affected accuracy. 
The AAR abandoned the system in the late 1970s, and it was not until the mid-1980s that they introduced a similar system, this time based on radio tags. The railway project had failed, but a toll bridge in New Jersey requested a similar system so that it could quickly scan for cars that had purchased a monthly pass. Then the US Post Office requested a system to track trucks entering and leaving their facilities. These applications required special retroreflector labels. Finally, Kal Kan asked the Sylvania team for a simpler (and cheaper) version which they could put on cases of pet food for inventory control. Computer Identics Corporation In 1967, with the railway system maturing, Collins went to management looking for funding for a project to develop a black-and-white version of the code for other industries. They declined, saying that the railway project was large enough, and they saw no need to branch out so quickly. Collins then quit Sylvania and formed the Computer Identics Corporation. As its first innovations, Computer Identics moved from using incandescent light bulbs in its systems, replacing them with helium–neon lasers, and incorporated a mirror as well, making it capable of locating a barcode up to a meter (3 feet) in front of the scanner. This made the entire process much simpler and more reliable, and typically enabled these devices to deal with damaged labels, as well, by recognizing and reading the intact portions. Computer Identics Corporation installed one of its first two scanning systems in the spring of 1969 at a General Motors (Buick) factory in Flint, Michigan. The system was used to identify a dozen types of transmissions moving on an overhead conveyor from production to shipping. The other scanning system was installed at General Trading Company's distribution center in Carlstadt, New Jersey to direct shipments to the proper loading bay. Universal Product Code In 1966 the National Association of Food Chains (NAFC) held a meeting on the idea of automated checkout systems. RCA, which had purchased the rights to the original Woodland patent, attended the meeting and initiated an internal project to develop a system based on the bullseye code. The Kroger grocery chain volunteered to test it. In 1970 the NAFC established the Ad-Hoc Committee for U.S. Supermarkets on a Uniform Grocery-Product Code to set guidelines for barcode development. In addition, it created a symbol-selection subcommittee to help standardize the approach. In cooperation with the consulting firm McKinsey & Co., they developed a standardized 11-digit code for identifying products. The committee then sent out a contract tender to develop a barcode system to print and read the code. The request went to Singer, National Cash Register (NCR), Litton Industries, RCA, Pitney-Bowes, IBM and many others. A wide variety of barcode approaches was studied, including linear codes, RCA's bullseye concentric circle code, starburst patterns and others. In the spring of 1971 RCA demonstrated their bullseye code at another industry meeting. IBM executives at the meeting noticed the crowds at the RCA booth and immediately began developing their own system. IBM marketing specialist Alec Jablonover remembered that the company still employed Woodland, and he established a new facility in Research Triangle Park to lead development. In July 1972 RCA began an 18-month test in a Kroger store in Cincinnati. 
Barcodes were printed on small pieces of adhesive paper and attached by hand by store employees when they were adding price tags. The code proved to have a serious problem: the printers would sometimes smear ink, rendering the code unreadable in most orientations. However, a linear code, like the one being developed by Woodland at IBM, was printed in the direction of the stripes, so extra ink would simply make the code "taller" while remaining readable. So on 3 April 1973 the IBM UPC was selected as the NAFC standard. IBM had designed five versions of UPC symbology for future industry requirements: UPC A, B, C, D, and E. NCR installed a testbed system at Marsh's Supermarket in Troy, Ohio, near the factory that was producing the equipment. On 26 June 1974, a 10-pack of Wrigley's Juicy Fruit gum was scanned, registering the first commercial use of the UPC. In 1971 an IBM team was assembled for an intensive planning session, threshing out, 12 to 18 hours a day, how the technology would be deployed and operate cohesively across the system, and scheduling a roll-out plan. By 1973, the team were meeting with grocery manufacturers to introduce the symbol that would need to be printed on the packaging or labels of all of their products. There were no cost savings for a grocery to use it unless at least 70% of the grocery's products had the barcode printed on the product by the manufacturer. IBM projected that 75% would be needed in 1975. Economic studies conducted for the grocery industry committee projected over $40 million in savings to the industry from scanning by the mid-1970s. Those numbers were not achieved in that time-frame, and some predicted the demise of barcode scanning. The usefulness of the barcode required the adoption of expensive scanners by a critical mass of retailers while manufacturers simultaneously adopted barcode labels. Neither wanted to move first, and results were not promising for the first couple of years, with Business Week proclaiming "The Supermarket Scanner That Failed" in a 1976 article. Sims Supermarkets were the first location in Australia to use barcodes, starting in 1979. Barcode system A barcode system is a network of hardware and software, consisting primarily of mobile computers, printers, handheld scanners, infrastructure, and supporting software. Barcode systems are used to automate data collection where hand recording is neither timely nor cost effective. Despite often being provided by the same company, barcode systems are not radio-frequency identification (RFID) systems; many companies use both technologies as part of larger resource management systems. A typical barcode system consists of some infrastructure, either wired or wireless, that connects some number of mobile computers, handheld scanners, and printers to one or many databases that store and analyze the data collected by the system. At some level there must be some software to manage the system. The software may be as simple as code that manages the connection between the hardware and the database or as complex as an ERP, MRP, or some other inventory management software. Hardware A wide range of hardware is manufactured for use in barcode systems by such manufacturers as Datalogic, Intermec, HHP (Hand Held Products), Microscan Systems, Unitech, Metrologic, PSC, and PANMOBIL, with the best known brand of handheld scanners and mobile computers being produced by Symbol, a division of Motorola. 
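As a concrete illustration of the simplest kind of glue software described above, the following Python sketch records scans from a scanner configured as a keyboard wedge or USB HID device, which presents each scan to the host as a typed line of text. The UPC-A check-digit rule it applies is the standard one; the database file and table schema are illustrative only, not any vendor's API.
import sqlite3
from datetime import datetime, timezone

def upc_a_is_valid(code: str) -> bool:
    # UPC-A check: 3 times the sum of the digits in odd positions
    # (1st, 3rd, ...) plus the sum of the digits in even positions,
    # including the final check digit, must be a multiple of 10.
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    return (3 * sum(digits[0::2]) + sum(digits[1::2])) % 10 == 0

def main() -> None:
    conn = sqlite3.connect("scans.db")  # illustrative database file
    conn.execute("CREATE TABLE IF NOT EXISTS scans (code TEXT, scanned_at TEXT)")
    print("Scan UPC-A barcodes (blank line quits):")
    while (code := input().strip()):  # a wedge scanner "types" the code plus Enter
        if upc_a_is_valid(code):
            conn.execute("INSERT INTO scans VALUES (?, ?)",
                         (code, datetime.now(timezone.utc).isoformat()))
            conn.commit()
        else:
            print(f"Rejected {code!r}: check digit failed")
    conn.close()

if __name__ == "__main__":
    main()
Run against the commonly cited UPC for the first-scanned pack of Wrigley's gum, 036000291452, the check passes: 3 × (0 + 6 + 0 + 2 + 1 + 5) + (3 + 0 + 0 + 9 + 4 + 2) = 60, a multiple of 10.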
Software Some ERP, MRP, and other inventory management software have built-in support for barcode reading. Alternatively, custom interfaces can be created using a language such as C++, C#, Java, Visual Basic.NET, and many others. In addition, software development kits are produced to aid the process. Industrial adoption In 1981 the United States Department of Defense adopted the use of Code 39 for marking all products sold to the United States military. This system, Logistics Applications of Automated Marking and Reading Symbols (LOGMARS), is still used by DoD and is widely viewed as the catalyst for widespread adoption of barcoding in industrial uses. Use Barcodes are widely used around the world in many contexts. In stores, UPC barcodes are pre-printed on most items other than fresh produce from a grocery store. This speeds up processing at check-outs and helps track items and also reduces instances of shoplifting involving price tag swapping, although shoplifters can now print their own barcodes. Barcodes that encode a book's ISBN are also widely pre-printed on books, journals and other printed materials. In addition, retail chain membership cards use barcodes to identify customers, allowing for customized marketing and greater understanding of individual consumer shopping patterns. At the point of sale, shoppers can get product discounts or special marketing offers through the address or e-mail address provided at registration. Barcodes are widely used in healthcare and hospital settings, ranging from patient identification (to access patient data, including medical history, drug allergies, etc.) to creating SOAP notes with barcodes to medication management. They are also used to facilitate the separation and indexing of documents that have been imaged in batch scanning applications, track the organization of species in biology, and integrate with in-motion checkweighers to identify the item being weighed in a conveyor line for data collection. They can also be used to keep track of objects and people; they are used to keep track of rental cars, airline luggage, nuclear waste, express mail, and parcels. Barcoded tickets (which may be printed by the customer on their home printer, or stored on their mobile device) allow the holder to enter sports arenas, cinemas, theatres, fairgrounds, and transportation, and are used to record the arrival and departure of vehicles from rental facilities etc. This can allow proprietors to identify duplicate or fraudulent tickets more easily. Barcodes are widely used in shop floor control application software where employees can scan work orders and track the time spent on a job. Barcodes are also used in some kinds of non-contact 1D and 2D position sensors. A series of barcodes are used in some kinds of absolute 1D linear encoders. The barcodes are packed close enough together that the reader always has one or two barcodes in its field of view. As a kind of fiducial marker, the relative position of the barcode in the field of view of the reader gives incremental precise positioning, in some cases with sub-pixel resolution. The data decoded from the barcode gives the absolute coarse position. An "address carpet", used in digital paper, such as Howell's binary pattern and the Anoto dot pattern, is a 2D barcode designed so that a reader, even though only a tiny portion of the complete carpet is in the field of view of the reader, can find its absolute X, Y position and rotation in the carpet. Matrix codes can embed a hyperlink to a web page. 
A mobile device with a built-in camera might be used to read the pattern and browse the linked website, which can help a shopper find the best price for an item in the vicinity. Since 2005, airlines have used an IATA-standard 2D barcode on boarding passes (Bar Coded Boarding Pass (BCBP)), and since 2008, 2D barcodes sent to mobile phones have enabled electronic boarding passes. Some applications for barcodes have fallen out of use. In the 1970s and 1980s, software source code was occasionally encoded in a barcode and printed on paper (Cauzin Softstrip and Paperbyte are barcode symbologies specifically designed for this application), and the 1991 Barcode Battler computer game system used any standard barcode to generate combat statistics. Artists have used barcodes in art, such as Scott Blake's Barcode Jesus, as part of the post-modernism movement. Symbologies The mapping between messages and barcodes is called a symbology. The specification of a symbology includes the encoding of the message into bars and spaces, any required start and stop markers, the size of the quiet zone required before and after the barcode, and the computation of a checksum. Linear symbologies can be classified mainly by two properties: Continuous vs. discrete Characters in discrete symbologies are composed of n bars and n − 1 spaces. There is an additional space between characters, but it does not convey information, and may have any width as long as it is not confused with the end of the code. Characters in continuous symbologies are composed of n bars and n spaces, and usually abut, with one character ending with a space and the next beginning with a bar, or vice versa. A special end pattern that has bars on both ends is required to end the code. Two-width vs. many-width A two-width symbology, also called a binary bar code, contains bars and spaces of two widths, "wide" and "narrow". The precise width of the wide bars and spaces is not critical; typically, it is permitted to be anywhere between 2 and 3 times the width of the narrow equivalents. Some other symbologies use bars of two different heights (POSTNET), or the presence or absence of bars (CPC Binary Barcode). These are normally also considered binary bar codes. Bars and spaces in many-width symbologies are all multiples of a basic width called the module; most such codes use four widths of 1, 2, 3 and 4 modules. Some symbologies use interleaving. The first character is encoded using black bars of varying width. The second character is then encoded by varying the width of the white spaces between these bars. Thus, characters are encoded in pairs over the same section of the barcode. Interleaved 2 of 5 is an example of this. Stacked symbologies repeat a given linear symbology vertically. The most common among the many 2D symbologies are matrix codes, which feature square or dot-shaped modules arranged on a grid pattern. 2D symbologies also come in circular and other patterns and may employ steganography, hiding modules within an image (for example, DataGlyphs). Linear symbologies are optimized for laser scanners, which sweep a light beam across the barcode in a straight line, reading a slice of the barcode's light-dark patterns. Scanning at an angle makes the modules appear wider, but does not change the width ratios. Stacked symbologies are also optimized for laser scanning, with the laser making multiple passes across the barcode. In the 1990s development of charge-coupled device (CCD) imagers to read barcodes was pioneered by Welch Allyn. 
Imaging does not require moving parts, as a laser scanner does. By 2007, linear imaging had begun to supplant laser scanning as the preferred scan engine because of its performance and durability. 2D symbologies cannot be read by a laser, as there is typically no sweep pattern that can encompass the entire symbol. They must be scanned by an image-based scanner employing a CCD or other digital camera sensor technology. Barcode readers The earliest, and still the cheapest, barcode scanners are built from a fixed light and a single photosensor that is manually moved across the barcode. Barcode scanners can be classified into three categories based on their connection to the computer. The oldest type is the RS-232 barcode scanner, which requires special programming for transferring the input data to the application program. Keyboard interface scanners connect to a computer using a PS/2 or AT keyboard–compatible adaptor cable (a "keyboard wedge"); the barcode's data is sent to the computer as if it had been typed on the keyboard. Like keyboard interface scanners, USB scanners do not need custom code for transferring input data to the application program. On PCs running Windows the human interface device emulates the data merging action of a hardware "keyboard wedge", and the scanner automatically behaves like an additional keyboard. Most modern smartphones are able to decode barcodes using their built-in cameras. Google's mobile Android operating system can use its own Google Lens application to scan QR codes, or third-party apps like Barcode Scanner to read both one-dimensional barcodes and QR codes. Google's Pixel devices can natively read QR codes inside the default Pixel Camera app. Nokia's Symbian operating system featured a barcode scanner, while mbarcode is a QR code reader for the Maemo operating system. In Apple iOS 11, the native camera app can decode QR codes and can link to URLs, join wireless networks, or perform other operations depending on the QR Code contents. Other paid and free apps are available with scanning capabilities for other symbologies or for earlier iOS versions. With BlackBerry devices, the App World application can natively scan barcodes and load any recognized Web URLs on the device's Web browser. Windows Phone 7.5 is able to scan barcodes through the Bing search app. However, these devices are not designed specifically for the capturing of barcodes. As a result, they do not decode nearly as quickly or accurately as a dedicated barcode scanner or portable data terminal. Quality control and verification It is common for producers and users of bar codes to have a quality management system which includes verification and validation of bar codes. Barcode verification examines scanability and the quality of the barcode in comparison to industry standards and specifications. Barcode verifiers are primarily used by businesses that print and use barcodes. Any trading partner in the supply chain can test barcode quality. It is important to verify a barcode to ensure that any reader in the supply chain can successfully interpret a barcode with a low error rate. Retailers levy large penalties for non-compliant barcodes. These chargebacks can reduce a manufacturer's revenue by 2% to 10%. A barcode verifier works the way a reader does, but instead of simply decoding a barcode, a verifier performs a series of tests. For linear barcodes these tests are:
Edge contrast (EC): the difference between the space reflectance (Rs) and adjoining bar reflectance (Rb). EC = Rs − Rb.
Minimum bar reflectance (Rb): the smallest reflectance value in a bar.
Minimum space reflectance (Rs): the smallest reflectance value in a space.
Symbol contrast (SC): the difference in reflectance values of the lightest space (including the quiet zone) and the darkest bar of the symbol; SC = Rmax − Rmin. The greater the difference, the higher the grade. The parameter is graded as either A, B, C, D, or F.
Minimum edge contrast (ECmin): the smallest edge contrast (EC = Rs − Rb) found in the scan reflectance profile.
Modulation (MOD): graded A, B, C, D, or F, based on the relationship between minimum edge contrast and symbol contrast: MOD = ECmin/SC. The greater the difference between minimum edge contrast and symbol contrast, the lower the grade. Scanners and verifiers perceive narrower bars and spaces as having less intensity than wider bars and spaces; the comparison of the lesser intensity of narrow elements to the wide elements is called modulation. This condition is affected by aperture size.
Inter-character gap: in discrete barcodes, the space that disconnects two contiguous characters. When present, inter-character gaps are considered spaces (elements) for purposes of edge determination and reflectance parameter grades.
Defects: irregularities such as voids within bars and spots within spaces.
Decode: extracting the information which has been encoded in a bar code symbol.
Decodability: graded A, B, C, D, or F. The decodability grade indicates the amount of error in the width of the most deviant element in the symbol; the less deviation in the symbology, the higher the grade. Decodability is a measure of print accuracy using the symbology reference decode algorithm.
2D matrix symbols are evaluated on the parameters: symbol contrast, modulation, decode, unused error correction, fixed (finder) pattern damage, grid non-uniformity, and axial non-uniformity. Depending on the parameter, each ANSI test is graded from 0.0 to 4.0 (F to A), or given a pass or fail mark. Each grade is determined by analyzing the scan reflectance profile (SRP), an analog graph of a single scan line across the entire symbol. The lowest of the eight grades is the scan grade, and the overall ISO symbol grade is the average of the individual scan grades. For most applications a 2.5 (C) is the minimal acceptable symbol grade. Compared with a reader, a verifier measures a barcode's optical characteristics to international and industry standards. The measurement must be repeatable and consistent. Doing so requires constant conditions such as distance, illumination angle, sensor angle and verifier aperture. Based on the verification results, the production process can be adjusted to print higher-quality barcodes that will scan down the supply chain. Bar code validation may include evaluations after use (and abuse) testing such as sunlight, abrasion, impact, moisture, etc. Barcode verifier standards Barcode verifier standards are defined by the International Organization for Standardization (ISO), in ISO/IEC 15426-1 (linear) or ISO/IEC 15426-2 (2D). The current international barcode quality specification is ISO/IEC 15416 (linear) and ISO/IEC 15415 (2D). The European Standard EN 1635 has been withdrawn and replaced by ISO/IEC 15416. The original U.S. barcode quality specification was ANSI X3.182 (UPCs used in the US – ANSI/UCC5). As of 2011 the ISO workgroup JTC1 SC31 was developing a Direct Part Marking (DPM) quality standard: ISO/IEC TR 29158. 
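As an illustration of how the linear parameters above combine, the short Python sketch below computes symbol contrast, minimum edge contrast, and modulation from one scan reflectance profile and assigns letter grades. The threshold values are indicative of the ISO/IEC 15416 approach rather than quoted from the standard, the space/bar pairing is simplified, and a real verifier evaluates many more parameters (decodability, defects, and so on) over multiple scan lines.
def letter_grade(value, thresholds):
    # Return the first grade whose minimum the value meets, else F.
    for minimum, grade in thresholds:
        if value >= minimum:
            return grade
    return "F"

def grade_profile(space_reflectances, bar_reflectances):
    # Reflectances as fractions of the white reference (0 = black, 1 = white).
    rmax = max(space_reflectances)   # lightest space
    rmin = min(bar_reflectances)     # darkest bar
    sc = rmax - rmin                 # symbol contrast: SC = Rmax - Rmin
    # Edge contrast for each adjoining space/bar pair (simplified pairing): EC = Rs - Rb
    ec_min = min(rs - rb for rs, rb in zip(space_reflectances, bar_reflectances))
    mod = ec_min / sc if sc else 0.0  # modulation: MOD = ECmin / SC
    return {
        "SC": round(sc, 2),
        "SC grade": letter_grade(sc, [(0.70, "A"), (0.55, "B"), (0.40, "C"), (0.20, "D")]),
        "MOD": round(mod, 2),
        "MOD grade": letter_grade(mod, [(0.70, "A"), (0.60, "B"), (0.50, "C"), (0.40, "D")]),
    }

print(grade_profile(space_reflectances=[0.82, 0.78, 0.85],
                    bar_reflectances=[0.08, 0.30, 0.10]))
# Prints: {'SC': 0.77, 'SC grade': 'A', 'MOD': 0.62, 'MOD grade': 'B'}
The example shows why modulation can fail a symbol whose overall contrast is excellent: one washed-out bar (reflectance 0.30) drags the minimum edge contrast, and thus the MOD grade, down even though SC still grades A.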
Benefits In point-of-sale management, barcode systems can provide detailed, up-to-date information on the business, accelerating decisions and increasing confidence in them. For example:
Fast-selling items can be identified quickly and automatically reordered.
Slow-selling items can be identified, preventing inventory build-up.
The effects of merchandising changes can be monitored, allowing fast-moving, more profitable items to occupy the best space.
Historical data can be used to predict seasonal fluctuations very accurately.
Items may be repriced on the shelf to reflect both sale prices and price increases.
This technology also enables the profiling of individual consumers, typically through a voluntary registration of discount cards. While pitched as a benefit to the consumer, this practice is considered to be potentially dangerous by privacy advocates. Besides sales and inventory tracking, barcodes are very useful in logistics and supply chain management. When a manufacturer packs a box for shipment, a unique identifying number (UID) can be assigned to the box. A database can link the UID to relevant information about the box, such as order number, items packed, quantity packed, and destination. The information can be transmitted through a communication system such as electronic data interchange (EDI) so the retailer has the information about a shipment before it arrives. Shipments that are sent to a distribution center (DC) are tracked before forwarding. When the shipment reaches its final destination, the UID gets scanned, so the store knows the shipment's source, contents, and cost. Barcode scanners are relatively low cost and extremely accurate compared to key-entry, with only about 1 substitution error in 15,000 to 36 trillion characters entered. The exact error rate depends on the type of barcode. Types of barcodes Linear barcodes A first-generation, "one-dimensional" barcode that is made up of lines and spaces of various widths or sizes that create specific patterns. 2D barcodes 2D barcodes consist of bars, but use both dimensions for encoding. Matrix (2D) codes A matrix code, or simply a 2D code, is a two-dimensional way to represent information. It can represent more data per unit area. Apart from dots, various other patterns can be used. In popular culture In architecture, a building in Lingang New City by German architects Gerkan, Marg and Partners incorporates a barcode design, as does a shopping mall called Shtrikh-kod (Russian for barcode) in Narodnaya ulitsa ("People's Street") in the Nevskiy district of St. Petersburg, Russia. In media, in 2011, the National Film Board of Canada and ARTE France launched a web documentary entitled Barcode.tv, which allows users to view films about everyday objects by scanning the product's barcode with their iPhone camera. In professional wrestling, the WWE stable D-Generation X incorporated a barcode into their entrance video, as well as on a T-shirt. In video games, the protagonist of the Hitman video game series has a barcode tattoo on the back of his head; QR codes can also be scanned in a side mission in Watch Dogs. The 2018 video game Judgment features QR codes that protagonist Takayuki Yagami can photograph with his phone camera, mostly used to unlock parts for Yagami's drone. In publishing, interactive textbooks incorporating barcodes were first published by Harcourt College Publishers. 
Designed barcodes Some companies integrate custom designs into barcodes on their consumer products without impairing their readability. Opposition Some have regarded barcodes as an intrusive surveillance technology. Some Christians, influenced by the 1982 book The New Money System 666 by Mary Stewart Relfe, believe the codes hide the number 666, representing the "Number of the beast". Old Believers, a group that separated from the Russian Orthodox Church, believe barcodes are the stamp of the Antichrist. Television host Phil Donahue described barcodes as a "corporate plot against consumers".
Technology
Data storage
null
60605
https://en.wikipedia.org/wiki/Dust%20storm
Dust storm
A dust storm, also called a sandstorm, is a meteorological phenomenon common in arid and semi-arid regions. Dust storms arise when a gust front or other strong wind blows loose sand and dirt from a dry surface. Fine particles are transported by saltation and suspension, a process that moves soil from one place and deposits it in another. The arid regions of North Africa, the Middle East, Central Asia and China are the main terrestrial sources of airborne dust. It has been argued that poor management of Earth's drylands, such as neglecting the fallow system, is increasing the size and frequency of dust storms from desert margins, changing both the local and global climate and impacting local economies. The term sandstorm is used most often in the context of desert dust storms, especially in the Sahara Desert, or places where sand is a more prevalent soil type than dirt or rock, when, in addition to fine particles obscuring visibility, a considerable amount of larger sand particles are blown closer to the surface. The term dust storm is more likely to be used when finer particles are blown long distances, especially when the dust storm affects urban areas. Causes As the force of wind passing over loosely held particles increases, particles of sand first start to vibrate, then to move across the surface in a process called saltation. As they repeatedly strike the ground, they loosen and break off smaller particles of dust which then begin to travel in suspension. At wind speeds above that which causes the smallest particles to suspend, there will be a population of dust grains moving by a range of mechanisms: suspension, saltation and creep. A 2008 study found that the initial saltation of sand particles induces a static electric field by friction. Saltating sand acquires a negative charge relative to the ground, which in turn loosens more sand particles, which then begin saltating. This process has been found to double the number of particles predicted by previous theories. Particles become loosely held mainly due to a prolonged drought or arid conditions, and high wind speeds. Gust fronts may be produced by the outflow of rain-cooled air from an intense thunderstorm, or by a dry cold front: that is, a cold front that is moving into a dry air mass and is producing no precipitation (the type of dust storm which was common during the Dust Bowl years in the U.S.). Following the passage of a dry cold front, convective instability resulting from cooler air riding over heated ground can maintain the dust storm initiated at the front. In desert areas, dust and sand storms are most commonly caused by either thunderstorm outflows, or by strong pressure gradients which cause an increase in wind velocity over a wide area. The vertical extent of the dust or sand that is raised is largely determined by the stability of the atmosphere above the ground as well as by the weight of the particulates. In some cases, dust and sand may be confined to a relatively shallow layer by a low-lying temperature inversion. In other instances, dust (but not sand) may be lifted to great heights. Dust storms are a major health hazard. Drought and wind contribute to the emergence of dust storms, as do poor farming and grazing practices, which expose the dust and sand to the wind. Wildfires can lead to dust storms as well. One poor farming practice which contributes to dust storms is dryland farming. 
Particularly damaging dryland farming techniques include intensive tillage and the absence of established crops or cover crops when storms strike, at particularly vulnerable times prior to revegetation. In a semi-arid climate, these practices increase susceptibility to dust storms. However, soil conservation practices may be implemented to control wind erosion. Physical and environmental effects A sandstorm can transport large volumes of sand unexpectedly. Dust storms can carry large amounts of dust, with the leading edge being composed of a towering wall of thick dust. Dust and sand storms which come off the Sahara Desert are locally known as a simoom or simoon (sîmūm, sîmūn). The haboob (həbūb) is a sandstorm prevalent in the region of Sudan around Khartoum, with occurrences being most common in the summer. The Sahara desert is a key source of dust storms, particularly the Bodélé Depression and an area covering the confluence of Mauritania, Mali, and Algeria. Saharan dust is frequently emitted into the Mediterranean atmosphere and transported by the winds sometimes as far north as central Europe and Great Britain. Saharan dust storms have increased approximately 10-fold during the half-century since the 1950s, causing topsoil loss in Niger, Chad, northern Nigeria, and Burkina Faso. In Mauritania there were just two dust storms a year in the early 1960s, but there have been about 80 a year since 2007, according to English geographer Andrew Goudie, professor at the University of Oxford. Levels of Saharan dust coming off the west coast of Africa in June 2007 were five times those observed in June 2006, and were the highest observed since at least 1999, which may have cooled Atlantic waters enough to slightly reduce hurricane activity in late 2007. Dust storms have also been shown to increase the spread of disease across the globe. Bacteria and fungus spores in the ground are blown into the atmosphere by the storms with the minute particles and interact with urban air pollution. Short-term effects of exposure to desert dust include immediate increased symptoms and worsening of lung function in individuals with asthma, and increased mortality and morbidity from long-transported dust from both Saharan and Asian dust storms, suggesting that long-transported dust storm particles adversely affect the circulatory system. Dust pneumonia is the result of large amounts of dust being inhaled. Prolonged and unprotected exposure of the respiratory system in a dust storm can also cause silicosis, which, if left untreated, can lead to asphyxiation; silicosis is an incurable condition that may also lead to lung cancer. There is also the danger of keratoconjunctivitis sicca ("dry eyes"), which, in severe cases without immediate and proper treatment, can lead to blindness. Economic impact Dust storms cause soil loss from the drylands, and worse, they preferentially remove organic matter and the nutrient-rich lightest particles, thereby reducing agricultural productivity. Also, the abrasive effect of the storm damages young crop plants. Dust storms also reduce visibility, affecting aircraft and road transportation. Dust can also have beneficial effects where it deposits: Central and South American rainforests get significant quantities of mineral nutrients from the Sahara; iron-poor ocean regions get iron; and dust in Hawaii increases plantain growth. 
In northern China as well as the mid-western U.S., ancient dust storm deposits known as loess are highly fertile soils, but they are also a significant source of contemporary dust storms when soil-securing vegetation is disturbed. The existence of some Iranian cities is also challenged by dust storms. On Mars Dust storms are not limited to Earth and have also been known to form on Mars. These dust storms can extend over larger areas than those on Earth, sometimes encircling the planet, with very high wind speeds. However, given Mars' much lower atmospheric pressure (roughly 1% that of Earth's), the intensity of Mars storms could never reach the hurricane-force winds experienced on Earth. Martian dust storms are formed when solar heating warms the Martian atmosphere and causes the air to move, lifting dust off the ground. The chance for storms is increased when there are great temperature variations like those seen at the equator during the Martian summer.
Physical sciences
Storms
null
60644
https://en.wikipedia.org/wiki/Peripheral
Peripheral
A peripheral device, or simply peripheral, is an auxiliary hardware device that a computer uses to transfer information externally. A peripheral is a hardware component that is accessible to and controlled by a computer but is not a core component of the computer. A peripheral can be categorized based on the direction in which information flows relative to the computer: The computer receives data from an input device; examples: mouse, keyboard, scanner, game controller, microphone and webcam The computer sends data to an output device; examples: monitor, printer, headphones, and speakers The computer sends and receives data via an input/output device; examples: storage device (such as disk drive, solid-state drive, USB flash drive, memory card and tape drive), modem, router, gateway and network adapter Many modern electronic devices, such as Internet-enabled digital watches, video game consoles, smartphones, and tablet computers, have interfaces for use as a peripheral.
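The three-way classification above can be summarized in a short illustrative Python sketch; the device-to-category mapping mirrors the examples given, and nothing here is an operating-system API.
from enum import Enum

class DataDirection(Enum):
    INPUT = "computer receives data"                    # e.g. mouse, keyboard, scanner
    OUTPUT = "computer sends data"                      # e.g. monitor, printer, speakers
    INPUT_OUTPUT = "computer sends and receives data"   # e.g. disk drive, network adapter

PERIPHERALS = {
    "keyboard": DataDirection.INPUT,
    "webcam": DataDirection.INPUT,
    "printer": DataDirection.OUTPUT,
    "monitor": DataDirection.OUTPUT,
    "solid-state drive": DataDirection.INPUT_OUTPUT,
    "network adapter": DataDirection.INPUT_OUTPUT,
}

for device, direction in PERIPHERALS.items():
    print(f"{device}: {direction.value}")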
Technology
User interface
null
60646
https://en.wikipedia.org/wiki/Tower%20Bridge
Tower Bridge
Tower Bridge is a Grade I listed combined bascule, suspension, and, until 1960, cantilever bridge in London, built between 1886 and 1894, designed by Horace Jones and engineered by John Wolfe Barry with the help of Henry Marc Brunel. It crosses the River Thames close to the Tower of London and is one of five London bridges owned and maintained by the City Bridge Foundation, a charitable trust founded in 1282. The bridge was constructed to connect the 39 per cent of London's population that lived east of London Bridge, equivalent to the populations of "Manchester on the one side, and Liverpool on the other", while allowing shipping to access the Pool of London between the Tower of London and London Bridge. The bridge was opened by Edward, Prince of Wales, and Alexandra, Princess of Wales, on 30 June 1894. The bridge is 800 feet (240 m) in length including the abutments and consists of two bridge towers connected at the upper level by two horizontal walkways, and a central pair of bascules that can open to allow shipping. Originally hydraulically powered, the operating mechanism was converted to an electro-hydraulic system in 1972. The bridge is part of the London Inner Ring Road and thus the boundary of the London congestion charge zone, and remains an important traffic route with 40,000 crossings every day. The bridge deck is freely accessible to both vehicles and pedestrians, whereas the bridge's twin towers, high-level walkways, and Victorian engine rooms form part of the Tower Bridge Exhibition. Tower Bridge has become a recognisable London landmark. It is sometimes confused with London Bridge, about half a mile (0.8 km) upstream, which has led to a persistent urban legend about an American purchasing the wrong bridge. History Inception By the late 19th century, the population and commercial development in the East End of London was increasing, leading to demand for a new river crossing downstream of London Bridge. A traditional fixed bridge at street level could not be built because it would cut off access by sailing ships to the port facilities in the Pool of London between London Bridge and the Tower of London. A Special Bridge or Subway Committee chaired by Sir Albert Joseph Altman was formed in December 1875 to find a solution. On 7 December 1876, the Committee presented a report recommending that a bridge or subway to the east of London Bridge should be constructed, funds permitting. More than fifty designs were submitted, including one from civil engineer Sir Joseph Bazalgette, which was rejected because of a lack of sufficient headroom. None of the designs gained support, and it was not until 24 July 1884 that the Bridge House Estates Committee brought forward a report that proposed "a low level bridge, with mechanical opening or openings" be built. Following a deputation from the Committee visiting Belgium, Holland, and Newcastle Bridge, a proposal was presented on 28 October 1884 to the Court of Common Council for a mechanical bridge built according to one of three models: Design A, a swing bridge; Design B, a variation of that swing bridge; and Design C, a bascule bridge. Design C was recommended and a bill was prepared to present to Parliament. Legislation The Act of Parliament authorising construction received royal assent on 14 August 1885 and is called the Corporation of London (Tower Bridge) Act 1885. The Act was specific about the design of what it (and contemporary media) named "The Tower Bridge", rather than just "Tower Bridge". 
Key stipulations were: The central opening span to be 200 feet clear width with a height of 135 feet above Trinity high water when open, and a height of 29 feet when closed. The size of the piers to be 185 feet long and 70 feet wide. The length of each of the two side spans to be 270 feet. During construction a clear waterway 160 feet wide had to be maintained for river traffic. The design of the bridge should be made to accord with the architecture of the Tower of London. The bridge was to be completed within four years from the passing of the Act. The bridge was to be opened at any time for the passage of any vessels, regardless of any delays to land traffic. Barry later noted that "at one time it was intended that the new works should be made suitable for the mounting of guns and for military occupation", but that "the latter idea was afterwards to a great extent discarded". However, the Act provided for "the senior officer commanding in the Tower [of London]...shall at all times have...the right to occupy the Tower Bridge." The extent of the maritime trade conducted at this time between the site of Tower Bridge and London Bridge (a distance of approximately 0.5 miles) is demonstrated in Schedule B of the Act, which lists 11 active docks, quays and wharfs operating on the north side of the Thames, and 20 wharfs operating on the south side. Two further Acts of Parliament were required to extend the time allowed to complete the works. On 12 August 1889 the Corporation of London (Tower Bridge) Act 1889 received royal assent to extend the time allowed for construction by a further four years to 1893 and to make various adjustments to neighbouring streets that had proved necessary. The work was not yet complete after those four years, and on 29 June 1893 the Corporation of London (Tower Bridge) Extension of Time Act 1893 received royal assent, extending the time by a further year. Construction Construction was funded by the City Bridge Foundation, a charity established in 1282 for the maintenance of London Bridge that subsequently expanded to cover Tower Bridge, Blackfriars Bridge, Southwark Bridge and the Millennium Bridge. Sir John Wolfe Barry was appointed engineer and Sir Horace Jones the architect (who, as the City Architect, was also one of the judges). Jones and Barry designed a bridge with two bridge towers built on piers. The central span was split into two equal bascules or leaves, which could be raised to allow river traffic to pass. The two side spans were suspension bridges, with rods anchored both at the abutments and through rods contained in the bridge's upper walkways. Construction, overseen by Edward Cruttwell, started on 22 April 1886, with the foundation stone laid by the Prince of Wales on 21 June, and took eight years. The work was divided into eight contracts. Mr (later Sir) John Jackson won three of those contracts and was responsible for the northern approach to the bridge (which started in February 1887), the foundations of the piers and the abutments of the bridge (started February 1887), and the cast iron parapet for the northern approach (December 1887) at a total accepted tender cost of £189,732; Sir W. G. Armstrong, Mitchell, and Co. Ltd was awarded the hydraulics contract, for which they tendered £85,232 (December 1887); Mr William Webster was responsible for the southern approach at £38,383 (July 1888); Sir William Arrol & Co.
had the contract for the metalwork of the superstructure at £337,113 (May 1889), which amounted to about 12,100 tons; Messrs Perry & Co won two contracts covering the masonry superstructure (May 1889), and paving and lighting (May 1892) for a total accepted tender of £179,455. The total accepted tender for the eight contracts was £830,005. On average 432 people worked on the site, although at least 1,200 worked on its construction overall and received invitations to the entertainment provided for the workmen at its opening. Cruttwell was the resident engineer throughout the period of construction (and remained associated with the bridge until his death in 1933). He noted that there were "only" ten fatal accidents during the construction: four in sinking the foundations, one on the approaches, and the remaining five on the superstructure. Two piers, containing over 70,000 tons of concrete, were sunk into the riverbed to support the construction. The first caisson was started in September 1886 and it was not until January 1890 that both piers were complete. The reason for the long duration of the foundation works was the need to defer excavation of the second pier until the staging for the first pier had been removed, to allow 160 feet of clear waterway for shipping. More than 11,000 tons of steel were used in the framework for the towers and walkways, which were then clad in Cornish granite and Portland stone to protect the underlying steelwork and achieve the stipulation that the bridge should fit architecturally with the Tower of London. Jones died in 1887, and Barry took over as architect. Barry later summarised the contributions to the construction of Tower Bridge: "Mr Fyson, who undertook much of the preparation of the detailed drawings; Mr Stevenson, who had been his assistant with the architectural work; and...most of all...Mr Cruttwell, the Resident Engineer, and Mr Homfray, who superintended the machinery." Stevenson replaced Jones's original brick façade with the more ornate Victorian Gothic style, which made the bridge a distinctive landmark and was intended to harmonise the bridge with the nearby Tower of London. The total cost of construction was £1,184,000. Opening Tower Bridge was officially opened on 30 June 1894 by the Prince and Princess of Wales. The opening ceremony was attended by the Lord Chamberlain, the Lord Carrington and the Home Secretary, H. H. Asquith. It was reported that "few [pageants] have been more brilliant or will have a more abiding and historic interest" in the history of the City of London than the opening of Tower Bridge, and it was a "semi-State" occasion. In addition to the official opening, the City of London Corporation gave an "entertainment", at a cost of £300, to 1,200 workmen and their wives. Edward Cruttwell, who had been in charge of the building of the bridge from the beginning, presided. After dinner, each workman was presented with a commemorative pipe and packet of tobacco, and each workman's wife with a box of sweetmeats. An Act of Parliament stipulated that a tug boat should be on station to assist vessels in danger when passing through the bridge, a requirement that remained in place until the 1960s. The bridge connected Iron Gate, on the north bank of the river, with Horselydown Lane, on the south – now known as Tower Bridge Approach and Tower Bridge Road, respectively. Until the bridge was opened, the Tower Subway – to the west – was the shortest way to cross the river from Tower Hill to Tooley Street in Southwark.
Opened in 1870, Tower Subway was among the world's earliest underground ("tube") railways, but it closed after just three months and was reopened as a tolled pedestrian foot tunnel. Once Tower Bridge was open, the majority of foot traffic transferred to the bridge, as there was no toll to cross. Having lost most of its income, the tunnel was closed in 1898. The high-level open-air walkways between the towers gained a reputation for prostitutes and pickpockets. Since they were only accessible by stairs, the walkways were seldom used by regular pedestrians and were closed in 1910. The walkways reopened in 1982 as part of the Tower Bridge Exhibition. 20th century During the Second World War, Tower Bridge was seen as a major transport link to the Port of London, and consequently was a target for enemy action. In 1940, the high-level span took a direct hit, severing the hydraulic mechanism and taking the bridge out of action. In April 1941, a parachute mine exploded close to the bridge, causing serious damage to the bascules, towers, and engine room. In 1942, a third engine was installed in case the existing ones were damaged by enemy action. It was a 150 hp horizontal cross-compound engine, built by Vickers Armstrong Ltd. at their Elswick works in Newcastle upon Tyne. It was fitted with a flywheel 9 feet in diameter and weighing 9 tons, and was governed to a speed of 30 rpm. The engine became redundant when the rest of the system was modernised in 1974, and was donated to the Forncett Industrial Steam Museum by the City of London Corporation. The southern section of the bridge, in the London Borough of Southwark, was Grade I listed on 6 December 1949. The remainder of the bridge, in the London Borough of Tower Hamlets, was listed on 27 September 1973. In 1960, the upper bridges of the two pedestrian walkways that connected the two main towers were converted from cantilever bridges, projecting horizontally out into space, to suspension bridges, when suspension cables were added. This was done to strengthen the walkways. In 1974, the original operating mechanism was largely replaced by a new electro-hydraulic drive system, designed by Geoffrey Beresford Hartwell of BHA Cromwell House, with the original final pinions driven by modern hydraulic motors. In 1982, the Tower Bridge Exhibition opened, housed in the bridge's twin towers, the long-closed high-level walkways, and the Victorian engine rooms. The latter still house the original steam engines and some of the original hydraulic machinery. 21st century The bridge closed for a month in 2000 to repair the bascules and perform other maintenance. A computer system was installed to control the raising and lowering of the bascules remotely. However, the system proved unreliable, resulting in the bridge being stuck in the open or closed position on several occasions during 2005 until its sensors were replaced. In April 2008, authorities announced that the bridge would undergo a £4 million refurbishment that would take four years to complete. The work entailed stripping the existing paint down to bare metal and repainting in blue and white. Before this, the bridge's colour scheme dated from 1977, when it was painted red, white, and blue for Queen Elizabeth II's Silver Jubilee. Its colours were subsequently restored to blue and white. Each section was enshrouded in scaffolding and plastic sheeting to prevent the old paint falling into the Thames and causing pollution.
Starting in mid-2008, contractors worked on a quarter of the bridge at a time to minimise disruption, but some road closures were inevitable. The completed work was expected to last for 25 years. The renovation of the walkway interior was completed in mid-2009. The renovation of the four suspension chains was completed in March 2010 using a state-of-the-art coating system requiring up to six different layers of paint. A lighting system based on RGB LED luminaires was installed, concealed within the bridge superstructure and attached without drilling holes, owing to the bridge's Grade I listing. On 8 July 2012, as part of the London Olympics, the west walkway was transformed into a Live Music Sculpture by the British composer Samuel Bordoli. Thirty classical musicians were arranged along the length of the bridge above the Thames, behind the Olympic rings. The sound travelled backwards and forwards along the walkway, echoing the structure of the bridge. Following the Olympics, the rings were removed from Tower Bridge and replaced by the emblem of the Paralympic Games for the 2012 Summer Paralympics. In 2016, Tower Bridge was closed to all road traffic from 1 October to 30 December. This was to allow structural maintenance work on the timber decking and lifting mechanism, and waterproofing of the brick arches on the bridge's approaches. During this period, the bridge was still open to waterborne traffic. It was open to pedestrians for all but three weekends, when a free ferry service was in operation. Design Structure The structure was originally designed by Horace Jones, but his design had an arch instead of the current high-level footways. Concerns about ships' masts and rigging hitting the arch meant that his design faced objections, and Jones "began to think that the arched form must be given up". It was at this point in 1885 that John Wolfe Barry partnered with Jones and developed the concept of the "level soffit for the upper bridge", which allowed the design to proceed as it stands today. Unusually, Tower Bridge was designed to include three types of bridge: the two spans from the shore to the piers are suspension bridges; the central, opening span is a bascule bridge; and the high-level walkways were cantilever bridges until converted to suspension bridges in 1960. The main towers "consist of a skeleton of steelwork, covered with a facing of granite and Portland stone, backed with brickwork on the inside faces." An octagonal steel column, 119 feet 6 inches long, stands at each corner of each tower. The smaller abutment towers were "generally similar...though on a smaller scale." "The total length of the bridge, including the abutments, is 940 feet, and the length of the approaches are 1,260 feet on the north side, and 780 feet on the south side. The width of the bridge and approaches between the parapets is 60 feet, except across the opening span where it is 49 feet. The steepest gradients are 1 in 60 on the north side, and 1 in 40 on the south side." The central span of 200 feet between the towers is split into two equal bascules, or leaves, each of which projects 100 feet outwards and extends 62 feet 6 inches backwards within the face of each pier. The bascules, weighing about 1,070 tons each including ballast and paving, are counterbalanced to minimise the force required and allow raising in five minutes; they have an arc of rotation of 82°, with the centre of the arc, or pivot point, being 13 feet 3 inches inside the face of each pier and 5 feet 7 inches beneath the surface of the roadway.
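As a rough worked illustration of the figures just given (the calculation and the lever-rule symbols below are added for exposition and are not part of the original specification): raising a leaf through its full 82° arc in five minutes corresponds to an average angular speed of \bar{\omega} = \frac{82^\circ}{300\ \mathrm{s}} \approx 0.27^\circ/\mathrm{s} \approx 4.8 \times 10^{-3}\ \mathrm{rad/s}. In a simplified rigid-body model, the counterbalancing amounts to keeping each leaf close to moment balance about its pivot, m_{\mathrm{nose}}\, d_{\mathrm{nose}} \approx m_{\mathrm{tail}}\, d_{\mathrm{tail}}, where the "nose" is the section projecting 100 feet over the river, the "tail" is the ballasted section extending 62 feet 6 inches behind the pivot, and each d is the distance from the pivot to that section's centre of mass. With the two moments nearly equal, the machinery needs to overcome only friction, wind load, and a small residual imbalance rather than lift the full weight of a 1,070-ton leaf.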
The two side spans are suspension bridges, each 270 feet long, with the suspension rods anchored both at the abutments and through rods contained within the bridge's upper walkways. The pedestrian walkways are 143 feet above the river at high tide and are accessed by lifts and staircases. These walkways were designed as cantilever bridges extending 55 feet from each tower, with girders bridging the 120 feet between the ends of the cantilevers. The structure was designed to withstand wind pressures of 56 lb per square foot (about 2.7 kPa), a design constraint introduced following the Tay Bridge disaster, which had occurred just 15 years before the opening of Tower Bridge. There is a chimney on the bridge that is painted to look like a lamppost. It was connected to a fireplace in a guardroom located in one of the bridge piers. Hydraulic system The original raising mechanism was powered by pressurised water stored in six hydraulic accumulators. There were two pairs of engines on each pier. Each pair consisted of a larger engine with cylinders 8½ inches in diameter and a smaller one with cylinders of 7½ inches; all eight engines had three cylinders. The reason for two pairs of engines on each pier was to build in redundancy. The machinery was "equal to twice the requirements of the Board of Trade", themselves very rigorous standards established following the Tay Bridge disaster. The system was designed and installed by Hamilton Owen Rendel while working for Armstrong, Mitchell and Company of Newcastle upon Tyne. Water at a pressure of 750 pounds per square inch was pumped into the accumulators by a pair of stationary steam engines, each of which drove a force pump from its piston tail rod. The accumulators each comprise a ram on which sits a very heavy weight to maintain the desired pressure. The entire hydraulic system, along with the gas lighting system, was installed by William Sugg & Co Ltd. The gas lighting was initially by open-flame burners within the lanterns, but was soon updated to the later incandescent system. In 1974, the original operating mechanism was largely replaced by a new electro-hydraulic drive system, designed by BHA Cromwell House. The only remaining parts of the old system are the final pinions, which fit into the racks on the bascules and were driven by hydraulic motors and gearing. Oil is now used in place of water as the hydraulic fluid. Signalling and control Originally, river traffic passing beneath the bridge was required to follow several rules and signals. Daytime control was provided by red semaphore signals, mounted on small control cabins on either end of both of the bridge piers. At night, coloured lights were used, in either direction, on both of the piers: two red lights to show that the bridge was closed, and two green to show that it was open. In foggy weather, a gong was sounded as well. Vessels passing through the bridge were required to display signals. By day, a black ball at least 2 feet in diameter was mounted high up where it could be seen. Night passage called for two red lights in the same position. Foggy weather required repeated blasts from the ship's steam whistle. If a black ball was suspended from the middle of each walkway (or a red light at night), this indicated that the bridge could not be opened. These signals were repeated downstream, at Cherry Garden Pier, where boats needing to pass through the bridge had to hoist their signals/lights and sound their horn, as appropriate, to alert the Bridge Master. Some of the control mechanism for the signalling equipment has been preserved and is housed in the bridge's museum.
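The opening signals described above amount to a small, well-defined protocol, which the following Python sketch encodes for illustration; the function names and string labels are invented here, since the historical system used semaphores, lamps, gongs, and whistles rather than software.

```python
# Illustrative encoding of the Tower Bridge opening-signal rules described
# above. All identifiers are invented for this sketch.

def pier_signals(bridge_open: bool, night: bool, foggy: bool) -> list[str]:
    """Signals shown from the bridge piers to river traffic."""
    signals = []
    if night:
        # Two red lights: bridge closed; two green lights: bridge open.
        signals.append("two green lights" if bridge_open else "two red lights")
    else:
        # Daytime control used red semaphore arms on the pier control cabins.
        signals.append("semaphore cleared" if bridge_open else "red semaphore shown")
    if foggy:
        signals.append("gong sounded")
    return signals

def vessel_signals(night: bool, foggy: bool) -> list[str]:
    """Signals a vessel displayed when requesting passage through the bridge."""
    signals = ["two red lights hoisted high up" if night
               else "black ball (at least 2 ft across) hoisted high up"]
    if foggy:
        signals.append("repeated blasts on the steam whistle")
    return signals

print(pier_signals(bridge_open=False, night=True, foggy=True))
# -> ['two red lights', 'gong sounded']
```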
Traffic Road During the first twelve months after the opening of the bridge, the average stoppage of road traffic at each bridge lift was six minutes. The average number of vehicles crossing the bridge daily was about 8,000. Tower Bridge is still a busy crossing of the Thames, used by more than 40,000 people (motorists, cyclists, and pedestrians) every day. The bridge is on the London Inner Ring Road, and is on the eastern boundary of the London congestion charge zone (drivers do not incur the charge by crossing the bridge). To maintain the integrity of the structure, the City of London Corporation has imposed a 20 mph speed restriction and an 18-tonne weight limit on vehicles using the bridge. A camera system measures the speed of traffic crossing the bridge, using a number plate recognition system to send fixed penalty notices to speeding drivers. A second system monitors other vehicle parameters. Induction loops and piezoelectric sensors are used to measure the weight, the height of the chassis above ground level, and the number of axles of each vehicle, with drivers of overweight vehicles also receiving fixed penalty notices (a simplified sketch of these checks appears at the end of this article). Pedestrian Pedestrians can walk across the bascule bridge when it is lowered for road traffic. Ship traffic has priority, meaning that traffic using the bascule bridge must wait when the bridge opens for ships. From the outset in 1894, the high-level connection was a pedestrian route and was intended to allow pedestrian movement to continue while the bascule bridge was open for ship traffic. During the first twelve months after opening, the average number of pedestrians using the bridge was about 60,000 daily. The pedestrian walkways in each direction were fitted with glass floors in 2014. Lifts bring pedestrians up to and down from the high-level pedestrian bridge. The upper walkways are part of the Tower Bridge Exhibition, which includes an explanation of how the bascule bridge works. Pedestrians can view the city from the bridge and through its upper walkways. The Tower Bridge Exhibition charges an admission fee, and tickets are required. River During the first twelve months after the opening of the bridge, the bascules were raised for the passage of vessels 6,160 times, an average of seventeen times daily. Now, the bascules are raised about a thousand times a year. River traffic is now much reduced, but it still takes priority over road traffic. Today, 24 hours' notice is required before opening the bridge, and opening times are published in advance on the bridge's website; there is no charge for vessels to open the bridge. Cycling Transport for London have proposed Cycle Superhighway 4 to run across Tower Bridge. Touring the bridge The Tower Bridge attraction is a display housed inside the Bridge's Towers, the high-level Walkways, and the Victorian Engine Rooms. It uses films, photos, and interactive displays to explain why and how Tower Bridge was built. Visitors can access the original steam engines that once powered the bridge bascules, housed in the Engine Rooms underneath the south end of the bridge. The attraction charges an admission fee. The entrance is from the Ticket Office on the west side of the North Tower, from where visitors can climb the stairs (or take a lift) to the high-level Walkways to cross to the South Tower. In the Towers and Walkways there is interpretation about the history of the Bridge. The Walkways also provide views over the city, the Tower of London and the Pool of London, and include two Glass Floors, where visitors can look down to see the road and River Thames below.
From the South Tower, visitors can exit and follow the Blue Line to the Victorian Engine Rooms, with the original steam engines, which are situated in a separate building underneath the southern approach to the Bridge. Of 1,114 English visitor attractions tracked by Visit England, in 2019 Tower Bridge had 889,338 visitors and was the 34th most visited attraction in England, and the 17th most visited attraction that charged an admission fee. It is one of only three bridges in England tracked as a visitor attraction, alongside Clifton Suspension Bridge and The Iron Bridge. Reaction Although Tower Bridge is an undoubted landmark, with the City of London calling it "London's defining landmark", some professional commentators in the early 20th century were critical of its aesthetics. "It represents the vice of tawdriness and pretentiousness, and of falsification of the actual facts of the structure", wrote Henry Heathcote Statham, while Frank Brangwyn stated that "A more absurd structure than the Tower Bridge was never thrown across a strategic river". Benjamin Crisler, the New York Times film critic, wrote in 1938: "Three unique and valuable institutions the British have that we in America have not: Magna Carta, the Tower Bridge and Alfred Hitchcock." Architectural historian Dan Cruickshank selected Tower Bridge as one of his four choices for the 2002 BBC television documentary series Britain's Best Buildings. The bridge and its surrounding landscape were depicted in an official BBC trailer for the 2021 Rugby League World Cup (in reference to London being one of the host cities). Tower Bridge has been mistaken for the next bridge upstream, London Bridge. A popular urban legend is that in 1968, Robert P. McCulloch, the purchaser of the old London Bridge that was later shipped to Lake Havasu City in Arizona, believed that he was buying Tower Bridge. This was denied by McCulloch himself and has been debunked by Ivan Luckin, the vendor of the bridge. A partial replica of Tower Bridge has been built in the city of Suzhou in China. The replica differs from the original in having no lifting mechanism and four separate towers. The Suzhou replica was renovated in 2019, giving it a new look that differs from the original London design. Tower Bridge is the emblem of the Greater London Scout Region of The Scout Association and features on the badges of the six London Scout counties. Incidents On 10 August 1912, the pioneering stunt pilot Francis McClean flew between the bascules and the high-level walkways in his Short Brothers S.33 floatplane. McClean became a celebrity overnight because of the stunt, and went on to fly underneath London Bridge, Blackfriars Bridge and Waterloo Bridge. On 3 August 1922, a 13-year-old boy fell off a slipway next to the south side of Tower Bridge. A man jumped into the Thames to save him, but both were pulled under a barge by Butler's Wharf and drowned. In December 1952, the bridge opened while a number 78 double-decker bus was crossing from the south bank. At that time, the gateman would ring a warning bell and close the gates when the bridge was clear, before the watchman ordered the raising of the bridge. The process failed while a relief watchman was on duty. The bus was near the edge of the south bascule when it started to rise; driver Albert Gunter made a split-second decision to accelerate, clearing a gap to drop onto the north bascule, which had not yet started to rise. There were no serious injuries.
Gunter was given £10 by the City Corporation to honour his act of bravery. On 5 April 1968, a Royal Air Force Hawker Hunter FGA.9 jet fighter from No. 1 Squadron made an unauthorised flight through Tower Bridge. Unimpressed that senior staff were not going to celebrate the RAF's 50th birthday with a flypast, the pilot flew at low altitude down the Thames without authorisation, past the Houses of Parliament, and continued towards the bridge. He flew beneath the walkway at an estimated 300 miles per hour. He was placed under arrest upon landing, and discharged from the RAF on medical grounds without the chance to defend himself at a court martial. On 31 July 1973, a single-engined Beagle Pup was twice flown under the pedestrian walkway of Tower Bridge by 29-year-old stockbroker's clerk Peter Martin. Martin, who was on bail following accusations of stock market fraud, then "buzzed" buildings in the City before flying north towards the Lake District, where he died when his aircraft crashed some two hours later. In May 1997, the motorcade of United States President Bill Clinton was divided by the opening of the bridge. The Thames sailing barge Gladys, on her way to a gathering at St Katharine Docks, arrived on schedule and the bridge was opened for her. Returning from a Thames-side lunch at Le Pont de la Tour restaurant with UK Prime Minister Tony Blair, President Clinton was less punctual and arrived just as the bridge was rising. The bridge opening split the motorcade in two, much to the consternation of security staff. A spokesman for Tower Bridge is quoted as saying: "We tried to contact the American Embassy, but they wouldn't answer the phone." On 19 August 1999, Jef Smith, a Freeman of the City of London, drove a "flock" of two sheep across the bridge. He was exercising a claimed ancient permission, granted as a right to freemen, to make a point about the powers of older citizens and the way their rights were being eroded. Before dawn on 31 October 2003, a Fathers 4 Justice campaigner climbed a tower crane near Tower Bridge at the start of a six-day protest dressed as Spider-Man. Fearing for his safety, and that of motorists should he fall, police cordoned off the area, closing the bridge and surrounding roads and causing widespread traffic congestion across the City and East London. On 11 May 2009, six people were trapped and injured after a lift fell inside the north tower. On 9 August 2021, the bridge became stuck open after a technical failure. The bridge had opened at 2 p.m. to let the Jubilee Trust tall ship through before becoming stuck. It remained closed to road traffic until it was reopened approximately 12 hours later.
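The automated enforcement described in the Traffic section above (speed measured by number-plate recognition cameras; weight, chassis height, and axle count by induction loops and piezoelectric sensors) can be sketched as follows. This is an illustration only: the data structure, function names, and use of the limits as thresholds are assumptions for exposition, not details of the actual system.

```python
# Illustrative sketch of the bridge's automated traffic checks described
# above. All identifiers are assumptions made for exposition.

from dataclasses import dataclass

SPEED_LIMIT_MPH = 20      # speed restriction cited in the Traffic section
WEIGHT_LIMIT_TONNES = 18  # weight limit cited in the Traffic section

@dataclass
class VehicleReading:
    plate: str            # from the number-plate recognition system
    speed_mph: float      # from the camera-based speed measurement
    weight_tonnes: float  # from induction loops and piezoelectric sensors

def fixed_penalties(v: VehicleReading) -> list[str]:
    """Return fixed penalty notices triggered by a vehicle reading."""
    notices = []
    if v.speed_mph > SPEED_LIMIT_MPH:
        notices.append(f"{v.plate}: speeding at {v.speed_mph:.0f} mph")
    if v.weight_tonnes > WEIGHT_LIMIT_TONNES:
        notices.append(f"{v.plate}: overweight at {v.weight_tonnes:.1f} tonnes")
    return notices

print(fixed_penalties(VehicleReading("AB12CDE", speed_mph=26, weight_tonnes=7.5)))
# -> ['AB12CDE: speeding at 26 mph']
```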
Technology
Transport infrastructure
null
60703
https://en.wikipedia.org/wiki/Fennec%20fox
Fennec fox
The fennec fox (Vulpes zerda) is a small fox native to the deserts of North Africa, ranging from Western Sahara and Mauritania to the Sinai Peninsula. Its most distinctive feature is its unusually large ears, which serve to dissipate heat and to listen for underground prey. The fennec is the smallest fox species. Its coat, ears, and kidney functions have adapted to the desert environment with high temperatures and little water. The fennec fox mainly eats insects, small mammals and birds. It has a life span of up to 14 years in captivity and about 10 years in the wild. Pups are preyed upon by the Pharaoh eagle-owl; both adults and pups may possibly fall prey to jackals and striped hyenas. Fennec families dig out burrows in the sand for habitation and protection, which can be as large as 120 m2 and adjoin the burrows of other families. Precise population figures are not known but are estimated from the frequency of sightings; these indicate that the fennec fox is currently not threatened by extinction. Knowledge of social interactions is limited to information gathered from captive animals. The fennec fox is commonly trapped for exhibition or sale in North Africa, and it is considered an exotic pet in some parts of the world. Taxonomy The fennec fox was scientifically described as Canis zerda by Eberhardt Zimmermann in 1780. In 1788, Johann Friedrich Gmelin gave the species the synonym Canis cerdo, with the type locality being the Sahara Desert. A few years later, in 1793, Friedrich Albrecht Anton Meyer assigned the name Viverra aurita to the species; the type locality was Algeria. Subsequent synonyms include Fennecus arabicus by Anselme Gaëtan Desmarest in 1804, Megalotis cerda by Johann Karl Wilhelm Illiger in 1811, which was based on earlier descriptions by Gmelin, and another synonym by Desmarest (Fennecus brucei) in 1820; the type locality was Algeria, Tunisia, Libya, and Sudan. In 1827, the species was given another synonym (Canis fennecus) by René Lesson, whose work was largely based on the species' original scientific description of 1780. In the 1840s, the species received synonyms from Pierre Boitard in 1842 (Vulpes denhamii) and John Edward Gray in 1843 (Vulpes zuarensis); the type localities of these were the "interior of Africa" and Egypt, respectively. More than a century later, in 1978, Gordon Barclay Corbet renamed the species Vulpes zerda, which is its current scientific name. The species was originally assigned to the genus Canis but, following molecular analysis, was moved to Vulpes despite having a few distinct morphological and behavioral traits. Phylogeny According to DNA evidence, the closest living relative of the fennec fox is the Blanford's fox. They are two of eight "desert fox" species, a group of Vulpes that share comparable ecologies. The other members include the corsac fox, pale fox, kit fox, Tibetan fox, Rüppell's fox and Cape fox. All eight species evolved to survive in desert environments, developing several traits such as sandy colored coats, large ears, pigmented eyes, and specialized kidneys. The word fennec is derived from the Arabic word fanak, which likely has Persian origins. The fennec fox is one of 13 extant Vulpes species and a member of the family Canidae. Description The fennec fox has sand-colored fur which reflects sunlight during the day and helps keep it warm at night. Its nose is black and its tapering tail has a black tip.
Its long ears have longitudinal reddish stripes on the back and are so densely haired inside that the external auditory meatus is not visible. The edges of the ears are whitish, but darker on the back. The ear-to-body ratio is the greatest in the canid family and likely helps in dissipating heat and locating prey. It has large, dense kidneys with a somewhat compact medulla, which help store water in times of scarcity. It has dark streaks running from the inner eye to either side of the slender muzzle. Its large eyes are dark. The dental formula is with small and narrow canines. The pads of its paws are covered with dense fur, which facilitates walking on hot, sandy soil. The fennec fox is the smallest canid species. Females range in head-to-body size from with a long tail and long ears, and weigh . Males are slightly larger, ranging in head-to-body size from with a long tail and long ears, weighing at least . Distribution and habitat The fennec fox is distributed throughout the Sahara, from Morocco and Mauritania to northern Sudan, through Egypt and its Sinai Peninsula. It inhabits small sand dunes and vast treeless sand areas with sparse vegetation, such as grasses, sedges and small shrubs. In the northern part of its range, annual rainfall has been recorded at less than 100 mm, compared to 300 mm in its southern range. The fennec fox's range likely overlaps with that of other canines such as the golden jackal and Rüppell's fox. Compared to these canids, the fennec fox seems to inhabit areas with a more extreme climate and has been known to build burrows in grainier surfaces; this adaptation gives it an edge over competitors. Behaviour and ecology Behaviour Fennec foxes are primarily nocturnal, displaying heightened activity during the cooler nighttime hours. This behaviour helps them escape the extreme Saharan heat and reduces water loss through panting. A fennec fox digs its den in sand, either in open areas or in places sheltered by plants with stable sand dunes. In compacted soils, dens can cover up to 120 m2, with up to 15 different entrances. In some cases, different families interconnect their dens, or locate them close together. In soft, looser sand, dens tend to be simpler, with only one entrance leading to a single chamber. Captive individuals reside in family groups. Fennec foxes exhibit playful behavior, especially among younger individuals. Hunting and diet The fennec fox is omnivorous, feeding on small rodents, lizards (geckos and skinks), small birds and their eggs, insects, fruits, leaves, roots and also some tubers. It relies on the moisture content of prey, but drinks water when available. It hunts alone and digs in the sand for small vertebrates and insects. Some individuals have been observed burying prey for later consumption and searching for food in the vicinity of human settlements. In the Algerian Sahara, 114 scat samples were collected that contained more than 400 insects, plant fragments and date palm (Phoenix dactylifera) fruits, and remains of birds, mammals, squamata and insects. Reproduction Fennec foxes mate for life. Captive animals reach sexual maturity at around nine months and mate between January and April. Female fennec foxes are in estrus for an average of 24 hours and usually breed once per year; the copulation tie lasts up to two hours and 45 minutes. Gestation usually lasts between 50 and 52 days, though sometimes up to 63 days. After mating, the male becomes very aggressive and protects the female, and provides her with food during pregnancy and lactation.
Females give birth between March and June to a litter of one to four pups that open their eyes after 8 to 11 days. Both the female and the male care for the pups. They communicate by barking, purring, yapping and squeaking. Pups remain in the family even after a new litter is born. The pups are weaned at the age of 61 to 70 days. Adults rear pups until they are around 16 to 17 weeks old. The average lifespan in the wild is 10 years. The oldest captive male fennec fox was 14 years old, and the oldest female 13 years. Predators, parasites and diseases African horned owl species such as the Pharaoh eagle-owl prey on fennec fox pups. Anecdotal reports exist of jackals and striped hyenas also preying on the fennec fox. However, according to nomads, the fennec fox is fast and changes direction so well that even their Salukis are hardly ever able to capture it. Captive fennec foxes are susceptible to canine distemper virus, displaying fever, mucopurulent ocular discharge, diarrhea, severe emaciation, seizures, generalized ataxia, severe dehydration, brain congestion, gastric ulcers and death. Stress caused by capture and long-distance transportation is thought to be the cause. In 2012, a study reported a case of Trichophyton mentagrophytes, a fungus species, in a 2-year-old male. It died of anorexia and icterus not long after contracting the pathogen. A 2019 review of the deaths of fennec foxes due to medical conditions or pathogens at the Bronx and Prospect Park Zoos since 1980 found that the majority of such deaths were attributed to neoplasia and infection. Most foxes developed infections or medical conditions from atopic dermatitis and other dermatologic diseases, as well as trauma. Parasites known to infect the fennec fox include roundworms such as Capillaria and Angiostrongylus vasorum, as well as the alveolate Toxoplasma gondii. Threats In North Africa, the fennec fox is commonly trapped for exhibition or sale to tourists. Expansion of permanent human settlements in southern Morocco has caused its disappearance in these areas and restricted it to marginal areas. Other factors such as roadwork, seismic surveys, mining, oil fields, commercial expansion and the increased number of human communities in its range are cited as potential threats. Conservation As of 2015, the fennec fox is classified as Least Concern on the IUCN Red List. It is listed in CITES Appendix II and is protected in Morocco and Western Sahara, Algeria, Tunisia and Egypt, where it has been documented in several protected areas. Another measure taken to conserve the species is the placement of individuals in captive environments such as zoos. Educational programs are also promoted to further this initiative. Interactions with humans In culture The fennec fox is the national animal of Algeria. It also serves as the nickname of the Algeria national football team, "Les Fennecs". The species is depicted in The Little Prince, a 1943 novella by Antoine de Saint-Exupéry which follows the story of a pilot who is forced to make an emergency plane landing in the remote Sahara Desert. In 2000, the fennec fox was portrayed on the cover of a Ranger Rick magazine. Moreover, within the furry community, the fennec fox is said to be normative in nature, which, in the context of that community, is a rather rare attribute. In Roman art and literature, there is a dearth of depictions of fox species in general.
However, according to Martial's Epigrams, which describe the "long-eared fox" as a popular pet, it is likely that the fennec fox was kept as an exotic pet in the Roman Empire. Additionally, the species has made appearances as Fenneko in the animated TV series Aggretsuko. In captivity The fennec fox is bred commercially as an exotic pet. Commercial breeders remove the pups from their mother to hand-raise them, as tame foxes are more valuable. A breeders' registry has been set up in the United States to avoid any problems associated with inbreeding. As of 2020, 15 US states allowed the ownership of fennec foxes without the need for a permit, although one may also be obtained. Due to poor diet, captive foxes have been known to grow to abnormally large sizes. Captive foxes have often been recorded exhibiting stereotyped behaviors; this may be due to the insufficient environments in which they are placed. In response to noises from zookeepers and visitors alike, foxes often pace repeatedly. Similarly, in one case, two male individuals in the National Zoological Park spent the majority of their time pacing around their enclosures. It is suggested that larger outdoor enclosures may help reduce stereotyped behaviors, as they provide more space for foxes to flee from perceived danger and to hide in a provided safe spot.
Biology and health sciences
Canines
Animals
60710
https://en.wikipedia.org/wiki/Ferrocene
Ferrocene
Ferrocene is an organometallic compound with the formula Fe(C5H5)2. The molecule is a complex consisting of two cyclopentadienyl rings sandwiching a central iron atom. It is an orange solid with a camphor-like odor that sublimes above room temperature, and is soluble in most organic solvents. It is remarkable for its stability: it is unaffected by air, water, and strong bases, and can be heated to 400 °C without decomposition. In oxidizing conditions it can reversibly react with strong acids to form the ferrocenium cation [Fe(C5H5)2]+. Ferrocene and the ferrocenium cation are sometimes abbreviated as Fc and Fc+ respectively. The first reported synthesis of ferrocene was in 1951. Its unusual stability puzzled chemists, and required the development of new theory to explain its formation and bonding. The discovery of ferrocene and its many analogues, known as metallocenes, sparked excitement and led to a rapid growth in the discipline of organometallic chemistry. Geoffrey Wilkinson and Ernst Otto Fischer, both of whom worked on elucidating the structure of ferrocene, later shared the 1973 Nobel Prize in Chemistry for their work on organometallic sandwich compounds. Ferrocene itself has no large-scale applications, but has found more niche uses in catalysis, as a fuel additive, and as a tool in undergraduate education. History Discovery Ferrocene was discovered by accident twice. The first known synthesis may have been made in the late 1940s by unknown researchers at Union Carbide, who tried to pass hot cyclopentadiene vapor through an iron pipe. The vapor reacted with the pipe wall, creating a "yellow sludge" that clogged the pipe. Years later, a sample of the sludge that had been saved was obtained and analyzed by Eugene O. Brimm, shortly after he read Kealy and Pauson's article, and was found to consist of ferrocene. The second time was around 1950, when Samuel A. Miller, John A. Tebboth, and John F. Tremaine, researchers at British Oxygen, were attempting to synthesize amines from hydrocarbons and nitrogen in a modification of the Haber process. When they tried to react cyclopentadiene with nitrogen at 300 °C, at atmospheric pressure, they were disappointed to see the hydrocarbon react with some source of iron, yielding ferrocene. While they too observed its remarkable stability, they put the observation aside and did not publish it until after Pauson reported his findings. Kealy and Pauson were later provided with a sample by Miller et al., who confirmed that the products were the same compound. In 1951, Peter L. Pauson and Thomas J. Kealy at Duquesne University attempted to prepare fulvalene (C10H8) by oxidative dimerization of cyclopentadiene (C5H6). To that end, they reacted the Grignard compound cyclopentadienyl magnesium bromide in diethyl ether with ferric chloride as an oxidizer. However, instead of the expected fulvalene, they obtained a light orange powder of "remarkable stability", with the formula C10H10Fe. Determining the structure Pauson and Kealy conjectured that the compound had two cyclopentadienyl groups, each with a single covalent bond from a saturated carbon atom to the iron atom. However, that structure was inconsistent with then-existing bonding models and did not explain the unexpected stability of the compound, and chemists struggled to find the correct structure. The structure was deduced and reported independently by three groups in 1952. Robert Burns Woodward, Geoffrey Wilkinson, et al. deduced the structure after observing that the compound was diamagnetic and nonpolar.
A few months later, they described its reactions as being typical of aromatic compounds such as benzene. The name ferrocene was coined by Mark Whiting, a postdoc with Woodward. Ernst Otto Fischer and Wolfgang Pfab also noted ferrocene's diamagnetism and high symmetry. They also synthesized nickelocene and cobaltocene and confirmed that they had the same structure. Fischer described the structure as a Doppelkegelstruktur ("double-cone structure"), although the term "sandwich" came to be preferred by British and American chemists. Philip Frank Eiland and Raymond Pepinsky confirmed the structure through X-ray crystallography, and it was later confirmed by NMR spectroscopy. The "sandwich" structure of ferrocene was shockingly novel and led to intensive theoretical studies. Application of molecular orbital theory, with the assumption of an Fe2+ center between two cyclopentadienide anions, resulted in the successful Dewar–Chatt–Duncanson model, allowing correct prediction of the geometry of the molecule as well as explaining its remarkable stability. Impact The discovery of ferrocene was considered so significant that Wilkinson and Fischer shared the 1973 Nobel Prize in Chemistry "for their pioneering work, performed independently, on the chemistry of the organometallic, so called sandwich compounds". Structure and bonding Mössbauer spectroscopy indicates that the iron center in ferrocene should be assigned the +2 oxidation state. Each cyclopentadienyl (Cp) ring should then be allocated a single negative charge. Thus ferrocene could be described as iron(II) bis(cyclopentadienide), Fe(C5H5)2. Each ring has six π-electrons, which makes them aromatic according to Hückel's rule. These π-electrons are then shared with the metal via covalent bonding. Since Fe2+ has six d-electrons, the complex attains an 18-electron configuration, which accounts for its stability. In modern notation, this sandwich structural model of the ferrocene molecule is denoted as Fe(η5-C5H5)2, where η denotes hapticity, the number of atoms through which each ring binds. The carbon–carbon bond distances around each five-membered ring are all 1.40 Å, and all Fe–C bond distances are 2.04 Å. From room temperature down to 164 K, X-ray crystallography yields a monoclinic space group; the cyclopentadienide rings are in a staggered conformation, resulting in a centrosymmetric molecule with symmetry group D5d. However, below 110 K, ferrocene crystallizes in an orthorhombic crystal lattice in which the Cp rings are ordered and eclipsed, so that the molecule has symmetry group D5h. In the gas phase, electron diffraction and computational studies show that the Cp rings are eclipsed. While ferrocene has no permanent dipole moment at room temperature, between 172.8 and 163.5 K the molecule exhibits an "incommensurate modulation", breaking the D5 symmetry and acquiring an electric dipole. The Cp rings rotate with a low barrier about the Cp(centroid)–Fe–Cp(centroid) axis, as observed by measurements on substituted derivatives of ferrocene using 1H and 13C nuclear magnetic resonance spectroscopy. For example, methylferrocene (CH3C5H4FeC5H5) exhibits a singlet for the C5H5 ring. In solution at room temperature, the eclipsed D5h conformer of ferrocene was determined to dominate over the staggered D5d conformer, as suggested by both Fourier-transform infrared spectroscopy and DFT calculations. Synthesis Early methods The first reported syntheses of ferrocene were nearly simultaneous. Pauson and Kealy synthesised ferrocene using iron(III) chloride and cyclopentadienyl magnesium bromide.
A redox reaction between the Grignard reagent and the ferric chloride produces iron(II) chloride; the intended fulvalene is not formed. Another early synthesis of ferrocene was by Miller et al., who treated metallic iron with gaseous cyclopentadiene at elevated temperature. An approach using iron pentacarbonyl was also reported: Fe(CO)5 + 2 C5H6 → Fe(C5H5)2 + 5 CO + H2 Via alkali cyclopentadienide More efficient preparative methods are generally a modification of the original transmetalation sequence, using either commercially available sodium cyclopentadienide or freshly cracked cyclopentadiene deprotonated with potassium hydroxide and reacted with anhydrous iron(II) chloride in ethereal solvents. Modern modifications of Pauson and Kealy's original Grignard approach are known: Using sodium cyclopentadienide:       2 NaC5H5   +   FeCl2   →   Fe(C5H5)2   +   2 NaCl Using freshly-cracked cyclopentadiene:     FeCl2·4H2O   +   2 C5H6   +   2 KOH   →   Fe(C5H5)2   +   2 KCl   +   6 H2O Using an iron(II) salt with a Grignard reagent:     2 C5H5MgBr   +   FeCl2   →   Fe(C5H5)2   +   2 MgBrCl Even some amine bases (such as diethylamine) can be used for the deprotonation, though the reaction proceeds more slowly than when using stronger bases: 2 C5H6   +   2 (CH3CH2)2NH   +   FeCl2   →   Fe(C5H5)2   +   2 (CH3CH2)2NH2Cl Direct transmetalation can also be used to prepare ferrocene from some other metallocenes, such as manganocene: FeCl2   +   Mn(C5H5)2   →   MnCl2   +   Fe(C5H5)2 Properties Ferrocene is an air-stable orange solid with a camphor-like odor. As expected for a symmetric, uncharged species, ferrocene is soluble in normal organic solvents, such as benzene, but is insoluble in water. It is stable to temperatures as high as 400 °C. Ferrocene readily sublimes, especially upon heating in a vacuum. Its vapor pressure is about 1 Pa at 25 °C, 10 Pa at 50 °C, 100 Pa at 80 °C, 1,000 Pa at 116 °C, and 10,000 Pa (nearly 0.1 atm) at 162 °C. Reactions With electrophiles Ferrocene undergoes many reactions characteristic of aromatic compounds, enabling the preparation of substituted derivatives. A common undergraduate experiment is the Friedel–Crafts reaction of ferrocene with acetic anhydride (or acetyl chloride) in the presence of phosphoric acid as a catalyst. Under the conditions of a Mannich reaction, ferrocene gives N,N-dimethylaminomethylferrocene. Ferrocene can itself be oxidized to the ferrocenium cation (Fc+); the ferrocene/ferrocenium couple is often used as a reference in electrochemistry. It is an aromatic substance and undergoes substitution reactions rather than addition reactions on the cyclopentadienyl ligands. For example, Friedel–Crafts acylation of ferrocene with acetic anhydride yields acetylferrocene, just as acylation of benzene yields acetophenone under similar conditions. The Vilsmeier–Haack reaction (formylation) using formylanilide and phosphorus oxychloride gives ferrocenecarboxaldehyde. Diformylation does not occur readily, reflecting the electronic communication between the two rings. Protonation of ferrocene allows isolation of [Cp2FeH]PF6. In the presence of aluminium chloride, Me2NPCl2 and ferrocene react to give ferrocenyl dichlorophosphine, whereas treatment with phenyldichlorophosphine under similar conditions forms P,P-diferrocenyl-P-phenyl phosphine. Ferrocene reacts with P4S10 to form a diferrocenyl-dithiadiphosphetane disulfide. Lithiation Ferrocene reacts with butyllithium to give 1,1′-dilithioferrocene, which is a versatile nucleophile.
tert-Butyllithium, by contrast, produces monolithioferrocene. Redox chemistry Ferrocene undergoes a one-electron oxidation at around 0.4 V versus a saturated calomel electrode (SCE), becoming ferrocenium. This reversible oxidation has been used as a standard in electrochemistry, with Fc+/Fc = 0.64 V versus the standard hydrogen electrode, although other values have been reported. Ferrocenium tetrafluoroborate is a common reagent. This remarkably reversible oxidation-reduction behaviour has been extensively used to control electron-transfer processes in electrochemical and photochemical systems. Substituents on the cyclopentadienyl ligands alter the redox potential in the expected way: electron-withdrawing groups such as a carboxylic acid shift the potential in the anodic direction (i.e. more positive), whereas electron-releasing groups such as methyl groups shift the potential in the cathodic direction (more negative). Thus, decamethylferrocene is much more easily oxidised than ferrocene and can even be oxidised to the corresponding dication. Ferrocene is often used as an internal standard for calibrating redox potentials in non-aqueous electrochemistry. Stereochemistry of substituted ferrocenes Disubstituted ferrocenes can exist as 1,2-, 1,3- or 1,1′-isomers, none of which are interconvertible. Ferrocenes that are asymmetrically disubstituted on one ring are chiral – for example [CpFe(EtC5H3Me)]. This planar chirality arises despite no single atom being a stereogenic centre. A substituted ferrocene of this kind (a 4-(dimethylamino)pyridine derivative) has been shown to be effective when used for the kinetic resolution of racemic secondary alcohols. Several approaches have been developed to asymmetrically 1,1′-functionalise the ferrocene. Applications of ferrocene and its derivatives Ferrocene and its numerous derivatives have no large-scale applications, but have many niche uses that exploit their unusual structure (ligand scaffolds, pharmaceutical candidates), robustness (anti-knock formulations, precursors to materials), and redox properties (reagents and redox standards). Ligand scaffolds Chiral ferrocenyl phosphines are employed as ligands for transition-metal-catalyzed reactions. Some of them have found industrial applications in the synthesis of pharmaceuticals and agrochemicals. For example, the diphosphine 1,1′-bis(diphenylphosphino)ferrocene (dppf) is a valued ligand for palladium-coupling reactions, and the Josiphos ligands are useful for hydrogenation catalysis. They are named after the technician who made the first one, Josi Puleo. Fuel additives Ferrocene and its derivatives are antiknock agents used in the fuel for petrol engines. They are safer than the previously used tetraethyllead. Petrol additive solutions containing ferrocene can be added to unleaded petrol to enable its use in vintage cars designed to run on leaded petrol. The iron-containing deposits formed from ferrocene can form a conductive coating on spark plug surfaces. Ferrocene polyglycol copolymers, prepared by a polycondensation reaction between a ferrocene derivative and a substituted dihydroxy alcohol, have promise as a component of rocket propellants. These copolymers provide rocket propellants with heat stability, serving as a propellant binder and controlling the propellant burn rate. Ferrocene has been found to be effective at reducing the smoke and sulfur trioxide produced when burning coal.
Addition by any practical means, whether by impregnating the coal or by adding ferrocene to the combustion chamber, can significantly reduce the amount of these undesirable byproducts, even with a small amount of the metal cyclopentadienyl compound. Pharmaceuticals Ferrocene derivatives have been investigated as drugs, with one compound approved for use in the USSR in the 1970s as an iron supplement, though it is no longer marketed today. Only one drug has entered clinical trials in recent years: Ferroquine (7-chloro-N-(2-((dimethylamino)methyl)ferrocenyl)quinolin-4-amine), an antimalarial, which has reached Phase IIb trials. Ferrocene-containing polymer-based drug delivery systems have been investigated. The anticancer activity of ferrocene derivatives was first investigated in the late 1970s, when derivatives bearing amine or amide groups were tested against lymphocytic leukemia. Some ferrocenium salts exhibit anticancer activity, but no compound has seen evaluation in the clinic. Ferrocene derivatives have strong inhibitory activity against the human lung cancer cell line A549, the colorectal cancer cell line HCT116, and the breast cancer cell line MCF-7. An experimental drug has been reported which is a ferrocenyl version of tamoxifen. The idea is that the tamoxifen will bind to the estrogen binding sites, resulting in cytotoxicity. Ferrocifens are being exploited for cancer applications by a French biotech, Feroscan, founded by Prof. Gérard Jaouen. Solid rocket propellant Ferrocene and related derivatives are used as powerful burn-rate catalysts in ammonium perchlorate composite propellant. Derivatives and variations Ferrocene analogues can be prepared with variants of the cyclopentadienyl ligand, for example bisindenyliron and bisfluorenyliron. Carbon atoms can be replaced by heteroatoms, as illustrated by Fe(η5-C5Me5)(η5-P5) and Fe(η5-C5H5)(η5-C4H4N) ("azaferrocene"). Azaferrocene arises from decarbonylation of Fe(η5-C5H5)(CO)2(η1-pyrrole) in cyclohexane. On boiling under reflux in benzene, this compound is converted to ferrocene. Because of the ease of substitution, many structurally unusual ferrocene derivatives have been prepared. For example, the penta(ferrocenyl)cyclopentadienyl ligand features a cyclopentadienyl anion derivatized with five ferrocene substituents. In hexaferrocenylbenzene, C6[(η5-C5H4)Fe(η5-C5H5)]6, all six positions on a benzene molecule have ferrocenyl substituents. X-ray diffraction analysis of this compound confirms that the cyclopentadienyl ligands are not co-planar with the benzene core but have alternating dihedral angles of +30° and −80°. Due to steric crowding, the ferrocenyls are slightly bent, with angles of 177°, and have elongated C–Fe bonds. The quaternary cyclopentadienyl carbon atoms are also pyramidalized. Also, the benzene core has a chair conformation with dihedral angles of 14° and displays bond-length alternation between 142.7 pm and 141.1 pm, both indications of steric crowding of the substituents. The synthesis of hexaferrocenylbenzene has been reported using a Negishi coupling of hexaiodobenzene and diferrocenylzinc, using tris(dibenzylideneacetone)dipalladium(0) as catalyst, in tetrahydrofuran. The yield is only 4%, which is further evidence consistent with substantial steric crowding around the arene core. Materials chemistry Ferrocene, a precursor to iron nanoparticles, can be used as a catalyst for the production of carbon nanotubes.
Vinylferrocene can be converted to polyvinylferrocene (PVFc), a ferrocenyl version of polystyrene (the phenyl groups are replaced with ferrocenyl groups). Another polyferrocene which can be formed is poly(2-(methacryloyloxy)ethyl ferrocenecarboxylate), PFcMA. In addition to using organic polymer backbones, these pendant ferrocene units have been attached to inorganic backbones such as polysiloxanes, polyphosphazenes, and polyphosphinoboranes, (–PH(R)–BH2–)n, and the resulting materials exhibit unusual physical and electronic properties relating to the ferrocene/ferrocenium redox couple. Both PVFc and PFcMA have been tethered onto silica wafers and the wettability measured when the polymer chains are uncharged and when the ferrocene moieties are oxidised to produce positively charged groups. The contact angle with water on the PFcMA-coated wafers was 70° smaller following oxidation, while in the case of PVFc the decrease was 30°, and the switching of wettability is reversible. In the PFcMA case, lengthening the chains, and hence introducing more ferrocene groups, produces significantly larger reductions in the contact angle upon oxidation.
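Because the ferrocenium/ferrocene couple serves as an internal standard (see Redox chemistry above), potentials measured against other reference electrodes are often re-expressed on the Fc+/Fc scale. The sketch below illustrates that bookkeeping with the approximate values quoted in the text (about 0.4 V vs SCE and 0.64 V vs SHE); the function names are invented for this example, and real offsets vary with solvent, electrolyte, and temperature.

```python
# Illustrative re-referencing of electrode potentials to the Fc+/Fc couple,
# using the approximate values quoted in the text. Indicative only: the
# true offsets depend on solvent, supporting electrolyte, and temperature.

FC_VS_SCE = 0.40  # V, Fc+/Fc measured against the saturated calomel electrode
FC_VS_SHE = 0.64  # V, Fc+/Fc measured against the standard hydrogen electrode

def sce_to_fc(e_vs_sce: float) -> float:
    """Convert a potential measured vs SCE to the Fc+/Fc scale."""
    return e_vs_sce - FC_VS_SCE

def she_to_fc(e_vs_she: float) -> float:
    """Convert a potential measured vs SHE to the Fc+/Fc scale."""
    return e_vs_she - FC_VS_SHE

# Example: a couple measured at +0.95 V vs SCE lies at about +0.55 V vs Fc+/Fc.
print(f"{sce_to_fc(0.95):+.2f} V vs Fc+/Fc")
```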
Physical sciences
Organometallics
Chemistry
60770
https://en.wikipedia.org/wiki/Curvature
Curvature
In mathematics, curvature is any of several strongly related concepts in geometry that intuitively measure the amount by which a curve deviates from being a straight line or by which a surface deviates from being a plane. If a curve or surface is contained in a larger space, curvature can be defined extrinsically relative to the ambient space. Curvature of Riemannian manifolds of dimension at least two can be defined intrinsically without reference to a larger space. For curves, the canonical example is that of a circle, which has a curvature equal to the reciprocal of its radius. Smaller circles bend more sharply, and hence have higher curvature. The curvature at a point of a differentiable curve is the curvature of its osculating circle, that is, the circle that best approximates the curve near this point. The curvature of a straight line is zero. In contrast to the tangent, which is a vector quantity, the curvature at a point is typically a scalar quantity, that is, it is expressed by a single real number. For surfaces (and, more generally, for higher-dimensional manifolds) that are embedded in a Euclidean space, the concept of curvature is more complex, as it depends on the choice of a direction on the surface or manifold. This leads to the concepts of maximal curvature, minimal curvature, and mean curvature. History In Tractatus de configurationibus qualitatum et motuum, the 14th-century philosopher and mathematician Nicole Oresme introduces the concept of curvature as a measure of departure from straightness; for circles he takes the curvature to be inversely proportional to the radius; and he attempts to extend this idea to other curves as a continuously varying magnitude. The curvature of a differentiable curve was originally defined through osculating circles. In this setting, Augustin-Louis Cauchy showed that the center of curvature is the intersection point of two infinitely close normal lines to the curve. Plane curves Intuitively, the curvature describes for any part of a curve how much the curve direction changes over a small distance travelled (e.g., an angle in rad/m), so it is a measure of the instantaneous rate of change of direction of a point that moves on the curve: the larger the curvature, the larger this rate of change. In other words, the curvature measures how fast the unit tangent vector to the curve at point p rotates when point p moves at unit speed along the curve. In fact, it can be proved that this instantaneous rate of change is exactly the curvature. More precisely, suppose that the point is moving on the curve at a constant speed of one unit, that is, the position of the point P(s) is a function of the parameter s, which may be thought of as the time or as the arc length from a given origin. Let T(s) be a unit tangent vector of the curve at P(s), which is also the derivative of P(s) with respect to s. Then, the derivative of T(s) with respect to s is a vector that is normal to the curve and whose length is the curvature. To be meaningful, the definition of the curvature and its different characterizations require that the curve is continuously differentiable near P, for having a tangent that varies continuously; it requires also that the curve is twice differentiable at P, for ensuring the existence of the involved limits, and of the derivative of T(s). The characterization of the curvature in terms of the derivative of the unit tangent vector is probably less intuitive than the definition in terms of the osculating circle, but formulas for computing the curvature are easier to deduce.
Therefore, and also because of its use in kinematics, this characterization is often given as a definition of the curvature. Osculating circle Historically, the curvature of a differentiable curve was defined through the osculating circle, which is the circle that best approximates the curve at a point. More precisely, given a point P on a curve, every other point Q of the curve defines a circle (or sometimes a line) passing through Q and tangent to the curve at P. The osculating circle is the limit, if it exists, of this circle when Q tends to P. Then the center and the radius of curvature of the curve at P are the center and the radius of the osculating circle. The curvature is the reciprocal of the radius of curvature. That is, the curvature is κ = 1/R, where R is the radius of curvature (the whole circle has this curvature; it can be read as one turn 2π over the length 2πR). This definition is difficult to manipulate and to express in formulas. Therefore, other equivalent definitions have been introduced. In terms of arc-length parametrization Every differentiable curve can be parametrized with respect to arc length. In the case of a plane curve, this means the existence of a parametrization γ(s) = (x(s), y(s)), where x and y are real-valued differentiable functions whose derivatives satisfy x′(s)² + y′(s)² = 1. This means that the tangent vector T(s) = (x′(s), y′(s)) has a length equal to one and is thus a unit tangent vector. If the curve is twice differentiable, that is, if the second derivatives of x and y exist, then the derivative of T(s) exists. This vector is normal to the curve, its length is the curvature κ(s), and it is oriented toward the center of curvature. That is, T(s) = γ′(s), ‖T(s)‖ = 1, and κ(s) = ‖T′(s)‖ = ‖γ″(s)‖. Moreover, because the radius of curvature is R(s) = 1/κ(s) (assuming κ(s) ≠ 0) and the center of curvature is on the normal to the curve, the center of curvature is the point C(s) = γ(s) + (1/κ(s)²) T′(s). (In case the curvature is zero, the center of curvature is not located anywhere on the plane R² and is often said to be located "at infinity".) If N(s) is the unit normal vector obtained from T(s) by a counterclockwise rotation of π/2, then T′(s) = k(s)N(s), with |k(s)| = κ(s). The real number k(s) is called the oriented curvature or signed curvature. It depends on both the orientation of the plane (definition of counterclockwise), and the orientation of the curve provided by the parametrization. In fact, the change of variable s → −s provides another arc-length parametrization, and changes the sign of k(s). In terms of a general parametrization Let γ(t) = (x(t), y(t)) be a proper parametric representation of a twice differentiable plane curve. Here proper means that on the domain of definition of the parametrization, the derivative dγ/dt is defined, differentiable and nowhere equal to the zero vector. With such a parametrization, the signed curvature is k = (x′y″ − y′x″) / (x′² + y′²)^(3/2), where primes refer to derivatives with respect to t. The curvature κ is thus κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2). These can be expressed in a coordinate-free way as k = det(γ′, γ″) / ‖γ′‖³ and κ = |det(γ′, γ″)| / ‖γ′‖³. These formulas can be derived from the special case of arc-length parametrization in the following way. The above condition on the parametrisation implies that the arc length s is a differentiable monotonic function of the parameter t, and conversely that t is a monotonic function of s. Moreover, by changing, if needed, s to −s, one may suppose that these functions are increasing and have a positive derivative. Using notation of the preceding section and the chain rule, one has dγ/dt = (ds/dt) T, and thus, by taking the norm of both sides, ds/dt = ‖γ′(t)‖, where the prime denotes differentiation with respect to t. The curvature is the norm of the derivative of T with respect to s.
By using the above formula and the chain rule this derivative and its norm can be expressed in terms of γ′ and γ″ only, with the arc-length parameter s completely eliminated, giving the above formulas for the curvature. Graph of a function The graph of a function y = f(x) is a special case of a parametrized curve, of the form x = t, y = f(t). As the first and second derivatives of x are 1 and 0, previous formulas simplify to κ = |y″| / (1 + y′²)^(3/2) for the curvature, and to k = y″ / (1 + y′²)^(3/2) for the signed curvature. In the general case of a curve, the sign of the signed curvature is somewhat arbitrary, as it depends on the orientation of the curve. In the case of the graph of a function, there is a natural orientation by increasing values of x. This makes the sign of the signed curvature significant. The sign of the signed curvature is the same as the sign of the second derivative of f. If it is positive then the graph has an upward concavity, and, if it is negative the graph has a downward concavity. If it is zero, then one has an inflection point or an undulation point. When the slope of the graph (that is the derivative of the function) is small, the signed curvature is well approximated by the second derivative. More precisely, using big O notation, one has k(x) = y″ (1 + O(y′²)). It is common in physics and engineering to approximate the curvature with the second derivative, for example, in beam theory or for deriving the wave equation of a string under tension, and other applications where small slopes are involved. This often allows systems that are otherwise nonlinear to be treated approximately as linear. Polar coordinates If a curve is defined in polar coordinates by the radius expressed as a function of the polar angle, that is r is a function of θ, then its curvature is κ(θ) = (r² + 2r′² − r r″) / (r² + r′²)^(3/2), where the prime refers to differentiation with respect to θ. This results from the formula for general parametrizations, by considering the parametrization x = r(θ) cos θ, y = r(θ) sin θ. Implicit curve For a curve defined by an implicit equation F(x, y) = 0 with partial derivatives denoted F_x, F_y, F_xx, F_xy, F_yy, the curvature is given by κ = |F_y² F_xx − 2 F_x F_y F_xy + F_x² F_yy| / (F_x² + F_y²)^(3/2). The signed curvature is not defined, as it depends on an orientation of the curve that is not provided by the implicit equation. Note that changing F into −F would not change the curve defined by F(x, y) = 0, but it would change the sign of the numerator if the absolute value were omitted in the preceding formula. A point of the curve where F_x = F_y = 0 is a singular point, which means that the curve is not differentiable at this point, and thus that the curvature is not defined (most often, the point is either a crossing point or a cusp). The above formula for the curvature can be derived from the expression of the curvature of the graph of a function by using the implicit function theorem and the fact that, on such a curve, one has dy/dx = −F_x / F_y. Examples It can be useful to verify on simple examples that the different formulas given in the preceding sections give the same result. Circle A common parametrization of a circle of radius r is γ(t) = (r cos t, r sin t). The formula for the curvature gives k(t) = (r² sin²t + r² cos²t) / (r² cos²t + r² sin²t)^(3/2) = 1/r. It follows, as expected, that the radius of curvature is the radius of the circle, and that the center of curvature is the center of the circle. The circle is a rare case where the arc-length parametrization is easy to compute, as it is γ(s) = (r cos(s/r), r sin(s/r)). It is an arc-length parametrization, since the norm of γ′(s) is equal to one. This parametrization gives the same value for the curvature, as it amounts to division by r³ in both the numerator and the denominator in the preceding formula. The same circle can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = x² + y² − r².
Then, the formula for the curvature in this case gives κ = (F_y² F_xx − 2 F_x F_y F_xy + F_x² F_yy) / (F_x² + F_y²)^(3/2) = 8r² / (4r²)^(3/2) = 1/r. Parabola Consider the parabola y = ax² + bx + c. It is the graph of a function, with derivative 2ax + b, and second derivative 2a. So, the signed curvature is k(x) = 2a / (1 + (2ax + b)²)^(3/2). It has the sign of a for all values of x. This means that, if a > 0, the concavity is upward directed everywhere; if a < 0, the concavity is downward directed; for a = 0, the curvature is zero everywhere, confirming that the parabola degenerates into a line in this case. The (unsigned) curvature is maximal for x = −b/2a, that is at the stationary point (zero derivative) of the function, which is the vertex of the parabola. Consider the parametrization γ(t) = (t, at² + bt + c). The first derivative of x is 1, and the second derivative is zero. Substituting into the formula for general parametrizations gives exactly the same result as above, with x replaced by t, if we use primes for derivatives with respect to the parameter t. The same parabola can also be defined by the implicit equation F(x, y) = 0 with F(x, y) = ax² + bx + c − y. As F_y = −1, and F_yy = F_xy = 0, one obtains exactly the same value for the (unsigned) curvature. However, the signed curvature is meaningless here, as −F(x, y) = 0 is a valid implicit equation for the same parabola, which gives the opposite sign for the curvature. Frenet–Serret formulas for plane curves The expression of the curvature in terms of arc-length parametrization is essentially the first Frenet–Serret formula T′(s) = κ(s)N(s), where the primes refer to the derivatives with respect to the arc length s, and N(s) is the normal unit vector in the direction of T′(s). As planar curves have zero torsion, the second Frenet–Serret formula provides the relation N′(s) = −κ(s)T(s). For a general parametrization by a parameter t, one needs expressions involving derivatives with respect to t. As these are obtained by multiplying by ds/dt the derivatives with respect to s, one has, for any proper parametrization, T′(t) = κ(t)‖γ′(t)‖N(t). Curvature comb A curvature comb can be used to represent graphically the curvature of every point on a curve. If γ(t) is a parametrised curve its comb is defined as the parametrized curve Γ(t) = γ(t) + d·κ(t)·N(t), where κ and N are the curvature and normal vector and d is a scaling factor (to be chosen so as to enhance the graphical representation). Space curves As in the case of curves in two dimensions, the curvature of a regular space curve C in three dimensions (and higher) is the magnitude of the acceleration of a particle moving with unit speed along a curve. Thus if γ(s) is the arc-length parametrization of C then the unit tangent vector is given by T(s) = γ′(s) and the curvature is the magnitude of the acceleration: κ(s) = ‖T′(s)‖ = ‖γ″(s)‖. The direction of the acceleration is the unit normal vector N(s), which is defined by N(s) = T′(s)/‖T′(s)‖. The plane containing the two vectors T(s) and N(s) is the osculating plane to the curve at γ(s). The curvature has the following geometrical interpretation. There exists a circle in the osculating plane tangent to γ(s) whose Taylor series to second order at the point of contact agrees with that of γ(s). This is the osculating circle to the curve. The radius R(s) of the circle is called the radius of curvature, and the curvature is the reciprocal of the radius of curvature: κ(s) = 1/R(s). The tangent, curvature, and normal vector together describe the second-order behavior of a curve near a point. In three dimensions, the third-order behavior of a curve is described by a related notion of torsion, which measures the extent to which a curve tends to move as a helical path in space. The torsion and curvature are related by the Frenet–Serret formulas (in three dimensions) and their generalization (in higher dimensions).
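Before moving on to general expressions for space curves, the plane-curve formulas and the two worked examples above can be checked mechanically. The following is a minimal Python sketch (assuming the sympy library; the helper name signed_curvature is our own, purely illustrative):

```python
import sympy as sp

t, r, a, b, c = sp.symbols('t r a b c', real=True, positive=True)

def signed_curvature(x, y):
    """Signed curvature k = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
    of the plane curve t -> (x(t), y(t))."""
    xp, yp = sp.diff(x, t), sp.diff(y, t)
    xpp, ypp = sp.diff(xp, t), sp.diff(yp, t)
    return sp.simplify((xp*ypp - yp*xpp) / (xp**2 + yp**2)**sp.Rational(3, 2))

# Circle of radius r, traversed counterclockwise: constant curvature 1/r.
print(signed_curvature(r*sp.cos(t), r*sp.sin(t)))    # -> 1/r

# Parabola y = a*x**2 + b*x + c, parametrized as (t, a*t**2 + b*t + c):
# the result is equivalent to 2a / (1 + (2at + b)^2)^(3/2).
print(signed_curvature(t, a*t**2 + b*t + c))
```

The printed expressions agree with the signed curvatures derived by hand in the examples above.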
General expressions For a parametrically-defined space curve in three dimensions given in Cartesian coordinates by γ(t) = (x(t), y(t), z(t)), the curvature is κ = √((z″y′ − y″z′)² + (x″z′ − z″x′)² + (y″x′ − x″y′)²) / (x′² + y′² + z′²)^(3/2), where the prime denotes differentiation with respect to the parameter t. This can be expressed independently of the coordinate system by means of the formula κ = ‖γ′ × γ″‖ / ‖γ′‖³, where × denotes the vector cross product. The following formula is valid for the curvature of curves in a Euclidean space of any dimension: κ = √(‖γ′‖²‖γ″‖² − (γ′ · γ″)²) / ‖γ′‖³. Curvature from arc and chord length Given two points P and Q on C, let s(P, Q) be the arc length of the portion of the curve between P and Q and let d(P, Q) denote the length of the line segment from P to Q. The curvature of C at P is given by the limit κ(P) = lim √(24 (s(P, Q) − d(P, Q)) / s(P, Q)³), where the limit is taken as the point Q approaches P on C. The denominator can equally well be taken to be d(P, Q)³. The formula is valid in any dimension. Furthermore, by considering the limit independently on either side of P, this definition of the curvature can sometimes accommodate a singularity at P. The formula follows by verifying it for the osculating circle. Surfaces The curvature of curves drawn on a surface is the main tool for defining and studying the curvature of the surface. Curves on surfaces For a curve drawn on a surface (embedded in three-dimensional Euclidean space), several curvatures are defined, which relate the direction of curvature to the surface's unit normal vector: the normal curvature, the geodesic curvature, and the geodesic torsion. Any non-singular curve on a smooth surface has its tangent vector T contained in the tangent plane of the surface. The normal curvature, k_n, is the curvature of the curve projected onto the plane containing the curve's tangent T and the surface normal u; the geodesic curvature, k_g, is the curvature of the curve projected onto the surface's tangent plane; and the geodesic torsion (or relative torsion), τ_r, measures the rate of change of the surface normal around the curve's tangent. Let the curve be arc-length parametrized, and let e₁ = T, e₃ = u, and e₂ = e₃ × e₁, so that e₁, e₂, e₃ form an orthonormal basis, called the Darboux frame. The above quantities are related by: e₁′ = k_g e₂ + k_n e₃, e₂′ = −k_g e₁ + τ_r e₃, e₃′ = −k_n e₁ − τ_r e₂, where the primes denote derivatives with respect to arc length. Principal curvature All curves on the surface with the same tangent vector at a given point will have the same normal curvature, which is the same as the curvature of the curve obtained by intersecting the surface with the plane containing T and u. Taking all possible tangent vectors, the maximum and minimum values of the normal curvature at a point are called the principal curvatures, k₁ and k₂, and the directions of the corresponding tangent vectors are called principal normal directions. Normal sections Curvature can be evaluated along surface normal sections, similar to above (see for example the Earth radius of curvature). Developable surfaces Some curved surfaces, such as those made from a smooth sheet of paper, can be flattened down into the plane without distorting their intrinsic features in any way. Such developable surfaces have zero Gaussian curvature (see below). Gaussian curvature In contrast to curves, which do not have intrinsic curvature, but do have extrinsic curvature (they only have a curvature given an embedding), surfaces can have intrinsic curvature, independent of an embedding. The Gaussian curvature, named after Carl Friedrich Gauss, is equal to the product of the principal curvatures, K = k₁k₂. It has a dimension of length−2 and is positive for spheres, negative for one-sheet hyperboloids and zero for planes and cylinders. It determines whether a surface is locally convex (when it is positive) or locally saddle-shaped (when it is negative).
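As a quick numerical sanity check of the cross-product formula given at the start of this section, the following sketch (assuming numpy; the constants are illustrative) evaluates the curvature of a circular helix γ(t) = (a cos t, a sin t, bt), whose exact curvature is the constant a/(a² + b²):

```python
import numpy as np

a, b, t = 2.0, 1.0, 0.7                           # illustrative helix parameters
g1 = np.array([-a*np.sin(t),  a*np.cos(t), b])    # gamma'(t)
g2 = np.array([-a*np.cos(t), -a*np.sin(t), 0.0])  # gamma''(t)

# kappa = |gamma' x gamma''| / |gamma'|^3
kappa = np.linalg.norm(np.cross(g1, g2)) / np.linalg.norm(g1)**3
print(kappa, a / (a**2 + b**2))                   # both print 0.4
```

The same value is obtained for every t, as expected for a helix, whose curvature is constant.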
Gaussian curvature is an intrinsic property of the surface, meaning it does not depend on the particular embedding of the surface; intuitively, this means that ants living on the surface could determine the Gaussian curvature. For example, an ant living on a sphere could measure the sum of the interior angles of a triangle and determine that it was greater than 180 degrees, implying that the space it inhabited had positive curvature. On the other hand, an ant living on a cylinder would not detect any such departure from Euclidean geometry; in particular the ant could not detect that the two surfaces have different mean curvatures (see below), which is a purely extrinsic type of curvature. Formally, Gaussian curvature only depends on the Riemannian metric of the surface. This is Gauss's celebrated Theorema Egregium, which he found while concerned with geographic surveys and mapmaking. An intrinsic definition of the Gaussian curvature at a point P is the following: imagine an ant which is tied to P with a short thread of length r. It runs around P while the thread is completely stretched and measures the length C(r) of one complete trip around P. If the surface were flat, the ant would find C(r) = 2πr. On curved surfaces, the formula for C(r) will be different, and the Gaussian curvature K at the point P can be computed by the Bertrand–Diguet–Puiseux theorem as K = lim (r→0) 3(2πr − C(r)) / (πr³). The integral of the Gaussian curvature over the whole surface is closely related to the surface's Euler characteristic; see the Gauss–Bonnet theorem. The discrete analog of curvature, corresponding to curvature being concentrated at a point and particularly useful for polyhedra, is the (angular) defect; the analog for the Gauss–Bonnet theorem is Descartes' theorem on total angular defect. Because (Gaussian) curvature can be defined without reference to an embedding space, it is not necessary that a surface be embedded in a higher-dimensional space in order to be curved. Such an intrinsically curved two-dimensional surface is a simple example of a Riemannian manifold. Mean curvature The mean curvature is an extrinsic measure of curvature equal to half the sum of the principal curvatures, H = (k₁ + k₂)/2. It has a dimension of length−1. Mean curvature is closely related to the first variation of surface area. In particular, a minimal surface such as a soap film has mean curvature zero and a soap bubble has constant mean curvature. Unlike Gauss curvature, the mean curvature is extrinsic and depends on the embedding; for instance, a cylinder and a plane are locally isometric but the mean curvature of a plane is zero while that of a cylinder is nonzero. Second fundamental form The intrinsic and extrinsic curvature of a surface can be combined in the second fundamental form. This is a quadratic form in the tangent plane to the surface at a point whose value at a particular tangent vector X to the surface is the normal component of the acceleration of a curve along the surface tangent to X; that is, it is the normal curvature to a curve tangent to X (see above). Symbolically, II(X, X) = N · (∇_X X), where N is the unit normal to the surface. For unit tangent vectors X, the second fundamental form assumes the maximum value k₁ and minimum value k₂, which occur in the principal directions u₁ and u₂, respectively. Thus, by the principal axis theorem, the second fundamental form is II(X, X) = k₁(X · u₁)² + k₂(X · u₂)². Thus the second fundamental form encodes both the intrinsic and extrinsic curvatures.
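The relations between the fundamental forms and the curvatures can likewise be verified symbolically. The sketch below (assuming sympy; the parametrization and variable names are ours) computes the Gaussian curvature K = (eg − f²)/(EG − F²) and mean curvature H = (eG − 2fF + gE)/(2(EG − F²)) of a sphere of radius R from the coefficients E, F, G and e, f, g of its first and second fundamental forms:

```python
import sympy as sp

u, v, R = sp.symbols('u v R', positive=True)

# Sphere of radius R; for this surface the outward unit normal is sigma/R.
sigma = sp.Matrix([R*sp.sin(u)*sp.cos(v), R*sp.sin(u)*sp.sin(v), R*sp.cos(u)])
n = sigma / R

su, sv = sigma.diff(u), sigma.diff(v)
E, F, G = su.dot(su), su.dot(sv), sv.dot(sv)            # first fundamental form
e = sigma.diff(u, 2).dot(n)                             # second fundamental form
f = sigma.diff(u).diff(v).dot(n)
g = sigma.diff(v, 2).dot(n)

K = sp.simplify((e*g - f**2) / (E*G - F**2))            # Gaussian curvature
H = sp.simplify((e*G - 2*f*F + g*E) / (2*(E*G - F**2))) # mean curvature
print(K, H)   # -> 1/R**2 and -1/R (the sign of H depends on the normal's orientation)
```

Since K = k₁k₂ and H = (k₁ + k₂)/2, this is consistent with both principal curvatures of the sphere being equal to ±1/R.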
Shape operator An encapsulation of surface curvature can be found in the shape operator, S, which is a self-adjoint linear operator from the tangent plane to itself (specifically, the differential of the Gauss map). For a surface with tangent vector X and unit normal N, the shape operator can be expressed as S(X) = −∇_X N, the negative of the derivative of the unit normal along X. (Compare the alternative expression of curvature for a plane curve.) The Weingarten equations give the value of S in terms of the coefficients of the first and second fundamental forms; in matrix form, with respect to a common tangent basis, S = I⁻¹ II. The principal curvatures are the eigenvalues of the shape operator, the principal curvature directions are its eigenvectors, the Gauss curvature is its determinant, and the mean curvature is half its trace. Curvature of space By extension of the former argument, a space of three or more dimensions can be intrinsically curved. The curvature is intrinsic in the sense that it is a property defined at every point in the space, rather than a property defined with respect to a larger space that contains it. In general, a curved space may or may not be conceived as being embedded in a higher-dimensional ambient space; if not then its curvature can only be defined intrinsically. After the discovery of the intrinsic definition of curvature, which is closely connected with non-Euclidean geometry, many mathematicians and scientists questioned whether ordinary physical space might be curved, although the success of Euclidean geometry up to that time meant that the radius of curvature must be astronomically large. In the theory of general relativity, which describes gravity and cosmology, the idea is slightly generalised to the "curvature of spacetime"; in relativity theory spacetime is a pseudo-Riemannian manifold. Once a time coordinate is defined, the three-dimensional space corresponding to a particular time is generally a curved Riemannian manifold; but since the time coordinate choice is largely arbitrary, it is the underlying spacetime curvature that is physically significant. Although an arbitrarily curved space is very complex to describe, the curvature of a space which is locally isotropic and homogeneous is described by a single Gaussian curvature, as for a surface; mathematically these are strong conditions, but they correspond to reasonable physical assumptions (all points and all directions are indistinguishable). A positive curvature corresponds to the inverse square radius of curvature; an example is a sphere or hypersphere. An example of negatively curved space is hyperbolic geometry (see also: non-positive curvature). A space or space-time with zero curvature is called flat. For example, Euclidean space is an example of a flat space, and Minkowski space is an example of a flat spacetime. There are other examples of flat geometries in both settings, though. A torus or a cylinder can both be given flat metrics, but differ in their topology. Other topologies are also possible for curved space. Generalizations The mathematical notion of curvature is also defined in much more general contexts. Many of these generalizations emphasize different aspects of the curvature as it is understood in lower dimensions. One such generalization is kinematic. The curvature of a curve can naturally be considered as a kinematic quantity, representing the force felt by a certain observer moving along the curve; analogously, curvature in higher dimensions can be regarded as a kind of tidal force (this is one way of thinking of the sectional curvature).
This generalization of curvature depends on how nearby test particles diverge or converge when they are allowed to move freely in the space; see Jacobi field. Another broad generalization of curvature comes from the study of parallel transport on a surface. For instance, if a vector is moved around a loop on the surface of a sphere keeping parallel throughout the motion, then the final position of the vector may not be the same as the initial position of the vector. This phenomenon is known as holonomy. Various generalizations capture in an abstract form this idea of curvature as a measure of holonomy; see curvature form. A closely related notion of curvature comes from gauge theory in physics, where the curvature represents a field and a vector potential for the field is a quantity that is in general path-dependent: it may change if an observer moves around a loop. Two more generalizations of curvature are the scalar curvature and Ricci curvature. In a curved surface such as the sphere, the area of a disc on the surface differs from the area of a disc of the same radius in flat space. This difference (in a suitable limit) is measured by the scalar curvature. The difference in area of a sector of the disc is measured by the Ricci curvature. Each of the scalar curvature and Ricci curvature is defined in analogous ways in three and higher dimensions. They are particularly important in relativity theory, where they both appear on the side of Einstein's field equations that represents the geometry of spacetime (the other side of which represents the presence of matter and energy). These generalizations of curvature underlie, for instance, the notion that curvature can be a property of a measure; see curvature of a measure. Another generalization of curvature relies on the ability to compare a curved space with another space that has constant curvature. Often this is done with triangles in the spaces. The notion of a triangle makes sense in metric spaces, and this gives rise to CAT(k) spaces.
Mathematics
Functions: General
null
60777
https://en.wikipedia.org/wiki/Limonite
Limonite
Limonite () is an iron ore consisting of a mixture of hydrated iron(III) oxide-hydroxides in varying composition. The generic formula is frequently written as FeO(OH)·nH2O, although this is not entirely accurate as the ratio of oxide to hydroxide can vary quite widely. Limonite is one of the three principal iron ores, the others being hematite and magnetite, and has been mined for the production of iron since at least 400 BC. Names Limonite is named for the Ancient Greek word λειμών (leimṓn), meaning "wet meadow", or λίμνη (límnē), meaning "marshy lake", as an allusion to its occurrence as bog iron ore in meadows and marshes. In its brown form, it is sometimes called brown hematite or brown iron ore. Characteristics Limonite is relatively dense, with a specific gravity varying from 2.7 to 4.3. It is usually medium to dark yellowish brown in color. The streak of limonite on an unglazed porcelain plate is always yellowish brown, a character which distinguishes it from hematite, with a red streak, or from magnetite, with a black streak. The hardness is quite variable, ranging from 1 to 5. In thin section it appears as red, yellow, or brown and has a high index of refraction, 2.0–2.4. Limonite minerals are strongly birefringent, but grain sizes are usually too small for this to be detectable. Although originally defined as a single mineral, limonite is now recognized as a field term for a mixture of related hydrated iron oxide minerals, among them goethite, lepidocrocite, akaganeite, and jarosite. Determination of the precise mineral composition is practical only with X-ray diffraction techniques. Individual minerals in limonite may form crystals, but limonite does not, although specimens may show a fibrous or microcrystalline structure, and limonite often occurs in concretionary forms or in compact and earthy masses; sometimes mammillary, botryoidal, reniform or stalactitic. Because of its amorphous nature and occurrence in hydrated areas, limonite often presents as a clay or mudstone. However, there are limonite pseudomorphs after other minerals such as pyrite. This means that chemical weathering transforms the crystals of pyrite into limonite by hydrating the molecules, but the external shape of the pyrite crystal remains. Limonite pseudomorphs have also been formed from other iron oxides, hematite and magnetite; from the carbonate siderite; and from iron-rich silicates such as almandine garnets. Formation Limonite usually forms from the hydration of hematite and magnetite, from the oxidation and hydration of iron-rich sulfide minerals, and from chemical weathering of other iron-rich minerals such as olivine, pyroxene, amphibole, and biotite. It is often the major iron component in lateritic soils, and limonite laterite ores are a source of nickel and potentially cobalt and other valuable metals, present as trace elements. It is often deposited in run-off streams from mining operations. Uses Nickel-rich limonite ores represent the largest reserves of nickel. Such minerals are classified as lateritic nickel ore deposits. One of the first uses was as a pigment. The yellow form produced yellow ochre, for which Cyprus was famous, while the darker forms produced more earthy tones. Roasting the limonite changed it partially to hematite, producing red ochres, burnt umbers and siennas. Bog iron ore and limonite mudstones are mined as a source of iron. Iron caps or gossans of siliceous iron oxide typically form as the result of intensive oxidation of sulfide ore deposits. These gossans were used by prospectors as guides to buried ore.
Limonite was also mined for its ancillary gold content. The oxidation of sulfide deposits which contained gold often resulted in the concentration of gold in the iron oxide and quartz of the gossans. The gold of the primary veins was concentrated into the limonites of the deeply weathered rocks. In another example, the deeply weathered iron formations of Brazil served to concentrate gold with the limonite of the resulting soils. History Limonite was one of the earliest materials used as a pigment by humans, and can be seen in Neolithic cave paintings and pictographs. While the first iron ore was likely meteoric iron, and hematite was far easier to smelt, in Africa, where the first evidence of iron metallurgy occurs, limonite is the most prevalent iron ore. Before smelting, as the ore was heated and the water driven off, more and more of the limonite was converted to hematite. The ore was then pounded as it was heated above 1250 °C, at which temperature the metallic iron begins sticking together and non-metallic impurities are thrown off as sparks. Complex systems developed, notably in Tanzania, to process limonite. Nonetheless, hematite and magnetite remained the ores of choice when smelting was by bloomeries, and it was only with the development of blast furnaces, in the 1st century BCE in China and about 1150 CE in Europe, that the brown iron ore of limonite could be used to best advantage. Bog iron ore and limonite were mined in the US, but this ended with the development of advanced mining techniques. Gold-bearing limonite gossans were productively mined in the Shasta County, California mining district. Similar deposits were mined near Rio Tinto in Spain and Mount Morgan in Australia. In the Dahlonega gold belt in Lumpkin County, Georgia, gold was mined from limonite-rich lateritic or saprolite soil. As saprolite deposits have been exhausted in many mining sites, limonite has become the most prominent source of nickel for use in energy-dense batteries.
Physical sciences
Minerals
Earth science
60784
https://en.wikipedia.org/wiki/Boulder
Boulder
In geology, a boulder (or rarely bowlder) is a rock fragment with size greater than 256 millimetres (10.1 in) in diameter. Smaller pieces are called cobbles and pebbles. While a boulder may be small enough to move or roll manually, others are extremely massive. In common usage, a boulder is too large for a person to move. Smaller boulders are usually just called rocks or stones. Etymology The word boulder derives from boulder stone, from Middle English bulderston or Swedish bullersten. About In places covered by ice sheets during ice ages, such as Scandinavia, northern North America, and Siberia, glacial erratics are common. Erratics are boulders picked up by ice sheets during their advance and deposited when they melt. These boulders are called "erratic" because they typically are of a different rock type than the bedrock on which they are deposited. One such boulder is used as the pedestal of the Bronze Horseman in Saint Petersburg, Russia. Some noted rock formations involve giant boulders exposed by erosion, such as the Devil's Marbles in Australia's Northern Territory, the Horeke basalts in New Zealand, where an entire valley contains only boulders, and The Baths on the island of Virgin Gorda in the British Virgin Islands. Boulder-sized clasts are found in some sedimentary rocks, such as coarse conglomerate and boulder clay.
Physical sciences
Sedimentology
Earth science
60825
https://en.wikipedia.org/wiki/Endorphins
Endorphins
Endorphins (contracted from endogenous morphine) are peptides produced in the brain that block the perception of pain and increase feelings of wellbeing. They are produced and stored in the pituitary gland of the brain. Endorphins are endogenous painkillers often produced in the brain and adrenal medulla during physical exercise or orgasm; they inhibit pain and muscle cramps, and relieve stress. History Opioid peptides in the brain were first discovered in 1973 by investigators at the University of Aberdeen, John Hughes and Hans Kosterlitz. They isolated "enkephalins" (from the Greek ἐγκέφαλος, "brain") from pig brain, identified as Met-enkephalin and Leu-enkephalin. This came after the discovery of a receptor that was proposed to produce the pain-relieving analgesic effects of morphine and other opioids, which led Kosterlitz and Hughes to their discovery of the endogenous opioid ligands. Research during this time was focused on the search for a painkiller that did not have the addictive character or overdose risk of morphine. Rabi Simantov and Solomon H. Snyder isolated morphine-like peptides from calf brain. Eric J. Simon, who independently discovered opioid receptors, later termed these peptides endorphins. This term was essentially assigned to any peptide that demonstrated morphine-like activity. In 1976, Choh Hao Li and David Chung recorded the sequences of α-, β-, and γ-endorphin isolated from camel pituitary glands for their opioid activity. Li determined that β-endorphin produced strong analgesic effects. Wilhelm Feldberg and Derek George Smyth in 1977 confirmed this, finding β-endorphin to be more potent than morphine. They also confirmed that its effects were reversed by naloxone, an opioid antagonist. Studies have subsequently distinguished between enkephalins, endorphins, and endogenously produced morphine, which is not a peptide. Opioid peptides are classified based on their precursor propeptide: all endorphins are synthesized from the precursor proopiomelanocortin (POMC), enkephalins from proenkephalin A, and dynorphins from prodynorphin. Etymology The word endorphin is derived from the Greek ἔνδον (éndon), meaning "within" (as in endogenous, "proceeding from within"), and morphine, from Morpheus (Μορφεύς), the god of dreams in Greek mythology. Thus, endorphin is a contraction of 'endo(genous) (mo)rphin' (morphin being the old spelling of morphine). Types The class of endorphins consists of three endogenous opioid peptides: α-endorphin, β-endorphin, and γ-endorphin. The endorphins are all synthesized from the precursor protein, proopiomelanocortin, and all contain a Met-enkephalin motif at their N-terminus: Tyr-Gly-Gly-Phe-Met. α-endorphin and γ-endorphin result from proteolytic cleavage of β-endorphin between the Thr(16)-Leu(17) residues and Leu(17)-Phe(18) respectively. α-endorphin has the shortest sequence, and β-endorphin has the longest sequence. α-endorphin and γ-endorphin are primarily found in the anterior and intermediate pituitary. While β-endorphin is studied for its opioid activity, α-endorphin and γ-endorphin both lack affinity for opiate receptors and thus do not affect the body in the same way that β-endorphin does. Some studies have characterized α-endorphin activity as similar to that of psychostimulants, and γ-endorphin activity as similar to that of neuroleptics. Synthesis Endorphin precursors are primarily produced in the pituitary gland. All three types of endorphins are fragments of the precursor protein proopiomelanocortin (POMC).
At the trans-Golgi network, POMC binds to a membrane-bound protein, carboxypeptidase E (CPE). CPE facilitates POMC transport into immature budding vesicles. In mammals, pro-peptide convertase 1 (PC1) cleaves POMC into adrenocorticotropin (ACTH) and beta-lipotropin (β-LPH). β-LPH, a pituitary hormone with little opiate activity, is then fragmented into different peptides, including α-endorphin, β-endorphin, and γ-endorphin. Peptide convertase 2 (PC2) is responsible for cleaving β-LPH into β-endorphin and γ-lipotropin. Formation of α-endorphin and γ-endorphin results from proteolytic cleavage of β-endorphin. Regulation Noradrenaline has been shown to increase endorphin production within inflammatory tissues, resulting in an analgesic effect; the stimulation of sympathetic nerves by electro-acupuncture is believed to be the cause of its analgesic effects. Mechanism of action Endorphins are released from the pituitary gland, typically in response to pain, and can act in both the central nervous system (CNS) and the peripheral nervous system (PNS). In the PNS, β-endorphin is the primary endorphin released from the pituitary gland. Endorphins inhibit transmission of pain signals by binding μ-receptors of peripheral nerves, which blocks their release of the neurotransmitter substance P. The mechanism in the CNS is similar but works by blocking a different neurotransmitter: gamma-aminobutyric acid (GABA). In turn, inhibition of GABA increases the production and release of dopamine, a neurotransmitter associated with reward learning. Functions Endorphins play a major role in the body's inhibitory response to pain. Research has demonstrated that meditation by trained individuals can be used to trigger endorphin release. Laughter may also stimulate endorphin production and elevate one's pain threshold. Endorphin production can be triggered by vigorous aerobic exercise. The release of β-endorphin has been postulated to contribute to the phenomenon known as "runner's high". However, several studies have supported the hypothesis that the runner's high is due to the release of endocannabinoids rather than that of endorphins. Endorphins may contribute to the positive effect of exercise on anxiety and depression. The same phenomenon may also play a role in exercise addiction. Regular intense exercise may cause the brain to downregulate the production of endorphins in periods of rest to maintain homeostasis, causing a person to exercise more intensely in order to receive the same feeling.
Biology and health sciences
Animal hormones
Biology
60828
https://en.wikipedia.org/wiki/Lepton
Lepton
In particle physics, a lepton is an elementary particle of half-integer spin (spin 1/2) that does not undergo strong interactions. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), including the electron, muon, and tauon, and neutral leptons, better known as neutrinos. Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed. The best known of all leptons is the electron. There are six types of leptons, known as flavours, grouped in three generations. The first-generation leptons, also called electronic leptons, comprise the electron (e−) and the electron neutrino (νe); the second are the muonic leptons, comprising the muon (μ−) and the muon neutrino (νμ); and the third are the tauonic leptons, comprising the tau (τ−) and the tau neutrino (ντ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons and neutrinos through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high-energy collisions (such as those involving cosmic rays and those carried out in particle accelerators). Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks, however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, the weak interaction, and electromagnetism, of which the latter is proportional to charge, and is thus zero for the electrically neutral neutrinos. For every lepton flavor, there is a corresponding type of antiparticle, known as an antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. According to certain theories, neutrinos may be their own antiparticle. It is not currently known whether this is the case. The first charged lepton, the electron, was theorized in the mid-19th century by several scientists and was discovered in 1897 by J. J. Thomson. The next lepton to be observed was the muon, discovered by Carl D. Anderson in 1936, which was classified as a meson at the time. After investigation, it was realized that the muon did not have the expected properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particles to be proposed. The first neutrino, the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay. It was first observed in the Cowan–Reines neutrino experiment conducted by Clyde Cowan and Frederick Reines in 1956. The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz, and Jack Steinberger, and the tau discovered between 1974 and 1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory. The tau neutrino remained elusive until July 2000, when the DONUT collaboration from Fermilab announced its discovery. Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons.
Exotic atoms with muons and taus instead of electrons can also be synthesized, as well as lepton–antilepton particles such as positronium. Etymology The name lepton comes from the Greek leptós, "fine, small, thin" (neuter nominative/accusative singular form: λεπτόν leptón); the earliest attested form of the word is the Mycenaean Greek re-po-to, written in Linear B syllabic script. Lepton was first used by physicist Léon Rosenfeld in 1948: Following a suggestion of Prof. C. Møller, I adopt—as a pendant to "nucleon"—the denomination "lepton" (from λεπτός, small, thin, delicate) to denote a particle of small mass. Rosenfeld chose the name as the common name for electrons and (then hypothesized) neutrinos. Additionally, the muon, initially classified as a meson, was reclassified as a lepton in the 1950s. The masses of those particles are small compared to nucleons—the mass of an electron (about 0.511 MeV/c²) and the mass of a muon (about 105.7 MeV/c²) are fractions of the mass of the "heavy" proton (about 938.3 MeV/c²), and the mass of a neutrino is nearly zero. However, the mass of the tau (discovered in the mid-1970s), about 1776.9 MeV/c², is nearly twice that of the proton and about 3,477 times that of the electron. History The first lepton identified was the electron, discovered by J. J. Thomson and his team of British physicists in 1897. Then in 1930, Wolfgang Pauli postulated the electron neutrino to preserve conservation of energy, conservation of momentum, and conservation of angular momentum in beta decay. Pauli theorized that an undetected particle was carrying away the difference between the energy, momentum, and angular momentum of the initial and observed final particles. The electron neutrino was simply called the neutrino, as it was not yet known that neutrinos came in different flavours (or different "generations"). Nearly 40 years after the discovery of the electron, the muon was discovered by Carl D. Anderson in 1936. Due to its mass, it was initially categorized as a meson rather than a lepton. It later became clear that the muon was much more similar to the electron than to mesons, as muons do not undergo the strong interaction, and thus the muon was reclassified: electrons, muons, and the (electron) neutrino were grouped into a new group of particles—the leptons. In 1962, Leon M. Lederman, Melvin Schwartz, and Jack Steinberger showed that more than one type of neutrino exists by first detecting interactions of the muon neutrino, which earned them the 1988 Nobel Prize, although by then the different flavours of neutrino had already been theorized. The tau was first detected in a series of experiments between 1974 and 1977 by Martin Lewis Perl with his colleagues at the SLAC LBL group. Like the electron and the muon, it too was expected to have an associated neutrino. The first evidence for tau neutrinos came from the observation of "missing" energy and momentum in tau decay, analogous to the "missing" energy and momentum in beta decay leading to the discovery of the electron neutrino. The first detection of tau neutrino interactions was announced in 2000 by the DONUT collaboration at Fermilab, making it the second-to-latest particle of the Standard Model to have been directly observed, with the Higgs boson being discovered in 2012. Although all present data is consistent with three generations of leptons, some particle physicists are searching for a fourth generation. The current lower limit on the mass of such a fourth charged lepton is of order 100 GeV/c², while its associated neutrino would have a mass of at least about 45 GeV/c².
Properties Spin and chirality Leptons are spin-1/2 particles. The spin-statistics theorem thus implies that they are fermions and thus that they are subject to the Pauli exclusion principle: no two leptons of the same species can be in the same state at the same time. Furthermore, it means that a lepton can have only two possible spin states, namely up or down. A closely related property is chirality, which in turn is closely related to a more easily visualized property called helicity. The helicity of a particle is the direction of its spin relative to its momentum; particles with spin in the same direction as their momentum are called right-handed and they are otherwise called left-handed. When a particle is massless, the direction of its momentum relative to its spin is the same in every reference frame, whereas for massive particles it is possible to 'overtake' the particle by choosing a faster-moving reference frame; in the faster frame, the helicity is reversed. Chirality is a technical property, defined through transformation behaviour under the Poincaré group, that does not change with reference frame. It is contrived to agree with helicity for massless particles, and is still well defined for particles with mass. In many quantum field theories, such as quantum electrodynamics and quantum chromodynamics, left- and right-handed fermions are identical. However, the Standard Model's weak interaction treats left-handed and right-handed fermions differently: only left-handed fermions (and right-handed anti-fermions) participate in the weak interaction. This is an example of parity violation explicitly written into the model. In the literature, left-handed fields are often denoted by a capital L subscript (e.g. the normal electron eL) and right-handed fields are denoted by a capital R subscript (e.g. a positron eR). Right-handed neutrinos and left-handed anti-neutrinos have no possible interaction with other particles (see sterile neutrino) and so are not a functional part of the Standard Model, although their exclusion is not a strict requirement; they are sometimes listed in particle tables to emphasize that they would have no active role if included in the model. Even though electrically charged right-handed particles (electron, muon, or tau) do not engage in the weak interaction specifically, they can still interact electrically, and hence still participate in the combined electroweak force, although with different strengths, set by their weak hypercharge YW. Electromagnetic interaction One of the most prominent properties of leptons is their electric charge, Q. The electric charge determines the strength of their electromagnetic interactions. It determines the strength of the electric field generated by the particle (see Coulomb's law) and how strongly the particle reacts to an external electric or magnetic field (see Lorentz force). Each generation contains one lepton with Q = −e and one lepton with zero electric charge. The lepton with electric charge is commonly simply referred to as a charged lepton, while a neutral lepton is called a neutrino. For example, the first generation consists of the electron e−, with a negative electric charge, and the electrically neutral electron neutrino νe. In the language of quantum field theory, the electromagnetic interaction of the charged leptons is expressed by the fact that the particles interact with the quantum of the electromagnetic field, the photon. The Feynman diagram of the electron–photon interaction is shown on the right.
Because leptons possess an intrinsic rotation in the form of their spin, charged leptons generate a magnetic field. The size of their magnetic dipole moment μ is given by μ = gQħ/(4m), where m is the mass of the lepton and g is the so-called "g factor" for the lepton. First-order quantum mechanical approximation predicts that the g factor is 2 for all leptons. However, higher-order quantum effects caused by loops in Feynman diagrams introduce corrections to this value. These corrections, referred to as the anomalous magnetic dipole moment, are very sensitive to the details of a quantum field theory model, and thus provide the opportunity for precision tests of the Standard Model. The theoretical and measured values for the electron anomalous magnetic dipole moment are in agreement to within eight significant figures. The results for the muon, however, are problematic, hinting at a small, persistent discrepancy between the Standard Model and experiment. Weak interaction In the Standard Model, the left-handed charged lepton and the left-handed neutrino are arranged in a doublet that transforms in the spinor (doublet) representation of the weak isospin SU(2) gauge symmetry. This means that these particles are eigenstates of the isospin projection T3 with eigenvalues +1/2 and −1/2 respectively. In the meantime, the right-handed charged lepton transforms as a weak isospin scalar (singlet) and thus does not participate in the weak interaction, while there is no evidence that a right-handed neutrino exists at all. The Higgs mechanism recombines the gauge fields of the weak isospin SU(2) and the weak hypercharge U(1) symmetries to three massive vector bosons (W+, W−, Z0) mediating the weak interaction, and one massless vector boson, the photon (γ), responsible for the electromagnetic interaction. The electric charge Q can be calculated from the isospin projection T3 and weak hypercharge YW through the Gell-Mann–Nishijima formula, Q = T3 + YW/2. To recover the observed electric charges for all particles, the left-handed weak isospin doublet must thus have YW = −1, while the right-handed isospin scalar must have YW = −2. The interaction of the leptons with the massive weak interaction vector bosons is shown in the figure on the right. Mass In the Standard Model, each lepton starts out with no intrinsic mass. The charged leptons (i.e. the electron, muon, and tau) obtain an effective mass through interaction with the Higgs field, but the neutrinos remain massless. For technical reasons, the masslessness of the neutrinos implies that there is no mixing of the different generations of charged leptons as there is for quarks. The zero mass of the neutrino is in close agreement with current direct experimental observations of the mass. However, it is known from indirect experiments—most prominently from observed neutrino oscillations—that neutrinos have to have a nonzero mass, probably less than 2 eV. This implies the existence of physics beyond the Standard Model. The currently most favoured extension is the so-called seesaw mechanism, which would explain both why the left-handed neutrinos are so light compared to the corresponding charged leptons, and why we have not yet seen any right-handed neutrinos. Lepton flavor quantum numbers The members of each generation's weak isospin doublet are assigned leptonic numbers that are conserved under the Standard Model. Electrons and electron neutrinos have an electronic number of Le = +1, muons and muon neutrinos have a muonic number of Lμ = +1, and tau particles and tau neutrinos have a tauonic number of Lτ = +1.
The antileptons have their respective generation's leptonic numbers of −1. Conservation of the leptonic numbers means that the number of leptons of the same type remains the same when particles interact. This implies that leptons and antileptons must be created in pairs of a single generation. For example, the following processes are allowed under conservation of leptonic numbers: μ− → e− + ν̄e + νμ and τ− → μ− + ν̄μ + ντ, but none of these: μ− → e− + γ, μ− → e− + e+ + e−, τ− → μ− + γ. However, neutrino oscillations are known to violate the conservation of the individual leptonic numbers. Such a violation is considered to be smoking gun evidence for physics beyond the Standard Model. A much stronger conservation law is the conservation of the total number of leptons (L), conserved even in the case of neutrino oscillations, but even it is still violated by a tiny amount by the chiral anomaly. Universality The coupling of leptons to all types of gauge boson is flavour-independent: the interaction between leptons and a gauge boson measures the same for each lepton. This property is called lepton universality and has been tested in measurements of the muon and tau lifetimes and of Z boson partial decay widths, particularly at the Stanford Linear Collider (SLC) and Large Electron–Positron Collider (LEP) experiments. The decay rate Γ of muons through the process μ− → e− + ν̄e + νμ is approximately given by an expression of the form (see muon decay for more details) Γ(μ− → e− + ν̄e + νμ) = K1·GF²·mμ⁵, where K1 is some constant, and GF is the Fermi coupling constant. The decay rate of tau particles through the process τ− → e− + ν̄e + ντ is given by an expression of the same form, Γ(τ− → e− + ν̄e + ντ) = K2·GF²·mτ⁵, where K2 is some other constant. Muon–tauon universality implies that K1 = K2. On the other hand, electron–muon universality implies Γ(τ− → e− + ν̄e + ντ) ≈ Γ(τ− → μ− + ν̄μ + ντ). The branching ratios for the electronic mode (17.82%) and muonic mode (17.39%) of tau decay are not exactly equal due to the mass difference of the final state leptons. Universality also accounts for the ratio of muon and tau lifetimes. The lifetime τℓ of a lepton ℓ (with ℓ = μ or τ) is related to the decay rate by τℓ = B(ℓ → e + ν̄e + νℓ) / Γ(ℓ → e + ν̄e + νℓ), where B denotes the branching ratio and Γ denotes the resonance width of the process. The ratio of tau and muon lifetimes is thus given by ττ/τμ = (B(τ → e + ν̄e + ντ)/B(μ → e + ν̄e + νμ)) · (mμ/mτ)⁵. Using values from the 2008 Review of Particle Physics for the branching ratios of the muon and tau yields a lifetime ratio of ~1.3×10−7, comparable to the measured lifetime ratio of ~1.3×10−7 (a numerical sketch is given after the table below). The small remaining difference is due to K1 and K2 not actually being constants: they depend slightly on the mass of the leptons involved. Recent tests of lepton universality in B meson decays, performed by the LHCb, BaBar, and Belle experiments, have shown consistent deviations from the Standard Model predictions. However, the combined statistical and systematic significance is not yet high enough to claim an observation of new physics. In July 2021, results on lepton flavour universality were published testing W decays; previous measurements by LEP had given a slight imbalance, but the new measurement by the ATLAS collaboration has twice the precision and gives a ratio consistent with the standard-model prediction of unity. In 2024, a preprint by the ATLAS collaboration reported the most precise test of this ratio so far. Table of leptons {| class="wikitable" style="text-align:center" |+Properties of leptons |- !rowspan=2| Spin !rowspan=2| Particle or antiparticle name !rowspan=2| Symbol !rowspan=2| Charge !colspan=3| Lepton flavor number !rowspan=2| Mass (MeV/c²) !rowspan=2| Lifetime |- ! Le ! Lμ ! Lτ
|-
|rowspan=13| 1/2
|style="text-align:left"| electron
| e−
| −1
| +1
|rowspan=2| 0
|rowspan=2| 0
|rowspan=2 style="text-align:right"| 0.511 MeV/c2
|rowspan=2| stable
|-
|style="text-align:left"| positron
| e+
| +1
| −1
|-
|style="text-align:left"| muon
| μ−
| −1
|rowspan=2| 0
| +1
|rowspan=2| 0
|rowspan=2 style="text-align:right"| 105.66 MeV/c2
|rowspan=2 style="text-align:right"| 2.197×10−6 s
|-
|style="text-align:left"| antimuon
| μ+
| +1
| −1
|-
|style="text-align:left"| tau
| τ−
| −1
|rowspan=2| 0
|rowspan=2| 0
| +1
|rowspan=2 style="text-align:right"| 1776.86 MeV/c2
|rowspan=2 style="text-align:right"| 2.903×10−13 s
|-
|style="text-align:left"| antitau
| τ+
| +1
| −1
|-
!colspan=8|
|-
|style="text-align:left"| electron neutrino
| νe
|rowspan="6"| 0
| +1
|rowspan=2| 0
|rowspan=2| 0
|rowspan=2 style="text-align:left"| < 2.2 eV/c2
|rowspan=2| unknown
|-
|style="text-align:left"| electron antineutrino
| ν̄e
| −1
|-
|style="text-align:left"| muon neutrino
| νμ
|rowspan=2| 0
| +1
|rowspan=2| 0
|rowspan=2 style="text-align:left"| < 0.17 MeV/c2
|rowspan=2| unknown
|-
|style="text-align:left"| muon antineutrino
| ν̄μ
| −1
|-
|style="text-align:left"| tau neutrino
| ντ
|rowspan=2| 0
|rowspan=2| 0
| +1
|rowspan=2 style="text-align:left"| < 15.5 MeV/c2
|rowspan=2| unknown
|-
|style="text-align:left"| tau antineutrino
| ν̄τ
| −1
|-
|}
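The universality relation for the lifetimes can be checked with a short calculation. The sketch below (Python; the masses, branching ratio, and lifetimes are the familiar Particle Data Group values, quoted here purely for illustration) compares the predicted ratio with the one obtained directly from the measured lifetimes:

```python
# Check of the lepton-universality relation
#   tau_tau / tau_mu ~ B(tau -> e nu nu) * (m_mu / m_tau)^5

m_mu = 105.658        # muon mass, MeV/c^2
m_tau = 1776.86       # tau mass, MeV/c^2
B_tau_e = 0.1782      # branching ratio of tau -> e + antineutrino + neutrino

predicted = B_tau_e * (m_mu / m_tau) ** 5
print(f"predicted lifetime ratio: {predicted:.3e}")          # ~1.3e-07

tau_mu = 2.197e-6     # measured muon lifetime, s
tau_tau = 2.903e-13   # measured tau lifetime, s
print(f"measured lifetime ratio:  {tau_tau / tau_mu:.3e}")   # ~1.3e-07
```

Both numbers come out near 1.3 × 10⁻⁷, which is the level of agreement described above.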
Physical sciences
Fermions
https://en.wikipedia.org/wiki/Luminescence
Luminescence
Luminescence is a spontaneous emission of radiation from an electronically or vibrationally excited species not in thermal equilibrium with its environment. A luminescent object emits cold light, in contrast to incandescence, where an object emits light only after being heated. Generally, the emission of light is due to the movement of electrons between different energy levels within an atom after excitation by external factors. However, the exact mechanism of light emission in vibrationally excited species is unknown. The dials, hands, scales, and signs of aviation and navigational instruments and markings are often coated with luminescent materials in a process known as luminising.

Types
Ionoluminescence, a result of bombardment by fast ions
Radioluminescence, a result of bombardment by ionizing radiation
Electroluminescence, a result of an electric current passed through a substance
Cathodoluminescence, a result of a luminescent material being struck by electrons
Chemiluminescence, the emission of light as a result of a chemical reaction
Bioluminescence, a result of biochemical reactions in a living organism
Electrochemiluminescence, a result of an electrochemical reaction
Lyoluminescence, a result of dissolving a solid (usually heavily irradiated) in a liquid solvent
Candoluminescence, light emitted by certain materials at elevated temperatures that differs from the blackbody emission expected at the temperature in question
Mechanoluminescence, a result of a mechanical action on a solid
Triboluminescence, generated when bonds in a material are broken when that material is scratched, crushed, or rubbed
Fractoluminescence, generated when bonds in certain crystals are broken by fractures
Piezoluminescence, produced by the action of pressure on certain solids
Sonoluminescence, a result of imploding bubbles in a liquid when excited by sound
Crystalloluminescence, produced during crystallization
Thermoluminescence, the re-emission of absorbed energy when a substance is heated
Cryoluminescence, the emission of light when an object is cooled (an example of this is wulfenite)
Photoluminescence, a result of the absorption of photons
Fluorescence, traditionally defined as the emission of light that ends immediately after the source of excitation is removed; since this definition does not fully describe the phenomenon, it is defined quantum mechanically as emission involving no change in spin multiplicity between the excited state and the emitting transition
Phosphorescence, traditionally defined as emission of light that persists after the end of excitation; since this definition does not fully describe the phenomenon, it is defined quantum mechanically as emission involving a change in spin multiplicity between the excited state and the emitting transition

Applications
Light-emitting diodes (LEDs), which emit light via electroluminescence
Phosphors, materials that emit light when irradiated by higher-energy electromagnetic radiation or particle radiation
The laser and lamp industries
Phosphor thermometry, measuring temperature using phosphorescence
Thermoluminescence dating
Thermoluminescent dosimeters
Non-disruptive observation of processes within a cell

Luminescence occurs in some minerals when they are exposed to low-powered sources of ultraviolet or infrared electromagnetic radiation (for example, portable UV lamps) at atmospheric pressure and atmospheric temperatures.
This property of these minerals can be used during the process of mineral identification at rock outcrops in the field or in the laboratory. History The term luminescence was first introduced in 1888.
Physical sciences
Electromagnetic radiation
Physics
https://en.wikipedia.org/wiki/Markov%20chain
Markov chain
In probability theory and statistics, a Markov chain or Markov process is a stochastic process describing a sequence of possible events in which the probability of each event depends only on the state attained in the previous event. Informally, this may be thought of as, "What happens next depends only on the state of affairs now." A countably infinite sequence, in which the chain moves state at discrete time steps, gives a discrete-time Markov chain (DTMC). A continuous-time process is called a continuous-time Markov chain (CTMC). Markov processes are named in honor of the Russian mathematician Andrey Markov. Markov chains have many applications as statistical models of real-world processes. They provide the basis for general stochastic simulation methods known as Markov chain Monte Carlo, which are used for simulating sampling from complex probability distributions, and have found application in areas including Bayesian statistics, biology, chemistry, economics, finance, information theory, physics, signal processing, and speech processing. The adjectives Markovian and Markov are used to describe something that is related to a Markov process. Principles Definition A Markov process is a stochastic process that satisfies the Markov property (sometimes characterized as "memorylessness"). In simpler terms, it is a process for which predictions can be made regarding future outcomes based solely on its present state and—most importantly—such predictions are just as good as the ones that could be made knowing the process's full history. In other words, conditional on the present state of the system, its future and past states are independent. A Markov chain is a type of Markov process that has either a discrete state space or a discrete index set (often representing time), but the precise definition of a Markov chain varies. For example, it is common to define a Markov chain as a Markov process in either discrete or continuous time with a countable state space (thus regardless of the nature of time), but it is also common to define a Markov chain as having discrete time in either countable or continuous state space (thus regardless of the state space). Types of Markov chains The system's state space and time parameter index need to be specified. The following table gives an overview of the different instances of Markov processes for different levels of state space generality and for discrete time v. continuous time: Note that there is no definitive agreement in the literature on the use of some of the terms that signify special cases of Markov processes. Usually the term "Markov chain" is reserved for a process with a discrete set of times, that is, a discrete-time Markov chain (DTMC), but a few authors use the term "Markov process" to refer to a continuous-time Markov chain (CTMC) without explicit mention. In addition, there are other extensions of Markov processes that are referred to as such but do not necessarily fall within any of these four categories (see Markov model). Moreover, the time index need not necessarily be real-valued; like with the state space, there are conceivable processes that move through index sets with other mathematical constructs. Notice that the general state space continuous-time Markov chain is general to such a degree that it has no designated term. While the time parameter is usually discrete, the state space of a Markov chain does not have any generally agreed-on restrictions: the term may refer to a process on an arbitrary state space. 
However, many applications of Markov chains employ finite or countably infinite state spaces, which have a more straightforward statistical analysis. Besides time-index and state-space parameters, there are many other variations, extensions and generalizations (see Variations). For simplicity, most of this article concentrates on the discrete-time, discrete state-space case, unless mentioned otherwise.

Transitions The changes of state of the system are called transitions. The probabilities associated with various state changes are called transition probabilities. The process is characterized by a state space, a transition matrix describing the probabilities of particular transitions, and an initial state (or initial distribution) across the state space. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state, and the process does not terminate.

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement. Formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.

History Andrey Markov studied Markov processes in the early 20th century, publishing his first paper on the topic in 1906. Markov processes in continuous time had already been discovered long before this work, in the form of the Poisson process. Markov was interested in studying an extension of independent random sequences, motivated by a disagreement with Pavel Nekrasov, who claimed independence was necessary for the weak law of large numbers to hold. In his first paper on Markov chains, published in 1906, Markov showed that under certain conditions the average outcomes of the Markov chain would converge to a fixed vector of values, thus proving a weak law of large numbers without the independence assumption, which had been commonly regarded as a requirement for such mathematical laws to hold. Markov later used Markov chains to study the distribution of vowels in Eugene Onegin, written by Alexander Pushkin, and proved a central limit theorem for such chains.

In 1912 Henri Poincaré studied Markov chains on finite groups with an aim to study card shuffling. Other early uses of Markov chains include a diffusion model, introduced by Paul and Tatyana Ehrenfest in 1907, and a branching process, introduced by Francis Galton and Henry William Watson in 1873, preceding the work of Markov. After the work of Galton and Watson, it was later revealed that their branching process had been independently discovered and studied around three decades earlier by Irénée-Jules Bienaymé. Starting in 1928, Maurice Fréchet became interested in Markov chains, eventually resulting in him publishing in 1938 a detailed study on Markov chains.
Andrey Kolmogorov developed in a 1931 paper a large part of the early theory of continuous-time Markov processes. Kolmogorov was partly inspired by Louis Bachelier's 1900 work on fluctuations in the stock market as well as Norbert Wiener's work on Einstein's model of Brownian movement. He introduced and studied a particular set of Markov processes known as diffusion processes, where he derived a set of differential equations describing the processes. Independent of Kolmogorov's work, Sydney Chapman derived in a 1928 paper an equation, now called the Chapman–Kolmogorov equation, in a less mathematically rigorous way than Kolmogorov, while studying Brownian movement. The differential equations are now called the Kolmogorov equations or the Kolmogorov–Chapman equations. Other mathematicians who contributed significantly to the foundations of Markov processes include William Feller, starting in 1930s, and then later Eugene Dynkin, starting in the 1950s. Examples Mark V. Shaney is a third-order Markov chain program, and a Markov text generator. It ingests the sample text (the Tao Te Ching, or the posts of a Usenet group) and creates a massive list of every sequence of three successive words (triplet) which occurs in the text. It then chooses two words at random, and looks for a word which follows those two in one of the triplets in its massive list. If there is more than one, it picks at random (identical triplets count separately, so a sequence which occurs twice is twice as likely to be picked as one which only occurs once). It then adds that word to the generated text. Then, in the same way, it picks a triplet that starts with the second and third words in the generated text, and that gives a fourth word. It adds the fourth word, then repeats with the third and fourth words, and so on. Random walks based on integers and the gambler's ruin problem are examples of Markov processes. Some variations of these processes were studied hundreds of years earlier in the context of independent variables. Two important examples of Markov processes are the Wiener process, also known as the Brownian motion process, and the Poisson process, which are considered the most important and central stochastic processes in the theory of stochastic processes. These two processes are Markov processes in continuous time, while random walks on the integers and the gambler's ruin problem are examples of Markov processes in discrete time. A famous Markov chain is the so-called "drunkard's walk", a random walk on the number line where, at each step, the position may change by +1 or −1 with equal probability. From any position there are two possible transitions, to the next or previous integer. The transition probabilities depend only on the current position, not on the manner in which the position was reached. For example, the transition probabilities from 5 to 4 and 5 to 6 are both 0.5, and all other transition probabilities from 5 are 0. These probabilities are independent of whether the system was previously in 4 or 6. A series of independent states (for example, a series of coin flips) satisfies the formal definition of a Markov chain. However, the theory is usually applied only when the probability distribution of the next state depends on the current one. A non-Markov example Suppose that there is a coin purse containing five quarters (each worth 25¢), five dimes (each worth 10¢), and five nickels (each worth 5¢), and one by one, coins are randomly drawn from the purse and are set on a table. 
If Xn represents the total value of the coins set on the table after n draws, with X0 = 0, then the sequence (Xn) is not a Markov process. To see why this is the case, suppose that in the first six draws, all five nickels and a quarter are drawn. Thus X6 = $0.50. If we know not just X6, but the earlier values as well, then we can determine which coins have been drawn, and we know that the next coin will not be a nickel; so we can determine that X7 ≥ $0.60 with probability 1. But if we do not know the earlier values, then based only on the value X6 = $0.50 we might guess that we had drawn four dimes and two nickels, in which case it would certainly be possible to draw another nickel next. Thus, our guesses about X7 are impacted by our knowledge of values prior to X6.

However, it is possible to model this scenario as a Markov process. Instead of defining Xn to represent the total value of the coins on the table, we could define Xn to represent the count of the various coin types on the table. For instance, X6 = (1, 0, 5) could be defined to represent the state where there is one quarter, zero dimes, and five nickels on the table after 6 one-by-one draws. This new model could be represented by 6 × 6 × 6 = 216 possible states, where each state represents the number of coins of each type (from 0 to 5) that are on the table. (Not all of these states are reachable within 6 draws.) Suppose that the first draw results in state X1 = (0, 1, 0). The probability of achieving X2 now depends on X1; for example, the state X2 = (1, 0, 1) is not possible. After the second draw, the third draw depends on which coins have so far been drawn, but no longer only on the coins that were drawn for the first state (since probabilistically important information has since been added to the scenario). In this way, the likelihood of the state Xn depends exclusively on the outcome of the state Xn−1.

Formal definition Discrete-time Markov chain A discrete-time Markov chain is a sequence of random variables X1, X2, X3, ... with the Markov property, namely that the probability of moving to the next state depends only on the present state and not on the previous states: Pr(Xn+1 = x | X1 = x1, X2 = x2, ..., Xn = xn) = Pr(Xn+1 = x | Xn = xn), if both conditional probabilities are well defined, that is, if Pr(X1 = x1, ..., Xn = xn) > 0. The possible values of Xi form a countable set S called the state space of the chain.

Variations Time-homogeneous Markov chains are processes where Pr(Xn+1 = x | Xn = y) = Pr(Xn = x | Xn−1 = y) for all n. The probability of the transition is independent of n. Stationary Markov chains are processes where Pr(X0 = x0, X1 = x1, ..., Xk = xk) = Pr(Xn = x0, Xn+1 = x1, ..., Xn+k = xk) for all n and k. Every stationary chain can be proved to be time-homogeneous by Bayes' rule. A necessary and sufficient condition for a time-homogeneous Markov chain to be stationary is that the distribution of X0 is a stationary distribution of the Markov chain. A Markov chain with memory (or a Markov chain of order m), where m is finite, is a process satisfying Pr(Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2, ..., X1 = x1) = Pr(Xn = xn | Xn−1 = xn−1, Xn−2 = xn−2, ..., Xn−m = xn−m) for n > m. In other words, the future state depends on the past m states. It is possible to construct a chain (Yn) from (Xn) which has the 'classical' Markov property by taking as state space the ordered m-tuples of X values, i.e., Yn = (Xn, Xn−1, ..., Xn−m+1).

Continuous-time Markov chain A continuous-time Markov chain (Xt)t ≥ 0 is defined by a finite or countable state space S, a transition rate matrix Q with dimensions equal to that of the state space, and an initial probability distribution defined on the state space. For i ≠ j, the elements qij are non-negative and describe the rate of the process transitions from state i to state j. The elements qii are chosen such that each row of the transition rate matrix sums to zero, while the row-sums of a probability transition matrix in a (discrete) Markov chain are all equal to one.
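As an illustration of these conventions, here is a minimal sketch of a rate matrix for an invented two-state chain (numpy and scipy assumed available). It checks that the rows of Q sum to zero, and that the interval-transition probabilities P(t) = exp(Qt), the solution of the forward equation discussed below, have rows summing to one:

```python
import numpy as np
from scipy.linalg import expm

# Transition rate matrix Q of a hypothetical two-state CTMC.
# Off-diagonal entries are the non-negative transition rates;
# diagonal entries are chosen so that every row sums to zero.
Q = np.array([[-3.0,  3.0],
              [ 4.0, -4.0]])
assert np.allclose(Q.sum(axis=1), 0.0)

# Transition probabilities over an interval t are P(t) = exp(Qt);
# their rows sum to one, like any discrete transition matrix.
t = 0.5
P = expm(Q * t)
print(P)
assert np.allclose(P.sum(axis=1), 1.0)
```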
There are three equivalent definitions of the process.

Infinitesimal definition Let Xt be the random variable describing the state of the process at time t, and assume the process is in a state i at time t. Then, knowing Xt = i, the future state Xt+h is independent of previous values (Xs : s < t), and as h → 0, for all j and for all t, Pr(X(t + h) = j | X(t) = i) = δij + qij·h + o(h), where δij is the Kronecker delta, using the little-o notation. The qij can be seen as measuring how quickly the transition from i to j happens.

Jump chain/holding time definition Define a discrete-time Markov chain Yn to describe the nth jump of the process and variables S1, S2, S3, ... to describe holding times in each of the states, where Si follows the exponential distribution with rate parameter −qYiYi.

Transition probability definition For any value n = 0, 1, 2, 3, ... and times indexed up to this value of n: t0, t1, t2, ... and all states recorded at these times i0, i1, i2, i3, ... it holds that Pr(X(tn+1) = in+1 | X(t0) = i0, X(t1) = i1, ..., X(tn) = in) = pij(tn+1 − tn) with i = in and j = in+1, where pij is the solution of the forward equation (a first-order differential equation) P′(t) = P(t)Q, with initial condition P(0) = I, the identity matrix.

Finite state space If the state space is finite, the transition probability distribution can be represented by a matrix, called the transition matrix, with the (i, j)th element of P equal to pij = Pr(Xn+1 = j | Xn = i). Since each row of P sums to one and all elements are non-negative, P is a right stochastic matrix.

Stationary distribution relation to eigenvectors and simplices A stationary distribution π is a (row) vector, whose entries are non-negative and sum to 1, that is unchanged by the operation of transition matrix P on it and so is defined by πP = π. By comparing this definition with that of an eigenvector we see that the two concepts are related and that π is a normalized (Σi πi = 1) multiple of a left eigenvector e of the transition matrix P with an eigenvalue of 1. If there is more than one unit eigenvector then a weighted sum of the corresponding stationary states is also a stationary state. But for a Markov chain one is usually more interested in a stationary state that is the limit of the sequence of distributions for some initial distribution.

The values of a stationary distribution are associated with the state space of P and its eigenvectors have their relative proportions preserved. Since the components of π are positive and the constraint that their sum is unity can be rewritten as Σi 1·πi = 1, we see that the dot product of π with a vector whose components are all 1 is unity and that π lies on a simplex.

Time-homogeneous Markov chain with a finite state space If the Markov chain is time-homogeneous, then the transition matrix P is the same after each step, so the k-step transition probability can be computed as the k-th power of the transition matrix, Pk. If the Markov chain is irreducible and aperiodic, then there is a unique stationary distribution π. Additionally, in this case Pk converges to a rank-one matrix in which each row is the stationary distribution π: limk→∞ Pk = 1π, where 1 is the column vector with all entries equal to 1. This is stated by the Perron–Frobenius theorem. If, by whatever means, limk→∞ Pk is found, then the stationary distribution of the Markov chain in question can be easily determined for any starting distribution, as will be explained below. For some stochastic matrices P, the limit does not exist while the stationary distribution does, as shown by this example: the two-state matrix P with rows (0, 1) and (1, 0), whose powers alternate between P itself and the identity matrix, while the stationary distribution is π = (1/2, 1/2). (This example illustrates a periodic Markov chain.) Because there are a number of different special cases to consider, the process of finding this limit if it exists can be a lengthy task.
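Before turning to analytic techniques, note that the limit can be approximated by brute force, simply raising P to a high power. A minimal sketch, with an invented three-state chain and numpy assumed:

```python
import numpy as np

# Hypothetical irreducible, aperiodic three-state chain.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Approximate Q = lim P^k by taking a high matrix power.
Q = np.linalg.matrix_power(P, 50)
print(Q)          # every row is (approximately) the stationary distribution

pi = Q[0]
print(pi @ P)     # pi P = pi: a further step leaves pi unchanged
```

For large or slowly mixing chains this brute-force approach converges slowly.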
However, there are many techniques that can assist in finding this limit. Let P be an n×n matrix, and define Q = limk→∞ Pk. It is always true that QP = Q. Subtracting Q from both sides and factoring then yields Q(P − In) = 0n,n, where In is the identity matrix of size n, and 0n,n is the zero matrix of size n×n. Multiplying together stochastic matrices always yields another stochastic matrix, so Q must be a stochastic matrix (see the definition above). It is sometimes sufficient to use the matrix equation above and the fact that Q is a stochastic matrix to solve for Q. Including the fact that the sum of each of the rows in P is 1, there are n+1 equations for determining n unknowns, so it is computationally easier if on the one hand one selects one row in Q and substitutes each of its elements by one, and on the other one substitutes the corresponding element (the one in the same column) in the vector 0, and next left-multiplies this latter vector by the inverse of the transformed former matrix to find Q.

Here is one method for doing so: first, define the function f(A) to return the matrix A with its right-most column replaced with all 1's. If [f(P − In)]−1 exists then Q = f(0n,n)[f(P − In)]−1. To explain: the original matrix equation is equivalent to a system of n×n linear equations in n×n variables. And there are n more linear equations from the fact that Q is a right stochastic matrix whose rows each sum to 1. So any n×n independent linear equations of the (n×n+n) equations suffice to solve for the n×n variables. In this example, the n equations from "Q multiplied by the right-most column of (P − In)" have been replaced by the n stochastic ones.

One thing to notice is that if P has an element Pi,i on its main diagonal that is equal to 1 and the ith row or column is otherwise filled with 0's, then that row or column will remain unchanged in all of the subsequent powers Pk. Hence, the ith row or column of Q will have the 1 and the 0's in the same positions as in P.

Convergence speed to the stationary distribution As stated earlier, from the equation π = πP (if it exists), the stationary (or steady state) distribution π is a left eigenvector of the row stochastic matrix P. Then assuming that P is diagonalizable or equivalently that P has n linearly independent eigenvectors, the speed of convergence is elaborated as follows. (For non-diagonalizable, that is, defective matrices, one may start with the Jordan normal form of P and proceed with a bit more involved set of arguments in a similar way.) Let U be the matrix of eigenvectors (each normalized to having an L2 norm equal to 1) where each column is a left eigenvector of P and let Σ be the diagonal matrix of left eigenvalues of P, that is, Σ = diag(λ1, λ2, λ3, ..., λn). Then by eigendecomposition P = UΣU−1. Let the eigenvalues be enumerated such that 1 = |λ1| > |λ2| ≥ |λ3| ≥ ... ≥ |λn|. Since P is a row stochastic matrix, its largest left eigenvalue is 1. If there is a unique stationary distribution, then the largest eigenvalue and the corresponding eigenvector is unique too (because there is no other π which solves the stationary distribution equation above). Let ui be the i-th column of the U matrix, that is, ui is the left eigenvector of P corresponding to λi. Also let x be a length n row vector that represents a valid probability distribution; since the eigenvectors ui span the space, we can write x = a1u1 + a2u2 + ... + anun for some coefficients ai. If we multiply x with P from the right and continue this operation with the results, in the end we get the stationary distribution π. In other words, π = a1u1 ← xPP...P = xPk as k → ∞.
That means xP^k = a1λ1^k u1 + a2λ2^k u2 + ... + anλn^k un. Since π is parallel to u1 (normalized by the L2 norm) and π(k) = xPk is a probability vector, π(k) approaches a1u1 = π as k → ∞ with a speed on the order of (λ2/λ1)^k, exponentially. This follows because |λ2| ≥ ... ≥ |λn|, hence λ2/λ1 is the dominant term. The smaller the ratio is, the faster the convergence is. Random noise in the state distribution can also speed up this convergence to the stationary distribution.

General state space Harris chains Many results for Markov chains with finite state space can be generalized to chains with uncountable state space through Harris chains. The use of Markov chains in Markov chain Monte Carlo methods covers cases where the process follows a continuous state space.

Locally interacting Markov chains "Locally interacting Markov chains" are Markov chains with an evolution that takes into account the state of other Markov chains. This corresponds to the situation when the state space has a (Cartesian-) product form. See interacting particle system and stochastic cellular automata (probabilistic cellular automata). See for instance Interaction of Markov Processes.

Properties Two states are said to communicate with each other if both are reachable from one another by a sequence of transitions that have positive probability. This is an equivalence relation which yields a set of communicating classes. A class is closed if the probability of leaving the class is zero. A Markov chain is irreducible if there is one communicating class, the state space. A state i has period k if k is the greatest common divisor of the numbers of transitions by which i can be reached, starting from i. That is: k = gcd{n > 0 : Pr(Xn = i | X0 = i) > 0}. The state is periodic if k > 1; otherwise k = 1 and the state is aperiodic. A state i is said to be transient if, starting from i, there is a non-zero probability that the chain will never return to i. It is called recurrent (or persistent) otherwise. For a recurrent state i, the mean hitting time is defined as Mi = E[Ti], where Ti is the first return time to i. State i is positive recurrent if Mi is finite and null recurrent otherwise. Periodicity, transience, recurrence and positive and null recurrence are class properties — that is, if one state has the property then all states in its communicating class have the property. A state i is called absorbing if there are no outgoing transitions from the state.

Irreducibility Since periodicity is a class property, if a Markov chain is irreducible, then all its states have the same period. In particular, if one state is aperiodic, then the whole Markov chain is aperiodic. If a finite Markov chain is irreducible, then all states are positive recurrent, and it has a unique stationary distribution given by πi = 1/Mi.

Ergodicity A state i is said to be ergodic if it is aperiodic and positive recurrent. In other words, a state i is ergodic if it is recurrent, has a period of 1, and has finite mean recurrence time. If all states in an irreducible Markov chain are ergodic, then the chain is said to be ergodic. Equivalently, there exists some integer k such that all entries of Pk are positive. It can be shown that a finite state irreducible Markov chain is ergodic if it has an aperiodic state. More generally, a Markov chain is ergodic if there is a number N such that any state can be reached from any other state in a number of steps less than or equal to N. In case of a fully connected transition matrix, where all transitions have a non-zero probability, this condition is fulfilled with N = 1.
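This positivity condition is easy to test numerically by examining successive powers of the transition matrix. A minimal sketch (invented matrices, numpy assumed); the same loop also finds the index of primitivity discussed below:

```python
import numpy as np

def index_of_primitivity(P, k_max=1000):
    """Return the smallest k with all entries of P^k positive,
    or None if no such k <= k_max exists."""
    M = np.eye(len(P))
    for k in range(1, k_max + 1):
        M = M @ P
        if (M > 0).all():
            return k
    return None

# Periodic two-state chain: its powers alternate, so no power
# is entirely positive and the chain is not ergodic.
P_periodic = np.array([[0.0, 1.0],
                       [1.0, 0.0]])
print(index_of_primitivity(P_periodic))   # None

# Adding a self-loop makes the chain aperiodic (and ergodic here).
P_ergodic = np.array([[0.5, 0.5],
                      [1.0, 0.0]])
print(index_of_primitivity(P_ergodic))    # 2
```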
A Markov chain with more than one state and just one out-going transition per state is either not irreducible or not aperiodic, hence cannot be ergodic.

Terminology Some authors call any irreducible, positive recurrent Markov chain ergodic, even a periodic one. In fact, merely irreducible Markov chains correspond to ergodic processes, defined according to ergodic theory. Some authors call a matrix primitive iff there exists some integer k such that all entries of Ak are positive. Some authors call it regular.

Index of primitivity The index of primitivity, or exponent, of a regular matrix A is the smallest k such that all entries of Ak are positive. The exponent is purely a graph-theoretic property, since it depends only on whether each entry of A is zero or positive, and therefore can be found on a directed graph whose adjacency matrix has the same pattern of zero and positive entries as A. There are several combinatorial results about the exponent when there are finitely many states. Let n be the number of states; then: The exponent is at most (n − 1)2 + 1 (Wielandt's bound), with equality only for a graph consisting of a directed cycle through all n states plus a single additional arc. If A has k positive diagonal entries, then its exponent is at most 2n − k − 1. If A is symmetric, then A2 has positive diagonal entries, which by the previous proposition means its exponent is at most 2n − 2. (Dulmage–Mendelsohn theorem) The exponent is at most n + s(n − 2), where s is the girth of the graph. It can be improved to d + s(d − 1), where d is the diameter of the graph.

Measure-preserving dynamical system If a Markov chain has a stationary distribution, then it can be converted to a measure-preserving dynamical system: Let the probability space be Ω = SN, where S is the set of all states for the Markov chain. Let the sigma-algebra on the probability space be generated by the cylinder sets. Let the probability measure be generated by the stationary distribution, and the Markov chain transition. Let T be the shift operator: T((ω0, ω1, ω2, ...)) = (ω1, ω2, ...). Similarly we can construct such a dynamical system with Ω = SZ instead. Since irreducible Markov chains with finite state spaces have a unique stationary distribution, the above construction is unambiguous for irreducible Markov chains. In ergodic theory, a measure-preserving dynamical system is called "ergodic" iff, for any measurable subset A, T−1(A) = A implies that A has measure 0 or 1 (up to a null set). The terminology is inconsistent. Given a Markov chain with a stationary distribution that is strictly positive on all states, the Markov chain is irreducible iff its corresponding measure-preserving dynamical system is ergodic.

Markovian representations In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the "current" and "future" states. For example, let X be a non-Markovian process. Then define a process Y, such that each state of Y represents a time-interval of states of X. Mathematically, this takes the form: Y(t) = {X(s) : s ∈ [a(t), b(t)]}. If Y has the Markov property, then it is a Markovian representation of X. An example of a non-Markovian process with a Markovian representation is an autoregressive time series of order greater than one.

Hitting times The hitting time is the time, starting in a given set of states, until the chain arrives in a given state or set of states. The distribution of such a time period has a phase type distribution. The simplest such distribution is that of a single exponentially distributed transition.
Expected hitting times For a subset of states A ⊆ S, the vector kA of hitting times (where the element kiA represents the expected value of the time, starting in state i, until the chain enters one of the states in the set A) is the minimal non-negative solution to kiA = 0 for i ∈ A, and −Σj qij kjA = 1 for i ∉ A.

Time reversal For a CTMC Xt, the time-reversed process is defined to be X̂t = XT−t, for a fixed time horizon T. By Kelly's lemma this process has the same stationary distribution as the forward process. A chain is said to be reversible if the reversed process is the same as the forward process. Kolmogorov's criterion states that the necessary and sufficient condition for a process to be reversible is that the product of transition rates around a closed loop must be the same in both directions.

Embedded Markov chain One method of finding the stationary probability distribution, π, of an ergodic continuous-time Markov chain, Q, is by first finding its embedded Markov chain (EMC). Strictly speaking, the EMC is a regular discrete-time Markov chain, sometimes referred to as a jump process. Each element of the one-step transition probability matrix of the EMC, S, is denoted by sij, and represents the conditional probability of transitioning from state i into state j. These conditional probabilities may be found by sij = qij / Σk≠i qik for i ≠ j, and sii = 0. From this, S may be written as S = I − (diag(Q))−1 Q, where I is the identity matrix and diag(Q) is the diagonal matrix formed by selecting the main diagonal from the matrix Q and setting all other elements to zero. To find the stationary probability distribution vector, we must next find φ such that φS = φ, with φ being a row vector such that all elements in φ are greater than 0 and sum to 1. From this, π may be found as π = −φ (diag(Q))−1. (S may be periodic, even if Q is not. Once π is found, it must be normalized to a unit vector.) Another discrete-time process that may be derived from a continuous-time Markov chain is a δ-skeleton—the (discrete-time) Markov chain formed by observing X(t) at intervals of δ units of time. The random variables X(0), X(δ), X(2δ), ... give the sequence of states visited by the δ-skeleton.

Special types of Markov chains Markov model Markov models are used to model changing systems. There are 4 main types of models that generalize Markov chains depending on whether every sequential state is observable or not, and whether the system is to be adjusted on the basis of observations made: the Markov chain itself (fully observable, no control), the hidden Markov model (partially observable, no control), the Markov decision process (fully observable, controlled), and the partially observable Markov decision process (partially observable, controlled).

Bernoulli scheme A Bernoulli scheme is a special case of a Markov chain where the transition probability matrix has identical rows, which means that the next state is independent of even the current state (in addition to being independent of the past states). A Bernoulli scheme with only two possible states is known as a Bernoulli process. Note, however, by the Ornstein isomorphism theorem, that every aperiodic and irreducible Markov chain is isomorphic to a Bernoulli scheme; thus, one might equally claim that Markov chains are a "special case" of Bernoulli schemes. The isomorphism generally requires a complicated recoding. The isomorphism theorem is even a bit stronger: it states that any stationary stochastic process is isomorphic to a Bernoulli scheme; the Markov chain is just one such example.

Subshift of finite type When the Markov matrix is replaced by the adjacency matrix of a finite graph, the resulting shift is termed a topological Markov chain or a subshift of finite type. A Markov matrix that is compatible with the adjacency matrix can then provide a measure on the subshift.
Many chaotic dynamical systems are isomorphic to topological Markov chains; examples include diffeomorphisms of closed manifolds, the Prouhet–Thue–Morse system, the Chacon system, sofic systems, context-free systems and block-coding systems. Applications Markov chains have been employed in a wide range of topics across the natural and social sciences, and in technological applications. They have been used for forecasting in several areas: for example, price trends, wind power, stochastic terrorism, and solar irradiance. The Markov chain forecasting models utilize a variety of settings, from discretizing the time series, to hidden Markov models combined with wavelets, and the Markov chain mixture distribution model (MCM). Physics Markovian systems appear extensively in thermodynamics and statistical mechanics, whenever probabilities are used to represent unknown or unmodelled details of the system, if it can be assumed that the dynamics are time-invariant, and that no relevant history need be considered which is not already included in the state description. For example, a thermodynamic state operates under a probability distribution that is difficult or expensive to acquire. Therefore, Markov Chain Monte Carlo method can be used to draw samples randomly from a black-box to approximate the probability distribution of attributes over a range of objects. Markov chains are used in lattice QCD simulations. Chemistry A reaction network is a chemical system involving multiple reactions and chemical species. The simplest stochastic models of such networks treat the system as a continuous time Markov chain with the state being the number of molecules of each species and with reactions modeled as possible transitions of the chain. Markov chains and continuous-time Markov processes are useful in chemistry when physical systems closely approximate the Markov property. For example, imagine a large number n of molecules in solution in state A, each of which can undergo a chemical reaction to state B with a certain average rate. Perhaps the molecule is an enzyme, and the states refer to how it is folded. The state of any single enzyme follows a Markov chain, and since the molecules are essentially independent of each other, the number of molecules in state A or B at a time is n times the probability a given molecule is in that state. The classical model of enzyme activity, Michaelis–Menten kinetics, can be viewed as a Markov chain, where at each time step the reaction proceeds in some direction. While Michaelis-Menten is fairly straightforward, far more complicated reaction networks can also be modeled with Markov chains. An algorithm based on a Markov chain was also used to focus the fragment-based growth of chemicals in silico towards a desired class of compounds such as drugs or natural products. As a molecule is grown, a fragment is selected from the nascent molecule as the "current" state. It is not aware of its past (that is, it is not aware of what is already bonded to it). It then transitions to the next state when a fragment is attached to it. The transition probabilities are trained on databases of authentic classes of compounds. Also, the growth (and composition) of copolymers may be modeled using Markov chains. Based on the reactivity ratios of the monomers that make up the growing polymer chain, the chain's composition may be calculated (for example, whether monomers tend to add in alternating fashion or in long runs of the same monomer). 
Due to steric effects, second-order Markov effects may also play a role in the growth of some polymer chains. Similarly, it has been suggested that the crystallization and growth of some epitaxial superlattice oxide materials can be accurately described by Markov chains.

Biology Markov chains are used in various areas of biology. Notable examples include:
Phylogenetics and bioinformatics, where most models of DNA evolution use continuous-time Markov chains to describe the nucleotide present at a given site in the genome.
Population dynamics, where Markov chains are in particular a central tool in the theoretical study of matrix population models.
Neurobiology, where Markov chains have been used, e.g., to simulate the mammalian neocortex.
Systems biology, for instance with the modeling of viral infection of single cells.
Compartmental models for disease outbreak and epidemic modeling.

Testing Several theorists have proposed the idea of the Markov chain statistical test (MCST), a method of conjoining Markov chains to form a "Markov blanket", arranging these chains in several recursive layers ("wafering") and producing more efficient test sets—samples—as a replacement for exhaustive testing.

Solar irradiance variability Solar irradiance variability assessments are useful for solar power applications. Solar irradiance variability at any location over time is mainly a consequence of the deterministic variability of the sun's path across the sky dome and the variability in cloudiness. The variability of accessible solar irradiance on Earth's surface has been modeled using Markov chains, also including modeling the two states of clear and cloudiness as a two-state Markov chain.

Speech recognition Hidden Markov models have been used in automatic speech recognition systems.

Information theory Markov chains are used throughout information processing. Claude Shannon's famous 1948 paper A Mathematical Theory of Communication, which in a single step created the field of information theory, opens by introducing the concept of entropy by modeling texts in a natural language (such as English) as generated by an ergodic Markov process, where each letter may depend statistically on previous letters. Such idealized models can capture many of the statistical regularities of systems. Even without describing the full structure of the system perfectly, such signal models can make possible very effective data compression through entropy encoding techniques such as arithmetic coding. They also allow effective state estimation and pattern recognition. Markov chains also play an important role in reinforcement learning. Markov chains are also the basis for hidden Markov models, which are an important tool in such diverse fields as telephone networks (which use the Viterbi algorithm for error correction), speech recognition and bioinformatics (such as in rearrangements detection). The LZMA lossless data compression algorithm combines Markov chains with Lempel-Ziv compression to achieve very high compression ratios.

Queueing theory Markov chains are the basis for the analytical treatment of queues (queueing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources (such as bandwidth). Numerous queueing models use continuous-time Markov chains.
For example, an M/M/1 queue is a CTMC on the non-negative integers where upward transitions from i to i + 1 occur at rate λ according to a Poisson process and describe job arrivals, while transitions from i to i − 1 (for i ≥ 1) occur at rate μ (job service times are exponentially distributed) and describe completed services (departures) from the queue.

Internet applications The PageRank of a webpage as used by Google is defined by a Markov chain. It is the probability of being at page i in the stationary distribution of the following Markov chain on all (known) webpages. If N is the number of known webpages, and a page i has ki links out of it, then it has transition probability α/ki + (1 − α)/N for all pages that are linked to, and (1 − α)/N for all pages that are not linked to. The parameter 1 − α is taken to be about 0.15. Markov models have also been used to analyze web navigation behavior of users. A user's web link transition on a particular website can be modeled using first- or second-order Markov models and can be used to make predictions regarding future navigation and to personalize the web page for an individual user.

Statistics Markov chain methods have also become very important for generating sequences of random numbers to accurately reflect very complicated desired probability distributions, via a process called Markov chain Monte Carlo (MCMC). In recent years this has revolutionized the practicability of Bayesian inference methods, allowing a wide range of posterior distributions to be simulated and their parameters found numerically.

Conflict and combat In 1971 a Naval Postgraduate School Master's thesis proposed to model a variety of combat between adversaries as a Markov chain "with states reflecting the control, maneuver, target acquisition, and target destruction actions of a weapons system" and discussed the parallels between the resulting Markov chain and Lanchester's laws. In 1975 Duncan and Siverson remarked that Markov chains could be used to model conflict between state actors, and thought that their analysis would help understand "the behavior of social and political organizations in situations of conflict."

Economics and finance Markov chains are used in finance and economics to model a variety of different phenomena, including the distribution of income, the size distribution of firms, asset prices and market crashes. D. G. Champernowne built a Markov chain model of the distribution of income in 1953. Herbert A. Simon and co-author Charles Bonini used a Markov chain model to derive a stationary Yule distribution of firm sizes. Louis Bachelier was the first to observe that stock prices followed a random walk. The random walk was later seen as evidence in favor of the efficient-market hypothesis and random walk models were popular in the literature of the 1960s. Regime-switching models of business cycles were popularized by James D. Hamilton (1989), who used a Markov chain to model switches between periods of high and low GDP growth (or, alternatively, economic expansions and recessions). A more recent example is the Markov switching multifractal model of Laurent E. Calvet and Adlai J. Fisher, which builds upon the convenience of earlier regime-switching models. It uses an arbitrarily large Markov chain to drive the level of volatility of asset returns. Dynamic macroeconomics makes heavy use of Markov chains. An example is using Markov chains to exogenously model prices of equity (stock) in a general equilibrium setting.
Credit rating agencies produce annual tables of the transition probabilities for bonds of different credit ratings.

Social sciences Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how once a country reaches a specific level of economic development, the configuration of structural factors, such as size of the middle class, the ratio of urban to rural residence, the rate of political mobilization, etc., will generate a higher probability of transitioning from an authoritarian to a democratic regime.

Music Markov chains are employed in algorithmic music composition, particularly in software such as Csound, Max, and SuperCollider. In a first-order chain, the states of the system become note or pitch values, and a probability vector for each note is constructed, completing a transition probability matrix. An algorithm is constructed to produce output note values based on the transition matrix weightings, which could be MIDI note values, frequency (Hz), or any other desirable metric. A second-order Markov chain can be introduced by considering the current state and also the previous state. Higher, nth-order chains tend to "group" particular notes together, while 'breaking off' into other patterns and sequences occasionally. These higher-order chains tend to generate results with a sense of phrasal structure, rather than the 'aimless wandering' produced by a first-order system. Markov chains can be used structurally, as in Xenakis's Analogique A and B. Markov chains are also used in systems which use a Markov model to react interactively to music input. Usually musical systems need to enforce specific control constraints on the finite-length sequences they generate, but control constraints are not compatible with Markov models, since they induce long-range dependencies that violate the Markov hypothesis of limited memory. In order to overcome this limitation, a new approach has been proposed.

Games and sports Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains. At each turn, the player starts in a given state (on a given square) and from there has fixed odds of moving to certain other states (squares). Markov chain models have been used in advanced baseball analysis since 1960, although their use is still rare. Each half-inning of a baseball game fits the Markov chain state when the number of runners and outs are considered. During any at-bat, there are 24 possible combinations of number of outs and position of the runners. Mark Pankin shows that Markov chain models can be used to evaluate runs created for both individual players as well as a team. He also discusses various kinds of strategies and play conditions: how Markov chain models have been used to analyze statistics for game situations such as bunting and base stealing and differences when playing on grass vs. AstroTurf.

Markov text generators Markov processes can also be used to generate superficially real-looking text given a sample document. Markov processes are used in a variety of recreational "parody generator" software (see dissociated press, Jeff Harrison, Mark V. Shaney, and Academias Neutronium). Several open-source text generation libraries using Markov chains exist.
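As an illustration, here is a minimal sketch of a generator in the spirit of the Mark V. Shaney procedure described above, using word triplets and a two-word state; the sample text and function names are invented for the example:

```python
import random

def build_chain(words):
    """Map each pair of consecutive words to the list of words that
    follow it in the sample text (duplicates kept, so more frequent
    continuations are proportionally more likely to be chosen)."""
    chain = {}
    for a, b, c in zip(words, words[1:], words[2:]):
        chain.setdefault((a, b), []).append(c)
    return chain

def generate(chain, length=30):
    """Start from a random two-word state and repeatedly pick a word
    that followed that pair in the sample, shifting the state along."""
    a, b = random.choice(list(chain))
    out = [a, b]
    for _ in range(length):
        followers = chain.get((a, b))
        if not followers:
            break
        a, b = b, random.choice(followers)
        out.append(b)
    return " ".join(out)

sample = "the quick brown fox jumps over the lazy dog and the quick grey cat"
print(generate(build_chain(sample.split())))
```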
Mathematics
Statistics and probability
https://en.wikipedia.org/wiki/Watch
Watch
A watch is a timepiece carried or worn by a person. It is designed to keep a consistent movement despite the motions caused by the person's activities. A wristwatch is designed to be worn around the wrist, attached by a watch strap or other type of bracelet, including metal bands or leather straps. A pocket watch is carried in a pocket, often attached to a chain. A stopwatch is a watch that measures intervals of time.

During most of their history, beginning in the 16th century, watches were mechanical devices, driven by clockwork, powered by winding a mainspring, and keeping time with an oscillating balance wheel. These are called mechanical watches. In the 1960s the electronic quartz watch was invented, powered by a battery and keeping time with a vibrating quartz crystal. By the 1980s it took over most of the watch market, in what was called the quartz revolution (or the quartz crisis in Switzerland, whose renowned watch industry it decimated). In the 2010s, smartwatches emerged, small wrist-worn computers with touchscreens, with functions that go far beyond timekeeping.

Modern watches often display the day, date, month, and year. Mechanical watches may have extra features ("complications") such as moon-phase displays and different types of tourbillon. Quartz watches often include timers, chronographs, and alarm functions. Smartwatches and more complicated electronic watches may even incorporate calculators, GPS and Bluetooth technology or have heart-rate monitoring capabilities, and some use radio clock technology to regularly correct the time. Most watches used mainly for timekeeping have quartz movements. But expensive collectible watches, valued more for their elaborate craftsmanship, aesthetic appeal, and glamorous design than for timekeeping, often have traditional mechanical movements, despite being less accurate and more expensive than their electronic counterparts. As of 2019, the most expensive watch ever sold at auction was the Patek Philippe Grandmaster Chime for US$31.2 million.

History Origins Watches evolved from portable spring-driven clocks, which first appeared in 15th-century Europe. The first timepieces to be worn, made in the 16th century beginning in the German cities of Nuremberg and Augsburg, were transitional in size between clocks and watches. Nuremberg clockmaker Peter Henlein (or Henle or Hele) (1485–1542) is often credited as the inventor of the watch. However, other German clockmakers were creating miniature timepieces during this period, and there is no evidence Henlein was the first. Watches were not widely worn in pockets until the 17th century. One account suggests that the word "watch" came from the Old English word woecce – which meant "watchman" – because town watchmen used the technology to keep track of their shifts at work. Another says that the term came from 17th-century sailors, who used the new mechanisms to time the length of their shipboard watches (duty shifts).

Development A rise in accuracy occurred in 1657 with the addition of the balance spring to the balance wheel, an invention disputed both at the time and ever since between Robert Hooke and Christiaan Huygens. This innovation significantly improved the accuracy of watches, reducing errors from several hours a day to approximately 10 minutes per day, which led to the introduction of the minute hand on watch faces in Britain around 1680 and in France by 1700.
The increased accuracy of the balance wheel focused attention on errors caused by other parts of the movement, igniting a two-century wave of watchmaking innovation. The first thing to be improved was the escapement. The verge escapement was replaced in quality watches by the cylinder escapement, invented by Thomas Tompion in 1695 and further developed by George Graham in the 1720s. Improvements in manufacturing – such as the tooth-cutting machine devised by Robert Hooke – allowed some increase in the volume of watch production, although finishing and assembling was still done by hand until well into the 19th century. A major cause of error in balance-wheel timepieces, caused by changes in elasticity of the balance spring from temperature changes, was solved by the bimetallic temperature-compensated balance wheel invented in 1765 by Pierre Le Roy and improved by Thomas Earnshaw (1749–1829). The lever escapement, the single most important technological breakthrough, though invented by Thomas Mudge in 1754 and improved by Josiah Emery in 1785, only gradually came into use from about 1800 onwards, chiefly in Britain. The British predominated in watch manufacture for much of the 17th and 18th centuries, but maintained a system of production that was geared towards high-quality products for the élite. The British Watch Company modernized clock manufacture with mass-production techniques and the application of duplicating tools and machinery in 1843. In the United States, Aaron Lufkin Dennison started a factory in 1851 in Massachusetts that used interchangeable parts, and by 1861 a successful enterprise operated, incorporated as the Waltham Watch Company. Wristwatches The concept of the wristwatch goes back to the production of the very earliest watches in the 16th century. In 1571, Elizabeth I of England received a wristwatch, described as an "armed watch", from Robert Dudley. The oldest surviving wristwatch (then described as a "bracelet watch") is one made in 1806, and given to Joséphine de Beauharnais. From the beginning, wristwatches were almost exclusively worn by women – men used pocket watches up until the early 20th century. In 1810, the watch-maker Abraham-Louis Breguet made a wristwatch for the Queen of Naples. The first Swiss wristwatch was made in the year 1868 by the Swiss watch-maker Patek Philippe for Countess Koscowicz of Hungary. Wristwatches were first worn by military men towards the end of the 19th century, having increasingly recognized the importance of synchronizing maneuvers during war without potentially revealing plans to the enemy through signaling. The Garstin Company of London patented a "Watch Wristlet" design in 1893, but probably produced similar designs from the 1880s. Officers in the British Army began using wristwatches during colonial military campaigns in the 1880s, such as during the Anglo-Burma War of 1885. During the First Boer War of 1880–1881, the importance of coordinating troop movements and synchronizing attacks against highly mobile Boer insurgents became paramount, and the use of wristwatches subsequently became widespread among the officer class. The company Mappin & Webb began production of their successful "campaign watch" for soldiers during the campaign in the Sudan in 1898 and accelerated production for the Second Boer War of 1899–1902 a few years later. In continental Europe, Girard-Perregaux and other Swiss watchmakers began supplying German naval officers with wristwatches in about 1880. 
Early models were essentially standard pocket watches fitted to a leather strap, but by the early 20th century, manufacturers began producing purpose-built wristwatches. The Swiss company Dimier Frères & Cie patented a wristwatch design with the now standard wire lugs in 1903. In 1904, Louis Cartier produced a wristwatch to allow his friend Alberto Santos-Dumont to check flight performance in his airship while keeping both hands on the controls, as this proved difficult with a pocket watch. Cartier still markets a line of Santos-Dumont watches and sunglasses. In 1905, Hans Wilsdorf moved to London and set up his own business, Wilsdorf & Davis, with his brother-in-law Alfred Davis, providing quality timepieces at affordable prices; the company became Rolex in 1915. Wilsdorf was an early convert to the wristwatch, and contracted the Swiss firm Aegler to produce a line of wristwatches. The impact of the First World War of 1914–1918 dramatically shifted public perceptions on the propriety of the man's wristwatch and opened up a mass market in the postwar era. The creeping barrage artillery tactic, developed during the war, required precise synchronization between the artillery gunners and the infantry advancing behind the barrage. Service watches produced during the war were specially designed for the rigors of trench warfare, with luminous dials and unbreakable glass. The UK War Office began issuing wristwatches to combatants from 1917. By the end of the war, almost all enlisted men wore a wristwatch (or wristlet), and after they were demobilized, the fashion soon caught on: the British Horological Journal wrote in 1917 that "the wristlet watch was little used by the sterner sex before the war, but now is seen on the wrist of nearly every man in uniform and of many men in civilian attire." By 1930, wristwatches outnumbered pocket watches by a decisive ratio of 50:1. Automatic watches John Harwood invented the first successful self-winding system in 1923. In anticipation of the 1930 expiry of Harwood's patent on self-winding mechanisms, Glycine founder Eugène Meylan began developing a self-winding system as a separate module that could be used with almost any 8.75 ligne (19.74 millimeter) watch movement. Glycine incorporated this module into its watches in October 1930 and began mass-producing automatic watches. Electric watches The Elgin National Watch Company and the Hamilton Watch Company pioneered the first electric watches. The first electric movements used a battery as a power source to oscillate the balance wheel. During the 1950s, Elgin developed the model 725, while Hamilton released two models: the first, the Hamilton 500, released on 3 January 1957, was produced into 1959. This model had problems with the contact wires misaligning, and the watches had to be returned to Hamilton for realignment. The Hamilton 505, an improvement on the 500, proved more reliable: the contact wires were removed and a non-adjustable contact on the balance assembly delivered the power to the balance wheel. Similar designs from many other watch companies followed. Another type of electric watch, developed by the Bulova company, used a tuning-fork resonator instead of a traditional balance wheel to increase timekeeping accuracy, moving from a typical 2.5–4 Hz with a traditional balance wheel to 360 Hz with the tuning-fork design.
Quartz watches The commercial introduction of the quartz watch in 1969, in the form of the Seiko Astron 35SQ, and in 1970, in the form of the Omega Beta 21, was a revolutionary improvement in watch technology. In place of a balance wheel, which oscillated at perhaps 5 or 6 beats per second, these devices used a quartz-crystal resonator, which vibrated at 8,192 Hz, driven by a battery-powered oscillator circuit. Most quartz-watch oscillators now operate at 32,768 Hz, though quartz movements have been designed with frequencies as high as 262 kHz. Since the 1980s, more quartz watches than mechanical ones have been marketed. Smart watches The Timex Datalink wristwatch was introduced in 1994. The early Timex Datalink smartwatches offered a wireless data transfer mode for receiving data from a PC. Since then, many companies have released their own iterations of the smartwatch, such as the Apple Watch, Samsung Galaxy Watch, and Huawei Watch. Hybrid watches A hybrid smartwatch is a fusion between a regular mechanical watch and a smartwatch. Parts The movement and case are the basic parts of a watch. A watch band or bracelet is added to form a wristwatch; alternatively, a watch chain is added to form a pocket watch. The case is the outer covering of the watch. The case back is the back portion of the watch's case. Accessing the movement (such as during battery replacement) depends on the case back, which generally falls into one of four types. Snap-off case backs (press-on case backs): the watch back pulls straight off and presses straight on. Screw-down case backs (threaded case backs): the entire watch back must be rotated to unscrew from the case; often it has six notches on the external part of the case back. Screw back cases: tiny screws hold the case back to the case. Unibody: the only way into the case involves prying the crystal off the front of the watch. The crystal, also called the window or watch glass, is the transparent part of the case that allows viewing the hands and the dial of the movement. Modern wristwatches almost always use one of four materials. Acrylic glass (plexiglass, hesalite glass): the most impact-resistant ("unbreakable"), and therefore used in dive watches and most military watches; acrylic glass is also the lowest-cost of these materials, so it is used in practically all low-cost watches. Mineral crystal: a tempered glass. Sapphire-coated mineral crystal: mineral glass with a thin sapphire layer. Synthetic sapphire crystal: the most scratch-resistant; it is difficult to cut and polish, making sapphire watch crystals the most expensive. The bezel is the ring holding the crystal in place. The lugs are small metal projections at both ends of the wristwatch case where the watch band attaches to the watch case. The case and the lugs are often machined from one solid piece of stainless steel. Movement The movement of a watch is the mechanism that measures the passage of time and displays the current time (and possibly other information including date, month, and day). Movements may be entirely mechanical, entirely electronic (potentially with no moving parts), or a blend of both. Most watches intended mainly for timekeeping today have electronic movements, with mechanical hands on the watch face indicating the time. Mechanical Compared to electronic movements, mechanical watches are less accurate, often with errors of seconds per day; are sensitive to position, temperature, and magnetism; are costly to produce; require regular maintenance and adjustments; and are more prone to failures.
Nevertheless, mechanical watches attract interest from consumers, particularly among watch collectors. Skeleton watches are designed to display the mechanism for aesthetic purposes. A mechanical movement uses an escapement mechanism to control and limit the unwinding and winding parts of a spring, converting what would otherwise be a simple unwinding into a controlled and periodic energy release. The movement also uses a balance wheel, together with the balance spring (also known as a hairspring), to control the gear system's motion in a manner analogous to the pendulum of a pendulum clock. The tourbillon, an optional part for mechanical movements, is a rotating frame for the escapement, used to cancel out or reduce gravitational bias. Due to the complexity of designing a tourbillon, they are expensive and typically found in prestigious watches. The pin-lever escapement (called the Roskopf movement after its inventor, Georges Frederic Roskopf), which is a cheaper version of the fully levered movement, was manufactured in huge quantities by many Swiss manufacturers, as well as by Timex, until it was replaced by quartz movements. Introduced by Bulova in 1960, tuning-fork watches use a type of electromechanical movement with a precise frequency (most often 360 Hz) to drive a mechanical watch. The task of converting electronically pulsed fork vibration into rotary movement is done via two tiny jeweled fingers, called pawls. Tuning-fork watches were rendered obsolete when electronic quartz watches were developed. Traditional mechanical watch movements use a spiral spring called a mainspring as their power source, which must be rewound periodically by the user by turning the watch crown. Antique pocket watches were wound by inserting a key into the back of the watch and turning it. While most modern mechanical watches must be wound daily, some run for several days, and a few have 192-hour mainsprings requiring only once-weekly winding. Automatic watches A self-winding or automatic watch is one that rewinds the mainspring of a mechanical movement by the natural motions of the wearer's body. The first self-winding mechanism was invented for pocket watches in 1770 by Abraham-Louis Perrelet, but the first "self-winding", or "automatic", wristwatch was the invention of a British watch repairer named John Harwood in 1923. This type of watch winds itself without requiring any special action by the wearer. It uses an eccentric weight, called a winding rotor, which rotates with the movement of the wearer's wrist. The back-and-forth motion of the winding rotor couples to a ratchet to wind the mainspring automatically. Self-winding watches can usually also be wound manually to keep them running when not worn or if the wearer's wrist motions are inadequate to keep the watch wound. In April 2013, the Swatch Group launched the sistem51 wristwatch. It has a mechanical movement consisting of only 51 parts, including 19 jewels and a novel self-winding mechanism with a transparent oscillating weight. Ten years after its introduction, it is still the only mechanical movement manufactured entirely on a fully automated assembly line, including adjustment of the balance wheel and the escapement for accuracy by laser. The low parts count and the fully automated assembly make it an inexpensive automatic Swiss watch. Electronic Electronic movements, also known as quartz movements, have few or no moving parts, except a quartz crystal which is made to vibrate by the piezoelectric effect.
A varying electric voltage is applied to the crystal, which responds by changing its shape; in combination with some electronic components, it functions as an oscillator. It resonates at a specific, highly stable frequency, which is used to accurately pace a timekeeping mechanism. Most quartz movements are primarily electronic but are geared to drive mechanical hands on the face of the watch to provide a traditional analog display of the time, a feature most consumers still prefer. In 1959, Seiko placed an order with Epson (a subsidiary company of Seiko and the 'brain' behind the quartz revolution) to start developing a quartz wristwatch. The project was codenamed 59A. By the 1964 Tokyo Summer Olympics, Seiko had a working prototype of a portable quartz watch, which was used for time measurement throughout the event. The first prototypes of an electronic quartz wristwatch (as opposed to the portable quartz timekeeping devices Seiko used at the Tokyo Olympics in 1964) were made by the CEH research laboratory in Neuchâtel, Switzerland. From 1965 through 1967, pioneering development work was done on a miniaturized 8192 Hz quartz oscillator, a thermo-compensation module, and an in-house-made, dedicated integrated circuit (unlike the hybrid circuits used in the later Seiko Astron wristwatch). As a result, the BETA 1 prototype set new timekeeping performance records at the International Chronometric Competition held at the Observatory of Neuchâtel in 1967. In 1970, 18 manufacturers exhibited production versions of the Beta 21 wristwatch, including the Omega Electroquartz as well as Patek Philippe, Rolex Oysterquartz and Piaget. The first quartz watch to enter production was the Seiko 35 SQ Astron, which hit the shelves on 25 December 1969, swiftly followed by the Swiss Beta 21, and then, a year later, the prototype of one of the world's most accurate wristwatches to date: the Omega Marine Chronometer. Since the technology had been developed through Japanese, American and Swiss contributions, nobody could patent the whole movement of the quartz wristwatch, which allowed other manufacturers to participate in the rapid growth and development of the quartz watch market. This ended – in less than a decade – almost 100 years of dominance by the mechanical wristwatch. Modern quartz movements are produced in very large quantities, and even the cheapest wristwatches typically have quartz movements. Whereas mechanical movements can typically be off by several seconds a day, an inexpensive quartz movement in a child's wristwatch may still be accurate to within half a second per day – ten times more accurate than a mechanical movement. After a consolidation of the mechanical watch industry in Switzerland during the 1970s, mass production of quartz wristwatches took off under the leadership of the Swatch Group of companies, a Swiss conglomerate with vertical control of the production of Swiss watches and related products. For quartz wristwatches, subsidiaries of Swatch manufacture watch batteries (Renata), oscillators (Oscilloquartz, now Micro Crystal AG) and integrated circuits (Ebauches Electronic SA, renamed EM Microelectronic-Marin). The launch of the new SWATCH brand in 1983 was marked by bold new styling, design, and marketing. Today, the Swatch Group maintains its position as the world's largest watch company.
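The accuracy comparison above is easy to quantify: a movement's daily error is its fractional frequency error multiplied by the 86,400 seconds in a day. A minimal sketch in Python; the specific error figures below are illustrative assumptions chosen to match the rough magnitudes quoted above, not measured values for any particular movement.

```python
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400 seconds

def daily_error_seconds(actual_hz: float, nominal_hz: float) -> float:
    """Seconds gained (+) or lost (-) per day by a movement whose
    oscillator runs at actual_hz instead of its nominal rate."""
    return (actual_hz - nominal_hz) / nominal_hz * SECONDS_PER_DAY

# A 32,768 Hz quartz crystal running 0.2 Hz fast (about 6 parts
# per million) gains roughly half a second per day -- the figure
# quoted above for an inexpensive quartz movement.
print(daily_error_seconds(32768.2, 32768))  # ~0.53

# A 3 Hz balance wheel running 0.0002 Hz fast drifts by several
# seconds per day, in line with typical mechanical movements.
print(daily_error_seconds(3.0002, 3))       # ~5.76
```

The same arithmetic explains why higher oscillator frequencies help: the absolute frequency deviation of a well-cut crystal is tiny relative to 32,768 Hz, so the fractional error, and hence the daily drift, is small.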
Seiko's efforts to combine the quartz and mechanical movements bore fruit after 20 years of research, leading to the introduction of the Seiko Spring Drive, first in a limited domestic market production in 1999 and then worldwide in September 2005. The Spring Drive keeps time to quartz standards without the use of a battery, using a traditional mechanical gear train powered by a spring, and without the need for a balance wheel. In 2010, Miyota (Citizen Watch) of Japan introduced a newly developed movement using a three-pronged quartz crystal produced exclusively for Bulova's Precisionist and Accutron II lines: a new type of quartz watch with an ultra-high frequency (262.144 kHz) that is claimed to be accurate to ±10 seconds a year and has a smooth, sweeping second hand rather than one that jumps each second. Radio time signal watches are a type of electronic quartz watch that synchronizes (transfers) its time with an external time source, such as atomic clocks, time signals from GPS navigation satellites, the German DCF77 signal in Europe, WWVB in the US, and others. Movements of this type may, among other things, synchronize the time of day and the date, the leap-year status, and the state of daylight saving time (on or off). However, other than the radio receiver, these watches are normal quartz watches in all other aspects. Electronic watches require electricity as a power source, and some mechanical movements and hybrid electronic-mechanical movements also require electricity. Usually, the electricity is provided by a replaceable battery. The first use of electrical power in watches was as a substitute for the mainspring, to remove the need for winding. The first electrically powered watch, the Hamilton Electric 500, was released in 1957 by the Hamilton Watch Company of Lancaster, Pennsylvania. Watch batteries (strictly speaking cells, as a battery is composed of multiple cells) are specially designed for their purpose. They are very small and provide tiny amounts of power continuously for very long periods (several years or more). In most cases, replacing the battery requires a trip to a watch-repair shop or watch dealer; this is especially true for watches that are water-resistant, as special tools and procedures are required for the watch to remain water-resistant after battery replacement. Silver-oxide and lithium batteries are popular today; mercury batteries, formerly quite common, are no longer used, for environmental reasons. Cheap batteries may be alkaline, of the same size as silver-oxide cells but providing shorter life. Rechargeable batteries are used in some solar-powered watches. Some electronic watches are powered by the movement of the wearer. For instance, Seiko's kinetic-powered quartz watches use the motion of the wearer's arm to turn a rotating weight, which causes a tiny generator to supply power to charge a rechargeable battery that runs the watch. The concept is similar to that of self-winding spring movements, except that electrical power is generated instead of mechanical spring tension. Solar-powered watches are powered by light. A photovoltaic cell on the face (dial) of the watch converts light to electricity, which is used to charge a rechargeable battery or capacitor. The movement of the watch draws its power from the rechargeable battery or capacitor. As long as the watch is regularly exposed to fairly strong light (such as sunlight), it never needs a battery replacement.
Some models need only a few minutes of sunlight to provide weeks of energy (as in the Citizen Eco-Drive). Some of the early solar watches of the 1970s had innovative and unique designs to accommodate the array of solar cells needed to power them (Synchronar, Nepro, Sicura, and some models by Cristalonic, Alba, Seiko, and Citizen). As the decades progressed and the efficiency of the solar cells increased while the power requirements of the movement and display decreased, solar watches began to be designed to look like other conventional watches. A rarely used power source is the temperature difference between the wearer's arm and the surrounding environment (as applied in the Citizen Eco-Drive Thermo). Display Analog Traditionally, watches have displayed the time in analog form, with a numbered dial upon which are mounted at least a rotating hour hand and a longer, rotating minute hand. Many watches also incorporate a third hand that shows the current second of the current minute. In quartz watches this second hand typically snaps to the next marker every second. In mechanical watches, the second hand may appear to glide continuously, though in fact it merely moves in smaller steps, typically one-fifth to one-tenth of a second, corresponding to the beat (half period) of the balance wheel. With a duplex escapement, the hand advances every two beats (one full period) of the balance wheel, typically every half second; with a double duplex escapement, this happens every four beats (two periods, or one second). A truly gliding second hand is achieved with the tri-synchro regulator of Spring Drive watches. All three hands are normally mechanical, physically rotating on the dial, although a few watches have been produced with "hands" simulated by a liquid-crystal display. Analog display of the time is nearly universal in watches sold as jewelry or collectibles, and in these watches the range of different styles of hands, numbers, and other aspects of the analog dial is very broad. In watches sold for timekeeping, analog display remains very popular, as many people find it easier to read than digital display; but in timekeeping watches the emphasis is on clarity and accurate reading of the time under all conditions (clearly marked digits, easily visible hands, large watch faces, etc.). Most watches are designed for the left wrist, with the stem (the knob used for changing the time) on the right side of the watch; this makes it easy to change the time without removing the watch from the wrist, provided one is right-handed and wears the watch on the left wrist (as is traditionally done). If one is left-handed and wears the watch on the right wrist, one has to remove the watch from the wrist to reset the time or to wind the watch. Analog watches, as well as clocks, are often marketed showing a display time of approximately 1:50 or 10:10. This creates a visually pleasing, smile-like face on the upper half of the watch, in addition to framing the manufacturer's name. Digital displays often show a time of 12:08, where the increase in the number of active segments or pixels gives a positive feeling. Tactile Tissot, a Swiss luxury watchmaker, makes the Silen-T wristwatch with a touch-sensitive face that vibrates to help the user tell time eyes-free. The bezel of the watch features raised bumps at each hour mark; after briefly touching the face of the watch, the wearer runs a finger around the bezel clockwise.
When the finger reaches the bump indicating the hour, the watch vibrates continuously, and when the finger reaches the bump indicating the minute, the watch vibrates intermittently. Eone Timepieces, a Washington D.C.–based company, launched its first tactile analog wristwatch, the "Bradley", on 11 July 2013 on the Kickstarter website. The device is primarily designed for sight-impaired users, who can use the watch's two ball bearings to determine the time, but it is also suitable for general use. The watch features raised marks at each hour and two moving, magnetically attached ball bearings. One ball bearing, on the edge of the watch, indicates the hour, while the other, on the face, indicates the minute. Digital A digital display shows the time as a number, e.g., 12:08 instead of a short hand pointing towards the number 12 and a long hand 8/60 of the way around the dial. The digits are usually shown as a seven-segment display. The first digital pocket watches appeared in the late 19th century. In the 1920s, the first digital mechanical wristwatches appeared. The first digital electronic watch, a Pulsar LED prototype in 1970, was developed jointly by the Hamilton Watch Company and Electro-Data, founded by George H. Thiess. John Bergey, the head of Hamilton's Pulsar division, said that he was inspired to make a digital timepiece by the then-futuristic digital clock that Hamilton themselves made for the 1968 science fiction film 2001: A Space Odyssey. On 4 April 1972, the Pulsar was finally ready, made in an 18-carat gold case and sold for $2,100. It had a red light-emitting diode (LED) display. Digital LED watches were very expensive and out of reach of the common consumer until 1975, when Texas Instruments started to mass-produce LED watches inside a plastic case. These watches, which first retailed for only $20 and were reduced to $10 in 1976, saw Pulsar lose $6 million, and the Pulsar brand was sold to Seiko. An early LED watch that was rather problematic was The Black Watch, made and sold by the British company Sinclair Radionics in 1975. It was only sold for a few years, as production problems and returned (faulty) product forced the company to cease production. Most watches with LED displays required the user to press a button to see the time displayed for a few seconds, because LEDs used so much power that they could not be kept operating continuously. Usually, the LED display color was red. Watches with LED displays were popular for a few years, but soon the LED displays were superseded by liquid crystal displays (LCDs), which used less battery power and were much more convenient in use, with the display always visible and no need to push a button before seeing the time. Only in darkness did a button need to be pressed, to illuminate the display – at first with a tiny light bulb, and later with illuminating LEDs or electroluminescent backlights. The first LCD watch with a six-digit LCD was the 1973 Seiko 06LC, although various forms of early LCD watches with a four-digit display were marketed as early as 1972, including the 1972 Gruen Teletime LCD Watch and the Cox Electronic Systems Quarza. The Quarza, introduced in 1972, had the first field-effect LCD readable in direct sunlight, produced by the International Liquid Crystal Corporation of Cleveland, Ohio.
In Switzerland, Ebauches Electronic SA presented a prototype eight-digit LCD wristwatch showing time and date at the MUBA Fair, Basel, in March 1973, using a twisted nematic LCD manufactured by Brown, Boveri & Cie, Switzerland, which became the supplier of LCDs to Casio for the CASIOTRON watch in 1974. A problem with LCDs is that they use polarized light. If, for example, the user is wearing polarized sunglasses, the watch may be difficult to read because the plane of polarization of the display is roughly perpendicular to that of the glasses. If the light that illuminates the display is polarized, for example if it comes from a blue sky, the display may be difficult or impossible to read. From the 1980s onward, digital watch technology vastly improved. In 1982, Seiko produced the Seiko TV Watch, which had a built-in television screen, and Casio produced a digital watch with a thermometer (the TS-1000) as well as another that could translate 1,500 Japanese words into English. In 1985, Casio produced the CFX-400 scientific calculator watch. In 1987, Casio produced a watch that could dial telephone numbers (the DBA-800) and Citizen introduced one that would react to voice. In 1995, Timex released a watch that allowed the wearer to download and store data from a computer to their wrist. Some watches, such as the Timex Datalink USB, feature dot matrix displays. Since their peak during the high-technology fad of the late 1980s to mid-1990s, digital watches have mostly become simpler, less expensive timepieces with little variety between models. Illuminated Many watches have displays that are illuminated, so they can be used in darkness. Various methods have been used to achieve this. Mechanical watches often have luminous paint on their hands and hour marks. In the mid-20th century, radioactive material was often incorporated in the paint, so it would continue to glow without any exposure to light. Radium was often used, but it produced small amounts of radiation outside the watch that might have been hazardous. Tritium was used as a replacement, since the radiation it produces has such low energy that it cannot penetrate a watch glass. However, tritium is expensive – it has to be made in a nuclear reactor – and it has a half-life of only about 12 years, so the paint remains luminous for only a few years. Nowadays, tritium is used in specialized watches, e.g., for military purposes (see Tritium illumination). For other purposes, luminous paint is sometimes used on analog displays, but no radioactive material is contained in it. Such a display glows after being exposed to light, but the glow fades within a short time. Watches that incorporate batteries often have electric illumination in their displays. However, lights consume far more power than electronic watch movements. To conserve the battery, the light is activated only when the user presses a button. Usually, the light remains lit for a few seconds after the button is released, which allows the user to move the hand out of the way. In some early digital watches, LED displays were used, which could be read as easily in darkness as in daylight. The user had to press a button to light up the LEDs, which meant that the watch could not be read without the button being pressed, even in full daylight. In some types of watches, small incandescent lamps or LEDs illuminate the display, which is not intrinsically luminous. These tend to produce very non-uniform illumination.
Other watches use electroluminescent material to produce uniform illumination of the background of the display, against which the hands or digits can be seen. Speech synthesis Talking watches are available, intended for the blind or visually impaired. They speak the time out loud at the press of a button. This has the disadvantage of disturbing others nearby, or at least alerting those within earshot that the wearer is checking the time. Tactile watches avoid this awkwardness, but talking watches are preferable for those who are not confident in their ability to read a tactile watch reliably. Handedness Wristwatches with analog displays generally have a small knob, called the crown, that can be used to adjust the time and, in mechanical watches, wind the spring. Almost always, the crown is located on the right-hand side of the watch so it can be worn on the left wrist by a right-handed individual. This makes the crown inconvenient to use if the watch is being worn on the right wrist. Some manufacturers offer "left-hand drive", or "destro", watches, which move the crown to the left side, making the watch easier to wear for left-handed individuals. A rarer configuration is the bullhead watch. Bullhead watches are generally, but not exclusively, chronographs. The configuration moves the crown and chronograph pushers to the top of the watch. Bullheads are commonly wristwatch chronographs that are intended to be used as stopwatches off the wrist. Examples are the Citizen Bullhead Change Timer and the Omega Seamaster Bullhead. Digital watches generally have push-buttons that can be used to make adjustments. These are usually equally easy to use on either wrist. Functions Customarily, watches provide the time of day, giving at least the hour and minute, and often the second. Many also provide the current date, and some (called "complete calendar" or "triple date" watches) display the day of the week and the month as well. However, many watches also provide a great deal of information beyond the basics of time and date. Some watches include alarms. Other elaborate and more expensive watches, both pocket and wrist models, also incorporate striking mechanisms or repeater functions, so that the wearer can learn the time by the sound emanating from the watch. This announcement or striking feature is an essential characteristic of true clocks and distinguishes such watches from ordinary timepieces. This feature is available on most digital watches. A complicated watch has one or more functions beyond the basic function of displaying the time and the date; such a function is called a complication. Two popular complications are the chronograph complication, which is the ability of the watch movement to function as a stopwatch, and the moon-phase complication, which is a display of the lunar phase. Other, more expensive complications include the tourbillon, perpetual calendar, minute repeater, and equation of time. A truly complicated watch has many of these complications at once (see the Calibre 89 from Patek Philippe, for instance). Some watches can both indicate the direction of Mecca and have alarms that can be set for all daily prayer requirements. Among watch enthusiasts, complicated watches are especially collectible. Some watches include a second 12-hour or 24-hour display for UTC or GMT. The similar-sounding terms chronograph and chronometer are often confused, although they mean altogether different things.
A chronograph is a watch with an added duration timer, often a stopwatch complication (as explained above), while a chronometer watch is a timepiece that has met an industry-standard test for performance under pre-defined conditions: a chronometer is a high-quality mechanical or thermo-compensated movement that has been tested and certified to operate within a certain standard of accuracy by the COSC (Contrôle Officiel Suisse des Chronomètres). The concepts are different but not mutually exclusive; a watch can be a chronograph, a chronometer, both, or neither. Electronic sports watches, combining timekeeping with GPS and/or activity tracking, address the general fitness market and have the potential for commercial success (examples include the Garmin Forerunner and Garmin Vivofit, Epson models, and an announced model in the Swatch Touch series). Braille watches have analog displays with raised bumps around the face to allow blind users to tell the time. Their digital equivalents use synthesised speech to speak the time on command. Fashion Wristwatches and antique pocket watches are often appreciated as jewelry or as collectible works of art rather than just as timepieces. This has created several different markets for wristwatches, ranging from very inexpensive but accurate watches (intended for no other purpose than telling the correct time) to extremely expensive watches that serve mainly as personal adornment or as examples of high achievement in miniaturization and precision mechanical engineering. Traditionally, dress watches appropriate for informal (business), semi-formal, and formal attire are gold, thin, simple, and plain, but increasingly, rugged, complicated, or sports watches are considered by some to be acceptable for such attire. Some dress watches have a cabochon on the crown or faceted gemstones on the face, bezel, or bracelet. Some are made entirely of faceted sapphire (corundum). Many fashion and department stores offer a variety of less expensive, trendy "costume" watches (usually for women), many of which are similar in quality to basic quartz timepieces but feature bolder designs. In the 1980s, the Swiss Swatch company hired graphic designers to design a new annual collection of non-repairable watches. Trade in counterfeit watches, which mimic expensive brand-name watches, constitutes a sizeable estimated market each year. Space The zero-gravity environment and other extreme conditions encountered by astronauts in space require the use of specially tested watches. The first-ever watch to be sent into space was a Russian "Pobeda" watch from the Petrodvorets Watch Factory. It was sent on a single-orbit flight on the spaceship Korabl-Sputnik 4 on 9 March 1961. The watch had been attached, without authorisation, to the wrist of Chernushka, a dog that successfully made exactly the same trip as Yuri Gagarin, with exactly the same rocket and equipment, just a month before Gagarin's flight. On 12 April 1961, Gagarin wore a Shturmanskie (Russian for "navigator's") wristwatch during his historic first flight into space. The Shturmanskie was manufactured at the First Moscow Factory. Since 1964, the watches of the First Moscow Factory have been marked with the trademark "POLJOT", which means "flight" in Russian and is a tribute to the many space trips its watches have accomplished. In the late 1970s, Poljot launched a new chronograph movement, the 3133.
With a 23-jewel movement and manual winding (43-hour power reserve), it was a modified Russian version of the Swiss Valjoux 7734 of the early 1970s. Poljot 3133 watches were taken into space by astronauts from Russia, France, Germany and Ukraine. On the arm of Valeriy Polyakov, a watch based on the Poljot 3133 chronograph movement set a space record for the longest spaceflight in history. Through the 1960s, a large range of watches was tested for durability and precision under extreme temperature changes and vibrations. The Omega Speedmaster Professional was selected by NASA, the U.S. space agency, and is best known for being worn by astronaut Buzz Aldrin during the 1969 Apollo 11 Moon landing. Heuer became the first Swiss watchmaker in space thanks to a Heuer stopwatch worn by John Glenn in 1962, when he piloted Friendship 7 on the first crewed U.S. orbital mission. The Breitling Navitimer Cosmonaute was designed with a 24-hour analog dial to avoid confusion between a.m. and p.m., which are meaningless in space. It was first worn in space by U.S. astronaut Scott Carpenter on 24 May 1962 in the Aurora 7 Mercury capsule. Since 1994, Fortis has been the exclusive supplier for crewed space missions authorized by the Russian Federal Space Agency. China National Space Administration (CNSA) astronauts wear Fiyta space watches. At Baselworld 2008, Seiko announced the creation of the first watch ever designed specifically for a spacewalk, the Spring Drive Spacewalk. The Timex Datalink is flight-certified by NASA for space missions and is one of the watches qualified by NASA for space travel. The Casio G-Shock DW-5600C and 5600E, DW 6900, and DW 5900 are flight-qualified for NASA space travel. Various Timex Datalink models were used by both cosmonauts and astronauts. Scuba diving Watches may be constructed to be water-resistant. These watches are sometimes called diving watches when they are suitable for scuba diving or saturation diving. The International Organization for Standardization (ISO) issued a standard for water-resistant watches, which also prohibits the term "waterproof" from being used with watches; many countries have adopted it. In the United States, advertising a watch as waterproof has been illegal since 1968, per Federal Trade Commission regulations regarding the "misrepresentation of protective features". Water resistance is achieved by gaskets, which form a watertight seal, used in conjunction with a sealant applied on the case to help keep water out. The material of the case must also be tested in order to pass as water-resistant. None of the tests defined by ISO 2281 for the Water Resistant mark are suitable to qualify a watch for scuba diving. Such watches are designed for everyday life and must be water-resistant during exercise such as swimming. They can be worn in different temperature and pressure conditions but are under no circumstances designed for scuba diving. The standards for diving watches are regulated by the ISO 6425 international standard. The watches are tested in static or still water under 125% of the rated (water) pressure, so a watch with a 200-metre rating will be water-resistant if it is stationary and under 250 metres of static water. The testing of water resistance is fundamentally different from that of non-dive watches, because every watch has to be fully tested.
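As a quick worked illustration of the 125% overpressure rule just described, the test condition follows directly from the depth rating. A minimal sketch; the function name is my own and is not part of ISO 6425.

```python
def iso6425_test_depth(rated_depth_m: float) -> float:
    """Static water depth equivalent at which an ISO 6425 diver's
    watch is tested: 125% of its rated depth."""
    return rated_depth_m * 1.25

# A watch rated to 200 m must stay water-resistant under the
# pressure of 250 m of still water, as noted above.
print(iso6425_test_depth(200))  # 250.0
print(iso6425_test_depth(100))  # 125.0
```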
Besides water resistance standards to a minimum of 100-metre depth rating, ISO 6425 also provides eight minimum requirements for mechanical diver's watches for scuba diving (quartz and digital watches have slightly differing readability requirements). For diver's watches for mixed-gas saturation diving, two additional ISO 6425 requirements have to be met. Watches are classified by their degree of water resistance (1 metre = 3.281 feet). Some watches express the rating in bar instead of metres; as a rule of thumb, multiply the bar rating by 10 and subtract 10 to approximate the equivalent rating in metres. A 5-bar watch is therefore roughly equivalent to a 40-metre watch. Some watches are rated in atmospheres (atm), which are roughly equivalent to bar. Navigation There is a traditional method by which an analog watch can be used to locate north and south. The Sun appears to move across the sky over a 24-hour period, while the hour hand of a 12-hour clock face takes twelve hours to complete one rotation; because the hour hand sweeps at twice the Sun's apparent rate, the direction halfway between the hour hand and 12 o'clock tracks the Sun's noon position. In the northern hemisphere, if the watch is rotated so that the hour hand points toward the Sun, the point halfway between the hour hand and 12 o'clock will indicate south. For this method to work in the southern hemisphere, the 12 is pointed toward the Sun, and the point halfway between the hour hand and 12 o'clock will indicate north. During daylight saving time, the same method can be employed using 1 o'clock instead of 12. The method is only accurate enough to be useful at fairly high latitudes.
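To make the bisection rule concrete, here is a minimal sketch in Python. It assumes local solar time on a 24-hour basis (which resolves the morning/afternoon ambiguity between the two bisectors) and models the Sun's azimuth as advancing a uniform 15 degrees per hour, the same flat approximation that limits the trick to fairly high latitudes. Subtract one hour first when daylight saving time is in effect; the function name and conventions are mine, for illustration only.

```python
def south_on_dial(hour: float, minute: float = 0.0) -> float:
    """Watch-compass rule (northern hemisphere): with the hour hand
    aimed at the Sun, return the direction of south as an angle in
    degrees clockwise from the 12 o'clock marker.

    `hour` is local solar time on a 24-hour basis; the extra
    morning/afternoon information picks the correct one of the two
    bisectors between the hour hand and 12.
    """
    h = hour + minute / 60.0
    # The Sun's azimuth advances roughly 15 degrees per hour and
    # points due south at solar noon; the hour hand advances 30
    # degrees per hour.  Working out where south lands on the dial
    # collapses to a single closed form:
    return (15.0 * h + 180.0) % 360.0

# Examples (solar time):
print(south_on_dial(12))  # 0.0   -- at noon, south is toward 12 (and the Sun)
print(south_on_dial(16))  # 60.0  -- halfway between the hand (at "4") and 12
print(south_on_dial(8))   # 300.0 -- toward the "10" marker, bisecting 8 and 12
```

Note how the afternoon case reduces to half the hour-hand angle, exactly the halving rule stated above, while the morning case lands on the bisector on the other side of the dial.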
Technology
Timekeeping
null
60900
https://en.wikipedia.org/wiki/Carp
Carp
The term carp (plural: carp) is a generic common name for numerous species of freshwater fish from the family Cyprinidae, a very large clade of ray-finned fish mostly native to Eurasia. While carp are a prized quarry and are valued (even commercially cultivated) as both food and ornamental fish in many parts of the Old World, they are considered trash fish and invasive pests in many parts of Africa, Australia and most of the United States. Biology The Cypriniformes (including the family Cyprinidae) are traditionally grouped with the Characiformes, Siluriformes, and Gymnotiformes to form the superorder Ostariophysi, since these groups share some common features. These features include being found predominantly in fresh water and possessing Weberian ossicles, an anatomical structure derived from the first five anterior-most vertebrae and their corresponding ribs and neural crests. The third anterior-most pair of ribs is in contact with the extension of the labyrinth, and the posterior pair with the swim bladder. The function is poorly understood, but this structure is presumed to take part in the transmission of vibrations from the swim bladder to the labyrinth and in the perception of sound, which would explain why the Ostariophysi have such a great capacity for hearing. Most cypriniformes have scales and teeth on the inferior pharyngeal bones, which may be modified in relation to the diet. Tribolodon is the only cyprinid genus which tolerates salt water. Several species move into brackish water but return to fresh water to spawn. All of the other cypriniformes live in continental waters and have a wide geographical range. Some consider all cyprinid fishes carp, and the family Cyprinidae itself is often known as the carp family. In colloquial use, carp usually refers only to several larger cyprinid species such as Cyprinus carpio (common carp), Carassius carassius (crucian carp), Ctenopharyngodon idella (grass carp), Hypophthalmichthys molitrix (silver carp), and Hypophthalmichthys nobilis (bighead carp). Common carp are native to both Eastern Europe and Western Asia, so they are sometimes called a "Eurasian" carp. Carp have long been an important food fish to humans. Several species, such as the various goldfish (Carassius auratus) breeds and the domesticated common carp variety known as koi (Cyprinus rubrofuscus var. "koi"), have been popular ornamental fishes. As a result, carp have been introduced to various locations, though with mixed results. Several species of carp are considered invasive species in the United States, and, worldwide, large sums of money are spent on carp control. At least some species of carp are able to survive for months with practically no oxygen (for example under ice or in stagnant, scummy water) by metabolizing glycogen to form lactic acid, which is then converted into ethanol and carbon dioxide; the ethanol diffuses into the surrounding water through the gills. Recreational fishing In 1653, Izaak Walton wrote in The Compleat Angler, "The Carp is the queen of rivers; a stately, a good, and a very subtle fish; that was not at first bred, nor hath been long in England, but is now naturalised." Carp are variable in terms of angling value. In Europe, even when not fished for food, they are eagerly sought by anglers, being considered highly prized coarse fish that are difficult to hook. The UK has a thriving carp angling market, with the British record carp standing at 68 lb 1 oz.
It is the fastest-growing angling market in the UK, and has spawned a number of specialised carp angling publications such as Carpology, Advanced Carp Fishing, Carpworld and Total Carp, and informative carp angling web sites such as Carpfishing UK and Carp Squad. In the United States, carp are classified as a rough fish, as well as a damaging naturalized exotic species, but one with sporting qualities. Carp have long suffered from a poor reputation in the United States as undesirable for angling or for the table, especially since they are typically an invasive species out-competing more desirable local game fish. Nonetheless, many states' departments of natural resources are beginning to view the carp as an angling fish instead of a maligned pest. Groups such as Wild Carp Companies, the American Carp Society, and the Carp Anglers Group promote the sport and work with fisheries departments to organize events that introduce and expose others to the unique opportunity the carp offers freshwater anglers. The common carp is one of the most abundant species of carp found in rivers, creeks, lakes, and ponds throughout the Midwest region of the United States. Common carp are particularly strong fish that fight hard on the end of anglers' lines, making them an appealing target for recreational fishermen. Since their introduction to the waters of the United States in the 1880s, these fish have been viewed as a game fish, despite being a destructive and invasive species. Aquaculture Various species of carp have been domesticated and reared as food fish across Europe and Asia for thousands of years. These various species appear to have been domesticated independently, as the various domesticated carp species are native to different parts of Eurasia. Aquaculture has been pursued in China for at least 2,400 years; a tract by Fan Li in the fifth century BC details many of the ways carp were raised in ponds. The common carp (Cyprinus carpio) is originally from Central Europe. Several carp species (collectively known as Asian carp) were domesticated in East Asia. Carp that are originally from South Asia, for example catla (Gibelion catla), rohu (Labeo rohita) and mrigal (Cirrhinus cirrhosus), are known as Indian carp. Their hardiness and adaptability have allowed domesticated species to be propagated all around the world. Although the carp was an important aquatic food item, as more fish species have become readily available for the table, the importance of carp culture in Western Europe has diminished. Demand has declined, partly due to the appearance of more desirable table fish such as trout and salmon through intensive farming, and partly due to environmental constraints. However, fish production in ponds is still a major form of aquaculture in Central and Eastern Europe, including the Russian Federation, where most of the production comes from low- or intermediate-intensity ponds. In Asia, the farmed volume of carp continues to surpass that of intensively sea-farmed species such as salmon and tuna. Breeding Selective breeding programs for the common carp include improvement in growth, shape, and resistance to disease. Experiments carried out in the USSR used crossings of broodstocks to increase genetic diversity, and then selected lines for traits such as growth rate, exterior traits, viability, and adaptation to environmental conditions such as variations in temperature. Carp selected for fast growth and tolerance to cold became known as the Ropsha carp.
The results showed a 30 to 77.4% improvement in cold tolerance, but did not provide any data for growth rate. An increase in growth rate was observed in the second generation in Vietnam. Moav and Wohlfarth (1976) showed positive results when selecting for slower growth for three generations, compared with selecting for faster growth. Schaperclaus (1962) showed resistance to dropsy disease, wherein selected lines suffered low mortality (11.5%) compared to unselected lines (57%). The major carp species used traditionally in Chinese aquaculture are the black, grass, silver and bighead carp. In the 1950s, the Pearl River Fishery Research Institute in China made a technological breakthrough in the induced breeding of these carps, which resulted in a rapid expansion of freshwater aquaculture in China. In the late 1990s, scientists at the Chinese Academy of Fishery Sciences developed a new variant of the common carp called the Jian carp (Cyprinus carpio var. Jian). This fish grows rapidly and has a high feed-conversion rate. Over 50% of the total aquaculture production of carp in China has now converted to Jian carp. As ornamental fish Carp, along with many of their cyprinid relatives, are popular ornamental aquarium and pond fish. Ornamental goldfish were originally domesticated from their wild form, a dark greyish-brown carp native to Asia, but may have been influenced by Carassius carassius and Carassius gibelio. They were first bred for color in China over a thousand years ago. Due to selective breeding, goldfish have been developed into many distinct breeds and are found in various colors, color patterns, forms and sizes far different from those of the original carp. Goldfish were kept as ornamental fish in China for thousands of years before being introduced to Japan in 1603 and to Europe in 1611. Nishikigoi, better known simply as koi, are domesticated varieties of common carp and Amur carp (Cyprinus rubrofuscus) that have been selectively bred for color. The common carp was introduced from China to Japan, where selective breeding in the 1820s in the Niigata region resulted in koi. In Japanese culture, koi are treated with affection and seen as good luck. They are popular in other parts of the world as outdoor pond fish. As food Bighead carp is enjoyed in many parts of the world, but it has not become a popular food fish in North America. Acceptance there has been hindered in part by the name "carp" and its association with the common carp, which is not a generally favored food fish in North America. The flesh of the bighead carp is white and firm, different from that of the common carp, which is darker and richer. Bighead carp flesh shares one similarity with common carp flesh – both have intramuscular bones within the fillet. However, bighead carp captured from the wild in the United States tend to be much larger than common carp, so the intramuscular bones are also larger and thus less problematic. Common carp, breaded and fried, is part of the traditional Christmas Eve dinner in Slovakia, Poland, the eastern part of Croatia and the Czech Republic. In pond-based aquaculture, it is treated as the most prominent food fish. Some recipes are specific to carp, such as sweet-and-sour carp and thick miso soup with carp. Crucian carp is considered the best-tasting pan fish in Poland, where it is traditionally served with sour cream.
In Russia, this particular species is known by a name meaning "golden crucian", and it is one of the fish used in certain borscht recipes. Mud carp, due to its low cost of production, is mainly consumed locally by the poor; it is mostly sold alive, but can be dried and salted. An important food fish in Guangdong Province, it is also cultured in this area and in Taiwan. Mud carp is sometimes canned or processed as fish cakes, fish balls, or dumplings, as used in Cantonese and Shunde cuisines. It can be combined with Chinese fermented black beans in a dish called fried dace with salted black beans, and can be served cooked with vegetables such as Chinese cabbage. Other carp dishes include fisherman's soup, kuai, taramosalata, masgouf (a popular Iraqi dish consisting of seasoned, grilled carp), and gefilte fish (an Ashkenazi Jewish dish made from a poached mixture of ground deboned fish, primarily carp, whitefish, and pike). Common carp in culture In Chinese literature A long tradition of the common carp exists in Chinese culture and literature. A popular lyric circulating as early as 2,000 years ago, in the late Han period, includes an anecdote relating how a man far from home sent his wife a pair of carp; when the wife opened the fish to cook them, she found a silk strip that carried a love note of just two lines: "Eat well to keep fit, missing you and forget me not". Jumping carp, a Chinese folk tale At the Yellow River in Henan is a waterfall called the Dragon Gate. It is said that if certain carp called yulong can climb the cataract, they will transform into dragons. Every year in the third month of spring, they swim up from the sea and gather in vast numbers in the pool at the foot of the falls. It used to be said that only 71 could make the climb in any year. When the first succeeded, then the rains would begin to fall. This Dragon Gate was said to have been created after the flood by the god-emperor Yu, who split a mountain blocking the path of the Yellow River. It was so famous that a common saying was heard throughout China: "a student facing his examinations is like a carp attempting to leap the Dragon Gate." Henan is not the only place where this happens: many other waterfalls in China also bear the name Dragon Gate, and much the same is said about them. Other famous Dragon Gates are on the Wei River where it passes through the Lung Sheu Mountains, and at Tsin in Shanxi Province. The fish's jumping feature is set in the proverbial idiom "Liyu (carp) jumps over the Dragon Gate", an idiom that conveys a vivid image symbolizing a sudden uplifting in one's social status, as when one ascends into the upper society or finds favor with the royal or a noble family, perhaps through marriage, but in particular through success in the imperial examination. It is therefore an idiom often used to encourage students or children to achieve success through hard work and perseverance. This symbolic image, as well as the image of the carp itself, has been one of the most popular themes in Chinese paintings, especially those of popular styles. The fish is usually colored in gold or pink, shimmering with an unmistakably auspicious tone. In Japanese culture The modern Japanese koi are a brightly colored variety of the Amur carp that has been bred by rice farmers in Japan since the early 19th century. This variety of carp plays a significant role in Japanese art, often being depicted as a symbol of luck, strength, and tenacity.
For this reason, koi are also presented as gifts in Japanese culture as symbols of love, gratitude, and peace. Their bright colors and unique patterns hold a high degree of elegance for the Japanese people, creating a level of respect and appreciation for the koi. With koi at the forefront of much Japanese art, it is common to find modern depictions of koi in paintings, home art, murals, and even tattoos. To many people, koi strongly represent samurai warriors, as they can be seen swimming upwards against a river's current, symbolizing a samurai's bravery. One typical saying is the phrase "koi no taki-nobori", translating to "carp climbing the waterfall", a phrase used to describe a person's strength and perseverance.
Biology and health sciences
Cypriniformes
null
60908
https://en.wikipedia.org/wiki/Common%20carp
Common carp
The common carp (Cyprinus carpio), also known as European carp or Eurasian carp, is a widespread freshwater fish of eutrophic waters in lakes and large rivers in Europe and Asia. The native wild populations are considered vulnerable to extinction by the International Union for Conservation of Nature (IUCN), but the species has also been domesticated and introduced (see aquaculture) into environments worldwide, and is often considered a destructive invasive species, being included in the list of the world's 100 worst invasive species. It gives its name to the carp family, Cyprinidae. Taxonomy The type subspecies is Cyprinus carpio carpio, native to much of Europe (notably the Danube and Volga rivers). The subspecies Cyprinus carpio haematopterus (Amur carp), native to eastern Asia, was recognized in the past, but recent authorities treat it as a separate species under the name Cyprinus rubrofuscus. The common carp and its various Asian relatives in their pure forms can be separated by meristics and also differ in genetics, but they are able to interbreed. Common carp can also interbreed with the goldfish (Carassius auratus); the result is called a Kollar carp. Another artificial hybrid is the ghost carp, a cross between the common carp and the Japanese Purachina (platinum) koi. The large variation of colours produced makes ghost carp a popular commercial species. History The common carp is native to Europe and Asia and has been introduced to every part of the world except the poles. It is the third most frequently introduced fish species worldwide, and its history as a farmed fish dates back to Roman times. Carp are used as food in many areas but are also regarded as a pest in several regions due to their ability to out-compete native fish stocks. The original common carp was found in the inland delta of the Danube River about 2,000 years ago and was torpedo-shaped and golden-yellow in colour. It had two pairs of barbels and a mesh-like scale pattern. Although this fish was initially kept as an exploited captive, it was later maintained in large, specially built ponds by the Romans in south-central Europe (verified by the discovery of common carp remains in excavated settlements in the Danube delta area). As aquaculture became a profitable branch of agriculture, efforts were made to farm the animals, and the culture systems soon included spawning and growing ponds. The common carp's native range also extends to the Black Sea, Caspian Sea, and Aral Sea. Both European and Asian subspecies have been domesticated. In Europe, domestication of carp as food fish was spread by monks between the 13th and 16th centuries. The wild forms of carp had already reached the delta of the Rhine in the 12th century, probably with some human help. Variants that have arisen with domestication include the mirror carp, with large, mirror-like scales (the linear mirror variant is scaleless except for a row of large scales along the lateral line; it originated in Germany), the leather carp (virtually unscaled except near the dorsal fin), and the fully scaled carp. Koi carp (錦鯉 (nishikigoi) in Japanese, 鯉魚 (pinyin: lĭ yú) in Chinese) is a domesticated ornamental variety that originated in the Niigata region of Japan in the 1820s, but its parent species are likely the East Asian carp, possibly C. rubrofuscus. Physiology and life history The carp has a robust build, with a dark gold sheen that is most prominent on its head. Its body is adorned with large, conspicuous scales that are very shiny.
It has large pectoral fins and a tapering dorsal fin running down the last two thirds of its body, getting progressively higher as it nears the carp's head. Its caudal and anal fins may be either a dark bronze or washed with a rubbery orange hue. There are two or three spines on the anal fin, the first being serrated, and the dorsal fin has three or four anterior spines, the first of which is also serrated. The mouth of the carp is downward-turned, with two pairs of barbels, one pair at the corners of the upper lip, and the other on the lower. Wild common carp are typically slimmer than domesticated forms, with body length about four times body height, red flesh, and a forward-protruding mouth. Common carp can grow to very large sizes if given adequate space and nutrients. Their average growth rate by weight is about half the growth rate of domesticated carp. They do not reach the lengths and weights of domesticated carp, whose body length is typically 3.2–4.8 times body height and which can grow to a maximum length of , a maximum weight of over . The longest-lived common carp documented was of wild origin (in a non-native habitat in North America), and was 64 years of age. The largest recorded carp, caught by British angler Colin Smith in 2013 at Etang La Saussaie Fishery, France, weighed . The average size of the common carp is around and . Habitat Although tolerant of most conditions, common carp prefer large bodies of slow or standing water and soft, vegetative sediments. As schooling fish, they prefer to be in groups of five or more. They naturally live in temperate climates in fresh or slightly brackish water with a pH of 6.5–9.0 and salinity up to about 0.5%, and temperatures of . The ideal temperature is , with spawning beginning at ; they easily survive winter in a frozen-over pond, as long as some free water remains below the ice. Carp are able to tolerate water with very low oxygen levels by gulping air at the surface. Diet Common carp are omnivorous. They can eat a herbivorous diet of aquatic plants, plant tubers, and seeds, but prefer to scavenge the bottom for insects, crustaceans (including zooplankton and crawfish), molluscs, benthic worms, fish eggs, and fish remains. Common carp feed throughout the day, with the most intensive feeding at night and around sunrise. Feeding mechanisms Common carp are benthic feeders and root in sediment for food items. Their barbels may help them feel for food embedded in the sediment, such as plant tubers or annelids. Carp pick up sediment by generating suction and mouth the contents to identify and select food items by taste and size. Gill rakers form a branchial sieve that may aid in food separation, but the carp is also able to clamp down on food items it detects using a muscular palatal pad and inferior postlingual organ. The sediment is passed back and forth between the mouth and pharynx repeatedly as food is found. The carp may end up spitting out sediment, which contributes to water turbidity. While common carp have no oral teeth, ten pharyngeal teeth are used for crushing or grinding food. The carp has no stomach, and the intestinal length can vary based partially on dietary composition in early life. Reproduction An egg-layer, a typical adult female can lay 300,000 eggs in a single spawn. Although carp typically spawn in the spring in response to rising water temperatures and rainfall, they can spawn multiple times in a season.
In commercial operations, spawning is often stimulated using a process called hypophysation, where lyophilized pituitary extract is injected into the fish. The pituitary extract contains gonadotropic hormones which stimulate gonad maturation and sex steroid production, ultimately promoting reproduction. Predation A single carp can lay over a million eggs in a year. Eggs and fry often fall victim to bacteria, fungi, and the vast array of tiny predators in the pond environment. Carp which survive to the juvenile stage are preyed upon by other fish such as the northern pike and largemouth bass, and several birds (including cormorants, herons, goosanders, and ospreys) and mammals (including otter and mink). Mirror carp Mirror carp, regionally known as Israeli carp, are a type of domesticated fish commonly found in Europe but widely introduced or cultivated elsewhere. They are a variety of the common carp (Cyprinus carpio) developed through selective breeding. The name "mirror carp" originates from their scales' resemblance to mirrors. Genetics The most striking difference between mirror and common carp is the presence of large mirror-like scales on the former. The mirror-scale phenotype is caused by a genetic mutation present at one of two scale trait loci, denoted by their S and N alleles, respectively. The genotype that produces a mirror scale phenotype is "ssnn" (all recessive), while wild-type carp may have either the SSnn or Ssnn genotype. The "S" locus has been identified as containing the gene encoding fibroblast growth factor receptor Fgfr1A1, which was duplicated during the course of carp evolution and consequently does not typically produce lethal phenotypes when only one locus is mutated. The "N" locus has not been identified, but is hypothesized to have bearing on the development of embryonic mesenchyme. Contrary to popular belief, a leather carp is not always a mirror carp without scales. Like mirror carp, leather, or "nude", carp are homozygous recessive at the "S" locus, but unlike mirror carp, true leather carp are heterozygous for a dominant mutant allele at the "N" locus (ssNn genotype). Leather carp also have reduced numbers of red blood cells and slower growth rates than scaled carp. Mirror carp from Hungarian and Asian stocks have been observed to have fewer pharyngeal teeth than scaled carp, while nude carp had fewer still. A population of mirror carp in Madagascar (where it is an invasive species) was found to have reverted to full scale cover after being introduced from France in the early twentieth century. The feral Malagasy carp still possessed large scales due to their mirror phenotype, but had increased scale coverage approaching that of wild-type carp. Hubert et al. (2016) found that the recessive allele at the "S" locus was still fixed in the population. They believe that the phenotypic reversion was due to compensation by quantitative trait loci as a result of a selective disadvantage for partial scaling in the wild, perhaps related to an impairment in parasite resistance. Introduction into other habitats Common carp have been introduced to most continents and some 59 countries. In the absence of natural predators or commercial fishing, they may extensively alter their environments due to their reproductive rate and their feeding habit of grubbing through bottom sediments for food. In feeding, they may destroy, uproot, disturb and eat submerged vegetation, causing serious damage to food sources and habitats of native ducks (such as canvasbacks) and fish populations.
In 2020, scientists demonstrated that a small proportion of fertilized common carp eggs ingested by waterfowl survive passing through the digestive tract and hatch after being retrieved from the feces. Birds exhibit strong preference for fish eggs, while cyprinids produce hundreds of thousands of eggs at a single spawning event. These data indicate that despite the low proportion of eggs surviving the digestive tract of birds, endozoochory might provide a potentially overlooked dispersal mechanism of invasive cyprinid fish. If proven under natural circumstances, endozoochorous dispersal of invasive fish could be a strong conservation concern for freshwater biodiversity. Australia Carp were introduced to Australia over 150 years ago but were not seen as a recognised pest species until the "Boolarra" strain appeared in the 1960s. After spreading massively through the Murray–Darling basin, aided by massive flooding in 1974, they have established themselves in every Australian territory except for the Northern Territory. In Victoria, the common carp has been declared a noxious fish species, and the quantity a fisherman can take is unlimited. In South Australia, it is an offence for this species to be released back to the wild. An Australian company produces plant fertilizer from carp. Efforts to eradicate a small colony from Lake Crescent in Tasmania, without using chemicals, have been successful, but the long-term, expensive and intensive undertaking is an example of both the possibility and difficulty of safely removing the species once it is established. One proposal, regarded as environmentally questionable, is to control common carp numbers by deliberately exposing them to the carp-specific koi herpes virus with its high mortality rate. In 2016, the Australian Government announced plans to release this virus into the Murray–Darling basin in an attempt to reduce the number of invasive common carp in the water system. However, in 2020, this plan was found to be unlikely to work. The CSIRO has also developed a technique for genetically modifying carp so that they only produce male offspring. This daughterless carp method shows promise for totally eradicating carp from Australia's waterways. North America Common carp were brought to the United States in 1831. In the late 19th century, they were distributed widely throughout the country by the government as a food fish, but they are now rarely eaten in the United States, where they are generally considered pests. As in Australia, their introduction has been shown to have negative environmental consequences. In Utah, the common carp's population in Utah Lake is expected to be reduced by 75 percent by using nets to catch millions of them, and either giving them to people who will eat them or processing them into fertilizer. This, in turn, will give the declining population of the native June sucker a chance to recover. Another method of control is to trap them with seine nets in tributaries they use to spawn, and exposing them to the piscicide rotenone. This method has been shown to reduce their impact within 24 hours and greatly increase native vegetation and desirable fish species. It also allows native fish to prey more easily on young carp. Common carp are thought to have been introduced into the Canadian province of British Columbia from the neighboring Washington state. They were first noted in the Okanagan Valley in 1912, as was their rapid growth in population. 
Carp are currently distributed in the lower Columbia (Arrow Lakes), lower Kootenay, Kettle (Christina Lake), and throughout the Okanagan system. Common carp aquaculture Common carp contributed around 4.67 million tons on a global scale during 2015–2016, roughly accounting for 7.4% of the total global inland fisheries production. In Europe, common carp contributed 1.8% (0.17 Mt) of the total inland fisheries production (9.42 Mt) during 2015–2016. It is a major farmed species in European freshwater aquaculture, with production localized in central and eastern European countries. The Russian Federation (0.06 Mt), followed by Poland (0.02 Mt), the Czech Republic (0.02 Mt), Hungary (0.01 Mt) and Ukraine (0.01 Mt), together accounted for about 70% of carp production in Europe during 2016. In fact, the landlocked central European countries rely heavily on common carp aquaculture in fishponds. The average productivity of carp culture systems in central European countries ranges between 0.3 and 1 ton per hectare. European common carp production, in terms of volume, reached its peak (0.18 Mt) during 2009–2010 and has been declining since. Carp farming is often criticized as an anthropogenic driver of eutrophication of inland freshwater bodies, especially in the Central Eastern European Region (CEER). There has been some debate between environmentalists and carp farmers concerning the eutrophication of water bodies, which has manifested in lobbying at ministry level over fishpond legislation. European carp aquaculture in fish ponds most likely has a lower nutrient burden on the environment than most food production sectors in the European Union. As food and sport Breeding and Fishing The Romans farmed carp, and this pond culture continued through the monasteries of Europe and to this day. In China, Korea, and Japan, carp farming took place as early as the Yayoi period (c. 300 BC – AD 300). The annual tonnage of common carp produced in China alone, not to mention the other cyprinids, exceeds the weight of all other fish, such as trout and salmon, produced by aquaculture worldwide. Roughly three million tonnes are produced annually, accounting for 14% of all farmed freshwater fish in 2002. China is by far the largest commercial producer, accounting for about 70% of carp production. Carp is eaten in many parts of the world, both when caught from the wild and when raised in aquaculture. Common carp are extremely popular with anglers in many parts of Europe, and their popularity as quarry is slowly increasing among anglers in the United States (though they are still generally considered pests and destroyed in most areas of the U.S.) and southern Canada. Carp are also popular with spear, bow, and fly fishermen. Food culture In Central Europe, it is a traditional part of a Christmas Eve dinner. Hungarian fisherman's soup, a specially prepared fish soup of carp alone or mixed with other freshwater fish, is part of the traditional meal for Christmas Eve in Hungary, along with stuffed cabbage and poppy seed roll and walnut roll. A traditional Czech Christmas Eve dinner is a thick soup of carp's head and offal, fried carp meat (sometimes the meat is skinned and baked instead) with potato salad, or boiled carp in black sauce. A Slovak Christmas Eve dinner is quite similar, with soup varying according to the region and fried carp as the main dish. Also in Austria, parts of Germany, and Poland, a fried carp is one of the traditional dishes on Christmas Eve. Carp are mixed with other common fish to make gefilte fish, popular in Jewish cuisine.
In Western Europe, the carp is cultivated more commonly as a sport fish, although there is a small market for it as a food fish. In the United States, carp is mostly ignored as a food fish. Almost all U.S. shoppers bypass carp due to a preference for filleted fish over fish cooked whole. Carp have small intramuscular bones called Y-bones, which make them difficult to fillet and better suited to cooking whole. When the Eurasian carp was introduced to Lake Toba in Sumatra, it was adopted for use in the traditional dish arsik.
Biology and health sciences
Cypriniformes
Animals
60951
https://en.wikipedia.org/wiki/Cardamom
Cardamom
Cardamom, sometimes cardamon or cardamum, is a spice made from the seeds of several plants in the genera Elettaria and Amomum in the family Zingiberaceae. Both genera are native to the Indian subcontinent and Indonesia. They are recognized by their small seed pods: triangular in cross-section and spindle-shaped, with a thin, papery outer shell and small, black seeds; Elettaria pods are light green and smaller, while Amomum pods are larger and dark brown. Species used for cardamom are native throughout tropical and subtropical Asia. The first references to cardamom are found in Sumer, and in Ayurveda. In the 21st century, it is cultivated mainly in India, Indonesia, and Guatemala. Etymology The word cardamom is derived from the Latin cardamomum, a Latinisation of the Greek καρδάμωμον (kardamōmon), a compound of κάρδαμον (kardamon, "cress") and ἄμωμον (amōmon), of unknown origin. The earliest attested form of the word signifying "cress" is the Mycenaean Greek ka-da-mi-ja, written in Linear B syllabic script, in the list of flavorings on the spice tablets found among palace archives in the House of the Sphinxes in Mycenae. The modern genus name Elettaria is derived from a root attested in Dravidian languages. Types and distribution The two main types of cardamom are: True or green cardamom (or white cardamom when bleached) comes from the species Elettaria cardamomum and is distributed from India to Malaysia. What is often referred to as white cardamom is actually Siam cardamom, Amomum krervanh. Black cardamom, also known as brown, greater, large, longer, or Nepal cardamom, comes from the species Amomum subulatum and is native to the eastern Himalayas and mostly cultivated in Eastern Nepal, Sikkim, and parts of Darjeeling district in West Bengal of India, and southern Bhutan. Uses Both forms of cardamom are used as flavorings and cooking spices in both food and drink. E. cardamomum (green cardamom) is used as a spice or a masticatory, or is smoked. Food and beverage Cardamom has a strong taste, with an aromatic, resinous fragrance. Black cardamom has a more smoky – though not bitter – aroma, with a coolness some consider similar to mint. Green cardamom is one of the most expensive spices by weight, but little is needed to impart flavor. It is best stored in the pod, as exposed or ground seeds quickly lose their flavor. Grinding the pods and seeds together lowers both the quality and the price. For recipes requiring whole cardamom pods, a generally accepted equivalent is 10 pods to about 1½ teaspoons (7.4 ml) of ground cardamom. Cardamom is a common ingredient in Indian cooking. It is also often used in baking in the Nordic countries, in particular in Sweden, Norway, and Finland, where it is used in traditional treats such as Scandinavian Yule bread, Swedish sweet buns, and Finnish sweet bread. In the Middle East, green cardamom powder is used as a spice for sweet dishes, and as a traditional flavouring in coffee and tea. Cardamom is also used widely in savoury dishes. In some Middle Eastern countries, coffee and cardamom are often ground together in a wooden mortar and cooked together in a skillet, over wood or gas, to produce mixtures with up to 40% cardamom. In Asia, both types of cardamom are widely used in both sweet and savoury dishes, particularly in the south. Both are frequent components in spice mixes such as Indian and Nepali masalas and Thai curry pastes. Green cardamom is often used in traditional Indian sweets and in masala chai (spiced tea). Both are also often used as a garnish in basmati rice and other dishes.
Individual seeds are sometimes chewed and used in much the same way as chewing gum. It is used by confectionery giant Wrigley; its Eclipse Breeze Exotic Mint packaging indicates the product contains "cardamom to neutralize the toughest breath odors". It is also included in aromatic bitters, gin, and herbal teas. In Korea, Tavoy cardamom (Wurfbainia villosa var. xanthioides) and red cardamom (Lanxangia tsao-ko) are used in a traditional tea. Composition The essential oil content of cardamom seeds depends on storage conditions and may be as high as 8%. The oil is typically 45% α-terpineol, 27% myrcene, 8% limonene, 6% menthone, 3% β-phellandrene, 2% 1,8-cineol, 2% sabinene and 2% heptane. Other sources report the following contents: 1,8-cineol (20 to 50%), α-terpenylacetate (30%), sabinene, limonene (2 to 14%), and borneol. In the seeds of round cardamom from Java (Wurfbainia compacta), the content of essential oil is lower (2 to 4%), and the oil contains mainly 1,8-cineol (up to 70%) plus β-pinene (16%); furthermore, α-pinene, α-terpineol and humulene are found. Production In 2022, world production of cardamom (included with nutmeg and mace for reporting to the United Nations) was 138,888 tonnes, led by India, Indonesia and Guatemala, which together accounted for 85% of the total. Production practices According to Nair (2011), in the years when India achieves a good crop, it is still less productive than Guatemala. Other notable producers include Costa Rica, El Salvador, Honduras, Papua New Guinea, Sri Lanka, Tanzania, Thailand, and Vietnam. Much production of cardamom in India is cultivated on private property or in areas the government leases out to farmers. Traditionally, small plots of land within the forests (called eld-kandies) where the wild or acclimatised plant existed are cleared during February and March. Brushwood is cut and burned, and the roots of powerful weeds are torn up to free the soil. Soon after clearing, cardamom plants spring up. After two years the cardamom plants may have eight to ten leaves and reach in height. In the third year, they may be in height. In the following May or June the ground is again weeded, and by September to November a light crop is obtained. In the fourth year, weeding again occurs, and if the cardamoms grow less than apart, a few are transplanted to new positions. The plants bear for three or four years, and historically the life of each plantation was about eight or nine years. In Malabar the seasons run a little later than in Mysore, and – according to some reports – a full crop may be obtained in the third year. Cardamoms grown at higher elevations are considered to be of higher quality than those grown lower. Plants may be raised from seed or by division of the rhizome. In about a year, the seedlings reach about in length, and are ready for transplantation. The flowering season is April to May; the fruits swell in August and September and usually attain the desired degree of ripeness by the first half of October. The crop is accordingly gathered in October and November, and in exceptionally moist weather the harvest extends into December. At the time of harvesting, the scapes or shoots bearing the clusters of fruits are broken off close to the stems and placed in baskets lined with fresh leaves. The fruits are spread out on carefully prepared floors, sometimes covered with mats, and are then exposed to the sun. Four or five days of careful drying and bleaching in the sun is usually enough.
In rainy weather, drying with artificial heat is necessary, though the fruits suffer greatly in colour; they are consequently sometimes bleached with steam and sulphurous vapour or with ritha nuts. The industry is highly labour-intensive, each hectare requiring considerable maintenance throughout the year. Production constraints include recurring climate vagaries, the absence of regular re-plantation, and ecological conditions associated with deforestation. Cultivation In 1873 and 1874, Ceylon (now Sri Lanka) exported about each year. In 1877, Ceylon exported , in 1879, , and in the 1881–82 season, . In 1903, of cardamom-growing areas were owned by European planters. The produce of the Travancore plantations was given as , or just a little under that of Ceylon. The yield of the Mysore plantations was approximately , and the cultivation was mainly in Kadur district. The returns for 1903–04 put the value of the cardamoms exported at Rs. 3,37,000, as compared with Rs. 4,16,000 the previous year. India, which ranks second in world production, recorded a decline of 6.7 percent in cardamom production for 2012–13, and projected a production decline of 30–40% in 2013–14, compared with the previous year, due to unfavorable weather. In India, the state of Kerala is by far the largest producer, with the districts of Idukki, Palakkad and Wynad being the principal producing areas. Given that a number of bureaucrats have personal interests in the industry, several organisations have been set up in India to protect cardamom producers, such as the Cardamom Growers Association (est. 1992) and the Kerala Cardamom Growers Association (est. 1974). Research in India's cardamom plantations began in the 1970s, while Kizhekethil Chandy held the office of Chairman of the Cardamom Board. The Kerala Land Reforms Act imposed restrictions on the size of certain agricultural holdings per household, to the benefit of cardamom producers. In 1979–1980, Guatemala surpassed India in worldwide production. Guatemala cultivates Elettaria cardamomum, which is native to the Malabar Coast of India. Alta Verapaz Department produces 70 percent of Guatemala's cardamom. Cardamom was introduced to Guatemala before World War I by the German coffee planter Oscar Majus Kloeffer. After World War II, production was increased to 13,000 to 14,000 tons annually. The average annual income for a plantation-owning household in 1998 was US$3,408. Although the typical harvest requires over 210 days of labor per year, most cardamom farmers are better off than many other agricultural workers, and a significant number of people from the upper strata of society are involved in the cultivation process. Increased demand since the 1980s, principally from China, for both Wurfbainia villosa and Lanxangia tsao-ko, has provided a key source of income for poor farmers living at higher altitudes in localized areas of China, Laos, and Vietnam, people typically isolated from many other markets. Laos exports about 400 tonnes annually through Thailand, according to the FAO. Trade Demand and supply patterns in the cardamom trade are influenced by price movements, national and international, in 5-to-6-year cycles. The leading importers are Saudi Arabia and Kuwait, while other significant importers include Germany, Iran, Japan, Jordan, Pakistan, Qatar, United Arab Emirates, the UK, and the former USSR.
According to the United Nations Conference on Trade and Development, 80 percent of cardamom's total consumption occurs in the Middle East. In the 19th century, Bombay and Madras were among the principal distributing ports of cardamom. India's exports to foreign countries increased during the early 20th century, particularly to the United Kingdom, followed by Arabia, Aden, Germany, Turkey, Japan, Persia and Egypt. However, some 95% of cardamom produced in India is for domestic purposes, and India is itself by far the most important consuming country for cardamoms in the world. India also imports cardamom from Sri Lanka. In 1903–1904, these imports came to , valued at Rs. 1,98,710. In contrast, Guatemala's local consumption is negligible, which supports the exportation of most of the cardamom that is produced. In the mid-1800s, Ceylon's cardamom was chiefly imported by Canada. After saffron and vanilla, cardamom is currently the third most expensive spice, and is used as a spice and flavouring for food and liqueurs. History Cardamom has been used in flavorings and food for centuries. During the Middle Ages, cardamom was a dominant commodity in the spice trade. The Arab states played a significant role in the trade of Indian spices, including cardamom. Cardamom production began in ancient times, and the spice is referred to in ancient Sanskrit texts. The Babylonians and Assyrians used the spice early on, and trade in cardamom opened up into western Asia and the Mediterranean world along land routes and via the interlinked Persian Gulf route controlled from Dilmun, as early as the Early Bronze Age in the third millennium BCE. The ancient Greeks thought highly of cardamom, and the Greek physicians Dioscorides and Hippocrates wrote about its therapeutic properties, identifying it as a digestive aid. Due to demand in ancient Greece and Rome, the cardamom trade developed into a handsome luxury business; cardamom was one of the spices eligible for import tax in Alexandria in 126 CE. In medieval times, Venice became the principal importer of cardamom into the west, along with pepper, cloves and cinnamon, which were obtained from merchants of the Levant in exchange for salt and meat products. In China, Amomum was an important part of the economy during the Song Dynasty (960–1279). In 1150, the Arab geographer Muhammad al-Idrisi noted that cardamom was being imported to Aden, in Yemen, from India and China. The Portuguese became involved in the trade in the 16th century, and the industry gained wide-scale European interest in the 19th century.
Biology and health sciences
Monocots
null
60980
https://en.wikipedia.org/wiki/Digital%20Visual%20Interface
Digital Visual Interface
Digital Visual Interface (DVI) is a video display interface developed by the Digital Display Working Group (DDWG). The digital interface is used to connect a video source, such as a video display controller, to a display device, such as a computer monitor. It was developed with the intention of creating an industry standard for the transfer of uncompressed digital video content. DVI devices manufactured as DVI-I have support for analog connections, and are compatible with the analog VGA interface by including VGA pins, while DVI-D devices are digital-only. This compatibility, along with other advantages, led to its widespread acceptance over competing digital display standards Plug and Display (P&D) and Digital Flat Panel (DFP). Although DVI is predominantly associated with computers, it is sometimes used in other consumer electronics such as television sets and DVD players. History An earlier attempt to promulgate an updated standard to the analog VGA connector was made by the Video Electronics Standards Association (VESA) in 1994 and 1995, with the Enhanced Video Connector (EVC), which was intended to consolidate cables between the computer and monitor. EVC used a 35-pin Molex MicroCross connector and carried analog video (input and output), analog stereo audio (input and output), and data (via USB and FireWire). At the same time, with the increasing availability of digital flat-panel displays, the priority shifted to digital video transmission, which would remove the extra analog/digital conversion steps required for VGA and EVC; the EVC connector was reused by VESA, which released the Plug & Display (P&D) standard in 1997. P&D offered single-link TMDS digital video with, as an option, analog video output and data (USB and FireWire), using a 35-pin MicroCross connector similar to EVC; the analog audio and video input lines from EVC were repurposed to carry digital video for P&D. Because P&D was a physically large, expensive connector, a consortium of companies developed the DFP standard (1999), which was focused solely on digital video transmission using a 20-pin micro ribbon connector and omitted the analog video and data capabilities of P&D. DVI instead chose to strip just the data functions from P&D, using a 29-pin MicroCross connector to carry digital and analog video. Critically, DVI allows dual-link TMDS signals, meaning it supports higher resolutions than the single-link P&D and DFP connectors, which led to its successful adoption as an industry standard. Compatibility of DVI with P&D and DFP is accomplished typically through passive adapters that provide appropriate physical interfaces, as all three standards use the same DDC/EDID handshaking protocols and TMDS digital video signals. DVI made its way into products starting in 1999. One of the first DVI monitors was Apple's original Cinema Display, which launched in 1999. Technical overview DVI's digital video transmission format is based on panelLink, a serial format developed by Silicon Image that utilizes a high-speed serial link called transition minimized differential signaling (TMDS). TMDS Digital video pixel data is transported using multiple TMDS twisted pairs. At the electrical level, these pairs are highly resistant to electrical noise and other forms of analog distortion. Single link A single link DVI connection has four TMDS pairs. Three data pairs carry their designated 8-bit RGB component (red, green, or blue) of the video signal for a total of 24 bits per pixel. The fourth pair carries the TMDS clock. 
The binary data is encoded using 8b/10b encoding. DVI does not use packetization, but rather transmits the pixel data as if it were a rasterized analog video signal. As such, the complete frame is drawn during each vertical refresh period. The full active area of each frame is always transmitted without compression. Video modes typically use horizontal and vertical refresh timings that are compatible with cathode-ray tube (CRT) displays, though this is not a requirement. In single link mode, the maximum TMDS clock frequency is 165 MHz, which supports a maximum resolution of 2.75 megapixels (including blanking interval) at 60 Hz refresh. For practical purposes, this allows a maximum 16:10 screen resolution of 1920 × 1200 at 60 Hz. Dual link To support higher-resolution display devices, the DVI specification contains a provision for dual link. Dual link DVI doubles the number of TMDS data pairs, effectively doubling the video bandwidth, which allows higher resolutions up to 2560 × 1600 at 60 Hz or higher refresh rates for lower resolutions. Compatibility For backward compatibility with displays using analog VGA signals, some of the contacts in the DVI connector carry the analog VGA signals. To ensure a basic level of interoperability, DVI compliant devices are required to support one baseline display mode, "low pixel format" (640 × 480 at 60 Hz). DDC Like modern analog VGA connectors, the DVI connector includes pins for the display data channel (DDC), which allows the graphics adapter to read the monitor's extended display identification data (EDID). When a source and display using the DDC2 revision are connected, the source first queries the display's capabilities by reading the monitor EDID block over an I²C link. The EDID block contains the display's identification, color characteristics (such as gamma value), and table of supported video modes. The table can designate a preferred mode or native resolution. Each mode is a set of timing values that define the duration and frequency of the horizontal/vertical sync, the positioning of the active display area, the horizontal resolution, vertical resolution, and refresh rate. Cable length The maximum length recommended for DVI cables is not included in the specification, since it is dependent on the TMDS clock frequency. In general, cable lengths up to will work for display resolutions up to 1920 × 1200. Longer cables up to in length can be used with display resolutions 1280 × 1024 or lower. For greater distances, the use of a DVI booster—a signal repeater which may use an external power supply—is recommended to help mitigate signal degradation. Connector The DVI connector on a device is given one of three names, depending on which signals it implements: DVI-I (integrated, combines digital and analog in the same connector; digital may be single or dual link) DVI-D (digital only, single link or dual link) DVI-A (analog only) Most DVI connector types—the exception is DVI-A—have pins that pass digital video signals. These come in two varieties: single link and dual link. Single link DVI employs a single transmitter with a TMDS clock up to 165 MHz that supports resolutions up to 1920 × 1200 at 60 Hz. Dual link DVI adds six pins, at the center of the connector, for a second transmitter increasing the bandwidth and supporting resolutions up to 2560 × 1600 at 60 Hz. A connector with these additional pins is sometimes referred to as DVI-DL (dual link). 
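To make the single- and dual-link limits above concrete, the following minimal Python sketch computes the TMDS clock a video mode requires and decides which link type it needs. It is an illustration only, not part of the DVI specification; the function names are invented here, and the horizontal/vertical totals in the examples are the standard CVT-RB reduced-blanking figures.

SINGLE_LINK_MAX_MHZ = 165.0  # single-link TMDS clock ceiling per the DVI spec

def required_tmds_clock_mhz(h_total, v_total, refresh_hz, pixels_per_clock=1):
    # Totals include the blanking interval; dual link moves 2 pixels per clock.
    return h_total * v_total * refresh_hz / pixels_per_clock / 1e6

def link_needed(h_total, v_total, refresh_hz):
    single = required_tmds_clock_mhz(h_total, v_total, refresh_hz)
    if single <= SINGLE_LINK_MAX_MHZ:
        return "single link (%.1f MHz TMDS clock)" % single
    dual = required_tmds_clock_mhz(h_total, v_total, refresh_hz, pixels_per_clock=2)
    return "dual link (%.1f MHz TMDS clock, 2 pixels per clock)" % dual

# 1920 x 1200 @ 60 Hz, CVT-RB totals 2080 x 1235 -> ~154 MHz, fits single link.
print(link_needed(2080, 1235, 60))
# 2560 x 1600 @ 60 Hz, CVT-RB totals 2720 x 1646 -> ~269 MHz, needs dual link.
print(link_needed(2720, 1646, 60))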
Dual link should not be confused with dual display (also known as dual head), which is a configuration consisting of a single computer connected to two monitors, sometimes using a DMS-59 connector for two single link DVI connections. In addition to digital, some DVI connectors also have pins that pass an analog signal, which can be used to connect an analog monitor. The analog pins are the four that surround the flat blade on a DVI-I or DVI-A connector. A VGA monitor, for example, can be connected to a video source with DVI-I through the use of a passive adapter. Since the analog pins are directly compatible with VGA signaling, passive adapters are simple and cheap to produce, providing a cost-effective solution to support VGA on DVI. The long flat pin on a DVI-I connector is wider than the same pin on a DVI-D connector, so even if the four analog pins were manually removed, it still wouldn't be possible to connect a male DVI-I to a female DVI-D. It is possible, however, to join a male DVI-D connector with a female DVI-I connector. DVI is the only widespread video standard that includes analog and digital transmission in the same connector. Competing standards are exclusively digital: these include a system using low-voltage differential signaling (LVDS), known by its proprietary names FPD-Link (flat-panel display) and FLATLINK; and its successors, the LVDS Display Interface (LDI) and OpenLDI. Some DVD players, HDTV sets, and video projectors have DVI connectors that transmit an encrypted signal for copy protection using the High-bandwidth Digital Content Protection (HDCP) protocol. Computers can be connected to HDTV sets over DVI, but the graphics card must support HDCP to play content protected by digital rights management (DRM). Specifications Digital Minimum TMDS clock frequency: 25.175 MHz Used for the mandatory "low pixel format" display mode: VGA (640x480) @ 60 Hz Maximum single link TMDS clock frequency: 165 MHz Single link maximum gross bit rate (including 8b/10b overhead): 4.95 Gbit/s Net bit rate (subtracting 8b/10b overhead): 3.96 Gbit/s Dual link bit rates are twice that of single link at an identical clock frequency. Gross bit rate (Including 8b/10b overhead) at a 165 MHz clock: 9.90 Gbit/s. Net bit rate (subtracting 8b/10b overhead): 7.92 Gbit/s Clocks above 165 MHz are allowed in dual link mode Bits per pixel: 24 bits per pixel support is mandatory in all resolutions supported. Less than 24 bits per pixel is optional. Dual link optionally supports up to 48 bits per pixel. If a depth greater than 24 bits per pixel is desired, the least significant bits are sent on the second link. 
Pixels per TMDS clock cycle: 1 (single link at 24 bits or less per pixel, and dual link for 25 to 48 bits per pixel) or 2 (dual link at 24 bits or less per pixel) Example display modes (single link): SXGA (1280 × 1024) @ 85 Hz with GTF blanking (159 MHz TMDS clock) FHD (1920 × 1080) @ 60 Hz with CVT-RB blanking (139 MHz TMDS clock) UXGA (1600 × 1200) @ 60 Hz with GTF blanking (161 MHz TMDS clock) WUXGA (1920 × 1200) @ 60 Hz with CVT-RB blanking (154 MHz TMDS clock) WQXGA (2560 × 1600) @ 30 Hz with CVT-RB blanking (132 MHz TMDS clock) Example display modes (dual link): QXGA (2048 × 1536) @ 72 Hz with CVT blanking (2 pixels per 163 MHz TMDS clock) FHD (1920 × 1080) @ 144 Hz WUXGA (1920 × 1200) @ 120 Hz with CVT-RB blanking (2 pixels per 154 MHz TMDS clock) WQXGA (2560 × 1600) @ 60 Hz with CVT-RB blanking (2 pixels per 135 MHz TMDS clock) WQUXGA (3840 × 2400) @ 30 Hz with CVT-RB blanking (2 pixels per 146 MHz TMDS clock) Generalized Timing Formula (GTF) is a VESA standard whose timings can easily be calculated with the Linux gtf utility. Coordinated Video Timings-Reduced Blanking (CVT-RB) is a VESA standard which offers reduced horizontal and vertical blanking for non-CRT-based displays. Digital data encoding One of the purposes of DVI stream encoding is to provide a DC-balanced output that reduces decoding errors. This goal is achieved by using 10-bit symbols for characters of 8 bits or fewer and using the extra bits for DC balancing. Like other ways of transmitting video, there are two different regions: the active region, where pixel data is sent, and the control region, where synchronization signals are sent. The active region is encoded using transition-minimized differential signaling, while the control region is encoded with a fixed 8b/10b encoding. As the two schemes yield different 10-bit symbols, a receiver can fully differentiate between active and control regions. When DVI was designed, most computer monitors were still of the cathode-ray tube type, which requires analog video synchronization signals. The timing of the digital synchronization signals matches the equivalent analog ones, so the process of transforming DVI to and from an analog signal does not require extra (high-speed) memory, which was expensive at the time. HDCP is an extra layer that transforms the 10-bit symbols before transmission. Only after correct authorization can the receiver undo the HDCP encryption. Control regions are not encrypted in order to let the receiver know when the active region starts. Clock and data relationship DVI provides one TMDS clock pair and 3 TMDS data pairs in single link mode, or 6 TMDS data pairs in dual link mode. TMDS data pairs operate at a gross bit rate that is 10 times the frequency of the TMDS clock. In each TMDS clock period there is a 10-bit symbol per TMDS data pair representing 8 bits of pixel color. In single link mode each set of three 10-bit symbols represents one 24-bit pixel, while in dual link mode each set of six 10-bit symbols represents either two 24-bit pixels or one pixel of up to 48-bit color depth. The specification document allows the data and the clock to be unaligned. However, as the ratio between the TMDS clock and the gross bit rate per TMDS pair is fixed at 1:10, the unknown alignment is kept over time. The receiver must recover the bits on the stream using any of the techniques of clock/data recovery to find the correct symbol boundary. The DVI specification allows the TMDS clock to vary between 25 MHz and 165 MHz. This 1:6.6 ratio can make clock recovery difficult, as phase-locked loops, if used, need to work over a large frequency range.
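The clock-to-bit-rate relationship described above is simple enough to check numerically. The short Python sketch below reproduces the specification figures quoted earlier (4.95/3.96 Gbit/s gross/net for single link, 9.90/7.92 Gbit/s for dual link); the function name is illustrative, not from any DVI library.

def dvi_bit_rates_gbps(tmds_clock_mhz, data_pairs):
    # Each data pair carries one 10-bit symbol per TMDS clock period,
    # so the gross rate per pair is 10x the clock frequency; 8 of every
    # 10 bits are pixel data, giving the net rate.
    gross = tmds_clock_mhz * 1e6 * 10 * data_pairs / 1e9
    net = gross * 8 / 10
    return gross, net

print(dvi_bit_rates_gbps(165, 3))  # single link: (4.95, 3.96)
print(dvi_bit_rates_gbps(165, 6))  # dual link:   (9.9, 7.92)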
One benefit of DVI over other interfaces is that it is relatively straightforward to transform the signal from the digital domain into the analog domain using a video DAC, as both clock and synchronization signals are transmitted. Fixed frequency interfaces, like DisplayPort, need to reconstruct the clock from the transmitted data. Display power management The DVI specification includes signaling for reducing power consumption. Similar to the analog VESA display power management signaling (DPMS) standard, a connected device can turn a monitor off when the connected device is powered down, or programmatically if the display controller of the device supports it. Devices with this capability can also attain Energy Star certification. Analog The analog section of the DVI specification document is brief and points to other specifications like VESA VSIS for electrical characteristics and GTFS for timing information. The motivation for including analog is to keep compatibility with the previous VGA cables and connectors. VGA pins for HSync, Vsync and three video channels are available in both DVI-I or DVI-A (but not DVI-D) connectors and are electrically compatible, while pins for DDC (clock and data) and 5 V power and ground are kept in all DVI connectors. Thus, a passive adapter can interface between DVI-I or DVI-A (but not DVI-D) and VGA connectors. DVI and HDMI compatibility HDMI is a newer digital audio/video interface developed and promoted by the consumer electronics industry. DVI and HDMI have the same electrical specifications for their TMDS and VESA/DDC twisted pairs. However HDMI and DVI differ in several key ways. HDMI lacks VGA compatibility and does not include analog signals. DVI is limited to the RGB color model while HDMI also supports YCbCr 4:4:4 and YCbCr 4:2:2 color spaces, which are generally not used for computer graphics. In addition to digital video, HDMI supports the transport of packets used for digital audio. HDMI sources differentiate between legacy DVI displays and HDMI-capable displays by reading the display's EDID block. To promote interoperability between DVI-D and HDMI devices, HDMI source components and displays support DVI-D signaling. For example, an HDMI display can be driven by a DVI-D source because HDMI and DVI-D both define an overlapping minimum set of supported resolutions and frame buffer formats. Some DVI-D sources use non-standard extensions to output HDMI signals including audio (e.g. ATI 3000-series and NVIDIA GTX 200-series). Some multimedia displays use a DVI to HDMI adapter to input the HDMI signal with audio. Exact capabilities vary by video card specifications. In the reverse scenario, a DVI display that lacks optional support for HDCP might be unable to display protected content even though it is otherwise compatible with the HDMI source. Features specific to HDMI such as remote control, audio transport, xvYCC and deep color are not usable in devices that support only DVI signals. HDCP compatibility between source and destination devices is subject to manufacturer specifications for each device. Proposed successors IEEE 1394 was proposed by High-Definition Audio-Video Network Alliance (HANA Alliance) for all cabling needs, including video, over coaxial or 1394 cable as a combined data stream. However, this interface does not have enough throughput to handle uncompressed HD video, so it is unsuitable for applications such as video games and interactive program guides. 
High-Definition Multimedia Interface (HDMI), a forward-compatible standard that also includes digital audio transmission Unified Display Interface (UDI) was proposed by Intel to replace both DVI and HDMI, but was deprecated in favor of DisplayPort. DisplayPort (a license-free standard proposed by VESA to succeed DVI that has optional DRM mechanisms) / Mini DisplayPort Thunderbolt: an interface that uses the USB-C connector (from Thunderbolt 3 and onward; the Mini DisplayPort connector was used for Thunderbolt 1 and 2) but combines PCI Express (PCIe) and DisplayPort (DP) into one serial signal, permitting the connection of PCIe devices in addition to video displays. It provides DC power as well. In December 2010, Intel, AMD, and several computer and display manufacturers announced they would stop supporting DVI-I, VGA and LVDS-technologies from 2013/2015, and instead speed up adoption of DisplayPort and HDMI. They also stated: "Legacy interfaces such as VGA, DVI and LVDS have not kept pace, and newer standards such as DisplayPort and HDMI clearly provide the best connectivity options moving forward. In our opinion, DisplayPort 1.2 is the future interface for PC monitors, along with HDMI 1.4a for TV connectivity".
Technology
User interface
null
60996
https://en.wikipedia.org/wiki/Firefly
Firefly
The Lampyridae are a family of elateroid beetles with more than 2,000 described species, many of which are light-emitting. They are soft-bodied beetles commonly called fireflies, lightning bugs, or glowworms for their conspicuous production of light, mainly during twilight, to attract mates. The type species is Lampyris noctiluca: the common glow-worm of Europe. Light production in the Lampyridae is thought to have originated as a warning signal that the larvae were distasteful. This ability to create light was then co-opted as a mating signal and, in a further development, adult female fireflies of the genus Photuris mimic the flash pattern of the Photinus beetle to trap their males as prey. Fireflies are found in temperate and tropical climates. Many live in marshes or in wet, wooded areas where their larvae have abundant sources of food. While all known fireflies glow as larvae, only some species produce light in their adult stage, and the location of the light organ varies among species and between sexes of the same species. Fireflies have attracted human attention since classical antiquity; their presence has been taken to signify a wide variety of conditions in different cultures and is especially appreciated aesthetically in Japan, where parks are set aside for this specific purpose. Biology Fireflies are beetles and in many aspects resemble other beetles at all stages of their life cycle, undergoing complete metamorphosis. A few days after mating, a female lays her fertilized eggs on or just below the surface of the ground. The eggs hatch three to four weeks later. In certain firefly species with aquatic larvae, such as Aquatica leii, the female oviposits on emergent portions of aquatic plants, and the larvae descend into the water after hatching. The larvae feed until the end of the summer. Most fireflies hibernate as larvae. Some do this by burrowing underground, while others find places on or under the bark of trees. They emerge in the spring. At least one species, Ellychnia corrusca, overwinters as an adult. The larvae of most species are specialized predators and feed on other larvae, terrestrial snails, and slugs. Some are so specialized that they have grooved mandibles that deliver digestive fluids directly to their prey. The larval stage lasts from several weeks up to, in certain species, two or more years. The larvae pupate for one to two and a half weeks and emerge as adults. Adult diet varies among firefly species: some are predatory, while others feed on plant pollen or nectar. Some adults, like the European glow-worm, have no mouth, emerging only to mate and lay eggs before dying. In most species, adults live for a few weeks in summer. Fireflies vary widely in their general appearance, with differences in color, shape, size, and features such as antennae. Adults differ in size depending on the species, with the largest up to long. Many species have non-flying larviform females. These can often be distinguished from the larvae only because the adult females have compound eyes, unlike the simple eyes of larvae, though the females have much smaller (and often highly regressed) eyes than those of their males. The most commonly known fireflies are nocturnal, although numerous species are diurnal and usually not luminescent; however, some species that remain in shadowy areas may produce light. Most fireflies are distasteful to vertebrate predators, as they contain the steroid pyrones lucibufagins, similar to the cardiotonic bufadienolides found in some poisonous toads. 
All fireflies glow as larvae, where bioluminescence is an aposematic warning signal to predators. Light and chemical production Light production in fireflies is due to the chemical process of bioluminescence. This occurs in specialized light-emitting organs, usually on a female firefly's lower abdomen. The enzyme luciferase acts on luciferin, in the presence of magnesium ions, ATP, and oxygen, to produce light. Oxygen is supplied via an abdominal trachea or breathing tube. Genes coding for these substances have been inserted into many different organisms. Firefly luciferase is used in forensics, and the enzyme has medical uses – in particular, for detecting the presence of ATP or magnesium. Fireflies produce a "cold light", with no infrared or ultraviolet frequencies. The light may be yellow, green, or pale red, with wavelengths from 510 to 670 nanometers. Some species, such as the dimly glowing "blue ghost" of the Eastern US, may seem to emit blueish-white light from a distance and in low light conditions, but their glow is bright green when observed up close. Their perceived blue tint may be due to the Purkinje effect. During a study on the genome of Aquatica leii, scientists discovered that two key genes are responsible for the formation, activation, and positioning of this firefly's light organ: Alabd-B and AlUnc-4. Adults emit light primarily for mate selection. Bioluminescence arose early, in the larvae, and was later adopted by adult fireflies; across the phylogeny it was repeatedly gained and lost before becoming fixed and retained as a mechanism of sexual communication in many species. Adult lampyrids have a variety of ways to communicate with mates in courtship: steady glows, flashing, and the use of chemical signals unrelated to photic systems. Chemical signals, or pheromones, are the ancestral form of sexual communication; this pre-dates the evolution of flash signaling in the lineage, and is retained today in diurnally active species. Some species, especially lightning bugs of the genera Photinus, Photuris, and Pyractomena, are distinguished by the unique courtship flash patterns emitted by flying males in search of females. In general, females of the genus Photinus do not fly, but do give a flash response to males of their own species. Signals, whether photic or chemical, allow fireflies to identify mates of their own species. Flash signaling characteristics include differences in duration, timing, color, number and rate of repetitions, height of flight, and direction of flight (e.g. climbing or diving), and vary interspecifically and geographically. When flash signals are not sufficiently distinguished between species in a population, sexual selection encourages divergence of signaling patterns. Synchronization of flashing occurs in several species; it is explained as phase synchronization and spontaneous order. Tropical fireflies routinely synchronize their flashes among large groups, particularly in Southeast Asia. At night along river banks in the Malaysian jungles, fireflies synchronize their light emissions precisely. Current hypotheses about the causes of this behavior involve diet, social interaction, and altitude. In the Philippines, thousands of fireflies can be seen all year round in the town of Donsol. In the United States, one of the most famous sightings of fireflies blinking in unison occurs annually near Elkmont, Tennessee, in the Great Smoky Mountains during the first weeks of June. Congaree National Park in South Carolina is another host to this phenomenon.
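Phase synchronization of the kind described above is often illustrated with a Kuramoto-style model of coupled oscillators, in which each firefly advances its flash phase at its own natural rate while being nudged toward the mean phase of the group. The Python sketch below is a toy demonstration only; the model choice and all parameters (population size, coupling strength, frequency spread) are assumptions made for illustration, not values from the firefly literature.

import math, random

N, K, DT = 50, 2.0, 0.01          # fireflies, coupling strength, time step (s)
freqs = [2 * math.pi * random.gauss(1.0, 0.05) for _ in range(N)]  # ~1 flash/s
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]

def coherence(ph):
    # Kuramoto order parameter r: 0 = incoherent flashing, 1 = perfect sync.
    re = sum(math.cos(p) for p in ph) / len(ph)
    im = sum(math.sin(p) for p in ph) / len(ph)
    return math.hypot(re, im)

for step in range(3001):
    coupling = [sum(math.sin(q - p) for q in phases) / N for p in phases]
    phases = [p + (freqs[i] + K * coupling[i]) * DT for i, p in enumerate(phases)]
    if step % 1000 == 0:
        print("t = %4.1f s, r = %.2f" % (step * DT, coherence(phases)))

Run repeatedly, the order parameter r climbs from near 0 toward 1 as the simulated flashes fall into step, which is the qualitative behavior reported for synchronous fireflies.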
Female "femme fatale" Photuris fireflies mimic the photic signaling patterns of the smaller Photinus, attracting males to what appears to be a suitable mate, then eating them. This provides the females with a supply of the toxic defensive lucibufagin chemicals. Many fireflies do not produce light. Usually these species are diurnal, or day-flying, such as those in the genus Ellychnia. A few diurnal fireflies that inhabit primarily shadowy places, such as beneath tall plants or trees, are luminescent. One such genus is Lucidota. Non-bioluminescent fireflies use pheromones to signal mates. Some basal groups lack bioluminescence and use chemical signaling instead. Phosphaenus hemipterus has photic organs, yet is a diurnal firefly and displays large antennae and small eyes. These traits suggest that pheromones are used for sexual selection, while photic organs are used for warning signals. In controlled experiments, males coming from downwind arrived at females first, indicating that males travel upwind along a pheromone plume. Males can find females without the use of visual cues, so sexual communication in P. hemipterus appears to be mediated entirely by pheromones. Evolution Fossil history The oldest known fossils of the Lampyridae family are Protoluciola and Flammarionella from the Late Cretaceous (Cenomanian ~ 99 million years ago) Burmese amber of Myanmar, which belong to the subfamily Luciolinae. The light producing organs are clearly present. The ancestral glow colour for the last common ancestor of all living fireflies has been inferred to be green, based on genomic analysis. Taxonomy The fireflies (including the lightning bugs) are a family, Lampyridae, of some 2,000 species within the Coleoptera. The family forms a single clade, a natural phylogenetic group. The term glowworm is used for both adults and larvae of firefly species such as Lampyris noctiluca, the common European glowworm, in which only the nonflying adult females glow brightly; the flying males glow weakly and intermittently. In the Americas, "glow worms" are the closely related Coleopteran family Phengodidae, while in New Zealand and Australia, a "glow worm" is a luminescent larva of the fungus gnat Arachnocampa, within the true flies, Diptera. Phylogeny The phylogeny of the Lampyridae family, based on both phylogenetic and morphological evidence by Martin et al. 2019, is: Interaction with humans Conservation Firefly populations are thought to be declining worldwide. While monitoring data for many regions are scarce, a growing number of anecdotal reports, coupled with several published studies from Europe and Asia, suggest that fireflies are endangered. Recent IUCN Red List assessments for North American fireflies have identified species with heightened extinction risk in the US, with 18 taxa categorized as threatened with extinction. Fireflies face threats including habitat loss and degradation, light pollution, pesticide use, poor water quality, invasive species, over-collection, and climate change. Firefly tourism, a quickly growing sector of the travel and tourism industry, has also been identified as a potential threat to fireflies and their habitats when not managed appropriately. Like many other organisms, fireflies are directly affected by land-use change (e.g., loss of habitat area and connectivity), which is identified as the main driver of biodiversity changes in terrestrial ecosystems. Pesticides, including insecticides and herbicides, have also been indicated as a likely cause of firefly decline. 
These chemicals can not only harm fireflies directly but also potentially reduce prey populations and degrade habitat. Light pollution is an especially concerning threat to fireflies. Since the majority of firefly species use bioluminescent courtship signals, they are also sensitive to environmental levels of light and consequently to light pollution. A growing number of studies investigating the effects of artificial light at night on fireflies have shown that light pollution can disrupt fireflies' courtship signals and even interfere with larval dispersal. Researchers agree that protecting and enhancing firefly habitat is necessary to conserve their populations. Recommendations include reducing or limiting artificial light at night, restoring habitats where threatened species occur, and eliminating unnecessary pesticide use, among many others. The Sundarbans Firefly Sanctuary in Bangladesh was established in 2019. In culture Fireflies have featured in human culture around the world for centuries. In Japan, the emergence of fireflies (Japanese: hotaru) signifies the anticipated changing of the seasons; firefly viewing is a special aesthetic pleasure of midsummer, celebrated in parks that exist for that one purpose. The Japanese sword Hotarumaru, made in the 14th century, is so named for a legend that its flaws were repaired by fireflies. In Italy, the firefly (Italian: lucciola) appears in Canto XXVI of Dante's Inferno, written in the 14th century.
Biology and health sciences
Beetles (Coleoptera)
null
61088
https://en.wikipedia.org/wiki/Smallpox%20vaccine
Smallpox vaccine
The smallpox vaccine is used to prevent smallpox infection caused by the variola virus. It is the first vaccine to have been developed against a contagious disease. In 1796, British physician Edward Jenner demonstrated that an infection with the relatively mild cowpox virus conferred immunity against the deadly smallpox virus. Cowpox served as a natural vaccine until the modern smallpox vaccine emerged in the 20th century. From 1958 to 1977, the World Health Organization (WHO) conducted a global vaccination campaign that eradicated smallpox, making it the only human disease to be eradicated. Although routine smallpox vaccination is no longer performed on the general public, the vaccine is still being produced for research, and to guard against bioterrorism, biological warfare, and mpox. The term vaccine derives from the Latin word for cow, reflecting the origins of smallpox vaccination. Edward Jenner referred to cowpox as variolae vaccinae (smallpox of the cow). The origins of the smallpox vaccine became murky over time, especially after Louis Pasteur developed laboratory techniques for creating vaccines in the 19th century. Allan Watt Downie demonstrated in 1939 that the modern smallpox vaccine was serologically distinct from cowpox, and vaccinia was subsequently recognized as a separate viral species. Whole-genome sequencing has revealed that vaccinia is most closely related to horsepox, and the cowpox strains found in Great Britain are the least closely related to vaccinia. Types As the oldest vaccine, the smallpox vaccine has gone through several generations of medical technology. From 1796 to the 1880s, the vaccine was transmitted from one person to another through arm-to-arm vaccination. Smallpox vaccine was successfully maintained in cattle starting in the 1840s, and calf lymph vaccine became the leading smallpox vaccine in the 1880s. First-generation vaccines grown on the skin of live animals were widely distributed in the 1950s–1970s to eradicate smallpox. Second-generation vaccines were grown in chorioallantoic membrane or cell cultures for greater purity, and they were used in some areas during the smallpox eradication campaign. Third-generation vaccines are based on attenuated strains of vaccinia and saw limited use prior to the eradication of smallpox. All three generations of vaccine are available in stockpiles. First and second-generation vaccines contain live unattenuated vaccinia virus and can cause serious side effects in a small percentage of recipients, including death in 1–10 people per million vaccinations. Third-generation vaccines are much safer due to the milder side effects of the attenuated vaccinia strains. Second and third-generation vaccines are still being produced, with manufacturing capacity being built up in the 2000s due to fears of bioterrorism and biological warfare. First-generation The first-generation vaccines are manufactured by growing live vaccinia virus in the skin of live animals. Most first-generation vaccines are calf lymph vaccines that were grown on the skin of cows, but other animals were also used, including sheep. The development of freeze-dried vaccine in the 1950s made it possible to preserve vaccinia virus for long periods of time without refrigeration, leading to the availability of freeze-dried vaccines such as Dryvax. The vaccine is administered by multiple puncture of the skin (scarification) with a bifurcated needle that holds vaccine solution in the fork. 
The skin should be cleaned with water rather than alcohol, as the alcohol could inactivate the vaccinia virus. If alcohol is used, it must be allowed to evaporate completely before the vaccine is administered. Vaccination results in a skin lesion that fills with pus and eventually crusts over. This manifestation of localized vaccinia infection is known as a vaccine "take" and demonstrates immunity to smallpox. After 2–3 weeks, the scab will fall off and leave behind a vaccine scar. First-generation vaccines consist of live, unattenuated vaccinia virus. One-third of first-time vaccinees develop side effects significant enough to miss school, work, or other activities, or have difficulty sleeping. 15–20% of children receiving the vaccine for the first time develop fevers of over 39 °C (102 °F). The vaccinia lesion can transmit the virus to other people. Rare side effects include postvaccinal encephalitis and myopericarditis. Many countries have stockpiled first-generation smallpox vaccines. In a 2006 predictive analysis of casualties if there were a mass vaccination of the populations of Germany and the Netherlands, it was estimated that a total of 9.8 people in the Netherlands and 46.2 people in Germany would die from uncontrolled vaccinia infection after being vaccinated with the New York City Board of Health strain. More deaths were predicted for vaccines based on other strains: Lister (55.1 Netherlands, 268.5 Germany) and Bern (303.5 Netherlands, 1,381 Germany). Second-generation The second-generation vaccines consist of live vaccinia virus grown in the chorioallantoic membrane or cell culture. The second-generation vaccines are also administered through scarification with a bifurcated needle, and they carry the same side effects as the first-generation strain of vaccinia from which they were cloned. However, the use of eggs or cell culture allows for vaccine production in a sterile environment, while first-generation vaccine contains skin bacteria from the animal that the vaccine was grown on. Ernest William Goodpasture, Alice Miles Woodruff, and G. John Buddingh grew vaccinia virus on the chorioallantoic membrane of chicken embryos in 1932. The Texas Department of Health began producing egg-based vaccine in 1939 and started using it in vaccination campaigns in 1948. Lederle Laboratories began selling its Avianized smallpox vaccine in the United States in 1959. Egg-based vaccine was also used widely in Brazil, New Zealand, and Sweden, and on a smaller scale in many other countries. Concerns about temperature stability and avian sarcoma leukosis virus prevented it from being used more widely during the eradication campaign, although no increase in leukemia was seen in Brazil and Sweden despite the presence of ASLV in the chickens. Vaccinia was first grown in cell culture in 1931 by Thomas Milton Rivers. The WHO funded work in the 1960s at the Dutch National Institute for Public Health and the Environment (RIVM) on growing the Lister/Elstree strain in rabbit kidney cells and tested it in 45,443 Indonesian children in 1973, with results comparable to the same strain of calf lymph vaccine. Two other cell culture vaccines were developed from the Lister strain in the 2000s: Elstree-BN (Bavarian Nordic) and VV Lister CEP (Chicken Embryo Primary, Sanofi Pasteur). Lister/Elstree-RIVM was stockpiled in the Netherlands, and Elstree-BN was sold to some European countries for stockpiles. However, Sanofi dropped its own vaccine after it acquired Acambis in 2008. 
ACAM2000 is a vaccine developed by Acambis, which was acquired by Sanofi Pasteur in 2008; Sanofi in turn sold the smallpox vaccine business to Emergent BioSolutions in 2017. Six strains of vaccinia were isolated from 3,000 doses of Dryvax and found to exhibit significant variation in virulence. The strain with the most similar virulence to the overall Dryvax mixture was selected and grown in MRC-5 cells to make the ACAM1000 vaccine. After a successful phase I trial of ACAM1000, the virus was passaged three times in Vero cells to develop ACAM2000, which entered mass production at Baxter. The United States ordered over 200 million doses of ACAM2000 in 1999–2001 for its stockpile, and production is ongoing to replace expired vaccine. ACAM2000 was approved for mpox prevention in the United States in August 2024. Third-generation The third-generation vaccines are based on attenuated vaccinia viruses that are much less virulent and carry milder side effects. The attenuated viruses may be replicating or non-replicating. MVA Modified vaccinia Ankara (MVA) is a replication-incompetent variant of vaccinia that was developed in West Germany through serial passage. The original Ankara strain of vaccinia was maintained at the vaccine institute in Ankara, Turkey, on donkeys and cows. The Ankara strain was taken to West Germany in 1953, where Herrlich and Mayr grew it on chorioallantoic membrane at the University of Munich. After 572 serial passages, the vaccinia virus had lost over 14% of its genome and could no longer replicate in human cells. MVA was used in West Germany in 1977–1980, but the eradication of smallpox ended the vaccination campaign after only 120,000 doses. MVA stimulates the production of fewer antibodies than replicating vaccines. During the smallpox eradication campaign, MVA was considered to be a pre-vaccine that would be administered before a replicating vaccine to reduce the side effects, or an alternative vaccine that could be safely given to people at high risk from a replicating vaccine. Japan evaluated MVA and rejected it due to its low immunogenicity, deciding to develop its own attenuated vaccine instead. In the 2000s, MVA was tested in animal models at much higher dosages. When MVA is given to monkeys at 40 times the dosage of Dryvax, it stimulates a more rapid immune response while still causing milder side effects. MVA-BN MVA-BN (known as Imvanex in the European Union, Imvamune in Canada, and Jynneos in the United States) is a vaccine manufactured by Bavarian Nordic by growing MVA in cell culture. Unlike replicating vaccines, MVA-BN is administered by injection via the subcutaneous route and does not result in a vaccine "take." A "take" or "major cutaneous reaction" is a pustular lesion or an area of definite induration or congestion surrounding a central lesion, which can be a scab or an ulcer. MVA-BN can also be administered intradermally to increase the number of available doses. It is safer for immunocompromised patients and those who are at risk from a vaccinia infection. MVA-BN has been approved in the European Union, Canada, and the United States. Clinical trials have found that MVA-BN is safer than and just as immunogenic as ACAM2000. This vaccine has also been approved for use against mpox. LC16m8 LC16m8 is a replicating attenuated strain of vaccinia that is manufactured by Kaketsuken in Japan. 
Working at the Chiba Serum Institute in Japan, So Hashizume passaged the Lister strain 45 times in primary rabbit kidney cells, interrupting the process after passages 36, 42, and 45 to grow clones on chorioallantoic membrane and select for pock size. The resulting variant was designated LC16m8 (Lister clone 16, medium pocks, clone 8). Unlike the severely damaged MVA, LC16m8 contains every gene that is present in the ancestral vaccinia. However, a single-nucleotide deletion truncates membrane protein B5R from a residue length of 317 to 92. Although the truncated protein decreases production of extracellular enveloped virus, animal models have shown that antibodies against other membrane proteins are sufficient for immunity. LC16m8 was approved in Japan in 1975 after testing in over 50,000 children. Vaccination with LC16m8 results in a vaccine "take", but its safety is similar to that of MVA. Safety Vaccinia is infectious, which improves its effectiveness, but it causes serious complications for people with impaired immune systems (for example, chemotherapy and AIDS patients) or a history of eczema, and is not considered safe for pregnant women. A woman planning to conceive should not receive smallpox immunization. Vaccines that only contain attenuated vaccinia viruses (an attenuated virus is one in which the pathogenicity has been decreased through serial passage) have been proposed, but some researchers have questioned the possible effectiveness of such a vaccine. According to the US Centers for Disease Control and Prevention (CDC), "within 3 days of being exposed to the virus, the vaccine might protect you from getting the disease. If you still get the disease, you might get much less sick than an unvaccinated person would. Within 4 to 7 days of being exposed to the virus, the vaccine likely gives you some protection from the disease. If you still get the disease, you might not get as sick as an unvaccinated person would." In May 2007, the Vaccines and Related Biological Products Advisory Committee (VRBPAC) of the US Food and Drug Administration (FDA) voted unanimously that a new live virus vaccine produced by Acambis, ACAM2000, was both safe and effective for use in persons at high risk of exposure to smallpox virus. However, due to the high rate of serious adverse effects, the vaccine is only made available to the CDC for the Strategic National Stockpile. ACAM2000 was approved for medical use in the United States in August 2007. Stockpiles Since smallpox has been eradicated, the public is not routinely vaccinated against the disease. The World Health Organization maintained a stockpile of 200 million doses in 1980, to guard against reemergence of the disease, but 99% of the stockpile was destroyed in the late 1980s when smallpox failed to return. After the September 11 attacks in 2001, many governments began building up vaccine stockpiles again for fear of bioterrorism. Several companies sold off their stockpiles of vaccines manufactured in the 1970s, and production of smallpox vaccines resumed. Aventis Pasteur discovered a stockpile from the 1950s and donated it to the US government. Stockpiles of newer vaccines must be repurchased periodically since they carry expiration dates. The United States had received 269 million doses of ACAM2000 and 28 million doses of MVA-BN by 2019, but only 100 million doses of ACAM2000 and 65,000 doses of MVA-BN were still available from the stockpile at the start of the 2022 mpox outbreak. 
First-generation vaccines have no specified expiration date and remain viable indefinitely in deep freeze. The U.S. stockpile of WetVax was manufactured in 1956–1957 and has been maintained in deep freeze since then, and it was still effective when tested in 2004. Replicating vaccines also remain effective even at 1:10 dilution, so a limited number of doses can be stretched to cover a much larger population. History Variolation The mortality of the severe form of smallpox – variola major – was very high without vaccination, up to 35% in some outbreaks. A method of inducing immunity known as inoculation, insufflation or "variolation" was practiced before the development of a modern vaccine and likely occurred in Africa and China well before the practice arrived in Europe. It may also have occurred in India, but this is disputed; other investigators contend the ancient Sanskrit medical texts of India do not describe these techniques. The first clear reference to smallpox inoculation was made by the Chinese author Wan Quan (1499–1582) in his Douzhen xinfa (痘疹心法) published in 1549. Inoculation for smallpox does not appear to have been widespread in China until the reign era of the Longqing Emperor (r. 1567–1572) during the Ming Dynasty. In China, powdered smallpox scabs were blown up the noses of the healthy. The patients would then develop a mild case of the disease and from then on were immune to it. The technique did have a 0.5–2.0% mortality rate, but that was considerably less than the 20–30% mortality rate of the disease itself. Two reports on the Chinese practice of inoculation were received by the Royal Society in London in 1700: one by Dr. Martin Lister, who received a report from an employee of the East India Company stationed in China, and another by Clopton Havers. According to Voltaire (1742), the Turks derived their use of inoculation from neighbouring Circassia. Voltaire does not speculate on where the Circassians derived their technique from, though he reports that the Chinese have practiced it "these hundred years". Variolation was also practiced throughout the latter half of the 17th century by physicians in Turkey, Persia, and Africa. In 1714 and 1716, two reports of the Ottoman Turkish method of inoculation were made to the Royal Society in England, by Emmanuel Timoni, a doctor affiliated with the British Embassy in Constantinople, and Giacomo Pylarini. When Lady Mary Wortley Montagu was in the Ottoman Empire, she discovered the local practice of inoculation against smallpox, called variolation. In 1718 she had her son, aged five, variolated. He recovered quickly. She returned to London and had her daughter variolated in 1721 by Charles Maitland, during an epidemic of smallpox. This encouraged the British Royal Family to take an interest, and a trial of variolation was carried out on prisoners in Newgate Prison. This was successful, and in 1722 Caroline of Ansbach, the Princess of Wales, allowed Maitland to variolate her children. The success of these variolations assured the British people that the procedure was safe. Stimulated by a severe epidemic, variolation was first employed in North America in 1721. The procedure had been known in Boston since 1706, when preacher Cotton Mather learned it from Onesimus, a man he held as a slave, who – like many of his peers – had been inoculated in Africa before being kidnapped. This practice was widely criticized at first. 
However, a limited trial showed six deaths occurred out of 244 who were variolated (2.5%), while 844 out of 5,980 died of natural disease (14%), and the process was widely adopted throughout the colonies. The inoculation technique was documented as having a mortality rate of only one in a thousand. Two years after Kennedy's description appeared, in March 1718, Dr. Charles Maitland successfully inoculated the five-year-old son of the British ambassador to the Turkish court under orders from the ambassador's wife Lady Mary Wortley Montagu, who four years later introduced the practice to England. An account from a letter by Lady Mary Wortley Montagu to Sarah Chiswell, dated 1 April 1717, from the Turkish Embassy describes this treatment. Early vaccination In the early empirical days of vaccination, before Louis Pasteur's work on establishing the germ theory and Joseph Lister's on antisepsis and asepsis, there was considerable cross-infection. William Woodville, one of the early vaccinators and director of the London Smallpox Hospital, is thought to have contaminated the cowpox matter – the vaccine – with smallpox matter, and this essentially produced variolation. Other vaccine material was not reliably derived from cowpox, but from other skin eruptions of cattle. In these early days of empirical experimentation, the American Calvinist Jonathan Edwards died from a smallpox inoculation in 1758. Some of the earliest statistical and epidemiological studies were performed by James Jurin in 1727 and Daniel Bernoulli in 1766. In 1768, Dr John Fewster reported that variolation induced no reaction in persons who had had cowpox. Edward Jenner was born in Berkeley, England. As a young child, Jenner was variolated with the other schoolboys through parish funds, but nearly died due to the seriousness of his infection. Fed purgative medicine and subjected to bloodletting, Jenner was put in one of the variolation stables until he recovered. At the age of 13, he was apprenticed to apothecary Daniel Ludlow and later surgeon George Hardwick in nearby Sodbury. He observed that people who caught cowpox while working with cattle were known not to catch smallpox. Jenner assumed a causal connection, but the idea was not taken up at that time. From 1770 to 1772 Jenner received advanced training in London at St. George's Hospital and as the private pupil of John Hunter, then returned to set up practice in Berkeley. Perhaps there was already an informal public understanding of some connection between disease resistance and working with cattle. The "beautiful milkmaid" seems to have been a frequent image in the art and literature of this period. But it is known for certain that in the years following 1770, at least six people in England and Germany (Sevel, Jensen, Jesty 1774, Rendall, Plett 1791) successfully tested the possibility of using the cowpox vaccine as an immunization for smallpox in humans. Jenner sent a paper reporting his observations to the Royal Society in April 1797. It was not submitted formally and there is no mention of it in the Society's records. Jenner had sent the paper informally to Sir Joseph Banks, the Society's president, who asked Everard Home for his views. Reviews of his rejected report, published for the first time in 1999, were skeptical and called for further vaccinations. 
Additional vaccinations were performed, and in 1798 Jenner published his work entitled An Inquiry into the Causes and Effects of the Variolae Vaccinae, a Disease Discovered in Some of the Western Counties of England, Particularly Gloucestershire, and Known by the Name of Cow Pox. It was an analysis of 23 cases, including several individuals who had resisted natural exposure after previous cowpox. It is not known how many people Jenner vaccinated or challenged by inoculation with smallpox virus; e.g. Case 21 included "several children and adults". Crucially, all of the at least four people whom Jenner deliberately inoculated with smallpox virus resisted it. These included the first and last patients in a series of arm-to-arm transfers. He concluded that cowpox inoculation was a safe alternative to smallpox inoculation, but rashly claimed that the protective effect was lifelong. This last proved to be incorrect. Jenner also tried to distinguish between "true" cowpox, which produced the desired result, and "spurious" cowpox, which was ineffective and/or produced a severe reaction. Modern research suggests Jenner was trying to distinguish between effects caused by what would now be recognised as non-infectious vaccine, a different virus (e.g. paravaccinia/milker's nodes), or contaminating bacterial pathogens. This caused confusion at the time, but the distinctions would become important criteria in vaccine development. A further source of confusion was Jenner's belief that fully effective vaccine obtained from cows originated in an equine disease, which he mistakenly referred to as grease. This was criticised at the time, but vaccines derived from horsepox were soon introduced and later contributed to the complicated problem of the origin of vaccinia virus, the virus in present-day vaccine. The introduction of the vaccine to the New World took place in Trinity, Newfoundland, in 1798 by Dr. John Clinch, boyhood friend and medical colleague of Jenner. The first smallpox vaccine in the United States was administered in 1799, when the physician Valentine Seaman gave his children a smallpox vaccination using a serum acquired from Jenner. By 1800, Jenner's work had been published in all the major European languages and had reached Benjamin Waterhouse in the United States – an indication of rapid spread and deep interest. Despite some concern about the safety of vaccination, mortality using carefully selected vaccine was close to zero, and it was soon in use all over Europe and the United States. In 1804 the Balmis Expedition, an official Spanish mission commanded by Francisco Javier de Balmis, sailed to spread the vaccine throughout the Spanish Empire, first to the Canary Islands and on to Spanish Central America. While his deputy, José Salvany, took vaccine to the west and east coasts of Spanish South America, Balmis sailed to Manila in the Philippines and on to Canton and Macao on the Chinese coast. He returned to Spain in 1806. The vaccine was not carried in the form of flasks, but in the form of 22 orphaned boys, who were "carriers" of the live cowpox virus. After arrival, "other Spanish governors and doctors used enslaved girls to move the virus between islands, using lymph fluid harvested from them to inoculate their local populations". Napoleon was an early proponent of smallpox vaccination and ordered that army recruits be given the vaccine. Additionally, a vaccination program was created for the French Army and his Imperial Guard. In 1811 he had his son, Napoleon II, vaccinated after his birth. 
By 1815, about half of French children were vaccinated, and by the end of the Napoleonic Empire smallpox accounted for 1.8% of deaths, as opposed to the 4.8% of deaths it accounted for at the time of the French Revolution. On March 26, 1806, the Swiss canton of Thurgau became the first state in the world to introduce compulsory smallpox vaccinations, by order of the cantonal councillor Jakob Christoph Scherb. Half a year later, Elisa Bonaparte issued a corresponding order for her Principality of Lucca and Piombino on 25 December 1806. On 26 August 1807, Bavaria introduced a similar measure. Baden followed in 1809, Prussia in 1815, Württemberg in 1818, Sweden in 1816, England in 1867 and the German Empire in 1874 through the Reich Vaccination Act. In Lutheran Sweden, the Protestant clergy played a pioneering role in voluntary smallpox vaccination as early as 1800. The first vaccination was carried out in Liechtenstein in 1801, and from 1812 it was mandatory to vaccinate. The question of who first tried cowpox inoculation/vaccination cannot be answered with certainty. Most, but still limited, information is available for Benjamin Jesty, Peter Plett and John Fewster. In 1774 Jesty, a farmer of Yetminster in Dorset, observing that the two milkmaids living with his family were immune to smallpox, inoculated his family with cowpox to protect them from smallpox. He attracted a certain amount of local criticism and ridicule at the time; then interest waned. Attention was later drawn to Jesty, and he was brought to London in 1802 by critics jealous of Jenner's prominence at a time when he was applying to Parliament for financial reward. During 1790–92 Peter Plett, a teacher from Holstein, reported limited results of cowpox inoculation to the Medical Faculty of the University of Kiel. However, the Faculty favoured variolation and took no action. John Fewster, a surgeon friend of Jenner's from nearby Thornbury, discussed the possibility of cowpox inoculation at meetings as early as 1765. He may have done some cowpox inoculations in 1796, at about the same time that Jenner vaccinated Phipps. However, Fewster, who had a flourishing variolation practice, may have considered this option but used smallpox instead. He thought vaccination offered no advantage over variolation, but maintained friendly contact with Jenner and certainly made no claim of priority for vaccination when critics attacked Jenner's reputation. It seems clear that the idea of using cowpox instead of smallpox for inoculation was considered, and actually tried, in the late 18th century, and not just by the medical profession. Therefore, Jenner was not the first to try cowpox inoculation. However, he was the first to publish his evidence and distribute vaccine freely, provide information on selection of suitable material, and maintain it by arm-to-arm transfer. The authors of the official World Health Organization (WHO) account Smallpox and its Eradication gave their own assessment of Jenner's role. As vaccination spread, some European countries made it compulsory. Concern about its safety led to opposition and then repeal of legislation in some instances. Compulsory infant vaccination was introduced in England by the Vaccination Act 1853 (16 & 17 Vict. c. 100). By 1871, parents could be fined for non-compliance, and then imprisoned for non-payment. This intensified opposition, and the Vaccination Act 1898 (61 & 62 Vict. c. 49) introduced a conscience clause. 
This allowed exemption on production of a certificate of conscientious objection signed by two magistrates. Such certificates were not always easily obtained, and a further act in 1907 allowed exemption by a statutory declaration, which could not be refused. Although theoretically still compulsory, the Vaccination Act 1907 (7 Edw. 7. c. 31) effectively marked the end of compulsory infant vaccination in England. In the United States, vaccination was regulated by individual states, the first to impose compulsory vaccination being Massachusetts in 1809. There then followed sequences of compulsion, opposition and repeal in various states. By 1930, Arizona, Utah, North Dakota and Minnesota prohibited compulsory vaccination, and 35 states allowed regulation by local authorities or had no legislation affecting vaccination, whilst in ten states, including Massachusetts, and in Washington, D.C., infant vaccination was compulsory. Compulsory infant vaccination was enforced by allowing only vaccinated children access to school. Those seeking to enforce compulsory vaccination argued that the public good overrode personal freedom, a view supported by the U.S. Supreme Court in Jacobson v. Massachusetts in 1905, a landmark ruling which set a precedent for cases dealing with personal freedom and the public good. Louis T. Wright, an African-American Harvard Medical School graduate (1915), introduced intradermal smallpox vaccination for soldiers while serving in the Army during World War I. Developments in production Until the end of the 19th century, vaccination was performed either directly with vaccine produced on the skin of calves or, particularly in England, with vaccine obtained from the calf but then maintained by arm-to-arm transfer; initially in both cases vaccine could be dried on ivory points for short-term storage or transport, but increasing use was made of glass capillary tubes for this purpose towards the end of the century. During this period there were no adequate methods for assessing the safety of the vaccine, and there were instances of contaminated vaccine transmitting infections such as erysipelas, tetanus, septicaemia and tuberculosis. In the case of arm-to-arm transfer there was also the risk of transmitting syphilis. Although this did occur occasionally, estimated as 750 cases in 100 million vaccinations, some critics of vaccination, e.g. Charles Creighton, believed that uncontaminated vaccine itself was a cause of syphilis. Smallpox vaccine was the only vaccine available during this period, and so the determined opposition to it initiated a number of vaccine controversies that spread to other vaccines and into the 21st century. Sydney Arthur Monckton Copeman, an English Government bacteriologist interested in smallpox vaccine, investigated the effects of various treatments, including glycerine, on the bacteria in it. Glycerine was sometimes used simply as a diluent by some continental vaccine producers. However, Copeman found that vaccine suspended in 50% chemically pure glycerine and stored under controlled conditions contained very few "extraneous" bacteria and produced satisfactory vaccinations. He later reported that glycerine killed the causative organisms of erysipelas and tuberculosis when they were added to the vaccine in "considerable quantity", and that his method was widely used on the continent. In 1896, Copeman was asked to supply "extra good calf vaccine" to vaccinate the future Edward VIII. 
Vaccine produced by Copeman's method was the only type issued free to public vaccinators by the British Government Vaccine Establishment from 1899. At the same time, the Vaccination Act 1898 (61 & 62 Vict. c. 49) banned arm-to-arm vaccination, thus preventing transmission of syphilis by this route. However, private practitioners had to purchase vaccine from commercial producers. Although proper use of glycerine reduced bacterial contamination considerably, the crude starting material, scraped from the skin of infected calves, was always heavily contaminated, and no vaccine was totally free from bacteria. A survey of vaccines in 1900 found wide variations in bacterial contamination. Vaccine issued by the Government Vaccine Establishment contained 5,000 bacteria per gram, while commercial vaccines contained up to 100,000 per gram. The level of bacterial contamination remained unregulated until the Therapeutic Substances Act 1925 (15 & 16 Geo. 5. c. 60) set an upper limit of 5,000 per gram and rejected any batch of vaccine found to contain the causative organisms of erysipelas or wound infections. Unfortunately, glycerolated vaccine lost its potency quickly at ambient temperatures, which restricted its use in tropical climates. However, it remained in use into the 1970s when a satisfactory cold chain was available. Animals continued to be widely used by vaccine producers during the smallpox eradication campaign. A WHO survey of 59 producers, some of whom used more than one source of vaccine, found that 39 used calves, 12 used sheep and 6 used water buffalo, whilst only 3 made vaccine in cell culture and 3 in embryonated hens' eggs. English vaccine was occasionally made in sheep during World War I, but from 1946 only sheep were used. In the late 1940s and early 1950s, Leslie Collier, an English microbiologist working at the Lister Institute of Preventive Medicine, developed a method for producing a heat-stable freeze-dried vaccine in powdered form. Collier added 0.5% phenol to the vaccine to reduce the number of bacterial contaminants, but the key stage was to add 5% peptone to the liquid vaccine before it was dispensed into ampoules. This protected the virus during the freeze-drying process. After drying, the ampoules were sealed under nitrogen. Like other vaccines, once reconstituted it became ineffective after 1–2 days at ambient temperatures. However, the dried vaccine was 100% effective when reconstituted after six months' storage, allowing it to be transported to, and stored in, remote tropical areas. Collier's method was increasingly used and, with minor modifications, became the standard for vaccine production adopted by the WHO Smallpox Eradication Unit when it initiated its global smallpox eradication campaign in 1967, at which time 23 of 59 manufacturers were using the Lister strain. In a letter about landmarks in the history of smallpox vaccine, written to and quoted from by Derrick Baxby, Donald Henderson, chief of the Smallpox Eradication Unit from 1967 to 1977, wrote: "Copeman and Collier made an enormous contribution for which neither, in my opinion, ever received due credit". Smallpox vaccine was inoculated by scratches into the superficial layers of the skin, with a wide variety of instruments used to achieve this. They ranged from simple needles to multi-pointed and multi-bladed spring-operated instruments specifically designed for the purpose. 
A major contribution to smallpox vaccination was made in the 1960s by Benjamin Rubin, an American microbiologist working for Wyeth Laboratories. Based on initial tests with textile needles whose eyes had been cut off transversely half-way, he developed the bifurcated needle. This was a sharpened two-prong fork designed to hold one dose of reconstituted freeze-dried vaccine by capillarity. Easy to use with minimum training, cheap to produce ($5 per 1,000), using one quarter as much vaccine as other methods, and repeatedly re-usable after flame sterilization, it was used globally in the WHO Smallpox Eradication Campaign from 1968. Rubin estimated that it was used to do 200 million vaccinations per year during the last years of the campaign. Those closely involved in the campaign were awarded the "Order of the Bifurcated Needle". This, a personal initiative by Donald Henderson, was a lapel badge, designed and made by his daughter, formed from a needle shaped into an "O". This represented "Target Zero", the objective of the campaign. Eradication of smallpox Smallpox was eradicated by a massive international search for outbreaks, backed up with a vaccination program, starting in 1967. It was organised and co-ordinated by a World Health Organization (WHO) unit, set up and headed by Donald Henderson. The last case in the Americas occurred in Brazil in 1971, the last in south-east Asia in Indonesia in 1972, and the last on the Indian subcontinent in Bangladesh in 1975. After two years of intensive searches, what proved to be the last endemic case anywhere in the world occurred in Somalia, in October 1977. A Global Commission for the Certification of Smallpox Eradication chaired by Frank Fenner examined the evidence from, and visited where necessary, all countries where smallpox had been endemic. In December 1979 they concluded that smallpox had been eradicated, a conclusion endorsed by the World Health Assembly in May 1980. However, even as the disease was being eradicated, there still remained stocks of smallpox virus in many laboratories. Accelerated by two cases of smallpox in 1978, one fatal (Janet Parker), caused by an accidental and unexplained containment breach at a laboratory at the University of Birmingham Medical School, the WHO ensured that known stocks of smallpox virus were either destroyed or moved to safer laboratories. By 1979, only four laboratories were known to have smallpox virus. All English stocks held at St Mary's Hospital, London, were transferred to more secure facilities at Porton Down and then to the US at the Centers for Disease Control and Prevention (CDC) in Atlanta, Georgia, in 1982, and all South African stocks were destroyed in 1983. By 1984, the only known stocks were kept at the CDC in the U.S. and the State Research Center of Virology and Biotechnology (VECTOR) in Koltsovo, Russia. These states report that their repositories are for possible anti-bioweaponry research and insurance if some obscure reservoir of natural smallpox is discovered in the future. Anti-terrorism preparation Among more than 270,000 US military service members vaccinated with smallpox vaccine between December 2002 and March 2003, eighteen cases of probable myopericarditis were reported (all in first-time vaccinees who received the NYCBOH strain of vaccinia virus), an incidence of 7.8 per 100,000 during the 30 days they were observed. All cases were in young, otherwise healthy adult white men, and all survived. 
In 2002, the United States government started a program to vaccinate 500,000 volunteer health care professionals throughout the country. Recipients were healthcare workers who would be first-line responders in the event of a bioterrorist attack. Many healthcare workers refused or did not pursue vaccination, worried about vaccine side effects, compensation and liability. Most did not see an immediate need for the vaccine. Some healthcare systems refused to participate, worried about becoming a destination for smallpox patients in the event of an epidemic. Fewer than 40,000 actually received the vaccine. On 21 April 2022, Public Services and Procurement Canada published a notice of tender seeking to stockpile 500,000 doses of smallpox vaccine in order to protect against a potential accidental or intentional release of the eradicated virus. On 6 May, the contract was awarded to Bavarian Nordic for their Imvamune vaccine. These were deployed by the Public Health Agency of Canada for targeted vaccination in response to the 2022 mpox outbreak. Origin The origin of the modern smallpox vaccine has long been unclear, but horsepox was identified in the 2010s as the most likely ancestor. Edward Jenner had obtained his vaccine from a cow, so he named the virus vaccinia, after the Latin word for cow. Jenner believed that both cowpox and smallpox were viruses that originated in the horse and passed to the cow, and some doctors followed his reasoning by inoculating their patients directly with horsepox. The situation was further muddied when Louis Pasteur developed techniques for creating vaccines in the laboratory in the late 19th century. As medical researchers subjected viruses to serial passage, inadequate recordkeeping resulted in the creation of laboratory strains with unclear origins. By the late 19th century, it was unknown whether the vaccine originated from cowpox, horsepox, or an attenuated strain of smallpox. In 1939, Allan Watt Downie showed that the vaccinia virus was serologically distinct from the "spontaneous" cowpox virus. This work established vaccinia and cowpox as two separate viral species. The term vaccinia now refers only to the smallpox vaccine, while cowpox no longer has a Latin name. The development of whole genome sequencing in the 1990s made it possible to compare orthopoxvirus genomes and identify their relationships with each other. The horsepox virus was sequenced in 2006 and found to be most closely related to vaccinia. In a phylogenetic tree of the orthopoxviruses, horsepox forms a clade with vaccinia strains, and cowpox strains form a different clade. Horsepox is extinct in the wild, and the only known sample was collected in 1976. Because the sample was collected at the end of the smallpox eradication campaign, scientists considered the possibility that horsepox is a strain of vaccinia that had escaped into the wild. However, as more smallpox vaccines were sequenced, older vaccines were found to be more similar to horsepox than modern vaccinia strains. A smallpox vaccine manufactured by Mulford in 1902 is 99.7% similar to horsepox, closer than any previously known strain of vaccinia. Modern Brazilian vaccines with a documented introduction date of 1887, made from material collected in an 1866 outbreak of "cowpox" in France, are more similar to horsepox than other strains of vaccinia. Five smallpox vaccines manufactured in the United States in 1859–1873 are most similar to each other and horsepox, as well as the 1902 Mulford vaccine. 
One of the 1859–1873 vaccines was identified as a novel strain of horsepox: it contains a gene that is complete in the 1976 horsepox sample but carries deletions in vaccinia strains. Terminology The word "vaccine" is derived from Variolae vaccinae (i.e. smallpox of the cow), the term devised by Jenner to denote cowpox and used in the long title of his An enquiry into the causes and effects of Variolae vaccinae, known by the name of cow pox. Vaccination, the term which soon replaced cowpox inoculation and vaccine inoculation, was first used in print by Jenner's friend Richard Dunning in 1800. Initially, the terms vaccine/vaccination referred only to smallpox, but in 1881 Louis Pasteur proposed at the 7th International Congress of Medicine that, to honour Jenner, the terms be widened to cover the new protective inoculations being introduced.
Biology and health sciences
Drugs and pharmacology
null
61097
https://en.wikipedia.org/wiki/Roche%20limit
Roche limit
In celestial mechanics, the Roche limit, also called Roche radius, is the distance from a celestial body within which a second celestial body, held together only by its own force of gravity, will disintegrate because the first body's tidal forces exceed the second body's self-gravitation. Inside the Roche limit, orbiting material disperses and forms rings, whereas outside the limit, material tends to coalesce. The Roche radius depends on the radius of the first body and on the ratio of the bodies' densities. The term is named after Édouard Roche, the French astronomer who first calculated this theoretical limit in 1848. Explanation The Roche limit typically applies to a satellite disintegrating due to tidal forces induced by its primary, the body around which it orbits. Parts of the satellite that are closer to the primary are attracted more strongly by gravity from the primary than parts that are farther away; this disparity effectively pulls the near and far parts of the satellite apart from each other, and if the disparity (combined with any centrifugal effects due to the object's spin) is larger than the force of gravity holding the satellite together, it can pull the satellite apart. Some real satellites, both natural and artificial, can orbit within their Roche limits because they are held together by forces other than gravitation. Objects resting on the surface of such a satellite would be lifted away by tidal forces. A weaker satellite, such as a comet, could be broken up when it passes within its Roche limit. Since, within the Roche limit, tidal forces overwhelm the gravitational forces that might otherwise hold the satellite together, no satellite can gravitationally coalesce out of smaller particles within that limit. Indeed, almost all known planetary rings are located within their Roche limit. (Notable exceptions are Saturn's E-Ring and Phoebe ring. These two rings could possibly be remnants from the planet's proto-planetary accretion disc that failed to coalesce into moonlets, or conversely have formed when a moon passed within its Roche limit and broke apart.) The gravitational effect occurring below the Roche limit is not the only factor that causes comets to break apart. Splitting by thermal stress, internal gas pressure, and rotational splitting are other ways for a comet to split under stress. Determination The limiting distance to which a satellite can approach without breaking up depends on the rigidity of the satellite. At one extreme, a completely rigid satellite will maintain its shape until tidal forces break it apart. At the other extreme, a highly fluid satellite gradually deforms, leading to increased tidal forces, causing the satellite to elongate, further compounding the tidal forces and causing it to break apart more readily. Most real satellites would lie somewhere between these two extremes, with tensile strength rendering the satellite neither perfectly rigid nor perfectly fluid. For example, a rubble-pile asteroid will behave more like a fluid than a solid rocky one; an icy body will behave quite rigidly at first but become more fluid as tidal heating accumulates and its ices begin to melt. But note that, as defined above, the Roche limit refers to a body held together solely by the gravitational forces which cause otherwise unconnected particles to coalesce, thus forming the body in question. 
The Roche limit is also usually calculated for the case of a circular orbit, although it is straightforward to modify the calculation to apply to the case (for example) of a body passing the primary on a parabolic or hyperbolic trajectory. Rigid satellites The rigid-body Roche limit is a simplified calculation for a spherical satellite. Irregular shapes such as those of tidal deformation on the body or the primary it orbits are neglected. It is assumed to be in hydrostatic equilibrium. These assumptions, although unrealistic, greatly simplify calculations. The Roche limit for a rigid spherical satellite is the distance, $d$, from the primary at which the gravitational force on a test mass at the surface of the object is exactly equal to the tidal force pulling the mass away from the object: $d = R_M \left( 2\,\frac{\rho_M}{\rho_m} \right)^{1/3}$, where $R_M$ is the radius of the primary, $\rho_M$ is the density of the primary, and $\rho_m$ is the density of the satellite. This can be equivalently written as $d = R_m \left( 2\,\frac{M_M}{M_m} \right)^{1/3}$, where $R_m$ is the radius of the secondary, $M_M$ is the mass of the primary, and $M_m$ is the mass of the secondary. A third equivalent form, which uses only one property for each of the two bodies (the mass of the primary and the density of the secondary), is $d = \left( \frac{3 M_M}{2 \pi \rho_m} \right)^{1/3}$. These all represent the orbital distance inside of which loose material (e.g. regolith) on the surface of the satellite closest to the primary would be pulled away, and likewise material on the side opposite the primary will also move away from, rather than toward, the satellite. Fluid satellites A more accurate approach for calculating the Roche limit takes the deformation of the satellite into account. An extreme example would be a tidally locked liquid satellite orbiting a planet, where any force acting upon the satellite would deform it into a prolate spheroid. The calculation is complex and its result cannot be represented in an exact algebraic formula. Roche himself derived the following approximate solution for the Roche limit: $d \approx 2.44\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3}$. However, a better approximation that takes into account the primary's oblateness and the satellite's mass is: $d \approx 2.423\, R_M \left( \frac{\rho_M}{\rho_m} \right)^{1/3} \left( \frac{\left(1 + \frac{M_m}{3 M_M}\right) + \frac{c}{3 R_M}\left(1 + \frac{M_m}{M_M}\right)}{1 - c/R_M} \right)^{1/3}$, where $c/R_M$ is the oblateness of the primary. The fluid solution is appropriate for bodies that are only loosely held together, such as a comet. For instance, comet Shoemaker–Levy 9's decaying orbit around Jupiter passed within its Roche limit in July 1992, causing it to fragment into a number of smaller pieces. On its next approach in 1994 the fragments crashed into the planet. Shoemaker–Levy 9 was first observed in 1993, but its orbit indicated that it had been captured by Jupiter a few decades prior.
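To make the formulas above concrete, here is a minimal Python sketch (not part of the original article) that evaluates the rigid-body formula and Roche's fluid approximation for an Earth–Moon-like pair; the radius and density figures are rounded, illustrative values.

def roche_rigid(primary_radius, primary_density, satellite_density):
    # Rigid-body estimate: d = R_M * (2 * rho_M / rho_m) ** (1/3)
    return primary_radius * (2 * primary_density / satellite_density) ** (1 / 3)

def roche_fluid(primary_radius, primary_density, satellite_density):
    # Roche's fluid approximation: d ~ 2.44 * R_M * (rho_M / rho_m) ** (1/3)
    return 2.44 * primary_radius * (primary_density / satellite_density) ** (1 / 3)

R_EARTH = 6.371e6      # primary radius, m (rounded)
RHO_EARTH = 5514.0     # primary mean density, kg/m^3 (rounded)
RHO_MOON = 3344.0      # satellite mean density, kg/m^3 (rounded)

d_rigid = roche_rigid(R_EARTH, RHO_EARTH, RHO_MOON)
d_fluid = roche_fluid(R_EARTH, RHO_EARTH, RHO_MOON)
print(f"rigid: {d_rigid / 1e3:.0f} km, fluid: {d_fluid / 1e3:.0f} km")
# Prints roughly: rigid: 9483 km, fluid: 18365 km

Both estimates fall far inside the Moon's actual orbital distance of roughly 384,400 km, consistent with the Moon orbiting well outside Earth's Roche limit.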
Physical sciences
Celestial mechanics
Astronomy
61162
https://en.wikipedia.org/wiki/Taurine
Taurine
Taurine, or 2-aminoethanesulfonic acid, is a non-proteinogenic, naturally occurring amino sulfonic acid that is widely distributed in animal tissues. It is a major constituent of bile, can be found in the large intestine, and accounts for up to 0.1% of total human body weight. Taurine is named after the Latin taurus (cognate to Ancient Greek ταῦρος, taûros), meaning bull or ox, as it was first isolated from ox bile in 1827 by German scientists Friedrich Tiedemann and Leopold Gmelin. It was discovered in human bile in 1846 by Edmund Ronalds. Although taurine is abundant in human organs with diverse putative roles, it is not an essential human dietary nutrient and is not included among nutrients with a recommended intake level. Taurine is synthesized naturally in the human liver from methionine and cysteine. Taurine is commonly sold as a dietary supplement, but there is no good clinical evidence that taurine supplements provide any benefit to human health. Taurine is used as a food additive for cats (who require it as an essential nutrient), dogs, and poultry. Taurine concentrations in land plants are low or undetectable, but appreciable concentrations by wet weight have been found in algae. Chemical and biochemical features Taurine exists as a zwitterion, +H3N–CH2–CH2–SO3−, as verified by X-ray crystallography. The sulfonic acid has a low pKa, ensuring that it is fully ionized to the sulfonate at the pHs found in the intestinal tract. Synthesis Synthetic taurine is obtained by the ammonolysis of isethionic acid (2-hydroxyethanesulfonic acid), which in turn is obtained from the reaction of ethylene oxide with aqueous sodium bisulfite. A direct approach involves the reaction of aziridine with sulfurous acid. In 1993, commercial production of taurine was split roughly 50% for pet food and 50% for pharmaceutical applications. As of 2010, China alone has more than 40 manufacturers of taurine, most of which employ the ethanolamine method. In the laboratory, taurine can be produced by alkylation of ammonia with bromoethanesulfonate salts. Biosynthesis Taurine is naturally derived from cysteine. Mammalian taurine synthesis occurs in the liver via the cysteine sulfinic acid pathway. In this pathway, cysteine is first oxidized to its sulfinic acid, catalyzed by the enzyme cysteine dioxygenase. Cysteine sulfinic acid, in turn, is decarboxylated by sulfinoalanine decarboxylase to form hypotaurine. Hypotaurine is enzymatically oxidized to yield taurine by hypotaurine dehydrogenase. Taurine is also produced by the transsulfuration pathway, which converts homocysteine into cystathionine. The cystathionine is then converted to hypotaurine by the sequential action of three enzymes: cystathionine gamma-lyase, cysteine dioxygenase, and cysteine sulfinic acid decarboxylase. Hypotaurine is then oxidized to taurine as described above. A pathway for taurine biosynthesis from serine and sulfate is reported in microalgae, developing chicken embryos, and chick liver. Serine dehydratase converts serine to 2-aminoacrylate, which is converted to cysteic acid by 3′-phosphoadenylyl sulfate:2-aminoacrylate C-sulfotransferase. Cysteic acid is converted to taurine by cysteine sulfinic acid decarboxylase. In food Taurine occurs naturally in fish and meat. The mean daily intake is appreciable from omnivore diets and low or negligible from a vegan diet. 
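As an aside on the ionization point made under Chemical and biochemical features above, here is a minimal Henderson–Hasselbalch sketch of why the sulfonic acid group is effectively fully deprotonated at intestinal pH; the pKa used is an assumed illustrative value, not a figure from this article.

def ionized_fraction(ph, pka=1.5):
    # Henderson-Hasselbalch: [A-]/[HA] = 10 ** (pH - pKa); the pKa here
    # is an assumed illustrative value for a strong sulfonic acid group.
    ratio = 10 ** (ph - pka)
    return ratio / (1 + ratio)

for ph in (2.0, 7.0, 8.0):  # roughly stomach-to-intestine range
    print(f"pH {ph}: {ionized_fraction(ph):.6f} ionized")
# At intestinal pH (~7-8) the sulfonate form dominates essentially completely.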
Taurine is partially destroyed by heat in processes such as baking and boiling. This is a concern for cat food, as cats have a dietary requirement for taurine and can easily become deficient. Either raw feeding or supplementing taurine can satisfy this requirement. Both lysine and taurine can mask the metallic flavor of potassium chloride, a salt substitute. Breast milk Prematurely born infants are believed to lack the enzymes needed to convert cystathionine to cysteine, and may, therefore, become deficient in taurine. Taurine is present in breast milk, and has been added to many infant formulas as a measure of prudence since the early 1980s. However, this practice has never been rigorously studied, and as such it has yet to be proven to be necessary, or even beneficial. Energy drinks and dietary supplements Taurine is an ingredient in some energy drinks in amounts of 1–3 g per serving. A 1999 assessment of European consumption of energy drinks estimated the resulting daily taurine intake. Research Taurine is not regarded as an essential human dietary nutrient and has not been assigned recommended intake levels. High-quality clinical studies to determine possible effects of taurine in the body or following dietary supplementation are absent from the literature. Preliminary human studies on the possible effects of taurine supplementation have been inadequate due to low subject numbers, inconsistent designs, and variable doses. Preliminary studies have suggested that supplementing with taurine may increase exercise capacity and affect lipid profiles in individuals with diabetes. Safety and toxicity According to the European Food Safety Authority, taurine is "considered to be a skin and eye irritant and skin sensitiser, and to be hazardous if inhaled;" it may be safe to consume up to 6 grams of taurine per day. Other sources indicate that taurine is safe for supplemental intake in normal healthy adults at up to 3 grams per day. A 2008 review found no documented reports of negative or positive health effects associated with the amount of taurine used in energy drinks, concluding, "The amounts of guarana, taurine, and ginseng found in popular energy drinks are far below the amounts expected to deliver either therapeutic benefits or adverse events". Animal dietary requirement Cats Cats lack the enzymatic machinery (sulfinoalanine decarboxylase) to produce taurine and must therefore acquire it from their diet. A taurine deficiency in cats can lead to retinal degeneration and eventually blindness – a condition known as central retinal degeneration (CRD) – as well as hair loss and tooth decay. Other effects of a diet lacking in this essential amino acid are dilated cardiomyopathy and reproductive failure in female cats. Decreased plasma taurine concentration has been demonstrated to be associated with feline dilated cardiomyopathy. Unlike CRD, the condition is reversible with supplementation. Taurine is now a requirement of the Association of American Feed Control Officials (AAFCO), and any dry or wet food product labeled as approved by the AAFCO should have a minimum of 0.1% taurine in dry food and 0.2% in wet food. Studies suggest the amino acid should be dosed per kilogram of bodyweight per day for domestic cats. Other mammals A number of other mammals also have a requirement for taurine. 
While the majority of dogs can synthesize taurine, case reports have described a single American cocker spaniel, 19 Newfoundland dogs, and a family of golden retrievers suffering from taurine deficiency treatable with supplementation. Foxes on fur farms also appear to require dietary taurine. The rhesus, cebus and cynomolgus monkeys each require taurine, at least in infancy. The giant anteater also requires taurine. Birds Taurine appears to be essential for the development of passerine birds. Many passerines seek out taurine-rich spiders to feed their young, particularly just after hatching. Researchers compared the behaviours and development of birds fed a taurine-supplemented diet to a control diet and found that juveniles fed taurine-rich diets as neonates were much greater risk takers and more adept at spatial learning tasks. Under natural conditions, each blue tit nestling receives taurine from its parents every day. Taurine can be synthesized by chickens. Supplementation has no effect on chickens raised under adequate lab conditions, but seems to help with growth under stresses such as heat and dense housing. Fish Species of fish, mostly carnivorous ones, show reduced growth and survival when the fish-based component of their feed is replaced with soy meal or feather meal. Taurine has been identified as the factor responsible for this phenomenon; supplementation of taurine to plant-based fish feed reverses these effects. Future aquaculture is expected to use more of these more environmentally friendly protein sources, so supplementation would become more important. The need for taurine in fish is conditional, differing by species and growth stage. The olive flounder, for example, has a lower capacity to synthesize taurine than the rainbow trout. Juvenile fish are less efficient at taurine biosynthesis due to reduced cysteine sulfinate decarboxylase levels. Derivatives Taurine is used in the preparation of the anthelmintic drug netobimin (Totabin). Other derivatives include taurolidine, taurocholic acid and tauroselcholic acid, tauromustine, and thiotaurine. 5-Taurinomethyluridine and 5-taurinomethyl-2-thiouridine are modified uridines in (human) mitochondrial tRNA. Tauryl is the functional group attaching at the sulfur, 2-aminoethylsulfonyl; taurino is the functional group attaching at the nitrogen, 2-sulfoethylamino. Peroxytaurine is a degradation product formed by both superoxide and heat degradation.
Physical sciences
Amides and amines
Chemistry
61164
https://en.wikipedia.org/wiki/Transmitter
Transmitter
In electronics and telecommunications, a radio transmitter or just transmitter (often abbreviated as XMTR or TX in technical documents) is an electronic device which produces radio waves with an antenna in order to transmit a signal to a radio receiver. The transmitter itself generates a radio frequency alternating current, which is applied to the antenna. When excited by this alternating current, the antenna radiates radio waves. Transmitters are necessary component parts of all electronic devices that communicate by radio, such as radio (audio) and television broadcasting stations, cell phones, walkie-talkies, wireless computer networks, Bluetooth enabled devices, garage door openers, two-way radios in aircraft, ships, and spacecraft, radar sets, and navigational beacons. The term transmitter is usually limited to equipment that generates radio waves for communication purposes, or for radiolocation, such as radar and navigational transmitters. Generators of radio waves for heating or industrial purposes, such as microwave ovens or diathermy equipment, are not usually called transmitters, even though they often have similar circuits. The term is popularly used more specifically to refer to a broadcast transmitter, a transmitter used in broadcasting, as in FM radio transmitter or television transmitter. This usage typically includes the transmitter proper, the antenna, and often the building it is housed in. Description A transmitter can be a separate piece of electronic equipment, or an electrical circuit within another electronic device. A transmitter and a receiver combined in one unit is called a transceiver. The purpose of most transmitters is radio communication of information over a distance. The information is provided to the transmitter in the form of an electronic signal called the modulation signal, such as an audio (sound) signal from a microphone, a video (TV) signal from a video camera, or, in wireless networking devices, a digital signal from a computer. The transmitter generates a radio frequency signal called the carrier signal, which, when applied to the antenna, produces the radio waves. It combines the carrier with the modulation signal, a process called modulation. The information can be added to the carrier in several different ways, in different types of transmitters. In an amplitude modulation (AM) transmitter, the information is added to the radio signal by varying its amplitude. In a frequency modulation (FM) transmitter, it is added by varying the radio signal's frequency slightly. Many other types of modulation are also used. The radio signal from the transmitter is applied to the antenna, which radiates the energy as radio waves. The antenna may be enclosed inside the case or attached to the outside of the transmitter, as in portable devices such as cell phones, walkie-talkies, and garage door openers. In more powerful transmitters, the antenna may be located on top of a building or on a separate tower, and connected to the transmitter by a feed line, that is, a transmission line. Operation Electromagnetic waves are radiated by electric charges when they are accelerated. Radio waves, electromagnetic waves of radio frequency, are generated by time-varying electric currents, consisting of electrons flowing through a metal conductor called an antenna which are changing their velocity and thus accelerating. An alternating current flowing back and forth in an antenna will create an oscillating magnetic field around the conductor.
The alternating voltage will also charge the ends of the conductor alternately positive and negative, creating an oscillating electric field around the conductor. If the frequency of the oscillations is high enough, in the radio frequency range above about 20 kHz, the oscillating coupled electric and magnetic fields will radiate away from the antenna into space as an electromagnetic wave, a radio wave. A radio transmitter is an electronic circuit which transforms electric power from a power source, a battery or mains power, into a radio frequency alternating current to apply to the antenna, and the antenna radiates the energy from this current as radio waves. The transmitter also encodes information such as an audio or video signal into the radio frequency current to be carried by the radio waves. When they strike the antenna of a radio receiver, the waves excite similar (but less powerful) radio frequency currents in it. The radio receiver extracts the information from the received waves. Components A practical radio transmitter mainly consists of the following parts:
- In high-power transmitters, a power supply circuit to transform the input electrical power to the higher voltages needed to produce the required power output.
- An electronic oscillator circuit to generate the radio frequency signal. This usually generates a sine wave of constant amplitude, called the carrier wave because it produces the radio waves which "carry" the information through space. In most modern transmitters, this is a crystal oscillator in which the frequency is precisely controlled by the vibrations of a quartz crystal. The frequency of the carrier wave is considered the frequency of the transmitter.
- A modulator circuit to add the information to be transmitted to the carrier wave produced by the oscillator. This is done by varying some aspect of the carrier wave. The information is provided to the transmitter as an electronic signal called the modulation signal. The modulation signal may be an audio signal, which represents sound; a video signal, which represents moving images; or, for data, a binary digital signal which represents a sequence of bits, a bitstream. Different types of transmitters use different modulation methods to transmit information (a short illustrative sketch follows this list):
  - In an AM (amplitude modulation) transmitter, the amplitude (strength) of the carrier wave is varied in proportion to the modulation signal.
  - In an FM (frequency modulation) transmitter, the frequency of the carrier is varied by the modulation signal.
  - In an FSK (frequency-shift keying) transmitter, which transmits digital data, the frequency of the carrier is shifted between two frequencies which represent the two binary digits, 0 and 1.
  - OFDM (orthogonal frequency-division multiplexing) is a family of complicated digital modulation methods very widely used in high-bandwidth systems such as Wi-Fi networks, cellphones, digital television broadcasting, and digital audio broadcasting (DAB) to transmit digital data using a minimum of radio spectrum bandwidth. OFDM has higher spectral efficiency and more resistance to fading than AM or FM. In OFDM, multiple radio carrier waves closely spaced in frequency are transmitted within the radio channel, with each carrier modulated with bits from the incoming bitstream so multiple bits are being sent simultaneously, in parallel. At the receiver the carriers are demodulated and the bits are combined in the proper order into one bitstream.
Many other types of modulation are also used.
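To make the AM, FM, and FSK descriptions above concrete, the following is a minimal NumPy sketch of the three schemes. The sample rate, carrier frequency, tone frequency, deviation, and bit pattern are illustrative values chosen for the example, not parameters of any particular transmitter.

```python
import numpy as np

fs = 100_000                     # sample rate, Hz (illustrative)
t = np.arange(0, 0.01, 1 / fs)  # 10 ms of signal (1000 samples)
fc = 10_000                      # carrier frequency, Hz
fm = 500                         # modulating (audio) tone, Hz

m = np.sin(2 * np.pi * fm * t)   # modulation signal: a pure tone

# AM: the carrier's amplitude is varied in proportion to the modulation signal.
am = (1 + 0.5 * m) * np.cos(2 * np.pi * fc * t)

# FM: the carrier's frequency is varied; the phase is the running integral
# of the instantaneous frequency.
deviation = 2_000                # peak frequency deviation, Hz
phase = 2 * np.pi * (fc * t + deviation * np.cumsum(m) / fs)
fm_signal = np.cos(phase)

# FSK: digital bits shift the carrier between two discrete frequencies.
bits = np.repeat([0, 1, 1, 0, 1], len(t) // 5)        # 5 symbols
f_inst = np.where(bits == 1, fc + 1_000, fc - 1_000)  # mark / space
fsk = np.cos(2 * np.pi * np.cumsum(f_inst) / fs)
```

A real transmitter performs these operations in analog circuitry or dedicated DSP hardware; the sketch only mirrors the mathematics of each scheme.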
In large transmitters the oscillator and modulator together are often referred to as the exciter. The remaining parts are:
- A radio frequency (RF) amplifier to increase the power of the signal, to increase the range of the radio waves.
- An impedance matching (antenna tuner) circuit to transform the output impedance of the transmitter to match the impedance of the antenna (or the transmission line to the antenna), to transfer power efficiently to the antenna. If these impedances are not equal, it causes a condition called standing waves, in which the power is reflected back from the antenna toward the transmitter, wasting power and sometimes overheating the transmitter. (A short sketch quantifying this mismatch follows the Regulation section below.)
In higher-frequency transmitters, in the UHF and microwave range, free-running oscillators are unstable at the output frequency. Older designs used an oscillator at a lower frequency, which was multiplied by frequency multipliers to get a signal at the desired frequency. Modern designs more commonly use an oscillator at the operating frequency which is stabilized by phase locking to a very stable lower-frequency reference, usually a crystal oscillator. Regulation Two radio transmitters in the same area that attempt to transmit on the same frequency will interfere with each other, causing garbled reception, so neither transmission may be received clearly. Interference with radio transmissions can not only have a large economic cost, it can be life-threatening (for example, in the case of interference with emergency communications or air traffic control). For this reason, in most countries, use of transmitters is strictly controlled by law. Transmitters must be licensed by governments, under a variety of license classes depending on use, such as broadcast, marine radio, airband, and amateur radio, and are restricted to certain frequencies and power levels. A body called the International Telecommunication Union (ITU) allocates the frequency bands in the radio spectrum to various classes of users. In some classes, each transmitter is given a unique call sign consisting of a string of letters and numbers which must be used as an identifier in transmissions. The operator of the transmitter usually must hold a government license, such as a general radiotelephone operator license, which is obtained by passing a test demonstrating adequate technical and legal knowledge of safe radio operation. Exceptions to the above regulations allow the unlicensed use of low-power short-range transmitters in consumer products such as cell phones, cordless telephones, wireless microphones, walkie-talkies, Wi-Fi and Bluetooth devices, garage door openers, and baby monitors. In the US, these fall under Part 15 of the Federal Communications Commission (FCC) regulations. Although they can be operated without a license, these devices still generally must be type-approved before sale.
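Returning to the impedance-matching item above: the degree of mismatch between a source (or feed line) of impedance Z0 and an antenna of impedance ZL is commonly quantified by the reflection coefficient Γ = (ZL − Z0)/(ZL + Z0) and the voltage standing wave ratio (VSWR) derived from it. A minimal sketch, using a 50-ohm source and a 75-ohm antenna purely as illustrative values:

```python
def reflection_coefficient(z_load: complex, z_source: complex = 50.0) -> complex:
    """Fraction of the incident voltage wave reflected at the load."""
    return (z_load - z_source) / (z_load + z_source)

def vswr(gamma: complex) -> float:
    """Voltage standing wave ratio; 1.0 means a perfect match."""
    g = abs(gamma)
    return (1 + g) / (1 - g)

gamma = reflection_coefficient(75.0)   # 75-ohm antenna on a 50-ohm line
print(f"|Gamma| = {abs(gamma):.3f}, VSWR = {vswr(gamma):.2f}")
# |Gamma| = 0.200, VSWR = 1.50; reflected power fraction is |Gamma|**2 = 4%.
```

The reflected-power fraction |Γ|² is what the impedance-matching network is designed to minimize.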
At the receiver, these pulses were sometimes directly recorded on paper tapes, but more common was audible reception: the pulses were heard as beeps in the receiver's earphones and were translated back to text by an operator who knew Morse code. These spark-gap transmitters were used during the first three decades of radio (1887–1917), called the wireless telegraphy or "spark" era. Because they generated damped waves, spark transmitters were electrically "noisy". Their energy was spread over a broad band of frequencies, creating radio noise which interfered with other transmitters. Damped wave emissions were banned by international law in 1934. Two short-lived competing transmitter technologies came into use after the turn of the century, which were the first continuous wave transmitters: the arc converter (Poulsen arc) in 1904 and the Alexanderson alternator around 1910, both of which were used into the 1920s. All these early technologies were replaced by vacuum tube transmitters in the 1920s, which used the feedback oscillator invented by Edwin Armstrong and Alexander Meissner around 1912, based on the Audion (triode) vacuum tube invented by Lee De Forest in 1906. Vacuum tube transmitters were inexpensive and produced continuous waves, and could be easily modulated to transmit audio (sound) using amplitude modulation (AM). This made AM radio broadcasting possible, which began in about 1920. Practical frequency modulation (FM) transmission was invented in 1933 by Edwin Armstrong, who showed that it was less vulnerable to noise and static than AM. The first FM radio station was licensed in 1937. Experimental television transmission had been conducted by radio stations since the late 1920s, but practical television broadcasting did not begin until the late 1930s. The development of radar during World War II motivated the evolution of high-frequency transmitters in the UHF and microwave ranges, using new active devices such as the magnetron, klystron, and traveling wave tube. The invention of the transistor allowed the development in the 1960s of small portable transmitters such as wireless microphones, garage door openers and walkie-talkies. The development of the integrated circuit (IC) in the 1970s made possible the current proliferation of wireless devices, such as cell phones and Wi-Fi networks, in which integrated digital transmitters and receivers (wireless modems) in portable devices operate automatically, in the background, to exchange data with wireless networks. The need to conserve bandwidth in the increasingly congested radio spectrum is driving the development of new types of transmitters such as spread spectrum, trunked radio systems and cognitive radio. A related trend has been an ongoing transition from analog to digital radio transmission methods. Digital modulation can have greater spectral efficiency than analog modulation; that is, it can often transmit more information (data rate) in a given bandwidth than analog, using data compression algorithms. Other advantages of digital transmission are increased noise immunity, and the greater flexibility and processing power of digital signal processing integrated circuits.
Technology
Broadcasting
null
61190
https://en.wikipedia.org/wiki/Cauchy%27s%20integral%20theorem
Cauchy's integral theorem
In mathematics, the Cauchy integral theorem (also known as the Cauchy–Goursat theorem) in complex analysis, named after Augustin-Louis Cauchy (and Édouard Goursat), is an important statement about line integrals for holomorphic functions in the complex plane. Essentially, it says that if $f(z)$ is holomorphic in a simply connected domain $\Omega$, then for any simple closed contour $C$ in $\Omega$, the contour integral is zero: $\oint_C f(z)\,dz = 0$. Statement Fundamental theorem for complex line integrals If $f(z)$ is a holomorphic function on an open region $U$, and $\gamma$ is a curve in $U$ from $z_0$ to $z_1$, then $\int_\gamma f'(z)\,dz = f(z_1) - f(z_0)$. Also, when $f(z)$ has a single-valued antiderivative in an open region $U$, then the path integral $\int_\gamma f(z)\,dz$ is path independent for all paths in $U$. Formulation on simply connected regions Let $U \subseteq \mathbb{C}$ be a simply connected open set, and let $f : U \to \mathbb{C}$ be a holomorphic function. Let $\gamma$ be a smooth closed curve in $U$. Then $\oint_\gamma f(z)\,dz = 0$. (The condition that $U$ be simply connected means that $U$ has no "holes", or in other words, that the fundamental group of $U$ is trivial.) General formulation Let $U \subseteq \mathbb{C}$ be an open set, and let $f : U \to \mathbb{C}$ be a holomorphic function. Let $\gamma : [a, b] \to U$ be a smooth closed curve. If $\gamma$ is homotopic to a constant curve, then $\oint_\gamma f(z)\,dz = 0$. (Recall that a curve is homotopic to a constant curve if there exists a smooth homotopy, within $U$, from the curve to the constant curve. Intuitively, this means that one can shrink the curve into a point without exiting the space.) The first version is a special case of this because on a simply connected set, every closed curve is homotopic to a constant curve. Main example In both cases, it is important to remember that the curve $\gamma$ does not surround any "holes" in the domain, or else the theorem does not apply. A famous example is the following curve: $\gamma(t) = e^{it}$, $t \in [0, 2\pi]$, which traces out the unit circle. Here the following integral is nonzero: $\oint_\gamma \frac{1}{z}\,dz = 2\pi i$. The Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined at $z = 0$. Intuitively, $\gamma$ surrounds a "hole" in the domain of $f$, so $\gamma$ cannot be shrunk to a point without exiting the space. Thus, the theorem does not apply. Discussion As Édouard Goursat showed, Cauchy's integral theorem can be proven assuming only that the complex derivative $f'(z)$ exists everywhere in $U$. This is significant because one can then prove Cauchy's integral formula for these functions, and from that deduce these functions are infinitely differentiable. The condition that $U$ be simply connected means that $U$ has no "holes" or, in homotopy terms, that the fundamental group of $U$ is trivial; for instance, every open disk $U_{z_0} = \{z : |z - z_0| < r\}$, for $r > 0$, qualifies. The condition is crucial; consider $\gamma(t) = e^{it}$, $t \in [0, 2\pi]$, which traces out the unit circle; then the path integral $\oint_\gamma \frac{1}{z}\,dz = 2\pi i$ is nonzero; the Cauchy integral theorem does not apply here since $f(z) = 1/z$ is not defined (and is certainly not holomorphic) at $z = 0$. One important consequence of the theorem is that path integrals of holomorphic functions on simply connected domains can be computed in a manner familiar from the fundamental theorem of calculus: let $U$ be a simply connected open subset of $\mathbb{C}$, let $f : U \to \mathbb{C}$ be a holomorphic function, and let $\gamma$ be a piecewise continuously differentiable path in $U$ with start point $\alpha$ and end point $\beta$. If $F$ is a complex antiderivative of $f$, then $\int_\gamma f(z)\,dz = F(\beta) - F(\alpha)$. The Cauchy integral theorem is valid with a weaker hypothesis than given above, e.g. given $U$, a simply connected open subset of $\mathbb{C}$, we can weaken the assumptions to $f$ being holomorphic on $U$ and continuous on $\overline{U}$, and $\gamma$ a rectifiable simple loop in $\overline{U}$. The Cauchy integral theorem leads to Cauchy's integral formula and the residue theorem.
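As a purely illustrative numerical check (not part of any proof), the contour integral over the unit circle $\gamma(t) = e^{it}$ can be discretized. For the everywhere-holomorphic function $z^2$ the result is numerically zero, while for $1/z$ it is $2\pi i$, matching the example above; the sample count is an arbitrary choice.

```python
import numpy as np

def contour_integral(f, n=10_000):
    """Numerically integrate f over the unit circle gamma(t) = exp(i t)."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    z = np.exp(1j * t)
    dz = 1j * z * (2 * np.pi / n)   # gamma'(t) dt
    return np.sum(f(z) * dz)

print(contour_integral(lambda z: z**2))   # ~0: z^2 is holomorphic everywhere
print(contour_integral(lambda z: 1 / z))  # ~2*pi*i: 1/z is not defined at 0
```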
Proof If one assumes that the partial derivatives of a holomorphic function are continuous, the Cauchy integral theorem can be proven as a direct consequence of Green's theorem and the fact that the real and imaginary parts of $f = u + iv$ must satisfy the Cauchy–Riemann equations in the region bounded by $\gamma$, and moreover in an open neighborhood $U$ of this region. Cauchy provided this proof, but it was later proven by Goursat without requiring techniques from vector calculus, or the continuity of partial derivatives. We can break the integrand $f$, as well as the differential $dz$, into their real and imaginary components: $f = u + iv$ and $dz = dx + i\,dy$. In this case we have $\oint_\gamma f(z)\,dz = \oint_\gamma (u + iv)(dx + i\,dy) = \oint_\gamma (u\,dx - v\,dy) + i \oint_\gamma (v\,dx + u\,dy)$. By Green's theorem, we may then replace the integrals around the closed contour $\gamma$ with area integrals throughout the domain $D$ that is enclosed by $\gamma$, as follows: $\oint_\gamma (u\,dx - v\,dy) = \iint_D \left( -\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) dx\,dy$ and $\oint_\gamma (v\,dx + u\,dy) = \iint_D \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy$. But as the real and imaginary parts of a function holomorphic in the domain $D$, $u$ and $v$ must satisfy the Cauchy–Riemann equations there: $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial u}{\partial y} = -\frac{\partial v}{\partial x}$. We therefore find that both integrands (and hence their integrals) are zero: $\iint_D \left( -\frac{\partial v}{\partial x} - \frac{\partial u}{\partial y} \right) dx\,dy = 0$ and $\iint_D \left( \frac{\partial u}{\partial x} - \frac{\partial v}{\partial y} \right) dx\,dy = 0$. This gives the desired result: $\oint_\gamma f(z)\,dz = 0$.
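As a small worked instance of the antiderivative formula above (chosen for illustration, not taken from the original text): $f(z) = z^2$ is entire with antiderivative $F(z) = z^3/3$, so the path integral depends only on the endpoints.

```latex
% f(z) = z^2 is entire, with antiderivative F(z) = z^3/3, so for any
% piecewise C^1 path \gamma in \mathbb{C} from \alpha to \beta:
\[
  \int_\gamma z^2 \, dz = F(\beta) - F(\alpha) = \frac{\beta^3 - \alpha^3}{3},
\]
% independent of the chosen path. For a closed path (\alpha = \beta)
% the integral is zero, as the Cauchy integral theorem asserts.
```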
Mathematics
Complex analysis
null
61211
https://en.wikipedia.org/wiki/Anatase
Anatase
Anatase is a metastable mineral form of titanium dioxide (TiO2) with a tetragonal crystal structure. Although colorless or white when pure, anatase in nature is usually a black solid due to impurities. Three other polymorphs (or mineral forms) of titanium dioxide are known to occur naturally: brookite, akaogiite, and rutile, with rutile being the most common and most stable of these polymorphs. Anatase is formed at relatively low temperatures and found in minor concentrations in igneous and metamorphic rocks. Glass coated with a thin film of TiO2 shows antifogging and self-cleaning properties under ultraviolet radiation. Anatase is always found as small, isolated, and sharply developed crystals, and like rutile, it crystallizes in the tetragonal system. Anatase is metastable at all temperatures and pressures, with rutile being the equilibrium polymorph. Nevertheless, anatase is often the first titanium dioxide phase to form in many processes due to its lower surface energy, with a transformation to rutile taking place at elevated temperatures. Although the degree of symmetry is the same for both the anatase and rutile phases, there is no relation between the interfacial angles of the two minerals, except in the prism-zone of 45° and 90°. The common octahedral crystal habit of anatase, with four perfect cleavage planes, has an angle over its polar edge of 82°9', whereas a rutile octahedron has a polar edge angle of only 56°52½'. The steeper angle gives anatase crystals a longer vertical axis and a more slender appearance than rutile. Additional important differences exist between the physical characters of anatase and rutile. For example, anatase is less hard (5.5–6 vs. 6–6.5 on the Mohs scale) and less dense (specific gravity about 3.9 vs. 4.2) than rutile. Anatase is also optically negative, whereas rutile is optically positive. Anatase has a more strongly adamantine or metallic-adamantine luster than that of rutile as well. Nomenclature The modern name was introduced by René Just Haüy in 1801, but the mineral was known and described before. It derives from the Greek anatasis, 'stretching out', because the crystals are stretched along an axis compared to other dipyramidal ones. Another name commonly in use for anatase is octahedrite, a name which predates anatase and was given by Horace Bénédict de Saussure because of the common (acute) octahedral habit of the crystals. Other names, now obsolete, are oisanite (by Jean-Claude Delamétherie) and dauphinite, from the well-known French locality of Le Bourg-d'Oisans in Dauphiné. Crystal habit Two growth habits of anatase crystals may be distinguished. The more common occurs as simple acute octahedra with an indigo-blue to black color and steely luster. Crystals of this kind are abundant at Le Bourg-d'Oisans in Dauphiné, France, where they are associated with rock crystal, feldspar, and axinite in crevices in granite and mica schist. Similar crystals of microscopic size are widely distributed in sedimentary rocks such as sandstones, clays, and slates, from which they may be separated by washing away the lighter constituents of the powdered rock. The (101) plane of anatase is the most thermodynamically stable surface and thus the most widely exposed facet in natural and synthetic anatase. Crystals of the second type have numerous pyramidal faces developed, and they are usually flatter or sometimes prismatic in habit. Their color is honey-yellow to brown.
Such crystals closely resemble the mineral xenotime in appearance and were historically thought to be a special form of xenotime, termed wiserine. They occur attached to the walls of crevices in gneisses in the Alps, a well-known locality being the Binnenthal near Brig in canton Valais, Switzerland. While anatase is not an equilibrium phase of TiO2, it is metastable near room temperature. At temperatures between 550 and about 1000 °C, anatase converts to rutile. The temperature of this transformation strongly depends on impurities, or dopants, as well as the morphology of the sample. Synthetic anatase Due to its potential application as a semiconductor, anatase is often prepared synthetically. Crystalline anatase can be prepared in laboratories by chemical methods such as the sol-gel process. This might be done through controlled hydrolysis of titanium tetrachloride (TiCl4) or titanium ethoxide. Often dopants are included in such synthesis processes to control the morphology, electronic structure, and surface chemistry of an anatase sample.
Physical sciences
Minerals
Earth science
61213
https://en.wikipedia.org/wiki/Laurent%20series
Laurent series
In mathematics, the Laurent series of a complex function $f(z)$ is a representation of that function as a power series which includes terms of negative degree. It may be used to express complex functions in cases where a Taylor series expansion cannot be applied. The Laurent series was named after and first published by Pierre Alphonse Laurent in 1843. Karl Weierstrass had previously described it in a paper written in 1841 but not published until 1894. Definition The Laurent series for a complex function $f(z)$ about an arbitrary point $c$ is given by $f(z) = \sum_{n=-\infty}^{\infty} a_n (z - c)^n$, where the coefficients $a_n$ are defined by a contour integral that generalizes Cauchy's integral formula: $a_n = \frac{1}{2\pi i} \oint_\gamma \frac{f(z)}{(z - c)^{n+1}}\,dz$. The path of integration $\gamma$ is counterclockwise around a Jordan curve enclosing $c$ and lying in an annulus $A$ in which $f(z)$ is holomorphic (analytic). The expansion for $f(z)$ will then be valid anywhere inside the annulus. When $\gamma$ is defined as the circle $|z - c| = \varrho$, where $r < \varrho < R$, this amounts to computing the complex Fourier coefficients of the restriction of $f$ to $\gamma$. The fact that these integrals are unchanged by a deformation of the contour $\gamma$ is an immediate consequence of Green's theorem. One may also obtain the Laurent series for a complex function $f(z)$ at $z = \infty$. However, this is the same as when $R \to \infty$. In practice, the above integral formula may not offer the most practical method for computing the coefficients $a_n$ for a given function $f(z)$; instead, one often pieces together the Laurent series by combining known Taylor expansions. Because the Laurent expansion of a function is unique whenever it exists, any expression of this form that equals the given function $f(z)$ in some annulus must actually be the Laurent expansion of $f(z)$. Convergence Laurent series with complex coefficients are an important tool in complex analysis, especially to investigate the behavior of functions near singularities. Consider for instance the function $f(x) = e^{-1/x^2}$ with $f(0) = 0$. As a real function, it is infinitely differentiable everywhere; as a complex function however it is not differentiable at $x = 0$. The Laurent series of $f$ is obtained via the power series representation of the exponential function: $e^{-1/x^2} = \sum_{n=0}^{\infty} \frac{(-1)^n}{n!\,x^{2n}}$, which converges to $e^{-1/x^2}$ for all complex $x$ except at the singularity $x = 0$. (A plot of the function together with its truncated Laurent approximations illustrates the convergence: as the number of terms grows, the approximation becomes exact for all complex numbers $x$ except at the singularity $x = 0$.) More generally, Laurent series can be used to express holomorphic functions defined on an annulus, much as power series are used to express holomorphic functions defined on a disc. Suppose $\sum_{n=-\infty}^{\infty} a_n (z - c)^n$ is a given Laurent series with complex coefficients $a_n$ and a complex center $c$. Then there exists a unique inner radius $r$ and outer radius $R$ such that: The Laurent series converges on the open annulus $A = \{z : r < |z - c| < R\}$. That is, both the positive- and negative-degree power series converge. Furthermore, this convergence will be uniform on compact sets. Finally, the convergent series defines a holomorphic function $f(z)$ on the open annulus. Outside the annulus, the Laurent series diverges. That is, at each point in the exterior of $A$, either the positive- or negative-degree power series diverges. On the boundary of the annulus, one cannot make a general statement, except that there is at least one point on the inner boundary and one point on the outer boundary such that $f(z)$ cannot be holomorphically extended to those points, giving rise to a Riemann–Hilbert problem. It is possible that $r$ may be zero or $R$ may be infinite; at the other extreme, it's not necessarily true that $r$ is less than $R$.
These radii can be computed by taking the limit superior of the coefficients $a_n$: $r = \limsup_{n \to \infty} |a_{-n}|^{1/n}$ and $\frac{1}{R} = \limsup_{n \to \infty} |a_n|^{1/n}$. When $r = 0$, the coefficient $a_{-1}$ of the Laurent expansion is called the residue of $f(z)$ at the singularity $c$. For example, the function $f(z) = \frac{e^z}{z} + e^{1/z}$ is holomorphic everywhere except at $z = 0$. The Laurent expansion about $c = 0$ can then be obtained from the power series representation of the exponential function; hence, the residue is given by $a_{-1} = 2$, since both $\frac{e^z}{z}$ and $e^{1/z}$ contribute a term $\frac{1}{z}$. Conversely, for a holomorphic function $f(z)$ defined on the annulus $r < |z - c| < R$, there always exists a unique Laurent series with center $c$ which converges (at least on the annulus) to $f(z)$. For example, consider the following rational function, along with its partial fraction expansion: $f(z) = \frac{1}{(z-1)(z-2)} = \frac{1}{z-2} - \frac{1}{z-1}$. This function has singularities at $z = 1$ and $z = 2$, where the denominator is zero and the expression is therefore undefined. A Taylor series about $z = 0$ (which yields a power series) will only converge in a disc of radius 1, since it "hits" the singularity at $z = 1$. However, there are three possible Laurent expansions about 0, depending on the region where $|z|$ lies: One series is defined on the inner disc where $|z| < 1$; it is the same as the Taylor series, $f(z) = \sum_{n=0}^{\infty} \left(1 - \frac{1}{2^{n+1}}\right) z^n$. This follows from the partial fraction form of the function, along with the formula for the sum of a geometric series, $\frac{1}{z - a} = -\sum_{n=0}^{\infty} \frac{z^n}{a^{n+1}}$ for $|z| < |a|$. The second series is defined on the middle annulus $1 < |z| < 2$, where $z$ is caught between the two singularities: $f(z) = -\sum_{n=1}^{\infty} \frac{1}{z^n} - \sum_{n=0}^{\infty} \frac{z^n}{2^{n+1}}$. Here, we use the alternative form of the geometric series summation, $\frac{1}{z - a} = \sum_{n=1}^{\infty} \frac{a^{n-1}}{z^n}$ for $|z| > |a|$. The third series is defined on the infinite outer annulus where $2 < |z| < \infty$ (which is also the Laurent expansion at $z = \infty$): $f(z) = \sum_{n=1}^{\infty} \frac{2^{n-1} - 1}{z^n}$. This series can be derived using geometric series as before, or by performing polynomial long division of 1 by $(z-1)(z-2)$, not stopping with a remainder but continuing into $z^{-n}$ terms; indeed, the "outer" Laurent series of a rational function is analogous to the decimal form of a fraction. (The "inner" Taylor series expansion can be obtained similarly, just by reversing the term order in the division algorithm.) Uniqueness Suppose a function $f(z)$ holomorphic on the annulus $r < |z - c| < R$ has two Laurent series: $f(z) = \sum_{n=-\infty}^{\infty} a_n (z - c)^n = \sum_{n=-\infty}^{\infty} b_n (z - c)^n$. Multiply both sides by $(z - c)^{-k-1}$, where $k$ is an arbitrary integer, and integrate on a path $\gamma$ inside the annulus: $\oint_\gamma \sum_{n=-\infty}^{\infty} a_n (z - c)^{n-k-1}\,dz = \oint_\gamma \sum_{n=-\infty}^{\infty} b_n (z - c)^{n-k-1}\,dz$. The series converges uniformly on $r + \varepsilon \leq |z - c| \leq R - \varepsilon$, where $\varepsilon$ is a positive number small enough for $\gamma$ to be contained in the constricted closed annulus, so the integration and summation can be interchanged. Substituting the identity $\oint_\gamma (z - c)^{n-k-1}\,dz = 2\pi i\,\delta_{nk}$ into the summation yields $a_k = b_k$. Hence the Laurent series is unique. Laurent polynomials A Laurent polynomial is a Laurent series in which only finitely many coefficients are non-zero. Laurent polynomials differ from ordinary polynomials in that they may have terms of negative degree. Principal part The principal part of a Laurent series is the series of terms with negative degree, that is, $\sum_{k=1}^{\infty} a_{-k} (z - c)^{-k}$. If the principal part of $f$ is a finite sum, then $f$ has a pole at $c$ of order equal to (negative) the degree of the highest term; on the other hand, if $f$ has an essential singularity at $c$, the principal part is an infinite sum (meaning it has infinitely many non-zero terms). If the inner radius of convergence of the Laurent series for $f$ is 0, then $f$ has an essential singularity at $c$ if and only if the principal part is an infinite sum, and has a pole otherwise. If the inner radius of convergence is positive, $f$ may have infinitely many negative terms but still be regular at $c$, as in the example above, in which case it is represented by a different Laurent series in a disk about $c$.
Laurent series with only finitely many negative terms are well-behaved (they are a power series divided by $(z - c)^k$, and can be analyzed similarly), while Laurent series with infinitely many negative terms have complicated behavior on the inner circle of convergence. Multiplication and sum Laurent series cannot in general be multiplied. Algebraically, the expression for the terms of the product may involve infinite sums which need not converge (one cannot take the convolution of integer sequences). Geometrically, the two Laurent series may have non-overlapping annuli of convergence. Two Laurent series with only finitely many negative terms can be multiplied: algebraically, the sums are all finite; geometrically, these have poles at $c$ and inner radius of convergence 0, so they both converge on an overlapping annulus. Thus when defining formal Laurent series, one requires Laurent series with only finitely many negative terms. Similarly, the sum of two convergent Laurent series need not converge, though it is always defined formally, but the sum of two bounded-below Laurent series (or any Laurent series on a punctured disk) has a non-empty annulus of convergence. Also, for a field $F$, by the sum and multiplication defined above, formal Laurent series form the field $F((x))$, which is also the field of fractions of the ring $F[[x]]$ of formal power series.
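As an illustrative check of the coefficient formula $a_n = \frac{1}{2\pi i}\oint_\gamma \frac{f(z)}{(z-c)^{n+1}}\,dz$, the sketch below integrates numerically on a circle of radius 1.5, which lies inside the middle annulus $1 < |z| < 2$ of the rational function discussed above; the radius and sample count are arbitrary choices for the example.

```python
import numpy as np

def laurent_coeff(f, n, c=0.0, radius=1.5, samples=10_000):
    """Approximate a_n = (1/(2 pi i)) * integral of f(z)/(z-c)**(n+1) dz
    taken counterclockwise over a circle of the given radius about c."""
    t = np.linspace(0, 2 * np.pi, samples, endpoint=False)
    z = c + radius * np.exp(1j * t)
    dz = 1j * (z - c) * (2 * np.pi / samples)   # gamma'(t) dt
    return np.sum(f(z) * dz / (z - c) ** (n + 1)) / (2j * np.pi)

f = lambda z: 1 / ((z - 1) * (z - 2))

# The middle-annulus expansion derived above predicts
# a_n = -1 / 2**(n+1) for n >= 0 and a_n = -1 for n <= -1.
for n in (-2, -1, 0, 1):
    print(n, laurent_coeff(f, n).real)   # -1, -1, -0.5, -0.25 (to rounding)
```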
Mathematics
Sequences and series
null
61260
https://en.wikipedia.org/wiki/Filling%20station
Filling station
A filling station (also known as a gas station or petrol station) is a facility that sells fuel and engine lubricants for motor vehicles. The most common fuels sold are gasoline (or petrol) and diesel fuel. Fuel dispensers are used to pump gasoline, diesel, compressed natural gas, compressed hydrogen, hydrogen compressed natural gas, liquefied petroleum gas, liquid hydrogen, kerosene, alcohol fuels (like methanol, ethanol, butanol, and propanol), biofuels (like straight vegetable oil and biodiesel), or other types of fuel into the tanks within vehicles and calculate the financial cost of the fuel transferred to the vehicle. Besides gasoline pumps, one other significant device which is also found in filling stations and can refuel certain (compressed-air) vehicles is an air compressor, although generally these are just used to inflate car tires. Many filling stations provide convenience stores, which may sell convenience food, beverages, tobacco products, lottery tickets, newspapers, magazines, and, in some cases, a small selection of grocery items, such as milk or eggs. Some also sell propane or butane and have added shops to their primary business. Conversely, some chain stores, such as supermarkets, discount stores, warehouse clubs, or traditional convenience stores, have provided fuel pumps on the premises. Terminology In North America the fuel is known as "gasoline" or "gas" for short, and the terms "gas station" and "service station" are used in the United States, Canada, and the Caribbean. In some regions of Canada, the term "gas bar" (or "gasbar") is used. In the rest of the English-speaking world the fuel is known as "petrol". As a result, the term "petrol station" or "petrol pump" is used in the United Kingdom. In Ireland, New Zealand and South Africa "garage" and "forecourt" are still commonly used. Similarly, in Australia, New Zealand, the United Kingdom, and Ireland, the term "service station" describes any petrol station; Australians and New Zealanders also call it a "servo". In India, Pakistan and Bangladesh, it is called a "petrol pump" or a "petrol bunk". In Japanese, a commonly used term is gasorin sutando ("gasoline stand"), although the abbreviation SS (for service station) is also used. History The first known filling station was the city pharmacy in Wiesloch, Germany, where Bertha Benz refilled the tank of the first automobile on its maiden trip from Mannheim to Pforzheim in 1888. Shortly thereafter other pharmacies sold gasoline as a side business. Since 2008 the Bertha Benz Memorial Route has commemorated this event. Brazil The first "posto de gasolina" of South America was opened in Santos, São Paulo, Brazil, in 1920. It was located on Ana Costa Avenue, in front of the beach, on a corner next to the Hotel Atlântico, which occupies the site nowadays. It was owned by Esso and introduced by Antonio Duarte Moreira, a taxi entrepreneur. Russia In Russia, the first filling stations appeared in 1911, when the Imperial Automobile Society signed an agreement with the partnership "Br. Nobel". By 1914 about 440 stations functioned in major cities across the country. In the mid-1960s in Moscow there were about 250 stations. A significant boost in retail network development occurred with the mass launch of the car "Zhiguli" at the Volga Automobile Plant, which was built in Tolyatti in 1970. Gasoline for vehicles other than private cars was sold by ration card only. This type of payment system stopped in the midst of perestroika in the early 1990s.
Since the saturation of automobile filling stations in Russia is insufficient and lags behind the leading countries of the world, there is a need to build new stations in the cities and along roads of different levels. United States The increase in automobile ownership after Henry Ford started to sell automobiles that the middle class could afford resulted in an increased demand for filling stations. The world's first purpose-built gas station was constructed in St. Louis, Missouri, in 1905 at 420 South Theresa Avenue. The second station was constructed in 1907 by Standard Oil of California (now Chevron) in Seattle, Washington, at what is now Pier 32. Reighard's Gas Station in Altoona, Pennsylvania claims that it dates from 1909 and is the oldest existing filling station in the United States. Early on, they were known to motorists as "filling stations" and often washed vehicle windows for free. The first "drive-in" filling station, Gulf Refining Company, opened to the motoring public in Pittsburgh on December 1, 1913, at Baum Boulevard and St Clair Street. Prior to this, automobile drivers pulled into almost any general or hardware store, or even blacksmith shops, in order to fill up their tanks. On its first day, the station sold gasoline at 27 cents per gallon (7 cents per litre). This was also the first architect-designed station and the first to distribute free road maps. The first alternative fuel station was opened in San Diego, California, by Pearson Fuels in 2003. Maryland officials said that on September 26, 2019, RS Automotive in Takoma Park, Maryland became the first filling station in the country to convert to an EV charging station. Design and function The majority of filling stations are built in a similar manner, with most of the fueling installation underground, pump machines in the forecourt and a point of service inside a building. Single or multiple fuel tanks are usually deployed underground. Local regulations and environmental concerns may require a different method, with some stations storing their fuel in container tanks, entrenched surface tanks or unprotected fuel tanks deployed on the surface. Fuel is usually offloaded from a tanker truck into each tank by gravity through a separate capped opening located on the station's perimeter. Fuel from the tanks travels to the dispenser pumps through underground pipes. For every fuel tank, direct access must be available at all times. Most tanks can be accessed through a service canal directly from the forecourt. Older stations tend to use a separate pipe for every kind of available fuel and for every dispenser. Newer stations may employ a single pipe for every dispenser. This pipe houses a number of smaller pipes for the individual fuel types. Fuel tanks, dispensers and the nozzles used to fill car tanks employ vapor recovery systems, which prevent the release of vapor into the atmosphere with a system of pipes. The exhausts are placed as high as possible. A vapor recovery system may be employed at the exhaust pipe. This system collects the vapors, liquefies them and releases them back into the lowest-grade fuel tank available. The forecourt is the part of a filling station where vehicles are refueled. Gasoline pumps are placed on concrete plinths as a precautionary measure against collision by motor vehicles. Additional elements may be employed, including metal barriers. The area around the gasoline pumps must have a drainage system.
Since fuel sometimes spills onto the pavement, as little of it as possible should remain there. Any liquids present on the forecourt will flow into a channel drain before entering a petrol interceptor, which is designed to capture any hydrocarbon pollutants and filter these from rainwater, which may then proceed to a sanitary sewer, stormwater drain, or to ground. If a filling station allows customers to pay at the dispenser, the data from the dispenser may be transmitted via RS-232, RS-485 or Ethernet to the point of sale, usually inside the filling station's building, and fed into the station's cash register operating system. The cash register system gives limited control over the gasoline pump, and is usually limited to allowing the clerks to turn the pumps on and off. A separate system is used to monitor the fuel tank's status and quantities of fuel. With sensors directly in the fuel tank, the data is fed to a terminal in the back room, where it can be downloaded or printed out. Sometimes this method is bypassed, with the fuel tank data transmitted directly to an external database. Underground filling stations The underground modular filling station is a construction model for filling stations that was developed and patented by U-Cont Oy Ltd in Finland in 1993. Afterwards the same system was used in Florida, US. Above-ground modular stations were built in the 1980s in Eastern Europe and especially in the Soviet Union, but they were not built in other parts of Europe due to the stations' lack of safety in case of fire. The construction model for the underground modular filling station makes installation faster, design easier and manufacturing less expensive. As proof of the model's installation speed, an unofficial world record for filling station installation was set by U-Cont Oy Ltd when a modular filling station was built in Helsinki, Finland, in less than three days, including groundwork. The safety of modular filling stations has been tested in a filling station simulator in Kuopio, Finland. These tests have included, for instance, burning cars and explosions in the station simulator. Negative impacts Human health Gasoline contains a mixture of BTEX hydrocarbons (benzene, toluene, ethylbenzene, and xylenes). Prolonged exposure to toluene can cause permanent damage to the central nervous system, and chlorinated solvents can cause liver and kidney problems. Benzene in particular causes leukemia and is associated with non-Hodgkin lymphoma and multiple myeloma. People who work in filling stations, live near them, or attend school close to them are exposed to fumes and are at increased lifetime risk of cancer, with risk increased if there are multiple stations nearby. There is some evidence that living near a filling station is a risk factor for childhood leukemia. In addition to long-term exposure, there are bursts of short-term exposure to benzene when tanker trucks deliver fuel. High levels of benzene have been detected near stations across urban, suburban, and rural environments, though the causes (such as road traffic or congestion) can vary by location. Gas station attendants have suffered adverse health consequences depending on the type of fuel used, exposure to vehicle exhaust, and types of personal protective equipment (PPE) offered. Studies have noted higher levels of chromosomal deletions and higher rates of miscarriage, and workers have reported headaches, fatigue, throat irritation and depression.
Exposure to exhaust and fumes has been associated with eye irritation, nausea, dizziness, and cough. Environment Gasoline can leak into the surrounding soil and water, posing health risks. Areas formerly occupied by stations are often contaminated, resulting in brownfields and urban blight. Underground storage tanks (USTs) were typically made of steel and were common in the United States, but were prone to corrosion. They received national attention in 1983 after an episode of 60 Minutes documented significant drinking water contamination from a Mobil station in Canob Park in Richmond, Rhode Island. This led to regulations banning these types of tanks in 1985. However, tanks that ceased operation before 1986 are unlikely to have been recorded, and many underground tanks are thus unknowingly hidden beneath redeveloped land, contributing to soil, groundwater, and indoor air pollution. Because of the relatively small size of former stations (compared to larger brownfields), the cost per acre to rehabilitate the land is higher; the total cost in the United States is not known but is in the billions of dollars. Individual cleanups may be complex, with some in Canada taking decades and costing millions of dollars both for the cleanup efforts and in legal fees to determine whether individuals, governments, or corporations are liable for the costs. Economic costs The cost of potential cleanup of a former filling station can lower property values, discourage development of the land, and depress neighbouring property values and potential tax revenue. When areas are known to be contaminated by leaking underground storage tanks, the sale value of the land and the neighbouring area drops. An analysis of residential properties in Cuyahoga County, Ohio estimated the loss at about 17% for properties within about a block of a registered leaking tank. Active filling stations have similar negative effects on property values, with an analysis in Xuancheng, China finding a loss of 16% for the closest properties and 9% for those somewhat farther away. Marketing North America In the United States and Canada, there are generally two marketing types of filling stations: premium brands and discount brands. Premium brands Filling stations with premium brands sell well-recognized and often international brands of fuel, including Exxon/Mobil and its Esso brand, Phillips 66/Conoco/76, Chevron, Shell, Husky Energy, Sunoco (US), BP, Valero and Texaco. Non-international premium brands include Petrobras, Petro-Canada (owned by Suncor Energy Canada), QuikTrip, Hess, Sinclair, and Pemex. Premium-brand stations accept credit cards, often issue their own company cards (fuel cards or fleet cards) and may charge higher prices. In some cases, fuel cards for customers with a lower fuel consumption are ordered not directly from an oil company, but from an intermediary. Many premium brands have fully automated pay-at-the-pump facilities. Premium stations tend to be highly visible from highway and freeway exits, utilizing tall signs to display their brand logos. Discount brands Discount brands are often smaller, regional chains or independent stations, offering lower fuel prices. Most purchase wholesale commodity gasoline from independent suppliers or the major petroleum companies.
Lower-priced stations are also found at some supermarkets (Albertsons, Kroger, Big Y, Ingles, Lowes Foods, Giant, Weis Markets, Safeway, Hy-Vee, Vons, Meijer, Loblaws/Real Canadian Superstore, and Giant Eagle), convenience stores (7-Eleven, Circle K, Cumberland Farms, QuickChek, Road Ranger, Sheetz, Speedway and Wawa), discount stores (Walmart, Canadian Tire) and warehouse clubs (Costco, Sam's Club, and BJ's Wholesale Club). At some stations (such as Vons, Costco gas stations, BJ's Wholesale Club, or Sam's Club), consumers are required to hold a special membership card to be eligible for the discounted price, or to pay only with the chain's cash card, debit card or a credit card issued exclusively for that chain. In some areas, such as New Jersey, this practice is illegal, and stations are required to sell to all at the same price. Some convenience stores, such as 7-Eleven and Circle K, have co-branded their stations with one of the premium brands. After the Gulf Oil company was sold to Chevron, northeastern retail units were sold off as a chain, with Cumberland Farms controlling the remaining Gulf Oil outlets in the United States. State-controlled stations Some countries have only one brand of filling station. In Malaysia, Shell is the dominant player by number of stations, with government-owned Petronas coming in second. In Indonesia, the dominant player by number of stations is the government-owned Pertamina, although other companies such as TotalEnergies and Shell are increasingly found in big cities such as the capital Jakarta or Surabaya. In Taiwan, the government-owned CPC Corporation is the dominant player by number of stations, with the privately owned and operated Formosa Petrochemical Corporation in second place. Global and local branding Some companies, such as Shell, use their brand worldwide; however, Chevron uses its inherited brand Caltex in Asia Pacific, Australia and Africa, and its Texaco brand in Europe and Latin America. ExxonMobil uses its Exxon and Mobil brands but is still known as Esso (from the forerunner company name, Standard Oil or S.O.) in many places, most noticeably in Canada and Singapore. In Brazil, the main operators are Vibra Energia and Ipiranga, but Esso and Shell (Raízen) are also present. In Mexico, the historical monopoly filling station operator, and still the largest, is Pemex, but ever since Mexico's energy laws were gradually liberalized starting from 2013, foreign brands such as Shell, BP, Mobil and Chevron, as well as the country's largest convenience store chain Oxxo, have also started operating filling stations. In the United Kingdom, the three largest are BP, Esso and Shell; the "Big Four" supermarket chains, Morrisons, Sainsbury's, Asda and Tesco, also operate filling stations, as do some smaller supermarket chains such as The Co-operative Group and Waitrose. In Poland, the three largest operators are the partially state-owned PKN Orlen (including its acquisitions Grupa Lotos and PGNiG), followed by Shell and BP. Smaller operators include Auchan, Circle K, MOL Group and Żabka. In Australia, the major operators are Ampol, BP, Chevron Australia (Caltex), EG Australia, ExxonMobil Australia (Mobil), Puma Energy, United Petroleum and Viva Energy (mostly under the dual Shell-Coles Express branding). Smaller operators include Costco, Liberty Oil, Seven & i Holdings (which operates servos under the 7-Eleven convenience store branding) and Shell Australia.
In India, the three major operators are the state-owned Hindustan Petroleum, Bharat Petroleum and Indian Oil Corporation, which together control approximately 87% of the market. Foreign brands such as BP (in a joint venture with Reliance Industries, branded as Jio-bp) and Shell are also present. In Japan, the four major operators are: Cosmo Oil, Idemitsu (under the brand names apollostation and Idemitsu), ENEOS Corporation (under the brand names ENEOS, Express and General) and San-Ai Oil (under the brand name Kygnus). Smaller operators include Japan Agricultural Cooperatives (which operates stations under its own brand, with a separate cooperative brand used in Hokkaido) and the Mitsubishi Group (which operates self-service stations under the Lawson convenience store branding). Previously, foreign filling station brands were also present in Japan: mainly Shell (operated by Idemitsu since its acquisition of Showa Shell Sekiyu in 2018–19, all rebranded to apollostation by 2023), and Esso and Mobil (last operated by ENEOS Corporation under license from ExxonMobil, all rebranded to ENEOS in 2019). Payment methods Australia and New Zealand Most service stations allow the customer to pump the fuel before paying. In recent years, some service stations have required customers to purchase their fuel first. In some small towns, the customer may hand the cash to the attendant on the forecourt if they are paying for a set amount of fuel and have no change; but usually customers will enter the service centre to pay a cashier. Some supermarkets have their own forecourts which are unmanned and payment is pay-at-pump only. Customers at the supermarket will receive a discount voucher which offers discounted fuel at their forecourt. The amount of discount varies depending on the amount spent on groceries at the supermarket, but normally starts at 4 cents a litre. In New Zealand, BP has an app for smartphones that detects a user's location, then allows one to select the type of fuel, which pump, and how much to spend. The amount is then deducted from the user's account. Canada In British Columbia and Alberta, it is a legal requirement that customers either pre-pay for the fuel or pay at the pump. The law is called "Grant's Law" and is intended to prevent "gas-and-dash" crimes, where a customer refuels and then drives away without paying. In other provinces, payment after filling is permitted and widely available, though some stations may require either pre-payment or payment at the pump during night hours. Ireland In the Republic of Ireland, most stations allow customers to pump fuel before paying. Some stations have pay-at-the-pump facilities. United Kingdom A large majority of stations allow customers to pay with a chip and PIN payment card or pay in the shop. Many have a pay-at-the-pump system, where customers can enter their PIN prior to refueling. United States Pre-payment is the norm in the US and customers may typically pay either at the pump or inside the gas station. Modern stations have pay-at-the-pump functions: in most cases credit, debit, ATM cards, fuel cards and fleet cards are accepted. Occasionally a station will have a pay-at-the-pump-only period each day, when attendants are not present, often at night, and some stations are pay-at-the-pump only 24 hours a day. Types of service Filling stations typically offer one of three types of service to their customers: full service, minimum service or self-service.
Full service An attendant operates the pumps, often wipes the windshield, and sometimes checks the vehicle's oil level and tire pressure, then collects payment and perhaps a small tip. Minimum service An attendant operates the pumps. This is often required due to legislation that prohibits customers from operating the pumps. Self service The customer performs all required service. Signs informing the customer of filling procedures and cautions are displayed on each pump. Customers can still enter a store or go to a booth to give payment to a person. Unstaffed Using a cardlock (or pay-at-the-pump) system, these are completely unstaffed. Brazil In Brazil, self-service fuel filling is illegal, due to a federal law enacted in 2000. The law was introduced by Federal Deputy Aldo Rebelo, who claims it saved 300,000 fuel attendant jobs across the country. Japan Before 1998, filling stations in Japan were entirely full-service stations. Self-service stations were legalized in Japan in 1998 following the abolition of the Special Petroleum Law, which led to the deregulation of the petroleum industry in Japan. Under current safety regulations, while motorists are able to dispense fuel themselves at self-service stations, at least one fuel attendant must be on hand to keep watch for potential safety violations and to render assistance to motorists whenever necessary. South Korea Filling stations in South Korea offer a variety of services, such as providing bottled water or tissues and cleaning free of charge, but most have switched to self-service. Some large full-service stations offer many services, such as tire inflation, automatic car washing, and self-service cleaning equipment. Some of these are free to customers who buy more than a certain amount of fuel. North America In the past, filling stations in the United States offered a choice between full service and self service. Before 1970, full service was the norm, and self-service was rare. Today, few stations advertise or provide full service. Full-service stations are more common in wealthy and upscale areas. The cost of full service is usually assessed as a fixed amount per US gallon. The first self-service station in the United States opened in Los Angeles in 1947, run by Frank Urich. In Canada, the first self-service station opened in Winnipeg, Manitoba, in 1949. It was operated by the independent company Henderson Thriftway Petroleum, owned by Bill Henderson. In New Jersey, filling stations offer only full service (and mini service); attendants there are required to pump gasoline for customers. Customers, in fact, are prohibited by law from pumping their own gasoline. The only exception to this within New Jersey is at the filling station next to Joint Base McGuire-Dix-Lakehurst in Wrightstown. New Jersey prohibited self-service in 1949, with the passage of the "Retail Gasoline Dispensing Safety Act", after lobbying by service station owners. That law states that "Because of the fire hazards directly associated with dispensing fuel, it is in the public interest that gasoline station operators have the control needed over that activity to ensure compliance with appropriate safety procedures, including turning off vehicle engines and refraining from smoking while fuel is dispensed." Proponents of the prohibition cite safety and jobs as reasons to keep the ban.
Of note, the ban does not apply to the pumping of diesel fuel at filling stations (though individual filling stations may prohibit this); nor does it apply to the pumping of gasoline into boats or aircraft. Oregon prohibited self-service in a 1951 statute that listed 17 different justifications, including flammability, the risk of crime from customers leaving their vehicles, toxic fumes, and the jobs created by requiring mini service. In 1982 Oregon voters rejected a ballot measure sponsored by the service station owners, which would have legalized self-service. Oregon legislators passed a bill, signed into law by the Governor in May 2017, to allow self-service in counties with a total population of 40,000 or less, beginning in January 2018. Governor Tina Kotek signed a law allowing self-service statewide in 2023, but stations are still required to provide full service for customers who want it. The constitutionality of the self-service bans has been disputed. The Oregon statute was brought into court in 1989 by ARCO, and the New Jersey statute was challenged in court in 1950 by a small independent service station, Rein Motors. Both challenges failed. Former New Jersey governor Jon Corzine sought to lift the ban on self-service in New Jersey. He asserted that it would lower gas prices, but some New Jerseyans argued that it could cause drawbacks, especially unemployment. The town of Huntington, New York has prohibited self-service stations since the early 1970s, at first to prevent theft and later due to safety concerns. The towns of Arlington, Massachusetts and Weymouth, Massachusetts have also prohibited self-service stations, since 1975 and 1977, respectively. Contrary to popular belief, lit cigarettes are not capable of igniting gasoline. However, several states outlaw smoking at gas stations, as the flame from the ignition source used to light the cigarette can ignite gasoline vapors. Most gas stations and many municipalities also explicitly ban any smoking within a certain distance of gasoline pumps. Other goods and services commonly available Many filling stations provide toilet facilities for customer use, as well as squeegees and paper towels for customers to clean their vehicle's windows. Discount stations may not provide these amenities in some countries. Stations typically have an air compressor, often with a built-in or provided handheld tire-pressure gauge, to inflate tires, and a hose to add water to vehicle radiators. Some air compressor machines are free of charge, while others charge a small fee to use (typically 50 cents to a dollar in North America). In some US states, such as California, state law requires that paying customers be provided with free air compressor service and radiator water. In some regions of America and Australia, many filling stations have a mechanic on duty, but this practice has died out in other parts of the world. Many filling stations have integrated convenience stores which sell food, beverages, and often cigarettes, lottery tickets, motor oil, and auto parts. Prices for these items tend to be higher than they would be at a supermarket or discount store. Many stations, particularly in the United States, have a fast food outlet inside. These are usually "express" versions with limited seating and limited menus, though some may be regular-sized and have spacious seating. Larger restaurants are common at truck stops and toll road service plazas.
In some US states, beer, wine, and liquor are sold at filling stations, though this practice varies according to state law (see Alcohol laws of the United States by state). Nevada also allows the operation of slot and video poker machines without time restrictions. Vacuum cleaners, often coin-operated, are a common amenity to allow the cleaning of vehicle interiors, either by the customer or by an attendant. Some stations are equipped with car washes. Car washes are sometimes offered free of charge or at a discounted price with a certain amount of fuel purchased. Conversely, some car washes operate filling stations to supplement their businesses. From approximately 1920 to 1980, many service stations in the US provided customers with free road maps affiliated with their parent oil companies. This practice fell out of favor due to the 1970s energy crisis. Fuel prices Europe In European Union member states, gasoline prices are much higher than in North America due to higher fuel excise or taxation, although the base price is also higher than in the US. Occasionally, price rises trigger national protests. In the UK, a large-scale protest in August and September 2000, known as 'The Fuel Crisis', caused wide-scale havoc not only across the UK, but also in some other EU countries. The UK Government eventually backed down by indefinitely postponing a planned increase in fuel duty. This was partially reversed in December 2006, when then-Chancellor of the Exchequer Gordon Brown raised fuel duty by 1.25 pence per liter. Between 2007 and 2012, gasoline prices in the UK rose by nearly 40 pence per liter, from 97.3 pence per liter to 136.8 pence per liter. In much of Europe, including the UK, France and Germany, stations operated by large supermarket chains usually price fuel lower than stand-alone stations. In most of mainland Europe, tax is lower on diesel fuel than on gasoline, and diesel is accordingly the cheaper fuel; in the UK and Switzerland, diesel has no tax advantage and retails at a higher price by quantity than gasoline (offset by its higher energy yield). In 2014, according to Eurostat, the mean EU28 price was €1.38/L for euro-super 95 (gasoline) and €1.26/L for diesel. The least expensive gasoline was in Estonia at €1.10/L, and the most expensive in Italy at €1.57/L. The least expensive diesel was in Estonia at €1.14/L, and the most expensive in the UK at €1.54/L. The least expensive LPG was in Belgium at €0.50/L, and the most expensive in France at €0.83/L. North America Nearly all filling stations in North America advertise their prices on large signs outside the stations. Some locations have laws requiring such signage. In Canada and the United States, federal, state or provincial, and local sales taxes are usually included in the price, although tax details are often posted at the pump and some stations may provide details on sales receipts. Gasoline taxes are often ring-fenced (dedicated) to fund transportation projects such as the maintenance of existing roads and the construction of new ones. Individual filling stations in the United States have little if any control over gasoline prices. The wholesale price of gasoline is determined according to area by the oil companies which supply it, and their prices are largely determined by the world markets for oil. 
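Comparing European per-liter prices with North American per-gallon prices requires a unit and currency conversion. A minimal sketch in Python, using the 2014 Eurostat mean quoted above and an assumed, purely illustrative exchange rate:

    LITERS_PER_US_GALLON = 3.785

    def eur_per_liter_to_usd_per_gallon(eur_per_liter, usd_per_eur):
        """Convert a pump price in EUR/L to its equivalent in USD per US gallon."""
        return eur_per_liter * LITERS_PER_US_GALLON * usd_per_eur

    # The EU28 mean for euro-super 95 in 2014 was 1.38 EUR/L; 1.33 USD/EUR
    # is an assumed placeholder rate, not a figure from this article.
    print(round(eur_per_liter_to_usd_per_gallon(1.38, 1.33), 2))  # about 6.95

At roughly seven dollars per US-gallon equivalent, the gap with typical US pump prices is largely explained by the tax differences discussed below.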
Individual stations are unlikely to sell gasoline at a loss, and the profit margin—typically between 7 and 11 cents per US gallon (2–3 cents per liter)—that they make from gasoline sales is limited by competitive pressures: a gas station which charges more than others will lose customers to them. Most stations try to compensate by selling higher-margin food products in their convenience stores. Even with oil market fluctuations, prices for gasoline in the United States are among the lowest in the industrialized world; this is principally due to lower taxes. While the sales price of gasoline in Europe is more than twice that in the United States, the price excluding taxes is nearly identical in the two areas. Some Canadians and Mexicans in communities close to the US border drive into the United States to purchase cheaper gasoline. Due to heavy fluctuations in price in the United States, some stations offer their customers the option to buy and store gasoline for future use, such as the service provided by First Fuel Bank. In order to save money, some consumers in Canada and the United States inform each other about low and high prices through the use of gasoline price websites. Such websites allow users to share prices advertised at filling stations by posting them to a central server. Consumers may then check the prices listed in their geographic area in order to select the station with the lowest price available at the time. Some television and radio stations also compile pricing information via viewer and listener reports or reporter observations and present it as a regular segment of their newscasts, usually before or after traffic reports. These price observations must usually be made by reading the pricing signs outside stations, as many companies do not give their prices by telephone due to competitive concerns. It is a criminal offense to have written or verbal arrangements with competitors, suppliers or customers for fixing prices or exchanging information on prices or costs (including discounts and rebates), unduly limiting or restraining competition, or engaging in misleading or deceptive practices. Gas stations must accordingly never hold discussions with competitors regarding pricing policies and methods, terms of sale, costs, allocation of markets or boycotts of petroleum products. Rest of the world In other energy-importing countries such as Japan, gasoline and petroleum product prices are higher than in the United States because of fuel transportation costs and taxes. On the other hand, some of the major oil-producing countries, such as the Gulf states, Iran, Iraq, and Venezuela, provide subsidized fuel at well below world market prices. This practice tends to encourage heavy consumption. Hong Kong has some of the highest pump prices in the world, but most customers are given discounts as card members. Singapore, like Hong Kong, also has similarly high pump prices, which are largely based on a pricing strategy called Mean of Platts Singapore (MOPS). As Singapore does not have any oil reserves of its own, the city-state has instead built several offshore refineries to refine oil imported mostly from Indonesian oil fields, as the latter country does not have enough refining capacity and capability of its own. 
Because the neighbouring country Malaysia has cheaper pump prices than Singapore, cars registered in Singapore crossing into Malaysia have, since 1991, been legally required to have at least three-quarters of a tank of fuel, to prevent the evasion of fuel duties. When filling up in Malaysia, Singaporean- and Thai-registered hybrid and petrol-powered vehicles are legally restricted to unsubsidised, premium-grade RON97–100 petrol, as RON95 petrol in Malaysia is partially subsidised by the Government of Malaysia for the benefit of lower-income Malaysian residents. In Western Australia, a program called FuelWatch requires most filling stations to notify their "tomorrow prices" by 2pm each day; prices are changed at 6am each morning and must be held for 24 hours. Each afternoon, the prices for the next day are released to the public and the media, allowing consumers to decide when to fill up. Service stations In Australia, "service station" or "servo", along with "petrol station", is the terminology often used to describe any facility where motorists can refuel their cars. In New Zealand a filling station is often referred to as a service station, petrol station or garage, even though it may not offer mechanical repairs or assistance with dispensing fuel. Levels of service available include full service, for which assistance in dispensing fuel is offered, as well as offers to check tire pressure or clean vehicle windscreens. That type of service is becoming uncommon in New Zealand, particularly in Auckland, though south of Auckland many filling stations still offer full service. There is also help service or assisted service, for which customers must request assistance before it is given, and self-service, for which no assistance is available. In the US, a filling station that also offers services such as oil changes and mechanical repairs to automobiles is called a service station. Until the 1970s the vast majority of filling stations were service stations. These stations typically offered free air for inflating tires, as compressed air was already on hand to operate the repair garage's pneumatic tools. While a few filling stations with a service station remain, many were converted to convenience stores in the 1980s and 1990s while still selling fuel; others continued to offer repair services but stopped selling fuel. This kind of business provided the name for the US comic strip Gasoline Alley, in which a number of the characters worked. In the UK and Ireland, a "service station" refers to a much larger facility, usually attached to a motorway (see rest area) or major truck route, which provides food outlets, large parking areas, and often other services such as hotels, arcade games, and shops in addition to 24-hour fuel supplies and a higher standard of restrooms. Fuel is typically more expensive from these outlets due to their premium locations. UK or Irish service stations do not usually repair automobiles. Highway service centre This arrangement occurs on many toll roads and some interstate freeways and is called an oasis or service plaza. In many cases, these centers have a food court or restaurants. In the United Kingdom and Ireland these are called motorway service areas. 
Often, the state government maintains public rest areas directly connected to freeways, but does not rent out space to private businesses, as this is specifically prohibited by the Interstate Highway Act of 1956, which created the national Interstate Highway System; under a grandfather clause, sites on freeways built before January 1, 1960, and toll highways that are self-supporting but carry an Interstate designation are exempt. As a result, such areas often provide only minimal services such as restrooms and vending machines. Private entrepreneurs instead develop facilities such as truck stops or travel centers, restaurants, gas stations, and motels in clusters on private land adjacent to major interchanges. In the US, Pilot Flying J and TravelCenters of America are two of the most common full-service chains of truck stops. Because these facilities are not directly connected to the freeway, they usually have huge signs on poles high enough to be visible to motorists in time to exit from the freeway. Sometimes, the state also posts small official signs (normally blue) indicating what types of filling stations, restaurants, and hotels are available at an upcoming exit; businesses may add their logos to these signs for a fee. In Canada, the province of Ontario has stops along two of its 400-series highways, the 401 and the 400, traditionally referred to as "Service Centres" but recently renamed "ONroute" as part of a full rebuild of the sites. Owned by the provincial government but leased to the private operator Host Kilmer Service Centres, they contain food courts, convenience stores, washrooms, and co-located gas and diesel bars with attached convenience stores. Food providers include Tim Hortons (at all sites), A&W, Wendy's and Pizza Pizza. At most sites fuel is sold by Canadian Tire, with a few older Esso gas bars at earlier renovated locations. Octane In Australia, gasoline is unleaded, and available in 91, 95, 98 and 100 octane (names differ from brand to brand). Fuel additives for use in cars designed for leaded fuel are available at most filling stations. In Canada, the most commonly found octane grades are 87 (regular), 89 (mid grade) and 91 (premium), using the same "(R+M)/2 Method" used in the US (see below). In China, the most commonly found octane grades are RON 91 (regular), 93 (mid grade) and 97 (premium). Almost all of the fuel has been unleaded since 2000. At some premium filling stations in large cities, such as those of PetroChina and Sinopec, RON 98 gas is sold for racing cars. In Europe, gasoline is unleaded and available in 95 RON (Eurosuper) and, in nearly all countries, 98 RON (Super Plus) octanes; in some countries 91 RON octane gasoline is offered as well. In addition, 100 RON is offered in some countries in continental Europe (Shell markets this as V-Power Racing). Some stations offer 98 RON with lead substitute (often called Lead-Replacement Petrol, or LRP). In New Zealand, gasoline is unleaded, and most commonly available in 91 RON ("Regular") and 95 RON ("Premium"). 98 RON is available at selected BP ("Ultimate") and Mobil ("Synergy 8000") service stations instead of the standard 95 RON. 96 RON was replaced by 95 RON and subsequently abolished in 2006. Leaded fuel was abolished in 1996. In the UK the most common gasoline grade (and the lowest octane generally available) is "Premium" 95 RON unleaded. "Super" is widely available at 97 RON (for example Shell V-Power, BP Ultimate). Leaded fuel is no longer available. 
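Octane figures quoted in different regions are not directly comparable, because (as described for the United States below) North American pumps post the mean of RON and MON rather than RON alone. A minimal sketch of the conversion in Python, in which the RON/MON pair is an illustrative assumption rather than a measured value:

    def pump_octane(ron, mon):
        """US "(R+M)/2" anti-knock index: the mean of RON and MON."""
        return (ron + mon) / 2

    # European 95 RON petrol typically has a MON somewhere near 85 (an
    # assumed figure), which would post as roughly 90 on a US pump --
    # close to US mid-grade 89 rather than US premium.
    print(pump_octane(95, 85))  # 90.0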
In the United States all motor vehicle gasoline is unleaded and is available in several grades with different octane ratings; 87 (Regular), 89 (Mid-Grade), and 93 (Premium) are typical grades. At high altitudes in the Mountain States and the Black Hills of South Dakota, regular unleaded can be as low as 85 octane; this practice has become increasingly controversial, since it was instituted when most cars had carburetors instead of the fuel injection and electronic engine controls that have been standard in recent decades. In the US gasoline is described in terms of its "pump octane", which is the mean of its RON (Research Octane Number) and MON (Motor Octane Number). Labels on pumps in the US typically describe this as the "(R+M)/2 Method". Some nations describe fuels according to the traditional RON or MON ratings alone, so octane ratings cannot always be compared directly with the equivalent US rating by the "(R+M)/2 method". Differences in gasoline pumps In Europe, New Zealand and Australia, the customer selects one of several colour-coded nozzles depending on the type of fuel required. The nozzle for unleaded fuel is smaller than the one for fuels intended for engines designed to take leaded fuel, and the tank filler opening has a corresponding diameter; this prevents inadvertently using leaded fuel in an engine not designed for it, which can damage a catalytic converter. In most stations in Canada and the US, the pump has a single nozzle and the customer selects the desired octane grade by pushing a button. Some pumps require the customer to pick up the nozzle first, then lift a lever underneath it; others are designed so that lifting the nozzle automatically releases a switch. Some newer stations have separate nozzles for different types of fuel. Where diesel fuel is provided, it is usually dispensed from a separate nozzle even if the various grades of gasoline share the same nozzle. Motorists occasionally pump gasoline into a diesel car by accident. The converse is almost impossible, because diesel pumps have a larger-diameter nozzle which does not fit the gasoline filler, and the nozzles are protected by a lock mechanism or a liftable flap. Diesel fuel in a gasoline engine—while creating large amounts of smoke—does not normally cause permanent damage if it is drained once the mistake is realized. However, even a liter of gasoline added to the tank of a modern diesel car can cause irreversible damage to the injection pump and other components through a lack of lubrication. In some cases, the car has to be scrapped because the cost of repairs exceeds its residual value. The issue is not clear-cut, as older diesels using completely mechanical injection can tolerate some gasoline—which has historically been used to "thin" diesel fuel in winter. Legislation In most countries, stations are subject to guidelines and regulations which exist to minimize the potential for fires and increase safety. It is prohibited to use open flames on the forecourt of a filling station because of the risk of igniting gasoline vapor. In the United States, establishing fire codes and enforcing their compliance is the responsibility of state governments. Most localities ban smoking, open flames and running engines. Since an increase in the occurrence of static-related fires, many stations have posted warnings against re-entering the vehicle while refueling. Cars can build up a static charge by driving on dry road surfaces. However, many tire compounds contain enough carbon black to provide an electrical ground which prevents charge build-up. 
Newer "high mileage" tires use more silica and can increase the buildup of static. A driver who does not discharge static by contacting a conductive part of the car will carry it to the insulated handle of the nozzle and the static potential will eventually be discharged when this purposely-grounded arrangement is put into contact with the metallic filler neck of the vehicle. Ordinarily, vapor concentrations in the area of this filling operation are below the lower explosive limit (LEL) of the product being dispensed, so the static discharge causes no problem. The problem with ungrounded gasoline cans results from a combination of vehicular static charge, the potential between the container and the vehicle, and the loose fit between the grounded nozzle and the gas can. This last condition causes a rich vapor concentration in the ullage (the unfilled volume) of the gas can, and a discharge from the can to the grounded hanging hardware (the nozzle, hose, swivels and break-a-ways) can thus occur at a most inopportune point. The Petroleum Equipment Institute has recorded incidents of static-related ignition at refueling sites since early 2000. Although urban legends persist that using a mobile phone while pumping gasoline can cause sparks or explosion, this has not been duplicated under any controlled condition. Nevertheless, mobile phone manufacturers and gas stations ask users to switch off their phones. One suggested origin of this myth is said to have been started by gas station companies because the cell phone signal would interfere with the fuel counter on some older model fuel pumps causing it to give a lower reading. In the MythBusters episode "Cell Phone Destruction", investigators concluded that explosions attributed to cell phones could be caused by static discharges from clothing instead and also observed that such incidents seem to involve women more often than men. The US National Fire Protection Association does most of the research and code writing to address the potential for explosions of gasoline vapor. The customer fueling area, up to above the surface, normally does not have explosive concentrations of vapors, but may from time to time. Above this height, where most fuel filler necks are located, there is no expectation of an explosive concentration of gasoline vapor in normal operating conditions. Electrical equipment in the fueling area may be specially certified for use around gasoline vapors. Worldwide numbers The UK has 8,385 filling stations , down from about 18,000 in 1992 and a peak of around 40,000 in the mid-1960s. The US had 114,474 stations in 2012, according to the US Census Bureau, down from 118,756 in 2007 and 121,446 in 2002. In Canada, the number is on the decline. As of December 2008, 12,684 were in operation, significantly down from about 20,000 stations recorded in 1989. In Japan, the number dropped from a peak of 60,421 in 1994 to 40,357 at the end of 2009. In Germany, the number dropped down to 14,300 in 2011. In China, according to different reports, the total number of gas/oil stations (at the end of 2018) is about 106,000. India—60,799 (as of November 2017) Russia—there were about 25,000 stations in the Russian Federation (2011) In Argentina, as of 2021, there are more than 5,000 stations. 
The largest filling station networks in Europe (2017):
TotalEnergies—8,200 stations
Shell—7,800 stations
BP—7,000 stations
Esso—6,100 stations
Eni—5,500 stations
Repsol—4,700 stations
Q8—4,600 stations
Avia—3,000 stations
PKN Orlen—2,800 stations
Circle K—2,700 stations
Technology
Road transport
null
61271
https://en.wikipedia.org/wiki/Auxiliary%20power%20unit
Auxiliary power unit
An auxiliary power unit (APU) is a device on a vehicle that provides energy for functions other than propulsion. They are commonly found on large aircraft and naval ships as well as some large land vehicles. Aircraft APUs generally produce 115 V AC power at 400 Hz (rather than the 50/60 Hz of mains supply) to run the electrical systems of the aircraft; others can produce 28 V DC. APUs can provide power through single- or three-phase systems. A jet fuel starter (JFS) is a similar device to an APU but is directly linked to the main engine and started by an onboard compressed air bottle. Transport aircraft History During World War I, the British Coastal class blimps, one of several types of airship operated by the Royal Navy, carried an ABC auxiliary engine. These powered a generator for the craft's radio transmitter and, in an emergency, could power an auxiliary air blower. One of the first military fixed-wing aircraft to use an APU was the British Supermarine Nighthawk, a World War I anti-Zeppelin night fighter. During World War II, a number of large American military aircraft were fitted with APUs. These were typically known as putt-putts, even in official training documents. The putt-putt on the B-29 Superfortress bomber was fitted in the unpressurised section at the rear of the aircraft. Various models of four-stroke, flat-twin or V-twin engines were used. The engine drove a P2 DC generator, rated at 28.5 volts and 200 amps (several of the same P2 generators, driven by the main engines, were the B-29's DC power source in flight). The putt-putt provided power for starting the main engines and was used after take-off up to a certain altitude; it was restarted when the B-29 was descending to land. Some models of the B-24 Liberator had a putt-putt fitted at the front of the aircraft, inside the nose-wheel compartment. Some models of the Douglas C-47 Skytrain transport aircraft carried a putt-putt under the cockpit floor. As mechanical "startup" APUs for jet engines The first German jet engines built during the Second World War used a mechanical APU starting system designed by the German engineer Norbert Riedel. It consisted of a two-stroke flat engine, which for the Junkers Jumo 004 design was hidden in the engine nose cone, essentially functioning as a pioneering example of an auxiliary power unit for starting a jet engine. A hole in the extreme nose of the cone contained a manual pull-handle which started the piston engine, which in turn rotated the compressor. Two spark plug access ports existed in the Jumo 004's nose cone to service the Riedel unit's cylinders in situ, for maintenance purposes. Two small "premix" tanks for the Riedel's petrol/oil fuel were fitted in the annular intake. The engine was an extreme short-stroke design (bore/stroke: 70 mm / 35 mm = 2:1) so that it could fit within the nose cone of jet engines like the Jumo 004. For speed reduction it had an integrated planetary gear. It was produced by Victoria in Nuremberg and served as a mechanical APU-style starter for all three German jet engine designs to have made it to at least the prototype stage before May 1945 – the Junkers Jumo 004, the BMW 003 (which uniquely appears to use an electric starter for the Riedel APU), and the prototypes (19 built) of the more advanced Heinkel HeS 011 engine, which mounted it just above the intake passage in the Heinkel-crafted sheetmetal of the engine nacelle nose. 
The Boeing 727 in 1963 was the first jetliner to feature a gas turbine APU, allowing it to operate at smaller airports, independent of ground facilities. The APU can be identified on many modern airliners by an exhaust pipe at the aircraft's tail. Sections A typical gas-turbine APU for commercial transport aircraft comprises three main sections: Power section The power section is the gas-generator portion of the engine and produces all the shaft power for the APU. In this section of the engine, air and fuel are mixed, compressed and ignited to create hot, expanding gas. This gas is highly energetic and is used to spin the turbine, which in turn powers other sections of the engine, such as auxiliary gearboxes, pumps, electrical generators and, in the case of a turbofan engine, the main fan. Load compressor section The load compressor is generally a shaft-mounted compressor that provides pneumatic power for the aircraft, though some APUs extract bleed air from the power section compressor. There are two actuated devices to help control the flow of air: the inlet guide vanes that regulate airflow to the load compressor, and the surge control valve that maintains stable or surge-free operation of the turbomachine. Gearbox section The gearbox transfers power from the main shaft of the engine to an oil-cooled generator for electrical power. Within the gearbox, power is also transferred to engine accessories such as the fuel control unit, the lubrication module, and the cooling fan. There is also a starter motor connected through the gear train to perform the starting function of the APU. Some APU designs use a combination starter/generator for APU starting and electrical power generation to reduce complexity. On the Boeing 787, an aircraft with greater reliance on its electrical systems, the APU delivers only electricity to the aircraft. The absence of a pneumatic system simplifies the design, but the high demand for electricity requires heavier generators. Onboard solid oxide fuel cell (SOFC) APUs are being researched. Manufacturers The market for auxiliary power units is dominated by Honeywell, followed by Pratt & Whitney and Motor Sich, with other manufacturers including PBS Velká Bíteš, Safran Power Units, Aerosila and Klimov. Local manufacturers include Bet Shemesh Engines and Hanwha Aerospace. The 2018 market share varied according to the application platforms:
Large commercial aircraft: Honeywell 70–80%, Pratt & Whitney 20–30%, others 0–5%
Regional aircraft: Pratt & Whitney 50–60%, Honeywell 40–50%, others 0–5%
Business jets: Honeywell 90–100%, others 0–5%
Helicopters: Pratt & Whitney 40–50%, Motor Sich 40–50%, Honeywell 5–10%, Safran Power Units 5–10%, others 0–5%
On June 4, 2018, Boeing and Safran announced a 50–50 partnership to design, build and service APUs, following regulatory and antitrust clearance expected in the second half of 2018. Boeing produced several hundred T50/T60 small turboshafts and their derivatives in the early 1960s. Safran produces APUs for helicopters and business jets but has not made large APUs since Labinal exited the APIC joint venture with Sundstrand in 1996. The partnership could threaten the dominance of Honeywell and United Technologies. Honeywell has a 65% share of the mainliner APU market and is the sole supplier for the Airbus A350, the Boeing 777 and all single-aisles: the Boeing 737 MAX, Airbus A220 (formerly Bombardier CSeries), Comac C919, Irkut MC-21 and Airbus A320neo, since Airbus eliminated the P&WC APS3200 option. 
P&WC claims the remaining 35% with the Airbus A380, Boeing 787 and Boeing 747-8. It should take at least a decade for the Boeing/Safran JV to reach $100 million in service revenue. The 2017 market for production was worth $800 million (88% civil and 12% military), while the MRO market was worth $2.4 billion, spread equally between civil and military. Spacecraft The Space Shuttle APUs provided hydraulic pressure. The Space Shuttle had three redundant APUs, powered by hydrazine fuel. They were only powered up for ascent, re-entry, and landing. During ascent, the APUs provided hydraulic power for gimballing of the Shuttle's three engines and control of their large valves, and for movement of the control surfaces. During landing, they moved the control surfaces, lowered the wheels, and powered the brakes and nose-wheel steering. Landing could be accomplished with only one APU working. In the early years of the Shuttle there were problems with APU reliability, with malfunctions on three of the first nine Shuttle missions. Armored vehicles APUs are fitted to some tanks to provide electrical power without the high fuel consumption and large infrared signature of the main engine. As early as World War II, the American M4 Sherman had a small, piston-engine powered APU for charging the tank's batteries, a feature the Soviet-produced T-34 tank did not have. Commercial vehicles A refrigerated or frozen food semi trailer or train car may be equipped with an independent APU and fuel tank to maintain low temperatures while in transit, without the need for an external transport-supplied power source. On some older diesel engined-equipment, a small gasoline engine (often called a "pony engine") was used instead of an electric motor to start the main engine. The exhaust path of the pony engine was typically arranged so as to warm the intake manifold of the diesel, to ease starting in colder weather. These were primarily used on large pieces of construction equipment. Fuel cells In recent years, truck and fuel cell manufacturers have teamed up to create, test and demonstrate a fuel cell APU that eliminates nearly all emissions and uses diesel fuel more efficiently. In 2008, a DOE sponsored partnership between Delphi Electronics and Peterbilt demonstrated that a fuel cell could provide power to the electronics and air conditioning of a Peterbilt Model 386 under simulated "idling" conditions for ten hours. Delphi has said the 5 kW system for Class 8 trucks will be released in 2012, at an $8000–9000 price tag that would be competitive with other "midrange" two-cylinder diesel APUs, should they be able to meet those deadlines and cost estimates.
Technology
Aircraft components
null
61316
https://en.wikipedia.org/wiki/Galois%20theory
Galois theory
In mathematics, Galois theory, originally introduced by Évariste Galois, provides a connection between field theory and group theory. This connection, the fundamental theorem of Galois theory, allows reducing certain problems in field theory to group theory, which makes them simpler and easier to understand. Galois introduced the subject for studying roots of polynomials. This allowed him to characterize the polynomial equations that are solvable by radicals in terms of properties of the permutation group of their roots—an equation is by definition solvable by radicals if its roots may be expressed by a formula involving only integers, nth roots, and the four basic arithmetic operations. This widely generalizes the Abel–Ruffini theorem, which asserts that a general polynomial of degree at least five cannot be solved by radicals. Galois theory has been used to solve classic problems including showing that two problems of antiquity cannot be solved as they were stated (doubling the cube and trisecting the angle), and characterizing the regular polygons that are constructible (this characterization was previously given by Gauss but without the proof that the list of constructible polygons was complete; all known proofs that this characterization is complete require Galois theory). Galois' work was published by Joseph Liouville fourteen years after his death. The theory took longer to become popular among mathematicians and to be well understood. Galois theory has been generalized to Galois connections and Grothendieck's Galois theory. Application to classical problems The birth and development of Galois theory was caused by the following question, which was one of the main open mathematical questions until the beginning of the 19th century: does there exist a formula for the roots of a fifth (or higher) degree polynomial equation in terms of the coefficients of the polynomial, using only the usual algebraic operations (addition, subtraction, multiplication, division) and the application of radicals (square roots, cube roots, etc.)? The Abel–Ruffini theorem provides a counterexample proving that there are polynomial equations for which such a formula cannot exist. Galois' theory provides a much more complete answer to this question, by explaining why it is possible to solve some equations, including all those of degree four or lower, in the above manner, and why it is not possible for most equations of degree five or higher. Furthermore, it provides a means of determining whether a particular equation can be solved that is both conceptually clear and easily expressed as an algorithm. Galois' theory also gives a clear insight into questions concerning problems in compass and straightedge construction. It gives an elegant characterization of the ratios of lengths that can be constructed with this method. Using this, it becomes relatively easy to answer such classical problems of geometry as: Which regular polygons are constructible? Why is it not possible to trisect every angle using a compass and a straightedge? Why is doubling the cube not possible with the same method? History Pre-history Galois' theory originated in the study of symmetric functions – the coefficients of a monic polynomial are (up to sign) the elementary symmetric polynomials in the roots. For instance, (x − a)(x − b) = x² − (a + b)x + ab, where 1, a + b and ab are the elementary symmetric polynomials of degree 0, 1 and 2 in two variables. This was first formalized by the 16th-century French mathematician François Viète, in Viète's formulas, for the case of positive real roots. 
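In general, Viète's formulas express each coefficient of a monic polynomial as a signed elementary symmetric polynomial of its roots; a compact statement of this standard result:

\[
\prod_{i=1}^{n}(x - r_i) \;=\; x^{n} - e_1 x^{n-1} + e_2 x^{n-2} - \cdots + (-1)^{n} e_n,
\qquad
e_k \;=\; \sum_{1 \le i_1 < \cdots < i_k \le n} r_{i_1} r_{i_2} \cdots r_{i_k}.
\]

Here e_k denotes the elementary symmetric polynomial of degree k in the roots r_1, …, r_n; the two-variable case above is n = 2, with e_1 = a + b and e_2 = ab.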
In the opinion of the 18th-century British mathematician Charles Hutton, the expression of the coefficients of a polynomial in terms of the roots (not only for positive roots) was first understood by the 17th-century French mathematician Albert Girard; Hutton writes: ...[Girard was] the first person who understood the general doctrine of the formation of the coefficients of the powers from the sum of the roots and their products. He was the first who discovered the rules for summing the powers of the roots of any equation. In this vein, the discriminant is a symmetric function in the roots that reflects properties of the roots – it is zero if and only if the polynomial has a multiple root, and for quadratic and cubic polynomials it is positive if and only if all roots are real and distinct, and negative if and only if there is a pair of distinct complex conjugate roots. See Discriminant: Nature of the roots for details. The cubic was first partly solved by the 15th–16th-century Italian mathematician Scipione del Ferro, who did not, however, publish his results; his method only solved one type of cubic equation. This solution was then rediscovered independently in 1535 by Niccolò Fontana Tartaglia, who shared it with Gerolamo Cardano, asking him not to publish it. Cardano then extended this to numerous other cases, using similar arguments; see more details at Cardano's method. After discovering del Ferro's earlier work, Cardano felt that Tartaglia's method was no longer secret, and thus he published his solution in his 1545 Ars Magna. His student Lodovico Ferrari solved the quartic polynomial; his solution was also included in Ars Magna. In this book, however, Cardano did not provide a "general formula" for the solution of a cubic equation, as he had neither complex numbers at his disposal nor the algebraic notation to be able to describe a general cubic equation. With the benefit of modern notation and complex numbers, the formulae in this book do work in the general case, but Cardano did not know this. It was Rafael Bombelli who managed to understand how to work with complex numbers in order to solve all forms of cubic equation. A further step was the 1770 paper Réflexions sur la résolution algébrique des équations by the French-Italian mathematician Joseph Louis Lagrange, in his method of Lagrange resolvents, where he analyzed Cardano's and Ferrari's solutions of cubics and quartics by considering them in terms of permutations of the roots, which yielded an auxiliary polynomial of lower degree, providing a unified understanding of the solutions and laying the groundwork for group theory and Galois' theory. Crucially, however, he did not consider composition of permutations. Lagrange's method did not extend to quintic equations or higher, because the resolvent had higher degree. The quintic was almost proven to have no general solutions by radicals by Paolo Ruffini in 1799, whose key insight was to use permutation groups, not just a single permutation. His solution contained a gap, which Cauchy considered minor, though this was not patched until the work of the Norwegian mathematician Niels Henrik Abel, who published a proof in 1824, thus establishing the Abel–Ruffini theorem. 
While Ruffini and Abel established that the general quintic could not be solved, some particular quintics can be solved, such as x⁵ − 1 = 0. The precise criterion by which a given quintic or higher polynomial could be determined to be solvable or not was given by Évariste Galois, who showed that whether a polynomial was solvable or not was equivalent to whether or not the permutation group of its roots – in modern terms, its Galois group – had a certain structure – in modern terms, whether or not it was a solvable group. This group was always solvable for polynomials of degree four or less, but not always so for polynomials of degree five and greater, which explains why there is no general solution in higher degrees. Galois' writings In 1830 Galois (at the age of 18) submitted to the Paris Academy of Sciences a memoir on his theory of solvability by radicals; Galois' paper was ultimately rejected in 1831 as being too sketchy and for giving a condition in terms of the roots of the equation instead of its coefficients. Galois then died in a duel in 1832, and his paper, "Mémoire sur les conditions de résolubilité des équations par radicaux", remained unpublished until 1846, when it was published by Joseph Liouville accompanied by some of his own explanations. Prior to this publication, Liouville announced Galois' result to the Academy in a speech he gave on 4 July 1843. According to Allan Clark, Galois's characterization "dramatically supersedes the work of Abel and Ruffini." Aftermath Galois' theory was notoriously difficult for his contemporaries to understand, especially to the level where they could expand on it. For example, in his 1846 commentary, Liouville completely missed the group-theoretic core of Galois' method. Joseph Alfred Serret, who attended some of Liouville's talks, included Galois' theory in the 1866 third edition of his textbook Cours d'algèbre supérieure. Serret's pupil, Camille Jordan, had an even better understanding, reflected in his 1870 book Traité des substitutions et des équations algébriques. Outside France, Galois' theory remained more obscure for a longer period. In Britain, Cayley failed to grasp its depth, and popular British algebra textbooks did not even mention Galois' theory until well after the turn of the century. In Germany, Kronecker's writings focused more on Abel's result. Dedekind wrote little about Galois' theory, but lectured on it at Göttingen in 1858, showing a very good understanding. Eugen Netto's books of the 1880s, based on Jordan's Traité, made Galois theory accessible to a wider German and American audience, as did Heinrich Martin Weber's 1895 algebra textbook. Permutation group approach Given a polynomial, it may be that some of the roots are connected by various algebraic equations. For example, it may be that for two of the roots, say A and B, A² + 5B³ = 7. The central idea of Galois' theory is to consider permutations (or rearrangements) of the roots such that any algebraic equation satisfied by the roots is still satisfied after the roots have been permuted. Originally, the theory had been developed for algebraic equations whose coefficients are rational numbers. It extends naturally to equations with coefficients in any field, but this will not be considered in the simple examples below. These permutations together form a permutation group, also called the Galois group of the polynomial, which is explicitly described in the following examples. 
Quadratic equation Consider the quadratic equation x² − 4x + 1 = 0. By using the quadratic formula, we find that the two roots are A = 2 + √3 and B = 2 − √3. Examples of algebraic equations satisfied by A and B include A + B = 4 and AB = 1. If we exchange A and B in either of the last two equations we obtain another true statement. For example, the equation A + B = 4 becomes B + A = 4. It is more generally true that this holds for every possible algebraic relation between A and B such that all coefficients are rational; that is, in any such relation, swapping A and B yields another true relation. This results from the theory of symmetric polynomials, which, in this case, may be replaced by formula manipulations involving the binomial theorem. One might object that A and B are related by the algebraic equation A − B − 2√3 = 0, which does not remain true when A and B are exchanged. However, this relation is not considered here, because it has the coefficient −2√3, which is not rational. We conclude that the Galois group of the polynomial x² − 4x + 1 consists of two permutations: the identity permutation which leaves A and B untouched, and the transposition permutation which exchanges A and B. As all groups with two elements are isomorphic, this Galois group is isomorphic to the multiplicative group {1, −1}. A similar discussion applies to any quadratic polynomial ax² + bx + c, where a, b and c are rational numbers. If the polynomial has rational roots, for example x² − 4x + 4 = (x − 2)², or x² − 3x + 2 = (x − 1)(x − 2), then the Galois group is trivial; that is, it contains only the identity permutation. In this example, with roots A = 1 and B = 2, the relation B = 2A is no longer true when A and B are swapped, so the transposition cannot belong to the Galois group. If it has two irrational roots, for example x² − 2, then the Galois group contains two permutations, just as in the above example. Quartic equation Consider the polynomial x⁴ − 10x² + 1. Completing the square in an unusual way, it can also be written as (x² − 1)² − 8x² = (x² − 2√2x − 1)(x² + 2√2x − 1). By applying the quadratic formula to each factor, one sees that the four roots are a = √2 + √3, b = √2 − √3, c = −√2 + √3, d = −√2 − √3. Among the 24 possible permutations of these four roots, four are particularly simple: those consisting in the sign change of 0, 1, or 2 square roots. They form a group that is isomorphic to the Klein four-group. Galois theory implies that, since the polynomial is irreducible, the Galois group has at least four elements. For proving that the Galois group consists of these four permutations, it suffices thus to show that every element of the Galois group is determined by the image of a, which can be shown as follows. The members of the Galois group must preserve any algebraic equation with rational coefficients involving a, b, c and d. Among these equations, we have ab = −1, ac = 1 and a + d = 0. It follows that, if φ is a permutation that belongs to the Galois group, we must have φ(b) = −1/φ(a), φ(c) = 1/φ(a) and φ(d) = −φ(a). This implies that the permutation is well defined by the image of a, and that the Galois group has 4 elements, which are: the identity; the change of sign of √2, which sends (a, b, c, d) to (c, d, a, b); the change of sign of √3, which sends (a, b, c, d) to (b, a, d, c); and the change of sign of both square roots, which sends (a, b, c, d) to (d, c, b, a). This implies that the Galois group is isomorphic to the Klein four-group. Modern approach by field theory In the modern approach, one starts with a field extension L/K (read "L over K"), and examines the group of automorphisms of L that fix K. See the article on Galois groups for further explanation and examples. The connection between the two approaches is as follows. The coefficients of the polynomial in question should be chosen from the base field K. The top field L should be the field obtained by adjoining the roots of the polynomial in question to the base field. Any permutation of the roots which respects algebraic equations as described above gives rise to an automorphism of L/K, and vice versa. 
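The root relations used in the quartic example above can be checked directly; a small Python sketch using the sympy library (an illustrative aid, not part of the original article):

    from sympy import sqrt, expand, symbols

    a = sqrt(2) + sqrt(3)
    b = sqrt(2) - sqrt(3)
    c = -sqrt(2) + sqrt(3)
    d = -sqrt(2) - sqrt(3)

    # The rational relations preserved by the Galois group:
    print(expand(a*b))     # -1
    print(expand(a*c))     # 1
    print(expand(a + d))   # 0

    # Each value is indeed a root of x**4 - 10*x**2 + 1:
    x = symbols('x')
    f = x**4 - 10*x**2 + 1
    print([expand(f.subs(x, r)) for r in (a, b, c, d)])  # [0, 0, 0, 0]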
In the first example above, we were studying the extension Q(√3)/Q, where Q is the field of rational numbers, and Q(√3) is the field obtained from Q by adjoining √3. In the second example, we were studying the extension Q(√2, √3)/Q. There are several advantages to the modern approach over the permutation group approach. It permits a far simpler statement of the fundamental theorem of Galois theory. The use of base fields other than Q is crucial in many areas of mathematics. For example, in algebraic number theory, one often does Galois theory using number fields, finite fields or local fields as the base field. It allows one to more easily study infinite extensions. Again this is important in algebraic number theory, where for example one often discusses the absolute Galois group of Q, defined to be the Galois group of Q̄/Q, where Q̄ is an algebraic closure of Q. It allows for consideration of inseparable extensions. This issue does not arise in the classical framework, since it was always implicitly assumed that arithmetic took place in characteristic zero, but nonzero characteristic arises frequently in number theory and in algebraic geometry. It removes the rather artificial reliance on chasing roots of polynomials. That is, different polynomials may yield the same extension fields, and the modern approach recognizes the connection between these polynomials. Solvable groups and solution by radicals The notion of a solvable group in group theory allows one to determine whether a polynomial is solvable in radicals, depending on whether its Galois group has the property of solvability. In essence, each field extension L/K corresponds to a factor group in a composition series of the Galois group. If a factor group in the composition series is cyclic of order n, and if in the corresponding field extension L/K the field K already contains a primitive nth root of unity, then it is a radical extension and the elements of L can then be expressed using the nth root of some element of K. If all the factor groups in its composition series are cyclic, the Galois group is called solvable, and all of the elements of the corresponding field can be found by repeatedly taking roots, products, and sums of elements from the base field (usually Q). One of the great triumphs of Galois theory was the proof that for every n > 4, there exist polynomials of degree n which are not solvable by radicals (this was proven independently, using a similar method, by Niels Henrik Abel a few years before, and is the Abel–Ruffini theorem), and a systematic way for testing whether a specific polynomial is solvable by radicals. The Abel–Ruffini theorem results from the fact that for n > 4 the symmetric group Sₙ contains a simple, noncyclic, normal subgroup, namely the alternating group Aₙ. A non-solvable quintic example Van der Waerden cites the polynomial f(x) = x⁵ − x − 1. By the rational root theorem, this has no rational zeroes. Neither does it have linear factors modulo 2 or 3. The Galois group of f(x) modulo 2 is cyclic of order 6, because f(x) modulo 2 factors into irreducible polynomials of degrees 2 and 3, (x² + x + 1)(x³ + x² + 1). f(x) modulo 3 has no linear or quadratic factor, and hence is irreducible. Thus its modulo 3 Galois group contains an element of order 5. It is known that a Galois group modulo a prime is isomorphic to a subgroup of the Galois group over the rationals. A permutation group on 5 objects with elements of orders 6 and 5 must be the symmetric group S₅, which is therefore the Galois group of f(x). This is one of the simplest examples of a non-solvable quintic polynomial. According to Serge Lang, Emil Artin was fond of this example. 
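The factorizations underpinning this argument are easy to verify computationally; a minimal Python sketch using sympy (an illustrative aid, not part of the original article):

    from sympy import symbols, factor

    x = symbols('x')
    f = x**5 - x - 1

    # Modulo 2, f splits into irreducible factors of degrees 2 and 3,
    # so the Galois group mod 2 is cyclic of order lcm(2, 3) = 6.
    print(factor(f, modulus=2))

    # Modulo 3, f has a single irreducible factor of degree 5,
    # so the Galois group mod 3 contains a 5-cycle.
    print(factor(f, modulus=3))

The first call prints the product of a quadratic and a cubic factor, and the second prints an irreducible quintic, matching the degrees quoted above.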
Inverse Galois problem The inverse Galois problem is to find a field extension with a given Galois group. As long as one does not also specify the ground field, the problem is not very difficult, and all finite groups do occur as Galois groups. For showing this, one may proceed as follows. Choose a field K and a finite group G. Cayley's theorem says that G is (up to isomorphism) a subgroup of the symmetric group S on the elements of G. Choose indeterminates {xₐ}, one for each element a of G, and adjoin them to K to get the field F = K({xₐ}). Contained within F is the field L of symmetric rational functions in the {xₐ}. The Galois group of F/L is S, by a basic result of Emil Artin. G acts on F by restriction of the action of S. If the fixed field of this action is M, then, by the fundamental theorem of Galois theory, the Galois group of F/M is G. On the other hand, it is an open problem whether every finite group is the Galois group of a field extension of the field Q of the rational numbers. Igor Shafarevich proved that every solvable finite group is the Galois group of some extension of Q. Various people have solved the inverse Galois problem for selected non-Abelian simple groups. Existence of solutions has been shown for all but possibly one (the Mathieu group M₂₃) of the 26 sporadic simple groups. There is even a polynomial with integral coefficients whose Galois group is the Monster group. Inseparable extensions In the form mentioned above, including in particular the fundamental theorem of Galois theory, the theory only considers Galois extensions, which are in particular separable. General field extensions can be split into a separable extension followed by a purely inseparable field extension. For a purely inseparable extension F/K, there is a Galois theory where the Galois group is replaced by the vector space of derivations Der_K(F), i.e., K-linear endomorphisms of F satisfying the Leibniz rule. In this correspondence, an intermediate field E is assigned the subspace of derivations that vanish on E. Conversely, a subspace of derivations satisfying appropriate further conditions is mapped to the intermediate field of elements annihilated by every derivation in the subspace. Under the assumption that Fᵖ ⊆ K, Jacobson showed that this establishes a one-to-one correspondence. The condition imposed by Jacobson has been removed in later work, by giving a correspondence using notions of derived algebraic geometry.
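A compact way to state the two maps in this correspondence (a sketch, following the description above):

\[
E \;\longmapsto\; \{\, d \in \mathrm{Der}_K(F) : d(e) = 0 \ \text{for all } e \in E \,\},
\qquad
V \;\longmapsto\; \{\, x \in F : d(x) = 0 \ \text{for all } d \in V \,\}.
\]

Under Jacobson's hypothesis Fᵖ ⊆ K, these two assignments are mutually inverse between the intermediate fields of F/K and the admissible subspaces of derivations.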
Mathematics
Algebra
null
61334
https://en.wikipedia.org/wiki/Pygmy%20hippopotamus
Pygmy hippopotamus
The pygmy hippopotamus or pygmy hippo (Choeropsis liberiensis) is a small hippopotamid which is native to the forests and swamps of West Africa, primarily in Liberia, with small populations in Sierra Leone, Guinea, and Ivory Coast. It has been extirpated from Nigeria. The pygmy hippo is reclusive and nocturnal. It is one of only two extant species in the family Hippopotamidae, the other being its much larger relative, the common hippopotamus (Hippopotamus amphibius) or Nile hippopotamus. The pygmy hippopotamus displays many terrestrial adaptations, but like the common hippo, it is semiaquatic and relies on water to keep its skin moist and its body temperature cool. Behaviors such as mating and giving birth may occur in water or on land. The pygmy hippo is herbivorous, feeding on ferns, broad-leaved plants, grasses, and fruits it finds in the forests. A rare nocturnal forest creature, the pygmy hippopotamus is a difficult animal to study in the wild. Pygmy hippos were unknown outside West Africa until the 19th century. Introduced to zoos in the early 20th century, they breed well in captivity, and the vast majority of research is derived from zoo specimens. The survival of the species in captivity is more assured than in the wild; in a 2015 assessment, the International Union for Conservation of Nature estimated that fewer than 2,500 pygmy hippos remain in the wild. Pygmy hippos are primarily threatened by loss of habitat, as forests are logged and converted to farm land, and are also vulnerable to poaching, hunting for bushmeat, natural predators, and war. Taxonomy and origins Nomenclature of the pygmy hippopotamus reflects that of the hippopotamus; the plural form is pygmy hippopotamuses or pygmy hippopotami. A male pygmy hippopotamus is known as a bull, a female as a cow, and a baby as a calf. A group of hippopotami is known as a herd or a bloat. The pygmy hippopotamus is a member of the family Hippopotamidae, where it is placed in the genus Choeropsis ("resembling a hog"). Members of Hippopotamidae are sometimes known as hippopotamids; sometimes the sub-family Hippopotaminae is used. Further, some taxonomists group hippopotami and anthracotheres in the superfamily Anthracotheroidea or Hippopotamoidea. The taxonomy of the genus of the pygmy hippopotamus has changed as understanding of the animal has developed. Samuel G. Morton initially classified the animal as Hippopotamus minor, but later determined it was distinct enough to warrant its own genus, and labeled it Choeropsis. In 1977, Shirley C. Coryndon proposed that the pygmy hippopotamus was closely related to Hexaprotodon, a genus that consisted of prehistoric hippos mostly native to Asia. This classification was widely accepted until Boisserie asserted in 2005, after a thorough examination of the phylogeny of Hippopotamidae, that the pygmy hippopotamus was not a member of Hexaprotodon. He suggested instead that the pygmy hippopotamus formed a distinct genus, and returned the animal to Choeropsis. ITIS lists Hexaprotodon liberiensis as the valid scientific name. All agree that the modern pygmy hippopotamus, be it H. liberiensis or C. liberiensis, is the only extant member of its genus. The American Society of Mammalogists moved it back to Choeropsis in 2021, a move supported by the IUCN. Nigerian subspecies A distinct subspecies of pygmy hippopotamus existed in Nigeria until at least the 20th century, though the validity of this classification has been questioned. 
The existence of the subspecies makes Choeropsis liberiensis liberiensis (or Hexaprotodon liberiensis liberiensis under the old classification) the full trinomial nomenclature for the Liberian pygmy hippopotamus. The Nigerian pygmy hippopotamus was never studied in the wild and never captured; all research and all zoo specimens are of the Liberian subspecies. The Nigerian subspecies is classified as C. liberiensis heslopi. The Nigerian pygmy hippopotamus ranged in the Niger River Delta, especially near Port Harcourt, but no reliable reports exist after the collection of the museum specimens secured by Ian Heslop, a British colonial officer, in the early 1940s. It is probably extinct. The ranges of the two subspecies were separated by a great distance and by the Dahomey Gap, a region of savanna that divides the forest regions of West Africa. The subspecies is named after Heslop, who shot three members of it in 1935 and 1943. He estimated that perhaps no more than 30 pygmy hippos remained in the region. Heslop sent four pygmy hippopotamus skulls he collected to the British Museum of Natural History in London. These specimens were not subjected to taxonomic evaluation, however, until 1969, when they were classified as belonging to a separate subspecies based on consistent variations in the proportions of the skulls. Nigerian pygmy hippos were seen or shot in Rivers State, Imo State and Bayelsa State, Nigeria. While some local people are aware that the species once existed, its history in the region is poorly documented. Evolution The evolution of the pygmy hippopotamus is most often studied in the context of its larger cousin. Both species were long believed to be most closely related to the family Suidae (pigs and hogs) or Tayassuidae (peccaries), but research within the last 10 years has determined that pygmy hippos and hippos are most closely related to cetaceans (whales and dolphins). Hippos and whales shared a common semi-aquatic ancestor that branched off from other artiodactyls around 60 million years ago. This hypothesized ancestor likely split into two branches about six million years later. One branch would evolve into cetaceans; the other became the anthracotheres, a large family of four-legged beasts whose earliest member, from the Late Eocene, would have resembled a narrow hippopotamus with a comparatively small and thin head. Hippopotamids are deeply nested within the family Anthracotheriidae. The oldest known hippopotamid is the genus Kenyapotamus, which lived in Africa during the Miocene. Kenyapotamus is known only through fragmentary fossils, but was similar in size to C. liberiensis. The Hippopotamidae are believed to have evolved in Africa, and while at one point the family spread across Asia and Europe, no hippopotami have ever been discovered in the Americas. From the late Miocene onward, species of Archaeopotamus, likely ancestors of the genera Hippopotamus and Hexaprotodon, lived in Africa and the Middle East. While the fossil record of hippos is still poorly understood, the lineages of the two modern genera, Hippopotamus and Choeropsis, may have diverged as far back as the late Miocene. The ancestral form of the pygmy hippopotamus may be the genus Saotherium. Saotherium and Choeropsis are significantly more basal than Hippopotamus and Hexaprotodon, and thus more closely resemble the ancestral species of hippos. Extinct pygmy and dwarf hippos Several species of small hippopotamids became extinct in the Mediterranean in the late Pleistocene or early Holocene. 
Though these species are sometimes known as "pygmy hippopotami", they are not believed to be closely related to C. liberiensis. These include the Cretan dwarf hippopotamus (Hippopotamus creutzburgi), the Sicilian hippopotamus (Hippopotamus pentlandi), the Maltese hippopotamus (Hippopotamus melitensis) and the Cyprus dwarf hippopotamus (Hippopotamus minor). These species, though comparable in size to the pygmy hippopotamus, are considered dwarf hippopotamuses rather than pygmies: they are likely descended from a full-sized species of European hippopotamus and reached their small size through the evolutionary process of insular dwarfism, which is common on islands, whereas the ancestors of the pygmy hippopotamus were also small and thus there was never a dwarfing process. There were also several species of pygmy hippo on the island of Madagascar (see Malagasy hippopotamus). Description Pygmy hippos share the same general form as the hippopotamus. They have a graviportal skeleton, with four stubby legs and four toes on each foot, supporting a portly frame. Yet the pygmy is only half as tall as the hippopotamus and weighs less than a quarter as much as its larger cousin. Adult pygmy hippos stand about 75–100 cm (30–39 in) high at the shoulder, are 150–175 cm (59–69 in) in length and weigh 180–275 kg (397–606 lb). Their lifespan in captivity ranges from 30 to 55 years, though it is unlikely that they live this long in the wild. The skin is greenish-black or brown, shading to a creamy gray on the lower body. Their skin is very similar to the common hippo's, with a thin epidermis over a dermis that is several centimeters thick. Pygmy hippos have the same unusual secretion as common hippos, which gives a pinkish tinge to their bodies and is sometimes described as "blood sweat", though the secretion is neither sweat nor blood. This substance, hipposudoric acid, is believed to have antiseptic and sunscreening properties. The skin of hippos dries out quickly and cracks, which is why both species spend so much time in water. The skeleton of C. liberiensis is more gracile than that of the common hippopotamus, meaning its bones are proportionally thinner. The common hippo's spine is parallel with the ground; the pygmy hippo's back slopes forward, a likely adaptation for passing more easily through dense forest vegetation. Proportionally, the pygmy hippo's legs and neck are longer and its head smaller. The orbits and nostrils of a pygmy hippo are much less pronounced, an adaptation to spending less time in deep water (where pronounced orbits and nostrils help the common hippo breathe and see). The feet of pygmy hippos are narrower, but the toes are more spread out and have less webbing, to assist in walking on the forest floor. Despite adaptations to a more terrestrial life than the common hippopotamus, pygmy hippos are still more aquatic than all other terrestrial even-toed ungulates. The ears and nostrils of pygmy hippos have strong muscular valves to aid submerging underwater, and their skin physiology depends on the availability of water. Behavior The behavior of the pygmy hippo differs from that of the common hippo in many ways. Much of its behavior is more similar to that of a tapir, though this is an effect of convergent evolution. While the common hippopotamus is gregarious, pygmy hippos live either alone or in small groups, typically a mated pair or a mother and calf. Pygmy hippos tend to ignore each other rather than fight when they meet. 
Field studies have estimated that male pygmy hippos have home ranges several times larger than those of females. Pygmy hippos spend most of the day hidden in rivers. They will rest in the same spot for several days in a row before moving to a new spot. At least some pygmy hippos make use of dens or burrows that form in river banks. It is unknown if the pygmy hippos help create these dens, or how common it is to use them. Though a pygmy hippo has never been observed burrowing, other artiodactyls, such as warthogs, are burrowers. Diet Like the common hippopotamus, the pygmy hippo emerges from the water at dusk to feed. It relies on game trails to travel through dense forest vegetation. It marks trails by vigorously waving its tail while defecating, to further spread its feces. The pygmy hippo spends about six hours a day foraging for food. Pygmy hippos are herbivorous. They do not eat aquatic vegetation to a significant extent and rarely eat grass because it is uncommon in the thick forests they inhabit. The bulk of a pygmy hippo's diet consists of herbs, ferns, broad-leaved plants, herbaceous shoots, forbs, sedges and fruits that have fallen to the forest floor. The wide variety of plants pygmy hippos have been observed eating suggests that they will eat any plants available. This diet is of higher quality than that of the common hippopotamus. Reproduction A study of breeding behavior in the wild has never been conducted; the artificial conditions of captivity may cause the observed behavior of pygmy hippos in zoos to differ from natural conditions. Sexual maturity for the pygmy hippopotamus occurs between three and five years of age. The youngest reported age for giving birth is that of a pygmy hippo at Zoo Basel, Switzerland, which bore a calf at three years and three months. The oestrus cycle of a female pygmy hippo lasts an average of 35.5 days, with the oestrus itself lasting between 24 and 48 hours. Pygmy hippos consort for mating, but the duration of the relationship is unknown. In zoos they breed as monogamous pairs. Copulation can take place on land or in the water, and a pair will mate one to four times during an oestrus period. In captivity, pygmy hippos have been conceived and born in all months of the year. The gestation period ranges from 190 to 210 days, and usually a single young is born, though twins are known to occur. The common hippopotamus gives birth and mates only in the water, but pygmy hippos mate and give birth both on land and in the water. Young pygmy hippos can swim almost immediately. At birth, pygmy hippos weigh 4.5–6.2 kg (9.9–13.7 lb), with males weighing about 0.25 kg (0.55 lb) more than females. Pygmy hippos are fully weaned between six and eight months of age; before weaning they do not accompany their mother when she leaves the water to forage, but instead hide in the water by themselves. The mother returns to the hiding spot about three times a day and calls out for the calf to suckle. Suckling occurs with the mother lying on her side. Temperament Although not considered dangerous to humans and generally docile, pygmy hippos can be highly aggressive at times. Although there have been no human deaths associated with pygmy hippos, there have been several attacks; while most of these were provoked by human behavior, several had no apparent cause. Conservation The greatest threat to the remaining pygmy hippopotamus population in the wild is loss of habitat.
The forests in which pygmy hippos live have been subject to logging, settling and conversion to agriculture, with little effort made to make logging sustainable. As forests shrink, the populations become more fragmented, leading to less genetic diversity in the potential mating pool. Pygmy hippos are among the species illegally hunted for food in Liberia. Their meat is said to be of excellent quality, like that of a wild boar; unlike those of the common hippo, the pygmy hippo's teeth have no value. The effects of West Africa's civil strife on the pygmy hippopotamus are unknown, but unlikely to be positive. The pygmy hippopotamus can be killed by leopards, pythons and crocodiles; how often this occurs is unknown. C. liberiensis was identified as one of the top 10 "focal species" in 2007 by the Evolutionarily Distinct and Globally Endangered (EDGE) project. Some populations inhabit protected areas, such as the Gola Forest Reserve in Sierra Leone. Basel Zoo in Switzerland holds the international studbook and coordinates the captive pygmy hippo population, which breeds freely in zoos around the world. Between 1970 and 1991 the population of pygmy hippos born in captivity more than doubled. The survival of the species in zoos is more certain than its survival in the wild. In captivity, the pygmy hippo lives from 42 to 55 years, longer than in the wild. Since 1919, only 41 percent of pygmy hippos born in zoos have been male. History and folklore While the common hippopotamus has been known to Europeans since classical antiquity, the pygmy hippopotamus was unknown outside its range in West Africa until the 19th century. Due to their nocturnal, forested existence, they were poorly known within their range as well. In Liberia the animal was traditionally known as a water cow. Early field reports misidentified the animal as a wild hog. Several skulls of the species were sent to the American natural scientist Samuel G. Morton during his residency in Monrovia, Liberia. Morton first described the species in 1843. The first complete specimens were collected as part of a comprehensive investigation of Liberian fauna in the 1870s and 1880s by Dr. Johann Büttikofer. The specimens were taken to the Natural History Museum in Leiden, the Netherlands. The first pygmy hippo was brought to Europe in 1873, after being captured in Sierra Leone by a member of the British Colonial Service, but died shortly after arrival. Pygmy hippos were successfully established in European zoos in 1911. They were first shipped to Germany and then to the Bronx Zoo in New York City, where they also thrived. In 1927, Harvey Firestone of Firestone Tires presented Billy the pygmy hippo to U.S. President Calvin Coolidge. Coolidge donated Billy to the National Zoo in Washington, D.C. According to the zoo, Billy is a common ancestor of most pygmy hippos in U.S. zoos today. Moo Deng is a pygmy hippo living in Khao Kheow Open Zoo in Thailand, who gained notability in September 2024 as a popular Internet meme after images of her went viral online. Because of the popularity of the hippo, whose name translates to "bouncy pork", the zoo saw a boost in attendance. It has been reported that some visitors to the zoo threw water and other objects at the baby hippo to get her to react. Several folktales have been collected about the pygmy hippopotamus.
One tale says that pygmy hippos carry a shining diamond in their mouths to help them travel through thick forests at night; by day the pygmy hippo has a secret hiding place for the diamond, but if a hunter catches a pygmy hippo at night, the diamond can be taken. Villagers sometimes believed that baby pygmy hippos do not nurse but rather lick secretions off the skin of the mother.
Biology and health sciences
Other artiodactyla
Animals
61338
https://en.wikipedia.org/wiki/Addition
Addition
Addition (usually signified by the plus symbol +) is one of the four basic operations of arithmetic, the other three being subtraction, multiplication and division. The addition of two whole numbers results in the total amount or sum of those values combined. The example in the adjacent image shows two columns, one of three apples and one of two apples, totaling five apples. This observation is equivalent to the mathematical expression 3 + 2 = 5 (that is, "3 plus 2 is equal to 5"). Besides counting items, addition can also be defined and executed without referring to concrete objects, using abstractions called numbers instead, such as integers, real numbers and complex numbers. Addition belongs to arithmetic, a branch of mathematics. In algebra, another area of mathematics, addition can also be performed on abstract objects such as vectors, matrices, subspaces and subgroups. Addition has several important properties. It is commutative, meaning that the order of the operands does not matter, and it is associative, meaning that when one adds more than two numbers, the order in which addition is performed does not matter. Repeated addition of 1 is the same as counting (see Successor function). Addition of 0 does not change a number. Addition also obeys rules concerning related operations such as subtraction and multiplication. Performing addition is one of the simplest numerical tasks. Addition of very small numbers is accessible to toddlers; the most basic task, 1 + 1, can be performed by infants as young as five months, and even by some members of other animal species. In primary education, students are taught to add numbers in the decimal system, starting with single digits and progressively tackling more difficult problems. Mechanical aids range from the ancient abacus to the modern computer, where research on the most efficient implementations of addition continues to this day. Notation and terminology Addition is written using the plus sign "+" between the terms; that is, in infix notation. The result is expressed with an equals sign. For example, 1 + 2 = 3 ("one plus two equals three"), 5 + 4 + 2 = 11 (see "associativity" below) and 3 + 3 + 3 + 3 = 12 (see "multiplication" below). There are also situations where addition is "understood", even though no symbol appears: a whole number followed immediately by a fraction indicates the sum of the two, called a mixed number. For example, 3½ = 3 + ½ = 3.5. This notation can cause confusion, since in most other contexts, juxtaposition denotes multiplication instead. The sum of a series of related numbers can be expressed through capital sigma notation, which compactly denotes iteration. For example, the sum 1² + 2² + 3² + 4² + 5² = 55 can be written with the sigma symbol as the sum of k² for k running from 1 to 5. Terms The numbers or the objects to be added in general addition are collectively referred to as the terms, the addends or the summands; this terminology carries over to the summation of multiple terms. This is to be distinguished from factors, which are multiplied. Some authors call the first addend the augend. In fact, during the Renaissance, many authors did not consider the first addend an "addend" at all. Today, due to the commutative property of addition, "augend" is rarely used, and both terms are generally called addends. All of the above terminology derives from Latin. "Addition" and "add" are English words derived from the Latin verb addere, which is in turn a compound of ad "to" and dare "to give", from a Proto-Indo-European root meaning "to give"; thus to add is to give to. Using the gerundive suffix -nd results in "addend", "thing to be added". Likewise from augere "to increase", one gets "augend", "thing to be increased".
"Sum" and "summand" derive from the Latin noun summa "the highest, the top" and associated verb summare. This is appropriate not only because the sum of two positive numbers is greater than either, but because it was common for the ancient Greeks and Romans to add upward, contrary to the modern practice of adding downward, so that a sum was literally at the top of the addends. Addere and summare date back at least to Boethius, if not to earlier Roman writers such as Vitruvius and Frontinus; Boethius also used several other terms for the addition operation. The later Middle English terms "adden" and "adding" were popularized by Chaucer. The plus sign "+" (Unicode:U+002B; ASCII: &#43;) is an abbreviation of the Latin word et, meaning "and". It appears in mathematical works dating back to at least 1489. Interpretations Addition is used to model many physical processes. Even for the simple case of adding natural numbers, there are many possible interpretations and even more visual representations. Combining sets Possibly the most basic interpretation of addition lies in combining sets: When two or more disjoint collections are combined into a single collection, the number of objects in the single collection is the sum of the numbers of objects in the original collections. This interpretation is easy to visualize, with little danger of ambiguity. It is also useful in higher mathematics (for the rigorous definition it inspires, see below). However, it is not obvious how one should extend this version of addition to include fractional numbers or negative numbers. One possible fix is to consider collections of objects that can be easily divided, such as pies or, still better, segmented rods. Rather than solely combining collections of segments, rods can be joined end-to-end, which illustrates another conception of addition: adding not the rods but the lengths of the rods. Extending a length A second interpretation of addition comes from extending an initial length by a given length: When an original length is extended by a given amount, the final length is the sum of the original length and the length of the extension. The sum a + b can be interpreted as a binary operation that combines a and b, in an algebraic sense, or it can be interpreted as the addition of b more units to a. Under the latter interpretation, the parts of a sum play asymmetric roles, and the operation is viewed as applying the unary operation +b to a. Instead of calling both a and b addends, it is more appropriate to call a the augend in this case, since a plays a passive role. The unary view is also useful when discussing subtraction, because each unary addition operation has an inverse unary subtraction operation, and vice versa. Properties Commutativity Addition is commutative, meaning that one can change the order of the terms in a sum, but still get the same result. Symbolically, if a and b are any two numbers, then a + b = b + a. The fact that addition is commutative is known as the "commutative law of addition" or "commutative property of addition". Some other binary operations are commutative, such as multiplication, but many others, such as subtraction and division, are not. Associativity Addition is associative, which means that when three or more numbers are added together, the order of operations does not change the result. As an example, should the expression a + b + c be defined to mean (a + b) + c or a + (b + c)? Given that addition is associative, the choice of definition is irrelevant. 
For any three numbers a, b, and c, it is true that (a + b) + c = a + (b + c). For example, (1 + 2) + 3 = 1 + (2 + 3) = 6. When addition is used together with other operations, the order of operations becomes important. In the standard order of operations, addition is a lower priority than exponentiation, nth roots, multiplication and division, but is given equal priority to subtraction. Identity element Adding zero to any number does not change the number; this means that zero is the identity element for addition, and is also known as the additive identity. In symbols, for every a, one has a + 0 = 0 + a = a. This law was first identified in Brahmagupta's Brahmasphutasiddhanta in 628 AD, although he wrote it as three separate laws, depending on whether a is negative, positive, or zero itself, and he used words rather than algebraic symbols. Later Indian mathematicians refined the concept; around the year 830, Mahavira wrote, "zero becomes the same as what is added to it", corresponding to the unary statement 0 + a = a. In the 12th century, Bhaskara wrote, "In the addition of cipher, or subtraction of it, the quantity, positive or negative, remains the same", corresponding to the unary statement a + 0 = a = a − 0. Successor Within the context of integers, addition of one also plays a special role: for any integer a, the integer a + 1 is the least integer greater than a, also known as the successor of a. For instance, 3 is the successor of 2 and 7 is the successor of 6. Because of this succession, the value of a + b can also be seen as the bth successor of a, making addition iterated succession. For example, 6 + 2 is 8, because 8 is the successor of 7, which is the successor of 6, making 8 the 2nd successor of 6. Units To numerically add physical quantities with units, they must be expressed with common units. For example, adding 50 milliliters to 150 milliliters gives 200 milliliters. However, if a measure of 5 feet is extended by 2 inches, the sum is 62 inches, since 60 inches is synonymous with 5 feet. On the other hand, it is usually meaningless to try to add 3 meters and 4 square meters, since those units are incomparable; this sort of consideration is fundamental in dimensional analysis. Performing addition Innate ability Studies on mathematical development starting around the 1980s have exploited the phenomenon of habituation: infants look longer at situations that are unexpected. A seminal experiment by Karen Wynn in 1992 involving Mickey Mouse dolls manipulated behind a screen demonstrated that five-month-old infants expect 1 + 1 to be 2, and they are comparatively surprised when a physical situation seems to imply that 1 + 1 is either 1 or 3. This finding has since been affirmed by a variety of laboratories using different methodologies. Another 1992 experiment with older toddlers, between 18 and 35 months, exploited their development of motor control by allowing them to retrieve ping-pong balls from a box; the youngest responded well for small numbers, while older subjects were able to compute sums up to 5. Even some nonhuman animals show a limited ability to add, particularly primates. In a 1995 experiment imitating Wynn's 1992 result (but using eggplants instead of dolls), rhesus macaque and cottontop tamarin monkeys performed similarly to human infants. More dramatically, after being taught the meanings of the Arabic numerals 0 through 4, one chimpanzee was able to compute the sum of two numerals without further training. More recently, Asian elephants have demonstrated an ability to perform basic arithmetic. Childhood learning Typically, children first master counting.
When given a problem that requires that two items and three items be combined, young children model the situation with physical objects, often fingers or a drawing, and then count the total. As they gain experience, they learn or discover the strategy of "counting-on": asked to find two plus three, children count three past two, saying "three, four, five" (usually ticking off fingers), and arriving at five. This strategy seems almost universal; children can easily pick it up from peers or teachers. Most discover it independently. With additional experience, children learn to add more quickly by exploiting the commutativity of addition by counting up from the larger number, in this case, starting with three and counting "four, five." Eventually children begin to recall certain addition facts ("number bonds"), either through experience or rote memorization. Once some facts are committed to memory, children begin to derive unknown facts from known ones. For example, a child asked to add six and seven may know that 6 + 6 = 12 and then reason that 6 + 7 is one more, or 13. Such derived facts can be found very quickly and most elementary school students eventually rely on a mixture of memorized and derived facts to add fluently. Different nations introduce whole numbers and arithmetic at different ages, with many countries teaching addition in pre-school. However, throughout the world, addition is taught by the end of the first year of elementary school. Table Children are often presented with the addition table of pairs of numbers from 0 to 9 to memorize. Decimal system The prerequisite to addition in the decimal system is the fluent recall or derivation of the 100 single-digit "addition facts". One could memorize all the facts by rote, but pattern-based strategies are more enlightening and, for most people, more efficient: Commutative property: Mentioned above, using the pattern a + b = b + a reduces the number of "addition facts" from 100 to 55. One or two more: Adding 1 or 2 is a basic task, and it can be accomplished through counting on or, ultimately, intuition. Zero: Since zero is the additive identity, adding zero is trivial. Nonetheless, in the teaching of arithmetic, some students are introduced to addition as a process that always increases the addends; word problems may help rationalize the "exception" of zero. Doubles: Adding a number to itself is related to counting by two and to multiplication. Doubles facts form a backbone for many related facts, and students find them relatively easy to grasp. Near-doubles: Sums such as 6 + 7 = 13 can be quickly derived from the doubles fact 6 + 6 = 12 by adding one more, or from 7 + 7 = 14 but subtracting one. Five and ten: Sums of the form 5 + x and 10 + x are usually memorized early and can be used for deriving other facts. For example, 6 + 5 = 11 can be derived from 5 + 5 = 10 by adding one more. Making ten: An advanced strategy uses 10 as an intermediate for sums involving 8 or 9; for example, 8 + 6 = 8 + 2 + 4 = 10 + 4 = 14. As students grow older, they commit more facts to memory, and learn to derive other facts rapidly and fluently. Many students never commit all the facts to memory, but can still find any basic fact quickly. Carry The standard algorithm for adding multidigit numbers is to align the addends vertically and add the columns, starting from the ones column on the right. If a column exceeds nine, the extra digit is "carried" into the next column. For example, in the addition 27 + 59 = 86, the ones column gives 7 + 9 = 16: the 6 is written and the digit 1 is the carry, which is added into the tens column (1 + 2 + 5 = 8).
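The column-by-column procedure translates directly into code. The following is a minimal sketch, not taken from the source: it assumes the addends are stored as arrays of decimal digits with the least significant digit first, and the function name add_digits is illustrative.

#include <stdio.h>

/* Add two numbers stored as arrays of decimal digits, least significant
   digit first, mirroring the standard column-by-column algorithm.
   Returns the number of digits in the result, which may be one more
   than the longer input because of a final carry. */
int add_digits(const int *a, int na, const int *b, int nb, int *sum) {
    int n = (na > nb) ? na : nb;
    int carry = 0;
    for (int i = 0; i < n; i++) {
        int da = (i < na) ? a[i] : 0;   /* missing digits count as 0 */
        int db = (i < nb) ? b[i] : 0;
        int s = da + db + carry;
        sum[i] = s % 10;                /* digit written in this column */
        carry = s / 10;                 /* carry passed to the next column */
    }
    if (carry != 0)
        sum[n++] = carry;               /* a final carry extends the result */
    return n;
}

int main(void) {
    int a[] = {7, 2};                   /* 27, least significant digit first */
    int b[] = {9, 5};                   /* 59 */
    int sum[3];
    int n = add_digits(a, 2, b, 2, sum);
    for (int i = n - 1; i >= 0; i--)
        printf("%d", sum[i]);           /* prints 86 */
    printf("\n");
    return 0;
}

Run on the digits of 27 and 59, the loop reproduces the worked example above: the ones column yields 16, so 6 is written and 1 is carried into the tens column.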
An alternate strategy starts adding from the most significant digit on the left; this route makes carrying a little clumsier, but it is faster at getting a rough estimate of the sum. There are many alternative methods. Since the end of the 20th century, some US programs, including TERC, decided to remove the traditional carrying method from their curriculum. This decision was criticized, which is why some states and counties did not support this experiment. Decimal fractions Decimal fractions can be added by a simple modification of the above process. One aligns two decimal fractions above each other, with the decimal point in the same location. If necessary, one can add trailing zeros to a shorter decimal to make it the same length as the longer decimal. Finally, one performs the same addition process as above, except the decimal point is placed in the answer exactly where it was placed in the summands. As an example, 45.1 + 4.34 can be solved by aligning 45.10 and 04.34 at the decimal point and adding to obtain 49.44. Scientific notation In scientific notation, numbers are written in the form s × 10ⁿ, where s is the significand and 10ⁿ is the exponential part. Addition requires two numbers in scientific notation to be represented using the same exponential part, so that the two significands can simply be added. For example, 2.34 × 10⁻⁵ + 5.67 × 10⁻⁶ = 2.34 × 10⁻⁵ + 0.567 × 10⁻⁵ = 2.907 × 10⁻⁵. Non-decimal Addition in other bases is very similar to decimal addition. As an example, one can consider addition in binary. Adding two single-digit binary numbers is relatively simple, using a form of carrying: 0 + 0 → 0; 0 + 1 → 1; 1 + 0 → 1; 1 + 1 → 0, carry 1 (since 1 + 1 = 2 = 0 + (1 × 2¹)). Adding two "1" digits produces a digit "0", while 1 must be added to the next column. This is similar to what happens in decimal when certain single-digit numbers are added together; if the result equals or exceeds the value of the radix (10), the digit to the left is incremented: 5 + 5 → 0, carry 1 (since 5 + 5 = 10 = 0 + (1 × 10¹)); 7 + 9 → 6, carry 1 (since 7 + 9 = 16 = 6 + (1 × 10¹)). This is known as carrying. When the result of an addition exceeds the value of a digit, the procedure is to "carry" the excess amount divided by the radix (that is, 10/10) to the left, adding it to the next positional value. This is correct since the next position has a weight that is higher by a factor equal to the radix. Carrying works the same way in binary: 01101 + 10111 = 100100. In this example, two numerals are being added together: 01101₂ (13₁₀) and 10111₂ (23₁₀). Starting in the rightmost column, 1 + 1 = 10₂; the 1 is carried to the left, and the 0 is written at the bottom of the rightmost column. The second column from the right is added: 0 + 1 + 1 = 10₂ again; the 1 is carried, and 0 is written at the bottom. The third column: 1 + 1 + 1 = 11₂; this time, a 1 is carried, and a 1 is written in the bottom row. Proceeding like this gives the final answer 100100₂ (36₁₀). Computers Analog computers work directly with physical quantities, so their addition mechanisms depend on the form of the addends. A mechanical adder might represent two addends as the positions of sliding blocks, in which case they can be added with an averaging lever. If the addends are the rotation speeds of two shafts, they can be added with a differential. A hydraulic adder can add the pressures in two chambers by exploiting Newton's second law to balance forces on an assembly of pistons.
The most common situation for a general-purpose analog computer is to add two voltages (referenced to ground); this can be accomplished roughly with a resistor network, but a better design exploits an operational amplifier. Addition is also fundamental to the operation of digital computers, where the efficiency of addition, in particular the carry mechanism, is an important limitation to overall performance. The abacus, also called a counting frame, is a calculating tool that was in use centuries before the adoption of the written modern numeral system and is still widely used by merchants, traders and clerks in Asia, Africa, and elsewhere; it dates back to at least 2700–2300 BC, when it was used in Sumer. Blaise Pascal invented the mechanical calculator in 1642; it was the first operational adding machine. It made use of a gravity-assisted carry mechanism. It was the only operational mechanical calculator in the 17th century and the earliest automatic, digital computer. Pascal's calculator was limited by its carry mechanism, which forced its wheels to turn only one way, so it could only add. To subtract, the operator had to use the calculator's complement method, which required as many steps as an addition. Giovanni Poleni followed Pascal, building the second functional mechanical calculator in 1709, a calculating clock made of wood that, once set up, could multiply two numbers automatically. Adders execute integer addition in electronic digital computers, usually using binary arithmetic. The simplest architecture is the ripple carry adder, which follows the standard multi-digit algorithm. One slight improvement is the carry skip design, again following human intuition; one does not perform all the carries in computing 999 + 1, but one bypasses the group of 9s and skips to the answer. In practice, computational addition may be achieved via XOR and AND bitwise logical operations in conjunction with bitshift operations, as shown in the code below. Both XOR and AND gates are straightforward to realize in digital logic, allowing the realization of full adder circuits, which in turn may be combined into more complex logical operations. In modern digital computers, integer addition is typically the fastest arithmetic instruction, yet it has the largest impact on performance, since it underlies all floating-point operations as well as such basic tasks as address generation during memory access and fetching instructions during branching. To increase speed, modern designs calculate digits in parallel; these schemes go by such names as carry select, carry lookahead, and the Ling pseudocarry. Many implementations are, in fact, hybrids of these last three designs. Unlike addition on paper, addition on a computer often changes the addends. On the ancient abacus and adding board, both addends are destroyed, leaving only the sum. The influence of the abacus on mathematical thinking was strong enough that early Latin texts often claimed that in the process of adding "a number to a number", both numbers vanish. In modern times, the ADD instruction of a microprocessor often replaces the augend with the sum but preserves the addend. In a high-level programming language, evaluating a + b does not change either a or b; if the goal is to replace a with the sum, this must be explicitly requested, typically with the statement a = a + b. Some languages such as C or C++ allow this to be abbreviated as a += b.
// Iterative algorithm
int add(int x, int y) {
    while (y != 0) {
        int carry = x & y;   // AND marks the bit positions that generate a carry
        x = x ^ y;           // XOR adds each bit position, ignoring carries
        y = carry << 1;      // the carries move one position to the left
    }
    return x;
}

// Recursive algorithm
int add(int x, int y) {
    return (y == 0) ? x : add(x ^ y, (x & y) << 1);
}

On a computer, if the result of an addition is too large to store, an arithmetic overflow occurs, resulting in an incorrect answer. Unanticipated arithmetic overflow is a fairly common cause of program errors. Such overflow bugs may be hard to discover and diagnose because they may manifest themselves only for very large input data sets, which are less likely to be used in validation tests. The Year 2000 problem was a series of bugs where overflow errors occurred due to the use of a 2-digit format for years. Addition of numbers To prove the usual properties of addition, one must first define addition for the context in question. Addition is first defined on the natural numbers. In set theory, addition is then extended to progressively larger sets that include the natural numbers: the integers, the rational numbers, and the real numbers. (In mathematics education, positive fractions are added before negative numbers are even considered; this is also the historical route.) Natural numbers There are two popular ways to define the sum of two natural numbers a and b. If one defines natural numbers to be the cardinalities of finite sets (the cardinality of a set is the number of elements in the set), then it is appropriate to define their sum as follows: Let N(S) be the cardinality of a set S. Take two disjoint sets A and B, with N(A) = a and N(B) = b. Then a + b is defined as N(A ∪ B). Here, A ∪ B is the union of A and B. An alternate version of this definition allows A and B to possibly overlap and then takes their disjoint union, a mechanism that allows common elements to be separated out and therefore counted twice. The other popular definition is recursive: Let n+ be the successor of n, that is the number following n in the natural numbers, so 0+ = 1, 1+ = 2. Define a + 0 = a. Define the general sum recursively by a + (b+) = (a + b)+. Hence 1 + 1 = 1 + 0+ = (1 + 0)+ = 1+ = 2. Again, there are minor variations upon this definition in the literature. Taken literally, the above definition is an application of the recursion theorem on the partially ordered set N². On the other hand, some sources prefer to use a restricted recursion theorem that applies only to the set of natural numbers. One then considers a to be temporarily "fixed", applies recursion on b to define a function "a +", and pastes these unary operations for all a together to form the full binary operation. This recursive formulation of addition was developed by Dedekind as early as 1854, and he would expand upon it in the following decades. He proved the associative and commutative properties, among others, through mathematical induction. Integers The simplest conception of an integer is that it consists of an absolute value (which is a natural number) and a sign (generally either positive or negative). The integer zero is a special third case, being neither positive nor negative. The corresponding definition of addition must proceed by cases: For an integer n, let |n| be its absolute value. Let a and b be integers. If either a or b is zero, treat it as an identity. If a and b are both positive, define a + b = |a| + |b|. If a and b are both negative, define a + b = −(|a| + |b|). If a and b have different signs, define a + b to be the difference between |a| and |b|, with the sign of the term whose absolute value is larger.
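This case analysis can be transcribed directly into code. The sketch below is an illustration rather than anything from the source: it represents an integer as a sign (−1, 0 or +1) together with a natural-number magnitude, and the type and function names are assumptions.

typedef struct {
    int sign;        /* -1, 0 or +1 */
    unsigned mag;    /* absolute value */
} SignedInt;

/* Add two integers following the case-based definition in the text. */
SignedInt signed_add(SignedInt a, SignedInt b) {
    SignedInt r;
    if (a.sign == 0) return b;       /* zero is treated as an identity */
    if (b.sign == 0) return a;
    if (a.sign == b.sign) {          /* same sign: add the absolute values */
        r.sign = a.sign;
        r.mag = a.mag + b.mag;
    } else if (a.mag >= b.mag) {     /* different signs: subtract the smaller */
        r.sign = a.sign;             /* magnitude from the larger; the result */
        r.mag = a.mag - b.mag;       /* takes the sign of the larger magnitude */
    } else {
        r.sign = b.sign;
        r.mag = b.mag - a.mag;
    }
    if (r.mag == 0) r.sign = 0;      /* normalize so that zero has no sign */
    return r;
}

With this representation, adding −6 and 4 takes the different-signs branch and yields sign −1 with magnitude 2, matching the worked example that follows.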
As an example, −6 + 4 = −2; because −6 and 4 have different signs, their absolute values are subtracted, and since the absolute value of the negative term is larger, the answer is negative. Although this definition can be useful for concrete problems, the number of cases to consider complicates proofs unnecessarily. So the following method is commonly used for defining integers. It is based on the remark that every integer is the difference of two natural numbers, and that two such differences, a − b and c − d, are equal if and only if a + d = b + c. So, one can define formally the integers as the equivalence classes of ordered pairs of natural numbers under the equivalence relation (a, b) ~ (c, d) if and only if a + d = b + c. The equivalence class of (a, b) contains either (a − b, 0) if a ≥ b, or (0, b − a) otherwise. If n is a natural number, one can denote by +n the equivalence class of (n, 0), and by −n the equivalence class of (0, n). This allows identifying the natural number n with the equivalence class +n. Addition of ordered pairs is done component-wise: (a, b) + (c, d) = (a + c, b + d). A straightforward computation shows that the equivalence class of the result depends only on the equivalence classes of the summands, and thus that this defines an addition of equivalence classes, that is, integers. Another straightforward computation shows that this addition is the same as the above case definition. This way of defining integers as equivalence classes of pairs of natural numbers can be used to embed into a group any commutative semigroup with cancellation property. Here, the semigroup is formed by the natural numbers and the group is the additive group of integers. The rational numbers are constructed similarly, by taking as semigroup the nonzero integers with multiplication. This construction has also been generalized under the name of Grothendieck group to the case of any commutative semigroup. Without the cancellation property the semigroup homomorphism from the semigroup into the group may be non-injective. Originally, the Grothendieck group was, more specifically, the result of this construction applied to the equivalence classes under isomorphisms of the objects of an abelian category, with the direct sum as semigroup operation. Rational numbers (fractions) Addition of rational numbers can be computed using the least common denominator, but a conceptually simpler definition involves only integer addition and multiplication: Define a/b + c/d = (ad + bc)/(bd). As an example, the sum 3/4 + 1/8 = (3 × 8 + 4 × 1)/(4 × 8) = 28/32 = 7/8. Addition of fractions is much simpler when the denominators are the same; in this case, one can simply add the numerators while leaving the denominator the same: a/c + b/c = (a + b)/c, so 1/4 + 2/4 = 3/4. The commutativity and associativity of rational addition is an easy consequence of the laws of integer arithmetic. For a more rigorous and general discussion, see field of fractions. Real numbers A common construction of the set of real numbers is the Dedekind completion of the set of rational numbers. A real number is defined to be a Dedekind cut of rationals: a non-empty set of rationals that is closed downward and has no greatest element. The sum of real numbers a and b is defined element by element: Define a + b = {q + r : q ∈ a, r ∈ b}. This definition was first published, in a slightly modified form, by Richard Dedekind in 1872. The commutativity and associativity of real addition are immediate; defining the real number 0 to be the set of negative rationals, it is easily seen to be the additive identity. Probably the trickiest part of this construction pertaining to addition is the definition of additive inverses.
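Returning for a moment to the rational numbers: the rule a/b + c/d = (ad + bc)/(bd) also lends itself to a direct implementation. The following sketch is illustrative, not from the source; the struct and function names are assumptions, the result is reduced to lowest terms with the Euclidean algorithm, and overflow for very large operands is ignored.

typedef struct {
    long num;   /* numerator */
    long den;   /* denominator, assumed nonzero */
} Fraction;

/* Greatest common divisor by the Euclidean algorithm. */
static long gcd(long a, long b) {
    while (b != 0) {
        long t = a % b;
        a = b;
        b = t;
    }
    return a < 0 ? -a : a;
}

/* Add two fractions by the rule a/b + c/d = (ad + bc)/(bd),
   then reduce the result to lowest terms. */
Fraction fraction_add(Fraction x, Fraction y) {
    Fraction r;
    r.num = x.num * y.den + y.num * x.den;   /* ad + bc */
    r.den = x.den * y.den;                   /* bd */
    long g = gcd(r.num, r.den);
    if (g != 0) {
        r.num /= g;
        r.den /= g;
    }
    return r;
}

For example, adding 3/4 and 1/8 produces 28/32, which the reduction step brings to 7/8, in agreement with the computation above.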
Unfortunately, dealing with multiplication of Dedekind cuts is a time-consuming case-by-case process similar to the addition of signed integers. Another approach is the metric completion of the rational numbers. A real number is essentially defined to be the limit of a Cauchy sequence of rationals, lim aₙ. Addition is defined term by term: Define lim aₙ + lim bₙ = lim (aₙ + bₙ). This definition was first published by Georg Cantor, also in 1872, although his formalism was slightly different. One must prove that this operation is well-defined, dealing with co-Cauchy sequences. Once that task is done, all the properties of real addition follow immediately from the properties of rational numbers. Furthermore, the other arithmetic operations, including multiplication, have straightforward, analogous definitions. Complex numbers Complex numbers are added by adding the real and imaginary parts of the summands. That is to say: (a + bi) + (c + di) = (a + c) + (b + d)i. Using the visualization of complex numbers in the complex plane, the addition has the following geometric interpretation: the sum of two complex numbers A and B, interpreted as points of the complex plane, is the point X obtained by building a parallelogram three of whose vertices are O, A and B. Equivalently, X is the point such that the triangles with vertices O, A, B, and X, B, A, are congruent. Generalizations There are many binary operations that can be viewed as generalizations of the addition operation on the real numbers. The field of abstract algebra is centrally concerned with such generalized operations, and they also appear in set theory and category theory. Abstract algebra Vectors In linear algebra, a vector space is an algebraic structure that allows for adding any two vectors and for scaling vectors. A familiar vector space is the set of all ordered pairs of real numbers; the ordered pair (a, b) is interpreted as a vector from the origin in the Euclidean plane to the point (a, b) in the plane. The sum of two vectors is obtained by adding their individual coordinates: (a, b) + (c, d) = (a + c, b + d). This addition operation is central to classical mechanics, in which velocities, accelerations and forces are all represented by vectors. Matrices Matrix addition is defined for two matrices of the same dimensions. The sum of two m × n (pronounced "m by n") matrices A and B, denoted by A + B, is again an m × n matrix computed by adding corresponding elements: (A + B)ij = Aij + Bij. For example, adding the 2 × 2 matrix with rows (1, 3) and (1, 0) to the matrix with rows (0, 0) and (7, 5) gives the matrix with rows (1, 3) and (8, 5). Modular arithmetic In modular arithmetic, the set of available numbers is restricted to a finite subset of the integers, and addition "wraps around" when reaching a certain value, called the modulus. For example, the set of integers modulo 12 has twelve elements; it inherits an addition operation from the integers that is central to musical set theory. The set of integers modulo 2 has just two elements; the addition operation it inherits is known in Boolean logic as the "exclusive or" function. A similar "wrap around" operation arises in geometry, where the sum of two angle measures is often taken to be their sum as real numbers modulo 2π. This amounts to an addition operation on the circle, which in turn generalizes to addition operations on many-dimensional tori. General theory The general theory of abstract algebra allows an "addition" operation to be any associative and commutative operation on a set. Basic algebraic structures with such an addition operation include commutative monoids and abelian groups. Set theory and category theory A far-reaching generalization of addition of natural numbers is the addition of ordinal numbers and cardinal numbers in set theory.
These give two different generalizations of addition of natural numbers to the transfinite. Unlike most addition operations, addition of ordinal numbers is not commutative. Addition of cardinal numbers, however, is a commutative operation closely related to the disjoint union operation. In category theory, disjoint union is seen as a particular case of the coproduct operation, and general coproducts are perhaps the most abstract of all the generalizations of addition. Some coproducts, such as direct sum and wedge sum, are named to evoke their connection with addition. Related operations Addition, along with subtraction, multiplication and division, is considered one of the basic operations and is used in elementary arithmetic. Arithmetic Subtraction can be thought of as a kind of addition—that is, the addition of an additive inverse. Subtraction is itself a sort of inverse to addition, in that adding x and subtracting x are inverse functions. Given a set with an addition operation, one cannot always define a corresponding subtraction operation on that set; the set of natural numbers is a simple example. On the other hand, a subtraction operation uniquely determines an addition operation, an additive inverse operation, and an additive identity; for this reason, an additive group can be described as a set that is closed under subtraction. Multiplication can be thought of as repeated addition. If a single term a appears in a sum n times, then the sum is the product of n and a. If n is not a natural number, the product may still make sense; for example, multiplication by −1 yields the additive inverse of a number. In the real and complex numbers, addition and multiplication can be interchanged by the exponential function: e^(a + b) = e^a · e^b. This identity allows multiplication to be carried out by consulting a table of logarithms and computing addition by hand; it also enables multiplication on a slide rule. The formula is still a good first-order approximation in the broad context of Lie groups, where it relates multiplication of infinitesimal group elements with addition of vectors in the associated Lie algebra. There are even more generalizations of multiplication than addition. In general, multiplication operations always distribute over addition; this requirement is formalized in the definition of a ring. In some contexts, such as the integers, distributivity over addition and the existence of a multiplicative identity is enough to uniquely determine the multiplication operation. The distributive property also provides information about addition; by expanding the product (1 + 1)(a + b) in both ways, one concludes that addition is forced to be commutative. For this reason, ring addition is commutative in general. Division is an arithmetic operation remotely related to addition. Since a/c = a × (1/c), division is right distributive over addition: (a + b)/c = a/c + b/c. However, division is not left distributive over addition; 1/(2 + 4) is not the same as 1/2 + 1/4. Ordering The maximum operation "max (a, b)" is a binary operation similar to addition. In fact, if two nonnegative numbers a and b are of different orders of magnitude, then their sum is approximately equal to their maximum. This approximation is extremely useful in the applications of mathematics, for example in truncating Taylor series. However, it presents a perpetual difficulty in numerical analysis, essentially since "max" is not invertible. If b is much greater than a, then a straightforward calculation of (a + b) − b can accumulate an unacceptable round-off error, perhaps even returning zero.
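This failure mode is easy to demonstrate in ordinary double-precision arithmetic; the short program below is an illustrative sketch, not from the source.

#include <stdio.h>

int main(void) {
    double a = 1.0e-20;            /* far below the precision of b */
    double b = 1.0;
    double result = (a + b) - b;   /* a + b rounds to exactly 1.0, losing a */
    printf("%g\n", result);        /* prints 0, not 1e-20 */
    return 0;
}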
Mathematics
Basics
null
61344
https://en.wikipedia.org/wiki/Lightning
Lightning
Lightning is a natural phenomenon, more specifically an atmospheric electrical phenomenon. It consists of electrostatic discharges occurring through the atmosphere between two electrically charged regions, either both existing within the atmosphere or one within the atmosphere and one on the ground, with these regions then becoming partially or wholly electrically neutralized. Lightning involves a near-instantaneous release of energy on a scale averaging between 200 megajoules and 7 gigajoules. This discharge may produce a wide range of electromagnetic radiation, from heat created by the rapid movement of electrons, to brilliant flashes of visible light in the form of black-body radiation. Lightning also causes thunder, a sound from the shock wave which develops as gases in the vicinity of the discharge experience a sudden increase in pressure. Lightning occurs most commonly during thunderstorms, though it can also occur in other types of energetic weather systems. Lightning influences the global atmospheric electrical circuit, atmospheric chemistry, and is a natural ignition source of wildfires. The scientific study of lightning is called fulminology. Forms Three primary forms of lightning are distinguished by where they occur: intra-cloud (IC) lightning, within a single thundercloud; cloud-to-cloud (CC) lightning, between two clouds; and cloud-to-ground (CG) lightning, between a cloud and the ground, in which case it is referred to as a lightning strike. Many other observational variants are recognized, including: volcanic lightning, which can occur during volcanic eruptions; "heat lightning", which can be seen from a great distance but not heard; dry lightning, which can cause forest fires; and ball lightning, which is rarely observed scientifically. The most direct effects of lightning on humans occur as a result of cloud-to-ground lightning, even though intra-cloud and cloud-to-cloud flashes are more common. Intra-cloud and cloud-to-cloud lightning indirectly affect humans through their influence on atmospheric chemistry. There are variations of each type, such as "positive" versus "negative" CG flashes, that have different physical characteristics common to each which can be measured. Cloud to ground (CG) Cloud-to-ground (CG) lightning is a lightning discharge between a thundercloud and the ground. It is initiated by a stepped leader moving down from the cloud, which is met by a streamer moving up from the ground. CG is the least common but best understood of all types of lightning. It is easier to study scientifically because it terminates on a physical object, namely the ground, and lends itself to being measured by instruments on the ground. Of the three primary types of lightning, it poses the greatest threat to life and property, since it terminates on the ground or "strikes". The overall discharge, termed a flash, is composed of a number of processes such as preliminary breakdown, stepped leaders, connecting leaders, return strokes, dart leaders, and subsequent return strokes. The conductivity of the electrical ground, be it soil, fresh water, or salt water, may affect the lightning discharge rate and thus the visible characteristics. Positive and negative lightning Cloud-to-ground (CG) lightning is either positive or negative, as defined by the direction of the conventional electric current between cloud and ground. Most CG lightning is negative, meaning that a negative charge is transferred to the ground as electrons flow downwards along the lightning channel (in terms of conventional current, the current flows from the ground up to the cloud).
The reverse happens in a positive CG flash, where electrons travel upward along the lightning channel while a positive charge is transferred downward to the ground (in terms of conventional current, the flow is in the opposite direction, from cloud to ground). Positive lightning is less common than negative lightning and on average makes up less than 5% of all lightning strikes. There are a number of mechanisms theorized to result in the formation of positive lightning. These are mainly based on movement or intensification of charge centres in the cloud. Such changes in cloud charging may come about as a result of variations in vertical wind shear or precipitation, or dissipation of the storm. Positive flashes may also result from certain behaviour of in-cloud discharges, e.g. breaking off or branching from existing flashes. Positive lightning strikes tend to be much more intense than their negative counterparts. An average bolt of negative lightning creates an electric current of 30,000 amperes (30 kA), transferring a total of 15 coulombs (C) of electric charge and 1 gigajoule of energy. Large bolts of positive lightning can create up to 120 kA and transfer 350 C. The average positive ground flash has roughly double the peak current of a typical negative flash, and can produce peak currents up to 400 kA and charges of several hundred coulombs. Furthermore, positive ground flashes with high peak currents are commonly followed by long continuing currents, a correlation not seen in negative ground flashes. As a result of their greater power, positive lightning strikes are considerably more dangerous than negative strikes. Positive lightning produces both higher peak currents and longer continuing currents, making it capable of heating surfaces to much higher levels, which increases the likelihood of a fire being ignited. The long distances positive lightning can propagate through clear air explain why such flashes are known as "bolts from the blue", giving no warning to observers. Positive lightning has also been shown to trigger the occurrence of upward lightning flashes from the tops of tall structures and is largely responsible for the initiation of sprites several tens of kilometers above ground level. Positive lightning tends to occur more frequently in winter storms, as with thundersnow, during intense tornadoes and in the dissipation stage of a thunderstorm. Huge quantities of extremely low frequency (ELF) and very low frequency (VLF) radio waves are also generated. Contrary to popular belief, positive lightning flashes do not necessarily originate from the anvil or the upper positive charge region and strike a rain-free area outside of the thunderstorm. This belief is based on the outdated idea that lightning leaders are unipolar and originate from their respective charge region. Despite the popular misconception that flashes originating from the anvil are positive, due to their seeming to originate from the positive charge region, observations have shown that these are in fact negative flashes. They begin as IC flashes within the cloud; the negative leader then exits the cloud from the positive charge region before propagating through clear air and striking the ground some distance away. Cloud to cloud (CC) and intra-cloud (IC) Lightning discharges may occur between areas of cloud without contacting the ground. When it occurs between two separate clouds, it is known as cloud-to-cloud (CC) or inter-cloud lightning; when it occurs between areas of differing electric potential within a single cloud, it is known as intra-cloud (IC) lightning.
IC lightning is the most frequently occurring type. IC lightning most commonly occurs between the upper anvil portion and lower reaches of a given thunderstorm. This lightning can sometimes be observed at great distances at night as so-called "sheet lightning". In such instances, the observer may see only a flash of light without hearing any thunder. Another term used for cloud–cloud or cloud–cloud–ground lightning is "anvil crawler", due to the habit of the discharge, typically originating beneath or within the anvil, of crawling through the upper cloud layers of a thunderstorm, often generating dramatic multiple-branch strokes. These are usually seen as a thunderstorm passes over the observer or begins to decay. The most vivid crawler behavior occurs in well-developed thunderstorms that feature extensive rear anvil shearing. Formation The processes involved in lightning formation fall into the following categories: large-scale atmospheric phenomena in which charge separation can occur (e.g. storms); microscopic physical processes that result in charge separation; large-scale separation of charge and establishment of an electric field; and discharge through a lightning channel. Atmospheric phenomena in which lightning occurs Lightning primarily occurs when warm air is mixed with colder air masses, resulting in the atmospheric disturbances necessary for polarizing the atmosphere. The disturbances result in storms, and when those storms also result in lightning and thunder, they are called thunderstorms. Lightning can also occur during dust storms, forest fires, tornadoes, volcanic eruptions, and even in the cold of winter, where the lightning is known as thundersnow. Hurricanes typically generate some lightning, mainly in the rainbands well away from the center. Intense forest fires, such as those seen in the 2019–20 Australian bushfire season, can create their own weather systems that can produce lightning (also called fire lightning) and other weather phenomena. Intense heat from a fire causes air to rapidly rise within the smoke plume, causing the formation of pyrocumulonimbus clouds. Cooler air is drawn in by this turbulent, rising air, helping to cool the plume. The rising plume is further cooled by the lower atmospheric pressure at high altitude, allowing the moisture in it to condense into cloud. Pyrocumulonimbus clouds form in an unstable atmosphere. These weather systems can produce dry lightning, fire tornadoes, intense winds, and dirty hail. Airplane contrails have also been observed to influence lightning to a small degree. The water-vapor-dense contrails of airplanes may provide a lower-resistance pathway through the atmosphere, having some influence upon the establishment of an ionic pathway for a lightning flash to follow. Rocket exhaust plumes provided a pathway for lightning when it was witnessed striking the Apollo 12 rocket shortly after takeoff. Thermonuclear explosions, by providing extra material for electrical conduction and a very turbulent localized atmosphere, have been seen triggering lightning flashes within the mushroom cloud. In addition, intense gamma radiation from large nuclear explosions may develop intensely charged regions in the surrounding air through Compton scattering. The intensely charged space-charge regions create multiple clear-air lightning discharges shortly after the device detonates.
Some high-energy cosmic rays produced by supernovas, as well as solar particles from the solar wind, enter the atmosphere and electrify the air, which may create pathways for lightning channels. Charge separation Charge separation in thunderstorms The details of the charging process are still being studied by scientists, but there is general agreement on some of the basic concepts of thunderstorm charge separation, also known as electrification. Electrification can occur through the triboelectric effect, leading to electron or ion transfer between colliding bodies. Uncharged, colliding water drops can become charged because of charge transfer between them (as aqueous ions) in an electric field such as would exist in a thunderstorm. The main charging area in a thunderstorm occurs in the central part of the storm, where air is moving upward rapidly (updraft) and temperatures range from about −15 to −25 °C. In that area, the combination of temperature and rapid upward air movement produces a mixture of super-cooled cloud droplets (small water droplets below freezing), small ice crystals, and graupel (soft hail). The updraft carries the super-cooled cloud droplets and very small ice crystals upward. At the same time, the graupel, which is considerably larger and denser, tends to fall or be suspended in the rising air. The differences in the movement of the precipitation cause collisions to occur. When the rising ice crystals collide with graupel, the ice crystals become positively charged and the graupel becomes negatively charged. The updraft carries the positively charged ice crystals upward toward the top of the storm cloud. The larger and denser graupel is either suspended in the middle of the thunderstorm cloud or falls toward the lower part of the storm. The result is that the upper part of the thunderstorm cloud becomes positively charged while the middle to lower part of the thunderstorm cloud becomes negatively charged. The upward motions within the storm and winds at higher levels in the atmosphere tend to cause the small ice crystals (and positive charge) in the upper part of the thunderstorm cloud to spread out horizontally some distance from the thunderstorm cloud base. This part of the thunderstorm cloud is called the anvil. While this is the main charging process for the thunderstorm cloud, some of these charges can be redistributed by air movements within the storm (updrafts and downdrafts). In addition, there is a small but important positive charge buildup near the bottom of the thunderstorm cloud due to the precipitation and warmer temperatures. Charge separation in different phases of water The induced separation of charge in pure liquid water has been known since the 1840s, as has the electrification of pure liquid water by the triboelectric effect. William Thomson (Lord Kelvin) demonstrated that charge separation in water occurs in the usual electric fields at the Earth's surface and developed a continuous electric field measuring device using that knowledge. The physical separation of charge into different regions using liquid water was demonstrated by Kelvin with the Kelvin water dropper. The most likely charge-carrying species were considered to be the aqueous hydrogen ion and the aqueous hydroxide ion. An electron is not stable in liquid water, with respect to forming a hydroxide ion plus dissolved hydrogen, on the time scales involved in thunderstorms. The electrical charging of solid water ice has also been considered.
The charged species were again considered to be the hydrogen ion and the hydroxide ion. The charge carrier in lightning is mainly electrons in a plasma. The process of going from charge as ions (positive hydrogen ion and negative hydroxide ion) associated with liquid water or solid water, to charge as electrons associated with lightning, must involve some form of electrochemistry, that is, the oxidation and/or the reduction of chemical species. As hydroxide functions as a base and carbon dioxide is an acidic gas, it is possible that charged water clouds, in which the negative charge is in the form of the aqueous hydroxide ion, interact with atmospheric carbon dioxide to form aqueous carbonate ions and aqueous hydrogen carbonate ions. Establishing an electric field In order for an electrostatic discharge to occur, two preconditions are necessary: first, a sufficiently high potential difference between two regions of space must exist, and second, a high-resistance medium must obstruct the free, unimpeded equalization of the opposite charges. The atmosphere provides the electrical insulation, or barrier, that prevents free equalization between charged regions of opposite polarity. Meanwhile, a thunderstorm can provide the charge separation and aggregation in certain regions of the cloud. When the local electric field exceeds the dielectric strength of damp air (about 3 MV/m), electrical discharge results in a strike, often followed by commensurate discharges branching from the same path. Mechanisms that cause the charges to build up to lightning are still a matter of scientific investigation. A 2016 study confirmed that dielectric breakdown is involved. Lightning may be caused by the circulation of warm moisture-filled air through electric fields. Ice or water particles then accumulate charge as in a Van de Graaff generator. As a thundercloud moves over the surface of the Earth, an equal electric charge, but of opposite polarity, is induced on the Earth's surface underneath the cloud. The induced positive surface charge, when measured against a fixed point, will be small as the thundercloud approaches, increasing as the center of the storm arrives and dropping as the thundercloud passes. The referential value of the induced surface charge could be roughly represented as a bell curve. The oppositely charged regions create an electric field within the air between them. This electric field varies in relation to the strength of the surface charge on the base of the thundercloud – the greater the accumulated charge, the higher the electric field. Electrical discharge as flashes and strikes The best-studied and understood form of lightning is cloud-to-ground (CG) lightning. Although more common, intra-cloud (IC) and cloud-to-cloud (CC) flashes are very difficult to study, given that there are no "physical" points to monitor inside the clouds. Also, given the very low probability of lightning striking the same point repeatedly and consistently, scientific inquiry is difficult even in areas of high CG frequency. Lightning leaders In a process not well understood, a bidirectional channel of ionized air, called a "leader", is initiated between oppositely charged regions in a thundercloud. Leaders are electrically conductive channels of ionized gas that propagate through, or are otherwise attracted to, regions with a charge opposite to that of the leader tip.
The negative end of the bidirectional leader fills a positive charge region, also called a well, inside the cloud while the positive end fills a negative charge well. Leaders often split, forming branches in a tree-like pattern. In addition, negative and some positive leaders travel in a discontinuous fashion, in a process called "stepping". The resulting jerky movement of the leaders can be readily observed in slow-motion videos of lightning flashes. It is possible for one end of the leader to fill the oppositely-charged well entirely while the other end is still active. When this happens, the leader end which filled the well may propagate outside of the thundercloud and result in either a cloud-to-air flash or a cloud-to-ground flash. In a typical cloud-to-ground flash, a bidirectional leader initiates between the main negative and lower positive charge regions in a thundercloud. The weaker positive charge region is filled quickly by the negative leader which then propagates toward the inductively-charged ground. The positively and negatively charged leaders proceed in opposite directions, positive upwards within the cloud and negative towards the earth. Both ionic channels proceed, in their respective directions, in a number of successive spurts. Each leader "pools" ions at the leading tips, shooting out one or more new leaders, momentarily pooling again to concentrate charged ions, then shooting out another leader. The negative leader continues to propagate and split as it heads downward, often speeding up as it gets closer to the Earth's surface. About 90% of ionic channel lengths between "pools" are approximately 45 m (148 ft) in length. The establishment of the ionic channel takes a comparatively long amount of time (hundreds of milliseconds) in comparison to the resulting discharge, which occurs within a few dozen microseconds. The electric current needed to establish the channel, measured in the tens or hundreds of amperes, is dwarfed by subsequent currents during the actual discharge. Initiation of the lightning leader is not well understood. The electric field strength within the thundercloud is not typically large enough to initiate this process by itself. Many hypotheses have been proposed. One hypothesis postulates that showers of relativistic electrons are created by cosmic rays and are then accelerated to higher velocities via a process called runaway breakdown. As these relativistic electrons collide and ionize neutral air molecules, they initiate leader formation. Another hypothesis involves locally enhanced electric fields being formed near elongated water droplets or ice crystals. Percolation theory, especially for the case of biased percolation, describes random connectivity phenomena, which produce an evolution of connected structures similar to that of lightning strikes. A streamer avalanche model has recently been favored by observational data taken by LOFAR during storms. Upward streamers When a stepped leader approaches the ground, the presence of opposite charges on the ground enhances the strength of the electric field. The electric field is strongest on grounded objects whose tops are closest to the base of the thundercloud, such as trees and tall buildings. If the electric field is strong enough, a positively charged ionic channel, called a positive or upward streamer, can develop from these points. This was first theorized by Heinz Kasemir.
As negatively charged leaders approach, increasing the localized electric field strength, grounded objects already experiencing corona discharge will exceed a threshold and form upward streamers. Attachment Once a downward leader connects to an available upward leader, a process referred to as attachment, a low-resistance path is formed and discharge may occur. Photographs have been taken in which unattached streamers are clearly visible. The unattached downward leaders are also visible in branched lightning, none of which are connected to the earth, although it may appear they are. High-speed videos can show the attachment process in progress. Discharge – Return stroke Once a conductive channel bridges the air gap between the negative charge excess in the cloud and the positive surface charge excess below, there is a large drop in resistance across the lightning channel. As a result, electrons accelerate rapidly in a zone beginning at the point of attachment, which expands across the entire leader network at up to one third of the speed of light. This is the "return stroke" and it is the most luminous and noticeable part of the lightning discharge. A large electric charge flows along the plasma channel, from the cloud to the ground, neutralising the positive ground charge as electrons flow away from the strike point to the surrounding area. This huge surge of current creates large radial voltage differences along the surface of the ground. Called step potentials, they are responsible for more injuries and deaths in groups of people or of other animals than the strike itself. Electricity takes every path available to it. Such step potentials will often cause current to flow through one leg and out another, electrocuting an unlucky human or animal standing near the point where the lightning strikes. The electric current of the return stroke averages 30 kiloamperes for a typical negative CG flash, often referred to as "negative CG" lightning. In some cases, a ground-to-cloud (GC) lightning flash may originate from a positively charged region on the ground below a storm. These discharges normally originate from the tops of very tall structures, such as communications antennas. The rate at which the return stroke current travels has been found to be around 100,000 km/s (one-third of the speed of light). A typical cloud-to-ground lightning flash culminates in the formation of an electrically conducting plasma channel through the air in excess of 5 km (3.1 mi) tall, from within the cloud to the ground's surface. The massive flow of electric current occurring during the return stroke combined with the rate at which it occurs (measured in microseconds) rapidly superheats the completed leader channel, forming a highly electrically conductive plasma channel. The core temperature of the plasma during the return stroke may exceed 50,000 K, causing it to radiate with a brilliant, blue-white color. Once the electric current stops flowing, the channel cools and dissipates over tens or hundreds of milliseconds, often disappearing as fragmented patches of glowing gas. The nearly instantaneous heating during the return stroke causes the air to expand explosively, producing a powerful shock wave which is heard as thunder. Discharge – Re-strike High-speed videos (examined frame-by-frame) show that most negative CG lightning flashes are made up of 3 or 4 individual strokes, though there may be as many as 30.
Each re-strike is separated by a relatively large amount of time, typically 40 to 50 milliseconds, as other charged regions in the cloud are discharged in subsequent strokes. Re-strikes often cause a noticeable "strobe light" effect. To understand why multiple return strokes utilize the same lightning channel, one needs to understand the behavior of positive leaders, which a typical ground flash effectively becomes following the negative leader's connection with the ground. Positive leaders decay more rapidly than negative leaders do. For reasons not well understood, bidirectional leaders tend to initiate on the tips of the decayed positive leaders in which the negative end attempts to re-ionize the leader network. These leaders, also called recoil leaders, usually decay shortly after their formation. When they do manage to make contact with a conductive portion of the main leader network, a return stroke-like process occurs and a dart leader travels across all or a portion of the length of the original leader. The dart leaders making connections with the ground are what cause a majority of subsequent return strokes. Each successive stroke is preceded by intermediate dart leader strokes that have a faster rise time but lower amplitude than the initial return stroke. Each subsequent stroke usually re-uses the discharge channel taken by the previous one, but the channel may be offset from its previous position as wind displaces the hot channel. Since recoil and dart leader processes do not occur on negative leaders, subsequent return strokes very seldom utilize the same channel on positive ground flashes, which are explained later in the article. Discharge – Transient currents during flash The electric current within a typical negative CG lightning discharge rises very quickly to its peak value in 1–10 microseconds, then decays more slowly over 50–200 microseconds. The transient nature of the current within a lightning flash results in several phenomena that need to be addressed in the effective protection of ground-based structures. Rapidly changing (alternating) currents tend to travel on the surface of a conductor, in what is called the skin effect, unlike direct currents, which "flow-through" the entire conductor like water through a hose. Hence, conductors used in the protection of facilities tend to be multi-stranded, with small wires woven together. This increases the total bundle surface area in inverse proportion to the individual strand radius, for a fixed total cross-sectional area. The rapidly changing currents also create electromagnetic pulses (EMPs) that radiate outward from the ionic channel. This is a characteristic of all electrical discharges. The radiated pulses rapidly weaken as their distance from the origin increases. However, if they pass over conductive elements such as power lines, communication lines, or metallic pipes, they may induce a current which travels outward to its termination. The surge current is inversely related to the surge impedance: the higher the impedance, the lower the current. This is the surge that, more often than not, results in the destruction of delicate electronics, electrical appliances, or electric motors. Devices known as surge protectors (SPD) or transient voltage surge suppressors (TVSS) attached in parallel with these lines can detect the lightning flash's transient irregular current, and, through alteration of its physical properties, route the spike to an attached earthing ground, thereby protecting the equipment from damage.
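The inverse relation between surge current and surge impedance can be made concrete with a minimal sketch; the voltage and impedance figures below are illustrative assumptions, not measured values:

# Minimal sketch: induced surge current on a conductor, assuming the
# simple relation I = V / Z between the induced surge voltage V and
# the line's surge (characteristic) impedance Z described above.
# All numbers are illustrative placeholders, not measured values.

def surge_current(induced_voltage_v: float, surge_impedance_ohm: float) -> float:
    """Return the surge current in amperes for a given induced voltage."""
    return induced_voltage_v / surge_impedance_ohm

if __name__ == "__main__":
    v = 100_000.0  # assumed induced voltage on an overhead line, volts
    for z in (50.0, 300.0, 600.0):  # assumed surge impedances, ohms
        print(f"Z = {z:5.0f} ohm -> I = {surge_current(v, z):8.1f} A")

As the printed values show, tripling the impedance cuts the induced current to a third, which is why the text notes that higher-impedance paths carry lower surge currents.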
Distribution, frequency and properties Global monitoring indicates that lightning on Earth occurs at an average frequency of approximately 44 (± 5) times per second, equating to nearly 1.4 billion flashes per year. Median flash duration is 0.52 seconds, made up of a number of much shorter flashes (strokes) of around 60 to 70 microseconds. Occurrences are distributed unevenly across the planet, with about 70% being over land in the tropics where atmospheric convection is the greatest. Many factors affect the frequency, distribution, strength and physical properties of a typical lightning flash in a particular region of the world. These factors include ground elevation, latitude, prevailing wind currents, relative humidity, and proximity to warm and cold bodies of water. To a certain degree the proportions of intra-cloud, cloud-to-cloud, and cloud-to-ground lightning may also vary by season in middle latitudes. This occurs from both the mixture of warmer and colder air masses, as well as differences in moisture concentrations, and it generally happens at the boundaries between them. The flow of warm ocean currents past drier land masses, such as the Gulf Stream, partially explains the elevated frequency of lightning in the Southeast United States. Because large bodies of water lack the topographic variation that would result in atmospheric mixing, lightning is notably less frequent over the world's oceans than over land. The North and South Poles are limited in their coverage of thunderstorms and therefore result in areas with the least lightning. In general, CG lightning flashes account for only 25% of all total lightning flashes worldwide. Since the base of a thunderstorm is usually negatively charged, this is where most CG lightning originates. This region is typically at the elevation where freezing occurs within the cloud. Freezing, combined with collisions between ice and water, appears to be a critical part of the initial charge development and separation process. During wind-driven collisions, ice crystals tend to develop a positive charge, while a heavier, slushy mixture of ice and water (called graupel) develops a negative charge. Updrafts within a storm cloud separate the lighter ice crystals from the heavier graupel, causing the top region of the cloud to accumulate a positive space charge while the lower level accumulates a negative space charge. Because the concentrated charge within the cloud must exceed the insulating properties of air, and this increases proportionally to the distance between the cloud and the ground, the proportion of CG strikes (versus CC or IC discharges) becomes greater when the cloud is closer to the ground. In the tropics, where the freezing level is generally higher in the atmosphere, only 10% of lightning flashes are CG. At the latitude of Norway (around 60° North latitude), where the freezing elevation is lower, 50% of lightning is CG. Lightning is usually produced by cumulonimbus clouds, which have bases that are typically 1–2 km (0.6–1.2 mi) above the ground and tops up to 15 km (9.3 mi) in height. The place on Earth where lightning occurs most often is over Lake Maracaibo, where the Catatumbo lightning phenomenon produces 250 bolts of lightning a day. This activity occurs on average 297 days a year. The area with the second-highest lightning density is near the village of Kifuka in the mountains of the eastern Democratic Republic of the Congo, where the elevation is around 975 m (3,200 ft). On average, this region receives about 158 lightning strikes per square kilometre per year. Other lightning hotspots include Singapore and Lightning Alley in Central Florida.
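The global figures quoted above can be cross-checked with a line of arithmetic; this is a small sketch, not an independent measurement:

# Quick arithmetic check: roughly 44 flashes per second corresponds
# to about 1.4 billion flashes per year, with ~70% over land.

flashes_per_second = 44
seconds_per_year = 365.25 * 24 * 3600
total = flashes_per_second * seconds_per_year
print(f"{total:.2e} flashes per year")   # ~1.39e9, i.e. nearly 1.4 billion
print(f"over land: {0.70 * total:.2e}")  # ~70% occur over land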
According to the World Meteorological Organization, on April 29, 2020, a bolt 768 km (477.2 mi) long was observed in the southern U.S.—sixty km (37 mi) longer than the previous distance record (southern Brazil, October 31, 2018). A single flash in Uruguay and northern Argentina on June 18, 2020, lasted for 17.1 seconds—0.37 seconds longer than the previous record (March 4, 2019, also in northern Argentina). Researchers at the University of Florida found that the final one-dimensional speeds of 10 flashes observed were between 1.0 × 10^5 and 1.4 × 10^6 m/s, with an average of 4.4 × 10^5 m/s. Effects A lightning strike can unleash a variety of effects, some temporary, including very brief emission of light, sound and electromagnetic radiation, and some long-lasting, such as death, damage, and atmospheric and environmental changes. Injury, damage and destruction The immense amount of energy transferred in a lightning strike can have potentially devastating effects in a multitude of areas. To nature Objects struck by lightning experience heat and magnetic forces of great magnitude. Consequently: The heat created by lightning currents travelling through a tree may vaporize its sap, causing a steam explosion that rips off bark or even bursts the trunk. Similarly, water in a fractured rock may be rapidly heated such that it splits further apart. A struck tree may catch fire, or a forest fire may be started.
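As a rough cross-check of the return-stroke figures given earlier (propagation at about one third of the speed of light, discharge within a few dozen microseconds), the following sketch converts that speed into a traversal time for an assumed 5 km channel:

# Rough cross-check of the return-stroke numbers quoted above:
# one third of the speed of light is about 1.0e8 m/s (~100,000 km/s),
# so a 5 km channel is traversed in a few tens of microseconds,
# consistent with the discharge time scale given in the text.

c = 299_792_458.0          # speed of light, m/s
v_return = c / 3.0         # quoted return-stroke propagation speed
channel_length = 5_000.0   # assumed channel length, metres

print(f"speed: {v_return / 1000:.0f} km/s")
print(f"traversal time: {channel_length / v_return * 1e6:.0f} microseconds")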
Commutative ring
In mathematics, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of ring properties that are not specific to commutative rings. This distinction results from the high number of fundamental properties of commutative rings that do not extend to noncommutative rings. Definition and first examples Definition A ring is a set R equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called addition and multiplication and commonly denoted by "+" and "·"; e.g. a + b and a · b. To form a ring these two operations have to satisfy a number of properties: the ring has to be an abelian group under addition as well as a monoid under multiplication, where multiplication distributes over addition; i.e., a · (b + c) = (a · b) + (a · c). The identity elements for addition and multiplication are denoted 0 and 1, respectively. If the multiplication is commutative, i.e. a · b = b · a, then the ring R is called commutative. In the remainder of this article, all rings will be commutative, unless explicitly stated otherwise. First examples An important example, and in some sense crucial, is the ring of integers Z with the two operations of addition and multiplication. As the multiplication of integers is a commutative operation, this is a commutative ring. It is usually denoted Z as an abbreviation of the German word Zahlen (numbers). A field is a commutative ring where 0 ≠ 1 and every non-zero element a is invertible; i.e., has a multiplicative inverse b such that a · b = 1. Therefore, by definition, any field is a commutative ring. The rational, real and complex numbers form fields. If R is a given commutative ring, then the set of all polynomials in the variable X whose coefficients are in R forms the polynomial ring, denoted R[X]. The same holds true for several variables. If V is some topological space, for example a subset of some R^n, real- or complex-valued continuous functions on V form a commutative ring. The same is true for differentiable or holomorphic functions, when the two concepts are defined, such as for V a complex manifold. Divisibility In contrast to fields, where every nonzero element is multiplicatively invertible, the concept of divisibility for rings is richer. An element a of a ring R is called a unit if it possesses a multiplicative inverse. Another particular type of element is the zero divisors, i.e. an element a such that there exists a non-zero element b of the ring such that ab = 0. If R possesses no non-zero zero divisors, it is called an integral domain (or domain). An element a satisfying a^n = 0 for some positive integer n is called nilpotent. Localizations The localization of a ring is a process in which some elements are rendered invertible, i.e. multiplicative inverses are added to the ring. Concretely, if S is a multiplicatively closed subset of R (i.e. whenever s, t ∈ S then so is st) then the localization of R at S, or ring of fractions with denominators in S, usually denoted S^{-1}R, consists of symbols r/s with r ∈ R and s ∈ S, subject to certain rules that mimic the cancellation familiar from rational numbers. Indeed, in this language Q is the localization of Z at all nonzero integers. This construction works for any integral domain R instead of Z. The localization is then a field, called the quotient field of R. Ideals and modules Many of the following notions also exist for not necessarily commutative rings, but the definitions and properties are usually more complicated.
For example, all ideals in a commutative ring are automatically two-sided, which simplifies the situation considerably. Modules For a ring R, an R-module is like what a vector space is to a field. That is, elements in a module can be added; they can be multiplied by elements of R subject to the same axioms as for a vector space. The study of modules is significantly more involved than the one of vector spaces, since there are modules that do not have any basis, that is, do not contain a spanning set whose elements are linearly independent. A module that has a basis is called a free module, and a submodule of a free module need not be free. A module of finite type is a module that has a finite spanning set. Modules of finite type play a fundamental role in the theory of commutative rings, similar to the role of the finite-dimensional vector spaces in linear algebra. In particular, Noetherian rings (see also below) can be defined as the rings such that every submodule of a module of finite type is also of finite type. Ideals Ideals of a ring R are the submodules of R, i.e., the modules contained in R. In more detail, an ideal I is a non-empty subset of R such that for all r in R and i, j in I, both ri and i + j are in I. For various applications, understanding the ideals of a ring is of particular importance, but often one proceeds by studying modules in general. Any ring has two ideals, namely the zero ideal {0} and R, the whole ring. These two ideals are the only ones precisely if R is a field. Given any subset F = {f_j} of R (where j runs over some index set J), the ideal generated by F is the smallest ideal that contains F. Equivalently, it is given by finite linear combinations r_1 f_1 + r_2 f_2 + ... + r_n f_n. Principal ideal domains If F consists of a single element f, the ideal generated by F consists of the multiples of f, i.e., the elements of the form rf for arbitrary elements r. Such an ideal is called a principal ideal. If every ideal is a principal ideal, R is called a principal ideal ring; two important cases are Z and k[X], the polynomial ring over a field k. These two are in addition domains, so they are called principal ideal domains. Unlike for general rings, for a principal ideal domain, the properties of individual elements are strongly tied to the properties of the ring as a whole. For example, any principal ideal domain is a unique factorization domain (UFD), which means that any element is a product of irreducible elements, in a (up to reordering of factors) unique way. Here, an element a in a domain is called irreducible if the only way of expressing it as a product a = bc is by either b or c being a unit. An example, important in field theory, are irreducible polynomials, i.e., irreducible elements in k[X], for a field k. The fact that Z is a UFD can be stated more elementarily by saying that any natural number can be uniquely decomposed as a product of powers of prime numbers. It is also known as the fundamental theorem of arithmetic. An element p is a prime element if whenever p divides a product ab, p divides a or b. In a domain, being prime implies being irreducible. The converse is true in a unique factorization domain, but false in general. Factor ring The definition of ideals is such that "dividing out" I gives another ring, the factor ring R/I: it is the set of cosets of I together with the operations (a + I) + (b + I) = (a + b) + I and (a + I)(b + I) = ab + I. For example, the ring Z/nZ (also denoted Z_n), where n is an integer, is the ring of integers modulo n. It is the basis of modular arithmetic. An ideal is proper if it is strictly smaller than the whole ring. An ideal that is not strictly contained in any proper ideal is called maximal. An ideal m is maximal if and only if R/m is a field. Except for the zero ring, any ring (with identity) possesses at least one maximal ideal; this follows from Zorn's lemma.
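To make the factor ring Z/nZ concrete, the following small sketch computes its units and zero divisors by brute force; it illustrates that Z/5Z is a field (so (5) is a maximal ideal of Z), while Z/6Z has zero divisors (so Z/6Z is not an integral domain and (6) is not prime):

# Small sketch of the factor ring Z/nZ: elements are residues 0..n-1,
# with addition and multiplication performed modulo n.
from math import gcd

def units(n):
    """Elements with a multiplicative inverse mod n (those coprime to n)."""
    return [a for a in range(1, n) if gcd(a, n) == 1]

def zero_divisors(n):
    """Nonzero elements a with a*b = 0 mod n for some nonzero b."""
    return [a for a in range(1, n)
            if any((a * b) % n == 0 for b in range(1, n))]

for n in (5, 6):
    print(f"Z/{n}Z: units = {units(n)}, zero divisors = {zero_divisors(n)}")
# Z/5Z: every nonzero element is a unit, so Z/5Z is a field.
# Z/6Z: 2 * 3 = 0 mod 6, so 2 and 3 are zero divisors.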
Noetherian rings A ring is called Noetherian (in honor of Emmy Noether, who developed this concept) if every ascending chain of ideals I_0 ⊆ I_1 ⊆ I_2 ⊆ ... becomes stationary, i.e. becomes constant beyond some index n. Equivalently, any ideal is generated by finitely many elements, or, yet equivalently, submodules of finitely generated modules are finitely generated. Being Noetherian is a highly important finiteness condition, and the condition is preserved under many operations that occur frequently in geometry. For example, if R is Noetherian, then so is the polynomial ring R[X_1, ..., X_n] (by Hilbert's basis theorem), any localization S^{-1}R, and also any factor ring R/I. Any non-Noetherian ring R is the union of its Noetherian subrings. This fact, known as Noetherian approximation, allows the extension of certain theorems to non-Noetherian rings. Artinian rings A ring is called Artinian (after Emil Artin) if every descending chain of ideals I_0 ⊇ I_1 ⊇ I_2 ⊇ ... becomes stationary eventually. Despite the two conditions appearing symmetric, Noetherian rings are much more general than Artinian rings. For example, Z is Noetherian, since every ideal can be generated by one element, but is not Artinian, as the chain Z ⊋ 2Z ⊋ 4Z ⊋ 8Z ⊋ ... shows. In fact, by the Hopkins–Levitzki theorem, every Artinian ring is Noetherian. More precisely, Artinian rings can be characterized as the Noetherian rings whose Krull dimension is zero. Spectrum of a commutative ring Prime ideals As was mentioned above, Z is a unique factorization domain. This is not true for more general rings, as algebraists realized in the 19th century. For example, in Z[√−5] there are two genuinely distinct ways of writing 6 as a product: 6 = 2 · 3 = (1 + √−5)(1 − √−5). Prime ideals, as opposed to prime elements, provide a way to circumvent this problem. A prime ideal is a proper (i.e., strictly contained in R) ideal p such that, whenever the product ab of any two ring elements a and b is in p, at least one of the two elements is already in p. (The opposite conclusion holds for any ideal, by definition.) Thus, if a prime ideal is principal, it is equivalently generated by a prime element. However, in rings such as Z[√−5], prime ideals need not be principal. This limits the usage of prime elements in ring theory. A cornerstone of algebraic number theory is, however, the fact that in any Dedekind ring (which includes Z[√−5] and more generally the ring of integers in a number field) any ideal (such as the one generated by 6) decomposes uniquely as a product of prime ideals. Any maximal ideal is a prime ideal or, more briefly, is prime. Moreover, an ideal p is prime if and only if the factor ring R/p is an integral domain. Proving that an ideal is prime, or equivalently that a ring has no zero-divisors, can be very difficult. Yet another way of expressing the same is to say that the complement R∖p is multiplicatively closed. The localisation of R at R∖p is important enough to have its own notation: R_p. This ring has only one maximal ideal, namely pR_p. Such rings are called local.
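As a concrete check of the non-unique factorization in Z[√−5] discussed above, the sketch below represents a + b√−5 as a pair (a, b) and verifies both factorizations of 6; the multiplicative norm N(a + b√−5) = a² + 5b² shows that none of the factors is a unit:

# Numerical check of the two factorizations of 6 in Z[sqrt(-5)].
# Elements a + b*sqrt(-5) are stored as pairs (a, b); multiplication
# uses (sqrt(-5))^2 = -5.

def mul(x, y):
    (a, b), (c, d) = x, y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return a * a + 5 * b * b

print(mul((2, 0), (3, 0)))    # (6, 0): 6 = 2 * 3
print(mul((1, 1), (1, -1)))   # (6, 0): 6 = (1 + sqrt(-5))(1 - sqrt(-5))
# Norms 4, 9, 6, 6: none equals 1, so none of the four factors is a
# unit, and since no element has norm 2 or 3, each factor is irreducible.
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])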
Spectrum The spectrum of a ring R, denoted by Spec R, is the set of all prime ideals of R. It is equipped with a topology, the Zariski topology, which reflects the algebraic properties of R: a basis of open subsets is given by D(f) = {p ∈ Spec R : f ∉ p}, where f is any ring element. Interpreting f as a function that takes the value f mod p (i.e., the image of f in the residue field R/p), this subset is the locus where f is non-zero. The spectrum also makes precise the intuition that localisation and factor rings are complementary: the natural maps R → R_f and R → R/fR correspond, after endowing the spectra of the rings in question with their Zariski topology, to complementary open and closed immersions respectively. Even for basic rings, such as R = Z, the Zariski topology is quite different from the one on the set of real numbers. The spectrum contains the set of maximal ideals, which is occasionally denoted mSpec(R). For an algebraically closed field k, mSpec(k[T_1, ..., T_n] / (f_1, ..., f_m)) is in bijection with the set {x = (x_1, ..., x_n) ∈ k^n : f_1(x) = ... = f_m(x) = 0}. Thus, maximal ideals reflect the geometric properties of solution sets of polynomials, which is an initial motivation for the study of commutative rings. However, the consideration of non-maximal ideals as part of the geometric properties of a ring is useful for several reasons. For example, the minimal prime ideals (i.e., the ones not strictly containing smaller ones) correspond to the irreducible components of Spec R. For a Noetherian ring R, Spec R has only finitely many irreducible components. This is a geometric restatement of primary decomposition, according to which any ideal can be decomposed as a product of finitely many primary ideals. This fact is the ultimate generalization of the decomposition into prime ideals in Dedekind rings. Affine schemes The notion of a spectrum is the common basis of commutative algebra and algebraic geometry. Algebraic geometry proceeds by endowing Spec R with a sheaf O (an entity that collects functions defined locally, i.e. on varying open subsets). The datum of the space and the sheaf is called an affine scheme. Given an affine scheme, the underlying ring R can be recovered as the global sections of O. Moreover, this one-to-one correspondence between rings and affine schemes is also compatible with ring homomorphisms: any f : R → S gives rise to a continuous map in the opposite direction, Spec S → Spec R, q ↦ f^{-1}(q). The resulting equivalence of the two said categories aptly reflects algebraic properties of rings in a geometrical manner. Similar to the fact that manifolds are locally given by open subsets of R^n, affine schemes are local models for schemes, which are the object of study in algebraic geometry. Therefore, several notions concerning commutative rings stem from geometric intuition. Dimension The Krull dimension (or dimension) dim R of a ring R measures the "size" of a ring by, roughly speaking, counting independent elements in R. The dimension of algebras over a field k can be axiomatized by four properties: The dimension is a local property: dim R = sup_p dim R_p. The dimension is independent of nilpotent elements: if I ⊆ R is nilpotent then dim R = dim R/I. The dimension remains constant under a finite extension: if S is an R-algebra which is finitely generated as an R-module, then dim S = dim R. The dimension is calibrated by dim k[T_1, ..., T_n] = n. This axiom is motivated by regarding the polynomial ring in n variables as an algebraic analogue of n-dimensional space. The dimension is defined, for any ring R, as the supremum of lengths n of chains of prime ideals p_0 ⊊ p_1 ⊊ ... ⊊ p_n. For example, a field is zero-dimensional, since the only prime ideal is the zero ideal. The integers are one-dimensional, since chains are of the form (0) ⊊ (p), where p is a prime number. For non-Noetherian rings, and also non-local rings, the dimension may be infinite, but Noetherian local rings have finite dimension.
Among the four axioms above, the first two are elementary consequences of the definition, whereas the remaining two hinge on important facts in commutative algebra, the going-up theorem and Krull's principal ideal theorem. Ring homomorphisms A ring homomorphism or, more colloquially, simply a map, is a map f : R → S such that f(a + b) = f(a) + f(b), f(ab) = f(a)f(b) and f(1) = 1. These conditions ensure f(0) = 0. Similarly as for other algebraic structures, a ring homomorphism is thus a map that is compatible with the structure of the algebraic objects in question. In such a situation S is also called an R-algebra, by understanding that s in S may be multiplied by some r of R, by setting r · s := f(r) · s. The kernel and image of f are defined by ker(f) = {r ∈ R : f(r) = 0} and im(f) = f(R). The kernel is an ideal of R, and the image is a subring of S. A ring homomorphism is called an isomorphism if it is bijective. An example of a ring isomorphism, known as the Chinese remainder theorem, is Z/nZ ≅ Z/p_1Z × Z/p_2Z × ... × Z/p_kZ, where n = p_1 p_2 ... p_k is a product of pairwise distinct prime numbers. Commutative rings, together with ring homomorphisms, form a category. The ring Z is the initial object in this category, which means that for any commutative ring R, there is a unique ring homomorphism Z → R. By means of this map, an integer n can be regarded as an element of R. For example, the binomial formula (a + b)^n = Σ_{k=0}^{n} (n choose k) a^k b^{n−k}, which is valid for any two elements a and b in any commutative ring R, is understood in this sense by interpreting the binomial coefficients as elements of R using this map. Given two R-algebras S and T, their tensor product S ⊗_R T is again a commutative R-algebra. In some cases, the tensor product can serve to find a T-algebra which relates to T as S relates to R. For example, R[X] ⊗_R T = T[X]. Finite generation An R-algebra S is called finitely generated (as an algebra) if there are finitely many elements s_1, ..., s_n such that any element of S is expressible as a polynomial in the s_i. Equivalently, S is isomorphic to a factor ring R[T_1, ..., T_n] / I of a polynomial ring. A much stronger condition is that S is finitely generated as an R-module, which means that any s can be expressed as an R-linear combination of some finite set s_1, ..., s_n. Local rings A ring is called local if it has only a single maximal ideal, denoted by m. For any (not necessarily local) ring R, the localization R_p at a prime ideal p is local. This localization reflects the geometric properties of Spec R "around p". Several notions and problems in commutative algebra can be reduced to the case when R is local, making local rings a particularly deeply studied class of rings. The residue field of R is defined as k = R/m. Any R-module M yields a k-vector space given by M/mM. Nakayama's lemma shows this passage is preserving important information: a finitely generated module M is zero if and only if M/mM is zero. Regular local rings The k-vector space m/m² is an algebraic incarnation of the cotangent space. Informally, the elements of m can be thought of as functions which vanish at the point p, whereas m² contains the ones which vanish with order at least 2. For any Noetherian local ring R, the inequality dim_k m/m² ≥ dim R holds true, reflecting the idea that the cotangent (or equivalently the tangent) space has at least the dimension of the space Spec R. If equality holds true in this estimate, R is called a regular local ring. A Noetherian local ring is regular if and only if the associated graded ring ⊕_n m^n/m^{n+1} (which is the ring of functions on the tangent cone) is isomorphic to a polynomial ring over k. Broadly speaking, regular local rings are somewhat similar to polynomial rings. Regular local rings are UFD's. Discrete valuation rings are equipped with a function which assigns an integer to any element r.
This number, called the valuation of r, can be informally thought of as a zero or pole order of r. Discrete valuation rings are precisely the one-dimensional regular local rings. For example, the ring of germs of holomorphic functions on a Riemann surface is a discrete valuation ring. Complete intersections By Krull's principal ideal theorem, a foundational result in the dimension theory of rings, the dimension of R = k[T_1, ..., T_r] / (f_1, ..., f_n) is at least r − n. A ring R is called a complete intersection ring if it can be presented in a way that attains this minimal bound. This notion is also mostly studied for local rings. Any regular local ring is a complete intersection ring, but not conversely. A ring R is a set-theoretic complete intersection if the reduced ring associated to R, i.e., the one obtained by dividing out all nilpotent elements, is a complete intersection. As of 2017, it is in general unknown whether curves in three-dimensional space are set-theoretic complete intersections. Cohen–Macaulay rings The depth of a local ring R is the number of elements in some (or, as can be shown, any) maximal regular sequence, i.e., a sequence a_1, ..., a_n ∈ m such that all a_i are non-zero divisors in R/(a_1, ..., a_{i−1}). For any local Noetherian ring, the inequality depth(R) ≤ dim(R) holds. A local ring in which equality takes place is called a Cohen–Macaulay ring. Local complete intersection rings, and a fortiori, regular local rings are Cohen–Macaulay, but not conversely. Cohen–Macaulay rings combine desirable properties of regular rings (such as the property of being universally catenary rings, which means that the (co)dimension of primes is well-behaved), but are also more robust under taking quotients than regular local rings. Constructing commutative rings There are several ways to construct new rings out of given ones. The aim of such constructions is often to improve certain properties of the ring so as to make it more readily understandable. For example, an integral domain that is integrally closed in its field of fractions is called normal. This is a desirable property, for example any normal one-dimensional ring is necessarily regular. Rendering a ring normal is known as normalization. Completions If I is an ideal in a commutative ring R, the powers of I form topological neighborhoods of 0 which allow R to be viewed as a topological ring. This topology is called the I-adic topology. R can then be completed with respect to this topology. Formally, the I-adic completion is the inverse limit of the rings R/I^n. For example, if k is a field, k[[X]], the formal power series ring in one variable over k, is the I-adic completion of k[X] where I is the principal ideal generated by X. This ring serves as an algebraic analogue of the disk. Analogously, the ring of p-adic integers is the completion of Z with respect to the principal ideal (p). Any ring that is isomorphic to its own completion is called complete. Complete local rings satisfy Hensel's lemma, which roughly speaking allows extending solutions (of various problems) over the residue field k to R. Homological notions Several deeper aspects of commutative rings have been studied using methods from homological algebra. Hochster (2007) lists some open questions in this area of active research. Projective modules and Ext functors Projective modules can be defined to be the direct summands of free modules. If R is local, any finitely generated projective module is actually free, which gives content to an analogy between projective modules and vector bundles.
The Quillen–Suslin theorem asserts that any finitely generated projective module over k[T_1, ..., T_n] (k a field) is free, but in general these two concepts differ. A local Noetherian ring is regular if and only if its global dimension is finite, say n, which means that any finitely generated R-module has a resolution by projective modules of length at most n. The proof of this and other related statements relies on the usage of homological methods, such as the Ext functor. This functor is the derived functor of the functor Hom_R(M, −). The latter functor is exact if M is projective, but not otherwise: for a surjective map E → F of R-modules, a map M → F need not extend to a map M → E. The higher Ext functors measure the non-exactness of the Hom-functor. The importance of this standard construction in homological algebra can be seen from the fact that a local Noetherian ring R with residue field k is regular if and only if Ext_R^n(k, k) vanishes for all large enough n. Moreover, the dimensions of these Ext-groups, known as Betti numbers, grow polynomially in n if and only if R is a local complete intersection ring. A key argument in such considerations is the Koszul complex, which provides an explicit free resolution of the residue field k of a local ring R in terms of a regular sequence. Flatness The tensor product is another non-exact functor relevant in the context of commutative rings: for a general R-module M, the functor M ⊗_R − is only right exact. If it is exact, M is called flat. If R is local, any finitely presented flat module is free of finite rank, thus projective. Despite being defined in terms of homological algebra, flatness has profound geometric implications. For example, if an R-algebra S is flat, the dimensions of the fibers S/pS (for prime ideals p in R) have the "expected" dimension. Properties By Wedderburn's theorem, every finite division ring is commutative, and therefore a finite field. Another condition ensuring commutativity of a ring, due to Jacobson, is the following: for every element r of R there exists an integer n > 1 such that r^n = r. If r² = r for every r, the ring is called a Boolean ring. More general conditions which guarantee commutativity of a ring are also known. Generalizations Graded-commutative rings A graded ring R = ⊕ R_i is called graded-commutative if, for all homogeneous elements a and b, ab = (−1)^{deg a · deg b} ba. If the R_i are connected by differentials ∂ such that an abstract form of the product rule holds, i.e., ∂(ab) = ∂(a)b + (−1)^{deg a} a∂(b), R is called a commutative differential graded algebra (cdga). An example is the complex of differential forms on a manifold, with the multiplication given by the exterior product; it is a cdga. The cohomology of a cdga is a graded-commutative ring, sometimes referred to as the cohomology ring. A broad range of examples of graded rings arises in this way. For example, the Lazard ring is the ring of cobordism classes of complex manifolds. A graded-commutative ring with respect to a grading by Z/2 (as opposed to Z) is called a superalgebra. A related notion is an almost commutative ring, which means that R is filtered in such a way that the associated graded ring is commutative. An example is the Weyl algebra and more general rings of differential operators. Simplicial commutative rings A simplicial commutative ring is a simplicial object in the category of commutative rings. They are building blocks for (connective) derived algebraic geometry. A closely related but more general notion is that of an E∞-ring.
Applications of commutative rings Commutative rings and their modules appear throughout mathematics; notable applications include holomorphic functions, algebraic K-theory, topological K-theory, divided power structures, Witt vectors, Hecke algebras (used in Wiles's proof of Fermat's Last Theorem), Fontaine's period rings, cluster algebras, convolution algebras (of a commutative group), and Fréchet algebras.
Deoxyribose
Deoxyribose, or more precisely 2-deoxyribose, is a monosaccharide with idealized formula H−(C=O)−(CH2)−(CHOH)3−H. Its name indicates that it is a deoxy sugar, meaning that it is derived from the sugar ribose by loss of a hydroxy group. Discovered in 1929 by Phoebus Levene, deoxyribose is most notable for its presence in DNA. Since the pentose sugars arabinose and ribose only differ by the stereochemistry at C2′, 2-deoxyribose and 2-deoxyarabinose are equivalent, although the latter term is rarely used because ribose, not arabinose, is the precursor to deoxyribose. Structure Several isomers exist with the formula H−(C=O)−(CH2)−(CHOH)3−H, but in deoxyribose all the hydroxyl groups are on the same side in the Fischer projection. The term "2-deoxyribose" may refer to either of two enantiomers: the biologically important -2-deoxyribose and to the rarely encountered mirror image -2-deoxyribose. -2-deoxyribose is a precursor to the nucleic acid DNA. 2-deoxyribose is an aldopentose, that is, a monosaccharide with five carbon atoms and having an aldehyde functional group. In aqueous solution, deoxyribose primarily exists as a mixture of three structures: the linear form H−(C=O)−(CH2)−(CHOH)3−H and two ring forms, deoxyribofuranose ("C3′-endo"), with a five-membered ring, and deoxyribopyranose ("C2′-endo"), with a six-membered ring. The latter form is predominant (whereas the C3′-endo form is favored for ribose). Biological importance As a component of DNA, 2-deoxyribose derivatives have an important role in biology. The DNA (deoxyribonucleic acid) molecule, which is the main repository of genetic information in life, consists of a long chain of deoxyribose-containing units called nucleotides, linked via phosphate groups. In the standard nucleic acid nomenclature, a DNA nucleotide consists of a deoxyribose molecule with an organic base (usually adenine, thymine, guanine or cytosine) attached to the 1′ ribose carbon. The 5′ hydroxyl of each deoxyribose unit is replaced by a phosphate (forming a nucleotide) that is attached to the 3′ carbon of the deoxyribose in the preceding unit. The absence of the 2′ hydroxyl group in deoxyribose is apparently responsible for the increased mechanical flexibility of DNA compared to RNA, which allows it to assume the double-helix conformation, and also (in the eukaryotes) to be compactly coiled within the small cell nucleus. The double-stranded DNA molecules are also typically much longer than RNA molecules. The backbone of RNA and DNA are structurally similar, but RNA is single stranded, and made from ribose as opposed to deoxyribose. Other biologically important derivatives of deoxyribose include mono-, di-, and triphosphates, as well as 3′-5′ cyclic monophosphates. Biosynthesis Deoxyribose is generated, within nucleotides, by enzymes called ribonucleotide reductases, which catalyse the deoxygenation of ribonucleotides at the 2′ position. Angiogenesis In one study, deoxyribose was shown to have pro-angiogenic properties when applied topically in a gel to wounds in rats. This topical gel also increased Vascular Endothelial Growth Factor (VEGF), which has been implicated in hair growth. This could potentially lead to future products to treat hair loss in humans.
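Counting atoms in the idealized formula above gives the molecular formula C5H10O4; the following small sketch, using standard atomic masses, checks the implied molar mass:

# Sanity check of the molecular formula implied by the idealized
# structure H-(C=O)-(CH2)-(CHOH)3-H: counting atoms gives C5H10O4.

atomic_mass = {"C": 12.011, "H": 1.008, "O": 15.999}
formula = {"C": 5, "H": 10, "O": 4}  # 2-deoxyribose

mw = sum(atomic_mass[el] * n for el, n in formula.items())
print(f"Molar mass of C5H10O4: {mw:.2f} g/mol")  # ~134.13 g/mol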
Radius of convergence
In mathematics, the radius of convergence of a power series is the radius of the largest disk at the center of the series in which the series converges. It is either a non-negative real number or ∞. When it is positive, the power series converges absolutely and uniformly on compact sets inside the open disk of radius equal to the radius of convergence, and it is the Taylor series of the analytic function to which it converges. In case of multiple singularities of a function (singularities are those values of the argument for which the function is not defined), the radius of convergence is the shortest or minimum of all the respective distances (which are all non-negative numbers) calculated from the center of the disk of convergence to the respective singularities of the function. Definition For a power series f defined as f(z) = Σ_{n=0}^∞ c_n (z − a)^n, where a is a complex constant, the center of the disk of convergence, c_n is the n-th complex coefficient, and z is a complex variable, the radius of convergence r is a nonnegative real number or ∞ such that the series converges if |z − a| < r and diverges if |z − a| > r. Some may prefer an alternative definition, as existence is obvious: r = sup {|z − a| : Σ c_n (z − a)^n converges}. On the boundary, that is, where |z − a| = r, the behavior of the power series may be complicated, and the series may converge for some values of z and diverge for others. The radius of convergence is infinite if the series converges for all complex numbers z. Finding the radius of convergence Two cases arise: The first case is theoretical: when you know all the coefficients then you take certain limits and find the precise radius of convergence. The second case is practical: when you construct a power series solution of a difficult problem you typically will only know a finite number of terms in a power series, anywhere from a couple of terms to a hundred terms. In this second case, extrapolating a plot estimates the radius of convergence. Theoretical radius The radius of convergence can be found by applying the root test to the terms of the series. The root test uses the number C = lim sup_{n→∞} |c_n (z − a)^n|^{1/n} = |z − a| lim sup_{n→∞} |c_n|^{1/n}, where "lim sup" denotes the limit superior. The root test states that the series converges if C < 1 and diverges if C > 1. It follows that the power series converges if the distance from z to the center a is less than r = 1 / (lim sup_{n→∞} |c_n|^{1/n}) and diverges if the distance exceeds that number; this statement is the Cauchy–Hadamard theorem. Note that r = 1/0 is interpreted as an infinite radius, meaning that f is an entire function. The limit involved in the ratio test is usually easier to compute, and when that limit exists, it shows that the radius of convergence is finite. This is shown as follows. The ratio test says the series converges if lim_{n→∞} |c_{n+1} (z − a)^{n+1}| / |c_n (z − a)^n| < 1. That is equivalent to |z − a| < lim_{n→∞} |c_n| / |c_{n+1}| = r. Practical estimation of radius in the case of real coefficients Usually, in scientific applications, only a finite number of coefficients c_n are known. Typically, as n increases, these coefficients settle into a regular behavior determined by the nearest radius-limiting singularity. In this case, two main techniques have been developed, based on the fact that the coefficients of a Taylor series are roughly exponential with ratio 1/r, where r is the radius of convergence. The basic case is when the coefficients ultimately share a common sign or alternate in sign. As pointed out earlier in the article, in many cases the limit lim_{n→∞} c_n / c_{n−1} exists, and in this case 1/r = lim_{n→∞} c_n / c_{n−1}. A negative value of this limit means the convergence-limiting singularity is on the negative axis. Estimate this limit by plotting c_n / c_{n−1} versus 1/n, and graphically extrapolate to 1/n = 0 (effectively n = ∞) via a linear fit. The intercept with 1/n = 0 estimates the reciprocal of the radius of convergence, 1/r. This plot is called a Domb–Sykes plot.
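A minimal sketch of such an estimate, using the series 1/(1 − x)^2 = Σ (n + 1) x^n as a test case; its radius of convergence is exactly 1, and here the extrapolation is exact since c_n / c_{n−1} = 1 + 1/n is linear in 1/n:

# Domb-Sykes estimate of the radius of convergence from finitely
# many Taylor coefficients: fit c_n / c_(n-1) against 1/n and read
# off the intercept at 1/n = 0, which approximates 1/r.
import numpy as np

c = np.arange(1.0, 22.0)             # c_n = n + 1 for n = 0..20
ratios = c[1:] / c[:-1]              # c_n / c_(n-1) = 1 + 1/n
inv_n = 1.0 / np.arange(1, 21)

slope, intercept = np.polyfit(inv_n, ratios, 1)
print(f"intercept (= 1/r): {intercept:.4f}")     # -> 1.0
print(f"estimated radius:  {1.0 / intercept:.4f}")  # -> 1.0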
The more complicated case is when the signs of the coefficients have a more complex pattern. Mercer and Roberts proposed the following procedure. Define the associated sequence b_n² = (c_{n+1} c_{n−1} − c_n²) / (c_n c_{n−2} − c_{n−1}²) for n = 3, 4, 5, .... Plot the finitely many known b_n versus 1/n, and graphically extrapolate to 1/n = 0 via a linear fit. The intercept with 1/n = 0 estimates the reciprocal of the radius of convergence, 1/r. This procedure also estimates two other characteristics of the convergence-limiting singularity. Suppose the nearest singularity is of degree p and has angle θ to the real axis. Then the slope of the linear fit given above determines p, and a second linear fit of an analogous auxiliary sequence against 1/n has an intercept that determines θ. Radius of convergence in complex analysis A power series with a positive radius of convergence can be made into a holomorphic function by taking its argument to be a complex variable. The radius of convergence can be characterized by the following theorem: The radius of convergence of a power series f centered on a point a is equal to the distance from a to the nearest point where f cannot be defined in a way that makes it holomorphic. The set of all points whose distance to a is strictly less than the radius of convergence is called the disk of convergence. The nearest point means the nearest point in the complex plane, not necessarily on the real line, even if the center and all coefficients are real. For example, the function f(z) = 1/(1 + z²) has no singularities on the real line, since 1 + z² has no real roots. Its Taylor series about 0 is given by Σ_{n=0}^∞ (−1)^n z^{2n}. The root test shows that its radius of convergence is 1. In accordance with this, the function f(z) has singularities at ±i, which are at a distance 1 from 0. For a proof of this theorem, see analyticity of holomorphic functions. A simple example The arctangent function of trigonometry can be expanded in a power series: arctan(z) = z − z³/3 + z⁵/5 − z⁷/7 + ⋯. It is easy to apply the root test in this case to find that the radius of convergence is 1. A more complicated example Consider this power series: z / (e^z − 1) = Σ_{n=0}^∞ (B_n / n!) z^n, where the rational numbers B_n are the Bernoulli numbers. It may be cumbersome to try to apply the ratio test to find the radius of convergence of this series. But the theorem of complex analysis stated above quickly solves the problem. At z = 0, there is in effect no singularity since the singularity is removable. The only non-removable singularities are therefore located at the other points where the denominator is zero. We solve e^z − 1 = 0 by recalling that if z = x + iy and e^z = 1, then e^z = e^x e^{iy}, and then take x and y to be real. Since y is real, the absolute value of e^{iy} is necessarily 1. Therefore, the absolute value of e^z can be 1 only if e^x is 1; since x is real, that happens only if x = 0. Therefore z is purely imaginary and e^{iy} = 1. Since y is real, that happens only if cos(y) = 1 and sin(y) = 0, so that y is an integer multiple of 2π. Consequently the singular points of this function occur at z = a nonzero integer multiple of 2πi. The singularities nearest 0, which is the center of the power series expansion, are at ±2πi. The distance from the center to either of those points is 2π, so the radius of convergence is 2π.
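This conclusion can be checked numerically with the root test: by the Cauchy–Hadamard theorem, |B_n / n!|^(−1/n) should approach 2π ≈ 6.2832. A small sketch using mpmath's Bernoulli numbers (even indices only, since the odd Bernoulli numbers beyond B_1 vanish):

# Root-test check that z/(e^z - 1) = sum B_n z^n / n! has radius 2*pi.
from mpmath import bernoulli, factorial, mp

mp.dps = 30
for n in (10, 50, 100, 200):
    coeff = abs(bernoulli(n)) / factorial(n)
    print(n, float(coeff ** (-1.0 / n)))   # tends to 2*pi ~ 6.2832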
Convergence on the boundary If the power series is expanded around the point a and the radius of convergence is r, then the set of all points z such that |z − a| = r is a circle called the boundary of the disk of convergence. A power series may diverge at every point on the boundary, or diverge on some points and converge at other points, or converge at all the points on the boundary. Furthermore, even if the series converges everywhere on the boundary (even uniformly), it does not necessarily converge absolutely. Example 1: The power series for the function f(z) = 1/(1 − z), expanded around z = 0, which is simply Σ_{n=0}^∞ z^n, has radius of convergence 1 and diverges at every point on the boundary. Example 2: The power series for g(z) = −ln(1 − z), expanded around z = 0, which is Σ_{n=1}^∞ z^n / n, has radius of convergence 1, and diverges for z = 1 but converges for all other points on the boundary. The function f(z) of Example 1 is the derivative of g(z). Example 3: The power series Σ_{n=1}^∞ z^n / n² has radius of convergence 1 and converges everywhere on the boundary absolutely. If h(z) is the function represented by this series on the unit disk, then the derivative of h(z) is equal to g(z)/z with g of Example 2. It turns out that h(z) is the dilogarithm function. Example 4: There are power series, a classical example being due to Sierpiński, that have radius of convergence 1 and converge uniformly on the entire boundary |z| = 1, but do not converge absolutely on the boundary. Rate of convergence If we expand the function sin x = x − x³/3! + x⁵/5! − ⋯ around the point x = 0, we find out that the radius of convergence of this series is ∞, meaning that this series converges for all complex numbers. However, in applications, one is often interested in the precision of a numerical answer. Both the number of terms and the value at which the series is to be evaluated affect the accuracy of the answer. For example, if we want to calculate sin(0.1) accurate up to five decimal places, we only need the first two terms of the series. However, if we want the same precision for x = 1 we must evaluate and sum the first five terms of the series. For x = 10, one requires the first 18 terms of the series, and for x = 100 we need to evaluate the first 141 terms. So for these particular values the fastest convergence of a power series expansion is at the center, and as one moves away from the center of convergence, the rate of convergence slows down until you reach the boundary (if it exists) and cross over, in which case the series will diverge. Abscissa of convergence of a Dirichlet series An analogous concept is the abscissa of convergence of a Dirichlet series Σ_{n=1}^∞ a_n / n^s. Such a series converges if the real part of s is greater than a particular number depending on the coefficients a_n: the abscissa of convergence.
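The term counts in the rate-of-convergence discussion above can be reproduced numerically; the exact counts depend on how the error threshold is defined, so the sketch below, which stops once the truncation error falls under 0.5 × 10⁻⁵, should be read as approximate. Arbitrary precision (mpmath) is used because for x = 100 the intermediate partial sums reach about 10⁴¹ and float64 cancellation would never meet the tolerance:

# Count Taylor terms of sin x needed for roughly five-decimal accuracy.
from mpmath import mp, mpf, sin

mp.dps = 200  # enough precision to survive the cancellation at x = 100

def terms_needed(x, tol=mpf("0.5e-5")):
    x = mpf(x)
    term, total, k = x, x, 1
    target = sin(x)
    while abs(total - target) >= tol:
        term *= -x * x / ((2 * k) * (2 * k + 1))  # next odd-power term
        total += term
        k += 1
    return k

for x in (0.1, 1, 10, 100):
    print(f"x = {x}: {terms_needed(x)} terms")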
Analytic function
In mathematics, an analytic function is a function that is locally given by a convergent power series. There exist both real analytic functions and complex analytic functions. Functions of each type are infinitely differentiable, but complex analytic functions exhibit properties that do not generally hold for real analytic functions. A function is analytic if and only if for every x_0 in its domain, its Taylor series about x_0 converges to the function in some neighborhood of x_0. This is stronger than merely being infinitely differentiable at x_0, and therefore having a well-defined Taylor series; the Fabius function provides an example of a function that is infinitely differentiable but not analytic. Definitions Formally, a function f is real analytic on an open set D in the real line if for any x_0 in D one can write f(x) = Σ_{n=0}^∞ a_n (x − x_0)^n, in which the coefficients a_0, a_1, ... are real numbers and the series is convergent to f(x) for x in a neighborhood of x_0. Alternatively, a real analytic function is an infinitely differentiable function such that the Taylor series at any point x_0 in its domain converges to f(x) for x in a neighborhood of x_0 pointwise. The set of all real analytic functions on a given set D is often denoted by C^ω(D), or just by C^ω if the domain is understood. A function f defined on some subset of the real line is said to be real analytic at a point x if there is a neighborhood D of x on which f is real analytic. The definition of a complex analytic function is obtained by replacing, in the definitions above, "real" with "complex" and "real line" with "complex plane". A function is complex analytic if and only if it is holomorphic, i.e. it is complex differentiable. For this reason the terms "holomorphic" and "analytic" are often used interchangeably for such functions. Examples Typical examples of analytic functions are the following elementary functions: All polynomials: if a polynomial has degree n, any terms of degree larger than n in its Taylor series expansion must immediately vanish to 0, and so this series will be trivially convergent. Furthermore, every polynomial is its own Maclaurin series. The exponential function is analytic. Any Taylor series for this function converges not only for x close enough to x_0 (as in the definition) but for all values of x (real or complex). The trigonometric functions, logarithm, and the power functions are analytic on any open set of their domain. Most special functions are also analytic (at least in some range of the complex plane): hypergeometric functions, Bessel functions, and gamma functions. Typical examples of functions that are not analytic are: The absolute value function when defined on the set of real numbers or complex numbers is not everywhere analytic because it is not differentiable at 0. Piecewise defined functions (functions given by different formulae in different regions) are typically not analytic where the pieces meet. The complex conjugate function z → z* is not complex analytic, although its restriction to the real line is the identity function and therefore real analytic, and it is real analytic as a function from R² to R². Other non-analytic smooth functions, and in particular any smooth function f with compact support, i.e. f ∈ C_c^∞(R^n), cannot be analytic on R^n. Alternative characterizations The following conditions are equivalent: f is real analytic on an open set D. There is a complex analytic extension of f to an open set G ⊆ C which contains D.
f is smooth and for every compact set K ⊂ D there exists a constant C such that for every x ∈ K and every non-negative integer k the following bound holds: |f^(k)(x)| ≤ C^{k+1} k!. Complex analytic functions are exactly equivalent to holomorphic functions, and are thus much more easily characterized. For the case of an analytic function with several variables (see below), the real analyticity can be characterized using the Fourier–Bros–Iagolnitzer transform. In the multivariable case, real analytic functions satisfy a direct generalization of the third characterization. Let U ⊂ R^n be an open set, and let f : U → R. Then f is real analytic on U if and only if f ∈ C^∞(U) and for every compact K ⊆ U there exists a constant C such that for every multi-index α the following bound holds: sup_{x ∈ K} |∂^α f(x)| ≤ C^{|α|+1} α!. Properties of analytic functions The sums, products, and compositions of analytic functions are analytic. The reciprocal of an analytic function that is nowhere zero is analytic, as is the inverse of an invertible analytic function whose derivative is nowhere zero.
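A classic concrete witness to the gap between smooth and analytic, in the spirit of the non-analytic smooth functions mentioned above, is f(x) = exp(−1/x²) for x ≠ 0 with f(0) = 0: all of its derivatives vanish at 0, so its Maclaurin series is identically zero and represents f only at the origin. A short symbolic check with sympy (a sketch; the limits stand in for the one-sided derivatives at 0):

# f(x) = exp(-1/x^2) is smooth but not analytic at 0: every derivative
# tends to 0 as x -> 0, yet f is nonzero away from 0.
import sympy as sp

x = sp.symbols("x")
f = sp.exp(-1 / x**2)

for k in range(4):
    print(k, sp.limit(sp.diff(f, x, k), x, 0))  # all print 0

print(f.subs(x, 1))  # exp(-1): f itself is nonzero away from 0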
Mathematics
Functions: General
null
61532
https://en.wikipedia.org/wiki/Absolute%20convergence
Absolute convergence
In mathematics, an infinite series of numbers is said to converge absolutely (or to be absolutely convergent) if the sum of the absolute values of the summands is finite. More precisely, a real or complex series $\sum_{n=0}^{\infty} a_n$ is said to converge absolutely if $\sum_{n=0}^{\infty} |a_n| = L$ for some real number $L$. Similarly, an improper integral of a function, $\int_0^{\infty} f(x)\,dx$, is said to converge absolutely if the integral of the absolute value of the integrand is finite; that is, if $\int_0^{\infty} |f(x)|\,dx = L$ for some real number $L$. A convergent series that is not absolutely convergent is called conditionally convergent. Absolute convergence is important for the study of infinite series, because its definition guarantees that a series will have some "nice" behaviors of finite sums that not all convergent series possess. For instance, rearrangements do not change the value of the sum, which is not necessarily true for conditionally convergent series.
Background
When adding a finite number of terms, addition is both associative and commutative, meaning that grouping and rearrangement do not alter the final sum. For instance, $(1+2)+3$ is equal to both $1+(2+3)$ and $(3+1)+2$. However, associativity and commutativity do not necessarily hold for infinite sums. One example is the alternating harmonic series
$$1 - \frac{1}{2} + \frac{1}{3} - \frac{1}{4} + \frac{1}{5} - \cdots = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n},$$
whose terms are fractions that alternate in sign. This series is convergent and can be evaluated using the Maclaurin series for the function $\ln(1+x)$, which converges for all $x$ satisfying $-1 < x \leq 1$:
$$\ln(1+x) = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n} x^n.$$
Substituting $x = 1$ reveals that the original sum is equal to $\ln 2$. The sum can also be rearranged as follows:
$$\left(1 - \frac{1}{2}\right) - \frac{1}{4} + \left(\frac{1}{3} - \frac{1}{6}\right) - \frac{1}{8} + \left(\frac{1}{5} - \frac{1}{10}\right) - \frac{1}{12} + \cdots$$
In this rearrangement, the reciprocal of each odd number is grouped with the reciprocal of twice its value, while the reciprocals of every multiple of 4 are evaluated separately. However, since $\frac{1}{2k-1} - \frac{1}{2(2k-1)} = \frac{1}{2(2k-1)}$ for each odd number $2k-1$, evaluating the terms inside the parentheses yields
$$\frac{1}{2} - \frac{1}{4} + \frac{1}{6} - \frac{1}{8} + \frac{1}{10} - \frac{1}{12} + \cdots = \frac{1}{2} \ln 2,$$
or half the original series. The violation of the associativity and commutativity of addition reveals that the alternating harmonic series is conditionally convergent. Indeed, the sum of the absolute values of each term is $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \cdots$, the divergent harmonic series. According to the Riemann series theorem, any conditionally convergent series can be permuted so that its sum is any finite real number or so that it diverges. When an absolutely convergent series is rearranged, its sum is always preserved.
Definition for real and complex numbers
A sum of real numbers or complex numbers $\sum_{n=0}^{\infty} a_n$ is absolutely convergent if the sum of the absolute values of the terms, $\sum_{n=0}^{\infty} |a_n|$, converges.
Sums of more general elements
The same definition can be used for series whose terms $a_n$ are not numbers but rather elements of an arbitrary abelian topological group. In that case, instead of using the absolute value, the definition requires the group to have a norm, which is a positive real-valued function $\|\cdot\| : G \to \mathbb{R}_{\geq 0}$ on an abelian group $G$ (written additively, with identity element 0) such that:
The norm of the identity element of $G$ is zero: $\|0\| = 0$.
For every $x \in G$, $\|x\| = 0$ implies $x = 0$.
For every $x \in G$, $\|-x\| = \|x\|$.
For every $x, y \in G$, $\|x+y\| \leq \|x\| + \|y\|$.
In this case, the function $d(x, y) = \|x - y\|$ induces the structure of a metric space (a type of topology) on $G$. Then, a $G$-valued series is absolutely convergent if $\sum_{n=0}^{\infty} \|a_n\| < \infty$. In particular, these statements apply using the norm $|x|$ (absolute value) in the space of real numbers or complex numbers.
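As a concrete instance of the abstract norm definition (an illustration added here, not part of the article's text), take the integers with the usual absolute value; absolute convergence then becomes a very strong condition:

```latex
% G = (\mathbb{Z}, +) with norm \|n\| = |n| satisfies all four axioms,
% and d(m, n) = |m - n| is the induced metric. Since every nonzero
% integer has norm at least 1, a Z-valued series can only converge
% absolutely if almost all of its terms vanish:
\sum_{n=0}^{\infty} \|a_n\| < \infty, \quad a_n \in \mathbb{Z}
\quad\Longrightarrow\quad
a_n = 0 \ \text{for all but finitely many } n.
```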
In topological vector spaces
If $X$ is a topological vector space (TVS) and $(x_{\alpha})_{\alpha \in A}$ is a (possibly uncountable) family in $X$, then this family is absolutely summable if $(x_{\alpha})_{\alpha \in A}$ is summable in $X$ (that is, if the limit of the net $(x_H)_{H \in \mathcal{F}(A)}$ converges in $X$, where $\mathcal{F}(A)$ is the directed set of all finite subsets of $A$ directed by inclusion and $x_H = \sum_{\alpha \in H} x_{\alpha}$), and for every continuous seminorm $p$ on $X$ the family $(p(x_{\alpha}))_{\alpha \in A}$ is summable in $\mathbb{R}$. If $X$ is a normable space and if $(x_{\alpha})_{\alpha \in A}$ is an absolutely summable family in $X$, then necessarily all but a countable collection of the $x_{\alpha}$'s are 0. Absolutely summable families play an important role in the theory of nuclear spaces.
Relation to convergence
If $G$ is complete with respect to the metric $d$, then every absolutely convergent series is convergent. The proof is the same as for complex-valued series: use the completeness to derive the Cauchy criterion for convergence (a series is convergent if and only if its tails can be made arbitrarily small in norm) and apply the triangle inequality. In particular, for series with values in any Banach space, absolute convergence implies convergence. The converse is also true: if absolute convergence implies convergence in a normed space, then the space is a Banach space. If a series is convergent but not absolutely convergent, it is called conditionally convergent. An example of a conditionally convergent series is the alternating harmonic series. Many standard tests for divergence and convergence, most notably including the ratio test and the root test, demonstrate absolute convergence. This is because a power series is absolutely convergent on the interior of its disk of convergence.
Proof that any absolutely convergent series of complex numbers is convergent
Suppose that $\sum |a_k|$, with $a_k \in \mathbb{C}$, is convergent. Then, equivalently, $\sum \sqrt{(\operatorname{Re} a_k)^2 + (\operatorname{Im} a_k)^2}$ is convergent, which implies that $\sum |\operatorname{Re} a_k|$ and $\sum |\operatorname{Im} a_k|$ converge by termwise comparison of non-negative terms. It suffices to show that the convergence of these series implies the convergence of $\sum \operatorname{Re} a_k$ and $\sum \operatorname{Im} a_k$, for then the convergence of $\sum a_k = \sum \operatorname{Re} a_k + i \sum \operatorname{Im} a_k$ would follow, by the definition of the convergence of complex-valued series. The preceding discussion shows that we need only prove that convergence of $\sum |a_k|$, with $a_k \in \mathbb{R}$, implies the convergence of $\sum a_k$. Let $\sum |a_k|$, with $a_k \in \mathbb{R}$, be convergent. Since $0 \leq a_k + |a_k| \leq 2|a_k|$, we have
$$0 \leq \sum_{k=1}^{n} (a_k + |a_k|) \leq \sum_{k=1}^{n} 2|a_k|.$$
Since $\sum 2|a_k|$ is convergent, $s_n = \sum_{k=1}^{n} (a_k + |a_k|)$ is a bounded monotonic sequence of partial sums, and $\sum (a_k + |a_k|)$ must also converge. Noting that $\sum a_k = \sum (a_k + |a_k|) - \sum |a_k|$ is the difference of convergent series, we conclude that it too is a convergent series, as desired.
Alternative proof using the Cauchy criterion and triangle inequality
By applying the Cauchy criterion for the convergence of a complex series, we can also prove this fact as a simple implication of the triangle inequality. By the Cauchy criterion, $\sum |a_i|$ converges if and only if for any $\varepsilon > 0$ there exists $N$ such that $\sum_{i=m}^{n} |a_i| < \varepsilon$ for any $n > m \geq N$. But the triangle inequality implies that $\left| \sum_{i=m}^{n} a_i \right| \leq \sum_{i=m}^{n} |a_i|$, so that $\left| \sum_{i=m}^{n} a_i \right| < \varepsilon$ for any $n > m \geq N$, which is exactly the Cauchy criterion for $\sum a_i$.
Proof that any absolutely convergent series in a Banach space is convergent
The above result can be easily generalized to every Banach space $(X, \|\cdot\|)$. Let $\sum x_n$ be an absolutely convergent series in $X$. As $\sum_{k=1}^{n} \|x_k\|$ is a Cauchy sequence of real numbers, for any $\varepsilon > 0$ and large enough natural numbers $m > n$ it holds:
$$\left| \sum_{k=1}^{m} \|x_k\| - \sum_{k=1}^{n} \|x_k\| \right| = \sum_{k=n+1}^{m} \|x_k\| < \varepsilon.$$
By the triangle inequality for the norm $\|\cdot\|$, one immediately gets:
$$\left\| \sum_{k=1}^{m} x_k - \sum_{k=1}^{n} x_k \right\| = \left\| \sum_{k=n+1}^{m} x_k \right\| \leq \sum_{k=n+1}^{m} \|x_k\| < \varepsilon,$$
which means that $\sum_{k=1}^{n} x_k$ is a Cauchy sequence in $X$, hence the series is convergent in $X$.
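A concrete instance of the implication just proved (a standard example, added here for illustration): the alternating series of inverse squares converges absolutely, hence converges, and its sum is immune to rearrangement:

```latex
% The series of absolute values is the Basel series, which converges:
\sum_{n=1}^{\infty} \left| \frac{(-1)^{n+1}}{n^{2}} \right|
  = \sum_{n=1}^{\infty} \frac{1}{n^{2}}
  = \frac{\pi^{2}}{6} < \infty.
% Hence \sum (-1)^{n+1}/n^2 converges (its value is \pi^2/12), and by
% the rearrangement results below, every reordering gives the same sum.
```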
Rearrangements and unconditional convergence
Real and complex numbers
When a series of real or complex numbers is absolutely convergent, any rearrangement or reordering of that series' terms will still converge to the same value. This fact is one reason absolutely convergent series are useful: showing a series is absolutely convergent allows terms to be paired or rearranged in convenient ways without changing the sum's value. The Riemann rearrangement theorem shows that the converse is also true: every real or complex-valued series whose terms cannot be reordered to give a different value is absolutely convergent.
Series with coefficients in a more general space
The term unconditional convergence is used to refer to a series where any rearrangement of its terms still converges to the same value. For any series with values in a normed abelian group $G$: as long as $G$ is complete, every series which converges absolutely also converges unconditionally. Stated more formally: if $G$ is a complete normed abelian group and $\sum_{i=0}^{\infty} a_i$ converges absolutely, then for every permutation $\sigma$ of $\mathbb{N}$ the rearranged series converges and
$$\sum_{i=0}^{\infty} a_{\sigma(i)} = \sum_{i=0}^{\infty} a_i.$$
For series with more general coefficients, the converse is more complicated. As stated in the previous section, for real-valued and complex-valued series, unconditional convergence always implies absolute convergence. However, in the more general case of a series with values in any normed abelian group $G$, the converse does not always hold: there can exist series which are not absolutely convergent, yet unconditionally convergent. For example, in the Banach space $\ell^{\infty}$, one series which is unconditionally convergent but not absolutely convergent is
$$\sum_{n=1}^{\infty} \frac{1}{n} e_n,$$
where $e_n$ denotes the $n$-th standard unit vector. A theorem of A. Dvoretzky and C. A. Rogers asserts that every infinite-dimensional Banach space has an unconditionally convergent series that is not absolutely convergent.
Proof of the theorem
Let $A = \sum_{i=0}^{\infty} a_i$ and let $\sigma$ be any permutation of $\mathbb{N}$. For any $\varepsilon > 0$ we can choose some $\kappa_{\varepsilon} \in \mathbb{N}$ such that:
$$\sum_{i=\kappa_{\varepsilon}}^{\infty} \|a_i\| < \frac{\varepsilon}{2}.$$
Let $M_{\sigma,\varepsilon} = \max \sigma^{-1}(\{0, 1, \ldots, \kappa_{\varepsilon} - 1\})$, so that $M_{\sigma,\varepsilon}$ is the smallest natural number such that the list $a_{\sigma(0)}, \ldots, a_{\sigma(M_{\sigma,\varepsilon})}$ includes all of the terms $a_0, \ldots, a_{\kappa_{\varepsilon}-1}$ (and possibly others). Finally, for any integer $N > M_{\sigma,\varepsilon}$, let $I_{\sigma,\varepsilon} = \{0, \ldots, N\} \setminus \sigma^{-1}(\{0, \ldots, \kappa_{\varepsilon}-1\})$, so that every $i \in I_{\sigma,\varepsilon}$ satisfies $\sigma(i) \geq \kappa_{\varepsilon}$, and thus
$$\left\| \sum_{i=0}^{N} a_{\sigma(i)} - \sum_{j=0}^{\kappa_{\varepsilon}-1} a_j \right\| = \left\| \sum_{i \in I_{\sigma,\varepsilon}} a_{\sigma(i)} \right\| \leq \sum_{j=\kappa_{\varepsilon}}^{\infty} \|a_j\| < \frac{\varepsilon}{2},$$
and likewise $\left\| A - \sum_{j=0}^{\kappa_{\varepsilon}-1} a_j \right\| \leq \sum_{j=\kappa_{\varepsilon}}^{\infty} \|a_j\| < \frac{\varepsilon}{2}$, so that $\left\| \sum_{i=0}^{N} a_{\sigma(i)} - A \right\| < \varepsilon$ for all $N > M_{\sigma,\varepsilon}$. This shows that
$$\lim_{N \to \infty} \sum_{i=0}^{N} a_{\sigma(i)} = A, \quad \text{that is:} \quad \sum_{i=0}^{\infty} a_{\sigma(i)} = \sum_{i=0}^{\infty} a_i.$$
Q.E.D.
Products of series
The Cauchy product of two series converges to the product of the sums if at least one of the series converges absolutely. That is, suppose that
$$\sum_{n=0}^{\infty} a_n = A \quad \text{and} \quad \sum_{n=0}^{\infty} b_n = B.$$
The Cauchy product is defined as the sum of terms $c_n$ where:
$$c_n = \sum_{k=0}^{n} a_k b_{n-k}.$$
If either the $a_n$ or $b_n$ sum converges absolutely then
$$\sum_{n=0}^{\infty} c_n = AB.$$
Absolute convergence over sets
A generalization of the absolute convergence of a series is the absolute convergence of a sum of a function over a set. We can first consider a countable set $X$ and a function $f : X \to \mathbb{R}$. We will give a definition below of the sum of $f$ over $X$, written as $\sum_{x \in X} f(x)$. First note that because no particular enumeration (or "indexing") of $X$ has yet been specified, the series $\sum_{x \in X} f(x)$ cannot be understood by the more basic definition of a series. In fact, for certain examples of $X$ and $f$, the sum of $f$ over $X$ may not be defined at all, since some indexing may produce a conditionally convergent series. Therefore we define $\sum_{x \in X} f(x)$ only in the case where there exists some bijection $g : \mathbb{Z}_{\geq 0} \to X$ such that $\sum_{n=0}^{\infty} f(g(n))$ is absolutely convergent. Note that here, "absolutely convergent" uses the more basic definition, applied to an indexed series. In this case, the value of the sum of $f$ over $X$ is defined by
$$\sum_{x \in X} f(x) := \sum_{n=0}^{\infty} f(g(n)).$$
Note that because the series is absolutely convergent, every rearrangement is identical to a different choice of bijection $g$. Since all of these sums have the same value, the sum of $f$ over $X$ is well-defined. Even more generally, we may define the sum of $f$ over $X$ when $X$ is uncountable. But first we define what it means for the sum to be convergent. Let $A$ be any set, countable or uncountable, and $f : A \to \mathbb{R}$ a function. We say that the sum of $f$ over $A$ converges absolutely if
$$\sup \left\{ \sum_{x \in F} |f(x)| : F \subseteq A, \ F \text{ finite} \right\} < \infty.$$
There is a theorem which states that, if the sum of $f$ over $A$ is absolutely convergent, then $f$ takes non-zero values on a set that is at most countable. Therefore, the following is a consistent definition of the sum of $f$ over $A$ when the sum is absolutely convergent:
$$\sum_{x \in A} f(x) := \sum_{x \in A : f(x) \neq 0} f(x).$$
Note that the final series uses the definition of a series over a countable set.
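A short sketch of why absolute convergence over $A$ forces $f$ to have countable support (a standard argument, sketched here for completeness rather than quoted from the article):

```latex
% Let M be the finite supremum in the definition above, and for each
% positive integer n consider the set where |f| exceeds 1/n:
A_n = \{ x \in A : |f(x)| > 1/n \}.
% Any finite subset F of A_n satisfies |F|/n \le \sum_{x \in F} |f(x)| \le M,
% so A_n has at most nM elements.  Hence the support
\{ x \in A : f(x) \neq 0 \} = \bigcup_{n=1}^{\infty} A_n
% is a countable union of finite sets, and therefore at most countable.
```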
Some authors define an iterated sum $\sum_{m=0}^{\infty} \sum_{n=0}^{\infty} a_{m,n}$ to be absolutely convergent if the iterated series $\sum_{m=0}^{\infty} \sum_{n=0}^{\infty} |a_{m,n}|$ converges. This is in fact equivalent to the absolute convergence of $\sum_{(m,n) \in \mathbb{N} \times \mathbb{N}} a_{m,n}$. That is to say, if the sum of $f$ over $A = \mathbb{N} \times \mathbb{N}$, $\sum_{(m,n) \in \mathbb{N} \times \mathbb{N}} a_{m,n}$, converges absolutely, as defined above, then the iterated sum converges absolutely, and vice versa.
Absolute convergence of integrals
The integral $\int_A f(x)\,dx$ of a real or complex-valued function is said to converge absolutely if $\int_A |f(x)|\,dx < \infty$. One also says that $f$ is absolutely integrable. The issue of absolute integrability is intricate and depends on whether the Riemann, Lebesgue, or Kurzweil-Henstock (gauge) integral is considered; for the Riemann integral, it also depends on whether we only consider integrability in its proper sense ($A$ and $f$ both bounded), or permit the more general case of improper integrals. As a standard property of the Riemann integral, when $A = [a,b]$ is a bounded interval, every continuous function is bounded and (Riemann) integrable, and since $f$ continuous implies $|f|$ continuous, every continuous function is absolutely integrable. In fact, since $g \circ f$ is Riemann integrable on $[a,b]$ if $f$ is (properly) integrable and $g$ is continuous, it follows that $|f| = |\cdot| \circ f$ is properly Riemann integrable if $f$ is. However, this implication does not hold in the case of improper integrals. For instance, the function $f : [1,\infty) \to \mathbb{R}$, $f(x) = \frac{\sin x}{x}$, is improperly Riemann integrable on its unbounded domain, but it is not absolutely integrable:
$$\int_1^{\infty} \frac{\sin x}{x}\,dx \ \text{converges, but} \ \int_1^{\infty} \left| \frac{\sin x}{x} \right|\,dx = \infty.$$
Indeed, more generally, given any series $\sum_{n=0}^{\infty} a_n$, one can consider the associated step function $f_a : [0, \infty) \to \mathbb{R}$ defined by $f_a(x) = a_n$ for $x \in [n, n+1)$. Then $\int_0^{\infty} f_a\,dx$ converges absolutely, converges conditionally or diverges according to the corresponding behavior of $\sum_{n=0}^{\infty} a_n$. The situation is different for the Lebesgue integral, which does not handle bounded and unbounded domains of integration separately (see below). The fact that the integral of $|f|$ is unbounded in the examples above implies that $f$ is also not integrable in the Lebesgue sense. In fact, in the Lebesgue theory of integration, given that $f$ is measurable, $f$ is (Lebesgue) integrable if and only if $|f|$ is (Lebesgue) integrable. However, the hypothesis that $f$ is measurable is crucial; it is not generally true that absolutely integrable functions on $[a,b]$ are integrable (simply because they may fail to be measurable): let $S \subset [a,b]$ be a nonmeasurable subset and consider $f = \chi_S - \frac{1}{2}$, where $\chi_S$ is the characteristic function of $S$. Then $f$ is not Lebesgue measurable and thus not integrable, but $|f| \equiv \frac{1}{2}$ is a constant function and clearly integrable. On the other hand, a function $f$ may be Kurzweil-Henstock integrable (gauge integrable) while $|f|$ is not. This includes the case of improperly Riemann integrable functions. In a general sense, on any measure space $A$, the Lebesgue integral of a real-valued function is defined in terms of its positive and negative parts, so the facts:
$f$ integrable implies $|f|$ integrable
$f$ measurable, $|f|$ integrable implies $f$ integrable
are essentially built into the definition of the Lebesgue integral. In particular, applying the theory to the counting measure on a set $S$, one recovers the notion of unordered summation of series developed by Moore-Smith using (what are now called) nets. When $S = \mathbb{N}$ is the set of natural numbers, Lebesgue integrability, unordered summability and absolute convergence all coincide.
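Returning to the $\frac{\sin x}{x}$ example above, here is a brief sketch (a standard comparison argument, added for illustration) of why the integral of its absolute value diverges:

```latex
% On each half-period [k\pi, (k+1)\pi], |sin x| integrates to exactly 2,
% while 1/x is at least 1/((k+1)\pi):
\int_{k\pi}^{(k+1)\pi} \frac{|\sin x|}{x}\, dx
  \;\geq\; \frac{1}{(k+1)\pi} \int_{k\pi}^{(k+1)\pi} |\sin x|\, dx
  \;=\; \frac{2}{(k+1)\pi}.
% Summing over k gives a harmonic-series lower bound, which diverges:
\int_{\pi}^{\infty} \frac{|\sin x|}{x}\, dx
  \;\geq\; \frac{2}{\pi} \sum_{k=1}^{\infty} \frac{1}{k+1} = \infty.
```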
Finally, all of the above holds for integrals with values in a Banach space. The definition of a Banach-valued Riemann integral is an evident modification of the usual one. For the Lebesgue integral one needs to circumvent the decomposition into positive and negative parts with Daniell's more functional analytic approach, obtaining the Bochner integral.
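As a closing illustration (a standard fact about the Bochner integral, stated here for completeness rather than quoted from the article), Bochner's criterion makes absolute integrability the defining condition in the Banach-valued case:

```latex
% Bochner's criterion: for a measure space (A, \Sigma, \mu), a Banach
% space X, and a strongly measurable function f : A \to X,
f \ \text{is Bochner integrable}
\quad\iff\quad
\int_{A} \| f(x) \|_{X} \, d\mu(x) < \infty,
% mirroring the scalar fact that a measurable f is Lebesgue integrable
% if and only if |f| is.
```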
Mathematics
Sequences and series
null
61534
https://en.wikipedia.org/wiki/Australopithecus
Australopithecus
Australopithecus is a genus of early hominins that existed in Africa during the Pliocene and Early Pleistocene. The genera Homo (which includes modern humans), Paranthropus, and Kenyanthropus evolved from some Australopithecus species. Australopithecus is a member of the subtribe Australopithecina, which sometimes also includes Ardipithecus, though the term "australopithecine" is sometimes used to refer only to members of Australopithecus. Species include A. garhi, A. africanus, A. sediba, A. afarensis, A. anamensis, A. bahrelghazali, and A. deyiremeda. Debate exists as to whether some Australopithecus species should be reclassified into new genera, or whether Paranthropus and Kenyanthropus are synonymous with Australopithecus, in part because of this taxonomic inconsistency. Furthermore, because some species (e.g. A. africanus) are more closely related to humans, or their ancestors at the time, than others (e.g. A. anamensis and many other Australopithecus branches), Australopithecus cannot be consolidated into a coherent grouping without also including the genus Homo and other genera. The earliest known member of the genus, A. anamensis, existed in eastern Africa around 4.2 million years ago. Australopithecus fossils become more widely dispersed throughout eastern and southern Africa (the Chadian A. bahrelghazali indicates that the genus was much more widespread than the fossil record suggests), before the genus eventually became pseudo-extinct 1.9 million years ago (or 1.2 to 0.6 million years ago if Paranthropus is included). While none of the groups normally directly assigned to this genus survived, Australopithecus gave rise to living descendants, as the genus Homo emerged from an Australopithecus species at some time between 3 and 2 million years ago. Australopithecus possessed two of the three duplicated genes derived from SRGAP2, arising roughly 3.4 and 2.4 million years ago (SRGAP2B and SRGAP2C), the second of which contributed to the increase in number and migration of neurons in the human brain. Significant changes to the hand first appear in the fossil record of later A. afarensis about 3 million years ago (fingers shortened relative to the thumb, and changes to the joints between the index finger and the trapezium and capitate).
Taxonomy
Research history
The first Australopithecus specimen, the type specimen, was discovered in 1924 in a lime quarry by workers at Taung, South Africa. The specimen was studied by the Australian anatomist Raymond Dart, who was then working at the University of the Witwatersrand in Johannesburg. The fossil skull was from a three-year-old bipedal primate (nicknamed the Taung Child) that he named Australopithecus africanus. The first report was published in Nature in February 1925. Dart realised that the fossil contained a number of humanoid features, and so he came to the conclusion that this was an early human ancestor. Later, the Scottish paleontologist Robert Broom and Dart set out to search for more early hominin specimens, and found several more A. africanus remains at various sites. Initially, anthropologists were largely hostile to the idea that these discoveries were anything but apes, though this changed during the late 1940s. In 1950, the evolutionary biologist Ernst Walter Mayr said that all bipedal apes should be classified into the genus Homo, and considered renaming Australopithecus to Homo transvaalensis. However, the contrary view taken by J. T. Robinson in 1954, excluding australopiths from Homo, became the prevalent view.
The first australopithecine fossil discovered in eastern Africa was an A. boisei skull excavated by Mary Leakey in 1959 in Olduvai Gorge, Tanzania. Since then, the Leakey family has continued to excavate the gorge, uncovering further evidence for australopithecines, as well as for Homo habilis and Homo erectus. The scientific community took 20 more years to widely accept Australopithecus as a member of the human family tree. In 1997, an almost complete Australopithecus skeleton with skull was found in the Sterkfontein caves of Gauteng, South Africa. It is now called "Little Foot" and is around 3.7 million years old. It was named Australopithecus prometheus, a species which has since been placed within A. africanus. Other fossil remains found in the same cave in 2008 were named Australopithecus sediba, which lived 1.9 million years ago. A. africanus probably evolved into A. sediba, which some scientists think may have evolved into H. erectus, though this is heavily disputed. In 2003, the Spanish writer Camilo José Cela Conde and evolutionary biologist Francisco J. Ayala proposed resurrecting the genus Praeanthropus to house Orrorin, A. afarensis, A. anamensis, A. bahrelghazali, and A. garhi, but this genus has been largely dismissed.
Classification
With the apparent emergence of the genera Homo, Kenyanthropus, and Paranthropus from within the genus Australopithecus, taxonomy runs into some difficulty, as the name of a species incorporates its genus. According to cladistics, groups should not be left paraphyletic, that is, consisting of a common ancestor but not all of its descendants. Resolving this problem would have major ramifications for the nomenclature of all descendant species. Possibilities suggested have been to rename Homo sapiens to Australopithecus sapiens (or even Pan sapiens), or to move some Australopithecus species into new genera. In 2002 and again in 2007, Camilo José Cela Conde et al. suggested that A. africanus be moved to Paranthropus. On the basis of craniodental evidence, Strait and Grine (2004) suggest that A. anamensis and A. garhi should be assigned to new genera. It is debated whether or not A. bahrelghazali should be considered simply a western variant of A. afarensis instead of a separate species.
Evolution
A. anamensis may have descended from, or been closely related to, Ardipithecus ramidus. A. anamensis shows some similarities to both Ar. ramidus and Sahelanthropus. Australopiths shared several traits with modern apes and humans, and were widespread throughout Eastern and Northern Africa by 3.5 million years ago (mya). The earliest evidence of fundamentally bipedal hominins is a 3.6 mya fossil trackway in Laetoli, Tanzania, which bears a remarkable similarity to the footprints of modern humans. The footprints have generally been classified as australopith, as australopiths are the only form of prehuman hominins known to have existed in that region at that time. According to the Chimpanzee Genome Project, the human-chimpanzee last common ancestor existed about five to six million years ago, assuming a constant rate of mutation. However, hominin species dated to earlier than that could call this into question. Sahelanthropus tchadensis, commonly called "Toumai", is about seven million years old, and Orrorin tugenensis lived at least six million years ago. Because little is known of them, they remain controversial among scientists, given that the molecular clock in humans has determined that humans and chimpanzees had a genetic split at least a million years later.
One theory suggests that the human and chimpanzee lineages diverged somewhat at first, then some populations interbred around one million years after diverging.
Anatomy
The brains of most species of Australopithecus were roughly 35% of the size of a modern human brain, with an endocranial volume averaging about 466 cc. Although this is more than the average endocranial volume of chimpanzee brains, at about 360 cc, the earliest australopiths (A. anamensis) appear to have been within the chimpanzee range, whereas some later australopith specimens have a larger endocranial volume than that of some early Homo fossils. Most species of Australopithecus were diminutive and gracile, usually standing 1.2 to 1.4 m (3 ft 11 in to 4 ft 7 in) tall. It is possible that they exhibited a considerable degree of sexual dimorphism, males being larger than females. In modern human populations, males are on average a mere 15% larger than females, while in Australopithecus, males could be up to 50% larger than females by some estimates. However, the degree of sexual dimorphism is debated due to the fragmentary nature of australopith remains, and one paper finds that A. afarensis had a level of dimorphism close to that of modern humans. According to A. Zihlman, Australopithecus body proportions closely resemble those of bonobos (Pan paniscus), leading the evolutionary biologist Jeremy Griffith to suggest that bonobos may be phenotypically similar to Australopithecus. Furthermore, thermoregulatory models suggest that australopiths were fully covered in hair, more like chimpanzees and bonobos, and unlike humans. The fossil record seems to indicate that Australopithecus is ancestral to Homo and modern humans. It was once assumed that large brain size had been a precursor to bipedalism, but the discovery of Australopithecus with a small brain but developed bipedality upset this theory. Nonetheless, it remains a matter of controversy how bipedalism first emerged. The advantages of bipedalism were that it left the hands free to grasp objects (e.g., to carry food and young), and allowed the eyes to look over tall grasses for possible food sources or predators, but it is also argued that these advantages were not significant enough to cause the emergence of bipedalism. Earlier fossils, such as Orrorin tugenensis, indicate bipedalism around six million years ago, around the time of the split between humans and chimpanzees indicated by genetic studies. This suggests that erect, straight-legged walking originated as an adaptation to tree-dwelling. Major changes to the pelvis and feet had already taken place before Australopithecus. It was once thought that humans descended from a knuckle-walking ancestor, but this is not well supported. Australopithecines have thirty-two teeth, like modern humans. Their molar rows were parallel, like those of great apes, and they had a slight pre-canine gap (diastema). Their canines were smaller, like those of modern humans, and the teeth were less interlocked than in previous hominins. In fact, in some australopithecines, the canines are shaped more like incisors. The molars of Australopithecus fit together in much the same way those of humans do, with low crowns and four low, rounded cusps used for crushing, and with cutting edges on the crests. However, australopiths generally evolved a larger postcanine dentition with thicker enamel. Australopiths in general had thick enamel, like Homo, while other great apes have markedly thinner enamel. Robust australopiths wore their molar surfaces down flat, unlike the more gracile species, who kept their crests.
Diet
Australopithecus species are thought to have eaten mainly fruit, vegetables, and tubers, and perhaps easy-to-catch animals such as small lizards. Much research has focused on a comparison between the South African species A. africanus and Paranthropus robustus. Early analyses of dental microwear in these two species showed that, compared to P. robustus, A. africanus had fewer microwear features and more scratches, as opposed to pits, on its molar wear facets. Microwear patterns on the cheek teeth of A. afarensis and A. anamensis indicate that A. afarensis predominantly ate fruits and leaves, whereas A. anamensis included grasses and seeds in addition to fruits and leaves. The thickening of enamel in australopiths may have been a response to eating more ground-based foods such as tubers, nuts, and cereal grains carrying gritty dirt and other small particulates, which would wear away enamel. Gracile australopiths had larger incisors, which indicates that tearing food was important, perhaps including scavenged meat. Nonetheless, the wearing patterns on the teeth support a largely herbivorous diet. In 1992, trace-element studies of the strontium/calcium ratios in robust australopith fossils suggested the possibility of animal consumption, as did stable carbon isotopic analysis in 1994. In 2005, fossil animal bones with butchery marks dating to 2.6 million years old were found at the site of Gona, Ethiopia. This implies meat consumption by at least one of three species of hominins occurring around that time: A. africanus, A. garhi, and/or P. aethiopicus. In 2010, fossils of butchered animal bones dated to 3.4 million years old were found in Ethiopia, close to regions where australopith fossils were found. However, a 2025 study measuring nitrogen isotope ratios in fossilized teeth determined that Australopithecus was almost entirely vegetarian. Robust australopithecines (Paranthropus) had larger cheek teeth than gracile australopiths, possibly because robust australopithecines had more tough, fibrous plant material in their diets, whereas gracile australopiths ate more hard and brittle foods. However, such divergence in chewing adaptations may instead have been a response to fallback food availability. In leaner times, robust and gracile australopithecines may have turned to different low-quality foods (fibrous plants for the former, and hard food for the latter), but in more bountiful times, they had more variable and overlapping diets. In a 1979 preliminary microwear study of Australopithecus fossil teeth, the anthropologist Alan Walker theorized that robust australopiths ate predominantly fruit (frugivory). A study in 2018 found non-carious cervical lesions, caused by acid erosion, on the teeth of A. africanus, probably caused by consumption of acidic fruit.
Technology
It is debated whether the Australopithecus hand was anatomically capable of producing stone tools. A. garhi was associated with large mammal bones bearing evidence of processing by stone tools, which may indicate australopithecine tool production. Stone tools dating to roughly the same time as A. garhi (about 2.6 mya) were later discovered at the nearby Gona and Ledi-Geraru sites, but the appearance of Homo at Ledi-Geraru (LD 350-1) casts doubt on australopithecine authorship. In 2010, cut marks dating to 3.4 mya on a bovid leg were found at the Dikika site, which were at first attributed to butchery by A.
afarensis, but because the fossil came from a sandstone unit (and the marks were modified by abrasive sand and gravel particles during the fossilisation process), the attribution to butchery is dubious. In 2015, the Lomekwi culture was discovered at Lake Turkana, dating to 3.3 mya and possibly attributable to Kenyanthropus or A. deyiremeda.
Notable specimens
KT-12/H1, an A. bahrelghazali mandibular fragment, discovered 1995 in the Sahara, Chad
AL 129-1, an A. afarensis knee joint, discovered 1973 in Hadar, Ethiopia
Karabo, a juvenile male A. sediba, discovered in South Africa
Laetoli footprints, preserved hominin footprints in Tanzania
Lucy, a 40%-complete skeleton of a female A. afarensis, discovered 1974 in Hadar, Ethiopia
Selam, remains of a three-year-old A. afarensis female, discovered in Dikika, Ethiopia
MRD-VP-1/1, the first skull of A. anamensis, discovered in 2016 in Afar, Ethiopia
STS 5 (Mrs. Ples), the most complete skull of an A. africanus ever found in South Africa
STS 14, remains of an A. africanus, discovered 1947 in Sterkfontein, South Africa
STS 71, skull of an A. africanus, discovered 1947 in Sterkfontein, South Africa
Taung Child, skull of a young A. africanus, discovered 1924 in Taung, South Africa
Biology and health sciences
Evolution
null
61550
https://en.wikipedia.org/wiki/Black%20swan
Black swan
The black swan (Cygnus atratus) is a large waterbird, a species of swan which breeds mainly in the southeast and southwest regions of Australia. Within Australia, the black swan is nomadic, with erratic migration patterns dependent on climatic conditions. It is a large bird with black plumage and a red bill. It is a monogamous breeder, with both partners sharing incubation and cygnet-rearing duties. The black swan was introduced to various countries as an ornamental bird in the 1800s, but escaped birds have formed stable populations. Described scientifically by the English naturalist John Latham in 1790, the black swan was formerly placed into a monotypic genus, Chenopis. Black swans can be found singly, or in loose companies numbering into the hundreds or even thousands. It is a popular bird in zoological gardens and bird collections, and escapees are sometimes seen outside their natural range. This bird is a regional symbol of both Western Australia, where it is native, and the English town of Dawlish, where it is an introduced species.
Description
Black swans are black-feathered birds with white flight feathers. The bill is bright red, with a pale bar and tip; the legs and feet are greyish-black. Cobs (males) are slightly larger than pens (females), with a longer and straighter bill. Cygnets (immature birds) are greyish-brown with pale-edged feathers. Mature black swans measure between 110 and 142 cm (43 and 56 in) in length and weigh 3.7-9 kg (8.2-19.8 lb). Their wing span is between 1.6 and 2 m (5.2 and 6.6 ft). The neck is long (relatively the longest neck among the swans) and curved in an "S" shape. The black swan utters a musical and far-reaching bugle-like call, sounded either on the water or in flight, as well as a range of softer crooning notes. It can also whistle, especially when disturbed while breeding and nesting. When swimming, black swans hold their necks arched or erect and often carry their feathers or wings raised in an aggressive display. In flight, a wedge of black swans will form as a line or a V, the individual birds flying strongly with undulating long necks, making whistling sounds with their wings and baying, bugling or trumpeting calls. The black swan is unlike any other Australian bird, although in poor light and at long range it may be confused with a magpie goose in flight. However, the black swan can be distinguished by its much longer neck and slower wing beat. One captive population of black swans in Lakeland, Florida has produced a few individuals which are a light mottled grey colour instead of black. White black swans are also known to exist; these individuals, which are leucistic, are believed to occur rarely in the wild.
Distribution
The black swan is common in the wetlands of southwestern and eastern Australia and adjacent coastal islands. In the south west its range encompasses an area between North West Cape, Cape Leeuwin and Eucla, while in the east it covers a large region bounded by the Atherton Tableland, the Eyre Peninsula and Tasmania, with the Murray Darling Basin supporting very large populations of black swans. It is uncommon in central and northern Australia. The black swan's preferred habitat extends across fresh, brackish and salt water lakes, swamps and rivers with underwater and emergent vegetation for food and nesting materials. It also favours permanent wetlands, including ornamental lakes, but can be found as well in flooded pastures and tidal mudflats, and occasionally on the open sea near islands or the shore. The black swan was once thought to be sedentary, but the species is now known to be highly nomadic.
There is no set migratory pattern, but rather opportunistic responses to either rainfall or drought. In high rainfall years, emigration occurs from the south west and south east into the interior, with a reverse migration back to these heartlands in drier years. When rain does fall in the arid central regions, black swans will migrate to these areas to nest and raise their young. However, should dry conditions return before the young have been raised, the adult birds will abandon the nests and their eggs or cygnets and return to wetter areas. The black swan, like many other waterfowl, loses all its flight feathers at once when it moults after breeding, and is unable to fly for about a month. During this time it will usually settle on large, open waters for safety. The species has a large range, with figures between 1 million and 10 million km2 given as the extent of occurrence. The current global population is estimated to be up to 500,000 individuals. No threat of extinction or significant decline in population has been identified for this numerous and widespread bird. Black swans were first seen by Europeans in 1697, when Willem de Vlamingh's expedition explored the Swan River, Western Australia.
Introduced populations
New Zealand
Before the arrival of the Māori in New Zealand, a related species of swan known as the New Zealand swan had developed there, but was apparently hunted to extinction. In 1864, the Australian black swan was introduced to New Zealand as an ornamental waterfowl, and populations are now common on larger coastal or inland lakes, especially the Rotorua Lakes, Lake Wairarapa, Lake Ellesmere / Te Waihora, and the Chatham Islands. Black swans have also naturally flown to New Zealand, leading scientists to consider them a native rather than an exotic species, although the present population appears to be largely descended from deliberate introductions.
United Kingdom
The black swan is also very popular as an ornamental waterbird in western Europe, especially Britain, and escapees are commonly reported. As yet, the population in Britain is not considered to be self-sustaining, and so the species is not afforded admission to the official British List, but the Wildfowl and Wetlands Trust recorded a maximum of nine breeding pairs in the UK in 2001, with an estimate of 43 feral birds in 2003-2004. Small populations of black swans exist on the River Thames at Marlow, on the brook running through the small town of Dawlish in Devon, near the River Itchen in Hampshire, and on the River Tees near Stockton-on-Tees. The Dawlish population is so well associated with the town that the bird has been the town's emblem for forty years.
Japan
There are also wild populations in Japan, having originally been imported during 1950-1960.
United States
Black swans have been reported in Florida, but there is no evidence that they are breeding; persistent sightings may be due to continual releases or escapes. Orange County, California reported black swans in Lake Forest, Irvine and Newport Beach on October 5, 2020, and in Santa Ana in December 2021 at the "Versailles on the Lake" apartment community. The "Lake Forest Keys" community "bought the original swans about eight to ten years ago and since then there have been many births and gaggles from the original couple", according to a report in the Orange County Register. Black swans formerly resided in the vicinity of Lake Junaluska, a large lake in Waynesville, North Carolina.
Mainland China
Black swans can also be found in mainland China.
In 2018, one group of swans was introduced to an artificial lake on the Shenzhen University campus in Guangdong Province.
Behaviour
Diet and feeding
The black swan is almost exclusively herbivorous, and while there is some regional and seasonal variation, the diet is generally dominated by aquatic and marshland plants. In New South Wales, the leaf of reedmace (genus Typha) is the most important food of birds in wetlands, followed by submerged algae and aquatic plants such as Vallisneria. In Queensland, aquatic plants such as Potamogeton, stoneworts, and algae are the dominant foods. The exact composition varies with water level; in flood situations where normal foods are out of reach, black swans will feed on pasture plants on shore. The black swan feeds in a similar manner to other swans. When feeding in shallow water it will dip its head and neck under the water, and it is able to keep its head flat against the bottom while keeping its body horizontal. In deeper water the swan up-ends to reach lower. Black swans are also able to filter-feed at the water's surface.
Nesting and reproduction
Like other swans, the black swan is largely monogamous, pairing for life (with a divorce rate of about 6%). Recent studies have shown that around a third of all broods exhibit extra-pair paternity. An estimated one-quarter of all pairings are homosexual, mostly between males. They steal nests, or form temporary threesomes with females to obtain eggs, driving away the female after she lays the eggs. Generally, black swans in the Southern Hemisphere nest in the wetter winter months (February to September), occasionally in large colonies. A black swan nest is essentially a large heap or mound of reeds, grasses and weeds, between 1 and 1.5 metres (3-5 ft) in diameter and up to 1 metre high, in shallow water or on islands. A nest is reused every year, restored or rebuilt as needed. Both parents share the care of the nest. A typical clutch contains four to eight greenish-white eggs that are incubated for about 35-40 days. Incubation begins after the laying of the last egg, to synchronise the hatching of the chicks. Prior to the commencement of incubation, the parent will sit over the eggs without actually warming them. Both sexes incubate the eggs, with the female incubating at night. The changeover between incubation periods is marked by ritualised displays by both sexes. If eggs accidentally roll out of the nest, both sexes will retrieve them using the neck (in other swan species only the female performs this feat). Like all swans, black swans will aggressively defend their nests with their wings and beaks. After hatching, the cygnets are tended by the parents for about nine months until fledging. Cygnets may ride on a parent's back for longer trips into deeper water, but black swans undertake this behaviour less frequently than mute and black-necked swans.
Relationship with humans
Conservation
The black swan is protected in New South Wales, Australia under the National Parks and Wildlife Act 1974 (s.5). The black swan is fully protected in all states and territories of Australia and must not be shot. It is evaluated as Least Concern on the IUCN Red List of Threatened Species.
Australian culture
The black swan was a literary or artistic image among Europeans even before their arrival in Australia. Cultural reference has been based on symbolic contrast and on the black swan as a distinctive motif. The black swan's role in Australian heraldry and culture extends to the first founding of the colonies in the eighteenth century.
It has often been equated with antipodean identity, the contrast to the white swan of the Northern Hemisphere indicating "Australianness". The black swan is featured on the flag of Western Australia, and is both the state bird and state emblem; it also appears in the coat of arms and other iconography of the state's institutions. The black swan was the sole postage stamp design of Western Australia from 1854 to 1902.
Indigenous Australia
The Noongar people of the south-west of Australia call the black swan Kooldjak along the west and south-west coast and Gooldjak in the south east, and it is sometimes referred to as maali in language schools.
Biology and health sciences
Anseriformes
Animals