https://en.wikipedia.org/wiki/Skype
Skype
Skype is a proprietary telecommunications application operated by Skype Technologies, a division of Microsoft, best known for IP-based videotelephony, videoconferencing and voice calls. It also has instant messaging, file transfer, debit-based calls to landline and mobile telephones (over traditional telephone networks), and other features. It is available on various desktop, mobile, and video game console platforms. Skype was created by Niklas Zennström, Janus Friis, and four Estonian developers, and first released in August 2003. In September 2005, eBay acquired it for $2.6 billion. In September 2009, Silver Lake, Andreessen Horowitz, and the Canada Pension Plan Investment Board bought 65% of Skype for $1.9 billion from eBay, valuing the business at $2.92 billion. In May 2011, Microsoft bought Skype for $8.5 billion and used it to replace their Windows Live Messenger. As of 2011, most of the development team and 44% of all the division's employees were in Tallinn and Tartu, Estonia. Skype originally featured a hybrid peer-to-peer and client–server system. It became entirely powered by Microsoft-operated supernodes in May 2012; in 2017, it changed from a peer-to-peer service to a centralized Azure-based service. As of February 2023, it was used by 36 million people each day.

Etymology

The name for the software is derived from "Sky peer-to-peer", which was then abbreviated to "Skyper". However, some of the domain names associated with "Skyper" were already taken. Dropping the final "r" left the current title "Skype", for which domain names were available.

History

Skype was founded in 2003 by Niklas Zennström, from Sweden, and Janus Friis, from Denmark. The software was created by Estonians Ahti Heinla, Priit Kasesalu, Jaan Tallinn, and Toivo Annus. Friis and Annus are credited with the idea of reducing the cost of voice calls by using a P2P protocol like that of Kazaa.
An early alpha version was created and tested in spring 2003, and the first public beta version was released on 29 August 2003. In June 2005, Skype entered an agreement with Polish web portal Onet.pl for an integrated offering on the Polish market. On 12 September 2005, eBay Inc. agreed to acquire Luxembourg-based Skype Technologies SA for approximately US$2.5 billion in up-front cash and eBay stock, plus potential performance-based consideration. On 1 September 2009, eBay announced it was selling 65% of Skype to Silver Lake, Andreessen Horowitz, and the Canada Pension Plan Investment Board for US$1.9 billion, valuing Skype at US$2.75 billion. On 14 July 2011, Skype partnered with Comcast to bring its video chat service to Comcast subscribers via HDTV sets. On 17 June 2013, Skype released a free video messaging service for Windows, Mac OS, iOS, iPadOS, Android, and BlackBerry. Between 2017 and 2020, Skype collaborated with PayPal to provide a money-send feature, enabling users to transfer funds via the Skype mobile app in the middle of a conversation. In 2019, Skype was declared the sixth most-downloaded mobile app of the decade, from 2010 to 2019.

Microsoft acquisition

On 10 May 2011, Microsoft Corporation acquired Skype Communications S.à r.l. for US$8.5 billion. It was incorporated as a division of Microsoft, which acquired all its technologies with the purchase. The acquisition was completed on 13 October 2011. Microsoft began integrating the Skype service with its own products. Along with taking over the development of existing Skype desktop and mobile apps, it developed a dedicated client app for its then-newly released, touch-focused Windows 8 and Windows RT operating systems, which was made available from the Windows Store when the then-new OS launched on 26 October 2012.
The following year, it became the default messaging app for Windows 8.1, replacing the Windows 8 Messaging app at the time, and was pre-installed on every device that came with or upgraded to 8.1. In a month-long transition from 8 April to 30 April 2013, Microsoft discontinued two of its own products in favor of Skype, including its Windows Live Messenger instant messaging service, although Messenger continued to be available in mainland China until October 2014. On 11 November 2014, Microsoft announced that in 2015, its Lync product would be replaced by Skype for Business, combining the features of Lync and the consumer Skype software. Organizations that used it could switch their users between the default Skype for Business interface and the Lync interface.

Post-acquisition

On 12 August 2013, Skype released the 4.10 update for Apple iPhone and iPad apps that allowed HD-quality video for the iPhone 5 and fourth-generation iPads. On 20 November 2014, Microsoft Office's team announced that a new chat powered by Skype would be implemented in their software, enabling users to chat with co-workers in the same document. On 15 September 2015, Skype announced the release of Mojis ("a brand new way to express yourself on Skype")—short video clips and GIFs featuring characters from films and TV shows that could be entered into conversations like emoticons. Skype worked with Universal Studios, Disney Muppets, BBC and other studios to enhance the Mojis collection. Later that year, Gurdeep Singh Pall, Corporate Vice President of Skype, announced that Microsoft had acquired the technology from Talko. In July 2016, Skype introduced an early Alpha version of a new Skype for Linux client, built with WebRTC technology, after several petitions asked Microsoft to continue development for Linux. In September of that year, Skype updated their iOS app with new features, including an option to call contacts on Skype through Siri voice commands.
In October of that year, Microsoft launched Skype for Business for Mac. In February 2017, Microsoft announced plans to discontinue its Skype Wi-Fi service globally. The application was delisted, and the service itself became non-functional from 31 March 2017. On 5 June 2017, Microsoft announced its plans to revamp Skype with similar features to Snapchat, allowing users to share temporary copies of their photos and video files. In late June 2017, Microsoft rolled out their latest update for iOS, incorporating a revamped design and new third-party integrations, with platforms including Gfycat, YouTube, and UpWorthy. It was not well-received, with numerous negative reviews and complaints that the new client broke existing functionality. Skype later removed this "makeover". In December 2017, Microsoft added "Skype Interviews", a shared code editing system for those wishing to hold job interviews for programming roles. In April 2017, Microsoft eventually moved the service from a peer-to-peer to a central server-based system, enabling cloud-based storage of text messages/pictures and temporary 30-day storage of videos/file attachments/voice messages/call recordings. It also adjusted the user interfaces of apps to make text-based messaging more prominent than voice calling. Skype for Windows, iOS, Android, Mac and Linux all received significant visual overhauls at this time. Users with legacy Skype accounts were able to retain their usernames, while new users are no longer able to manually choose a username. New user registrations associated with a Microsoft account are assigned a username with a live: prefix followed by an autogenerated alphanumeric string.

Features

Registered users of Skype are identified by a unique Skype ID and may be listed in the Skype directory under a Skype username. Skype allows these registered users to communicate through both instant messaging and voice chat.
Voice chat allows telephone calls between pairs of users and conference calling, and uses proprietary audio codecs. Skype's text chat client allows group chats, emoticons, storing chat history, and editing of previous messages. Offline messages were implemented in a beta build of version 5 but removed after a few weeks without notification. The usual features familiar to instant messaging users—user profiles, online status indicators, and so on—are also included. The Online Number, a.k.a. SkypeIn, service allows Skype users to receive calls on their computers dialed by conventional phone subscribers to a local Skype phone number; local numbers are available for Australia, Belgium, Brazil, Chile, Colombia, Denmark, the Dominican Republic, Estonia, Finland, France, Germany, Hong Kong, Hungary, India, Ireland, Japan, Mexico, Nepal, New Zealand, Poland, Romania, South Africa, South Korea, Sweden, Switzerland, Turkey, the Netherlands, the United Kingdom, and the United States. A Skype user can have local numbers in any of these countries, with calls to the number charged at the same rate as calls to fixed lines in the country. Skype supports conference calls, video chats, and screen sharing between 25 people at a time for free, a limit that was increased to 50 on 5 April 2019. Skype does not provide the ability to call emergency numbers, such as 112 in Europe, 911 in North America, 999 in the UK or 100 in India and Nepal. However, as of December 2012, there is limited support for emergency calls in the United Kingdom, Australia, Denmark, and Finland. The U.S. Federal Communications Commission (FCC) has ruled that, for the purposes of section 255 of the Telecommunications Act, Skype is not an "interconnected VoIP provider". As a result, the U.S. National Emergency Number Association recommends that all VoIP users have an analog line available as a backup.
In 2019, Skype added an option to blur the background in a video chat interface using AI algorithms, done purely in software, even though a depth-sensing camera is not present in most webcams. In 2023, Skype added the Bing AI chatbot to the platform for users who had access to the chatbot.

Usage and traffic

At the end of 2010, there were over 660 million worldwide users, with over 300 million estimated active each month as of August 2015. At one point in February 2012, there were 34 million users concurrently online on Skype. In January 2011, after the release of video calling on the Skype client for iPhone, Skype reached a record 27 million simultaneous online users. This record was broken with 29 million simultaneous online users on 21 February 2011 and again on 28 March 2011 with 30 million online users. On 25 February 2012, Skype announced that it had over 32 million users concurrently online for the first time. By 5 March 2012, it had 36 million simultaneous online users, and less than a year later, on 21 January 2013, Skype had more than 50 million concurrent users online. In June 2012, Skype had surpassed 70 million downloads on Android. On 19 July 2012, Microsoft announced that Skype users had logged 115 billion minutes of calls in the quarter, up 50% from the previous quarter. On 15 January 2014, TeleGeography estimated that Skype-to-Skype international traffic grew 36% in 2013, to 214 billion minutes. As of March 2020, Skype was used by 100 million people at least once a month and by 40 million people each day. At the end of March 2020, there was a 70% increase in the number of daily users from the previous month, due to the COVID-19 pandemic. However, Skype also lost a large part of its market share to Zoom.

System and software

Client applications and devices

Windows client

Multiple versions of Skype have been released for Windows since its inception. The original line of Skype applications continued from versions 1.0 through 4.0.
It has offered a desktop-only program since 2003. Later, a mobile version was created for Windows Phones. In 2012, Skype introduced a new version for Windows 8 similar to the Windows Phone version. On 7 July 2015, Skype modified the application to direct Windows users to download the desktop version, but it was set to continue working on Windows RT until October 2016. In November 2015, Skype introduced three new applications, called Messaging, Skype Video, and Phone, intended to provide an integrated Skype experience on Windows 10. On 24 March 2016, Skype acknowledged that the integrated applications did not satisfy most users' needs and announced that they and the desktop client would eventually be replaced with a new UWP application, which was released as a preview version for the Windows 10 Anniversary Update and became the stable version with the release of the Windows 10 Creators Update. The latest version of Skype for Windows is Skype 11, which is based on the Universal Windows Platform and runs on various Windows 10-related systems, including Xbox One, Xbox Series X/S, Windows phones, and Microsoft HoloLens. Microsoft still offers the older Skype 8, which is Win32-based and runs on all systems from Windows XP (which is otherwise unsupported by Microsoft) to the most recent release of Windows 10. In late 2017, this version was upgraded to Skype 12.9, in which several features were removed and others added.

Other desktop clients

macOS (10.9 or newer)
Linux (Debian, Debian-based (Ubuntu, etc.), Fedora, openSUSE)

Mobile clients

iOS
Android

Skype formerly provided a client for feature phones that ran on J2ME, Nokia X, Symbian, BlackBerry OS and BlackBerry 10 devices. In May 2009, version 3.0 was available on Windows Mobile 5 to 6.1, and in September 2015, version 2.29 was available on Windows Phone 8.1; in 2016, Microsoft announced that this would stop working in early 2017 once Skype's transition from peer-to-peer to client-server was complete.
Other platforms

The Nokia N800, N810, and N900 Internet tablets, which run Maemo
The Nokia N9, which runs MeeGo, comes with Skype voice calling and text messaging integrated; however, it lacks video calling.
Both the Sony Mylo COM-1 and COM-2 models
The PlayStation Portable Slim and Lite series, though the user needs to purchase a specially designed microphone peripheral. The PSP-3000 has a built-in microphone, which allows communication without the Skype peripheral. The PSP Go can use Bluetooth connections with the Skype application, in addition to its built-in microphone. Skype for PlayStation Vita may be downloaded via the PlayStation Network in the U.S. It includes the capability to receive incoming calls with the application running in the background.
The Samsung Smart TV had a Skype app, which could be downloaded for free. It used the built-in camera and microphone for the newer models. Alternatively, a separate mountable Skype camera with built-in speakers and microphones was available to purchase for older models. This functionality has since been disabled, along with any other "TV-based" Skype clients.

Some devices were made to work with Skype by talking to a desktop Skype client or by embedding Skype software into the device. These were usually either tethered to a PC or had a built-in Wi-Fi client to allow calling from Wi-Fi hotspots, like the Netgear SPH101 Skype Wi-Fi Phone, the SMC WSKP100 Skype Wi-Fi Phone, the Belkin F1PP000GN-SK Wi-Fi Skype Phone, the Panasonic KX-WP1050 Wi-Fi Phone for Skype Executive Travel Set, the IPEVO So-20 Wi-Fi Phone for Skype, and the Linksys CIT200 Wi-Fi Phone.
3G Skypephone, created in collaboration between Skype and 3 in 2007

Third-party licensing

Third-party developers, such as Truphone, Nimbuzz, and Fring, previously allowed Skype to run in parallel with several other competing VoIP/IM networks (Truphone and Nimbuzz provided TruphoneOut and NimbuzzOut as competing paid services) in any Symbian or Java environment. Nimbuzz made Skype available to BlackBerry users, and Fring provided mobile video calling over Skype as well as support for the Android platform. Skype disabled access to Skype by Fring users in July 2010. Nimbuzz discontinued support of Skype on request in October 2010. Before and during the Microsoft acquisition, Skype withdrew licensing from several third parties producing software and hardware compatible with Skype. The Skype for Asterisk product from Digium was withdrawn as "no longer available for sale". The Senao SN358+ long-range (10–15 km) cordless phone was discontinued due to loss of licenses to participate in the Skype network as peers. In combination, these two products made it possible to create roaming cordless mesh networks with a robust handoff.

Technology

Protocol

Skype uses a proprietary Internet telephony (VoIP) network called the Skype protocol. The protocol has not been made publicly available by Skype, and official applications using the protocol are also proprietary. Part of the Skype technology relies on the Global Index P2P protocol belonging to the Joltid Ltd. corporation. The main difference between Skype and standard VoIP clients is that Skype operates on a peer-to-peer model (originally based on the Kazaa software), rather than the more usual client-server model (note that the very popular Session Initiation Protocol (SIP) model of VoIP is also peer-to-peer, but implementation generally requires registration with a server, as does Skype). On 20 June 2014, Microsoft announced the deprecation of the old Skype protocol.
Within several months from this date, in order to continue using Skype services, Skype users would have to update to Skype applications released in 2014. The new Skype protocol, Microsoft Notification Protocol 24, was released. The deprecation became effective in the second week of August 2014. Transferred files are now saved on central servers. As far as networking stack support is concerned, Skype only supports the IPv4 protocol. It lacks support for the next-generation Internet protocol, IPv6. Skype for Business, however, includes support for IPv6 addresses, along with continued support of IPv4.

Protocol detection and control

Many networking and security companies have claimed to detect and control Skype's protocol for enterprise and carrier applications. While the specific detection methods used by these companies are often private, Pearson's chi-squared test and naive Bayes classification are two approaches that were published in 2008. Combining statistical measurements of payload properties (such as byte frequencies and initial byte sequences) as well as flow properties (like packet sizes and packet directions) has also been shown to be an effective method for identifying Skype's TCP- and UDP-based protocols.

Audio codecs

Skype 2.x used G.729, Skype 3.2 introduced SVOPC, and Skype 4.0 added a Skype-created codec called SILK, intended to be "lightweight and embeddable". Additionally, Skype has released Opus as a free codec, which integrates the SILK codec principles for voice transmission with the CELT codec principles for higher-quality audio transmissions, such as live music performances. Opus was submitted to the Internet Engineering Task Force (IETF) in September 2010. Since then, it has been standardized as RFC 6716.

Video codecs

VP7 is used for versions prior to Skype 5.5. As of version 7.0, H.264 is used for both group and one-on-one video chat, at standard definition, 720p, and 1080p high definition.

Skype Qik

Skype acquired the video service Qik in 2011.
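The published payload-statistics approach mentioned under "Protocol detection and control" can be illustrated with a small sketch. This is not Skype's or any vendor's actual detector: the threshold and sample traffic below are made-up illustrations of the general idea that encrypted or compressed payloads have near-uniform byte frequencies, while plaintext protocols do not.

```python
# Illustrative payload-based protocol detection: a Pearson chi-squared
# test of a packet payload's byte-frequency histogram against a uniform
# model. Encrypted traffic scores near the degrees of freedom (255);
# plaintext scores far higher. The threshold is a made-up example value.
import random

def chi_squared_uniform(payload: bytes) -> float:
    """Pearson chi-squared statistic of byte frequencies vs. uniform."""
    counts = [0] * 256
    for b in payload:
        counts[b] += 1
    expected = len(payload) / 256
    return sum((c - expected) ** 2 / expected for c in counts)

def looks_encrypted(payload: bytes, threshold: float = 500.0) -> bool:
    # Low statistic = close to uniform = plausibly encrypted/compressed.
    return chi_squared_uniform(payload) < threshold

rng = random.Random(42)
ciphertext_like = bytes(rng.randrange(256) for _ in range(4096))
plaintext_like = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n" * 90

print(looks_encrypted(ciphertext_like))  # True
print(looks_encrypted(plaintext_like))   # False
```

A real classifier, as the text notes, would combine this payload statistic with flow features such as packet sizes and directions rather than rely on one test.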
After shutting down Qik in April 2014, Skype relaunched the service as Skype Qik on 14 October 2014. Although Qik offered video conferencing and Internet streaming, the new service focused on mobile video messaging between individuals and groups.

Hyperlink format

Skype uses URIs of the form skype:USER?call to initiate a call.

Security and privacy

Skype was initially claimed to be a secure communication tool, with one of its early web pages stating "highly secure with end-to-end encryption". Security services were invisible to the user, and encryption could not be disabled. Skype claims to use publicly documented, widely trusted encryption techniques for Skype-to-Skype communication: RSA for key negotiation and the Advanced Encryption Standard to encrypt conversations. However, it is impossible to verify that these algorithms are used correctly, completely, and at all times, as there is no public review possible without a protocol specification and/or the program's source code. Skype provides an uncontrolled registration system for users with no proof of identity. Instead, users may choose a screen name that does not have to relate to their real-life identity in any way; a name chosen could also be an impersonation attempt, where the user claims to be someone else for fraudulent purposes. A third-party paper analyzing the security and methodology of Skype was presented at Black Hat Europe 2006. It analyzed Skype and found a number of security issues with the then-current security model. Skype incorporates some features that tend to hide its traffic, but it is not specifically designed to thwart traffic analysis and therefore does not provide anonymous communication. Some researchers have been able to watermark the traffic so that it is identifiable even after passing through an anonymizing network. In an interview, Kurt Sauer, the Chief Security Officer of Skype, said, "We provide a safe communication option. I will not tell you whether we can listen or not."
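The skype:USER?call link format described under "Hyperlink format" above is an ordinary URI and can be built and inspected with standard URL tooling; a minimal sketch in Python (echo123 is Skype's well-known test-call account, used here purely as an example name):

```python
# Minimal sketch of the skype:USER?call URI format described above.
from urllib.parse import urlparse

def make_skype_uri(user: str, action: str = "call") -> str:
    # Other actions (e.g. "chat") exist in some clients, but only
    # "call" is attested by the surrounding text.
    return f"skype:{user}?{action}"

uri = make_skype_uri("echo123")
parts = urlparse(uri)
print(parts.scheme, parts.path, parts.query)  # skype echo123 call
```

Note the scheme carries the user in the path component (there is no // authority part), which is why urlparse reports an empty netloc for these links.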
This statement did not deny that the U.S. National Security Agency (NSA) monitors Skype conversations. Skype's client uses an undocumented and proprietary protocol. The Free Software Foundation (FSF) is concerned about user privacy issues arising from using proprietary software and protocols and has made a replacement for Skype one of its high-priority projects. Security researchers Biondi and Desclaux have speculated that Skype may have a back door, since Skype sends traffic even when it is turned off and because Skype has taken extreme measures to obfuscate the program's traffic and functioning. Several media sources reported that at a meeting about the "Lawful interception of IP based services" held on 25 June 2008, high-ranking unnamed officials at the Austrian interior ministry said that they could listen in on Skype conversations without problems. The Austrian public broadcasting service ORF, citing minutes from the meeting, reported that "the Austrian police are able to listen in on Skype connections". Skype declined to comment on the reports. One easily demonstrated method of monitoring is to set up two computers with the same Skype user ID and password. When a message is typed or a call is received on one computer, the second computer duplicates the audio and text. This requires knowledge of the user ID and password. The United States Federal Communications Commission (FCC) has interpreted the Communications Assistance for Law Enforcement Act (CALEA) as requiring digital phone networks to allow wiretapping if authorized by an FBI warrant, in the same way as other phone services. In February 2009, Skype said that, not being a telephone company owning phone lines, it is exempt from CALEA and similar laws, which regulate US phone companies, and it is not clear whether Skype could support wiretapping even if it wanted to. According to the ACLU, the Act is inconsistent with the original intent of the Fourth Amendment to the U.S.
Constitution; more recently, the ACLU has expressed concern that the FCC interpretation of the Act is incorrect. It has been suggested that Microsoft made changes to Skype's infrastructure to ease various wiretapping requirements; however, Skype denies the claims. Sometime before Skype was sold in 2009, the company had started Project Chess, a program to explore legal and technical ways to easily share calls with intelligence agencies and law enforcement. On 20 February 2009, the European Union's Eurojust agency announced that the Italian Desk at Eurojust would "play a key role in the coordination and cooperation of the investigations on the use of internet telephony systems (VoIP), such as 'Skype'. [...] The purpose of Eurojust's coordination role is to overcome the technical and judicial obstacles to the interception of internet telephony systems, taking into account the various data protection rules and civil rights." In November 2010, a flaw was disclosed to Skype that showed how computer crackers could secretly track any user's IP address. Due to Skype's peer-to-peer nature, this was a difficult issue to address, but the bug was eventually remedied in a 2016 update. In 2012, Skype introduced automatic updates to better protect users from security risks, but the change met some resistance from users of the Mac product: from version 5.6 on, updates cannot be disabled on either Mac OS or Windows, although on Windows, from version 5.9 on, automatic updating can be turned off in certain cases. According to a 2012 Washington Post article, Skype "has expanded its cooperation with law enforcement authorities to make online chats and other user information available to police"; the article additionally mentions that Skype made changes to allow authorities access to addresses and credit card numbers.
In November 2012, Skype was reported to have handed over user data of a pro-WikiLeaks activist to Dallas, Texas-based private security company iSIGHT Partners without a warrant or court order. The alleged handover would be a breach of Skype's privacy policy. Skype responded with a statement that it had launched an internal investigation to probe the breach of user data privacy. On 13 November 2012, a Russian user published a flaw in Skype's security that allowed any person to take over a Skype account knowing only the victim's email address by following seven steps. The vulnerability was claimed to have existed for months, and remained open for more than 12 hours after being widely publicized. On 14 May 2013, it was documented that a URL sent via a Skype instant messaging session was intercepted by the Skype service and subsequently used in an HTTP HEAD query originating from an IP address registered to Microsoft in Redmond (the IP address used was 65.52.100.214). The Microsoft query used the full URL supplied in the IM conversation and was generated by a previously undocumented security service. Security experts speculate that the action was triggered by a technology similar to Microsoft's SmartScreen Filter used in its browsers. The 2013 mass surveillance disclosures revealed that agencies such as the NSA and the FBI have the ability to eavesdrop on Skype, including the monitoring and storage of text and video calls and file transfers. The PRISM surveillance program, which requires FISA court authorization, reportedly has allowed the NSA unfettered access to Skype's data center supernodes. According to the leaked documents, integration work began in November 2010, but it was not until February 2011 that the company was served with a directive to comply signed by the attorney general, with NSA documents showing that collection began on 31 March 2011. On 10 November 2014, Skype scored 1 out of 7 points on the Electronic Frontier Foundation's secure messaging scorecard.
Skype received a point for encryption during transit but lost points because communications are not encrypted with a key the provider does not have access to (i.e., the communications are not end-to-end encrypted), users cannot verify contacts' identities, past messages are not secure if the encryption keys are stolen (i.e., the service does not provide forward secrecy), the code is not open to independent review (i.e., not available to merely view, nor under a free-software license), the security design is not properly documented, and there has not been a recent independent security audit. AIM, BlackBerry Messenger, eBuddy XMS, Hushmail, Kik Messenger, Viber, and Yahoo Messenger also scored 1 out of 7 points. As of August 2018, Skype supports end-to-end encryption across all platforms.

Cybercrime on application

Cybersex trafficking has occurred on Skype and other videoconferencing applications. According to the Australian Federal Police, overseas pedophiles have directed child sex abuse using its live streaming services.

Service in the People's Republic of China

Since September 2007, users in China trying to download the Skype software client have been redirected to the site of TOM Online, a joint venture between a Chinese wireless operator and Skype, from which a modified Chinese version can be downloaded. The TOM client participates in China's system of Internet censorship, monitoring text messages between Skype users in China as well as messages exchanged with users outside the country. Niklas Zennström, then chief executive of Skype, told reporters that TOM "had implemented a text filter, which is what everyone else in that market is doing. Those are the regulations." He also stated, "One thing that's certain is that those things are in no way jeopardising the privacy or the security of any of the users."
In October 2008, it was reported that TOM had been saving the full message contents of some Skype text conversations on its servers, apparently focusing on conversations containing political issues such as Tibet, Falun Gong, Taiwan independence, and the Chinese Communist Party. The saved messages contain personally identifiable information about the message senders and recipients, including IP addresses, usernames, landline phone numbers, and the entire content of the text messages, including the time and date of each message. Information about Skype users outside China who were communicating with a TOM-Skype user was also saved. A server misconfiguration made these log files accessible to the public for a time. Research on the TOM-Skype venture has revealed information about blacklisted keyword checks, allowing censorship and surveillance of its users. The partnership has received much criticism for the latter. Microsoft remains unavailable for comment on the issue. According to reports from the advocacy group GreatFire, Microsoft has modified censorship restrictions and ensured encryption of all user information. Furthermore, Microsoft is now partnered with Guangming Founder (GMF) in China. All attempts to visit the official Skype web page from mainland China redirect the user to skype.gmw.cn. The Linux version of Skype is unavailable there.

Localization

Skype comes bundled with the following locales and languages: Arabic, Bulgarian, Catalan, Chinese (Traditional and Simplified), Croatian, Czech, Danish, Dutch, English, Estonian, Finnish, French, German, Greek, Hebrew, Hungarian, Indonesian, Italian, Japanese, Korean, Latvian, Lithuanian, Nepali, Norwegian, Polish, Portuguese (Brazilian and European), Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, Swedish, Thai, Turkish, Ukrainian, and Vietnamese.
As the Windows desktop program offers users the option of creating new language files, at least 80 other (full or partial) localizations are also available for many languages.

Customer service

In January 2010, Skype rescinded its policy of seizing funds in Skype accounts that had been inactive (no paid call) for 180 days. This was in settlement of a class-action lawsuit. The company also paid up to US$4 to persons who opted into the action. As of February 2012, Skype provided support through its web support portal, support community, @skypesupport on Twitter, and the Skype Facebook page. Direct contact via email and live chat is available through the web support portal. Chat support is a premium feature available to Skype Premium and some other paid users. Skype's refund policy states that it will provide refunds in full if customers have used less than €1 of their Skype Credit: "Upon a duly submitted request, Skype will refund you on a pro-rata basis for the unused period of a Product." Skype has come under some criticism from users for the inability to completely close accounts. Users not wanting to continue using Skype can make their account inactive by deleting all personal information, except for the username. Due to an outage on 21 September 2015 that affected several users in New Zealand, Australia, and other countries, Skype compensated its customers with 20 minutes of free calls to over 60 landline and 8 mobile phone destinations.

Educational use

Although Skype is a commercial product, its non-paid version is used with increasing frequency among teachers, schools, and charities interested in global education projects. A popular use case is to facilitate language learning through conversations that alternate between each participant's native language.
The video conferencing aspect of the software has been praised for its ability to connect students who speak different languages, facilitate virtual field trips, and allow direct engagement with experts. Skype in the Classroom is another free-of-charge tool that Skype has set up on its website, designed to encourage teachers to make their classrooms more interactive and to collaborate with other teachers around the world. There are various Skype lessons in which students can participate. Teachers can also use a search tool to find experts in a particular field. The educational program Skype a Scientist, set up by biologist Sarah McAnulty in 2017, had in two years connected 14,312 classrooms with over 7,000 volunteer scientists. However, Skype is not universally adopted; some educational institutions in the United States and Europe have blocked the application from their networks.
Cirsium
Cirsium is a genus of perennial and biennial flowering plants in the Asteraceae, one of several genera known commonly as thistles. They are more precisely known as plume thistles. These differ from other thistle genera (Carduus, Silybum and Onopordum) in having achenes with a pappus of feathered hairs; the other genera have a pappus of simple unbranched hairs. They are mostly native to Eurasia and northern Africa, with about 60 species from North America (although several species have been introduced outside their native ranges). The lectotype species of the genus is Cirsium heterophyllum (L.) Hill. Cirsium thistles are known for their effusive flower heads, usually purple, rose or pink, also yellow or white. The radially symmetrical disc flowers are at the ends of the branches and are visited by many kinds of insects, featuring a generalised pollination syndrome. The plants have erect stems, with a characteristic enlarged base of the flower head which is often spiny. The leaves are alternate, spiny in many (but not all) species, and in some species can be slightly to densely hairy. Extensions from the leaf base down the stem, called wings, can be lacking (Cirsium arvense), conspicuous (Cirsium vulgare), or inconspicuous. The plants can spread by seed and also by rhizomes below the surface (Cirsium arvense). The seeds have a tuft of hair, or pappus, which can carry them far by wind. Cirsium thistles are used as food plants by the larvae of some Lepidoptera species: see the list of Lepidoptera that feed on Cirsium. The seeds are attractive to small finches such as the American goldfinch. Many species are considered weeds, typically by agricultural interests. Cirsium vulgare (spear thistle) is listed as a noxious weed in nine US states (where, as a non-native invasive species, it is known as "bull thistle").
Some species in particular are cultivated in gardens and wildflower plantings for their aesthetic value and/or to support pollinators such as bees and butterflies. Some species dubbed weeds by various interest groups can also provide these benefits. Cirsium vulgare, for instance, ranked in the top 10 for nectar production in a UK plants survey conducted by the AgriLand project, which is supported by the UK Insect Pollinators Initiative. Cirsium vulgare was also a top producer of nectar sugar in another study in Britain, ranked third with a production per floral unit of 2323 ± 418 μg. Not only does it provide abundant nectar, it also provides seeds for birds, such as the European goldfinch Carduelis carduelis, and supports the larvae of the painted lady butterfly Vanessa cardui. Some other common species are Cirsium arvense, Cirsium palustre, and Cirsium oleraceum. Some ecological organizations, such as the Xerces Society, have attempted to raise awareness of the benefits of thistles, to counteract the general agricultural and home-garden labeling of thistles as unwanted weeds. The monarch butterfly (Danaus plexippus), for instance, has been highlighted as relying upon thistles such as the tall thistle (Cirsium altissimum) as nectar sources during its migration. Some prairie and wildflower seed production companies in the United States supply bulk seed of native North American thistle species for wildlife habitat restoration, although availability tends to be low. Thistles are particularly valued by bumblebees for their high nectar production. Certain species of Cirsium, like Cirsium monspessulanum, Cirsium pyrenaicum and Cirsium vulgare, have been traditionally used as food in rural areas of southern Europe. Cirsium oleraceum is cultivated as a food source in Japan and India. Cirsium setidens is used as a vegetable in Korean cuisine. The name Cirsium derives from the Greek word for thistle, kirsos, itself likely derived from a term for 'swollen vein'. The flowers bloom from April to August.
Selected species 383 species are accepted. Selected species include: Cirsium acaule – dwarf thistle Cirsium altissimum – roadside thistle, tall thistle Cirsium andersonii – Anderson's thistle, rose thistle Cirsium andrewsii – Franciscan thistle Cirsium arisanense Cirsium arizonicum – Arizona thistle Cirsium arvense – creeping thistle, field thistle Cirsium arvense var. argenteum Cirsium arvense var. integrifolium Cirsium arvense var. mite Cirsium arvense var. vestitum Cirsium barnebyi – Barneby's thistle Cirsium boninense Cirsium brachycephalum Cirsium brevifolium – Palouse thistle Cirsium brevistylum – clustered thistle Cirsium calcareum – Cainville thistle Cirsium canescens – Platte thistle, prairie thistle Cirsium canum – Queen Anne's thistle Cirsium carolinianum – Carolina thistle, soft thistle Cirsium centaureae Cirsium chellyense – queen thistle Cirsium ciliolatum – ashland thistle Cirsium clavatum – lake thistle Cirsium clokeyi – Charleston Mountain thistle, whitespine thistle Cirsium congdonii – rosette thistle Cirsium crassicaule – slough thistle Cirsium creticum Cirsium cymosum – peregrine thistle Cirsium discolor – field thistle, pasture thistle Cirsium dissectum – meadow thistle (syn.
Cirsium lanceolatum non ) Cirsium douglasii Cirsium drummondii – dwarf thistle Cirsium durangense Cirsium eatonii – Eaton's thistle Cirsium edule – edible thistle Cirsium engelmannii – Engelmann thistle, Engelmann's thistle Cirsium erisithales – yellow melancholy thistle Cirsium esculentum Cirsium flodmanii – Flodman thistle, Flodman's thistle Cirsium foliosum – Drummond's thistle, elk thistle, leafy thistle, meadow thistle Cirsium fontinale – fountain thistle Cirsium funkiae – funky thistle Cirsium grahamii – Graham's thistle Cirsium griseum – gray thistle Cirsium helenioides Cirsium heterophyllum – melancholy thistle Cirsium hookerianum – white thistle Cirsium horridulum – yellow thistle Cirsium hydrophilum – Suisun thistle Cirsium hypoleucum Cirsium jaliscoense Cirsium japonicum – Japanese thistle Cirsium kamtschaticum – Kamchatka thistle Cirsium lecontei – Le Conte's thistle Cirsium leo Cirsium libanoticum Cirsium loncholepis – LaGraciosa thistle Cirsium longistylum – longstyle thistle Cirsium maritimum Cirsium mexicanum – Mexican thistle Cirsium mohavense – Mojave thistle Cirsium muticum – swamp thistle Cirsium neomexicanum – lavender thistle, New Mexico thistle, powderpuff thistle Cirsium nipponicum Cirsium nuttallii – Nuttall's thistle Cirsium occidentale – cobweb thistle Cirsium ochrocentrum – yellowspine thistle Cirsium oleraceum – cabbage thistle Cirsium ownbeyi – Ownbey's thistle Cirsium palustre – marsh thistle Cirsium parryi – Parry's thistle or Cloudcroft thistle Cirsium peckii – Steens Mountain thistle Cirsium pendulum Cirsium perplexans – Rocky Mountain thistle Cirsium pitcheri – Pitcher's thistle, sand dune thistle Cirsium praeteriens – Palo Alto thistle, lost thistle Cirsium pulcherrimum – Wyoming thistle Cirsium pumilum – pasture thistle Cirsium pumilum var. hillii – Hill's thistle Cirsium pumilum var. pumilum Cirsium pyrenaicum Cirsium quercetorum – Alameda County thistle Cirsium remotifolium – fewleaf thistle Cirsium remotifolium var. 
odontolepis Cirsium remotifolium var. remotifolium – fewleaf thistle Cirsium remotifolium var. rivulare Cirsium repandum – sandhill thistle Cirsium rhaphilepis Cirsium rhinoceros – Korean prickly thistle Cirsium rhothophilum – surf thistle Cirsium rivulare – brook thistle Cirsium rydbergii – Rydberg's thistle Cirsium scapanolepis – mountain slope thistle Cirsium scariosum – meadow thistle Cirsium scopulorum – mountain thistle Cirsium setidens – gondre or Korean thistle Cirsium spinosissimum Cirsium texanum – Texas thistle Cirsium tioganum – stemless thistle †Cirsium toyoshimae Cirsium tuberosum – tuberous thistle Cirsium turneri – cliff thistle Cirsium undulatum – gray thistle, wavy-leaf thistle, wavyleaf thistle Cirsium undulatum var. tracyi – Tracy's thistle, wavyleaf thistle Cirsium undulatum var. undulatum – wavyleaf thistle Cirsium vinaceum – Sacramento Mountain thistle, Sacramento Mountains thistle Cirsium virginianum – Virginia thistle Cirsium vulgare – spear thistle, common thistle (syn. Cirsium lanceolatum ) Cirsium wheeleri – Wheeler's thistle Cirsium wrightii – Wright's thistle Hybrids Cirsium × canalense – canal thistle Cirsium × crassum – thistle Cirsium × erosum – glory thistle Cirsium × iowense – Iowa thistle Cirsium × vancouverense – Vancouver thistle Formerly placed here Afrocirsium buchwaldii (as Cirsium buchwaldii ) Afrocirsium schimperi (as Cirsium schimperi ) Afrocirsium straminispinum (Cirsium straminispinum ) Epitrachys italica (as Cirsium italicum ) Lophiolepis eriophora (as Cirsium eriophorum ) – woolly thistle Nuriaea dender (as Cirsium dender ) Nuriaea engleriana (as Cirsium englerianum ) Picnomon acarna (as Cirsium acarna ) – soldier thistle
Gasket
A gasket is a mechanical seal which fills the space between two or more mating surfaces, generally to prevent leakage from or into the joined objects while under compression. It is a deformable material that is used to create a static seal and maintain that seal under various operating conditions in a mechanical assembly. Gaskets allow for "less-than-perfect" mating surfaces on machine parts where they can fill irregularities. Gaskets are commonly produced by cutting from sheet materials. Given the potential cost and safety implications of faulty or leaking gaskets, it is critical that the correct gasket material is selected to fit the needs of the application. Gaskets for specific applications, such as high pressure steam systems, may contain asbestos. However, due to health hazards associated with asbestos exposure, non-asbestos gasket materials are used when practical. It is usually desirable that the gasket be made from a material that is to some degree yielding such that it is able to deform and tightly fill the space it is designed for, including any slight irregularities. Some types of gaskets require a sealant be applied directly to the gasket surface to function properly. Some (piping) gaskets are made entirely of metal and rely on a seating surface to accomplish the seal; the metal's own spring characteristics are utilized (up to but not passing σy, the material's yield strength). This is typical of some "ring joints" (RTJ) or some other metal gasket systems. These joints are known as R-con and E-con compressive type joints. Some gaskets are dispensed and cured in place. These materials are called formed-in-place gaskets. Properties Gaskets are normally made from a flat material, a sheet such as paper, rubber, silicone, metal, cork, felt, neoprene, nitrile rubber, fiberglass, polytetrafluoroethylene (otherwise known as PTFE or Teflon) or a plastic polymer (such as polychlorotrifluoroethylene). 
One of the more desirable properties of an effective gasket in industrial applications for compressed fiber gasket material is the ability to withstand high compressive loads. Most industrial gasket applications involve bolts exerting compression well into the 14 MPa (2000 psi) range or higher. Generally speaking, there are several truisms that allow for better gasket performance. One of the more tried and tested is: "The more compressive load exerted on the gasket, the longer it will last". There are several ways to measure a gasket material's ability to withstand compressive loading. The "hot compression test" is probably the most accepted of these tests. Most manufacturers of gasket materials will provide or publish the results of these tests. Types Gaskets come in many different designs based on industrial usage, budget, chemical contact and physical parameters: Sheet gaskets Gaskets can be produced by punching the required shape out of a sheet of flat, thin material, resulting in a sheet gasket. Sheet gaskets are fast and cheap to produce, and can be made from a variety of materials, among them fibrous materials, matted graphite and, in the past, compressed asbestos. These gaskets can meet various chemical requirements based on the inertness of the material used. Non-asbestos gasket sheet is durable and available in multiple materials and thicknesses. Material examples are mineral, carbon, or synthetic rubbers such as EPDM, nitrile, neoprene, natural rubber and SBR insertion, each of which has unique properties suitable for different applications. Applications using sheet gaskets involve acids, corrosive chemicals, steam or mild caustics. Flexibility and good recovery prevent breakage during installation of a sheet gasket. Solid material gaskets The idea behind solid material gaskets is to use metals which cannot be punched out of sheets but are still cheap to produce.
These gaskets generally have a much higher level of quality control than sheet gaskets and generally can withstand much higher temperatures and pressures. The key downside is that a solid metal must be greatly compressed in order to become flush with the flange head and prevent leakage. The material choice is more difficult; because metals are primarily used, process contamination and oxidation are risks. An additional downside is that the metal used must be softer than the flange, to ensure that the flange does not warp and thereby prevent sealing with future gaskets. Even so, these gaskets have found a niche in industry. Spiral-wound gaskets Spiral-wound gaskets comprise a mix of metallic and filler material. Generally, the gasket has a metal (normally carbon-rich or stainless steel) wound outwards in a circular spiral (other shapes are possible) with the filler material (generally a flexible graphite) wound in the same manner but starting from the opposing side. This results in alternating layers of filler and metal. The filler material in these gaskets acts as the sealing element, with the metal providing structural support. These gaskets have proven to be reliable in most applications, and allow lower clamping forces than solid gaskets, albeit with a higher cost. Constant seating stress gaskets The constant seating stress gasket consists of two components: a solid carrier ring of a suitable material, such as stainless steel, and two sealing elements of some compressible material installed within two opposing channels, one channel on either side of the carrier ring. The sealing elements are typically made from a material (expanded graphite, expanded polytetrafluoroethylene (PTFE), vermiculite, etc.) suitable to the process fluid and application. Constant seating stress gaskets derive their name from the fact that the carrier ring profile takes flange rotation (deflection under bolt preload) into consideration.
With all other conventional gaskets, as the flange fasteners are tightened, the flange deflects radially under load, resulting in the greatest gasket compression, and highest gasket stress, at the outer gasket edge. Since the carrier ring of a constant seating stress gasket is made for a given flange size, pressure class, and material with this deflection taken into account, its profile can be adjusted so that the gasket seating stress is radially uniform across the entire sealing area. Further, because the sealing elements are fully confined by the flange faces in opposing channels on the carrier ring, any in-service compressive forces acting on the gasket are transmitted through the carrier ring and avoid any further compression of the sealing elements, thus maintaining a 'constant' gasket seating stress while in service. The gasket is therefore immune to common gasket failure modes that include creep relaxation, high system vibration, and system thermal cycles. The fundamental concept underlying the improved sealability of constant seating stress gaskets is that if (i) the flange sealing surfaces are capable of attaining a seal, (ii) the sealing elements are compatible with the process fluid and application, and (iii) sufficient gasket seating stress is achieved on installation to effect a seal, then the possibility of the gasket leaking in service is greatly reduced or eliminated altogether. Double-jacketed gaskets Double-jacketed gaskets are another combination of filler material and metallic materials. In this application, a tube with ends that resemble a "C" is made of the metal, with an additional piece made to fit inside of the "C", making the tube thickest at the meeting points. The filler is pumped between the shell and piece.
When in use, the compressed gasket has a larger amount of metal at the two tips where contact is made (due to the shell/piece interaction), and these two places bear the burden of sealing the process. Since all that is needed is a shell and piece, these gaskets can be made from almost any material that can be made into a sheet, with a filler then inserted. Kammprofile gaskets Kammprofile gaskets (sometimes spelled "Camprofile", as their profile resembles that of a camshaft) are used in many older seals since they have both a flexible nature and reliable performance. Kammprofiles work by having a solid corrugated core with a flexible covering layer. This arrangement allows for very high compression and an extremely tight seal along the ridges of the gasket. Since generally the graphite will fail instead of the metal core, a Kammprofile gasket can be repaired during later plant inactivity. Kammprofile has a high capital cost for most applications, but this is countered by long life and increased reliability. Fishbone gaskets Fishbone gaskets are direct replacements for Kammprofile and spiral-wound gaskets. They are fully CNC-machined from similar materials, but the design of the gaskets has eliminated inherent shortcomings. Fishbone gaskets do not unwind in storage or in the plant. The rounded edges do not cause flange damage. The added "stop step" prevents the fishbone gasket from being over-compressed or crushed, as is often caused by hot-torque techniques on plant start-up. The bones of the gasket remain ductile and adjust to thermal cycling and system pressure spikes, resulting in a durable and reliable flange seal that significantly outperforms other gaskets of this nature. Flange gasket A flange gasket is a type of gasket made to fit between two sections of pipe that are flared to provide higher surface area.
Flange gaskets come in a variety of sizes and are categorized by their inside diameter and their outside diameter. There are many standards for the gaskets of pipe flanges. The gaskets for flanges can be divided into four major categories: Sheet gaskets Corrugated metal gaskets Ring gaskets Spiral wound gaskets Sheet gaskets are simple: they are cut to size, either with bolt holes or without, for standard sizes, with various thicknesses and materials suitable to the media, temperature and pressure of the pipeline. Ring gaskets, also known as RTJ gaskets, are mostly used in offshore oil and gas pipelines and are designed to work under extremely high pressure. They are solid rings of metal with different cross-sections, such as oval, round and octagonal. Sometimes they come with a hole in the center for pressure. Spiral wound gaskets are also used in high-pressure pipelines and are made with stainless steel outer and inner rings and a center filled with spirally wound stainless steel tape wound together with graphite and PTFE, formed in a V shape. Internal pressure acts upon the faces of the V, forcing the gasket to seal against the flange faces. Most spiral wound gasket applications use two standard gasket thicknesses: 1/8 inch and 3/16 inch. With 1/8 inch thick gaskets, compression to a 0.100 inch thickness is recommended; for 3/16 inch gaskets, compression to a 0.13 inch thickness. Soft cut gasket Soft gasket is a term that refers to a gasket that is cut from a soft (flexible) sheet material and can easily conform to surface irregularities even when the bolt load is low. Soft gaskets are used in applications such as heat exchangers, compressors, bonnet valves and pipe flanges. Ring type joint gasket (RTJ gasket) The annular seal (RTJ seal) is a high-integrity, high-temperature, high-pressure seal for applications in the oil industry, oilfield drilling, pressure vessel connections, pipes, valves and more.
Under axial compressive load, the ring packing (RTJ) deforms and flows slightly within the groove of the sealing flange. The RTJ seal has a small load area, which leads to a high surface pressure between the sealing surface and the groove; its maintenance properties are poor, and it is not suitable for reuse. Improvements Many gaskets contain minor improvements to extend their acceptable operating conditions: A common improvement is an inner compression ring. A compression ring allows for higher flange compression while preventing gasket failure. The effects of a compression ring are minimal, and one is generally used only when the standard design experiences a high rate of failure. Another common improvement is an outer guiding ring. A guiding ring allows for easier installation and serves as a minor compression inhibitor. In some alkylation uses, guiding rings on double-jacketed gaskets can be modified, through an inner lining system coupled with alkylation paint, to show when the first seal has failed. Reasons for failure Unevenly distributed pressing force Uneven pressure can be caused by a variety of factors. First is the human factor: asymmetric application of the bolt preload can cause uneven pressure. Second, in theory the sealing surfaces are perfectly parallel when the flanges are pressed together; in practice, the centerline of a pipeline cannot be absolutely concentric, and the moment from tightening the bolts on the flange introduces a discontinuity. With asymmetric connections, the sealing surfaces will be more or less deformed and the pressure reduced, so that under running load the joint is prone to leakage. Third, the density of the bolt arrangement has an obvious impact on the pressure distribution: the closer the bolts, the more uniform the pressure. Stress relaxation and torque loss The bolts on the flange are tightened at installation.
Due to vibration, temperature changes, and other factors such as stress relaxation in spiral wound gaskets, the bolt tension will gradually decrease, resulting in loss of torque and, eventually, a leak. In general, longer bolts and smaller bolt diameters are better at preventing the loss of torque; a long, thin bolt is an effective way to prevent it. Heating a bolt for a certain period of time to stretch it, and then maintaining a given torque, is also very effective in preventing the loss of torque. When the gasket is thinner and smaller, there will be a greater loss of torque. In addition, strong vibration of the machine and the pipe itself should be prevented, and they should be isolated from the vibration of adjacent equipment. Impacts on the joint are also significant: avoiding impacts on the tightened bolts helps prevent the loss of torque. Surface not smooth It is important to finish the sealing surface properly, otherwise leakage will result. A surface that is too smooth can allow the gasket material to blow out under pressure; a surface that is not machined flat can provide leak paths. A good rule of thumb is a surface machined to 32 RMS. This ensures the surface is flat, but with enough surface finish to bite into the gasket under compression. Metal reinforced gaskets With metal-core coated gaskets, both sides of the core are covered with a flexible, malleable sealant. Reinforced metal seals are available in pressure classes up to 300. The strong metal core provides resistance to pressure, while the soft coating ensures sealing.
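The compressive-load and compression-thickness figures given above can be sanity-checked with a short sketch. The function below estimates gasket seating stress as total bolt preload divided by the gasket's annular contact area, and verifies the spiral-wound compression recommendations; the flange dimensions, bolt count, and preload are illustrative assumptions, not values from any standard.

```python
import math

def seating_stress_mpa(n_bolts, preload_kn, od_mm, id_mm):
    """Gasket seating stress (MPa): total bolt preload / annular contact area."""
    area_mm2 = math.pi / 4 * (od_mm**2 - id_mm**2)
    return n_bolts * preload_kn * 1e3 / area_mm2  # N / mm^2 == MPa

def compression_pct(free_in, seated_in):
    """Percent thickness reduction when a gasket is seated."""
    return 100 * (free_in - seated_in) / free_in

# Hypothetical flange: 8 bolts at 20 kN each on a 150 mm OD / 100 mm ID gasket
print(round(seating_stress_mpa(8, 20, 150, 100), 1))  # 16.3 -- above the 14 MPa figure cited earlier

# Spiral-wound recommendations from the text: 1/8 in -> 0.100 in, 3/16 in -> 0.13 in
print(round(compression_pct(1/8, 0.100), 1))  # 20.0
print(round(compression_pct(3/16, 0.13), 1))  # 30.7
```

Seating stress scales directly with preload and inversely with gasket area, which is one reason narrow gaskets can seal at lower total bolt loads.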
Water clock
A water clock or clepsydra is a timepiece that measures time by the regulated flow of liquid into (inflow type) or out of (outflow type) a vessel, where the amount of liquid can then be measured. Water clocks are one of the oldest time-measuring instruments. The simplest form of water clock, with a bowl-shaped outflow, existed in Babylon, Egypt, and Persia around the 16th century BC. Other regions of the world, including India and China, also provide early evidence of water clocks, but the earliest dates are less certain. Water clocks were used in ancient Greece and in ancient Rome, as described by technical writers such as Ctesibius (died 222 BC) and Vitruvius (died after 15 BC). Designs A water clock uses the flow of water to measure time. If viscosity is neglected, the physical principle required to study such clocks is Torricelli's law. Two types of water clock exist: inflow and outflow. In an outflow water clock, a container is filled with water, and the water is drained slowly and evenly out of the container. This container has markings that are used to show the passage of time. As the water leaves the container, an observer can see where the water is level with the lines and tell how much time has passed. An inflow water clock works in basically the same way, except instead of flowing out of the container, the water is filling up the marked container. As the container fills, the observer can see where the water meets the lines and tell how much time has passed. Some modern timepieces are called "water clocks" but work differently from the ancient ones. Their timekeeping is governed by a pendulum, but they use water for other purposes, such as providing the power needed to drive the clock by using a water wheel or something similar, or by having water in their displays.
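The outflow behaviour described above can be quantified. Neglecting viscosity, Torricelli's law gives an efflux speed of √(2gh), and integrating the resulting level equation for a straight-sided (cylindrical) vessel gives an emptying time of t = (A/a)·√(2h₀/g). A minimal sketch, with illustrative vessel dimensions:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def drain_time_s(tank_area_m2, orifice_area_m2, head_m):
    """Time for an ideal cylindrical outflow clock to empty from the given head,
    integrating dh/dt = -(a/A) * sqrt(2*G*h) (Torricelli's law, no viscosity)."""
    return (tank_area_m2 / orifice_area_m2) * math.sqrt(2 * head_m / G)

# Hypothetical vessel: 0.05 m^2 cross-section, 20 mm^2 orifice, 0.3 m of water
t = drain_time_s(0.05, 20e-6, 0.3)
print(f"{t / 60:.1f} min")  # 10.3 min
```

Because the level falls fastest when the vessel is full, equal time intervals do not correspond to equal drops in level in a straight-sided vessel; this is why outflow clocks such as the Egyptian ones used sloping sides and carefully spaced markings to keep the indicated hours nearly uniform.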
The Greeks and Romans advanced water clock design to include the inflow clepsydra with an early feedback system, gearing, and escapement mechanism, which were connected to fanciful automata and resulted in improved accuracy. Further advances were made in Byzantium, Syria, and Mesopotamia, where increasingly accurate water clocks incorporated complex segmental and epicyclic gearing, water wheels, and programmability, advances which eventually made their way to Europe. Independently, the Chinese developed their own advanced water clocks, incorporating gears, escapement mechanisms, and water wheels, passing their ideas on to Korea and Japan. Some water clock designs were developed independently, and some knowledge was transferred through the spread of trade. These early water clocks were calibrated with a sundial. While never reaching a level of accuracy comparable to today's standards of timekeeping, the water clock was a commonly used timekeeping device for millennia, until it was replaced by more accurate verge escapement mechanical clocks in Europe around 1300. Regional development Egypt The oldest water clock of which there is physical evidence dates to c. 1417–1379 BC in the New Kingdom of Egypt, during the reign of the pharaoh Amenhotep III, where it was used in the Precinct of Amun-Re at Karnak. The oldest documentation of the water clock is the tomb inscription of the 16th century BC Egyptian court official Amenemhet, which identifies him as its inventor. These simple water clocks, which were of the outflow type, were stone vessels with sloping sides that allowed water to drip at a nearly constant rate from a small hole near the bottom. There were twelve separate columns with consistently spaced markings on the inside to measure the passage of "hours" as the water level reached them. The columns were for each of the twelve months to allow for the variations of the seasonal hours. 
Priests used these clocks to determine the time at night so that the temple rites and sacrifices could be performed at the correct hour. Babylon In Babylon, water clocks were of the outflow type and were cylindrical in shape. Use of the water clock as an aid to astronomical calculations dates back to the Old Babylonian Empire (c. 2000 – c. 1600 BC). While there are no surviving water clocks from the Mesopotamian region, most evidence of their existence comes from writings on clay tablets. Two collections of tablets, for example, are the Enuma Anu Enlil (1600–1200 BC) and the MUL.APIN (7th century BC). In these tablets, water clocks are used for payment of the night and day watches (guards). These clocks were unique, as they did not have an indicator such as hands (as are typically used today) or grooved notches (as were used in Egypt). Instead, these clocks measured time "by the weight of water flowing from" them. The volume was measured in capacity units called qa, and the weight in units called mana or mina (the Greek unit of about one pound). In Babylonian times, time was measured with temporal hours, so as the seasons changed, so did the length of a day. "To define the length of a 'night watch' at the summer solstice, one had to pour two mana of water into a cylindrical clepsydra; its emptying indicated the end of the watch. One-sixth of mana had to be added each succeeding half-month. At the equinox, three mana had to be emptied in order to correspond to one watch, and four mana was emptied for each watch of the winter solstitial night." India N. Narahari Achar and Subhash Kak suggest that water clocks were used in ancient India as early as the 2nd millennium BC, based on their appearance in the Atharvaveda. According to N. Kameswara Rao, pots excavated from the Indus Valley Civilisation site of Mohenjo-daro may have been used as water clocks.
They are tapered at the bottom, have a hole on the side, and are similar to the utensil used to perform abhiṣeka (ritual water pouring) on lingams. The Jyotisha, one of the six Vedanga disciplines, describes water clocks called ghati or kapala that measure time in units of nadika (around 24 minutes). A clepsydra in the form of a floating and sinking copper vessel is mentioned in the Sūrya Siddhānta (5th century AD). At Nalanda mahavihara, an ancient Buddhist university, four-hour intervals were measured by a water clock, which consisted of a similar copper bowl holding two large floats in a larger bowl filled with water. The bowl was filled with water from a small hole at its bottom; it sank when filled, which was marked by the beating of a drum in the daytime. The amount of water added varied with the seasons, and students at the university operated the clock. Descriptions of similar water clocks are also given in the Pañca Siddhāntikā by the polymath Varāhamihira in the 6th century, which adds further detail to the account given in the Sūrya Siddhānta. Further descriptions are recorded in the Brāhmasphuṭasiddhānta by the mathematician Brahmagupta in the 7th century. A detailed description with measurements is also recorded by the astronomer Lalla in the 8th century, who describes the ghati as a hemispherical copper vessel with a hole that is fully filled after one nadika. China In ancient China, as well as throughout East Asia, water clocks were very important in the study of astronomy and astrology. The oldest written reference dates the use of the water clock in China to the 6th century BC. From about 200 BC onwards, the outflow clepsydra was replaced almost everywhere in China by the inflow type with an indicator-rod borne on a float (called fou chien lou, 浮箭漏).
The Han dynasty philosopher and politician Huan Tan (40 BC – AD 30), a Secretary at the Court in charge of clepsydrae, wrote that he had to compare clepsydrae with sundials because of how temperature and humidity affected their accuracy, demonstrating that the effects of evaporation, as well as of temperature on the speed at which water flows, were known at this time. The liquid in water clocks was liable to freezing, and had to be kept warm with torches, a problem that was solved in 976 by the Chinese astronomer and engineer Zhang Sixun. His invention—a considerable improvement on Yi Xing's clock—used mercury instead of water. Mercury is a liquid at room temperature, and freezes at , lower than any air temperature common outside polar regions. Again, instead of using water, the early Ming Dynasty engineer Zhan Xiyuan (c. 1360–1380) created a sand-driven wheel clock, improved upon by Zhou Shuxue (c. 1530–1558). The use of clepsydrae to drive mechanisms illustrating astronomical phenomena began with the Han Dynasty polymath Zhang Heng (78–139) in 117, who also employed a waterwheel. Zhang Heng was the first in China to add an extra compensating tank between the reservoir and the inflow vessel, which solved the problem of the falling pressure head in the reservoir tank. Zhang's ingenuity led to the creation by the Tang dynasty mathematician and engineer Yi Xing (683–727) and Liang Lingzan in 725 of a clock driven by a waterwheel linkwork escapement mechanism. The same mechanism would be used by the Song dynasty polymath Su Song (1020–1101) in 1088 to power his astronomical clock tower, as well as a chain drive. Su Song's clock tower, over tall, possessed a bronze power-driven armillary sphere for observations, an automatically rotating celestial globe, and five front panels with doors that permitted the viewing of changing mannequins which rang bells or gongs, and held tablets indicating the hour or other special times of the day. 
Since the 2000s, an outflow clepsydra has been operational in Beijing's Drum Tower, where it is displayed for tourists. It is connected to automata so that every quarter-hour a small brass statue of a man claps his cymbals.

Persia

The use of water clocks in Greater Iran, especially in desert areas such as Yazd, Isfahan, Zibad, and Gonabad, dates back to 500 BC. Later, they were also used to determine the exact holy days of pre-Islamic religions such as Nowruz (March equinox), Mehregan (September equinox), Tirgan (summer solstice) and Yaldā Night (winter solstice) – the equinoxes and solstices marking the equal-length, longest, and shortest days and nights of the year. The water clocks used, called pengan (and later fenjan), were among the most practical ancient tools for timing the yearly calendar. The water clock was the most accurate and commonly used timekeeping device for calculating the amount of time that a farmer could take water from a qanat or well for irrigation, until it was replaced by more accurate modern clocks. Persian water clocks were a practical, useful, and necessary tool for the qanat's shareholders to calculate the length of time they could divert water to their farms or gardens. The qanat was the only water source for agriculture and irrigation in these arid areas, so just and fair water distribution was very important. Therefore, a very fair and clever elder was elected to be the manager of the water clock, or mir āb, and at least two full-time managers were needed to control and observe the number of hours and announce the exact times of the days and nights, from sunrise to sunset, because shareholders were usually divided between day and night owners. The Persian water clock consisted of a large pot full of water and a bowl with a small hole in the center. When the bowl became full of water, it would sink into the pot, and the manager would empty the bowl and again put it on top of the water in the pot. He would record the number of times the bowl sank by putting small stones into a jar.
The place where the clock was situated and its managers were collectively known as the khane pengān. Usually this would be the top floor of a public house, with west- and east-facing windows to show the time of sunset and sunrise. The Zibad water clock was in use until 1965, when it was replaced by modern clocks.

Greco-Roman world

The word "clepsydra" comes from the Greek meaning "water thief". The Greeks considerably advanced the water clock by tackling the problem of the diminishing flow. They introduced several types of the inflow clepsydra, one of which included the earliest feedback control system. Ctesibius invented an indicator system typical for later clocks such as the dial and pointer. The Roman engineer Vitruvius described early alarm clocks, working with gongs or trumpets. A commonly used water clock was the simple outflow clepsydra. This small earthenware vessel had a hole in its side near the base. In both Greek and Roman times, this type of clepsydra was used in courts for allocating periods of time to speakers. In important cases, such as when a person's life was at stake, it was filled completely, but for more minor cases, only partially. If proceedings were interrupted for any reason, such as to examine documents, the hole in the clepsydra was stopped with wax until the speaker was able to resume his pleading.

Clepsydrae for keeping time

Some scholars suspect that the clepsydra may have been used as a stop-watch for imposing a time limit on clients' visits in Athenian brothels. Slightly later, in the early 3rd century BC, the Hellenistic physician Herophilos employed a portable clepsydra on his house visits in Alexandria for measuring his patients' pulse-beats. By comparing the rate by age group with empirically obtained data sets, he was able to determine the intensity of the disorder.
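The "diminishing flow" problem the Greeks tackled follows directly from Torricelli's law: as the water level (head) in an outflow vessel drops, so does the outflow rate, so equally spaced level marks cannot mark equal times. A minimal Python sketch illustrates this; the vessel and orifice dimensions are purely illustrative assumptions, not taken from any historical clock:

```python
import math

# Outflow clepsydra modelled as a cylinder draining through a small hole.
# Torricelli's law gives the efflux speed v = sqrt(2*g*h), so the level h obeys
#   dh/dt = -(a/A) * sqrt(2*g*h)
# with the closed-form solution h(t) = (sqrt(h0) - (a/A)*sqrt(g/2)*t)^2.
g = 9.81      # gravitational acceleration, m/s^2
A = 0.05      # vessel cross-section, m^2 (assumed)
a = 1.0e-5    # orifice area, m^2 (assumed)
h0 = 0.50     # initial water level, m (assumed)

def level(t):
    """Water level in metres after t seconds, clamped at empty."""
    root = math.sqrt(h0) - (a / A) * math.sqrt(g / 2.0) * t
    return root * root if root > 0 else 0.0

# The level drop per equal time interval shrinks as the head falls,
# so equally spaced marks on the vessel wall would not measure equal times.
for t in range(0, 1801, 600):
    print(f"t = {t:4d} s   level = {level(t) * 100:5.1f} cm")
```

Running this shows the drop over each successive 600-second interval getting smaller, which is exactly why inflow designs fed from a constant-head (float-regulated) reservoir kept better time.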
Between 270 BC and AD 500, Hellenistic (Ctesibius, Hero of Alexandria, Archimedes) and Roman horologists and astronomers were developing more elaborate mechanized water clocks. The added complexity was aimed at regulating the flow and at providing fancier displays of the passage of time. For example, some water clocks rang bells and gongs, while others opened doors and windows to show figurines of people, or moved pointers and dials. Some even displayed astrological models of the universe. The 3rd century BC engineer Philo of Byzantium referred in his works to water clocks already fitted with an escapement mechanism, the earliest known of its kind. The greatest advance in clepsydra design during this period, however, was Ctesibius's incorporation of gears and a dial indicator to show the time automatically as the lengths of the days changed throughout the year, a necessity under the temporal timekeeping of his day. Also, the Greek astronomer Andronicus of Cyrrhus supervised the construction of his Horologion, known today as the Tower of the Winds, in the Athens marketplace (or agora) in the first half of the 1st century BC. This octagonal clocktower showed scholars and shoppers both sundials and a windvane. Inside it was a mechanized clepsydra, although the type of display it used cannot be known for sure; some possibilities are a rod that moved up and down to display the time, a water-powered automaton that struck a bell to mark the hours, or a moving star disk in the ceiling.

Medieval Islamic world

In the medieval Islamic world (632–1280), the use of water clocks has its roots in the work of Archimedes during the rise of Alexandria in Egypt and continued through Byzantium. The water clocks of the Arab engineer Al-Jazari, however, are credited with going "well beyond anything" that had preceded them. In Al-Jazari's 1206 treatise, he describes one of his water clocks, the elephant clock.
The clock recorded the passage of temporal hours, which meant that the rate of flow had to be changed daily to match the uneven length of days throughout the year. To accomplish this, the clock had two tanks: the top tank was connected to the time-indicating mechanisms, and the bottom was connected to the flow-control regulator. At daybreak, the tap was opened and water flowed from the top tank to the bottom tank via a float regulator that maintained a constant pressure in the receiving tank. The most sophisticated water-powered astronomical clock was Al-Jazari's castle clock of 1206, considered by some to be an early example of a programmable analog computer. It was a complex device that was about high, and had multiple functions alongside timekeeping. It included a display of the zodiac and the solar and lunar orbits, and a pointer in the shape of the crescent moon which traveled across the top of a gateway, moved by a hidden cart, causing automatic doors to open, each revealing a mannequin, every hour. It was possible to re-program the length of day and night in order to account for their changing lengths throughout the year, and it also featured five musician automata that played music when moved by levers operated by a hidden camshaft attached to a water wheel. Other components of the castle clock included a main reservoir with a float, a float chamber and flow regulator, plate and valve trough, two pulleys, a crescent disc displaying the zodiac, and two falcon automata dropping balls into vases. The first water clocks to employ complex segmental and epicyclic gearing were invented earlier by the Arab engineer Ibn Khalaf al-Muradi in Islamic Iberia c. 1000. His water clocks were driven by water wheels, as was also the case for several Chinese water clocks in the 11th century. Comparable water clocks were built in Damascus and Fez. The latter (Dar al-Magana) remains to this day, and its mechanism has been reconstructed.
The first European clock to employ these complex gears was the astronomical clock created by Giovanni de Dondi in c. 1365. Like the Chinese, Arab engineers at the time also developed an escapement mechanism, which they employed in some of their water clocks. Their escapement mechanism was in the form of a constant-head system, while heavy floats were used as weights.

Korea

In 718, Unified Silla established the system of clepsydra for the first time in Korean history, imitating the Tang dynasty. In 1434, during Joseon rule, Jang Yeong-sil (), a palace guard and later chief court engineer, constructed the Borugak Jagyeongnu, or self-striking water clock of Borugak Pavilion, for Sejong the Great. What made his water clock self-striking (or automatic) was its use of jack-work mechanisms: three wooden figures or "jacks" struck objects to signal the time. This innovation removed the need for human workers, known as "rooster men", to constantly replenish it. The uniqueness of the clock was its capability to announce dual times automatically with visual and audible signals. Jang developed a signal conversion technique that made it possible to measure analog time and announce digital time simultaneously, as well as to separate the water mechanisms from the ball-operated striking mechanisms. The conversion device, called pangmok, was placed above the inflow vessel that measured the time, and was the first device of its kind in the world. Thus, the Borugak water clock is the first hydro-mechanically engineered dual-time clock in the history of horology.

Japan

Emperor Tenji made Japan's first water clock, called a . They were highly socially significant and run by

Temperature, water viscosity, and clock accuracy

When viscosity can be neglected, the outflow rate of the water is governed by Torricelli's law, or more generally, by Bernoulli's principle.
Viscosity will dominate the outflow rate if the water flows out through a nozzle that is sufficiently long and thin, as given by the Hagen–Poiseuille equation. For such a design, the flow rate is approximately inversely proportional to the viscosity, which depends on the temperature. Liquids generally become less viscous as the temperature increases. In the case of water, the viscosity varies by a factor of about seven between zero and 100 degrees Celsius. Thus, a water clock with such a nozzle would run about seven times faster at 100 °C than at 0 °C. Water is about 25 percent more viscous at 20 °C than at 30 °C, and a variation in temperature of one degree Celsius, in this "room temperature" range, produces a change of viscosity of about two percent. Therefore, a water clock with such a nozzle that keeps good time at some given temperature would gain or lose about half an hour per day if it were one degree Celsius warmer or cooler. To make it keep time within one minute per day would require its temperature to be controlled within °C (about °F). There is no evidence that this was done in antiquity, so ancient water clocks with sufficiently thin and long nozzles (unlike the modern pendulum-controlled one described above) cannot have been reliably accurate by modern standards. However, while modern timepieces may not be reset for long periods, water clocks were likely reset every day, when refilled, based on a sundial, so the cumulative error would not have been great.
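The viscosity arithmetic above is easy to check numerically. The sketch below uses standard handbook values for the dynamic viscosity of water (the figures are assumptions taken from reference tables, not from this article) and the Hagen–Poiseuille proportionality Q ∝ 1/μ:

```python
# Sensitivity of a thin-nozzle water clock to temperature, assuming the flow
# is viscosity-dominated (Hagen-Poiseuille: flow rate Q proportional to 1/mu).
# Dynamic viscosity of water in mPa*s at selected temperatures (handbook values):
MU = {0: 1.787, 20: 1.002, 21: 0.978, 30: 0.797, 100: 0.282}

def rate_ratio(t_warm, t_cool):
    """How much faster the clock runs at t_warm than at t_cool (Q ~ 1/mu)."""
    return MU[t_cool] / MU[t_warm]

# Between freezing and boiling the rate changes by roughly a factor of six to seven:
print(f"0 C -> 100 C speed-up: {rate_ratio(100, 0):.1f}x")

# Near room temperature, one degree changes the rate by a couple of percent,
# which over 24 hours amounts to roughly half an hour gained or lost per day:
per_degree = rate_ratio(21, 20) - 1.0
print(f"per-degree rate change near 20 C: {per_degree * 100:.1f}%")
print(f"resulting error per day: about {per_degree * 24 * 60:.0f} minutes")
```

The 20 °C vs 30 °C comparison in the text also falls out of the same table: MU[20]/MU[30] is about 1.26, i.e. water is roughly 25 percent more viscous at 20 °C.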
Bushel
A bushel (abbreviation: bsh. or bu.) is an imperial and US customary unit of volume based upon an earlier measure of dry capacity. The old bushel is equal to 2 kennings (obsolete), 4 pecks, or 8 dry gallons, and was used mostly for agricultural products, such as wheat. In modern usage, the volume is nominal, with bushels denoting a mass defined differently for each commodity. The name "bushel" is also used to translate similar units in other measurement systems.

Name

The word "bushel" originally referred to the container itself, and only later to a unit of measurement. The name comes from the Old French and , meaning "little box". It may further derive from Old French , thus meaning "little butt".

History

The bushel is an intermediate value between the pound and ton or tun that was introduced to England following the Norman Conquest. Norman statutes made the London bushel part of the legal measure of English wine, ale, and grains. The Assize of Bread and Ale credited to Henry III, , defined this bushel in terms of the wine gallon, while the Assize of Weights and Measures usually credited to Edward I or II defined the London bushel in terms of the larger corn gallon. In either case, a London bushel was reckoned to contain 64 pounds, 12 ounces, 20 pennyweight, and 32 grains. These measures were based on the relatively light tower pound and were rarely used in Scotland, Ireland, or Wales during the Middle Ages. When the Tower system was abolished in the 16th century, the bushel was redefined as 56 avoirdupois pounds. The imperial bushel established by the Weights and Measures Act 1824 defined the bushel as the volume of 80 avoirdupois pounds of distilled water in air at , or 8 imperial gallons. This is the bushel still in some use in the United Kingdom. Thus, there is no distinction between liquid and dry measure in the imperial system. The Winchester bushel is the volume of a cylinder in diameter and high, which gives an irrational volume of approximately 2150.4202 cubic inches.
The modern American or US bushel is a variant of this, rounded down to exactly 2150.42 cubic inches, a difference of less than one part per ten million. It is also somewhat in use in Canada. In English use, a bushel was a willow basket with fixed dimensions. The basket was round, with inside measurements of: base diameter 12 inches, top diameter 18 inches, height 12 inches. A basket filled level to the top was a bushel. A basket filled to the top and then overfilled to a height where it overflowed was considered to be a bushel and a peck, a generous measure (a similar concept to a baker's dozen). Hence, the old song "I love you, a bushel and a peck...." meant "I am overflowing with love for you". Sometimes the basket was made 13 inches high, but with a ring of "waling" (a special willow weaving technique) to mark the 12-inch height.

Volume

Weight

Bushels are now most often used as units of mass or weight rather than of volume. The bushels in which grains are bought and sold on commodity markets or at local grain elevators, and for reports of grain production, are all units of weight. This is done by assigning a standard weight to each commodity that is to be measured in bushels. These bushels depend on the commodities being measured, and on the moisture content of the commodity. Some of the more common ones are:

Oats: US: 32 lb (); Canada: 34 lb (); UK: 38 lb ()
Barley: 48 lb ()
Malted barley: 34 lb ()
Shelled maize (corn) at 15.5% moisture by weight: 56 lb ()
Wheat at 13.5% moisture by weight: 60 lb ()
Soybeans at 13% moisture by weight: 60 lb ()

Other specific values are defined (and those definitions may vary within different jurisdictions, including from state to state in the United States) for other grains, oilseeds, fruits, vegetables, coal, hair and many other commodities. Government policy in the United States is to phase out units such as the bushel and replace them with metric mass equivalents.

Other units

The German bushel is the .
A Prussian scheffel was equal to 54.96 litres. The Polish bushel () was used as a measure of dry capacity. It is divided into 4 quarters () and in the early 19th century had a value of 128 litres in Warsaw and 501.116 litres in Kraków. The Spanish bushel () was used as a measure of dry capacity. It is roughly equal to 55.5 litres in Castille. The Welsh hobbit was equivalent to two-and-a-half bushels when used for volume; when used for measuring weight, the hobbit depended on the grain being weighed.
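The volume and mass relationships above can be verified numerically. The sketch below recomputes the Winchester cylinder volume (18.5 inches in diameter, 8 inches high, the dimensions behind the 2150.42 in³ figure) and converts commodity bushels to metric mass; the helper names are hypothetical, and the weights come from the list above:

```python
import math

# The Winchester bushel is the volume of a cylinder 18.5 in across and 8 in
# high; the US bushel rounds this down to exactly 2150.42 cubic inches.
winchester = math.pi * (18.5 / 2) ** 2 * 8   # ~2150.4202 in^3
us_bushel = 2150.42
print(f"Winchester: {winchester:.4f} in^3, "
      f"rounding error: {abs(winchester - us_bushel) / winchester:.1e}")

# Commodity bushels are really units of mass: each commodity has a standard
# weight in pounds per bushel (values from the list above).
BUSHEL_LB = {"oats (US)": 32, "barley": 48, "maize": 56,
             "wheat": 60, "soybeans": 60}
LB_TO_KG = 0.45359237   # exact definition of the avoirdupois pound

def bushels_to_tonnes(commodity, bushels):
    """Convert a bushel count to metric tonnes via the standard bushel weight."""
    return BUSHEL_LB[commodity] * bushels * LB_TO_KG / 1000.0

# e.g. a 1,000-bushel load of wheat is a little over 27 tonnes:
print(f"1000 bu wheat = {bushels_to_tonnes('wheat', 1000):.2f} t")
```

The rounding-error print confirms the "less than one part per ten million" claim: the US bushel differs from the exact cylinder volume by under 10⁻⁷ relative.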
Agelenidae
The Agelenidae are a large family of spiders in the suborder Araneomorphae. Well-known examples include the common "grass spiders" of the genus Agelenopsis. Nearly all Agelenidae are harmless to humans, but the bite of the hobo spider (Eratigena agrestis) may be medically significant; some evidence suggests it might cause necrotic lesions, but the matter remains subject to debate. The most widely accepted common name for members of the family is funnel weaver.

Description

The body length of the smallest Agelenidae spiders is about , excluding the legs, while the larger species grow to long. Some exceptionally large species, such as Eratigena atrica, may reach in total leg span. Agelenids have eight eyes in two horizontal rows of four. Their cephalothoraces narrow somewhat towards the front, where the eyes are. Their abdomens are more or less oval, usually patterned with two rows of lines and spots. Some species have longitudinal lines on the dorsal surface of the cephalothorax, whereas other species do not; for example, the hobo spider does not, which assists in informally distinguishing it from similar-looking species.

Biology

Most of the Agelenidae are very fast runners, especially on their webs. With speeds clocked at , the giant house spider held the Guinness Book of World Records title for top spider speed until 1987. A recent literature review found peer-reviewed accounts of several agelenid species achieving speeds in this range, though some other taxa have achieved higher speeds. Agelenids build a flat sheet of nonsticky web with a funnel-shaped retreat to one side or occasionally in the middle, depending on the situation and species. Accordingly, "funnel weaver" is the most widely accepted common name for members of the family, but they should not be confused with the so-called "funnel-web tarantulas" or "funnel-web spiders" of mygalomorph families.
The typical hunting mode for most sheet-building Agelenidae is similar to that of most other families of spiders that build sheet webs in the open, typically on grass or in scrubland as opposed to under bark, rocks, and the like. They await the arrival of prey such as grasshoppers that fall onto the horizontal web. Although the web is not sticky, it is full of entangling filaments that the spider continually lays down when passing over it. The filaments catch on the slightest projections on a prey insect's body or limbs. The web also is springy, and whether perching on the sheet or awaiting prey in its retreat, the spider reacts immediately to vibrations, whether from a courting male, the threatening struggles of dangerous invaders, or the weaker struggles of potential meals. They attack promising prey by rushing out at high speed and dealing a paralysing venomous bite. The agatoxin in their venom has been studied extensively in Agelenopsis aperta. Once the prey has been disabled, the spider generally drags it back into the retreat and begins to feed. This method of attack is consistent with the high speeds at which the Agelenidae run. Other sheet-web hunters, such as some Pisauridae, also are very fast runners. Like any fast-running spider, the Agelenidae possess good vision, and are generally photosensitive (i.e. they react to changes in the light), so they can successfully retreat upon perceiving a larger threat's shadow approaching. Some are also sensitive to gusts of wind, and can retreat before the prey even spots them. Males are less successful ambushers than females, so they prefer to roam and wander to new areas rather than stay in one single web. In September, males of outdoor species (such as Agelenopsis and Agelena) can seek refuge within houses, usually nesting on or underneath outer windowsills, or around the porch door.
These spiders often are neither pest controllers nor pests themselves; they are very selective in their prey and do not consume large quantities. They are also immune to intimidation and return to their webs even after being disturbed, unless the webs are completely destroyed.

Parasocial species

The type genus, Agelena, includes some parasocial spiders that live in complex communal webs in Africa. The best known of these is probably A. consociata. Social behaviour in these spiders comprises communal web-building, cooperative prey capture, and communal rearing of young. No trophallaxis occurs, though, nor does any true eusociality such as occurs in the social Hymenoptera (ants, bees, and wasps); for example, the spiders have no castes such as sterile workers or soldiers, and all females are reproductive.

Medical significance

Only one species of agelenid has become prominent as a putative cause of a significant frequency of necrotic arachnidism; this is the hobo spider, Eratigena agrestis. This perception arose when the species was accidentally introduced to the United States in the mid-20th century and propagated rapidly in several regions. It is a fairly large, rapidly moving spider, and accordingly alarms many people. A few cases of bites by the desert grass spider, Agelenopsis aperta, reported in Southern California resulted in symptoms, but determining whether these cases were confused with similar-looking spiders is difficult.
Genera , the World Spider Catalog accepts these genera: Acutipetala Dankittipakul & Zhang, 2008 — Thailand Aeolocoelotes Okumura, 2020 — Japan Agelena Walckenaer, 1805 — Africa, Asia, Italy Agelenella Lehtinen, 1967 — Yemen Agelenopsis Giebel, 1869 — North America, Ukraine, Asia Ageleradix Xu & Li, 2007 — China Agelescape Levy, 1996 — Asia Ahua Forster & Wilton, 1973 — New Zealand Allagelena Zhang, Zhu & Song, 2006 — Asia Alloclubionoides Paik, 1992 — Asia Anatextrix Kaya, Zamani, Yağmur & Marusik, 2023 — Turkiye Asiascape Zamani & Marusik, 2020 — Iran Aterigena Bolzern, Hänggi & Burckhardt, 2010 — China, Italy, France Azerithonica Guseinov, Marusik & Koponen, 2005 Baiyuerius Zhao, B. Li & S. Li, 2023 — China, Vietnam Bajacalilena Maya-Morales & Jiménez, 2017 — Mexico Barronopsis Chamberlin & Ivie, 1941 — Cuba, United States Benoitia Lehtinen, 1967 — Asia, Africa, Spain Bifidocoelotes Wang, 2002 — China Brignoliolus Ovtchinnikov, 1999 — Israel, Lebanon, Uzbekistan, Kyrgyzstan, Tajikistan, Turkiye, Turkmenistan Cabolena Maya-Morales & Jiménez, 2017 — Mexico Calilena Chamberlin & Ivie, 1941 — United States, Mexico Callidalena Maya-Morales & Jiménez, 2017 — Mexico, United States Coelotes Blackwall, 1841 — Asia, Europe, Mexico Coras Simon, 1898 — United States, Canada, Korea Curticoelotes Okumura, 2020 — Japan Dichodactylus Okumura, 2017 — Japan Draconarius Ovtchinnikov, 1999 — Asia Eratigena Bolzern, Burckhardt & Hänggi, 2013 — North America, Europe, Algeria, Asia Femoracoelotes Wang, 2002 — Taiwan Flexicoelotes Chen, Li & Zhao, 2015 — China Gorbiscape Zamani & Marusik, 2020 — Western Mediterranean, Tajikistan Griseidraconarius Okumura, 2020 — Japan Guilotes Zhao & S. Q. Li, 2018 — China Hadites Keyserling, 1862 — Croatia Hellamalthonica Bosmans, 2023 — Greece Hengconarius Zhao & S. Q. 
Li, 2018 — China Himalcoelotes Wang, 2002 — Nepal, Bhutan, China Histopona Thorell, 1869 — Europe Hoffmannilena Maya-Morales & Jiménez, 2016 — Mexico, Guatemala Hololena Chamberlin & Gertsch, 1929 — United States, Canada, Mexico Huangyuania Song & Li, 1990 — China Huka Forster & Wilton, 1973 — New Zealand Hypocoelotes Nishikawa, 2009 — Japan Inermocoelotes Ovtchinnikov, 1999 — Europe Iwogumoa Kishida, 1955 — Asia Jishiyu Lin & Li, 2023 — China Kidugua Lehtinen, 1967 — Congo Lagunella Maya-Morales & Jiménez, 2017 Leptocoelotes Wang, 2002 — Taiwan Lineacoelotes Xu, Li & Wang, 2008 — China Longicoelotes Wang, 2002 — China, Japan Lycosoides Lucas, 1846 — Africa, Azerbaijan, Spain Mahura Forster & Wilton, 1973 — New Zealand Maimuna Lehtinen, 1967 — Asia, Greece Malthonica Simon, 1898 — Greece, Portugal, France Melpomene O. Pickard-Cambridge, 1898 — North America, Central America Mistaria Lehtinen, 1967 — Kenya, Yemen Neorepukia Forster & Wilton, 1973 — New Zealand Neotegenaria Roth, 1967 — Guyana Neowadotes Alayón, 1995 — Hispaniola Nesiocoelotes Okumura & Zhao, 2022 — Japan Notiocoelotes Wang, Xu & Li, 2008 — China Novalena Chamberlin & Ivie, 1942 — North America, Central America, Trinidad Nuconarius Zhao & S. Q. Li, 2018 — China Olorunia Lehtinen, 1967 — Congo Oramia Forster, 1964 — New Zealand, Australia Oramiella Forster & Wilton, 1973 — New Zealand Orumcekia Koçak & Kemal, 2008 — China, Vietnam Papiliocoelotes Zhao & Li, 2016 — China Paramyro Forster & Wilton, 1973 — New Zealand Persilena Zamani & Marusik, 2020 — Iran Persiscape Zamani & Marusik, 2020 — Western Asia, Greece Pireneitega Kishida, 1955 — Asia, Europe Platocoelotes Wang, 2002 — China, Japan Porotaka Forster & Wilton, 1973 — New Zealand Pseudotegenaria Caporiacco, 1934 — Libya Robusticoelotes Wang, 2002 — China Rothilena Maya-Morales & Jiménez, 2013 — Mexico Rualena Chamberlin & Ivie, 1942 — United States, Mexico Sinocoelotes Zhao & Li, 2016 — China, Thailand Sinodraconarius Zhao & S. Q. 
Li, 2018 — China Spiricoelotes Wang, 2002 — China, Japan Tamgrinia Lehtinen, 1967 — India, China Tararua Forster & Wilton, 1973 — New Zealand Tegecoelotes Ovtchinnikov, 1999 — Asia Tegenaria Latreille, 1804 — Europe, Asia, Africa, North America, Oceania, South America, Jamaica Textrix Sundevall, 1833 — Asia, Europe, Ethiopia Tikaderia Lehtinen, 1967 — Himalayas Tonsilla Wang & Yin, 1992 — China Tortolena Chamberlin & Ivie, 1941 — United States, Mexico, Costa Rica Troglocoelotes Zhao & S. Q. Li, 2019 — China Tuapoka Forster & Wilton, 1973 — New Zealand Urocoras Ovtchinnikov, 1999 — Europe, Turkey Vappolotes Zhao & S. Q. Li, 2019 — China Wadotes Chamberlin, 1925 — United States, Canada A number of fossil species are known from Eocene aged Baltic amber, but their exact relationship with extant members of the clade is unclear.
Ochre
Ochre ( ; , ), iron ochre, or ocher in American English, is a natural clay earth pigment, a mixture of ferric oxide and varying amounts of clay and sand. It ranges in colour from yellow to deep orange or brown. It is also the name of the colours produced by this pigment, especially a light brownish-yellow. A variant of ochre containing a large amount of hematite, or dehydrated iron oxide, has a reddish tint known as red ochre (or, in some dialects, ruddle). The word ochre also describes clays coloured with iron oxide derived during the extraction of tin and copper.

Earth pigments

Ochre is a family of earth pigments, which includes yellow ochre, red ochre, purple ochre, sienna, and umber. The major ingredient of all the ochres is iron(III) oxide-hydroxide, known as limonite, which gives them a yellow colour. A range of other minerals may also be included in the mixture:

Yellow ochre, , is a hydrated iron hydroxide (limonite), also called gold ochre.
Red ochre, , takes its reddish colour from the mineral hematite, which is an iron oxide, reddish brown when hydrated.
Purple ochre is a rare variant, chemically identical to red ochre but of a different hue, caused by different light diffraction properties associated with a greater average particle size.
Brown ochre, also FeO(OH) (goethite), is a partly hydrated iron oxide. Similarly, lepidocrocite, γ-FeO(OH), a secondary mineral produced by the oxidation of iron ore minerals, is found in brown iron ores.
Sienna contains both limonite and a small amount of manganese oxide (less than 5%), which makes it darker than ochre.
Umber pigments contain a larger proportion of manganese (5–20%), which makes them a dark brown.

When natural sienna and umber pigments are heated, they are dehydrated and some of the limonite is transformed into hematite, giving them more reddish colours, called burnt sienna and burnt umber. Ochres are non-toxic and can be used to make an oil paint that dries quickly and covers surfaces thoroughly.
Modern ochre pigments often are made using synthetic iron oxide. Paints which use natural ochre pigment indicate it with the designation PY-43 (Pigment Yellow 43) on the label, following the Colour Index International system.

Historical use in art and culture

Prehistory

Over recent decades, red ochre has played a pivotal role in discussions about the cognitive and cultural evolution of early modern humans during the African Middle Stone Age. In Africa, evidence for the processing and use of red ochre pigments has been dated by archaeologists to around 300,000 years ago, the climax of the practice coinciding broadly with the emergence of Homo sapiens. Evidence of ochre's use in Australia is more recent, dated to 50,000 years ago, while new research has uncovered evidence in Asia that is dated to 40,000 years ago. A re-examination of artifacts uncovered in 1908 at the Le Moustier rock shelters in France has identified Mousterian stone tools that were attached to grips made of ochre and bitumen. The grips were formulated with 55% ground goethite ochre and 45% cooked liquid bitumen to create a mouldable putty that hardened into handles. Earlier excavations at Le Moustier prevent conclusive identification of the archaeological culture and age, but the European Mousterian style of these tools suggests they are associated with Neanderthals during the late Middle Paleolithic, between 60,000 and 35,000 years before present. It is the earliest evidence of compound adhesive use in Europe. Pieces of ochre engraved with abstract designs have been found at the site of Blombos Cave in South Africa, dated to around 75,000 years ago. "Mungo Man" (LM3) in Australia was buried sprinkled with red ochre around 40,000 years ago. In Wales, the paleolithic burial called the Red Lady of Paviland, from its coating of red ochre, has been dated to around 33,000 years before present.
Paintings of animals made with red and yellow ochre pigments have been found in paleolithic sites at Pech Merle in France (ca. 25,000 years old), and the cave of Altamira in Spain (–15,000 BC). The cave of Lascaux has an image of a horse coloured with yellow ochre estimated to be 17,300 years old. Neolithic burials may have used red ochre pigments symbolically, either to represent a return to the earth or possibly as a form of ritual rebirth, in which the colour may symbolize blood and a hypothesized Great Goddess. The Ancient Picts were said to paint themselves "Iron Red" according to the Gothic historian Jordanes. Frequent references in Irish myth to "red men" (Gaelic: Fer Dearg) make it likely that such a practice was common to the Celts of the British Isles, bog iron being particularly abundant in the midlands of Ireland. Ochre has uses other than as paint: "tribal peoples alive today . . . use either as a way to treat animal skins or else as an insect repellent, to staunch bleeding, or as protection from the sun. Ochre may have been the first medicament."

Africa

Red ochre has been used as a colouring agent in Africa for over 200,000 years. Women of the Himba ethnic group in Namibia use a mix of ochre and animal fat for body decoration, to achieve a reddish skin colour. The ochre mixture is also applied to their hair after braiding. Men and women of the Maasai people in Kenya and Tanzania have also used ochre in the same way.

Ancient Egypt

In Ancient Egypt, yellow was associated with gold, which was considered to be eternal and indestructible. The skin and bones of the gods were believed to be made of gold. The Egyptians used yellow ochre extensively in tomb painting, though occasionally they used orpiment, which made a brilliant colour but was highly toxic, since it was made with arsenic. In tomb paintings, men were always shown with brown faces, women with yellow ochre or gold faces. Red ochre in Ancient Egypt was used as a rouge, or lip gloss for women.
Ochre-coloured lines were also discovered on the Unfinished obelisk at the northern region of the Aswan Stone Quarry, marking work sites. Ochre clays were also used medicinally in Ancient Egypt: such use is described in the Ebers Papyrus from Egypt, dating to about 1550 BC. Ancient Phoenicia Pigments, particularly red ochre, were essential to grave rituals in ancient Phoenician society. They were more than just cosmetics; they also had important symbolic and ritualistic connotations. With its vivid color, evocative of blood and energy, red ochre represented life, death, and rebirth, as well as the desire for resurrection and the belief in an afterlife. To honor the deceased and prepare them for their passage to the afterlife, these pigments, particularly red ochre, were most likely applied to the body or to grave goods as part of the burial rites. “Phoenicians' love of red is highlighted by the great number of powders of this color found in the containers. The powders were probably used to give a hue to cheeks or to lips. Besides these uses as make-up powders, we can also assume a ritual use of ointments and powders containing cinnabar or ochre, applied to the face and the forehead during preparation rituals of the bodies. The discovery of red paint traces on bones and skulls suggests that these practices were common among the Phoenicians as for other populations.” Higher-quality pigments and more intricate applications typically indicated people of greater rank or particular significance within the community; the presence and quality of pigments in a burial site may thus indicate the identity or social standing of the deceased. In addition to acting as offerings to the gods and protective symbols, pigments were employed to adorn grave goods, including pottery, amulets, and other objects, thus elevating the spiritual purity of the interment.
The visual impact of red ochre could also have been intended to preserve the appearance of the body or make it presentable for mourning ceremonies, ensuring that the deceased was honored appropriately. This vivid color would enhance the overall visual and emotional impact of funerary displays. In essence, the use of red ochre and other pigments in Phoenician funerary contexts highlights their cultural and symbolic importance, reflecting deep-seated beliefs about death, the afterlife, and social hierarchy, thus providing a richer understanding of Phoenician customs and values. Ancient Greece and Rome Ochre was the most commonly used pigment for painting walls in the ancient Mediterranean world. In Ancient Greece, red ochre was called μίλτος, míltos (hence Miltiades: "red-haired" or "ruddy"). In ancient Athens, when the Assembly was called, a contingent of public slaves would sweep the open space of the Agora with ropes dipped in miltos: those citizens who loitered there instead of moving to the Assembly area risked having their clothes stained with the paint. This prevented them from wearing these clothes in public again, as failure to attend the Assembly incurred a fine. In England, red ochre was also known as "raddle", "reddle", or "ruddle"; it was used to mark sheep and could also be used as a waxy waterproof coating on structures. The reddle was sold as a ready-made mixture to farmers and herders by travelling workers called reddlemen. In Classical antiquity, the finest red ochre came from a Greek colony on the Black Sea where the modern city of Sinop in Turkey is located. It was carefully regulated, expensive and marked by a special seal, and this colour was called sealed Sinope. Later the Latin and Italian name sinopia was given to a wide range of dark red ochre pigments. Roman triumphators painted their faces red, perhaps to imitate the red-painted flesh of statues of the gods.
The Romans used yellow ochre in their paintings to represent gold and skin tones, and as a background colour. It is found frequently in the murals of Pompeii. Australia Ochre pigments are plentiful across Australia, especially in the Western Desert, Kimberley and Arnhem Land regions, and occur in many archaeological sites. The practice of ochre painting has been prevalent among Aboriginal Australians for over 40,000 years. Pleistocene burials with red ochre date to as early as 40,000 BP, and ochre plays a role in expressing the symbolic ideologies of the earliest arrivals to the continent. Ochre has been used for millennia by Aboriginal people for body decoration, sun protection, mortuary practices, cave painting, bark painting and other artwork, and the preservation of animal skins, among other uses. At Lake Mungo, in western New South Wales, burial sites have been excavated and burial materials, including ochre-painted bones, have been dated to the arrival of people in Australia; "Mungo Man" (LM3) was buried sprinkled with red ochre at least 30,000 BP, and possibly as early as 60,000 BP. Ochre was also widely used as medicine: when ingested, some ochres have an antacid effect on the digestive system, while others, which are rich in iron, can assist with lethargy and fatigue. Ochre is also often mixed with plant oils and animal fats to create other medicines. Ochre was mined by Aboriginal people in pits and quarries across Australia; there are over 400 recorded sites, and many of these (including the Ochre Pits in the Tjoritja / West MacDonnell National Park) are still in use. The National Museum of Australia has a large collection of ochre samples from many sites across Australia. There are many words for ochre in Australian Aboriginal languages throughout Australia, including: the Yolŋu languages, which refer to white ochre as gapan. The Noongar language, which calls red and yellow ochre wilgee. The Wiradjuri language, which calls red ochre gubarr or gidyi.
The Yawuru language, which refers to white/yellow ochre as gumbarri, white as larli, and red as duguldugul (when used in ritual). New Zealand The Māori people of New Zealand made extensive use of mineral ochre mixed with fish oil. Ochre was the predominant colouring agent used by Māori, and was used to paint their large waka taua (war canoes). Ochre prevented the drying out of the wood in canoes and the carvings of meeting houses; later missionaries estimated that it would last for 30 years. It was also roughly smeared over the face, especially by women, to keep off insects. Solid chunks of ochre were ground on a flat but rough-surfaced rock to produce the powder. Indigenous North America In Newfoundland, its use is most often associated with the Beothuk, whose use of red ochre led them to be referred to as "Red Indians" by the first Europeans to visit Newfoundland. The Beothuk may have also used yellow ochre to colour their hair. It was also used by the Maritime Archaic, as evidenced by its discovery in the graves of over 100 individuals during an archaeological excavation at Port au Choix. Its use was widespread at times in the Eastern Woodlands cultural area of Canada and the US; the Red Ocher people complex refers to a specific archaeological period in the Woodlands –400 BC. California Native Americans such as the Tongva and Chumash were also known to use red ochre as body paint. Researchers diving into dark submerged caves on Mexico's Yucatán Peninsula have found evidence of an ambitious red ochre mining operation starting 12,000 years ago and lasting two millennia. Colonial North America In Newfoundland, red ochre was the pigment of choice for use in vernacular outbuildings and work buildings associated with the cod fishery. Deposits of ochre are found throughout Newfoundland, notably near Fortune Harbour and at Ochre Pit Cove.
While the earliest settlers may have used locally collected ochre, people were later able to purchase pre-ground ochre through local merchants, largely imported from England. The dry ingredient, ochre, was mixed with some type of liquid raw material to create a rough paint. The liquid material was usually seal oil or cod liver oil in Newfoundland and Labrador, while Scandinavian recipes sometimes called for linseed oil. Red ochre paint was sometimes prepared months in advance and allowed to sit, and the smell of ochre paint being prepared is still remembered today. Variations in local recipes, shades of ore, and the type of oil used resulted in regional variations in colour. Because of this, it is difficult to pinpoint an exact shade or hue of red that would be considered the traditional "fishing stage red". In the Bonavista Bay area, one man maintained that seal oil mixed with the ochre gave the sails a purer red colour, while cod liver oil would give a "foxy" colour, browner in hue. Renaissance During the Renaissance, yellow and red ochre pigments were widely used in painting panels and frescoes. The colours vary greatly from region to region, depending upon whether the local clay was richer in yellowish limonite or reddish hematite. The red earth from Pozzuoli near Naples was a salmon pink, while the pigment from Tuscany contained manganese, making it a darker reddish brown called terra di siena, or sienna earth. The 15th-century painter Cennino Cennini described the uses of ochre pigments in his famous treatise on painting. In early modern Malta, red ochre paint was commonly used on public buildings. Modern history The industrial process for making ochre pigment was developed by the French scientist Jean-Étienne Astier in the 1780s. He was from Roussillon in the Vaucluse department of Provence, and he was fascinated by the cliffs of red and yellow clay in the region. He invented a process to make the pigment on a large scale.
First the clay was extracted from open pits or mines. The raw clay contained about 10 to 20 percent ochre. He then washed the clay to separate the grains of sand from the particles of ochre. The remaining mixture was then decanted in large basins to further separate the ochre from the sand. The water was then drained, and the ochre was dried, cut into bricks, crushed, sifted, and then classified by colour and quality. The best quality was reserved for artists' pigments. In Britain, ochre was mined at Brixham, England. It became an important product for the British fishing industry, where it was combined with oil and used to coat sails to protect them from seawater, giving them a reddish colour. The ochre was boiled in great caldrons, together with tar, tallow and oak bark; the last ingredient gave the name of barking yards to the places where the hot mixture was painted on to the sails, which were then hung up to dry. In 1894, a theft case provided insights into the use of the pigment as a food adulterant in sausage roll production, whereby the accused apprentice was taught to soak brown bread in red ochre, salt, and pepper to give the filling the appearance of beef sausage. As noted above, the industrial process for making ochre pigment was developed by the French scientist Jean-Étienne Astier in the 1780s, using the ochre mines and quarries in Roussillon, Rustrel, and Gargas in the Vaucluse department of Provence, in France. Thanks to the process invented by Astier and refined by his successors, ochre pigments from Vaucluse were exported across Europe and around the world. They were not only used for artists' paints and house paints; ochre also became an important ingredient for the early rubber industry. Ochre from Vaucluse was an important French export until the mid-20th century, when major markets were lost due to the Russian Revolution and the Spanish Civil War. Ochre also began to face growing competition from the newly developed synthetic pigment industry.
The quarries in Roussillon and Rustrel and the Mines of Bruoux closed one by one. Today, the last quarry still in operation is in Gargas (Vaucluse) and belongs to the Société des Ocres de France. In heraldry and vexillology Ochre, both red and yellow, appears as a tincture in South African heraldry; the national coat of arms, adopted in 2000, includes red ochre, while (yellow) ochre appears in the arms of the University of Transkei. Ochre is also used as a symbol of Indigenous Australians, and appears on the Flag of the Northern Territory and on the flags of the Taungurung and Aṉangu people. In popular culture A reddleman named Diggory Venn is prominently described in Thomas Hardy's 1878 novel The Return of the Native.
https://en.wikipedia.org/wiki/Kilogram-force
Kilogram-force
The kilogram-force (kgf or kgF), or kilopond (kp, from ), is a non-standard gravitational metric unit of force. It is not accepted for use with the International System of Units (SI) and is deprecated for most uses. The kilogram-force is equal to the magnitude of the force exerted on one kilogram of mass in a gravitational field (standard gravity, a conventional value approximating the average magnitude of gravity on Earth). That is, it is the weight of a kilogram under standard gravity. One kilogram-force is therefore equal to 9.80665 newtons. Similarly, a gram-force is 9.80665 millinewtons, and a milligram-force is 9.80665 micronewtons. History The gram-force and kilogram-force were never well-defined units until the CGPM adopted a standard acceleration of gravity of 9.80665 m/s2 for this purpose in 1901, though they had been used in low-precision measurements of force before that time. Even then, the proposal to define the kilogram-force as a standard unit of force was explicitly rejected. Instead, the newton was proposed in 1913 and accepted in 1948. The kilogram-force has never been a part of the International System of Units (SI), which was introduced in 1960. The SI unit of force is the newton. Prior to this, the units were widely used in much of the world. They are still in use for some purposes: for example, to specify the tension of bicycle spokes, the draw weight of bows in archery, and the tensile strength of electronics bond wire; for informal references to pressure (as the technically incorrect kilogram per square centimetre, omitting -force; the kilogram-force per square centimetre is the technical atmosphere, whose value is very near those of both the bar and the standard atmosphere); and to define the "metric horsepower" (PS) as 75 metre-kiloponds per second. In addition, the kilogram-force was the standard unit used for Vickers hardness testing.
In 1940s Germany, the thrust of a rocket engine was measured in kilograms-force, and in the Soviet Union it remained the primary unit for thrust in the Russian space program until at least the late 1980s. Dividing the thrust in kilograms-force by the mass of an engine or a rocket in kilograms conveniently gives the thrust-to-weight ratio, while dividing the thrust by the propellant consumption rate (mass flow rate) in kilograms per second gives the specific impulse in seconds. The term "kilopond" has been declared obsolete. Related units The tonne-force, metric ton-force, megagram-force, and megapond (Mp) are each 1000 kilograms-force. The decanewton or dekanewton (daN), exactly 10 N, is used in some fields as an approximation to the kilogram-force, because it is close to the 9.80665 N of 1 kgf. The gram-force is one thousandth of a kilogram-force.
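The unit relations above can be illustrated in a short script. This is a hedged sketch, not part of the article; the function names are my own, and only the 9.80665 conversion factor comes from the text.

```python
# Illustrative sketch of the kilogram-force relations described above.
# G0 is the standard acceleration of gravity adopted by the CGPM in 1901.

G0 = 9.80665  # m/s^2; by definition, 1 kgf = G0 newtons

def kgf_to_newtons(force_kgf: float) -> float:
    """Convert a force in kilograms-force to newtons."""
    return force_kgf * G0

def thrust_to_weight_ratio(thrust_kgf: float, mass_kg: float) -> float:
    """Thrust in kgf divided by mass in kg gives the ratio directly."""
    return thrust_kgf / mass_kg

def specific_impulse_seconds(thrust_kgf: float, flow_kg_per_s: float) -> float:
    """Thrust in kgf divided by propellant mass flow in kg/s gives Isp in s."""
    return thrust_kgf / flow_kg_per_s

print(kgf_to_newtons(1.0))                    # 9.80665
print(thrust_to_weight_ratio(100.0, 25.0))    # 4.0
print(specific_impulse_seconds(1000.0, 4.0))  # 250.0
```

The convenience noted in the text is visible here: because thrust and weight share the kilogram-force unit, the ratios need no conversion factor at all.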
https://en.wikipedia.org/wiki/Elastic%20modulus
Elastic modulus
An elastic modulus (also known as modulus of elasticity (MOE)) is a quantity that measures an object's or substance's resistance to being deformed elastically (i.e., non-permanently) when a stress is applied to it. Definition The elastic modulus of an object is defined as the slope of its stress–strain curve in the elastic deformation region: a stiffer material will have a higher elastic modulus. An elastic modulus has the form modulus = stress / strain, where stress is the force causing the deformation divided by the area to which the force is applied and strain is the ratio of the change in some parameter caused by the deformation to the original value of the parameter. Since strain is a dimensionless quantity, the units of the modulus will be the same as the units of stress. Elastic constants and moduli Elastic constants are specific parameters that quantify the stiffness of a material in response to applied stresses and are fundamental in defining the elastic properties of materials. These constants form the elements of the stiffness matrix in tensor notation, which relates stress to strain through linear equations in anisotropic materials. Commonly denoted as Cijkl, where i, j, k, and l are the coordinate directions, these constants are essential for understanding how materials deform under various loads. Types of elastic modulus Specifying how stress and strain are to be measured, including directions, allows for many types of elastic moduli to be defined. The four primary ones are: Young's modulus (E) describes tensile and compressive elasticity, or the tendency of an object to deform along an axis when opposing forces are applied along that axis; it is defined as the ratio of tensile stress to tensile strain. It is often referred to simply as the elastic modulus.
The shear modulus or modulus of rigidity (G or Lamé second parameter) describes an object's tendency to shear (the deformation of shape at constant volume) when acted upon by opposing forces; it is defined as shear stress over shear strain. The shear modulus is part of the derivation of viscosity. The bulk modulus (K) describes volumetric elasticity, or the tendency of an object to deform in all directions when uniformly loaded in all directions; it is defined as volumetric stress over volumetric strain, and is the inverse of compressibility. The bulk modulus is an extension of Young's modulus to three dimensions. The flexural modulus (Eflex) describes the object's tendency to flex when acted upon by a moment. Two other elastic moduli are Lamé's first parameter, λ, and the P-wave modulus, M, as used in the table of modulus comparisons given below. Homogeneous and isotropic (similar in all directions) materials (solids) have their (linear) elastic properties fully described by two elastic moduli, and one may choose any pair. Given a pair of elastic moduli, all other elastic moduli can be calculated according to formulas in the table at the end of the page. Inviscid fluids are special in that they cannot support shear stress, meaning that the shear modulus is always zero. This also implies that Young's modulus for this group is always zero. In some texts, the modulus of elasticity is referred to as the elastic constant, while the inverse quantity is referred to as elastic modulus. Density functional theory calculation Density functional theory (DFT) provides reliable methods for determining several forms of elastic moduli that characterise distinct features of a material's reaction to mechanical stresses. Utilize DFT software such as VASP, Quantum ESPRESSO, or ABINIT. Overall, conduct tests to ensure that results are independent of computational parameters such as the density of the k-point mesh, the plane-wave cutoff energy, and the size of the simulation cell.
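The statement above that, for an isotropic material, any pair of elastic moduli determines all the others can be sketched with the standard conversion formulas. This is a minimal illustration, not from the article; the function name and the steel-like example values are assumptions.

```python
# Standard isotropic relations: given Young's modulus E and Poisson's
# ratio nu, compute the shear modulus G, bulk modulus K, Lamé's first
# parameter lambda, and the P-wave modulus M.

def moduli_from_E_nu(E: float, nu: float) -> dict:
    G = E / (2.0 * (1.0 + nu))                      # shear modulus
    K = E / (3.0 * (1.0 - 2.0 * nu))                # bulk modulus
    lam = E * nu / ((1.0 + nu) * (1.0 - 2.0 * nu))  # Lamé's first parameter
    M = K + 4.0 * G / 3.0                           # P-wave modulus
    return {"G": G, "K": K, "lambda": lam, "M": M}

# Assumed steel-like values: E = 200 GPa, nu = 0.3
m = moduli_from_E_nu(200e9, 0.3)
print(round(m["G"] / 1e9, 1))  # 76.9 (GPa)
print(round(m["K"] / 1e9, 1))  # 166.7 (GPa)
```

A useful sanity check on such conversions is the identity M = λ + 2G, which these formulas satisfy exactly.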
Young's modulus (E) - apply small, incremental changes in the lattice parameter along a specific axis and compute the corresponding stress response using DFT. Young's modulus is then calculated as E = σ/ϵ, where σ is the stress and ϵ is the strain. Initial structure: Start with a relaxed structure of the material. All atoms should be in a state of minimum energy (i.e., a minimum-energy state with zero forces on the atoms) before any deformations are applied. Incremental uniaxial strain: Apply small, incremental strains to the crystal lattice along a particular axis. This strain is usually uniaxial, meaning it stretches or compresses the lattice in one direction while keeping the other dimensions constant or periodic. Calculate stresses: For each strained configuration, run a DFT calculation to compute the resulting stress tensor. This involves solving the Kohn-Sham equations to find the ground-state electron density and energy under the strained conditions. Stress-strain curve: Plot the calculated stress versus the applied strain to create a stress-strain curve. The slope of the initial, linear portion of this curve gives Young's modulus. Mathematically, Young's modulus E is calculated using the formula E = σ/ϵ, where σ is the stress and ϵ is the strain. Shear modulus (G) Initial structure: Start with a relaxed structure of the material. All atoms should be in a state of minimum energy, with no residual forces, before any deformations are applied. Shear strain application: Apply small increments of shear strain to the material. Shear strains are typically off-diagonal components in the strain tensor, affecting the shape but not the volume of the crystal cell. Stress calculation: For each configuration with applied shear strain, perform a DFT calculation to determine the resulting stress tensor. Shear stress vs.
shear strain curve: Plot the calculated shear stress against the applied shear strain for each increment. The slope of the stress-strain curve in its linear region provides the shear modulus, G = τ/γ, where τ is the shear stress and γ is the applied shear strain. Bulk modulus (K) Initial structure: Start with a relaxed structure of the material. It is crucial that the material is fully optimized, ensuring that any changes in volume are purely due to applied pressure. Volume changes: Incrementally change the volume of the crystal cell, either compressing or expanding it. This is typically done by uniformly scaling the lattice parameters. Calculate pressure: For each altered volume, perform a DFT calculation to determine the pressure required to maintain that volume. DFT allows for the calculation of stress tensors, which provide a direct measure of the internal pressure. Pressure-volume curve: Plot the applied pressure against the resulting volume change. The bulk modulus can be calculated from the slope of this curve in the linear elastic region. The bulk modulus is defined as K = −V dP/dV, where V is the original volume, dP is the change in pressure, and dV is the change in volume.
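Each of the procedures above ends the same way: fit the linear region of a computed curve and read off the modulus as its slope. A sketch of that final step follows; the strain and stress values are made up, standing in for the output of a DFT run.

```python
import numpy as np

# Hypothetical uniaxial strains and the stresses (Pa) a calculation might return
strains = np.array([0.000, 0.001, 0.002, 0.003, 0.004])
stresses = np.array([0.0, 2.0e8, 4.0e8, 6.0e8, 8.0e8])

# Least-squares line through the linear region; the slope is E = sigma / epsilon
slope, intercept = np.polyfit(strains, stresses, 1)
print(slope / 1e9)  # Young's modulus in GPa (200.0 for this made-up data)
```

The same fit applies to the shear and bulk cases by substituting (γ, τ) or (V, P) for the strain and stress arrays.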
https://en.wikipedia.org/wiki/Port%20of%20Los%20Angeles
Port of Los Angeles
The Port of Los Angeles is a seaport managed by the Los Angeles Harbor Department, a unit of the City of Los Angeles. It occupies of land and water with of waterfront and adjoins the separate Port of Long Beach. Promoted as "America's Port", the port is located in San Pedro Bay in the San Pedro and Wilmington neighborhoods of Los Angeles, approximately south of downtown. The port has 25 cargo terminals, 82 container cranes, 8 container terminals, and of on-dock rail. The port's top imports were furniture, automobile parts, apparel, footwear, and electronics. In 2019, the port's top exports were wastepaper, pet and animal feed, scrap metal and soybeans. In 2020, the port's top three trading partners were China (including Hong Kong), Japan, and Vietnam. In 2022, the port, together with the adjoining Port of Long Beach, was ranked among the world's least efficient ports by the World Bank and IHS Markit, which cited union protectionism and a lack of automation. History In 1542, Juan Rodríguez Cabrillo discovered the "Bay of Smokes." The south-facing San Pedro Bay was originally a shallow mudflat, too soft to support a wharf. Visiting ships had two choices: stay far out at anchor and have their goods and passengers ferried to shore, or beach themselves. That sticky process is described in Two Years Before the Mast by Richard Henry Dana Jr., who was a crew member on an 1834 voyage that visited San Pedro Bay. Phineas Banning greatly improved shipping when he dredged the channel to Wilmington in 1871 to a depth of . The port handled 50,000 tons of shipping that year. Banning owned a stagecoach line with routes connecting San Pedro to Salt Lake City, Utah, and Yuma, Arizona, and in 1868 he built a railroad to connect San Pedro Bay to Los Angeles, the first in the area. After Banning's death in 1885, his sons pursued their interests in promoting the port, which handled 500,000 tons of shipping in that year. The Southern Pacific Railroad and Collis P.
Huntington wanted to create Port Los Angeles at Santa Monica and built the Long Wharf there in 1893. However, the Los Angeles Times publisher Harrison Gray Otis and U.S. Senator Stephen White pushed for federal support of the Port of Los Angeles at San Pedro Bay. The Free Harbor Fight was settled when San Pedro was endorsed in 1897 by a commission headed by Rear Admiral John C. Walker (who later went on to become the chair of the Isthmian Canal Commission in 1904). With U.S. government support, breakwater construction began in 1899, and the area was annexed to Los Angeles in 1909. The Los Angeles Board of Harbor Commissioners was founded in 1907. In 1912 the Southern Pacific Railroad completed its first major wharf at the port. During the 1920s, the port surpassed San Francisco as the West Coast's busiest seaport. In the early 1930s, a massive expansion of the port was undertaken with the construction of a breakwater three miles out and over two miles in length. In addition to the construction of this outer breakwater, an inner breakwater was built off Terminal Island with docks for seagoing ships and smaller docks built at Long Beach. It was this improved harbor that hosted the sailing events for the 1932 Summer Olympics. During World War II, the port was primarily used for shipbuilding, employing more than 90,000 people. In 1959, Matson Navigation Company's Hawaiian Merchant delivered 20 containers to the port, beginning the port's shift to containerization. The opening of the Vincent Thomas Bridge in 1963 greatly improved access to Terminal Island and allowed increased traffic and further expansion of the port. In 1985, the port handled one million containers in a year for the first time. During the 2002 West Coast port labor lockout, the port had a large backlog of ships waiting to be unloaded at any given time. In 2000, the Pier 400 Dredging and Landfill Program, the largest such project in America, was completed. 
By 2013, more than half a million containers were moving through the Port every month. Port district The port district is an independent, self-supporting department of the government of the City of Los Angeles. The port is under the control of a five-member Board of Harbor Commissioners appointed by the mayor and approved by the city council, and is administered by an executive director. The port maintains an AA bond rating, the highest rating attainable for self-funded ports. , the port had about a dozen pilots, including two chiefs. Pilots have specialized knowledge of the harbor and San Pedro Bay. They meet the ships waiting to enter the harbor and provide advice as the vessel is steered through the congested waterway to the dock. For public safety protection inside the port and of its businesses, the Port of Los Angeles utilizes the Los Angeles Port Police for police service, the Los Angeles Fire Department (LAFD) to provide fire and EMS services, the U.S. Coast Guard for waterway security, Homeland Security to protect federal land at the port, and the Los Angeles County Lifeguards to provide lifeguard services for open waters outside of the harbor, while Los Angeles City Recreation & Parks Department lifeguards patrol the inner Cabrillo Beach. Shipping The port's container volume was in calendar year 2019, a 5.5% increase over 2016's record-breaking year of 8.8 million TEU. It was the most cargo moved annually by a Western Hemisphere port. The port is the busiest port in the United States by container volume, the 19th-busiest container port in the world, and the 10th-busiest worldwide when combined with the neighboring Port of Long Beach. The port is also the number-one freight gateway in the United States when ranked by the value of shipments passing through it.
The port's top trading partners in 2019 were: China/Hong Kong ($128 billion) Japan ($89 billion) Vietnam ($21 billion) South Korea ($15 billion) Taiwan ($15 billion) The most-imported types of goods in the 2019 calendar year were, in order: furniture (579,405), automobile parts (340,546), apparel (312,655), and electronic products (209,622). The port is served by the Pacific Harbor Line (PHL) railroad. From the PHL, intermodal railroad cars go north to Los Angeles via the Alameda Corridor. In 2011, no American port could handle ships of the PS-class Emma Mærsk and the future Maersk Triple E class size, the latter of which needs cranes reaching 23 rows. In 2012, the port and the U.S. Army Corps of Engineers deepened the port's main navigational channel to , which is deep enough to accommodate the draft of the world's biggest container ships. However, Maersk had no plans in 2014 to bring those ships to America. In 2024 the port received 3 cranes capable of servicing ships up to 18,000 TEU. Los Angeles and Long Beach ports were some of the least efficient in the world, according to a 2022 ranking by the World Bank and IHS Markit. Cruise ship terminal The World Cruise Center, located in San Pedro, Los Angeles, beneath the Vincent Thomas Bridge, has three passenger ship berths. Public access investments The LA Waterfront is a visitor-serving destination in the city of Los Angeles, funded and maintained by the Port of Los Angeles. In 2009, the Los Angeles Harbor Commission approved the San Pedro Waterfront and Wilmington Waterfront development programs, under the LA Waterfront umbrella. The LA Waterfront consists of a series of waterfront development and community enhancement projects covering more than of existing Port of Los Angeles property in both San Pedro and Wilmington. With miles of public promenade and walking paths, acres of open space and scenic views, the LA Waterfront attracts thousands of visitors annually. 
Remodel and reconstruction were approved by the Los Angeles City Council. Development is set to be completed in 2020. Construction is expected to begin in 2017 at a partial project cost of $90 million, paid by the developer. The San Pedro Public Market is expected to open in 2020, with demolition beginning as early as November 2016. The Waterfront Red Car is a currently non-operational heritage trolley line for public transit along the waterfront in San Pedro. Prior to its closure in 2015, it used vintage and restored Pacific Electric Red Cars to connect the World Cruise Center, Downtown San Pedro, Ports O' Call Village, and the San Pedro Marina. Environment Oceangoing ships visiting ports are a large source of nitrogen oxides in Southern California. Heavy-duty diesel trucks, which are also part of the freight-moving port complexes, emit exhaust with nitrogen oxides and particulate matter. The California Air Resources Board is working on reducing these sources of pollution, which produce the nation's worst smog and kill more than 3,500 Southern Californians each year. In 2021, the South Coast Air Quality Management District required warehouses in the port which do not cut emissions of carbon and pollutants to pay fees. The port installed the first Alternative Maritime Power (AMP) berth in 2004 and can provide up to 40 MW of grid power to two cruise ships simultaneously at both 6.6 kV and 11 kV, as well as three container terminals, reducing pollution from ship engines. In an effort to buffer the nearby community of Wilmington from the port, the Wilmington Waterfront Park was opened in June 2011. Clean Air Action Program The $2.8 million San Pedro Bay Ports Clean Air Action Program (CAAP) initiative was implemented by the Board of Harbor Commissioners in October 2002 for terminal and ship operations programs targeted at reducing polluting emissions from vessels and cargo handling equipment.
To accelerate implementation of emission reductions through the use of new and cleaner-burning equipment, the port has allocated more than $52 million in additional funding for the CAAP through 2008. As of May 2016, the Port of Los Angeles had already surpassed its initial 2023 emission goals, eight years ahead of the predicted time frame. The reductions so far include a 72% decrease in diesel particulate matter, a 93% decrease in sulfur oxides, and a 22% decrease in nitrogen oxides. Following these environmental successes, the CAAP was updated to version 3.0. With the ratification of new environmental goals, the updates look to reduce emissions through efficient supply chain optimization. There have also been recent efforts to advance port technology and promote the development of efficient and green port technologies. The CAAP also aims to take a leading role in fostering and improving the wildlife and ecosystem of the port.
https://en.wikipedia.org/wiki/Late%20Ordovician%20mass%20extinction
Late Ordovician mass extinction
The Late Ordovician mass extinction (LOME), sometimes known as the end-Ordovician mass extinction or the Ordovician-Silurian extinction, is the first of the "big five" major mass extinction events in Earth's history, occurring roughly 445 million years ago (Ma). It is often considered to be the second-largest known extinction event just behind the end-Permian mass extinction, in terms of the percentage of genera that became extinct. Extinction was global during this interval, eliminating 49–60% of marine genera and nearly 85% of marine species. Under most tabulations, only the Permian-Triassic mass extinction exceeds the Late Ordovician mass extinction in biodiversity loss. The extinction event abruptly affected all major taxonomic groups and caused the disappearance of one third of all brachiopod and bryozoan families, as well as numerous groups of conodonts, trilobites, echinoderms, corals, bivalves, and graptolites. Despite its taxonomic severity, the Late Ordovician mass extinction did not produce major changes to ecosystem structures compared to other mass extinctions, nor did it lead to any particular morphological innovations. Diversity gradually recovered to pre-extinction levels over the first 5 million years of the Silurian period. The Late Ordovician mass extinction is traditionally considered to occur in two distinct pulses. The first pulse (interval), known as LOMEI-1, began at the boundary between the Katian and Hirnantian stages of the Late Ordovician epoch. This extinction pulse is typically attributed to the Late Ordovician glaciation, which abruptly expanded over Gondwana at the beginning of the Hirnantian and shifted the Earth from a greenhouse to icehouse climate. Cooling and a falling sea level brought on by the glaciation led to habitat loss for many organisms along the continental shelves, especially endemic taxa with restricted temperature tolerance and latitudinal range. 
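The spread between the genus-level (49–60%) and species-level (nearly 85%) loss figures quoted above can be illustrated with a simple model: if species go extinct independently, a genus disappears only when all of its species do. The sketch below is purely illustrative, in the spirit of Raup-style reverse rarefaction; the genus-size distribution is a made-up assumption, not data from the studies behind these estimates.

```python
# Illustrative model: if a fraction p of species go extinct independently,
# a genus with n species is lost only if all n of its species are lost,
# with probability p ** n.
def expected_genus_loss(p_species, genus_sizes):
    """Expected fraction of genera lost, given the species-level
    extinction probability and a list of genus sizes (species count
    per genus). A back-of-the-envelope sketch, not a published method."""
    return sum(p_species ** n for n in genus_sizes) / len(genus_sizes)

p = 0.85  # ~85% of marine species lost, per the estimates above
sizes = [1, 2, 3, 3, 4, 4, 5, 6]  # hypothetical genus-size distribution
print(round(expected_genus_loss(p, sizes), 2))  # lands in the 0.49-0.60 range
```

With genera averaging three to five species, an 85% species-level kill naturally yields a genus-level loss near the 49–60% reported for the event, which is why genus counts understate the severity of species losses.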
During this extinction pulse, there were also several marked changes in biologically responsive carbon and oxygen isotopes. Marine life partially rediversified during the cold period and a new cold-water ecosystem, the "Hirnantia fauna", was established. The second pulse (interval) of extinction, referred to as LOMEI-2, occurred in the latter half of the Hirnantian as the glaciation abruptly receded and warm conditions returned. The second pulse was associated with intense worldwide anoxia (oxygen depletion) and euxinia (toxic sulfide production), which persisted into the subsequent Rhuddanian stage of the Silurian Period. Some researchers have proposed the existence of a third distinct pulse of the mass extinction during the early Rhuddanian, evidenced by a negative carbon isotope excursion and a pulse of anoxia into shelf environments amidst already low background oxygen levels. Others, however, have argued that Rhuddanian anoxia was simply part of the second pulse, which according to this view was longer and more drawn out than most authors suggest. Impact on life Ecological impacts The Late Ordovician mass extinction followed the Great Ordovician Biodiversification Event (GOBE), one of the largest surges of increasing biodiversity in the geological and biological history of the Earth. At the time of the extinction, most complex multicellular organisms lived in the sea, and the only evidence of life on land is rare spores from small early land plants. Around 100 marine families became extinct during the event, covering about 49% of genera (a more reliable estimate than species). The brachiopods and bryozoans were strongly impacted, along with many of the trilobite, conodont and graptolite families. The extinction proceeded in two major pulses. The first pulse occurred at the base of the global Metabolograptus extraordinarius graptolite biozone, which marks the end of the Katian stage and the start of the Hirnantian stage.
The second pulse of extinction occurred in the later part of the Hirnantian stage, coinciding with the Metabolograptus persculptus zone. Each extinction pulse affected different groups of animals and was followed by a rediversification event. Statistical analysis of marine losses at this time suggests that the decrease in diversity was mainly caused by a sharp increase in extinctions, rather than a decrease in speciation. Following such a major loss of diversity, Silurian communities were initially less complex and occupied broader niches. Nonetheless, in South China, warm-water benthic communities with complex trophic webs thrived immediately following LOME. Highly endemic faunas, which characterized the Late Ordovician, were replaced by faunas that were amongst the most cosmopolitan in the Phanerozoic, a biogeographic pattern that persisted throughout most of the Silurian. LOME had few of the long-term ecological impacts associated with the Permian–Triassic and Cretaceous–Paleogene extinction events. Furthermore, biotic recovery from LOME proceeded at a much faster rate than it did after the Permian-Triassic extinction. Nevertheless, a large number of taxa disappeared from the Earth over a short time interval, eliminating and altering the relative diversity and abundance of certain groups. The Cambrian-type evolutionary fauna nearly died out, and was unable to rediversify after the extinction. Biodiversity changes in marine invertebrates Brachiopods Brachiopod diversity and composition was strongly affected, with the Cambrian-type inarticulate brachiopods (linguliforms and craniiforms) never recovering their pre-extinction diversity. Articulate (rhynchonelliform) brachiopods, part of the Paleozoic evolutionary fauna, were more variable in their response to the extinction. Some early rhynchonelliform groups, such as the Orthida and Strophomenida, declined significantly.
Others, including the Pentamerida, Athyridida, Spiriferida, and Atrypida, were less affected and took the opportunity to diversify after the extinction. Additionally, brachiopods with higher abundance were more likely to survive. The extinction pulse at the end of the Katian was selective in its effects, disproportionally affecting deep-water species and tropical endemics inhabiting epicontinental seas. The Foliomena fauna, an assemblage of thin-shelled species adapted for deep dysoxic (low oxygen) waters, went extinct completely in the first extinction pulse. The Foliomena fauna was formerly widespread and resistant to background extinction rates prior to the Hirnantian, so their unexpected extinction points towards the abrupt loss of their specific habitat. During the glaciation, a high-latitude brachiopod assemblage, the Hirnantia fauna, established itself along outer shelf environments in lower latitudes, probably in response to cooling. However, the Hirnantia fauna would meet its demise in the second extinction pulse, replaced by Silurian-style assemblages adapted for warmer waters. The brachiopod survival intervals following the second pulse spanned the terminal Hirnantian to the middle Rhuddanian, after which the recovery interval began and lasted until the early Aeronian. Overall, the brachiopod recovery in the late Rhuddanian was rapid. Brachiopod survivors of the mass extinction tended to be endemic to one palaeoplate or even one locality in the survival interval in the earliest Silurian, though their ranges geographically expanded over the course of the biotic recovery. The region around what is today Oslo was a hotbed of atrypide rediversification. Brachiopod recovery consisted mainly of the reestablishment of cosmopolitan brachiopod taxa from the Late Ordovician. Progenitor taxa that arose following the mass extinction displayed numerous novel adaptations for resisting environmental stresses. 
Although some brachiopods did experience the Lilliput effect in response to the extinction, this phenomenon was not particularly widespread compared to other mass extinctions. Trilobites Trilobites were hit hard by both phases of the extinction, with about 70% of genera and 50% of families going extinct between the Katian and Silurian. The extinction disproportionately affected deep water species and groups with fully planktonic larvae or adults. The order Agnostida was completely wiped out, and the formerly diverse Asaphida survived with only a single genus, Raphiophorus. A cool-water trilobite assemblage, the Mucronaspis fauna, coincides with the Hirnantia brachiopod fauna in the timing of its expansion and demise. Trilobite faunas after the extinction were dominated by families that appeared in the Ordovician and survived LOME, such as Encrinuridae and Odontopleuridae. Bryozoans Over a third of bryozoan genera went extinct, but most families survived the extinction interval and the group as a whole recovered in the Silurian. The hardest-hit subgroups were the cryptostomes and trepostomes, which never recovered the full extent of their Ordovician diversity. Bryozoan extinctions started in coastal regions of Laurentia, before high extinction rates shifted to Baltica by the end of the Hirnantian. Bryozoan biodiversity loss appears to have been a prolonged process which partially preceded the Hirnantian extinction pulses. Extinction rates among Ordovician bryozoan genera were actually higher in the early and late Katian, and origination rates sharply dropped in the late Katian and Hirnantian. Echinoderms About 70% of crinoid genera died out, with most extinctions occurring in the first pulse; early studies by Jack Sepkoski overestimated crinoid losses during LOME. However, crinoids rediversified quickly in tropical areas and reacquired their pre-extinction diversity not long into the Silurian.
Many other echinoderms became very rare after the Ordovician, such as the cystoids, edrioasteroids, and other early crinoid-like groups. Sponges Stromatoporoid generic and familial taxonomic diversity was not significantly impacted by the mass extinction. A change in abundance is recorded, however; clathrodictyids increased in abundance relative to labechiids. Sponges thrived and dominated marine ecosystems in South China immediately after the extinction event, colonising depauperate, anoxic environments in the earliest Rhuddanian. Their pervasiveness in marine environments after the biotic crisis has been attributed to drastically decreased competition and an abundance of vacant niches left behind by organisms that perished in the catastrophe. Sponges may have assisted the recovery of other sessile suspension feeders: by helping stabilise sediment surfaces, they enabled bryozoans, brachiopods, and corals to recolonise the seafloor. Glaciation and cooling The first pulse of the Late Ordovician mass extinction has typically been attributed to the Late Ordovician glaciation; such a glacial trigger is unusual among mass extinctions and makes LOME an outlier. Although there was a longer cooling trend in the Middle and Late Ordovician, the most severe and abrupt period of glaciation occurred in the Hirnantian stage, which was bracketed by both pulses of the extinction. The rapid continental glaciation was centered on Gondwana, which was located at the South Pole in the Late Ordovician. The Hirnantian glaciation is considered one of the most severe ice ages of the Paleozoic, an era that otherwise maintained the relatively warm climate conditions of a greenhouse Earth. The cause of the glaciation is heavily debated. The Late Ordovician glaciation was preceded by a fall in atmospheric carbon dioxide (from 7,000 ppm to 4,400 ppm). Atmospheric and oceanic CO2 levels may have fluctuated with the growth and decay of Gondwanan glaciation.
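The scale of the CO2 drawdown quoted above (7,000 ppm to 4,400 ppm) can be put in rough energetic terms with the standard logarithmic radiative-forcing approximation ΔF ≈ 5.35 ln(C/C0) W/m². This formula is calibrated for modern conditions and is applied here purely as an order-of-magnitude sketch; it does not come from the sources discussed in this article.

```python
import math

def co2_forcing(c_new_ppm, c_ref_ppm):
    """Simplified change in radiative forcing (W/m^2) for a change in
    atmospheric CO2, using the common logarithmic approximation
    dF = 5.35 * ln(C / C0) (Myhre et al. 1998)."""
    return 5.35 * math.log(c_new_ppm / c_ref_ppm)

# Late Ordovician drawdown reported in the text: 7,000 -> 4,400 ppm.
# The result is negative, i.e. a cooling forcing of a few W/m^2.
print(round(co2_forcing(4400, 7000), 2))
```

A few watts per square metre is comparable in magnitude (with opposite sign) to modern anthropogenic forcing, which gives a sense of why such a drawdown could plausibly tip the climate toward glaciation.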
The appearance and development of terrestrial plants and microphytoplankton, which consumed atmospheric carbon dioxide, may have diminished the greenhouse effect and promoted the transition of the climatic system to the glacial mode. Heavy silicate weathering of the uplifting Appalachians and Caledonides occurred during the Late Ordovician, which sequestered CO2. In the Hirnantian stage, volcanism diminished, and continued weathering caused a significant and rapid drawdown of CO2 coincident with the rapid and short ice age. As Earth cooled and sea levels dropped, highly weatherable carbonate platforms became exposed above water, enkindling a positive feedback loop of inorganic carbon sequestration. A hypothetical large igneous province emplaced during the Katian, whose existence remains unproven, has been speculated to have been the sink that absorbed carbon dioxide and precipitated Hirnantian cooling. Alternatively, volcanic activity may have caused the cooling by supplying sulphur aerosols to the atmosphere and generating severe volcanic winters that triggered a runaway ice-albedo positive feedback loop. In addition, volcanic fertilisation of the oceans with phosphorus may have increased populations of photosynthetic algae and enhanced biological sequestration of carbon dioxide from the atmosphere. Increased burial of organic carbon is another method of drawing down carbon dioxide from the air that may have played a role in the Late Ordovician. Other studies point to an asteroid strike and impact winter as the culprit for the glaciation. True polar wander and the associated rapid palaeogeographic changes have also been proposed as a cause. Other studies have even suggested that shading of the sun's rays by a temporary planetary ring, formed from the partial breakup of a large meteor in the atmosphere, may have caused the glaciation, which would also link it to the Ordovician meteor event.
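The silicate-weathering sink invoked above is conventionally summarized by the Urey reaction, in which weathering of calcium silicate rock ultimately buries atmospheric CO2 as marine carbonate. This is a textbook simplification, using wollastonite as the model silicate, rather than a reaction taken from the studies cited here:

```latex
% Net Urey reaction: silicate weathering followed by carbonate burial
\mathrm{CaSiO_3} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{SiO_2}
```

Each mole of silicate weathered thus removes one mole of CO2 from the atmosphere-ocean system for as long as the carbonate stays buried, which is why sustained uplift and weathering of the Appalachians and Caledonides could act as a long-term cooling mechanism.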
Two environmental changes associated with the glaciation were responsible for much of the Late Ordovician extinction. First, the cooling global climate was probably especially detrimental because the biota were adapted to an intense greenhouse, especially because most shallow sea habitats in the Ordovician were located in the tropics. The southward shift of the polar front severely contracted the available latitudinal range of warm-adapted organisms. Second, sea level decline, caused by sequestering of water in the ice cap, drained the vast epicontinental seaways and eliminated the habitat of many endemic communities. The dispersed positions of the continents, in contrast to their configuration during the far less lethal Pleistocene glaciations, made glacioeustatic marine regression especially hazardous to marine life. Falling sea levels may have acted as a positive feedback loop accelerating further cooling; as shallow seas receded, carbonate-shelf production declined and atmospheric carbon dioxide levels correspondingly decreased, fostering even more cooling. Ice caps formed on the southern supercontinent Gondwana as it drifted over the South Pole. Correlating glacial deposits have been detected in Late Ordovician strata of North Africa and then-adjacent northeastern South America, which were south-polar locations at the time. Glaciation locks up water from the world-ocean and interglacials free it, causing sea levels repeatedly to drop and rise; the vast, shallow Ordovician seas withdrew, which eliminated many ecological niches, then returned, carrying diminished founder populations lacking many whole families of organisms. Then they withdrew again with the next pulse of glaciation, eliminating biological diversity at each change. Seismic sections of the North African strata record five pulses of glaciation.
In the Yangtze Platform, a relict warm-water fauna continued to persist because South China was positioned to block the transport of cold water from higher-latitude Gondwanan seas. The glaciation caused a shift in the location of bottom-water formation from low latitudes, characteristic of greenhouse conditions, to high latitudes, characteristic of icehouse conditions, which was accompanied by increased deep-ocean currents and oxygenation of the bottom water. An opportunistic fauna briefly thrived there, before anoxic conditions returned. The breakdown in the oceanic circulation patterns brought up nutrients from the abyssal waters. Surviving species were those that coped with the changed conditions and filled the ecological niches left by the extinctions. However, not all studies agree that cooling and glaciation caused LOMEI-1. One study suggests that the first pulse began not during the rapid Hirnantian ice cap expansion but in an interval of deglaciation following it. Anoxia and euxinia Another heavily-discussed factor in the Late Ordovician mass extinction is anoxia, the absence of dissolved oxygen in seawater. Anoxia not only deprives most life forms of a vital component of respiration, it also encourages the formation of toxic metal ions and other compounds. One of the most common of these poisonous chemicals is hydrogen sulfide, a biological waste product and major component of the sulfur cycle. Oxygen depletion combined with high levels of sulfide is called euxinia. Though less toxic, ferrous iron (Fe2+) is another substance which commonly forms in anoxic waters. Anoxia is the most common culprit for the second pulse of the Late Ordovician mass extinction and is connected to many other mass extinctions throughout geological time. It may have also had a role in the first pulse of the Late Ordovician mass extinction, though support for this hypothesis is inconclusive and contradicts other evidence for high oxygen levels in seawater during the glaciation.
Early Hirnantian anoxia Some geologists have argued that anoxia played a role in the first extinction pulse, though this hypothesis is controversial. In the early Hirnantian, shallow-water sediments throughout the world experience a large positive excursion in the δ34S ratio of buried pyrite. This ratio indicates that shallow-water pyrite which formed at the beginning of the glaciation had a decreased proportion of 32S, a common lightweight isotope of sulfur. 32S in the seawater could hypothetically be used up by extensive deep-sea pyrite deposition. The Ordovician ocean also had very low levels of sulfate, a nutrient which would otherwise resupply 32S from the land. Pyrite forms most easily in anoxic and euxinic environments, while better oxygenation encourages the formation of gypsum instead. As a result, anoxia and euxinia would need to be common in the deep sea to produce enough pyrite to shift the δ34S ratio. Thallium isotope ratios can also be used as indicators of anoxia. A major positive ε205Tl excursion in the late Katian, just before the Katian-Hirnantian boundary, likely reflects a global enlargement of oxygen minimum zones. During the late Katian, thallium isotopic perturbations indicating proliferation of anoxic waters notably preceded the appearance of other geochemical indicators of the expansion of anoxia. A more direct proxy for anoxic conditions is FeHR/FeT. This ratio describes the comparative abundance of highly reactive iron compounds which are only stable without oxygen. Most geological sections corresponding to the beginning of the Hirnantian glaciation have FeHR/FeT below 0.38, indicating oxygenated waters. However, higher FeHR/FeT values are known from a few deep-water early Hirnantian sequences found in China and Nevada. Elevated FePy/FeHR values have also been found in association with LOMEI-1, including ones above 0.8 that are tell-tale indicators of euxinia. Glaciation could conceivably trigger anoxic conditions, albeit indirectly. 
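The iron-speciation thresholds quoted above (FeHR/FeT around 0.38 separating oxygenated from anoxic deposition, and FePy/FeHR above about 0.8 indicating euxinia within anoxic samples) amount to a simple two-step decision rule. The function below is an illustrative sketch of that rule using only the threshold values given in the text; it is not code from any cited study, and real studies treat values near the cut-offs as equivocal.

```python
def classify_redox(fe_hr, fe_t, fe_py):
    """Rough depositional redox classification from iron speciation.

    fe_hr: highly reactive iron; fe_t: total iron; fe_py: pyrite iron.
    FeHR/FeT <= 0.38 is taken as oxic (or equivocal); among anoxic
    samples, FePy/FeHR > 0.8 suggests euxinic (sulfidic) conditions,
    lower values ferruginous conditions.
    """
    if fe_t <= 0 or fe_hr < 0 or fe_py < 0:
        raise ValueError("iron measurements must be non-negative, fe_t > 0")
    if fe_hr / fe_t <= 0.38:
        return "oxic (or equivocal)"
    return "euxinic" if fe_py / fe_hr > 0.8 else "ferruginous"

print(classify_redox(0.5, 2.0, 0.1))  # FeHR/FeT = 0.25 -> oxic signal
print(classify_redox(1.0, 2.0, 0.9))  # FeHR/FeT = 0.5, FePy/FeHR = 0.9 -> euxinic
```

Under this rule, the early Hirnantian sections described above with FeHR/FeT below 0.38 read as oxygenated, while the Chinese and Nevadan deep-water sequences with elevated FeHR/FeT and FePy/FeHR above 0.8 read as euxinic.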
If continental shelves are exposed by falling sea levels, then organic surface runoff flows into deeper oceanic basins. The organic matter would have more time to leach out phosphate and other nutrients before being deposited on the seabed. Increased phosphate concentration in the seawater would lead to eutrophication and then anoxia. Deep-water anoxia and euxinia would impact deep-water benthic fauna, as expected for the first pulse of extinction. Chemical cycle disturbances would also steepen the chemocline, restricting the habitable zone of planktonic fauna, which also went extinct in the first pulse. This scenario is congruent with both organic carbon isotope excursions and general extinction patterns observed in the first pulse. However, data supporting deep-water anoxia during the glaciation contrasts with more extensive evidence for well-oxygenated waters. Black shales, which are indicative of an anoxic environment, become very rare in the early Hirnantian compared to surrounding time periods. Although early Hirnantian black shales can be found in a few isolated ocean basins (such as the Yangtze platform of China), from a worldwide perspective these correspond to local events. Some Chinese sections record an early Hirnantian increase in the abundance of Mo-98, a heavy isotope of molybdenum. This shift can correspond to a balance between minor local anoxia and well-oxygenated waters on a global scale. Other trace elements point towards increased deep-sea oxygenation at the start of the glaciation. Oceanic current modelling suggests that glaciation would have encouraged oxygenation in most areas, apart from the Paleo-Tethys ocean. Devastation of the Dicranograptidae-Diplograptidae-Orthograptidae (DDO) graptolite fauna, which was well adapted to anoxic conditions, further suggests that LOMEI-1 was associated with increased oxygenation of the water column rather than the reverse. Deep-sea anoxia is not the only explanation for the δ34S excursion of pyrite.
Carbonate-associated sulfate maintains high 32S levels, indicating that seawater in general did not experience 32S depletion during the glaciation. Even if pyrite burial did increase at that time, its chemical effects would have been far too slow to explain the rapid excursion or extinction pulse. Instead, cooling may lower the metabolism of warm-water aerobic bacteria, reducing decomposition of organic matter. Fresh organic matter would eventually sink down and supply nutrients to sulfate-reducing microbes living in the seabed. Sulfate-reducing microbes prioritize 32S during anaerobic respiration, leaving behind heavier isotopes. A bloom of sulfate-reducing microbes can quickly account for the δ34S excursion in marine sediments without a corresponding decrease in oxygen. A few studies have proposed that the first extinction pulse did not begin with the Hirnantian glaciation, but instead corresponds to an interglacial period or other warming event. Anoxia would be the most likely mechanism of extinction in a warming event, as evidenced by other extinctions involving warming. However, this view of the first extinction pulse is controversial and not widely accepted. Late Hirnantian anoxia The late Hirnantian experienced a dramatic increase in the abundance of black shales. Coinciding with the retreat of the Hirnantian glaciation, black shale expands out of isolated basins to become the dominant oceanic sediment at all latitudes and depths. The worldwide distribution of black shales in the late Hirnantian is indicative of a global anoxic event, which has been termed the Hirnantian ocean anoxic event (HOAE). Corresponding to widespread anoxia are δ34SCAS, δ98Mo, δ238U, and εNd(t) excursions found in many different regions. At least in European sections, late Hirnantian anoxic waters were originally ferruginous (dominated by ferrous iron) before gradually becoming more euxinic. 
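The preferential removal of 32S by sulfate reducers, described above, can be quantified with the standard Rayleigh-distillation approximation for the residual sulfate pool: as the light isotope is stripped out, the δ34S of what remains climbs logarithmically. The numbers below are hypothetical illustrations of that textbook relation, not values from the cited studies.

```python
import math

def rayleigh_d34s(delta0, epsilon, f_remaining):
    """Approximate d34S (permil) of residual sulfate after Rayleigh
    distillation by sulfate-reducing microbes, which preferentially
    remove 32S. epsilon is the fractionation (permil, positive);
    f_remaining is the fraction of the sulfate pool left (0 < f <= 1)."""
    return delta0 - epsilon * math.log(f_remaining)

# Hypothetical figures: seawater sulfate starting at +25 permil with a
# 30 permil fractionation; reducing half the pool drives the residue
# about 21 permil heavier.
print(round(rayleigh_d34s(25.0, 30.0, 0.5), 1))
```

Because the enrichment grows without bound as the pool shrinks, a microbial bloom drawing down a small, sulfate-poor Ordovician ocean can produce a large positive δ34S excursion quickly, which is the point of the bacterial-bloom alternative to deep-sea pyrite burial.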
In the Yangtze Sea, located on the western margins of the South China microcontinent, the second extinction pulse occurred alongside intense euxinia which spread out from the middle of the continental shelf. Mercury loading in South China during LOMEI-2 was likely related to euxinia. However, some evidence suggests that the top of the water column in the Ordovician oceans remained well oxygenated even as the seafloor became deoxygenated. On a global scale, euxinia was probably one or two orders of magnitude more prevalent than in the modern day. Global anoxia may have lasted more than 3 million years, persisting through the entire Rhuddanian stage of the Silurian period. This would make the Hirnantian-Rhuddanian anoxia one of the longest-lasting anoxic events in geologic time. The cause of the Hirnantian-Rhuddanian anoxic event is uncertain. As in most global anoxic events, an increased supply of nutrients (such as nitrates and phosphates) would encourage algal or microbial blooms that deplete oxygen levels in the seawater. The most likely culprits are cyanobacteria, which can use nitrogen fixation to produce usable nitrogen compounds in the absence of nitrates. Nitrogen isotopes during the anoxic event record high rates of denitrification, a biological process which depletes nitrates. The nitrogen-fixing ability of cyanobacteria would give them an edge over inflexible competitors like eukaryotic algae. At Anticosti Island, a uranium isotope excursion consistent with anoxia actually occurs prior to indicators of receding glaciation. This may suggest that the Hirnantian-Rhuddanian anoxic event (and its corresponding extinction) began during the glaciation, not after it. Cool temperatures can lead to upwelling, cycling nutrients into productive surface waters via air and ocean cycles. Upwelling could instead be encouraged by increasing oceanic stratification through an input of freshwater from melting glaciers.
This would be more reasonable if the anoxic event coincided with the end of glaciation, as supported by most other studies. However, oceanic models argue that marine currents would recover too quickly for freshwater disruptions to have a meaningful effect on nutrient cycles. Retreating glaciers could expose more land to weathering, which would be a more sustained source of phosphates flowing into the ocean. There is also evidence implicating volcanism as a contributor to Late Hirnantian anoxia. There were few clear patterns of extinction associated with the second extinction pulse. Every region and marine environment experienced the second extinction pulse to some extent. Many taxa which survived or diversified after the first pulse were finished off in the second pulse. These include the Hirnantia brachiopod fauna and Mucronaspis trilobite fauna, which previously thrived in the cold glacial period. Other taxa such as graptolites and warm-water reef denizens were less affected. Sediments from China and Baltica seemingly show a more gradual replacement of the Hirnantia fauna after glaciation. Although this suggests that the second extinction pulse may have been a minor event at best, other paleontologists maintain that an abrupt ecological turnover accompanied the end of glaciation. There may be a correlation between the relatively slow recovery after the second extinction pulse, and the prolonged nature of the anoxic event which accompanied it. On the other hand, the occurrence of euxinic pulses similar in magnitude to LOMEI-2 during the Katian without ensuing biological collapses has caused some researchers to question whether euxinia alone could have been LOMEI-2's driver. Early Rhuddanian anoxia Deposition of black graptolite shales continued to be common into the earliest Rhuddanian, indicating that anoxia persisted well into the Llandovery. 
A sharp reduction in the average size of many organisms, likely attributable to the Lilliput effect, and the disappearance of many relict taxa from the Ordovician indicate a third extinction interval linked to an expansion of anoxic conditions into shallower shelf environments, particularly in Baltica. This sharp decline in dissolved oxygen concentrations was likely linked to a period of global warming documented by a negative carbon isotope excursion preserved in Baltican sediments. Other potential factors Metal poisoning Toxic metals on the ocean floor may have dissolved into the water when the oceans' oxygen was depleted. An increase in available nutrients in the oceans may have been a factor, as may decreased ocean circulation caused by global cooling. Hg/TOC values from the Peri-Baltic region indicate noticeable spikes in mercury concentrations during the lower late Katian, the Katian-Hirnantian boundary, and the late Hirnantian. The toxic metals may have killed life forms in lower trophic levels of the food chain, causing a decline in population, and subsequently resulting in starvation for the dependent higher feeding life forms in the chain. Gamma-ray burst A minority hypothesis to explain the first pulse has been proposed by Philip Ball, Adrian Lewis Melott, and Brian C. Thomas, suggesting that the initial extinctions could have been caused by a gamma-ray burst originating from a hypernova in a nearby arm of the Milky Way galaxy, within 6,000 light-years of Earth. A ten-second burst would have stripped the Earth's atmosphere of half of its ozone almost immediately, exposing surface-dwelling organisms, including those responsible for planetary photosynthesis, to high levels of extreme ultraviolet radiation. Under this hypothesis, several groups of marine organisms with a planktonic lifestyle were exposed to more UV radiation than groups that lived on the seabed.
It is estimated that 20% to 60% of the total phytoplankton biomass on Earth would have been killed in such an event because the oceans were mostly oligotrophic and clear during the Late Ordovician. This is consistent with observations that planktonic organisms suffered severely during the first extinction pulse. In addition, species dwelling in shallow water were more likely to become extinct than species dwelling in deep water, also consistent with the hypothetical effects of a galactic gamma-ray burst. A gamma-ray burst could also explain the rapid expansion of glaciers, since the high energy rays would cause ozone, a greenhouse gas, to dissociate and its dissociated oxygen atoms to then react with nitrogen to form nitrogen dioxide, a darkly-coloured aerosol which cools the planet. It would also cohere with the major δ13C isotopic excursion indicating increased sequestration of carbon-12 out of the atmosphere, which would have occurred as a result of the nitrogen dioxide reacting with hydroxyl and raining back down to Earth as nitric acid, precipitating large quantities of nitrates that would have enhanced wetland productivity and sequestration of carbon dioxide. Although the gamma-ray burst hypothesis is consistent with some patterns at the onset of extinction, there is no unambiguous evidence that such a nearby gamma-ray burst ever happened. Volcanism Though more commonly associated with greenhouse gases and global warming, volcanoes may have cooled the planet and precipitated glaciation by discharging sulphur into the atmosphere. This is supported by a positive excursion in pyritic Δ33S values, a geochemical signal of volcanic sulphur discharge, coeval with LOMEI-1. More recently, in May 2020, a study suggested the first pulse of mass extinction was caused by volcanism which induced global warming and anoxia, rather than cooling and glaciation.
Higher-resolution analysis of species diversity patterns in the Late Ordovician suggests that extinction rates rose significantly in the early or middle Katian stage, several million years earlier than the Hirnantian glaciation. This early phase of extinction is associated with large igneous province (LIP) activity, possibly that of the Alborz LIP of northern Iran, as well as a warming phase known as the Boda event. However, other research still suggests the Boda event was a cooling event instead. Increased volcanic activity during the early late Katian and around the Katian-Hirnantian boundary is also implied by heightened mercury concentrations relative to total organic carbon. Marine bentonite layers associated with the subduction of the Junggar Ocean underneath the Yili Block have been dated to the late Katian, close to the Katian-Hirnantian boundary. Volcanic activity could also provide a plausible explanation for anoxia during the first pulse of the mass extinction. A volcanic input of phosphorus, which was insufficient to enkindle persistent anoxia on its own, may have triggered a positive feedback loop of phosphorus recycling from marine sediments, sustaining widespread marine oxygen depletion over the course of LOMEI-1. Also, the weathering of nutrient-rich volcanic rocks emplaced during the middle and late Katian likely enhanced the reduction in dissolved oxygen. Intense volcanism also fits in well with the attribution of euxinia as the main driver of LOMEI-2; sudden volcanism at the Ordovician-Silurian boundary is suggested to have supplied abundant sulphur dioxide, greatly facilitating the development of euxinia. Other papers have criticised the volcanism hypothesis, claiming that volcanic activity was relatively low in the Ordovician and that superplume and LIP volcanic activity is especially unlikely to have caused the mass extinction at the end of the Ordovician.
A 2022 study argued against a volcanic cause of LOME, citing the lack of mercury anomalies and the discordance between deposition of bentonites and redox changes in drillcores from South China straddling the Ordovician-Silurian boundary. Mercury anomalies at the end of the Ordovician relative to total organic carbon, or Hg/TOC, that some researchers have attributed to large-scale volcanism have been argued by others to be flawed because the main mercury host in the Ordovician was sulphide, and thus Hg/TS should be used instead; Hg/TS values show no evidence of volcanogenic mercury loading, a finding bolstered by ∆199Hg measurements much higher than would be expected for volcanogenic mercury input.

Asteroid impact

A 2023 paper points to the Deniliquin multiple-ring feature in southeastern Australia, which has been dated to around the start of LOMEI-1, for initiating the intense Hirnantian glaciation and the first pulse of the extinction event. The paper's authors note that further research is required to test the idea.
https://en.wikipedia.org/wiki/Compression%20%28physics%29
Compression (physics)
In mechanics, compression is the application of balanced inward ("pushing") forces to different points on a material or structure, that is, forces with no net sum or torque directed so as to reduce its size in one or more directions. It is contrasted with tension or traction, the application of balanced outward ("pulling") forces; and with shearing forces, directed so as to displace layers of the material parallel to each other. The compressive strength of materials and structures is an important engineering consideration. In uniaxial compression, the forces are directed along one direction only, so that they act towards decreasing the object's length along that direction. The compressive forces may also be applied in multiple directions; for example inwards along the edges of a plate or all over the side surface of a cylinder, so as to reduce its area (biaxial compression), or inwards over the entire surface of a body, so as to reduce its volume. Technically, a material is under a state of compression, at some specific point P and along a specific direction d, if the normal component of the stress vector across a surface with normal direction d is directed opposite to d. If the stress vector itself is opposite to d, the material is said to be under normal compression or pure compressive stress along d. In a solid, the amount of compression generally depends on the direction d, and the material may be under compression along some directions but under traction along others. If the stress vector is purely compressive and has the same magnitude for all directions, the material is said to be under isotropic compression, hydrostatic compression, or bulk compression. This is the only type of static compression that liquids and gases can bear. It affects the volume of the material, as quantified by the bulk modulus and the volumetric strain. The inverse process of compression is called decompression, dilation, or expansion, in which the object enlarges or increases in volume.
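The sign test described above — compression along a direction d when the normal component of the stress vector opposes d — can be sketched numerically. Everything here (the function names, the example hydrostatic stress tensor, and the tension-positive sign convention under which a negative normal stress means compression) is an illustrative assumption, not notation from the text:

```python
# Classify the stress state along a direction n from a 3x3 stress tensor sigma.
# Assumed sign convention: tension positive, so a negative normal component of
# the traction vector means compression along n.

def traction(sigma, n):
    """Traction vector t = sigma . n across a surface with unit normal n."""
    return [sum(sigma[i][j] * n[j] for j in range(3)) for i in range(3)]

def normal_stress(sigma, n):
    """Normal component t . n of the traction vector along n."""
    t = traction(sigma, n)
    return sum(t[i] * n[i] for i in range(3))

# Hydrostatic (isotropic) compression at pressure p: sigma = -p * identity,
# so the normal stress is -p along every direction.
p = 2.0e5  # Pa, illustrative
sigma = [[-p, 0, 0], [0, -p, 0], [0, 0, -p]]
n = [1.0, 0.0, 0.0]
print(normal_stress(sigma, n))  # -200000.0 -> compression along n
```

For a hydrostatic tensor the result is −p for any unit direction, matching the statement that isotropic compression has the same magnitude for all directions.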
In a longitudinal mechanical wave, the medium is displaced along the wave's direction of travel, resulting in alternating regions of compression and rarefaction.

Effects

When put under compression (or any other type of stress), every material will suffer some deformation, even if imperceptible, that causes the average relative positions of its atoms and molecules to change. The deformation may be permanent, or may be reversed when the compression forces disappear. In the latter case, the deformation gives rise to reaction forces that oppose the compression forces, and may eventually balance them. Liquids and gases cannot bear steady uniaxial or biaxial compression; they will deform promptly and permanently and will not offer any permanent reaction force. However, they can bear isotropic compression, and may be compressed in other ways momentarily, for instance in a sound wave. Every ordinary material will contract in volume when put under isotropic compression, contract in cross-section area when put under uniform biaxial compression, and contract in length when put under uniaxial compression. The deformation may not be uniform and may not be aligned with the compression forces. What happens in the directions where there is no compression depends on the material. Most materials will expand in those directions, but some special materials will remain unchanged or even contract. In general, the relation between the stress applied to a material and the resulting deformation is a central topic of continuum mechanics.

Uses

Compression of solids has many implications in materials science, physics and structural engineering, for compression yields noticeable amounts of stress and tension. By inducing compression, mechanical properties such as compressive strength or modulus of elasticity can be measured. Compression machines range from very small tabletop systems to ones with over 53 MN capacity. Gases are often stored and shipped in highly compressed form, to save space.
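The statement that isotropic compression reduces volume, as quantified by the bulk modulus, corresponds to the linear-elastic relation ΔV/V = −Δp/K. A minimal sketch, with an assumed approximate bulk modulus for water; the numbers are illustrative only:

```python
# Volumetric strain under isotropic compression, assuming linear elasticity:
# delta_V / V = -delta_p / K, where K is the bulk modulus.

def volumetric_strain(delta_p, bulk_modulus):
    """Fractional volume change for a pressure increase delta_p (negative = shrink)."""
    return -delta_p / bulk_modulus

K_water = 2.2e9   # Pa, approximate bulk modulus of water (assumed value)
dp = 1.0e7        # Pa, roughly 100 atm
strain = volumetric_strain(dp, K_water)
print(f"{strain:.6f}")  # about -0.0045: the volume shrinks by roughly 0.45%
```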
Slightly compressed air or other gases are also used to fill balloons, rubber boats, and other inflatable structures. Compressed liquids are used in hydraulic equipment and in fracking.

In engines

Internal combustion engines

In internal combustion engines the explosive mixture gets compressed before it is ignited; the compression improves the efficiency of the engine. In the Otto cycle, for instance, the second stroke of the piston effects the compression of the charge which has been drawn into the cylinder by the first forward stroke.

Steam engines

The term is applied to the arrangement by which the exhaust valve of a steam engine is made to close, shutting a portion of the exhaust steam in the cylinder, before the stroke of the piston is quite complete. This steam, being compressed as the stroke is completed, forms a cushion against which the piston does work while its velocity is being rapidly reduced, and thus the stresses in the mechanism due to the inertia of the reciprocating parts are lessened. This compression, moreover, obviates the shock which would otherwise be caused by the admission of the fresh steam for the return stroke.
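The remark that compression improves engine efficiency can be made concrete with the ideal air-standard Otto-cycle result η = 1 − r^(1 − γ), where r is the compression ratio and γ the heat-capacity ratio. This is standard textbook material rather than something stated above, and the numbers are illustrative:

```python
# Ideal air-standard Otto cycle: efficiency grows with compression ratio r.
# gamma = 1.4 is the usual assumed heat-capacity ratio for air.

def otto_efficiency(r, gamma=1.4):
    """Thermal efficiency of the ideal Otto cycle for compression ratio r > 1."""
    return 1.0 - r ** (1.0 - gamma)

print(round(otto_efficiency(10.0), 3))  # 0.602
```

Doubling the compression ratio from 4 to 8, say, raises the ideal efficiency, which is why higher compression is desirable up to the limits set by knock.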
https://en.wikipedia.org/wiki/Thermodynamic%20system
Thermodynamic system
A thermodynamic system is a body of matter and/or radiation separate from its surroundings that can be studied using the laws of thermodynamics. According to their internal processes, thermodynamic systems are divided into passive systems, in which there is a redistribution of available energy, and active systems, in which one type of energy is converted into another. Depending on its interaction with the environment, a thermodynamic system may be an isolated system, a closed system, or an open system. An isolated system does not exchange matter or energy with its surroundings. A closed system may exchange heat, experience forces, and exert forces, but does not exchange matter. An open system can interact with its surroundings by exchanging both matter and energy. The physical condition of a thermodynamic system at a given time is described by its state, which can be specified by the values of a set of thermodynamic state variables. A thermodynamic system is in thermodynamic equilibrium when there are no macroscopically apparent flows of matter or energy within it or between it and other systems.

Overview

Thermodynamic equilibrium is characterized not only by the absence of any flow of mass or energy, but by “the absence of any tendency toward change on a macroscopic scale.” Equilibrium thermodynamics, as a subject in physics, considers macroscopic bodies of matter and energy in states of internal thermodynamic equilibrium. It uses the concept of thermodynamic processes, by which bodies pass from one equilibrium state to another by transfer of matter and energy between them. The term 'thermodynamic system' is used to refer to bodies of matter and energy in the special context of thermodynamics. The possible equilibria between bodies are determined by the physical properties of the walls that separate the bodies. Equilibrium thermodynamics in general does not measure time.
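The isolated/closed/open distinction above can be captured in a few lines. This is purely an illustrative sketch; the `classify` helper is a name chosen here, not standard terminology:

```python
# Classify a thermodynamic system by which exchanges its boundary permits,
# following the three categories in the text: isolated, closed, open.

def classify(exchanges_matter, exchanges_energy):
    if exchanges_matter:
        return "open"          # matter exchange implies energy exchange too
    return "closed" if exchanges_energy else "isolated"

print(classify(False, False))  # isolated: no matter, no energy
print(classify(False, True))   # closed: energy (heat/work) but no matter
print(classify(True, True))    # open: both matter and energy
```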
Equilibrium thermodynamics is a relatively simple and well settled subject. One reason for this is the existence of a well defined physical quantity called 'the entropy of a body'. Non-equilibrium thermodynamics, as a subject in physics, considers bodies of matter and energy that are not in states of internal thermodynamic equilibrium, but are usually participating in processes of transfer that are slow enough to allow description in terms of quantities that are closely related to thermodynamic state variables. It is characterized by presence of flows of matter and energy. For this topic, very often the bodies considered have smooth spatial inhomogeneities, so that spatial gradients, for example a temperature gradient, are well enough defined. Thus the description of non-equilibrium thermodynamic systems is a field theory, more complicated than the theory of equilibrium thermodynamics. Non-equilibrium thermodynamics is a growing subject, not an established edifice. Example theories and modeling approaches include the GENERIC formalism for complex fluids, viscoelasticity, and soft materials. In general, it is not possible to find an exactly defined entropy for non-equilibrium problems. For many non-equilibrium thermodynamical problems, an approximately defined quantity called 'time rate of entropy production' is very useful. Non-equilibrium thermodynamics is mostly beyond the scope of the present article. Another kind of thermodynamic system is considered in most engineering. It takes part in a flow process. The account is in terms that approximate, well enough in practice in many cases, equilibrium thermodynamical concepts. This is mostly beyond the scope of the present article, and is set out in other articles, for example the article Flow process.

History

The classification of thermodynamic systems arose with the development of thermodynamics as a science.
Theoretical studies of thermodynamic processes in the period from the first theory of heat engines (Sadi Carnot, France, 1824) to the theory of dissipative structures (Ilya Prigogine, Belgium, 1971) mainly concerned the patterns of interaction of thermodynamic systems with the environment. At the same time, thermodynamic systems were mainly classified as isolated, closed and open, with corresponding properties in various thermodynamic states, for example, in states close to equilibrium, nonequilibrium and strongly nonequilibrium. In 2010, Boris Dobroborsky (Israel, Russia) proposed a classification of thermodynamic systems according to internal processes, consisting in energy redistribution (passive systems) and energy conversion (active systems).

Passive systems

If there is a temperature difference inside the thermodynamic system, for example in a rod, one end of which is warmer than the other, then thermal energy transfer processes occur in it, in which the temperature of the colder part rises and that of the warmer part falls. As a result, after some time, the temperature in the rod will equalize – the rod will come to a state of thermodynamic equilibrium.

Active systems

If the process of converting one type of energy into another takes place inside a thermodynamic system, for example in chemical reactions, in electric or pneumatic motors, or when one solid body rubs against another, then processes of energy release or absorption will occur, and the thermodynamic system will always tend to a non-equilibrium state with respect to the environment.

Systems in equilibrium

In isolated systems it is consistently observed that as time goes on internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is considered in a state of thermodynamic equilibrium.
The thermodynamic properties of a system in equilibrium are unchanging in time. Equilibrium system states are much easier to describe in a deterministic manner than non-equilibrium states. In some cases, when analyzing a thermodynamic process, one can assume that each intermediate state in the process is at equilibrium. Such a process is called quasistatic. For a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly. The very existence of thermodynamic equilibrium, defining states of thermodynamic systems, is the essential, characteristic, and most fundamental postulate of thermodynamics, though it is only rarely cited as a numbered law. According to Bailyn, the commonly rehearsed statement of the zeroth law of thermodynamics is a consequence of this fundamental postulate. In reality, practically nothing in nature is in strict thermodynamic equilibrium, but the postulate of thermodynamic equilibrium often provides very useful idealizations or approximations, both theoretically and experimentally; experiments can provide scenarios of practical thermodynamic equilibrium. In equilibrium thermodynamics the state variables do not include fluxes because in a state of thermodynamic equilibrium all fluxes have zero values by definition. Equilibrium thermodynamic processes may involve fluxes, but these must have ceased by the time a thermodynamic process or operation is complete, bringing a system to its eventual thermodynamic state. Non-equilibrium thermodynamics allows its state variables to include non-zero fluxes, which describe transfers of mass or energy or entropy between a system and its surroundings.
Walls

A system is enclosed by walls that bound it and connect it to its surroundings. Often a wall restricts passage across it by some form of matter or energy, making the connection indirect. Sometimes a wall is no more than an imaginary two-dimensional closed surface through which the connection to the surroundings is direct. A wall can be fixed (e.g. a constant volume reactor) or moveable (e.g. a piston). For example, in a reciprocating engine, a fixed wall means the piston is locked at its position; then, a constant volume process may occur. In that same engine, a piston may be unlocked and allowed to move in and out. Ideally, a wall may be declared adiabatic, diathermal, impermeable, permeable, or semi-permeable. Actual physical materials that provide walls with such idealized properties are not always readily available. The system is delimited by walls or boundaries, either actual or notional, across which conserved (such as matter and energy) or unconserved (such as entropy) quantities can pass into and out of the system. The space outside the thermodynamic system is known as the surroundings, a reservoir, or the environment. The properties of the walls determine what transfers can occur. A wall that allows transfer of a quantity is said to be permeable to it, and a thermodynamic system is classified by the permeabilities of its several walls. A transfer between system and surroundings can arise by contact, such as conduction of heat, or by long-range forces such as an electric field in the surroundings. A system with walls that prevent all transfers is said to be isolated. This is an idealized conception, because in practice some transfer is always possible, for example by gravitational forces. It is an axiom of thermodynamics that an isolated system eventually reaches internal thermodynamic equilibrium, when its state no longer changes with time.
The walls of a closed system allow transfer of energy as heat and as work, but not of matter, between it and its surroundings. The walls of an open system allow transfer both of matter and of energy. This scheme of definition of terms is not uniformly used, though it is convenient for some purposes. In particular, some writers use 'closed system' where 'isolated system' is here used. Anything that passes across the boundary and effects a change in the contents of the system must be accounted for in an appropriate balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824. It could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.

Surroundings

The system is the part of the universe being studied, while the surroundings is the remainder of the universe that lies outside the boundaries of the system. It is also known as the environment or the reservoir. Depending on the type of system, it may interact with the system by exchanging mass, energy (including heat and work), momentum, electric charge, or other conserved properties. The environment is ignored in the analysis of the system, except in regards to these interactions.

Closed system

In a closed system, no mass may be transferred in or out of the system boundaries. The system always contains the same amount of matter, but (sensible) heat and (boundary) work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both depends on the properties of its boundary:
Adiabatic boundary – not allowing any heat exchange: a thermally isolated system
Rigid boundary – not allowing exchange of work: a mechanically isolated system
One example is fluid being compressed by a piston in a cylinder.
Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used in measuring the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiates combustion. Heat transfer occurs across the boundary after combustion but no mass transfer takes place either way. The first law of thermodynamics for energy transfers for a closed system may be stated as

ΔU = Q − W,

where U denotes the internal energy of the system, Q the heat added to the system, and W the work done by the system. For infinitesimal changes the first law for closed systems may be stated as

dU = δQ − δW.

If the work is due to a volume expansion by dV at a pressure p, then δW = p dV. For a quasi-reversible heat transfer, the second law of thermodynamics reads δQ = T dS, where T denotes the thermodynamic temperature and S the entropy of the system. With these relations the fundamental thermodynamic relation, used to compute changes in internal energy, is expressed as

dU = T dS − p dV.

For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. For systems undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically,

b_j = Σ_i a_ij N_i,

where N_i denotes the number of i-type molecules, a_ij the number of atoms of element j in molecule i, and b_j the total number of atoms of element j in the system, which remains constant, since the system is closed. There is one such equation for each element in the system.

Isolated system

An isolated system is more restrictive than a closed system as it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary.
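The closed-system bookkeeping described above can be checked numerically. This is a minimal sketch: the `dU` helper evaluates the fundamental relation dU = T dS − p dV, and the N2/H2/NH3 mixture with its atom-count table is an assumed example used only to illustrate the element balance b_j = Σ_i a_ij N_i:

```python
# Fundamental relation for a closed system: dU = T dS - p dV.
def dU(T, dS, p, dV):
    """Change in internal energy for entropy change dS and volume change dV."""
    return T * dS - p * dV

print(dU(300.0, 2.0, 1.0e5, 0.001))  # 500.0 J: 600 J of T dS minus 100 J of p dV

# Element balance in a closed reacting system: a[mol][elem] is the number of
# atoms of each element per molecule (assumed illustrative mixture).
a = {"N2": {"N": 2, "H": 0}, "H2": {"N": 0, "H": 2}, "NH3": {"N": 1, "H": 3}}

def element_totals(counts):
    """Total atoms of each element, b_j = sum_i a_ij * N_i."""
    totals = {"N": 0, "H": 0}
    for mol, n in counts.items():
        for elem, k in a[mol].items():
            totals[elem] += k * n
    return totals

before = element_totals({"N2": 1, "H2": 3, "NH3": 0})
after = element_totals({"N2": 0, "H2": 0, "NH3": 2})
print(before == after)  # True: N2 + 3 H2 -> 2 NH3 conserves each element
```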
As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium. Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena. In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed that a system (for example, a gas) was isolated: that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified. The second law of thermodynamics for isolated systems states that the entropy of an isolated system not in equilibrium tends to increase over time, approaching its maximum value at equilibrium. Overall, in an isolated system, the internal energy is constant and the entropy can never decrease. A closed system's entropy can decrease, for example when heat is extracted from the system. Isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy.
Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe). 'Closed system' is often used in thermodynamics discussions when 'isolated system' would be correct – i.e. there is an assumption that energy does not enter or leave the system.

Selective transfer of matter

For a thermodynamic process, the precise physical properties of the walls and surroundings of the system are important, because they determine the possible processes. An open system has one or several walls that allow transfer of matter. To account for the internal energy of the open system, this requires energy transfer terms in addition to those for heat and work. It also leads to the idea of the chemical potential. A wall selectively permeable only to a pure substance can put the system in diffusive contact with a reservoir of that pure substance in the surroundings. Then a process is possible in which that pure substance is transferred between system and surroundings. Also, across that wall a contact equilibrium with respect to that substance is possible. By suitable thermodynamic operations, the pure substance reservoir can be dealt with as a closed system. Its internal energy and its entropy can be determined as functions of its temperature, pressure, and mole number. A thermodynamic operation can render impermeable to matter all system walls other than the contact equilibrium wall for that substance. This allows the definition of an intensive state variable, with respect to a reference state of the surroundings, for that substance. The intensive variable is called the chemical potential; for component substance i it is usually denoted μ_i. The corresponding extensive variable can be the number of moles N_i of the component substance in the system.
For a contact equilibrium across a wall permeable to a substance, the chemical potentials of the substance must be the same on either side of the wall. This is part of the nature of thermodynamic equilibrium, and may be regarded as related to the zeroth law of thermodynamics.

Open system

In an open system, there is an exchange of energy and matter between the system and the surroundings. The presence of reactants in an open beaker is an example of an open system. Here the boundary is an imaginary surface enclosing the beaker and reactants. (By contrast, a system is named closed if its borders are impenetrable to substance but allow transit of energy in the form of heat, and isolated if there is no exchange of heat and substances.) The open system cannot exist in the equilibrium state. To describe deviation of the thermodynamic system from equilibrium, in addition to the constitutive variables that were described above, a set of internal variables ξ_i has been introduced. The equilibrium state is considered to be stable, and the main property of the internal variables, as measures of non-equilibrium of the system, is their tendency to disappear; the local law of disappearing can be written as a relaxation equation for each internal variable,

dξ_i/dt = −ξ_i/τ_i,

where τ_i is the relaxation time of the corresponding variable. It is convenient to consider the initial value ξ_i(0) equal to zero. The specific contribution to the thermodynamics of open non-equilibrium systems was made by Ilya Prigogine, who investigated a system of chemically reacting substances. In this case the internal variables appear to be measures of incompleteness of chemical reactions, that is, measures of how far the considered system with chemical reactions is out of equilibrium.
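The relaxation law for internal variables described above, with each variable decaying toward zero on its own time scale τ, can be illustrated with a simple forward-Euler integration. The function name and step count are arbitrary choices for this sketch; the exact solution is the exponential ξ(t) = ξ(0) e^(−t/τ):

```python
# Internal variables relax toward zero as d(xi)/dt = -xi / tau.
# A forward-Euler integration illustrates the exponential decay.

import math

def relax(xi0, tau, t, steps=100000):
    """Integrate d(xi)/dt = -xi/tau from 0 to t starting at xi0."""
    dt = t / steps
    xi = xi0
    for _ in range(steps):
        xi += -xi / tau * dt
    return xi

xi = relax(1.0, 2.0, 2.0)            # one relaxation time elapsed
print(round(xi, 3), round(math.exp(-1.0), 3))  # numerical vs exact: both ~0.368
```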
The theory can be generalized, to consider any deviations from the equilibrium state, such as structure of the system, gradients of temperature, difference of concentrations of substances and so on, to say nothing of degrees of completeness of all chemical reactions, to be internal variables. The increments of Gibbs free energy G and entropy S at constant T and p are determined as

dG = δQ − Σ_j Ξ_j dξ_j + Σ_α μ_α dN_α   (2)

dS = δQ/T + (1/T) Σ_j Ξ_j dξ_j + Σ_α (μ_α/T) dN_α   (3)

The stationary states of the system exist due to exchange of both thermal energy (δQ) and a stream of particles. The sum of the last terms in the equations presents the total energy coming into the system with the stream of particles of substances, which can be positive or negative; the quantity μ_α is the chemical potential of substance α. The middle terms in equations (2) and (3) depict energy dissipation (entropy production) due to the relaxation of internal variables ξ_j, while Ξ_j are thermodynamic forces. This approach to the open system allows describing the growth and development of living objects in thermodynamic terms.
https://en.wikipedia.org/wiki/Brucellosis
Brucellosis
Brucellosis is a zoonosis caused by ingestion of unpasteurized milk from infected animals, or close contact with their secretions. It is also known as undulant fever, Malta fever, and Mediterranean fever. The bacteria causing this disease, Brucella, are small, Gram-negative, nonmotile, nonspore-forming, rod-shaped (coccobacilli) bacteria. They function as facultative intracellular parasites, causing chronic disease, which usually persists for life. Four species infect humans: B. abortus, B. canis, B. melitensis, and B. suis. B. abortus is less virulent than B. melitensis and is primarily a disease of cattle. B. canis affects dogs. B. melitensis is the most virulent and invasive species; it usually infects goats and occasionally sheep. B. suis is of intermediate virulence and chiefly infects pigs. Symptoms include profuse sweating and joint and muscle pain. Brucellosis has been recognized in animals and humans since the early 20th century.

Signs and symptoms

The symptoms are like those associated with many other febrile diseases, but with emphasis on muscular pain and night sweats. The duration of the disease can vary from a few weeks to many months or even years. In the first stage of the disease, bacteremia occurs and leads to the classic triad of undulant fevers, sweating (often with a characteristic foul, moldy smell sometimes likened to wet hay), and migratory arthralgia and myalgia (joint and muscle pain). Blood tests characteristically reveal a low number of white blood cells and red blood cells, show some elevation of liver enzymes such as aspartate aminotransferase and alanine aminotransferase, and demonstrate positive Bengal rose and Huddleston reactions. Gastrointestinal symptoms occur in 70% of cases and include nausea, vomiting, decreased appetite, unintentional weight loss, abdominal pain, constipation, diarrhea, an enlarged liver, liver inflammation, liver abscess, and an enlarged spleen.
This complex is, at least in Portugal, Palestine, Israel, Syria, Iran, and Jordan, known as Malta fever. During episodes of Malta fever, melitococcemia (presence of brucellae in the blood) can usually be demonstrated by means of blood culture in tryptose medium or Albini medium. If untreated, the disease can give origin to focalizations or become chronic. The focalizations of brucellosis occur usually in bones and joints, and osteomyelitis or spondylodiscitis of the lumbar spine accompanied by sacroiliitis is very characteristic of this disease. Orchitis is also common in men. The consequences of Brucella infection are highly variable and may include arthritis, spondylitis, thrombocytopenia, meningitis, uveitis, optic neuritis, endocarditis, and various neurological disorders collectively known as neurobrucellosis.

Cause

Brucellosis in humans is usually associated with consumption of unpasteurized milk and soft cheeses made from the milk of infected animals—often goats—infected with B. melitensis, and with occupational exposure of laboratory workers, veterinarians, and slaughterhouse workers. These infected animals may be healthy and asymptomatic. Some vaccines used in livestock, most notably B. abortus strain 19, also cause disease in humans if accidentally injected. Brucellosis induces inconstant fevers, miscarriage, sweating, weakness, anemia, headaches, depression, and muscular and bodily pain. The other strains, B. suis and B. canis, cause infection in pigs and dogs, respectively. Overall, findings support that brucellosis poses an occupational risk to goat farmers, with specific areas of concern including weak awareness of disease transmission to humans and a lack of knowledge of specific safe farm practices, such as quarantine.
Diagnosis

The diagnosis of brucellosis relies on:
Demonstration of the agent: blood cultures in tryptose broth and bone marrow cultures. The growth of brucellae is extremely slow (they can take up to two months to grow), and the culture poses a risk to laboratory personnel due to the high infectivity of brucellae.
Demonstration of antibodies against the agent, either with the classic Huddleson, Wright, and/or Bengal Rose reactions, or with ELISA or the 2-mercaptoethanol assay for IgM antibodies associated with chronic disease.
Histologic evidence of granulomatous hepatitis on hepatic biopsy.
Radiologic alterations in infected vertebrae: the Pedro Pons sign (preferential erosion of the anterosuperior corner of lumbar vertebrae) and marked osteophytosis are suspicious of brucellic spondylitis.
Definite diagnosis of brucellosis requires the isolation of the organism from the blood, body fluids, or tissues, but serological methods may be the only tests available in many settings. Positive blood culture yield ranges between 40 and 70% and is less commonly positive for B. abortus than B. melitensis or B. suis. Identification of specific antibodies against bacterial lipopolysaccharide and other antigens can be detected by the standard agglutination test (SAT), rose Bengal, 2-mercaptoethanol (2-ME), antihuman globulin (Coombs') and indirect enzyme-linked immunosorbent assay (ELISA). SAT is the most commonly used serology in endemic areas. An agglutination titre greater than 1:160 is considered significant in nonendemic areas and greater than 1:320 in endemic areas. Due to the similarity of the O polysaccharide of Brucella to that of various other Gram-negative bacteria (e.g. Francisella tularensis, Escherichia coli, Salmonella urbana, Yersinia enterocolitica, Vibrio cholerae, and Stenotrophomonas maltophilia), the appearance of cross-reactions of class M immunoglobulins may occur. The inability to diagnose B. canis by SAT due to lack of cross-reaction is another drawback.
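The SAT interpretation rule quoted above (titres above 1:160 considered significant in nonendemic areas, above 1:320 in endemic areas) can be expressed as a trivial helper. This is purely an illustration of the stated thresholds, not clinical guidance, and the function name is an assumption:

```python
# Significance of a standard agglutination test (SAT) titre, per the
# thresholds quoted in the text: > 1:160 nonendemic, > 1:320 endemic.

def sat_significant(titre_denominator, endemic):
    """True if a titre of 1:titre_denominator exceeds the regional threshold."""
    threshold = 320 if endemic else 160
    return titre_denominator > threshold

print(sat_significant(640, endemic=True))   # True: 1:640 exceeds 1:320
print(sat_significant(320, endemic=True))   # False: not above the endemic cutoff
print(sat_significant(320, endemic=False))  # True: 1:320 exceeds 1:160
```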
False-negative SAT may be caused by the presence of blocking antibodies (the prozone phenomenon) in the α2-globulin (IgA) and in the α-globulin (IgG) fractions. Dipstick assays are new and promising, based on the binding of Brucella IgM antibodies, and are simple, accurate, and rapid. ELISA typically uses cytoplasmic proteins as antigens. It measures IgM, IgG, and IgA with better sensitivity and specificity than the SAT in most recent comparative studies. The commercial Brucellacapt test, a single-step immunocapture assay for the detection of total anti-Brucella antibodies, is an increasingly used adjunctive test when resources permit. PCR is fast and should be specific. Many varieties of PCR have been developed (e.g. nested PCR, realtime PCR, and PCR-ELISA) and found to have superior specificity and sensitivity in detecting both primary infection and relapse after treatment. Unfortunately, these are not standardized for routine use, and some centres have reported persistent PCR positivity after clinically successful treatment, fuelling the controversy about the existence of prolonged chronic brucellosis. Other laboratory findings include normal peripheral white cell count, and occasional leucopenia with relative lymphocytosis. The serum biochemical profiles are commonly normal. Prevention Surveillance using serological tests, as well as tests on milk such as the milk ring test, can be used for screening and play an important role in campaigns to eliminate the disease. Also, individual animal testing both for trade and for disease-control purposes is practiced. In endemic areas, vaccination is often used to reduce the incidence of infection. An animal vaccine is available that uses modified live bacteria. The World Organisation for Animal Health Manual of Diagnostic Test and Vaccines for Terrestrial Animals provides detailed guidance on the production of vaccines. 
As the disease approaches elimination, a test-and-eradication program is required to eliminate it completely. The main way of preventing brucellosis is by using fastidious hygiene in producing raw milk products, or by pasteurizing all milk that is to be ingested by human beings, either in its unaltered form or as a derivative, such as cheese. Another important aspect of brucellosis prevention is public awareness. People in endemic areas have demonstrated a widespread lack of knowledge and understanding of the disease and its causes. To combat this, the One Health concept has been proposed. One Health is a method for combining disciplines such as public health, veterinary services, and microbiology to bring awareness to the disease. However, the implementation of this method faces many challenges, including economic, political, and social barriers. Treatment Antibiotics such as tetracyclines, rifampicin, and the aminoglycosides streptomycin and gentamicin are effective against Brucella bacteria. However, the use of more than one antibiotic is needed for several weeks, because the bacteria incubate within cells. The gold standard treatment for adults is daily intramuscular injections of streptomycin 1 g for 14 days and oral doxycycline 100 mg twice daily for 45 days (concurrently). Gentamicin 5 mg/kg by intramuscular injection once daily for 7 days is an acceptable substitute when streptomycin is not available or contraindicated. Another widely used regimen is doxycycline plus rifampicin twice daily for at least 6 weeks. This regimen has the advantage of oral administration. A triple therapy of doxycycline with rifampicin and co-trimoxazole has been used successfully to treat neurobrucellosis. The doxycycline plus streptomycin regimen (for 2 to 3 weeks) is more effective than the doxycycline plus rifampicin regimen (for 6 weeks). Doxycycline is able to cross the blood–brain barrier, but requires the addition of two other drugs to prevent relapse.
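The weight-based gentamicin regimen above is simple arithmetic. A hedged sketch (function name hypothetical; illustrative arithmetic only, not clinical guidance):

```python
# Illustrative arithmetic only, not clinical guidance.
# Encodes the regimen quoted above: gentamicin 5 mg/kg IM once daily for 7 days.
def gentamicin_course_total_mg(weight_kg, mg_per_kg=5, days=7):
    daily_dose = mg_per_kg * weight_kg
    return daily_dose * days

print(gentamicin_course_total_mg(70))  # 2450 mg over the 7-day course for a 70 kg adult
```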
Ciprofloxacin and co-trimoxazole therapy is associated with an unacceptably high rate of relapse. In brucellic endocarditis, surgery is required for an optimal outcome. Even with optimal antibrucellic therapy, relapses still occur in 5 to 10% of patients with Malta fever. Prognosis The mortality of the disease in 1909, as recorded in the British Army and Navy stationed in Malta, was 2%. The most frequent cause of death was endocarditis. Recent advances in antibiotics and surgery have been successful in preventing death due to endocarditis. Prevention of human brucellosis can be achieved by eradication of the disease in animals by vaccination and other veterinary control methods, such as testing herds/flocks and slaughtering animals when infection is present. Currently, no effective vaccine is available for humans. Boiling milk before consumption, or before using it to produce other dairy products, is protective against transmission via ingestion. Changing traditional food habits of eating raw meat, liver, or bone marrow is necessary, but difficult to implement. Patients who have had brucellosis should probably be excluded indefinitely from donating blood or organs. Exposure of diagnostic laboratory personnel to Brucella organisms remains a problem both in endemic settings and when brucellosis is unknowingly imported by a patient. After appropriate risk assessment, staff with significant exposure should be offered postexposure prophylaxis and followed up serologically for 6 months. Epidemiology Argentina According to a study published in 2002, an estimated 10–13% of farm animals were infected with Brucella species. Annual losses from the disease were calculated at around US$60 million. Since 1932, government agencies have undertaken efforts to contain the disease. All cattle of ages 3–8 months must receive the Brucella abortus strain 19 vaccine. Australia Australia is free of cattle brucellosis, although it occurred in the past.
Brucellosis of sheep or goats has never been reported. Brucellosis of pigs does occur. Feral pigs are the typical source of human infections. Canada On 19 September 1985, the Canadian government declared its cattle population brucellosis-free. Brucellosis ring testing of milk and cream, and testing of cattle to be slaughtered, ended on 1 April 1999. Monitoring continues through testing at auction markets, through standard disease-reporting procedures, and through testing of cattle being qualified for export to countries other than the United States. China An outbreak infecting humans took place in Lanzhou in 2019 after the Lanzhou Biopharmaceutical Plant, which was involved in vaccine production, accidentally pumped the bacteria into the atmosphere in exhaust air due to the use of expired disinfectant. According to Georgios Pappas, an infectious-disease specialist and author of a report published in the journal Clinical Infectious Diseases, the result was "possibly the largest laboratory accident in the history of infectious diseases." According to Pappas, out of nearly 70,000 people tested, more than 10,000 were seropositive, citing figures compiled by the health authorities of Gansu province, where Lanzhou is located. Pappas also states that Chinese documents show that more than 3,000 people living near the plant applied for compensation, an indication of at least a mild illness. Europe Malta Until the early 20th century, the disease was endemic in Malta, to the point of it being referred to as "Maltese fever". Since 2005, due to a strict regimen of certification of milk animals and widespread use of pasteurization, the illness has been eradicated from Malta. Republic of Ireland Ireland was declared free of brucellosis on 1 July 2009. The disease had troubled the country's farmers and veterinarians for several decades. The Irish government submitted an application to the European Commission, which verified that Ireland was free of the disease.
Brendan Smith, Ireland's then Minister for Agriculture, Food and the Marine, said the elimination of brucellosis was "a landmark in the history of disease eradication in Ireland". Ireland's Department of Agriculture, Food and the Marine intends to reduce its brucellosis eradication programme now that eradication has been confirmed. UK Mainland Britain has been free of brucellosis since 1979, although there have been episodic re-introductions since. The last outbreak of brucellosis in Great Britain was in cattle in Cornwall in 2004. Northern Ireland was declared officially brucellosis-free in 2015. New Zealand Brucellosis in New Zealand is limited to sheep (B. ovis). The country is free of all other species of Brucella. United States Dairy herds in the U.S. are tested at least once a year to be certified brucellosis-free with the Brucella milk ring test. Cows confirmed to be infected are often killed. In the United States, veterinarians are required to vaccinate all young stock, to further reduce the chance of zoonotic transmission. This vaccination is usually referred to as a "calfhood" vaccination. Most cattle receive a tattoo in one of their ears, serving as proof of their vaccination status. This tattoo also includes the last digit of the year they were born. The first state–federal cooperative efforts towards eradication of brucellosis caused by B. abortus in the U.S. began in 1934. Brucellosis was originally imported to North America with non-native domestic cattle (Bos taurus), which transmitted the disease to wild bison (Bison bison) and elk (Cervus canadensis). No records exist of brucellosis in ungulates native to America until the early 19th century. History Brucellosis first came to the attention of British medical officers in the 1850s in Malta during the Crimean War, and was referred to as Malta Fever. Jeffery Allen Marston (1831–1911) described his own case of the disease in 1861. 
The causal relationship between organism and disease was first established in 1887 by David Bruce. Bruce considered the agent spherical and classified it as a coccus. In 1897, Danish veterinarian Bernhard Bang isolated a bacillus as the agent of heightened spontaneous abortion in cows, and the name "Bang's disease" was assigned to this condition. Bang considered the organism rod-shaped and classified it as a bacillus. At the time, no one knew that this bacillus had anything to do with the causative agent of Malta fever. Maltese scientist and archaeologist Themistocles Zammit identified unpasteurized goat milk as the major etiologic factor of undulant fever in June 1905. In the late 1910s, American bacteriologist Alice C. Evans was studying the Bang bacillus and gradually realized that it was virtually indistinguishable from the Bruce coccus. The organism's borderline morphology, between a short rod and an oblong round form, explained why the earlier bacillus/coccus distinction had blurred: the "two" pathogens were not a coccus and a bacillus but a single coccobacillus. The Bang bacillus was already known to be enzootic in American dairy cattle, which showed itself in the regularity with which herds experienced contagious abortion. Having discovered that the bacteria were nearly, and perhaps entirely, identical, Evans then wondered why Malta fever was not widely diagnosed or reported in the United States. She began to wonder whether many cases of vaguely defined febrile illnesses were in fact caused by the drinking of raw (unpasteurized) milk. During the 1920s, this hypothesis was vindicated. Such illnesses ranged from undiagnosed and untreated gastrointestinal upset to misdiagnosed febrile and painful versions, some even fatal. This advance in bacteriological science sparked extensive changes in the American dairy industry to improve food safety.
The changes included making pasteurization standard and greatly tightening the standards of cleanliness in milkhouses on dairy farms. The expense prompted delay and skepticism in the industry, but the new hygiene rules eventually became the norm. Although these measures have sometimes struck people as overdone in the decades since, neither lax hygiene at milking time or in the milkhouse nor drinking raw milk is a safe alternative. In the decades after Evans's work, this genus, which received the name Brucella in honor of Bruce, was found to contain several species with varying virulence. The name "brucellosis" gradually replaced the 19th-century names Mediterranean fever and Malta fever. Neurobrucellosis, a neurological involvement in brucellosis, was first described in 1879. In the late 19th century, its symptoms were described in more detail by M. Louis Hughes, a Surgeon-Captain of the Royal Army Medical Corps stationed in Malta, who isolated brucella organisms from a patient with meningo-encephalitis. In 1989, neurologists in Saudi Arabia made significant contributions to the medical literature involving neurobrucellosis. Obsolete names previously applied to brucellosis include Mediterranean fever, Malta fever, and undulant fever. Biological warfare Brucella species had been weaponized by several advanced countries by the mid-20th century. In 1954, B. suis became the first agent weaponized by the United States at its Pine Bluff Arsenal near Pine Bluff, Arkansas. Brucella species survive well in aerosols and resist drying. Brucella and all other remaining biological weapons in the U.S. arsenal were destroyed in 1971–72 when the American offensive biological warfare program was discontinued by order of President Richard Nixon. The experimental American bacteriological warfare program focused on three agents of the Brucella group: Porcine brucellosis (agent US) Bovine brucellosis (agent AA) Caprine brucellosis (agent AM) Agent US was in advanced development by the end of World War II.
When the United States Air Force (USAF) wanted a biological warfare capability, the Chemical Corps offered Agent US in the M114 bomblet, based on the four-pound bursting bomblet developed for spreading anthrax during World War II. Though the capability was developed, operational testing indicated the weapon was less than desirable, and the USAF designated it as an interim capability until it could eventually be replaced by a more effective biological weapon. The main drawback of using the M114 with Agent US was that it acted mainly as an incapacitating agent, whereas the USAF administration wanted weapons that were deadly. The stability of the M114 in storage was too low to allow for storing it at forward air bases, and the logistical requirements to neutralize a target were far higher than originally planned; ultimately, this would have required too much logistical support to be practical in the field. Agents US and AA had a median infective dose of 500 organisms/person, and for Agent AM it was 300 organisms/person. The incubation time was believed to be about 2 weeks, with a duration of infection of several months. Based on epidemiological information, the lethality estimate was 1 to 2 per cent. Agent AM was believed to cause a somewhat more virulent disease, with a fatality rate of 3 per cent being expected. Other animals Species infecting domestic livestock are B. abortus (cattle, bison, and elk), B. canis (dogs), B. melitensis (goats and sheep), B. ovis (sheep), and B. suis (caribou and pigs). Brucella species have also been isolated from several marine mammal species (cetaceans and pinnipeds). Cattle B. abortus is the principal cause of brucellosis in cattle. The bacteria are shed from an infected animal at or around the time of calving or abortion. Once exposed, the likelihood of an animal becoming infected is variable, depending on age, pregnancy status, and other intrinsic factors of the animal, as well as the number of bacteria to which the animal was exposed.
The most common clinical signs of cattle infected with B. abortus are high incidences of abortions, arthritic joints, and retained placenta. The two main causes of spontaneous abortion in animals are erythritol, which can promote infections in the fetus and placenta, and the lack of anti-Brucella activity in the amniotic fluid. Males can also harbor the bacteria in their reproductive tracts, namely the seminal vesicles, ampullae, testicles, and epididymides. Dogs The causative agent of brucellosis in dogs, B. canis, is transmitted to other dogs through breeding and contact with aborted fetuses. Brucellosis can occur in humans who come in contact with infected aborted tissue or semen. The bacteria in dogs normally infect the genitals and lymphatic system, but can also spread to the eyes, kidneys, and intervertebral discs. Brucellosis in the intervertebral disc is one possible cause of discospondylitis. Symptoms of brucellosis in dogs include abortion in female dogs and scrotal inflammation and orchitis in males. Fever is uncommon. Infection of the eye can cause uveitis, and infection of the intervertebral disc can cause pain or weakness. Blood testing of the dogs prior to breeding can prevent the spread of this disease. It is treated with antibiotics, as with humans, but it is difficult to cure. Aquatic wildlife Brucellosis in cetaceans is caused by the bacterium B. ceti. First discovered in the aborted fetus of a bottlenose dolphin, the structure of B. ceti is similar to Brucella in land animals. B. ceti is commonly detected in two suborders of cetaceans, the Mysticeti and Odontoceti. The Mysticeti include four families of baleen whales, filter-feeders, and the Odontoceti include two families of toothed cetaceans ranging from dolphins to sperm whales. B. ceti is believed to transfer from animal to animal through sexual intercourse, maternal feeding, aborted fetuses, placental tissues, from mother to fetus, or through fish reservoirs.
Brucellosis is a reproductive disease, so it has a severely negative impact on the population dynamics of a species. This becomes a greater issue when the already low population numbers of cetaceans are taken into consideration. B. ceti has been identified in four of the 14 cetacean families, but antibodies have been detected in seven of the families. This indicates that B. ceti is common amongst cetacean families and populations. Only a small percentage of exposed individuals become ill or die. However, particular species are apparently more likely to become infected by B. ceti. The harbor porpoise, striped dolphin, white-sided dolphin, bottlenose dolphin, and common dolphin have the highest frequency of infection amongst odontocetes. Among the mysticete families, the northern minke whale is by far the most infected species. Dolphins and porpoises are more likely to be infected than larger cetaceans such as whales. With regard to sex and age biases, the infections do not seem influenced by the age or sex of an individual. Although fatal to cetaceans, B. ceti has a low infection rate for humans. Terrestrial wildlife The disease in its various strains can infect multiple wildlife species, including elk (Cervus canadensis), bison (Bison bison), African buffalo (Syncerus caffer), European wild boar (Sus scrofa), caribou (Rangifer tarandus), moose (Alces alces), and marine mammals (see section on aquatic wildlife above). While some regions use vaccines to prevent the spread of brucellosis between infected and uninfected wildlife populations, no suitable brucellosis vaccine for terrestrial wildlife has been developed. This gap in medicinal knowledge creates more pressure for management practices that reduce spread of the disease. Wild bison and elk in the greater Yellowstone area are the last remaining reservoir of B. abortus in the US.
The recent transmission of brucellosis from elk back to cattle in Idaho and Wyoming illustrates how the area, as the last remaining reservoir in the United States, may adversely affect the livestock industry. Eliminating brucellosis from this area is a challenge, as many viewpoints exist on how to manage diseased wildlife. However, the Wyoming Game and Fish Department has recently begun to protect scavengers (particularly coyotes and red fox) on elk feedgrounds, because they act as sustainable, no-cost, biological control agents by removing infected elk fetuses quickly. The National Elk Refuge in Jackson, Wyoming asserts that the intensity of the winter feeding program affects the spread of brucellosis more than the population size of elk and bison. Since concentrating animals around food plots accelerates spread of the disease, management strategies to reduce herd density and increase dispersion could limit its spread. Effects on hunters Hunters may be at additional risk for exposure to brucellosis due to increased contact with susceptible wildlife, including predators that may have fed on infected prey. Hunting dogs can also be at risk of infection. Exposure can occur through contact with open wounds or by directly inhaling the bacteria while cleaning game. In some cases, consumption of undercooked game can result in exposure to the disease. Hunters can limit exposure while cleaning game through the use of precautionary barriers, including gloves and masks, and by washing tools rigorously after use. By ensuring that game is cooked thoroughly, hunters can protect themselves and others from ingesting the disease. Hunters should refer to local game officials and health departments to determine the risk of brucellosis exposure in their immediate area and to learn more about actions to reduce or avoid exposure.
Biology and health sciences
Bacterial infections
Health
425971
https://en.wikipedia.org/wiki/Pademelon
Pademelon
Pademelons are small marsupials in the genus Thylogale, found in Australia, New Guinea, and the Aru and Kai Islands. They are some of the smallest members of the macropod family, which includes the similar-looking but larger kangaroos and wallabies. Pademelons are distinguished by their small size and their short, thick, and sparsely haired tails. Like other marsupials, they carry their young in a pouch. Etymology The word "pademelon" comes from the word badimaliyan in Dharug, an Australian Aboriginal language spoken near what is now Port Jackson, New South Wales. The scientific name Thylogale combines the Greek words for "pouch" and "weasel". Description Along with the rock-wallabies and the hare-wallabies, the pademelons are among the smallest members of the macropod family. Mature male pademelons are larger than females, with an average weight of about 7 kg and height of 60 cm. Mature females weigh around 3.8 kg. Species There are seven recognised species within the genus Thylogale. Distribution and habitat The red-legged pademelon can be found in the coastal regions of Queensland and New South Wales, and in south-central New Guinea. In some areas, its range has been drastically reduced. The red-bellied or Tasmanian pademelon is abundant in Tasmania, although it was once found throughout the southeastern parts of mainland Australia. The dusky pademelon lives in New Guinea and surrounding islands. It was previously called the Aru Islands wallaby. Before that, it was called the "philander" ("friend of man"), which is the name it bears in the second volume of Cornelis de Bruijn's Travels, originally published in 1711. The Latin name of this species honours de Bruijn. The natural habitat of the pademelon is thick scrubland or dense forested undergrowth. It also makes tunnels through long grasses and bushes in swampy country. Threats Pademelon meat used to be considered valuable and was eaten by settlers and indigenous Australians.
Aside from being killed for their meat and soft fur, their numbers have been reduced by the introduction of non-native predators to Australia, such as cats, dogs, and red foxes. The rapid increase in Australia's rabbit population has also caused problems, as rabbits graze on the same grasses, making less available for the pademelons. Clearing of land for urbanisation has pushed the larger wallabies and kangaroos onto land that was previously occupied by pademelons with little competition. Tasmanian pademelons were an important part of the diet of the now-extinct thylacine, and they are still preyed on by quolls, Tasmanian devils, and wedge-tailed eagles. Despite these predators, Tasmania and its outlying smaller islands have large numbers of pademelons, and every year many are culled to keep their numbers sustainable.
Biology and health sciences
Diprotodontia
Animals
426143
https://en.wikipedia.org/wiki/Exploration%20of%20Mars
Exploration of Mars
The planet Mars has been explored remotely by spacecraft. Probes sent from Earth, beginning in the late 20th century, have yielded a large increase in knowledge about the Martian system, focused primarily on understanding its geology and habitability potential. Engineering interplanetary journeys is complicated, and the exploration of Mars has experienced a high failure rate, especially among the early attempts. Roughly sixty percent of all spacecraft destined for Mars failed before completing their missions, with some failing before their observations could begin. Some missions have met with unexpected success, such as the twin Mars Exploration Rovers, Spirit and Opportunity, which operated for years beyond their specification. Current status There are two functional rovers on the surface of Mars, the Curiosity and Perseverance rovers, both operated by the American space agency NASA. Perseverance was accompanied by the Ingenuity helicopter, which scouted sites for Perseverance to study before the helicopter's mission ended in 2024. The Zhurong rover, part of the Tianwen-1 mission by the China National Space Administration (CNSA), was active until 20 May 2022, when it went into hibernation due to approaching sandstorms and the Martian winter; the rover was expected to awaken from hibernation in December 2022, but as of April 2023 it had not moved and is presumed to be permanently inactive. There are seven orbiters surveying the planet: Mars Odyssey, Mars Express, Mars Reconnaissance Orbiter, MAVEN, the Trace Gas Orbiter, the Hope Mars Mission, and the Tianwen-1 orbiter, which have contributed massive amounts of information about Mars. Thus, nine vehicles in total are currently exploring Mars: two rovers and seven orbiters. Various Mars sample return missions are being planned, such as the NASA–ESA Mars Sample Return, which would pick up the samples currently being collected by the Perseverance rover.
In April 2024, NASA selected several companies to begin studies on providing commercial services to further enable robotic science on Mars. Martian system Mars has long been the subject of human interest. Early telescopic observations revealed color changes on the surface that were attributed to seasonal vegetation, and apparent linear features were ascribed to intelligent design. Further telescopic observations found two moons, Phobos and Deimos, polar ice caps, and the feature now known as Olympus Mons, the Solar System's tallest mountain. The discoveries piqued further interest in the study and exploration of the red planet. Mars is a rocky planet, like Earth, that formed around the same time, yet with only half the diameter of Earth and a thin atmosphere; it has a cold and desert-like surface. One way the surface of Mars has been categorized is by thirty "quadrangles", with each quadrangle named for a prominent physiographic feature within it. Launch windows The minimum-energy launch windows for a Martian expedition occur at intervals of approximately two years and two months (specifically 780 days, the planet's synodic period with respect to Earth). In addition, the lowest available transfer energy varies on a roughly 16-year cycle. For example, a minimum occurred in the 1969 and 1971 launch windows, rising to a peak in the late 1970s, and hitting another low in 1986 and 1988. Past missions Starting in 1960, the Soviet space program launched a series of probes to Mars, including the first intended (but unsuccessful) flybys and hard (impact) landing (Mars 1962B), and the first successful soft landing (Mars 3). The first successful flyby of Mars was on 14–15 July 1965, by NASA's Mariner 4. On November 14, 1971, Mariner 9 became the first space probe to orbit another planet when it entered into orbit around Mars. The amount of data returned by probes increased substantially as technology improved.
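The 780-day figure quoted under Launch windows follows directly from the two planets' orbital periods via the standard synodic-period relation 1/S = 1/T_Earth − 1/T_Mars; a quick check (the sidereal period values are standard figures, not taken from this article):

```python
# Synodic period S of Mars as seen from Earth: 1/S = 1/T_earth - 1/T_mars.
T_EARTH = 365.256  # days, Earth's sidereal orbital period
T_MARS = 686.980   # days, Mars's sidereal orbital period

synodic_days = 1.0 / (1.0 / T_EARTH - 1.0 / T_MARS)
print(round(synodic_days))  # 780 days, i.e. roughly two years and two months
```

This is why minimum-energy launch opportunities recur about every 26 months, as in the 1969/1971 window sequence mentioned above.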
The first to contact the surface were two Soviet probes: the Mars 2 lander on November 27 and the Mars 3 lander on December 2, 1971; Mars 2 failed during descent, and Mars 3 failed about twenty seconds after the first Martian soft landing. Mars 6 failed during descent but did return some corrupted atmospheric data in 1974. The 1975 NASA launches of the Viking program consisted of two orbiters, each with a lander that successfully soft-landed in 1976. Viking 1 remained operational for six years, Viking 2 for three years. The Viking landers relayed the first color panoramas of Mars. The Soviet probes Phobos 1 and 2 were sent to Mars in 1988 to study Mars and its two moons, with a focus on Phobos. Phobos 1 lost contact on the way to Mars. Phobos 2, while successfully photographing Mars and Phobos, failed before it was set to release two landers to the surface of Phobos. Missions that ended prematurely after Phobos 1 and 2 (1988) include (see Probe difficulties section for more details): Mars Observer (launched in 1992) Mars 96 (1996) Mars Climate Orbiter (1999) Mars Polar Lander with Deep Space 2 (1999) Nozomi (2003) Beagle 2 (2003) Fobos-Grunt with Yinghuo-1 (2011) Schiaparelli lander (2016) Following the 1993 failure of the Mars Observer orbiter, the NASA Mars Global Surveyor achieved Mars orbit in 1997. This mission was a complete success, having finished its primary mapping mission in early 2001. Contact was lost with the probe in November 2006 during its third extended program, after exactly 10 operational years in space. The NASA Mars Pathfinder, carrying the robotic exploration vehicle Sojourner, landed in Ares Vallis on Mars in the summer of 1997, returning many images. NASA's Mars Odyssey orbiter entered Mars orbit in 2001. Odyssey's Gamma Ray Spectrometer detected significant amounts of hydrogen in the upper metre or so of regolith on Mars. This hydrogen is thought to be contained in large deposits of water ice.
The Mars Express mission of the European Space Agency (ESA) reached Mars in 2003. It carried the Beagle 2 lander, which was not heard from after being released and was declared lost in February 2004. Beagle 2 was located in January 2015 by the HiRISE camera on NASA's Mars Reconnaissance Orbiter (MRO); it had landed safely but failed to fully deploy its solar panels and antenna. In early 2004, the Mars Express Planetary Fourier Spectrometer team announced that the orbiter had detected methane in the Martian atmosphere, a potential biosignature. ESA announced in June 2006 the discovery of aurorae on Mars by Mars Express. In January 2004, the NASA twin Mars Exploration Rovers named Spirit (MER-A) and Opportunity (MER-B) landed on the surface of Mars. Both met and exceeded all their science objectives. Among the most significant scientific returns has been conclusive evidence that liquid water existed at some time in the past at both landing sites. Martian dust devils and windstorms occasionally cleaned both rovers' solar panels, and thus increased their lifespans. The Spirit rover (MER-A) was active until 2010, when it stopped sending data because it had become stuck in a sand dune and was unable to reorient itself to recharge its batteries. Rosetta came within 250 km of Mars during its 2007 flyby. Dawn flew by Mars in February 2009 for a gravity assist on its way to investigate Vesta and Ceres. Phoenix landed on the north polar region of Mars on May 25, 2008. Its robotic arm dug into the Martian soil, and the presence of water ice was confirmed on June 20, 2008. The mission concluded on November 10, 2008, after contact was lost. In 2008, the price of transporting material from the surface of Earth to the surface of Mars was approximately US$309,000 per kilogram. The Indian Space Research Organisation (ISRO) launched its Mars Orbiter Mission (MOM) on November 5, 2013, and it was inserted into Mars orbit on September 24, 2014.
India's ISRO is the fourth space agency to reach Mars, after the Soviet space program, NASA and ESA. India successfully placed a spacecraft into Mars orbit and became the first country to do so on its maiden attempt. Overview of missions The following is a brief overview of previous missions to Mars, oriented towards orbiters and flybys; see also Mars landing and Mars rover. Early Soviet missions 1960s Between 1960 and 1969, the Soviet Union launched nine probes intended to reach Mars. All failed: three at launch, three failed to reach near-Earth orbit, one during the burn to put the spacecraft into trans-Mars trajectory, and two during the interplanetary cruise. The Mars 1M program (sometimes dubbed Marsnik in Western media) was the first Soviet uncrewed interplanetary exploration program, consisting of two flyby probes launched towards Mars in October 1960, Mars 1960A and Mars 1960B (also known as Korabl 4 and Korabl 5 respectively). After launch, the third-stage pumps on both launchers were unable to develop enough pressure to commence ignition, so Earth parking orbit was not achieved. The spacecraft reached an altitude of 120 km before reentry. Mars 1962A was a Mars flyby mission, launched on October 24, 1962, and Mars 1962B an intended first Mars lander mission, launched in late December of the same year. Both failed, either breaking up while entering Earth orbit or having the upper stage explode in orbit during the burn to put the spacecraft into trans-Mars trajectory. The first success Mars 1 (1962 Beta Nu 1), an automatic interplanetary spacecraft launched to Mars on November 1, 1962, was the first probe of the Soviet Mars probe program to achieve interplanetary orbit.
Mars 1 was intended to fly by the planet at a distance of about 11,000 km and take images of the surface, as well as send back data on cosmic radiation, micrometeoroid impacts and Mars' magnetic field, radiation environment, atmospheric structure, and possible organic compounds. Sixty-one radio transmissions were conducted, initially at 2-day intervals and later at 5-day intervals, from which a large amount of interplanetary data was collected. On 21 March 1963, when the spacecraft was at a distance of 106,760,000 km from Earth on its way to Mars, communications ceased due to failure of its antenna orientation system. In 1964, both Soviet probe launches, Zond 1964A on June 4 and Zond 2 on November 30 (part of the Zond program), resulted in failures. Zond 1964A failed at launch, while communication with Zond 2 was lost en route to Mars after a mid-course maneuver, in early May 1965. In 1969, as part of the Mars probe program, the Soviet Union prepared two identical 5-ton orbiters called M-69, dubbed Mars 1969A and Mars 1969B by NASA. Both probes were lost in launch-related failures of the newly developed Proton rocket. 1970s The USSR intended to have the first artificial satellite of Mars, beating the planned American Mariner 8 and Mariner 9 orbiters. In May 1971, one day after Mariner 8 malfunctioned at launch and failed to reach orbit, Cosmos 419 (Mars 1971C), a heavy probe of the Soviet Mars program M-71, also failed to launch. This spacecraft was designed as an orbiter only, while the next two probes of project M-71, Mars 2 and Mars 3, were multipurpose combinations of an orbiter and a lander carrying small ski-walking PrOP-M rovers, which would have been the first rovers on another planet. They were successfully launched in mid-May 1971 and reached Mars about seven months later. On November 27, 1971, the lander of Mars 2 crash-landed due to an on-board computer malfunction and became the first man-made object to reach the surface of Mars.
On 2 December 1971, the Mars 3 lander became the first spacecraft to achieve a soft landing on Mars, but its transmission was interrupted after 14.5 seconds. The Mars 2 and 3 orbiters sent back a relatively large volume of data covering the period from December 1971 to March 1972, although transmissions continued through August. By 22 August 1972, after sending back data and a total of 60 pictures, Mars 2 and 3 had concluded their missions. The images and data enabled creation of surface relief maps and gave information on Martian gravity and magnetic fields. In 1973, the Soviet Union sent four more probes to Mars: the Mars 4 and Mars 5 orbiters and the Mars 6 and Mars 7 flyby/lander combinations. All missions except Mars 7 sent back data, with Mars 5 the most successful. Mars 5 transmitted just 60 images before a loss of pressurization in the transmitter housing ended the mission. The Mars 6 lander transmitted data during descent but failed upon impact. Mars 4 flew by the planet at a range of 2200 km, returning one swath of pictures and radio occultation data, which constituted the first detection of the nightside ionosphere on Mars. The Mars 7 probe separated prematurely from its carrier vehicle due to a problem in the operation of one of the onboard systems (attitude control or retro-rockets) and missed the planet. Mariner program In 1964, NASA's Jet Propulsion Laboratory made two attempts at reaching Mars. Mariner 3 and Mariner 4 were identical spacecraft designed to carry out the first flybys of Mars. Mariner 3 was launched on November 5, 1964, but the shroud encasing the spacecraft atop its rocket failed to open properly, dooming the mission. Three weeks later, on November 28, 1964, Mariner 4 was launched successfully on a 7-month voyage to Mars. Mariner 4 flew past Mars on July 14, 1965, providing the first close-up photographs of another planet. The pictures, gradually played back to Earth from a small tape recorder on the probe, showed impact craters.
It provided radically more accurate data about the planet; a surface atmospheric pressure of about 1% of Earth's and daytime temperatures of −100 °C (−148 °F) were estimated. No magnetic field or Martian radiation belts were detected. The new data meant redesigns for then-planned Martian landers, and showed life would have a more difficult time surviving there than previously anticipated. NASA continued the Mariner program with another pair of Mars flyby probes, Mariner 6 and 7. They were sent at the next launch window and reached the planet in 1969. During the following launch window the Mariner program again suffered the loss of one of a pair of probes. Mariner 9 successfully entered orbit about Mars, the first spacecraft ever to do so, after the launch failure of its sister ship, Mariner 8. When Mariner 9 reached Mars in 1971, it and two Soviet orbiters (Mars 2 and Mars 3) found that a planet-wide dust storm was in progress. The mission controllers used the time spent waiting for the storm to clear to have the probe rendezvous with, and photograph, Phobos. When the storm cleared sufficiently for Mars' surface to be photographed by Mariner 9, the pictures returned represented a substantial advance over previous missions. These pictures were the first to offer more detailed evidence that liquid water might at one time have flowed on the planetary surface. They also finally discerned the true nature of many Martian albedo features. For example, Nix Olympica was one of only a few features that could be seen during the planetary dust storm, revealing it to be the highest mountain (a volcano, to be exact) on any planet in the Solar System, and leading to its reclassification as Olympus Mons. Viking program The Viking program launched the Viking 1 and Viking 2 spacecraft to Mars in 1975; the program consisted of two orbiters and two landers – these were the second and third spacecraft to successfully land on Mars.
In 1976, Viking 1 and Viking 2 touched down on the Martian surface. These landers were significantly larger than the Soviet Mars 3 lander (Viking 1 weighed 3,527 kilograms compared to the 358 kg Mars 3 lander). They were able to take the first photographs from the surface of Mars. Viking 1 operated on the surface of Mars for around six years (on November 11, 1982, the lander stopped operating after receiving a faulty command) and Viking 2 for over three years (its mission ended in early 1980). Both landers were equipped with a robotic sampler arm, which successfully scooped up soil samples and tested them with instruments such as a gas chromatograph–mass spectrometer. The landers measured temperatures ranging from −86 °C before dawn to −33 °C in the afternoon. Both landers had issues obtaining accurate results from their seismometers. Photographs from the landers and orbiters surpassed expectations in quality and quantity; the total exceeded 4,500 from the landers and 52,000 from the orbiters. The Viking landers recorded atmospheric pressures ranging from below 7 millibars (0.007 bar) to over 10 millibars (0.010 bar) over the Martian year, leading to the conclusion that atmospheric pressure varies by about 30 percent during the Martian year because carbon dioxide condenses onto and sublimes from the polar caps. Martian winds generally blow more slowly than expected; scientists had expected them to reach speeds of several hundred miles an hour from observing global dust storms, but neither lander recorded gusts over 120 kilometers (74 miles) an hour, and average velocities were considerably lower. Nevertheless, the orbiters observed more than a dozen small dust storms. The Viking landers detected nitrogen in the atmosphere for the first time and found that it was a significant component of the Martian atmosphere. Atmospheric analysis prompted speculation that the atmosphere of Mars was once much denser.
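The roughly 30 percent seasonal swing follows directly from the pressure extremes quoted above; a quick arithmetic check, using the rounded 7 and 10 millibar endpoints from the text:

```python
# Seasonal surface-pressure extremes recorded by the Viking landers
# (rounded values from the text, in millibars).
p_min = 7.0   # below ~7 mbar at the seasonal minimum
p_max = 10.0  # over ~10 mbar at the seasonal maximum

# Relative variation, expressed against the seasonal maximum.
variation = (p_max - p_min) / p_max
print(f"{variation:.0%}")  # -> 30%
```

The variation is driven by a substantial fraction of the CO2 atmosphere freezing out at the winter polar cap and subliming back each year.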
The primary scientific objectives of the lander mission were to search for biosignatures and observe meteorologic, seismic and magnetic properties of Mars. The results of the biological experiments on board the Viking landers remain inconclusive, with a reanalysis of the Viking data published in 2012 suggesting signs of microbial life on Mars. The Viking orbiters revealed that large floods of water carved deep valleys, eroded grooves into bedrock, and traveled thousands of kilometers. Areas of branched streams in the southern hemisphere suggest that rain once fell. Mars Pathfinder, Sojourner rover Mars Pathfinder was a U.S. spacecraft that landed a base station with a roving probe on Mars on July 4, 1997. It consisted of a lander and a small wheeled robotic rover named Sojourner, which was the first rover to operate on the surface of Mars. In addition to scientific objectives, the Mars Pathfinder mission was also a "proof of concept" for various technologies, such as an airbag landing system and automated obstacle avoidance, both later exploited by the Mars Exploration Rovers. Mars Global Surveyor After the 1993 failure of NASA's Mars Observer orbiter, NASA retooled and launched Mars Global Surveyor (MGS). Mars Global Surveyor launched on November 7, 1996, and entered orbit on September 12, 1997. After a year and a half trimming its orbit from a looping ellipse to a circular track around the planet, the spacecraft began its primary mapping mission in March 1999. It observed the planet from a low-altitude, nearly polar orbit over the course of one complete Martian year, the equivalent of nearly two Earth years. Mars Global Surveyor completed its primary mission on January 31, 2001, and completed several extended mission phases until communication was lost in November 2006. The mission studied the entire Martian surface, atmosphere, and interior, and returned more data about the red planet than all previous Mars missions combined.
The data has been archived and remains available publicly. Among key scientific findings, Global Surveyor took pictures of gullies and debris flow features that suggest there may be current sources of liquid water, similar to an aquifer, at or near the surface of the planet. Similar channels on Earth are formed by flowing water, but on Mars the temperature is normally too cold and the atmosphere too thin to sustain liquid water. Nevertheless, many scientists hypothesize that liquid groundwater can sometimes surface on Mars, erode gullies and channels, and pool at the bottom before freezing and evaporating. Magnetometer readings showed that the planet's magnetic field is not globally generated in the planet's core, but is localized in particular areas of the crust. New temperature data and closeup images of the Martian moon Phobos showed that its surface is composed of powdery material at least 1 metre (3 feet) thick, caused by millions of years of meteoroid impacts. Data from the spacecraft's laser altimeter gave scientists their first 3-D views of Mars' north polar ice cap in January 1999. Faulty software uploaded to the vehicle in June 2006 caused the spacecraft to orient its solar panels incorrectly several months later, resulting in battery overheating and subsequent failure. On November 5, 2006, MGS lost contact with Earth. NASA ended efforts to restore communication on January 28, 2007. Mars Odyssey and Mars Express In 2001, NASA's Mars Odyssey orbiter arrived at Mars. Its mission is to use spectrometers and imagers to hunt for evidence of past or present water and volcanic activity on Mars. In 2002, it was announced that the probe's gamma-ray spectrometer and neutron spectrometer had detected large amounts of hydrogen, indicating that there are vast deposits of water ice in the upper three meters of Mars' soil within 60° latitude of the south pole. On June 2, 2003, the European Space Agency's Mars Express set off from Baikonur Cosmodrome to Mars. 
The Mars Express craft consists of the Mars Express Orbiter and the stationary lander Beagle 2. The lander carried a digging device and the smallest mass spectrometer created to date, as well as a range of other devices, on a robotic arm in order to accurately analyze soil beneath the dusty surface and look for biosignatures and biomolecules. The orbiter entered Mars orbit on December 25, 2003, and Beagle 2 entered Mars' atmosphere the same day. However, attempts to contact the lander failed. Communications attempts continued throughout January, but Beagle 2 was declared lost in mid-February, and a joint inquiry was launched by the UK and ESA. The Mars Express Orbiter confirmed the presence of water ice and carbon dioxide ice at the planet's south pole, while NASA had previously confirmed their presence at the north pole of Mars. The lander's fate remained a mystery until it was located intact on the surface of Mars in a series of images from the Mars Reconnaissance Orbiter. The images suggest that two of the spacecraft's four solar panels failed to deploy, blocking the spacecraft's communications antenna. Beagle 2 was the first British and first European probe to achieve a soft landing on Mars. MER, Opportunity rover, Spirit rover, Phoenix lander NASA's Mars Exploration Rover Mission (MER), started in 2003, was a robotic space mission involving two rovers, Spirit (MER-A) and Opportunity (MER-B), that explored the Martian surface geology. The mission's scientific objective was to search for and characterize a wide range of rocks and soils that hold clues to past water activity on Mars. The mission was part of NASA's Mars Exploration Program, which includes three previous successful landers: the two Viking program landers in 1976 and the Mars Pathfinder probe in 1997.
Rosetta and Dawn swingbys The ESA Rosetta space probe, on its mission to the comet 67P/Churyumov–Gerasimenko, flew within 250 km of Mars on February 25, 2007, in a gravitational slingshot designed to slow and redirect the spacecraft. The NASA Dawn spacecraft used the gravity of Mars in 2009 to change direction and velocity on its way to Vesta, and tested its cameras and other instruments on Mars. Fobos-Grunt On November 8, 2011, Russia's Roscosmos launched an ambitious mission called Fobos-Grunt. It consisted of a lander intended to retrieve a sample from Mars' moon Phobos and return it to Earth, and was to place the Chinese Yinghuo-1 probe in Mars orbit. The Fobos-Grunt mission suffered a complete control and communications failure shortly after launch and was left stranded in low Earth orbit, later falling back to Earth. The Yinghuo-1 satellite and Fobos-Grunt underwent destructive re-entry on January 15, 2012, finally disintegrating over the Pacific Ocean. Mars Orbiter Mission The Mars Orbiter Mission, also called Mangalyaan, was launched on 5 November 2013 by the Indian Space Research Organisation (ISRO). It was successfully inserted into Martian orbit on 24 September 2014. The mission was a technology demonstrator and, as a secondary objective, also studied the Martian atmosphere. It was India's first mission to Mars, and with it ISRO became the fourth space agency to successfully reach Mars after the Soviet Union, NASA (USA) and ESA (Europe). It was completed on a record low budget of $71 million, making it the least expensive Mars mission to date. The mission concluded on September 27, 2022, after contact was lost. InSight and MarCO In August 2012, NASA selected InSight, a $425 million lander mission with a heat flow probe and seismometer, to determine the deep interior structure of Mars. InSight landed successfully on Mars on 26 November 2018. Valuable data on the atmosphere, surface and the planet's interior were gathered by InSight.
InSight's mission was declared ended on 21 December 2022. Two flyby CubeSats called MarCO were launched with InSight on 5 May 2018 to provide real-time telemetry during the entry and landing of InSight. The CubeSats separated from the Atlas V booster 1.5 hours after launch and traveled their own trajectories to Mars. Current missions On 10 March 2006, NASA's Mars Reconnaissance Orbiter (MRO) probe arrived in orbit to conduct a two-year science survey. The orbiter began mapping the Martian terrain and weather to find suitable landing sites for upcoming lander missions. The MRO captured the first image of a series of active avalanches near the planet's north pole in 2008. The Mars Science Laboratory mission was launched on November 26, 2011, and delivered the Curiosity rover to the surface of Mars on August 6, 2012 UTC. It is larger and more advanced than the Mars Exploration Rovers, with a velocity of up to 90 meters per hour (295 feet per hour). Experiments include a laser chemical sampler that can deduce the composition of rocks at a distance of 7 meters. MAVEN orbiter was launched on 18 November 2013, and on 22 September 2014 it was injected into an areocentric elliptic orbit 6,200 km (3,900 mi) by 150 km (93 mi) above the planet's surface to study its atmosphere. Mission goals include determining how the planet's atmosphere and water, presumed to have once been substantial, were lost over time. The ExoMars Trace Gas Orbiter arrived at Mars in 2016 and deployed the Schiaparelli EDM, a test lander. Schiaparelli crashed on the surface, but it transmitted key data during its parachute descent, so the test was declared a partial success. Overview of missions Mars Reconnaissance Orbiter The Mars Reconnaissance Orbiter (MRO) is a multipurpose spacecraft designed to conduct reconnaissance and exploration of Mars from orbit.
The US$720 million spacecraft was built by Lockheed Martin under the supervision of the Jet Propulsion Laboratory, launched August 12, 2005, and entered Mars orbit on March 10, 2006. The MRO contains a host of scientific instruments such as the HiRISE camera, CTX camera, CRISM, and SHARAD. The HiRISE camera is used to analyze Martian landforms, whereas CRISM and SHARAD can detect water, ice, and minerals on and below the surface. Additionally, MRO is paving the way for upcoming generations of spacecraft through daily monitoring of Martian weather and surface conditions, searching for future landing sites, and testing a new telecommunications system that enables it to send and receive information at an unprecedented bitrate compared to previous Mars spacecraft. Data transfer to and from the spacecraft occurs faster than in all previous interplanetary missions combined and allows it to serve as an important relay satellite for other missions. Curiosity rover The NASA Mars Science Laboratory mission, with its rover named Curiosity, was launched on November 26, 2011, and landed on Mars on August 6, 2012, on Aeolis Palus in Gale Crater. The rover carries instruments designed to look for past or present conditions relevant to the habitability of Mars. MAVEN NASA's MAVEN is an orbiter mission to study the upper atmosphere of Mars. It also serves as a communications relay satellite for robotic landers and rovers on the surface of Mars. MAVEN was launched 18 November 2013 and reached Mars on 22 September 2014. Trace Gas Orbiter and EDM The ExoMars Trace Gas Orbiter is an atmospheric research orbiter built in collaboration between ESA and Roscosmos. It was injected into Mars orbit on 19 October 2016 to gain a better understanding of methane (CH4) and other trace gases present in the Martian atmosphere that could be evidence for possible biological or geological activity. The Schiaparelli EDM lander was destroyed when trying to land on the surface of Mars.
Hope The United Arab Emirates launched the Hope Mars Mission in July 2020 on the Japanese H-IIA booster. It was successfully placed into orbit on 9 February 2021. It is studying the Martian atmosphere and weather. Tianwen-1 and Zhurong rover Tianwen-1 was a Chinese mission launched on 23 July 2020 which included an orbiter, a lander, and a rover, along with a package of deployable and remote cameras. Tianwen-1 entered orbit on 10 February 2021; the Zhurong rover successfully landed on 14 May 2021 and deployed on 22 May 2021. Zhurong operated for 347 Martian days and traveled 1,921 meters across Mars before entering a hibernation state in May 2022. The rover has not resumed operations since then, but the orbiter has continued to work. Mars 2020, Perseverance rover, Ingenuity helicopter The Mars 2020 mission by NASA was launched on 30 July 2020 on a United Launch Alliance Atlas V rocket from Cape Canaveral. It is based on the Mars Science Laboratory design. The scientific payload is focused on astrobiology. It includes the Perseverance rover and the retired Ingenuity helicopter. Unlike older rovers that relied on solar power, Perseverance is nuclear powered, to survive longer than its predecessors in this harsh, dusty environment. The car-size rover weighs about 1 ton and carries a robotic arm, zoom cameras, a chemical analyzer and a rock drill. After traveling 293 million miles (471 million km) to reach Mars over the course of more than six months, Perseverance successfully landed on February 18, 2021. Its initial mission is set for at least one Martian year, or 687 Earth days. It will search for signs of ancient life and explore the red planet's surface. As of October 19, 2021, Perseverance had captured the first sounds from Mars. Recordings consisted of five hours of Martian wind gusts, rover wheels crunching over gravel, and motors whirring as the spacecraft moves its arm.
The sounds give researchers clues about the atmosphere, such as how far sound travels on the planet. Europa Clipper, Hera and Psyche The NASA Europa Clipper to Jupiter and Europa, the ESA Hera probe to Didymos and the NASA Psyche space probe to the metal-rich asteroid 16 Psyche will undertake flybys of Mars on March 1, 2025, March 2025, and May 2026 respectively, in gravitational slingshots designed to redirect the spacecraft. Future missions EscaPADE (Escape and Plasma Acceleration and Dynamics Explorers) by the University of California, Berkeley, is a planned twin-spacecraft NASA Mars orbiter mission to study the structure, composition, variability and dynamics of Mars' magnetosphere and atmospheric escape processes. The EscaPADE orbiters were originally to be launched in 2022 as secondary payloads on a Falcon Heavy together with the Psyche and Janus missions, but will now be launched on a New Glenn. The mission is scheduled to launch in Q2 2025. India's ISRO plans to send a follow-up mission to its Mars Orbiter Mission in 2026; called the Mars Lander Mission, it will consist of an orbiter and a lander. As part of the ExoMars program, ESA and Roscosmos planned to send the Rosalind Franklin rover in 2022 to search for evidence of past or present microscopic life on Mars. The lander that was to deliver the rover was called Kazachok, and it would have performed scientific studies for about two years. The mission was delayed indefinitely as a result of the 2022 Russian invasion of Ukraine; in 2024 it received additional funding and is now planned for launch in 2028. Proposals Tianwen-3 is a Chinese mission to return samples of Martian soil to Earth. The mission would launch in late 2028, with a lander and ascent vehicle and an orbiter and return module launched separately on two rockets. The samples would be returned to Earth by July 2031.
NASA-ESA Mars Sample Return is a three-launch architecture concept for a sample return mission, which uses a rover to cache small samples, a Mars ascent stage to send them into orbit, and an orbiter to rendezvous with them above Mars and take them to Earth. Solar-electric propulsion could allow a one-launch sample return instead of three. Mars-Grunt is a Russian mission concept to bring a sample of Martian soil to Earth. Mars Aerial and Ground Global Intelligent Explorer (MAGGIE) is a proposed compact fixed-wing electric aircraft powered by solar energy to fly in the Martian atmosphere with vertical take-off/landing (VTOL) capability. Other future mission concepts include polar probes, Martian aircraft and a network of small meteorological stations. Long-term areas of study may include Martian lava tubes, resource utilization, and electronic charge carriers in rocks. Human mission proposals The human exploration of Mars has been an aspiration since the earliest days of modern rocketry; Robert H. Goddard credited the idea of reaching Mars as his inspiration to study the physics and engineering of space flight. Proposals for human exploration of Mars have been made throughout the history of space exploration. Currently there are multiple active plans and programs to put humans on Mars within the next ten to thirty years, both governmental and private, some of which are listed below. NASA Human exploration by the United States was identified as a long-term goal in the Vision for Space Exploration announced in 2004 by then US President George W. Bush. The planned Orion spacecraft would be used to send a human expedition to Earth's moon by 2020 as a stepping stone to a Mars expedition. On September 28, 2007, NASA administrator Michael D. Griffin stated that NASA aimed to put a person on Mars by 2037.
On December 2, 2014, NASA's Advanced Human Exploration Systems and Operations Mission Director Jason Crusan and Deputy Associate Administrator for Programs James Reuthner announced tentative support for the Boeing "Affordable Mars Mission Design", including radiation shielding, centrifugal artificial gravity, in-transit consumable resupply, and a lander that can return. Reuthner suggested that if adequate funding were forthcoming, the proposed mission would be expected in the early 2030s. On October 8, 2015, NASA published its official plan for human exploration and colonization of Mars, called "Journey to Mars". The plan operates through three distinct phases leading to fully sustained colonization. The first stage, already underway, is the "Earth Reliant" phase. This phase continues using the International Space Station until 2024, validating deep space technologies and studying the effects of long-duration space missions on the human body. The second stage, "Proving Ground", moves away from Earth reliance and ventures into cislunar space for most of its tasks. This is when NASA plans to capture an asteroid, test deep space habitation facilities, and validate the capabilities required for human exploration of Mars. The last stage, the "Earth Independent" phase, includes long-term missions on the lunar surface which leverage surface habitats that only require routine maintenance, and the harvesting of Martian resources for fuel, water, and building materials. NASA is still aiming for human missions to Mars in the 2030s, though Earth independence could take decades longer. On August 28, 2015, NASA funded a year-long simulation to study the effects of a year-long Mars mission on six scientists. The scientists lived in a biodome on Mauna Loa in Hawaii with limited connection to the outside world and were only allowed outside when wearing spacesuits.
NASA's human Mars exploration plans have evolved through the NASA Mars Design Reference Missions, a series of design studies for human exploration of Mars. In 2017, NASA's focus shifted to a return to the Moon by 2024 with the Artemis program; a flight to Mars could follow after this project. SpaceX The long-term goal of the private corporation SpaceX is the establishment of routine flights to Mars to enable colonization. To this end, the company is developing Starship, a spacecraft capable of crew transportation to Mars and other celestial bodies, along with its booster Super Heavy. In 2016 SpaceX announced plans to send two uncrewed Starships to Mars by 2022, followed by two more uncrewed flights and two crewed flights in 2024. SpaceX is currently targeting the first uncrewed launches NET 2026, with the first crewed flights happening NET 2028. Starship is planned to have a payload of at least 100 tonnes and is designed to use a combination of aerobraking and propulsive descent, using fuel produced from a Mars in-situ resource utilization facility. As of 2024, the Starship development program has seen multiple integrated test flights and is progressing towards full reusability. SpaceX's plans involve the mass manufacturing of Starship, with the Mars colony initially sustained by resupply from Earth and by in situ resource utilization on Mars, until it reaches full self-sustainability. Any future human mission to Mars will likely take place within the optimal Mars launch window, which occurs every 26 months. Zubrin Mars Direct, a low-cost human mission proposed by Robert Zubrin, founder of the Mars Society, would use heavy-lift Saturn V class rockets, such as the Ares V, to skip orbital construction, LEO rendezvous, and lunar fuel depots. A modified proposal, called "Mars to Stay", involves not returning the first immigrant explorers immediately, if ever (see Colonization of Mars).
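The 26-month cadence of Mars launch windows mentioned above is the Earth–Mars synodic period, which follows from the two planets' orbital periods. A short sketch of the standard formula; the orbital-period values are round published figures, not taken from this text:

```python
# Earth-Mars synodic period: 1 / |1/T_earth - 1/T_mars|,
# i.e. how long until the two planets return to the same relative geometry.
# Sidereal orbital periods in days (standard published values).
T_EARTH = 365.256
T_MARS = 686.980

synodic_days = 1.0 / abs(1.0 / T_EARTH - 1.0 / T_MARS)
synodic_months = synodic_days / 30.44  # mean calendar month in days

# Comes out to roughly 780 days, i.e. about 26 months.
print(f"{synodic_days:.1f} days ~ {synodic_months:.1f} months")
```

This is why favorable transfer opportunities, and hence most Mars launches, cluster roughly every two years and two months.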
Probe difficulties The challenge, complexity and length of Mars missions have led to many mission failures. The high failure rate of missions attempting to explore Mars is informally called the "Mars Curse" or "Martian Curse". The phrase "Galactic Ghoul" or "Great Galactic Ghoul" refers to a fictitious space monster that subsists on a diet of Mars probes, and is sometimes facetiously used to "explain" the recurring difficulties. Two Soviet probes were sent to Mars in 1988 as part of the Phobos program. Phobos 1 operated normally until an expected communications session on 2 September 1988 failed to occur. The problem was traced to a software error, which deactivated Phobos 1's attitude thrusters, causing the spacecraft's solar arrays to no longer point at the Sun and depleting Phobos 1's batteries. Phobos 2 operated normally throughout its cruise and Mars orbital insertion on January 29, 1989, gathering data on the Sun, interplanetary medium, Mars, and Phobos. Shortly before the final phase of the mission – during which the spacecraft was to approach within 50 m of Phobos' surface and release two landers, one a mobile 'hopper', the other a stationary platform – contact with Phobos 2 was lost. The mission ended when the spacecraft signal failed to be reacquired on March 27, 1989. The cause of the failure was determined to be a malfunction of the on-board computer. A few years later, Mars Observer, launched by NASA in 1992, failed in 1993 as it approached Mars. Mars 96, an orbiter launched on November 16, 1996, by Russia, failed when the planned second burn of the Block D-2 fourth stage did not occur. Following the success of Global Surveyor and Pathfinder, another spate of failures occurred in 1998 and 1999, with the Japanese Nozomi orbiter and NASA's Mars Climate Orbiter, Mars Polar Lander, and Deep Space 2 penetrators all suffering various fatal errors. The Mars Climate Orbiter was noted for mixing up U.S.
customary units with metric units, causing the orbiter to burn up while entering Mars' atmosphere. The European Space Agency has also attempted to land two probes on the Martian surface: Beagle 2, a British-built lander that failed to deploy its solar arrays properly after touchdown in December 2003, and Schiaparelli, which was carried to Mars by the ExoMars Trace Gas Orbiter. Contact with the Schiaparelli EDM lander was lost 50 seconds before touchdown. It was later confirmed that the lander struck the surface at high velocity, possibly exploding.
Technology
Basics_10
null
426219
https://en.wikipedia.org/wiki/Classical%20electromagnetism
Classical electromagnetism
Classical electromagnetism or classical electrodynamics is a branch of physics focused on the study of interactions between electric charges and currents using an extension of the classical Newtonian model. It is, therefore, a classical field theory. The theory provides a description of electromagnetic phenomena whenever the relevant length scales and field strengths are large enough that quantum mechanical effects are negligible. For small distances and low field strengths, such interactions are better described by quantum electrodynamics, which is a quantum field theory. History The physical phenomena that electromagnetism describes have been studied as separate fields since antiquity. For example, there were many advances in the field of optics centuries before light was understood to be an electromagnetic wave. However, the theory of electromagnetism, as it is currently understood, grew out of Michael Faraday's experiments suggesting the existence of an electromagnetic field and James Clerk Maxwell's use of differential equations to describe it in his A Treatise on Electricity and Magnetism (1873). The development of electromagnetism in Europe included the development of methods to measure voltage, current, capacitance, and resistance. Detailed historical accounts are given by Wolfgang Pauli, E. T. Whittaker, Abraham Pais, and Bruce J. Hunt. Lorentz force The electromagnetic field exerts the following force (often called the Lorentz force) on charged particles: F = q(E + v × B), where all boldfaced quantities are vectors: F is the force that a particle with charge q experiences, E is the electric field at the location of the particle, v is the velocity of the particle, and B is the magnetic field at the location of the particle. The above equation illustrates that the Lorentz force is the sum of two vectors. One is the cross product of the velocity and magnetic field vectors. 
Based on the properties of the cross product, this produces a vector that is perpendicular to both the velocity and magnetic field vectors. The other vector is in the same direction as the electric field. The sum of these two vectors is the Lorentz force. Although the equation appears to suggest that the electric and magnetic fields are independent, it can be rewritten in terms of the four-current Jβ (instead of charge) and a single electromagnetic tensor Fαβ that represents the combined field: fα = Fαβ Jβ. Electric field The electric field E is defined such that, on a stationary charge: E = F / q0, where q0 is what is known as a test charge and F is the force on that charge. The size of the charge does not really matter, as long as it is small enough not to influence the electric field by its mere presence. What is plain from this definition, though, is that the unit of E is N/C (newtons per coulomb). This unit is equal to V/m (volts per meter); see below. In electrostatics, where charges are not moving, around a distribution of point charges, the forces determined from Coulomb's law may be summed. The result after dividing by q0 is: E(r) = (1/4πε0) Σi qi (r − ri) / |r − ri|³, where n is the number of charges, qi is the amount of charge associated with the ith charge, ri is the position of the ith charge, r is the position where the electric field is being determined, and ε0 is the electric constant. If the field is instead produced by a continuous distribution of charge, the summation becomes an integral: E(r) = (1/4πε0) ∫ ρ(r′) (r − r′) / |r − r′|³ dV′, where ρ(r′) is the charge density and r − r′ is the vector that points from the volume element dV′ to the point in space where E is being determined. Both of the above equations are cumbersome, especially if one wants to determine E as a function of position. A scalar function called the electric potential can help. Electric potential, also called voltage (the units for which are the volt), is defined by the line integral φ = −∫C E · dl, where φ is the electric potential, and C is the path over which the integral is being taken. 
Unfortunately, this definition has a caveat. From Maxwell's equations, it is clear that ∇ × E is not always zero, and hence the scalar potential alone is insufficient to define the electric field exactly. As a result, one must add a correction factor, which is generally done by subtracting the time derivative of the vector potential A described below. Whenever the charges are quasistatic, however, this condition will be essentially met. From the definition of charge, one can easily show that the electric potential of a point charge as a function of position is: φ(r) = (1/4πε0) Σi qi / |r − ri|, where qi is each point charge's charge, r is the position at which the potential is being determined, and ri is the position of each point charge. The potential for a continuous distribution of charge is: φ(r) = (1/4πε0) ∫ ρ(r′) / |r − r′| dV′, where ρ(r′) is the charge density, and |r − r′| is the distance from the volume element dV′ to the point in space where φ is being determined. The scalar φ will add to other potentials as a scalar. This makes it relatively easy to break complex problems down into simple parts and add their potentials. Taking the definition of φ backwards, we see that the electric field is just the negative gradient (applying the del operator ∇) of the potential: E = −∇φ. From this formula it is clear that E can be expressed in V/m (volts per meter). Electromagnetic waves A changing electromagnetic field propagates away from its origin in the form of a wave. These waves travel in vacuum at the speed of light and exist in a wide spectrum of wavelengths. Examples of the dynamic fields of electromagnetic radiation (in order of increasing frequency): radio waves, microwaves, light (infrared, visible light and ultraviolet), x-rays and gamma rays. In the field of particle physics this electromagnetic radiation is the manifestation of the electromagnetic interaction between charged particles. General field equations As simple and satisfying as Coulomb's equation may be, it is not entirely correct in the context of classical electromagnetism. 
Problems arise because changes in charge distributions require a non-zero amount of time to be "felt" elsewhere (required by special relativity). For the fields of general charge distributions, the retarded potentials can be computed and differentiated accordingly to yield Jefimenko's equations. Retarded potentials can also be derived for point charges, and the equations are known as the Liénard–Wiechert potentials. The scalar potential is: φ(r, t) = (1/4πε0) q / (|r − rq| − (r − rq) · vq / c), where q is the point charge's charge, r is the position, and rq and vq are the position and velocity of the charge, respectively, as a function of retarded time. The vector potential is similar: A(r, t) = (μ0/4π) q vq / (|r − rq| − (r − rq) · vq / c). These can then be differentiated accordingly to obtain the complete field equations for a moving point particle. Models Branches of classical electromagnetism such as optics, electrical and electronic engineering consist of a collection of relevant mathematical models of different degrees of simplification and idealization to enhance the understanding of specific electrodynamics phenomena. An electrodynamics phenomenon is determined by the particular fields, specific densities of electric charges and currents, and the particular transmission medium. Since there are infinitely many of them, in modeling there is a need for some typical, representative (a) electrical charges and currents, e.g. moving pointlike charges and electric and magnetic dipoles, electric currents in a conductor etc.; (b) electromagnetic fields, e.g. voltages, the Liénard–Wiechert potentials, the monochromatic plane waves, optical rays, radio waves, microwaves, infrared radiation, visible light, ultraviolet radiation, X-rays, gamma rays etc.; (c) transmission media, e.g. electronic components, antennas, electromagnetic waveguides, flat mirrors, mirrors with curved surfaces, convex lenses, concave lenses; resistors, inductors, capacitors, switches; wires, electric and optical cables, transmission lines, integrated circuits etc.; all of which have only a few variable characteristics.
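The electrostatic formulas above translate directly into code. A minimal numerical sketch (with illustrative, hypothetical charge values) of the Lorentz force, the superposition of point-charge fields and potentials, and a finite-difference check that E = −∇φ:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
K = 1.0 / (4.0 * np.pi * EPS0)  # Coulomb constant

def lorentz_force(q, E, v, B):
    """F = q(E + v x B)."""
    return q * (E + np.cross(v, B))

def e_field(r, charges):
    """Superposed Coulomb field at point r from (q_i, r_i) pairs."""
    E = np.zeros(3)
    for q, ri in charges:
        d = r - ri
        E += K * q * d / np.linalg.norm(d) ** 3
    return E

def potential(r, charges):
    """Superposed scalar potential at point r."""
    return sum(K * q / np.linalg.norm(r - ri) for q, ri in charges)

charges = [(1e-9, np.array([0.0, 0.0, 0.0])),
           (-1e-9, np.array([0.1, 0.0, 0.0]))]  # a small dipole, in SI units
r = np.array([0.05, 0.05, 0.0])

# Verify E = -grad(phi) by central differences along each axis.
h = 1e-6
grad = np.array([(potential(r + h * e, charges)
                  - potential(r - h * e, charges)) / (2 * h)
                 for e in np.eye(3)])
print(np.allclose(e_field(r, charges), -grad, rtol=1e-4))  # True
```

Replacing the discrete charge list with charge density samples over a volume mesh turns the same superposition loop into the integral forms given above.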
Physical sciences
Basics_3
Physics
426318
https://en.wikipedia.org/wiki/Barbary%20dove
Barbary dove
The Barbary dove, ringed turtle dove, ringneck dove, ring-necked turtle dove, or ring dove (Streptopelia risoria) is a domestic member of the dove and pigeon family (Columbidae). Taxonomy and domestication Although the Barbary dove is normally assigned its own systematic name, as Streptopelia risoria, considerable doubt exists as to its appropriate classification. Some sources assert confidently that it is a domesticated form of the Eurasian collared dove (Streptopelia decaocto), but the majority of evidence points to it being a domesticated form of the African collared dove (Streptopelia roseogrisea). It appears that it can hybridize freely with either species, and its status as a species must therefore be regarded as doubtful. However, because of the wide use of both the common and systematic names, it is best to consider it separately from either of the parent species. Their time of domestication is also uncertain. While Linnaeus described them in 1756, they may have been imported into Italy from North Africa in the late 16th century. Behavior Barbary doves are easily kept and long-lived in captivity, living for up to 12 years. There have been cases of doves living over 20 years, and, in one case, of a dove living for 29 years. In recent years they have been used extensively in biological research, particularly into the hormonal bases of reproductive behaviour, because their sequences of courtship, mating and parental behaviour have been described accurately and are highly consistent in form. Dove fanciers have bred them in a great variety of colours; the number of colours available has increased dramatically in the latter half of the 20th century, and it is thought that this has been achieved by interbreeding with Streptopelia roseogrisea. Some of these doves carry a mutation that makes them completely white. These white Barbary doves are most commonly used in stage magic acts. 
White Barbary doves are also traditionally released in large public ceremonies, since the white dove is a peace symbol in several cultures, and "dove releases" are also sometimes found at weddings and funerals. However, a release dove is, in fact, usually a homing pigeon, as Barbary doves lack the homing instinct. The coo of the Barbary dove is created by muscles that vibrate air sent up from the dove's lungs. These muscles belong to the fastest known class of vertebrate muscles, contracting as much as 10 times faster than the muscles vertebrates use for running. This class of muscle is usually found in high-speed tissue such as a rattlesnake's tail. Barbary doves were the first bird species found to have this class of muscle. Breeding They can be crossed with the pigeon to produce offspring, but the offspring are not fertile. Gallery
Biology and health sciences
Columbimorphae
Animals
426426
https://en.wikipedia.org/wiki/Rain%20shadow
Rain shadow
A rain shadow is an area of significantly reduced rainfall behind a mountainous region, on the side facing away from prevailing winds, known as its leeward side. Evaporated moisture from water bodies (such as oceans and large lakes) is carried by the prevailing onshore breezes towards the drier and hotter inland areas. When encountering elevated landforms, the moist air is driven upslope towards the peak, where it expands, cools, and its moisture condenses and starts to precipitate. If the landforms are tall and wide enough, most of the humidity will be lost to precipitation over the windward side (also known as the rainward side) before ever making it past the top. As the air descends the leeward side of the landforms, it is compressed and heated, producing foehn winds that absorb moisture downslope and cast a broad "shadow" of dry climate behind the mountain crests. This climate typically takes the form of shrub–steppe, xeric shrublands or even deserts. The condition exists because warm moist air rises by orographic lifting to the top of a mountain range. As atmospheric pressure decreases with increasing altitude, the rising air expands and cools adiabatically until it reaches its adiabatic dew point (which is not the same as its constant-pressure dew point commonly reported in weather forecasts). At the adiabatic dew point, moisture condenses onto the mountain and precipitates on the top and windward sides of the mountain. The air descends on the leeward side, but due to the precipitation it has lost much of its moisture. Typically, descending air also gets warmer because of adiabatic compression (as with foehn winds) down the leeward side of the mountain, which increases the amount of moisture it can absorb and creates an arid region. Notably affected regions There are regular patterns of prevailing winds found in bands round Earth's equatorial region. 
The zone designated the trade winds is the zone between about 30° N and 30° S, blowing predominantly from the northeast in the Northern Hemisphere and from the southeast in the Southern Hemisphere. The westerlies are the prevailing winds in the middle latitudes between 30 and 60 degrees latitude, blowing predominantly from the southwest in the Northern Hemisphere and from the northwest in the Southern Hemisphere. Some of the strongest westerly winds in the middle latitudes come in the Roaring Forties of the Southern Hemisphere, between 30 and 50 degrees latitude. Examples of notable rain shadowing include: Africa Northern Africa The Sahara is made even drier by a strong rain shadow effect caused by major mountain ranges (whose highest points exceed 4,000 meters, about 2.5 miles). To the northwest lie the Atlas Mountains, running along the Mediterranean coast of Morocco, Algeria and Tunisia. On the windward side of the Atlas Mountains, the warm, moist winds blowing from the northwest off the Atlantic Ocean, which contain a lot of water vapor, are forced to rise and expand over the mountain range. This causes them to cool, which causes the excess moisture to condense into high clouds and results in heavy precipitation over the mountain range. This is known as orographic rainfall, and after this process the air is dry, having lost most of its moisture over the Atlas Mountains. On the leeward side, the dry air descends and compresses, warming as it sinks. This warming causes any remaining moisture to evaporate, making clouds disappear, preventing rainfall formation, and creating desert conditions in the Sahara. Desert regions in the Horn of Africa (Ethiopia, Eritrea, Somalia and Djibouti) such as the Danakil Desert are all influenced by the air heating and drying produced by the rain shadow effect of the Ethiopian Highlands. 
Southern Africa The windward side of the island of Madagascar, which sees easterly on-shore winds, is wet tropical, while the western and southern sides of the island lie in the rain shadow of the central highlands and are home to thorn forests and deserts. The same is true for the island of Réunion. On Tristan da Cunha, Sandy Point on the east coast is warmer and drier than the rainy, windswept settlement of Edinburgh of the Seven Seas in the west. In Western Cape Province, the Breede River Valley and the Karoo region lie in the rain shadow of the Cape Fold Mountains and are arid; whereas the wettest parts of the Cape Mountains can receive , Worcester receives only around and is useful only for grazing. Asia Central and Northern Asia The Himalaya and connecting ranges also contribute to arid conditions in Central Asia including Mongolia's Gobi desert, as well as the semi-arid steppes of Mongolia and north-central to north western China. The Verkhoyansk Range in eastern Siberia is the coldest place in the Northern Hemisphere, because the moist southeasterly winds from the Pacific Ocean lose their moisture over the coastal mountains well before reaching the Lena River valley, due to the intense Siberian High forming around the very cold continental air during the winter. One effect in the Sakha Republic (Yakutia) is that, in Yakutsk, Verkhoyansk, and Oymyakon, the average temperature in the coldest month is below . These regions are synonymous with extreme cold. Eastern Asia The Ordos Desert is rain shadowed by mountain chains including the Kara-naryn-ula, the Sheitenula, and the Yin Mountains, which link on to the south end of the Great Khingan Mountains. The central region of Myanmar is in the rain shadow of the Arakan Mountains and is almost semi-arid with only of rain, versus up to on the Rakhine State coast. 
The plains around Tokyo, Japan, known as the Kanto Plain, experience significantly less winter precipitation than the rest of the country because surrounding mountain ranges, including the Japanese Alps, block the prevailing northwesterly winds originating in Siberia. Southern Asia The eastern side of the Sahyadri ranges on the Deccan Plateau, including Vidarbha, North Karnataka, Rayalaseema and western Tamil Nadu. Gilgit and Chitral, Pakistan, are rain shadow areas. The Thar Desert is bounded and rain shadowed by the Aravalli ranges to the southeast, the Himalaya to the northeast, and the Kirthar and Sulaiman ranges to the west. The Central Highlands of Sri Lanka rain shadow the northeastern parts of the island, which experience much less severe summer monsoon rains and instead have precipitation peaks in autumn and winter. Western Asia The peaks of the Caucasus Mountains to the west and the Hindukush and Pamir to the east rain shadow the Karakum and Kyzyl Kum deserts east of the Caspian Sea, as well as the semi-arid Kazakh Steppe. They also cause vast rainfall differences between coastal areas on the Black Sea such as Rize, Batumi and Sochi and the dry lowlands of Azerbaijan facing the Caspian Sea. The semi-arid Anatolian Plateau is rain shadowed by mountain chains, including the Pontic Mountains in the north and the Taurus Mountains in the south. The high peaks of Mount Lebanon rain-shadow the northern parts of the Beqaa Valley and the Anti-Lebanon Mountains. The Judaean Desert, the Dead Sea and the western slopes of the Moab Mountains on the opposite (Jordanian) side are rain-shadowed by the Judaean Mountains. The Dasht-i-Lut in Iran is in the rain shadow of the Elburz and Zagros Mountains and is one of the most lifeless areas on Earth. 
The peaks of the Zagros Mountains rain-shadow the northern half of the West Azerbaijan province in Iranian Azerbaijan (above Urmia), as manifested by the province's dry winters relative to those in the windward part of the region (i.e. the Kurdistan Region and Hakkâri Province in Turkey). Much of the Mesaoria Plain of Cyprus is in the rain shadow of the Troodos Mountains and is semi-arid. Europe Central Europe The Plains of Limagne and Forez in the northern Massif Central, France, are also relatively rain shadowed (mostly the plain of Limagne, shadowed by the Chaîne des Puys: up to 2,000 mm (80") of rain a year on the summits and below 600 mm (20") at Clermont-Ferrand, which is one of the driest places in the country). The Piedmont wine region of northern Italy is rain shadowed by the mountains that surround it on nearly every side: Asti receives only 527 mm (20¾") of precipitation per year, making it one of the driest places in mainland Italy. Some valleys in the inner Alps are also strongly rain shadowed by the high surrounding mountains: the areas of Gap and Briançon in France, and the district of Zernez in Switzerland. Kuyavia and the eastern part of Greater Poland have an average rainfall of about 450 mm (18") because of rain shadowing by the slopes of the Kashubian Switzerland, making them among the driest places in the North European Plain. Northern Europe The Pennines of Northern England, the mountains of Wales, the Lake District and the Highlands of Scotland create a rain shadow that includes most of the eastern United Kingdom, due to the prevailing south-westerly winds. Manchester and Glasgow, for example, receive around double the rainfall of Leeds and Edinburgh respectively (although there are no mountains between Edinburgh and Glasgow). The contrast is even stronger further north, where Aberdeen gets around a third of the rainfall of Fort William or Skye. 
In Devon, rainfall at Princetown on Dartmoor is almost three times the amount received to the east at locations such as Exeter and Teignmouth. The Fens of East Anglia receive similar rainfall amounts to Seville. Iceland has plenty of microclimates courtesy of its mountainous terrain. Akureyri, on a northerly fjord, receives about a third of the precipitation of the island of Vestmannaeyjar off the south coast. The smaller island is in the pathway of Gulf Stream rain fronts, with mountains lining the southern coast of the mainland. The Scandinavian Mountains create a rain shadow for lowland areas east of the mountain chain and prevent the oceanic climate from penetrating further east; thus Bergen and a place like Brekke in Sogn, west of the mountains, receive an annual precipitation of and , respectively, while Oslo receives only , and Skjåk, a municipality situated in a deep valley, receives only . Further east, the partial influence of the Scandinavian Mountains contributes to areas in east-central Sweden around Stockholm only receiving annually. In the north, the mountain range extending to the coast around Narvik and Tromsø causes much higher precipitation there than in north-facing coastal areas further east, such as Alta, or inland areas like Kiruna across the Swedish border. The South Swedish highlands, although not rising more than , reduce precipitation and increase summer temperatures on the eastern side. Combined with the high pressure of the Baltic Sea, this leads to some of the driest climates in the humid zones of Northern Europe being found in the triangle between the coastal areas in the counties of Kalmar, Östergötland and Södermanland along with the offshore island of Gotland on the leeward side of the slopes. Coastal areas in this part of Sweden usually receive less precipitation than windward locations in Andalusia in the south of Spain. 
Southern Europe The Cantabrian Mountains form a sharp division between "Green Spain" to the north and the dry central plateau. The northern-facing slopes receive heavy rainfall from the Bay of Biscay, but the southern slopes are in rain shadow. The other most evident effect on the Iberian Peninsula occurs in the Almería, Murcia and Alicante areas, each with an average rainfall of 300 mm (12"), which are the driest spots in Europe (see Cabo de Gata), mostly as a result of the mountain range running through their western side, which blocks the westerlies. The Norte Region in Portugal has extreme differences in precipitation, with values surpassing in the Peneda-Gerês National Park to values close to in the Douro Valley. Despite being only apart, Chaves has less than half the precipitation of Montalegre. The eastern part of the Pyrenean mountains in the south of France (Cerdagne) is also rain shadowed. In the Northern Apennines of Italy, the Mediterranean city of La Spezia receives twice the rainfall of the Adriatic city of Rimini on the eastern side. This also extends to the southern end of the Apennines, which sees vast rainfall differences between Naples, with above on the Mediterranean side, and Bari, with about on the Adriatic side. The valley of the Vardar River, and south from Skopje to Athens, is in the rain shadow of the Accursed Mountains and the Pindus Mountains. On its windward side the Accursed Mountains have the highest rainfall in Europe, at around , with small glaciers even at mean annual temperatures well above , but the leeward side receives as little as . Caribbean Throughout the Greater Antilles, the southwestern sides are in the rain shadow of the trade winds and can receive as little as per year, as against over on the northeastern, windward sides and over over some highland areas. 
This is most apparent in Cuba, where this phenomenon leads to the Cuban cactus scrub ecoregion, and on the island of Hispaniola (which contains the Caribbean's highest mountain ranges), where it results in xeric semi-arid shrublands throughout the Dominican Republic and Haiti. North American mainland On the largest scale, the entirety of the North American Interior Plains is shielded from the prevailing westerlies carrying moist Pacific weather by the North American Cordillera. More pronounced effects are observed, however, in particular valley regions within the Cordillera, in the direct lee of specific mountain ranges. This includes much of the Basin and Range Province in the United States and Mexico. The Pacific Coast Ranges create rain shadows near the West Coast: The Dungeness Valley around Sequim and Port Angeles, Washington, lies in the rain shadow of the Olympic Mountains. The area averages of rain per year. The rain shadow extends to the eastern Olympic Peninsula, Whidbey Island, parts of the San Juan Islands, and Victoria, British Columbia, which receive between of precipitation each year. Seattle is also affected by the rain shadow, albeit to a much lesser extent. By contrast, Aberdeen, which is situated southwest of the Olympics, receives nearly of rain per year. The east slopes of the Coast Ranges in central and southern California cut off the southern San Joaquin Valley from enough precipitation to ensure desert-like conditions in areas around Bakersfield. San Jose and adjacent cities are usually drier than the rest of the San Francisco Bay Area because of the rain shadow cast by the highest part of the Santa Cruz Mountains. The Sonoran Desert is bounded to the west by the Peninsular Ranges, but extends even along part of the east coast of the Gulf of California. The Sierra Madre Occidental in Mexico lies west of the Chihuahuan Desert. 
Most rain shadows in the western United States are due to the Sierra Nevada mountains in California and the Cascade Mountains, mostly in Oregon and Washington. The Cascades create a rain-shadowed Columbia Basin area of Eastern Washington and valleys in British Columbia, Canada - most notably the Thompson and Nicola Valleys, which can receive less than of rain in parts, and the Okanagan Valley (particularly the south, nearest to the US border), which receives anywhere from 12–17 inches of rain annually. The endorheic Great Basin of Utah and Nevada is in the rain shadows of the Cascades and Sierra Nevada. The Mojave Desert is rain-shadowed by the Sierra Nevada and the Transverse Ranges of southern California. The Black Rock Desert is in the rain shadows of the Cascades and Sierra Nevada. California's Owens Valley is rain-shadowed by the Sierra Nevada. Death Valley in the United States, behind both the Pacific Coast Ranges of California and the Sierra Nevada range, is the driest place in North America and one of the driest places on the planet. This is also due in part to its location well below sea level, where the greater weight of the atmosphere above tends to cause high pressure and dry conditions to dominate. The Colorado Front Range is limited to precipitation that crosses over the Continental Divide. While many locations west of the Divide may receive as much as of precipitation per year, some places on the eastern side, notably the cities of Denver and Pueblo, Colorado, typically receive only about 12 to 19 inches. Thus, the Continental Divide acts as a barrier for precipitation. This effect applies only to storms traveling west-to-east. When low pressure systems skirt the Rocky Mountains and approach from the south, they can generate high precipitation on the eastern side and little or none on the western slope. 
Further east: The Shenandoah Valley of Virginia, wedged between the Ridge-and-Valley Appalachians and the Blue Ridge Mountains and partially shielded from moisture from the west and southeast, is much drier than the very humid remainder of Virginia and the American Southeast. Asheville, North Carolina, sits in the rain shadow of the Balsam, Smoky, and Blue Ridge Mountains. While the mountains surrounding Asheville contain the Appalachian temperate rainforests, with areas receiving an annual average precipitation of over , the city itself is the driest location in North Carolina, with an annual average precipitation of only . Ashcroft, British Columbia, the only true desert in Canada, sits in the rain shadow of the Coast Mountains of Canada. Yellowknife, the capital and most populous city of the Northwest Territories of Canada, is located in the rain shadow of the mountain ranges to the west of the city. Oceania Australia In New South Wales and the Australian Capital Territory, the Monaro is shielded by both the Snowy Mountains to the northwest and coastal ranges to the southeast. Consequently, parts of it are as dry as the wheat-growing lands of those states. For comparison, Cooma receives of rain annually, whereas Batlow, on the western side of the ranges, receives of precipitation. Furthermore, Australia's capital Canberra is also protected from the west by the Brindabellas, which create a strong rain shadow in Canberra's valleys, where it receives an annual rainfall of , compared to Adjungbilly's . In the cool season, the Great Dividing Range also shields much of the southeast coast (i.e. Sydney, the Central Coast, the Hunter Valley, Illawarra, the South Coast) from south-westerly polar blasts that originate from the Southern Ocean. In Queensland, the land west of the Atherton Tableland in the Tablelands Region lies in a rain shadow and therefore features significantly lower annual rainfall averages than the Cairns Region. 
For comparison, Tully, which is on the eastern side of the tablelands towards the coast, receives annual rainfall that exceeds , whereas Mareeba, which lies in the rain shadow of the Atherton Tableland, receives of rainfall annually. In Tasmania, the central Midlands region is in a strong rain shadow and receives only about a fifth as much rainfall as the highlands to the west. In Victoria, the western side of Port Phillip Bay is in the rain shadow of the Otway Ranges. The area between Geelong and Werribee is the driest part of southern Victoria: the crest of the Otway Ranges receives of rain per year and has myrtle beech rainforests much further west than anywhere else, whilst the area around Little River receives as little as annually, which is as little as Nhill or Longreach and supports only grassland. Also in Victoria, Omeo is shielded by the surrounding Victorian Alps; it receives around of annual rain, whereas other places nearby exceed . Western Australia's Wheatbelt and Great Southern regions are shielded by the Darling Range to the west: Mandurah, near the coast, receives about annually; Dwellingup, 40 km (25 miles) inland and in the heart of the ranges, receives over a year; while Narrogin, further east, receives less than a year. Pacific Islands Hawaii also has rain shadows, with some areas being desert. Orographic lifting produces the world's second-highest annual precipitation record, , on the island of Kauai; the leeward side is understandably rain-shadowed. The entire island of Kahoolawe lies in the rain shadow of Maui's East Maui Volcano. New Caledonia lies astride the Tropic of Capricorn, between 19° and 23° south latitude. The climate of the islands is tropical, and rainfall is brought by trade winds from the east. The western side of the Grande Terre lies in the rain shadow of the central mountains, and rainfall averages are significantly lower. 
The South Island of New Zealand contains one of the most remarkable rain shadows anywhere on Earth. The Southern Alps intercept moisture coming off the Tasman Sea, precipitating about 6,300 mm (250 in) to 8,900 mm (350 in) liquid water equivalent per year and creating large glaciers on the western side. To the east of the Southern Alps, scarcely 50 km (30 mi) from the snowy peaks, yearly rainfall drops to less than 760 mm (30 in), and in some areas to less than 380 mm (15 in) (see Nor'west arch for more on this subject). South America The Atacama Desert in Chile is the driest non-polar desert on Earth because it is blocked from moisture by the Andes Mountains to the east while the Humboldt Current causes persistent atmospheric stability. Cuyo and Eastern Patagonia are rain shadowed from the prevailing westerly winds by the Andes range and are arid. The aridity of the lands next to the eastern piedmont of the Andes decreases to the south as the height of the Andes decreases, with the consequence that the Patagonian Desert develops most fully at the Atlantic coast, contributing to the climatic pattern known as the Arid Diagonal. The Argentinian wine regions of Cuyo and Northern Patagonia are almost completely dependent on irrigation, using water drawn from the many rivers that drain glacial ice from the Andes. The Guajira Peninsula in northern Colombia is in the rain shadow of the Sierra Nevada de Santa Marta and despite its tropical latitude is almost arid, receiving almost no rainfall for seven to eight months of the year and being incapable of cultivation without irrigation.
Physical sciences
Precipitation
Earth science
426456
https://en.wikipedia.org/wiki/Ice%20core
Ice core
An ice core is a core sample that is typically removed from an ice sheet or a high mountain glacier. Since the ice forms from the incremental buildup of annual layers of snow, lower layers are older than upper ones, and an ice core contains ice formed over a range of years. Cores are drilled with hand augers (for shallow holes) or powered drills; they can reach depths of over two miles (3.2 km), and contain ice up to 800,000 years old. The physical properties of the ice and of material trapped in it can be used to reconstruct the climate over the age range of the core. The proportions of different oxygen and hydrogen isotopes provide information about ancient temperatures, and the air trapped in tiny bubbles can be analysed to determine the level of atmospheric gases such as carbon dioxide. Since heat flow in a large ice sheet is very slow, the borehole temperature is another indicator of temperature in the past. These data can be combined to find the climate model that best fits all the available data. Impurities in ice cores may depend on location. Coastal areas are more likely to include material of marine origin, such as sea salt ions. Greenland ice cores contain layers of wind-blown dust that correlate with cold, dry periods in the past, when cold deserts were scoured by wind. Radioactive elements, either of natural origin or created by nuclear testing, can be used to date the layers of ice. Some volcanic events that were sufficiently powerful to send material around the globe have left a signature in many different cores that can be used to synchronise their time scales. Ice cores have been studied since the early 20th century, and several cores were drilled as a result of the International Geophysical Year (1957–1958). Depths of over 400 m were reached, a record which was extended in the 1960s to 2164 m at Byrd Station in Antarctica. Soviet ice drilling projects in Antarctica include decades of work at Vostok Station, with the deepest core reaching 3769 m. 
Numerous other deep cores in the Antarctic have been completed over the years, including the West Antarctic Ice Sheet project, and cores managed by the British Antarctic Survey and the International Trans-Antarctic Scientific Expedition. In Greenland, a sequence of collaborative projects began in the 1970s with the Greenland Ice Sheet Project; there have been multiple follow-up projects, with the most recent, the East Greenland Ice-Core Project, originally expected to complete a deep core in east Greenland in 2020 but since postponed. Structure of ice sheets and cores An ice core is a vertical column through a glacier, sampling the layers that formed through an annual cycle of snowfall and melt. As snow accumulates, each layer presses on lower layers, making them denser until they turn into firn. Firn is not dense enough to prevent air from escaping, but at a density of about 830 kg/m³ it turns to ice, and the air within is sealed into bubbles that capture the composition of the atmosphere at the time the ice formed. The depth at which this occurs varies with location, but in Greenland and the Antarctic it ranges from 64 m to 115 m. Because the rate of snowfall varies from site to site, the age of the firn when it turns to ice varies a great deal. At Summit Camp in Greenland, the depth is 77 m and the ice is 230 years old; at Dome C in Antarctica the depth is 95 m and the age 2500 years. As further layers build up, the pressure increases, and at about 1500 m the crystal structure of the ice changes from hexagonal to cubic, allowing air molecules to move into the cubic crystals and form a clathrate. The bubbles disappear and the ice becomes more transparent. Two or three feet of snow may turn into less than a foot of ice. The weight above makes deeper layers of ice thin and flow outwards. Ice is lost at the edges of the glacier to icebergs, or to summer melting, and the overall shape of the glacier does not change much with time.
The outward flow can distort the layers, so it is desirable to drill deep ice cores at places where there is very little flow. These can be located using maps of the flow lines. Impurities in the ice provide information on the environment from when they were deposited. These include soot, ash, and other types of particle from forest fires and volcanoes; isotopes such as beryllium-10 created by cosmic rays; micrometeorites; and pollen. The lowest layer of a glacier, called basal ice, is frequently formed of subglacial meltwater that has refrozen. It can be up to about 20 m thick, and though it has scientific value (for example, it may contain subglacial microbial populations), it often does not retain stratigraphic information. Cores are often drilled in areas such as Antarctica and central Greenland where the temperature is almost never warm enough to cause melting, but the summer sunlight can still alter the snow. In polar areas, the Sun is visible day and night during the local summer and invisible all winter. It can make some snow sublimate, leaving the top inch or so less dense. When the Sun approaches its lowest point in the sky, the temperature drops and hoar frost forms on the top layer. Buried under the snow of following years, the coarse-grained hoar frost compresses into lighter layers than the winter snow. As a result, alternating bands of lighter and darker ice can be seen in an ice core. Coring Ice cores are collected by cutting around a cylinder of ice in a way that enables it to be brought to the surface. Early cores were often collected with hand augers and they are still used for short holes. A design for ice core augers was patented in 1932 and they have changed little since. An auger is essentially a cylinder with helical metal ribs (known as flights) wrapped around the outside, at the lower end of which are cutting blades. 
Hand augers can be rotated by a T handle or a brace handle, and some can be attached to handheld electric drills to power the rotation. With the aid of a tripod for lowering and raising the auger, cores up to 50 m deep can be retrieved, but the practical limit is about 30 m for engine-powered augers, and less for hand augers. Below this depth, electromechanical or thermal drills are used. The cutting apparatus of a drill is on the bottom end of a drill barrel, the tube that surrounds the core as the drill cuts downward. The cuttings (chips of ice cut away by the drill) must be drawn up the hole and disposed of or they will reduce the cutting efficiency of the drill. They can be removed by compacting them into the walls of the hole or into the core, by air circulation (dry drilling), or by the use of a drilling fluid (wet drilling). Dry drilling is limited to about 400 m depth, since below that point a hole would close up as the ice deforms from the weight of the ice above. Drilling fluids are chosen to balance the pressure so that the hole remains stable. The fluid must have a low kinematic viscosity to reduce tripping time (the time taken to pull the drilling equipment out of the hole and return it to the bottom of the hole). Since retrieval of each segment of core requires tripping, a slower speed of travel through the drilling fluid could add significant time to a project—a year or more for a deep hole. The fluid must contaminate the ice as little as possible; it must have low toxicity, for safety and to minimize the effect on the environment; it must be available at a reasonable cost; and it must be relatively easy to transport. Historically, there have been three main types of ice drilling fluids: two-component fluids based on kerosene-like products mixed with fluorocarbons to increase density; alcohol compounds, including aqueous ethylene glycol and ethanol solutions; and esters, including n-butyl acetate. 
Newer fluids have been proposed, including new ester-based fluids, low-molecular weight dimethyl siloxane oils, fatty-acid esters, and kerosene-based fluids mixed with foam-expansion agents. Rotary drilling is the main method of drilling for minerals and it has also been used for ice drilling. It uses a string of drill pipe rotated from the top, and drilling fluid is pumped down through the pipe and back up around it. The cuttings are removed from the fluid at the top of the hole and the fluid is then pumped back down. This approach requires long trip times, since the entire drill string must be hoisted out of the hole, and each length of pipe must be separately disconnected, and then reconnected when the drill string is reinserted. Along with the logistical difficulties associated with bringing heavy equipment to ice sheets, this makes traditional rotary drills unattractive. In contrast, wireline drills allow the removal of the core barrel from the drill assembly while it is still at the bottom of the borehole. The core barrel is hoisted to the surface, and the core removed; the barrel is lowered again and reconnected to the drill assembly. Another alternative is flexible drill-stem rigs, in which the drill string is flexible enough to be coiled when at the surface. This eliminates the need to disconnect and reconnect the pipes during a trip. The need for a string of drillpipe that extends from the surface to the bottom of the borehole can be eliminated by suspending the entire downhole assembly on an armoured cable that conveys power to the downhole motor. These cable-suspended drills can be used for both shallow and deep holes; they require an anti-torque device, such as leaf-springs that press against the borehole, to prevent the drill assembly rotating around the drillhead as it cuts the core. 
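The trip-time penalty described above is easy to quantify with a back-of-envelope sum (a hypothetical sketch; the run length and winch speed below are illustrative numbers, not figures from the text):

```python
def total_trip_hours(final_depth_m, run_length_m, trip_speed_m_per_hr):
    """Rough cumulative tripping time for a hole drilled in fixed-length runs.

    Each run of core requires one round trip: travelling down to the
    current bottom of the hole and hoisting the full barrel back up.
    """
    hours = 0.0
    depth = 0.0
    while depth < final_depth_m:
        depth = min(depth + run_length_m, final_depth_m)
        hours += 2 * depth / trip_speed_m_per_hr  # down and back up
    return hours
```

With illustrative values (a 3,000 m hole drilled in 3 m runs at 30 m/h of travel), the round trips alone sum to roughly 100,000 hours, which is why fluid viscosity and trip speed dominate deep-drilling schedules.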
The drilling fluid is usually circulated down around the outside of the drill and back up between the core and core barrel; the cuttings are stored in the downhole assembly, in a chamber above the core. When the core is retrieved, the cuttings chamber is emptied for the next run. Some drills have been designed to retrieve a second annular core outside the central core, and in these drills the space between the two cores can be used for circulation. Cable-suspended drills have proved to be the most reliable design for deep ice drilling. Thermal drills, which cut ice by electrically heating the drill head, can also be used, but they have some disadvantages. Some have been designed for working in cold ice; they have high power consumption and the heat they produce can degrade the quality of the retrieved ice core. Early thermal drills, designed for use without drilling fluid, were limited in depth as a result; later versions were modified to work in fluid-filled holes but this slowed down trip times, and these drills retained the problems of the earlier models. In addition, thermal drills are typically bulky and can be impractical to use in areas where there are logistical difficulties. More recent modifications include the use of antifreeze, which eliminates the need for heating the drill assembly and hence reduces the power needs of the drill. Hot-water drills use jets of hot water at the drill head to melt the ice around the core. The drawbacks are that it is difficult to accurately control the dimensions of the borehole, the core cannot easily be kept sterile, and the heat may cause thermal shock to the core. When drilling in temperate ice, thermal drills have an advantage over electromechanical (EM) drills: ice melted by pressure can refreeze on EM drill bits, reducing cutting efficiency, and can clog other parts of the mechanism. EM drills are also more likely to fracture ice cores where the ice is under high stress.
When drilling deep holes, which require drilling fluid, the hole must be cased (fitted with a cylindrical lining), since otherwise the drilling fluid will be absorbed by the snow and firn. The casing has to reach down to the impermeable ice layers. To install casing, a shallow auger can be used to create a pilot hole, which is then reamed (expanded) until it is wide enough to accept the casing; a large-diameter auger can also be used, avoiding the need for reaming. An alternative to casing is to use water in the borehole to saturate the porous snow and firn; the water eventually turns to ice. Ice cores from different depths are not all equally in demand by scientific investigators, which can lead to a shortage of ice cores at certain depths. To address this, work has been done on technology to drill replicate cores: additional cores, retrieved by drilling into the sidewall of the borehole, at depths of particular interest. Replicate cores were successfully retrieved at WAIS Divide in the 2012–2013 drilling season, at four different depths. Large coring projects The logistics of any coring project are complex because the locations are usually difficult to reach, and may be at high altitude. The largest projects require years of planning and years to execute, and are usually run as international consortiums. The EastGRIP project, for example, which as of 2017 is drilling in eastern Greenland, is run by the Centre for Ice and Climate (Niels Bohr Institute, University of Copenhagen) in Denmark, and includes representatives from 12 countries on its steering committee. Over the course of a drilling season, scores of people work at the camp, and logistics support includes airlift capabilities provided by the US Air National Guard, using Hercules transport planes owned by the National Science Foundation. In 2015 the EastGRIP team moved the camp facilities from NEEM, a previous Greenland ice core drilling site, to the EastGRIP site.
Drilling is expected to continue until at least 2020. Core processing With some variation between projects, the following steps must occur between drilling and final storage of the ice core. The drill removes an annulus of ice around the core but does not cut under it. A spring-loaded lever arm called a core dog can break off the core and hold it in place while it is brought to the surface. The core is then extracted from the drill barrel, usually by laying it out flat so that the core can slide out onto a prepared surface. The core must be cleaned of drilling fluid as it is slid out; for the WAIS Divide coring project, a vacuuming system was set up to facilitate this. The surface that receives the core should be aligned as accurately as possible with the drill barrel to minimise mechanical stress on the core, which can easily break. The ambient temperature is kept well below freezing to avoid thermal shock. A log is kept with information about the core, including its length and the depth it was retrieved from, and the core may be marked to show its orientation. It is usually cut into shorter sections, the standard length in the US being one metre. The cores are then stored on site, usually in a space below snow level to simplify temperature maintenance, though additional refrigeration can be used. If more drilling fluid must be removed, air may be blown over the cores. Any samples needed for preliminary analysis are taken. The core is then bagged, often in polythene, and stored for shipment. Additional packing, including padding material, is added. When the cores are flown from the drilling site, the aircraft's flight deck is unheated to help maintain a low temperature; when they are transported by ship they must be kept in a refrigeration unit. There are several locations around the world that store ice cores, such as the National Ice Core Laboratory in the US. These locations make samples available for testing. 
A substantial fraction of each core is archived for future analyses. Brittle ice Over a depth range known as the brittle ice zone, bubbles of air are trapped in the ice under great pressure. When the core is brought to the surface, the bubbles can exert a stress that exceeds the tensile strength of the ice, resulting in cracks and spall. At greater depths, the air disappears into clathrates and the ice becomes stable again. At the WAIS Divide site, the brittle ice zone was from 520 m to 1340 m depth. The brittle ice zone typically returns poorer-quality samples than the rest of the core. Some steps can be taken to alleviate the problem. Liners can be placed inside the drill barrel to enclose the core before it is brought to the surface, but this makes it difficult to clean off the drilling fluid. In mineral drilling, special machinery can bring core samples to the surface at bottom-hole pressure, but this is too expensive for the inaccessible locations of most drilling sites. Keeping the processing facilities at very low temperatures limits thermal shocks. Cores are most brittle at the surface, so another approach is to break them into 1 m lengths in the hole. Extruding the core from the drill barrel into a net helps keep it together if it shatters. Brittle cores are also often allowed to rest in storage at the drill site for some time, up to a full year between drilling seasons, to let the ice gradually relax. Ice core data Dating Many different kinds of analysis are performed on ice cores, including visual layer counting, tests for electrical conductivity and physical properties, and assays for inclusion of gases, particles, radionuclides, and various molecular species. For the results of these tests to be useful in the reconstruction of palaeoenvironments, there has to be a way to determine the relationship between depth and age of the ice.
The simplest approach is to count layers of ice that correspond to the original annual layers of snow, but this is not always possible. An alternative is to model the ice accumulation and flow to predict how long it takes a given snowfall to reach a particular depth. Another method is to correlate radionuclides or trace atmospheric gases with other timescales such as periodicities in the Earth's orbital parameters. A difficulty in ice core dating is that gases can diffuse through firn, so the ice at a given depth may be substantially older than the gases trapped in it. As a result, there are two chronologies for a given ice core: one for the ice, and one for the trapped gases. To determine the relationship between the two, models have been developed for the depth at which gases are trapped for a given location, but their predictions have not always proved reliable. At locations with very low snowfall, such as Vostok, the uncertainty in the difference between ages of ice and gas can be over 1,000 years. The density and size of the bubbles trapped in ice provide an indication of crystal size at the time they formed. The size of a crystal is related to its growth rate, which in turn depends on the temperature, so the properties of the bubbles can be combined with information on accumulation rates and firn density to calculate the temperature when the firn formed. Radiocarbon dating can be used on the carbon in trapped CO₂. In the polar ice sheets there is about 15–20 μg of carbon in the form of CO₂ in each kilogram of ice, and there may also be carbonate particles from wind-blown dust (loess). The CO₂ can be isolated by subliming the ice in a vacuum, keeping the temperature low enough to avoid the loess giving up any carbon. The results have to be corrected for the presence of ¹⁴C produced directly in the ice by cosmic rays, and the amount of correction depends strongly on the location of the ice core. Corrections for ¹⁴C produced by nuclear testing have much less impact on the results.
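The accumulation-and-flow modelling mentioned above can be illustrated with the classic Nye approximation, in which annual layers thin linearly toward the bed (a textbook sketch under strong steady-state assumptions, not one of the full models used to date real cores):

```python
import math

def nye_age_years(depth_m, ice_thickness_m, accumulation_m_per_yr):
    """Age of the ice at a given depth under the Nye steady-state model.

    With ice thickness H and accumulation a (in metres of ice per year),
    the annual layer thickness at depth z is a * (H - z) / H; integrating
    1 / thickness from the surface gives age(z) = (H / a) * ln(H / (H - z)).
    """
    H, a, z = ice_thickness_m, accumulation_m_per_yr, depth_m
    if not 0 <= z < H:
        raise ValueError("depth must lie within the ice column")
    return (H / a) * math.log(H / (H - z))
```

The predicted age diverges near the bed, which is one reason deep chronologies also lean on reference horizons and gas correlation rather than flow modelling alone.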
Carbon in particulates can also be dated by separating and testing the water-insoluble organic components of dust. The very small quantities typically found require at least 300 g of ice to be used, limiting the ability of the technique to precisely assign an age to core depths. Timescales for ice cores from the same hemisphere can usually be synchronised using layers that include material from volcanic events. It is more difficult to connect the timescales in different hemispheres. The Laschamp event, a geomagnetic reversal about 40,000 years ago, can be identified in cores; away from that point, measurements of gases such as CH₄ (methane) can be used to connect the chronology of a Greenland core (for example) with an Antarctic core. In cases where volcanic tephra is interspersed with ice, it can be dated using argon/argon dating and hence provide fixed points for dating the ice. Uranium decay has also been used to date ice cores. Another approach is to use Bayesian probability techniques to find the optimal combination of multiple independent records. This approach was developed in 2010 and has since been turned into a software tool, DatIce. The boundary between the Pleistocene and the Holocene, about 11,700 years ago, is now formally defined with reference to data on Greenland ice cores. Formal definitions of stratigraphic boundaries allow scientists in different locations to correlate their findings. These often involve fossil records, which are not present in ice cores, but cores have extremely precise palaeoclimatic information that can be correlated with other climate proxies. The dating of ice sheets has proved to be a key element in providing dates for palaeoclimatic records. According to Richard Alley, "In many ways, ice cores are the 'rosetta stones' that allow development of a global network of accurately dated paleoclimatic records using the best ages determined anywhere on the planet".
Visual analysis Cores show visible layers, which correspond to annual snowfall at the core site. If a pair of pits is dug in fresh snow with a thin wall between them and one of the pits is roofed over, an observer in the roofed pit will see the layers revealed by sunlight shining through. A six-foot pit may show anything from less than a year of snow to several years of snow, depending on the location. Poles left in the snow from year to year show the amount of accumulated snow each year, and this can be used to verify that the visible layer in a snow pit corresponds to a single year's snowfall. In central Greenland a typical year might produce two or three feet of winter snow, plus a few inches of summer snow. When this turns to ice, the two layers will make up no more than a foot of ice. The layers corresponding to the summer snow will contain bigger bubbles than the winter layers, so the alternating layers remain visible, which makes it possible to count down a core and determine the age of each layer. As the depth increases to the point where the ice structure changes to a clathrate, the bubbles are no longer visible, and the layers can no longer be seen. Dust layers may now become visible. Ice from Greenland cores contains dust carried by wind; the dust appears most strongly in late winter, and appears as cloudy grey layers. These layers are stronger and easier to see at times in the past when the Earth's climate was cold, dry, and windy. Any method of counting layers eventually runs into difficulties as the flow of the ice causes the layers to become thinner and harder to see with increasing depth. The problem is more acute at locations where accumulation is high; low accumulation sites, such as central Antarctica, must be dated by other methods. For example, at Vostok, layer counting is only possible down to an age of 55,000 years. 
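Layer counting as described above amounts to accumulating measured annual layer thicknesses from the surface until the depth of interest is reached; a minimal sketch (the thickness values in the example are invented):

```python
def age_at_depth(annual_layer_thicknesses_m, depth_m):
    """Years before present at a given depth, by counting annual layers.

    annual_layer_thicknesses_m lists one thickness per year, ordered from
    the surface downward; the returned count is the number of annual
    layers needed to reach depth_m.
    """
    cumulative = 0.0
    for years, thickness in enumerate(annual_layer_thicknesses_m, start=1):
        cumulative += thickness
        if cumulative >= depth_m:
            return years
    raise ValueError("depth lies below the deepest counted layer")

# Layers thin with depth, so equal depth steps span more years lower down:
# age_at_depth([0.5, 0.25, 0.125], 0.7) -> 2
```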
When there is summer melting, the melted snow refreezes lower in the snow and firn, and the resulting layer of ice has very few bubbles, so it is easy to recognise in a visual examination of a core. Identification of these layers, both visually and by measuring the density of the core against depth, allows the calculation of a melt-feature percentage (MF): an MF of 100% would mean that every year's deposit of snow showed evidence of melting. MF calculations are averaged over multiple sites or long time periods in order to smooth the data. Plots of MF data over time reveal variations in the climate, and have shown that since the late 20th century melting rates have been increasing. In addition to manual inspection and logging of features identified in a visual inspection, cores can be optically scanned so that a digital visual record is available. This requires the core to be cut lengthwise, so that a flat surface is created. Isotopic analysis The isotopic composition of the oxygen in a core can be used to model the temperature history of the ice sheet. Oxygen has three stable isotopes: ¹⁶O, ¹⁷O, and ¹⁸O. The ratio between ¹⁸O and ¹⁶O indicates the temperature when the snow fell. Because ¹⁶O is lighter than ¹⁸O, water containing ¹⁶O is slightly more likely to turn into vapour, and water containing ¹⁸O is slightly more likely to condense from vapour into rain or snow crystals. At lower temperatures, the difference is more pronounced. The standard method of recording the ¹⁸O/¹⁶O ratio is to express it relative to the ratio in a standard known as standard mean ocean water (SMOW): δ¹⁸O = ((¹⁸O/¹⁶O)_sample / (¹⁸O/¹⁶O)_SMOW − 1) × 1000‰, where the ‰ sign indicates parts per thousand. A sample with the same ¹⁸O/¹⁶O ratio as SMOW has a δ¹⁸O of 0‰; a sample that is depleted in ¹⁸O has a negative δ¹⁸O. Combining the δ¹⁸O measurements of an ice core sample with the borehole temperature at the depth it came from provides additional information, in some cases leading to significant corrections to the temperatures deduced from the δ¹⁸O data. Not all boreholes can be used in these analyses.
If the site has experienced significant melting in the past, the borehole will no longer preserve an accurate temperature record. Hydrogen ratios can also be used to calculate a temperature history. Deuterium (²H, or D) is heavier than hydrogen (¹H) and makes water more likely to condense and less likely to evaporate. A δD value can be defined in the same way as δ¹⁸O. There is a linear relationship between δD and δ¹⁸O: δD = 8 × δ¹⁸O + d, where d is the deuterium excess. It was once thought that this meant it was unnecessary to measure both ratios in a given core, but in 1979 Merlivat and Jouzel showed that the deuterium excess reflects the temperature, relative humidity, and wind speed of the ocean where the moisture originated. Since then it has been customary to measure both. Water isotope records, analyzed in cores from Camp Century and Dye 3 in Greenland, were instrumental in the discovery of Dansgaard-Oeschger events—rapid warming at the onset of an interglacial, followed by slower cooling. Other isotopic ratios have been studied: for example, the ratio between ¹³C and ¹²C can provide information about past changes in the carbon cycle. Combining this information with records of carbon dioxide levels, also obtained from ice cores, provides information about the mechanisms behind changes in CO₂ over time. Palaeoatmospheric sampling It was understood in the 1960s that analyzing the air trapped in ice cores would provide useful information on the paleoatmosphere, but it was not until the late 1970s that a reliable extraction method was developed. Early results included a demonstration that the CO₂ concentration was 30% less at the last glacial maximum than just before the start of the industrial age. Further research has demonstrated a reliable correlation between CO₂ levels and the temperature calculated from ice isotope data.
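The δ notation and the deuterium excess described above are simple arithmetic; a sketch (the SMOW ¹⁸O/¹⁶O ratio below is the commonly quoted value, and the sample inputs are invented):

```python
R18_SMOW = 0.0020052  # commonly quoted 18O/16O ratio of standard mean ocean water

def delta_permil(r_sample, r_standard):
    """Delta value in parts per thousand: (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

def deuterium_excess(delta_d, delta_18o):
    """d-excess from the meteoric relationship delta-D = 8 * delta-18O + d."""
    return delta_d - 8.0 * delta_18o

# A sample depleted in 18O relative to SMOW gives a negative delta-18O,
# as expected for polar snow:
d18o = delta_permil(0.95 * R18_SMOW, R18_SMOW)  # about -50 per mil
```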
Because CH₄ (methane) is produced in lakes and wetlands, the amount in the atmosphere is correlated with the strength of monsoons, which are in turn correlated with the strength of low-latitude summer insolation. Since insolation depends on orbital cycles, for which a timescale is available from other sources, CH₄ can be used to determine the relationship between core depth and age. N₂O (nitrous oxide) levels are also correlated with glacial cycles, though at low temperatures the graph differs somewhat from the CO₂ and CH₄ graphs. Similarly, the ratio between N₂ (nitrogen) and O₂ (oxygen) can be used to date ice cores: as air is gradually trapped by the snow turning to firn and then ice, O₂ is lost more easily than N₂, and the relative amount of O₂ correlates with the strength of local summer insolation. This means that the trapped air retains, in the ratio of O₂ to N₂, a record of the summer insolation, and hence combining these data with orbital cycle data establishes an ice core dating scheme. Diffusion within the firn layer causes other changes that can be measured. Gravity causes heavier molecules to be enriched at the bottom of a gas column, with the amount of enrichment depending on the difference in mass between the molecules. Colder temperatures cause heavier molecules to be more enriched at the bottom of a column. These fractionation processes in trapped air, determined by the measurement of the ¹⁵N/¹⁴N ratio and of neon, krypton and xenon, have been used to infer the thickness of the firn layer, and to determine other palaeoclimatic information such as past mean ocean temperatures. Some gases, such as helium, can rapidly diffuse through ice, so it may be necessary to test for these "fugitive gases" within minutes of the core being retrieved to obtain accurate data. Chlorofluorocarbons (CFCs), which contribute to the greenhouse effect and also cause ozone loss in the stratosphere, can be detected in ice cores after about 1950; almost all CFCs in the atmosphere were created by human activity.
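The gravitational settling described above follows a barometric relationship; a minimal sketch of the usual first-order approximation (the 80 m firn depth and 230 K temperature are illustrative values, not figures from the text):

```python
R_GAS = 8.314  # universal gas constant, J/(mol K)

def grav_enrichment_permil(mass_diff_kg_per_mol, firn_depth_m, temp_k, g=9.81):
    """First-order gravitational enrichment of the heavier species, in per mil.

    delta ~= (dm * g * z) / (R * T) * 1000, where dm is the molar mass
    difference between the heavy and light species; enrichment grows with
    column depth and is larger at colder temperatures.
    """
    return mass_diff_kg_per_mol * g * firn_depth_m / (R_GAS * temp_k) * 1000.0

# 15N/14N (mass difference 1 g/mol) across an 80 m firn column at 230 K
# comes out to roughly 0.4 per mil at the bottom of the column.
```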
Greenland cores, during times of climatic transition, may show excess CO₂ in air bubbles when analysed, due to CO₂ production by acidic and alkaline impurities. Glaciochemistry Summer snow in Greenland contains some sea salt, blown from the surrounding waters; there is less of it in winter, when much of the sea surface is covered by pack ice. Similarly, hydrogen peroxide appears only in summer snow because its production in the atmosphere requires sunlight. These seasonal changes can be detected because they lead to changes in the electrical conductivity of the ice. Placing two electrodes with a high voltage between them on the surface of the ice core gives a measurement of the conductivity at that point. Dragging them down the length of the core, and recording the conductivity at each point, gives a graph that shows an annual periodicity. Such graphs also identify chemical changes caused by non-seasonal events such as forest fires and major volcanic eruptions. When a known volcanic event, such as the eruption of Laki in Iceland in 1783, can be identified in the ice core record, it provides a cross-check on the age determined by layer counting. Material from Laki can be identified in Greenland ice cores, but did not spread as far as Antarctica; the 1815 eruption of Tambora in Indonesia injected material into the stratosphere, and can be identified in both Greenland and Antarctic ice cores. If the date of the eruption is not known, but it can be identified in multiple cores, then dating the ice can in turn give a date for the eruption, which can then be used as a reference layer. This was done, for example, in an analysis of the climate for the period from 535 to 550 AD, which was thought to be influenced by an otherwise unknown tropical eruption in about 533 AD, but which turned out to be caused by two eruptions, one in 535 or early 536 AD, and a second one in 539 or 540 AD. There are also more ancient reference points, such as the eruption of Toba about 72,000 years ago.
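Counting the annual periodicity in a conductivity record is, at its simplest, counting local maxima down the core; a toy sketch (real records are noisy and need smoothing and manual cross-checks):

```python
def count_annual_peaks(conductivity):
    """Count strict local maxima in a conductivity-versus-depth profile.

    Each summer conductivity peak is taken to mark one year; the values
    are assumed to be evenly sampled down the core.
    """
    return sum(
        1
        for i in range(1, len(conductivity) - 1)
        if conductivity[i - 1] < conductivity[i] > conductivity[i + 1]
    )
```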
Many other elements and molecules have been detected in ice cores. In 1969, it was discovered that lead levels in Greenland ice had increased by a factor of over 200 since pre-industrial times, and increases in other elements produced by industrial processes, such as copper, cadmium, and zinc, have also been recorded. The presence of nitric and sulfuric acid (HNO3 and H2SO4) in precipitation can be shown to correlate with increasing fuel combustion over time. Methanesulfonate (MSA) (CH3SO3-) is produced in the atmosphere by marine organisms, so ice core records of MSA provide information on the history of the oceanic environment. Both hydrogen peroxide (H2O2) and formaldehyde (HCHO) have been studied, along with organic molecules such as carbon black that are linked to vegetation emissions and forest fires. Some species, such as calcium and ammonium, show strong seasonal variation. In some cases there are contributions from more than one source to a given species: for example, Ca++ comes from dust as well as from marine sources; the marine input is much greater than the dust input and so although the two sources peak at different times of the year, the overall signal shows a peak in the winter, when the marine input is at a maximum. Seasonal signals can be erased at sites where the accumulation is low, by surface winds; in these cases it is not possible to date individual layers of ice between two reference layers. Some of the deposited chemical species may interact with the ice, so what is detected in an ice core is not necessarily what was originally deposited. Examples include HCHO and H2O2. Another complication is that in areas with low accumulation rates, deposition from fog can increase the concentration in the snow, sometimes to the point where the atmospheric concentration could be overestimated by a factor of two.

Radionuclides

Galactic cosmic rays produce 10Be in the atmosphere at a rate that depends on the solar magnetic field.
The strength of the field is related to the intensity of solar radiation, so the level of 10Be in the atmosphere is a proxy for climate. Accelerator mass spectrometry can detect the low levels of 10Be in ice cores, about 10,000 atoms in a gram of ice, and these can be used to provide long-term records of solar activity. Tritium (3H), created by nuclear weapons testing in the 1950s and 1960s, has been identified in ice cores, and both 36Cl and 239Pu have been found in ice cores in Antarctica and Greenland. Chlorine-36, which has a half-life of 301,000 years, has been used to date cores, as have krypton (85Kr, with a half-life of 11 years), lead (210Pb, 22 years), and silicon (32Si, 172 years).

Other inclusions

Meteorites and micrometeorites that land on polar ice are sometimes concentrated by local environmental processes. For example, there are places in Antarctica where winds evaporate surface ice, concentrating the solids that are left behind, including meteorites. Meltwater ponds can also contain meteorites. At the South Pole Station, ice in a well is melted to provide a water supply, leaving micrometeorites behind. These have been collected by a robotic "vacuum cleaner" and examined, leading to improved estimates of their flux and mass distribution. The well is not an ice core, but the age of the ice that was melted is known, so the age of the recovered particles can be determined. The well becomes about 10 m deeper each year, so micrometeorites collected in a given year are about 100 years older than those from the previous year. Pollen, an important component of sediment cores, can also be found in ice cores. It provides information on changes in vegetation.

Physical properties

In addition to the impurities in a core and the isotopic composition of the water, the physical properties of the ice are examined. Features such as crystal size and axis orientation can reveal the history of ice flow patterns in the ice sheet.
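The half-life figures above are what make radionuclide dating possible: if the fraction of the original isotope remaining in a sample can be measured, the elapsed time follows directly from the decay law N/N0 = 2^(−t/T½). A minimal sketch of that arithmetic, with an idealized measured fraction:

```python
import math

def decay_age(fraction_remaining, half_life_years):
    """Elapsed time from radioactive decay.

    Inverts N/N0 = 2**(-t / T_half) to give t. Estimating the
    initial concentration N0 is the hard part in practice and is
    simply assumed here.
    """
    return -half_life_years * math.log2(fraction_remaining)

# A sample retaining 25% of its initial 36Cl (half-life
# ~301,000 years) has sat through two half-lives:
print(round(decay_age(0.25, 301_000)))  # 602000 years
```

The same formula applies to the shorter-lived isotopes mentioned (85Kr, 210Pb, 32Si), which are useful over correspondingly shorter timescales.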
The crystal size can also be used to determine dates, though only in shallow cores.

History

Early years

In 1841 and 1842, Louis Agassiz drilled holes in the Unteraargletscher in the Alps; these were drilled with iron rods and did not produce cores. The deepest hole achieved was 60 m. On Erich von Drygalski's Antarctic expedition in 1902 and 1903, 30 m holes were drilled in an iceberg south of the Kerguelen Islands and temperature readings were taken. The first scientist to create a snow sampling tool was James E. Church, described by Pavel Talalay as "the father of modern snow surveying". In the winter of 1908–1909, Church constructed steel tubes with slots and cutting heads to retrieve cores of snow up to 3 m long. Similar devices are in use today, modified to allow sampling to a depth of about 9 m. They are simply pushed into the snow and rotated by hand. The first systematic study of snow and firn layers was by Ernst Sorge, who was part of the Alfred Wegener Expedition to central Greenland in 1930–1931. Sorge dug a 15 m pit to examine the snow layers, and his results were later formalized into Sorge's Law of Densification by Henri Bader, who went on to do additional coring work in northwest Greenland in 1933. In the early 1950s, an expedition of SIPRE (the US Snow, Ice and Permafrost Research Establishment) took pit samples over much of the Greenland ice sheet, obtaining early oxygen isotope ratio data. Three other expeditions in the 1950s began ice coring work: a joint Norwegian-British-Swedish Antarctic Expedition (NBSAE), in Queen Maud Land in Antarctica; the Juneau Ice Field Research Project (JIRP), in Alaska; and Expéditions Polaires Françaises, in central Greenland. Core quality was poor, but some scientific work was done on the retrieved ice. The International Geophysical Year (1957–1958) saw increased glaciology research around the world, with one of the high priority research targets being deep cores in polar regions.
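Sorge's Law of Densification, mentioned above, states that in a steady-state dry-snow zone the firn density depends on depth alone. A commonly used analytic expression of the law is an exponential approach from the surface snow density to the density of solid ice; the surface density and rate constant below are illustrative values, not fitted to any particular core.

```python
import math

RHO_ICE = 917.0  # kg/m^3, density of solid glacier ice

def firn_density(z, rho_surface=350.0, c=0.03):
    """Steady-state firn density at depth z (metres).

    Exponential form of Sorge's law:
        rho(z) = rho_ice - (rho_ice - rho_surface) * exp(-c * z)
    rho_surface and c are site-dependent; the defaults here are
    illustrative only.
    """
    return RHO_ICE - (RHO_ICE - rho_surface) * math.exp(-c * z)

# Density rises monotonically toward that of solid ice:
for z in (0, 20, 50, 100):
    print(f"{z:3d} m: {firn_density(z):6.1f} kg/m^3")
```

In real cores the transition from firn to ice (pore close-off) occurs where the density reaches roughly 830 kg/m^3, which this kind of profile can be used to locate.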
SIPRE conducted pilot drilling trials in 1956 (to 305 m) and 1957 (to 411 m) at Site 2 in Greenland; the second core, with the benefit of the previous year's drilling experience, was retrieved in much better condition, with fewer gaps. In Antarctica, a 307 m core was drilled at Byrd Station in 1957–1958, and a 264 m core at Little America V, on the Ross Ice Shelf, the following year. The success of the IGY core drilling led to increased interest in improving ice coring capabilities, and was followed by a project of CRREL (the Cold Regions Research and Engineering Laboratory) at Camp Century, where in the early 1960s three holes were drilled, the deepest reaching the base of the ice sheet at 1387 m in July 1966. The drill used at Camp Century then went to Byrd Station, where a 2164 m hole was drilled to bedrock before the drill was frozen into the borehole by sub-ice meltwater and had to be abandoned. French, Australian and Canadian projects from the 1960s and 1970s include a 905 m core at Dome C in Antarctica, drilled by CNRS; cores at Law Dome drilled by ANARE, starting in 1969 with a 382 m core; and Devon Ice Cap cores recovered by a Canadian team in the 1970s.

Antarctica deep cores

Soviet ice drilling projects began in the 1950s, in Franz Josef Land, the Urals, Novaya Zemlya, and at Mirny and Vostok in the Antarctic; not all these early holes retrieved cores. Over the following decades work continued at multiple locations in Asia. Drilling in the Antarctic focused mostly on Mirny and Vostok, with a series of deep holes at Vostok begun in 1970. The first deep hole at Vostok reached 506.9 m in April 1970; by 1973 a depth of 952 m had been reached. A subsequent hole, Vostok 2, drilled from 1971 to 1976, reached 450 m, and Vostok 3 reached 2202 m in 1985 after six drilling seasons. Vostok 3 was the first core to retrieve ice from the previous glacial period, 150,000 years ago. Drilling was interrupted by a fire at the camp in 1982, but further drilling began in 1984, eventually reaching 2546 m in 1989.
A fifth Vostok core was begun in 1990, reached 3661 m in 2007, and was later extended to 3769 m. The estimated age of the ice is 420,000 years at 3310 m depth; below that point it is difficult to interpret the data reliably because of mixing of the ice. EPICA, a European ice coring collaboration, was formed in the 1990s, and two holes were drilled in East Antarctica: one at Dome C, which reached 2871 m in only two seasons of drilling, but which took another four years to reach bedrock at 3260 m; and one at Kohnen Station, which reached bedrock at 2760 m in 2006. The Dome C site had very low accumulation rates, which meant that the climate record extended a long way back; by the end of the project the usable data extended to 800,000 years ago. Other deep Antarctic cores included a Japanese project at Dome F, which reached 2503 m in 1996, with an estimated age of 330,000 years for the bottom of the core; and a subsequent hole at the same site which reached 3035 m in 2006, estimated to reach ice 720,000 years old. US teams drilled at McMurdo Station in the 1990s, and at Taylor Dome (554 m in 1994) and Siple Dome (1004 m in 1999), with both cores reaching ice from the last glacial period. The West Antarctic Ice Sheet (WAIS) project, completed in 2011, reached 3405 m; the site has high snow accumulation so the ice only extends back 62,000 years, but as a consequence, the core provides high resolution data for the period it covers. A 948 m core was drilled at Berkner Island by a project managed by the British Antarctic Survey from 2002 to 2005, extending into the last glacial period; and an Italian-managed ITASE project completed a 1620 m core at Talos Dome in 2007. In 2016, cores were retrieved from the Allan Hills in Antarctica in an area where old ice lay near the surface. The cores were dated by potassium-argon dating; traditional ice core dating is not possible as not all layers were present.
The oldest core was found to include ice from 2.7 million years ago—by far the oldest ice yet dated from a core.

Greenland deep cores

In 1970, scientific discussions began which resulted in the Greenland Ice Sheet Project (GISP), a multinational investigation into the Greenland ice sheet that lasted until 1981. Years of field work were required to determine the ideal location for a deep core; the field work included several intermediate-depth cores, at Dye 3 (372 m in 1971), Milcent (398 m in 1973) and Crête (405 m in 1974), among others. A location in north-central Greenland was selected as ideal, but financial constraints forced the group to drill at Dye 3 instead, beginning in 1979. The hole reached bedrock at 2037 m, in 1981. Two holes, 30 km apart, were eventually drilled at the north-central location in the early 1990s by two groups: GRIP, a European consortium, and GISP-2, a group of US universities. GRIP reached bedrock at 3029 m in 1992, and GISP-2 reached bedrock at 3053 m the following year. Both cores were limited to about 100,000 years of climatic information, and since this was thought to be connected to the topography of the rock underlying the ice sheet at the drill sites, a new site was selected 200 km north of GRIP, and a new project, NorthGRIP, was launched as an international consortium led by Denmark. Drilling began in 1996; the first hole had to be abandoned at 1400 m in 1997, and a new hole was begun in 1999, reaching 3085 m in 2003. The hole did not reach bedrock, but terminated at a subglacial river. The core provided climatic data back to 123,000 years ago, which covered part of the last interglacial period. The subsequent North Greenland Eemian (NEEM) project retrieved a 2537 m core in 2010 from a site further north, extending the climatic record to 128,500 years ago; NEEM was followed by EastGRIP, which began in 2015 in east Greenland and was planned to be completed in 2020.
In March 2020, the 2020 EGRIP field campaign was cancelled due to the ongoing COVID-19 pandemic. EastGRIP reopened for field work in 2022, where the CryoEgg reached new depths in the ice, under pressures in excess of 200 bar and temperatures of around −30 °C.

Non-polar cores

Ice cores have been drilled at locations away from the poles, notably in the Himalayas and the Andes. Some of these cores reach back to the last glacial period, but they are more important as records of El Niño events and of monsoon seasons in south Asia. Cores have also been drilled on Mount Kilimanjaro, in the Alps, and in Indonesia, New Zealand, Iceland, Scandinavia, Canada, and the US.

Future plans

IPICS (International Partnerships in Ice Core Sciences) has produced a series of white papers outlining future challenges and scientific goals for the ice core science community. These include plans to:
- Retrieve ice cores that reach back over 1.2 million years, in order to obtain multiple iterations of the ice core record for the 40,000-year-long climate cycles known to have operated at that time. Current cores reach back over 800,000 years, and show 100,000-year cycles.
- Improve ice core chronologies, including connecting the chronologies of multiple cores.
- Identify additional proxies from ice cores, for example for sea ice, marine biological productivity, or forest fires.
- Drill additional cores to provide high-resolution data for the last 2,000 years, to use as input for detailed climate modelling.
- Identify an improved drilling fluid.
- Improve the ability to handle brittle ice, both while drilling and in transport and storage.
- Find a way to handle cores which have pressurised water at bedrock.
- Come up with a standardised lightweight drill capable of drilling both wet and dry holes, and able to reach depths of up to 1000 m.
- Improve core handling to maximise the information that can be obtained from each core.
A warming climate is found to create glacial meltwater that washes away temporally ordered layers of trapped aerosols that researchers use as an historical record of environmental events. The Ice Memory Foundation plans to store additional ice cores in Antarctica in advance of this impending loss of data.
Precast concrete
Precast concrete is a construction product produced by casting concrete in a reusable mold or "form" which is then cured in a controlled environment, transported to the construction site and maneuvered into place; examples include precast beams, wall panels, floors, roofs, and piles. In contrast, cast-in-place concrete is poured into site-specific forms and cured on site. Recently, lightweight expanded polystyrene foam has been used as the core of precast wall panels, saving weight and increasing thermal insulation. Precast stone is distinguished from precast concrete by the finer aggregate used in the mixture, so the result approaches the natural product.

Overview

Precast concrete is employed in both interior and exterior applications, from highway, bridge, and high-rise projects to parking structures, K-12 schools, warehouses, mixed-use, and industrial building construction. By producing precast concrete in a controlled environment (typically referred to as a precast plant), the concrete can cure properly and be closely monitored by plant employees. Using a precast concrete system offers many potential advantages over onsite casting. Precast concrete production can be performed on ground level, which maximizes safety in its casting. There is greater control over material quality and workmanship in a precast plant compared to a construction site. The forms used in a precast plant can be reused hundreds to thousands of times before they have to be replaced, often making it cheaper than onsite casting in terms of cost per unit of formwork. Precast concrete forming systems for architectural applications differ in size, function, and cost. Precast architectural panels are also used to clad all or part of a building facade or erect free-standing walls for landscaping, soundproofing, and security.
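The form-reuse economics mentioned above can be made concrete with a toy amortization calculation: the cost of the form is spread over the number of casts it produces. All prices below are hypothetical, chosen only to illustrate the effect.

```python
def cost_per_unit(form_cost, casts, material_and_labor_per_cast):
    """Amortize a reusable form's cost over the casts it produces."""
    return form_cost / casts + material_and_labor_per_cast

# Hypothetical comparison: an expensive steel form reused 1,000
# times versus a cheap single-use site form.
precast = cost_per_unit(20_000, 1_000, 300)   # formwork cost spread thin
site_cast = cost_per_unit(500, 1, 300)        # formwork cost borne once
print(precast, site_cast)                     # 320.0 vs 800.0 per unit
```

The point of the sketch is only that the per-unit formwork cost shrinks with each reuse, which is why high-volume production favors the precast plant.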
In appropriate instances precast products – such as beams for bridges, highways, and parking structure decks – can be prestressed structural elements. Stormwater drainage, water and sewage pipes, and tunnels also make use of precast concrete units. Precast concrete molds can be made of timber, steel, plastic, rubber, fiberglass, or other synthetic materials, with each giving a unique finish. In addition, many surface finishes for the four precast wall panel types – sandwich, plastered sandwich, inner layer and cladding panels – are available, including those creating the looks of horizontal boards and ashlar stone. Color may be added to the concrete mix, and the proportions and size of the aggregate also affect the appearance and texture of finished concrete surfaces.

History

Ancient Roman builders made use of concrete and soon poured the material into moulds to build their complex network of aqueducts, culverts, and tunnels. Modern uses for pre-cast technology include a variety of architectural and structural applications – including individual parts, or even entire building systems. In the modern world, precast panelled buildings were pioneered in Liverpool, England, in 1905. The process was invented by city engineer John Alexander Brodie. The tram stables at Walton in Liverpool followed in 1906. The idea was not taken up extensively in Britain. However, it was adopted all over the world, particularly in Central and Eastern Europe, as well as in Scandinavia, notably in Sweden's Million Programme. In the US, precast concrete has evolved as two sub-industries, each represented by a major association.
The precast concrete structures industry, represented primarily by the Precast/Prestressed Concrete Institute (PCI), focuses on prestressed concrete elements and on other precast concrete elements used in above-ground structures such as buildings, parking structures, and bridges, while the precast concrete products industry produces utility, underground, and other non-prestressed products, and is represented primarily by the National Precast Concrete Association (NPCA). In Australia, the New South Wales Government Railways made extensive use of precast concrete construction for its stations and similar buildings. Between 1917 and 1932, it erected 145 such buildings. Beyond cladding panels and structural elements, entire buildings can be assembled from precast concrete. Precast assembly enables fast completion of commercial shops and offices with minimal labor. For example, the Jim Bridger Building in Williston, North Dakota, was precast in Minnesota with air, electrical, water, and fiber utilities preinstalled into the building panels. The panels were transported over 800 miles to the Bakken oilfields, and the commercial building was assembled by three workers in minimal time. The building houses over 40,000 square feet of shops and offices. Virtually the entire building was fabricated in Minnesota.

Reinforcement

Reinforcing concrete with steel improves strength and durability. On its own, concrete has good compressive strength, but lacks tensile and shear strength and can be subject to cracking when bearing loads for long periods of time. Steel offers high tensile and shear strength to make up for what concrete lacks. Steel behaves similarly to concrete in changing environments, which means it will shrink and expand with concrete, helping avoid cracking. Rebar is the most common form of concrete reinforcement. It is typically made from steel, manufactured with ribbing to bond with concrete as it cures.
Rebar is versatile enough to be bent or assembled to support the shape of any concrete structure. Carbon steel is the most common rebar material. However, stainless steel, galvanized steel, and epoxy coatings can prevent corrosion.

Products

The following is a sampling of the numerous products that utilize precast/prestressed concrete. While this is not a complete list, the majority of precast/prestressed products typically fall under one or more of the following categories.

Agricultural products

Since precast concrete products can withstand the most extreme weather conditions and will hold up for many decades of constant usage, they have wide applications in agriculture. These include bunker silos, cattle feed bunks, cattle grids, agricultural fencing, H-bunks, J-bunks, livestock slats, livestock watering troughs, feed troughs, concrete panels, slurry channels, and more. Prestressed concrete panels are widely used in the UK for a variety of applications including agricultural buildings, grain stores, silage clamps, slurry stores, livestock walling and general retaining walls. Panels can be used horizontally and placed either inside the webbings of RSJs (I-beams) or in front of them. Alternatively, panels can be cast into a concrete foundation and used as a cantilever retaining wall.

Building and site amenities

Precast concrete building components and site amenities are used architecturally as fireplace mantels, cladding, trim products, accessories and curtain walls. Structural applications of precast concrete include foundations, beams, floors, walls and other structural components. It is essential that each structural component be designed and tested to withstand both the tensile and compressive loads that the member will be subjected to over its lifespan. Expanded polystyrene cores are now used in precast concrete panels for structural use, making them lighter and serving as thermal insulation. Multi-storey car parks are commonly constructed using precast concrete.
Construction involves assembling precast parking parts: multi-storey structural wall panels, interior and exterior columns, structural floors, girders, wall panels, stairs, and slabs. These parts can be large; for example, double-tee structural floor modules need to be lifted into place with the help of precast concrete lifting anchor systems.

Retaining walls

Precast concrete is employed in a wide range of engineered earth retaining systems. Products include commercial and residential retaining walls, sea walls, mechanically stabilized earth panels, and other modular block systems.

Sanitary and stormwater

Sanitary and stormwater management products are structures designed for underground installation that have been specifically engineered for the treatment and removal of pollutants from sanitary and stormwater run-off. These precast concrete products include stormwater detention vaults, catch basins, and manholes.

Utility structures

For communications, electrical, gas or steam systems, precast concrete utility structures protect the vital connections and controls for utility distribution. Precast concrete is nontoxic and environmentally safe. Products include: hand holes, hollow-core products, light pole bases, meter boxes, panel vaults, pull boxes, telecommunications structures, transformer pads, transformer vaults, trenches, utility buildings, utility vaults, utility poles, controlled environment vaults (CEVs), and other utility structures.

Water and wastewater products

Precast water and wastewater products hold or contain water, oil or other liquids for the purpose of further processing into non-contaminating liquids and soil products. Products include: aeration systems, distribution boxes, dosing tanks, dry wells, grease interceptors, leaching pits, sand-oil/oil-water interceptors, septic tanks, water/sewage storage tanks, wet wells, fire cisterns, and other water and wastewater products.
Transportation and traffic-related products

Precast concrete transportation products are used in the construction, safety, and site protection of roads, airports, and railroad transportation systems. Products include: box culverts, 3-sided culverts, bridge systems, railroad crossings, railroad ties, sound walls/barriers, Jersey barriers, tunnel segments, concrete barriers, TVCBs, central reservation barriers, bollards, and other transportation products. Precast concrete can also be used to make underpasses, surface crossings, and pedestrian subways. Precast concrete is also used for the roll ways of some rubber-tyred metros.

Modular paving

Modular paving is available in a rainbow of colors, shapes, sizes, and textures. These versatile precast concrete pieces can be designed to mimic brick, stone or wood.

Specialized products

Cemetery products

Underground vaults or mausoleums require watertight structures that withstand natural forces for extended periods of time.

Hazardous materials containment

Storage of hazardous material, whether short-term or long-term, is an increasingly important environmental issue, calling for containers that not only seal in the materials, but are strong enough to stand up to natural disasters or terrorist attacks.

Marine products

Seawalls, floating docks, underwater infrastructure, decking, railings, and a host of amenities are among the uses of precast along the waterfront. When designed with heavy weight in mind, precast products counteract the buoyant forces of water significantly better than most materials.

Structures

Prestressed concrete

Prestressing is a technique of introducing stresses into a structural member during fabrication and/or construction to improve its strength and performance. This technique is often employed in concrete beams, columns, spandrels, single and double tees, wall panels, segmental bridge units, bulb-tee girders, I-beam girders, and others.
Many projects find that prestressed concrete provides the lowest overall cost, considering production and lifetime maintenance.

Precast concrete sandwich wall (or insulated double-wall) panels

Origin

The precast concrete double-wall panel has been in use in Europe for decades. The original double-wall design consisted of two wythes of reinforced concrete separated by an interior void, held together with embedded steel trusses. With recent concerns about energy use, it is recognized that using steel trusses creates a "thermal bridge" that degrades thermal performance. Also, since steel does not have the same thermal expansion coefficient as concrete, as the wall heats and cools any steel that is not embedded in the concrete can create thermal stresses that cause cracking and spalling.

Development

To achieve better thermal performance, insulation was added in the void, and in many applications today the steel trusses have been replaced by composite (fibreglass, plastic, etc.) connection systems. These systems, which are specially developed for this purpose, also eliminate the differential thermal expansion problem. The best thermal performance is achieved when the insulation is continuous throughout the wall section, i.e., the wythes are thermally separated completely to the ends of the panel. Using continuous insulation and modern composite connection systems, R-values up to R-28.2 can be achieved.

Characteristics

The overall thickness of sandwich wall panels in commercial applications is typically 8 inches, but their designs are often customized to the application. In a typical 8-inch wall panel the concrete wythes are each 2-3/8 inches thick, sandwiching 3-1/4 inches of high R-value insulating foam. The interior and exterior wythes of concrete are held together (through the insulation) with some form of connecting system that is able to provide the needed structural integrity.
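The thermal behavior of a layered panel like the one just described is additive: each layer contributes its thickness times its per-inch R-value, and the layer R-values sum in series. The per-inch figures below are typical handbook-style numbers used purely for illustration; they are not taken from any particular panel specification.

```python
def wall_r_value(layers):
    """Total R-value of a layered wall assembly.

    layers: list of (thickness_in_inches, r_per_inch) tuples.
    R-values of layers in series simply add.
    """
    return sum(t * r for t, r in layers)

# Illustrative 8-inch sandwich panel: two concrete wythes around
# rigid insulating foam (assumed per-inch R-values, not exact):
panel = [
    (2.375, 0.1),   # outer concrete wythe (concrete insulates poorly)
    (3.25,  6.0),   # polyiso-type insulating foam
    (2.375, 0.1),   # inner concrete wythe
]
print(round(wall_r_value(panel), 2))  # the foam dominates the total
```

The calculation makes the "thermal bridge" point from the Origin section quantitative: the concrete contributes almost nothing, so any steel path that bypasses the foam layer undercuts most of the panel's resistance.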
Sandwich wall panels can be fabricated to the length and width desired, within practical limits dictated by the fabrication system, the stresses of lifting and handling, and shipping constraints. Panels of 9-foot clear height are common, but heights up to 12 feet can be found. The fabrication process for precast concrete sandwich wall panels allows them to be produced with finished surfaces on both sides. Such finishes can be very smooth, with the surfaces painted, stained, or left natural; for interior surfaces, the finish is comparable to drywall in smoothness and can be finished using the same prime and paint procedure as is common for conventional drywall construction. If desired, the concrete can be given an architectural finish, where the concrete itself is colored and/or textured. Colors and textures can provide the appearance of brick, stone, wood, or other patterns through the use of reusable formliners, or, in the most sophisticated applications, actual brick, stone, glass, or other materials can be cast into the concrete surface. Window and door openings are cast into the walls at the manufacturing plant as part of the fabrication process. In many applications, electrical and telecommunications conduit and boxes are cast directly into the panels in the specified locations. In some applications, utilities, plumbing and even heating components have been cast into the panels to reduce on-site construction time. The carpenters, electricians and plumbers do need to make some slight adjustments when first becoming familiar with some of the unique aspects of the wall panels. However, they still perform most of their job duties in the manner to which they are accustomed.

Applications and benefits

Precast concrete sandwich wall panels have been used on virtually every type of building, including schools, office buildings, apartment buildings, townhouses, condominiums, hotels, motels, dormitories, and single-family homes.
Although typically considered part of a building's enclosure or "envelope," they can be designed to also serve as part of the building's structural system, eliminating the need for beams and columns on the building perimeter. Besides their energy efficiency and aesthetic versatility, they also provide excellent noise attenuation, outstanding durability (resistant to rot, mold, etc.), and rapid construction. In addition to the good insulation properties, sandwich panels require fewer work phases to complete. Compared to double-walls, for example, which have to be insulated and filled with concrete on site, sandwich panels require much less labor and scaffolding.

Precast Concrete Market

The precast concrete industry is largely dominated by government-initiated projects for infrastructural development. However, precast products are also being extensively used for residential (low- and high-rise) and commercial construction because of their various favourable attributes. The efficiency, durability, ease of use, cost effectiveness, and sustainable properties of these products have brought a revolutionary reduction in the time consumed in the construction of any structure. The construction industry is a huge consumer of energy, and precast concrete products are, and will continue to be, more energy efficient than their counterparts. The wide range of designs, colours, and structural options that these products provide is also making them a favourable choice for consumers.

Regulations

Many state and federal transportation projects in the United States require precast concrete suppliers to be certified by either the Architectural Precast Association, National Precast Concrete Association or Precast Prestressed Concrete Institute.
Hunting knife
A hunting knife is a knife used during hunting for preparing the game to be used as food: skinning the animal and cutting up the meat. It is different from the hunting dagger, which was traditionally used to kill wild game. Some hunting knives are adapted for other uses in the wild, such as a camp knife, which hunters may use as a machete or hatchet when those specific tools are not available. In this case, their function is similar to that of a survival knife.

Design

Hunting knives are traditionally designed for cutting rather than stabbing, and usually have a single sharpened edge. The blade is slightly curved on most models, and some hunting knives may have a blade that has both a curved portion for skinning and a straight portion for cutting slices of meat. Some blades incorporate a gut hook. Most hunting knives designed as "skinners" have a rounded point so as not to damage the skin as it is being removed.

Types of knife

- Fixed-blade knife – Fixed-blade knives have the practical advantage of their simple design. If the game is large and the terrain rugged, a fixed-blade knife is often a better option for its strength and dependability.
- Folding knife – Folding knives have the advantage of being easier to carry and to conceal, and they are also considered safer. They can be kept in a pocket easily.
- Out-the-front (OTF) knife – OTF knives are usually used by military personnel.
- Replaceable-blade knife – Knives having interchangeable blades, or ones with a handle that may carry a separate blade, are known as replaceable-blade knives.

Type of blade

- Clip point – The clip point blade is thin, with a well-defined point; the blade itself is relatively flat. This type of blade is used for dressing and skinning.
- Drop point – The blade of a drop point knife is thick and curved. It is used for dressing and skinning.
- Skinning blade – This type of blade is specially designed for skinning; it quickly and neatly separates skin from meat.
Examples Hunting knives include the puukko, the Yakutian knife, and the Sharpfinger. Most American designs are based on a smaller version of the Bowie knife. Knifemaker Bob Loveless popularized the drop point hunting knife and William Scagel popularized the Camp knife.
Technology
Knives
null
4562815
https://en.wikipedia.org/wiki/Applied%20mechanics
Applied mechanics
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. This practical character makes applied mechanics an essential part of everyday engineering. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering. Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies to external forces, whether initially at rest or in motion. Applied mechanics bridges the gap between physical theory and its application to technology. Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch of applied mechanics contains subcategories of its own. Classical mechanics, divided into statics and dynamics, is even further subdivided, with the study of statics split into rigid bodies and rigid structures, and the study of dynamics split into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics. Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools. 
Within the natural sciences, mechanics is complemented by thermodynamics, the study of heat and, more generally, energy, and by electromechanics, the study of electricity and magnetism. Overview Engineering problems are generally tackled with applied mechanics through the application of theories of classical mechanics and fluid mechanics. Because applied mechanics can be applied in engineering disciplines like civil engineering, mechanical engineering, aerospace engineering, materials engineering, and biomedical engineering, it is sometimes referred to as engineering mechanics. Science and engineering are interconnected with respect to applied mechanics, as research in science is linked to research processes in the civil, mechanical, aerospace, materials, and biomedical engineering disciplines. In civil engineering, applied mechanics concepts can be applied to structural design and a variety of engineering sub-topics like structural, coastal, geotechnical, construction, and earthquake engineering. In mechanical engineering, it can be applied in mechatronics and robotics, design and drafting, nanotechnology, machine elements, structural analysis, friction stir welding, and acoustical engineering. In aerospace engineering, applied mechanics is used in aerodynamics, aerospace structural mechanics and propulsion, aircraft design, and flight mechanics. In materials engineering, applied mechanics concepts are used in thermoelasticity, elasticity theory, fracture and failure mechanisms, structural design optimisation, fracture and fatigue, active materials and composites, and computational mechanics. Research in applied mechanics can be directly linked to biomedical engineering areas of interest like orthopaedics; biomechanics; human body motion analysis; soft tissue modelling of muscles, tendons, ligaments, and cartilage; biofluid mechanics; and dynamic systems, performance enhancement, and optimal control. 
Brief history The first science with a theoretical foundation based in mathematics was mechanics; the underlying principles of mechanics were first delineated by Isaac Newton in his 1687 book Philosophiæ Naturalis Principia Mathematica. One of the earliest works to define applied mechanics as its own discipline was the three-volume Handbuch der Mechanik written by German physicist and engineer Franz Josef Gerstner. The first seminal work on applied mechanics to be published in English was A Manual of Applied Mechanics in 1858 by English mechanical engineer William Rankine. August Föppl, a German mechanical engineer and professor, published Vorlesungen über technische Mechanik in 1898, in which he introduced calculus to the study of applied mechanics. Applied mechanics was established as a discipline separate from classical mechanics in the early 1920s with the publication of the Journal of Applied Mathematics and Mechanics, the creation of the Society of Applied Mathematics and Mechanics, and the first meeting of the International Congress of Applied Mechanics. In 1921 Austrian scientist Richard von Mises started the Journal of Applied Mathematics and Mechanics (Zeitschrift für Angewandte Mathematik und Mechanik) and in 1922, with German scientist Ludwig Prandtl, founded the Society of Applied Mathematics and Mechanics (Gesellschaft für Angewandte Mathematik und Mechanik). During a 1922 conference on hydrodynamics and aerodynamics in Innsbruck, Austria, Theodore von Kármán, a Hungarian engineer, and Tullio Levi-Civita, an Italian mathematician, met and decided to organize a conference on applied mechanics. In 1924 the first meeting of the International Congress of Applied Mechanics was held in Delft, the Netherlands, attended by more than 200 scientists from around the world. 
Since this first meeting the congress has been held every four years, except during World War II; the name of the meeting was changed to the International Congress of Theoretical and Applied Mechanics in 1960. Due to the unpredictable political landscape in Europe after the First World War and the upheaval of World War II, many European scientists and engineers emigrated to the United States. Ukrainian engineer Stephen Timoshenko fled the Bolshevik Red Army in 1918 and eventually emigrated to the U.S. in 1922; over the next twenty-two years he taught applied mechanics at the University of Michigan and Stanford University. Timoshenko authored thirteen textbooks in applied mechanics, many considered the gold standard in their fields; he also founded the Applied Mechanics Division of the American Society of Mechanical Engineers in 1927 and is considered “America’s Father of Engineering Mechanics.” In 1930 Theodore von Kármán left Germany and became the first director of the Aeronautical Laboratory at the California Institute of Technology; von Kármán would later co-found the Jet Propulsion Laboratory in 1944. With the leadership of Timoshenko and von Kármán, the influx of talent from Europe, and the rapid growth of the aeronautical and defense industries, applied mechanics became a mature discipline in the U.S. by 1950. Branches Dynamics Dynamics, the study of the motion of various objects, can be further divided into two branches, kinematics and kinetics. In classical mechanics, kinematics is the analysis of moving bodies using time, velocities, displacement, and acceleration. Kinetics is the study of moving bodies through the lens of the effects of forces and masses. In the context of fluid mechanics, fluid dynamics pertains to the flow of fluids and the description of their motion. Statics Statics is the study and description of bodies at rest. 
Static analysis in classical mechanics can be broken down into two categories: non-deformable bodies and deformable bodies. When studying non-deformable bodies, the forces acting on the rigid structures are analyzed. When studying deformable bodies, the structure and material strength are examined. In the context of fluid mechanics, the resting state of the pressure-unaffected fluid is taken into account. Relationship to classical mechanics Applied mechanics is a result of the practical applications of various engineering/mechanical disciplines; as illustrated in the table below. Examples Newtonian foundation Being one of the first sciences for which a systematic theoretical framework was developed, mechanics was spearheaded by Sir Isaac Newton's Principia (published in 1687). Newton's "divide and rule" strategy helped split the study of motion into dynamics and statics. The type of matter and the external forces acting on it dictate how this strategy is applied within dynamic and static studies. Archimedes' principle Archimedes' principle is a major one that contains many defining propositions pertaining to fluid mechanics. As stated by proposition 7 of Archimedes' principle, a solid that is heavier than the fluid it is placed in will descend to the bottom of the fluid. If the solid is weighed within the fluid, it will measure lighter than its true weight by the weight of the fluid it displaced. As developed further in proposition 5, if the solid is lighter than the fluid it is placed in, the solid will have to be forcibly immersed to be fully covered by the liquid. The weight of the displaced fluid will then be equal to the weight of the solid. Major topics This section is based on the "AMR Subject Classification Scheme" from the journal Applied Mechanics Reviews. 
Foundations and basic methods Continuum mechanics Finite element method Finite difference method Other computational methods Experimental system analysis Dynamics and vibration Dynamics (mechanics) Kinematics Vibrations of solids (basic) Vibrations (structural elements) Vibrations (structures) Wave motion in solids Impact on solids Waves in incompressible fluids Waves in compressible fluids Solid fluid interactions Astronautics (celestial and orbital mechanics) Explosions and ballistics Acoustics Automatic control System theory and design Optimal control system System and control applications Robotics Manufacturing Mechanics of solids Elasticity Viscoelasticity Plasticity and viscoplasticity Composite material mechanics Cables, rope, beams, etc Plates, shells, membranes, etc Structural stability (buckling, postbuckling) Electromagneto solid mechanics Soil mechanics (basic) Soil mechanics (applied) Rock mechanics Material processing Fracture and damage processes Fracture and damage mechanics Experimental stress analysis Material Testing Structures (basic) Structures (ground) Structures (ocean and coastal) Structures (mobile) Structures (containment) Friction and wear Machine elements Machine design Fastening and joining Mechanics of fluids Rheology Hydraulics Incompressible flow Compressible flow Rarefied flow Multiphase flow Wall Layers (incl boundary layers) Internal flow (pipe, channel, and couette) Internal flow (inlets, nozzles, diffusers, and cascades) Free shear layers (mixing layers, jets, wakes, cavities, and plumes) Flow stability Turbulence Electromagneto fluid and plasma dynamics Hydromechanics Aerodynamics Machinery fluid dynamics Lubrication Flow measurements and visualization Thermal sciences Thermodynamics Heat transfer (one phase convection) Heat transfer (two phase convection) Heat transfer (conduction) Heat transfer (radiation and combined modes) Heat transfer (devices and systems) Thermodynamics of solids Mass transfer (with and without heat 
transfer) Combustion Prime movers and propulsion systems Earth sciences Micromeritics Porous media Geomechanics Earthquake mechanics Hydrology, oceanology, and meteorology Energy systems and environment Fossil fuel systems Nuclear systems Geothermal systems Solar energy systems Wind energy systems Ocean energy system Energy distribution and storage Environmental fluid mechanics Hazardous waste containment and disposal Biosciences Biomechanics Human factor engineering Rehabilitation engineering Sports mechanics Applications Electrical Engineering Civil engineering Mechanical Engineering Nuclear engineering Architectural engineering Chemical engineering Petroleum engineering Publications Journal of Applied Mathematics and Mechanics Newsletters of the Applied Mechanics Division Journal of Applied Mechanics Applied Mechanics Reviews Applied Mechanics Quarterly Journal of Mechanics and Applied Mathematics Journal of Applied Mathematics and Mechanics (PMM) Gesellschaft für Angewandte Mathematik und Mechanik Acta Mechanica Sinica
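Returning to the Archimedes' principle propositions discussed in the Examples section above, the buoyancy arithmetic can be sketched numerically. This is a minimal illustration only; the densities, masses, and volumes are assumed example values, not figures from the article.

```python
# Minimal numerical sketch of Archimedes' principle (propositions 5 and 7
# discussed above). All masses, volumes, and densities are assumed values.

G = 9.81             # gravitational acceleration, m/s^2
RHO_WATER = 1000.0   # density of water, kg/m^3

def apparent_weight(mass_kg, volume_m3, rho_fluid=RHO_WATER):
    """Weight measured with the solid fully immersed: the true weight
    minus the weight of the displaced fluid (proposition 7)."""
    buoyant_force = rho_fluid * volume_m3 * G
    return mass_kg * G - buoyant_force

# A 1-litre steel block (denser than water) sinks, and in the water it
# weighs less by the weight of 1 litre of water (about 9.81 N):
print(apparent_weight(mass_kg=7.85, volume_m3=0.001))

# A solid lighter than the fluid must be forcibly immersed (proposition 5);
# its apparent weight is negative, i.e. there is a net upward force:
print(apparent_weight(mass_kg=0.5, volume_m3=0.001))
```

The sign of the result tells the story: positive means the solid sinks and weighs less in the fluid than in air; negative means it must be held down to stay submerged.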
Technology
Disciplines
null
4562902
https://en.wikipedia.org/wiki/Radian%20per%20second
Radian per second
The radian per second (symbol: rad⋅s−1 or rad/s) is the unit of angular velocity in the International System of Units (SI). The radian per second is also the SI unit of angular frequency (symbol ω, omega). The radian per second is defined as the angular frequency that results in the angular displacement increasing by one radian every second. Relation to other units A frequency of one hertz (1 Hz), or one cycle per second (1 cps), corresponds to an angular frequency of 2π radians per second. This is because one cycle of rotation corresponds to an angular rotation of 2π radians. Since the radian is a dimensionless unit in the SI, the radian per second is dimensionally equivalent to the hertz; both can be expressed as reciprocal seconds, s−1. So, context is necessary to specify which kind of quantity is being expressed, angular frequency or ordinary frequency. One radian per second also corresponds to about 9.55 revolutions per minute (rpm). Degrees per second may also be defined, based on degree of arc, where 1 degree per second (°/s) is equivalent to π/180 rad⋅s−1 ≈ 0.01745 rad⋅s−1. Quantity correspondence:
2π rad/s = 1 Hz
1 rad/s ≈ 0.159155 Hz
1 rad/s ≈ 9.5493 rpm
0.1047 rad/s ≈ 1 rpm
Coherent units A use of the unit radian per second is in the calculation of the power transmitted by a shaft. In the International System of Quantities (ISQ) and the International System of Units, widely used in physics and engineering, the power p is equal to the angular speed ω multiplied by the torque τ applied to the shaft: p = ωτ. When coherent units are used for these quantities, which are respectively the watt, the radian per second, and the newton-metre, 1 W = 1 rad/s × 1 N⋅m, and no numerical factor is needed when performing the numerical calculation. When the units are not coherent (e.g. horsepower, turn/min, and pound-foot), an additional factor will generally be necessary.
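The unit correspondences and the shaft-power relation above can be checked with a short script. The helper function names are illustrative, not a standard API.

```python
import math

def hz_to_rad_per_s(f_hz):
    """One cycle corresponds to 2*pi radians, so omega = 2*pi*f."""
    return 2 * math.pi * f_hz

def rpm_to_rad_per_s(n_rpm):
    """One revolution is 2*pi radians; one minute is 60 s."""
    return n_rpm * 2 * math.pi / 60

def deg_per_s_to_rad_per_s(d_deg_per_s):
    """1 degree per second = pi/180 rad/s."""
    return math.radians(d_deg_per_s)

def shaft_power_w(omega_rad_per_s, torque_n_m):
    """p = omega * tau; with coherent SI units no extra factor is needed."""
    return omega_rad_per_s * torque_n_m

# 1 rad/s is about 9.55 rpm, matching the correspondence table above:
print(60 / (2 * math.pi))                          # ≈ 9.5493
# A shaft turning at 3000 rpm while delivering 10 N*m transmits:
print(shaft_power_w(rpm_to_rad_per_s(3000), 10))   # ≈ 3141.6 W
```

Note that the rpm figure first has to be converted to rad/s; feeding 3000 straight into the power formula would overstate the power by the non-coherence factor 60/(2π).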
Physical sciences
Angular velocity
Basics and measurement
6003864
https://en.wikipedia.org/wiki/Markarian%20421
Markarian 421
Markarian 421 (Mrk 421, Mkn 421) is a blazar located in the constellation Ursa Major. The object is an active galaxy and a BL Lacertae object, and is a strong source of gamma rays. It is about 397 million light-years (redshift z = 0.0308, equivalent to 122 Mpc) to 434 million light-years (133 Mpc) from Earth. It is one of the closest blazars to Earth, making it one of the brightest quasars in the night sky. It is suspected to have a supermassive black hole (SMBH) at its center due to its active nature. An early-type high-inclination spiral galaxy (Markarian 421-5) is located 14 arc-seconds northeast of Markarian 421. It was first determined to be a very-high-energy gamma-ray emitter in 1992 by M. Punch at the Whipple Observatory, and an extremely rapid outburst in very-high-energy gamma rays (15-minute rise time) was measured in 1996 by J. Gaidos at the Whipple Observatory. Markarian 421 also had an outburst in 2001 and is monitored by the Whole Earth Blazar Telescope project. Due to its brightness (around magnitude 13.3, with a maximum of 11.6 and a minimum of 16), the object can also be viewed by amateur astronomers with smaller telescopes.
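The distance range quoted above follows from the redshift via Hubble's law, d ≈ cz/H0, valid at low redshift. A rough sketch follows; the Hubble-constant values are assumptions chosen to reproduce the quoted 122 and 133 Mpc figures, and are not stated in the article.

```python
C_KM_PER_S = 299_792.458  # speed of light in km/s

def hubble_distance_mpc(z, h0_km_s_per_mpc):
    """Low-redshift approximation d ~ c*z / H0 (valid for z << 1)."""
    return C_KM_PER_S * z / h0_km_s_per_mpc

z = 0.0308  # Markarian 421's redshift, from the article
# Assumed Hubble-constant values, back-solved to bracket the article's
# quoted 122-133 Mpc distance range:
print(hubble_distance_mpc(z, 75.7))  # ≈ 122 Mpc
print(hubble_distance_mpc(z, 69.4))  # ≈ 133 Mpc
```

The spread in the quoted distance thus reflects the choice of Hubble constant rather than any uncertainty in the measured redshift.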
Physical sciences
Notable galaxies
Astronomy
18254249
https://en.wikipedia.org/wiki/Load%20factor%20%28electrical%29
Load factor (electrical)
In electrical engineering the load factor is defined as the average load divided by the peak load in a specified time period. It is a measure of the utilization rate, or efficiency, of electrical energy usage; a high load factor indicates that a load is using the electric system more efficiently, whereas consumers or generators that underutilize the electric distribution system will have a low load factor. On a commercial electrical bill, for example, it is calculated by dividing the energy used over the billing cycle (in kWh) by the product of the peak demand (in kW) and the number of hours in the cycle: load factor = (energy used / (peak demand × 24 × days in billing cycle)) × 100%. It can be derived from the load profile of the specific device or system of devices. Its value is always less than one, because maximum demand is never lower than average demand, since facilities likely never operate at full capacity for the duration of an entire 24-hour day. A high load factor means power usage is relatively constant. A low load factor shows that occasionally a high demand is set. To service that peak, capacity sits idle for long periods, thereby imposing higher costs on the system. Electrical rates are designed so that customers with a high load factor are charged less overall per kWh. This process, along with others, is called load balancing or peak shaving. The load factor is closely related to, and often confused with, the demand factor. The major difference to note is that the denominator in the demand factor is fixed depending on the system. Because of this, the demand factor cannot be derived from the load profile but needs the addition of the full load of the system in question.
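The definition above can be sketched in a few lines; the billing figures in the example are hypothetical, not taken from a real bill.

```python
def load_factor(energy_kwh, peak_demand_kw, days_in_cycle):
    """Average load over the billing period divided by the peak load."""
    hours = days_in_cycle * 24
    average_load_kw = energy_kwh / hours
    return average_load_kw / peak_demand_kw

# Hypothetical commercial bill: 36,000 kWh over a 30-day cycle with a
# 274 kW peak demand gives an average load of 50 kW and a low load factor:
lf = load_factor(energy_kwh=36_000, peak_demand_kw=274, days_in_cycle=30)
print(f"{lf:.2%}")  # 18.25%
```

A facility running a constant 50 kW around the clock against a 50 kW peak would instead score 100%, illustrating why a flat load profile is the cheapest to serve.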
Technology
Concepts
null
14202412
https://en.wikipedia.org/wiki/Locker
Locker
A locker is a small, usually narrow storage compartment. They are commonly found in dedicated cabinets, very often in large numbers, in various public places such as locker rooms, workplaces, schools, transport hubs and the like. They vary in size, purpose, construction, and security. General description and characteristics Lockers are normally quite narrow, of varying heights and tier arrangements. Width and depth usually conform to standard measurements, although non-standard sizes are occasionally found. Public places with lockers often contain large numbers of them, such as in a school. They are usually made of painted sheet metal. The characteristics that usually distinguish them from other types of cabinet or cupboard or storage container are: They are usually equipped with a lock, or at least a facility for padlocking (occasionally both). They are usually intended for use in public places, and intended for the short- or long-term private use of individuals for storing clothing or other personal items. Users may rent a locker for a single use or for a period of time for repeated use. Some lockers are offered as a free service to people partaking of certain activities that require the safekeeping of personal items. There are usually, but not always, several of them joined. Lockers are usually physically joined side by side in banks, and are commonly made from steel, although wood, laminate, and plastic are other materials sometimes found. Steel lockers which are banked together share side walls, and are constructed by starting with a complete locker; further lockers may then be added by constructing the floor, roof, rear wall, door, and just one extra side wall, the existing side wall of the previous locker serving as the other side wall of the new one. The walls, floors, and roof of lockers may be either riveted together (the more traditional method) or, more recently, welded together. 
Locker doors usually have some kind of ventilation to provide for the flow of air to aid in cleanliness. These vents usually take the form of a series of horizontal angled slats at the top and bottom of the door, although sometimes parallel rows of small square or rectangular holes are found instead, running up and down the door. Less often, the side or rear walls may also have similar ventilation. Locker doors usually have door stiffeners fixed vertically to the inside of the door, in the form of a metal plate welded to the inner surface, and protruding outward a fraction of an inch, thus adding to the robustness of the door and making it harder to force open. Lockers are often manufactured by the same companies who produce filing cabinets, stationery cabinets (occasionally wrongly referred to as lockers), steel shelving, and other products made from sheet steel. Variable characteristics of lockers There are a number of features or characteristics which may vary in lockers. Because purchasers will need to specify what they want in each of these when ordering, it is more common to order a particular configuration rather than buy "off the shelf" in a shop, although certain very common configurations can be found in shops fairly easily. These features include: Bank size: It does not necessarily refer to the total number of compartments, but rather the number of compartments wide the entire cabinet is. So a bank of three may contain six lockers, for example, if they are two-tier lockers. In short, the total number of lockers is the bank size multiplied by the number of tiers. Sometimes the term "bay" is used instead of "bank", although "bank" appears to be the more standard term; on other occasions, "bay" refers to a single locker width within a bank, including all tiers of locker directly on top of each other. 
Tiers: may be specified as single-tier (full height), two-tier, three-tier, etc., meaning that the lockers are stacked on top of each other in layers two high, three high, etc. Tiers are commonly up to eight high; on occasion, even more tiers may be found, in the case of very small lockers for such purposes as storing laptop computers. The most common numbers of tiers found in lockers are, in order, one, two, and four; three-tier lockers are rather less common, and other numbers such as five, six, or eight even less common still; seven-tier lockers are almost non-existent. Since locker cabinets are most commonly 6 feet (182.9 cm.) high (although there are exceptions), the height of individual lockers varies according to how many tiers are accommodated within the cabinet. The height of individual lockers is usually approximately 6 feet (182.9 cm.) divided by the number of tiers, so that two-tier lockers are about 3 feet (91.4 cm.) high, three-tier lockers 2 feet (61 cm.) high, four-tier lockers 1.5 feet (45.7 cm.) high, and so on. Standard features often vary according to the number of tiers: single-tier lockers usually include a shelf about a foot (roughly 30 cm.) from the top, and a hanging rail (sometimes with one or two hooks) immediately underneath that, at the top of the large compartment beneath the shelf; two- or three-tier lockers usually lack the shelf, but include the hanging rail; lockers with four or more tiers usually have none of these fittings, but consist of just the bare compartment. Material: steel is the traditional material, but wood, plastic, or laminate are sometimes used. Plastic or laminate lockers are sometimes advocated in environments, such as near swimming pools, where moisture accumulation may cause steel lockers to rust over time. They can also be used in external applications where internal space is not available. Locking options: various types of key locking or padlocking facility are available now. 
Key locking options include flush locks, cam locks, or locks incorporated into a rotating handle; padlocking facilities may be a simple hasp and staple, or else a padlocking hole may be included in a handle, often called a latchlock. More modern designs include keyless operation, either by coin deposit (which may or may not be returned when use of the locker terminates), or by using electronic keypads to enter passwords for later reopening the locker. Some older lockers used a drop-latch which was incorporated into the door handle, and slid up and down and could be padlocked at the bottom in the "down" position, but these are less used now. Three-point locking is not possible with this type of latch, because it needs to be operated by means of a latch that rotates rather than slides up and down; so this drop-latch is probably a less secure locking option, which may be why it is little used nowadays. Number of locking points: Locker doors may lock with either single- or three-point locking, but this is not normally chosen as a separate option, and the choice is usually dependent on the number of tiers in the lockers, or whether they are a high-security model, although some manufacturers do allow purchasers to specifically choose an option here that goes against their normal practice. Single-point locking locks the door at only the point where the latch engages with the door-frame, whereas three-point locking uses extensible steel rods to lock the top and bottom of the door as well. 
Dimensions: (Note that, in English-speaking countries, even those commonly using metric measurements now, locker dimensions are usually clean numbers of inches or feet, while the corresponding metric measurements are uneven, involving decimal places when precision is required, presumably resulting from continued use of locker designs based on feet and inches, unchanged for decades other than for cosmetic features.): Width: Lockers are usually designed in standard widths: 12 inches (30.5 cm.) wide is a common width, and 15 inches (38 cm.) has become more common recently. Other widths are occasionally found, however, especially in the U.S., where narrower or (occasionally) wider lockers can be found. Depth: Most standard lockers are approximately 18 inches (46 cm.) deep, so this property does not usually vary, unless a non-standard model is chosen, or arranged by special order. In the U.S., 12- or 15-inch-deep (30.5 or 38 cm.) lockers seem to have some currency, although this is virtually unknown in Australia. Height: Similarly, locker cabinets are a standard height, usually about 6 feet (182.9 cm.), so this does not vary either, unless non-standard models are ordered. Colour: lockers were often a uniform dark-grey some decades ago, but a range of colors is offered by most manufacturers now. A few manufacturers offer two-tone coloring, where the doors and locker bodies are of different colors. Steel thickness: lockers tend to be made from a standard thickness of steel, which is commonly 0.8 mm. thick; but heavy-duty or high-security lockers are offered as a standard option by some manufacturers, or may be available on special order. A typical locker of this sort may be constructed from steel 1.2 mm. thick, for example, and is usually fitted with three-point locking, regardless of the number of tiers. Sloping tops: while most lockers have flat tops, some manufacturers offer the option of sloping tops. 
The slope may be either 30 degrees or 45 degrees to the horizontal, sloping towards the front. The purpose of this is to make it impossible to store items atop the lockers, and to make it harder for dust or other debris to accumulate there. This is an important factor in places like food-processing factories or restaurants where hygiene requirements must be met. The evolution of lockers Historically, lockers have been a space to store personal belongings secured by various locking mechanisms. The earliest modern lockers were simple ‘box with a lock’ type devices, likely used for sporting purposes. The ‘locker room’ was a place for athletes to store their clothing, belongings and equipment temporarily. People could retrieve their items by using the specific key assigned to them when they selected the locker space. As lockers became more commonplace, they started appearing in educational facilities, hospitals, gymnasiums and in the workplace. Lockers initially were cabinet-like and made of wood, and later of steel and metal. Lockers have since evolved with people's needs and breakthrough technologies. Today lockers can be manufactured out of various materials and to suit the décor of the environment they are in. Metal, steel, plastic, wood and fabricated wood are all popular materials that are used. The lock mechanism on a locker has especially evolved with the introduction of new technologies. The movement from a large padlock and key to an electronic system illustrates how lockers have adopted smart technology. Smart technology allows lockers to be digital, flexible in use and equipped with various features to improve the user experience. Smart lockers are digitally managed storage banks, which make the experience of acquiring and using a locker fast and efficient. Whether controlled by a mobile phone app or a touchless kiosk, the technology allows for automation throughout the entire process/workflow. 
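The tier arithmetic noted under "Variable characteristics of lockers" above (total compartments = bank size × number of tiers; individual locker height ≈ cabinet height ÷ number of tiers) can be sketched in a few lines; the function names are purely illustrative.

```python
CABINET_HEIGHT_CM = 182.9  # the common ~6-foot cabinet height noted above

def total_compartments(bank_size, tiers):
    """Total lockers = compartments wide (the bank size) times tiers high."""
    return bank_size * tiers

def compartment_height_cm(tiers, cabinet_height_cm=CABINET_HEIGHT_CM):
    """Individual locker height is roughly the cabinet height / tiers."""
    return cabinet_height_cm / tiers

# A bank of three two-tier lockers contains six compartments:
print(total_compartments(bank_size=3, tiers=2))  # 6
# Each two-tier compartment is about 3 feet (~91 cm) high:
print(compartment_height_cm(tiers=2))
```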
Types and applications Traditional lockers There are a number of less standard lockers that are offered by various manufacturers. These include: Gun lockers or safes are specifically designed for the secure storage of guns and ammunition. They broadly resemble normal single-tier lockers, but tend to be slightly shorter than standard single-tier lockers, and are often free-standing, and not banked together. They are fitted with internal racks designed for holding firearms. They have a shelf at the top like normal single-tier lockers, although in this case it is closed and locked by a separate door, because of legal requirements in some countries that firearms and ammunition be stored and locked separately. They always lock with three-point locking, which is in some countries a legal requirement for the storage of firearms. Sometimes they are made of the standard kind of sheet steel used in manufacturing normal lockers, and sometimes they are made of extremely thick heavy-duty steel and in this case resemble a safe more than a normal locker. In Australia there are strict regulations governing the storage of firearms following the Port Arthur massacre in Tasmania on 28 April 1996, and cabinets used for storing firearms must be bolted to the floor or a wall if the cabinet is under a certain weight. Dedicated gun lockers are likely to include holes in the cabinet to accommodate such bolting. Several locker manufacturers also offer dedicated gun lockers. Bicycle lockers are usually in outdoor locations near railway stations and the like where people may want to store bicycles securely. They are often banked together, with individual lockers shaped like an isosceles triangle for efficient and compact storage of a bicycle. This triangular shape permits the lockers to be grouped either in a radial pattern (with the sharpest points of the lockers together), or in a row in alternating orientations. 
Heavy-duty or high-security lockers are similar to the standard models, but are usually made from thicker steel, and have three-point locking, regardless of the number of tiers involved. Some models are made from steel 1.2 mm. thick, in contrast to the more usual 0.8 mm. Laundry lockers are used in places like hospitals and food-processing workplaces where uniforms have to be collected, laundered, then returned to their owners. The locker cabinet contains a number of very narrow lockers, each of whose doors is keyed using a key held by the owner, so that they have access only to their own locker; but the entire array of doors is embedded in a much larger door covering the entire front of the cabinet. Opening this opens all the lockers simultaneously, and requires the use of a master key held by whoever collects the deposited items for laundering and returns them in the same way, after which the items are accessible to owners using their individual small doors. Services lockers are extra-wide lockers used by fire or police services, and typically have a number of different compartments within a single door to accommodate different pieces of equipment used by fire or police personnel, such as special shelves to accommodate helmets, boots, and so on. School lockers may be single- or two-tier, and are fitted with internal divisions or shelves to accommodate both hanging space and room for storing textbooks. Perforated lockers are similar to the standard types of locker, but the door and walls are made largely or entirely of perforated steel, with hundreds of holes creating a strong mesh arranged in a diagonal pattern. This is used where good ventilation is required, or where, for security reasons, it is necessary that the contents can be examined visually while the doors are locked. Clean/dirty lockers normally have two or three parts within the locker. One part is meant for dirty or worn clothes, and the other for clean clothes. 
These lockers are meant for hospitals and other medical workplaces where it is useful to keep work and personal clothes apart to reduce the risk of infection. They are also useful in factories where work clothes can become dirty and need to be kept apart from personal clothes.

Backpacker lockers are designed to accommodate backpacks in places like backpackers' hostels, and are similar to two-tier lockers but with larger dimensions. Typically, the height may be standard, but the width and depth will be several inches greater. These usually lack internal fittings such as shelves, hanging rails, or hooks.

Stepped/2-step lockers are two-tier lockers, usually available only in 15-inch (38 cm) width; the compartments and their doors have an L-shaped cross-section, which causes the division between the doors to follow a zigzag pattern. This configuration provides more hanging height in both upper and lower lockers, but part of each compartment (the lower part of the upper one and the upper part of the lower one) will be only half the usual width of two-tier lockers.

Executive lockers are larger units, free-standing rather than banked, that include several compartments, including a full wardrobe-type hanging compartment as well as a number of smaller compartments for varied uses.

Coin-operated lockers are meant for temporary use when guests need a place to store valuables. They are commonly seen in amusement parks.

TA-50 military gear lockers are widely used by the US Department of Defense as personnel lockers. They can be made of several locker materials but are usually steel or wire mesh. The term TA-50 refers to any type of military equipment, so the sizes and configurations are generally based on the type of equipment stored.
Division 10 — Specialties

Lockers: Division 10 — Specialties is a category within the National Master Specification (NMS) set of guidelines developed by Public Works and Government Services Canada. Division 10 — Specialties items that could be required within a locker room (to meet commercial building and construction regulations) are lockers, washroom accessories, toilet compartments, and toilet partitions. Lockers are constructed of two sides, a back, a top, and a bottom. Different materials are used in locker manufacturing, yielding a wide variety of metal lockers, stainless steel lockers, solid plastic lockers, solid phenolic lockers, and custom lockers. A padlock is the most common way to lock a locker; alternatives include a keyed cylinder lock, built-in combination locks, or keypad locks. Many optional extras are available, for example bases, sloping tops, end panels, and customized shelves and hooks, as well as the locking method (coin-operated lockers are another option). The environment is the best guide to which type of locker is required for a given space. For example, for gym lockers in a humid area, or anywhere close to showers, stainless steel or solid plastic lockers are most suitable because they are moisture- and rust-resistant. Wood lockers would not be appropriate for this type of environment because the moisture from the humidity would rot the wood.

Waterproof lockers are among the more common types of lockers, mainly found in wet areas such as swimming pools, gyms, and health and fitness clubs.

Mini lockers are smaller than normal lockers and can store books, albums, seasonal clothing, tools, and small appliances.

Intelligent lockers

After the onset of the COVID-19 pandemic in 2019, office workers went into offices for only part of their working week to maintain social distancing.
Hybrid working, defined as "team or organisation work part of their time at the workplace and part remotely", has made the workplace more flexible. The reduced number of employees coming to the office led companies to cut the cost and space of their offices and to look for technologies that can enhance workplace productivity, efficiency, and employee experience. With the rise of hybrid working, traditional lockers no longer serve the purpose of a modern workplace that empowers its people. Agile lockers is a new term used for an agile workplace, where employee experience is prioritised while saving office space and cost.

Doorless designs

There are also several types of doorless locker design, including cylindrical, spherical, and cone-shaped models. One such design eliminates the use of doors by offering a cylinder open at the front to receive items, which can then be rotated to secure the contents.

Abolition of lockers

Some schools in the United States have been reported to have abolished the use of lockers. Security concerns are cited as the reason, the worry being that lockers may be used to store contraband such as weapons, drugs, or pornographic material. There has been some controversy over the circumstances in which school authorities or law-enforcement officials are permitted to search lockers, with or without informing the users, or with or without the users being present at the time of the search; it has been considered a civil liberties issue, particularly in the U.S. Other advocates of lockerless schools cite reasons such as reducing noise by eliminating the clang of dozens of locker doors, or creating a more aesthetically appealing environment. It has also been claimed that removing lockers provides good training for students by forcing them to manage their books more efficiently, taking the time to plan which books they will need and carrying only those.
In schools without lockers, students are sometimes provided with two complete sets of textbooks, one set kept at school for use in class and the other kept at home for homework, thus limiting the heavy carrying that would otherwise be required without lockers to store books in between classes. However, research has shown an increase in the incidence of back injuries in some students, directly attributed to the lack of lockers for storing books, which forces students to spend more time carrying heavy loads of books in backpacks. Some students oppose the abolition of lockers, arguing that a locker is one of the few private spaces they have in an environment which is otherwise communal and impersonal.

Coin-operated public luggage lockers can be found in bus and rail stations. In some countries they were commonplace from the 1950s to the 1970s, but were eliminated over concerns that bombs might be hidden in them. Some airports have also removed them for this reason.
Bipolar II disorder
Bipolar II disorder (BP-II) is a mood disorder on the bipolar spectrum, characterized by at least one episode of hypomania and at least one episode of major depression. Diagnosis for BP-II requires that the individual must never have experienced a full manic episode. Otherwise, one manic episode meets the criteria for bipolar I disorder (BP-I). Hypomania is a sustained state of elevated or irritable mood that is less severe than mania yet may still significantly affect the quality of life and result in permanent consequences including reckless spending, damaged relationships and poor judgment. Unlike mania, hypomania cannot include psychosis. The hypomanic episodes associated with BP-II must last for at least four days. Commonly, depressive episodes are more frequent and more intense than hypomanic episodes. Additionally, when compared to BP-I, type II presents more frequent depressive episodes and shorter intervals of well-being. The course of BP-II is more chronic and consists of more frequent cycling than the course of BP-I. Finally, BP-II is associated with a greater risk of suicidal thoughts and behaviors than BP-I or unipolar depression. BP-II is no less severe than BP-I, and types I and II present equally severe burdens. BP-II is notoriously difficult to diagnose. Patients usually seek help when they are in a depressed state, or when their hypomanic symptoms manifest themselves in unwanted effects, such as high levels of anxiety, or the seeming inability to focus on tasks. Because many of the symptoms of hypomania are often mistaken for high-functioning behavior or simply attributed to personality, patients are typically not aware of their hypomanic symptoms. In addition, many people with BP-II have periods of normal affect. As a result, when patients seek help, they are very often unable to provide their doctor with all the information needed for an accurate assessment; these individuals are often misdiagnosed with unipolar depression. 
BP-II is more common than BP-I, while BP-II and major depressive disorder have about the same rate of diagnosis. Substance use disorders (which have high co-morbidity with BP-II) and periods of mixed depression may also make it more difficult to accurately identify BP-II. Despite the difficulties, it is important that BP-II individuals be correctly assessed so that they can receive the proper treatment. Antidepressant use, in the absence of mood stabilizers, is correlated with worsening BP-II symptoms.

Causes

Multiple factors contribute to the development of bipolar spectrum disorders, although very few studies have examined the possible causes of BP-II specifically. While no single identifiable dysfunction in a specific neurotransmitter system has been found, preliminary data suggest that calcium signal transmission, the glutamatergic system, and hormonal regulation play a role in the pathophysiology of the disease. The cause of bipolar disorder has been attributed to misfiring neurotransmitters that overstimulate the amygdala, which in turn causes the prefrontal cortex to stop working properly. The patient becomes overwhelmed with emotional stimulation with no way of understanding it, which can trigger mania and exacerbate the effects of depression.

Signs and symptoms

Bipolar disorder is characterized by marked swings in mood, activity, and behavior. BP-II is characterized by periods of hypomania, which may occur before, after, or independently of a depressive episode.

Hypomania

Hypomania is the signature characteristic of BP-II, defined by an experience of elevated mood. A patient's mood is typically cheerful, enthusiastic, euphoric, or irritable.
In addition, they can present with symptoms of inflated self-esteem or grandiosity, decreased need for sleep, talkativeness or pressured speech, flight of ideas or rapid cycling of thoughts, distractibility, increased goal-directed activity, psychomotor agitation, and/or excessive involvement in activities that have a high potential for painful consequences (such as unrestrained buying sprees, sexual indiscretions, or foolish business investments). Hypomania is distinct from mania. During a typical hypomanic episode, patients may present as upbeat, may show signs of poor judgment, or may display increased energy despite lack of sleep, but do not meet the full criteria for an acute manic episode. Patients may display elevated confidence, but do not express delusional thoughts as in mania. They can experience an increase in goal-directed activity and creativity, but do not reach the severity of aimlessness and disorganization. Speech may be rapid, but interruptible. Patients with hypomania never present with psychotic symptoms and do not reach a severity that requires psychiatric hospitalization. For these reasons, hypomania commonly goes unnoticed. Individuals often seek treatment only during a depressive episode, and their history of hypomania may go undiagnosed. Although hypomania may increase functioning, episodes require treatment as they may indicate increasing instability and can precipitate a depressive episode.

Depressive episodes

It is during depressive episodes that BP-II patients often seek help. Symptoms may be syndromal or subsyndromal. Depressive episodes in BP-II can present similarly to those experienced in unipolar depressive disorders. Patients characteristically experience a depressed mood and may describe themselves as feeling sad, gloomy, down in the dumps, or hopeless, for most of the day, nearly every day. In children, this can present as an irritable mood. Most patients report significant fatigue, loss of energy, or tiredness.
Patients or their family members may note diminished interest in usual activities such as sex, hobbies, or daily routines. Many patients report a change in appetite along with associated weight change. Sleep disturbances may be present, and can manifest as problems falling or staying asleep, frequent awakenings, excessive sleep, or difficulties getting up in the morning. Around half of depressed patients develop changes in psychomotor activity, described as slowness in thinking, speaking, or movement. Conversely, they may also present with agitation, an inability to sit still, or wringing of the hands. Changes in posture, speech, facial expression, and grooming can be observed, including slowed speech, poor hygiene, and an unkempt appearance. Other signs and symptoms include feelings of guilt, shame, or helplessness, diminished ability to concentrate, nihilistic thoughts, and suicidal ideation. Many experts in the field have attempted to find reliable differences between BP-I depressive episodes and episodes of major depressive disorder, but the data are inconsistent. However, some clinicians report that patients who presented with a depressive episode but were later diagnosed with bipolar disorder often exhibited hypersomnia, increased appetite, psychomotor retardation, and a history of antidepressant-induced hypomania. Evidence also suggests that BP-II is strongly associated with atypical depression.

Mood episodes with mixed features

A mixed episode is defined by the presence of a hypomanic or depressive episode that is accompanied by symptoms of the opposite polarity. This is commonly referred to as a mood episode with mixed features (e.g. depression with mixed features or hypomania with mixed features), but can also be referred to as mixed episodes or mixed states.
For example, a patient with depression with mixed features may have a depressed mood but simultaneous symptoms of rapid speech, increased energy, and flight of ideas. Conversely, a patient with hypomania with mixed features will present with the full criteria for a hypomanic episode but with concurrent symptoms of decreased appetite, loss of interest, and low energy. Episodes with mixed features can last up to several months. They occur more frequently in patients with an earlier onset of bipolar disorder, are associated with a higher frequency of episodes, and carry a greater risk of substance use, anxiety disorders, and suicidality. In addition, they are associated with increased treatment resistance compared to non-mixed episodes.

Relapse

Bipolar disorder is often a lifelong condition, and patients should be followed up regularly for relapse prevention. Although BP-II is thought to be less severe than BP-I in regard to symptom intensity, BP-II is associated with higher frequencies of rapid cycling and depressive episodes. In the case of a relapse, patients may experience new-onset sleep disturbance, racing thoughts and/or speech, anxiety, irritability, and an increase in emotional intensity. Family and/or friends may notice that patients are arguing more frequently, spending more money than usual, binging more on food, drugs, or alcohol, or suddenly taking on many projects at once. These symptoms often precede a full episode and are considered early warning signs. Psychosocial factors in a person's life can trigger a relapse in patients with BP-II. These include stressful life events, criticism from peers or relatives, and a disrupted circadian rhythm. In addition, the introduction of antidepressant medications can trigger a hypomanic episode.

Comorbid conditions

Comorbid conditions are extremely common in individuals with BP-II. In fact, individuals are twice as likely to present a comorbid disorder than not.
These include anxiety, eating, personality (cluster B), and substance use disorders. For BP-II, the most conservative estimate of the lifetime prevalence of alcohol or other substance use disorders is 20%. In patients with comorbid substance use disorder and BP-II, episodes have a longer duration and treatment compliance decreases. Preliminary studies suggest that comorbid substance use is also linked to increased risk of suicidality.

Diagnosis

BP-II is diagnosed according to the criteria established in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). Alternative diagnostic criteria are established in the World Health Organization's International Classification of Diseases, 11th Revision (ICD-11). The diagnosis is established from the self-reported experiences of patients or their family members, the psychiatric assessment, and the mental status examination. In addition, screening instruments like the Mood Disorders Questionnaire are helpful tools in determining a patient's status on the bipolar spectrum. Certain features have been shown to increase the chances that depressed patients have a bipolar disorder, including atypical symptoms of depression like hypersomnia and hyperphagia, a family history of bipolar disorder, medication-induced hypomania, recurrent or psychotic depression, antidepressant-refractory depression, and early or postpartum depression.

DSM-5 criteria

According to the DSM-5, a patient diagnosed with BP-II will have experienced at least one hypomanic episode, at least one major depressive episode, and no manic episode. Furthermore, the occurrence of the mood episodes is not better explained by schizoaffective disorder, schizophrenia, delusional disorder, or other specified or unspecified schizophrenia spectrum and other psychotic disorders.
The final criterion is that the mood episodes cause clinically significant distress or impairment in social, occupational, or other important areas of functioning (from the depressive symptoms or the unpredictability of cycling between periods of depression and hypomania). A hypomanic episode is established if a patient's symptoms last for most of the day, each day, for at least four days. Furthermore, three or more of the following symptoms must be present: inflated self-esteem or grandiose thoughts, feeling well rested despite getting little sleep (3 hours), talkativeness, racing thoughts, distractibility, increase in goal-directed activity or psychomotor agitation, and excessive involvement in activities with a high risk of painful consequences. Per DSM-5 criteria, a major depressive episode requires the presence of a depressed mood or loss of interest/pleasure in activities (anhedonia). Counting these, at least five of the following nine symptoms must be present for more than two weeks (to the extent that they impair functioning): depressed mood, anhedonia, weight loss/gain, insomnia or hypersomnia, psychomotor agitation or retardation, fatigue, feelings of worthlessness/inappropriate guilt, decreased concentration, or thoughts of death/suicide.

Specifiers:
With current or most recent episode hypomanic or depressed
With partial remission or full remission
With mild, moderate, or severe severity
With anxious distress
With catatonic features
With mood-congruent psychotic features
With peripartum onset
With seasonal pattern (applies only to the pattern of major depressive episodes)
With rapid cycling

ICD-11

According to the ICD-11, a BP-II patient will have experienced one or more hypomanic episodes and one or more major depressive episodes, with no history of a manic episode or mixed episode.
These symptoms cannot be better explained by other diagnoses such as:
Cyclothymia
ADHD
Oppositional defiant disorder
Schizophrenia and other primary psychotic disorders
Substance use disorder
Personality disorders
Other mental illness
Physical issues such as a brain tumor

The specifiers are the same as in the DSM-5, with the exception of catatonic features, and with the addition of noting whether symptoms occurred, with or without psychosis, about 6 weeks after childbirth.

Differential diagnoses

The signs and symptoms of BP-II may overlap significantly with those of other conditions. Thus, a comprehensive history, medication review, and laboratory work are key to diagnosing BP-II and differentiating it from other conditions. The differential diagnosis of BP-II includes unipolar major depression, borderline personality disorder, posttraumatic stress disorder, substance use disorders, and attention deficit hyperactivity disorder. Major differences between BP-I and BP-II have been identified in their clinical features, comorbidity rates, and family histories. During depressive episodes, BP-II patients tend to show higher rates of psychomotor agitation, guilt, shame, suicidal ideation, and suicide attempts. BP-II patients have shown higher lifetime comorbidity rates of phobias, anxiety disorders, substance use, and eating disorders. In addition, there is a higher correlation between BP-II patients and a family history of psychiatric illness, including major depression and substance-related disorders, compared to BP-I. The occurrence rate of psychiatric illness in first-degree relatives of BP-II patients was 26.5%, versus 15.4% in BP-I patients.

Management

Although BP-II is a prevalent condition associated with morbidity and mortality, there has been an absence of robust clinical trials and systematic reviews investigating the efficacy of pharmacologic treatments for the hypomanic and depressive phases of BP-II.
Thus, the current treatment guidelines for the symptoms of BP-II are derived and extrapolated from the treatment guidelines for BP-I, along with the limited randomized controlled trials published in the literature. The treatment of BP-II consists of the following: treatment of hypomania, treatment of major depression, and maintenance therapy for the prevention of relapse of hypomania or depression. As BP-II is a chronic condition, the goal of treatment is to achieve remission of symptoms and to prevent self-harm. Treatment modalities of BP-II include medication-based pharmacotherapy, along with various forms of psychotherapy.

Medications

The most common pharmacologic agents used in the treatment of BP-II include mood stabilizers, antipsychotics, and antidepressants.

Mood stabilizers

Mood stabilizers used in the treatment of the hypomanic and depressive episodes of BP-II include lithium and the anticonvulsant medications valproate, carbamazepine, lamotrigine, and topiramate. There is strong evidence that lithium is effective in treating both the depressive and hypomanic symptoms of BP-II, along with reducing hypomanic switch in patients treated with antidepressants. Furthermore, lithium is the only mood stabilizer to demonstrate a decrease in suicide and self-harm in patients with mood disorders. Due to lithium's narrow therapeutic index, lithium levels must be monitored regularly to prevent lithium toxicity. There is also evidence that the anticonvulsants valproate, lamotrigine, carbamazepine, and topiramate are effective in reducing the symptoms of the hypomanic and depressive episodes of bipolar disorder. Potential mechanisms contributing to these effects include a decrease in brain excitation due to blockade of low-voltage sodium-gated channels, a decrease in glutamate and excitatory amino acids, and potentiation of GABA levels. There is evidence that lamotrigine decreases the risk of relapse in rapid-cycling BP-II.
It is more effective in BP-II than in BP-I, suggesting that lamotrigine is more effective for the treatment of depressive rather than manic episodes. Doses ranging from 100 to 200 mg have been reported to have the most efficacy, while experimental doses of 400 mg have rendered little response. A large, multicenter trial comparing carbamazepine and lithium over two and a half years found that carbamazepine was superior in terms of preventing future episodes of BP-II, although lithium was superior in individuals with BP-I. There is also some evidence for the use of valproate and topiramate, although the results for the use of gabapentin have been disappointing.

Antipsychotics

Antipsychotics are used as a second-line option for hypomanic episodes, typically indicated for patients who do not respond to mood stabilizers. However, quetiapine is the only antipsychotic that has demonstrated efficacy in multiple meta-analyses of randomized controlled trials for treating acute BP-II depression, and it is a first-line option for patients with BP-II depression. Other antipsychotics used to treat BP-II include lurasidone, olanzapine, cariprazine, aripiprazole, asenapine, paliperidone, risperidone, ziprasidone, haloperidol, and chlorpromazine. As a class, first-generation antipsychotics are associated with more movement disorders and anticholinergic side effects than second-generation antipsychotics.

Antidepressants

There is evidence to support the use of SSRI and SNRI antidepressants in BP-II, but the use of these treatments is controversial. Potential risks of antidepressant pharmacotherapy in patients with bipolar disorder include increased mood cycling, development of rapid cycling, dysphoria, and switching to hypomania. In addition, the evidence for their efficacy in bipolar depression is mixed. Thus, in most cases, antidepressant monotherapy in patients with BP-II is not recommended.
However, antidepressants may provide benefit for some patients when used in addition to mood stabilizers and antipsychotics, as the latter reduce the risk of manic/hypomanic switching. The risk still exists, however, and antidepressants should be used with caution.

Non-pharmaceutical therapies

Although medication therapy is the standard of care for treatment of both BP-I and BP-II, additional non-pharmaceutical therapies can also help those with the illness. Benefits include prevention of relapse and improved adherence to maintenance medication. These include psychotherapy (e.g. cognitive behavioral therapy, psychodynamic therapy, psychoanalysis, interpersonal therapy, behavioral therapy, cognitive therapy, and family-focused therapy), social rhythm therapy, art therapy, music therapy, psychoeducation, mindfulness, and light therapy. Meta-analyses in the literature have shown that psychotherapy plus pharmacotherapy is associated with a lower relapse rate than pharmacotherapy alone. However, relapse can still occur, despite continued medication and therapy. People with bipolar disorder may develop dissociation to match each mood they experience. For some, this is done intentionally, as a means to escape trauma or pain from a depressive period, or simply to better organize one's life by setting boundaries for one's perceptions and behaviors.

Prognosis

There is evidence to suggest that BP-II has a more chronic course of illness than BP-I. This constant and pervasive course of the illness leads to an increased risk of suicide and more hypomanic and major depressive episodes, with shorter periods between episodes, than BP-I patients experience. The natural course of BP-II, when left untreated, leads to patients spending the majority of their lives with some symptoms, primarily stemming from depression. Their recurrent depression results in personal distress and disability.
This disability can present itself in the form of psychosocial impairment, which has been suggested to be worse in BP-II patients than in BP-I patients. Another facet of this illness that is associated with a poorer prognosis is rapid cycling, which denotes the occurrence of four or more major depressive, hypomanic, and/or mixed episodes in a 12-month period. Rapid cycling is quite common in those with BP-II, much more so in women than in men (70% vs. 40%), and without treatment leads to added sources of disability and an increased risk of suicide. Women are more prone to rapid cycling between hypomanic and depressive episodes. To improve a patient's prognosis, long-term therapy is recommended for controlling symptoms, maintaining remission, and preventing relapses. With treatment, patients have been shown to present a decreased risk of suicide (especially when treated with lithium) and a reduction in the frequency and severity of their episodes, which in turn moves them toward a stable life and reduces the time they spend ill. To maintain this state of balance, therapy is often continued indefinitely, as around 50% of patients who discontinue it relapse quickly and experience either full-blown episodes or sub-syndromal symptoms that bring significant functional impairments.

Functioning

The deficits in functioning associated with BP-II stem mostly from the recurrent depression that BP-II patients experience. Depressive symptoms are much more disabling than hypomanic symptoms and are potentially as disabling as, or more disabling than, mania symptoms. Functional impairment has been shown to be directly linked with increasing percentages of depressive symptoms, and because sub-syndromal symptoms are more common and more frequent in BP-II, they have been implicated heavily as a major cause of psychosocial disability.
There is evidence that mild depressive symptoms, or even sub-syndromal symptoms, are responsible for the non-recovery of social functioning, which supports the idea that residual depressive symptoms are detrimental to functional recovery in patients being treated for BP-II. It has been suggested that symptom interference with social and interpersonal relationships in BP-II is worse than in other chronic medical illnesses such as cancer. This social impairment can last for years, even after treatment has resolved the mood symptoms. The factors related to this persistent social impairment are residual depressive symptoms, limited insight into the illness (a very common occurrence in patients with BP-II), and impaired executive functioning. Impaired executive functioning is directly tied to poor psychosocial functioning, a common problem in patients with BP-II. The impact on a patient's psychosocial functioning stems from the depressive symptoms (more common in BP-II than in BP-I). An increase in the severity of these symptoms seems to correlate with a significant increase in psychosocial disability. Psychosocial disability can present itself in poor semantic memory, which in turn affects other cognitive domains like verbal memory and, as mentioned earlier, executive functioning, leading to a direct and persisting impact on psychosocial functioning. Abnormal semantic memory organization can distort thoughts and lead to the formation of delusions, and may affect speech and communication, which can lead to interpersonal issues. BP-II patients have also been shown to present worse cognitive functioning than patients with BP-I, though they demonstrate about the same disability in occupational functioning, interpersonal relationships, and autonomy.
This disruption in cognitive functioning takes a toll on their ability to function in the workplace, which leads to high rates of work loss in BP-II patient populations. After treatment and while in remission, BP-II patients tend to report good psychosocial functioning, but they still score lower than patients without the disorder. These lasting impacts further suggest that a prolonged period of untreated BP-II can lead to permanent adverse effects on functioning.

Recovery and recurrence

BP-II has a chronic relapsing nature. It has been suggested that BP-II patients have a higher degree of relapse than BP-I patients. Generally, within four years of an episode, around 60% of patients will relapse into another episode. Some patients are symptomatic half the time, either with full episodes or with symptoms that fall just below the threshold of an episode. Because of the nature of the illness, long-term therapy is the best option, aiming not only to control the symptoms but also to maintain sustained remission and prevent relapses. Even with treatment, patients do not always regain full functioning, especially in the social realm. There is a very clear gap between symptomatic recovery and full functional recovery for both BP-I and BP-II patients. As such, and because those with BP-II spend more time with depressive symptoms that do not quite qualify as a major depressive episode, the best chance for recovery lies in therapeutic interventions that focus on residual depressive symptoms and aim for improvement in psychosocial and cognitive functioning. Even with treatment, a certain amount of responsibility is placed in the patient's hands; they have to be able to assume responsibility for their illness by accepting their diagnosis, taking the required medication, and seeking help when needed in order to do well in the future.
Treatment often extends beyond remission: the treatment that worked is continued during the continuation phase (lasting anywhere from 6 to 12 months), and maintenance can last 1–2 years or, in some cases, indefinitely. One of the treatments of choice is lithium, which has been shown to be very beneficial in reducing the frequency and severity of depressive episodes. Lithium prevents mood relapse and works especially well in BP-II patients who experience rapid cycling. Almost all BP-II patients who take lithium have a decrease in the amount of time they spend ill and a decrease in mood episodes. Along with medication, other forms of therapy have been shown to be beneficial for BP-II patients. A treatment called a "well-being plan" serves several purposes: it informs the patients, protects them from future episodes, teaches them to add value to their life, and works toward building a strong sense of self to fend off depression and reduce the desire to succumb to the seductive hypomanic highs. The plan has to aim high; otherwise, patients will relapse into depression. A large part of this plan involves the patient being very aware of warning signs and stress triggers so that they take an active role in their recovery and prevention of relapse. Mortality Several studies have shown that the risk of suicide is slightly higher in patients who have BP-II than in those with BP-I. In a summary of several lifetime studies, it was found that 32.4% of BP-I patients experienced suicidal ideation or suicide attempts, compared to 36% of BP-II patients. Bipolar disorders, in general, are the third leading cause of death in 15- to 24-year-olds. BP-II patients were also found to employ more lethal means and have more completed suicides overall. BP-II patients have several risk factors that increase their risk of suicide. 
The illness is very recurrent and results in severe disabilities, interpersonal relationship problems, barriers to academic, financial, and vocational goals, and a loss of social standing in their community, all of which increase the likelihood of suicide. Mixed symptoms and rapid cycling, both very common in BP-II, are also associated with an increased risk of suicide. The tendency for BP-II to be misdiagnosed and treated ineffectively, or not at all in some cases, leads to an increased risk. As a result of the high suicide risk for this group, reducing the risk and preventing attempts remains a central part of the treatment; a combination of self-monitoring, close supervision by a therapist, and faithful adherence to the medication regimen will help to reduce the risk and the likelihood of suicide. Suicide is a common endpoint for many patients with severe psychiatric illness. The mood disorders (depression and bipolar) are by far the most common psychiatric conditions associated with suicide. At least 25% to 50% of patients with bipolar disorder also attempt suicide at least once. Aside from lithium—which is the most demonstrably effective treatment against suicide—little is known about contributions of specific mood-altering treatments to minimizing mortality rates in persons with either major mood disorders or bipolar depression specifically. Suicide is usually a manifestation of severe psychiatric distress that is often associated with a diagnosable and treatable form of depression or other mental illness. In a clinical setting, an assessment of suicidal risk must precede any attempt to treat psychiatric illness. Epidemiology The global estimated lifetime prevalence of bipolar disorder among adults ranges from 1 to 3 percent. The annual incidence is estimated to vary from 0.3 to 1.2 percent worldwide. According to the World Mental Health Survey Initiative, the lifetime prevalence of BP-II was found to be 0.4%, with a 12-month prevalence of 0.3%. 
Other meta-analyses have found lifetime prevalence of BP-II of up to 1.57%. In the United States, the estimated lifetime prevalence of BP-II was found to be 1.1%, with a 12-month prevalence of 0.8%. The mean age of onset for BP-II was 20 years. Thus far, there have been no studies that have conclusively demonstrated that an unequal distribution of bipolar disorders across sex and ethnicity exists. A vast majority of studies and meta-analyses do not differentiate between BP-I and BP-II, and current epidemiology data may not accurately describe true prevalence and incidence. In addition, BP-II is underdiagnosed in practice, and it is easy to miss milder forms of the condition. History In 19th-century psychiatry, mania covered a broad range of intensity, and hypomania was equated by some to concepts of 'partial insanity' or monomania. A more specific usage was advanced by the German neuro-psychiatrist Emanuel Ernst Mendel in 1881, who wrote "I recommend (taking under consideration the word used by Hippocrates) to name those types of mania that show a less severe phenomenological picture, 'hypomania'". Narrower operational definitions of hypomania were developed in the 1960s and 1970s. The first diagnostic distinction to be made between manic-depression involving mania and that involving hypomania came from Carl Gustav Jung in 1903. In his paper, Jung introduced the non-psychotic version of the illness with the statement, "I would like to publish a number of cases whose peculiarity consists in chronic hypomanic behavior" where "it is not a question of real mania at all but of a hypomanic state which cannot be regarded as psychotic." Jung illustrated the hypomanic variation with five case histories, each involving hypomanic behavior, occasional bouts of depression, and mixed mood states, which involved personal and interpersonal upheaval for each patient. In 1975, Jung's original distinction between mania and hypomania gained support. 
Fieve and Dunner published an article recognizing that only individuals in a manic state require hospitalization. It was proposed that the presentation of either the one state or the other differentiates two distinct diseases; the proposition was initially met with skepticism. However, studies since confirm that BP-II is a phenomenologically distinct disorder. Empirical evidence, combined with treatment considerations, led the DSM-IV Mood Disorders Work Group to add BP-II as its own entity in the 1994 publication. Only one other mood disorder was added to this edition, indicating the conservative nature of the DSM-IV work group. In May 2013, the DSM-5 was released. Two revisions to the existing BP-II criteria were anticipated. The first expected change would reduce the required duration of a hypomanic state from four to two days. The second would allow hypomania to be diagnosed without the manifestation of elevated mood; that is, increased energy/activity would be sufficient. The rationale behind the latter revision is that some individuals with BP-II manifest only visible changes in energy. Without presenting elevated mood, these individuals are commonly misdiagnosed with major depressive disorder. Consequently, they receive prescriptions for antidepressants, which, unaccompanied by mood stabilizers, may induce rapid cycling or mixed states. Society and culture Heath Black revealed in his autobiography, Black, that he has been diagnosed with BP-II. Maria Bamford has been diagnosed with BP-II. Geoff Bullock, singer-songwriter, was diagnosed with BP-II. Mariah Carey was diagnosed with BP-II in 2001. In 2018, she publicly revealed the diagnosis and began actively seeking treatment in the form of therapy and medication. Charmaine Dragun, former Australian journalist/newsreader. An inquest concluded she had BP-II. Joe Gilgun has been diagnosed with BP-II. Shane Hmiel has been diagnosed with BP-II. Jesse Jackson Jr. has been diagnosed with BP-II. 
Thomas Eagleton received a diagnosis of BP-II from Dr. Frederick K. Goodwin. Carrie Fisher had been diagnosed with BP-II. Demi Lovato has been diagnosed with BP-II. Evan Perry, subject of the documentary Boy Interrupted, was diagnosed with BP-II. Richard Rossi, filmmaker, musician, and maverick minister, was diagnosed with BP-II. Rumer has been diagnosed with BP-II. Catherine Zeta-Jones received treatment for BP-II after dealing with the stress of her husband's throat cancer. According to her publicist, Zeta-Jones made a decision to check into a mental health facility for a brief stay.
Biology and health sciences
Mental disorders
Health
19265670
https://en.wikipedia.org/wiki/Centrifugal%20force
Centrifugal force
Centrifugal force is a fictitious force in Newtonian mechanics (also called an "inertial" or "pseudo" force) that appears to act on all objects when viewed in a rotating frame of reference. It appears to be directed radially away from the axis of rotation of the frame. The magnitude of the centrifugal force F on an object of mass m at the distance r from the axis of a rotating frame of reference with angular velocity ω is: F = mω²r. This fictitious force is often applied to rotating devices, such as centrifuges, centrifugal pumps, centrifugal governors, and centrifugal clutches, and in centrifugal railways, planetary orbits and banked curves, when they are analyzed in a non–inertial reference frame such as a rotating coordinate system. The term has sometimes also been used for the reactive centrifugal force, a real frame-independent Newtonian force that exists as a reaction to a centripetal force in some scenarios. History From 1659, the Neo-Latin term vi centrifuga ("centrifugal force") is attested in Christiaan Huygens' notes and letters. Note that centrum in Latin means "center" and -fugus (from fugere) means "fleeing, avoiding". Thus, centrifugus means "fleeing from the center" in a literal translation. In 1673, in Horologium Oscillatorium, Huygens writes (as translated by Richard J. Blackwell): There is another kind of oscillation in addition to the one we have examined up to this point; namely, a motion in which a suspended weight is moved around through the circumference of a circle. From this we were led to the construction of another clock at about the same time we invented the first one. [...] I originally intended to publish here a lengthy description of these clocks, along with matters pertaining to circular motion and centrifugal force, as it might be called, a subject about which I have more to say than I am able to do at present. 
But, in order that those interested in these things can sooner enjoy these new and not useless speculations, and in order that their publication not be prevented by some accident, I have decided, contrary to my plan, to add this fifth part [...]. The same year, Isaac Newton received Huygens' work via Henry Oldenburg and replied "I pray you return [Mr. Huygens] my humble thanks [...] I am glad we can expect another discourse of the vis centrifuga, which speculation may prove of good use in natural philosophy and astronomy, as well as mechanics". In 1687, in Principia, Newton further develops vis centrifuga ("centrifugal force"). Around this time, the concept is also further evolved by Newton, Gottfried Wilhelm Leibniz, and Robert Hooke. In the late 18th century, the modern conception of the centrifugal force evolved as a "fictitious force" arising in a rotating reference frame. Centrifugal force has also played a role in debates in classical mechanics about detection of absolute motion. Newton suggested two arguments to answer the question of whether absolute rotation can be detected: the rotating bucket argument, and the rotating spheres argument. According to Newton, in each scenario the centrifugal force would be observed in the object's local frame (the frame where the object is stationary) only if the frame were rotating with respect to absolute space. Around 1883, Mach's principle was proposed where, instead of absolute rotation, the motion of the distant stars relative to the local inertial frame gives rise through some (hypothetical) physical law to the centrifugal force and other inertia effects. Today's view is based upon the idea of an inertial frame of reference, which privileges observers for which the laws of physics take on their simplest form, and in particular, frames that do not use centrifugal forces in their equations of motion in order to describe motions correctly. 
Around 1914, the analogy between centrifugal force (sometimes used to create artificial gravity) and gravitational forces led to the equivalence principle of general relativity. Introduction Centrifugal force is an outward force apparent in a rotating reference frame. It does not exist when a system is described relative to an inertial frame of reference. All measurements of position and velocity must be made relative to some frame of reference. For example, an analysis of the motion of an object in an airliner in flight could be made relative to the airliner, to the surface of the Earth, or even to the Sun. A reference frame that is at rest (or one that moves with no rotation and at constant velocity) relative to the "fixed stars" is generally taken to be an inertial frame. Any system can be analyzed in an inertial frame (and so with no centrifugal force). However, it is often more convenient to describe a rotating system by using a rotating frame—the calculations are simpler, and descriptions more intuitive. When this choice is made, fictitious forces, including the centrifugal force, arise. In a reference frame rotating about an axis through its origin, all objects, regardless of their state of motion, appear to be under the influence of a radially (from the axis of rotation) outward force that is proportional to their mass, to the distance from the axis of rotation of the frame, and to the square of the angular velocity of the frame. This is the centrifugal force. As humans usually experience centrifugal force from within the rotating reference frame, e.g. on a merry-go-round or vehicle, this is much better known than centripetal force. Motion relative to a rotating frame results in another fictitious force: the Coriolis force. If the rate of rotation of the frame changes, a third fictitious force (the Euler force) is required. 
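The proportionalities just stated (to mass, to distance from the axis, and to the square of the angular velocity) can be illustrated with a short Python sketch; the rider's mass, radius, and rotation period below are made-up numbers, used only for illustration:

```python
import math

def centrifugal_force(mass, omega, radius):
    """Magnitude of the centrifugal force, F = m * omega**2 * r,
    in a frame rotating at angular velocity omega (rad/s)."""
    return mass * omega**2 * radius

def coriolis_force(mass, omega, speed):
    """Magnitude of the Coriolis force, F = 2 * m * omega * v, for motion
    at speed v perpendicular to the rotation axis."""
    return 2 * mass * omega * speed

# A 70 kg rider standing 2 m from the axis of a merry-go-round that
# completes one turn every 4 s (all values chosen for illustration).
omega = 2 * math.pi / 4.0                  # rad/s
print(centrifugal_force(70, omega, 2.0))   # ≈ 345 N, directed outward
print(coriolis_force(70, omega, 0.5))      # ≈ 110 N if walking at 0.5 m/s
```

Doubling the rotation rate quadruples the centrifugal force, while doubling the radius or the mass only doubles it.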
These fictitious forces are necessary for the formulation of correct equations of motion in a rotating reference frame and allow Newton's laws to be used in their normal form in such a frame (with one exception: the fictitious forces do not obey Newton's third law: they have no equal and opposite counterparts). Newton's third law requires the counterparts to exist within the same frame of reference, hence centrifugal and centripetal force, which do not, are not action and reaction (as is sometimes erroneously contended). Examples Vehicle driving round a curve A common experience that gives rise to the idea of a centrifugal force is encountered by passengers riding in a vehicle, such as a car, that is changing direction. If a car is traveling at a constant speed along a straight road, then a passenger inside is not accelerating and, according to Newton's second law of motion, the net force acting on them is therefore zero (all forces acting on them cancel each other out). If the car enters a curve that bends to the left, the passenger experiences an apparent force that seems to be pulling them towards the right. This is the fictitious centrifugal force. It is needed within the passengers' local frame of reference to explain their sudden tendency to start accelerating to the right relative to the car—a tendency which they must resist by applying a rightward force to the car (for instance, a frictional force against the seat) in order to remain in a fixed position inside. Since they push the seat toward the right, Newton's third law says that the seat pushes them towards the left. The centrifugal force must be included in the passenger's reference frame (in which the passenger remains at rest): it counteracts the leftward force applied to the passenger by the seat, and explains why this otherwise unbalanced force does not cause them to accelerate. 
However, it would be apparent to a stationary observer watching from an overpass above that the frictional force exerted on the passenger by the seat is not being balanced; it constitutes a net force to the left, causing the passenger to accelerate toward the inside of the curve, as they must in order to keep moving with the car rather than proceeding in a straight line as they otherwise would. Thus the "centrifugal force" they feel is the result of a "centrifugal tendency" caused by inertia. Similar effects are encountered in aeroplanes and roller coasters where the magnitude of the apparent force is often reported in "G's". Stone on a string If a stone is whirled round on a string, in a horizontal plane, the only real force acting on the stone in the horizontal plane is applied by the string (gravity acts vertically). There is a net force on the stone in the horizontal plane which acts toward the center. In an inertial frame of reference, were it not for this net force acting on the stone, the stone would travel in a straight line, according to Newton's first law of motion. In order to keep the stone moving in a circular path, a centripetal force, in this case provided by the string, must be continuously applied to the stone. As soon as it is removed (for example if the string breaks) the stone moves in a straight line, as viewed from above. In this inertial frame, the concept of centrifugal force is not required as all motion can be properly described using only real forces and Newton's laws of motion. In a frame of reference rotating with the stone around the same axis as the stone, the stone is stationary. However, the force applied by the string is still acting on the stone. If one were to apply Newton's laws in their usual (inertial frame) form, one would conclude that the stone should accelerate in the direction of the net applied force—towards the axis of rotation—which it does not do. 
The centrifugal force and other fictitious forces must be included along with the real forces in order to apply Newton's laws of motion in the rotating frame. Earth The Earth constitutes a rotating reference frame because it rotates once every 23 hours and 56 minutes around its axis. Because the rotation is slow, the fictitious forces it produces are often small, and in everyday situations can generally be neglected. Even in calculations requiring high precision, the centrifugal force is generally not explicitly included, but rather lumped in with the gravitational force: the strength and direction of the local "gravity" at any point on the Earth's surface is actually a combination of gravitational and centrifugal forces. However, the fictitious forces can be of arbitrary size. For example, in an Earth-bound reference system (where the earth is represented as stationary), the fictitious force (the net of Coriolis and centrifugal forces) is enormous and is responsible for the Sun orbiting around the Earth. This is due to the large mass and velocity of the Sun (relative to the Earth). Weight of an object at the poles and on the equator If an object is weighed with a simple spring balance at one of the Earth's poles, there are two forces acting on the object: the Earth's gravity, which acts in a downward direction, and the equal and opposite restoring force in the spring, acting upward. Since the object is stationary and not accelerating, there is no net force acting on the object and the force from the spring is equal in magnitude to the force of gravity on the object. In this case, the balance shows the value of the force of gravity on the object. When the same object is weighed on the equator, the same two real forces act upon the object. However, the object is moving in a circular path as the Earth rotates and therefore experiencing a centripetal acceleration. 
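The size of this centripetal acceleration at the equator can be estimated with a short Python sketch (approximate values for Earth's sidereal rotation period, equatorial radius, and a nominal surface gravity):

```python
import math

# Apparent-weight reduction at the equator, as seen in the Earth-fixed
# (rotating) frame. All constants are approximate.
T = 86164.1              # sidereal rotation period, s
R = 6.378e6              # equatorial radius, m
g = 9.81                 # nominal surface gravity, m/s^2

omega = 2 * math.pi / T  # Earth's angular velocity, rad/s
a_centrifugal = omega**2 * R
print(a_centrifugal)      # ≈ 0.034 m/s^2
print(a_centrifugal / g)  # ≈ 0.0035, i.e. roughly a 0.3% reduction in weight
```

This captures only the centrifugal contribution; as noted in the text, the total observed pole-to-equator weight difference also includes the variation of gravity itself due to Earth's oblateness.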
When considered in an inertial frame (that is to say, one that is not rotating with the Earth), the non-zero acceleration means that force of gravity will not balance with the force from the spring. In order to have a net centripetal force, the magnitude of the restoring force of the spring must be less than the magnitude of force of gravity. This reduced restoring force in the spring is reflected on the scale as less weight — about 0.3% less at the equator than at the poles. In the Earth reference frame (in which the object being weighed is at rest), the object does not appear to be accelerating; however, the two real forces, gravity and the force from the spring, are not the same magnitude and do not balance. The centrifugal force must be included to make the sum of the forces be zero to match the apparent lack of acceleration. Note: In fact, the observed weight difference is more — about 0.53%. Earth's gravity is a bit stronger at the poles than at the equator, because the Earth is not a perfect sphere, so an object at the poles is slightly closer to the center of the Earth than one at the equator; this effect combines with the centrifugal force to produce the observed weight difference. Derivation For the following formalism, the rotating frame of reference is regarded as a special case of a non-inertial reference frame that is rotating relative to an inertial reference frame denoted the stationary frame. Time derivatives in a rotating frame In a rotating frame of reference, the time derivatives of any vector function of time—such as the velocity and acceleration vectors of an object—will differ from its time derivatives in the stationary frame. If f₁, f₂, f₃ are the components of a vector f with respect to unit vectors i, j, k directed along the axes of the rotating frame (i.e. f = f₁i + f₂j + f₃k), then the first time derivative of f with respect to the rotating frame is, by definition, (df/dt)rot = (df₁/dt)i + (df₂/dt)j + (df₃/dt)k. 
If the absolute angular velocity of the rotating frame is ω, then the derivative of f with respect to the stationary frame is related to the rotating-frame derivative by the equation: (df/dt)abs = (df/dt)rot + ω × f, where × denotes the vector cross product. In other words, the rate of change of f in the stationary frame is the sum of its apparent rate of change in the rotating frame and a rate of rotation ω × f attributable to the motion of the rotating frame. The vector ω has magnitude equal to the rate of rotation and is directed along the axis of rotation according to the right-hand rule. Acceleration Newton's law of motion for a particle of mass m written in vector form is: F = ma, where F is the vector sum of the physical forces applied to the particle and a is the absolute acceleration (that is, acceleration in an inertial frame) of the particle, given by: a = (d²r/dt²)abs, where r is the position vector of the particle (not to be confused with radius, as used above.) By applying the transformation above from the stationary to the rotating frame three times (twice to dr/dt and once to d(dr/dt)/dt), the absolute acceleration of the particle can be written as: a = (d²r/dt²)rot + (dω/dt) × r + 2ω × (dr/dt)rot + ω × (ω × r). Force The apparent acceleration in the rotating frame is (d²r/dt²)rot. An observer unaware of the rotation would expect this to be zero in the absence of outside forces. However, Newton's laws of motion apply only in the inertial frame and describe dynamics in terms of the absolute acceleration a. Therefore, the observer perceives the extra terms as contributions due to fictitious forces. These terms in the apparent acceleration are independent of mass; so it appears that each of these fictitious forces, like gravity, pulls on an object in proportion to its mass. When these forces are added, the equation of motion has the form: F − m(dω/dt) × r − 2mω × (dr/dt)rot − mω × (ω × r) = m(d²r/dt²)rot. From the perspective of the rotating frame, the additional force terms are experienced just like the real external forces and contribute to the apparent acceleration. 
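The transformation can be checked numerically in the simplest case: a particle at rest in the rotating frame, where every term vanishes except ω × (ω × r). A minimal Python sketch with illustrative values:

```python
import math

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# Particle at rest in a frame rotating at constant angular velocity about z.
# Its apparent velocity and acceleration vanish, so the absolute acceleration
# reduces to the single term omega x (omega x r).
omega = (0.0, 0.0, 2.0)   # rad/s about the z-axis (illustrative value)
r     = (3.0, 0.0, 0.0)   # m, position measured from the axis

a_abs = cross(omega, cross(omega, r))
print(a_abs)              # (-12.0, 0.0, 0.0): magnitude omega^2 * r, directed
                          # toward the axis -- the centripetal acceleration.

# The centrifugal force on a mass m is the equal-magnitude outward term
# -m * omega x (omega x r) appearing on the force side of the equation.
m = 1.5                   # kg (illustrative)
F_cf = tuple(-m * c for c in a_abs)
print(F_cf[0])            # 18.0 N: radially outward, equal to m * omega^2 * r
```

The sign flip between the two printed results is the whole story: in the inertial frame the net real force is centripetal (inward), while in the rotating frame the same physics appears as an outward centrifugal force balancing it.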
The additional terms on the force side of the equation can be recognized as, reading from left to right, the Euler force −m(dω/dt) × r, the Coriolis force −2mω × (dr/dt)rot, and the centrifugal force −mω × (ω × r), respectively. Unlike the other two fictitious forces, the centrifugal force always points radially outward from the axis of rotation of the rotating frame, with magnitude mω²ρ, where ρ is the component of the position vector perpendicular to ω, and unlike the Coriolis force in particular, it is independent of the motion of the particle in the rotating frame. As expected, for a non-rotating inertial frame of reference (ω = 0) the centrifugal force and all other fictitious forces disappear. Similarly, as the centrifugal force is proportional to the distance from the object to the axis of rotation of the frame, the centrifugal force vanishes for objects that lie upon the axis. Absolute rotation Two scenarios were suggested by Newton to answer the question of whether the absolute rotation of a local frame can be detected; that is, if an observer can decide whether an observed object is rotating or if the observer is rotating. The shape of the surface of water rotating in a bucket. The shape of the surface becomes concave to balance the centrifugal force against the other forces upon the liquid. The tension in a string joining two spheres rotating about their center of mass. The tension in the string will be proportional to the centrifugal force on each sphere as it rotates around the common center of mass. In these scenarios, the effects attributed to centrifugal force are only observed in the local frame (the frame in which the object is stationary) if the object is undergoing absolute rotation relative to an inertial frame. By contrast, in an inertial frame, the observed effects arise as a consequence of the inertia and the known forces without the need to introduce a centrifugal force. 
Based on this argument, the privileged frame, wherein the laws of physics take on the simplest form, is a stationary frame in which no fictitious forces need to be invoked. Within this view of physics, any other phenomenon that is usually attributed to centrifugal force can be used to identify absolute rotation. For example, the oblateness of a sphere of freely flowing material is often explained in terms of centrifugal force. The oblate spheroid shape reflects, following Clairaut's theorem, the balance between containment by gravitational attraction and dispersal by centrifugal force. That the Earth is itself an oblate spheroid, bulging at the equator where the radial distance and hence the centrifugal force is larger, is taken as evidence of its absolute rotation. Applications The operations of numerous common rotating mechanical systems are most easily conceptualized in terms of centrifugal force. For example: A centrifugal governor regulates the speed of an engine by using spinning masses that move radially, adjusting the throttle as the engine changes speed. In the reference frame of the spinning masses, centrifugal force causes the radial movement. A centrifugal clutch is used in small engine-powered devices such as chain saws, go-karts and model helicopters. It allows the engine to start and idle without driving the device but automatically and smoothly engages the drive as the engine speed rises. Inertial drum brake ascenders used in rock climbing and the inertia reels used in many automobile seat belts operate on the same principle. Centrifugal forces can be used to generate artificial gravity, as in proposed designs for rotating space stations. The Mars Gravity Biosatellite would have studied the effects of Mars-level gravity on mice with gravity simulated in this way. Spin casting and centrifugal casting are production methods that use centrifugal force to disperse liquid metal or plastic throughout the negative space of a mold. 
Centrifuges are used in science and industry to separate substances. In the reference frame spinning with the centrifuge, the centrifugal force induces a hydrostatic pressure gradient in fluid-filled tubes oriented perpendicular to the axis of rotation, giving rise to large buoyant forces which push low-density particles inward. Elements or particles denser than the fluid move outward under the influence of the centrifugal force. This is effectively Archimedes' principle as generated by centrifugal force as opposed to being generated by gravity. Some amusement rides make use of centrifugal forces. For instance, a Gravitron's spin forces riders against a wall and allows riders to be elevated above the machine's floor in defiance of Earth's gravity. Nevertheless, all of these systems can also be described without requiring the concept of centrifugal force, in terms of motions and forces in a stationary frame, at the cost of taking somewhat more care in the consideration of forces and motions within the system. Other uses of the term While the majority of the scientific literature uses the term centrifugal force to refer to the particular fictitious force that arises in rotating frames, there are a few limited instances in the literature of the term applied to other distinct physical concepts. In Lagrangian mechanics One of these instances occurs in Lagrangian mechanics. Lagrangian mechanics formulates mechanics in terms of generalized coordinates {q_k}, which can be as simple as the usual polar coordinates or a much more extensive list of variables. Within this formulation the motion is described in terms of generalized forces, using in place of Newton's laws the Euler–Lagrange equations. Among the generalized forces, those involving the square of the time derivatives {(dq_k/dt)²} are sometimes called centrifugal forces. 
In the case of motion in a central potential the Lagrangian centrifugal force has the same form as the fictitious centrifugal force derived in a co-rotating frame. However, the Lagrangian use of "centrifugal force" in other, more general cases has only a limited connection to the Newtonian definition. As a reactive force In another instance the term refers to the reaction force to a centripetal force, or reactive centrifugal force. A body undergoing curved motion, such as circular motion, is accelerating toward a center at any particular point in time. This centripetal acceleration is provided by a centripetal force, which is exerted on the body in curved motion by some other body. In accordance with Newton's third law of motion, the body in curved motion exerts an equal and opposite force on the other body. This reactive force is exerted by the body in curved motion on the other body that provides the centripetal force and its direction is from that other body toward the body in curved motion. This reaction force is sometimes described as a centrifugal inertial reaction, that is, a force that is centrifugally directed, which is a reactive force equal and opposite to the centripetal force that is curving the path of the mass. The concept of the reactive centrifugal force is sometimes used in mechanics and engineering. It is sometimes referred to as just centrifugal force rather than as reactive centrifugal force although this usage is deprecated in elementary mechanics.
Physical sciences
Classical mechanics
Physics
1777481
https://en.wikipedia.org/wiki/Clapboard
Clapboard
Clapboard, also called bevel siding, lap siding, and weatherboard, with regional variation in the definition of those terms, is wooden siding of a building in the form of horizontal boards, often overlapping. Clapboard, in modern American usage, is a word for long, thin boards used to cover walls and (formerly) roofs of buildings. Historically, it has also been called clawboard and cloboard. In the United Kingdom, Australia and New Zealand, the term weatherboard is always used. An older meaning of "clapboard" is small split pieces of oak imported from Germany for use as barrel staves; the name is a partial translation (from a verb meaning "to fit") of a Middle Dutch word and is related to a German cognate. Types Riven Clapboards were originally riven radially by hand producing triangular or "feather-edged" sections, attached thin side up and overlapped thick over thin to shed water. Radially sawn Later, the boards were radially sawn in a type of sawmill called a clapboard mill, producing vertical-grain clapboards. The more commonly used boards in New England are vertical-grain boards. Depending on the diameter of the log, cuts are made to a set depth along the full length of the log. Each time the log turns for the next cut, it is rotated slightly, until it has turned a full 360°. This gives the radially sawn clapboard its taper and true vertical grain. Flat-sawn Flat-grain clapboards are cut tangent to the annual growth rings of the tree. As this technique was common in most parts of the British Isles, it was carried by immigrants to their colonies in the Americas and in Australia and New Zealand. Flat-sawn wood cups more and does not hold paint as well as radially sawn wood. Chamferboard Chamferboards are an Australian form of weatherboarding using tongue-and-groove joints to link the boards together to give a flatter external appearance than regular angled weatherboards. Finger jointed Some modern clapboards are made up of shorter pieces of wood finger jointed together with an adhesive. 
Wood species In North America clapboards were historically made of split oak, pine and spruce. Modern clapboards are available in red cedar and pine. In some areas, clapboards were traditionally left as raw wood, relying upon good air circulation and the use of 'semi-hardwoods' to keep the boards from rotting. These boards eventually go grey as the tannins are washed out from the wood. More recently clapboard has been tarred or painted, traditionally black or white due to locally occurring minerals or pigments. In modern clapboard these colors remain popular, but a far wider variety is available thanks to chemical pigments and stains. Clapboard houses may be found in most parts of the British Isles, and the style may be part of all types of traditional building, from cottages to windmills, shops to workshops, as well as many others. In New Zealand, clapboard housing dominated building before 1960. Clapboard, with a corrugated iron roof, was found to be a cost-effective building style. After the big earthquakes of 1855 and 1931, wooden buildings were perceived as being less vulnerable to damage. Clapboard is always referred to as weatherboard in New Zealand. Newer, cheaper designs often imitate the form of clapboard construction as siding made of vinyl (uPVC), aluminum, fiber cement, or other man-made materials. These materials can provide a lightweight alternative to wooden cladding.
Technology
Building materials
null
1778796
https://en.wikipedia.org/wiki/Substance%20dependence
Substance dependence
Substance dependence, also known as drug dependence, is a biopsychological condition in which an individual's functioning depends on continued consumption of a psychoactive substance: repeated use produces an adaptive state, so that cessation brings on withdrawal, which in turn drives re-consumption of the drug. A drug addiction, a distinct concept from substance dependence, is defined as compulsive, out-of-control drug use, despite negative consequences. An addictive drug is a drug which is both rewarding and reinforcing. ΔFosB, a gene transcription factor, is now known to be a critical component and common factor in the development of virtually all forms of behavioral and drug addictions, but not dependence. The International Classification of Diseases classifies substance dependence as a mental and behavioural disorder. Within the framework of the 4th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), substance dependence is redefined as a drug addiction, and can be diagnosed without the occurrence of a withdrawal syndrome. It was described accordingly: "When an individual persists in use of alcohol or other drugs despite problems related to use of the substance, substance dependence may be diagnosed. Compulsive and repetitive use may result in tolerance to the effect of the drug and withdrawal symptoms when use is reduced or stopped. This, along with Substance Abuse are considered Substance Use Disorders." In the DSM-5 (released in 2013), substance abuse and substance dependence were eliminated and replaced with the category of substance use disorders. This was done because "the tolerance and withdrawal that previously defined dependence are actually very normal responses to prescribed medications that affect the central nervous system and do not necessarily indicate the presence of an addiction." 
Withdrawal Withdrawal is the body's reaction to abstaining from a substance upon which a person has developed a dependence syndrome. When dependence has developed, cessation of substance-use produces an unpleasant state, which promotes continued drug use through negative reinforcement; i.e., the drug is used to escape or avoid re-entering the associated withdrawal state. The withdrawal state may include physical-somatic symptoms (physical dependence), emotional-motivational symptoms (psychological dependence), or both. Chemical and hormonal imbalances may arise if the substance is not re-introduced, and psychological stress may also result. Infants also experience substance withdrawal, known as neonatal abstinence syndrome (NAS), which can have severe and life-threatening effects. Addiction to drugs such as alcohol in expectant mothers not only causes NAS, but also an array of other issues which can continually affect the infant throughout their lifetime. Risk factors Dependence potential The dependence potential or dependence liability of a drug varies from substance to substance, and from individual to individual. Dose, frequency, pharmacokinetics of a particular substance, route of administration, and time are critical factors for developing a drug dependence. An article in The Lancet compared the harm and dependence liability of 20 drugs, using a scale from zero to three for physical dependence, psychological dependence, and pleasure to create a mean score for dependence. Selected results can be seen in the chart below. Capture rates Capture rates enumerate the percentage of users who reported that they had become dependent on their respective drug at some point. 
Biomolecular mechanisms Psychological dependence Two factors have been identified as playing pivotal roles in psychological dependence: the neuropeptide "corticotropin-releasing factor" (CRF) and the gene transcription factor "cAMP response element binding protein" (CREB). The nucleus accumbens (NAcc) is one brain structure that has been implicated in the psychological component of drug dependence. In the NAcc, CREB is activated by cyclic adenosine monophosphate (cAMP) immediately after a high and triggers changes in gene expression that affect proteins such as dynorphin; dynorphin peptides reduce dopamine release into the NAcc by temporarily inhibiting the reward pathway. A sustained activation of CREB thus forces a larger dose to be taken to reach the same effect. In addition, it leaves the user feeling generally depressed and dissatisfied, and unable to find pleasure in previously enjoyable activities, often leading to a return to the drug for another dose. In addition to CREB, it is hypothesized that stress mechanisms play a role in dependence. Koob and Kreek have hypothesized that during drug use, CRF activates the hypothalamic–pituitary–adrenal axis (HPA axis) and other stress systems in the extended amygdala. This activation influences the dysregulated emotional state associated with psychological dependence. They found that as drug use escalates, so does the presence of CRF in human cerebrospinal fluid. In rat models, the separate use of CRF inhibitors and CRF receptor antagonists both decreased self-administration of the drug of study. Other studies in this review showed dysregulation of other neuropeptides that affect the HPA axis, including enkephalin, an endogenous opioid peptide that regulates pain. It also appears that μ-opioid receptors, which enkephalin acts upon, are influential in the reward system and can regulate the expression of stress hormones. 
Increased expression of AMPA receptors in nucleus accumbens MSNs is a potential mechanism of aversion produced by drug withdrawal. Physical dependence Upregulation of the signal transduction pathway in the locus coeruleus has been implicated as the mechanism responsible for certain aspects of opioid-induced physical dependence. The temporal course of withdrawal correlates with LC firing, and administration of α2 agonists into the locus coeruleus leads to a decrease in LC firing and norepinephrine release during withdrawal. A possible mechanism involves upregulation of NMDA receptors, which is supported by the attenuation of withdrawal by NMDA receptor antagonists. Physical dependence on opioids has been observed to produce an elevation of extracellular glutamate, an increase in NMDA receptor subunits NR1 and NR2A, phosphorylated CaMKII, and c-fos. Expression of CaMKII and c-fos is attenuated by NMDA receptor antagonists, which is associated with blunted withdrawal in adult rats, but not neonatal rats. While acute administration of opioids decreases AMPA receptor expression and depresses both NMDA and non-NMDA excitatory postsynaptic potentials in the NAc, withdrawal involves a lowered threshold for LTP and an increase in spontaneous firing in the NAc. Diagnosis DSM classification "Substance dependence", as defined in the DSM-IV, can be diagnosed with physiological dependence, evidence of tolerance or withdrawal, or without physiological dependence. 
DSM-IV substance dependencies include:
303.90 Alcohol dependence
304.00 Opioid dependence
304.10 Sedative, hypnotic, or anxiolytic dependence (including benzodiazepine dependence and barbiturate dependence)
304.20 Cocaine dependence
304.30 Cannabis dependence
304.40 Amphetamine dependence (or amphetamine-like)
304.50 Hallucinogen dependence
304.60 Inhalant dependence
304.80 Polysubstance dependence
304.90 Phencyclidine (or phencyclidine-like) dependence
304.90 Other (or unknown) substance dependence
305.10 Nicotine dependence
Management Addiction is a complex but treatable condition. It is characterized by compulsive drug craving, seeking, and use that persists even if the user is aware of severe adverse consequences. For some people, addiction becomes chronic, with periodic relapses even after long periods of abstinence. As a chronic, relapsing disease, addiction may require continued treatments to increase the intervals between relapses and diminish their intensity. While some with substance issues recover and lead fulfilling lives, others require ongoing additional support. The ultimate goal of addiction treatment is to enable an individual to manage their substance misuse; for some this may mean abstinence. Immediate goals are often to reduce substance abuse, improve the patient's ability to function, and minimize the medical and social complications of substance abuse and their addiction; this is called "harm reduction". Treatments for addiction vary widely according to the types of drugs involved, amount of drugs used, duration of the drug addiction, medical complications and the social needs of the individual. Determining the best type of recovery program for an addicted person depends on a number of factors, including: personality, drugs of choice, concept of spirituality or religion, mental or physical illness, and local availability and affordability of programs. 
Many different ideas circulate regarding what is considered a successful outcome in the recovery from addiction. Programs that emphasize controlled drinking exist for alcohol addiction. Opiate replacement therapy has been a medical standard of treatment for opioid addiction for many years. Treatments and attitudes toward addiction vary widely among different countries. In the US and developing countries, the goal of commissioners of treatment for drug dependence is generally total abstinence from all drugs. Other countries, particularly in Europe, argue the aims of treatment for drug dependence are more complex, with treatment aims including reduction in use to the point that drug use no longer interferes with normal activities such as work and family commitments; shifting the addict away from more dangerous routes of drug administration such as injecting to safer routes such as oral administration; reduction in crime committed by drug addicts; and treatment of other comorbid conditions such as AIDS, hepatitis and mental health disorders. These kinds of outcomes can be achieved without eliminating drug use completely. Drug treatment programs in Europe often report more favorable outcomes than those in the US because the criteria for measuring success are functional rather than abstinence-based. The supporters of programs with total abstinence from drugs as a goal believe that enabling further drug use means prolonged drug use and risks an increase in addiction and complications from addiction. Residential Residential drug treatment can be broadly divided into two camps: 12-step programs and therapeutic communities. 12-step programs are a nonclinical support-group and spiritual-based approach to treating addiction. Therapy typically involves the use of cognitive-behavioral therapy, an approach that looks at the relationship between thoughts, feelings and behaviors, addressing the root cause of maladaptive behavior. 
Cognitive-behavioral therapy treats addiction as a behavior rather than a disease, and so as something that can be unlearned. Cognitive-behavioral therapy programs recognize that, for some individuals, controlled use is a more realistic possibility. One of many recovery methods is the 12-step recovery program, with prominent examples including Alcoholics Anonymous, Narcotics Anonymous, and Pills Anonymous. They are commonly known and used for a variety of addictions, both for the individual addicted and for the family of the individual. Substance-abuse rehabilitation (rehab) centers offer a residential treatment program for some of the more seriously addicted, in order to isolate the patient from drugs and interactions with other users and dealers. Outpatient clinics usually offer a combination of individual counseling and group counseling. Frequently, a physician or psychiatrist will prescribe medications in order to help patients cope with the side effects of their addiction. Medications can help immensely with anxiety and insomnia, can treat underlying mental disorders (cf. self-medication hypothesis, Khantzian 1997) such as depression, and can help reduce or eliminate withdrawal symptomology when withdrawing from physiologically addictive drugs. 
Some examples are: using benzodiazepines for alcohol detoxification, which prevents delirium tremens and complications; using a slow taper of benzodiazepines or a taper of phenobarbital, sometimes including another antiepileptic agent such as gabapentin, pregabalin, or valproate, for withdrawal from barbiturates or benzodiazepines; using drugs such as baclofen to reduce cravings and propensity for relapse, which is especially effective in stimulant users and alcoholics (in whom it is nearly as effective as benzodiazepines in preventing complications); and using clonidine, an alpha-agonist, and loperamide for opioid detoxification in first-time users or those who wish to attempt an abstinence-based recovery (90% of opioid users relapse to active addiction within eight months or are multiple relapse patients). Another approach is replacing an opioid that is interfering with or destructive to a user's life, such as illicitly obtained heroin, Dilaudid, or oxycodone, with an opioid that can be administered legally, reduces or eliminates drug cravings, and does not produce a high, such as methadone or buprenorphine. This opioid replacement therapy is the gold standard for treatment of opioid dependence in developed countries; it reduces the risk and cost to both user and society more effectively than any other treatment modality for opioid dependence, and shows the best short-term and long-term gains for the user, with the greatest longevity, least risk of fatality, greatest quality of life, and lowest risk of relapse and legal issues including arrest and incarceration. 
In a survey of treatment providers from three separate institutions (the National Association of Alcoholism and Drug Abuse Counselors, Rational Recovery Systems and the Society of Psychologists in Addictive Behaviors) measuring the treatment providers' responses on the "Spiritual Belief Scale" (a scale measuring belief in the four spiritual characteristics of AA identified by Ernest Kurtz), the scores were found to explain 41% of the variance in the treatment providers' responses on the "Addiction Belief Scale" (a scale measuring adherence to the disease model or the free-will model of addiction). Behavioral programming Behavioral programming is considered critical in helping those with addictions achieve abstinence. From the applied behavior analysis literature and the behavioral psychology literature, several evidence-based intervention programs have emerged: (1) behavioral marital therapy; (2) the community reinforcement approach; (3) cue exposure therapy; and (4) contingency management strategies. In addition, the same author suggests that social skills training adjunctive to inpatient treatment of alcohol dependence is probably efficacious. Community reinforcement has both efficacy and effectiveness data. In addition, behavioral treatments such as community reinforcement and family training (CRAFT) have helped family members to get their loved ones into treatment. Motivational intervention has also been shown to be an effective treatment for substance dependence. Alternative therapies Alternative therapies, such as acupuncture, are used by some practitioners to alleviate the symptoms of drug addiction. 
In 1997, the American Medical Association (AMA) adopted a policy statement on a number of alternative therapies, including acupuncture, following a report on them. In addition, new research surrounding the effects of psilocybin on smokers revealed that 80% of smokers quit for six months following the treatment, and 60% remained smoke-free for 5 years following the treatment. Treatment and issues Medical professionals need to apply many techniques and approaches to help patients with substance related disorders. Using a psychodynamic approach is one of the techniques that psychologists use to solve addiction problems. In psychodynamic therapy, psychologists need to understand the conflicts and the needs of the addicted person, and also need to locate the defects of their ego and defense mechanisms. Using this approach alone has proven to be ineffective in solving addiction problems. Cognitive and behavioral techniques should be integrated with psychodynamic approaches to achieve effective treatment for substance related disorders. Cognitive treatment requires psychologists to think deeply about what is happening in the brain of an addicted person. Cognitive psychologists should look closely at the neural functions of the brain and understand that drugs have been manipulating the dopamine reward center of the brain. From this particular state of thinking, cognitive psychologists need to find ways to change the thought process of the addicted person. Cognitive approach There are two routes typically applied to a cognitive approach to substance abuse: tracking the thoughts that pull patients toward addiction and tracking the thoughts that prevent them from relapsing. Behavioral techniques have the widest application in treating substance related disorders. Behavioral psychologists can use the techniques of "aversion therapy", based on the findings of Pavlov's classical conditioning. 
It uses the principle of pairing abused substances with unpleasant stimuli or conditions; for example, pairing pain, electrical shock, or nausea with alcohol consumption. Medications may also be used in this approach, such as using disulfiram to pair unpleasant effects with the thought of alcohol use. Psychologists tend to use an integration of all these approaches to produce reliable and effective treatment. With the advanced clinical use of medications, biological treatment is now considered to be one of the most efficient interventions that psychologists may use as treatment for those with substance dependence. Medicinal approach Another approach is to use medicines that interfere with the functions of the drugs in the brain. Similarly, one can also substitute the misused substance with a weaker, safer version to slowly taper the patient off of their dependence, as is the case with Suboxone in the context of opioid dependence. These approaches are aimed at the process of detoxification. Medical professionals weigh the consequences of withdrawal symptoms against the risk of staying dependent on these substances. These withdrawal symptoms can be very difficult and painful at times for patients, and most programs have steps in place to handle severe withdrawal symptoms, either through behavioral therapy or other medications. Biological intervention should be combined with behavioral therapy approaches and other non-pharmacological techniques. Group therapies, including those built on anonymity, teamwork and sharing concerns of daily life among people who also have substance dependence issues, can have a great impact on outcomes. However, these programs have proved to be more effective and influential for persons who have not reached levels of serious dependence. Vaccines TA-CD is an active vaccine developed by the Xenova Group which is used to negate the effects of cocaine, making it suitable for use in treatment of addiction. 
It is created by combining norcocaine with inactivated cholera toxin. TA-NIC is a proprietary vaccine in development similar to TA-CD but being used to create human anti-nicotine antibodies in a person to destroy nicotine in the human body so that it is no longer effective. History The phenomenon of drug addiction has occurred to some degree throughout recorded history (see Opium). Modern agricultural practices, improvements in access to drugs, advancements in biochemistry, and dramatic increases in the recommendation of drug usage by clinical practitioners have exacerbated the problem significantly in the 20th century. Improved means of active biological agent manufacture and the introduction of synthetic compounds, such as fentanyl and methamphetamine, are also factors contributing to drug addiction. For the entirety of US history, drugs have been used by some members of the population. In the country's early years, most drug use by the settlers was of alcohol or tobacco. The 19th century saw opium usage in the US become much more common and popular. Morphine was isolated in the early 19th century, and came to be prescribed commonly by doctors, both as a painkiller and as an intended cure for opium addiction. At the time, the prevailing medical opinion was that the addiction process occurred in the stomach, and thus it was hypothesized that patients would not become addicted to morphine if it was injected into them via a hypodermic needle, and it was further hypothesized that this might potentially be able to cure opium addiction. However, many people did become addicted to morphine. In particular, addiction to opium became widespread among soldiers fighting in the Civil War, who very often required painkillers and thus were very often prescribed morphine. Women were also very frequently prescribed opiates, and opiates were advertised as being able to relieve "female troubles". 
Many soldiers in the Vietnam War were introduced to heroin and developed a dependency on the substance which survived even when they returned to the US. Technological advances in travel meant that this increased demand for heroin in the US could now be met. Furthermore, as technology advanced, more drugs were synthesized and discovered, opening up new avenues to substance dependency. Society and culture Demographics Internationally, the U.S. and Eastern Europe contain the countries with the highest substance abuse disorder occurrence (5-6%). Africa, Asia, and the Middle East contain countries with the lowest worldwide occurrence (1-2%). Across the globe, those that tended to have a higher prevalence of substance dependence were in their twenties, unemployed, and men. The National Survey on Drug Use and Health (NSDUH) reports on substance dependence/abuse rates in various population demographics across the U.S. When surveying populations based on race and ethnicity in those ages 12 and older, it was observed that American Indian/Alaskan Natives were among the highest rates and Asians were among the lowest rates in comparison to other racial/ethnic groups. When surveying populations based on gender in those ages 12 and older, it was observed that males had a higher substance dependence rate than females. However, the difference in the rates is not apparent until after age 17. Alcohol dependence or abuse rates were shown to have no correspondence with any person's education level when populations were surveyed in varying degrees of education from ages 26 and older. However, when it came to illicit drug use there was a correlation, in which those that graduated from college had the lowest rates. Furthermore, dependence rates were greater in unemployed populations ages 18 and older and in metropolitan-residing populations ages 12 and older. 
The National Opinion Research Center at the University of Chicago reported an analysis on disparities within admissions for substance abuse treatment in the Appalachian region, which comprises 13 states and 410 counties in the Eastern part of the U.S. While their findings for most demographic categories were similar to the national findings by NSDUH, they had different results for racial/ethnic groups which varied by sub-regions. Overall, Whites were the demographic with the largest admission rate (83%), while Alaskan Native, American Indian, Pacific Islander, and Asian populations had the lowest admissions (1.8%). Legislation Depending on the jurisdiction, addictive drugs may be legal, legal only as part of a government sponsored study, illegal to use for any purpose, illegal to sell, or even illegal to merely possess. Most countries have legislation which brings various drugs and drug-like substances under the control of licensing systems. Typically this legislation covers any or all of the opiates, amphetamines, cannabinoids, cocaine, barbiturates, benzodiazepines, anesthetics, hallucinogenics, derivatives and a variety of more modern synthetic drugs. Unlicensed production, supply or possession is a criminal offence. Although the legislation may be justifiable on moral or public health grounds, it can make addiction or dependency a much more serious issue for the individual: reliable supplies of a drug become difficult to secure, and the individual becomes vulnerable to both criminal abuse and legal punishment. It is unclear whether laws against illegal drug use do anything to stem usage and dependency. In jurisdictions where addictive drugs are illegal, they are generally supplied by drug dealers, who are often involved with organized crime. Even though the cost of producing most illegal addictive substances is very low, their illegality combined with the addict's need permits the seller to command a premium price, often hundreds of times the production cost. 
As a result, addicts sometimes turn to crime to support their habit. United States In the United States, drug policy is primarily controlled by the federal government. The Department of Justice's Drug Enforcement Administration (DEA) enforces controlled substances laws and regulations. The Department of Health and Human Services' Food and Drug Administration (FDA) serves to protect and promote public health by controlling the manufacturing, marketing, and distribution of products, like medications. The United States' approach to substance abuse has shifted over the last decade, and is continuing to change. The federal government was minimally involved in the 19th century. The federal government transitioned from taxing drugs in the early 20th century to criminalizing drug abuse through legislation and agencies like the Federal Bureau of Narcotics (FBN) in the mid-20th century, in response to the nation's growing substance abuse issue. These strict punishments for drug offenses highlighted the fact that drug abuse was a multi-faceted problem. The President's Advisory Commission on Narcotics and Drug Abuse of 1963 addressed the need for a medical solution to drug abuse. However, prohibitions on drug abuse continued to be enforced by the federal government through agencies such as the DEA and further legislation such as the Controlled Substances Act (CSA), the Comprehensive Crime Control Act of 1984, and the Anti-Drug Abuse Acts. In the past decade, there have been growing efforts through state and local legislation to shift from criminalizing drug abuse to treating it as a health condition requiring medical intervention. 28 states currently allow for the establishment of needle exchanges. Florida, Iowa, Missouri and Arizona all introduced bills to allow for the establishment of needle exchanges in 2019. These bills have grown in popularity across party lines since needle exchanges were first introduced in Amsterdam in 1983. 
In addition, AB-186 Controlled substances: overdose prevention program was introduced to operate safe injection sites in the City and County of San Francisco. The bill was vetoed on September 30, 2018, by California Governor Jerry Brown. The legality of these sites is still under discussion, so there are no such sites in the United States yet. However, there is growing international evidence for successful safe injection facilities.
Biology and health sciences
Drugs and pharmacology
null
1779163
https://en.wikipedia.org/wiki/Limonene
Limonene
Limonene is a colorless liquid aliphatic hydrocarbon classified as a cyclic monoterpene, and is the major component in the essential oil of citrus fruit peels. The (+)-isomer, occurring more commonly in nature as the fragrance of oranges, is a flavoring agent in food manufacturing. It is also used in chemical synthesis as a precursor to carvone and as a renewables-based solvent in cleaning products. The less common (-)-isomer has a piny, turpentine-like odor, and is found in the edible parts of plants such as caraway, dill, and bergamot orange. Limonene takes its name from Italian limone ("lemon"). Limonene is a chiral molecule, and biological sources produce one enantiomer: the principal industrial source, citrus fruit, contains (+)-limonene (d-limonene), which is the (R)-enantiomer. (+)-Limonene is obtained commercially from citrus fruits through two primary methods: centrifugal separation or steam distillation. In plants (+)-Limonene is a major component of the aromatic scents and resins characteristic of numerous coniferous and broadleaved trees: red and silver maple (Acer rubrum, Acer saccharinum), cottonwoods (Populus angustifolia), aspens (Populus grandidentata, Populus tremuloides), sumac (Rhus glabra), spruce (Picea spp.), various pines (e.g., Pinus echinata, Pinus ponderosa, Pinus leucodermis), Douglas fir (Pseudotsuga menziesii), larches (Larix spp.), true firs (Abies spp.), hemlocks (Tsuga spp.), cannabis (Cannabis sativa spp.), cedars (Cedrus spp.), various Cupressaceae, and juniper bush (Juniperus spp.). It contributes to the characteristic odor of orange peel, orange juice and other citrus fruits. To optimize recovery of valued components from citrus peel waste, (+)-limonene is typically removed. Chemical reactions Limonene is a relatively stable monoterpene and can be distilled without decomposition, although at elevated temperatures it cracks to form isoprene. It oxidizes easily in moist air to produce carveol, carvone, and limonene oxide. 
With sulfur, it undergoes dehydrogenation to p-cymene. Limonene occurs commonly as the (R)-enantiomer, but racemizes at 300 °C. When warmed with mineral acid, limonene isomerizes to the conjugated diene α-terpinene (which can also easily be converted to p-cymene). Evidence for this isomerization includes the formation of Diels–Alder adducts between α-terpinene and maleic anhydride. It is possible to effect reaction at one of the double bonds selectively. Anhydrous hydrogen chloride reacts preferentially at the disubstituted alkene, whereas epoxidation with mCPBA occurs at the trisubstituted alkene. In another synthetic method, Markovnikov addition of trifluoroacetic acid followed by hydrolysis of the acetate gives terpineol. The most widely practiced conversion of limonene is to carvone. The three-step reaction begins with the regioselective addition of nitrosyl chloride across the trisubstituted double bond. This species is then converted to the oxime with a base, and the hydroxylamine is removed to give the ketone-containing carvone. Biosynthesis In nature, limonene is formed from geranyl pyrophosphate, via cyclization of a neryl carbocation or its equivalent. The final step involves loss of a proton from the cation to form the alkene. Uses As the main fragrance of citrus peels, (+)-limonene is used in food manufacturing and some medicines, such as a flavoring agent to mask the bitter taste of alkaloids, and as a fragrance in perfumery, aftershave lotions, bath products, and other personal care products. (+)-Limonene is also used as a botanical insecticide and in organic herbicides. It is added to cleaning products, such as hand cleansers, to give a lemon or orange fragrance (see orange oil) and for its ability to dissolve oils. In contrast, (-)-limonene has a piny, turpentine-like odor. 
Limonene is used as a solvent for cleaning purposes, such as adhesive remover, or the removal of oil from machine parts, as it is produced from a renewable source (citrus essential oil, as a byproduct of orange juice manufacture). It is used as a paint stripper and is also useful as a fragrant alternative to turpentine. Limonene is also used as a solvent in some model airplane glues and as a constituent in some paints. Commercial air fresheners, with air propellants, containing limonene are used by stamp collectors to remove self-adhesive postage stamps from envelope paper. Limonene is also used as a solvent for fused filament fabrication based 3D printing. Printers can print the plastic of choice for the model, but build supports and binders from high-impact polystyrene (HIPS), a polystyrene plastic that is easily soluble in limonene. In preparing tissues for histology or histopathology, d-limonene is often used as a less toxic substitute for xylene when clearing dehydrated specimens. Clearing agents are liquids miscible with alcohols (such as ethanol or isopropanol) and with melted paraffin wax, in which specimens are embedded to facilitate cutting of thin sections for microscopy. Limonene, from orange peel oil, is also combustible and has been considered as a biofuel. Safety and research Applied to skin, limonene may cause irritation from contact dermatitis, but otherwise appears to be safe for human use. Limonene is flammable as a liquid or vapor and it is toxic to aquatic life. Cancer There is no evidence that the limonene in peel oils of citrus fruits affects the onset or progress of cancer, with one national agency stating, "There is no consistent evidence that people with cancer who consume limonene—either in supplement form or by eating citrus fruits—get better or are more likely to be cured".
Physical sciences
Terpenes and terpenoids
Chemistry
1780815
https://en.wikipedia.org/wiki/Radius
Radius
In classical geometry, a radius (plural: radii or radiuses) of a circle or sphere is any of the line segments from its center to its perimeter, and in more modern usage, it is also their length. The radius of a regular polygon is the line segment or distance from its center to any of its vertices. The name comes from the Latin radius, meaning ray but also the spoke of a chariot wheel. The typical abbreviation and mathematical symbol for radius is R or r. By extension, the diameter D is defined as twice the radius: D = 2r. If an object does not have a center, the term may refer to its circumradius, the radius of its circumscribed circle or circumscribed sphere. In either case, the radius may be more than half the diameter, which is usually defined as the maximum distance between any two points of the figure. The inradius of a geometric figure is usually the radius of the largest circle or sphere contained in it. The inner radius of a ring, tube or other hollow object is the radius of its cavity. For regular polygons, the radius is the same as its circumradius. The inradius of a regular polygon is also called apothem. In graph theory, the radius of a graph is the minimum over all vertices u of the maximum distance from u to any other vertex of the graph. The radius of the circle with perimeter (circumference) C is r = C / (2π). Formula For many geometric figures, the radius has a well-defined relationship with other measures of the figure. Circles The radius of a circle with area A is r = √(A / π). The radius of the circle that passes through the three non-collinear points P1, P2, and P3 is given by r = |P1P3| / (2 sin θ), where θ is the angle ∠P1P2P3. This formula uses the law of sines. If the three points are given by their coordinates (x1, y1), (x2, y2), and (x3, y3), the radius can be expressed as r = (|P1P2| · |P2P3| · |P3P1|) / (2 |x1y2 + x2y3 + x3y1 − x1y3 − x2y1 − x3y2|). Regular polygons The radius r of a regular polygon with n sides of length s is given by r = Rn · s, where Rn = 1 / (2 sin(π/n)). Values of Rn for small values of n are given in the table. If s = 1 then these values are also the radii of the corresponding regular polygons.
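The radius of the circle through three given points, as described above, can be checked numerically. The sketch below (the helper name `circumradius` is illustrative) uses the equivalent form r = abc / (4K), where a, b, c are the side lengths of the triangle and K is its area:

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three non-collinear points.

    Uses r = abc / (4K), where a, b, c are the side lengths and
    K is the triangle's area (computed via the shoelace formula).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # Twice the unsigned area via the shoelace formula
    area2 = abs(x1*y2 + x2*y3 + x3*y1 - x1*y3 - x2*y1 - x3*y2)
    return a * b * c / (2 * area2)

# Three points on the unit circle should give radius 1
print(circumradius((1, 0), (0, 1), (-1, 0)))  # 1.0
```

For a right triangle the result agrees with the elementary fact that the circumradius is half the hypotenuse.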
Hypercubes The radius of a d-dimensional hypercube with side s is r = (s/2) √d. Use in coordinate systems Polar coordinates The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction. The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the fixed direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is the angular coordinate, polar angle, or azimuth. Cylindrical coordinates In the cylindrical coordinate system, there is a chosen reference axis and a chosen reference plane perpendicular to that axis. The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis. The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction. The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position, or axial position. Spherical coordinates In a spherical coordinate system, the radius describes the distance of a point from a fixed origin.
Its position is further defined by the polar angle measured between the radial direction and a fixed zenith direction, and the azimuth angle, the angle between the orthogonal projection of the radial direction on a reference plane that passes through the origin and is orthogonal to the zenith, and a fixed reference direction in that plane.
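The three radial coordinate systems described above differ only in which additional coordinates accompany the radius. A small sketch (function names are illustrative) shows the standard conversions to Cartesian coordinates:

```python
import math

def polar_to_cartesian(r, phi):
    """2-D polar (radius, azimuth) -> (x, y)."""
    return (r * math.cos(phi), r * math.sin(phi))

def cylindrical_to_cartesian(rho, phi, z):
    """Cylindrical (radial distance, azimuth, height) -> (x, y, z)."""
    return (rho * math.cos(phi), rho * math.sin(phi), z)

def spherical_to_cartesian(r, theta, phi):
    """Spherical (radius, polar angle from the zenith, azimuth) -> (x, y, z)."""
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

# A point at radius 1 along the polar axis (phi = 0)
print(polar_to_cartesian(1.0, 0.0))  # (1.0, 0.0)
```

In each system the radius alone fixes how far the point is from the pole, axis, or origin; the angles fix the direction.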
Mathematics
Two-dimensional space
1780823
https://en.wikipedia.org/wiki/AC%20power
AC power
In an electric circuit, instantaneous power is the time rate of flow of energy past a given point of the circuit. In alternating current circuits, energy storage elements such as inductors and capacitors may result in periodic reversals of the direction of energy flow. Its SI unit is the watt. The portion of instantaneous power that, averaged over a complete cycle of the AC waveform, results in net transfer of energy in one direction is known as instantaneous active power, and its time average is known as active power or real power. The portion of instantaneous power that results in no net transfer of energy but instead oscillates between the source and load in each cycle due to stored energy is known as instantaneous reactive power, and its amplitude is the absolute value of reactive power. Active, reactive, apparent, and complex power in sinusoidal steady-state In a simple alternating current (AC) circuit consisting of a source and a linear time-invariant load, both the current and voltage are sinusoidal at the same frequency. If the load is purely resistive, the two quantities reverse their polarity at the same time. Hence, the instantaneous power, given by the product of voltage and current, is always positive, such that the direction of energy flow does not reverse and always is toward the resistor. In this case, only active power is transferred. If the load is purely reactive, then the voltage and current are 90 degrees out of phase. For two quarters of each cycle, the product of voltage and current is positive, but for the other two quarters, the product is negative, indicating that on average, exactly as much energy flows into the load as flows back out. There is no net energy flow over each half cycle. In this case, only reactive power flows: There is no net transfer of energy to the load; however, electrical power does flow along the wires and returns by flowing in reverse along the same wires. 
The current required for this reactive power flow dissipates energy in the line resistance, even if the ideal load device consumes no energy itself. Practical loads have resistance as well as inductance, or capacitance, so both active and reactive powers will flow to normal loads. Apparent power is the product of the RMS values of voltage and current. Apparent power is taken into account when designing and operating power systems, because although the current associated with reactive power does no work at the load, it still must be supplied by the power source. Conductors, transformers and generators must be sized to carry the total current, not just the current that does useful work. Insufficient reactive power can depress voltage levels on an electrical grid and, under certain operating conditions, collapse the network (a blackout). Another consequence is that adding the apparent power for two loads will not accurately give the total power unless they have the same phase difference between current and voltage (the same power factor). Conventionally, capacitors are treated as if they generate reactive power, and inductors are treated as if they consume it. If a capacitor and an inductor are placed in parallel, then the currents flowing through the capacitor and the inductor tend to cancel rather than add. This is the fundamental mechanism for controlling the power factor in electric power transmission; capacitors (or inductors) are inserted in a circuit to partially compensate for reactive power 'consumed' ('generated') by the load. Purely capacitive circuits supply reactive power with the current waveform leading the voltage waveform by 90 degrees, while purely inductive circuits absorb reactive power with the current waveform lagging the voltage waveform by 90 degrees. The result of this is that capacitive and inductive circuit elements tend to cancel each other out. 
Engineers use the following terms to describe energy flow in a system (and assign each of them a different unit to differentiate between them): Active power, P, or real power: watt (W); Reactive power, Q: volt-ampere reactive (var); Complex power, S: volt-ampere (VA); Apparent power, |S|: the magnitude of complex power S: volt-ampere (VA); Phase of voltage relative to current, φ: the angle of difference (in degrees) between current and voltage; current lagging voltage (quadrant I vector), current leading voltage (quadrant IV vector). These are all denoted in the adjacent diagram (called a power triangle). In the diagram, P is the active power, Q is the reactive power (in this case positive), S is the complex power and the length of S is the apparent power. Reactive power does not do any work, so it is represented as the imaginary axis of the vector diagram. Active power does do work, so it is the real axis. The unit for power is the watt (symbol: W). Apparent power is often expressed in volt-amperes (VA) since it is the product of RMS voltage and RMS current. The unit for reactive power is var, which stands for volt-ampere reactive. Since reactive power transfers no net energy to the load, it is sometimes called "wattless" power. It does, however, serve an important function in electrical grids and its lack has been cited as a significant factor in the Northeast blackout of 2003. Understanding the relationship among these three quantities lies at the heart of understanding power engineering. The mathematical relationship among them can be represented by vectors or expressed using complex numbers, S = P + j Q (where j is the imaginary unit). Calculations and equations in sinusoidal steady-state The formula for complex power (units: VA) in phasor form is: S = V I*, where V denotes voltage in phasor form, with the amplitude as RMS, and I denotes current in phasor form, with the amplitude as RMS.
Also by convention, the complex conjugate of I is used, which is denoted I* (or Ī), rather than I itself. This is done because otherwise using the product V I to define S would result in a quantity that depends on the reference angle chosen for V or I, but defining S as V I* results in a quantity that doesn't depend on the reference angle and allows to relate S to P and Q. Other forms of complex power (units in volt-amps, VA) are derived from Z, the load impedance (units in ohms, Ω): S = |I|² Z = |V|² / Z*. Consequently, with reference to the power triangle, real power (units in watts, W) is derived as: P = |S| cos φ. For a purely resistive load, real power can be simplified to: P = |V|² / R = |I|² R, where R denotes the resistance (units in ohms, Ω) of the load. Reactive power (units in volt-amps-reactive, var) is derived as: Q = |S| sin φ. For a purely reactive load, reactive power can be simplified to: Q = |V|² / X = |I|² X, where X denotes the reactance (units in ohms, Ω) of the load. Combining, the complex power (units in volt-amps, VA) is back-derived as S = P + jQ, and the apparent power (units in volt-amps, VA) as |S| = √(P² + Q²). These are simplified diagrammatically by the power triangle. Power factor The ratio of active power to apparent power in a circuit is called the power factor. For two systems transmitting the same amount of active power, the system with the lower power factor will have higher circulating currents due to energy that returns to the source from energy storage in the load. These higher currents produce higher losses and reduce overall transmission efficiency. A lower power factor circuit will have a higher apparent power and higher losses for the same amount of active power. The power factor is 1.0 when the voltage and current are in phase. It is zero when the current leads or lags the voltage by 90 degrees.
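The phasor relation S = V I* is easy to verify with Python's built-in complex numbers. A minimal sketch, assuming an illustrative 230 V RMS source and a 5 A RMS current lagging by 30° (an inductive load):

```python
import cmath
import math

# RMS phasors: voltage at 0 degrees, current lagging by 30 degrees
V = cmath.rect(230.0, 0.0)                # 230 V RMS
I = cmath.rect(5.0, math.radians(-30.0))  # 5 A RMS, lagging

S = V * I.conjugate()   # complex power, S = V I*
P = S.real              # active power (W)
Q = S.imag              # reactive power (var), positive for an inductive load
apparent = abs(S)       # apparent power (VA), |S| = sqrt(P^2 + Q^2)
pf = P / apparent       # power factor, cos(phi)

print(f"P = {P:.1f} W, Q = {Q:.1f} var, |S| = {apparent:.1f} VA, pf = {pf:.3f}")
```

Because the conjugate is used, rotating both phasors by the same reference angle leaves S unchanged, which is exactly the point made above.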
When the voltage and current are 180 degrees out of phase, the power factor is negative one, and the load is feeding energy into the source (an example would be a home with solar cells on the roof that feed power into the power grid when the sun is shining). Power factors are usually stated as "leading" or "lagging" to show the sign of the phase angle of current with respect to voltage. Voltage is designated as the base to which current angle is compared, meaning that current is thought of as either "leading" or "lagging" voltage. Where the waveforms are purely sinusoidal, the power factor is the cosine of the phase angle (φ) between the current and voltage sinusoidal waveforms. Equipment data sheets and nameplates will often abbreviate power factor as "cos φ" for this reason. Example: The active power is 700 W and the phase angle between voltage and current is 45.6°. The power factor is cos(45.6°) = 0.700. The apparent power is then: 700 W / cos(45.6°) = 1000 VA. This example illustrates the concept of power dissipation in AC circuits: a power factor of 0.68, for instance, means that only 68 percent of the total current supplied (in magnitude) is actually doing work; the remaining current does no work at the load. Power factor is important in power substations: grid codes typically require substations connected to the national grid to maintain a minimum power factor, often around 0.90 to 0.96 or higher, since a higher power factor means lower losses. Reactive power In a direct current circuit, the power flowing to the load is proportional to the product of the current through the load and the potential drop across the load, and energy flows in one direction from the source to the load. In AC power, the voltage and current both vary approximately sinusoidally; the power associated with capacitors and inductors, which arises from the AC nature of these elements, is called reactive power.
When there is inductance or capacitance in the circuit, the voltage and current waveforms do not line up perfectly. The power flow has two components – one component flows from source to load and can perform work at the load; the other portion, known as "reactive power", is due to the delay between voltage and current, known as phase angle, and cannot do useful work at the load. It can be thought of as current that is arriving at the wrong time (too late or too early). To distinguish reactive power from active power, it is measured in units of "volt-amperes reactive", or var. These units can simplify to watts but are left as var to denote that they represent no actual work output. Energy stored in capacitive or inductive elements of the network gives rise to reactive power flow. Reactive power flow strongly influences the voltage levels across the network. Voltage levels and reactive power flow must be carefully controlled to allow a power system to be operated within acceptable limits. A technique known as reactive compensation is used to reduce apparent power flow to a load by reducing reactive power supplied from transmission lines and providing it locally. For example, to compensate an inductive load, a shunt capacitor is installed close to the load itself. This allows all reactive power needed by the load to be supplied by the capacitor and not have to be transferred over the transmission lines. This practice saves energy because it reduces the amount of energy that is required to be produced by the utility to do the same amount of work. Additionally, it allows for more efficient transmission line designs using smaller conductors or fewer bundled conductors and optimizing the design of transmission towers. Capacitive vs. inductive loads Stored energy in the magnetic or electric field of a load device, such as a motor or capacitor, causes an offset between the current and the voltage waveforms. 
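The reactive compensation idea described above can be quantified: a shunt capacitor that fully compensates an inductive load must supply the load's reactive power Q = P tan φ at the line voltage. A sketch, assuming illustrative figures (10 kW load, 0.80 lagging power factor, 400 V, 50 Hz); the helper name is hypothetical:

```python
import math

def shunt_capacitance(P, pf_load, V_rms, freq):
    """Capacitance (F) needed to fully compensate an inductive load.

    The load consumes Q = P * tan(phi) var, and a shunt capacitor
    supplies Q_C = 2*pi*f*C*V^2 var, so solve Q_C = Q for C.
    """
    phi = math.acos(pf_load)
    Q = P * math.tan(phi)  # reactive power consumed by the load (var)
    return Q / (2 * math.pi * freq * V_rms**2)

# 10 kW load at 0.80 lagging power factor on a 400 V, 50 Hz supply
C = shunt_capacitance(10e3, 0.80, 400.0, 50.0)
print(f"{C * 1e6:.0f} uF")
```

With the capacitor installed close to the load, the reactive current circulates locally and the transmission line carries only the active component.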
A capacitor is a device that stores energy in the form of an electric field. As current is driven through the capacitor, charge build-up causes an opposing voltage to develop across the capacitor. This voltage increases until some maximum dictated by the capacitor structure. In an AC network, the voltage across a capacitor is constantly changing. The capacitor opposes this change, causing the current to lead the voltage in phase. Capacitors are said to "source" reactive power, and thus to cause a leading power factor. Induction machines are some of the most common types of loads in the electric power system today. These machines use inductors, or large coils of wire to store energy in the form of a magnetic field. When a voltage is initially placed across the coil, the inductor strongly resists this change in a current and magnetic field, which causes a time delay for the current to reach its maximum value. This causes the current to lag behind the voltage in phase. Inductors are said to "sink" reactive power, and thus to cause a lagging power factor. Induction generators can source or sink reactive power, and provide a measure of control to system operators over reactive power flow and thus voltage. Because these devices have opposite effects on the phase angle between voltage and current, they can be used to "cancel out" each other's effects. This usually takes the form of capacitor banks being used to counteract the lagging power factor caused by induction motors. Reactive power control Transmission connected generators are generally required to support reactive power flow. For example, on the United Kingdom transmission system, generators are required by the Grid Code Requirements to supply their rated power between the limits of 0.85 power factor lagging and 0.90 power factor leading at the designated terminals. 
The system operator will perform switching actions to maintain a secure and economical voltage profile while maintaining a balance between the sources and sinks of reactive power on the network. The "system gain" is an important source of reactive power in this balance, which is generated by the capacitive nature of the transmission network itself. By making decisive switching actions in the early morning before the demand increases, the system gain can be maximized early on, helping to secure the system for the whole day. To achieve this balance, some pre-fault reactive generator use will be required. Other sources of reactive power that will also be used include shunt capacitors, shunt reactors, static VAR compensators and voltage control circuits. Unbalanced sinusoidal polyphase systems While active power and reactive power are well defined in any system, the definition of apparent power for unbalanced polyphase systems is considered to be one of the most controversial topics in power engineering. Originally, apparent power arose merely as a figure of merit. Major delineations of the concept are attributed to Stanley's Phenomena of Retardation in the Induction Coil (1888) and Steinmetz's Theoretical Elements of Engineering (1915). However, with the development of three phase power distribution, it became clear that the definition of apparent power and the power factor could not be applied to unbalanced polyphase systems. In 1920, a "Special Joint Committee of the AIEE and the National Electric Light Association" met to resolve the issue. They considered two definitions: S = S_a + S_b + S_c, that is, the arithmetic sum of the phase apparent powers; and S = |P + jQ|, that is, the magnitude of total three-phase complex power. The 1920 committee found no consensus and the topic continued to dominate discussions. In 1932, another committee formed and once again failed to resolve the question. The transcripts of their discussions are the lengthiest and most controversial ever published by the AIEE.
Further resolution of this debate did not come until the late 1990s. A new definition based on symmetrical components theory was proposed in 1993 by Alexander Emanuel for unbalanced linear load supplied with asymmetrical sinusoidal voltages: S = √(V_a² + V_b² + V_c²) · √(I_a² + I_b² + I_c²), that is, the root of squared sums of line voltages multiplied by the root of squared sums of line currents. P⁺ denotes the positive sequence power: P⁺ = 3 V⁺ I⁺ cos θ⁺, where V⁺ denotes the positive sequence voltage phasor, and I⁺ denotes the positive sequence current phasor. Real number formulas A perfect resistor stores no energy; so current and voltage are in phase. Therefore, there is no reactive power and P = S (using the passive sign convention). Therefore, for a perfect resistor: P = S = V_rms I_rms = I_rms² R = V_rms² / R. For a perfect capacitor or inductor, there is no net power transfer; so all power is reactive. Therefore, for a perfect capacitor or inductor: P = 0 and S = |Q| = V_rms I_rms = I_rms² |X| = V_rms² / |X|, where X is the reactance of the capacitor or inductor. If X is defined as being positive for an inductor and negative for a capacitor, then the modulus signs can be removed from S and X and get Q = I_rms² X = V_rms² / X. Instantaneous power is defined as: p(t) = v(t) i(t), where v(t) and i(t) are the time-varying voltage and current waveforms. This definition is useful because it applies to all waveforms, whether they are sinusoidal or not. This is particularly useful in power electronics, where non-sinusoidal waveforms are common. In general, engineers are interested in the active power averaged over a period of time, whether it is a low frequency line cycle or a high frequency power converter switching period. The simplest way to get that result is to take the integral of the instantaneous calculation over the desired period: P = (1/(t2 − t1)) ∫ from t1 to t2 of v(t) i(t) dt. This method of calculating the average power gives the active power regardless of harmonic content of the waveform. In practical applications, this would be done in the digital domain, where the calculation becomes trivial when compared to the use of rms and phase to determine active power: P = V_rms I_rms cos φ.
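The period-average definition of active power can be checked numerically against V_rms I_rms cos φ for a sinusoidal case. A sketch with illustrative amplitudes (10 V peak, 2 A peak, current lagging by 60°):

```python
import math

# Sinusoidal voltage and current over one unit period; current lags by 60 degrees
Vpk, Ipk, phi = 10.0, 2.0, math.radians(60.0)
v = lambda t: Vpk * math.sin(2 * math.pi * t)
i = lambda t: Ipk * math.sin(2 * math.pi * t - phi)

# Average p(t) = v(t) i(t) over one period by the midpoint rule
N = 50_000
P_avg = sum(v((k + 0.5) / N) * i((k + 0.5) / N) for k in range(N)) / N

# Compare with P = Vrms * Irms * cos(phi)
P_formula = (Vpk / math.sqrt(2)) * (Ipk / math.sqrt(2)) * math.cos(phi)
print(P_avg, P_formula)  # both ~5.0
```

The two results agree, which is the point of the text above: integrating v(t)·i(t) gives the active power directly, without separating out RMS magnitudes and phase.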
Multiple frequency systems Since an RMS value can be calculated for any waveform, apparent power can be calculated from this. For active power it would at first appear that it would be necessary to calculate many product terms and average all of them. However, looking at one of these product terms in more detail produces a very interesting result: the product of two sinusoids at different frequencies can be rewritten as a sum of two sinusoids, one at the difference frequency and one at the sum frequency, and the time average of a function of the form cos(ωt + k) is zero provided that ω is nonzero. Therefore, the only product terms that have a nonzero average are those where the frequency of voltage and current match. In other words, it is possible to calculate active (average) power by simply treating each frequency separately and adding up the answers. Furthermore, if voltage of the mains supply is assumed to be a single frequency (which it usually is), this shows that harmonic currents are a bad thing. They will increase the RMS current (since there will be non-zero terms added) and therefore apparent power, but they will have no effect on the active power transferred. Hence, harmonic currents will reduce the power factor. Harmonic currents can be reduced by a filter placed at the input of the device. Typically this will consist of either just a capacitor (relying on parasitic resistance and inductance in the supply) or a capacitor-inductor network. An active power factor correction circuit at the input would generally reduce the harmonic currents further and maintain the power factor closer to unity.
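The effect of a harmonic current can be demonstrated numerically: it raises the RMS current (and hence apparent power) while leaving the active power untouched. A sketch with illustrative amplitudes (a 50 Hz mains voltage and a current carrying an added third harmonic):

```python
import math

# Sinusoidal 50 Hz mains voltage; current has an added 3rd-harmonic component
v = lambda t: 10.0 * math.sin(2 * math.pi * 50 * t)
i = lambda t: (2.0 * math.sin(2 * math.pi * 50 * t)
               + 1.0 * math.sin(2 * math.pi * 150 * t))

T = 1 / 50          # one fundamental period
N = 20_000
ts = [(k + 0.5) * T / N for k in range(N)]

P = sum(v(t) * i(t) for t in ts) / N                 # active power
Vrms = math.sqrt(sum(v(t)**2 for t in ts) / N)
Irms = math.sqrt(sum(i(t)**2 for t in ts) / N)
S = Vrms * Irms                                      # apparent power

print(f"P = {P:.2f} W, S = {S:.2f} VA, pf = {P / S:.3f}")
# The harmonic raises Irms (and S) but leaves P unchanged, so pf < 1
```

The cross-frequency product of the 50 Hz voltage with the 150 Hz current component averages to zero over the period, exactly as argued above, so only the matching-frequency term contributes to P.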
Physical sciences
Electrodynamics
Physics
1781219
https://en.wikipedia.org/wiki/Geologic%20province
Geologic province
A geologic province is a spatial entity with common geologic attributes. A province may include a single dominant structural element such as a basin or a fold belt, or a number of contiguous related elements. Adjoining provinces may be similar in structure but be considered separate due to differing histories. Geologic provinces by origin Geologic provinces by resources Some studies classify provinces based upon mineral resources, such as mineral deposits. There are a particularly large number of provinces identified worldwide for petroleum and other mineral fuels, such as the Niger Delta petroleum province.
Physical sciences
Tectonics
Earth science
1781797
https://en.wikipedia.org/wiki/Sweetness
Sweetness
Sweetness is a basic taste most commonly perceived when eating foods rich in sugars. Sweet tastes are generally regarded as pleasurable. In addition to sugars like sucrose, many other chemical compounds are sweet, including aldehydes, ketones, and sugar alcohols. Some are sweet at very low concentrations, allowing their use as non-caloric sugar substitutes. Such non-sugar sweeteners include saccharin, aspartame, sucralose and stevia. Other compounds, such as miraculin, may alter perception of sweetness itself. The perceived intensity of sugars and high-potency sweeteners, such as aspartame and neohesperidin dihydrochalcone, is heritable, with genetic effects accounting for approximately 30% of the variation. The chemosensory basis for detecting sweetness, which varies between both individuals and species, has only begun to be understood since the late 20th century. One theoretical model of sweetness is the multipoint attachment theory, which involves multiple binding sites between a sweetness receptor and a sweet substance. Studies indicate that responsiveness to sugars and sweetness has very ancient evolutionary beginnings, being manifest as chemotaxis even in motile bacteria such as E. coli. Newborn human infants also demonstrate preferences for high sugar concentrations and prefer solutions that are sweeter than lactose, the sugar found in breast milk. Sweetness appears to have the highest taste recognition threshold, being detectable at around 1 part in 200 of sucrose in solution. By comparison, bitterness appears to have the lowest detection threshold, at about 1 part in 2 million for quinine in solution. In the natural settings that human primate ancestors evolved in, sweetness intensity should indicate energy density, while bitterness tends to indicate toxicity.
The high sweetness detection threshold and low bitterness detection threshold would have predisposed our primate ancestors to seek out sweet-tasting (and energy-dense) foods and avoid bitter-tasting foods. Even amongst leaf-eating primates, there is a tendency to prefer immature leaves, which tend to be higher in protein and lower in fibre and poisons than mature leaves. The "sweet tooth" thus has an ancient heritage, and while food processing has changed consumption patterns, human physiology remains largely unchanged. Biologically, a variant in fibroblast growth factor 21 increases craving for sweet foods. Examples of sweet substances A great diversity of chemical compounds, such as aldehydes and ketones, are sweet. Among common biological substances, all of the simple carbohydrates are sweet to at least some degree. Sucrose (table sugar) is the prototypical example of a sweet substance. Sucrose in solution has a sweetness perception rating of 1, and other substances are rated relative to this. For example, another sugar, fructose, is somewhat sweeter, being rated at 1.7 times the sweetness of sucrose. Some of the amino acids are mildly sweet: alanine, glycine, and serine are the sweetest. Some other amino acids are perceived as both sweet and bitter. The sweetness of a 5% solution of glycine in water compares to a solution of 5.6% glucose or 2.6% fructose. A number of plant species produce glycosides that are sweet at concentrations much lower than common sugars. The best-known example is glycyrrhizin, the sweet component of licorice root, which is about 30 times sweeter than sucrose. Another commercially important example is stevioside, from the South American shrub Stevia rebaudiana. It is roughly 250 times sweeter than sucrose. Another class of potent natural sweeteners are the sweet proteins such as thaumatin, found in the West African katemfe fruit. Hen egg lysozyme, an antibiotic protein found in chicken eggs, is also sweet.
Some variation in values is not uncommon between various studies. Such variations may arise from a range of methodological variables, from sampling to analysis and interpretation. Indeed, the taste index of 1, assigned to reference substances such as sucrose (for sweetness), hydrochloric acid (for sourness), quinine (for bitterness), and sodium chloride (for saltiness), is itself arbitrary for practical purposes. Some values, such as those for maltose and glucose, vary little. Others, such as aspartame and sodium saccharin, have much larger variation. Even some inorganic compounds are sweet, including beryllium chloride and lead(II) acetate. The latter may have contributed to lead poisoning among the ancient Roman aristocracy: the Roman delicacy sapa was prepared by boiling soured wine (containing acetic acid) in lead pots. Hundreds of synthetic organic compounds are known to be sweet, but only a few of these are legally permitted as food additives. For example, chloroform, nitrobenzene, and ethylene glycol are sweet, but also toxic. Saccharin, cyclamate, aspartame, acesulfame potassium, sucralose, alitame, and neotame are commonly used. Sweetness modifiers A few substances alter the way sweet taste is perceived. One class of these inhibits the perception of sweet tastes, whether from sugars or from highly potent sweeteners. Commercially, the most important of these is lactisole, a compound produced by Domino Sugar. It is used in some jellies and other fruit preserves to bring out their fruit flavors by suppressing their otherwise strong sweetness. Two natural products have been documented to have similar sweetness-inhibiting properties: gymnemic acid, extracted from the leaves of the Indian vine Gymnema sylvestre and ziziphin, from the leaves of the Chinese jujube (Ziziphus jujuba). Gymnemic acid has been widely promoted within herbal medicine as a treatment for sugar cravings and diabetes. 
On the other hand, two plant proteins, miraculin and curculin, cause sour foods to taste sweet. Once the tongue has been exposed to either of these proteins, sourness is perceived as sweetness for up to an hour afterwards. While curculin has some innate sweet taste of its own, miraculin is by itself quite tasteless. The sweetness receptor Despite the wide variety of chemical substances known to be sweet, and knowledge that the ability to perceive sweet taste must reside in taste buds on the tongue, the biomolecular mechanism of sweet taste was sufficiently elusive that as recently as the 1990s, there was some doubt whether any single "sweetness receptor" actually exists. The breakthrough for the present understanding of sweetness occurred in 2001, when experiments with laboratory mice showed that mice possessing different versions of the gene T1R3 prefer sweet foods to different extents. Subsequent research has shown that the T1R3 protein forms a complex with a related protein, called T1R2, to form a G-protein coupled receptor that is the sweetness receptor in mammals. Human studies have shown that sweet taste receptors are not only found in the tongue, but also in the lining of the gastrointestinal tract as well as the nasal epithelium, pancreatic islet cells, sperm and testes. It is proposed that the presence of sweet taste receptors in the GI tract controls the feeling of hunger and satiety. Other research has shown that the threshold of sweet taste perception is in direct correlation with the time of day. This is believed to be a consequence of oscillating leptin levels in blood that may impact the perceived sweetness of food. Scientists hypothesize that this is an evolutionary relict of diurnal animals like humans. Sweetness perception may differ significantly between species. For example, even amongst the primates sweetness is quite variable. New World monkeys do not find aspartame sweet, while Old World monkeys and apes (including most humans) all do.
Felids like domestic cats cannot perceive sweetness at all. The ability to taste sweetness often atrophies genetically in carnivore species that do not eat sweet foods like fruits, including bottlenose dolphins, sea lions, spotted hyenas and fossas. Sweet receptor pathway To depolarize the cell, and ultimately generate a response, the body uses different cells in the taste bud that each express a receptor for the perception of sweet, sour, salty, bitter or umami. Downstream of the taste receptor, the taste cells for sweet, bitter and umami share the same intracellular signalling pathway. Incoming sweet molecules bind to their receptors, which causes a conformational change in the receptor. This change activates the G-protein gustducin, which in turn activates phospholipase C to generate inositol trisphosphate (IP3); IP3 then opens the IP3 receptor and induces calcium release from the endoplasmic reticulum. This increase in intracellular calcium activates the TRPM5 channel and induces cellular depolarization. Depolarization activates the ATP-release channel CALHM1, which releases ATP as a neurotransmitter, activating the afferent neurons innervating the taste bud. Cognition The color of food can affect sweetness perception. Adding more red color to a drink increases its perceived sweetness. In one study, darker-colored solutions were rated 2–10% sweeter than lighter ones, despite containing 1% less sucrose. The effect of color is believed to be due to cognitive expectations. Some odors smell sweet, and memory can confuse whether sweetness was tasted or smelled. Historical theories The development of organic chemistry in the 19th century introduced many new chemical compounds and the means to determine their molecular structures. Early organic chemists tasted many of their products, either intentionally (as a means of characterization) or accidentally (due to poor laboratory hygiene).
One of the first attempts to draw systematic correlations between molecules' structures and their tastes was made by a German chemist, Georg Cohn, in 1914. He hypothesized that to evoke a certain taste, a molecule must contain some structural motif (called a sapophore) that produces that taste. With regard to sweetness, he noted that molecules containing multiple hydroxyl groups and those containing chlorine atoms are often sweet, and that among a series of structurally similar compounds, those with smaller molecular weights were often sweeter than the larger compounds. In 1919, Oertly and Myers proposed a more elaborate theory based on a then-current theory of color in synthetic dyes. They hypothesized that to be sweet, a compound must contain one each of two classes of structural motif, a glucophore and an auxogluc. Based on those compounds known to be sweet at the time, they proposed a list of six candidate glucophores and nine auxoglucs. From these beginnings in the early 20th century, the theory of sweetness enjoyed little further academic attention until 1963, when Robert Shallenberger and Terry Acree proposed the AH-B theory of sweetness. Simply put, they proposed that to be sweet, a compound must contain a hydrogen bond donor (AH) and a Lewis base (B) separated by about 0.3 nanometres. According to this theory, the AH-B unit of a sweetener binds with a corresponding AH-B unit on the biological sweetness receptor to produce the sensation of sweetness. B-X theory was proposed by Lemont Kier in 1972. Previous researchers had noted that, among some groups of compounds, there seemed to be a correlation between hydrophobicity and sweetness; this theory formalized those observations by proposing that to be sweet, a compound must have a third binding site (labeled X) that could interact with a hydrophobic site on the sweetness receptor via London dispersion forces.
Later researchers have statistically analyzed the distances between the presumed AH, B, and X sites in several families of sweet substances to estimate the distances between these interaction sites on the sweetness receptor. MPA theory The most elaborate theory of sweetness to date is the multipoint attachment theory (MPA) proposed by Jean-Marie Tinti and Claude Nofre in 1991. This theory involves a total of eight interaction sites between a sweetener and the sweetness receptor, although not all sweeteners interact with all eight sites. This model has successfully directed efforts aimed at finding highly potent sweeteners, including the most potent family of sweeteners known to date, the guanidine sweeteners. The most potent of these, lugduname, is about 225,000 times sweeter than sucrose.
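The "times sweeter than sucrose" figures quoted above are weight-for-weight potency ratios. Assuming, as a simplification, that sweetness scales linearly with concentration (real sweetener dose-response curves are nonlinear), such a ratio can be turned into an equivalent amount; the function below is an illustrative sketch, not a method from the sweetness literature:

```python
def equisweet_amount(sucrose_grams: float, relative_sweetness: float) -> float:
    """Grams of a sweetener needed to match a given mass of sucrose,
    assuming sweetness scales linearly with concentration (a simplification)."""
    return sucrose_grams / relative_sweetness

# Lugduname is reported as about 225,000 times sweeter than sucrose, so
# matching 10 g of sucrose would take on the order of tens of micrograms.
print(equisweet_amount(10.0, 225_000))
```

This kind of back-of-the-envelope conversion is why highly potent sweeteners are used in such tiny quantities in food products.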
https://en.wikipedia.org/wiki/Plastic%20bottle
Plastic bottle
A plastic bottle is a bottle constructed from high-density or low-density plastic. Plastic bottles are typically used to store liquids such as water, soft drinks, motor oil, cooking oil, medicine, shampoo or milk. They range in size from very small bottles to large carboys. Consumer blow molded containers often have integral handles or are shaped to facilitate grasping. Plastic was invented in the nineteenth century and was originally used to replace common materials such as ivory, rubber, and shellac. Plastic bottles were first used commercially in 1947, but remained relatively expensive until the early 1950s when high-density polyethylene was introduced. They quickly became popular with both manufacturers and customers because compared to glass bottles, plastic bottles are lighter, cheaper and easier to transport. However, the biggest advantage plastic bottles have over their glass counterparts is their superior resistance to breakage, in both production and transportation. Except for wine and beer, the food industry has largely replaced glass bottles with plastic bottles. Production The materials used in the manufacture of plastic bottles vary by application. Petrochemical resins High-density polyethylene (HDPE) HDPE is the most widely used resin for plastic bottles. This material is economical, impact resistant, and provides a good moisture barrier. HDPE is compatible with a wide range of products including acids and caustics, but is not compatible with solvents. It is supplied in FDA-approved food grade. HDPE is naturally translucent and flexible. The addition of color will make HDPE opaque, but not glossy. HDPE lends itself to silk screen decoration. While HDPE provides good protection at below-freezing temperatures, it cannot be used with products filled above or products requiring a hermetic (vacuum) seal.
Fluorine-treated HDPE These bottles are exposed to fluorine gas in a secondary operation, are similar in appearance to HDPE, and serve as a barrier to hydrocarbons and aromatic solvents. Fluorine-treated bottles may contain insecticides, pesticides, herbicides, photographic chemicals, agricultural chemicals, household and industrial cleaners, electronic chemicals, medical cleaners and solvents, citrus products, d-limonene, flavors, fragrances, essential oils, surfactants, polishes, additives, graffiti cleaning products, pre-emergents, stone and tile care products, waxes, paint thinner, gasoline, biodiesel, xylene, acetone, kerosene and more. Low-density polyethylene (LDPE) LDPE is similar in composition to HDPE. It is less rigid and generally less chemically resistant than HDPE, but is more translucent. LDPE is used primarily for squeeze applications. LDPE is significantly more expensive than HDPE. Polyethylene terephthalate (PET, PETE) / Polyester This resin is commonly used for carbonated beverages, water bottles, and food packaging. PET provides very good alcohol and essential oil barrier properties, generally good chemical resistance (although acetones and ketones will attack PET), and a high degree of impact resistance and tensile strength. The orienting process serves to improve gas and moisture barrier properties and impact strength. This material is not resistant at high temperature. Its maximum temperature is . Polycarbonate (PC) PC is a clear plastic used to make bottles for milk and water. Five-gallon water bottles are a common application of PC. Polypropylene (PP) PP is used primarily for jars and closures. It is rigid and is a barrier to moisture. Polypropylene is stable at temperatures up to . It is autoclavable and offers the potential for steam sterilization. The compatibility of PP with high filling temperatures is responsible for its use with hot fill products. 
PP has excellent chemical resistance, but provides poor impact resistance in cold temperatures. Polystyrene (PS) PS is transparent and rigid. It is commonly used with dry products, including vitamins, petroleum jellies, and spices. Polystyrene does not provide good barrier properties, and exhibits poor impact resistance. Polyvinyl chloride (PVC) PVC is naturally clear. It has high resistance to oils and transmits very little oxygen. It provides a strong barrier to most gases, and its drop-impact resistance is also very good. This material is chemically resistant, but it is vulnerable to some solvents. PVC has poor resistance to high temperatures and will distort at , making it incompatible with hot-filled products. It has attained notoriety in recent years due to potential health risks. Post-consumer resin (PCR) PCR is a blend of reclaimed natural HDPE (primarily from milk and water containers) and virgin resin. The recycled material is cleaned, ground and recompounded into uniform pellets along with prime virgin material especially designed to build up environmental stress crack resistance. PCR has no odor but exhibits a slight yellow tint in its natural state. This tint can be hidden by the addition of color. PCR is easily processed and inexpensive. However, it cannot come into direct contact with food or pharmaceutical products. PCR can be produced in a variety of recycled content percentages up to 100%. K-Resin (SBC) SBC is a highly transparent, high-gloss, impact-resistant resin. K-Resin, a styrene derivative, is processed on polyethylene equipment. It is specifically incompatible with fats and unsaturated oils or solvents. This material is frequently used for display and point-of-purchase packaging. Other materials Bioplastic A bioplastic is a polymer structure based on processed biological materials rather than petrochemicals. Bioplastics are commonly made from renewable sources like starch, vegetable oil, and less commonly, chicken feathers.
The idea behind bioplastic is to create a plastic that has the ability to biodegrade. Bisphenol A (BPA) BPA is a synthetic compound that serves as a raw material in the manufacturing of such plastics as polycarbonates and epoxy resins. It is commonly found in reusable drink containers, food storage containers, canned foods, children's toys and cash register receipts. BPA can seep into food or beverages from containers that are made with BPA. Acrylonitrile Acrylonitrile is an organic compound and one of the components of ABS plastic. Acrylonitrile bottles were introduced in 1974 by Coca-Cola to replace glass but were banned by the Food and Drug Administration after showing adverse health effects in animal studies. Concerns There is ongoing concern about the use of plastics in consumer food packaging, the environmental impact of the disposal of these products, and consumer safety. Karin Michels, Associate Professor at Harvard Medical School, suggests that toxins leaching from plastics might be related to disorders in humans such as endocrine disruption. Aluminum and cyanide, which the United States Food and Drug Administration (FDA) considers toxic, have been found as trace elements in examined samples. In the United States, plastic water bottles are regulated by the FDA, which also inspects and samples bottled water plants periodically. Plastic water bottle plants hold a low priority for inspection due to a continuously good safety record. In the past, the FDA maintained that there was a lack of human data showing plastics pose health problems. However, in January 2010, the FDA reversed its opinion, saying it now had concerns about health risks. It is a common misconception that drinking from plastic water bottles increases cancer risk; there is no such risk.
Microplastics An article published on 6 November 2017 in Water Research reported on the content of microplastics in mineral waters packed in plastic or glass bottles, or beverage cartons. In 2018, research conducted by Sherri Mason from the State University of New York at Fredonia revealed the presence of polypropylene, polystyrene, nylon and polyethylene terephthalate microparticles in plastic bottles. Polypropylene was found to be the most common polymeric material (54%), with nylon the second most abundant (16%). The study also mentioned that polypropylene and polyethylene are polymers that are often used to make plastic bottle caps. Also, 4% of retrieved plastic particles were found to have signatures of industrial lubricants coating the polymer. The research was reviewed by Andrew Mayes of the University of East Anglia (UEA) School of Chemistry. The European Food Safety Authority suggested most microplastics are excreted by the body; however, the UN Food and Agriculture Organization warned that it is possible that the smallest particles (< 1.5 μm) could enter the bloodstream and organs via the intestinal wall. Microplastics have been observed to cross the blood-brain barrier and have been found in semen, testes, and placental tissue. Labelling Plastic bottles are marked at their base with the resin identification code to indicate the material used. Product labels are attached with adhesive or are shrunk to fit. In-mould labelling is a process of building the label into the bottle during molding. Speciality types Collapsible bottle An accordion bottle or collapsible bottle is a plastic bottle designed to store darkroom chemicals or any other chemical that is highly susceptible to oxidation. They work by being able to squeeze down to remove excess air from the bottle to extend the life of the product.
An alternate benefit is minimizing storage, transportation, or disposal space when the bottle is empty or as the content is being dispensed, for example with water bottles used by hikers. Collapsing can also keep foods fresher. Carbonated drinks bottles Bottles used for storing carbonated water and soft drinks have an uneven bottom for stability reasons. The technology was developed and patented by the Lithuanian Domas Adomaitis in 1971. Although carbonated soda bottles were designed for holding beverages, they have been used for other purposes. For example, in poor countries, empty two-liter soda bottles have been reused as improvised personal flotation devices to prevent drowning.
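The resin identification codes mentioned under Labelling assign a standard number to each of the common bottle resins discussed above. As a minimal sketch, the mapping can be expressed as a lookup table (the dictionary and function names here are illustrative, not part of any standard API):

```python
# Standard resin identification codes for common bottle plastics
RESIN_CODES = {
    1: "PET",   # polyethylene terephthalate: carbonated drinks, water bottles
    2: "HDPE",  # high-density polyethylene: milk jugs, most bottles
    3: "PVC",   # polyvinyl chloride: oil-resistant, distorts at high heat
    4: "LDPE",  # low-density polyethylene: squeeze bottles
    5: "PP",    # polypropylene: jars, closures, hot-fill products
    6: "PS",    # polystyrene: dry products such as vitamins and spices
    7: "Other", # everything else, including polycarbonate (PC)
}

def resin_name(code: int) -> str:
    """Return the resin abbreviation for a molded-in code, or 'Unknown'."""
    return RESIN_CODES.get(code, "Unknown")

print(resin_name(2))  # HDPE
```

The numeric code is molded into the base of the bottle, so a sorting facility (or a curious consumer) can identify the resin without any labelling.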
https://en.wikipedia.org/wiki/Spanish%20mackerel
Spanish mackerel
Scomberomorini is a tribe of ray-finned saltwater bony fishes whose members are commonly known as the Spanish mackerels, seerfishes or seer fish. This tribe is a subset of the mackerel family (Scombridae), a family that it shares with four sister tribes: the tunas, mackerels, bonitos, and the butterfly kingfish. Scomberomorini comprises 21 species across three genera. They are pelagic fish, fast swimmers, and predatory in nature, and they fight vigorously when caught. They are mainly caught using hooks and lines. Taxonomy The following cladogram shows the most likely evolutionary relationships between the Spanish mackerels and the tunas, mackerels, bonitos, and the butterfly kingfish. This tribe comprises 21 species in three genera:
Acanthocybium (Gill, 1862)
A. solandri (Cuvier, 1832), wahoo
Grammatorcynus (Gill, 1862)
G. bicarinatus (Quoy & Gaimard, 1825), shark mackerel
G. bilineatus (Rüppell, 1836), double-lined mackerel
Scomberomorus (Lacepède, 1801)
S. brasiliensis Collette, Russo & Zavala-Camin, 1978, Serra Spanish mackerel
S. cavalla (Cuvier, 1829), king mackerel
S. commerson (Lacépède, 1800), narrow-barred Spanish mackerel
S. concolor (Lockington, 1879), Monterrey Spanish mackerel
S. guttatus (Bloch & Schneider, 1801), Indo-Pacific king mackerel
S. koreanus (Kishinouye, 1915), Korean seerfish
S. lineolatus (Cuvier, 1829), streaked seerfish
S. maculatus (Mitchill, 1815), Atlantic Spanish mackerel
S. multiradiatus Munro, 1964, Papuan seerfish
S. munroi Collette & Russo, 1980, Australian spotted mackerel
S. niphonius (Cuvier, 1832), Japanese Spanish mackerel
S. plurilineatus Fourmanoir, 1966, Kanadi kingfish
S. queenslandicus Munro, 1943, Queensland school mackerel
S. regalis (Bloch, 1793), Cero mackerel
S. semifasciatus (Macleay, 1883), broadbarred king mackerel
S. sierra Jordan & Starks, 1895, Pacific sierra
S. sinensis (Lacépède, 1800), Chinese seerfish
S. tritor (Cuvier, 1832), West African Spanish mackerel
In various regions of South India and Sri Lanka, Spanish mackerel is much liked as a delicacy. In Andhra Pradesh and Tamil Nadu, this fish is called Vanjaram and is usually the most expensive fish available. In Kerala, it is called Neymeen. It is called Aiykoora in northern Kerala and south coastal Karnataka. In Sri Lanka, Spanish mackerel is known as thora. Seerfishes are also referred to as king mackerels in some areas. They have very sharp teeth and are handled with care by fishers familiar with them. Seerfish is one of the more popular fishes in this group for eating. Seerfishes are notorious for causing histamine poisoning. It can be fried, grilled, and steamed. It is gaining popularity in the South Pacific and United States as a canned product. Throughout India, Spanish mackerel may be known by the following names:
Tamil - Vanjaram, Seela
Telugu - Vanjaram Chepa
Kannada - Konema, Kalagnani, Surmai
Tulu - Anjal
Malayalam - Neymeen, Aiykoora
Sinhala (Sri Lanka) - Thora
Marathi - Surmai
Konkani - Iswon
https://en.wikipedia.org/wiki/High-energy%20nuclear%20physics
High-energy nuclear physics
High-energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high-energy physics. The primary focus of this field is the study of heavy-ion collisions, as compared with the lighter atoms used in other particle accelerators. At sufficient collision energies, these types of collisions are theorized to produce the quark–gluon plasma. In peripheral nuclear collisions at high energies, one expects to obtain information on the electromagnetic production of leptons and mesons that is not accessible in electron–positron colliders due to their much smaller luminosities. Previous high-energy nuclear accelerator experiments have studied heavy-ion collisions using projectile energies of 1 GeV/nucleon at JINR and LBNL-Bevalac up to 158 GeV/nucleon at CERN-SPS. Experiments of this type, called "fixed-target" experiments, primarily accelerate a "bunch" of ions (typically around 10⁶ to 10⁸ ions per bunch) to speeds approaching the speed of light (0.999c) and smash them into a target of similar heavy ions. While all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at Brookhaven National Laboratory's Alternating Gradient Synchrotron (AGS) and uranium beams on uranium targets at CERN's Super Proton Synchrotron. High-energy nuclear physics experiments are continued at the Brookhaven National Laboratory's Relativistic Heavy Ion Collider (RHIC) and at the CERN Large Hadron Collider. At RHIC the programme began with four experiments—PHENIX, STAR, PHOBOS, and BRAHMS—all dedicated to studying collisions of highly relativistic nuclei. Unlike fixed-target experiments, collider experiments steer two accelerated beams of ions toward each other at (in the case of RHIC) six interaction regions. At RHIC, ions can be accelerated (depending on the ion size) from 100 GeV/nucleon to 250 GeV/nucleon.
Since each colliding ion possesses this energy moving in opposite directions, the collisions can achieve a maximal center-of-mass energy of 200 GeV per nucleon pair for gold and 500 GeV per nucleon pair for protons. The ALICE (A Large Ion Collider Experiment) detector at the LHC at CERN is specialized in studying Pb–Pb nuclei collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. All major LHC detectors—ALICE, ATLAS, CMS and LHCb—participate in the heavy-ion programme. History The exploration of hot hadron matter and of multiparticle production has a long history, initiated by theoretical work on multiparticle production by Enrico Fermi in the US and Lev Landau in the USSR. These efforts paved the way for the development in the early 1960s of the thermal description of multiparticle production and the statistical bootstrap model by Rolf Hagedorn. These developments led to the search for and discovery of the quark–gluon plasma. The onset of the production of this new form of matter remains under active investigation. First collisions The first heavy-ion collisions at modestly relativistic conditions were undertaken at the Lawrence Berkeley National Laboratory (LBNL, formerly LBL) at Berkeley, California, U.S.A., and at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, USSR. At the LBL, a transport line was built to carry heavy ions from the heavy-ion accelerator HILAC to the Bevatron. The energy scale at the level of 1–2 GeV per nucleon attained initially yields compressed nuclear matter at a few times normal nuclear density. The demonstration of the possibility of studying the properties of compressed and excited nuclear matter motivated research programs at much higher energies in accelerators available at BNL and CERN, with relativistic beams striking fixed laboratory targets. The first collider experiments started in 1999 at RHIC, and the LHC began colliding heavy ions at one order of magnitude higher energy in 2010.
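The collider advantage described here follows from relativistic kinematics: for two equal beams colliding head-on, the center-of-mass energy grows linearly with beam energy, while for a beam striking a stationary target it grows only as the square root of the beam energy. A rough sketch of the arithmetic (the nucleon mass value is an approximation used purely for illustration):

```python
import math

M_NUCLEON = 0.938  # GeV; approximate nucleon rest mass (illustrative value)

def sqrt_s_collider(e_beam: float) -> float:
    """Center-of-mass energy per nucleon pair for two equal head-on beams.
    Total energy is 2*E and the net momentum is zero, so sqrt(s) = 2*E."""
    return 2.0 * e_beam

def sqrt_s_fixed_target(e_lab: float, m: float = M_NUCLEON) -> float:
    """Center-of-mass energy per nucleon pair for a beam hitting a target
    at rest: s = (E_lab + m)**2 - p_lab**2 = 2*m*m + 2*m*E_lab."""
    return math.sqrt(2.0 * m * m + 2.0 * m * e_lab)

# RHIC gold beams at 100 GeV/nucleon per beam give 200 GeV per nucleon pair,
# while a 158 GeV/nucleon fixed-target beam (as at the CERN-SPS) reaches
# only about 17 GeV per nucleon pair.
print(sqrt_s_collider(100.0))
print(sqrt_s_fixed_target(158.0))
```

This square-root scaling is the main reason the field moved from fixed-target programs at the AGS and SPS to the RHIC and LHC colliders.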
CERN operation The LHC collider at CERN operates one month a year in the nuclear-collision mode, with Pb nuclei colliding at 2.76 TeV per nucleon pair, about 1500 times the energy equivalent of the rest mass. Overall, 1250 valence quarks collide, generating a hot quark–gluon soup. Heavy atomic nuclei stripped of their electron cloud are called heavy ions, and one speaks of (ultra)relativistic heavy ions when the kinetic energy significantly exceeds the rest energy, as is the case at the LHC. The outcome of such collisions is the production of very many strongly interacting particles. In August 2012, ALICE scientists announced that their experiments produced quark–gluon plasma with a temperature of around 5.5 trillion kelvins, the highest temperature achieved in any physical experiment thus far. This temperature is about 38% higher than the previous record of about 4 trillion kelvins, achieved in the 2010 experiments at the Brookhaven National Laboratory. The ALICE results were announced at the August 13 Quark Matter 2012 conference in Washington, D.C. The quark–gluon plasma produced by these experiments approximates the conditions in the universe that existed microseconds after the Big Bang, before the matter coalesced into atoms. Objectives There are several scientific objectives of this international research program: The formation and investigation of a new state of matter made of quarks and gluons, the quark–gluon plasma (QGP), which prevailed in the early universe during its first 30 microseconds; The study of color confinement and the transformation of the color-confining (quark-confining) vacuum state into the excited state physicists call the perturbative vacuum, in which quarks and gluons can roam free, which occurs at the Hagedorn temperature; The study of the origins of the mass of hadronic (proton, neutron, etc.) matter, believed to be related to the phenomenon of quark confinement and vacuum structure.
Experimental program This experimental program follows on a decade of research at the RHIC collider at BNL and almost two decades of studies using fixed targets at SPS at CERN and AGS at BNL. This experimental program has already confirmed that the extreme conditions of matter necessary to reach the QGP phase can be reached. A typical temperature range achieved in the QGP created is more than times greater than in the center of the Sun. This corresponds to an energy density . The corresponding relativistic-matter pressure is
More information
Rutgers University Nuclear Physics Home Page
Publications - High Energy Nuclear Physics (HENP)
https://web.archive.org/web/20101212105542/http://www.er.doe.gov/np/
https://en.wikipedia.org/wiki/Takin
Takin
The takin (Budorcas taxicolor), also called cattle chamois or gnu goat, is a large species of ungulate of the subfamily Caprinae found in the eastern Himalayas. It includes four subspecies: the Mishmi takin (B. t. taxicolor), the golden takin (B. t. bedfordi), the Tibetan (or Sichuan) takin (B. t. tibetana), and the Bhutan takin (B. t. whitei). Whilst the takin has in the past been placed together with the muskox in the tribe Ovibovini, more recent mitochondrial research shows a closer relationship to Ovis (sheep). Its physical similarity to the muskox is therefore an example of convergent evolution. The takin is the national animal of Bhutan. Etymology The specific name taxicolor comes from the Latin taxus ('badger') and color, referring to its badger-like coloration. Appearance The takin rivals the muskox as the largest and stockiest of the subfamily Caprinae, which includes goats, sheep, and similar species. Its short legs are supported by large, two-toed hooves, which each have a highly developed spur. It has a stocky body and a deep chest. Its large head is made distinctive by its long, arched nose and stout horns, which are ridged at the base. Horns are present in both sexes, and run parallel to the skull before turning upwards to a short point; they are about long, but can grow up to . Its long, shaggy coat is light in color with a dark stripe along the back, and males (bulls) also have dark faces. Four subspecies of takin are currently recognised, and these tend to show a variation in coat colour. Their thick wool often turns black in colour on their undersides and legs. Their overall coloration ranges from dark blackish to reddish-brown suffused with grayish-yellow in the eastern Himalayas to lighter yellow-gray in the Sichuan Province to mostly golden or (rarely) creamy-white with fewer black hairs in the Shaanxi Province. The legend of the 'golden fleece' sought by Jason and the Argonauts may have been inspired by the lustrous coat of the golden takin (B. t. bedfordi).
Hair length can range from , on the flanks of the body in summer, up to on the underside of the head in winter. In height, takin stand at the shoulder, but measure a relatively short in head-and-body length, with the tail adding only an additional . Measurements of weights vary, but according to most reports, the males are slightly larger, weighing against in females. Sources including Betham (1908) report that females are larger, with the largest captive takin known to the author, at , having been female. Takin can weigh up to or in some cases. Instead of relying on localized scent glands, the takin secretes an oily, strong-smelling substance over its whole body, enabling it to mark objects such as trees. A prominent nose with a swollen appearance caused biologist George Schaller to liken the takin to a "bee-stung moose." Features reminiscent of familiar domesticated species have earned takins such nicknames as "cattle chamois" and "gnu goat." Distribution and habitat Takin are found from forested valleys to rocky, grass-covered alpine zones, at altitudes between above sea level. The Mishmi takin occurs in eastern Arunachal Pradesh, while the Bhutan takin is found in western Arunachal Pradesh and Bhutan. Dihang-Dibang Biosphere Reserve in Arunachal Pradesh, India, is a stronghold of the Mishmi, Upper Siang (Kopu), and Bhutan takins. Behaviour and ecology Takin are found in small family groups of around 20 individuals, although older males may lead more solitary existences. In the summer, herds of up to 300 individuals gather high on the mountain slopes. Groups often appear to occur in largest numbers when favorable feeding sites, salt licks, or hot springs are located. Mating takes place in July and August. Adult males compete for dominance by sparring head-to-head with opponents, and both sexes appear to use the scent of their own urine to indicate dominance. A single young is born after a gestation period of around eight months.
Takin migrate from the upper pasture to lower, more forested areas in winter and favor sunny spots upon sunrise. When disturbed, individuals give a 'cough' alarm call and the herd retreats into thick bamboo thickets and lies on the ground for camouflage. Takin feed in the early morning and late afternoon, grazing on a variety of leaves and grasses, as well as bamboo shoots and flowers. They have been observed standing on their hind legs to feed on leaves over high. Salt is also an important part of their diets, and groups may stay at a mineral deposit for several days. Threats The takin is listed as Vulnerable on the IUCN Red List and considered Endangered in China. It is threatened by overhunting and the destruction of its natural habitat. It is not a common species naturally, and the population appears to have been reduced considerably. Takin horns have appeared in the illegal wildlife trade in Myanmar; and during three surveys carried out from 1999 to 2006 in the Tachilek market, a total of 89 sets of horns were observed openly for sale. Taxonomy Relationships with other caprines based on mitochondrial DNA after Bover et al.:
https://en.wikipedia.org/wiki/Herschel%20Space%20Observatory
Herschel Space Observatory
The Herschel Space Observatory was a space observatory built and operated by the European Space Agency (ESA). It was active from 2009 to 2013, and was the largest infrared telescope ever launched until the launch of the James Webb Space Telescope in 2021. Herschel carried a mirror and instruments sensitive to the far infrared and submillimetre wavebands (55–672 μm). Herschel was the fourth and final cornerstone mission in the Horizon 2000 programme, following SOHO/Cluster II, XMM-Newton and Rosetta. The observatory was carried into orbit by an Ariane 5 in May 2009, reaching the second Lagrangian point (L2) of the Earth–Sun system, from Earth, about two months later. Herschel is named after Sir William Herschel, the discoverer of infrared radiation and the planet Uranus, and his sister and collaborator Caroline Herschel. The observatory was capable of seeing the coldest and dustiest objects in space; for example, cool cocoons where stars form and dusty galaxies just starting to bulk up with new stars. The observatory sifted through star-forming clouds—the "slow cookers" of star ingredients—to trace the path by which potentially life-forming molecules, such as water, form. The telescope's lifespan was governed by the amount of coolant available for its instruments; when that coolant ran out, the instruments would stop functioning correctly. At the time of its launch, operations were estimated to last 3.5 years (to around the end of 2012). It continued to operate until 29 April 2013 15:20 UTC, when Herschel ran out of coolant. NASA was a partner in the Herschel mission, with US participants contributing to the mission; providing mission-enabling instrument technology and sponsoring the NASA Herschel Science Center (NHSC) at the Infrared Processing and Analysis Center and the Herschel Data Search at the Infrared Science Archive. Development In 1982 the Far Infrared and Sub-millimetre Telescope (FIRST) was proposed to ESA.
The ESA long-term policy-plan "Horizon 2000", produced in 1984, called for a High Throughput Heterodyne Spectroscopy mission as one of its cornerstone missions. In 1986, FIRST was adopted as this cornerstone mission. It was selected for implementation in 1993, following an industrial study in 1992–1993. The mission concept was redesigned from Earth-orbit to the Lagrangian point L2, in light of experience gained from the Infrared Space Observatory [(2.5–240 μm) 1995–1998]. In 2000, FIRST was renamed Herschel. After being put out to tender in 2000, industrial activities began in 2001. Herschel was launched in 2009. The Herschel mission cost . This figure includes spacecraft and payload, launch and mission expenses, and science operations. Science Herschel specialised in collecting light from objects in the Solar System as well as the Milky Way and even extragalactic objects billions of light-years away, such as newborn galaxies, and was charged with four primary areas of investigation: Galaxy formation in the early universe and the evolution of galaxies; Star formation and its interaction with the interstellar medium; Chemical composition of atmospheres and surfaces of Solar System bodies, including planets, comets and moons; Molecular chemistry across the universe. During the mission, Herschel "made over 35,000 scientific observations" and "amass[ed] more than 25,000 hours' worth of science data from about 600 different observing programs". Instrumentation The mission involved the first space observatory to cover the full far infrared and submillimetre waveband. At , Herschel carried the largest optical telescope ever deployed in space. It was made not from glass but from sintered silicon carbide. The mirror's blank was manufactured by Boostec in Tarbes, France; ground and polished by Opteon Ltd. in Tuorla Observatory, Finland; and coated by vacuum deposition at the Calar Alto Observatory in Spain. 
The light reflected by the mirror was focused onto three instruments, whose detectors were kept at temperatures below . The instruments were cooled with over of liquid helium, boiling away in a near vacuum at a temperature of approximately . The supply of helium on board the spacecraft was a fundamental limit to the operational lifetime of the space observatory; it was originally expected to be operational for at least three years. Herschel carried three detectors: PACS (Photodetecting Array Camera and Spectrometer) An imaging camera and low-resolution spectrometer covering wavelengths from 55 to 210 micrometres, which was designed and built by the Max Planck Institute for Extraterrestrial Physics. The spectrometer had a spectral resolution between R=1000 and R=5000 and was able to detect signals as weak as −63 dB. It operated as an integral field spectrograph, combining spatial and spectral resolution. The imaging camera was able to image simultaneously in two bands (130–210 micrometres, together with either 60–85 or 85–130 micrometres) with a detection limit of a few millijanskys. SPIRE (Spectral and Photometric Imaging Receiver) An imaging camera and low-resolution spectrometer covering wavelengths from 194 to 672 micrometres. The spectrometer had a resolution between R=40 and R=1000 at a wavelength of 250 micrometres and was able to image point sources with brightnesses around 100 millijanskys (mJy) and extended sources with brightnesses of around 500 mJy. The imaging camera had three bands, centred at 250, 350 and 500 micrometres, with 139, 88 and 43 pixels respectively. It was able to detect point sources with brightness above 2 mJy, and extended sources between 4 and 9 mJy. A prototype of the SPIRE imaging camera flew on the BLAST high-altitude balloon. NASA's Jet Propulsion Laboratory in Pasadena, California, developed and built the "spider web" bolometers for this instrument, which are 40 times more sensitive than previous versions. 
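The resolving powers quoted for these spectrometers translate directly into wavelength resolution via R = λ/Δλ. A minimal illustration in Python (the function name is ours, for illustration only):

```python
# Resolving power R = lambda / delta_lambda, so the smallest wavelength
# interval a spectrometer can separate is delta_lambda = lambda / R.
def delta_lambda(wavelength_um, resolving_power):
    """Smallest resolvable wavelength interval, in the same units as the input."""
    return wavelength_um / resolving_power

# SPIRE spectrometer at 250 micrometres:
print(delta_lambda(250, 40))    # R = 40   -> 6.25 um (coarse end)
print(delta_lambda(250, 1000))  # R = 1000 -> 0.25 um (fine end)
```

The same relation explains why PACS, with R between 1000 and 5000, could resolve far finer spectral detail than the SPIRE photometric bands.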
The Herschel-SPIRE instrument was built by an international consortium comprising more than 18 institutes from eight countries, of which Cardiff University was the lead institute. HIFI (Heterodyne Instrument for the Far Infrared) A heterodyne detector able to electronically separate radiation of different wavelengths, giving a spectral resolution as high as R=10^7. The spectrometer operated within two wavelength bands, from 157 to 212 micrometres and from 240 to 625 micrometres. SRON Netherlands Institute for Space Research led the entire process of designing, constructing and testing HIFI. The HIFI Instrument Control Center, also under the leadership of SRON, was responsible for obtaining and analysing the data. NASA developed and built the mixers, local oscillator chains and power amplifiers for this instrument. The NASA Herschel Science Center, part of the Infrared Processing and Analysis Center at the California Institute of Technology, also in Pasadena, contributed science planning and data analysis software. Service module A common service module (SVM) was designed and built by Thales Alenia Space in its Turin plant for the Herschel and Planck missions, as they were combined into one single programme. Structurally, the Herschel and Planck SVMs are very similar. Both SVMs are octagonal in shape and, on both, each panel is dedicated to accommodating a designated set of warm units, while taking into account the heat dissipation requirements of the different warm units, the instruments, and the spacecraft. Furthermore, on both spacecraft a common design was achieved for the avionics systems, attitude control and measurement systems (ACMS), command and data management systems (CDMS), power subsystems and the tracking, telemetry, and command subsystem (TT&C). All spacecraft units on the SVM are redundant. 
Power subsystem On each spacecraft, the power subsystem consists of the solar array, employing triple-junction solar cells, a battery and the power control unit (PCU). It is designed to interface with the 30 sections of each solar array, provide a regulated 28 V bus, distribute this power via protected outputs, and handle battery charging and discharging. For Herschel, the solar array is fixed on the bottom part of the baffle designed to protect the cryostat from the Sun. The three-axis attitude control system keeps this baffle pointed toward the Sun. The top part of this baffle is covered with optical solar reflector (OSR) mirrors, which reflect 98% of the Sun's energy, avoiding heating of the cryostat. Attitude and orbit control This function is performed by the attitude control computer (ACC), which is the platform for the ACMS. It is designed to fulfil the pointing and slewing requirements of the Herschel and Planck payloads. The Herschel spacecraft is three-axis stabilized. The absolute pointing error needs to be less than 3.7 arc seconds. The main sensor of the line of sight in both spacecraft is the star tracker. Launch and orbit The spacecraft, built in the Cannes Mandelieu Space Center with Thales Alenia Space as prime contractor, was successfully launched from the Guiana Space Centre in French Guiana at 13:12:02 UTC on 14 May 2009, aboard an Ariane 5 rocket, along with the Planck spacecraft, and placed on a highly elliptical orbit on its way towards the second Lagrangian point. The orbit's perigee was 270.0 km (intended ), apogee 1,197,080 km (intended ), inclination 5.99 deg (intended ). On 14 June 2009, ESA successfully sent the command for the cryocover to open, which allowed the PACS system to see the sky and transmit images within a few weeks. The lid had to remain closed until the telescope was well into space to prevent contamination. Five days later, the first set of test photos, depicting the M51 Group, was published by ESA. 
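The spacecraft's destination, the second Lagrangian point mentioned above, lies roughly at the edge of Earth's gravitational sphere of influence; its distance from Earth can be estimated with the Hill-sphere approximation. A quick sketch (the constants are standard textbook values, not taken from the text):

```python
# Distance from Earth to the Sun-Earth L2 point, estimated with the
# Hill-sphere approximation r ~ a * (m_earth / (3 * m_sun))**(1/3).
AU_KM = 1.496e8        # mean Earth-Sun distance, km
MASS_RATIO = 3.003e-6  # Earth/Sun mass ratio

r_l2_km = AU_KM * (MASS_RATIO / 3) ** (1 / 3)
print(round(r_l2_km / 1e6, 2))  # ~1.5 (million km), matching the distance quoted for Herschel's orbit
```

This back-of-the-envelope value agrees with the 1.5 million kilometres the article gives for Herschel's operational orbit.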
In mid-July 2009, approximately sixty days after launch, it entered a halo orbit of 800,000 km average radius around the second Lagrangian point (L2) of the Earth–Sun system, 1.5 million kilometres from the Earth. Discoveries On 21 July 2009, Herschel commissioning was declared successful, allowing the start of the operational phase. Overall responsibility for Herschel was formally handed over from programme manager Thomas Passvogel to mission manager Johannes Riedinger. Herschel was instrumental in the discovery of an unknown and unexpected step in the star-forming process. The initial confirmation, and later verification with the help of ground-based telescopes, of a vast hole of empty space, previously believed to be a dark nebula, in the area of NGC 1999 shed new light on the way newly forming star regions discard the material which surrounds them. In July 2010, a special issue of Astronomy and Astrophysics was published with 152 papers on initial results from the observatory. A second special issue of Astronomy and Astrophysics was published in October 2010, devoted solely to the HIFI instrument, whose technical failure had taken it offline for over six months between August 2009 and February 2010. On 1 August 2011, it was reported that molecular oxygen had been definitively confirmed in space with the Herschel Space Telescope, only the second time scientists had found the molecule in space. It had been previously reported by the Odin team. An October 2011 report published in Nature stated that Herschel measurements of deuterium levels in the comet Hartley 2 suggest that much of Earth's water could have initially come from cometary impacts. On 20 October 2011, it was reported that oceans' worth of cold water vapor had been discovered in the accretion disc of a young star. 
Unlike warm water vapor, previously detected near forming stars, cold water vapor would be capable of forming comets, which could then bring water to inner planets, as is theorized for the origin of water on Earth. On 18 April 2013, the Herschel team announced in another Nature paper that it had located an exceptional starburst galaxy which produced over 2,000 solar masses of stars a year. The galaxy, termed HFLS3, is located at z = 6.34, corresponding to only 880 million years after the Big Bang. Just days before the end of its mission, ESA announced that Herschel observations had led to the conclusion that water on Jupiter had been delivered by the collision of Comet Shoemaker–Levy 9 in 1994. On 22 January 2014, ESA scientists using Herschel data reported the detection, for the first definitive time, of water vapor on the dwarf planet Ceres, the largest object in the asteroid belt. The finding was unexpected because comets, not asteroids, are typically considered to "sprout jets and plumes". According to one of the scientists, "The lines are becoming more and more blurred between comets and asteroids." End of mission On 29 April 2013, ESA announced that Herschel's supply of liquid helium, used to cool the instruments and detectors on board, had been depleted, thus ending its mission. At the time of the announcement, Herschel was approximately 1.5 million km from Earth. Because Herschel's orbit at the L2 point is unstable, ESA wanted to guide the craft onto a known trajectory. ESA managers considered two options: place Herschel into a heliocentric orbit where it would not encounter Earth for at least several hundred years, or guide Herschel on a course toward the Moon for a destructive high-speed collision that would help in the search for water at a lunar pole. Herschel would have taken about 100 days to reach the Moon. The managers chose the first option because it was less costly. 
On 17 June 2013, Herschel was fully deactivated, with its fuel tanks forcibly depleted and the onboard computer programmed to cease communications with Earth. The final command, which severed communications, was sent from the European Space Operations Centre (ESOC) at 12:25 UTC. The mission's post-operations phase continued until 2017. The main tasks were consolidation and refinement of instrument calibration, to improve data quality, and data processing, to create a body of scientifically validated data. After Herschel Following Herschel's demise, some European astronomers have pushed for the joint European-Japanese SPICA far-infrared observatory project, as well as ESA's continued partnership in NASA's James Webb Space Telescope. James Webb covers the near-infrared spectrum from 0.6 to 28.5 μm, and SPICA covers the mid-to-far-infrared spectral range between 12 and 230 μm. While Herschel's dependence on liquid helium coolant limited the design life to around three years, SPICA would have used mechanical Joule–Thomson coolers to sustain cryogenic temperatures for a longer period of time. SPICA's sensitivity was to be two orders of magnitude higher than Herschel's. NASA's proposed Origins Space Telescope (OST) would also observe in the far-infrared band of light. Europe is leading the study for one of OST's five instruments, the Heterodyne Receiver for OST (HERO).
Sodium azide
Sodium azide is an inorganic compound with the formula . This colorless salt is the gas-forming component in some car airbag systems. It is used for the preparation of other azide compounds. It is an ionic substance, is highly soluble in water, and is acutely poisonous. Structure Sodium azide is an ionic solid. Two crystalline forms are known, rhombohedral and hexagonal. Both adopt layered structures. The azide anion is very similar in each form, being centrosymmetric with N–N distances of 1.18 Å. The ion has an octahedral geometry. Each azide is linked to six centers, with three Na–N bonds to each terminal nitrogen center. Preparation The common synthesis method is the "Wislicenus process", which proceeds in two steps in liquid ammonia. In the first step, ammonia is converted to sodium amide by metallic sodium: This is a redox reaction in which metallic sodium gives an electron to a proton of ammonia, which is reduced to hydrogen gas. Sodium easily dissolves in liquid ammonia to produce solvated electrons, which are responsible for the blue color of the resulting liquid. The and ions are produced by this reaction. The sodium amide is subsequently combined with nitrous oxide: These reactions are the basis of the industrial route, which produced about 250 tons per year in 2004, with production increasing due to the increased use of airbags. Laboratory methods Curtius and Thiele developed another production process, in which a nitrite ester is converted to sodium azide using hydrazine. This method is suited for laboratory preparation of sodium azide: Alternatively, the salt can be obtained by the reaction of sodium nitrate with sodium amide. 
Chemical reactions Acid formation of hydrazoic acid Treatment of sodium azide with strong acids gives gaseous hydrazoic acid (hydrogen azide; HN3), which is also extremely toxic: Hydrazoic acid equilibrium Aqueous solutions contain minute amounts of hydrazoic acid, the formation of which is described by the following equilibrium: , K = 10^−4.6 Destruction Sodium azide can be destroyed by treatment with in situ prepared nitrous acid (HNO2; not HNO3). In situ preparation is necessary because HNO2 is unstable and decomposes rapidly in aqueous solutions. This destruction must be done with great caution and within a chemical fume hood, as the gaseous nitric oxide (NO) formed is also toxic, and an incorrect order of acid addition for in situ formation of HNO2 will instead produce the highly toxic gaseous hydrazoic acid (HN3). Applications Automobile airbags and aircraft evacuation slides Older airbag formulations contained mixtures of oxidizers, sodium azide and other agents, including ignitors and accelerants. An electronic controller detonates this mixture during an automobile crash: The same reaction occurs upon heating the salt to approximately 300 °C. The sodium that is formed is a potential hazard on its own and, in automobile airbags, it is converted by reaction with other ingredients, such as potassium nitrate and silica. In the latter case, innocuous sodium silicates are generated. While sodium azide is still used in evacuation slides on modern aircraft, newer-generation automotive air bags contain less sensitive explosives such as nitroguanidine or guanidine nitrate. Organic and inorganic synthesis Due to its explosion hazard, sodium azide is of only limited value in industrial-scale organic synthesis. In the laboratory, it is used to introduce the azide functional group by displacement of halides. 
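The equilibrium constant quoted above (K = 10^−4.6, i.e. pKa ≈ 4.6) lets one estimate how much of the azide in solution exists as volatile hydrazoic acid at a given pH, which is why the order of acid addition during destruction matters. A minimal sketch using the Henderson–Hasselbalch relation (the function name is ours):

```python
# Fraction of total azide present as hydrazoic acid (HN3) at a given pH,
# from the equilibrium HN3 <-> H+ + N3- with K = 10^-4.6 (pKa = 4.6):
#   fraction_HN3 = 1 / (1 + 10**(pH - pKa))
def fraction_hn3(pH, pKa=4.6):
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

print(fraction_hn3(4.6))  # at pH = pKa, exactly half the azide is HN3 -> 0.5
print(fraction_hn3(7.0))  # near-neutral water: well under 1% is HN3
# Acidifying shifts the equilibrium toward volatile, toxic HN3, the reason
# acid must be added in the correct order when destroying azide waste.
print(fraction_hn3(2.0))  # strongly acidic: essentially all azide is HN3
```

The numbers make the hazard concrete: a neutral stock solution releases almost no HN3, while an accidentally acidified one converts nearly all of its azide to the toxic gas.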
The azide functional group can thereafter be converted to an amine by reduction with either in ethanol or lithium aluminium hydride, or with a tertiary phosphine, such as triphenylphosphine in the Staudinger reaction, with Raney nickel, or with hydrogen sulfide in pyridine. Oseltamivir, an antiviral medication, is currently produced on a commercial scale by a method which utilizes sodium azide. Sodium azide is a versatile precursor to other inorganic azide compounds, e.g., lead azide and silver azide, which are used in detonators as primary explosives. These azides are significantly more sensitive to premature detonation than sodium azide and thus have limited applications. Lead and silver azide can be made via a double displacement reaction between sodium azide and the corresponding nitrate (most commonly) or acetate salts. Sodium azide can also react with the chloride salts of certain alkaline earth metals in aqueous solution, such as barium chloride or strontium chloride, to produce barium azide and strontium azide respectively, which are also relatively sensitive primary explosive materials. These azides can be recovered from solution through careful desiccation. Biochemistry and biomedical uses Sodium azide is a useful probe reagent and an antibacterial preservative for biochemical solutions. In the past, merthiolate and chlorobutanol were also used as alternatives to azide for preservation of biochemical solutions. Sodium azide is an instantaneous inhibitor of lactoperoxidase, which is useful for stopping lactoperoxidase-catalyzed ¹²⁵I protein radiolabeling experiments. In hospitals and laboratories, it is a biocide; it is especially important in bulk reagents and stock solutions which may otherwise support bacterial growth, where the sodium azide acts as a bacteriostatic agent by inhibiting cytochrome oxidase in gram-negative bacteria; however, some gram-positive bacteria (streptococci, pneumococci, lactobacilli) are intrinsically resistant. 
Agricultural uses It is used in agriculture for pest control of soil-borne pathogens such as Meloidogyne incognita or Helicotylenchus dihystera. It is also used as a mutagen for crop selection of plants such as rice, barley or oats. Safety considerations Sodium azide can be fatally toxic, and even minute amounts can cause symptoms. The toxicity of this compound is comparable to that of soluble alkali cyanides, although no toxicity has been reported from spent airbags. It produces extrapyramidal symptoms with necrosis of the cerebral cortex, cerebellum, and basal ganglia. Toxicity may also include hypotension, blindness and hepatic necrosis. Sodium azide increases cyclic GMP levels in the brain and liver by activation of guanylate cyclase. Sodium azide solutions react with metallic ions to precipitate metal azides, which can be shock sensitive and explosive. This should be considered when choosing a non-metallic transport container for sodium azide solutions in the laboratory. It can also create potentially dangerous situations if azide solutions are disposed of directly down the drain into a sanitary sewer system. Metal in the plumbing system could react, forming highly sensitive metal azide crystals which could accumulate over years. Adequate precautions are necessary for the safe and environmentally responsible disposal of azide solution residues. Intentional consumption Sodium azide has gained attention in the Netherlands and abroad as a chemical used for homicidal and suicidal purposes. At least 172 deaths in the period from 2015 to 2022 have been attributed to sodium azide as part of an illicit substance used as a suicide aid, commonly called drug X (Dutch: middel X). In 2021, a review of all case reports of sodium azide intoxication indicated that 37% of cases were suicide attempts. An increase in the usage of sodium azide as a suicide drug has been attributed to its availability through pyrotechnics-focused online stores.
Rock magnetism
Rock magnetism is the study of the magnetic properties of rocks, sediments and soils. The field arose out of the need in paleomagnetism to understand how rocks record the Earth's magnetic field. This remanence is carried by minerals, particularly certain strongly magnetic minerals like magnetite (the main source of magnetism in lodestone). An understanding of remanence helps paleomagnetists to develop methods for measuring the ancient magnetic field and correct for effects like sediment compaction and metamorphism. Rock magnetic methods are used to get a more detailed picture of the source of the distinctive striped pattern in marine magnetic anomalies that provides important information on plate tectonics. They are also used to interpret terrestrial magnetic anomalies in magnetic surveys as well as the strong crustal magnetism on Mars. Strongly magnetic minerals have properties that depend on the size, shape, defect structure and concentration of the minerals in a rock. Rock magnetism provides non-destructive methods for analyzing these minerals such as magnetic hysteresis measurements, temperature-dependent remanence measurements, Mössbauer spectroscopy, ferromagnetic resonance and so on. With such methods, rock magnetists can measure the effects of past climate change and human impacts on the mineralogy (see environmental magnetism). In sediments, a lot of the magnetic remanence is carried by minerals that were created by magnetotactic bacteria, so rock magnetists have made significant contributions to biomagnetism. History Until the 20th century, the study of the Earth's field (geomagnetism and paleomagnetism) and of magnetic materials (especially ferromagnetism) developed separately. Rock magnetism had its start when scientists brought these two fields together in the laboratory. Koenigsberger (1938), Thellier (1938) and Nagata (1943) investigated the origin of remanence in igneous rocks. 
By heating rocks and archeological materials to high temperatures in a magnetic field, they gave the materials a thermoremanent magnetization (TRM), and they investigated the properties of this magnetization. Thellier developed a series of conditions (the Thellier laws) that, if fulfilled, would allow the intensity of the ancient magnetic field to be determined using the Thellier–Thellier method. In 1949, Louis Néel developed a theory that explained these observations, showed that the Thellier laws were satisfied by certain kinds of single-domain magnets, and introduced the concept of blocking of TRM. When paleomagnetic work in the 1950s lent support to the theory of continental drift, skeptics were quick to question whether rocks could carry a stable remanence for geological ages. Rock magnetists were able to show that rocks could have more than one component of remanence, some soft (easily removed) and some very stable. To get at the stable part, they took to "cleaning" samples by heating them or exposing them to an alternating field. However, later events, particularly the recognition that many North American rocks had been pervasively remagnetized in the Paleozoic, showed that a single cleaning step was inadequate, and paleomagnetists began to routinely use stepwise demagnetization to strip away the remanence in small bits. Fundamentals Types of magnetic order The contribution of a mineral to the total magnetism of a rock depends strongly on the type of magnetic order or disorder. Magnetically disordered minerals (diamagnets and paramagnets) contribute a weak magnetism and have no remanence. The more important minerals for rock magnetism are the minerals that can be magnetically ordered, at least at some temperatures. These are the ferromagnets, ferrimagnets and certain kinds of antiferromagnets. These minerals have a much stronger response to the field and can have a remanence. 
Diamagnetism Diamagnetism is a magnetic response shared by all substances. In response to an applied magnetic field, electrons precess (see Larmor precession), and by Lenz's law they act to shield the interior of a body from the magnetic field. Thus, the moment produced is in the opposite direction to the field and the susceptibility is negative. This effect is weak but independent of temperature. A substance whose only magnetic response is diamagnetism is called a diamagnet. Paramagnetism Paramagnetism is a weak positive response to a magnetic field due to rotation of electron spins. Paramagnetism occurs in certain kinds of iron-bearing minerals because the iron atoms contain unpaired electrons in their shells (see Hund's rules). Some are paramagnetic down to absolute zero and their susceptibility is inversely proportional to the temperature (see Curie's law); others are magnetically ordered below a critical temperature, and their susceptibility increases as the temperature approaches that critical value (see Curie–Weiss law). Ferromagnetism Collectively, strongly magnetic materials are often referred to as ferromagnets. However, this magnetism can arise as the result of more than one kind of magnetic order. In the strict sense, ferromagnetism refers to magnetic ordering where neighboring electron spins are aligned by the exchange interaction. The classic ferromagnet is iron. Below a critical temperature called the Curie temperature, ferromagnets have a spontaneous magnetization and there is hysteresis in their response to a changing magnetic field. Most importantly for rock magnetism, they have remanence, so they can record the Earth's field. Iron does not occur widely in its pure form. It is usually incorporated into iron oxides, oxyhydroxides and sulfides. In these compounds, the iron atoms are not close enough for direct exchange, so they are coupled by indirect exchange or superexchange. 
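The two susceptibility laws mentioned above can be written as χ = C/T (Curie) and χ = C/(T − Tc) (Curie–Weiss, valid above the critical temperature). A small numeric illustration (the Curie constant and temperatures below are arbitrary illustrative values, not from the text):

```python
# chi = C / T         : Curie's law (paramagnetic down to absolute zero)
# chi = C / (T - Tc)  : Curie-Weiss law (ordered below Tc; valid for T > Tc)
def chi_curie(C, T):
    return C / T

def chi_curie_weiss(C, T, Tc):
    return C / (T - Tc)

C = 1.0  # arbitrary Curie constant, illustration only
print(chi_curie(C, 300))             # modest susceptibility at room temperature
print(chi_curie_weiss(C, 900, 858))  # stronger response as T nears Tc (858 K is magnetite's Curie point)
print(chi_curie_weiss(C, 860, 858))  # susceptibility diverges approaching Tc from above
```

The divergence of the Curie–Weiss susceptibility near Tc is the signature, in measurements, of an approaching magnetic ordering transition.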
The result is that the crystal lattice is divided into two or more sublattices with different moments. Ferrimagnetism Ferrimagnets have two sublattices with opposing moments. One sublattice has a larger moment, so there is a net unbalance. Magnetite, the most important of the magnetic minerals, is a ferrimagnet. Ferrimagnets often behave like ferromagnets, but the temperature dependence of their spontaneous magnetization can be quite different. Louis Néel identified four types of temperature dependence, one of which involves a reversal of the magnetization. This phenomenon played a role in controversies over marine magnetic anomalies. Antiferromagnetism Antiferromagnets, like ferrimagnets, have two sublattices with opposing moments, but now the moments are equal in magnitude. If the moments are exactly opposed, the magnet has no remanence. However, the moments can be tilted (spin canting), resulting in a moment nearly at right angles to the moments of the sublattices. Hematite has this kind of magnetism. Magnetic mineralogy Types of remanence Magnetic remanence is often identified with a particular kind of remanence that is obtained after exposing a magnet to a field at room temperature. However, the Earth's field is not large, and this kind of remanence would be weak and easily overwritten by later fields. A central part of rock magnetism is the study of magnetic remanence, both as natural remanent magnetization (NRM) in rocks obtained from the field and remanence induced in the laboratory. Below are listed the important natural remanences and some artificially induced kinds. Thermoremanent magnetization (TRM) When an igneous rock cools, it acquires a thermoremanent magnetization (TRM) from the Earth's field. TRM can be much larger than it would be if exposed to the same field at room temperature (see isothermal remanence). This remanence can also be very stable, lasting without significant change for millions of years. 
TRM is the main reason that paleomagnetists are able to deduce the direction and magnitude of the ancient Earth's field. If a rock is later re-heated (as a result of burial, for example), part or all of the TRM can be replaced by a new remanence. If it is only part of the remanence, it is known as partial thermoremanent magnetization (pTRM). Because numerous experiments have been done modeling different ways of acquiring remanence, pTRM can have other meanings. For example, it can also be acquired in the laboratory by cooling in zero field to a temperature (below the Curie temperature), applying a magnetic field and cooling to a temperature , then cooling the rest of the way to room temperature in zero field. The standard model for TRM is as follows. When a mineral such as magnetite cools below the Curie temperature, it becomes ferromagnetic but is not immediately capable of carrying a remanence. Instead, it is superparamagnetic, responding reversibly to changes in the magnetic field. For remanence to be possible there must be a strong enough magnetic anisotropy to keep the magnetization near a stable state; otherwise, thermal fluctuations make the magnetic moment wander randomly. As the rock continues to cool, there is a critical temperature at which the magnetic anisotropy becomes large enough to keep the moment from wandering: this temperature is called the blocking temperature and referred to by the symbol . The magnetization remains in the same state as the rock is cooled to room temperature and becomes a thermoremanent magnetization. Chemical (or crystallization) remanent magnetization (CRM) Magnetic grains may precipitate from a circulating solution, or be formed during chemical reactions, and may record the direction of the magnetic field at the time of mineral formation. The field is said to be recorded by chemical remanent magnetization (CRM). The mineral recording the field commonly is hematite, another iron oxide. 
Redbeds, clastic sedimentary rocks (such as sandstones) that are red primarily because of hematite formation during or after sedimentary diagenesis, may have useful CRM signatures, and magnetostratigraphy can be based on such signatures. Depositional remanent magnetization (DRM) Magnetic grains in sediments may align with the magnetic field during or soon after deposition; this is known as detrital remanent magnetization (DRM). If the magnetization is acquired as the grains are deposited, the result is a depositional detrital remanent magnetization (dDRM); if it is acquired soon after deposition, it is a post-depositional detrital remanent magnetization (pDRM). Viscous remanent magnetization Viscous remanent magnetization (VRM), also known as viscous magnetization, is remanence that is acquired by ferromagnetic minerals by sitting in a magnetic field for some time. The natural remanent magnetization of an igneous rock can be altered by this process. To remove this component, some form of stepwise demagnetization must be used. Applications of rock magnetism
Biomagnetism
Environmental magnetism
Magnetic anomalies
Magnetostratigraphy
Paleomagnetic secular variation
Plate tectonics
Petrofabric analysis
Rock physics
Structural geology
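The blocking of TRM described above is commonly quantified with the Néel–Arrhenius relaxation time τ = τ0·exp(KV/kBT). A sketch with illustrative, roughly magnetite-like order-of-magnitude grain parameters (none of these figures are from the text):

```python
import math

# Neel-Arrhenius relaxation time tau = tau0 * exp(K*V / (kB*T)) for a
# single-domain grain; tau0, K and V below are illustrative values only.
KB = 1.380649e-23   # Boltzmann constant, J/K
TAU0 = 1e-9         # "attempt time", s

def relaxation_time(K, V, T):
    """Characteristic time for thermal fluctuations to flip the grain's moment."""
    return TAU0 * math.exp(K * V / (KB * T))

K = 1.35e4          # anisotropy energy density, J/m^3
V = (25e-9) ** 3    # volume of a 25 nm cube, m^3

# Near the Curie point the moment flips in a fraction of a second
# (superparamagnetic); by room temperature the relaxation time stretches to
# hundreds of thousands of years, so the remanence acquired at the blocking
# temperature is effectively frozen in.
for T in (800, 600, 300):
    print(T, relaxation_time(K, V, T))
```

The steepness of the exponential is the point: a modest drop in temperature turns a freely fluctuating moment into one that is stable over geological timescales, which is what makes TRM a usable recorder of the ancient field.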
Black caiman
The black caiman (Melanosuchus niger) is a crocodilian reptile endemic to South America. With a maximum length of around and a mass of over , it is the largest living species of the family Alligatoridae, and the third-largest crocodilian in the Neotropical realm. True to its common and scientific names, the black caiman has a dark greenish-black coloration as an adult. In some individuals, the pigmentation can appear almost jet-black. It has grey to brown banding on the lower jaw; juveniles have a more vibrant coloration compared to adults, with prominent white to pale yellow banding on the flanks that remains present well into adulthood (longer than in most other species). The banding on young animals helps with camouflage by breaking up their body outline, on land or in water, helping them avoid predation. The morphology is quite different from that of other caimans, but the bony ridge that occurs in other caimans is present. The head is large and heavy, an advantage in catching larger prey. Like all crocodilians, caimans are long, squat creatures with big jaws, long tails and short legs. They have thick, scaled skin, and their eyes and noses are located on the tops of their heads. This enables them to see and breathe while the rest of their bodies are underwater. A carnivorous animal, the black caiman lives along freshwater habitats, including slow-moving rivers, lakes and seasonally flooded savannas, where it preys upon a variety of fish, reptiles, birds, and mammals. As an apex predator and potentially a keystone species, it is a generalist, capable of taking most animals within its range, and may have played a critical role in maintaining the structure of the ecosystem. Although only a few specific ecological studies have been conducted, observations suggest that this species occupies its own niche, which allows coexistence with competitors. Reproduction takes place in the dry season. Females build a nest mound with an egg chamber, protecting the eggs from predators. 
Hatchlings form groups called pods, guarded by the presence of the female. These pods may contain individuals from other nests. Once common, the black caiman was hunted to near extinction, primarily for its commercially valuable hide. It is now making a comeback, and is listed as Conservation Dependent. Overall a little-known species, it was not researched in any detail until the 1980s, when the leather trade had already taken its toll. It is a species dangerous to humans, and attacks have occurred in the past. Classification Although the black caiman is the sole extant (living) species of the genus Melanosuchus, two fossil species found in South America have been described: Melanosuchus fisheri in 1976, and Melanosuchus latrubessei in 2020, although the status of M. fisheri is in doubt. The black caiman is a member of the caiman subfamily Caimaninae, and is one of six living species of caiman. According to molecular DNA-based phylogenetic studies, it is most closely related to the caimans of the genus Caiman. Distribution The black caiman largely inhabits areas of Amazonia, living in rivers, swamps, wetlands, and lakes. It is found in Brazil, eastern Ecuador and Peru, northern Bolivia, French Guiana, and southern Guyana. Characteristics The black caiman has dark-coloured, scaly skin. The skin coloration helps with camouflage during its nocturnal hunts, but may also help absorb heat (see thermoregulation). The lower jaw has grey banding (brown in older animals), and pale yellow or white bands are present across the flanks of the body, although these are much more prominent in juveniles. This banding fades only gradually as the animal matures. The bony ridge extending from above the eyes down the snout, as seen in other caimans, is present. The eyes are large, as befits its largely nocturnal activity, and brown in colour. 
Mothers on guard near their nests are tormented by blood-sucking flies that gather around their vulnerable eyes, leaving them bloodshot. The black caiman is structurally dissimilar to other caiman species, particularly in the shape of the skull. Compared to other caimans, it has distinctly larger eyes. The snout is relatively deep, and the skull (given the species' considerably larger size) is much larger overall than in other caimans. Black caimans are relatively more robust than other crocodilians of comparable length. Skull morphology appears to vary in this species depending on age and the individual animal, which is not uncommon in other modern crocodilians, and by sex, with adult males typically having much more massive skulls relative to their size than like-aged females. Due to these differences, males have a stronger bite force and likely exploit a different, and larger, prey base than females. Young black caimans can be distinguished from large spectacled caimans by their proportionately larger head, as well as by the colour of the jaw, which is light-coloured in the spectacled caiman and dark with three black spots in the black caiman. A , black caiman was found to have a bite force of . Size The black caiman is the largest predator in the Amazon basin and the largest member of the family Alligatoridae, making it one of the largest extant reptiles. It is also significantly larger than other caiman species. Most adult black caimans are in length, with a few old males exceeding . Sub-adult male specimens of around will weigh roughly , around the same size as a mature female, but will quickly increase in bulk and weight. The average size of adult females at their nests was found to be . Mid-sized mature males of weigh approximately , while large mature specimens exceed , being relatively bulky crocodilians. Very large, old males can exceed in length and weigh up to .
A relatively small adult male of a total length of weighed while an adult male considered fairly large at a length of weighed approximately . Another sampling of sub-adult males found them to range in length from , averaging , and that they weighed from , averaging . In a study conducted in the Rupununi River, Guyana, sub-adult and adult black caimans ranged from in length and weighed between . In some areas (such as the Araguaia River) this species is consistently reported at in length, although specimens this size are uncommon. Several widely reported but unconfirmed (and probably largely anecdotal) reports claim that the black caiman can grow to over in length and weigh up to . While it is unclear what the sources for this maximum size are, several scientific papers nonetheless accept that this species can attain such extreme sizes. In South America, two other crocodilians reportedly reach similar sizes: the American crocodile (Crocodylus acutus) and the Orinoco crocodile (C. intermedius). Biology and behaviour Hunting and diet Black caimans are apex predators with a generalist diet, and can take virtually any terrestrial and riparian animal found throughout their range. Like other large crocodilians, black caimans have even been observed catching and eating smaller species, such as the spectacled caiman, and sometimes cannibalizing smaller individuals of their own kind. Hatchlings mostly eat small fish, frogs, and invertebrates such as molluscs, crustaceans, arachnids, and insects, but with time and size graduate to eating larger fish, including piranhas, catfish, and perch, as well as molluscs, which remain a significant food source for all black caimans. Dietary studies have focused on young caimans (due both to their often being more common than large adults and to their being easier to handle), the largest specimen examined for stomach contents in one study being notably under sexually mature size, which is at a minimum in smaller females.
Although diverse prey is known to be captured by young black caimans, dietary studies have shown that snails often dominate the diet of young caimans, followed by quite small fish. Fish were the main prey of black caimans of over subadult size in Manú National Park, Peru. Various prey are taken according to availability, including snakes, turtles, birds and mammals, the latter two mainly when they come to drink at the river banks. Mammalian prey mostly includes common Amazonian species such as various monkeys, sloths, armadillos, pacas, porcupines, agoutis, coatis, and capybaras. Large prey can include other species of caimans, deer, peccaries, tapirs, anacondas, giant otters, Amazon river dolphins and domestic animals including pigs, cattle, horses, and dogs. Although rare predations on cougars or even jaguars have been reported, very little evidence exists of such predation, and cats are likely to avoid ponds with large adult black caimans, suggesting that adults of this species are higher in the food chain than even the jaguar. Where capybara and white-lipped peccary herds are common, they are reportedly among the most common prey items for large adults. Evidence suggests that fairly large river turtles can be counted among the prey of adult black caimans, the bite force of which is apparently sufficient to shatter a turtle shell. Large males have even been observed to cannibalize other black caimans. Compared to the smaller caiman species, the black caiman more often hunts terrestrially at night, using its acute hearing and sight. As with all crocodilian species, their teeth are designed to grab but not chew, so they generally try to swallow their food whole after drowning or crushing it. Large prey that cannot be swallowed whole is often stored so that the flesh will rot enough to allow the caiman to take bites out of it. Reproduction At the end of the dry season, females build a nest of soil and vegetation, which is about across and wide.
They lay up to 65 eggs (though usually somewhere between 30 and 60), which hatch in about six weeks, at the beginning of the wet season, when newly flooded marshes provide ideal habitat for the juveniles. The eggs are quite large, averaging in weight. Unguarded clutches (when the mother goes off to hunt) are readily devoured by a wide array of animals, regularly including mammals such as South American coatis (Nasua nasua) or large rodents, egg-eating snakes, and birds such as herons and vultures. Occasionally, predators are caught and killed by the mother caiman. Hatching is said to occur between 42 and 90 days after the eggs are laid. It is well documented that, as with other crocodilians, caimans frequently move their young from the nest in their mouths after hatching (whence the erroneous belief that they eat their young), and transport them to a safe pool. The mother will assist chirping, unhatched young to break out of the leathery eggs by delicately cracking the eggs between her teeth. She will try to look after her young for several months, but the baby caimans are largely independent and most do not survive to maturity. Baby black caimans are subject to predation even more regularly after they hatch, facing many of the same mesopredators, as well as any other crocodilian (including those of their own species), large snake, or large carnivorous fish that they encounter. Predation is so common that the survival of young black caimans relies on safety in numbers. The female black caiman breeds only once every 2 to 3 years, and does not become sexually mature until 20 years of age. During the breeding season, in the dry period, black caimans give off a sound closely resembling rumbling thunder in order to communicate with others. Interspecific predatory relationships Many predators, including various fish, mammal, reptile and even amphibian species, feed on caiman eggs and hatchlings.
The black caiman shares its habitat with at least three other semi-amphibious animals considered apex predators, usually able to co-exist with them by focusing on different prey and micro-habitats. These are giant otters, which are social, obligate aquatic foragers and piscivores; green anacondas, which are predators of other caiman species, alongside sizable individuals of this caiman (albeit not regularly); and jaguars, which are the most terrestrial of these and focus their diet mainly on relatively larger mammals and terrestrial reptiles. Black caimans eat more or less all the same prey as the other species. They are possibly the most opportunistic but, despite being the largest predator of the area, can metabolically go longer between meals and thus may not need to hunt as frequently. Usually, each predator avoids encounters with adults of the others, but battles, which can be lost by nearly any side, may rarely occur. Green anacondas, jaguars and black caimans arguably sit atop this food chain. Once the black caiman attains a length of a few feet, it has few natural predators. Large anacondas may occasionally take smaller caimans of this species. The jaguar (Panthera onca), being a known predator of all other caiman species, is another primary predatory threat to juvenile and subadult black caimans, with several records of predation on young black caimans and eggs. However, adult black caimans have no natural predators, as is true of other similarly sized crocodilian species, given their size, weight, bite force, thick hide, and immense strength. Even though females and smaller individuals may be preyed on by jaguars, larger males may themselves prey upon jaguars in exceptional cases. Conservation status and threats Humans hunt black caimans for leather or meat. This species was classified as Endangered in the 1970s due to the high demand for its well-marked skin.
The trade in black caiman leather peaked from the 1950s to the 1970s, after which the smaller but much more common spectacled caiman (Caiman crocodilus) became the more commonly hunted species. Local people still trade black caiman skins and meat today on a small scale, but the species has overall rebounded from the overhunting of the past. That black caimans lay, on average, around 40 eggs has helped them recover to some degree. Perhaps an equal continuing threat is habitat destruction, since development and clear-cutting are now widespread in South America. Spectacled caimans have now filled the niche of crocodilian fish predator in many areas. Due to their greater numbers and faster reproduction, spectacled caiman populations are locally outcompeting black caimans, although the larger species dominates on a one-on-one basis. Persistent management is needed to control caiman hunting, and is quite difficult to enforce effectively. After the depletion of the black caiman population, piranhas and capybaras, having lost perhaps their primary predator, reached unnaturally high numbers. This has, in turn, led to increased agricultural and livestock losses. Compounding the conservation issues it faces, this species occasionally preys on humans. Most tales are poorly documented and unconfirmed but, given this species' formidable size and strength, attacks on humans are quite often fatal. The species is uncommon in captivity and breeding it has proven to be a challenge. The first captive breeding outside its native range was at Aalborg Zoo in 2013.
Munich U-Bahn
The Munich U-Bahn () is an electric rail rapid transit network in Munich, Germany. The system began operation in 1972, and is operated by the municipally owned Münchner Verkehrsgesellschaft (MVG; Munich Transport Company). The network is integrated into the Münchner Verkehrs- und Tarifverbund (MVV; Munich Transport and Tariff Association) and interconnected with the Munich S-Bahn. The U-Bahn currently comprises eight lines, serving 96 stations (100 stations if four interchange stations with separate levels for different lines are counted twice), and encompassing of routes. Alongside the S-Bahn, the Munich U-Bahn is the most important means of local public transport in Munich. Since the opening of the first line on 19 October 1972, a network with 103.1 km of track and 96 stops has been built, to which the neighbouring town of Garching near Munich is also connected, and in the future also Martinsried, a district of the municipality of Planegg (both in the district of Munich). The Munich U-Bahn is operated by Münchner Verkehrsgesellschaft (MVG) and is integrated into the Munich Transport and Tariff Association (MVV). In 2019, it transported 429 million passengers. Current routes Currently, there are eight lines. The network has of active route, and 100 stations. In 2019, 429 million passengers rode the U-Bahn. The trains operate at speeds of up to , the highest among German U-Bahns. There is no continuous operation during the night (with a break from 1 to 4 am, or 2 to 4 am on weekends) except on special occasions such as New Year's Eve. Currently, only the U6 line crosses the municipal border, to the town of Garching. Except for the lines U5 and U6, all lines operate completely below ground. The U5 only comes above ground at its southern terminus Neuperlach Süd, the U6 on the northern section from Studentenstadt onwards (except for the Garching and Garching-Forschungszentrum stations and their tunnels).
There are three "line families", each consisting of two lines (not counting peak-hour lines) that share a common track in the city centre. The schedules of these lines are coordinated to produce regular train intervals on the common section. Most stations have two tracks with an island platform between them. Of the single-line stations, only Olympia-Einkaufszentrum (U1), Richard-Strauss-Straße (U4), Neuperlach Süd (U5), Garching-Hochbrück and Nordfriedhof (both U6) have side platforms. At the junction stations Scheidplatz and Innsbrucker Ring, the four tracks run in parallel on the same level, with two island platforms allowing cross-platform interchange. The stations Hauptbahnhof (lower level), where U1 and U2 branch into two different lines, and Münchner Freiheit (U3/U6) also have four tracks, while Implerstraße (U3/U6), Max-Weber-Platz (U4/U5) and Kolumbusplatz (U1/U2) have three: one with a side platform for outbound trains and two with a shared island platform for inbound trains. Olympiazentrum, Fröttmaning and Kieferngarten also have four tracks each, the first due to the proximity of the Olympic Stadium, the others to support both traffic directed to the technical base and depot in Fröttmaning and passengers attending the Allianz Arena football stadium. At Hauptbahnhof, there is a second U-Bahn station for lines U4/U5 at a higher level, giving a total of six U-Bahn tracks. Sendlinger Tor, Odeonsplatz and Olympia-Einkaufszentrum also each have two quite separate stations at different levels, connected with each other by escalators and elevators. Frequency and scheduling Most lines operate with trains running at intervals of 5 minutes during peak hours, but due to lines overlapping, trains can be as frequent as every 2 minutes. Outside of peak times, lines operate at frequencies of every 10 minutes; however, around the start of operations and after midnight the frequency decreases to every 20 minutes or more on most lines.
Again, with line overlap this means that a suitable train will arrive (often much) more frequently. U1 In 1980 the U1 commenced operation together with the U8 (now U2). At the beginning it only operated on a section of the U2's track. When the branch to Rotkreuzplatz was opened, it became a separate line. The line's colour is green. Today the U1 has a length of and 15 stations. It starts at Olympia-Einkaufszentrum in the district of Moosach. The U3 was extended to the same station (but on a different level) in 2007. On the way south it follows Hanauer Straße to Georg-Brauchle-Ring, which was designed by Franz Ackermann, before reaching Westfriedhof. It continues via Gern to Rotkreuzplatz, which was its terminus from 1983 to 1998. Below Nymphenburger Straße it goes on to Maillingerstraße and Stiglmaierplatz, and finally merges into the U2 track at München Hauptbahnhof. On the busy city section, U1 and U2 run with a 5-minute offset, yielding 5-minute intervals even beyond peak hours. At Central Station, it also crosses the S-Bahn and U4/U5. At the next station, Sendlinger Tor, it passes below U3/U6. There the U1/U2 platforms for each direction lie in tunnels which are apart from each other and are connected by a pedestrian tunnel. Fraunhoferstraße, the next station, is also reached in separate tunnels, which had to be excavated using tunnelling shields due to the proximity of the River Isar. However, the two tubes are connected by the platform, which demanded large pillars that are characteristic of this station. The next station, Kolumbusplatz, is a junction with three tracks. Here the U1 branches off the U2 again. The southbound branch line was opened in 1997 and traverses the colourful station Candidplatz, eventually reaching Wettersteinplatz. The following station, St.-Quirin-Platz, has extraordinary architecture, as it is covered by a large, shell-like structure made of glass and steel, which is drawn nearly down to track level on one side.
The U1 terminates at Mangfallplatz below Naupliastraße. U2 The route of the U2 line has undergone more changes than any other Munich underground line. It also changed its name, as it was first called U8. It is the only line that runs or ran on all three "line families" (U1/U2, U3/U6 and U4/U5). Today it has a length of and 27 stations. The line's colour is red. The U2 starts in the north at Feldmoching, where it connects to the S1 to Freising/Airport. The station there is decorated with rural and urban motifs from Feldmoching's history. Below Hasenbergl, a district which had been known for its social problems, it goes to Dülferstraße, which provides access to the eastern Hasenbergl and a newly built area on the Panzerwiese. Dülferstraße was the terminus from 1993 until 1996. Via the stations Harthof and Am Hart, the U2 reaches Frankfurter Ring. In the tunnel between Am Hart and Frankfurter Ring, there is a white and blue wave pattern, the only art installation in a Munich U-Bahn tunnel outside of stations. After Milbertshofen station the U2 touches the U3 line at Scheidplatz, where cross-platform interchange is possible. Before the opening of the section to Dülferstraße in 1993, the U2 went from Scheidplatz to Olympiazentrum, sharing the track with the U3. Through the district of Maxvorstadt the U2 continues to downtown Munich, reaching the stations Hohenzollernplatz, Josephsplatz, Theresienstraße and Königsplatz. At Königsplatz, artworks from the nearby Glyptothek can be found on the platform. At München Hauptbahnhof (Munich Central Station), the U2 meets the U1, with which it shares tracks until Kolumbusplatz (see above). After Kolumbusplatz the U2 continues eastward and reaches the stations Silberhornstraße, Untersbergstraße and Giesing station, with interchange to the S3 and S7. The next stations are Karl-Preis-Platz and Innsbrucker Ring, where cross-platform interchange to the U5 is possible.
Until 1999, when the branch to the Messestadt stations was opened, the U2 ran from here to Neuperlach. Via the stations Josephsburg and Kreillerstraße, the U2 reaches Trudering, which features two platforms in separate tunnels, connected by two transversal tunnels. In 1994, during the construction of this section, an accident happened: the ceiling of the new tunnel collapsed due to the intrusion of water, and a bus fell into the crater. Two passengers and one construction worker died, and the construction was delayed. Via Moosfeld, the U2 reaches Messestadt West and its terminus Messestadt Ost. These stations are located between the fairgrounds (Messestadt) in the north and a development area and the site of the Bundesgartenschau 2005 in the south. U3 The U3 is the original Olympic line; the first section was opened for the 1972 Olympic Summer Games. Today the line has a total length of and 25 stations. The line's colour is orange. Today the U3 starts in the north at Moosach, Munich's 100th U-Bahn station, where passengers can change to the S1 to Freising/Airport. From here the line runs east to Moosacher St.-Martins-Platz and Olympia-Einkaufszentrum, where a change to the U1 is possible. After passing through Oberwiesenfeld station, the U3 reaches its original northern terminus at Olympiazentrum. From 1972 until 2010, this station was the end of the original Olympic line. When Munich was awarded the 1972 Olympic Summer Games in 1965, the U-Bahn network concept (which had been adopted only one year earlier) had to be revised to speed up the construction of a connection to the Olympic venues at the Olympic grounds. The Olympic connector (now U3) was redesigned as a branch of the U6 line, because the original plan of a direct connection to the Olympic grounds from Munich Central Station was not feasible in the new timeframe.
This original U3 section consists of four stations (from north to south): Olympiazentrum, Petuelring, Scheidplatz, where cross-platform interchange to the U2 (the line originally supposed to serve the Olympic venues) has been possible since 1980, and Bonner Platz. After Bonner Platz the U3 reaches Münchner Freiheit, where it joins the U6 to run together through the inner city section to Implerstraße (for this section see U6 below). After leaving the three-track junction station Implerstraße, where the U6 heads west towards Harras before ending up in the Großhadern suburb, the U3 reaches Brudermühlstraße (near the picturesque Flaucher section of the Isar river), Thalkirchen (Zoo) (a short walk from the large city zoo) and Obersendling, which is built higher than Thalkirchen station because it is located on the "Hochufer" (high western bank) of the River Isar. Here, interchange to the S-Bahn at Siemenswerke station is possible. The U3 continues west via Aidenbachstraße and Machtlfinger Straße, before reaching Forstenrieder Allee, Basler Straße, and eventually the terminus Fürstenried West. This south-western section was opened on 28 October 1989, as can be seen from the huge date numbers at the western entrance of Obersendling station. U4 With only and 13 stations, the U4 is Munich's shortest U-Bahn line. This line was originally planned as U9 and is the only line that operates regularly with 4-car sets rather than the full 6-car set. The exceptions are Fridays in the late afternoon and during the Oktoberfest. The line's colour is mint green. The U4 begins in the west in the Laim neighbourhood at Westendstraße station, which it shares with the U5 line. The U4 and U5 are the only lines of a joint line "family" that branch out on only one end of the common section, as an originally planned western extension of the U4 was first put on hold and subsequently cancelled altogether.
From Westendstraße the U4 runs east to Heimeranplatz, which connects to the S7 and S20 S-Bahn lines. The next two stations, Schwanthalerhöhe (originally called Messegelände, German for "exhibition grounds"; the name was changed when the exhibition centre relocated to Riem in 1998) and Theresienwiese, are gateways to the Oktoberfest, and are therefore heavily used during this event. Between these two stations, there is a track that links to Implerstraße to provide a connection to the depot in Fröttmaning. Theresienwiese is one of only two U-Bahn stations in Munich (besides Fröttmaning station, which serves the Allianz Arena) to have its command centre booth staffed during the Oktoberfest to supervise the masses of passengers. The southern exit of the station leads to the northern entrance of the Oktoberfest. U4 trains arriving from the east often terminate at Theresienwiese rather than continue to Westendstraße, even during peak hours, due to low traffic volume west of Hauptbahnhof. After Theresienwiese the U4 reaches München Hauptbahnhof (Munich Central Station); passengers can transfer to the U1/U2 lines as well as to all S-Bahn lines (except the S20) here. The next station is Karlsplatz (Stachus), with shorter and easier connections to the S-Bahn (S1 to S8). Karlsplatz is the deepest station in Munich's U-Bahn network ( below the surface). From this point on, the U4 runs north of the S-Bahn cross-city tunnel. After passing Odeonsplatz, where an interchange to U3/U6 trains is possible, and Lehel, the U4 crosses the River Isar in a tunnel and reaches Max-Weber-Platz, the last station shared with the U5. Here, the U4 branches off to the north, while the U5 runs south. Before terminating at Arabellapark, the U4 passes the stations Prinzregentenplatz, Böhmerwaldplatz, and Richard-Strauss-Straße, the latter being the only station of the line equipped with side platforms instead of an island platform.
The original plan called for an extension to Johanneskirchen station (where easy transfer to the S8 S-Bahn line would be possible) via Fideliopark, but it was never built, due to low current ridership in the area north of Max-Weber-Platz. The extension of the tram line in the area in 2011 made the plan even more unlikely to materialise. A possible extension in the west to Blumenau is even more improbable. In the evenings, from around 20:40 until the close of operations, the U4 only operates between Odeonsplatz and Arabellapark. U5 The U5 currently begins at Laimer Platz; an extension to is under construction. The total length currently is . The line's colour is brown. Via Friedenheimer Straße, the U5 reaches Westendstraße. From there, the U5 shares the tracks with the U4 to Max-Weber-Platz (see above). At Max-Weber-Platz, the U5 branches off to the south to Ostbahnhof (East Station), where changing to all S-Bahn lines is possible. The next station, Innsbrucker Ring, allows cross-platform interchange to the U2. The U5 continues south to Michaelibad, Quiddestraße, and Neuperlach Zentrum, which is the centre of the satellite town of Neuperlach, built during the 1960s and 1970s. Going on to Therese-Giehse-Allee, the U5 comes above ground and reaches its terminus Neuperlach Süd, where it allows cross-platform interchange with the S-Bahn line S7. South-east of Neuperlach Süd is a large parking yard (Betriebsanlage Süd) used to park trains which cannot be parked at the technical base in Fröttmaning or within the network. U6 The U6 is the oldest U-Bahn line of the network and also features the oldest tunnel: the section below Lindwurmstraße between Sendlinger Tor and Goetheplatz (including Goetheplatz station) was built as early as 1938–1941 as part of a planned S-Bahn network. For this reason Goetheplatz has a platform longer than the standard . Today the line has a length of . Its colour is blue.
Since 2006 the northern terminus of the U6 has been Garching-Forschungszentrum; via Garching it reaches Garching-Hochbrück. These three stations are outside the city limits of Munich, in the city of Garching. The distance of to the next station at Fröttmaning is the longest between two stations in Munich's U-Bahn network. Fröttmaning has been expanded to two island platforms and four tracks to cater for the Allianz Arena football stadium, built for the 2006 FIFA World Cup. The technical base of the U-Bahn is located at Fröttmaning, too. After passing Kieferngarten station, which also has two island platforms, the line crosses over a rail bridge to Freimann and Studentenstadt. Between these two stations is a connection to mainline railway tracks, which is used to bring new trains into the network. The bridge was originally used by the tram and was the only tram track to be converted into part of the U-Bahn network. The U6 then continues underground for the rest of its way south. Via Alte Heide, Nordfriedhof (a station with side platforms), and Dietlindenstraße, the U6 reaches Münchner Freiheit, where it joins the U3 on the shared inner city tunnel. Passing Giselastraße and Universität (University), it arrives at Odeonsplatz, where it connects to the U4/U5 lines. Continuing to Marienplatz, it crosses the S-Bahn lines. During peak hours this station can get overcrowded, which is why additional pedestrian tunnels were built between 2003 and 2006. At Sendlinger Tor the U3/U6 crosses the U1/U2 line, and interchange is possible. The line then uses the tunnel built in 1941, mentioned above, as far as Goetheplatz. The next station, Poccistraße, was added later, constructed between the two existing tunnels, which stayed operational. At Implerstraße the U3 and U6 separate again. To the north of the station, facing north, there is a branch to the U4/U5 at Schwanthalerhöhe, which is not used for passenger transport.
At Harras the U6 connects to the S-Bahn lines S7 and S20, and to regional trains to the south. The section via Partnachplatz and Westpark to Holzapfelkreuth was constructed for the Internationale Gartenbauausstellung (IGA) in 1983 and was therefore dubbed the "flower line", which is reflected in the design of these stations. Passing Haderner Stern and Großhadern, the U6 reaches its current southern terminus at Klinikum Großhadern, where the entrance to the station is covered by a glass pyramid. An extension to Martinsried, which is only approx. west of the current terminus, is under construction and is expected to open in 2026. U7 This booster line (it only operates during rush hours) was added in December 2011, along with the new tram extension to St. Emmeram. The U7 runs between Olympia-Einkaufszentrum and Neuperlach Zentrum via München Hauptbahnhof and Innsbrucker Ring: it shares the tracks with the U1 from Olympia-Einkaufszentrum to Kolumbusplatz, the U2 from München Hauptbahnhof to Innsbrucker Ring, and the U5 from Innsbrucker Ring to its southern terminus Neuperlach Zentrum. U8 This booster line started operations in December 2013. It only operates on Saturday afternoons. The U8 begins in the north at Olympiazentrum and shares the tracks with the U3 as far as Scheidplatz, where it continues along the U2 tracks to Innsbrucker Ring and terminates at Neuperlach Zentrum. It runs only on Saturdays, to ease crowding on the U2 and U3 lines and to provide easier access to the Olympic Park from Munich Central Station. Rolling stock The Munich U-Bahn uses three different generations of electric multiple unit trains. The stock of over 550 carriages is shared between all lines. Class A Class A trains were built between 1967 (prototypes) and 1983. The units consist of two carriages, which always remain coupled in normal operation. The double-carriage units have a length of , a height of , and a width of .
Each unit has three doors per side and a capacity of 98 seats, with standing room for 192 passengers. A total of 193 double-carriage units were delivered, of which 179 are still in use in Munich. Up to three Class A double-carriage units can be coupled together to form a 3/3 train (Langzug). Class B Class B trains were built between 1981 and 1994 to provide more stock to service the growing network in the 1980s. As with the Class A trains, six prototypes were ordered. However, it took six years until series production started, and the prototypes had to be modified to match the series-production units. B units are the same size as A units but differ in design (especially the front window) and use three-phase current motors instead of direct current motors. The other difference is the door opening mechanism: on B units, passengers need to pull only one handle to open both door leaves, rather than both handles as on A units. Of a total of 63 units, 57 are still in service (including one prototype), and six have been scrapped. As with A trains, up to three B double-carriage units can form a 3/3 train (Langzug). However, it is not possible to form a mixed train of A and B units, as the two classes are not compatible. Class C Class C trains were designed in the late 1990s to gradually replace the Class A trains, which had become more expensive to operate and maintain. Class C1 trains are six carriages attached together in one continuous length, with gangways between the carriages allowing passengers to walk from one end to the other. Ten trains, designated C1.9 (Wagon No. 601–610), were ordered and delivered without prototype trains for evaluation. Technical difficulties delayed their entry into service until 2002. Eight more units, designated C1.10 (Wagon No. 611–618), were delivered in 2005, prior to the 2006 FIFA World Cup. The total number of C1 trains in operation is 18. A second generation, designated Class C2, was ordered from Siemens Mobility in November 2010.
They are based on the new Siemens Inspiro subway trains. The first delivery of 21 units (C2.11) began in 2012 but was delayed until 2018 due to technical defects and changes in federal and state regulations. The second order, for 22 units (C2.12), was announced in 2019, with delivery commencing in 2020 and continuing until 2022. The third order, for 24 units (C2.13), was placed in May 2020 for delivery from 2022 to 2024. The total number of C2 trains in operation and on order is 67. Operation MVG has made a considerable effort in recent years to increase the frequency of trains on the busier lines (U1/U2/U7 and U3/U6) to five minutes between trains, and to ten minutes on the less busy lines (U4 and U8). The busier lines (U1/U2/U7/U8 and U3/U6) are served by a "Langzug" (long train: three Class A/B units or one Class C1/C2 train). A "Kurzzug" (short train) of one or two Class A/B units operates on the less busy lines (U4/U5) or during quieter times in the evenings and on weekends. During the COVID-19 pandemic in 2020, the "Kurzzug" was temporarily withdrawn from service. The shared U3/U6 trunk within the city boundary is notorious for frequent service interruptions due to unanticipated technical issues, overcrowding, and passengers requiring medical attention or falling onto the tracks at stations. Additionally, the U3/U6 lines require frequent maintenance and track replacement due to heavier-than-anticipated passenger loads. This has motivated the planning of the new U9 line to reduce the burden on the U3/U6 trunk. No 24-hour service is offered because of nightly maintenance; night buses run instead. History As early as 1905 there were plans to build an underground railway roughly along the route of today's S-Bahn trunk line between the main station and Ostbahnhof, together with a ring line around the old town. Since these plans were clearly oversized for the traffic of the time, they fell into oblivion.
The tram network was able to cover the traffic flows of what was then a city of half a million. From 1910, a 450 m long automated underground railway connected the main station with the post office on Hopfenstraße; it served only for the transport of letter post. In 1928 there were again plans to replace Munich's trams with a subway network, but these plans were thwarted by the global economic crisis. A network of five subway routes, which had some similarities with today's route layout, was to be realized. In the Nazi era, from 1936, a network of electric underground railways was planned for the "capital of the movement" and construction began, but the Second World War put an end to this. The tunnel of today's U6 between Sendlinger Tor and Goetheplatz, including the station there, was already completed as a shell, though still as part of a rapid-transit railway route. This also explains the relatively generous dimensions of Goetheplatz station (in particular, its mezzanine level does not match today's architecture) and the narrowness of the U3/U6 platform at the present interchange station Sendlinger Tor. The groundbreaking ceremony for this tunnel took place on Lindwurmstraße on 22 May 1938; it was intended to herald the beginning of the end of the tram. By 1941 the shell was completed, and the first railcars were to be delivered in the same year, but the war-related scarcity of resources led to the cessation of this work. The shell was used during the war as an air-raid shelter, as inscriptions still visible on the tunnel walls attest. Parts of the tunnel were filled with debris after the war, and others served for a while as a breeding ground for mushrooms, before penetrating groundwater rendered this short piece of early metro history unusable. The Nazis forbade the acquisition of new rolling stock for the Munich tramways in order to show how "insufficient" the tram system was.
At that time, trams were the primary means of public transportation in Munich. The Nazis made ambitious plans to turn Munich into their "Reichshauptstadt der Bewegung" (capital of the movement; the Nazi party had come into existence in Munich). This included the construction of an underground system. In the late 1930s, construction started in Lindwurmstraße and Sonnenstraße, where Munich's main Lutheran-Protestant church, Matthäuskirche, was torn down because it was supposedly a "traffic obstacle" (as were Munich's main synagogue not far away and the tower of the Old Town Hall). Construction was abandoned in 1941 as World War II intensified. After the war, the priority was to reconstruct the badly damaged tram system. Even during the 1950s, however, the Munich City Council discussed plans to run a few of the tram lines underground because the capacity for surface traffic was overstretched. It planned four diametrical lines (designated A, B, C, D), which divided the city into eight sectors and contained essential elements of today's network of lines. An east–west line "A": Pasing – Laim – Westend – Stachus (interchange with line "B") – Marienplatz (interchange with line "C") – Ostbahnhof – Berg am Laim. Another line "B": Moosach – Gern – Rotkreuzplatz – Stiglmaierplatz – Stachus (interchange with line "A") – Odeonsplatz – Max-Weber-Platz – Bogenhausen – Zamdorf – Riem. A north–south line "C" was planned along Freimann – Münchner Freiheit – Marienplatz (interchange with line "A") – Goetheplatz (interchange with line "D"; already built 1938–1941) – Harras – Waldfriedhof. A north–south line "D": Siedlung am Hart – Scheidplatz – Elisabethplatz – Central Station – Goetheplatz (interchange with line "C") – Giesing. In 1964, however, the plans were changed and it was decided to build a "real" underground network as follows: U1: Moosach Bf – (Dachauer Str.) – Hbf – Goetheplatz – Kolumbusplatz – Giesing Bf – Neuperlach Zentrum U2: Amalienburgstr.
– Rotkreuzplatz – Hbf – Goetheplatz – Kolumbusplatz – KH Harlaching – Großhesseloher Brücke U3: Heidemannstr. – Scheidplatz – Münchener Freiheit – Marienplatz – Goetheplatz – Fürstenrieder Straße – Blumenau U4: Pasing – Laimer Pl. – Heimeranplatz – Hbf – Theatinerstr. (Marienplatz Nord) – Max-Weber-Pl. – Arabellapark – St. Emmeram U5: Pasing – like U4 – Max-Weber-Pl. – Leuchtenbergring – St.-Veit-Str. – Waldtrudering U6: Kieferngarten – Münchner Freiheit – Marienplatz – Goetheplatz – Harras – Waldfriedhof – Großhadern U8: Hasenbergl – Am Hart – Scheidplatz – Theresienstr. – Karlsplatz (Stachus) – Sendlinger Tor (4th trunk line) – Kapuzinerstr. (interchange with U1/U2) – Thalkirchen – Aidenbachstr. – Fürstenried West Calls for a circular U-Bahn line were soon rejected, as the tangential passenger volumes were too low; however, when the S-Bahn trunk line was built, care was taken at Rosenheimer Platz station not to preclude the possibility of a future interchange station there. Today the tram carries most of the tangential traffic flows, and the concept of a ring metro has been shelved. Work started on 1 February 1965 at Nordfriedhof (North Cemetery) in Ungererstraße (now known as Schenkendorfstraße). Today a steel girder at the first building site serves as a monument to Munich's first underground railway. When the 1972 Summer Olympics were awarded to Munich in 1966, construction had to be sped up to finish the "Olympic" line on time. On 19 October 1971 the first line commenced operations between Kieferngarten and Goetheplatz with a total length of . On 8 May 1972 the line between Münchner Freiheit and Olympiazentrum (the "Olympic line"), serving the venues of the 1972 Summer Olympics, was opened, just 10 days after the Munich S-Bahn commenced operations. To satisfy demand during the Games, some DT1 trains were borrowed from Nuremberg. On 22 November 1975 the extension from Goetheplatz to Harras was opened. The network has been expanded continuously since 1980.
The U1 and U3 were subsequently extended in stages. The U1 extension from Westfriedhof to Georg-Brauchle-Ring opened on 18 October 2003, and a further station, Olympia-Einkaufszentrum, followed on 31 October 2004. Similarly, the U3 was extended on 28 October 2007 from Olympiazentrum via Oberwiesenfeld to Olympia-Einkaufszentrum, and onwards to Moosach on 11 December 2010. The new Allianz Arena (football stadium) required a larger capacity at the nearby U-Bahn station Fröttmaning. A new second platform was built and the whole station was moved north by roughly 100 metres (330 ft). For easy access to the platform, a second pedestrian bridge was built at the north end of the platforms. Around the same time, on 14 October 2006, the U6 was extended from Garching-Hochbrück to Garching-Forschungszentrum via Garching. Timeline Future expansions Extensions under construction U5 (west) – extension to Pasing Bahnhof The Laimer Platz–Pasing extension was approved on 14 July 2015 to relieve the overburdened tram and bus routes serving the area between the Laimer Platz and Pasing neighbourhoods. In the event of an S-Bahn disruption between Pasing Bahnhof and Ostbahnhof via Hauptbahnhof, the U5 can supplement the connection between the two stations. Construction commenced in 2022 at a cost of €547 million, and the extension is planned to open in the early 2030s. The two new subway stations are Willibaldplatz and Am Knie. When completed, the U5 will be the only subway line in Munich to connect Hauptbahnhof with both western (Pasing) and eastern (Ostbahnhof) termini that provide regional and long-distance train services.
U5 (west) – new station in Freiham Residents called for a further extension of the U5 from Pasing Bahnhof to Freiham. MVG initially rejected the proposal because of the higher cost and because two S-Bahn lines already serve the northern and southern parts of Freiham. Extending the tram line from Pasing Bahnhof to Freiham was considered as a cheaper alternative. On 17 July 2015, a citizens' initiative and petition drive, supported by the Christian Social Union (CSU) and the Social Democratic Party (SPD), called for extending the U5 line to Freiham. In January 2019, the city council passed a resolution to proceed with the extension. Given the anticipated population growth, a subway line is preferable to a tram line for keeping up with increasing demand. Additionally, the western terminus is to be located in the Freiham town centre, serving the district better than the two S-Bahn lines to the north and south. On 26 January 2020, the city council approved the plan to extend the U5 to Freiham, with construction commencing at the same time as the construction of the Pasing Bahnhof subway station in 2021. The €750 million Pasing Bahnhof–Freiham extension is 4.5 kilometres long and has four new stations: Westkreuz, Radolfzeller Straße, Riesenburgstraße, and Freiham Zentrum. Completion is anticipated in 2035–2040. On 28 July 2020, the administrative district of Oberbayern approved the route alignment and construction plan for the second segment between the future subway stations Willibaldstraße and Am Knie. The second segment runs from Willibaldstraße U-Bahnhof between Landsberger Straße and Agnes-Bernauer-Straße toward two popular swimming, recreation, and sports centres (Westbad Hallenbad and Eis- und Funsportzentrum West). Am Knie U-Bahnhof will be located very close to these centres for quick and easy access.
The groundbreaking ceremony for the retention structure of the Freiham Zentrum station took place on 28 May 2024. Because the construction takes place in an empty field, no intervention in existing infrastructure is required, which will save around €50 million in building costs. The retention structure is planned to be finished by 2027. The 4.7 km long connection to Pasing via three stations (Riesenburgstraße, Radolfzeller Straße and Westkreuz) is expected in the 2040s. U6 (south) – extension to Martinsried This extension will serve a large biotech centre at Martinsried. The estimated cost in 2012 was €73.3 million; since the extension crosses the municipal boundary of Munich, the bulk of the funding (95%) comes from the Free State of Bavaria, while Planegg and Munich contribute smaller shares. The planning was approved in 2013, and the anticipated completion date was 2014–2015. However, the extension must pass through a contaminated area, a former gravel pit that was later filled in, which complicated and delayed planning. The parking garage at Martinsried station is under construction and is to be completed at the end of 2021. Construction of the subway tunnel began in February 2023, with completion anticipated in 2026. Planned extensions U4 (east) – extension to Englschalking Bahnhof This extension is considered in the third medium-term plan (Mittelfristprogramm), along with the plan to move a portion of the S8 line from the surface to underground between Unterföhring and Leuchtenbergring stations. Whether and when the extension will be approved and built is not yet clear. The stations would be Cosimapark, Fideliopark, and Englschalking, all of which are in Bogenhausen. U3 (west) – extension to Untermenzing This extension is planned now that the U3 has been extended to Moosach; it would run via Waldhornstraße.
U5 (south) – extension to Taufkirchen After the success of the citizens' initiative to extend the U5 west to Pasing and then to Freiham, the state parliament is exploring an extension south of Neuperlach Süd station to Taufkirchen. The extension would serve several important large-scale industries (including Ludwig-Bölkow-Systemtechnik and Airbus Defence and Space), the Bundeswehr University Munich, the Technical University of Munich Department of Aerospace and Geodesy, and the suburban towns of Neubiberg, Ottobrunn, and Taufkirchen. The six-kilometre extension with four new stations is estimated to cost €300–400 million; the exact route has not been determined yet. For cost reasons, the extension would be built above ground, since tunnelling costs approximately €80 million per kilometre. The Bavarian minister-president, Markus Söder, is pushing for a large portion of federal funding, following a recent change to the outdated federal law, the Gemeindeverkehrsfinanzierungsgesetz (Municipal Transport Financing Act), which addresses problems with cost–benefit analysis for suburban regions and beyond. No construction or completion date has been given yet. Abandoned extension plans U1 (south) – extension to Harlaching Hospital via Laurinplatz Although the plans for this extension were quite advanced, low passenger forecasts led to its abandonment in favour of a tram or light rail line from Schwanseestraße. In 2015 and 2016, however, it was announced that this line would be extended to Solln. U1 (north) – extension to Feldmoching With this extension, the U1 would have terminated at the S-Bahn and U2 stations in Feldmoching. The subway station Olympia-Einkaufszentrum would have been renamed the "Northern Cross". The extension was to join the U2 line at the current Hasenbergl subway station and then continue to a shared terminus in Feldmoching. This plan was abandoned.
U2 (north) – extension to Karlsfeld The Bündnis 90/Die Grünen group in the city council and the CSU faction in the town council of Karlsfeld proposed an above-ground extension of the U2 from Feldmoching to Karlsfeld as a measure to relieve the municipality of heavy car traffic. It was also intended to give the employees of the large companies MAN and MTU a greater incentive to use public transport for their commute. Initial investigations showed a low cost–benefit ratio for this route, and the proposal was abandoned. U6 (north) – extension to Munich Airport This would have been the only U-Bahn line with direct access to Munich Airport. From Hallbergmoos station, the U6 would have continued parallel to the S8 line to the airport, stopping at both terminals before travelling onwards to Eching and Neufahrn. However, the plan was abandoned. U9 and U29 bypass lines On 11 February 2014, SWM/MVG announced a detailed report for the construction of a new 10.5 km bypass line, a sixth intracity underground line, to be called the U9. The new line will relieve the overburdened U1/U2/U7/U8 and U3/U6 lines, especially the transfer stations at Sendlinger Tor (U1/U2/U7/U8 and U3/U6), Hauptbahnhof (U1/U2/U7/U8, U4/U5, and S-Bahn), and Odeonsplatz (U3/U6 and U4/U5), and will shorten the travel time between the Hauptbahnhof and the Allianz Arena by removing the need to transfer at Marienplatz or Odeonsplatz. MVG predicts tremendous growth of passenger traffic in the northern section of Munich over the next twenty years and wants to plan accordingly. The proposal was revised and approved in January 2018 to integrate the three major construction projects at Hauptbahnhof: the second S-Bahn tunnel and station, a new dedicated U9 platform, and the reconstruction of Hauptbahnhof. On 2 July 2019, funding for the U9 construction was approved, with most of the €3.5 billion to be provided by the federal government. The confirmed completion date is 2038 at the latest.
The revised plan eliminates U6 service between Implerstraße and Münchner Freiheit to relieve the overburdened U3 and reduce the heavy congestion at Sendlinger Tor and Odeonsplatz when transferring between lines during rush hour. The U9 takes over the current U6's southwestern (Klinikum Großhadern – Implerstraße) and northeastern (Münchner Freiheit – Garching-Forschungszentrum) sections. The new stations would be Martinsried (a new extension west of Klinikum Großhadern), Esperantoplatz, Hauptbahnhof (dedicated platform), Pinakotheken, and Elisabethplatz. At Münchner Freiheit, the U9 connects to the current U6 tracks to continue to its northeastern terminus at Garching-Forschungszentrum. A second spur connecting with the U2 line at Theresienstraße is also planned. This spur would be called the U29: Klinikum Großhadern – Hauptbahnhof – Theresienstraße – Harthof. The plan would also extend the current U6 southwestern terminus from Klinikum Großhadern to Martinsried. The Theresienwiese fairground currently has one subway station on site, in the north, but is served by two additional subway stations a few blocks away (U3/U6 – Goetheplatz in the southeast and U4/U5 – Schwanthalerhöhe in the west). Adding a second on-site subway station in the southeastern area would reduce the heavy Oktoberfest crowding at Theresienwiese station, especially at closing time. Both stations would have a direct connection to Hauptbahnhof without transferring. The name of the second on-site station has not been officially determined yet; the working title is "2. Wiesnbahnhof", while "Esperantoplatz" and "Bavaria" (in reference to the nearby Bavaria statue) have also been suggested. The museum quarter, the Pinakothekenviertel, has one station, U2 – Königsplatz, but it is inconveniently located in the southwestern corner of the quarter, requiring a long walk to several of the Pinakothek museums.
The new station, Pinakotheken, would be centrally located in the museum quarter, serving the museums and the Technical University of Munich within a short walking distance. Pinakotheken would also be served by tram lines 27 and 28, along with several local and crosstown bus lines. The plan would also facilitate the connection between the Technical University of Munich and Campus Garching. The plan calls for the closure of the current Poccistraße and Implerstraße subway stations and the construction of a new four-track station underneath the Südring, serving the U3, U9, and U29 and connecting to a new above-ground station serving regional trains, and possibly the S-Bahn if the future S-Bahn ring is approved. The new U-Bahn station would be called Impler-/Poccistraße to differentiate it from the current U3/U6 Implerstraße and Poccistraße stations. The current Poccistraße station has serious structural problems requiring expensive maintenance and extensive monitoring; it is cheaper to build a new station nearby than to continue using Poccistraße. At Hauptbahnhof, the new dedicated U9 platform would be built underneath the current S-Bahn platform but above the U4/U5 platform. The platform would be located underneath the regional and long-distance train tracks between the Holzkirchner Bahnhof (Holzkirchen wing station, serving southeastern Bavaria) and the Starnberger Bahnhof (Starnberg wing station, serving southwestern Bavaria), with a new mezzanine level connecting the U9 platform with both wing stations and the rest of Hauptbahnhof. The advantage of locating the U9 platform in the western section of Hauptbahnhof is a quick transfer between platforms without a long, circuitous walk from one platform to the other via the grand platform in the east. On 3 July 2019, Deutsche Bahn announced the new "2. Stammstrecke – Die Optimierungen" (Second Trunk Line Optimisations).
The State of Bavaria and the Munich city council want the first U9 station to be built at the Hauptbahnhof at the same time as the reconstruction of the Hauptbahnhof main building and the construction of the second S-Bahn trunk line. The design revision relocates the U9 platforms from the west end of the regional and long-distance platforms to the middle of the main Hauptbahnhof building below ground. The relocation places the U9 station directly above the second trunk line station in a cross arrangement. This improves the passenger flow between the two current U-Bahn lines (U4/U5 and U1/U2/U7/U8), the current S-Bahn trunk line, and the above-ground level. U26 tangent line Another proposal under consideration is a crosstown line in the north between U2 – Am Hart and U6 (or U9, once opened) – Kieferngarten. This U26 would allow people living in the north to travel between the northwestern and northeastern districts without detouring south. The high cost and the smaller number of stops compared with a tram line make this proposal less feasible. However, BMW's massive planned expansion of its Forschungszentrum (research centre), north of the BMW headquarters buildings and manufacturing plants, is expected to employ about 40,000 people by 2050. The increased number of employees would lead to traffic chaos and gridlock unless the new S-Bahn north ring and the new U26, with stations at the Forschungszentrum, are built. The higher capacity of the U26 and its underground alignment (which would not affect road traffic) could make it more feasible than a tram line. More booster lines Various booster lines have been discussed for years without any concrete plans. U10 Harthof – Münchner Freiheit – Odeonsplatz – Sendlinger Tor – Harras A third booster line, the U10, would share the U2 tracks between Harthof and Scheidplatz, then the U3 tracks between Scheidplatz and Implerstraße, before joining the U6 at Harras.
The northern and southern termini have not been determined yet. U11 Olympia-Einkaufszentrum – Westfriedhof – Central Station – Sendlinger Tor – Innsbrucker Ring A fourth booster line, the U11, would share the U1 tracks from Olympia-Einkaufszentrum to Kolumbusplatz before switching to the U2 line for the continued journey to Innsbrucker Ring. Which terminus it would serve beyond that (Messestadt Ost in the east or Neuperlach in the south) has not been determined. U12 Harthof – Theresienstraße – Hauptbahnhof – Theresienwiese – Implerstraße – Harras A fifth booster line, the U12, would share the U2 tracks between Harthof and Theresienstraße, then switch to the U9 south of Theresienstraße and share its tracks toward Harras. Renovations and upgrades of subway stations Platform screen doors The Munich U-Bahn has lately experienced an increasing number of passengers and objects falling from the platforms onto the tracks, disrupting service. Statistics for 2018 record 215 passengers who fell onto the tracks and were rescued, 22 people seriously injured or killed, 115 fallen objects, and 10 animals. MVG plans a pilot project at Olympiazentrum lasting up to two years to test feasibility and resolve any issues before deploying the technology system-wide. If the tests at Olympiazentrum are successful, platform screen doors will be installed at the high-frequency stations first and then gradually added to further stations as they are renovated. The anticipated completion date of the system-wide installation is 2028. Fröttmaning station (U6, north) — Completed The new Allianz Arena (football stadium) required a larger capacity for the nearby U-Bahn station. A new second platform was built and the whole station was moved north by roughly 100 metres (330 ft). For easy access to the platform, a second pedestrian bridge was built at the north end of the platforms.
Marienplatz station (U3/U6) — Completed The increase in traffic and the new Allianz Arena also required a larger capacity for this already overcrowded pivotal transfer station. New pedestrian tunnels were built, providing more room for passengers transferring to and from the S-Bahn. They lie parallel to the existing platforms and are connected to them by 11 portals. At the south end, they meet the transverse tunnel, where the escalators to the S-Bahn platforms are located. To prevent Munich's historic town hall, located above the station and between its two tunnels, from subsiding during the construction works, the ground surrounding the construction site had to be frozen. The construction was completed in time for the 2006 World Cup. Karlsplatz (Stachus) (U4/U5) — Completed The two mezzanine levels and the north entrance were renovated with enhanced fire protection and an updated interior. Because of damage from humidity and water seepage, the U4/U5 platform structures were repaired and sealed against further seepage. Münchner Freiheit (U3/U6) — Completed The interior has been updated with horizontally corrugated metal wall coverings in fluorescent yellow. The square support columns are clad in deep blue tiles, with blue lights casting downwards to give them a blue hue. The ceiling is covered with mirrors, creating an impression of "infinite" height. The mezzanine level was updated with a new white floor. The above-ground tram and bus station received a new organically shaped canopy roof. Prior to the renovation, the station name was spelled with an extra "e", as "Münchener Freiheit". Since completion the name has been "Münchner Freiheit", eliminating the extra "e" in line with current usage and a "return" towards more traditional Bavarian habits of spelling and pronunciation; it now precisely mirrors the spelling on the street signs above.
Planned and in progress The original planners in the 1960s and 1970s did not envision the massive growth in passengers, the increased number of subway lines, and the continued extensions of the existing lines in the decades to come, and thus did not design the subway stations to be "future-proof". The intersecting subway lines cause overcrowding and frustrating movements between platforms, especially at Hauptbahnhof (U1/U2/U7/U8, U4/U5, and the future U9, as well as the S-Bahn), Sendlinger Tor (U1/U2/U7/U8 and U3/U6), and Odeonsplatz (U3/U6 and U4/U5). The platform widths, the passages between platforms, and the small number of escalators are inadequate for the large numbers of people moving from one platform to another, and between street level and the underground, during rush hour. Sendlinger Tor station (U1/U2/U7/U8 and U3/U6) — Renovation in progress Due to poor design and unanticipated explosive growth in passenger traffic, Sendlinger Tor station suffers severe congestion and chokepoints for passengers moving between the upper U3/U6 and lower U1/U2/U7/U8 platforms as well as the upper levels (mezzanine and street). The €150 million renovation and upgrade of the station was approved on 30 December 2015 by the administrative district of Oberbayern. The project is scheduled to be completed in 2023. The U3/U6 platform has wide staircases and escalators connecting to the U1/U2/U7/U8 platforms below and the mezzanine level above, which leave only narrow passageways on the platform. These narrow passageways create dangerous chokepoints for passengers moving between the platforms and levels, especially during rush hour. The U1/U2/U7/U8 platforms have only one corridor in the middle, connecting to the U3/U6 platform and the upper levels. The combined landings of the ascending and descending escalators and the wide staircases from the U3/U6 platform are placed too close to the U1/U2/U7/U8 platforms at either end of the middle corridor.
This causes severe congestion during rush hour, when passengers going downstairs must push through passengers going upstairs. The five lifts at the station do not connect directly to every level (two platform levels, mezzanine level, and street level). The two street-level lifts connect to the mezzanine level only, and they are inconveniently located away from the popular shopping street Sendlinger Straße, requiring passengers to cross a busy intersection to reach them. From the mezzanine level, passengers either use two lifts to reach the U1/U2/U7/U8 platforms or another lift to reach the U3/U6 platform. Passengers wishing to transfer between the upper and lower platforms via the lifts must first reach the mezzanine level and then travel a considerable distance to the other lifts. Additionally, the new EU safety directive 2016/798 requires additional fire protection in subterranean train stations; hence partitions and doors that close automatically during a fire alarm are being installed at the landings of the staircases and escalators. The U1/U2/U7/U8 platforms are getting new dedicated corridors at the north and south ends. The north corridor (Sonnenstraße-Verbindungstunnel) connects to the mezzanine level, with one set of escalators leading to the mezzanine area and one staircase leading to the lower landings of entrances A and B. The south corridor (Blumenstraße-Verbindungstunnel) connects directly to the street level, bypassing the U3/U6 platform and the mezzanine level. Neither corridor has lifts. The north corridor was opened to the public on 28 April 2020. The middle corridor is being widened by eliminating the mechanical and storage rooms on one side, allowing a better flow of passengers on either side of the escalators and staircases. The current lifts, staircases, and escalators between the U1/U2/U7/U8 platforms and the mezzanine remain unchanged.
The floors of both the upper and lower platforms are being raised five centimetres to line up with the subway carriages' floors, providing stepless, barrier-free access for wheelchairs, prams, walkers, strollers, etc. Tactile tiles for passengers with visual disabilities are being installed for the first time at Sendlinger Tor station. It is unclear whether the lifts are being renovated and extended to reach all lower levels from street level without a transfer at the mezzanine level. The escalators between the U1/U2/U7/U8 and U3/U6 platforms are being rearranged to improve flow, while the wide staircases are being eliminated. The bi-level mezzanine area is being renovated, with one section rebuilt and levelled to eliminate the small stairs and narrow ramp that interrupt the flow. The interior is to have blue and yellow colours, giving the station a bright and airy feel. Small shops, food vendors, and service centres will be installed close to completion. Hauptbahnhof (U4/U5) — Planned On 23 February 2020, MVG announced a new project to reconstruct the U4/U5 platform at Hauptbahnhof for improved passenger movement between the U1/U2/U7/U8 platforms below and the mezzanine level above. One proposal calls for the installation of a footbridge above the U4/U5 platform, connected at several points by unidirectional escalators and smaller lifts. No construction start date has been given. This coincides with the megaproject of reconstructing the Hauptbahnhof building and constructing the new second S-Bahn trunk line and the third U-Bahn line through the station. Network map
https://en.wikipedia.org/wiki/Heliciculture
Heliciculture
Heliciculture, commonly known as snail farming, is the process of raising edible land snails, primarily for human consumption or cosmetic use. The meat can be consumed as escargot, and the eggs, also known as white caviar, as a type of caviar. Perhaps the best-known edible land snail species in the Western world is Helix pomatia, commonly known as the Roman snail or the Burgundy snail. This species, however, is not fit for profitable snail farming and is normally harvested from nature. Commercial snail farming in the Western world typically utilizes snails in the family Helicidae, particularly Cornu aspersum (morphotypically divided into C. a. aspersa and C. a. maxima), formerly known as Helix aspersa. In tropical climates, snail farming is typically done with the African snail, whose meat is highly valued and widely consumed. The term "heliciculture" is used for raising snails for any commercial purpose, but generally refers to farming snails for escargot and cosmetic applications. It can also refer to the cultivation of sea snails, such as whelks. History Roasted snail shells have been found in archaeological excavations, an indication that snails have been eaten since prehistoric times. Lumaca romana ("Roman snail") was an ancient method of snail farming, or heliciculture, in the region around Tarquinia. This snail-farming method was described by Fulvius Lippinus (49 BC) and mentioned by Marcus Terentius Varro in De Re rustica III, 12. The snails were fattened for human consumption using spelt and aromatic herbs. People usually raised snails in pens near their houses; these pens were called "cochlea". The Romans, in particular, are known to have considered escargot an elite food, as noted in the writings of Pliny the Elder. The Romans selected the best snails for breeding, a practice started by Fulvius Lippinus. Various species were consumed by the Romans. 
Shells of the edible land snail species Otala lactea have been recovered in archaeological excavations of Volubilis in present-day Morocco. "Wallfish" were also often eaten in Britain, but were never as popular as on the continent. There, people often ate snails during Lent, and in a few places, they consumed large quantities of snails at Mardi Gras or Carnival, prior to Lent. According to some sources, the French exported brown garden snails to California in the 1850s, raising them as the delicacy escargot. Other sources claim that Italian immigrants were the first to bring the snail to the United States. Edible land snail species Most land snails are edible provided they are properly cooked. Their flavour varies by species and method of cooking, and preferences may vary by culture. Only a few species are suitable for profitable farming. Edible land snails range in size from about long to the giant African snails, which occasionally grow up to in length. "Escargot" most commonly refers to either Cornu aspersum or Helix pomatia, although other varieties of snails are eaten. Terms such as "garden snail" or "common brown garden snail" are rather meaningless, since they refer to so many types of snails, but they sometimes mean C. aspersum. Cornu aspersum, formerly officially called Helix aspersa Müller, is also known as the French petit gris, "small grey snail", the escargot chagrine, or la zigrinata. The shell of a mature adult has four or five whorls and measures across. It is native to the shores of the Mediterranean and along the coasts of Spain and France. It is found in many parts of the British Isles, where the Romans introduced it in the first century AD (some references say it dates to the early Bronze Age). C. aspersum has a lifespan of 2 to 5 years. This species is more adaptable to different climates and conditions than many snails, and is found in woods, fields, sand dunes, and gardens. This adaptability not only increases C. aspersum's range, but it also makes farming it easier and less risky. Helix pomatia measures about across the shell. It is also called the "Roman snail", "apple snail", "lunar", la vignaiola, Weinbergschnecke, escargot de Bourgogne or "Burgundy snail", or "gros blanc". Native over a large part of Europe, it lives in wooded mountains and valleys up to altitude and in vineyards and gardens. The Romans may have introduced it into Britain during the Roman period (AD 43–410). Immigrants introduced it into the U.S. in Michigan and Wisconsin. Many prefer H. pomatia to C. aspersum for its flavor and larger size, as the "escargot par excellence". To date, however, H. pomatia has not been economically viable for farming. Otala lactea is sometimes called the "vineyard snail", "milk snail", or "Spanish snail". The shell is white with reddish-brown spiral bands, and measures about in diameter. Iberus alonensis, the Spanish vaqueta or serrana, measures about across the shell. Cepaea nemoralis, the "grove snail" or Spanish vaqueta, measures about across the shell. It inhabits Central Europe and was introduced into, and is now naturalized in, many U.S. states, from Massachusetts to California, and from Tennessee to Canada. Its habitat ranges widely from woods to dunes. Cepaea hortensis, the "white-lipped snail", measures about across the shell, which often has distinct dark stripes. It is native to central and northern Europe. Its habitat varies, but C. hortensis is found in colder and wetter places than C. nemoralis. Their smaller size, and some people's opinion that they do not taste as good, make C. hortensis and C. nemoralis less popular than the larger European land snails. Otala punctata, called vaqueta in some parts of Spain, measures about across the shell. Eobania vermiculata, the "vinyala", "mongeta", or "xona", measures about . It is found in Mediterranean countries and was introduced into Louisiana and Texas. 
Helix lucorum, commonly called the Turkish snail because of its prevalence in Turkey, measures about across the shell. It is found in central Italy and from Yugoslavia through the Crimea to Turkey and around the Black Sea. Helix adanensis comes from around Turkey. Helix aperta measures about . Its meat is highly prized. It is native to France, Italy, and other Mediterranean countries, and has become established in California and Louisiana. Sometimes known as the "burrowing snail", it is found above ground only during rainy weather. In hot, dry weather, it burrows into the ground and becomes dormant until rain softens the soil. Sphincterochila candidissima or Leucochroa candidissima, the "cargol mongeta" or "cargol jueu", measures about . Lissachatina fulica (formerly Achatina fulica) and other species in the family Achatinidae, the giant African snails, can grow up to in length. Their native range is south of the Sahara in East Africa. This snail was purposely introduced into India in 1847. An unsuccessful attempt was made to establish it in Japan in 1925. It has been purposely and accidentally transported to other Pacific locations, and was inadvertently released in California after World War II, in Hawaii, and later, in the 1970s, in North Miami, Florida. In many places, it is a serious agricultural pest that causes considerable crop damage. Due to its large size, its slime and fecal material also create a nuisance, as does the odor when something such as poison bait causes large numbers to die. The U.S. has made considerable effort to eradicate these snails, and the U.S. Department of Agriculture has banned the importation and possession of live giant African snails. They are nevertheless still sought after as pets because of the vibrant "tiger stripes" on their shells. Giant African snails can be farmed, but their requirements and farming methods differ significantly from those of Helix species. 
Biology Understanding the snail's biology is fundamental to choosing the right farming techniques, so it is described here with that in mind. Anatomy The anatomy of the edible land snail is described in Land snail. Lifecycle General Snails are hermaphrodites. Although they have both male and female reproductive organs, they must mate with another snail of the same species before they lay eggs. Some snails may act as males one season and as females the next, while others play both roles at once and fertilize each other simultaneously. When the snail is large and mature enough, which may take several years, mating occurs in the late spring or early summer after several hours of courtship. Sometimes, a second mating occurs in summer. (In tropical climates, mating may occur several times a year; in some climates, snails mate around October and may mate a second time two weeks later.) After mating, the snail can store the received sperm for up to a year, but it usually lays eggs within a few weeks. Snails are sometimes uninterested in mating with another snail of the same species that originated a considerable distance away. For example, a C. aspersum from southern France may reject a C. aspersum from northern France. Growth Within the same snail population and under the same conditions, some snails grow faster than others; some take twice as long to mature. This variability may help the species survive bad weather and other hazards in the wild. Several factors can greatly influence the growth of snails, including population density, stress (snails are sensitive to noise, light, vibration, unsanitary conditions, irregular feedings, being touched, etc.), feed, temperature and moisture, and the breeding technology used. A newly hatched snail's shell size depends on the egg size, since the shell develops from the egg's surface membrane. As the snail grows, the shell is added to in increments. Eventually, the shell develops a flare or reinforcing lip at its opening. 
This shows that the snail is now mature; no further shell growth can occur. Growth is measured by shell size, since a snail's body weight fluctuates, even at 100% humidity. The growth rate varies considerably between individuals in each population group. Adult size, which is related to the growth rate, also varies; thus the fastest growers are usually the largest snails. Snails hatched from the eggs of larger, healthier snails also tend to grow faster and thus larger. Dryness inhibits growth and even stops activity. When the weather becomes too hot and dry in summer, the snail becomes inactive, seals its shell, and estivates (becomes dormant) until cooler, moister weather returns. Some snails estivate in groups on tree trunks, posts, or walls, sealing themselves to the surface and thus sealing up the shell opening. Peak snail activity (including feeding and thus growth) occurs a few hours after sunset, when the temperature is lower and the water content (in the form of dew) is higher. During the daytime, snails usually seek shelter. Snail farming Successful snail culture requires the correct equipment and supplies, including snail pens or enclosures; devices for measuring humidity (hygrometer), temperature (thermometer), soil moisture (soil moisture sensor), and light (in foot-candles); a weight scale and an instrument to measure snail size; a kit for testing soil contents; and a magnifying glass to see the eggs. Equipment to control the climate (temperature and humidity), to regulate water (e.g., a sprinkler system to keep the snails moist and a drainage system), to provide light and shade, and to kill or keep out pests and predators may also be needed. Some horticultural systems, such as artificial lighting systems and water sprinklers, may be adapted for snail culture. Better results are obtained if snails of the same kind and generation are used. Some recommend putting the hatchlings in a separate pen. 
Four systems of snail farms can be distinguished: outdoor pens; buildings with a controlled climate; closed systems such as plastic tunnel houses or "greenhouses"; and a hybrid system in which snails breed and hatch inside a controlled environment and then (after 6 to 8 weeks) are placed in outside pens to mature. Key factors to successful snail farming Hygiene Good hygiene can prevent the spread of disease and otherwise improve the health and growth rate of snails. Food is replaced daily to prevent spoilage. Earthworms added to the soil help keep the pen clean. Parasites, nematodes, trematodes, fungi, and microarthropods can attack snails, and such problems can spread rapidly when snail populations are dense. The bacterium Pseudomonas aeruginosa causes intestinal infections that can spread rapidly in a crowded snail pen. Possible predators include rats, mice, moles, skunks, weasels, birds, frogs and toads, lizards, walking insects (e.g., some beetle and cricket species), some types of flies, centipedes, and even certain carnivorous snail species, such as Strangesta capillacea. Population density Population density also affects successful snail production. Snails tend not to breed when packed too densely or when the slime in the pen accumulates too much; the slime apparently works like a pheromone and suppresses reproduction. On the other hand, snails in groups of about 100 seem to breed better than when only a few snails are confined together, perhaps because they have more potential mates from which to choose. Snails in a densely populated area grow more slowly even when food is abundant, and they also have a higher mortality rate. These snails then become smaller adults that lay fewer clutches of eggs, with fewer eggs per clutch and a lower hatch rate. Smaller adult snails sell for less. Dwarfing is quite common in snail farming and is attributable mainly to rearing conditions rather than hereditary factors. 
Feeding The feeding season is April through October (though it may vary with the local climate), with a "rest period" during the summer. Food should not be placed in one small clump, or there will not be enough room for all the snails to reach it. Snails eat solid food by rasping it away with their radula. Feeding activity depends on the weather, and snails may not necessarily feed every day. Evening irrigation in dry weather may encourage feeding, since the moisture makes it easier for the snails to move about. Climate A mild climate with high humidity (75% to 95%) is best for snail farming, though most varieties can stand a wider range of temperatures. The optimal temperature is for many varieties. When the temperature falls below , snails hibernate. Under the snails are inactive, and under , all growth stops. When the temperature rises much above or conditions become too dry, snails estivate. Wind is bad for snails because it speeds up moisture loss, and snails must retain moisture. Snails thrive in damp but not waterlogged environments, and thus a well-draining soil is required. Research indicates that a water content around 80% of the carrying capacity of the soil and an air humidity over 80% (during darkness) are the most favorable conditions. Many farmers use mist-producing devices to maintain proper moisture in the air and/or soil. Also, if the system contains live vegetation, the leaves should be wetted periodically. Soil Snails dig in soil and ingest it. Good soil favors snail growth and provides some of their nutrition. Lack of access to good soil may cause fragile shells even when the snails have well-balanced feed, and the snails' growth may lag far behind that of snails on good soil. Snails often eat feed, then go eat soil; sometimes, they eat only one or the other. Soil care: A farmer must find a way to prevent the soil from becoming fouled with mucus and droppings and also tackle undesirable chemical changes that may occur over time. 
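The climate guidance under "Climate" above can be sketched as a simple state classifier. The text elides the exact temperature cutoffs, so they are left here as species-specific parameters the grower must supply; only the 75% to 95% humidity band comes from the text, and the example thresholds are made up.

```python
# Sketch of the climate rules described under "Climate". The exact
# hibernation/estivation temperatures are not given in the text, so
# they must be supplied by the grower; the 75-95% humidity band is
# from the text.

def snail_state(temp_c, humidity_pct, hibernate_below, estivate_above,
                min_humidity=75.0, max_humidity=95.0):
    """Classify pen conditions; threshold temperatures are inputs."""
    if temp_c < hibernate_below:
        return "hibernating"
    if temp_c > estivate_above or humidity_pct < min_humidity:
        return "estivating"      # too hot or too dry
    if humidity_pct > max_humidity:
        return "too wet"         # waterlogged conditions are harmful
    return "active"

# With made-up thresholds of 7 and 27 degC:
print(snail_state(20, 85, hibernate_below=7, estivate_above=27))  # active
print(snail_state(3, 85, hibernate_below=7, estivate_above=27))   # hibernating
```

In practice the thresholds would be calibrated per species and per site; the function only encodes the ordering of the states described in the text.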
Soil mix suggestions: peat, clay, compost, and CaCO3 leaf mold (at pH 7). Phases in snail farming Some who raise C. aspersum distinguish five stages: reproduction, hatching, young, fattening, and final fattening. Depending on the scale and sophistication of a snail farm, it will contain some or all of the sections described below, which may or may not be merged with one another. Each section has its own values for the key factors of successful snail farming described above. Hibernation Future breeding stock must hibernate for three months. Breeding Most breeders allow the snails to mate with one another on their own. If the snails are kept in ideal conditions, breeding will occur at higher rates and with more success. Hatchery and nursery When the snails have laid their eggs, the pots are put in a nursery where the eggs will hatch. The young snails are kept in the nursery for about six weeks and then moved to a separate pen, as young snails do best if kept with other snails of similar size. Eight hours of daylight is optimal for young snails. Baby snails are fed tender lettuce leaves (Boston type, but head type is probably also good). Cannibalism by hatchlings The first snails to hatch eat the shells of their eggs, which gives them the calcium they need for their own shells. They may then begin eating unhatched eggs. If the snail eggs are kept at the optimum temperature (for some varieties), and if none of the eggs lose moisture, most eggs will hatch within three days of each other, and cannibalism will be low. If hatching extends over a longer period, cannibalism may increase. Some eggs eaten are eggs that were not fertile or did not develop properly, but sometimes properly developing embryos are eaten. A high density of "clutches" of egg masses increases the rate of cannibalism, as other nearby egg masses are more likely to be found and eaten. Fattening/growing In this section, the snails are grown from juvenile to mature size. 
Fattening pens can be outside or in a greenhouse. High summer temperatures and insufficient moisture cause dwarfing and malformations in some snails; this is more of a problem inside greenhouses if the sun overheats the building. A layer of coarse sand and topsoil with earthworms is placed on the fattening pen's bottom; the worms help clean up the snail droppings. Harvest and purging Snails are mature when a lip forms at the opening of their shell. Before they mature, their shells are more easily broken, making them undesirable. For C. aspersum, commercial weight is 8 grams or larger. The fastest, largest, and healthiest snails are selected as next-generation breeders, typically around 5% of the harvest; the remainder goes for sale. Snail eggs may also be harvested and processed to produce snail caviar, but to do so systematically, special breeding units are created that allow easy harvesting of the eggs. Types of farms, or sections thereof Open air farms Enclosures for snails are usually long and thin rather than square, which allows the workers to walk around and reach into the whole pen without harming the snails. The enclosure may be a trough with sides made of wood, block, fiber cement sheets, or galvanized sheet steel, covered with screen or netting. The covering confines the snails and keeps out birds and other predators. The bottom of the enclosure, if it is not the ground or trays of dirt, needs to be a surface more solid than screening: a snail placed in a wire-mesh-bottom pen will keep crawling, trying to get off the wires and onto solid, more comfortable ground. Garden farms An alternate method is to make a square pen with a -square garden in it. Plant about six crops, e.g., nettles and artichokes, inside the pen, and the snails will choose what they want to eat. Plastic tunnels make cheap, easy snail enclosures, but it is difficult to regulate their heat and humidity. Indoor farms Fluorescent lamps can be used to give artificial daylight. 
Different snails respond to day length in different ways; the ratio of light to darkness influences activity, feeding, mating, and egg-laying. Snails can be bred in boxes or cages stacked several units high, with an automatic sprinkler system to provide moisture. Breeding cages need a feed trough and a water trough. Plastic trays a couple of inches deep are adequate; deeper water troughs increase the chance of snails drowning in them. Trays can be set on a bed of small gravel. Small plastic pots, e.g., flower pots about deep, can be filled with sterilized dirt (or a loamy, pH-neutral soil) and set in the gravel to give the snails a place to lay their eggs. After the snails lay eggs, each pot is replaced. (Set one pot inside another so that one can be lifted easily without shifting the gravel.) Processing/transforming snails Snails can be processed industrially (typically in 'factories') and as a craft (typically in 'kitchens'). Industrial processing of snails risks a significant drop in quality and relatively high material losses; the economies of scale that go with industrial processing, though, can compensate for this profitably. Processing by individual craftsmanship allows for much lower production volumes, but product quality typically remains high. Market developments Ukraine In 2015, the first snail farm opened in Ukraine. Production was, and remains, almost entirely for export, there being no consumer market for snails in the country. Production (in tonnes) was 93 in 2018, 200–300 in 2019, and 1,000 in 2020, when the country had 400 farms. Exports were decimated in 2020, however, by lockdowns related to the COVID-19 pandemic. West Africa There is a huge demand for snail meat in West African countries: some 7.9 million kg of snails are consumed in Ivory Coast each year, and other countries, such as Ghana, import snails to meet demand. France The COVID-19 pandemic wiped out almost all sales in France in 2020. 
This was largely due to the cancellation of New Year's Eve celebrations, which normally account for 70% of annual sales. Restrictions and regulations United States The Animal and Plant Health Inspection Service (APHIS) categorizes giant African snails as a "quarantine significant plant pest." The United States does not allow live giant African snails into the country under any circumstances, and it is illegal to own or possess them. APHIS vigorously enforces this regulation and destroys these snails or returns them to their country of origin. Since large infestations of snails can do devastating damage, many states have quarantines against nursery products, and other products, from infested states. Further, it is illegal to import snails (or slugs) into the U.S. without permission from the Plant Protection and Quarantine (PPQ) Division of the Animal and Plant Health Inspection Service, U.S. Department of Agriculture. APHIS also oversees interstate transportation of snails. Environmental benefits The farming of snails for food shows potential as a low-carbon animal protein source. A case study of a farm in Southern Italy found that snail meat production resulted in 0.7 kg CO2 eq per kg of fresh edible meat. This carbon footprint is similar to that of mealworm cultivation, which also has a similar feed conversion ratio. This compares with about 2-4 for chicken, 6-8 for pork, and up to 50 for beef. It is attributed to snails' lack of enteric methane emissions, lower energy demands, and favorable feed conversion ratio.
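As a rough arithmetic illustration of the figures above (a sketch only; pairing the Ivory Coast consumption figure with the Italian case-study footprint is purely illustrative, not a claim about how those snails are actually produced):

```python
# Footprints quoted above, in kg CO2 eq per kg of fresh edible meat.
FOOTPRINTS = {
    "snail": 0.7,        # Southern Italy case study
    "chicken": (2, 4),   # approximate range
    "pork": (6, 8),      # approximate range
    "beef": 50,          # "up to"
}

def annual_emissions(kg_meat_per_year, kg_co2_eq_per_kg):
    """Total kg CO2 eq associated with a yearly quantity of meat."""
    return kg_meat_per_year * kg_co2_eq_per_kg

# If Ivory Coast's 7.9 million kg of snails per year were all farmed
# at the case-study footprint, the associated emissions would be about
# 5.5 million kg CO2 eq:
print(annual_emissions(7.9e6, FOOTPRINTS["snail"]))
```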
https://en.wikipedia.org/wiki/Batch%20reactor
Batch reactor
A batch reactor is a chemical reactor in which a non-continuous reaction is conducted, i.e., one in which the reactants, products, and solvent do not flow in or out of the vessel during the reaction until the target reaction conversion is achieved. By extension, the term is sometimes loosely applied to other batch fluid processing operations that do not involve a chemical reaction, such as solids dissolution, product mixing, batch distillation, crystallization, and liquid/liquid extraction. In such cases, however, the vessels may not be referred to as reactors but rather by a term specific to the function they perform (such as crystallizer, bioreactor, etc.). Many batch processes are designed on the basis of a scale-up from the laboratory, particularly for the manufacture of specialty chemicals and pharmaceuticals. In such cases, process development produces a recipe for the manufacturing process, which has many similarities to a recipe used in cookery. A typical batch reactor consists of a pressure vessel with an agitator and an integral heating/cooling system. The vessels may vary in size from less than 1 L to more than 15,000 L, and are usually fabricated in steel, stainless steel, glass-lined steel, glass, or exotic alloys. Liquids and solids are usually charged via connections in the top cover of the reactor; vapors and gases also discharge through connections in the top, while liquids are usually discharged out of the bottom. The advantages of the batch reactor lie in its versatility. A single vessel can carry out a sequence of different operations without the need to break containment, which is particularly useful when processing toxic or highly potent compounds. Agitation The usual agitator arrangement is a centrally mounted driveshaft with an overhead drive unit. Impeller blades are mounted on the shaft. A wide variety of blade designs are used, and typically the blades cover about two thirds of the diameter of the reactor. 
Where viscous products are handled, anchor-shaped paddles with a close clearance between the blade and the vessel wall are often used. Most batch reactors also use baffles: stationary blades that break up the flow caused by the rotating agitator. These may be fixed to the vessel cover or mounted on the interior of the side walls. Despite significant improvements in agitator blade and baffle design, mixing in large batch reactors is ultimately constrained by the amount of energy that can be applied. On large vessels, mixing energies of more than 5 W/L can put an unacceptable burden on the cooling system, and high agitator loads can also create shaft stability problems. Where mixing is a critical parameter, the batch reactor is not the ideal solution; much higher mixing rates can be achieved with smaller flowing systems using high-speed agitators, ultrasonic mixing, or static mixers. Heating and cooling systems Products within batch reactors usually liberate or absorb heat during processing; even the action of stirring stored liquids generates heat. To hold the reactor contents at the desired temperature, heat has to be added or removed by a cooling jacket or cooling pipe. Heating/cooling coils or external jackets are used for heating and cooling batch reactors: heat transfer fluid passes through the jacket or coils to add or remove heat. Within the chemical and pharmaceutical industries, external cooling jackets are generally preferred, as they make the vessel easier to clean. The performance of these jackets can be defined by three parameters: response time to modify the jacket temperature, uniformity of jacket temperature, and stability of jacket temperature. It can be argued that the heat transfer coefficient is also an important parameter. It must be recognized, however, that large batch reactors with external cooling jackets have severe heat transfer constraints by virtue of their design. 
It is difficult to achieve better than 100 W/L even with ideal heat transfer conditions; by contrast, continuous reactors can deliver cooling capacities in excess of 10,000 W/L. For processes with very high heat loads, there are better solutions than batch reactors. Fast temperature control response and uniform jacket heating and cooling are particularly important for crystallization processes or operations where the product or process is very temperature sensitive. There are several types of batch reactor cooling jackets, including the single external jacket, the half-coil jacket, and the constant flux cooling jacket. Single external jacket The single jacket design consists of an outer jacket which surrounds the vessel. Heat transfer fluid flows around the jacket and is injected at high velocity via nozzles, and the temperature in the jacket is regulated to control heating or cooling. The single jacket is probably the oldest design of external cooling jacket. Despite being a tried and tested solution, it has some limitations: on large vessels, it can take many minutes to adjust the temperature of the fluid in the cooling jacket, which results in sluggish temperature control. The distribution of heat transfer fluid is also far from ideal, and the heating or cooling tends to vary between the side walls and the bottom dish. Another issue is the inlet temperature of the heat transfer fluid, which can oscillate over a wide range (in response to the temperature control valve), causing hot or cold spots at the jacket inlet points. Half-coil jacket The half-coil jacket is made by welding a half pipe around the outside of the vessel to create a semicircular flow channel. The heat transfer fluid passes through the channel in a plug flow fashion; a large reactor may use several coils to deliver the heat transfer fluid. Like the single jacket, the temperature in the jacket is regulated to control heating or cooling. 
The plug flow characteristics of a half-coil jacket permit faster displacement of the heat transfer fluid in the jacket (typically in less than 60 s), which is desirable for good temperature control. It also provides good distribution of heat transfer fluid, avoiding the problems of non-uniform heating or cooling between the side walls and the bottom dish. As with the single jacket design, however, the inlet heat transfer fluid temperature is vulnerable to large oscillations (in response to the temperature control valve). Constant flux cooling jacket The constant flux cooling jacket is a relatively recent development. It is not a single jacket but a series of 20 or more small jacket elements. The temperature control valve operates by opening and closing these channels as required. By varying the heat transfer area in this way, the process temperature can be regulated without altering the jacket temperature. The constant flux jacket has a very fast temperature control response (typically less than 5 s) due to the short length of the flow channels and the high velocity of the heat transfer fluid. Like the half-coil jacket, the heating/cooling flux is uniform, but because the jacket operates at a substantially constant temperature, the inlet temperature oscillations seen in other jackets are absent. An unusual feature of this type of jacket is that process heat can be measured very sensitively, which allows the user to monitor the rate of reaction for detecting end points, controlling addition rates, controlling crystallization, etc. Applications Batch reactors are often used in the process industry; in wastewater treatment, as they are effective in reducing the biological oxygen demand (BOD) of influent untreated water; in the pharmaceutical industry; and in laboratory applications, such as small-scale production, inducing fermentation for beverage products, and experiments in reaction kinetics and thermodynamics. 
Common issues ascribed to batch reactors are their relatively high cost and unreliability in terms of product quality.
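The heat-removal constraint discussed under heating and cooling can be illustrated with a minimal lumped energy balance for a jacketed vessel. All the numbers below are illustrative assumptions, not design values, and the jacket temperature is idealized as constant.

```python
# Minimal lumped energy balance for a jacketed batch reactor:
#     m*cp * dT/dt = Q_reaction - U*A*(T - T_jacket)
# All numbers are illustrative assumptions, not design values.

m_cp = 4000.0 * 4.2   # heat capacity of contents, kJ/K (4000 kg, water-like)
U = 0.5               # overall heat transfer coefficient, kW/(m^2 K)
A = 12.0              # jacket heat transfer area, m^2
T_jacket = 20.0       # jacket temperature, degC (held constant here)
Q_reaction = 150.0    # exothermic heat release, kW

T = 60.0              # initial batch temperature, degC
dt = 10.0             # time step, s
for _ in range(360):  # one hour, forward Euler
    Q_removed = U * A * (T - T_jacket)         # kW removed by the jacket
    T += (Q_reaction - Q_removed) * dt / m_cp  # kJ/s * s / (kJ/K) = K

# The batch relaxes toward the steady state where Q_removed equals
# Q_reaction, i.e. T = T_jacket + Q_reaction/(U*A) = 45 degC.
print(round(T, 1))
```

The sketch shows why jacket area per unit volume matters: for a fixed U and jacket ΔT, the removable heat scales with A, which grows more slowly than volume as vessels get larger.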
https://en.wikipedia.org/wiki/Recessional%20velocity
Recessional velocity
Recessional velocity is the rate at which an extragalactic astronomical object recedes (becomes more distant) from an observer as a result of the expansion of the universe. It can be measured by observing the wavelength shifts of spectral lines emitted by the object, known as the object's cosmological redshift. Application to cosmology Hubble's law is the relationship between a galaxy's distance and its recessional velocity, which is approximately linear for galaxies at distances of up to a few hundred megaparsecs. It can be expressed as v = H0D + vpec, where H0 is the Hubble constant, D is the proper distance, v is the object's recessional velocity, and vpec is the object's peculiar velocity. The recessional velocity of a galaxy can be calculated from the redshift observed in its emitted spectrum. One application of Hubble's law is to estimate distances to galaxies based on measurements of their recessional velocities. However, for relatively nearby galaxies the peculiar velocity can be comparable to or larger than the recessional velocity, in which case Hubble's law does not give a good estimate of an object's distance based on its redshift. In some cases (such as the Andromeda Galaxy, 2.5 million light-years away and approaching us at 300 km/s, or even Messier 81 at 12 million light-years away and approaching at 34 km/s) v is negative (i.e., the galaxy's spectrum is observed to be blueshifted) as a result of the peculiar velocity.
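Hubble's law and the nearby-galaxy caveat above can be sketched numerically. The Hubble constant here is an assumed round value of 70 km/s/Mpc (measured estimates are roughly 67–74), and the Andromeda peculiar velocity is an assumed value chosen to reproduce its observed approach speed.

```python
# Hedged sketch of Hubble's law, v = H0 * D + v_pec.
# H0 is an assumed round value; measured estimates are ~67-74 km/s/Mpc.

H0 = 70.0  # Hubble constant in km/s per megaparsec (assumed)

def recessional_velocity(distance_mpc, peculiar_velocity_km_s=0.0):
    """Total line-of-sight velocity (km/s) at a given proper distance (Mpc)."""
    return H0 * distance_mpc + peculiar_velocity_km_s

def distance_from_velocity(velocity_km_s):
    """Distance estimate (Mpc) that ignores peculiar velocity."""
    return velocity_km_s / H0

# A galaxy 100 Mpc away: the Hubble flow dominates.
print(recessional_velocity(100))  # 7000.0 km/s

# Andromeda is ~0.77 Mpc away (2.5 million light-years), so its
# Hubble-flow term (~54 km/s) is swamped by its peculiar velocity
# (about -354 km/s assumed here, giving roughly the observed
# 300 km/s approach); the net velocity is negative, i.e. blueshifted.
print(recessional_velocity(0.77, -354.0))
```

The second print illustrates exactly the failure mode described above: inverting Hubble's law for such a galaxy would give a meaningless (negative) distance.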
Physical sciences
Basics
Astronomy
97792
https://en.wikipedia.org/wiki/Peck
Peck
A peck is an imperial and United States customary unit of dry volume, equivalent to 2 dry gallons or 8 dry quarts or 16 dry pints. An imperial peck is equivalent to 9.09 liters and a US customary peck is equivalent to 8.81 liters. Two pecks make a kenning (obsolete), and four pecks make a bushel. Although the peck is no longer widely used, some produce, such as apples, is still often sold by the peck in the U.S. (the unit is obsolete in the UK, found only in the old nursery rhyme "Peter Piper" and in the Bible – e.g., Matthew 5:15 in some older translations). Scotland before 1824 In Scotland, the peck was used as a dry measure until the introduction of imperial units as a result of the Weights and Measures Act 1824. The peck was equal to about 9 litres (1.98 Imp gal) in the case of certain crops, such as wheat, peas, beans and meal, and about 13 litres (2.86 Imp gal) in the case of barley, oats and malt. A firlot was equal to 4 pecks.
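The relationships above can be collected into a small conversion sketch; the litre figures are the approximate equivalents quoted in the text:

```python
# Dry-volume relationships for the peck, as stated above.
DRY_PINTS_PER_PECK = 16
DRY_QUARTS_PER_PECK = 8
DRY_GALLONS_PER_PECK = 2
PECKS_PER_KENNING = 2   # obsolete unit
PECKS_PER_BUSHEL = 4

LITRES_PER_IMPERIAL_PECK = 9.09  # approximate
LITRES_PER_US_PECK = 8.81        # approximate

def bushels_to_us_litres(bushels):
    """Convert bushels to litres via pecks (US customary)."""
    return bushels * PECKS_PER_BUSHEL * LITRES_PER_US_PECK

print(round(bushels_to_us_litres(1), 2))  # -> 35.24
```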
Physical sciences
Mass and weight
Basics and measurement
97830
https://en.wikipedia.org/wiki/Nuclear%20technology
Nuclear technology
Nuclear technology is technology that involves the nuclear reactions of atomic nuclei. Among the notable nuclear technologies are nuclear reactors, nuclear medicine and nuclear weapons. It is also used, among other things, in smoke detectors and gun sights. History and scientific background Discovery The vast majority of common, natural phenomena on Earth only involve gravity and electromagnetism, and not nuclear reactions. This is because atomic nuclei are generally kept apart because they contain positive electrical charges and therefore repel each other. In 1896, Henri Becquerel was investigating phosphorescence in uranium salts when he discovered a new phenomenon which came to be called radioactivity. He, Pierre Curie and Marie Curie began investigating the phenomenon. In the process, they isolated the element radium, which is highly radioactive. They discovered that radioactive materials produce intense, penetrating rays of three distinct sorts, which they labeled alpha, beta, and gamma after the first three Greek letters. Some of these kinds of radiation could pass through ordinary matter, and all of them could be harmful in large amounts. All of the early researchers received various radiation burns, much like sunburn, and thought little of it. The new phenomenon of radioactivity was seized upon by the manufacturers of quack medicine (as had the discoveries of electricity and magnetism, earlier), and a number of patent medicines and treatments involving radioactivity were put forward. Gradually it was realized that the radiation produced by radioactive decay was ionizing radiation, and that even quantities too small to burn could pose a severe long-term hazard. Many of the scientists working on radioactivity died of cancer as a result of their exposure. Radioactive patent medicines mostly disappeared, but other applications of radioactive materials persisted, such as the use of radium salts to produce glowing dials on meters. 
As the atom came to be better understood, the nature of radioactivity became clearer. Some larger atomic nuclei are unstable, and so decay (release matter or energy) after a random interval. The three forms of radiation that Becquerel and the Curies discovered also became more fully understood. Alpha decay is when a nucleus releases an alpha particle, which is two protons and two neutrons, equivalent to a helium nucleus. Beta decay is the release of a beta particle, a high-energy electron. Gamma decay releases gamma rays, which unlike alpha and beta radiation are not matter but electromagnetic radiation of very high frequency, and therefore energy. This type of radiation is the most dangerous and most difficult to block. All three types of radiation occur naturally in certain elements. It has also become clear that the ultimate source of most terrestrial energy is nuclear, either through radiation from the Sun caused by stellar thermonuclear reactions or by radioactive decay of uranium within the Earth, the principal source of geothermal energy. Nuclear fission In natural nuclear radiation, the byproducts are very small compared to the nuclei from which they originate. Nuclear fission is the process of splitting a nucleus into roughly equal parts, and releasing energy and neutrons in the process. If these neutrons are captured by other unstable nuclei, those nuclei can fission as well, leading to a chain reaction. The average number of neutrons released per nucleus that go on to fission another nucleus is referred to as k. Values of k larger than 1 mean that the fission reaction is releasing more neutrons than it absorbs, and the reaction is therefore referred to as a self-sustaining chain reaction. A mass of fissile material large enough (and in a suitable configuration) to induce a self-sustaining chain reaction is called a critical mass. When a neutron is captured by a suitable nucleus, fission may occur immediately, or the nucleus may persist in an unstable state for a short time.
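The role of the multiplication factor k can be illustrated with a minimal generation-by-generation sketch (a toy model of neutron population growth, not a reactor simulation):

```python
# Toy model: neutron population over discrete generations for a given
# multiplication factor k, showing why k > 1 gives a growing chain reaction.

def neutron_population(k, generations, n0=1.0):
    """Population after the given number of generations, n0 * k**g."""
    n = n0
    for _ in range(generations):
        n *= k  # each neutron produces k fission-causing neutrons
    return n

# k = 1.0: exactly self-sustaining, population stays constant.
print(neutron_population(1.0, 10))   # -> 1.0
# k = 2.0: population doubles every generation.
print(neutron_population(2.0, 10))   # -> 1024.0
# k = 0.9: subcritical, the reaction dies away.
print(round(neutron_population(0.9, 10), 3))  # -> 0.349
```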
If there are enough immediate decays to carry on the chain reaction, the mass is said to be prompt critical, and the energy release will grow rapidly and uncontrollably, usually leading to an explosion. Discovered on the eve of World War II, this insight led multiple countries to begin programs investigating the possibility of constructing an atomic bomb, a weapon that used fission reactions to generate far more energy than could be created with chemical explosives. The Manhattan Project, run by the United States with the help of the United Kingdom and Canada, developed multiple fission weapons which were used against Japan in 1945 at Hiroshima and Nagasaki. During the project, the first fission reactors were developed as well, though they were primarily for weapons manufacture and did not generate electricity. In 1951, a nuclear reactor produced electricity for the first time, at Experimental Breeder Reactor No. 1 (EBR-1) in Arco, Idaho, ushering in the "Atomic Age" of more intensive human energy use. However, if the mass is critical only when the delayed neutrons are included, then the reaction can be controlled, for example by the introduction or removal of neutron absorbers. This is what allows nuclear reactors to be built. Fast neutrons are not easily captured by nuclei; they must be slowed (becoming slow neutrons), generally by collision with the nuclei of a neutron moderator, before they can be easily captured. Today, this type of fission is commonly used to generate electricity. Nuclear fusion If nuclei are forced to collide, they can undergo nuclear fusion. This process may release or absorb energy. When the resulting nucleus is lighter than that of iron, energy is normally released; when the nucleus is heavier than that of iron, energy is generally absorbed. This process of fusion occurs in stars, which derive their energy from the fusion of hydrogen into helium.
They form, through stellar nucleosynthesis, the light elements (lithium to calcium) as well as some of the heavy elements (beyond iron and nickel, via the S-process). The remaining abundance of heavy elements, from nickel to uranium and beyond, is due to supernova nucleosynthesis, the R-process. Of course, these natural processes of astrophysics are not examples of nuclear "technology". Because of the very strong repulsion of nuclei, fusion is difficult to achieve in a controlled fashion. Hydrogen bombs obtain their enormous destructive power from fusion, but their energy cannot be controlled. Controlled fusion is achieved in particle accelerators; this is how many synthetic elements are produced. A fusor can also produce controlled fusion and is a useful neutron source. However, both of these devices operate at a net energy loss. Controlled, viable fusion power has proven elusive, despite the occasional hoax. Technical and theoretical difficulties have hindered the development of working civilian fusion technology, though research continues to this day around the world. Nuclear fusion was initially pursued only in theoretical stages during World War II, when scientists on the Manhattan Project (led by Edward Teller) investigated it as a method to build a bomb. The project abandoned fusion after concluding that it would require a fission reaction to detonate. It took until 1952 for the first full hydrogen bomb to be detonated, so-called because it used fusion reactions between the hydrogen isotopes deuterium and tritium. Fusion reactions are much more energetic per unit mass of fuel than fission reactions, but initiating fusion is much more difficult. Nuclear weapons A nuclear weapon is an explosive device that derives its destructive force from nuclear reactions, either fission or a combination of fission and fusion. Both reactions release vast quantities of energy from relatively small amounts of matter.
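The energy scale involved can be checked from the mass defect and E = mc². The sketch below uses standard tabulated atomic masses for the deuterium-tritium reaction, the one mentioned above:

```python
# Energy released in D + T -> He-4 + n, from the mass defect and E = m*c^2.
# Masses are standard tabulated values in unified atomic mass units (u).
U_TO_MEV = 931.494       # energy equivalent of 1 u, in MeV
M_DEUTERIUM = 2.014102
M_TRITIUM = 3.016049
M_HELIUM4 = 4.002602
M_NEUTRON = 1.008665

def dt_fusion_energy_mev():
    """Mass defect of D + T -> He-4 + n, converted to MeV."""
    defect = (M_DEUTERIUM + M_TRITIUM) - (M_HELIUM4 + M_NEUTRON)
    return defect * U_TO_MEV

print(round(dt_fusion_energy_mev(), 1))  # -> 17.6
```

About 17.6 MeV from a reaction involving roughly 5 u of matter; a chemical reaction releases only a few eV per molecule, which is why nuclear energy densities are millions of times higher.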
Even small nuclear devices can devastate a city by blast, fire and radiation. Nuclear weapons are considered weapons of mass destruction, and their use and control has been a major aspect of international policy since their debut. The design of a nuclear weapon is more complicated than it might seem. Such a weapon must hold one or more subcritical fissile masses stable for deployment, then induce criticality (create a critical mass) for detonation. It is also quite difficult to ensure that such a chain reaction consumes a significant fraction of the fuel before the device flies apart. The procurement of a nuclear fuel is also more difficult than it might seem, since sufficiently unstable substances for this process do not currently occur naturally on Earth in suitable amounts. One isotope of uranium, namely uranium-235, is naturally occurring and sufficiently unstable, but it is always found mixed with the more stable isotope uranium-238. The latter accounts for more than 99% of the weight of natural uranium. Therefore, some method of isotope separation based on the small mass difference of three neutrons must be performed to enrich (isolate) uranium-235. Alternatively, the element plutonium possesses an isotope that is sufficiently unstable for this process to be usable. Terrestrial plutonium does not currently occur naturally in sufficient quantities for such use, so it must be manufactured in a nuclear reactor. Ultimately, the Manhattan Project manufactured nuclear weapons based on each of these elements. They detonated the first nuclear weapon in a test code-named "Trinity", near Alamogordo, New Mexico, on July 16, 1945. The test was conducted to ensure that the implosion method of detonation would work, which it did. A uranium bomb, Little Boy, was dropped on the Japanese city of Hiroshima on August 6, 1945, followed three days later by the plutonium-based Fat Man on Nagasaki.
In the wake of unprecedented devastation and casualties from a single weapon, the Japanese government soon surrendered, ending World War II. Since these bombings, no nuclear weapons have been deployed offensively. Nevertheless, they prompted an arms race to develop increasingly destructive bombs to provide a nuclear deterrent. Just over four years later, on August 29, 1949, the Soviet Union detonated its first fission weapon. The United Kingdom followed on October 2, 1952; France, on February 13, 1960; and China, on October 16, 1964. A radiological weapon is a type of nuclear weapon designed to distribute hazardous nuclear material in enemy areas. Such a weapon would not have the explosive capability of a fission or fusion bomb, but would kill many people and contaminate a large area. A radiological weapon has never been deployed. While considered useless by a conventional military, such a weapon raises concerns over nuclear terrorism. There have been over 2,000 nuclear tests conducted since 1945. In 1963, all nuclear and many non-nuclear states signed the Limited Test Ban Treaty, pledging to refrain from testing nuclear weapons in the atmosphere, underwater, or in outer space. The treaty permitted underground nuclear testing. France continued atmospheric testing until 1974, while China continued until 1980. The United States conducted its last underground test in 1992, the Soviet Union in 1990, and the United Kingdom in 1991; France and China continued testing until 1996. After signing the Comprehensive Test Ban Treaty in 1996 (which as of 2011 had not entered into force), all of these states have pledged to discontinue all nuclear testing. Non-signatories India and Pakistan last tested nuclear weapons in 1998. Nuclear weapons are the most destructive weapons known - the archetypal weapons of mass destruction.
Throughout the Cold War, the opposing powers had huge nuclear arsenals, sufficient to kill hundreds of millions of people. Generations of people grew up under the shadow of nuclear devastation, portrayed in films such as Dr. Strangelove and The Atomic Cafe. However, the tremendous energy release in the detonation of a nuclear weapon also suggested the possibility of a new energy source. Civilian uses Nuclear power Nuclear power is a type of nuclear technology involving the controlled use of nuclear fission to release energy for work including propulsion, heat, and the generation of electricity. Nuclear energy is produced by a controlled nuclear chain reaction which creates heat, which in turn is used to boil water, produce steam, and drive a steam turbine. The turbine is used to generate electricity and/or to do mechanical work. Nuclear power provided approximately 15.7% of the world's electricity in 2004 and is used to propel aircraft carriers, icebreakers and submarines (so far, economics and fears in some ports have prevented the use of nuclear power in transport ships). All nuclear power plants use fission. No man-made fusion reaction has resulted in a viable source of electricity. Medical applications The medical applications of nuclear technology are divided into diagnostics and radiation treatment. Imaging - The largest use of ionizing radiation in medicine is in medical radiography to make images of the inside of the human body using x-rays. This is the largest artificial source of radiation exposure for humans. Medical and dental x-ray imagers use cobalt-60 or other x-ray sources. A number of radiopharmaceuticals are used, sometimes attached to organic molecules, to act as radioactive tracers or contrast agents in the human body. Positron-emitting radionuclides are used for high resolution, short time span imaging in applications known as positron emission tomography. Radiation is also used to treat diseases in radiation therapy.
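A radioactive tracer's usefulness depends on how quickly its activity decays. As a sketch, the following assumes the roughly 6-hour half-life of technetium-99m, a widely used imaging isotope:

```python
# Sketch: exponential decay of a radioactive tracer,
# A(t) = A0 * 2**(-t / T), with T the half-life.
# Assumes technetium-99m, half-life about 6 hours.

def activity(a0, t_hours, half_life_hours=6.0):
    """Remaining activity after t hours."""
    return a0 * 2.0 ** (-t_hours / half_life_hours)

# After one half-life, half the activity remains; after 24 h, 1/16.
print(activity(100.0, 6.0))   # -> 50.0
print(activity(100.0, 24.0))  # -> 6.25
```

This rapid decay is a feature for diagnostics: the tracer delivers its signal during imaging and then largely disappears within a day or two.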
Industrial applications Since some forms of ionizing radiation can penetrate matter, they are used for a variety of measuring methods. X-rays and gamma rays are used in industrial radiography to make images of the inside of solid products, as a means of nondestructive testing and inspection. The piece to be radiographed is placed between the source and a photographic film in a cassette. After a certain exposure time, the film is developed and it shows any internal defects of the material. Gauges - Gauges use the exponential absorption law of gamma rays. Level indicators: Source and detector are placed at opposite sides of a container, indicating the presence or absence of material in the horizontal radiation path. Beta or gamma sources are used, depending on the thickness and the density of the material to be measured. The method is used for containers of liquids or of grainy substances. Thickness gauges: if the material is of constant density, the signal measured by the radiation detector depends on the thickness of the material. This is useful for continuous production of materials such as paper, rubber, etc. Electrostatic control - To avoid the build-up of static electricity in production of paper, plastics, synthetic textiles, etc., a ribbon-shaped source of the alpha emitter 241Am can be placed close to the material at the end of the production line. The source ionizes the air to remove electric charges on the material. Radioactive tracers - Since radioactive isotopes behave, chemically, mostly like the inactive element, the behavior of a certain chemical substance can be followed by tracing the radioactivity. Examples: Adding a gamma tracer to a gas or liquid in a closed system makes it possible to find a hole in a tube. Adding a tracer to the surface of the component of a motor makes it possible to measure wear by measuring the activity of the lubricating oil. Oil and gas exploration - Nuclear well logging is used to help predict the commercial viability of new or existing wells.
The technology involves the use of a neutron or gamma-ray source and a radiation detector which are lowered into boreholes to determine the properties of the surrounding rock such as porosity and lithology. Road construction - Nuclear moisture/density gauges are used to determine the density of soils, asphalt, and concrete. Typically a cesium-137 source is used. Commercial applications Radioluminescence tritium illumination: Tritium is used with phosphor in rifle sights to increase nighttime firing accuracy. Some runway markers and building exit signs use the same technology, to remain illuminated during blackouts. Betavoltaics. Smoke detector: An ionization smoke detector includes a tiny mass of radioactive americium-241, which is a source of alpha radiation. Two ionization chambers are placed next to each other. Both contain a small source of 241Am that gives rise to a small constant current. One is closed and serves for comparison; the other is open to ambient air and has a gridded electrode. When smoke enters the open chamber, the current is disrupted as the smoke particles attach to the charged ions and restore them to a neutral electrical state. This reduces the current in the open chamber. When the current drops below a certain threshold, the alarm is triggered. Food processing and agriculture In biology and agriculture, radiation is used to induce mutations to produce new or improved species, such as in atomic gardening. Another use in insect control is the sterile insect technique, where male insects are sterilized by radiation and released, so they have no offspring, to reduce the population. In industrial and food applications, radiation is used for sterilization of tools and equipment. An advantage is that the object may be sealed in plastic before sterilization. An emerging use in food production is the sterilization of food using food irradiation.
Food irradiation is the process of exposing food to ionizing radiation in order to destroy microorganisms, bacteria, viruses, or insects that might be present in the food. The radiation sources used include radioisotope gamma ray sources, X-ray generators and electron accelerators. Further applications include sprout inhibition, delay of ripening, increase of juice yield, and improvement of re-hydration. Irradiation is a more general term for the deliberate exposure of materials to radiation to achieve a technical goal (in this context 'ionizing radiation' is implied). As such it is also used on non-food items, such as medical hardware, plastics, tubes for gas pipelines, hoses for floor heating, shrink foils for food packaging, automobile parts, wires and cables (insulation), tires, and even gemstones. Compared to the amount of food irradiated, the volume of those everyday applications is huge but goes unnoticed by the consumer. The genuine effect of processing food by ionizing radiation is damage to DNA, the basic genetic information for life. Microorganisms can no longer proliferate and continue their malignant or pathogenic activities. Spoilage-causing microorganisms cannot continue their activities. Insects do not survive or become incapable of procreation. Plants cannot continue the natural ripening or aging process. All these effects are beneficial to the consumer and the food industry alike. The amount of energy imparted for effective food irradiation is low compared to cooking; even at a typical dose of 10 kGy most food, which is (with regard to warming) physically equivalent to water, would warm by only about 2.5 °C (4.5 °F). What distinguishes processing by ionizing radiation is that the energy density per atomic transition is very high: it can cleave molecules and induce ionization (hence the name), which cannot be achieved by mere heating.
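The warming figure quoted above can be reproduced from the definition of the gray (1 Gy = 1 J of absorbed energy per kg) and the specific heat of water:

```python
# Sketch: bulk warming from an irradiation dose, treating food as
# thermally equivalent to water. Dose in grays is absorbed J/kg.
SPECIFIC_HEAT_WATER = 4186.0  # J/(kg*K)

def temperature_rise(dose_gray):
    """Temperature rise in kelvin for a water-equivalent material."""
    return dose_gray / SPECIFIC_HEAT_WATER

# A typical 10 kGy food-irradiation dose:
print(round(temperature_rise(10_000), 1))  # -> 2.4
```

The bulk heating is thus negligible; the biological effect comes from the very high energy of each individual transition, not from heat.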
This is the reason for new beneficial effects, but at the same time for new concerns. The treatment of solid food by ionizing radiation can provide an effect similar to heat pasteurization of liquids, such as milk. However, the use of the term "cold pasteurization" to describe irradiated foods is controversial, because pasteurization and irradiation are fundamentally different processes, although the intended end results can in some cases be similar. Detractors of food irradiation have concerns about the health hazards of induced radioactivity. A report for the industry advocacy group American Council on Science and Health entitled "Irradiated Foods" states: "The types of radiation sources approved for the treatment of foods have specific energy levels well below that which would cause any element in food to become radioactive. Food undergoing irradiation does not become any more radioactive than luggage passing through an airport X-ray scanner or teeth that have been X-rayed." Food irradiation is currently permitted by over 40 countries, and the volumes treated annually worldwide are estimated to be substantial. Food irradiation is essentially a non-nuclear technology; it relies on the use of ionizing radiation, which may be generated by accelerators for electrons and conversion into bremsstrahlung, but which may also use gamma rays from nuclear decay. There is a worldwide industry for processing by ionizing radiation, the majority by number and by processing power using accelerators. Food irradiation is only a niche application compared to medical supplies, plastic materials, raw materials, gemstones, cables and wires, etc. Accidents Nuclear accidents, because of the powerful forces involved, are often very dangerous. Historically, the first incidents involved fatal radiation exposure. Marie Curie died from aplastic anemia which resulted from her high levels of exposure.
Two scientists, the American Harry Daghlian and the Canadian Louis Slotin, died after mishandling the same plutonium mass. Unlike conventional weapons, the intense light, heat, and explosive force are not the only deadly components of a nuclear weapon. Approximately half of those who died at Hiroshima and Nagasaki died two to five years afterward from radiation exposure. Civilian nuclear and radiological accidents primarily involve nuclear power plants. Most common are nuclear leaks that expose workers to hazardous material. A nuclear meltdown refers to the more serious hazard of releasing nuclear material into the surrounding environment. The most significant meltdowns occurred at Three Mile Island in Pennsylvania and Chernobyl in Soviet Ukraine. The earthquake and tsunami of March 11, 2011 caused serious damage to three nuclear reactors and a spent fuel storage pond at the Fukushima Daiichi nuclear power plant in Japan. Military reactors that experienced similar accidents were Windscale in the United Kingdom and SL-1 in the United States. Military accidents usually involve the loss or unexpected detonation of nuclear weapons. The Castle Bravo test in 1954 produced a larger yield than expected, which contaminated nearby islands and a Japanese fishing boat (with one fatality), and raised concerns about contaminated fish in Japan. In the 1950s through 1970s, several nuclear bombs were lost from submarines and aircraft, some of which have never been recovered. The last twenty years have seen a marked decline in such accidents. Examples of environmental benefits Proponents of nuclear energy note that nuclear-generated electricity annually avoids about 470 million metric tons of carbon dioxide emissions that would otherwise come from fossil fuels. Additionally, the comparatively small amount of waste that nuclear energy does create is safely disposed of by large-scale nuclear energy production facilities or is repurposed/recycled for other energy uses.
Proponents of nuclear energy also point to the opportunity cost of utilizing other forms of electricity. For example, the Environmental Protection Agency estimates that coal kills 30,000 people a year as a result of its environmental impact, while 60 people died in the Chernobyl disaster. A real-world example of impact cited by proponents of nuclear energy is the 650,000 ton increase in carbon emissions in the two months following the closure of the Vermont Yankee nuclear plant.
Technology
Basics_5
null
98132
https://en.wikipedia.org/wiki/Radio%20wave
Radio wave
Radio waves (formerly called Hertzian waves) are a type of electromagnetic radiation with the lowest frequencies and the longest wavelengths in the electromagnetic spectrum, typically with frequencies below 300 gigahertz (GHz) and wavelengths greater than 1 millimeter, about the diameter of a grain of rice. Radio waves with frequencies above about 1 GHz and wavelengths shorter than 30 centimeters are called microwaves. Like all electromagnetic waves, radio waves in vacuum travel at the speed of light, and in the Earth's atmosphere at a slightly lower speed. Radio waves are generated by charged particles undergoing acceleration, such as time-varying electric currents. Naturally occurring radio waves are emitted by lightning and astronomical objects, and are part of the blackbody radiation emitted by all warm objects. Radio waves are generated artificially by an electronic device called a transmitter, which is connected to an antenna, which radiates the waves. They are received by another antenna connected to a radio receiver, which processes the received signal. Radio waves are very widely used in modern technology for fixed and mobile radio communication, broadcasting, radar and radio navigation systems, communications satellites, wireless computer networks and many other applications. Different frequencies of radio waves have different propagation characteristics in the Earth's atmosphere; long waves can diffract around obstacles like mountains and follow the contour of the Earth (ground waves), shorter waves can reflect off the ionosphere and return to Earth beyond the horizon (skywaves), while much shorter wavelengths bend or diffract very little and travel on a line of sight, so their propagation distances are limited to the visual horizon.
To prevent interference between different users, the artificial generation and use of radio waves is strictly regulated by law, coordinated by an international body called the International Telecommunication Union (ITU), which defines radio waves as "electromagnetic waves of frequencies arbitrarily lower than 3,000 GHz, propagated in space without artificial guide". The radio spectrum is divided into a number of radio bands on the basis of frequency, allocated to different uses. Higher-frequency, shorter-wavelength radio waves are called microwaves. Discovery and exploitation Radio waves were first predicted by the theory of electromagnetism that was proposed in 1867 by Scottish mathematical physicist James Clerk Maxwell. His mathematical theory, now called Maxwell's equations, predicted that a coupled electric and magnetic field could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of very short wavelength. In 1887, German physicist Heinrich Hertz demonstrated the reality of Maxwell's electromagnetic waves by experimentally generating electromagnetic waves lower in frequency than light, radio waves, in his laboratory, showing that they exhibited the same wave properties as light: standing waves, refraction, diffraction, and polarization. Italian inventor Guglielmo Marconi developed the first practical radio transmitters and receivers around 1894–1895. He received the 1909 Nobel Prize in Physics for his radio work. Radio communication began to be used commercially around 1900. The modern term "radio wave" replaced the original name "Hertzian wave" around 1912. Generation and reception Radio waves are radiated by charged particles when they are accelerated. Natural sources of radio waves include radio noise produced by lightning and other natural processes in the Earth's atmosphere, and astronomical radio sources in space such as the Sun, galaxies and nebulas.
All warm objects radiate high frequency radio waves (microwaves) as part of their black body radiation. Radio waves are produced artificially by time-varying electric currents, consisting of electrons flowing back and forth in a specially shaped metal conductor called an antenna. An electronic device called a radio transmitter applies oscillating electric current to the antenna, and the antenna radiates the power as radio waves. Radio waves are received by another antenna attached to a radio receiver. When radio waves strike the receiving antenna they push the electrons in the metal back and forth, creating tiny oscillating currents which are detected by the receiver. From quantum mechanics, like other electromagnetic radiation such as light, radio waves can alternatively be regarded as streams of uncharged elementary particles called photons. In an antenna transmitting radio waves, the electrons in the antenna emit the energy in discrete packets called radio photons, while in a receiving antenna the electrons absorb the energy as radio photons. An antenna is a coherent emitter of photons, like a laser, so the radio photons are all in phase. However, from Planck's relation E = hf, the energy of individual radio photons is extremely small, from 10−22 to 10−30 joules. So the antenna of even a very low power transmitter emits an enormous number of photons every second. Therefore, except for certain molecular electron transition processes such as atoms in a maser emitting microwave photons, radio wave emission and absorption is usually regarded as a continuous classical process, governed by Maxwell's equations.
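The quoted range of radio-photon energies follows directly from the Planck relation; a minimal check:

```python
# Energy of a single radio photon from the Planck relation E = h*f.
H = 6.626e-34  # Planck constant, J*s

def photon_energy(frequency_hz):
    """Photon energy in joules."""
    return H * frequency_hz

# A 1 GHz photon: about 6.6e-25 J, within the quoted range.
print(photon_energy(1e9))
# A 1 kHz photon: about 6.6e-31 J, near the low end of the range.
print(photon_energy(1e3))
```

A 1 W transmitter at 1 GHz therefore emits on the order of 10^24 photons per second, which is why the classical description is adequate.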
The wavelength is the distance from one peak (crest) of the wave's electric field to the next, and is inversely proportional to the frequency of the wave. The relation of frequency and wavelength in a radio wave traveling in vacuum or air is λ = c/f, where λ is the wavelength, f is the frequency, and c is the speed of light. Equivalently, c, the distance that a radio wave travels in vacuum in one second, is 299,792,458 m, which is the wavelength of a 1 hertz radio signal. A 1 megahertz radio wave (mid-AM band) has a wavelength of about 300 m. Polarization Like other electromagnetic waves, a radio wave has a property called polarization, which is defined as the direction of the wave's oscillating electric field perpendicular to the direction of motion. A plane-polarized radio wave has an electric field that oscillates in a plane perpendicular to the direction of motion. In a horizontally polarized radio wave the electric field oscillates in a horizontal direction. In a vertically polarized wave the electric field oscillates in a vertical direction. In a circularly polarized wave the electric field at any point rotates about the direction of travel, once per cycle. A right circularly polarized wave rotates in a right-hand sense about the direction of travel, while a left circularly polarized wave rotates in the opposite sense. The wave's magnetic field is perpendicular to the electric field, and the electric and magnetic field are oriented in a right-hand sense with respect to the direction of radiation. An antenna emits polarized radio waves, with the polarization determined by the direction of the metal antenna elements. For example, a dipole antenna consists of two collinear metal rods. If the rods are horizontal, it radiates horizontally polarized radio waves, while if the rods are vertical, it radiates vertically polarized waves. An antenna receiving the radio waves must have the same polarization as the transmitting antenna, or it will suffer a severe loss of reception.
Many natural sources of radio waves, such as the sun, stars and blackbody radiation from warm objects, emit unpolarized waves, consisting of incoherent short wave trains in an equal mixture of polarization states. The polarization of radio waves is determined by a quantum mechanical property of the photons called their spin. A photon can have one of two possible values of spin; it can spin in a right-hand sense about its direction of motion, or in a left-hand sense. Right circularly polarized radio waves consist of photons spinning in a right-hand sense. Left circularly polarized radio waves consist of photons spinning in a left-hand sense. Plane polarized radio waves consist of photons in a quantum superposition of right and left hand spin states. The electric field consists of a superposition of right and left rotating fields, resulting in a plane oscillation.
Propagation characteristics
Radio waves are more widely used for communication than other electromagnetic waves mainly because of their desirable propagation properties, stemming from their large wavelength. Radio waves can pass through the atmosphere in any weather, and through foliage and most building materials. By diffraction, longer wavelengths can bend around obstructions, and unlike other electromagnetic waves they tend to be scattered rather than absorbed by objects larger than their wavelength. The study of radio propagation, how radio waves move in free space and over the surface of the Earth, is vitally important in the design of practical radio systems. Radio waves passing through different environments experience reflection, refraction, polarization, diffraction, and absorption. Different frequencies experience different combinations of these phenomena in the Earth's atmosphere, making certain radio bands more useful for specific purposes than others.
Practical radio systems mainly use three different techniques of radio propagation to communicate:
Line of sight: This refers to radio waves that travel in a straight line from the transmitting antenna to the receiving antenna. It does not necessarily require a cleared sight path; at lower frequencies radio waves can pass through buildings, foliage and other obstructions. This is the only method of propagation possible at frequencies above 30 MHz. On the surface of the Earth, line of sight propagation is limited by the visual horizon to about 64 km (40 mi). This is the method used by cell phones, FM, television broadcasting and radar. By using dish antennas to transmit beams of microwaves, point-to-point microwave relay links transmit telephone and television signals over long distances up to the visual horizon. Ground stations can communicate with satellites and spacecraft billions of miles from Earth.
Indirect propagation: Radio waves can reach points beyond the line of sight by diffraction and reflection. Diffraction causes radio waves to bend around obstructions such as a building edge, a vehicle, or a turn in a hall. Radio waves also partially reflect from surfaces such as walls, floors, ceilings, vehicles and the ground. These propagation methods occur in short range radio communication systems such as cell phones, cordless phones, walkie-talkies, and wireless networks. A drawback of this mode is multipath propagation, in which radio waves travel from the transmitting to the receiving antenna via multiple paths. The waves interfere, often causing fading and other reception problems.
Ground waves: At frequencies below 2 MHz, in the medium wave and longwave bands, due to diffraction vertically polarized radio waves can bend over hills and mountains and propagate beyond the horizon, traveling as surface waves which follow the contour of the Earth.
This makes it possible for mediumwave and longwave broadcasting stations to have coverage areas beyond the horizon, out to hundreds of miles. As the frequency drops, the losses decrease and the achievable range increases. Military very low frequency (VLF) and extremely low frequency (ELF) communication systems can communicate over most of the Earth. VLF and ELF radio waves can also penetrate water to hundreds of meters deep, so they are used to communicate with submerged submarines.
Skywaves: At medium wave and shortwave wavelengths, radio waves reflect off conductive layers of charged particles (ions) in a part of the atmosphere called the ionosphere. So radio waves directed at an angle into the sky can return to Earth beyond the horizon; this is called "skip" or "skywave" propagation. By using multiple skips, communication at intercontinental distances can be achieved. Skywave propagation is variable and dependent on atmospheric conditions; it is most reliable at night and in the winter. Although widely used during the first half of the 20th century, skywave communication has mostly been abandoned due to its unreliability. Remaining uses are by military over-the-horizon (OTH) radar systems, by some automated systems, by radio amateurs, and by shortwave broadcasting stations broadcasting to other countries.
At microwave frequencies, atmospheric gases begin absorbing radio waves, so the range of practical radio communication systems decreases with increasing frequency. Below about 20 GHz atmospheric attenuation is mainly due to water vapor. Above 20 GHz, in the millimeter wave band, other atmospheric gases begin to absorb the waves, limiting practical transmission distances to a kilometer or less. Above 300 GHz, in the terahertz band, virtually all the power is absorbed within a few meters, so the atmosphere is effectively opaque.
Radio communication
In radio communication systems, information is transported across space using radio waves.
At the sending end, the information to be sent, in the form of a time-varying electrical signal, is applied to a radio transmitter. The information, called the modulation signal, can be an audio signal representing sound from a microphone, a video signal representing moving images from a video camera, or a digital signal representing data from a computer. In the transmitter, an electronic oscillator generates an alternating current oscillating at a radio frequency, called the carrier wave because it creates the radio waves that "carry" the information through the air. The information signal is used to modulate the carrier, altering some aspect of it, encoding the information on the carrier. The modulated carrier is amplified and applied to an antenna. The oscillating current pushes the electrons in the antenna back and forth, creating oscillating electric and magnetic fields, which radiate the energy away from the antenna as radio waves. The radio waves carry the information to the receiver location. At the receiver, the oscillating electric and magnetic fields of the incoming radio wave push the electrons in the receiving antenna back and forth, creating a tiny oscillating voltage which is a weaker replica of the current in the transmitting antenna. This voltage is applied to the radio receiver, which extracts the information signal. The receiver first uses a bandpass filter to separate the desired radio station's radio signal from all the other radio signals picked up by the antenna, then amplifies the signal so it is stronger, then finally extracts the information-bearing modulation signal in a demodulator. The recovered signal is sent to a loudspeaker or earphone to produce sound, or a television display screen to produce a visible image, or other devices. A digital data signal is applied to a computer or microprocessor, which interacts with a human user. 
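The modulation step described above can be sketched for the simplest case, amplitude modulation, where the message rides on the carrier's amplitude envelope. The carrier and tone frequencies below are toy values for illustration, far below real broadcast frequencies:

```python
import math

def am_modulate(message, carrier_freq_hz, sample_rate_hz):
    """Amplitude-modulate a carrier with a message signal m(t) in [-1, 1].

    Each output sample is (1 + m) * cos(2*pi*f_c*t): the information signal
    alters the carrier's amplitude, which a receiver's demodulator recovers.
    """
    samples = []
    for n, m in enumerate(message):
        t = n / sample_rate_hz
        samples.append((1.0 + m) * math.cos(2.0 * math.pi * carrier_freq_hz * t))
    return samples

# A 10 Hz test tone modulated onto a 1 kHz carrier, sampled at 8 kHz.
fs = 8000.0
tone = [0.5 * math.sin(2.0 * math.pi * 10.0 * n / fs) for n in range(800)]
signal = am_modulate(tone, 1000.0, fs)
print(len(signal))  # one output sample per message sample
```

Real transmitters use many other schemes (frequency modulation, digital modulations), but all follow the same pattern of altering some aspect of the carrier.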
The radio waves from many transmitters pass through the air simultaneously without interfering with each other. They can be separated in the receiver because each transmitter's radio waves oscillate at a different rate; in other words, each transmitter has a different frequency, measured in kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). The bandpass filter in the receiver consists of one or more tuned circuits which act like a resonator, similarly to a tuning fork. The tuned circuit has a natural resonant frequency at which it oscillates. The resonant frequency is set equal to the frequency of the desired radio station. The oscillating radio signal from the desired station causes the tuned circuit to oscillate in sympathy, and it passes the signal on to the rest of the receiver. Radio signals at other frequencies are blocked by the tuned circuit and not passed on.
Biological and environmental effects
Radio waves are non-ionizing radiation, which means they do not have enough energy to separate electrons from atoms or molecules, ionizing them, or break chemical bonds, causing chemical reactions or DNA damage. The main effect of absorption of radio waves by materials is to heat them, similarly to the infrared waves radiated by sources of heat such as a space heater or wood fire. The oscillating electric field of the wave causes polar molecules to vibrate back and forth, increasing the temperature; this is how a microwave oven cooks food. Radio waves have been applied to the body for 100 years in the medical therapy of diathermy for deep heating of body tissue, to promote increased blood flow and healing. More recently they have been used to create higher temperatures in hyperthermia therapy and to kill cancer cells. However, unlike infrared waves, which are mainly absorbed at the surface of objects and cause surface heating, radio waves are able to penetrate the surface and deposit their energy inside materials and biological tissues.
The depth to which radio waves penetrate decreases with their frequency, and also depends on the material's resistivity and permittivity; it is given by a parameter called the skin depth of the material, which is the depth within which 63% of the energy is deposited. For example, the 2.45 GHz radio waves (microwaves) in a microwave oven penetrate most foods approximately . Looking into a source of radio waves at close range, such as the waveguide of a working radio transmitter, can cause damage to the lens of the eye by heating. A strong enough beam of radio waves can penetrate the eye and heat the lens enough to cause cataracts. Since the heating effect is in principle no different from other sources of heat, most research into possible health hazards of exposure to radio waves has focused on "nonthermal" effects: whether radio waves have any effect on tissues besides that caused by heating. Radiofrequency electromagnetic fields have been classified by the International Agency for Research on Cancer (IARC) as having "limited evidence" for their effects on humans and animals. There is weak mechanistic evidence of cancer risk via personal exposure to RF-EMF from mobile telephones. Radio waves can be shielded against by a conductive metal sheet or screen; an enclosure of sheet or screen is called a Faraday cage. A metal screen shields against radio waves as well as a solid sheet does, as long as the holes in the screen are smaller than about of wavelength of the waves.
Measurement
Since radio frequency radiation has both an electric and a magnetic component, it is often convenient to express the intensity of the radiation field in terms of units specific to each component. The unit volt per meter (V/m) is used for the electric component, and the unit ampere per meter (A/m) is used for the magnetic component. One can speak of an electromagnetic field, and these units are used to provide information about the levels of electric and magnetic field strength at a measurement location.
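Following the skin-depth definition quoted earlier (63% of the energy deposited within one skin depth), the absorbed fraction at any depth follows from simple exponential attenuation. A sketch under that assumption, not tied to any particular material:

```python
import math

def fraction_deposited(depth_m: float, skin_depth_m: float) -> float:
    """Fraction of the wave's energy deposited within the given depth,
    assuming exponential absorption with e-folding length equal to the
    skin depth, i.e. remaining energy ~ exp(-d / delta)."""
    return 1.0 - math.exp(-depth_m / skin_depth_m)

# At one skin depth, ~63% of the energy has been absorbed.
print(f"{fraction_deposited(1.0, 1.0):.3f}")  # 0.632
# Within three skin depths, ~95%.
print(f"{fraction_deposited(3.0, 1.0):.3f}")  # 0.950
```

The 63% figure is just 1 − e⁻¹, which is why the skin depth serves as a convenient single-number summary of penetration.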
Another commonly used unit for characterizing an RF electromagnetic field is power density. Power density is most accurately used when the point of measurement is far enough away from the RF emitter to be located in what is referred to as the far field zone of the radiation pattern. In closer proximity to the transmitter, i.e., in the "near field" zone, the physical relationships between the electric and magnetic components of the field can be complex, and it is best to use the field strength units discussed above. Power density is measured in terms of power per unit area, for example, with the unit milliwatt per square centimeter (mW/cm2). When speaking of frequencies in the microwave range and higher, power density is usually used to express intensity since exposures that might occur would likely be in the far field zone.
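In the far-field zone, the power density of an idealized isotropic radiator follows the inverse-square law S = P / (4πr²). This standard relation and the example figures below are illustrative additions, not values from the text:

```python
import math

def power_density_mw_per_cm2(power_w: float, distance_m: float) -> float:
    """Far-field power density of an isotropic radiator, S = P / (4*pi*r^2),
    converted from W/m^2 to the mW/cm^2 unit discussed above."""
    s_w_per_m2 = power_w / (4.0 * math.pi * distance_m ** 2)
    return s_w_per_m2 * 1000.0 / 10_000.0  # 1 W/m^2 = 0.1 mW/cm^2

# A 100 W isotropic source measured at 10 m (hypothetical figures):
print(f"{power_density_mw_per_cm2(100.0, 10.0):.4f} mW/cm^2")  # ~0.008 mW/cm^2
```

Real antennas concentrate power in some directions (antenna gain), so a measured far-field density can exceed this isotropic estimate along the main beam.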
Aphotic zone
The aphotic zone (aphotic from the Greek prefix a-, "without", + phōs, "light") is the portion of a lake or ocean where there is little or no sunlight. It is formally defined as the depths beyond which less than 1 percent of sunlight penetrates. Above the aphotic zone is the photic zone, which consists of the euphotic zone and the disphotic zone. The euphotic zone is the layer of water in which there is enough light for net photosynthesis to occur. The disphotic zone, also known as the twilight zone, is the layer of water with enough light for predators to see but not enough for the rate of photosynthesis to be greater than the rate of respiration. The aphotic zone begins at the depth below which less than one percent of sunlight penetrates. While most of the ocean's biomass lives in the photic zone, the majority of the ocean's water lies in the aphotic zone. Bioluminescence is more abundant than sunlight in this zone. Most food in this zone comes from dead organisms sinking to the bottom of the lake or ocean from overlying waters. The depth of the aphotic zone can be greatly affected by such things as turbidity and the season of the year. The aphotic zone underlies the photic zone, which is that portion of a lake or ocean directly affected by sunlight.
The Dark Ocean
In the ocean, the aphotic zone is sometimes referred to as the dark ocean. Depending on how it is defined, the aphotic zone of the ocean begins between depths of about to and extends to the ocean floor. The majority of the ocean is aphotic, with the average depth of the sea being deep; the deepest part of the sea, the Challenger Deep in the Mariana Trench, is about deep. The depth at which the aphotic zone begins in the ocean depends on many factors. In clear, tropical water sunlight can penetrate deeper and so the aphotic zone starts at greater depths. Around the poles, the angle of the sunlight means it does not penetrate as deeply, so the aphotic zone is shallower.
If the water is turbid, suspended material can block light from penetrating, resulting in a shallower aphotic zone. Temperatures can range from roughly to . The aphotic zone is further divided into the mesopelagic, bathyal, abyssal, and hadal zones. The mesopelagic zone extends from to . The bathyal zone extends from to . The abyssal zone extends from to or , depending on the authority. The hadal zone refers to the greatest depths, deeper than the abyssal zone. Some twilight occurs in the mesopelagic zone, but creatures below the mesopelagic must be able to live in complete darkness.
Life in the aphotic zone
Though photosynthesis cannot occur in the aphotic zone, it is not unusual to find an abundance of phytoplankton there. Convective mixing due to cooling surface water sinking can increase the concentration of phytoplankton in the aphotic zone and lead to under-estimations of primary production in the euphotic zone during convective mixing events. Unusual and unique creatures dwell in this expanse of pitch black water, such as the gulper eel, giant squid, anglerfish, and vampire squid. Some life in the aphotic zone does not rely on sunlight at all. Benthic communities around methane seeps rely on methane-oxidizing microorganisms to supply energy to other microorganisms. In some rare cases, bacteria use chemical energy sources such as sulfides and methane. Many of the animals in the aphotic zone are bioluminescent, meaning they can produce their own light. Bioluminescence can be used both for navigation and for luring small animals into their jaws. An excellent example of this is the anglerfish, which has a light lure protruding in front of its mouth from a unique appendage on its head, serving both as a navigation aid and as bait for smaller animals. Some animals can cross between the photic and aphotic zones in search of food.
For example, the sperm whale and the southern elephant seal occasionally hunt in the aphotic zone, despite the water pressure compressing their bodies, though not fatally.
Aphotic zone migration
After sunset, millions of organisms swarm up from the depths to feed on the microorganisms floating in the warm epipelagic zone. Many copepods and invertebrate larvae come up to shallower waters to eat the phytoplankton, which attracts many predators like squid, hatchetfish, and lanternfish. The migration of the many bioluminescent animals is visible to the naked eye. This nightly vertical migration is the largest (in terms of the number of animals) on our planet.
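The 1-percent-of-surface-light definition used above can be turned into a rough depth estimate by assuming exponential (Beer–Lambert) attenuation of light with depth. The attenuation coefficients below are hypothetical stand-ins for clear versus turbid water, chosen only to illustrate the contrast described earlier:

```python
import math

def aphotic_depth_m(attenuation_per_m: float, threshold: float = 0.01) -> float:
    """Depth at which surface light falls to the given fraction (1% by default),
    assuming exponential (Beer-Lambert) attenuation I(z) = I0 * exp(-k*z)."""
    return -math.log(threshold) / attenuation_per_m

# Hypothetical attenuation coefficients: clear open ocean vs. turbid coastal water.
print(f"clear:  {aphotic_depth_m(0.04):.0f} m")  # ~115 m
print(f"turbid: {aphotic_depth_m(0.5):.0f} m")   # ~9 m
```

The order-of-magnitude difference shows why turbidity so strongly shifts where the aphotic zone begins.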
London Bridge
The name "London Bridge" refers to several historic crossings that have spanned the River Thames between the City of London and Southwark in central London since Roman times. The current crossing, which opened to traffic in 1973, is a box girder bridge built from concrete and steel. It replaced a 19th-century stone-arched bridge, which in turn superseded a 600-year-old stone-built medieval structure. In addition to the roadway, for much of its history the broad medieval bridge supported an extensive built-up area of homes and businesses, part of the City's Bridge ward, and its southern end in Southwark was guarded by a large stone City gateway. The medieval bridge was preceded by a succession of timber bridges, the first of which was built by the Roman founders of London (Londinium) around AD 50. The current bridge stands at the western end of the Pool of London and is positioned upstream from previous alignments. The approaches to the medieval bridge were marked by the church of St Magnus-the-Martyr on the northern bank and by Southwark Cathedral on the southern shore. Until Putney Bridge opened in 1729, London Bridge was the only road crossing of the Thames downstream of Kingston upon Thames. London Bridge has been depicted in its several forms, in art, literature, and songs, including the nursery rhyme "London Bridge Is Falling Down", and the epic poem The Waste Land by T. S. Eliot. The modern bridge is owned and maintained by Bridge House Estates, an independent charity of medieval origin overseen by the City of London Corporation. It carries the A3 road, which is maintained by the Greater London Authority. The crossing also delineates an area along the southern bank of the River Thames, between London Bridge and Tower Bridge, that has been designated as a business improvement district.
History
Location
The abutments of modern London Bridge rest several metres above natural embankments of gravel, sand and clay.
From the late Neolithic era the southern embankment formed a natural causeway above the surrounding swamp and marsh of the river's estuary; the northern ascended to higher ground at the present site of Cornhill. Between the embankments, the River Thames could have been crossed by ford when the tide was low, or ferry when it was high. Both embankments, particularly the northern, would have offered stable beachheads for boat traffic up and downstream – the Thames and its estuary were a major inland and Continental trade route from at least the 9th century BC. There is archaeological evidence for scattered Neolithic, Bronze Age and Iron Age settlement nearby, but until a bridge was built there, London did not exist. A few miles upstream, beyond the river's upper tidal reach, two ancient fords were in use. These were apparently aligned with the course of Watling Street, which led into the heartlands of the Catuvellauni, Britain's most powerful tribe at the time of Caesar's invasion of 54 BC. Some time before Claudius's conquest of AD 43, power shifted to the Trinovantes, who held the region northeast of the Thames Estuary from a capital at Camulodunum, nowadays Colchester in Essex. Claudius imposed a major colonia at Camulodunum, and made it the capital city of the new Roman province of Britannia. The first London Bridge was built by the Romans as part of their road-building programme, to help consolidate their conquest.
Roman bridges
It is possible that Roman military engineers built a pontoon-type bridge at the site during the conquest period (AD 43). A bridge of any kind would have given a rapid overland shortcut to Camulodunum from the southern and Kentish ports, along the Roman roads of Stane Street and Watling Street (now the A2). The Roman roads leading to and from London were probably built around AD 50, and the river-crossing was possibly served by a permanent timber bridge.
On the relatively high, dry ground at the northern end of the bridge, a small, opportunistic trading and shipping settlement took root and grew into the town of Londinium. A smaller settlement developed at the southern end of the bridge, in the area now known as Southwark. The bridge may have been destroyed along with the town in the Boudican revolt (AD 60), but Londinium was rebuilt and eventually became the administrative and mercantile capital of Roman Britain. The bridge offered uninterrupted, mass movement of foot, horse, and wheeled traffic across the Thames, linking four major arterial road systems north of the Thames with four to the south. Just downstream of the bridge were substantial quays and depots, convenient to seagoing trade between Britain and the rest of the Roman Empire.
Early medieval bridges
With the end of Roman rule in Britain in the early 5th century, Londinium was gradually abandoned and the bridge fell into disrepair. In the Anglo-Saxon period, the river became a boundary between the emergent, mutually hostile kingdoms of Mercia and Wessex. By the late 9th century, Danish invasions prompted at least a partial reoccupation of the site by the Saxons. The bridge may have been rebuilt by Alfred the Great soon after the Battle of Edington as part of Alfred's redevelopment of the area in his system of burhs, or it may have been rebuilt around 990 under the Saxon king Æthelred the Unready to hasten his troop movements against Sweyn Forkbeard, father of Cnut the Great. A skaldic tradition describes the bridge's destruction in 1014 by Æthelred's ally Olaf, to divide the Danish forces who held both the walled City of London and Southwark. The earliest contemporary written reference to a Saxon bridge is , when chroniclers mention how Cnut's ships bypassed the crossing during his war to regain the throne from Edmund Ironside. Following the Norman conquest in 1066, King William I rebuilt the bridge.
It was repaired or replaced by King William II, destroyed by fire in 1136, and rebuilt in the reign of Stephen. Henry II created a monastic guild, the "Brethren of the Bridge", to oversee all work on London Bridge. In 1163, Peter of Colechurch, chaplain and warden of the bridge and its brethren, supervised the bridge's last rebuilding in timber.
Old London Bridge (1209–1831)
After the murder of his former friend and later opponent Thomas Becket, Archbishop of Canterbury, the penitent King Henry II commissioned a new stone bridge in place of the old, with a chapel at its centre dedicated to Becket as martyr. The archbishop had been a native Londoner, born at Cheapside, and a popular figure. The Chapel of St Thomas on the Bridge became the official start of pilgrimage to his Canterbury shrine; it was grander than some town parish churches, and had an additional river-level entrance for fishermen and ferrymen. Building work began in 1176, supervised by Peter of Colechurch. The costs would have been enormous; Henry's attempt to meet them with taxes on wool and sheepskins probably gave rise to a later legend that London Bridge was built on wool packs. In 1202, before Colechurch's death, Isembert, a French monk who was renowned as a bridge builder, was appointed by King John to complete the project. Construction was not finished until 1209. There were houses on the bridge from the start; this was a normal way of paying for the maintenance of a bridge, though in this case it had to be supplemented by other rents and by tolls. From 1282 two bridge wardens were responsible for maintaining the bridge, heading the organization known as the Bridge House. The only two collapses occurred when maintenance had been neglected, in 1281 (five arches) and 1437 (two arches). In 1212, perhaps the greatest of the early fires of London broke out, spreading as far as the chapel and trapping many people. The bridge was about long, and had nineteen piers, supported by timber piles.
The piers were linked above by nineteen arches and a wooden drawbridge. Above and below the water-level, the piers were enclosed and protected by 'starlings', supported by deeper piles than the piers themselves. The bridge, including the part occupied by houses, was from wide. The roadway was mostly around wide, varying from about 14 feet to 16 feet, except that it was narrower at defensive features (the stone gate, the drawbridge and the drawbridge tower) and wider south of the stone gate. The houses occupied only a few feet on each side of the bridge. They received their main support either from the piers, which extended well beyond the bridge itself from west to east, or from 'hammer beams' laid from pier to pier parallel to the bridge. It was the length of the piers which made it possible to build quite large houses, up to deep. The numerous starlings restricted the river's tidal ebb and flow. The difference in water levels on the two sides of the bridge could be as much as , producing ferocious rapids between the piers resembling a weir. Only the brave or foolhardy attempted to "shoot the bridge" – steer a boat between the starlings when in flood – and some were drowned in the attempt. The bridge was "for wise men to pass over, and for fools to pass under." The restricted flow also meant that in hard winters the river upstream was more susceptible to freezing. The number of houses on the bridge reached its maximum in the late fourteenth century, when there were 140. Subsequently, many of the houses, originally only 10 to 11 feet wide, were merged, so that by 1605 there were 91. Originally they are likely to have had only two storeys, but they were gradually enlarged. In the seventeenth century, when there are detailed descriptions of them, almost all had four or five storeys (counting the garrets as a storey); three houses had six storeys. Two-thirds of the houses were rebuilt from 1477 to 1548. 
In the seventeenth century, the usual plan was a shop on the ground floor, a hall and often a chamber on the first floor, a kitchen and usually a chamber and a waterhouse (for hauling up water in buckets) on the second floor, and chambers and garrets above. Approximately every other house shared in a 'cross building' above the roadway, linking the houses either side and extending from the first floor upwards. All the houses were shops, and the bridge was one of the City of London's four or five main shopping streets. There seems to have been a deliberate attempt to attract the more prestigious trades. In the late fourteenth century more than four-fifths of the shopkeepers were haberdashers, glovers, cutlers, bowyers and fletchers or from related trades. By 1600 all of these had dwindled except the haberdashers, and the spaces were filled by additional haberdashers, by traders selling textiles and by grocers. From the late seventeenth century there was a greater variety of trades, including metalworkers such as pinmakers and needle makers, sellers of durable goods such as trunks and brushes, booksellers and stationers. The three major buildings on the bridge were the chapel, the drawbridge tower and the stone gate, all of which seem to have been present soon after the bridge's construction. The chapel was last rebuilt in 1387–1396, by Henry Yevele, master mason to the king. Following the Reformation, it was converted into a house in 1553. The drawbridge tower was where the severed heads of traitors were exhibited. The drawbridge ceased to be opened in the 1470s and in 1577–1579 the tower was replaced by Nonsuch House—a pair of magnificent houses. Its architect was Lewis Stockett, Surveyor of the Queen's Works, who gave it the second classical facade in London (after Somerset House in the Strand). The stone gate was last rebuilt in the 1470s, and later took over the function of displaying the heads of traitors. 
The heads were dipped in tar and boiled to preserve them against the elements, and were impaled on pikes. The head of William Wallace was the first recorded as appearing, in 1305, starting a long tradition. Other famous heads on pikes included those of Jack Cade in 1450, Thomas More in 1535, Bishop John Fisher in the same year, and Thomas Cromwell in 1540. In 1598, a German visitor to London, Paul Hentzner, counted over 30 heads on the bridge. The last head was installed in 1661; subsequently heads were placed on Temple Bar instead, until the practice ceased. There were two multi-seated public latrines, but they seem to have been at the two ends of the bridge, possibly on the riverbank. The one at the north end had two entrances in 1306. In 1481, one of the latrines fell into the Thames and five men were drowned. Neither of the latrines is recorded after 1591. In 1578–1582 a Dutchman, Peter Morris, created a waterworks at the north end of the bridge. Water wheels under the two northernmost arches drove pumps that raised water to the top of a tower, from which wooden pipes conveyed it into the city. In 1591 water wheels were installed at the south end of the bridge to grind corn. In 1633 fire destroyed the houses on the northern part of the bridge. The gap was only partly filled by new houses, with the result that there was a firebreak that prevented the Great Fire of London (1666) spreading to the rest of the bridge and to Southwark. The Great Fire destroyed the bridge's waterwheels, preventing them from pumping water to fight the fire. For nearly 20 years, only sheds replaced the burnt buildings. They were replaced in the 1680s, when almost all the houses on the bridge were rebuilt. The roadway was widened to by setting the houses further back, and was increased in height from one storey to two. The new houses extended further back over the river, which would cause trouble later. In 1695, the bridge had 551 inhabitants.
From 1670, attempts were made to keep traffic in each direction to one side, at first through a keep-right policy and, from 1722, through a keep-left policy. This has been suggested as one possible origin for the practice of traffic in Britain driving on the left. A fire in September 1725 destroyed all the houses south of the stone gate; they were rebuilt. The last houses to be built on the bridge were designed by George Dance the Elder in 1745, but these buildings had begun to subside within a decade. The (29 Geo. 2. c. 40) gave the City Corporation the power to purchase all the properties on the bridge so that they could be demolished and the bridge improved. While this work was underway, a temporary wooden bridge was constructed to the west of London Bridge. It opened in October 1757 but caught fire and collapsed in the following April. The old bridge was reopened until a new wooden construction could be completed a year later. To help improve navigation under the bridge, its two centre arches were replaced by a single wider span, the Great Arch, in 1759. Demolition of the houses was completed in 1761 and the last tenant departed after some 550 years of housing on the bridge. Under the supervision of Dance the Elder, the roadway was widened to and a balustrade was added "in the Gothic taste" together with 14 stone alcoves for pedestrians to shelter in. However, the creation of the Great Arch had weakened the rest of the structure and constant expensive repairs were required in the following decades; this, combined with congestion both on and under the bridge, often leading to fatal accidents, resulted in public pressure for a modern replacement.
New London Bridge (1831–1967)
In 1799, a competition was opened to design a replacement for the medieval bridge. Entrants included Thomas Telford; he proposed a single iron arch span of , with centre clearance beneath it for masted river traffic. His design was accepted as safe and practicable, following expert testimony.
Preliminary surveys and works were begun, but Telford's design required exceptionally wide approaches and the extensive use of multiple, steeply inclined planes, which would have required the purchase and demolition of valuable adjacent properties. A more conventional design of five stone arches, by John Rennie, was chosen instead. It was built west (upstream) of the original site by Jolliffe and Banks of Merstham, Surrey, under the supervision of Rennie's son. Work began in 1824 and the foundation stone was laid, in the southern coffer dam, on 15 June 1825. The old bridge continued in use while the new bridge was being built, and was demolished after the latter opened in 1831. New approach roads had to be built, which cost three times as much as the bridge itself. The total costs, around £2.5 million (£ in ), were shared by the British Government and the Corporation of London. Rennie's bridge was long and wide, constructed from Haytor granite. The official opening took place on 1 August 1831; King William IV and Queen Adelaide attended a banquet in a pavilion erected on the bridge. The northern approach road, King William Street, was renamed after the monarch, and a statue of the king was subsequently installed. In 1896 the bridge was the busiest point in London, and one of its most congested; 8,000 pedestrians and 900 vehicles crossed every hour. To designs by engineer Edward Cruttwell, it was widened by , using granite corbels. Subsequent surveys showed that the bridge was sinking an inch (about 2.5 cm) every eight years, and by 1924 the east side had sunk some three to four inches (about 9 cm) lower than the west side. The bridge would have to be removed and replaced. Sale to Robert McCulloch Common Council of the City of London member Ivan Luckin put forward the idea of selling the bridge, and recalled: "They all thought I was completely crazy when I suggested we should sell London Bridge when it needed replacing." 
Subsequently, in 1968, the Council placed the bridge on the market and began to look for potential buyers. On 18 April 1968, Rennie's bridge was purchased by the Missourian entrepreneur Robert P. McCulloch of McCulloch Oil for US$2,460,000. The claim that McCulloch believed mistakenly that he was buying the more impressive Tower Bridge was denied by Luckin in a newspaper interview. Before the bridge was taken apart, each granite facing block was marked for later reassembly. The blocks were taken to Merrivale Quarry at Princetown in Devon, where were sliced off the inner faces of many, to facilitate their fixing. (Stones left behind were sold in an online auction when the quarry was abandoned and flooded in 2003.) 10,000 tons of granite blocks were shipped via the Panama Canal to California, then trucked from Long Beach to Arizona. They were used to face a new, purpose-built hollow core steel-reinforced concrete structure, ensuring the bridge would support the weight of modern traffic. The bridge was reconstructed by Sundt Construction at Lake Havasu City, Arizona, and was re-dedicated on 10 October 1971 in a ceremony attended by London's Lord Mayor and celebrities. The bridge carries McCulloch Boulevard and spans the Bridgewater Channel, an artificial, navigable waterway that leads from the Uptown area of Lake Havasu City. Modern London Bridge (1973–present) The current London Bridge was designed by architect Lord Holford and engineers Mott, Hay and Anderson. It was constructed by contractors John Mowlem and Co from 1967 to 1972, and opened by Queen Elizabeth II on 16 March 1973. It comprises three spans of prestressed-concrete box girders, a total of long. The cost of £4 million (£ in ), was met entirely by the Bridge House Estates charity. The current bridge was built in the same location as Rennie's bridge, with the previous bridge remaining in use while the first two girders were constructed upstream and downstream. 
Traffic was then transferred onto the two new girders, and the previous bridge demolished to allow the final two central girders to be added. In 1984, the British warship HMS Jupiter collided with London Bridge, causing significant damage to both the ship and the bridge. On Remembrance Day 2004, several bridges in London were furnished with red lighting as part of a night-time flight along the river by wartime aircraft. London Bridge was the one bridge not subsequently stripped of the illuminations, which are regularly switched on at night. The current London Bridge is often shown in films, news and documentaries showing the throng of commuters journeying to work into the City from London Bridge Station (south to north). An example of this is actor Hugh Grant crossing the bridge north to south during the morning rush hour, in the 2002 film About a Boy. On 11 July 2008, as part of the annual Lord Mayor's charity appeal and to mark the 800th anniversary of Old London Bridge's completion in the reign of King John, the Lord Mayor and Freemen of the City drove a flock of sheep across the bridge, supposedly by ancient right. On 3 June 2017, three pedestrians were killed by a van in a terrorist attack. Altogether, eight people died and 48 were injured in the attack. Security barriers were installed on the bridge to help isolate the pedestrian pavement from the road. Transport The nearest London Underground stations are Monument, at the northern end of the bridge, and London Bridge at the southern end. London Bridge station is also served by National Rail. In literature and popular culture The nursery rhyme and folk song "London Bridge Is Falling Down" has been speculatively connected to several of the bridge's historic collapses. Rennie's New London Bridge is a prominent landmark in T. S. Eliot's poem The Waste Land, wherein he compares the shuffling commuters across London Bridge to the hell-bound souls of Dante's Inferno. 
Also in that poem is a reference to the "inexplicable splendour of Ionian white and gold" of the church of St Magnus-the-Martyr, designed by Sir Christopher Wren, which marks the northern approach to the bridge, and the poem also ends with the lines "I sat upon the shore/fishing, with the arid plain behind me./Shall I at least set my lands in order?/London bridge is falling down, falling down, falling down". In Charles Dickens' Sketches by Boz, in the story entitled "Scotland-yard", there is much discussion by coal-heavers on the replacement of London Bridge in 1832, including a portent that the event will dry up the Thames. Gary P. Nunn's song "London Homesick Blues" includes the lyrics, "Even London Bridge has fallen down, and moved to Arizona, now I know why." English composer Eric Coates wrote a march about London Bridge in 1934. London Bridge is named in the World War II song "The King is Still in London" by Roma Campbell-Hunter & Hugh Charles. Fergie released a song titled "London Bridge" in 2006 as the lead single from her first solo album, The Dutchess. The music video for the track features the singer on a boat near London's Tower Bridge, which, despite the song's title, is not London Bridge. The song peaked at number one on Billboard's Hot 100 chart.
https://en.wikipedia.org/wiki/Asphyxia
Asphyxia
Asphyxia or asphyxiation is a condition of deficient supply of oxygen to the body which arises from abnormal breathing. Asphyxia causes generalized hypoxia, which affects all the tissues and organs, some more rapidly than others. There are many circumstances that can induce asphyxia, all of which are characterized by the inability of a person to acquire sufficient oxygen through breathing for an extended period of time. Asphyxia can cause coma or death. In 2015, about 9.8 million cases of unintentional suffocation occurred, resulting in 35,600 deaths. The word asphyxia is from Ancient Greek a- ("without") and sphyxis ("squeeze", i.e. throb of the heart). Causes Situations that can cause asphyxia include but are not limited to: airway obstruction, the constriction or obstruction of airways, such as from asthma, laryngospasm, or simple blockage from the presence of foreign materials; being in environments where oxygen is not readily accessible, such as underwater, in a low oxygen atmosphere, or in a vacuum; environments where sufficiently oxygenated air is present, but cannot be adequately breathed because of air contamination such as excessive smoke. 
Other causes of oxygen deficiency include but are not limited to:
Acute respiratory distress syndrome
Alcohol poisoning
Carbon monoxide inhalation, such as that from a car exhaust and the smoke produced by a lit cigarette: carbon monoxide has a higher affinity than oxygen to the hemoglobin in the blood's red blood corpuscles, bonding with it tenaciously, and, in the process, displacing oxygen and preventing the blood from transporting oxygen around the body
Contact with certain chemicals, including pulmonary agents (such as phosgene) and blood agents (such as hydrogen cyanide)
Choking by obstruction of a foreign body in the airway (for example: when eating)
Cyanide poisoning
Drowning
Drug overdose
Exposure to extreme low pressure or vacuum from spacesuit damage (see space exposure)
Hanging, whether suspension or short drop hanging
Self-induced hypocapnia by hyperventilation, as in shallow water or deep water blackout and the choking game
Inert gas asphyxiation
Congenital central hypoventilation syndrome, or primary alveolar hypoventilation, a disorder of the autonomic nervous system in which a patient must consciously breathe; although it is often said that people with this disease will die if they fall asleep, this is not usually the case
Respiratory diseases
Sleep apnea
A seizure which stops breathing activity
An allergic reaction
Strangling
Breaking the windpipe
Prolonged exposure to chlorine gas
Smothering
Smothering is a mechanical obstruction of the flow of air from the environment into the mouth or nostrils, for instance, by covering the mouth and nose with a hand, pillow, or a plastic bag. Smothering can be either partial or complete, where partial indicates that the person being smothered is able to inhale some air, although less than required. In a normal situation, smothering requires at least partial obstruction of both the nasal cavities and the mouth to lead to asphyxia. 
Smothering with the hands or chest is used in some combat sports to distract the opponent, and create openings for transitions, as the opponent is forced to react to the smothering. In some cases, when performing certain routines, smothering is combined with simultaneous compressive asphyxia. One example is overlay, in which an adult accidentally rolls over onto an infant during co-sleeping, an accident that often goes unnoticed and is mistakenly thought to be sudden infant death syndrome. Other accidents involving a similar mechanism are cave-ins, or when an individual is buried in sand, snow, dirt, or grain. In homicidal cases, the term burking is often ascribed to a killing method that involves simultaneous smothering and compression of the torso. The term "burking" comes from the method William Burke and William Hare used to kill their victims during the West Port murders. They killed the usually intoxicated victims by sitting on their chests and suffocating them by putting a hand over their nose and mouth, while using the other hand to push the victim's jaw up. The corpses had no visible injuries, and were supplied to medical schools for money. Compressive asphyxia Compressive asphyxia (also called chest compression) is mechanically limiting expansion of the lungs by compressing the torso, preventing breathing. "Traumatic asphyxia" or "crush asphyxia" usually refers to compressive asphyxia resulting from being crushed or pinned under a large weight or force, or in a crowd crush. An example of traumatic asphyxia is a person who jacks up a car to work on it from below, and is crushed by the vehicle when the jack fails. Constrictor snakes such as boa constrictors kill through slow compressive asphyxia, tightening their coils every time the prey breathes out rather than squeezing forcefully. In cases of an adult co-sleeping with an infant ("overlay"), the heavy sleeping adult may move on top of the infant, causing compression asphyxia. 
In fatal crowd disasters, compressive asphyxia from being crushed within the crowd causes all or nearly all deaths, rather than blunt trauma from trampling. This is what occurred at the Ibrox disaster in 1971, where 66 Rangers fans died; the 1979 The Who concert disaster where 11 died; the Luzhniki disaster in 1982, when 66 FC Spartak Moscow fans died; the Hillsborough disaster in 1989, where 97 Liverpool fans were crushed to death in an overcrowded terrace, 95 of the 97 from compressive asphyxia, 93 dying directly from it and the rest from related complications; the 2021 Meron crowd crush where 45 died; the Astroworld Festival crowd crush in 2021, where 10 died; and the Seoul Halloween crowd crush in 2022, where at least 159 died during Halloween celebrations. In confined spaces, people are forced to push against each other; evidence from bent steel railings in several fatal crowd accidents has shown horizontal forces over 4500 N (equivalent to a weight of approximately 450 kg or 1000 lbs). In cases where people have stacked up on each other in a human pile, it has been estimated that those at the bottom are subjected to around 380 kg (840 lbs) of compressive weight. "Positional" or "restraint" asphyxia is when a person is restrained and left alone prone, such as in a police vehicle, and is unable to reposition themselves in order to breathe. Death can occur in the vehicle, or later: the person may lose consciousness and then die while in a coma, having sustained anoxic brain damage. The asphyxia can be caused by facial compression, neck compression, or chest compression. This occurs mostly during restraint and handcuffing situations by law enforcement, including psychiatric incidents. The weight of the restraint(s) doing the compression may contribute to what is attributed to positional asphyxia. 
Therefore, passive deaths following custody restraint that are presumed to be the result of positional asphyxia may actually be examples of asphyxia occurring during the restraint process. Chest compression is a technique used in various grappling combat sports, where it is sometimes called wringing, either to tire the opponent or as complementary or distractive moves in combination with pinning holds, or sometimes even as submission holds. Examples of chest compression include the knee-on-stomach position; or techniques such as leg scissors (also referred to as body scissors and in budō referred to as do-jime; 胴絞, "trunk strangle" or "body triangle") where a participant wraps his or her legs around the opponent's midsection and squeezes them together. Pressing is a form of torture or execution using compressive asphyxia. Perinatal asphyxia Perinatal asphyxia is the medical condition resulting from deprivation of oxygen (hypoxia) to a newborn infant long enough to cause apparent harm. It results most commonly from a drop in maternal blood pressure or interference during delivery with blood flow to the infant's brain. This can occur as a result of inadequate circulation or perfusion, impaired respiratory effort, or inadequate ventilation. There has long been a scientific debate over whether newborn infants with asphyxia should be resuscitated with 100% oxygen or normal air. It has been demonstrated that high concentrations of oxygen lead to generation of oxygen free radicals, which have a role in reperfusion injury after asphyxia. Research by Ola Didrik Saugstad and others led to new international guidelines on newborn resuscitation in 2010, recommending the use of normal air instead of 100% oxygen. Mechanical asphyxia Classifications of different forms of asphyxia vary among literature, with differences in defining the concept of mechanical asphyxia being the most obvious. 
In DiMaio and DiMaio's 2001 textbook on forensic pathology, mechanical asphyxia is caused by pressure from outside the body restricting respiration. Similar narrow definitions of mechanical asphyxia have occurred in Azmak's 2006 literature review of asphyxial deaths and Oehmichen and Auer's 2005 book on forensic neuropathology. According to DiMaio and DiMaio, mechanical asphyxia encompasses positional asphyxia, traumatic asphyxia, and "human pile" deaths. In Shkrum and Ramsay's 2007 textbook on forensic pathology, mechanical asphyxia occurs when any mechanical means cause interference with the exchange of oxygen and carbon dioxide in the body. Similar broad definitions of mechanical asphyxia have occurred in Saukko and Knight's 2004 book on asphyxia, and Dolinak and Matshes' 2005 book on forensic pathology. According to Shkrum and Ramsay, mechanical asphyxia encompasses smothering, choking, positional asphyxia, traumatic asphyxia, wedging, strangulation and drowning. Sauvageau and Boghossian proposed in 2010 that mechanical asphyxia should be officially defined as caused by "restriction of respiratory movements, either by the position of the body or by external chest compression", thus encompassing only positional asphyxia and traumatic asphyxia. First aid If there are symptoms of mechanical asphyxia, it is necessary to call the emergency medical services. In some countries, such as the US, there may also be self-organized groups of voluntary first responders who have been trained in first aid. In the case of mechanical asphyxia, first aid can also be provided by a bystander. First aid for choking on food In case of choking on a foreign body: Stand behind the affected person and wrap your arms around their waist. Make a fist with one hand and place it just above their navel, then grasp the fist with your other hand and push inwards and upwards under the ribs with a sudden movement. If these actions are not effective, repeat them until the affected person's airway is freed of the foreign body.
https://en.wikipedia.org/wiki/Jasmine
Jasmine
Jasmine (botanical name: Jasminum; ) is a genus of shrubs and vines in the olive family, Oleaceae. It contains around 200 species native to tropical and warm temperate regions of Eurasia, Africa, and Oceania. Jasmines are widely cultivated for the characteristic fragrance of their flowers. The village of Shubra Beloula in Egypt grows most of the jasmine used by the global perfume industry. Description Jasmine can be either deciduous or evergreen, and can be erect, spreading, or climbing shrubs and vines. The leaves are borne in opposite or alternate arrangement and can be of simple, trifoliate, or pinnate formation. The flowers are typically around in diameter. They are white or yellow, although in rare instances they can be slightly reddish. The flowers are borne in cymose clusters with a minimum of three flowers, though they can also be solitary on the ends of branchlets. Each flower has about four to nine petals, two locules, and one to four ovules. They have two stamens with very short filaments. The bracts are linear or ovate. The calyx is bell-shaped. They are usually very fragrant. The basic chromosome number of the genus is 13, and most species are diploid (2n=26). However, natural polyploidy exists, particularly in Jasminum sambac (triploid 3n=39), Jasminum flexile (tetraploid 4n=52), Jasminum mesnyi (triploid 3n=39), and Jasminum angustifolium (tetraploid 4n=52). Distribution and habitat Jasmines are native to tropical and subtropical regions of Eurasia, Africa, and Australasia (within Oceania), although only one of the 200 species is native to Europe. Their center of diversity is in South Asia and Southeast Asia. Several jasmine species have become naturalized in Mediterranean Europe. For example, the so-called Spanish jasmine (Jasminum grandiflorum) was originally from West Asia, the Indian subcontinent, Northeast Africa, and East Africa, and is now naturalized in the Iberian Peninsula. 
Jasminum fluminense (which is sometimes known by the inaccurate name "Brazilian Jasmine") and Jasminum dichotomum (Gold Coast Jasmine) are invasive species in Hawaii and Florida. Jasminum polyanthum, also known as pink jasmine, is an invasive weed in Australia. Etymology The name comes from Old French jessemin, which is derived from a Middle Persian word that also passed into Arabic. The word entered Middle French around 1570 and was first used in English in 16th-century England. The Persian name is also the origin of the genus name, Jasminum. Taxonomy Species belonging to the genus are classified under the tribe Jasmineae of the olive family (Oleaceae). Jasminum is divided into five sections—Alternifolia, Jasminum, Primulina, Trifoliolata, and Unifoliolata. Species Species include:
J. abyssinicum Hochst. ex DC. – forest jasmine
J. adenophyllum Wall. – bluegrape jasmine, pinwheel jasmine, princess jasmine
J. andamanicum N.P.Balakr. & N.G.Nair
J. angulare Vahl
J. angustifolium (L.) Willd.
J. auriculatum Vahl – Indian jasmine, needle-flower jasmine
J. azoricum L.
J. beesianum Forrest & Diels – red jasmine
J. dichotomum Vahl – Gold Coast jasmine
J. didymum G.Forst.
J. dispermum Wall.
J. elegans Knobl.
J. elongatum (P.J.Bergius) Willd.
J. floridum Bunge
J. fluminense Vell.
J. fruticans L.
J. grandiflorum L. – Catalan jasmine, jasmin odorant, royal jasmine, Spanish jasmine
J. grandiflorum L.Vell.
J. humile L. – Italian jasmine, Italian yellow jasmine
J. lanceolarium Roxb.
J. laurifolium Roxb. ex Hornem. – angel-wing jasmine
J. malabaricum Wight
J. mesnyi Hance – Japanese jasmine, primrose jasmine, yellow jasmine
J. multiflorum (Burm.f.) Andrews – Indian jasmine, star jasmine, winter jasmine
J. multipartitum Hochst. – starry wild jasmine
J. nervosum Lour.
J. nobile C.B.Clarke
J. nudiflorum Lindl. – winter jasmine
J. odoratissimum L. – yellow jasmine
J. officinale L. – common jasmine, jasmine, jessamine, poet's jasmine, summer jasmine, white jasmine
J. parkeri Dunn – dwarf jasmine
J. polyanthum Franch.
J. sambac (L.) Aiton – Arabian jasmine, Sambac jasmine
J. simplicifolium G.Forst.
J. sinense Hemsl.
J. subhumile W.W.Sm.
J. tortuosum Willd.
J. urophyllum Hemsl.
J. volubile Jacq.
Jasmonates Jasmine lends its name to jasmonate plant hormones, as methyl jasmonate isolated from the oil of Jasminum grandiflorum led to the discovery of the molecular structure of jasmonates. Jasmonates occur ubiquitously across the plant kingdom, having key roles in responses to environmental cues, such as heat or cold stress, and participate in the signal transduction pathways of many plants. Cultural importance Jasmine is cultivated commercially for domestic and industrial uses, such as the perfume industry. It is used in rituals like marriages, religious ceremonies, and festivals. Jasmine flower vendors sell garlands of jasmine, or, in the case of the thicker motiyaa (Hindi) or mograa (Marathi) varieties, bunches of jasmine. They may be found around entrances to temples, on major thoroughfares, and in major business areas. A change in presidency in Tunisia in 1987 and the Tunisian Revolution of 2011 are both called "Jasmine revolutions" in reference to the flower. "Jasmine" is a common female given name. Symbolism Several countries and states consider jasmine as a national symbol.
Syria: The Syrian city Damascus is called the City of Jasmine.
Hawaii: Jasminum sambac ("pikake") is a common flower used in leis and is the subject of many Hawaiian songs.
Indonesia: Jasminum sambac is the national flower, adopted in 1990. It goes by the name "melati putih" and is used in wedding ceremonies for ethnic Indonesians, especially on the island of Java.
Pakistan: Jasminum officinale, known as "chambeli" or "yasmin", is the national flower.
Philippines: Jasminum sambac is the national flower. Adopted in 1935, it is known as "sampaguita" in the islands. 
It is usually strung in garlands which are then used to adorn religious images.
Thailand: Jasmine flowers are used as a symbol of motherhood.
Tunisia: The national flower of Tunisia is jasmine. It was chosen as a symbol for the Tunisian Revolution.
Other plants called "jasmine"
Brazilian jasmine – Mandevilla sanderi
Cape jasmine – Gardenia
Carolina jasmine – Gelsemium sempervirens
Crape jasmine – Tabernaemontana divaricata
Chilean jasmine – Mandevilla laxa
Jasmine rice – a type of long-grain rice
Madagascar jasmine – Stephanotis floribunda
New Zealand jasmine – Parsonsia capsularis
Night-blooming jasmine – Cestrum nocturnum
Night-flowering jasmine – Nyctanthes arbor-tristis
Orange jasmine – Murraya paniculata
Red jasmine – Plumeria rubra
Star jasmine, Confederate jasmine – Trachelospermum jasminoides
Tree jasmine
https://en.wikipedia.org/wiki/Orbital%20elements
Orbital elements
Orbital elements are the parameters required to uniquely identify a specific orbit. In celestial mechanics these elements are considered in two-body systems using a Kepler orbit. There are many different ways to mathematically describe the same orbit, but certain schemes, each consisting of a set of six parameters, are commonly used in astronomy and orbital mechanics. A real orbit and its elements change over time due to gravitational perturbations by other objects and the effects of general relativity. A Kepler orbit is an idealized, mathematical approximation of the orbit at a particular time. Keplerian elements The traditional orbital elements are the six Keplerian elements, named after Johannes Kepler and his laws of planetary motion. When viewed from an inertial frame, two orbiting bodies trace out distinct trajectories. Each of these trajectories has its focus at the common center of mass. When viewed from a non-inertial frame centered on one of the bodies, only the trajectory of the opposite body is apparent; Keplerian elements describe these non-inertial trajectories. An orbit has two sets of Keplerian elements depending on which body is used as the point of reference. The reference body (usually the most massive) is called the primary, the other body is called the secondary. The primary does not necessarily possess more mass than the secondary, and even when the bodies are of equal mass, the orbital elements depend on the choice of the primary. Two elements define the shape and size of the ellipse: Eccentricity (e) — shape of the ellipse, describing how much it is elongated compared to a circle (not marked in diagram). Semi-major axis (a) — half the distance between the apoapsis and periapsis. The portion of the semi-major axis extending from the primary at one focus to the periapsis is shown as a purple line in the diagram; the rest (from the primary/focus to the center of the orbit ellipse) is below the reference plane and not shown. 
Two elements define the orientation of the orbital plane in which the ellipse is embedded: Inclination (i) — vertical tilt of the ellipse with respect to the reference plane, measured at the ascending node (where the orbit passes upward through the reference plane, the green angle in the diagram). The tilt angle is measured perpendicular to the line of intersection between the orbital plane and the reference plane. Any three distinct points on an ellipse will define the ellipse orbital plane. The plane and the ellipse are both two-dimensional objects defined in three-dimensional space. Longitude of the ascending node (Ω) — horizontally orients the ascending node of the ellipse (where the orbit passes from south to north through the reference plane, symbolized by ☊) with respect to the reference frame's vernal point (symbolized by ♈︎). This is measured in the reference plane, and is shown as the green angle in the diagram. The remaining two elements are as follows: Argument of periapsis (ω) defines the orientation of the ellipse in the orbital plane, as an angle measured from the ascending node to the periapsis (the closest point the satellite body comes to the primary body around which it orbits), the purple angle in the diagram. True anomaly (ν, θ, or f) at epoch (t0) defines the position of the orbiting body along the ellipse at a specific time (the "epoch"), expressed as an angle from the periapsis. The mean anomaly M is a mathematically convenient fictitious "angle" which does not correspond to a real geometric angle, but rather varies linearly with time, one whole orbital period being represented by an "angle" of 2π radians. It can be converted into the true anomaly ν, which does represent the real geometric angle in the plane of the ellipse, between periapsis (closest approach to the central body) and the position of the orbiting body at any given time. Thus, the true anomaly is shown as the red angle in the diagram, and the mean anomaly is not shown. 
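The conversion between the mean and true anomaly can be made concrete with a short numerical sketch. The code below is illustrative only and not from the article: it solves Kepler's equation M = E − e·sin E for the eccentric anomaly E by Newton's method and then converts E to the true anomaly, assuming an elliptical orbit (e < 1); the function name and tolerance are our own choices.

```python
import math

def true_anomaly_from_mean(M, e, tol=1e-12):
    """Convert mean anomaly M (radians) to true anomaly for eccentricity e < 1.

    Solves Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    by Newton's method, then converts E to the true anomaly nu.
    """
    E = M if e < 0.8 else math.pi  # common starting guess
    for _ in range(50):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    # half-angle formula relating eccentric and true anomaly
    return 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                            math.sqrt(1 - e) * math.cos(E / 2))
```

For a circular orbit (e = 0) the mean, eccentric, and true anomalies all coincide, which is a quick sanity check on the conversion.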
The angles of inclination, longitude of the ascending node, and argument of periapsis can also be described as the Euler angles defining the orientation of the orbit relative to the reference coordinate system. Note that non-elliptic trajectories also exist, but are not closed, and are thus not orbits. If the eccentricity is greater than one, the trajectory is a hyperbola. If the eccentricity is equal to one, the trajectory is a parabola. Regardless of eccentricity, the orbit degenerates to a radial trajectory if the angular momentum equals zero. Required parameters Given an inertial frame of reference and an arbitrary epoch (a specified point in time), exactly six parameters are necessary to unambiguously define an arbitrary and unperturbed orbit. This is because the problem contains six degrees of freedom. These correspond to the three spatial dimensions which define position (x, y, z in a Cartesian coordinate system), plus the velocity in each of these dimensions. These can be described as orbital state vectors, but this is often an inconvenient way to represent an orbit, which is why Keplerian elements are commonly used instead. Sometimes the epoch is considered a "seventh" orbital parameter, rather than part of the reference frame. If the epoch is defined to be at the moment when one of the elements is zero, the number of unspecified elements is reduced to five. (The sixth parameter is still necessary to define the orbit; it is merely numerically set to zero by convention or "moved" into the definition of the epoch with respect to real-world clock time.) Alternative parametrizations Keplerian elements can be obtained from orbital state vectors (a three-dimensional vector for the position and another for the velocity) by manual transformations or with computer software. Other orbital parameters can be computed from the Keplerian elements such as the period, apoapsis, and periapsis. (When orbiting the Earth, the last two terms are known as the apogee and perigee.) 
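As a rough illustration of how Keplerian elements can be obtained from orbital state vectors, the sketch below recovers the semi-major axis (via the vis-viva energy), the eccentricity (via the eccentricity vector), and the inclination (from the tilt of the angular-momentum vector) from a position/velocity pair. The function name and the Earth value of the gravitational parameter are assumptions made for this example, not from the article.

```python
import math

MU = 3.986004418e14  # Earth's standard gravitational parameter, m^3/s^2 (assumed)

def elements_from_state(r, v, mu=MU):
    """Recover semi-major axis, eccentricity, and inclination from a
    position r and velocity v (3-tuples, in metres and metres/second)."""
    def norm(u):
        return math.sqrt(sum(c * c for c in u))
    def cross(a, b):
        return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

    rmag, vmag = norm(r), norm(v)
    h = cross(r, v)                       # specific angular momentum
    # vis-viva: specific orbital energy gives the semi-major axis
    energy = vmag**2 / 2 - mu / rmag
    a = -mu / (2 * energy)
    # eccentricity vector: e = ((v^2/mu - 1/r) r - (r.v)/mu v)
    rv = sum(ri * vi for ri, vi in zip(r, v))
    e_vec = tuple((vmag**2 / mu - 1 / rmag) * ri - rv / mu * vi
                  for ri, vi in zip(r, v))
    e = norm(e_vec)
    i = math.acos(h[2] / norm(h))         # inclination from angular-momentum tilt
    return a, e, i
```

A circular equatorial orbit is a convenient test case: the recovered semi-major axis equals the orbital radius, and both eccentricity and inclination come out (numerically) zero.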
It is common to specify the period P instead of the semi-major axis a in Keplerian element sets, as each can be computed from the other provided the standard gravitational parameter, μ, is given for the central body. Instead of the mean anomaly at epoch, the mean anomaly M, mean longitude, true anomaly ν, or (rarely) the eccentric anomaly might be used. Using, for example, the "mean anomaly" instead of "mean anomaly at epoch" means that time t must be specified as a seventh orbital element. Sometimes it is assumed that mean anomaly is zero at the epoch (by choosing the appropriate definition of the epoch), leaving only the five other orbital elements to be specified. Different sets of elements are used for various astronomical bodies. The eccentricity, e, and either the semi-major axis, a, or the distance of periapsis, q, are used to specify the shape and size of an orbit. The longitude of the ascending node, Ω, the inclination, i, and the argument of periapsis, ω, or the longitude of periapsis, ϖ, specify the orientation of the orbit in its plane. Either the longitude at epoch, L0, the mean anomaly at epoch, M0, or the time of perihelion passage, T0, are used to specify a known point in the orbit. The choices made depend on whether the vernal equinox or the node are used as the primary reference. The semi-major axis is known if the mean motion and the gravitational mass are known. It is also quite common to see either the mean anomaly (M) or the mean longitude (L) expressed directly, without either M0 or L0 as intermediary steps, as a polynomial function with respect to time. This method of expression will consolidate the mean motion (n) into the polynomial as one of the coefficients. The appearance will be that L or M are expressed in a more complicated manner, but we will appear to need one fewer orbital element. Mean motion can also be obscured behind citations of the orbital period P. 
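The interchangeability of the period and the semi-major axis follows from Kepler's third law, T = 2π√(a³/μ). A minimal sketch of the two conversions (the function names and the Earth value of μ are our own assumptions for illustration):

```python
import math

MU_EARTH = 3.986004418e14  # standard gravitational parameter of Earth, m^3/s^2 (assumed)

def period_from_sma(a, mu=MU_EARTH):
    """Orbital period in seconds from semi-major axis a in metres:
    T = 2*pi*sqrt(a^3/mu) (Kepler's third law)."""
    return 2 * math.pi * math.sqrt(a**3 / mu)

def sma_from_period(T, mu=MU_EARTH):
    """Semi-major axis in metres from period T in seconds,
    inverting Kepler's third law: a = (mu * (T/2pi)^2)^(1/3)."""
    return (mu * (T / (2 * math.pi)) ** 2) ** (1.0 / 3.0)
```

The two functions are exact inverses of each other for a given μ, which is why an element set may list either quantity.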
Euler angle transformations The angles Ω, i, ω are the Euler angles (corresponding to α, β, γ in the notation used in that article) characterizing the orientation of the coordinate system x̂, ŷ, ẑ relative to the inertial coordinate frame Î, Ĵ, K̂, where: Î, Ĵ is in the equatorial plane of the central body. Î is in the direction of the vernal equinox. Ĵ is perpendicular to Î and with Î defines the reference plane. K̂ is perpendicular to the reference plane. Orbital elements of bodies (planets, comets, asteroids, ...) in the Solar System usually use the ecliptic as that plane. x̂, ŷ are in the orbital plane, with x̂ in the direction to the pericenter (periapsis). ẑ is perpendicular to the plane of the orbit. ŷ is mutually perpendicular to x̂ and ẑ. Then, the transformation from the Î, Ĵ, K̂ coordinate frame to the x̂, ŷ, ẑ frame with the Euler angles Ω, i, ω is the 3-1-3 rotation (x̂, ŷ, ẑ)ᵀ = R₃(ω) R₁(i) R₃(Ω) (Î, Ĵ, K̂)ᵀ, where R₁ and R₃ denote rotations about the first and third coordinate axes. The inverse transformation, which computes the 3 coordinates in the I-J-K system given the 3 (or 2) coordinates in the x-y-z system, is represented by the inverse matrix. According to the rules of matrix algebra, the inverse matrix of the product of the 3 rotation matrices is obtained by inverting the order of the three matrices and switching the signs of the three Euler angles: (Î, Ĵ, K̂)ᵀ = R₃(−Ω) R₁(−i) R₃(−ω) (x̂, ŷ, ẑ)ᵀ. The transformation from x̂, ŷ, ẑ to the Euler angles Ω, i, ω is: Ω = arg(−Z₂, Z₁), i = arg(Z₃, √(Z₁² + Z₂²)), ω = arg(Y₃, X₃), where arg(x, y) signifies the polar argument that can be computed with the standard function atan2(y, x) available in many programming languages, and X₁, X₂, X₃ are the components of x̂ (and likewise Y, Z for ŷ, ẑ) in the Î, Ĵ, K̂ frame. Orbit prediction Under ideal conditions of a perfectly spherical central body, zero perturbations and negligible relativistic effects, all orbital elements except the mean anomaly are constants. The mean anomaly changes linearly with time, scaled by the mean motion n = √(μ/a³), where μ is the standard gravitational parameter. Hence if at any instant t₀ the orbital parameters are (e₀, a₀, i₀, Ω₀, ω₀, M₀), then the elements at time t = t₀ + δt are given by (e₀, a₀, i₀, Ω₀, ω₀, M₀ + n δt). Perturbations and elemental variance Unperturbed, two-body, Newtonian orbits are always conic sections, so the Keplerian elements define an ellipse, parabola, or hyperbola.
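The 3-1-3 Euler-angle transformation and its inverse angle extraction can be sketched numerically as follows (pure Python, no external libraries; each matrix row holds the components of one of x̂, ŷ, ẑ in the inertial frame, and the function names are illustrative):

```python
import math

def r1(t):
    # Rotation about the first (x) axis
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, s], [0, -s, c]]

def r3(t):
    # Rotation about the third (z) axis
    c, s = math.cos(t), math.sin(t)
    return [[c, s, 0], [-s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def elements_to_rotation(raan, inc, argp):
    """R3(w) R1(i) R3(Om): inertial (I, J, K) components -> perifocal (x, y, z)."""
    return matmul(r3(argp), matmul(r1(inc), r3(raan)))

def rotation_to_elements(m):
    """Recover (Om, i, w) from the matrix using polar arguments (atan2)."""
    raan = math.atan2(m[2][0], -m[2][1])                     # arg(-Z2, Z1)
    inc = math.atan2(math.hypot(m[2][0], m[2][1]), m[2][2])  # arg(Z3, sqrt(Z1^2 + Z2^2))
    argp = math.atan2(m[0][2], m[1][2])                      # arg(Y3, X3)
    return raan, inc, argp

# Round trip: extracting angles from the built matrix recovers the inputs.
m = elements_to_rotation(1.0, 0.5, 2.0)
assert all(abs(x - y) < 1e-9 for x, y in zip(rotation_to_elements(m), (1.0, 0.5, 2.0)))
```

The round-trip check at the end confirms the polar-argument formulas invert the rotation for angles in their principal ranges.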
Real orbits have perturbations, so a given set of Keplerian elements accurately describes an orbit only at the epoch. Evolution of the orbital elements takes place due to the gravitational pull of bodies other than the primary, the nonsphericity of the primary, atmospheric drag, relativistic effects, radiation pressure, electromagnetic forces, and so on. Keplerian elements can often be used to produce useful predictions at times near the epoch. Alternatively, real trajectories can be modeled as a sequence of Keplerian orbits that osculate ("kiss" or touch) the real trajectory. They can also be described by the so-called planetary equations, differential equations which come in different forms developed by Lagrange, Gauss, Delaunay, Poincaré, or Hill. Two-line elements Keplerian element parameters can be encoded as text in a number of formats. The most common of them is the NASA / NORAD "two-line elements" (TLE) format, originally designed for use with 80-column punched cards but still in use because it is the most common format, and 80-character ASCII records can be handled efficiently by modern databases. Depending on the application and object orbit, data derived from TLEs older than 30 days can become unreliable. Orbital positions can be calculated from TLEs through simplified perturbation models (SGP4 / SDP4 / SGP8 / SDP8). Example of a two-line element: 1 27651U 03004A 07083.49636287 .00000119 00000-0 30706-4 0 2692 2 27651 039.9951 132.2059 0025931 073.4582 286.9047 14.81909376225249 Delaunay variables The Delaunay orbital elements were introduced by Charles-Eugène Delaunay during his study of the motion of the Moon. Commonly called Delaunay variables, they are a set of canonical variables, which are action-angle coordinates. The angles are simple sums of some of the Keplerian angles: the mean longitude: λ = M + ω + Ω, the longitude of periapsis: ϖ = ω + Ω, and the longitude of the ascending node: Ω, along with their respective conjugate momenta, L, G, and H.
The momenta L, G, and H are the action variables and are more elaborate combinations of the Keplerian elements a, e, and i. Delaunay variables are used to simplify perturbative calculations in celestial mechanics, for example when investigating the Kozai–Lidov oscillations in hierarchical triple systems. The advantage of the Delaunay variables is that they remain well defined and non-singular (except for h, which can be tolerated) when e and/or i are very small: when the test particle's orbit is very nearly circular (e ≈ 0), or very nearly "flat" (i ≈ 0).
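The classical combinations for the momenta are L = √(μa), G = L√(1 − e²), and H = G cos i; these standard Delaunay definitions are supplied here as an assumption, since this passage does not spell them out. A short sketch:

```python
import math

def delaunay_momenta(a, e, i, mu):
    """Classical Delaunay action variables from Keplerian a, e, i."""
    L = math.sqrt(mu * a)          # conjugate to the mean anomaly
    G = L * math.sqrt(1 - e**2)    # magnitude of the orbital angular momentum
    H = G * math.cos(i)            # angular momentum component along the reference pole
    return L, G, H

# For a nearly circular, nearly "flat" orbit the three momenta nearly coincide,
# which is why the conjugate angles stay well behaved in exactly those limits.
L, G, H = delaunay_momenta(a=1.0, e=1e-6, i=1e-6, mu=1.0)
assert abs(G - L) < 1e-9 and abs(H - G) < 1e-9
```

The near-coincidence of L, G, H in the circular/planar limit illustrates why these variables avoid the small-e and small-i singularities of the raw Keplerian angles.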
Physical sciences
Celestial mechanics
Astronomy
99156
https://en.wikipedia.org/wiki/Luna%209
Luna 9
Luna 9 (Луна-9), internal designation Ye-6 No.13, was an uncrewed space mission of the Soviet Union's Luna programme. On 3 February 1966, the Luna 9 spacecraft became the first spacecraft to achieve a soft landing on the Moon and return imagery from its surface. Spacecraft The spacecraft, carrying the lander capsule on top, weighed 1,538 kg in total and was 2.7 meters tall. It carried out the main descent burn and, shortly before its controlled impact, ejected the lander capsule. The lander had a mass of and consisted of a spheroid Automatic Lunar Station (ALS) capsule measuring . It used a landing bag to survive the impact speed of over . It was a hermetically sealed container with radio equipment, a program timing device, heat control systems, scientific apparatus, power sources, and a television system. The spacecraft was developed in the design bureau then known as OKB-1, under Chief Designer Sergei Korolev (who had died before the launch). The first 11 Luna missions were unsuccessful for a variety of reasons. The project was then transferred to the Lavochkin design bureau, since OKB-1 was busy with a human expedition to the Moon. Luna 9 was the twelfth attempt at a soft landing by the Soviet Union; it was also the first successful deep space probe built by the Lavochkin design bureau, which would ultimately design and build almost all Soviet (later Russian) lunar and interplanetary spacecraft. Launch and translunar coast Luna 9 was launched by a Molniya-M rocket, serial number 103-32, flying from Site 31/6 at the Baikonur Cosmodrome in the Kazakh Soviet Socialist Republic. Liftoff took place at 11:41:37 GMT on 31 January 1966. The first three stages of the four-stage carrier rocket injected the payload and fourth stage into low Earth orbit, at an altitude of and an inclination of 51.8°. The fourth stage, a Blok-L, then fired to raise the apogee of the orbit to approximately , deploying Luna 9 into a highly elliptical geocentric orbit.
For thermal control, the spacecraft then spun itself up to 0.67 rpm using nitrogen jets. On 1 February at 19:29 GMT, a mid-course correction took place, involving a 48-second burn and resulting in a delta-v of . Descent and landing At an altitude of from the Moon, the spacecraft was oriented for the firing of its retrorockets and its spin was stopped in preparation for landing. From this moment the orientation of the spacecraft was maintained by measurements of the directions to the Sun and the Earth using an optomechanical system. At above the lunar surface, the radar altimeter triggered the jettison of the side modules, the inflation of the airbags and the firing of the retrorockets. At from the surface, the main retrorocket was turned off by the acceleration integrator once the planned velocity change of the braking manoeuvre had been reached. The four outrigger engines were then used to slow the craft. About above the lunar surface, a contact sensor touched the ground, triggering engine shutdown, ejection of the landing capsule, and inflation of its landing airbag. The capsule landed at . The capsule bounced several times before coming to rest in Oceanus Procellarum, west of the Reiner and Marius craters, at approximately 7.08 N, 64.37 W (other sources indicate ) on 3 February 1966 at 18:45:30 GMT. Surface operations Approximately 250 seconds after landing in the Oceanus Procellarum, four petals that covered the top half of the spacecraft opened outward for increased stability. Seven hours later (allowing the Sun to climb to 7° elevation), the probe began sending the first of nine images (including five panoramas) of the surface of the Moon. Seven radio sessions, totalling 8 hours and 5 minutes, were transmitted, as well as a series of three TV pictures. When assembled, the photographs gave a panoramic view of the immediate lunar surface, comprising views of nearby rocks and of the horizon, away.
The pictures from Luna 9 were not released immediately by the Soviet authorities, but scientists at Jodrell Bank Observatory in England, which was monitoring the craft, noticed that the signal format used was identical to the internationally agreed Radiofax system used by newspapers for transmitting pictures. The Daily Express rushed a suitable receiver to the Observatory and the pictures from Luna 9 were decoded and published worldwide. The BBC speculated that the spacecraft's designers deliberately fitted the probe with equipment conforming to the standard, to enable reception of the pictures by Jodrell Bank Observatory. The radiation detector, the only dedicated scientific instrument on board, measured a dosage of 30 millirads (0.3 milligrays) per day. The mission also determined that a spacecraft would not sink into the lunar dust and that the ground could support a lander. The last contact with the spacecraft was at 22:55 GMT on 6 February 1966. Models and displays Detailed Luna 9 models are on display at the Memorial Museum of Cosmonautics, Tsiolkovsky State Museum of the History of Cosmonautics, Museum of Cosmonautics and Rocket Technology, Museum of Air and Space Paris and other locations. Stamps The successful Luna 9 landing was commemorated on stamps.
Technology
Unmanned spacecraft
null
99211
https://en.wikipedia.org/wiki/Solar%20calendar
Solar calendar
A solar calendar is a calendar whose dates indicate the season or, almost equivalently, the apparent position of the Sun relative to the stars. The Gregorian calendar, widely accepted as a standard in the world, is an example of a solar calendar. The main other types of calendar are the lunar calendar and the lunisolar calendar, whose months correspond to cycles of Moon phases. The months of the Gregorian calendar do not correspond to cycles of the Moon phase. The Egyptians appear to have been the first to develop a solar calendar, using as a fixed point the annual sunrise reappearance of the Dog Star—Sirius, or Sothis—in the eastern sky, which coincided with the annual flooding of the Nile River. They constructed a calendar of 365 days, consisting of 12 months of 30 days each, with 5 days added at the year's end. The Egyptians' failure to account for the extra fraction of a day, however, caused their calendar to drift gradually into error. Examples The oldest solar calendars include the Julian calendar and the Coptic calendar. They both have a year of 365 days, which is extended to 366 once every four years, without exception, so have a mean year of 365.25 days. As solar calendars became more accurate, they evolved into two types. Tropical solar calendars If the position of the Earth in its orbit around the Sun is reckoned with respect to the equinox, the point at which the orbit crosses the celestial equator, then its dates accurately indicate the seasons, that is, they are synchronized with the declination of the Sun. Such a calendar is called a tropical solar calendar. The duration of the mean calendar year of such a calendar approximates some form of the tropical year, usually either the mean tropical year or the vernal equinox year.
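The drift described above follows directly from the leap-year rules. A small sketch comparing the mean year lengths implied by the Egyptian (365-day), Julian, and Gregorian rules against the tropical year (the value 365.2422 days is an assumed approximation):

```python
def gregorian_is_leap(year):
    """Gregorian rule: every 4th year, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Mean year lengths implied by the rules
julian_mean = 365 + 1 / 4        # one leap day every 4 years: 365.25 days
gregorian_mean = 365 + 97 / 400  # 97 leap days per 400 years: 365.2425 days
tropical_year = 365.2422         # approximate mean tropical year

# Drift of each calendar against the seasons, in days per millennium
egyptian_drift = (tropical_year - 365) * 1000        # roughly 242 days
julian_drift = (julian_mean - tropical_year) * 1000  # roughly 7.8 days
gregorian_drift = (gregorian_mean - tropical_year) * 1000  # well under a day
```

The Egyptian calendar's quarter-day shortfall thus cycles it through all the seasons in about 1,500 years, while the Gregorian rule keeps the drift to less than half a day per millennium.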
The following are tropical solar calendars: Ancient Armenian calendar Bengali calendar (National and official calendar in Bangladesh) Gregorian calendar Iranian calendar (Jalāli calendar) Tabarian calendar Indian national calendar (Saka calendar) French Republican calendar Every one of these calendars has a year of 365 days, which is occasionally extended by adding an extra day to form a leap year, a method called "intercalation", the inserted day being "intercalary". The Baháʼí calendar, another example of a solar calendar, always begins the year on the vernal equinox and sets its intercalary days so that the following year also begins on the vernal equinox. The moment of the vernal equinox in the northern hemisphere is determined using the location of Tehran "by means of astronomical computations from reliable sources". Sidereal solar calendars If the position of the Earth (see above) is reckoned with respect to the fixed stars, then the dates indicate the zodiacal constellation near which the Sun can be found. A calendar of this type is called a sidereal solar calendar. The mean calendar year of such a calendar approximates the sidereal year, the period between two successive returns of the Sun to the same position relative to the fixed stars, that is, one complete orbit of the Earth around the Sun as measured against the stars. Indian calendars like the Hindu calendar, Tamil calendar, Bengali calendar (revised) and Malayalam calendar are sidereal solar calendars. The Thai solar calendar, when based on the Hindu solar calendar, was also a sidereal calendar. They are calculated on the basis of the apparent motion of the Sun through the twelve zodiacal signs rather than the tropical movement of the Earth. Non-solar calendars The Islamic calendar is a purely lunar calendar and has a year whose start drifts through the seasons, and so is not a solar calendar.
The Maya Tzolkin calendar, which follows a 260-day cycle, has no year, therefore it is not a solar calendar. Also, any calendar synchronized only to the synodic period of Venus would not be solar. Lunisolar calendars Lunisolar calendars may be regarded as solar calendars, although their dates additionally indicate the moon phase. Typical lunisolar calendars have years marked with a whole number of lunar months, so they can not indicate the position of Earth relative to the Sun with the same accuracy as a purely solar calendar. List of solar calendars The following is a list of current, historical, and proposed solar calendars: Assamese calendar Assyrian calendar Astronomical year numbering Badí‘ calendar Basotho calendar Bengali calendar Berber calendar Bulgar calendar Byzantine calendar Caesar's Calendar Coptic calendar Discordian calendar EartHeaven calendar Era Fascista Ethiopian calendar Florentine calendar French Republican Calendar Gregorian calendar Hanke-Henry Permanent Calendar Holocene calendar Indian national calendar International Fixed Calendar Invariable Calendar Jalali calendar Javanese calendar Juche era calendar Julian calendar Malayalam calendar Minguo calendar Nanakshahi calendar Odia calendar Old Icelandic calendar Original Julian calendar Pancronometer Pataphysical calendar Pax Calendar Pentecontad calendar Pisan calendar Positivist calendar Revised Julian calendar Roman calendar Runic calendar Solar Hijri calendar Soviet calendar Swedish calendar Symmetry454 Tamil calendar Thai solar calendar Tulu calendar World Calendar World Season Calendar
Technology
Calendars
null
99293
https://en.wikipedia.org/wiki/Shape%20of%20the%20universe
Shape of the universe
In physical cosmology, the shape of the universe refers to both its local and global geometry. Local geometry is defined primarily by its curvature, while the global geometry is characterised by its topology (which itself is constrained by curvature). General relativity explains how spatial curvature (local geometry) is constrained by gravity. The global topology of the universe cannot be deduced from measurements of curvature inferred from observations within the family of homogeneous general relativistic models alone, due to the existence of locally indistinguishable spaces with varying global topological characteristics. For example, a multiply connected space like a 3-torus has everywhere zero curvature but is finite in extent, whereas a flat simply connected space is infinite in extent (such as Euclidean space). Current observational evidence (WMAP, BOOMERanG, and Planck, for example) implies that the observable universe is spatially flat to within a 0.4% margin of error of the curvature density parameter, with an unknown global topology. It is currently unknown whether the universe is simply connected like Euclidean space or multiply connected like a torus. To date, no compelling evidence has been found suggesting the topology of the universe is not simply connected, though it has not been ruled out by astronomical observations. Shape of the observable universe The universe's structure can be examined from two angles: Local geometry: This relates to the curvature of the universe, primarily concerning what we can observe. Global geometry: This pertains to the universe's overall shape and structure. The observable universe (of a given current observer) is a roughly spherical region extending about 46 billion light-years in all directions (from that observer, the observer being the current Earth, unless specified otherwise). It appears older and more redshifted the deeper we look into space.
In theory, we could look all the way back to the Big Bang, but in practice, we can only see up to the cosmic microwave background (CMB) (roughly years after the Big Bang), as anything beyond that is opaque. Studies show that the observable universe is isotropic and homogeneous on the largest scales. If the observable universe encompasses the entire universe, we might determine its structure through observation. However, if the observable universe is smaller, we can only grasp a portion of it, making it impossible to deduce the global geometry through observation. Different mathematical models of the universe's global geometry can be constructed, all consistent with current observations and general relativity. Hence, it is unclear whether the observable universe matches the entire universe or is significantly smaller, though it is generally accepted that the universe is larger than the observable universe. The universe may be compact in some dimensions and not in others, similar to how a cuboid is longer in one dimension than the others. Scientists test these models by looking for novel implications – phenomena not yet observed but necessary if the model is accurate. For instance, a small closed universe would produce multiple images of the same object in the sky, though not necessarily of the same age. As of 2024, current observational evidence suggests that the observable universe is spatially flat with an unknown global structure. Curvature of the universe The curvature is a quantity describing how the geometry of a space differs locally from flat space. The curvature of any locally isotropic space (and hence of a locally isotropic universe) falls into one of the three following cases: Zero curvature (flat): a drawn triangle's angles add up to 180° and the Pythagorean theorem holds; such 3-dimensional space is locally modeled by Euclidean space.
Positive curvature: a drawn triangle's angles add up to more than 180°; such 3-dimensional space is locally modeled by a region of a 3-sphere. Negative curvature: a drawn triangle's angles add up to less than 180°; such 3-dimensional space is locally modeled by a region of a hyperbolic space. Curved geometries are in the domain of non-Euclidean geometry. An example of a positively curved space would be the surface of a sphere such as the Earth. A triangle drawn from the equator to a pole will have at least two angles equal to 90°, which makes the sum of the 3 angles greater than 180°. An example of a negatively curved surface would be the shape of a saddle or mountain pass. A triangle drawn on a saddle surface will have the sum of the angles adding up to less than 180°. General relativity explains that mass and energy bend the curvature of spacetime and is used to determine what curvature the universe has by using a value called the density parameter, represented with Omega (Ω). The density parameter is the average density of the universe divided by the critical energy density, that is, the mass energy needed for a universe to be flat. Put another way: if Ω = 1, the universe is flat; if Ω > 1, there is positive curvature; and if Ω < 1, there is negative curvature. Scientists can calculate Ω experimentally to determine the curvature in two ways. One is to count all the mass–energy in the universe and take its average density, then divide that average by the critical energy density. Data from the Wilkinson Microwave Anisotropy Probe (WMAP) as well as the Planck spacecraft give values for the three constituents of all the mass–energy in the universe – normal mass (baryonic matter and dark matter), relativistic particles (predominantly photons and neutrinos), and dark energy or the cosmological constant: Ωmass ≈ Ωrelativistic ≈ ΩΛ ≈ Ωtotal = Ωmass + Ωrelativistic + ΩΛ = The actual value for critical density is measured as ρcritical = .
From these values, within experimental error, the universe seems to be spatially flat. Another way to measure Ω is to do so geometrically, by measuring an angle across the observable universe. This can be done by using the CMB and measuring the power spectrum and temperature anisotropy. For instance, one can imagine finding a gas cloud that is not in thermal equilibrium due to being so large that light speed cannot propagate the thermal information. Knowing this propagation speed, and hence the size of the gas cloud, as well as the distance to the gas cloud, we have two sides of a triangle and can then determine the angles. Using a method similar to this, the BOOMERanG experiment determined that the sum of the angles is 180° within experimental error, corresponding to Ωtotal ≈ 1. These and other astronomical measurements constrain the spatial curvature to be very close to zero, although they do not constrain its sign. This means that although the local geometries of spacetime are generated by the theory of relativity based on spacetime intervals, we can approximate 3-space by the familiar Euclidean geometry. The Friedmann–Lemaître–Robertson–Walker (FLRW) model using Friedmann equations is commonly used to model the universe. The FLRW model provides a curvature of the universe based on the mathematics of fluid dynamics, that is, modeling the matter within the universe as a perfect fluid. Although stars and structures of mass can be introduced into an "almost FLRW" model, a strictly FLRW model is used to approximate the local geometry of the observable universe. Another way of saying this is that, if all forms of dark energy are ignored, then the curvature of the universe can be determined by measuring the average density of matter within it, assuming that all matter is evenly distributed (rather than the distortions caused by 'dense' objects such as galaxies).
This assumption is justified by the observations that, while the universe is "weakly" inhomogeneous and anisotropic (see the large-scale structure of the cosmos), it is on average homogeneous and isotropic when analyzed at a sufficiently large spatial scale. Global universal structure Global structure covers the geometry and the topology of the whole universe—both the observable universe and beyond. While the local geometry does not determine the global geometry completely, it does limit the possibilities, particularly a geometry of a constant curvature. The universe is often taken to be a geodesic manifold, free of topological defects; relaxing either of these complicates the analysis considerably. A global geometry is a local geometry plus a topology. It follows that a topology alone does not give a global geometry: for instance, Euclidean 3-space and hyperbolic 3-space have the same topology but different global geometries. As stated in the introduction, investigations within the study of the global structure of the universe include: whether the universe is infinite or finite in extent, whether the geometry of the global universe is flat, positively curved, or negatively curved, and, whether the topology is simply connected (for example, like a sphere) or else multiply connected (for example, like a torus). Infinite or finite One of the unanswered questions about the universe is whether it is infinite or finite in extent. For intuition, it can be understood that a finite universe has a finite volume that, for example, could in theory be filled with a finite amount of material, while an infinite universe is unbounded and no numerical volume could possibly fill it. Mathematically, the question of whether the universe is infinite or finite is referred to as boundedness. An infinite universe (unbounded metric space) means that there are points arbitrarily far apart: for any distance d, there are points that are of a distance at least d apart.
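The density-parameter test described earlier can be sketched numerically. The Hubble constant of 67.4 km/s/Mpc below is an assumed illustrative value, not one quoted in this article, and the classification follows the Ω conditions stated above:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
MPC_IN_M = 3.0857e22   # one megaparsec in metres

def critical_density(h0_km_s_mpc):
    """Critical density rho_c = 3 H^2 / (8 pi G), in kg/m^3."""
    h0 = h0_km_s_mpc * 1000 / MPC_IN_M  # convert km/s/Mpc to s^-1
    return 3 * h0**2 / (8 * math.pi * G)

def curvature_sign(omega_total):
    """Omega > 1 -> positive curvature, Omega < 1 -> negative, Omega = 1 -> flat."""
    if omega_total > 1:
        return "positive"
    if omega_total < 1:
        return "negative"
    return "zero"

rho_c = critical_density(67.4)  # of order 1e-26 kg/m^3: a few hydrogen atoms per cubic metre
```

The tiny magnitude of ρc is why counting all the mass–energy in a survey volume is such a delicate way to measure Ω.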
A finite universe is a bounded metric space, where there is some distance d such that all points are within distance d of each other. The smallest such d is called the diameter of the universe, in which case the universe has a well-defined "volume" or "scale". With or without boundary Assuming a finite universe, the universe can either have an edge or no edge. Many finite mathematical spaces, e.g., a disc, have an edge or boundary. Spaces that have an edge are difficult to treat, both conceptually and mathematically. Namely, it is difficult to state what would happen at the edge of such a universe. For this reason, spaces that have an edge are typically excluded from consideration. However, there exist many finite spaces, such as the 3-sphere and 3-torus, that have no edges. Mathematically, these spaces are referred to as being compact without boundary. The term compact means that it is finite in extent ("bounded") and complete. The term "without boundary" means that the space has no edges. Moreover, so that calculus can be applied, the universe is typically assumed to be a differentiable manifold. A mathematical object that possesses all these properties, compact without boundary and differentiable, is termed a closed manifold. The 3-sphere and 3-torus are both closed manifolds. Observational methods In the 1990s and early 2000s, empirical methods for determining the global topology using measurements on scales that would show multiple imaging were proposed and applied to cosmological observations. In the 2000s and 2010s, it was shown that, since the universe is inhomogeneous as shown in the cosmic web of large-scale structure, acceleration effects measured on local scales in the patterns of the movements of galaxies should, in principle, reveal the global topology of the universe. Curvature The curvature of the universe places constraints on the topology. If the spatial geometry is spherical, i.e., possesses positive curvature, the topology is compact.
For a flat (zero curvature) or a hyperbolic (negative curvature) spatial geometry, the topology can be either compact or infinite. Many textbooks erroneously state that a flat or hyperbolic universe implies an infinite universe; however, the correct statement is that a flat universe that is also simply connected implies an infinite universe. For example, Euclidean space is flat, simply connected, and infinite, but there are tori that are flat, multiply connected, finite, and compact (see flat torus). In general, local-to-global theorems in Riemannian geometry relate the local geometry to the global geometry. If the local geometry has constant curvature, the global geometry is very constrained, as described in Thurston geometries. The latest research shows that even the most powerful future experiments (like the SKA) will not be able to distinguish between a flat, open and closed universe if the true value of the cosmological curvature parameter is smaller than 10⁻⁴. If the true value of the cosmological curvature parameter is larger than 10⁻³, we will be able to distinguish between these three models even now. Final results of the Planck mission, released in 2018, show the cosmological curvature parameter, ΩK, to be , consistent with a flat universe (positive curvature: ΩK < 0, Ω > 1; negative curvature: ΩK > 0, Ω < 1; zero curvature: ΩK = 0, Ω = 1). Universe with zero curvature In a universe with zero curvature, the local geometry is flat. The most familiar such global structure is that of Euclidean space, which is infinite in extent. Flat universes that are finite in extent include the torus and Klein bottle. Moreover, in three dimensions, there are 10 finite closed flat 3-manifolds, of which 6 are orientable and 4 are non-orientable. These are the Bieberbach manifolds. The most familiar is the aforementioned 3-torus universe. In the absence of dark energy, a flat universe expands forever but at a continually decelerating rate, with expansion asymptotically approaching zero.
With dark energy, the expansion rate of the universe initially slows down, due to the effect of gravity, but eventually increases. The ultimate fate of the universe is the same as that of an open universe in the sense that space will continue expanding forever. A flat universe can have zero total energy. Universe with positive curvature A positively curved universe is described by elliptic geometry, and can be thought of as a three-dimensional hypersphere, or some other spherical 3-manifold (such as the Poincaré dodecahedral space), all of which are quotients of the 3-sphere. Poincaré dodecahedral space is a positively curved space, colloquially described as "soccerball-shaped", as it is the quotient of the 3-sphere by the binary icosahedral group, which is very close to icosahedral symmetry, the symmetry of a soccer ball. This was proposed by Jean-Pierre Luminet and colleagues in 2003 and an optimal orientation on the sky for the model was estimated in 2008. Universe with negative curvature A hyperbolic universe, one of a negative spatial curvature, is described by hyperbolic geometry, and can be thought of locally as a three-dimensional analog of an infinitely extended saddle shape. There are a great variety of hyperbolic 3-manifolds, and their classification is not completely understood. Those of finite volume can be understood via the Mostow rigidity theorem. For hyperbolic local geometry, many of the possible three-dimensional spaces are informally called "horn topologies", so called because of the shape of the pseudosphere, a canonical model of hyperbolic geometry. An example is the Picard horn, a negatively curved space, colloquially described as "funnel-shaped". Curvature: open or closed When cosmologists speak of the universe as being "open" or "closed", they most commonly are referring to whether the curvature is negative or positive, respectively. 
These meanings of open and closed are different from the mathematical meaning of open and closed used for sets in topological spaces and for the mathematical meaning of open and closed manifolds, which gives rise to ambiguity and confusion. In mathematics, there are definitions for a closed manifold (i.e., compact without boundary) and open manifold (i.e., one that is not compact and without boundary). A "closed universe" is necessarily a closed manifold. An "open universe" can be either a closed or open manifold. For example, in the Friedmann–Lemaître–Robertson–Walker (FLRW) model, the universe is considered to be without boundaries, in which case "compact universe" could describe a universe that is a closed manifold.
Physical sciences
Physical cosmology
Astronomy
99358
https://en.wikipedia.org/wiki/Biogeography
Biogeography
Biogeography is the study of the distribution of species and ecosystems in geographic space and through geological time. Organisms and biological communities often vary in a regular fashion along geographic gradients of latitude, elevation, isolation and habitat area. Phytogeography is the branch of biogeography that studies the distribution of plants. Zoogeography is the branch that studies distribution of animals. Mycogeography is the branch that studies distribution of fungi, such as mushrooms. Knowledge of spatial variation in the numbers and types of organisms is as vital to us today as it was to our early human ancestors, as we adapt to heterogeneous but geographically predictable environments. Biogeography is an integrative field of inquiry that unites concepts and information from ecology, evolutionary biology, taxonomy, geology, physical geography, palaeontology, and climatology. Modern biogeographic research combines information and ideas from many fields, from the physiological and ecological constraints on organismal dispersal to geological and climatological phenomena operating at global spatial scales and evolutionary time frames. The short-term interactions within a habitat and species of organisms describe the ecological application of biogeography. Historical biogeography describes the long-term, evolutionary periods of time for broader classifications of organisms. Early scientists, beginning with Carl Linnaeus, contributed to the development of biogeography as a science. The scientific theory of biogeography grows out of the work of Alexander von Humboldt (1769–1859), Francisco Jose de Caldas (1768–1816), Hewett Cottrell Watson (1804–1881), Alphonse de Candolle (1806–1893), Alfred Russel Wallace (1823–1913), Philip Lutley Sclater (1829–1913) and other biologists and explorers. 
Introduction The patterns of species distribution across geographical areas can usually be explained through a combination of historical factors such as speciation, extinction, continental drift, and glaciation. Through observing the geographic distribution of species, we can see associated variations in sea level, river routes, habitat, and river capture. Additionally, this science considers the geographic constraints of landmass areas and isolation, as well as the available ecosystem energy supplies. Over periods of ecological change, biogeography includes the study of plant and animal species in their past and/or present living refugium habitats, their interim living sites, and/or their survival locales. As writer David Quammen put it, "...biogeography does more than ask Which species? and Where. It also asks Why? and, what is sometimes more crucial, Why not?" Modern biogeography often employs Geographic Information Systems (GIS) to understand the factors affecting organism distribution and to predict future trends in organism distribution. Often mathematical models and GIS are employed to solve ecological problems that have a spatial aspect. Biogeography is most keenly observed on the world's islands. These habitats are often much more manageable areas of study because they are more condensed than larger ecosystems on the mainland. Islands are also ideal locations because they allow scientists to look at habitats that new invasive species have only recently colonized, to observe how they disperse throughout the island and change it. They can then apply their understanding to similar but more complex mainland habitats. Islands are very diverse in their biomes, ranging from tropical to arctic climates. This diversity of habitat allows for a wide range of species to be studied in different parts of the world. 
One scientist who recognized the importance of these geographic locations was Charles Darwin, who remarked in his journal "The Zoology of Archipelagoes will be well worth examination". Two chapters in On the Origin of Species were devoted to geographical distribution. History 18th century The first discoveries that contributed to the development of biogeography as a science began in the mid-18th century, as Europeans explored the world and described the biodiversity of life. During the 18th century, most views of the world were shaped by religion and, for many natural theologians, by the Bible. Carl Linnaeus, in the mid-18th century, improved our classifications of organisms through the exploration of undiscovered territories by his students and disciples. When he noticed that species were not as perpetual as he had believed, he developed the Mountain Explanation to account for the distribution of biodiversity: when Noah's ark landed on Mount Ararat and the waters receded, the animals dispersed throughout different elevations on the mountain. This showed different species in different climates, proving that species were not constant. Linnaeus' findings set a basis for ecological biogeography. Through his strong beliefs in Christianity, he was inspired to classify the living world, which then gave way to additional accounts of secular views on geographical distribution. He argued that the structure of an animal was very closely related to its physical surroundings. This was important to Georges-Louis Buffon's rival theory of distribution. Shortly after Linnaeus, Georges-Louis Leclerc, Comte de Buffon observed shifts in climate and how species spread across the globe as a result. He was the first to note that different regions of the world held different groups of organisms. Buffon saw similarities between some regions, which led him to believe that at one point continents were connected, and that water then separated them and caused differences in species. 
His hypotheses were described in his work, the 36-volume Histoire Naturelle, générale et particulière, in which he argued that varying geographical regions would have different forms of life. This was inspired by his observations comparing the Old and New World, as he determined distinct variations of species from the two regions. Buffon believed there was a single species creation event, and that different regions of the world were homes for varying species, an alternative view to that of Linnaeus. Buffon's law eventually became a principle of biogeography by explaining how similar environments were habitats for comparable types of organisms. Buffon also studied fossils, which led him to believe that the Earth was tens of thousands of years old, and that humans had not lived there long in comparison to the age of the Earth. 19th century Following the period of exploration came the Age of Enlightenment in Europe, which attempted to explain the patterns of biodiversity observed by Buffon and Linnaeus. At the birth of the 19th century, Alexander von Humboldt, known as the "founder of plant geography", developed the concept of physique generale to demonstrate the unity of science and how species fit together. As one of the first to contribute empirical data to the science of biogeography through his travels as an explorer, he observed differences in climate and vegetation. He divided the Earth into regions which he defined as tropical, temperate, and arctic, within which there were similar forms of vegetation. This ultimately enabled him to develop the isotherm, which allowed scientists to see patterns of life within different climates. He contributed his observations to findings of botanical geography by previous scientists, and sketched this description of both the biotic and abiotic features of the Earth in his book, Cosmos. 
Augustin de Candolle contributed to the field of biogeography through his observations of species competition and of the several factors that influence the diversity of life. He was a Swiss botanist and created the first Laws of Botanical Nomenclature in his work, Prodromus. He discussed plant distribution, and his theories eventually had a great impact on Charles Darwin, who was inspired to consider species adaptations and evolution after learning about botanical geography. De Candolle was the first to describe the differences between the small-scale and large-scale distribution patterns of organisms around the globe. Several additional scientists contributed new theories to further develop the concept of biogeography. Charles Lyell developed the Theory of Uniformitarianism after studying fossils. This theory held that the world was not created by one sole catastrophic event, but instead by numerous creation events and locations. Uniformitarianism also introduced the idea that the Earth was significantly older than was previously accepted. Using this knowledge, Lyell concluded that it was possible for species to go extinct. Since he noted that Earth's climate changes, he realized that species distributions must also change accordingly. Lyell argued that climate changes complemented vegetation changes, thus connecting the environmental surroundings to varying species. This largely influenced Charles Darwin in his development of the theory of evolution. Charles Darwin was a natural theologian who studied around the world, most importantly in the Galapagos Islands. Darwin introduced the idea of natural selection, theorizing against the previously accepted idea that species were static or unchanging. His contributions to biogeography and the theory of evolution were different from those of other explorers of his time, because he developed a mechanism to describe the ways that species changed. 
His influential ideas include the development of theories regarding the struggle for existence and natural selection. Darwin's theories added a biological component to biogeography and to empirical studies, which enabled future scientists to develop ideas about the geographical distribution of organisms around the globe. Alfred Russel Wallace studied the distribution of flora and fauna in the Amazon Basin and the Malay Archipelago in the mid-19th century. His research was essential to the further development of biogeography, and he was later nicknamed the "father of Biogeography". Wallace conducted fieldwork researching the habits, breeding and migration tendencies, and feeding behavior of thousands of species. He studied butterfly and bird distributions in comparison to the presence or absence of geographical barriers. His observations led him to conclude that the number of organisms present in a community depended on the amount of food resources in the particular habitat. Wallace believed species were dynamic, responding to biotic and abiotic factors. He and Philip Sclater saw biogeography as a source of support for the theory of evolution, using Darwin's conclusions to explain how biogeography served as a record of species inheritance. Key findings, such as the sharp difference in fauna on either side of the Wallace Line, and the sharp difference that existed between North and South America prior to their relatively recent faunal interchange, can only be understood in this light. Otherwise, the field of biogeography would be seen as a purely descriptive one. 20th and 21st century In the 20th century, Alfred Wegener introduced the Theory of Continental Drift in 1912, though it was not widely accepted until the 1960s. This theory was revolutionary because it changed the way scientists thought about species and their distribution around the globe. 
The theory explained how the continents were formerly joined in one large landmass, Pangea, and slowly drifted apart due to the movement of the plates below Earth's surface. The evidence for this theory lies in the geological similarities between varying locations around the globe, the geographic distribution of some fossils (including the mesosaurs) on various continents, and the jigsaw-puzzle shape of the landmasses on Earth. Though Wegener did not know the mechanism of continental drift, this contribution to the study of biogeography was significant in the way that it shed light on the importance of environmental and geographic similarities or differences as a result of climate and other pressures on the planet. Importantly, late in his career Wegener recognised that testing his theory required measurement of continental movement rather than inference from fossil species distributions. In 1958, paleontologist Paul S. Martin published A Biogeography of Reptiles and Amphibians in the Gómez Farias Region, Tamaulipas, Mexico, which has been described as "ground-breaking" and "a classic treatise in historical biogeography". Martin applied several disciplines, including ecology, botany, climatology, geology, and Pleistocene dispersal routes, to examine the herpetofauna of a relatively small, largely undisturbed, but ecologically complex area situated on the threshold of the temperate–tropical (Nearctic and Neotropical) regions, including semiarid lowlands at 70 meters elevation and the northernmost cloud forest in the western hemisphere at over 2200 meters. The publication of The Theory of Island Biogeography by Robert MacArthur and E. O. Wilson in 1967 showed that the species richness of an area could be predicted in terms of such factors as habitat area, immigration rate and extinction rate. This added to the long-standing interest in island biogeography. 
The application of island biogeography theory to habitat fragments spurred the development of the fields of conservation biology and landscape ecology. Classic biogeography has been expanded by the development of molecular systematics, creating a new discipline known as phylogeography. This development allowed scientists to test theories about the origin and dispersal of populations, such as island endemics. For example, while classic biogeographers were able to speculate about the origins of species in the Hawaiian Islands, phylogeography allows them to test theories of relatedness between these populations and putative source populations on various continents, notably in Asia and North America. Biogeography continues as a point of study for many life sciences and geography students worldwide; however, within institutions it may fall under broader titles such as ecology or evolutionary biology. In recent years, one of the most consequential developments in biogeography has been to show how multiple organisms, including mammals like monkeys and reptiles like squamates, overcame barriers such as large oceans that many biogeographers formerly believed were impossible to cross.
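The equilibrium logic of MacArthur and Wilson's island model mentioned above can be sketched numerically. This is a minimal illustration assuming linear immigration and extinction rate curves; the function name and all parameter values are illustrative assumptions, not figures from the text:

```python
# Minimal sketch of the MacArthur-Wilson equilibrium model (illustrative only).
# Immigration declines as the island fills: I(S) = i_max * (1 - S/pool)
# Extinction rises with species present:   E(S) = e_max * S / pool
def equilibrium_richness(pool, i_max, e_max):
    """Species count S* where I(S) = E(S).

    Setting i_max*(1 - S/pool) = e_max*S/pool gives
    S* = pool * i_max / (i_max + e_max).
    """
    return pool * i_max / (i_max + e_max)

# Islands with lower extinction (e.g. larger or nearer islands) hold more
# species at equilibrium:
print(equilibrium_richness(pool=100, i_max=1.0, e_max=1.0))  # 50.0
print(equilibrium_richness(pool=100, i_max=1.0, e_max=3.0))  # 25.0
```

The qualitative prediction matches the text: richness rises with habitat area and immigration rate and falls with extinction rate.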
Biology and health sciences
Ecology
Biology
99384
https://en.wikipedia.org/wiki/Larch
Larch
Larches are deciduous conifers in the genus Larix, of the family Pinaceae (subfamily Laricoideae). Growing from tall, they are native to the cooler regions of the northern hemisphere, where they are found in lowland forests in the high latitudes, and high in mountains further south. Larches are among the dominant plants in the boreal forests of Siberia and Canada. Although they are conifers, larches are deciduous trees that lose their needles in the autumn. Etymology The English name larch ultimately derives from the Latin "larigna", named after the ancient settlement of Larignum. The story of its naming was preserved by Vitruvius: It is worth while to know how this wood was discovered. The divine Caesar, being with his army in the neighbourhood of the Alps, and having ordered the towns to furnish supplies, the inhabitants of a fortified stronghold there, called Larignum, trusting in the natural strength of their defences, refused to obey his command. So the general ordered his forces to the assault. In front of the gate of this stronghold there was a tower, made of beams of this wood laid in alternating directions at right angles to each other, like a funeral pyre, and built high, so that they could drive off an attacking party by throwing stakes and stones from the top. When it was observed that they had no other missiles than stakes, and that these could not be hurled very far from the wall on account of the weight, orders were given to approach and to throw bundles of brushwood and lighted torches at this outwork. These the soldiers soon got together. The flames soon kindled the brushwood which lay about that wooden structure and, rising towards heaven, made everybody think that the whole pile had fallen. But when the fire had burned itself out and subsided, and the tower appeared to view entirely uninjured, Caesar in amazement gave orders that they should be surrounded with a palisade, built beyond the range of missiles. 
So the townspeople were frightened into surrendering, and were then asked where that wood came from which was not harmed by fire. They pointed to trees of the kind under discussion, of which there are very great numbers in that vicinity. And so, as that stronghold was called Larignum, the wood was called larch. — Description and distribution The tallest species, Larix occidentalis, can reach . Larch tree crowns are sparse, with the major branches horizontal; the second- and third-order branchlets are also ± horizontal in some species (e.g. L. gmelinii, L. kaempferi), or characteristically pendulous in some other species (e.g. L. decidua, L. griffithii). Larch shoots are dimorphic, with leaves borne singly on long shoots typically long and bearing several buds, and in dense clusters of 20–50 needles on short shoots only long with only a single bud. The leaves (light green) are needle-like, long, slender (under wide). Larches are among the few deciduous conifers; most conifers are evergreen. Other deciduous conifers include the golden larch Pseudolarix amabilis, the dawn redwood Metasequoia glyptostroboides, the Chinese swamp cypress Glyptostrobus pensilis and the bald cypresses in the genus Taxodium. The male (pollen) cones are greenish-yellow to orange-yellowish and fall soon after pollination. The female cones of larches are erect, small, long, green, red, or purple, ripening brown and woody- or leathery-textured 5–8 months after pollination; in about half the species the bract scales are long and visible, and in the others, short and hidden between the seed scales. Those native to northern regions have small cones () with short bracts, with more southerly species tending to have longer cones (), often with exserted bracts, with the longest cones and bracts produced by the southernmost species, in the Himalayas. The seeds are winged. The root system is broad and deep and the bark is finely cracked and wrinkled in irregular plaques. 
The wood is bicoloured, with salmon-pink heartwood and yellowish-white sapwood. The chromosome number is 2n = 24, similar to that of most of the other species of the family Pinaceae. The genus Larix is present in all the temperate-cold zones of the northern hemisphere, from North America to northern Siberia, passing through Europe, mountainous China and Japan. The larches are important forest trees of Russia, Central Europe, the United States and Canada. They require a cool and fairly humid climate, and for this reason they are found in the mountains of the temperate zones, while in the northernmost boreal zones they are also found in the plains. Larches extend farther north than any other trees, reaching the tundra and polar ice in North America and Siberia. The larches are pioneer species, undemanding as to soil, and very long-lived. They live in pure or mixed forests together with other conifers, or more rarely with broad-leaved trees. Species and taxonomy The genus Larix belongs to the subfamily Laricoideae, which also includes the Douglas-firs, genus Pseudotsuga; the genus Cathaya was also included in some older studies, but is now considered closer to Pinus and Picea. In the past, the cone bract length was often used to divide the larches into two sections (sect. Larix with short bracts, and sect. Multiserialis with long bracts), but genetic evidence does not support this division, pointing instead to a genetic divide between Old World and New World species, with the cone and bract size being merely adaptations to climatic conditions. 
More recent genetic studies have proposed three groups within the genus, with a primary division into North American and Eurasian species, and a secondary division of the Eurasian species into northern short-bracted species and southern long-bracted species; there is some dispute over the position of Larix sibirica, a short-bracted species which is placed in the short-bracted group by some of the studies and in the long-bracted group by others. Ten species and one natural hybrid of larch are accepted by Plants of the World Online (POWO), following the conservative treatment in Farjon (2010); several others are accepted by other authors, notably Rushforth and the Flora of China. These are subdivided on the basis of the most recent phylogenetic investigations:

Eurasian species

Northern Eurasian species with short bracts:
Larix decidua (syn. L. europaea) – European larch. Mountains of central Europe.
Larix sibirica – Siberian larch. Plains of western Siberia.
Larix × czekanowskii – an accepted natural hybrid between L. gmelinii and L. sibirica.
Larix gmelinii (syn. L. dahurica, L. cajanderi) – Dahurian larch. Plains of central and eastern Siberia.
Larix principis-rupprechtii – Prince Rupprecht's larch. Mountains of northeastern China (disputed; accepted by Rushforth and many Chinese botanists; treated as a variety of L. gmelinii by POWO despite its disjunct distribution and much larger cones).
Larix kaempferi (syn. L. leptolepis) – Japanese larch. Mountains of central Japan.

Southern Eurasian species with long bracts:
Larix potaninii – Chinese larch. Mountains of southwestern China (southern Sichuan, northern Yunnan).
Larix mastersiana – Masters' larch. Mountains of western China (northern Sichuan).
Larix griffithii (syn. L. griffithiana) – Sikkim larch. Mountains of the eastern Himalayas, on the wet (high monsoon) southern slopes.
Larix himalaica – Langtang larch. Mountains of the central Himalayas (disputed; accepted by Rushforth and the Flora of China; treated as a variety of L. potaninii by POWO despite being geographically distant from it).
Larix kongboensis – Kongbo larch. Mountains of southeastern Tibet, on the dry northern side of the Himalaya in the Yarlung Tsangpo Grand Canyon area (disputed; accepted by the Flora of China; treated as a synonym of L. griffithii by POWO despite its smaller cones and other distinct characters).
Larix speciosa – Burmese larch. Mountains of southwestern China (southwestern Yunnan) and northeastern Myanmar (disputed; accepted by Rushforth and the Flora of China; treated as a variety of L. griffithii by POWO despite being geographically distant from it, and closer to L. potaninii in morphology).

North American species:
Larix laricina (Du Roi) K. Koch – Tamarack or American larch. Parts of Alaska and throughout Canada and the northern United States from the eastern Rocky Mountains to the Atlantic shore.
Larix lyallii Parl. – Subalpine larch. Mountains of the northwestern United States and southwestern Canada, at very high altitude.
Larix occidentalis Nutt. – Western larch. Mountains of the northwestern United States and southwestern Canada, at lower altitudes (Pacific Northwest).

Hybrids Most if not all of the species can be hybridised in cultivation; these hybrids are not discussed by POWO as they are not of natural occurrence. The hybrid Larix × marschlinsii (syn. L. × eurolepis), the Dunkeld larch, a spontaneous artificial hybrid of L. decidua × L. kaempferi that arose more or less simultaneously in Switzerland and Scotland in 1901–1904, is by far the best known, being of major importance in forestry in northern Europe. Larix × pendula (L. decidua × L. laricina) and Larix × eurokurilensis (L. decidua × L. gmelinii) have also been named, but are rarely seen in cultivation. Larix × stenophylla is another probable hybrid, still unresolved. 
Ecology Larches are associated with a number of mycorrhizal fungal species, including some species which primarily or only associate with larch. One of the most prominent of these species is the larch bolete Suillus grevillei. Larch is used as a food plant by the larvae of a number of Lepidoptera species. Diseases Larches are prone to the fungal canker disease Lachnellula spp. (larch canker); this is particularly a problem on sites prone to late spring frosts, which cause minor injuries to the tree, allowing entry to the fungal spores. In Canada, this disease was first detected in 1980 and is particularly harmful to an indigenous larch species, the tamarack, killing both young and mature trees. Larches are also vulnerable to Phytophthora ramorum. In late 2009 the disease was first found in Japanese larch trees in the English counties of Devon, Cornwall and Somerset, and has since spread to the south-west of Scotland. In August 2010 the disease was found in Japanese larch trees in counties Waterford and Tipperary in Ireland, and in 2013 in the Afan Forest Park in south Wales. Laricifomes officinalis is another fungus, found in Europe, North America and northern Asia, that causes internal wood rot; it is almost exclusively a guest of the genus Larix. Other diseases are caused by fungi, rusts and bacteria, and further damage by insects. Uses Larch wood is valued for its tough, waterproof and durable qualities. Top-quality knot-free timber is in great demand for building yachts and other small boats, for exterior cladding of buildings, and for interior paneling. The timber is somewhat resistant to rot when in contact with the ground, and historically was used for posts and fencing. However, European Standard EN 350-2 lists larch as slightly to moderately durable; this would make it unsuitable for ground-contact use without preservative in temperate climates, and would give it a limited life as external cladding without coatings. 
The hybrid Dunkeld larch is widely grown as a timber crop in Northern Europe, valued for its fast growth and disease resistance. Larch on oak was the traditional construction method for Scottish fishing boats in the 19th century. Larch has also been used in herbal medicine; see Bach flower remedies and Arabinogalactan for details. Often, in Eurasian shamanism, the "world tree" is depicted as specifically a larch tree. Planted on borders with birch, both tree species were used in pagan cremations.
Biology and health sciences
Pinaceae
Plants
99404
https://en.wikipedia.org/wiki/Peat
Peat
Peat is an accumulation of partially decayed vegetation or organic matter. It is unique to natural areas called peatlands, bogs, mires, moors, or muskegs. Sphagnum moss, also called peat moss, is one of the most common components in peat, although many other plants can contribute. The biological features of sphagnum mosses act to create a habitat aiding peat formation, a phenomenon termed 'habitat manipulation'. Soils consisting primarily of peat are known as histosols. Peat forms in wetland conditions, where flooding or stagnant water obstructs the flow of oxygen from the atmosphere, slowing the rate of decomposition. Peat properties such as organic matter content and saturated hydraulic conductivity can exhibit high spatial heterogeneity. Peatlands, particularly bogs, are the primary source of peat; although less common, other wetlands, including fens, pocosins and peat swamp forests, also deposit peat. Landscapes covered in peat are home to specific kinds of plants, including Sphagnum moss, ericaceous shrubs and sedges. Because organic matter accumulates over thousands of years, peat deposits provide records of past vegetation and climate by preserving plant remains, such as pollen. This allows the reconstruction of past environments and the study of land-use changes. Peat is used by gardeners and for horticulture in certain parts of the world, but this is being banned in some places. By volume, there are about 4 trillion cubic metres of peat in the world. Over time, the formation of peat is often the first step in the geological formation of fossil fuels such as coal, particularly low-grade coal such as lignite. The peatland ecosystem covers and is the most efficient carbon sink on the planet, because peatland plants capture carbon dioxide (CO2) naturally released from the peat, maintaining an equilibrium. 
In natural peatlands, the "annual rate of biomass production is greater than the rate of decomposition", but it takes "thousands of years for peatlands to develop the deposits of , which is the average depth of the boreal [northern] peatlands", which store around 415 gigatonnes (Gt) of carbon (about 46 times 2019 global CO2 emissions). Globally, peat stores up to 550 Gt of carbon, 42% of all soil carbon, which exceeds the carbon stored in all other vegetation types, including the world's forests, although it covers just 3% of the land's surface. Peat is in theory a renewable source of energy, but not in practice: its extraction rate in industrialized countries far exceeds its slow regrowth rate of per year, and peat regrowth is reported to take place on only 30–40% of peatlands. Centuries of burning and draining of peat by humans have released a significant amount of into the atmosphere, and much peatland restoration is needed to help limit climate change. Formation Peat forms when plant material does not fully decay in acidic and anaerobic conditions. It is composed mainly of wetland vegetation: principally bog plants including mosses, sedges and shrubs. As it accumulates, the peat holds water. This slowly creates wetter conditions that allow the area of wetland to expand. Peatland features can include ponds, ridges and raised bogs. The characteristics of some bog plants actively promote bog formation. For example, sphagnum mosses actively secrete tannins, which preserve organic material. Sphagnum also have special water-retaining cells, known as hyaline cells, which can release water, ensuring the bogland remains constantly wet, which helps promote peat production. Most modern peat bogs formed 12,000 years ago in high latitudes after the glaciers retreated at the end of the last ice age. Peat usually accumulates slowly, at the rate of about a millimetre per year. 
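The accumulation rate quoted above implies the multi-millennial timescales the passage describes, as a quick back-of-the-envelope check. The ~1 mm/yr rate is taken from the text; the 2 m example depth is a hypothetical illustrative value (the passage's own average-depth figure is not preserved here):

```python
# Rough timescale for peat deposit formation. The ~1 mm/yr accumulation rate
# comes from the text; the 2 m depth below is an assumed illustrative value.
def years_to_accumulate(depth_m, rate_mm_per_yr=1.0):
    """Years needed to build a peat deposit of the given depth (m)."""
    return depth_m * 1000.0 / rate_mm_per_yr

print(years_to_accumulate(2.0))  # 2000.0 -> thousands of years for metres of peat
```

This is why peat counts as renewable only in theory: extraction removes in years what takes millennia to regrow.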
The estimated carbon content is (northern peatlands), (tropical peatlands) and (South America). Types of peat material Peat material is either fibric, hemic, or sapric. Fibric peats are the least decomposed and consist of intact fibre. Hemic peats are partially decomposed and sapric are the most decomposed. Phragmites peat is composed of reed grass, Phragmites australis, and other grasses. It is denser than many other types of peat. Engineers may describe as peat a soil that has a relatively high percentage of organic material. Such soil is problematic because it exhibits poor consolidation properties: it cannot be easily compacted to serve as a stable foundation to support loads, such as roads or buildings. Peatlands distribution In a widely cited article, Joosten and Clarke (2002) described peatlands or mires (which they say are the same) as the most widespread of all wetland types in the world, representing 50 to 70% of global wetlands. They cover over or 3% of the land and freshwater surface of the planet. In these ecosystems are found one third of the world's soil carbon and 10% of global freshwater resources. These ecosystems are characterized by the unique ability to accumulate and store dead organic matter from Sphagnum and many other non-moss species, as peat, under conditions of almost permanent water saturation. Peatlands are adapted to the extreme conditions of high water and low oxygen content, of toxic elements and low availability of plant nutrients. Their water chemistry varies from alkaline to acidic. Peatlands occur on all continents, from the tropical to boreal and Arctic zones, and from sea level to high alpine conditions. A more recent estimate from an improved global peatland map, PEATMAP, based on a meta-analysis of geospatial information at global, regional and national levels, puts global coverage slightly higher than earlier peatland inventories, at approximately 2.84% of the world land area. In Europe, peatlands extend to about . 
About 60% of the world's wetlands are made of peat. Peat deposits are found in many places around the world, including northern Europe and North America. The North American peat deposits are principally found in Canada and the Northern United States. Some of the world's largest peatlands include the West Siberian Lowland, the Hudson Bay Lowlands and the Mackenzie River Valley. There is less peat in the Southern Hemisphere, in part because there is less land. The world's largest tropical peatland is located in Africa (the Democratic Republic of Congo). In addition, the vast Magellanic Moorland in South America (Southern Patagonia/Tierra del Fuego) is an extensive peat-dominated landscape. Peat can be found in New Zealand, Kerguelen, the Falkland Islands and Indonesia (Kalimantan [Sungai Putri, Danau Siawan, Sungai Tolak], Rasau Jaya (West Kalimantan) and Sumatra). Indonesia has more tropical peatlands and mangrove forests than any other nation on earth, but Indonesia is losing wetlands by per year. A catalog of the peat research collection at the University of Minnesota Duluth provides references to research on worldwide peat and peatlands. About 7% of all peatlands have been exploited for agriculture and forestry. Under certain conditions, peat will turn into lignite coal over geologic periods of time. General uses Fuel Peat can be used as fuel once dried. Traditionally, peat is cut by hand and left to dry in the sun. In many countries, including Ireland and Scotland, peat was traditionally stacked to dry in rural areas and used for cooking and domestic heating. This tradition can be traced back to the Roman period. For industrial uses, companies may use pressure to extract water from the peat, which is soft and easily compressed. Agriculture In Sweden, farmers use dried peat to absorb excrement from cattle that are wintered indoors. 
The most essential property of peat in container soil is that it retains moisture when the soil is dry while preventing excess water from killing roots when it is wet. Peat can store nutrients although it is not fertile itself: it is polyelectrolytic with a high ion-exchange capacity due to its oxidized lignin. Peat has been discouraged as a soil amendment by the Royal Botanic Gardens, Kew, England, since 2003. While bark- or coir-based peat-free potting soil mixes are on the rise, particularly in the UK, peat is still used as a raw material for horticulture in some other European countries, in Canada, and in parts of the United States.

Drinking water

Peatland can also be an essential source of drinking water, providing nearly 4% of all potable water stored in reservoirs. In the UK, 43% of the population receives drinking water sourced from peatlands, with the number climbing to 68% in Ireland. Catchments containing peatlands are the main source of water for large cities, including Dublin.

Metallurgy

Peat wetlands also used to have a degree of metallurgical importance in the Early Middle Ages, being the primary source of bog iron used to create swords and armour.

Flood mitigation

Many peat swamps along the coast of Malaysia serve as a natural means of flood mitigation, with any overflow being absorbed by the peat, provided forests are still present to prevent peat fires.

Freshwater aquaria

Peat is sometimes used in freshwater aquaria. It is seen most commonly in soft water or blackwater river systems such as those mimicking the Amazon River basin. In addition to being soft and therefore suitable for demersal (bottom-dwelling) species such as Corydoras catfish, peat is reported to have many other beneficial functions in freshwater aquaria. It softens water by acting as an ion exchanger; it also contains substances that are beneficial for plants and for fishes' reproductive health. Peat can prevent algae growth and kill microorganisms.
Peat often stains the water yellow or brown due to the leaching of tannins.

Balneotherapy

Peat is widely used in balneotherapy (the use of bathing to treat disease). Many traditional spa treatments include peat as part of peloids. Such health treatments have an enduring tradition in European countries, including Poland, the Czech Republic, Germany and Austria. Some of these old spas date back to the 18th century and are still active today. The most common types of peat application in balneotherapy are peat muds, poultices and suspension baths.

Peat archives

Authors Rydin and Jeglum in Biology of Habitats described the concept of peat archives, a phrase coined by influential peatland scientist Harry Godwin in 1981. In Quaternary Palaeoecology, first published in 1980, Birks and Birks described how paleoecological studies "of peat can be used to reveal what plant communities were present (locally and regionally), what period each community occupied, how environmental conditions changed, and how the environment affected the ecosystem in that time and place." Scientists continue to compare modern mercury (Hg) accumulation rates in bogs with historical natural archive records in peat bogs and lake sediments to estimate, for example, the potential human impacts on the biogeochemical cycle of mercury. Over the years, different models and technologies for dating sediments and peat profiles accumulated over the last 100–150 years have been used, including the widely used vertical distribution of 210Pb, inductively coupled plasma mass spectrometry (ICP-SMS), and more recently the initial penetration (IP) method.

Bog bodies

Naturally mummified human bodies, often called "bog bodies", have been found in various places in Scotland, England, Ireland, and especially northern Germany and Denmark. They are almost perfectly preserved by the tanning properties of the acidic water, as well as by the antibiotic properties of the organic component sphagnan.
A famous example is the Tollund Man in Denmark. Discovered in 1950 after being mistaken for a recent murder victim, he was exhumed for scientific purposes and dated to the 4th century BC. Before that, another bog body, the Elling Woman, had been discovered in 1938 in the same bog about from the Tollund Man. She is believed to have lived during the late 3rd century BC and to have been a ritual sacrifice. In the Bronze and Iron Ages, people used peat bogs for rituals to nature gods and spirits.

Environmental and ecological issues

The distinctive ecological conditions of peat wetlands provide a habitat for distinctive fauna and flora. For example, whooping cranes nest in North American peatlands, whilst Siberian cranes nest in the West Siberian peatland. Palsa mires have a rich bird life and are an EU red-listed habitat, and in Canada riparian peat banks are used as maternity sites for polar bears. Natural peatlands also have many species of wild orchids and carnivorous plants. For more on biological communities, see wetland, bog or fen. Around half of the area of northern peatlands is permafrost-affected; this area represents around a tenth of the total permafrost area, and also a tenth (185 ± 66 Gt) of all permafrost carbon, equivalent to around half of the carbon stored in the atmosphere. Dry peat is a good insulator (with a thermal conductivity of around 0.25 W m−1 K−1) and therefore plays an important role in protecting permafrost from thaw. The insulating effect of dry peat also makes it integral to unique permafrost landforms such as palsas and permafrost peat plateaus. Peatland permafrost thaw tends to result in an increase in methane emissions and a small increase in carbon dioxide uptake, meaning that it contributes to the permafrost carbon feedback.
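As a rough illustration of the insulating effect just described, Fourier's law gives the steady-state heat flux through a slab of material. Only the thermal conductivity of dry peat (around 0.25 W m−1 K−1) comes from the text above; the layer thickness, temperature difference, and the mineral-soil comparison value are illustrative assumptions.

```python
# Steady-state heat flux through a soil layer (Fourier's law: q = k * dT / d).
# k = 0.25 W/(m*K) for dry peat is the figure quoted in the text; the 0.5 m
# thickness, the 20 K temperature difference, and the mineral-soil
# conductivity are illustrative assumptions, not sourced figures.
def heat_flux(k_w_per_m_k: float, delta_t_k: float, thickness_m: float) -> float:
    """Heat flux in W/m^2 through a slab of the given thermal conductivity."""
    return k_w_per_m_k * delta_t_k / thickness_m

q_peat = heat_flux(0.25, 20.0, 0.5)  # dry peat layer
q_soil = heat_flux(2.0, 20.0, 0.5)   # assumed wet mineral soil, for comparison
print(f"dry peat: {q_peat:.0f} W/m^2, mineral soil: {q_soil:.0f} W/m^2")
```

Under these assumed conditions the peat layer passes roughly an eighth of the heat a comparable mineral-soil layer would, which is why a dry peat cover slows permafrost thaw.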
Under 2 °C global warming, 0.7 million km2 of peatland permafrost could thaw, and with warming of 1.5 to 6 °C a cumulative 0.7 to 3 PgC of methane could be released as a result of permafrost peatland thaw by 2100. The forcing from these potential emissions would be approximately equivalent to 1% of projected anthropogenic emissions. One characteristic of peat is the bioaccumulation of metals concentrated in the peat. Accumulated mercury is of significant environmental concern.

Peat drainage

Large areas of organic wetland (peat) soils are currently drained for agriculture, forestry and peat extraction (e.g. through canals). This process is taking place all over the world. It not only destroys the habitat of many species but also heavily fuels climate change. As a result of peat drainage, the organic carbon, which built up over thousands of years and is normally underwater, is suddenly exposed to the air. It decomposes and turns into carbon dioxide (CO2), which is released into the atmosphere. Global emissions from drained peatlands increased from 1,058 Mton in 1990 to 1,298 Mton in 2008 (a 20% increase). This increase has particularly taken place in developing countries, of which Indonesia, Malaysia and Papua New Guinea are the fastest-growing top emitters. This estimate excludes emissions from peat fires (conservative estimates amount to at least 4,000 Mton CO2-eq./yr for south-east Asia). With 174 Mton CO2-eq./yr, the EU is, after Indonesia (500 Mton) and before Russia (161 Mton), the world's second-largest emitter from drained peatland (excluding extracted peat and fires). Total emissions from the worldwide 500,000 km2 of degraded peatland may exceed 2.0 Gtons (including emissions from peat fires), which is almost 6% of all global carbon emissions.

Peat fires

Peat can be a major fire hazard and is not extinguished by light rain. It may burn for great lengths of time, or smoulder underground and reignite after winter if an oxygen source is present.
Peat has a high carbon content and can burn under low moisture conditions. Once ignited by the presence of a heat source (e.g., a wildfire penetrating the subsurface), it smoulders. These smouldering fires can burn undetected for very long periods of time (months, years, and even centuries), propagating in a creeping fashion through the underground peat layer. Despite the damage that the burning of raw peat can cause, bogs are naturally subject to wildfires and depend on them to keep woody competition from lowering the water table and shading out many bog plants. Several families of plants, including the carnivorous Sarracenia (trumpet pitcher), Dionaea (Venus flytrap) and Utricularia (bladderworts), and non-carnivorous plants such as the sandhills lily, toothache grass and many species of orchid, are now threatened and in some cases endangered by the combined forces of human drainage, negligence and absence of fire. The recent burning of peat bogs in Indonesia, with their large and deep growths containing more than of carbon, has contributed to increases in world carbon dioxide levels. Peat deposits in Southeast Asia could be destroyed by 2040. It is estimated that in 1997, peat and forest fires in Indonesia released between of carbon; equivalent to 13–40 percent of the amount released by global fossil fuel burning, and greater than the carbon uptake of the world's biosphere. These fires may be responsible for the acceleration in the increase in carbon dioxide levels since 1998. More than 100 peat fires in Kalimantan and East Sumatra have continued to burn since 1997; each year, these peat fires ignite new forest fires above the ground. In North America, peat fires can occur during severe droughts wherever peatlands occur, from boreal forests in Canada to swamps and fens in the subtropical southern Florida Everglades.
Once a fire has burnt through the area, hollows in the peat are burnt out, and hummocks are desiccated but can contribute to Sphagnum recolonization. In the summer of 2010, an unusually intense heat wave of up to ignited large deposits of peat in Central Russia, burning thousands of houses and covering the capital of Moscow with a toxic smoke blanket. The situation remained critical until the end of August 2010. In June 2019, despite some forest fire prevention methods being put in place, peat fires in the Arctic emitted of CO2, which is equal to Sweden's total annual emissions. These peat fires are linked to climate change, which makes them much more likely to occur.

Erosion: Peat hags

Peat "hags" are a form of erosion that occurs at the sides of gullies that cut into the peat; they sometimes also occur in isolation. Hags may result when flowing water cuts downwards into the peat and when fire or overgrazing exposes the peat surface. Once the peat is exposed in these ways, it is prone to further erosion by wind, water and livestock. The result is overhanging vegetation and peat. Hags are too steep and unstable for vegetation to establish itself, so they continue to erode unless restorative action is taken.

Protection

In June 2002, the United Nations Development Programme launched the Wetlands Ecosystem and Tropical Peat Swamp Forest Rehabilitation Project. This project was targeted to last for five years, and brought together the efforts of various non-government organisations. In November 2002, the International Peatland (formerly Peat) Society (IPS) and the International Mire Conservation Group (IMCG) published guidelines on the "Wise Use of Mires and Peatlands – Backgrounds and Principles including a framework for decision-making". This publication aims to develop mechanisms that can balance the conflicting demands on the global peatland heritage to ensure its wise use to meet the needs of humankind.
In June 2008, the IPS published the book Peatlands and Climate Change, summarising the currently available knowledge on the topic. In 2010, the IPS presented a "Strategy for Responsible Peatland Management", which can be applied worldwide for decision-making. Peat extraction has been forbidden in Chile since April 2024.

Restoration

Characteristics and uses by nation

Latvia

Latvia has been the biggest exporter of peat in the world by volume, providing more than 19.9% of the world's volume, followed by Canada with 13% in 2022. In 2020, Latvia exported 1.97 million tons of peat, followed by Germany with 1.5 and Canada with 1.42 million tons. Nevertheless, although first in the world by volume, in monetary terms Latvia comes second behind Canada. As an example, Latvia's income from peat exports was US$237 million. Latvia's peat deposits have been estimated to equal 1.7 billion tons. Like Finland, Latvia owes to its climate several peat bogs, which account for 9.9% of the country's territory. More than two thirds of the licensed areas for peat extraction are state-owned: 55% belong to the state whilst 23% belong to the municipalities. Bogs in Latvia are considered important habitats due to their ecological value, and up to 128,000 hectares, or 40% of the bog area in the territory, are protected by environmental laws. The most famous national parks and reserves are the Ķemeri National Park, Cenas tīrelis and Teiči Nature Reserve.

Finland

The climate, geography and environment of Finland favour bog and peat bog formation. Thus, peat is available in considerable quantities. It is burned to produce heat and electricity. Peat provides around 4% of Finland's annual energy production. Moreover, agricultural and forestry-drained peat bogs actively release more CO2 annually than is released in peat energy production in Finland. The regrowth of a peat bog, however, is slow, taking from 1,000 up to 5,000 years.
Furthermore, it is common practice to afforest used peat bogs instead of giving them a chance to renew. This leads to lower levels of CO2 storage than in the original peat bog. At 106 g CO2/MJ, the carbon dioxide emissions of peat are higher than those of coal (at 94.6 g CO2/MJ) and natural gas (at 56.1). According to one study, increasing the average amount of wood in the fuel mixture from the current 2.6% to 12.5% would take the emissions down to 93 g CO2/MJ. That said, little effort is being made to achieve this. The International Mire Conservation Group (IMCG) in 2006 urged the local and national governments of Finland to protect and conserve the remaining pristine peatland ecosystems. This includes the cessation of drainage and peat extraction in intact mire sites and the abandoning of current and planned groundwater extraction that may affect these sites. A proposal for a Finnish peatland management strategy was presented to the government in 2011, after a lengthy consultation phase.

Sweden

About 15% of the land in Sweden is covered by peatlands. Whilst nowadays the main use of such soils is for forestry, peat-rich lands have historically been exploited to produce energy, agricultural land and horticultural substrates. The most common method of extracting peat during the 19th and 20th centuries was peat cutting, a process in which the land is cleared of forest and subsequently drained. Peat cores are then extracted under dry weather conditions and stored in stacks to let the residual moisture evaporate. Today, clear-cutting for horticultural peat (of which Sweden is an important producer in Europe) is limited to some areas of Sweden and strictly regulated by the Swedish Environmental Code, to prevent significant groundwater stores and carbon sink areas from being altered and compromised by human activities.
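The Finnish co-firing figures quoted above (106 g CO2/MJ for peat, down to 93 g CO2/MJ at a 12.5% wood share) can be approximately reproduced by treating the co-fired wood as carbon-neutral; that linear mixing model is an assumption here, not a method stated in the cited study.

```python
# Emission factor of a peat/wood fuel mix, treating the wood fraction of the
# delivered energy as carbon-neutral. This mixing model is an assumption;
# the per-fuel factors below are the figures quoted in the text.
PEAT_G_CO2_PER_MJ = 106.0
COAL_G_CO2_PER_MJ = 94.6
GAS_G_CO2_PER_MJ = 56.1

def mix_emission_factor(wood_share: float) -> float:
    """g CO2 per MJ when `wood_share` of the energy comes from (carbon-neutral) wood."""
    return (1.0 - wood_share) * PEAT_G_CO2_PER_MJ

print(round(mix_emission_factor(0.125), 1))  # 92.8, close to the quoted 93 g CO2/MJ
```

Even at that wood share, the mix still emits more per megajoule than natural gas and slightly less than coal, which is the comparison the text is drawing.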
At the same time, restoration of drained peatlands through rewetting is urged by national and international policies, to exploit the properties of peat-rich soils in mitigating climate change effects.

Ireland

In the Republic of Ireland, a state-owned company called Bord na Móna was responsible for managing peat extraction. It processed the extracted peat into milled peat used in power stations and sold processed peat fuel in the form of peat briquettes, which are used for domestic heating. These are oblong bars of densely compressed, dried, and shredded peat. Peat moss is a manufactured product for garden cultivation. Turf (dried-out peat sods) is also commonly used in rural areas. In January 2021, Bord na Móna announced that it had ceased all peat harvesting and cutting operations and would move its business to a climate solutions company. In 2022, selling peat for burning was prohibited, but some people are still allowed to cut and burn it.

Russia

The use of peat for energy production was prominent in the Soviet Union, especially in 1965. In 1929, over 40% of the Soviet Union's electric energy came from peat, which dropped to 1% by 1980. In the 1960s, larger sections of swamps and bogs in Western Russia were drained for agricultural and mining purposes.

Netherlands

Two and a half thousand years ago, the area now named the Netherlands was largely covered with peat. Drainage (causing compaction and oxidation) and excavation have reduced the peatlands (> peat) to about or 10% of the land area, mostly used as meadows. Drainage and excavation have lowered the surface of the peatlands. In the west of the country, dikes and mills were built, creating polders so that dwelling and economic activities could continue below sea level, the first polder probably in 1533 and the last one in 1968. Peat harvesting could continue in suitable locations as the lower layers below the current sea level were exposed. This peat was deposited before the sea level rise in the Holocene.
As a result, approximately 26% of the area and 21% of the population of the Netherlands are presently below sea level. The deepest point is in the Zuidplaspolder, below average sea level. In 2020, the Netherlands imported 2,156 million kg of peat (5.39 million m3 at 400 kg/m3 dry peat): 44.5% from Germany (2020), 9.5% from Estonia (2018), 9.2% from Latvia (2020), 7.2% from Ireland (2018), 8.0% from Sweden (2019), 6.5% from Lithuania (2020), 5.1% from Belgium (2019) and 1.7% from Denmark (2019); 1.35 million kg was exported. Most is used in gardening and greenhouse horticulture. Since the Netherlands did not have many trees to use as firewood or charcoal, one use the Dutch made of the available peat was to fire kilns to make pottery. During World War II, the Dutch Resistance came up with an unusual use for peat: since peat was so available in the fields, resistance fighters sometimes stacked it into human-sized piles and used the piles for target practice.

Estonia

After oil shale, peat is the second-most-mined natural resource in Estonia. The peat production sector has a yearly revenue of around €100 million and is mostly export-oriented. Peat is extracted from around .

India

Sikkim

The mountains of the Himalayas and the Tibetan Plateau contain pockets of high-altitude wetlands. Khecheopalri, in the eastern Indian state of Sikkim, is one of Sikkim's most famous and diverse peatlands; it includes 682 species representing five kingdoms, 196 families and 453 genera.

United Kingdom

England

England has around 1 million acres of peatland. Peatlands in England store 584 million tonnes of carbon in total but emit around 11 million tonnes of CO2 every year due to degradation and draining. In 2021 only 124 people owned 60% of England's peatland. The extraction of peat from the Somerset Levels began during Roman times and has been carried out since the Levels were first drained.
On Dartmoor, several commercial distillation plants were formed and run by the British Patent Naphtha Company in 1844. These produced naphtha on a commercial scale from the high-quality local peat. Fenn's, Whixall and Bettisfield Mosses is an element of a post-Ice Age peat bog that straddles the England–Wales border and contains many rare plant and animal species due to the acidic environment created by the peat. Only lightly hand-dug, it is now a national nature reserve and is being restored to its natural condition. The industrial extraction of peat occurred at the Thorne Moor site, outside Doncaster near the village of Hatfield. Government policy incentivised commercial removal of peat for agricultural use. This caused much destruction of the area during the 1980s. The removal of the peat resulted in later flooding further downstream at Goole due to the loss of water-retaining peatlands. Recently, regeneration of peatland has occurred as part of the Thorne Moors project, and at Fleet Moss, organised by Yorkshire Wildlife Trust.

Northern Ireland

In Northern Ireland, there is small-scale domestic turf cutting in rural areas, but areas of bog have been diminished because of changes in agriculture and afforestation. In response, tentative steps have been taken towards conservation, such as Peatlands Park, County Armagh, which is an Area of Special Scientific Interest.

Scotland

Some Scotch whisky distilleries, such as those on Islay, use peat fires to dry malted barley. The drying process takes about 30 hours. This gives the whiskies a distinctive smoky flavour, often called "peatiness". The peatiness, or degree of peat flavour, of a whisky is calculated in ppm of phenol. Normal Highland whiskies have a peat level of up to 30 ppm, and the whiskies on Islay usually have up to 50 ppm. In rare types like Octomore, the whisky can have more than 100 ppm of phenol. Scotch ales can also use peat-roasted malt, imparting a similar smoked flavour.
Because they are easily compressed under minimal weight, peat deposits pose significant difficulties for building structures, roads and railways. When the West Highland railway line was constructed across Rannoch Moor in western Scotland, its builders had to float the tracks on a multi-thousand-ton mattress of tree roots, brushwood, earth and ash.

Wales

Wales has over 70,000 hectares of peatland. Most of it is blanket peat bog in the highlands, but there are a few hundred hectares of peatland in lowland areas. Some peatland areas in Wales are in poor condition. In 2020, the Welsh Government established a five-year peatland restoration initiative, which will be implemented by Natural Resources Wales (NRW).

Canada

There are 294 million acres of peatland in Canada, with approximately 43,500 acres in production and another 34,500 acres involved in past production. The current and past acreage in production amounts to 0.03 percent of Canada's peatland. Canada is the top exporter of peat by value. In 2021, the top exporters of peat (including peat litter), whether or not agglomerated, were Canada ($580,591.39K, 1,643,950,000 kg), the European Union ($445,304.42K, 2,362,280,000 kg), Latvia ($275,459.14K, 2,184,860,000 kg), the Netherlands ($235,250.84K, 1,312,850,000 kg) and Germany ($223,414.66K, 1,721,170,000 kg).
Coppicing
Coppicing is the traditional method in woodland management of cutting down a tree to a stump, which in many species encourages new shoots to grow from the stump or roots, thus ultimately regrowing the tree. A forest or grove that has been subject to coppicing is called a copse or coppice, in which young tree stems are repeatedly cut down to near ground level. The resulting living stumps are called stools. New growth emerges, and after a number of years, the coppiced trees are harvested, and the cycle begins anew. Pollarding is a similar process carried out at a higher level on the tree in order to prevent grazing animals from eating new shoots. Daisugi (台杉, where sugi refers to Japanese cedar) is a similar Japanese technique. Many silviculture practices involve cutting and regrowth; coppicing has been of significance in many parts of lowland temperate Europe. The widespread and long-term practice of coppicing as a landscape-scale industry remains of special importance in southern England, and many of the English-language terms referenced in this article are particularly relevant to historic and contemporary practice in that area. Typically a coppiced woodland is harvested in sections, or coups (also spelled 'coupe' but pronounced 'coop', descended from the French or Norman French couper, 'to cut', or coupé, 'has been cut'), on a rotation. English terms for an area of coppice include 'cant', 'panel' and 'fall', which can be interchangeable and regionally based. In this way, a crop is available each year somewhere in the woodland. Coppicing has the effect of providing a rich variety of habitats, as the woodland always has a range of different-aged coppice growing in it, which is beneficial for biodiversity. The cycle length depends upon the species cut, the local custom, and the use of the product. Birch can be coppiced for faggots on a three- or four-year cycle, whereas oak can be coppiced over a fifty-year cycle for poles or firewood.
Trees being coppiced do not die of old age, as coppicing maintains the tree at a juvenile stage, allowing it to reach immense ages. The age of a stool may be estimated from its diameter; some are so large (as much as across) that they are thought to have been continually coppiced for centuries.

History

Evidence suggests that coppicing has been continuously practised since pre-history. Coppiced stems are characteristically curved at the base. This curve occurs as the competing stems grow out from the stool in the early stages of the cycle, then up toward the sky as the canopy closes. The curve may allow the identification of coppice timber in archaeological sites. Timber in the Sweet Track in Somerset (built in the winter of 3807 and 3806 BCE) has been identified as coppiced Tilia species. Originally, the silvicultural system now called coppicing was practised solely for small-wood production. In German this is called Niederwald, which translates as "low forest". Later, in mediaeval times, farmers encouraged pigs to feed on acorns, and so some trees were allowed to grow bigger. This different silvicultural system is called in English coppice with standards; in German it is called Mittelwald ("middle forest"). As modern forestry (Hochwald in German, which translates as "high forest") seeks to harvest timber mechanically, and pigs are generally no longer fed on acorns, both systems have declined. However, there are cultural and wildlife benefits from these two silvicultural systems, so both can be found where timber production or some other main forestry purpose (such as a protection forest against an avalanche) is not the sole management objective of the woodland. In the 16th and 17th centuries, the technology of charcoal iron production became widely established in England, continuing in some areas until the late 19th century.
Charcoal once fuelled all metalworking (with evidence dating back many thousands of years) and other high-temperature industrial processes (see white coal), but scarcity led to the eventual adoption of coal as the primary fuel. The decline of charcoal as an industrial fuel accelerated after the discovery of coke (coal heated in limited oxygen) in the 18th century, leading to a crash in UK charcoal production in the century thereafter. Notably, the scarcity of charcoal for industrial processes actually led to the survival of large areas of woodland in the Weald of Kent and Sussex, as large areas of coppiced woodland were jealously guarded by Roman ironmasters and later by medieval ironmasters. Charcoal hearths in woodlands are, in context, indications of ancient woodland status. Along with the need for oak bark for tanning, charcoal production required large amounts of coppiced wood. With this coppice management, wood could in principle be provided for those growing industries indefinitely. This was regulated by a statute of 1544 of Henry VIII, which required woods to be enclosed after cutting (to prevent browsing by animals) and 12 standels (standards or mature uncut trees) to be left in each acre, to be grown into timber. Coppice with standards (scattered individual stems allowed to grow on through several coppice cycles) has been commonly used throughout most of Europe as a means of giving greater flexibility in the resulting forest product from any one area. The woodland provides the small material from the coppice as well as a range of larger timber for such uses as house building, bridge repair, cart-making and so on. Coppice produce was used in parallel with larger timber: for example, hazel and willow as woven wattle infill panels (daubed or plastered) in housebuilding, ash coppice to produce components for carts, and several species for components for bridge rails and fences. In the 18th century coppicing in Britain began a long decline.
This was brought about by the erosion of its traditional markets. Firewood was no longer needed for domestic or industrial uses as coal and coke became easily obtained and transported, and wood as a construction material was gradually replaced by newer materials. Coppicing died out first in the north of Britain and steadily contracted toward the south-east, until by the 1960s active commercial coppice was heavily concentrated in Kent and Sussex.

Practice

The shoots (or suckers) may be used either in their young state for interweaving in wattle fencing (as is the practice with coppiced willows and hazel), or the new shoots may be allowed to grow into large poles, as was often the custom with trees such as oaks, ashes and sweet chestnut (Castanea sativa). This creates long, straight poles which do not have the bends and forks of naturally grown trees. Coppicing may be practised to encourage specific growth patterns, as with cinnamon trees, which are grown for their bark. (Note that the use of the term 'suckers' above is incompatible with an accurate understanding of how coppice works. Coppice stems grow from epicormic buds developed from groups of cells called bud precursors in the cambium under the bark on cut stem bases. Epicormic buds develop and grow when the upper parts of the stem, which normally produce inhibitory plant hormone analogues, are removed. 'Suckers' refers to shoots growing from roots in response to felling to ground level, as seen in wild cherry or gean (Prunus avium) and aspen (Populus tremula), but the term has also been adopted in horticulture to refer to a competing shoot sprouting from a rootstock below the interface with the scion. Such shoots, if not removed, can grow more vigorously than the grafted material, which can fail and die.) Another, more complicated system is called compound coppice. Here some of the standards would be left and some harvested; some of the coppice would be allowed to grow into new standards, and some regenerated coppice would remain.
Thus there would be three age classes. Coppiced hardwoods were used extensively in carriage and shipbuilding, and they are still sometimes grown for making wooden buildings and furniture. Compound coppice is also a term used when two or more different species are grown in the same cant and cut on different cycles. For example: hazel-ash coppice with hazel cut at 7 years and ash in the same area cut at 21 years (at every third cut, all stools in the cant are cut). But note that under coppice with standards (for instance oak standards over hazel) the oak was cut on a much longer cycle; with hazel-ash under oak standards there are three cycles superimposed. However, a range of ages of standards was managed for, to allow for continuity of oak production for timber (shipbuilding especially), and this was sometimes legislated for. It is commonly written that there should be 12 standards per acre. But this '12 per acre' includes (as an average over the whole wood) maybe one mature oak per acre, a couple of young standards and several waivers, with a larger number of seedlings and saplings whose genesis was sporadic and occurred when oak mast years coincided with coppice cuts, planting being relatively rare until perhaps the 16th century. Coppice can be complicated, which is likely why a large area of one species (hazel, sweet chestnut) with no standards is called 'simple coppice'. Waivers (also 'wavers') are young oak trees, older than seedlings or saplings, that may become standards in due turn, or may be cut before becoming standards. If you can get both hands around it at breast height but can't get four Sussex fence rails out of the first 10 feet, it's a waiver. Withies for wicker-work are grown in coppices of various willow species, principally osier. In France, sweet chestnut trees are coppiced for use as canes and bâtons for the martial art Canne de combat (also known as Bâton français).
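The nested hazel and ash cutting cycles described earlier in this section can be sketched as a cutting calendar; the 7- and 21-year cycles come from the text, while the 63-year horizon is an arbitrary illustration.

```python
# Cutting calendar for a hazel-ash compound coppice: hazel on a 7-year cycle,
# ash on a 21-year cycle, so every third hazel cut also takes the ash
# (all stools in the cant cut together). The 63-year horizon is illustrative.
HAZEL_CYCLE, ASH_CYCLE = 7, 21

def cut_years(cycle: int, horizon: int) -> list[int]:
    """Years (counted from the last full cut) in which a crop is taken."""
    return list(range(cycle, horizon + 1, cycle))

hazel = cut_years(HAZEL_CYCLE, 63)
ash = cut_years(ASH_CYCLE, 63)
assert set(ash) <= set(hazel)   # every ash cut coincides with a hazel cut
assert hazel[2::3] == ash       # ...specifically with every third one
print(hazel)  # [7, 14, 21, 28, 35, 42, 49, 56, 63]
print(ash)    # [21, 42, 63]
```

Adding oak standards on a still longer rotation would superimpose a third, sparser sequence on the same cant, which is why the text calls compound coppice complicated.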
Some Eucalyptus species are coppiced in a number of countries and regions, including Australia, North America, Uganda, and Sudan. The Sal tree is coppiced in India, and the Moringa oleifera tree is coppiced in many countries, including India. Sometimes former coppice is converted to high-forest woodland by the practice of singling. All but one of the regrowing stems are cut, leaving the remaining one to grow as if it were a maiden (uncut) tree. The boundaries of coppice coups were sometimes marked by cutting certain trees as pollards or stubs. United Kingdom In southern Britain, coppice was traditionally hazel, hornbeam, field maple, ash, sweet chestnut, occasionally sallow, elm, small-leafed lime and rarely oak or beech, grown among pedunculate or sessile oak, ash or beech standards. In wet areas alder and willows were used. A small, and growing, number of people make a living wholly or partly by working coppices in the area today, at places such as the Weald and Downland Living Museum. Coppices provided wood for many purposes, especially charcoal before coal was economically significant in metal smelting. A minority of these woods are still operated for coppice today, often by conservation organisations, producing material for hurdle-making, thatching spars, local charcoal-burning or other crafts. The only remaining large-scale commercial coppice crop in England is sweet chestnut, which is grown in parts of Sussex and Kent. Much of this was established as plantations in the 19th century for hop-pole production (hop-poles support the hop plant as it grows) and is nowadays cut on a 12- to 18-year cycle for splitting and binding into cleft chestnut paling fence, or on a 20- to 35-year cycle for cleft post-and-rail fencing, or for sawing into small lengths to be finger-jointed for architectural use. Other material goes to make farm fencing and to be chipped for modern wood-fired heating systems.
In northwest England, coppice-with-standards has been the norm, the standards often of oak, with relatively little simple coppice. After World War II, a great deal was planted up with conifers or became neglected. Coppice-working almost died out, though a few men continued in the woods. Wildlife Coppice management favours a range of wildlife, often of species adapted to open woodland. After cutting, the increased light allows existing woodland-floor vegetation such as bluebell, anemone and primrose to grow vigorously. Often brambles grow around the stools, encouraging insects, or various small mammals that can use the brambles as protection from larger predators. Woodpiles (if left in the coppice) encourage insects such as beetles to come into an area. The open area is then colonised by many animals such as nightingale, European nightjar and fritillary butterflies. As the coup grows, the canopy closes and it becomes unsuitable for these animals again, but in an actively managed coppice there is always another recently cut coup nearby, and the populations therefore move around, following the coppice management. However, most British coppices have not been managed in this way for many decades. The coppice stems have grown tall (the coppice is said to be overstood), forming a heavily shaded woodland of many closely spaced stems with little ground vegetation. The open-woodland animals survive in small numbers along woodland rides or not at all, and many of these once-common species have become rare. Overstood coppice is a habitat of relatively low biodiversity: it does not support the open-woodland species, but neither does it support many of the characteristic species of high forest, because it lacks many high-forest features such as substantial dead wood, clearings and stems of varied ages.
Suitable conservation management of these abandoned coppices may be to restart coppice management, or in some cases it may be more appropriate to use singling and selective clearance to establish a high-forest structure. Natural occurrence Coppice and pollard growth is a response of the tree to damage, and can occur naturally. Trees may be browsed or broken by large herbivorous animals, such as cattle or elephants, felled by beavers or blown over by the wind. Some trees, such as linden, may produce a line of coppice shoots from a fallen trunk, and sometimes these develop into a line of mature trees. For some trees, such as the common beech (Fagus sylvatica), the ease of coppicing depends on the altitude: it is much more effective for trees in the montane zone. For energy wood Coppicing of willow, alder and poplar for energy wood has proven commercially successful. The Willow Biomass Project in the United States is an example of this. In this case the coppice is cut on an annual or, more commonly, a three-year cycle, which appears to maximize the production volume from the stand. Such frequent cutting means the soils can be easily depleted, so fertilizers are often required. The stock also becomes exhausted after some years and so will be replaced with new plants. The method of harvesting energy wood can be mechanized by adaptation of specialized agricultural machinery. Species and cultivars vary in when they should be cut, regeneration times and other factors. However, full life-cycle analysis has shown that poplars have a lower effect on greenhouse gas emissions for energy production than alternatives.
Technology
Trees and forestry
99426
https://en.wikipedia.org/wiki/Naphthalene
Naphthalene
Naphthalene is an organic compound with formula C10H8. It is the simplest polycyclic aromatic hydrocarbon, and is a white crystalline solid with a characteristic odor that is detectable at concentrations as low as 0.08 ppm by mass. As an aromatic hydrocarbon, naphthalene's structure consists of a fused pair of benzene rings. It is the main ingredient of traditional mothballs. History In the early 1820s, two separate reports described a white solid with a pungent odor derived from the distillation of coal tar. In 1821, John Kidd cited these two disclosures and then described many of this substance's properties and the means of its production. He proposed the name naphthaline, as it had been derived from a kind of naphtha (a broad term encompassing any volatile, flammable liquid hydrocarbon mixture, including coal tar). Naphthalene's chemical formula was determined by Michael Faraday in 1826. The structure of two fused benzene rings was proposed by Emil Erlenmeyer in 1866, and confirmed by Carl Gräbe three years later. Physical properties A naphthalene molecule can be viewed as the fusion of a pair of benzene rings. (In organic chemistry, rings are fused if they share two or more atoms.) As such, naphthalene is classified as a benzenoid polycyclic aromatic hydrocarbon (PAH). The eight carbon atoms that are not shared by the two rings carry one hydrogen atom each. For purposes of the standard IUPAC nomenclature of derived compounds, those eight atoms are numbered 1 through 8 in sequence around the perimeter of the molecule, starting with a carbon atom adjacent to a shared one. The shared carbon atoms are labeled 4a (between 4 and 5) and 8a (between 8 and 1). Molecular geometry The molecule is planar, like benzene. Unlike benzene, the carbon–carbon bonds in naphthalene are not of the same length. The bonds C1−C2, C3−C4, C5−C6 and C7−C8 are about 1.37 Å (137 pm) in length, whereas the other carbon–carbon bonds are about 1.42 Å (142 pm) long.
This difference, established by X-ray diffraction, is consistent with the valence bond model in naphthalene and in particular, with the theorem of cross-conjugation. This theorem would describe naphthalene as an aromatic benzene unit bonded to a diene but not extensively conjugated to it (at least in the ground state), which is consistent with two of its three resonance structures. Because of this resonance, the molecule has bilateral symmetry across the plane of the shared carbon pair, as well as across the plane that bisects bonds C2−C3 and C6−C7, and across the plane of the carbon atoms. Thus there are two sets of equivalent hydrogen atoms: the alpha positions, numbered 1, 4, 5, and 8, and the beta positions, 2, 3, 6, and 7. Two isomers are then possible for mono-substituted naphthalenes, corresponding to substitution at an alpha or beta position. Structural isomers of naphthalene that have two fused aromatic rings include azulene, which has a 5–7 fused ring system, and bicyclo[6.2.0]decapentaene, which has a fused 4–8 ring system. The point group symmetry of naphthalene is D2h. Electrical conductivity Pure crystalline naphthalene is a moderate insulator at room temperature, with resistivity of about 10^12 Ω⋅m. The resistivity drops more than a thousandfold on melting, to about 4 × 10^8 Ω⋅m. Both in the liquid and in the solid, the resistivity depends on temperature as ρ = ρ0 exp(E/(kT)), where ρ0 (Ω⋅m) and E (eV) are constant parameters, k is the Boltzmann constant (8.617 × 10^−5 eV/K), and T is absolute temperature (K). The parameter E is 0.73 eV in the solid. However, the solid shows semiconducting character below 100 K. Chemical properties Reactions with electrophiles In electrophilic aromatic substitution reactions, naphthalene reacts more readily than benzene. For example, chlorination and bromination of naphthalene proceed without a catalyst to give 1-chloronaphthalene and 1-bromonaphthalene, respectively.
Likewise, whereas both benzene and naphthalene can be alkylated using Friedel–Crafts reaction conditions, naphthalene can also be easily alkylated by reaction with alkenes or alcohols, using sulfuric or phosphoric acid catalysts. In terms of regiochemistry, electrophiles attack at the alpha position. The selectivity for alpha over beta substitution can be rationalized in terms of the resonance structures of the intermediate: for the alpha substitution intermediate, seven resonance structures can be drawn, of which four preserve an aromatic ring. For beta substitution, the intermediate has only six resonance structures, and only two of these are aromatic. Sulfonation gives the "alpha" product naphthalene-1-sulfonic acid as the kinetic product but naphthalene-2-sulfonic acid as the thermodynamic product. The 1-isomer forms predominantly at 25 °C, and the 2-isomer at 160 °C. Sulfonation to give the 1- and 2-sulfonic acid occurs readily. Further sulfonation gives di-, tri-, and tetrasulfonic acids. Lithiation Analogous to the synthesis of phenyllithium is the conversion of 1-bromonaphthalene to 1-lithionaphthalene, by lithium–halogen exchange: C10H7Br + BuLi → C10H7Li + BuBr The resulting lithionaphthalene undergoes a second lithiation, in contrast to the behavior of phenyllithium. These 1,8-dilithio derivatives are precursors to a host of peri-naphthalene derivatives. Reduction and oxidation With alkali metals, naphthalene forms the dark blue-green radical anion salts such as sodium naphthalene, Na+C10H8•−. The naphthalene anions are strong reducing agents. Naphthalene can be hydrogenated under high pressure in the presence of metal catalysts to give 1,2,3,4-tetrahydronaphthalene (C10H12), also known as tetralin. Further hydrogenation yields decahydronaphthalene or decalin (C10H18). Oxidation with oxygen in the presence of vanadium pentoxide as catalyst gives phthalic anhydride: C10H8 + 4.5 O2 → C6H4(CO)2O + 2 CO2 + 2 H2O This reaction is the basis of the main use of naphthalene.
Oxidation can also be effected using conventional stoichiometric chromate or permanganate reagents. Production From the 1960s until the 1990s, significant amounts of naphthalene were produced from heavy petroleum fractions during refining, but present-day production is mainly from coal tar. Approximately 1.3 million tons are produced annually. Naphthalene is the most abundant single component of coal tar. The composition of coal tar varies with coal type and processing, but typical coal tar is about 10% naphthalene by weight. In industrial practice, distillation of coal tar yields an oil containing about 50% naphthalene, along with twelve other aromatic compounds. This oil, after being washed with aqueous sodium hydroxide to remove acidic components (chiefly various phenols), and with sulfuric acid to remove basic components, undergoes fractional distillation to isolate naphthalene. The crude naphthalene resulting from this process is about 95% naphthalene by weight. The chief impurities are the sulfur-containing aromatic compound benzothiophene (< 2%), indane (0.2%), indene (< 2%), and methylnaphthalene (< 2%). Petroleum-derived naphthalene is usually purer than that derived from coal tar. Where required, crude naphthalene can be further purified by recrystallization from any of a variety of solvents, resulting in 99% naphthalene by weight, referred to as 80 °C naphthalene, after its melting point. In North America, the coal tar producers are Koppers Inc., Ruetgers Canada Inc. and Recochem Inc., and the primary petroleum producer is Monument Chemical Inc. In Western Europe the well-known producers are Koppers, Ruetgers, and Deza. In Eastern Europe, naphthalene is produced by a variety of integrated metallurgy complexes (Severstal, Evraz, Mechel, MMK) in Russia, dedicated naphthalene and phenol makers INKOR, Yenakievsky Metallurgy plant in Ukraine and ArcelorMittal Temirtau in Kazakhstan.
Other sources and occurrences Naphthalene and its alkyl homologs are the major constituents of creosote. Trace amounts of naphthalene are produced by magnolias and some species of deer, as well as the Formosan subterranean termite, possibly produced by the termite as a repellant against "ants, poisonous fungi and nematode worms". Some strains of the endophytic fungus Muscodor albus produce naphthalene among a range of volatile organic compounds, while Muscodor vitigenus produces naphthalene almost exclusively. Uses Naphthalene is used mainly as a precursor to derivative chemicals. The single largest use of naphthalene is the industrial production of phthalic anhydride, although more phthalic anhydride is made from o-xylene. Fumigant Naphthalene has been used as a fumigant. It was once the primary ingredient in mothballs, although its use has largely been replaced in favor of alternatives such as 1,4-dichlorobenzene. In a sealed container containing naphthalene pellets, naphthalene vapors build up to levels toxic to both the adult and larval forms of many moths that attack textiles. Other fumigant uses of naphthalene include use in soil as a fumigant pesticide, in attic spaces to repel insects and animals such as opossums, and in museum storage-drawers and cupboards to protect the contents from attack by insect pests. Solvent Molten naphthalene provides an excellent solubilizing medium for poorly soluble aromatic compounds. In many cases it is more efficient than other high-boiling solvents, such as dichlorobenzene, benzonitrile, nitrobenzene and durene. The reaction of C60 with anthracene is conveniently conducted in refluxing naphthalene to give the 1:1 Diels–Alder adduct. The aromatization of hydroporphyrins has been achieved using a solution of DDQ in naphthalene. 
Derivative uses The single largest use of naphthalene is the production of phthalic anhydride, which is an intermediate used to make plasticizers for polyvinyl chloride, and to make alkyd resin polymers used in paints and varnishes. Sulfonic acids and sulfonates Many naphthalenesulfonic acids and sulfonates are useful. Naphthalenesulfonic acids are used in the synthesis of 1-naphthol and 2-naphthol, precursors for various dyestuffs, pigments, rubber processing chemicals and other chemicals and pharmaceuticals. They are also used as dispersants in synthetic and natural rubbers, in agricultural pesticides, in dyes, and in lead–acid battery plates. Naphthalenedisulfonic acids such as Armstrong's acid are used as precursors and to form pharmaceutical salts such as CFT. The aminonaphthalenesulfonic acids are precursors for synthesis of many synthetic dyes. Alkyl naphthalene sulfonates (ANS) are used in many industrial applications as nondetergent surfactants (wetting agents) that effectively disperse colloidal systems in aqueous media. The major commercial applications are in the agricultural chemical industry, which uses ANS for wettable powder and wettable granular (dry-flowable) formulations, and in the textile and fabric industry, which uses the wetting and defoaming properties of ANS for bleaching and dyeing operations. Some naphthalenesulfonate polymers are superplasticizers used for the production of high strength concrete as well as water reducers in the production of gypsum wallboard. They are produced by treating naphthalenesulfonic acid with formaldehyde, followed by neutralization with sodium hydroxide or calcium hydroxide. Other derivative uses Many azo dyes are produced from naphthalene. Useful agrichemicals include naphthoxyacetic acids. Hydrogenation of naphthalene gives tetrahydronaphthalene (tetralin) and decahydronaphthalene (decalin), which are used as low-volatility solvents. Tetralin is used as a hydrogen-donor solvent. 
Alkylation of naphthalene with propylene gives a mixture of diisopropylnaphthalenes, which are useful as nonvolatile liquids for inks. Substituted naphthalenes serve as pharmaceuticals such as propranolol (a beta blocker) and nabumetone (a nonsteroidal anti-inflammatory drug). Other uses Several uses stem from naphthalene's high volatility: it is used to create artificial pores in the manufacture of high-porosity grinding wheels; it is used in engineering studies of heat transfer using mass sublimation; and it has been explored as a sublimable propellant for cold gas satellite thrusters. Health effects Exposure to large amounts of naphthalene may damage or destroy red blood cells, most commonly in people with the inherited condition known as glucose-6-phosphate dehydrogenase (G6PD) deficiency, which affects approximately 400 million people. Humans, in particular children, have developed hemolytic anemia after ingesting mothballs or deodorant blocks containing naphthalene. Symptoms include fatigue, lack of appetite, restlessness, and pale skin. Exposure to large amounts of naphthalene may cause confusion, nausea, vomiting, diarrhea, blood in the urine, and jaundice (yellow coloration of the skin due to dysfunction of the liver). The US National Toxicology Program (NTP) conducted a study in which male and female rats and mice were exposed to naphthalene vapors on weekdays for two years. Both male and female rats exhibited evidence of carcinogenesis with increased incidences of adenoma and neuroblastoma of the nose. Female mice exhibited some evidence of carcinogenesis based on increased incidences of alveolar and bronchiolar adenomas of the lung, while male mice exhibited no evidence of carcinogenesis. The International Agency for Research on Cancer (IARC) classifies naphthalene as possibly carcinogenic to humans and animals (Group 2B).
The IARC also points out that acute exposure causes cataracts in humans, rats, rabbits, and mice; and that hemolytic anemia (described above) can occur in children and infants after oral or inhalation exposure or after maternal exposure during pregnancy. A probable mechanism for the carcinogenic effects of mothballs and some types of air fresheners containing naphthalene has been identified. Regulation US government agencies have set occupational exposure limits to naphthalene exposure. The Occupational Safety and Health Administration has set a permissible exposure limit at 10 ppm (50 mg/m3) over an eight-hour time-weighted average. The National Institute for Occupational Safety and Health has set a recommended exposure limit at 10 ppm (50 mg/m3) over an eight-hour time-weighted average, as well as a short-term exposure limit at 15 ppm (75 mg/m3). Naphthalene's minimum odor threshold is 0.084 ppm for humans. Mothballs and other products containing naphthalene have been banned within the EU since 2008. In China, the use of naphthalene in mothballs is forbidden. Danger to human health and the common use of natural camphor are cited as reasons for the ban. Naphthalene derivatives The partial list of naphthalene derivatives includes the following compounds:
Physical sciences
Aromatic hydrocarbons
Chemistry
99438
https://en.wikipedia.org/wiki/Extended%20Euclidean%20algorithm
Extended Euclidean algorithm
In arithmetic and computer programming, the extended Euclidean algorithm is an extension to the Euclidean algorithm, and computes, in addition to the greatest common divisor (gcd) of integers a and b, also the coefficients of Bézout's identity, which are integers x and y such that ax + by = gcd(a, b). This is a certifying algorithm, because the gcd is the only number that can simultaneously satisfy this equation and divide the inputs. It allows one to compute also, with almost no extra cost, the quotients of a and b by their greatest common divisor. The term extended Euclidean algorithm also refers to a very similar algorithm for computing the polynomial greatest common divisor and the coefficients of Bézout's identity of two univariate polynomials. The extended Euclidean algorithm is particularly useful when a and b are coprime. With that provision, x is the modular multiplicative inverse of a modulo b, and y is the modular multiplicative inverse of b modulo a. Similarly, the polynomial extended Euclidean algorithm allows one to compute the multiplicative inverse in algebraic field extensions and, in particular, in finite fields of non-prime order. It follows that both extended Euclidean algorithms are widely used in cryptography. In particular, the computation of the modular multiplicative inverse is an essential step in the derivation of key-pairs in the RSA public-key encryption method. Description The standard Euclidean algorithm proceeds by a succession of Euclidean divisions whose quotients are not used. Only the remainders are kept. For the extended algorithm, the successive quotients are used.
More precisely, the standard Euclidean algorithm with a and b as input consists of computing a sequence q_1, ..., q_k of quotients and a sequence r_0, ..., r_{k+1} of remainders such that

r_0 = a, r_1 = b,
r_{i+1} = r_{i−1} − q_i r_i and 0 ≤ r_{i+1} < |r_i|.

It is the main property of Euclidean division that the inequalities on the right define q_i and r_{i+1} uniquely from r_{i−1} and r_i. The computation stops when one reaches a remainder r_{k+1} which is zero; the greatest common divisor is then the last non-zero remainder r_k. The extended Euclidean algorithm proceeds similarly, but adds two other sequences, as follows:

s_0 = 1, s_1 = 0, s_{i+1} = s_{i−1} − q_i s_i,
t_0 = 0, t_1 = 1, t_{i+1} = t_{i−1} − q_i t_i.

The computation also stops when r_{k+1} = 0 and gives the following: r_k is the greatest common divisor of the input a = r_0 and b = r_1; the Bézout coefficients are s_k and t_k, that is gcd(a, b) = a s_k + b t_k; the quotients of a and b by their greatest common divisor are given, up to sign, by t_{k+1} and s_{k+1}, that is |s_{k+1}| = b/gcd(a, b) and |t_{k+1}| = a/gcd(a, b). Moreover, if a and b are both positive and gcd(a, b) ≠ min(a, b), then |s_i| ≤ ⌊b/(2 gcd(a, b))⌋ and |t_i| ≤ ⌊a/(2 gcd(a, b))⌋ for 0 ≤ i ≤ k, where ⌊x⌋ denotes the integral part of x, that is the greatest integer not greater than x. This implies that the pair of Bézout's coefficients provided by the extended Euclidean algorithm is the minimal pair of Bézout coefficients, as being the unique pair satisfying both above inequalities. Also it means that the algorithm can be done without integer overflow by a computer program using integers of a fixed size that is larger than that of a and b. Example The following table shows how the extended Euclidean algorithm proceeds with input a = 240 and b = 46.

i | q_{i−1} | r_i | s_i | t_i
0 |         | 240 |  1  |    0
1 |         |  46 |  0  |    1
2 |    5    |  10 |  1  |   −5
3 |    4    |   6 | −4  |   21
4 |    1    |   4 |  5  |  −26
5 |    1    |   2 | −9  |   47
6 |    2    |   0 | 23  | −120

The greatest common divisor is 2, the last non-zero entry in the remainder column. The computation stops at row 6, because the remainder in it is 0. Bézout coefficients appear in the last two columns of the second-to-last row. In fact, it is easy to verify that −9 × 240 + 47 × 46 = 2. Finally the last two entries 23 and −120 of the last row are, up to the sign, the quotients of the input 46 and 240 by the greatest common divisor 2. Proof As 0 ≤ r_{i+1} < |r_i|, the sequence of the r_i is a decreasing sequence of nonnegative integers (from i = 2 on). Thus it must stop with some r_{k+1} = 0. This proves that the algorithm stops eventually.
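The recurrences described above are easy to run mechanically. A minimal Python sketch (the function name and the inputs 240 and 46 are illustrative choices):

```python
def eea_table(a, b):
    """Return rows (i, q_{i-1}, r_i, s_i, t_i) of the extended Euclidean algorithm."""
    rows = [(0, None, a, 1, 0), (1, None, b, 0, 1)]
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    i = 1
    while r != 0:
        q = old_r // r                    # quotient q_i of the Euclidean division
        old_r, r = r, old_r - q * r       # remainder recurrence
        old_s, s = s, old_s - q * s       # first auxiliary sequence
        old_t, t = t, old_t - q * t       # second auxiliary sequence
        i += 1
        rows.append((i, q, r, s, t))
    return rows

for row in eea_table(240, 46):
    print(row)
# The second-to-last row carries the gcd (2) and the Bézout
# coefficients (-9, 47); indeed -9 * 240 + 47 * 46 == 2.
```

The last row holds, up to sign, the quotients of the inputs by their gcd, matching the properties stated above.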
As r_{i+1} = r_{i−1} − q_i r_i, the greatest common divisor is the same for (r_{i−1}, r_i) and (r_i, r_{i+1}). This shows that the greatest common divisor of the input a = r_0, b = r_1 is the same as that of r_k, r_{k+1} = 0. This proves that r_k is the greatest common divisor of a and b. (Until this point, the proof is the same as that of the classical Euclidean algorithm.) As a = r_0 and b = r_1, we have a s_i + b t_i = r_i for i = 0 and 1. The relation follows by induction for all i > 1:

r_{i+1} = r_{i−1} − q_i r_i = (a s_{i−1} + b t_{i−1}) − q_i (a s_i + b t_i) = a s_{i+1} + b t_{i+1}.

Thus s_k and t_k are Bézout coefficients. Consider the matrix

A_i = ( s_{i−1}  s_i ; t_{i−1}  t_i ).

The recurrence relation may be rewritten in matrix form

A_{i+1} = A_i · ( 0  1 ; 1  −q_i ).

The matrix A_1 is the identity matrix and its determinant is one. The determinant of the rightmost matrix in the preceding formula is −1. It follows that the determinant of A_i is (−1)^{i−1}. In particular, for i = k + 1 we have

s_k t_{k+1} − t_k s_{k+1} = (−1)^k.

Viewing this as a Bézout's identity, this shows that s_{k+1} and t_{k+1} are coprime. The relation a s_{k+1} + b t_{k+1} = 0 that has been proved above and Euclid's lemma show that s_{k+1} divides b, that is, b = d s_{k+1} for some integer d. Dividing by s_{k+1} the relation a s_{k+1} = −b t_{k+1} gives a = −d t_{k+1}. So, s_{k+1} and −t_{k+1} are coprime integers that are the quotients of b and a by a common factor, which is thus their greatest common divisor or its opposite. To prove the last assertion, assume that a and b are both positive and gcd(a, b) ≠ min(a, b). Then a ≠ b, and if a < b, it can be seen that the s and t sequences for (a, b) under the extended Euclidean algorithm are, up to initial 0s and 1s, the t and s sequences for (b, a). The definitions then show that the (a, b) case reduces to the (b, a) case. So assume that a > b without loss of generality. It can be seen that s_2 is 1 and s_3 (which exists since gcd(a, b) ≠ b) is a negative integer. Thereafter, the s_i alternate in sign and strictly increase in magnitude, which follows inductively from the definitions and the fact that q_i ≥ 1 for 1 ≤ i ≤ k; the case i = 1 holds because a > b. The same is true for the t_i after the first few terms, for the same reason. Furthermore, it is easy to see that q_k ≥ 2 (when a and b are both positive and gcd(a, b) ≠ min(a, b)). Thus, noticing that |s_{k+1}| = |s_{k−1}| + q_k |s_k| ≥ 2 |s_k|, we obtain

|s_k| ≤ |s_{k+1}| / 2 = b / (2 gcd(a, b)), and similarly |t_k| ≤ a / (2 gcd(a, b)).

This, accompanied by the fact that s_k and t_k are at least as large in absolute value as any previous s_i or t_i respectively, completes the proof.
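The determinant argument in the proof above can be spot-checked numerically. A small Python sketch (the function name and the inputs 240 and 46 are illustrative choices):

```python
def bezout_determinants(a, b):
    """Determinants of A_i = [[s_{i-1}, s_i], [t_{i-1}, t_i]] along the run.

    By the proof, det A_i = (-1)^(i-1), so the values should alternate
    between +1 and -1 with magnitude always 1.
    """
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    dets = [old_s * t - old_t * s]        # det A_1 = det of the identity = 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
        dets.append(old_s * t - old_t * s)
    return dets

print(bezout_determinants(240, 46))   # [1, -1, 1, -1, 1, -1]
```

The final value is the determinant s_k t_{k+1} − t_k s_{k+1} = (−1)^k, which witnesses that the last pair of coefficients is coprime.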
Polynomial extended Euclidean algorithm For univariate polynomials with coefficients in a field, everything works similarly: Euclidean division, Bézout's identity and the extended Euclidean algorithm. The first difference is that, in the Euclidean division and the algorithm, the inequality 0 ≤ r_{i+1} < |r_i| has to be replaced by an inequality on the degrees, deg r_{i+1} < deg r_i. Otherwise, everything which precedes in this article remains the same, simply by replacing integers by polynomials. A second difference lies in the bound on the size of the Bézout coefficients provided by the extended Euclidean algorithm, which is more accurate in the polynomial case, leading to the following theorem. If a and b are two nonzero polynomials, then the extended Euclidean algorithm produces the unique pair of polynomials (s, t) such that

a s + b t = gcd(a, b)

and

deg s < deg b − deg(gcd(a, b)), deg t < deg a − deg(gcd(a, b)).

A third difference is that, in the polynomial case, the greatest common divisor is defined only up to the multiplication by a non-zero constant. There are several ways to define unambiguously a greatest common divisor. In mathematics, it is common to require that the greatest common divisor be a monic polynomial. To get this, it suffices to divide every element of the output by the leading coefficient of r_k. This allows that, if a and b are coprime, one gets 1 in the right-hand side of Bézout's identity. Otherwise, one may get any non-zero constant. In computer algebra, the polynomials commonly have integer coefficients, and this way of normalizing the greatest common divisor introduces too many fractions to be convenient. The second way to normalize the greatest common divisor in the case of polynomials with integer coefficients is to divide every output by the content of r_k, to get a primitive greatest common divisor. If the input polynomials are coprime, this normalisation also provides a greatest common divisor equal to 1. The drawback of this approach is that a lot of fractions should be computed and simplified during the computation.
A third approach consists in extending the algorithm of subresultant pseudo-remainder sequences in a way that is similar to the extension of the Euclidean algorithm to the extended Euclidean algorithm. This allows that, when starting with polynomials with integer coefficients, all polynomials that are computed have integer coefficients. Moreover, every computed remainder is a subresultant polynomial. In particular, if the input polynomials are coprime, then Bézout's identity becomes

a s + b t = Res(a, b),

where Res(a, b) denotes the resultant of a and b. In this form of Bézout's identity, there is no denominator in the formula. If one divides everything by the resultant one gets the classical Bézout's identity, with an explicit common denominator for the rational numbers that appear in it. Pseudocode To implement the algorithm that is described above, one should first remark that only the two last values of the indexed variables are needed at each step. Thus, for saving memory, each indexed variable must be replaced by just two variables. For simplicity, the following algorithm (and the other algorithms in this article) uses parallel assignments. In a programming language which does not have this feature, the parallel assignments need to be simulated with an auxiliary variable. For example, the first one,

(old_r, r) := (r, old_r − quotient × r)

is equivalent to

prov := r;
r := old_r − quotient × prov;
old_r := prov;

and similarly for the other parallel assignments. This leads to the following code:

function extended_gcd(a, b)
    (old_r, r) := (a, b)
    (old_s, s) := (1, 0)
    (old_t, t) := (0, 1)
    while r ≠ 0 do
        quotient := old_r div r
        (old_r, r) := (r, old_r − quotient × r)
        (old_s, s) := (s, old_s − quotient × s)
        (old_t, t) := (t, old_t − quotient × t)
    output "Bézout coefficients:", (old_s, old_t)
    output "greatest common divisor:", old_r
    output "quotients by the gcd:", (t, s)

The quotients of a and b by their greatest common divisor, which is output, may have an incorrect sign.
This is easy to correct at the end of the computation but has not been done here for simplifying the code. Similarly, if either a or b is zero and the other is negative, the greatest common divisor that is output is negative, and all the signs of the output must be changed. Finally, notice that in Bézout's identity, a x + b y = gcd(a, b), one can solve for y given x, a, b and gcd(a, b). Thus, an optimization to the above algorithm is to compute only the sequence of the s_i (which yields the Bézout coefficient x), and then compute y at the end:

function extended_gcd(a, b)
    s := 0;    old_s := 1
    r := b;    old_r := a
    while r ≠ 0 do
        quotient := old_r div r
        (old_r, r) := (r, old_r − quotient × r)
        (old_s, s) := (s, old_s − quotient × s)
    if b ≠ 0 then
        bezout_t := (old_r − old_s × a) div b
    else
        bezout_t := 0
    output "Bézout coefficients:", (old_s, bezout_t)
    output "greatest common divisor:", old_r

However, in many cases this is not really an optimization: whereas the former algorithm is not susceptible to overflow when used with machine integers (that is, integers with a fixed upper bound of digits), the multiplication of old_s × a in the computation of bezout_t can overflow, limiting this optimization to inputs which can be represented in less than half the maximal size. When using integers of unbounded size, the time needed for multiplication and division grows quadratically with the size of the integers. This implies that the "optimisation" replaces a sequence of multiplications/divisions of small integers by a single multiplication/division, which requires more computing time than the operations that it replaces, taken together. Simplification of fractions A fraction a/b is in canonical simplified form if a and b are coprime and b is positive.
This canonical simplified form can be obtained by replacing the three output lines of the preceding pseudo code by

    if s = 0 then output "Division by zero"
    if s < 0 then s := −s; t := −t (for avoiding negative denominators)
    if s = 1 then output −t (for avoiding denominators equal to 1)
    output −t/s

The proof of this algorithm relies on the fact that s and t are two coprime integers such that a s + b t = 0, and thus a/b = −t/s. To get the canonical simplified form, it suffices to move the minus sign for having a positive denominator. If b divides a evenly, the algorithm executes only one iteration, and we have s = 1 at the end of the algorithm. It is the only case where the output is an integer. Computing multiplicative inverses in modular structures The extended Euclidean algorithm is the essential tool for computing multiplicative inverses in modular structures, typically the modular integers and the algebraic field extensions. A notable instance of the latter case are the finite fields of non-prime order. Modular integers If n is a positive integer, the ring Z/nZ may be identified with the set of the remainders of Euclidean division by n, the addition and the multiplication consisting in taking the remainder by n of the result of the addition and the multiplication of integers. An element a of Z/nZ has a multiplicative inverse (that is, it is a unit) if it is coprime to n. In particular, if n is prime, a has a multiplicative inverse if it is not zero (modulo n). Thus Z/nZ is a field if and only if n is prime. Bézout's identity asserts that a and n are coprime if and only if there exist integers s and t such that

n s + a t = 1.

Reducing this identity modulo n gives

a t ≡ 1 (mod n).

Thus t, or, more exactly, the remainder of the division of t by n, is the multiplicative inverse of a modulo n. To adapt the extended Euclidean algorithm to this problem, one should remark that the Bézout coefficient of n is not needed, and thus does not need to be computed. Also, for getting a result which is positive and lower than n, one may use the fact that the integer t provided by the algorithm satisfies |t| < n.
That is, if t < 0, one must add n to it at the end. This results in the pseudocode, in which the input n is an integer larger than 1.

function inverse(a, n)
    t := 0;    newt := 1
    r := n;    newr := a
    while newr ≠ 0 do
        quotient := r div newr
        (t, newt) := (newt, t − quotient × newt)
        (r, newr) := (newr, r − quotient × newr)
    if r > 1 then
        return "a is not invertible"
    if t < 0 then
        t := t + n
    return t

Simple algebraic field extensions

The extended Euclidean algorithm is also the main tool for computing multiplicative inverses in simple algebraic field extensions. An important case, widely used in cryptography and coding theory, is that of finite fields of non-prime order. In fact, if p is a prime number, and q = p^d, the field of order q is a simple algebraic extension of the prime field of p elements, generated by a root of an irreducible polynomial of degree d. A simple algebraic extension L of a field K, generated by the root of an irreducible polynomial p of degree d, may be identified with the quotient ring K[X]/⟨p⟩, and its elements are in bijective correspondence with the polynomials of degree less than d. The addition in L is the addition of polynomials. The multiplication in L is the remainder of the Euclidean division by p of the product of polynomials. Thus, to complete the arithmetic in L, it remains only to define how to compute multiplicative inverses. This is done by the extended Euclidean algorithm. The algorithm is very similar to that provided above for computing the modular multiplicative inverse. There are two main differences: firstly the last but one line is not needed, because the Bézout coefficient that is provided always has a degree less than d. Secondly, the greatest common divisor which is provided, when the input polynomials are coprime, may be any non-zero element of K; this Bézout coefficient (a polynomial generally of positive degree) has thus to be multiplied by the inverse of this element of K. In the pseudocode which follows, p is a polynomial of degree greater than one, and a is a polynomial.
function inverse(a, p)
    t := 0;    newt := 1
    r := p;    newr := a
    while newr ≠ 0 do
        quotient := r div newr
        (r, newr) := (newr, r − quotient × newr)
        (t, newt) := (newt, t − quotient × newt)
    if degree(r) > 0 then
        return "Either p is not irreducible or a is a multiple of p"
    return (1/r) × t

Example

For example, if the polynomial used to define the finite field GF(2^8) is p = x^8 + x^4 + x^3 + x + 1, and a = x^6 + x^4 + x + 1 is the element whose inverse is desired, then performing the algorithm results in the computation described in the following table. Let us recall that in fields of order 2^n, one has −z = z and z + z = 0 for every element z in the field. Since 1 is the only nonzero element of GF(2), the adjustment in the last line of the pseudocode is not needed. Thus, the inverse is x^7 + x^6 + x^3 + x, as can be confirmed by multiplying the two elements together, and taking the remainder by p of the result.

The case of more than two numbers

One can handle the case of more than two numbers iteratively. First we show that gcd(a, b, c) = gcd(gcd(a, b), c). To prove this let d = gcd(a, b, c). By definition of gcd, d is a divisor of a and b. Thus gcd(a, b) = kd for some k. Similarly d is a divisor of c, so c = jd for some j. Let u = gcd(k, j). By our construction of u, ud divides a, b and c, but since d is the greatest divisor, u is a unit. And since ud = gcd(gcd(a, b), c) the result is proven. So if na + mb = gcd(a, b), then there are x and y such that x gcd(a, b) + yc = gcd(a, b, c), so the final equation will be x(na + mb) + yc = (xn)a + (xm)b + yc = gcd(a, b, c). So then to apply to n numbers we use induction, with the equations following directly.
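The polynomial version of the algorithm can be reproduced in Python by representing GF(2) polynomials as integer bitmasks (bit i holds the coefficient of x^i), where addition and subtraction are both XOR. This is an illustrative sketch, not a reference implementation; it assumes the reduction polynomial x^8 + x^4 + x^3 + x + 1 (bitmask 0x11B) and the element x^6 + x^4 + x + 1 (bitmask 0x53), the values used in the standard GF(2^8)/AES example:

```python
def pdeg(p):
    """Degree of a GF(2)[x] polynomial stored as a bitmask (-1 for zero)."""
    return p.bit_length() - 1

def pdivmod(a, b):
    """Euclidean division in GF(2)[x]: return (quotient, remainder) of a by b."""
    q = 0
    while pdeg(a) >= pdeg(b):
        shift = pdeg(a) - pdeg(b)
        q ^= 1 << shift   # add x^shift to the quotient
        a ^= b << shift   # subtract (= XOR) b * x^shift
    return q, a

def pmul(a, b):
    """Carry-less (GF(2)[x]) product of two polynomials."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        a <<= 1
        b >>= 1
    return result

def gf_inverse(a, p):
    """Inverse of a in GF(2)[x]/(p), by the extended Euclidean algorithm."""
    t, newt = 0, 1
    r, newr = p, a
    while newr != 0:
        quotient, remainder = pdivmod(r, newr)
        r, newr = newr, remainder
        t, newt = newt, t ^ pmul(quotient, newt)  # subtraction is XOR
    if pdeg(r) > 0:
        raise ValueError("p is not irreducible or a is a multiple of p")
    return t  # r = 1 in GF(2), so no final scaling by 1/r is needed

# x^6 + x^4 + x + 1 (0x53) modulo x^8 + x^4 + x^3 + x + 1 (0x11B):
inv = gf_inverse(0x53, 0x11B)
assert inv == 0xCA  # x^7 + x^6 + x^3 + x
assert pdivmod(pmul(0x53, inv), 0x11B)[1] == 1
```

The final assertion confirms the inverse the same way the text suggests: multiply the two elements together and take the remainder by the defining polynomial.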
Mathematics
Diophantine equations
null
99491
https://en.wikipedia.org/wiki/Exponentiation
Exponentiation
In mathematics, exponentiation, denoted b^n, is an operation involving two numbers: the base, b, and the exponent or power, n. When n is a positive integer, exponentiation corresponds to repeated multiplication of the base: that is, b^n is the product of multiplying n bases: b^n = b × b × ⋯ × b (n factors). In particular, b^1 = b. The exponent is usually shown as a superscript to the right of the base, as bⁿ, or in computer code as b^n. This binary operation is often read as "b to the power n"; it may also be called "b raised to the nth power", "the nth power of b", or most briefly "b to the n". The above definition of b^n immediately implies several properties, in particular the multiplication rule: b^m × b^n = b^(m+n). That is, when multiplying a base raised to one power times the same base raised to another power, the powers add. Extending this rule to the power zero gives b^0 × b^n = b^(0+n) = b^n, and dividing both sides by b^n gives b^0 = 1. That is, the multiplication rule implies the definition b^0 = 1. A similar argument implies the definition for negative integer powers: b^(−n) = 1/b^n. That is, extending the multiplication rule gives b^(−n) × b^n = b^(−n+n) = b^0 = 1. Dividing both sides by b^n gives b^(−n) = 1/b^n. This also implies the definition for fractional powers: b^(n/m) is the mth root of b^n. For example, b^(1/2) × b^(1/2) = b^(1/2 + 1/2) = b^1 = b, meaning (b^(1/2))^2 = b, which is the definition of square root: b^(1/2) = √b. The definition of exponentiation can be extended in a natural way (preserving the multiplication rule) to define b^x for any positive real base b and any real number exponent x. More involved definitions allow complex base and exponent, as well as certain types of matrices as base or exponent. Exponentiation is used extensively in many fields, including economics, biology, chemistry, physics, and computer science, with applications such as compound interest, population growth, chemical reaction kinetics, wave behavior, and public-key cryptography.

Etymology

The term exponent originates from the Latin exponentem, the present participle of exponere, meaning "to put forth".
The term power (Latin: potentia, potestas, dignitas) is a mistranslation of the ancient Greek δύναμις (dúnamis, here: "amplification") used by the Greek mathematician Euclid for the square of a line, following Hippocrates of Chios.

History

Antiquity

The Sand Reckoner

In The Sand Reckoner, Archimedes proved the law of exponents, 10^a × 10^b = 10^(a+b), necessary to manipulate powers of 10. He then used powers of 10 to estimate the number of grains of sand that can be contained in the universe.

Islamic Golden Age

Māl and kaʿbah ("square" and "cube")

In the 9th century, the Persian mathematician Al-Khwarizmi used the terms مَال (māl, "possessions", "property") for a square—the Muslims, "like most mathematicians of those and earlier times, thought of a squared number as a depiction of an area, especially of land, hence property"—and كَعْبَة (kaʿbah, "cube") for a cube, which later Islamic mathematicians represented in mathematical notation as the letters mīm (m) and kāf (k), respectively, by the 15th century, as seen in the work of Abu'l-Hasan ibn Ali al-Qalasadi.

15th–18th century

Introducing exponents

Nicolas Chuquet used a form of exponential notation in the 15th century, for example to represent . This was later used by Henricus Grammateus and Michael Stifel in the 16th century. In the late 16th century, Jost Bürgi would use Roman numerals for exponents in a way similar to that of Chuquet, for example for .

"Exponent"; "square" and "cube"

The word exponent was coined in 1544 by Michael Stifel. In the 16th century, Robert Recorde used the terms square, cube, zenzizenzic (fourth power), sursolid (fifth), zenzicube (sixth), second sursolid (seventh), and zenzizenzizenzic (eighth). Biquadrate has been used to refer to the fourth power as well.

Modern exponential notation

In 1636, James Hume used in essence modern notation, when in L'algèbre de Viète he wrote for .
Early in the 17th century, the first form of our modern exponential notation was introduced by René Descartes in his text titled La Géométrie; there, the notation is introduced in Book I. Some mathematicians (such as Descartes) used exponents only for powers greater than two, preferring to represent squares as repeated multiplication. Thus they would write polynomials, for example, as ax + bxx + cx^3 + d.

"Indices"

Samuel Jeake introduced the term indices in 1696. The term involution was used synonymously with the term indices, but had declined in usage and should not be confused with its more common meaning.

Variable exponents, non-integer exponents

In 1748, Leonhard Euler introduced variable exponents, and, implicitly, non-integer exponents by writing:

20th century

As calculation was mechanized, notation was adapted to numerical capacity by conventions in exponential notation. For example, Konrad Zuse introduced floating-point arithmetic in his 1938 computer Z1. One register contained representation of leading digits, and a second contained representation of the exponent of 10. Earlier Leonardo Torres Quevedo contributed Essays on Automation (1914) which had suggested the floating-point representation of numbers. The more flexible decimal floating-point representation was introduced in 1946 with a Bell Laboratories computer. Eventually educators and engineers adopted scientific notation of numbers, consistent with common reference to order of magnitude in a ratio scale. For instance, in 1961 the School Mathematics Study Group developed the notation in connection with units used in the metric system.

Terminology

The expression b^2 = b × b is called "the square of b" or "b squared", because the area of a square with side-length b is b^2. (It is true that it could also be called "b to the second power", but "the square of b" and "b squared" are more traditional.) Similarly, the expression b^3 = b × b × b is called "the cube of b" or "b cubed", because the volume of a cube with side-length b is b^3.
When an exponent is a positive integer, that exponent indicates how many copies of the base are multiplied together. For example, 3^5 = 3 × 3 × 3 × 3 × 3 = 243. The base 3 appears 5 times in the multiplication, because the exponent is 5. Here, 243 is the 5th power of 3, or 3 raised to the 5th power. The word "raised" is usually omitted, and sometimes "power" as well, so 3^5 can be simply read "3 to the 5th", or "3 to the 5".

Integer exponents

The exponentiation operation with integer exponents may be defined directly from elementary arithmetic operations.

Positive exponents

The definition of the exponentiation as an iterated multiplication can be formalized by using induction, and this definition can be used as soon as one has an associative multiplication: The base case is b^1 = b and the recurrence is b^(n+1) = b^n × b. The associativity of multiplication implies that for any positive integers m and n, b^(m+n) = b^m × b^n, and (b^m)^n = b^(mn).

Zero exponent

As mentioned earlier, a (nonzero) number raised to the 0 power is 1: b^0 = 1. This value is also obtained by the empty product convention, which may be used in every algebraic structure with a multiplication that has an identity. This way the formula b^(m+n) = b^m × b^n also holds for n = 0. The case of 0^0 is controversial. In contexts where only integer powers are considered, the value 1 is generally assigned to 0^0 but, otherwise, the choice of whether to assign it a value and what value to assign may depend on context.

Negative exponents

Exponentiation with negative exponents is defined by the following identity, which holds for any integer n and nonzero b: b^(−n) = 1/b^n. Raising 0 to a negative exponent is undefined but, in some circumstances, it may be interpreted as infinity (∞). This definition of exponentiation with negative exponents is the only one that allows extending the identity b^(m+n) = b^m × b^n to negative exponents (consider the case m = −n). The same definition applies to invertible elements in a multiplicative monoid, that is, an algebraic structure, with an associative multiplication and a multiplicative identity denoted 1 (for example, the square matrices of a given dimension).
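These definitions translate directly into code. The sketch below (illustrative only, not a library routine) implements integer exponents exactly as defined: iterated multiplication for positive n, the empty product for n = 0, and a reciprocal for negative n, using Python's Fraction type so the results stay exact:

```python
from fractions import Fraction

def power(b, n):
    """b raised to an integer n: b^0 = 1 (empty product),
    b^(n+1) = b^n * b, and b^(-n) = 1 / b^n for nonzero b."""
    if n < 0:
        return 1 / power(Fraction(b), -n)
    result = Fraction(1)  # empty product convention
    for _ in range(n):
        result *= b
    return result

assert power(3, 5) == 243
assert power(7, 0) == 1
assert power(2, -3) == Fraction(1, 8)
# The multiplication rule b^(m+n) = b^m * b^n, here with m = -2, n = 5:
assert power(2, -2) * power(2, 5) == power(2, 3)
```

The final assertion spot-checks that the negative-exponent definition is the one that keeps the multiplication rule valid, as the text argues.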
In particular, in such a structure, the inverse of an invertible element x is standardly denoted x^(−1).

Identities and properties

The following identities, often called exponent rules, hold for all integer exponents, provided that the base is non-zero: b^(m+n) = b^m × b^n, (b^m)^n = b^(mn), and (b × c)^n = b^n × c^n. Unlike addition and multiplication, exponentiation is not commutative: for example, 2^3 = 8, but reversing the operands gives the different value 3^2 = 9. Also unlike addition and multiplication, exponentiation is not associative: for example, (2^3)^2 = 8^2 = 64, whereas 2^(3^2) = 2^9 = 512. Without parentheses, the conventional order of operations for serial exponentiation in superscript notation is top-down (or right-associative), not bottom-up (or left-associative). That is, b^p^q is read as b^(p^q), which, in general, is different from (b^p)^q.

Powers of a sum

The powers of a sum can normally be computed from the powers of the summands by the binomial formula (a + b)^n = Σ (n choose i) a^i b^(n−i), the sum running over i from 0 to n. However, this formula is true only if the summands commute (i.e. that ab = ba), which is implied if they belong to a structure that is commutative. Otherwise, if a and b are, say, square matrices of the same size, this formula cannot be used. It follows that in computer algebra, many algorithms involving integer exponents must be changed when the exponentiation bases do not commute. Some general purpose computer algebra systems use a different notation for exponentiation with non-commuting bases, which is then called non-commutative exponentiation.

Combinatorial interpretation

For nonnegative integers n and m, the value of n^m is the number of functions from a set of m elements to a set of n elements (see cardinal exponentiation). Such functions can be represented as m-tuples from an n-element set (or as m-letter words from an n-letter alphabet). Some examples for particular values of m and n are given in the following table:

Particular bases

Powers of ten

In the base ten (decimal) number system, integer powers of 10 are written as the digit 1 followed or preceded by a number of zeroes determined by the sign and magnitude of the exponent. For example, 10^3 = 1000 and 10^(−4) = 0.0001.
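The non-commutativity and non-associativity described above, together with the top-down convention for serial powers, can be checked directly in Python, whose ** operator is right-associative:

```python
# Exponentiation is not commutative: 2^3 differs from 3^2.
assert 2 ** 3 == 8
assert 3 ** 2 == 9

# It is not associative either: (2^3)^2 differs from 2^(3^2).
assert (2 ** 3) ** 2 == 64
assert 2 ** (3 ** 2) == 512

# Serial exponentiation is conventionally right-associative (top-down),
# and Python's ** follows that convention:
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
```

Not every language makes the same choice; checking the operator's associativity before writing chained powers is a worthwhile habit.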
Exponentiation with base 10 is used in scientific notation to denote large or small numbers. For instance, 299792458 m/s (the speed of light in vacuum, in metres per second) can be written as 2.99792458 × 10^8 m/s and then approximated as 2.998 × 10^8 m/s. SI prefixes based on powers of 10 are also used to describe small or large quantities. For example, the prefix kilo means 10^3 = 1000, so a kilometre is 1000 m.

Powers of two

The first negative powers of 2 have special names: 2^(−1) is a half; 2^(−2) is a quarter. Powers of 2 appear in set theory, since a set with n members has a power set, the set of all of its subsets, which has 2^n members. Integer powers of 2 are important in computer science. The positive integer powers 2^n give the number of possible values for an n-bit integer binary number; for example, a byte may take 2^8 = 256 different values. The binary number system expresses any number as a sum of powers of 2, and denotes it as a sequence of 0 and 1, separated by a binary point, where 1 indicates a power of 2 that appears in the sum; the exponent is determined by the place of this 1: the nonnegative exponents are the rank of the 1 on the left of the point (starting from 0), and the negative exponents are determined by the rank on the right of the point.

Powers of one

Every power of one equals one: 1^n = 1.

Powers of zero

For a positive exponent n, the nth power of zero is zero: 0^n = 0. For a negative exponent, 0^(−n) is undefined. The expression 0^0 is either defined as 1, or it is left undefined.

Powers of negative one

Since a negative number times another negative is positive, we have (−1)^n = 1 for even n and (−1)^n = −1 for odd n. Because of this, powers of −1 are useful for expressing alternating sequences. For a similar discussion of powers of the complex number i, see .

Large exponents

The limit of a sequence of powers of a number greater than one diverges; in other words, the sequence grows without bound: b^n → +∞ as n → ∞ when b > 1. This can be read as "b to the power of n tends to +∞ as n tends to infinity when b is greater than one".
Powers of a number with absolute value less than one tend to zero: b^n → 0 as n → ∞ when |b| < 1. Any power of one is always one: b^n = 1 for all n when b = 1. Powers of a negative number alternate between positive and negative as n alternates between even and odd, and thus do not tend to any limit as n grows. If the exponentiated number varies while tending to 1 as the exponent tends to infinity, then the limit is not necessarily one of those above. A particularly important case is (1 + 1/n)^n → e as n → ∞. See below. Other limits, in particular those of expressions that take on an indeterminate form, are described below.

Power functions

Real functions of the form f(x) = cx^n, where c ≠ 0, are sometimes called power functions. When n is an integer and n ≥ 1, two primary families exist: for n even, and for n odd. In general for c > 0, when n is even f(x) = cx^n will tend towards positive infinity with increasing x, and also towards positive infinity with decreasing x. All graphs from the family of even power functions have the general shape of y = cx^2, flattening more in the middle as n increases. Functions with this kind of symmetry are called even functions. When n is odd, the asymptotic behavior of f(x) reverses from positive x to negative x. For c > 0, f(x) = cx^n will also tend towards positive infinity with increasing x, but towards negative infinity with decreasing x. All graphs from the family of odd power functions have the general shape of y = cx^3, flattening more in the middle as n increases and losing all flatness there in the straight line for n = 1. Functions with this kind of symmetry are called odd functions. For c < 0, the opposite asymptotic behavior is true in each case.

Table of powers of decimal digits

Rational exponents

If b is a nonnegative real number, and n is a positive integer, b^(1/n) or ⁿ√b denotes the unique nonnegative real nth root of b, that is, the unique nonnegative real number x such that x^n = b. If b is a positive real number, and p/q is a rational number, with p and q > 0 integers, then b^(p/q) is defined as b^(p/q) = (b^p)^(1/q) = (b^(1/q))^p. The equality on the right may be derived by setting x = b^(1/q) and writing (b^(1/q))^p = x^p = ((x^p)^q)^(1/q) = ((x^q)^p)^(1/q) = (b^p)^(1/q). If r is a positive rational number, 0^r = 0, by definition.
All these definitions are required for extending the identity (b^r)^s = b^(rs) to rational exponents. On the other hand, there are problems with the extension of these definitions to bases that are not positive real numbers. For example, a negative real number has a real nth root, which is negative, if n is odd, and no real root if n is even. In the latter case, whichever complex nth root one chooses for b^(1/n), the identity (b^r)^s = b^(rs) cannot be satisfied. For example, ((−1)^2)^(1/2) = 1^(1/2) = 1, while (−1)^(2 × (1/2)) = (−1)^1 = −1. See the sections below for details on the way these problems may be handled.

Real exponents

For positive real numbers, exponentiation to real powers can be defined in two equivalent ways, either by extending the rational powers to reals by continuity (see Limits of rational exponents, below), or in terms of the logarithm of the base and the exponential function. The result is always a positive real number, and the identities and properties shown above for integer exponents remain true with these definitions for real exponents. The second definition is more commonly used, since it generalizes straightforwardly to complex exponents. On the other hand, exponentiation to a real power of a negative real number is much more difficult to define consistently, as it may be non-real and have several values. One may choose one of these values, called the principal value, but there is no choice of the principal value for which the identity (b^r)^s = b^(rs) is true; see below. Therefore, exponentiation with a basis that is not a positive real number is generally viewed as a multivalued function.

Limits of rational exponents

Since any irrational number can be expressed as the limit of a sequence of rational numbers, exponentiation of a positive real number b with an arbitrary real exponent x can be defined by continuity with the rule b^x = lim (r → x, r rational) b^r, where the limit is taken over rational values of r only. This limit exists for every positive b and every real x.
For example, if x = π, the non-terminating decimal representation π = 3.14159... and the monotonicity of the rational powers can be used to obtain intervals bounded by rational powers that are as small as desired, and must contain b^π: [b^3, b^4], [b^3.1, b^3.2], [b^3.14, b^3.15], ... So, the upper bounds and the lower bounds of the intervals form two sequences that have the same limit, denoted b^π. This defines b^x for every positive b and real x as a continuous function of b and x.
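The limiting process described above can be illustrated numerically. The sketch below (illustrative only; the function name is my own) approximates 2^√2 through rational exponents taken from successive decimal truncations of √2, each evaluated as a q-th root followed by an integer power:

```python
import math
from fractions import Fraction

def rational_power(b, frac):
    """b ** (p/q) for b > 0, computed as (b^(1/q))^p:
    the q-th root of b, raised to the integer power p."""
    return (b ** (1.0 / frac.denominator)) ** frac.numerator

# Rational exponents obtained by truncating sqrt(2) = 1.41421356... to d digits.
approximations = [
    rational_power(2.0, Fraction(int(math.sqrt(2) * 10**d), 10**d))
    for d in range(1, 8)
]

# The truncations increase toward sqrt(2), so the rational powers
# increase toward the real power 2**sqrt(2) = 2.6651441...
assert approximations[0] < approximations[-1]
assert abs(approximations[-1] - 2 ** math.sqrt(2)) < 1e-5
```

The lower bounds of the nested intervals in the text correspond exactly to such truncated rational exponents; the upper bounds would use the truncation plus one unit in the last digit.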
Mathematics
Arithmetic
null
99597
https://en.wikipedia.org/wiki/Cecum
Cecum
The cecum or caecum is a pouch within the peritoneum that is considered to be the beginning of the large intestine. It is typically located on the right side of the body (the same side of the body as the appendix, to which it is joined). The word cecum (plural: ceca) stems from the Latin caecus meaning blind. It receives chyme from the ileum, and connects to the ascending colon of the large intestine. It is separated from the ileum by the ileocecal valve (ICV), also called Bauhin's valve. It is also separated from the colon by the cecocolic junction. While the cecum is usually intraperitoneal, the ascending colon is retroperitoneal. In herbivores, the cecum stores food material where bacteria are able to break down the cellulose. In humans, the cecum is involved in absorption of salts and electrolytes and lubricates the solid waste that passes into the large intestine. Structure Development The cecum and appendix are derived from the bud of cecum that forms during week six in the midgut next to the apex of the umbilical herniation. Specifically, the cecum and appendix are formed by the enlargement of the postarterial segment of the midgut loop. The proximal part of the bud grows rapidly to form the cecum. The lateral wall of the cecum grows much more rapidly than the medial wall, with the result that the point of attachment of the appendix comes to lie on the medial side. The cecum's position changes after the midgut rotates and the ascending colon elongates, and the accumulation of meconium inside the cecum may result in the latter's increased diameter. History Etymology The term cecum comes from Latin (intestinum) caecum, literally 'blind intestine', in the sense 'blind gut' or 'cul de sac'. It is a direct translation from the Ancient Greek τυφλόν (typhlón, 'blind'). Thus the inflammation of the cecum is called typhlitis. In dissections by the Greek philosophers, the connection between the ileum of the small intestine and the cecum was not fully understood.
Most of the studies of the digestive tract were done on animals and the results were compared to human structures. The junction between the small intestine and the colon, called the ileocecal valve, is so small in some animals that it was not considered to be a connection between the small and large intestines. During a dissection, the colon could be traced from the rectum, to the sigmoid colon, through the descending, transverse, and ascending sections. The cecum is an end point for the colon with a dead-end portion terminating with the appendix. The connection between the end of the small intestine (ileum) and the start (as viewed from the perspective of food being processed) of the colon (cecum) is now clearly understood, and is called the ileocecal orifice. The connection between the end of the cecum and the beginning of the ascending colon is called the cecocolic orifice. Clinical significance A cecal carcinoid tumor is a carcinoid tumor of the cecum. An appendiceal carcinoid tumor (a carcinoid tumor of the appendix) is sometimes found next to a cecal carcinoid. Neutropenic enterocolitis (typhlitis) is the condition of inflammation of the cecum, primarily caused by bacterial infections. Over 99% of the bacteria in the gut are anaerobes, but in the cecum, aerobic bacteria reach high densities. Other animals A cecum is present in most amniote species, and also in lungfish, but not in any living species of amphibian. In reptiles, it is usually a single median structure, arising from the dorsal side of the large intestine. Birds typically have two paired ceca, as do, unlike other mammals, hyraxes. Parrots do not have ceca. Most mammalian herbivores have a relatively large cecum. In many species, it is considerably wider than the colon. For some herbivores such as lagomorphs (rabbits, hares, pikas), easily digestible food is processed in the gastrointestinal tract and expelled as regular feces. 
But in order to get nutrients out of hard to digest fiber, lagomorphs ferment fiber in the cecum and then expel the contents as cecotropes, which are reingested (cecotrophy). The cecotropes are then absorbed in the small intestine to utilize the nutrients. In contrast, obligate carnivores, whose diets contain little or no plant matter, have a reduced cecum, which is often partially or wholly replaced by the appendix. Mammalian species which do not develop a cecum include raccoons, bears, and the red panda. Many fish have a number of small outpockets, called pyloric ceca, along their intestine; despite the name, they are not homologous with the cecum of amniotes – their purpose is to increase the overall area of the digestive epithelium. Some invertebrates, such as squid, may also have structures with the same name, but these have no relationship with those of vertebrates. Gallery
Biology and health sciences
Gastrointestinal tract
Biology
99599
https://en.wikipedia.org/wiki/Ileum
Ileum
The ileum is the final section of the small intestine in most higher vertebrates, including mammals, reptiles, and birds. In fish, the divisions of the small intestine are not as clear and the terms posterior intestine or distal intestine may be used instead of ileum. Its main function is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum. The ileum follows the duodenum and jejunum and is separated from the cecum by the ileocecal valve (ICV). In humans, the ileum is about 2–4 m long, and the pH is usually between 7 and 8 (neutral or slightly basic). Ileum is derived from the Greek word εἰλεός (eileós), referring to a medical condition known as ileus. Structure The ileum is the third and final part of the small intestine. It follows the jejunum and ends at the ileocecal junction, where the terminal ileum communicates with the cecum of the large intestine through the ileocecal valve. The ileum, along with the jejunum, is suspended inside the mesentery, a peritoneal formation that carries the blood vessels supplying them (the superior mesenteric artery and vein), lymphatic vessels and nerve fibers. There is no line of demarcation between the jejunum and the ileum. There are, however, subtle differences between the two: The ileum has more fat inside the mesentery than the jejunum. Its lumen is smaller in diameter, and its walls are thinner, than those of the jejunum. Its circular folds are smaller and absent in the terminal part of the ileum. While the length of the intestinal tract contains lymphoid tissue, only the ileum has abundant Peyer's patches, unencapsulated lymphoid nodules that contain large numbers of lymphocytes and other cells of the immune system. Histology The four layers that make up the wall of the ileum are consistent with those of the gastrointestinal tract.
From the inner to the outer surface, these are: A mucous membrane, itself formed by three different layers: A single layer of tall cells that line the lumen of the organ. The epithelium that forms the innermost part of the mucosa has five distinct types of cells that serve different purposes: enterocytes with microvilli, which digest and absorb nutrients; goblet cells, which secrete mucin, a substance that lubricates the wall of the organ; Paneth cells, most common in the terminal part of the ileum, are only found at the bottom of the intestinal glands and release antimicrobial substances such as alpha defensins and lysozyme; microfold cells, which take up and transport antigens from the lumen to lymphatic cells of the lamina propria; and enteroendocrine cells, which secrete hormones. An underlying lamina propria composed of loose connective tissue and containing germinal centers and large aggregates of lymphoid tissue called Peyer's patches, which are a distinctive feature of the ileum. A thin layer of smooth muscle called muscularis mucosae A submucosa formed by dense irregular connective tissue that carries the larger blood vessels and a nervous component called submucosal plexus, which is part of the enteric nervous system An external muscular layer formed by two layers of smooth muscle arranged in circular bundles in the inner layer and in longitudinal bundles in the outer layer. Between the two layers is the myenteric plexus, formed by nervous tissue and also a part of the enteric nervous system. A serosa composed of mesothelium, a single layer of flat cells with varying quantities of underlying connective and adipose tissue. This layer represents the visceral peritoneum and is continuous with the mesentery. Development The small intestine develops from the midgut of the primitive gut tube. By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. 
The proximal half of this loop will form the ileum. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine. In the fetus the ileum is connected to the navel by the vitelline duct. In roughly 2−4% of humans, this duct fails to close during the first seven weeks after birth, leaving a remnant called Meckel's diverticulum. Function The main function of the ileum is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum. The wall itself is made up of folds, each of which has many tiny finger-like projections known as villi on its surface. In turn, the epithelial cells that line these villi possess even larger numbers of microvilli. Therefore, the ileum has an extremely large surface area both for the adsorption (attachment) of enzyme molecules and for the absorption of products of digestion. The DNES (diffuse neuroendocrine system) cells of the ileum secrete various hormones (gastrin, secretin, cholecystokinin) into the blood. Cells in the lining of the ileum secrete the protease and carbohydrase enzymes responsible for the final stages of protein and carbohydrate digestion into the lumen of the intestine. These enzymes are present in the cytoplasm of the epithelial cells. The villi contain large numbers of capillaries that take the amino acids and glucose produced by digestion to the hepatic portal vein and the liver. Lacteals are small lymph vessels, and are present in villi. They absorb fatty acid and glycerol, the products of fat digestion. 
Layers of circular and longitudinal smooth muscle enable the chyme (partly digested food and water) to be pushed along the ileum by waves of muscle contractions called peristalsis. The remaining chyme is passed to the colon. Clinical significance It is of importance in medicine as it can be affected in a number of diseases, including: Crohn's disease, tuberculosis, lymphoma, and neuroendocrine tumors (carcinoid). Other animals In veterinary anatomy, the ileum is distinguished from the jejunum by being that portion of the jejunoileum that is connected to the caecum by the ileocecal fold. The ileum is the short terminal part of the small intestine and the connection to the large intestine. It is suspended by the caudal part of the mesentery (mesoileum) and is attached, in addition, to the cecum by the ileocecal fold. The ileum terminates at the cecocolic junction of the large intestine forming the ileal orifice. In the dog the ileal orifice is located at the level of the first or second lumbar vertebra, in the ox at the level of the fourth lumbar vertebra, and in the sheep and goat at the level of the caudal point of the costal arch. By active muscular contraction of the ileum, and closure of the ileal opening as a result of engorgement, the ileum prevents the backflow of ingesta and the equalization of pressure between the jejunum and the base of the cecum. Disturbance of this sensitive balance is not uncommon and is one of the causes of colic in horses. During any intestinal surgery, for instance, during appendectomy, the distal 2 feet of ileum should be checked for the presence of Meckel's diverticulum.
Biology and health sciences
Gastrointestinal tract
Biology
99601
https://en.wikipedia.org/wiki/Pollarding
Pollarding
Pollarding is a pruning system involving the removal of the upper branches of a tree, which promotes the growth of a dense head of foliage and branches. In ancient Rome, Propertius mentioned pollarding during the 1st century BCE. The practice has occurred commonly in Europe since medieval times, and takes place today in urban areas worldwide, primarily to maintain trees at a determined height or to place new shoots out of the reach of grazing animals. Traditionally, people pollarded trees for one of two reasons: for fodder to feed livestock or for wood. Fodder pollards produced "pollard hay" for livestock feed; they were pruned at intervals of two to six years so their leafy material would be most abundant. Wood pollards were pruned at longer intervals of eight to fifteen years, a pruning cycle tending to produce upright poles favored for fencing and boat construction. Supple young willow or hazel branches may be harvested as material for weaving baskets, fences, and garden constructions such as bowers. Nowadays, the practice is sometimes used for ornamental trees, such as crape myrtles in southern states of the US. Pollarding tends to make trees live longer by maintaining them in a partially juvenile state and by reducing the weight and windage of the top part of the tree. Older pollards often become hollow, so it can be difficult to determine age accurately. Pollards tend to grow slowly, with denser growth-rings in the years immediately after cutting. Practice As in coppicing, pollarding is to encourage the tree to produce new growth on a regular basis to maintain a supply of new wood for various purposes, particularly for fuel. In some areas, dried leafy branches are stored as winter fodder for stock. Depending on the use of the cut material, the length of time between cutting will vary from one year for tree hay or withies, to five years or more for larger timber. 
Sometimes, only some of the regrown stems may be cut in a season; this is thought to reduce the chances of death of the tree when recutting long-neglected pollards. Pollarding was preferred over coppicing in wood-pastures and other grazed areas, because animals would browse the regrowth from coppice stools. Historically in England, the right to pollard or "lop" was often granted to local people for fuel on common land or in royal forests; this was part of the right of estover. An incidental effect of pollarding in woodland is the encouragement of underbrush growth due to increased light reaching the woodland floor. This can increase species diversity. However, in woodland where pollarding was once common but has now ceased, the opposite effect occurs, as the side and top shoots develop into trunk-sized branches. An example of this can be seen in Epping Forest, which is within both London and Essex, UK, the majority of which was pollarded until the late 19th century. Here, the light that reaches the woodland floor is limited owing to the thick growth of the pollarded trees. Pollards cut at about a metre above the ground are called stubs (or stubbs). These were often used as markers in coppice or other woodland. Stubs cannot be used where the trees are browsed by animals, as the regrowing shoots are below the browse line. Species As with coppicing, only species with vigorous epicormic growth may be pollarded. In these species (which include many broadleaved trees but few conifers), removal of the main apical stems releases the growth of many dormant buds under the bark on the lower part of the tree. Species without this vigorous growth will die without their leaves and branches. Some smaller tree species do not readily form pollards, because cutting the main stem stimulates growth from the base, effectively forming a coppice stool instead. 
Examples of trees that do well as pollards include broadleaves such as beeches (Fagus), oaks (Quercus), maples (Acer), black locust or false acacia (Robinia pseudoacacia), hornbeams (Carpinus), lindens and limes (Tilia), planes (Platanus), horse chestnuts (Aesculus), mulberries (Morus), Eastern redbud (Cercis canadensis), tree of heaven (Ailanthus altissima), willows (Salix), and a few conifers, such as yews (Taxus). Pollarding is also used in urban forestry in certain areas for reasons such as tree size management, safety, and health concerns. It removes rotting or diseased branches to support the overall health of the tree and removes living and dead branches that could harm property and people, as well as increasing the amount of foliage in spring for aesthetic, shade and air quality reasons. Some trees may be rejuvenated by pollarding; for example, Bradford pear (Pyrus calleryana 'Bradford'), a flowering species that becomes brittle and top-heavy when older. Oaks, when very old, can form new trunks from the growth of pollard branches; that is, surviving branches which have split away from the main branch naturally. In Japan, pollarding is practiced on Cryptomeria. The technique is used in Africa for moringa trees to bring the nutritious leaves into easier reach for harvesting. Origin and usage of term "Poll" was originally a name for the top of the head, and "to poll" was a verb meaning 'to crop the hair'. This use was extended to similar treatment of the branches of trees and the horns of animals. A pollard simply meant someone or something that had been polled (similar to the formation of "drunkard" and "sluggard"); for example, a hornless ox or polled livestock. Later, the noun "pollard" came to be used as a verb: "pollarding". Pollarding has now largely replaced polling as the verb in the forestry sense. Pollard can also be used as an adjective: "pollard tree".
Technology
Horticulture
null
99603
https://en.wikipedia.org/wiki/Wrought%20iron
Wrought iron
Wrought iron is an iron alloy with a very low carbon content (less than 0.05%) in contrast to that of cast iron (2.1% to 4.5%). It is a semi-fused mass of iron with fibrous slag inclusions (up to 2% by weight), which give it a wood-like "grain" that is visible when it is etched, rusted, or bent to failure. Wrought iron is tough, malleable, ductile, corrosion resistant, and easily forge welded, but is more difficult to weld electrically. Before the development of effective methods of steelmaking and the availability of large quantities of steel, wrought iron was the most common form of malleable iron. It was given the name wrought because it was hammered, rolled, or otherwise worked while hot enough to expel molten slag. The modern functional equivalent of wrought iron is mild steel, also called low-carbon steel. Neither wrought iron nor mild steel contains enough carbon to be hardened by heating and quenching. Wrought iron is highly refined, with a small amount of silicate slag forged out into fibers. It comprises around 99.4% iron by mass. The presence of slag can be beneficial for blacksmithing operations, such as forge welding, since the silicate inclusions act as a flux and give the material its unique, fibrous structure. The silicate filaments in the slag also protect the iron from corrosion and diminish the effect of fatigue caused by shock and vibration. Historically, a modest amount of wrought iron was refined into steel, which was used mainly to produce swords, cutlery, chisels, axes, and other edged tools, as well as springs and files. The demand for wrought iron reached its peak in the 1860s, being in high demand for ironclad warships and railway use. However, as defects such as the brittleness of mild steel were reduced by better ferrous metallurgy, and as steel became less costly to make thanks to the Bessemer process and the Siemens–Martin process, the use of wrought iron declined. 
Many items, before they came to be made of mild steel, were produced from wrought iron, including rivets, nails, wire, chains, rails, railway couplings, water and steam pipes, nuts, bolts, horseshoes, handrails, wagon tires, straps for timber roof trusses, and ornamental ironwork, among many other things. Wrought iron is no longer produced on a commercial scale. Many products described as wrought iron, such as guard rails, garden furniture, and gates, are made of mild steel. They are described as "wrought iron" only because they have been made to resemble objects which in the past were wrought (worked) by hand by a blacksmith (although many decorative iron objects, including fences and gates, were often cast rather than wrought). Terminology The word "wrought" is an archaic past participle of the verb "to work", and so "wrought iron" literally means "worked iron". Wrought iron is a general term for the commodity, but is also used more specifically for finished iron goods, as manufactured by a blacksmith. It was used in that narrower sense in British Customs records, where such manufactured iron was subject to a higher rate of duty than what might be called "unwrought" iron. Cast iron, unlike wrought iron, is brittle and cannot be worked either hot or cold. In the 17th, 18th, and 19th centuries, wrought iron went by a wide variety of terms according to its form, origin, or quality. While the bloomery process produced wrought iron directly from ore, cast iron or pig iron were the starting materials used in the finery forge and puddling furnace. Pig iron and cast iron have higher carbon content than wrought iron, but have a lower melting point than iron or steel. Cast and especially pig iron have excess slag which must be at least partially removed to produce quality wrought iron. At foundries it was common to blend scrap wrought iron with cast iron to improve the physical properties of castings. 
For several years after the introduction of Bessemer and open hearth steel, there were different opinions as to what differentiated iron from steel; some believed it was the chemical composition and others that it was whether the iron heated sufficiently to melt and "fuse". Fusion eventually became generally accepted as relatively more important than composition below a given low carbon concentration. Another difference is that steel can be hardened by heat treating. Historically, wrought iron was known as "commercially pure iron"; however, it no longer qualifies because current standards for commercially pure iron require a carbon content of less than 0.008 wt%. Types and shapes Bar iron is a generic term sometimes used to distinguish it from cast iron. It is the equivalent of an ingot of cast metal, in a convenient form for handling, storage, shipping and further working into a finished product. The bars were the usual product of the finery forge, but not necessarily made by that process: Rod iron—cut from flat bar iron in a slitting mill provided the raw material for spikes and nails. Hoop iron—suitable for the hoops of barrels, made by passing rod iron through rolling dies. Plate iron—sheets suitable for use as boiler plate. Blackplate—sheets, perhaps thinner than plate iron, from the black rolling stage of tinplate production. Voyage iron—narrow flat bar iron, made or cut into bars of a particular weight, a commodity for sale in Africa for the Atlantic slave trade. The number of bars per ton gradually increased from 70 per ton in the 1660s to 75–80 per ton in 1685 and "near 92 to the ton" in 1731. Origin Charcoal iron—until the end of the 18th century, wrought iron was smelted from ore using charcoal, by the bloomery process. Wrought iron was also produced from pig iron using a finery forge or in a Lancashire hearth. The resulting metal was highly variable, both in chemistry and slag content. 
Puddled iron—the puddling process was the first large-scale process to produce wrought iron. In the puddling process, pig iron is refined in a reverberatory furnace to prevent contamination of the iron from the sulfur in the coal or coke. The molten pig iron is manually stirred, exposing the iron to atmospheric oxygen, which decarburizes the iron. As the iron is stirred, globs of wrought iron are collected into balls by the stirring rod (rabble arm or rod) and those are periodically removed by the puddler. Puddling was patented in 1784 and became widely used after 1800. By 1876, annual production of puddled iron in the UK alone was over 4 million tons. Around that time, the open hearth furnace was able to produce steel of suitable quality for structural purposes, and wrought iron production went into decline. Oregrounds iron—a particularly pure grade of bar iron made ultimately from iron ore from the Dannemora mine in Sweden. Its most important use was as the raw material for the cementation process of steelmaking. Danks iron—originally iron imported to Great Britain from Gdańsk, but in the 18th century more probably the kind of iron (from eastern Sweden) that once came from Gdańsk. Forest iron—iron from the English Forest of Dean, where haematite ore enabled tough iron to be produced. Lukes iron—iron imported from Liège, whose Dutch name is "Luik". Ames iron or amys iron—another variety of iron imported to England from northern Europe. Its origin has been suggested to be Amiens, but it seems to have been imported from Flanders in the 15th century and Holland later, suggesting an origin in the Rhine valley. Its origins remain controversial. Botolf iron or Boutall iron—from Bytów (Polish Pomerania) or Bytom (Polish Silesia). Sable iron (or Old Sable)—iron bearing the mark (a sable) of the Demidov family of Russian ironmasters, one of the better brands of Russian iron. Quality Tough iron Also spelled "tuf", is not brittle and is strong enough to be used for tools. 
Blend iron Made using a mixture of different types of pig iron. Best iron Iron put through several stages of piling and rolling to reach the stage regarded (in the 19th century) as the best quality. Marked bar iron Made by members of the Marked Bar Association and marked with the maker's brand mark as a sign of its quality. Defects Wrought iron is a form of commercial iron containing less than 0.10% of carbon, less than 0.25% total of the impurities sulfur, phosphorus, silicon, and manganese, and less than 2% slag by weight. Wrought iron is redshort or hot short if it contains sulfur in excess quantity. It has sufficient tenacity when cold, but cracks when bent or finished at a red heat. Hot short iron was considered unmarketable. Cold short iron, also known as coldshear or colshire, contains excessive phosphorus. It is very brittle when cold and cracks if bent. It may, however, be worked at high temperature. Historically, coldshort iron was considered sufficient for nails. Phosphorus is not necessarily detrimental to iron. Ancient Near Eastern smiths did not add lime to their furnaces. The absence of calcium oxide in the slag, and the deliberate use of wood with high phosphorus content during the smelting, induces a higher phosphorus content (typically <0.3%) than in modern iron (<0.02–0.03%). Analysis of the Iron Pillar of Delhi gives 0.11% in the iron. The included slag in wrought iron also imparts corrosion resistance. Antique music wire, manufactured at a time when mass-produced carbon-steels were available, was found to have low carbon and high phosphorus; iron with high phosphorus content, normally causing brittleness when worked cold, was easily drawn into music wires. Although at the time phosphorus was not an easily identified component of iron, it was hypothesized that the type of iron had been rejected for conversion to steel but excelled when tested for drawing ability. 
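The composition limits quoted above lend themselves to a quick screening check. The following sketch is illustrative only, not an assay procedure; the function name and its argument grouping are my own framing of the figures given in the text (carbon < 0.10 wt%, combined sulfur/phosphorus/silicon/manganese < 0.25 wt%, slag < 2 wt%).

```python
# Screen a composition against the wrought-iron limits quoted in the
# text. All arguments are in weight percent. Illustrative sketch only.
def is_wrought_iron(carbon: float, impurities: float, slag: float) -> bool:
    """impurities = combined S + P + Si + Mn content, in wt%."""
    return carbon < 0.10 and impurities < 0.25 and slag < 2.0

# A typical wrought-iron analysis passes; a cast-iron carbon level does not.
print(is_wrought_iron(carbon=0.04, impurities=0.15, slag=1.2))  # True
print(is_wrought_iron(carbon=3.0, impurities=0.15, slag=0.5))   # False
```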
History China During the Han dynasty (202 BC – 220 AD), new iron smelting processes led to the manufacture of new wrought iron implements for use in agriculture, such as the multi-tube seed drill and iron plough. In addition to accidental lumps of low-carbon wrought iron produced by excessive injected air in ancient Chinese cupola furnaces, the ancient Chinese created wrought iron by using the finery forge at least by the 2nd century BC; the earliest specimens of cast and pig iron fined into wrought iron and steel were found at the early Han dynasty site at Tieshengguo. Pigott speculates that the finery forge existed in the previous Warring States period (403–221 BC), because there are wrought iron items from China dating to that period and there is no documented evidence of the bloomery ever being used in China. The fining process involved liquefying cast iron in a fining hearth and removing carbon from the molten cast iron through oxidation. Wagner writes that in addition to the Han dynasty hearths believed to be fining hearths, there is also pictorial evidence of the fining hearth from a Shandong tomb mural dated 1st to 2nd century AD, as well as a hint of written evidence in the 4th century AD Daoist text Taiping Jing. Western world Wrought iron has been used for many centuries, and is the "iron" that is referred to throughout Western history. The other form of iron, cast iron, was in use in China since ancient times but was not introduced into Western Europe until the 15th century; even then, due to its brittleness, it could be used for only a limited number of purposes. Throughout much of the Middle Ages, iron was produced by the direct reduction of ore in manually operated bloomeries, although water power had begun to be employed by 1104. The raw material produced by all indirect processes is pig iron. It has a high carbon content and as a consequence, it is brittle and cannot be used to make hardware. 
The osmond process was the first of the indirect processes, developed by 1203, but bloomery production continued in many places. The process depended on the development of the blast furnace, of which medieval examples have been discovered at Lapphyttan, Sweden and in Germany. The bloomery and osmond processes were gradually replaced from the 15th century by finery processes, of which there were two versions, the German and Walloon. They were in turn replaced from the late 18th century by puddling, with certain variants such as the Swedish Lancashire process. Those, too, are now obsolete, and wrought iron is no longer manufactured commercially. Bloomery process Wrought iron was originally produced by a variety of smelting processes, all described today as "bloomeries". Different forms of bloomery were used at different places and times. The bloomery was charged with charcoal and iron ore and then lit. Air was blown in through a tuyere to heat the bloomery to a temperature somewhat below the melting point of iron. In the course of the smelt, slag would melt and run out, and carbon monoxide from the charcoal would reduce the ore to iron, which formed a spongy mass (called a "bloom") containing iron and also molten silicate minerals (slag) from the ore. The iron remained in the solid state. If the bloomery were allowed to become hot enough to melt the iron, carbon would dissolve into it and form pig or cast iron, but that was not the intention. However, the design of a bloomery made it difficult to reach the melting point of iron and also prevented the concentration of carbon monoxide from becoming high. After smelting was complete, the bloom was removed, and the process could then be started again. It was thus a batch process, rather than a continuous one such as a blast furnace. The bloom had to be forged mechanically to consolidate it and shape it into a bar, expelling slag in the process. 
During the Middle Ages, water-power was applied to the process, probably initially for powering bellows, and only later to hammers for forging the blooms. However, while it is certain that water-power was used, the details remain uncertain. That was the culmination of the direct process of ironmaking. It survived in Spain and southern France as Catalan Forges to the mid 19th century, in Austria as the stuckofen to 1775, and near Garstang in England until about 1770; it was still in use with hot blast in New York in the 1880s. In Japan the last of the old tatara bloomeries used in production of traditional tamahagane steel, mainly used in swordmaking, was extinguished only in 1925, though in the late 20th century production resumed on a small scale to supply steel to artisan swordmakers. Osmond process Osmond iron consisted of balls of wrought iron, produced by melting pig iron and catching the droplets on a staff, which was spun in front of a blast of air so as to expose as much of it as possible to the air and oxidise its carbon content. The resultant ball was often forged into bar iron in a hammer mill. Finery process In the 15th century, the blast furnace spread into what is now Belgium where it was improved. From there, it spread via the Pays de Bray on the boundary of Normandy and then to the Weald in England. With it, the finery forge spread. These forges remelted the pig iron and (in effect) burnt out the carbon, producing a bloom, which was then forged into bar iron. If rod iron was required, a slitting mill was used. The finery process existed in two slightly different forms. In Great Britain, France, and parts of Sweden, only the Walloon process was used. That employed two different hearths, a finery hearth for finishing the iron and a chafery hearth for reheating it in the course of drawing the bloom out into a bar. 
The finery always burnt charcoal, but the chafery could be fired with mineral coal, since its impurities would not harm the iron when it was in the solid state. On the other hand, the German process, used in Germany, Russia, and most of Sweden used a single hearth for all stages. The introduction of coke for use in the blast furnace by Abraham Darby in 1709 (or perhaps others a little earlier) initially had little effect on wrought iron production. Only in the 1750s was coke pig iron used on any significant scale as the feedstock of finery forges. However, charcoal continued to be the fuel for the finery. Potting and stamping From the late 1750s, ironmasters began to develop processes for making bar iron without charcoal. There were a number of patented processes for that, which are referred to today as potting and stamping. The earliest were developed by John Wood of Wednesbury and his brother Charles Wood of Low Mill at Egremont, patented in 1763. Another was developed for the Coalbrookdale Company by the Cranage brothers. Another important one was that of John Wright and Joseph Jesson of West Bromwich. Puddling process A number of processes for making wrought iron without charcoal were devised as the Industrial Revolution began during the latter half of the 18th century. The most successful of those was puddling, using a puddling furnace (a variety of the reverberatory furnace), which was invented by Henry Cort in 1784. It was later improved by others including Joseph Hall, who was the first to add iron oxide to the charge. In that type of furnace, the metal does not come into contact with the fuel, and so is not contaminated by its impurities. The heat of the combustion products passes over the surface of the puddle and the roof of the furnace reverberates (reflects) the heat onto the metal puddle on the fire bridge of the furnace. 
Unless the raw material used was white cast iron, the pig iron or other raw product of the puddling first had to be refined into refined iron, or finers metal. That would be done in a refinery where raw coal was used to remove silicon and convert carbon within the raw material, found in the form of graphite, to a combination with iron called cementite. In the fully developed process (of Hall), this metal was placed into the hearth of the puddling furnace where it was melted. The hearth was lined with oxidizing agents such as haematite and iron oxide. The mixture was subjected to a strong current of air and stirred with long bars, called puddling bars or rabbles, through working doors. The air, the stirring, and the "boiling" action of the metal helped the oxidizing agents to oxidize the impurities and carbon out of the pig iron. As the impurities oxidized, they formed a molten slag or drifted off as gas, while the remaining iron solidified into spongy wrought iron that floated to the top of the puddle and was fished out of the melt as puddle balls, using puddle bars. Shingling There was still some slag left in the puddle balls, so while they were still hot they would be shingled to remove the remaining slag and cinder. That was achieved by forging the balls under a hammer, or by squeezing the bloom in a machine. The material obtained at the end of shingling is known as bloom. The blooms are not useful in that form, so they were rolled into a final product. Sometimes European ironworks would skip the shingling process completely and roll the puddle balls. The only drawback to that was that the edges of the rough bars were not as well compressed. When a rough bar was reheated, the edges might separate and be lost into the furnace. Rolling The bloom was passed through rollers to produce bars. The bars of wrought iron were of poor quality, called muck bars or puddle bars. 
To improve their quality, the bars were cut up, piled and tied together by wires, a process known as faggoting or piling. They were then reheated to a welding state, forge welded, and rolled again into bars. The process could be repeated several times to produce wrought iron of desired quality. Wrought iron that has been rolled multiple times is called merchant bar or merchant iron. Lancashire process The advantage of puddling was that it used coal, not charcoal, as fuel. However, that was of little advantage in Sweden, which lacked coal. Gustaf Ekman observed charcoal fineries at Ulverston, which were quite different from any in Sweden. After his return to Sweden in the 1830s, he experimented and developed a process similar to puddling but used firewood and charcoal, which was widely adopted in the Bergslagen in the following decades. Aston process In 1925, James Aston of the United States developed a process for manufacturing wrought iron quickly and economically. It involved taking molten steel from a Bessemer converter and pouring it into cooler liquid slag. The temperature of the steel was about 1500 °C, and the liquid slag was maintained at approximately 1200 °C. The molten steel contained a large amount of dissolved gases, so when the liquid steel hit the cooler surfaces of the liquid slag, the gases were liberated. The molten steel then froze to yield a spongy mass having a temperature of about 1370 °C. The spongy mass would then be finished by being shingled and rolled as described under puddling (above). Three to four tons could be converted per batch with the method. Decline Steel began to replace iron for railroad rails as soon as the Bessemer process for its manufacture was adopted (1865 on). Iron remained dominant for structural applications until the 1880s, because of problems with brittle steel, caused by introduced nitrogen, high carbon, excess phosphorus, excessive temperature during rolling, or too-rapid rolling. 
By 1890 steel had largely replaced iron for structural applications. Sheet iron (Armco 99.97% pure iron) had good properties for use in appliances, being well-suited for enamelling and welding, and being rust-resistant. In the 1960s, the price of steel production was dropping due to recycling, and even using the Aston process, wrought iron production was labor-intensive. It has been estimated that the production of wrought iron is approximately twice as expensive as that of low-carbon steel. In the United States, the last plant closed in 1969. The last in the world was the Atlas Forge of Thomas Walmsley and Sons in Bolton, Great Britain, which closed in 1973. Its 1860s-era equipment was moved to the Blists Hill site of Ironbridge Gorge Museum for preservation. Some wrought iron is still being produced for heritage restoration purposes, but only by recycling scrap. Properties The slag inclusions, or stringers, in wrought iron give it properties not found in other forms of ferrous metal. There are approximately 250,000 inclusions per square inch. A fresh fracture shows a clear bluish color with a high silky luster and fibrous appearance. Wrought iron lacks the carbon content necessary for hardening through heat treatment, but in areas where steel was uncommon or unknown, tools were sometimes cold-worked (hence cold iron) to harden them. An advantage of its low carbon content is its excellent weldability. Furthermore, sheet wrought iron cannot bend as much as steel sheet metal when cold worked. Wrought iron can be melted and cast; however, the product is no longer wrought iron, since the slag stringers characteristic of wrought iron disappear on melting, so the product resembles impure, cast, Bessemer steel. There is no engineering advantage to melting and casting wrought iron, as compared to using cast iron or steel, both of which are cheaper. 
Due to the variations in iron ore origin and iron manufacture, wrought iron can be inferior or superior in corrosion resistance, compared to other iron alloys. There are many mechanisms behind its corrosion resistance. Chilton and Evans found that nickel enrichment bands reduce corrosion. They also found that in puddled, forged, and piled iron, the working-over of the metal spread out copper, nickel, and tin impurities that produce electrochemical conditions that slow down corrosion. The slag inclusions have been shown to disperse corrosion to an even film, enabling the iron to resist pitting. Another study has shown that slag inclusions are pathways to corrosion. Other studies show that sulfur in the wrought iron decreases corrosion resistance, while phosphorus increases corrosion resistance. Chloride ions also decrease wrought iron's corrosion resistance. Wrought iron may be welded in the same manner as mild steel, but the presence of oxide or inclusions will give defective results. The material has a rough surface, so it can hold platings and coatings better than smooth steel. For instance, a galvanic zinc finish applied to wrought iron is approximately 25–40% thicker than the same finish on steel. In Table 1, the chemical composition of wrought iron is compared to that of pig iron and carbon steel. Although it appears that wrought iron and plain carbon steel have similar chemical compositions, that is deceptive. Most of the manganese, sulfur, phosphorus, and silicon in the wrought iron are incorporated into the slag fibers, making wrought iron purer than plain carbon steel. Amongst its other properties, wrought iron becomes soft at red heat and can be easily forged and forge welded. It can be used to form temporary magnets, but it cannot be magnetized permanently, and is ductile, malleable, and tough. Ductility For most purposes, ductility rather than tensile strength is a more important measure of the quality of wrought iron. 
In tensile testing, the best irons are able to undergo considerable elongation before failure. Higher tensile wrought iron is brittle. Because of the large number of boiler explosions on steamboats in the early 1800s, the U.S. Congress passed legislation in 1830 which approved funds for correcting the problem. The treasury awarded a $1500 contract to the Franklin Institute to conduct a study. As part of the study, Walter R. Johnson and Benjamin Reeves conducted strength tests on boiler iron using a tester they had built in 1832, based on a design by Lagerhjelm in Sweden. Because of misunderstandings about tensile strength and ductility, their work did little to reduce failures. The importance of ductility was recognized by some very early in the development of tube boilers, as evidenced by comments from Thurston. Various 19th century investigations of boiler explosions, especially those by insurance companies, found the causes to be most commonly the result of operating boilers above the safe pressure range, either to get more power, or due to defective boiler pressure relief valves and difficulties in obtaining reliable indications of pressure and water levels. Poor fabrication was also a common problem. Also, the thickness of the iron in steam drums was low by modern standards. By the late 19th century, when metallurgists were able to better understand what properties and processes made good iron, iron in steam engines was being displaced by steel. Also, the old cylindrical boilers with fire tubes were displaced by water tube boilers, which are inherently safer. Purity In 2010, Gerry McDonnell demonstrated in England by analysis that a wrought iron bloom, from a traditional smelt, could be worked into 99.7% pure iron with no evidence of carbon. It was found that the stringers common to other wrought irons were not present, thus making it very malleable for the smith to work hot and cold. 
A commercial source of pure iron is available and is used by smiths as an alternative to traditional wrought iron and other new generation ferrous metals. Applications Wrought iron furniture has a long history, dating back to Roman times. There are 13th century wrought iron gates in Westminster Abbey in London, and wrought iron furniture seemed to reach its peak popularity in Britain in the 17th century, during the reign of William III and Mary II. However, cast iron and cheaper steel caused a gradual decline in wrought iron manufacture; the last wrought ironworks in Britain closed in 1974. It is also used to make home decor items such as baker's racks, wine racks, pot racks, etageres, table bases, desks, gates, beds, candle holders, curtain rods, bars, and bar stools. The vast majority of wrought iron available today is from reclaimed materials. Old bridges and anchor chains dredged from harbors are major sources. The greater corrosion resistance of wrought iron is due to the siliceous impurities (naturally occurring in iron ore), namely ferrous silicate. Wrought iron has been used for decades as a generic term across the gate and fencing industry, even though mild steel is used for manufacturing these "wrought iron" gates. This is mainly because of the limited availability of true wrought iron. Steel can also be hot-dip galvanised to prevent corrosion, which cannot be done with wrought iron.
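The coating comparison given earlier (a galvanic zinc finish roughly 25–40% thicker on wrought iron than on steel) works out as simple percentage arithmetic. In this sketch, the 85 µm steel baseline is an assumed example figure, not a value from the text.

```python
# Zinc coating thickness on wrought iron relative to steel, using the
# 25-40% gain quoted in the text. The 85 um steel baseline is an
# assumed example figure for illustration.
steel_zinc_um = 85.0
low = steel_zinc_um * 1.25   # +25% on wrought iron's rougher surface
high = steel_zinc_um * 1.40  # +40%
print(f"on wrought iron: {low:.2f}-{high:.2f} um")
```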
Physical sciences
Specific alloys
99608
https://en.wikipedia.org/wiki/Jejunum
Jejunum
The jejunum is the second part of the small intestine in humans and most higher vertebrates, including mammals, reptiles, and birds. Its lining is specialized for the absorption by enterocytes of small nutrient molecules which have been previously digested by enzymes in the duodenum. The jejunum lies between the duodenum and the ileum and is considered to start at the suspensory muscle of the duodenum, a location called the duodenojejunal flexure. The division between the jejunum and ileum is not anatomically distinct. In adult humans, the small intestine is usually long (post mortem), about two-fifths of which (about ) is the jejunum. Structure The interior surface of the jejunum—which is exposed to ingested food—is covered in finger-like projections of mucosa, called villi, which increase the surface area of tissue available to absorb nutrients from ingested foodstuffs. The epithelial cells which line these villi have microvilli. The transport of nutrients across epithelial cells through the jejunum and ileum includes the passive transport of the sugar fructose and the active transport of amino acids, small peptides, vitamins, and most glucose. The villi in the jejunum are much longer than in the duodenum or ileum. The pH in the jejunum is usually between 7 and 8 (neutral or slightly alkaline). The jejunum and the ileum are suspended by mesentery which gives the bowel great mobility within the abdomen. It also contains circular and longitudinal smooth muscle which helps to move food along by a process known as peristalsis. Histology The jejunum contains very few Brunner's glands (found in the duodenum) or Peyer's patches (found in the ileum). However, there are a few jejunal lymph nodes suspended in its mesentery. The jejunum has many large circular folds in its submucosa called plicae circulares that increase the surface area for nutrient absorption. The plicae circulares are best developed in the jejunum. 
There is no line of demarcation between the jejunum and the ileum. However, there are subtle histological differences: The jejunum has less fat inside its mesentery than the ileum. The jejunum is typically of larger diameter than the ileum. The villi of the jejunum look like long, finger-like projections, and are a histologically identifiable structure. While lymphoid tissue is present along the entire length of the intestinal tract, only the ileum has abundant Peyer's patches, which are unencapsulated lymphoid nodules that contain large numbers of lymphocytes and immune cells, like microfold cells. Function The lining of the jejunum is specialized for the absorption by enterocytes of small nutrient particles which have been previously digested by enzymes in the duodenum. Once absorbed, nutrients (with the exception of fat, which goes to the lymph) pass from the enterocytes into the enterohepatic circulation and enter the liver via the hepatic portal vein, where the blood is processed. Other animals In fish, the divisions of the small intestine are not as clear and the terms middle intestine or mid-gut may be used instead of jejunum. History Etymology Jejunum is derived from the Latin word jējūnus (iēiūnus), meaning "fasting." It was so called because this part of the small intestine was frequently found to be void of food following death, due to its intensive peristaltic activity relative to the duodenum and ileum. The Early Modern English adjective jejune is derived from the same root.
Biology and health sciences
Gastrointestinal tract
Biology
99610
https://en.wikipedia.org/wiki/Small%20intestine
Small intestine
The small intestine or small bowel is an organ in the gastrointestinal tract where most of the absorption of nutrients from food takes place. It lies between the stomach and large intestine, and receives bile and pancreatic juice through the pancreatic duct to aid in digestion. The small intestine is about long and folds many times to fit in the abdomen. Although it is longer than the large intestine, it is called the small intestine because it is narrower in diameter. The small intestine has three distinct regions – the duodenum, jejunum, and ileum. The duodenum, the shortest, is where preparation for absorption through small finger-like protrusions called villi begins. The jejunum is specialized for the absorption, through its lining by enterocytes, of small nutrient particles which have been previously digested by enzymes in the duodenum. The main function of the ileum is to absorb vitamin B12, bile salts, and whatever products of digestion were not absorbed by the jejunum. Structure Size The length of the small intestine can vary greatly, from as short as to as long as , also depending on the measuring technique used. The typical length in a living person is . The length depends both on how tall the person is and how the length is measured. Taller people generally have a longer small intestine and measurements are generally longer after death and when the bowel is empty. It is approximately in diameter in newborns after 35 weeks of gestational age, and in diameter in adults. On abdominal X-rays, the small intestine is considered to be abnormally dilated when the diameter exceeds 3 cm. On CT scans, a diameter of over 2.5 cm is considered abnormally dilated. The surface area of the human small intestinal mucosa, due to enlargement caused by folds, villi and microvilli, averages . Parts The small intestine is divided into three structural parts. The duodenum is a short structure ranging from in length, and shaped like a "C". It surrounds the head of the pancreas. 
It receives gastric chyme from the stomach, together with digestive juices from the pancreas (digestive enzymes) and the liver (bile). The digestive enzymes break down proteins and bile emulsifies fats into micelles. The duodenum contains Brunner's glands, which produce a mucus-rich alkaline secretion containing bicarbonate. These secretions, in combination with bicarbonate from the pancreas, neutralize the stomach acids contained in gastric chyme. The jejunum is the midsection of the small intestine, connecting the duodenum to the ileum. It is about long, and contains the circular folds, and intestinal villi that increase its surface area. Products of digestion (sugars, amino acids, and fatty acids) are absorbed into the bloodstream here. The suspensory muscle of the duodenum marks the division between the duodenum and the jejunum. The ileum is the final section of the small intestine. It is about 3 m long, and contains villi similar to the jejunum. It absorbs mainly vitamin B12 and bile acids, as well as any other remaining nutrients. The ileum joins to the cecum of the large intestine at the ileocecal junction. The jejunum and ileum are suspended in the abdominal cavity by mesentery. The mesentery is part of the peritoneum. Arteries, veins, lymph vessels and nerves travel within the mesentery. Blood supply The small intestine receives a blood supply from the celiac trunk and the superior mesenteric artery. These are both branches of the aorta. The duodenum receives blood from the coeliac trunk via the superior pancreaticoduodenal artery and from the superior mesenteric artery via the inferior pancreaticoduodenal artery. These two arteries both have anterior and posterior branches that meet in the midline and anastomose. The jejunum and ileum receive blood from the superior mesenteric artery. Branches of the superior mesenteric artery form a series of arches within the mesentery known as arterial arcades, which may be several layers deep. 
Straight blood vessels known as vasa recta travel from the arcades closest to the ileum and jejunum to the organs themselves. Microanatomy The three sections of the small intestine look similar to each other at a microscopic level, but there are some important differences. The parts of the intestine are as follows: Gene and protein expression About 20,000 protein coding genes are expressed in human cells and 70% of these genes are expressed in the normal duodenum. Some 300 of these genes are more specifically expressed in the duodenum with very few genes expressed only in the small intestine. The corresponding specific proteins are expressed in glandular cells of the mucosa, such as fatty acid binding protein FABP6. Most of the more specifically expressed genes in the small intestine are also expressed in the duodenum, for example FABP2 and the DEFA6 protein expressed in secretory granules of Paneth cells. Development The small intestine develops from the midgut of the primitive gut tube. By the fifth week of embryological life, the ileum begins to grow longer at a very fast rate, forming a U-shaped fold called the primary intestinal loop. The loop grows so fast in length that it outgrows the abdomen and protrudes through the umbilicus. By week 10, the loop retracts back into the abdomen. Between weeks six and ten the small intestine rotates anticlockwise, as viewed from the front of the embryo. It rotates a further 180 degrees after it has moved back into the abdomen. This process creates the twisted shape of the large intestine. Function Food from the stomach is allowed into the duodenum through the pylorus by a muscle called the pyloric sphincter. Digestion The small intestine is where most chemical digestion takes place. Many of the digestive enzymes that act in the small intestine are secreted by the pancreas and liver and enter the small intestine via the pancreatic duct. 
Pancreatic enzymes and bile from the gallbladder enter the small intestine in response to the hormone cholecystokinin, which is produced in response to the presence of nutrients. Secretin, another hormone produced in the small intestine, causes additional effects on the pancreas, where it promotes the release of bicarbonate into the duodenum in order to neutralize the potentially harmful acid coming from the stomach. The three major classes of nutrients that undergo digestion are proteins, lipids (fats) and carbohydrates: Proteins are degraded into small peptides and amino acids before absorption. Chemical breakdown begins in the stomach and continues in the small intestine. Proteolytic enzymes, including trypsin and chymotrypsin, are secreted by the pancreas and cleave proteins into smaller peptides. Carboxypeptidase, a pancreatic enzyme, splits off one amino acid at a time. Aminopeptidase and dipeptidase free the end amino acid products. Lipids (fats) are degraded into fatty acids and glycerol. Pancreatic lipase breaks down triglycerides into free fatty acids and monoglycerides. Pancreatic lipase works with the help of the salts from the bile secreted by the liver and stored in the gall bladder. Bile salts attach to triglycerides to help emulsify them, which aids access by pancreatic lipase. This occurs because the lipase is water-soluble but the fatty triglycerides are hydrophobic and tend to orient towards each other and away from the watery intestinal surroundings. The bile salts emulsify the triglycerides in the watery surroundings until the lipase can break them into the smaller components that are able to enter the villi for absorption. Some carbohydrates are degraded into simple sugars, or monosaccharides (e.g., glucose). Pancreatic amylase breaks down some carbohydrates (notably starch) into oligosaccharides. Other carbohydrates pass undigested into the large intestine for further handling by intestinal bacteria. 
Brush border enzymes take over from there. The most important brush border enzymes are dextrinase and glucoamylase, which further break down oligosaccharides. Other brush border enzymes are maltase, sucrase and lactase. Lactase is absent in some adult humans and, for them, lactose (a disaccharide), as well as most polysaccharides, is not digested in the small intestine. Some carbohydrates, such as cellulose, are not digested at all, despite being made of multiple glucose units. This is because the cellulose is made out of beta-glucose, making the inter-monosaccharidal bindings different from the ones present in starch, which consists of alpha-glucose. Humans lack the enzyme for splitting the beta-glucose-bonds, something reserved for herbivores and bacteria from the large intestine. Absorption Digested food is now able to pass into the blood vessels in the wall of the intestine through either diffusion or active transport. The small intestine is the site where most of the nutrients from ingested food are absorbed. The inner wall, or mucosa, of the small intestine, is lined with intestinal epithelium, a simple columnar epithelium. Structurally, the mucosa is covered in wrinkles or flaps called circular folds, which are considered permanent features in the mucosa. They are distinct from rugae which are considered non-permanent or temporary allowing for distention and contraction. From the circular folds project microscopic finger-like pieces of tissue called villi (Latin for "shaggy hair"). The individual epithelial cells also have finger-like projections known as microvilli. The functions of the circular folds, the villi, and the microvilli are to increase the amount of surface area available for the absorption of nutrients, and to limit the loss of said nutrients to intestinal fauna. Each villus has a network of capillaries and fine lymphatic vessels called lacteals close to its surface. 
The epithelial cells of the villi transport nutrients from the lumen of the intestine into these capillaries (amino acids and carbohydrates) and lacteals (lipids). The absorbed substances are transported via the blood vessels to different organs of the body where they are used to build complex substances such as the proteins required by the body. The material that remains undigested and unabsorbed passes into the large intestine. Absorption of the majority of nutrients takes place in the jejunum, with the following notable exceptions: Iron is absorbed in the duodenum. Folate (Vitamin B9) is absorbed in the duodenum and jejunum. Vitamin B12 and bile salts are absorbed in the terminal ileum. Vitamin B12 will only be absorbed by the ileum after binding to a protein known as intrinsic factor. Water is absorbed by osmosis and lipids by passive diffusion throughout the small intestine. Sodium bicarbonate is absorbed by active transport and glucose and amino acid co-transport. Fructose is absorbed by facilitated diffusion. Immunological The small intestine supports the body's immune system. The presence of gut flora appears to contribute positively to the host's immune system. Peyer's patches, located within the ileum of the small intestine, are an important part of the digestive tract's local immune system. They are part of the lymphatic system, and provide a site for antigens from potentially harmful bacteria or other microorganisms in the digestive tract to be sampled, and subsequently presented to the immune system. Clinical significance The small intestine is a complex organ, and as such, there are a very large number of possible conditions that may affect the function of the small bowel. A few of them are listed below, some of which are common, with up to 10% of people being affected at some time in their lives, while others are vanishingly rare. 
Small intestine obstruction or obstructive disorders: Meconium ileus, Paralytic ileus, Volvulus, Hernia, Intussusception, Adhesions, Obstruction from external pressure, Obstruction by masses in the lumen (foreign bodies, bezoar, gallstones)
Infectious diseases: Giardiasis, Ascariasis, Tropical sprue, Tapeworm (Diphyllobothrium latum, Taenia solium, Hymenolepis nana), Hookworm (e.g. Necator americanus, Ancylostoma duodenale), Nematodes (e.g. Ascaris lumbricoides), Other protozoa (e.g. Cryptosporidium parvum, Cyclospora, Microsporidia, Entamoeba histolytica)
Bacterial infections: Enterotoxigenic Escherichia coli, Salmonella enterica, Campylobacter, Shigella, Yersinia, Clostridioides difficile (antibiotic-associated colitis, pseudomembranous colitis), Mycobacterium (Mycobacterium avium paratuberculosis, disseminated Mycobacterium tuberculosis), Whipple's disease, Vibrio (cholera), Enteric (typhoid) fever (Salmonella enterica var. typhii) and paratyphoid fever, Bacillus cereus, Clostridium perfringens (gas gangrene)
Viral infections: Rotavirus, Norovirus, Astrovirus, Adenovirus, Calicivirus
Neoplasms (cancers): Adenocarcinoma, Carcinoid, Gastrointestinal stromal tumor (GIST), Lymphoma, Sarcoma, Leiomyoma, Metastatic tumors (especially SCLC or melanoma), Small intestine cancer
Developmental, congenital or genetic conditions: Duodenal (intestinal) atresia, Hirschsprung's disease, Meckel's diverticulum, Pyloric stenosis, Pancreas divisum, Ectopic pancreas, Enteric duplication cyst, Situs inversus, Cystic fibrosis, Malrotation, Persistent urachus, Omphalocele, Gastroschisis, Disaccharidase (lactase) deficiencies, Primary bile acid malabsorption, Gardner syndrome, Familial adenomatous polyposis syndrome (FAP)
Other conditions: Crohn's disease and the more general inflammatory bowel disease, Typhlitis (neutropenic colitis in the immunosuppressed), Coeliac disease (sprue or non-tropical sprue), Mesenteric ischemia, Embolus or thrombus of the superior mesenteric artery or the superior mesenteric vein, Arteriovenous malformation, Gastric dumping syndrome, Irritable bowel syndrome, Duodenal (peptic) ulcers, Gastrointestinal perforation, Hyperthyroidism, Diverticulitis, Radiation enterocolitis, Mesenteric cysts, Peritoneal infection, Sclerosing retroperitonitis, Small intestinal bacterial overgrowth, Endometriosis
Other animals The small intestine is found in all tetrapods and also in teleosts, although its form and length vary enormously between species. In teleosts, it is relatively short, typically around one and a half times the length of the fish's body. It commonly has a number of pyloric caeca, small pouch-like structures along its length that help to increase the overall surface area of the organ for digesting food. There is no ileocaecal valve in teleosts, with the boundary between the small intestine and the rectum being marked only by the end of the digestive epithelium. In tetrapods, the ileocaecal valve is always present, opening into the colon. The small intestine is typically longer in tetrapods than in teleosts, but especially so in herbivores, as well as in mammals and birds, which have a higher metabolic rate than amphibians or reptiles. The lining of the small intestine includes microscopic folds to increase its surface area in all vertebrates, but only in mammals do these develop into true villi. The boundaries between the duodenum, jejunum, and ileum are somewhat vague even in humans, and such distinctions are either ignored when discussing the anatomy of other animals, or are essentially arbitrary. There is no small intestine as such in non-teleost fish, such as sharks, sturgeons, and lungfish. Instead, the digestive part of the gut forms a spiral intestine, connecting the stomach to the rectum. In this type of gut, the intestine itself is relatively straight but has a long fold running along the inner surface in a spiral fashion, sometimes for dozens of turns. This valve greatly increases both the surface area and the effective length of the intestine. 
The lining of the spiral intestine is similar to that of the small intestine in teleosts and non-mammalian tetrapods. In lampreys, the spiral valve is extremely small, possibly because their diet requires little digestion. Hagfish have no spiral valve at all, with digestion occurring for almost the entire length of the intestine, which is not subdivided into different regions. Society and culture In traditional Chinese medicine, the small intestine is a yang organ.
Biology and health sciences
Digestive system
99611
https://en.wikipedia.org/wiki/Appendix%20%28anatomy%29
Appendix (anatomy)
The appendix (plural: appendices or appendixes; also vermiform appendix; cecal (or caecal, cæcal) appendix; vermix; or vermiform process) is a finger-like, blind-ended tube connected to the cecum, from which it develops in the embryo. The cecum is a pouch-like structure of the large intestine, located at the junction of the small and the large intestines. The term "vermiform" comes from Latin and means "worm-shaped". The appendix was once considered a vestigial organ, but this view has changed since the early 2000s. Research suggests that the appendix may serve an important purpose as a reservoir for beneficial gut bacteria. Structure The human appendix averages in length, ranging from . The diameter of the appendix is , and more than is considered a thickened or inflamed appendix. The longest appendix ever removed was long. The appendix is usually located in the lower right quadrant of the abdomen, near the right hip bone. The base of the appendix is located beneath the ileocecal valve that separates the large intestine from the small intestine. Its position within the abdomen corresponds to a point on the surface known as McBurney's point. The appendix is connected to the mesentery in the lower region of the ileum, by a short region of the mesocolon known as the mesoappendix. Variation Some identical twins—known as mirror image twins—can have a mirror-imaged anatomy, a congenital condition with the appendix located in the lower left quadrant of the abdomen instead of the lower right. Intestinal malrotation may also cause displacement of the appendix to the left side. While the base of the appendix is typically located below the ileocecal valve, the tip of the appendix can be variably located—in the pelvis, outside the peritoneum or behind the cecum. 
The prevalence of the different positions varies amongst populations with the retrocecal position being most common in Ghana and Sudan, with 67.3% and 58.3% occurrence respectively, in comparison to Iran and Bosnia where the pelvic position is most common, with 55.8% and 57.7% occurrence respectively. In very rare cases, the appendix may not be present at all (laparotomies for suspected appendicitis have given a frequency of 1 in 100,000). Sometimes there is a semi-circular fold of mucous membrane at the opening of the appendix. This valve of the vermiform appendix is also called Gerlach's valve. Functions Maintaining gut flora Although it has been long accepted that the immune tissue surrounding the appendix and elsewhere in the gut—called gut-associated lymphoid tissue—carries out a number of important functions, explanations were lacking for the distinctive shape of the appendix and its apparent lack of specific importance and function as judged by an absence of side effects following its removal. Therefore, the notion that the appendix is only vestigial became widely held. William Parker, Randy Bollinger, and colleagues at Duke University proposed in 2007 that the appendix serves as a haven for useful bacteria when illness flushes the bacteria from the rest of the intestines. This proposition is based on an understanding that emerged by the early 2000s of how the immune system supports the growth of beneficial intestinal bacteria, in combination with many well-known features of the appendix, including its architecture, its location just below the normal one-way flow of food and germs in the large intestine, and its association with copious amounts of immune tissue. Research performed at Winthrop–University Hospital showed that individuals without an appendix were four times as likely to have a recurrence of Clostridioides difficile colitis. The appendix, therefore, may act as a "safe house" for beneficial bacteria. 
This reservoir of bacteria could then serve to repopulate the gut flora in the digestive system following a bout of dysentery or cholera, or to boost it following a milder gastrointestinal illness. Immune and lymphatic systems The appendix has been identified as an important component of mammalian mucosal immune function, particularly B cell-mediated immune responses and extrathymically derived T cells. This structure helps in the proper movement and removal of waste matter in the digestive system, contains lymphatic vessels that regulate pathogens, and lastly, might even produce early defences that prevent deadly diseases. Additionally, the appendix is thought to provide further immune defence against invading pathogens by recruiting the lymphatic system's B and T cells to fight the viruses and bacteria that infect that portion of the bowel, and by training those cells so that immune responses are targeted, reliable, and less dangerous to the host. In addition, there are different immune cells called innate lymphoid cells that function in the gut in order to help the appendix maintain digestive health. Research also shows a positive correlation between the existence of the appendix and the concentration of cecal lymphoid tissue, which supports the suggestion that not only does the appendix evolve as a complex with the cecum but also has major immune benefits. Clinical significance Common diseases of the appendix (in humans) are appendicitis and carcinoid tumors (appendiceal carcinoid). Appendix cancer accounts for about 1 in 200 of all gastrointestinal malignancies. In rare cases, adenomas are also present. Appendicitis Appendicitis is a condition characterized by inflammation of the appendix. Pain often begins in the center of the abdomen, corresponding to the appendix's development as part of the embryonic midgut. This pain is typically a dull, poorly localized, visceral pain. 
As the inflammation progresses, the pain begins to localize more clearly to the right lower quadrant, as the peritoneum becomes inflamed. This peritoneal inflammation, or peritonitis, results in rebound tenderness (pain upon removal of pressure rather than application of pressure). In particular, it presents at McBurney's point, 1/3 of the way along a line drawn from the anterior superior iliac spine to the umbilicus. Typically, point (skin) pain is not present until the parietal peritoneum is inflamed, as well. Fever and an immune system response are also characteristic of appendicitis. Other signs and symptoms may include nausea and vomiting, low-grade fever that may get worse, constipation or diarrhea, abdominal bloating, or flatulence. Appendicitis usually requires the removal of the inflamed appendix, in an appendectomy either by laparotomy or laparoscopy. Untreated, the appendix may rupture, leading to peritonitis, followed by shock, and, if still untreated, death. Surgery The surgical removal of the appendix is called an appendectomy. This removal is normally performed as an emergency procedure when the patient is suffering from acute appendicitis. In the absence of surgical facilities, intravenous antibiotics are used to delay or avoid the onset of sepsis. In some cases, the appendicitis resolves completely; more often, an inflammatory mass forms around the appendix. This is a relative contraindication to surgery. The appendix is also used for the construction of an efferent urinary conduit, in an operation known as the Mitrofanoff procedure, in people with a neurogenic bladder. The appendix is also used as a means to access the colon in children with paralysed bowels or major rectal sphincter problems. The appendix is brought out to the skin surface and the child/parent can then attach a catheter and easily wash out the colon (via normal defaecation) using an appropriate solution. 
History Charles Darwin suggested that the appendix was mainly used by earlier hominids for digesting fibrous vegetation, then evolved to take on a new purpose over time. The very long cecum of some herbivorous animals, such as in the horse or the koala, appears to support this hypothesis. The koala's cecum enables it to host bacteria that specifically help to break down cellulose. Human ancestors may have also relied upon this system when they lived on a diet rich in foliage. As people began to eat more easily digested foods, they may have become less reliant on cellulose-rich plants for energy. As the cecum became less necessary for digestion, mutations that were previously deleterious (and would have hindered evolutionary progress) were no longer important, so the mutations survived. It is suggested that these alleles became more frequent and the cecum continued to shrink. After millions of years, the once-necessary cecum degraded to be the appendix of modern humans. Dr. Heather F. Smith of Midwestern University and colleagues explained: Recently ... improved understanding of gut immunity has merged with current thinking in biological and medical science, pointing to an apparent function of the mammalian cecal appendix as a safe-house for symbiotic gut microbes, preserving the flora during times of gastrointestinal infection in societies without modern medicine. This function is potentially a selective force for the evolution and maintenance of the appendix. Three morphotypes of cecal-appendices can be described among mammals based primarily on the shape of the cecum: a distinct appendix branching from a rounded or sac-like cecum (as in many primate species), an appendix located at the apex of a long and voluminous cecum (as in the rabbit, greater glider and Cape dune mole rat), and an appendix in the absence of a pronounced cecum (as in the wombat). 
In addition, long narrow appendix-like structures are found in mammals that either lack an apparent cecum (as in monotremes) or lack a distinct junction between the cecum and appendix-like structure (as in the koala). A cecal appendix has evolved independently at least twice, and apparently represents yet another example of convergence in morphology between Australian marsupials and placentals in the rest of the world. Although the appendix has apparently been lost by numerous species, it has also been maintained for more than 80 million years in at least one clade. In a 2013 paper, the appendix was found to have independently evolved in different animals at least 32 times (and perhaps as many as 38 times) and to have been lost no more than six times over the course of history. A more recent study using similar methods on an updated database yielded similar, though less spectacular results, with at least 29 gains and at the most 12 losses (all of which were ambiguous), and this is still significantly asymmetrical. This suggests that the cecal appendix has a selective advantage in many situations and argues strongly against its vestigial nature. Given that this organ may have a selective advantage in numerous situations, it appears to be associated with greater maximal longevity, for a given body mass. For example, in a 2023 study, the protective functions conferred against diarrhea were observed in young primates. This complex evolutionary history of the appendix, along with a great heterogeneity in its evolutionary rate in various taxa, suggests that it is a recurrent trait. Such a function may be useful in a culture lacking modern sanitation and healthcare practice, where diarrhea may be prevalent. 
Current epidemiological data on the cause of death in developing countries collected by the World Health Organization in 2001 show that acute diarrhea is now the fourth leading cause of disease-related death in developing countries (data summarized by the Bill and Melinda Gates Foundation). Two of the other leading causes of death are expected to have exerted limited or no selection pressure.
Biology and health sciences
Gastrointestinal tract
Biology
99645
https://en.wikipedia.org/wiki/Early%20modern%20human
Early modern human
Early modern human (EMH), or anatomically modern human (AMH), are terms used to distinguish Homo sapiens (sometimes Homo sapiens sapiens) that are anatomically consistent with the range of phenotypes seen in contemporary humans, from extinct archaic human species (of which some are at times also classified as subspecies of H. sapiens). This distinction is useful especially for times and regions where anatomically modern and archaic humans co-existed, for example, in Paleolithic Europe. Among the oldest known remains of Homo sapiens are those found at the Omo-Kibish I archaeological site in south-western Ethiopia, dating to about 233,000 to 196,000 years ago, the Florisbad Skull found at the Florisbad archaeological and paleontological site in South Africa, dating to about 259,000 years ago, and the Jebel Irhoud site in Morocco, dated to about 315,000 years ago. Extinct species of the genus Homo include Homo erectus (extant from roughly 2 to 0.1 million years ago) and a number of other species (by some authors considered subspecies of either H. sapiens or H. erectus). The divergence of the lineage leading to H. sapiens out of ancestral H. erectus (or an intermediate species such as Homo antecessor) is estimated to have occurred in Africa roughly 500,000 years ago. The earliest fossil evidence of early modern humans appears in Africa around 300,000 years ago, with the earliest genetic splits among modern people, according to some evidence, dating to around the same time. Sustained archaic human admixture with modern humans is known to have taken place both in Africa and (following the recent Out-of-Africa expansion) in Eurasia, between about 100,000 and 30,000 years ago. Name and taxonomy The binomial name Homo sapiens was coined by Linnaeus in 1758. The Latin noun homō (genitive hominis) means "human being", while the participle sapiēns means "discerning, wise, sensible". 
The species was initially thought to have emerged from a predecessor within the genus Homo around 300,000 to 200,000 years ago. A problem with the morphological classification of "anatomically modern" was that it would not have included certain extant populations. For this reason, a lineage-based (cladistic) definition of H. sapiens has been suggested, in which H. sapiens would by definition refer to the modern human lineage following the split from the Neanderthal lineage. Such a cladistic definition would extend the age of H. sapiens to over 500,000 years. Estimates for the split between the Homo sapiens line and combined Neanderthal/Denisovan line range from between 503,000 and 565,000 years ago; between 550,000 and 765,000 years ago; and (based on rates of dental evolution) possibly more than 800,000 years ago. Extant human populations have historically been divided into subspecies, but since around the 1980s all extant groups have tended to be subsumed into a single species, H. sapiens, avoiding division into subspecies altogether. Some sources show Neanderthals (H. neanderthalensis) as a subspecies (H. sapiens neanderthalensis). Similarly, the discovered specimens of the H. rhodesiensis species have been classified by some as a subspecies (H. sapiens rhodesiensis), although it remains more common to treat these last two as separate species within the genus Homo rather than as subspecies within H. sapiens. All humans are considered to be a part of the subspecies H. sapiens sapiens, a designation which has been a matter of debate since a species is usually not given a subspecies category unless there is evidence of multiple distinct subspecies. Age and speciation process Derivation from H. erectus The divergence of the lineage that would lead to H. sapiens out of archaic human varieties derived from H. erectus, is estimated as having taken place over 500,000 years ago (marking the split of the H. 
sapiens lineage from ancestors shared with other known archaic hominins). But the oldest split among modern human populations (such as the Khoisan split from other groups) has been recently dated to between 350,000 and 260,000 years ago, and the earliest known examples of H. sapiens fossils also date to about that period, including the Jebel Irhoud remains from Morocco (ca. 300,000 or 350–280,000 years ago), the Florisbad Skull from South Africa (ca. 259,000 years ago), and the Omo remains from Ethiopia (ca. 195,000, or, as more recently dated, ca. 233,000 years ago). An mtDNA study in 2019 proposed an origin of modern humans in Botswana (and a Khoisan split) of around 200,000 years ago. However, this proposal has been widely criticized by scholars, with the recent evidence overall (genetic, fossil, and archaeological) supporting an origin for H. sapiens approximately 100,000 years earlier and in a broader region of Africa than the study proposes. In September 2019, scientists proposed that the earliest H. sapiens (and last common human ancestor to modern humans) arose between 350,000 and 260,000 years ago through a merging of populations in East and South Africa. An alternative suggestion defines H. sapiens cladistically as including the lineage of modern humans since the split from the lineage of Neanderthals, roughly 500,000 to 800,000 years ago. Rogers et al. (2017) dated the divergence between archaic H. sapiens and the ancestors of Neanderthals and Denisovans, marked by a genetic bottleneck in the latter, to 744,000 years ago, with repeated early admixture events and Denisovans diverging from Neanderthals some 300 generations after their split from H. sapiens. The derivation of a comparatively homogeneous single species of H. sapiens from more diverse varieties of archaic humans (all of which were descended from the early dispersal of H. 
erectus some 1.8 million years ago) was debated in terms of two competing models during the 1980s: "recent African origin" postulated the emergence of H. sapiens from a single source population in Africa, which expanded and led to the extinction of all other human varieties, while the "multiregional evolution" model postulated the survival of regional forms of archaic humans, gradually converging into the modern human varieties by the mechanism of clinal variation, via genetic drift, gene flow and selection throughout the Pleistocene. Since the 2000s, the availability of data from archaeogenetics and population genetics has led to the emergence of a much more detailed picture, intermediate between the two competing scenarios outlined above: The recent Out-of-Africa expansion accounts for the predominant part of modern human ancestry, while there were also significant admixture events with regional archaic humans. Since the 1970s, the Omo remains, originally dated to some 195,000 years ago, have often been taken as the conventional cut-off point for the emergence of "anatomically modern humans". Since the 2000s, the discovery of older remains with comparable characteristics, and the discovery of ongoing hybridization between "modern" and "archaic" populations after the time of the Omo remains, have opened up a renewed debate on the age of H. sapiens in journalistic publications. H. s. idaltu, dated to 160,000 years ago, was postulated in 2003 as an extinct subspecies of H. sapiens. H. neanderthalensis, which became extinct about 40,000 years ago, was also at one point considered to be a subspecies, H. s. neanderthalensis. H. heidelbergensis, dated 600,000 to 300,000 years ago, has long been thought to be a likely candidate for the last common ancestor of the Neanderthal and modern human lineages. However, genetic evidence from the Sima de los Huesos fossils published in 2016 seems to suggest that H. 
heidelbergensis in its entirety should be included in the Neanderthal lineage, as "pre-Neanderthal" or "early Neanderthal", while the divergence time between the Neanderthal and modern lineages has been pushed back to before the emergence of H. heidelbergensis, to close to 800,000 years ago, the approximate time of disappearance of H. antecessor. Early Homo sapiens The term Middle Paleolithic is intended to cover the time between the first emergence of H. sapiens (roughly 300,000 years ago) and the period held by some to mark the emergence of full behavioral modernity (roughly by 50,000 years ago, corresponding to the start of the Upper Paleolithic). Many of the early modern human finds, like those of Jebel Irhoud, Omo, Herto, Florisbad, Skhul, and Peștera cu Oase exhibit a mix of archaic and modern traits. Skhul V, for example, has prominent brow ridges and a projecting face. However, the brain case is quite rounded and distinct from that of the Neanderthals and is similar to the brain case of modern humans. It is uncertain whether the robust traits of some of the early modern humans like Skhul V reflect mixed ancestry or retention of older traits. The "gracile" or lightly built skeleton of anatomically modern humans has been connected to a change in behavior, including increased cooperation and "resource transport". There is evidence that the characteristic human brain development, especially the prefrontal cortex, was due to "an exceptional acceleration of metabolome evolution ... paralleled by a drastic reduction in muscle strength. The observed rapid metabolic changes in brain and muscle, together with the unique human cognitive skills and low muscle performance, might reflect parallel mechanisms in human evolution." The Schöningen spears and the finds associated with them are evidence that complex technological skills already existed 300,000 years ago, and are the first clear proof of active (big-game) hunting. H. 
heidelbergensis already had intellectual and cognitive skills like anticipatory planning, thinking and acting that so far have only been attributed to modern man. The ongoing admixture events within anatomically modern human populations make it difficult to estimate the age of the matrilinear and patrilinear most recent common ancestors of modern populations (Mitochondrial Eve and Y-chromosomal Adam). Estimates of the age of Y-chromosomal Adam have been pushed back significantly with the discovery of an ancient Y-chromosomal lineage in 2013, to likely beyond 300,000 years ago. There have, however, been no reports of the survival of Y-chromosomal or mitochondrial DNA clearly deriving from archaic humans (which would push back the age of the most recent patrilinear or matrilinear ancestor beyond 500,000 years). Fossil teeth found at Qesem Cave (Israel) and dated to between 400,000 and 200,000 years ago have been compared to the dental material from the younger (120,000–80,000 years ago) Skhul and Qafzeh hominins. Dispersal and archaic admixture Dispersal of early H. sapiens began soon after its emergence, as evidenced by the North African Jebel Irhoud finds (dated to around 315,000 years ago). There is indirect evidence for H. sapiens presence in West Asia around 270,000 years ago. The Florisbad Skull from Florisbad, South Africa, dated to about 259,000 years ago, has also been classified as representing early H. sapiens. In September 2019, scientists proposed that the earliest H. sapiens (and last common human ancestor to modern humans) arose between 350,000 and 260,000 years ago through a merging of populations in East and South Africa. Among extant populations, the Khoi-San (or "Capoid") hunter-gatherers of Southern Africa may represent the human population with the earliest possible divergence within the group Homo sapiens sapiens. 
Their separation time has been estimated in a 2017 study to be between 350,000 and 260,000 years ago, compatible with the estimated age of early H. sapiens. The study states that the deep split-time estimation of 350 to 260 thousand years ago is consistent with the archaeological estimate for the onset of the Middle Stone Age across sub-Saharan Africa and coincides with archaic H. sapiens in southern Africa represented by, for example, the Florisbad skull dating to 259 (± 35) thousand years ago. H. s. idaltu, found at Middle Awash in Ethiopia, lived about 160,000 years ago, and H. sapiens lived at Omo Kibish in Ethiopia about 233,000–195,000 years ago. Two fossils from Guomde, Kenya, dated to at least (and likely more than) 180,000 years ago and (more precisely) to 300–270,000 years ago, have been tentatively assigned to H. sapiens, and similarities have been noted between them and the Omo Kibish remains. Fossil evidence for modern human presence in West Asia is ascertained for 177,000 years ago, and disputed fossil evidence suggests expansion as far as East Asia by 120,000 years ago. In July 2019, anthropologists reported the discovery of 210,000-year-old remains of H. sapiens and 170,000-year-old remains of H. neanderthalensis in Apidima Cave, Peloponnese, Greece, more than 150,000 years older than previous H. sapiens finds in Europe. A significant dispersal event, within Africa and to West Asia, is associated with the African megadroughts during MIS 5, beginning 130,000 years ago. A 2011 study located the origin of the basal population of contemporary human populations at 130,000 years ago, with the Khoi-San representing an "ancestral population cluster" located in southwestern Africa (near the coastal border of Namibia and Angola). 
While early modern human expansion in Sub-Saharan Africa before 130 kya persisted, early expansion to North Africa and Asia appears to have mostly disappeared by the end of MIS 5 (75,000 years ago), and is known only from fossil evidence and from archaic admixture. Eurasia was re-populated by early modern humans in the so-called "recent out-of-Africa migration" post-dating MIS 5, beginning around 70,000–50,000 years ago. In this expansion, bearers of mt-DNA haplogroup L3 left East Africa, likely reaching Arabia via the Bab-el-Mandeb, and in the Great Coastal Migration spread to South Asia, Maritime Southeast Asia and Oceania between 65,000 and 50,000 years ago, while Europe, East and North Asia were reached by about 45,000 years ago. Some evidence suggests that an early wave of humans may have reached the Americas by about 40,000–25,000 years ago. Evidence for the overwhelming contribution of this "recent" (L3-derived) expansion to all non-African populations was established based on mitochondrial DNA, combined with evidence based on physical anthropology of archaic specimens, during the 1990s and 2000s, and has also been supported by Y DNA and autosomal DNA. The assumption of complete replacement has been revised in the 2010s with the discovery of admixture events (introgression) of populations of H. sapiens with populations of archaic humans over the period of between roughly 100,000 and 30,000 years ago, both in Eurasia and in Sub-Saharan Africa. Neanderthal admixture, in the range of 1–4%, is found in all modern populations outside of Africa, including in Europeans, Asians, Papua New Guineans, Australian Aboriginals, Native Americans, and other non-Africans. This suggests that interbreeding between Neanderthals and anatomically modern humans took place after the recent "out of Africa" migration, likely between 60,000 and 40,000 years ago. 
Recent admixture analyses have added to the complexity, finding that Eastern Neanderthals derive up to 2% of their ancestry from anatomically modern humans who left Africa some 100 kya. The extent of Neanderthal admixture (and introgression of genes acquired by admixture) varies significantly between contemporary populations, being absent in Africans, intermediate in Europeans and highest in East Asians. Certain genes related to UV-light adaptation introgressed from Neanderthals have been found to have been selected for in East Asians specifically from 45,000 years ago until around 5,000 years ago. The extent of archaic admixture is of the order of about 1% to 4% in Europeans and East Asians, and highest among Melanesians (the last also having Denisova hominin admixture at 4% to 6% in addition to Neanderthal admixture). Cumulatively, about 20% of the Neanderthal genome is estimated to remain present, spread across contemporary populations. In September 2019, scientists reported the computerized determination, based on 260 CT scans, of a virtual skull shape of the last common human ancestor to modern humans/H. sapiens, representative of the earliest modern humans, and suggested that modern humans arose between 350,000 and 260,000 years ago through a merging of populations in East and South Africa, while North African fossils may represent a population which introgressed into Neandertals during the LMP. According to a study published in 2020, there are indications that 2% to 19% (with point estimates of about 6.6% and 7.0%) of the DNA of four West African populations may have come from an unknown archaic hominin which split from the ancestor of humans and Neanderthals between 360 kya and 1.02 mya. Anatomy Generally, modern humans are more lightly built (or more "gracile") than the more "robust" archaic humans. Nevertheless, contemporary humans exhibit high variability in many physiological traits, and may exhibit remarkable "robustness". 
There are still a number of physiological details which can be taken as reliably differentiating the physiology of Neanderthals vs. anatomically modern humans. Anatomical modernity The term "anatomically modern humans" (AMH) is used with varying scope depending on context, to distinguish "anatomically modern" Homo sapiens from archaic humans such as Neanderthals and Middle and Lower Paleolithic hominins with transitional features intermediate between H. erectus, Neanderthals and early AMH, called archaic Homo sapiens. In a convention popular in the 1990s, Neanderthals were classified as a subspecies of H. sapiens, as H. s. neanderthalensis, while AMH (or European early modern humans, EEMH) was taken to refer to "Cro-Magnon" or H. s. sapiens. Under this nomenclature (Neanderthals considered H. sapiens), the term "anatomically modern Homo sapiens" (AMHS) has also been used to refer to EEMH ("Cro-Magnons"). It has since become more common to designate Neanderthals as a separate species, H. neanderthalensis, so that AMH in the European context refers to H. sapiens, but the question is by no means resolved. In this narrower definition of H. sapiens, the subspecies Homo sapiens idaltu, discovered in 2003, also falls under the umbrella of "anatomically modern". The recognition of H. sapiens idaltu as a valid subspecies of the anatomically modern human lineage would justify the description of contemporary humans with the subspecies name Homo sapiens sapiens. However, biological anthropologist Chris Stringer does not consider idaltu distinct enough within H. sapiens to warrant its own subspecies designation. A further division of AMH into "early" or "robust" vs. "post-glacial" or "gracile" subtypes has since been used for convenience. The emergence of "gracile AMH" is taken to reflect a process towards a smaller and more fine-boned skeleton beginning around 50,000–30,000 years ago. 
Braincase anatomy The cranium lacks a pronounced occipital bun in the neck, a bulge that anchored considerable neck muscles in Neanderthals. Modern humans, even the earlier ones, generally have a larger fore-brain than the archaic people, so that the brain sits above rather than behind the eyes. This will usually (though not always) give a higher forehead and a reduced brow ridge. Early modern people and some living people do, however, have quite pronounced brow ridges, but they differ from those of archaic forms by having a supraorbital foramen or notch, forming a groove through the ridge above each eye. This splits the ridge into a central part and two distal parts. In current humans, often only the central section of the ridge is preserved (if it is preserved at all). This contrasts with archaic humans, where the brow ridge is pronounced and unbroken. Modern humans commonly have a steep, even vertical forehead whereas their predecessors had foreheads that sloped strongly backwards. According to Desmond Morris, the vertical forehead in humans plays an important role in human communication through eyebrow movements and forehead skin wrinkling. Brain size in both Neanderthals and AMH is significantly larger on average (but overlapping in range) than brain size in H. erectus. Neanderthal and AMH brain sizes are in the same range, but there are differences in the relative sizes of individual brain areas, with significantly larger visual systems in Neanderthals than in AMH. Jaw anatomy Compared to archaic people, anatomically modern humans have smaller, differently shaped teeth. This results in a smaller, more receded dentary, making the rest of the jaw-line stand out, giving an often quite prominent chin. The central part of the mandible forming the chin carries a triangularly shaped area forming the apex of the chin, called the mental trigone, not found in archaic humans. 
Particularly in living populations, the use of fire and tools requires fewer jaw muscles, giving slender, more gracile jaws. Compared to archaic people, modern humans have smaller, lower faces. Body skeleton structure The body skeletons of even the earliest and most robustly built modern humans were less robust than those of Neanderthals (and, from what little we know, those of Denisovans), having essentially modern proportions. Particularly regarding the long bones of the limbs, the distal bones (the radius/ulna and tibia/fibula) are nearly the same size or slightly shorter than the proximal bones (the humerus and femur). In ancient people, particularly Neanderthals, the distal bones were shorter, usually thought to be an adaptation to cold climate. The same adaptation is found in some modern people living in the polar regions. Height ranges overlap between Neanderthals and AMH; cited Neanderthal averages for males and females are largely identical to pre-industrial average heights for AMH, though contemporary national averages vary considerably, and Neanderthal ranges approximate the contemporary height distribution measured among Malay people, for one. Recent evolution Following the peopling of Africa some 130,000 years ago, and the recent Out-of-Africa expansion some 70,000 to 50,000 years ago, some sub-populations of H. sapiens had been essentially isolated for tens of thousands of years prior to the early modern Age of Discovery. Combined with archaic admixture this has resulted in significant genetic variation, which in some instances has been shown to be the result of directional selection taking place over the past 15,000 years, i.e., significantly later than possible archaic admixture events. Some climatic adaptations, such as high-altitude adaptation in humans, are thought to have been acquired by archaic admixture. 
Introgression of genetic variants acquired by Neanderthal admixture has different distributions in Europeans and East Asians, reflecting differences in recent selective pressures. A 2014 study reported that Neanderthal-derived variants found in East Asian populations showed clustering in functional groups related to immune and haematopoietic pathways, while European populations showed clustering in functional groups related to the lipid catabolic process. A 2017 study found correlations of Neanderthal admixture with phenotypic traits in modern European populations. Physiological or phenotypical changes have been traced to Upper Paleolithic mutations, such as the East Asian variant of the EDAR gene, dated to c. 35,000 years ago. Recent divergence of Eurasian lineages was sped up significantly during the Last Glacial Maximum (LGM), the Mesolithic and the Neolithic, due to increased selection pressures and due to founder effects associated with migration. Alleles predictive of light skin have been found in Neanderthals, but the alleles for light skin in Europeans and East Asians, associated with KITLG and ASIP, are thought to have been acquired not by archaic admixture but by recent mutations since the LGM. Phenotypes associated with the "white" or "Caucasian" populations of Western Eurasian stock emerge during the LGM, from about 19,000 years ago. Average cranial capacity in modern human populations varies in the range of 1,200 to 1,450 cm³ for adult males. Larger cranial volume is associated with climatic region, the largest averages being found in populations of Siberia and the Arctic. Both Neanderthals and EEMH had somewhat larger cranial volumes on average than modern Europeans, suggesting the relaxation of selection pressures for larger brain volume after the end of the LGM. 
Examples of still later adaptations, related to agriculture and animal domestication, such as the East Asian types of ADH1B associated with rice domestication, or lactase persistence, are due to recent selection pressures. An even more recent adaptation has been proposed for the Austronesian Sama-Bajau, developed under selection pressures associated with subsisting on freediving over the past thousand years or so. Behavioral modernity Behavioral modernity, involving the development of language, figurative art and early forms of religion (etc.), is taken to have arisen before 40,000 years ago, marking the beginning of the Upper Paleolithic (in African contexts also known as the Later Stone Age). There is considerable debate regarding whether the earliest anatomically modern humans behaved similarly to recent or existing humans. Behavioral modernity is taken to include fully developed language (requiring the capacity for abstract thought), artistic expression, early forms of religious behavior, increased cooperation and the formation of early settlements, and the production of articulated tools from lithic cores, bone or antler. The term Upper Paleolithic is intended to cover the period since the rapid expansion of modern humans throughout Eurasia, which coincides with the first appearance of Paleolithic art such as cave paintings and the development of technological innovation such as the spear-thrower. The Upper Paleolithic begins around 50,000 to 40,000 years ago, and also coincides with the disappearance of archaic humans such as the Neanderthals. The term "behavioral modernity" is somewhat disputed. It is most often used for the set of characteristics marking the Upper Paleolithic, but some scholars use "behavioral modernity" for the emergence of H. sapiens around 200,000 years ago, while others use the term for the rapid developments occurring around 50,000 years ago. It has been proposed that the emergence of behavioral modernity was a gradual process. 
Examples of behavioural modernity The equivalent of the Eurasian Upper Paleolithic in African archaeology is known as the Later Stone Age, also beginning roughly 40,000 years ago. While most clear evidence for behavioral modernity uncovered since the later 19th century was from Europe, such as the Venus figurines and other artefacts from the Aurignacian, more recent archaeological research has shown that all essential elements of the kind of material culture typical of contemporary San hunter-gatherers in Southern Africa were also present by at least 40,000 years ago, including digging sticks of similar materials used today, ostrich egg shell beads, bone arrow heads with individual maker's marks etched and embedded with red ochre, and poison applicators. There is also a suggestion that "pressure flaking best explains the morphology of lithic artifacts recovered from the c. 75-ka Middle Stone Age levels at Blombos Cave, South Africa. The technique was used during the final shaping of Still Bay bifacial points made on heat-treated silcrete." Both pressure flaking and heat treatment of materials were previously thought to have occurred much later in prehistory, and both indicate a behaviourally modern sophistication in the use of natural materials. Further reports of research on cave sites along the southern African coast indicate that "the debate as to when cultural and cognitive characteristics typical of modern humans first appeared" may be coming to an end, as "advanced technologies with elaborate chains of production" which "often demand high-fidelity transmission and thus language" have been found at the South African Pinnacle Point Site 5–6. These have been dated to approximately 71,000 years ago. The researchers suggest that their research "shows that microlithic technology originated early in South Africa by 71 kya, evolved over a vast time span (c. 11,000 years), and was typically coupled to complex heat treatment that persisted for nearly 100,000 years. 
Advanced technologies in Africa were early and enduring; a small sample of excavated sites in Africa is the best explanation for any perceived 'flickering' pattern." Increases in behavioral complexity have been speculated to have been linked to an earlier climatic change to much drier conditions between 135,000 and 75,000 years ago. This might have led human groups seeking refuge from the inland droughts to expand along the coastal marshes rich in shellfish and other resources. Since sea levels were low due to so much water tied up in glaciers, such marshlands would have occurred all along the southern coasts of Eurasia. The use of rafts and boats may well have facilitated exploration of offshore islands and travel along the coast, and eventually permitted expansion to New Guinea and then to Australia. In addition, a variety of other evidence of abstract imagery, widened subsistence strategies, and other "modern" behaviors has been discovered in Africa, especially South, North, and East Africa, predating 50,000 years ago (with some predating 100,000 years ago). The Blombos Cave site in South Africa, for example, is famous for rectangular slabs of ochre engraved with geometric designs. Using multiple dating techniques, the site was confirmed to be around 77,000 and 100,000–75,000 years old. Ostrich egg shell containers engraved with geometric designs dating to 60,000 years ago were found at Diepkloof, South Africa. Beads and other personal ornamentation have been found from Morocco which might be as much as 130,000 years old; as well, the Cave of Hearths in South Africa has yielded a number of beads dating from significantly prior to 50,000 years ago, and shell beads dating to about 75,000 years ago have been found at Blombos Cave, South Africa. 
Specialized projectile weapons as well have been found at various sites in Middle Stone Age Africa, including bone and stone arrowheads at South African sites such as Sibudu Cave (along with an early bone needle also found at Sibudu) dating approximately 72,000–60,000 years ago, some of which may have been tipped with poisons, and bone harpoons at the Central African site of Katanda dating ca. 90,000 years ago. Evidence also exists for the systematic heat treating of silcrete stone to increase its flake-ability for the purpose of toolmaking, beginning approximately 164,000 years ago at the South African site of Pinnacle Point and becoming common there for the creation of microlithic tools at about 72,000 years ago. In 2008, an ochre processing workshop, likely for the production of paints, was uncovered dating to ca. 100,000 years ago at Blombos Cave, South Africa. Analysis shows that a liquefied pigment-rich mixture was produced and stored in two abalone shells, and that ochre, bone, charcoal, grindstones and hammer-stones also formed a composite part of the toolkits. Evidence for the complexity of the task includes procuring and combining raw materials from various sources (implying they had a mental template of the process they would follow), possibly using pyrotechnology to facilitate fat extraction from bone, using a probable recipe to produce the compound, and the use of shell containers for mixing and storage for later use. Modern behaviors, such as the making of shell beads, bone tools and arrows, and the use of ochre pigment, are evident at a Kenyan site by 78,000–67,000 years ago. Evidence of early stone-tipped projectile weapons (a characteristic tool of Homo sapiens), the stone tips of javelins or throwing spears, was discovered in 2013 at the Ethiopian site of Gademotta, and dates to around 279,000 years ago. 
Expanding subsistence strategies beyond big-game hunting and the consequent diversity in tool types have been noted as signs of behavioral modernity. A number of South African sites have shown an early reliance on aquatic resources from fish to shellfish. Pinnacle Point, in particular, shows exploitation of marine resources as early as 120,000 years ago, perhaps in response to more arid conditions inland. Establishing a reliance on predictable shellfish deposits, for example, could reduce mobility and facilitate complex social systems and symbolic behavior. Blombos Cave and Site 440 in Sudan both show evidence of fishing as well. Taphonomic change in fish skeletons from Blombos Cave has been interpreted as capture of live fish, clearly an intentional human behavior. Humans in North Africa (Nazlet Sabaha, Egypt) are known to have dabbled in chert mining, as early as ≈100,000 years ago, for the construction of stone tools. Evidence was found in 2018, dating to about 320,000 years ago at the site of Olorgesailie in Kenya, of the early emergence of modern behaviors including: the trade and long-distance transportation of resources (such as obsidian), the use of pigments, and the possible making of projectile points. The authors of three 2018 studies on the site observe that the evidence of these behaviors is roughly contemporary with the earliest known Homo sapiens fossil remains from Africa (such as at Jebel Irhoud and Florisbad), and they suggest that complex and modern behaviors began in Africa around the time of the emergence of Homo sapiens. In 2019, further evidence of Middle Stone Age complex projectile weapons in Africa was found at Aduma, Ethiopia, dated 100,000–80,000 years ago, in the form of points considered likely to belong to darts delivered by spear throwers. Pace of progress during Homo sapiens history Homo sapiens' technological and cultural progress appears to have been very much faster in recent millennia than in its early periods. 
The pace of development may indeed have accelerated, due to a massively larger population (so more humans extant to think of innovations), more communication and sharing of ideas among human populations, and the accumulation of thinking tools. However, it may also be that the pace of advancement always looks relatively faster to humans in the time they live, because previous advances go unrecognised.
Biology and health sciences
Homo
Biology
99692
https://en.wikipedia.org/wiki/Gaur
Gaur
The gaur (Bos gaurus) is a large bovine native to the Indian Subcontinent and Southeast Asia, and has been listed as Vulnerable on the IUCN Red List since 1986. The global population was estimated at a maximum of 21,000 mature individuals in 2016, with the majority of those in India. It is the largest species among the wild cattle and the Bovidae. Etymology The Sanskrit word means 'white, yellowish, reddish'. The Sanskrit word means a kind of water buffalo. The Hindi word means 'fair-skinned, fair, white'. Taxonomy Bison gaurus was the scientific name proposed by Charles Hamilton Smith in 1827. Later authors subordinated the species under either Bos or Bibos. To date, three gaur subspecies have been recognized: B. g. gaurus, the nominate subspecies, ranges in India, Nepal and Bhutan. B. g. readei, described by Richard Lydekker in 1903 based on a specimen from Myanmar, is thought to range from Upper Myanmar to the Tanintharyi Region. B. g. hubbacki, described by Lydekker in 1907 based on a specimen from Pahang in Peninsular Malaysia, was thought to range from Peninsular Malaysia northward through Tenasserim. This classification, based largely on differences in coloration and size, is no longer widely recognized. In 2003, the International Commission on Zoological Nomenclature fixed the valid specific name of the wild gaur as the first available name based on the wild population, despite being antedated by the name for the domestic form. Most authors have adopted the binomial Bos gaurus for the wild species as valid for the taxon. In recognition of phenotypic differences between zoological specimens of Indian and Southeast Asian gaur, the trinomials Bos gaurus gaurus and Bos gaurus laosiensis are provisionally accepted, pending further morphometric and genetic study. Within the genus Bos, the gaur is most closely related to the banteng (Bos javanicus) and the probably now extinct kouprey (Bos sauveli), which are also native to Southeast Asia. 
Relationships of members of the genus Bos based on nuclear genomes after Sinding, et al. 2021. Characteristics The gaur is the largest extant bovid. It is a strong and massively built bovine with a high convex ridge on the forehead between the horns, which protrudes anteriorly, causing a deep hollow in the profile of the upper part of the head. There is a prominent ridge on the back. The ears are very large. In the old bulls, the hair becomes very thin on the back. The adult male is dark brown, approaching black in very old individuals. The upper part of the head, from above the eyes to the nape of the neck, is ashy grey, or occasionally dirty white. The muzzle is pale coloured, and the lower part of the legs are pure white or tan. The cows and young bulls are paler, and in some instances have a rufous tinge, which is most marked in groups inhabiting dry and open areas. The tail is shorter than in the typical oxen, reaching only to the hocks. They have a distinct ridge running from the shoulders to the middle of the back; the shoulders may be as much as higher than the rump. This ridge is caused by the great length of the spinous processes of the vertebrae of the fore-part of the trunk as compared with those of the loins. The hair is short, fine and glossy; the hooves are narrow and pointed. The gaur has a distinct dewlap on the throat and chest. Both sexes have horns, which grow from the sides of the head, curving upwards. Between the horns is a high convex ridge on the forehead. At their bases they present an elliptical cross-section, a characteristic that is more strongly marked in bulls than in cows. The horns are decidedly flattened at the base and regularly curved throughout their length, and are bent inward and slightly backward at their tips. The colour of the horns is some shade of pale green or yellow throughout the greater part of their length, but the tips are black. The horns, of medium size by large bovid standards, grow to a length of . 
The cow is considerably lighter in colour than the bull. Her horns are more slender and upright, with more inward curvature, and the frontal ridge is scarcely perceptible. In young animals, the horns are smooth and polished. In old bulls they are rugged and dented at the base. The gaur has a head-and-body length of with a long tail, and is high at the shoulder, averaging about in females and in males. At the top of its muscular hump just behind its shoulder, an average adult male is just under tall and the male's girth at its midsection (behind its shoulders) averages about . Males are about one-fourth larger and heavier than females. Body mass ranges widely from in adult females and in adult males. In general, measurements are derived from gaurs surveyed in India. In a sample of 13 individuals in India, gaur males averaged about and females weighed a median of approximately . In China, the shoulder height of gaurs ranges from , and bulls weigh up to . Distribution and habitat The gaur historically occurred throughout mainland South and Southeast Asia, including Nepal, India, Bhutan, Bangladesh, Myanmar, Thailand, Laos, Cambodia, Vietnam and China. Today, its range is seriously fragmented, and it is regionally extinct in Peninsular Malaysia and Sri Lanka. It is largely confined to evergreen forests or semi-evergreen and moist deciduous forests, but also inhabits deciduous forest areas at the periphery. Gaur habitat is characterized by large, relatively undisturbed forest tracts, hilly terrain below an elevation of , availability of water, and an abundance of forage in the form of grasses, bamboo, shrubs, and trees. Its apparent preference for hilly terrain may be partly due to the earlier conversion of most of the plains and other low-lying areas to croplands and pastures. It occurs from sea level to an elevation of at least . Low-lying areas seem to comprise optimal habitat. 
In Nepal, the gaur population was estimated to be 250–350 in the mid-1990s, with the majority in Chitwan National Park and the adjacent Parsa National Park. These two parks are connected by a chain of forested hills. Population trends appeared to be relatively stable. The Chitwan population increased from 188 to 368 animals between 1997 and 2016. A census conducted in Parsa National Park confirmed the presence of 112 gaur in the same period. In India, the population was estimated to be 12,000–22,000 in the mid-1990s. The Western Ghats and their outflanking hills in southern India constitute one of the most extensive extant strongholds of gaur, in particular in the Wayanad – Nagarhole – Mudumalai – Bandipur complex. The populations in India, Bhutan and Bangladesh are estimated to comprise 23,000–34,000 individuals. Major populations of about 2,000 individuals have been reported in both Nagarahole and Bandipur National Parks, over 1,000 individuals in Tadoba Andhari Tiger Project, 500–1,000 individuals in both Periyar Tiger Reserve and Silent Valley and adjoining forest complexes, and over 800 individuals in Bhadra Wildlife Sanctuary. Trishna Wildlife Sanctuary in southern Tripura is home to a significant number of individuals. In Bhutan, they apparently persist all over the southern foothill zone, notably in Royal Manas National Park, Phibsoo Wildlife Sanctuary and Khaling Wildlife Sanctuary. In Bangladesh, a few gaur occur in the Chittagong Hill Tracts, mostly in Bandarban district. During a camera trap project, only a few gaur were recorded, indicating that the population is fragmented and probably declining. Gaurs are hunted by local tribal people in the Sangu Matamuhari reserve forest, although hunting is prohibited in Bangladesh. In Thailand, gaur were once found throughout the country, but fewer than 1,000 individuals were estimated to have remained in the 1990s. 
In the mostly semi-evergreen Dong Phayayen – Khao Yai Forest Complex, they were recorded at low density at the turn of the century, with an estimated total of about 150 individuals. In Vietnam, several areas in Đắk Lắk Province were known to contain gaur in 1997. Several herds persist in Cát Tiên National Park and in adjacent state forest enterprises. The current status of the gaur population is poorly known; they may be in serious decline. In Cambodia, gaur declined considerably in the period from the late 1960s to the early 1990s. The most substantial population of the country remained in Mondulkiri Province, where up to 1,000 individuals may have survived up to 2010 in a forested landscape of over . Results of camera trapping carried out in 2009 suggested a globally significant population of gaur in Sre Pok Wildlife Sanctuary and the contiguous Phnom Prich Wildlife Sanctuary, and line transect distance sampling from Keo Seima Wildlife Sanctuary showed around 500 individuals in 2010. Since then, there has been rapid decline of these populations, and likely of all populations across Cambodia. Updated figures for Keo Seima Wildlife Sanctuary show a decline to only 33 individuals in 2020, and 2020 encounter rates in Sre Pok Wildlife Sanctuary and Phnom Prich Wildlife Sanctuary were too low to analyze with distance sampling. In Laos, up to 200 individuals were estimated to live within protected area boundaries in the mid-1990s. They were reported discontinuously distributed in low numbers. Overhunting had reduced the population, and survivors occurred mainly in remote sites. Fewer than six National Biodiversity Conservation Areas held more than 50 individuals. Areas with populations likely to be nationally important included the Nam Theun catchment and the adjoining plateau. Subsequent surveys carried out a decade later using fairly intensive camera trapping no longer recorded any gaur, indicating a massive decline of the population. 
In China, the gaur was present up to the 34th parallel north during the late Neolithic period, about 5,200 years BP. Now it occurs only in heavily fragmented populations in Yunnan and southeastern Tibet. By the 1980s, it was extirpated in Lancang County, and the remaining animals were split into two populations, in Xishuangbanna–Simao District and Cangyuan. In the mid-1990s, a population of 600–800 individuals may have lived in Yunnan Province, with the majority occurring in Xishuangbanna National Nature Reserve. In 2016, it was estimated that the global population had declined by more than 70% in Indochina and Malaysia during the last three generations of 24–30 years, and that the gaur is locally extinct in Sri Lanka. Populations in well-protected areas appeared to be stable. Ecology and behaviour Where gaur have not been disturbed, they are basically diurnal. In other areas, they have become largely nocturnal due to human impact on the forest. In central India, they are most active at night, and are rarely seen in the open after 8 o'clock in the morning. During the dry season, herds congregate and remain in small areas, dispersing into the hills with the arrival of the monsoon. While gaur depend on water for drinking, they do not seem to bathe or wallow. In January and February, gaur live in small herds of eight to 11 individuals, one of which is a bull. In April or May, more bulls may join the herd for mating, and individual bulls may move from herd to herd, each mating with many cows. In May or June, they leave the herd and may form herds of bulls only or live alone. Herds wander each day. Each herd has a nonexclusive home range, and sometimes herds may join in groups of 50 or more. Gaur herds are led by an old adult female, the matriarch. Adult males may be solitary. During the peak of the breeding season, unattached males wander widely in search of receptive females. 
No serious fighting between males has been recorded, with size being the major factor in determining dominance. Males make a mating call of clear, resonant tones which may carry for more than . Gaur have also been known to make a whistling snort as an alarm call, and a low, cow-like moo. In some regions in India where human disturbance is minor, the gaur is very timid and shy despite its great size and power. When alarmed, gaur crash into the jungle at surprising speed. However, in Southeast Asia and South India, where they are used to the presence of humans, gaur are said by locals to be very bold and aggressive. They are frequently known to go into fields and graze alongside domestic cattle, sometimes killing them in fights. Gaur bulls may charge without provocation, especially during summer, when the intense heat and parasitic insects make them more short-tempered than usual. To warn other members of its herd of approaching danger, the gaur lets out a high whistle for help. Feeding ecology The gaur grazes and browses on mostly the upper portions of plants, such as leaf blades, stems, seeds and flowers of grass species, including kadam (Adina cordifolia). During a survey in the Bhagwan Mahaveer Sanctuary and Mollem National Park, gaurs were observed to feed on 32 species of plants. They consume herbs, young shoots, flowers, and fruits of elephant apple (Dillenia), with a high preference for leaves. Food preference varies by season. In winter and monsoon, they feed preferably on fine and fresh true grasses and herb species of the legume family, such as tick clover (Desmodium triflorum), but also browse on leaves of shrub species such as karvy (Strobilanthes callosus), Indian boxwood (Gardenia latifolia), mallow-leaved crossberry (Grewia abutifolia), East-Indian screw tree (Helicteres) and the chaste tree (Vitex negundo). 
In summer, they also feed on the bark of teak (Tectona grandis), on the fruit of golden shower tree (Cassia fistula), and on the bark and fruit of cashew (Anacardium occidentale). Gaur spend most of the day feeding. Peak feeding activity was observed between 6:30 and 8:30 in the mornings and between 17:30 and 18:45 in the evenings. During the hottest hours of the day, they rest in the shade of big trees. They may debark trees due to shortages of preferred food, and of minerals and trace elements needed for their nutrition, or for maintaining an optimum fiber/protein ratio for proper digestion of food and better assimilation of nutrients. They may turn to available browse species and fibrous teak bark in summer as green grass and herbaceous resources dry up. High concentrations of calcium (22400 ppm) and phosphorus (400 ppm) have been reported in teak bark, so consumption of teak bark may help animals to satisfy both mineral and other food needs. Long-term survival and conservation of these herbivores depend on the availability of preferred plant species for food. Hence, protection of the historically preferred habitats used by gaur is a significant factor in conservation biology. Reproduction Sexual maturity occurs in the gaur's second or third year. Breeding takes place year-round, but typically peaks between December and June. Females have one calf, rarely two, after a gestation period of about 275 days, a few days less than in domestic cattle. Calves are typically weaned after seven to 12 months. The lifespan of a gaur in captivity is up to 30 years. Natural predators Due to their size and power, gaur have few natural predators besides humans. Leopards, dhole packs and large mugger crocodiles occasionally attack unguarded calves or unhealthy animals. Only tigers and saltwater crocodiles have been reported to kill adult gaur. 
However, the habitat and distribution of the gaur and saltwater crocodile seldom overlap in recent times, due to the decreasing range of both species. A crocodile likely would need to be a mature adult male (more than and ) to make a successful attack on healthy adult gaurs. Tigers hunt young or infirm gaur, but have also been reported to have killed healthy bulls weighing at least . When confronted by a tiger, the adult members of a gaur herd often form a circle surrounding the vulnerable young and calves, shielding them from the big cat. As tigers rely on ambush attacks when taking on prey as large as a gaur, they will almost always abandon a hunt if detected and met in this manner. A herd of gaur in Malaysia encircled a calf killed by a tiger and prevented it from approaching the carcass. Nevertheless, the gaur is a formidable opponent to the tiger and capable of killing tigers in self-defence. Threats In Laos, the gaur is highly threatened by poaching for trade to supply international markets, but also by opportunistic hunting, and specific hunting for home consumption. In the 1990s, gaurs were particularly sought by Vietnamese poachers for their commercial value. In Thailand, the gaur is severely threatened by poaching for commercial trade in meat and trophies. Conservation The gaur is listed in CITES Appendix I, and is legally protected in all range states. In captivity On 8 January 2001, the first cloned gaur was born at Trans Ova Genetics in Sioux Center, Iowa. The calf was carried and brought successfully to term by a surrogate mother, a domestic cow (Bos taurus). While healthy at birth, the calf died within 48 hours of a common dysentery, most likely unrelated to cloning. In popular culture The gaur is the mascot of the 54th Infantry Division of the Indian Army, which is also called the Bison Division. The gaur is the state animal of Goa and Bihar. 
The gaur is also the mascot of the Goan football club FC Goa, which competes in the Indian Super League and whose team is often referred to as "The Gaurs". The Red Gaurs were an extreme right-wing paramilitary organization active in Thailand during the 1970s. Krating Daeng is today a brand of energy drink featuring a pair of charging red gaur bulls in its logo, also used on the licensed derivative, "Red Bull".
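The IUCN assessment cited above reports a decline of more than 70% over the last three generations of 24–30 years each. As a rough back-of-envelope sketch (the 72- and 90-year windows below are simple arithmetic from those generation lengths, not figures from the assessment itself), the implied constant annual rate of loss can be computed:

```python
# Implied annual decline rate for a total decline of 70% spread over a
# three-generation window, assuming a constant yearly rate. The window
# lengths (3 x 24 = 72 and 3 x 30 = 90 years) are derived here for
# illustration, not sourced from the assessment.

def annual_decline_rate(total_decline, years):
    """Constant yearly rate r such that (1 - r) ** years equals the
    surviving fraction (1 - total_decline)."""
    surviving = 1.0 - total_decline
    return 1.0 - surviving ** (1.0 / years)

for window in (3 * 24, 3 * 30):
    r = annual_decline_rate(0.70, window)
    print(f"{window} years -> about {100 * r:.1f}% lost per year")
```

Under these assumptions, a 70% decline over 72–90 years corresponds to roughly 1.3–1.7% of the population lost each year.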
Biology and health sciences
Artiodactyla
null
99794
https://en.wikipedia.org/wiki/Blueprint
Blueprint
A blueprint is a reproduction of a technical drawing or engineering drawing using a contact print process on light-sensitive sheets introduced by Sir John Herschel in 1842. The process allowed rapid and accurate production of an unlimited number of copies. It was widely used for over a century for the reproduction of specification drawings used in construction and industry. Blueprints were characterized by white lines on a blue background, a negative of the original. Color or shades of grey could not be reproduced. The process is obsolete, largely displaced by the diazo-based whiteprint process, and later by large-format xerographic photocopiers. It has almost entirely been superseded by digital computer-aided construction drawings. The term blueprint continues to be used informally to refer to any floor plan (and by analogy, any type of plan). Practising engineers, architects, and drafters often call them "drawings", "prints", or "plans". The blueprint process The blueprint process is based on a photosensitive ferric compound. The best known is a process using ammonium ferric citrate and potassium ferricyanide. The paper is impregnated with a solution of ammonium ferric citrate and dried. When the paper is illuminated, a photoreaction turns the trivalent ferric iron into divalent ferrous iron. The image is then developed using a solution of potassium ferricyanide, forming insoluble ferroferricyanide (Prussian blue or Turnbull's blue) with the divalent iron. Excess ammonium ferric citrate and potassium ferricyanide are then washed away. The process is also known as cyanotype. This is a simple process for the reproduction of any light-transmitting document. Engineers and architects drew their designs on cartridge paper; these were then traced onto tracing paper using India ink for reproduction whenever needed. 
The tracing paper drawing is placed on top of the sensitized paper, and both are clamped under glass in a daylight exposure frame, which is similar to a picture frame. The frame is put out into daylight, requiring a minute or two under a bright sun, or about thirty minutes under an overcast sky, to complete the exposure. Where ultraviolet light is transmitted through the tracing paper, the light-sensitive coating converts to a stable blue or black dye. Where the India ink blocks the ultraviolet light, the coating does not convert and remains soluble. The image can be seen forming. When a strong image is seen, the frame is brought indoors to stop the process. The unconverted coating is washed away, and the paper is then dried. The result is a copy of the original image, with the clear background area rendered dark blue and the image reproduced as a white line. The process offered several advantages: it eliminated the expense of photolithographic reproduction or of hand-tracing of original drawings; by the later 1890s in American architectural offices, a blueprint was one-tenth the cost of a hand-traced reproduction. The blueprint process is still used for special artistic and photographic effects, on paper and fabrics. Various base materials have been used for blueprints. Paper was a common choice; for more durable prints linen was sometimes used, but with time the linen prints would shrink slightly. To combat this problem, printing on imitation vellum and, later, polyester film (Mylar) was implemented. Whiteprints Traditional blueprints became obsolete when less expensive printing methods and digital displays became available. In the early 1940s, cyanotype blueprints began to be supplanted by diazo prints, also known as whiteprints. This technique produces blue lines on a white background. The drawings are also called blue-lines or bluelines. Other comparable dye-based prints were known as blacklines. 
Diazo prints remained in use until they were replaced by xerographic print processes. Xerography is standard copy-machine technology, using toner on copy paper. When large-format xerographic machines became available in 1975, they replaced the older printing methods. As computer-aided design techniques came into use, designs were printed directly using a computer printer or plotter. Digital In most computer-aided design of parts to be machined, paper is avoided altogether, and the finished design is an image on the computer display. The computer-aided design program generates a computer numerical control sequence from the approved design. The sequence is a computer file which will control the operation of the machine tools used to make the part. In the case of construction plans, such as road work or erecting a building, the supervising workers may view the "blueprints" directly on displays, rather than using printed paper sheets. These displays include mobile devices such as smartphones or tablets. Software allows users to view and annotate electronic drawing files, and construction crews use such software in the field to edit, share, and view blueprint documents in real time. Many of the original paper blueprints remain archived, since they are still in use; in many situations their conversion to digital form is prohibitively expensive. Most buildings and roads constructed before 1990 have only paper blueprints, not digital files. These originals have significant importance for the repair and alteration of constructions still in use, e.g. bridges, buildings, sewer systems, roads, railroads, etc., and sometimes in legal matters concerning the determination of, for example, property boundaries, or who owns or is responsible for a boundary wall.
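The cyanotype chemistry described in the section above can be summarized as a simplified two-step scheme (a sketch only: counter-ions such as potassium and ammonium, and the oxidation products of the citrate, are omitted):

```latex
% Exposure: UV light reduces ferric iron, with citrate acting as the electron donor
\mathrm{Fe^{3+}} \;\xrightarrow{\;h\nu,\ \text{citrate}\;}\; \mathrm{Fe^{2+}}

% Development: ferrous iron and ferricyanide combine into the insoluble blue pigment
\mathrm{Fe^{2+}} \;+\; [\mathrm{Fe(CN)_6}]^{3-} \;\longrightarrow\; \mathrm{Fe[Fe(CN)_6]^{-}} \quad \text{(Prussian blue)}
```

The first step occurs only where light reaches the paper, which is why the inked lines, which block the ultraviolet light, remain white after washing.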
Technology
Printing
null
99820
https://en.wikipedia.org/wiki/Carbon%20group
Carbon group
The carbon group is a periodic table group consisting of carbon (C), silicon (Si), germanium (Ge), tin (Sn), lead (Pb), and flerovium (Fl). It lies within the p-block. In modern IUPAC notation, it is called group 14. In the field of semiconductor physics, it is still universally called group IV. The group is also known as the tetrels (from the Greek word tetra, which means four), stemming from the Roman numeral IV in the group name, or (not coincidentally) from the fact that these elements have four valence electrons (see below). They are also known as the crystallogens or adamantogens. Characteristics Chemical Like other groups, the members of this family show patterns in electron configuration, especially in the outermost shells, resulting in trends in chemical behavior: Each of the elements in this group has 4 electrons in its outer shell. An isolated, neutral group 14 atom has the s2 p2 configuration in the ground state. These elements, especially carbon and silicon, have a strong propensity for covalent bonding, which usually brings the outer shell to eight electrons. Bonds in these elements often lead to hybridisation where distinct s and p characters of the orbitals are erased. For single bonds, a typical arrangement has four pairs of sp3 electrons, although other cases exist too, such as three sp2 pairs in graphene and graphite. Double bonds are characteristic for carbon (alkenes, ...); the same for π-systems in general. The tendency to lose electrons increases as the size of the atom increases, as it does with increasing atomic number. Carbon alone forms negative ions, in the form of carbide (C4−) ions. Silicon and germanium, both metalloids, each can form +4 ions. 
Tin and lead both are metals, while flerovium is a synthetic, radioactive (its half-life is very short, only 1.9 seconds) element that may have a few noble gas-like properties, though it is still most likely a post-transition metal. Tin and lead are both capable of forming +2 ions. Although tin is chemically a metal, its α allotrope looks more like germanium than like a metal and it is a poor electric conductor. Among main group (groups 1, 2, 13–17) alkyl derivatives QRn, where n is the standard bonding number for Q (see lambda convention), the group 14 derivatives QR4 are notable in being electron-precise: they are neither electron-deficient (having fewer electrons than an octet and tending to be Lewis acidic at Q and usually existing as oligomeric clusters or adducts with Lewis bases) nor electron-excessive (having lone pair(s) at Q and tending to be Lewis basic at Q). As a result, the group 14 alkyls have low chemical reactivity relative to the alkyl derivatives of other groups. In the case of carbon, the high bond dissociation energy of the C–C bond and lack of electronegativity difference between the central atom and the alkyl ligands render the saturated alkyl derivatives, the alkanes, particularly inert. Carbon forms tetrahalides with all the halogens. Carbon also forms many oxides such as carbon monoxide, carbon suboxide, and carbon dioxide. Carbon forms a disulfide and a diselenide. Silicon forms several hydrides; two of them are SiH4 and Si2H6. Silicon forms tetrahalides with fluorine (SiF4), chlorine (SiCl4), bromine (SiBr4), and iodine (SiI4). Silicon also forms a dioxide and a disulfide. Silicon nitride has the formula Si3N4. Germanium forms five hydrides. The first two germanium hydrides are GeH4 and Ge2H6. Germanium forms tetrahalides with all halogens except astatine and forms dihalides with all halogens except bromine and astatine. Germanium bonds to all natural single chalcogens except polonium, and forms dioxides, disulfides, and diselenides. 
Germanium nitride has the formula Ge3N4. Tin forms two hydrides: SnH4 and Sn2H6. Tin forms dihalides and tetrahalides with all halogens except astatine. Tin forms monochalcogenides with naturally occurring chalcogens except polonium, and forms dichalcogenides with naturally occurring chalcogens except polonium and tellurium. Lead forms one hydride, which has the formula PbH4. Lead forms dihalides and tetrahalides with fluorine and chlorine, and forms a dibromide and a diiodide, although the tetrabromide and tetraiodide of lead are unstable. Lead forms four oxides, a sulfide, a selenide, and a telluride. There are no known compounds of flerovium. Physical The boiling points of the carbon group tend to get lower with the heavier elements. At standard pressure, carbon, the lightest carbon group element, sublimes at 3825 °C. Silicon's boiling point is 3265 °C, germanium's is 2833 °C, tin's is 2602 °C, and lead's is 1749 °C. Flerovium is predicted to boil at −60 °C. The melting points of the carbon group elements have roughly the same trend as their boiling points. Silicon melts at 1414 °C, germanium melts at 939 °C, tin melts at 232 °C, and lead melts at 328 °C. Carbon's crystal structure is hexagonal; at high pressures and temperatures it forms diamond (see below). Silicon and germanium have diamond cubic crystal structures, as does tin at low temperatures (below 13.2 °C). Tin at room temperature has a tetragonal crystal structure. Lead has a face-centered cubic crystal structure. The densities of the carbon group elements tend to increase with increasing atomic number. Carbon has a density of 2.26 g·cm−3; silicon, 2.33 g·cm−3; germanium, 5.32 g·cm−3; tin, 7.26 g·cm−3; lead, 11.3 g·cm−3. The atomic radii of the carbon group elements tend to increase with increasing atomic number. Carbon's atomic radius is 77 picometers, silicon's is 118 picometers, germanium's is 123 picometers, tin's is 141 picometers, and lead's is 175 picometers. 
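The numeric trends quoted in this section can be checked mechanically. The following sketch hard-codes the values given above (densities in g·cm−3, atomic radii in pm, boiling and melting points in °C) and verifies which trends are strictly monotonic down the group:

```python
# Physical-property trends for group 14, using the values quoted in the
# text for C, Si, Ge, Sn, Pb (flerovium omitted: its properties are
# predictions). Carbon's "boiling point" is its sublimation point at
# standard pressure, and it has no ordinary melting point there.
elements = ["C", "Si", "Ge", "Sn", "Pb"]
density = [2.26, 2.33, 5.32, 7.26, 11.3]   # g/cm^3
radius = [77, 118, 123, 141, 175]          # pm
boiling = [3825, 3265, 2833, 2602, 1749]   # deg C
melting = [1414, 939, 232, 328]            # deg C, Si..Pb only

def strictly_increasing(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

def strictly_decreasing(seq):
    return all(a > b for a, b in zip(seq, seq[1:]))

assert strictly_increasing(density)   # density rises down the group
assert strictly_increasing(radius)    # atoms get larger down the group
assert strictly_decreasing(boiling)   # boiling points fall down the group
# Melting points only roughly follow the boiling-point trend:
# tin (232 deg C) melts below lead (328 deg C).
assert not strictly_decreasing(melting)
```

The assertions confirm that density, atomic radius, and boiling point are strictly monotonic, while the melting points break the pattern at tin, exactly as the text describes.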
Allotropes Carbon has multiple allotropes. The most common is graphite, which is carbon in the form of stacked sheets. Another form of carbon is diamond, but this is relatively rare. Amorphous carbon is a third allotrope of carbon; it is a component of soot. Another allotrope of carbon is the fullerene, which has the form of sheets of carbon atoms folded into a sphere. A fifth allotrope of carbon, first isolated in 2004, is called graphene, and is in the form of a layer of carbon atoms arranged in a honeycomb-shaped formation. Silicon has two known allotropes that exist at room temperature. These allotropes are known as the amorphous and the crystalline allotropes. The amorphous allotrope is a brown powder. The crystalline allotrope is gray and has a metallic luster. Tin has two allotropes: α-tin, also known as gray tin, and β-tin. Tin is typically found in the β-tin form, a silvery metal. However, at standard pressure, β-tin converts to α-tin, a gray powder, at temperatures below . This can cause tin objects in cold temperatures to crumble to gray powder in a process known as tin pest or tin rot. Nuclear At least two of the carbon group elements (tin and lead) have magic nuclei, meaning that these elements are more common and more stable than elements that do not have a magic nucleus. Isotopes There are 15 known isotopes of carbon. Of these, three are naturally occurring. The most common is stable carbon-12, followed by stable carbon-13. Carbon-14 is a natural radioactive isotope with a half-life of 5,730 years. 23 isotopes of silicon have been discovered. Five of these are naturally occurring. The most common is stable silicon-28, followed by stable silicon-29 and stable silicon-30. Silicon-32 is a radioactive isotope that occurs naturally as a result of radioactive decay of actinides, and via spallation in the upper atmosphere. Silicon-34 also occurs naturally as the result of radioactive decay of actinides. 32 isotopes of germanium have been discovered. 
Five of these are naturally occurring. The most common is the stable germanium-74, followed by stable germanium-72, stable germanium-70, and stable germanium-73. Germanium-76 is a primordial radioisotope. 40 isotopes of tin have been discovered. 14 of these occur in nature. The most common is tin-120, followed by tin-118, tin-116, tin-119, tin-117, tin-124, tin-122, tin-112, and tin-114: all of these are stable. Tin also has four radioisotopes that occur as the result of the radioactive decay of uranium. These isotopes are tin-121, tin-123, tin-125, and tin-126. 38 isotopes of lead have been discovered. 9 of these are naturally occurring. The most common isotope is lead-208, followed by lead-206, lead-207, and lead-204: all of these are stable. 5 isotopes of lead occur from the radioactive decay of uranium and thorium. These isotopes are lead-209, lead-210, lead-211, lead-212 and lead-214. 6 isotopes of flerovium (flerovium-284, flerovium-285, flerovium-286, flerovium-287, flerovium-288, and flerovium-289) have been discovered, all from human synthesis. Flerovium's most stable isotope is flerovium-289, which has a half-life of 2.6 seconds. Occurrence Carbon accumulates as the result of stellar fusion in most stars, even small ones. Carbon is present in the Earth's crust in concentrations of 480 parts per million, and is present in seawater at concentrations of 28 parts per million. Carbon is present in the atmosphere in the form of carbon monoxide, carbon dioxide, and methane. Carbon is a key constituent of carbonate minerals, and is in hydrogen carbonate, which is common in seawater. Carbon forms 22.8% of a typical human. Silicon is present in the Earth's crust at concentrations of 28%, making it the second most abundant element there. Silicon's concentration in seawater can vary from 30 parts per billion on the surface of the ocean to 2000 parts per billion deeper down. Silicon dust occurs in trace amounts in Earth's atmosphere. 
Silicate minerals are the most common type of mineral on Earth. Silicon makes up 14.3 parts per million of the human body on average. Only the largest stars produce silicon via stellar fusion. Germanium makes up 2 parts per million of the Earth's crust, making it the 52nd most abundant element there. On average, germanium makes up 1 part per million of soil. Germanium makes up 0.5 parts per trillion of seawater. Organogermanium compounds are also found in seawater. Germanium occurs in the human body at concentrations of 71.4 parts per billion. Germanium has been detected in some distant stars. Tin makes up 2 parts per million of the Earth's crust, making it the 49th most abundant element there. On average, tin makes up 1 part per million of soil. Tin exists in seawater at concentrations of 4 parts per trillion. Tin makes up 428 parts per billion of the human body. Tin(IV) oxide occurs at concentrations of 0.1 to 300 parts per million in soils. Tin also occurs in concentrations of one part per thousand in igneous rocks. Lead makes up 14 parts per million of the Earth's crust, making it the 36th most abundant element there. On average, lead makes up 23 parts per million of soil, but the concentration can reach 20,000 parts per million (2 percent) near old lead mines. Lead exists in seawater at concentrations of 2 parts per trillion. Lead makes up 1.7 parts per million of the human body by weight. Human activity releases more lead into the environment than any other metal. Flerovium does not occur in nature; it has only been produced in particle accelerators, a few atoms at a time. History Discoveries and uses in antiquity Carbon, tin, and lead are a few of the elements well known in the ancient world, together with sulfur, iron, copper, mercury, silver, and gold. Silicon as silica in the form of rock crystal was familiar to the predynastic Egyptians, who used it for beads and small vases; to the early Chinese; and probably to many others of the ancients. 
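The concentration figures used throughout this section are simple mass fractions. As a sketch of the arithmetic (the function name is illustrative, not from the source):

```python
def mass_fraction_ppm(mass_g: float, total_mass_g: float) -> float:
    """Mass fraction expressed in parts per million."""
    return mass_g / total_mass_g * 1e6

# 1 g of silicon in a 70 kg (70,000 g) human body:
print(round(mass_fraction_ppm(1.0, 70_000.0), 1))  # 14.3
```

The result matches the 14.3 parts per million figure given for silicon in the human body, since that figure is derived from roughly 1 gram of silicon in a 70-kilogram person.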
The manufacture of glass containing silica was carried out both by the Egyptians – at least as early as 1500 BCE – and by the Phoenicians. Many of the naturally occurring compounds or silicate minerals were used in various kinds of mortar for construction of dwellings by the earliest people. The origins of tin seem to be lost in history. It appears that bronzes, which are alloys of copper and tin, were used by prehistoric man some time before the pure metal was isolated. Bronzes were common in early Mesopotamia, the Indus Valley, Egypt, Crete, Israel, and Peru. Much of the tin used by the early Mediterranean peoples apparently came from the Scilly Isles and Cornwall in the British Isles, where mining of the metal dates from about 300–200 BCE. Tin mines were operating in both the Inca and Aztec areas of South and Central America before the Spanish conquest. Lead is mentioned often in early Biblical accounts. The Babylonians used the metal as plates on which to record inscriptions. The Romans used it for tablets, water pipes, coins, and even cooking utensils; indeed, as a result of the last use, lead poisoning was recognized in the time of Augustus Caesar. The compound known as white lead was apparently prepared as a decorative pigment at least as early as 200 BCE. Modern discoveries Amorphous elemental silicon was first obtained pure in 1824 by the Swedish chemist Jöns Jacob Berzelius; impure silicon had already been obtained in 1811. Crystalline elemental silicon was not prepared until 1854, when it was obtained as a product of electrolysis. Germanium is one of three elements the existence of which was predicted in 1869 by the Russian chemist Dmitri Mendeleev when he first devised his periodic table. However, the element was not actually discovered for some time. In September 1885, a miner discovered a mineral sample in a silver mine and gave it to the mine manager, who determined that it was a new mineral and sent the mineral to Clemens A. Winkler. 
Winkler realized that the sample was 75% silver, 18% sulfur, and 7% of an undiscovered element. After several months, Winkler isolated the element and determined that it was element 32. The first attempt to discover flerovium (then referred to as "element 114") was in 1969, at the Joint Institute for Nuclear Research, but it was unsuccessful. In 1977, researchers at the Joint Institute for Nuclear Research bombarded plutonium-244 atoms with calcium-48, but were again unsuccessful. This nuclear reaction was repeated in 1998, this time successfully. Etymologies Carbon comes from the Latin word carbo, meaning "charcoal". Silicon comes from the Latin word silex (or silicis), meaning "flint". Germanium comes from the Latin word Germania, the Latin name for Germany, which is the country where germanium was discovered. Tin's chemical symbol, Sn, comes from the Latin word stannum, meaning "tin", from or related to Celtic staen. - The common English name tin is inherited directly from Old English, possibly of common origin with stannum and staen. Lead's chemical symbol, Pb, comes from the Latin word plumbum, meaning "lead". - The common English name lead is inherited directly from Old English. Flerovium was named after the Flerov Laboratory of Nuclear Reactions, itself named in honour of the physicist Georgy Flyorov. Applications Carbon is most commonly used in its amorphous form. In this form, carbon is used for steelmaking, as carbon black, as a filling in tires, in respirators, and as activated charcoal. Carbon is also used in the form of graphite, for example as the lead in pencils. Diamond, another form of carbon, is commonly used in jewelry. Carbon fibers are used in numerous applications, such as satellite struts, because the fibers are very strong yet elastic. Silicon dioxide has a wide variety of applications, including use in toothpaste and construction fillers; silica is also a major component of glass. 50% of pure silicon is devoted to the manufacture of metal alloys. 45% of silicon is devoted to the manufacture of silicones. 
Silicon has also been commonly used in semiconductors since the 1950s. Germanium was used in semiconductors until the 1950s, when it was replaced by silicon. Radiation detectors contain germanium. Germanium dioxide is used in fiber optics and wide-angle camera lenses. A small amount of germanium mixed with silver can make silver tarnish-proof. The resulting alloy is known as argentium sterling silver. Solder is the most important use of tin; 50% of all tin produced goes into this application. 20% of all tin produced is used in tin plate. 20% of tin is used by the chemical industry. Tin is a constituent of numerous alloys, including pewter. Tin(IV) oxide has been commonly used in ceramics for thousands of years. Cobalt stannate is a tin compound which is used as a cerulean blue pigment. 80% of all lead produced goes into lead–acid batteries. Other applications for lead include weights, pigments, and shielding against radioactive materials. Lead was historically used in gasoline in the form of tetraethyllead, but this application has been discontinued due to concerns of toxicity. Production Carbon's allotrope diamond is produced mostly by Russia, Botswana, Congo, Canada, South Africa, and India. 80% of all synthetic diamonds are produced by Russia. China produces 70% of the world's graphite. Other graphite-mining countries are Brazil, Canada, and Mexico. Silicon can be produced by heating silica with carbon. There are some germanium ores, such as germanite, but these are not mined on account of being rare. Instead, germanium is extracted from the ores of metals such as zinc. In Russia and China, germanium is also separated from coal deposits. Germanium-containing ores are first treated with chlorine to form germanium tetrachloride, which is mixed with hydrogen gas. Then the germanium is further refined by zone refining. Roughly 140 metric tons of germanium are produced each year. Mines output 300,000 metric tons of tin each year. 
China, Indonesia, Peru, Bolivia, and Brazil are the main producers of tin. The method by which tin is produced is to heat the tin mineral cassiterite (SnO2) with coke. The most commonly mined lead ore is galena (lead sulfide). 4 million metric tons of lead are newly mined each year, mostly in China, Australia, the United States, and Peru. The ores are mixed with coke and limestone and roasted to produce pure lead. Most lead is recycled from lead batteries. The total amount of lead ever mined by humans amounts to 350 million metric tons. Biological role Carbon is a key element of all known life. It is in all organic compounds, for example, DNA, steroids, and proteins. Carbon's importance to life is primarily due to its ability to form numerous bonds with other elements. There are 16 kilograms of carbon in a typical 70-kilogram human. The feasibility of silicon-based life is commonly discussed. However, silicon is less able than carbon to form elaborate rings and chains. Silicon in the form of silicon dioxide is used by diatoms and sea sponges to form their cell walls and skeletons. Silicon is essential for bone growth in chickens and rats and may also be essential in humans. Humans consume on average between 20 and 1200 milligrams of silicon per day, mostly from cereals. There is 1 gram of silicon in a typical 70-kilogram human. A biological role for germanium is not known, although it does stimulate metabolism. In 1980, germanium was reported by Kazuhiko Asai to benefit health, but the claim has not been proven. Some plants take up germanium from the soil in the form of germanium oxide. These plants, which include grains and vegetables, contain roughly 0.05 parts per million of germanium. The estimated human intake of germanium is 1 milligram per day. There are 5 milligrams of germanium in a typical 70-kilogram human. Tin has been shown to be essential for proper growth in rats, but there is, as of 2013, no evidence to indicate that humans need tin in their diet. 
Plants do not require tin. However, plants do collect tin in their roots. Wheat and maize contain 7 and 3 parts per million respectively. However, the level of tin in plants can reach 2000 parts per million if the plants are near a tin smelter. On average, humans consume 0.3 milligrams of tin per day. There are 30 milligrams of tin in a typical 70-kilogram human. Lead has no known biological role, and is in fact highly toxic, but some microbes are able to survive in lead-contaminated environments. Some plants, such as cucumbers, contain up to tens of parts per million of lead. There are 120 milligrams of lead in a typical 70-kilogram human. Flerovium has no biological role; it is found and made only in particle accelerators. Toxicity Elemental carbon is not generally toxic, but many of its compounds are, such as carbon monoxide and hydrogen cyanide. However, carbon dust can be dangerous because it lodges in the lungs in a manner similar to asbestos. Silicon minerals are not typically poisonous. However, silicon dioxide dust, such as that emitted by volcanoes, can cause adverse health effects if it enters the lungs. Germanium can interfere with such enzymes as lactate dehydrogenase and alcohol dehydrogenase. Organic germanium compounds are more toxic than inorganic germanium compounds. Germanium has a low degree of oral toxicity in animals. Severe germanium poisoning can cause death by respiratory paralysis. Some tin compounds are toxic to ingest, but most inorganic compounds of tin are considered nontoxic. Organic tin compounds, such as trimethyltin and triethyltin, are highly toxic, and can disrupt metabolic processes inside cells. Lead and its compounds, such as lead acetates, are highly toxic. Lead poisoning can cause headaches, stomach pain, constipation, and gout. Flerovium is too radioactive and short-lived for its chemical toxicity to be tested, although its intense radioactivity alone would be harmful.
Canoe
A canoe is a lightweight, narrow water vessel, typically pointed at both ends and open on top, propelled by one or more seated or kneeling paddlers facing the direction of travel and using paddles. In British English, the term canoe can also refer to a kayak, whereas canoes are then called Canadian or open canoes to distinguish them from kayaks. However, for official competition purposes, the American distinction between a kayak and a canoe is almost always adopted. At the Olympics, both conventions are used: under the umbrella terms Canoe Slalom and Canoe Sprint, there are separate events for canoes and kayaks. Culture Canoes were developed in cultures all over the world, including some designed for use with sails or outriggers. Until the mid-19th century, the canoe was an important means of transport for exploration and trade, and in some places is still used as such, sometimes with the addition of an outboard motor. Where the canoe played a key role in history, such as the Northern United States, Canada, and New Zealand, it remains an important theme in popular culture. For instance, the birch bark canoe of the largely birch-based culture of the First Nations of Quebec, Canada, and North America provided these hunting peoples with the mobility essential to this way of life. Canoes are now widely used for competition — indeed, canoeing has been part of the Olympics since 1936 — and pleasure, such as racing, whitewater, touring and camping, freestyle and general recreation. The intended use of the canoe dictates its hull shape, length, and construction material. Although canoes were historically dugouts or made of bark on a wood frame, construction materials later evolved to canvas on a wood frame, then to aluminum. Most modern canoes are made of molded plastic or composites such as fiberglass, or those incorporating kevlar or graphite. 
History The word canoe came into English from the French word "casnouey", adopted from the Saint Lawrence Iroquoian language in the 1535 Jacques Cartier Relations, translated in 1600 by the English geographer Richard Hakluyt. Dugouts Many peoples have made dugout canoes throughout history, carving them out of a single piece of wood: either a whole trunk or a slab of trunk from particularly large trees. Dugout canoes go back to ancient times. The Dufuna canoe, discovered in Nigeria, dates back to 8500–8000 BC. The Pesse canoe, discovered in the Netherlands, dates back to 8200–7600 BC. Excavations in Denmark reveal the use of dugouts and paddles during the Ertebølle period. Canoes played a vital role in the colonisation of the pre-Columbian Caribbean, as they were the only means of reaching the Caribbean Islands from mainland South America. Around 3500 BC, ancient Amerindian groups colonised the first Caribbean Islands using single-hulled canoes. Only a few pre-Columbian Caribbean canoes have been found. Several families of trees could have been used to construct Caribbean canoes, including woods of the mahogany family (Meliaceae) such as the Cuban mahogany (Swietenia mahagoni), which can grow up to 30–35 m tall, and the red cedar (Cedrela odorata), which can grow up to 60 m tall, as well as the genus Ceiba (family Malvaceae), such as Ceiba pentandra, which can reach 60–70 m in height. It is likely that these canoes were built in a variety of sizes, ranging from fishing canoes holding just one or a few people to larger ones able to carry as many as a few dozen, and could have been used to reach the Caribbean Islands from the mainland. Reports by historical chroniclers claim to have witnessed a canoe "containing 40 to 50 Caribs [...] when it came out to trade with a visiting English ship". There is still much dispute regarding the use of sails in Caribbean canoes. 
Some archaeologists doubt that oceanic transportation would have been possible without the use of sails, as winds and currents would have carried the canoes off course. However, no evidence of a sail or a Caribbean canoe that could have made use of a sail has been found. Furthermore, no historical sources mention Caribbean canoes with sails. One possibility could be that canoes with sails were initially used in the Caribbean but later abandoned before European contact. This, however, seems unlikely, as long-distance trade continued in the Caribbean even after the prehistoric colonisation of the islands. Hence, it is likely that early Caribbean colonists made use of canoes without sails. Native American groups of the north Pacific coast made dugout canoes in a number of styles for different purposes, from western red cedar (Thuja plicata) or yellow cedar (Chamaecyparis nootkatensis), depending on availability. Different styles were required for ocean-going vessels versus river boats, and for whale-hunting versus seal-hunting versus salmon-fishing. The Quinault of Washington State built shovel-nose canoes with double bows, for river travel that could slide over a logjam without needing to be portaged. The Kootenai of the Canadian province of British Columbia made sturgeon-nosed canoes from pine bark, designed to be stable in windy conditions on Kootenay Lake. In recent years, First Nations in British Columbia and Washington State have been revitalizing the ocean-going canoe tradition. Beginning in the 1980s, the Heiltsuk and Haida were early leaders in this movement. The Paddle to Expo 86 in Vancouver by the Heiltsuk and the 1989 Paddle to Seattle by multiple Native American tribes on the occasion of Washington State's centennial year were early instances of this. In 1993 a large number of canoes paddled from up and down the coast to Bella Bella in its first canoe festival – Qatuwas. 
The revitalization continued, and Tribal Journeys began with trips to various communities held in most years. Australian aboriginal people made canoes from hollowed out tree trunks, as well as from tree bark. The indigenous people of the Amazon commonly used Hymenaea (Fabaceae) trees. Bark canoes Australia Some Australian aboriginal peoples made bark canoes. They could be made only from the bark of certain trees (usually red gum or box gum) and during summer. After cutting the outline of the required size and shape, a digging stick was used to cut through the bark to the hardwood, and the bark was then slowly prised out using numerous smaller sticks. The slab of bark was held in place by branches or handwoven rope, and after separation from the tree, lowered to the ground. Small fires would then be lit on the inside of the bark to cause the bark to dry out and curl upwards, after which the ends could be pulled together and stitched with hemp and plugged with mud. It was then allowed to mature, with frequent applications of grease and ochre. The remaining tree was later dubbed a canoe tree by Europeans. Because of the porosity of the bark, these bark canoes did not last too long (about two years). They were mainly used for fishing or crossing rivers and lakes to avoid long journeys. They were usually propelled by punting with a long stick. Another type of bark canoe was made out of a type of stringybark gum known as Messmate stringybark (Eucalyptus obliqua), pleating the bark and tying it at each end, with a framework of cross-ties and ribs. This type was known as a pleated or tied bark canoe. Bark strips could also be sewn together to make larger canoes, known as sewn bark canoes. Americas Many indigenous peoples of the Americas built bark canoes. They were usually skinned with birch bark over a light wooden frame, but other types could be used if birch was scarce. 
At a typical length of and weight of , the canoes were light enough to be portaged, yet could carry a lot of cargo, even in shallow water. Although susceptible to damage from rocks, they are easily repaired. Their performance qualities were soon recognized by early European settler colonials, and canoes played a key role in the exploration of North America, with Samuel de Champlain canoeing as far as the Georgian Bay in 1615. In 1603 a canoe was brought to Sir Robert Cecil's house in London and rowed on the Thames by Virginian Indians from Tsenacommacah. In 1643 David Pietersz. de Vries recorded a Mohawk canoe in Dutch possession at Rensselaerswyck capable of transporting 225 bushels of maize. René de Bréhant de Galinée, a French missionary who explored the Great Lakes in 1669, declared: "The convenience of these canoes is great in these waters, full of cataracts or waterfalls, and rapids through which it is impossible to take any boat. When you reach them you load canoe and baggage upon your shoulders and go overland until the navigation is good; and then you put your canoe back into the water, and embark again." American painter, author and traveler George Catlin wrote that the bark canoe was "the most beautiful and light model of all the water crafts that ever were invented". The first explorer to cross the North American continent, Alexander Mackenzie, used canoes extensively, as did David Thompson and the Lewis and Clark Expedition. In the North American fur trade, the Hudson's Bay Company's voyageurs used three types of canoe: The rabaska (French: canot du maître, from the surname of Louise Le Maître, an artisan in the Province of Quebec, though the term would literally mean "master canoe" otherwise) — also referred to as the "Montreal canoe" — was designed for the long haul from the St. Lawrence River to western Lake Superior. Its dimensions were length, approximately ; beam, ; and height, about . It could carry 60 packs weighing , and of provisions. 
With a crew of eight or ten paddling or rowing, they could make three knots over calm waters. Four to six men could portage it, bottom up. Henry Schoolcraft declared it "altogether one of the most eligible modes of conveyance that can be employed upon the lakes". Archibald McDonald of the Hudson's Bay Company wrote: "I never heard of such a canoe being wrecked, or upset, or swamped ... they swam like ducks." The canot du nord (French: "canoe of the north"), a craft specially made and adapted for speedy travel, was the workhorse of the fur trade transportation system. About half the size of the rabaska, it could carry about 35 packs weighing and was manned by four to eight men. It could in turn be carried by two men and was portaged in the upright position. The express canoe (French: canot léger, "light canoe") was about long and was used to carry people, reports, and news. The birch bark canoe was used in a supply route from Montreal to the Pacific Ocean and the Mackenzie River, and continued to be used up to the end of the 19th century. The indigenous peoples of eastern Canada and the northeast United States made canoes using the bark of the paper birch, which was harvested in early spring by stripping off the bark in one piece, using wooden wedges. Next, the two ends (stem and stern) were sewn together and made watertight with the pitch of balsam fir. The ribs of the canoe, called verons in Canadian French, were made of white cedar, and the hull, ribs, and thwarts were fastened using watap, a binding usually made from the roots of various species of conifers, such as the white spruce, black spruce, or cedar, and caulked with pitch. Skin canoes Skin canoes are constructed using animal skins stretched over a framework. Examples include the kayak and umiak. 
Modern canoes In 19th-century North America, the birch-on-frame construction technique evolved into the wood-and-canvas canoes made by fastening an external waterproofed canvas shell to planks and ribs by boat builders such as Old Town Canoe, E. M. White Canoe, Peterborough Canoe Company and at the Chestnut Canoe Company in New Brunswick. Though similar to bark canoes in their use of ribs and a waterproof covering, the construction method is different: the hull is built by bending ribs over a solid mold. Once removed from the mold, the decks, thwarts and seats are installed, and canvas is stretched tightly over the hull. The canvas is then treated with a combination of varnishes and paints to render it more durable and watertight. Although canoes were once primarily a means of transport, with industrialization they became popular as recreational or sporting watercraft. John MacGregor popularized canoeing through his books and founded the Royal Canoe Club in London in 1866; the American Canoe Association followed in 1880. The Canadian Canoe Association was founded in 1900 and the British Canoe Union in 1936. In Sweden, naval officer Carl Smith was both an enthusiastic promoter of canoeing and a designer of canoes, some experimental, at the end of the 19th century. Sprint canoe was a demonstration sport at the 1924 Paris Olympics and became an Olympic discipline at the 1936 Berlin Olympics. When the International Canoe Federation was formed in 1946, it became the umbrella organization of all national canoe organizations worldwide. Hull design Hull design must meet different, often conflicting, requirements for speed, carrying capacity, maneuverability, and stability. The canoe's hull speed can be calculated using the principles of ship resistance and propulsion. 
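For displacement hulls such as canoes, hull speed is often approximated by the classic rule v ≈ 1.34 × √LWL, with speed in knots and waterline length in feet. A minimal sketch (the 1.34 constant and the 16 ft example are the usual textbook values, not figures from this article):

```python
import math

def hull_speed_knots(waterline_length_ft: float) -> float:
    """Classic displacement-hull approximation: v ~ 1.34 * sqrt(LWL in feet)."""
    return 1.34 * math.sqrt(waterline_length_ft)

# A touring canoe with roughly 16 ft at the waterline:
print(round(hull_speed_knots(16.0), 2))  # 5.36 knots
```

The square-root dependence is why, as discussed below, a longer waterline (relative to displacement) lets a canoe be paddled faster.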
Length: although this is often stated by manufacturers as the overall length of the boat, what counts in performance terms is the length of the waterline, and more specifically its value relative to the displacement (the amount of water displaced by the boat) of the canoe, which is equal to the total weight of the boat and its contents because a floating body displaces its own weight in water. When a canoe is paddled through water, effort is required to push all the displaced water out of the way. Canoes are displacement hulls: the longer the waterline relative to its displacement, the faster it can be paddled. Among general touring canoeists, is a popular length, providing a good compromise between capacity and cruising speed. Too large a canoe will simply mean extra work paddling at cruising speed. Width (beam): a wider boat provides more stability at the expense of speed. A canoe cuts through the water like a wedge, and a shorter boat needs a narrower beam to reduce the angle of the wedge cutting through the water. Canoe manufacturers typically provide three beam measurements: the gunwale (the measurement at the top of the hull), the waterline (the measurement at the point where the surface of the water meets the hull when it is empty), and the widest point. Another variation of the waterline beam measurement is called 4" waterline, where the displacement is taken into account. This measurement is done at the waterline level when the maximum load is applied to the canoe. Some canoe races use the 4" waterline beam measurement as the standard for their regulations. In races, the measurement is done by measuring the widest point at 4" (10 cm) from the bottom of the canoe. Freeboard: a higher-sided boat stays drier in rough water. The disadvantage of high sides is extra weight and extra windage. Increased windage adversely affects speed and steering control in crosswinds. 
Stability and immersed bottom shape: the hull can be optimized for initial stability (the boat feels steady when it sits flat on the water) or final stability (resistance to rolling and capsizing). A flatter-bottomed hull has higher initial stability, while a rounder or V-shaped cross-section gives higher final stability. The fastest flat water non-racing canoes have sharp V-bottoms to cut through the water, but they are difficult to turn and have a deeper draft, which makes them less suitable for shallows. Flat-bottomed canoes are most popular among recreational canoeists. At the cost of speed, they have a shallow draft and more cargo space, and they turn better. The reason a flat-bottomed canoe has lower final stability is that the hull must wrap a sharper angle between the bottom and the sides, compared to a more round-bottomed boat. Keel: an external keel makes a canoe track (hold its course) better and can stiffen a floppy bottom, but it can get stuck on rocks and decrease stability in rapids. Profile: the shape of the canoe's sides. Sides that flare out above the waterline deflect water but require the paddler to reach out over the side of the canoe more. When the sides do the reverse, so that the gunwale width is less than the maximum width, the canoe is said to have tumblehome. Tumblehome improves final stability. Rocker: viewed from the side of the canoe, rocker is the amount of curve in the hull in relation to the water, much like the curve of a banana. With little or no rocker, the full length of the hull is in the water, so it tracks well and has good speed. As rocker increases, so does the ease of turning, but at the cost of tracking. Some Native American birch-bark canoes were characterized by extreme rocker. Hull symmetry: viewed from above, a symmetrical hull has its widest point at the center of the hull and both ends are identical. An asymmetrical hull typically has the widest section aft of centerline, creating a longer bow and improving speed. 
Modern materials and construction Plastic Folding canoes usually consist of a PVC skin around an aluminum frame. Inflatable canoes contain no rigid frame members and can be deflated, inflated, folded, and stored in bags and boxes. The more durable types consist of an abrasion-resistant nylon or rubber outer shell with separate PVC air chambers for the two side tubes and the floor. Royalex — a composite material comprising an outer layer of vinyl and hard acrylonitrile butadiene styrene plastic (ABS) and an inner layer of ABS foam bonded by heat treatment — was another plastic alternative for canoes until 2014, when the raw composite material was discontinued by its only manufacturer. As a canoe material, Royalex is lighter, more resistant to UV damage, and more rigid, and has greater structural memory than non-composite plastics such as polyethylene. Canoes made of Royalex were, however, more expensive than canoes made from aluminum or from traditionally molded or roto-molded polyethylene hulls. Royalex is heavier and less suited for high-performance paddling than fiber-reinforced composites such as fiberglass, kevlar, or graphite. Fiber reinforced composites Modern canoes are generally constructed by layering a fiber material inside a "female" mold. Fiberglass is the most common material used in manufacturing canoes. Fiberglass is inexpensive, can be molded to any shape, and is easy to repair. Kevlar is popular with paddlers looking for a light, durable boat that will not be taken into whitewater. Fiberglass and Kevlar are strong but lack rigidity. Carbon fiber is used in racing canoes to create a very light, rigid construction, usually combined with Kevlar for durability. Boats are built by draping the cloth in a mold, then impregnating it with a liquid resin. Optionally, a vacuum process can be used to remove excess resin to reduce weight. A gel coat on the outside gives a smoother appearance. 
With stitch and glue, plywood panels are stitched together to form a hull shape, and the seams are reinforced with fiber reinforced composites and varnished. A cedar strip canoe is essentially a composite canoe with a cedar core. Usually fiberglass is used to reinforce the canoe, since it is clear and allows a view of the cedar. Aluminum Before the invention of fiberglass, aluminum was the standard choice for whitewater canoeing due to its value and its strength relative to weight. This material was once more popular but is being replaced by modern lighter materials. "It is tough, durable, and will take being dragged over the bottom very well", as it has no gel or polymer outer coating which would make it subject to abrasion. The hull does not degrade from long term exposure to sunlight, and "extremes of hot and cold do not affect the material". It can dent, is difficult to repair, is noisy, can get stuck on underwater objects, and requires buoyancy chambers to assist in keeping the canoe afloat in a capsize. Canoes in culture In Canada, the canoe has been a theme in history and folklore, and is a symbol of Canadian identity. From 1935 to 1986, the Canadian silver dollar depicted a canoe with the Northern Lights in the background. The Chasse-galerie is a French-Canadian tale of voyageurs who, after a night of heavy drinking on New Year's Eve at a remote timber camp, want to visit their sweethearts some 100 leagues (about 400 km) away. Since they have to be back in time for work the next morning, they make a pact with the devil. Their canoe will fly through the air, on condition that they not mention God's name or touch the cross of any church steeple as they fly by in the canoe. One version of this fable ends with the coup de grâce when, still high in the sky, the voyageurs complete the hazardous journey but the canoe overturns, so the devil can honour the pact to deliver the voyageurs and still claim their souls. 
In John Steinbeck's novella The Pearl, set in Mexico, the main character's canoe is a means of making a living that has been passed down for generations and represents a link to cultural tradition. The Māori, indigenous Polynesian people, arrived in New Zealand in several waves of voyages in canoes (called waka). Canoe traditions are important to the identity of the Māori. Whakapapa (genealogical links) back to the crews of the founding canoes served to establish the origins of tribes, and defined tribal boundaries and relationships. Types of canoes Modern canoe types are usually categorized by their intended use. Many modern canoe designs are hybrids (a combination of two or more designs, meant for multiple uses). The purpose of the canoe will also often determine the materials used. Most canoes are designed for either one person (solo) or two people (tandem), but some are designed for more than two people. Sprint Sprint canoe is also known as flatwater racing. The paddler kneels on one knee and uses a single-blade paddle. Since canoes have no rudder, they must be steered with the athlete's paddle using a J-stroke. Canoes may be entirely open or partly covered. The minimum length of the opening on a C1 is . Boats are long and streamlined with a narrow beam, which makes them very unstable. A C4 can be up to long and weigh . International Canoe Federation (ICF) classifications include C1 (solo), C2 (crew of two), and C4 (crew of four). Race distances at the 2012 Olympic Games were 200 and 1000 meters. Slalom and wildwater In ICF whitewater slalom, paddlers negotiate their way down of whitewater rapids through a series of up to 25 gates (pairs of hanging poles). The colour of the poles indicates the direction in which the paddlers must pass; time penalties are assessed for striking poles or missing gates. Categories are C1 (solo), C2 (tandem, for two men), and C2M (mixed, for one woman and one man).
C1 boats must have a minimum weight and width of and and be not more than long. C2s must have a minimum weight and width of and , and be not more than . Rudders are prohibited. Canoes are decked and propelled by single-bladed paddles, and the competitor must kneel. In ICF wildwater canoeing, athletes paddle a course of class III to IV whitewater (using the International Scale of River Difficulty), passing over waves, holes and rocks of a natural riverbed in events lasting either 20–30 minutes ("Classic" races) or 2–3 minutes ("Sprint" races). Categories are C1 and C2 for both women and men. C1s must have a minimum weight and width of and , and a maximum length of . C2s must have a minimum weight and width of and , and a maximum length of . Rudders are prohibited. The canoes are decked boats which must be propelled by single-bladed paddles, with the paddler kneeling inside. Marathon Marathons are long-distance races which may include portages. Under ICF rules, minimum canoe weight is for C1 and C2, respectively. Other rules can vary by race. For example, athletes in the Classique Internationale de Canots de la Mauricie race in C2s, with a maximum length of , minimum width of at from the bottom of the centre of the craft, and minimum height of at the bow and at the centre and stern. The Texas Water Safari, at , includes an open class, the only rule being that the vessel must be human-powered. Although novel setups have been tried, the fastest so far has been the six-man canoe. Touring A "touring" or "tripping" canoe is a boat for traveling on lakes and rivers with capacity for camping gear. Tripping canoes, such as the Chestnut Prospector and Old Town Tripper derivatives, are touring canoes for wilderness trips. They are typically made of heavier and tougher materials and designed with the ability to carry large amounts of gear while being maneuverable enough for rivers with some whitewater.
Prospector is now a generic name for derivatives of the Chestnut model, a popular type of wilderness tripping canoe. The Prospector is marked by a shallow-arch hull with a relatively large amount of rocker, giving an optimal balance for wilderness tripping over lakes and rivers with some rapids. A touring canoe is sometimes covered with a greatly extended deck, forming a "cockpit" for the paddlers. A cockpit has the advantage that the gunwales can be made lower and narrower so the paddler can reach the water more easily. Freestyle A freestyle canoe is specialized for whitewater play and tricks. Most are identical to short, flat-bottomed kayak playboats except for their internal outfitting. The paddler kneels and uses a single-blade canoe paddle. Playboating is a discipline of whitewater canoeing in which the paddler performs various technical moves in one place (a playspot), as opposed to downriver paddling, where the objective is to travel the length of a section of river (although whitewater canoeists will often stop and play en route). Specialized canoes known as playboats can be used. Square-stern canoe A square-stern canoe is an asymmetrical canoe with a squared-off stern for the mounting of an outboard motor, and is meant for lake travel or fishing. Since mounting a rudder on the square stern is very easy, such canoes are often adapted for sailing. Canoe launches A canoe launch is a place for launching canoes, similar to a boat launch, which is often for launching larger watercraft. Canoe launches are frequently on river banks or beaches, and may be designated on maps of places such as parks or nature reserves.
Grebe
Grebes are aquatic diving birds in the order Podicipediformes. Grebes are widely distributed freshwater birds, with some species also found in marine habitats during migration and winter. Most grebes fly, although some flightless species exist, most notably in stable lakes. The order contains a single family, the Podicipedidae, which includes 22 species in six extant genera. Although they superficially resemble other diving birds such as loons and coots, they are most closely related to flamingos, as supported by morphological, molecular and paleontological data. Many species are monogamous and are known for their courtship displays, with the pair performing synchronized dances across the water's surface. The birds build floating vegetative nests where they lay several eggs. About a third of the world's grebes are listed at various levels of conservation concern; the biggest threats include habitat loss, the introduction of invasive predatory fish and human poaching. Three species have gone extinct. Etymology The word "grebe" comes from the French , which is of unknown origin and dates to 1766. It is possibly from the Breton "krib", meaning 'comb', referring to the crests of many of the European species. However, the word was also used to refer to gulls. "Grebe" was introduced into English in 1768 by the Welsh naturalist Thomas Pennant, who adopted the word for the family. Some of the smaller species are often referred to as "dabchick", a name that originated in mid-16th-century English, as they were said to be chick-like birds that dive. The clade names "Podicipediformes" and "Podicipedidae" are based on the genus Podiceps, which is a combination of the Latin , gen. ("rear-end" or "anus") and ("foot"), a reference to the placement of a grebe's legs towards the rear of its body.
Field characteristics Grebes are small to medium-large in size, ranging from the least grebe (Tachybaptus dominicus), at and , to the great grebe (Podiceps major), at and . Despite these size differences, grebes are a homogeneous family of waterbirds with very few or slight differences among the genera. Anatomy and physiology On the surface of the water they swim low, with just the head and neck exposed. All species have lobed toes, and are excellent swimmers and divers. The feet are always large, with broad lobes on the toes and small webs connecting the front three toes. The hind toe also has a small lobe. The claws are similar to nails and are flat. The lobed feet act as oars: they offer minimal resistance as the foot moves forward and maximal surface area as it moves backward. The leg bones (femur and tarsometatarsus) are equal in length, with the femur having a large head and the tarsometatarsus bearing long cnemial crests. The patella is separate and supports the tarsometatarsus posteriorly, which greatly aids muscle contraction. Grebes swim by simultaneously spreading out the feet and bringing them inward, with the webbing expanded to produce the forward thrust in much the same way as frogs do. However, due to the anatomy of the legs, grebes are not as mobile on land as they are on the water. Although they can run for a short distance, they are prone to falling over, since they have their feet placed far back on the body. The wing shape varies depending on the species, ranging from moderately long to very short and rounded. The wing anatomy of grebes has a relatively short and thin carpometacarpus-phalanges component, which supports the primary feathers, while the ulna is long and fairly weak, supporting the secondary feathers. There are 11 primaries and 17 to 22 secondaries, with the inner secondaries being longer than the primaries. As such, grebes are generally not strong or rapid fliers.
Some species are reluctant to fly, and two South American species are completely flightless. Since grebes generally dive more than fly, the sternum can be as small as or even smaller than the pelvic girdle. When they do fly, they often launch themselves off from the water and must run along the surface, flapping their wings to generate lift. Bills vary from short and thick to long and pointed depending on the diet, and are slightly larger in males than in females (though the sizes can overlap between younger males and females). Feathers Grebes have unusual plumage. On average grebes have 20,000 feathers, the most of any bird. The feathers are very dense and strongly curved. In the larger species the feathers are denser but shorter, while in the smaller species they are longer but less dense. The density and length of the feathers are correlated exponentially with heat loss in cold water. For this reason grebes invest more time and energy in plumage maintenance than any other birds. The uropygial glands secrete a high concentration of paraffin. The secretion serves a dual purpose, protecting the feathers from external parasites and fungi as well as waterproofing them. When preening, grebes eat their own feathers and feed them to their young. The function of this behaviour is uncertain, but it is believed to assist with pellet formation, excreting internal parasites and protecting their insides from sharp bone material during digestion. The ventral plumage is the most dense, described as very fur-like. By pressing their feathers against the body, grebes can adjust their buoyancy. In the nonbreeding season, grebes are plain-coloured in dark browns and whites. However, most have ornate and distinctive breeding plumages, often developing chestnut markings on the head area, and perform elaborate display rituals.
The young, particularly those of the genus Podiceps, are often striped and retain some of their juvenile plumage even after reaching full size. Systematics The grebes are a radically distinct group of birds as regards their anatomy. Accordingly, they were at first believed to be related to the loons, which are also foot-propelled diving birds, and both families were once classified together under the order Colymbiformes. However, as early as the 1930s, this was determined to be an example of convergent evolution caused by the strong selective forces encountered by unrelated birds sharing the same lifestyle at different times and in different habitats. Grebes and loons are now separately classified in the orders Podicipediformes and Gaviiformes, respectively. Recent molecular studies have suggested a relationship with flamingos, a finding that has been backed up by morphological evidence. The two groups share at least eleven morphological traits not found in other birds. For example, both flamingos and grebes lay eggs coated with chalky amorphous calcium phosphate. Many of these characteristics had been previously identified in flamingos, but not in grebes. For the grebe-flamingo clade, the taxon Mirandornithes ("miraculous birds", so named for their extreme divergence and apomorphies) has been proposed. Alternatively, they could be placed in one order, with Phoenicopteriformes taking priority. Fossil record The fossil record of grebes is incomplete, as no transitional forms between more conventional birds and grebes are known from fossils. The enigmatic waterbird genus Juncitarsus, however, may be close to a common ancestor of flamingos and grebes. The extinct stem-flamingo family Palaelodidae has been suggested to be the transitional link between the filter-feeding flamingos and the foot-propelled diving grebes.
The evidence for this comes from the overall similarity between the foot and limb structure of grebes and palaelodids, suggesting the latter family of waterbirds were able to swim and dive better than flamingos. Some early grebes share characteristics of the coracoid and humerus seen in palaelodids. True grebes suddenly appear in the fossil record in the Late Oligocene or Early Miocene, around 23–25 mya. There are a few prehistoric genera that are now completely extinct. Thiornis and Pliolymbus date from a time when most if not all extant genera were already present. Because grebes are evolutionarily isolated and they only started to appear in the Northern Hemisphere fossil record in the Early Miocene, they are likely to have originated in the Southern Hemisphere.
Genus Aechmophorus Coues, 1862
†Aechmophorus elasson Murray, 1967 (Piacenzian stage of western United States)
Genus †Miobaptus Švec, 1982
†Miobaptus huzhiricus Zelenkov, 2015 (Burdigalian to the Langhian ages of East Siberia)
†Miobaptus walteri Švec, 1982 [Podiceps walteri (Švec, 1984) Mlíkovský, 2000] (Aquitanian age of Europe)
Genus †Miodytes Dimitreijevich, Gál & Kessler, 2002
†Miodytes serbicus Dimitreijevich, Gál & Kessler, 2002 (Langhian age of Serbia)
Genus †Pliolymbus Murray, 1967 [Piliolymbus (sic)]
†Pliolymbus baryosteus Murray, 1967 (Piacenzian to the Gelasian stages of western United States and Mexico)
Genus Podiceps Latham, 1787
†Podiceps arndti Chandler, 1990 (Piacenzian stage of North America)
†Podiceps csarnotanus Kessler, 2009 (Piacenzian stage of Europe)
†Podiceps discors Murray, 1967 (Piacenzian stage of North America)
†Podiceps dixi Brodkorb, 1963 (Chibanian to the Tarantian stages of Florida, United States)
†Podiceps howardae Storer, 2001 (Zanclean age of North Carolina, United States)
†Podiceps miocenicus Kessler, 1984 (Tortonian age of Moldova)
†Podiceps oligoceanus (Shufeldt, 1915) (Aquitanian age of North America)
†Podiceps parvus (Shufeldt, 1913) (Gelasian to the Calabrian stages of North America)
†Podiceps pisanus (Portis, 1888) (Piacenzian stage of Italy)
†Podiceps solidus Kuročkin, 1985 (Zanclean age of Western Mongolia)
†Podiceps subparvus (Miller & Bowman, 1958)
Genus Podilymbus Lesson, 1831
†Podilymbus majusculus Murray, 1967 (Piacenzian stage of Idaho, United States)
†Podilymbus wetmorei Storer, 1976 (Chibanian to the Tarantian stages of Florida, United States)
Genus †Thiornis Navás, 1922
†Thiornis sociata Navás, 1922 [Podiceps sociatus (Navás, 1922) Olson, 1995] (Tortonian age of Spain)
A few more recent grebe fossils could not be assigned to modern or prehistoric genera:
Podicipedidae gen. et sp. indet. (San Diego Late Pliocene of California) – formerly included in Podiceps parvus
Podicipedidae gen. et sp. indet. UMMP 49592, 52261, 51848, 52276, KUVP 4484 (Late Pliocene of WC USA)
Podicipedidae gen. et sp. indet. (Glenns Ferry Late Pliocene/Early Pleistocene of Idaho, USA)
Phylogeny To date there is no complete phylogeny of grebes based on molecular work. However, comprehensive morphological analyses of the grebe genera have been published by Bochenski (1994), Fjeldså (2004) and Ksepka et al. (2013). Natural history Habitat, distribution and migration Grebes are a nearly cosmopolitan clade of waterbirds, found on every continent except Antarctica. They are absent from the Arctic Circle and arid environments. They have successfully colonized islands such as Madagascar and New Zealand. Some species, such as the eared grebe (Podiceps nigricollis) and great crested grebe (P. cristatus), are found on multiple continents with regional subspecies or populations. A few species, like the Junin grebe (P. taczanowskii) and the recently extinct Atitlán grebe (Podilymbus gigas), are lake endemics. During the warmer or breeding seasons, many species of grebes in the northern hemisphere reside in a variety of freshwater habitats like lakes and marshes.
Once winter arrives, many will migrate to marine environments along the coastlines. Grebes are most prevalent in the New World, which is home to almost half of the world's species. Feeding ecology The feeding ecology of grebes is diverse. Larger species, such as those in the genus Aechmophorus, have spear-like bills to catch mid-depth fish, while smaller species, such as those in the genera Tachybaptus and Podilymbus, tend to have short, stout bills and a preference for catching small aquatic invertebrates. The majority of grebes prey on aquatic invertebrates, with only a handful of large-bodied piscivores. The aforementioned Aechmophorus is the most piscivorous of the grebes. Closely related species that overlap in their ranges often avoid interspecific competition through differing prey preferences and the adaptations that accompany them. In areas where there is just a single species, grebes tend to have more generalized bills and take a wider range of prey. Breeding and reproduction Grebes are perhaps best known for their elaborate courtship displays. Most species perform a duet together and many have their own synchronized rituals. Some, like those species in the genus Podiceps, do a "penguin dance" in which the male and female stand upright, breasts thrust out, and run along the water's surface. A similar ritual in other species is the "weed dance", in which both partners hold pieces of aquatic vegetation in their bills and are positioned upright towards each other. There is also the "weed rush", in which partners swim towards each other, necks stretched out with weeds in their bills, and just before colliding position themselves upright and then swim in parallel. In the smaller and more basal genera like Tachybaptus and Podilymbus, aquatic vegetation is incorporated into the courtship, but not as elaborately as in the more derived and larger species.
It has been hypothesized that such courtship displays between mates originated from intraspecific aggression that evolved in a way that strengthened pair bonds. Once these courtship rituals are completed, both partners solicit copulation and mate on floating platforms of vegetation. Females lay two to seven eggs, and incubation can last nearly a month. The chicks of a nest hatch asynchronously. Once the whole nest has hatched, the chicks begin to climb onto one of their parents' backs. Both parents take care of rearing their young, and the duration of care is longer than that of other waterfowl. This enables a greater survival rate for the chicks. One parent dives for food, while the other watches the young on the surface. Parasitology Some 249 species of parasitic worms are known to parasitize the intestinal region of grebes. The amabiliids are a family of cyclophyllid cestodes that are almost all grebe specialists. The life cycle of these tapeworms begins when eggs are passed in the feces, where they are picked up by intermediate hosts, which include corixid bugs and the nymphs of Odonata. These aquatic insects are eventually consumed by grebes, and the life cycle begins again. Another family of internal parasites specializing in grebes is the Dioecocestidae. Other families such as Echinostomatidae and Hymenolepididae also contain several cestode species that are grebe specialists. The prominent external parasites of grebes are the lice of the clade Ischnocera. One genus of these lice, Aquanirmus, is the only one that is a grebe specialist. Another major group of parasites comprises the two mite families Rhinonyssidae and Ereynetidae, which infect the nasal passages of grebes. The rhinonyssids move slowly in the mucous membranes, drinking blood, while the ereynetids live on the surface. Various lineages of feather mites of the clade Analgoidea have evolved to occupy different sections of the feather.
Theromyzon ("duck leeches") tend to feed in the nasal cavities of waterbirds in general, including grebes. Conservation Thirty percent of extant grebe species are considered threatened by the IUCN. The handful of critically endangered and extinct species of grebe are lake endemics, and nearly all of them are or were flightless. The three recently extinct species are the Alaotra grebe, the Atitlán grebe, and the Colombian grebe. These species went extinct due to anthropogenic changes, such as habitat loss, the introduction of invasive predatory fishes, and the use of fishing nets that tangled birds in the lakes where they once lived. Similar threats, along with climate change, now face the Colombian grebe's closest relatives, the Junin grebe and the hooded grebe.
Military engineering
Military engineering is loosely defined as the art, science, and practice of designing and building military works and maintaining lines of military transport and military communications. Military engineers are also responsible for logistics behind military tactics. Modern military engineering differs from civil engineering. In the 20th and 21st centuries, military engineering also includes CBRN defense and other engineering disciplines such as mechanical and electrical engineering techniques. According to NATO, "military engineering is that engineer activity undertaken, regardless of component or service, to shape the physical operating environment. Military engineering incorporates support to maneuver and to the force as a whole, including military engineering functions such as engineer support to force protection, counter-improvised explosive devices, environmental protection, engineer intelligence and military search. Military engineering does not encompass the activities undertaken by those 'engineers' who maintain, repair and operate vehicles, vessels, aircraft, weapon systems and equipment." Military engineering is an academic subject taught in military academies or schools of military engineering. The construction and demolition tasks related to military engineering are usually performed by military engineers including soldiers trained as sappers or pioneers. In modern armies, soldiers trained to perform such tasks while well forward in battle and under fire are often called combat engineers. In some countries, military engineers may also perform non-military construction tasks in peacetime such as flood control and river navigation works, but such activities do not fall within the scope of military engineering. Etymology The word engineer was initially used in the context of warfare, dating back to 1325 when engine’er (literally, one who operates an engine) referred to "a constructor of military engines". 
In this context, "engine" referred to a military machine, i. e., a mechanical contraption used in war (for example, a catapult). As the design of civilian structures such as bridges and buildings developed as a technical discipline, the term civil engineering entered the lexicon as a way to distinguish between those specializing in the construction of such non-military projects and those involved in the older discipline. As the prevalence of civil engineering outstripped engineering in a military context and the number of disciplines expanded, the original military meaning of the word "engineering" is now largely obsolete. In its place, the term "military engineering" has come to be used. History In ancient times, military engineers were responsible for siege warfare and building field fortifications, temporary camps and roads. The most notable engineers of ancient times were the Romans and Chinese, who constructed huge siege-machines (catapults, battering rams and siege towers). The Romans were responsible for constructing fortified wooden camps and paved roads for their legions. Many of these Roman roads are still in use today. The first civilization to have a dedicated force of military engineering specialists were the Romans, whose army contained a dedicated corps of military engineers known as architecti. This group was pre-eminent among its contemporaries. The scale of certain military engineering feats, such as the construction of a double-wall of fortifications long, in just 6 weeks to completely encircle the besieged city of Alesia in 52 B.C.E., is an example. Such military engineering feats would have been completely new, and probably bewildering and demoralizing, to the Gallic defenders. Vitruvius is the best known of these Roman army engineers, due to his writings surviving. 
Examples of battles before the early modern period where military engineers played a decisive role include the Siege of Tyre under Alexander the Great, the Siege of Masada by Lucius Flavius Silva, and the Battle of the Trench, fought around a trench dug at the suggestion of Salman the Persian. For about 600 years after the fall of the Roman empire, the practice of military engineering barely evolved in the west. In fact, many of the classic techniques and practices of Roman military engineering were lost. Through this period, the foot soldier (who was pivotal to much of the Roman military engineering capability) was largely replaced by mounted soldiers. It was not until later in the Middle Ages that military engineering saw a revival focused on siege warfare. Military engineers planned castles and fortresses. When laying siege, they planned and oversaw efforts to penetrate castle defenses. When castles served a military purpose, one of the tasks of the sappers was to weaken the bases of walls to enable them to be breached, before means of thwarting these activities were devised. Broadly speaking, sappers were experts at demolishing or otherwise overcoming or bypassing fortification systems. With the 14th-century development of gunpowder, new siege engines in the form of cannons appeared. Initially, military engineers were responsible for maintaining and operating these new weapons, just as had been the case with previous siege engines. In England, the challenge of managing the new technology resulted in the creation of the Office of Ordnance around 1370 in order to administer the cannons, armaments and castles of the kingdom. Both military engineers and artillery formed the body of this organization and served together until the office's successor, the Board of Ordnance, was disbanded in 1855. In comparison to older weapons, the cannon was significantly more effective against traditional medieval fortifications.
Military engineering significantly revised the way fortifications were built in order to better protect against enemy direct and plunging shot. The new fortifications were also intended to increase the ability of defenders to bring fire onto attacking enemies. Fort construction proliferated in 16th-century Europe based on the trace italienne design. By the 18th century, regiments of foot (infantry) in the British, French, Prussian and other armies included pioneer detachments. In peacetime these specialists constituted the regimental tradesmen, constructing and repairing buildings, transport wagons, etc. On active service they moved at the head of marching columns with axes, shovels, and pickaxes, clearing obstacles or building bridges to enable the main body of the regiment to move through difficult terrain. The modern Royal Welch Fusiliers and French Foreign Legion still maintain pioneer sections who march at the front of ceremonial parades, carrying chromium-plated tools intended for show only. Other historic distinctions include long work aprons and the right to wear beards. In West Africa, the Ashanti army was accompanied to war by carpenters who were responsible for constructing shelters and blacksmiths who repaired weapons. By the 18th century, sappers were deployed in the Dahomeyan army during assaults against fortifications. The Peninsular War (1808–14) revealed deficiencies in the training and knowledge of officers and men of the British Army in the conduct of siege operations and bridging. During this war, low-ranking Royal Engineers officers carried out large-scale operations. They had under their command working parties of two or three battalions of infantry, two or three thousand men, who knew nothing of the art of siegeworks. Royal Engineers officers had to demonstrate the simplest tasks to the soldiers, often while under enemy fire.
Several officers were lost and could not be replaced, and a better system of training for siege operations was required. On 23 April 1812 an establishment was authorised, by Royal Warrant, to teach "Sapping, Mining, and other Military Fieldworks" to the junior officers of the Corps of Royal Engineers and the Corps of Royal Military Artificers, Sappers and Miners. The first courses at the Royal Engineers Establishment were done on an all ranks basis with the greatest regard to economy. To reduce staff the NCOs and officers were responsible for instructing and examining the soldiers. If the men could not read or write they were taught to do so, and those who could read and write were taught to draw and interpret simple plans. The Royal Engineers Establishment quickly became the centre of excellence for all fieldworks and bridging. Captain Charles Pasley, the director of the Establishment, was keen to confirm his teaching, and regular exercises were held as demonstrations or as experiments to improve the techniques and teaching of the Establishment. From 1833 bridging skills were demonstrated annually by the building of a pontoon bridge across the Medway which was tested by the infantry of the garrison and the cavalry from Maidstone. These demonstrations had become a popular spectacle for the local people by 1843, when 43,000 came to watch a field day laid on to test a method of assaulting earthworks for a report to the Inspector General of Fortifications. In 1869 the title of the Royal Engineers Establishment was changed to "The School of Military Engineering" (SME) as evidence of its status, not only as the font of engineer doctrine and training for the British Army, but also as the leading scientific military school in Europe. The dawn of the internal combustion engine marked the beginning of a significant change in military engineering. 
With the arrival of the automobile at the end of the 19th century and heavier-than-air flight at the start of the 20th century, military engineers assumed a major new role in supporting the movement and deployment of these systems in war. Military engineers gained vast knowledge and experience in explosives; they were tasked with planting bombs, landmines and dynamite. Towards the end of World War I, the standoff on the Western Front caused the Imperial German Army to gather experienced and particularly skilled soldiers to form "Assault Teams" which would break through the Allied trenches. With enhanced training and special weapons (such as flamethrowers), these squads achieved some success, but too late to change the outcome of the war. In early World War II, however, the Wehrmacht "Pioniere" battalions proved their efficiency in both attack and defense, inspiring other armies to develop their own combat engineer battalions. Notably, the attack on Fort Eben-Emael in Belgium was conducted by Luftwaffe glider-deployed combat engineers. The need to defeat the German defensive positions of the "Atlantic Wall" as part of the amphibious landings in Normandy in 1944 led to the development of specialist combat engineer vehicles. These, collectively known as Hobart's Funnies, included a specific vehicle to carry combat engineers, the Churchill AVRE. These and other dedicated assault vehicles were organised into the specialised 79th Armoured Division and deployed during Operation Overlord – 'D-Day'. Other significant military engineering projects of World War II include the Mulberry harbours and Operation Pluto. Modern military engineering still retains the Roman role of building field fortifications, paving roads and breaching terrain obstacles. A notable military engineering task was, for example, the breaching of the Suez Canal during the Yom Kippur War. Education Military engineers can come from a variety of engineering programs.
They may be graduates of mechanical, electrical, civil, or industrial engineering. Sub-discipline Modern military engineering can be divided into three main tasks or fields: combat engineering, strategic support, and ancillary support. Combat engineering is associated with engineering on the battlefield. Combat engineers are responsible for increasing mobility on the front lines of war, for example by digging trenches and building temporary facilities in war zones. Strategic support is associated with providing services in the communications zone, such as the construction of airfields and the improvement and upgrading of ports, roads and railways. Ancillary support includes the provision and distribution of maps as well as the disposal of unexploded warheads. Military engineers construct bases, airfields, roads, bridges, ports, and hospitals. In peacetime before the advent of modern warfare, military engineers took the role of civil engineers by participating in the construction of civil-works projects. Nowadays, military engineers are almost entirely engaged in war logistics and preparedness. Explosives engineering Explosives are defined as any system that produces rapidly expanding gases in a given volume in a short duration. Specific military engineering occupations also extend to the field of explosives and demolitions and their usage on the battlefield. Explosive devices have been used on the battlefield for several centuries, in numerous operations from combat to area clearance. The earliest known development of explosives can be traced back to 10th-century China, where the Chinese are credited with engineering the world's first known explosive, black powder. Initially developed for recreational purposes, black powder was later used for military applications in bombs and for projectile propulsion in firearms. Engineers in the military who specialize in this field formulate and design many explosive devices to use in varying operating conditions. 
Such explosive compounds range from black powder to modern plastic explosives. This particular field is commonly assigned to combat engineers, whose demolitions expertise also includes mine and IED detection and disposal. For more information, see Bomb disposal. Military engineering by country Military engineers are key in all armed forces of the world, and are invariably found either closely integrated into the force structure, or even into the combat units of the national troops. Brazil Brazilian Army engineers can be part of the Quadro de Engenheiros Militares, with its members trained or professionalized by the traditional Instituto Militar de Engenharia (IME) (Military Institute of Engineering), or the Arma de Engenharia, with its members trained by the Academia Militar das Agulhas Negras (AMAN) (Agulhas Negras Military Academy). In the Brazilian Navy, engineers can occupy the Corpo de Engenheiros da Marinha, the Quadro Complementar de Oficiais da Armada and the Quadro Complementar de Oficiais Fuzileiros Navais. Officers can come from the Centro de Instrução Almirante Wandenkolk (CIAW) (Admiral Wandenkolk Instruction Center) and the Escola Naval (EN) (Naval School) and, through the Navy's internal selection, complete their training at the Universidade de São Paulo (USP) (University of São Paulo). The Quadro de Oficiais Engenheiros of the Brazilian Air Force is occupied by engineers professionalized by the Centro de Instrução e Adaptação da Aeronáutica (CIAAR) (Air Force Instruction and Adaptation Center) and trained, or specialized, by the Instituto Tecnológico de Aeronáutica (ITA) (Aeronautics Institute of Technology). Russia – Pososhniye lyudi – Engineer Troops (Soviet Union); Assault Engineering Brigades – Russian Engineer Troops United Kingdom The Royal School of Military Engineering is the main training establishment for the British Army's Royal Engineers. 
The RSME also provides training for the Royal Navy, Royal Air Force, other Arms and Services of the British Army, Other Government Departments, and Foreign and Commonwealth countries as required. These skills provide vital components in the Army's operational capability, and Royal Engineers are currently deployed in Afghanistan, Iraq, Cyprus, Bosnia, Kosovo, Kenya, Brunei, the Falklands, Belize, Germany and Northern Ireland. Royal Engineers also take part in exercises in Saudi Arabia, Kuwait, Italy, Egypt, Jordan, Canada, Poland and the United States. United States The prevalence of military engineering in the United States dates back to the American Revolutionary War, when engineers carried out tasks in the U.S. Army. During the war, they would map terrain and build fortifications to protect troops from opposing forces. The first military engineering organization in the United States was the Army Corps of Engineers. Throughout the United States' history of warfare, engineers have been responsible for protecting troops, whether by building fortifications or by designing new technology and weaponry. The Army originally claimed engineers exclusively, but as the U.S. military branches expanded to the sea and sky, the need for military engineering units in all branches increased. As each branch of the United States military expanded, technology adapted to fit their respective needs. 
United States Army Corps of Engineers
Air Force Civil Engineer Support Agency, Rapid Engineer Deployable Heavy Operational Repair Squadron Engineers (RED HORSE), and Prime Base Engineer Emergency Force (Prime BEEF)
The United States Navy Construction Battalion Corps (better known as the Seabees) and Civil Engineer Corps
United States Marine Corps Combat Engineer Battalions
Other nations
Department of the Engineer Troops of the Armed Forces of Armenia
Royal Australian Engineers and the Royal Australian Air Force Airfield Engineers
Corps of Engineers and Military Engineer Services (MES), Bangladesh Army
Canadian Military Engineers
The Danish military engineering corps is almost entirely organized into one regiment, simply named "Ingeniørregimentet" ("The Engineering Regiment").
Engineering Arm, including the Paris Fire Brigade
Indian Army Corps of Engineers
Indonesian Army Corps of Engineers
Irish Army Engineer Corps
Combat Engineering Corps of the Israel Defense Forces
Engineer Regiment (Namibia)
Corps of Royal New Zealand Engineers ("The Engineer Battalion")
Rejimen Askar Jurutera DiRaja ("Royal Engineer Regiment")
Pakistan Army Corps of Engineers and the Military Engineering Service
10th Engineer Brigade
South African Army Engineer Formation
Sri Lanka Engineers and the Engineer Services Regiment
The Le Quy Don Technical University is the main training establishment for the Vietnamese Army's Corps of Engineers.
Technology
Disciplines
null
100245
https://en.wikipedia.org/wiki/Internet%20service%20provider
Internet service provider
An Internet service provider (ISP) is an organization that provides myriad services related to accessing, using, managing, or participating in the Internet. ISPs can be organized in various forms, such as commercial, community-owned, non-profit, or otherwise privately owned. Internet services typically provided by ISPs can include internet access, internet transit, domain name registration, web hosting, and colocation. History The Internet (originally ARPAnet) was developed as a network between government research laboratories and participating departments of universities. Other companies and organizations joined by direct connection to the backbone, or by arrangements through other connected companies, sometimes using dialup tools such as UUCP. By the late 1980s, a process was set in place towards public, commercial use of the Internet. Some restrictions were removed by 1991, shortly after the introduction of the World Wide Web. During the 1980s, online service providers such as CompuServe, Prodigy, and America Online (AOL) began to offer limited capabilities to access the Internet, such as e-mail interchange, but full access to the Internet was not readily available to the general public. In 1989, the first Internet service providers, companies offering the public direct access to the Internet for a monthly fee, were established in Australia and the United States. In Brookline, Massachusetts, The World became the first commercial ISP in the US. Its first customer was served in November 1989. These companies generally offered dial-up connections, using the public telephone network to provide last-mile connections to their customers. The barriers to entry for dial-up ISPs were low and many providers emerged. 
However, cable television companies and the telephone carriers already had wired connections to their customers and could offer Internet connections at much higher speeds than dial-up, using broadband technology such as cable modems and digital subscriber line (DSL). As a result, these companies often became the dominant ISPs in their service areas, and what was once a highly competitive ISP market became effectively a monopoly or duopoly in countries with a commercial telecommunications market, such as the United States. In 1995, NSFNET was decommissioned, removing the last restrictions on the use of the Internet to carry commercial traffic, and network access points were created to allow peering arrangements between commercial ISPs. Net neutrality On 23 April 2014, the U.S. Federal Communications Commission (FCC) was reported to be considering a new rule permitting ISPs to offer content providers a faster track to send content, thus reversing its earlier net neutrality position. A possible solution to net neutrality concerns may be municipal broadband, according to Professor Susan Crawford, a legal and technology expert at Harvard Law School. On 15 May 2014, the FCC decided to consider two options regarding Internet services: first, permit fast and slow broadband lanes, thereby compromising net neutrality; and second, reclassify broadband as a telecommunications service, thereby preserving net neutrality. On 10 November 2014, President Barack Obama recommended that the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 16 January 2015, Republicans presented legislation, in the form of a U.S. Congress H.R. discussion draft bill, that made concessions to net neutrality but prohibited the FCC from accomplishing the goal or enacting any further regulation affecting Internet service providers. 
On 31 January 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015. Adoption of this notion would reclassify Internet service from an information service to a telecommunications service and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC was expected to enforce net neutrality in its vote, according to The New York Times. On 26 February 2015, the FCC ruled in favor of net neutrality by applying Title II (common carrier) of the Communications Act of 1934 and Section 706 of the Telecommunications Act of 1996 to the Internet. The FCC Chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept." On 12 March 2015, the FCC released the specific details of the net neutrality rules. On 13 April 2015, the FCC published the final rule on its new "Net Neutrality" regulations. These rules went into effect on 12 June 2015. Upon becoming FCC chairman in April 2017, Ajit Pai proposed an end to net neutrality, awaiting votes from the commission. On 21 November 2017, Pai announced that a vote would be held by FCC members on 14 December 2017 on whether to repeal the policy. On 11 June 2018, the repeal of the FCC's network neutrality rules took effect. Provisions for low-income families Since December 31, 2021, the Affordable Connectivity Program has given U.S. households at or below 200% of the Federal Poverty Guidelines, or households meeting a number of other criteria, a discount of up to $30 per month toward internet service, or up to $75 per month on certain tribal lands. Classifications Access providers Access provider ISPs provide Internet access, employing a range of technologies to connect users to their network. 
Available technologies have ranged from computer modems with acoustic couplers to telephone lines, to television cable (CATV), Wi-Fi, and fiber optics. For users and small businesses, traditional options include copper wires to provide dial-up, DSL, typically asymmetric digital subscriber line (ADSL), cable modem or Integrated Services Digital Network (ISDN) (typically basic rate interface). Using fiber optics to end users is called Fiber To The Home or similar names. Customers with more demanding requirements (such as medium-to-large businesses, or other ISPs) can use higher-speed DSL (such as single-pair high-speed digital subscriber line), Ethernet, metropolitan Ethernet, gigabit Ethernet, Frame Relay, ISDN Primary Rate Interface, Asynchronous Transfer Mode (ATM) and synchronous optical networking (SONET). Wireless access is another option, including cellular and satellite Internet access. Access providers may have an MPLS (Multiprotocol Label Switching) or, formerly, a SONET backbone network, and have a ring or mesh network topology in their core network. The networks run by access providers can be considered wide area networks. ISPs can have access networks, aggregation networks (also called aggregation layers, distribution layers, edge routers or metro networks) and a core or backbone network; each subsequent network handles more traffic than the last. Mobile service providers also have similar networks. Mailbox providers A mailbox provider is an organization that provides services for hosting electronic mail domains with access to storage for mailboxes. It provides email servers to send, receive, accept, and store email for end users or other organizations. Many mailbox providers are also access providers, while others are not (e.g., Gmail, Yahoo! Mail, Outlook.com, AOL Mail, Pobox). 
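From the client's side, the send/receive services of a mailbox provider boil down to a handful of standard protocols. The fragment below is a minimal sketch using Python's standard library: it builds an RFC 5322 message and shows (commented out) how it would be handed to a provider's SMTP submission port. The addresses, hostname, and credentials are placeholders, not real services.

```python
from email.message import EmailMessage
import smtplib  # needed only if the message is actually delivered

# Compose a minimal message of the kind a mailbox provider accepts.
msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Hello over SMTP"
msg.set_content("Message body handed to the provider's SMTP server.")

# Delivery would contact the provider's submission port; commented out
# so the sketch stays self-contained:
# with smtplib.SMTP("mail.example.org", 587) as s:
#     s.starttls()
#     s.login("alice", "app-password")
#     s.send_message(msg)

print(msg["Subject"])
```

Retrieval works symmetrically: the stored mailbox would be read back with the standard-library `imaplib` or `poplib` clients, implementing the IMAP and POP protocols mentioned below.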
The definition given in RFC 6650 covers email hosting services, as well as the relevant department of companies, universities, organizations, groups, and individuals that manage their mail servers themselves. The task is typically accomplished by implementing Simple Mail Transfer Protocol (SMTP) and possibly providing access to messages through Internet Message Access Protocol (IMAP), the Post Office Protocol, webmail, or a proprietary protocol. Hosting ISPs Internet hosting services provide email, web-hosting, or online storage services. Other services include virtual server, cloud services, or physical server operation. Transit ISPs Just as their customers pay them for Internet access, ISPs themselves pay upstream ISPs for Internet access. An upstream ISP, such as a tier 2 or tier 1 ISP, usually has a larger network than the contracting ISP or is able to provide the contracting ISP with access to parts of the Internet the contracting ISP by itself has no access to. In the simplest case, a single connection is established to an upstream ISP and is used to transmit data to or from areas of the Internet beyond the home network; this mode of interconnection is often cascaded multiple times until reaching a tier 1 carrier. In reality, the situation is often more complex. ISPs with more than one point of presence (PoP) may have separate connections to an upstream ISP at multiple PoPs, or they may be customers of multiple upstream ISPs and may have connections to each one of them at one or more points of presence. Transit ISPs provide large amounts of bandwidth for connecting hosting ISPs and access ISPs. Border Gateway Protocol is used by routers to connect to other networks, which are identified by their autonomous system number. Tier 2 ISPs depend on Tier 1 ISPs: they often have their own networks but must pay Tier 1 ISPs for transit, although they may peer without payment with other Tier 2 and some Tier 1 ISPs. 
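The route-selection role of BGP mentioned above can be illustrated with a drastically simplified best-path computation. Real BGP applies many more tie-breakers, and the AS numbers, paths, and local-preference values below are made-up examples; the sketch only shows the first two standard criteria (highest local preference set by policy, then shortest AS path).

```python
# Simplified BGP-style best-path selection among candidate routes to one
# destination prefix. Policy typically sets a higher local preference on
# routes learned from a settlement-free peer than on paid transit routes.
routes = [
    {"via": "transit ISP",    "local_pref": 100, "as_path": [64500, 64510, 64520]},
    {"via": "peer at IXP",    "local_pref": 200, "as_path": [64530, 64520]},
    {"via": "backup transit", "local_pref": 100, "as_path": [64540, 64520]},
]

def best_path(candidates):
    """Highest local_pref wins; shortest AS path breaks ties."""
    return max(candidates, key=lambda r: (r["local_pref"], -len(r["as_path"])))

print(best_path(routes)["via"])
```

With these example values the peer route wins on local preference alone, which mirrors the economic preference for settlement-free peering over paid transit described above.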
Tier 3 ISPs do not engage in peering and only purchase transit from Tier 2 and Tier 1 ISPs, and often specialize in offering internet service to end customers such as businesses and individuals. Some organizations act as their own ISPs and purchase transit directly from a Tier 1 ISP. Transit ISPs may use OTN (Optical transport network) or SDH/SONET (Synchronous Digital Hierarchy/Synchronous Optical Networking) with DWDM (Dense wavelength-division multiplexing) for transmitting data through optical fiber over long distances such as across a city or between cities. For transmissions in a metro area such as a city and for large customers such as data centers, special pluggable modules in routers, conforming to standards such as CFP, QSFP-DD, OSFP, 400ZR or OpenZR+ may be used alongside DWDM and many vendors have proprietary offerings. Long-haul networks transport data across longer distances than metro networks, such as through submarine cables, or connecting several metropolitan networks. Optical line systems and packet optical transport systems can also be used for data transmission in metro areas, long haul connections and data center interconnect. Ultra long haul transmission transports data over distances of over 1500 kilometers. Virtual ISPs A virtual ISP (VISP) is an operation that purchases services from another ISP, sometimes called a wholesale ISP in this context, which allow the VISP's customers to access the Internet using services and infrastructure owned and operated by the wholesale ISP. VISPs resemble mobile virtual network operators and competitive local exchange carriers for voice communications. Free ISPs Free ISPs are Internet service providers that provide service free of charge. Many free ISPs display advertisements while the user is connected; like commercial television, in a sense they are selling the user's attention to the advertiser. Other free ISPs, sometimes called freenets, are run on a nonprofit basis, usually with volunteer staff. 
Wireless ISP A wireless Internet service provider (WISP) is an Internet service provider with a network based on wireless networking. Technology may include commonplace Wi-Fi wireless mesh networking, or proprietary equipment designed to operate over open 900 MHz, 2.4 GHz, 4.9, 5.2, 5.4, 5.7, and 5.8 GHz bands or licensed frequencies such as 2.5 GHz (EBS/BRS), 3.65 GHz (NN) and in the UHF band (including the MMDS frequency band) and LMDS. ISPs in rural regions It is hypothesized that the vast divide between broadband connection in rural and urban areas is partially caused by a lack of competition between ISPs in rural areas, where there exists a market typically controlled by just one provider. A lack of competition problematically causes subscription rates to rise disproportionately with the quality of service in rural areas, causing broadband connection to be unaffordable for some, even when the infrastructure supports service in a given area. In contrast, consumers in urban areas typically benefit from lower rates and higher quality of broadband services, not only due to more advanced infrastructure but also the healthy economic competition caused by having several ISPs in a given area. How the difference in competition levels has potentially negatively affected the innovation and development of infrastructure in specific rural areas remains a question. The exploration and answers developed to the question could provide guidance for possible interventions and solutions meant to remedy the digital divide between rural and urban connectivity. Satellite internet services Altnets Altnets (portmanteau of "alternative network provider") are localized broadband networks, typically formed as an alternative to monopolistic internet service providers within a region. 
Peering ISPs may engage in peering, where multiple ISPs interconnect at peering points or Internet exchange points (IXPs), allowing routing of data between each network, without charging one another for the data transmitted—data that would otherwise have passed through a third upstream ISP, incurring charges from the upstream ISP. ISPs requiring no upstream and having only customers (end customers or peer ISPs) are called Tier 1 ISPs. Network hardware, software and specifications, as well as the expertise of network management personnel, are important in ensuring that data follows the most efficient route, and upstream connections work reliably. A tradeoff between cost and efficiency is possible. Tier 1 ISPs are also interconnected with a mesh network topology. Internet Exchange Points (IXPs) are public locations where several networks are connected to each other. Public peering is done at IXPs, while private peering can be done with direct links between networks. IXPs or peering exchanges may be located in data centers. Law enforcement and intelligence assistance Internet service providers in many countries are legally required (e.g., via the Communications Assistance for Law Enforcement Act (CALEA) in the U.S.) to allow law enforcement agencies to monitor some or all of the information transmitted by the ISP, or even store the browsing history of users to allow government access if needed (e.g. via the Investigatory Powers Act 2016 in the United Kingdom). Furthermore, in some countries ISPs are subject to monitoring by intelligence agencies. In the U.S., a controversial National Security Agency program known as PRISM provides for broad monitoring of Internet users' traffic and has raised concerns about potential violation of the privacy protections in the Fourth Amendment to the United States Constitution. 
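The economic incentive for peering described above can be sketched with a toy calculation: traffic exchanged settlement-free with peers at an IXP avoids per-megabit transit charges. The traffic volumes and transit price below are illustrative assumptions, not real market figures.

```python
# Toy model: an ISP exchanges traffic with three networks. Traffic to a
# network it peers with at an IXP is settlement-free; all other traffic
# is billed by the upstream transit provider per Mbit/s per month.
TRANSIT_PRICE_PER_MBPS = 0.50   # assumed monthly price, US dollars

traffic_mbps = {"net-a": 4000, "net-b": 2500, "net-c": 1500}  # hypothetical
peers = {"net-a", "net-b"}      # networks reachable over the IXP

transit_mbps = sum(v for k, v in traffic_mbps.items() if k not in peers)
cost_with_peering = transit_mbps * TRANSIT_PRICE_PER_MBPS
cost_without_peering = sum(traffic_mbps.values()) * TRANSIT_PRICE_PER_MBPS

print(cost_with_peering, cost_without_peering)
```

In this example only the 1500 Mbit/s of non-peered traffic is billed, so the monthly transit bill drops from $4000 to $750; in practice IXP port fees and cross-connect costs would offset part of the saving.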
Modern ISPs integrate a wide array of surveillance and packet sniffing equipment into their networks, which then feeds the data to law-enforcement/intelligence networks (such as DCSNet in the United States, or SORM in Russia) allowing monitoring of Internet traffic in real time.
Technology
Internet
null
4565309
https://en.wikipedia.org/wiki/Ruby%20laser
Ruby laser
A ruby laser is a solid-state laser that uses a synthetic ruby crystal as its gain medium. The first working laser was a ruby laser made by Theodore H. "Ted" Maiman at Hughes Research Laboratories on May 16, 1960. Ruby lasers produce pulses of coherent visible light at a wavelength of 694.3 nm, which is a deep red color. Typical ruby laser pulse lengths are on the order of a millisecond. Design A ruby laser most often consists of a ruby rod that must be pumped with very high energy, usually from a flashtube, to achieve a population inversion. The rod is often placed between two mirrors, forming an optical cavity, which reflect the light produced by the ruby's fluorescence back and forth through the rod, causing stimulated emission. Ruby is one of the few solid-state lasers that produce light in the visible range of the spectrum, lasing at 694.3 nanometers, a deep red color, with a very narrow linewidth of 0.53 nm. The ruby laser is a three-level solid-state laser. The active laser medium (laser gain/amplification medium) is a synthetic ruby rod that is energized through optical pumping, typically by a xenon flashtube. Ruby has very broad and powerful absorption bands in the visual spectrum, at 400 and 550 nm, and a very long fluorescence lifetime of 3 milliseconds. This allows for very high energy pumping, since the pulse duration can be much longer than with other materials. While ruby has a very wide absorption profile, its conversion efficiency is much lower than that of other media. In early examples, the rod's ends had to be polished with great precision, such that the ends of the rod were flat to within a quarter of a wavelength of the output light, and parallel to each other within a few seconds of arc. The finely polished ends of the rod were silvered; one end completely, the other only partially. The rod, with its reflective ends, then acts as a Fabry–Pérot etalon (or a Gires–Tournois etalon). 
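The figures quoted above (694.3 nm wavelength, 0.53 nm linewidth) translate directly into photon-level quantities. A quick back-of-the-envelope computation using standard physical constants:

```python
# Photon properties of the ruby laser line from its quoted wavelength.
h = 6.62607015e-34   # Planck constant, J*s
c = 2.99792458e8     # speed of light, m/s
eV = 1.602176634e-19 # joules per electronvolt

wavelength = 694.3e-9   # ruby lasing wavelength, m
linewidth = 0.53e-9     # quoted linewidth, m

frequency = c / wavelength                    # optical frequency, Hz
photon_energy_ev = h * frequency / eV         # energy per photon, eV
bandwidth_hz = c * linewidth / wavelength**2  # linewidth as a frequency span

print(f"{frequency:.3e} Hz, {photon_energy_ev:.2f} eV, {bandwidth_hz:.2e} Hz")
```

This gives a frequency of about 4.32e14 Hz and roughly 1.79 eV per photon, with the 0.53 nm linewidth corresponding to a frequency spread of about 3.3e11 Hz.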
Modern lasers often use rods with antireflection coatings, or with the ends cut and polished at Brewster's angle instead. This eliminates the reflections from the ends of the rod. External dielectric mirrors then are used to form the optical cavity. Curved mirrors are typically used to relax the alignment tolerances and to form a stable resonator, often compensating for thermal lensing of the rod. Ruby also absorbs some of the light at its lasing wavelength. To overcome this absorption, the entire length of the rod needs to be pumped, leaving no shaded areas near the mountings. The active part of the ruby is the dopant, which consists of chromium ions suspended in a synthetic sapphire crystal. The dopant often comprises around only 0.05% of the crystal, but is responsible for all of the absorption and emission of radiation. Depending on the concentration of the dopant, synthetic ruby usually comes in either pink or red. Applications One of the first applications for the ruby laser was in rangefinding. By 1964, ruby lasers with rotating prism q-switches became the standard for military rangefinders, until the introduction of more efficient Nd:YAG rangefinders a decade later. Ruby lasers were used mainly in research. The ruby laser was the first laser used to optically pump tunable dye lasers and is particularly well suited to excite laser dyes emitting in the near infrared. Ruby lasers are rarely used in industry, mainly due to low efficiency and low repetition rates. One of the main industrial uses is drilling holes through diamond, because ruby's high-powered beam closely matches diamond's broad absorption band (the GR1 band) in the red. Ruby lasers have declined in use with the discovery of better lasing media. They are still used in a number of applications where short pulses of red light are required. Holographers around the world produce holographic portraits with ruby lasers, in sizes up to a meter square. 
Because of its high pulsed power and good coherence length, the red 694 nm laser light is preferred to the 532 nm green light of frequency-doubled Nd:YAG, which often requires multiple pulses for large holograms. Many non-destructive testing labs use ruby lasers to create holograms of large objects such as aircraft tires to look for weaknesses in the lining. Ruby lasers were used extensively in tattoo and hair removal, but are being replaced by alexandrite and Nd:YAG lasers in this application. History The ruby laser was the first laser to be made functional. Built by Theodore Maiman in 1960, the device was created out of the concept of an "optical maser," a maser that could operate in the visual or infrared regions of the spectrum. In 1958, after the inventor of the maser, Charles Townes, and his colleague, Arthur Schawlow, published an article in the Physical Review regarding the idea of optical masers, the race to build a working model began. Ruby had been used successfully in masers, so it was a first choice as a possible medium. While attending a conference in 1959, Maiman listened to a speech given by Schawlow, describing the use of ruby as a lasing medium. Schawlow stated that pink ruby, having a lowest energy-state that was too close to the ground-state, would require too much pumping energy for laser operation, suggesting red ruby as a possible alternative. Maiman, having worked with ruby for many years, and having written a paper on ruby fluorescence, felt that Schawlow was being "too pessimistic." His measurements indicated that the lowest energy level of pink ruby could at least be partially depleted by pumping with a very intense light source, and, since ruby was readily available, he decided to try it anyway. Also attending the conference was Gordon Gould. Gould suggested that, by pulsing the laser, peak outputs as high as a megawatt could be produced. As time went on, many scientists began to doubt the usefulness of any color ruby as a laser medium. 
Maiman, too, felt his own doubts, but, being a very "single-minded person," he kept working on his project in secret. He searched to find a light source that would be intense enough to pump the rod, and an elliptical pumping cavity of high reflectivity, to direct the energy into the rod. He found his light source when a salesman from General Electric showed him a few xenon flashtubes, claiming that the largest could ignite steel wool if placed near the tube. Maiman realized that, with such intensity, he did not need such a highly reflective pumping cavity, and, with the helical lamp, would not need it to have an elliptical shape. Maiman constructed his ruby laser at Hughes Research Laboratories, in Malibu, California. He used a pink ruby rod, measuring 1 cm by 1.5 cm, and, on May 16, 1960, fired the device, producing the first beam of laser light. Theodore Maiman's original ruby laser is still operational. It was demonstrated on May 15, 2010, at a symposium co-hosted in Vancouver, British Columbia by the Dr. Theodore Maiman Memorial Foundation and Simon Fraser University, where Dr. Maiman was adjunct professor at the School of Engineering Science. Maiman's original laser was fired at a projector screen in a darkened room. In the center of a white flash (leakage from the xenon flashtube), a red spot was briefly visible. The ruby lasers did not deliver a single pulse, but rather delivered a series of pulses, consisting of a series of irregular spikes within the pulse duration. In 1961, R.W. Hellwarth invented a method of q-switching, to concentrate the output into a single pulse. In 1962, Willard Boyle, working at Bell Labs, produced the first continuous output from a ruby laser. Unlike the usual side-pumping method, the light from a mercury arc lamp was pumped into the end of a very small rod, to achieve the necessary population inversion. 
The laser did not emit a continuous wave, but rather a continuous train of pulses, giving scientists the opportunity to study the spiked output of ruby. The continuous ruby laser was the first laser to be used in medicine. It was used by Leon Goldman, a pioneer in laser medicine, for treatments such as tattoo removal, scar treatments, and to induce healing. Due to its limits in output power, tunability, and complications in operating and cooling the units, the continuous ruby laser was quickly replaced with more versatile dye, Nd:YAG, and argon lasers.
Technology
Lasers
null
2460242
https://en.wikipedia.org/wiki/Physical%20optics
Physical optics
In physics, physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory. Principle Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics and not that it is an exact physical theory. This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation. In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio. In optics, it typically consists of integrating the ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field. In radar scattering, it usually means taking the current that would be found on a tangent plane of similar material as the current at each point on the front, i.e. the geometrically illuminated part, of a scatterer. Current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. 
This is useful for bodies with large smooth convex shapes and for lossy (low-reflection) surfaces. The ray-optics field or current is generally not accurate near edges or shadow boundaries, unless supplemented by diffraction and creeping wave calculations. The standard theory of physical optics has some defects in the evaluation of scattered fields, leading to decreased accuracy away from the specular direction. An improved theory introduced in 2004 gives exact solutions to problems involving wave diffraction by conducting scatterers.
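The aperture-integration recipe described above can be sketched numerically for the simplest case, a single slit: ray optics supplies the aperture field (uniform inside the slit, zero outside), and summing that field over the aperture with the appropriate phase factor gives the far-zone diffraction pattern, which matches the analytic Fraunhofer result. The slit width and wavelength below are arbitrary illustrative choices, not taken from the text.

```python
import numpy as np

wavelength = 500e-9                 # 500 nm illumination (illustrative)
a = 50e-6                           # 50 micron slit width (illustrative)
k = 2 * np.pi / wavelength

x = np.linspace(-a / 2, a / 2, 4001)             # sample points across the slit
dx = x[1] - x[0]
theta = np.radians(np.linspace(-2.0, 2.0, 401))  # observation angles

# Physical-optics far-field integral: E(theta) ~ sum of exp(-i k x sin(theta)) dx
# over the ray-estimated (uniform) aperture field.
phase = np.exp(-1j * k * np.sin(theta)[:, None] * x[None, :])
E = phase.sum(axis=1) * dx
I = np.abs(E) ** 2
I /= I.max()

# Analytic Fraunhofer pattern sinc^2(a sin(theta) / wavelength) for comparison
# (np.sinc already includes the factor of pi).
I_exact = np.sinc(a * np.sin(theta) / wavelength) ** 2
assert np.allclose(I, I_exact, atol=1e-3)
```

The same surface-integration idea carries over to the radar case, with approximate surface currents on the illuminated part of the scatterer playing the role of the aperture field.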
Physical sciences
Optics
Physics
2461623
https://en.wikipedia.org/wiki/Bassaricyon
Bassaricyon
The genus Bassaricyon consists of small Neotropical procyonids, popularly known as olingos (), cousins of the raccoon. They are native to the rainforests of Central and South America from Nicaragua to Peru. They are arboreal and nocturnal, and live at elevations from sea level to . Olingos closely resemble another procyonid, the kinkajou, in morphology and habits, though they lack prehensile tails and extrudable tongues, have more extended muzzles, and possess anal scent glands. However, the two genera are not sister taxa. They also resemble galagos and certain lemurs. Species There is disagreement on the number of species in this genus, with some taxonomists splitting the populations into as many as five species (adding B. pauli to the list below), two species (dropping B. medius and B. neblina), or just a single species (B. gabbii). Until recently, only the northern olingo (B. gabbii) was particularly well known, and it was usually confusingly referred to simply as an olingo. Olingos are quite rare in zoos and are often misidentified as kinkajous. A previously unrecognized olingo, similar to but distinct from B. alleni, was discovered in 2006 by Kristofer Helgen at Las Maquinas in the Andes of Ecuador. He named this species B. neblina, the olinguito, and presented his findings on August 15, 2013. With data derived from anatomy, morphometrics, nuclear and mitochondrial DNA, field observations, and geographic range modeling, Helgen and coworkers demonstrated that four olingo species can be recognized: Evolution Genetic studies have shown that the closest relatives of the olingos are actually the coatis; the divergence between the two groups is estimated to have occurred about 10.2 million years (Ma) ago during the Tortonian age, while kinkajous split off from the other extant procyonids about 22.6 Ma ago during the Aquitanian age. The similarities between kinkajous and olingos are thus an example of parallel evolution. 
The diversification of the genus apparently started about 3.5 million years ago, when B. neblina branched off from the others; B. gabbii then split off about 1.8 Ma ago, and the two lowland species, B. alleni and B. medius, diverged about 1.3 Ma ago. The dating and biogeography modeling suggest that the earliest diversification of the genus took place in northwestern South America shortly after the ancestors of olingos first invaded the continent from Central America as part of the Great American Interchange. The evolution of olingos thus contrasts with that of kinkajous, a much older lineage that is thought to have arisen in Central America long before they reached South America.
Biology and health sciences
Procyonidae
Animals
2462757
https://en.wikipedia.org/wiki/Formosan%20subterranean%20termite
Formosan subterranean termite
The Formosan termite (Coptotermes formosanus) is a species of termite native to southern China and introduced to Taiwan (formerly known as Formosa, from which it gets its name), Japan, South Africa, Sri Lanka, Hawaii, and the continental United States. The Formosan termite is often nicknamed the super-termite because of its destructive habits, a consequence of the large size of its colonies and its ability to consume wood at a rapid rate. Populations of these termites have become large enough to appear on New Orleans' weather radars. A mature Formosan colony can consume as much as 13 ounces of wood a day (about 400 g) and can severely damage a structure in as little as three months. Formosan termites infest a wide variety of structures (including boats and high-rise condominiums) and can damage trees. In the United States, along with another species introduced from Southeast Asia, Coptotermes gestroi, they are responsible for tremendous damage to property, resulting in large treatment and repair costs. Biology Coptotermes formosanus is a social insect. Nutrition Crops include sugarcane. Reproduction and lifecycle stages As an introduced species History Formosan termites are rarely found north of 35°N. They have been reported in 11 states: Alabama, California, Florida, Georgia, Hawaii, Louisiana, Mississippi, North Carolina, South Carolina, Tennessee, and Texas. Their distribution is restricted to southern areas of the United States because their eggs do not hatch below about 20 °C (68 °F). Spread of Formosan infestation Formosan termites, since their probable landing at the Port of New Orleans around the middle of the 20th century, have become one of the most serious concerns of pest control regulators and researchers. In the 1970s, the United States Department of Agriculture began to track the spread of Formosan infestations. 
Maps of counties infested by Formosans were published by the USDA in 1975, 1990, and 2001. Universities across Texas, Louisiana, Mississippi, and Florida have published updates since then. The annual expansion rate of Formosan infestation between 1990 and the present varies from 5.3% in Mississippi to 8.1% in Texas. Chouvenc & Helmick (2015) found that C. formosanus readily hybridizes with another invasive termite in Florida, C. gestroi. Economic impact Historic structures in Hawaii have been threatened, such as Iolani Palace in Honolulu. The species has its greatest impact in North America. C. formosanus is the most destructive, difficult to control, and economically important species of termite in the southern United States. The Florida Department of Agriculture and Consumer Services puts the average cost of Formosan termite damage "in the $10,000 range per home", noting that it "can be much higher" and that "in some severe cases the home may have to be demolished and rebuilt". Formosan termite barriers Physical barriers to Formosan termites have been developed. Most of these barriers must be installed during construction, but a few can be installed after construction. The most important application of these post-construction barriers is the stone particle barrier, used to protect exposed concrete perimeters. The International Code Council (ICC) has issued an acceptance standard, AC 380 Acceptance Criteria for Termite Physical Barrier Systems, which requires five years of controlled field trials in multiple locations infested by Formosan termites. These acceptance criteria are rigorous and are drawn from the criteria used by state and federal pest control regulators for termite control methods.
Biology and health sciences
Cockroaches &amp; Termites (Blattodea)
Animals
2465250
https://en.wikipedia.org/wiki/Thermal%20energy%20storage
Thermal energy storage
Thermal energy storage (TES) is the storage of thermal energy for later reuse. Employing widely different technologies, it allows surplus thermal energy to be stored for hours, days, or months. The scale of both storage and use varies from small to large – from individual processes to districts, towns, or regions. Usage examples are the balancing of energy demand between daytime and nighttime, storing summer heat for winter heating, or winter cold for summer cooling (seasonal thermal energy storage). Storage media include water or ice-slush tanks; masses of native earth or bedrock accessed with heat exchangers by means of boreholes; deep aquifers contained between impermeable strata; shallow, lined pits filled with gravel and water and insulated at the top; and eutectic solutions and phase-change materials. Other sources of thermal energy for storage include heat or cold produced with heat pumps from off-peak, lower-cost electric power, a practice called peak shaving; heat from combined heat and power (CHP) plants; heat produced by renewable electrical energy that exceeds grid demand; and waste heat from industrial processes. Heat storage, both seasonal and short-term, is considered an important means of cheaply balancing high shares of variable renewable electricity production and of integrating the electricity and heating sectors in energy systems fed almost or completely by renewable energy. Categories The different kinds of thermal energy storage can be divided into three separate categories: sensible heat, latent heat, and thermo-chemical heat storage. Each of these has different advantages and disadvantages that determine their applications. Sensible heat storage Sensible heat storage (SHS) is the most straightforward method. It simply means the temperature of some medium is either increased or decreased. This type of storage is the most commercially available of the three; other techniques are less developed. 
The materials are generally inexpensive and safe. One of the cheapest, most commonly used options is a water tank, but materials such as molten salts or metals can be heated to higher temperatures and therefore offer a higher storage capacity. Energy can also be stored underground (UTES), either in an underground tank or in some kind of heat-transfer fluid (HTF) flowing through a system of pipes, either placed vertically in U-shapes (boreholes) or horizontally in trenches. Yet another system is known as a packed-bed (or pebble-bed) storage unit, in which some fluid, usually air, flows through a bed of loosely packed material (usually rock, pebbles or ceramic brick) to add or extract heat. A disadvantage of SHS is its dependence on the properties of the storage medium. Storage capacities are limited by the specific heat capacity of the storage material, and the system needs to be properly designed to ensure energy extraction at a constant temperature. Molten salt technology The sensible heat of molten salt is also used for storing solar energy at a high temperature, termed molten-salt technology or molten salt energy storage (MSES). Molten salts can be employed as a thermal energy storage method to retain thermal energy. Presently, this is a commercially used technology to store the heat collected by concentrated solar power (e.g., from a solar tower or solar trough). The heat can later be converted into superheated steam to power conventional steam turbines and generate electricity at a later time. It was demonstrated in the Solar Two project from 1995 to 1999. Estimates in 2006 predicted an annual efficiency of 99%, a reference to the energy retained by storing heat before turning it into electricity, versus converting heat directly into electricity. Various eutectic mixtures of different salts are used (e.g., sodium nitrate, potassium nitrate and calcium nitrate). 
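The governing relation for sensible heat storage is Q = m·c·ΔT, which makes the trade-off between water and molten salt easy to quantify: salt has a lower specific heat but tolerates a far wider temperature swing. The property values and temperature ranges below are approximate textbook figures supplied for illustration, not taken from the text.

```python
# Sensible heat storage: Q = m * c * dT (illustrative, approximate values).
def stored_energy_mj(mass_kg, c_kj_per_kg_k, delta_t_k):
    """Sensible heat stored when a medium's temperature rises by delta_t_k, in MJ."""
    return mass_kg * c_kj_per_kg_k * delta_t_k / 1000.0

# One tonne of water heated from 20 C to 90 C (c ~ 4.18 kJ/(kg K)):
water_mj = stored_energy_mj(1000, 4.18, 70)   # ~293 MJ (~81 kWh)

# One tonne of "solar salt" (c ~ 1.5 kJ/(kg K)) swung from ~290 C to ~565 C:
salt_mj = stored_energy_mj(1000, 1.5, 275)    # ~413 MJ

# Lower specific heat, but the much wider usable temperature range gives the
# molten salt the higher capacity per unit mass.
assert salt_mj > water_mj
```

This is why high-temperature media such as molten salts or metals, discussed next, offer higher storage capacity despite water's exceptional specific heat.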
Experience with such systems exists in non-solar applications in the chemical and metals industries, where molten salt is used as a heat-transport fluid. The salt melts at . It is kept liquid at in an insulated "cold" storage tank. The liquid salt is pumped through panels in a solar collector, where the focused sun heats it to . It is then sent to a hot storage tank. With proper insulation of the tank, the thermal energy can be usefully stored for up to a week. When electricity is needed, the hot molten salt is pumped to a conventional steam generator to produce superheated steam for driving a conventional turbine/generator set as used in any coal, oil, or nuclear power plant. A 100-megawatt turbine would need a tank of about tall and in diameter to drive it for four hours by this design. A single tank with a divider plate to separate cold and hot molten salt is under development. It would be more economical, achieving 100% more heat storage per unit volume than the dual-tank system, since molten-salt storage tanks are costly due to their complicated construction. Phase-change materials (PCMs) are also used in molten-salt energy storage, and research on obtaining shape-stabilized PCMs using high-porosity matrices is ongoing. Most solar thermal power plants use this thermal energy storage concept. The Solana Generating Station in the U.S. can store 6 hours' worth of generating capacity in molten salt. During the summer of 2013, the Gemasolar Thermosolar power-tower/molten-salt plant in Spain achieved a first by continuously producing electricity 24 hours per day for 36 days. The Cerro Dominador Solar Thermal Plant, inaugurated in June 2021, has 17.5 hours of heat storage. Heat storage in tanks, ponds or rock caverns A steam accumulator consists of an insulated steel pressure tank containing hot water and steam under pressure. As a heat storage device, it is used to decouple heat production by a variable or steady source from a variable demand for heat. 
Steam accumulators may become significant for energy storage in solar thermal energy projects. Large stores, mostly hot water storage tanks, are widely used in Nordic countries to store heat for several days, to decouple heat and power production and to help meet peak demands. Some towns use insulated ponds heated by solar power as a heat source for district heating pumps. Interseasonal storage in caverns has been investigated, appears to be economical, and plays a significant role in heating in Finland. Energy producer Helen Oy estimates an 11.6 GWh capacity and 120 MW thermal output for its water cistern under Mustikkamaa (fully charged or discharged in 4 days at capacity), operating from 2021 to offset days of peak production/demand, while the rock caverns below sea level in Kruunuvuorenranta (near Laajasalo) were designated in 2018 to store heat in summer from warm seawater and release it in winter for district heating. In 2024, it was announced that the municipal energy supplier of Vantaa had commissioned an underground heat storage facility of over in size and 90 GWh in capacity to be built, expected to be operational in 2028. Hot silicon technology Solid or molten silicon offers much higher storage temperatures than salts, with consequent greater capacity and efficiency. It is being researched as a potentially more energy-efficient storage technology. Silicon is able to store more than 1 MWh of energy per cubic meter at 1400 °C. An additional advantage is the relative abundance of silicon compared to the salts used for the same purpose. Molten aluminum Another medium that can store thermal energy is molten (recycled) aluminum. This technology was developed by the Swedish company Azelio. The material is heated to 600 °C. When needed, the energy is transported to a Stirling engine using a heat-transfer fluid. 
Heat storage using oils Using oils as sensible heat storage materials is an effective approach for storing thermal energy, particularly in medium- to high-temperature applications. Different types of oils are used based on the temperature range and the specific requirements of the thermal energy storage system: mineral oils and synthetic oils; more recently, vegetable oils have been gaining interest because they are renewable and biodegradable. Numerous criteria are used to select an oil for a particular application: high energy storage capacity and specific heat capacity, high thermal conductivity, high chemical and physical stability, low coefficient of expansion, low cost, availability, low corrosion and compatibility with containment materials, limited environmental issues, etc. Regarding the selection of a low-cost or cost-effective thermal oil, it is important to consider not only the acquisition or purchase cost, but also the operating and replacement costs, or even final disposal costs. An oil that is initially more expensive may prove to be more cost-effective in the long run if it offers higher thermal stability, thereby reducing the frequency of replacement. Heat storage in hot rocks or concrete Water has one of the highest thermal capacities, at 4.2 kJ/(kg⋅K), whereas concrete has about one third of that. On the other hand, concrete can be heated to much higher temperatures (1200 °C) by, for example, electrical heating, and therefore has a much higher overall volumetric capacity. Thus, in the example below, an insulated cube of about would appear to provide sufficient storage for a single house to meet 50% of heating demand. This could, in principle, be used to store surplus wind or solar heat, given the ability of electrical heating to reach high temperatures. At the neighborhood level, the Wiggenhausen-Süd solar development at Friedrichshafen in southern Germany has received international attention. 
This features a () reinforced concrete thermal store linked to () of solar collectors, which will supply the 570 houses with around 50% of their heating and hot water. Siemens-Gamesa built a 130 MWh thermal store near Hamburg, with basalt heated to 750 °C and 1.5 MW electric output. A similar system is scheduled for Sorø, Denmark, with 41–58% of the stored 18 MWh heat returned for the town's district heating, and 30–41% returned as electricity. The "brick toaster", announced in August 2022, is an innovative heat reservoir operating at up to 1,500 °C (2,732 °F) that its maker, Titan Cement/Rondo, claims could cut global CO2 output by 15% over 15 years. Latent heat storage Because latent heat storage (LHS) is associated with a phase transition, the general term for the associated media is phase-change material (PCM). During these transitions, heat can be added or extracted without affecting the material's temperature, giving it an advantage over SHS technologies. Storage capacities are often higher as well. There is a multitude of PCMs available, including but not limited to salts, polymers, gels, paraffin waxes, metal alloys and semiconductor-metal alloys, each with different properties. This allows for a more target-oriented system design. As the process is isothermal at the PCM's melting point, the material can be picked to have the desired temperature range. Desirable qualities include high latent heat and thermal conductivity. Furthermore, the storage unit can be more compact if volume changes during the phase transition are small. PCMs are further subdivided into organic, inorganic and eutectic materials. Compared to organic PCMs, inorganic materials are less flammable, cheaper and more widely available. They also have higher storage capacity and thermal conductivity. Organic PCMs, on the other hand, are less corrosive and not as prone to phase separation. 
Eutectic materials, as they are mixtures, are more easily adjusted to obtain specific properties, but have low latent and specific heat capacities. Another important factor in LHS is the encapsulation of the PCM. Some materials are more prone to erosion and leakage than others. The system must be carefully designed in order to avoid unnecessary loss of heat. Miscibility gap alloy technology Miscibility gap alloys rely on the phase change of a metallic material (see: latent heat) to store thermal energy. Rather than pumping the liquid metal between tanks as in a molten-salt system, the metal is encapsulated in another metallic material that it cannot alloy with (immiscible). Depending on the two materials selected (the phase-changing material and the encapsulating material), storage densities can be between 0.2 and 2 MJ/L. A working fluid, typically water or steam, is used to transfer the heat into and out of the system. Thermal conductivity of miscibility gap alloys is often higher (up to 400 W/(m⋅K)) than that of competing technologies, which means quicker "charge" and "discharge" of the thermal storage is possible. The technology has not yet been implemented on a large scale. Ice-based technology Several applications are being developed where ice is produced during off-peak periods and used for cooling at a later time. For example, air conditioning can be provided more economically by using low-cost electricity at night to freeze water into ice, then using the cooling capacity of ice in the afternoon to reduce the electricity needed to handle air conditioning demands. Thermal energy storage using ice makes use of the large heat of fusion of water. Historically, ice was transported from mountains to cities for use as a coolant. One metric ton of water (= one cubic meter) can store 334 megajoules (MJ), or 317,000 BTU (93 kWh). A relatively small storage facility can hold enough ice to cool a large building for a day or a week. 
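The ice figures quoted above follow directly from the heat of fusion of water, 334 kJ/kg; a short check of the arithmetic:

```python
# Cooling capacity of one metric ton of ice, from the heat of fusion of water.
latent_heat = 334e3          # J/kg, heat of fusion of water
mass = 1000.0                # kg: one metric ton (~ one cubic meter of water)

energy_j = mass * latent_heat
energy_kwh = energy_j / 3.6e6        # 1 kWh = 3.6 MJ
energy_btu = energy_j / 1055.06      # 1 BTU ~ 1055.06 J

assert energy_j == 334e6             # 334 MJ, as quoted
assert abs(energy_kwh - 92.8) < 0.1  # ~93 kWh, as quoted
assert abs(energy_btu - 316_570) < 1000   # ~317,000 BTU, as quoted
```

Melting one tonne of ice thus absorbs roughly as much energy as cooling provided by tens of kilowatt-hours of electricity, which is the economic basis for freezing water with cheap off-peak power.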
In addition to using ice in direct cooling applications, it is also being used in heat pump-based heating systems. In these applications, the phase change energy provides a very significant layer of thermal capacity near the bottom of the temperature range in which water-source heat pumps can operate. This allows the system to ride out the heaviest heating load conditions and extends the timeframe over which the source energy elements can contribute heat back into the system. Cryogenic energy storage Cryogenic energy storage uses liquefaction of air or nitrogen as an energy store. A pilot cryogenic energy system that uses liquid air as the energy store, and low-grade waste heat to drive the thermal re-expansion of the air, operated at a power station in Slough, UK in 2010. Thermo-chemical heat storage Thermo-chemical heat storage (TCS) involves some kind of reversible exothermic/endothermic chemical reaction with thermo-chemical materials (TCM). Depending on the reactants, this method can allow for an even higher storage capacity than LHS. In one type of TCS, heat is applied to decompose certain molecules. The reaction products are then separated, and mixed again when required, resulting in a release of energy. Some examples are the decomposition of potassium oxide (over a range of 300–800 °C, with a heat of decomposition of 2.1 MJ/kg), lead oxide (300–350 °C, 0.26 MJ/kg) and calcium hydroxide (above 450 °C, where the reaction rates can be increased by adding zinc or aluminum). The photochemical decomposition of nitrosyl chloride can also be used and, since it needs photons to occur, works especially well when paired with solar energy. Adsorption (or sorption) solar heating and storage Adsorption processes also fall into this category. They can be used not only to store thermal energy but also to control air humidity. Zeolites (microporous crystalline alumina-silicates) and silica gels are well suited for this purpose. 
In hot, humid environments, this technology is often used in combination with lithium chloride to cool water. The low cost ($200/ton) and high cycle rate (2,000×) of synthetic zeolites such as Linde 13X with water adsorbate has garnered much academic and commercial interest recently for use in thermal energy storage (TES), specifically of low-grade solar and waste heat. Several pilot projects have been funded in the EU from 2000 to the present (2020). The basic concept is to store solar thermal energy as chemical latent energy in the zeolite. Typically, hot dry air from flat-plate solar collectors is made to flow through a bed of zeolite such that any water adsorbate present is driven off. Storage can be diurnal, weekly, monthly, or even seasonal depending on the volume of the zeolite and the area of the solar thermal panels. When heat is called for during the night, or sunless hours, or winter, humidified air flows through the zeolite. As the humidity is adsorbed by the zeolite, heat is released to the air and subsequently to the building space. This form of TES, with specific use of zeolites, was first taught by Guerra in 1978. Advantages over molten salts and other high-temperature TES include that (1) the temperature required is only the stagnation temperature typical of a solar flat-plate thermal collector, and (2) as long as the zeolite is kept dry, the energy is stored indefinitely. Because of the low temperature, and because the energy is stored as latent heat of adsorption, eliminating the insulation requirements of a molten-salt storage system, costs are significantly lower. Salt hydrate technology One example of an experimental storage system based on chemical reaction energy is the salt hydrate technology. The system uses the reaction energy created when salts are hydrated or dehydrated. It works by storing heat in a container containing 50% sodium hydroxide (NaOH) solution. Heat (e.g. 
from using a solar collector) is stored by evaporating the water in an endothermic reaction. When water is added again, heat is released in an exothermic reaction at 50 °C (122 °F). Current systems operate at 60% efficiency. The system is especially advantageous for seasonal thermal energy storage, because the dried salt can be stored at room temperature for prolonged times, without energy loss. The containers with the dehydrated salt can even be transported to a different location. The system has a higher energy density than heat stored in water, and the capacity of the system can be designed to store energy from a few months to years. In 2013, the Dutch technology developer TNO presented the results of the MERITS project to store heat in a salt container. The heat, which can be derived from a solar collector on a rooftop, expels the water contained in the salt. When the water is added again, the heat is released, with almost no energy losses. A container with a few cubic meters of salt could store enough of this thermochemical energy to heat a house throughout the winter. In a temperate climate like that of the Netherlands, an average low-energy household requires about 6.7 GJ/winter. To store this energy in water (at a temperature difference of 70 °C), 23 m3 of insulated water storage would be needed, exceeding the storage abilities of most households. Using salt hydrate technology with a storage density of about 1 GJ/m3, 4–8 m3 could be sufficient. As of 2016, researchers in several countries are conducting experiments to determine the best type of salt, or salt mixture. Low pressure within the container seems favorable for energy transport. Especially promising are organic salts, so-called ionic liquids. Compared to lithium halide-based sorbents, they are less problematic in terms of limited global resources, and compared to most other halides and sodium hydroxide (NaOH) they are less corrosive and not negatively affected by CO2 contamination. 
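The sizing arithmetic in the Dutch example above can be reproduced directly: 6.7 GJ of winter demand stored either as sensible heat in water with a 70 °C swing, or as thermochemical energy in a salt hydrate at roughly 1 GJ/m3.

```python
# Seasonal storage sizing for a low-energy household (figures from the text).
demand_j = 6.7e9             # 6.7 GJ per winter
c_water = 4186.0             # J/(kg K), specific heat of water
rho_water = 1000.0           # kg/m3
delta_t = 70.0               # K temperature swing

water_m3 = demand_j / (c_water * delta_t * rho_water)
assert 22.0 < water_m3 < 24.0        # ~23 m3 of hot water, as stated

salt_m3 = demand_j / 1e9             # salt hydrate at ~1 GJ/m3
assert 4.0 <= salt_m3 <= 8.0         # ~6.7 m3, within the quoted 4-8 m3 range
```

The roughly threefold volume reduction is what makes the salt container plausible for a single dwelling where a 23 m3 water tank would not be.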
However, a recent meta-analysis of studies of thermochemical heat storage suggests that salt hydrates offer very low potential for thermochemical heat storage, that absorption processes have prohibitive performance for long-term heat storage, and that thermochemical storage may not be suitable for long-term solar heat storage in buildings. Molecular bonds Storing energy in molecular bonds is being investigated. Energy densities equivalent to lithium-ion batteries have been achieved with a DSPEC (dye-sensitized photoelectrosynthesis cell), a cell that can store energy acquired by solar panels during the day for night-time (or even later) use. Its design takes its cue from well-known natural photosynthesis. The DSPEC generates hydrogen fuel by using the acquired solar energy to split water molecules into their elements. As the result of this split, the hydrogen is isolated and the oxygen is released into the air. This is more difficult than it sounds: four electrons of the water molecules need to be separated and transported elsewhere, and merging the two separate hydrogen molecules is another difficult step. The DSPEC consists of two components: a molecule and a nanoparticle. The molecule, called a chromophore-catalyst assembly, absorbs sunlight and kick-starts the catalyst, which separates the electrons and the water molecules. The nanoparticles are assembled into a thin layer, and a single nanoparticle has many chromophore-catalyst assemblies on it. The function of this thin layer of nanoparticles is to transfer away the electrons that are separated from the water. This thin layer of nanoparticles is coated by a layer of titanium dioxide. With this coating, the freed electrons can be transferred more quickly so that hydrogen can be made. 
This coating is, in turn, covered with a protective coating that strengthens the connection between the chromophore-catalyst assembly and the nanoparticle. Using this method, the solar energy acquired from the solar panels is converted into fuel (hydrogen) without releasing greenhouse gases. This fuel can be stored in a fuel cell and, at a later time, used to generate electricity. Molecular Solar Thermal System (MOST) Another promising way to store solar energy for electricity and heat production is a so-called molecular solar thermal system (MOST). In this approach, a molecule is converted by photoisomerization into a higher-energy isomer. Photoisomerization is a process in which one (cis–trans) isomer is converted into another by light (solar energy). This isomer is capable of storing the solar energy until the energy is released by a heat trigger or catalyst (whereupon the isomer reverts to its original form). A promising candidate for such a MOST is norbornadiene (NBD), because there is a high energy difference between NBD and its quadricyclane (QC) photoisomer, approximately 96 kJ/mol. It is also known that for such systems, donor-acceptor substitutions provide an effective means of red-shifting the longest-wavelength absorption, improving the match to the solar spectrum. A crucial challenge for a useful MOST system is to achieve a satisfactorily high energy storage density (if possible, higher than 300 kJ/kg). Another requirement is that light can be harvested in the visible region. Functionalization of the NBD with donor and acceptor units is used to adjust the absorption maximum. However, this positive effect on solar absorption is offset by a higher molecular weight, which implies a lower energy density. Red-shifting the absorption has a further downside: the energy storage time is lowered. 
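The weight penalty of functionalization can be made concrete: the NBD→QC couple stores about 96 kJ/mol, so the gravimetric energy density falls as substituents raise the molecular weight. The molar mass of unsubstituted norbornadiene (C7H8, ~92.14 g/mol) is supplied here for illustration, not stated in the text, and the heavier derivative below is purely hypothetical.

```python
# Gravimetric energy density of a MOST photoswitch: dH / molar mass.
delta_h = 96.0    # kJ/mol, NBD/QC energy difference (from the text)

def energy_density(molar_mass_g_per_mol):
    """Stored energy per kilogram of photoswitch, in kJ/kg."""
    return delta_h / (molar_mass_g_per_mol / 1000.0)

plain_nbd = energy_density(92.14)   # ~1042 kJ/kg theoretical ceiling for bare NBD
heavy = energy_density(250.0)       # hypothetical donor/acceptor-substituted NBD

assert plain_nbd > heavy            # heavier molecule -> lower energy density
assert heavy > 300                  # still above the 300 kJ/kg target
```

This is the anti-correlation that the dimer/trimer strategy described next tries to escape: sharing one donor among several NBD units lowers the weight added per photoswitch.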
A possible solution to overcome this anti-correlation between energy density and red-shifting is to couple one chromophore unit to several photoswitches, forming so-called dimers or trimers in which the NBD units share a common donor and/or acceptor. Kasper Moth-Poulsen and his team tried to engineer the stability of the high-energy photoisomer by having two electronically coupled photoswitches with separate barriers for thermal conversion. By doing so, a blue shift occurred after the first isomerization (NBD-NBD to QC-NBD), leading to a higher energy of isomerization for the second switching event (QC-NBD to QC-QC). Another advantage of this system, through sharing a donor, is that the molecular weight per norbornadiene unit is reduced, which increases the energy density. Eventually, this system could reach a quantum yield of photoconversion of up to 94% per NBD unit. (A quantum yield is a measure of the efficiency of photon emission.) With this system, the measured energy densities reached up to 559 kJ/kg, exceeding the target of 300 kJ/kg. So the potential of molecular photoswitches is enormous, not only for solar thermal energy storage but for other applications as well. In 2022, researchers reported combining the MOST with a chip-sized thermoelectric generator to generate electricity from it. The system can reportedly store solar energy for up to 18 years and may be an option for renewable energy storage. Thermal Battery A thermal energy battery is a physical structure used for the purpose of storing and releasing thermal energy. Such a thermal battery (a.k.a. TBat) allows energy available at one time to be temporarily stored and then released at another time. The basic principles involved in a thermal battery occur at the atomic level of matter, with energy being added to or taken from either a solid mass or a liquid volume, which causes the substance's temperature to change. 
Some thermal batteries also involve a substance transitioning through a phase change, which stores and releases additional energy via the enthalpy of fusion or the enthalpy of vaporization. Thermal batteries are very common, and include such familiar items as a hot water bottle. Early examples of thermal batteries include stone and mud cook stoves, rocks placed in fires, and kilns. While stoves and kilns are ovens, they are also thermal storage systems that depend on heat being retained for an extended period of time. Thermal energy storage systems can also be installed domestically, with heat batteries and thermal stores being among the most common types of energy storage systems installed in homes in the UK.

Types of thermal batteries

Thermal batteries generally fall into four categories with different forms and applications, although fundamentally all are for the storage and retrieval of thermal energy. They also differ in the method and density of heat storage.

Phase change thermal battery

Phase change materials used for thermal storage can store and release significant thermal capacity at the temperature at which they change phase. These materials are chosen for specific applications because different applications call for different useful temperatures, and different materials change phase at different temperatures. They include salts and waxes specifically engineered for the applications they serve. In addition to manufactured materials, water is a phase change material. The latent heat of fusion of water is 334 joules per gram, and its phase change occurs at 0 °C (32 °F). Some applications use the thermal capacity of water or ice as cold storage; others use it as heat storage. It can serve either application; ice can be melted to store heat and then refrozen to warm an environment.
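The appeal of the 334 J/g figure becomes clear when the latent heat of melting ice is compared with the sensible heat of warming liquid water. A minimal sketch; the 4.18 J/(g·K) specific heat used for comparison is an assumed textbook value, not from the text.

```python
LATENT_HEAT_FUSION_J_PER_G = 334.0    # latent heat of fusion of water (from the text)
SPECIFIC_HEAT_WATER_J_PER_G_K = 4.18  # specific heat of liquid water (assumed value)

mass_g = 1000.0  # one kilogram of water/ice

# Heat absorbed melting 1 kg of ice, all at a constant 0 degrees C:
melt_j = mass_g * LATENT_HEAT_FUSION_J_PER_G

# Heat absorbed warming 1 kg of liquid water by 10 K, for comparison:
warm_10k_j = mass_g * SPECIFIC_HEAT_WATER_J_PER_G_K * 10.0

print(melt_j / 1000, warm_10k_j / 1000)  # ~334 kJ vs ~41.8 kJ

# The phase change alone stores as much heat as warming the same water by
# roughly 80 K, which is why phase-change batteries can be so compact.
equivalent_temperature_swing_k = melt_j / (mass_g * SPECIFIC_HEAT_WATER_J_PER_G_K)
```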
The advantage of using a phase change in this way is that a given mass of material can absorb a large quantity of energy without its temperature changing. Hence a thermal battery that uses a phase change can be made lighter, or more energy can be put into it without raising the internal temperature unacceptably.

Encapsulated thermal battery

An encapsulated thermal battery is physically similar to a phase change thermal battery in that it is a confined amount of material that is heated or cooled to store or extract energy. However, in a non-phase-change encapsulated thermal battery, the temperature of the substance changes without inducing a phase change. Since a phase change is not needed, many more materials are available for use in an encapsulated thermal battery. One of the key properties of an encapsulated thermal battery is its volumetric heat capacity (VHC), also termed volume-specific heat capacity. Several substances are used for these thermal batteries, for example water, concrete, and wet or dry sand. An example of an encapsulated thermal battery is a residential water heater with a storage tank. This thermal battery is usually charged slowly, over a period of about 30–60 minutes, for rapid use when needed (e.g., over 10–15 minutes). Many utilities, recognizing the "thermal battery" nature of water heaters, have begun using them to absorb excess renewable power when available, for later use by the homeowner. According to one analysis, "net savings to the electricity system as a whole could be $200 per year per heater — some of which may be passed on to its owner". Research into using sand as a heat storage medium has been performed in Finland, where a prototype 8 MWh sand battery was built in 2022 to store renewable solar and wind power as heat, for later use as district heating, and possible later power generation.
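The "water heater as thermal battery" point can be quantified with the sensible-heat relation E = m·c·ΔT. The tank size and temperature setpoints below are illustrative assumptions, not figures from the text.

```python
TANK_VOLUME_L = 180.0        # assumed residential tank size (illustrative)
C_WATER_J_PER_KG_K = 4186.0  # specific heat of water (assumed textbook value)
T_COLD_C, T_HOT_C = 15.0, 60.0  # assumed inlet and setpoint temperatures

mass_kg = TANK_VOLUME_L  # ~1 kg per litre of water

# Sensible heat stored across the full temperature swing:
energy_j = mass_kg * C_WATER_J_PER_KG_K * (T_HOT_C - T_COLD_C)
energy_kwh = energy_j / 3.6e6  # 1 kWh = 3.6 MJ

print(f"{energy_kwh:.1f} kWh")  # ~9.4 kWh of shiftable thermal storage
```

On these assumptions a single tank holds roughly 9 kWh, comparable to a sizeable home battery, which is why utilities find water-heater fleets attractive for absorbing surplus renewable generation.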
In Canada, single-building thermal storage likewise stores renewable solar and wind power as heat, for later use as space or water heating for the building in which it is installed. It differs from the system in Finland in being compact, using low-pressure pumped fluids, and heating only one building rather than several. It can also take in waste heat from sources such as computer server rooms or compost heaps and store it for later distribution.

Ground heat exchange thermal battery

A ground heat exchanger (GHEX) is an area of the earth used as a seasonal/annual-cycle thermal battery. These thermal batteries are volumes of earth into which pipes have been placed in order to transfer thermal energy. Energy is added to the GHEX by running a higher-temperature fluid through the pipes, raising the temperature of the local earth. Energy can also be taken from the GHEX by running a lower-temperature fluid through those same pipes. GHEXs are usually implemented in one of two forms. In a "horizontal" GHEX, trenching is used to place a length of pipe in a closed loop in the ground. Alternatively, boreholes are drilled into the ground, either vertically or horizontally, and pipes are inserted as a closed loop with a "u-bend" fitting at the far end. Heat energy can be added to or removed from a GHEX at any time. However, GHEXs are most often used as seasonal thermal energy storage operating on an annual cycle: in summer, heat extracted from a building to cool it is added to the GHEX, and that same energy is later extracted from the GHEX in winter to heat the building. This annual cycle of energy addition and subtraction is highly predictable based on energy modelling of the building served.
A thermal battery used in this mode is a renewable energy source, as the energy extracted in winter is restored to the GHEX the following summer in a continually repeating cycle. It is effectively solar-powered, because it is heat from the summer sun, removed from the building and stored in the ground, that is used for heating in the following winter. Two main methods of thermal response testing are used to characterize the thermal conductivity and thermal capacity/diffusivity of GHEX thermal batteries: the log-time one-dimensional curve fit, and the more recently introduced advanced thermal response testing. A good example of the annual-cycle nature of a GHEX thermal battery can be seen in the ASHRAE building study. Its 'Ground Loop and Ambient Air temperatures by date' graphic (Figure 2–7) shows the sinusoidal annual cycle of ground temperature as heat is extracted from the ground in winter and rejected to it in summer, creating a ground "thermal charge" in one season that is then discharged and driven in the opposite direction in the following season. Other, more advanced ground-based thermal batteries using intentional well-bore thermal patterns are currently in research and early use.

Other thermal batteries

In the defense industry, primary molten-salt batteries are termed "thermal batteries". They are non-rechargeable electrical batteries using a low-melting eutectic mixture of ionic metal salts (sodium, potassium and lithium chlorides, bromides, etc.) as the electrolyte, manufactured with the salts in solid form. As long as the salts remain solid, the battery has a long shelf life of up to 50 years. Once activated (usually by a pyrotechnic heat source) and the electrolyte melts, it is very reliable, with high energy and power density. Such batteries are extensively used in military applications such as small to large guided missiles and nuclear weapons.
Other items have historically been termed "thermal batteries" as well, such as the energy-storage heat packs that skiers use to keep hands and feet warm (see hand warmer). These contain iron powder moistened with oxygen-free salt water, which corrodes rapidly over a period of hours when exposed to air, releasing heat. Instant cold packs absorb heat through a non-combustive physical process, such as the endothermic heat of solution of certain compounds. The common principle of these other thermal batteries is that the reaction involved is not reversible; thus, they are not used for repeatedly storing and retrieving heat energy.

Electric thermal storage

Storage heaters are commonplace in European homes with time-of-use metering (traditionally using cheaper electricity at night). They consist of high-density ceramic bricks or feolite blocks heated to a high temperature with electricity, and may or may not have good insulation and controls to release heat over a number of hours. Some advise against using them in areas with young children or where there is an increased risk of fire due to poor housekeeping, in both cases because of the high temperatures involved. With wind, solar and other renewables providing an ever-increasing share of the energy fed into electricity grids in some countries, the use of larger-scale electric energy storage is being explored by several commercial companies. Ideally, surplus renewable energy is transformed into high-temperature, high-grade heat in highly insulated heat stores, for release later when needed. An emerging technology is the use of vacuum super insulated (VSI) heat stores.
The use of electricity to generate heat, rather than, say, direct heat from solar thermal collectors, means that very high temperatures can be reached, potentially allowing for inter-seasonal heat transfer: high-grade heat from surplus photovoltaic generation in summer can be stored for the following winter with relatively minimal standing losses.

Solar energy storage

Solar energy is an application of thermal energy storage. Most practical solar thermal storage systems provide storage of a few hours' to a day's worth of energy. However, a growing number of facilities use seasonal thermal energy storage (STES), enabling solar energy to be stored in summer for space heating in winter. In 2017 the Drake Landing Solar Community in Alberta, Canada, achieved a year-round 97% solar heating fraction, a world record made possible by incorporating STES. The combined use of latent heat and sensible heat is possible with high-temperature solar thermal input. Various eutectic metal mixtures, such as aluminum and silicon, offer a high melting point suited to efficient steam generation, while high-alumina cement-based materials offer good storage capabilities.

Pumped-heat electricity storage

In pumped-heat electricity storage (PHES), a reversible heat-pump system is used to store energy as a temperature difference between two heat stores.

Isentropic

Isentropic systems involve two insulated containers filled, for example, with crushed rock or gravel: a hot vessel storing thermal energy at high temperature and pressure, and a cold vessel storing thermal energy at low temperature and pressure. The vessels are connected at top and bottom by pipes, and the whole system is filled with an inert gas such as argon. While charging, the system uses off-peak electricity to work as a heat pump. In one prototype, argon at ambient temperature and pressure from the top of the cold store is compressed adiabatically to a pressure of, for example, 12 bar, which heats it substantially.
The compressed gas is transferred to the top of the hot vessel where it percolates down through the gravel, transferring heat to the rock and cooling to ambient temperature. The cooled, but still pressurized, gas emerging at the bottom of the vessel is then adiabatically expanded to 1 bar, which lowers its temperature to −150 °C. The cold gas is then passed up through the cold vessel where it cools the rock while warming to its initial condition. The energy is recovered as electricity by reversing the cycle. The hot gas from the hot vessel is expanded to drive a generator and then supplied to the cold store. The cooled gas retrieved from the bottom of the cold store is compressed which heats the gas to ambient temperature. The gas is then transferred to the bottom of the hot vessel to be reheated. The compression and expansion processes are provided by a specially designed reciprocating machine using sliding valves. Surplus heat generated by inefficiencies in the process is shed to the environment through heat exchangers during the discharging cycle. The developer claimed that a round trip efficiency of 72–80% was achievable. This compares to >80% achievable with pumped hydro energy storage. Another proposed system uses turbomachinery and is capable of operating at much higher power levels. Use of phase change material as heat storage material could enhance performance.
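The temperatures in this cycle can be sketched with the ideal-gas isentropic relation T2 = T1·(p2/p1)^((γ−1)/γ); for monatomic argon, γ = 5/3. This is an idealized, fully reversible sketch with an assumed 300 K ambient, so it overshoots a real machine's figures (the text quotes −150 °C after expansion; the ideal calculation lands somewhat colder).

```python
GAMMA_ARGON = 5.0 / 3.0                  # monatomic ideal gas
EXP = (GAMMA_ARGON - 1.0) / GAMMA_ARGON  # (gamma - 1) / gamma = 0.4

T_ambient_k = 300.0  # assumed ambient temperature (not from the text)
ratio = 12.0         # compression from 1 bar to 12 bar, as in the text

# Idealized isentropic compression (charging: cold store -> hot store):
T_hot_k = T_ambient_k * ratio ** EXP

# Idealized isentropic expansion of ambient-temperature gas back to 1 bar:
T_cold_k = T_ambient_k / ratio ** EXP

print(f"{T_hot_k - 273.15:.0f} C after compression, "
      f"{T_cold_k - 273.15:.0f} C after expansion")
```

The large spread between the hot and cold stores is what the discharge cycle converts back into shaft work; irreversibilities in the real reciprocating machinery are what pull the claimed round-trip efficiency down to the quoted 72–80%.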
Hosoya index
The Hosoya index, also known as the Z index, of a graph is the total number of matchings in it. The Hosoya index is always at least one, because the empty set of edges is counted as a matching for this purpose. Equivalently, the Hosoya index is the number of non-empty matchings plus one. The index is named after Haruo Hosoya. It is used as a topological index in chemical graph theory. Complete graphs have the largest Hosoya index for any given number of vertices; their Hosoya indices are the telephone numbers.

History

This graph invariant was introduced by Haruo Hosoya in 1971. It is often used in chemoinformatics for investigations of organic compounds. In his article "The Topological Index Z Before and After 1971," on the history of the notion and the associated inside stories, Hosoya writes that he introduced the Z index to report a good correlation between the boiling points of alkane isomers and their Z indices, based on his unpublished 1957 work carried out while he was an undergraduate student at the University of Tokyo.

Example

A linear alkane, for the purposes of the Hosoya index, may be represented as a path graph without any branching. A path with one vertex and no edges (corresponding to the methane molecule) has one (empty) matching, so its Hosoya index is one; a path with one edge (ethane) has two matchings (one with zero edges and one with one edge), so its Hosoya index is two. Propane (a length-two path) has three matchings: either of its edges, or the empty matching. n-Butane (a length-three path) has five matchings, distinguishing it from isobutane, which has four. More generally, a matching in a path with k edges either forms a matching in the first k − 1 edges, or it forms a matching in the first k − 2 edges together with the final edge of the path. This case analysis shows that the Hosoya indices of linear alkanes obey the recurrence governing the Fibonacci numbers, and because they also have the same base case they must equal the Fibonacci numbers.
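The recurrence described above can be checked directly. A minimal sketch (the function name is ours) computing the Hosoya index of a path with a given number of edges:

```python
def hosoya_path(n_edges: int) -> int:
    """Hosoya index of a path graph with n_edges edges (a linear alkane)."""
    # A matching either omits the final edge (leaving a path with one fewer
    # edge) or uses it, forcing the previous edge out (leaving a path with
    # two fewer edges) -- the Fibonacci recurrence from the text.
    z_prev, z_curr = 1, 1  # seeds: the recurrence's two base values
    for _ in range(n_edges):
        z_prev, z_curr = z_curr, z_prev + z_curr
    return z_curr

# Methane, ethane, propane, n-butane from the text:
print([hosoya_path(n) for n in range(4)])  # [1, 2, 3, 5]
```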
The structure of the matchings in these graphs may be visualized using a Fibonacci cube.

The largest possible value of the Hosoya index on a graph with n vertices is attained by the complete graph K_n. The Hosoya indices of the complete graphs are the telephone numbers 1, 1, 2, 4, 10, 26, 76, 232, ... These numbers can be expressed by a summation formula involving factorials:

T(n) = Σ_{k=0}^{⌊n/2⌋} n! / (2^k · k! · (n − 2k)!)

Every graph that is not complete has a smaller Hosoya index than this upper bound.

Algorithms

The Hosoya index is #P-complete to compute, even for planar graphs. However, it may be calculated by evaluating the matching polynomial m_G at the argument 1. Based on this evaluation, the calculation of the Hosoya index is fixed-parameter tractable for graphs of bounded treewidth, and polynomial (with an exponent that depends linearly on the width) for graphs of bounded clique-width. The Hosoya index can be efficiently approximated to any desired constant approximation ratio using a fully polynomial randomized approximation scheme.
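The telephone numbers and their factorial summation formula can be verified against each other; both functions below are illustrative sketches (the names are ours):

```python
from math import factorial

def telephone(n: int) -> int:
    """Hosoya index of the complete graph K_n, via the standard recurrence."""
    # T(n) = T(n-1) + (n-1)*T(n-2): vertex n is either unmatched, or
    # matched with any one of the other n-1 vertices.
    t_prev, t_curr = 1, 1  # T(0), T(1)
    for k in range(2, n + 1):
        t_prev, t_curr = t_curr, t_curr + (k - 1) * t_prev
    return t_curr

def telephone_by_formula(n: int) -> int:
    """Same numbers via the summation over matchings with k edges."""
    return sum(factorial(n) // (2**k * factorial(k) * factorial(n - 2 * k))
               for k in range(n // 2 + 1))

print([telephone(n) for n in range(8)])  # [1, 1, 2, 4, 10, 26, 76, 232]
```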
Blue eared pheasant
The blue eared pheasant (Crossoptilon auritum) is a large pheasant endemic to China. Although it is considered rare, the blue eared pheasant is evaluated as of least concern on the IUCN Red List of Threatened Species.

Description

The blue eared pheasant has dark blue-gray plumage with a velvet-black crown, red facial feathers appearing as bare skin, a yellow iris, long white ear coverts behind the eyes, and crimson legs. Its tail of 24 elongated bluish-gray feathers is curved, loose, and dark-tipped. Both sexes are similar, with the male being slightly larger. They grow up to long.

Distribution

The blue eared pheasant is found throughout mountain forests of central China.

Ecology

Its diet consists mainly of berries and vegetable matter.
Indian flying fox
The Indian flying fox (Pteropus medius), also known as the greater Indian fruit bat, is a species of flying fox native to the Indian subcontinent. It is one of the largest bats in the world. It is of interest as a disease vector, as it is capable of transmitting several viruses to humans. It is nocturnal and feeds mainly on ripe fruits, such as mangoes and bananas, and nectar. This species is often regarded as vermin due to its destructive tendencies towards fruit farms, but the benefits of its pollination and seed propagation often outweigh the impacts of its fruit consumption.

Taxonomy and phylogeny

The Indian flying fox was described as a new species by Dutch zoologist and museum curator Coenraad Jacob Temminck in 1825, who gave it the scientific name Pteropus medius. Confusion over the name has prevailed in the literature, as in 1782 Danish zoologist Morten Thrane Brünnich gave the scientific name Vespertilio gigantea as a replacement for Vespertilio vampyrus Linnaeus (1758: 31). He was specifically referring to Linnaeus's use of Vespertilio vampyrus. Carl Linnaeus had previously classified the species as Pteropus vampyrus, and as such gigantea could not be used for a species that was already named. In 1992 Corbett and Hill suggested giganteus was a subspecies of vampyrus. In 2012, Mlíkovský argued that the correct scientific name of the Indian flying fox should be Pteropus medius rather than P. giganteus. He asserted that Brünnich coined a new name for a species that had already been described, Vespertilio vampyrus, which is now Pteropus vampyrus (the large flying fox). Mlíkovský made several points in his argument, all with a foundation in the nomenclature rule known as the Principle of Priority. The Principle of Priority posits that the first formal, published scientific name given to a species shall be the name that is used.
Because Brünnich was attempting to rename the large flying fox in his 1782 publication, his name should not apply to either the large or the Indian flying fox: an older name was in existence, and therefore the large flying fox is P. vampyrus, not P. giganteus. In negating Brünnich's name, Mlíkovský states that the oldest applicable name used to describe the Indian flying fox comes from Coenraad Jacob Temminck's publication in 1825. Mlíkovský's recommendation has been met with varying degrees of acceptance. Some authors who have published on the Indian flying fox since 2012 have accepted this taxonomic revision, using the name Pteropus medius. Other taxonomic authorities, however, such as the Integrated Taxonomic Information System, still recognize Pteropus giganteus as the valid name of the Indian flying fox. It is most closely related to the grey-headed flying fox, P. poliocephalus. As the genus Pteropus is divided into closely related species groups, the Indian flying fox is placed in the vampyrus species group, which also includes the Bonin, Ryukyu, little golden-mantled, Rodrigues, large, Lyle's, Aldabra, Madagascan, Seychelles, and Mauritian flying foxes. There are currently three recognized subspecies of the Indian flying fox: P. m. ariel G. M. Allen, 1908, P. m. medius Temminck, 1825, and P. m. leucocephalus Hodgson, 1835.

Description

The Indian flying fox is India's largest bat, and one of the largest bats in the world, weighing up to . Its body mass ranges from , and males are generally larger than females. The wingspan ranges from and body length averages . The wings rise from the side of the dorsum and from the back of the second toe. It has claws on only the first two digits of its wings, with the thumb possessing the more powerful claw, and on all five digits of its leg. It lacks a tail.
The Indian flying fox ranges in color, with a black back that is lightly streaked with grey, a pale, yellow-brown mantle, a brown head, and dark, brownish underparts. It has large eyes, simple ears, and no facial ornamentation, a typical appearance for a species of the genus Pteropus. The skull is oval-shaped and the greatest length of the skull is . The orbital rim of the skull is incomplete. The ears lack a tragus or antitragus and are ringed; they range from in length. The dental formula is . The first upper premolar is absent, the canine is pronounced, and the molars have a longitudinal furrow. As of 1999, the longest-lived member of its genus on record had lived for 31 years and 5 months in captivity.

Distribution and habitat

The Indian flying fox is found across the Indian Subcontinent, including in Bangladesh, Bhutan, India, Tibet, the Maldives, Myanmar, Nepal, Pakistan and Sri Lanka. It roosts in large, established colonies on open tree branches, especially in urban areas or in temples. It prefers to roost on tall trees with small diameters, especially canopy trees, and prefers to be in close proximity to bodies of water, human residences, and agricultural land. This habitat selection is highly dependent on food availability. For example, many residences within the bat's distribution have outdoor gardens that support its generalist frugivorous feeding habits. This tendency also leads it to commonly roost in highly fragmented forests, where the variety of plant species allows it to better utilize its feeding habits. Its populations are constantly threatened by habitat destruction caused by urbanization or the widening of roads. Tree roosts are often felled and colonies dispersed. Smaller colonies tend to remain in place longer than larger colonies, as larger colonies have their roosts felled more quickly.
Behavior and ecology

The Indian flying fox roosts communally in the treetops of large trees, in camps often containing thousands of bats. Roosts tend to be used for upwards of ten years, and are usually inhabited year-round rather than seasonally. Within the roost the bats quarrel and chatter often; during sunny hours of the day they fan their wings and call, and during cloudy periods they are silent and wrap their wings around their bodies. Occasionally a few bats fly around the roost during the day, but most activity is restricted to night, when they leave the roost one by one, 20–30 minutes after sunset. The bats at the top of the roost tend to circle the roost and leave before the rest of the colony emerges. The time of emergence was significantly influenced by day length, while later sunsets and higher ambient temperatures delayed it. The bats fly with the appearance of a large swarm but forage individually, and give off contact calls infrequently. Individuals travel upwards of in search of food, finding it by sight. It can quickly travel up and down tree branches to forage for fruit with a swift hand-over-hand motion.

Diet

The Indian flying fox is frugivorous or nectarivorous: it eats fruits and blossoms, and it drinks nectar from flowers. At dusk, it forages for ripe fruit. It is a primarily generalist feeder, and eats any available fruits. Seeds from ingested fruits are scarified in its digestive tract and dispersed through its waste. It is relied on for seed propagation by 300 plant species of nearly 200 genera, from which approximately 500 economically valuable products are produced in India. Nearly 70% of the seeds in Indian flying fox guano are of the banyan tree, a keystone species in Indian ecosystems. Although initially thought to be strictly frugivorous, it has been observed deliberately eating insects and leaves. The Indian flying fox also eats flowers, seed pods, bark, cones, and twigs.
Its diet changes seasonally, with a greater reliance on mango fruits for moisture in the autumn and spring. A species of ebony tree (the pale moon ebony tree) provides dietary fiber year-round. Yellow box eucalyptus and Chinese pistache provide necessary carbohydrates, fats, iron, and phosphorus in the winter.

Reproduction

The Indian flying fox is a polygynandrous species, and breeds yearly from July to October. Births occur from February to May. The gestation period is typically 140 to 150 days. The average birth number is 1 to 2 pups. Among members of the genus Pteropus, pups are carried by the mother for the first few weeks of life, with weaning occurring around 5 months of age. Males do not participate in parental care. Young bats learn to fly at approximately 11 weeks of age. Reproductive maturity occurs at 18–24 months. It has a typical mammalian annual breeding season. Its testes increase in weight as days grow shorter, and are heaviest in October and November. Their weight quickly decreases after ova are fertilized. Sperm are abundant in the epididymides throughout the year. Conception occurs just before daylength begins to increase in December. The young are born in May, as pregnancy lasts roughly six months. Copulation rates tend to increase as days grow shorter. To initiate copulation, males capture the attention of females with continuous flapping of their wings (probably to spread the odor from male scent glands), though this usually only serves to encourage females to escape. Copulation tends to occur with females that fail to escape after courtship. Males chase females persistently for up to half an hour until they successfully corner the female. Females attempt to protect themselves and escape during copulation, and call constantly. Copulation ranges from 30 to 70 seconds on average. Males do not release females until copulation ends, and afterwards both bats remain silent until the end of the day.
Both before and after copulation, Indian flying foxes engage in oral sex, with males performing cunnilingus on females. The duration of oral sex is positively associated with the duration of copulation, suggesting that its purpose is to make the female more receptive to copulation and to increase the male's chances of fertilizing an ovum. Males of this species also engage in homosexual fellatio, the function and purpose of which is not yet confirmed.

Relationship to people

Disease transmission

Like other fruit bats, the Indian flying fox may be a natural reservoir for diseases including certain henipaviruses and flaviviruses. These can prove fatal to humans and domestic animals. Indian flying foxes in India and Bangladesh have tested positive for Nipah virus, a type of henipavirus. Due to human encroachment into their habitats, there is a high risk of spillover infection of Nipah virus from Indian flying foxes to humans. While Nipah virus outbreaks are more likely in areas preferred by Indian flying foxes, researchers note that "the presence of bats in and of itself is not considered a risk factor for Nipah virus infection." Rather, the consumption of date palm sap is a significant route of transmission. The practice of date palm sap collection involves placing collecting pots at date palm trees. Indian flying foxes have been observed licking the sap as it flows into the pots, as well as defecating and urinating in proximity to the pots. In this way, humans who drink the palm sap can be exposed to the bats' viruses. The use of bamboo skirts on collecting pots lowers the risk of contamination from bat fluids. While Indian flying foxes have also tested positive for GBV-D, a type of flavivirus, it is unclear whether this virus occurs in humans or if it could be transmitted by Indian flying foxes.

As pests

To some, the Indian flying fox is vermin because they believe that it "poaches" ripe fruit from orchards.
A study in India found that of all orchard crops, Indian flying foxes did the most damage to mango and guava crops. However, an estimated 60% of fruits damaged by the flying foxes were ripe or overripe; overripe fruits are about half as valuable as ripe fruits. In the Maldives, Indian flying foxes are considered major pests of almond, guava, and mango trees. Indian flying foxes in the Maldives have been culled to protect orchards; some managers advocated reducing their population by 75% every three to four years for optimum control. Alternatives to culling include placing barriers between the bats and fruit trees, such as netting, or harvesting fruit in a timely manner to avoid attracting as many flying foxes. Preventing fruit loss may also involve the use of scare guns, chemical deterrents, or night-time lights. Alternatively, planting Singapore cherry trees next to an orchard can be effective, as flying foxes are much more attracted to their fruits than to many other orchard crops.

As food and medicine

In Pakistan, its populations have declined. This has been partly attributed to the belief that its fat is a treatment for rheumatism. Tribes in the Attappadi region of India eat the cooked flesh of the Indian flying fox to treat asthma and chest pain. Healers of the Kanda Tribe of Bangladesh use hair from Indian flying foxes to create treatments for "fever with shivering". In Pakistan and India, Indian flying foxes are more likely to be killed for medicine than for bushmeat, even though there is no scientific evidence for such medicinal uses. The meat is, however, consumed by indigenous tribes of India. Its meat is traded locally in small markets, and is not considered of significant economic importance. Consumption is also reported in three provinces of South China and one province in Southwest China. Hunting may nonetheless be sustainable, with some researchers positing that habitat loss and roost disturbance are much more damaging to its populations.
In culture

Despite the Indian government classifying bats as vermin in the Indian Wildlife Protection Act, the Indian flying fox is sacred in India. In the Puliangulam village in India, a banyan tree in the middle of local agricultural fields is home to a colony of 500 Indian flying foxes. The bats are protected by the local spirit "Muniyandi", and the villagers make offerings of bananas and rice to the spirit and the bats.
Warthog
Phacochoerus is a genus in the family Suidae, commonly known as warthogs (pronounced wart-hog). They are pigs that live in open and semi-open habitats, even in quite arid regions, in sub-Saharan Africa. The two species were formerly considered conspecific under the scientific name Phacochoerus aethiopicus, but today this name is limited to the desert warthog, while the best-known and most widespread species, the common warthog (or simply warthog), is Phacochoerus africanus.

Description

Although covered in bristly hairs, a warthog's body and head appear largely bare from a distance, with only the crest of hair along the back and the tufts on the face and tail being obvious. The English name "wart-hog" refers to their facial wattles, which are particularly distinct in males. The males also have very prominent tusks, which reach a length of ; females' tusks are always smaller. Warthogs are largely herbivorous but, like most suids, opportunistically eat invertebrates or small animals, even scavenging on carrion. While both species remain fairly common and widespread, and are considered to be of Least Concern by the IUCN, the nominate subspecies of the desert warthog, commonly known as the Cape warthog (P. a. aethiopicus), was extinct by around 1865.

Species in taxonomic order

The genus Phacochoerus contains two species, which diverged across ecological barriers. P. africanus was found to lack upper incisors, while P. aethiopicus was found to have a full set.
Biology and health sciences
Pigs_2
Animals
7901434
https://en.wikipedia.org/wiki/Majoidea
Majoidea
The Majoidea are a superfamily of crabs which includes the various spider crabs. Taxonomy In "A classification of living and fossil genera of decapod crustaceans", De Grave and colleagues divided Majoidea into six families: Family Epialtidae Subfamily Epialtinae Subfamily Pisinae Subfamily Pliosomatinae Subfamily Tychiinae Family Hymenosomatidae Family Inachidae Family Inachoididae Family Majidae Subfamily Eurynolambrinae Subfamily Majinae Subfamily Micromaiinae Subfamily Mithracinae Subfamily Planoterginae Family Oregoniidae The classification has since been revised, with the subfamilies Epialtinae and Mithracinae being elevated to families and Hymenosomatidae being moved to its own superfamily. The family composition according to the World Register of Marine Species is as follows: family Epialtidae MacLeay, 1838 family Inachidae MacLeay, 1838 family Inachoididae Dana, 1851 family Macrocheiridae Dana, 1851 family Majidae Samouelle, 1819 – "true" spider crabs family Mithracidae Balss, 1929 family Oregoniidae Garth, 1958 family Priscinachidae Breton, 2009 Notable species within the superfamily include: Japanese spider crab (Macrocheira kaempferi), the largest living species of crab, found on the bottom of the Pacific Ocean. Libinia emarginata, the portly spider crab, a species of crab found in estuarine habitats on the east coast of North America. Hyas, a genus of spider crabs, including the great spider crab (Hyas araneus), found in the Atlantic and the North Sea. Maja squinado, sometimes called the "European long leg crab" or "pie faced crab" because of the way its face is shaped. Australian majid spider crabs, found off Tasmania, known to pile up on each other, the faster-moving crabs clambering over the smaller, slower ones. There is one fossil family, Priscinachidae, represented by a single species, Priscinachus elongatus, from the Cenomanian of France.
Biology and health sciences
Crabs and hermit crabs
Animals
7901784
https://en.wikipedia.org/wiki/Andromeda%E2%80%93Milky%20Way%20collision
Andromeda–Milky Way collision
The Andromeda–Milky Way collision is a galactic collision predicted to occur in about 4.5 billion years between the two largest galaxies in the Local Group—the Milky Way (which contains the Solar System and Earth) and the Andromeda Galaxy. The stars involved are sufficiently far apart that it is improbable that any of them will individually collide, though some stars will be ejected. Certainty The Andromeda Galaxy is approaching the Milky Way at about as indicated by blueshift. However, the lateral speed (measured as proper motion) is very difficult to measure with sufficient precision to draw reasonable conclusions. Until 2012, it was not known whether the possible collision was definitely going to happen or not. Researchers then used the Hubble Space Telescope to measure the positions of stars in Andromeda in 2002 and 2010, relative to hundreds of distant background galaxies. By averaging over thousands of stars, they were able to obtain the average proper motion with sub-pixel accuracy. The conclusion was that Andromeda is moving southeast in the sky at less than 0.1 milliarc-seconds per year, corresponding to a speed relative to the Sun of less than 200 km/s towards the south and towards the east. Taking also into account the Sun's motion, Andromeda's tangential or sideways velocity with respect to the Milky Way was found to be much smaller than the speed of approach (consistent with zero given the uncertainty) and therefore it will eventually merge with the Milky Way in around five billion years. Such collisions are relatively common, considering galaxies' long lifespans. Andromeda, for example, is believed to have collided with at least one other galaxy in the past, and several dwarf galaxies such as Sgr dSph are currently colliding with the Milky Way and being merged into it. The studies also suggest that M33, the Triangulum Galaxy—the third-largest and third-brightest galaxy of the Local Group—will participate in the collision event, too. 
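The averaging trick described above can be illustrated with a toy simulation. This is not the actual Hubble analysis; the signal, noise level, and star count below are invented for illustration. It shows only the statistical point: averaging N noisy per-star positions shrinks the error roughly as 1/sqrt(N), which is how a sub-pixel shift becomes measurable.

```python
import random

# Toy illustration (values hypothetical, not from the HST study):
# a common shift of 0.02 px hidden under 0.1 px per-star noise.
random.seed(42)

true_shift_px = 0.02        # hypothetical common proper-motion signal
per_star_noise_px = 0.1     # hypothetical single-star centroiding error
n_stars = 10_000

measurements = [true_shift_px + random.gauss(0.0, per_star_noise_px)
                for _ in range(n_stars)]
mean_shift = sum(measurements) / n_stars

# The mean recovers a signal five times smaller than the per-star noise.
print(f"recovered shift: {mean_shift:.4f} px")
```

With 10,000 stars the standard error of the mean is about 0.001 px, two orders of magnitude below the single-star noise, which is why averaging over thousands of stars yields sub-pixel accuracy.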
Its most likely fate is to end up orbiting the merger remnant of the Milky Way and Andromeda galaxies and finally to merge with it in an even more distant future. However, a collision with the Milky Way, before it collides with the Andromeda Galaxy, or an ejection from the Local Group cannot be ruled out. Stellar collisions While the Andromeda Galaxy contains about 1 trillion (10¹²) stars and the Milky Way contains about 300 billion (3×10¹¹), the chance of even two stars colliding is negligible because of the huge distances between the stars. For example, the nearest star to the Earth after the Sun is Proxima Centauri, about or 30 million (3×10⁷) solar diameters away. To visualize that scale, if the Sun were a ping-pong ball, Proxima Centauri would be a pea about away, and the Milky Way would be about wide. Although stars are more common near the centers of each galaxy, the average distance between stars is still 160 billion (1.6×10¹¹) km (100 billion mi). That is analogous to one ping-pong ball every . Thus, it is extremely unlikely that any two stars from the merging galaxies would collide. Black hole collisions The Milky Way and Andromeda galaxies each contain a central supermassive black hole (SMBH), these being Sagittarius A* (c. ) and an object within the P2 concentration of Andromeda's nucleus (). These black holes will converge near the centre of the newly formed galaxy over a period that may take millions of years, due to a process known as dynamical friction: as the SMBHs move relative to the surrounding cloud of much less massive stars, gravitational interactions lead to a net transfer of orbital energy from the SMBHs to the stars, causing the stars to be "slingshotted" into higher-radius orbits, and the SMBHs to "sink" toward the galactic core. When the SMBHs come within one light-year of one another, they will begin to strongly emit gravitational waves that will radiate further orbital energy until they merge completely.
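The scale analogy in the stellar-collisions paragraph can be checked with quick arithmetic. The round figures below (Sun diameter ~1.39 million km, Proxima Centauri at ~4.25 light-years, a 40 mm ping-pong ball) are standard approximate values assumed for the sketch, not quoted from the article.

```python
# Rough scale-model arithmetic behind the ping-pong-ball analogy.
# Assumed round values: Sun diameter ~1.39e9 m, Proxima Centauri
# distance ~4.02e16 m (~4.25 light-years), ping-pong ball 40 mm.
SUN_DIAMETER_M = 1.39e9
PROXIMA_DISTANCE_M = 4.02e16
PING_PONG_BALL_M = 0.04

# Separation expressed in solar diameters: roughly 30 million.
separation_diameters = PROXIMA_DISTANCE_M / SUN_DIAMETER_M

# Shrink the Sun to ping-pong-ball size and scale the distance identically.
scale = PING_PONG_BALL_M / SUN_DIAMETER_M
scaled_distance_km = PROXIMA_DISTANCE_M * scale / 1000.0

print(f"{separation_diameters:.2e} solar diameters")
print(f"scaled distance: {scaled_distance_km:.0f} km")
```

The scaled separation comes out on the order of a thousand kilometres, which is why the chance of two individual stars colliding during the galactic merger is negligible.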
Gas taken up by the combined black hole could create a luminous quasar or an active galactic nucleus, releasing as much energy as 100 million supernova explosions. As of 2006, simulations indicated that the Sun might be brought near the centre of the combined galaxy, potentially coming near one of the black holes before being ejected entirely out of the galaxy. Alternatively, the Sun might approach one of the black holes a bit closer and be torn apart by its gravity. Parts of the former Sun would be pulled into the black hole. Fate of the Solar System Based on data available in 2007, two scientists with the Harvard–Smithsonian Center for Astrophysics predicted a 50% chance that in a merged galaxy, the Solar System will be swept out three times farther from the galactic core than its current distance. They also predicted a 12% chance that the Solar System will be ejected from the new galaxy sometime during the collision. Such an event would likely have no adverse effect on the system, and the chances of any sort of disturbance to the Sun or planets themselves are remote. Excluding planetary engineering, by the time the two galaxies collide, the surface of the Earth will have already become far too hot for liquid water to exist, ending all terrestrial life; that is currently estimated to occur in about 0.5 to 1.5 billion years due to the gradually increasing luminosity of the Sun; by the time of the collision, the Sun's luminosity will have risen by 35–40%, likely initiating a runaway greenhouse effect on the planet by this time. Possible triggered stellar events When two spiral galaxies collide, the hydrogen present in their disks is compressed, producing strong star formation, as can be seen in interacting systems like the Antennae Galaxies. In the case of the Andromeda–Milky Way collision, it is believed that there will be little gas remaining in the disks of both galaxies, so the starburst will be relatively weak, though it still may be enough to form a quasar.
Merger remnant The galaxy resulting from the collision has been nicknamed Milkomeda or Milkdromeda. According to simulations, this object is likely to be a giant elliptical galaxy, but with a centre showing less stellar density than current elliptical galaxies. It is, however, possible the resulting object will be a large lenticular or super spiral galaxy, depending on the amount of gas remaining in the Milky Way and Andromeda. Over the course of the next 150 billion years, the remaining galaxies of the Local Group will coalesce into this object, effectively completing its evolution.
Physical sciences
Basics_2
Astronomy
722870
https://en.wikipedia.org/wiki/X%20unit
X unit
For the software testing tools, see xUnit. The x unit (symbol xu) is a unit of length approximately equal to 0.1 pm (10−13 m). It is used to quote the wavelength of X-rays and gamma rays. Originally defined by the Swedish physicist Manne Siegbahn (1886–1978) in 1925, the x unit could not at that time be measured directly; the definition was instead made in terms of the spacing between planes of the calcite crystals used in the measuring apparatus. One x unit was set at of the spacing of the (200) planes of calcite at 18 °C. In modern usage, there are two separate x units, which are defined in terms of the wavelengths of the two most commonly used X-ray lines in X-ray crystallography: the copper x unit (symbol xu(Cu Kα1)) is defined so that the wavelength of the Kα1 line of copper is exactly 1537.400 xu(Cu Kα1); the molybdenum x unit (symbol xu(Mo Kα1)) is defined so that the wavelength of the Kα1 line of molybdenum is exactly 707.831 xu(Mo Kα1). The 2006 CODATA recommended values for these units are: 1 xu(Cu Kα1) = , 1 xu(Mo Kα1) = .
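A minimal sketch of converting between x units and metres, using the two definitions above. The CODATA 2006 conversion factors in the code are quoted from memory (~1.0021×10⁻¹³ m each) and should be checked against an authoritative table; treat them as approximate assumptions.

```python
# x-unit <-> metre conversion sketch. Conversion factors below are the
# approximate CODATA 2006 values (uncertainties omitted; verify before use).
XU_CU_KA1_M = 1.00207699e-13   # metres per xu(Cu Ka1), assumed value
XU_MO_KA1_M = 1.00209955e-13   # metres per xu(Mo Ka1), assumed value

def cu_xu_to_m(wavelength_xu: float) -> float:
    """Convert a wavelength in copper x units to metres."""
    return wavelength_xu * XU_CU_KA1_M

# By definition, the Cu Ka1 line is exactly 1537.400 xu(Cu Ka1),
# which comes out near 154.06 pm:
cu_ka1 = cu_xu_to_m(1537.400)
print(f"Cu Ka1: {cu_ka1:.6e} m")
```

This also confirms the opening statement that one x unit is approximately 0.1 pm (10⁻¹³ m).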
Physical sciences
Other
Basics and measurement
723059
https://en.wikipedia.org/wiki/Soviet%20space%20program
Soviet space program
The Soviet space program () was the state space program of the Soviet Union, active from 1951 until the dissolution of the Soviet Union in 1991. Contrary to its American, European, and Chinese competitors, which had their programs run under single coordinating agencies, the Soviet space program was divided between several internally competing design bureaus led by Korolev, Kerimov, Keldysh, Yangel, Glushko, Chelomey, Makeyev, Chertok and Reshetnev. Several of these bureaus were subordinated to the Ministry of General Machine-Building. The Soviet space program served as an important marker of claims by the Soviet Union to its superpower status. Soviet investigations into rocketry began with the formation of the Gas Dynamics Laboratory in 1921, and these endeavors expanded during the 1930s and 1940s. In the years following World War II, both the Soviet and United States space programs utilised German technology in their early efforts at space programs. In the 1950s, the Soviet program was formalized under the management of Sergei Korolev, who led the program based on unique concepts derived from Konstantin Tsiolkovsky, sometimes known as the father of theoretical astronautics. Competing in the Space Race with the United States and later with the European Union and with China, the Soviet space program was notable in setting many records in space exploration, including the first intercontinental missile (R-7 Semyorka) that launched the first satellite (Sputnik 1) and sent the first animal (Laika) into Earth orbit in 1957, and placed the first human in space in 1961, Yuri Gagarin. In addition, the Soviet program also saw the first woman in space, Valentina Tereshkova, in 1963 and the first spacewalk in 1965. Other milestones included computerized robotic missions exploring the Moon starting in 1959: being the first to reach the surface of the Moon, recording the first image of the far side of the Moon, and achieving the first soft landing on the Moon. 
The Soviet program also achieved the first space rover deployment with the Lunokhod programme in 1970, and sent the first robotic probe that automatically extracted a sample of lunar soil and brought it to Earth in 1970, Luna 16. The Soviet program was also responsible for leading the first interplanetary probes to Venus and Mars and made successful soft landings on these planets in the 1960s and 1970s. It put the first space station, Salyut 1, into low Earth orbit in 1971, and the first modular space station, Mir, in 1986. Its Interkosmos program was also notable for sending the first citizen of a country other than the United States or Soviet Union into space. The primary spaceport, Baikonur Cosmodrome, is now in Kazakhstan, which leases the facility to Russia. Origins Early Russian-Soviet efforts The theory of space exploration had a solid basis in the Russian Empire before the First World War with the writings of the Russian and Soviet rocket scientist Konstantin Tsiolkovsky (1857–1935), who published pioneering papers in the late 19th and early 20th centuries on astronautic theory, including calculating the rocket equation, and in 1929 introduced the concept of the multistage rocket. Additional astronautic and spaceflight theory was also provided by the Ukrainian and Soviet engineer and mathematician Yuri Kondratyuk, who developed the first known lunar orbit rendezvous (LOR), a key concept for landing and return spaceflight from Earth to the Moon. LOR was later used for the plotting of the first actual human spaceflight to the Moon. Many other aspects of spaceflight and space exploration are covered in his works.
Both theoretical and practical aspects of spaceflight were also provided by the Latvian pioneer of rocketry and spaceflight Friedrich Zander, including the suggestion in a 1925 paper that a spacecraft traveling between two planets could be accelerated at the beginning of its trajectory and decelerated at the end of its trajectory by using the gravity of the two planets' moons – a method known as gravity assist. Gas Dynamics Laboratory (GDL) The first Soviet development of rockets came in 1921, when the Soviet military sanctioned the commencement of a small research laboratory to explore solid fuel rockets, led by Nikolai Tikhomirov, a chemical engineer, and supported by Vladimir Artemyev, a Soviet engineer. Tikhomirov had commenced studying solid- and liquid-fueled rockets in 1894, and in 1915 he lodged a patent for "self-propelled aerial and water-surface mines." In 1928 the laboratory was renamed the Gas Dynamics Laboratory (GDL). The first test-firing of a solid fuel rocket was carried out in March 1928; it flew for about 1,300 meters. Further developments in the early 1930s were led by Georgy Langemak, and in 1932 in-air test firings of RS-82 missiles from a Tupolev I-4 aircraft armed with six launchers successfully took place. Sergey Korolev A key contribution to early Soviet efforts came from a young Russian aircraft engineer, Sergey Korolev, who would later become the de facto head of the Soviet space programme. In 1926, as an advanced student, Korolev was mentored by the famous Soviet aircraft designer Andrey Tupolev, who was a professor at his university. In 1930, while working as a lead engineer on the Tupolev TB-3 heavy bomber, he became interested in the possibilities of liquid-fueled rocket engines to propel airplanes. This led to contact with Zander and sparked his interest in space exploration and rocketry.
Group for the Study of Reactive Motion (GIRD) Practical aspects built on early experiments carried out by members of the 'Group for the Study of Reactive Motion' (better known by its Russian acronym "GIRD") in the 1930s, where Zander, Korolev and other pioneers such as the Russian engineers Mikhail Tikhonravov, Leonid Dushkin, Vladimir Vetchinkin and Yuriy Pobedonostsev worked together. On August 18, 1933, the Leningrad branch of GIRD, led by Tikhonravov, launched the first hybrid propellant rocket, the GIRD-09, and on November 25, 1933, the Soviets' first liquid-fueled rocket, the GIRD-X. Reactive Scientific Research Institute (RNII) In 1933 GIRD was merged with GDL by the Soviet government to form the Reactive Scientific Research Institute (RNII), which brought together the best of the Soviet rocket talent, including Korolev, Langemak, Ivan Kleymyonov and former GDL engine designer Valentin Glushko. Early successes of RNII included the conception in 1936 and first flight in 1941 of the RP-318, the Soviets' first rocket-powered aircraft, and the RS-82 and RS-132 missiles, which entered service by 1937 and became the basis for the development in 1938 and serial production from 1940 to 1941 of the Katyusha multiple rocket launcher, another advance in the reactive propulsion field. RNII's research and development were very important for later achievements of the Soviet rocket and space programs. During the 1930s, Soviet rocket technology was comparable to Germany's, but Joseph Stalin's Great Purge severely damaged its progress. In November 1937, Kleymyonov and Langemak were arrested and later executed, and Glushko and many other leading engineers were imprisoned in the Gulag. Korolev was arrested in June 1938 and sent to a forced labour camp in Kolyma in June 1939. However, due to intervention by Tupolev, he was relocated to a prison for scientists and engineers in September 1940. World War II During World War II rocketry efforts were carried out by three Soviet design bureaus.
RNII continued to develop and improve solid fuel rockets, including the RS-82 and RS-132 missiles and the Katyusha rocket launcher, where Pobedonostsev and Tikhonravov continued to work on rocket design. In 1944, RNII was renamed Scientific Research Institute No 1 (NII-I) and combined with design bureau OKB-293, led by Soviet engineer Viktor Bolkhovitinov, which developed, with Aleksei Isaev, Boris Chertok, Leonid Voskresensky and Nikolay Pilyugin, a short-range rocket-powered interceptor called the Bereznyak-Isayev BI-1. The Special Design Bureau for Special Engines (OKB-SD) was led by Glushko and focused on developing auxiliary liquid-fueled rocket engines to assist the takeoff and climb of propeller aircraft, including the RD-1KhZ, RD-2 and RD-3. In 1944, the RD-1KhZ auxiliary rocket motor was tested in a fast-climb Lavochkin La-7R for protection of the capital from high-altitude Luftwaffe attacks. In 1942 Korolev was transferred to OKB-SD, where he proposed development of the long-range missiles D-1 and D-2. The third design bureau was Plant No 51 (OKB-51), led by the Soviet Ukrainian engineer Vladimir Chelomey, who created the first Soviet pulsating air jet engine in 1942, independently of similar contemporary developments in Nazi Germany. German influence During World War II, Nazi Germany developed rocket technology that was more advanced than that of the Allies, and a race commenced between the Soviet Union and the United States to capture and exploit the technology. Soviet rocket specialists were sent to Germany in 1945 to obtain V-2 rockets and worked with German specialists in Germany, and later in the Soviet Union, to understand and replicate the rocket technology. The involvement of German scientists and engineers was an essential catalyst to early Soviet efforts. In 1945 and 1946 the use of German expertise was invaluable in reducing the time needed to master the intricacies of the V-2 rocket, establishing production of the R-1 rocket and enabling a base for further developments.
On 22 October 1946, 302 German rocket scientists and engineers, including 198 from the Zentralwerke (a total of 495 persons including family members), were deported to the Soviet Union as part of Operation Osoaviakhim. However, after 1947 the Soviets made very little use of German specialists and their influence on the future Soviet rocket program was marginal. Sputnik and Vostok The Soviet space program was tied to the USSR's Five-Year Plans and from the start was reliant on support from the Soviet military. Although he was "single-mindedly driven by the dream of space travel", Korolev generally kept this a secret while working on military projects—especially, after the Soviet Union's first atomic bomb test in 1949, a missile capable of carrying a nuclear warhead to the United States—as many mocked the idea of launching satellites and crewed spacecraft. Nonetheless, the first Soviet rocket with animals aboard launched in July 1951; the two dogs, Dezik and Tsygan, were recovered alive after reaching 101 km in altitude. Two months ahead of America's first such achievement, this and subsequent flights gave the Soviets valuable experience with space medicine. Because of its global range and large payload of approximately five tons, the reliable R-7 was not only effective as a strategic delivery system for nuclear warheads, but also as an excellent basis for a space vehicle. The United States' announcement in July 1955 of its plan to launch a satellite during the International Geophysical Year greatly benefited Korolev in persuading Soviet leader Nikita Khrushchev to support his plans. In a letter addressed to Khrushchev, Korolev stressed the necessity of launching a "simple satellite" in order to compete with the American space effort. Plans were approved for Earth-orbiting satellites (Sputnik) to gain knowledge of space, and four uncrewed military reconnaissance satellites, Zenit. 
Further planned developments called for a crewed Earth orbit flight by and an uncrewed lunar mission at an earlier date. After the first Sputnik proved to be a successful propaganda coup, Korolev—now known publicly only as the anonymous "Chief Designer of Rocket-Space Systems"—was charged to accelerate the crewed program, the design of which was combined with the Zenit program to produce the Vostok spacecraft. After Sputnik, Soviet scientists and program leaders envisioned establishing a crewed station to study the effects of zero gravity and the long-term effects on lifeforms in a space environment. Still influenced by Tsiolkovsky—who had chosen Mars as the most important goal for space travel—in the early 1960s, the Soviet program under Korolev created substantial plans for crewed trips to Mars as early as 1968 to 1970. With closed-loop life support systems and electrical rocket engines, and launched from large orbiting space stations, these plans were much more ambitious than America's goal of landing on the Moon. In late 1963 and early 1964 the Polyot 1 and Polyot 2 satellites were launched; these were the first satellites capable of adjusting both their orbital inclination and apsis. This marked a significant step in the potential use of spacecraft in anti-satellite warfare, as it demonstrated the potential for unmanned satellites to eventually intercept and destroy other satellites. This would have highlighted the potential use of the space program in a conflict with the US. Funding and support The Soviet space program was secondary in military funding to the Strategic Rocket Forces' ICBMs. While the West believed that Khrushchev personally ordered each new space mission for propaganda purposes, and the Soviet leader did have an unusually close relationship with Korolev and other chief designers, Khrushchev emphasized missiles rather than space exploration and was not very interested in competing with Apollo.
While the government and the Communist Party used the program's successes as propaganda tools after they occurred, systematic plans for missions based on political reasons were rare, one exception being Valentina Tereshkova, the first woman in space, on Vostok 6 in 1963. Missions were planned based on rocket availability or ad hoc reasons, rather than scientific purposes. For example, the government in February 1962 abruptly ordered an ambitious mission involving two Vostoks simultaneously in orbit launched "in ten days time" to eclipse John Glenn's Mercury-Atlas 6 that month; the program could not do so until August, with Vostok 3 and Vostok 4. Internal competition Unlike the American space program, which had NASA as a single coordinating structure directed by its administrator, James Webb through most of the 1960s, the USSR's program was split between several competing design groups. Despite the successes of the Sputnik Program between 1957 and 1961 and Vostok Program between 1961 and 1964, after 1958 Korolev's OKB-1 design bureau faced increasing competition from his rival chief designers, Mikhail Yangel, Valentin Glushko, and Vladimir Chelomei. Korolev planned to move forward with the Soyuz craft and N-1 heavy booster that would be the basis of a permanent crewed space station and crewed exploration of the Moon. However, Dmitry Ustinov directed him to focus on near-Earth missions using the Voskhod spacecraft, a modified Vostok, as well as on uncrewed missions to nearby planets Venus and Mars. Yangel had been Korolev's assistant but with the support of the military, he was given his own design bureau in 1954 to work primarily on the military space program. This had the stronger rocket engine design team including the use of hypergolic fuels but following the Nedelin catastrophe in 1960 Yangel was directed to concentrate on ICBM development. 
He also continued to develop his own heavy booster designs similar to Korolev's N-1, both for military applications and for cargo flights into space to build future space stations. Glushko was the chief rocket engine designer, but he had personal friction with Korolev and refused to develop the large single-chamber cryogenic engines that Korolev needed to build heavy boosters. Chelomey benefited from the patronage of Khrushchev and in 1960 was given the plum job of developing a rocket to send a crewed vehicle around the Moon and a crewed military space station. With limited space experience, his development was slow. The progress of the Apollo program alarmed the chief designers, who each advocated for his own program as the response. Multiple, overlapping designs received approval, and new proposals threatened already approved projects. Due to Korolev's "singular persistence", in August 1964—more than three years after the United States declared its intentions—the Soviet Union finally decided to compete for the Moon. It set the goal of a lunar landing in 1967—the 50th anniversary of the October Revolution—or 1968. At one stage in the early 1960s the Soviet space program was actively developing multiple launchers and spacecraft. With the fall of Khrushchev in 1964, Korolev was given complete control of the crewed program. In 1961, Valentin Bondarenko, a cosmonaut training for a crewed Vostok mission, was killed in an endurance experiment after the chamber he was in caught fire. The Soviet Union chose to cover up his death and continue on with the space program. After Korolev Korolev died in January 1966 from complications of heart disease and severe hemorrhaging following a routine operation that uncovered colon cancer.
Kerim Kerimov, who had previously served as the head of the Strategic Rocket Forces and had participated in the State Commission for Vostok as part of his duties, was appointed Chairman of the State Commission on Piloted Flights and headed it for the next 25 years (1966–1991). He supervised every stage of development and operation of both crewed space complexes as well as uncrewed interplanetary stations for the former Soviet Union. One of Kerimov's greatest achievements was the launch of Mir in 1986. The leadership of the OKB-1 design bureau was given to Vasily Mishin, who had the task of sending a human around the Moon in 1967 and landing a human on it in 1968. Mishin lacked Korolev's political authority and still faced competition from other chief designers. Under pressure, Mishin approved the launch of the Soyuz 1 flight in 1967, even though the craft had never been successfully tested on an uncrewed flight. The mission launched with known design problems and ended with the vehicle crashing to the ground, killing Vladimir Komarov. This was the first in-flight fatality of any space program. The Soviets were beaten in sending the first crewed flight around the Moon in 1968 by Apollo 8, but Mishin pressed ahead with development of the flawed super heavy N1, in the hope that the Americans would have a setback, leaving enough time to make the N1 workable and land a man on the Moon first. There was a success with the joint flight of Soyuz 4 and Soyuz 5 in January 1969 that tested the rendezvous, docking, and crew transfer techniques that would be used for the landing, and the LK lander was tested successfully in earth orbit. But after four uncrewed test launches of the N1 ended in failure, the program was suspended for two years and then cancelled, removing any chance of the Soviets landing men on the Moon before the United States. 
Besides the crewed landings, the abandoned Soviet Moon program included the multipurpose moon base Zvezda, first detailed with developed mockups of expedition vehicles and surface modules. Following this setback, Chelomey convinced Ustinov to approve a program in 1970 to advance his Almaz military space station as a means of beating the US's announced Skylab. Mishin remained in control of the project that became Salyut, but the decision, backed by Mishin, to fly a three-man crew without pressure suits rather than a two-man crew with suits to Salyut 1 in 1971 proved fatal when the re-entry capsule depressurized, killing the crew on their return to Earth. Mishin was removed from many projects, with Chelomey regaining control of Salyut. After working with NASA on the Apollo–Soyuz, the Soviet leadership decided a new management approach was needed, and in 1974 the N1 was canceled and Mishin was out of office. The design bureau was renamed NPO Energia, with Glushko as chief designer. In contrast with the difficulty faced in its early crewed lunar programs, the USSR found significant success with its remote Moon operations, achieving two historic firsts with the automatic Lunokhod rovers and the Luna sample return missions. The Mars probe program was also continued with some success, while the explorations of Venus and then of Halley's Comet by the Venera and Vega probe programs were more effective. Lunar missions The "Luna" programme achieved the first flyby of the Moon with Luna 1 in 1959 (also marking the first time a probe reached the vicinity of the Moon), the first impact of the Moon with Luna 2, and the first photos of the far side of the Moon from Luna 3. As well as garnering scientific information on the Moon, Luna 1 was able to detect a strong flow of ionized plasma emanating from the Sun, streaming through interplanetary space. Luna 2 impacted the Moon east of Mare Imbrium.
Photography transmitted by Luna 3 showed two dark regions which were named Mare Moscoviense (Sea of Moscow) and Mare Desiderii (Sea of Dreams); the latter was later found to be composed of the smaller Mare Ingenii and other dark craters. Luna 2 marked the first time a man-made object had contacted a celestial body. Luna 1 discovered that the Moon had no magnetic field. In 1963, the Soviet Union's "2nd Generation" Luna programme was less successful: Luna 4, Luna 5, Luna 6, Luna 7, and Luna 8 all ended in mission failure. However, in 1966 Luna 9 achieved the first soft landing on the Moon and successfully transmitted photography from the surface. Luna 10 became the first man-made object to establish an orbit around the Moon, followed by Luna 11, Luna 12, and Luna 14, which also successfully established orbits. Luna 12 was able to transmit detailed photography of the surface from orbit. Luna 10, Luna 12, and Luna 14 conducted gamma-ray spectrometry of the Moon, among other tests. The Zond programme was orchestrated alongside the Luna programme, with Zond 1 and Zond 2 launching in 1964 as intended flyby missions; however, both failed. Zond 3, however, was successful, and transmitted high-quality photography of the far side of the Moon. In late 1966, Luna 13 became the third spacecraft to make a soft landing on the Moon, the American Surveyor 1 having taken second place. Zond 4, launched in 1968, was intended as a means to test the possibility of a manned mission to the Moon, including methods of a stable re-entry to Earth from a lunar trajectory using a heat shield. It did not fly by the Moon, but established an elliptical orbit at lunar distance. Due to issues with the craft's orientation, it was unable to make a soft landing in the Soviet Union and instead was deliberately destroyed by its self-destruct system.
Later in the year, Zond 5, carrying two Russian tortoises, became the first man-made object to fly by the Moon and return to Earth, its tortoises being the first animals to fly around the Moon; it splashed down in the Indian Ocean. Zond 6, Zond 7, and Zond 8 had similar mission profiles: Zond 6 failed to return to Earth safely, while Zond 7 returned high quality color photography of the Earth and the Moon from varying distances, and Zond 8 successfully returned to Earth after a lunar flyby. In 1969, Luna 15 was intended as a lunar sample return mission, but it ended in a crash landing. In 1970, however, Luna 16 became the first robotic probe to land on the Moon and return a surface sample to Earth, having drilled 35 cm into the surface; it was the first lunar sample return by the Soviet Union and the third overall, after the crewed Apollo 11 and Apollo 12 missions. Luna 17 and Luna 21 delivered the Lunokhod rovers onto the surface of the Moon. Luna 20 and Luna 24 were further successful sample return missions. Luna 18 and Luna 23 ended in crash landings. In total there were 24 missions in the Luna programme, of which 15 were considered successful, including 4 hard landings, 3 soft landings, 6 orbits, and 2 flybys. The programme was continued after the collapse of the Soviet Union, when the Russian space agency launched Luna 25 in 2023. Venusian missions The Venera programme marked many firsts in space exploration and in the exploration of Venus. Venera 1 and Venera 2 failed after losses of contact; Venera 3, which also lost contact, nevertheless marked the first time a man-made object made contact with another planet when it impacted Venus on March 1, 1966. Venera 4, Venera 5, and Venera 6 performed successful atmospheric entries. In 1970, Venera 7 marked the first time a spacecraft returned data after landing on another planet.
Venera 7 carried a resistance thermometer and an aneroid barometer to measure the temperature and atmospheric pressure at the surface; the transmitted data showed 475 °C at the surface and a pressure of 92 bar. A wind of 2.5 m/s was extrapolated from other measurements. The landing point of Venera 7 was . Venera 7 impacted the surface at a somewhat high speed of 17 metres per second; later analysis of the recorded radio signals revealed that the probe had survived the impact and continued transmitting a weak signal for another 23 minutes. It is believed that the spacecraft may have bounced upon impact and come to rest on its side, so that the antenna was not pointed towards Earth. In 1972, Venera 8 landed on Venus and measured the light level as being suitable for surface photography, finding it similar to the amount of light on Earth on an overcast day, with roughly 1 km visibility. In 1975, Venera 9 established an orbit around Venus and successfully returned the first photography of the surface of Venus. Venera 10 landed on Venus and followed with further photography shortly after. In 1978, Venera 11 and Venera 12 landed successfully but ran into issues performing photography and soil analysis. Venera 11's light sensor detected lightning strikes. In 1981, Venera 13 performed a successful soft landing on Venus and became the first probe to drill into the surface of another planet and take a sample. Venera 13 also recorded an audio sample of the Venusian environment, marking another first. Venera 13 returned the first color images of the surface of Venus, revealing an orange-brown flat bedrock surface covered with loose regolith and small, flat, thin angular rocks. The composition of the sample, determined by the X-ray fluorescence spectrometer, put it in the class of weakly differentiated melanocratic alkaline gabbroids, similar to terrestrial leucitic basalt with a high potassium content.
The acoustic detector returned the sounds of the spacecraft's operations and of the background wind, whose speed was estimated at around 0.5 m/s. Venera 14, a spacecraft identical to Venera 13, launched five days later with a very similar mission profile, except that it ran into issues using its spectrometer to analyze the soil. In total, 10 Venera probes achieved soft landings on the surface of Venus. In 1984, the Vega programme began and ended with two craft launched six days apart, Vega 1 and Vega 2. Both craft deployed a balloon in addition to a lander, marking a first in spaceflight. Martian missions The first Soviet mission to explore Mars, Mars 1, was launched in 1962. Although it was intended to fly by the planet and transmit scientific data, the spacecraft lost contact before reaching Mars, marking a setback for the program. In 1971, the Soviet Union launched Mars 2 and Mars 3. Mars 2 became the first spacecraft to reach the surface of Mars, though it made a hard landing and was destroyed on impact. Mars 3, however, achieved a historic milestone by making the first successful soft landing on Mars. Mars 3 used parachutes and rockets as part of its landing system, but still contacted the surface at a somewhat high speed of 20 metres per second. Its lander transmitted data for only about 20 seconds before going silent. Following these initial successes and setbacks, the Mars 4, Mars 5, Mars 6, and Mars 7 missions were launched in 1973. Mars 4 and Mars 5 performed successful flybys, with analysis detecting the presence of a weak ozone layer and magnetic field, corroborating analysis done by the American Mariner 4 and Mariner 9. Mars 6 and Mars 7 failed to land successfully. Salyut space station The Salyut programme ("salyut" translates as "salute") was a series of missions which established the first space station in Earth orbit. Initially, the Salyut stations served as research laboratories in orbit.
Salyut 1, the first in the series, launched in 1971, was primarily a civilian scientific mission. Its crew set a then-record 24-day mission, though its tragic end, when the Soyuz 11 crew died after their re-entry capsule depressurized, underscored the high risks of human spaceflight. Following this, the Soviet Union also developed Salyut 2 and Salyut 3, which featured reconnaissance capabilities and carried a large gun; both ran into significant issues during their missions. This dual-use design, serving both scientific and military research, demonstrated the Soviet Union's strategy of blending scientific achievement with defense applications. As the Salyut program progressed, later missions like Salyut 6 and Salyut 7 improved upon earlier designs by allowing long-duration crewed missions and more complex experiments. These stations had expanded crew capacity and amenities for long-term stays, carrying electric stoves, a refrigerator, and constant hot water. The Salyut series effectively paved the way for future Soviet and later Russian space stations, including the Mir space station, which would play a significant part in the history of long-term space exploration. The longest stay, aboard Salyut 7, was 237 days. Program secrecy The Soviet space program had withheld information on its projects predating the success of Sputnik, the world's first artificial satellite. In fact, when the Sputnik project was first approved, one of the most immediate courses of action the Politburo took was to consider what to announce to the world regarding the event. The Telegraph Agency of the Soviet Union (TASS) established precedents for all official announcements on the Soviet space program. The information eventually released did not offer details on who built and launched the satellite or why it was launched. The public release revealed, "there is an abundance of arcane scientific and technical data...
as if to overwhelm the reader with mathematics in the absence of even a picture of the object". What remains of the release is the pride in Soviet cosmonautics and vague hints of the possibilities opened up by Sputnik's success. The Soviet space program's use of secrecy served both as a tool to prevent the leaking of classified information between countries and as a means of creating a mysterious barrier between the space program and the Soviet populace. The program's nature embodied ambiguous messages concerning its goals, successes, and values. Launches were not announced until they took place. Cosmonauts' names were not released until they flew. Mission details were sparse. Outside observers did not know the size or shape of Soviet rockets, cabins, or most of their spacecraft, except for the first Sputniks, lunar probes and Venus probes. However, the military influence over the Soviet space program may be the best explanation for this secrecy. The OKB-1 was subordinated to the Ministry of General Machine-Building, tasked with the development of intercontinental ballistic missiles, and continued to give its assets random identifiers into the 1960s: "For example, the Vostok spacecraft was referred to as 'object IIF63' while its launch rocket was 'object 8K72K'". Soviet defense factories had been assigned numbers rather than names since 1927. Even these internal codes were obfuscated: in public, employees used a separate code, a set of special post-office numbers, to refer to the factories, institutes, and departments. The program's public pronouncements were uniformly positive: as far as the people knew, the Soviet space program had never experienced failure. According to historian James Andrews, "With almost no exceptions, coverage of Soviet space exploits, especially in the case of human space missions, omitted reports of failure or trouble".
According to Dominic Phelan in the book Cold War Space Sleuths, "The USSR was famously described by Winston Churchill as 'a riddle, wrapped in a mystery, inside an enigma' and nothing signified this more than the search for the truth behind its space program during the Cold War. Although the Space Race was literally played out above our heads, it was often obscured by a figurative 'space curtain' that took much effort to see through." Projects and accomplishments Completed projects The Soviet space program's projects include:
Almaz space stations
Cosmos satellites
Foton
Luna – Moon flybys, orbiters, impacts, landers, rovers, sample returns
Mars probe program
Meteor meteorological satellites
Molniya communications satellites
Mir space station
Proton satellites
Phobos Mars probe program
Salyut space stations
Soyuz program spacecraft
Sputnik satellites
TKS spacecraft
Venera – Venus probe program
Vega program – Venus and Halley's Comet probe program
Vostok program spacecraft
Voskhod program spacecraft
Zond program
Notable firsts Two days after the United States announced its intention to launch an artificial satellite, on July 31, 1955, the Soviet Union announced its intention to do the same. Sputnik 1 was launched on October 4, 1957, beating the United States and stunning people all over the world. The Soviet space program pioneered many aspects of space exploration:
1957: First intercontinental ballistic missile and orbital launch vehicle, the R-7 Semyorka.
1957: First satellite, Sputnik 1.
1957: First animal in Earth orbit, the dog Laika on Sputnik 2.
1959: First rocket ignition in Earth orbit and first man-made object to escape Earth's gravity, Luna 1.
1959: First data communications, or telemetry, to and from outer space, Luna 1.
1959: First man-made object to pass near the Moon and first man-made object in heliocentric orbit, Luna 1.
1959: First probe to impact the Moon, Luna 2.
1959: First images of the Moon's far side, Luna 3.
1960: First animals to safely return from Earth orbit, the dogs Belka and Strelka on Sputnik 5.
1961: First probe launched to Venus, Venera 1.
1961: First person in space (international definition) and in Earth orbit, Yuri Gagarin on Vostok 1, Vostok program.
1961: First person to spend over 24 hours in space (and first person to sleep in space), Gherman Titov, Vostok 2.
1962: First dual crewed spaceflight, Vostok 3 and Vostok 4.
1962: First probe launched to Mars, Mars 1.
1963: First woman in space, Valentina Tereshkova, Vostok 6.
1964: First multi-person crew (3), Voskhod 1.
1965: First extra-vehicular activity (EVA), by Alexei Leonov, Voskhod 2.
1965: First radio telescope in space, Zond 3.
1965: First probe to hit another planet of the Solar System (Venus), Venera 3.
1966: First probe to make a soft landing on and transmit from the surface of the Moon, Luna 9.
1966: First probe in lunar orbit, Luna 10.
1966: First image of the whole Earth disk, Molniya 1.
1967: First uncrewed rendezvous and docking, Cosmos 186/Cosmos 188.
1968: First living beings to reach the Moon (circumlunar flight) and return unharmed to Earth, Russian tortoises and other lifeforms on Zond 5.
1969: First docking between two crewed craft in Earth orbit and exchange of crews, Soyuz 4 and Soyuz 5.
1970: First soil samples automatically extracted and returned to Earth from another celestial body, Luna 16.
1970: First robotic space rover, Lunokhod 1, on the Moon.
1970: First full interplanetary flight with a soft landing and useful data transmission; data received from the surface of another planet of the Solar System (Venus), Venera 7.
1971: First space station, Salyut 1.
1971: First probe to impact the surface of Mars, Mars 2.
1971: First probe to land on Mars, Mars 3.
1971: First armed space station, Almaz.
1975: First probe to orbit Venus and to make a soft landing on Venus, and first photos from the surface of Venus, Venera 9.
1980: First Asian person in space, Vietnamese cosmonaut Pham Tuan on Soyuz 37; and first Latin American, Cuban, and person of African ancestry in space, Arnaldo Tamayo Méndez on Soyuz 38.
1984: First Indian cosmonaut in space, Rakesh Sharma, on Soyuz T-11 (Salyut 7 space station).
1984: First woman to walk in space, Svetlana Savitskaya (Salyut 7 space station).
1986: First crew to visit two separate space stations (Mir and Salyut 7).
1986: First probes to deploy robotic balloons into the Venus atmosphere and to return pictures of a comet during a close flyby, Vega 1 and Vega 2.
1986: First permanently crewed space station, Mir, 1986–2001, with a permanent presence on board (1989–1999).
1987: First crew to spend over one year in space, Vladimir Titov and Musa Manarov, on board Soyuz TM-4 – Mir.
1988: First fully automated flight of a spaceplane, Buran.
Incidents, failures, and setbacks Accidents and cover-ups The Soviet space program experienced a number of fatal incidents and failures. The first official cosmonaut fatality during training occurred on March 23, 1961, when Valentin Bondarenko died in a fire within a low-pressure, high-oxygen atmosphere. On April 23, 1967, Soyuz 1 crashed into the ground due to a parachute failure, killing Vladimir Komarov. Komarov's death was the first in-flight fatality in the history of spaceflight. The Soviets continued striving for the first lunar mission with the N-1 rocket, which exploded on each of four uncrewed tests shortly after launch. The Americans won the race to land men on the Moon with Apollo 11 on July 20, 1969. In 1971, the Soyuz 11 mission to the Salyut 1 space station ended in the deaths of three cosmonauts when the re-entry capsule depressurized during preparations for re-entry. This accident resulted in the only human casualties to have occurred in space itself, as opposed to the high atmosphere. The crew members aboard Soyuz 11 were Vladislav Volkov, Georgy Dobrovolsky, and Viktor Patsayev.
On April 5, 1975, the second stage of the Soyuz rocket carrying Soyuz 7K-T No.39 and two cosmonauts to the Salyut 4 space station malfunctioned, resulting in the first crewed launch abort. The cosmonauts were carried several thousand miles downrange and became worried that they would land in China, with which the Soviet Union had difficult relations at the time. The capsule hit a mountain and slid down a slope, almost going off a cliff; however, the parachute lines snagged on trees and kept this from happening. Even so, the two suffered severe injuries, and the commander, Lazarev, never flew again. On March 18, 1980, a Vostok rocket exploded on its launch pad during a fueling operation, killing 48 people. In August 1981, Kosmos 434, which had been launched in 1971, was about to re-enter the atmosphere. To allay fears that the spacecraft carried nuclear materials, a spokesperson from the Ministry of Foreign Affairs of the USSR assured the Australian government on 26 August 1981 that the satellite was "an experimental lunar cabin". This was one of the first admissions by the Soviet Union that it had ever engaged in a crewed lunar spaceflight program. In September 1983, a Soyuz rocket being launched to carry cosmonauts to the Salyut 7 space station exploded on the pad, triggering the Soyuz capsule's abort system and saving the two cosmonauts on board. Buran The Soviet Buran program attempted to produce a class of spaceplanes launched on the Energia rocket, in response to the US Space Shuttle, and intended to operate in support of large space-based military platforms as a response to the Strategic Defense Initiative. Unlike the Space Shuttle, Buran had only orbital maneuvering engines and did not fire its engines during launch, relying entirely on Energia to lift it out of the atmosphere.
It copied the airframe and thermal protection system design of the US Space Shuttle orbiter, with a maximum payload of 30 metric tons (slightly higher than that of the Space Shuttle), and weighed less. It also had the capability to land autonomously; due to this, some retroactively consider it the more capable launch vehicle. By the time the system was ready to fly in orbit in 1988, strategic arms reduction treaties had made Buran redundant. On November 15, 1988, Buran and its Energia rocket were launched from Baikonur Cosmodrome in Kazakhstan, and after two orbits in three hours, the orbiter glided to a landing a few miles from its launch pad. While the craft survived that re-entry, the heat shield was not reusable, a failure attributed to United States counter-intelligence efforts. After this test flight, the Soviet Ministry of Defense defunded the program, considering it of little value relative to its cost. Polyus satellite The Polyus satellite was a prototype orbital weapons platform designed to destroy Strategic Defense Initiative satellites with a megawatt carbon-dioxide laser. Launched mounted upside-down on its Energia rocket, its single flight test failed when the inertial guidance system, instead of rotating the craft 180°, rotated it a complete 360°. Canceled projects Energia rocket The Energia was a successfully developed super-heavy-lift launch vehicle which burned liquid hydrogen fuel. Without the Buran or Polyus payloads to launch, however, it too was canceled, for lack of funding, upon the dissolution of the USSR. Interplanetary projects Mars missions The heavy rover Mars 4NM was to have been launched by the abandoned N1 launcher between 1974 and 1975. The Mars sample return mission Mars 5NM was to have been launched by a single N1 launcher in 1975. The Mars sample return mission Mars 5M (or Mars-79) was to have been launched in two parts by Proton launchers, then joined in orbit for the flight to Mars in 1979.
Vesta The Vesta mission would have consisted of two identical dual-purpose interplanetary probes to be launched in 1991. They were intended to fly by Mars (instead of an early plan for Venus) and then study four asteroids belonging to different classes. At 4 Vesta, a penetrator would be released. Tsiolkovsky The Tsiolkovsky mission was planned as a dual-purpose deep interplanetary probe to be launched in the 1990s to make a "slingshot" flyby of Jupiter and then pass within five to seven radii of the Sun. A derivative of this spacecraft would possibly have been launched toward Saturn and beyond.
Detritivore
Detritivores (also known as detrivores, detritophages, detritus feeders or detritus eaters) are heterotrophs that obtain nutrients by consuming detritus (decomposing plant and animal parts as well as feces). There are many kinds of invertebrates, vertebrates, and plants that carry out coprophagy. By doing so, all these detritivores contribute to decomposition and the nutrient cycles. Detritivores should be distinguished from other decomposers, such as many species of bacteria, fungi and protists, which are unable to ingest discrete lumps of matter; these other decomposers instead live by absorbing and metabolizing on a molecular scale (saprotrophic nutrition). The terms detritivore and decomposer are often used interchangeably, but they describe different organisms. Detritivores are usually arthropods and help in the process of remineralization. Detritivores perform the first stage of remineralization by fragmenting the dead plant matter, allowing decomposers to perform the second stage. Plant tissues are made up of resilient molecules (e.g. cellulose, lignin, xylan) that decay at a much lower rate than other organic molecules. The activity of detritivores is the reason plant litter does not accumulate in nature. Detritivores are an important aspect of many ecosystems. They can live on any type of soil with an organic component, including marine ecosystems, where they are termed interchangeably with bottom feeders. Typical detritivorous animals include millipedes, springtails, woodlice, dung flies, slugs, many terrestrial worms, sea stars, sea cucumbers, fiddler crabs, and some sedentary marine polychaetes such as worms of the family Terebellidae. Detritivores can be classified into more specific groups based on their size and biomes: macrodetritivores are larger organisms such as millipedes, springtails, and woodlice, while microdetritivores are smaller organisms such as bacteria.
Scavengers are not typically thought of as detritivores, as they generally eat large quantities of organic matter, but both detritivores and scavengers are examples of the same type of consumer-resource system. The consumption of wood, whether alive or dead, is known as xylophagy; the activity of animals feeding only on dead wood is called sapro-xylophagy, and those animals sapro-xylophagous. Ecology Detritivores play an important role as recyclers in the ecosystem's energy flow and biogeochemical cycles. Alongside decomposers, they reintroduce vital elements such as carbon, nitrogen, phosphorus, calcium, and potassium back into the soil, allowing plants to take in these elements and use them for growth. They shred dead plant matter, which releases the nutrients trapped in the plant tissues. An abundance of detritivores in the soil allows the ecosystem to recycle nutrients efficiently. Many detritivores live in mature woodland, though the term can be applied to certain bottom feeders in wet environments. These organisms play a crucial role in benthic ecosystems, forming essential food chains and participating in the nitrogen cycle. Detritivores and decomposers that reside in the desert live in burrows underground to avoid the hot surface, since underground conditions are more favorable for them. Detritivores are the main organisms clearing plant litter and recycling nutrients in the desert. Because of the limited vegetation available, desert detritivores have adapted and evolved ways to feed in the extreme conditions of the desert. Detritivore feeding behaviour is affected by rainfall; moist soil increases detritivore feeding and excretion. Fungi, acting as decomposers, are important in today's terrestrial environment. During the Carboniferous period, fungi and bacteria had yet to evolve the capacity to digest lignin, so large deposits of dead plant tissue accumulated during this period, later becoming fossil fuels.
By feeding on sediments directly to extract the organic component, some detritivores incidentally concentrate toxic pollutants.
Abies alba
Abies alba, the European silver fir or silver fir, is a fir native to the mountains of Europe, from the Pyrenees north to Normandy, east to the Alps and the Carpathians, Slovakia, Slovenia, Croatia, Bosnia and Herzegovina, Montenegro, Serbia, and south to Italy, Bulgaria, Kosovo, Albania and northern Greece. Description Abies alba is a large evergreen coniferous tree growing to tall and with a trunk diameter up to . The largest measured tree was tall and had a trunk diameter of . It occurs at altitudes of (mainly over ), on mountains with rainfall over per year. The leaves are needle-like, flattened, long and wide by thick, glossy dark green above, and with two greenish-white bands of stomata below. The leaf is usually slightly notched at the tip. The cones are long and broad, with about 150-200 scales, each scale with an exserted bract and two winged seeds; they disintegrate when mature to release the seeds. The wood is white, leading to the species name alba. In the forest the evergreen tends to form stands with Norway spruce, Scots pine, and European beech. It is closely related to Bulgarian fir (Abies borisii-regis) further to the southeast in the Balkan Peninsula, Spanish fir (Abies pinsapo) of Spain and Morocco and Sicilian fir (Abies nebrodensis) in Sicily, differing from these and other related Euro-Mediterranean firs in the sparser foliage, with the leaves spread either side of the shoot, leaving the shoot readily visible from above. Some botanists treat Bulgarian fir and Sicilian fir as varieties of silver fir, as A. alba var. acutifolia and A. alba var. nebrodensis, respectively. Ecology Silver fir is an important component species in the dinaric calcareous block fir forest in the western Balkan Peninsula. In Italy, the silver fir is an important component of the mixed broadleaved-coniferous forest of the Apennine Mountains, especially in northern Apennine. 
The fir prefers a cold and humid climate, with northern exposure and high rainfall (over 1,500 mm per year). In the eastern Alps of Italy, silver firs grow in mixed forests with Norway spruce, beech, and other trees. Its cone scales are eaten by the caterpillars of the tortrix moth Cydia illutana, while C. duplicana feeds on the bark around injuries or cankers. Chemistry and pharmacology The bark and wood of silver fir are rich in antioxidative polyphenols. Six phenolic acids have been identified (gallic, homovanillic, protocatechuic, p-hydroxybenzoic, vanillic and p-coumaric), along with three flavonoids (catechin, epicatechin and catechin tetramethyl ether) and eight lignans (taxiresinol, 7-(2-methyl-3,4-dihydroxytetrahydropyran-5-yloxy)-taxiresinol, secoisolariciresinol, lariciresinol, hydroxymatairesinol, isolariciresinol, matairesinol and pinoresinol). The extract from the trunk was shown to prevent atherosclerosis in guinea pigs and to have a cardioprotective effect in isolated rat hearts. Silver fir wood extract was found to reduce the post-prandial glycemic response (the concentration of sugar in the blood after a meal) in healthy volunteers. Uses In Roman times the wood was used to make wooden casks to store and transport wine and other substances. A resinous essential oil can be extracted; this pine-scented oil is used in perfumes, bath products, and aerosol inhalants. Its branches (including the leaves, bark and wood) were used in the production of spruce beer. Silver fir was the species first used as a Christmas tree, but it has been largely replaced by Nordmann fir (which has denser, more attractive foliage), Norway spruce (which is much cheaper to grow), and other species. When cultivated on Christmas tree plantations, the tree naturally forms a symmetrical conical shape. The trees are full and dense, with a resinous fragrance, and are known to be among the longest lasting after being cut.
As well as in its native area, it is grown on Christmas tree plantations in the northeast of North America, spanning from New England in the United States to the Maritime Provinces of Canada. The wood is strong, lightweight, light-coloured, fine-grained, even-textured and long-fibred. The timber is mainly used as construction wood and for furniture, plywood, pulpwood and paper manufacture. The honeydew produced by aphids feeding on the silver fir is collected by honey bees, and the resulting honey is marketed as "fir honey". Etymology Abies is derived from Latin, meaning 'rising one'; the name was used to refer to tall trees or ships. Alba means 'bright' or 'dead white'.
Redback spider
The redback spider (Latrodectus hasselti), also known as the Australian black widow, is a species of highly venomous spider believed to originate in Australia but now also established in Southeast Asia and New Zealand, with colonies elsewhere outside Australia; it has also been found in packing crates in the United States. It is a member of the cosmopolitan genus Latrodectus, the widow spiders. The adult female is easily recognised by her spherical black body with a prominent red stripe on the upper side of her abdomen and an hourglass-shaped red/orange streak on the underside. Females usually have a body length of about , while the male is much smaller, being only long. Mainly nocturnal, the female redback lives in an untidy web in a warm sheltered location, commonly near or inside human residences. It preys on insects, spiders and small vertebrates that become ensnared in its web. It kills its prey by injecting a complex venom through its two fangs when it bites, before wrapping them in silk and sucking out the liquefied insides. Often, it first squirts its victim with what resembles 'superglue' from its spinnerets, immobilising the prey by sticking the victim's limbs and appendages to its own body. The redback spider then trusses the victim with silk. Once its prey is restrained, it is bitten repeatedly on the head, body and leg segments and is then hauled back to the redback spider's retreat. Sometimes a potentially dangerous victim is left to struggle for hours until it is exhausted enough to approach safely. Male spiders and spiderlings often live on the periphery of the female spiders' webs and steal leftovers. Other species of spider and parasitoid wasps prey on this species. The redback is one of a number of arachnids that usually display sexual cannibalism while mating. After mating, sperm is stored in the spermathecae, organs of the female reproductive tract, and can be used up to two years later to fertilise several clutches of eggs.
Each clutch averages 250 eggs and is housed in a round white silken egg sac. The redback spider has a widespread distribution in Australia, and inadvertent introductions have led to established colonies in New Zealand, the United Arab Emirates, Japan and greenhouses in Belgium. The redback is one of the few spider species that can be seriously harmful to humans, and its liking for habitats in built structures has led to it being responsible for a large number of serious spider bites in Australia. Predominantly neurotoxic to vertebrates, the venom gives rise to the syndrome of latrodectism in humans; this starts with pain around the bite site, which typically becomes severe, progresses up the bitten limb, and persists for over 24 hours. Sweating in localised patches of skin occasionally occurs and is highly indicative of latrodectism. Generalised symptoms of nausea, vomiting, headache, and agitation may also occur and indicate severe envenomation. An antivenom has been available since 1956. Taxonomy and naming Common names The common name "redback" is derived from the distinctive red stripe along the dorsal aspect of its abdomen. Other common names include red-striped spider, red-spot spider, jockey spider, Murra-ngura spider, Kapara spider and Kanna-jeri spider. History Before DNA analysis, the taxonomy of the widow spider genus Latrodectus had been unclear—changes in the number of species reflect the difficulty of using morphology to determine subdivisions within the genus. Substantial interest in their systematics was most likely prompted by the medical importance of these venomous spiders. Swedish arachnologist Tamerlan Thorell described the redback spider in 1870 from specimens collected in Rockhampton and Bowen in central Queensland. He named it Latrodectus hasseltii in honour of his colleague A.W.M. van Hasselt. In the same paper, he named a female from Cape York with an all-black abdomen L. scelio, now regarded as the same species.
These specimens are in the Naturhistoriska Riksmuseet in Stockholm. German arachnologist Friedrich Dahl revised the genus in 1902 and named L. ancorifer from New Guinea, which was later regarded as a subspecies of the redback. Another subspecies, L. h. aruensis, was described by Norwegian entomologist Embrik Strand in 1911. Subspecies indica (of L. scelio) had been described by Eugène Simon in 1897, but its origin is unclear. Frederick Octavius Pickard-Cambridge questioned Dahl's separating of species on what he considered minor anatomical details, but Dahl dismissed Pickard-Cambridge as an "ignoramus". Pickard-Cambridge was unsure whether L. hasselti warranted species status, though he confirmed scelio and hasselti as a single species, with other researchers such as Ludwig Carl Christian Koch noting the differences to be inconsistent. The redback was also considered by some to be conspecific with the katipō (L. katipo), which is native to New Zealand, though Koch regarded them as distinct. Reviewing the genus Latrodectus in 1959, arachnologist Herbert Walter Levi concluded that the colour variations were largely continuous across the world and were not suitable for distinguishing the individual species. Instead, he focused on differences in the morphology of the female sexual organs, and revised the number of recognised species from 22 to 6. This included reclassifying the redback and several other species as subspecies of the best-known member of the group, the black widow spider (Latrodectus mactans), found in North America and other regions. He did not consider the subspecies L. h. ancorifer, L. h. aruensis and L. h. indicus distinct enough to warrant recognition. Subsequently, more reliable genetic studies have split the genus into about 30 species, and the redback has no recognised subspecies in modern classifications.
Placement

A member of the genus Latrodectus in the family Theridiidae, the redback belongs in a clade with the black widow spider, with the katipō as its closest relative. A 2004 molecular study supports the redback's status as a distinct species, as does the unique abdomen-presenting behaviour of the male during mating. The close relationship between the two species is shown when mating: the male redback is able to successfully mate with a female katipō, producing hybrid offspring. However, the male katipō is too heavy to mate with the female redback, as it triggers a predatory response in the female when it approaches the web, causing the female to eat it. There is evidence of interbreeding between female katipō and male redbacks in the wild.

Description

The adult female redback has a body around long, with slender legs, the first pair of which are longer than the rest. The round abdomen is a deep black (occasionally brownish), with a red (sometimes orange) longitudinal stripe on the upper surface and an hourglass-shaped scarlet streak on the underside. Females with incomplete markings or all-black abdomens occasionally occur. The cephalothorax is much smaller than the abdomen, and is black. Redback spiderlings are grey with dark spots, and become darker with each moult. Juvenile females have additional white markings on the abdomen. The bright scarlet red colours may serve as a warning to potential predators. Each spider has a pair of venom glands, one attached to each of its chelicerae with very small fangs. Small compared to the female, the male redback is long and is light brown, with white markings on the upper side of the abdomen and a pale hourglass marking on the underside. Another species in Australia with a similar physique, Steatoda capensis, has been termed the "false redback spider", but it is uniformly black (or plum), and does not display the red stripe.
Behaviour

Web

The redback is mainly nocturnal; the female remains concealed during the day, and spins her web during the night, usually remaining in the same location for most of her adult life. Classified as a gum-footed tangle web, the web is an irregular-looking tangle of fine but strong silk. Although the threads seem random, they are strategically placed for support and entrapment of prey. The rear portion of the web forms a funnel-like retreat area where the spider and egg sacs are found. This area has vertical, sticky catching threads that run to ground attachments. The vertical strands act as trip wires to initially alert the spider to the presence of prey or threats. They also snare and haul prey into the air when weaker horizontal strands that hold them down, known as guy lines, break when prey thrash around. These webs are usually placed between two flat surfaces, one beneath the other. The female spends more time in the funnel and less time moving around during cooler weather. The individual web filaments are quite strong, able to entangle and hold small reptiles.

Prey

Redbacks usually prey on insects, but can capture larger animals that become entangled in the web, including trapdoor spiders, small lizards, and, on rare occasions, even snakes. One web was recorded as containing a dead mouse. The woodlouse (Porcellio scaber) is a particularly common food item. Developing spiderlings need size-appropriate prey, and laboratory studies show that they are willing to consume common fruit flies (Drosophila melanogaster), mealworm larvae (Tenebrio molitor), muscoid flies and early nymphs of cockroaches. Food scraps and lighting attract insect prey to areas of human activity, which brings the redbacks. Once alerted to a creature becoming ensnared in a trap line, the redback advances to around a leg's length from its target, touching it and squirting a liquid glutinous silk over it to immobilise it.
It then bites its victim repeatedly on the head, body and leg joints and wraps it in sticky and dry silk. Unlike other spiders, it does not rotate its prey while wrapping it in silk, but like other spiders, it then injects a venom that liquefies its victim's innards. Once it has trussed the prey, the redback takes it to its retreat and begins sucking out the liquefied insides, generally 5 to 20 minutes after first attacking it. Redback spiders do not usually drink, except when starved. Commonly, prey-stealing occurs where larger females take food items stored in other spiders' webs. When they encounter other spiders of the same species, often including those of the opposite sex, they engage in battle, and the defeated spider is eaten. If a male redback is accepted by a female, it is permitted to feed on the victims snared in the female's web. Baby spiders also steal food from their mother, which she tries to prevent. They also consume sticky silk as well as small midges and flies. Spiderlings are cannibalistic, more active ones sometimes eating their less active siblings.

Life cycle

Spiderlings hatch from their eggs after about 8 days and can emerge from the egg sac as early as 11 days after being laid, although cooler temperatures can significantly slow their development so that emergence does not occur for months. After hatching they spend about a week inside the egg sac, feeding on the yolk and moulting once. Baby spiders appear from September to January (spring to early summer). Male spiders mature through five instars in about 45–90 days. Females mature through seven to eight instars in about 75–120 days. Males live for up to six or seven months, while females may live between two and three years. Laboratory tests have shown that redbacks may survive for an average of 100 days, and sometimes over 300 days, without any food, those starved at faring better than those kept without food at .
Spiders are known to reduce their metabolic rates in response to starvation, and can distend their abdomens to store large amounts of food. Redbacks can survive temperatures from below freezing point to , though they do need relatively warm summers, with temperatures of for two to three months, to survive and breed. Redback spiderlings cohabit on the maternal web for several days to a week, during which time sibling cannibalism is often observed. They then leave by being carried on the wind. They follow light and climb to the top of nearby logs or rocks before extending their abdomens high in the air and producing a droplet of silk. The liquid silk is drawn out into a long gossamer thread that, when long enough, carries the spider away. This behaviour is known as ballooning or kiting. Eventually, the silken thread will adhere to an object where the young spider will establish its own web. They sometimes work cooperatively, climbing, releasing silk and being carried off in clusters. Juvenile spiders build webs, sometimes with other spiders.

Reproduction

Before a juvenile male leaves its mother's web, it builds a small sperm web on which it deposits its sperm from its gonads and then collects it back into each of its two palps (copulatory organs), because the gonads and palps are not internally connected. After it moults into its last instar, it sets off wandering to seek a female. The male spider does not eat during this period. How males find females is unclear, and it is possible they may balloon like juveniles. A Western Australian field study found that most males took 6 to 8 weeks to travel around with occasional journeys of over , but that only around 11–13% successfully found a mate. They are attracted by pheromones, which are secreted by unmated sexually mature female redback spiders onto their webs and include a serine derivative (N-3-methylbutyryl-O-(S)-2-methylbutyryl-L-serine).
This is thought to be the sole method by which males assess a female's reproductive status, and their courtship dismantles much of the pheromone-marked web. During mating, the male redback attempts to copulate by inserting one of its palps into one of the female's two spermathecae, each of which has its own insemination orifice. It then tries and often succeeds in inserting the other palp into the female's second orifice. The redback spider is one of only two animals known where the male has been found to actively assist the female in sexual cannibalism. In the process of mating, the much smaller male somersaults to place his abdomen over the female's mouthparts. In about two of three cases, the female fully consumes the male while mating continues. Males which are not eaten die of their injuries soon after mating. Sacrifice during mating is thought to confer two advantages to the species. The first is that the eating process allows for a longer period of copulation and thus fertilisation of more eggs. The second is that females which have eaten a male are more likely to reject subsequent males. Although this prohibits future mating for the males, this is not a serious disadvantage, because the spiders are sufficiently sparse that less than 20% of males ever find a potential mate during their lifetimes, and in any case, the male is functionally sterile if he has used the contents of both of his palps in the first mating. Some redback males have been observed using an alternative tactic that also ensures more of their genetic material is passed on. Juvenile female redbacks nearing their final moulting and adulthood have fully formed reproductive organs, but lack openings in the exoskeleton that allow access to the organs. Males will bite through the exoskeleton and deliver sperm without performing the somersault seen in males mating with adult females. The females then moult within a few days and deliver a clutch of fertilised eggs.
Once the female has mated, the sperm is stored in one or both of her spermathecae. The sperm can be used to fertilise several batches of eggs, over a period of up to two years (estimated from observations of closely related species); however, the female typically resumes production of the pheromone advertising her sexual availability about three months after mating. A female spider may lay four to ten egg sacs, each of which is around in diameter and contains on average around 250 eggs, though there can be as few as 40 or as many as 500. She prepares a shallow concave disc around in diameter, lays eggs into it over a period of around five minutes, and then adds more silk to complete the sac, which becomes spherical; the whole process takes around one and a quarter hours. She can produce a new egg sac as early as one to three weeks after her last.

Distribution and habitat

The redback spider is widespread across Australia. The current distribution reported by the World Spider Catalogue includes Southeast Asia and New Zealand. Colonies and individuals have been found elsewhere, including Japan, England, Belgium, the United Arab Emirates and Iran. It was believed at one time that the redback may have been introduced to Australia, because when it was first formally described in 1870, it appeared to be concentrated around sea ports. However, an earlier informal description (1850) from the Adelaide Hills is now known, and names in Australian Aboriginal languages also show that it was present well before European settlement. Its original range is thought to be a relatively small arid part of South Australia and Western Australia. Its spread has been inadvertently aided by modern buildings, which often provide habitats conducive to redback populations. The close relationship between the redback and the New Zealand katipō also supports the native status of both in their respective countries.
Outside urban areas, the redback is more often found in drier habitats ranging from sclerophyll forest to desert, even as harsh as the Simpson Desert. It became much more common in urban areas in the early decades of the 20th century, and is now found in all but the most inhospitable environments in Australia and its cities. It is particularly common in Brisbane, Perth and Alice Springs. It is widespread throughout urban Australia, with most suburban backyards in the city of Canberra (for instance) having one or more nesting females in places such as firewood piles, stacked bricks, around unused motor vehicles or vehicles under restoration, and behind the garden shed, as observed since at least the 1970s and probably earlier. The redback spider is commonly found in close proximity to human residences. Webs are usually built in dry, dark, sheltered sites, such as among rocks, in logs, tree hollows, shrubs, old tyres, sheds, outhouses, empty tins and boxes, children's toys or under rubbish or litter. Letterboxes and the undersurface of toilet seats are common sites. Populations can be controlled by clearing these habitats, squashing the spiders and their egg sacs, and using pesticide in outhouses. The CSIRO Division of Entomology recommends against the use of spider pesticides due to their toxicity, and because redbacks are rapid recolonists anyway. Spiders in the French territory of New Caledonia in the Pacific were identified as L. hasselti in 1920, based on morphology. Their behaviour differs from Australian redbacks, as they do not engage in sexual cannibalism and are less prone to biting humans. The first recorded envenomation in New Caledonia was in 2007.

Introductions

The redback spider's affinity for human-modified habitat has enabled it to spread to several countries via international shipping and trade. Furthermore, its tolerance to cold means that it has the ability to colonise many temperate countries with a winter climate cooler than Australia's.
This is concerning both because of the risk to people who, unaware of its venomous nature, may be bitten, and because of the threat the redback might pose to the conservation of threatened local insect species on which it could prey. Redback spiders are also found in small colonies in areas of New Zealand. They are frequently intercepted by quarantine authorities, often among steel or car shipments. They were introduced into New Zealand in the early 1980s, and are now found around Central Otago (including Alexandra, Bannockburn and near Wānaka) in the South Island and New Plymouth in the North Island. Authorities in the United Arab Emirates warn residents and visitors of redback spiders, which have been present since 1990. Colonies have also been established in greenhouses in Belgium, and isolated observations indicate possible presence in New Guinea, the Philippines, and India. Some redbacks were found in Preston, Lancashire, England, after a container of parts arrived from Australia; some may have escaped into the countryside before pest controllers could destroy them. One redback was found in a back garden in Dartford in Kent. Two females were discovered in the Iranian port city of Bandar Abbas in 2010. There is an established population of redback spiders in Osaka, Japan, thought to have arrived in cargoes of wood chips. In 2008, redback spiders were found in Fukuoka, Japan. Over 700 have been found near the container terminal in Hakata Bay, Fukuoka City. Dispersal mechanisms within Japan are unclear, but redbacks are thought to have spread by walking or by being carried on vehicles. In September 2012, a woman in the Higashi Ward of Fukuoka City was hospitalised after being bitten. Signs warning about redback spiders have been posted in parks around the city.
Predators and parasitoids

The black house spider (Badumna insignis), the cellar spider (Pholcus phalangioides) and the giant daddy-long-legs spider (Artema atlanta) are known to prey on the redback spider, and redbacks are often absent if these species are present in significant numbers. Agenioideus nigricornis, a spider wasp, is a parasitoid of the adult redback. Other wasps of the families Eurytomidae and Ichneumonidae parasitise redback eggs, and mantid lacewings (Neuroptera: Mantispidae) prey on redback eggs.

Bites to humans

Incidence

The redback spider has been historically responsible for more envenomations requiring antivenom than any other creature in Australia. However, by 2017 the spider was blamed for only 250 envenomations requiring antivenom annually. Estimates of the number of people thought to be bitten by redback spiders each year across Australia range from 2,000 to 10,000. The larger female spider is responsible for almost all cases of redback spider bites. The smaller male was thought to be unable to envenomate a human, although some cases have been reported; their rarity is probably due to the male's smaller size and proportionally smaller fangs, rather than its being incapable of biting or lacking potent venom. The bite from both juvenile and mature females appears to have similar potency. The male bite usually only produces short-lived, mild pain. Most bites occur in the warmer months between December and April, in the afternoon or evening. As the female redback is slow-moving and rarely leaves her web, bites generally occur as a result of placing a hand or other body part too close to the spider, such as when reaching into dark holes or wall cavities. Bites often also occur when a hidden spider is disturbed in items such as clothes, shoes, gloves, building materials, garden tools or children's outdoor toys.
A 2004 review reported 46% of bites occurring on distal extremities of the limbs, 25% on proximal areas of limbs (upper arms and thighs), 21% on the trunk, and 7% on the head or neck. In some cases the same spider bites a victim multiple times. Historically, victims were often bitten on the genitalia, though this phenomenon disappeared as outhouses were superseded by plumbed indoor toilets. Conversely, bites on the head and neck have increased with use of safety helmets and ear muffs. Precautions to avoid being bitten include wearing gloves and shoes while gardening, not leaving clothes on the floor, and shaking out gloves or shoes before putting them on. Also, children can be educated not to touch spiders.

Venom

The redback and its relatives in the genus Latrodectus are considered dangerous, alongside funnel-web spiders (Atrax and Hadronyche), mouse spiders (Missulena), wandering spiders (Phoneutria) and recluse spiders (Loxosceles). Venom is produced by holocrine glands in the spider's chelicerae (mouth parts). Venom accumulates in the lumen of the glands and passes through paired ducts into the spider's two hollow fangs. The venom of the redback spider is thought to be similar to that of the other Latrodectus spiders. It contains a complex mixture of cellular constituents, enzymes and a number of high-molecular-weight toxins, including insect toxins and a vertebrate neurotoxin called alpha-latrotoxin, which causes intense pain in humans. In vertebrates, alpha-latrotoxin produces its effect through destabilisation of cell membranes and degranulation of nerve terminals, resulting in excessive release of neurotransmitters, namely acetylcholine, norepinephrine and GABA. Excess neurotransmitter activity leads to clinical manifestations of envenomation, although the precise mechanisms are not well understood. Acetylcholine release accounts for neuromuscular manifestations, and norepinephrine release accounts for the cardiovascular manifestations.
Female redbacks have an average of around 0.08–0.10 mg of venom, and experiments indicate that the median lethal dose (LD50) for mice at room temperature is 10–20% of this quantity (0.27–0.91 mg/kg based on the mass of the mice used), but that it is considerably deadlier for mice kept at lower or higher temperatures. Pure alpha-latrotoxin has an LD50 in mice of 20–40 μg/kg. The specific variant of the vertebrate toxin found in the redback was cloned and sequenced in 2012, and was found to be a sequence of 1180 amino acids, with a strong similarity to the equivalent molecule across the Latrodectus mactans clade. The syndromes caused by bites from any spiders of the genus Latrodectus have similarities; there is some evidence of a higher incidence of sweating, and local and radiating pain with the redback, while black widow envenomation results in more back and abdominal pain, and abdominal rigidity is a feature common with bites from the west coast button spider (Latrodectus indistinctus) of South Africa. One crustacean-specific and two insect-specific neurotoxins have been recovered from the Mediterranean black widow (L. tredecimguttatus), as have small peptides that inhibit angiotensin-1-converting enzyme; the venom of the redback, although little-studied, likely has similar agents.

Antivenom

Redback antivenom was developed by Commonwealth Serum Laboratories, then a government body involved with discovering antivenoms for many venomous Australian creatures. Production involves the milking of venom from redbacks and repeatedly inoculating horses with non-lethal doses. The horses' immune systems make polyclonal antibodies. Blood plasma, containing the antibodies, is extracted by plasmapheresis. The plasma is treated with pepsin, and the active F(ab')2 fragments are separated and purified. Each vial contains 500 units of redback antivenom in approximately 1.5 ml, which is enough to inactivate 5 mg of redback spider venom in a test tube.
The antivenom has been safely administered to women in various stages of pregnancy. Redback antivenom has been widely used in Australia since 1956, although evidence from controlled studies for its effectiveness has been lacking. Recent trials show antivenom has a low response rate little better than placebo, and any effect is less than might be achieved with optimal use of standard analgesics. Further studies are needed to confirm or refute its effectiveness. It appears clinically active against arachnidism caused by Steatoda spiders; however, as these cases are often mild and the evidence of its effectiveness is limited, this treatment is not recommended. Similarly, the antivenom has been reported as effective with bites of L. katipo and L. tredecimguttatus. Animal studies also support its use against envenomation from other widow spiders, having successfully been tested against venom from L. mactans, L. hesperus, and L. tredecimguttatus (synonym L. lugubris).

Signs and symptoms

Envenomation from a redback spider bite produces a syndrome known as latrodectism. A small but significant percentage of people bitten develop significant pain or systemic symptoms. The diagnosis is made from the clinical condition, often based on the victim being aware of a bite and ideally with identification of the spider. Laboratory tests are rarely needed and there is no specific test for the venom or latrodectism. The redback's small size means that swelling or puncture marks at the bite site are uncommon. The bite may be painful from the start, but more often only feels like a pinprick or mild burning sensation. Within an hour, a more severe local pain may develop with local sweating and sometimes piloerection (goosebumps)—these three symptoms together are a classic presentation of redback spider envenomation. Pain, swelling and redness can spread proximally up a limb or away from the bite site and regional lymph nodes may become painful.
Some subjects with delayed symptoms may present with characteristic sweating and pain in the lower limbs, generally below the knees, or a burning sensation in the soles of the feet. This may eventuate even if the person was bitten somewhere else on their body. Around one in three subjects develops systemic symptoms, usually after a number of hours or, rarely, delayed for more than 24 hours. Symptoms typically include nausea, vomiting, abdominal or chest pain, agitation, headache, generalised sweating and hypertension. Other non-specific systemic effects such as malaise and lethargy are also common. Rarely, other effects are reported such as neurological manifestations, fever and priapism (uncontrolled erection of the penis). Severe pain usually persists for over 24 hours after being bitten. Symptoms of envenomation may linger for weeks or even months. Rare complications include localised skin infection, seizure, coma, pulmonary oedema, or respiratory failure. Children, the elderly, or those with serious medical conditions are at much higher risk of severe effects resulting from a bite. Infants have died within hours of a bite, but adult fatalities have taken up to 30 days. Children and infants may be unable to report being bitten, making it difficult to associate their symptoms with a spider bite. Symptoms seen in infants include inconsolable crying, refusing to feed and a generalised erythematous rash. Muscle aches and pains, and neck spasm are often seen in children over four years of age. Unlike the bites of some other spiders, redback bites do not cause necrosis. Latrodectism has been misdiagnosed as various medical conditions including acute hepatitis, sepsis, testicular torsion or an acute abdomen.

Treatment

Treatment is based on the severity of the envenomation. The majority of cases do not require medical care, and patients with localised pain, swelling and redness usually require only local application of ice and simple oral analgesia such as paracetamol.
Pressure immobilisation of the wound site is not recommended. Keeping the victim still and calm is beneficial. Hospital assessment is recommended if simple pain relief does not resolve local pain, or systemic symptoms occur. Opioid analgesics may be necessary to relieve pain. Antivenom has been historically given for adults suffering severe local pain or systemic symptoms consistent with latrodectism, which include pain and swelling spreading proximally from the site, distressing local or systemic pain, chest pain, abdominal pain, or excessive sweating (diaphoresis). A significant proportion of bites will not result in envenomation or any symptoms developing; around 2–20% of bite victims have been treated with antivenom. In an Australian study of 750 emergency hospital admissions for spider bites where the spider was definitively identified, 56 were from redbacks. Of these, 37 had significant pain lasting over 24 hours. Only six were treated with the antivenom. The antivenom manufacturer's product information recommends one vial, although more has been used. Past guidelines indicated two vials, with a further two vials recommended if symptoms did not resolve within two hours; however, recent guidelines state "antivenom is sometimes given if there is a history, symptoms and signs consistent with systemic envenoming, and severe pain unresponsive to oral analgesics ... however recent trials show antivenom has a low response rate little better than placebo, and any effect is less than might be achieved with optimal use of standard analgesics". The antivenom can be given by injection intramuscularly (IM) or intravenously (IV). The manufacturer recommends IM use, with IV administration reserved for life-threatening cases. In January 2008 toxicologist Geoffrey Isbister suggested IM antivenom was not as effective as IV antivenom, after proposing that IM antivenom took longer to reach the blood serum.
Isbister subsequently found the difference between IV and IM routes of administration was, at best, small and did not justify routinely choosing one route over the other. These concerns led two handbooks to recommend IV in preference to IM administration in Australian practice. Despite a long history of usage and anecdotal evidence of effectiveness, there is a lack of data from controlled studies confirming the antivenom's benefits. In 2014 Isbister and others conducted a randomised controlled trial of intravenous antivenom versus placebo for redback envenomation, finding the addition of antivenom did not significantly improve pain or systemic effects, while antivenom resulted in acute hypersensitivity reactions in 3.6 per cent of those receiving it. The question of abandoning the antivenom on the basis of this and previous studies came up in the Annals of Emergency Medicine in 2015, where White and Weinstein argued that if the recommendations in the 2014 Isbister et al. paper were followed, it would lead to abandonment of antivenom as a treatment option, an outcome White and Weinstein considered undesirable. Authors of the 2014 Isbister et al. paper responded in the same issue by suggesting patients for whom antivenom is considered should be fully informed "there is considerable weight of evidence to suggest it is no better than placebo", and in light of a risk of anaphylaxis and serum sickness, "routine use of the antivenom is therefore not recommended". Before the introduction of antivenom, benzodiazepines and intravenous calcium gluconate were used to relieve symptoms of pain and distress, although calcium is not recommended as its benefit has not been shown in clinical trials. Studies support the safety of antivenom, with around a 5% chance of an acute reaction, 1–2% of anaphylaxis and a 10% chance of a delayed reaction due to serum sickness.
Nevertheless, it is recommended that an injection of adrenaline be ready and available in case it is needed to treat a severe anaphylactic reaction, and also that the antivenom from the vial be administered diluted in a 100 ml bag of intravenous solution for infusion over 30 minutes. While it is rare that patients report symptoms of envenomation lasting weeks or months following a bite, there are case reports from the 1990s in which antivenom was reported to be effective in the relief of chronic symptoms when administered weeks or months after a bite. However, in the vast majority of cases, it is administered within 24 hours.

Prognosis

According to NSW Health, redback spider bites are considered not life-threatening but capable of causing severe pain and systemic symptoms that can continue for hours to days. In almost all cases, symptoms resolve within a week. Fatalities are extremely unlikely. In 2016, the death of a bushwalker from a redback spider bite was widely reported. In this case, the death occurred from secondary infection, and the man in question had just recovered from a serious car accident. Apart from that, there have been no deaths due to redback bite since the introduction of antivenom. Before this, redback spider bites had been implicated in at least 14 deaths in Australia; however, these cases cannot be definitively linked to the redback bite as the sole cause.

Bites to animals

Redback spider bites are difficult to diagnose in pets unless witnessed. Dogs appear to have some resistance. They are at serious risk only if bitten many times, and rarely need antivenom. Cats are likely to be more susceptible and require antivenom, which can reverse symptoms very quickly. Guinea pigs, horses and camels are very susceptible. As with humans, the symptoms are predominantly autonomic in nature alongside pain at the bite site.
Dogs may also suffer vomiting and diarrhoea, muscle tremors or clonic contractions, and abdominal wall rigidity, while cats may salivate excessively, protrude their tongue or be overexcitable. Historical treatment of bites Most traditional or historical first-aid treatments for redback spider bites are either useless or dangerous. These include making incisions and promoting bleeding, using ligatures, applying alkaline solutions, providing warmth, and sucking the venom out. In modern first aid, incising, sucking, applying bandages and applying tourniquets are strongly discouraged. In 1893, the Camperdown Chronicle reported that a doctor noticed that a severely ill, benumbed victim got much better overnight following treatment with injections of strychnine and cocaine; strychnine had been popular as a snake bite antidote, but it was not effective. As of 2011, administration of magnesium sulphate was reported to have had some benefit, though evidence of effectiveness is weak. Cultural impact Indigenous Australians in New South Wales mixed the spiders' bodies with the venom of snakes and pine tree gum to form a broth used to coat spear tips. Slim Newton drew popular attention to redbacks with his song "The Redback on the Toilet Seat", which won the Golden Guitar at the first Country Music Awards of Australia in 1973. Newton recalled an occasion when a friend used his outside toilet where the light globe had blown and reported he was lucky there was not a redback spider on the toilet seat. The phrase inspired him to write the song. A sculpture of an impossibly large redback, one of Australia's big things, was built in 1996 at Eight Mile Plains, Queensland. The Angels' 1991 album Red Back Fever takes its name from the spider. Matilda Bay Brewing Company produces a wheat beer called Redback, with the distinctive red stripe as the logo. The redback appears in the name and emblem of the South Australia cricket team.
The Airborne Redback, an Australian ultralight trike, was also named after the spider. Redback Boots is an Australian workboot manufacturing company, which uses the spider in its name and logo. In 2006, a redback spider postage stamp was designed as part of a "Dangerous Australians" stamp series, but was withheld from general circulation by Australia Post due to concerns that the realistic depiction would scare people opening their letter boxes. In 2012, an episode of the children's TV show Peppa Pig in which the title character picks up and plays with a spider was banned from Australian television due to fears that it would encourage children to pick up and play with redback spiders.
https://en.wikipedia.org/wiki/Isopoda
Isopoda
Isopoda is an order of crustaceans. Members of this group are called isopods and include both aquatic species and terrestrial species such as woodlice. All have rigid, segmented exoskeletons, two pairs of antennae, seven pairs of jointed limbs on the thorax, and five pairs of branching appendages on the abdomen that are used in respiration. Females brood their young in a pouch under their thorax called the marsupium. Isopods have various feeding methods: some eat dead or decaying plant and animal matter, others are grazers or filter feeders, a few are predators, and some are internal or external parasites, mostly of fish. Aquatic species mostly live on the seabed or the bottom of freshwater bodies of water, but some taxa can swim for short distances. Terrestrial forms move around by crawling and tend to be found in cool, moist places. Some species are able to roll themselves into a ball as a defence mechanism or to conserve moisture, such as species in the family Armadillidiidae, the pillbugs. There are over 10,000 identified species of isopod worldwide, with around 4,500 species found in marine environments, mostly on the seabed, 500 species in fresh water, and another 5,000 species on land. The order is divided into eleven suborders. The fossil record of isopods dates back to the Carboniferous period (in the Pennsylvanian epoch), at least 300 million years ago, when isopods lived in shallow seas. The name Isopoda is derived from the Greek roots iso- (from ísos, meaning "equal") and -pod (from podós, the stem of poús, meaning "foot"). Description Classified within the arthropods, isopods have a chitinous exoskeleton and jointed limbs. Isopods are typically flattened dorsoventrally (broader than they are deep), although many species deviate from this rule, particularly parasitic forms, and those living in the deep sea or in ground water habitats. Their colour may vary, from grey to white, or in some cases red, green, or brown.
Isopods vary greatly in size, ranging from minute Microcerberidae species to the deep-sea giant isopods of the genus Bathynomus, which approach half a metre in length. Isopods lack an obvious carapace (shell), which is reduced to a "cephalic shield" covering only the head. This means that the gill-like structures, which in other related groups are protected by the carapace, are instead found on specialised limbs on the abdomen. The dorsal (upper) surface of the animal is covered by a series of overlapping, articulated plates which give protection while also providing flexibility. The isopod body plan consists of a head (cephalon), a thorax (pereon) with seven segments (pereonites), and an abdomen (pleon) with six segments (pleonites), some of which may be fused. The head is fused with the first segment of the thorax to form the cephalon. There are two pairs of unbranched antennae, the first pair being vestigial in land-dwelling species. The eyes are compound and unstalked, and the mouthparts include a pair of maxillipeds and a pair of mandibles (jaws) with palps (segmented appendages with sensory functions) and lacinia mobilis (spine-like movable appendages). The seven free segments of the thorax each bear a pair of unbranched pereopods (limbs). In most species these are used for locomotion and are of much the same size, morphology and orientation, giving the order its name "Isopoda", from the Greek for "equal foot". In a few species, the front pair are modified into gnathopods with clawed, gripping terminal segments. Unlike the equivalent limbs in amphipods, the pereopods are not used in respiration; the coxae (first segments) are fused to the tergites (dorsal plates) to form epimera (side plates). In mature females, some or all of the limbs have appendages known as oostegites which fold underneath the thorax and form a brood chamber for the eggs. In males, the gonopores (genital openings) are on the ventral surface of segment eight and in the females, they are in a similar position on segment six.
One or more of the abdominal segments, starting with the sixth segment, are fused to the telson (terminal section) to form a rigid pleotelson. The first five abdominal segments each bear a pair of biramous (branching in two) pleopods, lamellar structures which serve the function of gas exchange and, in aquatic species, act as gills and aid propulsion, and the last segment bears a pair of biramous uropods (posterior limbs). In males, the second pair of pleopods, and sometimes also the first, are modified for use in transferring sperm. The endopods (inner branches of the pleopods) are modified into structures with thin, permeable cuticles (flexible outer coverings) which act as gills for gas exchange. In some terrestrial isopods, these resemble lungs. Diversity and classification Isopods belong to the larger group Peracarida, which are united by the presence of a special chamber under the thorax for brooding eggs. They have a cosmopolitan distribution and over 10,000 species of isopod, classified into 11 suborders, have been described worldwide. Around 4,500 species are found in marine environments, mostly on the sea floor. About 500 species are found in fresh water and another 5,000 species are the terrestrial woodlice, which form the suborder Oniscidea. In the deep sea, members of the suborder Asellota predominate, to the near exclusion of all other isopods, having undergone a large adaptive radiation in that environment. The largest isopod is in the genus Bathynomus, and some large species are fished commercially for human food in Mexico, Japan and Hawaii. Some isopod groups have evolved a parasitic lifestyle, particularly as external parasites of fish. They can damage or kill their hosts and can cause significant economic loss to commercial fisheries. In reef aquariums, parasitic isopods can become a pest, endangering the fish and possibly injuring the aquarium keeper.
Some members of the family Cirolanidae suck the blood of fish, and others, in the family Aegidae, consume the blood, fins, tail and flesh and can kill the fish in the process. The World Marine, Freshwater and Terrestrial Isopod Crustaceans database subdivides the order into eleven suborders: Asellota – This suborder contains the superfamily Aselloidea, a group that contains most of the freshwater isopods in the northern hemisphere, and the superfamilies Stenetrioidea, Gnathostenetroidoidea and Janiroidea, which are mostly marine. The latter superfamily, Janiroidea, has undergone a massive radiation of deep-sea families, many of which have taken bizarre forms. Calabozoida – A small suborder consisting of two marine species in the family Calabozoidae and one freshwater species in the family Brasileirinidae which is found in subterranean locations. Cymothoida – Chiefly marine isopods with over 2,700 species. Members are mostly carnivorous or parasitic. Includes the family Gnathiidae, the juveniles of which are parasitic on fishes. The previously recognised suborder Epicaridea is included as two superfamilies within this suborder, and Cymothoida now includes part of the formerly recognised suborder Flabellifera. Also includes the former suborder Anthuridea, a group of worm-like isopods with very long bodies. Limnoriidea – Mainly tropical isopods, some of which are herbivorous. Microcerberidea – Tiny, worm-like isopods that live between particles on the bed of freshwater and shallow marine habitats. Oniscidea – Semi-terrestrial and terrestrial isopods fully adapted for life on land. There are over 4,000 species of woodlice inhabiting forests, mountains, deserts and the littoral zone. Phoratopidea – A single marine species, Phoratopus remex, which warrants its own suborder because of its unique characteristics. Phreatoicidea – A small suborder of freshwater isopods resembling amphipods, limited to South Africa, India, Australia and New Zealand.
Sphaeromatidea – Benthic isopods mostly from the southern hemisphere with respiratory pleopods inside a branchial chamber. This suborder now includes part of the formerly recognised suborder Flabellifera. Tainisopidea – Freshwater isopods in a "relictual environment". Valvifera – A large group of benthic, marine isopods with respiratory pleopods inside a branchial chamber under the abdomen. Evolutionary history Isopods first appeared in the fossil record during the Carboniferous period of the Paleozoic some 300 million years ago. They were primitive, short-tailed members of the suborder Phreatoicidea. At that time, Phreatoicideans were marine organisms with a cosmopolitan distribution. Nowadays, the members of this formerly widespread suborder form relic populations in freshwater environments in South Africa, India and Oceania, the greatest number of species being in Tasmania. Other primitive, short-tailed suborders include Asellota, Microcerberidea, Calabozoidea and the terrestrial Oniscidea. The short-tailed isopods have a short pleotelson and terminal, stylus-like uropods and have a sedentary lifestyle on or under the sediment on the seabed. The long-tailed isopods have a long pleotelson and broad lateral uropods which can be used in swimming. They are much more active and can launch themselves off the seabed and swim for short distances. The more advanced long-tailed isopods are mostly endemic to the southern hemisphere and may have radiated on the ancient supercontinent of Gondwana soon after it broke away from Laurasia 200 million years ago. The short-tailed forms may have been driven from the shallow seas in which they lived by increased predatory pressure from marine fish, their main predators. The development of the long-tailed forms may also have provided competition that helped force the short-tailed forms into refugia. The latter are now restricted to environments such as the deep sea, freshwater, groundwater and dry land. 
Isopods in the suborder Asellota are by far the most species-rich group of deep sea isopods. Locomotion Unlike the amphipods, marine and freshwater isopods are entirely benthic. This gives them little chance to disperse to new regions and may explain why so many species are endemic to restricted ranges. Crawling is the primary means of locomotion, and some species bore into the seabed, the ground or timber structures. Some members of the families Sphaeromatidae, Idoteidae and Munnopsidae are able to swim well, and have their front three pairs of pleopods modified for this purpose, with their respiratory structures limited to the hind pleopods. Most terrestrial species are slow-moving and conceal themselves under objects or hide in crevices or under bark. The semi-terrestrial sea slaters (Ligia spp.) can run rapidly on land, and many terrestrial species can roll themselves into a ball when threatened, a feature that has evolved independently in different groups and also in the marine sphaeromatids. Feeding and nutrition Isopods have a simple gut which lacks a midgut section; instead there are caeca connected to the back of the stomach in which absorption takes place. Food is sucked into the esophagus, a process enhanced in the blood-sucking parasitic species, and passed by peristalsis into the stomach, where the material is processed and filtered. The structure of the stomach varies, but in many species there is a dorsal groove into which indigestible material is channelled and a ventral part connected to the caeca where intracellular digestion and absorption take place. Indigestible material passes on through the hindgut and is eliminated through the anus, which is on the pleotelson. Isopods are detritivores, browsers, carnivores (including predators and scavengers), parasites, and filter feeders, and may occupy one or more of these feeding niches. Only aquatic and marine species are known to be parasites or filter feeders.
Some exhibit coprophagia, consuming their own fecal pellets. Terrestrial species are in general herbivorous, with woodlice feeding on moss, bark, algae, fungi and decaying material. In marine isopods that feed on wood, cellulose is digested by enzymes secreted in the caeca. Limnoria lignorum, for example, bores into wood and additionally feeds on the mycelia of fungi attacking the timber, thus increasing the nitrogen in its diet. Land-based wood-borers mostly house symbiotic bacteria in the hindgut which aid in digesting cellulose. There are numerous adaptations to this simple gut, but these are mostly correlated with diet rather than with taxonomic group. Parasitic species are mostly external parasites of fish or crustaceans and feed on blood. The larvae of the family Gnathiidae and adult cymothoidids have piercing and sucking mouthparts and clawed limbs adapted for clinging onto their hosts. In general, isopod parasites have diverse lifestyles and include Cancricepon elegans, found in the gill chambers of crabs; Athelges tenuicaudis, attached to the abdomen of hermit crabs; Crinoniscus equitans, living inside the barnacle Balanus perforatus; cyproniscids, living inside ostracods and free-living isopods; bopyrids, living in the gill chambers or on the carapace of shrimps and crabs and causing a characteristic bulge which is even recognisable in some fossil crustaceans; and entoniscids, living inside some species of crab and shrimp. Cymothoa exigua is a parasite of the spotted rose snapper Lutjanus guttatus in the Gulf of California; it causes the tongue of the fish to atrophy and takes its place, in what is believed to be the first discovered instance of a parasite functionally replacing a host structure in animals. Reproduction and development In most species, the sexes are separate and there is little sexual dimorphism, but a few species are hermaphroditic and some parasitic forms show large differences between the sexes.
Some cymothoidans are protandrous hermaphrodites, starting life as males and later changing sex, and some anthuroideans are the reverse, being protogynous hermaphrodites that are born female. Some gnathiidan males are sessile and live with a group of females. Males have a pair of penises, which may be fused in some species. The sperm is transferred to the female by the modified second pleopod, which receives it from the penis and is then inserted into a female gonopore. The sperm is stored in a special receptacle, a swelling on the oviduct close to the gonopore. Fertilisation only takes place when the eggs are shed, soon after a moult, at which time a connection is established between the semen receptacle and the oviduct. The eggs, which may number up to several hundred, are brooded by the female in the marsupium, a chamber formed by flat plates known as oostegites under the thorax. This is filled with water, even in terrestrial species. The eggs hatch as mancae, a post-larval stage which resembles the adult except for the absence of the last pair of pereopods. The lack of a swimming phase in the life cycle is a limiting factor in isopod dispersal, and may be responsible for the high levels of endemism in the order. As adults, isopods differ from other crustaceans in that moulting occurs in two stages, known as "biphasic moulting": first they shed the exoskeleton from the posterior part of the body, and later they shed the anterior part. The giant Antarctic isopod Glyptonotus antarcticus is an exception, moulting in a single process. Terrestrial isopods The majority of crustaceans are aquatic, and the isopods are one of the few groups of which some members now live on land. The only other crustaceans which include a small number of terrestrial species are the amphipods (such as sandhoppers) and the decapods (crabs, shrimp, etc.).
Terrestrial isopods play an important role in many tropical and temperate ecosystems by aiding in the decomposition of plant material through mechanical and chemical means, and by enhancing the activity of microbes. Macro-detritivores, including terrestrial isopods, are absent from arctic and sub-arctic regions, but have the potential to expand their range with increased temperatures in high latitudes. The woodlice, suborder Oniscidea, are the most successful group of terrestrial crustaceans and show various adaptations for life on land. They are subject to evaporation, especially from their ventral area, and as they do not have a waxy cuticle, they need to conserve water, often living in a humid environment and sheltering under stones, bark, debris or leaf litter. Desert species, such as Hemilepistus reaumuri, are usually nocturnal, spending the day in a burrow and emerging at night. Moisture is obtained from food or by drinking, and some species can form their paired uropodal appendages into a tube and funnel water from dewdrops onto their pleopods. In many taxa, the respiratory structures on the endopods are internal, with a spiracle and pseudotracheae, which resemble lungs. In others, the endopod is folded inside the adjoining exopod (outer branch of the pleopod). Both these arrangements help to prevent evaporation from the respiratory surfaces. Many species can roll themselves into a ball, a behaviour used in defence that also conserves moisture. Members of the families Ligiidae and Tylidae, commonly known as rock lice or sea slaters, are the least specialised of the woodlice for life on land. They inhabit the splash zone on rocky shores, jetties and pilings, may hide under debris washed up on the shore and can swim if immersed in water.
https://en.wikipedia.org/wiki/Clonazepam
Clonazepam
Clonazepam, sold under the brand name Klonopin among others, is a benzodiazepine medication used to prevent and treat anxiety disorders, seizures, bipolar mania, agitation associated with psychosis, obsessive–compulsive disorder (OCD), and akathisia. It is a long-acting tranquilizer of the benzodiazepine class. It possesses anxiolytic, anticonvulsant, sedative, hypnotic, and skeletal muscle relaxant properties. It is typically taken orally (swallowed by mouth) but is also used intravenously. Effects begin within one hour and last between eight and twelve hours in adults. Common side effects may include sleepiness, weakness, poor coordination, difficulty concentrating, and agitation. Clonazepam may also decrease memory formation. Long-term use may result in tolerance, dependence, and life-threatening withdrawal symptoms if stopped abruptly. Dependence occurs in one-third of people who take benzodiazepines for longer than four weeks. The risk of suicide increases, particularly in people who are already depressed. Use during pregnancy may result in harm to the fetus. Clonazepam binds to GABAA receptors, thus increasing the effect of the chief inhibitory neurotransmitter γ-aminobutyric acid (GABA). Clonazepam was patented in 1960 and was brought to market in the United States by Roche in 1975. It is available as a generic medication. In 2022, it was the 57th most commonly prescribed medication in the United States, with more than 11 million prescriptions. In many areas of the world, it is commonly used as a recreational drug. Medical uses Clonazepam is prescribed for short-term management of epilepsy, anxiety, obsessive–compulsive disorder (OCD), and panic disorder with or without agoraphobia. Seizures Clonazepam, like other benzodiazepines, while being a first-line treatment for acute seizures, is not suitable for the long-term treatment of seizures due to the development of tolerance to its anticonvulsant effects.
Clonazepam has been found effective in treating epilepsy in children, and the inhibition of seizure activity seemed to be achieved at low plasma levels of clonazepam. As a result, clonazepam is sometimes used for certain rare childhood epilepsies, but it is ineffective in the control of infantile spasms. Clonazepam is mainly prescribed for the acute management of epilepsy. Clonazepam is effective in the acute control of non-convulsive status epilepticus, though the benefit tended to be transient in many people, and the addition of phenytoin was required in these patients for lasting control. It is also approved for the treatment of typical and atypical absence seizures and infantile myoclonic and akinetic seizures. A subgroup of people with treatment-resistant epilepsy may benefit from long-term use of clonazepam; the benzodiazepine clorazepate may be an alternative due to its slow onset of tolerance. Anxiety disorders Clonazepam is used for panic disorder with or without agoraphobia. It has also been found effective in treating other anxiety disorders, such as social phobia, but this is an off-label use. The effectiveness of clonazepam in the short-term treatment of panic disorder has been demonstrated in controlled clinical trials. Some long-term trials have suggested a benefit of clonazepam for up to three years without the development of tolerance. Clonazepam is also effective in the management of acute mania. Muscle disorders Restless legs syndrome can be treated using clonazepam as a third-line treatment option, as the use of clonazepam is still investigational. Bruxism also responds to clonazepam in the short term. REM sleep behavior disorder responds well to low doses of clonazepam.
It is also used for the treatment of acute and chronic akathisia induced by neuroleptics (antipsychotics), and for spasticity related to amyotrophic lateral sclerosis (ALS). Other Benzodiazepines, such as clonazepam, are sometimes used for the treatment of mania or acute psychosis-induced aggression. In this context, benzodiazepines are given either alone or in combination with other first-line drugs such as lithium, haloperidol, or risperidone. The effectiveness of taking benzodiazepines along with antipsychotic medication is unknown, and more research is needed to determine whether benzodiazepines are more effective than antipsychotics when urgent sedation is required. Clonazepam is also used to treat hyperekplexia, as well as many forms of parasomnia and other sleep disorders. It is not effective for preventing migraines. Contraindications Contraindications include coma, current alcohol use disorder, current substance use disorder, and respiratory depression. Adverse effects In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class. Common adverse effects include sedation, euphoria, and motor impairment. Less common effects include confusion, irritability and aggression, psychomotor agitation, lack of motivation, increased or decreased libido, impaired motor function, coordination and balance, dizziness, cognitive impairment, hallucinations, short-term memory loss, and anterograde amnesia (common with higher doses). Some users report hangover-like symptoms of drowsiness, headaches, sluggishness, and irritability upon waking up if the medication was taken before sleep. This is likely the result of the medication's long half-life, which continues to affect the user after waking. While benzodiazepines induce sleep, they tend to reduce the quality of sleep by suppressing or disrupting REM sleep.
After regular use, rebound insomnia may occur when discontinuing clonazepam. Benzodiazepines may cause or worsen depression. Occasional effects include dysphoria, induction of seizures or increased seizure frequency, personality changes, behavioural disturbances, and ataxia. Rare effects include suicide through disinhibition, psychosis, incontinence, paradoxical behavioural disinhibition (most frequently in children, the elderly, and persons with developmental disabilities), rage, excitement, and impulsivity. The long-term effects of clonazepam can include depression, disinhibition, and sexual dysfunction. Drowsiness Clonazepam, like other benzodiazepines, may impair a person's ability to drive or operate machinery. The central nervous system depressing effects of the drug can be intensified by alcohol consumption, so alcohol should be avoided while taking this medication. Benzodiazepines have been shown to cause dependence. Patients dependent on clonazepam should be slowly titrated off under the supervision of a qualified healthcare professional to reduce the intensity of withdrawal or rebound symptoms. Withdrawal-related effects include anxiety, irritability, insomnia, tremors, headaches, stomach pain, hallucinations, suicidal thoughts or urges, depression, fatigue, dizziness, sweating, confusion, potential exacerbation of existing panic disorder upon discontinuation, and, with long-term use of excessive doses, seizures similar to those of delirium tremens. Benzodiazepines such as clonazepam can be very effective in controlling status epilepticus, but, when used for longer periods of time, some potentially serious side-effects may develop, such as interference with cognitive functions and behavior. Many individuals treated on a long-term basis develop a dependence. Physiological dependence was demonstrated by flumazenil-precipitated withdrawal. Use of alcohol or other CNS depressants while taking clonazepam greatly intensifies the effects, including side effects, of the drug.
A recurrence of symptoms of the underlying disease should be separated from withdrawal symptoms. Tolerance and withdrawal Like all benzodiazepines, clonazepam is a GABA-positive allosteric modulator. One-third of individuals treated with benzodiazepines for longer than four weeks develop a dependence on the drug and experience a withdrawal syndrome upon dose reduction. High dosage and long-term use increase the risk and severity of dependence and withdrawal symptoms. Withdrawal seizures and psychosis can occur in severe cases of withdrawal, and anxiety and insomnia can occur in less severe cases. A gradual reduction in dosage reduces the severity of the benzodiazepine withdrawal syndrome. Due to the risks of tolerance and withdrawal seizures, clonazepam is generally not recommended for the long-term management of epilepsies. Increasing the dose can overcome the effects of tolerance, but tolerance to the higher dose may then develop and adverse effects may intensify. The mechanisms of tolerance include receptor desensitization, downregulation, receptor decoupling, and alterations in subunit composition and in gene transcription coding. Tolerance to the anticonvulsant effects of clonazepam occurs in both animals and humans; in humans it occurs frequently. Chronic use of benzodiazepines can lead to the development of tolerance with a decrease of benzodiazepine binding sites. The degree of tolerance is more pronounced with clonazepam than with chlordiazepoxide. In general, short-term therapy is more effective than long-term therapy with clonazepam for the treatment of epilepsy. Many studies have found that tolerance develops to the anticonvulsant properties of clonazepam with chronic use, which limits its long-term effectiveness as an anticonvulsant.
Abrupt or over-rapid withdrawal from clonazepam may result in the development of the benzodiazepine withdrawal syndrome, causing psychosis characterised by dysphoric manifestations, irritability, aggressiveness, anxiety, and hallucinations. Sudden withdrawal may also induce the potentially life-threatening condition status epilepticus. Anti-epileptic drugs, benzodiazepines such as clonazepam in particular, should be reduced in dose slowly and gradually when discontinuing the drug to mitigate withdrawal effects. Carbamazepine has been tested in the treatment of clonazepam withdrawal but was found to be ineffective in preventing clonazepam withdrawal-induced status epilepticus from occurring. Overdose Excess doses may result in difficulty staying awake, mental confusion, impaired motor function, reflexes, coordination and balance, dizziness, respiratory depression, low blood pressure, and coma. Coma can be cyclic, with the individual alternating between a comatose state and a hyper-alert state of consciousness, as occurred in a four-year-old boy who overdosed on clonazepam. The combination of clonazepam and certain barbiturates (for example, amobarbital), at prescribed doses, has resulted in a synergistic potentiation of the effects of each drug, leading to serious respiratory depression. Overdose symptoms may include extreme drowsiness, confusion, muscle weakness, and fainting. Detection in biological fluids Clonazepam and 7-aminoclonazepam may be quantified in plasma, serum, or whole blood in order to monitor compliance in those receiving the drug therapeutically. Results from such tests can be used to confirm the diagnosis in potential poisoning victims or to assist in the forensic investigation of a fatal overdose. Both the parent drug and 7-aminoclonazepam are unstable in biofluids, and therefore specimens should be preserved with sodium fluoride, stored at the lowest possible temperature, and analyzed quickly to minimize losses.
Special precautions The elderly metabolize benzodiazepines more slowly than younger people and are also more sensitive to their effects, even at similar blood plasma levels. Doses for the elderly are recommended to be about half of those given to younger adults, administered for no longer than two weeks. Long-acting benzodiazepines such as clonazepam are not generally recommended for the elderly due to the risk of drug accumulation. The elderly are especially susceptible to increased risk of harm from motor impairments and drug-accumulation side effects. Benzodiazepines also require special precaution if used by individuals who may be pregnant, alcohol- or drug-dependent, or who have comorbid psychiatric disorders. Clonazepam is generally not recommended for use in elderly people for insomnia due to its high potency relative to other benzodiazepines. Clonazepam is not recommended for use in those under 18. Use in very young children may be especially hazardous. Of anticonvulsant drugs, behavioural disturbances occur most frequently with clonazepam and phenobarbital. Doses higher than 0.5–1 mg per day are associated with significant sedation. Clonazepam may aggravate hepatic porphyria. Clonazepam is not recommended for patients with chronic schizophrenia; a 1982 double-blind, placebo-controlled study found that clonazepam increases violent behavior in individuals with chronic schizophrenia. Clonazepam has similar effectiveness to other benzodiazepines, often at a lower dose. Interactions Clonazepam decreases the levels of carbamazepine, and, likewise, clonazepam's level is reduced by carbamazepine. Azole antifungals, such as ketoconazole, may inhibit the metabolism of clonazepam. Clonazepam may affect levels of phenytoin (diphenylhydantoin). In turn, phenytoin may lower clonazepam plasma levels by increasing the speed of clonazepam clearance by approximately 50% and decreasing its half-life by 31%.
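The reported phenytoin interaction is roughly what simple first-order pharmacokinetics predicts: the elimination half-life is t1/2 = ln(2) · Vd / CL, so if clearance (CL) rises by about 50% while the volume of distribution (Vd) is unchanged, the half-life falls to 1/1.5 ≈ 67% of baseline, a reduction of about 33%, close to the reported 31%. A minimal sketch of this consistency check (the baseline Vd and CL values below are illustrative placeholders, not clonazepam-specific data):

```python
import math

def half_life(vd_litres, cl_litres_per_h):
    """Elimination half-life under first-order kinetics: t1/2 = ln(2) * Vd / CL."""
    return math.log(2) * vd_litres / cl_litres_per_h

# Hypothetical baseline values, chosen only for illustration.
vd, cl = 200.0, 3.5                      # volume of distribution (L), clearance (L/h)
t_base = half_life(vd, cl)               # baseline half-life
t_induced = half_life(vd, cl * 1.5)      # clearance increased by ~50% (enzyme induction)

reduction = 1 - t_induced / t_base
print(f"predicted half-life reduction: {reduction:.0%}")  # ~33%, close to the reported 31%
```

Because Vd and ln(2) cancel in the ratio t_induced / t_base, the predicted ~33% reduction is independent of the placeholder values; only the 1.5× change in clearance matters.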
Clonazepam increases the levels of primidone and phenobarbital. Combined use of clonazepam with certain antidepressants, anticonvulsants (such as phenobarbital, phenytoin, and carbamazepine), sedative antihistamines, opiates, antipsychotics, nonbenzodiazepines (such as zolpidem), and alcohol may result in enhanced sedative effects.

Pregnancy
There is some medical evidence of various malformations (for example, cardiac or facial deformations) when used in early pregnancy; however, the data are not conclusive. The data are also inconclusive on whether benzodiazepines such as clonazepam cause developmental deficits or decreases in IQ in the developing fetus when taken by the mother during pregnancy. Clonazepam, when used late in pregnancy, may result in the development of a severe benzodiazepine withdrawal syndrome in the neonate. Withdrawal symptoms from benzodiazepines in the neonate may include hypotonia, apnoeic spells, cyanosis, and impaired metabolic responses to cold stress. The safety profile of clonazepam during pregnancy is less clear than that of other benzodiazepines, and if benzodiazepines are indicated during pregnancy, chlordiazepoxide and diazepam may be a safer choice. The use of clonazepam during pregnancy should only occur if the clinical benefits are believed to outweigh the clinical risks to the fetus. Caution is also required if clonazepam is used during breastfeeding. Possible adverse effects of use of benzodiazepines such as clonazepam during pregnancy include miscarriage, malformation, intrauterine growth retardation, functional deficits, carcinogenesis, and mutagenesis. Neonatal withdrawal syndrome associated with benzodiazepines includes hypertonia, hyperreflexia, restlessness, irritability, abnormal sleep patterns, inconsolable crying, tremors or jerking of the extremities, bradycardia, cyanosis, suckling difficulties, apnea, risk of aspiration of feeds, diarrhea and vomiting, and growth retardation.
This syndrome can develop between three days and three weeks after birth and can last up to several months. The pathway by which clonazepam is metabolized is usually impaired in newborns. If clonazepam is used during pregnancy or breastfeeding, it is recommended that serum levels of clonazepam be monitored and that signs of central nervous system depression and apnea also be checked for. In many cases, non-pharmacological treatments, such as relaxation therapy, psychotherapy, and avoidance of caffeine, can be an effective and safer alternative to the use of benzodiazepines for anxiety in pregnant women.

Pharmacology
Mechanism of action
Clonazepam enhances the activity of the inhibitory neurotransmitter gamma-aminobutyric acid (GABA) in the central nervous system, giving it anticonvulsant, skeletal muscle relaxant, and anxiolytic effects. It acts by binding to the benzodiazepine site of GABAA receptors, which enhances the electrical effect of GABA binding on neurons, resulting in an increased influx of chloride ions into the neurons. This further results in an inhibition of synaptic transmission across the central nervous system. Benzodiazepines do not affect the levels of GABA in the brain; clonazepam likewise has no effect on gamma-aminobutyric acid transaminase. Clonazepam does, however, affect glutamate decarboxylase activity, in which respect it differs from the other anticonvulsant drugs it was compared with in one study. Clonazepam's primary mechanism of action is the modulation of GABA function in the brain via the benzodiazepine receptor, located on GABAA receptors, which in turn leads to enhanced GABAergic inhibition of neuronal firing. Benzodiazepines do not replace GABA, but instead enhance the effect of GABA at the GABAA receptor by increasing the opening frequency of chloride ion channels, which leads to an increase in GABA's inhibitory effects and resultant central nervous system depression.
In addition, clonazepam decreases the utilization of 5-HT (serotonin) by neurons and has been shown to bind tightly to central-type benzodiazepine receptors. Because clonazepam is effective in low milligram doses (0.5 mg clonazepam = 10 mg diazepam), it is said to be among the class of "highly potent" benzodiazepines. The anticonvulsant properties of benzodiazepines are due to the enhancement of synaptic GABA responses and the inhibition of sustained, high-frequency repetitive firing. Benzodiazepines, including clonazepam, bind to mouse glial cell membranes with high affinity. Clonazepam decreases release of acetylcholine in the feline brain and decreases prolactin release in rats. Benzodiazepines inhibit cold-induced thyroid-stimulating hormone (also known as TSH or thyrotropin) release. Benzodiazepines act via micromolar benzodiazepine binding sites as Ca2+ channel blockers and significantly inhibit depolarization-sensitive calcium uptake in experiments on rat brain cell components; this has been conjectured as a mechanism for the effects of high doses on seizures. Clonazepam is a 2'-chlorinated derivative of nitrazepam, which increases its potency due to the electron-attracting effect of the halogen in the ortho position.

Pharmacokinetics
Clonazepam is lipid-soluble, rapidly crosses the blood–brain barrier, and penetrates the placenta. It is extensively metabolised into pharmacologically inactive metabolites, with only 2% of the unchanged drug excreted in the urine. Clonazepam is metabolized extensively via nitroreduction by cytochrome P450 enzymes, including CYP3A4. Erythromycin, clarithromycin, ritonavir, itraconazole, ketoconazole, nefazodone, cimetidine, and grapefruit juice are inhibitors of CYP3A4 and can affect the metabolism of benzodiazepines. Clonazepam has an elimination half-life of 19–60 hours. Peak blood concentrations of 6.5–13.5 ng/mL were usually reached within 1–2 hours following a single 2 mg oral dose of micronized clonazepam in healthy adults.
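An elimination half-life of 19–60 hours corresponds to first-order exponential decay, in which the fraction of drug remaining after time t is 0.5^(t/t½). A short sketch of what that range implies 24 hours after a single dose (illustrative only; it ignores absorption and multiple dosing):

```python
def fraction_remaining(t_hours, half_life_hours):
    """First-order elimination: fraction left = 0.5 ** (t / t_half)."""
    return 0.5 ** (t_hours / half_life_hours)

# Across the reported 19-60 h half-life range, 24 h after a dose:
for t_half in (19, 60):
    frac = fraction_remaining(24, t_half)
    print(f"t_half = {t_half} h: {frac:.0%} of the dose remains")
    # roughly 42% (t_half = 19 h) and 76% (t_half = 60 h)
```

The wide spread (roughly 42% versus 76% remaining after one day) illustrates why plasma levels of a long half-life drug can differ substantially between individuals.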
In some individuals, however, peak blood concentrations were reached at 4–8 hours. Clonazepam passes rapidly through the blood–brain barrier into the central nervous system, with levels in the brain corresponding to levels of unbound clonazepam in the blood serum. Clonazepam plasma levels are very unreliable amongst patients, varying by as much as tenfold between individuals. Clonazepam has plasma protein binding of 85%. The metabolites of clonazepam include 7-aminoclonazepam, 7-acetaminoclonazepam and 3-hydroxy clonazepam; these metabolites are excreted by the kidney. It is effective for 6–8 hours in children, and 8–12 hours in adults.

Society and culture
Recreational use
A 2006 US government study of hospital emergency department (ED) visits found that sedative-hypnotics were the pharmaceutical drugs most frequently implicated in such visits, with benzodiazepines accounting for the majority of these. Clonazepam was the second most frequently implicated benzodiazepine in ED visits. Alcohol alone was responsible for over twice as many ED visits as clonazepam in the same study. The study examined the number of times the non-medical use of certain drugs was implicated in an ED visit. The criteria for non-medical use in this study were purposefully broad and included, for example, drug abuse, accidental or intentional overdose, and adverse reactions resulting from legitimate use of the medication.

Formulations
Clonazepam was approved in the United States as a generic drug in 1997 and is now manufactured and marketed by several companies. Clonazepam is available as tablets, orally disintegrating tablets (wafers), an oral solution (drops), and as a solution for injection or intravenous infusion.

Crime
In some countries, clonazepam is used by criminals to subdue their victims.
Brand names
It is marketed under the brand name Rivotril by Roche in Argentina, Australia, Austria, Bangladesh, Belgium, Brazil, Canada, China, Colombia, Costa Rica, Croatia, the Czech Republic, Denmark, Estonia, Germany, Hungary, Iceland, Ireland, Italy, Mexico, the Netherlands, Norway, Pakistan, Peru, Portugal, Romania, Serbia, South Africa, South Korea, Spain, Turkey, and the United States; as Emcloz, Linotril, Lonazep, Clotrin, and Clonotril in India and other parts of Europe; under the name Riklona in Indonesia and Malaysia; and under the brand name Klonopin by Roche in the United States. Other names, such as Antelepsin, Clonoten, Ravotril, Rivotril, Iktorivil, Clonex (Israel), Paxam, Petril, Naze, Zilepam and Kriadex, are used throughout the world. In August 2021, Roche Australia transferred Rivotril to Pharmaco Australia Ltd.
https://en.wikipedia.org/wiki/Diverticulitis
Diverticulitis
Diverticulitis, also called colonic diverticulitis, is a gastrointestinal disease characterized by inflammation of abnormal pouches—diverticula—that can develop in the wall of the large intestine. Symptoms typically include lower abdominal pain of sudden onset, but the onset may also occur over a few days. There may also be nausea, diarrhea or constipation. Fever or blood in the stool suggests a complication. People may experience a single attack, repeated attacks, or ongoing "smoldering" diverticulitis. The causes of diverticulitis are unclear. Risk factors may include obesity, lack of exercise, smoking, a family history of the disease, and use of nonsteroidal anti-inflammatory drugs (NSAIDs). The role of a low-fiber diet as a risk factor is unclear. Having pouches in the large intestine that are not inflamed is known as diverticulosis. Inflammation develops in between 10% and 25% of those with diverticulosis at some point and is due to a bacterial infection. Diagnosis is typically by CT scan, though blood tests, colonoscopy, or a lower gastrointestinal series may also be supportive. The differential diagnoses include irritable bowel syndrome. Preventive measures include altering risk factors such as obesity, inactivity, and smoking. Mesalazine and rifaximin appear useful for preventing attacks in those with diverticulosis. Avoiding nuts and seeds as a preventive measure is no longer recommended, since there is no evidence these play a role in initiating inflammation in the diverticula. For mild diverticulitis, antibiotics by mouth and a liquid diet are recommended. For severe cases, intravenous antibiotics, hospital admission, and complete bowel rest may be recommended. Probiotics are of unclear value. Complications such as abscess formation, fistula formation, and perforation of the colon may require surgery. The disease is common in the Western world and uncommon in Africa and Asia.
In the Western world about 35% of people have diverticulosis, while it affects less than 1% of those in rural Africa; 4–15% of those with diverticulosis may go on to develop diverticulitis. In North America and Europe the abdominal pain is usually on the left lower side (sigmoid colon), while in Asia it is usually on the right (ascending colon). The disease becomes more frequent with age, ranging from 5% for those under 40 years of age to 50% for those over the age of 60. It has also become more common in all parts of the world. In 2003 in Europe, it resulted in approximately 13,000 deaths. It is the most frequent anatomic disease of the colon. Costs associated with diverticular disease were around US$2.4 billion a year in the United States in 2013.

Signs and symptoms
Diverticulitis typically presents with lower quadrant abdominal pain of sudden onset. Patients commonly have elevated C-reactive protein and a high white blood cell count. In Asia the pain is usually on the right (ascending colon), while in North America and Europe it is usually on the left lower side (sigmoid colon). There may also be fever, nausea, diarrhea or constipation, and blood in the stool. Diverticulosis is associated with more frequent bowel movements, contrary to the widespread belief that patients with diverticulosis are constipated.

Complications
In complicated diverticulitis, an inflamed diverticulum can rupture, allowing bacteria to infect the area outside the colon. If the infection spreads to the lining of the abdominal cavity (the peritoneum), peritonitis results. Sometimes, inflamed diverticula can cause narrowing of the bowel, leading to an obstruction. In some cases, the affected part of the colon adheres to the bladder or other organs in the pelvic cavity, causing a fistula: an abnormal connection between the colon and an adjacent organ or structure.
Related pathologies may include:
Bowel obstruction
Peritonitis
Abscess
Fistula
Bleeding
Strictures

Causes and prevention
The causes of diverticulitis are poorly understood. Formation of diverticula is regarded as likely due to interactions of age, diet, colonic microbiota, genetic factors, colonic motility, and changes in colonic structure.

Factors associated with increased diverticulitis risk
Genetics
A 2021 review estimated that 50% of the risk of diverticulitis was attributable to genetic factors. A 2012 study estimated that heritability accounted for 40% of the cause and non-shared environmental effects for 60%.

Presence of other ill-health
Conditions that increase the risk of developing diverticulitis include arterial hypertension and immunosuppression. Low levels of vitamin D have been associated with an increased risk of diverticulitis.

Frequency of bowel movement
A 2022 study found that more frequent bowel movements appeared to be a risk factor for subsequent diverticulitis in both men and women.

Weight
Obesity has been regarded as a risk factor for diverticulitis. Some studies have found a correlation of higher prevalence of diverticulitis with overweight and obese bodyweight. There is some debate whether this is causal.

Diet
It is unclear what role dietary fiber plays in diverticulitis. It is often stated that a diet low in fiber is a risk factor; however, the evidence to support this is unclear. A 2012 study found that a high-fiber diet and increased frequency of bowel movements are associated with greater, rather than lower, prevalence of diverticulosis. There is no evidence to suggest that the avoidance of nuts and seeds prevents the progression of diverticulosis to an acute case of diverticulitis. In fact, it appears that a higher intake of nuts and corn could help to avoid diverticulitis in adult males. Red meat consumption, particularly unprocessed red meat, has been associated with higher diverticulitis risk.
A 2017 analysis found that a dietary pattern high in red meat, refined grains, and high-fat dairy was associated with an increased risk of incident diverticulitis, whereas a dietary pattern high in fruits, vegetables, and whole grains was associated with decreased risk. Men in the highest quintile of Western dietary pattern score had a multivariate hazard ratio (HR) of 1.55 (95% CI, 1.20–1.99) for diverticulitis compared to men in the lowest quintile. Recent dietary intake may be more strongly associated with diverticulitis than long-term intake. The associations between dietary patterns and diverticulitis were largely due to red meat and fiber intake. A systematic review published in 2012 found no high-quality studies, but found that some studies and guidelines favour a high-fiber diet for the treatment of symptomatic disease. A 2011 review found that a high-fiber diet may prevent diverticular disease and found no evidence for the superiority of low-fiber diets in treating diverticular disease. A 2011 long-term study found that a vegetarian diet and high fiber intake were both associated with lower risks of hospital admission or death from diverticulitis. While it has been suggested that probiotics may be useful for treatment, the evidence currently neither supports nor refutes this claim.

Factors associated with reduced diverticulitis risk
Healthy lifestyle
A prospective cohort study found that a healthy lifestyle (defined as <51 g daily red meat, >23 g daily dietary fiber, 2 hours' exercise weekly, normal BMI, and never a smoker) was associated with a substantially reduced risk of diverticulitis (relative risk 0.27, 95% CI 0.15 to 0.48).

Exercise
A 2009 study found that men who engaged in vigorous physical activity (approximately 3 hours of running a week) had a 34% reduction in the risk of diverticulitis, and a 39% reduction in the risk of diverticular bleeding, compared to men who did not exercise vigorously.
Running was the only specific activity to show a statistically significant benefit. The up-and-down motions of running may impart distinct benefits to the colon. Moderate exercise may accelerate the speed at which food travels through the gut.

Pathology
Right-sided diverticula are micro-hernias of the colonic mucosa and submucosa through the colonic muscular layer where blood vessels penetrate it. Left-sided diverticula are pseudodiverticula, since the herniation is not through all the layers of the colon. Diverticulitis is postulated to develop because of changes inside the colon, including high pressures due to abnormally vigorous contractions.

Diagnosis
People with the above symptoms are commonly studied with computed tomography (CT scan). Ultrasound can provide a preliminary investigation for diverticulitis. Findings that can be seen on ultrasound include a non-compressible outpouching of the bowel wall, a hypoechoic and thickened wall, an obstructing fecalith at the bowel wall, and bowel wall oedema with adjacent hyperechoic mesentery. However, CT scan is the mainstay of diagnosing diverticulitis and its complications. The diagnosis of acute diverticulitis is made confidently when the involved segment contains diverticula. CT images reveal localized colon wall thickening, with inflammation extending into the fat surrounding the colon. Complications that can be seen on CT scan include abscesses, perforation, pylephlebitis, intestinal obstruction, bleeding, and fistula. Barium enema and colonoscopy are contraindicated in the acute phase of diverticulitis because of the risk of perforation.

Classification by severity
Uncomplicated vs complicated
Uncomplicated acute diverticulitis is defined as localized diverticular inflammation without any abscess or perforation. Complicated diverticulitis additionally includes the presence of abscess, peritonitis, obstruction, stricture and/or fistula.
Complicated disease is present in 12% of patients with diverticulitis.

Classification systems
At least four classifications by severity have been published in the literature. As of 2015 the 'German Classification' was widely accepted and is as follows:
Stage 0 – asymptomatic diverticulosis
Stage 1a – uncomplicated diverticulitis
Stage 1b – diverticulitis with phlegmonous peridiverticulitis
Stage 2a – diverticulitis with concealed perforation, and abscess with a diameter of one centimeter or less
Stage 2b – diverticulitis with abscess greater than one centimeter
Stage 3a – diverticulitis with symptoms but without complications
Stage 3b – relapsing diverticulitis without complications
Stage 3c – relapsing diverticulitis with complications
As of 2022 other classification systems are also used. The severity of diverticulitis can be radiographically graded by the Hinchey Classification.

Smoldering diverticulitis
In "smoldering diverticulitis" (SmD) there are frequent relapsing symptoms but no progression to diverticular complications. Approximately 5% of people with diverticulitis experience smoldering diverticulitis. Smoldering diverticulitis cases make up 4–10% of diverticulitis surgeries.

Differential diagnoses
The differential diagnoses include colon cancer, inflammatory bowel disease, ischemic colitis, and irritable bowel syndrome, as well as a number of urological and gynecological processes. In those with uncomplicated diverticulitis, cancer is present in less than 1% of people.

Prognosis
Estimates of the percentage of people with diverticulosis who will develop diverticulitis range from 5% to 25%. Most people with uncomplicated diverticulitis recover following medical treatment. The median time to recovery is 14 days. Approximately 5% of people experience smoldering diverticulitis. Diverticulitis recurs in around one-third of people – about 50% of recurrences occur within one year, and 90% within 5 years.
Recurrence is more common in younger people, in those with an abscess at diagnosis, and after an episode of complicated diverticulitis. About 5% of people with diverticular disease have complications when followed up for 10–30 years. The risk of complications, such as peritonitis or perforation, is greater during the first episode of diverticulitis, and the risk reduces with each recurrence. People who are immunocompromised have a 5-fold increased risk of recurrence with complications, such as bowel perforation, compared to immunocompetent people. The decision criteria for surgical treatment have been subject to debate and development. Following surgical treatment, approximately 25% of people remain symptomatic.

Treatment
In uncomplicated diverticulitis, administration of fluids may be sufficient treatment if no other risk factors are present.

Diet
Diverticulitis patients may be placed on a low-fiber diet or a liquid diet, although evidence for improved outcomes through diet has not been found.

Medication
Antibiotics
Mild uncomplicated diverticulitis without systemic inflammation should not be treated with antibiotics. For mild, uncomplicated, and non-purulent cases of acute diverticulitis, symptomatic treatment, IV fluids, and bowel rest have no worse outcome than surgical intervention in the short and medium term, and appear to have the same outcomes at 24 months. With an abscess confirmed by CT scan, some evidence and clinical guidelines tentatively support the use of oral or IV antibiotics for smaller abscesses (<5 cm) without systemic inflammation, but percutaneous or laparoscopic drainage may be necessary for larger abscesses (>5 cm). Rifaximin was found in a meta-analysis to give symptom relief and reduce complications, but the scientific quality of the underlying studies has been questioned.

Mesalamine
Mesalamine is an anti-inflammatory medication used in the treatment of inflammatory bowel diseases.
In limited studies, patients with diverticulitis and symptomatic diverticular disease treated with mesalamine have shown improvement in both conditions. Mesalazine may reduce recurrences in symptomatic uncomplicated diverticular disease. In 2022 Germany introduced guidance to use mesalamine to treat acute uncomplicated diverticulitis.

Surgery
Indications for surgery are abscess or fistula formation, and intestinal rupture with peritonitis; these, however, rarely occur. Emergency surgery is required for peritonitis with perforated diverticulitis or intestinal rupture. Surgery for abscess or fistula is indicated either urgently or electively. The timing of elective surgery is determined by evaluating factors such as the stage of the disease, the age of the person, their general medical condition, the severity and frequency of the attacks, and whether symptoms persist after the first acute episode. In most cases, elective surgery is deemed to be indicated when the risks of the surgery are less than the risks of the complications of diverticulitis. Elective surgery is not indicated until at least six weeks after recovery from the acute event.

Technique
The first surgical approach consists of resection and primary anastomosis. This first stage of surgery is performed on people with a well-vascularized, nonedematous, and tension-free bowel. The proximal margin should be an area of pliable colon without hypertrophy or inflammation. The distal margin should extend to the upper third of the rectum, where the taeniae coalesce. Not all of the diverticula-bearing colon must be removed, since diverticula proximal to the descending or sigmoid colon are unlikely to result in further symptoms.

Approach
Diverticulitis surgery consists of a bowel resection with or without colostomy. Either may be done by traditional laparotomy or by laparoscopic surgery. The traditional bowel resection is made using an open surgical approach, called colectomy.
During a colectomy, the person is placed under general anesthesia. A surgeon performing a colectomy will make a lower midline incision in the abdomen or a lateral lower transverse incision. The diseased section of the large intestine is removed, and then the two healthy ends are sewn or stapled back together. A colostomy may be performed when the bowel has to be relieved of its normal digestive work as it heals. A colostomy implies creating a temporary opening of the colon on the skin surface: the end of the colon is passed through the abdominal wall and a removable bag is attached to it, in which the waste is collected. However, most surgeons prefer performing the bowel resection laparoscopically, mainly because postoperative pain is reduced and recovery is faster. Laparoscopic surgery is a minimally invasive procedure in which three to four smaller incisions are made in the abdomen or navel. After the incisions are made, trocars are placed, allowing a camera and other equipment entry into the peritoneal cavity. The greater omentum is reflected and the affected section of the bowel is mobilized. A comparison of laparoscopic sigmoid resection (LSR) with open sigmoid resection (OSR) showed that LSR is not superior to OSR for acute symptomatic diverticulitis. Laparoscopic lavage, however, was found to be as safe as resection for perforated diverticulitis with peritonitis.

Maneuvers
All colon surgery involves only three maneuvers that may vary in complexity depending on the region of the bowel and the nature of the disease: retraction of the colon, division of the attachments to the colon, and dissection of the mesentery. After the resection of the colon, the surgeon normally divides the attachments to the liver and the small intestine. After the mesenteric vessels are dissected, the colon is divided with special surgical staplers that close off the bowel while cutting between the staple lines.
After resection of the affected bowel segment, an anvil and spike are used to anastomose the remaining segments of the bowel. The anastomosis is checked by filling the cavity with normal saline and looking for air bubbles.

Bowel resection with colostomy
When excessive inflammation of the colon renders primary bowel resection too risky, bowel resection with colostomy remains an option. Also known as Hartmann's operation, this is a more complicated surgery typically reserved for life-threatening cases. Bowel resection with colostomy implies a temporary colostomy, which is followed by a second operation to reverse the colostomy. The surgeon makes an opening in the abdominal wall (a colostomy), which helps clear the infection and inflammation. The colon is brought through the opening and all waste is collected in an external bag. The colostomy is usually temporary, but it may be permanent, depending on the severity of the case. In most cases, several months later, after the inflammation has healed, the person undergoes another major surgery, during which the surgeon rejoins the colon and rectum and reverses the colostomy.

Prophylactic endoscopic clipping
Prophylactic endoscopic clipping is being researched for diverticulitis.

Epidemiology
Diverticulitis most often affects the elderly. In Western countries, diverticular disease most commonly involves the sigmoid colon (95 percent of people with diverticulitis). Diverticulosis affects 5–45% of individuals, with prevalence increasing with age from under 20% at age 40 to 60% by age 60. Left-sided diverticular disease (involving the sigmoid colon) is most common in the West, while right-sided diverticular disease (involving the ascending colon) is more common in Asia and Africa. Among people with diverticulosis, 4–15% may go on to develop diverticulitis.