https://en.wikipedia.org/wiki/Taplow%20Court
Taplow Court is a Victorian house in the village of Taplow in Buckinghamshire, England. Its origins are an Elizabethan manor house, remodelled in the early 17th century. In the 18th century the court was owned by the Earls of Orkney. In the 1850s it was sold to Charles Pascoe Grenfell, whose descendants retained ownership until after the Second World War. The court then served as the corporate headquarters of British Telecommunications Research (BTR), an independent research company set up in 1946 that was subsequently acquired by Plessey Electronics. In 1988 it was bought by the Buddhist foundation Soka Gakkai International, and it serves as their UK headquarters. The court is a Grade II listed building; its present appearance is due to a major rebuilding undertaken by William Burn for Charles Grenfell in 1855–1860. In the early 20th century the court was home to William Grenfell and his wife Ettie, a noted Edwardian hostess, and Taplow Court became a gathering place for The Souls, a group of aristocratic intellectuals.

History

Pevsner and Williamson record the court's "complicated" history. Its origins are an Elizabethan manor, reconstructed after a fire in 1616 by Sir Henry Guildford. In the 18th and early 19th centuries the court was owned by the Earls of Orkney, who also owned the adjacent Cliveden. From 1852 Taplow Court became the home of the Grenfell family, purchased by Charles Pascoe Grenfell in August of that year. It was inherited in 1867 by his grandson William Grenfell, 1st Baron Desborough, under whom a prominent social role was also played by his wife Ettie. Described by her nephew David Cecil as "the most brilliant hostess in an age of brilliant hostesses", Ettie hosted an aristocratic group known as "the Souls" at the house. Visitors included Henry Irving, Vita Sackville-West, Edward VII (then Prince of Wales), Winston Churchill, H. G. Wells, Patrick Shaw Stewart, Edith Wharton and Oscar Wilde.
During the Great War, the Grenfells lost two of their three sons. Julian was killed by a shell splinter in May 1915, and his brother Billy was killed in July the same year. A letter of condolence from Arthur Balfour, printed in the Grenfells' Family Journal, begins: "I do not pretend to offer consolation; in one very real sense there is no consolation to be offered. The blow, the double blow, has fallen and the shock which threatens the very citadel of life can be softened by nothing that I or perhaps any other can do or utter". A stické court was built by William Grenfell at Taplow Court in 1892, and its dimensions subsequently became the standard size for the game. In 1913 Taplow Court was rented by Rodman Wanamaker, the U.S. department-store magnate. After the Second World War, Taplow Court was owned by British Telecommunications Research, a subsidiary of Plessey Electronics, until 1988, when it became a Soka Gakkai International (SGI) Buddhist centre.

Description

The present building dates mainly from William Burn's rebuilding of 1855–1860. Historic England describes Burn's chosen style for the exterior of the court as "early Tudor", and its interior as "Romanesque". Constructed of red brick, with four storeys and slate roofs, it is a Grade II listed building. The interior contains a double-height hall enclosing the Elizabethan inner courtyard, which was undertaken for Lord Orkney and has been attributed to Thomas Hopper, the architect of Penrhyn Castle. Pevsner notes the "beautiful" restoration of the house undertaken by SGI. The grounds have their own Grade II listing on Historic England's Register of Parks and Gardens.
https://en.wikipedia.org/wiki/Sharp%20PC-5000
The Sharp PC-5000 was a pioneering laptop computer, announced by Sharp Corporation of Japan in November 1983. It employed a clamshell design in which the display closes over the keyboard, like the earlier GRiD Compass and the contemporary Gavilan SC. The PC-5000 was largely IBM PC-compatible, with the same 4.77-MHz Intel 8088 processor as the IBM PC, and ran MS-DOS 2.0 (in ROM). It had 128 kilobytes of internal memory (it was one of the few computers to use bubble memory), which could be expanded by the use of plug-in cartridges. The cartridge slots also accepted ROM cartridges containing software, such as the Extended BASIC programming language and the EasyPac software suite, which contained the EasyWrite II word processor, the EasyReport reports program, and the EasyComm terminal software for use with the internal modem. It featured a 640×80-pixel (80-character by 8-line) LCD display, a full-travel keyboard, and an external dual 5.25-inch floppy disk drive. A notable feature of the computer was its built-in thermal printer, which could also be purchased separately and attached to the machine. It is perhaps due to this attachment that the case design of the PC-5000 owes much to that of electronic typewriters of its time. While far more portable than the popular Compaq Portable or Osborne 1 computers, the machine weighed 5 kg (11 lb). Sharp succeeded the PC-5000 with the fully IBM-compatible PC-7000 in late 1985.

Reception

Your Computer magazine selected the PC-5000 as one of the best personal computers of 1983. Creative Computing chose the PC-5000 as the best notebook portable between $1,000 and $2,500 for 1984, while criticizing the difficulty of finding the computer in stores and the questionable support from Sharp and third-party vendors.
https://en.wikipedia.org/wiki/OpenHarmony
OpenHarmony (OHOS, OH) is a family of open-source distributed operating systems based on HarmonyOS and derived from LiteOS; its L0–L2 branch source code was donated by Huawei to the OpenAtom Foundation. Like HarmonyOS, the open-source distributed operating system has a layered architecture consisting of four layers, from bottom to top: the kernel layer, system service layer, framework layer, and application layer. It is also an extensive collection of free software that can be used as a complete operating system, or in parts with other operating systems via Kernel Abstraction Layer subsystems. OpenHarmony supports devices running a mini system, such as printers, speakers, smartwatches, and other smart devices with as little as 128 KB of memory, as well as devices running a standard system with more than 128 MB of memory. The system contains the basic and some advanced capabilities of HarmonyOS, such as DSoftBus technology with a distributed device virtualization platform, a departure from the traditional virtualised guest OS for connected devices. The operating system is oriented towards the Internet of things (IoT) and embedded-devices market, with a diverse range of device support including smartphones, tablets, smart TVs, smart watches, personal computers and other smart devices.

History

The first version of OpenHarmony was launched by the OpenAtom Foundation on September 10, 2020, after receiving the donation of the open-source code from Huawei. In December 2020, the OpenAtom Foundation and Runhe Software officially launched the OpenHarmony open-source project with seven units, including Huawei and the Software Institute of the Chinese Academy of Sciences. OpenHarmony 2.0 (Canary version) was launched in June 2021, supporting a variety of smart terminal devices.
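The multi-kernel Kernel Abstraction Layer idea described above can be sketched as an interface that upper layers program against, regardless of which kernel sits underneath. This is a minimal illustrative sketch, not the real OpenHarmony API; all class and function names are invented.

```python
from abc import ABC, abstractmethod

class KernelAdapter(ABC):
    """Hypothetical Kernel Abstraction Layer (KAL) interface: system
    services call this API without depending on the kernel underneath."""

    @abstractmethod
    def create_task(self, name: str) -> int:
        """Create a kernel task and return an opaque task id."""

class LinuxAdapter(KernelAdapter):
    # Standard-system devices (more than 128 MB RAM) would sit on Linux.
    def create_task(self, name: str) -> int:
        return hash(("linux", name)) & 0xFFFF

class LiteOSAdapter(KernelAdapter):
    # Mini-system devices (as little as 128 KB RAM) would use LiteOS.
    def create_task(self, name: str) -> int:
        return hash(("liteos", name)) & 0xFFFF

def boot(adapter: KernelAdapter) -> int:
    # The upper layers are identical regardless of the kernel chosen
    # at build time; only the adapter differs.
    return adapter.create_task("system_service")
```

The same `boot` routine runs unchanged on either adapter, which is the portability property the KAL design aims for.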
Building on the earlier version, the OpenAtom Foundation launched OpenHarmony 3.0 on September 30, 2021, bringing substantial improvements to the operating system, including support for secure file access (the ability to convert files into URIs and resolve URIs to open files) and basic capabilities for relational databases and distributed data management. A release of OpenHarmony supporting devices with up to 4 GB RAM was made available in April 2021. On August 10, 2022, the OpenAtom Foundation added UniProton, a hardware-based microkernel real-time operating system, to its repository as an add-on within the Kernel subsystem of OpenHarmony.

Development

The primary IDE for building OpenHarmony applications is DevEco Studio, used with the OpenHarmony SDK, a full development kit that includes a comprehensive set of development tools: a debugger, a testing system (DevEco Testing), a repository of software libraries, an embedded device emulator, a previewer, documentation, sample code, and tutorials. Applications for OpenHarmony are mostly built using components of ArkUI, a declarative user-interface framework. ArkUI elements are adaptable to various custom open-source and industry hardware devices, and include new interface rules that update automatically along with HarmonyOS updates. Hardware development uses DevEco Studio via the DevEco Device tool for building on OpenHarmony, including creating distros with the provided operating-system toolchains, verification and certification processes for the platform, and customising the operating system as an open-source variant, in contrast to the original closed distro variant HarmonyOS, which primarily focuses on HarmonyOS Connect partners with Huawei.
The OpenHarmony Application Binary Interface (ABI) ensures compatibility across OpenHarmony-powered devices with a diverse set of chipset instruction-set platforms. HDC (OpenHarmony Device Connector) is a command-line tool tailored for developers working with OpenHarmony devices; its BM command tool facilitates debugging and can be used after entering the HDC shell. Like HarmonyOS, OpenHarmony uses App Pack files suffixed with .app, also known as APP files, on AppGallery and third-party application stores on OpenHarmony-based and non-OpenHarmony operating systems such as the Linux-based Unity Operating System, which benefits interoperability and compatibility. Each App Pack has one or more HarmonyOS Ability Packages (HAP) containing code for their abilities, resources, libraries, and a JSON file with configuration information. While incorporating the OpenHarmony layer for running APP files developed against HarmonyOS APIs, the operating system uses the mainline Linux kernel for devices with larger memory, the RTOS-based LiteOS kernel for memory-constrained devices, and add-on or custom kernels in distros, all within the Kernel Abstract Layer (KAL) subsystem, which is neither kernel-dependent nor instruction-set-dependent. For webview applications, it incorporates the ArkWeb software engine at system level as of the API 11 release, a security-enhanced replacement for the Chromium Embedded Framework nweb software engine that provided Blink-based Chromium in API 5. This contrasts with the open-source Android operating system, where countless third-party dependency packages are repeatedly built into apps, a disadvantage when it comes to fragmentation.
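The App Pack structure described above (one or more HAPs, each declaring its abilities in a JSON configuration file) can be illustrated with a small sketch. The field names below are invented for illustration; the real schema is defined by OpenHarmony's packaging documentation.

```python
import json

# Hypothetical, simplified sketch of a HAP's JSON configuration file.
# Field names ("bundleName", "abilities", ...) are illustrative only.
hap_config = json.loads("""
{
  "bundleName": "com.example.demo",
  "abilities": [
    {"name": "MainAbility", "type": "page"},
    {"name": "DataAbility", "type": "data"}
  ]
}
""")

def list_abilities(config: dict) -> list:
    # Each HAP declares the abilities it contains; an App Pack bundles
    # one or more such HAPs together with resources and libraries.
    return [a["name"] for a in config.get("abilities", [])]

print(list_abilities(hap_config))  # ['MainAbility', 'DataAbility']
```

A packaging tool would read such a file to know which abilities to register when the App Pack is installed.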
The OpenHarmony central repositories, governed by a Special Interest Group under OpenAtom, provide commonly used third-party public repositories for developers in the open-source environment, bringing greater interoperability and compatibility among OpenHarmony-based operating systems. Apps do not require repeatedly built-in third-party dependencies, such as Chromium, Unity and Unreal Engine, which can greatly reduce system ROM volume. The Harmony Distributed File System (HMDFS) is a distributed file system designed for large-scale data storage and processing that is also used in openEuler. Inspired by the Hadoop Distributed File System (HDFS), it is suitable for scenarios where large-scale data storage and processing are essential, such as IoT applications, edge computing, and cloud services. On Orange Pi OS (OHOS), the native file system shows LOCAL and shared_disk via HMDFS. The file path/root folder uses ">" instead of the traditional "/" of Unix-like systems and the "\" of Windows with its DLL (dynamic-link library) system. The access token manager (ATM) is an essential component in OpenHarmony-based distributed operating systems, responsible for unified app permission management based on access tokens. Access tokens serve as identifiers for apps, containing information such as the app ID, user ID, app privilege level (APL), and app permissions. By default, apps can access only limited system resources; ATM ensures controlled access to sensitive functionality, combining RBAC and CBAC models into a hybrid ACL model. The OpenHarmony kernel abstract layer employs the third-party musl libc library and native APIs, providing Portable Operating System Interface (POSIX) support: via Linux syscalls on the Linux-kernel side, and via the POSIX API compatibility inherent in the original LiteOS design on the LiteOS kernel, within the multi-kernel Kernel Abstract Layer architecture.
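The access-token scheme described above (app ID, user ID, APL and granted permissions, checked as a hybrid of level-based and list-based control) can be sketched in a few lines. This is a loose illustrative model, not the real OpenHarmony ATM API; the APL names and permission strings are assumptions.

```python
from dataclasses import dataclass, field

# Illustrative privilege-level ordering; level names are assumed, not
# taken from the real OpenHarmony specification.
APL_ORDER = {"normal": 0, "system_basic": 1, "system_core": 2}

@dataclass(frozen=True)
class AccessToken:
    app_id: str
    user_id: int
    apl: str                       # app privilege level
    permissions: frozenset = field(default_factory=frozenset)

def check_access(token: AccessToken, permission: str, required_apl: str) -> bool:
    # Hybrid check: the app must both hold the permission (ACL-style)
    # and meet the required privilege level (role/level-style).
    return (permission in token.permissions
            and APL_ORDER[token.apl] >= APL_ORDER[required_apl])

camera_app = AccessToken("com.example.cam", 100, "normal",
                         frozenset({"ohos.permission.CAMERA"}))
assert check_access(camera_app, "ohos.permission.CAMERA", "normal")
assert not check_access(camera_app, "ohos.permission.CAMERA", "system_core")
```

The point of the hybrid model is that neither a granted permission nor a high privilege level alone is sufficient; both conditions gate access to sensitive functionality.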
Developers and vendors can create components and applications that work on either kernel based on POSIX standards. The OpenHarmony NDK is a toolset that enables developers to incorporate C and C++ code into their applications; it serves as a bridge between the native world (C/C++) and the OpenHarmony ecosystem. This NAPI method is of vital importance to the open-source community of individual developers, companies and non-profit stakeholders, allowing manufacturers and third-party developers to create libraries for interoperability and compatibility across native open-source and commercial application development, spanning southbound and northbound interface development of richer APIs, e.g. third-party Node.js, Simple DirectMedia Layer, the Qt framework, the LLVM compiler, FFmpeg, etc.

Timeline

September 10, 2020 – Initial release of OpenHarmony with support for devices with 128 KB – 128 MB RAM.
April 2021 – OpenHarmony release with support for smartphones and other devices with 128 MB – 4 GB RAM.
October 2021 – OpenHarmony release with support for additional devices with 4+ GB RAM.

Hardware

OpenHarmony can be deployed on various hardware devices of ARM, RISC-V and x86 architectures, with memory ranging from as little as 128 KB to well over 128 MB. It supports hardware devices with three types of system, as follows:

Mini system – running on devices such as connection modules, sensors, and wearables, with memory equal to or larger than 128 KB and processors including ARM Cortex-M and 32-bit RISC-V.
Small system – running on devices such as IP cameras, routers and event data recorders, with memory equal to or larger than 1 MB and processors including ARM Cortex-A.
Standard system – running on devices with enhanced interaction, 3D GPU, rich animations and diverse components, with memory equal to or larger than 128 MB and processors including ARM Cortex-A.
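The three system types above reduce to a memory-threshold lookup. The sketch below encodes the thresholds stated in the text; the boundaries are the documented minimums, and the function name is invented for illustration.

```python
# Classify a device into one of the three OpenHarmony system types by
# its RAM size, using the minimum-memory thresholds given in the text.
KIB = 1024
MIB = 1024 * 1024

def system_type(ram_bytes: int) -> str:
    if ram_bytes >= 128 * MIB:
        return "standard"   # rich UI, 3D GPU, ARM Cortex-A class
    if ram_bytes >= 1 * MIB:
        return "small"      # IP cameras, routers, event data recorders
    if ram_bytes >= 128 * KIB:
        return "mini"       # sensors, wearables, Cortex-M / 32-bit RISC-V
    return "unsupported"    # below the documented 128 KB minimum

print(system_type(128 * KIB))        # mini
print(system_type(64 * MIB))         # small
print(system_type(4 * 1024 * MIB))   # standard
```

A build system could use such a check to pick the matching kernel (LiteOS for mini/small, Linux for standard) and component set.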
Compatibility certification

To ensure OpenHarmony-based devices are compatible and interoperable within the ecosystem, the OpenAtom Foundation has set up product compatibility specifications, with a Compatibility Working Group to evaluate and certify compatible products. Two types of certification were published for partners supporting the compatibility work, carrying the right to use the OpenHarmony Compatibility Logo on certified products, packaging, and marketing materials: one for development boards, modules, and software distributions, and one for equipment. As of April 25, 2022, 44 products had obtained compatibility certificates, and more than 80 software and hardware products were under evaluation for OpenHarmony compatibility.

Software development

From its open-sourcing in September 2020 to December 2021, more than 1,200 developers and 40 organizations participated in the open-source project and contributed code. At present, OpenHarmony has reached version 4.x.

Software distributions

OpenHarmony is the most active open-source project hosted on the Gitee platform. As of September 2023, it has over 30 open-source software distributions compatible with OpenHarmony for sectors such as education, finance, smart home, transportation, digital government and other industries.

MineHarmony OS

On September 14, 2021, Huawei announced the launch of the commercial, proprietary MineHarmony OS, a customized operating system based on Huawei's in-house HarmonyOS distro, itself based on OpenHarmony, for industrial use. MineHarmony is compatible with about 400 types of underground coal-mining equipment, providing the equipment with a single interface to transmit and collect data for analysis. Wang Chenglu, president of Huawei's consumer business AI and smart full-scenario business department, indicated that the launch of MineHarmony OS signified that the HarmonyOS ecosystem had taken a step further from B2C to B2B.
Midea IoT OS

Midea, a Chinese electrical-appliance manufacturer, launched Midea IoT operating system 1.0, an IoT-centric operating system based on OpenHarmony 2.0, in October 2021. The company had previously used the HarmonyOS operating system, in partnership with Huawei, for its smart devices' compatibility since the June 2, 2021 launch of HarmonyOS 2.0.

OpenHarmony in Space

On January 6, 2022, OpenHarmony in Space (OHIS), by the OHIS Working Group and the Dalian University of Technology led by Yu Xiaozhou, was reported to be a vital play for the future from a scientific and engineering point of view, expected to open up opportunities for development in China's satellite systems and to surpass SpaceX's Starlink plan with micro-nano satellite technology.

SwanLinkOS

Based on OpenHarmony, SwanLinkOS was released in June 2022 by Honghu Wanlian (Jiangsu) Technology Development, a subsidiary of iSoftStone, for the transportation industry. The operating system supports mainstream chipsets, such as the Rockchip RK3399 and RK3568, and can be applied in transportation and shipping equipment for monitoring road conditions, big-data analysis, and maritime search and rescue. It was awarded the OpenHarmony Ecological Product Compatibility Certificate by the OpenAtom Foundation.

ArcherMind HongZOS

On November 7, 2022, ArcherMind Corporation, which deals with operating systems, interconnection solutions, smart innovations, and R&D, launched the HongZOS system, which supports OpenHarmony and HiSilicon chips; the solution mainly focuses on AIoT in industrial sectors.

Orange Pi OS (OHOS)

On November 28, 2022, Orange Pi launched Orange Pi OS based on the open-source OpenHarmony version. In October 2023, they released the Orange Pi 3B board with the Orange Pi OHOS version for hobbyists and developers, based on OpenHarmony 4.0 Beta 1.
RobanTrust OS

On December 23, 2022, Youbo Terminal's integrated software and hardware solution, together with its self-developed hardware products, launched running RobanTrust OS 1.0, based on OpenHarmony with 3.1.1 compatibility.

KaihongOS

On January 14, 2023, the Red Flag smart supercharger was first launched on OpenHarmony-based KaihongOS with OpenHarmony 3.1 support, which supports the distributed soft bus allowing interconnection with other electronic devices and electrical facilities. On January 17, 2023, an electronic class card with a 21.5-inch screen, developed by Chinasoft and New Cape Electronics, followed. On November 17, 2023, Kaihong Technology and Leju Robot collaborated to release the world's first humanoid robot powered by the open-source OpenHarmony distro KaihongOS, on Rockchip SoC hardware, using RTOS kernel technology to give industrial robotic machines deterministic, predictable response times.

USmartOS

On April 15, 2023, Tongxin Software became OpenAtom's OpenHarmony ecological partner. Its intelligent terminal operating system for enterprises in China passed compatibility certification on June 7, 2023. The Tongxin intelligent terminal operating system supports ARM, x86, and other architectures. Tongxin has established cooperative relations with major domestic mobile-chip manufacturers and has completed adaptations using the Linux kernel. Together with its desktop and server operating systems, it constitutes the Tongxin operating-system family.

PolyOS Mobile

PolyOS Mobile is an AIoT open-source operating system tailored for RISC-V intelligent terminal devices by the PolyOS Project, based on OpenHarmony. It was released on August 30, 2023, and is available for QEMU virtualisation on Windows 10 and 11 desktop machines.
LightBeeOS

LightBeeOS, launched on September 28, 2023, is an OpenHarmony-based distro by Shenzhen Zhengtong Company that supports financial-grade security with a distribution bus, used for industrial public-banking systems and tested on ATM machines with UnionPay in the Chinese domestic market. The operating system launched with OpenHarmony 3.2 support and up.

Oniro

On September 28, 2021, the Eclipse Foundation and the OpenAtom Foundation announced their intention to form a partnership to collaborate on a European OpenHarmony distro, a global family of operating systems within the OpenHarmony family. Like OpenHarmony, it follows the one-OS-kit-for-all paradigm and comprises a collection of free software that can be used as a complete operating system or in parts with other operating systems via Kernel Abstraction Layer subsystems on Oniro OS distros. Oniro OS, or simply Oniro, also known as the Eclipse Oniro Core Platform, is a distributed operating system for AIoT embedded systems launched on October 26, 2021, as Oniro OS 1.0. Implemented to be compatible with HarmonyOS and based on the OpenHarmony L0-L2 branch source code, it was launched by the Eclipse Foundation for the global market, with founding members including Huawei, Linaro and Seco, among others who joined later. Oniro is designed on an open-source basis and aims to be a transparent, vendor-neutral and independent system in the era of IoT, with globalisation and localisation strategies to resolve a fragmented IoT and embedded-devices market. The operating system featured a Yocto-based Linux kernel build using the OpenEmbedded build system with BitBake and Poky, now part of the Oniro blueprints that aim to be platform-agnostic; development is now aligned with OpenAtom's development of OpenHarmony.
The goal is to grow the distro with partners that create their own OpenHarmony-Oniro-compatible distros, increasing interoperability and reducing fragmentation across diverse platforms and hardware, with enhancements flowing from derived projects back upstream into the OpenHarmony source branch to improve compatibility with global industrial standards, customised for global markets. It is also used for downstream development, enhancing the OpenHarmony base for global and western markets for compatibility and interoperability with connected IoT systems, as well as custom third-party on-device AI features on frameworks such as TensorFlow, CUDA and others, alongside Huawei's native MindSpore solutions across the entire OpenHarmony ecosystem. The Oniro platform is compatible both with OpenHarmony systems in China and with Huawei's own HarmonyOS platform globally, including western markets, in connectivity and apps.

Development tools

Oniro's tooling includes Rust in a framework alongside the Data Plane Development Kit (DPDK) IP Pipeline and profiling; React Native and Kanto in the application development system on top of OpenHarmony; Servo and Linaro tools in system services; Matter, the open-source, royalty-free connectivity standard that aims to unify smart-home devices and increase their compatibility across platforms, and OSGi in the driver subsystem; IoTex in swappable-kernel development; and Eclipse Theia as the integrated development environment for building Oniro OS apps interoperable with OpenHarmony-based operating systems. Data can be transmitted directly rather than shared via the cloud, enabling low-latency architectures with more secure methods and privacy functions suitable for AIoT and smart-home device integration.
In September 2023, Open Mobile Hub (OMH), led by the Linux Foundation, was formed as an open-source platform ecosystem that aims to simplify and enhance the development of mobile applications for various platforms, including iOS, Android, the OpenHarmony-based global Oniro OS and HarmonyOS (NEXT), with greater cross-platform and open interoperability in mobile via OMH plugins such as Google APIs, Google Drive, OpenStreetMap alongside Bing Maps, Mapbox, Microsoft, Facebook, Dropbox, LinkedIn, X and more. The Open Mobile Hub platform aims to provide a set of tools and resources to streamline the mobile-app development process.

Upstream and downstream software releases

The Oniro project is focused on being a horizontal platform for application processors and microcontrollers. It is an embedded OS using the Yocto build system, with a choice of the Linux kernel, Zephyr, or FreeRTOS. It includes an IP toolchain, maintenance, OTA, and OpenHarmony, and provides example combinations of components for various use cases, called "Blueprints". Oniro OS 2.0 was released in 2022, and Oniro OS 3.0, based on OpenHarmony 3.2 LTS, in October 2023, alongside the latest 4.0 version on the main branch as of December 6, 2023.

HarmonyOS

Huawei officially announced the commercial distro of the proprietary HarmonyOS NEXT, a microkernel-based core distributed operating system for HarmonyOS, at Huawei Developer Conference 2023 (HDC) on August 4, 2023; it supports only native APP apps via the Ark Compiler, with Huawei Mobile Services (HMS) Core support. A proprietary system built on OpenHarmony, HarmonyOS NEXT has the HarmonyOS microkernel at its core, has no APK compatibility, and is built exclusively for the Huawei device ecosystem.
In the long term, as the company builds up the software root in downstream development for both the domestic Chinese and global markets, the closed HarmonyOS NEXT, a customised distribution of the full L0-L2 branch source code of OpenHarmony, is intended to replace the current closed-source L3-L5 branch, forked since OpenHarmony 2.2, which carries 8 GB worth of code up to version 4.x and 60% of the codebase, and was designed with a dual-frame architecture compatible with Android via the EMUI userland within HarmonyOS's multi-kernel architecture: the current Linux kernel on phones, tablets, cars, TVs and advanced wearables, alongside the lightweight LiteOS kernel on basic wearables and various IoT smart devices. On the same day at HDC 2023, the developer preview version of HarmonyOS NEXT was opened for cooperating enterprise developers to build and test native mobile apps; according to the official announcement, it will be open to all developers in the first quarter of 2024. On 18 January 2024, Huawei announced that the stable rollout of HarmonyOS NEXT Galaxy would begin in Q4 2024, based on the OpenHarmony 5.0 (API 12) version, following an OpenHarmony 4.1 (API 11)-based Q2 Developer Beta and the public developer release of HarmonyOS NEXT Developer Preview 1, which had been in the hands of closed cooperative developer partners since its August 2023 debut. The new HarmonyOS 5 system will replace the previous HarmonyOS 4.2 system on commercial Huawei consumer devices, which will then run only native apps built for HarmonyOS and OpenHarmony, with localisation via Oniro OS for downstream development at the global level, customised to global markets and standards and enhancing OpenHarmony development.
On June 21, 2024, at its HDC 2024 conference, Huawei announced and released the Developer Beta milestone of HarmonyOS NEXT, based on the OpenHarmony 5.0 beta 1 version, for registered public developers, with the HMS Core library embedded in a native NEXT-specific API Developer Kit alongside supported compatible OpenHarmony APIs for native OpenHarmony-based HarmonyOS apps. The company officially confirmed the operating system is OpenHarmony-compatible with the new boot image system. On October 22, 2024, Huawei launched HarmonyOS 5.0.0 at its launch event, upgrading the HarmonyOS NEXT internal and public developer software versions and completing the transition from the dual-framework of previous mainline HarmonyOS versions to a full OpenHarmony base with a custom HarmonyOS kernel on the original L0-L2 codebase branch, officially marking it as a commercial operating system and ecosystem independent of Android fork dependencies, with 15,000+ native apps launched on the platform. As a result, OpenHarmony-based systems, including Oniro-based systems, are intended to be compatible with native HarmonyOS HAP apps, the NearLink wireless connectivity stack, and cross-device operation with the upgraded DSoftBus connectivity.

Relationship with openEuler

In terms of architecture, OpenHarmony, alongside HarmonyOS, has a close relationship with the server-oriented multi-kernel operating system openEuler, the community edition of EulerOS, as the two have implemented the sharing of kernel technology, as revealed by Deng Taihua, president of Huawei's computing product line. The sharing is reportedly to be strengthened in the areas of the distributed software bus, app framework, system security, the device-driver framework and a new programming language on the server side. The Harmony Distributed File System (HMDFS), a distributed file system designed for large-scale data storage and processing, is also used in the openEuler server operating system.
Developer Kit Devices

Hi3861-based HiSpark WiFi IoT development board, released in October 2020 with OpenHarmony support alongside LiteOS.
Raspberry Pi, ported to OpenHarmony 3.0 in November 2021.
Zilong development board with MIPS architecture and 1c300B chip, December 2021, powered by OpenHarmony 3.0.
HiHope HH-SCDAYU200, released in May 2022 by HopeRun Software using Runhe Software's HiHope OS based on OpenHarmony, with Rockchip's RK3568 processor; also ported to the OpenHarmony-based Oniro OS.
HopeRun's HiHope development board with HiSilicon Hi3861V100 32-bit RISC-V microcontroller, compatible with OpenHarmony, launched in September 2022.
Niobe U4 development board kit by Kaihong Zhigu, October 2022.
Shenzhen Kaihong KHDVK-3566B smart-screen development board running the OpenHarmony-based KaihongOS embedded operating system, October 2022.
Xianji Semiconductor Technology HPM6700 processor development board, November 2022, built for OpenHarmony.
ChinaSoft development board, released December 2022.
Unionpi Lion board based on an SV823 chip, launched in February 2023; it includes a self-developed NPU and is capable of high-quality image processing, encoding, and decoding running OpenHarmony.
HH-SCDAYU210 board, launched in May 2023, powered by OpenHarmony with Rockchip RK3588.
Developer Phone powered by OpenHarmony, released by Shenzhen Qianhai New Silk Road Technology Co., Ltd in October 2023.
Raspberry Pi 4B development board with OpenHarmony port, February 2024.
MILOS_Standard0 with NXP i.MX8M Mini, powered by OpenHarmony.
Yangfan development board.
Huawei HiSilicon Hispark_Taurus.
BearPi-HM MicroB.
Multi-modal V200Z-R.
Langguo LANGO200.
Goodix GR5515-STARTER-KIT.
Niobe407.
B91 Generic Starter Kit.
cst85_wblink.
Neptune100, released in May 2022.
RK2206.
Purple Pi OH and Purple Pi OH Pro, with Rockchip RK3566 chip, powered by OpenHarmony, March 2024.
See also

HarmonyOS NEXT
EulerOS
BlueOS

References

External links
https://en.wikipedia.org/wiki/Market%20data
In finance, market data is price and other related data for a financial instrument reported by a trading venue such as a stock exchange. Market data allows traders and investors to know the latest price and see historical trends for instruments such as equities, fixed-income products, derivatives, and currencies. The market data for a particular instrument includes the identifier of the instrument and where it was traded, such as the ticker symbol and exchange code, plus the latest bid and ask price and the time of the last trade. It may also include other information such as volume traded, bid and offer sizes, and static data about the financial instrument that may have come from a variety of sources. It is used in conjunction with related financial reference data, which is typically distributed ahead of market data. A number of financial data vendors specialize in collecting, cleaning, collating, and distributing market data, and this has become the most common way that traders and investors get access to it. Delivery of price data from exchanges to users, such as traders, is highly time-sensitive, and specialized technologies designed to handle the collection and throughput of massive data streams are used to distribute the information to traders and investors. The speed at which market data is distributed can become critical when trading systems are based on analyzing the data before others are able to, such as in high-frequency trading. Market price data is not only used in real time to make on-the-spot decisions about buying or selling; historical market data can also be used to project pricing trends and to calculate market risk on portfolios of investments held by an individual or an institutional investor.
Data structure

A typical equity market data message or business object furnished from NYSE, TSX, or NASDAQ carries fields such as the ticker symbol, exchange code, bid, ask, bid size, ask size, last sale, last size, cumulative volume, and a timestamp. Such a message is an aggregation of different sources of data, as quote data (bid, ask, bid size, ask size) and trade data (last sale, last size, volume) are often generated over different data feeds.

Delivery of data

Delivery of price data from exchanges to users is highly time-sensitive. Specialized software and hardware systems called ticker plants are designed to handle the collection and throughput of massive data streams, displaying prices for traders and feeding computerized trading systems fast enough to capture opportunities before markets change. When stored, historical market data is a type of time series data.

Latency is the time lag in the delivery of real-time data: the lower the latency, the faster the data transmission speed. Processing of large amounts of data with minimal delay is low latency. The delivery of data has increased in speed dramatically since 2010, with "low" latency meaning delivery under 1 millisecond. Competition for low-latency data has intensified with the rise of algorithmic and high-frequency trading and the need for competitive trade performance.

Market data generally refers to either real-time or delayed price quotations. The term also includes static or reference data, that is, any type of data related to securities that is not changing in real time. Reference data includes identifier codes such as ISIN codes, the exchange a security trades on, end-of-day pricing, the name and address of the issuing company, the terms of the security (such as dividends, or the interest rate and maturity on a bond), and outstanding corporate actions (such as pending stock splits or proxy votes) related to the security. While price data generally originates from the exchanges, reference data generally originates from the issuer.
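As a minimal sketch, an aggregated quote-and-trade message of the kind described under Data structure can be represented as a simple record; the field names and values here are illustrative, not any exchange's actual wire format:

```python
# Illustrative aggregated market data message. Field names and values
# are hypothetical examples, not a real exchange schema.
message = {
    "symbol": "IBM",                 # ticker symbol
    "exchange": "NYSE",              # exchange code
    "bid": 142.50, "bid_size": 300,  # from the quote feed
    "ask": 142.53, "ask_size": 500,  # from the quote feed
    "last_sale": 142.51,             # from the trade feed
    "last_size": 100,                # from the trade feed
    "volume": 1_254_300,             # cumulative volume traded
    "trade_time": "2024-05-01T14:32:07.412Z",
}

# The consolidated view is built by joining the quote feed and the
# trade feed on the instrument identifier.
spread = message["ask"] - message["bid"]
```

A consumer working from such a record can derive values like the bid-ask spread directly, while the venue and timestamp fields tie the figures back to their source.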
Before investors and traders receive price or updated reference data, financial data vendors may reformat, organize, and attempt to correct obvious outliers due to data feed or other real-time collection errors. For consumers of market data, primarily the financial institutions and industry utilities serving the capital markets, the complexity of managing market data rose with the increase in the number of issued securities, the number of exchanges, and the globalization of capital markets. Beyond the rising volume of data, the continuing evolution of complex derivatives and indices, along with new regulations designed to contain risk and protect markets and investors, created more operational demands on market data management. Initially, individual financial data vendors provided data for software applications in financial institutions that were specifically designed for one data feed, giving that financial data vendor control of that area of operations. Later, many of the larger investment banks and asset management firms started to design systems that would integrate market data into one central store. This drove investments in large-scale enterprise data management systems which collect, normalize, and integrate feeds from multiple financial data vendors, with the goal of building a "single version of the truth" data repository supporting every kind of operation throughout the institution. Beyond the operational efficiency gained, this data consistency became increasingly necessary to enable compliance with regulatory requirements, such as Sarbanes-Oxley, Regulation NMS, and the Basel II accord.

Industry bodies

There are various industry bodies that focus on market data:

FISD – Based in Washington, DC, the Financial Information Services Division (FISD) of the Software and Information Industry Association operates globally and consists of three constituency groups: consumer firms, vendor firms, and exchanges.
IPUG – The Information Providers User Group (IPUG) is a UK-based organization whose membership is limited to consumer firms. Its main activities consist of lobbying vendor firms on key issues.
COSSIOM – Commission des Services et Systèmes d'Informations destinés aux Opérateurs de Marchés (COSSIOM) is the Paris-based organization for French consumer firms.
SEC – The Securities and Exchange Commission (SEC) is an independent government agency whose role is to protect investors and oversee securities markets. The SEC helps regulate data management, transparency, and auditing of trading patterns in the market. For example, a recent regulatory action taken by the SEC is the adoption of Rule 613, also known as the Consolidated Audit Trail.
CFTC – The U.S. Commodity Futures Trading Commission oversees the markets and their participants, monitors liquidity and systemic risk, regulates compliance, and enforces the Commodity Exchange Act (CEA). The CFTC uses data sourced from market data providers to perform its functions and publish reports on the health of the derivatives market, including the Commitment of Traders report, Cotton on Call, and the Weekly Swaps Report.
FINRA – FINRA (Financial Industry Regulatory Authority) is a non-government, self-regulatory organization that regulates member brokerage firms and exchange markets.
CTA – The Consolidated Tape Association (CTA) operates one of the Securities Information Processors in the United States.
UTP Plan – The UTP Plan operates the Securities Information Processors for securities listed on Nasdaq and over-the-counter securities.
OPRA – The Options Price Reporting Authority (OPRA) operates the Securities Information Processor for equity options in the United States.
SIAC – The Securities Industry Automation Corporation (SIAC) operates the CTA and OPRA SIPs.

Technology solutions

The business of providing technology solutions to financial institutions for data management has grown over the past decade, as market data management has emerged from a little-known discipline for specialists into a high-priority issue for the entire capital markets industry and its regulators. Providers range from middleware and messaging vendors, to vendors of cleansing and reconciliation software and services, to vendors of highly scalable solutions for managing the massive loads of incoming and stored reference data that must be maintained for daily trading, accounting, settlement, risk management, and reporting to investors and regulators. Market data distribution platforms are designed to transport large amounts of data from financial markets over the network. They are intended to respond to fast changes on the financial markets, compressing or representing data using specially designed protocols to increase throughput and/or reduce latency. Most market data servers target Solaris or Linux; however, some have versions for Windows.

Feed handlers

A typical usage is a "feed handler" solution. Applications (sources) receive data from a specific feed and connect to a server (authority), which accepts connections from clients (destinations) and redistributes the data further. When a client (destination) wants to subscribe to an instrument (to open an instrument), it sends a request to the server (authority); if the server does not have the information in its cache, it forwards the request to the source(s). Each time a server (authority) receives updates for an instrument, it sends them to all clients (destinations) subscribed to it.
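The subscribe-and-forward flow of a feed handler can be sketched as below; the class and method names are illustrative only and are not taken from any real distribution platform:

```python
# Minimal sketch of a feed-handler "authority": it caches instrument
# data, forwards cache misses upstream to the source, and fans updates
# out to subscribed destinations. Names are illustrative assumptions.
class Authority:
    def __init__(self, source):
        self.source = source        # upstream application (source)
        self.cache = {}             # instrument -> last known data
        self.subscribers = {}       # instrument -> set of destinations

    def subscribe(self, destination, instrument):
        """A client (destination) opens an instrument."""
        self.subscribers.setdefault(instrument, set()).add(destination)
        if instrument not in self.cache:
            # Not in cache: forward the request to the source.
            self.cache[instrument] = self.source.request(instrument)
        destination.receive(instrument, self.cache[instrument])

    def unsubscribe(self, destination, instrument):
        """A client closes an instrument; no further updates are sent."""
        self.subscribers.get(instrument, set()).discard(destination)

    def on_update(self, instrument, data):
        """Each update from the source goes to all current subscribers."""
        self.cache[instrument] = data
        for destination in self.subscribers.get(instrument, ()):
            destination.receive(instrument, data)
```

The cache is what lets the authority answer repeat subscriptions without another round trip to the source, which is the point of placing it between sources and destinations.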
Notes:
A client (destination) can unsubscribe itself from an individual instrument (close the instrument), after which no further updates will be sent.
When the connection between authority and destination breaks, all requests made by the client are dropped.
A server (authority) can handle a large number of client connections, though usually a relatively small number of clients are connected to the same server at the same time.
A client (destination) usually has a small number of open instruments, though larger numbers are also supported.

The server has two levels of access permission:
Login permission – whether the client is allowed to connect to the server.
Information permission – whether the client is allowed to view information about the current instrument. This check is usually made by inspecting the contents of the instrument.

Types of market data vendors

Exchanges
Hosting providers
Ticker plant providers
Feed providers
Software providers

Market data needs

Market data requirements depend on the need for customization, latency sensitivity, and market depth.
Customization: how much operational control a firm has over its market data infrastructure.
Latency sensitivity: how important high-speed market data is to a trading strategy.
Market depth: the volume of quotes in a market data feed.

Market data fees

There are five types of market data fees charged by exchanges and financial data vendors: access fees, user fees, non-display fees, redistribution fees, and market data provider fees.

Management

Market data is expensive (global expenditure exceeds $50 billion yearly) and complex (data variety, functionality, technology, billing). Therefore, it needs to be managed professionally.
Professional market data management deals with issues such as:
Inventory management
Contract management
Cost management
Change management
Invoice reconciliation and administration
Permissioning
Reporting
Budgeting
Demand management
Technology management
Vendor management

Mobile applications

Financial data vendors typically also offer mobile applications that provide market data in real time to financial institutions and consumers.

See also

Financial data vendor
Reference data (financial markets)
Stock market data systems

References
https://en.wikipedia.org/wiki/Sexercise
Sexercise is physical exercise performed in preparation for sexual activity and designed to tone, build, and strengthen muscles. Sexercises are often performed as part of a sex-diet lifestyle, which seeks to maximize the health benefits of regular sexual activity. Sexercise is held to increase and steady the flow of oxygenated blood, along with other beneficial chemical compounds, to the genitalia, which is important for fertility and during intercourse.

Routines

Sexercises range from Kegel exercises to aerobic and cardiovascular routines. Flexibility training, including contortion practised specifically for erotic or sexual positions, may also be undertaken.

References

The Dieter's Guide to Weight Loss During Sex by Richard Smith; Workman Publishing, 1978 (OCLC 1109229)
https://en.wikipedia.org/wiki/NOAA-16
NOAA-16, also known as NOAA-L before launch, was an operational polar-orbiting weather satellite in the NOAA K-N series operated by the National Environmental Satellite Service (NESS) of the National Oceanic and Atmospheric Administration (NOAA). NOAA-16 continued the series of Advanced TIROS-N (ATN) spacecraft that began with the launch of NOAA-8 (NOAA-E) in 1983, but carried new and improved instrumentation over the earlier series and used a new launch vehicle (Titan 23G). It was launched on 21 September 2000 and, following an unknown anomaly, was decommissioned on 9 June 2014. In November 2015 it broke up in orbit, creating more than 200 pieces of debris.

Launch

NOAA-16 was launched on a Titan 23G launch vehicle on 21 September 2000 at 10:22 UTC from Vandenberg Air Force Base, Space Launch Complex 4 (SLC-4W), into a Sun-synchronous orbit 843 km above the Earth, orbiting every 102.10 minutes. NOAA-16 was in a morning equator-crossing orbit and replaced NOAA-14 as the prime morning spacecraft.

Spacecraft

The goal of the NOAA/NESS polar orbiting program is to provide output products used in meteorological prediction and warning, oceanographic and hydrologic services, and space environment monitoring. The polar orbiting system complements the NOAA/NESS geostationary meteorological satellite program (GOES). The NOAA-16 Advanced TIROS-N spacecraft was based on the Defense Meteorological Satellite Program (DMSP Block 5D) spacecraft and was a modified version of the ATN spacecraft (NOAA 6-11, 13-15), adapted to accommodate the new instrumentation, supporting antennas, and electrical subsystems. The spacecraft structure consisted of four components: (1) the Reaction System Support (RSS); (2) the Equipment Support Module (ESM); (3) the Instrument Mounting Platform (IMP); and (4) the Solar Array (SA).

Instruments

All of the instruments were located on the ESM and the IMP.
The spacecraft power was provided by a direct energy transfer system from the single solar array, which consisted of eight panels of solar cells. The in-orbit Attitude Determination and Control Subsystem (ADACS) provided three-axis pointing control by controlling torque in three mutually orthogonal momentum wheels with input from the Earth Sensor Assembly (ESA) for pitch, roll, and yaw updates. The ADACS controlled the spacecraft attitude so that the orientation of the three axes was maintained to within ±0.2° and pitch, roll, and yaw to within 0.1°. The ADACS consisted of the Earth Sensor Assembly (ESA), the Sun Sensor Assembly (SSA), four Reaction Wheel Assemblies (RWA), two roll/yaw coils (RYC), two pitch torquing coils (PTC), four gyros, and computer software for data processing. The ATN data handling subsystem consisted of the TIROS Information Processor (TIP) for low-data-rate instruments, the Manipulated Information Rate Processor (MIRP) for the high-data-rate AVHRR, digital tape recorders (DTR), and a cross strap unit (XSU). The NOAA-16 instrument complement consisted of: (1) an improved six-channel Advanced Very High Resolution Radiometer/3 (AVHRR/3); (2) an improved High Resolution Infrared Radiation Sounder (HIRS/3); (3) the Search and Rescue Satellite Aided Tracking System (SARSAT), which consists of the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2); (4) the French/CNES-provided improved Argos Data Collection System (Argos DCS-2); (5) the Solar Backscatter Ultraviolet Spectral radiometer (SBUV/2); and (6) the Advanced Microwave Sounding Unit (AMSU), which consists of three separate modules, A1, A2, and B, replacing the previous MSU and SSU instruments. The spacecraft also hosts the Automatic Picture Transmission (APT) transmitter for the AMSU, AVHRR, and HIRS instruments.
NOAA-16 carried the same suite of instruments as NOAA-15, plus an SBUV/2 instrument.

Advanced Very High Resolution Radiometer (AVHRR/3)

The AVHRR/3 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is an improved instrument over previous AVHRRs. The AVHRR/3 adds a sixth channel and is a cross-track scanning instrument providing imaging and radiometric data in the visible, near-IR, and infrared of the same area on the Earth. Data from the visible and near-IR channels provide information on vegetation, clouds, snow, and ice. Data from the near-IR and thermal channels provide information on land and ocean surface temperature and the radiative properties of clouds. Only five channels can be transmitted simultaneously, with channels 3A and 3B being switched for day/night operation. The instrument produces data in High Resolution Picture Transmission (HRPT) mode at 1.1 km resolution or in Automatic Picture Transmission (APT) mode at a reduced resolution of 4 km. The AVHRR/3 scans 55.4° on either side of the orbital track and scans 360 lines per minute. The six channels are: (1) channel 1, visible (0.58-0.68 μm); (2) channel 2, near-IR (0.725-1.0 μm); (3) channel 3A, near-IR (1.58-1.64 μm); (4) channel 3B, infrared (3.55-3.93 μm); (5) channel 4, infrared (10.3-11.3 μm); and (6) channel 5, infrared (11.5-12.5 μm).

High Resolution Infrared Sounder (HIRS/3)

The improved HIRS/3 on the ATN NOAA K-N series of polar orbiting weather satellites is a 20-channel, step-scanned, visible and infrared spectrometer designed to provide atmospheric temperature and moisture profiles. The HIRS/3 instrument is basically identical to the HIRS/2 flown on previous spacecraft except for changes in six spectral bands to improve sounding accuracy. The HIRS/3 is used to derive water vapor, ozone, and cloud liquid water content.
The instrument scans 49.5° on either side of the orbital track with a ground resolution at nadir of 17.4 km. The instrument produces 56 IFOVs for each 1,125 km scan line, at 42 km between IFOVs along-track. The instrument consists of 19 infrared channels and 1 visible channel, centered at 14.95, 14.71, 14.49, 14.22, 13.97, 13.64, 13.35, 11.11, 9.71, 12.45, 7.33, 6.52, 4.57, 4.52, 4.47, 4.45, 4.13, 4.0, 3.76, and 0.69 μm.

Advanced Microwave Sounding Unit (AMSU-A)

The AMSU was an instrument on the Advanced TIROS-N (ATN) NOAA K-N series of operational meteorological satellites. The AMSU consisted of two functionally independent units, AMSU-A and AMSU-B. The AMSU-A was a line-scan instrument designed to measure scene radiance in 15 channels, ranging from 23.8 to 89 GHz, to derive atmospheric temperature profiles from the Earth's surface to about the 3-millibar pressure height. The instrument was a total power system having a field of view (FOV) of 3.3° at half-power points. The antenna provided a cross-track scan of 50° on either side of the orbital track at nadir, with a total of 30 IFOVs per scan line. The AMSU-A was calibrated on board using a blackbody and space as references. The AMSU-A was physically divided into two separate modules which interfaced independently with the spacecraft. The AMSU-A1 contained all of the 5 mm oxygen channels (channels 3-14) and the 89 GHz channel. The AMSU-A2 module consisted of the two low-frequency channels (channels 1 and 2). The 15 channels had center frequencies at 23.8, 31.4, 50.3, 52.8, 53.6, 54.4, 54.94, 55.5, six at 57.29, and 89 GHz.

Advanced Microwave Sounding Unit (AMSU-B)

The AMSU-B was a line-scan instrument designed to measure scene radiance in five channels, ranging from 89 GHz to 183 GHz, for the computation of atmospheric water vapor profiles.
The AMSU-B was a total power system with a field of view (FOV) of 1.1° at half-power points. The antenna provided a cross-track scan, scanning 50° on either side of the orbital track with 90 IFOVs per scan line. On-board calibration was accomplished with blackbody targets and space as references. The AMSU-B center frequencies (GHz) were: 90, 157, and three channels at 183.31.

Space Environment Monitor-2 (SEM-2)

The SEM-2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites provides measurements to determine the population of the Earth's radiation belts and data on charged particle precipitation in the upper atmosphere as a result of solar activity. The SEM-2 consists of two separate sensors: the Total Energy Detector (TED) and the Medium Energy Proton/Electron Detector (MEPED). In addition, the SEM-2 includes a common Data Processing Unit (DPU). The TED uses eight programmed-swept electrostatic curved-plate analyzers to select particle type and energy, and Channeltron detectors to measure the intensity in the selected energy bands. The particle energies range from 50 eV to 20 keV. The MEPED detects protons, electrons, and ions with energies from 30 keV to several tens of MeV. The MEPED consists of four directional solid-state detector telescopes and four omnidirectional sensors. The DPU sorts and counts the events, and the results are multiplexed and incorporated into the satellite telemetry system. Once received on the ground, the SEM-2 data is separated from the rest of the data and sent to the NOAA Space Environment Laboratory in Boulder, Colorado, for processing and dissemination.

Search and Rescue Satellite Aided Tracking System (SARSAT)

The SARSAT on the Advanced TIROS-N NOAA K-N series of polar orbiting meteorological satellites is designed to detect and locate Emergency Locator Transmitters (ELTs) and Emergency Position-Indicating Radio Beacons (EPIRBs).
The SARSAT instrumentation consists of two elements: the Search and Rescue Repeater (SARR) and the Search and Rescue Processor (SARP-2). The SARR is a radio frequency (RF) system that accepts signals from emergency ground transmitters in three VHF/UHF ranges (121.5 MHz, 243 MHz, and 406.05 MHz) and translates, multiplexes, and transmits these signals at an L-band frequency (1.544 GHz) to local search and rescue stations (Local User Terminals, or LUTs) on the ground. The location of the transmitter is determined by retrieving the Doppler information in the relayed signal at the LUT. The SARP-2 is a receiver and processor that accepts digital data from emergency ground transmitters at UHF and demodulates, processes, stores, and relays the data to the SARR, where they are combined with the three SARR signals and transmitted via the L-band frequency to local stations.

ARGOS Data Collection System (Argos DCS-2)

The DCS-2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a random-access system for the collection of meteorological data from in situ platforms (movable and fixed). The Argos DCS-2 collects telemetry data using a one-way RF link from data collection platforms (such as buoys, free-floating balloons, and remote weather stations) and processes the inputs for on-board storage and later transmission from the spacecraft. For free-floating platforms, the DCS-2 system determines the position to within 5 to 8 km RMS and velocity to an accuracy of 1.0 to 1.6 m/s RMS. The DCS-2 measures the incoming signal frequency and time. The formatted data are stored on the satellite for transmission to NOAA stations. The DCS-2 data is stripped from the GAC data by NOAA/NESDIS and sent to the Argos center at CNES in France for processing, distribution to users, and archiving.
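The Doppler principle behind both SARSAT and Argos location can be sketched as follows; the function and the velocity figures are illustrative assumptions, not the operational processing:

```python
# Classical Doppler approximation for a 406.05 MHz beacon observed
# from a moving satellite: f_received = f0 * (1 - v_r / c), where
# v_r > 0 means the range to the beacon is increasing. The moment the
# received frequency crosses f0 (zero Doppler) marks the satellite's
# closest approach, which constrains the transmitter's position.
C = 299_792_458.0   # speed of light, m/s
F0 = 406.05e6       # beacon transmit frequency, Hz

def received_frequency(radial_velocity_mps: float) -> float:
    return F0 * (1.0 - radial_velocity_mps / C)

# Approaching (v_r < 0) raises the observed frequency; receding
# lowers it. A ~7 km/s radial rate shifts 406 MHz by roughly 9.5 kHz.
shift_approaching = received_frequency(-7000.0) - F0
shift_receding = received_frequency(+7000.0) - F0
```

Tracking how this shift evolves over a pass, rather than a single measurement, is what lets the ground processing narrow the transmitter's position to the few-kilometer accuracy quoted above.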
Solar Backscatter Ultraviolet Radiometer (SBUV/2)

The SBUV/2 on the Advanced TIROS-N (ATN) NOAA K-N series of polar orbiting meteorological satellites is a dual-monochromator ultraviolet grating spectrometer for stratospheric ozone measurements. The SBUV/2 is designed to measure scene radiance and solar spectral irradiance in the ultraviolet spectral range from 160 to 406 nm. Measurements are made in discrete mode or sweep mode. In discrete mode, measurements are made in 12 spectral bands from which the total ozone and the vertical distribution of ozone are derived. In sweep mode, a continuous spectral scan from 160 to 406 nm is made, primarily for computation of ultraviolet solar spectral irradiance. The 12 spectral channels are (nm): 252.0, 273.61, 283.1, 287.7, 292.29, 297.59, 301.97, 305.87, 312.57, 317.56, 331.26, and 339.89.

Telecommunications

The TIP formats low-bit-rate instrument and telemetry data for the tape recorders and direct read-out. The MIRP processes the high-data-rate AVHRR output for the tape recorders (GAC) and direct read-out (HRPT and LAC). The on-board recorders can store 110 minutes of GAC, 10 minutes of HRPT, and 250 minutes of TIP data.

Anomaly, Decommissioning and Breakup

The Automatic Picture Transmission (APT) of NOAA-16 became inoperable due to sensor degradation on 15 November 2000, and High Resolution Picture Transmission (HRPT) was done via STX-1 (1698 MHz) starting on 9 November 2010. On 6 June 2014, NOAA-16 controllers were unable to establish contact with the satellite due to an undefined "critical anomaly". After extensive engineering analysis and recovery efforts, it was determined that recovery of the mission was not possible, and the satellite was decommissioned on 9 June 2014. On 25 November 2015, at 08:16 UTC, the Joint Space Operations Center (JSpOC) identified a possible breakup of NOAA-16 (#26536).
All associated objects were added to conjunction assessment screenings, and satellite operators were notified of close approaches between the debris and active satellites. The JSpOC catalogs the debris objects as sufficient data becomes available. As of 26 March 2016, 275 pieces of debris were being tracked. The debris posed no danger to other satellites at the time, and there was no indication that a collision had caused the breakup of NOAA-16. The debris distribution suggested battery rupture as a possible cause of the breakup, similar to the breakups of the Defense Meteorological Satellite Program's DMSP F-13 and F-11 and of NOAA-17. DMSP F-13 was known to have battery overcharge issues.

References

External links

NOAA-16 Satellite Position Orbital Tracking
https://en.wikipedia.org/wiki/Scientist%E2%80%93practitioner%20model
The scientist–practitioner model, also called the Boulder Model, is a training model for graduate programs that provides applied psychologists with a foundation in research and scientific practice. It was initially developed to guide clinical psychology graduate programs accredited by the American Psychological Association (APA). David Shakow created the first version of the model and introduced it to the academic community. From 1941 until 1949, Shakow presented the model to a series of committees, where its core tenets developed further. The model changed minimally from its original version because it was received extremely well at all of the conferences. At the Boulder Conference of 1949, this model of training for clinical graduate programs was proposed; there, it received the endorsement of the psychological community and the American Psychological Association. The goal of the scientist–practitioner model is to increase scientific growth within clinical psychology in the United States. It calls for graduate programs to engage and develop psychologists' background in psychological theory, field work, and research methodology. The scientist–practitioner model urges clinicians to allow empirical research to influence their applied practice, while simultaneously allowing their experiences during applied practice to shape their future research questions, thereby continuously advancing and refining the scientific paradigms of the field.

History

After World War I, returning veterans reported decreased life satisfaction after serving. This was partly due to the lack of clinical psychologists available to treat victims of "shell shock" (now known as post-traumatic stress disorder). At this time, psychology was primarily an academic discipline, with just a few thousand practicing clinicians. The Second World War also influenced the development of the Boulder Model by fueling the growth of clinical psychology.
Psychiatrists in the US military requested help from psychologists in efforts to treat the "psychological and psychiatric casualties the war was producing" (p. 426). In order to increase life satisfaction for World War II veterans, the federal government increased funding to clinical psychology graduate programs and created the GI Bill. As a result, after the war, psychology graduate programs flourished with applicants and resources. The field's increasing popularity called for action by the academic community to establish universal standards for educating graduate psychologists. Although the model has not been as prominent in industrial/organizational (I/O) psychology, Campbell acknowledged that the model later influenced I/O psychology (see page 447).

Development

David Shakow is largely responsible for the ideas and development of the Boulder Model. On May 3, 1941, while he was chief psychologist at Worcester State Hospital, Shakow drafted his first training plan for educating clinical psychology graduate students during a conference at the New York Psychiatric Institute, now referred to as Shakow's 1941 American Association for Applied Psychology Report. In the report, Shakow outlined a four-year education track:

Year 1: establish a strong foundation in psychology and other applied sciences
Year 2: learn the therapeutic principles and practices needed to treat patients
Year 3: internship; gain supervised field experience
Year 4: complete a research dissertation

Overall, the report aimed to help clinical graduate students perfect their abilities in diagnosis, therapy, and scientific research. The report was endorsed, and its review was recommended to the American Association for Applied Psychology (AAAP). Later in the year, the AAAP accepted the recommendation and planned a conference to address training guidelines for graduate programs.
The following year, the Penn State Conference was held with three subcommittees containing representatives from educational institutions, health establishments, and business/industry. These measures were taken to ensure that the final model was not biased towards Shakow's profession, although only minute changes were made to his original model. In 1944, a conference was held at the Vineland training school to reexamine Shakow's report, and the American Association for Applied Psychology was integrated into the American Psychological Association. Meanwhile, increased demand for professional psychologists prompted the United States Public Health Service (USPHS) and the Veterans Administration (VA) to increase funding for clinical psychology graduate programs. With more resources at hand, APA president Carl Rogers asked David Shakow to chair the Committee on Training in Clinical Psychology (CTCP). This committee's primary responsibility was to decide upon an effective model for education at the graduate level. Shakow's revised report was published in the Journal of Consulting Psychology in 1945, titled Graduate Internship Training in Psychology. Shakow presented his published report to the CTCP and received minimal critique, so the committee submitted the report to the APA for approval. The APA endorsed Shakow's training model and published it in the American Psychologist, declaring it the agenda for an upcoming conference on training methods in clinical graduate programs. By December, the report was known as "The Shakow Report". The CTCP members made site visits to and evaluations of universities that had clinical graduate programs. At a joint meeting of the USPHS and the CTCP, a six-week conference was suggested to discuss reported inconsistencies in current clinical training programs. The conference would be sponsored by the APA and granted $40,000 in financial backing by the USPHS.
In January 1949, a planning meeting for the upcoming conference was held in Chicago by members of the CTCP and representatives from the APA board of directors. Here, details including the conference's name, attendees, and location were decided upon. The planning committee agreed to name the conference the Boulder Conference on Graduate Education in Clinical Psychology, and invited participants from a variety of disciplines. The conference would be held at the University of Colorado at Boulder, allowing participants to attend the subsequent annual meeting of the APA scheduled in Denver. Boulder Conference The Boulder Conference met from August 20 to September 3, 1949. A total of 73 committee members attended, representing academic and applied psychology, medicine, and the educational disciplines. The conference's goal was to agree upon a standard training plan for clinical psychologists. The Shakow Report was on the agenda and was received with unanimous support. Because of this consensus, the Shakow Report is now referred to as the Boulder Model. The model aims to teach clinical graduate students to adhere to the scientific method when executing their applied practices. It states that in order to master these techniques, graduate students need to attend seminars and lectures that strengthen their background in psychology, complete monitored field work, and receive research training. Ultimately, most psychologists specialize in either academic research or applied practice, but the model argues that sufficient knowledge of the entire field enhances a psychologist's ability to perform their specialty. Criticisms Despite the Boulder Model's widespread adoption by graduate psychology programs, it was met with mounting criticism after its introduction in 1949.
The debate over the Boulder Model's value centers on an array of criticisms:
That the Boulder Model lacks validity, meaning that it does not actually help graduate students become better scientists and practitioners.
That the Boulder Model monopolizes the energies of students, demanding that they spend a large portion of their graduate careers studying research methods that they will not use in professional practice, and depriving them of intensive and extensive formal training and apprenticeship in the art and craft of psychotherapy.
That the Boulder Model promotes a view of humans and their suffering that has been simplified to the point at which it does not yield significantly useful clinical guidance to determine practice. Further, the tendency to focus on symptoms and discrete patient characteristics promotes an instrumentalizing view of people in distress that filters into the clinical work of students.
That diversity of clinical approaches is restricted as programs emphasize those methods that can be easily measured.
That the version of the scientific method taught in Boulder Model programs stresses data-gathering techniques over critical thinking skills and theory-building, setting it apart from the so-called hard sciences in its uncritical approach to empiricism.
That publication history tends to eclipse clinical sensitivity and depth in the evaluation and promotion of students.
That the Boulder Model promotes short-cycle research over longitudinal and more intricate studies that cannot be completed within the timeframe of a training cycle. Thus, the minority of students who do follow a more research-oriented career path are not trained in, or trained to respect, qualitative, longer-term or more complex studies of human psychology.
In short, that the skills needed for practice in clinical psychology and those needed for research are not compatible.
Criticisms continued to accumulate until the 1965 Chicago Conference.
Here, it was recommended that clinical graduate programs restructure their training methods for students who wanted to focus their careers on applied practice. This idea was reinforced by the Clark Committee of 1967. The committee developed the practitioner-oriented model for clinical graduate programs and presented it at the Vail Conference in 1973. This model was readily accepted to coexist with the Boulder Model, which is still used by many psychology graduate programs today. Core tenets Core tenets included in the current Boulder Model: Providing psychological assessment, testing, and intervention in accordance with scientifically based protocols; Accessing and integrating scientific findings to make informed healthcare decisions for patients; Questioning and testing hypotheses that are relevant to current healthcare; Building and maintaining effective cross-disciplinary relationships with professionals in other fields; Providing research-based training and support to other health professions in the process of delivering psychological care; Contributing to practice-based research and development to improve the quality of health care. References Further reading Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O. (1999). The scientist practitioner: Research and accountability in the age of managed care (2nd ed.). Boston: Allyn & Bacon. Soldz, S., & McCullough, L. (Eds.). (1999). Reconciling empirical knowledge and clinical experience: The art and science of psychotherapy. Washington, DC: American Psychological Association. External links Evidence-Based Practice: A Framework for Twenty-First-Century Scientist-Practitioner Training Scientist-Practitioner? - Discussion Paper Clinical psychology Industrial and organizational psychology
Scientist–practitioner model
Biology
https://en.wikipedia.org/wiki/Saul%20Perlmutter
Saul Perlmutter (born September 22, 1959) is a U.S. astrophysicist, a professor of physics at the University of California, Berkeley, where he holds the Franklin W. and Karen Weber Dabby Chair, and head of the International Supernova Cosmology Project at the Lawrence Berkeley National Laboratory. He is a member of both the American Academy of Arts & Sciences and the American Philosophical Society, and was elected a Fellow of the American Association for the Advancement of Science in 2003. He is also a member of the National Academy of Sciences. Perlmutter shared the 2006 Shaw Prize in Astronomy, the 2011 Nobel Prize in Physics, and the 2015 Breakthrough Prize in Fundamental Physics with Brian P. Schmidt and Adam Riess for providing evidence that the expansion of the universe is accelerating. Since 2021, he has been a member of the President’s Council of Advisors on Science and Technology (PCAST). Education Saul Perlmutter was born one of three children in the Ashkenazi Jewish family of Daniel D. Perlmutter, professor emeritus of chemical and biomolecular engineering at University of Pennsylvania, and Felice (Feige) D. Perlmutter (née Davidson), professor emerita of Temple University’s School of Social Administration. His maternal grandfather, the Yiddish teacher Samuel Davidson (1903–1989), emigrated to Canada (and then with his wife Chaika Newman to New York) from the Bessarabian town of Floreşti in 1919. Perlmutter spent his childhood in the Mount Airy neighborhood of Philadelphia. He went to school in nearby Germantown; first Greene Street Friends School for the elementary grades, followed by Germantown Friends School for grades 7 through 12. He graduated with an AB in physics from Harvard magna cum laude in 1981 and received his PhD in physics from Berkeley in 1986. Perlmutter's PhD thesis, titled "An Astrometric Search for a Stellar Companion to the Sun" and supervised by Richard A. 
Muller, described the development and use of an automated telescope to search for Nemesis candidates. At the same time, he was using this telescope to search for Nemesis and supernovae, which would lead him to his award-winning work in cosmology. Perlmutter attributes the idea for an automated supernova search to Luis Alvarez, a 1968 Nobel laureate, who shared his idea with Perlmutter's research adviser. Work Perlmutter heads the Supernova Cosmology Project at Lawrence Berkeley National Laboratory. It was this team, along with the competing High-z Supernova Search Team led by Riess and Schmidt, which found evidence of the accelerating expansion of the universe based on observing Type Ia supernovae in the distant universe. A Type Ia supernova occurs whenever a white dwarf star gains enough additional mass to pass above the Chandrasekhar limit, usually by stealing mass from a companion star. Since all Type Ia supernovae are believed to occur in essentially the same way, they form a standard candle whose intrinsic luminosity can be assumed to be approximately the same in all cases. By measuring the apparent luminosity of the explosion from Earth, researchers can infer the distance to the supernova. Comparing this inferred distance to the apparent redshift of the explosion allows the observer to measure both the distance and the relative velocity of the supernova. The Supernova Cosmology Project concluded that these distant supernovae were receding more quickly than would be expected from Hubble expansion alone and, by inference, that the expansion of the universe must have accelerated over the billions of years since the supernovae occurred. The High-z Team came to a similar conclusion. The two teams' reports were published within weeks of each other, and their conclusions were readily accepted by the scientific community due to corroborating theories. This conclusion has subsequently been supported by other lines of evidence.
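The inverse-square reasoning behind the standard-candle method can be sketched numerically. The luminosity and flux values below are illustrative assumptions, not measured supernova data; the point is only that, for sources of equal intrinsic luminosity, observed flux falls with the square of distance, so relative distances follow directly from relative brightness.

```python
import math

# Standard-candle distance estimate: all Type Ia supernovae are assumed to
# peak at roughly the same intrinsic luminosity L, so the measured flux F
# gives the distance via the inverse-square law: F = L / (4 * pi * d^2).

L_SN_IA = 1e36  # assumed peak luminosity in watts (illustrative only)

def luminosity_distance(flux_w_m2, luminosity_w=L_SN_IA):
    """Distance (in metres) at which a source of the given intrinsic
    luminosity would produce the observed flux."""
    return math.sqrt(luminosity_w / (4 * math.pi * flux_w_m2))

# A supernova appearing 100x fainter than another of the same intrinsic
# luminosity is sqrt(100) = 10x farther away.
d_near = luminosity_distance(1e-12)
d_far = luminosity_distance(1e-14)
print(d_far / d_near)  # -> 10.0 (up to floating-point rounding)
```

Pairing such distances with measured redshifts, as the text describes, is what let both teams see that distant supernovae were fainter (farther) than a non-accelerating expansion would predict.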
These findings reinvigorated research into the nature of the universe, and especially into the role of dark energy. For this work Perlmutter was awarded the 2011 Nobel Prize in Physics, shared jointly with Riess and Schmidt. Perlmutter is also a lead investigator in the Supernova/Acceleration Probe project, which aims to build a satellite dedicated to finding and studying more supernovae in the distant universe. The goal is to more precisely determine the rate at which the universe has been accelerating. He is also a participant in the Berkeley Earth Surface Temperature project, which aims to increase our understanding of recent global warming through improved analyses of climate data. Perlmutter is a professor and currently teaches at UC Berkeley. Awards and recognition In 2002, Perlmutter won the Department of Energy's E. O. Lawrence Award in Physics. In 2003, he was awarded the California Scientist of the Year Award, and, in 2005, he won the John Scott Award and the Padua Prize. In 2006, he shared the Shaw Prize in Astronomy with Adam Riess and Brian P. Schmidt. The same year, Perlmutter won the Antonio Feltrinelli International Prize. Perlmutter and his team shared the 2007 Gruber Cosmology Prize (a $500,000 award) with Schmidt and the High-Z Team for discovering the accelerating expansion of the universe. In 2010, Perlmutter was named a Miller Senior Fellow of the Miller Institute at the University of California Berkeley. In 2011, Perlmutter and Riess were named co-recipients of the Albert Einstein Medal. Perlmutter shared the 2011 Nobel Prize in Physics with Riess and Schmidt. The Nobel Prize includes a SEK 10 million cash award (approximately US$1.5 million). Perlmutter received one-half of the cash prize, while Riess and Schmidt shared the other half. In 2014, Perlmutter received the Golden Plate Award of the American Academy of Achievement. 
Perlmutter, Schmidt, Riess and their teams shared the 2015 Breakthrough Prize in Fundamental Physics with $3 million to be split among them. A United States Department of Energy 2020 supercomputer is named Perlmutter in his honor. Family Saul Perlmutter has two sisters: Shira Perlmutter (b. 1956), a lawyer, and Tova Perlmutter (b. 1967), a nonprofit executive. He is married to Laura Nelson, an anthropologist at University of California, Berkeley, and has one daughter, Noa. Popular culture Reference to Saul Perlmutter was made on the CBS television comedy series The Big Bang Theory during the 2011 episode "The Speckerman Recurrence". In the episode, the character Sheldon Cooper watches the Nobel award ceremony on his laptop, and jealously berates Perlmutter: "Look at Dr. Saul Perlmutter up there, clutching that Nobel prize. What's the matter Saul, you afraid somebody's going to steal it? Like you stole Einstein's cosmological constant?" Then later: "Oh, now Perlmutter's shaking the King's hand. Yeah, check for your watch, Gustaf, he might have lifted it." Perlmutter was also referenced in the 2011 episode of The Big Bang Theory, "The Rhinitis Revelation". In a conversation with his mother, Sheldon says, "I’ve got a treat for us tomorrow, Mom. I’m taking you to see Saul Perlmutter give a lecture about his Nobel Prize-winning work in cosmology. And the best part is, at the Q and A afterward, I’ve worked up a couple of Q’s that will stump his sorry A." Later in the episode, Sheldon criticises the lecture and questions the decision to award Perlmutter a Nobel Prize. Technical reports and conference/event proceedings Perlmutter, S., et al. "Progress Report on the Berkeley/Anglo-Australian Observatory High-redshift Supernova Search", Lawrence Berkeley National Laboratory, (November 1990). Perlmutter, S., et al. "Discovery of the Most Distant Supernovae and the Quest for {Omega}", Lawrence Berkeley National Laboratory, (May 1994). Perlmutter, S., et al. 
"Discovery of a Supernova Explosion at Half the Age of the Universe and its Cosmological Implications", Lawrence Berkeley National Laboratory, (December 16, 1997). Perlmutter, S., et al. "The Distant Type Ia Supernova Rate", Lawrence Berkeley National Laboratory, (May 28, 2002). Perlmutter, S., et al. "The Supernova Legacy Survey: Measurement of Omega_M, Omega_Lambda, and w from the First Year Data Set", Lawrence Berkeley National Laboratory, (October 14, 2005). Perlmutter, S. "Supernovae, Dark Energy and the Accelerating Universe: How DOE Helped to Win (yet another) Nobel Prize", Lawrence Berkeley National Laboratory, (January 13, 2012). See also List of Jewish Nobel laureates Cosmological constant Dark energy Dark matter References External links Supernova Cosmology Project Website Supernova Cosmology Project Shaw Prize Press Release Nobel Prize in Physics Press Release List of scholarly publications as provided by the SAO/NASA Astrophysics Data System (ADS) abstract server. 1959 births Living people Nobel laureates in Physics American Nobel laureates American astrophysicists American people of Moldovan-Jewish descent American cosmologists Harvard College alumni Jewish American physicists Members of the United States National Academy of Sciences Institute for Advanced Study visiting scholars University of California, Berkeley alumni University of California, Berkeley College of Letters and Science faculty 21st-century American astronomers 20th-century American astronomers Fellows of the American Association for the Advancement of Science Fellows of the American Academy of Arts and Sciences Members of the American Philosophical Society Germantown Friends School alumni Lawrence Berkeley National Laboratory people Albert Einstein Medal recipients Dark energy Fellows of the American Physical Society
Saul Perlmutter
Physics,Astronomy
https://en.wikipedia.org/wiki/Taiwania%203
Taiwania 3 (Traditional Chinese (Taiwan): 台灣杉三號) is one of the supercomputers built in Taiwan, and as of August 2021 the newest. It is housed at the National Center for High-performance Computing (NCHC) of NARLabs. It has 50,400 cores in total across 900 nodes, each node containing two Intel Xeon Platinum 8280 2.4 GHz CPUs (28 cores per CPU) and running CentOS as its operating system. It is a publicly accessible supercomputer, currently open to scientists and others conducting specific research after obtaining permission from the NCHC. This is the third supercomputer of the Taiwania series. It uses CentOS x86_64 7.8 as its operating system and the Slurm Workload Manager as its workload manager. Taiwania 3 uses an InfiniBand HDR100 100 Gbit/s high-speed interconnect for better performance. Main memory is 192 GB per node. The full calculation capability is 2.7 PFLOPS. It was brought into operation in November 2020, ahead of schedule, because of the need for COVID-19 research. It is currently ranked number 227 on the June 2021 TOP500 list and number 80 on the Green500 list. It was manufactured by Quanta Computer, Taiwan Fixed Network, and ASUS Cloud. Capability and specifications This supercomputer's Rmax is 2297.6 TFLOPS, with an Rpeak of 4354.6 TFLOPS and an Nmax of 4,354,560, drawing 563.85 kW of power. The housing was mainly designed and manufactured by ASUS Cloud, owned by the Taiwanese government, which has experience constructing supercomputer and storage-device housings. The hardware is provided by Quanta Computer, which mainly manufactures servers.
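The quoted Rpeak can be reproduced from the core count with a back-of-the-envelope calculation. In the sketch below, the 32 double-precision FLOPs per cycle (two AVX-512 FMA units per core) and the 2.7 GHz clock behind the published peak figure are our assumptions, not something the TOP500 entry spells out; the listed 2.4 GHz base clock would give a lower figure.

```python
# Theoretical peak for a CPU-only machine: cores x FLOPs/cycle x clock rate.
nodes = 900
cpus_per_node = 2
cores_per_cpu = 28
flops_per_cycle = 32   # assumed: AVX-512 with two FMA units, double precision
clock_hz = 2.7e9       # assumed clock behind the published Rpeak (base is 2.4 GHz)

cores = nodes * cpus_per_node * cores_per_cpu
rpeak_tflops = cores * flops_per_cycle * clock_hz / 1e12

print(cores)                    # -> 50400, matching the quoted core count
print(round(rpeak_tflops, 1))  # -> 4354.6, matching the quoted Rpeak
```

The gap between this theoretical peak and the measured Rmax of 2297.6 TFLOPS is typical: the LINPACK benchmark never achieves the full FMA issue rate on every core.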
Software Software details are listed below (all data according to TOP500 and the NCHC): Operating system: CentOS x86_64 7.8. Workload manager: Slurm Workload Manager. Compiler: Intel Parallel Studio XE Composer Edition for Fortran and C++ Linux 2020 Update 4. Math library: Intel Math Kernel Library for Linux 2020 Update 4. MPI: Intel MPI Library for Linux 2019 Update 9. Hardware Hardware details are listed below (all data according to TOP500 and the NCHC): CPU: Intel Xeon Platinum 8280 2.4 GHz (28 cores per CPU). Main memory: 192 GB per node (172,800 GB total). Interconnection: NVIDIA Mellanox InfiniBand HDR100. The hardware is based on the QuantaPlex T42D-2U (4-node) dense-memory multi-node compute server manufactured by Quanta Computer. Operating temperature: 5 °C to 35 °C (41 °F to 95 °F); operating relative humidity: 20% to 85% RH. Each node has two processors; the Intel Xeon CPUs and memory mentioned above are inside each node. Speed Speed details are listed below. Rpeak: 4354.6 TeraFLOPS. Rmax: 2297.6 TeraFLOPS. Note that this machine relies on CPUs for calculation. Accessibility The Taiwania series has always been available for public access through iService, with users paying according to their requested time, CPUs, and GPUs. Films involved The movie Seqalu was filmed in collaboration with TWCC, a national service provided by Taiwan. TWCC includes the enormous calculation resources provided by the Taiwania supercomputer series, and Taiwania 3 was one of the film's resource providers. Programmers at the National Center for High-performance Computing designed the algorithms used in simulations such as arrows and gunfire. Contributions Taiwanese environment evaluation The system operates under the NCHC, NARLabs, which means it is part of the Taiwanese government. It contributes to national analytics by helping combine the information generated by equipment all around Taiwan.
It also helps by combining LIDAR, visual cameras, DSM, and more to form maps during disasters. Moreover, together with other supercomputers it constructs 3D visualizations for the Taiwanese government to assist in rescue, research, training, decision making, mapping, and more. Biology and medication Taiwania 3 is a supercomputer aimed at supporting biomedical development, and was meant to help scientists find a solution to the COVID-19 pandemic. It has also been connected to Taiwanese biological laboratories and their databases; laboratories must reach a certain level in order to connect to the system. History 2019 In 2019, the NCHC started the Taiwania 3 construction project. 2020 November 2020: Taiwania 3 officially launched by the NCHC. November 2020: Taiwania 3 joins COVID-19 research. 2021 May 2021: outbreak of COVID-19 in Taiwan. June 2021: Taiwania 3 officially inaugurated. July 3, 2021: last registration date for the Tech V2.0 Coronavirus project (the deadline was later moved to August 31). September 2021: collaboration on the film Seqalu unveiled. Architecture Taiwania 3 is a CPU-based supercomputer, with about fifty thousand Intel Xeon CPU cores. It uses Linux as its operating system, like every other TOP500 supercomputer. Comparison with other Taiwania supercomputers The Taiwania series is a family of supercomputers made in Taiwan during the 21st century. Taiwania 2, a GPU machine also made by the NCHC, has a capability of 9 PetaFLOPS, nearly four times the 2.2–2.7 PetaFLOPS of Taiwania 3 (which, like Taiwania 1, mainly uses CPUs). The main differences between Taiwania 2 and 3 are their main calculation units and objectives: Taiwania 2 is a GPU supercomputer for machine-learning use, whereas Taiwania 3 is a CPU computing system for general scientific research.
Compared with Taiwania 2, Taiwania 3 is more similar to Taiwania 1, another Taiwanese supercomputer. Both use CPU architectures, both are relatively open to public access (as of August 2021), and both use Intel CPUs to perform calculations; their main difference is capacity. All three supercomputers are currently part of the iService systems, and partially of the TWCC computing systems, though Taiwania 3 was intended to replace Taiwania 1. The next addition will be Taiwania 4, a CPU HPC that will replace the retiring Taiwania 1 when finished. Taiwania 3 has a total capacity of 2.2 PetaFLOPS according to the NCHC. As a 21st-century HPC, it performs many computations in parallel. Rather than using NVIDIA GPUs to boost capacity, the system uses Intel Xeon CPUs for calculation, making its programming model closer to everyday CPU programming. Because it uses CPUs, its machine-learning capability is not as strong as that of NVIDIA GPU-based machine-learning systems. See also Taiwania (supercomputer) Supercomputers Computer cluster Xeon References Supercomputers Science and technology in Taiwan 2020 establishments in Taiwan Computer-related introductions in 2020
Taiwania 3
Technology
https://en.wikipedia.org/wiki/Tractor
A tractor is an engineering vehicle specifically designed to deliver a high tractive effort (or torque) at slow speeds, for the purposes of hauling a trailer or machinery such as that used in agriculture, mining or construction. Most commonly, the term is used to describe a farm vehicle that provides the power and traction to mechanize agricultural tasks, especially (and originally) tillage, and now many more. Agricultural implements may be towed behind or mounted on the tractor, and the tractor may also provide a source of power if the implement is mechanised. Etymology The word tractor was taken from Latin, being the agent noun of trahere "to pull". The first recorded use of the word meaning "an engine or vehicle for pulling wagons or plows" occurred in 1896, from the earlier term "traction motor" (1859). National variations In the UK, Ireland, Australia, India, Spain, Argentina, Slovenia, Serbia, Croatia, the Netherlands, and Germany, the word "tractor" usually means "farm tractor", and the use of the word "tractor" to mean other types of vehicles is familiar to the vehicle trade, but unfamiliar to much of the general public. In Canada and the US, the word may also refer to the road tractor portion of a tractor trailer truck, but also usually refers to the piece of farm equipment. History Traction engines The first powered farm implements in the early 19th century were portable engines – steam engines on wheels that could be used to drive mechanical farm machinery by way of a flexible belt. Richard Trevithick designed the first 'semi-portable' stationary steam engine for agricultural use, known as a "barn engine" in 1812, and it was used to drive a corn threshing machine. The truly portable engine was invented in 1839 by William Tuxford of Boston, Lincolnshire who started manufacture of an engine built around a locomotive-style boiler with horizontal smoke tubes. 
A large flywheel was mounted on the crankshaft, and a stout leather belt was used to transfer the drive to the equipment being driven. In the 1850s, John Fowler used a Clayton & Shuttleworth portable engine to drive apparatus in the first public demonstrations of the application of cable haulage to cultivation. In parallel with the early portable engine development, many engineers attempted to make them self-propelled – the fore-runners of the traction engine. In most cases this was achieved by fitting a sprocket on the end of the crankshaft, and running a chain from this to a larger sprocket on the rear axle. These experiments met with mixed success. The first proper traction engine, in the form recognisable today, was developed in 1859 when British engineer Thomas Aveling modified a Clayton & Shuttleworth portable engine, which had to be hauled from job to job by horses, into a self-propelled one. The alteration was made by fitting a long driving chain between the crankshaft and the rear axle. The first half of the 1860s was a period of great experimentation but by the end of the decade the standard form of the traction engine had evolved and changed little over the next sixty years. It was widely adopted for agricultural use. The first tractors were steam-powered plowing engines. They were used in pairs, placed on either side of a field to haul a plow back and forth between them using a wire cable. In Britain Mann's and Garrett developed steam tractors for direct ploughing, but the heavy, wet soil of England meant that these designs were less economical than a team of horses. In the United States, where soil conditions permitted, steam tractors were used to direct-haul plows. Steam-powered agricultural engines remained in use well into the 20th century until reliable internal combustion engines had been developed. 
Fuel The first gasoline-powered tractors were built in Illinois in 1889 by John Charter, who combined single-cylinder Otto engines with a Rumely steam engine chassis. In 1892, John Froelich built a gasoline-powered tractor in Clayton County, Iowa, US. A Van Duzen single-cylinder gasoline engine was mounted on a Robinson engine chassis, which could be controlled and propelled by Froelich's gear box. After receiving a patent, Froelich started up the Waterloo Gasoline Engine Company and invested all of his assets. The venture was very unsuccessful, and by 1895 all was lost and he went out of business. Richard Hornsby & Sons are credited with producing and selling the first oil-engined tractor in Britain, invented by Herbert Akroyd Stuart. The Hornsby-Akroyd Patent Safety Oil Traction Engine was made in 1896. In 1897, it was bought by Mr. Locke-King, the first recorded British tractor sale. That year, it won a Silver Medal from the Royal Agricultural Society of England. It later returned to the factory to be fitted with a caterpillar track. The first commercially successful light-weight petrol-powered general-purpose tractor was built in 1901 by Dan Albone, a British inventor. He filed for a patent on 15 February 1902 for his tractor design and then formed Ivel Agricultural Motors Limited. The other directors were Selwyn Edge, Charles Jarrott, John Hewitt and Lord Willoughby. He called his machine the Ivel Agricultural Motor; the word "tractor" came into common use after Hart-Parr coined it. The Ivel Agricultural Motor was light, powerful and compact. It had one front wheel, with a solid rubber tyre, and two large rear wheels like a modern tractor. The engine used water cooling via the thermo-syphon effect. It had one forward and one reverse gear. A pulley wheel on the left-hand side allowed it to be used as a stationary engine, driving a wide range of agricultural machinery. The 1903 sale price was £300.
His tractor won a medal at the Royal Agricultural Show in 1903 and 1904. About 500 were built, and many were exported all over the world. The original engine was made by Payne & Co. of Coventry. After 1906, French Aster engines were used. The first successful American tractor was built by Charles W. Hart and Charles H. Parr. They developed a two-cylinder gasoline engine and set up their business in Charles City, Iowa. In 1903, the firm built 15 tractors. Their #3 is the oldest surviving internal combustion engine tractor in the United States, and is on display at the Smithsonian National Museum of American History in Washington, D.C. The two-cylinder engine has a unique hit-and-miss firing cycle that produced power at both the belt and the drawbar. In 1908, the Saunderson Tractor and Implement Co. of Bedford introduced a four-wheel design, and became the largest tractor manufacturer in Britain at the time. While the earlier, heavier tractors were initially very successful, it became increasingly apparent at this time that the weight of a large supporting frame was less efficient than lighter designs. Henry Ford introduced a light-weight, mass-produced design which largely displaced the heavier designs. Some companies halfheartedly followed suit with mediocre designs, as if to disprove the concept, but they were largely unsuccessful in that endeavor. While unpopular at first, these gasoline-powered machines began to catch on in the 1910s, when they became smaller and more affordable. Henry Ford introduced the Fordson, a wildly popular mass-produced tractor, in 1917. It was built in the U.S., Ireland, England and Russia, and by 1923, Fordson had 77% of the U.S. market. The Fordson dispensed with a frame, using the strength of the engine block to hold the machine together. By the 1920s, tractors with gasoline-powered internal combustion engines had become the norm. The first three-point hitches were experimented with in 1917.
After Harry Ferguson applied for a British patent for his three-point hitch in 1926, it became popular. A three-point attachment of the implement to the tractor is the simplest and the only statically determinate way of joining two bodies in engineering. The Ferguson-Brown Company produced the Model A Ferguson-Brown tractor with a Ferguson-designed hydraulic hitch. In 1938 Ferguson entered into a collaboration with Henry Ford to produce the Ford-Ferguson 9N tractor. The three-point hitch soon became the favorite hitch attachment system among farmers around the world. This tractor model also included a rear power take-off (PTO) shaft that could be used to power three-point-hitch-mounted implements such as sickle-bar mowers. Electric In 1969, General Electric introduced the Elec-Trak, the first commercial electric tractor (an electric-powered garden tractor). The Elec-Trak was manufactured by General Electric until 1975. Electric tractors are manufactured by a German company, Fendt, and by US companies Solectrac and Monarch Tractor. John Deere's prototype electric tractor is a plug-in, powered by an electrical cable. Kubota is prototyping an autonomous electric tractor. Design, power and transmission Configuration Tractors can be generally classified by number of axles or wheels, with main categories of two-wheel tractors (single-axle tractors) and four-wheel tractors (two-axle tractors); more axles are possible but uncommon. Among four-wheel tractors (two-axle tractors), most are two-wheel drive (usually at the rear); but many are two-wheel drive with front-wheel assist, four-wheel drive (often with articulated steering), or track crawlers (with steel or rubber tracks). The classic farm tractor is a simple open vehicle, with two very large driving wheels on an axle below a single seat (the seat and steering wheel consequently are in the center), and the engine in front of the driver, with two steerable wheels below the engine compartment.
This basic design remained unchanged for many years after being pioneered by Wallis, but enclosed cabs are fitted on almost all modern models for operator safety and comfort. In some localities with heavy or wet soils, notably in the Central Valley of California, the "Caterpillar" or "crawler" type of tracked tractor became popular due to superior traction and flotation. These were usually maneuvered through the use of turning brake pedals and separate track clutches operated by levers rather than a steering wheel. Four-wheel drive tractors began to appear in the 1960s. Some four-wheel drive tractors have the standard "two large, two small" configuration typical of smaller tractors, while some have four large, powered wheels. The larger tractors are typically an articulated, center-hinged design steered by hydraulic cylinders that move the forward power unit while the trailing unit is not steered separately. In the early 21st century, articulated or non-articulated, steerable multitrack tractors have largely supplanted the Caterpillar type for farm use. Larger types of modern farm tractors include articulated four-wheel or eight-wheel drive units with one or two power units which are hinged in the middle and steered by hydraulic clutches or pumps. A relatively recent development is the replacement of wheels or steel crawler-type tracks with flexible, steel-reinforced rubber tracks, usually powered by hydrostatic or completely hydraulic driving mechanisms. The configuration of these tractors bears little resemblance to the classic farm tractor design.

Engine and fuels

The predecessors of modern tractors, traction engines, used steam engines for power.

Gasoline and kerosene

Since the turn of the 20th century, internal combustion engines have been the power source of choice. Between 1900 and 1960, gasoline was the predominant fuel, with kerosene (the Rumely Oil Pull was the most notable of this kind) being a common alternative.
Generally, one engine could burn any of those, although cold starting was easiest on gasoline. Often, a small auxiliary fuel tank was available to hold gasoline for cold starting and warm-up, while the main fuel tank held whatever fuel was most convenient or least expensive for the particular farmer. In the United Kingdom, a gasoline-kerosene engine is known as a petrol-paraffin engine.

Diesel

Dieselisation gained momentum starting in the 1960s, and modern farm tractors usually employ diesel engines, which range in power output from 18 to 575 horsepower (15 to 480 kW). Size and output are dependent on application, with smaller tractors used for lawn mowing, landscaping, orchard work, and truck farming, and larger tractors for vast fields of wheat, corn, soy, and other bulk crops.

Liquefied petroleum gas

Liquefied petroleum gas (LPG) or propane also have been used as tractor fuels, but require special pressurized fuel tanks and filling equipment and produce less power, so are less prevalent in most markets. Because they burn cleanly, most LPG tractors are confined to indoor work.

Wood

During the Second World War, petroleum-based fuel was scarce in many European nations, so wood gasifiers were fitted to many vehicles, including tractors.

Biodiesel

In some countries such as Germany, biodiesel is often used. Some other biofuels such as straight vegetable oil are also being used by some farmers.

Electric powered

Prototype battery-powered electric tractors are being developed by a German company, Fendt, and by two US companies, Solectrac and Monarch Tractor. John Deere's prototype electric tractor is a plug-in, powered by an electrical cable. Kubota is prototyping an autonomous electric tractor.

Transmission

Most older farm tractors use a manual transmission with several gear ratios, typically three to six, sometimes multiplied into two or three ranges.
This arrangement provides a set of discrete ratios that, combined with the varying of the throttle, allow final-drive speeds from less than one up to about 25 miles per hour (40 km/h), with the lower speeds used for working the land and the highest speed used on the road. Slow, controllable speeds are necessary for most of the operations performed with a tractor. They help give the farmer a larger degree of control in certain situations, such as field work. When travelling on public roads, the slow operating speeds can cause problems, such as long queues or tailbacks, which can delay or annoy motorists in cars and trucks. Motorists are expected to take due care when sharing the road with farm tractors, but because conflicts are common, various measures to minimize the interaction or minimize the speed differential are employed where feasible. Some countries (for example the Netherlands) employ a road sign on some roads that means "no farm tractors". Some modern tractors, such as the JCB Fastrac, are now capable of much higher road speeds of around 50 mph (80 km/h). Older tractors usually have unsynchronized transmission designs, which often require the operator to engage the clutch to shift between gears. This mode of use is inherently unsuited to some of the work tractors do, and has been circumvented in various ways over the years. For existing unsynchronized tractors, the methods of circumvention are double clutching or power-shifting, both of which require the operator to rely on skill to speed-match the gears while shifting, and are undesirable from a risk-mitigation standpoint because of what can go wrong if the operator makes a mistake – transmission damage is possible, and loss of vehicle control can occur if the tractor is towing a heavy load either uphill or downhill – something that tractors often do. Therefore, operator's manuals for most of these tractors state one must always stop the tractor before shifting.
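The relationship between engine speed, overall gear reduction, and final-drive speed described above can be sketched with a short calculation. The gear ratios and tire size below are hypothetical round numbers chosen for illustration, not figures from any particular tractor:

```python
# Illustrative sketch: ground speed = (engine rpm / overall reduction) x wheel circumference.
# All ratios and the tire diameter are hypothetical values for illustration only.
import math

def ground_speed_kmh(engine_rpm, overall_ratio, tire_diameter_m):
    """Ground speed in km/h for a given engine speed and total gear reduction."""
    wheel_rpm = engine_rpm / overall_ratio
    metres_per_min = wheel_rpm * math.pi * tire_diameter_m
    return metres_per_min * 60 / 1000

# A tractor running at a rated 2200 rpm with 1.6 m diameter rear tires:
for gear, ratio in [("1st low", 250.0), ("3rd", 90.0), ("road gear", 16.0)]:
    print(f"{gear}: {ground_speed_kmh(2200, ratio, 1.6):.1f} km/h")
```

The deep low-range ratio yields a crawling field speed of a few km/h, while the tall road ratio lands near the typical road speed of around 40 km/h, matching the span of speeds a discrete gearbox must cover.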
In newer designs, unsynchronized transmissions were replaced with synchronized transmissions or with continuously variable transmissions (CVTs). Either a synchronized manual transmission with enough available gear ratios (often achieved with dual ranges, high and low) or a CVT allows the engine speed to be matched to the desired final-drive speed, while keeping the engine within the appropriate speed range (as measured in rotations per minute or rpm) for power generation, known as the working range; throttling back to achieve the desired final-drive speed, by contrast, is a trade-off that leaves the working range. The problems, solutions, and developments described here also describe the history of transmission evolution in semi-trailer trucks. The biggest difference is fleet turnover; whereas most of the old road tractors have long since been scrapped, many of the old farm tractors are still in use. Therefore, old transmission design and operation is primarily just of historical interest in trucking, whereas in farming it still often affects daily life.

Hitches and power applications

The power produced by the engine must be transmitted to the implement or equipment to do the actual work intended for the equipment. This may be accomplished via a drawbar or hitch system if the implement is to be towed or otherwise pulled through the tractive power of the engine, or via a pulley or power takeoff system if the implement is stationary, or a combination of the two.

Drawbars

Plows and other tillage equipment are most commonly connected to the tractor via a drawbar. The classic drawbar is simply a steel bar attached to the tractor (or in some cases, as in the early Fordsons, cast as part of the rear transmission housing) to which the hitch of the implement was attached with a pin or by a loop and clevis. The implement could be readily attached and removed, allowing the tractor to be used for other purposes on a daily basis.
If the tractor was equipped with a swinging drawbar, then it could be set at the center or offset from center to allow the tractor to run outside the path of the implement. The drawbar system necessitated the implement having its own running gear (usually wheels) and, in the case of a plow, chisel cultivator or harrow, some sort of lift mechanism to raise it out of the ground at turns or for transport. Drawbars necessarily posed a rollover risk depending on how the tractive torque was applied. The Fordson tractor was prone to roll backward due to an excessively short wheelbase. The linkage between the implement and the tractor usually had some slack which could lead to jerky starts and greater wear and tear on the tractor and the equipment. Drawbars were appropriate to the dawn of mechanization, because they were very simple in concept and because, as the tractor replaced the horse, existing horse-drawn implements usually already had running gear. As the history of mechanization progressed, the advantages of other hitching systems became apparent, leading to new developments (see below). Depending on the function for which a tractor is used, though, the drawbar is still one of the usual means of attaching an implement to a tractor (see photo at left).

Fixed mounts

Some tractor manufacturers produced matching equipment that could be directly mounted on the tractor. Examples included front-end loaders, belly mowers, row crop cultivators, corn pickers and corn planters. In most cases, these fixed mounts were proprietary and unique to each make of tractor, so an implement produced by John Deere, for example, could not be attached to a Minneapolis Moline tractor. Another disadvantage was that mounting usually required some time and labor, resulting in the implement being semi-permanently attached with bolts or other mounting hardware. Usually, it was impractical to remove the implement and reinstall it on a day-to-day basis.
As a result, the tractor was unavailable for other uses and dedicated to a single use for an appreciable period of time. An implement was generally mounted at the beginning of its season of use (such as tillage, planting or harvesting) and removed when the season ended.

Three-point and quick

The drawbar system was virtually the exclusive method of attaching implements (other than direct attachment to the tractor) before Harry Ferguson developed the three-point hitch. Equipment attached to the three-point hitch can be raised or lowered hydraulically with a control lever. The equipment attached to the three-point hitch is usually completely supported by the tractor. Another way to attach an implement is via a quick hitch, which is attached to the three-point hitch. This enables a single person to attach an implement more quickly and with less danger. The three-point hitch revolutionized farm tractors and their implements. While the Ferguson System was still under patent, other manufacturers developed new hitching systems to try to fend off some of Ferguson's competitive advantage. For example, International Harvester's Farmall tractors gained a two-point "Fast Hitch", and John Deere had a power lift that was somewhat similar to the more flexible Ferguson invention. Once the patent protection expired on the three-point hitch, it became an industry standard. Almost every tractor today features Ferguson's three-point linkage or a derivative of it. This hitch allows for easy attachment and detachment of implements while allowing the implement to function as a part of the tractor, almost as if it were attached by a fixed mount. Previously, when the implement hit an obstacle, the towing link broke or the tractor flipped over. Ferguson's idea was to combine a connection via two lower and one upper lift arms that were connected to a hydraulic lifting ram.
The ram was, in turn, connected to the upper of the three links so that increased drag (as when a plough hits a rock) caused the hydraulics to lift the implement until the obstacle was passed. Recently, Bobcat's patent on its front loader connection (inspired by these earlier systems) has expired, and compact tractors are now being outfitted with quick-connect attachments for their front-end loaders.

Power take-off systems and hydraulics

In addition to towing an implement or supplying tractive power through the wheels, most tractors have a means to transfer power to another machine such as a baler, swather, or mower. Unless it functions solely by pulling it through or over the ground, a towed implement needs its own power source (such as a baler or combine with a separate engine) or else a means of transmitting power from the tractor to the mechanical operations of the equipment. Early tractors used belts or cables wrapped around the flywheel or a separate belt pulley to power stationary equipment, such as a threshing machine, buzz saw, silage blower, or stationary baler. In most cases, it was impractical for the tractor and equipment to move with a flexible belt or cable between them, so this system required the tractor to remain in one location, with the work brought to the equipment, or the tractor to be relocated at each turn and the power set-up reapplied (as in cable-drawn plowing systems used in early steam tractor operations). Modern tractors use a power take-off (PTO) shaft to provide rotary power to machinery that may be stationary or pulled. The PTO shaft generally is at the rear of the tractor, and can be connected to an implement that is either towed by a drawbar or a three-point hitch. This eliminates the need for a separate, implement-mounted power source, which is almost never seen in modern farm equipment. A front PTO is also available as an option on many new tractors.
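The rotary power a PTO shaft delivers follows the general mechanical relation P = Tω (torque times angular velocity). A minimal sketch, using a hypothetical implement torque figure at the standard 540 rpm rear-PTO speed:

```python
# Illustrative sketch of shaft power P = T * omega.
# The 600 N*m torque figure is hypothetical; 540 rpm is the standard rear-PTO speed.
import math

def pto_power_kw(torque_nm, shaft_rpm):
    """Mechanical power (kW) delivered through a rotating shaft."""
    omega = shaft_rpm * 2 * math.pi / 60  # shaft speed in rad/s
    return torque_nm * omega / 1000

print(f"{pto_power_kw(600, 540):.1f} kW")
```

The same relation explains why the higher 1000 rpm PTO standard exists: at a given shaft torque limit, the faster shaft can transmit nearly twice the power.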
Virtually all modern tractors can also provide external hydraulic fluid and electrical power to the equipment they are towing, either by hoses or wires.

Operation

Modern tractors have many electrical switches and levers in the cab for controlling the multitude of different functions available on the tractor.

Pedals

Some modern farm tractors retain a traditional manual transmission; increasingly they have hydraulically driven powershift transmissions and CVTs, which vastly simplify operation. Those with powershift transmissions have a similar pedal arrangement on the floor, with the clutch pedal on the far left replaced by an inching pedal that cuts off hydraulic flow to the clutches. Twinned brake pedals – one each for the left and right side wheels – are placed together on the right side. Some have a pedal for a foot throttle on the far right. Unlike automobiles, throttle speed can also be controlled by a hand-operated lever ("hand throttle"), which may be set to a fixed position. This helps provide a constant speed in field work. It also helps provide continuous power for stationary tractors that are operating an implement by PTO shaft or axle-driven belt. The foot throttle gives the operator more automobile-like control over the speed of a mobile tractor in any operation. Some modern tractors also have (or offer as optional equipment) a button on the gear stick for controlling the clutch, in addition to the standard pedal, allowing for gear changes and the tractor to be brought to a stop without using the foot pedal to engage the clutch. Others have a button for temporarily increasing throttle speed to improve hydraulic flow to implements, such as a front end loader bucket. Independent left and right brake pedals are provided to allow improved steering (by engaging the side one wishes to turn to, slowing or stopping its wheel) and improved traction in soft and slippery conditions (by transferring rotation to the wheel with better grip).
Some users prefer to lock both pedals together, or utilize a partial lock that allows the left pedal to be depressed independently but engages both when the right is applied. This may be in the form of a swinging or sliding bolt that may be readily engaged or disengaged in the field without tools. Foot-pedal throttle control has largely returned as a feature on newer tractors. In the UK, use of the foot pedal to control engine speed while travelling on the road is mandatory. Some tractors, especially those designed for row-crop work, have a 'de-accelerator' pedal, which operates in the reverse fashion of an automobile throttle, slowing the engine when applied. This allows control over the speed of a tractor with its throttle set high for work, as when repeatedly slowing to make U-turns at the end of crop rows in fields. A front-facing foot button is traditionally included just ahead of the driver's seat (designed to be pressed by the operator's heel) to engage the rear differential lock (diff-lock), which prevents wheel slip. The differential normally allows the driving wheels to operate at their own speeds, as required, for example, by the different radius each takes in a turn. This allows the outside wheel to travel faster than the inside wheel, thereby traveling further during a turn. In low-traction conditions on a soft surface, the same mechanism can allow one wheel to slip, wasting its torque and further reducing traction. The differential lock overrides this, forcing both wheels to turn at the same speed, reducing wheel slip and improving traction. Care must be taken to unlock the differential before turning, usually by hitting the pedal a second time, since a tractor with good traction cannot perform a turn with the diff-lock engaged. In many modern tractors, this pedal is replaced with an electrical switch.
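The turn geometry behind the differential – each driving wheel tracing a different radius about the turn center – can be illustrated with a short calculation. The turn radius, track width, and travel speed below are hypothetical values chosen for illustration:

```python
# Illustrative sketch: in a turn, each driving wheel's speed is proportional
# to the radius it traces about the turn center. All figures are hypothetical.

def wheel_speeds(turn_radius_m, track_width_m, vehicle_speed):
    """Speeds of the inner and outer driving wheels for a given turn."""
    r_inner = turn_radius_m - track_width_m / 2
    r_outer = turn_radius_m + track_width_m / 2
    return (vehicle_speed * r_inner / turn_radius_m,
            vehicle_speed * r_outer / turn_radius_m)

# A 4 m turn radius with a 1.8 m track, travelling at 5 km/h:
inner, outer = wheel_speeds(4.0, 1.8, 5.0)
print(f"inner wheel {inner:.2f} km/h, outer wheel {outer:.2f} km/h")
```

With the diff-lock engaged, both wheels are forced to the same speed, so one of them must scrub against the ground in a turn – which is why the lock must be released before turning.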
Levers and switches

Many functions once controlled with levers have been replaced with some form of electrical switch with the rise of indirect computer control of functions in modern tractors. Until the late 1950s, tractors had a single register of gears, hence one gear stick, often with three to five forward gears and one reverse. Then, group gears were introduced, and another gear stick was added. Later, control of the forward-reverse direction was moved to a special stick attached at the side of the steering wheel, which allowed forward or reverse travel in any gear. Now, with CVTs or other gear types, fewer sticks control the transmission, and some are replaced with electrical switches or are totally computer-controlled. The three-point hitch was controlled with a lever for adjusting the position, or, as with the earliest ones, just the function for raising or lowering the hitch. With modern electrical systems, it is often replaced with a potentiometer for the lower bound position and another one for the upper bound, and a switch allowing automatic adjustment of the hitch between these settings. The external hydraulics also originally had levers, but now are often replaced with some form of electrical switch; the same is true for the power take-off shaft.

Safety

Agriculture in the United States is one of the most hazardous industries, surpassed only by mining and construction. No other farm machine is so identified with the hazards of production agriculture as the tractor. Tractor-related injuries account for approximately 32% of the fatalities and 6% of the nonfatal injuries in agriculture. Over 50% of these fatalities are attributed to tractor overturns. The roll-over protection structure (ROPS) and seat belt, when worn, are the most important safety devices to protect operators from death during tractor overturns. Modern tractors have a ROPS to prevent an operator from being crushed when overturning.
This is especially important in open-air tractors, where the ROPS is a steel beam that extends above the operator's seat. For tractors with operator cabs, the ROPS is part of the frame of the cab. A ROPS with enclosed cab further reduces the likelihood of serious injury because the operator is protected by the sides and windows of the cab. These structures were first required by legislation in Sweden in 1959. Before they were required, some farmers died when their tractors rolled on top of them. Row-crop tractors, before ROPS, were particularly dangerous because of their 'tricycle' design with the two front wheels spaced close together and angled inward toward the ground. Some farmers were killed by rollovers while operating tractors along steep slopes. Others have been killed while attempting to tow or pull an excessive load from above axle height, or when cold weather caused the tires to freeze to the ground, in both cases causing the tractor to pivot around the rear axle. ROPS were first required in the United States in 1986, non-retroactively. ROPS adoption by farmers is thus incomplete. To address this problem, CROPS (cost-effective roll-over protection structures) have been developed to encourage farmers to retrofit older tractors. For the ROPS to work as designed, the operator must stay within its protective frame and wear the seat belt. In addition to ROPS, U.S. manufacturers add instructional seats on tractors with enclosed cabs. The tractors have a ROPS with seatbelts for both the operator and passenger. This instructional seat is intended to be used for training new tractor operators, but can also be used to diagnose machine problems. The misuse of an instructional seat increases the likelihood of injury, especially when children are transported.
The International Organization for Standardization's ISO standard 23205:2014 specifies the minimum design and performance requirements for an instructional seat and states that the instructional seat is neither intended for, nor designed for, use by children. Despite this, upwards of 40% of farm families give their children rides on tractors, often using these instructional seats.

Applications and variations

Farm

The most common use of the term "tractor" is for the vehicles used on farms. The farm tractor is used for pulling or pushing agricultural machinery or trailers, for plowing, tilling, disking, harrowing, planting, and similar tasks. A variety of specialty farm tractors have been developed for particular uses. These include "row crop" tractors with adjustable tread width to allow the tractor to pass down rows of cereals, maize, tomatoes or other crops without crushing the plants; "wheatland" or "standard" tractors with fixed wheels and a lower center of gravity for plowing and other heavy field work on broadcast crops; "high crop" tractors with adjustable tread and increased ground clearance, often used in the cultivation of cotton and other high-growing row crops; and "utility tractors", typically smaller tractors with a low center of gravity and short turning radius, used for general purposes around the farmstead. Many utility tractors are used for nonfarm grading, landscape maintenance and excavation purposes, particularly with loaders, backhoes, pallet forks and similar devices. Small garden or lawn tractors designed for suburban and semirural gardening and landscape maintenance are produced in a variety of configurations, and also find numerous uses on a farmstead.
Some farm-type tractors are found elsewhere than on farms: with large universities' gardening departments, in public parks, or in highway maintenance use with blowtorch cylinders strapped to the sides and a pneumatic drill air compressor permanently fastened over the power take-off. These are often fitted with grass (turf) tires, which are less damaging to soft surfaces than agricultural tires.

Precision

Space technology has been incorporated into agriculture in the form of GPS devices and robust on-board computers installed as optional features on farm tractors. These technologies are used in modern, precision farming techniques. The spin-offs from the space race have facilitated automation in plowing and the use of autosteer systems, in which the tractor is manned but steers itself along each row, with the operator taking over only at the row ends; the idea is to neither overlap and use more fuel nor leave streaks when performing jobs such as cultivating. Several tractor companies have also been working on producing a driverless tractor.

Engineering

The durability and engine power of tractors made them very suitable for engineering tasks. Tractors can be fitted with engineering tools such as dozer blades, buckets, hoes, rippers, etc. The most common attachments for the front of a tractor are dozer blades or buckets. When attached to engineering tools, the tractor is called an engineering vehicle. A bulldozer is a track-type tractor with a blade attached in the front and a rope-winch behind. Bulldozers are very powerful tractors and have excellent ground-hold, as their main tasks are to push or drag. Bulldozers have been further modified over time to evolve into new machines which are capable of working in ways that the original bulldozer can not.
One example is that loader tractors were created by removing the blade and substituting a large-volume bucket and hydraulic arms which can raise and lower the bucket, thus making it useful for scooping up earth, rock and similar loose material to load it into trucks. A front-loader or loader is a tractor with an engineering tool which consists of two hydraulic-powered arms on either side of the front engine compartment and a tilting implement. This is usually a wide-open box called a bucket, but other common attachments are a pallet fork and a bale grappler. Other modifications to the original bulldozer include making the machine smaller to let it operate in small work areas where movement is limited. Also, tiny wheeled loaders, officially called skid-steer loaders but nicknamed "Bobcat" after the original manufacturer, are particularly suited for small excavation projects in confined areas.

Backhoe

The most common variation of the classic farm tractor is the backhoe, also called a backhoe-loader. As the name implies, it has a loader assembly on the front and a backhoe on the back. Backhoes attach to a three-point hitch on farm or industrial tractors. Industrial tractors are often heavier in construction, particularly with regards to the use of a steel grill for protection from rocks and the use of construction tires. When the backhoe is permanently attached, the machine usually has a seat that can swivel to the rear to face the hoe controls. Removable backhoe attachments almost always have a separate seat on the attachment. Backhoe-loaders are very common and can be used for a wide variety of tasks: construction, small demolitions, light transportation of building materials, powering building equipment, digging holes, loading trucks, breaking asphalt and paving roads. Some buckets have retractable bottoms, enabling them to empty their loads more quickly and efficiently. Buckets with retractable bottoms are also often used for grading and scratching off sand.
The front assembly may be a removable attachment or permanently mounted. Often the bucket can be replaced with other devices or tools. Their relatively small frames and precise controls make backhoe-loaders very useful and common in urban engineering projects, such as construction and repairs in areas too small for larger equipment. Their versatility and compact size make them one of the most popular urban construction vehicles. In the UK and Ireland, the word "JCB" is used colloquially as a genericized trademark for any such type of engineering vehicle. The term JCB now appears in the Oxford English Dictionary, although it is still legally a trademark of J. C. Bamford Ltd. The term "digger" is also commonly used.

Compact utility

A compact utility tractor (CUT) is a smaller version of an agricultural tractor, designed primarily for landscaping and estate management tasks rather than for planting and harvesting on a commercial scale. Typical CUTs cover a modest range of engine power, with correspondingly lower available power take-off (PTO) power. CUTs are often equipped with both a mid-mounted and a standard rear PTO, especially the smaller models. The mid-mount PTO shaft typically rotates at or near 2000 rpm and is typically used to power mid-mount finish mowers, front-mounted snow blowers or front-mounted rotary brooms. The rear PTO is standardized at 540 rpm for the North American markets, but in some parts of the world, a dual 540/1000 rpm PTO is standard, and implements are available for either standard in those markets. One of the most common attachments for a CUT is the front-end loader or FEL. Like the larger agricultural tractors, a CUT will have an adjustable, hydraulically controlled three-point hitch. Typically, a CUT will have four-wheel drive, or more correctly four-wheel assist.
Modern CUTs often feature hydrostatic transmissions, but many variants of gear-drive transmissions are also offered, from low-priced, simple gear transmissions to synchronized transmissions to advanced glide-shift transmissions. All modern CUTs feature government-mandated roll-over protection structures, just like agricultural tractors. The most well-known brands in North America include Kubota, John Deere Tractor, New Holland Ag, Case-Farmall and Massey Ferguson. Although less common, compact backhoes are often attached to compact utility tractors. Compact utility tractors require special, smaller implements than full-sized agricultural tractors. Very common implements include the box blade, the grader blade, the landscape rake, the post hole digger (or post hole auger), the rotary cutter (slasher or brush hog), a mid- or rear-mount finish mower, a broadcast seeder, a subsoiler and the rototiller (rotary tiller). In northern climates, a rear-mounted snow blower is very common; some smaller CUT models are available with front-mounted snow blowers powered by mid-PTO shafts. Implement brands outnumber tractor brands, so CUT owners have a wide selection of implements. For small-scale farming or large-scale gardening, some planting and harvesting implements are sized for CUTs. One- and two-row planting units are commonly available, as are cultivators, sprayers and different types of seeders (slit, rotary and drop). One of the first CUTs offered for small farms of three to 30 acres and for small jobs on larger farms was a three-wheeled unit, with the rear wheel being the drive wheel, offered by Sears & Roebuck in 1954 and priced at $598 for the basic model. An even smaller variant of the compact utility tractor is the subcompact utility tractor. Although these tractors are often barely larger than a riding lawn mower, they have all the same features of a compact tractor, such as a three-point hitch, power steering, four-wheel drive, and a front-end loader.
These tractors are generally marketed towards homeowners who intend to use them mostly for lawn mowing, with the occasional light landscaping task.

Standard

The earliest tractors were called "standard" tractors, and were intended almost solely for plowing and harrowing before planting, which were difficult tasks for humans and draft animals. They were characterized by a low, rearward seating position, fixed-width tread, and low ground clearance. These early tractors were cumbersome and ill-suited to enter a field of planted row crops for weed control. The "standard" tractor definition is no longer in current use. However, tractors with fixed wheel spacing and a low center of gravity are well-suited as loaders, forklifts and backhoes, so the configuration continues in use without the "standard" nomenclature.

Row-crop

A general-purpose or row-crop tractor is tailored specifically to the growing of crops grown in rows, and most especially to cultivating those crops. These tractors are universal machines, capable of both primary tillage and cultivation of a crop. The row-crop tractor category evolved rather than appearing overnight, but the International Harvester (IH) Farmall is often considered the "first" tractor of the category. Some earlier tractors of the 1910s and 1920s approached the form factor from the heavier side, as did motorized cultivators from the lighter side, but the Farmall brought all of the salient features together into one package, with a capable distribution network to ensure its commercial success. In the new form factor that the Farmall popularized, the cultivator was mounted in the front so it was easily visible. Additionally, the tractor had a narrow front end; the front tires were spaced very closely and angled in toward the bottom. The back wheels straddled two rows with their spacing adjustable depending on row spacing, and the unit could cultivate four rows at once.
Where wide front wheels were used, they often could be adjusted as well. Tractors with non-adjustable spacing were called "standard" or "wheatland", and were chiefly meant for pulling plows or other towed implements, typically with a lower overall tractor height than row-crop models. From 1924 until 1963, Farmalls were the largest-selling row-crop tractors. To compete, John Deere designed the Model C, which had a wide front and could cultivate three rows at once. Only 112 prototypes were made, as Deere realized it would lose sales to Farmall if its model did less. In 1928, Deere released the Model C anyway, but as the Model GP (General Purpose), to avoid confusion with the Model D when ordered over the then-unclear telephone. Oliver refined its "Row Crop" model early in 1930. Until 1935, the 18–27 was Oliver–Hart-Parr's only row-crop tractor. Many Oliver row-crop models are referred to as "Oliver Row Crop 77", "Oliver Row Crop 88", etc. Many early row-crop tractors had a tricycle design with two closely spaced front tires, and some even had a single front tire. This made it dangerous to operate on the side of a steep hill; as a result, many farmers died from tractor rollovers. Also, early row-crop tractors had no rollover protection system (ROPS), meaning that if the tractor flipped back, the operator could be crushed. Sweden was the first country to pass legislation requiring ROPS, in 1959. Over 50% of tractor-related injuries and deaths are attributed to tractor rollover. Canadian agricultural equipment manufacturer Versatile makes row-crop tractors powered by an 8.9-liter Cummins diesel engine. Case IH and New Holland of CNH Industrial both produce high-horsepower front-wheel-assist row-crop tractors with available rear tracks. Case IH also has a four-wheel-drive track system called Rowtrac. John Deere has an extensive line of row-crop tractors.
Modern row crop tractors have rollover protection systems in the form of a reinforced cab or a roll bar.

Garden
Garden tractors, sometimes called lawn tractors, are small, light tractors designed for use in domestic gardens, lawns, and small estates. Lawn tractors are designed for cutting grass and snow removal, while garden tractors are for small property cultivation. In the U.S., the term riding lawn mower today often is used to refer to mid- or rear-engined machines. Front-engined tractor layout machines designed primarily for cutting grass and light towing are called lawn tractors; heavier-duty tractors of similar size are garden tractors. Garden tractors are capable of mounting a wider array of attachments than lawn tractors. Unlike lawn tractors and rear-engined riding mowers, garden tractors are powered by horizontal-crankshaft engines with a belt drive to transaxle-type transmissions (usually of four or five speeds, although some may also have two-speed reduction gearboxes, drive-shafts, or hydrostatic or hydraulic drives). Garden tractors from Wheel Horse, Cub Cadet, Economy (Power King), John Deere, Massey Ferguson and Case Ingersoll are built in this manner. The engines are generally one- or two-cylinder petrol (gasoline) engines, although diesel engine models are also available, especially in Europe. Typically, diesel-powered garden tractors are larger and heavier-duty than gasoline-powered units and compare more similarly to compact utility tractors. Visually, the distinction between a garden tractor and a lawn tractor is often hard to make; generally, garden tractors are more sturdily built, with stronger frames, 12-inch or larger wheels mounted with multiple lugs (most lawn tractors have a single bolt or clip on the hub), heavier transaxles, and the ability to accommodate a wide range of front, belly, and rear-mounted attachments.
Two-wheel
Although most people think primarily of four-wheel vehicles when they think of tractors, a tractor may have one or more axles. The key benefit is the tractive power itself, which only takes one axle to provide. Single-axle tractors, more often called two-wheel tractors or walk-behind tractors, have had many users since the introduction of internal combustion engine tractors. They tend to be small and affordable; this was especially true before the 1960s, when a walk-behind tractor could often be more affordable than a two-axle tractor of comparable power. Today's compact utility tractors and advanced garden tractors may negate most of that market advantage, but two-wheel tractors still have a following, especially among those who already own one. Countries where two-wheel tractors are especially prevalent today include Thailand, China, Bangladesh, India, and other Southeast Asian countries. Most two-wheel tractors today are specialty machines made for one purpose, such as snow blowers, push tillers, and self-propelled push mowers.

Orchard
Tractors tailored to use in fruit orchards typically have features suited to passing under tree branches with impunity. These include a lower overall profile; reduced tree-branch-snagging risk (via underslung exhaust pipes rather than smoke-stack-style exhaust, and large sheet-metal cowlings and fairings that allow branches to deflect and slide off rather than catch); spark arrestors on the exhaust tips; and often wire cages to protect the operator from snags.

Automobile conversions and other homemade versions
The ingenuity of farm mechanics, coupled in some cases with OEM or aftermarket assistance, has often resulted in the conversion of automobiles for use as farm tractors. In the United States, this trend was especially strong from the 1910s through the 1950s. It began early in the development of vehicles powered by internal combustion engines, with blacksmiths and amateur mechanics tinkering in their shops.
Especially during the interwar period, dozens of manufacturers (Montgomery Ward among them) marketed aftermarket kits for converting Ford Model Ts for use as tractors. (These were sometimes called "Hoover wagons" during the Great Depression, although this term was usually reserved for automobiles converted to horse-drawn buggy use when gasoline was unavailable or unaffordable. During the same period, another common name was "Doodlebug", after the popular kit of the same name.) Ford even considered producing an "official" optional kit. Many Model A Fords also were converted for this purpose. In later years, some farm mechanics have been known to convert more modern trucks or cars for use as tractors, more often as curiosities or for recreational purposes (rather than out of the earlier motives of pure necessity or frugality). During World War II, a shortage of tractors in Sweden led to the development of the so-called "EPA" tractor (EPA was a chain of discount stores, and the name was often used to signify something lacking in quality). An EPA tractor was simply an automobile, truck or lorry with the passenger space cut off behind the front seats, equipped with two gearboxes in a row. When done to an older car with a ladder frame, the result was similar to a tractor and could be used as one. After the war it remained popular as a way for young people without a driver's license to own something similar to a car. Since it was legally seen as a tractor, it could be driven from 16 years of age and only required a tractor license. Eventually, the legal loophole was closed and no new EPA tractors were allowed to be made, but the remaining ones were still legal, which led to inflated prices and many protests from people who preferred EPA tractors to ordinary cars. The Swedish government eventually replaced them with the so-called "A-tractor", which has its speed limited to 30 km/h and allows people aged 16 and older to drive one with a moped license.
The German occupation of Italy during World War II resulted in a severe shortage of mechanized farm equipment. The destruction of tractors was a sort of scorched-earth strategy used to reduce the independence of the conquered. The shortage of tractors in that area of Europe was the origin of Lamborghini. The war was also the inspiration for dual-purpose vehicles such as the Land Rover. Inspired by the Jeep, the company made a vehicle that combined PTO, tillage, four-wheel drive, and transportation. In March 1975, a similar type of vehicle was introduced in Sweden, the A tractor [from arbetstraktor (work tractor)]; the main difference is that an A tractor has a top speed of 30 km/h. This is usually achieved by fitting two gearboxes in a row and only using one of them. The Volvo Duett was, for a long time, the primary choice for conversion to an EPA or A tractor, but since supplies have dried up, other cars have been used, in most cases another Volvo. The SFRO is a Swedish organization advocating homebuilt and modified vehicles. Another type of homemade tractor is one fabricated from scratch. The "from scratch" description is relative, as often individual components will be repurposed from earlier vehicles or machinery (e.g., engines, gearboxes, axle housings), but the tractor's overall chassis is essentially designed and built by the owner (e.g., a frame is welded from bar stock, channel stock, angle stock, flat stock, etc.). As with automobile conversions, the heyday of this type of tractor, at least in developed economies, lies in the past, when there were large populations of blue-collar workers for whom metalworking and farming were prevalent parts of their lives. (For example, many 19th- and 20th-century New England and Midwestern machinists and factory workers had grown up on farms.) Backyard fabrication was a natural activity to them, whereas it might seem daunting to most people today.
Nomenclature
The term "tractor" (US and Canada) or "tractor unit" (UK) is also applied to: Road tractors, tractor units or traction heads, familiar as the front end of an articulated lorry / semi-trailer truck. They are heavy-duty vehicles with large engines and several axles. The majority of these tractors are designed to pull long semi-trailers, most often to transport freight over a significant distance, and are connected to the trailer with a fifth-wheel coupling. In England, this type of "tractor" is often called an "artic cab" (short for "articulated" cab). A minority are ballast tractors, whose loads are hauled from a drawbar. Pushback tractors are used at airports to move aircraft on the ground, most commonly pushing aircraft away from their parking stands. Locomotive tractors (engines) or rail car movers – the amalgamation of machines, electrical generators, controls and devices that comprise the traction component of railway vehicles. Artillery tractors – vehicles used to tow artillery pieces of varying weights. NASA and other space agencies use very large tractors to move large launch vehicles and Space Shuttles between their hangars and launch pads. A pipe-tractor is a device used for conveying advanced instruments into pipes for measurement and data logging, and for the purging of well holes, sewer pipes and other inaccessible tubes.

Nebraska tests
Nebraska tractor tests are tests mandated by the Nebraska Tractor Test Law and administered by the University of Nebraska, which objectively test the performance of all brands of tractors, 40 horsepower or more, sold in Nebraska. In the 1910s and 1920s, an era of snake-oil sales and advertising tactics, the Nebraska tests helped farmers throughout North America to see through marketing claims and make informed buying decisions. The tests continue today, making sure tractors fulfill the manufacturer's advertised claims.
Manufacturers
Some of the many tractor manufacturers and brands worldwide include: Belarus, Case IH, Caterpillar, Claas, Challenger, Deutz-Fahr, Fendt, ITMCO, Iseki, JCB, John Deere, Lamborghini, Landini, Kubota, Mahindra Tractors, Massey Ferguson, McCormick, Mercedes-Benz, New Holland, SAME, Steyr, TAFE, Ursus, Valtra and Zetor. In addition to commercial manufacturers, the Open Source Ecology group has developed several working prototypes of an open source hardware tractor called the LifeTrac as part of its Global Village Construction Set.

See also
Agricultural machinery; Artillery tractor; Ballast tractor; Big Bud 747, the world's largest farm tractor; Driverless tractor; Heavy equipment; Lester F. Larsen Tractor Museum; Non-road engine; Power take-off; Railcar mover; Terminal tractor; Tractor pulling; Tractor unit; Two-wheel tractor; Unimog 70200 DT-20

References

External links
Tractor information; Purdue University Tractor Safety Article re: ROPS, PTO, etc; Nebraska Tractor Test Laboratory Historical Tractor Test Reports and Manufacturers' Literature Reports on 400+ models 1903–2006; A History of Tractors at the Canada Agriculture Museum; Tractor safety: EU Working Group on Agricultural Tractors – Work Safety; EU Directives on tractor design (Mapped Index, or Numerical Index); Tractor Safety (National Agricultural Safety Database); Tractor Safety (National Safety Council); Adaptive Tractor Overturn Prediction System; Tractor Overturn Protection and Prevention; ACC: Farm safety: Vehicles, machinery and equipment; CDC – Agricultural Safety: Cost-effective Rollover Protective Structures – NIOSH Workplace Safety and Health Topic

Agricultural machinery Engineering vehicles Heavy equipment Vehicles introduced in 1901
Tractor
Engineering
https://en.wikipedia.org/wiki/Tollens%27%20reagent
Tollens' reagent (chemical formula Ag(NH3)2OH) is a chemical reagent used to distinguish between aldehydes and ketones, along with some alpha-hydroxy ketones which can tautomerize into aldehydes. The reagent consists of a solution of silver nitrate, ammonium hydroxide and some sodium hydroxide (to maintain a basic pH of the reagent solution). It was named after its discoverer, the German chemist Bernhard Tollens. A positive test with Tollens' reagent is indicated by the precipitation of elemental silver, often producing a characteristic "silver mirror" on the inner surface of the reaction vessel.

Laboratory preparation
This reagent is not commercially available due to its short shelf life, so it must be freshly prepared in the laboratory. One common preparation involves two steps. First, a few drops of dilute sodium hydroxide are added to some aqueous 0.1 M silver nitrate. The OH− ions convert the silver aquo complex into silver(I) oxide, Ag2O, which precipitates from the solution as a brown solid:
2AgNO3 + 2NaOH -> Ag2O(s) + 2NaNO3 + H2O
In the next step, sufficient aqueous ammonia is added to dissolve the brown silver(I) oxide. The resulting solution contains [Ag(NH3)2]+ complexes, the main component of Tollens' reagent. Sodium hydroxide is reformed:
Ag2O(s) + 4NH3 + 2NaNO3 + H2O -> 2[Ag(NH3)2]NO3 + 2NaOH
Alternatively, aqueous ammonia can be added directly to silver nitrate solution. At first, ammonia will induce formation of solid silver oxide, but with additional ammonia this solid precipitate dissolves to give a clear solution of the diamminesilver(I) coordination complex, [Ag(NH3)2]+. Filtering the reagent before use helps to prevent false-positive results.

Uses
Qualitative organic analysis
Once the presence of a carbonyl group has been identified using 2,4-dinitrophenylhydrazine (also known as Brady's reagent, 2,4-DNPH or 2,4-DNP), Tollens' reagent can be used to distinguish an aldehyde from a ketone.
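The two preparation equations fix the proportions of reagent involved. As an illustration only, a short Python sketch of the stoichiometry; the solution volume is a hypothetical figure, not from the source:

```python
# Stoichiometry sketch for Tollens' reagent preparation (illustrative quantities).
# Step 1: 2 AgNO3 + 2 NaOH -> Ag2O + 2 NaNO3 + H2O   (1 mol NaOH per mol AgNO3)
# Step 2: Ag2O + 4 NH3 + 2 NaNO3 + H2O -> 2 [Ag(NH3)2]NO3 + 2 NaOH  (4 mol NH3 per mol Ag2O)

volume_l = 0.010          # assumed 10 mL of silver nitrate solution (hypothetical)
agno3_molarity = 0.1      # 0.1 M AgNO3, as in the text

mol_agno3 = volume_l * agno3_molarity   # mol of AgNO3 present
mol_naoh = mol_agno3                    # 2:2 ratio in step 1
mol_ag2o = mol_agno3 / 2                # 2 AgNO3 give 1 Ag2O
mol_nh3 = 4 * mol_ag2o                  # 4 NH3 needed to dissolve each Ag2O

print(mol_agno3, mol_naoh, mol_ag2o, mol_nh3)
```

In practice ammonia is added dropwise just until the brown precipitate dissolves, so the computed figure is a minimum rather than a recipe.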
Tollens' reagent gives a negative test for most ketones, with alpha-hydroxy ketones being one exception. The test rests on the premise that aldehydes are more readily oxidized than ketones; this is due to the carbonyl-containing carbon in aldehydes having an attached hydrogen. The diammine silver(I) complex in the mixture is an oxidizing agent and is the essential reactant in Tollens' reagent. The test is generally carried out in a test tube in a warm water bath. In a positive test, the diammine silver(I) complex oxidizes the aldehyde to a carboxylate ion and in the process is reduced to elemental silver and aqueous ammonia. The elemental silver precipitates out of solution, occasionally onto the inner surface of the reaction vessel, giving a characteristic "silver mirror". The carboxylate ion on acidification will give its corresponding carboxylic acid. The carboxylic acid is not directly formed in the first place, as the reaction takes place under alkaline conditions. The ionic equation for the overall reaction is shown below; R refers to an alkyl group.
2[Ag(NH3)2]+ + R-CHO + H2O -> 2Ag(s) + 4NH3 + R-COOH + 2H+
Tollens' reagent can also be used to test for terminal alkynes (RC≡CH). A white precipitate of the acetylide (AgC≡CR) is formed in this case. Another test relies on the reaction of furfural with phloroglucinol to produce a colored compound with high molar absorptivity. It also gives a positive test with hydrazines, hydrazones, α-hydroxy ketones and 1,2-dicarbonyls. Both Tollens' reagent and Fehling's reagent give positive results with formic acid.

Staining
In anatomic pathology, ammoniacal silver nitrate is used in the Fontana–Masson stain, a silver stain technique used to detect melanin, argentaffin and lipofuscin in tissue sections. Melanin and the other chromaffins reduce the silver nitrate to metallic silver.
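The overall ionic equation can be checked mechanically for atom and charge balance. A small Python sketch, treating the alkyl group R as an opaque unit:

```python
# Verify atom and charge balance of:
# 2 [Ag(NH3)2]+ + R-CHO + H2O -> 2 Ag + 4 NH3 + R-COOH + 2 H+
diammine_silver = {"Ag": 1, "N": 2, "H": 6, "charge": 1}
aldehyde        = {"R": 1, "C": 1, "H": 1, "O": 1}   # R-CHO: one carbonyl H besides R
water           = {"H": 2, "O": 1}
silver          = {"Ag": 1}
ammonia         = {"N": 1, "H": 3}
carboxylic_acid = {"R": 1, "C": 1, "H": 1, "O": 2}   # R-COOH: one acidic H besides R
proton          = {"H": 1, "charge": 1}

def totals(side):
    """Sum element counts (and net charge) over (coefficient, species) pairs."""
    out = {}
    for coeff, species in side:
        for key, n in species.items():
            out[key] = out.get(key, 0) + coeff * n
    return out

left  = totals([(2, diammine_silver), (1, aldehyde), (1, water)])
right = totals([(2, silver), (4, ammonia), (1, carboxylic_acid), (2, proton)])
print(left == right)  # -> True: every element and the net charge (+2) balance
```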
In silver mirroring, Tollens' reagent is also used to apply a silver mirror to glassware, for example the inside of an insulated vacuum flask. The underlying chemical process is called the silver mirror reaction. The reducing agent for such applications is glucose (an aldehyde). Clean glassware is required for a high-quality mirror. To increase the speed of deposition, the glass surface may be pre-treated with tin(II) chloride stabilised in hydrochloric acid solution. For applications requiring the highest optical quality, such as telescope mirrors, the use of tin(II) chloride is problematic, since it creates nanoscale roughness and reduces the reflectivity. Methods to produce telescope mirrors include additional additives to increase adhesion and film resilience, such as Martin's method, which includes tartaric acid and ethanol.

Safety
Aged reagent can be destroyed with dilute acid to prevent the formation of the highly explosive silver nitride.

See also
Benedict's reagent; Walden reductor (opposite use involving metallic silver)

References

External links
Video of experimental process involving Tollens' reagent; Tollens' reagent on www.wiu.edu; Univ. of Minnesota Organic Chemistry Class Demo Result

Silver compounds Oxidizing agents Coordination complexes Chemical tests Analytical reagents Ammine complexes
Tollens' reagent
Chemistry
https://en.wikipedia.org/wiki/Michael%20J.%20Adams
Michael James Adams (May 5, 1930 – November 15, 1967) (Maj USAF) was an American aviator, aeronautical engineer, and USAF astronaut. He was one of twelve pilots who flew the North American X-15, an experimental spaceplane jointly operated by the Air Force and NASA. On November 15, 1967, Adams flew X-15 Flight 191 (also known as X-15 Flight 3-65-97) aboard the X-15-3, one of three planes in the X-15 fleet. Flying to an altitude above 50 miles, Adams qualified as an astronaut according to the United States definition of the boundary of space. Moments later the craft broke apart, killing Adams and destroying the X-15-3. He was the first American space mission fatality by the American convention.

Background
Early life and military experience
Adams was born May 5, 1930, in Sacramento, California. He graduated from Sacramento Junior College. He enlisted in the United States Air Force in 1950, and earned his pilot wings and commission in 1952 at Webb Air Force Base, Texas. He served as a fighter-bomber pilot during the Korean War, where he flew 49 combat missions. This was followed by 30 months with the 613th Fighter-Bomber Squadron at England Air Force Base, Louisiana, and six months rotational duty at Chaumont Air Base in France.

Education and flight experience
In 1958, Adams received a Bachelor of Science degree in Aeronautical Engineering from the University of Oklahoma and, after 18 months of astronautics study at the Massachusetts Institute of Technology, was selected in 1962 for the U.S. Air Force Test Pilot School at Edwards Air Force Base, California. Here, he won the A.B. Honts Trophy as the best scholar and pilot in his class. Adams subsequently attended the Aerospace Research Pilot School (ARPS), graduating with honors in December 1963. He was one of four Edwards aerospace research pilots to participate in a five-month series of NASA Moon landing practice tests at the Martin Company in Baltimore, Maryland.
In November 1965, he was selected to be an astronaut in the United States Air Force Manned Orbiting Laboratory program. In July 1966, Major Adams came to the North American X-15 program, a joint USAF/NASA project. He made his first X-15 flight on October 6, 1966.

Death
Adams's seventh X-15 flight, Flight 3-65-97, took place on November 15, 1967. He reached a peak altitude of ; the nose of the aircraft was off heading by 15 degrees to the right. While descending, at the aircraft encountered rapidly increasing aerodynamic pressure which impinged on the airframe, causing the X-15 to enter a violent Mach 5 spin. As the X-15 neared , it was diving at Mach 3.93 and experiencing more than 15 g vertically (positive and negative) and 8 g laterally, which exceeded the design limits of the aircraft. The aircraft broke up 10 minutes and 35 seconds after launch, killing Adams. The United States Air Force posthumously awarded him Astronaut Wings for his last flight. An excerpt from NASA's biography page on Mike Adams discusses findings from the crash investigation: Ground parties scoured the countryside looking for wreckage; critical to the investigation was the film from the cockpit camera. The weekend after the accident, an unofficial FRC (Flight Research Center) search party found the camera; disappointingly, the film cartridge was nowhere in sight. Engineers theorized that the film cassette, being lighter than the camera, might be further away, blown north by winds at altitude. FRC engineer Victor Horton organized a search and on 29 November, during the first pass over the area, Willard E. Dives found the cassette. Most puzzling was Adams's complete lack of awareness of major heading deviations in spite of accurately functioning cockpit instrumentation.
The accident board concluded that he had allowed the aircraft to deviate as the result of a combination of distraction, misinterpretation of his instrumentation display, and possible vertigo. The electrical disturbance early in the flight degraded the overall effectiveness of the aircraft's control system and further added to pilot workload. The MH-96 adaptive control system then caused the airplane to break up during reentry. His remains were buried at the Mulhearn Memorial Park Cemetery, Monroe, Ouachita Parish, Louisiana.

Awards and honors
During his military career he was awarded: the Astronaut Wings (posthumously), the Air Medal, the Air Force Commendation Medal, the Korean Service Medal, the United Nations Service Medal for Korea, the National Defense Service Medal with 1 bronze service star, the Air Force Longevity Service Award with 4 clusters, the Air Force Good Conduct Medal, and the A.B. Honts Trophy.

Memorials
In 1991, Adams's name was added to the Space Mirror Memorial at the Kennedy Space Center in Florida. On June 8, 2004, a memorial monument to Adams was erected near the crash site, northwest of Randsburg, California.

References

External links
Michael J. Adams at nasa.gov

1930 births 1967 deaths Accidental deaths in California American aerospace engineers American Korean War pilots American test pilots Aviators from California Aviators killed in aviation accidents or incidents in the United States Engineers from California Military personnel from Sacramento, California People who have flown in suborbital spaceflight Recipients of the Air Medal Sacramento City College alumni Space program fatalities 20th-century American engineers United States Air Force astronauts United States Air Force officers University of Oklahoma alumni U.S. Air Force Test Pilot School alumni Victims of aviation accidents or incidents in 1967 Victims of flight test accidents X-15 program
Michael J. Adams
Engineering
https://en.wikipedia.org/wiki/Ate-u-tiv
Ate-u-Tiv (sometimes written as "Ate u Tiv" and less popularly known as "Tsun") is a kind of communal reception hut built by the Tiv people of the Middle Belt region of Nigeria in West Africa. The word "Atē" stands for the round, open hut, while "Átē-ŭ-Tiv" attributes it to the Tiv people. The Ate-u-Tiv serves as a relaxation and reception point for "vanya" (guests) and allows "mbamaren, ônov man angbianev" (family members) to "tema imiôngo" (chat), sharing ideas and telling stories. The "Orya" (family head) receives guests and attends to family issues (discussions) from the "Ate". A traditional Ate-u-Tiv is supported by a minimum of six poles called "mtôm", which are Y-shaped at the top; these serve as the pillars. The total number of poles depends on the diameter of the Ate. The poles are erected upright in a circle, spaced evenly. The "ukyaver", stems of slim climbing plants, form a sort of lintel to hold the roof. The roof of an "Ate-u-Tiv" comprises "ihyange" (purlins) and "ihila" (grass). The ihyange are woven together in a cone shape, with ukyaver holding together the rafters. The completed structure is then hoisted onto the pillars/lintel with the coned top upright. Ihila, having been woven together, is then used to provide a thick-layered roof. This roof filters incoming air, making it cool and clean, and at the same time keeps out the rain. The Tiv people are well known for their hospitality, and the "Ate-u-Tiv" is an important component of this hospitality. In order to readily receive visitors, each compound builds an "Ate", which is furnished with chairs made from wood, canes, etc. In modern days, the components of the "Ate" may vary. Some roofs are now a combination of iron roofing sheets covered by grass that may not necessarily be "ihila"; purlins are regularly made of plywood, etc. The "Ate" design now adorns public places such as hotel gardens, public amusement parks, zoos, museums, etc.
New usage of the term "Ate-u-Tiv" may refer to a meeting place, social network or forum.

See also
Tiv people; Tiv language

References

External links
Tiv Social Network; Tiv Internet Project

Nigeria Architecture Thatching Huts Tiv people
Ate-u-tiv
Physics
https://en.wikipedia.org/wiki/Dynamic/Dialup%20Users%20List
A Dial-up/Dynamic User List (DUL) is a type of DNSBL which contains the IP addresses that an ISP assigns to its customers on a temporary basis, often using DHCP or similar protocols. Dynamically assigned IP addresses are contrasted with static IP addresses, which do not change once they have been allocated by the service provider. DULs serve several purposes. Their primary function is to assist an ISP in enforcement of its Acceptable Use Policy, many of which prohibit customers from setting up an email server; customers are expected to use the email facilities of the service provider. This use of a DUL is especially helpful in curtailing abuse when a customer's computer has been converted into a zombie computer and is distributing email without the knowledge of the computer's owner. A second major use involves receivers who do not wish to accept email from computers with dynamically assigned IP addresses, and who use DULs to enforce this policy. Receivers adopt such policies because computers at dynamically assigned IP addresses are so often a source of spam. The first DUL was created by Gordon Fecyk in 1998. It quickly became quite popular because it addressed a specific tactic popular with spammers at the time. The DUL subsequently was absorbed by the Mail Abuse Prevention System (MAPS) in 1999. When MAPS was no longer a free service, other DNSBLs such as Dynablock, Not Just Another Bogus List (NJABL), and the Spam and Open Relay Blocking System (SORBS) began providing lists of dynamically assigned IP addresses.

References

Internet access
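Mechanically, a DUL is consulted like any other DNSBL: the mail server reverses the octets of the connecting client's IPv4 address, appends the list's DNS zone, and checks whether the resulting name resolves. A minimal Python sketch; the zone dul.example.net is a placeholder, not a real list:

```python
import socket

def dnsbl_query_name(ipv4: str, zone: str) -> str:
    """Build the DNSBL lookup name: reversed octets followed by the list zone."""
    octets = ipv4.split(".")
    return ".".join(reversed(octets)) + "." + zone

def is_listed(ipv4: str, zone: str) -> bool:
    """True if the address resolves within the zone, i.e. it is listed.
    A failed lookup (NXDOMAIN, raised as socket.gaierror) means 'not listed'."""
    try:
        socket.gethostbyname(dnsbl_query_name(ipv4, zone))
        return True
    except socket.gaierror:
        return False

# Example: check 203.0.113.7 against a hypothetical DUL zone.
print(dnsbl_query_name("203.0.113.7", "dul.example.net"))
# -> 7.113.0.203.dul.example.net
```

A real deployment would query an actual list operator's zone and typically inspect the returned 127.0.0.x address for the listing reason.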
Dynamic/Dialup Users List
Technology
https://en.wikipedia.org/wiki/Symmetry%20in%20quantum%20mechanics
Symmetries in quantum mechanics describe features of spacetime and particles which are unchanged under some transformation, in the context of quantum mechanics, relativistic quantum mechanics and quantum field theory, and with applications in the mathematical formulation of the standard model and condensed matter physics. In general, symmetry in physics, invariance, and conservation laws are fundamentally important constraints for formulating physical theories and models. In practice, they are powerful methods for solving problems and predicting what can happen. While conservation laws do not always give the answer to a problem directly, they form the correct constraints and the first steps to solving a multitude of problems. In application, understanding symmetries can also provide insights into the eigenstates that can be expected. For example, the existence of degenerate states can be inferred from the presence of non-commuting symmetry operators, or that non-degenerate states are also eigenvectors of symmetry operators. This article outlines the connection between the classical form of continuous symmetries as well as their quantum operators, and relates them to the Lie groups, and relativistic transformations in the Lorentz group and Poincaré group.

Notation
The notational conventions used in this article are as follows. Boldface indicates vectors, four-vectors, matrices, and vectorial operators, while quantum states use bra–ket notation. Wide hats are for operators, narrow hats are for unit vectors (including their components in tensor index notation). The summation convention on repeated tensor indices is used, unless stated otherwise. The Minkowski metric signature is (+−−−).

Symmetry transformations on the wavefunction in non-relativistic quantum mechanics
Continuous symmetries
Generally, the correspondence between continuous symmetries and conservation laws is given by Noether's theorem.
The form of the fundamental quantum operators, for example the energy operator as a partial time derivative and the momentum operator as a spatial gradient, becomes clear when one considers the initial state, then changes one parameter of it slightly. This can be done for displacements (lengths), durations (time), and angles (rotations). Additionally, the invariance of certain quantities can be seen by making such changes in lengths and angles, illustrating conservation of these quantities. In what follows, transformations on only one-particle wavefunctions in the form: are considered, where denotes a unitary operator. Unitarity is generally required for operators representing transformations of space, time, and spin, since the norm of a state (representing the total probability of finding the particle somewhere with some spin) must be invariant under these transformations. The inverse is the Hermitian conjugate . The results can be extended to many-particle wavefunctions. Written in Dirac notation as standard, the transformations on quantum state vectors are: Now, the action of changes to , so the inverse changes back to . Thus, an operator invariant under satisfies: Concomitantly, for any state ψ. Quantum operators representing observables are also required to be Hermitian, so that their eigenvalues are real numbers, i.e. the operator equals its Hermitian conjugate, .

Overview of Lie group theory
Following are the key points of group theory relevant to quantum theory; examples are given throughout the article. For an alternative approach using matrix groups, see the books of Hall. Let G be a Lie group, which is a group that locally is parameterized by a finite number of real, continuously varying parameters. In more mathematical language, this means that G is a smooth manifold that is also a group, for which the group operations are smooth.
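The requirement that a transformation operator be unitary, with its inverse equal to its Hermitian conjugate, can be illustrated numerically. A short NumPy sketch, using a random unitary matrix purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Build a random unitary matrix via QR decomposition of a complex matrix.
m = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u, _ = np.linalg.qr(m)

# Unitarity: the Hermitian conjugate acts as the inverse, U† U = I.
identity_check = np.allclose(u.conj().T @ u, np.eye(4))

# Norm preservation: the norm of a state (total probability) is invariant.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
norms_match = np.isclose(np.linalg.norm(u @ psi), np.linalg.norm(psi))

print(identity_check, norms_match)
```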
the dimension of the group, , is the number of parameters it has. the group elements, , in are functions of the parameters: and all parameters set to zero returns the identity element of the group: Group elements are often matrices which act on vectors, or transformations acting on functions. The generators of the group are the partial derivatives of the group elements with respect to the group parameters with the result evaluated when the parameter is set to zero: In the language of manifolds, the generators are the elements of the tangent space to G at the identity. The generators are also known as infinitesimal group elements or as the elements of the Lie algebra of G. (See the discussion below of the commutator.) One aspect of generators in theoretical physics is they can be constructed themselves as operators corresponding to symmetries, which may be written as matrices, or as differential operators. In quantum theory, for unitary representations of the group, the generators require a factor of : The generators of the group form a vector space, which means linear combinations of generators also form a generator. The generators (whether matrices or differential operators) satisfy the commutation relations: where are the (basis dependent) structure constants of the group. This makes, together with the vector space property, the set of all generators of a group a Lie algebra. Due to the antisymmetry of the bracket, the structure constants of the group are antisymmetric in the first two indices. The representations of the group then describe the ways that the group (or its Lie algebra) can act on a vector space. (The vector space might be, for example, the space of eigenvectors for a Hamiltonian having as its symmetry group.) We denote the representations using a capital . One can then differentiate to obtain a representation of the Lie algebra, often also denoted by . These two representations are related as follows: without summation on the repeated index . 
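The generators and structure constants described above can be made concrete for the rotation group, whose Lie algebra so(3) has the Levi-Civita symbol as its structure constants. The following sketch (a common convention, not tied to this article's stripped formulas) builds the three real 3×3 generators and verifies the commutation relation.

```python
import numpy as np

# Build the Levi-Civita symbol eps_abc.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# Generators of 3D rotations: (G_k)_ij = -eps_ijk (the "infinitesimal group
# elements" of the text, here as real matrices without the factor of i).
G = [np.array([[-eps[i, j, k] for j in range(3)] for i in range(3)])
     for k in range(3)]

# Check the Lie-algebra relation [G_a, G_b] = eps_abc G_c, whose
# (basis-dependent) structure constants are the Levi-Civita symbol.
for a in range(3):
    for b in range(3):
        commutator = G[a] @ G[b] - G[b] @ G[a]
        expected = sum(eps[a, b, c] * G[c] for c in range(3))
        assert np.allclose(commutator, expected)
```

The antisymmetry of the structure constants in the first two indices, noted in the text, is inherited directly from the Levi-Civita symbol here.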
Representations are linear operators that take in group elements and preserve the composition rule: A representation which cannot be decomposed into a direct sum of other representations, is called irreducible. It is conventional to label irreducible representations by a superscripted number in brackets, as in , or if there is more than one number, we write . There is an additional subtlety that arises in quantum theory, where two vectors that differ by multiplication by a scalar represent the same physical state. Here, the pertinent notion of representation is a projective representation, one that only satisfies the composition law up to a scalar. In the context of quantum mechanical spin, such representations are called spinorial. Momentum and energy as generators of translation and time evolution, and rotation The space translation operator acts on a wavefunction to shift the space coordinates by an infinitesimal displacement . The explicit expression can be quickly determined by a Taylor expansion of about , then (keeping the first order term and neglecting second and higher order terms), replace the space derivatives by the momentum operator . Similarly for the time translation operator acting on the time parameter, the Taylor expansion of is about , and the time derivative replaced by the energy operator . The exponential functions arise by definition as those limits, due to Euler, and can be understood physically and mathematically as follows. A net translation can be composed of many small translations, so to obtain the translation operator for a finite increment, replace by and by , where is a positive non-zero integer. Then as increases, the magnitude of and become even smaller, while leaving the directions unchanged. Acting the infinitesimal operators on the wavefunction times and taking the limit as tends to infinity gives the finite operators. Space and time translations commute, which means the operators and generators commute. 
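The limiting procedure described above, composing many infinitesimal transformations into a finite one, can be demonstrated numerically. The sketch below (illustrative values only) compounds N small rotation steps and shows that the result approaches the closed-form finite rotation, i.e. the matrix exponential of the generator.

```python
import numpy as np

# Generator of 2D rotations; exp(theta * G) is the rotation through theta.
G = np.array([[0.0, -1.0], [1.0, 0.0]])
theta = 0.7

def compound(n):
    """Apply n copies of the near-identity step (I + theta*G/n)."""
    step = np.eye(2) + (theta / n) * G
    return np.linalg.matrix_power(step, n)

exact = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])

# The error shrinks as n grows: the limit of (I + theta*G/n)^n is exp(theta*G).
err_small = np.linalg.norm(compound(10) - exact)
err_large = np.linalg.norm(compound(100000) - exact)
assert err_large < err_small
assert err_large < 1e-4
```
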
For a time-independent Hamiltonian, energy is conserved in time and quantum states are stationary states: the eigenstates of the Hamiltonian are the energy eigenvalues : and all stationary states have the form where is the initial time, usually set to zero since there is no loss of continuity when the initial time is set. An alternative notation is . Angular momentum as the generator of rotations Orbital angular momentum The rotation operator, , acts on a wavefunction to rotate the spatial coordinates of a particle by a constant angle : where are the rotated coordinates about an axis defined by a unit vector through an angular increment , given by: where is a rotation matrix dependent on the axis and angle. In group theoretic language, the rotation matrices are group elements, and the angles and axis are the parameters, of the three-dimensional special orthogonal group, SO(3). The rotation matrices about the standard Cartesian basis vector through angle , and the corresponding generators of rotations , are: More generally for rotations about an axis defined by , the rotation matrix elements are: where is the Kronecker delta, and is the Levi-Civita symbol. It is not as obvious how to determine the rotational operator compared to space and time translations. We may consider a special case (rotations about the , , or -axis) then infer the general result, or use the general rotation matrix directly and tensor index notation with and . To derive the infinitesimal rotation operator, which corresponds to small , we use the small angle approximations and , then Taylor expand about or , keep the first order term, and substitute the angular momentum operator components. The -component of angular momentum can be replaced by the component along the axis defined by , using the dot product . Again, a finite rotation can be made from many small rotations, replacing by and taking the limit as tends to infinity gives the rotation operator for a finite rotation. 
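The general rotation-matrix elements quoted above, written with the Kronecker delta and Levi-Civita symbol, can be checked against the familiar rotation about the z-axis. This sketch uses the common form R_ij = cos(t) δ_ij + (1 − cos t) n_i n_j − sin(t) ε_ijk n_k (sign conventions vary between sources).

```python
import numpy as np

# Levi-Civita symbol.
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

def rotation(n, t):
    """Rotation through angle t about unit axis n, from the index formula."""
    n = np.asarray(n, dtype=float)
    R = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            R[i, j] = (np.cos(t) * (i == j)
                       + (1 - np.cos(t)) * n[i] * n[j]
                       - np.sin(t) * sum(eps[i, j, k] * n[k] for k in range(3)))
    return R

t = 0.3
Rz = rotation([0, 0, 1], t)
# Agrees with the standard rotation about the Cartesian z-axis:
assert np.allclose(Rz, [[np.cos(t), -np.sin(t), 0],
                        [np.sin(t),  np.cos(t), 0],
                        [0,          0,         1]])
# Rotation matrices are orthogonal (elements of SO(3)): R^T R = I.
Rx = rotation([1, 0, 0], 1.2)
assert np.allclose(Rx.T @ Rx, np.eye(3))
```
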
Rotations about the same axis do commute, for example a rotation through angles and about axis can be written However, rotations about different axes do not commute. The general commutation rules are summarized by In this sense, orbital angular momentum has the common sense properties of rotations. Each of the above commutators can be easily demonstrated by holding an everyday object and rotating it through the same angle about any two different axes in both possible orderings; the final configurations are different. In quantum mechanics, there is another form of rotation which mathematically appears similar to the orbital case, but has different properties, described next. Spin angular momentum All previous quantities have classical definitions. Spin is a quantity possessed by particles in quantum mechanics without any classical analogue, having the units of angular momentum. The spin vector operator is denoted . The eigenvalues of its components are the possible outcomes (in units of ) of a measurement of the spin projected onto one of the basis directions. Rotations (of ordinary space) about an axis through angle about the unit vector in space acting on a multicomponent wave function (spinor) at a point in space is represented by: However, unlike orbital angular momentum in which the z-projection quantum number can only take positive or negative integer values (including zero), the z-projection spin quantum number s can take all positive and negative half-integer values. There are rotational matrices for each spin quantum number. Evaluating the exponential for a given z-projection spin quantum number s gives a (2s + 1)-dimensional spin matrix. This can be used to define a spinor as a column vector of 2s + 1 components which transforms to a rotated coordinate system according to the spin matrix at a fixed point in space. 
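The commutation behaviour just described, same-axis rotations commute while different-axis rotations do not, is easy to verify numerically (a sketch with arbitrary angles):

```python
import numpy as np

def Rx(t):
    return np.array([[1, 0,          0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def Ry(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0],
                     [-np.sin(t), 0, np.cos(t)]])

half_pi = np.pi / 2
# Different axes: the two orderings give different final configurations.
assert not np.allclose(Rx(half_pi) @ Ry(half_pi), Ry(half_pi) @ Rx(half_pi))
# Same axis: rotations commute and the angles simply add, R(a) R(b) = R(a + b).
assert np.allclose(Rx(0.4) @ Rx(0.3), Rx(0.7))
```

This is the matrix counterpart of the everyday-object demonstration mentioned in the text.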
For the simplest non-trivial case of s = 1/2, the spin operator is given by where the Pauli matrices in the standard representation are: Total angular momentum The total angular momentum operator is the sum of the orbital and spin and is an important quantity for multi-particle systems, especially in nuclear physics and the quantum chemistry of multi-electron atoms and molecules. We have a similar rotation matrix: Conserved quantities in the quantum harmonic oscillator The dynamical symmetry group of the n dimensional quantum harmonic oscillator is the special unitary group SU(n). As an example, the number of infinitesimal generators of the corresponding Lie algebras of SU(2) and SU(3) are three and eight respectively. This leads to exactly three and eight independent conserved quantities (other than the Hamiltonian) in these systems. The two dimensional quantum harmonic oscillator has the expected conserved quantities of the Hamiltonian and the angular momentum, but has additional hidden conserved quantities of energy level difference and another form of angular momentum. Lorentz group in relativistic quantum mechanics Following is an overview of the Lorentz group; a treatment of boosts and rotations in spacetime. Throughout this section, see (for example) T. Ohlsson (2011) and E. Abers (2004). Lorentz transformations can be parametrized by rapidity for a boost in the direction of a three-dimensional unit vector , and a rotation angle about a three-dimensional unit vector defining an axis, so and are together six parameters of the Lorentz group (three for rotations and three for boosts). The Lorentz group is 6-dimensional. Pure rotations in spacetime The rotation matrices and rotation generators considered above form the spacelike part of a four-dimensional matrix, representing pure-rotation Lorentz transformations. 
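For the s = 1/2 case above, the rotation matrix obtained by evaluating the exponential has the well-known closed form U = cos(t/2) I − i sin(t/2) (n·σ). The sketch below uses the common convention U = exp(−i t n·σ/2) (sign conventions differ between sources) and exhibits the hallmark of half-integer spin: a full 2π rotation gives −I on a spinor.

```python
import numpy as np

# Pauli matrices in the standard representation.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_half_rotation(n, t):
    """U = cos(t/2) I - i sin(t/2) n.sigma, the 2-dimensional spin matrix."""
    n_dot_sigma = n[0] * sx + n[1] * sy + n[2] * sz
    return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * n_dot_sigma

U = spin_half_rotation([0, 0, 1], 0.8)
assert np.allclose(U.conj().T @ U, np.eye(2))   # spin matrices are unitary
# A 2*pi rotation flips the sign of a spinor: U(2*pi) = -I.
assert np.allclose(spin_half_rotation([0, 0, 1], 2 * np.pi), -np.eye(2))
```
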
Three of the Lorentz group elements and generators for pure rotations are: The rotation matrices act on any four vector and rotate the space-like components according to leaving the time-like coordinate unchanged. In matrix expressions, is treated as a column vector. Pure boosts in spacetime A boost with velocity in the x, y, or z direction given by the standard Cartesian basis vector is represented by a boost transformation matrix. These matrices and the corresponding generators are the remaining three group elements and generators of the Lorentz group: The boost matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to: The term "boost" refers to the relative velocity between two frames, and is not to be conflated with momentum as the generator of translations, as explained below. Combining boosts and rotations Products of rotations give another rotation (the rotations form a subgroup), while products of two boosts, or of rotations and boosts, cannot in general be expressed as pure boosts or pure rotations. In general, any Lorentz transformation can be expressed as a product of a pure rotation and a pure boost. For more background see (for example) B.R. Durney (2011) and H.L. Berk et al. and references therein. The boost and rotation generators have representations denoted and respectively, the capital in this context indicates a group representation. For the Lorentz group, the representations and of the generators and fulfill the following commutation rules. In all the commutators, the boost generators mix with those for rotations, although rotations alone simply give another rotation. Exponentiating the generators gives the boost and rotation operators which combine into the general Lorentz transformation, under which the spacetime coordinates transform from one rest frame to another boosted and/or rotating frame.
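A pure boost can be sketched explicitly. The block below (a sketch using the rapidity parametrization and signature (+−−−); sign conventions for the off-diagonal entries vary between sources) builds a boost along x and checks the defining property of Lorentz transformations, preservation of the Minkowski metric, plus the fact that same-direction boosts compose by adding rapidities, just as same-axis rotations add angles.

```python
import numpy as np

# Minkowski metric with signature (+---), as used in this article.
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(phi):
    """Boost along x with rapidity phi, mixing the t and x components."""
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(phi)
    L[0, 1] = L[1, 0] = -np.sinh(phi)
    return L

L = boost_x(0.9)
# Defining property of the Lorentz group: Lambda^T eta Lambda = eta,
# which holds because cosh^2 - sinh^2 = 1.
assert np.allclose(L.T @ eta @ L, eta)
# Boosts along one axis form a one-parameter subgroup: rapidities add.
assert np.allclose(boost_x(0.4) @ boost_x(0.5), boost_x(0.9))
```
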
Likewise, exponentiating the representations of the generators gives the representations of the boost and rotation operators, under which a particle's spinor field transforms. In the literature, the boost generators and rotation generators are sometimes combined into one generator for Lorentz transformations , an antisymmetric four-dimensional matrix with entries: and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix , with entries: The general Lorentz transformation is then: with summation over repeated matrix indices α and β. The Λ matrices act on any four vector A = (A0, A1, A2, A3) and mix the time-like and the space-like components, according to: Transformations of spinor wavefunctions in relativistic quantum mechanics In relativistic quantum mechanics, wavefunctions are no longer single-component scalar fields, but now 2(2s + 1) component spinor fields, where s is the spin of the particle. The transformations of these functions in spacetime are given below. Under a proper orthochronous Lorentz transformation in Minkowski space, all one-particle quantum states locally transform under some representation of the Lorentz group: where is a finite-dimensional representation, in other words a dimensional square matrix, and is thought of as a column vector containing components with the allowed values of : Real irreducible representations and spin The irreducible representations of and , in short "irreps", can be used to build up the spin representations of the Lorentz group. Defining new operators: so that and are simply complex conjugates of each other, it follows that they satisfy the symmetrically formed commutators: and these are essentially the commutators the orbital and spin angular momentum operators satisfy. Therefore, and form operator algebras analogous to angular momentum; same ladder operators, z-projections, etc., independently of each other as each of their components mutually commute.
By the analogy to the spin quantum number, we can introduce positive integers or half integers, , with corresponding sets of values and . The matrices satisfying the above commutation relations are the same as for spins a and b, and have components given by multiplying Kronecker delta values with angular momentum matrix elements: where in each case the row number m′n′ and column number mn are separated by a comma, and in turn: and similarly for J(n). The three J(m) matrices are each square matrices, and the three J(n) are each square matrices. The integers or half-integers m and n enumerate all the irreducible representations, in equivalent notations used by authors: , which are each square matrices. Applying this to particles with spin ; left-handed -component spinors transform under the real irreps , right-handed -component spinors transform under the real irreps , taking direct sums symbolized by (see direct sum of matrices for the simpler matrix concept), one obtains the representations under which -component spinors transform: where . These are also real irreps, but as shown above, they split into complex conjugates. In these cases the refers to any of , , or a full Lorentz transformation . Relativistic wave equations In the context of the Dirac equation and Weyl equation, the Weyl spinors satisfying the Weyl equation transform under the simplest irreducible spin representations of the Lorentz group, since the spin quantum number in this case is the smallest non-zero number allowed: 1/2. The 2-component left-handed Weyl spinor transforms under and the 2-component right-handed Weyl spinor transforms under . Dirac spinors satisfying the Dirac equation transform under the representation , the direct sum of the irreps for the Weyl spinors. The Poincaré group in relativistic quantum mechanics and field theory Space translations, time translations, rotations, and boosts, all taken together, constitute the Poincaré group.
The group elements are the three rotation matrices and three boost matrices (as in the Lorentz group), and one for time translations and three for space translations in spacetime. There is a generator for each. Therefore, the Poincaré group is 10-dimensional. In special relativity, space and time can be collected into a four-position vector , and in parallel so can energy and momentum which combine into a four-momentum vector . With relativistic quantum mechanics in mind, the time duration and spatial displacement parameters (four in total, one for time and three for space) combine into a spacetime displacement , and the energy and momentum operators are inserted in the four-momentum to obtain a four-momentum operator, which are the generators of spacetime translations (four in total, one time and three space): There are commutation relations between the components of the four-momentum P (generators of spacetime translations), and angular momentum M (generators of Lorentz transformations), that define the Poincaré algebra: where η is the Minkowski metric tensor. (It is common to drop any hats for the four-momentum operators in the commutation relations). These equations are an expression of the fundamental properties of space and time as far as they are known today. They have a classical counterpart where the commutators are replaced by Poisson brackets. To describe spin in relativistic quantum mechanics, the Pauli–Lubanski pseudovector, a Casimir operator, is the constant spin contribution to the total angular momentum, and there are commutation relations between P and W and between M and W: Invariants constructed from W, instances of Casimir invariants, can be used to classify irreducible representations of the Lorentz group. Symmetries in quantum field theory and particle physics Unitary groups in quantum field theory Group theory is an abstract way of mathematically analyzing symmetries.
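The Lorentz part of the algebra above can be spot-checked with explicit 4×4 matrices. This sketch (generators written in signature (+−−−); conventions vary) verifies two characteristic relations: a rotation generator rotates a boost generator, [J3, K1] = K2, and two boost generators close into a rotation, [K1, K2] = −J3, the algebraic root of Thomas–Wigner rotation.

```python
import numpy as np

def E(i, j):
    """Elementary 4x4 matrix with a single 1 in row i, column j."""
    M = np.zeros((4, 4))
    M[i, j] = 1.0
    return M

# Rotation generator about z (antisymmetric spatial block) and
# boost generators along x and y (symmetric time-space blocks).
J3 = E(2, 1) - E(1, 2)
K1 = E(0, 1) + E(1, 0)
K2 = E(0, 2) + E(2, 0)

def comm(A, B):
    return A @ B - B @ A

# Rotations mix with boosts in the commutators, as the text describes:
assert np.allclose(comm(J3, K1), K2)
# Two boosts do not give a boost; their commutator is a rotation generator:
assert np.allclose(comm(K1, K2), -J3)
```
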
Unitary operators are paramount to quantum theory, so unitary groups are important in particle physics. The group of N-dimensional unitary square matrices is denoted U(N). Unitary operators preserve inner products, which means probabilities are also preserved, so the quantum mechanics of the system is invariant under unitary transformations. Let be a unitary operator, so the inverse is the Hermitian adjoint , which commutes with the Hamiltonian: then the observable corresponding to the operator is conserved, and the Hamiltonian is invariant under the transformation . Since the predictions of quantum mechanics should be invariant under the action of a group, physicists look for unitary transformations to represent the group. Important subgroups of each U(N) are those unitary matrices which have unit determinant (or are "unimodular"): these are called the special unitary groups and are denoted SU(N). U(1) The simplest unitary group is U(1), which is just the complex numbers of modulus 1. This one-dimensional matrix entry is of the form: in which θ is the parameter of the group, and the group is Abelian since one-dimensional matrices always commute under matrix multiplication. Lagrangians in quantum field theory for complex scalar fields are often invariant under U(1) transformations. If there is a quantum number a associated with the U(1) symmetry, for example baryon number and the three lepton numbers in electromagnetic interactions, we have: U(2) and SU(2) The general form of an element of U(2) is parametrized by two complex numbers a and b: and for SU(2), the determinant is restricted to 1: In group theoretic language, the Pauli matrices are the generators of the special unitary group in two dimensions, denoted SU(2). Their commutation relation is the same as for orbital angular momentum, aside from a factor of 2: A group element of SU(2) can be written: where σj is a Pauli matrix, and the group parameters are the angles turned through about an axis.
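The two-complex-number parametrization of SU(2) described above can be sketched directly (the particular values of a and b are arbitrary, chosen so that |a|² + |b|² = 1), together with the U(1) statement that a global phase leaves probabilities unchanged:

```python
import numpy as np

# An SU(2) element built from two complex numbers with |a|^2 + |b|^2 = 1:
# U = [[a, b], [-conj(b), conj(a)]].
a, b = 0.6j, 0.8
U = np.array([[a, b], [-np.conj(b), np.conj(a)]])

assert np.isclose(abs(a) ** 2 + abs(b) ** 2, 1.0)
assert np.allclose(U.conj().T @ U, np.eye(2))  # unitary
assert np.isclose(np.linalg.det(U), 1.0)       # special: det U = |a|^2 + |b|^2 = 1

# U(1): a modulus-1 complex phase acting on a state leaves |psi|^2 invariant.
psi = np.array([1.0, 2.0j]) / np.sqrt(5)
phase = np.exp(1j * 1.234)
assert np.allclose(np.abs(phase * psi) ** 2, np.abs(psi) ** 2)
```
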
The two-dimensional isotropic quantum harmonic oscillator has symmetry group SU(2), while the symmetry algebra of the rational anisotropic oscillator is a nonlinear extension of u(2). U(3) and SU(3) The eight Gell-Mann matrices (see article for them and the structure constants) are important for quantum chromodynamics. They originally arose in the SU(3) theory of flavor, which is still of practical importance in nuclear physics. They are the generators for the SU(3) group, so an element of SU(3) can be written analogously to an element of SU(2): where are eight independent parameters. The matrices satisfy the commutator: where the indices , , take the values 1, 2, 3, ..., 8. The structure constants fabc are totally antisymmetric in all indices, analogous to those of SU(2). In the standard colour charge basis (r for red, g for green, b for blue): the colour states are eigenstates of the and matrices, while the other matrices mix colour states together. The eight gluon states (8-dimensional column vectors) are simultaneous eigenstates of the adjoint representation of , the 8-dimensional representation acting on its own Lie algebra , for the and matrices. By forming tensor products of representations (the standard representation and its dual) and taking appropriate quotients, protons and neutrons, and other hadrons are eigenstates of various representations of SU(3) of color. The representations of SU(3) can be described by a "theorem of the highest weight". Matter and antimatter In relativistic quantum mechanics, relativistic wave equations predict a remarkable symmetry of nature: that every particle has a corresponding antiparticle. This is mathematically contained in the spinor fields which are the solutions of the relativistic wave equations. Charge conjugation switches particles and antiparticles. Physical laws and interactions unchanged by this operation have C symmetry.
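The statement above that the colour states are eigenstates of the two diagonal Gell-Mann matrices, while the others mix colours, can be checked directly. The sketch below writes out λ3 and λ8 in the standard r, g, b basis and one off-diagonal generator, λ1.

```python
import numpy as np

# The two diagonal Gell-Mann matrices in the standard colour basis.
lam3 = np.diag([1.0, -1.0, 0.0])
lam8 = np.diag([1.0, 1.0, -2.0]) / np.sqrt(3)

r, g, b = np.eye(3)  # colour states as basis column vectors

# r, g, b are simultaneous eigenstates of lam3 and lam8:
for state, t3, t8 in [(r,  1.0, 1 / np.sqrt(3)),
                      (g, -1.0, 1 / np.sqrt(3)),
                      (b,  0.0, -2 / np.sqrt(3))]:
    assert np.allclose(lam3 @ state, t3 * state)
    assert np.allclose(lam8 @ state, t8 * state)

# A non-diagonal generator mixes colour states instead: lam1 swaps r and g.
lam1 = np.array([[0.0, 1.0, 0.0],
                 [1.0, 0.0, 0.0],
                 [0.0, 0.0, 0.0]])
assert np.allclose(lam1 @ r, g)
```
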
Discrete spacetime symmetries Parity mirrors the orientation of the spatial coordinates from left-handed to right-handed. Informally, space is "reflected" into its mirror image. Physical laws and interactions unchanged by this operation have P symmetry. Time reversal flips the time coordinate, which amounts to time running from future to past. A curious property of time, which space does not have, is that it is unidirectional: particles traveling forwards in time are equivalent to antiparticles traveling back in time. Physical laws and interactions unchanged by this operation have T symmetry. C, P, T symmetries CPT theorem CP violation PT symmetry Lorentz violation Gauge theory In quantum electrodynamics, the local symmetry group is U(1) and is abelian. In quantum chromodynamics, the local symmetry group is SU(3) and is non-abelian. The electromagnetic interaction is mediated by photons, which have no electric charge. The electromagnetic tensor has an electromagnetic four-potential field possessing gauge symmetry. The strong (color) interaction is mediated by gluons, which can have eight color charges. There are eight gluon field strength tensors with corresponding gluon four potentials field, each possessing gauge symmetry. The strong (color) interaction Color charge Analogous to the spin operator, there are color charge operators in terms of the Gell-Mann matrices : and since color charge is a conserved charge, all color charge operators must commute with the Hamiltonian: Isospin Isospin is conserved in strong interactions. The weak and electromagnetic interactions Duality transformation Magnetic monopoles can be theoretically realized, although current observations and theory are consistent with them existing or not existing. Electric and magnetic charges can effectively be "rotated into one another" by a duality transformation. 
Electroweak symmetry Electroweak symmetry Electroweak symmetry breaking Supersymmetry A Lie superalgebra is an algebra in which (suitable) basis elements either have a commutation relation or have an anticommutation relation. Symmetries have been proposed to the effect that all fermionic particles have bosonic analogues, and vice versa. These symmetries have theoretical appeal in that no extra assumptions (such as the existence of strings) barring symmetries are made. In addition, by assuming supersymmetry, a number of puzzling issues can be resolved. These symmetries, which are represented by Lie superalgebras, have not been confirmed experimentally. It is now believed that they are broken symmetries, if they exist. It has been speculated that dark matter consists of gravitinos, spin-3/2 particles with mass, whose supersymmetric partner is the graviton. Exchange symmetry The concept of exchange symmetry is derived from a fundamental postulate of quantum statistics, which states that no observable physical quantity should change after exchanging two identical particles. It states that because all observables are proportional to for a system of identical particles, the wave function must either remain the same or change sign upon such an exchange. More generally, for a system of n identical particles the wave function must transform as an irreducible representation of the finite symmetric group Sn. It turns out that, according to the spin-statistics theorem, fermion states transform as the antisymmetric irreducible representation of Sn and boson states as the symmetric irreducible representation. Because the exchange of two identical particles is mathematically equivalent to the rotation of each particle by 180 degrees (and so to the rotation of one particle's frame by 360 degrees), the symmetric nature of the wave function depends on the particle's spin after the rotation operator is applied to it.
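The exchange behaviour described above can be sketched with two-particle wavefunctions built from two orthogonal single-particle states on a small grid (the states are arbitrary illustrative vectors, not from the source). Exchanging the particles transposes the two-argument table ψ(x1, x2) → ψ(x2, x1); the antisymmetric combination flips sign and vanishes when both particles occupy the same state, which is the Pauli exclusion principle.

```python
import numpy as np

# Two orthogonal single-particle states sampled on a 3-point grid.
phi_a = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
phi_b = np.array([0.0, 1.0, 0.0])

def antisym(f, g):
    """psi(x1, x2) = f(x1) g(x2) - g(x1) f(x2), as an outer-product table."""
    return np.outer(f, g) - np.outer(g, f)

def sym(f, g):
    """psi(x1, x2) = f(x1) g(x2) + g(x1) f(x2)."""
    return np.outer(f, g) + np.outer(g, f)

psi_anti = antisym(phi_a, phi_b)
psi_sym = sym(phi_a, phi_b)

# Particle exchange transposes the table psi(x1, x2) -> psi(x2, x1):
assert np.allclose(psi_anti.T, -psi_anti)  # fermionic: wave function changes sign
assert np.allclose(psi_sym.T, psi_sym)     # bosonic: wave function unchanged
# Exclusion: two identical fermions in the same state give the zero function.
assert np.allclose(antisym(phi_a, phi_a), 0)
```
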
Integer spin particles do not change the sign of their wave function upon a 360 degree rotation—therefore the sign of the wave function of the entire system does not change. Half-integer spin particles change the sign of their wave function upon a 360 degree rotation (see more in spin–statistics theorem). Particles for which the wave function does not change sign upon exchange are called bosons, or particles with a symmetric wave function. The particles for which the wave function of the system changes sign are called fermions, or particles with an antisymmetric wave function. Fermions therefore obey different statistics (called Fermi–Dirac statistics) than bosons (which obey Bose–Einstein statistics). One of the consequences of Fermi–Dirac statistics is the exclusion principle for fermions—no two identical fermions can share the same quantum state (in other words, the wave function of two identical fermions in the same state is zero). This in turn results in degeneracy pressure for fermions—the strong resistance of fermions to compression into smaller volume. This resistance gives rise to the “stiffness” or “rigidity” of ordinary atomic matter (as atoms contain electrons which are fermions). See also Symmetric group Spin-statistics theorem Projective representation Casimir operator Pauli–Lubanski pseudovector Symmetries in general relativity Renormalization group Representation of a Lie group Representation theory of the Poincaré group Representation theory of the Lorentz group Footnotes References Further reading External links The molecular symmetry group @ The University of Western Ontario (2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem Lie groups Continuous Groups, Lie Groups, and Lie Algebras Pauli exclusion principle Special relativity Quantum field theory Group theory Theoretical physics
Symmetry in quantum mechanics
Physics,Mathematics
6,244
2,380,366
https://en.wikipedia.org/wiki/Catwalk%20%28theater%29
A catwalk is an elevated service platform from which many of the technical functions of a theater, such as lighting and sound, may be manipulated. Function Catwalks are used to suspend lighting instruments and microphones directed at the stage. The catwalks provide easy access for theater personnel to perform common tasks. For example, lights may need to be accessed for maintenance, position adjustment, or addition and removal of gels and gobos. Placement Typically, catwalks are located in positions hidden from audience view or directly above an audience, and are considered "behind-the-scenes". For example, many proscenium theaters have a series of two or more catwalks running parallel to the proscenium arch above a false ceiling. Stairs or a ladder up to the catwalks is usually located somewhere backstage. In modern theatres, many architects design catwalks into the "look" of the theatre. In black box theatres, catwalks and pipe grids may be the only architectural feature. A catwalk may also be placed upstage of the proscenium as part of the fly system. These may be fixed, or they may be able to be raised and lowered. In older decorated theaters, the catwalks are not designed to be seen by the audience. Sometimes, because of this, they are placed in the attic area above the auditorium’s ceiling, where slots and movable panels provide openings into the auditorium from the ceiling. Construction Structural Most catwalks have several battens (pipes) that lighting fixtures may be attached to. Lights are usually attached by a C-Clamp or a hook clamp around the pipes. In addition to this primary attachment, fixtures generally have an additional safety cable attaching them to the catwalk, so that if the clamp or bolt gives way, the safety cable will catch the light. This is used because the lights are generally very expensive and heavy, but mainly to protect the audience members and performers from the possibility of fixtures falling down from the catwalks. 
Catwalks often include a platform for a spotlight operator to work from. Electrical A typical catwalk has a built in electrical conduit to carry power for the lighting fixtures from the dimmers. They often hold other electrical wiring, for example standard sockets for tools, coaxial cable for projection and video monitors, built-in safety lighting to protect technicians, audio cables, and special cables for headset communications with other technicians. Safety Since a catwalk is usually placed high above the floor, spaces where lighting instruments can go are usually chained or otherwise blocked off when a light is not present to prevent people and/or objects from falling through. The instruments themselves are attached by a safety chain to prevent them from falling. Technicians normally attach objects (such as wrenches) to themselves before going onto the catwalk, so that such objects cannot fall and possibly injure someone or damage something. This also prevents objects from falling into a place where they cannot be retrieved, such as between the catwalk floor and the ceiling, or into an HVAC vent. Sometimes, to create better lighting positions or allow more flexibility, catwalks have minimal railings. Because of this, sometimes it is necessary for people working on them to wear fall arrest to satisfy safety requirements, as the railing cannot be considered sufficient. See also Tension grid Fly system Stage lighting Catwalk (disambiguation) References Stage lighting Parts of a theatre Fly system Stage terminology
Catwalk (theater)
Technology
706
15,440,560
https://en.wikipedia.org/wiki/American%20Concrete%20Institute
The American Concrete Institute (ACI, formerly National Association of Cement Users or NACU) is a non-profit technical society and standards developing organization. ACI was founded in January 1905 during a convention in Indianapolis. The Institute's headquarters are currently located in Farmington Hills, Michigan, USA. ACI's mission is "ACI develops and disseminates consensus-based knowledge on concrete and its uses." ACI History A lack of standards for making concrete blocks resulted in a negative perception of concrete for construction. An editorial by Charles C. Brown in the September 1904 issue of Municipal Engineering discussed the idea of forming an organization to bring order and standard practices to the industry. In 1905 the National Association of Cement Users was formally organized and adopted a constitution and bylaws. Richard Humphrey was elected its first President. The first committees were appointed at the 1905 convention in Indianapolis and offered preliminary reports on a number of subject areas. The first complete committee reports were offered at the 1907 convention. The association's first official headquarters was established in 1908 at Richard Humphrey's office in Philadelphia, Pennsylvania. Clerical and editorial help was brought on to more effectively organize conventions and publish proceedings of the institute. The "Standard Building Regulations for the Use of Reinforced Concrete" was adopted at the 1910 convention and became the association's first reinforced concrete building code. By 1912 the association had adopted 14 standards. At the December 1912 convention the association approved publication of a monthly journal of proceedings. In July 1913 the Board of Direction of NACU decided to change its name to the American Concrete Institute. The new name was deemed to be more descriptive of the work being conducted within the institute. 
ACI 318 ACI 318 Building Code Requirements for Structural Concrete provides minimum requirements necessary to provide public health and safety for the design and construction of structural concrete buildings. It is issued and maintained by the American Concrete Institute. The latest edition of the code is ACI 318-19. Previous versions: ACI 318-14 (major update, reordered chapters); ACI 318-11; ACI 318-08; ACI 318-02 (features a major rewrite for seismicity). Concrete International Concrete International is a monthly magazine published by the American Concrete Institute. Searchable abstracts of articles are available via the magazine's web page. Awards The Wason Medal for Most Meritorious Paper has been awarded each year since 1917 to the author or authors of a paper published by ACI. Notable recipients include: 1922: Harold M. Westergaard 1927: Abraham Burton Cohen 1933: Charles S. Whitney 1936: Hardy Cross 1950: Chester P. Siess, George E. Beggs and Nathan M. Newmark 1953: Charles S. Whitney, Boyd Anderson and Mario Salvadori 1969: Uğur Ersoy 1970: W. Gene Corley and Neil M. Hawkins 1971: Fazlur Khan and Mark Fintel References External links American engineering organizations Trade associations based in the United States Organizations based in Michigan Organizations established in 1905 1905 establishments in Indiana Concrete
American Concrete Institute
Engineering
608
12,241,503
https://en.wikipedia.org/wiki/Bunsen%20reaction
The Bunsen reaction is a chemical reaction that describes water, sulfur dioxide, and iodine reacting to form sulfuric acid and hydrogen iodide: 2H2O + SO2 + I2 → H2SO4 + 2HI This reaction is the first step in the sulfur-iodine cycle to produce hydrogen. The products separate into two aqueous layers, with the sulfuric acid floating on top, and a mixture of hydrogen iodide and unreacted iodine on the bottom. While the two layers are generally considered immiscible, small amounts of sulfuric acid may still remain in the hydrogen iodide layer and vice versa. This can lead to unwanted side reactions, one of which precipitates out sulfur, a potential obstruction to the reaction vessel. The reaction is named after Robert Bunsen. He did not discover the reaction, but he described it in detail in 1853. A similar reaction is the basis for Karl Fischer titration. Note that at sufficiently high temperatures, concentrated H2SO4 may react with HI, giving I2, SO2 and H2O, which reverses the reaction. Many chemical processes are reversible reactions, such as ammonia production from N2 and H2, and removing the desired product will shift equilibrium to the right of the equation favoring reaction products as per the Le Chatelier principle. References Inorganic reactions Name reactions
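The 1:1 stoichiometry of SO2 and I2 in the reaction above can be illustrated with a short Python sketch. The molar masses are approximate values from standard atomic weights, and the function name is purely illustrative:

```python
# Stoichiometry of the Bunsen reaction: 2 H2O + SO2 + I2 -> H2SO4 + 2 HI
# Approximate molar masses in g/mol (standard atomic weights).
M = {"H2O": 18.02, "SO2": 64.07, "I2": 253.81, "H2SO4": 98.08, "HI": 127.91}

def bunsen_products(mol_so2: float, mol_i2: float) -> dict:
    """Return product masses (g), assuming excess water and complete reaction."""
    extent = min(mol_so2, mol_i2)          # SO2 and I2 react 1:1
    return {
        "H2SO4_g": extent * M["H2SO4"],    # 1 mol H2SO4 per mol SO2 consumed
        "HI_g": 2 * extent * M["HI"],      # 2 mol HI per mol I2 consumed
        "H2O_consumed_g": 2 * extent * M["H2O"],
    }

print(bunsen_products(1.0, 1.0))
```

For a 1 mol feed of each gas, the product masses (98.08 g of sulfuric acid and 255.82 g of hydrogen iodide) balance the reactant masses to within the rounding of the molar masses.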
Bunsen reaction
Chemistry
286
1,073,133
https://en.wikipedia.org/wiki/Marcel%20Riesz
Marcel Riesz ( ; 16 November 1886 – 4 September 1969) was a Hungarian mathematician, known for work on summation methods, potential theory, and other parts of analysis, as well as number theory, partial differential equations, and Clifford algebras. He spent most of his career in Lund, Sweden. Marcel is the younger brother of Frigyes Riesz, who was also an important mathematician, and at times they worked together (see F. and M. Riesz theorem). Biography Marcel Riesz was born in Győr, Austria-Hungary. He was the younger brother of the mathematician Frigyes Riesz. In 1904, he won the Loránd Eötvös competition. He entered Budapest University, also studied in Göttingen, and spent the academic year 1910–11 in Paris. Earlier, in 1908, he attended the International Congress of Mathematicians in Rome. There he met Gösta Mittag-Leffler; three years later, Mittag-Leffler would invite Riesz to come to Sweden. Riesz obtained his PhD at Eötvös Loránd University under the supervision of Lipót Fejér. In 1911, he moved to Sweden, where from 1911 to 1925 he taught at Stockholm University. From 1926 to 1952, he was a professor at Lund University. According to Lars Gårding, Riesz arrived in Lund as a renowned star of mathematics, and for a time his appointment may have seemed like an exile. Indeed, there was no established school of mathematics in Lund at the time. However, Riesz managed to turn the tide and make the academic atmosphere more active. After retiring from Lund University, he spent 10 years at universities in the United States. As a visiting research professor, he worked in Maryland, Chicago, and elsewhere. After ten years of intense work with little rest, he suffered a breakdown. Riesz returned to Lund in 1962. After a long illness, he died there in 1969. Riesz was elected a member of the Royal Swedish Academy of Sciences in 1936. 
Mathematical work Classical analysis The work of Riesz as a student of Fejér in Budapest was devoted to trigonometric series: one of his results states that if the Fejér means of the series tend to zero, then all the coefficients a_n and b_n are zero. His results on summability of trigonometric series include a generalisation of Fejér's theorem to Cesàro means of arbitrary order. He also studied the summability of power and Dirichlet series, and coauthored a book on the latter with G. H. Hardy. In 1916, he introduced the Riesz interpolation formula for trigonometric polynomials, which allowed him to give a new proof of Bernstein's inequality. He also introduced the Riesz function Riesz(x), and showed that the Riemann hypothesis is equivalent to the bound Riesz(x) = O(x^(1/4 + ε)) as x → ∞, for any ε > 0. Together with his brother Frigyes Riesz, he proved the F. and M. Riesz theorem, which implies, in particular, that if μ is a complex measure on the unit circle whose Fourier coefficients vanish for all negative indices, then the variation |μ| of μ and the Lebesgue measure on the circle are mutually absolutely continuous. Functional-analytic methods Part of the analytic work of Riesz in the 1920s used methods of functional analysis. In the early 1920s, he worked on the moment problem, to which he introduced the operator-theoretic approach by proving the Riesz extension theorem (which predated the closely related Hahn–Banach theorem). Later, he devised an interpolation theorem to show that the Hilbert transform is a bounded operator on Lp for 1 < p < ∞. The generalisation of the interpolation theorem by his student Olof Thorin is now known as the Riesz–Thorin theorem. Riesz also established, independently of Andrey Kolmogorov, what is now called the Kolmogorov–Riesz compactness criterion in Lp: a subset K ⊂ Lp(Rn) is precompact if and only if the following three conditions hold: (a) K is bounded; (b) for every ε > 0 there exists R > 0 such that ∫_{|x|>R} |f(x)|^p dx < ε^p for every f ∈ K; (c) for every ε > 0 there exists ρ > 0 such that ∫ |f(x+y) − f(x)|^p dx < ε^p for every y with |y| < ρ and every f ∈ K. 
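The Riesz–Thorin theorem mentioned above admits a compact statement. The following is the standard textbook formulation, supplied here for reference rather than taken from the article:

```latex
% Riesz–Thorin interpolation theorem (standard formulation)
\text{Let } 1 \le p_0, p_1, q_0, q_1 \le \infty \text{ and } 0 < \theta < 1,
\text{ and define } p, q \text{ by}
\frac{1}{p} = \frac{1-\theta}{p_0} + \frac{\theta}{p_1},
\qquad
\frac{1}{q} = \frac{1-\theta}{q_0} + \frac{\theta}{q_1}.
\text{If a linear operator } T \text{ satisfies }
\|Tf\|_{q_0} \le M_0 \|f\|_{p_0}
\text{ and }
\|Tf\|_{q_1} \le M_1 \|f\|_{p_1},
\text{ then }
\|Tf\|_{q} \le M_0^{1-\theta} M_1^{\theta} \, \|f\|_{p}.
```

The multiplicative bound M_0^(1−θ) M_1^θ is what makes the theorem an "interpolation" result: boundedness at the two endpoint exponent pairs propagates to every intermediate pair.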
Potential theory, PDE, and Clifford algebras After 1930, the interests of Riesz shifted to potential theory and partial differential equations. He made use of "generalised potentials", generalisations of the Riemann–Liouville integral. In particular, Riesz discovered the Riesz potential, a generalisation of the Riemann–Liouville integral to dimension higher than one. In the 1940s and 1950s, Riesz worked on Clifford algebras. His 1958 lecture notes, the complete version of which was only published in 1993 (), were dubbed by the physicist David Hestenes "the midwife of the rebirth" of Clifford algebras. Students Riesz's doctoral students in Stockholm include Harald Cramér and Einar Carl Hille. In Lund, Riesz supervised the theses of Otto Frostman, Lars Gårding, Lars Hörmander, and Olof Thorin. Publications References External links 1886 births 1969 deaths 20th-century Hungarian mathematicians 20th-century Hungarian people 20th-century Swedish people Swedish mathematicians Mathematical analysts Functional analysts Measure theorists People connected to Lund University People from Lund Members of the Royal Swedish Academy of Sciences Emigrants from Austria-Hungary Immigrants to Sweden People from Győr Swedish Jews Mathematicians from Austria-Hungary
Marcel Riesz
Mathematics
1,135
3,514,718
https://en.wikipedia.org/wiki/The%20Terratin%20Incident
"The Terratin Incident" is the eleventh episode of the first season of the American animated science fiction television series Star Trek. It first aired in the NBC Saturday morning lineup on November 17, 1973, and was written by American screenwriter Paul Schneider who had previously written the Original Series episodes "Balance of Terror" and "The Squire of Gothos". It came from a one-paragraph story idea by Gene Roddenberry based on Gulliver's Travels. In this episode, after an apparent attack, the crew of the Enterprise find themselves beginning to shrink in size toward the point that they will no longer be able to control the ship. Plot While observing a burnt-out supernova, the Federation starship Enterprise picks up a strange message transmitted in a two-hundred-year-old Earth code. The signal is traced to a nearby planet. When the Enterprise enters orbit, it is hit by an energy beam of "spiroid radiation" that damages its dilithium crystals and makes the crew begin to shrink (along with all other organic material aboard the ship, including the crew's uniforms). Chief Medical Officer Dr. McCoy determines that the crew will continue to shrink beyond their ability to control the ship unless a cure is found. Captain Kirk beams down to the surface and finds that the transporter can revert crew members to their original size. He also observes what appears to be a miniature city. Kirk returns to the ship, but the crew are now too small for him to see easily, and too small to operate the ship's controls. Meanwhile, the Terratins have beamed the bridge crew down to their city, where the crew learns the Terratins' fate. Terratin is a lost Earth colony, originally called "Terra Ten"; its inhabitants have mutated because of the supernova's radiation, and are now all approximately one-sixteenth of an inch in height. The beam which caused the crew to shrink was not intended as an attack, but was the only way the Terratins had to draw attention to themselves. 
The crew are beamed back to the ship and return to normal size. However, the Terratins have been small for generations and cannot be restored to normal size. Their planet is in peril from massive volcanic activity, so the whole Terratin city is beamed aboard the Enterprise, and moved to another planet. Reception This episode was noted as a case where the fictional Star Trek transporter technology changes the size of the entity being transported, along with "The Counter Clock Incident" from the same TV series. The episode is noted for harnessing the flexibility of the animated format, by having the bridge crew of the Enterprise shrink. Notes See also "The Lorelei Signal" - an earlier animated episode where the idea of using the transporter to restore physical patterns is introduced "One Little Ship" - an episode of Star Trek: Deep Space Nine where a Starfleet runabout and its crew are miniaturized References External links "The Terratin Incident" at Curt Danhauser's Guide to the Animated Star Trek "The Terratin Incident" Full episode for viewing at StarTrek.com 1973 American television episodes Star Trek: The Animated Series episodes Fiction about size change Television episodes directed by Hal Sutherland Television episodes written by Paul Schneider (writer)
The Terratin Incident
Physics,Mathematics
667
58,940,381
https://en.wikipedia.org/wiki/World%20War%20II%20bomb%20disposal%20in%20Europe
The Royal Air Force and United States Army Air Forces dropped 2.7 million tonnes of bombs on Europe during World War II. In the United Kingdom, the German Luftwaffe dropped more than 12,000 tonnes of bombs on London alone. In 2018, the British Ministry of Defence reported that 450 World War II bombs were made safe or defused since 2010 by disposal teams. Every year, an estimated 2,000 tons of World War II munitions are found in Germany, at times requiring the evacuation of tens of thousands of residents from their homes. In Berlin alone, 1.8 million pieces of ordnance were defused between 1947 and 2018. Buried bombs, as well as mortars, land mines and grenades, are often found during construction work or other excavations, or by farmers tilling the land. Belgium February 2020: Hundreds of people were evacuated after construction workers discovered a World War II bomb in Maasmechelen, Limburg. Great Britain 1 October 1969: A German parachute mine was defused by a team led by Major George R. Fletcher MBE, Royal Engineers, at Burghley Road, Camden. 5 March 2010: An unexploded German bomb was found in Southampton and was blown up in a controlled explosion by the Royal Navy. 11 August 2015: A German bomb was found and defused by British Army experts in East London. 12 May 2016: 1,100 properties were advised to evacuate and three primary schools were closed after a German bomb was found under a school playground in Bath, with the bomb being safely deactivated the following day. 2 March 2017: A German bomb was found and defused by a British Army disposal team in Brent, north-west London. 16 May 2017: A German bomb was found at Aston Expressway, near Birmingham, and destroyed by British Army experts with a controlled explosion. Hundreds of homes were evacuated, businesses were closed, and London Midland rail services were suspended. Two buildings were damaged by the blast. 
29 November 2017: A German ‘G’ parachute mine was discovered offshore at Falmouth and was detonated safely. 14 February 2018: A German bomb, found during works in King George V Dock, near London City Airport, was removed from the area and detonated at sea off Shoeburyness, Essex, by British Army experts. 24 May 2019: 1,500 houses were evacuated at Kingston upon Thames after a German bomb was found and disposed of in a controlled explosion by a disposal team. The blast shattered windows along Fasset Road. 3 February 2020: A number of streets were evacuated in Central London when a World War II bomb was found in the district of Soho. 1 December 2020: Royal Navy experts were called after the discovery of a World War II German submarine-laid moored influence mine in the River Clyde, Scotland, which contained explosives; a controlled explosion was carried out to dispose of the mine. 15 December 2020: A 42-foot trawler, the Galwad-Y-Mor, was severely damaged and disabled by the explosion of what experts believed to be discarded WWII ordnance off Cromer, Norfolk. The wheelhouse was completely wrecked by the shock wave, and the captain and the rest of the seven-man crew, two Britons and five Latvians, were injured, some of them suffering "life-changing" wounds. The trawler, low in the water but still afloat, was towed by the tug GPS Avenger to Grimsby, where she was laid up to assess the damage; the crew had already been rescued by the offshore support vessel Esvagt Njord. 26 February 2021: 2,600 households and the University of Exeter halls of residence were evacuated after the discovery of an unexploded World War II German bomb in Exeter, and a controlled detonation was carried out. Despite precautions, houses within 100 m were damaged, a large crater was formed, and debris was thrown 250 m away. 22 July 2021: Eight homes were evacuated and a section of the M62 motorway was closed after the discovery of a World War II bomb on a new housing development in Goole. 
3 December 2021: Train services were delayed after the discovery of an unexploded World War II bomb in Netley, Hampshire at a construction site near a railway track. 8 June 2022: After a suspected wartime bomb was found in a lake in Mossley Hill in Liverpool, a planned detonation was successfully carried out. 26 January 2023: A planned detonation of a wartime bomb found on a beach in Essex was carried out. 10 February 2023: A World War II bomb exploded in Great Yarmouth during attempts to defuse it. Minor damage and no injuries were reported. Germany September 1994: A bomb exploded on a building site in Friedrichshain, Berlin, damaging many houses and killing three people. June 2010: 7,000 people were evacuated in Göttingen after a bomb was found. Three members of the bomb-disposal unit died after the bomb exploded. January 2014: A construction worker in Euskirchen was killed and two critically wounded after hitting a buried bomb with an excavator. December 2016: A World War II bomb was defused in Augsburg, requiring the evacuation of 54,000 people. May 2017: Three British World War II bombs were defused in Hanover, requiring the evacuation of 50,000 people. September 2017: A bomb dropped by the USAAF during World War II led to the evacuation of 21,000 people in Koblenz. September 2017: 70,000 people had to leave their homes in Frankfurt after a British bomb was discovered. April 2018: A bomb found in Paderborn forced the evacuation of 26,000 people. April 2018: 12,000 people were evacuated in Berlin after a bomb was discovered just north of Berlin Hauptbahnhof. August 2018: the discovery of a World War II bomb required the evacuation of 18,500 people in Ludwigshafen. April 2019: 600 people were evacuated when a bomb was discovered in Frankfurt's River Main. Divers with the city's fire service were participating in a routine training exercise when they found the device. July 2019: 16,500 people evacuated in Frankfurt when a bomb was found during construction. 
January 2020: Two World War II bombs were found in Dortmund, forcing the evacuation of 14,000 residents and the closure of the city's main train station. April 2020: A World War II bomb in Bonn was successfully defused but required the evacuation of 1,200 residents and 200 patients at a local hospital, including 11 people critically ill with coronavirus. October 2020: 10,000 office workers were evacuated, along with 15 residents in Neukölln, Berlin, when a World War II bomb was discovered. January 2021: Over 8,000 people were evacuated in Göttingen after four World War II bombs were discovered in the city centre. May 2021: 16,500 people were evacuated when a bomb was discovered in Flensburg. Construction workers were excavating nearby when they found the device. May 2021: Around 25,000 people were evacuated in Frankfurt after the discovery of an unexploded bomb. October 2021: 2,000 people evacuated in Munich after the discovery of an unexploded aerial bomb. December 2021: Four people were injured during the construction of Trunk Line 2 after a bomb exploded in Munich. December 2021: 15,000 people were evacuated in Berlin after the discovery of an unexploded aerial bomb. August 2022: 12,000 people were evacuated in Berlin after the discovery of an unexploded bomb in Friedrichshain. September 2022: An unexploded bomb was found during construction at a community garden southeast of Berlin's A115 autobahn. March 2023: An unexploded bomb was found during construction in Berlin’s Zehlendorf district. June 2023: A 500-kilogram (1,100 lb) unexploded aerial bomb was found during sewer maintenance works in Northern Darmstadt, prompting evacuations of the area. April 2024: A 500-kilogram (1,100 lb) unexploded American-made World War II aerial bomb was found near Mewa Arena during construction work in Mainz, prompting the evacuation of 3,500 residents from the area. June 2024: An unexploded World War II bomb was found at Tesla, Inc.'s factory site in Grünheide. 
July 2024: A second bomb was found and detonated on the grounds of Tesla’s Gigafactory Berlin-Brandenburg. January 2025: An unexploded 250 kg (550 lb) bomb, identified as a British aerial bomb, was found during demolition work on the Carola Bridge in Dresden. Italy October 2016: 1,300 people were evacuated in Rovereto after the discovery of an American bomb; less than one year earlier, a bomb was found in the same town. March 2018: 23,000 people were evacuated in Fano after a British-made bomb was discovered. July 2018: 12,000 were forced from their homes after a bomb was discovered in Terni. October 2019: 4,000 people were evacuated and a nearby highway was closed after the discovery of a World War II bomb in Bolzano, which was removed and blown up in a controlled explosion. December 2019: 10,000 people were evacuated in Turin upon the discovery of a British bomb; Mayor Chiara Appendino reported that the device was defused by the Italian Army. December 2019: 54,000 people were evacuated in Brindisi from a radius of after the discovery of a World War II bomb. November 2020: The Italian army was called to the AS Roma training ground after as many as 20 devices were found underneath the turf during work to build new pitches at their training complex. August 2022: During the 2022 European heat waves, a dried-up bank of the River Po revealed an unexploded World War II bomb. The bomb was subsequently disposed of through a planned detonation. Poland October 2020: Around 750 people were evacuated in the port city of Świnoujście after the largest unexploded World War II bomb ever found in Poland, a Tallboy bomb, was discovered in the Baltic Sea shipping canal; the bomb detonated during the defusing process. Slovenia The areas with the highest concentrations of unexploded ordnance from the Second World War are Žužemberk and Tezno, Maribor, the latter because of a former aircraft engine factory. 
According to research by the Geodetic Institute of Slovenia and colleagues using aerial photography done in 2024, there are still at least 30 unexploded bombs from the Second World War in the Maribor area. 30 June 2005: A 500 kg bomb was found in Vodole, Maribor. The bomb was relocated and detonated the same day; Slovenian police evacuated the two nearest houses. 10 May 2008: A 100 kg bomb was found in Nova Gorica. 14 July 2011: A 250 kg bomb was found at a construction site in Dravograd. The bomb was disarmed the same day and disposed of the day after. 3 October 2011: A 250 kg bomb was found at the same construction site in Dravograd. The bomb was disposed of on 4 October. June 2014: A 250 kg bomb was found on the tug Maone in the Gulf of Piran. It was lifted on 3 March 2015, as bad weather had prevented the bomb's relocation. 17 August 2014: A 250 kg bomb was found in Maribor. 13 February 2015: Three 250 kg American bombs were found in the Drava river in Maribor. The bombs were not dangerous and were stored until disposal. 19 July 2017: A 41-year-old man found a 227 kg American bomb in the surroundings of Vurberk and brought it home. When the bomb was deactivated on 25 July, two explosions occurred. The intervention was regarded as one of the hardest in Slovenian history; 400 people were evacuated within a radius of 1 km. 17 April 2018: A 285 kg bomb was found in Nova Gorica and stored until disposal. 17 April 2018: A British incendiary bomb of unknown mass was found in Rošpoh, Maribor, and detonated. 26 October 2019: A 250 kg bomb was found in Maribor and detonated on 31 October. 27 October 2019: An aerial bomb of unknown mass was found while a railway was undergoing reconstruction in Maribor. 3 November 2019: A 500 kg bomb was deactivated in Maribor. About 200 bombs are estimated to remain in Maribor. 11 January 2022: A 250 kg bomb was found in Maribor and deactivated on 16 January. 14 May 2022: A 200 kg bomb was found in the vicinity of Pragersko railway station and stored until disposal. 
17 July 2023: A 250 kg aerial bomb was found in Nova Gorica during groundworks for a railway. A 100-metre perimeter was declared, and the evacuation and bomb deactivation were planned for Sunday 23 July. The evacuation was carried out in cooperation with Italian officials, as the bomb was found near the border. 26 February 2024: A 250 kg aerial bomb of American origin was found at Nova Gorica railway station. Another bomb of the same type was found on 29 February about 50 m away. Disposal took place on 10 March 2024, with the evacuation led by Slovenian and Italian authorities. 25 August 2024: A 226 kg MC 500 lb bomb was found at Nova Gorica railway station and was deactivated. The evacuation was carried out in cooperation with Italian authorities. See also Unexploded ordnance References Aftermath of World War II Bomb disposal Emergency management in Germany Emergency management in Poland Emergency management in the United Kingdom
World War II bomb disposal in Europe
Chemistry
2,772
3,033,696
https://en.wikipedia.org/wiki/Nikolai%20Chebotaryov
Nikolai Grigorievich Chebotaryov (often spelled Chebotarov or Chebotarev, , ) ( – 2 July 1947) was a Soviet mathematician. He is best known for the Chebotaryov density theorem. He was a student of Dmitry Grave, a Russian mathematician. Chebotaryov worked on the algebra of polynomials, in particular examining the distribution of the zeros. He also studied Galois theory and wrote a textbook on the subject titled Basic Galois Theory. His ideas were used by Emil Artin to prove the Artin reciprocity law. He worked with his student Anatoly Dorodnov on a generalization of the quadrature of the lune, and proved the conjecture now known as the Chebotarev theorem on roots of unity. Early life Nikolai Chebotaryov was born on 15 June 1894 in Kamianets-Podilskyi, Russian Empire (now in Ukraine). He entered the department of physics and mathematics at Kyiv University in 1912. In 1928, he became a professor at Kazan University, remaining there for the rest of his life. He died on 2 July 1947. He was an atheist. On 14 May 2010, a memorial plaque for Nikolai Chebotaryov was unveiled on the main administration building of I.I. Mechnikov Odessa National University. References 1894 births 1947 deaths People from Kamianets-Podilskyi People from Kamenets-Podolsky Uyezd Ukrainian people of Russian descent 20th-century Russian mathematicians Soviet mathematicians Russian atheists Ukrainian mathematicians Number theorists Corresponding Members of the USSR Academy of Sciences Recipients of the Stalin Prize Recipients of the Order of Lenin Recipients of the Order of the Red Banner of Labour Russian scientists
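As a numerical illustration of the Chebotaryov density theorem, consider its simplest nontrivial special case (supplied here as an editorial example, not from the article): the polynomial x² + 1 splits into two linear factors modulo an odd prime p exactly when −1 is a quadratic residue mod p, i.e. when p ≡ 1 (mod 4), and the theorem predicts these primes have density 1/2. A short Python sketch checks this empirically:

```python
# Empirical check of a special case of the Chebotaryov density theorem:
# x^2 + 1 factors mod an odd prime p iff -1 is a square mod p (p ≡ 1 mod 4).
# The theorem predicts the set of such primes has natural density 1/2.

def primes_up_to(n: int) -> list[int]:
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def split_fraction(limit: int) -> float:
    """Fraction of odd primes up to `limit` for which x^2 + 1 splits."""
    odd_primes = [p for p in primes_up_to(limit) if p > 2]
    # Euler's criterion: -1 is a quadratic residue mod p iff (-1)^((p-1)/2) ≡ 1.
    splits = sum(1 for p in odd_primes if pow(-1, (p - 1) // 2, p) == 1)
    return splits / len(odd_primes)

print(split_fraction(100_000))  # close to the predicted density 1/2
```

The empirical fraction sits slightly below 1/2 at any finite cutoff (the Chebyshev bias), but tends to the predicted density.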
Nikolai Chebotaryov
Mathematics
349
24,109,458
https://en.wikipedia.org/wiki/Popoviciu%27s%20inequality
In convex analysis, Popoviciu's inequality is an inequality about convex functions. It is similar to Jensen's inequality and was found in 1965 by Tiberiu Popoviciu, a Romanian mathematician. Formulation Let f be a function from an interval I ⊆ ℝ to ℝ. If f is convex, then for any three points x, y, z in I, (f(x) + f(y) + f(z))/3 + f((x + y + z)/3) ≥ (2/3)·[f((x + y)/2) + f((y + z)/2) + f((z + x)/2)]. If a function f is continuous, then it is convex if and only if the above inequality holds for all x, y, z from I. When f is strictly convex, the inequality is strict except for x = y = z. Generalizations It can be generalized to any finite number n of points instead of 3, taken on the right-hand side k at a time instead of 2 at a time: Let f be a continuous function from an interval I ⊆ ℝ to ℝ. Then f is convex if and only if, for any integers n and k where n ≥ 3 and , and any n points from I, Weighted inequality Popoviciu's inequality can also be generalized to a weighted inequality. Let f be a continuous function from an interval I ⊆ ℝ to ℝ. Let be three points from I, and let be three nonnegative reals such that and . Then, Notes Inequalities Convex analysis
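The three-point inequality can be checked numerically. The sketch below tests the standard form of Popoviciu's inequality, (f(x)+f(y)+f(z))/3 + f((x+y+z)/3) ≥ (2/3)·[f((x+y)/2) + f((y+z)/2) + f((z+x)/2)], against random triples; the choice of f(x) = eˣ is an arbitrary convex example:

```python
import math
import random

def popoviciu_gap(f, x, y, z):
    """LHS minus RHS of Popoviciu's inequality in its standard form:
    (f(x)+f(y)+f(z))/3 + f((x+y+z)/3)
        >= (2/3) * [ f((x+y)/2) + f((y+z)/2) + f((z+x)/2) ].
    Nonnegative (up to rounding) whenever f is convex."""
    lhs = (f(x) + f(y) + f(z)) / 3 + f((x + y + z) / 3)
    rhs = (2 / 3) * (f((x + y) / 2) + f((y + z) / 2) + f((z + x) / 2))
    return lhs - rhs

random.seed(0)
for _ in range(1000):
    x, y, z = (random.uniform(-5.0, 5.0) for _ in range(3))
    # exp is convex, so the gap should never be (meaningfully) negative
    assert popoviciu_gap(math.exp, x, y, z) >= -1e-9
```

For x = y = z the gap is exactly zero, matching the equality case for strictly convex f.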
Popoviciu's inequality
Mathematics
249
44,006,850
https://en.wikipedia.org/wiki/Onoffice
OnOffice is a quarterly architecture and design magazine launched in 2006 by publishing director Daren Newton, with a particular focus on the workplace, hospitality, and education sectors. It features news, criticism and case studies on architecture, interior and product design for the commercial industry. OnOffice is owned by Media 10 Ltd. The current editor is Kaye Preston. OnOffice is part of a wider Media 10 publication and event portfolio that includes ICON, The Clerkenwell Post, Clerkenwell Design Week and Design London. Notes In May 2013, OnOffice features editor Jenny Brewer was quoted by Kate Burnett in the London Evening Standard. References External links onoffice magazine Media 10 Architecture magazines Visual arts magazines published in the United Kingdom Monthly magazines published in the United Kingdom Design magazines English-language magazines Magazines established in 2006 Mass media in Essex
Onoffice
Engineering
167
70,105,084
https://en.wikipedia.org/wiki/Coral%20Barbas
María del Coral Barbas Arribas (or Arriba) is a professor at the Universidad CEU San Pablo in Madrid, Spain who is known for her research on metabolomics and integration of chemical data. Education and career Barbas has a Ph.D. from Complutense University of Madrid. From 2005 until 2006 she was a Marie Curie fellow at King's College London. As of 2022 she is a professor of analytical chemistry at the Universidad CEU San Pablo and is the president of the Madrid section of the Spanish Royal Society of Chemistry. Research Barbas is known for her research on metabolomics, a field she was first introduced to while she was a Marie Curie fellow. Her early research centered on the analysis of vitamins and development of chemical methods to analyze compounds such as caffeine. Her subsequent research has developed methods to analyze organic compounds in pharmaceutical drugs and foods, and defined biomarkers for diseases such as leukemia and Parkinson's disease. She is also known for defining quality assurance protocols for metabolomics data analysis and establishing workflows to analyze metabolomics data. Selected publications Awards and honors The Analytical Scientist named Barbas to their 2016 Power List in recognition of her contributions to chemistry. In 2017, she was honored by Acta Sanitaria for her chemical research linking diabetes and obesity. In 2018, she received the International Award of the Belgian Society of Pharmaceutical Sciences. References External links Analytical chemists Women chemists Living people Spanish scientists Year of birth missing (living people)
Coral Barbas
Chemistry
312
73,287,048
https://en.wikipedia.org/wiki/Zinc%20perchlorate
Zinc perchlorate is the inorganic compound with the chemical formula Zn(ClO4)2, which forms a hexahydrate. Synthesis Zinc perchlorate can be prepared by dissolving zinc oxide or zinc carbonate in perchloric acid: ZnO + 2HClO4 -> Zn(ClO4)2 + H2O ZnCO3 + 2HClO4 -> Zn(ClO4)2 + H2O + CO2 Chemical properties The compound decomposes when heated to high temperatures and may explode if heated too strongly. Like most other perchlorates, such as copper perchlorate and lead perchlorate, zinc perchlorate is prone to deliquescence. Zinc perchlorate can form complexes with ligands such as 8-aminoquinoline, tricarbohydrazide, and tetraphenylethylene tetratriazole. Physical properties The compound forms a hexahydrate, Zn(ClO4)2·6H2O. Zinc perchlorate is a hygroscopic, odorless, colorless solid, soluble in water and low-molecular-weight alcohols. Uses Zinc perchlorate is used as an oxidizing agent and catalyst. References External links Perchlorates Oxidizing agents Zinc compounds
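Following the first synthesis equation, a small Python sketch estimates the theoretical yield of anhydrous zinc perchlorate from a given mass of zinc oxide. The molar masses are approximations assumed from standard atomic weights, and the function name is illustrative:

```python
# Theoretical yield for ZnO + 2 HClO4 -> Zn(ClO4)2 + H2O
# Approximate molar masses in g/mol (standard atomic weights).
M_ZNO = 65.38 + 16.00                          # ZnO
M_ZN_CLO4_2 = 65.38 + 2 * (35.45 + 4 * 16.00)  # Zn(ClO4)2

def perchlorate_yield(mass_zno_g: float) -> float:
    """Mass of Zn(ClO4)2 obtainable from a given mass of ZnO, assuming
    excess acid and complete conversion (1:1 mole ratio ZnO : Zn(ClO4)2)."""
    return mass_zno_g / M_ZNO * M_ZN_CLO4_2

print(round(perchlorate_yield(10.0), 2))  # g of Zn(ClO4)2 from 10 g of ZnO
```

Dissolving 10 g of zinc oxide in excess perchloric acid would thus give roughly 32 g of the anhydrous salt at full conversion.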
Zinc perchlorate
Chemistry
260
45,042,878
https://en.wikipedia.org/wiki/Evan%20J.%20Crane
Evan Jay Crane (February 14, 1889 – December 30, 1966) was an American chemist and the editor of Chemical Abstracts 1915–1958. He graduated from Ohio State University in 1911, and received an Honorary D.Sc. there in 1938. In 1951 he was awarded the Priestley Medal, the highest honour of the American Chemical Society, and in 1953 the Austin M. Patterson Award. In 1958, at the 134th national meeting of the American Chemical Society, Crane was presented with a commemorative scroll from the ACS Division of Chemical Literature, worded as follows: Publications A Guide to the Literature of Chemistry, by E. J. Crane and Austin M. Patterson (Wiley, 1927) (2nd edition: 1957, with Eleanor B. Marr) References External links Photograph of Crane in the Smithsonian Institution Archives A Guide to the Literature of Chemistry (1927) full text at Internet Archive 1889 births 1966 deaths 20th-century American chemists Cheminformatics Place of birth missing Ohio State University alumni
Evan J. Crane
Chemistry
202
32,346,989
https://en.wikipedia.org/wiki/Edward%20Pritchard%20Martin
Edward Pritchard Martin (20 January 1844, Dowlais – 25 September 1910, Harrogate) was a British engineer and steel maker. Life His father was mining engineer to the Dowlais Iron Co. for 58 years. In 1860, he apprenticed with William Menelaus, who had worked with Sir Henry Bessemer. In 1864, he worked at the London office of the Dowlais Iron Co. In 1869, he was deputy general manager of the Dowlais Ironworks under Menelaus. At the end of 1870, he became manager of the Cwmavon Works. Later he worked at the Blaenavon Ironworks. He became associated with the Thomas-Gilchrist attempts to make a satisfactory metal from phosphoric ores, and made commercial trials of the process. For his services to the industry he was awarded the Bessemer Gold Medal by the Iron and Steel Institute in 1884, jointly with Edward Windsor Richards. After the death of Menelaus, he became General Manager of the Dowlais Iron Works, from 1882 to 1902. He supervised the erection of new works at East Moors Steelworks, Cardiff, beginning in 1888; blast furnaces were blown in February 1891, and the works and steel mill started production in 1895. He was elected President of the Institution of Mechanical Engineers in both 1905 and 1906, and of the Iron and Steel Institute in 1897–98. He was also President of the South Wales Institute of Engineers and the Monmouth and South Wales Colliery-Owners’ Association. He served as High Sheriff of Monmouthshire in 1903. References 1844 births 1910 deaths People from Dowlais Welsh mechanical engineers Bessemer Gold Medal High sheriffs of Monmouthshire
Edward Pritchard Martin
Chemistry
358
70,509,023
https://en.wikipedia.org/wiki/Balance%20of%20angular%20momentum
The balance of angular momentum or Euler's second law in classical mechanics is a law of physics stating that to alter the angular momentum of a body a torque must be applied to it. An example of use is the playground merry-go-round in the picture. To put it in rotation it must be pushed: the push applies a torque that feeds angular momentum to the merry-go-round. The torque of frictional forces in the bearing and of drag, however, makes a resistive torque that gradually lessens the angular momentum and eventually stops the rotation. The mathematical formulation states that the rate of change of the angular momentum about a point c is equal to the sum of the external torques acting on the body about that point. The point c is a fixed point in an inertial system or the center of mass of the body. In the special case when the external torques vanish, it shows that the angular momentum is conserved. The d'Alembert force counteracting the change of angular momentum shows up as a gyroscopic effect. From the balance of angular momentum follows the equality of corresponding shear stresses, or the symmetry of the Cauchy stress tensor. The same follows from the Boltzmann Axiom, according to which internal forces in a continuum are torque-free. Thus the balance of angular momentum, the symmetry of the Cauchy stress tensor, and the Boltzmann Axiom in continuum mechanics are related terms. Especially in the theory of the spinning top the balance of angular momentum plays a crucial part. In continuum mechanics it serves to exactly determine the skew-symmetric part of the stress tensor. The balance of angular momentum is, besides the Newtonian laws, a fundamental and independent principle, and was first introduced by the Swiss mathematician and physicist Leonhard Euler in 1775. 
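The displayed equation that belongs to the formulation above can be written as follows (notation chosen here for illustration: L_c is the angular momentum about the reference point c and the M_{c,i} are the external torques about that point):

```latex
% Balance of angular momentum (Euler's second law) about the point c
\frac{\mathrm{d}\vec{L}_c}{\mathrm{d}t} \;=\; \sum_i \vec{M}_{c,i}

% Special case: if all external torques vanish, angular momentum is conserved
\sum_i \vec{M}_{c,i} = \vec{0}
\quad\Longrightarrow\quad
\vec{L}_c = \text{const.}
```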
History The Swiss mathematician Jakob I Bernoulli applied the balance of angular momentum in 1703 – without explicitly formulating it – to find the center of oscillation of a pendulum, which he had already done in a first, somewhat incorrect manner in 1686. The balance of angular momentum thus preceded Newton's laws, which were first published in 1687. In 1744, Euler was the first to use the principles of momentum and of angular momentum to state the equations of motion of a system. In 1750, in his treatise "Discovery of a new principle of mechanics", he published Euler's equations of rigid body dynamics, which today are derived from the balance of angular momentum, but which Euler could deduce for the rigid body from Newton's second law. After preliminary studies on plane elastic continua, for which the balance of torques is indispensable, Euler raised the balance of angular momentum to an independent principle for calculating the movement of bodies in 1775. In 1822, the French mathematician Augustin-Louis Cauchy introduced the stress tensor, whose symmetry in combination with the balance of linear momentum ensures the fulfillment of the balance of angular momentum in the general case of a deformable body. The interpretation of the balance of angular momentum was first noted by M. P. Saint-Guilhem in 1851. Kinetics of rotation Kinetics deals with states that are not in mechanical equilibrium. According to Newton's second law, an external force leads to a change in velocity (acceleration) of a body. Analogously, an external torque means a change in angular velocity, resulting in an angular acceleration. The inertia relating to rotation depends not only on the mass of a body but also on its spatial distribution. For a rigid body this is expressed by the moment of inertia. For a rotation around a fixed axis, the torque is proportional to the angular acceleration, with the moment of inertia as the proportionality factor. 
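The fixed-axis relation just described (torque = moment of inertia × angular acceleration) is easy to put to work numerically. A minimal sketch in Python, returning to the merry-go-round example from the introduction; all numbers here are illustrative assumptions, not values from the text:

```python
# Fixed-axis rotation: I * alpha = M_applied - M_friction.
# Merry-go-round modelled as a solid disc (assumed data).

m = 200.0                       # mass in kg (assumed)
r = 1.5                         # radius in m (assumed)
I = 0.5 * m * r**2              # moment of inertia of a solid disc about its axis

M_applied = 60.0                # constant pushing torque in N*m (assumed)
M_friction = 15.0               # constant resistive torque in N*m (assumed)

alpha = (M_applied - M_friction) / I   # angular acceleration in rad/s^2

# Integrate omega(t) with a simple Euler step while the push lasts.
omega, dt = 0.0, 0.01
for _ in range(int(10.0 / dt)):        # push for 10 s
    omega += alpha * dt

print(I)                # 225.0 kg*m^2
print(alpha)            # 0.2 rad/s^2
print(round(omega, 6))  # 2.0 rad/s after 10 s of pushing
```

Once the push stops, only the friction torque remains, so the same balance with M_applied = 0 gives a negative alpha and the rotation winds down, as the introduction describes.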
Here the moment of inertia depends not only on the position of the axis of rotation (see Steiner's theorem) but also on its direction. Should the above law be formulated more generally for an arbitrary axis of rotation, then the inertia tensor must be used. In the two-dimensional special case, a torque only results in an acceleration or slowing down of a rotation. In the general three-dimensional case, however, it can also alter the direction of the axis (precession). Formulations In rigid body dynamics the balance of angular momentum leads to Euler's equations. In continuum mechanics the balance of angular momentum leads to Cauchy's second law of motion, which states the symmetry of the Cauchy stress tensor. The Boltzmann Axiom has the same consequence. Boltzmann Axiom In 1905, the Austrian physicist Ludwig Boltzmann pointed out that, on reduction of a body into infinitesimally smaller volume elements, the inner reactions have to meet all static conditions for mechanical equilibrium. Cauchy's stress theorem handles the equilibrium in terms of force. For the analogous statement in terms of torque, the German mathematician Georg Hamel coined the name Boltzmann Axiom. This axiom is equivalent to the symmetry of the Cauchy stress tensor. In order that the resultants of the stresses exert no torque on the volume element, the resultant force must pass through the center of the volume element. The lines of action of the inertia forces and of the normal stress resultants σxx·dy and σyy·dx pass through the center of the volume element. In order that the shear stress resultants τxy·dy and τyx·dx also pass through the center of the volume element, τxy = τyx must hold. This is precisely the statement of the equality of corresponding shear stresses in the xy plane. Cosserat Continuum In addition to the torque-free classical continuum with a symmetric stress tensor, Cosserat continua (polar continua) that are not torque-free have also been defined. One application of such a continuum is the theory of shells. 
Cosserat continua are capable of transporting not only a momentum flux but also an angular momentum flux. Therefore, there may also be sources of momentum and angular momentum inside the body. Here the Boltzmann Axiom does not apply, and the stress tensor may be skew-symmetric. If these fluxes are treated as usual in continuum mechanics, field equations arise in which the skew-symmetric part of the stress tensor has no energetic significance. The balance of angular momentum becomes independent of the balance of energy and is used to determine the skew-symmetric part of the stress tensor. The American mathematician Clifford Truesdell saw in this the "true basic sense of Euler's second law". Area rule The area rule is a corollary of the angular momentum law in the form: the resulting moment is equal to the product of twice the mass and the time derivative of the areal velocity. It refers to the position vector x from the reference point to a point mass with mass m, which has the angular momentum L = x × m·v with the velocity v and the momentum p = m·v. In the infinitesimal time dt the position vector sweeps over a triangle whose area is dA = ½·|x × v|·dt, see image, with areal velocity dA/dt and cross product "×". This is how it turns out: L = x × m·v = 2m·(df/dt), with the vector areal velocity df/dt = ½·(x × v). With Euler's second law this becomes: M = dL/dt = 2m·(d²f/dt²). The special case of plane, moment-free central force motion is treated by Kepler's second law, also known as the area rule. References Continuum mechanics Equations of physics Scientific observation
Balance of angular momentum
Physics,Mathematics
1,486
77,903,754
https://en.wikipedia.org/wiki/Ammonium%20hexacyanoferrate%28II%29
Ammonium hexacyanoferrate(II) is an inorganic chemical compound with the chemical formula (NH4)4[Fe(CN)6]. Synthesis It can be prepared by neutralization of ferrocyanic acid (hydrogen hexacyanoferrate(II), H4[Fe(CN)6]) with ammonia solution, followed by salting out with ethanol. Physical properties Ammonium hexacyanoferrate(II) forms green crystals. The compound is readily soluble in water and does not dissolve in ethanol. It forms hydrates. References Cyano complexes Ammonium compounds Cyanometallates Ferrates
Ammonium hexacyanoferrate(II)
Chemistry
96
889,940
https://en.wikipedia.org/wiki/Trivial%20name
In chemistry, a trivial name is a non-systematic name for a chemical substance. That is, the name is not recognized according to the rules of any formal system of chemical nomenclature such as IUPAC inorganic or IUPAC organic nomenclature. A trivial name is not a formal name and is usually a common name. Generally, trivial names are not useful in describing the essential properties of the thing being named. Properties such as the molecular structure of a chemical compound are not indicated. And, in some cases, trivial names can be ambiguous or will carry different meanings in different industries or in different geographic regions (for example, a trivial name such as white metal can mean various things). Trivial names are simpler. As a result, a limited number of trivial chemical names are retained names, an accepted part of the nomenclature. Trivial names often arise in the common language; they may come from historic usages in, for example, alchemy. Many trivial names pre-date the institution of formal naming conventions. Names can be based on a property of the chemical, including appearance (color, taste or smell), consistency, and crystal structure; a place where it was found or where the discoverer comes from; the name of a scientist; a mythological figure; an astronomical body; the shape of the molecule; and even fictional figures. All elements that have been isolated have trivial names. Definitions In scientific documents, international treaties, patents and legal definitions, names for chemicals are needed that identify them unambiguously. This need is satisfied by systematic names. One such system, that of the International Union of Pure and Applied Chemistry (IUPAC), was established in 1950. Other systems have been developed by the American Chemical Society, the International Organization for Standardization, and the World Health Organization. 
However, chemists still use many names that are not systematic because they are traditional or because they are more convenient than the systematic names. These are called trivial names. The word "trivial", often used in a pejorative sense, was intended to mean "commonplace". In addition to trivial names, chemists have constructed semi-trivial names by appending a standard symbol to a trivial stem. Some trivial and semi-trivial names are so widely used that they have been officially adopted by IUPAC; these are known as retained names. Pesticide common names The common names used for pesticides did not become commonplace through repeated informal usage; rather, they are granted by an ISO committee (TC 81), which approves each common name according to ISO 1750. Elements Traditional names of elements are trivial, some originating in alchemy. IUPAC has accepted these names, but has also defined systematic names of elements that have not yet been prepared. It has adopted a procedure by which the scientists who are credited with preparing an element can propose a new name. Once the IUPAC has accepted such a (trivial) name, it replaces the systematic name. Origins Nine elements were known by the Middle Ages: gold, silver, tin, mercury, copper, lead, iron, sulfur, and carbon. Mercury was named after the planet, but its symbol was derived from the Latin hydrargyrum, which itself comes from the Greek υδράργυρος, meaning liquid silver; mercury is also known as quicksilver in English. The symbols for the other eight are derived from their Latin names. Systematic nomenclature began after Louis-Bernard Guyton de Morveau stated the need for "a constant method of denomination, which helps the intelligence and relieves the memory". The resulting system was popularized by Antoine Lavoisier's publication of Méthode de nomenclature chimique (Method of Chemical Nomenclature) in 1787. Lavoisier proposed that elements be named after their properties. 
For the next 125 years, most chemists followed this suggestion, using Greek and Latin roots to compose the names; for example, hydrogen ("water-producing"), oxygen ("acid-producing"), nitrogen ("niter-producing"), bromine ("stink"), and argon were based on Greek roots, while the names of iodine and chlorine were derived from the Greek words for their characteristic colors. Indium, rubidium, and thallium were similarly named for the colors of particular lines in their emission spectra. Iridium, which forms compounds of many different colors, takes its name from iris, the Latin for "rainbow". The noble gases have all been named for their origin or properties. Helium comes from the Greek helios, meaning "Sun", because it was first detected as a line in the spectrum of the Sun (it is not known why the suffix -ium, which is used for metals, was chosen). The other noble gases are neon ("new"), argon ("slow, lazy"), krypton ("hidden"), xenon ("stranger"), and radon ("from radium"). Many more elements have been given names that have little or nothing to do with their properties. Elements have been named for celestial bodies (helium, selenium, tellurium, for the Sun, Moon, and Earth; cerium and palladium for Ceres and Pallas, two asteroids). They have been named for mythological figures, including Titans in general (titanium) and Prometheus in particular (promethium); Roman and Greek gods (uranium, neptunium, and plutonium) and their descendants (tantalum for Tantalus, a son of Zeus, and niobium for Niobe, a daughter of Tantalus); and Norse deities (vanadium for the goddess Vanadis and thorium for the god Thor). Some elements were named for aspects of the history of their discovery. In particular, technetium and promethium were so named because the first samples detected were artificially synthesised; neither of the two has any isotope sufficiently stable to occur in nature on Earth in significant quantities. 
The connection to the Titan Prometheus was that he had been fabled to have stolen fire from the gods for mankind. Discoverers of some elements named them after their home country or city. Marie Curie named polonium after Poland; ruthenium, gallium, germanium, and lutetium were based on the Latin names for Russia, France, Germany, and Paris. Other elements are named after the place where they were discovered. Four elements — terbium, erbium, ytterbium, and yttrium — were named after the Swedish village Ytterby, where ores containing them were extracted. Other elements named after places are magnesium (after Magnesia), strontium, scandium, europium, thulium (after an old Roman name for an unidentified northern region), holmium, copper (derived from Cyprus, where it was mined in the Roman era), hafnium, rhenium, americium, berkelium, californium, and darmstadtium. For the elements up to 92 (uranium), naming elements after people was discouraged. The two exceptions are indirect, the elements being named after minerals that were themselves named after people. These were gadolinium (found in gadolinite, named after the Finnish chemist Johan Gadolin) and samarium (the mineral samarskite was named after a Russian mining engineer, Vasili Samarsky-Bykhovets). Among the transuranium elements, this restriction was relaxed; there followed curium (after the Curies), einsteinium (Albert Einstein), fermium (Enrico Fermi), mendelevium (Dmitri Mendeleev), nobelium (Alfred Nobel) and lawrencium (Ernest Lawrence). Relation to IUPAC standards IUPAC has established international standards for naming elements. The first scientist or laboratory to isolate an element has the right to propose a name; after a review process, a final decision is made by the IUPAC Council. In keeping with tradition, names can be based on a mythological concept or character, astronomical object, mineral, place, property of the element or scientist. 
For those elements that have not yet been discovered, IUPAC has established a systematic name system. The names combine syllables that represent the digits of the atomic number, followed by "-ium". For example, "unununium" is element 111 ("un" being the syllable for 1). However, once the element has been found, the systematic name is replaced by a trivial one, in this case roentgenium. The IUPAC names for elements are intended for use in the official languages. At the time of the first edition of the IUPAC Red Book (which contains the rules for inorganic compounds), those languages were English and French; now English is the sole official language. However, other languages still have their own names for elements. The chemical symbol for tungsten, W, is based on the German name Wolfram, which is found in wolframite and comes from the German for "wolf's foam", the name by which the mineral was known to Saxon miners. The name tungsten means "heavy stone", a description of scheelite, another mineral in which tungsten is found. The Russian names for hydrogen, oxygen and carbon are vodorod, kislorod and uglerod (generating water, acid and coal respectively). The German names for hydrogen, oxygen, and nitrogen are Wasserstoff (water substance), Sauerstoff (acid substance), and Stickstoff (smothering substance). The corresponding Chinese names are qīngqì (light gas), yǎngqì (nourishing gas), and dànqì (diluting gas). A method for translating chemical names into Chinese was developed by John Fryer and Xu Shou in 1871. Where traditional names were well established, they kept them; otherwise, a single character was created. Inorganic chemistry Early terminology for compound chemicals followed similar rules to the naming of elements. The names could be based on the appearance of the substance, including all five senses. In addition, chemicals were named after the consistency, crystalline form, a person or place, its putative medical properties or method of preparation. 
Salt (sodium chloride) is soluble and is used to enhance the taste of food. Substances with similar properties came to be known as salts, in particular Epsom salt (magnesium sulfate, found in a bitter saline spring in the English town of Epsom). Ammonium (with the little-used systematic name azanium) was first extracted from sal ammoniac, meaning "salt of Amun". Ancient Romans noticed crystals of it in Egyptian temples devoted to the god Amun; the crystals had condensed from the smoke of burning camel dung. Other names, like sugar of lead (lead(II) acetate), butter of antimony (antimony trichloride), oil of vitriol (sulfuric acid), and cream of tartar (potassium bitartrate), borrowed their language from the kitchen. Many more names were based on color; for example, hematite, orpiment, and verdigris come from words meaning "blood-like stone", "gold pigment", and "green of Greece". Some names are based on their use. Lime is a general name for materials combining calcium with carbonates, oxides or hydroxides; the name comes from a root meaning "sticking or adhering"; its earliest use was as mortar for construction. Water has several systematic names, including oxidane (the IUPAC name), hydrogen oxide, and dihydrogen monoxide (DHMO). The latter was the basis of the dihydrogen monoxide hoax, a document that was circulated warning readers of the dangers of the chemical (for example, it is fatal if inhaled). Organic chemistry In organic chemistry, some trivial names derive from a notable property of the thing being named. For instance, lecithin, the common name for phosphatidylcholine, was originally isolated from egg yolk. The word is coined from the Greek λέκιθος (lékithos) for yolk. Many trivial names continue to be used because their sanctioned equivalents are considered too cumbersome for everyday use. For example, "tartaric acid", a compound found in wine, has a systematic name of 2,3-dihydroxybutanedioic acid. 
The pigment β-Carotene has an IUPAC name of 1,3,3-trimethyl-2-[(1E,3E,5E,7E,9E,11E,13E,15E,17E)-3,7,12,16-tetramethyl-18-(2,6,6-trimethylcyclohexen-1-yl)octadeca-1,3,5,7,9,11,13,15,17-nonaenyl]cyclohexene. However, the trivial name can be potentially confusing. Based on its name, one might come to the conclusion that the molecule theobromine contains one or more bromine atoms. In reality it is an alkaloid similar in structure to caffeine. Shape-based Several organic molecules have semitrivial names where the suffixes -ane (for an alkane) or -ene (for an alkene) are added to a name based on the shape of the molecule. Some are pictured below. Other examples include barrelene (shaped like a barrel), fenestrane (having a window-pane motif), ladderane (a ladder shape), olympiadane (having a shape with the same topology as the Olympic rings) and quadratic acid (also known as squaric acid). Based on fiction The bohemic acid complex is a mixture of chemicals obtained through fermentation of a species of actinobacteria. In 1977 the components were isolated and have been found useful as antitumor agents and anthracycline antibiotics. The authors named the complex (and one of its components, bohemamine) after the opera La bohème by Puccini, and the remaining components were named after characters in the opera: alcindoromycin (Alcindoro), collinemycin (Colline), marcellomycin (Marcello), mimimycin (Mimi), musettamycin (Musetta), rudolphomycin (Rodolfo) and schaunardimycin (Schaunard). However, the relationships between the characters do not correctly reflect the chemical relationships. A research lab at Lepetit Pharmaceuticals, led by Piero Sensi, was fond of coining nicknames for chemicals that they discovered, later converting them to a form more acceptable for publication. The antibiotic rifampicin was named after a French movie, Rififi, about a jewel heist. They nicknamed another antibiotic "Mata Hari" before changing the name to matamycin. 
See also List of chemical compounds with unusual names Organic chemistry: Common nomenclature – trivial names References Further reading External links Appendix 13: Trivial names still in common use for selected inorganic and organic compounds, inorganic ions and organic substituents Chemical nomenclature
Trivial name
Chemistry
3,178
5,492,198
https://en.wikipedia.org/wiki/Limit%20set
In mathematics, especially in the study of dynamical systems, a limit set is the state a dynamical system reaches after an infinite amount of time has passed, by either going forward or backwards in time. Limit sets are important because they can be used to understand the long-term behavior of a dynamical system. A system that has reached its limiting set is said to be at equilibrium. Types fixed points periodic orbits limit cycles attractors In general, limit sets can be very complicated, as in the case of strange attractors, but for 2-dimensional dynamical systems the Poincaré–Bendixson theorem provides a simple characterization of all nonempty, compact ω-limit sets that contain at most finitely many fixed points: such a set is a fixed point, a periodic orbit, or a union of fixed points and homoclinic or heteroclinic orbits connecting those fixed points. Definition for iterated functions Let X be a metric space, and let f : X → X be a continuous function. The ω-limit set of x ∈ X, denoted by ω(x, f), is the set of cluster points of the forward orbit {f^n(x) : n ∈ ℕ} of the iterated function f. Hence, y ∈ ω(x, f) if and only if there is a strictly increasing sequence of natural numbers {n_k} such that f^{n_k}(x) → y as k → ∞. Another way to express this is ω(x, f) = ⋂_{n∈ℕ} cl({f^k(x) : k > n}), where cl(S) denotes the closure of the set S. The points in the limit set are non-wandering (but may not be recurrent points). This may also be formulated as the outer limit (limsup) of the sequence of tail sets {f^k(x) : k > n}. If f is a homeomorphism (that is, a bicontinuous bijection), then the α-limit set is defined in a similar fashion, but for the backward orbit; i.e. α(x, f) = ω(x, f⁻¹). Both sets are f-invariant, and if X is compact, they are compact and nonempty. Definition for flows Given a real dynamical system (T, X, φ) with flow φ : ℝ × X → X and a point x ∈ X, we call a point y an ω-limit point of x if there exists a sequence (t_n) in ℝ with t_n → ∞ such that φ(t_n, x) → y. For an orbit γ of φ, we say that y is an ω-limit point of γ if it is an ω-limit point of some point on the orbit. Analogously we call y an α-limit point of x if there exists a sequence (t_n) in ℝ with t_n → −∞ such that φ(t_n, x) → y. 
For an orbit γ of φ, we say that y is an α-limit point of γ if it is an α-limit point of some point on the orbit. The set of all ω-limit points (α-limit points) of a given orbit γ is called the ω-limit set (α-limit set) of γ and denoted ω(γ) (α(γ)). If the ω-limit set (α-limit set) is disjoint from the orbit γ, that is ω(γ) ∩ γ = ∅ (α(γ) ∩ γ = ∅), we call ω(γ) (α(γ)) an ω-limit cycle (α-limit cycle). Alternatively the limit sets can be defined as ω(γ) = ⋂_{s∈ℝ} cl({φ(t, x) : t > s}) and α(γ) = ⋂_{s∈ℝ} cl({φ(t, x) : t < s}), where x is any point of the orbit γ. Examples For any periodic orbit γ of a dynamical system, ω(γ) = α(γ) = γ. For any fixed point x₀ of a dynamical system, ω(x₀) = α(x₀) = {x₀}. Properties ω(γ) and α(γ) are closed. If X is compact then ω(γ) and α(γ) are nonempty, compact and connected. ω(γ) and α(γ) are φ-invariant, that is φ(t, ω(γ)) = ω(γ) and φ(t, α(γ)) = α(γ) for all t. See also Julia set Stable set Limit cycle Periodic point Non-wandering set Kleinian group References Further reading
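For an iterated map, the ω-limit set of a point can be approximated numerically by discarding a long transient and then collecting the points the orbit keeps returning to. A minimal sketch in Python; the logistic map and all parameter values below are illustrative choices, not taken from the text:

```python
# Approximate the omega-limit set of a point under an iterated map by
# iterating past a transient and collecting the late orbit (rounded so
# that nearby cluster points merge into one representative).

def omega_limit(f, x, transient=10_000, samples=10_000, decimals=8):
    for _ in range(transient):      # discard the approach to the limit set
        x = f(x)
    pts = set()
    for _ in range(samples):        # sample cluster points of the forward orbit
        x = f(x)
        pts.add(round(x, decimals))
    return sorted(pts)

# Logistic map with r = 3.2: almost every orbit settles onto an
# attracting period-2 cycle, so the omega-limit set has two points.
logistic = lambda x: 3.2 * x * (1.0 - x)
print(omega_limit(logistic, 0.3))   # two points, approx. 0.5130 and 0.7995
```

This only ever sees finitely many iterates, so it is a heuristic: it finds attracting limit sets well but cannot certify membership in the exact ω-limit set.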
Limit set
Mathematics
615
3,647,027
https://en.wikipedia.org/wiki/Flashback%20arrestor
A flashback arrestor or flash arrestor is a gas safety device most commonly used in oxy-fuel welding and cutting to stop the flame or the reverse flow of gas back up into the equipment or supply line. It protects the user and equipment from damage or explosions. These devices are mainly used in industrial processes where oxy-fuel gas mixtures are handled and used. As safety products, flashback arrestors are essential to securing workplaces and the working environment. In former times wet flashback arrestors were also used; today the industry standard is to use dry flashback arrestors with at least two safety elements. Dry type Dry flashback arrestors typically use a combination of safety elements to stop a flashback or reverse flow of gas. This type is typically found in cutting and welding applications all over the world. They work equally effectively in all orientations and need very little maintenance. The simplest flashback arrestor consists of a metallic tube filled with iron wool, which cools the flame below the ignition temperature. In many countries and regions it is mandatory to install them at the gas regulator or gas outlet/tapping point. Depending on the application they are also often used at the torch side as an additional safety device. Flashback arrestors help prevent: Further gas flow in the case of pressure shocks. The entry of air or oxygen into the distribution line or single cylinders. Flashbacks, which are the rapid propagation of a flame back down the hose. Further gas flow in the event of a burnback. According to the standard DIN EN ISO 5175-1 (formerly EN 730-1) they include a minimum of two safety elements: A gas non-return valve (NV), which: prevents dangerous gas mixtures. ensures the gas only flows in the intended direction. and a flame arrestor (FA), which: cools the flame to below the ignition temperature of the gas or gas mixture. prevents flashback. 
In addition to these two basic safety functions a flashback arrestor can also have a: Thermal cut-off valve (TV), which: prevents excessive temperatures. closes automatically at a certain temperature and cuts off the gas flow long before the ignition temperature of the gas mixture is reached. and a pressure-sensitive gas cut-off valve (PV), which stops the gas flow in the event of pressure shocks. Flashback arrestors are suitable for most technical gases (fuel gases) such as acetylene, hydrogen, methane, propane, propylene and butane, as well as oxygen and compressed air. Flashback arrestors have to be tested for gas non-return, for tightness and for gas flow by a qualified person, depending on the country-specific regulations. Wet type Liquid seal flame arrestors are liquid barriers following the principle of a siphon: the liquid stops an entering deflagration and/or detonation and extinguishes the flame. They work by bubbling the gas through a non-flammable and, ideally, non-gas-absorbing liquid, typically water, and stop the flame by preventing it from reaching the submerged intake. These devices are normally very effective at stopping flashbacks from reaching the protected side of the system. They have the disadvantages of working in only one orientation and of being much larger than dry type arrestors, which makes them suitable mainly for large or fixed installations; the liquid level also needs to be checked constantly. On smaller units having a pressure release valve to prevent the unit from bursting under a severe flashback, the fluid level should be monitored to keep it always above the intake, but not so high that the liquid could splash or overflow into the outlet. External links Video and explanation of flashback arrestor with four safety elements Gas technologies Welding Safety equipment
Flashback arrestor
Engineering
742
45,047,140
https://en.wikipedia.org/wiki/QoS%20Class%20Identifier
QoS Class Identifier (QCI) is a mechanism used in 3GPP Long Term Evolution (LTE) networks to ensure that bearer traffic is allocated appropriate Quality of Service (QoS). Different bearer traffic requires different QoS and therefore different QCI values. QCI value 9 is typically used for the default bearer of a UE/PDN for non-privileged subscribers. Background To ensure that bearer traffic in LTE networks is appropriately handled, a mechanism is needed to classify the different types of bearers into different classes, with each class having appropriate QoS parameters for the traffic type. Examples of the QoS parameters include Guaranteed Bit Rate (GBR) or non-Guaranteed Bit Rate (non-GBR), Priority Handling, Packet Delay Budget and Packet Error Loss Rate. This overall mechanism is called QCI. Mechanism The QoS concept as used in LTE networks is class-based, where each bearer type is assigned one QoS Class Identifier (QCI) by the network. The QCI is a scalar that is used within the access network (namely the eNodeB) as a reference to node-specific parameters that control packet forwarding treatment, for example scheduling weight, admission thresholds and link-layer protocol configuration. The QCI is also mapped to transport network layer parameters in the relevant Evolved Packet Core (EPC) core network nodes (for example, the PDN Gateway (P-GW), Mobility Management Entity (MME) and Policy and Charging Rules Function (PCRF)), by preconfigured QCI to Differentiated Services Code Point (DSCP) mapping. According to 3GPP TS 23.203, 9 QCI values in Rel-8 (13 QCIs in Rel-12, 15 QCIs in Rel-14) are standardized and associated with QCI characteristics in terms of the packet forwarding treatment that the bearer traffic receives edge-to-edge between the UE and the P-GW. 
Scheduling priority, resource type, packet delay budget and packet error loss rate are the set of characteristics defined by the 3GPP standard, and they should be understood as guidelines for the pre-configuration of node-specific parameters to ensure that applications/services mapped to a given QCI receive the same level of QoS in multi-vendor environments as well as in roaming scenarios. The QCI characteristics are not signalled on any interface. The following table illustrates the standardized characteristics as defined in the 3GPP TS 23.203 standard "Policy and Charging Control Architecture". Every QCI (GBR and non-GBR) is associated with a priority level, of which priority level 0.5 is the highest. If congestion is encountered, the traffic with the lowest priority level (that is, the highest priority number) would be the first to be discarded. QCI-65, QCI-66, QCI-69 and QCI-70 were introduced in 3GPP TS 23.203 Rel-12. QCI-75 and QCI-79 were introduced in 3GPP TS 23.203 Rel-14. QCI-67 was introduced in 3GPP TS 23.203 Rel-15. See also LTE References LTE (telecommunication) Mobile technology
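The nine Rel-8 QCI values and their characteristics can be captured in a small lookup table, together with the congestion rule that lower-priority traffic (higher priority number) is discarded first. A sketch in Python; the tuples reflect the Rel-8 characteristics as commonly tabulated from 3GPP TS 23.203, which remains the authoritative source, and the service-type comments are typical example services, not normative:

```python
# Rel-8 standardized QCI characteristics, per entry:
# (resource type, priority, packet delay budget in ms, packet error loss rate)
QCI_TABLE = {
    1: ("GBR",     2, 100, 1e-2),   # e.g. conversational voice
    2: ("GBR",     4, 150, 1e-3),   # e.g. conversational video (live streaming)
    3: ("GBR",     3,  50, 1e-3),   # e.g. real-time gaming
    4: ("GBR",     5, 300, 1e-6),   # e.g. non-conversational (buffered) video
    5: ("non-GBR", 1, 100, 1e-6),   # IMS signalling
    6: ("non-GBR", 6, 300, 1e-6),   # e.g. buffered video, TCP-based services
    7: ("non-GBR", 7, 100, 1e-3),   # e.g. voice, live video, interactive gaming
    8: ("non-GBR", 8, 300, 1e-6),   # e.g. TCP-based services (premium subscribers)
    9: ("non-GBR", 9, 300, 1e-6),   # e.g. TCP-based services (default bearer)
}

def drop_order(qcis):
    """Under congestion, lower priority (larger priority number) goes first."""
    return sorted(qcis, key=lambda q: QCI_TABLE[q][1], reverse=True)

# Default-bearer traffic (QCI 9) is discarded before voice (QCI 1)
# and IMS signalling (QCI 5).
print(drop_order([1, 5, 9]))   # [9, 1, 5]
```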
QoS Class Identifier
Technology
660
2,128,300
https://en.wikipedia.org/wiki/Cascade%20reaction
A cascade reaction, also known as a domino reaction or tandem reaction, is a chemical process that comprises at least two consecutive reactions such that each subsequent reaction occurs only in virtue of the chemical functionality formed in the previous step. In cascade reactions, isolation of intermediates is not required, as each reaction composing the sequence occurs spontaneously. In the strictest definition of the term, the reaction conditions do not change among the consecutive steps of a cascade and no new reagents are added after the initial step. By contrast, one-pot procedures similarly allow at least two reactions to be carried out consecutively without any isolation of intermediates, but do not preclude the addition of new reagents or the change of conditions after the first reaction. Thus, any cascade reaction is also a one-pot procedure, while the reverse does not hold true. Although often composed solely of intramolecular transformations, cascade reactions can also occur intermolecularly, in which case they also fall under the category of multicomponent reactions. The main benefits of cascade sequences include high atom economy and reduction of waste generated by the several chemical processes, as well as of the time and work required to carry them out. The efficiency and utility of a cascade reaction can be measured in terms of the number of bonds formed in the overall sequence, the degree of increase in the structural complexity via the process, and its applicability to broader classes of substrates. The earliest example of a cascade reaction is arguably the synthesis of tropinone reported in 1917 by Robinson. Since then, the use of cascade reactions has proliferated in the area of total synthesis. Similarly, the development of cascade-driven organic methodology has also grown tremendously. This increased interest in cascade sequences is reflected by the numerous relevant review articles published in the past couple of decades. 
A growing area of focus is the development of asymmetric catalysis of cascade processes by employing chiral organocatalysts or chiral transition-metal complexes. Classification of cascade reactions is sometimes difficult due to the diverse nature of the many steps in the transformation. K. C. Nicolaou labels the cascades as nucleophilic/electrophilic, radical, pericyclic or transition-metal-catalyzed, based on the mechanism of the steps involved. In the cases in which two or more classes of reaction are included in a cascade, the distinction becomes rather arbitrary and the process is labeled according to what can be arguably considered the “major theme”. In order to highlight the remarkable synthetic utility of cascade reactions, the majority of the examples below come from the total syntheses of complex molecules. Nucleophilic/electrophilic cascades Nucleophilic/electrophilic cascades are defined as the cascade sequences in which the key step constitutes a nucleophilic or electrophilic attack. An example of such a cascade is seen in the short enantioselective synthesis of the broad-spectrum antibiotic (–)-chloramphenicol, reported by Rao et al. (Scheme 1). Herein, the chiral epoxy-alcohol 1 was first treated with dichloroacetonitrile in the presence of NaH. The resulting intermediate 2 then underwent a BF3·Et2O-mediated cascade reaction. Intramolecular opening of the epoxide ring yielded intermediate 3, which, after an in situ hydrolysis facilitated by excess BF3·Et2O, afforded (–)-chloramphenicol (4) in 71% overall yield. Scheme 1. Synthesis of (–)-chloramphenicol via a nucleophilic cascade A nucleophilic cascade was also employed in the total synthesis of the natural product pentalenene (Scheme 2). In this procedure, squarate ester 5 was treated with (5-methylcyclopent-1-en-1-yl)lithium and propynyllithium. 
The two nucleophilic attacks occurred predominantly with trans addition to afford intermediate 6, which spontaneously underwent a 4π-conrotatory electrocyclic opening of the cyclobutene ring. The resulting conjugated species 7 equilibrated to conformer 8, which more readily underwent an 8π-conrotatory electrocyclization to the highly strained intermediate 9. The potential to release strain directed protonation of 9 such that species 10 was obtained selectively. The cascade was completed by an intramolecular aldol condensation that afforded product 11 in 76% overall yield. Further elaboration afforded the target (±)-pentalenene (12). Scheme 2. Cascade reaction in the total synthesis of (±)-pentalenene Organocatalytic cascades A subcategory of nucleophilic/electrophilic sequences is constituted by organocatalytic cascades, in which the key nucleophilic attack is driven by organocatalysis. An organocatalytic cascade was employed in the total synthesis of the natural product harziphilone, reported by Sorensen et al. in 2004 (Scheme 3). Herein, treatment of the enone starting material 13 with organocatalyst 14 yielded intermediate 15 via conjugate addition. Subsequent cyclization by the intramolecular Michael addition of the enolate into the triple bond of the system gave species 16, which afforded intermediate 17 after proton transfer and tautomerization. The cascade was completed by elimination of the organocatalyst and a spontaneous 6π-electrocyclic ring closure of the resultant cis-dienone 18 to (+)-harziphilone (19) in 70% overall yield. Scheme 3. Organocatalytic cascade in the total synthesis of (+)-harziphilone An outstanding triple organocatalytic cascade was reported by Raabe et al. in 2006. Linear aldehydes (20), nitroalkenes (21) and α,β-unsaturated aldehydes (22) could be condensed together organocatalytically to afford tetra-substituted cyclohexane carbaldehydes (24) with moderate to excellent diastereoselectivity and complete enantiocontrol (Scheme 4). 
The transformation is mediated by the readily available proline-derived organocatalyst 23. Scheme 4. Asymmetric synthesis of tetra-substituted cyclohexane carbaldehydes via a triple organocatalytic cascade reaction The transformation was proposed to proceed via a Michael addition/Michael addition/aldol condensation sequence (Scheme 5). In the first step, Michael addition of aldehyde 20 to nitroalkene 21 occurs through enamine catalysis, yielding nitroalkane 25. Condensation of α,β-unsaturated aldehyde 22 with the organocatalyst then facilitates the conjugate addition of 25 to give intermediate enamine 26, which is prone to undergo an intramolecular aldol condensation to iminium species 27. Organocatalyst 23 is regenerated by hydrolysis, along with the product 24, thus closing the triple cascade cycle. Scheme 5. Proposed catalytic cycle for the asymmetric triple organocatalytic cascade Radical cascades Radical cascades are those in which the key step constitutes a radical reaction. The high reactivity of free radical species renders radical-based synthetic approaches decidedly suitable for cascade reactions. One of the most widely recognized examples of the synthetic utility of radical cascades is the cyclization sequence employed in the total synthesis of (±)-hirsutene in 1985 (Scheme 6). Herein, alkyl iodide 28 was converted to the primary radical intermediate 29, which underwent a 5-exo-trig cyclization to afford reactive species 30. A subsequent 5-exo-dig radical cyclization led to intermediate 31, which upon quenching gave the target (±)-hirsutene (32) in 80% overall yield. Scheme 6. Cascade radical cyclization in the total synthesis of (±)-hirsutene A cascade radical process was also used in one of the total syntheses of (–)-morphine (Scheme 7). Aryl bromide 33 was converted to the corresponding radical species 34 by treatment with tri-n-butyltin hydride. 
A 5-exo-trig cyclization then occurred to give intermediate 35 stereoselectively by virtue of the stereochemistry of the ether linkage. In the next step of the cascade, the geometric constraints of 35 forbade the kinetically favored 5-exo-trig cyclization pathway; instead, secondary benzylic radical species 36 was obtained via a geometrically allowed 6-endo-trig cyclization. Subsequent elimination of the phenylsulfinyl radical afforded product 37 in 30% overall yield, which was further elaborated to (–)-morphine (38). Scheme 7. Cascade radical cyclization in the synthesis of (–)-morphine Pericyclic cascades Possibly the most widely encountered kind of process in cascade transformations, pericyclic reactions include cycloadditions, electrocyclic reactions and sigmatropic rearrangements. Although some of the aforementioned instances of nucleophilic/electrophilic and radical cascades involved pericyclic processes, this section contains only cascade sequences that are solely composed of pericyclic reactions or in which such a reaction arguably constitutes the key step. A representative example of a pericyclic cascade is the endiandric acid cascade reported by Nicolaou et al. in 1982 (Scheme 8). Herein the highly unsaturated system 39 was first hydrogenated to the conjugated tetraene species 40, which upon heating underwent an 8π-conrotatory electrocyclic ring closure, yielding cyclic intermediate 41. A second spontaneous electrocyclization, this time a 6π-disrotatory ring closure, converted 41 to the bicyclic species 42, the geometry and stereochemistry of which favored a subsequent intramolecular Diels-Alder reaction. The methyl ester of endiandric acid B (43) was thus obtained in 23% overall yield. Scheme 8. Pericyclic cascade in the synthesis of endiandric acid derivatives A pericyclic sequence involving intramolecular hetero-cycloaddition reactions was employed in the total synthesis of the naturally occurring alkaloid (–)-vindorosine (Scheme 9). 
Rapid access to the target was achieved from a solution of 1,3,4-oxadiazole 44 in triisopropylbenzene subjected to high temperatures and reduced pressure. First, an inverse-electron-demand hetero-Diels-Alder reaction occurred to give intermediate 45. Thermodynamically favorable loss of nitrogen generated the 1,3-dipole-containing species 46. A spontaneous intramolecular [3+2] cycloaddition of the 1,3-dipole and the indole system then formed the endo-product 47 in 78% overall yield. Further elaboration yielded the target natural product 48. Scheme 9. Pericyclic cascade in the total synthesis of (–)-vindorosine The total synthesis of (–)-colombiasin A reported in 2005 by the Harrowven group included an electrocyclic cascade (Scheme 10). When subjected to heat via microwave irradiation, squarate derivative 49 underwent an electrocyclic opening of the cyclobutene ring, followed by a 6π-electrocyclic ring closure that yielded bicyclic intermediate 51. Tautomerization thereof gave the aromatic species 52, which upon exposure to air was oxidized to product 53 in 80% overall yield. The target (–)-colombiasin A (54) was then obtained from 53 via a heat-facilitated Diels-Alder reaction followed by cleavage of the tert-butyl protecting group. Scheme 10. Electrocyclic cascade in the total synthesis of (–)-colombiasin A Certain [2.2]paracyclophanes can also be obtained via pericyclic cascades, as reported by the Hopf group in 1981 (Scheme 11). In this sequence, a Diels-Alder reaction between 1,2,4,5-hexatetraene 55 and dienophile 56 first formed the highly reactive intermediate 57, which subsequently dimerized to yield [2.2]paracyclophane 58. Scheme 11. Pericyclic sequence for the synthesis of [2.2]paracyclophanes Transition-metal-catalyzed cascades Transition-metal-catalyzed cascade sequences combine the novelty and power of organometallic chemistry with the synthetic utility and economy of cascade reactions, providing an even more ecologically and economically desirable approach to organic synthesis. 
For instance, rhodium catalysis was used to convert acyclic monoterpenes of the type 59 to 4H-chromene products in a hydroformylation cascade (Scheme 12). First, selective rhodium-catalyzed hydroformylation of the less sterically hindered olefin bond in 59 yielded unsaturated aldehyde 60, which under the same conditions was then converted to intermediate 61 via a carbonyl-ene reaction. A second rhodium-catalyzed hydroformylation to species 62 was followed by condensation to form 4H-chromene products of the type 63 in 40% overall yield. Scheme 12. Rhodium-catalyzed hydroformylation cascade for the preparation of 4H-chromenes Rhodium catalysis was also employed to initiate a cyclization/cycloaddition cascade in the synthesis of a tigliane reported by the Dauben group (Scheme 13). Treatment of the substrate with rhodium(II) acetate dimer generated a carbenoid that yielded reactive ylide 65 after an intramolecular cyclization with the neighboring carbonyl group. An intramolecular [3+2] cycloaddition then spontaneously occurred to afford the target tigliane 66. Scheme 13. Rhodium(II)-carbenoid-initiated cascade in the synthesis of a tigliane The formal intramolecular [4+2] cycloaddition of 1,6-enynes of the type 67 mediated by gold catalysis is another example of a transition-metal-catalyzed cascade (Scheme 14). A variety of 1,6-enynes reacted under mild conditions in the presence of Au(I) complexes 68a–b to yield the tricyclic products 69 in moderate to excellent yields. Scheme 14. Gold-catalyzed formal intramolecular [4+2] cycloaddition of 1,6-enynes This formal cycloaddition was proposed to proceed via the cascade process shown in Scheme 15. Complexation of the 1,6-enyne 67 with the cationic form of the catalyst yields intermediate 70, in which the activated triple bond is attacked by the olefin functionality to yield substituted cyclopropane 71. 
Electrophilic opening of the three-membered ring forms cationic species 72, which undergoes a Friedel–Crafts-type reaction and then rearomatizes to give tricyclic product 69. Due to the nature of the interaction of gold complexes with unsaturated systems, this process could also be considered an electrophilic cascade. Scheme 15. Proposed cascade process in the formal intramolecular [4+2] cycloaddition of 1,6-enynes An example of palladium-catalyzed cascades is represented by the asymmetric polyene Heck cyclization used in the preparation of (+)-xestoquinone from triflate substrate 75 (Scheme 16). Oxidative addition of the aryl–triflate bond into the palladium(0) complex in the presence of the chiral diphosphine ligand (S)-BINAP yields chiral palladium(II) complex 77. This step is followed by dissociation of the triflate anion, association of the neighboring olefin and 1,2-insertion of the naphthyl group into the olefin to yield intermediate 79. A second migratory insertion into the remaining olefin group followed by a β-elimination then occurs to afford product 81 in 82% overall yield and with moderate enantioselectivity. The palladium(0) catalyst is also regenerated in this step, thus allowing the cascade to be reinitiated. Scheme 16. Palladium-catalyzed Heck cascade in the enantioselective synthesis of (+)-xestoquinone Multistep tandem reactions Multistep tandem reactions (or cascade reactions) are sequences of chemical transformations (usually more than two steps) that occur consecutively to convert a starting material into a complex product. Reactions of this kind are designed to construct difficult structures encountered in natural product total synthesis. In the total synthesis of the spiroketal ionophore antibiotic routiennocin 1 (Fig. 1), the central spiroketal skeleton was constructed by a multistep tandem reaction (Fig. 2). 
Fragment A and fragment B were coupled in a single step to form the key intermediate G that could be further elaborated to afford the final product routiennocin. Four chemical transformations occurred in this tandem reaction. First, treating fragment A with n-butyllithium formed a carbanion that attacked the alkyl iodide portion of fragment B to generate intermediate C (step 1). A 3,4-dihydropyran derivative D was then formed through a base-mediated elimination reaction on intermediate C (step 2). The protecting group on the 1,3-diol moiety in intermediate D was removed by acid treatment to give the diol product E (step 3). Finally, the spiroketal product G was generated via an intramolecular ketal-formation reaction (step 4). This multistep tandem reaction greatly simplified the construction of this complex spiroketal structure and eased the path towards the total synthesis of routiennocin. References External links Chemical Knots at The Periodic Table of Videos (University of Nottingham) Organic chemistry Chemical synthesis
Cascade reaction
Chemistry
3,874
66,916,675
https://en.wikipedia.org/wiki/QTY%20Code
The QTY Code is a design method to transform membrane proteins that are intrinsically insoluble in water into variants with water solubility, while retaining their structure and function. Similar structures of amino acids The QTY Code is based on two key molecular structural facts: 1) all 20 natural amino acids are found in alpha-helices regardless of their chemical properties, although some amino acids have a higher propensity to form an alpha-helix; and 2) several amino acids share striking structural similarities despite their very different chemical properties. These may be paired as: Glutamine (Q) vs Leucine (L); Threonine (T) vs Valine (V) and Isoleucine (I); and Tyrosine (Y) vs Phenylalanine (F). The QTY Code systematically replaces water-insoluble amino acids (L, V, I and F) with water-soluble amino acids (Q, T and Y) in transmembrane alpha-helices. Thus, its application to membrane proteins changes the water-insoluble form of membrane proteins into water-soluble variants. The QTY Code was specifically conceived to render G protein-coupled receptors (GPCRs) into a water-soluble form. Despite substantial transmembrane domain changes, the QTY variants of GPCRs maintain stable structures and ligand-binding activities. Hydrogen bond interactions between water and the amino acids The side chain of glutamine (Q) can form 4 hydrogen bonds with 4 water molecules: 2 hydrogen donors from the nitrogen and 2 hydrogen acceptors on the oxygen. The –OH group of threonine (T) and tyrosine (Y) can form 3 hydrogen bonds with 3 water molecules (2 H-acceptors and 1 H-donor). Color code: green = carbon, red = oxygen, blue = nitrogen, gray = hydrogen, yellow disks = hydrogen bonds. Three types of alpha-helices with nearly identical molecular structure There are 3 types of alpha-helices, all sharing nearly identical molecular structure: a) a 1.5 Å rise per amino acid, b) a 100° turn per amino acid, c) 3.6 amino acids and 360° per helical turn, and d) 5.4 Å per helical turn. 
The 3 types of alpha-helices are: 1) those of mostly hydrophobic amino acids, including Leucine (L), Isoleucine (I), Valine (V), Phenylalanine (F), Methionine (M) and Alanine (A), that are commonly found as the helical transmembrane segments in membrane proteins; 2) those of mostly hydrophilic amino acids, including Aspartic acid (D), Glutamic acid (E), Glutamine (Q), Lysine (K), Arginine (R), Serine (S), Threonine (T) and Tyrosine (Y), that are commonly found on the outer layer of water-soluble globular proteins; 3) those of mixed hydrophobic and hydrophilic amino acids that are partitioned into 2 faces, a hydrophobic face and a hydrophilic face, by analogy like the front and back of our fingers. These alpha-helices sometimes attach to the surface of the membrane lipid bilayer, or are partially buried in the hydrophobic core and partially close to the surface of water-soluble globular proteins. The QTY code The QTY Code is likely universally applicable and also reversible, namely, Q changes to L, T changes to V and I, and Y changes to F. The QTY Code has been successful in designing many water-soluble variants of chemokine receptors and cytokine receptors. The QTY Code may likely be successfully applied to other water-insoluble aggregated proteins. The QTY Code is robust and straightforward: it is the simplest tool to carry out membrane protein design without sophisticated computer algorithms. Thus, it can be used broadly. The QTY Code has implications for designing additional GPCRs and other membrane proteins, including cytokine receptors that are directly involved in cytokine storm syndrome. The QTY Code has also been applied to cytokine receptor water-soluble variants with the aim of combating the cytokine storm syndrome (also called cytokine release syndrome) suffered by cancer patients receiving CAR-T therapy. This therapeutic application may be equally applicable to severely infected COVID-19 patients, for whom cytokine storms often lead to death. References Further reading Medicinal chemistry Biochemistry
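The substitution rule itself is simple enough to express in a few lines of code. The following is a minimal illustrative sketch (the helper name and example sequence are hypothetical, not from the QTY literature), applying the L→Q, V/I→T and F→Y replacements to a transmembrane helix sequence:

```python
# Minimal sketch of the QTY Code substitution (hypothetical helper, not an
# official implementation). The replacements apply only to transmembrane
# alpha-helical segments, so the input here is assumed to be such a segment.
QTY_MAP = {
    "L": "Q",  # Leucine       -> Glutamine
    "V": "T",  # Valine        -> Threonine
    "I": "T",  # Isoleucine    -> Threonine
    "F": "Y",  # Phenylalanine -> Tyrosine
}

def qty_variant(helix: str) -> str:
    """Return the water-soluble QTY variant of a transmembrane helix."""
    return "".join(QTY_MAP.get(res, res) for res in helix.upper())

print(qty_variant("LVIFA"))  # -> QTTYA
```

Note that the mapping is many-to-one (both V and I become T), so the reverse direction described in the article (Q→L, T→V and I, Y→F) is not uniquely invertible at the sequence level.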
QTY Code
Chemistry,Biology
956
34,130,293
https://en.wikipedia.org/wiki/Thompson%20sampling
Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief. Description Consider a set of contexts X, a set of actions A, and rewards in ℝ. The aim of the player is to play actions under the various contexts so as to maximize the cumulative rewards. Specifically, in each round, the player obtains a context x ∈ X, plays an action a ∈ A and receives a reward r ∈ ℝ following a distribution that depends on the context and the issued action. The elements of Thompson sampling are as follows: a likelihood function P(r | θ, a, x); a set Θ of parameters θ of the distribution of r; a prior distribution P(θ) on these parameters; past observation triplets D = {(x; a; r)}; a posterior distribution P(θ | D) ∝ P(D | θ)P(θ), where P(D | θ) is the likelihood function. Thompson sampling consists of playing the action a* ∈ A according to the probability that it maximizes the expected reward; action a* is chosen with probability ∫ 1[E(r | a*, x, θ) = max_{a′} E(r | a′, x, θ)] P(θ | D) dθ, where 1[·] is the indicator function. In practice, the rule is implemented by sampling. In each round, parameters θ* are sampled from the posterior P(θ | D), and an action a* is chosen that maximizes E(r | θ*, a*, x), i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques. History Thompson sampling was originally described by Thompson in 1933. It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. A first proof of convergence for the bandit case was shown in 1997. 
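For the common Bernoulli bandit special case, the posterior for each arm under a Beta(1, 1) prior remains a Beta distribution, and the sampling rule described above reduces to a few lines. A minimal illustrative sketch (the arm probabilities are made up for the example):

```python
import random

def thompson_bernoulli(true_probs, n_rounds=10000, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.

    successes/failures track each arm's posterior Beta(1 + s, 1 + f);
    each round, one value is sampled per arm and the argmax is played.
    """
    rng = random.Random(seed)
    n_arms = len(true_probs)
    successes = [0] * n_arms
    failures = [0] * n_arms
    for _ in range(n_rounds):
        # Draw one sample from each arm's posterior and act greedily on it.
        samples = [rng.betavariate(1 + successes[a], 1 + failures[a])
                   for a in range(n_arms)]
        arm = max(range(n_arms), key=lambda a: samples[a])
        # Observe the reward and update that arm's posterior counts.
        reward = 1 if rng.random() < true_probs[arm] else 0
        successes[arm] += reward
        failures[arm] += 1 - reward
    return successes, failures

# Exploration concentrates on the best arm: it receives most of the pulls.
s, f = thompson_bernoulli([0.2, 0.5, 0.8])
pulls = [s[a] + f[a] for a in range(3)]
```

As the posterior for an inferior arm sharpens, the probability of sampling a value that beats the best arm shrinks, so pulls of that arm become increasingly rare without ever stopping entirely.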
The first application to Markov decision processes was in 2000. A related approach (see Bayesian control rule) was published in 2010. In 2010 it was also shown that Thompson sampling is instantaneously self-correcting. Asymptotic convergence results for contextual bandits were published in 2011. Thompson Sampling has been widely used in many online learning problems including A/B testing in website design and online advertising, and accelerated learning in decentralized decision making. A Double Thompson Sampling (D-TS) algorithm has been proposed for dueling bandits, a variant of traditional MAB, where feedback comes in the form of pairwise comparison. Relationship to other approaches Probability matching Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances. Bayesian control rule A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent. The setup is as follows. 
Let a_1, a_2, …, a_t be the actions issued by an agent up to time t, and let o_1, o_2, …, o_t be the observations gathered by the agent up to time t. Then, the agent issues the action a_{t+1} with probability P(a_{t+1} | â_{1:t}, o_{1:t}), where the "hat"-notation â_t denotes the fact that a_t is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs θ ∈ Θ over its behaviors, then the Bayesian control rule becomes P(a_{t+1} | â_{1:t}, o_{1:t}) = ∫ P(a_{t+1} | θ, â_{1:t}, o_{1:t}) P(θ | â_{1:t}, o_{1:t}) dθ, where P(θ | â_{1:t}, o_{1:t}) is the posterior distribution over the parameter θ given actions a_{1:t} and observations o_{1:t}. In practice, the Bayesian control amounts to sampling, at each time step, a parameter θ* from the posterior distribution P(θ | â_{1:t}, o_{1:t}), where the posterior distribution is computed using Bayes' rule by only considering the (causal) likelihoods of the observations o_1, o_2, …, o_t and ignoring the (causal) likelihoods of the actions a_1, a_2, …, a_t, and then by sampling the action a*_{t+1} from the action distribution P(a_{t+1} | θ*, â_{1:t}, o_{1:t}). Upper-Confidence-Bound (UCB) algorithms Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling or unify regret analysis across both these algorithms and many classes of problems. References Artificial intelligence engineering Heuristic algorithms Sequential methods Sequential experiments
Thompson sampling
Engineering
1,007
214,053
https://en.wikipedia.org/wiki/Host%20%28biology%29
In biology and medicine, a host is a larger organism that harbours a smaller organism; whether a parasitic, a mutualistic, or a commensalist guest (symbiont). The guest is typically provided with nourishment and shelter. Examples include animals playing host to parasitic worms (e.g. nematodes), cells harbouring pathogenic (disease-causing) viruses, or a bean plant hosting mutualistic (helpful) nitrogen-fixing bacteria. More specifically in botany, a host plant supplies food resources to micropredators, which have an evolutionarily stable relationship with their hosts similar to ectoparasitism. The host range is the collection of hosts that an organism can use as a partner. Symbiosis Symbiosis spans a wide variety of possible relationships between organisms, differing in their permanence and their effects on the two parties. If one of the partners in an association is much larger than the other, it is generally known as the host. In parasitism, the parasite benefits at the host's expense. In commensalism, the two live together without harming each other, while in mutualism, both parties benefit. Most parasites are only parasitic for part of their life cycle. By comparing parasites with their closest free-living relatives, parasitism has been shown to have evolved on at least 233 separate occasions. Some organisms live in close association with a host and only become parasitic when environmental conditions deteriorate. A parasite may have a long-term relationship with its host, as is the case with all endoparasites. The guest seeks out the host and obtains food or another service from it, but does not usually kill it. In contrast, a parasitoid spends a large part of its life within or on a single host, ultimately causing the host's death, with some of the strategies involved verging on predation. Generally, the host is kept alive until the parasitoid is fully grown and ready to pass on to its next life stage. 
A guest's relationship with its host may be intermittent or temporary, perhaps associated with multiple hosts, making the relationship equivalent to the herbivory of a wild-living animal. Another possibility is that the host–guest relationship may have no permanent physical contact, as in the brood parasitism of the cuckoo. Hosts to parasites Parasites follow a wide variety of evolutionary strategies, placing their hosts in an equally wide range of relationships. Parasitism implies host–parasite coevolution, including the maintenance of gene polymorphisms in the host, where there is a trade-off between the advantage of resistance to a parasite and a cost such as disease caused by the gene. Types of hosts Definitive or primary host – an organism in which the parasite reaches the adult stage and reproduces sexually, if possible. This is the final host. Secondary or intermediate host – an organism that harbors the sexually immature parasite and is required by the parasite to undergo development and complete its life cycle. It often acts as a vector of the parasite to reach its definitive host. For example, Dirofilaria immitis, the heartworm of dogs, uses the mosquito as its intermediate host until it matures into the infective L3 larval stage. It is not always easy or even possible to identify which host is definitive and which secondary. The life cycles of many parasites are not well understood, and the subjectively or economically more important organism may initially be designated incorrectly as primary. Mislabelling may continue even after the error becomes known. For example, trout and salmon are sometimes said to be "primary hosts" for salmonid whirling disease, even though the myxosporean parasite reproduces sexually inside the sludge worm. 
And where the host harbors the different parasite's phases at different sites within its body, the host is both intermediate and definitive: for example trichinosis, a disease caused by roundworms, where the host has immature juveniles in its muscles and reproductive adults in its digestive tract. Paratenic or transport host – an organism that harbors the sexually immature parasite but is not necessary for the parasite's development cycle to progress. Paratenic hosts serve as "dumps" for non-mature stages of a parasite in which they can accumulate in high numbers. The trematode Alaria americana is an example: the so-called mesocercarial stages of this parasite reside in tadpoles, which are rarely eaten by the definitive canine host. The tadpoles (or the frogs, following metamorphosis) are more frequently preyed on by snakes, which then function as paratenic hosts: the mesocercariae do not undergo further development there, but may accumulate, and infect the definitive host once the snake is consumed by a canid. The nematode Skrjabingylus nasicola is another example, with slugs as the intermediate hosts, shrews and rodents as the paratenic hosts, and mustelids as the definitive hosts. Dead-end, incidental, or accidental host – an organism that generally does not allow transmission to the definitive host, thereby preventing the parasite from completing its development. For example, humans and horses are dead-end hosts for West Nile virus, whose life cycle is normally between culicine mosquitoes and birds. People and horses can become infected, but the level of virus in their blood does not become high enough to pass on the infection to mosquitoes that bite them. Reservoir host – an organism that harbors a pathogen but suffers no ill effects. However, it serves as a source of infection to other species that are susceptible, with important implications for disease control. A reservoir host individual may be reinfected several times. 
Plant hosts of micropredators Micropredation is an evolutionarily stable strategy within parasitism, in which a small predator lives parasitically on a much larger host plant, eating parts of it. The range of plants on which a herbivorous insect feeds is known as its host range. This can be wide or narrow, but it never includes all plants. A small number of insects are monophagous, feeding on a single plant. The silkworm larva is one of these, with mulberry leaves being the only food consumed. More often, an insect with a limited host range is oligophagous, being restricted to a few closely related species, usually in the same plant family. The diamondback moth is an example of this, feeding exclusively on brassicas, and the larva of the potato tuber moth feeds on potatoes, tomatoes and tobacco, all members of the same plant family, Solanaceae. Herbivorous insects with a wide range of hosts in various different plant families are known as polyphagous. One example is the buff ermine moth, whose larvae feed on alder, mint, plantain, oak, rhubarb, currant, blackberry, dock, ragwort, nettle and honeysuckle. Plants often produce toxic or unpalatable secondary metabolites to deter herbivores from feeding on them. Monophagous insects have developed specific adaptations to overcome those of their specialist hosts, giving them an advantage over polyphagous species. However, this puts them at greater risk of extinction if their chosen hosts suffer setbacks. Monophagous species are able to feed on the tender young foliage with high concentrations of damaging chemicals, on which polyphagous species cannot feed, having to make do with older leaves. There is a trade-off between offspring quality and quantity; the specialist maximises the chances of its young thriving by paying great attention to the choice of host, while the generalist produces larger numbers of eggs in sub-optimal conditions. Some insect micropredators migrate regularly from one host to another. 
The hawthorn-carrot aphid overwinters on its primary host, a hawthorn tree, and migrates during the summer to its secondary host, a plant in the carrot family. Host range The host range is the set of hosts that a parasite can use as a partner. In the case of human parasites, the host range influences the epidemiology of the parasitism or disease. Host range of viruses For instance, the production of antigenic shifts in Influenza A virus can result from pigs being infected with the virus from several different hosts (such as human and bird). This co-infection provides an opportunity for mixing of the viral genes between existing strains, thereby producing a new viral strain. An influenza vaccine produced against an existing viral strain might not be effective against this new strain, which then requires a new influenza vaccine to be prepared for the protection of the human population. Non-parasitic associations Mutualistic hosts Some hosts participate in fully mutualistic interactions with both organisms being completely dependent on the other. For example, termites are hosts to the protozoa that live in their gut and which digest cellulose, and the human gut flora is essential for efficient digestion. Many corals and other marine invertebrates house zooxanthellae, single-celled algae, in their tissues. The host provides a protected environment in a well-lit position for the algae, while benefiting itself from the nutrients produced by photosynthesis which supplement its diet. Lamellibrachia luymesi, a deep sea giant tubeworm, has an obligate mutualistic association with internal, sulfide-oxidizing, bacterial symbionts. The tubeworm extracts the chemicals that the bacteria need from the sediment, and the bacteria supply the tubeworm, which has no mouth, with nutrients. Some hermit crabs place pieces of sponge on the shell in which they are living. 
These grow over and eventually dissolve away the mollusc shell; the crab may never need to replace its abode again and is well camouflaged by the overgrowth of sponge.

An important hosting relationship is mycorrhiza, a symbiotic association between a fungus and the roots of a vascular host plant. The fungus receives carbohydrates, the products of photosynthesis, while the plant receives phosphates and nitrogenous compounds acquired by the fungus from the soil. Over 95% of plant families have been shown to have mycorrhizal associations. Another such relationship is between leguminous plants and certain nitrogen-fixing bacteria called rhizobia that form nodules on the roots of the plant. The host supplies the bacteria with the energy needed for nitrogen fixation, and the bacteria provide much of the nitrogen needed by the host. Crops such as beans, peas, chickpeas and alfalfa are able to fix nitrogen in this way, and mixing clover with grasses increases the yield of pastures. The neurotransmitter tyramine, produced by commensal Providencia bacteria that colonize the gut of the nematode Caenorhabditis elegans, bypasses the requirement for the host to biosynthesise tyramine. This product is then probably converted to octopamine by the host enzyme tyramine β-hydroxylase and manipulates a host sensory decision.

Hosts in cleaning symbiosis

Hosts of many species are involved in cleaning symbiosis, both in the sea and on land, making use of smaller animals to clean them of parasites. Cleaners include fish, shrimps and birds; hosts or clients include a much wider range of fish, marine reptiles including turtles and iguanas, octopuses, whales, and terrestrial mammals. The host appears to benefit from the interaction, but biologists have disputed whether this is a truly mutualistic relationship or something closer to parasitism by the cleaner.
Commensal hosts

Remoras (also called suckerfish) can swim freely but have evolved suckers that enable them to adhere to smooth surfaces, gaining a free ride (phoresis), and they spend most of their lives clinging to a host animal such as a whale, turtle or shark. However, the relationship may be mutualistic, as remoras, though not generally considered to be cleaner fish, often consume parasitic copepods: for example, these are found in the stomach contents of 70% of the common remora. Many molluscs, barnacles and polychaete worms attach themselves to the carapace of the Atlantic horseshoe crab; for some this is a convenient arrangement, but for others it is an obligate form of commensalism and they live nowhere else.

History

The first host to be noticed in ancient times was the human: human parasites such as hookworm are recorded from ancient Egypt from 3000 BC onwards, while in ancient Greece the Hippocratic Corpus describes the human bladder worm. The medieval Persian physician Avicenna recorded human and animal parasites including roundworms, threadworms, the Guinea worm and tapeworms. In Early Modern times, Francesco Redi recorded animal parasites, while the microscopist Antonie van Leeuwenhoek observed and illustrated the protozoan Giardia lamblia from "his own loose stools". Hosts to mutualistic symbionts were recognised more recently, when in 1877 Albert Bernhard Frank described the mutualistic relationship between a fungus and an alga in lichens.

See also
PHI-base (Pathogen-Host Interaction database)
Generalist and specialist species
Host cell protein

References

Biological interactions Parasitology Disease ecology
Host (biology)
Biology
2,742
2,214,041
https://en.wikipedia.org/wiki/Sector%20mass%20spectrometer
A sector instrument is a general term for a class of mass spectrometer that uses a static electric (E) or magnetic (B) sector, or some combination of the two (separated in space), as a mass analyzer. Popular combinations of these sectors have been the EB, BE (so-called reverse geometry), three-sector BEB and four-sector EBEB (electric-magnetic-electric-magnetic) instruments. Most modern sector instruments are double-focusing instruments (first developed by Francis William Aston, Arthur Jeffrey Dempster, Kenneth Bainbridge and Josef Mattauch in 1936) in that they focus the ion beams both in direction and velocity.

Theory

The behavior of ions in a homogeneous, linear, static electric or magnetic field (separately), as found in a sector instrument, is simple. The physics is described by a single equation called the Lorentz force law:

F = q(E + v × B)

This is the fundamental equation of all mass spectrometric techniques; it applies in non-linear, non-homogeneous cases too, and is an important equation in the field of electrodynamics in general. Here E is the electric field strength, B is the magnetic field induction, q is the charge of the particle, v is its current velocity (expressed as a vector), and × is the cross product.

The force on an ion in a linear homogeneous electric field (an electric sector) is therefore

F = qE,

in the direction of the electric field for positive ions and opposite to it for negative ions. The force depends only on the charge and the electric field strength. For a given charge and velocity, lighter ions are deflected more and heavier ions less, due to the difference in inertia, and the ions physically separate from each other in space into distinct beams as they exit the electric sector.

The force on an ion in a linear homogeneous magnetic field (a magnetic sector) is

F = qv × B,

perpendicular to both the magnetic field and the velocity vector of the ion itself, in the direction determined by the right-hand rule of cross products and the sign of the charge.
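As a numerical check on the force law above, here is a minimal sketch in Python. The field and velocity values are illustrative assumptions, not taken from any particular instrument:

```python
# Lorentz force F = q(E + v x B), with vectors as 3-tuples (SI units).

def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def lorentz_force(q, E, v, B):
    """Force on a charge q moving at velocity v in fields E and B."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

q = 1.602e-19              # charge of a singly charged positive ion (C)
E = (1.0e4, 0.0, 0.0)      # electric field along x (V/m)
B = (0.0, 0.0, 0.5)        # magnetic field along z (T)
v = (2.0e5, 0.0, 0.0)      # ion velocity along x (m/s)

F = lorentz_force(q, E, v, B)
# The electric term acts along +x; the magnetic term (v x B) acts along -y,
# perpendicular to both v and B, as the right-hand rule predicts.
```

The separate electric-sector and magnetic-sector forces quoted in the text are just this expression with B or E set to zero, respectively.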
The force in the magnetic sector is complicated by the velocity dependence, but with the right conditions (uniform velocity, for example) ions of different masses will separate physically in space into different beams, as with the electric sector.

Classic geometries

These are some of the classic geometries from mass spectrographs, often used to distinguish different types of sector arrangements, although most current instruments do not fit precisely into any of these categories as the designs have evolved further.

Bainbridge–Jordan

This sector geometry consists of a 127.30° electric sector without an initial drift length, followed by a 60° magnetic sector with the same direction of curvature. Sometimes called a "Bainbridge mass spectrometer", this configuration is often used to determine isotopic masses. A beam of positive particles is produced from the isotope under study. The beam is subject to the combined action of perpendicular electric and magnetic fields. Since the forces due to these two fields are equal and opposite when the particles have a velocity given by

v = E/B,

they do not experience a resultant force; they pass freely through a slit and are then subject to another magnetic field, traversing a semicircular path and striking a photographic plate. The mass of the isotope is determined through subsequent calculation.

Mattauch–Herzog

The Mattauch–Herzog geometry consists of a 31.82° (π/(4√2) radians) electric sector and a drift length, followed by a 90° magnetic sector of opposite curvature direction. The entry of the ions, sorted primarily by charge, into the magnetic field produces an energy-focusing effect and much higher transmission than a standard energy filter. This geometry is often used in applications with a high energy spread in the ions produced where sensitivity is nonetheless required, such as spark source mass spectrometry (SSMS) and secondary ion mass spectrometry (SIMS).
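The Bainbridge velocity-selector arithmetic above can be sketched numerically: ions pass the crossed fields only at v = E/B, and the semicircular path of radius r in a second magnetic field B′ then gives m = qB′r/v (from qvB′ = mv²/r). The field strengths and radius below are illustrative assumptions, not data from a real instrument:

```python
# Mass determination in a Bainbridge-type instrument (illustrative values).
E      = 2.0e5        # electric field in the velocity selector (V/m)
B      = 0.50         # magnetic field in the velocity selector (T)
B2     = 0.50         # analyzing magnetic field after the slit (T)
q      = 1.602e-19    # charge of a singly ionized atom (C)
r      = 0.25         # radius of the semicircular path to the plate (m)
AMU_KG = 1.66054e-27  # unified atomic mass unit in kg

v = E / B             # selected speed: electric and magnetic forces balance
m = q * B2 * r / v    # from q*v*B2 = m*v**2/r (circular motion in the field)
m_u = m / AMU_KG      # mass in unified atomic mass units (about 30 u here)
```

In a real measurement, r is read off the photographic plate and everything else is a known instrument setting, so the isotopic mass follows directly from this calculation.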
The advantage of this geometry over the Nier–Johnson geometry is that the ions of different masses are all focused onto the same flat plane. This allows the use of a photographic plate or other flat detector array.

Nier–Johnson

The Nier–Johnson geometry consists of a 90° electric sector, a long intermediate drift length and a 60° magnetic sector of the same curvature direction.

Hinterberger–Konig

The Hinterberger–Konig geometry consists of a 42.43° electric sector, a long intermediate drift length and a 130° magnetic sector of the same curvature direction.

Takeshita

The Takeshita geometry consists of a 54.43° electric sector, a short drift length, a second electric sector of the same curvature direction, followed by another drift length before a 180° magnetic sector of opposite curvature direction.

Matsuda

The Matsuda geometry consists of an 85° electric sector, a quadrupole lens and a 72.5° magnetic sector of the same curvature direction. This geometry is used in the SHRIMP and in Panorama (a gas-source, high-resolution, multicollector instrument for measuring isotopologues in geochemistry).

See also
Mass-analyzed ion kinetic energy spectrometry
Charge remote fragmentation
Kenneth Bainbridge
Alfred O. C. Nier

References

Further reading
Thomson, J. J.: Rays of Positive Electricity and their Application to Chemical Analyses; Longmans Green: London, 1913

Mass spectrometry Measuring instruments
Sector mass spectrometer
Physics,Chemistry,Technology,Engineering
1,081
1,799,688
https://en.wikipedia.org/wiki/Immunoprecipitation
Immunoprecipitation (IP) is the technique of precipitating a protein antigen out of solution using an antibody that specifically binds to that particular protein. This process can be used to isolate and concentrate a particular protein from a sample containing many thousands of different proteins. Immunoprecipitation requires that the antibody be coupled to a solid substrate at some point in the procedure.

Types

Individual protein immunoprecipitation (IP)

Involves using an antibody that is specific for a known protein to isolate that particular protein out of a solution containing many different proteins. These solutions will often be in the form of a crude lysate of a plant or animal tissue. Other sample types could be body fluids or other samples of biological origin.

Protein complex immunoprecipitation (Co-IP)

Immunoprecipitation of intact protein complexes (i.e. the antigen along with any proteins or ligands that are bound to it) is known as co-immunoprecipitation (Co-IP). Co-IP works by selecting an antibody that targets a known protein believed to be a member of a larger complex of proteins. By targeting this known member with an antibody, it may become possible to pull the entire protein complex out of solution and thereby identify unknown members of the complex. This works when the proteins involved in the complex bind to each other tightly, making it possible to pull multiple members of the complex out of the solution by latching onto one member with an antibody. This concept of pulling protein complexes out of solution is sometimes referred to as a "pull-down". Co-IP is a powerful technique that is used regularly by molecular biologists to analyze protein–protein interactions. A particular antibody often selects for a subpopulation of its target protein that has the epitope exposed, thus failing to identify any proteins in complexes that hide the epitope.
This can be seen in that it is rarely possible to precipitate even half of a given protein from a sample with a single antibody, even when a large excess of antibody is used. As successive rounds of targeting and immunoprecipitation take place, the number of identified proteins may continue to grow. The identified proteins may never exist in a single complex at a given time, but may instead represent a network of proteins interacting with one another at different times for different purposes.

Repeating the experiment by targeting different members of the protein complex allows the researcher to double-check the result. Each round of pull-downs should result in the recovery of both the original known protein and the other previously identified members of the complex (and possibly new additional members). By repeating the immunoprecipitation in this way, the researcher verifies that each identified member of the protein complex was a valid identification. If a particular protein can only be recovered by targeting one of the known members, but not by targeting others of the known members, then that protein's status as a member of the complex may be subject to question.

Chromatin immunoprecipitation (ChIP)

Chromatin immunoprecipitation (ChIP) is a method used to determine the location of DNA binding sites on the genome for a particular protein of interest. This technique gives a picture of the protein–DNA interactions that occur inside the nucleus of living cells or tissues. The in vivo nature of this method is in contrast to other approaches traditionally employed to answer the same questions. The principle underpinning this assay is that DNA-binding proteins (including transcription factors and histones) in living cells can be cross-linked to the DNA that they are binding. By using an antibody that is specific to a putative DNA-binding protein, one can immunoprecipitate the protein–DNA complex out of cellular lysates.
The crosslinking is often accomplished by applying formaldehyde to the cells (or tissue), although it is sometimes advantageous to use a more defined and consistent crosslinker such as dimethyl 3,3′-dithiobispropionimidate·2 HCl (DTBP). Following crosslinking, the cells are lysed and the DNA is broken into pieces 0.2–1.0 kb in length by sonication. At this point the immunoprecipitation is performed, resulting in the purification of protein–DNA complexes. The purified protein–DNA complexes are then heated to reverse the formaldehyde cross-linking, allowing the DNA to be separated from the proteins. The identity and quantity of the DNA fragments isolated can then be determined by polymerase chain reaction (PCR). The limitation of performing PCR on the isolated fragments is that one must have an idea which genomic region is being targeted in order to generate the correct PCR primers. Sometimes this limitation is circumvented simply by cloning the isolated genomic DNA into a plasmid vector and then using primers that are specific to the cloning region of that vector. Alternatively, when one wants to find where the protein binds on a genome-wide scale, ChIP-sequencing is used; it has recently emerged as a standard technology that can localize protein binding sites in a high-throughput, cost-effective fashion, allowing also for the characterization of the cistrome. Previously, DNA microarray was also used (ChIP-on-chip or ChIP-chip).

RNP immunoprecipitation (RIP and CLIP)

RIP and CLIP both purify a specific RNA-binding protein in order to identify bound RNAs, thereby studying ribonucleoproteins (RNPs). In RIP, the co-purified RNAs are extracted and their enrichment is compared to a control, which was originally done by microarray or RT-PCR.
In CLIP, cells are UV-crosslinked prior to lysis, followed by additional purification steps beyond standard immunoprecipitation, including partial RNA fragmentation, high-salt washing, SDS-PAGE separation and membrane transfer, and identification of direct RNA binding sites by cDNA sequencing.

Tagged proteins

One of the major technical hurdles with immunoprecipitation is the great difficulty in generating an antibody that specifically targets a single known protein. To get around this obstacle, many groups will engineer tags onto either the C- or N-terminal end of the protein of interest. The advantage here is that the same tag can be used time and again on many different proteins, and the researcher can use the same antibody each time. The advantages of using tagged proteins are so great that this technique has become commonplace for all of the types of immunoprecipitation detailed above. Examples of tags in use are the green fluorescent protein (GFP) tag, the glutathione-S-transferase (GST) tag and the FLAG-tag. While the use of a tag to enable pull-downs is convenient, it raises some concerns regarding biological relevance, because the tag itself may either obscure native interactions or introduce new and unnatural interactions.

Methods

The two general methods for immunoprecipitation are the direct capture method and the indirect capture method.

Direct

Antibodies that are specific for a particular protein (or group of proteins) are immobilized on a solid-phase substrate such as superparamagnetic microbeads or microscopic agarose (non-magnetic) beads. The beads with bound antibodies are then added to the protein mixture, and the proteins that are targeted by the antibodies are captured onto the beads via the antibodies; in other words, they become immunoprecipitated.

Indirect

Antibodies that are specific for a particular protein, or a group of proteins, are added directly to the mixture of protein.
The antibodies have not yet been attached to a solid-phase support. The antibodies are free to float around the protein mixture and bind their targets. As time passes, beads coated in Protein A/G are added to the mixture of antibody and protein. At this point, the antibodies, which are now bound to their targets, will stick to the beads. From this point on, the direct and indirect protocols converge, because the samples now have the same ingredients. Both methods give the same end result, with the protein or protein complexes bound to the antibodies, which are themselves immobilized onto the beads.

Selection

An indirect approach is sometimes preferred when the concentration of the protein target is low or when the specific affinity of the antibody for the protein is weak. The indirect method is also used when the binding kinetics of the antibody to the protein are slow for a variety of reasons. In most situations, the direct method is the default, and the preferred, choice.

Technological advances

Agarose

Historically, the solid-phase support for immunoprecipitation used by the majority of scientists has been highly porous agarose beads (also known as agarose resins or slurries). The advantages of this technology are a very high potential binding capacity, as virtually the entire sponge-like structure of the agarose particle (50 to 150 μm in size) is available for binding antibodies (which will in turn bind the target proteins), and the use of standard laboratory equipment for all aspects of the IP protocol, without the need for any specialized equipment. The advantage of an extremely high binding capacity must be carefully balanced against the quantity of antibody that the researcher is prepared to use to coat the agarose beads.
Because antibodies can be a cost-limiting factor, it is best to calculate backward from the amount of protein that needs to be captured (depending upon the analysis to be performed downstream), to the amount of antibody that is required to bind that quantity of protein (with a small excess added to account for inefficiencies of the system), and back still further to the quantity of agarose that is needed to bind that particular quantity of antibody. In cases where antibody saturation is not required, this technology is unmatched in its ability to capture extremely large quantities of target protein. The caveat is that the "high capacity advantage" can become a "high capacity disadvantage" when the enormous binding capacity of the sepharose/agarose beads is not completely saturated with antibodies. It often happens that the amount of antibody available to the researcher for an immunoprecipitation experiment is less than sufficient to saturate the agarose beads to be used. In these cases the researcher can end up with agarose particles that are only partially coated with antibodies, and the portion of the binding capacity of the agarose beads that is not coated with antibody is then free to bind anything that will stick. This results in an elevated background signal due to non-specific binding of lysate components to the beads, which can make data interpretation difficult. While some may argue that for these reasons it is prudent to match the quantity of agarose (in terms of binding capacity) to the quantity of antibody that one wishes to bind for the immunoprecipitation, a simple way to reduce the issue of non-specific binding to agarose beads and to increase specificity is to preclear the lysate, which is highly recommended for any immunoprecipitation.
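The backward calculation described above is simple arithmetic and can be sketched as follows. All figures here are illustrative assumptions; real capture ratios and bead binding capacities vary by antibody and product, so consult the relevant datasheets:

```python
# Back-calculate antibody and agarose-bead quantities for an IP (illustrative).
target_protein_ug   = 10.0  # target protein needed for downstream analysis (ug)
capture_ratio_ug_ug = 0.5   # assumed ug of target captured per ug of antibody
excess_factor       = 1.2   # small excess to cover system inefficiencies
bead_capacity_ug_ul = 10.0  # assumed ug of antibody bound per ul of bead slurry

antibody_ug = target_protein_ug / capture_ratio_ug_ug * excess_factor
bead_ul = antibody_ug / bead_capacity_ug_ul

# With these assumptions: 24 ug of antibody on 2.4 ul of bead slurry.
# If far more bead volume than this is used, the excess binding capacity
# remains uncoated and is free to bind lysate components non-specifically,
# raising background -- which is why preclearing is recommended.
```

The point of the sketch is the direction of the calculation: start from the protein needed downstream and work back to antibody, then to bead volume, rather than starting from a fixed bead volume.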
Preclearing

Lysates are complex mixtures of proteins, lipids, carbohydrates and nucleic acids, and one must assume that some amount of non-specific binding to the IP antibody, Protein A/G or the beaded support will occur and negatively affect the detection of the immunoprecipitated target(s). In most cases, preclearing the lysate at the start of each immunoprecipitation experiment (see step 2 in the "protocol" section below) is a way to remove potentially reactive components from the cell lysate prior to the immunoprecipitation, to prevent the non-specific binding of these components to the IP beads or antibody. The basic preclearing procedure is described below, wherein the lysate is incubated with beads alone, which are then removed and discarded prior to the immunoprecipitation. This approach, though, does not account for non-specific binding to the IP antibody, which can be considerable. Therefore, an alternative method of preclearing is to incubate the protein mixture with exactly the same components that will be used in the immunoprecipitation, except that a non-target, irrelevant antibody of the same antibody subclass as the IP antibody is used instead of the IP antibody itself. This approach attempts to use conditions and components as close as possible to those of the actual immunoprecipitation, removing any non-specific cell constituent without capturing the target protein (unless, of course, the target protein non-specifically binds to some other IP component, which should be properly controlled for by analyzing the discarded beads used to preclear the lysate). The target protein can then be immunoprecipitated with a reduced risk of non-specific binding interfering with data interpretation.

Superparamagnetic beads

While the vast majority of immunoprecipitations are performed with agarose beads, the use of superparamagnetic beads for immunoprecipitation is a newer approach that is gaining in popularity as an alternative to agarose beads for IP applications.
Unlike agarose, magnetic beads are solid and can be spherical, depending on the type of bead, and antibody binding is limited to the surface of each bead. While these beads do not have the advantage of a porous center to increase the binding capacity, magnetic beads are significantly smaller than agarose beads (1 to 4 μm), and the greater number of magnetic beads per volume collectively gives magnetic beads an effective surface-area-to-volume ratio for optimum antibody binding. Commercially available magnetic beads can be classified by size uniformity into monodisperse and polydisperse beads. Monodisperse beads, also called microbeads, exhibit exact uniformity, and therefore all beads exhibit identical physical characteristics, including the binding capacity and the level of attraction to magnets. Polydisperse beads, while similar in size to monodisperse beads, show a wide range of size variability (1 to 4 μm) that can influence their binding capacity and magnetic capture. Although both types of beads are commercially available for immunoprecipitation applications, the higher-quality monodisperse superparamagnetic beads are more suitable for automated protocols because of their consistent size, shape and performance. Monodisperse and polydisperse superparamagnetic beads are offered by many companies, including Invitrogen, Thermo Scientific and Millipore.

Agarose vs. magnetic beads

Proponents of magnetic beads claim that the beads exhibit a faster rate of protein binding than agarose beads for immunoprecipitation applications, although standard agarose-bead-based immunoprecipitations have been performed in 1 hour. Claims have also been made that magnetic beads are better for immunoprecipitating extremely large protein complexes because of the complete lack of an upper size limit for such complexes, although there is no unbiased evidence supporting this claim.
The nature of magnetic bead technology does result in less sample handling, due to the reduced physical stress of magnetic separation versus repeated centrifugation when using agarose, which may contribute greatly to increasing the yield of labile (fragile) protein complexes. Additional factors, though, such as the binding capacity, the cost of the reagent, the requirement for extra equipment and the capability to automate IP processes, should be considered in the selection of an immunoprecipitation support.

Binding capacity

Proponents of both agarose and magnetic beads can argue over whether the vast difference in the binding capacities of the two bead types favors one particular type of bead. In a bead-to-bead comparison, agarose beads have significantly greater surface area, and therefore a greater binding capacity, than magnetic beads, due to the large bead size and sponge-like structure. But the variable pore size of the agarose causes a potential upper size limit that may affect the binding of extremely large proteins or protein complexes to internal binding sites, and therefore magnetic beads may be better suited for immunoprecipitating large proteins or protein complexes than agarose beads, although there is a lack of independent comparative evidence proving either case. Some argue that the significantly greater binding capacity of agarose beads may be a disadvantage because of the larger capacity for non-specific binding. Others may argue for the use of magnetic beads because of the greater quantity of antibody required to saturate the total binding capacity of agarose beads, which would obviously be an economical disadvantage of using agarose. While these arguments are correct outside the context of their practical use, these lines of reasoning ignore two key aspects of the principle of immunoprecipitation that demonstrate that the decision to use agarose or magnetic beads is not simply determined by binding capacity.
First, non-specific binding is not limited to the antibody-binding sites on the immobilized support; any surface of the antibody or any component of the immunoprecipitation reaction can bind to non-specific lysate constituents, and therefore non-specific binding will still occur even when completely saturated beads are used. This is why it is important to preclear the sample before the immunoprecipitation is performed. Second, the ability to capture the target protein is directly dependent upon the amount of immobilized antibody used; therefore, in a side-by-side comparison of agarose and magnetic bead immunoprecipitation, the most protein that either support can capture is limited by the amount of antibody added. So the decision to saturate any type of support depends on the amount of protein required, as described above in the Agarose section.

Cost

The price of using either type of support is a key determining factor in choosing agarose or magnetic beads for immunoprecipitation applications. A typical first-glance calculation of the cost of magnetic beads compared to sepharose beads may make the sepharose beads appear less expensive. But magnetic beads may be competitively priced compared to agarose for analytical-scale immunoprecipitations, depending on the IP method used and the volume of beads required per IP reaction. Using the traditional batch method of immunoprecipitation, as listed below, where all components are added to a tube during the IP reaction, the physical handling characteristics of agarose beads necessitate a minimum quantity of beads for each IP experiment (typically in the range of 25 to 50 μl of beads per IP). This is because sepharose beads must be concentrated at the bottom of the tube by centrifugation and the supernatant removed after each incubation, wash, etc.
This imposes absolute physical limitations on the process, as pellets of agarose beads smaller than 25 to 50 μl are difficult if not impossible to identify visually at the bottom of the tube. With magnetic beads, there is no minimum quantity of beads required, due to magnetic handling, and therefore, depending on the target antigen and IP antibody, it is possible to use considerably fewer magnetic beads. Conversely, spin columns may be employed instead of normal microfuge tubes to significantly reduce the amount of agarose beads required per reaction. Spin columns contain a filter that allows all IP components except the beads to flow through under a brief centrifugation, and therefore provide a method to use significantly less agarose with minimal loss.

Equipment

As mentioned above, only standard laboratory equipment is required for the use of agarose beads in immunoprecipitation applications, while high-power magnets are required for magnetic-bead-based IP reactions. While the magnetic capture equipment may be cost-prohibitive, the rapid completion of immunoprecipitations using magnetic beads may be a financially beneficial approach when grants are due, because a 30-minute protocol with magnetic beads, compared to overnight incubation at 4 °C with agarose beads, may result in more data generated in a shorter length of time.

Automation

An added benefit of using magnetic beads is that automated immunoprecipitation devices are becoming more readily available. These devices not only reduce the amount of work and time needed to perform an IP, but they can also be used for high-throughput applications.

Summary

While clear benefits of using magnetic beads include the increased reaction speed, gentler sample handling and the potential for automation, the choice between agarose and magnetic beads, based on the binding capacity of the support medium and the cost of the product, may depend on the protein of interest and the IP method used.
As with all assays, empirical testing is required to determine which method is optimal for a given application.

Protocol

Background

Once the solid-substrate bead technology has been chosen, antibodies are coupled to the beads, and the antibody-coated beads can be added to the heterogeneous protein sample (e.g. homogenized tissue). At this point, antibodies that are immobilized on the beads will bind to the proteins that they specifically recognize. Once this has occurred, the immunoprecipitation portion of the protocol is actually complete, as the specific proteins of interest are bound to the antibodies that are themselves immobilized on the beads.

Separation of the immunocomplexes from the lysate is an extremely important series of steps, because the protein(s) must remain bound to each other (in the case of co-IP) and bound to the antibody during the wash steps that remove non-bound proteins and reduce background. When working with agarose beads, the beads must be pelleted out of the sample by briefly spinning in a centrifuge with forces between 600 and 3,000 × g (times the standard gravitational force). This step may be performed in a standard microcentrifuge tube, but for faster separation, greater consistency and higher recoveries, the process is often performed in small spin columns with a pore size that allows liquid, but not agarose beads, to pass through. After centrifugation, the agarose beads form a very loose, fluffy pellet at the bottom of the tube. The supernatant containing contaminants can be carefully removed so as not to disturb the beads. The wash buffer can then be added to the beads, and after mixing, the beads are again separated by centrifugation.

With superparamagnetic beads, the sample is placed in a magnetic field so that the beads can collect on the side of the tube. This procedure is generally complete in approximately 30 seconds, and the remaining (unwanted) liquid is pipetted away.
Washes are accomplished by resuspending the beads (off the magnet) in the washing solution and then concentrating the beads back on the tube wall (by placing the tube back on the magnet). The washing is generally repeated several times to ensure adequate removal of contaminants. If the superparamagnetic beads are homogeneous in size and the magnet has been designed properly, the beads will concentrate uniformly on the side of the tube and the washing solution can be easily and completely removed.

After washing, the precipitated protein(s) are eluted and analyzed by gel electrophoresis, mass spectrometry, western blotting, or any number of other methods for identifying constituents in the complex. Protocol times for immunoprecipitation vary greatly due to a variety of factors, with protocol times increasing with the number of washes necessary or with the slower reaction kinetics of porous agarose beads.

Steps
1. Lyse cells and prepare the sample for immunoprecipitation.
2. Pre-clear the sample by passing it over beads alone, or beads bound to an irrelevant antibody, to soak up any proteins that non-specifically bind to the IP components.
3. Incubate the solution with the antibody against the protein of interest. The antibody can be attached to the solid support before this step (direct method) or after this step (indirect method). Continue the incubation to allow antibody–antigen complexes to form.
4. Precipitate the complex of interest, removing it from the bulk solution.
5. Wash the precipitated complex several times. Spin between washes when using agarose beads, or place the tube on a magnet when using superparamagnetic beads, and then remove the supernatant. After the final wash, remove as much supernatant as possible.
6. Elute the proteins from the solid support using low-pH or SDS sample loading buffer.
7. Analyze the complexes or antigens of interest. This can be done in a variety of ways:
SDS-PAGE (sodium dodecyl sulfate-polyacrylamide gel electrophoresis) followed by gel staining.
SDS-PAGE followed by: gel staining, cutting out individual stained protein bands, and sequencing the proteins in the bands by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry. Transfer and Western blot using another antibody for proteins that were interacting with the antigen, followed by detection using a chemiluminescent or fluorescent secondary antibody. References External links Analysis of Proteins Using Immunoprecipitation at ufl.edu Introduction to Immunoprecipitation Methodology Co-Immunoprecipitation (Co-IP) Technical Biochemical separation processes Protein methods Molecular biology techniques Protein–protein interaction assays Immunologic tests
https://en.wikipedia.org/wiki/Statistical%20Accounts%20of%20Scotland
The Statistical Accounts of Scotland are a series of documentary publications, related in subject matter though published at different times, covering life in Scotland in the 18th, 19th and 20th centuries. The Old (or First) Statistical Account of Scotland was published between 1791 and 1799 by Sir John Sinclair of Ulbster. The New (or Second) Statistical Account of Scotland was published under the auspices of the General Assembly of the Church of Scotland between 1834 and 1845. These first two Statistical Accounts of Scotland are among the finest European contemporary records of life during the agricultural and industrial revolutions. A Third Statistical Account of Scotland was published between 1951 and 1992. Early attempts Attempts at getting an accurate picture of the geography, people and economy of Scotland had been made in the 1620s and 1630s, using the network of about 900 ministers of the established Church of Scotland. The time and resources involved, not to mention the troubled times of the Civil Wars, led to limited results. Sir Robert Sibbald (1684–1690s) However, the Geographer Royal for Scotland, Sir Robert Sibbald, took this forward between 1684 and the early 1690s. Sir Robert circulated some "General Queries" to parish ministers, but again this was the unsettled time of the Glorious Revolution and, though progress was made, the results provided a very incomplete picture of the nation. The General Assembly of the Church of Scotland (1720–1755) The General Assembly proposed a "Geographical Description of Scotland" and took some action on this between 1720 and 1744, again during troubled times for the country, latterly involving the Jacobite rebellion under Bonnie Prince Charlie. Nonetheless, during 1743, the Moderator of the General Assembly, the Rev Robert Wallace, organised the distribution of questionnaires aimed at finding out how to devise a scheme for the support of the widows and orphans of clergy. 
This work helped to develop actuarial methods, and explains the involvement of a society for ministers' widows and orphans in later work. The Rev Alexander Webster produced a population census of Scotland in 1755, based to some extent on Wallace's work. Sir James Steuart (1767) and David Erskine (1781) In 1767, Sir James Denham-Steuart suggested a national survey in his "Enquiry into the principles of Œconomy" and this was taken up in 1781 by David Erskine, Earl of Buchan. However, by the time this came to fruition in 1792, it had been overtaken by the work of Sir John Sinclair of Ulbster. The First (Old) Statistical Account of Scotland Sir John Sinclair of Ulbster had studied German state surveys and wished to use what he called for the first time these "statistical" methods to measure the quantum of happiness that existed in the nation and find ways of improving it. In this he was a remarkable example of Enlightenment idealism at work. He stressed the empirical ideal of that age by lauding its anxious attention to the facts, and he set about completing the work left unachieved by the previous attempts mentioned above. The results are crucial to an understanding of Scotland on the eve of both the Industrial Revolution and the French Revolution. In 1790, Sir John sent structured questionnaires to over 900 parish ministers, covering the whole country. These contained 160 questions in four sections, namely: geography and topography; population; agricultural and industrial production; and miscellaneous questions. There were follow-up questions in appendices – six new questions in 1790 and four more in 1791. The general response was excellent, though the length and quality of submissions varied greatly, as can be seen by comparing those for two East Lothian parishes: Whittingehame (19 pages with detailed tables) and Stenton (2 pages of minimal information). Since the survey was not complete, Sir John sent out "Statistical Missionaries" in 1796. 
The project was finished by June 1799, though much had already been published, and Sir John was able to lay before the General Assembly a detailed portrait of the nation. Taken as a whole, the reports are of inestimable historical value. Some are excellently written by ministers who were themselves meticulous Enlightenment scholars (see for example the response by the Rev Dr James Meek for the Parish of Cambuslang in Lanarkshire). The finished volumes were published in Edinburgh by William Creech. The Second (New) Statistical Account of Scotland As mentioned above, early attempts at producing an accurate statistical account of Scotland were related to schemes to support the widows and orphans of the clergy. In 1832 the Committee for the Society for the Sons and Daughters of the Clergy, with the blessing of the General Assembly of the Church of Scotland, took Sir John's work further. It was to be more modern (including maps for each county) and was to draw upon the specialist knowledge of local doctors and schoolmasters. It very self-consciously set out not to produce a new statistical account, but a statistical account of a new country – one that the revolutions mentioned above had changed rapidly. It was, however, very much the child of the "Old Statistical Account". Indeed, the Rev Dr John Robertson, the minister responsible for the new account for Cambuslang, was the former assistant to the writer of the old account. The Third Statistical Account of Scotland Following a grant of some £8,000 from the Nuffield Foundation in 1947, the Third Statistical Account was initiated, and followed a similar parish format to the earlier accounts. The first volume, covering Ayrshire, was published in 1951. Ultimately it was more rigorous and wide-ranging than either of its predecessors, covering industry, transport, culture and demographics. 
Volume editors ensured a more generic approach than before, but even so the spirit of the originals was retained, even if idiosyncrasies remained. The scale of the project and ongoing difficulties with funding and finding publishers (which included Collins and Oliver & Boyd) meant that the project took over forty years to complete, with a gap of more than a decade following the publication of Edinburgh in 1966. It was not until 1992 that the last volume, The County of Roxburgh, was published, under the auspices of the Scottish Council for Voluntary Organisations. Another consequence of this delay was that the later volumes covered administrative divisions which no longer existed. Several parish accounts had to be revised or rewritten due to the lapse of time between the fieldwork and publication. One account, the parish of Livingston in West Lothian, was revised twice and all three versions appear in the published volume. The account for the parish of Currie went missing by the time the Midlothian volume was put together and the book appears without it. Although the project was more secular than before, sections of the accounts continued to focus on religious life, and several of the parish accounts were still written by Church of Scotland ministers. The tone of the comments in the 'Way of Life' sections often appears surprisingly judgmental to a modern reader, and there can be ill-concealed exasperation with the behaviour of working-class parishioners. For example, again and again, spending on football pools is denounced, as are other ways of spending money and leisure time. Judgmentalism turns to plain insult in remarks like 'The people of Dura Den can be extremely ignorant' (Parish of Kemback, Fife) and 'Singing in the schools and the church is painful to an educated ear' (Parish of Inch, Wigtownshire). Note: each volume is entitled either County of... or City of.... Aberdeen (1953), MacKenzie, H. Aberdeenshire (1960), Hamilton, H. Angus (1977), Illsley, W.A. 
Argyll (1961), MacDonald, C.M. Ayrshire (1951), Strawhorn & Boyd Banffshire (1961), Hamilton, H. Berwickshire (1992), Herdman, J. Caithness (1961), Smith, J. S. Dumfriesshire (1962), Houston, G. Dunbartonshire (1959), Dilke, M.S. & Templeton, A.A. Dundee (1979), Jackson, J.M. East Lothian (1953), Snodgrass, Catherine P. Edinburgh (1966), Keir, D. Fife (1952), Smith, A. Glasgow (1958), Cunnison & Gilfillan Inverness-shire (1985), Barron, H. Stewartry of Kirkcudbright & Wigtownshire (1965), Laird, J. & Ramsay, D.G. Kincardineshire (1988), Smith, D. Lanarkshire (1960), Thomson, G. Midlothian (1985), Kirkland, H. Moray & Nairnshire (1965), Hamilton, H. Orkney (1985), Miller, R. Peeblesshire & Selkirkshire (1964), Bulloch, J.P.B. & Urquhart, J.M. Perthshire & Kinross-shire (1980), Taylor, D.B. Renfrewshire & Bute (1962), Moisley, H.A., Thain, A.G., Somerville, A.C. & Stevenson, W. Ross & Cromarty (1987), Mather, A.S. Roxburghshire (1992), Herdman, J. Shetland (1985), Coull, J.R. Stirlingshire & Clackmannanshire (1966), Rennie & Gordon Sutherland (1988), Smith, J. S. D. West Lothian (1992), Cadell, P. References External links The Statistical Accounts of Scotland Online gives access to the Old and the New accounts and has an introduction from which much of this article is taken. Google Books also has the Accounts free of charge. ElectricScotland hosts pdf copies of Google's scans of the First and Second Statistical Accounts. 1792 non-fiction books 1845 non-fiction books 1951 non-fiction books 1992 non-fiction books 1791 establishments in Scotland 18th-century documents 19th-century documents 20th-century documents Agriculture in Scotland Cultural history of Scotland Demographics of Scotland Economic history of Scotland Geography of Scotland History of the Church of Scotland Industrial Revolution Industry in Scotland Censuses in the United Kingdom Social history of Scotland Surveys (human research) Topography History of probability and statistics Church of Scotland
https://en.wikipedia.org/wiki/Astrodome%20%28aeronautics%29
An astrodome is a hemispherical transparent dome that was installed in the cabin roof of an aircraft. Such a dome allowed a trained navigator to perform astronavigation and thereby guide the aircraft at night without the aid of land-based visual references. Astronavigation was a principal early method of determining an aircraft's position during nighttime by referencing the stars. The practice of sighting stars using a sextant had been commonplace amongst navigators for hundreds of years aboard ships, and proved to be applicable to faster-moving aircraft as well; however, the task required a 360-degree view of the celestial horizon. By installing an astrodome, such a view could be readily achieved. The Royal Air Force (RAF) adopted astronavigation techniques into standard navigator training during the late 1930s; both the methods used and the design of the sextant were adapted to better suit the aviation environment, while many aircraft ordered by the service would be furnished with astrodomes to enable navigators to use this technique. During the Second World War, astronavigation became a critical ability used by various nations to conduct long distance flights at night, particularly strategic bombing campaigns. The RAF's choice to mainly operate its bombers at night meant that its crews were particularly dependent on astronavigation for finding their way to and from targets. The introduction of electronic means of navigation soon competed with astronavigation, although electronic techniques had their shortcomings as well. Use in aviation Sporadic use of astronavigation in aviation can be found in numerous long distance flights performed during the 1920s and even amid the First World War. During these early days of aviation, those individual officers that chose to employ astronavigation often attempted to simplify the traditional procedures of marine navigators in this new operating context. 
Amid the 1930s, the Royal Air Force (RAF) became seriously interested in the widespread use of astronavigation for nighttime flights. During November 1937, astronavigation was formally endorsed to be a part of standard navigation procedure amongst general reconnaissance and twin-engine bomber pilots. Two years later, a specialised bubble sextant was designed for the service, which became a preferred tool for this form of navigation. Typically, there would be a suspension arm mounted in the vicinity of the astrodome, upon which the sextant could be mounted via a swivel clip affixed to the top of the instrument. During the Second World War, astrodomes were prominent on many RAF and Commonwealth-operated multi-engined aircraft and on foreign aircraft ordered by them for their use, such as the Liberator and Dakota. Furthermore, numerous aircraft would be retrofitted with astrodomes to better facilitate operational use. For the RAF, it was particularly important for specific aircraft to possess astrodomes as the service had opted to perform the majority of its offensive operations over the continent under the cover of night, hindering conventional navigation by landmarks. On numerous aircraft, such as the Short Stirling four-engined heavy bomber, the astrodome was angled so that it could provide generous external views, including of ground positions, not only those relevant to the task of astronavigation, thus the facility was sometimes used for observation (unrelated to navigation). Several Avro Lancasters were outfitted with a pair of astrodomes. Similar hemispherical-shaped domes were also installed on some Second World War era heavy bombers for the purpose of sighting of their defensive gun turrets, particularly those that were remotely operated. 
Examples of such installations include the German Heinkel He 177A, which had a single forward dorsal dome to aim its remotely operated FDL 131 twin MG 131 dorsal turret, and the American Boeing B-29 Superfortress heavy bomber, which used a dome in its complex sighting system for its quartet of remote gun turrets. On the B-29, the bonding of the astrodome was designed so that it would generate only minimal radio interference via static electric discharges. Several RAF bombers, such as the Stirling, were equipped with an astrograph; this device, installed above the navigator's table, projected lines of equal altitude for two stars at any one time. The navigator only needed to observe Polaris from this point to achieve a three-star fix. While deemed to be useful in astronavigation, by this time inertial guidance systems were becoming increasingly available; these devices would eventually displace the use of astronavigation, and thus aircraft would increasingly be built without astrodomes or other accommodations for this means of navigation. Astrodomes added drag and could fail under pressurization (a failure called a blowout), which occurred in several instances, often with fatal consequences for the navigator. Efforts were made to reduce this danger, such as the use of retractable periscopic sextants. Early jet-powered bombers, such as the English Electric Canberra and the V bombers, while furnished with internal navigation systems, would often still be navigable by astronavigation. During the early 1960s, astrodomes were still being employed in the USMC Lockheed Hercules GV-1 (later designated as C-130); the navigator was able to employ a bubble sextant hung from a hook in the middle of the dome. The USMC operated its Aerial Navigation School at MCAS Cherry Point, NC, with graduates receiving their designation and wings as an Aerial Navigator. 
The Lockheed SR-71 Blackbird, a high speed aerial reconnaissance aircraft, was furnished with a complex array of navigation systems, which included an astro-inertial guidance system (ANS) to correct deviations produced by the inertial navigation system via a series of celestial observations. This system performed its observations of the stars above the aircraft via a circular quartz glass window set onto the upper fuselage. Its "blue light" source star tracker, which could see stars during both day and night, would continuously track a variety of stars as the aircraft's changing position brought them into view. The system's digital computer ephemeris contained data on a list of stars used for celestial navigation: the list first included 56 stars, and was later expanded to 61. Use at sea During the postwar era, the use of the astrodome spread to other vehicles, including a number of ocean-going vessels. In particular, they found popularity on long distance racing yachts, especially those that were being used in solo racing. Eric Tabarly, record-breaking winner of the 1964 OSTAR single-handed transatlantic race, and former French Aéronavale (Fleet air arm) pilot, had fitted his revolutionary lightweight ketch-rigged racer Pen Duick II with an astrodome scavenged from a decommissioned Short Sunderland flying boat. Not only could he use it for sextant astro-navigation, but it provided a sheltered place from which he could steer his yacht during a stormy race. This was quite useful, as his wind-vane autopilot (also derived from aeronautical technology) had broken down. See also Index of aviation articles References Citations Bibliography Shul, Brian and Sheila Kathleen O'Grady. Sled Driver: Flying the World's Fastest Jet. Marysville, California: Gallery One, 1994. . Air navigation Aircraft canopies Celestial navigation Navigational equipment
https://en.wikipedia.org/wiki/Bower%E2%80%93Barff%20process
In metallurgy, the Bower–Barff process is a method of coating iron or steel with magnetic iron oxide (magnetite, Fe3O4) in order to minimize atmospheric corrosion. The articles to be treated are put into a closed retort, and a current of superheated steam is passed through for twenty minutes, followed by a current of producer gas (carbon monoxide) to reduce any higher oxides that may have been formed. References ebook of creative chemistry, page 273. Metallurgical processes
https://en.wikipedia.org/wiki/Francis%20Wentworth-Sheilds
Francis Ernest Wentworth-Sheilds OBE (also spelt Shields; 16 November 1869 – 10 May 1959) was a British civil engineer. Francis Ernest Sheilds was born in London in 1869, the younger son of engineer Francis Webb Sheilds. Rev. Wentworth Wentworth-Sheilds was his elder brother. The family added the surname Wentworth in 1877. He was educated at St Paul's School in London and Owens College, Manchester. He was appointed to be a Major of the Territorial Army's Engineer and Railway Staff Corps, an unpaid, volunteer unit which provides technical expertise to the British Army, on 28 March 1925. He served as president of the Institution of Civil Engineers for the November 1944 to November 1945 session. Wentworth-Shields was an Officer of the Order of the British Empire. He died in 1959 in Southampton. References Bibliography External links 1869 births 1959 deaths Engineers from London British civil engineers Officers of the Order of the British Empire Engineer and Railway Staff Corps officers Presidents of the Institution of Civil Engineers Presidents of the Institution of Structural Engineers
https://en.wikipedia.org/wiki/TCP%20congestion%20control
Transmission Control Protocol (TCP) uses a congestion control algorithm that includes various aspects of an additive increase/multiplicative decrease (AIMD) scheme, along with other schemes including slow start and a congestion window (CWND), to achieve congestion avoidance. The TCP congestion-avoidance algorithm is the primary basis for congestion control in the Internet. Per the end-to-end principle, congestion control is largely a function of internet hosts, not the network itself. There are several variations and versions of the algorithm implemented in protocol stacks of operating systems of computers that connect to the Internet. To avoid congestive collapse, TCP uses a multi-faceted congestion-control strategy. For each connection, TCP maintains a CWND, limiting the total number of unacknowledged packets that may be in transit end-to-end. This is somewhat analogous to TCP's sliding window used for flow control. Additive increase/multiplicative decrease The additive increase/multiplicative decrease (AIMD) algorithm is a closed-loop control algorithm. AIMD combines linear growth of the congestion window with an exponential reduction when congestion occurs. Multiple flows using AIMD congestion control will eventually converge to use equal amounts of a contended link. This is the algorithm used in the congestion avoidance state. Congestion window In TCP, the congestion window (CWND) is one of the factors that determines the number of bytes that can be sent out at any time. The congestion window is maintained by the sender and is a means of preventing a link between the sender and the receiver from becoming overloaded with too much traffic. This should not be confused with the sliding window maintained by the sender, which exists to prevent the receiver from becoming overloaded. The congestion window is calculated by estimating how much congestion there is on the link. 
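As a minimal illustration of the AIMD behaviour described above (not any particular kernel's implementation; the window sizes, MSS value and increase/decrease constants below are assumptions), the update rule and its convergence property can be sketched in Python:

```python
def aimd_update(cwnd, loss_detected, mss=1460, alpha=1, beta=0.5):
    """One round-trip of AIMD: add alpha * MSS per RTT without loss,
    multiply the window by beta when congestion (loss) is detected."""
    if loss_detected:
        return max(mss, cwnd * beta)   # multiplicative decrease
    return cwnd + alpha * mss          # additive increase

# Two competing flows converge toward an equal share of a shared link:
# additive steps preserve their difference, multiplicative steps halve it.
cwnd_a, cwnd_b = 80_000, 10_000        # bytes (illustrative values)
for rtt in range(100):
    loss = (cwnd_a + cwnd_b) > 100_000  # assumed bottleneck capacity
    cwnd_a = aimd_update(cwnd_a, loss)
    cwnd_b = aimd_update(cwnd_b, loss)
```

After enough cycles, the two windows end up far closer to each other than they started, which is the fairness property mentioned above.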
When a connection is set up, the congestion window, a value maintained independently at each host, is set to a small multiple of the maximum segment size (MSS) allowed on that connection. Further variance in the congestion window is dictated by an additive increase/multiplicative decrease (AIMD) approach. This means that if all segments are received and the acknowledgments reach the sender on time, some constant is added to the window size; the exact behaviour depends on the algorithm in use. A system administrator may adjust the maximum window size limit, or adjust the constant added during additive increase, as part of TCP tuning. The flow of data over a TCP connection is also controlled by the use of the receive window advertised by the receiver. A sender can send no more data than the lesser of its own congestion window and the receive window. Slow start Slow start is part of the congestion control strategy used by TCP in conjunction with other algorithms to avoid sending more data than the network is capable of forwarding, that is, to avoid causing network congestion. Slow start begins initially with a congestion window size (CWND) of 1, 2, 4 or 10 MSS. The value of the congestion window can be increased by 1 MSS with each acknowledgment (ACK) received, effectively doubling the window size each RTT. The transmission rate will be increased by the slow-start algorithm until either a packet loss is detected, the receiver's advertised window (rwnd) becomes the limiting factor, or the slow start threshold (ssthresh) is reached; ssthresh is a value set to limit slow start and to determine whether the slow start or the congestion avoidance algorithm is used. When the CWND reaches ssthresh, TCP switches to the congestion avoidance algorithm, in which the window is increased by up to 1 MSS for each RTT. A common formula is that each new ACK increases the CWND by MSS × MSS / CWND; this increases the window almost linearly and provides an acceptable approximation. 
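The two growth regimes above can be sketched as a per-ACK update (an illustrative model only; the initial window and threshold values are assumptions, not mandated constants):

```python
MSS = 1460  # assumed maximum segment size in bytes

def on_ack(cwnd, ssthresh):
    """Window growth per ACK: exponential during slow start, and
    roughly +1 MSS per RTT during congestion avoidance."""
    if cwnd < ssthresh:                # slow start
        return cwnd + MSS              # +1 MSS per ACK -> doubles each RTT
    return cwnd + MSS * MSS / cwnd     # congestion avoidance (near-linear)

cwnd, ssthresh = 10 * MSS, 64 * MSS    # illustrative starting values
for _ in range(5):                     # five ACKs while below ssthresh
    cwnd = on_ack(cwnd, ssthresh)
```

Since a full window of `cwnd / MSS` ACKs arrives per RTT, the congestion-avoidance increment of MSS × MSS / CWND per ACK sums to roughly one MSS per round trip, matching the prose above.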
If a loss event occurs, TCP assumes that it is due to network congestion and takes steps to reduce the offered load on the network. These measures depend on the exact TCP congestion avoidance algorithm used. When a TCP sender detects segment loss using the retransmission timer and the given segment has not yet been resent, the value of ssthresh must be set to no more than half of the amount of data that has been sent but not yet cumulatively acknowledged, or 2 × MSS, whichever value is greater. TCP Tahoe When a loss occurs, a retransmission is sent, half of the current CWND is saved as ssthresh, and slow start begins again from its initial CWND. TCP Reno A fast retransmit is sent, half of the current CWND is saved as ssthresh and as the new CWND, thus skipping slow start and going directly to the congestion avoidance algorithm. The overall algorithm here is called fast recovery. Slow start assumes that unacknowledged segments are due to network congestion. While this is an acceptable assumption for many networks, segments may be lost for other reasons, such as poor data link layer transmission quality. Thus, slow start can perform poorly in situations with poor reception, such as wireless networks. The slow start protocol also performs badly for short-lived connections. Older web browsers would create many consecutive short-lived connections to the web server, and would open and close the connection for each file requested. This kept most connections in the slow start mode, which resulted in poor response time. To avoid this problem, modern browsers either open multiple connections simultaneously or reuse one connection for all files requested from a particular web server. Connections, however, cannot be reused for the multiple third-party servers used by web sites to implement web advertising, sharing features of social networking services, and counter scripts of web analytics. 
Fast retransmit Fast retransmit is an enhancement to TCP that reduces the time a sender waits before retransmitting a lost segment. A TCP sender normally uses a simple timer to recognize lost segments. If an acknowledgment is not received for a particular segment within a specified time (a function of the estimated round-trip delay time), the sender will assume the segment was lost in the network and will retransmit it. Duplicate acknowledgment is the basis for the fast retransmit mechanism. After receiving a packet, an acknowledgement is sent for the last in-order byte of data received. For an in-order packet, this is effectively the last packet's sequence number plus the current packet's payload length. If the next packet in the sequence is lost but a third packet in the sequence is received, then the receiver can only acknowledge the last in-order byte of data, which is the same value as was acknowledged for the first packet. The second packet is lost and the third packet is not in order, so the last in-order byte of data remains the same as before; thus a duplicate acknowledgment occurs. The sender continues to send packets, and a fourth and fifth packet are received by the receiver. Again, the second packet is missing from the sequence, so the last in-order byte has not changed. Duplicate acknowledgments are sent for both of these packets. When a sender receives three duplicate acknowledgments, it can be reasonably confident that the segment carrying the data that followed the last in-order byte specified in the acknowledgment was lost. A sender with fast retransmit will then retransmit this packet immediately without waiting for its timeout. On receipt of the retransmitted segment, the receiver can acknowledge the last in-order byte of data received. In the above example, this would acknowledge to the end of the payload of the fifth packet. There is no need to acknowledge intermediate packets, since TCP uses cumulative acknowledgments by default. 
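The sender-side duplicate-ACK counting described above can be sketched as follows (a toy model: ACK values are abstract sequence numbers, one ACK per arriving packet is assumed, and real stacks track considerably more state):

```python
def fast_retransmit(ack_stream, dup_threshold=3):
    """Trigger a retransmission when the same cumulative ACK number is
    seen dup_threshold extra times (i.e. three duplicate ACKs)."""
    last_ack, dup_count, retransmitted = None, 0, []
    for ack in ack_stream:
        if ack == last_ack:
            dup_count += 1
            if dup_count == dup_threshold:
                retransmitted.append(ack)  # resend the segment starting at `ack`
        else:
            last_ack, dup_count = ack, 0
    return retransmitted

# Packet 2 was lost; packets 1, 3, 4 and 5 arrived, so the receiver keeps
# acknowledging sequence number 2: one original ACK plus three duplicates.
resent = fast_retransmit([2, 2, 2, 2])
```

Four identical ACKs (the original plus three duplicates) trigger exactly one fast retransmission of the segment at sequence number 2, without waiting for the retransmission timer.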
Algorithms The naming convention for congestion control algorithms (CCAs) may have originated in a 1996 paper by Kevin Fall and Sally Floyd. The following is one possible classification according to the following properties: the type and amount of feedback received from the network incremental deployability on the current Internet the aspect of performance it aims to improve: high bandwidth-delay product networks (B); lossy links (L); fairness (F); advantage to short flows (S); variable-rate links (V); speed of convergence (C) the fairness criterion it uses Some well-known congestion avoidance mechanisms are classified by this scheme as follows: TCP Tahoe and Reno TCP Tahoe and Reno algorithms were retrospectively named after the versions or flavors of the 4.3BSD operating system in which each first appeared (which were themselves named after Lake Tahoe and the nearby city of Reno, Nevada). The Tahoe algorithm first appeared in 4.3BSD-Tahoe (which was made to support the CCI Power 6/32 "Tahoe" minicomputer), and was later made available to non-AT&T licensees as part of the 4.3BSD Networking Release 1; this ensured its wide distribution and implementation. Improvements were made in 4.3BSD-Reno and subsequently released to the public as Networking Release 2 and later 4.4BSD-Lite. While both consider retransmission timeout (RTO) and duplicate ACKs as packet loss events, the behavior of Tahoe and Reno differ primarily in how they react to duplicate ACKs: Tahoe: if three duplicate ACKs are received (i.e. four ACKs acknowledging the same packet, which are not piggybacked on data and do not change the receiver's advertised window), Tahoe performs a fast retransmit, sets the slow start threshold to half of the current congestion window, reduces the congestion window to 1 MSS, and resets to slow start state. 
Reno: if three duplicate ACKs are received, Reno will perform a fast retransmit and skip the slow start phase by instead halving the congestion window (instead of setting it to 1 MSS like Tahoe), setting ssthresh equal to the new congestion window, and entering a phase called fast recovery. In both Tahoe and Reno, if an ACK times out (RTO timeout), slow start is used, and both algorithms reduce the congestion window to 1 MSS. TCP New Reno TCP New Reno improves retransmission during the fast-recovery phase of TCP Reno. During fast recovery, to keep the transmit window full, for every duplicate ACK that is returned, a new unsent packet from the end of the congestion window is sent. The difference from Reno is that New Reno does not halve ssthresh immediately, which may reduce the window too much if multiple packet losses occur. It does not exit fast recovery and reset ssthresh until it acknowledges all of the data. After retransmission, newly acknowledged data fall into two cases: Full acknowledgments: the ACK acknowledges all the intermediate segments sent; ssthresh is unchanged, and cwnd can be set to ssthresh. Partial acknowledgments: the ACK does not acknowledge all the data, which means another loss may have occurred; retransmit the first unacknowledged segment if permitted. It uses a variable called "recover" to record how much data needs to be recovered. After a retransmit timeout, it records the highest sequence number transmitted in the recover variable and exits the fast recovery procedure. If this sequence number is acknowledged, TCP returns to the congestion avoidance state. A problem occurs with New Reno when there are no packet losses but instead, packets are reordered by more than 3 packet sequence numbers. In this case, New Reno mistakenly enters fast recovery. When the reordered packet is delivered, duplicate and needless retransmissions are immediately sent. 
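The contrasting reactions of Tahoe and Reno to a third duplicate ACK, and their common reaction to a retransmission timeout, can be sketched as follows (illustrative only; the MSS value is an assumption and real implementations track much more state):

```python
MSS = 1460  # assumed maximum segment size in bytes

def on_triple_dupack(cwnd, flavor):
    """Reaction to the third duplicate ACK (both flavors fast-retransmit)."""
    ssthresh = max(cwnd / 2, 2 * MSS)   # halve, but never below 2 MSS
    if flavor == "tahoe":
        return ssthresh, MSS            # restart slow start from 1 MSS
    if flavor == "reno":
        return ssthresh, ssthresh       # enter fast recovery at ssthresh
    raise ValueError(flavor)

def on_rto(cwnd):
    """Retransmission timeout: both flavors drop back to slow start."""
    return max(cwnd / 2, 2 * MSS), MSS  # (ssthresh, cwnd)
```

Starting from the same window, Tahoe collapses to a single MSS while Reno continues at half the previous window, which is exactly the behavioural difference described above.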
New Reno performs as well as SACK at low packet error rates and substantially outperforms Reno at high error rates. TCP Vegas Until the mid-1990s, all of TCP's set timeouts and measured round-trip delays were based upon only the last transmitted packet in the transmit buffer. University of Arizona researchers Larry Peterson and Lawrence Brakmo introduced TCP Vegas, in which timeouts were set and round-trip delays were measured for every packet in the transmit buffer. In addition, TCP Vegas uses additive increases in the congestion window. In a comparison study of various TCP congestion control algorithms, TCP Vegas appeared to be the smoothest, followed by TCP CUBIC. TCP Vegas was not widely deployed outside Peterson's laboratory but was selected as the default congestion control method for DD-WRT firmware v24 SP2. TCP Hybla TCP Hybla aims to eliminate penalties to TCP connections that use high-latency terrestrial or satellite radio links. Hybla improvements are based on analytical evaluation of the congestion window dynamics. TCP BIC Binary Increase Congestion control (BIC) is a TCP implementation with an optimized CCA for high-speed networks with high latency, known as long fat networks (LFNs). BIC is used by default in Linux kernels 2.6.8 through 2.6.18. TCP CUBIC CUBIC is a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event. CUBIC is used by default in Linux kernels since version 2.6.19. Agile-SD TCP Agile-SD is a Linux-based CCA which is designed for the real Linux kernel. It is a receiver-side algorithm that employs a loss-based approach using a novel mechanism, called the agility factor (AF), to increase the bandwidth utilization over high-speed and short-distance networks (low bandwidth-delay product networks) such as local area networks or fiber-optic networks, especially when the applied buffer size is small. 
It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows) and CUBIC (the default of Linux) using the NS-2 simulator. It improves the total performance by up to 55% in terms of average throughput. TCP Westwood+ Westwood+ is a sender-only modification of TCP Reno that optimizes the performance of TCP congestion control over both wired and wireless networks. TCP Westwood+ is based on end-to-end bandwidth estimation to set the congestion window and slow-start threshold after a congestion episode, that is, after three duplicate acknowledgments or a timeout. The bandwidth is estimated by averaging the rate of returning acknowledgment packets. In contrast with TCP Reno, which blindly halves the congestion window after three duplicate ACKs, TCP Westwood+ adaptively sets a slow-start threshold and a congestion window that take into account an estimate of the bandwidth available at the time congestion is experienced. Compared to Reno and New Reno, Westwood+ significantly increases throughput over wireless links and improves fairness in wired networks. Compound TCP Compound TCP is a Microsoft implementation of TCP which maintains two different congestion windows simultaneously, with the goal of achieving good performance on LFNs while not impairing fairness. It has been widely deployed in Windows versions since Microsoft Windows Vista and Windows Server 2008 and has been ported to older Microsoft Windows versions as well as Linux. TCP Proportional Rate Reduction TCP Proportional Rate Reduction (PRR) is an algorithm designed to improve the accuracy of data sent during recovery. The algorithm ensures that the window size after recovery is as close as possible to the slow start threshold. In tests performed by Google, PRR resulted in a 3–10% reduction in average latency, and recovery timeouts were reduced by 5%. PRR is available in Linux kernels since version 3.2. 
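The Westwood+ idea of sizing ssthresh from an ACK-rate bandwidth estimate can be sketched as below. This is a simplified illustration under assumed parameters: the smoothing factor, helper names (`update_bw_estimate`, `ssthresh_after_loss`), and the sample numbers are all hypothetical, and real Westwood+ uses a more careful discrete-time filter over ACK interarrival times.

```python
# Westwood+-style sketch: estimate bandwidth from the rate of returning
# ACKs (low-pass filtered), then, after a loss, set ssthresh to the
# estimated bandwidth-delay product instead of blindly halving cwnd.

def update_bw_estimate(bw_est, acked_bytes, interval_s, alpha=0.9):
    """Exponentially smooth per-interval ACK-rate samples (bytes/s)."""
    sample = acked_bytes / interval_s
    return alpha * bw_est + (1 - alpha) * sample

def ssthresh_after_loss(bw_est, rtt_min_s, mss):
    """Size ssthresh to the estimated BDP, in MSS-sized segments."""
    return max(int(bw_est * rtt_min_s / mss), 2)

bw = 0.0
for _ in range(50):  # ACKs clocking in at roughly 1 MB/s
    bw = update_bw_estimate(bw, acked_bytes=10_000, interval_s=0.01)
# ssthresh sized to the estimated BDP (~34 segments with these numbers)
print(ssthresh_after_loss(bw, rtt_min_s=0.05, mss=1460))
```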
TCP BBR Bottleneck Bandwidth and Round-trip propagation time (BBR) is a CCA developed at Google in 2016. While most CCAs are loss-based, in that they rely on packet loss to detect congestion and lower rates of transmission, BBR, like TCP Vegas, is model-based. The algorithm uses the maximum bandwidth and round-trip time at which the network delivered the most recent flight of outbound data packets to build a model of the network. Each cumulative or selective acknowledgment of packet delivery produces a rate sample that records the amount of data delivered over the time interval between the transmission of a data packet and the acknowledgment of that packet. When implemented at YouTube, BBRv1 yielded an average of 4% higher network throughput, and up to 14% in some countries. BBR has been available for Linux TCP since Linux 4.9. It is also available for QUIC. The fairness of BBR version 1 (BBRv1) to non-BBR streams is disputed. While Google's presentation shows BBRv1 co-existing well with CUBIC, researchers like Geoff Huston and Hock, Bless and Zitterbart found it unfair to other streams and not scalable. Hock et al. also found "some severe inherent issues such as increased queuing delays, unfairness, and massive packet loss" in the BBR implementation of Linux 4.9. Soheil Abbasloo et al. (authors of C2TCP) show that BBRv1 doesn't perform well in dynamic environments such as cellular networks. They have also shown that BBR has an unfairness issue. For instance, when a CUBIC flow (the default TCP implementation in Linux, Android, and macOS) coexists with a BBR flow in the network, the BBR flow can dominate the CUBIC flow and take the whole link bandwidth from it (see figure 16 in their paper). Version 2 attempts to deal with the issue of unfairness when operating alongside loss-based congestion management such as CUBIC. In BBRv2 the model used by BBRv1 is augmented to include information about packet loss and information from Explicit Congestion Notification (ECN). 
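BBR's model building, as described above, reduces to taking per-ACK delivery-rate samples and tracking a maximum bandwidth and minimum RTT. The toy below illustrates just that core idea with made-up numbers; real BBR uses windowed max/min filters and a state machine (startup, drain, probe) that this sketch omits.

```python
# Toy BBR-style model: each ACK yields a delivery-rate sample
# (bytes delivered / elapsed time); the model keeps the maximum observed
# bandwidth and the minimum observed RTT, whose product is the BDP.

def rate_sample(delivered_bytes, send_time_s, ack_time_s):
    """Delivery rate implied by one ACK, in bytes per second."""
    return delivered_bytes / (ack_time_s - send_time_s)

samples = [
    rate_sample(14600, 0.00, 0.012),
    rate_sample(14600, 0.01, 0.020),
    rate_sample(14600, 0.02, 0.034),
]
rtts = [0.012, 0.010, 0.014]

btl_bw = max(samples)   # estimated bottleneck bandwidth
rt_prop = min(rtts)     # estimated round-trip propagation time
bdp = btl_bw * rt_prop  # pacing rate and cwnd are derived from this
print(round(bdp))       # 14600
```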
Whilst BBRv2 may at times have lower throughput than BBRv1, it is generally considered to have better goodput. Version 3 (BBRv3) fixes two bugs in BBRv2 (premature end of bandwidth probing, bandwidth convergence) and performs some performance tuning. There is also a variant, termed BBR.Swift, optimized for datacenter-internal links: it uses network_RTT (excluding receiver delay) as the main congestion control signal. C2TCP Cellular Controlled Delay TCP (C2TCP) was motivated by the lack of a flexible end-to-end TCP approach that can satisfy various QoS requirements for different applications without requiring any changes in the network devices. C2TCP aims to satisfy ultra-low latency and high-bandwidth requirements of applications such as virtual reality, video conferencing, online gaming, and vehicular communication systems in a highly dynamic environment such as current LTE and future 5G cellular networks. C2TCP works as an add-on on top of loss-based TCP (e.g. Reno, NewReno, CUBIC, BIC, ...); it only needs to be installed on the server side, and it bounds the average delay of packets to the desired delays set by the applications. Researchers at NYU showed that C2TCP outperforms the delay and delay-variation performance of various state-of-the-art TCP schemes. For instance, they showed that compared to BBR, CUBIC, and Westwood on average, C2TCP decreases the average delay of packets by about 250%, 900%, and 700% respectively on various cellular network environments. Elastic-TCP Elastic-TCP was proposed in February 2019 to increase bandwidth utilization over high-BDP networks in support of cloud computing. It is a Linux-based CCA that is designed for the Linux kernel. It is a receiver-side algorithm that employs a loss-delay-based approach using a novel mechanism called a window-correlated weighting function (WWF). It has a high level of elasticity to deal with different network characteristics without the need for human tuning. 
It has been evaluated by comparing its performance to Compound TCP (the default CCA in MS Windows), CUBIC (the default for Linux) and TCP-BBR (the default of Linux 4.9 used by Google) using the NS-2 simulator and testbed. Elastic-TCP significantly improves the total performance in terms of average throughput, loss ratio, and delay. NATCP Soheil Abbasloo et al. proposed NATCP (Network-Assisted TCP), a TCP design targeting multi-access edge computing (MEC). The key idea of NATCP is that if the characteristics of the network were known beforehand, TCP would have been designed differently. Therefore, NATCP employs the available features and properties in the current MEC-based cellular architectures to push the performance of TCP close to the optimal performance. NATCP uses out-of-band feedback from the network to the servers located nearby. The feedback from the network, which includes the capacity of the cellular access link and the minimum RTT of the network, guides the servers to adjust their sending rates. As preliminary results show, NATCP outperforms the state-of-the-art TCP schemes. Other TCP congestion avoidance algorithms FAST TCP Generalized FAST TCP H-TCP Data Center TCP High Speed TCP HSTCP-LP TCP-Illinois TCP-LP TCP SACK Scalable TCP TCP Veno Westwood XCP YeAH-TCP TCP-FIT Congestion Avoidance with Normalized Interval of Time (CANIT) Non-linear neural network congestion control based on genetic algorithm for TCP/IP networks D-TCP NexGen D-TCP Copa TCP New Reno was the most commonly implemented algorithm; SACK support is very common and is an extension to Reno/New Reno. Most others are competing proposals that still need evaluation. Starting with version 2.6.8, the Linux kernel switched the default implementation from New Reno to BIC. The default implementation was again changed to CUBIC in the 2.6.19 version. FreeBSD from version 14.X onwards also uses CUBIC as the default algorithm. Previous versions used New Reno. However, FreeBSD supports a number of other choices. 
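On Linux, the kernel defaults mentioned above can be inspected from userspace, and an application can even pick a CCA per socket. The sketch below shows both; the `/proc` path and the `TCP_CONGESTION` socket option are real Linux interfaces, but the helper names are this sketch's own, and `set_cca` only works for algorithms the running kernel has loaded.

```python
# Inspecting and (per socket) selecting the congestion control
# algorithm on Linux. Linux-specific; needs no root for reading.
import socket

def default_cca():
    """System-wide default CCA, e.g. 'cubic' on modern kernels."""
    with open("/proc/sys/net/ipv4/tcp_congestion_control") as f:
        return f.read().strip()

def set_cca(sock, name):
    """Override the CCA for one socket (e.g. name='bbr' if loaded)."""
    sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION,
                    name.encode())

print(default_cca())
```

Changing the system-wide default instead is done with `sysctl -w net.ipv4.tcp_congestion_control=<name>` as root.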
When the per-flow product of bandwidth and latency increases, regardless of the queuing scheme, TCP becomes inefficient and prone to instability. This becomes increasingly important as the Internet evolves to incorporate very high-bandwidth optical links. TCP Interactive (iTCP) allows applications to subscribe to TCP events and respond accordingly, enabling various functional extensions to TCP from outside the TCP layer. Most TCP congestion schemes work internally. iTCP additionally enables advanced applications to directly participate in congestion control, such as by controlling the source generation rate. Zeta-TCP detects congestion from both latency and loss rate measures. To maximize the goodput, Zeta-TCP applies different congestion window backoff strategies based on the likelihood of congestion. It also has other improvements to accurately detect packet losses, avoiding retransmissions triggered by retransmission timeouts, and to accelerate and control the inbound (download) traffic. Classification by network awareness CCAs may be classified in relation to network awareness, meaning the extent to which these algorithms are aware of the state of the network. This classification consists of three primary categories: black box, grey box, and green box. Black box algorithms offer blind methods of congestion control. They operate only on the binary feedback received upon congestion and do not assume any knowledge concerning the state of the networks which they manage. Grey box algorithms use time-based measurements, such as RTT variation and rate of packet arrival, in order to obtain measurements and estimations of bandwidth, flow contention, and other knowledge of network conditions. Green box algorithms offer bimodal methods of congestion control which measure the fair share of total bandwidth that should be allocated to each flow, at any point, during the system's execution. 
Black box Highspeed-TCP BIC TCP (Binary Increase Congestion Control Protocol) uses a concave increase of the source's rate after each congestion event until the window is equal to that before the event, in order to maximize the time that the network is fully utilized. After that, it probes aggressively. CUBIC TCP – a less aggressive and more systematic derivative of BIC, in which the window is a cubic function of time since the last congestion event, with the inflection point set to the window prior to the event. AIMD-FC (additive increase multiplicative decrease with fast convergence), an improvement of AIMD. Binomial Mechanisms SIMD Protocol GAIMD Grey box TCP Vegas – estimates the queuing delay, and linearly increases or decreases the window so that a constant number of packets per flow are queued in the network. Vegas implements proportional fairness. FAST TCP – achieves the same equilibrium as Vegas, but uses proportional control instead of linear increase, and intentionally scales the gain down as the bandwidth increases with the aim of ensuring stability. TCP BBR – estimates the queuing delay but uses exponential increase. Intentionally slows down periodically for fairness and decreased delay. TCP-Westwood (TCPW) – a loss causes the window to be reset to the sender's estimate of the bandwidth-delay product (the smallest measured RTT multiplied by the observed rate of receiving ACKs). C2TCP TFRC TCP-Real TCP-Jersey Green box Bimodal Mechanism – Bimodal Congestion Avoidance and Control mechanism. Signalling methods implemented by routers Random Early Detection (RED) randomly drops packets in proportion to the router's queue size, triggering multiplicative decrease in some flows. Explicit Congestion Notification (ECN) Network-Assisted Congestion Control NATCP – Network-Assisted TCP uses out-of-band explicit feedback indicating minimum RTT of the network and capacity of the cellular access link. 
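The Vegas rule above (keep a small, constant number of packets queued per flow) has a well-known arithmetic core: the gap between the expected rate at the base RTT and the actual rate implies how many of the flow's packets are sitting in queues. A minimal sketch, using the classic alpha/beta thresholds and omitting Vegas's modified slow start and retransmission logic:

```python
# Vegas-style window adjustment: estimate how many of this flow's
# packets are queued in the network and keep that count in [ALPHA, BETA].

ALPHA, BETA = 2, 4  # target range of queued packets (classic defaults)

def vegas_adjust(cwnd, base_rtt, current_rtt):
    expected = cwnd / base_rtt     # rate if nothing were queued
    actual = cwnd / current_rtt    # observed rate
    queued = (expected - actual) * base_rtt
    if queued < ALPHA:
        return cwnd + 1            # network underused: grow linearly
    if queued > BETA:
        return cwnd - 1            # queues building: back off linearly
    return cwnd                    # in the sweet spot: hold steady

print(vegas_adjust(20, base_rtt=0.100, current_rtt=0.105))  # 21
```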
The variable-structure congestion control protocol (VCP) uses two ECN bits to explicitly feedback the network state of congestion. It includes an end host side algorithm as well. The following algorithms require custom fields to be added to the TCP packet structure: Explicit Control Protocol (XCP) – XCP packets carry a congestion header with a feedback field, indicating the increase or decrease of the sender's congestion window. XCP routers set the feedback value explicitly for efficiency and fairness. MaxNet – Uses a single header field, which carries the maximum congestion level of any router on a flow's path. The rate is set as a function of this maximum congestion, resulting in max-min fairness. JetMax, like MaxNet, responds only to the maximum congestion signal, but also carries other overhead fields. Linux usage BIC is used by default in Linux kernels 2.6.8 through 2.6.18. (August 2004 – September 2006) CUBIC is used by default in Linux kernels since version 2.6.19. (November 2006) PRR is incorporated in Linux kernels to improve loss recovery since version 3.2. (January 2012) BBRv1 is incorporated in Linux kernels to enable model-based congestion control since version 4.9. (December 2016) See also Low Extra Delay Background Transport (LEDBAT) Notes References Sources External links Papers in Congestion Control TCP Congestion Handling and Congestion Avoidance Algorithms The TCP/IP Guide Flow control (data) Network performance
TCP congestion control
Engineering
5,707
30,137,737
https://en.wikipedia.org/wiki/Entoloma%20albidum
Entoloma albidum is a poisonous mushroom found in North America. See also List of Entoloma species References Poisonous fungi Entolomataceae Fungi described in 1917 Fungi of North America Taxa named by William Alphonso Murrill Fungus species
Entoloma albidum
Biology,Environmental_science
55
20,219
https://en.wikipedia.org/wiki/Mycoplasma%20genitalium
Mycoplasma genitalium (also known as MG, Mgen, or since 2018, Mycoplasmoides genitalium) is a sexually transmitted, small and pathogenic bacterium that lives on the mucous epithelial cells of the urinary and genital tracts in humans. Medical reports published in 2007 and 2015 state that Mgen is becoming increasingly common. Resistance to multiple antibiotics, including the macrolide azithromycin, which until recently was the most reliable treatment, is becoming prevalent. The bacterium was first isolated from the urogenital tract of humans in 1981, and was eventually identified as a new species of Mycoplasma in 1983. It can cause negative health effects in men and women. It also increases the risk for HIV spread, with higher occurrences in those previously treated with the antibiotic azithromycin. Symptoms of infection Mgen is a bacterium recognized for causing urethritis in both men and women along with cervicitis and pelvic inflammation in women. It presents clinical symptoms similar to those of Chlamydia trachomatis infection and has shown higher incidence rates, compared to both Chlamydia trachomatis and Neisseria gonorrhoeae infections in some populations. Infection with Mgen can be symptomatic or asymptomatic. Both men and women may experience inflammation in the urethra (urethritis), characterized by mucopurulent discharge in the urinary tract, and burning while urinating. In women, it causes cervicitis and pelvic inflammatory disease (PID), including endometritis and salpingitis. Women may also experience bleeding after sex, and infection is also linked with tubal factor infertility. For men, the most common signs are painful urination or a watery discharge from the penis. There is a consistent association of M. genitalium infection and female reproductive tract syndromes. M. genitalium infection was significantly associated with increased risk of preterm birth, spontaneous abortion, cervicitis, and pelvic inflammatory disease. 
In addition, this pathogen may latently infect the chorionic villi tissues of pregnant women, thereby impacting pregnancy outcome. Infertility risk is also strongly associated with infection with M. genitalium, although evidence suggests it is not associated with male infertility. When M. genitalium is a co-infectious agent, risk associations are stronger and statistically significant. Polymerase chain reaction analyses indicated that it is a cause of acute non-gonococcal urethritis (NGU) and probably chronic NGU. It is strongly associated with persistent and recurring non-gonococcal urethritis (NGU), responsible for 15 percent to 20 percent of symptomatic NGU cases in men. Unlike other mycoplasmas, the infection is not associated with bacterial vaginosis. It is highly associated with the intensity of HIV infection. Some scientists are performing research to determine if Mgen could play a role in the development of prostate and ovarian cancers and lymphomas in some individuals. These studies have yet to find conclusive evidence to suggest a link. Genome The genome of M. genitalium strain G37T consists of one circular DNA molecule of 580,070 base pairs. Scott N. Peterson and his team at the University of North Carolina at Chapel Hill reported the first genetic map using pulsed-field gel electrophoresis in 1991. They performed an initial study of the genome using sequencing in 1993, by which they identified 100,993 nucleotides and 390 protein-coding genes. Collaborating with researchers at The Institute for Genomic Research (TIGR; now the J. Craig Venter Institute), which included Craig Venter, they produced the complete genome sequence in 1995 using shotgun sequencing. Only 470 predicted coding regions were identified in 1995, including genes required for DNA replication, transcription and translation, DNA repair, cellular transport, and energy metabolism. It was the second complete bacterial genome ever sequenced, after Haemophilus influenzae. 
Later data from KEGG reports 476 protein-coding genes and 43 RNA genes, totaling 519. It is unclear where the "525" gene count for the G37T strain stems from and what gene calling procedure was used. In 2006, the team at the J. Craig Venter Institute reported that only 382 genes are essential for biological functions. The small genome of M. genitalium made it the organism of choice in The Minimal Genome Project, a study to find the smallest set of genetic material necessary to sustain life. There is limited divergence among clinical strains of M. genitalium. All strains retain the small genome size. Diagnosis Recent research shows that the prevalence of Mgen is currently higher than that of other commonly occurring sexually transmitted infections (STIs). Mgen is a fastidious organism with prolonged growth durations. This makes detection of the pathogen in clinical specimens and subsequent isolation extremely difficult. Lacking a cell wall, Mycoplasma is unaffected by antibiotics that target cell-wall synthesis, such as the beta-lactams. The absence of specific serological assays leaves nucleic acid amplification tests (NAAT) as the only viable option for detection of Mgen DNA or RNA. However, samples with a positive NAAT for the pathogen should be tested for macrolide resistance mutations, which are strongly correlated with azithromycin treatment failures, owing to rapid rates of mutation of the pathogen. Mutations in the 23S rRNA gene of Mgen have been linked with clinical treatment failure and high-level in vitro macrolide resistance. Macrolide resistance-mediating mutations have been observed in 20–50% of cases in the UK, Denmark, Sweden, Australia, and Japan. Resistance is also developing towards second-line antimicrobials like the fluoroquinolones. 
According to the European guidelines, the indications for commencing diagnosis of Mgen infection are: Detection of nucleic acid (DNA and/or RNA) specific for Mgen in a clinical specimen Current partners of individuals who tested positive for Mgen should be treated with the same antimicrobial as the index patient If the current partner does not attend for evaluation and testing, treatment with the same regimen as given to the index patient should be offered on epidemiological grounds On epidemiological grounds for sexual contacts in the previous 3 months; ideally, specimens for a Mgen NAAT should be collected before treatment and treatment should not be given before the results are available Screening for Mgen with a combination of detection and macrolide resistance mutation testing will provide the information required to develop personalised antimicrobial treatments, in order to optimise patient management and control the spread of antimicrobial resistance (AMR). Detection of resistance Owing to the widespread macrolide resistance, samples that are positive for Mgen should ideally be followed up with an assay capable of detecting mutations that mediate antimicrobial resistance. The European Guideline on Mgen infections recommends complementing the molecular detection of Mgen with an assay capable of detecting macrolide resistance-associated mutations. Furthermore, molecular assays for quinolone resistance-associated mutations are available at specialised laboratories for suspected treatment failure due to treatment with moxifloxacin. Treatment The U.S. Centers for Disease Control and Prevention recommends a step-wise treatment approach for Mycoplasma genitalium, with doxycycline for seven days followed immediately by a seven-day course of moxifloxacin as the preferred therapy due to high rates of macrolide resistance. 
If resistance assay testing is available, and the Mgen is sensitive to macrolides, the CDC recommends a seven-day course of doxycycline followed by a four-day course of azithromycin. Although the majority of M. genitalium strains are sensitive to moxifloxacin, resistance has been reported, and the potential for serious adverse side effects should be considered with this regimen. Fluoroquinolones, including moxifloxacin, have been associated with disabling and potentially irreversible serious adverse reactions that have occurred together, including: Tendinitis and tendon rupture Peripheral neuropathy Central nervous system effects and other serious side effects detailed in the FDA black box warning. Moxifloxacin/Avelox should be reserved for use when patients have no other treatment options. In settings without access to resistance testing, or if moxifloxacin cannot be used, the CDC recommends as an alternative regimen: seven days of doxycycline followed by the four-day course of azithromycin, with a test of cure 21 days after treatment being required due to the high rate of macrolide resistance. Beta-lactam antibiotics are not effective against Mgen as the organism lacks a cell wall. In the UK the British Association for Sexual Health and HIV (BASHH) guidelines for treatment are: Doxycycline 100mg twice a day for seven days followed by azithromycin 1 gram orally as a single dose then 500mg orally once daily for 2 days where the organism is known to be macrolide-sensitive or where resistance status is unknown. Moxifloxacin 400mg orally once daily for 10 days if the organism is known to be macrolide-resistant or where treatment with azithromycin has failed. Treatment of Mycoplasma genitalium infections is becoming increasingly difficult due to rapidly growing antimicrobial resistance. Diagnosis and treatment are further hampered by the fact that Mycoplasma genitalium infections are not routinely tested for. 
Studies have demonstrated that a 5-day course of azithromycin has a superior cure rate compared to a single, larger dose. Further, a single dose of azithromycin can lead to the bacteria becoming resistant to azithromycin. Among Swedish patients, doxycycline was shown to be relatively ineffective (with a cure rate of 48% for women and 38% for men), and treatment with a single dose of azithromycin is not prescribed due to it inducing antimicrobial resistance. The five-day treatment with azithromycin showed no development of antimicrobial resistance. Based on these findings, UK doctors are moving to the 5-day azithromycin regimen. Doxycycline is also still used, and moxifloxacin is used as a second-line treatment in case doxycycline and azithromycin are not able to eradicate the infection. In patients where doxycycline, azithromycin and moxifloxacin all failed, pristinamycin has been shown to still be able to eradicate the infection. History Mycoplasma genitalium was originally isolated in 1980 from urethral specimens of two male patients with non-gonococcal urethritis in the genitourinary medicine (GUM) clinic at St Mary's Hospital, Paddington, London. It was reported in 1981 by a team led by Joseph G. Tully. Under electron microscopy, it appears as a flask-shaped cell with a narrow terminal portion that is crucial for its attachment to the host cell surfaces. The bacterial cell is slightly elongated, somewhat like a vase, and measures 0.6–0.7 μm in length, 0.3–0.4 μm at the broadest region, and 0.06–0.08 μm at the tip. The base is broad while the tip is stretched into a narrow neck, which terminates with a cap. The terminal region has a specialised structure called the nap, which is absent in other mycoplasmas. Serological tests indicated that the bacterium was not related to known species of Mycoplasma. The comparison of genome sequences with other urinogenital bacteria, such as M. hominis and Ureaplasma parvum, revealed that M. 
genitalium is significantly different, especially in the energy-generating pathways, although it shares a core genome of ~250 protein-encoding genes. In 2018, Gupta et al. proposed to change the name of Mycoplasma genitalium to Mycoplasmoides genitalium on phylogenetic grounds, reflecting the existing knowledge that M. genitalium is not closely related to other Mycoplasma. The change became the correct name under the International Code of Nomenclature of Prokaryotes (ICNP, "Code") with Validation List 184, published by the ICSP ("Committee"). Mycoplasmologists working in the field generally oppose this renaming. In 2019, they published an opinion paper arguing that even though the phylogenetic methods are valid, Gupta's renaming scheme causes too many changes, which is impractical and confusing. They cite some essential principles of the Code, such as "no unnecessary new names", "aim at stability of names", and "avoid or reject the use of names which may cause error or confusion". However, the 2019 argument for preserving old names was rejected by the Committee in Opinion 122 of 2022, where it was ruled that the argument incorrectly cited the Code. The Opinion emphasizes that use of an older validly published name remains acceptable under the Code. Synthetic genome On 6 October 2007, Craig Venter announced that a team of scientists led by Nobel laureate Hamilton Smith at the J. Craig Venter Institute had successfully constructed synthetic DNA with which they planned to make the first synthetic genome. Reporting in The Guardian, Venter said that they had stitched together a DNA strand containing 381 genes, consisting of 580,000 base pairs, based on the genome of M. genitalium. On 24 January 2008, they announced the successful creation of a synthetic bacterium, which they named Mycoplasma genitalium JCVI-1.0 (the name of the strain indicating J. Craig Venter Institute with its specimen number). 
They synthesised and assembled the complete 582,970-base pair genome of the bacterium. The final stages of synthesis involved cloning the DNA into the bacterium E. coli for nucleotide production and sequencing. This produced large fragments of approximately 144,000 base pairs or 1/4th of the whole genome. Finally, the products were cloned inside the yeast Saccharomyces cerevisiae to synthesize the 580,000 base pairs. The molecular size of the synthetic bacterial genome is 360,110 kilodaltons (kDa). Printed in 10-point font, the letters of the genome cover 147 pages. On 20 July 2012, Stanford University and the J. Craig Venter Institute announced successful simulation of the complete life cycle of a Mycoplasma genitalium cell, in the journal Cell. The entire organism is modeled in terms of its molecular components, integrating all cellular processes into a single model. Using object oriented programming to model the interactions of 28 categories of molecules, including DNA, RNA, proteins, and metabolites, and running on a 128 computer Linux cluster, the simulation takes 10 hours for a single M. genitalium cell to divide once—about the same time the actual cell takes—and generates half a gigabyte of data. Research The discovery of Protein M, a protein produced by M. genitalium, was announced in February 2014. The protein was identified during investigations on the origin of multiple myeloma, a B-cell hematologic neoplasm. To understand the long-term Mycoplasma infection, it was found that antibodies from multiple myeloma patients' blood were recognised by M. genitalium. The antibody reactivity was due to a protein, designated Protein M, that is chemically responsive to all types of human and nonhuman antibodies available. The protein is about 50 kDa in size, and composed of 556 amino acids. 
Mgen evolved from a gram-positive, clostridium-like ancestor but has lost the genes that code for the enzymes involved in de novo nucleic acid synthesis, amino acid synthesis, and synthesis of fatty acids. This means that Mgen needs the host's growth factors to keep reproducing. Although Mgen has abilities that help it adhere to cells, it is still unknown how the bacteria can maintain an infection inside the epithelial cells of the ectocervix and vagina when shedding of the apical layer of cells occurs. The organism's ability to adhere to host cells relies on two proteins, P110 and P140. Adhesion is an important step in beginning an infection in a cell, and Mgen can adhere to spermatozoa, erythrocytes, and epithelial cells. The terminal organelle relies on these proteins as well, because without them the organelle was not present. The segmented paired plates of Mgen form an electron-dense core which is anchored to the cell membrane. The end of this core is in contact with the wheel complex, and the wheel complex contains the proteins MG219, MG200, MG386, and MG491, which aid in the gliding motility of the bacteria. Although Mgen lacks secreted virulence factors, the protein MG186 degrades host nucleic acids, as it is a calcium-dependent membrane-associated nuclease. See also Smallest organisms References External links Mycoplasma genitalium Reference Work at the UK Health Protection Agency Type strain of Mycoplasma genitalium at BacDive - the Bacterial Diversity Metadatabase genitalium Model organisms Organism size Sexually transmitted diseases and infections Synthetic biology Bacteria described in 1983 Pathogenic bacteria Infectious causes of cancer
Mycoplasma genitalium
Physics,Mathematics,Engineering,Biology
3,660
1,551,187
https://en.wikipedia.org/wiki/Pledge%20drive
A pledge drive is an extended period of fundraising activities, generally used by public broadcasting stations to increase contributions. The term "pledge" originates from the promise that a contributor makes to send in funding at regular intervals for a certain amount of time. During a pledge drive, regular and special programming is followed by on-air appeals for pledges by station employees, who ask the audience to make their contributions, usually by phone or the Internet, during this break. Pledge drives are typically held two to four times annually, at calendar periods which vary depending on the scheduling designated by the local public broadcasting station. Background Pledge drives are especially common among U.S. stations. Public broadcasting organizations like National Public Radio (NPR) and the Public Broadcasting Service (PBS) are largely dependent on program fees paid by their member stations. The federal government of the United States provides some money for them, primarily through the Corporation for Public Broadcasting (CPB), and corporate underwriting. American public broadcasting services hold pledge drives about two to three times each year, each one usually lasting one to two weeks. Some religious broadcasting organizations, including Educational Media Foundation (which operates the K-Love and Air1 radio networks), also rely heavily on such program fees. These stations require funding in turn from listeners and viewers (as well as, if necessary, local corporate sponsors) for not only these fees, but also other daily operating costs, and stage regular pledge drives in an attempt to persuade their audiences to contribute donations. Originally, such programming consisted of arts presentations such as classical music, drama, and documentaries. However, the audience for supposedly "high-brow" fare began declining steadily during the 1980s and 1990s, due to the attrition of the generations to whom such programming mainly appealed. 
Younger people were less interested in the higher arts, for a variety of reasons having to do with the eclipse of "high culture" in American society. To appeal to its remaining audience, which is largely Euro-American, middle-aged and affluent (the so-called "Baby Boomers" and "Generation X"), PBS has resorted to specials such as self-help programs with speakers such as Suze Orman, nostalgic popular music concerts (including T. J. Lubinsky's My Music concert series, produced specifically for pledge drive airings), and special versions of PBS' traditionally popular "how-to" programs. This approach was largely pioneered by the Oklahoma Educational Television Authority (OETA), which introduced a number of popular music specials as part of its 1987 pledge drive. A retrospective on The Lawrence Welk Show was originally introduced as pledge drive material in 1987; its popularity prompted the OETA to acquire rerun rights to the series and distribute it through PBS. A hallmark of pledge breaks is the "pledge room", where the speakers deliver their message as volunteering individuals answer ringing telephones in the background. In some cases, the pledge room may actually be a fictionalized part of the program (noticeable if it is drastically different from program to program and is neutralized, featuring none of the member station's logos within the set dressing), with the "volunteers" actually paid actors feigning telephone calls and the hosts filmed months in advance. Small prizes such as mugs, tote bags, various DVD sets, and books (known as "thank-you" gifts or, euphemistically, as "premiums"), as well as entries into drawings for larger awards such as trips and vehicles donated by local businesses, are also offered by many stations in return for pledging certain amounts of money. Pledges can be fulfilled either in monthly installments or as a one-time contribution, e.g. $15 a month or $180 up front. Controversy Pledge drives have been controversial for most of their existence.
While pledge drives are an effective method of raising money for stations, they usually annoy viewers and listeners, who find it a nuisance that ordinarily commercial-free content is regularly interrupted and that the station's regular programming is suspended for lifestyle and music specials. Audience numbers often decline during pledge drives; to compensate, most television stations air special television shows during these fundraising periods. This practice began in earnest in the mid-1970s due to CPB funding cutbacks that were the result of political pressures and the recessions of the time, as well as increasing inflation. As the proportion of government funding in stations' budgets continued to decline over time, such programs became more elaborate in order to sway people who would otherwise watch public television only sporadically (or not at all) to tune in, and possibly donate money in response to appeals during program breaks. There has also been criticism of the format for depending on controversial self-help writers or lecturers not usually a part of any regular PBS member station's schedule, or for presenting programs targeted only at a wealthy and/or older demographic (as seen with Doo Wop 50) while completely ignoring the viewing needs of other audiences. Stations also have had to reckon with scaling back or dispensing with pledge drives entirely during PBS Kids children's programming, since the disruption of a familiar routine, for a cause children can neither understand nor contribute to, could push those young viewers towards commercial children's programming on other networks or Internet streaming.
Generally speaking, the phenomenon is less pronounced on American public radio stations, primarily because of the high popularity of the news and talk programs on that medium and the routine-based patterns of radio listeners that are much more easily disrupted than those of television, along with stricter underwriting guidelines and less tolerance for the television formats and hosts on radio. Much of the focus is placed upon the "drive time" NPR news programs Morning Edition and All Things Considered, which have the highest ratings of all public broadcasting in the U.S. This is in contrast to PBS member stations sometimes holding their drives during prime time daily and on weekend afternoons, and not during the daytime on weekdays or weekend mornings, when children's programming is typically scheduled. However, in light of intense competition public broadcasting faces from a greatly expanded media environment, other stations, especially radio, have aimed to eliminate pledge drives altogether, or significantly reduce their length, by asking for contributions throughout the year during regular station identification breaks. On radio, such programs as ATC may have one of their planned stories deleted simply to extend the length of the fund-raising "pitches". In a more recent trend, some stations also advertise that pledge drives will be shortened by one day for every day's worth of contributions donated in the weeks leading up to a drive. Additionally, some radio stations have started using prospect screening during their pledge drive to identify potential major donors for later fundraising activities. Another service which has cut down pledge drives is the introduction of PBS's Passport streaming service, which provides a tangible and continuing item (full streaming access to several years of PBS's programs) with a monthly or yearly contribution, rather than a one-time premium. 
See also Telethon Underwriting spot "The Pledge Drive", an episode of Seinfeld about a WNET pledge drive "The One Where Phoebe Hates PBS", an episode of Friends also featuring a WNET pledge drive and guest-starring Gary Collins as the drive's host
https://en.wikipedia.org/wiki/Audiovisual%20education
Audiovisual (AV) education or multimedia-based education (MBE) is an instruction method where particular attention is paid to the audiovisual or multimedia presentation of the material to improve comprehension and retention. History The concept of audiovisual aids can be traced back to the seventeenth century, when John Amos Comenius, a Bohemian educator, used illustrations of everyday objects as teaching aids in his book, Orbis Sensualium Pictus. Other early advocates of using visual materials in teaching included Jean-Jacques Rousseau, John Locke and J. H. Pestalozzi. Audiovisual aids were also widely used by the armed forces during World War II. The United States Air Force created over 400 training films and 600 film strips to be shown to military personnel. Types of audiovisual materials include film strips, microforms, slides, projected opaque materials, tape recordings, and flashcards. In the current digital world, audiovisual aids have grown exponentially with multimedia such as educational DVDs, PowerPoint, television educational series, YouTube, and other online materials. The goal of audio-visual aids is to enhance the teacher's ability to present the lesson in a simple, effective, and easy-to-understand way for students. Audiovisual materials make learning more permanent since students use more than one sense. It is important that state and federal education ministries, as policymakers for secondary schools, are made aware of the need to support audiovisual resources as a core teaching approach in curricula, particularly since many secondary schools lack the resources to produce such materials themselves. Visual instruction makes abstract ideas more concrete for learners, which provides a basis for schools to understand the importance of encouraging and supporting the use of audiovisual resources.
In addition, studies have shown a significant difference between the use and non-use of audiovisual material in teaching and learning. Objectives To strengthen students' learning skills and make teaching-learning more effective. To attract and retain learners' attention. To generate interest across different levels of students. To develop lesson plans that are simple and easy to follow. To make the class more interactive and interesting. To focus on a student-centered approach. Advantages Digital tools are used to improve the teaching-learning process in the modern world. The most common tool in classrooms these days is PowerPoint slides, which make the class more interesting, dynamic, and effective, and also help introduce new topics easily. The use of audiovisual aids helps students remember a concept for a more extended period. They convey the same meaning as words but give clearer concepts, thus making learning more effective. Integrating technology into the classroom helps students experience things virtually or vicariously. For example, if a teacher in India wants to give a lesson on the Taj Mahal, only some of the students may have visited the place, but showing it through a video allows all the students to see the monument with their own eyes. Although first-hand experience is the best form of educative experience, such experience cannot always be provided in practice, so substitutes are needed. Using audio-visual aids helps maintain discipline in the class since all the students' attention is focused on learning. Such interactive sessions also develop critical thinking and reasoning, which are important components of the teaching-learning process. Audiovisual materials provide opportunities for effective communication between teachers and students in learning.
For example, in a study of English as a Foreign Language (EFL) classrooms, the difficulties faced by EFL learners included lack of motivation, lack of exposure to the target language, and lack of pronunciation modeling by the teacher; such challenges can be addressed with audio serving the purpose of communication and visuals providing more exposure. Students learn when they are motivated and curious about something. Traditional verbal instruction can be boring and painful for students, whereas audio-visual material motivates them by piquing their curiosity and stimulating their interest in the subject. Disadvantages Too much audio-visual material used at one time can result in boredom; it is effective only if implemented well. Since each teaching/learning situation varies, not all concepts can be learned effectively through audiovisuals. Equipment such as projectors, speakers, and headphones is often costly, so some schools cannot afford it. It takes teachers a lot of time to prepare lessons for interactive classroom sessions, and the valuable time spent gaining familiarity with new equipment may not be recoverable. Some students may feel reluctant to ask questions while a film is playing, and the equipment itself can be a physical barrier in small rooms. In places where electricity is not available, such as some rural areas, it is not feasible to use audio-visual aids that require electricity. Conclusion Audiovisual aids are essential tools in the teaching-learning process. They help the teacher present the lesson effectively, and students learn and retain the concepts better and for longer. The use of audio-visual aids improves students' critical and analytical thinking, and helps make abstract concepts concrete through visual presentation. However, improper and unplanned use of these aids can negatively affect the learning outcome.
Therefore, teachers should be well trained through in-service training to maximize the benefits of using these aids. The curriculum should be designed with options for activity-based learning through audio-visual aids. In addition, the government should fund resources to purchase audio-visual aids in schools. Equipment used for audiovisual presentations Television LCD projectors Film projectors Slide projectors Opaque projectors (episcopes and epidiascopes) Overhead projectors
https://en.wikipedia.org/wiki/Van%20Wijngaarden%20grammar
In computer science, a Van Wijngaarden grammar (also vW-grammar or W-grammar) is a formalism for defining formal languages. The name derives from the formalism invented by Adriaan van Wijngaarden for the purpose of defining the ALGOL 68 programming language. The resulting specification remains its most notable application. Van Wijngaarden grammars address the problem that context-free grammars cannot express agreement or reference, where two different parts of the sentence must agree with each other in some way. For example, the sentence "The birds was eating" is not Standard English because it fails to agree on number. A context-free grammar would parse "The birds was eating" and "The birds were eating" and "The bird was eating" in the same way. However, context-free grammars have the benefit of simplicity whereas van Wijngaarden grammars are considered highly complex. Two levels W-grammars are two-level grammars: they are defined by a pair of grammars, that operate on different levels: the hypergrammar is an attribute grammar, i.e. a set of context-free grammar rules in which the nonterminals may have attributes; and the metagrammar is a context-free grammar defining possible values for these attributes. The set of strings generated by a W-grammar is defined by a two-stage process: within each hyperrule, for each attribute that occurs in it, pick a value for it generated by the metagrammar; the result is a normal context-free grammar rule; do this in every possible way; use the resulting (possibly infinite) context-free grammar to generate strings in the normal way. The consistent substitution used in the first step is the same as substitution in predicate logic, and actually supports logic programming; it corresponds to unification in Prolog, as noted by Alain Colmerauer. 
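The two-stage process can be sketched concretely. The following Python program (a toy illustration of my own, not taken from any W-grammar tool) generates the classic non-context-free language of strings aⁿbⁿcⁿ: a metagrammar N ::= 1 | N1 produces unary metavalues, each metavalue is consistently substituted into a small hyperrule schema to yield an ordinary context-free grammar, and that grammar is then derived in the normal way.

```python
import re

# Metagrammar:  N ::= 1 | N1   -- N derives the unary strings 1, 11, 111, ...
# Hyperrule schema (the metavariable N gets the SAME value everywhere):
#   start       ::= <a part N> <b part N> <c part N>
#   <x part N1> ::= x <x part N>      for each terminal x
#   <x part 1>  ::= x

def instantiate(n):
    """Consistently substitute the unary metavalue n for N in every
    hyperrule, producing ordinary context-free rules."""
    rules = {"start": f"<a part {n}> <b part {n}> <c part {n}>"}
    for x in "abc":
        m = n
        while len(m) > 1:
            rules[f"<{x} part {m}>"] = f"{x} <{x} part {m[:-1]}>"
            m = m[:-1]
        rules[f"<{x} part 1>"] = x
    return rules

def derive(rules):
    """Expand nonterminals left to right until only terminals remain."""
    s = rules["start"]
    while "<" in s:
        nt = re.search(r"<[^>]*>", s).group(0)
        s = s.replace(nt, rules[nt], 1)
    return s.replace(" ", "")

# One context-free grammar per metavalue; the W-grammar's language is the union.
print([derive(instantiate("1" * k)) for k in range(1, 4)])
# ['abc', 'aabbcc', 'aaabbbccc']
```

Because the same metavalue is substituted consistently into all three parts of the start rule, the counts of a's, b's and c's are forced to agree, which no single context-free grammar can achieve.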
W-grammars are Turing complete; hence, all decision problems regarding the languages they generate, such as whether a W-grammar generates a given string, or whether it generates any string at all, are undecidable. Curtailed variants, known as affix grammars, were developed, and applied in compiler construction and to the description of natural languages. Definite logic programs, that is, logic programs that make no use of negation, can be viewed as a subclass of W-grammars. Motivation and history In the 1950s, attempts started to apply computers to the recognition, interpretation and translation of natural languages, such as English and Russian. This requires a machine-readable description of the phrase structure of sentences, that can be used to parse and interpret them, and to generate them. Context-free grammars, a concept from structural linguistics, were adopted for this purpose; their rules can express how sentences are recursively built out of parts of speech, such as noun phrases and verb phrases, and ultimately, words, such as nouns, verbs, and pronouns. This work influenced the design and implementation of programming languages, most notably, of ALGOL 60, which introduced a syntax description in Backus–Naur form. However, context-free rules cannot express agreement or reference (anaphora), where two different parts of the sentence must agree with each other in some way. These can be readily expressed in W-grammars. (See example below.) Programming languages have the analogous notions of typing and scoping. A compiler or interpreter for the language must recognize which uses of a variable belong together (refer to the same variable). This is typically subject to constraints such as: A variable must be initialized before its value is used. In strongly typed languages, each variable is assigned a type, and all uses of the variable must respect its type. Often, its type must be declared explicitly, before use.
W-grammars are based on the idea of providing the nonterminal symbols of context-free grammars with attributes (or affixes) that pass information between the nodes of the parse tree, used to constrain the syntax and to specify the semantics. This idea was well known at the time; e.g. Donald Knuth visited the ALGOL 68 design committee while developing his own version of it, attribute grammars. By augmenting the syntax description with attributes, constraints like the above can be checked, ruling many invalid programs out at compile time. As Van Wijngaarden wrote in his preface: Quite peculiar to W-grammars was their strict treatment of attributes as strings, defined by a context-free grammar, on which concatenation is the only possible operation; complex data structures and operations can be defined by pattern matching. (See example below.) After their introduction in the 1968 ALGOL 68 "Final Report", W-grammars were widely considered as too powerful and unconstrained to be practical. This was partly a consequence of the way in which they had been applied; the 1973 ALGOL 68 "Revised Report" contains a much more readable grammar, without modifying the W-grammar formalism itself. Meanwhile, it became clear that W-grammars, when used in their full generality, are indeed too powerful for such practical purposes as serving as the input for a parser generator. They describe precisely all recursively enumerable languages, which makes parsing impossible in general: it is an undecidable problem to decide whether a given string can be generated by a given W-grammar. Hence, their use must be seriously constrained when used for automatic parsing or translation. Restricted and modified variants of W-grammars were developed to address this, e.g. 
Extended Affix Grammars (EAGs), applied to describe the grammars of natural languages such as English and Spanish; Q-systems, also applied to natural language processing; the CDL series of languages, applied as compiler construction languages for programming languages. After the 1970s, interest in the approach waned; occasionally, new studies are published. Examples Agreement in English grammar In English, nouns, pronouns and verbs have attributes such as grammatical number, gender, and person, which must agree between subject, main verb, and pronouns referring to the subject: I wash myself. She washes herself. We wash ourselves. are valid sentences; invalid are, for instance: *I washes ourselves. *She wash himself. *We wash herself. Here, agreement serves to stress that both pronouns (e.g. I and myself) refer to the same person. A context-free grammar to generate all such sentences:
<sentence> ::= <subject> <verb> <object>
<subject> ::= I | You | He | She | We | They
<verb> ::= wash | washes
<object> ::= myself | yourself | himself | herself | ourselves | yourselves | themselves
From <sentence>, we can generate all combinations:
I wash myself
I wash yourself
I wash himself
[...]
They wash yourselves
They wash themselves
A W-grammar to generate only the valid sentences:
<sentence <NUMBER> <GENDER> <PERSON>> ::= <subject <NUMBER> <GENDER> <PERSON>> <verb <NUMBER> <PERSON>> <object <NUMBER> <GENDER> <PERSON>>
<subject singular <GENDER> 1st> ::= I
<subject <NUMBER> <GENDER> 2nd> ::= You
<subject singular male 3rd> ::= He
<subject singular female 3rd> ::= She
<subject plural <GENDER> 1st> ::= We
<subject plural <GENDER> 3rd> ::= They
<verb singular 1st> ::= wash
<verb singular 2nd> ::= wash
<verb singular 3rd> ::= washes
<verb plural <PERSON>> ::= wash
<object singular <GENDER> 1st> ::= myself
<object singular <GENDER> 2nd> ::= yourself
<object singular male 3rd> ::= himself
<object singular female 3rd> ::= herself
<object plural <GENDER> 1st> ::= ourselves
<object plural <GENDER> 2nd> ::= yourselves
<object plural <GENDER> 3rd> ::= themselves
<NUMBER> ::== singular | plural
<GENDER> ::== male | female
<PERSON> ::== 1st | 2nd | 3rd
A standard non-context-free language A well-known non-context-free language is {a^n b^n c^n : n ≥ 1}. A two-level grammar for this language is the metagrammar
N ::= 1 | N1
X ::= a | b | c
together with the grammar schema
Start ::= <a N> <b N> <c N>
<X N1> ::= X <X N>
<X 1> ::= X
Questions. If one substitutes a new letter, say C, for N1, is the language generated by the grammar preserved? Or should N1 be read as a string of two symbols, that is, N followed by 1? End of questions. Requiring valid use of variables in ALGOL The Revised Report on the Algorithmic Language Algol 60 defines a full context-free syntax for the language.
Assignments are defined as follows (section 4.2.1):
<left part> ::= <variable> := | <procedure identifier> :=
<left part list> ::= <left part> | <left part list> <left part>
<assignment statement> ::= <left part list> <arithmetic expression> | <left part list> <Boolean expression>
A <variable> can be (amongst other things) an <identifier>, which in turn is defined as:
<identifier> ::= <letter> | <identifier> <letter> | <identifier> <digit>
Examples (section 4.2.2):
s:=p[0]:=n:=n+1+s
n:=n+1
A:=B/C-v-q×S
S[v,k+2]:=3-arctan(s×zeta)
V:=Q>Y^Z
Expressions and assignments must be type checked: for instance, in n:=n+1, n must be a number (integer or real); in A:=B/C-v-q×S, all variables must be numbers; in V:=Q>Y^Z, all variables must be of type Boolean. The rules above distinguish between <arithmetic expression> and <Boolean expression>, but they cannot verify that the same variable always has the same type. This (non-context-free) requirement can be expressed in a W-grammar by annotating the rules with attributes that record, for each variable used or assigned to, its name and type. This record can then be carried along to all places in the grammar where types need to be matched, and used to implement type checking. Similarly, it can be used to check initialization of variables before use, and so on. One may wonder how to create and manipulate such a data structure without explicit support in the formalism for data structures and operations on them.
It can be done by using the metagrammar to define a string representation for the data structure and using pattern matching to define operations:
<left part with <TYPED> <NAME>> ::= <variable with <TYPED> <NAME>> := | <procedure identifier with <TYPED> <NAME>> :=
<left part list <TYPEMAP1>> ::= <left part with <TYPED> <NAME>> <where <TYPEMAP1> is <TYPED> <NAME> added to sorted <EMPTY>> | <left part list <TYPEMAP2>> <left part with <TYPED> <NAME>> <where <TYPEMAP1> is <TYPED> <NAME> added to sorted <TYPEMAP2>>
<assignment statement <ASSIGNED TO> <USED>> ::= <left part list <ASSIGNED TO>> <arithmetic expression <USED>> | <left part list <ASSIGNED TO>> <Boolean expression <USED>>
<where <TYPED> <NAME> is <TYPED> <NAME> added to sorted <EMPTY>> ::=
<where <TYPEMAP1> is <TYPED1> <NAME1> added to sorted <TYPEMAP2>> ::= <where <TYPEMAP2> is <TYPED2> <NAME2> added to sorted <TYPEMAP3>> <where <NAME1> is lexicographically before <NAME2>>
<where <TYPEMAP1> is <TYPED1> <NAME1> added to sorted <TYPEMAP2>> ::= <where <TYPEMAP2> is <TYPED2> <NAME2> added to sorted <TYPEMAP3>> <where <NAME2> is lexicographically before <NAME1>> <where <TYPEMAP3> is <TYPED1> <NAME1> added to sorted <TYPEMAP4>>
<where <EMPTY> is lexicographically before <NAME1>> ::= <where <NAME1> is <LETTER OR DIGIT> followed by <NAME2>>
<where <NAME1> is lexicographically before <NAME2>> ::= <where <NAME1> is <LETTER OR DIGIT> followed by <NAME3>> <where <NAME2> is <LETTER OR DIGIT> followed by <NAME4>> <where <NAME3> is lexicographically before <NAME4>>
<where <NAME1> is lexicographically before <NAME2>> ::= <where <NAME1> is <LETTER OR DIGIT 1> followed by <NAME3>> <where <NAME2> is <LETTER OR DIGIT 2> followed by <NAME4>> <where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>>
<where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>> ::= <where <LETTER OR DIGIT 1> precedes <LETTER OR DIGIT 2>>
<where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 2>> ::= <where <LETTER OR DIGIT 1> precedes+ <LETTER OR DIGIT 3>> <where <LETTER OR DIGIT 3> precedes+ <LETTER OR DIGIT 2>>
<where a precedes b> ::=
<where b precedes c> ::=
[...]
<TYPED> ::== real | integer | Boolean
<NAME> ::== <LETTER> | <NAME> <LETTER> | <NAME> <DIGIT>
<LETTER OR DIGIT> ::== <LETTER> | <DIGIT>
<LETTER OR DIGIT 1> ::== <LETTER OR DIGIT>
<LETTER OR DIGIT 2> ::== <LETTER OR DIGIT>
<LETTER OR DIGIT 3> ::== <LETTER OR DIGIT>
<LETTER> ::== a | b | c | [...]
<DIGIT> ::== 0 | 1 | 2 | [...]
<NAMES1> ::== <NAMES>
<NAMES2> ::== <NAMES>
<ASSIGNED TO> ::== <NAMES>
<USED> ::== <NAMES>
<NAMES> ::== <NAME> | <NAME> <NAMES>
<EMPTY> ::==
<TYPEMAP> ::== (<TYPED> <NAME>) <TYPEMAP>
<TYPEMAP1> ::== <TYPEMAP>
<TYPEMAP2> ::== <TYPEMAP>
<TYPEMAP3> ::== <TYPEMAP>
When compared to the original grammar, three new elements have been added: attributes to the nonterminals in what are now the hyperrules; metarules to specify the allowable values for the attributes; new hyperrules to specify operations on the attribute values. The new hyperrules are ε-rules: they only generate the empty string. ALGOL 68 examples The ALGOL 68 reports use a slightly different notation without <angled brackets>. ALGOL 68 as in the 1968 Final Report §2.1
a) program : open symbol, standard prelude, library prelude option, particular program, exit, library postlude option, standard postlude, close symbol.
b) standard prelude : declaration prelude sequence.
c) library prelude : declaration prelude sequence.
d) particular program : label sequence option, strong CLOSED void clause.
e) exit : go on symbol, letter e letter x letter i letter t, label symbol.
f) library postlude : statement interlude.
g) standard postlude : strong void clause train
ALGOL 68 as in the 1973 Revised Report §2.2.1, §10.1.1
program : strong void new closed clause
A) EXTERNAL :: standard ; library ; system ; particular.
B) STOP :: label letter s letter t letter o letter p.
a) program text : STYLE begin token, new LAYER1 preludes, parallel token, new LAYER1 tasks PACK, STYLE end token.
b) NEST1 preludes : NEST1 standard prelude with DECS1, NEST1 library prelude with DECSETY2, NEST1 system prelude with DECSETY3, where (NEST1) is (new EMPTY new DECS1 DECSETY2 DECSETY3).
c) NEST1 EXTERNAL prelude with DECSETY1 : strong void NEST1 series with DECSETY1, go on token ; where (DECSETY1) is (EMPTY), EMPTY.
d) NEST1 tasks : NEST1 system task list, and also token, NEST1 user task PACK list.
e) NEST1 system task : strong void NEST1 unit.
f) NEST1 user task : NEST2 particular prelude with DECS, NEST2 particular program PACK, go on token, NEST2 particular postlude, where (NEST2) is (NEST1 new DECS STOP).
g) NEST2 particular program : NEST2 new LABSETY3 joined label definition of LABSETY3, strong void NEST2 new LABSETY3 ENCLOSED clause.
h) NEST joined label definition of LABSETY : where (LABSETY) is (EMPTY), EMPTY ; where (LABSETY) is (LAB1 LABSETY1), NEST label definition of LAB1, NEST joined label definition of LABSETY1.
i) NEST2 particular postlude : strong void NEST2 series with STOP.
A simple example of the power of W-grammars is clause
a) program text : STYLE begin token, new LAYER1 preludes, parallel token, new LAYER1 tasks PACK, STYLE end token.
This allows BEGIN ... END and { } as block delimiters, while ruling out BEGIN ... } and { ... END. One may wish to compare the grammar in the report with the Yacc parser for a subset of ALGOL 68 by Marc van Leeuwen. Implementations Anthony Fisher wrote yo-yo, a parser for a large class of W-grammars, with example grammars for expressions, eva, sal and Pascal (the actual ISO 7185 standard for Pascal uses extended Backus–Naur form). Dick Grune created a C program that would generate all possible productions of a W-grammar. Applications outside of ALGOL 68 The applications of Extended Affix Grammars (EAGs) mentioned above can effectively be regarded as applications of W-grammars, since EAGs are so close to W-grammars. W-grammars have also been proposed for the description of complex human actions in ergonomics.
A W-Grammar Description has also been supplied for Ada. See also Affix grammar Extended Affix Grammar Attribute grammar
https://en.wikipedia.org/wiki/Clause%20%28logic%29
In logic, a clause is a propositional formula formed from a finite collection of literals (atoms or their negations) and logical connectives. A clause is true either whenever at least one of the literals that form it is true (a disjunctive clause, the most common use of the term), or when all of the literals that form it are true (a conjunctive clause, a less common use of the term). That is, it is a finite disjunction or conjunction of literals, depending on the context. Clauses are usually written as follows, where the symbols l_i are literals:
l_1 ∨ ⋯ ∨ l_n
Empty clauses A clause can be empty (defined from an empty set of literals). The empty clause is denoted by various symbols such as ∅, ⊥, or □. The truth evaluation of an empty disjunctive clause is always false. This is justified by considering that false is the neutral element of the monoid ({false, true}, ∨). The truth evaluation of an empty conjunctive clause is always true. This is related to the concept of a vacuous truth. Implicative form Every nonempty (disjunctive) clause is logically equivalent to an implication of a head from a body, where the head is an arbitrary literal of the clause and the body is the conjunction of the complements of the other literals. That is, the clause is true under a truth assignment exactly when the implication holds: if all the literals of the body are true, then the head must also be true. This equivalence is commonly used in logic programming, where clauses are usually written as an implication in this form. More generally, the head may be a disjunction of literals. If a_1, …, a_m are the literals in the body of a clause and b_1, …, b_n are those of its head, the clause is usually written as follows:
b_1, …, b_n ← a_1, …, a_m
If n = 1 and m = 0, the clause is called a (Prolog) fact. If n = 1 and m > 0, the clause is called a (Prolog) rule. If n = 0 and m > 0, the clause is called a (Prolog) query. If n > 1, the clause is no longer Horn.
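The body/head split described above is mechanical, which a short sketch makes clear (a toy representation of my own: a disjunctive clause is a list of literals, each literal a pair of an atom name and a polarity flag):

```python
# Toy sketch: a disjunctive clause as a list of (atom, is_positive) pairs.

def implicative_form(clause):
    """Split a disjunctive clause into (body, head): the complements of the
    negative literals form the body, the positive literals form the head."""
    body = [atom for atom, positive in clause if not positive]
    head = [atom for atom, positive in clause if positive]
    return body, head

# The clause  ¬p ∨ ¬q ∨ r  is equivalent to the implication  p ∧ q → r:
print(implicative_form([("p", False), ("q", False), ("r", True)]))
# (['p', 'q'], ['r'])

def classify(clause):
    """Prolog-style classification by head count n and body count m."""
    body, head = implicative_form(clause)
    n, m = len(head), len(body)
    if n == 1 and m == 0: return "fact"
    if n == 1 and m > 0:  return "rule"
    if n == 0 and m > 0:  return "query"
    return "non-Horn"

print(classify([("p", True)]))                # fact
print(classify([("q", False), ("p", True)]))  # rule
```

The classification mirrors the fact/rule/query cases above: a clause is Horn precisely when it has at most one positive literal, i.e. at most one literal ends up in the head.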
See also Conjunctive normal form Disjunctive normal form Horn clause
https://en.wikipedia.org/wiki/Anne%20Neville%20%28engineer%29
Anne Neville (21 March 1970 – 2 July 2022) was the Royal Academy of Engineering Chair in emerging technologies and Professor of Tribology and Surface Engineering at the University of Leeds. Early life and education Anne Neville grew up in Dumfries with her older sister Linda. Their mother, Doris, worked as a pharmacy technician and their father, Bill, was a process worker at ICI Dumfries. Her uncle is Professor Robert Black, Emeritus Professor of Scots Law at the University of Edinburgh. Neville was educated at Maxwelltown High School in Dumfries, where her interest in maths and physics grew; she was also a good badminton player and played the trumpet. Unsure of what to study at university, she at one point considered becoming a social worker, and went into engineering by accident: the Glasgow University prospectus fell open at the page with a picture of a Rolls-Royce gas turbine, and she thought it looked interesting. Her maths teacher, a mechanical engineer, inspired her to investigate further. After visiting university open days, Neville decided that she wanted to study engineering, rejecting her initial thoughts of studying either maths or physics. She began her studies at the University of Glasgow in 1988, graduated in 1992 with a First Class Honours BEng degree, and completed a PhD in mechanical engineering in 1995. Her PhD was an experimental study of corrosion and tribocorrosion processes on high-alloy stainless steels and Ni-alloys, and her work led to an increased understanding of the synergies that exist between corrosion and wear processes. Career and research Neville was a mechanical engineer with a specific interest in corrosion, tribology and the processes that occur at engineering interfaces. She was appointed a lecturer at Heriot-Watt University immediately after her PhD and started to build a research team.
Anne Neville's contributions were manifold, spanning lubrication and wear, mineral scaling and tribo-corrosion, with applications in fields as diverse as the oil and gas sector, wind energy and surgical technologies. In particular, her group were the first to measure corrosion rates in situ in hip joint simulators, which made important contributions to the investigation of the controversies surrounding metal-on-metal hip implants. In 2009 and 2013 her work was used to guide UK health authorities on what to do about hip prostheses that had shown unacceptably high failure rates in patients. Her team used advanced microscopy and x-ray spectroscopy to understand how surfaces are lubricated in industrial and medical components. Her research team grew to 25 researchers during her time at Heriot-Watt University, where she was promoted to Reader in 1999 and then Professor in 2002. Neville and her group moved to Leeds in 2003, where she founded and directed the Institute of Functional Surfaces (iFS), which comprised 70 researchers. The institute had a £10 million funding portfolio that spanned many agencies and industrial sectors, including medical, oil and gas and automotive. Neville published nearly 700 peer-reviewed articles during her career, with more than 11,000 citations, and her publications were widely relied upon. Neville retired from her Leeds chair in 2020, having been diagnosed with terminal cancer.
Awards and honours Anne Neville was the first woman to win the Royal Society of Edinburgh's 150-year-old Makdougall Brisbane prize, in 1999. She was an Engineering and Physical Sciences Research Council (EPSRC) Advanced Fellow from 1999 to 2004, elected a Fellow of the Royal Society of Edinburgh (FRSE) in 2005, a Fellow of the Institution of Mechanical Engineers (FIMechE) in 2007, a Fellow of the Institute of Materials, Minerals and Mining in 2009 and a Fellow of the Royal Academy of Engineering (FREng) in 2010. She was awarded the Institution of Mechanical Engineers' Donald Julius Groen prize in 2010, a Royal Society Wolfson Research Merit Award in 2011, the Donald Julius Groen Prize for Tribology in 2012, a Royal Society Wolfson Research Merit Award in 2013 and the 2014 STLE Wilbert Shultz Prize, and was selected as an EPSRC RISE Fellow in 2014, an honour bestowed on the best established and future leaders in engineering and physical sciences. In 2015, Neville was awarded an Engineering and Physical Sciences Suffrage Science award. She was the first woman to be awarded the Institution of Mechanical Engineers' James Clayton Prize and she was also the first woman to win the Royal Society's Leverhulme Medal, in 2016, for "revealing diverse physical and chemical processes at interacting interfaces, emphasising significant synergy between tribology and corrosion". Anne Neville was appointed OBE in the 2017 New Year Honours for services to engineering. She was elected a Fellow of the Royal Society (FRS) in 2017. She received honorary degrees from Heriot-Watt University (DEng, 2017) and the University of Glasgow (DEng, 2019). Neville was awarded the Royal Society's Clifford Paterson Medal in 2022. She was posthumously inducted into the Scottish Engineering Hall of Fame in October 2022. Personal life Anne Neville married Mark McKelvie in 1999 and their daughter Rachel was born in 2005.
Views Neville believed that getting more women into engineering could be achieved by ensuring that, at primary school level, the same number of girls and boys engage with technology. "Male or female… go for it! You will have the time of your life. I can honestly say I love my job. As an academic in engineering I can do what I want in terms of research as long as I can raise the funds to pay for it. This is a real privilege. I have travelled the world, met some brilliant people and have had great fun. What else could you ask for in a job?" Death Anne Neville was first diagnosed with cancer in 2008. Neville died at her home on 2 July 2022. References 1970 births 2022 deaths Officers of the Order of the British Empire Fellows of the Royal Society Female fellows of the Royal Society Fellows of the Institution of Mechanical Engineers Fellows of the Royal Academy of Engineering Female fellows of the Royal Academy of Engineering Fellows of the Royal Society of Edinburgh People educated at Maxwelltown High School Mechanical engineers Academics of the University of Leeds Alumni of the University of Glasgow 21st-century British women engineers Tribologists Academics of Heriot-Watt University Scottish Engineering Hall of Fame inductees
Anne Neville (engineer)
Materials_science
1,329
23,820,492
https://en.wikipedia.org/wiki/Gymnopilus%20chrysotrichoides
Gymnopilus chrysotrichoides is a species of mushroom in the family Hymenogastraceae. Description The cap is in diameter. Habitat and distribution Gymnopilus chrysotrichoides has been found growing on coconut logs, in Cuba in October. See also List of Gymnopilus species References External links Gymnopilus chrysotrichoides at Index Fungorum chrysotrichoides Fungi of North America Taxa named by William Alphonso Murrill Fungus species
Gymnopilus chrysotrichoides
Biology
105
43,827,511
https://en.wikipedia.org/wiki/Marysville%20Power%20Plant
The Marysville Power Plant, nicknamed the Mighty Marysville, was a coal-fired power plant in Marysville, Michigan, on the shore of the St. Clair River. The plant was demolished on November 7, 2015, after the land was sold to a developer. History of the Property The power plant was built on land that formerly housed a lumber mill. The mill was erected in 1690 and is claimed to be the first European settlement in present-day St. Clair County. In 1817 legislator, judge, and businessman Zephaniah W. Bunce came to the area. He named it "Bunceville" and the creek that ran through it "Bunce Creek". Bunceville, along with other small settlements along the St. Clair River, would be consolidated into the community of Marysville in the late 1800s. The Bunce house was demolished for construction of the power plant and the creek was re-routed underneath the property. A rock and plaque mark the location of the Bunce homestead on the Detroit Edison property. History of the Plant Work on the plant began in 1914, as demand grew for electrical power north of Metro Detroit. The plant started generating electricity in 1922 with its original two units. Two more units were added in 1926. During the mid-1940s, the "high side" was added to the power plant. The "high side" referred to higher-pressure steam, which could turn higher-capacity 75 MW generators. During this period nearly 300 people worked at the power plant. The plant had a maximum capacity of 300 MW when both the low and high sides were operational. Between 1988 and 1992 the plant was idled; it returned to service in 1992. For nearly the next ten years, the power plant continued to run with the two "high side" units, for a total of 150 MW. In 2001 the plant was idled again. In 2012 the plant was officially decommissioned and the property was placed on the market.
Demolition and Implosion DTE Energy placed the property on the market in 2012 and held an auction to sell any remaining equipment of value inside the plant. The property was sold to Commercial Development Corporation in 2013. Contractors began demolition shortly after, demolishing the plant's turbine hall and historic club house. On November 7, 2015, the remaining boiler house was imploded. Plans for the site include a hotel, condos, and a marina. See also List of power stations in Michigan References Energy infrastructure completed in 1922 Coal-fired power stations in Michigan DTE Energy 1922 establishments in Michigan 2001 disestablishments in Michigan Buildings and structures in St. Clair County, Michigan Buildings and structures demolished in 2015 Buildings and structures demolished by controlled implosion Demolished power stations in the United States Former coal-fired power stations in Michigan Articles containing video clips
Marysville Power Plant
Engineering
567
43,261,589
https://en.wikipedia.org/wiki/Ampliotrema%20cocosense
Ampliotrema cocosense is a little-known species of corticolous (bark-dwelling) lichen in the family Graphidaceae. Found on Cocos Island, Costa Rica, it was described as new to science in 2011. Its distinctive features include its large, ascospores and a notable chemical composition. Taxonomy Ampliotrema cocosense was formally described as a new species by lichenologists Robert Lücking and José Luis Chaves. The species epithet of this lichen is derived from its type locality, which is Cocos Island. The type specimen was collected in April 1992 on a trail above the ranger station in Cocos Island National Park, located in Puntarenas, Costa Rica. Description Ampliotrema cocosense has a grey-olive-yellow thallus with a texture and a dense cortex. The and/or medulla contain clusters of calcium oxalate crystals. The apothecia are sessile and rounded, measuring 1–2 mm in diameter, with a brown-black that is partially covered by a 0.3–0.6 mm wide pore. The disc also exhibits a yellow-to-orange . The margin is entire, fused, and yellowish-white, covered by a thalline layer. The species lacks a and . The hymenium has a height of 150–200 μm and is with apically branched . The ascospores are 80–100 by 17–22 μm in size, oblong, colourless, and I+ (violet-blue) when treated with iodine (indicating an amyloid reaction). They are richly with thick septa and rounded . The secondary chemistry of this species includes protocetraric and virensic acids along with structurally related compounds, and the thallus (medulla) has C−, K−, and P+ (orange-red) reactions to standard chemical spot tests. The lichen is most closely related to A. lepadinoides, which has similarly sized , but much narrower with transverse septa only.
Ampliotrema cocosense can be distinguished from other Ocellularia species with large muriform ascospores by its unique combination of a wide pore, absence of a columella, paraplectenchymatous , inspersed hymenium, and protocetraric acid as its main secondary substance. Habitat and distribution Ampliotrema cocosense is known from a single, well-developed collection found in the lower montane rainforest of Cocos Island. The island's lichen diversity is not well-documented, particularly for non-foliicolous (leaf-dwelling) species. The discovery of this new species indicates that Cocos Island may harbour a unique lichen biota for certain groups of lichens. Most of the island's lichen flora represents pantropical species, with A. cocosense as the sole endemic representative. References Graphidaceae Lichen species Lichens described in 2011 Lichens of Central America Taxa named by Robert Lücking Species known from a single specimen
Ampliotrema cocosense
Biology
637
5,979,010
https://en.wikipedia.org/wiki/Elliptic%20unit
In mathematics, elliptic units are certain units of abelian extensions of imaginary quadratic fields constructed using singular values of modular functions, or division values of elliptic functions. They were introduced by Gilles Robert in 1973, and were used by John Coates and Andrew Wiles in their work on the Birch and Swinnerton-Dyer conjecture. Elliptic units are an analogue for imaginary quadratic fields of cyclotomic units. They form an example of an Euler system. Definition A system of elliptic units may be constructed for an elliptic curve E with complex multiplication by the ring of integers R of an imaginary quadratic field F. For simplicity we assume that F has class number one. Let a be an ideal of R with generator α. For a Weierstrass model of E, define Θ_a(P) = α^(−12) Δ(E)^(N(a)−1) ∏_{Q ∈ E[a]∖{O}} (x(P) − x(Q))^(−6), where P is a point on E, Δ is the discriminant, x is the X-coordinate on the Weierstrass model, and the product runs over the nonzero a-torsion points Q. The function Θ_a is independent of the choice of model, and is defined over the field of definition of E. Properties Let b be an ideal of R coprime to a and Q an R-generator of the b-torsion. Then Θ_a(Q) is defined over the ray class field K(b), and if b is not a prime power then Θ_a(Q) is a global unit; if b is a power of a prime p then Θ_a(Q) is a unit away from p. The function Θ_a satisfies a distribution relation for b = (β) coprime to a: ∏_{R ∈ E[b]} Θ_a(P + R) = Θ_a(βP). See also Modular unit References Robert, Gilles, Unités elliptiques (Elliptic units), Bull. Soc. Math. France, Supp. Mém. No. 36, Bull. Soc. Math. France, Tome 101, Société Mathématique de France, Paris, 1973, 77 pp. Algebraic number theory Modular forms
Elliptic unit
Mathematics
386
36,255,747
https://en.wikipedia.org/wiki/Lactarius%20fennoscandicus
Lactarius fennoscandicus is a member of the large milk-cap genus Lactarius in the order Russulales. It is found in Scandinavia, where it grows in a mycorrhizal association with spruce trees. Taxonomy The species was described as new to science in 1998 by mycologists Annemieke Verbeken and Jan Vesterholt. The type locality was in Siljanfors, Sweden. Description The fruit bodies have caps that are initially convex with a central depression and an inward-curled margin, later becoming more funnel-shaped, reaching a diameter of . The slightly sticky cap surface is marked into circular zones. The colour of the inner zones ranges from brownish with vinaceous tones to cinnamon, with the colours lightening moving outwards toward the margin. The crowded gills have a decurrent attachment to the stipe. They are peach to yellowish orange, and turn greyish green where bruised. The stipe measures long by thick and is cylindrical to somewhat club-shaped. The flesh is whitish in the centre and orangish near the surface; it turns bluish-green when injured. It lacks any distinctive odour and has a taste that is initially mild before turning bitter. The spores are somewhat spherical to ellipsoid, and measure 7.5–8.1 by 6.1–6.5 μm. The surface features ridges and warts that form an incomplete network. The basidia (spore-bearing cells) are four-spored, cylindrical to somewhat club-shaped, and measure 50–60 μm. The cap cuticle is a 50–100-micrometre-thick ixocutis, whereby the hyphae are gelatinous and lie flat in a horizontal layer. Habitat and distribution Lactarius fennoscandicus is known from boreal forest in Finland and Sweden. See also List of Lactarius species References External links fennoscandicus Fungi described in 1998 Fungi of Europe Fungus species
Lactarius fennoscandicus
Biology
421
75,136,868
https://en.wikipedia.org/wiki/Raygrantite
Raygrantite is a mineral first discovered in the Big Horn Mountains, Maricopa County, Arizona, US. More specifically, it was found at the Evening Star Mine, a Cu, V, Pb, Ag, Au, and W mine. Raygrantite is a member of the iranite mineral group, which consists of hemihedrite, iranite, and raygrantite. The mineral received its name in honor of Raymond W. Grant, a retired professor who primarily focused on the minerals of Arizona. The typical crystal habit of raygrantite is bladed, with striations parallel to the c axis. Its ideal chemical formula is Pb10Zn(SO4)6(SiO4)2(OH)2. The IMA (International Mineralogical Association) approved raygrantite in 2013, and the first publication regarding the mineral appeared in 2017. Occurrence Raygrantite is associated with cerussite, galena, mattheddleite, lanarkite, leadhillite, anglesite, alamosite, hydrocerussite, diaboleite, and caledonite. Crystals were found in pockets encased in masses of galena. Raygrantite is a secondary mineral resulting from pyrite-galena-chalcopyrite veins. In this district of the Rocky Mountains, intrusions can date back to the late Cretaceous period. Physical properties Raygrantite is a colorless, transparent mineral that occurs as bladed crystals with striations parallel to the c axis. Its luster is vitreous, meaning it looks similar to glass. Raygrantite rates 3 on the Mohs hardness scale, 0.5 softer than a penny. It exhibits brittle tenacity and has good cleavage along the {120} plane. The mineral also shows characteristic fishtail twinning along the {12} in addition to a twin axis along the {010}. Its recorded density is 6.374 g/cm3. Optical properties Raygrantite is transparent with a vitreous luster. It is biaxial positive, which means it refracts light along two axes. The mineral's 2Vmeas. is 76°(2) and 2Vcalc. is 85°. The refractive indices are: nα = 1.915(7), nβ = 1.981(7), nγ = 2.068(9). Dispersion is strong, v < r.
Raygrantite also exhibits absorption of Z > Y > X. Chemical structure Raygrantite is isotypic with iranite and hemihedrite. In the chemical structure of the iranite mineral group, there are 10 symmetrically independent non-H cation sites. Of these sites, five are filled by lead, Pb2+ (Pb1, Pb2, Pb3, Pb4, and Pb5). Three are filled by S6+ (S1, S2, and S3). Finally, one of the sites is filled by Si4+, and the last is filled by Zn2+. Raygrantite is composed of layers of tetrahedra and octahedra joined together by lead ions. Chemical composition X-ray crystallography To collect this data, a Bruker X8 APEX2 CCD X-ray diffractometer equipped with graphite-monochromatized MoKα radiation was used. These analyses show that raygrantite is a member of the triclinic crystal system. The space group of this mineral is – Pinacoidal. The X-ray diffraction data also yield the unit cell dimensions: a = 9.3175(4) Å, b = 11.1973(5) Å, c = 10.08318(5) Å. See also List of minerals References Natural materials Lead minerals Triclinic minerals Minerals in space group 11 Zinc minerals Sulfate minerals Wikipedia Student Program
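The unit-cell edge lengths reported above determine the cell volume via the standard triclinic formula V = abc·sqrt(1 − cos²α − cos²β − cos²γ + 2·cosα·cosβ·cosγ). A minimal sketch of that arithmetic follows; note that the inter-axial angles α, β, γ are not given in the text above, so the 90° values used in the example call are placeholders, not measured data for raygrantite:

```python
import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume (Å^3) of a triclinic lattice from edge lengths (Å)
    and inter-axial angles (degrees), using the general formula
    V = abc * sqrt(1 - cos^2 a - cos^2 b - cos^2 g + 2 cos a cos b cos g)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Edge lengths from the article; the 90° angles below are placeholders,
# since the true inter-axial angles are not quoted in the text.
V = triclinic_volume(9.3175, 11.1973, 10.08318, 90.0, 90.0, 90.0)
```

With the true refined angles substituted for the placeholders, the same function gives the published cell volume.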
Raygrantite
Physics
857
66,854,421
https://en.wikipedia.org/wiki/Surface%20equivalence%20principle
In electromagnetism, the surface equivalence principle or surface equivalence theorem relates an arbitrary current distribution within an imaginary closed surface to an equivalent source on the surface. It is also known as the field equivalence principle, Huygens' equivalence principle or simply as the equivalence principle. Being a more rigorous reformulation of the Huygens–Fresnel principle, it is often used to simplify the analysis of radiating structures such as antennas. Certain formulations of the principle are also known as the Love equivalence principle and the Schelkunoff equivalence principle, after Augustus Edward Hough Love and Sergei Alexander Schelkunoff, respectively. Physical meaning General formulation The principle yields an equivalent problem for a radiation problem by introducing an imaginary closed surface and fictitious surface current densities. It is an extension of the Huygens–Fresnel principle, which describes each point on a wavefront as a spherical wave source. The equivalence of the imaginary surface currents is enforced by the uniqueness theorem in electromagnetism, which dictates that a unique solution can be determined by fixing a boundary condition on a system. With the appropriate choice of the imaginary current densities, the fields inside the surface or outside the surface can be deduced from the imaginary currents. In a radiation problem with given current density sources, electric current density J and magnetic current density M, the tangential field boundary conditions necessitate that J_s = n̂ × (H1 − H) and M_s = −n̂ × (E1 − E), where J_s and M_s correspond to the imaginary current sources that are impressed on the closed surface and n̂ is the outward unit normal to it. E and H represent the electric and magnetic fields inside the surface, respectively, while E1 and H1 are the fields outside of the surface. Both the original and imaginary currents should produce the same external field distributions.
Love and Schelkunoff equivalence principles Per the boundary conditions, the fields inside the surface and the current densities can be arbitrarily chosen as long as they produce the same external fields. Love's equivalence principle, introduced in 1901 by Augustus Edward Hough Love, takes the internal fields as zero. The fields inside the surface are referred to as null fields. Thus, the surface currents are chosen so as to sustain the external fields in the original problem. Alternatively, a Love equivalent problem for the field distribution inside the surface can be formulated: this requires the negative of the surface currents for the external radiation case. Thus, the surface currents will radiate the fields of the original problem inside the surface; nevertheless, they will produce null external fields. The Schelkunoff equivalence principle, introduced by Sergei Alexander Schelkunoff, substitutes the closed surface with a perfectly conducting material body. In the case of a perfect electrical conductor, the electric currents that are impressed on the surface will not radiate due to Lorentz reciprocity. Thus, the original currents can be substituted with surface magnetic currents only. A similar formulation for a perfect magnetic conductor would use impressed electric currents. The equivalence principles can also be applied to conductive half-spaces with the aid of the method of image charges. Applications The surface equivalence principle is heavily used in the analysis of antenna problems to simplify the problem: in many applications, the closed surface is chosen so as to encompass the conductive elements to alleviate the limits of integration. Selected uses in antenna theory include the analysis of aperture antennas and the cavity model approach for microstrip patch antennas. It has also been used as a domain decomposition method for method-of-moments analysis of complex antenna structures. Schelkunoff's formulation is employed particularly for scattering problems.
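As a small numerical sketch (not from the article) of Love's null-field choice: with zero fields inside the surface, the impressed currents reduce to J_s = n̂ × H and M_s = −n̂ × E, evaluated with the fields just outside the surface. The plane-wave field values below are illustrative assumptions:

```python
import numpy as np

# Plane wave propagating along +z in free space: E along x, H along y,
# with |E|/|H| = eta0 (free-space wave impedance, ~376.73 ohms).
eta0 = 376.730313668
E = np.array([1.0, 0.0, 0.0])          # V/m, just outside the surface
H = np.array([0.0, 1.0 / eta0, 0.0])   # A/m, just outside the surface

# Local patch of the imaginary closed surface with outward normal along -z,
# so the source producing this wave lies inside the surface.
n = np.array([0.0, 0.0, -1.0])

# Love equivalence with null interior fields:
#   J_s =  n x H   (equivalent electric surface current density, A/m)
#   M_s = -n x E   (equivalent magnetic surface current density, V/m)
J_s = np.cross(n, H)
M_s = -np.cross(n, E)
```

Radiating J_s and M_s over the whole surface into free space reproduces the original external fields, which is the content of the equivalence.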
The principle has also been used in the analysis and design of metamaterials such as Huygens' metasurfaces and plasmonic scatterers. See also Aperture antennas Babinet's principle Electromagnetism uniqueness theorem Huygens–Fresnel principle Reciprocity (electromagnetism) References Bibliography Electromagnetism Antennas Diffraction Electromagnetic radiation
Surface equivalence principle
Physics,Chemistry,Materials_science,Engineering
779
36,980,430
https://en.wikipedia.org/wiki/15%20Draconis
15 Draconis is a single star in the northern circumpolar constellation of Draco, located 452 light years away from the Sun. 15 Draconis is the Flamsteed designation; it also has the Bayer designation A Draconis. This object is visible to the naked eye as a white-hued star with an apparent visual magnitude of 4.94. It is moving closer to the Earth with a heliocentric radial velocity of −7 km/s. This star has a stellar classification of A0 III, matching that of an A-type giant star. It has a relatively high rate of spin with a projected rotational velocity of 154 km/s. The star is radiating 286 times the Sun's luminosity from its photosphere at an effective temperature of 9,980 K. References A-type giants Draco (constellation) Draconis, 15 Draconis, A Durchmusterung objects 149212 080650 6161
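The quoted luminosity and effective temperature together fix the star's radius through the Stefan–Boltzmann relation L = 4πR²σT⁴, so that R/R_sun = √(L/L_sun)·(T_sun/T)². A rough check of that arithmetic (the solar effective temperature of 5772 K is a standard reference value, not from the article):

```python
import math

T_sun = 5772.0    # K, nominal solar effective temperature (assumed value)
L_ratio = 286.0   # L / L_sun, from the article
T_eff = 9980.0    # K, effective temperature from the article

# Stefan-Boltzmann: L ∝ R^2 T^4, so R/R_sun = sqrt(L/L_sun) * (T_sun/T)^2
R_ratio = math.sqrt(L_ratio) * (T_sun / T_eff) ** 2
# ≈ 5.7 solar radii
```

This kind of back-of-the-envelope radius estimate is how the physical size of a star is usually inferred from the photometric quantities given in such articles.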
15 Draconis
Astronomy
205
818,329
https://en.wikipedia.org/wiki/Co-counselling
Co-counselling (spelled co-counseling in American English) is a grassroots method of personal change based on reciprocal peer counselling. It uses simple methods. Time is shared equally and the essential requirement of the person taking their turn in the role of counsellor is to do their best to listen and give their full attention to the other person. It is not a discussion; the aim is to support the person in the client role to work through their own issues in a mainly self-directed way. Co-counselling was originally formulated in the early 1950s by the American Harvey Jackins and originated in a schism in the Dianetics movement (itself in part derived from schisms in general semantics and cybernetics). Jackins founded the Re-evaluation Counseling (RC) Communities, with headquarters in Seattle, Washington, United States. His son, Tim Jackins, is currently the international leader of Re-evaluation Counseling and its main affiliates. Like other offshoots of Dianetics such as Scientology and the Landmark Forum, Re-evaluation Counseling has features of a cult and an authoritarian leadership structure that actively suppresses dissent and critique. There are a number of smaller, separate, independent organizations that have resulted from breakaways from, or re-workings of, Re-evaluation Counseling. The principal one of these is Co-Counseling International (CCI). General description The main activity in co-counselling involves participants arranging to meet regularly in pairs to give each other peer-to-peer counselling, in turn taking the role of counsellor and client, with equal amounts of time allocated to each. Co-counselling functions by giving people an opportunity to work on whatever issues they choose with the accepting support of another person. The person in the role of counsellor acts as a facilitator to the client, sometimes as a third-party observer and sometimes as a second-party confidant. 
While co-counselling is sometimes practiced outside a formal organisation, formal co-counselling organisations have developed leadership and support structures, including trainings and retreats. Safety (in the sense of being very low risk) and the sense that a co-counselling session is a safe space are important to the methods. There are strict rules of confidentiality. In most circumstances, the counsellor may not talk about a client's session without explicit and specific permission from the client. This is stricter than in other practices, where practitioners discuss clients with supervisors, colleagues and sometimes with other people. The peer relationship makes a considerable contribution to a sense of trust. The nature of the co-counselling session opens up the possibility for people to get in touch with emotions that they would avoid in any other circumstance. A belief in the value of working with emotions has become a core focus of the approach. Co-counselling training emphasizes methods for accessing and working with emotions, and co-counsellors aim to develop and improve emotional competence through the practice. The actual effectiveness of this method has not been demonstrated. To get involved in co-counselling, it is usually first necessary to complete a Fundamentals course. The training involves learning how to carry out the roles of client and counsellor. Trainers may be counsellors or simply experienced members of the community. It also covers the guidelines or rules affecting co-counselling for the particular organization. Differences in approach mean that each organization normally requires completion of one of its own courses as a prerequisite for membership, even if someone has already completed a course with another organization. Theoretical framework and assumptions The original theory of co-counselling centres on the concept of distress patterns.
These are patterns of behaviour (behaviour that tends to be repeated in a particular type of circumstance) that are irrational, unhelpful or compulsive. The theory is that these patterns are driven by the accumulated consequences in the mind of memories, not currently conscious, of past events in which the person was unable to express or discharge the emotion appropriate to the event. Co-counselling enables release from the patterns by allowing "emotional discharge" of the past hurt experiences. Such cathartic discharge includes crying, warm perspiration, trembling, yawning, laughing and relaxed, non-repetitive talking. In day-to-day life, these "discharging" actions may be limited by social norms, such as taboos around crying, which are widespread in many cultures. Having temporary, undivided, supportive attention from another person often gives rise to strong feelings towards that person; a counselling partner can come to feel like a best friend for life, and sometimes people "fall in love" with each other. This is similar to the phenomenon of transference, particularly when one of the partners is felt to have more authority because, for instance, they are more experienced, are teachers of co-counselling, or have authority roles within the organisation. The organisations differ in the ways that they handle this. An inability to trust and feel in real relationships is sometimes exacerbated by the intimacy of the co-counselling relationship, making transference a possibility, but participants are strongly encouraged and supported to counsel through these feelings, often leading to profound changes in their perspectives and abilities around closeness. For the most part, co-counselling relationships become life-long, therapeutic partnerships that enable the participant to have healthier relationships in general. Therapeutic context Many co-counsellors take the view, often quite strongly, that co-counselling is not psychotherapy.
In the beginning, this was because Re-evaluation Counseling decided not to draw on any discipline of psychotherapy for its theory and practice, although RC did incorporate some ideas from psychoanalysis such as "unconscious promptings", which Jackins adapted and relabeled "restimulation". A similar view is taken by some non-RC co-counsellors, who regard psychotherapy as involving specialist techniques used by a therapist on a client: it is therefore not peer-based, and the client has little or no control over the process. Others consider that co-counselling is psychotherapeutic, in that it enables change or therapy to take place in the psyche, soul, affect or being of an individual. Co-counselling takes a positive view of the person (i.e. we are all essentially good), considers the mind and body as an integrated whole and acknowledges the value of catharsis; it is regarded as an approach within humanistic psychology, a view that would be rejected by some within RC. Re-evaluation Counseling The core organization structure of RC consists of classes and local communities set up by experienced co-counsellors, which are in turn organized by region and country. The term "re-evaluation" refers to the client's need to rethink their past distress experiences after the emotional hurt in those experiences has been discharged, and thereby regain ("re-emerge" with) their natural intellectual and emotional capacities. The RC organization and literature do not accept the description of its practice as psychotherapy, maintaining instead that the dissolving of distress patterns through emotional discharge in the context of appreciative attention is simply a natural process that does not imply either psychopathology on the part of the individual or the need for professional treatment.
Re-evaluation Counseling regards other forms of "mainstream counselling" and psychotherapy in general as frequently inadequate attempts to bring about relief from distress using methods that do not focus on discharge and re-emergence. In RC, the client and counsellor are expected to work co-operatively, participants are expected to provide non-judgmental active listening and to "contradict" the misinformation or other conditions thought to be associated with distress patterns. RC also engages techniques such as "non-permissive" counselling, in which the counsellor intervenes to "interrupt" client patterns without the consent of the client. The structure of RC is one of clearly defined leadership, to encourage clarity in the difficult struggles many people have to achieve breakthroughs against their distresses. RC encourages counsellors to think very hard about all possible ways to assist the client in discharging. RC approaches the issue of feelings between co-counsellors by having a strict "no-socialising" rule. RC co-counsellors are expected not to socialise or have social or sexual relationships with other co-counsellors unless these relationships pre-dated their becoming co-counsellors. RC specifically rejects the label "transference" for this phenomenon, as this is seen as part of a "symptomatic" method typical in psychology; the original theory of co-counselling (from RC) teaches that the best thing to do in these circumstances is repeatedly counsel on, and "discharge" about, such feelings. In addition, methods of "getting attention out of distress" are available which help with the difficulty of "switching roles" between counsellor and client. When taught correctly, counsellors are soon able to grasp the difference between counselling relationships and those from outside life. However, sometimes there is a marked pull to "socialise" or confuse the boundaries of the co-counselling relationship with other types of relationships. 
This is one reason why many consider a well-organised community of co-counsellors with clear rules to be essential to the successful practice of co-counselling. Re-evaluation Counseling places high importance on understanding and adhering to a comprehensive theory about the nature of the universe and of human beings (known in general as the "Benign Reality"), the best ways of assisting the discharge process, and pro-liberation attitudes in co-counselling. RCers believe that, taken together, these enable the counsellor to keep a clear picture of the client's "re-emergence" and are therefore very effective. People disagreeing with the theoretical perspective are asked to think and discharge on the points at issue before actively challenging such perspectives. The main aim is to provide a safe, stable and supportive atmosphere within which people can "client" skillfully and also lead "re-emergent lives" in which they are not dependent in a therapeutic sense, but instead become more energetic and effective (a state known as "zestfulness" in RC). Co-Counselling International Co-Counselling International (CCI) was started in 1974 as a breakaway from Re-evaluation Counseling by John Heron, who was at the time director of the Human Potential Research Project at the University of Surrey, UK, together with Tom Sargent and Dency Sargent from Hartford, Connecticut, United States. Unlike other breakaways from RC, which involved changes of leadership but otherwise continued to practise in similar ways to RC, the CCI break was ideological, and CCI developed in significantly different ways. Relations between CCI and RC The existence of other co-counselling organisations is generally not mentioned in RC, and RC co-counsellors are often not aware of their existence. 
Amongst those within RC who know about it, CCI is often seen as an "attack organisation" and was specifically condemned as such in many private and public conversations by Jackins, who claimed that Heron had started it against a specific agreement not to, and in breach of RC guidelines he had previously agreed to. In turn, Heron and many of his supporters claimed that RC was authoritarian and cult-like, and later, that Jackins engaged in sexual abuse of clients. RC supporters parried that CCI fostered a sexually-liberal atmosphere that blurred the boundaries of co-counselling and relationships. The history of co-counselling including its origins with RC is normally taught on CCI Fundamentals courses. CCI, by its nature, has no corporate opinion about RC, and individual CCI co-counsellors have their own views. Most CCI co-counsellors have a benevolent view toward RC, regarding it as a different, alternative approach to co-counselling. Membership of RC is not a bar to membership of CCI, and a few people manage to do both despite the RC ban. See also Tim Jackins Harvey Jackins John Heron List of counseling topics Critical friend References Further reading Jackins, Harvey (1970); Fundamentals of co-counselling manual; Rational Island, Seattle; Jackins, Harvey (1973); The human situation; Rational Island, Seattle; Ernst, Sheila & Goodison, Lucy (1981); In Our Own Hands; The Women's Press, London; Evison, Rose & Horobin, Richard (1988); Co-counselling in J Rowan & W Dryden (eds) Innovative therapy in Britain; Open University Press, Milton Keynes; Caroline New, Katie Kauffman (July 2004); Co-Counselling: The Theory and Practice of Re-Evaluation Counselling; Brunner-Routledge; Postle Denis (2003) Letting the Heart Sing - The Mind Gymnasium London: Wentworth; R.D. Rosen, Psychobabble, 1975, chapter on Jackins and Co-counselling. Personal development Counseling Health movements
Co-counselling
Biology
2,660
77,070,659
https://en.wikipedia.org/wiki/Chimalliviridae
Chimalliviridae is a family of bacteriophages in the class Caudoviricetes. It includes the subfamily Gorgonvirinae, 19 genera (including Phikzvirus), and 33 species. References Virus families Caudovirales
Chimalliviridae
Biology
58
8,712,440
https://en.wikipedia.org/wiki/Switched%20video
Switched video or switched digital video (SDV), sometimes referred to as switched broadcast (SWB), is a telecommunications industry term for a network scheme for distributing digital video via cable. Switched video sends digital video more efficiently, freeing bandwidth for other services. The scheme applies to digital video distribution both on typical cable TV systems using QAM channels and on IPTV systems. Description In hybrid fibre-coaxial systems, a fiber optic network extending from the operator's head end carries video channels out to a fiber optic node that serves up to 2,000 endpoints. Video is then sent via coaxial cable. Only a percentage of these homes are actively watching channels at a given time, and rarely are all channels being accessed by the homes in the service group. In a switched video system, the unwatched channels do not need to be sent. In US cable systems, equipment in the home sends a channel request signal back to the distribution hub. When a channel is requested, the distribution hub allocates a QAM channel and transmits the requested channel onto the coaxial cable. For this to work, the home equipment must have two-way communication ability. Switched video uses the same mechanisms as video on demand and may be viewed as non-ending video on demand that users share. Technical Two-way communication is handled differently between cable and IPTV schemes. IPTV uses Internet communication protocols but requires a different distribution infrastructure. US cable companies elected the less costly approach of upgrading existing infrastructure. In the upgrade approach, various proprietary schemes use specific frequencies for messaging the distribution hub. For switched video to work on cable systems, digital television users in a subscription group must have devices capable of communicating to the distribution hub in a compatible manner. 
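The request-and-allocate cycle described above can be illustrated with a toy model: a QAM channel is bound only when some set-top box in the service group asks for a program, shared by subsequent viewers, and reclaimed when the last viewer tunes away. This sketch is purely illustrative (the class, method names, and capacity are invented for the example, not any vendor's API):

```python
class SdvHub:
    """Toy model of a switched digital video distribution hub."""

    def __init__(self, qam_capacity):
        self.qam_capacity = qam_capacity  # QAM channels available on the coax
        self.allocations = {}             # program -> set of viewing set-top boxes

    def request(self, viewer, program):
        """Set-top box requests a program; returns True if it is carried."""
        if program in self.allocations:
            # Program already on a QAM channel: the new viewer shares it.
            self.allocations[program].add(viewer)
            return True
        if len(self.allocations) < self.qam_capacity:
            # Bind a free QAM channel to this program.
            self.allocations[program] = {viewer}
            return True
        return False  # all QAM channels in the service group are busy

    def release(self, viewer, program):
        """Viewer tunes away; free the QAM channel if nobody is left watching."""
        viewers = self.allocations.get(program)
        if viewers:
            viewers.discard(viewer)
            if not viewers:
                del self.allocations[program]

hub = SdvHub(qam_capacity=2)
hub.request("stb-1", "news")
hub.request("stb-2", "news")    # shared: still only one QAM channel in use
hub.request("stb-3", "sports")
print(len(hub.allocations))     # 2 QAM channels carry three viewers
```

The point of the model is the efficiency claim in the text: unwatched programs consume no QAM channels at all, and popular programs cost one channel regardless of how many homes watch.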
Unlike other features dependent on two-way communication such as video on demand, the requirement to upgrade all digital set-top boxes within a group makes conversion to switched video expensive. CableLabs proposed in the CableCARD 2.0 specification that two-way communication be supported with a scheme that required more powerful hardware capable of running Java software. Many cable companies indicated they would build lower cost devices that do not require this OCAP programming environment, so that upgrading to switched video would not be as costly. Consumer electronics companies also prefer a lighter weight solution, and so absent a standard, the conversion to switched video may require many years. History BigBand Networks (acquired by Arris Group in 2011) was the switched video pioneer, and received the Technology & Engineering Emmy Award in 2008 for innovation in the HFC market. Major vendors like Arris Group and Cisco also provide SDV solutions for the cable operators. An emerging market supplies back office applications to analyze and control performance. See also Cable television Internet Protocol television (IPTV) Quadrature amplitude modulation References External links Definition of switched video - PC Magazine Overview Switched Digital Video Architecture Guide - Cisco White Paper How Switched Digital Video Works - HowStuffWorks.com Outline Using Bandwidth More Efficiently with Switched Digital Video - Motorola White Paper (archived) Broadband Cable television technology Streaming television
Switched video
Technology
617
502,832
https://en.wikipedia.org/wiki/Kitti%27s%20hog-nosed%20bat
Kitti's hog-nosed bat (Craseonycteris thonglongyai), also known as the bumblebee bat, is a near-threatened species of bat and the only extant member of the family Craseonycteridae. It occurs in western Thailand and southeast Myanmar, where it occupies limestone caves along rivers. Kitti's hog-nosed bat is the smallest species of bat and arguably the world's smallest mammal by body length (the Etruscan shrew is regarded as the smallest by body mass). It has a reddish-brown or grey coat, with a distinctive pig-like snout. Colonies range greatly in size, with an average of 100 individuals per cave. The bat feeds during short activity periods in the evening and dawn, foraging around nearby forest areas for insects. Females give birth annually to a single offspring. Although the bat's status in Myanmar is not well known, the Thai population is restricted to a single province and may be at risk of extinction. Its potential threats are primarily anthropogenic, and include habitat degradation and the disturbance of roosting sites. Description Kitti's hog-nosed bat is small at about in length and in mass, hence the common name of "bumblebee bat". It is the smallest species of bat and may be the world's smallest mammal, depending on how size is defined. The main competitors for the title are small shrews; in particular, the Etruscan shrew may be lighter at but its body is longer, measuring from its head to the base of the tail. The bat has a distinctive swollen, pig-like snout with thin, vertical nostrils. Its ears are relatively large, while its eyes are small and mostly concealed by fur. In the jaw, the premaxillae are not fused with surrounding bones, and the coronoid process is significantly reduced. Its teeth are typical of an insectivorous bat. The dental formula is 1:1:1:3 in the upper jaw and 2:1:2:3 in the lower jaw, with large upper incisors. The bat's upperparts are reddish-brown or grey, while the underside is generally paler. 
The wings are relatively large and darker in colour, with long tips that allow the bat to hover. The second digit of the wing is made of a single short phalanx, and the humerus has an increased number of locking tubercles on its head. There is considerable fusion in the axial skeleton, concerning the thoracic (three posterior vertebrae), lumbar (two posterior) and sacral (all) sections. The bat has particularly slender legs, with rather thin fibulae. Despite having two caudal vertebrae, Kitti's hog-nosed bat has no visible tail. There is a large web of skin between the hind legs (the uropatagium) which may assist in flying and catching insects, although there are no tail bones or calcars to help control it in flight. Range, habitat and diversity Kitti's hog-nosed bat occupies limestone caves along rivers within dry evergreen or deciduous forests. In Thailand, it is restricted to a small region of the Tenasserim Hills in Sai Yok District, Kanchanaburi Province, within the drainage basin of the Khwae Noi River. While Sai Yok National Park in the Dawna Hills contains much of the bat's range, some Thai populations occur outside the park and are therefore unprotected. Since the 2001 discovery of a single individual in Myanmar, at least nine separate sites have been identified in the limestone outcrops of the Dawna and Karen Hills outside the Thanlwin, Ataran, and Gyaing Rivers of Kayin and Mon States. The Thai and Myanmar populations are morphologically identical, but their echolocation calls are distinct. It is not known whether the two populations are reproductively isolated. Despite its restricted geographical range and specialized habitat requirements, Kitti's hog-nosed bat exhibits notable genetic diversity within its populations. Molecular analyses using microsatellite markers have revealed moderate levels of genetic differentiation among cave roosts in Thailand and Myanmar, suggesting historical isolation and limited gene flow between populations. 
Biology and reproductive structure Kitti's hog-nosed bat roosts in caves in limestone hills, far from the entrance. While many caves contain only 10 to 15 individuals, the average group size is 100, with a maximum of about 500. Individuals roost high on walls or roof domes, far apart from each other. Bats also undertake seasonal migration between caves. Kitti's hog-nosed bat has a brief activity period, leaving its roost for only 30 minutes in the evening and 20 minutes at dawn. These short flights are easily interrupted by heavy rain or cold temperatures. During this period, the bat forages within fields of cassava and kapok or around the tops of bamboo clumps and teak trees, within one kilometre of the roosting site. The wings seem to be shaped for hovering flight, and the gut contents of specimens include spiders and insects that are presumably gleaned off foliage. Nevertheless, most prey is probably caught in flight. Main staples of the bat's diet include small flies (Chloropidae, Agromyzidae, and Anthomyiidae), hymenopterans, and psocopterans. Observations of Kitti's hog-nosed bat suggest a reproductive strategy characteristic of microchiropterans. Females of this species typically give birth to a single offspring per reproductive event, with births occurring during the dry season between March and May. Maternity colonies composed of a small number of females are formed within cave roosts, providing communal protection and thermoregulatory benefits for nursing offspring. Male mating behaviors, such as courtship vocalizations and scent marking, have been documented in captive populations, indicating potential sexual selection mechanisms. Taxonomy Kitti's hog-nosed bat is the only extant species in the family Craseonycteridae, which is grouped in the superfamily Rhinolophoidea as a result of molecular testing. Based on this determination, the bat's closest relatives are members of the families Hipposideridae and Rhinopomatidae. 
Kitti's hog-nosed bat was unknown to the world at large prior to 1974. Its common name refers to its discoverer, Thai zoologist Kitti Thonglongya. Thonglongya worked with a British partner, John E. Hill, in classifying bats of Thailand; after Thonglongya died suddenly in February 1974, Hill formally described the species, giving it the binomial name Craseonycteris thonglongyai in honour of his colleague. Ecological role and conservation As a microchiropteran species, Kitti's hog-nosed bat plays a crucial ecological role in its habitat, primarily as an insectivore. This species preys predominantly on small flying insects, including mosquitoes, moths, and beetles. By controlling insect populations, particularly those of agricultural pests and disease vectors, Kitti's hog-nosed bat contributes to ecosystem balance and human well-being. Furthermore, its presence in cave ecosystems may also influence nutrient cycling and the distribution of guano-dependent organisms. As of the species' review in 2019, Kitti's hog-nosed bat is listed by the IUCN as near-threatened, with a downward population trend. Soon after the bat's discovery in the 1970s, some roosting sites became disturbed as a result of tourism, scientific collection, and even the collection and sale of individuals as souvenirs. However, these pressures may not have had a significant effect on the species as a whole, since many small colonies exist in hard-to-access locations, and only a few major caves were disturbed. Another potential risk is the activity of local monks, who have occupied roost caves during periods of meditation. Currently, the most significant and long-term threat to the Thai population could be the annual burning of forest areas, which is most prevalent during the bat's breeding season. In addition, the proposed construction of a gas pipeline from Myanmar to Thailand may have a negative impact. Threats to the Myanmar population are not well known. 
In 2007, Kitti's hog-nosed bat was identified by the Evolutionarily Distinct and Globally Endangered project as one of its Top 10 "focal species". See also Smallest organisms References External links Information and image at ADW Bats by classification Kitti's Hog-nosed Bat Mammals of Myanmar Mammals of Thailand EDGE species
Kitti's hog-nosed bat
Biology
1,770
61,455,123
https://en.wikipedia.org/wiki/R%C3%B6ssler%20Prize
The Rössler Prize, offered by the ETH Zurich Foundation, is a monetary prize that has been awarded annually since 2009 to a promising young tenured professor at ETH Zurich in the middle of an accelerating career. The prize of 200,000 Swiss francs is financed by the returns from an endowment made by Max Rössler, an alumnus of the ETH. The prize money must be used for the laureate's research. Laureates 2009: Nenad Ban, Microbiology 2010: Gerald Haug, Geology of Climate 2011: Andreas Wallraff, Solid State Physics 2012: Nicola Spaldin, Material Science 2013: Olivier Voinnet, RNA Biology 2014: , Health Sciences and Technology 2015: David J. Norris, Mechanical and Process Engineering 2016: Christophe Copéret, Chemistry and Applied Biosciences 2017: Olga Sorkine-Hornung, Computer Science 2018: Philippe Block, Architecture 2019: Maksym Kovalenko, Inorganic chemistry/Nanotechnology 2020: Paola Picotti, Biology 2021: , Machine Learning 2022: Tanja Stadler, Mathematics and Computational evolutionary biology 2023: , Mathematics 2024: , Robotics See also Science and technology in Switzerland Prizes named after people References External links Academic awards Science and technology awards Swiss awards Awards established in 2009
Rössler Prize
Technology
263
47,218,129
https://en.wikipedia.org/wiki/Canna%20%28unit%29
Canna (pl. canne; proper meaning in Italian: Cane) was an ancient Italian unit of length, which differed from place to place. Capua: 2.1768707 m (9th – 15th centuries) Republic of Genoa: 2.49095 m Kingdom of Naples: canna: 2.1163952 m (edict of 6 April 1480) canna: 2.6455026 m (law of 6 April 1840) field surveying canna: 6.998684 m² (law of 6 April 1840) Romagna: 1.9928 m Sicily: 2.062 m Tuscany field surveying canna, a.k.a. 5 "braccia" long "pertica": 2.9183 m canna (fabric): 0.58366 m Rome canna (architecture): 2.234 m commercial canna: 1.992 m Teramo: 3.17 m Malta: 2.08 m (2 yd, 10 in) Sources Notes Units of length
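Because the canna differed from place to place, converting a measurement in canne to metres requires knowing which local standard was used. A minimal sketch using a few of the values listed above (the lookup table and function are illustrative, not a standard library):

```python
# Length of one canna in metres, per locality (values from the list above).
CANNA_M = {
    "Genoa": 2.49095,
    "Naples (1480)": 2.1163952,
    "Naples (1840)": 2.6455026,
    "Rome (architecture)": 2.234,
    "Rome (commercial)": 1.992,
    "Sicily": 2.062,
    "Malta": 2.08,
}

def canne_to_metres(n, place):
    """Convert n canne, measured against a given local standard, to metres."""
    return n * CANNA_M[place]

# The same nominal distance differs by locality, even within one city:
print(round(canne_to_metres(10, "Rome (architecture)"), 3))  # 22.34
print(round(canne_to_metres(10, "Rome (commercial)"), 3))    # 19.92
```

Ten architectural canne in Rome exceed ten commercial canne by more than two metres, which is why period documents normally specify the standard in use.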
Canna (unit)
Mathematics
216
64,577,716
https://en.wikipedia.org/wiki/Adultery%20laws
Adultery laws are the laws in various countries that deal with extramarital sex. Historically, many cultures considered adultery a very serious crime, and some subjected it to severe punishment, especially in the case of extramarital sex involving a married woman and a man other than her husband, with penalties including capital punishment, mutilation, or torture. Such punishments have gradually fallen into disfavor, especially in Western countries from the 19th century. In countries where adultery is still a criminal offense, punishments range from fines to caning and even capital punishment. Since the 20th century, criminal laws against adultery have become controversial, with most Western countries repealing them. Most countries that criminalize adultery are those where the dominant religion is Islam, along with several sub-Saharan African Christian-majority countries. Notable exceptions to this rule are the Philippines and 17 U.S. states (as well as Puerto Rico), although adultery charges are rare in the United States. However, even in jurisdictions that have decriminalised adultery, adultery may still have legal consequences, particularly in jurisdictions with fault-based divorce laws, where adultery can constitute a ground for divorce and may be a factor in property settlement, the custody of children, the denial of alimony, etc. Adultery is not a ground for divorce in jurisdictions which have adopted a no-fault divorce model, but may still be a factor in child custody and property disputes. The criminal status of adultery has attracted criticism, especially where there are violent penalties. The head of the United Nations expert body charged with identifying ways to eliminate laws that discriminate against women or are discriminatory to them in terms of implementation or impact, Kamala Chandrakirana, has stated that: "Adultery must not be classified as a criminal offence at all". 
A joint statement by the United Nations Working Group on discrimination against women in law and in practice states that: "Adultery as a criminal offence violates women’s human rights". In Muslim countries that follow Sharia law for criminal justice, the punishment for adultery may be stoning. There are fifteen countries in which stoning is authorized as lawful punishment, though in recent times it has been legally carried out only in Iran and Somalia. Countries which follow very strict versions of Sharia law in their criminal systems include Saudi Arabia, Iran, Brunei, Afghanistan, Sudan, Pakistan, 12 of Nigeria's 36 states (in Northern Nigeria) and Qatar; although these laws are not necessarily enforced. Al-Shabaab, a jihadist fundamentalist group based in East Africa (mainly Somalia) and Yemen also implements an extreme form of Sharia. Punishment In jurisdictions where adultery is illegal, punishments vary from fines (for example in the US state of Rhode Island) to caning in parts of Asia. In fifteen countries the punishment includes stoning, although in recent times it has been legally enforced only in Iran and Somalia. Most stoning cases are the result of mob violence, and while technically illegal, no action is usually taken against perpetrators. Sometimes such stonings are ordered by informal village leaders who have de facto power in the community. Adultery may have consequences under civil law even in countries where it is not outlawed by the criminal law. For instance it may constitute fault in countries where the divorce law is fault based or it may be a ground for tort. In some jurisdictions, the "intruder" (the third party) is punished, rather than the adulterous spouse. For instance act 266 of the Penal Code of South Sudan reads: "Whoever, has consensual sexual intercourse with a man or woman who is and whom he or she has reason to believe to be the spouse of another person, commits the offence of adultery [...]". 
Similarly, under the adultery law in India (Section 497 of the Indian Penal Code, until overturned by the Supreme Court in 2018) it was a criminal offense for a man to have consensual sexual intercourse with a married woman, without the consent of her husband (no party was criminally punished in case of adultery between a married man and an unmarried woman). Asia Southwest Asia In Southwest Asia, adultery has attracted severe sanctions, including the death penalty. In some places, such as Saudi Arabia, the method of punishment for adultery is stoning to death. Proving adultery under Muslim law can be a very difficult task as it requires the accuser to produce four eyewitnesses to the act of sexual intercourse, each of whom should have a good reputation for truthfulness and honesty. The criminal standards do not apply in the application of social and family consequences of adultery, where the standards of proof are not as exacting. Sandra Mackey, author of The Saudis: Inside the Desert Kingdom, stated in 1987 that in Saudi Arabia, "unlike the tribal rights of a father to put to death a daughter who has violated her chastity, death sentences under Koranic law [for adultery] are extremely rare." In regions of Iraq and Syria under ISIL, there have been reports of floggings as well as execution of people who engaged in adultery. The method of execution was typically by stoning. ISIL would not merely oppose adultery but also oppose behavior that from their point of view could lead to adultery, such as women not being covered, people of the opposite sex socializing with one another, or even female mannequins in store windows. China In China, punishments for adultery were differentiated based on gender of the spouse until 1935. Adultery is no longer a crime in the People's Republic of China, but is a ground for divorce. It is illegal to commit adultery with the spouse of a servicemember in the People's Liberation Army. 
Taiwan In Taiwan, adultery was a criminal offense before 2020. The law was challenged in 2002, when it was upheld by the Constitutional Court. Arguments were heard again by the court in March 2020, and the court ruled the law unconstitutional on 29 May 2020. Twelve of fifteen justices issued a concurring opinion, two others concurred in part, and one dissented. The Legislative Yuan amended the criminal code on 31 May 2021, removing the article criminalizing adultery entirely. During Qing rule in Taiwan (1683 to 1895), the husband or his relatives could bring charges. The standard sentence was ninety lashes for each of the accused. The woman could be sold or divorced. The matter could be settled out of court, with bodily harm to the accused or assorted punishments affecting his social standing. Under Japanese rule, only the husband could bring charges. The accused could be sentenced to two years' imprisonment. Wife selling became illegal, although private settlements still occurred. India On 27 September 2018, the Supreme Court of India ruled Section 497 of the Indian Penal Code, the law which criminalized adultery, as unconstitutional. Before 2018, adultery was defined as sex between a man and a woman without the consent of the woman's husband. The man was prosecutable and could be sentenced for up to five years (even if he himself was unmarried), whereas the married woman could not be jailed. Men have criticized the law as gender discrimination, in that women could not be prosecuted for adultery, and the National Commission of Women has criticized the British-era law as anti-feminist, as it treats women as the property of their husbands, and has consequently recommended deletion of the law or reducing it to a civil offense. Extramarital sex without the consent of one's partner can be valid grounds for a monetary penalty on government employees, as ruled by the Central Administrative Tribunal. Japan Adultery was a crime in Japan until 1947. 
South Korea In 2015, South Korea's Constitutional Court overturned the country's law against adultery. Adultery had been criminalized in 1953, with violators subject to two years in prison, with the aim of protecting women from divorce. The law was overturned because the court found that adultery is a private matter in which the state should not intervene. Pakistan In Pakistan, adultery is a crime under the Hudood Ordinance, promulgated in 1979. The Ordinance sets a maximum penalty of death. The Ordinance has been particularly controversial because it requires a woman making an accusation of rape to provide extremely strong evidence to avoid being charged with adultery herself. A conviction for rape is only possible with evidence from no fewer than four witnesses. In recent years, high-profile rape cases in Pakistan have given the Ordinance more exposure than similar laws in other countries. Similar laws exist in some other Muslim countries, such as Saudi Arabia and Brunei. Philippines Adultery is a crime in the Philippines. In the Philippines, the law differentiates based on the gender of the spouse. A wife can be charged with adultery, while a husband can only be charged with the related crime of concubinage, which is more loosely defined (it requires either keeping the mistress in the family home, or cohabiting with her, or having sexual relations under scandalous circumstances). There are currently proposals to decriminalize adultery in the Philippines. Europe Adultery is no longer a crime in any European country. Adultery in English law was not a criminal offence in secular law from the later twelfth century until the seventeenth century. It was punishable under ecclesiastical law from the twelfth century until jurisdiction over adultery was removed from the ecclesiastical courts of England and Wales (and some territories of the British Empire) by the Matrimonial Causes Act 1857. 
However, in English and Welsh common law of tort it was possible from the early seventeenth century for a spouse to prosecute an adulterer for damages on the grounds of loss of consortium until the Law Reform (Miscellaneous Provisions) Act 1970. Adultery was also illegal under secular statute law for the decade in which the Commonwealth (Adultery) Act (1650) was in force. Among the last European countries to decriminalise adultery were Italy (1969), West Germany (1969), Malta (1973), Luxembourg (1974), France (1975), Spain (1978), Portugal (1982), Greece (1983), Belgium (1987), Switzerland (1989), and Austria (1997). In Romania adultery was a crime until 2006, though the crime of adultery had a narrow definition, excluding situations where the other spouse encouraged the act or when the act happened at a time the couple was living separate and apart; and in practice prosecutions were extremely rare. In Turkey, adultery laws were held to be invalid in 1996/1998 because the law was deemed discriminatory as it differentiated between women and men. In 2004, there were proposals to introduce a gender-neutral adultery law. The plans were dropped, and it has been suggested that the objections from the European Union played a role. Before the 20th century, adultery was often punished harshly. In Scandinavia, in the 17th century, adultery and bigamy were subject to the death penalty, although few people were actually executed. Examples of women who have been executed for adultery in Medieval and Early Modern Europe include Maria of Brabant, Duchess of Bavaria (in 1256), Agnese Visconti (in 1391), Beatrice Lascaris di Tenda (in 1418), Anne Boleyn (in 1536), and Catherine Howard (in 1542). The enforcement of adultery laws varied by jurisdiction. In England, the last execution for adultery is believed to have taken place in 1654, when a woman named Susan Bounty was hanged. 
The European Court of Human Rights (ECtHR) has had the opportunity to rule in recent years on several cases involving the legitimacy of firing a person from their job due to adultery. These cases dealt with people working for religious organizations and raised the question of the balancing of the right of a person to respect for their private life (recognized in the EU) and the right of religious communities to be protected against undue interference by the State (recognized also in the EU). These situations must be analyzed with regard to their specific circumstances in each case. The ECtHR has ruled both in favor of the religious organization (in the case of Obst) and in favor of the fired person (in the case of Schüth). Latin America Until the 1990s, most Latin American countries had laws against adultery. Adultery has been decriminalized in most of these countries, including Paraguay (1990), Chile (1994), Argentina (1995), Nicaragua (1996), Dominican Republic (1997), Brazil (2005), and Haiti (2005). In some countries, adultery laws have been struck down by courts on the ground that they discriminated against women, such as Guatemala (1996), where the Guatemalan Constitutional Court struck down the adultery law based both on the Constitution's gender equality clause and on human rights treaties including CEDAW; and Venezuela in 2016. The adultery law of the Federal Criminal Code of Mexico was repealed in 2011. Australia Adultery is not a crime in Australia. Under federal law enacted in 1994, sexual conduct between consenting adults (18 years of age or older) is their private matter throughout Australia, irrespective of marital status. Australian states and territories had previously repealed their respective adultery criminal laws. Australia changed to no-fault divorce in 1975, abolishing adultery as a ground for divorce. Canada Adultery is not a crime in Canada. 
It has never been defined as a criminal offence within the Criminal Code, which was enacted in 1892, nor is it considered an offence at common law. United States The United States is one of few industrialized countries to have laws criminalizing adultery. In the United States, laws vary from state to state. Until the mid-20th century, most U.S. states (especially Southern and Northeastern states) had laws against fornication, adultery or cohabitation. These laws have gradually been abolished or struck down by courts as unconstitutional. State criminal laws against adultery are rarely enforced. Federal appeals courts have ruled inconsistently as to whether these laws are unconstitutional (especially after the 2003 Supreme Court decision Lawrence v. Texas) and as of 2019 the Supreme Court has not ruled directly on the issue. As of 22 November 2024, adultery remains a crime in 16 states and the Commonwealth of Puerto Rico but prosecutions are rare. Pennsylvania abolished its fornication and adultery laws in 1973. States which have decriminalised adultery in recent years include West Virginia (2010), Colorado (2013), New Hampshire (2014), Massachusetts (2018), Utah (2019), Idaho (2022), Minnesota (2023), and New York (2024). The District of Columbia repealed its adultery law in 2003. When passing the District of Columbia Organic Act of 1801, the 6th United States Congress extended all of the criminal laws of Maryland and Virginia to the respective territory within the District that each state had ceded to the federal government under Article I, Section VIII, and adultery had been an indictable offense in Maryland since the passage of a provincial law in 1715. The last conviction for adultery in Massachusetts was in 1983 and held that the statute was constitutional and that "no fundamental personal privacy right implicit in the concept of ordered liberty guaranteed by the United States Constitution bars the criminal prosecution of such persons [adulterers]." 
Today, adultery laws are found mostly in the conservative Southern states. In general, three U.S. states criminalize it as a felony (Oklahoma, Michigan, and Wisconsin) and 13 states along with Puerto Rico criminalize it as a misdemeanor. Punishments range from as little as a $10 fine in Maryland (despite being technically a criminal offense, not a civil one) to a fine of up to $10,000 and jail time of up to 3.5 years in Wisconsin, and a fine of up to $5,000 and jail time of up to 5 years in Michigan. List of the statutes:
Alabama (Alabama Revised Statutes, § 13a-13-2)
Arizona (Arizona Revised Statutes, § 13–1408)
Florida (Florida Statutes, § 798.01)
Georgia (Official Code of Georgia Annotated, § 16–6–19)
Illinois (Illinois Compiled Statutes, § 720-5-11/35)
Kansas (Kansas Statutes Annotated, § 21–5511)
Maryland (Annotated Code of Maryland, § 10–5–501)
Michigan (Michigan Compiled Laws, §§ 750.29-32)
Mississippi (Unannotated Mississippi Code, § 97–29–1)
North Carolina (North Carolina General Statutes, § 14–26–184)
North Dakota (North Dakota Century Code, § 12.1-20-09)
Oklahoma (Oklahoma Statutes Annotated, §§ 21–871–872)
Rhode Island (Rhode Island General Laws, § 11–6–2)
South Carolina (South Carolina Code of Laws, §§ 16-15-60-16-15-80)
Virginia (Virginia Code Annotated, § 18–2–365)
Wisconsin (Wisconsin Statutes, § 944.16)
Puerto Rico (Puerto Rico Laws, § 33–4758)
Two of these statutes (those of Mississippi and North Carolina) refer to fornication as well, and thus by definition also ban any extramarital sex altogether. One of these statutes (that of Michigan) considers cohabitation between ex-spouses after their divorce as falling under the crime of adultery.
Below is the list of the specific anti-fornication statutes (in states where there is an offense of fornication and where it is a separate offence):
Georgia (Official Code of Georgia Annotated, § 16–6–8)
Illinois (Illinois Compiled Statutes, § 720-5/11-40)
North Dakota (North Dakota Century Code, § 12.1-20-08) (note: even though the crime is called "fornication", it only refers to having sex with minors or having sex in public. It does not target private consensual sex between adults, so in practice this law is irrelevant; it is listed here only for the sake of completeness because the crime is called "fornication" under North Dakota law)

Below is the list of the specific anti-cohabitation statutes (in states where there is an offense of cohabitation and where it is a separate offence):
Massachusetts (Massachusetts General Laws, § 208–40) (note: criminalizes cohabitation between two ex-spouses after divorce as adultery. But since the Massachusetts criminal anti-adultery statute was repealed in 2018 and there is no longer any punishment for it, in practice this law is an irrelevant legislative remnant with no function)
Oklahoma (Oklahoma Statutes Annotated, § 43–123) (criminalizes cohabitation between two ex-spouses after divorce as adultery)
Mississippi (Unannotated Mississippi Code, § 93–5–29) (criminalizes cohabitation between two ex-spouses after divorce as adultery)

In the U.S. military, adultery is a potential court-martial offense, falling under the General article (Art. 134). The Manual for Courts-Martial defines (para.
99) "Extramarital sexual conduct" as being: "Elements.(1) That the accused wrongfully engaged in extramarital conduct as described in subparagraph c.(2) with a certain person; (2) That, at the time, the accused knew that the accused or the other person was married to someone else; and (3) That, under the circumstances, the conduct of the accused was either: (i) to the prejudice of good order and discipline in the armed forces; (ii) was of a nature to bring discredit upon the armed forces; or (iii) to the prejudice of good order and discipline in the armed forces and of a nature to bring discredit upon the armed forces". As such, extramarital sex is not automatically an offense, it must be conducted under such circumstances that it is prejudicial to the armed forces. The law on adultery was revised in 2019 in order to include same-sex encounters in the offense. The enforceability of adultery laws in the United States is unclear following Supreme Court decisions since 1965 relating to privacy and sexual intimacy of consenting adults. However, occasional prosecutions do occur. Six U.S. states (Hawaii, North Carolina, Mississippi, New Mexico, South Dakota, and Utah) allow the possibility of the tort action of alienation of affections (brought by a deserted spouse against a third party alleged to be responsible for the failure of the marriage). In a highly publicized case in 2010, a woman in North Carolina won a $9 million suit against her husband's mistress. Laws against adultery in colonial America were very harsh. Despite this, there is only one known execution for adultery in American history: it occurred in the Colony of Massachusetts in 1643, when the married 18 year old Mary Latham and her extramarital lover James Britton were executed. Criticism of adultery laws Political arguments Laws against adultery have been named as invasive and incompatible with principles of limited government (see Dennis J. 
Baker, The Right Not to be Criminalized: Demarcating Criminal Law's Authority (Ashgate), chapter 2). Much of the criticism comes from libertarianism, whose adherents broadly hold that government must not intrude into daily personal lives and that such disputes are to be settled privately rather than prosecuted and penalized by public entities. It is also argued that adultery laws are rooted in religious doctrines, which should not be the basis of laws in a secular state. Opponents of adultery laws regard them as painfully archaic, believing they represent sanctions reminiscent of nineteenth-century novels. They further object to the legislation of morality, especially a morality so steeped in religious doctrine. Support for the preservation of adultery laws comes from religious groups and from political parties who feel, quite independently of morality, that the government has reason to concern itself with the consensual sexual activity of its citizens … The crucial question is: when, if ever, is the government justified in interfering in consensual bedroom affairs?

Discrimination against women
Opponents of adultery laws argue that these laws maintain social norms which justify violence, discrimination and oppression of women, whether in the form of state-sanctioned violence such as stoning, flogging or hanging for adultery, or in the form of individual acts of violence committed against women by husbands or relatives, such as honor killings, crimes of passion, and beatings. UN Women has called for the decriminalization of adultery. A Joint Statement by the United Nations Working Group on discrimination against women in law and in practice in 2012 stated: The United Nations Working Group on discrimination against women in law and in practice is deeply concerned at the criminalization and penalization of adultery whose enforcement leads to discrimination and violence against women.
Concerns exist that the existence of "adultery" as a criminal offense (and even in family law) can affect the criminal justice process in cases of domestic assaults and killings, in particular by mitigating murder to manslaughter, or otherwise proving for partial or complete defenses in case of violence. These concerns have been officially raised by the Council of Europe and the UN in recent years. The Council of Europe Recommendation Rec(2002)5 of the Committee of Ministers to member states on the protection of women against violence states that member states should: (...) "57. preclude adultery as an excuse for violence within the family". UN Women has also stated in regard to the defense of provocation and other similar defenses that "laws should clearly state that these defenses do not include or apply to crimes of 'honour', adultery, or domestic assault or murder." Use of limited resources An argument against the criminal status of adultery is that the resources of the law enforcement are limited, and that they should be used carefully; by investing them in the investigation and prosecution of adultery (which is very difficult) the curbing of serious violent crimes may suffer. Consent as the basis of sexual offenses legislation Human rights organizations have stated that legislation on sexual crimes must be based on consent, and must recognize consent as central, and not trivialize its importance; doing otherwise can lead to legal, social or ethical abuses. Amnesty International, when condemning stoning legislation that targets adultery, among other acts, has referred to "acts which should never be criminalized in the first place, including consensual sexual relations between adults". 
Salil Shetty, Amnesty International's Secretary General, said: "It is unbelievable that in the twenty-first century some countries are condoning child marriage and marital rape while others are outlawing abortion, sex outside marriage and same-sex sexual activityeven punishable by death." The My Body My Rights campaign has condemned state control over individual sexual and reproductive decisions; stating "All over the world, people are coerced, criminalized and discriminated against, simply for making choices about their bodies and their lives". References Family law Human sexuality Extramarital relationships
https://en.wikipedia.org/wiki/Hodge%E2%80%93Tate%20module
In mathematics, a Hodge–Tate module is an analogue of a Hodge structure over p-adic fields. Serre introduced and named Hodge–Tate structures, using the results of Tate on p-divisible groups.

Definition
Suppose that G is the absolute Galois group of a p-adic field K. Then G has a canonical cyclotomic character χ given by its action on the p-th power roots of unity. Let C be the completion of the algebraic closure of K. Then a finite-dimensional vector space over C with a semi-linear action of the Galois group G is said to be of Hodge–Tate type if it is generated by the eigenvectors of integral powers of χ.

See also
p-adic Hodge theory
Mumford–Tate group

References

Algebraic geometry Number theory Hodge theory
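As a sketch of the decomposition implicit in this definition (standard notation from p-adic Hodge theory, not spelled out in this excerpt): writing C(i) for C with the G-action twisted by the i-th power of χ, a finite-dimensional semi-linear C-representation W is of Hodge–Tate type exactly when it decomposes as

```latex
W \;\cong\; \bigoplus_{i \in \mathbb{Z}} C(i)^{h_i},
```

with all but finitely many of the multiplicities h_i equal to zero; the integers i with h_i ≠ 0 are called the Hodge–Tate weights of W.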
https://en.wikipedia.org/wiki/The%20Feiner%20Points%20of%20Leadership
The Feiner Points of Leadership: The 50 Basic Laws That Will Make People Want to Perform Better for You, first published in 2004, is a book by Michael Feiner, a former Vice President and Chief People Officer at Pepsi-Cola and a former professor at the Columbia Graduate School of Business. It presents 50 laws for managing business problems based on the author's experiences. The book explores how leaders can positively influence their teams, foster collaboration, and build a productive, motivated work environment. Feiner's approach is based on the idea that leadership is not just about managing tasks but about inspiring and empowering others to achieve their best potential. It was selected by the Toronto Globe and Mail as the Best Business Book of 2004.

References

2004 non-fiction books Management books Business books Personal development
https://en.wikipedia.org/wiki/Girardin%20Jean-Louis
Girardin Jean-Louis is an American academic who is a Professor in the Departments of Psychiatry and Neurology at the University of Miami, Miller School of Medicine. He serves as Director of the Translational Sleep and Circadian Sciences Program and the "Program to Increase Diversity among Individuals Engaged in Health-Related Research" (PRIDE BSM) Institute. Dr. Jean-Louis’ translational behavioral sleep and circadian research was recently featured in Science and NPR. In 2020, he was named ‘Pioneer in Minority Health and Health Disparities’ and one of the Community of Scholars' most inspiring Black scientists in America. In 2021, he received the Mary A. Carskadon Outstanding Educator Award from the Sleep Research Society, and in 2022 the Diversity, Equity, and Inclusion Leadership Award from the American Academy of Sleep Medicine. Early life and education Jean-Louis grew up in Haiti. He became interested in engineering as a child, and particularly enjoyed building different contraptions. At the age of seventeen he immigrated to New York City, where he joined the City College of New York as an undergraduate student in engineering. As a student he took an elective course in sleep lab techniques, and became interested in sleep and wakefulness. He earned his doctoral degree at the Graduate Center of the City University of New York. His doctoral research considered the impact of melatonin on sleep and cognition in elderly individuals. He was a postdoctoral research associate at the University of California, San Diego, where he specialized in sleep and chronobiology. As part of his research, Jean-Louis advanced the science around wearable technologies (actigraphy) to monitor patient's sleep-wake behavior out of hospital and expensive laboratories. In the early days of his research on sleep science, Jean-Louis struggled to find academic mentors, particularly mentors of color. 
He continued to improve the science of actigraphy such that it could be more readily used to collect sleep data in the comfort and safety of patients' own homes.

Research and career
Jean-Louis studies the sociocultural and environmental determinants of health. His research considers sleep medicine and health equity and, in particular, how low-income and minority communities are impacted by insufficient sleep. He is particularly interested in why sleep apnea is under-diagnosed in African-Americans. In 2008, he showed that less than 40% of African-American patients with sleep apnea agreed to undergo a diagnostic test. In an effort to understand the sleep behavior of minority groups, Jean-Louis has led several outreach initiatives. These include programs in churches, barber shops and health salons. Jean-Louis was awarded a National Institute on Aging (NIA) Leadership Career Award in 2018. In 2020, he was selected as one of The Community of Scholars' most inspiring Black scientists in America. Alongside his academic research, Jean-Louis has launched several initiatives to support underrepresented minority groups in science and medicine. As the satisfaction and medical outcomes of communities of color are impacted by the racial/ethnic heritage of the physician, Jean-Louis believes there is an urgent need for more diverse medical practitioners.

Selected publications

References

Year of birth missing (living people) Living people American people of Haitian descent CUNY Graduate Center alumni City College of New York alumni New York University faculty American psychiatrists Psychiatry academics Sleep researchers
https://en.wikipedia.org/wiki/Energy%20efficiency%20in%20transport
The energy efficiency in transport is the useful travelled distance of passengers, goods or any type of load, divided by the total energy put into the means of transport propulsion. The energy input may take several different forms depending on the type of propulsion; normally it is supplied as liquid fuels, electrical energy or food energy. Energy efficiency is also occasionally known as energy intensity. The inverse of the energy efficiency in transport is the energy consumption in transport. Energy efficiency in transport is often described in terms of fuel consumption, fuel consumption being the reciprocal of fuel economy. Nonetheless, fuel consumption is linked with a means of propulsion which uses liquid fuels, whilst energy efficiency is applicable to any sort of propulsion. To avoid such confusion, and to be able to compare the energy efficiency of any type of vehicle, experts tend to measure the energy in the International System of Units, i.e., joules. Therefore, in the International System of Units, the energy efficiency in transport is measured in terms of metre per joule, or m/J, while the energy consumption in transport is measured in terms of joules per metre, or J/m. The more efficient the vehicle, the more metres it covers with one joule (more efficiency), or the fewer joules it uses to travel over one metre (less consumption). The energy efficiency in transport varies largely by means of transport. Different types of transport range from some hundred kilojoules per kilometre (kJ/km) for a bicycle to tens of megajoules per kilometre (MJ/km) for a helicopter. Via the type of fuel used and the rate of fuel consumption, energy efficiency is also often related to operating cost ($/km) and environmental emissions (e.g. CO2/km).

Units of measurement
In the International System of Units, the energy efficiency in transport is measured in terms of metre per joule, or m/J.
Nonetheless, several conversions are applicable, depending on the unit of distance and on the unit of energy. For liquid fuels, normally the quantity of energy input is measured in terms of the liquid's volume, such as litres or gallons. For propulsion which runs on electricity, normally kWh is used, while for any type of human-propelled vehicle the energy input is measured in terms of Calories. It is typical to convert between different types of energy and units. For passenger transport, the energy efficiency is normally measured in terms of passengers times distance per unit of energy, in the SI, passenger-metres per joule (pax.m/J); while for cargo transport the energy efficiency is normally measured in terms of the mass of transported cargo times distance per unit of energy, in the SI, kilogram-metres per joule (kg.m/J). Volumetric efficiency with respect to vehicle capacity may also be reported, such as passenger-miles per gallon (PMPG), obtained by multiplying the miles per gallon of fuel by either the passenger capacity or the average occupancy. The occupancy of personal vehicles is typically lower than capacity by a considerable degree, and thus the values computed based on capacity and on occupancy will often be quite different.

Typical conversions into SI units

Liquid fuels
Energy efficiency is expressed in terms of fuel economy:
distance per vehicle per unit fuel volume; e.g., km/L or miles per gallon (US or imperial).
distance per vehicle per unit fuel mass; e.g., km/kg.
distance per vehicle per unit energy; e.g., miles per gallon equivalent (mpg-e).
Energy consumption (reciprocal efficiency) is expressed in terms of fuel consumption:
volume of fuel (or total energy) consumed per unit distance per vehicle; e.g. l/100 km or MJ/100 km.
volume of fuel (or total energy) consumed per unit distance per passenger; e.g., l/(100 passenger·km).
volume of fuel (or total energy) consumed per unit distance per unit mass of cargo transported; e.g., l/100 kg·km or MJ/t·km. Electricity Electricity consumption: electrical energy used per vehicle per unit distance; e.g., kWh/100 km. Producing electricity from fuel requires much more primary energy than the amount of electricity produced. Food energy Energy consumption: calories burnt by the body's metabolism per kilometre; e.g., Cal/km. calories burnt by the body's metabolism per mile; e.g., Cal/miles. Land Passenger Transport Table Overview In the following table the energy efficiency and energy consumption for different types of passenger land vehicles and modes of transport, as well as standard occupancy rates, are presented. The sources for these figures are in the correspondent section for each vehicle, in the following article. The conversions amongst different types of units, are well known in the art. For the conversion amongst units of energy in the following table, 1 litre of petrol amounts to 34.2 MJ, 1 kWh amounts to 3.6 MJ and 1 kilocalorie amounts to 4184 J. For the car occupation ratio, the value of 1.2 passengers per automobile was considered. Nonetheless, in Europe this value slightly increases to 1.4. The sources for conversions amongst units of measurements appear only of the first row. Land transport means Walking A person walking at requires approximately of food energy per hour, which is equivalent to 4.55 km/MJ. of petrol contains about of energy, so this is approximately equivalent to . Velomobile Velomobiles (enclosed recumbent bicycles) have the highest energy efficiency of any known mode of personal transport because of their small frontal area and aerodynamic shape. At a speed of , the velomobile manufacturer WAW claims that only 0.5 kWh (1.8 MJ) of energy per 100 km is needed to transport the passenger (= 18 J/m). 
This is around (20%) of what is needed to power a standard upright bicycle without aerodynamic cladding at same speed, and (2%) of that which is consumed by an average fossil fuel or electric car (the velomobile efficiency corresponds to 4700 miles per US gallon, 2000 km/L, or 0.05 L/100 km). Real energy from food used by human is 4–5 times more. Unfortunately their energy efficiency advantage over bicycles becomes smaller with decreasing speed and disappears at around 10 km/h where power needed for velomobiles and triathlon bikes are almost the same. Bicycle A standard lightweight, moderate-speed bicycle is one of the most energy-efficient forms of transport. Compared with walking, a cyclist riding at requires about half the food energy per unit distance: 27 kcal/km, per 100 km, or 43 kcal/mi. This converts to about . This means that a bicycle will use between 10 and 25 times less energy per distance travelled than a personal car, depending on fuel source and size of the car. This figure does depend on the speed and mass of the rider: greater speeds give higher air drag and heavier riders consume more energy per unit distance. In addition, because bicycles are very lightweight (usually between 7–15 kg) this means they consume very low amounts of materials and energy to manufacture. In comparison to an automobile weighing 1500 kg or more, a bicycle typically requires 100–200 times less energy to produce than an automobile. In addition, bicycles require less space both to park and to operate and they damage road surfaces less, adding an infrastructural factor of efficiency. Motorised bicycle A motorised bicycle allows human power and the assistance of a engine, giving a range of . Electric pedal-assisted bikes run on as little as per 100 km, while maintaining speeds in excess of . These best-case figures rely on a human doing 70% of the work, with around per 100 km coming from the motor. 
This makes an electric bicycle one of the most efficient possible motorised vehicles, behind only a motorised velomobile and an electric unicycle (EUC). Electric kick scooter Electric kick scooters, such as those used by scooter-sharing systems like Bird or Lime, typically have a maximum range of under and are commonly limited to a maximum speed of . Intended to fit into a last mile niche and be ridden in bike lanes, they require little skill from the rider. Because of their light weight and small motors, they are extremely energy-efficient with a typical energy efficiency of 1.1 kWh (4.0 MJ) per 100 km (1904 MPGe 810 km/L 0.124 L/100 km), even more efficient than bicycles and walking. However, as they must be recharged frequently, they are often collected overnight with motor vehicles, somewhat negating this efficiency. The lifecycle of electric scooters is also notably shorter than that of bicycles, often reaching only a single digit number of years. Electric Unicycle An electric unicycle (EUC) cross electric skateboard variant called the Onewheel Pint can carry a 50 kg person 21.5 km at an average speed of 20 km/h. The battery holds 148Wh. Without taking energy lost to heat in the charging stage into account, this equates to an efficiency of 6.88Wh/km or 0.688kWh/100 km. Additionally, with regenerative braking as a standard design feature, hilly terrain would have less impact on an EUC compared to a vehicle with friction brakes such as a push bike. This combined with the single wheel ground interaction may make the EUC the most efficient known vehicle at low speeds (below 25 km/h), with the velomobile overtaking the position as most efficient at higher speeds due to superior aerodynamics. Automobiles Automobiles are generally inefficient when compared to other modes of transport, due to the relatively high weight of the vehicle compared to its occupants. 
On a percentage basis, if there is one occupant in an automobile, only about 0.5% of the total energy used is used to move the person in the car, while the remaining 99.5% (about 200 times more) is used to move the car itself. An important driver of energy consumption of cars per passenger is the occupancy rate of the vehicle. Although the consumption per unit distance per vehicle increases with increasing number of passengers, this increase is slight compared to the reduction in consumption per unit distance per passenger. This means that higher occupancy yields higher energy efficiency per passenger. Automobile occupancy varies across regions. For example, the estimated average occupancy rate is about 1.3 passengers per car in the San Francisco Bay Area, while the 2006 UK estimated average is 1.58. Due to the efficiency of electric motors, electric cars are much more efficient than their internal combustion engine counterparts, consuming on the order of 38 megajoules (38 000 kJ) per 100 km in comparison to 142 megajoules per 100 km for combustion powered cars. However, depending on the way the electricity is generated, the actual primary energy use may be higher. Driving practices and vehicles can be modified to improve their energy efficiency by about 15%. Common efficiency measures Automobile fuel efficiency is most commonly expressed in terms of the volume of fuel consumed per one hundred kilometres (l/100 km), but in some countries (including the United States, the United Kingdom and India) it is more commonly expressed in terms of the distance per volume fuel consumed (km/L or miles per gallon). This is complicated by the different energy content of fuels such as petrol and diesel. The Oak Ridge National Laboratory (ORNL) states that the energy content of unleaded petrol is 115,000 British thermal unit (BTU) per US gallon (32 MJ/L) compared to 130,500 BTU per US gallon (36.4 MJ/L) for diesel. 
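The unit relationships discussed above can be made concrete with a short calculation (a sketch, not from the article's sources; the helper names are my own, and the energy densities are the approximate ORNL figures quoted in the text):

```python
# Convert between fuel economy (L/100 km, US mpg) and energy
# intensity (MJ/km), using approximate fuel energy densities.

PETROL_MJ_PER_L = 32.0        # ~115,000 BTU per US gallon (ORNL)
DIESEL_MJ_PER_L = 36.4        # ~130,500 BTU per US gallon (ORNL)
LITRES_PER_US_GALLON = 3.785411784
KM_PER_MILE = 1.609344

def l_per_100km_to_mpg_us(l_per_100km):
    """Convert consumption (L/100 km) to US miles per gallon."""
    km_per_litre = 100.0 / l_per_100km
    return km_per_litre * LITRES_PER_US_GALLON / KM_PER_MILE

def l_per_100km_to_mj_per_km(l_per_100km, mj_per_litre=PETROL_MJ_PER_L):
    """Convert consumption (L/100 km) to energy intensity (MJ/km)."""
    return l_per_100km * mj_per_litre / 100.0

if __name__ == "__main__":
    # A 5 L/100 km petrol car:
    print(round(l_per_100km_to_mpg_us(5.0), 1))     # ~47.0 mpg (US)
    print(round(l_per_100km_to_mj_per_km(5.0), 2))  # 1.6 MJ/km
```

For example, a 5 L/100 km petrol car comes out at roughly 47 mpg (US) and 1.6 MJ/km, consistent with the orders of magnitude discussed earlier in this article; note that the same L/100 km figure for diesel implies more energy per kilometre because of diesel's higher volumetric energy density.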
Life-cycle energy use Automobiles have significant energy use in their life cycle, not directly attributable to the running of the vehicle. An important consideration is the energy costs of producing the energy form used by the automobile. Bio-fuels, electricity and hydrogen, for instance, have significant energy inputs in their production. Hydrogen production efficiency are 50–70% when produced from natural gas, and 10–15% from electricity. The efficiency of hydrogen production, as well as the energy required to store and transport hydrogen, must to be combined with the vehicle efficiency to yield net efficiency. Because of this, hydrogen automobiles are one of the least efficient means of passenger transport, generally around 50 times as much energy must be put into the production of hydrogen compared to how much is used to move the car. Another important factor is the energy needed to build and maintain roads is an important consideration, as is the energy returned on energy invested (EROEI). Between these two factors, roughly 20% must be added to the energy of the fuel consumed, to accurately account for the total energy used. Finally, vehicle energy efficiency calculations would be misleading without factoring the energy cost of producing the vehicle itself. This initial energy cost can of course be depreciated over the life of the vehicle to calculate an average energy efficiency over its effective life span. In other words, vehicles that take a lot of energy to produce and are used for relatively short periods will require a great deal more energy over their effective lifespan than those that do not, and are therefore much less energy efficient than they may otherwise seem. Hybrid and electric cars use less energy in their operation than comparable petroleum-fuelled cars but more energy is used to manufacture them, so the overall difference would be less than immediately apparent. 
Compare, for example, walking, which requires no special equipment at all, and an automobile, produced in and shipped from another country, and made from parts manufactured around the world from raw materials and minerals mined and processed elsewhere again, and used for a limited number of years. According to the French energy and environment agency ADEME, an average motor car has an embodied energy content of 20,800 kWh and an average electric vehicle amounts to 34,700 kWh. The electric car requires nearly twice as much energy to produce, primarily due to the large amount of mining and purification necessary for the rare earth metals and other materials used in lithium-ion batteries and in the electric drive motors. This represents a significant portion of the energy used over the life of the car (in some cases nearly as much as energy that is used through the fuel that is consumed, effectively doubling the car's per-distance energy consumption), and cannot be ignored when comparing automobiles to other transport modes. As these are average numbers for French automobiles and they are likely to be significantly larger in more auto-centric countries like the United States and Canada, where much larger and heavier cars are more common. The usage of private vehicles can be significantly decreased and can help to promote sustainable urban growth if more appealing non-motorized transportation options are developed, as well as more comfortable public transportation environments. Example consumption figures Solar cars are electric vehicles that use little or no externally supplied energy other than from sunlight, charging the batteries from built-in solar panels, and typically use less than 3 kWh per 100 miles (67 kJ/km or 1.86 kWh/100 km). Most of these cars are race cars designed for competition and not for passenger or utility use. However several companies are designing solar cars for public use. As of December 2021, none have yet been released. 
The four passenger GEM NEV uses , which equates to 2.6 kWh/100 km per person when fully occupied, albeit at only . The General Motors EV1 was rated in a test with a charging efficiency of 373 Wh-AC/mile or 23 kWh/100 km approximately equivalent to for petroleum-fuelled vehicles. Chevrolet Volt in full electric mode uses , meaning it may approach or exceed the energy efficiency of walking if the car is fully occupied with 4 or more passengers, although the relative emissions produced may not follow the same trends if analysing environmental impacts. The Daihatsu Charade 993cc turbo diesel (1987–1993) won the most fuel efficient vehicle award for going round the United Kingdom consuming an average of . It was surpassed only recently by the VW Lupo 3 L which consumes about . Both cars are rare to find on the popular market. The Daihatsu had major problems with rust and structural safety which contributes to its rarity and the quite short production run. The Volkswagen Polo 1.4 TDI Bluemotion and the SEAT Ibiza 1.4 TDI Ecomotion, both rated at (combined) were the most fuel efficient petroleum-fuelled cars on sale in the UK as of 22 March 2008. Honda Insight – achieves under real-world conditions. Honda Civic Hybrid regularly averages around . 2012 Cadillac CTS-V Wagon 6.2 L Supercharged, 2012 Bugatti Veyron, 2018 Honda Civic: 2017 Mitsubishi Mirage: 2017 Hyundai Ioniq hybrid: 2017 Toyota Prius: (Eco trim) 2018 Nissan Leaf: /100 mi (671 kJ/km) or 112 MPGe 2017 Hyundai Ioniq EV: /100 mi (560 kJ/km) or 136 MPGe 2020 Tesla model 3: 24 kWh (86.4 MJ)/100 mi (540 kJ/km) or 141 MPGe Trains Trains are in general one of the most efficient means of transport for freight and passengers. Advantages of trains include low friction of steel wheels on steel rails, as well as an intrinsic high occupancy rate. Train lines are typically used to serve urban or inter-urban transit applications where their capacity utilization is maximized. 
Efficiency varies significantly with passenger loads, losses incurred in electricity generation and supply (for electrified systems), and, importantly, end-to-end delivery, where stations are not the origin or final destination of a journey. While the electric motors used in most passenger trains are more efficient than internal combustion engines, power generation in thermal power plants is limited to (at best) Carnot efficiency, and there are transmission losses on the way from the power plant to the train. Switzerland, which has electrified virtually its entire railway network (heritage railways like the Dampfbahn Furka-Bergstrecke being notable exceptions), derives much of the electricity used by trains from hydropower, including pumped hydro storage. While the mechanical efficiency of the turbines involved is comparatively high, pumped hydro involves energy losses and is only cost effective because it can consume energy during times of excess production (leading to low or even negative spot prices) and release the energy again during high-demand times; its round-trip efficiency nevertheless remains substantial, with some sources claiming up to 87%. Actual consumption depends on gradients, maximum speeds, and loading and stopping patterns. Data produced for the European MEET project (Methodologies for Estimating Air Pollutant Emissions) illustrate the different consumption patterns over several track sections. The results show that the consumption for a German ICE high-speed train varied from around . The Siemens Velaro D type ICE trains seat 460 passengers (16 of them in the restaurant car) in their 200-metre version, two of which can be coupled together. Per Deutsche Bahn calculations, the energy used per 100 seat-km is the equivalent of of gasoline (). The data also reflect the weight of the train per passenger. For example, TGV double-deck Duplex trains use lightweight materials, which keep axle loads down, reduce damage to track, and also save energy. 
The TGV mostly runs on electricity from French nuclear fission power plants, which, like all thermal power plants, are limited to Carnot efficiency. Because nuclear reprocessing is standard operating procedure in France, a higher share of the energy contained in the original uranium is used there than in, for example, the United States with its once-through fuel cycle. The specific energy consumption of trains worldwide amounts to about 150 kJ/pkm (kilojoules per passenger kilometre) and 150 kJ/tkm (kilojoules per tonne kilometre) (ca. 4.2 kWh/100 pkm and 4.2 kWh/100 tkm) in terms of final energy. Passenger transportation by rail systems requires less energy than by car or plane (one seventh of the energy needed to move a person by car in an urban context). This is why, although accounting for 9% of world passenger transportation activity (expressed in pkm) in 2015, rail passenger services represented only 1% of final energy demand in passenger transportation. Freight Energy consumption estimates for rail freight vary widely, and many are provided by interested parties. Some are tabulated below. Passenger Braking losses Having to accelerate and decelerate a heavy train load of people at every stop is inefficient. Modern electric trains therefore use regenerative braking to return current to the catenary while they brake. The International Union of Railways has stated that commuter trains making full stops at every station reduce emissions by 8–14% by employing regenerative braking, and very dense suburban network trains by ~30%. High-speed electric trains like the N700 Series Shinkansen (the Bullet Train) employ regenerative braking, but due to the high speed, the UIC estimates that regenerative braking reduces their emissions by only 4.5%. Buses In July 2005, the average occupancy for buses in the UK was stated to be 9 passengers per vehicle. 
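As a unit check on the worldwide rail figures above, 150 kJ per passenger-km converts to about 4.2 kWh per 100 passenger-km; the same conversion applies to freight figures in kJ per tonne-km.

```python
# Convert an energy intensity in kJ per passenger-km into kWh per
# 100 passenger-km (1 kWh = 3,600 kJ), matching the figures quoted above.

KJ_PER_KWH = 3600

def kj_per_pkm_to_kwh_per_100pkm(kj_per_pkm: float) -> float:
    """kJ per passenger-km -> kWh per 100 passenger-km."""
    return kj_per_pkm * 100 / KJ_PER_KWH

print(round(kj_per_pkm_to_kwh_per_100pkm(150), 2))  # ~4.17, quoted as ca. 4.2
```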
The fleet of 244 1982 New Flyer trolley buses in local service with BC Transit in Vancouver, Canada, in 1994/95 used 35,454,170 kWh for 12,966,285 vehicle km, or 9.84 MJ/vehicle km. Exact ridership on trolleybuses is not known, but with all 34 seats filled this equates to 0.32 MJ/passenger km. It is quite common to see people standing on Vancouver trolleybuses. This is a service with many stops per kilometre; part of the reason for the efficiency is the use of regenerative braking. A commuter service in Santa Barbara, California, USA, found average diesel bus efficiency of (using MCI 102DL3 buses). With all 55 seats filled this equates to 330 passenger mpg; with 70% filled, 231 passenger mpg. In 2011 the fleet of 752 buses in the city of Lisbon had an average speed of 14.4 km/h and an average occupancy of 20.1 passengers per vehicle. Battery electric buses combine the electric motive power of a trolleybus with the routing flexibility of a bus carrying its own power onboard, at the cost of the weight, lifespan and manufacturing drawbacks of batteries. Major manufacturers include BYD and Proterra. Other NASA's Crawler-Transporter was used to haul the Saturn V and Space Shuttle rockets from storage to the launch pad. It uses diesel and has one of the highest fuel consumption rates on record, . Air transport means Aircraft A principal determinant of energy consumption in aircraft is drag, the force that opposes the aircraft's motion. Drag is proportional to the lift required for flight, which is equal to the weight of the aircraft. Because induced drag increases with weight, mass reduction, together with improvements in engine efficiency and reductions in aerodynamic drag, has been a principal source of efficiency gains in aircraft, with a rule-of-thumb being that a 1% weight reduction corresponds to around a 0.75% reduction in fuel consumption. Flight altitude affects engine efficiency. 
Jet-engine efficiency increases at altitude up to the tropopause, the temperature minimum of the atmosphere; at lower temperatures, the Carnot efficiency is higher. Jet engine efficiency is also increased at high speeds, but above about Mach 0.85 the airframe aerodynamic losses increase faster. Compressibility effects: beginning at transonic speeds of around Mach 0.85, shockwaves form, increasing drag. For supersonic flight, it is difficult to achieve a lift-to-drag ratio greater than 5, and fuel consumption is increased in proportion. However, the faster speed inherent to supersonic flight means that the higher fuel burn is counterbalanced by a shorter flight duration. Passenger airplanes averaged 4.8 L/100 km per passenger (1.4 MJ/passenger-km) (49 passenger-miles per gallon) in 1998. On average, 20% of seats are left unoccupied. Jet aircraft efficiencies are improving: between 1960 and 2000 there was a 55% overall fuel efficiency gain (if one excludes the inefficient and limited fleet of the DH Comet 4 and considers the Boeing 707 as the base case). Most of the improvements in efficiency were gained in the first decade, when jet craft first came into widespread commercial use. Compared to advanced piston engine airliners of the 1950s, current jet airliners are only marginally more efficient per passenger-mile. Between 1971 and 1998 the fleet-average annual improvement per available seat-kilometre was estimated at 2.4%. Concorde, the supersonic transport, managed about 17 passenger-miles to the imperial gallon; similar to a business jet, but much worse than a subsonic turbofan aircraft. Airbus puts the fuel consumption of its A380 at less than 3 L/100 km per passenger (78 passenger-miles per US gallon). The mass of an aircraft can be reduced by using light-weight materials such as titanium, carbon fibre and other composite plastics. 
Expensive materials may be used if the reduction of mass justifies their price through improved fuel efficiency. The improvement in fuel efficiency achieved by mass reduction reduces the amount of fuel that needs to be carried. This further reduces the mass of the aircraft and therefore enables further gains in fuel efficiency. For example, the Airbus A380 design includes multiple light-weight materials. Airbus has showcased wingtip devices (sharklets or winglets) that can achieve a 3.5 percent reduction in fuel consumption. There are wingtip devices on the Airbus A380. Further developed Minix winglets have been said to offer a 6 percent reduction in fuel consumption. Winglets at the tip of an aircraft wing smooth out the wing-tip vortex (reducing the aircraft's wing drag) and can be retrofitted to any airplane. NASA and Boeing are conducting tests on a "blended wing" aircraft. This design allows for greater fuel efficiency since the whole craft produces lift, not just the wings. The blended wing body (BWB) concept offers advantages in structural, aerodynamic and operating efficiencies over today's more conventional fuselage-and-wing designs. These features translate into greater range, fuel economy, reliability and life cycle savings, as well as lower manufacturing costs. NASA has created a cruise efficient STOL (CESTOL) concept. The Fraunhofer Institute for Manufacturing Engineering and Applied Materials Research (IFAM) has researched a shark-skin-imitating paint that would reduce drag through a riblet effect. Aircraft are a major potential application for new technologies such as aluminium metal foam and nanotechnology such as the shark-skin-imitating paint. Propeller systems, such as turboprops and propfans, are a more fuel efficient technology than jets. But turboprops have an optimum speed below about 450 mph (700 km/h), less than the cruise speeds used by jets at major airlines today. 
With the current high price of jet fuel and the emphasis on engine/airframe efficiency to reduce emissions, there is renewed interest in the propfan concept for jetliners that might come into service beyond the Boeing 787 and Airbus A350XWB. For instance, Airbus has patented aircraft designs with twin rear-mounted counter-rotating propfans. NASA has conducted an Advanced Turboprop Project (ATP), in which it researched a variable-pitch propfan that produced less noise and achieved high speeds. Related to fuel efficiency is the impact of aviation emissions on climate. Small aircraft Motor-gliders can achieve extremely low fuel consumption for cross-country flights, if favourable thermal air currents and winds are present. At 160 km/h, a diesel-powered two-seater Dieselis burns 6 litres of fuel per hour, 1.9 litres per 100 passenger km. At 220 km/h, a four-seater 100 hp MCR-4S burns 20 litres of gas per hour, 2.2 litres per 100 passenger km. Under continuous motorised flight at 225 km/h, a Pipistrel Sinus burns 11 litres of fuel per flight hour. Carrying 2 people aboard, it operates at 2.4 litres per 100 passenger km. Ultralight aircraft The Tecnam P92 Echo Classic at a cruise speed of 185 km/h burns 17 litres of fuel per flight hour, 4.6 litres per 100 passenger km (2 people). Other modern ultralight aircraft have increased efficiency; the Tecnam P2002 Sierra RG at a cruise speed of 237 km/h burns 17 litres of fuel per flight hour, 3.6 litres per 100 passenger km (2 people). Two-seaters and four-seaters flying at 250 km/h with older-generation engines can burn 25 to 40 litres per flight hour, 3 to 5 litres per 100 passenger km. The Sikorsky S-76C++ twin turbine helicopter gets about at and carries 12 for about 19.8 passenger-miles per gallon (11.9 L per 100 passenger km). Water transport means Ships Queen Elizabeth Cunard stated that Queen Elizabeth 2 travelled 49.5 feet per imperial gallon of diesel oil (3.32 m/L or 41.2 ft/US gal), and that it had a passenger capacity of 1777. 
Thus, carrying 1777 passengers, we can calculate an efficiency of 16.7 passenger miles per imperial gallon (16.9 L/100 p·km or 13.9 p·mpg–US). Cruise ships has a capacity of 6,296 passengers and a fuel efficiency of 14.4 passenger miles per US gallon. Voyager-class cruise ships have a capacity of 3,114 passengers and a fuel efficiency of 12.8 passenger miles per US gallon. Emma Maersk Emma Maersk uses a Wärtsilä-Sulzer RTA96-C, which consumes 163 g/kWh and 13,000 kg/h. If it carries 13,000 containers, then 1 kg of fuel transports one container for one hour over a distance of 45 km. The ship takes 18 days from Tanjung (Singapore) to Rotterdam (Netherlands), 11 from Tanjung to Suez, and 7 from Suez to Rotterdam, which is roughly 430 hours, and has 80 MW, +30 MW. 18 days at a mean speed of gives a total distance of . Assuming the Emma Maersk consumes diesel (as opposed to fuel oil, which would be the more accurate assumption), 1 kg diesel = 1.202 litres = 0.317 US gallons. This corresponds to 46,525 kJ. Assuming a standard 14 tonnes per container (per TEU), this yields 74 kJ per tonne-km at a speed of 45 km/h (24 knots). Boats A sailboat, much like a solar car, can travel without consuming any fuel. A sailing boat such as a dinghy using just wind power requires no input energy in terms of fuel. However, some manual energy is required by the crew to steer the boat and adjust the sails using lines. In addition, energy will be needed for demands other than propulsion, such as cooking, heating or lighting. The fuel efficiency of a single-occupancy boat is highly dependent on the size of its engine, the speed at which it travels, and its displacement. With a single passenger, the equivalent energy efficiency will be lower than in a car, train, or plane. International transport comparisons European Public transport Rail and bus are generally required to serve 'off-peak' and rural services, which by their nature have lower loads than city bus routes and inter-city train lines. 
Moreover, due to their 'walk-on' ticketing, it is much harder to match capacity to daily demand and passenger numbers. As a consequence, the overall load factor on UK railways is 35%, or 90 people per train. Conversely, airline services generally work on point-to-point networks between large population centres and are 'pre-book' in nature. Using yield management, overall load factors can be raised to around 70–90%. Intercity train operators have begun to use similar techniques, with loads typically reaching 71% overall for TGV services in France and a similar figure for the UK's Virgin Rail Group services. For emissions, the electricity generating source needs to be taken into account. US Passenger transport The US Transport Energy Data Book states the following figures for passenger transport in 2018. These are based on actual consumption of energy, at whatever occupancy rates there were. For modes using electricity, losses during generation and distribution are included. Values are not directly comparable due to differences in types of services, routes, etc. US Freight transport The US Transport Energy Data Book states the following figures for freight transport in 2010. From 1960 to 2010 the efficiency of air freight increased by 75%, mostly due to more efficient jet engines. 1 US gal (3.785 L; 0.833 imp gal) of fuel can move a ton of cargo 857 km or 462 nmi by barge, or by rail, or by lorry. Compare: the Space Shuttle used to transport freight to the other side of the Earth (see above) at 40 megajoules per tonne-kilometre; net energy for lifting: 10 megajoules per tonne-kilometre. Canadian transport Natural Resources Canada's Office of Energy Efficiency publishes annual statistics regarding the efficiency of the entire Canadian fleet. For researchers, these fuel consumption estimates are more realistic than the fuel consumption ratings of new vehicles, as they represent real-world driving conditions, including extreme weather and traffic. 
The annual report is called Energy Efficiency Trends Analysis. There are dozens of tables illustrating trends in energy consumption, expressed in energy per passenger-km (passengers) or energy per tonne-km (freight). French environmental calculator The environmental calculator of the French environment and energy agency (ADEME), published in 2007 using data from 2005, enables one to compare the different means of transport with regard to emissions (in terms of carbon dioxide equivalent) as well as consumption of primary energy. In the case of an electric vehicle, ADEME makes the assumption that 2.58 toe of primary energy are necessary for producing one toe of electricity as end-use energy in France (see Embodied energy: In the energy field). This computer tool devised by ADEME shows the importance of public transport from an environmental point of view. It highlights the primary energy consumption as well as the emissions due to transport. Due to the relatively low environmental impact of radioactive waste, compared to that of fossil fuel combustion emissions, this is not a factor in the tool. Moreover, intermodal passenger transport is probably a key to sustainable transport, as it allows people to use less polluting means of transport. German environmental costs calculates the energy consumption of their various means of transportation. Note - External costs not included above To include all the energy used in transport, we would also need to include the external energy costs of producing, transporting and packaging fuel (food, fossil fuel or electricity), the energy incurred in disposing of exhaust waste, and the energy costs of manufacturing the vehicle. For example, a human walking requires little or no special equipment, while automobiles require a great deal of energy to produce and have relatively short product lifespans. 
However, these external costs are independent of the energy cost per distance travelled, and can vary greatly for a particular vehicle depending on its lifetime, how often it is used and how it is energized over its lifetime. Thus this article's numbers include none of these external factors. See also ACEA agreement Alternative fuel vehicle Brake-specific fuel consumption Car speed and energy consumption Corporate average fuel economy (CAFE) Emission standard Fuel economy in automobiles Fuel-management systems Gas-guzzler Gasoline gallon equivalent Life-cycle assessment Marine fuel management Thrust-specific fuel consumption Vehicular metrics Von Kármán–Gabrielli diagram - What Price Speed? Transport Transport ecology Speed record Footnotes External links ECCM Study for rail, road and air journeys between main UK cities Traction Summary Report 2007– Prof. Roger Kemp Transport Energy Data Book (US) Fuel Consumption Ratings Infographic on Energy Efficiency in Transportation Webarchive template wayback links Energy conservation Fuels Energy use comparisons
Energy efficiency in transport
https://en.wikipedia.org/wiki/Internet%20intervention
Internet intervention, in a medical context, refers to the delivery of health care-related treatments through the Internet. References Therapy Internet
Internet intervention
https://en.wikipedia.org/wiki/BEAM%20robotics
BEAM robotics (from biology, electronics, aesthetics and mechanics) is a style of robotics that primarily uses simple analogue circuits, such as comparators, instead of a microprocessor in order to produce an unusually simple design. While not as flexible as microprocessor-based robotics, BEAM robotics can be robust and efficient in performing the task for which it was designed. BEAM robots may use a set of analog circuits, mimicking biological neurons, to facilitate the robot's response to its working environment. Mechanisms and principles The basic BEAM principles focus on a stimulus-response based ability within a machine. The underlying mechanism was invented by Mark W. Tilden, whose circuit (an "Nv net" of "Nv neurons") is used to simulate biological neuron behaviours. Some similar research was previously done by Ed Rietman in 'Experiments In Artificial Neural Networks'. Tilden's circuit is often compared to a shift register, but with several important features making it a useful circuit in a mobile robot. Other rules that are included (and to varying degrees applied): Use the lowest possible number of electronic elements ("keep it simple") Recycle and reuse technoscrap Use radiant energy (such as solar power) There are a large number of BEAM robots designed to use solar power from small solar arrays to power a "Solar Engine", which creates autonomous robots capable of operating under a wide range of lighting conditions. Besides the simple computational layer of Tilden's "Nervous Networks", BEAM has brought a multitude of useful tools to the roboticist's toolbox. The "Solar Engine" circuit, many H-bridge circuits for small motor control, tactile sensor designs, and meso-scale (palm-sized) robot construction techniques have been documented and shared by the BEAM community. 
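The comparison of Tilden's circuit to a shift register can be illustrated with a rough discrete-time sketch: a ring of pulse-delay stages around which a single active pulse circulates, each stage firing one motor phase in turn, as in the gait generator of a simple walker. This is a deliberate simplification for illustration; real Nv neurons are analog RC delay elements, not clocked registers.

```python
# A rough discrete-time sketch of an "Nv net" ring: one active pulse moves
# from stage to stage, so the motor phases fire in a fixed cycle. This is a
# simplification, not a circuit-level model of Tilden's analog design.

def step(ring: list) -> list:
    """Advance the pulse one stage around the ring (stage i copies stage i-1)."""
    return [ring[i - 1] for i in range(len(ring))]

ring = [1, 0, 0, 0]          # 4-neuron ring: one pulse, four motor phases
sequence = []
for _ in range(4):
    sequence.append(ring.index(1))  # which motor phase is active this step
    ring = step(ring)
print(sequence)  # [0, 1, 2, 3] -- the phases fire in order, then repeat
```

Because the pulse pattern, not a stored program, determines the gait, perturbing the ring (as rough terrain would, via motor feedback in a real Nv walker) changes the behaviour directly.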
BEAM robots Being focused on "reaction-based" behaviors (as originally inspired by the work of Rodney Brooks), BEAM robotics attempts to copy the characteristics and behaviours of biological organisms, with the ultimate goal of domesticating these "wild" robots. The aesthetics of BEAM robots derive from the principle "form follows function", modulated by the particular design choices the builder makes while implementing the desired functionality. Disputes in the name Various people have varying ideas about what BEAM actually stands for. The most widely accepted meaning is Biology, Electronics, Aesthetics, and Mechanics. This term originated with Mark Tilden during a discussion at the Ontario Science Centre in 1990, where Tilden was displaying a selection of his original bots, which he had built while working at the University of Waterloo. However, there are many other semi-popular names in use, including: Biotechnology Ethology Analogy Morphology Building Evolution Anarchy Modularity Microcontrollers Unlike many other types of robots controlled by microcontrollers, BEAM robots are built on the principle of using multiple simple behaviours linked directly to sensor systems with little signal conditioning. This design philosophy is closely echoed in the classic book "Vehicles: Experiments in Synthetic Psychology". Through a series of thought experiments, this book explores the development of complex robot behaviours through simple inhibitory and excitatory sensor links to the actuators. Microcontrollers and computer programming are usually not a part of a traditional (a.k.a. "pure") BEAM robot due to the very low-level, hardware-centric design philosophy. There are, however, successful robot designs mating the two technologies. These "hybrids" fulfill a need for robust control systems with the added flexibility of dynamic programming, like the "horse-and-rider" topology BEAMbots (e.g. the ScoutWalker 3). 
'Horse' behavior is implemented with traditional BEAM technology but a microcontroller based 'rider' can guide that behavior so as to accomplish the goals of the 'rider'. Types There are various "-trope" BEAMbots, which attempt to achieve a specific goal. Of the series, the phototropes are the most prevalent, as light-seeking would be the most beneficial behaviour for a solar-powered robot. Audiotropes react to sound sources. Audiophiles go towards sound sources. Audiophobes go away from sound sources. Phototropes ("light-seekers") react to light sources. Photophiles (also Photovores) go toward light sources. Photophobes go away from light sources. Radiotropes react to radio frequency sources. Radiophiles go toward RF sources. Radiophobes go away from RF sources. Thermotropes react to heat sources. Thermophiles go toward heat sources. Thermophobes go away from heat sources. General BEAMbots have a variety of movements and positioning mechanisms. These include: Sitters: Unmoving robots that have a physically passive purpose. Beacons: Transmit a signal (usually a navigational blip) for other BEAMbots to use. Pummers : Display a "light show" or a pattern of sounds. Pummers are often nocturnal robots that store solar energy during the day, then activate during the night. Ornaments : A catch-all name for sitters which are not beacons or pummers. Many times, these are mostly electronic art. Squirmers: Stationary robots that perform an interesting action (usually by moving some sort of limbs or appendages). Magbots: use magnetic fields for their mode of animation. Flagwavers: Move a display (or "flag") around at a certain frequency. Heads: Pivot and follow some detectable phenomena, such as a light (These are popular in the BEAM community. They can be stand-alone robots, but are more often incorporated into a larger robot.). Vibrators: Use a small pager motor with an off-centre weight to shake themselves about. 
Sliders: Robots that move by sliding body parts smoothly along a surface while remaining in contact with it. Snakes: Move using a horizontal wave motion. Earthworms: Move using a longitudinal wave motion. Crawlers: Robots that move using tracks or by rolling the robot's body with some sort of appendage. The body of the robot is not dragged on the ground. Turbots: Roll their entire bodies using their arms or flagella. Inchworms: Move part of their bodies ahead, while the rest of the chassis is on the ground. Tracked robots: Use tracked wheels, like a tank. Jumpers: Robots which propel themselves off the ground as a means of locomotion. Vibrobots: Produce an irregular shaking motion moving themselves around a surface. Springbots: Move forward by bouncing in one particular direction. Rollers: Robots that move by rolling all or part of their body. Symets: Driven using a single motor with its shaft touching the ground, and moves in different directions depending on which of several symmetric contact points around the shaft are touching the ground. Solarrollers: Solar-powered cars that use a single motor driving one or more wheels; often designed to complete a fairly short, straight and level course in the shortest amount of time. Poppers: Use two motors with separate solar engines; rely on differential sensors to achieve a goal. Miniballs: Shift their centre of mass, causing their spherical bodies to roll. Walkers: Robots that move using legs with differential ground contact. BEAM walkers generally use Nv networks and are not programmed in any way—they walk and respond to terrain via resistive input from their motors. Motor Driven: Use motors to move their legs (typically 3 motors or less). Muscle Wire Driven: use Nitinol (nickel - titanium alloy) wires for their leg actuators. Swimmers: Also called aquabots or aquavores. Robots that move on or below the surface of a liquid (typically water). Boatbots: Operate on the surface of a liquid. 
Subbots: Operate under the surface of a liquid. Fliers: Robots that move through the air for sustained periods. Helicopters: Use a powered rotor to provide both lift and propulsion. Planes: Use fixed or flapping wings to generate lift. Blimps: Use a neutrally-buoyant balloon for lift. Climbers: Robots that move up or down a vertical surface, usually on a track such as a rope or wire. Applications and current progress At present, autonomous robots have seen limited commercial application, with some exceptions such as the iRobot Roomba robotic vacuum cleaner and a few lawn-mowing robots. The main practical application of BEAM has been in the rapid prototyping of motion systems and in hobby and education applications. Mark Tilden has successfully used BEAM in prototyping products for Wow-Wee Robotics, as evidenced by the B.I.O.Bug and RoboRaptor. Solarbotics Ltd., Bug'n'Bots, JCM InVentures Inc., and PagerMotors.com have also brought BEAM-related hobby and educational goods to the marketplace. Vex has also developed Hexbugs, tiny BEAM robots. Aspiring BEAM roboticists often have problems with the lack of direct control over "pure" BEAM control circuits. There is ongoing work to evaluate biomorphic techniques that copy natural systems, because these seem to have a remarkable performance advantage over traditional techniques. There are many examples of tiny insect brains capable of far better performance than the most advanced microelectronics. Another barrier to widespread application of BEAM technology is the perceived random nature of the 'nervous network', which requires new techniques to be learned by the builder to successfully diagnose and manipulate the characteristics of the circuitry. A think-tank of international academics meets annually in Telluride, Colorado, to address this issue directly, and until recently Mark Tilden was part of this effort (he had to withdraw due to his new commercial commitments with Wow-Wee toys). 
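The sensor-to-actuator philosophy behind the phototropes listed above, echoed in the "Vehicles" thought experiments cited earlier, can be sketched as a minimal Braitenberg-style photophile: two light sensors cross-wired to two motor speeds, so the robot steers toward the brighter side. This illustrates the control idea only; it is not a circuit-level model of a BEAM robot.

```python
# Minimal Braitenberg-style photophile: each motor is driven by the opposite
# side's light reading, so the robot turns toward the light source. A
# photophobe would simply use straight (uncrossed) wiring instead.

def motor_speeds(left_light: float, right_light: float):
    """Cross-coupling: a brighter right sensor drives the left motor harder,
    steering the robot to the right, toward the light (and vice versa)."""
    left_motor = right_light
    right_motor = left_light
    return left_motor, right_motor

# Light is stronger on the right: the left motor spins faster, turning right.
left, right = motor_speeds(0.2, 0.8)
print(left > right)  # True -> the robot turns toward the light
```

The entire "controller" is two wires; complexity of behaviour emerges from the interaction with the environment, which is the core BEAM argument against microprocessor-first designs.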
Having no long-term memory, BEAM robots generally do not learn from past behaviour. However, there has been work in the BEAM community to address this issue. One of the most advanced BEAM robots in this vein is Bruce Robinson's Hider, which has an impressive degree of capability for a microprocessor-less design. Publications Patents - Method of and Apparatus for Controlling Mechanism of Moving Vehicle or Vehicles - Tesla's "telautomaton" patent; First logic gate. - Adaptive robotic nervous systems and control circuits therefor - Tilden's patent; A self-stabilizing control circuit using pulse delay circuits for controlling the limbs of a limbed robot, and a robot incorporating such a circuit; artificial "neurons". Books and papers Conrad, James M., and Jonathan W. Mills, "Stiquito: advanced experiments with a simple and inexpensive robot", The future for nitinol-propelled walking robots, Mark W. Tilden. Los Alamitos, Calif., IEEE Computer Society Press, c1998. LCCN 96029883 Tilden, Mark W., and Brosl Hasslacher, "Living Machines". Los Alamos National Laboratory, Los Alamos, NM 87545, USA. Tilden, Mark W. and Brosl Hasslacher, "The Design of "Living" Biomech Machines: How low can one go?"". Los Alamos National Laboratory, Los Alamos, NM 87545, USA. Still, Susanne, and Mark W. Tilden, "Controller for a four legged walking machine". ETH Zuerich, Institute of Neuroinformatics, and Biophysics Division, Los Alamos National Laboratory. Braitenberg, Valentino, "Vehicles: Experiments in Synthetic Psychology", 1984. Rietman, Ed, "Experiments In Artificial Neural Networks", 1988. Tilden, Mark W., and Brosl Hasslacher, "Robotics and Autonomous Machines: The Biology and Technology of Intelligent Autonomous Agents", LANL Paper ID: LA-UR-94-2636, Spring 1995. Dewdney, A.K. "Photovores: Intelligent Robots are Constructed From Castoffs". Scientific American Sept 1992, v267, n3, p42(1) Smit, Michael C., and Mark Tilden, "Beam Robotics". Algorithm, Vol. 2, No. 2, March 1991, Pg 15–19. 
Hrynkiw, David M., and Tilden, Mark W., "Junkbots, Bugbots, and Bots on Wheels", 2002. (Book support website) See also Analogue robot – a robot that uses analog circuitry to pursue a simple goal Braitenberg vehicle – a robot that can exhibit intelligent behavior while remaining completely stateless Brosl Hasslacher – theoretical physicist Behaviour-based robotics – branch of robotics that does not use an internal model of the environment Emergent behaviour – the process of complex pattern formation from simpler rules Protoscience Stiquito – a hobbyist robot designed as a nitinol-powered hexapod walker Turtle (robot) – early forms of the turtlebot were the beginning of BEAM work William Grey Walter – neurophysiologist and roboticist Wired intelligence – a robot that has no programmed microprocessor and possesses analogue electronics between its sensors and motors that give it seemingly intelligent actions References External links BEAM Yahoo! Group Archive Solarbotics, "BEAM community server and hosting", 2003 Miller, Andrew, "The MicroCore" Bolt, Steven, "PiTronics", October 2004 Van Zoelen, A. A., "BEAM Robotics", 1998 Robinson, Bruce N., "Hider", 2005 Walke, Kevin, "Mark Tilden Interview", March 2000 Fang, Chiu-Yuan, "BEAM Robotics", 1999 Bernstein, Ian, "BEAM Online", 2003 Beamitaly, "BeamItaly", 1998.
BEAM robotics
https://en.wikipedia.org/wiki/C/2008%20T2%20%28Cardinal%29
C/2008 T2 (Cardinal) is a non-periodic comet. It was discovered by Rob D. Cardinal of the University of Calgary. It was visible as a telescopic and binocular object during 2009. It passed near the Perseus star clusters NGC 1528 on March 15 and NGC 1545 on March 17, 2009. It also passed near the Auriga star clusters M38 on April 14, M36 on April 17, and M37 on April 21, 2009, and passed near Comet Lulin on May 12, 2009, as seen from Earth. It peaked in brightness in June–July 2009 at magnitude 8.5–9. References External links Non-periodic comets Hyperbolic comets 2009 in science
C/2008 T2 (Cardinal)
Astronomy
156
48,191,844
https://en.wikipedia.org/wiki/Scandiobabingtonite
Scandiobabingtonite was first discovered in the Montecatini granite quarry near Baveno, Italy, in a pegmatite cavity. Though found in pegmatites, the crystals of scandiobabingtonite are sub-millimeter sized and tabular in shape. Scandiobabingtonite was the sixth naturally occurring mineral discovered containing the rare earth element scandium, and it grows around babingtonite, with which it is isostructural, hence the name. It is also referred to as scandian babingtonite. The ideal chemical formula for scandiobabingtonite is Ca2(Fe2+,Mn)ScSi5O14(OH). Occurrence Scandiobabingtonite is found in association with orthoclase, quartz, light blue albite, stilbite, fluorite, and mica. When found with these minerals, the scandiobabingtonite crystals are implanted on the surface of the other minerals. It also occurs as growth around green-black prismatic crystals of babingtonite. The samples of scandiobabingtonite that have been discovered show that the crystals start out growing from a seed of babingtonite crystal, which is how scandiobabingtonite gets its chemical structure. The starting seed of babingtonite is still present in the center of the resulting crystal and can be detected with optical and chemical studies. Scandiobabingtonite is an extremely rare mineral, occurring in very small amounts in few locations around the world. It is one of thirteen naturally occurring minerals in which scandium is a dominant member. The other scandium minerals are bazzite, cascandite, heftetjernite, jervisite, juonniite, kolbeckite, kristiansenite, magbasite, oftedalite, pretulite, thortveitite, and titanowodginite. Scandium can also concentrate in other minerals, such as ferromagnesian minerals, aluminum phosphate minerals, meteoric minerals, and other minerals containing rare earth elements, but only in trace amounts. Physical properties Scandiobabingtonite is a colorless or light gray-green transparent mineral with a glassy or vitreous luster.
It exhibits a hardness of 6 on the Mohs hardness scale. Scandiobabingtonite occurs as short, prismatic crystals that are slightly elongated on the [001] axis, which gives them a tabular or platy shape. Its crystals are characterized by the {010}, {001}, {110}, {1-10}, and {101} faces. Scandiobabingtonite is brittle and shows perfect cleavage along the {001} and {1-10} planes. The measured density is 3.24 g/cm3. Optical properties Scandiobabingtonite is biaxial positive, which means it refracts light along two axes. It exhibits a 2V(measured)=64(2)°, strong dispersion with r>v, and strong pleochroism with colors ranging from pink (γ') to green (α'). The extinction angle along (110) is 6°. Z:Φ=-250°, ρ=47°; Y:Φ=146°, ρ=75°; X:Φ=42°, ρ=47°. Chemical properties Scandiobabingtonite is isostructural with babingtonite and shares its chemical properties. It is an inosilicate with 5-periodic single chains. Scandium replaces the Fe3+ of babingtonite in six-fold coordination. The empirical chemical formula for scandiobabingtonite is (Ca1.71,Na0.25)Σ0.97(Fe2+0.65,Mn0.32)Σ0.97(Sc0.91,Sn0.04,Fe3+0.03)Σ0.98Si5.09O14.00(OH)1.00. Simplified, the formula is Ca2(Fe2+,Mn)ScSi5O14(OH). Chemical composition X-ray crystallography Scandiobabingtonite is in the triclinic crystal system, with space group P1̄ (No. 2). The unit cell dimensions are a=7.536(2) Å, b=11.734(2) Å, c=6.748(2) Å, α=91.70(2)°, β=93.86(2)°, γ=104.53(2)°. These dimensions are almost identical to those of babingtonite. The difference in dimensions is caused by the replacement of iron with scandium in the Fe3+-centered octahedra: the Fe3+-O distance measures 2.048 Å, while the Sc-O distance is 2.092 Å, equating to a slightly larger octahedron in scandiobabingtonite than in babingtonite. See also List of minerals References Natural materials Scandium minerals Scandium compounds Triclinic minerals Minerals in space group 2
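As a consistency check on the figures above, the triclinic cell volume implied by these lattice parameters can be combined with the simplified formula to reproduce the measured density. The sketch below assumes Z = 2 formula units per cell (the value in babingtonite) and the pure-iron end-member composition Ca2FeScSi5O14(OH); both assumptions are illustrative and are not stated in the text.

```python
import math

# Unit-cell parameters from the text (angstroms and degrees)
a, b, c = 7.536, 11.734, 6.748
alpha, beta, gamma = map(math.radians, (91.70, 93.86, 104.53))

# Triclinic cell volume:
# V = abc * sqrt(1 - cos^2(a) - cos^2(b) - cos^2(g) + 2 cos(a) cos(b) cos(g))
ca, cb, cg = math.cos(alpha), math.cos(beta), math.cos(gamma)
volume = a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Molar mass of the simplified end-member Ca2FeScSi5O14(OH), in g/mol
molar_mass = (2 * 40.078      # Ca
              + 55.845        # Fe
              + 44.956        # Sc
              + 5 * 28.086    # Si
              + 14 * 15.999   # O
              + (15.999 + 1.008))  # OH

# Theoretical density with Z = 2 formula units per cell (assumed, as in babingtonite)
Z, N_A = 2, 6.022e23
density = Z * molar_mass / (N_A * volume * 1e-24)  # g/cm^3
```

The computed volume is about 576 Å³ and the resulting density is close to the measured 3.24 g/cm³, supporting the reported cell dimensions.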
Scandiobabingtonite
Physics
1,099
17,699,830
https://en.wikipedia.org/wiki/Shutdown%20%28computing%29
To shut down or power off a computer is to remove power from a computer's main components in a controlled way. After a computer is shut down, main components such as CPUs, RAM modules and hard disk drives are powered down, although some internal components, such as an internal clock, may retain power. Implementations The shutdown feature and command are available in Microsoft Windows, ReactOS, HP MPE/iX, and in a number of Unix and Unix-like operating systems such as Apple macOS. Microsoft Windows and ReactOS In Microsoft Windows and ReactOS, a PC or server is shut down by selecting the item from the Start menu on the desktop. Options include shutting down the system and powering off, automatically restarting the system after shutting down, or putting the system into stand-by mode. Like other operating systems, Windows has the option to prohibit selected users from shutting down a computer. On a home PC every user may have the shutdown option, but on computers in large networks (such as those using Active Directory), an administrator can revoke the access rights of selected users to shut down a Windows computer. There are also many software utilities which can automate the task of shutting down a Windows computer, enabling automatic computer control. In Windows, a program can shut down the system by calling the ExitWindowsEx or NtShutdownSystem function. Command-line interface There is also a shutdown command that can be executed within a command shell window. shutdown.exe is the command-line shutdown application (located in %windir%\System32\shutdown.exe) that can shut down the user's computer or another computer on the user's network. Different parameters allow different functions, and more than one parameter can be used at a time for this command.
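The ExitWindowsEx route can be sketched in Python via ctypes. This is a minimal illustration, not from the article: the flag values are the standard Windows SDK constants, while the function name request_shutdown and its dry_run guard are invented here so the snippet stays safe to run anywhere (a real call only works on Windows and normally also requires the SE_SHUTDOWN_NAME privilege).

```python
import ctypes
import sys

# Flag values for the Win32 ExitWindowsEx call (standard Windows SDK constants)
EWX_LOGOFF = 0x00000000
EWX_SHUTDOWN = 0x00000001
EWX_REBOOT = 0x00000002
EWX_FORCE = 0x00000004
EWX_POWEROFF = 0x00000008

def request_shutdown(flags=EWX_SHUTDOWN, dry_run=True):
    """Ask Windows to shut down; with dry_run=True, only report the intended call."""
    if dry_run or sys.platform != "win32":
        return f"would call ExitWindowsEx(0x{flags:08x}, 0)"
    # The calling process normally needs the SE_SHUTDOWN_NAME privilege enabled
    # before ExitWindowsEx will succeed; that step is omitted in this sketch.
    return ctypes.windll.user32.ExitWindowsEx(flags, 0)
```

Flags combine bitwise, mirroring how shutdown.exe accepts several parameters at once, e.g. EWX_SHUTDOWN | EWX_FORCE to force applications closed during shutdown.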
Apple macOS In Apple macOS the computer can be shut down by choosing "Shut Down…" from the Apple Menu, by pressing key/button (or key), or by pressing the power key to bring up the power management dialog box and selecting the "Shut Down" button. An administrator may also use the Unix shutdown command. The computer can also be shut down by pressing key/button (or key) or clicking Shut Down on the Apple Menu while holding the key, but this will not prompt the user at all. On newer and some older Apple computers, starting with Mac OS 9, the user is given a time limit within which the computer will automatically shut down if the user does not click the "Shut Down" button. Unix and Linux In Unix and Linux, the shutdown command can be used to turn off or reboot a computer. Only the superuser or a user with special privileges can shut the system down. One commonly issued form of this command is shutdown -h now, which will shut down a system immediately; another is shutdown -r now to reboot. Another form allows the user to specify an exact time or a delay before shutdown: shutdown -h 20:00 will turn the computer off at 8:00 PM, and shutdown -r +1 will automatically reboot the machine within one minute of issuing the command. See also Booting Hibernation Sleep mode Ctrl+Alt+Del References Further reading External links shutdown Microsoft Docs shutdown.cc – an article about various ways of automated and manual shutting down of Microsoft Windows Operating system technology Windows commands Windows administration MacOS Unix process- and task-management-related software
Shutdown (computing)
Technology
735
140,558
https://en.wikipedia.org/wiki/Fiber
Fiber (also spelled fibre in British English) is a natural or artificial substance that is significantly longer than it is wide. Fibers are often used in the manufacture of other materials. The strongest engineering materials often incorporate fibers, for example carbon fiber and ultra-high-molecular-weight polyethylene. Synthetic fibers can often be produced very cheaply and in large amounts compared to natural fibers, but for clothing natural fibers have some benefits, such as comfort, over their synthetic counterparts. Natural fibers Natural fibers develop or occur in the fiber shape, and include those produced by plants, animals, and geological processes. They can be classified according to their origin: Vegetable fibers are generally based on arrangements of cellulose, often with lignin: examples include cotton, hemp, jute, flax, abaca, piña, ramie, sisal, bagasse, and banana. Plant fibers are employed in the manufacture of paper and textile (cloth), and dietary fiber is an important component of human nutrition. Wood fiber, distinguished from vegetable fiber, is from tree sources. Forms include groundwood, lacebark, thermomechanical pulp (TMP), and bleached or unbleached kraft or sulfite pulps. Kraft and sulfite refer to the type of pulping process used to remove the lignin bonding the original wood structure, thus freeing the fibers for use in paper and engineered wood products such as fiberboard. Animal fibers consist largely of particular proteins. Instances are silkworm silk, spider silk, sinew, catgut, wool, sea silk and hair such as cashmere wool, mohair and angora, fur such as sheepskin, rabbit, mink, fox, beaver, etc. Mineral fibers include the asbestos group. Asbestos is the only naturally occurring long mineral fiber. Six minerals have been classified as "asbestos" including chrysotile of the serpentine class and those belonging to the amphibole class: amosite, crocidolite, tremolite, anthophyllite and actinolite.
Short, fiber-like minerals include wollastonite and palygorskite. Biological fibers, also known as fibrous proteins or protein filaments, consist largely of biologically important proteins, in which mutations or other genetic defects can lead to severe diseases. Instances include the collagen family of proteins, tendons, muscle proteins like actin, cell proteins like microtubules and many others, such as spider silk, sinew, and hair. Artificial fibers Artificial or chemical fibers are fibers whose chemical composition, structure, and properties are significantly modified during the manufacturing process. In fashion, a fiber is a long and thin strand or thread of material that can be knit or woven into a fabric. Artificial fibers consist of regenerated fibers and synthetic fibers. Semi-synthetic fibers Semi-synthetic fibers are made from raw materials with a naturally long-chain polymer structure and are only modified and partially degraded by chemical processes, in contrast to completely synthetic fibers such as nylon (polyamide) or dacron (polyester), which the chemist synthesizes from low-molecular-weight compounds by polymerization (chain-building) reactions. The earliest semi-synthetic fiber is the cellulose regenerated fiber, rayon. Most semi-synthetic fibers are cellulose regenerated fibers. Cellulose regenerated fibers Cellulose fibers are a subset of artificial fibers, regenerated from natural cellulose. The cellulose comes from various sources: rayon from tree wood fiber, bamboo fiber from bamboo, seacell from seaweed, etc. In the production of these fibers, the cellulose is reduced to a fairly pure form as a viscous mass and formed into fibers by extrusion through spinnerets. Therefore, the manufacturing process leaves few characteristics distinctive of the natural source material in the finished products. Some examples of this fiber type are rayon, Lyocell (a brand of rayon), Modal, diacetate fiber, and triacetate fiber.
Historically, cellulose diacetate and triacetate were classified under the term rayon, but are now considered distinct materials. Synthetic fibers Synthetic fibers come entirely from synthetic materials such as petrochemicals, unlike those artificial fibers derived from natural substances such as cellulose or protein. Fiber classification in reinforced plastics falls into two classes: (i) short fibers, also known as discontinuous fibers, with a general aspect ratio (defined as the ratio of fiber length to diameter) between 20 and 60, and (ii) long fibers, also known as continuous fibers, with a general aspect ratio between 200 and 500. Metallic fibers Metallic fibers can be drawn from ductile metals such as copper, gold or silver, and extruded or deposited from more brittle ones, such as nickel, aluminum or iron. Carbon fiber Carbon fibers are often based on polymers such as PAN that are oxidized and then carbonized via pyrolysis, but the end product is almost pure carbon. Silicon carbide fiber Silicon carbide fibers are made from basic polymers that are not hydrocarbons but so-called poly-carbo-silanes, in which about 50% of the carbon atoms are replaced by silicon atoms. The pyrolysis yields an amorphous silicon carbide, mostly also containing other elements like oxygen, titanium, or aluminium, but with mechanical properties very similar to those of carbon fibers. Fiberglass Fiberglass, made from specific glass, and optical fiber, made from purified natural quartz, are also artificial fibers that come from natural raw materials, as are silica fiber, made from sodium silicate (water glass), and basalt fiber, made from melted basalt. Mineral fibers Mineral fibers can be particularly strong because they are formed with a low number of surface defects; asbestos is a common one. Polymer fibers Polymer fibers are a subset of artificial fibers, which are based on synthetic chemicals (often from petrochemical sources) rather than arising from natural materials by a purely physical process.
These fibers are made from: polyamide nylon PET or PBT polyester phenol-formaldehyde (PF) polyvinyl chloride fiber (PVC) vinyon polyolefins (PP and PE) olefin fiber acrylic polyesters, pure polyester PAN fibers are used to make carbon fiber by roasting them in a low-oxygen environment. Traditional acrylic fiber is used more often as a synthetic replacement for wool. Carbon fibers and PF fibers are noted as two resin-based fibers that are not thermoplastic; most others can be melted. Aromatic polyamides (aramids) such as Twaron, Kevlar and Nomex thermally degrade at high temperatures and do not melt; these fibers have strong bonding between polymer chains. polyethylene (PE), eventually with extremely long chains / HMPE (e.g. Dyneema or Spectra). Elastomers can even be used, e.g. spandex, although urethane fibers are starting to replace spandex technology. polyurethane fiber Elastolefin Coextruded fibers have two distinct polymers forming the fiber, usually as a core-sheath or side by side. Coated fibers exist, such as nickel-coated to provide static elimination, silver-coated to provide anti-bacterial properties and aluminum-coated to provide RF deflection for radar chaff. Radar chaff is actually a spool of continuous glass tow that has been aluminum coated. An aircraft-mounted high-speed cutter chops it up as it spews from a moving aircraft to confuse radar signals. Microfibers Invented in Japan in the early 1980s, microfibers are also known as microdenier fibers. Acrylic, nylon, polyester, lyocell and rayon can be produced as microfibers. In 1986, Hoechst A.G. of Germany produced microfiber in Europe. The fiber made its way into the United States in 1990, introduced by DuPont. Microfibers in textiles refer to sub-denier fiber (such as polyester drawn to 0.5 denier). Denier and dtex are two measurements of fiber yield based on weight and length. If the fiber density is known, the fiber diameter can also be derived; otherwise it is simpler to measure diameters in micrometers.
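The denier-to-diameter relationship mentioned above can be made concrete: denier is the mass in grams of 9000 m of fiber, so dividing that linear density by the material density gives a cross-sectional area. The sketch below assumes a circular cross-section and a typical PET density of 1.38 g/cm³ (an assumed value, not from the text), applied to the 0.5-denier polyester example.

```python
import math

def fiber_diameter_um(denier, density_g_cm3):
    """Estimate the diameter (micrometers) of a round fiber from its denier.

    Denier is the mass in grams of 9000 meters of fiber, so mass per unit
    length divided by density gives the cross-sectional area.
    """
    grams_per_cm = denier / (9000 * 100)       # 9000 m = 900,000 cm
    area_cm2 = grams_per_cm / density_g_cm3    # circular cross-section assumed
    diameter_cm = 2 * math.sqrt(area_cm2 / math.pi)
    return diameter_cm * 1e4                   # cm -> micrometers

# A 0.5-denier PET microfiber works out to roughly 7 micrometers across
print(fiber_diameter_um(0.5, 1.38))
```

This matches the usual description of microfibers as finer than a human hair (which is on the order of tens of micrometers).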
Microfibers in technical fibers refer to ultra-fine fibers (glass or meltblown thermoplastics) often used in filtration. Newer fiber designs include extruding fiber that splits into multiple finer fibers. Most synthetic fibers are round in cross-section, but special designs can be hollow, oval, star-shaped or trilobal. The latter design provides more optically reflective properties. Synthetic textile fibers are often crimped to provide bulk in a woven, non-woven or knitted structure. Fiber surfaces can also be dull or bright. Dull surfaces reflect more light while bright tends to transmit light and make the fiber more transparent. Very short and/or irregular fibers have been called fibrils. Natural cellulose, such as cotton or bleached kraft, shows smaller fibrils jutting out and away from the main fiber structure. Typical properties of selected fibers Fibers can be divided into natural and artificial (synthetic) substances, and their properties can affect their performance in many applications. Synthetic fiber materials are increasingly replacing other conventional materials like glass and wood in a number of applications, because artificial fibers can be engineered chemically, physically, and mechanically to suit particular technical requirements. In choosing a fiber type, a manufacturer balances the fibers' properties with the technical requirements of the application. Various fibers are available to select for manufacturing. Here are typical properties of sample natural fibers as compared to the properties of artificial fibers.
The tables above show only typical properties of fibers; in practice, many more properties may be relevant, including (from A to Z): arc resistance, biodegradability, coefficient of linear thermal expansion, continuous service temperature, density, ductile/brittle transition temperature, elongation at break, elongation at yield, fire resistance, flexibility, gamma radiation resistance, gloss, glass transition temperature, hardness, heat deflection temperature, shrinkage, stiffness, ultimate tensile strength, thermal insulation, toughness, transparency, UV light resistance, volume resistivity, water absorption, and Young's modulus. See also Ceramic matrix composite Dietary fiber Fiber crop Fiber simulation Fibers in Differential Geometry Molded fiber Nerve fiber Optical fiber References Materials Textiles
Fiber
Physics
2,165
11,403,687
https://en.wikipedia.org/wiki/Uredo%20nigropuncta
Uredo nigropuncta is a fungal plant pathogen. It is known as a pathogen of Cattleya orchids. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Orchid diseases Pucciniomycotina Fungus species
Uredo nigropuncta
Biology
55
52,002,298
https://en.wikipedia.org/wiki/National%20Centre%20for%20Biotechnology%20Education
The National Centre for Biotechnology Education (NCBE) is a national resource centre at the University of Reading to teach pre-university biotechnology in schools in the UK. It was founded in 1990. History It began as the National Centre for School Biotechnology (NCSB) in 1985 in the Department of Microbiology. It became the NCBE in 1990. For many years it was the only centre in Europe that was devoted to the teaching of biotechnology in schools (the Dolan DNA Learning Center had been set up in the USA). It was set up as an education project by the Society for General Microbiology, now the Microbiology Society. Money from the Laboratory of the Government Chemist set up the National Centre for School Biotechnology (NCSB). Money also came from the Gatsby Charitable Foundation. For the first five years, the UK government's DTI was involved, but from 1990 onwards it wanted the organization to become self-supporting as it had to cut back on budgets. By 1992 the government provided no money for the centre. Structure The site was set up in former buildings of the University of Reading's Department of Microbiology. In 2001, the NCBE moved to new purpose-built premises in the University's School of Food Biosciences; however, the creation of a new School of Pharmacy at the University forced the NCBE to move to new premises elsewhere on the campus in 2005. Function It reaches out to schools to give up-to-date information on biotechnology. Biotechnology is a rapidly evolving subject, and schools cannot keep up-to-date with all that they would be required to know. It produces educational resources. It runs the Microbiology in Schools Advisory Committee (MISAC).
See also Centre for Industry Education Collaboration at York National Centre for Excellence in the Teaching of Mathematics, University of York Science and Plants for Schools, another well-known science resource for UK schools References External links NCBE DNA to Darwin Education resources from the University of Leicester European Initiative for Biotechnology Education 1985 establishments in the United Kingdom Biology education in the United Kingdom Biotechnology in the United Kingdom Biotechnology organizations Educational institutions established in 1985 Genetics education Science education in the United Kingdom Scientific organizations established in 1985 University of Reading
National Centre for Biotechnology Education
Engineering,Biology
442
18,303,844
https://en.wikipedia.org/wiki/HD%2047186%20b
HD 47186 b is a "hot Neptune" extrasolar planet located approximately 123 light years away in the constellation of Canis Major, orbiting the star HD 47186. This planet has a minimum mass of 22.78 times that of Earth and orbits very close to the star, at a similar distance from its star as 51 Pegasi b is from 51 Pegasi. As a consequence, it takes 4.0845 days to complete an orbit, with an eccentricity of 0.038, similar to that of the 5.66-year-period planet HD 70642 b. References External links Canis Major Exoplanets discovered in 2008 Giant planets Exoplanets detected by radial velocity Hot Neptunes
HD 47186 b
Astronomy
146
35,042,000
https://en.wikipedia.org/wiki/Epidemiology%20of%20malnutrition
There were 735.1 million malnourished people in the world in 2022, a decrease of 58.3 million since 2005, despite the fact that the world already produces enough food to feed everyone (8 billion people) and could feed more than that (12 billion people). Reducing malnutrition is a key part of Sustainable Development Goal 2, "Zero hunger", with a malnutrition target alongside reducing undernutrition and stunted child growth. Because of the Sustainable Development Goals, various UN agencies are responsible for measuring and coordinating action to reduce malnutrition. According to the World Food Programme, 135 million suffer from acute hunger, largely due to man-made conflicts, climate change, and economic downturns. COVID-19 could double the number of people at risk of suffering acute hunger by the end of 2020. By country The number of undernourished people (million) in 2010–2012 and 2014–2016 (projected). According to the UN Food and Agriculture Organization (FAO), these countries had 5 million or more undernourished people in 2001–2003 and in 2005–2007. Note: This table measures "undernourishment", as defined by the FAO, and represents the number of people consuming (on average for years 2010 to 2012) less than the minimum amount of food energy (measured in kilocalories per capita per day) necessary for the average person to stay in good health while performing light physical activity. It is a conservative indicator that does not take into account the extra needs of people performing strenuous physical activity, nor seasonal variations in food consumption or other sources of variability such as inter-individual differences in energy requirements. Malnutrition and undernourishment are cumulative or average situations, and not the work of a single day's food intake (or lack thereof). This table does not represent the number of people who "went to bed hungry today."
Below is a list of countries by percentage of population with undernourishment, as defined by the United Nations World Food Programme and the FAO in its "The State of Food Insecurity in the World" 2009 report. Middle East Malnutrition rates in Iraq had risen from 19% before the US-led invasion to a national average of 28% four years later. By 2010, according to the UN Food and Agriculture Organization, only 8% were malnourished. (See data above.) South Asia According to the Global Hunger Index, South Asia (also known as the Indian Subcontinent) has the highest child malnutrition rate of the world's regions. India, a largely vegetarian country and the second largest country in the world by population, accounts for the largest number of malnourished people in the region. The 2006 report mentioned that "the low status of women in South Asian countries and their lack of nutritional knowledge are important determinants of high prevalence of underweight children in the region" and was concerned that South Asia has "inadequate feeding and caring practices for young children". 30% of children in India are underweight, one of the highest rates in the world and nearly double the rate of Sub-Saharan Africa. Research on overcoming persistent under-nutrition published by the Institute of Development Studies argues that the co-existence of India as an 'economic powerhouse' and home to one-third of the world's under-nourished children reflects a failure of the governance of nutrition: "A poor capacity to deliver the right services at the right time to the right populations, an inability to respond to citizens' needs and weak accountability are all features of weak nutrition governance." The research suggests that to make under-nutrition history in India the governance of nutrition needs to be strengthened and new research needs to focus on the politics and governance of nutrition.
At the current rate of progress the MDG1 target for nutrition will only be reached in 2042, with severe consequences for human wellbeing and economic growth. United States According to the United States Department of Agriculture in 2015, 50 million Americans experienced food insecurity in 2009, including 17 million children. This represents nearly one in four American children. Although the United States Department of Agriculture reported in 2012 that an estimated 85.5 percent of households in the country are food secure, millions of people in America struggle with the threat of hunger or experience hunger on a daily basis. The USDA defines food security as the economic condition of a household in which there is reliable access to a sufficient amount of food so all household members can lead a healthy productive life. Hunger is most commonly related to poverty since a lack of food helps perpetuate the cycle of poverty. Most obviously, when individuals live in poverty they lack the financial resources to purchase food or pay for unexpected events, such as a medical emergency. When such emergencies arise, families are forced to cut back on food spending so they can meet the financial demands of the unexpected emergency. There is not one single cause of hunger but rather a complex interconnected web of various factors. Some of the populations most vulnerable to hunger are the elderly, children, people from a low socioeconomic status, and minority groups; however, hunger's impact is not limited to these individuals. The largest nonprofit food relief organization in the United States, Feeding America, feeds 46.5 million citizens a year to address the nation's food insecurity issue. This equates to one in seven Americans requiring their aid in a given year. An organization that focuses on providing food for the elderly population is Meals on Wheels, which is a nonprofit that delivers meals to seniors' homes.
The government also works towards providing relief through programs such as the Supplemental Nutrition Assistance Program (SNAP), which was formerly known to the public as Food Stamps. Another well-known government program is the National School Lunch Program (NSLP), which provides free or reduced lunches to students who qualify for the program. The number of Americans suffering from hunger rose after the 2008 financial crisis, with children and working adults now making up a large proportion of those affected. In 2012, Gleaners Indiana Food bank reported that there were now 50 million Americans struggling with food insecurity (about 1 in 6 of the population), and that the number of people seeking help from food banks had increased by 46% since 2005. According to a 2012 study by the UCLA Center for Health Policy Research, even married couples who both work but have low incomes sometimes require the aid of food banks. Childhood malnutrition is generally thought of as being limited to developing countries, but although most malnutrition occurs there, it is also an ongoing presence in developed nations. For example, in the United States of America, one out of every six children is at risk of hunger. A study, based on 2005–2007 data from the U.S. Census Bureau and the Agriculture Department, shows that an estimated 3.5 million children under the age of five are at risk of hunger in the United States. In developed countries, this persistent hunger problem is not due to lack of food or food programs, but is largely due to an underutilization of existing programs designed to address the issue, such as food stamps or school meals. Many citizens of rich countries such as the United States of America attach stigmas to food programs or otherwise discourage their use. In the USA, only 60% of those eligible for the food stamp program actually receive benefits. The U.S. Department of Agriculture reported that in 2003, 1 out of 200 U.S.
households with children were so severely food insecure that one or more of the children went hungry at least once during the year. A substantially larger proportion of these same households (3.8 percent) had adult members who were hungry at least one day during the year because of their households' inability to afford enough food. Africa According to World Vision there are 257 million people in Africa who are experiencing malnutrition. This is around 20% of the entire population of Africa. The regions in Africa with the highest rates of malnutrition are the Sub-Saharan region and parts of southern Africa. In the Sub-Saharan region, the countries that have the highest rates include, but are not limited to, South Sudan, Sudan, the Central African Republic, and Chad. In this region there are 237 million people who are experiencing hunger and, according to Action Against Hunger, there are 319 million people without a reliable source of drinking water. In the southern region of Africa, the countries that have the highest rates include, but are not limited to, Mozambique, Zimbabwe, Zambia, and Angola. In this region there are 41 million people who are food insecure and 9 million who are in a food crisis and need immediate assistance with food. There are many factors that contribute to malnutrition in Africa. There are environmental factors such as degradation of land and unexpected weather changes. Changes in weather, such as droughts and storms, impact the food and water supply. Another factor that contributes to malnutrition is conflict. Conflict can lead to uncertainty in resources, which puts people at a higher risk of malnutrition. In addition, the areas in Africa with the highest rates of malnutrition also experience poverty, which impacts and limits the supply of food and necessary services. For example, some experience limited access to health services, sanitation, clean water, and a consistent food supply.
Not only do these factors directly contribute to malnutrition, but they can also lead to illnesses such as malaria and waterborne diseases. References Malnutrition
Epidemiology of malnutrition
Environmental_science
1,937
77,535,075
https://en.wikipedia.org/wiki/Laplace%20sphere
In astronomy and orbital mechanics, the Laplace sphere concerns a specific kind of three-body problem. The prototype is the Sun–Earth–Moon system, studied to determine whether the Sun could steal the Moon away from Earth orbit into solar orbit. More generally, it is applied to any satellite of a body (often called the 'planet') that is, in turn, orbiting a much more massive body (often called the 'star'). Besides the Moon, the satellite is usually a small planetoid, exoplanet, or a spacecraft orbiting the Earth. The Laplace sphere is the region around a planet in which a satellite maintains a stable orbit around the planet rather than being pulled away by the star, whose gravitational pull is greater despite its larger distance. The 'sphere' is actually an ellipsoid, specifically a prolate spheroid with its long axis perpendicular to the star–planet orbit. As a result, a satellite in an eccentric orbit is safer with its apsis pointing up or down than pointing in the plane of the planet's orbit. The derivation eliminates higher-order terms on the assumption that the star's mass is much larger than the planet's, and the planet's mass is much larger than the satellite's. See also Hill sphere References Orbits
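A common quantitative version of this idea is the Laplace sphere of influence, with radius approximately a·(m/M)^(2/5) for a planet of mass m at semi-major axis a around a star of mass M. That formula is not stated in the article above; the sketch below simply applies it to the Sun–Earth system as an illustration.

```python
def laplace_soi_radius(a, m_planet, m_star):
    """Approximate Laplace sphere-of-influence radius: a * (m/M)**(2/5)."""
    return a * (m_planet / m_star) ** 0.4

# Sun-Earth example: semi-major axis in km, masses in kg
a_earth = 1.496e8   # km
m_earth = 5.972e24  # kg
m_sun = 1.989e30    # kg

# Roughly 9.2e5 km, comfortably containing the Moon's orbit (~3.84e5 km),
# which is why the Sun cannot steal the Moon from Earth orbit.
r_soi = laplace_soi_radius(a_earth, m_earth, m_sun)
```

The result, about 0.92 million km, is why the Moon (at about 0.38 million km) remains safely bound to the Earth.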
Laplace sphere
Astronomy
285
73,614,883
https://en.wikipedia.org/wiki/C2HClF2
C2HClF2 is the chemical formula for two isomers of hydrochlorofluoroolefins. 2-Chloro-1,1-difluoroethylene 1-Chloro-1,2-difluoroethylene
C2HClF2
Chemistry
71
48,373,211
https://en.wikipedia.org/wiki/Pachyphloeus%20depressus
Pachyphloeus depressus is a species of ascomycete fungus that forms truffle-like fruitbodies. It is found in southwestern China, where it has been reported from Qiaojia County, Yunnan Province, and Huili County, Sichuan Province. These counties are both near the Jinsha River. Fruitbodies of the fungus are smooth and greenish-brown, features that are distinctive within the genus Pachyphloeus. They measure in diameter, and have a rubbery texture. When ripe, the odor of the flesh resembles that of burned potatoes. Spores are spherical, measuring 15.7–20 μm, with coarse rod-like spines up to 2.5 μm long on the surface. The fungus has been called the "green female truffle" because of its superficial resemblance to the locally common species Tuber pseudohimalayense. References External links Fungi described in 2015 Fungi of China Pezizaceae Fungus species
Pachyphloeus depressus
Biology
194
77,878,398
https://en.wikipedia.org/wiki/WD%200816%E2%80%93310
WD 0816–310 (PM J08186–3110) is a magnetic white dwarf with metal pollution, originating from the tidal disruption of a planetary body. The metals are guided by the magnetic field onto the surface of the white dwarf, creating a "scar" on the surface of the white dwarf. This scar is rich in the accreted planetary material. The object was first identified as a possible white dwarf in 2005, from data of the Digitized Sky Survey. It was confirmed as a white dwarf in 2008 with spectroscopic data from CTIO, and the same team found that the white dwarf is polluted with calcium, magnesium and iron. In 2019 a variable magnetic field was discovered via Zeeman splitting. This observation was made with archived spectropolarimetric data from FORS1 at the Very Large Telescope (VLT). In 2021 the white dwarf was studied in detail with the 4 m telescope at CTIO, and with the VLT (FORS1 and X-shooter). The elements sodium, magnesium, calcium, chromium, manganese, iron and nickel were detected in the atmosphere of the white dwarf. The atmosphere is enriched in magnesium relative to other elements, which is predicted for old stellar systems. The researchers also found hydrogen in the otherwise helium-dominated atmosphere of WD 0816–310. The presence of hydrogen could be explained by pollution from an asteroid containing water ice. These researchers found that the abundance of metals changed between two spectra taken 10 years apart. They suggested that spots enriched in metals are present on the surface of the white dwarf, their distribution controlled by its magnetic field. In 2024 this was confirmed with circular spectropolarimetric observations with FORS2 on the VLT. The observations measured a dipolar field strength at the pole of about 140 kilogauss. Around 310,000 years ago WD 0816–310 accreted a Vesta-sized object with a composition similar to chondritic meteorites. 
The observations showed that the variations in metal line strength and magnetic field intensity are synchronized. This is seen as evidence that the magnetic field determines the local density of metals on the surface. These patches are likely located near one of the magnetic poles of the white dwarf. The material from an accreted asteroid first forms a disk around the white dwarf. Closer to the white dwarf, the dusty material sublimates into metal gas. The researchers argue that the white dwarf ionizes at least part of this gas. The ions then follow the magnetic field of the white dwarf: as a result of the Lorentz force, each ion follows a spiral path around its local field line. On their way to the poles of the white dwarf, the ions collide with neutral atoms in the gas disk, ionizing them in the process. This leads to a substantial level of ionization of the gas disk. A 2024 study that discovered a second metal scar, around WD 2138-332, suggests that metal scars are common on magnetic white dwarfs with metal pollution. See also List of exoplanets and planetary debris around white dwarfs References White dwarfs Variable stars Magnetism in astronomy Circumstellar disks Puppis
WD 0816–310
Astronomy
660
36,399,227
https://en.wikipedia.org/wiki/HA-966
HA-966 or (±)-3-amino-1-hydroxy-pyrrolidin-2-one is a molecule used in scientific research as a glycine receptor and NMDA receptor antagonist / low-efficacy partial agonist. It has neuroprotective and anticonvulsant, anxiolytic, antinociceptive and sedative / hypnotic effects in animal models. Pilot human clinical trials in the early 1960s showed that HA-966 appeared to benefit patients with tremors of extrapyramidal origin. The two enantiomers of HA-966 have differing pharmacological activity. The glycine/N-methyl-D-aspartate receptor antagonist activity is specific to the (R)-(+)-enantiomer, whereas the sedative and ataxic effects are specific to the (S)-(-)-enantiomer. (R)-(+)-HA-966 did not induce drug-appropriate responding in animals trained to discriminate phencyclidine (PCP) from saline, suggesting that the glycine receptor ligand (R)-(+)-HA-966 has a significantly different behavioral profile than drugs affecting the ion channel of the NMDA receptor complex. (S)-(-)-HA-966 has been described as a "γ-hydroxybutyric acid (GHB)-like agent" and a "potent γ-butyrolactone-like sedative", but it shows no affinity for the GABAB receptor (GABABR). See also Rapastinel NRX-1074 References Amines NMDA receptor antagonists Pyrrolidones
HA-966
Chemistry
367
26,749,963
https://en.wikipedia.org/wiki/National%20Glass%20Association
The National Glass Association is the largest trade association for the auto glass, architectural glass, and window and door markets. The NGA was founded in 1948 and currently has close to 3,000 member companies. This international association represents the interests of companies worldwide. The NGA's stated mission is to provide information and education, and to promote quality workmanship, ethics, and safety in the architectural, automotive, and window and door glass industries. The NGA acts as a clearinghouse for industry information, a catalyst in education and training matters, and a voice on behalf of its members. The NGA publishes Glass Magazine and Window & Door and organizes GlassBuild America: The Glass, Window, and Door Expo. References Trade associations based in the United States Glass architecture Glass industry
National Glass Association
Materials_science,Engineering
159
44,311,175
https://en.wikipedia.org/wiki/Grit%2C%20not%20grass%20hypothesis
The grit, not grass hypothesis is an evolutionary hypothesis that explains the evolution of high-crowned teeth, particularly in New World mammals. The hypothesis is that the ingestion of gritty soil is the primary driver of hypsodont tooth development, not the silica-rich composition of grass, as was previously thought. Traditional co-evolution hypothesis Since the morphology of the hypsodont tooth is suited to a more abrasive diet, hypsodonty was thought to have evolved concurrently with the spread of grasslands. During the Cretaceous Period (145-66 million years ago), the Great Plains were covered by a shallow inland sea called the Western Interior Seaway, which began to recede during the Late Cretaceous to the Paleocene (65-55 million years ago), leaving behind thick marine deposits and relatively flat terrain. During the Miocene and Pliocene epochs (25 million years), the continental climate became favorable to the evolution of grasslands. Existing forest biomes declined and grasslands became much more widespread. The grasslands provided a new niche for mammals, including many ungulates that switched from browsing diets to grazing diets. Grass contains silica-rich phytoliths (abrasive granules), which wear away dental tissue more quickly. The spread of grasslands was therefore linked to the development of high-crowned (hypsodont) teeth in grazers. Modern evolutionary hypothesis Early evidence In 2006 Strömberg examined the independent acquisition of high-crowned cheek teeth (hypsodonty) in several ungulate lineages (e.g., camelids, equids, rhinoceroses) from the early to middle Miocene of North America, which had been classically linked to the spread of grasslands. She showed that habitats dominated by C3 grasses (cool-season grasses) were established in the Central Great Plains by the early late Arikareean (≥21.9 million years ago), at least 4 million years prior to the emergence of hypsodonty in Equidae. 
In 2008 Mendoza and Palmqvist determined the relative importance of grass consumption and open-habitat foraging in the development of hypsodont teeth using a dataset of 134 species of artiodactyls and perissodactyls. The results suggested that high-crowned teeth represent an adaptation to a particular feeding environment, not a dietary preference. Morphology More recent examination of mammalian teeth suggests that it is the open, gritty habitat and not the grass itself which is linked to diet changes. Analysis of dental microwear patterns of hypsodont notoungulates from the Late Oligocene Salla Beds of Bolivia showed shearing movements are associated with a diet rich in tough plants, not necessarily grasses. Hence the relationship between high-crowned mammals and the source of tooth wear in the fossil record may not be straightforward, and the spread of grasslands in South America, traditionally linked with the development of notoungulate hypsodonty, was called into question. Temporal discontinuity Most importantly, evidence has shown that the development of hypsodonty in Cenozoic mammals is out of sync with the flourishing of grasslands in both North America and South America, where grasslands spread 10 million years earlier. Observations of this temporal discontinuity between the spread of grasslands and the development of hypsodonty in mammals are also supported by earlier evidence of hypsodonty in dinosaurs. For example, hadrosaurs, a group of herbivorous dinosaurs, likely grazed on low-lying vegetation, and microwear patterns show that their diet contained an abrasive material such as grit or silica. Grasses had evolved by the Late Cretaceous but were not particularly common, so this study concluded that grass was probably not a major component of the hadrosaur diet. Modern examples of hypsodonty Hypsodonty is observed both in the fossil record and in the modern world. It is a characteristic of large clades (equids) as well as of specialization at the subspecies level. 
For example, the Sumatran rhinoceros and the Javan rhinoceros both have brachydont, lophodont cheek teeth, whereas the Indian rhinoceros has hypsodont dentition. A mammal may have exclusively hypsodont molars or a mix of dentitions. Hypsodont dentition is characterized by: High-crowned teeth A rough, flattish occlusal surface adapted for crushing and grinding Cementum both above and below the gingival line Enamel that covers the entire length of the body and likewise extends past the gum line The cementum and the enamel invaginate into the thick layer of dentine References Evolutionary biology Evolution of mammals Biological hypotheses
Grit, not grass hypothesis
Biology
982
36,268,685
https://en.wikipedia.org/wiki/N-tert-Butylbenzenesulfinimidoyl%20chloride
N-tert-Butylbenzenesulfinimidoyl chloride is a useful oxidant for organic synthesis reactions. It is a good electrophile, and the sulfimide S=N bond can be attacked by nucleophiles, such as alkoxides, enolates, and amide ions. The nitrogen atom in the resulting intermediate is basic, and can abstract an α-hydrogen to create a new double bond. Preparation This reagent can be synthesized quickly and in near-quantitative yield by reacting phenyl thioacetate with tert-butyldichloroamine in hot benzene. After the reaction is complete, the product can be isolated as a yellow, moisture-sensitive solid by vacuum distillation. Mechanism A nucleophile, such as an alkoxide (1), attacks the S=N bond in 2. The resulting intermediate (3) collapses and ejects chloride ion, which is a good leaving group. The resulting sulfimide has two resonance forms, 4a and 4b. Because of this, the nitrogen is basic, and via a five-membered ring transition state, it can abstract the hydrogen adjacent to the oxygen. This forms a new C=O bond and ejects a neutral sulfenamide (5), giving ketone 6 as the product. N-tert-Butylbenzenesulfinimidoyl chloride reacts with enolates, amides, and primary alkoxides by the same general mechanism. The Swern oxidation, which converts primary and secondary alcohols to aldehydes and ketones, respectively, also uses a sulfur-containing compound (DMSO) as the oxidant and proceeds by a similar mechanism. In the Swern oxidation, elimination also occurs via a five-membered ring transition state, but the basic species is a sulfur ylide instead of a negatively charged nitrogen. Several other oxidation reactions also make use of DMSO as the oxidant and pass through a similar transition state (see the See also section). Reactions Reacting an aldehyde with a Grignard reagent or organolithium and treating the resulting secondary alkoxide with N-tert-butylbenzenesulfinimidoyl chloride is a convenient one-pot reaction for converting aldehydes to ketones. 
While Grignards can be used for this reaction, organolithium compounds give higher yields, due to the higher reactivity of a lithium alkoxide compared to the corresponding magnesium salt. In some cases, an equivalent of DMPU, a Lewis base, will increase yields. For example, treating benzaldehyde with n-butyllithium and N-tert-butylbenzenesulfinimidoyl chloride in THF gives 1-phenyl-1-pentanone in good yield. N-tert-Butylbenzenesulfinimidoyl chloride can also be used to synthesize imines from amines. Imines synthesized in this fashion have been shown to undergo a one-pot Mannich reaction with 1,3-dicarbonyl compounds, such as malonate esters and 1,3-diketones. In this example, Cbz-protected benzylamine is deprotonated using n-butyllithium, then treated with N-tert-butylbenzenesulfinimidoyl chloride to form the protected imine. Dimethyl malonate acts as the nucleophile and reacts with the imine to give the final product, a Mannich base. See also Swern oxidation Pfitzner–Moffatt oxidation Corey–Kim oxidation Parikh–Doering oxidation References External links N-tert-butylbenzenesulfinimidoyl chloride on Organic Chemistry Portal Reagents for organic chemistry Organochlorides Tert-butyl compounds
N-tert-Butylbenzenesulfinimidoyl chloride
Chemistry
818
31,250,262
https://en.wikipedia.org/wiki/Frictional%20contact%20mechanics
Contact mechanics is the study of the deformation of solids that touch each other at one or more points. This can be divided into compressive and adhesive forces in the direction perpendicular to the interface, and frictional forces in the tangential direction. Frictional contact mechanics is the study of the deformation of bodies in the presence of frictional effects, whereas frictionless contact mechanics assumes the absence of such effects. Frictional contact mechanics is concerned with a large range of different scales. At the macroscopic scale, it is applied to investigate the motion of contacting bodies (see Contact dynamics). For instance, the bouncing of a rubber ball on a surface depends on the frictional interaction at the contact interface. Here the total force versus indentation and lateral displacement are of main concern. At the intermediate scale, one is interested in the local stresses, strains and deformations of the contacting bodies in and near the contact area, for instance to derive or validate contact models at the macroscopic scale, or to investigate wear and damage of the contacting bodies' surfaces. Application areas of this scale are tire-pavement interaction, railway wheel-rail interaction, roller bearing analysis, etc. Finally, at the microscopic and nano-scales, contact mechanics is used to increase our understanding of tribological systems (e.g., investigate the origin of friction) and for the engineering of advanced devices like atomic force microscopes and MEMS devices. This article is mainly concerned with the second scale: gaining basic insight into the stresses and deformations in and near the contact patch, without paying too much attention to the detailed mechanisms by which they come about. History Several famous scientists, engineers and mathematicians contributed to our understanding of friction. 
They include Leonardo da Vinci, Guillaume Amontons, John Theophilus Desaguliers, Leonhard Euler, and Charles-Augustin de Coulomb. Later, Nikolai Pavlovich Petrov, Osborne Reynolds and Richard Stribeck supplemented this understanding with theories of lubrication. Deformation of solid materials was investigated in the 17th and 18th centuries by Robert Hooke, Joseph Louis Lagrange, and in the 19th and 20th centuries by d'Alembert and Timoshenko. With respect to contact mechanics the classical contribution by Heinrich Hertz stands out. Further, the fundamental solutions by Boussinesq and Cerruti are of primary importance for the investigation of frictional contact problems in the (linearly) elastic regime. Classical results for a true frictional contact problem are the papers by F.W. Carter (1926) and H. Fromm (1927). They independently presented the creep versus creep force relation for a cylinder on a plane or for two cylinders in steady rolling contact using Coulomb's dry friction law (see below). These are applied to railway locomotive traction, and for understanding the hunting oscillation of railway vehicles. With respect to sliding, the classical solutions are due to C. Cattaneo (1938) and R.D. Mindlin (1949), who considered the tangential shifting of a sphere on a plane (see below). In the 1950s, interest in the rolling contact of railway wheels grew. In 1958, Kenneth L. Johnson presented an approximate approach for the 3D frictional problem with Hertzian geometry, with either lateral or spin creepage. Among other results, he found that spin creepage, which is symmetric about the center of the contact patch, leads to a net lateral force in rolling conditions. This is due to the fore-aft differences in the distribution of tractions in the contact patch. In 1967, Joost Jacques Kalker published his milestone PhD thesis on the linear theory for rolling contact. 
This theory is exact for the situation of an infinite friction coefficient, in which case the slip area vanishes, and is approximative for non-vanishing creepages. It does assume Coulomb's friction law, which more or less requires (scrupulously) clean surfaces. This theory is for massive bodies such as the railway wheel-rail contact. With respect to road-tire interaction, an important contribution concerns the so-called magic tire formula by Hans Pacejka. In the 1970s, many numerical models were devised, particularly variational approaches, such as those relying on Duvaut and Lions' existence and uniqueness theories. Over time, these grew into finite element approaches for contact problems with general material models and geometries, and into half-space based approaches for so-called smooth-edged contact problems for linearly elastic materials. Models of the first category were presented by Laursen and by Wriggers. An example of the latter category is Kalker's CONTACT model. A drawback of the well-founded variational approaches is their large computation times. Therefore, many different approximate approaches were devised as well. Several well-known approximate theories for the rolling contact problem are Kalker's FASTSIM approach, the Shen-Hedrick-Elkins formula, and Polach's approach. More information on the history of the wheel/rail contact problem is provided in Knothe's paper. Further, Johnson collected in his book a tremendous amount of information on contact mechanics and related subjects. With respect to rolling contact mechanics, an overview of various theories is presented by Kalker as well. Finally, the proceedings of a CISM course, which provide an introduction to more advanced aspects of rolling contact theory, are of interest. Problem formulation Central in the analysis of frictional contact problems is the understanding that the stresses at the surface of each body are spatially varying. 
Consequently, the strains and deformations of the bodies vary with position too. Moreover, the motion of particles of the contacting bodies can be different at different locations: in part of the contact patch particles of the opposing bodies may adhere (stick) to each other, whereas in other parts of the contact patch relative movement occurs. This local relative sliding is called micro-slip. This subdivision of the contact area into stick (adhesion) and slip areas manifests itself, among other things, in fretting wear. Note that wear occurs only where power is dissipated, which requires stress and local relative displacement (slip) between the two surfaces. The size and shape of the contact patch itself and of its adhesion and slip areas are generally unknown in advance. If these were known, then the elastic fields in the two bodies could be solved independently from each other and the problem would not be a contact problem anymore. Three different components can be distinguished in a contact problem. First of all, there is the deformation of the separate bodies in reaction to loads applied on their surfaces. This is the subject of general continuum mechanics. It depends largely on the geometry of the bodies and on their (constitutive) material behavior (e.g. elastic vs. plastic response, homogeneous vs. layered structure etc.). Secondly, there is the overall motion of the bodies relative to each other. For instance the bodies can be at rest (statics) or approaching each other quickly (impact), and can be shifted (sliding) or rotated (rolling) over each other. These overall motions are generally studied in classical mechanics, see for instance multibody dynamics. Finally there are the processes at the contact interface: compression and adhesion in the direction perpendicular to the interface, and friction and micro-slip in the tangential directions. The last aspect is the primary concern of contact mechanics. It is described in terms of so-called contact conditions. 
For the direction perpendicular to the interface, the normal contact problem, adhesion effects are usually small (at larger spatial scales) and the following conditions are typically employed: The gap between the two surfaces must be zero (contact) or strictly positive (separation, ); The normal stress acting on each body is zero (separation) or compressive ( in contact). Mathematically: . Here are functions that vary with the position along the bodies' surfaces. In the tangential directions the following conditions are often used: The local (tangential) shear stress (assuming the normal direction parallel to the -axis) cannot exceed a certain position-dependent maximum, the so-called traction bound ; Where the magnitude of tangential traction falls below the traction bound , the opposing surfaces adhere together and micro-slip vanishes, ; Micro-slip occurs where the tangential tractions are at the traction bound; the direction of the tangential traction is then opposite to the direction of micro-slip . The precise form of the traction bound is the so-called local friction law. For this Coulomb's (global) friction law is often applied locally: , with the friction coefficient. More detailed formulae are also possible, for instance with depending on temperature , local sliding velocity , etc. Solutions for static cases Rope on a bollard, the capstan equation Consider a rope where equal forces (e.g., ) are exerted on both sides. By this the rope is stretched a bit and an internal tension is induced ( on every position along the rope). The rope is wrapped around a fixed item such as a bollard; it is bent and makes contact to the item's surface over a contact angle (e.g., ). Normal pressure comes into being between the rope and bollard, but no friction occurs yet. Next the force on one side of the bollard is increased to a higher value (e.g., ). This does cause frictional shear stresses in the contact area. 
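In the final equilibrium the tension distribution follows the capstan (Euler-Eytelwein) relation. A minimal numerical sketch, assuming the standard form T_high = T_low·e^(μφ) (the article's inline formulas are not reproduced here, and the μ and φ values are illustrative):

```python
import math

# Capstan (Euler-Eytelwein) relation: a rope wrapped over a contact
# angle phi on a cylinder with friction coefficient mu can sustain a
# tension ratio of up to exp(mu * phi) before gross sliding begins.
# Standard textbook form, assumed here.

def max_holding_ratio(mu: float, phi_rad: float) -> float:
    """Largest T_high / T_low that static friction can support."""
    return math.exp(mu * phi_rad)

mu = 0.3
phi = math.pi            # rope wrapped over half the bollard (180 degrees)
t_low = 100.0            # N, force on the slack side

t_high_max = t_low * max_holding_ratio(mu, phi)
print(f"slack side {t_low:.0f} N can hold up to {t_high_max:.0f} N")
```

With μ = 0.3 and a half-turn of wrap, the high side can carry roughly 2.6 times the slack-side load before gross sliding starts; adding more wrap increases the ratio exponentially.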
In the final situation the bollard exerts a friction force on the rope such that a static situation occurs. The tension distribution in the rope in this final situation is described by the capstan equation, with solution: The tension increases from on the slack side () to on the high side . When viewed from the high side, the tension drops exponentially, until it reaches the lower load at . From there on it is constant at this value. The transition point is determined by the ratio of the two loads and the friction coefficient. Here the tensions are in newtons and the angles in radians. The tension in the rope in the final situation is increased with respect to the initial state. Therefore, the rope is elongated a bit. This means that not all surface particles of the rope can have held their initial position on the bollard surface. During the loading process, the rope slipped a little bit along the bollard surface in the slip area . This slip is precisely large enough to get to the elongation that occurs in the final state. Note that there is no slipping going on in the final state; the term slip area refers to the slippage that occurred during the loading process. Note further that the location of the slip area depends on the initial state and the loading process. If the initial tension is and the tension is reduced to at the slack side, then the slip area occurs at the slack side of the contact area. For initial tensions between and , there can be slip areas on both sides with a stick area in between. Generalization for a rope lying on an arbitrary orthotropic surface If a rope is lying in equilibrium under tangential forces on a rough orthotropic surface then the following three conditions are all satisfied: This generalization has been obtained by A. Konyukhov. Sphere on a plane, the (3D) Cattaneo problem Consider a sphere that is pressed onto a plane (half space) and then shifted over the plane's surface. 
If the sphere and plane are idealised as rigid bodies, then contact would occur in just a single point, and the sphere would not move until the tangential force that is applied reaches the maximum friction force. Then it starts sliding over the surface until the applied force is reduced again. In reality, with elastic effects taken into consideration, the situation is much different. If an elastic sphere is pressed onto an elastic plane of the same material then both bodies deform, a circular contact area comes into being, and a (Hertzian) normal pressure distribution arises. The center of the sphere is moved down by a distance called the approach, which is equivalent to the maximum penetration of the undeformed surfaces. For a sphere of radius and elastic constants this Hertzian solution reads: Now consider that a tangential force is applied that is lower than the Coulomb friction bound . The center of the sphere will then be moved sideways by a small distance that is called the shift. A static equilibrium is obtained in which elastic deformations occur as well as frictional shear stresses in the contact interface. In this case, if the tangential force is reduced then the elastic deformations and shear stresses reduce as well. The sphere largely shifts back to its original position, except for frictional losses that arise due to local slip in the contact patch. This contact problem was solved approximately by Cattaneo using an analytical approach. The stress distribution in the equilibrium state consists of two parts: In the central, sticking region , the surface particles of the plane displace over to the right whereas the surface particles of the sphere displace over to the left. Even though the sphere as a whole moves over relative to the plane, these surface particles did not move relative to each other. In the outer annulus , the surface particles did move relative to each other. 
Their local shift is obtained as This shift is precisely large enough that a static equilibrium is obtained with shear stresses at the traction bound in this so-called slip area. So, during the tangential loading of the sphere, partial sliding occurs. The contact area is thus divided into a slip area where the surfaces move relative to each other and a stick area where they do not. In the equilibrium state no more sliding is going on. Solutions for dynamic sliding problems The solution of a contact problem consists of the state at the interface (where the contact is, the division of the contact area into stick and slip zones, and the normal and shear stress distributions) plus the elastic field in the bodies' interiors. This solution depends on the history of the contact. This can be seen by extension of the Cattaneo problem described above. In the Cattaneo problem, the sphere is first pressed onto the plane and then shifted tangentially. This yields partial slip as described above. If the sphere is first shifted tangentially and then pressed onto the plane, then there is no tangential displacement difference between the opposing surfaces and consequently there is no tangential stress in the contact interface. If the approach in normal direction and tangential shift are increased simultaneously ("oblique compression") then a situation can be achieved with tangential stress but without local slip. This demonstrates that the state in the contact interface is not only dependent on the relative positions of the two bodies, but also on their motion history. Another example of this occurs if the sphere is shifted back to its original position. Initially there was no tangential stress in the contact interface. After the initial shift, micro-slip has occurred. This micro-slip is not entirely undone by shifting back. So in the final situation tangential stresses remain in the interface, in what looks like a configuration identical to the original one. 
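The Hertz normal solution and the Cattaneo stick/slip split for the sphere-on-plane problem can be sketched numerically. The formulas below are the standard textbook ones (contact radius a = (3PR/4E*)^(1/3), approach δ = a²/R, peak pressure p0 = 3P/(2πa²), stick radius c = a·(1 - Q/(μP))^(1/3)); they are assumed here since the article's equations are not reproduced, and the material and load values are illustrative.

```python
import math

# Hertz + Cattaneo partial-slip sketch for a sphere pressed on a plane
# and then loaded tangentially.  Standard formulas assumed; all
# numbers are illustrative.

def hertz_sphere(p_normal, radius, e_star):
    """Contact radius a, approach delta, peak pressure p0 for sphere/plane."""
    a = (3.0 * p_normal * radius / (4.0 * e_star)) ** (1.0 / 3.0)
    delta = a * a / radius
    p0 = 3.0 * p_normal / (2.0 * math.pi * a * a)
    return a, delta, p0

def cattaneo_stick_radius(a, q_tangential, mu, p_normal):
    """Radius c of the central stick zone; c < a means partial slip."""
    return a * (1.0 - q_tangential / (mu * p_normal)) ** (1.0 / 3.0)

# Steel-like example: R = 10 mm, E* = 115 GPa, P = 100 N, mu = 0.3
a, delta, p0 = hertz_sphere(100.0, 0.010, 115e9)
c = cattaneo_stick_radius(a, 15.0, 0.3, 100.0)   # Q = 15 N < mu*P = 30 N
print(f"contact radius a = {a*1e6:.0f} um, stick radius c = {c*1e6:.0f} um")
```

As the tangential force Q approaches the friction bound μP, the stick radius c shrinks to zero and the whole contact slips, which is the onset of gross sliding.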
The influence of friction on dynamic contacts (impacts) is considered in detail in the literature. Solution of rolling contact problems Rolling contact problems are dynamic problems in which the contacting bodies are continuously moving with respect to each other. A difference from dynamic sliding contact problems is that there is more variety in the state of different surface particles. Whereas the contact patch in a sliding problem continuously consists of more or less the same particles, in a rolling contact problem particles enter and leave the contact patch incessantly. Moreover, in a sliding problem the surface particles in the contact patch are all subjected to more or less the same tangential shift everywhere, whereas in a rolling problem the surface particles are stressed in rather different ways. They are free of stress when entering the contact patch, then stick to a particle of the opposing surface, are strained by the overall motion difference between the two bodies, until the local traction bound is exceeded and local slip sets in. This process is in different stages for different parts of the contact area. If the overall motion of the bodies is constant, then an overall steady state may be attained. Here the state of each surface particle is varying in time, but the overall distribution can be constant. This is formalised by using a coordinate system that is moving along with the contact patch. Cylinder rolling on a plane, the (2D) Carter-Fromm solution Consider a cylinder that is rolling over a plane (half-space) under steady conditions, with a time-independent longitudinal creepage . (Relatively) far away from the ends of the cylinders a situation of plane strain occurs and the problem is 2-dimensional. If the cylinder and plane consist of the same materials then the normal contact problem is unaffected by the shear stress. The contact area is a strip , and the pressure is described by the (2D) Hertz solution. 
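The 2D Hertz pressure invoked here can be sketched with the standard line-contact formulas: half-width b = sqrt(4wR/(πE*)) and peak pressure p0 = 2w/(πb), with w the normal load per unit axial length. These are textbook expressions assumed here (the article does not reproduce them), and the numbers are illustrative.

```python
import math

# 2D (line-contact) Hertz solution for a cylinder on a plane, as used
# in the Carter-Fromm setup.  Standard form assumed: w is the normal
# load per unit axial length, e_star the contact modulus E*.

def hertz_line_contact(w, radius, e_star):
    """Half-width b and peak pressure p0 of the contact strip."""
    b = math.sqrt(4.0 * w * radius / (math.pi * e_star))
    p0 = 2.0 * w / (math.pi * b)
    return b, p0

# Wheel-rail-like numbers: w = 1e6 N/m, R = 0.46 m, E* = 115 GPa
b, p0 = hertz_line_contact(1.0e6, 0.46, 115e9)
print(f"strip half-width {b*1e3:.2f} mm, peak pressure {p0/1e9:.2f} GPa")
```

The resulting semi-elliptical pressure over the strip is the normal input on top of which the Carter-Fromm shear-stress distribution, with its leading-edge adhesion and trailing-edge slip zones, is constructed.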
The distribution of the shear stress is described by the Carter-Fromm solution. It consists of an adhesion area at the leading edge of the contact area and a slip area at the trailing edge. The size of the adhesion area depends on the creepage, the wheel radius and the friction coefficient. For larger creepages full sliding occurs.

Half-space based approaches

When considering contact problems at the intermediate spatial scales, the small-scale material inhomogeneities and surface roughness are ignored. The bodies are considered as consisting of smooth surfaces and homogeneous materials. A continuum approach is taken in which the stresses, strains and displacements are described by (piecewise) continuous functions. The half-space approach is an elegant solution strategy for so-called "smooth-edged" or "concentrated" contact problems. If a massive elastic body is loaded on a small section of its surface, then the elastic stresses attenuate proportionally to 1/r² and the elastic displacements fall off as 1/r when one moves away from this surface area. If a body has no sharp corners in or near the contact region, then its response to a surface load may be approximated well by the response of an elastic half-space (i.e. all points on one side of an infinite bounding plane). The elastic half-space problem can be solved analytically; see the Boussinesq-Cerruti solution. Due to the linearity of this approach, multiple partial solutions may be superimposed. Using the fundamental solution for the half-space, the full 3D contact problem is reduced to a 2D problem for the bodies' bounding surfaces. A further simplification occurs if the two bodies are "geometrically and elastically alike". In general, stress inside a body in one direction induces displacements in perpendicular directions too.
Consequently, there is an interaction between the normal stress and the tangential displacements in the contact problem, and an interaction between the tangential stress and the normal displacements. But if the normal stress in the contact interface induces the same tangential displacements in both contacting bodies, then there is no relative tangential displacement of the two surfaces. In that case, the normal and tangential contact problems are decoupled, and the two bodies are called quasi-identical. This happens for instance if the bodies are mirror-symmetric with respect to the contact plane and have the same elastic constants. Classical solutions based on the half-space approach are:

Hertz solved the contact problem in the absence of friction, for a simple geometry (curved surfaces with constant radii of curvature).
Carter considered the rolling contact between a cylinder and a plane, as described above. A complete analytical solution is provided for the tangential traction.
Cattaneo considered the compression and shifting of two spheres, as described above. Note that this analytical solution is approximate; in reality small tangential tractions occur which are ignored.

See also

References

External links
Biography of Prof.dr.ir. J.J. Kalker (Delft University of Technology).
Kalker's Hertzian/non-Hertzian CONTACT software.

Mechanical engineering Solid mechanics
Frictional contact mechanics
Physics,Engineering
4,129
65,587,254
https://en.wikipedia.org/wiki/CYP14%20family
Cytochrome P450, family 14, also known as CYP14, is a nematode cytochrome P450 monooxygenase family. The first gene identified in this family was CYP14A1 from Caenorhabditis elegans. The function of most genes in this family is unknown. References Animal genes 14 Protein families
CYP14 family
Biology
78
2,249,951
https://en.wikipedia.org/wiki/Deadly%20Friend
Deadly Friend is a 1986 American science fiction horror film directed by Wes Craven, and starring Matthew Laborteaux, Kristy Swanson, Michael Sharrett, Anne Twomey, Richard Marcus, and Anne Ramsey. Its plot follows a teenage computer prodigy who implants a robot's processor into the brain of his teenage neighbor after she is pronounced brain dead; the experiment proves successful, but she swiftly begins a killing spree in their neighborhood. It is based on the 1985 novel Friend by Diana Henstell, which was adapted for the screen by Bruce Joel Rubin. Originally, the film was a sci-fi thriller without any graphic scenes, with a bigger focus on plot and character development and a dark love story centering on the two main characters, which were not typical aspects of Craven's previous films. After Craven's original cut was shown to a test audience by Warner Bros., the audience criticized the lack of the graphic, bloody violence and gore that Craven's other films included. Warner Bros. executive vice president Mark Canton and the film's producers then demanded script re-writes and re-shoots, which included filming gorier death scenes and nightmare sequences similar to the ones from Craven's previous film, A Nightmare on Elm Street. Due to studio-imposed re-shoots and re-editing, the film was drastically altered in post-production, losing much of the original plot and many scenes between characters, while other scenes, including more grisly deaths and a new ending, were added. According to the screenwriter, this version was criticized by the studio for containing too much graphic, bloody violence and was cut back for release. In April 2014, an online petition for the release of the original cut was launched.

Source material

Friend is a 1985 science fiction horror novel by Diana Henstell. It tells of a 13-year-old boy, Paul "Piggy" Conway, who moves to a small town after his parents get divorced.
There he befriends a girl named Samantha, but their friendship is cut short when her abusive father throws her down the stairs, mortally injuring her. Piggy tries to save her by implanting a microchip in her, but the reanimated Samantha is much more dangerous than she appears. Plot Teenage prodigy Paul Conway and his mother Jeannie move into their new house in the town of Welling. He soon becomes friends with paperboy Tom Toomey. Living next door to Paul is Samantha Pringle and her abusive, alcoholic father Harry. Paul built a robot named BB, which occasionally displays autonomous behavior, such as being protective of Paul. Paul, Jeannie, and BB meet Paul's professor, Dr. Johanson, at Polytech, a prestigious university where Paul has a scholarship. One day, Tom, Paul and BB stop at the house of reclusive harridan Elvira Parker, who threatens them with a shotgun. The trio then encounters a motorcycle gang led by bully Carl. When Carl intimidates Paul, BB assaults him. Another day, while playing basketball, BB accidentally tosses the ball onto Elvira's porch. She takes the ball away from them and refuses to give it back. On Halloween night, Tom decides to pull a prank on Elvira with the help of Paul, Samantha and BB. BB unlocks her gate and Samantha rings her doorbell. When alarms go off, they hide in a shrubbery nearby. When Elvira sees BB standing near her porch, she destroys him with her shotgun, devastating Paul. On Thanksgiving, Samantha has dinner with Paul and his mother, and Samantha and Paul share their first kiss. Samantha returns home late at night, outraging her father, who pushes her down the stairs. At the hospital, Paul learns that Samantha is brain dead and will be on life support for 24 hours before the plug is pulled. As BB's microchip can interface with the human brain, Paul decides to use it to revive Samantha with Tom's help. The boys enter the hospital using a key taken from Tom's father, who works there as a security guard. 
After Tom deactivates the power from the basement, Paul takes Samantha to his lab. He inserts the microchip into Samantha's brain and takes her back to his house, hiding her in the shed. After he activates the microchip, Samantha "wakes up", but her mannerisms are completely mechanical, suggesting BB is in control of her body. In the middle of the night, Paul finds Samantha staring at the window, looking at her father, and he deactivates her. The next morning, Paul finds Samantha gone. When Harry finds the cellar door open and goes downstairs, Samantha attacks him, breaks his wrist and snaps his neck. Paul finds Samantha, and Harry's corpse, in the cellar. Horrified, he hides the body, takes Samantha back to his home and locks her in his bedroom. At night, Samantha breaks into Elvira's house and corners her by throwing her to the wall of her living room. As Elvira screams in horror, Samantha kills her by smashing her head with the basketball stolen from Tom. When Tom learns of Samantha's rampage, he gets into a fight with Paul and threatens to call the police. Still being protective of Paul, Samantha jumps out the attic window and attacks Tom, with Paul and Jeannie intervening. Trying to get her under control, Paul slaps Samantha, resulting in her strangling him. Samantha, quickly coming to her senses, lets him go and runs away. As Paul goes after her, he again encounters Carl, who gets into a fight with him. Samantha goes back for Paul, grabs Carl and kills him by throwing him at an incoming police car. She runs back to Paul's shed, where Paul comforts her and realizes she's regaining some of her humanity. However, the police arrive with their guns aimed at Samantha, who yells out Paul's name in her human voice. She runs towards him, trying to protect him, but Sergeant Volchek (Lee Paul), thinking she's trying to attack him, shoots her. She says Paul's name one more time before dying in his arms. Later at the morgue, Paul tries to steal Samantha's body once more. 
Suddenly, Samantha grabs Paul's neck and her face rips apart, revealing a terrifying variant of BB's head. Her skin strips away, revealing half-robotic bones underneath. With a robotic voice, Samantha tells him to come with her. When a horrified Paul screams, she snaps his neck, killing him. Cast Production Development Wes Craven and Bruce Joel Rubin's original intent for the film was for it to be a science fiction thriller with the primary focus being on the dark love story between Paul and Samantha. Casting Kristy Swanson, 16 years old at the time of filming, was cast as Samantha. She admitted that Craven was unsure of her capability to play the role, but ultimately cast her, and was "always encouraging... always prodding me in subtle ways." She elaborated in a 1996 interview: "I committed myself completely to it. I just went full out with it. I wanted to do the best job I could possibly do. I was having the time of my life. As for the movie itself, some people love it, some people hate it. It is what it is. I really enjoyed making Deadly Friend. At that point in my life, it was spectacular." Filming Professional mime artist Richmond Shepard taught Swanson all of the robotic movements that her character has in the film. In an interview, Swanson said this about learning to walk in that specific way: "Getting those moves down was difficult at first. You don't think walking that way is hard until you actually try doing it. But Richmond was a good teacher and I picked up on most of the moves pretty quickly." During filming of one of the studio-demanded scenes where Sam has a nightmare where her father attacks her in her room and she stabs him with a glass vase, there were difficulties on set with the special effects. Swanson mentioned, "The scene was set up so that I would hit a protective device inside his shirt. But during one take, I missed the device and glass actually shattered on his chest. 
I freaked out because I thought I had really stuck this glass into his chest. Everybody else just laughed." In another incident, the great amount of fake blood turned out to be a problem. "We had been working on that scene a long time. Finally, it was time for blood to spray out, but something leaked and we had blood spraying all over the set and myself. I was so tired that I started yelling, "More blood!" and the effects people really pumped it out." In an interview with Maxim magazine in May 2000, Swanson said that the fake head of Elvira that was decimated by the basketball was stuffed with actual cow brains that the production crew picked up from a butcher shop. In a 2006 interview for The Hills Have Eyes, Craven mentioned problems that the basketball scene had with the MPAA: "On Deadly Friend, we had a scene where a nasty old lady gets her head knocked off with a basketball. The actual scene as it was originally cut was fabulous. She was running around the room like a chicken with its head cut off for ten, fifteen seconds. It was bizarre and wonderful and they cut the shit out of it. So I compiled what we called our "Decapitation Compilation," all the films that I knew of that had decapitations in them that had an R, and sent it to them. They immediately sent it back saying they just base it on what they feel in the room at the time. And we had like eight or ten films in there, like The Omen where the guy gets his head cut off by the sheet of glass, and it didn't matter to them." Craven had a hand in selecting Bruce Joel Rubin to write the screenplay for Deadly Friend. Rubin agreed with Craven that the film should have a gentler tone than his other features. Craven couldn't write the script himself because he was directing episodes of The Twilight Zone at the time. Craven and producer Robert M. Sherman hired Rubin as the screenwriter because they read his script for Jacob's Ladder, which was unproduced at the time. 
For the scene chronicling the transplant of BB's microchip into Samantha's brain, Craven called on the advice of retired neurosurgeon William H. Faeth, who has a cameo in the film as a coroner in Sam's hospital room. Craven said that he was very helpful on all the anatomical details. The robot, BB, cost over $20,000 to build; Craven used a company called Robotics 21. His eyes were constructed from two 1950s camera lenses, a garage remote control unit, and a radio antenna taken from a Corvette. BB could actually lift 7,500 pounds in weight. The voice of BB was provided by Charles Fleischer, who appeared in Wes Craven's previous film A Nightmare on Elm Street as a doctor. Earlier in production, when the film was originally going to be a PG-rated sci-fi thriller, Craven wanted to make something that was similar to John Carpenter's 1984 sci-fi film Starman. Also, according to Swanson in a 1987 interview with Fangoria writer Mark Shapiro, "Craven suggested that I take a look at the movie Starman because what he wanted to do with Deadly Friend was similar in tone to that film." John Carpenter directed Starman because he wanted to get away from his reputation as a director of violent films, just as Wes Craven wanted to make Deadly Friend with a PG rating in mind so he could prove that he could make a film that was not simply "blood and guts" horror.

Post-production

According to the book Wes Craven: The Art of Horror by John Kenneth Muir, Craven's original cut of the film was "a teenage film filled with charm, wit, and solid performances by likeable teens Swanson and Laborteaux. It was definitely a mainstream, PG film all the way, similar in tone to Real Genius or Short Circuit, but the point was made that Craven could direct something other than double-barreled horror." After principal photography was completed, Craven's original version of the film was screened to a test audience mostly consisting of his fanbase.
The response from them was negative, criticizing the lack of violence and gore seen in Craven's other films. Finding that Craven had a large fanbase within the horror genre, Warner Bros.' marketing team insisted that additional scenes of gore and horror be incorporated into the finished film. The executive vice president of Warner Bros. at the time, Mark Canton, had Rubin write six additional gore scenes into his script, each bloodier than the last. Following the negative reactions from test audiences that saw Craven's first cut of the film and wanted a much more grisly product, it was re-edited in post-production and the more graphic deaths and other re-shot scenes were included, making the final film appear tonally jumbled. Furthermore, with the additional gore introduced, the film struggled to be granted an R rating by the Motion Picture Association of America (MPAA) instead of an X due to the overt violence. According to Craven, the film was submitted a total of thirteen times before it was passed. Editor Michael Eliot was brought in by Warner Bros. to re-edit the original cut of Deadly Friend. Eliot went on to do the same for two other Warner Bros. films, Out for Justice and Showdown in Little Tokyo. While new scenes were added, others, such as more scenes between Paul and Samantha that would have made the film more of a love story as originally intended, were deleted for length and pacing reasons. Since re-writes, re-shoots, and post-production re-editing heavily changed the original story, Craven and Rubin expressed strong anger and heartbreak at the studio and then virtually disowned the film. Craven was no longer attracted to the story because of Samantha going on a killing spree when she is revived. He was much more interested in exploring the adults around her, all of whom seem to be monsters in human skin. In his own words: "The scares don't come from her, but from the ordinary people, who are actually much more frightening.
A father who beats a child is a terrifying figure. That's the one person you're afraid of in the movie. The idea is along the lines that adults can be horrible, without being outside what society says is acceptable." Swanson commented that she found herself and the other actors caught up in the studio's attempts to strong-arm Craven into making the film more visceral than what was originally intended. During both production and re-shoots, changes to the script were being made, title changes were being discussed, and there were many discussions about how violent and bloody the final film would be. All of these issues caused problems for the actors. Regarding the title changes, when Craven started the project, it was titled Friend, much like the Diana Henstell novel it was based on. The title was later changed to Artificial Intelligence and then to A.I. before the studio and producers finally settled on Deadly Friend. In a 1990 interview with Fangoria journalist Daniel Schweiger, screenwriter Bruce Joel Rubin said this about the ending and why it stayed in the film: "That robot coming out of the girl's head belongs solely to Mark Canton, and you don't tell the president of Warner Bros. that his idea stinks!" Rubin also said how at the time, people were still blaming him for the ending where Samantha turns into a robot, even though Canton was the one who conceived it. He also mentioned that despite the fact that the studio destroyed the love story of the movie that he and Craven enjoyed, he still enjoyed working with Craven, confirming that he was not the one who wanted to change the film and that he should not be blamed for what happened to it. Rubin even said that production was one of the happiest experiences he ever had. 
In another interview, Rubin told the story about how the $36,000 that he was paid for writing the script for Deadly Friend saved him from going nearly broke during the four-month-long Writers Guild strike and also helped him pay for a bar mitzvah for his son and buy a house. In the same interview, Rubin said that at first he did not want to write the script, but after changing his mind he called Robert M. Sherman and got the job. He also said that working on the film was one of the most extraordinary experiences of his life: "It was a horror film with a lot of elements that are not things I wanted on my resume. And it didn't do very good business, but it was total fun. My kids were on the set every night. My five-year-old Ari was totally in love with Kristy Swanson, who was the lead. She later became Buffy the Vampire Slayer in the movie. She was really sweet to him and even took him on a date."

Release

Censorship

Due to all of the gore scenes that were added into the film—as well as Craven's contentious history with the Motion Picture Association of America (MPAA)—it was initially given an X rating. The film was trimmed and resubmitted to the MPAA thirteen times before it was granted an R rating. Most of the cuts were made to the death scenes of Harry and Elvira.

Marketing

The theatrical trailer for the film released by Warner Bros. represented it as a straightforward horror film, omitting any reference to its science fiction elements, with BB not appearing in a single frame. The mixture of teenagers and terror as seen in the trailer implied that Deadly Friend would be like Craven's A Nightmare on Elm Street.
In an interview with Fangoria, Craven said that the deadline for delivering the first cut of Deadly Friend with all of the studio-demanded sequences included, and delivering his original script for A Nightmare on Elm Street 3: Dream Warriors, which he was writing with Bruce Wagner, was virtually the same, making it very difficult for him to do both things at once.

Box office

Hoping to score a financial success with the Halloween trade, Warner Bros. released Deadly Friend in theaters on October 10, 1986, but the film was a box office bomb, grossing $8,988,731 in the United States against an $11 million budget.

Critical response

AllMovie gave the film a generally negative review, writing, "It's an intriguing combination of elements, but the end result is a schizoid mess", calling Craven's direction "awkward" and opining that it "lacks the intense, sustained atmosphere of his previous horror hits." On Rotten Tomatoes the film has a 20% approval rating based on 35 reviews, with an average rating of 3.7/10; the consensus reads, "An uninspired departure for Wes Craven, mired by an uneven premise; beware, this is one Deadly Friend." On Metacritic it has a score of 44% based on reviews from 11 critics, indicating "mixed or average reviews".

Home media

In 2007, Warner Bros. released a DVD edition featuring all of the death scenes in their fully uncut form. In 2021, numerous Twitter users called for Craven's original cut of the film to be released, sharing the hashtag #ReleaseTheCravenCut for both Deadly Friend and Cursed. In October 2021, Scream Factory released the film for the first time on Blu-ray. The Blu-ray features the same cut of the film as issued on the previous Warner Bros. DVD. In a press announcement regarding the Blu-ray release, Scream Factory wrote: "We anticipate being asked if we found any alternate footage from the film (as seen in the original theatrical trailer) or Craven's more milder original feature-length cut.
Unfortunately, we could not locate any lost footage after investigating. Sorry, we tried. As fans of the film ourselves we wanted to see that too!" References Sources External links 1986 films 1986 horror films 1980s science fiction horror films 1980s teen horror films American robot films American science fiction horror films American teen horror films Films about androids Films about child abuse Films about computing Films based on American horror novels Films directed by Wes Craven Films scored by Charles Bernstein Films shot in Los Angeles Films with screenplays by Bruce Joel Rubin Mad scientist films Techno-horror films Warner Bros. films 1980s English-language films 1980s American films American novels adapted into films 1986 science fiction films English-language science fiction horror films
Deadly Friend
Technology
4,238
24,023,788
https://en.wikipedia.org/wiki/Crystatech
CrystaTech Inc. is a supplier of process technology to the energy industry. CrystaTech commercializes the patented CrystaSulf process. CrystaSulf is the first commercially available product to provide low-cost hydrogen sulfide (H2S) removal from gas streams. The company was founded in 1999 and is financially backed by the Gas Technology Institute and major energy companies through sponsored clean-energy technology development. The corporate office is located in Austin, Texas. CrystaTech is a member of the Gas Processors Suppliers Association. Regional offices are in Alberta, Canada and Houston, Texas. All early-stage R&D takes place at the Gas Technology Institute in Des Plaines, Illinois. Representative customers include Total, Petrobank Energy and Resources Ltd., Queensland Energy Resources, U.S. Department of Energy, Luminant, and American Electric Power.

Key People
David Work - Chairman
Don Carlton - Independent Director

Notes

References
http://gpaglobal.org/gpsa/membercompanies/xealapp/index.php#C
https://web.archive.org/web/20090905204521/http://www.gastechnology.org/webroot/app/xn/xd.aspx?it=enweb&xd=MarketResults%2Fmkt_portfolioCo.xml

External links
CrystaTech's Web Site

Green chemistry Chemical process engineering Companies based in Austin, Texas Privately held companies based in Texas Technology companies established in 1999 1999 establishments in Texas
Crystatech
Chemistry,Engineering,Environmental_science
320
58,172,797
https://en.wikipedia.org/wiki/Predatory%20marriage
Predatory marriage is the practice of marrying an elderly person exclusively for the purpose of gaining access to their estate upon their death. While the requirements for mental capacity to make a valid will are high, in most jurisdictions the requirements for entering into a valid marriage are much lower; even a person suffering from dementia may enter into marriage. In many jurisdictions, a marriage will invalidate any previous will left by the person, resulting in the spouse inheriting the estate. In the United Kingdom a campaign, Predatory Marriage UK (originally known as Justice for Joan), was started, working to change laws and procedures around marriage to reduce this practice, supported by lawyer Sarah Young of Ridley and Hall. The local MP, Fabian Hamilton, introduced a bill in Parliament during 2018, the Marriage and Civil Partnership (Consent) Bill, to establish that marriage should no longer always revoke a previous will and to introduce other protections against predatory marriage. The bill ran out of parliamentary time, but work is continuing.

Common scams and methods

There are several techniques known to have been employed in targeting vulnerable people in this way, often involving coercive and controlling behavior. One example is convincing a vulnerable person to sign over assets before or after marriage so as to circumvent prenuptial arrangements. This can be disguised either as a way to avoid tax or as an exchange for care and affection, and is often accompanied by legal documentation purporting to protect the person being scammed; such documentation often does not hold up in court, especially if provided by the scammer.

See also
Elder financial abuse
Sham marriage

References

Sham marriage Elder law Abuse Psychological manipulation
Predatory marriage
Biology
325
72,852,419
https://en.wikipedia.org/wiki/BharOS
BharOS (formerly IndOS) is a closed-source mobile operating system designed by IIT Madras. It is an Indian government-funded project to develop an operating system (OS) for use in government and public systems.

History

Google is facing a crackdown from the Competition Commission of India (CCI) for its practices pertaining to Android, and there have been several demands for an Indian app store that does not levy exorbitant fees on sales. BharOS aims to reduce India's dependence on foreign-made operating systems in smartphones and to promote the use of India-made technology. It was developed by JandK Operations Private Limited (JandKops), which was incubated at IIT Madras. The minister for telecommunications and information technology, Ashwini Vaishnaw, and education minister Dharmendra Pradhan launched the operating system at a public event.

Features

BharOS targets security-conscious groups. BharOS does not come with any preinstalled services or apps. This approach gives the user more freedom and control over the permissions that are available to apps on their device: users can choose to grant permissions only to the apps that they require to access certain features or data. The software can be installed on commercially available handsets, providing users with a secure environment, the company said in a statement. The operating system provides access to trusted apps via organisation-specific Private App Store Services (PASS), a curated list of apps that meet security and privacy standards. At a panel discussion, Karthik Ayyar, the Director of JandKops, indicated that only applicable security updates will be applied to BharOS devices in closed group networks.

Criticism

Divya Bhati, writing for India Today, noted that instructions on downloading and installing BharOS on compatible devices, plans for new devices, and details of its support for security and software updates were scant.
In September 2023, a fork of GrapheneOS containing references to BharOS was made public on GitHub. Although the GitHub profile of Sadhasiva, which hosted the code, has since been deleted, the code can still be viewed through unofficial forks and archival websites. Through a tweet, IITM Pravartak Technologies Foundation identified the code as having originated from Megam Solutions, a Chennai-based software company not connected with JandKops. However, IITM Pravartak Technologies Foundation is a client of Megam Solutions. In a subsequent tweet, the organization highlighted communications with the CEO of Megam Solutions indicating that the name BharOS was unintentionally used. Ayyar stated that the operating system would remain closed-source software due to "organizational reasons", and indicated that he did not have the authority to make decisions regarding whether BharOS's source code would be open or closed.

External links
https://jandkops.in/, JandKops website

References

State-sponsored Linux distributions 2023 software Mobile Linux Android (operating system) software ARM operating systems Computing platforms Custom Android firmware Embedded Linux distributions Linux distributions Linux distributions without systemd Mobile software Operating system families Software using the Apache license
BharOS
Technology
661
16,018,002
https://en.wikipedia.org/wiki/Trans-Spliced%20Exon%20Coupled%20RNA%20End%20Determination
Trans-Spliced Exon Coupled RNA End Determination (TEC-RED) is a transcriptomic technique that, like SAGE, allows for the digital detection of messenger RNA sequences. Unlike SAGE, detection and purification of transcripts from the 5' end of the messenger RNA require the presence of a trans-spliced leader sequence.

Trans-splicing

Background

Spliced leader sequences are short sequences of non-coding RNA, not found within a gene itself, that are attached to the 5' end of all, or a portion of, the mRNAs transcribed in an organism. They have been found in several species to be responsible for separating polycistronic transcripts into single-gene mRNAs, and in others to splice onto monocistronic transcripts. The major role of trans-splicing on monocistronic transcripts is largely unknown; it has been proposed that spliced leaders may act as an independent promoter that aids in tissue-specific expression of independent protein isoforms. Spliced leaders have been seen in trypanosomatids, Euglena, flatworms, and Caenorhabditis. Some species contain only one spliced leader sequence found on all mRNAs. In C. elegans two are seen, labeled SL1 and SL2.

TEC-RED

Methods

Total RNA is purified from the specimen of interest. Poly-A messenger RNA is then purified from the total RNA and subsequently reverse-transcribed into cDNA. The cDNA produced from the mRNA is labeled using primers homologous to the spliced leader sequences of the organism. In a nine-step PCR reaction the cDNAs are concurrently embedded with the BpmI restriction endonuclease site (though any class IIS restriction endonuclease may work) and a biotin label, both of which are present in the primers. These tagged cDNAs are then cleaved 14 bp downstream from the recognition site using the BpmI restriction endonuclease and blunt-ended with T4 DNA polymerase. The fragments are further purified away from extraneous DNA material by using the biotin labels to bind them to a streptavidin matrix.
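The tag-extraction logic above can be illustrated with a short Python sketch. This is a simplification: the spliced-leader and cDNA sequences used below are invented for illustration, and the BpmI recognition sequence (CTGGAG) and the 14 bp tag length follow the description above rather than modelling the enzyme's exact double-strand cut offsets.

```python
from collections import Counter

def extract_tag(cdna, sl, site="CTGGAG", tag_len=14):
    """Extract the TEC-RED tag from one spliced-leader-primed cDNA.

    Only cDNAs beginning with the trans-spliced leader are processed;
    the embedded BpmI site directs cleavage 14 bp downstream of the
    recognition sequence, releasing a 14 bp tag.
    """
    if not cdna.startswith(sl):
        return None                      # transcript lacks the leader
    pos = cdna.find(site, len(sl))
    if pos == -1:
        return None                      # no embedded recognition site
    tag = cdna[pos + len(site): pos + len(site) + tag_len]
    return tag if len(tag) == tag_len else None

def digital_expression(cdnas, sl):
    """Count tags across a cDNA pool: the 'digital' expression profile."""
    tags = (extract_tag(c, sl) for c in cdnas)
    return Counter(t for t in tags if t is not None)
```

Each distinct tag can then be mapped back to the genome to identify the 5' end of one gene or 5' splice isoform, with its count serving as a digital measure of abundance.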
They are then ligated to adapter DNA, in six separate reactions, containing six different restriction endonuclease recognition sites. These tags are then amplified by PCR with primers containing a mismatch that changes the BpmI site to an XhoI site. The amplicons are concatenated and ligated into a plasmid vector. The clonal vectors are then sequenced and the tags mapped to the genome. Concatenation Concatenation of the tags, as developed in 2004, differs from that seen in SAGE. Cleavage of the tags with XhoI, mixture of the different samples, and ligation form the first concatenation step. The second step uses one of the restriction endonucleases matching the adapter molecule attached to the 3’ end. The products are again ligated, and PCR is performed to purify samples for the next joining. The concatenation is continued with the second restriction endonuclease, followed by the third and finally the fourth. The result is a concatemer, formed by the six endonuclease ligations, containing 32 tags arranged 5’ to 5’ around the XhoI site. In SAGE, concatenation takes place after ditags are formed and amplified by PCR. The linkers on the outside of the ditags are cleaved with the enzyme that provided their binding, and these sticky-ended ditags are concatenated randomly and placed into a cloning vector. Advantages The advantage of TEC-RED over SAGE is that no restriction endonuclease is needed for the initial linker binding. This prevents the bias associated with restriction-site sequences that are missing from some genes, as is seen in SAGE. The ability to take a snapshot of specific RNA isoforms allows the deduction of differential regulation of isoforms through alternative selection of promoters. This may also aid in the discernment of expression patterns unique to the SL1 or SL2 sequence. TEC-RED also allows characterization of the 5’ ends of the RNA produced, and therefore of isoforms that differ by amino-terminal splicing. 
The technology permits the determination and verification of all known and unknown genes that may be predicted, as well as of the 5’ splice isoforms or 5’ RNA ends that may be produced. Using TEC-RED in conjunction with SAGE or a modified protocol allows discernment of the 5’ and 3’ ends of transcripts, respectively. The identification of alternative splice variants containing a trans-spliced leader sequence, and possibly their relative quantities, is therefore possible. Variations Two alternate techniques have been described that allow for 5’ tag analysis in organisms that do not have trans-spliced leader sequences. The techniques, presented by Toshiyuki et al. and Shin-ichi et al., are called CAGE and 5’ SAGE respectively. CAGE utilizes biotinylated cap-trapper technology to maintain the mRNA signal long enough to create and select full-length cDNAs, which have adapter sequences ligated onto the 5’ end. 5’ SAGE utilizes oligo-capping technology. Both use their adapter sequence as a priming site after the cDNA is created. Both of these methods have disadvantages, though: CAGE tags have shown addition of a guanine at the first position, and oligo-capping may lead to sequence bias due to the use of RNA ligase. See also RNA-seq DNA microarray References External links CAGE Tags http://genome.gsc.riken.jp/absolute/ 5’ SAGE results https://archive.today/20040821030224/http://5sage.gi.k.u-tokyo.ac.jp/ TEC RED Tags seen in wormbase https://web.archive.org/web/20080909025225/http://www.wormbase.org/db/searches/advanced/dumper RNA Molecular biology
Trans-Spliced Exon Coupled RNA End Determination
Chemistry,Biology
1,310
1,777,994
https://en.wikipedia.org/wiki/1%2C1-Difluoroethane
1,1-Difluoroethane, or DFE, is an organofluorine compound with the chemical formula C2H4F2 (CH3CHF2). This colorless gas is used as a refrigerant, where it is often listed as R-152a (refrigerant-152a) or HFC-152a (hydrofluorocarbon-152a). It is also used as a propellant for aerosol sprays and in gas duster products. As an alternative to chlorofluorocarbons, it has an ozone depletion potential of zero, a lower global warming potential than most other hydrofluorocarbons and a shorter atmospheric lifetime (1.4 years). Production 1,1-Difluoroethane is a synthetic substance that is produced by the mercury-catalyzed addition of hydrogen fluoride to acetylene: HC≡CH + 2 HF → CH3CHF2 The intermediate in this process is vinyl fluoride (C2H3F), the monomeric precursor to polyvinyl fluoride. Uses With a relatively low global warming potential (GWP) index of 124 and favorable thermophysical properties, 1,1-difluoroethane has been proposed as an environmentally friendly alternative to R134a. Despite its flammability, R152a also presents operating pressures and a volumetric cooling capacity (VCC) similar to R134a, so it can be used in large chillers or in more particular applications such as heat-pipe finned heat exchangers. In addition, 1,1-difluoroethane is commonly used in gas dusters and numerous other retail aerosol products, particularly those subject to stringent volatile organic compound (VOC) requirements. The molecular weight of difluoroethane is 66, making it a useful and convenient tool for detecting vacuum leaks in gas chromatography–mass spectrometry (GC-MS) systems. The cheap and freely available gas has a molecular weight and fragmentation pattern (base peak at 51 m/z in typical EI-MS, with a major peak at 65 m/z) distinct from anything in air. If mass peaks corresponding to 1,1-difluoroethane are observed immediately after spraying a suspected leak point, leaks can be identified. 
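The quoted molecular weight of 66 follows directly from the formula C2H4F2 and standard atomic weights, as this quick check illustrates:

```python
# Verify the molecular weight of 1,1-difluoroethane (C2H4F2)
# from standard atomic weights.
atomic_weight = {"C": 12.011, "H": 1.008, "F": 18.998}
formula = {"C": 2, "H": 4, "F": 2}

mw = sum(atomic_weight[el] * n for el, n in formula.items())
print(round(mw, 2))  # ≈ 66.05, consistent with the quoted value of 66
```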
Safety Difluoroethane is an extremely flammable gas, which decomposes rapidly on heating or burning, producing toxic and irritating fumes including hydrogen fluoride and carbon monoxide. In a DuPont study, rats were exposed to up to 25,000 ppm (67,485 mg/m³) for six hours daily, five days a week, for two years. This has become the no-observed-adverse-effect level for the substance. Prolonged exposure to 1,1-difluoroethane has been linked in humans to the development of coronary disease and angina. Repeated or sufficiently high levels of exposure, particularly purposeful inhalation, can precipitate fatal cardiac arrhythmia. Abuse Difluoroethane is an intoxicant with abuse potential. It appears to act primarily through GABAA and glutamate receptors. Fatalities linked to difluoroethane abuse include those of actress Skye McCole Bartusiak, singer Aaron Carter and wrestler Mike Bell. Bitterants, added voluntarily to some brands to deter purposeful inhalation, are often not legally required; they do not negate or counteract difluoroethane's intoxicating effects. Environmental abundance Most production, use, and emissions of HFC-152a have occurred within Earth's more industrialized and populated northern hemisphere following the substance's introduction in the 1990s. Its concentration in the northern troposphere reached an annual average of about 10 parts per trillion by 2011. The concentration of HFC-152a in the southern troposphere is about 50% lower, because its removal rate (i.e. lifetime) of about 1.5 years is similar in magnitude to the global atmospheric mixing time of one to two years. See also List of refrigerants IPCC list of greenhouse gases Canned air References Fluoroalkanes Hydrofluorocarbons Refrigerants Greenhouse gases Organic compounds with 2 carbon atoms
1,1-Difluoroethane
Chemistry,Environmental_science
895
43,882,480
https://en.wikipedia.org/wiki/Silicon%20tombac
Silicon tombac is an alloy made of copper (80%), zinc (16%) and silicon (4%). General properties The silicon content leads to a strengthening of the metal matrix. The appearance is similar to that of ordinary brass. Silicon tombac has good friction-bearing characteristics and is corrosion resistant, but is not resistant to an ammonia atmosphere. The strength properties are largely retained at application temperatures up to 200 °C. It is a special alloy in terms of the combination of casting process and casting temperature. In most cases, parts made of silicon tombac are produced by high-pressure die casting, a process normally specialized for metals with relatively low melting temperatures. The melting range of silicon tombac, however, lies between about 950 and 1000 °C, which is relatively high for casting into permanent moulds. The advantage is the productivity of this highly automated casting process; the disadvantage is the thermal stress on the surface of the permanent mould, which limits the mould's lifetime. Comparison to investment-cast steel parts High-pressure die-cast silicon tombac is often used as an alternative to investment-cast steel parts, because the mechanical strength is comparable (500 MPa) but the production process is more efficient. A break-even point can be found when comparing the two processes; the advantages of high-pressure die casting generally predominate at high unit counts (for instance, greater than 5000 units). The alloy has outstanding casting properties and the good strength properties required for the die casting process. It is often chosen for small to medium-sized parts in terms of casting metal volume. For large parts, investment casting of steels is often applied because of the lower material cost. Metallurgical aspects The silicon content limits the solubility of zinc in copper in the α-phase. 
In this alloy, the maximum amount of silicon is added at a very high zinc content. The consequence is that, at high cooling rates, the α-phase crystallizes supersaturated with silicon. As a result the α-solid solution does not decompose, which leads to the high mechanical properties described above. References E. Paul DeGarmo, J. T. Black, Ronald A. Kohser: DeGarmo's Materials and Processes in Manufacturing. 10th edition. John Wiley & Sons, 2007, (section: copper-zinc alloys). Copper alloys Silicon alloys Zinc alloys
Silicon tombac
Chemistry
500
10,327,157
https://en.wikipedia.org/wiki/Nosema%20ceranae
Nosema ceranae is a microsporidian, a small, unicellular parasite that mainly affects Apis cerana, the Asiatic honey bee. Along with Nosema apis, it causes the disease nosemosis, the most widespread of the diseases of adult honey bees. N. ceranae can remain dormant as a long-lived spore which is resistant to temperature extremes and dehydration. The fungus has been shown to act synergistically with diverse insecticides such as fipronil and the neonicotinoids by increasing the toxicity of the pesticides for bees, leading to higher bee mortality. It may thus play an indirect role in colony collapse disorder. In addition, the interaction between fipronil and N. ceranae induces changes in male physiology leading to sterility. Range Nosema ceranae was first described in 1996 and was identified as a disease of Apis mellifera in 2004 in Taiwan. Since its emergence in honeybees, N. ceranae has been identified in bumblebee species in South America, China, and England, where infection studies indicate N. ceranae has a higher virulence in bumblebees than in honeybees. Researchers in Spain analysed samples of Apis mellifera, the European honey bee, mostly sent from colonies suffering unexpected decreases in bee population per hive or lower honey production, as reported by the beekeepers during the preceding two to three years. In 2004, 90% of some 3,000 samples had positive results for N. ceranae. In 2005, of 800 samples, 97% had positive results. During 2006, both France and Germany detected the disease and recognized the genetic sequence of N. ceranae in their respective territories. In the United States, N. ceranae has been detected in honey bees from Nebraska, Wisconsin, Arkansas, New York, and South Dakota using PCR of the 16S gene. In New York, N. ceranae was detected in 49 counties; of the 1,200 honey bee samples collected, 528 (44%) were positive for Nosema, and PCR analysis of 371 spore-positive samples revealed that 96% were N. ceranae, 3% had both N. 
ceranae and N. apis, and 1% had N. apis only. Effects on bees This pathogen has been tentatively linked to colony collapse disorder (CCD), a phenomenon reported primarily from the United States since the fall of 2006. Highly preliminary evidence of N. ceranae was reported in a few hives in the Central Valley area of California: "Tests of genetic material taken from a 'collapsed colony' in Merced County point to a once-rare microbe that previously affected only Asian bees but might have evolved into a strain lethal to those in Europe and the United States." The researcher did not, however, believe this was conclusive evidence of a link to CCD: "We don't want to give anybody the impression that this thing has been solved." A USDA bee scientist has similarly stated that "while the parasite nosema ceranae may be a factor, it cannot be the sole cause. The fungus has been seen before, sometimes in colonies that were healthy." Likewise, a Washington State beekeeper familiar with N. ceranae in his own hives discounts it as the cause of CCD. In early 2009, Higes et al. reported that an association between CCD and N. ceranae had been established free of confounding factors, and that weakened colonies treated with fumagillin recovered. News articles published in October 2010 quoted researchers who had discovered that the Nosema fungus had joined with a previously unsuspected virus, invertebrate iridescent virus, or IIV6, dealing test bee colonies a lethal blow. Neither the fungus nor the virus alone kills all of a test group, but the two combined do. Both the fungus and the virus are found with high frequency in hives that have suffered CCD. Final testing is in progress with field tests on colonies. N. ceranae and N. apis have similar lifecycles, but they differ in spore morphology. Spores of N. ceranae appear slightly smaller under the light microscope, and the number of polar filament coils is between 20 and 23, rather than the more than 30 often seen in N. apis. 
The disease afflicts adult bees, and depopulation occurs with consequent losses in honey production. Symptoms of diarrhea, as seen with N. apis, are not detected. The most significant difference between the two species is how quickly N. ceranae can cause a colony to die: bees can die within 8 days of exposure to N. ceranae, a finding not yet confirmed by other researchers. The forager caste seems the most affected, leaving the colony presumably to forage but never returning. This results in a reduced colony consisting mostly of nurse bees with their queen, a state very similar to that seen in CCD. Little advice on treatment is available, but it has been suggested that the most effective control of N. ceranae is the antibiotic fumagillin, as recommended for N. apis. The genome of N. ceranae was sequenced in 2009. This should help scientists trace its migration patterns, establish how it became dominant, and measure the spread of infection by enabling diagnostic tests and treatments to be developed. Treatment N. ceranae is apparently released from the suppressive effects of fumagillin at higher concentrations than N. apis is. At fumagillin concentrations that continue to impact honey bee physiology, N. ceranae thrives and doubles its spore production. The current application protocol for fumagillin may therefore exacerbate N. ceranae infection rather than suppress it, and fumagillin application may be a major cause of N. ceranae dominance at this time. References Microsporidia Bee diseases Fungi described in 1996 Fungus species
Nosema ceranae
Biology
1,249
63,883,409
https://en.wikipedia.org/wiki/Surface%20Book%203
The Surface Book 3 is the third generation of Microsoft's Surface Book series, and the successor to the Surface Book 2. Like the previous generation, the Surface Book 3 is part of the Microsoft Surface lineup of personal computers. It is a 2-in-1 PC that can be used like a conventional laptop, or detached from its base for use as a separate tablet, with touch and stylus input support in both scenarios. It was announced by Microsoft online alongside the Surface Go 2 on May 6, 2020, and released for purchase on May 12, 2020. Configurations Features Hardware The Surface Book 3 retains most of the hardware of the previous generation, released in November 2017. This includes the same full-body magnesium alloy construction and design, footprint, keyboard, touchpad, cameras, discrete TPM chip with identical support for AES full-drive encryption, and the same display panel options. The 13.5-inch model features a 3000×2000-pixel screen at 267 pixels per inch, while the 15-inch model has a 3240×2160-pixel screen at 260 pixels per inch. Both screens feature a 3:2 aspect ratio, a key feature of the Surface lineup. The new generation offers some hardware improvements, including new Dolby-certified speakers, improved battery life, a new hinge release, and an updated Surface Connect port that supports a higher electrical input. It is the first device in the Microsoft Surface lineup to offer Intel 10th-generation quad-core processors, optional Nvidia Quadro graphics, up to 32 GB of system memory, and up to 2 TB of data storage. The 13.5-inch model is sold with a 102 W charger, while a more powerful 127 W charger comes with the 15-inch model. Neither device suffers from the battery drain during heavy workloads that was occasionally observed in the previous generation. Much like the previous generation, Microsoft has opted to forgo Thunderbolt 3 due to overall security concerns with the protocol. 
Software As of May 2020, both the 13.5-inch and 15-inch models ship with a pre-installed trial of Microsoft Office 365, as well as pre-installed 64-bit Windows 10 Home for all general customers. This is a downgrade compared to the predecessor, which offered Windows 10 Professional to consumer, business and enterprise customers alike; the Surface Book 3 comes pre-installed with Windows 10 Pro only if it is ordered via business procurement channels. Accessories The new Surface Book 3 is backwards-compatible with some of the peripheral accessories of its direct predecessors, such as the Surface Pen and the Surface Docks; however, despite advertising otherwise, it is not fully compatible with the Surface Dial and lacks the advertised on-screen functionality. As with its predecessor, the Surface Book 2, the device has built-in pen computing capabilities based on N-trig technology Microsoft acquired in 2015, although no significant updates have been made for this release. All major tweaks and improvements which Microsoft had first released for the Surface Book 2 also apply to this generation. The Surface Book 2 and 3 share the same display options, with the same 10-point touch support. That said, the tablet and keyboard base portions are not interchangeable between the Surface Book 2 and 3: a series of magnets is installed in opposing positions, alongside additional software controls, to ensure that users cannot accidentally mix hardware between the two generations. Release timeline Reception Compared to the broadly positive feedback awarded to its predecessor, the Surface Book 3 received only lukewarm reviews. Most reviewers mentioned that the Surface Book 3 continues to feel like a premium product. 
The updated graphics options, effective cooling for the GPU, high-quality cameras, keyboard, and touch and pen capabilities continue to be applauded, as is the improved tablet release. That said, the underwhelming CPU options, poor thermals in the main computing unit (despite the tablet being nearly identical in thickness to the base of the 16-inch Apple MacBook Pro), thick screen bezels, and an outdated design were all common complaints, with the product appearing largely identical to the original Surface Book introduced five years earlier, in 2015. For creative production, reviewers noted the screen suffered from poor overall accuracy, contrast, and color range (at less than 70% coverage of the DCI-P3 standard) compared to direct competitors such as the Apple MacBook Pro and Dell XPS lineups, both of which come with factory-calibrated displays and significantly better visual reproduction than the Surface Book 3. For gaming and entertainment consumption, the Surface Book 3's thick screen bezels, slow response time, and lack of higher-refresh-rate display panels negatively impacted the product proposition. For other high-performance workloads, the Surface Book 3 also fell short against several key competitors, many of which offer 6- or 8-core processors and up to 64 GB of system memory (128 GB in some cases); in contrast, the Surface Book 3 has a low-powered 4-core ultrabook processor and up to 32 GB of memory. Aside from the device's poor market fit and consequently niche appeal, some reviewers also raised concerns about stagnation in product innovation. When reviewing the 13.5-inch model, Dieter Bohn of The Verge said, "The idea here is you're supposed to get a full-powered, pro laptop with a GPU, and lots of horsepower and battery at the base, but if you want you may also detach the screen and detach it into a tablet. Now, with the third iteration, we finally understand the trade-offs (...) You have to ask yourself, how much the detach means to you." 
While he continues to highlight the device's good quality hardware, touch and pen capabilities, and impressive graphics performance, he also noted the Intel Core i7 CPU equipped inside the device is restrictive, "the extra cost that you pay doesn't really fit on the specs sheet." Devindra Hardawar of Engadget, who gave positive remarks to the predecessor Surface Book 2, notes similar problems with the lackluster CPU performance in 2020, "The Surface Book 3 features Intel's quad-core 10th generation Ice Lake CPUs, which max out at a 3.9GHz Turbo Boost speed. Those chips also appear in the Surface Laptop 3, an ultraportable that doesn’t even pretend to handle heavy lifting. The MacBook Pro 16-inch, on the other hand, offers Intel's recent six and eight-core CPUs, including the monstrously powerful 5GHz Core i9. Dell's XPS 15 can also be configured with similar chips reaching up to 5.1GHz. You do the math. There's just no way the Surface Book 3 can compete in a CPU fight." Luke Larsen of Digital Trends writes, "CPU performance on its own isn’t impressive for a device this large. There’s one primary reason for this: It uses the same 15-watt chip that appears in small laptops like the Dell XPS 13, Surface Laptop 3, and HP Spectre x360 13," "The difference in core count makes a massive difference in performance. Add four cores with a laptop like the Dell XPS 15, and you’ll see a 53% better score in Cinebench R20’s multi-core test than the Surface Book 3." Jordan Novet, on CNBC, noted the Surface Book 3's ability to handle heavy graphical workloads, but also criticized the device's dated design and poor battery life, "Microsoft could stand to get more experimental with this product. Performance is excellent. The computer stays quiet and cool to the touch while handling workloads that can challenge lesser machines. (However,) I typically got around six and a half hours’ worth of battery life on the Surface Book 3. 
That's disappointing because I got almost seven and a half hours on the previous model (...) Don't get me wrong. The Surface Book 3 isn't a bad PC. If you need a new PC, you could do worse. It's just iterative, and no longer feels fresh. It's not a major leap forward for Microsoft's most powerful portable PC. When Microsoft redesigns the Surface Book and makes this otherwise very good laptop look modern again, then it'll be easier to justify the splurge." Known issues Some devices suffer from screen blackout issues. Some devices are known to have battery connection issues that worsen over time and may require battery replacement. References External links Microsoft Surface Tablet computers introduced in 2020 2-in-1 PCs
Surface Book 3
Technology
1,821