CORDIC (coordinate rotation digital computer), also known as Volder's algorithm or the digit-by-digit method, comprises circular CORDIC (Jack E. Volder),[1][2] linear CORDIC, hyperbolic CORDIC (John Stephen Walther),[3][4] and generalized hyperbolic CORDIC (GH CORDIC) (Yuanyong Luo et al.).[5][6] It is a simple and efficient algorithm for calculating trigonometric functions, hyperbolic functions, square roots, multiplications, divisions, and exponentials and logarithms with arbitrary base, typically converging at one digit (or bit) per iteration. CORDIC is therefore an example of a digit-by-digit algorithm. CORDIC and the closely related methods known as pseudo-multiplication and pseudo-division, or factor combining, are commonly used when no hardware multiplier is available (e.g. in simple microcontrollers and field-programmable gate arrays, FPGAs), as the only operations they require are addition, subtraction, bitshifts and lookup tables. As such, they all belong to the class of shift-and-add algorithms. In computer science, CORDIC is often used to implement floating-point arithmetic when the target platform lacks a hardware multiplier for cost or space reasons.

Similar mathematical techniques were published by Henry Briggs as early as 1624[7][8] and Robert Flower in 1771,[9] but CORDIC is better optimized for low-complexity finite-state CPUs. CORDIC was conceived in 1956[10][11] by Jack E. Volder at the aeroelectronics department of Convair out of necessity to replace the analog resolver in the B-58 bomber's navigation computer with a more accurate and faster real-time digital solution.[11] Therefore, CORDIC is sometimes referred to as a digital resolver.[12][13]

In his research Volder was inspired by a formula in the 1946 edition of the CRC Handbook of Chemistry and Physics:[11]

$K_n R \sin(\theta \pm \varphi) = R\sin(\theta) \pm 2^{-n} R\cos(\theta)$
$K_n R \cos(\theta \pm \varphi) = R\cos(\theta) \mp 2^{-n} R\sin(\theta)$

where $\varphi$ is such that $\tan(\varphi) = 2^{-n}$, and $K_n := \sqrt{1 + 2^{-2n}}$.

His research led to an internal technical report proposing the CORDIC algorithm to solve sine and cosine functions, and a prototypical computer implementing it.[10][11] The report also discussed the possibility of computing hyperbolic coordinate rotation, logarithms and exponential functions with modified CORDIC algorithms.[10][11] Utilizing CORDIC for multiplication and division was also conceived at this time.[11] Based on the CORDIC principle, Dan H. Daggett, a colleague of Volder at Convair, developed conversion algorithms between binary and binary-coded decimal (BCD).[11][14]

In 1958, Convair finally started to build a demonstration system to solve radar fix-taking problems, named CORDIC I, completed in 1960 without Volder, who had already left the company.[1][11] More universal CORDIC II models A (stationary) and B (airborne) were built and tested by Daggett and Harry Schuss in 1962.[11][15]

Volder's CORDIC algorithm was first described publicly in 1959,[1][2][11][13][16] which caused it to be incorporated into navigation computers by companies including Martin-Orlando, Computer Control, Litton, Kearfott, Lear-Siegler, Sperry, Raytheon, and Collins Radio.[11]

Volder teamed up with Malcolm McMillan to build Athena, a fixed-point desktop calculator utilizing his binary CORDIC algorithm.[17] The design was introduced to Hewlett-Packard in June 1965, but not accepted.[17] Still, McMillan introduced David S.
Cochran (HP) to Volder's algorithm, and when Cochran later met Volder he referred him to a similar approach John E. Meggitt (IBM[18]) had proposed as pseudo-multiplication and pseudo-division in 1961.[18][19] Meggitt's method also suggested the use of base 10[18] rather than base 2, as used by Volder's CORDIC so far. These efforts led to the ROMable logic implementation of a decimal CORDIC prototype machine inside Hewlett-Packard in 1966,[20][19] built by and conceptually derived from Thomas E. Osborne's prototypical Green Machine, a four-function, floating-point desktop calculator he had completed in DTL logic[17] in December 1964.[21] This project resulted in the public demonstration of Hewlett-Packard's first desktop calculator with scientific functions, the HP 9100A, in March 1968, with series production starting later that year.[17][21][22][23]

When Wang Laboratories found that the HP 9100A used an approach similar to the factor combining method in their earlier LOCI-1[24] (September 1964) and LOCI-2 (January 1965)[25][26] Logarithmic Computing Instrument desktop calculators,[27] they unsuccessfully accused Hewlett-Packard of infringement of one of An Wang's patents in 1968.[19][28][29][30]

John Stephen Walther at Hewlett-Packard generalized the algorithm into the Unified CORDIC algorithm in 1971, allowing it to calculate hyperbolic functions, natural exponentials, natural logarithms, multiplications, divisions, and square roots.[31][3][4][32] The CORDIC subroutines for trigonometric and hyperbolic functions could share most of their code.[28] This development resulted in the first scientific handheld calculator, the HP-35, in 1972.[28][33][34][35][36][37]

Based on hyperbolic CORDIC, Yuanyong Luo et al. further proposed a generalized hyperbolic CORDIC (GH CORDIC) to directly compute logarithms and exponentials with an arbitrary fixed base in 2019.[5][6][38][39][40] Theoretically, hyperbolic CORDIC is a special case of GH CORDIC.[5]

Originally, CORDIC was implemented only in the binary numeral system, and despite Meggitt suggesting the use of the decimal system for his pseudo-multiplication approach, decimal CORDIC remained mostly unheard of for several more years, so that Hermann Schmid and Anthony Bogacki still suggested it as a novelty as late as 1973,[16][13][41][42][43] and it was found only later that Hewlett-Packard had already implemented it in 1966.[11][13][20][28]

Decimal CORDIC became widely used in pocket calculators,[13] most of which operate in binary-coded decimal (BCD) rather than binary. This change in the input and output format did not alter CORDIC's core calculation algorithms. CORDIC is particularly well-suited for handheld calculators, in which low cost, and thus low chip gate count, is much more important than speed.

CORDIC has been implemented in the ARM-based STM32G4, the Intel 8087,[43][44][45][46][47] 80287,[47][48] and 80387[47][48] up to the 80486[43] coprocessor series, as well as in the Motorola 68881[43][44] and 68882, for some kinds of floating-point instructions, mainly as a way to reduce the gate counts (and complexity) of the FPU subsystem.
CORDIC uses simple shift-add operations for several computing tasks, such as the calculation of trigonometric, hyperbolic and logarithmic functions, real and complex multiplications, division, square-root calculation, solution of linear systems, eigenvalue estimation, singular value decomposition, QR factorization and many others. As a consequence, CORDIC has been used for applications in diverse areas such as signal and image processing, communication systems, robotics and 3D graphics, apart from general scientific and technical computation.[49][50]

The algorithm was used in the navigational system of the Apollo program's Lunar Roving Vehicle to compute bearing and range, or distance from the Lunar module.[51][52] CORDIC was used to implement the Intel 8087 math coprocessor in 1980, avoiding the need to implement hardware multiplication.[53]

CORDIC is generally faster than other approaches when a hardware multiplier is not available (e.g., in a microcontroller), or when the number of gates required to implement the functions it supports should be minimized (e.g., in an FPGA or ASIC). In fact, CORDIC is a standard drop-in IP block in FPGA development tools such as Vivado for Xilinx, while a power-series implementation is not, because a CORDIC IP block is general purpose: it can compute many different functions, whereas a hardware multiplier configured to evaluate a power series can only compute the function it was designed for. On the other hand, when a hardware multiplier is available (e.g., in a DSP microprocessor), table-lookup methods and power series are generally faster than CORDIC. In recent years, the CORDIC algorithm has also been used extensively for various biomedical applications, especially in FPGA implementations.

The STM32G4, STM32U5 and STM32H5 series and certain STM32H7 series of MCUs implement a CORDIC module to accelerate computations in various mixed-signal applications such as graphics for human-machine interfaces and field-oriented control of motors. While not as fast as a power-series approximation, CORDIC is indeed faster than interpolating table-based implementations such as the ones provided by the ARM CMSIS and C standard libraries,[54] though the results may be slightly less accurate, as the CORDIC modules provided only achieve 20 bits of precision in the result. For example, most of the performance difference compared to the ARM implementation is due to the overhead of the interpolation algorithm, which achieves full floating-point precision (24 bits) and can likely achieve relative error to that precision.[55] Another benefit is that the CORDIC module is a coprocessor and can be run in parallel with other CPU tasks.

The issue with using Taylor series is that while they do provide small absolute error, they do not exhibit well-behaved relative error.[56] Other means of polynomial approximation, such as minimax optimization, may be used to control both kinds of error.

Many older systems with integer-only CPUs have implemented CORDIC to varying extents as part of their IEEE floating-point libraries. As most modern general-purpose CPUs have floating-point registers with common operations such as add, subtract, multiply, divide, sine, cosine, square root, log10 and natural log, the need to implement CORDIC in them in software is nearly non-existent. Only microcontroller or special safety- and time-constrained software applications would need to consider using CORDIC.
CORDIC can be used to calculate a number of different functions. This explanation shows how to use CORDIC in rotation mode to calculate the sine and cosine of an angle, assuming that the desired angle is given in radians and represented in a fixed-point format. To determine the sine or cosine of an angle $\beta$, the y or x coordinate of a point on the unit circle corresponding to the desired angle must be found. Using CORDIC, one would start with the vector $v_0$:

$v_0 = \begin{pmatrix} 1 \\ 0 \end{pmatrix}.$

In the first iteration, this vector is rotated 45° counterclockwise to get the vector $v_1$. Successive iterations rotate the vector in one or the other direction by steps of decreasing size, until the desired angle has been achieved. Each step angle is $\gamma_i = \arctan(2^{-i})$ for $i = 0, 1, 2, \dots$.

More formally, every iteration calculates a rotation, which is performed by multiplying the vector $v_i$ with the rotation matrix $R_i$:

$v_{i+1} = R_i v_i.$

The rotation matrix is given by

$R_i = \begin{pmatrix} \cos(\gamma_i) & -\sin(\gamma_i) \\ \sin(\gamma_i) & \cos(\gamma_i) \end{pmatrix}.$

Using the trigonometric identity

$\cos(\gamma_i) = \frac{1}{\sqrt{1+\tan^2(\gamma_i)}},$

the cosine factor can be taken out to give

$R_i = \cos(\gamma_i) \begin{pmatrix} 1 & -\tan(\gamma_i) \\ \tan(\gamma_i) & 1 \end{pmatrix}.$

The expression for the rotated vector $v_{i+1} = R_i v_i$ then becomes

$v_{i+1} = \cos(\gamma_i) \begin{pmatrix} x_i - y_i \tan(\gamma_i) \\ y_i + x_i \tan(\gamma_i) \end{pmatrix},$

where $x_i$ and $y_i$ are the components of $v_i$. Setting the angle $\gamma_i$ for each iteration such that $\tan(\gamma_i) = \pm 2^{-i}$ still yields a series that converges to every possible output value. The multiplication by the tangent can therefore be replaced by a division by a power of two, which is efficiently done in digital computer hardware using a bit shift. The expression then becomes

$v_{i+1} = K_i \begin{pmatrix} x_i - \sigma_i 2^{-i} y_i \\ y_i + \sigma_i 2^{-i} x_i \end{pmatrix},$

where $K_i = \cos(\gamma_i)$, and $\sigma_i$ is used to determine the direction of the rotation: if the angle $\gamma_i$ is positive, then $\sigma_i$ is +1, otherwise it is −1.

The following trigonometric identity can be used to replace the cosine,

$\cos(\arctan(x)) = \frac{1}{\sqrt{1+x^2}},$

giving this multiplier for each iteration:

$K_i = \frac{1}{\sqrt{1+2^{-2i}}}.$

The $K_i$ factors can then be taken out of the iterative process and applied all at once afterwards with a scaling factor $K(n)$:

$K(n) = \prod_{i=0}^{n-1} K_i = \prod_{i=0}^{n-1} \frac{1}{\sqrt{1+2^{-2i}}},$

which is calculated in advance and stored in a table, or as a single constant if the number of iterations is fixed. This correction could also be made in advance, by scaling $v_0$ and hence saving a multiplication. Additionally, it can be noted that[43]

$K = \lim_{n \to \infty} K(n) \approx 0.6072529350088812561694,$

to allow further reduction of the algorithm's complexity. Some applications may avoid correcting for $K$ altogether, resulting in a processing gain $A$:[57]

$A = \frac{1}{K} \approx 1.64676.$

After a sufficient number of iterations, the vector's angle will be close to the wanted angle $\beta$. For most ordinary purposes, 40 iterations (n = 40) are sufficient to obtain the correct result to the 10th decimal place. The only task left is to determine whether the rotation should be clockwise or counterclockwise at each iteration (choosing the value of $\sigma$).
This is done by keeping track of how much the angle was rotated at each iteration and subtracting that from the wanted angle; then, in order to get closer to the wanted angle $\beta$, if $\beta_{n+1}$ is positive, the rotation is clockwise, otherwise it is negative and the rotation is counterclockwise:

$\beta_{n+1} = \beta_n - \sigma_n \gamma_n, \qquad \gamma_n = \arctan(2^{-n}).$

The values of $\gamma_n$ must also be precomputed and stored. For small angles, $\arctan(\gamma_n) \approx \gamma_n$ can be used to reduce the table size.

The sine of the angle $\beta$ is the y coordinate of the final vector $v_n$, while the x coordinate is the cosine value.

The rotation-mode algorithm described above can rotate any vector (not only a unit vector aligned along the x axis) by an angle between −90° and +90°. Decisions on the direction of the rotation depend on $\beta_i$ being positive or negative.

The vectoring mode of operation requires a slight modification of the algorithm. It starts with a vector whose x coordinate is positive whereas the y coordinate is arbitrary. Successive rotations have the goal of rotating the vector to the x axis (and therefore reducing the y coordinate to zero). At each step, the value of y determines the direction of the rotation. The final value of $\beta_i$ contains the total angle of rotation. The final value of x will be the magnitude of the original vector scaled by K. So, an obvious use of the vectoring mode is the transformation from rectangular to polar coordinates (a software sketch of both modes is given at the end of this section).

In Java the Math class has a scalb(double x, int scale) method to perform such a shift,[58] C has the ldexp function,[59] and the x86 class of processors has the fscale floating-point operation.[60]

The number of logic gates for the implementation of a CORDIC is roughly comparable to the number required for a multiplier, as both require combinations of shifts and additions. The choice for a multiplier-based or CORDIC-based implementation will depend on the context. The multiplication of two complex numbers represented by their real and imaginary components (rectangular coordinates), for example, requires 4 multiplications, but could be realized by a single CORDIC operating on complex numbers represented by their polar coordinates, especially if the magnitude of the numbers is not relevant (multiplying a complex vector with a vector on the unit circle actually amounts to a rotation). CORDICs are often used in circuits for telecommunications such as digital down converters.

In two of the publications by Vladimir Baykov,[61][62] it was proposed to use the double-iteration method for the implementation of the functions arcsine, arccosine, natural logarithm and exponential, as well as for the calculation of the hyperbolic functions. The double-iteration method differs from the classical CORDIC method, where the iteration step value changes on every iteration, in that the iteration step value is repeated twice and changes only every other iteration. Hence the index sequence for double iterations is $i = 0, 0, 1, 1, 2, 2, \dots$, whereas with ordinary iterations it is $i = 0, 1, 2, \dots$.
The double-iteration method guarantees the convergence of the method throughout the valid range of argument changes.

The generalization of the CORDIC convergence problems for an arbitrary positional number system with radix R showed[63] that for the functions sine, cosine and arctangent, it is enough to perform (R − 1) iterations for each value of i (i = 0 or 1 to n, where n is the number of digits), i.e. for each digit of the result. For the natural logarithm, exponential, hyperbolic sine, hyperbolic cosine and hyperbolic arctangent, R iterations should be performed for each value of i. For the functions arcsine and arccosine, 2(R − 1) iterations should be performed for each number digit, i.e. for each value of i.[63] For the inverse hyperbolic sine and cosine functions, the number of iterations will be 2R for each i, that is, for each result digit.

CORDIC is part of the class of "shift-and-add" algorithms, as are the logarithm and exponential algorithms derived from Henry Briggs' work. Another shift-and-add algorithm which can be used for computing many elementary functions is the BKM algorithm, which is a generalization of the logarithm and exponential algorithms to the complex plane. For instance, BKM can be used to compute the sine and cosine of a real angle x (in radians) by computing the exponential of $0 + ix$, which is $\operatorname{cis}(x) = \cos(x) + i\sin(x)$. The BKM algorithm is slightly more complex than CORDIC, but has the advantage that it does not need a scaling factor (K).
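The rotation- and vectoring-mode iterations described above are straightforward to prototype in software. The following Python sketch is illustrative only: it is not taken from the cited sources, and it uses floating-point arithmetic where a hardware implementation would use fixed-point registers and bit shifts. It implements circular CORDIC with the angle table $\gamma_i = \arctan(2^{-i})$ and the scaling factor K discussed above.

```python
import math

N = 40  # the text notes ~40 iterations give roughly 10 decimal places

# Precomputed angle table gamma_i = arctan(2^-i) and scaling factor K(N).
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))   # K(N) = product of cos(gamma_i)

def cordic_rotate(beta):
    """Rotation mode: returns (cos(beta), sin(beta)) for -pi/2 <= beta <= pi/2."""
    x, y, z = 1.0, 0.0, beta          # start from v0 = (1, 0); z is the residual angle
    p2i = 1.0                         # 2^-i; a right shift in fixed-point hardware
    for gamma in ANGLES:
        sigma = 1.0 if z >= 0.0 else -1.0
        x, y = x - sigma * p2i * y, y + sigma * p2i * x
        z -= sigma * gamma
        p2i /= 2.0
    return x * K, y * K               # apply the deferred scaling factor K

def cordic_vector(x, y):
    """Vectoring mode: for x > 0, returns (magnitude, angle) of the vector (x, y)."""
    z = 0.0
    p2i = 1.0
    for gamma in ANGLES:
        sigma = -1.0 if y >= 0.0 else 1.0   # rotate toward the x axis
        x, y = x - sigma * p2i * y, y + sigma * p2i * x
        z -= sigma * gamma                  # accumulate the total rotation angle
        p2i /= 2.0
    return x * K, z

print(cordic_rotate(0.6), (math.cos(0.6), math.sin(0.6)))
print(cordic_vector(3.0, 4.0), (math.hypot(3.0, 4.0), math.atan2(4.0, 3.0)))
```

A hardware version would replace the products with `p2i` by arithmetic right shifts of fixed-point registers and read the angles from a small ROM, which is exactly why the method needs no multiplier.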
https://en.wikipedia.org/wiki/Unified_CORDIC
The Unified Media Interface (UMI) interconnect is the link between an AMD Accelerated Processing Unit (APU) and the FCH (Fusion Controller Hub).[1][2] It is similar to Intel's DMI, and is based on PCI Express.[3] The Fusion Controller Hub is similar to the southbridge of earlier chipsets.
https://en.wikipedia.org/wiki/Unified_Media_Interface
Unified communications (UC) is a business and marketing concept describing the integration of enterprise communication services such as instant messaging (chat), presence information, voice (including IP telephony), mobility features (including extension mobility and single number reach), audio, web and video conferencing, fixed-mobile convergence (FMC), desktop sharing, data sharing (including web-connected electronic interactive whiteboards), call control and speech recognition with non-real-time communication services such as unified messaging (integrated voicemail, e-mail, SMS and fax). UC is not necessarily a single product, but a set of products that provides a consistent unified user interface and user experience across multiple devices and media types.[1]

In its broadest sense, UC can encompass all forms of communications that are exchanged via a network, including other forms such as Internet Protocol television (IPTV) and digital signage as they become an integrated part of the network communications deployment, and may be directed as one-to-one communications or broadcast communications from one to many.

UC allows an individual to send a message on one medium and receive the same communication on another medium. For example, one can receive a voicemail message and choose to access it through e-mail or a cell phone. If the sender is online according to the presence information and currently accepts calls, the response can be sent immediately through text chat or a video call. Otherwise, it may be sent as a non-real-time message that can be accessed through a variety of media.

There are varying definitions for unified communications.[2] A basic definition is "communications integrated to optimize business processes and increase user productivity", but such integration can take many forms, such as: users simply adjusting their habits, manual integration as defined by procedures and training, integration of communications into off-the-shelf tools such as Thunderbird, Outlook, Lotus Notes, BlackBerry, Salesforce.com, etc., or purpose-specific integration into customized applications in specific operating departments or in vertical markets such as healthcare.[3]

Unified communications is an evolving set of technologies that automates and unifies human and device communications in a common context and experience. It optimizes business processes and enhances human communications by reducing latency, managing flows, and eliminating device and media dependencies. A UC system may include features such as messaging, voice and video calls, meetings, team collaboration, file sharing, and integrated apps.[1]

The history of unified communications is tied to the evolution of the supporting technology. Originally, business telephone systems were a private branch exchange (PBX) or key telephone system provided and managed by the local phone company. These systems used the phone company's analog or digital circuits to deliver phone calls from a central office (CO) to the customer. The system, PBX or key telephone system, accepted the call and routed it to the appropriate extension or line appearance on the phones at the customer's office.

In the 1980s, voice mail systems with IVR-like features were recognized as an access mechanism to corporate information for mobile employees, before the explosion of cell phones and the proliferation of PCs.
E-mail also began to grow in popularity, and as early as 1985, e-mail reading features were made available for certain voicemail systems.[4] The term unified communications arose in the mid-1990s, when messaging and real-time communications began to combine. In 1993, ThinkRite (VoiceRite) developed the unified messaging system POET for IBM's internal use. It was installed in 55 IBM US branch offices for 54,000 employees and integrated with IBM OfficeVision/VM (PROFS), providing IBMers with one phone number for voicemail, fax, alphanumeric paging and follow-me. POET was in use until 2000.[5] In the late 1990s, a New Zealand-based organization called IPFX developed a commercially available presence product, which let users see the location of colleagues, make decisions on how to contact them, and define how their messages were handled based on their own presence. The first full-featured converged telephony/UC offering was the Nortel Succession MX (Multimedia eXchange) product,[6] which later became known as the Nortel Multimedia Communications Server (MCS 5100).[7]

The major drawback of this service was the reliance on the phone company or vendor partner to manage (in most cases) the PBX or key telephone system. This resulted in a residual, recurring cost to customers. Over time, the PBX became more privatized, and internal staff members were hired to manage these systems. This was typically done by companies that could afford to bring this skill in-house and thereby reduce the requirement to notify the phone company or their local PBX vendor each time a change was required in the system. This increasing privatization triggered the development of more powerful software that increased the usability and manageability of the system.

As companies began to deploy IP networks in their environments, they began to use these networks to transmit voice instead of relying on traditional telephone network circuits. Some vendors such as Avaya and Nortel created circuit packs or cards for their PBX systems that could interconnect their communications systems to the IP network. Other vendors such as Cisco created equipment that could be placed in routers to transport voice calls across a company network from site to site. The termination of PBX circuits to be transported across a network and delivered to another phone system is traditionally referred to as Voice over IP (Voice over Internet Protocol, or VoIP). This design required special hardware on both ends of the network equipment to provide the termination and delivery at each site.

As time went by, Siemens, Alcatel-Lucent, Cisco, Nortel, Avaya, Wildix and Mitel realized the potential for eliminating the traditional PBX or key system and replacing it with a solution based on IP. This IP solution is software-driven only, and thereby does away with the need for "switching" equipment at a customer site (save the equipment necessary to connect to the outside world). This created a new technology, now called IP telephony. A system that uses IP-based telephony services only, rather than a legacy PBX or key system, is called an IP telephony solution. With the advent of IP telephony, the handset was no longer a digital device hanging off a copper loop from a PBX. Instead, the handset lived on the network as another computer device.
The transport of audio was therefore no longer a variation in voltages or a modulation of frequency as with the earlier handsets, but rather the encoding of the conversation using a codec (G.711 originally) and its transport with a protocol such as the Real-time Transport Protocol (RTP). When the handset is just another computer connected to the network, advanced features can be provided by letting computer applications communicate with server computers elsewhere in any number of ways; applications can even be upgraded or freshly installed on the handset.

When considering the efforts of unified communications solution providers, the overall goal is to no longer focus strictly on the telephony portion of daily communications. The unification of all communication devices inside a single platform provides the mobility, presence, and contact capabilities that extend beyond the phone to all devices a person may use or have at their disposal.[8]

Given the wide scope of unified communications, there has been a lack of community definition, as most solutions are from proprietary vendors. Since March 2008, there are several open-source projects with a UC focus, such as Druid and Elastix, which are based on Asterisk, a leading open-source telephony project. The aim of these open-source UC projects is to allow the open-source community of developers and users to have a say in unified communications and what it means.

IBM entered the unified communications marketplace with several products, beginning in 2006 with the updated release of a unified communications middleware platform, IBM Lotus Sametime 7.5,[9] as well as related products and services such as IBM WebSphere Unified Messaging, IBM Global Technology Services - Converged Communications Services, and more. In October 2007, Microsoft entered the UC market with the launch of Office Communications Server,[10] a software-based application running on Windows. In March 2008, Unison Technologies launched Unison,[11] a software-based unified communications solution that runs on Linux and Windows.

In May 2010, the Unified Communications Interoperability Forum (UCIF) was announced. UCIF is an independent, non-profit alliance between technology companies that creates and tests interoperability profiles, implementation guidelines, and best practices for interoperability between UC products and existing communications and business applications. The original founding members were HP, Juniper Networks, Logitech/LifeSize, Microsoft, and Polycom.[12][13]

There is some debate about whether unified communications hosted on an enterprise's premises is the same thing as unified communications hosted by a service provider, or UCaaS (UC as a Service).[14] While both offer their respective advantages, all of these approaches can be grouped under the single umbrella category of unified communications.[15]

Unified communications is sometimes confused with unified messaging, but it is distinct. Unified communications refers to both real-time and non-real-time delivery of communications based on the preferred method and location of the recipient; unified messaging culls messages from several sources (such as e-mail, voice mail and faxes), but holds those messages only for retrieval at a later time. Unified communications allows an individual to check and retrieve an e-mail or voice mail from any communication device at any time. It expands beyond voice mail services to data communications and video services.[16]
With unified communications, multiple modes of business communications are integrated. Unified communications is not a single product but a collection of elements.[17] Presence, knowing where intended recipients are and whether they are available in real time, is a key component of unified communications. Unified communications integrates all the systems a user might already use, and helps those systems work together in real time. For example, unified communications technology could allow a user to seamlessly collaborate with another person on a project, even if the two users are in separate locations. The user could quickly locate the necessary person by accessing an interactive directory, engage in a text messaging session, and then escalate the session to a voice call, or even a video call. In another example, an employee receives a call from a customer who wants answers. Unified communications enables that employee to call an expert colleague from a real-time list. This way, the employee can answer the customer faster by eliminating rounds of back-and-forth e-mails and phone-tag.

The examples in the previous paragraph primarily describe "personal productivity" enhancements that tend to benefit the individual user. While such benefits can be important, enterprises are finding that they can achieve even greater impact by using unified communications capabilities to transform business processes. This is achieved by integrating UC functionality directly into the business applications using development tools provided by many of the suppliers. Instead of the individual user invoking the UC functionality to, say, find an appropriate resource, the workflow or process application automatically identifies the resource at the point in the business activity where one is needed.

When used in this manner, the concept of presence often changes. Most people associate presence with instant messaging (IM "buddy lists"),[18] where the status of individuals is identified. But in many business process applications, what is important is finding someone with a certain skill. In these environments, presence identifies available skills or capabilities. This "business process" approach to integrating UC functionality can result in bottom-line benefits that are an order of magnitude greater than those achievable by personal productivity methods alone.

Unified communications & collaboration (UCC) is the integration of various communications methods with collaboration tools such as virtual whiteboards, real-time audio and video conferencing, and enhanced call control capabilities. Before this fusion of communications and collaboration tools into a single platform, enterprise collaboration service vendors and enterprise communications service vendors offered distinctly different solutions. Now, collaboration service vendors also offer communications services, and communications service providers have developed collaboration tools.

Unified communications & collaboration as a service (UCCaaS) refers to cloud-based UCC platforms. Compared to premises-based UCC solutions, UCCaaS platforms offer enhanced flexibility and scalability due to the SaaS subscription model.

Unified communications provisioning is the act of entering and configuring the settings for users of phone systems, instant messaging, telepresence, and other collaboration channels. Provisioners refer to this process as making moves, adds, changes, and deletes, or MAC-Ds.[19]
https://en.wikipedia.org/wiki/Unified_communications
Unified endpoint management (UEM) is a class of software tools that provide a single management interface for mobile, PC and other devices. It is an evolution of, and replacement for, mobile device management (MDM), enterprise mobility management (EMM) and client management tools.[1] It provides capabilities for managing and securing mobile applications, content, collaboration and more. It is a single approach to managing all endpoints, such as smartphones, tablets, laptops, printers, ruggedized devices, Internet of Things (IoT) devices and wearables.

With new types of devices being used in the workplace, administration of traditional laptops and desktops alongside the new devices became a challenging task for IT administrators. Traditional client management tools (CMTs) lacked some features needed for a complete approach to endpoint management. The rise of UEM[2] was also a result of the adoption of newer enterprise-friendly platforms such as Windows 10 and iOS 11.

Differences between MDM, EMM and UEM[3]
https://en.wikipedia.org/wiki/Unified_endpoint_management
In physics, a unified field theory (UFT) is a type of field theory that allows all fundamental forces and elementary particles to be written in terms of a single type of field. According to modern discoveries in physics, forces are not transmitted directly between interacting objects but instead are described and interpreted by intermediary entities called fields.[1][2] According to quantum field theory, particles are themselves the quanta of fields. Different fields in physics include vector fields such as the electromagnetic field, spinor fields whose quanta are fermionic particles such as electrons, and tensor fields such as the metric tensor field that describes the shape of spacetime and gives rise to gravitation in general relativity. Unified field theory attempts to organize these fields into a single mathematical structure.

For over a century, unified field theory has remained an open line of research. The term was coined by Albert Einstein, who attempted to unify his general theory of relativity with electromagnetism.[3] Einstein attempted to create a classical unified field theory, rejecting quantum mechanics. Among other difficulties, this required a new explanation of particles as singularities or solitons instead of field quanta. Later attempts to unify general relativity with other forces incorporate quantum mechanics. The concepts of a "Theory of Everything"[4] and a Grand Unified Theory[5] are closely related to unified field theory, but differ by not requiring the basis of nature to be fields, and often by attempting to explain physical constants of nature. Grand Unified Theories do not attempt to include the gravitational force and can therefore operate entirely within quantum field theory. The goal of a unified field theory has led to significant progress in theoretical physics.[6]

All four of the known fundamental forces are mediated by fields. In the Standard Model of particle physics, three of these result from the exchange of gauge bosons: the electromagnetic interaction (mediated by the photon), the weak interaction (mediated by the W and Z bosons), and the strong interaction (mediated by gluons). General relativity likewise describes gravitation as the result of the metric tensor field, which describes the shape of spacetime. In the Standard Model, the "matter" particles (electrons, quarks, neutrinos, etc.) are described as the quanta of spinor fields. Gauge boson fields also have quanta, such as photons for the electromagnetic field. The Standard Model has a unique fundamental scalar field, the Higgs field, whose quanta are called Higgs bosons.

The first successful classical unified field theory was developed by James Clerk Maxwell. In 1820, Hans Christian Ørsted discovered that electric currents exerted forces on magnets, while in 1831, Michael Faraday made the observation that time-varying magnetic fields could induce electric currents. Until then, electricity and magnetism had been thought of as unrelated phenomena. In 1864, Maxwell published his famous paper on a dynamical theory of the electromagnetic field. This was the first example of a theory that was able to encompass previously separate field theories (namely electricity and magnetism) to provide a unifying theory of electromagnetism. By 1905, Albert Einstein had used the constancy of the speed of light in Maxwell's theory to unify our notions of space and time into an entity we now call spacetime.
In 1915, he expanded this theory of special relativity to a description of gravity, general relativity, using a field to describe the curving geometry of four-dimensional (4D) spacetime. In the years following the creation of the general theory, a large number of physicists and mathematicians enthusiastically participated in the attempt to unify the then-known fundamental interactions.[7] Given later developments in this domain, of particular interest are the theories of Hermann Weyl of 1919, who introduced the concept of an (electromagnetic) gauge field in a classical field theory,[8] and, two years later, that of Theodor Kaluza, who extended general relativity to five dimensions.[9] Continuing in this latter direction, Oscar Klein proposed in 1926 that the fourth spatial dimension be curled up into a small, unobserved circle. In Kaluza–Klein theory, the gravitational curvature of the extra spatial direction behaves as an additional force similar to electromagnetism. These and other models of electromagnetism and gravity were pursued by Albert Einstein in his attempts at a classical unified field theory. By 1930 Einstein had already considered the Einstein–Maxwell–Dirac system [Dongen]. This system is (heuristically) the super-classical [Varadarajan] limit of (the not mathematically well-defined) quantum electrodynamics. One can extend this system to include the weak and strong nuclear forces to get the Einstein–Yang–Mills–Dirac system. The French physicist Marie-Antoinette Tonnelat published a paper in the early 1940s on the standard commutation relations for the quantized spin-2 field. She continued this work in collaboration with Erwin Schrödinger after World War II. In the 1960s Mendel Sachs proposed a generally covariant field theory that did not require recourse to renormalization or perturbation theory. In 1965, Tonnelat published a book on the state of research on unified field theories.

In 1963, American physicist Sheldon Glashow proposed that the weak nuclear force, electricity, and magnetism could arise from a partially unified electroweak theory. In 1967, Pakistani physicist Abdus Salam and American physicist Steven Weinberg independently revised Glashow's theory by having the masses for the W particle and Z particle arise through spontaneous symmetry breaking with the Higgs mechanism. This unified theory modelled the electroweak interaction as a force mediated by four particles: the photon for the electromagnetic aspect, a neutral Z particle, and two charged W particles for the weak aspect. As a result of the spontaneous symmetry breaking, the weak force becomes short-range and the W and Z bosons acquire masses of 80.4 and 91.2 GeV/c², respectively. Their theory was first given experimental support by the discovery of weak neutral currents in 1973. In 1983, the Z and W bosons were first produced at CERN by Carlo Rubbia's team. For their insights, Glashow, Salam, and Weinberg were awarded the Nobel Prize in Physics in 1979. Carlo Rubbia and Simon van der Meer received the Prize in 1984. After Gerardus 't Hooft showed the Glashow–Weinberg–Salam electroweak interactions to be mathematically consistent, the electroweak theory became a template for further attempts at unifying forces. In 1974, Sheldon Glashow and Howard Georgi proposed unifying the strong and electroweak interactions into the Georgi–Glashow model, the first Grand Unified Theory, which would have observable effects for energies well above 100 GeV.
Since then there have been several proposals for Grand Unified Theories, e.g. the Pati–Salam model, although none is currently universally accepted. A major problem for experimental tests of such theories is the energy scale involved, which is well beyond the reach of current accelerators. Grand Unified Theories make predictions for the relative strengths of the strong, weak, and electromagnetic forces, and in 1991 LEP determined that supersymmetric theories have the correct ratio of couplings for a Georgi–Glashow Grand Unified Theory. Many Grand Unified Theories (but not Pati–Salam) predict that the proton can decay, and if this were to be seen, details of the decay products could give hints at more aspects of the Grand Unified Theory. It is at present unknown if the proton can decay, although experiments have determined a lower bound of $10^{35}$ years for its lifetime.

Theoretical physicists have not yet formulated a widely accepted, consistent theory that combines general relativity and quantum mechanics to form a theory of everything. Trying to combine the graviton with the strong and electroweak interactions leads to fundamental difficulties and the resulting theory is not renormalizable. The incompatibility of the two theories remains an outstanding problem in the field of physics.
https://en.wikipedia.org/wiki/Unified_field_theory
The unified neutral theory of biodiversity and biogeography (here "Unified Theory" or "UNTB") is a theory and the title of a monograph by ecologist Stephen P. Hubbell.[1] It aims to explain the diversity and relative abundance of species in ecological communities. Like other neutral theories of ecology, Hubbell assumes that the differences between members of an ecological community of trophically similar species are "neutral", or irrelevant to their success. This implies that niche differences do not influence abundance and that the abundance of each species follows a random walk.[2] The theory has sparked controversy,[3][4][5] and some authors consider it a more complex version of other null models that fit the data better.[6]

"Neutrality" means that at a given trophic level in a food web, species are equivalent in birth rates, death rates, dispersal rates and speciation rates, when measured on a per-capita basis.[7] This can be considered a null hypothesis to niche theory. Hubbell built on earlier neutral models, including Robert MacArthur and E. O. Wilson's theory of island biogeography[1] and Stephen Jay Gould's concepts of symmetry and null models.[7]

An "ecological community" is a group of trophically similar, sympatric species that actually or potentially compete in a local area for the same or similar resources.[1] Under the Unified Theory, complex ecological interactions are permitted among individuals of an ecological community (such as competition and cooperation), provided that all individuals obey the same rules. Asymmetric phenomena such as parasitism and predation are ruled out by the terms of reference; but cooperative strategies such as swarming, and negative interactions such as competing for limited food or light, are allowed (so long as all individuals behave alike). The theory predicts the existence of a fundamental biodiversity constant, conventionally written θ, that appears to govern species richness on a wide variety of spatial and temporal scales.

Although not strictly necessary for a neutral theory, many stochastic models of biodiversity assume a fixed, finite community size (total number of individual organisms). There are unavoidable physical constraints on the total number of individuals that can be packed into a given space (although space per se isn't necessarily a resource, it is often a useful surrogate variable for a limiting resource that is distributed over the landscape; examples would include sunlight or hosts, in the case of parasites). If a wide range of species is considered (say, giant sequoia trees and duckweed, two species that have very different saturation densities), then the assumption of constant community size might not be very good, because density would be higher if the smaller species were monodominant. Because the Unified Theory refers only to communities of trophically similar, competing species, however, it is unlikely that population density will vary too widely from one place to another.

Hubbell considers the fact that community sizes are constant and interprets it as a general principle: large landscapes are always biotically saturated with individuals. Hubbell thus treats communities as being of a fixed number of individuals, usually denoted by J. Exceptions to the saturation principle include disturbed ecosystems such as the Serengeti, where saplings are trampled by elephants and blue wildebeest, or gardens, where certain species are systematically removed.
When abundance data on natural populations are collected, two observations are almost universal: most species are rare, and a small number of species are markedly more abundant than the rest. Such observations typically generate a large number of questions. Why are the rare species rare? Why is the most abundant species so much more abundant than the median species abundance?

A non-neutral explanation for the rarity of rare species might suggest that rarity is a result of poor adaptation to local conditions. The UNTB suggests that it is not necessary to invoke adaptation or niche differences, because neutral dynamics alone can generate such patterns.

Species composition in any community will change randomly with time. Any particular abundance structure will have an associated probability. The UNTB predicts that the probability of a community of J individuals composed of S distinct species with abundances $n_1$ for species 1, $n_2$ for species 2, and so on up to $n_S$ for species S is given by

$\Pr(S; n_1, n_2, \ldots, n_S \mid \theta, J) = \frac{J!\,\theta^S}{\prod_{i=1}^{S} n_i \,\prod_{j=1}^{J} \phi_j! \,\prod_{k=1}^{J} (\theta + k - 1)},$

where $\theta = 2J\nu$ is the fundamental biodiversity number (ν is the speciation rate), and $\phi_i$ is the number of species that have i individuals in the sample. This equation shows that the UNTB implies a nontrivial dominance-diversity equilibrium between speciation and extinction.

As an example, consider a community with 10 individuals and three species "a", "b", and "c" with abundances 3, 6 and 1 respectively. Then the formula above would allow us to assess the likelihood of different values of θ. There are thus S = 3 species and $\phi_1 = \phi_3 = \phi_6 = 1$, all other $\phi$'s being zero. The formula would give

$\Pr(3; 3, 6, 1 \mid \theta, 10) = \frac{10!\,\theta^3}{(3 \cdot 6 \cdot 1)\,(1! \cdot 1! \cdot 1!)\,\theta(\theta+1)\cdots(\theta+9)},$

which could be maximized to yield an estimate for θ (in practice, numerical methods are used). The maximum likelihood estimate for θ is about 1.1478.

We could have labelled the species another way and counted the abundances as 1, 3, 6 instead (or 3, 1, 6, etc.). Logic tells us that the probability of observing a pattern of abundances will be the same for any permutation of those abundances. Here we would have $\Pr(3; 3, 6, 1) = \Pr(3; 1, 3, 6) = \Pr(3; 3, 1, 6)$, and so on. To account for this, it is helpful to consider only ranked abundances (that is, to sort the abundances before inserting them into the formula). A ranked dominance-diversity configuration is usually written as $\Pr(S; r_1, r_2, \ldots, r_s, 0, \ldots, 0)$, where $r_i$ is the abundance of the ith most abundant species: $r_1$ is the abundance of the most abundant, $r_2$ the abundance of the second most abundant species, and so on. For convenience, the expression is usually "padded" with enough zeros to ensure that there are J species (the zeros indicating that the extra species have zero abundance).

It is now possible to determine the expected abundance of the ith most abundant species:

$E(r_i) = \sum_{k=1}^{C} r_i(k)\,\Pr(S; r_1(k), r_2(k), \ldots, r_s(k), 0, \ldots, 0),$

where C is the total number of configurations, $r_i(k)$ is the abundance of the ith ranked species in the kth configuration, and Pr(...) is the dominance-diversity probability. This formula is difficult to manipulate mathematically, but relatively simple to simulate computationally.
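As a concrete illustration of the worked example above, the following Python sketch evaluates the sampling formula (as reconstructed above, in Ewens-type form) for the community with abundances 3, 6 and 1, and locates its maximum with a simple grid search. Note that the likelihood is quite flat near its peak and the exact position of the maximum depends on the precise normalization of the formula, so the value printed may differ somewhat from the 1.1478 quoted above.

```python
import math

def log_untb_likelihood(abundances, theta):
    """Log of Pr(S; n_1..n_S | theta, J) from the formula above,
    keeping only the terms that depend on theta."""
    J, S = sum(abundances), len(abundances)
    # log(theta^S) - log(theta * (theta + 1) * ... * (theta + J - 1))
    return S * math.log(theta) - sum(math.log(theta + k) for k in range(J))

abundances = [3, 6, 1]                       # the 10-individual example community
grid = [0.01 * k for k in range(1, 1000)]    # candidate theta values in (0, 10)
best = max(grid, key=lambda t: log_untb_likelihood(abundances, t))
print(best)  # maximum-likelihood estimate of theta for this tiny sample
```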
The model discussed so far is a model of a regional community, which Hubbell calls the metacommunity. Hubbell also acknowledged that on a local scale, dispersal plays an important role. For example, seeds are more likely to come from nearby parents than from distant parents. Hubbell introduced the parameter m, which denotes the probability of immigration into the local community from the metacommunity. If m = 1, dispersal is unlimited; the local community is just a random sample from the metacommunity and the formulas above apply. If m < 1, dispersal is limited and the local community is a dispersal-limited sample from the metacommunity, for which different formulas apply.

It has been shown[8] that $\langle \phi_n \rangle$, the expected number of species with abundance n, may be calculated by

$\langle \phi_n \rangle = \theta \,\frac{J!}{n!\,(J-n)!} \,\frac{\Gamma(\gamma)}{\Gamma(J+\gamma)} \int_0^{\gamma} \frac{\Gamma(n+y)}{\Gamma(1+y)} \,\frac{\Gamma(J-n+\gamma-y)}{\Gamma(\gamma-y)} \,e^{-y\theta/\gamma}\, dy,$

where θ is the fundamental biodiversity number, J the community size, $\Gamma$ the gamma function, and $\gamma = (J-1)m/(1-m)$. This formula is an approximation. The correct formula is derived in a series of papers, reviewed and synthesized by Etienne and Alonso in 2005,[9] in which $I = (J-1)m/(1-m)$ is a parameter that measures dispersal limitation. $\langle \phi_n \rangle$ is zero for n > J, as there cannot be more species than individuals.

This formula is important because it allows a quick evaluation of the Unified Theory. It is not suitable for testing the theory; for that purpose, the appropriate likelihood function should be used. For the metacommunity this was given above. For the local community with dispersal limitation, the likelihood involves coefficients $K(\overrightarrow{D}, A)$ for $A = S, \ldots, J$ that are fully determined by the data. This seemingly complicated expression involves Stirling numbers and Pochhammer symbols, but can be very easily calculated.[9]

An example of a species abundance curve can be found in Scientific American.[10]

UNTB distinguishes between a dispersal-limited local community of size J and a so-called metacommunity from which species can (re)immigrate and which acts as a heat bath for the local community. The distribution of species in the metacommunity is given by a dynamic equilibrium of speciation and extinction. Both community dynamics are modelled by appropriate urn processes, where each individual is represented by a ball with a color corresponding to its species. With a certain rate r, randomly chosen individuals reproduce, i.e. add another ball of their own color to the urn. Since one basic assumption is saturation, this reproduction has to happen at the cost of another random individual from the urn, which is removed. At a different rate μ, single individuals in the metacommunity are replaced by mutants of an entirely new species. Hubbell calls this simplified model for speciation a point mutation, using the terminology of the neutral theory of molecular evolution.

The urn scheme for the metacommunity of $J_M$ individuals is the following. At each time step, take one of the two possible actions:

1. With probability 1 − ν, draw an individual at random and replace another randomly drawn individual with a copy of it (a reproduction paired with a removal).
2. With probability ν, draw an individual at random and replace it with an individual of an entirely new species (a point-mutation speciation event).

The size $J_M$ of the metacommunity does not change. This is a point process in time. The length of the time steps is distributed exponentially. For simplicity, one can assume that each time step is as long as the mean time between two changes, which can be derived from the reproduction and mutation rates r and μ.
The probability ν is given as $\nu = \mu/(r+\mu)$.

The species abundance distribution for this urn process is given by Ewens's sampling formula, which was originally derived in 1972 for the distribution of alleles under neutral mutations. The expected number $S_M(n)$ of species in the metacommunity having exactly n individuals is:[11]

$S_M(n) = \frac{\theta}{n} \,\frac{\Gamma(J_M+1)\,\Gamma(J_M+\theta-n)}{\Gamma(J_M+1-n)\,\Gamma(J_M+\theta)},$

where $\theta = (J_M-1)\nu/(1-\nu) \approx J_M \nu$ is called the fundamental biodiversity number. For large metacommunities and $n \ll J_M$ one recovers the Fisher log-series as the species distribution.

The urn scheme for the local community of fixed size J is very similar to the one for the metacommunity. At each time step, take one of the two actions:

1. With probability 1 − m, draw an individual at random and replace another randomly drawn individual of the local community with a copy of it.
2. With probability m, replace a randomly drawn individual with an immigrant drawn from the metacommunity.

The metacommunity changes on a much larger timescale and is assumed to be fixed during the evolution of the local community. The resulting distribution of species in the local community, and the expected values, depend on four parameters, J, $J_M$, θ and m (or I), and are derived by Etienne and Alonso (2005),[9] including several simplifying limit cases like the one presented in the previous section (there called $\langle \phi_n \rangle$). The parameter m is a dispersal parameter. If m = 1, the local community is just a sample from the metacommunity. For m = 0, the local community is completely isolated from the metacommunity and all species will go extinct except one. This case has been analyzed by Hubbell himself.[1] The case 0 < m < 1 is characterized by a unimodal species distribution in a Preston diagram, often fitted by a log-normal distribution. This is understood as an intermediate state between domination by the most common species and sampling from the metacommunity, where singleton species are most abundant. UNTB thus predicts that in dispersal-limited communities rare species become even rarer. The log-normal distribution describes the maximum and the abundance of common species very well, but underestimates the number of very rare species considerably, which becomes apparent only for very large sample sizes.[1]
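The expected-species formula above is easy to evaluate numerically with log-gamma functions. The following Python sketch (illustrative only, with arbitrarily chosen $J_M$ and θ) also checks two properties consistent with the text: the abundance classes account for all $J_M$ individuals, and for $n \ll J_M$ the values approach the Fisher log-series $\theta x^n / n$ with $x = J_M/(J_M+\theta)$ (the form of x here is an assumption following from the large-$J_M$ limit of the formula).

```python
import math

def expected_species(n, J, theta):
    """S_M(n): expected number of species with exactly n individuals in a
    metacommunity sample of J individuals (Ewens sampling formula)."""
    lg = math.lgamma
    return (theta / n) * math.exp(
        lg(J + 1) + lg(theta + J - n) - lg(J + 1 - n) - lg(theta + J)
    )

J_M, theta = 10_000, 50.0
S_M = [expected_species(n, J_M, theta) for n in range(1, J_M + 1)]

# Sanity check: summing n * S_M(n) over all abundance classes gives J_M.
print(sum(n * s for n, s in enumerate(S_M, start=1)))   # ~10000.0

# For n << J_M the Ewens expectation approaches the Fisher log-series.
x = J_M / (J_M + theta)
for n in (1, 5, 20):
    print(n, S_M[n - 1], theta * x ** n / n)
```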
The Unified Theory unifies biodiversity, as measured by species-abundance curves, with biogeography, as measured by species-area curves. Species-area relationships show the rate at which species diversity increases with area. The topic is of great interest to conservation biologists in the design of reserves, as it is often desired to harbour as many species as possible. The most commonly encountered relationship is the power law given by

$S = cA^z,$

where S is the number of species found, A is the area sampled, and c and z are constants. This relationship, with different constants, has been found to fit a wide range of empirical data.

From the perspective of the Unified Theory, it is convenient to consider S as a function of total community size J. Then $S = kJ^z$ for some constant k, and if this relationship were exactly true, the species-area line would be straight on log scales. It is typically found that the curve is not straight, but the slope changes from being steep at small areas, shallower at intermediate areas, and steep at the largest areas.

The formula for species composition may be used to calculate the expected number of species present in a community under the assumptions of the Unified Theory. In symbols,

$E\{S \mid \theta, J\} = \frac{\theta}{\theta} + \frac{\theta}{\theta+1} + \frac{\theta}{\theta+2} + \cdots + \frac{\theta}{\theta+J-1},$

where θ is the fundamental biodiversity number. This formula specifies the expected number of species sampled in a community of size J. The last term, $\theta/(\theta+J-1)$, is the expected number of new species encountered when adding one new individual to the community. This is an increasing function of θ and a decreasing function of J, as expected. By making the substitution $J = \rho A$ (see the section on saturation above), the expected number of species becomes

$E\{S \mid \theta, \rho A\} = \frac{\theta}{\theta} + \frac{\theta}{\theta+1} + \cdots + \frac{\theta}{\theta+\rho A-1}.$

The formula above may be approximated to an integral, giving

$S(\theta, \rho A) \approx 1 + \theta \ln\!\left(1 + \frac{\rho A - 1}{\theta}\right).$

This formulation is predicated on a random placement of individuals.

Consider the following (synthetic) dataset of 27 individuals:

a, a, a, a, a, a, a, a, a, a, b, b, b, b, c, c, c, c, d, d, d, d, e, f, g, h, i

There are thus 27 individuals of 9 species ("a" to "i") in the sample. Tabulating this would give:

species:    a   b   c   d   e   f   g   h   i
abundance: 10   4   4   4   1   1   1   1   1

indicating that species "a" is the most abundant with 10 individuals and species "e" to "i" are singletons. Tabulating the table gives:

abundance:          1   2   3   4   5   6   7   8   9  10
number of species:  5   0   0   3   0   0   0   0   0   1

On the second row, the 5 in the first column means that five species, species "e" through "i", have abundance one. The following two zeros in columns 2 and 3 mean that zero species have abundance 2 or 3. The 3 in column 4 means that three species, species "b", "c", and "d", have abundance four. The final 1 in column 10 means that one species, species "a", has abundance 10. This type of dataset is typical in biodiversity studies. Observe how more than half the biodiversity (as measured by species count) is due to singletons.

For real datasets, the species abundances are binned into logarithmic categories, usually using base 2, which gives bins of abundance 0–1, abundance 1–2, abundance 2–4, abundance 4–8, etc. Such abundance classes are called octaves; early developers of this concept included F. W. Preston, and histograms showing the number of species as a function of abundance octave are known as Preston diagrams.

These bins are not mutually exclusive: a species with abundance 4, for example, could be considered as lying in the 2–4 abundance class or the 4–8 abundance class. Species with an abundance of an exact power of 2 (i.e. 2, 4, 8, 16, etc.) are conventionally considered as having 50% membership in the lower abundance class and 50% membership in the upper class. Such species are thus considered to be evenly split between the two adjacent classes (apart from singletons, which are classified into the rarest category). Thus in the example above, the Preston abundances would be

octave:             1   1–2   2–4   4–8   8–16
number of species:  5   0     1.5   1.5   1

The three species of abundance four thus appear as 1.5 in abundance class 2–4 and 1.5 in 4–8.

The above method of analysis cannot account for species that are unsampled: that is, species sufficiently rare to have been recorded zero times. Preston diagrams are thus truncated at zero abundance. Preston called this the veil line and noted that the cutoff point would move as more individuals are sampled.
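The octave-binning convention just described (singletons in the rarest category, exact powers of two split 50/50 between adjacent octaves) can be reproduced with a few lines of Python. This is an illustrative sketch, not code from the sources; run on the 27-individual example it returns the Preston abundances 5, 0, 1.5, 1.5, 1 given above.

```python
import math

abundances = [10, 4, 4, 4, 1, 1, 1, 1, 1]   # species a..i from the example

n_bins = 1 + math.ceil(math.log2(max(abundances)))
bins = [0.0] * n_bins   # bins[0]: singletons; bins[k]: octave (2^(k-1), 2^k]

for n in abundances:
    if n == 1:
        bins[0] += 1.0                  # singletons go to the rarest category
        continue
    k = math.log2(n)
    if k.is_integer():                  # exact powers of two split 50/50
        bins[int(k)] += 0.5
        bins[int(k) + 1] += 0.5
    else:
        bins[math.ceil(k)] += 1.0

print(bins)   # [5.0, 0.0, 1.5, 1.5, 1.0]
```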
An easily accessible index of the underlying evolution is the so-called species turnover distribution (STD), defined as the probability P(r,t) that the population of any species has varied by a fraction r after a given time t. A neutral model that can analytically predict both the relative species abundance (RSA) at steady state and the STD at time t has been presented in Azaele et al. (2006). [ 12 ] Within this framework the population of any species is represented by a continuous (random) variable x, whose evolution is governed by a Langevin equation in which b is the immigration rate from a large regional community, − x / τ {\displaystyle -x/\tau } represents competition for finite resources, and D is related to demographic stochasticity; ξ ( t ) {\displaystyle \xi (t)} is a Gaussian white noise. The model can also be derived as a continuous approximation of a master equation, where birth and death rates are independent of species, and predicts that at steady state the RSA is simply a gamma distribution. From the exact time-dependent solution of the previous equation, one can calculate the STD at time t under stationary conditions exactly. The resulting formula provides good fits of data collected in the Barro Colorado tropical forest from 1990 to 2000. From the best fit one can estimate τ {\displaystyle \tau } ~ 3500 years, with a broad uncertainty due to the relatively short time interval of the sample. This parameter can be interpreted as the relaxation time of the system, i.e. the time the system needs to recover from a perturbation of the species distribution. In the same framework, the estimated mean species lifetime is very close to the fitted temporal scale τ {\displaystyle \tau } . This suggests that the neutral assumption could correspond to a scenario in which species originate and become extinct on the same timescales as the fluctuations of the whole ecosystem. The theory has provoked much controversy as it "abandons" the role of ecology when modelling ecosystems. [ 13 ] The theory has been criticized as it requires an equilibrium, yet climatic and geographical conditions are thought to change too frequently for this to be attained. [ 13 ] Tests on bird and tree abundance data demonstrate that the theory is usually a poorer match to the data than alternative null hypotheses that use fewer parameters (a log-normal model with two tunable parameters, compared to the neutral theory's three [ 6 ] ) and are thus more parsimonious. [ 2 ] The theory also fails to describe coral reef communities, studied by Dornelas et al., [ 14 ] and is a poor fit to data in intertidal communities. [ 15 ] It also fails to explain why families of tropical trees have statistically highly correlated numbers of species in phylogenetically unrelated and geographically distant forest plots in Central and South America, Africa, and South East Asia. [ 16 ] While the theory has been heralded as a valuable tool for palaeontologists, [ 7 ] little work has so far been done to test the theory against the fossil record. [ 17 ]
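As a rough numerical illustration of this framework, the sketch below integrates a Langevin equation of the kind described with the Euler–Maruyama method. The drift b − x/τ follows the text; the multiplicative noise term √(Dx)ξ(t) is an assumed demographic-noise form (chosen because it yields a gamma-distributed steady state), not a quotation of the paper's equation, and all parameter values are arbitrary.

```python
import math, random

def simulate(b=1.0, tau=10.0, D=0.5, dt=0.02, steps=1_000_000, x0=10.0):
    """Euler-Maruyama integration of dx = (b - x/tau) dt + sqrt(D*x) dW (assumed form)."""
    x, samples = x0, []
    for i in range(steps):
        noise = math.sqrt(max(D * x * dt, 0.0)) * random.gauss(0.0, 1.0)
        x = max(x + (b - x / tau) * dt + noise, 0.0)   # keep the population nonnegative
        if i % 1000 == 0:
            samples.append(x)
    return samples

random.seed(0)
s = simulate()
mean = sum(s) / len(s)
var = sum((v - mean) ** 2 for v in s) / len(s)
# For the assumed gamma steady state: mean = b*tau = 10, variance = mean * D*tau/2 = 25.
print(f"sample mean {mean:.2f}, sample variance {var:.1f}")
```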
https://en.wikipedia.org/wiki/Unified_neutral_theory_of_biodiversity
The Unified Numbering System for Metals and Alloys ( UNS ) is an alloy designation system widely accepted in North America . Each UNS number relates to a specific metal or alloy and defines its specific chemical composition , or in some cases a specific mechanical or physical property . A UNS number alone does not constitute a full material specification because it establishes no requirements for material properties, heat treatment, form, or quality. During the early 20th century many different metal alloys were developed in isolation within certain industries to meet the needs of that industry. This allowed a wide variety of competing standards, compositions and designations to flourish. By the 1960s there were a number of differing numbering or designation schemes for various alloys. This meant that the same number might be used for different alloys, different numbers might be used for the same alloy, or different trade names might indicate similar or wildly different alloys. Additionally, the increasing number of new alloys meant that the problem would only get worse. [ 1 ] In January 1971, an 18-month study recommended that a unified system would be possible and helpful. An advisory board was established in April 1972 to establish the Unified Numbering System (UNS). [ 2 ] The UNS is managed jointly by ASTM International and SAE International . The resulting document SAE HS-1086 provides a cross-reference between the various designation systems and the chemical composition. A UNS number only defines a specific chemical composition; it does not provide a full material specification. Requirements such as material properties ( yield strength , ultimate strength , hardness , etc.), heat treatment, form ( rolled , cast , forged , flanges , tubes, bars, etc.), purpose (high temperature, boilers and pressure vessels, etc.) and testing methods are all specified in the material or standard specification, which is created by various trade and professional organizations. Many material or standard specifications include a number of different UNS numbers that may be used within that specification. For example: UNS S30400 (SAE 304, Cr/Ni 18/10, Euronorm 1.4301 stainless steel) could be used to make stainless steel bars ( ASTM A276 ) or stainless steel plates for pressure vessels ( ASTM A240 ) or pipes ( ASTM A312 ). Conversely, A312 pipes could be made out of about 70 different UNS alloy steels. A UNS number consists of a prefix letter and five digits designating a material composition. For example, a prefix of S indicates stainless steel alloys, C indicates copper , brass , or bronze alloys, T indicates tool steels , and so on. The first 3 digits often match older 3-digit numbering systems, while the last 2 digits indicate more modern variations. For example, Stainless Steel Type 310 in the original 3-digit system became S31000 in the UNS system. The more modern low-carbon variation, Type 310S, became S31008 in the UNS system. Often, the suffix digit is chosen to represent a material property specification. For example, "08" was assigned to UNS S31008 because the maximum allowed carbon content is 0.08%. Some common materials and their translations to other standards are tabulated in SAE HS-1086 and related cross-reference documents. [ 5 ] A UNS-derived system known as ISC (in Chinese 统一数字代号 , literally "unified numeric designator") is used in China in parallel to the composition-based nomenclature. [ 6 ] Individual grades may receive the same number (e.g. S31603), a slightly different number (e.g. S30400/S30408, S17400/S17440), or a totally different one (e.g.
S20200/S35450, S41026 [ 7 ] /S45710). [ 5 ]
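The prefix-plus-five-digits format lends itself to simple validation. The following Python sketch checks the format and reports the alloy family for the three example prefixes named above; the category table is deliberately incomplete, since the full prefix list lives in SAE HS-1086, and anything not listed is reported generically.

```python
import re

CATEGORIES = {          # examples mentioned above; not the complete HS-1086 table
    "S": "stainless steel alloys",
    "C": "copper, brass, or bronze alloys",
    "T": "tool steels",
}

def parse_uns(designation: str) -> tuple[str, str]:
    """Validate 'letter + five digits' and return (normalized number, family)."""
    m = re.fullmatch(r"([A-Z])(\d{5})", designation.strip().upper())
    if not m:
        raise ValueError(f"not a valid UNS number: {designation!r}")
    prefix, digits = m.groups()
    return prefix + digits, CATEGORIES.get(prefix, "other alloy family")

for d in ("S30400", "S31008", "C26000", "T30102"):
    print(parse_uns(d))
```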
https://en.wikipedia.org/wiki/Unified_numbering_system
The unified strength theory (UST), [ 1 ] [ 2 ] [ 3 ] [ 4 ] proposed by Yu Mao-Hong, is a series of yield criteria (see yield surface ) and failure criteria (see material failure theory ). It is a generalized classical strength theory: it describes the yielding or failure of a material as beginning when the combination of principal stresses reaches a critical value. [ 5 ] [ 6 ] [ 7 ] Mathematically, the formulation of UST is expressed in the principal stress state (ordered so that σ 1 ≥ σ 2 ≥ σ 3 {\displaystyle {\sigma _{1}}\geq {\sigma _{2}}\geq {\sigma _{3}}} ) as F = σ 1 − α ( b σ 2 + σ 3 ) / ( 1 + b ) = σ t {\displaystyle F={\sigma _{1}}-\alpha (b{\sigma _{2}}+{\sigma _{3}})/(1+b)={\sigma _{t}}} when σ 2 ≤ ( σ 1 + α σ 3 ) / ( 1 + α ) {\displaystyle {\sigma _{2}}\leq ({\sigma _{1}}+\alpha {\sigma _{3}})/(1+\alpha )} , and F ′ = ( σ 1 + b σ 2 ) / ( 1 + b ) − α σ 3 = σ t {\displaystyle F'=({\sigma _{1}}+b{\sigma _{2}})/(1+b)-\alpha {\sigma _{3}}={\sigma _{t}}} otherwise, where σ 1 , σ 2 , σ 3 {\displaystyle {\sigma _{1}},{\sigma _{2}},{\sigma _{3}}} are the three principal stresses, σ t {\displaystyle {\sigma _{t}}} is the uniaxial tensile strength and α {\displaystyle \alpha } is the tension-compression strength ratio ( α = σ t / σ c {\displaystyle \alpha ={\sigma _{t}}/{\sigma _{c}}} ). The unified yield criterion (UYC) is the simplification of UST when α = 1 {\displaystyle \alpha =1} . The limit surfaces of the unified strength theory in principal stress space are usually a semi-infinite dodecahedral cone with unequal sides. The shape and size of the limiting dodecahedral cone depend on the parameters b and α {\displaystyle \alpha } . Due to the relation τ 13 = τ 12 + τ 23 {\displaystyle {\tau _{13}}={\tau _{12}}+{\tau _{23}}} , the principal stress state ( σ 1 , σ 2 , σ 3 {\displaystyle {\sigma _{1}},{\sigma _{2}},{\sigma _{3}}} ) may be converted to the twin- shear stress state ( τ 13 , τ 12 ; σ 13 , σ 12 {\displaystyle {\tau _{13}},{\tau _{12}};{\sigma _{13}},{\sigma _{12}}} ) or ( τ 13 , τ 23 ; σ 13 , σ 23 {\displaystyle {\tau _{13}},{\tau _{23}};{\sigma _{13}},{\sigma _{23}}} ). Twin-shear element models proposed by Mao-Hong Yu are used for representing the twin-shear stress state. [ 1 ] Considering all the stress components of the twin-shear models and their different effects yields the unified strength theory in the form F = τ 13 + b τ 12 + β ( σ 13 + b σ 12 ) = C {\displaystyle F={\tau _{13}}+b{\tau _{12}}+\beta ({\sigma _{13}}+b{\sigma _{12}})=C} and F ′ = τ 13 + b τ 23 + β ( σ 13 + b σ 23 ) = C {\displaystyle F'={\tau _{13}}+b{\tau _{23}}+\beta ({\sigma _{13}}+b{\sigma _{23}})=C} . The relations among the stress components and the principal stresses read τ 13 = ( σ 1 − σ 3 ) / 2 {\displaystyle {\tau _{13}}=({\sigma _{1}}-{\sigma _{3}})/2} , σ 13 = ( σ 1 + σ 3 ) / 2 {\displaystyle {\sigma _{13}}=({\sigma _{1}}+{\sigma _{3}})/2} ; τ 12 = ( σ 1 − σ 2 ) / 2 {\displaystyle {\tau _{12}}=({\sigma _{1}}-{\sigma _{2}})/2} , σ 12 = ( σ 1 + σ 2 ) / 2 {\displaystyle {\sigma _{12}}=({\sigma _{1}}+{\sigma _{2}})/2} ; τ 23 = ( σ 2 − σ 3 ) / 2 {\displaystyle {\tau _{23}}=({\sigma _{2}}-{\sigma _{3}})/2} , σ 23 = ( σ 2 + σ 3 ) / 2 {\displaystyle {\sigma _{23}}=({\sigma _{2}}+{\sigma _{3}})/2} . The parameters β {\displaystyle \beta } and C are obtained from the uniaxial failure states σ 1 = σ t {\displaystyle {\sigma _{1}}={\sigma _{t}}} , σ 2 = σ 3 = 0 {\displaystyle {\sigma _{2}}={\sigma _{3}}=0} and σ 1 = σ 2 = 0 {\displaystyle {\sigma _{1}}={\sigma _{2}}=0} , σ 3 = − σ c {\displaystyle {\sigma _{3}}=-{\sigma _{c}}} . Substituting these states into the above equations gives β = ( 1 − α ) / ( 1 + α ) {\displaystyle \beta =(1-\alpha )/(1+\alpha )} and C = ( 1 + b ) σ t / ( 1 + α ) {\displaystyle C=(1+b){\sigma _{t}}/(1+\alpha )} . The development of the unified strength theory can be divided into three stages as follows. 1. Twin-shear yield criterion (UST with α = 1 {\displaystyle \alpha =1} and b = 1 {\displaystyle b=1} ). [ 8 ] [ 9 ] 2. Twin-shear strength theory (UST with b = 1 {\displaystyle b=1} ). [ 10 ] 3. Unified strength theory. [ 1 ] Unified strength theory has been used in generalized plasticity, [ 11 ] structural plasticity, [ 12 ] computational plasticity [ 13 ] and many other fields. [ 14 ] [ 15 ]
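A small Python sketch of the two-branch criterion as written out above (a reconstruction of the commonly cited form, not code from the cited references); failure is predicted when the effective stress reaches σt, and the sanity checks confirm that uniaxial tension and compression reproduce σt and σc = σt/α:

```python
def ust_effective_stress(s1, s2, s3, alpha, b):
    """UST effective stress for ordered principal stresses s1 >= s2 >= s3."""
    assert s1 >= s2 >= s3, "principal stresses must be ordered"
    if s2 <= (s1 + alpha * s3) / (1 + alpha):      # branch F
        return s1 - alpha * (b * s2 + s3) / (1 + b)
    return (s1 + b * s2) / (1 + b) - alpha * s3    # branch F'

sigma_t, alpha, b = 200.0, 0.8, 0.5                # arbitrary illustrative values
print(ust_effective_stress(sigma_t, 0.0, 0.0, alpha, b))            # 200.0 (tension)
print(ust_effective_stress(0.0, 0.0, -sigma_t / alpha, alpha, b))   # 200.0 (compression)
```

Setting alpha = 1 in this function gives the unified yield criterion, and b = 1 additionally recovers the twin-shear cases listed above.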
https://en.wikipedia.org/wiki/Unified_strength_theory
In IETF specifications, a Uniform Resource Characteristic ( URC ) is a string of characters representing the metadata of a Uniform Resource Identifier (URI) , a string identifying a Web resource . URC metadata was envisioned to include sufficient information to support persistent identifiers , such as mapping a Uniform Resource Name (URN) to a current Uniform Resource Locator (URL) . [ 1 ] URCs were proposed as a specification in the mid-1990s, but were never adopted. The use of a URC would allow the location of a Web resource to be obtained from its standard name, via the use of a resolving service . [ 1 ] It was also to be possible to obtain a URC from a URN by the use of a resolving service. [ 2 ] The design goals of URCs were that they should be simple to use, easy to extend, and compatible with a wide range of technological systems. [ 3 ] The URC syntax was intended to be easily understood by both humans and software. [ 4 ] The term "URC" was first coined as Uniform Resource Citation in 1992 [ 5 ] by John Kunze within the IETF URI working group as a small package of metadata elements (which became the ERC [ 6 ] ) to accompany a hypertext link and meant to help users decide if the link might be interesting. [ 7 ] The working group later changed the acronym expansion to Uniform Resource Characteristic, intended to provide a standardized representation of document properties, such as owner, encoding, access restrictions or cost. [ 8 ] The group discussed URCs around 1994/1995, but it never produced a final standard and URCs were never widely adopted in practice. Even so, the concepts on which URCs were based influenced subsequent technologies such as the Dublin Core and Resource Description Framework .
https://en.wikipedia.org/wiki/Uniform_Resource_Characteristic
In mathematics , uniform absolute-convergence is a type of convergence for series of functions . Like absolute-convergence , it has the useful property that it is preserved when the order of summation is changed. A convergent series of numbers can often be reordered in such a way that the new series diverges. This is not possible for series of nonnegative numbers, however, so the notion of absolute-convergence precludes this phenomenon. When dealing with uniformly convergent series of functions, the same phenomenon occurs: the series can potentially be reordered into a non-uniformly convergent series, or a series which does not even converge pointwise. This is impossible for series of nonnegative functions, so the notion of uniform absolute-convergence can be used to rule out these possibilities. Given a set X and functions f n : X → C {\displaystyle f_{n}:X\to \mathbb {C} } (or to any normed vector space ), the series ∑ n = 1 ∞ f n ( x ) {\displaystyle \sum _{n=1}^{\infty }f_{n}(x)} is called uniformly absolutely-convergent if the series of nonnegative functions ∑ n = 1 ∞ | f n ( x ) | {\displaystyle \sum _{n=1}^{\infty }|f_{n}(x)|} is uniformly convergent. [ 1 ] A series can be uniformly convergent and absolutely convergent without being uniformly absolutely-convergent. For example, if f n ( x ) = x n / n {\displaystyle f_{n}(x)=x^{n}/n} on the open interval (−1,0), then the series Σ f n ( x ) converges uniformly by comparison of the partial sums to those of Σ ( − 1 ) n / n {\displaystyle \Sigma (-1)^{n}/n} , and the series Σ| f n ( x )| converges absolutely at each point by the geometric series test, but Σ| f n ( x )| does not converge uniformly. Intuitively, this is because the absolute-convergence gets slower and slower as x approaches −1, where convergence holds but absolute convergence fails. If a series of functions is uniformly absolutely-convergent on some neighborhood of each point of a topological space, it is locally uniformly absolutely-convergent . If a series is uniformly absolutely-convergent on all compact subsets of a topological space, it is compactly (uniformly) absolutely-convergent . If the topological space is locally compact , these notions are equivalent.
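A numerical illustration (not a proof) of this example: probing the tail of Σ |f_n(x)| at points sliding toward −1 shows that the supremum of the tails does not go to 0 with N, so the absolute series cannot converge uniformly on (−1, 0).

```python
def tail(x, start, terms=200_000):
    """Truncated tail sum of |x|**n / n for n = start .. start+terms-1."""
    return sum(abs(x) ** n / n for n in range(start, start + terms))

for N in (10, 100, 1000):
    x = -(1 - 1 / N)                    # probe point sliding toward -1 as N grows
    print(f"N = {N:5d}   tail beyond N at x = {x:.4f}: {tail(x, N + 1):.4f}")
# Each tail stays roughly constant (near 0.2) instead of tending to 0, so
# sup_x of the absolute-series tail does not vanish. The signed series
# sum x**n / n, by contrast, has tails bounded by 1/(N+1) uniformly on (-1, 0)
# (alternating series estimate), which is the uniform convergence claimed above.
```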
https://en.wikipedia.org/wiki/Uniform_absolute-convergence
In mathematics , a uniformly bounded family of functions is a family of bounded functions that can all be bounded by the same constant. This constant is larger than or equal to the absolute value of any value of any of the functions in the family. Let F = { f i : X → K , i ∈ I } {\displaystyle {\mathcal {F}}=\{f_{i}:X\to \mathbb {K} ,\ i\in I\}} be a family of functions indexed by I {\displaystyle I} , where X {\displaystyle X} is an arbitrary set and K {\displaystyle \mathbb {K} } is either the set of real numbers R {\displaystyle \mathbb {R} } or the set of complex numbers C {\displaystyle \mathbb {C} } . We call F {\displaystyle {\mathcal {F}}} uniformly bounded if there exists a real number M > 0 {\displaystyle M>0} such that | f i ( x ) | ≤ M for all i ∈ I and all x ∈ X . {\displaystyle |f_{i}(x)|\leq M\quad {\text{for all }}i\in I{\text{ and all }}x\in X.} Another way of stating this would be the following: sup i ∈ I sup x ∈ X | f i ( x ) | ≤ M . {\displaystyle \sup _{i\in I}\sup _{x\in X}|f_{i}(x)|\leq M.} In general, let Y {\displaystyle Y} be a metric space with metric d {\displaystyle d} ; then the set F = { f i : X → Y , i ∈ I } {\displaystyle {\mathcal {F}}=\{f_{i}:X\to Y,\ i\in I\}} is called uniformly bounded if there exists an element a {\displaystyle a} from Y {\displaystyle Y} and a real number M {\displaystyle M} such that d ( f i ( x ) , a ) ≤ M for all i ∈ I and all x ∈ X . {\displaystyle d(f_{i}(x),a)\leq M\quad {\text{for all }}i\in I{\text{ and all }}x\in X.}
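The definition can be probed numerically on a sample grid. Finite sampling can only suggest (never prove) uniform boundedness, so the two families below are chosen so the answer is known in advance: sin(nx) is uniformly bounded by M = 1, while nx admits no single bound over the family.

```python
import math

def sampled_sup(family, xs):
    """Estimate sup over i and x of |f_i(x)| on a finite sample grid."""
    return max(abs(f(x)) for f in family for x in xs)

xs = [k / 100 for k in range(-500, 501)]                   # grid on [-5, 5]
bounded   = [lambda x, n=n: math.sin(n * x) for n in range(1, 51)]
unbounded = [lambda x, n=n: n * x            for n in range(1, 51)]
print(sampled_sup(bounded, xs))    # stays <= 1: uniformly bounded with M = 1
print(sampled_sup(unbounded, xs))  # grows with the family index: no single M works
```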
https://en.wikipedia.org/wiki/Uniform_boundedness
In mathematics , the uniform boundedness principle or Banach–Steinhaus theorem is one of the fundamental results in functional analysis . Together with the Hahn–Banach theorem and the open mapping theorem , it is considered one of the cornerstones of the field. In its basic form, it asserts that for a family of continuous linear operators (and thus bounded operators ) whose domain is a Banach space , pointwise boundedness is equivalent to uniform boundedness in operator norm . The theorem was first published in 1927 by Stefan Banach and Hugo Steinhaus , but it was also proven independently by Hans Hahn . Uniform Boundedness Principle — Let X {\displaystyle X} be a Banach space , Y {\displaystyle Y} a normed vector space and B ( X , Y ) {\displaystyle B(X,Y)} the space of all continuous linear operators from X {\displaystyle X} into Y {\displaystyle Y} . Suppose that F {\displaystyle F} is a collection of continuous linear operators from X {\displaystyle X} to Y . {\displaystyle Y.} If, for every x ∈ X {\displaystyle x\in X} , sup T ∈ F ‖ T ( x ) ‖ Y < ∞ , {\displaystyle \sup _{T\in F}\|T(x)\|_{Y}<\infty ,} then sup T ∈ F ‖ T ‖ B ( X , Y ) < ∞ . {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}<\infty .} The first inequality (that is, sup T ∈ F ‖ T ( x ) ‖ < ∞ {\textstyle \sup _{T\in F}\|T(x)\|<\infty } for all x {\displaystyle x} ) states that the operators in F {\displaystyle F} are pointwise bounded while the second states that they are uniformly bounded. The second supremum always equals sup T ∈ F ‖ T ‖ B ( X , Y ) = sup ‖ x ‖ ≤ 1 T ∈ F ‖ T ( x ) ‖ Y = sup T ∈ F sup ‖ x ‖ ≤ 1 ‖ T ( x ) ‖ Y {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}=\sup _{\stackrel {T\in F}{\|x\|\leq 1}}\|T(x)\|_{Y}=\sup _{T\in F}\sup _{\|x\|\leq 1}\|T(x)\|_{Y}} and if X {\displaystyle X} is not the trivial vector space (or if the supremum is taken over [ 0 , ∞ ] {\displaystyle [0,\infty ]} rather than [ − ∞ , ∞ ] {\displaystyle [-\infty ,\infty ]} ) then the closed unit ball can be replaced with the unit sphere: sup T ∈ F ‖ T ‖ B ( X , Y ) = sup ‖ x ‖ = 1 T ∈ F ‖ T ( x ) ‖ Y . {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}=\sup _{\stackrel {T\in F,}{\|x\|=1}}\|T(x)\|_{Y}.} The completeness of the Banach space X {\displaystyle X} enables the following short proof, using the Baire category theorem . Suppose X {\displaystyle X} is a Banach space and that for every x ∈ X , {\displaystyle x\in X,} sup T ∈ F ‖ T ( x ) ‖ Y < ∞ . {\displaystyle \sup _{T\in F}\|T(x)\|_{Y}<\infty .} For every integer n ∈ N , {\displaystyle n\in \mathbb {N} ,} let X n = { x ∈ X : sup T ∈ F ‖ T ( x ) ‖ Y ≤ n } . {\displaystyle X_{n}=\left\{x\in X\ :\ \sup _{T\in F}\|T(x)\|_{Y}\leq n\right\}.} Each set X n {\displaystyle X_{n}} is a closed set and by the assumption, ⋃ n ∈ N X n = X ≠ ∅ . {\displaystyle \bigcup _{n\in \mathbb {N} }X_{n}=X\neq \varnothing .} By the Baire category theorem for the non-empty complete metric space X , {\displaystyle X,} there exists some m ∈ N {\displaystyle m\in \mathbb {N} } such that X m {\displaystyle X_{m}} has non-empty interior ; that is, there exist x 0 ∈ X m {\displaystyle x_{0}\in X_{m}} and ε > 0 {\displaystyle \varepsilon >0} such that B ε ( x 0 ) ¯ := { x ∈ X : ‖ x − x 0 ‖ ≤ ε } ⊆ X m . {\displaystyle {\overline {B_{\varepsilon }(x_{0})}}~:=~\left\{x\in X\,:\,\|x-x_{0}\|\leq \varepsilon \right\}~\subseteq ~X_{m}.} Let u ∈ X {\displaystyle u\in X} with ‖ u ‖ ≤ 1 {\displaystyle \|u\|\leq 1} and T ∈ F .
{\displaystyle T\in F.} Then: ‖ T ( u ) ‖ Y = ε − 1 ‖ T ( x 0 + ε u ) − T ( x 0 ) ‖ Y [ by linearity of T ] ≤ ε − 1 ( ‖ T ( x 0 + ε u ) ‖ Y + ‖ T ( x 0 ) ‖ Y ) ≤ ε − 1 ( m + m ) . [ since x 0 + ε u , x 0 ∈ X m ] {\displaystyle {\begin{aligned}\|T(u)\|_{Y}&=\varepsilon ^{-1}\left\|T\left(x_{0}+\varepsilon u\right)-T\left(x_{0}\right)\right\|_{Y}&[{\text{by linearity of }}T]\\&\leq \varepsilon ^{-1}\left(\left\|T(x_{0}+\varepsilon u)\right\|_{Y}+\left\|T(x_{0})\right\|_{Y}\right)\\&\leq \varepsilon ^{-1}(m+m).&[{\text{since }}\ x_{0}+\varepsilon u,\ x_{0}\in X_{m}]\\\end{aligned}}} Taking the supremum over u {\displaystyle u} in the unit ball of X {\displaystyle X} and over T ∈ F {\displaystyle T\in F} it follows that sup T ∈ F ‖ T ‖ B ( X , Y ) ≤ 2 ε − 1 m < ∞ . {\displaystyle \sup _{T\in F}\|T\|_{B(X,Y)}~\leq ~2\varepsilon ^{-1}m~<~\infty .} There are also simple proofs not using the Baire theorem ( Sokal 2011 ). Corollary — If a sequence of bounded operators ( T n ) {\displaystyle \left(T_{n}\right)} converges pointwise, that is, the limit of ( T n ( x ) ) {\displaystyle \left(T_{n}(x)\right)} exists for all x ∈ X , {\displaystyle x\in X,} then these pointwise limits define a bounded linear operator T . {\displaystyle T.} The above corollary does not claim that T n {\displaystyle T_{n}} converges to T {\displaystyle T} in operator norm, that is, uniformly on bounded sets. However, since { T n } {\displaystyle \left\{T_{n}\right\}} is bounded in operator norm, and the limit operator T {\displaystyle T} is continuous, a standard " 3 ε {\displaystyle 3\varepsilon } " estimate shows that T n {\displaystyle T_{n}} converges to T {\displaystyle T} uniformly on compact sets. The proof is essentially the same as the proof that a pointwise convergent sequence of equicontinuous functions on a compact set converges to a continuous function. By the uniform boundedness principle, let M = max { sup n ‖ T n ‖ , ‖ T ‖ } {\displaystyle M=\max\{\sup _{n}\|T_{n}\|,\|T\|\}} be a uniform upper bound on the operator norms. Fix any compact K ⊂ X {\displaystyle K\subset X} . Then for any ϵ > 0 {\displaystyle \epsilon >0} , finitely cover (using compactness) K {\displaystyle K} by a finite set of open balls { B ( x i , r ) } i = 1 , . . . , N {\displaystyle \{B(x_{i},r)\}_{i=1,...,N}} of radius r = ϵ M {\displaystyle r={\frac {\epsilon }{M}}} . Since T n → T {\displaystyle T_{n}\to T} pointwise on each of x 1 , . . . , x N {\displaystyle x_{1},...,x_{N}} , for all large n {\displaystyle n} , ‖ T n ( x i ) − T ( x i ) ‖ ≤ ϵ {\displaystyle \|T_{n}(x_{i})-T(x_{i})\|\leq \epsilon } for all i = 1 , . . . , N {\displaystyle i=1,...,N} . Then by the triangle inequality, we find that for all large n {\displaystyle n} , ∀ x ∈ K , ‖ T n ( x ) − T ( x ) ‖ ≤ 3 ϵ {\displaystyle \forall x\in K,\|T_{n}(x)-T(x)\|\leq 3\epsilon } . Corollary — Any weakly bounded subset S ⊆ Y {\displaystyle S\subseteq Y} in a normed space Y {\displaystyle Y} is bounded. Indeed, the elements of S {\displaystyle S} define a pointwise bounded family of continuous linear forms on the Banach space X := Y ′ , {\displaystyle X:=Y',} which is the continuous dual space of Y . {\displaystyle Y.} By the uniform boundedness principle, the norms of elements of S , {\displaystyle S,} as functionals on X , {\displaystyle X,} that is, norms in the second dual Y ″ , {\displaystyle Y'',} are bounded. But for every s ∈ S , {\displaystyle s\in S,} the norm in the second dual coincides with the norm in Y , {\displaystyle Y,} by a consequence of the Hahn–Banach theorem .
Let L ( X , Y ) {\displaystyle L(X,Y)} denote the continuous operators from X {\displaystyle X} to Y , {\displaystyle Y,} endowed with the operator norm . If the collection F {\displaystyle F} is unbounded in L ( X , Y ) , {\displaystyle L(X,Y),} then the uniform boundedness principle implies: R = { x ∈ X : sup T ∈ F ‖ T x ‖ Y = ∞ } ≠ ∅ . {\displaystyle R=\left\{x\in X\ :\ \sup \nolimits _{T\in F}\|Tx\|_{Y}=\infty \right\}\neq \varnothing .} In fact, R {\displaystyle R} is dense in X . {\displaystyle X.} The complement of R {\displaystyle R} in X {\displaystyle X} is the countable union of closed sets ⋃ X n . {\textstyle \bigcup X_{n}.} By the argument used in proving the theorem, each X n {\displaystyle X_{n}} is nowhere dense , i.e. the subset ⋃ X n {\textstyle \bigcup X_{n}} is of first category . Therefore R {\displaystyle R} is the complement of a subset of first category in a Baire space. By definition of a Baire space, such sets (called comeagre or residual sets ) are dense. Such reasoning leads to the principle of condensation of singularities , which can be formulated as follows: Theorem — Let X {\displaystyle X} be a Banach space, ( Y n ) {\displaystyle \left(Y_{n}\right)} a sequence of normed vector spaces, and for every n , {\displaystyle n,} let F n {\displaystyle F_{n}} be an unbounded family in L ( X , Y n ) . {\displaystyle L\left(X,Y_{n}\right).} Then the set R := { x ∈ X : for all n ∈ N , sup T ∈ F n ‖ T x ‖ Y n = ∞ } {\displaystyle R:=\left\{x\in X\ :\ {\text{ for all }}n\in \mathbb {N} ,\sup _{T\in F_{n}}\|Tx\|_{Y_{n}}=\infty \right\}} is a residual set, and thus dense in X . {\displaystyle X.} The complement of R {\displaystyle R} is the countable union ⋃ n , m { x ∈ X : sup T ∈ F n ‖ T x ‖ Y n ≤ m } {\displaystyle \bigcup _{n,m}\left\{x\in X\ :\ \sup _{T\in F_{n}}\|Tx\|_{Y_{n}}\leq m\right\}} of sets of first category, so R {\displaystyle R} is residual and thus dense. Let T {\displaystyle \mathbb {T} } be the circle , and let C ( T ) {\displaystyle C(\mathbb {T} )} be the Banach space of continuous functions on T , {\displaystyle \mathbb {T} ,} with the uniform norm . Using the uniform boundedness principle, one can show that there exists an element in C ( T ) {\displaystyle C(\mathbb {T} )} for which the Fourier series does not converge pointwise. For f ∈ C ( T ) , {\displaystyle f\in C(\mathbb {T} ),} its Fourier series is defined by ∑ k ∈ Z f ^ ( k ) e i k x = ∑ k ∈ Z 1 2 π ( ∫ 0 2 π f ( t ) e − i k t d t ) e i k x , {\displaystyle \sum _{k\in \mathbb {Z} }{\hat {f}}(k)e^{ikx}=\sum _{k\in \mathbb {Z} }{\frac {1}{2\pi }}\left(\int _{0}^{2\pi }f(t)e^{-ikt}dt\right)e^{ikx},} and the N -th symmetric partial sum is S N ( f ) ( x ) = ∑ k = − N N f ^ ( k ) e i k x = 1 2 π ∫ 0 2 π f ( t ) D N ( x − t ) d t , {\displaystyle S_{N}(f)(x)=\sum _{k=-N}^{N}{\hat {f}}(k)e^{ikx}={\frac {1}{2\pi }}\int _{0}^{2\pi }f(t)D_{N}(x-t)\,dt,} where D N {\displaystyle D_{N}} is the N {\displaystyle N} -th Dirichlet kernel . Fix x ∈ T {\displaystyle x\in \mathbb {T} } and consider the convergence of { S N ( f ) ( x ) } . {\displaystyle \left\{S_{N}(f)(x)\right\}.} The functional φ N , x : C ( T ) → C {\displaystyle \varphi _{N,x}:C(\mathbb {T} )\to \mathbb {C} } defined by φ N , x ( f ) = S N ( f ) ( x ) , f ∈ C ( T ) , {\displaystyle \varphi _{N,x}(f)=S_{N}(f)(x),\qquad f\in C(\mathbb {T} ),} is bounded.
The norm of φ N , x , {\displaystyle \varphi _{N,x},} in the dual of C ( T ) , {\displaystyle C(\mathbb {T} ),} is the norm of the signed measure ( 2 π ) − 1 D N ( x − t ) d t , {\displaystyle (2\pi )^{-1}D_{N}(x-t)dt,} namely ‖ φ N , x ‖ = 1 2 π ∫ 0 2 π | D N ( x − t ) | d t = 1 2 π ∫ 0 2 π | D N ( s ) | d s = ‖ D N ‖ L 1 ( T ) . {\displaystyle \left\|\varphi _{N,x}\right\|={\frac {1}{2\pi }}\int _{0}^{2\pi }\left|D_{N}(x-t)\right|\,dt={\frac {1}{2\pi }}\int _{0}^{2\pi }\left|D_{N}(s)\right|\,ds=\left\|D_{N}\right\|_{L^{1}(\mathbb {T} )}.} It can be verified that 1 2 π ∫ 0 2 π | D N ( t ) | d t ≥ 1 2 π ∫ 0 2 π | sin ⁡ ( ( N + 1 2 ) t ) | t / 2 d t → ∞ . {\displaystyle {\frac {1}{2\pi }}\int _{0}^{2\pi }|D_{N}(t)|\,dt\geq {\frac {1}{2\pi }}\int _{0}^{2\pi }{\frac {\left|\sin \left((N+{\tfrac {1}{2}})t\right)\right|}{t/2}}\,dt\to \infty .} So the collection ( φ N , x ) {\displaystyle \left(\varphi _{N,x}\right)} is unbounded in C ( T ) ∗ , {\displaystyle C(\mathbb {T} )^{\ast },} the dual of C ( T ) . {\displaystyle C(\mathbb {T} ).} Therefore, by the uniform boundedness principle, for any x ∈ T , {\displaystyle x\in \mathbb {T} ,} the set of continuous functions whose Fourier series diverges at x {\displaystyle x} is dense in C ( T ) . {\displaystyle C(\mathbb {T} ).} More can be concluded by applying the principle of condensation of singularities. Let ( x m ) {\displaystyle \left(x_{m}\right)} be a dense sequence in T . {\displaystyle \mathbb {T} .} Define φ N , x m {\displaystyle \varphi _{N,x_{m}}} in a similar way as above. The principle of condensation of singularities then says that the set of continuous functions whose Fourier series diverges at each x m {\displaystyle x_{m}} is dense in C ( T ) {\displaystyle C(\mathbb {T} )} (however, the Fourier series of a continuous function f {\displaystyle f} converges to f ( x ) {\displaystyle f(x)} for almost every x ∈ T , {\displaystyle x\in \mathbb {T} ,} by Carleson's theorem ). In a topological vector space (TVS) X , {\displaystyle X,} "bounded subset" refers specifically to the notion of a von Neumann bounded subset . If X {\displaystyle X} happens to also be a normed or seminormed space , say with (semi)norm ‖ ⋅ ‖ , {\displaystyle \|\cdot \|,} then a subset B {\displaystyle B} is (von Neumann) bounded if and only if it is norm bounded , which by definition means sup b ∈ B ‖ b ‖ < ∞ . {\textstyle \sup _{b\in B}\|b\|<\infty .} Attempts to find classes of locally convex topological vector spaces on which the uniform boundedness principle holds eventually led to barrelled spaces . That is, the least restrictive setting for the uniform boundedness principle is a barrelled space, where the following generalized version of the theorem holds ( Bourbaki 1987 , Theorem III.2.1): Theorem — Given a barrelled space X {\displaystyle X} and a locally convex space Y , {\displaystyle Y,} any family of pointwise bounded continuous linear mappings from X {\displaystyle X} to Y {\displaystyle Y} is equicontinuous (and even uniformly equicontinuous ). Alternatively, the statement also holds whenever X {\displaystyle X} is a Baire space and Y {\displaystyle Y} is a locally convex space.
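The logarithmic growth of the norms ‖D_N‖_{L¹} used in the Fourier series argument above can be checked numerically; these are the classical Lebesgue constants, with known asymptotic (4/π²) log N + O(1). A midpoint-rule estimate in Python:

```python
import math

def dirichlet_l1(N, samples=200_000):
    """Midpoint-rule estimate of (1/2pi) * integral over [0, 2pi] of |D_N(t)|."""
    acc = 0.0
    for k in range(samples):
        t = (k + 0.5) * 2 * math.pi / samples     # avoids the removable point t = 0
        acc += abs(math.sin((N + 0.5) * t) / math.sin(t / 2))
    return acc / samples

for N in (1, 4, 16, 64, 256):
    asymptotic = 4 / math.pi ** 2 * math.log(N) if N > 1 else 0.0
    print(N, round(dirichlet_l1(N), 3), "vs ~", round(asymptotic, 3), "+ O(1)")
# The measured norms keep growing, which is exactly the unboundedness of the
# family (phi_{N,x}) exploited by the uniform boundedness principle.
```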
[ 1 ] A family B {\displaystyle {\mathcal {B}}} of subsets of a topological vector space Y {\displaystyle Y} is said to be uniformly bounded in Y , {\displaystyle Y,} if there exists some bounded subset D {\displaystyle D} of Y {\displaystyle Y} such that B ⊆ D for every B ∈ B , {\displaystyle B\subseteq D\quad {\text{ for every }}B\in {\mathcal {B}},} which happens if and only if ⋃ B ∈ B B {\displaystyle \bigcup _{B\in {\mathcal {B}}}B} is a bounded subset of Y {\displaystyle Y} ; if Y {\displaystyle Y} is a normed space then this happens if and only if there exists some real M ≥ 0 {\displaystyle M\geq 0} such that sup B ∈ B b ∈ B ‖ b ‖ ≤ M . {\textstyle \sup _{\stackrel {b\in B}{B\in {\mathcal {B}}}}\|b\|\leq M.} In particular, if H {\displaystyle H} is a family of maps from X {\displaystyle X} to Y {\displaystyle Y} and if C ⊆ X {\displaystyle C\subseteq X} then the family { h ( C ) : h ∈ H } {\displaystyle \{h(C):h\in H\}} is uniformly bounded in Y {\displaystyle Y} if and only if there exists some bounded subset D {\displaystyle D} of Y {\displaystyle Y} such that h ( C ) ⊆ D for all h ∈ H , {\displaystyle h(C)\subseteq D{\text{ for all }}h\in H,} which happens if and only if H ( C ) := ⋃ h ∈ H h ( C ) {\textstyle H(C):=\bigcup _{h\in H}h(C)} is a bounded subset of Y . {\displaystyle Y.} Proposition [ 2 ] — Let H ⊆ L ( X , Y ) {\displaystyle H\subseteq L(X,Y)} be a set of continuous linear operators between two topological vector spaces X {\displaystyle X} and Y {\displaystyle Y} and let C ⊆ X {\displaystyle C\subseteq X} be any bounded subset of X . {\displaystyle X.} Then the family of sets { h ( C ) : h ∈ H } {\displaystyle \{h(C):h\in H\}} is uniformly bounded in Y {\displaystyle Y} if any of the following conditions are satisfied: Although the notion of a nonmeager set is used in the following version of the uniform bounded principle, the domain X {\displaystyle X} is not assumed to be a Baire space . Theorem [ 2 ] — Let H ⊆ L ( X , Y ) {\displaystyle H\subseteq L(X,Y)} be a set of continuous linear operators between two topological vector spaces X {\displaystyle X} and Y {\displaystyle Y} (not necessarily Hausdorff or locally convex). For every x ∈ X , {\displaystyle x\in X,} denote the orbit of x {\displaystyle x} by H ( x ) := { h ( x ) : h ∈ H } {\displaystyle H(x):=\{h(x):h\in H\}} and let B {\displaystyle B} denote the set of all x ∈ X {\displaystyle x\in X} whose orbit H ( x ) {\displaystyle H(x)} is a bounded subset of Y . {\displaystyle Y.} If B {\displaystyle B} is of the second category (that is, nonmeager) in X {\displaystyle X} then B = X {\displaystyle B=X} and H {\displaystyle H} is equicontinuous. Every proper vector subspace of a TVS X {\displaystyle X} has an empty interior in X . {\displaystyle X.} [ 3 ] So in particular, every proper vector subspace that is closed is nowhere dense in X {\displaystyle X} and thus of the first category (meager) in X {\displaystyle X} (and the same is thus also true of all its subsets). Consequently, any vector subspace of a TVS X {\displaystyle X} that is of the second category (nonmeager) in X {\displaystyle X} must be a dense subset of X {\displaystyle X} (since otherwise its closure in X {\displaystyle X} would be a closed proper vector subspace of X {\displaystyle X} and thus of the first category). [ 3 ] Proof that H {\displaystyle H} is equicontinuous: Let W , V ⊆ Y {\displaystyle W,V\subseteq Y} be balanced neighborhoods of the origin in Y {\displaystyle Y} satisfying V ¯ + V ¯ ⊆ W .
{\displaystyle {\overline {V}}+{\overline {V}}\subseteq W.} It must be shown that there exists a neighborhood N ⊆ X {\displaystyle N\subseteq X} of the origin in X {\displaystyle X} such that h ( N ) ⊆ W {\displaystyle h(N)\subseteq W} for every h ∈ H . {\displaystyle h\in H.} Let C := ⋂ h ∈ H h − 1 ( V ¯ ) , {\displaystyle C~:=~\bigcap _{h\in H}h^{-1}\left({\overline {V}}\right),} which is a closed subset of X {\displaystyle X} (because it is an intersection of closed subsets) that for every h ∈ H , {\displaystyle h\in H,} also satisfies h ( C ) ⊆ V ¯ {\displaystyle h(C)\subseteq {\overline {V}}} and h ( C − C ) = h ( C ) − h ( C ) ⊆ V ¯ − V ¯ = V ¯ + V ¯ ⊆ W {\displaystyle h(C-C)~=~h(C)-h(C)~\subseteq ~{\overline {V}}-{\overline {V}}~=~{\overline {V}}+{\overline {V}}~\subseteq ~W} (as will be shown, the set C − C {\displaystyle C-C} is in fact a neighborhood of the origin in X {\displaystyle X} because the topological interior of C {\displaystyle C} in X {\displaystyle X} is not empty). If b ∈ B {\displaystyle b\in B} then H ( b ) {\displaystyle H(b)} being bounded in Y {\displaystyle Y} implies that there exists some integer n ∈ N {\displaystyle n\in \mathbb {N} } such that H ( b ) ⊆ n V {\displaystyle H(b)\subseteq nV} so if h ∈ H , {\displaystyle h\in H,} then b ∈ h − 1 ( n V ) = n h − 1 ( V ) . {\displaystyle b~\in ~h^{-1}\left(nV\right)~=~nh^{-1}(V).} Since h ∈ H {\displaystyle h\in H} was arbitrary, b ∈ ⋂ h ∈ H n h − 1 ( V ) = n ⋂ h ∈ H h − 1 ( V ) ⊆ n C . {\displaystyle b~\in ~\bigcap _{h\in H}nh^{-1}(V)~=~n\bigcap _{h\in H}h^{-1}(V)~\subseteq ~nC.} This proves that B ⊆ ⋃ n ∈ N n C . {\displaystyle B~\subseteq ~\bigcup _{n\in \mathbb {N} }nC.} Because B {\displaystyle B} is of the second category in X , {\displaystyle X,} the same must be true of at least one of the sets n C {\displaystyle nC} for some n ∈ N . {\displaystyle n\in \mathbb {N} .} The map X → X {\displaystyle X\to X} defined by x ↦ 1 n x {\textstyle x\mapsto {\frac {1}{n}}x} is a ( surjective ) homeomorphism , so the set 1 n ( n C ) = C {\textstyle {\frac {1}{n}}(nC)=C} is necessarily of the second category in X . {\displaystyle X.} Because C {\displaystyle C} is closed and of the second category in X , {\displaystyle X,} its topological interior in X {\displaystyle X} is not empty. Pick c ∈ Int X ⁡ C . {\displaystyle c\in \operatorname {Int} _{X}C.} Because the map X → X {\displaystyle X\to X} defined by x ↦ c − x {\displaystyle x\mapsto c-x} is a homeomorphism, the set N := c − Int X ⁡ C = Int X ⁡ ( c − C ) {\displaystyle N~:=~c-\operatorname {Int} _{X}C~=~\operatorname {Int} _{X}(c-C)} is a neighborhood of 0 = c − c {\displaystyle 0=c-c} in X , {\displaystyle X,} which implies that the same is true of its superset C − C . {\displaystyle C-C.} And so for every h ∈ H , {\displaystyle h\in H,} h ( N ) ⊆ h ( c − C ) = h ( c ) − h ( C ) ⊆ V ¯ − V ¯ ⊆ W . {\displaystyle h(N)~\subseteq ~h(c-C)~=~h(c)-h(C)~\subseteq ~{\overline {V}}-{\overline {V}}~\subseteq ~W.} This proves that H {\displaystyle H} is equicontinuous. Q.E.D. Proof that B = X {\displaystyle B=X} : Because H {\displaystyle H} is equicontinuous, if S ⊆ X {\displaystyle S\subseteq X} is bounded in X {\displaystyle X} then H ( S ) {\displaystyle H(S)} is uniformly bounded in Y . {\displaystyle Y.} In particular, for any x ∈ X , {\displaystyle x\in X,} because S := { x } {\displaystyle S:=\{x\}} is a bounded subset of X , {\displaystyle X,} H ( { x } ) = H ( x ) {\displaystyle H(\{x\})=H(x)} is a uniformly bounded subset of Y . {\displaystyle Y.} Thus B = X . 
{\displaystyle B=X.} Q.E.D. The following theorem establishes conditions for the pointwise limit of a sequence of continuous linear maps to be itself continuous. Theorem [ 4 ] — Suppose that h 1 , h 2 , … {\displaystyle h_{1},h_{2},\ldots } is a sequence of continuous linear maps between two topological vector spaces X {\displaystyle X} and Y . {\displaystyle Y.} Theorem [ 3 ] — If h 1 , h 2 , … {\displaystyle h_{1},h_{2},\ldots } is a sequence of continuous linear maps from an F-space X {\displaystyle X} into a Hausdorff topological vector space Y {\displaystyle Y} such that for every x ∈ X , {\displaystyle x\in X,} the limit h ( x ) := lim n → ∞ h n ( x ) {\displaystyle h(x)~:=~\lim _{n\to \infty }h_{n}(x)} exists in Y , {\displaystyle Y,} then h : X → Y {\displaystyle h:X\to Y} is a continuous linear map and the maps h , h 1 , h 2 , … {\displaystyle h,h_{1},h_{2},\ldots } are equicontinuous. If in addition the domain is a Banach space and the codomain is a normed space then ‖ h ‖ ≤ lim inf n → ∞ ‖ h n ‖ < ∞ . {\displaystyle \|h\|\leq \liminf _{n\to \infty }\left\|h_{n}\right\|<\infty .} Dieudonné (1970) proves a weaker form of this theorem with Fréchet spaces rather than the usual Banach spaces. Theorem [ 2 ] — Let H ⊆ L ( X , Y ) {\displaystyle H\subseteq L(X,Y)} be a set of continuous linear operators from a complete metrizable topological vector space X {\displaystyle X} (such as a Fréchet space or an F-space ) into a Hausdorff topological vector space Y . {\displaystyle Y.} If for every x ∈ X , {\displaystyle x\in X,} the orbit H ( x ) := { h ( x ) : h ∈ H } {\displaystyle H(x):=\{h(x):h\in H\}} is a bounded subset of Y {\displaystyle Y} then H {\displaystyle H} is equicontinuous. So in particular, if Y {\displaystyle Y} is also a normed space and if sup h ∈ H ‖ h ( x ) ‖ < ∞ for every x ∈ X , {\displaystyle \sup _{h\in H}\|h(x)\|<\infty \quad {\text{ for every }}x\in X,} then H {\displaystyle H} is equicontinuous.
https://en.wikipedia.org/wiki/Uniform_boundedness_principle
In geometry , a uniform coloring is a property of a uniform figure ( uniform tiling or uniform polyhedron ) that is colored to be vertex-transitive . Different symmetries can be expressed on the same geometric figure with the faces following different uniform color patterns. A uniform coloring can be specified by listing the different colors with indices around a vertex figure . In addition, an n -uniform coloring is a property of a uniform figure which has n types of vertex figure that are collectively vertex-transitive . A related term is Archimedean coloring , which requires one vertex-figure coloring repeated in a periodic arrangement. A more general term is k -Archimedean coloring , which counts k distinctly colored vertex figures. For example, this Archimedean coloring (left) of a triangular tiling has two colors, but requires 4 unique colors by symmetry positions and becomes a 2-uniform coloring (right):
https://en.wikipedia.org/wiki/Uniform_coloring
In mathematics , a real function f {\displaystyle f} of real numbers is said to be uniformly continuous if there is a positive real number δ {\displaystyle \delta } such that function values over any function domain interval of size δ {\displaystyle \delta } are as close to each other as we want. In other words, for a uniformly continuous real function of real numbers, if we want function value differences to be less than any positive real number ε {\displaystyle \varepsilon } , then there is a positive real number δ {\displaystyle \delta } such that | f ( x ) − f ( y ) | < ε {\displaystyle |f(x)-f(y)|<\varepsilon } for any x {\displaystyle x} and y {\displaystyle y} in any interval of length δ {\displaystyle \delta } within the domain of f {\displaystyle f} . The difference between uniform continuity and (ordinary) continuity is that, in uniform continuity there is a globally applicable δ {\displaystyle \delta } (the size of a function domain interval over which function value differences are less than ε {\displaystyle \varepsilon } ) that depends only on ε {\displaystyle \varepsilon } , while in (ordinary) continuity there is a locally applicable δ {\displaystyle \delta } that depends on both ε {\displaystyle \varepsilon } and x {\displaystyle x} . So uniform continuity is a stronger condition than continuity: a function that is uniformly continuous is continuous, but a function that is continuous is not necessarily uniformly continuous. The concepts of uniform continuity and continuity can be expanded to functions defined between metric spaces . Continuous functions can fail to be uniformly continuous if they are unbounded on a bounded domain, such as f ( x ) = 1 x {\displaystyle f(x)={\tfrac {1}{x}}} on ( 0 , 1 ) {\displaystyle (0,1)} , or if their slopes become unbounded on an infinite domain, such as f ( x ) = x 2 {\displaystyle f(x)=x^{2}} on the real line. However, any Lipschitz map between metric spaces is uniformly continuous, in particular any isometry (distance-preserving map). Although continuity can be defined for functions between general topological spaces, defining uniform continuity requires more structure. The concept relies on comparing the sizes of neighbourhoods of distinct points, so it requires a metric space, or more generally a uniform space . For a function f : X → Y {\displaystyle f:X\to Y} with metric spaces ( X , d 1 ) {\displaystyle (X,d_{1})} and ( Y , d 2 ) {\displaystyle (Y,d_{2})} , uniform continuity means that for every ε > 0 {\displaystyle \varepsilon >0} there exists δ > 0 {\displaystyle \delta >0} such that for all x , y ∈ X {\displaystyle x,y\in X} , d 1 ( x , y ) < δ {\displaystyle d_{1}(x,y)<\delta } implies d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle d_{2}(f(x),f(y))<\varepsilon } , while (ordinary) continuity means that for every x ∈ X {\displaystyle x\in X} and every ε > 0 {\displaystyle \varepsilon >0} there exists δ > 0 {\displaystyle \delta >0} such that for all y ∈ X {\displaystyle y\in X} , d 1 ( x , y ) < δ {\displaystyle d_{1}(x,y)<\delta } implies d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle d_{2}(f(x),f(y))<\varepsilon } . In these definitions, the difference between uniform continuity and continuity is that, in uniform continuity there is a globally applicable δ {\displaystyle \delta } (the size of a neighbourhood in X {\displaystyle X} over which values of the metric for function values in Y {\displaystyle Y} are less than ε {\displaystyle \varepsilon } ) that depends only on ε {\displaystyle \varepsilon } , while in continuity there is a locally applicable δ {\displaystyle \delta } that depends on both ε {\displaystyle \varepsilon } and x {\displaystyle x} . Continuity is a local property of a function — that is, a function f {\displaystyle f} is continuous, or not, at a particular point x {\displaystyle x} of the function domain X {\displaystyle X} , and this can be determined by looking at only the values of the function in an arbitrarily small neighbourhood of that point.
When we speak of a function being continuous on an interval , we mean that the function is continuous at every point of the interval. In contrast, uniform continuity is a global property of f {\displaystyle f} , in the sense that the standard definition of uniform continuity refers to every point of X {\displaystyle X} . On the other hand, it is possible to give a definition that is local in terms of the natural extension f ∗ {\displaystyle f^{*}} (the characteristics of which at nonstandard points are determined by the global properties of f {\displaystyle f} ), although it is not possible to give a local definition of uniform continuity for an arbitrary hyperreal-valued function, see below . A mathematical definition that a function f {\displaystyle f} is continuous on an interval I {\displaystyle I} and a definition that f {\displaystyle f} is uniformly continuous on I {\displaystyle I} are structurally similar, as shown in the following. Continuity of a function f : X → Y {\displaystyle f:X\to Y} for metric spaces ( X , d 1 ) {\displaystyle (X,d_{1})} and ( Y , d 2 ) {\displaystyle (Y,d_{2})} at every point x {\displaystyle x} of an interval I ⊆ X {\displaystyle I\subseteq X} (i.e., continuity of f {\displaystyle f} on the interval I {\displaystyle I} ) is expressed by a formula starting with quantifications: ∀ x ∈ I ∀ ε > 0 ∃ δ > 0 ∀ y ∈ I : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε {\displaystyle \forall x\in I\;\forall \varepsilon >0\;\exists \delta >0\;\forall y\in I:\;d_{1}(x,y)<\delta \Rightarrow d_{2}(f(x),f(y))<\varepsilon } (metrics d 1 ( x , y ) {\displaystyle d_{1}(x,y)} and d 2 ( f ( x ) , f ( y ) ) {\displaystyle d_{2}(f(x),f(y))} are | x − y | {\displaystyle |x-y|} and | f ( x ) − f ( y ) | {\displaystyle |f(x)-f(y)|} for f : R → R {\displaystyle f:\mathbb {R} \to \mathbb {R} } for the set of real numbers R {\displaystyle \mathbb {R} } ). For uniform continuity, the order of the first, second, and third quantifications ( ∀ x ∈ I {\displaystyle \forall x\in I} , ∀ ε > 0 {\displaystyle \forall \varepsilon >0} , and ∃ δ > 0 {\displaystyle \exists \delta >0} ) is rotated: ∀ ε > 0 ∃ δ > 0 ∀ x ∈ I ∀ y ∈ I : d 1 ( x , y ) < δ ⇒ d 2 ( f ( x ) , f ( y ) ) < ε . {\displaystyle \forall \varepsilon >0\;\exists \delta >0\;\forall x\in I\;\forall y\in I:\;d_{1}(x,y)<\delta \Rightarrow d_{2}(f(x),f(y))<\varepsilon .} Thus for continuity on the interval, one takes an arbitrary point x {\displaystyle x} of the interval, and then there must exist a distance δ {\displaystyle \delta } , while for uniform continuity, a single δ {\displaystyle \delta } must work uniformly for all points x {\displaystyle x} of the interval. Every uniformly continuous function is continuous , but the converse does not hold. Consider for instance the continuous function f : R → R , x ↦ x 2 {\displaystyle f\colon \mathbb {R} \rightarrow \mathbb {R} ,x\mapsto x^{2}} where R {\displaystyle \mathbb {R} } is the set of real numbers . Given a positive real number ε {\displaystyle \varepsilon } , uniform continuity requires the existence of a positive real number δ {\displaystyle \delta } such that for all x 1 , x 2 ∈ R {\displaystyle x_{1},x_{2}\in \mathbb {R} } with | x 1 − x 2 | < δ {\displaystyle |x_{1}-x_{2}|<\delta } , we have | f ( x 1 ) − f ( x 2 ) | < ε {\displaystyle |f(x_{1})-f(x_{2})|<\varepsilon } . But f ( x + β ) − f ( x ) = 2 x β + β 2 {\displaystyle f(x+\beta )-f(x)=2x\beta +\beta ^{2}} , and as x {\displaystyle x} takes higher and higher values, δ {\displaystyle \delta } needs to be lower and lower to satisfy | f ( x + β ) − f ( x ) | < ε {\displaystyle |f(x+\beta )-f(x)|<\varepsilon } for positive real numbers β < δ {\displaystyle \beta <\delta } and the given ε {\displaystyle \varepsilon } . This means that there is no specifiable (no matter how small it is) positive real number δ {\displaystyle \delta } satisfying the condition for f {\displaystyle f} to be uniformly continuous, so f {\displaystyle f} is not uniformly continuous. Any absolutely continuous function (over a compact interval) is uniformly continuous.
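The x² computation above can be made concrete: for a fixed ε, the largest admissible δ at base point x solves (x + δ)² − x² = ε, giving δ = √(x² + ε) − x ≈ ε/(2x), which tends to 0 as x grows. A short numerical check:

```python
eps = 1.0
for x in (1.0, 10.0, 100.0, 1000.0):
    best_delta = (x * x + eps) ** 0.5 - x   # largest delta that works at this x
    print(f"x = {x:6.0f}   largest workable delta = {best_delta:.6f}")
# The admissible delta decays roughly like eps / (2 x), so no single
# positive delta works for all x: f(x) = x**2 is not uniformly continuous on R.
```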
On the other hand, the Cantor function is uniformly continuous but not absolutely continuous. The image of a totally bounded subset under a uniformly continuous function is totally bounded. However, the image of a bounded subset of an arbitrary metric space under a uniformly continuous function need not be bounded: as a counterexample, consider the identity function from the integers endowed with the discrete metric to the integers endowed with the usual Euclidean metric . The Heine–Cantor theorem asserts that every continuous function on a compact set is uniformly continuous . In particular, if a function is continuous on a closed bounded interval of the real line, it is uniformly continuous on that interval. The Darboux integrability of continuous functions follows almost immediately from this theorem. If a real-valued function f {\displaystyle f} is continuous on [ 0 , ∞ ) {\displaystyle [0,\infty )} and lim x → ∞ f ( x ) {\displaystyle \lim _{x\to \infty }f(x)} exists (and is finite), then f {\displaystyle f} is uniformly continuous. In particular, every element of C 0 ( R ) {\displaystyle C_{0}(\mathbb {R} )} , the space of continuous functions on R {\displaystyle \mathbb {R} } that vanish at infinity, is uniformly continuous. This is a generalization of the Heine–Cantor theorem mentioned above, since C c ( R ) ⊂ C 0 ( R ) {\displaystyle C_{c}(\mathbb {R} )\subset C_{0}(\mathbb {R} )} . For a uniformly continuous function, for every positive real number ε > 0 {\displaystyle \varepsilon >0} there is a positive real number δ > 0 {\displaystyle \delta >0} such that two function values f ( x ) {\displaystyle f(x)} and f ( y ) {\displaystyle f(y)} are within distance ε {\displaystyle \varepsilon } whenever x {\displaystyle x} and y {\displaystyle y} are within distance δ {\displaystyle \delta } . Thus at each point ( x , f ( x ) ) {\displaystyle (x,f(x))} of the graph, if we draw a rectangle with a height slightly less than 2 ε {\displaystyle 2\varepsilon } and a width slightly less than 2 δ {\displaystyle 2\delta } around that point, then the graph lies completely within the height of the rectangle, i.e., the graph does not pass through the top or the bottom side of the rectangle. For functions that are not uniformly continuous, this is not possible: the graph might lie inside the height of the rectangle at some points, but there is always a point on the graph where the graph penetrates the top or bottom side of the rectangle. The first published definition of uniform continuity was by Heine in 1870, and in 1872 he published a proof that a continuous function on an open interval need not be uniformly continuous. The proofs are almost verbatim those given by Dirichlet in his lectures on definite integrals in 1854. The definition of uniform continuity appears earlier in the work of Bolzano, where he also proved that continuous functions on an open interval do not need to be uniformly continuous. In addition he states that a continuous function on a closed interval is uniformly continuous, but he does not give a complete proof. [ 1 ] In non-standard analysis , a real-valued function f {\displaystyle f} of a real variable is microcontinuous at a point a {\displaystyle a} precisely if the difference f ∗ ( a + δ ) − f ∗ ( a ) {\displaystyle f^{*}(a+\delta )-f^{*}(a)} is infinitesimal whenever δ {\displaystyle \delta } is infinitesimal.
Thus f {\displaystyle f} is continuous on a set A {\displaystyle A} in R {\displaystyle \mathbb {R} } precisely if f ∗ {\displaystyle f^{*}} is microcontinuous at every real point a ∈ A {\displaystyle a\in A} . Uniform continuity can be expressed as the condition that (the natural extension of) f {\displaystyle f} is microcontinuous not only at real points in A {\displaystyle A} , but at all points in its non-standard counterpart (natural extension) ∗ A {\displaystyle ^{*}A} in ∗ R {\displaystyle ^{*}\mathbb {R} } . Note that there exist hyperreal-valued functions which meet this criterion but are not uniformly continuous, as well as uniformly continuous hyperreal-valued functions which do not meet this criterion; however, such functions cannot be expressed in the form f ∗ {\displaystyle f^{*}} for any real-valued function f {\displaystyle f} (see non-standard calculus for more details and examples). For a function between metric spaces, uniform continuity implies Cauchy continuity ( Fitzpatrick 2006 ). More specifically, let A {\displaystyle A} be a subset of R n {\displaystyle \mathbb {R} ^{n}} . If a function f : A → R n {\displaystyle f:A\to \mathbb {R} ^{n}} is uniformly continuous, then for every pair of sequences x n {\displaystyle x_{n}} and y n {\displaystyle y_{n}} such that lim n → ∞ | x n − y n | = 0 {\displaystyle \lim _{n\to \infty }|x_{n}-y_{n}|=0} we have lim n → ∞ | f ( x n ) − f ( y n ) | = 0. {\displaystyle \lim _{n\to \infty }|f(x_{n})-f(y_{n})|=0.} Let X {\displaystyle X} be a metric space, S {\displaystyle S} a subset of X {\displaystyle X} , R {\displaystyle R} a complete metric space, and f : S → R {\displaystyle f:S\rightarrow R} a continuous function. A question to answer: When can f {\displaystyle f} be extended to a continuous function on all of X {\displaystyle X} ? If S {\displaystyle S} is closed in X {\displaystyle X} , the answer is given by the Tietze extension theorem . So it is necessary and sufficient to extend f {\displaystyle f} to the closure of S {\displaystyle S} in X {\displaystyle X} : that is, we may assume without loss of generality that S {\displaystyle S} is dense in X {\displaystyle X} , and this has the further pleasant consequence that if the extension exists, it is unique. A sufficient condition for f {\displaystyle f} to extend to a continuous function f : X → R {\displaystyle f:X\rightarrow R} is that it is Cauchy-continuous , i.e., the image under f {\displaystyle f} of a Cauchy sequence remains Cauchy. If X {\displaystyle X} is complete (and thus the completion of S {\displaystyle S} ), then every continuous function from X {\displaystyle X} to a metric space Y {\displaystyle Y} is Cauchy-continuous. Therefore when X {\displaystyle X} is complete, f {\displaystyle f} extends to a continuous function f : X → R {\displaystyle f:X\rightarrow R} if and only if f {\displaystyle f} is Cauchy-continuous. It is easy to see that every uniformly continuous function is Cauchy-continuous and thus extends to X {\displaystyle X} . The converse does not hold, since the function f : R → R , x ↦ x 2 {\displaystyle f:R\rightarrow R,x\mapsto x^{2}} is, as seen above, not uniformly continuous, but it is continuous and thus Cauchy-continuous. In general, for functions defined on unbounded spaces like R {\displaystyle R} , uniform continuity is a rather strong condition. It is desirable to have a weaker condition from which to deduce extendability. For example, suppose a > 1 {\displaystyle a>1} is a real number.
At the precalculus level, the function f : x ↦ a x {\displaystyle f:x\mapsto a^{x}} can be given a precise definition only for rational values of x {\displaystyle x} (assuming the existence of q-th roots of positive real numbers, an application of the intermediate value theorem ). One would like to extend f {\displaystyle f} to a function defined on all of R {\displaystyle R} . The identity a x + δ − a x = a x ( a δ − 1 ) {\displaystyle a^{x+\delta }-a^{x}=a^{x}(a^{\delta }-1)} shows that f {\displaystyle f} is not uniformly continuous on the set Q {\displaystyle Q} of all rational numbers; however, for any bounded interval I {\displaystyle I} the restriction of f {\displaystyle f} to Q ∩ I {\displaystyle Q\cap I} is uniformly continuous, hence Cauchy-continuous, hence f {\displaystyle f} extends to a continuous function on I {\displaystyle I} . But since this holds for every I {\displaystyle I} , there is then a unique extension of f {\displaystyle f} to a continuous function on all of R {\displaystyle R} . More generally, a continuous function f : S → R {\displaystyle f:S\rightarrow R} whose restriction to every bounded subset of S {\displaystyle S} is uniformly continuous is extendable to X {\displaystyle X} , and the converse holds if X {\displaystyle X} is locally compact . A typical application of the extendability of a uniformly continuous function is the proof of the inverse Fourier transformation formula. One first proves that the formula is true for test functions, which are dense in the space; the inverse map is then extended to the whole space using the fact that it is linear and continuous, and hence uniformly continuous. In the special case of two topological vector spaces V {\displaystyle V} and W {\displaystyle W} , the notion of uniform continuity of a map f : V → W {\displaystyle f:V\to W} becomes: for any neighborhood B {\displaystyle B} of zero in W {\displaystyle W} , there exists a neighborhood A {\displaystyle A} of zero in V {\displaystyle V} such that v 1 − v 2 ∈ A {\displaystyle v_{1}-v_{2}\in A} implies f ( v 1 ) − f ( v 2 ) ∈ B . {\displaystyle f(v_{1})-f(v_{2})\in B.} For linear transformations f : V → W {\displaystyle f:V\to W} , uniform continuity is equivalent to continuity. This fact is frequently used implicitly in functional analysis to extend a linear map off a dense subspace of a Banach space . Just as the most natural and general setting for continuity is topological spaces , the most natural and general setting for the study of uniform continuity is that of uniform spaces . A function f : X → Y {\displaystyle f:X\to Y} between uniform spaces is called uniformly continuous if for every entourage V {\displaystyle V} in Y {\displaystyle Y} there exists an entourage U {\displaystyle U} in X {\displaystyle X} such that for every ( x 1 , x 2 ) {\displaystyle (x_{1},x_{2})} in U {\displaystyle U} we have ( f ( x 1 ) , f ( x 2 ) ) {\displaystyle (f(x_{1}),f(x_{2}))} in V {\displaystyle V} . In this setting, it is also true that uniformly continuous maps transform Cauchy sequences into Cauchy sequences. Each compact Hausdorff space possesses exactly one uniform structure compatible with the topology. A consequence is a generalization of the Heine–Cantor theorem: each continuous function from a compact Hausdorff space to a uniform space is uniformly continuous.
https://en.wikipedia.org/wiki/Uniform_continuity
In the mathematical field of analysis , uniform convergence is a mode of convergence of functions stronger than pointwise convergence . A sequence of functions ( f n ) {\displaystyle (f_{n})} converges uniformly to a limiting function f {\displaystyle f} on a set E {\displaystyle E} as the function domain if, given any arbitrarily small positive number ε {\displaystyle \varepsilon } , a number N {\displaystyle N} can be found such that each of the functions f N , f N + 1 , f N + 2 , … {\displaystyle f_{N},f_{N+1},f_{N+2},\ldots } differs from f {\displaystyle f} by no more than ε {\displaystyle \varepsilon } at every point x {\displaystyle x} in E {\displaystyle E} . Described in an informal way, if f n {\displaystyle f_{n}} converges to f {\displaystyle f} uniformly, then how quickly the functions f n {\displaystyle f_{n}} approach f {\displaystyle f} is "uniform" throughout E {\displaystyle E} in the following sense: in order to guarantee that f n ( x ) {\displaystyle f_{n}(x)} differs from f ( x ) {\displaystyle f(x)} by less than a chosen distance ε {\displaystyle \varepsilon } , we only need to make sure that n {\displaystyle n} is larger than or equal to a certain N {\displaystyle N} , which we can find without knowing the value of x ∈ E {\displaystyle x\in E} in advance. In other words, there exists a number N = N ( ε ) {\displaystyle N=N(\varepsilon )} that could depend on ε {\displaystyle \varepsilon } but is independent of x {\displaystyle x} , such that choosing n ≥ N {\displaystyle n\geq N} will ensure that | f n ( x ) − f ( x ) | < ε {\displaystyle |f_{n}(x)-f(x)|<\varepsilon } for all x ∈ E {\displaystyle x\in E} . In contrast, pointwise convergence of f n {\displaystyle f_{n}} to f {\displaystyle f} merely guarantees that for any x ∈ E {\displaystyle x\in E} given in advance, we can find N = N ( ε , x ) {\displaystyle N=N(\varepsilon ,x)} (i.e., N {\displaystyle N} could depend on the values of both ε {\displaystyle \varepsilon } and x {\displaystyle x} ) such that, for that particular x {\displaystyle x} , f n ( x ) {\displaystyle f_{n}(x)} falls within ε {\displaystyle \varepsilon } of f ( x ) {\displaystyle f(x)} whenever n ≥ N {\displaystyle n\geq N} (and a different x {\displaystyle x} may require a different, larger N {\displaystyle N} for n ≥ N {\displaystyle n\geq N} to guarantee that | f n ( x ) − f ( x ) | < ε {\displaystyle |f_{n}(x)-f(x)|<\varepsilon } ). The difference between uniform convergence and pointwise convergence was not fully appreciated early in the history of calculus, leading to instances of faulty reasoning. The concept, which was first formalized by Karl Weierstrass , is important because several properties of the functions f n {\displaystyle f_{n}} , such as continuity , Riemann integrability , and, with additional hypotheses, differentiability , are transferred to the limit f {\displaystyle f} if the convergence is uniform, but not necessarily if the convergence is not uniform. In 1821 Augustin-Louis Cauchy published a proof that a convergent sum of continuous functions is always continuous, to which Niels Henrik Abel in 1826 found purported counterexamples in the context of Fourier series , arguing that Cauchy's proof had to be incorrect. Completely standard notions of convergence did not exist at the time, and Cauchy handled convergence using infinitesimal methods. When put into the modern language, what Cauchy proved is that a uniformly convergent sequence of continuous functions has a continuous limit. 
The failure of a merely pointwise-convergent limit of continuous functions to converge to a continuous function illustrates the importance of distinguishing between different types of convergence when handling sequences of functions. [ 1 ] The term uniform convergence was probably first used by Christoph Gudermann , in an 1838 paper on elliptic functions , where he employed the phrase "convergence in a uniform way" when the "mode of convergence" of a series {\textstyle \sum _{n=1}^{\infty }f_{n}(x,\phi ,\psi )} is independent of the variables φ and ψ. While he thought it a "remarkable fact" when a series converged in this way, he did not give a formal definition, nor use the property in any of his proofs. [ 2 ] Later Gudermann's pupil Karl Weierstrass , who attended his course on elliptic functions in 1839–1840, coined the term gleichmäßig konvergent ( German : uniformly convergent ), which he used in his 1841 paper Zur Theorie der Potenzreihen , published in 1894. Independently, similar concepts were articulated by Philipp Ludwig von Seidel [ 3 ] and George Gabriel Stokes . G. H. Hardy compares the three definitions in his paper "Sir George Stokes and the concept of uniform convergence" and remarks: "Weierstrass's discovery was the earliest, and he alone fully realized its far-reaching importance as one of the fundamental ideas of analysis." Under the influence of Weierstrass and Bernhard Riemann this concept and related questions were intensely studied at the end of the 19th century by Hermann Hankel , Paul du Bois-Reymond , Ulisse Dini , Cesare Arzelà and others. We first define uniform convergence for real-valued functions , although the concept is readily generalized to functions mapping to metric spaces and, more generally, uniform spaces (see below). Suppose E is a set and (f_n) is a sequence of real-valued functions on it. We say the sequence (f_n) is uniformly convergent on E with limit f : E → R if for every ε > 0 there exists a natural number N such that for all n ≥ N and for all x ∈ E, {\displaystyle |f_{n}(x)-f(x)|<\varepsilon .} The notation for uniform convergence of f_n to f is not quite standardized, and different authors have used a variety of symbols, including (in roughly decreasing order of popularity) {\displaystyle f_{n}\rightrightarrows f} , {\displaystyle {\underset {n\to \infty }{\mathrm {unif\ lim} }}f_{n}=f} and {\displaystyle f_{n}{\xrightarrow {\mathrm {unif.} }}f} . Frequently, no special symbol is used, and authors simply write "f_n → f uniformly" to indicate that convergence is uniform. (In contrast, the expression f_n → f on E without an adverb is taken to mean pointwise convergence on E : for all x ∈ E, f_n(x) → f(x) as n → ∞.)
Since R is a complete metric space , the Cauchy criterion can be used to give an equivalent alternative formulation for uniform convergence: (f_n) converges uniformly on E (in the previous sense) if and only if for every ε > 0, there exists a natural number N such that {\displaystyle |f_{m}(x)-f_{n}(x)|<\varepsilon \quad {\text{for all }}x\in E{\text{ and all }}m,n\geq N.} In yet another equivalent formulation, if we define {\displaystyle d_{n}=\sup _{x\in E}|f_{n}(x)-f(x)|,} then f_n converges to f uniformly if and only if d_n → 0 as n → ∞. Thus, we can characterize uniform convergence of (f_n) on E as (simple) convergence of (f_n) in the function space {\displaystyle \mathbb {R} ^{E}} with respect to the uniform metric (also called the supremum metric), defined by {\displaystyle d(f,g)=\sup _{x\in E}|f(x)-g(x)|.} Symbolically, {\displaystyle f_{n}\rightrightarrows f\iff \lim _{n\to \infty }d(f_{n},f)=0.} The sequence (f_n) is said to be locally uniformly convergent with limit f if E is a metric space and for every x ∈ E, there exists an r > 0 such that (f_n) converges uniformly on B(x, r) ∩ E. It is clear that uniform convergence implies local uniform convergence, which implies pointwise convergence. Intuitively, a sequence of functions f_n converges uniformly to f if, given an arbitrarily small ε > 0, we can find an N so that the functions f_n with n > N all fall within a "tube" of width 2ε centered around f (i.e., between f(x) − ε and f(x) + ε) for the entire domain of the function. Note that interchanging the order of quantifiers in the definition of uniform convergence by moving "for all x ∈ E" in front of "there exists a natural number N" results in a definition of pointwise convergence of the sequence. To make this difference explicit, in the case of uniform convergence, N = N(ε) can only depend on ε, and the choice of N has to work for all x ∈ E, for a specific value of ε that is given. In contrast, in the case of pointwise convergence, N = N(ε, x) may depend on both ε and x, and the choice of N only has to work for the specific values of ε and x that are given. Thus uniform convergence implies pointwise convergence; however, the converse is not true, as the example in the section below illustrates. One may straightforwardly extend the concept to functions E → M, where (M, d) is a metric space , by replacing |f_n(x) − f(x)| with d(f_n(x), f(x)). The most general setting is the uniform convergence of nets of functions E → X, where X is a uniform space .
We say that the net (f_α) converges uniformly with limit f : E → X if and only if for every entourage V in X, there exists an α₀ such that for every x in E and every α ≥ α₀, (f_α(x), f(x)) is in V. In this situation, the uniform limit of continuous functions remains continuous. Uniform convergence admits a simplified definition in a hyperreal setting. Thus, a sequence f_n converges to f uniformly if for all hyperreal x in the domain of f* and all infinite n, f_n*(x) is infinitely close to f*(x) (see microcontinuity for a similar definition of uniform continuity). In contrast, pointwise convergence requires this only for real x. For x ∈ [0, 1), a basic example of uniform convergence can be illustrated as follows: the sequence (1/2)^{x+n} converges uniformly, while x^n does not. Specifically, assume ε = 1/4. Each function (1/2)^{x+n} is less than or equal to 1/4 when n ≥ 2, regardless of the value of x. On the other hand, x^n is only less than or equal to 1/4 at ever increasing values of n when values of x are selected closer and closer to 1 (explained more in depth further below). Given a topological space X, we can equip the space of bounded real or complex -valued functions over X with the uniform norm topology, with the uniform metric defined by {\displaystyle d(f,g)=\|f-g\|_{\infty }=\sup _{x\in X}|f(x)-g(x)|.} Then uniform convergence simply means convergence in the uniform norm topology: {\displaystyle \lim _{n\to \infty }\|f_{n}-f\|_{\infty }=0.} The sequence of functions (f_n) defined by {\displaystyle f_{n}(x)=x^{n}} on the interval [0, 1] is a classic example of a sequence of functions that converges to a function f pointwise but not uniformly. To show this, we first observe that the pointwise limit of (f_n) as n → ∞ is the function f given by {\displaystyle f(x)=0{\text{ for }}x\in [0,1){\text{ and }}f(1)=1.} Pointwise convergence: Convergence is trivial for x = 0 and x = 1, since f_n(0) = f(0) = 0 and f_n(1) = f(1) = 1, for all n. For x ∈ (0, 1) and given ε > 0, we can ensure that |f_n(x) − f(x)| < ε whenever n ≥ N by choosing {\displaystyle N=\lceil \log \varepsilon /\log x\rceil } , which is the minimum integer exponent of x that allows it to reach or dip below ε (here the upper square brackets indicate rounding up, see ceiling function ). Hence, f_n → f pointwise for all x ∈ [0, 1]. Note that the choice of N depends on the value of ε and x. Moreover, for a fixed choice of ε, N (which cannot be defined to be smaller) grows without bound as x approaches 1. These observations preclude the possibility of uniform convergence.
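The blow-up of N(ε, x) near x = 1 is easy to see numerically. Below is a minimal sketch (plain Python, no external libraries; the grid-based supremum is only an approximation of the true supremum) that tabulates N(ε, x) = ⌈log ε / log x⌉ for x approaching 1 and estimates sup |f_n − f| on [0, 1) for f_n(x) = x^n:

import math

eps = 0.25

# N(eps, x) = smallest n with x**n <= eps; it grows without bound as x -> 1.
for x in [0.6, 0.9, 0.99, 0.999]:
    N = math.ceil(math.log(eps) / math.log(x))
    print(f"x = {x}: N(eps, x) = {N}")

# Approximate sup_{x in [0,1)} |f_n(x) - f(x)| = sup x**n on a finite grid;
# the true supremum is 1 for every n, so d_n does not tend to 0.
grid = [i / 10**5 for i in range(10**5)]   # points in [0, 1)
for n in [10, 100, 1000]:
    d_n = max(x**n for x in grid)
    print(f"n = {n}: approx sup |f_n - f| = {d_n:.4f}")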
Non-uniformity of convergence: The convergence is not uniform, because we can find an ε > 0 so that no matter how large we choose N, there will be values of x ∈ [0, 1] and n ≥ N such that |f_n(x) − f(x)| ≥ ε. To see this, first observe that regardless of how large n becomes, there is always an x₀ ∈ [0, 1) such that f_n(x₀) = 1/2. Thus, if we choose ε = 1/4, we can never find an N such that |f_n(x) − f(x)| < ε for all x ∈ [0, 1] and n ≥ N. Explicitly, whatever candidate we choose for N, consider the value of f_N at x₀ = (1/2)^{1/N}. Since {\displaystyle f_{N}(x_{0})=\left((1/2)^{1/N}\right)^{N}={\tfrac {1}{2}}>{\tfrac {1}{4}}=\varepsilon ,} the candidate fails because we have found an example of an x ∈ [0, 1] that "escaped" our attempt to "confine" each f_n (n ≥ N) to within ε of f for all x ∈ [0, 1]. In fact, it is easy to see that {\displaystyle \|f_{n}-f\|_{\infty }=1} for every n, contrary to the requirement that ‖f_n − f‖_∞ → 0 if f_n ⇉ f. In this example one can easily see that pointwise convergence does not preserve differentiability or continuity. While each function of the sequence is smooth, that is to say that for all n, f_n ∈ C^∞([0, 1]), the limit lim_{n→∞} f_n is not even continuous. The series expansion of the exponential function can be shown to be uniformly convergent on any bounded subset S ⊂ C using the Weierstrass M-test . Theorem (Weierstrass M-test). Let (f_n) be a sequence of functions f_n : E → C and let (M_n) be a sequence of positive real numbers such that |f_n(x)| ≤ M_n for all x ∈ E and n = 1, 2, 3, … If {\textstyle \sum _{n}M_{n}} converges, then {\textstyle \sum _{n}f_{n}} converges absolutely and uniformly on E. The complex exponential function can be expressed as the series {\displaystyle e^{z}=\sum _{n=0}^{\infty }{\frac {z^{n}}{n!}}.} Any bounded subset is a subset of some disc D_R of radius R, centered on the origin in the complex plane . The Weierstrass M-test requires us to find an upper bound M_n on the terms of the series, with M_n independent of the position in the disc. To do this, we notice {\displaystyle \left|{\frac {z^{n}}{n!}}\right|\leq {\frac {R^{n}}{n!}}\quad {\text{for all }}z\in D_{R}} and take {\displaystyle M_{n}={\tfrac {R^{n}}{n!}}.} If {\displaystyle \sum _{n=0}^{\infty }M_{n}} is convergent, then the M-test asserts that the original series is uniformly convergent. The ratio test can be used here: {\displaystyle \lim _{n\to \infty }{\frac {M_{n+1}}{M_{n}}}=\lim _{n\to \infty }{\frac {R}{n+1}}=0<1,} which means the series over M_n is convergent. Thus the original series converges uniformly for all z ∈ D_R, and since S ⊂ D_R, the series is also uniformly convergent on S.
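As a quick numerical illustration of the M-test bound in this example (a sketch: the supremum over the disc is approximated by sampling the boundary circle, where the terms are largest), one can compare the worst sampled truncation error of the partial sums with the tail of ∑ M_n:

import cmath, math

R, N = 2.0, 12                      # disc radius and truncation order

# Sample the boundary circle |z| = R.
zs = [R * cmath.exp(2j * math.pi * k / 200) for k in range(200)]

def partial(z, N):
    return sum(z**n / math.factorial(n) for n in range(N + 1))

# Worst sampled truncation error of the partial sum vs. exp(z) ...
err = max(abs(cmath.exp(z) - partial(z, N)) for z in zs)

# ... is controlled by the tail of the majorant series, M_n = R^n / n!
tail = sum(R**n / math.factorial(n) for n in range(N + 1, 60))

print(f"max sampled error = {err:.3e}  <=  tail of sum M_n = {tail:.3e}")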
If E and M are topological spaces , then it makes sense to talk about the continuity of the functions f_n, f : E → M. If we further assume that M is a metric space , then (uniform) convergence of the f_n to f is also well defined. The following result states that continuity is preserved by uniform convergence: Uniform limit theorem — Suppose E is a topological space, M is a metric space, and (f_n) is a sequence of continuous functions f_n : E → M. If f_n ⇉ f on E, then f is also continuous. This theorem is proved by the " ε/3 trick", and is the archetypal example of this trick: to prove a given inequality ( ε ), one uses the definitions of continuity and uniform convergence to produce 3 inequalities ( ε/3 ), and then combines them via the triangle inequality to produce the desired inequality. Let x₀ ∈ E be an arbitrary point. We will prove that f is continuous at x₀. Let ε > 0. By uniform convergence, there exists a natural number N such that {\displaystyle \forall x\in E\quad d(f_{N}(x),f(x))\leq {\tfrac {\varepsilon }{3}}} (uniform convergence shows that the above statement is true for all n ≥ N, but we will only use it for one function of the sequence, namely f_N). It follows from the continuity of f_N at x₀ ∈ E that there exists an open set U containing x₀ such that {\displaystyle \forall x\in U\quad d(f_{N}(x),f_{N}(x_{0}))\leq {\tfrac {\varepsilon }{3}}} . Hence, using the triangle inequality , {\displaystyle \forall x\in U\quad d(f(x),f(x_{0}))\leq d(f(x),f_{N}(x))+d(f_{N}(x),f_{N}(x_{0}))+d(f_{N}(x_{0}),f(x_{0}))\leq \varepsilon } , which gives us the continuity of f at x₀. {\displaystyle \quad \square } This theorem is an important one in the history of real and Fourier analysis, since many 18th-century mathematicians had the intuitive understanding that a sequence of continuous functions always converges to a continuous function. The example f_n(x) = x^n above provides a counterexample, and many discontinuous functions could, in fact, be written as a Fourier series of continuous functions. The erroneous claim that the pointwise limit of a sequence of continuous functions is continuous (originally stated in terms of convergent series of continuous functions) is infamously known as "Cauchy's wrong theorem". The uniform limit theorem shows that a stronger form of convergence, uniform convergence, is needed to ensure the preservation of continuity in the limit function. More precisely, this theorem states that the uniform limit of uniformly continuous functions is uniformly continuous; for a locally compact space, continuity is equivalent to local uniform continuity, and thus the uniform limit of continuous functions is continuous.
If S is an interval and all the functions f_n are differentiable and converge to a limit f, it is often desirable to determine the derivative function f′ by taking the limit of the sequence f′_n. This is however in general not possible: even if the convergence is uniform, the limit function need not be differentiable (not even if the sequence consists of everywhere- analytic functions, see Weierstrass function ), and even if it is differentiable, the derivative of the limit function need not be equal to the limit of the derivatives. Consider for instance {\displaystyle f_{n}(x)=n^{-1/2}\sin(nx)} with uniform limit f_n ⇉ f ≡ 0. Clearly, f′ is also identically zero. However, the derivatives of the sequence of functions are given by {\displaystyle f'_{n}(x)=n^{1/2}\cos nx,} and the sequence f′_n does not converge to f′, or even to any function at all. In order to ensure a connection between the limit of a sequence of differentiable functions and the limit of the sequence of derivatives, the uniform convergence of the sequence of derivatives plus the convergence of the sequence of functions at at least one point is required: [ 4 ] If (f_n) is a sequence of differentiable functions on [a, b] such that {\displaystyle \lim _{n\to \infty }f_{n}(x_{0})} exists (and is finite) for some x₀ ∈ [a, b] and the sequence (f′_n) converges uniformly on [a, b], then f_n converges uniformly to a function f on [a, b], and {\displaystyle f'(x)=\lim _{n\to \infty }f'_{n}(x)} for x ∈ [a, b]. Similarly, one often wants to exchange integrals and limit processes. For the Riemann integral , this can be done if uniform convergence is assumed: If (f_n) is a sequence of Riemann integrable functions defined on a compact interval I which uniformly converge with limit f, then f is Riemann integrable and its integral can be computed as the limit of the integrals of the f_n: {\displaystyle \int _{I}f=\lim _{n\to \infty }\int _{I}f_{n}.} In fact, for a uniformly convergent family of bounded functions on an interval, the upper and lower Riemann integrals converge to the upper and lower Riemann integrals of the limit function. This follows because, for n sufficiently large, the graph of f_n is within ε of the graph of f, and so the upper sum and lower sum of f_n are each within ε|I| of the value of the upper and lower sums of f, respectively. Much stronger theorems in this respect, which require not much more than pointwise convergence, can be obtained if one abandons the Riemann integral and uses the Lebesgue integral instead. Using Morera's Theorem , one can show that if a sequence of analytic functions converges uniformly in a region S of the complex plane, then the limit is analytic in S. This example demonstrates that complex functions are better behaved than real functions, since the uniform limit of analytic functions on a real interval need not even be differentiable (see Weierstrass function ). We say that {\textstyle \sum _{n=1}^{\infty }f_{n}} converges pointwise on E if the sequence of partial sums {\displaystyle s_{n}(x)=\sum _{j=1}^{n}f_{j}(x)} converges pointwise, uniformly on E if (s_n) converges uniformly, and absolutely on E if {\textstyle \sum _{n=1}^{\infty }|f_{n}|} converges pointwise. With this definition come the following results: Let x₀ be contained in the set E and each f_n be continuous at x₀. If {\textstyle f=\sum _{n=1}^{\infty }f_{n}} converges uniformly on E then f is continuous at x₀ in E. Suppose that E = [a, b] and each f_n is integrable on E. If {\textstyle \sum _{n=1}^{\infty }f_{n}} converges uniformly on E then f is integrable on E and the series of the integrals of the f_n equals the integral of the series of the f_n.
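For the geometric series ∑ x^n, which converges uniformly on [0, 1/2] (for instance by the M-test with M_n = (1/2)^n), the last result can be checked numerically. The following minimal Python sketch compares the integral of the sum with the sum of the integrals; the midpoint quadrature grid is an illustrative approximation:

import math

a, b = 0.0, 0.5

# Integral of the sum: the series sums to 1/(1-x) on [0, 1/2],
# whose exact integral is log(2). Midpoint rule as a sanity check.
m = 100000
h = (b - a) / m
integral_of_sum = sum(h / (1.0 - (a + (k + 0.5) * h)) for k in range(m))

# Sum of the integrals: each term x^n integrates to (1/2)^(n+1) / (n+1).
sum_of_integrals = sum(0.5**(n + 1) / (n + 1) for n in range(200))

print(integral_of_sum, sum_of_integrals, math.log(2))
# All three agree (up to quadrature/truncation error), as the
# uniform-convergence theorem for series predicts.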
If the domain of the functions is a measure space E, then the related notion of almost uniform convergence can be defined. We say a sequence of functions (f_n) converges almost uniformly on E if for every δ > 0 there exists a measurable set E_δ with measure less than δ such that the sequence of functions (f_n) converges uniformly on E ∖ E_δ. In other words, almost uniform convergence means there are sets of arbitrarily small measure for which the sequence of functions converges uniformly on their complement. Note that almost uniform convergence of a sequence does not mean that the sequence converges uniformly almost everywhere , as might be inferred from the name. However, Egorov's theorem does guarantee that on a finite measure space, a sequence of functions that converges almost everywhere also converges almost uniformly on the same set. Almost uniform convergence implies almost everywhere convergence and convergence in measure .
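The sequence f_n(x) = x^n from the example above makes this concrete: it fails to converge uniformly on [0, 1], but deleting an arbitrarily small interval (1 − δ, 1] of measure δ restores uniform convergence on the rest. A minimal sketch:

# f_n(x) = x^n on [0, 1]: not uniformly convergent, but on [0, 1 - delta]
# the supremum of |f_n - f| is (1 - delta)**n, which tends to 0.
for delta in [0.1, 0.01, 0.001]:
    sups = [(1 - delta)**n for n in (10, 100, 10000)]
    print(f"delta = {delta}: sup over [0, 1-delta] at n = 10, 100, 10000 -> "
          + ", ".join(f"{s:.3g}" for s in sups))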
https://en.wikipedia.org/wiki/Uniform_convergence
Uniform convergence in probability is a form of convergence in probability in statistical asymptotic theory and probability theory . It means that, under certain conditions, the empirical frequencies of all events in a certain event-family converge to their theoretical probabilities . Uniform convergence in probability has applications to statistics as well as machine learning as part of statistical learning theory . The law of large numbers says that, for each single event A, its empirical frequency in a sequence of independent trials converges (with high probability) to its theoretical probability. In many applications, however, the need arises to judge simultaneously the probabilities of events of an entire class S from one and the same sample. Moreover, it is required that the relative frequency of the events converge to the probability uniformly over the entire class of events S. [ 1 ] The Uniform Convergence Theorem gives a sufficient condition for this convergence to hold. Roughly, if the event-family is sufficiently simple (its VC dimension is sufficiently small) then uniform convergence holds. For a class of predicates H defined on a set X and a set of samples x = (x₁, x₂, …, x_m), where x_i ∈ X, the empirical frequency of h ∈ H on x is {\displaystyle {\widehat {Q}}_{x}(h)={\frac {1}{m}}|\{i:1\leq i\leq m,\ h(x_{i})=1\}|.} The theoretical probability of h ∈ H is defined as {\displaystyle Q_{P}(h)=P\{y\in X:h(y)=1\}.} The Uniform Convergence Theorem states, roughly, that if H is "simple" and we draw samples independently (with replacement) from X according to any distribution P, then with high probability , the empirical frequency will be close to its expected value , which is the theoretical probability. [ 2 ] Here "simple" means that the Vapnik–Chervonenkis dimension of the class H is small relative to the size of the sample. In other words, a sufficiently simple collection of functions behaves roughly the same on a small random sample as it does on the distribution as a whole. The Uniform Convergence Theorem was first proved by Vapnik and Chervonenkis [ 1 ] using the concept of growth function . The statement of the uniform convergence theorem is as follows: [ 3 ] If H is a set of {0, 1}-valued functions defined on a set X and P is a probability distribution on X then for ε > 0 and m a positive integer, we have: {\displaystyle P^{m}\{x\in X^{m}:|Q_{P}(h)-{\widehat {Q}}_{x}(h)|\geq \varepsilon {\text{ for some }}h\in H\}\leq 4\Pi _{H}(2m)e^{-\varepsilon ^{2}m/8}.} Here, for any natural number m, the shattering number Π_H(m) is defined as {\displaystyle \Pi _{H}(m)=\max {\bigl \{}|H|_{x}|:x\in X^{m}{\bigr \}},} where {\displaystyle H|_{x}=\{(h(x_{1}),\ldots ,h(x_{m})):h\in H\}} is the set of restrictions of the predicates in H to the sample x. From the point of view of learning theory one can consider H to be the concept/hypothesis class defined over the instance set X. Before getting into the details of the proof of the theorem we will state Sauer's lemma, which we will need in our proof. The Sauer–Shelah lemma [ 4 ] relates the shattering number Π_H(m) to the VC dimension. Lemma: {\displaystyle \Pi _{H}(m)\leq \left({\frac {em}{d}}\right)^{d}} , where d is the VC dimension of the concept class H. Corollary: {\displaystyle \Pi _{H}(m)\leq m^{d}} .
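The phenomenon is easy to observe in simulation. The sketch below uses the class of threshold predicates h_t(x) = 1 if x ≤ t (VC dimension 1) and the uniform distribution on [0, 1] in place of P; both are illustrative choices, not part of the theorem statement. It estimates the maximal deviation between empirical frequencies and true probabilities as the sample grows:

import random

random.seed(0)
thresholds = [k / 100 for k in range(101)]   # predicates h_t(x) = 1 if x <= t

def max_deviation(m):
    sample = [random.random() for _ in range(m)]
    # For h_t under Uniform[0,1], the theoretical probability is just t.
    return max(abs(sum(x <= t for x in sample) / m - t) for t in thresholds)

for m in [100, 1000, 10000]:
    print(f"m = {m:5d}: max_t |empirical - true| = {max_deviation(m):.4f}")

# The worst-case deviation over the whole class shrinks roughly like
# O(sqrt(log(m)/m)), in line with the uniform convergence theorem.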
[ 1 ] and [ 3 ] are the sources of the proof below. Before we get into the technical details, here is a high-level overview: a symmetrization step replaces the deviation from the true probability by the deviation between two independent samples; the resulting event is then bounded over random coordinate swaps using the shattering number, a union bound and Hoeffding's inequality. We now present the technical details of the proof. Lemma (symmetrization): Let {\displaystyle V=\{x\in X^{m}:|Q_{P}(h)-{\widehat {Q}}_{x}(h)|\geq \varepsilon {\text{ for some }}h\in H\}} and {\displaystyle R=\{(r,s)\in X^{m}\times X^{m}:|{\widehat {Q}}_{r}(h)-{\widehat {Q}}_{s}(h)|\geq \varepsilon /2{\text{ for some }}h\in H\}.} Then for {\displaystyle m\geq {\frac {2}{\varepsilon ^{2}}}} , {\displaystyle P^{m}(V)\leq 2P^{2m}(R)} . Proof: By the triangle inequality, if {\displaystyle |Q_{P}(h)-{\widehat {Q}}_{r}(h)|\geq \varepsilon } and {\displaystyle |Q_{P}(h)-{\widehat {Q}}_{s}(h)|\leq \varepsilon /2} then {\displaystyle |{\widehat {Q}}_{r}(h)-{\widehat {Q}}_{s}(h)|\geq \varepsilon /2} . Therefore, since r and s are independent, {\displaystyle P^{2m}(R)\geq P^{2m}\{(r,s):\exists h\in H,\ |Q_{P}(h)-{\widehat {Q}}_{r}(h)|\geq \varepsilon {\text{ and }}|Q_{P}(h)-{\widehat {Q}}_{s}(h)|\leq \varepsilon /2\}.} Now for r ∈ V fix an h ∈ H such that {\displaystyle |Q_{P}(h)-{\widehat {Q}}_{r}(h)|\geq \varepsilon } . For this h, we shall show that {\displaystyle P^{m}\{s:|Q_{P}(h)-{\widehat {Q}}_{s}(h)|\leq \varepsilon /2\}\geq {\frac {1}{2}}.} Thus for any r ∈ V, {\displaystyle P^{m}\{s:(r,s)\in R\}\geq {\frac {1}{2}}} , and hence {\displaystyle P^{2m}(R)\geq {\frac {P^{m}(V)}{2}}} . This completes the first step of the high-level idea. To prove the displayed claim, notice that {\displaystyle m\cdot {\widehat {Q}}_{s}(h)} is a binomial random variable with expectation {\displaystyle m\cdot Q_{P}(h)} and variance {\displaystyle m\cdot Q_{P}(h)(1-Q_{P}(h))} . By Chebyshev's inequality we get {\displaystyle P^{m}\{s:|Q_{P}(h)-{\widehat {Q}}_{s}(h)|>\varepsilon /2\}\leq {\frac {Q_{P}(h)(1-Q_{P}(h))}{m(\varepsilon /2)^{2}}}\leq {\frac {1}{m\varepsilon ^{2}}}\leq {\frac {1}{2}}} for the mentioned bound on m. Here we use the fact that x(1 − x) ≤ 1/4 for all real x. Let Γ_m be the set of all permutations of {1, 2, 3, …, 2m} that swap i and m + i for all i in some subset of {1, 2, 3, …, m}. Lemma: Let R be any subset of {\displaystyle X^{2m}} and P any probability distribution on X. Then {\displaystyle P^{2m}(R)=E[\Pr _{\sigma }[\sigma (x)\in R]]\leq \max _{x\in X^{2m}}\Pr _{\sigma }[\sigma (x)\in R],} where the expectation is over x chosen according to {\displaystyle P^{2m}} , and the probability is over σ chosen uniformly from Γ_m. Proof: For any σ ∈ Γ_m, {\displaystyle P^{2m}(R)=P^{2m}\{x:\sigma (x)\in R\}} (since coordinate permutations preserve the product distribution {\displaystyle P^{2m}} ). The maximum is guaranteed to exist since there is only a finite set of values that the probability under a random permutation can take. Lemma: Based on the previous lemma, {\displaystyle \max _{x\in X^{2m}}\Pr _{\sigma }[\sigma (x)\in R]\leq 2\Pi _{H}(2m)e^{-m\varepsilon ^{2}/8}.} Proof: Let us define x = (x₁, x₂, …, x_{2m}) and {\displaystyle t=|H|_{x}|} , the number of distinct restrictions of hypotheses in H to x, which is at most {\displaystyle \Pi _{H}(2m)} . This means there are functions h₁, h₂, …, h_t ∈ H such that for any h ∈ H there exists an i between 1 and t with {\displaystyle h_{i}(x_{k})=h(x_{k})} for 1 ≤ k ≤ 2m.
We see that σ(x) ∈ R iff some h in H satisfies {\displaystyle \left|{\frac {1}{m}}|\{1\leq i\leq m:h(x_{\sigma _{i}})=1\}|-{\frac {1}{m}}|\{m+1\leq i\leq 2m:h(x_{\sigma _{i}})=1\}|\right|\geq {\frac {\varepsilon }{2}}} . Hence let us define {\displaystyle w_{i}^{j}=1} if {\displaystyle h_{j}(x_{i})=1} and {\displaystyle w_{i}^{j}=0} otherwise, for 1 ≤ i ≤ 2m and 1 ≤ j ≤ t. Then σ(x) ∈ R iff some j in {1, …, t} satisfies {\displaystyle \left|{\frac {1}{m}}\left(\sum _{i}w_{\sigma (i)}^{j}-\sum _{i}w_{\sigma (m+i)}^{j}\right)\right|\geq {\frac {\varepsilon }{2}}} . By the union bound we get {\displaystyle \Pr _{\sigma }[\sigma (x)\in R]\leq t\cdot \max _{1\leq j\leq t}\Pr _{\sigma }\left[\left|{\frac {1}{m}}\left(\sum _{i}w_{\sigma (i)}^{j}-\sum _{i}w_{\sigma (m+i)}^{j}\right)\right|\geq {\frac {\varepsilon }{2}}\right].} Since the distribution over the permutations σ is uniform for each i, the difference {\displaystyle w_{\sigma _{i}}^{j}-w_{\sigma _{m+i}}^{j}} equals {\displaystyle \pm |w_{i}^{j}-w_{m+i}^{j}|} with equal probability. Thus each probability in the maximum equals {\displaystyle \Pr _{\beta }\left[\left|{\frac {1}{m}}\sum _{i=1}^{m}\beta _{i}|w_{i}^{j}-w_{m+i}^{j}|\right|\geq {\frac {\varepsilon }{2}}\right],} where the probability on the right is over the independent uniform random signs {\displaystyle \beta _{i}\in \{-1,+1\}} , both possibilities being equally likely. By Hoeffding's inequality , this is at most {\displaystyle 2e^{-m\varepsilon ^{2}/8}} . Since {\displaystyle t\leq \Pi _{H}(2m)} , combining all three parts of the proof we get the Uniform Convergence Theorem .
https://en.wikipedia.org/wiki/Uniform_convergence_in_probability
Uniformat is a standard for classifying building specifications , cost estimating , and cost analysis in the U.S. and Canada. The elements are major components common to most buildings. The system can be used to provide consistency in the economic evaluation of building projects. It was developed through an industry and government consensus and has been widely accepted as an ASTM standard. [ 1 ] Hanscomb Associates , a cost consultant, developed a system called MASTERCOST in 1973 for the American Institute of Architects (AIA). The U.S. General Services Administration (GSA), which is responsible for government buildings, was also developing a system. The AIA and GSA agreed on a system and named it UNIFORMAT. The AIA included it in their practice on construction management, and the GSA included it in their project estimating requirements. In 1989, ASTM International began developing a standard for classifying building elements based on UNIFORMAT, which was renamed UNIFORMAT II. [ 2 ] In 1995, the Construction Specifications Institute (CSI) and Construction Specifications Canada (CSC) began to revise Uniformat. UniFormat is now a registered trademark of CSI and CSC and was most recently published in 2010. [ 3 ] A new strategy to classify the built environment, named OmniClass, [ 4 ] incorporates the elemental building classification in its Table 21 Elements. The numbering system is changed in OmniClass. An example of how the numbering system expands to provide additional detail below Level 1 is sketched below for A SUBSTRUCTURE.
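A minimal sketch of the idea, written as a nested Python mapping (the Level 2/3 codes shown follow the ASTM UNIFORMAT II convention; exact titles vary between editions, so treat them as illustrative rather than normative):

# Illustrative UNIFORMAT II-style expansion of Level 1 element "A SUBSTRUCTURE".
substructure = {
    "A": ("SUBSTRUCTURE", {
        "A10": ("Foundations", {
            "A1010": "Standard Foundations",
            "A1020": "Special Foundations",
            "A1030": "Slab on Grade",
        }),
        "A20": ("Basement Construction", {
            "A2010": "Basement Excavation",
            "A2020": "Basement Walls",
        }),
    }),
}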
https://en.wikipedia.org/wiki/Uniformat
Uniformity of Content is a pharmaceutical analysis parameter for the quality control of capsules or tablets . Multiple capsules or tablets are selected at random and a suitable analytical method is applied to assay the individual content of the active ingredient in each capsule or tablet. The preparation complies if not more than one individual content is outside the limits of 85 to 115% of the average content and none is outside the limits of 75 to 125% of the average content. (If two or three individual contents fall outside the 85 to 115% limits but within the 75 to 125% limits, the pharmacopoeial procedure calls for repeating the determination on additional units.) The preparation fails to comply with the test if more than 3 individual contents are outside the limits of 85 to 115% of the average content or if one or more individual contents are outside the limits of 75% to 125% of the average content. [ 1 ]
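The pass/fail logic stated above is simple to express in code. A sketch (the function name and the three-way "retest" outcome for the intermediate case are illustrative, not pharmacopoeia text):

def content_uniformity_verdict(contents):
    """Classify assay results against the stated 85-115% / 75-125% limits."""
    mean = sum(contents) / len(contents)
    outside_85_115 = sum(not (0.85 * mean <= c <= 1.15 * mean) for c in contents)
    outside_75_125 = sum(not (0.75 * mean <= c <= 1.25 * mean) for c in contents)

    if outside_75_125 > 0 or outside_85_115 > 3:
        return "fails"
    if outside_85_115 <= 1:
        return "complies"
    return "retest more units"   # intermediate case: 2-3 outside 85-115%

print(content_uniformity_verdict([98, 101, 103, 97, 99, 102, 100, 96, 104, 100]))
# -> "complies"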
https://en.wikipedia.org/wiki/Uniformity_of_content
There have been several attempts in history to reach a unified theory of mathematics . Some of the most respected mathematicians in academia have expressed views that the whole subject should be fitted into one theory (examples include Hilbert's program and the Langlands program ). The unification of mathematical topics has been called mathematical consolidation : [ 1 ] "By a consolidation of two or more concepts or theories T i we mean the creation of a new theory which incorporates elements of all the T i into one system which achieves more general implications than are obtainable from any single T i ." The process of unification might be seen as helping to define what constitutes mathematics as a discipline. For example, mechanics and mathematical analysis were commonly combined into one subject during the 18th century, united by the differential equation concept; while algebra and geometry were considered largely distinct. Now we consider analysis, algebra, and geometry, but not mechanics, as parts of mathematics because they are primarily deductive formal sciences , while mechanics, like physics, must proceed from observation. There is no major loss of content, with analytical mechanics in the old sense now expressed in terms of symplectic topology , based on the newer theory of manifolds . The term theory is used informally within mathematics to mean a self-consistent body of definitions , axioms , theorems , examples, and so on. (Examples include group theory , Galois theory , control theory , and K-theory .) In particular there is no connotation of hypothetical . Thus the term unifying theory is more like a sociological term used to study the actions of mathematicians. It may assume nothing conjectural that would be analogous to an undiscovered scientific link. There is really no cognate within mathematics to such concepts as Proto-World in linguistics or the Gaia hypothesis . Nonetheless there have been several episodes within the history of mathematics in which sets of individual theorems were found to be special cases of a single unifying result, or in which a single perspective about how to proceed when developing an area of mathematics could be applied fruitfully to multiple branches of the subject. A well-known example was the development of analytic geometry , which in the hands of mathematicians such as Descartes and Fermat showed that many theorems about curves and surfaces of special types could be stated in algebraic language (then new), each of which could then be proved using the same techniques. That is, the theorems were very similar algebraically, even if the geometrical interpretations were distinct. In 1859, Arthur Cayley initiated a unification of metric geometries through use of the Cayley–Klein metrics . Later Felix Klein used such metrics to provide a foundation for non-Euclidean geometry . In 1872, Felix Klein noted that the many branches of geometry which had been developed during the 19th century ( affine geometry , projective geometry , hyperbolic geometry , etc.) could all be treated in a uniform way. He did this by considering the groups under which the geometric objects were invariant. This unification of geometry goes by the name of the Erlangen programme . [ 2 ] The general theory of angle can be unified with an invariant measure of area . The hyperbolic angle is defined in terms of area, very nearly the area associated with the natural logarithm .
The circular angle also has an area interpretation, when referred to a circle with radius equal to the square root of two. These areas are invariant with respect to hyperbolic rotation and circular rotation respectively. These affine transformations are effected by elements of the special linear group SL(2,R) . Inspection of that group reveals shear mappings , which increase or decrease slopes but leave differences of slope unchanged. A third type of angle, also interpreted as an area dependent on slope differences, is invariant because of the area preservation of a shear mapping. [ 3 ] Early in the 20th century, many parts of mathematics began to be treated by delineating useful sets of axioms and then studying their consequences. Thus, for example, the studies of " hypercomplex numbers ", such as considered by the Quaternion Society , were put onto an axiomatic footing as branches of ring theory (in this case, with the specific meaning of associative algebras over the field of complex numbers). In this context, the quotient ring concept is one of the most powerful unifiers. This was a general change of methodology, since the needs of applications had up until then meant that much of mathematics was taught by means of algorithms (or processes close to being algorithmic). Arithmetic is still taught that way. It was a parallel to the development of mathematical logic as a stand-alone branch of mathematics. By the 1930s symbolic logic itself was adequately included within mathematics. In most cases, mathematical objects under study can be defined (albeit non-canonically) as sets or, more informally, as sets with additional structure such as an addition operation. Set theory now serves as a lingua franca for the development of mathematical themes. The cause of axiomatic development was taken up in earnest by the Bourbaki group of mathematicians. Taken to its extreme, this attitude was thought to demand mathematics developed in its greatest generality. One started from the most general axioms, and then specialized, for example, by introducing modules over commutative rings , and limiting to vector spaces over the real numbers only when absolutely necessary. The story proceeded in this fashion, even when the specializations were the theorems of primary interest. In particular, this perspective placed little value on fields of mathematics (such as combinatorics ) whose objects of study are very often special, or found in situations which can only superficially be related to more axiomatic branches of the subject. Category theory is a unifying theory of mathematics that was initially developed in the second half of the 20th century. [ 4 ] In this respect, it is an alternative and complement to set theory. A key theme from the "categorical" point of view is that mathematics requires not only certain kinds of objects ( Lie groups , Banach spaces , etc.) but also mappings between them that preserve their structure. In particular, this clarifies exactly what it means for mathematical objects to be considered to be the same . (For example, are all equilateral triangles the same , or does size matter?) Saunders Mac Lane proposed that any concept with enough 'ubiquity' (occurring in various branches of mathematics) deserved isolating and studying in its own right. Category theory is arguably better adapted to that end than any other current approach. The disadvantages of relying on so-called abstract nonsense are a certain blandness and abstraction in the sense of breaking away from the roots in concrete problems.
Nevertheless, the methods of category theory have steadily advanced in acceptance, in numerous areas (from D-modules to categorical logic ). On a less grand scale, similarities between sets of results in two different branches of mathematics raise the question of whether a unifying framework exists that could explain the parallels. We have already noted the example of analytic geometry, and more generally the field of algebraic geometry thoroughly develops the connections between geometric objects ( algebraic varieties , or more generally schemes ) and algebraic ones ( ideals ); the touchstone result here is Hilbert's Nullstellensatz , which roughly speaking shows that there is a natural one-to-one correspondence between the two types of objects. One may view other theorems in the same light. For example, the fundamental theorem of Galois theory asserts that there is a one-to-one correspondence between extensions of a field and subgroups of the field's Galois group . The Taniyama–Shimura conjecture for elliptic curves (now proven) establishes a one-to-one correspondence between curves defined as modular forms and elliptic curves defined over the rational numbers . A research area sometimes nicknamed Monstrous Moonshine developed connections between modular forms and the finite simple group known as the Monster , starting solely with the surprise observation that in each of them the rather unusual number 196884 would arise very naturally. Another field, known as the Langlands program , likewise starts with apparently haphazard similarities (in this case, between number-theoretical results and representations of certain groups) and looks for constructions from which both sets of results would be corollaries. The best known of these unifying conjectures is the Taniyama–Shimura conjecture , now the modularity theorem , which proposed that each elliptic curve over the rational numbers can be translated into a modular form (in such a way as to preserve the associated L-function ). There are difficulties in identifying this with an isomorphism, in any strict sense of the word. Certain curves had been known to be both elliptic curves (of genus 1) and modular curves , before the conjecture was formulated (about 1955). The surprising part of the conjecture was the extension to factors of Jacobians of modular curves of genus > 1. It had probably not seemed plausible that there would be 'enough' such rational factors, before the conjecture was enunciated; and in fact the numerical evidence was slight until around 1970, when tables began to confirm it. The case of elliptic curves with complex multiplication was proved by Shimura in 1964. This conjecture stood for decades before being proved in full generality. In fact the Langlands program (or philosophy) is much more like a web of unifying conjectures; it really does postulate that the general theory of automorphic forms is regulated by the L-groups introduced by Robert Langlands . His principle of functoriality with respect to the L-group has a very large explanatory value with respect to known types of lifting of automorphic forms (now more broadly studied as automorphic representations ). While this theory is in one sense closely linked with the Taniyama–Shimura conjecture, it should be understood that the conjecture actually operates in the opposite direction. It requires the existence of an automorphic form, starting with an object that (very abstractly) lies in a category of motives .
Another significant related point is that the Langlands approach stands apart from the whole development triggered by monstrous moonshine (connections between elliptic modular functions as Fourier series and the group representations of the Monster group and other sporadic groups ). The Langlands philosophy neither foreshadowed nor was able to include this line of research. Another case, which so far is less well-developed but covers a wide range of mathematics, is the conjectural basis of some parts of K-theory . The Baum–Connes conjecture , now a long-standing problem, has been joined by others in a group known as the isomorphism conjectures in K-theory . These include the Farrell–Jones conjecture and the Bost conjecture .
https://en.wikipedia.org/wiki/Unifying_theories_in_mathematics
In contact mechanics , the term unilateral contact , also called unilateral constraint , denotes a mechanical constraint which prevents penetration between two rigid/flexible bodies. Constraints of this kind are omnipresent in non-smooth multibody dynamics applications, such as granular flows, [ 1 ] legged robots , vehicle dynamics , particle damping , imperfect joints, [ 2 ] or rocket landings. In these applications, the unilateral constraints result in impacts, and suitable methods are therefore required to deal with such constraints. There are mainly two kinds of methods to model the unilateral constraints. The first kind is based on smooth contact dynamics , including methods using Hertz's models, penalty methods, and some regularization force models, while the second kind is based on non-smooth contact dynamics , which models the system with unilateral contacts as variational inequalities . In the first, smooth kind of method, normal forces generated by the unilateral constraints are modelled according to the local material properties of bodies. In particular, contact force models are derived from continuum mechanics, and expressed as functions of the gap and the impact velocity of bodies. A classic example is the Hertz contact model, in which the contact is explained by the local deformation of the bodies. More contact models can be found in some review scientific works [ 3 ] [ 4 ] [ 5 ] or in the article dedicated to contact mechanics . In the non-smooth method, unilateral interactions between bodies are fundamentally modelled by the Signorini condition [ 6 ] for non-penetration, and impact laws are used to define the impact process. [ 7 ] The Signorini condition can be expressed as the complementarity problem: {\displaystyle g\geq 0,\quad \lambda \geq 0,\quad \lambda \perp g} , where g denotes the normal distance (gap) between the two bodies and λ denotes the contact force generated by the unilateral constraints. Moreover, in terms of the concept of proximal point of convex theory, the Signorini condition can be equivalently expressed [ 6 ] [ 8 ] as: {\displaystyle \lambda ={\rm {proj}}_{\mathbb {R} ^{+}}(\lambda -\rho g)} , where ρ > 0 denotes an auxiliary parameter, and {\displaystyle {\rm {proj}}_{\bf {C}}(x)} represents the proximal point in the set C to the variable x, [ 9 ] defined as: {\displaystyle {\rm {proj}}_{\bf {C}}(x)={\rm {argmin}}_{y\in C}\|y-x\|} . Both the expressions above represent the dynamic behaviour of unilateral constraints: on the one hand, when the normal distance g is above zero, the contact is open, which means that there is no contact force between bodies, λ = 0; on the other hand, when the normal distance g is equal to zero, the contact is closed, resulting in λ ≥ 0. When implementing non-smooth theory based methods, the velocity Signorini condition or the acceleration Signorini condition are actually employed in most cases.
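To make the proximal-point form concrete, here is a minimal sketch with illustrative values: a one-degree-of-freedom particle of mass m pressed onto the ground by gravity, with acceleration-level gap function g̈(λ) = (λ − m g₀)/m. The fixed-point iteration λ ← proj_{R+}(λ − ρ g̈) recovers the complementarity solution λ = m g₀:

m, g0, rho = 1.0, 9.81, 0.5      # mass, gravity, auxiliary parameter (assumed)

def gap_acc(lam):
    # Normal acceleration of the contact point: (contact force - weight) / mass
    return (lam - m * g0) / m

lam = 0.0
for _ in range(200):
    # Proximal-point form of the (acceleration-level) Signorini condition:
    # lambda = proj_{R+}(lambda - rho * gap_acceleration)
    lam = max(0.0, lam - rho * gap_acc(lam))

print(lam)            # -> 9.81: contact closed, force balances the weight
print(gap_acc(lam))   # -> 0.0: the complementarity product vanishes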
The velocity Signorini condition is expressed as: [ 6 ] [ 10 ] {\displaystyle U_{\rm {N}}^{+}\geq 0,\quad \lambda \geq 0,\quad U_{\rm {N}}^{+}\lambda =0} , where {\displaystyle U_{\rm {N}}^{+}} denotes the relative normal velocity after impact. The velocity Signorini condition should be understood together with the previous conditions {\displaystyle g\geq 0,\;\lambda \geq 0,\;\lambda \perp g} . The acceleration Signorini condition is considered under closed contact ( {\displaystyle g=0,U_{\rm {N}}^{+}=0} ), as: [ 8 ] {\displaystyle {\ddot {g}}\geq 0,\quad \lambda \geq 0,\quad {\ddot {g}}\lambda =0} , where the overdots denote the second-order derivative with respect to time. When using this method for unilateral constraints between two rigid bodies, the Signorini condition alone is not enough to model the impact process, so impact laws, which give the information about the states before and after the impact, [ 6 ] are also required. For example, when the Newton restitution law is employed, a coefficient of restitution will be defined as: {\displaystyle e=-{U_{\rm {N}}^{+}}/{U_{\rm {N}}^{-}}} , where {\displaystyle U_{\rm {N}}^{-}} denotes the relative normal velocity before impact. For frictional unilateral constraints, the normal contact forces are modelled by one of the methods above, while the friction forces are commonly described by means of Coulomb's friction law . Coulomb's friction law can be expressed as follows: when the tangential velocity {\displaystyle U_{\rm {T}}} is not equal to zero, namely when the two bodies are sliding, the friction force {\displaystyle \lambda _{\rm {T}}} is proportional to the normal contact force λ; when instead the tangential velocity {\displaystyle U_{\rm {T}}} is equal to zero, namely when the two bodies are relatively steady, the friction force {\displaystyle \lambda _{\rm {T}}} is no more than the maximum of the static friction force. This relationship can be summarised using the maximum dissipation principle, [ 6 ] as {\displaystyle \lambda _{\rm {T}}\in D(\mu \lambda ),\qquad (S-\lambda _{\rm {T}})U_{\rm {T}}\geq 0\quad \forall S\in D(\mu \lambda ),} where {\displaystyle D(\mu \lambda )=\{x:\|x\|\leq \mu \lambda \}} represents the friction cone, and μ denotes the kinetic friction coefficient. Similarly to the normal contact force, the formulation above can be equivalently expressed in terms of the notion of proximal point as: [ 6 ] {\displaystyle \lambda _{\rm {T}}={\rm {proj}}_{D(\mu \lambda )}(\lambda _{\rm {T}}-\rho U_{\rm {T}})} , where ρ > 0 denotes an auxiliary parameter. If the unilateral constraints are modelled by the continuum mechanics based contact models, the contact forces can be computed directly through an explicit mathematical formula that depends on the contact model of choice. If instead the non-smooth theory based method is employed, there are two main formulations for the solution of the Signorini conditions: the nonlinear/linear complementarity problem (N/LCP) formulation and the augmented Lagrangian formulation. With respect to the solution of contact models, the non-smooth method is more tedious, but less costly from the computational viewpoint.
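A sketch of the corresponding one-dimensional friction update (illustrative numbers; in 1D the friction cone D(μλ) is just the interval [−μλ, μλ], so the projection is a clamp):

def proj_interval(x, bound):
    # Projection onto [-bound, bound]: the 1D friction cone D(mu*lambda)
    return max(-bound, min(bound, x))

mu, lam, rho = 0.3, 9.81, 0.5    # friction coefficient, normal force, parameter

for u_t in [2.0, -1.0, 0.0]:     # tangential relative velocities
    lam_t = 0.0
    for _ in range(200):
        # lambda_T = proj_{D(mu*lambda)}(lambda_T - rho * U_T)
        lam_t = proj_interval(lam_t - rho * u_t, mu * lam)
    print(f"U_T = {u_t:+.1f}: friction force = {lam_t:+.3f}")

# Sliding (U_T != 0) saturates the force at -sign(U_T)*mu*lambda = -/+2.943;
# sticking (U_T = 0) leaves any force inside the cone admissible (here 0).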
A more detailed comparison of solution methods using contact models and non-smooth theory was carried out by Pazouki et al. [ 11 ] Following the N/LCP approach, the solution of dynamics equations with unilateral constraints is transformed into the solution of N/LCPs. In particular, for frictionless unilateral constraints or unilateral constraints with planar friction, the problem is transformed into LCPs, while for frictional unilateral constraints, the problem is transformed into NCPs. To solve LCPs, the pivoting algorithm , originating from the algorithm of Lemke and Dantzig, is the most popular method. [ 8 ] Unfortunately, however, numerical experiments show that the pivoting algorithm may fail when handling systems with a large number of unilateral contacts, even using the best optimizations. [ 12 ] For NCPs, using a polyhedral approximation can transform the NCPs into a set of LCPs, which can then be solved by the LCP solver. [ 13 ] Other approaches beyond these methods, such as NCP-functions [ 14 ] [ 15 ] [ 16 ] or cone complementarity problem (CCP) based methods, [ 17 ] [ 18 ] are also employed to solve NCPs. Different from the N/LCP formulations, the augmented Lagrangian formulation uses the proximal functions described above, {\displaystyle \lambda ={\rm {proj}}_{\mathbb {R} ^{+}}(\lambda -\rho g)} . Together with the dynamics equations, this formulation is solved by means of root-finding algorithms . A comparative study between LCP formulations and the augmented Lagrangian formulation was carried out by Mashayekhi et al. [ 9 ]
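As an illustration of the LCP route, below is a minimal projected Gauss-Seidel solver, a common fixed-point scheme in non-smooth contact solvers (the two-contact data are made up for the demo). It solves w = Mz + q with w ≥ 0, z ≥ 0 and zᵀw = 0:

def pgs_lcp(M, q, iters=200):
    """Projected Gauss-Seidel for the LCP: w = M z + q, w,z >= 0, z.w = 0."""
    n = len(q)
    z = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            r = q[i] + sum(M[i][j] * z[j] for j in range(n))  # residual w_i
            z[i] = max(0.0, z[i] - r / M[i][i])               # project on R+
    return z

# Two-contact toy problem (illustrative data): M plays the role of the
# Delassus operator, q collects gap accelerations from external forces.
M = [[2.0, 0.5],
     [0.5, 1.0]]
q = [-1.0, 0.3]

z = pgs_lcp(M, q)
w = [q[i] + sum(M[i][j] * z[j] for j in range(2)) for i in range(2)]
print(z, w)   # z >= 0, w >= 0, and z[i]*w[i] = 0 at each contact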
https://en.wikipedia.org/wiki/Unilateral_contact
The Unimat was a series of combination machines sold for light hobbyist engineering, such as model engineering . They were distinctive in that the same major components could be re-arranged into either a lathe or a milling machine . The name covers a range of commercially sold machines intended for machining and metalworking by model-making hobbyists, manufactured by the Emco company. The machines enable the user to have a drill press , lathe and milling machine . Most of the Unimat range is no longer in production, but the smallest model, the Unimat 1, and its variants are now produced by Cool Tool GmbH. The basic model of the Unimat 1 has many plastic parts. It is capable of working mostly wood and plastics. [ 1 ] The classic model of the Unimat 1 has many plastic parts and plastic machining cross slides. It is capable of working wood, plastic and soft metals such as aluminum and brass. It is the same machine as the basic model with additional capabilities such as wood turning, sawing, drilling, milling and metal turning. A collection of enhancements is available for any of the Unimat 1 tools, including options for more power, a table saw and a router table. A further model of the Unimat 1 has been upgraded to metal parts and cross slides that give the unit a higher level of accuracy. It is capable of working any of the materials from the basic or classic versions plus soft steel.
https://en.wikipedia.org/wiki/Unimat
Unimate was the first industrial robot , [ 1 ] which worked on a General Motors assembly line at the Inland Fisher Guide Plant in Ewing Township, New Jersey , in 1961. [ 2 ] [ 3 ] [ 4 ] There was in fact a family of robots. It was invented by George Devol in the 1950s using his original patent filed in 1954 and granted in 1961 [ 5 ] ( U.S. patent 2,988,237 ). The patent is titled "Programmed Article Transfer" (PAT) and begins: The present invention relates to the automatic operation of machinery, particularly the handling apparatus, and to automatic control apparatus suited for such machinery. [ 6 ] Devol, together with Joseph Engelberger , his business associate, started the world's first robot manufacturing company, Unimation . [ 7 ] Devol's background was not in academia, but in engineering and mechanics; he had previously worked on optical sound recording for film and on high-speed printing using magnetic sensing and recording. Engelberger's ultimate goal was to create mechanical workers to replace humans in factories. [ 8 ] The machine weighed 4000 pounds [ 9 ] and undertook the job of transporting die castings from an assembly line and welding these parts onto auto bodies, a dangerous task for workers, who might be poisoned by toxic fumes or lose a limb if they were not careful. [ 4 ] The original Unimate consisted of a large computer-like box, joined to another box, and was connected to an arm, with systematic tasks stored in a drum memory . In 2003 the Unimate was inducted into the Robot Hall of Fame . [ 10 ] The 1961 Unimate installed at a General Motors factory differed significantly from George Devol's 1954 patented design. The Unimate was a hydraulically actuated programmable manipulator arm with 5 degrees of freedom . This contrasted with the simpler three-prismatic-link pick-and-place arm described in Devol's "Programmed Article Transfer" (PAT) patent. [ 8 ] Devol's earlier methodology, involving the conversion of analog information into electrical signals, formed the basis for subsequent patents. The patent proposed a cost-effective, general-purpose article-handling machine for diverse industrial tasks, with programmable motions, including gripping mechanisms. It would have a wheeled chassis on rails, a base unit housing the movement-recording program drum, an elevator for vertical arm translation, a telescoping arm with a transfer head and gripper, and a three-dimensional position-sensing system using encoders and sensing heads. [ 8 ] The position-sensing system (proprioception) had two versions: one using notched metal strips and ferrous material detection, the other a magnetized plate with polarity-based sensing and recording. Similarly, the program drum had two forms: a malleable metal sheet with mechanically deformed bulges, and a solid magnetizable drum. Both drum types used corresponding reading mechanisms. [ 8 ] The robot could be programmed by manually moving the gripper and recording the location on the program drum; it could then perform the same motion, an early form of imitation learning . This resulted in point-to-point movement, a standard feature in modern robotic arms. The magnetizable drum also allowed recording continuous movements along curved paths, synchronized with a timing reference for playback. [ 8 ] The fixed encoder array on the base unit served as a location index for recording, enabling deceleration near programmed positions and self-correction during operation. [ 8 ]
The Unimate appeared on The Tonight Show hosted by Johnny Carson , where it knocked a golf ball into a cup, poured a beer, waved the orchestra conductor's baton, and grasped an accordion and waved it around. [ 7 ] [ 11 ] Fictional robots called Unimate , designed by the character Alan von Neumann, Jr. , appeared in comic books from DC Comics . [ 12 ]
https://en.wikipedia.org/wiki/Unimate
In mathematics , unimodality means possessing a unique mode . More generally, unimodality means there is only a single highest value, somehow defined, of some mathematical object . [ 1 ] In statistics , a unimodal probability distribution or unimodal distribution is a probability distribution which has a single peak. The term "mode" in this context refers to any peak of the distribution, not just to the strict definition of mode which is usual in statistics. If there is a single mode, the distribution function is called "unimodal". If it has more modes, it is "bimodal" (2), "trimodal" (3), etc., or in general, "multimodal". [ 2 ] Figure 1 illustrates normal distributions , which are unimodal. Other examples of unimodal distributions include the Cauchy distribution , Student's t -distribution , chi-squared distribution and exponential distribution . Among discrete distributions, the binomial distribution and Poisson distribution can be seen as unimodal, though for some parameters they can have two adjacent values with the same probability. Figure 2 and Figure 3 illustrate bimodal distributions. Other definitions of unimodality in distribution functions also exist. In continuous distributions, unimodality can be defined through the behavior of the cumulative distribution function (cdf). [ 3 ] If the cdf is convex for x < m and concave for x > m , then the distribution is unimodal, m being the mode. Note that under this definition the uniform distribution is unimodal, [ 4 ] as is any other distribution in which the maximum value is achieved over a range of values, e.g. the trapezoidal distribution. This definition also allows for a discontinuity at the mode: in a continuous distribution the probability of any single value is normally zero, while this definition allows for a non-zero probability, or an "atom of probability", at the mode. Criteria for unimodality can also be defined through the characteristic function of the distribution [ 3 ] or through its Laplace–Stieltjes transform . [ 5 ] Another way to define a unimodal discrete distribution is by the occurrence of sign changes in the sequence of differences of the probabilities. [ 6 ] A discrete distribution with a probability mass function { p_n : n = …, −1, 0, 1, … } is called unimodal if the sequence …, p_{−2} − p_{−1}, p_{−1} − p_0, p_0 − p_1, p_1 − p_2, … has exactly one sign change (when zeroes don't count). One reason for the importance of distribution unimodality is that it allows for several important results. Several inequalities are given below which are only valid for unimodal distributions. Thus, it is important to assess whether or not a given data set comes from a unimodal distribution. Several tests for unimodality are given in the article on multimodal distribution . A first important result is Gauss's inequality . [ 7 ] Gauss's inequality gives an upper bound on the probability that a value lies more than any given distance from its mode. This inequality depends on unimodality. A second is the Vysochanskiï–Petunin inequality , [ 8 ] a refinement of the Chebyshev inequality . The Chebyshev inequality guarantees that in any probability distribution, "nearly all" the values are "close to" the mean value. The Vysochanskiï–Petunin inequality refines this to even nearer values, provided that the distribution function is continuous and unimodal.
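The sign-change criterion above is easy to check mechanically. The sketch below is a minimal illustration, not a statistical test: it pads the pmf with zeros on both sides, as the doubly infinite sequence in the definition implies, and counts sign changes among the nonzero successive differences.

```python
def is_unimodal_pmf(p: list[float]) -> bool:
    """Exactly one sign change in the differences p[n] - p[n+1],
    with zero differences skipped, marks a unimodal pmf."""
    padded = [0.0] + list(p) + [0.0]   # implicit zeros outside the support
    diffs = [a - b for a, b in zip(padded, padded[1:])]
    signs = [d > 0 for d in diffs if d != 0]
    changes = sum(1 for s, t in zip(signs, signs[1:]) if s != t)
    return changes == 1

print(is_unimodal_pmf([0.1, 0.2, 0.4, 0.2, 0.1]))   # True: single peak
print(is_unimodal_pmf([0.5, 0.25, 0.125, 0.125]))   # True: peak at the start
print(is_unimodal_pmf([0.3, 0.1, 0.2, 0.1, 0.3]))   # False: two peaks
```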
Further results were shown by Sellke and Sellke. [ 9 ] Gauss also showed in 1823 that for a unimodal distribution [ 10 ] σ ≤ ω ≤ 2σ and |ν − μ| ≤ √(3/4) ω, where the median is ν , the mean is μ and ω is the root mean square deviation from the mode. It can be shown for a unimodal distribution that the median ν and the mean μ lie within (3/5)^{1/2} ≈ 0.7746 standard deviations of each other: in symbols, [ 11 ] |ν − μ| ≤ √(3/5) σ, where | · | is the absolute value and σ is the standard deviation. In 2020, Bernard, Kazzi, and Vanduffel generalized the previous inequality by deriving the maximum distance between the symmetric quantile average (q_α + q_{1−α})/2 and the mean. [ 12 ] The maximum distance is minimized at α = 0.5 (i.e., when the symmetric quantile average is equal to q_{0.5} = ν ), which indeed motivates the common choice of the median as a robust estimator for the mean. Moreover, when α = 0.5, the bound is equal to √(3/5), which is the maximum distance between the median and the mean of a unimodal distribution. A similar relation holds between the median and the mode θ : they lie within 3^{1/2} ≈ 1.732 standard deviations of each other: |ν − θ| ≤ √3 σ. It can also be shown that the mean and the mode lie within 3^{1/2} standard deviations of each other: |μ − θ| ≤ √3 σ. Rohatgi and Szekely claimed that the skewness and kurtosis of a unimodal distribution are related by the inequality [ 13 ] γ² − κ ≤ 6/5, where κ is the kurtosis and γ is the skewness. Klaassen, Mokveld, and van Es showed that this only applies in certain settings, such as the set of unimodal distributions where the mode and mean coincide. [ 14 ] They derived a weaker inequality which applies to all unimodal distributions: [ 14 ] γ² − κ ≤ 186/125. This bound is sharp, as it is reached by the equal-weights mixture of the uniform distribution on [0,1] and the discrete distribution at {0}. As the term "modal" applies to data sets and probability distributions, and not in general to functions , the definitions above do not apply. The definition of "unimodal" has been extended to functions of real numbers as well. A common definition is as follows: a function f ( x ) is a unimodal function if for some value m , it is monotonically increasing for x ≤ m and monotonically decreasing for x ≥ m . In that case, the maximum value of f ( x ) is f ( m ) and there are no other local maxima. Proving unimodality is often hard. One way consists in using the definition of that property, but this turns out to be suitable for simple functions only. A general method based on derivatives exists, [ 15 ] but it does not succeed for every function despite its simplicity. Examples of unimodal functions include quadratic polynomial functions with a negative quadratic coefficient, tent map functions, and more. The above is sometimes referred to as strong unimodality , from the fact that the monotonicity implied is strong monotonicity . A function f ( x ) is a weakly unimodal function if there exists a value m for which it is weakly monotonically increasing for x ≤ m and weakly monotonically decreasing for x ≥ m . In that case, the maximum value f ( m ) can be reached for a continuous range of values of x . An example of a weakly unimodal function which is not strongly unimodal is every other row in Pascal's triangle . Depending on context, unimodal function may also refer to a function that has only one local minimum, rather than a maximum. [ 16 ]
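The Pascal's-triangle example can be verified mechanically. The sketch below is a direct transcription of the weak-unimodality definition for finite sequences, with `math.comb` used to build the rows; rows with odd n have two equal central entries, so they are weakly but not strongly unimodal.

```python
from math import comb

def is_weakly_unimodal(seq) -> bool:
    """Weakly rises to a maximum region, then weakly falls."""
    i = 0
    while i + 1 < len(seq) and seq[i] <= seq[i + 1]:
        i += 1                      # climb the (weakly) rising part
    while i + 1 < len(seq) and seq[i] >= seq[i + 1]:
        i += 1                      # descend the (weakly) falling part
    return i == len(seq) - 1        # reached the end without a second rise

row4 = [comb(4, k) for k in range(5)]  # [1, 4, 6, 4, 1]: unique peak
row5 = [comb(5, k) for k in range(6)]  # [1, 5, 10, 10, 5, 1]: flat peak
print(is_weakly_unimodal(row4), is_weakly_unimodal(row5))  # True True
print(is_weakly_unimodal([1, 3, 2, 4]))                    # False
```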
For example, local unimodal sampling , a method for doing numerical optimization, is often demonstrated with a function having a single local minimum. It can be said that a unimodal function under this extension is a function with a single local extremum . One important property of unimodal functions is that the extremum can be found using search algorithms such as golden section search , ternary search or successive parabolic interpolation . [ 17 ] A function f ( x ) is "S-unimodal" (often referred to as an "S-unimodal map") if its Schwarzian derivative is negative for all x ≠ c , where c is the critical point. [ 18 ] In computational geometry , if a function is unimodal, this permits the design of efficient algorithms for finding the extrema of the function. [ 19 ] A more general definition, applicable to a function f ( X ) of a vector variable X , is that f is unimodal if there is a one-to-one differentiable mapping X = G ( Z ) such that f ( G ( Z )) is convex. Usually one would want G ( Z ) to be continuously differentiable with nonsingular Jacobian matrix. Quasiconvex functions and quasiconcave functions extend the concept of unimodality to functions whose arguments belong to higher-dimensional Euclidean spaces .
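Because a unimodal function attains its extremum at a single point, it can be bracketed by comparisons alone. The following is a minimal golden-section search sketch (the tolerance and the test function are arbitrary illustrative choices); ternary search works analogously, splitting the interval into thirds instead of by the golden ratio.

```python
import math

def golden_section_max(f, a, b, tol=1e-8):
    """Locate the maximizer of a unimodal function f on [a, b].

    Each step discards the subinterval that cannot contain the
    maximum, shrinking [a, b] by the golden ratio."""
    invphi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - invphi * (b - a), a + invphi * (b - a)
    while b - a > tol:
        if f(c) < f(d):                      # max cannot lie in [a, c]
            a, c = c, d
            d = a + invphi * (b - a)
        else:                                # max cannot lie in [d, b]
            b, d = d, c
            c = b - invphi * (b - a)
    return (a + b) / 2

# Example: f(x) = -(x - 2)^2 is unimodal with maximum at x = 2.
print(golden_section_max(lambda x: -(x - 2) ** 2, 0.0, 5.0))  # ~2.0
```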
https://en.wikipedia.org/wiki/Unimodality
In mathematics , a unimodular polynomial matrix is a square polynomial matrix whose inverse exists and is itself a polynomial matrix. Equivalently, a polynomial matrix A is unimodular if its determinant det( A ) is a nonzero constant . [ 1 ]
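A small worked example (constructed here for illustration; it is not from the cited source) makes the definition concrete: an upper-triangular polynomial matrix with constant nonzero diagonal entries is unimodular, since its determinant is the product of the diagonal entries.

```latex
A(x) = \begin{pmatrix} 1 & x^{2}+1 \\ 0 & 2 \end{pmatrix},
\qquad \det A(x) = 2,
\qquad A(x)^{-1} = \begin{pmatrix} 1 & -\tfrac{1}{2}\,(x^{2}+1) \\ 0 & \tfrac{1}{2} \end{pmatrix}.
```

Both A(x) and its inverse have polynomial entries, as required. By contrast, the matrix diag(x, 1) has determinant x, which is not a nonzero constant, and its inverse would need the non-polynomial entry 1/x.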
https://en.wikipedia.org/wiki/Unimodular_polynomial_matrix
The Unimog (pronunciation in American English: YOU-nuh-mog ; British English: YOU-knee-mog ; [ 1 ] German: [ˈʊnɪmɔk] , listen ⓘ ) is a Daimler Truck line of multi-purpose, highly offroad-capable AWD vehicles produced since 1948. Utilizing engine-driven power take-offs (PTO), Unimogs have operated in the roles of tractors, light trucks and lorries, for snow plowing, in agriculture, forestry, rural firefighting, in the military, even in rallying and as recreational vehicles. The frame is designed to be a flexible part of the suspension, not to carry heavy loads. Conceived in 1944 in response to the Morgenthau Plan , the Unimog was developed as prototypes by former Daimler-Benz airplane engine engineers under occupation. The small, universally applicable, motorised 25 hp workhorse was designed to be able to fit over two rows of potatoes to work on fields like a slow agricultural tractor , but with four equal-size wheels on portal axles , coil spring suspension, and many gears allowing it to run on roads like a truck. Unimog production started in 1948 at Boehringer [ de ] in Göppingen . When larger production numbers were needed, Daimler-Benz took over manufacture of the Unimog in 1951 and first produced it in their Gaggenau plant , and the Unimog was sold under the Mercedes-Benz brand. However, the first Unimog to feature the three-pointed Mercedes-Benz star instead of the Boehringer bullhead was only introduced in 1953. From the 1970s, the more tractor-like MB-trac series was offered before it was outsourced in 1990. Since 2002, the Unimog has been built in the Mercedes-Benz truck plant in Wörth am Rhein in Germany. [ 2 ] The Mercedes-Benz Türk A.Ş. plant assembles Unimogs in Aksaray , Turkey. [ 3 ] Unimogs were also built in Argentina (the first country to do so outside Germany) by Mercedes-Benz Argentina S.A. under licence from 1968 until 1983 (with some extra units built until 1991 off the assembly line from parts in stock) in the González Catán factory near the city of Buenos Aires, as stated in the book "El Unimog en el Ejército Argentino" by the Argentine author and historian Gaston Javier Garcia Loperena in 2015. [ 4 ] : 141 [ 5 ] : 122 The first model was designed by Albert Friedrich and Heinrich Rößler shortly after World War II to be used in agriculture as a self-propelled machine providing a power take-off to operate saws in forests or harvesting machines on fields. It was designed with rear-wheel drive and switchable front-wheel drive, with equal-size wheels, in order to be driven on roads at higher speeds than standard farm tractors . With their very high ground clearance and a flexible frame that is essentially a part of the suspension, Unimogs are not designed to carry as much load as regular trucks. [ 6 ] : 7 Due to their off-road capabilities, Unimogs can be found in jungles, mountains and deserts as military vehicles, fire fighters, expedition campers, and even in competitions like truck trials and Dakar Rally rally raids . In Western Europe, they are commonly used as snowploughs , municipal equipment carriers, agricultural implements, forest ranger vehicles, construction equipment or road–rail vehicles and as army personnel or equipment carriers (in the armoured military version). New Unimogs can be purchased in one of two series: the medium series 405 , also known as the UGN ("Geräteträger" or equipment carrier), [ 7 ] : 4 and the heavy series 437, also known as the UHN ("Hochgeländegängig" or highly mobile cross country). [ 8 ]
The name Unimog is an acronym for the German " UNI versal- MO tor- G erät" , Gerät being the German word for a piece of equipment (also in the sense of device , machine , instrument , gear , apparatus ). It was created by German engineer Hans Zabel, who made the note Universal-Motor-Gerät on one of the technical drawings for the Unimog. Later, the Universal-Motor-Gerät was shortened to the acronym Unimog . On 20 November 1946, the name Unimog was officially unveiled. [ 6 ] : 8 Since 1952, Unimog has been a brand of Daimler Truck . [ 9 ] The Unimog's characteristic design element is its chassis : a flexible ladder frame with short overhangs, and coil-sprung beam portal axles with a central torque tube and transverse links. Because of the portal axles, the wheels' centres are below the axle centre, which gives the Unimog a high ground clearance without the need for big tyres. The coil-sprung axles with torque tubes allow an axle angle offset of up to 30°, giving the wheels a wide range of vertical movement to allow the truck to drive over extremely uneven terrain , even boulders of one metre in height. [ 5 ] : 23 Unimogs are equipped with high-visibility cabs to maximize operator visibility of terrain and when manipulating powered tools. The newest implement carrier Unimog models can be changed from left-hand drive to right-hand drive in the field to permit operators to work on the more convenient side of the truck. The ability to operate on highways enables the Unimog to be returned to a secure garage or yard at the end of a shift. Unimogs can be equipped with front and rear tool mounting brackets and hydraulic connections to allow bucket loaders and hydraulic arms to be used. Most units have a power take-off (PTO) connection to operate rotary equipment such as snow brooms, snow blowers , brush mowers, loaders or stationary conveyor belts . Unimogs are available with short wheelbases for implement carrier operations or long wheelbases for all-terrain cargo carrying operations. Currently (2022), Daimler Trucks offers the 437.4 heavy series and the 405 implement carrier series . Having purchased the traditional Unimog from Boehringer in 1951, Daimler-Benz started making the Unimog S series in the mid-1950s and added light, medium and heavy series to the model lineup in the 1960s and 1970s, before successively reducing the available models during the 1990s to end up with the modern implement carrier and the heavy series today. Originally, the traditional Unimog 70200 was a rather small agricultural tractor, measuring just 3520 mm in length. It was only offered as a Cabrio with a canvas roof. The engine power output of 25 PS (18 kW) proved to be insufficient for many applications. To accommodate customer needs, a longer-wheelbase version, a proper cab and more powerful engines (up to 34 PS (25 kW)) were introduced soon after Daimler-Benz took over Unimog manufacture; the traditional Unimog evolved into its final stage, the 411-series. Yet Daimler-Benz decided that an entirely new, more powerful version of the Unimog would be required to meet future customer expectations. This Unimog version would later be known as the 406-series. [ 5 ] : 12–14 The military Unimog S series was the first Unimog designed to be an offroad truck rather than a tractor, and it is the only series-production Unimog that has an Otto engine. [ N 1 ] Daimler-Benz designed a new frame for it, but it still shares its drivetrain with the 411-series.
With the introduction of the 406-series in 1962, Daimler-Benz laid the foundation for a completely new Unimog model family, the 406-based medium series (in the 1960s known as the heavy series). It was produced until 1994. Unimogs belonging to the medium series are the series 403, 413, 406, 416, 426, and 419. These models were offered with three different wheelbases ( 2380 mm , 2900 mm , 3400 mm ) and two engines, the straight-four and straight-six direct-injected Diesel engines OM 314 and OM 352, ranging from 54 to 110 PS (40–81 kW). The light series 421 and 431 share their frame design with the 411-series, but borrow their drivetrain and cab design from the 406-series, which is why they also count as 406-related Unimogs. [ 5 ] : 11 The heavy series Unimogs were introduced in 1974 and first featured the angular ("edgy") cab, which is still a design feature of the Unimog today. The first heavy series Unimog was the 425-series, which was available from 1976. [ N 2 ] Soon after, the 435-series and 424-series followed, which caused a decline in Unimog 406 sales. [ 5 ] : 85 The 425 was available with a wheelbase of 2810 mm , the 424 with 2650 mm and 3250 mm , and the 435 with 3250 mm , 3700 mm and 3850 mm . [ 10 ] The introduction of new engines starting in 1986 caused a shift in the series numbers, while leaving the vehicles otherwise mostly unchanged. The 424 became the 427, and the 425 and 435 were joined together and became the 437. A derivative of the 437-series, the 437.4-series, is still in production today. [ 11 ] In 1988, after declining Unimog sales, Daimler-Benz launched a new strategy that was supposed to increase sales and make the Unimog more profitable, called "Unimog-Programm 1988". New models introduced with this programme were the new light series 407 and medium series 417, which were intended to replace all Unimog 406-related series. [ 12 ] : 119 The 407- and 417-series were replaced after just four years, in 1992, with the 408- and 418-series. [ 12 ] : 134 In 2000, these two models were replaced with the current 405 implement carrier series, making the 437.4 and the 405 the only remaining Unimog series. Like other trucks, but unlike agricultural tractors, the Unimog is a body-on-frame vehicle with short overhangs. The original Unimog was made with a flat ladder frame [ 13 ] : 82 and a wheelbase of 1720 mm. Later, the wheelbase was extended several times to accommodate customer needs. Starting in the mid-1950s, with the introduction of the Unimog 404, the frame received a drop. Originally, this was done to make space for a spare tyre, but engineers soon found that the new frame would improve the torsion performance, which is why all following Unimog series also received a frame with a drop. [ 5 ] : 45 Several mounting brackets, additional cross members and tool boards were offered as factory options for the frame. [ 13 ] : 79 The Unimog has live front and rear axles that have portal gears ( portal axles ). Such axles have a lifted axle centre, but the wheels' centre remains unchanged, meaning that a high ground clearance can be achieved with small wheels and tyres. Unlike "regular" trucks, the Unimog has coil springs with hydraulic shock absorbers rather than leaf springs , as coil springs provide more spring travel. The axles themselves have only one longitudinal pivot point each, the so-called torque tubes .
The torque tubes contain the drive shafts and connect the axles' differential gearboxes to the Unimog gearbox; being also part of the suspension system, the torque tubes prevent longitudinal movement of the axles whilst still allowing limited vertical movement. Lateral axle movement is prevented by panhard rods and transverse links . This design makes extreme axle angle offsets of up to 30° possible. [ 5 ] : 36–40 A wide variety of wheels and tyres were available for the Unimog. Originally, the first Unimog was equipped with 6.5–18 in tyres designed for both on- and offroad use. [ 13 ] : 48–49 Later, bigger wheels and tyres with different tread patterns were available, ranging from agricultural tractor tread patterns to massive bar tyre treads to low-pressure balloon tyre treads. Drum brakes were standard for all Unimogs until 1973, when they were replaced by disc brakes; however, drum brakes remained an option for Unimogs of the 406-family until 1989. [ 5 ] : 78 The steering system was a screw-and-nut system until 1970, when it was replaced by a power-assisted ball-and-nut system for the 406-series. [ 5 ] : 77 The classical Unimog is a rear-wheel-drive vehicle, meaning that the rear axle is directly connected to the gearbox. Turning on front-wheel drive automatically locks both axles together, without torque compensation. The mechanical lever that turns on all-wheel drive has a third position that locks the front and rear differentials. As of 1963, a pneumatic power switch was used instead of a lever. Due to the reduction gears inside the portal axles, the rotational frequency of the driveshafts inside the torque tubes is relatively high, meaning that the amount of torque they have to withstand is fairly low. [ 5 ] : 23 Traditionally, the Unimog has a splitter gearbox . Over the years, three different base gearbox designs have been used, all following the same principle and having four gears, two ranges (called groups ) and an additional direction gear. Those designs were UG-1/xx , UG-2/xx , and UG-3/xx . UG is an abbreviation for Unimog-Getriebe (Unimog gearbox) ; the number after the slash represents the input torque in kp·m (= 9.80665 N·m). [ N 3 ] Until 1955, the Unimog base gearbox UG-1 was a constant-mesh countershaft gearbox; it was then upgraded with synchroniser rings to a synchromesh gearbox. However, the synchromesh version was only used for the 404-series , and the constant-mesh version remained the standard gearbox for the 411-series. In 1957, the synchromesh version became an option for the 411-series, before it became the standard gearbox for all Unimogs in 1959. [ 13 ] : 38–39 The following gearbox versions UG-2 and UG-3 were made as synchromesh versions only. There are different layouts of the gearbox, namely the F-layout and the G-layout, as well as their upgraded layouts, which have no particular names. The F-layout is the original gearbox layout and is limited to the first two gears in the first range, as it does not have selector sleeves, meaning that in total there are six forward gears. Instead of a reverse gear, the gearbox has its direction gear, which, in theory, can be used to reverse any gear. Because of the missing shifting sleeves, however, the reverse direction can only be used in the first range, which itself is limited to the first two gears, resulting in only two reverse gears (six forward and two reverse gears in total). To operate the gearbox, there is only one shift lever with a six-speed H-pattern; the gearbox shifts its ranges automatically.
An additional shift lever is used for shifting into reverse. The G-layout has an additional reduction gearbox, which can be used in all gears. This effectively doubles the number of gears (twelve forward and four reverse gears). This reduction gearbox was also available with an additional crawler gear, which can only be used in the first range (twenty forward and eight reverse gears). As of 1976, shifting sleeves were added and a four-speed H-pattern replaced the six-speed H-pattern, which allows using all gears in all ranges. [ 14 ] With the introduction of the UG-3 gearbox, the standard shifter layout was changed to an eight-speed H-H pattern, with eight gears on one lever, without any additional switches. When shifting from "4th" into "5th" gear, the gearbox automatically shifts into range 2 and back into gear 1. Crawler gearboxes were offered as a factory option for the UG-3 gearbox as well, resulting in 24 gears. The design with the additional direction gear was kept, which means that all 24 gears can also be used in reverse mode. Since the highest final gear ratio allows top speeds of up to 110 km/h, and the reverse gear only comes with a small reduction of 1:1.03, the top speed in reverse mode is more than 100 km/h. To prevent such high reverse speeds, a lock for the second range was available as a factory option, allowing only the first range (gears "1" to "4") in reverse mode. [ 15 ] The initial Unimogs were equipped with passenger car engines; the first Unimog series to receive a truck engine was the 406-series in 1963. [ 16 ] All engines use the Diesel principle , except for the engines used for the Unimog 404-series and the first four Unimog prototypes, which use the Otto principle . A range of engines was used from 1947 onward, with M designating Otto and OM designating Diesel engines. Traditionally, three different cab options were available for the Unimog: an open-roof cab (Cabrio), a single cab and a double cab, with the single cab being the most popular. Because the Unimog was designed to be a better agricultural tractor, its original design did not include a closed cab (as agricultural tractors in Germany usually did not have a closed cab in the 1940s). The first Unimog series to be officially offered with a cab was the 401-series. However, the first cabs were made by Westfalia in Rheda-Wiedenbrück and then shipped to the Unimog plant in Gaggenau for assembly. These cabs are known as Westfalia type B or simply Froschauge ('frog's eye'). Starting in 1957, a new cab with 30% more volume, called Westfalia type DvF , Typ D, verbreitertes Fahrerhaus (Type D, widened cab), was used. Both Westfalia cabs were fairly narrow and came with the problem of engine heat causing high cabin temperatures. [ 13 ] : 44–47 The first Unimog that was designed with a cab from the outset was the series 406. Just for the purpose of manufacturing cabs, Daimler-Benz built a new 1000-megapond sheet panel press in the Unimog plant. [ 5 ] : 60 It was planned that the double cab parts would also be produced with this press; instead, the double cabs were manufactured by Wackenhut in Nagold. [ 5 ] : 89 In 1974, the current heavy-duty series' cab was introduced. Its basic design has not been changed since. [ 17 ] The equipment carrier versions' cab, on the other hand, has received several modifications since its introduction in the late 1980s, with the current version being introduced in 2000. The original Unimog was offered with a pneumatic system.
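The gear counts quoted above follow from simple multiplication, which the toy tally below makes explicit. It is bookkeeping under the stated rules (four gears, two ranges, a direction gear, a doubling reduction box, crawler gears usable only in the first range), not a model of the actual UG gearbox internals.

```python
GEARS_PER_RANGE = 4
RANGES = 2

# F-layout: no selector sleeves, so range 1 (and thus reverse)
# is limited to the first two gears.
f_forward = GEARS_PER_RANGE + 2      # 4 gears in range 2 + 2 in range 1 = 6
f_reverse = 2                        # direction gear usable only in range 1
print("F-layout:", f_forward, "forward /", f_reverse, "reverse")

# G-layout: an extra reduction gearbox usable in all gears
# doubles every count.
g_forward, g_reverse = 2 * f_forward, 2 * f_reverse   # 12 / 4
print("G-layout:", g_forward, "forward /", g_reverse, "reverse")

# Crawler option: additional crawler gears, usable only in the
# first range, bring the totals to the quoted 20 forward / 8 reverse.
crawler_forward = g_forward + 8      # 20
crawler_reverse = g_reverse + 4      # 8
print("G + crawler:", crawler_forward, "forward /", crawler_reverse, "reverse")
```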
The pneumatic system was used for powering all auxiliary devices as well as the three-point linkages. [ 13 ] : 77 As of October 1961, a hydraulic system became an option, [ 13 ] : 76 and as of 1963, the hydraulic system became standard; unlike the pneumatic system, however, the hydraulic system was made by the Westinghouse Air Brake Company in Hannover . With the introduction of the hydraulic system, the pneumatic system was solely used for operating the brakes. [ 5 ] : 27–28 The Unimog was never meant to be a military vehicle; Allied permission to develop the Unimog was granted only because Albert Friedrich, inventor of the Unimog, ensured that the Unimog would not have any military purpose. [ 12 ] : 6 However, the Unimog has always been used as a military vehicle. 44 Unimogs of the first model, the Unimog 70200 , served as combat engineer tractors in the Swiss army. They proved successful, and the Swiss army purchased 540 units of the 70200's successor, the Unimog 2010 . These early Swiss military vehicles were known as ″Dieseli″. The Dieseli-Unimogs remained in service until 1989. [ 12 ] : 32 Officers of the French army, then an occupying force in Germany, noted the Unimog testing at the Sauberg in the early 1950s and considered the Unimog useful for patrolling purposes. Soon after, the French army purchased Unimogs of the series 2010 and 401. The Unimog proved to be so successful that Daimler-Benz was ordered to develop an entirely new Unimog just for military purposes. This new model was supposed to be a small 1.5-tonne truck, capable of carrying 10 to 12 soldiers on its bed at a speed of up to 90 km/h, rather than being an agricultural tractor. Being a NATO member state, France demanded that the military Unimog have an engine running on petrol. [ 12 ] : 41 Daimler-Benz decided to use an Otto cycle engine, the M 180, displacing 2.2 litres and producing 85 PS (63 kW). [ 12 ] : 47 The military Unimog would later be known as the Unimog 404 or Unimog S. In total, 64,242 units of the Unimog 404 were produced, which makes it the Unimog with the highest production figure. 36,638 Unimog 404 were purchased by the German Bundeswehr. [ 12 ] : 43 Apart from the Bundeswehr, many different military forces have either used the Unimog in the past or still make use of it today. In addition to the military series 404, several civilian models have been adapted for military use. In Argentina, the series 426, actually a version of the civilian series 416 produced under licence, was made for the Argentinian, Chilean, Peruvian and Bolivian military. In total, 2643 units of the series 426 were made. The Argentinian-made Unimog 431, which was a licensed version of the civilian series 421, was also used as a military vehicle, mainly as a self-propelled howitzer . Another civilian Unimog that was mainly used as a military vehicle is the series 418. [ 5 ] : 122 The military Unimogs are used as troop transportation vehicles, ambulances , and mobile command centers equipped with military communications equipment . The United States Marine Corps and United States Army use the Unimog 419 as an engineer tractor, while the United States Army also uses Unimog vehicles to access remote installations. In total, 2416 Unimog 419 were made, used only by United States forces. [ 18 ] : 81 Modern Unimogs also serve as military vehicles, and the current Unimog 437.4 chassis is used for the ATF Dingo . More than 5,500 Unimogs are in active service in the Turkish Armed Forces . They were produced by Mercedes-Benz Türk.
Unimogs are used by the German emergency management agency Technisches Hilfswerk (THW, literally Technical Relief Organization ), [ 19 ] [ 20 ] and by fire departments [ 21 ] and municipalities as utility vehicles. They can be used as material handlers, auxiliary power providers ( generators ), and equipment carriers. [ 22 ] Their ability to operate off-road, in high water or in mud makes it easier to access remote areas in emergency situations. [ 23 ] They are commonly used in snow removal where other vehicles might not be able to operate. Many Alpine towns and districts are equipped with one or more Unimog snow blowers to clear narrow mountain roads that have drifted closed. [ 24 ] In construction , Unimogs are used as carriers of equipment and, with the optional extended cabin, [ 25 ] [ 26 ] also of workers. They can be equipped with a backhoe , front loader , or other contracting equipment. On railroads , Unimogs are used as rail car movers and road-rail vehicles . They have also been used in mining areas, as seen in Gold Bridge, BC, Canada. In agriculture , Unimogs are used to operate farm equipment . While most farm field implement operations are now performed by a tractor , Unimogs are used to haul produce, machinery and animals. They are also used around the farmyard to run chippers , grain augers , and conveyors. Unimogs are also used as prime movers to pull heavy trailers, large wheeled conveyances [ 27 ] and jet airliners . [ 28 ] [ 29 ] Often, only the front half (an OEM part) is combined with a tailor-made rear. [ 5 ] : 68 [ 13 ] : 94–95 Unimogs are used as tourist transport [ 30 ] for jungle ecotourism or safaris . Unimogs have been uncommon in North America because of vehicle regulations and requirements that differ from those in Germany, and due to the lack of a North American sales and support network. Most Unimog models found in North America have been imported by individual dealers or independent enthusiasts. Between 1975 and 1980 the Case Corporation (now merged into CNH Global ) imported the U-900 model into the United States and sold it through Case tractor dealerships as the MB4/94. In 2002 DaimlerChrysler tried to re-enter the North American market with the Unimog and engaged in four years of aggressive marketing, which included truck and trade shows, exposure on the television show Modern Marvels , numerous magazine articles and extensive demonstrations (both touring and on an individual basis). They were generally sold through Freightliner truck dealerships. [ 31 ] Freightliner is a Daimler AG subsidiary. The UGN series was specifically manufactured for the North American market and was significantly different mechanically from its European counterpart. [ 32 ] The UGN faced stiff competition in North America from manufacturers whose truck or equipment lines performed some of the same duties as the Unimog, among them Caterpillar , John Deere , AM General , Sterling Trucks (also a Daimler AG subsidiary), and General Motors . After five years and selling only 184 Unimogs, Freightliner LLC exited the market. Daimler AG cited non-compliance with EPA07 emission requirements as the main reason for ceasing North American sales. [ 33 ] Unimogs have been used in three kinds of competition: the Dakar Rally and other desert rally competitions , mud bogging , and slow-moving truck trials over obstacles.
Unimogs won the truck class of the Dakar in 1982 and 1986, the latter an unexpected victory, as the vehicle participated for Honda primarily to provide support for the team's motorcycles. [ 34 ] [ 35 ] High-powered factory-sponsored entries of truck companies aiming for the overall win have since taken the laurels, with Unimogs used mainly for service purposes. In 1968 the Unimog department in Gaggenau began development of the MB-trac , a tractor based on the Unimog 403 drivetrain. It was produced by Daimler-Benz until 1991, when the product line was sold to Werner Forst- und Industrietechnik. [ 36 ] Werner continues to produce it as the WF trac . The Unimog also serves as a technical platform for armoured vehicles like the ATF Dingo , a mine-protected utility and reconnaissance vehicle used by the German and other European armed forces (e.g. Belgium) for territorial defence purposes as well as in international missions. In late autumn 1956, Daimler-Benz started developing a new military version of the Unimog, the Unimog SH . It was based on the Unimog S and had a rear engine (German: Heckmotor), hence the name Unimog SH . [ 12 ] : 56 By 1960, Daimler-Benz had completed 24 Unimog SH and sent them to AB Landsverk for final assembly. Initially, the Belgian Army intended to purchase these vehicles for their police forces in the Belgian Congo , but only 9 vehicles were actually sold to the Belgian forces; the 15 remaining vehicles were purchased by the Irish Army in 1972. [ 12 ] : 57 They were intended as a stop-gap vehicle for use until the first Panhard M3 VTT APCs entered service in 1972. The type had excellent off-road capability but poor on-road handling due to a high centre of gravity, and several accidents occurred as a result. A four-man dismountable squad was carried, but space was cramped, and in any case a four-man detachment was far too small for any sort of realistic military purpose. Another consideration was that the FN MAG gunner's position was too exposed. Eventually the Unimog scout cars arrived in Ireland in February 1972, their departure having been delayed by a local peace group who thought they were destined for the Provisional Irish Republican Army (PIRA). By mid-1978 all had been transferred to the Irish Army Reserve , the FCA. All were withdrawn by 1984, and two are preserved: one in the transport museum in Howth, Co. Dublin, and one in the Muckleburgh Collection , England. An updated version of the Unimog SH, the Unimog T , was made for the German Bundeswehr in 1962. The German defence ministry decided not to purchase the Unimog T, which is why it was never put into series production. [ 12 ] : 59 Further armoured vehicles developed in Germany using Unimog chassis are the UR-416 , the Sonderwagen 4 and Condor 1 in police service, and the ATF Dingo used by the Bundeswehr in Afghanistan . The French Aravis mine-protected vehicle is, like the Dingo, based on the special chassis FGA 12.5. The Buffel , Mamba , RG-31 , and RG-33 armoured personnel carriers from South Africa are based upon the Unimog driveline. The AV-VBL developed by Brazil's Tectran is also an AFV family based on the Unimog. Originally, the Unimog was developed in post-war Germany to be used as agricultural equipment. It was designed with equal-sized wheels, a mounting bracket in front, a hitch in the rear, and loading space in the center. This was to make it a multi-purpose vehicle that farmers could use in the field and on the highway. [ 6 ] : 7
Albert Friedrich was granted permission to develop the Unimog in November 1945 [ 12 ] : 6 and entered a production agreement with Erhard und Söhne (Erhard and Sons) in Schwäbisch Gmünd on 1 December 1945. [ 6 ] : 7 Development began on 1 January 1946. Soon after, Heinrich Rößler, the Unimog lead designer, joined the development team. The first prototype was ready by the end of 1946. The early prototypes were equipped with the M 136 Otto engine , because the development of the OM 636 Diesel engine had not been finished. [ 12 ] : 13 The prototypes were similar to the later series production models. The original track width of 1.270 m (4 ft 2 in) was equivalent to two potato rows. [ 6 ] : 8 The 25-PS (18 kW) OM 636 Mercedes-Benz Diesel engine became standard equipment in the first production Unimogs at the end of 1947. The original emblem for the Unimog was a pair of ox horns in the shape of the letter U. The first 600 units of the 70200 series Unimogs [ 37 ] were built by Boehringer . This was done mainly for two reasons: Erhard und Söhne did not have the capacity to build the Unimogs, and Boehringer (a former tool manufacturer) could thereby evade dismantling. [ 6 ] : 8 In late 1950, Mercedes-Benz entered into a contract with Boehringer to take over production of the Unimog. [ 38 ] Daimler-Benz modified the Unimog for mass production to create the series 2010 and, in 1951, started its manufacture in their Gaggenau plant in Baden-Württemberg, where production continued until 2002. [ 39 ] In 1953, the Unimog was updated and the three-pointed Mercedes star began to appear on the bonnet, replacing the Unimog ox horn emblem. The new model became known as the series 401. [ 12 ] : 25 A new series 402 with a long-wheelbase chassis (2,120 mm (83 in) instead of 1,720 mm (68 in)) also became available. An enclosed driver's cab was available as an option from 1953, making the Unimog a true all-weather vehicle. [ 40 ] In 1955, the first Unimog 404 S series vehicles were produced. The primary customer of the 404 S was the Bundeswehr (literally Federal Defence , i.e. the West German Armed Forces), which was created in the mid-1950s in the era of the Cold War . [ 12 ] : 43 The 404 was intended to be a mobile cross-country truck , instead of an agricultural implement. The 404 S is the most-produced variant: 64,242 units were produced between 1955 and 1980. The oldest 404 known to exist is the first 1953 prototype, located in an East German museum. [ 12 ] : 55 Starting in 1957, the Unimog 411 was offered with a synchromesh gearbox as an option, and in 1959, the synchromesh gearbox became standard. [ 13 ] The 406/416 middle series were produced beginning in 1963. They were equipped with the six-cylinder pre-combustion-chamber Diesel engine OM 312 producing 65 PS (48 kW; 64 hp). The 406 and 416 are similar, the 416 having a longer 2,900 mm (114 in) wheelbase compared to 2,380 mm (94 in) for the 406. Starting in 1964, the 406-series was equipped with the direct-injected OM 352 Diesel engine, starting with 65 PS (48 kW; 64 hp) and going up to 84 PS (62 kW; 83 hp) (110 PS (81 kW) for the Unimog 416). [ 5 ] : 104 Between the original Unimog and the middle series, Daimler-Benz developed a light series. The light series consisted of two separate Unimog series, the 421 and the 403.
The 403, which is basically a 406-series with a 54 PS (40 kW; 53 hp) 3.8-litre four-cylinder engine, has a 2,380 mm (94 in) wheelbase and was later supplemented by the 413-series, which is a four-cylinder version of the 416-series (the long-wheelbase (2,900 mm (114 in)) model). The 421 is the successor of the 411-series and has a 2,250 mm (89 in) wheelbase. It is powered by a 40 PS (29 kW; 39 hp) 2.2-litre passenger car Diesel engine. [ 5 ] : 107 The 100,000th Unimog (a 421) was built in 1966 in Gaggenau. [ 12 ] : 8 Argentina was the first country to manufacture the Unimog outside Germany. The first Unimog produced in the Mercedes-Benz Argentina S.A. factory in Gonzalez Catán, on the outskirts of Buenos Aires, rolled off the assembly line on 1 September 1968. [ 41 ] : 232 The two models made in Argentina are the 426 and 431, versions of the 416 and 421 respectively, produced under licence. [ 5 ] : 122 Despite originally being designed as an agricultural vehicle, the Unimog had more success as a multi-purpose tool carrier. To actually serve the agricultural market, Daimler-Benz designed a completely new agricultural tractor in 1972, the MB Trac . It is a body-on-frame tractor with four big wheels of the same size, all-wheel drive, a slim bonnet, and an angular driver's cab. In contrast to conventional tractors, the cab is situated between the axles, similar to comparable four-wheel-drive tractors. There is no articulation between the front and rear sections; instead, the MB Trac has conventional steering. A wide range of MB Trac tractors was offered, ranging from the entry model MB-trac 65 to the top model MB Trac 1800 intercooler. Daimler-Benz later merged the MB-trac with the agricultural machinery activities of Deutz AG . The manufacturing of the MB Trac series ceased in 1991. In 1974, Mercedes-Benz presented the new Unimog U 120, the first model of the "heavy duty" Unimog series 425. The heavy-duty series, or simply "heavy" series, extended the Unimog model lineup. The characteristic "edgy" bonnet introduced with the heavy Unimog series remains a Unimog style element to this day. The series 425 has a 2,810 mm (111 in) wheelbase, 9 t permissible maximum mass and an OM 352 Diesel engine producing 120 PS (88 kW; 120 hp) (shortly thereafter 125 PS (92 kW; 123 hp) as the U 125). Manufacture of the series 435 for the Bundeswehr began in 1975, as a successor of the Unimog S 404. The 435 was characterized by a long wheelbase of 3,250 mm (128 in), 3,700 mm (146 in) or 3,850 mm (152 in) and shares its cab with the series 425. The new 424 "middle" series of Unimogs was produced starting from 1976. They share the cab with the series 425 and are designated U 1000, U 1300/L, U 1500, and U 1700/L, with 124 kW (166 hp; 169 PS) engine performance. In the same period Daimler-Benz re-ordered the type designations for the older series. The classical round-form series of the Unimog were now designated U 600/L, U 800/L, U 900 and U 1100/L. (The letter L stands for a long wheelbase, because most models were available in two wheelbase variants.) The Unimog with the rounded driving cab became known as the light series. The new series with the angular cab was divided by payload into a middle and a heavy series. Some engines overlap; the Unimog nomenclature is not simple to understand (see below for notes on series names). The long-proven Unimog-S (404), although with clearly decreasing production figures, was the only Unimog with an Otto cycle engine in the lineup.
With the exception of the entry-level model, all Unimogs for 1976 were equipped with four-wheel disc brakes . The 200,000th Unimog, a 424.121, was produced in 1977. [ 10 ] In 1980, production of the U 404 (Unimog S) ended. The light and medium series 407 and 427 were introduced in 1985. Production of the 406 and 416 ceased in 1988, and the 437-series was introduced the same year. In the early 1990s, the new light models 408 (U 90) and 418 (U 110-U 140) with newly designed cabins were introduced to replace the predecessor models. The steeply raked front end gives the operator a good view forward. The 408 features an asymmetric front bonnet, which is lower on the driver's side and is likewise supposed to give the driver a good view. A new ladder frame and progressive-rate coil springs were implemented to improve the Unimog's handling. In addition, the Unimog received a new tyre pressure adjustment system that can be operated whilst driving, an anti-skid system, new engines, and a "Servolock" mechanism for the hydraulic connection of implements. In March 1994, Mercedes-Benz presented the design concept "Funmog", a luxury version of the Unimog, at the International Off-Road-Exhibition in Köln, Germany. Based on the 408-series, [ 42 ] it was only built by special order. Luxury options such as leather seats, deluxe carpeting and other interior modifications were available; though the design featured chrome bull bars and air horns, it lacked hydraulics and was limited to a total mass of 5,000 kg. [ 43 ] The starting price was DM 140,000. [ 44 ] A total of twelve units were built through 1997 by Daimler-Benz, [ 44 ] most exported to Japan. [ 6 ] : 23 In 1995, the Unimog U 2450 L 6×6 (437.156), an all-wheel-drive, 3-axle Unimog version, was presented. [ 45 ] Mercedes-Benz presented the Unimog 409 (officially called UX 100) in 1996. [ 46 ] It is the smallest Unimog model ever made and was designed to slip speedily over sidewalks and around plants. Within a few years, production of the UX 100 was transferred to the Multicar subsidiary of Hako GmbH , which specializes in vehicles of this kind and size. The all-new range of UGN models (the 405 series U 300, U 400, U 500) was introduced in 2000. In August 2002 production ended at the Gaggenau plant, after 51 years and more than 320,000 Unimogs, and started up in Mercedes-Benz's truck manufacturing plant in Wörth am Rhein . The U 3000, U 4000 and U 5000 models (UHN 437.4 series) were introduced at the same time. [ 39 ] At the Dubai Motor Show in December 2005, the "Unimog U 500 Black Edition" premiered as an offering to wealthy desert-dwellers, a luxury offering comparable to the Funmog. Starting from June 2006 the UGN series was produced with BlueTec technology so that the Euro IV emission requirements would be met. The design designations changed from 405.100 to 405.101. At the IAA 2006 commercial vehicle show in Hanover a new Unimog U 20 was presented, which was to be available at the end of 2007. Its most striking feature is the cab-over design, with none of the vestigial front bonnet characteristic of the traditional Unimog. It has a total mass of 7,500 kg up to 8,500 kg. The underlying technology comes from the U 300. The driving cab is from the new Brazilian Accelo light truck (Caminhões Leves) series. The wheelbase is shortened to 2,700 mm (106 in). In August 2013, production of the next-generation models commenced at the Wörth plant.
The new models feature redesigned cabins and new engines that were claimed to meet the Euro VI emission standards . [ 47 ] Unimog series numbers like 401, 406, or 425 in this article are the factory numerical designation (in German "Baumuster", literally construction pattern ). Unimogs also have a sales model number like U 80, U 120, or U 1350. Each series can have several model numbers, as vehicles are equipped with different engines or chassis options. Originally, the "U" model numbers were roughly equivalent to the horsepower (in German: PS) of the engine. Starting in 1976, model numbers added an extra 0 at the end. More recent model numbers may have three or four digits and sometimes do not relate to horsepower (the engine of the U 5000 is rated at 218 PS).
https://en.wikipedia.org/wiki/Unimog
Unimolecular ion decomposition is the fragmentation of a gas phase ion in a reaction with a molecularity of one. [ 1 ] Ions with sufficient internal energy may fragment in a mass spectrometer , which in some cases may degrade the mass spectrometer performance, but in other cases, such as tandem mass spectrometry , the fragmentation can reveal information about the structure of the ion. A Wahrhaftig diagram (named after Austin L. Wahrhaftig ) illustrates the relative contributions in unimolecular ion decomposition of direct fragmentation and fragmentation following rearrangement. The x-axis of the diagram represents the internal energy of the ion. The lower part of the diagram shows the logarithm of the rate constant k for unimolecular dissociation, whereas the upper portion of the diagram indicates the probability of forming a particular product ion. The green trace in the lower part of the diagram indicates the rate of the rearrangement reaction, ABCD + → AD + + BC, and the blue trace indicates the rate of the direct cleavage reaction, ABCD + → AB + + CD. A rate constant of 10^6 s^−1 is sufficiently fast for ion decomposition within the ion source of a typical mass spectrometer. Ions with rate constants less than 10^6 s^−1 and greater than approximately 10^5 s^−1 (lifetimes between 10^−5 and 10^−6 s) have a high probability of decomposing in the mass spectrometer between the ion source and the detector. These rate constants are indicated in the Wahrhaftig diagram by the log k = 5 and log k = 6 dashed lines. Indicated on the rate constant plot are the reaction critical energies (also called activation energies ) for the formation of AD + , E_0 (AD + ), and of AB + , E_0 (AB + ). These represent the minimum internal energy of ABCD + required to form the respective product ions: the difference between the zero point energy of ABCD + and that of the activated complex . When the internal energy of ABCD + is greater than E_m (AD + ), the ions are metastable (indicated by m * ); this occurs near log k > 5. A metastable ion has sufficient internal energy to dissociate prior to detection. [ 2 ] [ 3 ] The energy E_s (AD + ) is defined as the internal energy of ABCD + that results in an equal probability that ABCD + and AD + leave the ion source, which occurs near log k = 6. When the precursor ion has an internal energy equal to E_s (AB + ), the rates of formation of AD + and AB + are equal. Like all chemical reactions, the unimolecular decomposition of ions is subject to thermodynamic versus kinetic reaction control : the kinetic product forms faster, whereas the thermodynamic product is more stable. [ 4 ] In the decomposition of ABCD + , the reaction to form AD + is thermodynamically favored and the reaction to form AB + is kinetically favored. This is because the AD + reaction has favorable enthalpy and the AB + reaction has favorable entropy . In the reaction depicted schematically in the figure, the rearrangement reaction forms a double bond B=C and a new single bond A-D, which offsets the cleavage of the A-B and C-D bonds. The formation of AB + requires bond cleavage without the offsetting bond formation. However, the steric requirement makes it more difficult for the molecule to achieve the rearrangement transition state and form AD + . The activated complex with strict steric requirements is referred to as a "tight complex", whereas the transition state without such requirements is called a "loose complex".
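Since unimolecular decomposition follows first-order kinetics, the fraction of ions still intact after time t is exp(−kt). The sketch below uses illustrative transit times, not instrument-specific values, to show why ions with k between roughly 10^5 and 10^6 s^−1 tend to decompose in flight as metastable ions: they largely survive a ~1 μs residence in the source but not a ~10 μs flight to the detector.

```python
import math

def survival(k: float, t: float) -> float:
    """First-order kinetics: fraction of ions intact after time t."""
    return math.exp(-k * t)

t_source = 1e-6   # assumed residence time in the ion source (s)
t_detect = 1e-5   # assumed total flight time to the detector (s)

for k in (1e4, 1e5, 1e6, 1e7):
    in_source = 1 - survival(k, t_source)                       # in the source
    in_flight = survival(k, t_source) - survival(k, t_detect)   # metastable
    print(f"k = {k:.0e} /s: decomposed in source {in_source:5.1%}, "
          f"in flight {in_flight:5.1%}")
```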
https://en.wikipedia.org/wiki/Unimolecular_ion_decomposition
A unimolecular rectifier is a single organic molecule which functions as a rectifier (one-way conductor ) of electric current . The idea was first proposed in 1974 by Arieh (later Ari) Aviram, then at IBM , and Mark Ratner , then at New York University . [ 2 ] Their publication was the first serious and concrete theoretical proposal in the new field of unimolecular electronics (UE). Based on the mesomeric effect of certain chemical substituents on organic molecules, a molecular rectifier can be built that mimics a pn junction by chemical means. Their proposed rectifying molecule was designed so that electrical conduction within it would be favored from the electron-rich subunit or moiety (electron donor) to an electron-poor moiety (electron acceptor), but disfavored (by several electron volts ) in the reverse direction. Many potential rectifying molecules were studied by the groups of Robert Melville Metzger, Charles A. Panetta, and Daniell L. Mattern ( University of Mississippi ) between 1981 and 1991, but were not tested successfully for conductivity. The proposal was verified in two papers in 1990 and 1993 by the groups of John Roy Sambles ( University of Exeter , UK ) and Geoffrey Joseph Ashwell ( Cranfield University , now at Lancaster University , UK), using a monolayer of hexadecylquinolinium tricyanoquinodimethanide sandwiched between dissimilar metal electrodes ( magnesium and platinum ) [ 3 ] [ 4 ] and then confirmed in three papers in 1997 and 2001 by Metzger (now at the University of Alabama ) and coworkers, who used identical metals (first aluminium , then gold ). [ 5 ] [ 6 ] [ 7 ] These papers used Langmuir-Blodgett monolayers (one molecule thick) with an estimated 10^14 to 10^15 molecules measured in parallel. About nine similar rectifiers of vastly different structure were found by Metzger's group between 1997 and 2006. [ 8 ] Some more perylene-based organic rectifiers with PEG ( polyethylene glycol ) swallowtails have been synthesized in Mattern's lab by Ramakrishna Samudrala; [ 9 ] these were intended to allow rectification to be measured in more flexible assemblies. Single molecules bonded covalently to gold have been studied by scanning tunneling spectroscopy , and some of them act as unimolecular rectifiers even when studied as single molecules, as shown by the groups of Luping Yu ( University of Chicago ) and Ashwell (later at Lancaster University , UK ). The driving idea in UE (also called molecular-scale electronics) is that properly designed "electroactive" molecules, of between 1 and 3 nm in length, can supplant silicon -based devices to reduce circuit component sizes, providing a concomitant increase in maximum integrated circuit speeds. However, amplification had not been realized as of 2012, and the chemical interactions between metal electrodes and molecules are complex.
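Experimentally, rectification in such devices is commonly quantified by a rectification ratio RR(V) = |I(+V)| / |I(−V)| computed from a measured current–voltage sweep. The snippet below is a generic illustration of that bookkeeping on made-up data, not an analysis of any published measurement.

```python
def rectification_ratio(voltages, currents, v):
    """RR(v) = |I(+v)| / |I(-v)| from tabulated I-V data."""
    iv = dict(zip(voltages, currents))
    return abs(iv[v]) / abs(iv[-v])

# Made-up asymmetric I-V points (amps) for illustration only.
volts = [-1.0, -0.5, 0.0, 0.5, 1.0]
amps  = [-2e-9, -8e-10, 0.0, 3e-9, 3e-8]

print(rectification_ratio(volts, amps, 1.0))  # 15.0: strongly rectifying
print(rectification_ratio(volts, amps, 0.5))  # 3.75
```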
https://en.wikipedia.org/wiki/Unimolecular_rectifier
Unimpaired runoff , also known as full natural flow , is a hydrology term for the natural runoff of a watershed or waterbody that would have occurred under current land use but without dams or diversions. Flow readings from river gauges are influenced by upstream diversions , impoundments, and many other alterations of the land that drains into a watershed, or of the river channel itself. Engineers estimate unimpaired or natural runoff by estimating all of the effects of human "impairments" to flow and then removing these effects. Since these calculations involve many assumptions, they tend to be more accurate for smaller watersheds or when expressed as longer-period averages. Unimpaired runoff is important for legal and scientific reasons. Since human development continues to alter watersheds, unimpaired runoff provides a fixed frame of reference for flow rates. Unimpaired runoff also matters because long-term hydrologic records are often used to develop relationships between precipitation , runoff, and water supply; by removing changes in the timing between precipitation and runoff due to human influences, these long-term relationships become more useful. Calculating unimpaired runoff is also important in identifying long-term climate change impacts . By subtracting the known water management influences from a long-term hydrologic record, the record may still show signs of long-term change. These long-term signals may include long-term climate and land use change . It is still possible that the long-term climate signal is caused by larger-scale anthropogenic sources. Unimpaired runoff calculations are used extensively in western United States states such as California for water resources management applications, particularly in the calculation of water year classifications and river indexes.
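A typical back-calculation takes the gauged flow and adds back the estimated upstream impairments. The sketch below uses a common water-balance form, adding diversions, the change in upstream reservoir storage, and reservoir evaporation while subtracting return flows and imports; the term list, sign conventions, and numbers are illustrative, not any specific agency's procedure.

```python
def unimpaired_flow(gauged, diversions, delta_storage,
                    reservoir_evap, return_flows, imports):
    """Estimate unimpaired (full natural) flow for one period.

    All terms are in the same volume units (e.g. acre-feet or m^3):
    add back water removed or stored upstream, subtract water that
    was added artificially."""
    return (gauged + diversions + delta_storage
            + reservoir_evap - return_flows - imports)

# Invented monthly example (thousand m^3):
print(unimpaired_flow(gauged=820, diversions=140, delta_storage=60,
                      reservoir_evap=15, return_flows=25, imports=10))
# -> 1000: the river would have carried ~1,000 without impairments
```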
https://en.wikipedia.org/wiki/Unimpaired_runoff
In United States regulatory law, an unintentional radiator is any device that is designed to use radio frequency electrical signals within itself, or sends radio frequency signals over conducting cabling to other equipment, but is not intended to radiate radio frequency energy. [ 1 ] An incidental radiator is a device that can generate radio frequency electrical energy even though it is not intentionally designed to do so. [ 2 ] Unintentional and incidental radio frequency radiation can interfere with other electronic devices. In the United States, limits on radiated emissions from unintentional and incidental radiators are established by the Federal Communications Commission. Similar regulations have been promulgated by other governments. Reference is usually made in regulations to technical standards established by organizations such as ANSI, IEC and ITU. A computer is a typical example of an unintentional radiator. Radio frequency signals used within the computer circuitry may be unintentionally coupled to the power cord or to an interconnecting cable, which then acts as an antenna. A radio receiver will often use an intermediate frequency which is detectable outside the radio, the principle behind at least one audience measurement scheme for roadside detection of the radio stations passing motorists are listening to. Examples of incidental radiators include electric motors , transformers , dimmers , and corona from electrical powerlines . Radiated emissions from these commonly create interference on AM radio receivers and on television receivers. In the United States, active devices that are characterized as unintentional radiators are governed by Part 15 of the FCC regulations. In Canada, Innovation, Science and Economic Development classifies them as interference-causing equipment . Globally, most domestic regulation of unintentional radiators is based on ITU recommendations. Generally, this means the device leaks a signal at some level. Microprocessor-controlled appliances, anything with a clock signal , and switching voltage regulators all make some kind of noise, at the repetition frequency and at harmonics. In most countries, government agencies regulate how much leakage is tolerated. This prevents leakage from cable television systems, for example, from interfering with radio communications between aircraft and control towers . Because it costs money to filter out noise, there is always a balance struck between regulatory compliance and perfect filtering in these devices. Microwave ovens or devices with microprocessors may leak within allowable limits but may still generate an undesired signal that interferes with a licensed communications device. It also generally means that licensed users who intentionally radiate signals ( TV stations and cell phone companies) can have the device turned off if it interferes with their licensed operations. There is an entire industry based on regulatory compliance : manufacturers shipping a product to a foreign country must comply with each country's limitations on leakage of interfering signals. For example, in Germany the TÜV issues regulatory rules for unintentional radiators. The big cylindrical bumps on monitor cables and laptop chargers are ferrite cores , which reduce undesired signals.
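Because a digital clock radiates at its fundamental and at integer harmonics, an "unintentional" signal can land in a protected band. As a toy illustration (the clock frequency and band edges are arbitrary example values, not a compliance calculation), the snippet below lists which harmonics of a 25 MHz clock fall inside the 108–137 MHz aeronautical band.

```python
clock_mhz = 25.0          # assumed oscillator fundamental (MHz)
band = (108.0, 137.0)     # aeronautical navigation/communications band (MHz)

# Harmonics are integer multiples of the fundamental.
hits = [n * clock_mhz for n in range(1, 21)
        if band[0] <= n * clock_mhz <= band[1]]
print(hits)  # [125.0] -> the 5th harmonic lands in the aircraft band
```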
https://en.wikipedia.org/wiki/Unintentional_radiator
In set theory, the union (denoted by ∪) of a collection of sets is the set of all elements in the collection. [1] It is one of the fundamental operations through which sets can be combined and related to each other. A nullary union refers to a union of zero (0) sets and it is by definition equal to the empty set. For explanation of the symbols used in this article, refer to the table of mathematical symbols. The union of two sets A and B is the set of elements which are in A, in B, or in both A and B. [2] In set-builder notation, A ∪ B = {x : x ∈ A or x ∈ B}. For example, if A = {1, 3, 5, 7} and B = {1, 2, 4, 6, 7} then A ∪ B = {1, 2, 3, 4, 5, 6, 7}. A more elaborate example (involving two infinite sets) is: A = {x : x is an even integer larger than 1}, B = {x : x is an odd integer larger than 1}, and A ∪ B = {x : x is an integer larger than 1}. As another example, the number 9 is not contained in the union of the set of prime numbers {2, 3, 5, 7, 11, ...} and the set of even numbers {2, 4, 6, 8, 10, ...}, because 9 is neither prime nor even. Sets cannot have duplicate elements, [3][4] so the union of the sets {1, 2, 3} and {2, 3, 4} is {1, 2, 3, 4}. Multiple occurrences of identical elements have no effect on the cardinality of a set or its contents. One can take the union of several sets simultaneously. For example, the union of three sets A, B, and C contains all elements of A, all elements of B, and all elements of C, and nothing else. Thus, x is an element of A ∪ B ∪ C if and only if x is in at least one of A, B, and C. A finite union is the union of a finite number of sets; the phrase does not imply that the union set is a finite set. [5][6] The notation for the general concept can vary considerably. For a finite union of sets S_1, S_2, S_3, …, S_n one often writes S_1 ∪ S_2 ∪ S_3 ∪ ⋯ ∪ S_n or ⋃_{i=1}^{n} S_i. Various common notations for arbitrary unions include ⋃M, ⋃_{A∈M} A, and ⋃_{i∈I} A_i. The last of these notations refers to the union of the collection {A_i : i ∈ I}, where I is an index set and A_i is a set for every i ∈ I. In the case that the index set I is the set of natural numbers, one uses the notation ⋃_{i=1}^{∞} A_i, which is analogous to that of the infinite sums in series. [7] When the symbol "∪" is placed before other symbols (instead of between them), it is usually rendered at a larger size. In Unicode, union is represented by the character U+222A ∪ UNION. [8] In TeX, ∪ is rendered from \cup and ⋃ is rendered from \bigcup. The most general notion is the union of an arbitrary collection of sets, sometimes called an infinitary union. If M is a set or class whose elements are sets, then x is an element of the union of M if and only if there is at least one element A of M such that x is an element of A. [7] In symbols: x ∈ ⋃M ⟺ ∃A ∈ M (x ∈ A). This idea subsumes the preceding sections; for example, A ∪ B ∪ C is the union of the collection {A, B, C}. Also, if M is the empty collection, then the union of M is the empty set.
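As a concrete illustration, Python's built-in set type implements exactly this operation; the following minimal snippet (our own example, mirroring the sets used above) shows binary, multiple, arbitrary, and nullary unions:

```python
# Union of two finite sets, mirroring the example in the text.
A = {1, 3, 5, 7}
B = {1, 2, 4, 6, 7}
print(A | B)                    # {1, 2, 3, 4, 5, 6, 7}

# Duplicates have no effect on a set's contents.
print({1, 2, 3} | {2, 3, 4})    # {1, 2, 3, 4}

# Union of several sets at once: x is in the result iff it is in at least one.
C = {7, 8}
print(A.union(B, C))            # {1, 2, 3, 4, 5, 6, 7, 8}

# Arbitrary union of a collection M of sets (a "big union").
M = [{1, 2}, {2, 3}, {5}]
print(set().union(*M))          # {1, 2, 3, 5}

# The nullary union (union of an empty collection) is the empty set.
print(set().union(*[]))         # set()
```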
In Zermelo–Fraenkel set theory (ZFC) and other set theories, the ability to take the arbitrary union of any sets is granted by the axiom of union, which states that, given any set of sets A, there exists a set B whose elements are exactly the elements of the elements of A. Sometimes this axiom is stated less specifically: there exists a B which contains the elements of the elements of A but may be larger. For example, if A = {{1}, {2}}, then it may be that B = {1, 2, 3}, since B contains 1 and 2. This can be fixed by using the axiom of specification to get the subset of B whose elements are exactly the elements of the elements of A; the axiom of extensionality then shows that this set is unique. For readability, define the binary predicate Union(X, Y), meaning "X is the union of Y" or "X = ⋃Y", as: Union(X, Y) ⟺ ∀x (x ∈ X ⟺ ∃y ∈ Y (x ∈ y)). Then one can prove the statement "for all Y, there is a unique X such that X is the union of Y": ∀Y ∃!X Union(X, Y). One can then use an extension by definition to add the union operator ⋃ to the language of ZFC as: B = ⋃A ⟺ Union(B, A) ⟺ ∀x (x ∈ B ⟺ ∃y ∈ A (x ∈ y)), or equivalently: x ∈ ⋃A ⟺ ∃y ∈ A (x ∈ y). After the union operator has been defined, the binary union A ∪ B can be defined by showing that there exists a unique set C = {A, B} using the axiom of pairing, and defining A ∪ B = ⋃{A, B}. Then finite unions can be defined inductively as: ⋃_{i=1}^{0} A_i = ∅, and ⋃_{i=1}^{n} A_i = (⋃_{i=1}^{n-1} A_i) ∪ A_n. Binary union is an associative operation; that is, for any sets A, B, and C, A ∪ (B ∪ C) = (A ∪ B) ∪ C. Thus the parentheses may be omitted without ambiguity: either of the above can be written as A ∪ B ∪ C. Union is also commutative, so the sets can be written in any order. [9] The empty set is an identity element for the operation of union: A ∪ ∅ = A for any set A. The union operation is also idempotent: A ∪ A = A. All these properties follow from analogous facts about logical disjunction. Intersection distributes over union, A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), and union distributes over intersection: [2] A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C). The power set of a set U, together with the operations given by union, intersection, and complementation, is a Boolean algebra. In this Boolean algebra, union can be expressed in terms of intersection and complementation by the formula A ∪ B = (A^∁ ∩ B^∁)^∁, where the superscript ∁ denotes the complement in the universal set U. Alternatively, intersection can be expressed in terms of union and complementation in a similar way: A ∩ B = (A^∁ ∪ B^∁)^∁. These two expressions together are called De Morgan's laws. [10][11][12] The English word union comes from a Middle French term meaning "coming together", which in turn comes from the post-classical Latin unionem, "oneness". [13] The original term for union in set theory was the German Vereinigung, introduced in 1895 by Georg Cantor. [14] The English use of "union" of two sets in mathematics was established by at least 1912, when it was used by James Pierpont. [15][16] The symbol ∪ used for union was introduced by Giuseppe Peano in his Arithmetices principia in 1889, along with the notations for intersection ∩, set membership ∈, and subsets ⊂. [17]
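The inductive definition of finite unions and De Morgan's laws are easy to check mechanically on small examples. The sketch below (our own illustration; the universe U and the sample sets are arbitrary choices) does both:

```python
from functools import reduce

# Finite union defined inductively: the empty union is the empty set,
# and each step adds one more set (matching the definition in the text).
def finite_union(sets):
    return reduce(lambda acc, s: acc | s, sets, set())

print(finite_union([{1, 2}, {2, 3}, {4}]))   # {1, 2, 3, 4}
print(finite_union([]))                      # set() -- the nullary union

# De Morgan's laws inside a finite universe U: the complement of an
# intersection is the union of the complements, and vice versa.
U = set(range(10))
A, B = {1, 2, 3}, {3, 4, 5}
complement = lambda S: U - S
assert A | B == complement(complement(A) & complement(B))
assert A & B == complement(complement(A) | complement(B))
print("De Morgan's laws hold on this example")
```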
https://en.wikipedia.org/wiki/Union_(set_theory)
The Union for Ethical BioTrade (UEBT) is a nonprofit association that promotes the "Sourcing with Respect" of ingredients that come from biodiversity. Members commit to gradually ensuring that their sourcing practices promote the conservation of biodiversity, respect traditional knowledge and assure the equitable sharing of benefits all along the supply chain, following the Ethical BioTrade Standard. [1] Members also commit to the UEBT verification system, which includes undergoing independent third-party verification against the Ethical BioTrade Standard and developing a work plan for gradual compliance for all natural ingredients, as well as the commitment to continuous improvement once compliance is achieved. [2] UEBT is a membership-based organisation that was created in May 2007 [3] in Geneva, Switzerland. It was conceptualized in response to multiple developments. First of all, the Convention on Biological Diversity (CBD) acknowledged that additional efforts were needed to reach out to the private sector. The CBD recognized the strong link between business and biodiversity, as well as the dependency of industry on biodiversity, and it highlighted the key role the private sector plays in sustainable use and the need for efforts and tools to engage it. [4] In pursuit of the decisions that CBD parties had taken regarding private-sector engagement and the use of standards in this, the idea of UEBT was conceived. The creation of the Union for Ethical BioTrade also responded to the need expressed by small and medium-sized enterprises (SMEs) in developing countries for ways to differentiate biodiversity-based products in the market. Finally, UEBT built upon efforts initiated by the BioTrade Initiative of the United Nations Conference on Trade and Development (UNCTAD), which was created to contribute to making biodiversity a strategy for sustainable development. [5] On 8 May 2007, a meeting of the founding members took place and the articles of association were approved. To support the efforts of the Union, the CBD and UEBT signed a Memorandum of Understanding in December 2008 to encourage companies involved in BioTrade to adopt and promote good practices. [6] Although the CBD and CSD provide general principles, [7] not much practical guidance is currently available to help business advance the ethical sourcing of biodiversity. UEBT fills this gap and makes concrete contributions to biodiversity conservation and local sustainable development. UEBT aims to bring together actors committed to Ethical BioTrade, and promotes, facilitates and recognises ethical sourcing of biodiversity in line with the objectives of the CBD. [8] In order to achieve these goals, UEBT supports its members in the membership process and in their work towards implementing the Ethical BioTrade Standard. It also provides technical support, organizes regular conferences and workshops on Access and Benefit Sharing and biodiversity, publishes papers and reports, and advises organisations on the ethical sourcing of biodiversity. [9] UEBT manages the Ethical BioTrade Standard, which provides a basis for UEBT Trading Members to improve their biodiversity sourcing practices. UEBT members develop biodiversity management systems that further the implementation of the Ethical BioTrade Standard in all their own operations involving natural ingredients, as well as throughout their supply chains. [10] In joining UEBT, a company agrees to comply with the principles of Ethical BioTrade.
This means using practices that promote the sustainable use of natural ingredients, while ensuring that all contributors along the supply chain are paid fair prices and share the benefits derived from the use of biodiversity. The Ethical BioTrade Standard is mainstreamed in the operations of UEBT trading members, including for instance in research , innovation and development. [ 11 ] UEBT members undergo regular audits by independent third party verification bodies. UEBT works with the following Verification Bodies: Imaflora, SGS Qualifor, NaturaCert, Control Union Certifications, Ecocert SA, SGS del Peru S.A.C., Soil Association , IBD Certificações (IBD), Biotropico S.A., IMO do Brasil, and Rainforest Alliance . The Ethical BioTrade Standard builds on the seven Principles and Criteria [ 12 ] as developed by the UNCTAD BioTrade Initiative. First established in 2007, it was revised in April 2011, following the requirements of the ISEAL Alliance and the World Trade Organization (WTO). These requirements include the need to periodically review the Standard and ensure wide stakeholder involvement during the review process. [ 13 ] Every year the UEBT publishes the Biodiversity Barometer. [ 14 ] The Biodiversity Barometer contains the results of awareness studies commissioned by UEBT and provides insights on evolving biodiversity awareness among consumers and how the beauty industry reports on biodiversity. [ 15 ] [ 16 ] [ 17 ] [ 18 ] The Barometer is used as one of the indicators for measuring progress towards meeting the Aichi Biodiversity Target 1 in the Biodiversity Indicators Partnership 's Aichi Passport. [ 19 ] The Union for Ethical BioTrade has several types of trading members: brands, producers and processing companies, mainly in the global cosmetics, pharmaceutical and food sector. Current UEBT trading members include Natura , Weleda , Laboratoires Expanscience and Aroma Forest. UEBT also has affiliate members, which currently include the International Finance Corporation (IFC), MEB - Movimento Empresarial Brasileiro pela Biodiversidade, PhytoTrade Africa and Rongead. The Union for Ethical BioTrade is financed by membership fees and contributions from donors. The General Assembly acts as the main governing body of UEBT and meets once a year to elect the Members of the Board. The General Assembly is composed of all UEBT Members. Provisional, Trading and Affiliate Members have the right to vote and elect the Board of Directors, thereby approving the management of the organization. UNCTAD, the CBD and the International Finance Corporation act as observers to the Board. To support the functioning of UEBT, the Board has appointed various committees, including an appeals committee, a membership committee, and a standards committee.
https://en.wikipedia.org/wiki/Union_for_Ethical_Biotrade
The Union of Mining, Metallurgical and Chemical Workers ( Serbo-Croatian : Sindikat radnika rudarstva, metalurgije i kemijske industrije ) was a trade union representing workers in various related industries in Yugoslavia . The union was founded on 18 April 1959, when the Union of Metallurgical and Mining Workers merged with the Union of Chemical Industry Workers. Like all its predecessors, it affiliated to the Confederation of Trade Unions of Yugoslavia . [ 1 ] On formation, it had 239,826 members, and was led by Stevo Bevandic. [ 2 ] In 1963, it merged with the Union of Metal Workers , the Union of Printing Workers, the Union of Textile and Leather Workers , and the Union of Wood Industry Workers , to form the Union of Industrial and Mining Workers . [ 1 ]
https://en.wikipedia.org/wiki/Union_of_Mining,_Metallurgical_and_Chemical_Workers
The Union of Textiles, Chemicals and Paper ( German : Gewerkschaft Textil, Chemie, Papier , GTCP; French : Fédération du personnel du textile, de la chimie et du papier ) was a trade union representing workers in various industries in Switzerland. In 1903, various local unions of dyers, trimmers, weavers and embroiderers formed a loose federation. In 1908, this was reformed as the more centralised Swiss Textile Workers' Union . It affiliated to the Swiss Trade Union Federation in 1914, although this prompted most of the weavers and embroiderers, not yet working in factories, to leave and form an independent union, rejoining only in 1948. [ 1 ] By 1919, the union had 23,991 members, but this fell to 7,626 in 1925 and remained low for the following decades. In 1926, the Union of Paper and Graphical Assistants was dissolved, the paper workers transferring to the Swiss Textile Workers' Union. In 1937, the union renamed itself as the Union of Textile and Factory Workers , reflecting its interest in organising workers not previously organised by any union. The bulk of these workers were in the chemical industry, and recruitment was hugely successful, membership reaching 38,648 in 1946, during a period in which the union was involved in several strikes. [ 1 ] After 1947, the union avoided industrial action, and its membership steadily fell. In 1963, it renamed itself as the GTCP. [ 1 ] By 1991, it had only 11,581 members, of whom 70% worked in the chemical industry, 20% in textiles, and 10% in paper. In 1993, it merged with the Union of Construction and Wood , to form the Union of Construction and Industry . [ 2 ]
https://en.wikipedia.org/wiki/Union_of_Textiles,_Chemicals_and_Paper
The Union process was an above-ground shale oil extraction technology for production of shale oil, a type of synthetic crude oil. The process used a vertical retort in which heating causes decomposition of oil shale into shale oil, oil shale gas and spent residue. A distinctive feature of this process is that oil shale in the retort moves from the bottom upward to the top, countercurrent to the descending hot gases, by a mechanism known as a rock pump. The process technology was invented by the American oil company Unocal Corporation in the late 1940s and was developed over several decades. The largest oil shale retort ever built was the Union B type retort. Union Oil Company of California (Unocal) started its oil shale activities in the 1920s. In 1921, it acquired an oil shale tract in the Parachute Creek area of Colorado, in the southern Piceance Basin. [1] The development of the Union process began in the late 1940s, when the Union A retort was designed. [2] This technology was tested between 1954 and 1958 at the company-owned tract in Parachute Creek. [1][3][4] During these tests, up to 1,200 tonnes per day of oil shale was processed, yielding 800 barrels per day (130 m3/d) of shale oil, which was refined at a Colorado refinery. [1][5][6] More than 13,000 barrels (2,100 m3) of gasoline and fuels were produced. [1] This production was finally shut down in 1961 due to cost. [5][6] In 1974, the Union B process, which evolved from the Union A process, was developed. [3][6][7] In 1976, Union announced its plans to build a Union B demonstration plant. [3] Construction started in 1981 at Long Ridge in Garfield County, Colorado, and the plant started its operations in 1986. It was closed in 1991 after production of 5 million barrels (790×10^3 m3) of shale oil. [6][7][8] The Union process can be operated in two different combustion modes, direct and indirect. [7] The Union A (direct) process is similar to the gas combustion retort technology, classified as an internal combustion method, while the Union B (indirect) process is classified as an externally generated hot gas method. [3][9] The Union retort is a vertical shaft retort. The main difference from other vertical shaft retorts such as Kiviter, Petrosix, Paraho and Fushun is that crushed oil shale is fed through the bottom of the retort rather than the top. Lumps of oil shale 3.2 to 50.8 millimetres (0.13 to 2.00 in) in size are moved upwards through the retort by a solids pump (known as a "rock pump"). Hot gases, generated by internal combustion or circulated through the top of the retort, decompose the oil shale as they descend. [3] The pyrolysis occurs at temperatures of 510 °C (950 °F) to 540 °C (1,004 °F). [1] Condensed shale oil and gases are removed from the retort at the bottom. Part of the gases is recirculated for pyrolysis and for fueling combustion, while the remainder can be used as product gas. The spent shale is removed from the top of the retort. After cooling with water, it is conveyed to waste disposal. [3] The Union retort design has several advantages. The reducing atmosphere in the retort allows the removal of sulfur and nitrogen compounds through the formation of hydrogen sulfide and ammonia. Oil vapors are cooled by the raw oil, thus minimizing polymer formation among the hydrocarbon fractions. [1]
https://en.wikipedia.org/wiki/Union_process
Uniparental inheritance is a non-Mendelian form of inheritance that consists of the transmission of genotypes from one parental type to all progeny. That is, all the genes in offspring will originate from only the mother or only the father. This phenomenon is most commonly observed in eukaryotic organelles such as mitochondria and chloroplasts. This is because such organelles contain their own DNA and are capable of independent mitotic replication that does not undergo crossing over with the DNA from another parental type. Although uniparental inheritance is the most common form of inheritance in organelles, there is increasing evidence of diversity. Some studies have found doubly uniparental inheritance (DUI) and biparental transmission to exist in cells. Evidence suggests that even when there is biparental inheritance, crossing over doesn't always occur. Furthermore, there is evidence that the form of organelle inheritance has varied frequently over time. Uniparental inheritance can be divided into multiple subtypes based on the pathway of inheritance. [1][2] Although most eukaryotic sub-cellular parts do not have their own DNA and are not capable of replication independent of the nucleus, there are some exceptions, such as mitochondria and chloroplasts. Not only are these organelles capable of independent DNA replication, translation, and transcription, they are also commonly known to inherit genes from only one parental type. In the case of mitochondria, maternal inheritance is almost the exclusive form of inheritance. Although, during egg cell fertilization, mitochondria are brought into the fertilized cell both by the egg cell and by the sperm, the paternal mitochondria are usually marked with ubiquitin and are later destroyed. [3] Even if they are not destroyed, the DNAs of different mitochondria rarely recombine genetically with one another. Thus, mitochondria in most animals are inherited from the maternal type only. Like many other genetic concepts, the discovery of uniparental inheritance stems from the days of the Augustinian priest Gregor Johann Mendel. The soon-to-be "father of modern genetics" spent his days conducting hybridization experiments on pea plants (Pisum sativum) in his monastery's garden. During a period of seven years (1856 to 1863), Mendel cultivated and tested some 29,000 pea plants, which led him to deduce the two famous generalizations known as Mendel's Laws of Heredity. The first, the law of segregation, states that "when any individual produces gametes, the copies of a gene separate, so that each gamete receives only one copy". The second, the law of independent assortment, states that "alleles of different genes assort independently of one another during gamete formation". Although his work was published, it lay dormant until it was rediscovered in 1900 by Hugo de Vries and Carl Correns, but it was not until 1909 that non-Mendelian inheritance was even suggested. Carl Erich Correns and Erwin Baur, in separately conducted studies on Pelargonium and Mirabilis plants, observed a green-white variegation (later found to be the result of mutations in the chloroplast genome) that did not follow the Mendelian laws of inheritance. Nearly twenty years later, non-Mendelian inheritance of a mitochondrial mutation was also observed and, in the sixties, it was proven that chloroplasts and mitochondria have their own DNA and that they are capable of translation, transcription, and replication independent of the nucleus.
Soon after, the discoveries of uniparental and doubly uniparental inheritance came. [1] A well-studied example of uniparental inheritance involves the poky mutants of Neurospora crassa. The original poky mutant was isolated by Mitchell and Mitchell in 1952 as a spontaneously occurring slow-growing variant. [4] In genetic crosses, the poky phenotype was found to be maternally inherited. The protoperithecial parent is regarded as the female (maternal) parent in Neurospora. When poky females were crossed to wild-type males, all progeny had the poky phenotype. When wild-type females were crossed to poky males, all progeny had the wild-type phenotype. That the poky determinant is only passed through the female line suggested that the determinant resides in the maternal cytoplasm. It was eventually shown that the primary defect in the poky mutants is a deletion in the mitochondrial DNA sequence encoding the small ribosomal RNA subunit. [5]
https://en.wikipedia.org/wiki/Uniparental_inheritance
Unipept is an open source research tool for biodiversity analysis of metaproteomics samples. It also contains a tool to select peptides for use as biomarkers and a tool to compare the genomes of organisms based on their protein content. [2][3] The software is developed at Ghent University. Unipept consists of a web application and a stand-alone command line tool. The web application uses interactive data visualizations to explore datasets. The command line tool contains the same functionality, but is designed for use in automated data processing pipelines. [4]
https://en.wikipedia.org/wiki/Unipept
Uniporters, also known as solute carriers or facilitated transporters, are a type of membrane transport protein that passively transports solutes (small molecules, ions, or other substances) across a cell membrane. [1] A uniporter uses facilitated diffusion for the movement of solutes down their concentration gradient from an area of high concentration to an area of low concentration. [2] Unlike active transport, it does not require energy in the form of ATP to function. Uniporters are specialized to carry one specific ion or molecule and can be categorized as either channels or carriers. [3] Facilitated diffusion may occur through three mechanisms: uniport, symport, or antiport. The difference between each mechanism depends on the direction of transport; uniport is the only transport not coupled to the transport of another solute. [4] Uniporter carrier proteins work by binding to one molecule or substrate at a time. Uniporter channels open in response to a stimulus and allow the free flow of specific molecules. [2] The opening of uniporter channels may be regulated in several ways. Uniporters are found in mitochondria, plasma membranes and neurons. The uniporter in the mitochondria is responsible for calcium uptake. [1] The calcium channels are used for cell signaling and triggering apoptosis. The calcium uniporter transports calcium across the inner mitochondrial membrane and is activated when calcium rises above a certain concentration. [5] The amino acid transporters function in transporting neutral amino acids for neurotransmitter production in brain cells. [6] Voltage-gated potassium channels are also uniporters found in neurons and are essential for action potentials. [7] This channel is activated by a voltage gradient created by sodium-potassium pumps. When the membrane reaches a certain voltage, the channels open, which depolarizes the membrane, leading to an action potential being sent down the membrane. [8] Glucose transporters are found in the plasma membrane and play a role in transporting glucose. They help to bring glucose from the blood or extracellular space into cells, usually to be utilized in metabolic processes for generating energy. [9] Uniporters are essential for certain physiological processes in cells, such as nutrient uptake, waste removal, and maintenance of ionic balance. Early research in the 19th and 20th centuries on osmosis and diffusion provided the foundation for understanding the passive movement of molecules across cell membranes. [10] In 1855, the physiologist Adolf Fick was the first to define osmosis and simple diffusion as the tendency for solutes to move from a region of higher concentration to one of lower concentration, now well known as Fick's laws of diffusion. [11] Through the work of Charles Overton in the 1890s, the concept that the biological membrane is semipermeable became important to understanding the regulation of substances in and out of cells. [11] The discovery of facilitated diffusion by Wittenberg and Scholander suggested that proteins in the cell membrane aid in the transport of molecules. [12] In the 1960s and 1970s, studies on the transport of glucose and other nutrients highlighted the specificity and selectivity of membrane transport proteins. [13] Technological advancements in biochemistry helped isolate and characterize these proteins from cell membranes. Genetic studies on bacteria and yeast identified genes responsible for encoding transporters.
This led to the discovery of glucose transporters (GLUT proteins), with GLUT1 being the first to be characterized. [14] Identification of gene families encoding various transporters, such as the solute carrier (SLC) families, also advanced knowledge of uniporters and their functions. [14] Newer research focuses on techniques using recombinant DNA technology, electrophysiology and advanced imaging to understand uniporter functions. These experiments are designed to clone and express transporter genes in host cells to further analyze the three-dimensional structure of uniporters, as well as to directly observe the movement of ions through proteins in real time. [14] Mutations in uniporters have been linked to diseases such as GLUT1 deficiency syndrome, cystic fibrosis, Hartnup disease, primary hyperoxaluria and hypokalemic periodic paralysis. [15] The glucose transporter (GLUT) is a type of uniporter responsible for the facilitated diffusion of glucose molecules across cell membranes. [9] Glucose is a vital energy source for most living cells; however, due to its size, it cannot freely move through the cell membrane. [16] The glucose transporter is specialized in transporting glucose specifically across the membrane. The GLUT proteins have several isoforms, each distributed in different tissues and exhibiting different kinetic properties. [16] GLUTs are integral membrane proteins composed of 12 membrane-spanning α-helices. [16] The GLUT proteins are encoded by the SLC2 genes and categorized into three classes based on amino acid sequence similarity. [17] Humans have been found to express fourteen GLUT proteins. Class I includes GLUT1, one of the most studied isoforms, through GLUT4. [16] GLUT1 is found in various tissues such as red blood cells, the brain, and the blood-brain barrier and is responsible for basal glucose uptake. [16] GLUT2 is predominantly found in the liver, pancreas, and small intestine. [16] It plays an important role in insulin secretion from pancreatic beta cells. GLUT3, primarily found in the brain, neurons and placenta, has a high affinity for glucose, facilitating glucose uptake into neurons. [16] GLUT4 plays a role in insulin-regulated glucose uptake and is mainly found in insulin-sensitive tissues such as muscle and adipose tissue. [16] Class II includes GLUT5, found in the small intestine, kidney, testes, and skeletal muscle. [16] Unlike the other GLUTs, GLUT5 specifically transports fructose rather than glucose. [16] Glucose transporters allow glucose molecules to move down their concentration gradient from areas of high glucose concentration to areas of low concentration. This process often involves bringing glucose from the extracellular space or blood into the cell. The concentration gradient of glucose drives the process without the need for ATP. [18] When glucose binds to the glucose transporter, the protein undergoes a conformational change that carries the glucose across the membrane. Once the glucose unbinds, the protein returns to its original shape. The glucose transporter is essential for physiological processes with high energy demands in the brain, muscles, and kidneys, providing an adequate supply of energy substrate for metabolism.
Diabetes, an example of a condition that involves glucose metabolism, highlights the importance of the regulation of glucose uptake in disease management. [19] The mitochondrial calcium uniporter (MCU) is a protein complex located in the inner mitochondrial membrane that functions to take up calcium ions (Ca2+) into the matrix from the cytoplasm. [20] The transport of calcium ions is used in cellular function for regulating energy production in the mitochondria, cytosolic calcium signaling, and cell death. The uniporter becomes activated when cytoplasmic levels of calcium rise above 1 μM. [20] The MCU complex comprises four components: the pore-forming subunits, the regulatory subunits MICU1 and MICU2, and an auxiliary subunit, EMRE. [21] These subunits work together to regulate the uptake of calcium into the mitochondria. Specifically, the EMRE subunit functions in the transport of calcium, and the MICU subunits function in tightly regulating the activity of the MCU to prevent calcium overload. [21] Calcium is fundamental for signaling pathways in cells, as well as for cell death pathways. [21] The function of the mitochondrial uniporter is critical for maintaining cellular homeostasis. The MICU1 and MICU2 subunits form a heterodimer connected by a disulfide bridge. [20] When there are high levels of cytoplasmic calcium, the MICU1-MICU2 heterodimer undergoes a conformational change. [20] The heterodimer subunits show cooperative activation, which means Ca2+ binding to one MICU subunit in the heterodimer induces a conformational change in the other MICU subunit. The uptake of calcium is balanced by the sodium-calcium exchanger. [21] The L-type amino acid transporter 1 (LAT1) is a uniporter that mediates the transport of neutral amino acids such as L-tryptophan, leucine, histidine, proline, and alanine. [6] LAT1 favors the transport of amino acids with large branched or aromatic side chains. The amino acid transporter functions to move essential amino acids into the intestinal epithelium, placenta, and blood-brain barrier for cellular processes such as metabolism and cell signaling. [22] The transporter is of particular significance in the central nervous system, as it provides the amino acids necessary for protein synthesis and neurotransmitter production in brain cells. [22] Aromatic amino acids like phenylalanine and tryptophan are precursors for neurotransmitters like dopamine, serotonin, and norepinephrine. [22] LAT1 is a membrane protein of the SLC7 family of transporters and works in conjunction with the SLC3 family member 4F2hc to form a heterodimeric complex. [6] The heterodimer consists of a light chain and a heavy chain covalently bonded by a disulfide bond. The light chain carries out transport, while the heavy chain is needed to stabilize the dimer. [6] There is some controversy over whether LAT1 is a uniporter or an antiporter. The transporter has uniporter characteristics, transporting amino acids into cells in a unidirectional manner down the concentration gradient. However, it has recently been found that the transporter also has antiporter characteristics, exchanging neutral amino acids for abundant intracellular amino acids. [23] Over-expression of LAT1 has been found in human cancer and is associated with playing a role in cancer metabolism.
[24] The nucleoside transporters, or equilibrative nucleoside transporters (ENTs), are uniporters that transport nucleosides, nucleobases, and therapeutic drugs across the cell membrane. [25] Nucleosides serve as building blocks for nucleic acid synthesis and are key components of energy metabolism in creating ATP/GTP. [26] They also act as ligands for purinergic receptors, such as those for adenosine and inosine. ENTs allow the transport of nucleosides down their concentration gradient. They can also deliver nucleoside analogs to intracellular targets for the treatment of tumors and viral infections. [26] ENTs are part of the Major Facilitator Superfamily (MFS) and are suggested to transport nucleosides using a clamp-and-switch model. [26] In this model, the substrate first binds to the transporter, which leads to a conformational change that forms an occluded state (the clamp). Then, the transporter switches to face the other side of the membrane and releases the bound substrate (the switch). [26] ENTs have been found in protozoa and mammals. In humans, they have been identified as the ENT1-3 (hENT1-3) and ENT4 (hENT4) transporters. [25] ENTs are expressed across all tissue types, but certain ENT proteins are more abundant in specific tissues. hENT1 is found mostly in the adrenal glands, ovary, stomach and small intestine. [25] hENT2 is expressed mostly in neurological tissues and, to a lesser extent, in the skin, placenta, urinary bladder, heart muscle and gallbladder. [25] hENT3 is expressed highly in the cerebral cortex, lateral ventricle, ovary and adrenal gland. [25] hENT4 is more commonly known as the plasma membrane monoamine transporter (PMAT), as it facilitates the movement of organic cations and biogenic amines across the membrane. [25] Uniporters transport molecules or ions by passive transport across a cell membrane down the concentration gradient. Upon binding and recognition of a specific substrate molecule on one side of the membrane, a conformational change is triggered in the transporter protein. [27] The transporter protein changes its three-dimensional shape, capturing the substrate molecule within the protein's structure. The conformational change leads to the translocation of the substrate across the membrane to the other side. [27] On the other side of the membrane, the uniporter undergoes another conformational change that releases the substrate molecule. The uniporter then returns to its original conformation to bind another molecule for transport. [27] Unlike symporters and antiporters, uniporters transport a single molecule or ion in a single direction based on the concentration gradient. [28] The entire process depends on the substrate's concentration difference across the membrane as the driving force for the transport. [28] Cellular energy in the form of ATP is not required for this process. [28] Uniporters play an essential role in carrying out various cellular functions. Each uniporter is specialized to facilitate the transport of a specific molecule or ion across the cell membrane, supporting physiological roles such as nutrient uptake, waste removal, and the maintenance of ionic balance. [29] Mutations in genes encoding uniporters lead to dysfunctional transporter proteins being formed. This loss of function in uniporters causes disruption of cellular processes, which can lead to various diseases and disorders.
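The passive, saturable transport cycle described above is often modeled with simple carrier kinetics. The sketch below is a toy illustration, not a model from the article: it assumes Michaelis-Menten-style saturation of the carrier on each membrane face and treats net flux as influx minus efflux, with made-up parameter values.

```python
# Toy model of a uniporter: passive, saturable carrier-mediated transport.
# Net flux follows the concentration gradient and requires no ATP.
# VMAX and KM are illustrative values, not measured constants.

VMAX = 10.0   # maximal transport rate (arbitrary units)
KM = 2.0      # substrate concentration at half-maximal rate (mM)

def carrier_rate(conc):
    """Michaelis-Menten-style saturable rate for one membrane face."""
    return VMAX * conc / (KM + conc)

def net_flux(outside, inside):
    """Net inward flux: influx minus efflux; zero when concentrations match."""
    return carrier_rate(outside) - carrier_rate(inside)

print(net_flux(5.0, 1.0))   # positive: net movement into the cell
print(net_flux(1.0, 5.0))   # negative: net movement out of the cell
print(net_flux(3.0, 3.0))   # 0.0: no gradient, no net transport
```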
https://en.wikipedia.org/wiki/Uniporter
The Unique Material Identifier (UMID) is a SMPTE standard providing a stand-alone method for generating a unique label designed to be attached to media files and streams. [1] The UMID is standardized in SMPTE 330M. [2] There are two types of UMID: the basic UMID, and the extended UMID, which appends additional source metadata to the basic form.
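As a rough illustration of the idea (a fixed label plus instance and material numbers), the sketch below assembles a 32-byte basic-UMID-like value in Python. The layout follows the commonly described basic structure (12-byte SMPTE universal label, 1-byte length, 3-byte instance number, 16-byte material number), but the label bytes here are placeholders, not the normative values from SMPTE 330M, and the material-number method is just one of the allowed approaches.

```python
import os

# Sketch of a basic-UMID-like 32-byte label (illustrative only).
# Real UMIDs use the normative SMPTE universal label and the standard's
# defined methods for generating instance and material numbers.

UL_PLACEHOLDER = bytes(12)     # stand-in for the 12-byte SMPTE universal label
LENGTH_BYTE = bytes([0x13])    # remaining length: 0x13 = 19 bytes (3 + 16)

def make_umid_like() -> bytes:
    instance_number = bytes(3)            # zero for the first instance
    material_number = os.urandom(16)      # e.g., a random 16-byte value
    umid = UL_PLACEHOLDER + LENGTH_BYTE + instance_number + material_number
    assert len(umid) == 32
    return umid

print(make_umid_like().hex())
```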
https://en.wikipedia.org/wiki/Unique_Material_Identifier
The unique homomorphic extension theorem is a result in mathematical logic which formalizes the intuition that the truth or falsity of a statement can be deduced from the truth values of its parts. [1][2][3] Let A be a non-empty set, X a subset of A, F a set of functions on A, and X_+ the inductive closure of X under F. Let B be any non-empty set, and let G be a set of functions on B such that there is a function d : F → G that associates with each function f of arity n in F a function d(f) : B^n → B in G (d need not be a bijection). From this we can now build the concept of the unique homomorphic extension: if X_+ is a free set generated by X and F, then for each function h : X → B there is a single function ĥ : X_+ → B such that:

(1) ĥ(x) = h(x) for every x in X; and
(2) for each function f of arity n > 0 and all (x_1, …, x_n) ∈ X_+^n, ĥ(f(x_1, …, x_n)) = d(f)(ĥ(x_1), …, ĥ(x_n)).

The identities (1) and (2) show that ĥ is a homomorphism, specifically named the unique homomorphic extension of h. To prove the theorem, two requirements must be met: that the extension ĥ exists, and that it is unique. For existence, we define a sequence of functions h_i : X_i → B inductively, satisfying conditions (1) and (2) restricted to X_i. We set h_0 = h and, given h_i, we let h_{i+1} have the graph

graph(h_{i+1}) = { (f(x_1, …, x_n), d(f)(h_i(x_1), …, h_i(x_n))) : f ∈ F, (x_1, …, x_n) ∈ X_i^n − X_{i-1}^n } ∪ graph(h_i).

First we must be certain that this graph is actually functional. Since X_+ is a free set, the lemma gives f(x_1, …, x_n) ∈ X_{i+1} − X_i whenever (x_1, …, x_n) ∈ X_i^n − X_{i-1}^n (i ≥ 0), so we only have to check functionality for the left side of the union. Knowing that the elements of G are functions (again, as given by the lemma), the only way that (x, y) ∈ graph(h_{i+1}) and (x, z) ∈ graph(h_{i+1}) for some x ∈ X_{i+1} − X_i is if x = f(x_1, …, x_m) = f′(y_1, …, y_n) for some (x_1, …, x_m) ∈ X_i^m − X_{i-1}^m and (y_1, …, y_n) ∈ X_i^n − X_{i-1}^n, and for some generators f and f′ in F. Since f(X_+^m) and f′(X_+^n) are disjoint when f ≠ f′, the equality f(x_1, …, x_m) = f′(y_1, …, y_n) implies f = f′ and m = n. Since every f in F is injective on X_+^n (by freeness), we must have x_j = y_j for all j, 1 ≤ j ≤ n. Then y = z = g(x_1, …, x_n) with g = d(f), establishing functionality.
Before moving further we make use of a lemma on unions of compatible partial functions, by which ĥ = ⋃_{i≥0} h_i is a partial function. Since dom(ĥ) = ⋃ dom(h_i) = ⋃ X_i = X_+, ĥ is total on X_+. Furthermore, it is clear from the definition of the h_i that ĥ satisfies (1) and (2). To prove the uniqueness of ĥ, for any other function h′ that satisfies (1) and (2) it is enough to use a simple induction showing that ĥ and h′ agree on X_i for all i ≥ 0. This proves the unique homomorphic extension theorem. As an example, the theorem can be used for evaluating numeric expressions over the integers. Define F = {f−, f+, f∗}, where the generators build expressions as strings over an alphabet Σ:

f− : Σ* → Σ*, w ↦ −w
f+ : Σ* × Σ* → Σ*, (w_1, w_2) ↦ w_1 + w_2
f∗ : Σ* × Σ* → Σ*, (w_1, w_2) ↦ w_1 ∗ w_2

Let EXPR be the inductive closure of X under F, let B = ℤ, and let G = {Minus(−), Sum(−, −), Mult(−, −)}, with d : F → G given by d(f−) = Minus, d(f+) = Sum, d(f∗) = Mult. Then, for any assignment h : X → ℤ of values to the atomic expressions, the unique homomorphic extension ĥ : EXPR → ℤ recursively evaluates each expression. Similarly, taking B = {0, 1}, the unique homomorphic extension ĥ : X_+ → {0, 1} is a function that recursively computes the truth value of a proposition, extending the function h : X → {0, 1} that assigns a truth value to each atomic proposition, such that:

(1) ĥ(φ) = h(φ) for each atomic proposition φ
(2) ĥ((¬φ)) = NOT(ĥ(φ)) (negation)
ĥ((ρ ∧ θ)) = AND(ĥ(ρ), ĥ(θ)) (AND operator)
ĥ((ρ ∨ θ)) = OR(ĥ(ρ), ĥ(θ)) (OR operator)
ĥ((ρ → θ)) = IF-THEN(ĥ(ρ), ĥ(θ)) (IF-THEN operator)
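The propositional-logic instance of the theorem is, in effect, a recursive evaluator: ĥ is determined by structural recursion over formulas. The sketch below is our own illustration (the class names and the dictionary-based valuation are invented for the example), computing ĥ from an atomic valuation h:

```python
from dataclasses import dataclass

# Formulas as a freely generated inductive structure (the set X+).
@dataclass(frozen=True)
class Atom:    name: str
@dataclass(frozen=True)
class Not:     sub: object
@dataclass(frozen=True)
class And:     left: object; right: object
@dataclass(frozen=True)
class Or:      left: object; right: object
@dataclass(frozen=True)
class Implies: left: object; right: object

def extend(h):
    """Return the unique homomorphic extension h-hat of the atomic valuation h."""
    def h_hat(phi):
        if isinstance(phi, Atom):    return h[phi.name]          # clause (1)
        if isinstance(phi, Not):     return 1 - h_hat(phi.sub)   # clause (2): NOT
        if isinstance(phi, And):     return h_hat(phi.left) & h_hat(phi.right)
        if isinstance(phi, Or):      return h_hat(phi.left) | h_hat(phi.right)
        if isinstance(phi, Implies): return (1 - h_hat(phi.left)) | h_hat(phi.right)
        raise TypeError("not a formula")
    return h_hat

h = {"p": 1, "q": 0}                 # truth values of the atomic propositions
h_hat = extend(h)
print(h_hat(Implies(Atom("p"), Or(Atom("q"), Not(Atom("p"))))))  # 0
```

Because the formulas form a free structure (every formula decomposes in exactly one way), the recursion is well defined and the extension is unique, exactly as the theorem asserts.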
https://en.wikipedia.org/wiki/Unique_homomorphic_extension_theorem
Unique molecular identifiers (UMIs), or molecular barcodes (MBC), are short sequences or molecular "tags" added to DNA fragments in some next generation sequencing library preparation protocols to identify the input DNA molecule. These tags are added before PCR amplification, and can be used to reduce errors and quantitative bias introduced by the amplification. Applications include analysis of unique cDNAs to avoid PCR biases in iCLIP, [1] variant calling in ctDNA, gene expression in single-cell RNA-seq (scRNA-seq) [2][3] and haplotyping via linked reads.
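Collapsing PCR duplicates with UMIs amounts to grouping reads by (mapping position, UMI) and counting each group once. The sketch below is a deliberately simplified illustration of that idea (exact-match grouping only, with invented input values; real tools such as UMI-tools also correct for sequencing errors within the UMI itself):

```python
from collections import defaultdict

# Simplified UMI deduplication: reads that share a mapping position and a UMI
# are treated as PCR copies of one input molecule and counted once.
reads = [
    (1000, "ACGT"), (1000, "ACGT"), (1000, "ACGT"),  # 3 PCR copies, 1 molecule
    (1000, "TTAG"),                                  # different molecule, same site
    (2517, "ACGT"),                                  # same UMI, different site
]

molecules = defaultdict(set)
for pos, umi in reads:
    molecules[pos].add(umi)

for pos in sorted(molecules):
    print(pos, len(molecules[pos]), "unique molecule(s)")
# 1000 -> 2 molecules, 2517 -> 1: the 3x amplification bias is removed.
```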
https://en.wikipedia.org/wiki/Unique_molecular_identifier
In type theory, uniqueness of identity proofs (UIP) is a possible axiom for dependent type theory which asserts that any two proofs of the same equality are themselves equal. An equivalent and closely related axiom is Streicher's axiom K, which asserts that any proof of an equality x = x is equal to the trivial reflexivity proof. These axioms thus have the respective types

UIP : ∏ (x y : A) ∏ (p q : x = y), p = q
K : ∏ (x : A) ∏ (p : x = x), p = refl_x.

In intensional type theory with the standard definition of the identity type as an indexed inductive type, UIP and K are unprovable, as first proved by Martin Hofmann and Thomas Streicher using a groupoid model. [1] While this might at first seem surprising, since the identity type has just one constructor (reflexivity), the apparent contradiction is resolved by the fact that the identity type inductively defines the identity type family x = • rather than the single type x = y. According to the identity type's induction principle, given an element x : X and a type family C : ∏ (y : X) ∏ (p : x = y) Type, as well as an element c : C(x, refl_x), we may build a function f : ∏ (y : X) ∏ (p : x = y) C(y, p) satisfying the definitional equality f(x, refl_x) ≡ c. Informally speaking, the reason why this does not allow one to prove, e.g., axiom K is that the property to be proven would have to be C(y, p) :≡ (p = refl_x), but this is ill-typed, as p has type x = y while refl_x has type x = x. The axioms UIP and K are equivalent. [2] Indeed, K is a special case of UIP, while UIP can be deduced from K by equality induction on p. In Martin-Löf's original extensional type theory, UIP and K are provable. This is because extensional type theory includes the so-called equality reflection rule, which derives a judgmental equality from a propositional equality and thereby makes the function C(y, p) :≡ (p = refl_x) well-typed. However, this type theory is not used in practice in proof assistants because it removes parts of proofs from the proof terms. Martin Hofmann proved that, together with functional extensionality, UIP and K are in fact sufficient to capture the differences between intensional and extensional Martin-Löf type theory: extensional MLTT is conservative over intensional MLTT with the addition of the axioms of functional extensionality and UIP (or K). [3] In homotopy type theory, UIP has the interpretation that every type has a trivial homotopical structure, in which any two paths between the same points are homotopic (equivalently, as in axiom K, all loops are contractible). This is disprovable, as it contradicts the univalence axiom. [4] However, many naturally occurring types X do satisfy the property that any two equality proofs between elements of X are equal. Such types are called sets, or h-sets, or homotopy 0-types, or 0-truncated types.
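For a concrete point of comparison (our own aside, not a claim from the article): in Lean 4, whose Prop universe has definitional proof irrelevance, both statements are provable outright, in contrast to intensional MLTT as discussed above. A minimal sketch:

```lean
-- In Lean 4, equality proofs live in Prop, which enjoys definitional
-- proof irrelevance, so UIP and axiom K hold by `rfl`. In intensional
-- MLTT or HoTT-style theories, neither is provable.

theorem uip {A : Sort u} {x y : A} (p q : x = y) : p = q :=
  rfl  -- proof irrelevance makes the two proofs definitionally equal

theorem axiomK {A : Sort u} {x : A} (p : x = x) : p = rfl :=
  rfl  -- every loop equals the trivial reflexivity proof
```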
https://en.wikipedia.org/wiki/Uniqueness_of_identity_proofs
The uniqueness theorem for Poisson's equation states that, for a large class of boundary conditions, the equation may have many solutions, but the gradient of every solution is the same. In the case of electrostatics, this means that there is a unique electric field derived from a potential function satisfying Poisson's equation under the boundary conditions. The general expression for Poisson's equation in electrostatics is ∇²φ = −ρ_f/ε₀, where φ is the electric potential and ρ_f is the charge distribution over some region V with boundary surface S. The uniqueness of the solution can be proven for a large class of boundary conditions as follows. Suppose that we claim to have two solutions of Poisson's equation, φ₁ and φ₂. Then φ = φ₂ − φ₁, the difference of the two solutions, satisfies Laplace's equation, the special case of Poisson's equation with zero charge density:

∇²φ = 0.   (1)

Applying the vector differential identity ∇·(φ∇φ) = (∇φ)² + φ∇²φ and using (1), the second term vanishes throughout the region, so (∇φ)² = ∇·(φ∇φ). Taking the volume integral over the region V and applying the divergence theorem, we obtain

∮_S φ ∇φ · dS = ∫_V (∇φ)² dV.   (2)

We now sequentially consider three distinct boundary conditions: a Dirichlet boundary condition, a Neumann boundary condition, and a mixed boundary condition. First, consider the case where Dirichlet boundary conditions are specified as φ = 0 on the boundary of the region. If the Dirichlet boundary condition is satisfied on S by both solutions (i.e., if φ = 0 on the boundary), then the left-hand side of (2) is zero. Consequently, ∫_V (∇φ)² dV = 0. Since this is the volume integral of a non-negative quantity (the squared term), we must have ∇φ = 0 at all points. Further, because the gradient of φ is everywhere zero and φ is zero on the boundary, φ must be zero throughout the whole region. Finally, since φ = 0 throughout the whole region and φ = φ₂ − φ₁, we have φ₁ = φ₂ throughout the whole region. This completes the proof that the solution of Poisson's equation with a Dirichlet boundary condition is unique. Second, consider the case where Neumann boundary conditions are specified as ∇φ = 0 on the boundary of the region. If the Neumann boundary condition is satisfied on S by both solutions, then the left-hand side of (2) is zero again. Consequently, as before, we find that ∇φ = 0 at all points. Further, because the gradient of φ is everywhere zero within the volume V, φ must be constant, though not necessarily zero, throughout the whole region. Finally, since φ = k throughout the whole region and φ = φ₂ − φ₁, we have φ₁ = φ₂ − k throughout the whole region. This completes the proof that the solution of Poisson's equation with a Neumann boundary condition is unique up to an additive constant. Mixed boundary conditions may be given as long as either the gradient or the potential is specified at each point of the boundary. Boundary conditions at infinity also work, since the surface integral in (2) still vanishes at large distances: the integrand decays faster than the surface area grows.
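A quick numerical illustration of the Dirichlet case (our own toy example, not from the article): discretize φ'' = −ρ on a one-dimensional grid, fix the endpoint values, and relax from two different initial guesses. Both runs converge to the same potential, as uniqueness predicts.

```python
import numpy as np

# 1-D finite-difference check of the uniqueness theorem (illustrative only).
# Solve phi'' = -rho with fixed (Dirichlet) endpoint values, starting
# Gauss-Seidel relaxation from two different initial guesses.

n = 51
rho = np.ones(n)                  # a uniform charge density (arbitrary units)
h = 1.0 / (n - 1)

def solve(phi0, sweeps=10000):
    phi = phi0.copy()
    phi[0], phi[-1] = 0.0, 1.0    # Dirichlet boundary conditions
    for _ in range(sweeps):       # Gauss-Seidel relaxation
        for i in range(1, n - 1):
            phi[i] = 0.5 * (phi[i - 1] + phi[i + 1] + h * h * rho[i])
    return phi

a = solve(np.zeros(n))            # guess 1: all zeros
b = solve(np.random.rand(n))      # guess 2: random noise
print(np.max(np.abs(a - b)))      # ~0: both relax to the unique solution
```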
https://en.wikipedia.org/wiki/Uniqueness_theorem_for_Poisson's_equation
Unit 1855 was a unit for human experimentation that belonged to the central Epidemic Prevention and Water Purification Department of the North China Army of the Imperial Japanese Army, stationed in Beijing between 1938 and 1945. Unit 1855 was established by the North China Army in 1938. [1] The unit was located in a facility not far from the Temple of Heaven in Beijing, and had a staff of about 2,000 men. [2] The unit was commanded by the surgeon Col. Nishimura Yeni, who reported directly to Shirō Ishii at Unit 731. [3] According to the testimony of the Korean Choi Hyung Shi, who worked as an interpreter with Unit 1855 between 1942 and 1943, the unit conducted experiments with plague, cholera and typhus on Chinese people and Korean immigrants to China. It has been estimated that Unit 1855 killed about 1,000 people between 1938 and 1945. [5] The unit evacuated its facilities in Beijing during the Japanese defeat in 1945, and the Chinese entered the building, which was not destroyed and was still standing as of 1996. Unit 1855 had a branch in Chinan, which was a combination of prison and experiment center. [6]
https://en.wikipedia.org/wiki/Unit_1855
Unit 516 (第五一六部隊) was a top-secret Japanese chemical weapons facility, operated by the Kempeitai, in Qiqihar, Japanese-occupied northeast China. The name Unit 516 was a code name (Tsūshōgō) of the unit. It was officially called the Kwantung Army Chemical Weapons Section and operated underneath Unit 731. [1] An estimated 700,000 (Japanese estimate) to 2,000,000 (Chinese estimate) Japanese-produced chemical weapons were buried in China. Until 1995, Japan had refused to acknowledge that it dumped chemical weapons in the Nen River between Heilongjiang and Hulunbei'er, leaving huge amounts behind. At the end of World War II, the Imperial Japanese Army buried some of its chemical weapons in China, but most were confiscated by the Soviet Red Army, the People's Liberation Army and the Kuomintang Army, along with other weapons. The Soviet Union later handed over these weapons to China (ROC), which then buried them. Japanese chemical weapons were later found mixed with Soviet and Chinese chemical weapons. The Japanese National Institute for Defense Studies has a record of Japanese weapons confiscated by the Kuomintang Army, along with a list of the types of chemical weapons. No corresponding confiscation records from the ROC or Russia have been found. However, no country has records of the locations of the buried chemical weapons. China has started gathering these abandoned weapons for destruction and burial, and they are currently buried in remote Dunhua County, in Haerbaling, Jilin (吉林) province. One of the focuses of the Chemical Weapons Convention was to assign responsibility for the destruction of old chemical weapons in China. The convention was signed in 1993, and according to it, all chemical weapons created after 1925 must be destroyed by the originating country. Under the convention, Japan is building a factory in China to destroy chemical weapons.
https://en.wikipedia.org/wiki/Unit_516
Unit 731 ( Japanese : 731部隊 , Hepburn : Nana-san-ichi Butai ) , [ note 1 ] short for Manchu Detachment 731 and also known as the Kamo Detachment [ 4 ] : 198 and the Ishii Unit , [ 6 ] was a covert biological and chemical warfare research and development unit of the Imperial Japanese Army that engaged in lethal human experimentation and biological weapons manufacturing during the Second Sino-Japanese War (1937–1945) and World War II . Estimates vary as to how many were killed. Between 1936 and 1945, roughly 14,000 victims were murdered in Unit 731. [ 7 ] It is estimated that at least 200,000 individuals died of infectious illnesses caused by the activities of Unit 731 and its affiliated research facilities. [ 1 ] It was based in the Pingfang district of Harbin , the largest city in the Japanese puppet state of Manchukuo (now Northeast China ) and had active branch offices throughout China and Southeast Asia . Established in 1936, Unit 731 was responsible for some of the most notorious war crimes committed by the Japanese armed forces . It routinely conducted tests on people who were dehumanized and internally referred to as "logs". Victims were further dehumanized by being confined in facilities referred to as "log cabins". Experiments included disease injections, controlled dehydration, biological weapons testing, hypobaric pressure chamber testing, vivisection , organ harvesting , amputation , and standard weapons testing. Victims included not only kidnapped men, women (including pregnant women), and children but also babies born from the systemic rape perpetrated by the staff inside the compound. The victims also came from different nationalities, with the majority being Chinese and a significant minority being Russian. Additionally, Unit 731 produced biological weapons that were used in areas of China not occupied by Japanese forces, which included Chinese cities and towns, water sources, and fields. All prisoners within the compound were killed to conceal evidence, and there were no documented survivors. Originally set up by the military police of the Empire of Japan , Unit 731 was taken over and commanded until the end of the war by General Shirō Ishii , a combat medic officer. The facility itself was built in 1935 as a replacement for the Zhongma Fortress , a prison and experimentation camp. Ishii and his team used it to expand their capabilities. The program received generous support from the Japanese government until the end of the war in 1945. On 28 August 2002, the Tokyo District Court ruled that Japan had committed biological warfare in China and consequently was responsible for the deaths of many residents. [ 8 ] [ 9 ] Both the Soviet Union and the United States gathered data from the Unit after the fall of Japan. While twelve Unit 731 researchers arrested by Soviet forces were tried at the December 1949 Khabarovsk war crimes trials , they received light sentences of two to 25 years in Siberian labor camps in exchange for the information they held. [ 10 ] The Soviet Union built its bioweapons facility in Sverdlovsk using documentation captured from the Unit in Manchuria. [ 11 ] [ 12 ] The researchers captured by the US military were secretly given immunity . [ 13 ] The Harry S. Truman administration helped cover up the human experimentation and paid stipends to the perpetrators.
[ 1 ] [ 14 ] [ 15 ] The cover-up of Japanese war crimes and biological warfare capabilities was motivated both by an interest in the data collected by the Japanese and by a desire to prevent the Soviets from gaining information. However, the information obtained was not of significant value, as the U.S. biological warfare program had surpassed the capabilities of Unit 731 by 1943. [ 16 ] [ 17 ] Japan initiated its biological weapons program during the 1930s in part because of the prohibition of biological weapons in interstate conflicts by the Geneva Protocol of 1925: Japanese planners reasoned that the very existence of the ban attested to the effectiveness of biological agents as weapons. [ 1 ] Japan's occupation of Manchuria began in 1931 after the Japanese invasion of Manchuria . [ 18 ] Japan decided to build Unit 731 in Manchuria because the occupation not only gave the Japanese the advantage of separating the research station from the home islands, but also gave them access to as many Chinese individuals as they wanted for use as test subjects. [ 18 ] They viewed the Chinese as no-cost assets, and hoped this would give them a competitive advantage in biological warfare. [ 18 ] Most of the victims were Chinese, but many victims were of other nationalities. [ 1 ] These facilities contained more than just medical research and experimentation areas; they also included spaces for detaining victims, essentially functioning as a prison. [ 19 ] The research and experimentation rooms were constructed around the detention area, allowing researchers to conduct their daily work while monitoring the prisoners. [ 19 ] Founded in 1936, Unit 731 expanded to include 3,000 staff members, 150 structures, and the capacity to detain up to 600 prisoners concurrently for experimental purposes. [ 20 ] Unit 731 was a clandestine division of Japan's Kwantung Army based in Manchuria during World War II. Led by Lieutenant General Shirō Ishii , the organization dedicated to the advancement of biological weaponry within the imperial army was commonly referred to as the Ishii Network. [ 21 ] The Ishii Network was headquartered at the Epidemic Prevention Research Laboratory, established in 1932 at the Japanese Army Military Medical School in Tokyo, Japan . Unit 731 was the first among several covert units established as offshoots of the research lab, serving as field stations and experimental sites for advancing biological warfare techniques. These efforts culminated in the experimental deployment of biological weapons on Chinese cities, a direct breach of the 1925 Geneva Protocol prohibiting the use of biological and chemical weapons in warfare. Participants in these activities were aware of the violations and recognized the inhumanity of using human subjects in laboratory experiments, which is why Unit 731 and the other units were established in secret. [ 21 ] The Epidemic Prevention Research Laboratory was established under the direction of Shirō Ishii following his return from a two-year exploration of American and European research institutions, and, with the endorsement of high-ranking military officials, was dedicated to developing biological weapons. Ishii aimed to create biological weapons with humans as their intended victims, and Unit 731 was formed specifically to pursue this objective. [ 21 ] Ishii organized a secret research group, the "Tōgō Unit," for chemical and biological experimentation in Manchuria.
[ 21 ] In 1936, Emperor Hirohito issued a decree authorizing the expansion of the unit and its integration into the Kwantung Army as the Epidemic Prevention Department. [ 22 ] It was divided at that time into the "Ishii Unit" and "Wakamatsu Unit", with a base in Xinjing . From August 1940 on, the units were known collectively as the "Epidemic Prevention and Water Purification Department of the Kwantung Army" or "Unit 731" for short. [ 23 ] One of Ishii's main supporters inside the army was Colonel Chikahiko Koizumi , who later served as Japan's Health Minister from 1941 to 1945. Koizumi had joined a secret poison gas research committee in 1915, during World War I , when he and other Imperial Japanese Army officers were impressed by the successful German use of chlorine gas at the Second Battle of Ypres , in which the Allies suffered 6,000 deaths and 15,000 wounded as a result of the chemical attack. [ 24 ] [ 25 ] Unit Tōgō was set into motion in the Zhongma Fortress , a prison and experimentation camp in Beiyinhe, a village 100 kilometers (62 mi) south of Harbin on the South Manchuria Railway . The prisoners brought to Zhongma included common criminals , captured bandits, anti-Japanese partisans, as well as political prisoners and people rounded up on false charges by the Kempeitai . Prisoners were generally well fed on a diet of rice or wheat , meat , fish , and occasionally even alcohol in order to be in normal health at the beginning of experiments. Then, over several days, prisoners were eventually drained of blood and deprived of nutrients and water. Their deteriorating health was recorded. Some were also vivisected . Others were deliberately infected with plague bacteria and other microbes . [ 26 ] A prison break in the autumn of 1934, which jeopardized the facility's secrecy, and an explosion in 1935 (believed to be sabotage) led Ishii to shut down Zhongma Fortress. He then received authorization to move to Pingfang, approximately 24 kilometers (15 mi) south of Harbin, to set up a new, much larger facility. [ 27 ] In addition to the establishment of Unit 731, the decree also called for the creation of an additional biological warfare development unit, called the Kwantung Army Military Horse Epidemic Prevention Workshop (later referred to as Manchuria Unit 100 ), and a chemical warfare development unit called the Kwantung Army Technical Testing Department (later referred to as Manchuria Unit 516 ). After the Japanese invasion of China in 1937, sister chemical and biological warfare units were founded in major Chinese cities and were referred to as Epidemic Prevention and Water Supply Units. Detachments included Unit 1855 in Beijing , Unit Ei 1644 in Nanjing , Unit 8604 in Guangzhou , and later Unit 9420 in Singapore . All of these units comprised Ishii's network, which, at its height in 1939, oversaw over 10,000 personnel. [ 28 ] Medical doctors and professors from Japan were attracted to join Unit 731 both by the rare opportunity to conduct human experimentation and the Army's strong financial backing. [ 29 ] The military police and the Special Services Agency were responsible for finding victims to be test subjects for the unit, while a group of physicians were responsible for maintaining healthy victims and dispatching them for experimentations. [ 19 ] Not all individuals sent to Unit 731 underwent experiments; these experiments were reserved for healthy individuals, and once accepted into the program, the preservation of their health became a top priority. 
[ 19 ] Human experiments involved intentionally infecting captives, especially Chinese prisoners of war and civilians, with disease-causing agents and exposing them to bombs designed to disperse infectious substances upon contact with the skin. There are no records indicating any survivors from these experiments; those who did not die from infection were murdered for autopsy analysis. [ 20 ] After human experiments, researchers commonly used either potassium cyanide or chloroform to kill survivors. [ 30 ] According to American historian Sheldon H. Harris : The Togo Unit employed gruesome tactics to secure specimens of select body organs. If Ishii or one of his co-workers wished to do research on the human brain, then they would order the guards to find them a useful sample. A prisoner would be taken from his cell. Guards would hold him while another guard would smash the victim's head open with an ax. His brain would be extracted and sent off to the pathologist, and the body then to the crematorium for the usual disposal. [ 31 ] Nakagawa Yonezo, professor emeritus at Osaka University , studied at Kyoto University during the war. While he was there, he watched footage of human experiments and executions from Unit 731. He later testified about the playfulness of the experimenters: [ 32 ] Some of the experiments had nothing to do with advancing the capability of germ warfare , or of medicine. There is such a thing as professional curiosity: 'What would happen if we did such and such?' What medical purpose was served by performing and studying beheadings? None at all. That was just playing around. Professional people, too, like to play. Prisoners were injected with diseases, disguised as vaccinations , [ 33 ] to study their effects. To study the effects of untreated venereal diseases , male and female prisoners were deliberately infected with syphilis and gonorrhea , then studied. [ 34 ] A special project, codenamed Maruta , used human beings for experiments. Test subjects were gathered from the surrounding population and sometimes euphemistically referred to as "logs" ( 丸太 , maruta ) , used in such contexts as "How many logs fell?" This term originated as a joke on the part of the staff because the official cover story for the facility given to local authorities was that it was a lumber mill . According to a junior uniformed civilian employee of the Imperial Japanese Army working in Unit 731, the project was internally called "Holzklotz", from the German word for log. [ 35 ] In a further parallel, the corpses of "sacrificed" subjects were disposed of by incineration . [ 36 ] Researchers in Unit 731 also published some of their results in peer-reviewed journals , writing as though the research had been conducted on nonhuman primates called "Manchurian monkeys" or "long-tailed monkeys". [ 37 ] At the age of 14, at the encouragement of a former school teacher, Hideo Shimizu joined the fourth group of minors assigned to Unit 731. [ 38 ] He recalled that he was brought to a specimen room where jars of various heights, with some reaching the height of an adult, were stored. [ 38 ] The jars held body parts from humans preserved in formalin , such as heads and hands. [ 38 ] There was also the body of a pregnant woman with a large belly, the lower part of which was opened to reveal a fetus with hair. Shimizu discovered that the term "logs" was used dehumanizingly to refer to prisoners. He also learned that the prisoners were further dehumanized by being held in facilities referred to as "log cabins".
[ 38 ] Thousands of men, women, children, and infants interned at prisoner of war camps were subjected to vivisection , often performed without anesthesia and usually lethal. [ 39 ] [ 40 ] In a video interview, former Unit 731 member Okawa Fukumatsu admitted to having vivisected a pregnant woman. [ 41 ] Vivisections were performed on prisoners after infecting them with various diseases. Researchers performed invasive surgery on prisoners, removing organs to study the effects of disease on the human body. [ 42 ] Prisoners had limbs amputated in order to study blood loss . Limbs removed were sometimes reattached to the opposite side of victims' bodies. Some prisoners had their stomachs surgically removed and their esophagus reattached to the intestines . Parts of organs, such as the brain, lungs, and liver, were removed from others. [ 40 ] Imperial Japanese Army surgeon Ken Yuasa said that practicing vivisection on human subjects was widespread even outside Unit 731, [ 43 ] estimating that at least 1,000 Japanese personnel were involved in the practice in mainland China. [ 44 ] Yuasa said that when he performed vivisections on captives, they were "all for practice rather than for research", and that such practices were "routine" among Japanese doctors stationed in China during the war. [ 36 ] The New York Times interviewed a former member of Unit 731. Insisting on anonymity, the former Japanese medical assistant recounted his first experience in vivisecting a live human being, who had been deliberately infected with the plague , for the purpose of developing "plague bombs" for war. "The fellow knew that it was over for him, and so he didn't struggle when they led him into the room and tied him down, but when I picked up the scalpel, that's when he began screaming. I cut him open from the chest to the stomach, and he screamed terribly, and his face was all twisted in agony. He made this unimaginable sound, he was screaming so horribly. But then finally he stopped. This was all in a day's work for the surgeons, but it really left an impression on me because it was my first time." [ 45 ] Other sources describe it as usual practice in the Unit for surgeons to stuff a rag (or medical gauze) into the mouths of prisoners before commencing vivisection, in order to stifle any screaming. [ 46 ] Unit 731 and its affiliated units ( Unit 1644 and Unit 100 , among others) were involved in research, development and experimental deployment of epidemic-creating biological weapons in assaults against the Chinese populace (both military and civilian) throughout World War II. [ 7 ] By 1939, Ishii had condensed his laboratory discoveries to six potent pathogens : anthrax , typhoid , paratyphoid , glanders , dysentery , and plague -infected human fleas . These agents were robust enough to ignite epidemics of considerable magnitude and resilient enough for aerial dispersal. This marked the initiation of the latter phase of Ishii's elaborate scheme: conducting field trials through military expeditions on unsuspecting civilians, aiming to devise a method of dissemination that would efficiently spread the pathogens in optimal concentrations for maximum devastation. His experiments involved the development of biodegradable bombs housing live rats and fleas infected with diseases, designed to explode mid-air, ensuring the safe descent of the infected creatures to the ground. Additionally, he deployed birds and bird feathers contaminated with anthrax from low-flying aircraft.
[ 7 ] Plague-infected fleas, bred in the laboratories of Unit 731 and Unit 1644, were spread by low-flying airplanes over Chinese cities, including coastal Ningbo and Changde , Hunan Province , in 1940 and 1941. [ 6 ] These operations killed tens of thousands with bubonic plague epidemics. An expedition to Nanjing involved spreading typhoid and paratyphoid germs into the wells , marshes , and houses of the city, as well as infusing them into snacks distributed to locals. Epidemics broke out shortly after, to the elation of many researchers, who concluded that paratyphoid fever was "the most effective" of the pathogens. [ 47 ] [ 48 ] : xii, 173 The Library of Congress holds a set of three declassified documents from Unit 731, each more than 100 pages long, translated from Japanese to English. These documents provided comprehensive clinical records about the daily progression of various pathogens within the bodies of helpless prisoners who were experimented on by Japanese doctors. [ 49 ] Japanese soldiers provided testimony indicating that the research program had the capability to manufacture substantial quantities of biological agents on a monthly basis: 300 kg of plague, 500–700 kg of anthrax, 800–900 kg of typhoid, and 1,000 kg of cholera . Despite the significant production volumes, even small amounts of these bacteria possessed the potential to cause severe harm and fatalities. [ 50 ] Ishii determined that fleas were an efficient carrier for transmitting plague, leading Unit 731 to focus on breeding significant numbers of fleas. To achieve this goal, Unit 731 had approximately 4,500 flea incubators, each capable of producing at least 45 kg of fleas per cycle. The substantial quantities of plague bacteria and fleas generated, combined with the severe illness and death rates associated with plague infection, illustrate the formidable biological warfare production capabilities wielded by the Japanese. Japanese researchers had the required materials to apply the scientific method in conducting experiments involving inoculation and the creation of airborne bacterial bombs. [ 50 ] Food items emerged as the preferred delivery mechanism for bacterial transmission. The unit maintained a stock of uncontaminated fruits. In a specific experiment conducted by Unit 731, typhoid was introduced into melons and cantaloupes. Following the contamination process, the bacterial density was measured. Once a target density was reached, the infected fruit was distributed to a small group of prisoners, with the objective of spreading typhoid throughout the entire group. [ 50 ] Unit 731 conducted biological warfare field trials by attacking many Chinese civilian populations. During the period of 1940 to 1943, Japanese scientists found that using bacterial bombs for transmission was not effective, but they did find success in utilizing planes to spray microorganisms as a means of biological warfare delivery. Unit 100 also deployed aerial spraying methods akin to those examined by Unit 731. [ 50 ] At least 12 large-scale bioweapon field trials were carried out, and at least 11 Chinese cities were attacked with biological agents. An attack on Changde in 1941 reportedly led to approximately 10,000 biological casualties and 1,700 deaths among ill-prepared Japanese troops, in most cases due to cholera. [ 51 ] Japanese researchers performed tests on prisoners with bubonic plague , cholera, smallpox , botulism , and other diseases.
[ 52 ] This research led to the development of the defoliation bacilli bomb and the flea bomb used to spread bubonic plague. [ 53 ] Some of these bombs were designed with porcelain shells, an idea proposed by Ishii in 1938. These bombs enabled Japanese soldiers to launch biological attacks, infecting agriculture, reservoirs , wells, and other areas with anthrax- and plague-carrying fleas, typhoid , cholera, and other deadly pathogens. During biological bomb experiments, researchers dressed in protective suits would examine the dying victims. Infected food supplies and clothing were dropped by airplane into areas of China not occupied by Japanese forces. In addition, poisoned food and candy were given to unsuspecting victims. Plague fleas, infected clothing, and infected supplies encased in bombs were dropped on various targets. The resulting cholera , anthrax , and plague were estimated to have killed at least 400,000 Chinese civilians. [ 54 ] Tularemia was also tested on Chinese civilians. [ 55 ] Due to pressure from numerous accounts of the biowarfare attacks, Chiang Kai-shek sent a delegation of army and foreign medical personnel in November 1941 to document evidence and treat the afflicted. A report on the Japanese use of plague-infected fleas on Changde was made widely available the following year but was not addressed by the Allied Powers until Franklin D. Roosevelt issued a public warning in 1943 condemning the attacks. [ 56 ] [ 57 ] In December 1944, the Japanese Navy explored the possibility of attacking cities in California with biological weapons, known as Operation PX or Operation Cherry Blossoms at Night. The plan for the attack involved Seiran aircraft launched by Sentoku submarine aircraft carriers upon the West Coast of the United States—specifically, the cities of San Diego, Los Angeles, and San Francisco. The planes would spread weaponized bubonic plague , cholera , typhus , dengue fever , and other pathogens in a biological terror attack upon the population. The submarine crews would infect themselves and run ashore in a suicide mission. [ 58 ] [ 59 ] [ 60 ] [ 61 ] Planning for Operation PX was finalized on March 26, 1945, but shelved shortly thereafter due to the strong opposition of Chief of General Staff Yoshijirō Umezu . Umezu later explained his decision as follows: "If bacteriological warfare is conducted, it will grow from the dimension of war between Japan and America to an endless battle of humanity against bacteria. Japan will earn the derision of the world." [ 62 ] Human targets were used to test grenades positioned at various distances and in various positions. Flamethrowers were tested on people. [ 63 ] Victims were also tied to stakes and used as targets to test pathogen-releasing bombs , chemical weapons , shrapnel bombs with varying amounts of fragments, and explosive bombs as well as bayonets and knives. To determine the best course of treatment for varying degrees of shrapnel wounds sustained on the field by Japanese soldiers, Chinese prisoners were exposed to direct bomb blasts. They were strapped, unprotected, to wooden planks that were staked into the ground at increasing distances around a bomb that was then detonated. It was surgery for most, autopsies for the rest. Army Engineer Hisato Yoshimura conducted experiments by taking captives outside, dipping various appendages into water of varying temperatures, and allowing the limbs to freeze .
[ 66 ] Once frozen, Yoshimura would strike their affected limbs with a short stick, "emitting a sound resembling that which a board gives when it is struck". [ 67 ] Ice was then chipped away, with the affected area being subjected to various treatments. Military personnel of the Unit referred to Yoshimura as a "scientific devil" and a "cold-blooded animal" due to his strictness and involvement in mass killings and inhumane scientific tests, which included soaking the fingers of a three-day-old child in water containing ice and salt. [ 68 ] Naoji Uezono, a member of Unit 731, described in a 1980s interview a grisly scene where Yoshimura had "two naked men put in an area 40–50 degrees below zero and researchers filmed the whole process until [the subjects] died. [The subjects] suffered such agony they were digging their nails into each other's flesh." [ 69 ] Yoshimura's lack of remorse was evident in an article he wrote for the Japanese Journal of Physiology in 1950 in which he admitted to using 20 children and a three-day-old infant in experiments which exposed them to zero-degree-Celsius ice and salt water. [ 70 ] Although this article drew criticism, Yoshimura denied any guilt when contacted by a reporter from the Mainichi Shimbun . [ 71 ] Yoshimura developed a "resistance index of frostbite" based on the mean temperature 5 to 30 minutes after immersion in freezing water, the temperature of the first rise after immersion, and the time until the temperature first rises after immersion. In a number of separate experiments it was then determined how these parameters depend on the time of day a victim's body part was immersed in freezing water, the surrounding temperature and humidity during immersion, how the victim had been treated before the immersion ("after keeping awake for a night", "after hunger for 24 hours", "after hunger for 48 hours", "immediately after heavy meal", "immediately after hot meal", "immediately after muscular exercise", "immediately after cold bath", "immediately after hot bath"), what type of food the victim had been fed over the five days preceding the immersions with regard to dietary nutrient intake ("high protein (of animal nature)", "high protein (of vegetable nature)", "low protein intake", and "standard diet"), and salt intake (45 g NaCl per day, 15 g NaCl per day, no salt). [ 72 ] Unit members orchestrated forced sex acts between infected and non-infected prisoners to transmit the disease, as the testimony of a prison guard on the subject of devising a method for transmission of syphilis between victims shows: Infection of venereal disease by injection was abandoned, and the researchers started forcing the prisoners into sexual acts with each other. Four or five unit members, dressed in white laboratory clothing completely covering the body with only eyes and mouth visible, the rest covered, handled the tests. A male and female, one infected with syphilis, would be brought together in a cell and forced into sex with each other. It was made clear that anyone resisting would be shot. [ 73 ] After victims were infected, they were vivisected at different stages of infection, so that internal and external organs could be observed as the disease progressed. Testimony from multiple guards blames the female victims as being hosts of the diseases, even though they had been forcibly infected. Genitals of female prisoners that were infected with syphilis were called "jam-filled buns" by guards.
[ 73 ] Some children grew up inside the walls of Unit 731, infected with syphilis. A Youth Corps member deployed to train at Unit 731 recalled viewing a batch of subjects that would undergo syphilis testing: "one was a Chinese woman holding an infant, one was a White Russian woman with a daughter of four or five years of age, and the last was a White Russian woman with a boy of about six or seven." [ 73 ] The children of these women were tested in ways similar to their parents, with specific emphasis on determining how longer infection periods affected the effectiveness of treatments. Female prisoners were forced to become pregnant for use in experiments. The hypothetical possibility of vertical transmission (from mother to child) of diseases, particularly syphilis, was the stated reason for the torture. Fetal survival and damage to mother's reproductive organs were objects of interest. Though "a large number of babies were born in captivity", there have been no accounts of any survivors of Unit 731, children included. It is suspected that the children of female prisoners were killed after birth or aborted . [ 73 ] While male prisoners were often used in single studies, so that the results of the experimentation on them would not be clouded by other variables, women were sometimes used in bacteriological or physiological experiments, sex experiments, and as the victims of sex crimes . The testimony of a unit member that served as a guard graphically demonstrated this reality: One of the former researchers I located told me that one day he had a human experiment scheduled, but there was still time to kill. So he and another unit member took the keys to the cells and opened one that housed a Chinese woman. One of the unit members raped her; the other member took the keys and opened another cell. There was a Chinese woman in there who had been used in a frostbite experiment. She had several fingers missing and her bones were black, with gangrene set in. He was about to rape her anyway, then he saw that her sex organ was festering, with pus oozing to the surface. He gave up the idea, left and locked the door, then later went on to his experimental work. [ 73 ] In other tests, subjects were deprived of food and water to determine the amount of time until death; placed into low-pressure chambers until their eyes popped from the sockets; experimented upon to determine the relationship between temperature, burns, and human survival; hung upside down until death; crushed with heavy objects ; electrocuted ; dehydrated with hot fans; [ 74 ] placed into centrifuges and spun until death; injected with animal blood, notably with horse blood; exposed to lethal doses of X-rays ; subjected to various chemical weapons inside gas chambers; injected with seawater; and burned or buried alive . [ 75 ] [ 76 ] In addition to chemical agents, the properties of many different toxins were also investigated by the Unit. To name a few, prisoners were exposed to tetrodotoxin ( pufferfish or fugu poison), heroin , Korean bindweed, bactal, and castor-oil seeds ( ricin ). [ 77 ] [ 4 ] Massive amounts of blood were drained from some prisoners in order to study the effects of blood loss according to former Unit 731 vivisectionist Okawa Fukumatsu. In one case, at least half a liter of blood was drawn at two-to-three-day intervals. [ 78 ] As stated above, dehydration experiments were performed on the victims. 
The purpose of these tests was to determine the amount of water in an individual's body and to see how long one could survive with very low or no water intake. It is known that victims were also starved before these tests began. The deteriorating physical states of these victims were documented by staff at periodic intervals. "It was said that a small number of these poor men, women, and children who became marutas were also mummified alive in total dehydration experiments. They sweated themselves to death under the heat of several hot dry fans. At death, the corpses would only weigh ≈1/5 normal bodyweight." Unit 731 also performed transfusion experiments with different blood types . Unit member Naeo Ikeda wrote: In my experience, when A type blood 100 cc was transfused to an O type subject, whose pulse was 87 per minute and temperature was 35.4 degrees C, 30 minutes later the temperature rose to 38.6 degrees with slight trepidation. Sixty minutes later the pulse was 106 per minute and the temperature was 39.4 degrees. Two hours later the temperature was 37.7 degrees, and three hours later the subject recovered. When AB type blood 120 cc was transfused to an O type subject, an hour later the subject described malaise and psychroesthesia in both legs. When AB type blood 100 cc was transfused to a B type subject, there seemed to be no side effect. Unit 731 tested many different chemical agents on prisoners and had a building dedicated to gas experiments. Some of the agents tested were mustard gas , lewisite , cyanic acid gas, white phosphorus , adamsite , and phosgene gas . [ 79 ] A former army major and technician gave the following testimony anonymously (at the time of the interview, this man was a professor emeritus at a national university): In 1943, I attended a poison gas test held at the Unit 731 test facilities. A glass-walled chamber about three meters square [97 sq ft] and two meters [6.6 ft] high was used. Inside of it, a Chinese man was blindfolded, with his hands tied around a post behind him. The gas was adamsite (sneezing gas), and as the gas filled the chamber the man went into violent coughing convulsions and began to suffer excruciating pain. More than ten doctors and technicians were present. After I had watched for about ten minutes, I could not stand it any more, and left the area. I understand that other types of gases were also tested there. Takeo Wano, a former medical worker in Unit 731, said that he saw a Western man who had been cut vertically into two pieces and pickled in a jar of formaldehyde . [ 67 ] Wano guessed that the man was Russian because there were many Russians living in the area at that time. [ 67 ] Unit 100 also experimented with toxic gas. Phone booth-like tanks were used as portable gas chambers for the prisoners. Some were forced to wear various types of gas masks ; others wore military uniforms, and some wore no clothes at all. Some of the tests have been described as "psychopathically sadistic, with no conceivable military application". For example, one experiment documented the time it took for three-day-old babies to freeze to death. [ 80 ] [ 81 ] Unit 731 also tested chemical weapons on prisoners in field conditions. A report authored by an unknown researcher in the Kamo Unit (Unit 731) describes a large-scale human experiment with yperite gas ( mustard gas ) conducted on 7–10 September 1940. Twenty subjects were divided into three groups and placed in combat emplacements, trenches , gazebos, and observatories.
One group was clothed with Chinese underwear, no hat, and no mask and was subjected to as many as 1,800 field gun rounds of yperite gas over 25 minutes. Another group was clothed in summer military uniform and shoes; three had masks and another three had no mask. They also were exposed to as many as 1,800 rounds of yperite gas. A third group was clothed in summer military uniform, three with masks and two without masks, and was exposed to as many as 4,800 rounds. Then their general symptoms and damage to the skin, eyes, respiratory organs , and digestive organs were observed at 4 hours, 24 hours, and 2, 3, and 5 days after the shots. Injecting the blister fluid from one subject into another subject and analyses of blood and soil were also performed. Five subjects were forced to drink a solution of yperite and lewisite gas in water, with or without decontamination . The report describes the condition of every subject precisely, without mentioning what happened to them in the long run. [ 82 ] The following is an excerpt of one of these reports: Number 376, dugout of the first area: September 7, 1940, 6 pm: Tired and exhausted. Looks with hollow eyes. Weeping redness of the skin of the upper part of the body. Eyelids edematous, swollen. Epiphora. Hyperemic conjunctivae. September 8, 6 am: Neck, breast, upper abdomen and scrotum weeping, reddened, swollen. Covered with millet-seed-size to bean-size blisters. Eyelids and conjunctivae hyperemic and edematous. Had difficulties opening the eyes. September 8, 6 pm: Tired and exhausted. Feels sick. Body temperature 37 degrees Celsius. Mucous and bloody erosions across the shoulder girdle. Abundant mucous nose secretions. Abdominal pain. Mucous and bloody diarrhea. Proteinuria. September 9, 7 am: Tired and exhausted. Weakness of all four extremities. Low morale. Body temperature 37 degrees Celsius. Skin of the face still weeping. After Japan's defeat in World War II, the Japanese murdered every single prisoner in the unit. The bodies were cremated and the remains buried on the Unit 731 grounds. [ 83 ] The following testimony explains how the captives were murdered: On August 11 and 12, after the end of the war, approximately 300 prisoners were disposed of. The prisoners were coerced into suicide by being given a piece of rope. One quarter of them hung themselves, and the remaining three quarters who would not consent to suicide were made to drink potassium cyanide and killed by injection. In the end all were taken care of. The prisoners were made to drink potassium cyanide by mixing it with water and putting it into bowls. The injections were most likely chloroform. [ 83 ] In 2002, Changde , China, site of the plague flea bombing, held an "International Symposium on the Crimes of Bacteriological Warfare," which estimated that the number of people slaughtered by Imperial Japanese Army germ warfare and other human experiments was around 580,000. [ 48 ] : xii, 173 The American historian Sheldon H. Harris states that over 200,000 died. [ 84 ] [ 1 ] In addition to Chinese casualties, 1,700 Japanese troops in Zhejiang during the Zhejiang-Jiangxi campaign were killed by their own biological weapons while attempting to unleash the biological agent, indicating serious issues with distribution. [ 85 ] Harris also said plague-infected animals were released near the end of the war, and caused plague outbreaks that killed at least 30,000 people in the Harbin area from 1946 to 1948.
[ 1 ] Test subjects were selected to gather a wide cross-section of the population and included common criminals, captured bandits, anti-Japanese partisans , political prisoners , and homeless and mentally disabled people; they included infants, men, the elderly, and pregnant women, as well as those rounded up by the Kenpeitai military police for alleged "suspicious activities". Unit 731 staff included approximately 300 researchers, including doctors and bacteriologists . [ 86 ] At least 3,000 men, women, and children [ 4 ] : 117 [ 85 ] —of which at least 600 every year were provided by the Kenpeitai [ 87 ] —were subjected to Unit 731 experimentation conducted at the Pingfang camp alone, not including victims from other medical experimentation sites such as Unit 100 . [ 88 ] Although 3,000 internal victims is the widely accepted figure in the literature, former Unit member Okawa Fukumatsu claims that there were at least 10,000 victims of internal experiments at the Unit, and that he himself vivisected thousands. [ 41 ] According to A. S. Wells, the majority of victims were Chinese , [ 43 ] with a lesser percentage being Russian , Mongolian , and Korean . They may also have included a small number of European, American, Indian, Australian, and New Zealander prisoners of war (see also Allied prisoners of war in Japan ). [ 89 ] [ 90 ] [ 91 ] A member of the Yokusan Sonendan paramilitary political youth branch, who worked for Unit 731, stated that not only were Chinese, Russians, and Koreans present, but also Americans, British, and French people. [ 92 ] Sheldon H. Harris documented that the victims were generally political dissidents , communist sympathizers, ordinary criminals, impoverished civilians, and the mentally disabled. [ 93 ] Author Seiichi Morimura estimates that almost 70 percent of the victims who died in the Pingfang camp were Chinese (both military and civilian), [ 94 ] while close to 30 percent of the victims were Russian. [ 95 ] No one who entered Unit 731 came out alive. Prisoners were usually received into Unit 731 at night in motor vehicles painted black, with a ventilation hole but no windows. [ 4 ] : 112 The vehicle would pull up at the main gates and one of the drivers would go to the guardroom and report to the guard. That guard would then telephone the "Special Team" in the inner-prison ( Shirō Ishii 's brother was head of this Special Team). [ 96 ] [ 4 ] : 366 Then, the prisoners would be transported through a secret tunnel dug under the facade of the central building to the inner-prisons. [ 4 ] : 117 One of the prisons housed women and children (Building 8), while the other prison housed men (Building 7). Once at the inner-prison, technicians would take samples of the prisoners' blood and stool, test their kidney function , and collect other physical data. [ 97 ] Once deemed healthy and fit for experimentation, prisoners lost their names and were given a three-digit number, which they retained until their death. Whenever prisoners died after the experiments they had been subjected to, a clerk of the 1st Division struck their numbers off an index card and took the deceased prisoner's manacles to be put on new arrivals to the prison. [ 4 ] : 427 There is at least one recorded instance of "friendly" social interaction between prisoners and Unit 731 staff. Technician Naokata Ishibashi interacted with two female prisoners, a 21-year-old Chinese woman and a 19-year-old Ukrainian woman.
The two prisoners told Ishibashi that they had not seen their faces in a mirror since being captured and begged him to get one. Ishibashi snuck a mirror to them through a hole in the cell door. [ 98 ] The prison cells had wooden floors and a squat toilet in each. There was space between the outer walls of the cells and the outer walls of the prison, enabling the guards to walk behind the cells. Each cell door had a small window in it. Chief of the Personnel Division of the Kwantung Army Headquarters Tamura Tadashi testified that, when he was shown the inner-prison, he looked into the cells and saw living people in chains, some of whom moved around while others lay on the bare floor in a very sick and helpless condition. [ 4 ] : 349, 450 Former Unit 731 Youth Corps member Yoshio Shinozuka testified that the windows in these prison doors were so small that it was difficult to see in. [ 99 ] The inner-prison was a highly secured building complete with cast iron doors. [ 96 ] No one could enter without special permits and an ID pass with a photograph, and the entry/exit times were recorded. [ 99 ] The "special team" worked in these two inner-prison buildings. This team wore white overall suits, army hats, and rubber boots, and carried pistols strapped to their sides. [ 96 ] Despite the prison's status as a highly secure building, at least one unsuccessful escape attempt did occur. Corporal Kikuchi Norimitsu testified that he was told by another unit member that a prisoner "had shown violence and had struck the experimenter with a door handle" and then "jumped out of the cell and ran down the corridor, seized the keys and opened the iron doors and some of the cells. Some of the prisoners managed to jump out but these were only the bold ones. These bold ones were shot." [ 4 ] : 374 Seiichi Morimura, in his book The Devil's Gluttony , went into greater detail regarding this escape attempt. Two Russian male prisoners were in a cell with handcuffs on; one of them lay flat on the floor, pretending to be sick. This got the attention of a staff member, who saw it as an unusual condition and decided to enter the cell. The Russian lying on the floor suddenly sprang up and knocked the guard down. The two Russians opened their handcuffs, took the keys, and opened some other cells while yelling. Some prisoners, including Russians and Chinese, were frantically roaming the corridors and kept yelling and shouting. One Russian shouted to the members of Unit 731, demanding to be shot rather than used as an experimental object. This Russian was shot to death. One staff member, who was an eyewitness at this escape attempt, recalled: "spiritually we were all lost in front of the 'marutas' who had no freedom and no weapons. At that time we understood in our hearts that justice was not on our side." [ 100 ] Unfortunately for the prisoners of Unit 731, escape was an impossibility. Even if they had managed to escape the quadrangle (itself a heavily fortified building full of staff), they would have had to get over a three-meter-high (9.8 ft) brick wall surrounding the complex, and then across a dry moat filled with electrified wire running around the perimeter of the complex. [ 101 ] Members of Unit 731 were not immune from being subjects of experiments. Yoshio Tamura, an assistant in the Special Team, recalled that Yoshio Sudō, an employee of the first division at Unit 731, became infected with bubonic plague as a result of his work producing plague bacteria. The Special Team was then ordered to vivisect Sudō.
Tamura recalled: Sudō had, a few days previously, been interested in talking about women, but now he was thin as a rake, with many purple spots over his body. A large area of scratches on his chest were bleeding. He painfully cried and breathed with difficulty. I sanitised his whole body with disinfectant. Whenever he moved, a rope around his neck tightened. After Sudō's body was carefully checked [by the surgeon], I handed a scalpel to [the surgeon] who, reversely gripping the scalpel, touched Sudō's stomach skin and sliced downward. Sudō shouted "brute!" and died with this last word. Additionally, Unit 731 Youth Corps member Yoshio Shinozuka testified that his friend, junior assistant Mitsuo Hirakawa, was vivisected as a result of being accidentally infected with plague. [ 82 ] Some unit members were interned at the Fushun War Criminals Management Centre and Taiyuan War Criminals Management Centre after the war; they were later repatriated to Japan, founded the Association of Returnees from China , and testified about Unit 731 and the crimes perpetrated there. In April 2018, the National Archives of Japan disclosed a nearly complete list of 3,607 members of Unit 731 to Katsuo Nishiyama, a professor at Shiga University of Medical Science . Nishiyama reportedly intended to publish the list online to encourage further study into the unit. [ 102 ] The names of some members had already been disclosed previously. Twelve members were formally tried and sentenced in the Khabarovsk war crimes trials . Unit 731 was divided into eight divisions. Unit 731 had other units underneath it in the chain of command ; there were several other units under the auspices of Japan's biological weapons programs . Most or all Units had branch offices, which were also often referred to as "Units." The term Unit 731 can refer to the Harbin complex, or it can refer to the organization and its branches, sub-Units and their branches. The Unit 731 complex covered six square kilometers (2.3 sq mi) and consisted of more than 150 buildings. The design of the facilities made them hard to destroy by bombing. The complex contained various factories. It had around 4,500 containers to be used to raise fleas , six cauldrons to produce various chemicals, and around 1,800 containers to produce biological agents. Approximately 30 kilograms (66 lb) of bubonic plague bacteria could be produced in a few days. Some of Unit 731's satellite (branch) facilities are still in use by various Chinese industrial companies. A portion has been preserved and is open to visitors as a museum . [ 106 ] Unit 731 had branches in Linkou (Branch 162), Mudanjiang , Hailin (Branch 643), Sunwu (Branch 673) and Hailar (Branch 543). [ 4 ] : 60, 84, 124, 310 A medical school and research facility belonging to Unit 731 operated in the Shinjuku District of Tokyo during World War II. In 2006, Toyo Ishii—a nurse who worked at the school during the war—revealed that she had helped bury bodies and pieces of bodies on the school's grounds shortly after Japan's surrender in 1945. In response, in February 2011 the Ministry of Health began to excavate the site.
[ 107 ] While Tokyo courts acknowledged in 2002 that Unit 731 had been involved in biological warfare research, as of 2011 the Japanese government had made no official acknowledgment of the atrocities committed against test subjects and rejected the Chinese government's requests for DNA samples to identify human remains (including skulls and bones) found near an army medical school. [ 108 ] As the Second World War started to come to an end, all prisoners within the compound were killed to conceal evidence, and there were no documented survivors. [ 109 ] With the coming of the Red Army in August 1945 , the unit had to abandon its work in haste. Ministries in Tokyo ordered the destruction of all incriminating materials, including those in Pingfang . Potential witnesses, such as the 300 remaining prisoners, were either gassed or fed poison, while the 600 Chinese and Manchurian laborers were shot. Ishii ordered every member of the group to disappear and "take the secret to the grave". [ 110 ] Potassium cyanide vials were issued for use in case the remaining personnel were captured. Skeleton crews of Ishii's Japanese troops blew up the compound in the final days of the war to destroy evidence of their activities, but many buildings were sturdy enough to remain somewhat intact. Former Unit 731 member Hideo Shimizu stated that during the Soviet invasion of Manchuria he was instructed to eliminate evidence by burning the victims in the courtyard, collecting the leftover bones from the area, and then destroying the remains with explosives. [ 38 ] While boarding a departing train, he was provided with a cyanide compound and instructed to commit suicide rather than be captured. [ 38 ] Among the individuals in Japan after its 1945 surrender was Lieutenant Colonel Murray Sanders , who arrived in Yokohama via the American ship Sturgess in September 1945. Sanders was a highly regarded microbiologist and a member of America's military center for biological weapons. Sanders' duty was to investigate Japanese biological warfare activity. At the time of his arrival in Japan, he had no knowledge of what Unit 731 was. [ 73 ] Until Sanders finally threatened the Japanese with bringing the Soviets into the picture, little information about biological warfare was being shared with the Americans. The Japanese wanted to avoid prosecution under the Soviet legal system , so, the morning after he made his threat, Sanders received a manuscript describing Japan's involvement in biological warfare. Sanders took this information to General Douglas MacArthur , who was the Supreme Commander of the Allied Powers and responsible for rebuilding Japan during the Allied occupations. MacArthur struck a deal with Japanese informants : [ 111 ] he secretly granted immunity to the physicians of Unit 731, including their leader, in exchange for exclusive American access to their research on biological warfare and data from human experimentation. [ 13 ] American occupation authorities monitored the activities of former unit members, including reading and censoring their mail. [ 112 ] The Americans believed that the research data was valuable and did not want other nations, particularly the Soviet Union, to acquire data on biological weapons. [ 113 ] The Tokyo War Crimes Tribunal heard only one reference to Japanese experiments with "poisonous serums" on Chinese civilians. This took place in August 1946 and was instigated by Joseph R. Massey, assistant to the Chinese prosecutor.
The Japanese defense counsel argued that the claim was vague and uncorroborated, and it was dismissed by the tribunal president, Sir William Webb , for lack of evidence. The subject was not pursued further by Massey, who was probably unaware of Unit 731's activities; his reference to it at the trial is believed to have been accidental. In 1981, Judge Röling, one of the last surviving members of the Tokyo Tribunal, expressed bitterness at not having been made aware of the suppression of evidence about Unit 731, writing: "It is a bitter experience for me to be informed now that centrally ordered Japanese war criminality of the most disgusting kind was kept secret from the court by the U.S. government." [ 114 ] American investigations into Japanese war crimes ceased when Japanese scientists began disclosing information on biological warfare. Despite the establishment of their own research program, American scientists faced a significant gap in essential knowledge regarding biological warfare. The potential value to the Americans of Japanese-provided data, encompassing human research subjects, delivery system theories, and successful field trials, was immense. However, historian Sheldon H. Harris concluded that the Japanese data failed to meet American standards, suggesting instead that the findings from the unit were of minor importance at best. Harris characterized the research results from the Japanese camp as disappointing, concurring with the assessment of Murray Sanders, who characterized the experiments as "crude" and "ineffective". [ 50 ] While German physicians were brought to trial and had their crimes publicized, the U.S. concealed information about Japanese biological warfare experiments and secured immunity for the perpetrators. Critics have argued that racism led to the double standard in the American postwar responses to the experiments conducted on different nationalities. Whereas the perpetrators of Unit 731 were exempt from prosecution, the U.S. held a tribunal in Yokohama in 1948 that indicted nine Japanese physician professors and medical students for conducting vivisection upon captured American pilots; two professors were sentenced to death and others to 15–20 years' imprisonment. [ 115 ] Although publicly silent on the issue at the Tokyo Trials, the Soviet Union pursued the case and prosecuted 12 top military leaders and scientists from Unit 731 and its affiliated biological-warfare prisons Unit 1644 in Nanjing and Unit 100 in Changchun in the Khabarovsk war crimes trials . Among those accused of war crimes , including germ warfare, was General Otozō Yamada , commander-in-chief of the million-man Kwantung Army occupying Manchuria. The trial of the Japanese perpetrators was held in Khabarovsk in December 1949; a lengthy partial transcript of trial proceedings was published in different languages the following year by the Moscow foreign languages press, including an English-language edition. [ 116 ] The lead prosecuting attorney at the Khabarovsk trial was Lev Smirnov , who had been one of the top Soviet prosecutors at the Nuremberg Trials . The Japanese doctors and army commanders who had perpetrated the Unit 731 experiments received sentences from the Khabarovsk court ranging from 2 to 25 years in a Siberian labor camp . The United States refused to acknowledge the trials, branding them communist propaganda.
[ 117 ] The sentences doled out to the Japanese perpetrators were unusually lenient by Soviet standards, and all of the defendants returned to Japan by 1956. [ 118 ] In addition to the accusations of propaganda, the US also asserted that the trials served as a distraction from the Soviet treatment of several hundred thousand Japanese prisoners of war; meanwhile, the USSR asserted that the US had given the Japanese diplomatic leniency in exchange for information about their human experimentation. However, it is likely that former Unit 731 members had also passed information about their biological experimentation to the Soviet government in exchange for judicial leniency. [ 118 ] The Soviet Union built a biological weapons facility in Sverdlovsk using documentation captured from Unit 731 in Manchuria. [ 119 ] As above, during the United States occupation of Japan, the members of Unit 731 and the members of other experimental units were allowed to go free. On 6 May 1947, Douglas MacArthur , the Supreme Commander of the Allied Forces , wrote to Washington in order to inform it that "additional data, possibly some statements from Ishii, can probably be obtained by informing Japanese involved that information will be retained in intelligence channels and will not be employed as war crimes evidence". [ 13 ] According to an investigation by The Guardian , after the end of the war, under the pretense of vaccine development, former members of Unit 731 conducted human experiments on Japanese prisoners, babies and mental patients, with secret funding from the U.S. Government. [ 120 ] One graduate of Unit 1644 , Masami Kitaoka, continued to perform experiments on unwilling Japanese subjects from 1947 to 1956. He performed his experiments while he was working for Japan's National Institute of Health Sciences. He infected prisoners with rickettsia and infected mentally-ill patients with typhus . [ 121 ] As the chief of the unit, Shirō Ishii was granted immunity from prosecution for war crimes by the American occupation authorities, because he had provided human experimentation research materials to them. From 1948 to 1958, less than five percent of the documents were transferred onto microfilm and stored in the US National Archives before they were shipped back to Japan. [ 122 ] Japanese discussions of Unit 731's activity began in the 1950s, after the end of the American occupation of Japan . In 1952, an infant girl at Nagoya City Pediatric Hospital died after being infected with E. coli bacteria; the incident was publicly tied to former Unit 731 scientists. [ 123 ] Later in that decade, journalists suspected that the murders attributed by the government to Sadamichi Hirasawa were actually carried out by members of Unit 731. In 1957, Japanese author Shūsaku Endō published the book The Sea and Poison about human experimentation in Fukuoka , which is thought to have been based on a real incident. In 1950, former members of Unit 731 including Masaji Kitano founded the blood bank and pharmaceutical company Green Cross , for which Murray Sanders also served as a consultant. The company became the target of a scandal in the 1980s after up to 3,000 Japanese contracted HIV through the distribution and use of its blood products, which the Pharmaceuticals and Medical Devices Agency had deemed unsafe. [ 124 ] [ 125 ] The author Seiichi Morimura published The Devil's Gluttony (悪魔の飽食) in 1981, followed by The Devil's Gluttony: A Sequel in 1983. 
These books purported to reveal the "true" operations of Unit 731, but falsely attributed unrelated photos to the unit, which raised questions about their accuracy. [ 126 ] [ 127 ] Also in 1981, the first direct testimony of human vivisection in China was given by Ken Yuasa. Since then, much more in-depth testimony has been given in Japan. The 2001 documentary Japanese Devils largely consists of interviews with fourteen Unit 731 staff members taken prisoner by China and later released. [ 128 ] Prince Mikasa, the younger brother of Hirohito, toured the Unit 731 headquarters in China and wrote in his memoir that he watched films showing how Chinese prisoners were "made to march on the plains of Manchuria for poison gas experiments on humans." [ 1 ] Hideki Tojo, who later became Prime Minister in 1941, was also shown films of the experiments, which he described as "unpleasant". [ 129 ] Despite conducting scientific experiments, Unit 731 faced scrutiny regarding the usefulness of the data those experiments produced. [ 50 ] Japanese biological warfare operations were by far the largest during WWII, conducted "possibly with more people and resources than the BW producing nations of France, Hungary, Italy, Poland, and the Soviet Union combined, between the world wars". [ 130 ] Despite this apparent scale, Unit 731 lacked adequate scientific and engineering foundations to maximize its effectiveness. [ 131 ] [ 132 ] Harris concluded that US scientists generally wanted to acquire the data because it was forbidden fruit: they believed that legal and ethical prohibitions had constrained what their own research could achieve. [ 133 ] Historian Till Winfried Bärnighausen criticized the overall lack of scientific rigor in many of Unit 731's experiments, but he noted some exceptions. He pointed to the mustard gas, freezing, and tuberculosis experiments as having a reliable and valid data collection process, suggesting they were conducted with greater rigor. [ 50 ] In 1969, Ikeda Naeo, a physician associated with Unit 731, published his own research on epidemic hemorrhagic fever (EHF). His paper documented experiments conducted at a military hospital on the China–Soviet border in January 1942. These experiments, involving deliberate infection of humans, confirmed the transmission of EHF by lice and fleas to local populations, resulting in deaths among those infected. Despite the explicit admission of conducting experiments on humans with fatal pathogenic inoculations, Ikeda's report passed peer review and was published in a Japanese scholarly journal. The acceptance of Ikeda's research underscored the widespread acknowledgment within the Japanese medical community of the human experiments conducted at Unit 731. [ 134 ] During the war, Yoshimura Hisato conducted research at Unit 731 in China focusing on low-temperature physiology, particularly the mechanisms involved in frostbite. Following the war, he established the Japanese Society of Biometeorology. His research in China marked the inception of his exploration into the relationship between physiology and environmental stress. [ 134 ] Most researchers at Unit 731 did not engage in a concerted effort to conceal the experiments they participated in. While they refrained from publicly acknowledging their crimes, they did share various details within their medical circles. Consequently, especially regarding research on EHF and frostbite, it has been relatively straightforward to ascertain who conducted which type of human experiments.
Given that nearly all members of the Japanese medical community were aware of the human experiments conducted at Unit 731, researchers from the unit were able to publish their work in medical papers after the war. Reports were disseminated that unmistakably detailed the results of experiments on humans, and accounts of the unit were documented in medical journals, indicating widespread awareness within the Japanese medical community of the experiments carried out at Unit 731. [ 134 ] During the COVID-19 pandemic, some scientists called for experimental data from Unit 731 to be publicly released to the international medical community, because its data on human–pathogen interactions could have helped epidemiologists with pandemic control. [ 135 ] The information has been withheld by both the US and Japanese governments. In 1983, the Japanese Ministry of Education asked Japanese historian Saburō Ienaga to remove a reference from one of his textbooks stating that Unit 731 conducted experiments on thousands of Chinese. The ministry alleged that no academic research supported the claim. In 1984, Japanese historian Tsuneishi Keiichi translated and published over 4,000 pages of U.S. documents on Japanese biological warfare. The ministry backed down after new studies were published in Japan and important evidence surfaced in the United States. [ 136 ] Japanese history textbooks usually contain references to Unit 731, but they do not provide specific details about the activities conducted at the facility. [ 137 ] [ 138 ] Saburō Ienaga's New History of Japan included a detailed description based on officers' testimony. The Ministry of Education attempted to remove this passage from his textbook before it was taught in public schools, on the basis that the testimony was insufficient. The Supreme Court of Japan ruled in 1997 that the testimony was indeed sufficient and that requiring its removal was an illegal violation of freedom of speech. [ 139 ] In 1997, international lawyer Kōnen Tsuchiya filed a class action suit against the Japanese government, demanding reparations for the actions of Unit 731, using evidence filed by Professor Makoto Ueda of Rikkyo University. All levels of the Japanese court system found the suit baseless, and no findings of fact were made about the existence of human experimentation. In August 2002, the Tokyo district court ruled for the first time that Japan had engaged in biological warfare. Presiding judge Koji Iwata ruled that Unit 731, on the orders of the Imperial Japanese Army headquarters, used bacteriological weapons on Chinese civilians between 1940 and 1942, spreading diseases, including plague and typhoid, in the cities of Quzhou, Ningbo, and Changde. He rejected the victims' compensation claims on the grounds that they had already been settled by international peace treaties. [ 140 ] In October 2003, a member of Japan's House of Representatives filed an inquiry. Prime Minister Junichiro Koizumi responded that the Japanese government did not then possess any records related to Unit 731, but recognized the gravity of the matter and would publicize any records located in the future. [ 141 ] In April 2018, the National Archives of Japan released the names of 3,607 members of Unit 731, in response to a request by Professor Katsuo Nishiyama of the Shiga University of Medical Science.
[ 142 ] [ 143 ] After World War II, the Office of Special Investigations created a watchlist of suspected Axis collaborators and persecutors banned from entering the United States. While over 60,000 names were added to the watchlist, fewer than 100 Japanese participants were identified. In a 1998 letter between the DOJ and Rabbi Abraham Cooper, Eli Rosenbaum, director of the OSI, stated that this was due to two factors:
https://en.wikipedia.org/wiki/Unit_731
Unit Ei 1644 ( Japanese : 栄1644部隊 ), also known as Unit 1644, Detachment Ei 1644, Detachment Ei, Detachment Tama, [ 1 ] : 310–311 The Nanking Detachment, or simply Unit Ei, was a Japanese laboratory and biological warfare facility under control of the Epidemic Prevention and Water Purification Department. It was established in 1939 in Japanese-occupied Nanjing as a satellite unit of Unit 731. It had 12 branches and employed about 1,500 men. [ 1 ] : 307 During the Second Sino-Japanese War, Unit Ei engaged in "producing on a mass scale lethal bacteria to be used as weapons against the Chinese forces and civilian population" and "took a direct part in employing bacteriological weapons against the Chinese forces and local inhabitants during the military operations of the Japanese troops," according to its chief, Shunji Sato. [ 1 ] : 73 Sato claimed in his testimony that Unit Ei "did not conduct experiments on human beings." [ 1 ] : 311 An anonymous researcher, who claims he was attached to Unit 1644, says that it regularly carried out human vivisections as well as infecting humans with cholera, typhus, and bubonic plague. [ 2 ] [ 3 ] The researcher and his family had not yet reached an agreement about releasing his name. [ 3 ] The human experiments at Unit Ei 1644 took place in the confines of the facility's fourth floor, which was out of bounds for the majority of the unit's personnel. Reportedly, only a minority of the staff, such as the unit's doctors and high-level technicians, took part in the BW experiments on humans. [ 4 ] Each week between ten and twenty persons were exposed to poisons, germs and different gases, and about ten were killed weekly by gases, lethal injections and bullets after having been used as test subjects. [ 5 ] A soldier stationed at the unit testified that ordinary soldiers were not allowed beyond the second floor and were not informed that human experiments were taking place there, but they were aware of rumours to that effect. [ 6 ] The soldier had heard that there were prisoners kept on the fourth floor, and was told by an officer: "There is a lumber storage facility on the fourth floor. You never go above the second floor, you got it." [ 6 ] Several units referred to human test subjects and corpses as "logs" to be burned or incinerated, which reinforced the soldier's assumption about what really happened on the floors he was not granted access to. There was an incinerator in the unit in which dead prisoners were cremated. [ 6 ] Hiroshi Matsumoto testified that, after about six months of captivity, prisoners would be taken to the "treatment room", where they would be put to sleep with chloroform and cut open at the inguinal artery; after the body was emptied of enough blood, "the body would start shaking in convulsions". The contaminated blood would then be used to infect food or given to flies, which would then be used as a biological weapon. [ 8 ] Sato testified that while he was chief of the unit, it was "devising bacteriological weapons and producing them on a mass scale. For this purpose the Nanking Detachment Ei was supplied with high-capacity equipment and with bacteriological experts, and it produced lethal bacteria on a mass scale. Under my direction ... the Training Division every year trained about 300 bacteriologists with the object of employing them in bacteriological warfare."
[ 1 ] : 32–33, 73 According to Sato, "...the output of bacteria substance was 10 kilograms per production cycle." [ 1 ] : 308 The facility also bred fleas for the purposes of plague infection. Sato also testified about the equipment of Unit Ei: "The output capacity of the Nanking Detachment Ei 1644 for the production of lethal bacteria was up to 10 kilograms per production cycle." To produce this quantity of bacteria, Detachment Ei 1644 had dedicated equipment; "for cooking media, the detachment had large retorts". The first chief of Unit Ei was Shirō Ishii, then Colonel Oota. In February 1943, Sato was appointed chief of Unit Ei. He served as chief until February 1944. [ 1 ] : 532 Sato testified at the Khabarovsk war crimes trials that Unit Ei "possessed high-capacity equipment for the breeding of germs for bacteriological warfare." [ 1 ] : 15, 307 Lieutenant Colonel Onadera was chief of the General Division. [ 1 ] : 308 Captain Murata was in charge of breeding fleas. [ 1 ] : 309 In late August 1942, Unit Ei participated in a biological attack against Chinese citizens and soldiers in Yushan County, Jinhua, and Fuqing. As Kawashima Kiyoshi testified, "..[The] bacteriological weapon was employed on the ground, the contaminating of the territory being done by sabotage action. ... The advancing Chinese troops entered the contaminated zone and came under the action of the bacteriological weapon." [ 1 ] : 24–25 Cholera and plague cultures used during the attack were made at Unit Ei. [ 1 ] : 58 Sato testified he was told that "plague, cholera and paratyphoid germs were employed against the Chinese by spraying. The plague germs were disseminated through fleas, the other germs in the pure form—by contaminating reservoirs, wells, rivers, etc." [ 1 ] : 261 The plague fleas were also from Unit Ei. [ 1 ] : 309 A Japanese who was part of Unit Ei 1644 admitted that the Japanese suffered a major disaster in their 1942 biological weapons attack in Zhejiang, saying that he saw papers at the Epidemic Prevention and Water Purification Department which recorded 1,700 killed, but he admitted that the Japanese regularly downplayed their own casualties, so the real death toll was higher. [ 9 ] When the war ended, the remaining test subjects were killed, the East Zhongshan Street complex was destroyed with explosive charges, and the staff evacuated. [ 5 ]
https://en.wikipedia.org/wiki/Unit_Ei_1644
Unit Operations of Chemical Engineering, first published in 1956, is one of the oldest chemical engineering textbooks still in widespread use. The current Seventh Edition, published in 2004, continues its successful tradition of being used as a textbook in university undergraduate chemical engineering courses. It is widely used in colleges and universities throughout the world and is often referred to simply as "McCabe-Smith-Harriott" or "MSH". The book starts with an introductory chapter devoted to definitions and principles, followed by 28 additional chapters, each covering a principal chemical engineering unit operation. The 28 chapters are grouped into four major sections; a more detailed table of contents is available on the Internet. [ 1 ]
https://en.wikipedia.org/wiki/Unit_Operations_of_Chemical_Engineering
In geometry, biology, mineralogy and solid state physics, a unit cell is a repeating unit formed by the vectors spanning the points of a lattice. [ 1 ] Despite its suggestive name, the unit cell (unlike a unit vector, for example) does not necessarily have unit size, or even a particular size at all. Rather, the primitive cell is the closest analogy to a unit vector, since it has a determined size for a given lattice and is the basic building block from which larger cells are constructed. The concept is used particularly in describing crystal structure in two and three dimensions, though it makes sense in all dimensions. A lattice can be characterized by the geometry of its unit cell, which is a section of the tiling (a parallelogram or parallelepiped) that generates the whole tiling using only translations. There are two special cases of the unit cell: the primitive cell and the conventional cell. The primitive cell is a unit cell corresponding to a single lattice point; it is the smallest possible unit cell. [ 2 ] In some cases, the full symmetry of a crystal structure is not obvious from the primitive cell, in which case a conventional cell may be used. A conventional cell (which may or may not be primitive) is a unit cell with the full symmetry of the lattice and may include more than one lattice point. The conventional unit cells are parallelotopes in n dimensions. A primitive cell is a unit cell that contains exactly one lattice point. For unit cells generally, lattice points that are shared by n cells are counted as 1/n of the lattice points contained in each of those cells; so, for example, a primitive unit cell in three dimensions which has lattice points only at its eight vertices is considered to contain 1/8 of each of them. [ 3 ] An alternative conceptualization is to consistently pick only one of the n lattice points to belong to the given unit cell (so the other n − 1 lattice points belong to adjacent unit cells). The primitive translation vectors $\vec{a}_1, \vec{a}_2, \vec{a}_3$ span a lattice cell of smallest volume for a particular three-dimensional lattice, and are used to define a crystal translation vector $\vec{T} = u_1 \vec{a}_1 + u_2 \vec{a}_2 + u_3 \vec{a}_3$, where $u_1, u_2, u_3$ are integers, translation by which leaves the lattice invariant. [ note 1 ] That is, for a point in the lattice $\vec{r}$, the arrangement of points appears the same from $\vec{r}' = \vec{r} + \vec{T}$ as from $\vec{r}$. [ 4 ] Since the primitive cell is defined by the primitive axes (vectors) $\vec{a}_1, \vec{a}_2, \vec{a}_3$, the volume $V_p$ of the primitive cell is given by the parallelepiped spanned by these axes as $V_p = |\vec{a}_1 \cdot (\vec{a}_2 \times \vec{a}_3)|$. Usually, primitive cells in two and three dimensions are chosen to take the shape of parallelograms and parallelepipeds, with an atom at each corner of the cell. This choice of primitive cell is not unique, but the volume of a primitive cell is always given by the expression above. [ 5 ] In addition to the parallelepiped primitive cells, for every Bravais lattice there is another kind of primitive cell called the Wigner–Seitz cell. In the Wigner–Seitz cell, the lattice point is at the center of the cell, and for most Bravais lattices the shape is not a parallelogram or parallelepiped. This is a type of Voronoi cell. The Wigner–Seitz cell of the reciprocal lattice in momentum space is called the Brillouin zone. For each particular lattice, a conventional cell has been chosen on a case-by-case basis by crystallographers based on convenience of calculation.
[ 6 ] These conventional cells may have additional lattice points located in the middle of the faces or body of the unit cell. The number of lattice points, as well as the volume of the conventional cell, is an integer multiple (1, 2, 3, or 4) of that of the primitive cell. [ 7 ] For any 2-dimensional lattice, the unit cells are parallelograms, which in special cases may have orthogonal angles, equal lengths, or both. Four of the five two-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The centered rectangular lattice also has a primitive cell in the shape of a rhombus, but in order to allow easy discrimination on the basis of symmetry, it is represented by a conventional cell which contains two lattice points. For any 3-dimensional lattice, the conventional unit cells are parallelepipeds, which in special cases may have orthogonal angles, or equal lengths, or both. Seven of the fourteen three-dimensional Bravais lattices are represented using conventional primitive cells, as shown below. The other seven Bravais lattices (known as the centered lattices) also have primitive cells in the shape of a parallelepiped, but in order to allow easy discrimination on the basis of symmetry, they are represented by conventional cells which contain more than one lattice point.
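As a numerical illustration of the volume formula above, the following Python sketch (assuming NumPy is available; the lattice constant a = 1.0 is arbitrary) computes $V_p = |\vec{a}_1 \cdot (\vec{a}_2 \times \vec{a}_3)|$ for a simple cubic and a face-centered cubic choice of primitive vectors. The factor of 4 between the conventional cubic volume and the fcc primitive volume matches the integer multiples (1, 2, 3, or 4) mentioned above.

```python
import numpy as np

def primitive_cell_volume(a1, a2, a3):
    """Volume of the primitive cell: the scalar triple product |a1 . (a2 x a3)|."""
    return abs(np.dot(a1, np.cross(a2, a3)))

a = 1.0  # illustrative lattice constant

# Simple cubic: primitive vectors along the Cartesian axes; V_p = a**3.
print(primitive_cell_volume(np.array([a, 0, 0]),
                            np.array([0, a, 0]),
                            np.array([0, 0, a])))

# Face-centered cubic: the conventional cubic cell (volume a**3) contains
# 4 lattice points, so the primitive cell has volume a**3 / 4.
print(primitive_cell_volume(np.array([0, a / 2, a / 2]),
                            np.array([a / 2, 0, a / 2]),
                            np.array([a / 2, a / 2, 0])))
```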
https://en.wikipedia.org/wiki/Unit_cell
In mathematics, a unit circle is a circle of unit radius, that is, a radius of 1. [ 1 ] Frequently, especially in trigonometry, the unit circle is the circle of radius 1 centered at the origin (0, 0) in the Cartesian coordinate system in the Euclidean plane. In topology, it is often denoted as $S^1$ because it is a one-dimensional unit n-sphere. [ 2 ] [ note 1 ] If (x, y) is a point on the unit circle's circumference, then |x| and |y| are the lengths of the legs of a right triangle whose hypotenuse has length 1. Thus, by the Pythagorean theorem, x and y satisfy the equation $x^2 + y^2 = 1$. Since $x^2 = (-x)^2$ for all x, and since the reflection of any point on the unit circle about the x- or y-axis is also on the unit circle, the above equation holds for all points (x, y) on the unit circle, not only those in the first quadrant. The interior of the unit circle is called the open unit disk, while the interior of the unit circle combined with the unit circle itself is called the closed unit disk. One may also use other notions of "distance" to define other "unit circles", such as the Riemannian circle; see the article on mathematical norms for additional examples. In the complex plane, numbers of unit magnitude are called the unit complex numbers. This is the set of complex numbers z such that $|z| = 1$. When broken into real and imaginary components $z = x + iy$, this condition is $|z|^2 = z\bar{z} = x^2 + y^2 = 1$. The complex unit circle can be parametrized by angle measure $\theta$ from the positive real axis using the complex exponential function, $z = e^{i\theta} = \cos\theta + i\sin\theta$ (see Euler's formula). Under the complex multiplication operation, the unit complex numbers form a group called the circle group, usually denoted $\mathbb{T}$. In quantum mechanics, a unit complex number is called a phase factor. The trigonometric functions cosine and sine of angle θ may be defined on the unit circle as follows: if (x, y) is a point on the unit circle, and if the ray from the origin (0, 0) to (x, y) makes an angle θ from the positive x-axis (where counterclockwise turning is positive), then $\cos\theta = x$ and $\sin\theta = y$. The equation $x^2 + y^2 = 1$ gives the relation $\cos^2\theta + \sin^2\theta = 1$. The unit circle also demonstrates that sine and cosine are periodic functions, with the identities $\cos\theta = \cos(2\pi k + \theta)$ and $\sin\theta = \sin(2\pi k + \theta)$ for any integer k. Triangles constructed on the unit circle can also be used to illustrate the periodicity of the trigonometric functions. First, construct a radius OP from the origin O to a point P(x₁, y₁) on the unit circle such that an angle t with 0 < t < π/2 is formed with the positive arm of the x-axis. Now consider a point Q(x₁, 0) and line segments PQ ⊥ OQ. The result is a right triangle △OPQ with ∠QOP = t. Because PQ has length y₁, OQ length x₁, and OP has length 1 as a radius on the unit circle, sin(t) = y₁ and cos(t) = x₁.
Having established these equivalences, take another radius OR from the origin to a point R(−x₁, y₁) on the circle such that the same angle t is formed with the negative arm of the x-axis. Now consider a point S(−x₁, 0) and line segments RS ⊥ OS. The result is a right triangle △ORS with ∠SOR = t. It can hence be seen that, because ∠ROQ = π − t, R is at (cos(π − t), sin(π − t)) in the same way that P is at (cos(t), sin(t)). The conclusion is that, since (−x₁, y₁) is the same as (cos(π − t), sin(π − t)) and (x₁, y₁) is the same as (cos(t), sin(t)), it is true that sin(t) = sin(π − t) and −cos(t) = cos(π − t). It may be inferred in a similar manner that tan(π − t) = −tan(t), since tan(t) = y₁/x₁ and tan(π − t) = y₁/(−x₁). A simple demonstration of the above can be seen in the equality sin(π/4) = sin(3π/4) = 1/√2. When working with right triangles, sine, cosine, and other trigonometric functions only make sense for angle measures greater than zero and less than π/2. However, when defined with the unit circle, these functions produce meaningful values for any real-valued angle measure, even those greater than 2π. In fact, all six standard trigonometric functions (sine, cosine, tangent, cotangent, secant, and cosecant), as well as archaic functions like versine and exsecant, can be defined geometrically in terms of a unit circle, as shown at right. Using the unit circle, the values of any trigonometric function for many angles other than those labeled can be easily calculated by hand using the angle sum and difference formulas. The Julia set of the discrete nonlinear dynamical system with evolution function $f_0(x) = x^2$ is the unit circle. It is the simplest case, so it is widely used in the study of dynamical systems.
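A quick numerical check of the identities derived above, using only Python's standard library (the sample angles are arbitrary; this sketch is an illustration, not part of the original article):

```python
import math

for t in (0.3, math.pi / 4, 1.2):
    x, y = math.cos(t), math.sin(t)                # point (x, y) on the unit circle
    assert math.isclose(x**2 + y**2, 1.0)          # cos^2(t) + sin^2(t) = 1
    assert math.isclose(math.sin(math.pi - t), y)  # sin(pi - t) = sin(t)
    assert math.isclose(math.cos(math.pi - t), -x) # cos(pi - t) = -cos(t)
    assert math.isclose(math.tan(math.pi - t), -math.tan(t))

# The worked equality from the text: sin(pi/4) = sin(3*pi/4) = 1/sqrt(2)
print(math.sin(math.pi / 4), math.sin(3 * math.pi / 4), 1 / math.sqrt(2))
```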
https://en.wikipedia.org/wiki/Unit_circle
Unit construction is the design of larger motorcycles where the engine and gearbox components share a single casing. The term sometimes includes the design of automobile engines and was often loosely applied to motorcycles with rather different internal layouts, such as the flat twin BMW models. Prior to unit construction, the engine and gearbox had separate casings and were connected by a primary chain drive running in an oil bath chaincase. The new system used a similar chain drive, and both arrangements had separate oil reservoirs for the engine, gearbox and primary drive. Triumph and BSA were already using cast non-ferrous alloy chaincases and started converting to unit construction in the 1950s. A driving factor behind the BSA/Triumph change was that Lucas [ 1 ] had declared an intention to abandon production of motorcycle dynamos and magnetos, and instead produce only alternators. By contrast, Velocette, Matchless/AJS, and Norton motorcycles continued to be pre-unit construction (the former machines with pressed-steel primary cases) until the end of production in the 1960s and 1970s respectively. In reality, the casings were not truly "unitary", as the crankcase section was vertically divided in the middle and no oil was shared between the three portions. In the 1960s, Japanese motorcycles introduced the now-familiar horizontally split clamshell, which has become almost universal. Modern horizontally split four-stroke engines invariably use a single oil reservoir (whether wet- or dry-sump) but, while this simplifies matters, it is arguable that the previous system of having different types of oil for engine and gearbox is preferable. The BMC Mini was an early example of a car with the "gearbox-in-the-sump", but this practice of using a single oil reservoir, which has become the norm for motorbikes, is generally undesirable for cars and trucks. Two-stroke "total-loss" bikes always have separate oil for the gearbox, as engine oil is burned along with the fuel. Unit construction brought several advantages; a significant disadvantage, however, is that there is no longer any tension adjustment possible of the chain drive between engine and transmission, and tensioning (which is almost certainly still required) must be over a rubber-faced steel slipper. This is quiet and the tensioner does not wear greatly. The change to unit construction also meant that it was no longer possible to choose a gearbox from a different manufacturer (e.g. a close-ratio unit for racing) or to send worn gearbox units to be rebuilt. Alfred Angas Scott, founder of The Scott Motorcycle Company, designed a motorcycle with unit construction for the engine and gearbox. Production of the motorcycle began in 1908. [ 2 ] In 1911, Singer offered motorcycles with unit-construction 299 cc and 535 cc engines. [ 3 ] In 1914, ABC founder Granville Bradshaw designed a unit-construction horizontally opposed ('flat') twin for Sopwith Aircraft, who, at the time, also made motorcycles. [ 4 ] In 1919, Harley-Davidson introduced the Model W Sport Twin with a unit-construction flat-twin. [ 5 ] [ 6 ] In 1920, Carlo Guzzi constructed the G.P. 500 prototype with unit construction. This was followed in 1921 by the first Moto Guzzi production model, the "Normale", also a 500 cc unit-construction motorcycle. [ citation needed ] In 1921, an expanding Bianchi (Italy) showed its first unit-construction side-valve 600 cc V-twin. [ 7 ] In 1923, Rover introduced a 250 cc unit-construction model, followed by a 350 cc version in 1924, but production ended in 1925.
[ 8 ] In 1923, the advanced three-speed Triumph single-cylinder 346 cc sv unit-construction Model LS appeared, but it did not sell well, and production ended in 1927. [ 7 ] In 1923, BMW released its own unit-construction shaft-drive flat twin of 498 cc. BMW has never built a motorcycle with a separate gearbox. [ 9 ] From 1924, FN single-cylinder engines changed from semi-unit construction (as seen in the last semi-unit single, the 1922 FN 285TT, in its last year of sale in 1924) to unit-construction engines (as seen in the new-for-1924 M.60). [ 10 ] In 1928, BSA made their first and only two-stroke, a 175 cc unit-construction bike, for only one season; otherwise their four-stroke twins became unit construction in 1962. [ 11 ] The 1930 Triumph 175 cc Model 'X' two-stroke, two-speed was their first all-unit-construction two-stroke single-cylinder engine. [ 7 ] From 1932, New Imperial was known for pioneering innovations in unit construction on motorcycles. They made the Unit Minor 150 and Unit Super 250 in this manner, and by 1938 all of their machines were unit construction. [ 12 ] [ 13 ] In 1938, Francis-Barnett offered a 125 cc unit-construction Snipe. [ 14 ] In 1946, the Series B Vincent employed unit construction and used the engine-gearbox as a stressed member of the frame. [ 15 ] The 1947 Sunbeam S7, an advanced overhead-cam, longitudinal-twin, unit-construction motorcycle designed by Erling Poppe, used shaft drive. [ 3 ] In 1957, the Royal Enfield Clipper was replaced by the unit-construction Crusader. [ 8 ] In 1957, the first unit-construction twin-cylinder motorcycle made by Triumph, the 350 cc (21 ci) 'Twenty One' 3TA, designed by Edward Turner and Wickes, was introduced for the 21st anniversary of Triumph Engineering Co. Ltd. Unfortunately it also had the first "bathtub" rear enclosure, which proved a sales failure. [ 16 ] The 1958 Ariel Leader used unit construction. [ 4 ] Triumph Motorcycles produced its first single-cylinder unit-construction model with the 149 cc Terrier, launched in 1952. It was quickly followed by the more popular 196 cc Tiger Cub in 1953. [ 17 ] They made their first twin-cylinder unit-construction model in 1957 with the release of the 350 cc Twenty One 3TA (so named because it was approximately twenty-one cubic inches in capacity). [ 18 ] The 500 cc Triumph 5TA followed, and the 650 cc models were made unit construction in 1963. [ 19 ] The 1963–1969 unit-construction 650 cc Triumph Bonnevilles have become sought-after models, partly because the 1970-onward oil-in-frame chassis was considered inferior. The BSA Bantam range of two-stroke engines, introduced in 1949, brought the unit-construction concept to BSA. BSA produced their first four-stroke unit-construction singles in 1959, when they introduced the C15 to replace the venerable C12 single. The unit construction (in contrast to the separate engine and gearbox of the C10/C11 and C12) gave the family of motorcycles started by this model its familiar name. The C15 was intended as a utility "get to work" model, and served this purpose faithfully for many thousands of users. It was a simple and reasonably robust design. Along with the C15 came the B40, the 350 cc version. This was no faster than the C15, but had a little more lugging power. A version of the B40 was also produced (in considerable quantities) for various branches of the military.
These motorcycles (known as the "Ex-WD B40") were more rugged than the vanilla version (in particular, the timing-side main bearing was over- rather than under-engineered and an oil filter was fitted), slightly de-tuned, and given a version of the competition frame. For these reasons, these bikes can make very good buys, and are often used as the basis for competition machines. Several minor changes were made to the C15 over 7 years (with some variations on the theme - the "warmer" SS80 and SS90, plus competition versions). In 1967 the model underwent some revisions and a name change to B25. The model then continued with little variation until BSA collapsed in the early 1970s. The BSA unit single was an affordable introduction to motorcycling for many young men in the 1960s and 1970s. The simple design meant that inexperienced and under-equipped home mechanics could keep them running under most circumstances. The effects of such inexperienced maintenance led to a slightly undeserved reputation for unreliability - a well maintained and regularly serviced unit single will chug along for a very long time with no problems. The warmer versions (such as the much-loved Starfire) were generally less robust, but their light weight, enjoyable handling and peppy engines meant that many people considered the hours of necessary maintenance a worthwhile trade-off. Many BSA unit singles were built, meaning there are few 1960s motorcycles with such a large supply of readily available spares. The tunability and ready supply of these motors, combined with their compact and light(ish) construction, has also made them a popular choice for modern "classic" competition. The BSA design was based on the Triumph Tiger Cub, first produced in 1952. The continuation of the model until 1973 speaks well for the popularity and utility of this design, but also reflects badly on the forward thinking and investment of the BSA management. By 1967 unit singles were looking slow and rattly, and the "charm" of the traditional British oil leak was wearing thin. The new breed of Japanese motorcycles arriving on the scene were fast and exotic in comparison, and the buying public can certainly not be blamed for their eventual shunning of the entire British motorcycle industry.
https://en.wikipedia.org/wiki/Unit_construction
In computer science and numerical analysis, unit in the last place or unit of least precision (ulp) is the spacing between two consecutive floating-point numbers, i.e., the value the least significant digit (rightmost digit) represents if it is 1. It is used as a measure of accuracy in numeric calculations. [ 1 ] The most common definition is: in radix $b$ with precision $p$, if $b^e \leq |x| < b^{e+1}$, then $\operatorname{ulp}(x) = b^{\max\{e,\, e_{\min}\} - p + 1}$, [ 2 ] where $e_{\min}$ is the minimal exponent of the normal numbers. In particular, $\operatorname{ulp}(x) = b^{e - p + 1}$ for normal numbers, and $\operatorname{ulp}(x) = b^{e_{\min} - p + 1}$ for subnormals. Another definition, suggested by John Harrison, is slightly different: $\operatorname{ulp}(x)$ is the distance between the two closest straddling floating-point numbers $a$ and $b$ (i.e., satisfying $a \leq x \leq b$ and $a \neq b$), assuming that the exponent range is not upper-bounded. [ 3 ] [ 4 ] These definitions differ only at signed powers of the radix. [ 2 ] The IEEE 754 specification, followed by all modern floating-point hardware, requires that the result of an elementary arithmetic operation (addition, subtraction, multiplication, division, and square root since 1985, and FMA since 2008) be correctly rounded, which implies that in rounding to nearest, the rounded result is within 0.5 ulp of the mathematically exact result, using John Harrison's definition; conversely, this property implies that the distance between the rounded result and the mathematically exact result is minimized (but for the halfway cases, it is satisfied by two consecutive floating-point numbers). Reputable numeric libraries compute the basic transcendental functions to between 0.5 and about 1 ulp. Only a few libraries compute them within 0.5 ulp, this problem being complex due to the Table-maker's dilemma. [ 5 ] Since the 2010s, advances in floating-point mathematics have allowed correctly rounded functions to be almost as fast on average as these earlier, less accurate functions. A correctly rounded function would also be fully reproducible. An earlier, intermediate milestone was the 0.501 ulp functions, [ clarification needed ] which theoretically would only produce one incorrect rounding out of 1000 random floating-point inputs. [ 6 ] Let $x$ be a positive floating-point number and assume that the active rounding mode is round to nearest, ties to even, denoted $\operatorname{RN}$. If $\operatorname{ulp}(x) \leq 1$, then $\operatorname{RN}(x + 1) > x$. Otherwise, $\operatorname{RN}(x + 1) = x$ or $\operatorname{RN}(x + 1) = x + \operatorname{ulp}(x)$, depending on the value of the least significant digit and the exponent of $x$. This is demonstrated in the following Haskell code typed at an interactive prompt: here we start with 0 in single precision (binary32) and repeatedly add 1 until the operation does not change the value.
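A plausible reconstruction of the referenced GHCi session (the original listing is not reproduced in this copy, and the exact session may have differed; Prelude's until function behaves as shown):

```haskell
ghci> until (\x -> x == x + 1) (+ 1) 0 :: Float
1.6777216e7
ghci> 2 ^ 24
16777216
```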
Since the significand of a single-precision number contains 24 bits, the first integer that is not exactly representable is 2^24 + 1, and this value rounds to 2^24 in round to nearest, ties to even. Thus the result is equal to 2^24. The following example in Java approximates π as a floating-point value by finding the two double values bracketing π: $p_0 < \pi < p_1$. Then $\operatorname{ulp}(\pi)$ is determined as $\operatorname{ulp}(\pi) = p_1 - p_0$ (a reconstruction of this example is sketched at the end of this entry). Another example, in Python, also typed at an interactive prompt, starts with x = 1 and repeatedly doubles it until x == x + 1 (also reconstructed below). Similarly to Example 1, the result is 2^53 because the double-precision floating-point format uses a 53-bit significand. The Boost C++ libraries provide the functions boost::math::float_next, boost::math::float_prior, boost::math::nextafter and boost::math::float_advance to obtain nearby (and distant) floating-point values, [ 7 ] and boost::math::float_distance(a, b) to calculate the floating-point distance between two doubles. [ 8 ] The C language library provides functions to calculate the next floating-point number in some given direction: nextafterf and nexttowardf for float, nextafter and nexttoward for double, nextafterl and nexttowardl for long double, declared in <math.h>. It also provides the macros FLT_EPSILON, DBL_EPSILON, LDBL_EPSILON, which represent the positive difference between 1.0 and the next greater representable number in the corresponding type (i.e. the ulp of one). [ 9 ] The Go standard library provides the functions math.Nextafter (for 64-bit floats) and math.Nextafter32 (for 32-bit floats), both of which return the next representable floating-point value towards another provided floating-point value. [ 10 ] The Java standard library provides the functions Math.ulp(double) and Math.ulp(float). They were introduced with Java 1.5. The Swift standard library provides access to the next floating-point number in some given direction via the instance properties nextDown and nextUp. It also provides the instance property ulp and the type property ulpOfOne (which corresponds to C macros like FLT_EPSILON [ 11 ]) for Swift's floating-point types. [ 12 ]
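The Java and Python listings referenced above are reconstructed below under stated assumptions: the class name UlpOfPi and the variable names are invented for illustration, while Math.PI, Math.nextUp, and Math.ulp are standard Java APIs, and the Python session uses only built-in floats.

```java
public class UlpOfPi {
    public static void main(String[] args) {
        // Math.PI is the double closest to pi; the true value of pi lies
        // strictly between Math.PI and the next representable double above it.
        double p0 = Math.PI;          // 3.141592653589793
        double p1 = Math.nextUp(p0);  // smallest double greater than p0
        System.out.println(p1 - p0);      // 4.440892098500626E-16
        System.out.println(Math.ulp(p0)); // the same value: ulp(pi) = 2^-51
    }
}
```

A plausible reconstruction of the interactive Python session:

```python
>>> x = 1.0
>>> while x != x + 1:   # stops once x + 1 rounds back to x
...     x = x * 2
...
>>> x
9007199254740992.0
>>> x == 2 ** 53
True
```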
https://en.wikipedia.org/wiki/Unit_in_the_last_place
The unit interval (UI), also known as pulse time or symbol duration, is the shortest time between changes in a data transmission signal. In a data stream, each pulse (or symbol) takes one UI, representing the time to send a single piece of information. When used to measure a time interval, the UI gives a relative, unitless value, expressing the interval as a multiple of the UI. Often, but not always, the UI equals the time to send one bit (a single binary digit), known as the bit time. For example, in NRZ transmission the UI matches the bit time, but in 2B1Q transmission one pulse covers the time of two bits. In a system with a symbol rate of 2.5 gigabaud (for NRZ signaling, 2.5 Gbit/s), the UI is 1/(2.5 × 10⁹) = 0.4 nanoseconds per symbol. Jitter, the deviation from true periodicity in signal timing, is often expressed as a fraction of the UI. For instance, jitter of 0.01 UI means the signal's timing shifts by 1% of the UI duration. Using UI for jitter measurements allows consistent comparisons across systems with different symbol rates, as jitter often scales with the symbol duration. This works when jitter is closely linked to the timing of each symbol. This approach is common in serial communications and on-chip clock systems, and the unit is used extensively in the jitter literature; examples can be found in various ITU-T Recommendations [ 1 ] and in the tutorial by Ransom Stephens. [ 2 ]
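The arithmetic above is simple enough to script. A minimal Python sketch (the 2.5 gigabaud figure is just the worked example from the text):

```python
symbol_rate = 2.5e9        # symbols per second (2.5 Gbaud)
ui = 1.0 / symbol_rate     # unit interval in seconds
print(ui)                  # 4e-10 s, i.e. 0.4 ns per symbol

jitter_in_ui = 0.01        # jitter expressed as a fraction of the UI (1%)
print(jitter_in_ui * ui)   # absolute jitter: 4e-12 s, i.e. 4 ps
```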
https://en.wikipedia.org/wiki/Unit_interval_(data_transmission)
A unit of selection is a biological entity within the hierarchy of biological organization (for example, an entity such as a self-replicating molecule, a gene, a cell, an organism, a group, or a species) that is subject to natural selection. There is debate among evolutionary biologists about the extent to which evolution has been shaped by selective pressures acting at these different levels. [ 1 ] [ 2 ] [ 3 ] There is debate over the relative importance of the units themselves. For instance, is it group or individual selection that has driven the evolution of altruism? Where altruism reduces the fitness of individuals, individual-centered explanations for the evolution of altruism become complex and rely on the use of game theory; [ 4 ] [ 5 ] see, for instance, kin selection and group selection. There is also debate over the definition of the units themselves, [ 6 ] over the roles for selection and replication, [ 2 ] and over whether these roles may change in the course of evolution. [ 7 ] Two useful introductions to the fundamental theory underlying the unit of selection issue and debate, which also present examples of multi-level selection from the entire range of the biological hierarchy (typically with entities at level N − 1 competing for increased representation, i.e., higher frequency, at the immediately higher level N; e.g., organisms in populations or cell lineages in organisms), are Richard Lewontin's classic piece "The Units of Selection" [ 8 ] and John Maynard Smith and Eörs Szathmáry's co-authored book The Major Transitions in Evolution. As a theoretical introduction to units of selection, Lewontin writes: "The generality of the principles of natural selection means that any entities in nature that have variation, reproduction, and heritability may evolve. ... the principles can be applied equally to genes, organisms, populations, species, and at opposite ends of the scale, prebiotic molecules and ecosystems." (1970, pp. 1–2) Elisabeth Lloyd's book The Structure and Confirmation of Evolutionary Theory provides a basic philosophical introduction to the debate. Three more recent introductions are Samir Okasha's book Evolution and the Levels of Selection, Pierrick Bourrat's book Facts, Conventions, and the Levels of Selection, and Elisabeth Lloyd and Javier Suárez's book Units of Selection. Below, cases of selection at the genic, cellular, individual and group levels from within the multi-level selection perspective are presented and discussed. George C. Williams, in his influential book Adaptation and Natural Selection, was one of the first to present a gene-centered view of evolution, with the gene as the unit of selection, arguing that a unit of selection should exhibit a high degree of permanence. Richard Dawkins has written several books popularizing and expanding the idea. According to Dawkins, genes cause phenotypes and a gene is 'judged' by its phenotypic effects. Dawkins distinguishes entities which survive or fail to survive ("replicators") from entities with temporary existence that interact directly with the environment ("vehicles"). Genes are "replicators" whereas individuals and groups of individuals are "vehicles". Dawkins argues that, although they are both aspects of the same process, "replicators" rather than "vehicles" should be preferred as units of selection. This is because replicators, owing to their permanence, should be regarded as the ultimate beneficiaries of adaptations. Genes are replicators, and therefore the gene is the unit of selection.
Dawkins further expounded this view in a chapter called 'God's utility function' in the book River Out of Eden, where he explained that genes alone have utility functions. [ 9 ] Some clear-cut examples of selection at the level of the gene include meiotic drive and retrotransposons. In both of these cases, gene sequences increase their relative frequency in a population without necessarily providing benefits at other levels of organization. Meiotic-drive mutations (see segregation distortion) manipulate the machinery of chromosomal segregation so that chromosomes carrying the mutation are later found in more than half of the gametes produced by individuals heterozygous for the mutation, and for this reason the frequency of the mutation increases in the population. Retrotransposons are DNA sequences that, once replicated by the cellular machinery, insert themselves in the genome more or less randomly. Such insertions can be very mutagenic and thus drastically reduce individual fitness, so there is strong selection against elements that are very active. Meiotic-drive alleles have also been shown to strongly reduce individual fitness, clearly exemplifying the potential conflict between selection at different levels. According to the RNA world hypothesis, RNA sequences performing both enzymatic and information storage roles in autocatalytic sets were an early unit of selection and evolution that would later transition into living cells. [ 10 ] It is possible that RNA-based evolution is still taking place today. Other subcellular entities, such as viruses, both DNA-based and RNA-based, do evolve. The gene-centered view of evolution normally refers to selection among different alleles of the same gene. However, gene families also differ in their tendency to diversify and avoid loss during evolution. [ 11 ] This latter form of selection more closely resembles clade selection of groups of species. There is also a view that evolution acts on epigenes. [ 12 ] Leo Buss, in his book The Evolution of Individuality, proposes that much of the evolution of development in animals reflects the conflict between selective pressures acting at the level of the cell and those acting at the level of the multicellular individual. This perspective can shed new light on phenomena as diverse as gastrulation and germ-line sequestration. Selection at the level of the cell favors unconstrained proliferation, which conflicts with the fitness interests of the individual, and thus there is tension between selection at the level of the cell and selection at the level of the individual. Since the proliferation of specific cells of the vertebrate immune system to fight off infecting pathogens is a case of programmed and exquisitely contained cellular proliferation, it represents a case of the individual manipulating selection at the level of the cell to enhance its own fitness. In the case of the vertebrate immune system, selection at the level of the cell and the individual are not in conflict. Some view cancer stem cells as units of selection. [ 13 ] Gene–culture coevolution was developed to explain how human behavior is a product of two different and interacting evolutionary processes: genetic evolution and cultural evolution. Selection at the level of the organism can be described as Darwinism, and is well understood and considered common.
If a relatively faster gazelle manages to survive and reproduce more, the causation of this gazelle's higher fitness can be fully accounted for by looking at how individual gazelles fare under predation. The speed of the faster gazelle could be caused by a single gene, be polygenic, or be fully environmentally determined, but the unit of selection in this case is the individual, since speed is a property of each individual gazelle. When speaking about evolution at the level of the individual organism, the extended phenotype and the superorganism must also be mentioned. If a group of organisms, owing to their interactions or division of labor, provides superior fitness compared to other groups, where the fitness of the group is higher or lower than the mean fitness of the constituent individuals, group selection can be declared to occur. [ 14 ] Specific syndromes of selective factors can create situations in which groups are selected because they display group properties which are selected-for. Many common examples of group traits are reducible to individual traits, however. Selection of these traits is thus more simply explained as selection of individual traits. Some mosquito-transmitted rabbit viruses are only transmitted to uninfected rabbits from infected rabbits which are still alive. This creates a selective pressure on every group of viruses already infecting a rabbit not to become too virulent and kill their host rabbit before enough mosquitoes have bitten it, since otherwise all the viruses inside the dead rabbit would rot with it. And indeed, in natural systems such viruses display much lower virulence levels than do mutants of the same viruses that in laboratory culture readily outcompete non-virulent variants (or than do tick-transmitted viruses, since ticks do bite dead rabbits). In the previous passage, the group is assumed to have "lower virulence", i.e., "virulence" is presented as a group trait. One could argue then that the selection is in fact against individual viruses that are too virulent. In this case, however, the fitness of all viruses within a rabbit is affected by what the group does to the rabbit. Indeed, the proper, directly selected group property is that of "not killing the rabbit too early" rather than individual virulence. In situations such as these, we would expect there to be selection for cooperation amongst the viruses in a group such that the group does not "kill the rabbit too early". It is of course true that any group behavior is the result of individual traits, such as individual viruses suppressing the virulence of their neighbours, but the causes of phenotypes are rarely the causes of fitness differences. It remains controversial among biologists whether selection can operate at and above the level of species. [ 15 ] Proponents of species selection include R. A. Fisher (1929), [ 15 ] Sewall Wright (1956), [ 15 ] Richard Lewontin (1970), [ 15 ] Niles Eldredge and Stephen Jay Gould (1972), and Steven M. Stanley (1975). [ 16 ] [ 15 ] Gould proposed that there exist macroevolutionary processes which shape evolution, not driven by the microevolutionary mechanisms of the Modern Synthesis. [ 17 ] If one views species as entities that replicate (speciate) and die (go extinct) within a clade, then species could be subject to selection and thus could change their occurrence over geological time, much as heritable selected-for traits change theirs over generations.
For evolution to be driven by species selection, differential success must be the result of selection upon species-intrinsic properties, rather than properties of genes, cells, individuals, or populations within species. Such properties include, for example, population structure, propensity to speciate, extinction rates, and geological persistence. While the fossil record shows differential persistence of species, examples of species-intrinsic properties subject to natural selection have been much harder to document. One issue with selection among clades is that clades are not independent: all species are descended from the same last universal common ancestor and are thus part of the same clade. [ 1 ] This criticism does not apply to selection among different gene families that are not evolutionarily related, and which are duplicated and lost at different rates rather than speciating and going extinct at different rates. [ 11 ] In the microbial realm, the unit of selection has been interpreted as a blend of ecological and functional behaviors, or guilds, beyond the species level. [ 18 ]
https://en.wikipedia.org/wiki/Unit_of_selection
A unit of work [ 1 ] is a behavioral pattern in software development. Martin Fowler has defined it as everything one does during a business transaction which can affect the database. [ 2 ] When the unit of work is finished, it will provide everything that needs to be done to change the database as a result of the work. [ 2 ] A unit of work encapsulates one or more code repositories and a list of actions to be performed which are necessary for the successful implementation of self-contained and consistent data change. A unit of work is also responsible for handling concurrency issues, [ 3 ] [ 4 ] and can be used for transactions [ 3 ] [ 4 ] and stability patterns. [ 5 ]
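To make the pattern concrete, here is a minimal sketch in Python. It is an illustration rather than a reference implementation: the UnitOfWork class, its register_* methods, and the session object with insert/update/delete/commit/rollback methods are invented for this example.

```python
class UnitOfWork:
    """Tracks the changes made during one business transaction and
    applies them to the database together when the work is committed."""

    def __init__(self, session):
        self.session = session  # wraps a database connection/transaction
        self.new, self.dirty, self.removed = [], [], []

    def register_new(self, obj):
        self.new.append(obj)

    def register_dirty(self, obj):
        if obj not in self.dirty:
            self.dirty.append(obj)

    def register_removed(self, obj):
        self.removed.append(obj)

    def commit(self):
        try:
            for obj in self.new:
                self.session.insert(obj)
            for obj in self.dirty:
                self.session.update(obj)
            for obj in self.removed:
                self.session.delete(obj)
            self.session.commit()    # one atomic database transaction
        except Exception:
            self.session.rollback()  # keep the database consistent
            raise
```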
https://en.wikipedia.org/wiki/Unit_of_work
Unit propagation (UP) or boolean constraint propagation (BCP) or the one-literal rule (OLR) is a procedure of automated theorem proving that can simplify a set of (usually propositional) clauses. The procedure is based on unit clauses, i.e. clauses that are composed of a single literal, in conjunctive normal form. Because each clause needs to be satisfied, we know that this literal must be true. If a set of clauses contains the unit clause $l$, the other clauses are simplified by the application of the following two rules: every clause (other than the unit clause itself) containing $l$ is removed, since it is satisfied; and in every clause that contains $\neg l$, this literal is deleted, since it cannot contribute to satisfying the clause. The application of these two rules leads to a new set of clauses that is equivalent to the old one. For example, the following set of clauses can be simplified by unit propagation because it contains the unit clause $a$: $\{a \vee b,\ \neg a \vee c,\ \neg c \vee d,\ a\}$. Since $a \vee b$ contains the literal $a$, this clause can be removed altogether. Since $\neg a \vee c$ contains the negation of the literal in the unit clause, this literal can be removed from the clause. The unit clause $a$ itself is not removed; this would make the resulting set not equivalent to the original one (the clause can be removed only if it is already stored in some other form; see the section "Using a partial model"). The resulting set of clauses $\{c,\ \neg c \vee d,\ a\}$ is equivalent to the above one. The new unit clause $c$ that results from unit propagation can be used for a further application of unit propagation, which would transform $\neg c \vee d$ into $d$. The second rule of unit propagation can be seen as a restricted form of resolution, in which one of the two resolvents must always be a unit clause. As for resolution, unit propagation is a correct inference rule, in that it never produces a new clause that was not entailed by the old ones. The differences between unit propagation and resolution are that resolution is a complete proof procedure while unit propagation alone is not, and that unit propagation removes or shortens the clauses it simplifies, while resolution adds the resolvent and keeps the original clauses. Resolution calculi that include subsumption can model rule one by subsumption and rule two by a unit resolution step, followed by subsumption. Unit propagation, applied repeatedly as new unit clauses are generated, is a complete satisfiability algorithm for sets of propositional Horn clauses; it also generates a minimal model for the set if satisfiable: see Horn-satisfiability. The unit clauses that are present in a set of clauses, or can be derived from it, can be stored in the form of a partial model (this partial model may also contain other literals, depending on the application). In this case, unit propagation is performed based on the literals of the partial model, and unit clauses are removed if their literal is in the model. In the example above, the unit clause $a$ would be added to the partial model; the simplification of the set of clauses would then proceed as above, with the difference that the unit clause $a$ is now removed from the set. The resulting set of clauses is equivalent to the original one under the assumption of validity of the literals in the partial model. The direct implementation of unit propagation takes time quadratic in the total size of the set to check, which is defined to be the sum of the sizes of all clauses, where the size of each clause is the number of literals it contains. Unit propagation can, however, be done in linear time by storing, for each variable, the list of clauses in which each literal is contained.
For example, the set above can be represented by numbering each clause as follows: 1: a ∨ b {\displaystyle a\vee b} , 2: ¬ a ∨ c {\displaystyle \neg a\vee c} , 3: ¬ c ∨ d {\displaystyle \neg c\vee d} , 4: a {\displaystyle a} ; and then storing, for each variable, the list of clauses containing the variable or its negation: a {\displaystyle a} : 1, 2, 4; b {\displaystyle b} : 1; c {\displaystyle c} : 2, 3; d {\displaystyle d} : 3. This simple data structure can be built in time linear in the size of the set, and allows finding all clauses containing a variable very easily. Unit propagation of a literal can be performed efficiently by scanning only the list of clauses containing the variable of that literal. More precisely, the total running time for doing unit propagation for all unit clauses is linear in the size of the set of clauses.
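To make the occurrence-list technique concrete, the following is a minimal sketch in Python (an illustration only, not taken from any reference implementation; the encoding of literals as signed integers and all names are the author's own). It applies the two rules above, stores propagated literals in a partial model as described earlier, and, when propagating a literal, scans only the clauses listed for its variable:

from collections import defaultdict

def unit_propagate(clauses):
    # Clauses are collections of non-zero integers: a positive integer
    # stands for a variable, a negative integer for its negation.
    clauses = [set(c) for c in clauses]
    occurs = defaultdict(list)        # variable -> indices of clauses mentioning it
    for i, clause in enumerate(clauses):
        for lit in clause:
            occurs[abs(lit)].append(i)
    model = set()                     # partial model of propagated literals
    queue = [next(iter(c)) for c in clauses if len(c) == 1]
    while queue:
        lit = queue.pop()
        if -lit in model:
            return None               # conflict: both l and not-l were derived
        if lit in model:
            continue
        model.add(lit)
        for i in occurs[abs(lit)]:    # scan only clauses mentioning this variable
            clause = clauses[i]
            if clause is None:
                continue              # clause was already removed
            if lit in clause:
                clauses[i] = None     # rule one: a satisfied clause is removed
            else:
                clause.discard(-lit)  # rule two: the falsified literal is removed
                if len(clause) == 1:
                    queue.append(next(iter(clause)))
                elif not clause:
                    return None       # empty clause derived: unsatisfiable
    return [frozenset(c) for c in clauses if c is not None], model

# The example above, encoding a=1, b=2, c=3, d=4:
print(unit_propagate([{1, 2}, {-1, 3}, {-3, 4}, {1}]))
# -> ([], {1, 3, 4}): all clauses are satisfied once a, c and d are true

Note that, as in the partial-model variant described above, the unit clauses themselves end up in the model rather than in the returned clause set; each variable is processed at most once, which is what makes the total work proportional to the size of the occurrence lists.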
https://en.wikipedia.org/wiki/Unit_propagation
In quantum physics , unitarity is the condition that the time evolution of a quantum state according to the Schrödinger equation is mathematically represented by a unitary operator ; a process satisfying this condition is called a unitary process. This is typically taken as an axiom or basic postulate of quantum mechanics, while generalizations of or departures from unitarity are part of speculations about theories that may go beyond quantum mechanics. [ 1 ] A unitarity bound is any inequality that follows from the unitarity of the evolution operator , i.e. from the statement that time evolution preserves inner products in Hilbert space . Time evolution described by a time-independent Hamiltonian is represented by a one-parameter family of unitary operators , for which the Hamiltonian is a generator: U ( t ) = e − i H ^ t / ℏ {\displaystyle U(t)=e^{-i{\hat {H}}t/\hbar }} . In the Schrödinger picture , the unitary operators are taken to act upon the system's quantum state, whereas in the Heisenberg picture , the time dependence is incorporated into the observables instead. [ 2 ] In quantum mechanics, every state is described as a vector in Hilbert space . When a measurement is performed, it is convenient to describe this space using a vector basis in which every basis vector has a defined result of the measurement – e.g., a vector basis of defined momentum in case momentum is measured. The measurement operator is diagonal in this basis. [ 3 ] The probability of obtaining a particular measured result depends on the probability amplitude , given by the inner product of the physical state | ψ ⟩ {\displaystyle |\psi \rangle } with the basis vectors { | ϕ i ⟩ } {\displaystyle \{|\phi _{i}\rangle \}} that diagonalize the measurement operator. For a physical state that is measured after it has evolved in time, the probability amplitude can be described either by the inner product of the physical state after time evolution with the relevant basis vectors, or equivalently by the inner product of the physical state with the basis vectors that are evolved backwards in time. Using the time evolution operator e − i H ^ t / ℏ {\displaystyle e^{-i{\hat {H}}t/\hbar }} , the amplitude is ⟨ ϕ i | e − i H ^ t / ℏ | ψ ⟩ {\displaystyle \langle \phi _{i}|e^{-i{\hat {H}}t/\hbar }|\psi \rangle } . [ 4 ] But by definition of Hermitian conjugation , this is also ⟨ e i H ^ † t / ℏ ϕ i | ψ ⟩ {\displaystyle \langle e^{i{\hat {H}}^{\dagger }t/\hbar }\phi _{i}|\psi \rangle } . Since these equalities are true for every two vectors, and since the backwards-evolved basis vectors must be obtained with the same Hamiltonian, we get e i H ^ † t / ℏ = e i H ^ t / ℏ {\displaystyle e^{i{\hat {H}}^{\dagger }t/\hbar }=e^{i{\hat {H}}t/\hbar }} , so that H ^ † = H ^ {\displaystyle {\hat {H}}^{\dagger }={\hat {H}}} . This means that the Hamiltonian is Hermitian and the time evolution operator e − i H ^ t / ℏ {\displaystyle e^{-i{\hat {H}}t/\hbar }} is unitary . Since by the Born rule the norm determines the probability of obtaining a particular result in a measurement, unitarity together with the Born rule guarantees that the sum of probabilities is always one. Furthermore, unitarity together with the Born rule implies that the measurement operators in the Heisenberg picture indeed describe how the measurement results are expected to evolve in time. That the time evolution operator is unitary is equivalent to the Hamiltonian being Hermitian . Equivalently, this means that the possible measured energies, which are the eigenvalues of the Hamiltonian, are always real numbers. The S-matrix is used to describe how the physical system changes in a scattering process. It is in fact equal to the time evolution operator over a very long time (approaching infinity) acting on momentum states of particles (or bound complexes of particles) at infinity. Thus it must be a unitary operator as well; a calculation yielding a non-unitary S-matrix often implies that a bound state has been overlooked. Unitarity of the S-matrix implies, among other things, the optical theorem .
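As a small numerical illustration of these statements (not part of the article; it assumes Python with NumPy and SciPy available), one can exponentiate a randomly generated Hermitian matrix and check that the resulting evolution operator is unitary, preserves norms, and has real energy eigenvalues:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# A random Hermitian "Hamiltonian" on a 4-dimensional Hilbert space (hbar = 1).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = (A + A.conj().T) / 2

U = expm(-1j * H * 1.7)                          # U(t) = exp(-iHt) at t = 1.7

print(np.allclose(U.conj().T @ U, np.eye(4)))    # True: U is unitary
psi = rng.standard_normal(4) + 1j * rng.standard_normal(4)
psi /= np.linalg.norm(psi)
print(np.isclose(np.linalg.norm(U @ psi), 1.0))  # True: total probability stays one
print(np.allclose(np.linalg.eigvals(H).imag, 0)) # True: measured energies are real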
This can be seen as follows: [ 5 ] The S-matrix can be written as S = 1 + i T {\displaystyle S=1+iT} , where T {\displaystyle T} is the part of the S-matrix that is due to interactions; e.g. T = 0 {\displaystyle T=0} implies that the S-matrix is 1, that no interactions occur, and that all states remain unchanged. Unitarity of the S-matrix, S † S = 1 {\displaystyle S^{\dagger }S=1} , is then equivalent to − i ( T − T † ) = T † T {\displaystyle -i(T-T^{\dagger })=T^{\dagger }T} . The left-hand side is twice the imaginary part of the S-matrix. In order to see what the right-hand side is, let us look at any specific element of this matrix, e.g. between some initial state | I ⟩ {\displaystyle |I\rangle } and final state ⟨ F | {\displaystyle \langle F|} , each of which may include many particles. The matrix element is then ⟨ F | T † T | I ⟩ = ∑ i ⟨ F | T † | A i ⟩ ⟨ A i | T | I ⟩ {\displaystyle \langle F|T^{\dagger }T|I\rangle =\sum _{i}\langle F|T^{\dagger }|A_{i}\rangle \langle A_{i}|T|I\rangle } , where { A i } {\displaystyle \{A_{i}\}} is the set of possible on-shell states, i.e. momentum states of particles (or bound complexes of particles) at infinity. Thus, twice the imaginary part of the S-matrix is equal to a sum representing products of contributions from all the scatterings of the initial state of the S-matrix to any other physical state at infinity, with the scatterings of the latter to the final state of the S-matrix. Since the imaginary part of the S-matrix can be calculated by virtual particles appearing in intermediate states of the Feynman diagrams , it follows that these virtual particles must only consist of real particles that may also appear as final states. The mathematical machinery used to ensure this includes gauge symmetry and sometimes also Faddeev–Popov ghosts . According to the optical theorem, the probability amplitude M (= iT ) for any scattering process must obey | M | 2 ≤ 2 Im ⁡ ( M ) {\displaystyle |M|^{2}\leq 2\operatorname {Im} (M)} . Similar unitarity bounds imply that the amplitudes and cross sections cannot increase too much with energy, or must decrease as quickly as a certain formula [ which? ] dictates. For example, the Froissart bound says that the total cross section of two particles scattering is bounded by c ln 2 ⁡ s {\displaystyle c\ln ^{2}s} , where c {\displaystyle c} is a constant, and s {\displaystyle s} is the square of the center-of-mass energy. (See Mandelstam variables .)
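The algebra compressed into the statements above is a one-line expansion; in LaTeX form (a standard reconstruction, with the elastic case F = I singled out):

S^\dagger S = (1 - iT^\dagger)(1 + iT) = 1 + i(T - T^\dagger) + T^\dagger T = 1
\quad\Longrightarrow\quad -i\,(T - T^\dagger) = T^\dagger T .

Taking this between \langle I| and |I\rangle and inserting the complete set of on-shell states \{|A_i\rangle\} gives

2\,\operatorname{Im}\langle I|T|I\rangle = \sum_i \big|\langle A_i|T|I\rangle\big|^2 \;\geq\; \big|\langle I|T|I\rangle\big|^2 ,

where the inequality holds because the initial state itself appears among the on-shell states; this is the optical-theorem bound |M|^2 \leq 2\operatorname{Im}(M) quoted above.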
https://en.wikipedia.org/wiki/Unitarity_(physics)
In mathematics , an element of a *-algebra is called unitary if it is invertible and its inverse element is the same as its adjoint element. [ 1 ] Let A {\displaystyle {\mathcal {A}}} be a *-algebra with unit e {\displaystyle e} . An element a ∈ A {\displaystyle a\in {\mathcal {A}}} is called unitary if a a ∗ = a ∗ a = e {\displaystyle aa^{*}=a^{*}a=e} . In other words, if a {\displaystyle a} is invertible and a − 1 = a ∗ {\displaystyle a^{-1}=a^{*}} holds, then a {\displaystyle a} is unitary. [ 1 ] The set of unitary elements is denoted by A U {\displaystyle {\mathcal {A}}_{U}} or U ( A ) {\displaystyle U({\mathcal {A}})} . A special case of particular importance is the case where A {\displaystyle {\mathcal {A}}} is a complete normed *-algebra that satisfies the C*-identity ( ‖ a ∗ a ‖ = ‖ a ‖ 2 ∀ a ∈ A {\displaystyle \left\|a^{*}a\right\|=\left\|a\right\|^{2}\ \forall a\in {\mathcal {A}}} ); such an algebra is called a C*-algebra . Let A {\displaystyle {\mathcal {A}}} be a unital C*-algebra, then: Let A {\displaystyle {\mathcal {A}}} be a unital *-algebra and a , b ∈ A U {\displaystyle a,b\in {\mathcal {A}}_{U}} . Then:
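For a concrete instance (an illustration, not from the article; it assumes Python with NumPy): in the C*-algebra of n × n complex matrices, the involution is the conjugate transpose and the unit e is the identity matrix, so the unitary elements are exactly the unitary matrices.

import numpy as np

rng = np.random.default_rng(1)

# The Q factor of a QR decomposition of a random complex matrix is unitary.
Z = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
a, _ = np.linalg.qr(Z)

e = np.eye(3)            # the unit of the algebra M_3(C)
a_star = a.conj().T      # the adjoint a*

print(np.allclose(a @ a_star, e) and np.allclose(a_star @ a, e))   # a a* = a* a = e
# The C*-identity ||a* a|| = ||a||^2, in the operator (spectral) norm:
print(np.isclose(np.linalg.norm(a_star @ a, 2), np.linalg.norm(a, 2) ** 2))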
https://en.wikipedia.org/wiki/Unitary_element
In elementary algebra , the unitary method is a problem-solving technique taught to students as a method for solving word problems involving proportionality and units of measurement . It consists of first finding the value or proportional amount of a single unit, from the information given in the problem, and then multiplying the result by the number of units of the same kind, given in the problem, to obtain the result. [ 1 ] As a simple example, to solve the problem: "A man walks 7 miles in 2 hours. How far does he walk in 7 hours?", one could first calculate how far the man walks in a single hour, as the ratio of the first two givens: 7 miles divided by 2 hours is 3 1/2 miles per hour. Then, multiplying by the third given, 7 hours, gives the answer as 24 1/2 miles. The same method can also be used as a step in more complicated problems, such as those involving the division of a good into different proportions. When used in this way, the value of a single unit, found in the unitary method, may depend on previously calculated values rather than being a simple ratio of givens. [ 2 ]
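The method reduces to one division followed by one multiplication; a tiny sketch in Python (the function and argument names are illustrative only):

def unitary_method(value, units, target_units):
    per_unit = value / units           # step 1: the value of a single unit
    return per_unit * target_units     # step 2: scale up to the required number of units

# "A man walks 7 miles in 2 hours. How far does he walk in 7 hours?"
print(unitary_method(7, 2, 7))         # 24.5, i.e. 24 1/2 miles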
https://en.wikipedia.org/wiki/Unitary_method
In mathematics, a unitary transformation is a linear isomorphism that preserves the inner product : the inner product of two vectors before the transformation is equal to their inner product after the transformation. More precisely, a unitary transformation is an isometric isomorphism between two inner product spaces (such as Hilbert spaces ). In other words, a unitary transformation is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}} between two inner product spaces, H 1 {\displaystyle H_{1}} and H 2 , {\displaystyle H_{2},} such that ⟨ U x , U y ⟩ H 2 = ⟨ x , y ⟩ H 1 {\displaystyle \langle Ux,Uy\rangle _{H_{2}}=\langle x,y\rangle _{H_{1}}} for all x {\displaystyle x} and y {\displaystyle y} in H 1 {\displaystyle H_{1}} . It is a linear isometry , as one can see by setting x = y . {\displaystyle x=y.} In the case when H 1 {\displaystyle H_{1}} and H 2 {\displaystyle H_{2}} are the same space, a unitary transformation is an automorphism of that Hilbert space, and then it is also called a unitary operator . A closely related notion is that of antiunitary transformation , which is a bijective function U : H 1 → H 2 {\displaystyle U:H_{1}\to H_{2}} between two complex Hilbert spaces such that ⟨ U x , U y ⟩ = ⟨ x , y ⟩ ¯ {\displaystyle \langle Ux,Uy\rangle ={\overline {\langle x,y\rangle }}} for all x {\displaystyle x} and y {\displaystyle y} in H 1 {\displaystyle H_{1}} , where the horizontal bar represents the complex conjugate .
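A short numerical check of both notions (an illustration assuming Python with NumPy; finite-dimensional spaces stand in for general Hilbert spaces):

import numpy as np

rng = np.random.default_rng(2)

# A random unitary map U on C^4, obtained as the Q factor of a QR decomposition.
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(Z)

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

inner = np.vdot        # <u, v>, conjugate-linear in the first argument

# Unitary transformation: inner products are preserved.
print(np.isclose(inner(U @ x, U @ y), inner(x, y)))           # True
# Setting x = y shows it is an isometry: norms are preserved.
print(np.isclose(np.linalg.norm(U @ x), np.linalg.norm(x)))   # True
# An antiunitary map, e.g. V(x) = U(conjugate(x)), conjugates inner products.
V = lambda v: U @ v.conj()
print(np.isclose(inner(V(x), V(y)), np.conj(inner(x, y))))    # True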
https://en.wikipedia.org/wiki/Unitary_transformation
United Envirotech, Ltd. (UE; [ 1 ] UEL) [ 2 ] is a company focused on environmental engineering and consulting . [ 1 ] Founded in 2003 by Lin Yucheng and Goh Ching Wah, [ 1 ] the company went public in 2004 and has been listed on the Singapore stock exchange (UENV) and traded in the United States and Germany. [ 3 ] [ 4 ] [ 5 ] In 2014 and 2015, news reports indicated that investment firms CITIC Group and Kohlberg Kravis Roberts were jointly seeking to acquire a controlling interest in the company through a newly created joint venture company , tentatively named "CKM". [ 6 ] [ 7 ] As of 2015, investment company Kohlberg Kravis Roberts owned about 30% of the company based on stock holdings. [ 7 ] As of 2023, its website stated that its major shareholders were CITIC Environment Investment Co Ltd, and China Reform Fund Envirotech Co., Ltd . [ 8 ] As of 2015, the chairman and chief executive officer of the company was co-founder Lin Yucheng. [ 2 ] According to its website in 2023, its executive chairman was Sun Lei. [ 9 ] In 2015, UEL was reported to be constructing a 100,000 litre/day capacity industrial wastewater treatment facility to serve (and with investment from) a petrochemical industrial park situated at the start of the West–East Gas Pipeline in Luntai County , Xinjiang , China. [ 2 ]
https://en.wikipedia.org/wiki/United_Envirotech
The United Nations Buffer Zone in Cyprus is a demilitarized zone , patrolled by the United Nations Peacekeeping Force in Cyprus (UNFICYP), that was established on 4 March 1964. It was extended on 9 August 1964 after the Battle of Tillyria and extended again in 1974 after the ceasefire of 16 August 1974, following the Turkish invasion of Cyprus and the de facto partition of the island into the area controlled by the Republic of Cyprus (excluding the British Sovereign Base Areas ) and the largely unrecognized Turkish Republic of Northern Cyprus in the north. The zone, also known as the Green Line ( Greek : Πράσινη Γραμμή , Prasini Grammi ; Turkish : Yeşil Hat ), stretches for 180 kilometres (112 miles) from Paralimni in the east to Kato Pyrgos in the west, where a separate section surrounds Kokkina . The dividing line is also referred to as the Attila Line , [ 1 ] named after Turkey's 1974 military invasion, codenamed Operation Attila . The Turkish army has built a barrier on the zone's northern side, consisting mainly of barbed-wire fencing, concrete wall segments, watchtowers, anti-tank ditches, and minefields. The zone cuts through the centre of Nicosia , separating the city into southern and northern sections. In total, it spans an area of 346 square kilometres (134 sq mi), varying in width from less than 20 metres (66 ft) to more than 7 kilometres (4.3 mi). [ 2 ] [ 3 ] [ 4 ] Since the fall of the Berlin Wall in 1989, Nicosia has remained the last divided capital in Europe. [ 5 ] [ 6 ] Some 10,000 people live in several villages and work on farms located within the zone; the village of Pyla is famous for being one of the few remaining villages in Cyprus where Greek and Turkish Cypriots still live side by side. Other villages are Deneia , Athienou , and Troulloi . Some areas are untouched by human interference and have remained a safe haven for flora and fauna. [ 3 ] A buffer zone in Cyprus was first established in the last days of 1963, when Major-General Peter Young was the commander of the British Joint Force (later known as the Truce Force and a predecessor of the present UN force). This force was set up in the wake of the intercommunal violence of Christmas 1963. On 30 December 1963, following a 'high powered' twelve-hour meeting chaired by Duncan Sandys (British Secretary of State for Commonwealth Relations), General Young drew the agreed cease-fire line on a map with a green chinagraph pencil, which was to become known as the "Green Line". [ 7 ] Brigadier Patrick Thursby also assisted in devising and establishing the Green Line. [ 8 ] This map was then passed to General Young's intelligence officer, who was waiting in a nearby building and told to "Get on with it." Intelligence Corps NCOs then copied the map for distribution to the Truce Force units. Further copies of the map would then have been produced 'in house' for use by Truce Force patrols. [ 9 ] The Green Line became impassable following the July 1974 Turkish invasion of Cyprus, during which Turkey occupied approximately 37% of Cypriot territory in response to a short-lived Greek Cypriot coup. A "security zone" was established after the Tripartite Conference of Geneva in July 1974. Pursuant to United Nations Security Council Resolution 353 of 1974, [ 10 ] the foreign ministers of Greece, Turkey, and the United Kingdom convened in Geneva on 25 July 1974.
According to UNFICYP, the text of the joint declaration transmitted to the Secretary-General of the United Nations was as follows: A security zone of a size to be determined by representatives of Greece, Turkey, and the United Kingdom, in consultation with UNFICYP, was to be established at the limit of the areas occupied by the Turkish armed forces. This zone was to be entered by no forces other than those of UNFICYP, which was to supervise the prohibition of entry. Pending the determination of the size and character of the security zone, the existing area between the two forces was not to be breached by any forces. The UN Security Council then adopted the above declaration with Resolution 355 . When the coup dissolved, the Turkish Armed Forces had advanced to capture approximately 37% of the island, and their front met the "Green Line". The meandering Buffer Zone marks the southernmost points that the Turkish troops occupied during the Turkish invasion of Cyprus in August 1974, running between the ceasefire lines of the Cypriot National Guard and the Turkish army that de facto divides Cyprus into two, cutting through the capital of Nicosia . With the self-proclamation of the internationally unrecognized "Turkish Republic of Northern Cyprus", the Buffer Zone became its de facto southern border. [ citation needed ] Traffic across the buffer zone was very limited until 2003, when the number of crossings and the rules governing them were relaxed. [ citation needed ] In March 2021, Cyprus erected a barbed wire fence on the Buffer Zone to curb illegal immigration. [ 12 ] Sector One starts at the Kokkina exclave and covers approximately 90 kilometres (55 mi) to Mammari, west of Nicosia . Since 16 October 1993, it has been the responsibility of the Argentinian Contingent, with approximately 212 soldiers. Sector One Headquarters and Command Company are located in San Martin Camp, which is near Skouriotissa village. Support Company finds its home at Roca Camp, near Xeros in the north. The two line companies are deployed along four permanently staffed patrol bases while also conducting mobile patrols from the San Martin and Roca camps. [ 13 ] Sector Two starts at Mammari, west of Nicosia, and covers 30 kilometres (20 mi) to Kaimakli , east of Nicosia. Since 1993, it has been the responsibility of the British contingent, which deploys using the name Operation TOSCA . [ 14 ] Sector Three was patrolled by Canadian troops until their departure in 1993. It was then absorbed into Sectors 2 and 4. [ 15 ] Sector Four starts at Kaimakli , east of Nicosia, and covers 65 kilometres (40 mi) to the village of Dherinia , on the east coast of Cyprus; it has been the responsibility of the Slovak contingent, with 202 soldiers. [ 16 ] After a nearly 30-year ban on crossings, the Turkish Cypriot administration significantly eased travel restrictions across the dividing line in April 2003, allowing Greek Cypriots to cross at the Ledra Palace Crossing just outside the walls of old Nicosia. This was only made possible after the decision of the ECHR ( Djavit An v. Turkey , Application No. 20652/92). [ 17 ] These are the crossings now available: Before Cypriot accession to the European Union , there were restrictions on Green Line crossings by foreigners imposed by the Republic of Cyprus, but these were abolished for EU citizens by EU regulation 866/2004. [ 18 ] Generally, citizens of any country are permitted to cross the line, including Greek and Turkish Cypriots.
A 2005 EU report stated that "a systematic illegal route through the northern part to the government-controlled areas exists", allowing an influx of asylum seekers . [ 19 ] On 11 August 1996, Greek Cypriots demonstrated with a march against the Turkish occupation of Cyprus. The demonstrators' demand was the complete withdrawal of Turkish troops and the return of Cypriot refugees to their homes and properties. Among the demonstrators was Cypriot refugee Tassos Isaac , who was beaten to death by the Turkish far-right group Grey Wolves . [ 20 ] Another man, Solomos Solomou (Tassos Isaac's cousin), was shot to death by a Northern Cyprus minister during the same protests on 14 August 1996. [ 21 ] Aged 26, Solomou was one of many mourners who entered the Buffer Zone three days after Isaac's funeral, on 14 August, to lay a wreath on the spot where he had been beaten to death. As Solomou was climbing a flagpole to remove the flag of Turkey, he was fired upon by the Minister of Agriculture and Natural Resources of Northern Cyprus, Kenan Akin. [ 22 ] An investigation by authorities of the Republic of Cyprus followed, and the suspects were named as Kenan Akin and Erdal Haciali Emanet (Turkish-born Chief of Special Forces of Northern Cyprus). International legal proceedings were instigated, and arrest warrants for both were issued via Interpol . [ 23 ] During the demonstrations on 14 August 1996, two British soldiers were also shot at and wounded by the Turkish forces: Neil Emery and Jeffrey Hudson, both from the 39th Regiment Royal Artillery . Bombardier Emery was shot in the arm, whilst Gunner Hudson was shot in the leg by a high-velocity rifle round and was airlifted to hospital in Nicosia, then on to RAF Akrotiri . In August 2023, de facto Turkish security forces (police and military) attacked members of the UN peacekeeping force inside the UN buffer zone at Pyla . The clashes started over unauthorised construction work in an area under UN control. Turkish bulldozers removed UN trucks, cement bollards and barbed wire from the zone. The incident occurred in Sector 4, and three peacekeepers were seriously injured and required hospitalisation. [ 24 ] Turkish president Recep Tayyip Erdoğan accused the UN force of bias against Turkish Cypriots and added that Turkey would not allow any "unlawful" behavior toward Turks on Cyprus. [ 25 ] The UN Security Council said that the incident was a violation of the status quo contrary to council resolutions and condemned the assault on the peacekeepers. The UN Secretary-General António Guterres said that "threats to the safety of U.N. peacekeepers and damage to U.N. property are unacceptable and may constitute serious crimes under international law." [ 26 ] The buffer zone between the checkpoints that divide Ledra Street was used as a space for activism from 15 October 2011 until June 2012 by the Occupy Buffer Zone movement. [ 27 ]
https://en.wikipedia.org/wiki/United_Nations_Buffer_Zone_in_Cyprus
The United Nations General Assembly declared 2011–20 the United Nations Decade on Biodiversity (Resolution 65/161 [ 1 ] ). The UN Decade on Biodiversity served to support and promote implementation of the Strategic Plan for Biodiversity and the Aichi Biodiversity Targets , [ 2 ] with the goal of significantly reducing biodiversity loss. None of the 20 Aichi targets were achieved, though progress was made towards several of them. [ 3 ] On December 22, 2010, building on the International Year of Biodiversity (2010) and the goal of significantly reducing biodiversity loss , the United Nations General Assembly declared 2011–2020 the United Nations Decade on Biodiversity (Resolution 65/161). [ 4 ] The UN Decade on Biodiversity served to support and promote the implementation of the objectives of the Strategic Plan for Biodiversity and the Aichi Biodiversity Targets, which were adopted in 2010 at the 10th Conference of the Parties to the CBD , held in Aichi , Japan. Throughout the UN Decade on Biodiversity, governments were encouraged to develop, implement and communicate the results of national strategies for implementation of the Strategic Plan for Biodiversity. [ 5 ] It also sought to promote the involvement of a variety of national and intergovernmental actors and other stakeholders in the goal of mainstreaming biodiversity into broader development planning and economic activities. The aim was to place special focus on supporting actions that address the underlying causes of biodiversity loss, including production and consumption patterns. [ 6 ] The Decade was to be succeeded by the Post-2020 Biodiversity Framework , which is itself a stepping stone to the 2050 Vision of "Living in harmony with nature", which envisages that "By 2050, biodiversity is valued, conserved, restored and wisely used, maintaining ecosystem services , sustaining a healthy planet and delivering benefits essential for all people." [ 7 ] [ 8 ] On 30 September 2020, world leaders virtually gathered at the first ever global Summit on Biodiversity. The summit involved pre-recorded statements from over 100 states and organizations. It was intended to build momentum for the fifteenth Conference of the Parties to the United Nations Convention on Biological Diversity , which was postponed to 2021 due to the COVID-19 pandemic. Many speakers acknowledged that none of the Aichi Biodiversity Targets established in 2010 were met during the United Nations Decade on Biodiversity. [ 9 ] Of the 60 sub-goals used to monitor progress towards the Aichi targets, 7 were achieved, with progress made on another 38. [ 3 ] The decade was followed by the UN Decade on Ecosystem Restoration , which aims to drastically scale up the restoration of degraded and destroyed ecosystems . [ 10 ] The information above, for the most part, is based on the official websites of the Convention on Biological Diversity and of the United Nations Decade on Biodiversity.
https://en.wikipedia.org/wiki/United_Nations_Decade_on_Biodiversity
The United Nations Declaration on Human Cloning was a nonbinding statement against all forms of human cloning approved by a divided UN General Assembly . The vote came in March 2005, [ 1 ] after four years of debate and an end to attempts for an international ban. In the 191-nation assembly, there were 84 votes in favor of the nonbinding statement, 34 against and 37 abstentions. Proposed by Honduras , the statement was largely supported by Roman Catholic countries and opposed by countries with active embryonic stem cell research programs. Many Islamic nations abstained. The UN Declaration on Human Cloning, as it is named, calls for all member states to adopt a ban on human cloning, which it says is "incompatible with human dignity and the protection of human life." The US , which has long pushed for a complete ban, voted in favor of the statement, while traditional ally Britain , where therapeutic cloning is legal and regulated, voted against it. The statement should have no impact on countries that allow therapeutic cloning, such as Britain and South Korea , as it is not legally binding. "The foes of therapeutic cloning are trying to portray this as a victory for their ideology," Bernard Siegel , a Florida attorney who lobbies to defend therapeutic cloning, said in a Reuters report. "But this confusing declaration is an effort to mask their failure last November to impose a treaty on the world banning therapeutic cloning."
https://en.wikipedia.org/wiki/United_Nations_Declaration_on_Human_Cloning
United Nations Framework Classification for Resources (UNFC) is an international scheme for the classification, management and reporting of energy, mineral, and raw material resources. [ 1 ] [ 2 ] [ 3 ] The United Nations Economic Commission for Europe 's (UNECE) Expert Group on Resource Management (EGRM) is responsible for the promotion and further development of UNFC. Natural resources such as minerals and petroleum are classified and managed using differing schemes. [ 4 ] [ 5 ] In 1997, UNECE published the United Nations Framework Classification for Reserves and Resources of Solid Fuels and Mineral Commodities (UNFC-1997) as a unifying international system for classifying solid minerals and fuels. [ 6 ] In 2004, the Classification was revised to include petroleum (oil and natural gas) and uranium, and renamed the UNFC for Fossil Energy and Mineral Resources 2004 (UNFC-2004). [ 7 ] In 2009, a simplified United Nations Framework Classification for Fossil Energy and Mineral Reserves and Resources 2009 (UNFC-2009) was published. [ 8 ] In response to the application of UNFC being extended to renewable energy, injection projects for geological storage and anthropogenic resources, the name was changed in 2017 to the United Nations Framework Classification for Resources (UNFC). An updated version of UNFC, with improved terminology, was released in 2019. [ 9 ] The UNFC system is used for: UNFC currently applies to minerals, [ 10 ] petroleum, [ 10 ] renewable energy, [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] nuclear fuel resources, [ 3 ] injection projects for geological storage, [ 16 ] and anthropogenic resources. [ 17 ] Application of UNFC to groundwater resources is being evaluated. UNFC has been adopted as the basis of national resource classification in many countries including China, India , [ 18 ] [ 19 ] Mexico, [ 20 ] Poland [ 21 ] and Ukraine . The African Union Commission has developed a UNFC-based African Mineral and Energy Resources Classification and Management System (AMREC) as a unifying system for Africa. [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] AMREC includes a Pan African Resource Reporting Code (PARC 2023). The European Commission is using UNFC to classify and report raw material resources of Europe and has mandated the same in the Critical Raw Materials Act . [ 27 ] [ 28 ]
https://en.wikipedia.org/wiki/United_Nations_Framework_Classification_for_Resources
United Nations General Assembly Resolution 31/72 referred the Environmental Modification Convention (ENMOD) “to all States for their consideration, signature, and ratification”. The resolution was adopted on 10 December 1976 at the 31st Session of the UN General Assembly . The convention aims to prohibit the military or other hostile use of environmental modification techniques that have widespread, long-lasting, or severe effects. The convention entered into force on 5 October 1978. According to the historical narrative of the U.S. Department of State , "although the use of environmental modification techniques for hostile purposes does not play a major role in military planning at the present time," the U.S. Government anticipated that such techniques might be developed in the future and "would pose a threat of serious damage unless action was taken to prohibit their use." Accordingly, in July 1972, the U.S. Government renounced the use of climate modification techniques for hostile purposes , even if their development should prove feasible in the future. The following year, it called for an international agreement to avoid the military use of environmental and geophysical modifications and, after exploring the possible uses, reached out to the Soviet Union (USSR). In 1974 and 1975, U.S. President Richard Nixon and Soviet General Secretary Leonid Brezhnev held three sets of discussions on the issue. In 1975, the two nations began negotiating specific terms at the Conference of the Committee on Disarmament (CCD). Finalized in 1976, the agreed text was sent to the UN General Assembly for consideration during the fall session. On 10 December 1976, the resolution was approved by a vote of 96 to 8, with 30 abstentions. [ 1 ] [ 2 ] In the treaty text, "environmental modification technique" is defined as follows: As used in article I, the term "environmental modification techniques" refers to any technique for changing – through the deliberate manipulation of natural processes – the dynamics, composition or structure of the Earth, including its biota, lithosphere, hydrosphere and atmosphere, or of outer space. This article incorporates public domain material from websites or documents of the United States Department of State .
https://en.wikipedia.org/wiki/United_Nations_General_Assembly_Resolution_31/72
The United Nations Permanent Forum on Indigenous Issues ( UNPFII or PFII ) is the UN 's central coordinating body for matters relating to the concerns and rights of the world's indigenous peoples . There are more than 370 million indigenous people (also known as native, original, aboriginal and first peoples) in some 70 countries worldwide. [ 1 ] The forum was created in 2000 as an outcome of the UN's International Year for the World's Indigenous People in 1993, within the first International Decade of the World's Indigenous People (1995–2004). It is an advisory body within the framework of the United Nations System that reports to the UN 's Economic and Social Council (ECOSOC). Resolution 45/164 of the United Nations General Assembly was adopted on 18 December 1990, proclaiming that 1993 would be the International Year for the World's Indigenous People, "with a view to strengthening international cooperation for the solution of problems faced by indigenous communities in areas such as human rights, the environment, development, education and health". [ 2 ] [ 3 ] The year was launched in Australia by Prime Minister Paul Keating 's memorable Redfern speech on 10 December 1992, in which he addressed Indigenous Australians ' disadvantage. [ 4 ] The creation of the permanent forum was discussed at the 1993 World Conference on Human Rights in Vienna , Austria. The Vienna Declaration and Programme of Action recommended that such a forum should be established within the first United Nations International Decade of the World's Indigenous Peoples. [ 5 ] A working group was formed and various other meetings took place that led to the establishment of the permanent forum by Economic and Social Council Resolution 2000/22 on 28 July 2000. [ 6 ] It submits recommendations to the Council on issues related to indigenous peoples. It holds a two-week session each year, which usually takes place at the United Nations Headquarters in New York City but may also take place in Geneva or any other place decided by the forum. [ citation needed ] The mandate of the Forum is to discuss indigenous issues related to economic and social development, culture, the environment, education, health and human rights . The forum is to: [ 7 ] [ 8 ] The forum is composed of 16 independent experts, functioning in their personal capacity, who are appointed to three-year terms. At the end of their term, they can be re-elected or re-appointed for one additional term. Of these 16 members, eight are nominated by the member governments and eight directly nominated by indigenous organizations. Those nominated by the governments are elected to office by the Economic and Social Council based on the five regional groupings of the United Nations, whereas those nominated by indigenous organisations are appointed by the President of the Economic and Social Council and represent the seven socio-cultural regions for broad representation of the world's indigenous peoples. [ 8 ] To date, eighteen sessions have been held, all at UN Headquarters, New York. [ 10 ] The Secretariat of the PFII was established by the General Assembly in 2002 with Resolution 57/191. [ 17 ] It is based in New York within the Division for Inclusive Social Development (DISD) of the United Nations Department of Economic and Social Affairs (DESA).
[ 18 ] The Secretariat, among other things, prepares the annual sessions of the Forum, provides support and assistance to the Forum's members, promotes awareness of indigenous issues within the UN system, governments and the public, and serves as a source of information and a coordination point for indigenous-related efforts. The first International Decade of the World's Indigenous People, "Indigenous People: Partnership in Action" (1995–2004), was proclaimed by General Assembly resolution 48/163 with the main objective of strengthening international cooperation for the solution of problems faced by indigenous peoples in areas such as human rights, environment, development, health and education. [ 19 ] The Second International Decade of the World's Indigenous People, "Partnership for Action and Dignity" (2005–2015), was proclaimed by the General Assembly at its 59th session, and the programme of action was adopted at the 60th session. [ 20 ] Its objectives are: On 28 February 2020, 500 participants of a high-level assembly adopted the "Los Pinos Declaration", which concentrates on the human rights of indigenous language users. To ensure diversity, members are elected from different regions depending on who nominated them: Africa; Eurasia; North America; Oceania; and South America. [ 8 ]
https://en.wikipedia.org/wiki/United_Nations_Permanent_Forum_on_Indigenous_Issues
The United Nations Resource Management System ( UNRMS ) is a voluntary global standard for managing natural resources sustainably. [ 1 ] It is based on the United Nations Framework Classification for Resources (UNFC). [ 2 ] UNRMS aims to support the Sustainable Development Goals (SDGs) by providing a comprehensive framework and methodology for resource progression, policy development, and financing. [ 3 ] UNRMS is a sustainable resource management system developed by the United Nations Economic Commission for Europe (UNECE). It was created to address unsustainable resource supply and use patterns, to mitigate environmental and societal impacts while ensuring long-term resource availability. [ 4 ] UNRMS was initiated in 2017 and goes beyond classification, offering a holistic approach to resource management. It promotes technologies for efficient resource discovery, recovery, and processing. UNRMS is a significant step towards harmonizing economic, environmental, and social objectives in resource utilization. [ 5 ] UNRMS is a sustainable management framework that aims to enhance resource efficiency and reduce environmental impact . It is designed to promote a circular economy by considering resources as interconnected elements of a broader ecosystem . The system supports stakeholders in adopting sustainable practices across various resource sectors, contributing to achieving the SDGs and ensuring responsible production and use of natural resources for present and future generations. [ 6 ] The UNRMS framework is based on 12 fundamental principles and 54 requirements. It uses a methodology that assesses resources based on their environmental, social, and economic viability, technical feasibility, and confidence in estimates. This approach provides a sustainable pathway for resource progression that considers its impact on society and the environment. [ 7 ] The methodology promotes high-impact technologies for efficient resource discovery, recovery, and processing, intending to advance sustainable resource management practices that can be adapted to various types of resources and geographical contexts. [ 8 ] The fundamental principles of UNRMS are: UNRMS extends its application across diverse resources, including minerals, petroleum, renewable energy, nuclear and anthropogenic resources, geological storage and groundwater. It is a versatile tool for stakeholders, encompassing governments, industries, and civil society, to manage resources in a way that aligns with the Sustainable Development Goals. [ 9 ] UNRMS's coverage is not limited to isolated sectors but spans a region's entire resource base, promoting an integrated approach to resource management that supports policy development, technological advancement, and sustainable financing. [ 10 ] Various regions have adopted UNRMS to improve the sustainability of their natural resource management, aligning with global sustainability goals and climate agreements. [ 11 ] Assessments have demonstrated the effectiveness of UNRMS by identifying gaps in existing resource management systems and providing actionable recommendations for improvements, thereby proving its utility in enhancing sustainable development practices. [ 12 ] UNRMS has proven effective at the sub-national level by providing a comprehensive framework that guides regions in enhancing their resource management systems, leading to improved sustainability and adherence to global environmental standards.
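The three assessment dimensions named above (environmental-social-economic viability, technical feasibility, and confidence in estimates) can be pictured as a simple record attached to each project. The Python sketch below is purely illustrative and is not an official UNRMS or UNFC data model; all names and category codes are hypothetical stand-ins:

from dataclasses import dataclass

@dataclass
class ResourceAssessment:
    # Illustrative fields mirroring the three dimensions described above.
    project: str
    viability: int      # environmental, social and economic viability category
    feasibility: int    # technical feasibility category
    confidence: int     # confidence-in-estimates category

    def code(self) -> str:
        # A compact label combining the three categories (hypothetical format).
        return f"V{self.viability}.F{self.feasibility}.C{self.confidence}"

print(ResourceAssessment("example geothermal project", 1, 2, 1).code())   # V1.F2.C1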
https://en.wikipedia.org/wiki/United_Nations_Resource_Management_System
The United Nations Scientific Committee on the Effects of Atomic Radiation ( UNSCEAR ) was set up by resolution of the United Nations General Assembly in 1955. The United Nations General Assembly has designated 31 United Nations Member States as members of the Scientific Committee. The organization has no power to set radiation standards nor to make recommendations in regard to nuclear testing. It was established solely to "define precisely the present exposure of the population of the world to ionizing radiation ". A small secretariat, located in Vienna and functionally linked to the United Nations Environment Programme (UNEP), organizes the annual sessions and manages the preparation of documents for the committee's scrutiny. Originally, in 1955, India and the Soviet Union wanted to add several neutral and communist states, such as mainland China. Eventually, a compromise with the US was made, and Argentina, Belgium, Egypt and Mexico were permitted to join. The organisation was charged with collecting all available data on the effects of "ionising radiation upon man and his environment" (James J. Wadsworth, American representative to the General Assembly). The committee was originally based in the Secretariat Building in New York City but moved to the United Nations Office at Vienna in 1974. The Secretaries of the Committee were: In 2022, UNSCEAR published its most recent full report, the UNSCEAR 2020/2021 Report (Vol. I to Vol. IV), with scientific annexes A to D. [ 3 ] The " UNSCEAR 2008 REPORT Vol.I " [ 4 ] comprises the main report and 2 scientific annexes; the " UNSCEAR 2008 REPORT Vol.II " comprises 3 scientific annexes. UNSCEAR has also itemized the types of exposure and reported the exposure rate for each segment of the population.
https://en.wikipedia.org/wiki/United_Nations_Scientific_Committee_on_the_Effects_of_Atomic_Radiation
The mandate of the United Nations Special Rapporteur on Toxics and Human Rights was established in 1995 by the United Nations Commission on Human Rights . In 1995, the Commission on Human Rights established the mandate to examine the human rights implications of exposure to hazardous substances and toxic waste . This included the implications of trends such as illicit traffic in toxic and dangerous products, and their release during military activities, war and conflict, and shipbreaking. Other areas included in the mandate are medical waste , extractive industries (particularly oil, gas and mining), labour conditions in the manufacturing and agricultural sectors, consumer products, environmental emissions of hazardous substances from all sources, and the disposal of waste. [ 1 ] In 2011, the UN Human Rights Council affirmed that hazardous substances and waste may constitute a serious threat to the full enjoyment of human rights . It expanded the mandate to include the whole life-cycle of hazardous products, from manufacturing to final disposal. This is known as the cradle-to-grave approach . The rapid acceleration in chemical production suggests that this threat is increasing, particularly for the human rights of the most vulnerable segments of society. [ 1 ] The UN asserts that states are required by international human rights law to take active measures to prevent the exposure of individuals and communities to toxic substances. Vulnerable members of society are often deemed most affected; they include people living in poverty, workers, children, minority groups, indigenous peoples, and migrants, among other vulnerable or susceptible groups, with highly gendered impacts. [ 1 ] The Special Rapporteur is appointed by the UN Human Rights Council. The appointed expert is required by the Human Rights Council to examine and report back to member states on initiatives taken to promote and protect the human rights implicated by the improper management of hazardous substances and wastes. [ 1 ] One report highlights the human rights implications of toxic additives in plastics and the life-cycle stages of plastic, including the rights of women, children, workers, and indigenous peoples. [ 3 ] Toxic chemicals are commonly added to plastics, causing serious risks to human rights and the environment. The Special Rapporteur puts forward recommendations aimed at addressing the negative consequences of plastics on human rights. [ 4 ] In another report, the Special Rapporteur clarifies the scope of the right to information throughout the life cycle of hazardous substances and wastes, identifies challenges that have emerged in realizing this right, and outlines potential solutions to these problems. Obligations of states and responsibilities of business in relation to implementing the right to information on hazardous substances and wastes are discussed. [ 5 ]
https://en.wikipedia.org/wiki/United_Nations_Special_Rapporteur_on_Toxics_and_Human_Rights
The United Nations University Institute for Sustainability and Peace (UNU-ISP) was an institute of the United Nations University (UNU) , which provides a bridge between the UN and the international academic and policy-making communities. UNU-ISP was based at UNU headquarters in Tokyo , Japan , with an operating unit located in Germany (UNU-ISP SCYCLE). In 2013, UNU-ISP merged with the United Nations University Institute for Advanced Study, forming the United Nations University Institute for the Advanced Study of Sustainability (UNU-IAS). [ 1 ] As of 2023, the still Tokyo-based UNU-IAS focuses on research concerning sustainable development, biodiversity and society, water and resource management, and education. UNU-IAS produces academic studies, annual reports, and public policy papers. It also offers advanced degrees at the MA and PhD levels, as well as postdoctoral fellowships and courses aimed at connecting current research in these areas with the UN's Sustainable Development Goals (SDGs). [ 2 ] As an institute of the United Nations University, UNU-ISP was bound by the UNU Charter, [ 3 ] and the guidelines set out by the UNU Council. [ 4 ] UNU-ISP worked in collaboration with other UNU institutes and programs, as well as through collaborative relationships with the global academic and policy-making communities. Within the context of sustainability and peace, UNU-ISP offered a postgraduate program, the Master of Science in Sustainability, Development, and Peace. This two-year program addressed pressing global issues through an innovative interdisciplinary approach that integrated the natural sciences , social sciences , and humanities . Today that work is carried on, with greater focus on sustainability, as part of UNU-IAS. UNU-ISP became operational on January 1, 2009, [ 5 ] to give an institutional identity and profile to the integrated academic activities of two former UNU programs: the Environment and Sustainable Development program and the Peace and Governance program. The institute combined the strengths in natural sciences, social sciences and the humanities of these two former programs, creating transdisciplinary synergies to more effectively address pressing global problems of human survival, development and welfare. In 2013, UNU-ISP became part of the larger UNU-IAS. In line with the UNU Charter, the research of UNU-ISP focused on "global problems of human survival, development and welfare that are the concern of the United Nations and its agencies". Today, UNU-IAS supports and produces "evidence-based knowledge and solutions to inform policymaking and address priority issues for the UN system." It emphasizes novel methodologies which "challenge conventional thinking and develop creative solutions to emerging issues of global concern." [ 6 ] The key areas of focus are sustainable development, biodiversity and society, water and resource management, and education.
https://en.wikipedia.org/wiki/United_Nations_University_Institute_for_Sustainability_and_Peace
The United States Army CBRN School (USACBRNS), located at Fort Leonard Wood , Missouri , is a primary American training school specializing in military Chemical, Biological, Radiological, and Nuclear (CBRN) defense. [ 1 ] Until 2008, it was known as the United States Army Chemical School . In accordance with U.S. federal law , Fort Leonard Wood, Missouri, is designated as the central location for all of the Department of Defense 's CBRN operations training, and is home to the U.S. Army's Chemical Corps Regiment. The school was moved from Fort McClellan , Alabama, after that base was closed by the Defense Base Closure and Realignment Commission (BRAC) in 1999. The Army CBRN School provides numerous courses for commissioned officers, non-commissioned officers and initial entry soldiers. Numerous international organisations also send students to train at the CBRN School. Additionally, the US Air Force , US Navy , US Coast Guard and US Marine Corps also maintain training elements at Fort Leonard Wood, in partnership with the Army, to train their personnel in CBRN operations. Fort Leonard Wood and the United States Army CBRN School have facilities in which to conduct training, such as the Chemical Defense Training Facility (CDTF), where military students from across the globe train and become familiar with nerve agents in realistic scenarios, and conduct training with radiological isotopes and inert biological agents . The Edwin R. Bradley Radiological Teaching Laboratories is one of the few radiological teaching laboratories in the Department of Defense licensed by the NRC . It provides a variety of training in radiological and nuclear defense under the supervision of credentialed scientists. The newest facility at the CBRN School is the Lieutenant Joseph Terry CBRN Training Facility . Opened in November 2007, the 1LT Joseph Terry Chemical, Biological, Radiological, Nuclear (CBRN) Responder Training Facility occupies approximately 22.5 acres (91,000 m 2 ) and provides a CBRN responder training campus for inter-service and other agencies as requested. The US Army CBRN School is the lead for all DOD CBRN response training. This facility provides training opportunities in the fields of CBRN consequence management, hazardous materials incident response, realistic training venues and other CBRN response arenas as required. The CBRN School also provides training in sensitive site assessment and exploitation. In addition to training, the CBRN School also develops doctrine for operations, researches and develops materiel requirements, and conducts joint service experimentation as the Joint Combat Developer for the Department of Defense's Chemical and Biological Defense Program. On 11 January 2008, the U.S. Army Chemical School was renamed the U.S. Army Chemical, Biological, Radiological and Nuclear School (USACBRNS). The name change was intended to reflect, in the title of the school, the wide range of training and expertise maintained by the U.S. Army Chemical Corps. As of August 31, 2024, the Commandant of the U.S. Army CBRN School is Colonel Alexander C. Lovasz. The Assistant Commandant is Colonel Sedrick L. Jackson. The Regimental Command Sergeant Major is CSM David C. Henderson. The Regimental Chief Warrant Officer is CW4 Matthew D. Chrisman. [ 2 ]
https://en.wikipedia.org/wiki/United_States_Army_CBRN_School
The United States Army Corps of Engineers ( USACE ) is the military engineering branch of the United States Army . A direct reporting unit (DRU), it has three primary mission areas: the Engineer Regiment , military construction , and civil works . USACE has 37,000 civilian and military personnel, [ 2 ] making it one of the world's largest public engineering, design, and construction management agencies. The USACE workforce is approximately 97% civilian and 3% active duty military. The civilian workforce is mainly located in the United States, Europe, and select Middle East office locations. Civilians do not function as active duty military and are not required to be in active war and combat zones; however, volunteer (with pay) opportunities do exist for civilians to do so. The day-to-day activities of the three mission areas are administered by a lieutenant general known as the chief of engineers /commanding general. The chief of engineers commands the Engineer Regiment, comprising combat engineer , rescue, construction, dive, and other specialty units, and answers directly to the Chief of Staff of the Army . Combat engineers, sometimes called sappers , form an integral part of the Army's combined arms team and are found in all Army service components: Regular Army, National Guard , and Army Reserve . Their duties are to breach obstacles; construct fighting positions, fixed/floating bridges, and obstacles and defensive positions; place and detonate explosives; conduct route clearance operations; emplace and detect landmines; and fight as provisional infantry when required. For the military construction mission, the chief of engineers is directed and supervised by the Assistant Secretary of the Army for Installations, Energy and Environment, whom the President appoints and the Senate confirms. Military construction relates to construction on military bases and worldwide installations. On 16 June 1775, the Continental Congress , gathered in Philadelphia , granted authority for the creation of a "Chief Engineer for the Army". Congress authorized a corps of engineers for the United States on 11 March 1779. The Corps as it is known today came into being on 16 March 1802, when the president was authorized to "organize and establish a Corps of Engineers ... that the said Corps ... shall be stationed at West Point in the State of New York and shall constitute a Military Academy ." A Corps of Topographical Engineers , authorized on 4 July 1838, merged with the Corps of Engineers in March 1863. Civil works are managed and supervised by the Assistant Secretary of the Army (Civil Works) . Army civil works include three U.S. Congress -authorized business lines: navigation, flood and storm damage protection, and aquatic ecosystem restoration. Civil works is also tasked with administering the Clean Water Act Section 404 program, as well as recreation, hydropower, and water supply at USACE flood control reservoirs, and environmental infrastructure. The civil works staff oversee construction, operation, and maintenance of dams, canals and flood protection in the U.S., as well as a wide range of public works throughout the world. [ 3 ] Some of its dams, reservoirs, and flood control projects also serve as public outdoor recreation facilities. Its hydroelectric projects provide 24% of U.S. hydropower capacity. The Corps of Engineers is headquartered in Washington, D.C. , and has a budget of $7.8 billion (FY2021).
[ 4 ] The corps' mission is to "deliver vital public and military engineering services; partnering in peace and war to strengthen our nation's security, energize the economy and reduce risks from disasters." [ 5 ] Its most visible civil works missions include: The history of the United States Army Corps of Engineers can be traced back to the American Revolution . On 16 June 1775, the Continental Congress organized the Corps of Engineers, whose initial staff included a chief engineer and two assistants. [ 6 ] Colonel Richard Gridley became General George Washington 's first chief engineer. One of his first tasks was to build fortifications near Boston at Bunker Hill . The Continental Congress recognized the need for engineers trained in military fortifications and asked the government of King Louis XVI of France for assistance. Many of the early engineers in the Continental Army were former French officers. Louis Lebègue Duportail , a lieutenant colonel in the French Royal Corps of Engineers, was secretly sent to North America in March 1777 to serve in George Washington 's Continental Army . In July 1777 he was appointed colonel and commander of all engineers in the Continental Army and, on 17 November 1777, he was promoted to brigadier general. When the Continental Congress created a separate Corps of Engineers in May 1779, Duportail was appointed as its commander. In late 1781 he directed the construction of the allied U.S.-French siege works at the Battle of Yorktown . On 26 February 1783, the Corps was disbanded. It was re-established during the presidency of George Washington . From 1794 to 1802, the engineers were combined with the artillery as the Corps of Artillerists and Engineers . [ 7 ] The Corps of Engineers, as it is known today, was established on 16 March 1802, when President Thomas Jefferson signed the Military Peace Establishment Act , whose aim was to "organize and establish a Corps of Engineers ... that the said Corps ... shall be stationed at West Point in the State of New York and shall constitute a military academy." Until 1866, the superintendent of the United States Military Academy was always an engineer officer. The General Survey Act of 1824 authorized the use of Army engineers to survey road and canal routes for the growing nation. [ 8 ] That same year, Congress passed an "Act to Improve the Navigation of the Ohio and Mississippi Rivers" and to remove sand bars on the Ohio and "planters, sawyers, or snags" (trees fixed in the riverbed) on the Mississippi, for which the Corps of Engineers was identified as the responsible agency. [ 9 ] Separately authorized on 4 July 1838, the Corps of Topographical Engineers consisted only of officers and was used for mapping and the design and construction of federal civil works and other coastal fortifications and navigational routes. It was merged with the Corps of Engineers on 31 March 1863, at which point the Corps of Engineers also assumed the Lakes Survey District mission for the Great Lakes . [ 10 ] In 1841, Congress created the Lake Survey . The survey, based in Detroit, Michigan, was charged with conducting a hydrographical survey of the Northern and Northwestern lakes and preparing and publishing nautical charts and other navigation aids. The Lake Survey published its first charts in 1852. [ 11 ] In the mid-19th century, Corps of Engineers officers ran lighthouse districts in tandem with U.S. naval officers. The Army Corps of Engineers played a significant role in the American Civil War .
Many of the men who would serve in the top leadership of this organization were West Point graduates. Several rose to military fame and power during the Civil War. Some examples include Union generals George McClellan , Henry Halleck , and George Meade ; and Confederate generals Robert E. Lee , Joseph Johnston , and P.G.T. Beauregard . [ 6 ] The versatility of officers in the Army Corps of Engineers contributed to the success of numerous missions throughout the Civil War. They were responsible for building pontoon and railroad bridges, forts and batteries, destroying enemy supply lines (including railroads), and constructing roads for the movement of troops and supplies. [ 6 ] Both sides recognized the critical work of engineers. On 6 March 1861, once the South had seceded from the Union, its legislature passed an act to create a Confederate Corps of Engineers. [ 12 ] The South was initially at a disadvantage in engineering expertise; of the initial 65 cadets who resigned from West Point to accept positions with the Confederate Army, only seven were placed in the Corps of Engineers. [ 12 ] The Confederate Congress passed legislation that authorized a company of engineers for every division in the field; by 1865, the CSA had more engineer officers serving in the field of action than the Union Army. [ 12 ] One of the main projects for the Army Corps of Engineers was constructing railroads and bridges. Union forces took advantage of such Confederate infrastructure because railroads and bridges provided access to resources and industry. The Confederate engineers, using slave labor, [ 13 ] built fortifications that were used both offensively and defensively, along with trenches that made them harder to penetrate. This method of building trenches was known as the zigzag pattern. [ 12 ] The National Defense Act of 1916 authorized a reserve corps in the Army, and the Engineer Officers' Reserve Corps and the Engineer Enlisted Reserve Corps became one of its branches. [ 14 ] Some of these personnel were called into active service for World War I . From the beginning, many politicians wanted the Corps of Engineers to contribute to both military construction and civil works. Assigned the military construction mission on 1 December 1941, after the Quartermaster Department struggled with the expanding mission, [ 15 ] the Corps built facilities at home and abroad to support the U.S. Army and Air Force. During World War II the USACE program expanded to more than 27,000 military and industrial projects in a $15.3 billion mobilization effort. Included were aircraft, tank assembly, and ammunition plants; camps for 5.3 million soldiers; depots, ports, and hospitals; and the rapid construction of such landmark projects as the Manhattan Project at Los Alamos, Hanford and Oak Ridge , among other places, and the Pentagon , the Department of Defense headquarters across the Potomac from Washington, DC. In civilian projects, the Corps of Engineers became the lead federal navigation and flood control agency. Congress significantly expanded its civil works activities, and the Corps became a major provider of hydroelectric energy and the country's leading provider of recreation. Its role in responding to natural disasters also grew dramatically, especially following the devastating Mississippi Flood of 1927 . In the late 1960s, the agency became a leading environmental preservation and restoration agency.
In 1944, specially trained Army combat engineers were assigned to blow up underwater obstacles and clear defended ports during the invasion of Normandy. [ 16 ] [ 17 ] During World War II, the Army Corps of Engineers in the European Theater of Operations was responsible for building numerous bridges, including the first and longest floating tactical bridge across the Rhine at Remagen, and for building or maintaining roads vital to the Allied advance across Europe into the heart of Germany. In the Pacific theater, the "Pioneer troops" were formed, a hand-selected unit of volunteer Army combat engineers trained in jungle warfare, knife fighting, and unarmed jujitsu (hand-to-hand combat) techniques. [ 18 ] Working in camouflage, the Pioneers cleared jungle, prepared routes of advance, and established bridgeheads for the infantry, as well as demolishing enemy installations. [ 18 ] Five commanding generals (chiefs of staff after the 1903 reorganization) of the United States Army held engineer commissions early in their careers. All transferred to other branches before being promoted to the top position. They were Alexander Macomb, George B. McClellan, Henry W. Halleck, Douglas MacArthur, and Maxwell D. Taylor. [ 19 ] Occasional civil disasters, including the Great Mississippi Flood of 1927, resulted in greater responsibilities for the Corps of Engineers; the aftermath of Hurricane Katrina in New Orleans and the collapse of the Francis Scott Key Bridge in Baltimore provide other examples of this. [ 25 ] The Chief of Engineers and Commanding General (a lieutenant general) of the U.S. Army Corps of Engineers has three mission areas: combat engineers, military construction, and civil works. For each mission area, the Chief of Engineers/Commanding General is supervised by a different official; for civil works, the Commanding General is supervised by the civilian Assistant Secretary of the Army (Civil Works). Three deputy commanding generals (major generals), with the following titles, report to the chief of engineers: Deputy Commanding General, Deputy Commanding General for Civil and Emergency Operations, and Deputy Commanding General for Military and International Operations. [ 26 ] The Corps of Engineers headquarters is located in Washington, D.C. The headquarters staff is responsible for Corps of Engineers policy and plans the future direction of all other USACE organizations; it comprises the executive office and 17 staff principals. USACE has two civilian directors who head up the Military and Civil Works programs in concert with their respective deputy commanding general for the mission area. The U.S. Army Corps of Engineers is organized geographically into eight permanent divisions, one provisional division, one provisional district, and one research command reporting directly to the HQ. Within each division, there are several districts. [ 27 ] Districts are defined by watershed boundaries for civil works projects and by political boundaries for military projects. U.S. Army engineer units outside of USACE districts fall under the Engineer Regiment of the U.S. Army Corps of Engineers, which comprises the majority of Army engineer soldiers. The Regiment includes combat engineers, whose duties are to breach obstacles; construct fighting positions, fixed/floating bridges, and obstacles and defensive positions; place and detonate explosives; conduct route clearance operations; emplace and detect landmines; and fight as provisional infantry when required.
It also includes support engineers, who are more focused on construction and sustainment. Headquartered at Fort Leonard Wood, Missouri, the Engineer Regiment is commanded by the Engineer Commandant, a position currently filled by an Army brigadier general. The Engineer Regiment includes the U.S. Army Engineer School (USAES), which publishes its mission as: Generate the military engineer capabilities the Army needs: training and certifying Soldiers with the right knowledge, skills, and critical thinking; growing and educating professional leaders; organizing and equipping units; establishing a doctrinal framework for employing capabilities; and remaining an adaptive institution in order to provide Commanders with the freedom of action they need to successfully execute Unified Land Operations. There are several other organizations within the Corps of Engineers. [ 3 ] [ 28 ] USACE provides support directly and indirectly to the warfighting effort. [ 31 ] It builds and helps maintain much of the infrastructure that the Army and the Air Force use to train, house, and deploy troops. USACE-built and -maintained navigation systems and ports provide the means to deploy vital equipment and other material. Corps of Engineers research and development (R&D) facilities help develop new methods and measures for deployment, force protection, terrain analysis, mapping, and other support. USACE directly supports the military in the battle zone, making expertise available to commanders to help solve or avoid engineering (and other) problems. Forward Engineer Support Teams (FEST-As or FEST-Ms) may accompany combat engineers to provide immediate support, or reach electronically into the rest of USACE for the necessary expertise. A FEST-A team is an eight-person detachment, while a FEST-M numbers approximately 36; these teams are designed to provide immediate technical engineering support to the warfighter or in a disaster area. Corps of Engineers professionals use the knowledge and skills honed on both military and civil projects to support the U.S. and local communities in the areas of real estate, contracting, mapping, construction, logistics, engineering, and management. Prior to their respective troop withdrawals in 2021, this included support for rebuilding Iraq, establishing infrastructure in Afghanistan, and supporting international and inter-agency services. In addition, the work of almost 26,000 civilians on civil-works programs throughout USACE provides a training ground for similar capabilities worldwide. USACE civilians volunteer for assignments worldwide. For example, hydropower experts have helped repair, renovate, and run hydropower dams in Iraq in an effort to help Iraqis become self-sustaining. [ 28 ] [ 32 ] USACE supports the United States Department of Homeland Security and the Federal Emergency Management Agency (FEMA) through its security planning, force protection, research and development, disaster preparedness efforts, and quick response to emergencies and disasters. [ 33 ] The Corps conducts its emergency response activities under two basic authorities: the Flood Control and Coastal Emergency Act (Pub. L. 84–99) and the Stafford Disaster Relief and Emergency Assistance Act (Pub. L. 93–288). In a typical year, the Corps of Engineers responds to more than 30 presidential disaster declarations, plus numerous state and local emergencies. Emergency responses usually involve cooperation with other military elements and federal agencies in support of state and local efforts.
Work comprises engineering and management support to military installations, global real estate support, civil works support (including risk and priorities), operations and maintenance of federal navigation and flood control projects, and monitoring of dams and levees. [ 34 ] More than 67 percent of the goods consumed by Americans and more than half of the nation's oil imports are processed through deepwater ports maintained by the Corps of Engineers, which maintains more than 12,000 miles (19,000 km) of commercially navigable channels across the U.S. In both its Civil Works mission and Military Construction program, the Corps of Engineers is responsible for billions of dollars of the nation's infrastructure. For example, USACE maintains direct control of 609 dams, maintains or operates 257 navigation locks, and operates 75 hydroelectric facilities generating 24% of the nation's hydropower and three percent of its total electricity. USACE inspects over 2,000 federal and non-federal levees every two years. Four billion gallons of water per day are drawn from the Corps of Engineers' 136 multi-use flood control projects, comprising 9,800,000 acre-feet (12.1 km³) of water storage, making it one of the United States' largest water supply agencies (the conversion and the implied hydropower share are checked in the arithmetic sketch at the end of this passage). [ 28 ] The 249th Engineer Battalion (Prime Power), the only active duty unit in USACE, generates and distributes prime electrical power in support of warfighting, disaster relief, and stability and support operations, and provides advice and technical assistance in all aspects of electrical power and distribution systems. The battalion deployed in support of recovery operations after 9/11 and was instrumental in getting Wall Street back up and running within a week. [ 35 ] The battalion also deployed in support of post-Katrina operations. All of this work represents a significant investment in the nation's resources. Through its Civil Works program, USACE carries out a wide array of projects that provide coastal protection, flood protection, hydropower, navigable waters and ports, recreational opportunities, and water supply. [ 36 ] [ 37 ] Work includes coastal protection and restoration, including a new emphasis on a more holistic approach to risk management. As part of this work, USACE is one of the top providers of outdoor recreation in the U.S., so there is a significant emphasis on water safety. [ 38 ] Army involvement in works "of a civil nature," including water resources, goes back almost to the origins of the U.S. Over the years, as the nation's needs have changed, so have the Army's Civil Works missions. [ 39 ] The U.S. Army Corps of Engineers environmental mission has two major focus areas: restoration and stewardship. The Corps supports and manages numerous environmental programs that run the gamut from cleaning up areas on former military installations contaminated by hazardous waste or munitions to helping establish or reestablish wetlands that help endangered species survive. [ 43 ] Some of these programs include Ecosystem Restoration, Formerly Used Defense Sites, Environmental Stewardship, EPA Superfund, Abandoned Mine Lands, Formerly Utilized Sites Remedial Action Program, Base Realignment and Closure, 2005, and Regulatory. This mission includes education as well as regulation and cleanup. The U.S. Army Corps of Engineers has an active environmental program under both its Military and Civil Programs. [ 43 ]
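As a quick check on the figures just quoted, the arithmetic below converts the stated storage volume to cubic kilometres and works out what the two hydropower percentages jointly imply. This is an illustrative sketch only, not USACE tooling; the one assumption added here is the standard conversion factor of about 1,233.48 cubic metres per acre-foot.

```python
# Illustrative arithmetic only (not USACE tooling): sanity-check the storage
# figure quoted above and the hydropower share the percentages imply.

ACRE_FOOT_M3 = 1233.48  # standard conversion: one acre-foot in cubic metres

acre_feet = 9_800_000                      # storage across the 136 flood control projects
cubic_km = acre_feet * ACRE_FOOT_M3 / 1e9  # 1 km^3 = 1e9 m^3
print(f"{acre_feet:,} acre-feet = {cubic_km:.1f} km^3")  # -> 12.1 km^3, matching the text

# If 75 USACE facilities generate 24% of U.S. hydropower but only 3% of all
# U.S. electricity, hydropower's overall share of U.S. electricity follows:
hydro_share = 0.03 / 0.24
print(f"Implied hydropower share of total electricity: {hydro_share:.1%}")  # -> 12.5%
```

Both results are consistent with the figures quoted in the text, which is a useful cross-check on numbers drawn from different summaries.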
The Civil Works environmental mission ensures that all USACE projects, facilities, and associated lands meet environmental standards. The program has four functions: compliance, restoration, prevention, and conservation. The Corps also regulates all work in wetlands and waters of the United States. The Military Programs Environmental Program manages the design and execution of a full range of cleanup and protection activities. See also Environmental Enforcement below. The Army adopted a sustainability policy in the early 2000s to make military bases, and the force as a whole, more resilient and less dependent on fossil fuels. Since the US military is one of the world's largest institutional energy consumers, this would have a significant impact on reducing waste, improving efficiency, and ensuring that public resources are used effectively. [ 44 ] The Army has developed and adopted its own triple bottom line framework, shifting from the traditional "People, Planet, and Profit" to "Mission, Community, and Environment". To meet these new sustainability targets, it has implemented regulations such as designing all new projects to meet the LEED Silver standard. Additional regulations are detailed in the Sustainable Design and Development Policy. The 2017 revision to the Sustainable Design and Development Policy outlines the updated goals and requirements the Army established in an effort to successfully complete the sustainability mission. [ 45 ] Most of these requirements result in stricter regulations on the planning, design, and construction of new projects and major renovations. Many of these goals fall directly on USACE, as it oversees most construction and maintenance of Army bases and infrastructure. To embrace the branch's movement toward sustainability, USACE added sustainability as an overarching mission with several specific focus areas. This challenge is not without its difficulties. The first report, issued in 2008, showed that 78% of new projects were built to the LEED Silver standard (without actually obtaining the certification) instead of the 100% required. In addition, there were reductions of 8.4% in energy use intensity and 32% in water use, but a 35% increase in hazardous waste production. [ 46 ] Later reports show some improvement toward resilience and sustainability: the 2020 Sustainability Report and Implementation Plan shows a further 12% reduction in water use as well as a 35% total reduction in energy use intensity since 2003. Future projections show that USACE intends to continue to build on these focus areas and drive down its demands in areas such as fuel, electricity, and water. [ 47 ]
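One arithmetic subtlety in those report figures is worth making explicit: a "further 12% reduction" on top of an earlier 32% does not total 44% if each percentage is measured against the then-current level. The sketch below works through both readings; treating the two water-use figures as compounding is an assumption made here purely for illustration, since the reports' baselines are not spelled out in this summary.

```python
# Water-use figures quoted above: a 32% reduction (2008 report) followed by
# a "further 12%" (2020 report). Illustrative assumption: each figure is
# relative to the then-current level, so the reductions compound.

first, second = 0.32, 0.12
combined = 1 - (1 - first) * (1 - second)
print(f"Compounded total reduction: {combined:.1%}")  # -> 40.2%

# If instead both figures were measured against the same fixed baseline,
# the cumulative reduction would be the simple sum: 32% + 12% = 44%.
```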
The regulatory program is authorized to protect the nation's aquatic resources. USACE personnel evaluate permit applications for essentially all construction activities that occur in the nation's waters, including wetlands. Two primary authorities granted to the Army Corps of Engineers by Congress fall under Section 10 of the Rivers and Harbors Act and Section 404 of the Clean Water Act. Section 10 of the Rivers and Harbors Act of 1899 (codified at Title 33, Section 403 of the United States Code) gave the Corps authority over navigable waters of the United States, defined as "those waters that are subject to the ebb and flow of the tide and/or are presently being used, or have been used in the past, or may be susceptible for use to transport interstate or foreign commerce." Section 10 covers construction, excavation, or deposition of materials in, over, or under such waters, or any work that would affect the course, location, condition, or capacity of those waters. Actions requiring section 10 permits include structures (e.g., piers, wharfs, breakwaters, bulkheads, jetties, weirs, transmission lines) and work such as dredging or disposal of dredged material, or excavation, filling, or other modifications to the navigable waters of the United States. The Coast Guard also has responsibility for permitting the erection or modification of bridges over navigable waters of the U.S. [ citation needed ] Another of the major responsibilities of the Army Corps of Engineers is administering the permitting program under Section 404 of the Federal Water Pollution Control Act of 1972, also known as the Clean Water Act. The Secretary of the Army is authorized under this act to issue permits for the discharge of dredged and fill material in waters of the United States, including adjacent wetlands. [ 28 ] The geographic extent of waters of the United States subject to section 404 permits falls under a broader definition and includes tributaries to navigable waters and adjacent wetlands. The engineers must first determine whether the waters at the project site are jurisdictional and subject to the requirements of the section 404 permitting program. Once jurisdiction has been established, permit review and authorization follows a sequential process that encourages avoidance of impacts, followed by minimization of impacts and, finally, mitigation for unavoidable impacts to the aquatic environment. This sequence is described in the section 404(b)(1) guidelines. There are three types of permits issued by the Corps of Engineers: Nationwide, Regional General, and Individual. 80% of the permits issued are nationwide permits, which cover 50 general types of activities with minimal impacts to waters of the United States, as published in the Federal Register. Nationwide permits are subject to a reauthorization process every 5 years, with the most recent reauthorization occurring in March 2012. To gain authorization under a nationwide permit, an applicant must comply with the terms and conditions of the nationwide permit. Select nationwide permits require preconstruction notification to the applicable Corps district office, stating the applicant's intent and the type and amount of impact and fill in waters, and including a site map. Although the nationwide process is fairly simple, Corps approval must be obtained before commencing any work in waters of the United States. Regional general permits are specific to each Corps district office. Individual permits are generally required for projects that impact greater than 0.5 acres (2,000 m²) of waters of the United States, and are required for activities that result in more than minimal impacts to the aquatic environment. [ citation needed ]
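The permit triage just described can be summarized in a short sketch. The three tracks, the rough 80% share taken by nationwide permits, the 0.5-acre threshold, and the avoid-minimize-mitigate sequence come from the text above; the function itself, its inputs, and the collapsing of a full jurisdictional review into a few flags are simplifying assumptions for illustration, not the Corps' actual procedure.

```python
# Minimal sketch of the three Corps permit tracks described above. The
# 0.5-acre threshold and track names follow the text; everything else is
# an illustrative simplification, not the actual regulatory procedure.

def likely_permit_track(jurisdictional: bool, minimal_impact: bool,
                        covered_by_nationwide: bool, impact_acres: float) -> str:
    """Suggest which permit track a proposed discharge would likely follow."""
    if not jurisdictional:
        return "No section 404 permit required (waters not jurisdictional)"
    if minimal_impact and covered_by_nationwide:
        # Roughly 80% of permits; the applicant must meet the nationwide
        # permit's terms, sometimes filing a preconstruction notification.
        return "Nationwide permit"
    if minimal_impact and impact_acres <= 0.5:
        # District-specific authorizations for other minimal-impact work.
        return "Regional general permit"
    # More-than-minimal impacts (generally including those over 0.5 acre)
    # receive individual review under the 404(b)(1) sequence: avoid
    # impacts, then minimize them, then mitigate what remains.
    return "Individual permit"

print(likely_permit_track(True, True, True, 0.1))    # -> Nationwide permit
print(likely_permit_track(True, False, False, 2.0))  # -> Individual permit
```

In practice the determination runs in exactly the order the text gives: the district establishes jurisdiction first, and only then does the question of permit type and impact sequencing arise.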
The Corps of Engineers has two research organizations, the Engineer Research and Development Center (ERDC) and the Army Geospatial Center (AGC). ERDC provides science, technology, and expertise in engineering and environmental sciences to support both military and civilian customers. AGC coordinates, integrates, and synchronizes geospatial information requirements and standards across the Army and provides direct geospatial support and products to warfighters. See also Geospatial Information Officer. The Corps of Engineers branch insignia, the Corps Castle, is believed to have originated on an informal basis: in 1841, cadets at West Point wore insignia of this type. In 1902, the Castle was formally adopted by the Corps of Engineers as branch insignia. [ 50 ] The "castle" is actually the Pershing Barracks at the United States Military Academy in West Point, New York. [ 51 ] A current tradition was established with the "Gold Castles" branch insignia of General of the Army Douglas MacArthur, West Point Class of 1903, who served in the Corps of Engineers early in his career and had received the two pins as a graduation gift from his family. In 1945, near the conclusion of World War II, General MacArthur gave his personal pins to his Chief Engineer, General Leif J. Sverdrup. On 2 May 1975, upon the 200th anniversary of the Corps of Engineers, retired General Sverdrup, whose civil engineering projects included the landmark 17-mile (27 km) Chesapeake Bay Bridge-Tunnel, presented the Gold Castles to then-Chief of Engineers Lieutenant General William C. Gribble, Jr., who had also served under General MacArthur in the Pacific. General Gribble then announced a tradition of passing the insignia along to future Chiefs of Engineers, and it has been passed along ever since. [ 52 ] Some of the Corps of Engineers' civil works projects have been characterized in the press as pork barrel projects or boondoggles, such as the New Madrid Floodway Project and the New Orleans flood protection. [ 53 ] [ 54 ] Projects have allegedly been justified based on flawed or manipulated analyses during the planning phase. Some projects are said to have created profound detrimental environmental effects or provided questionable economic benefit, such as the Mississippi River–Gulf Outlet in southeast Louisiana. [ 55 ] Faulty design and substandard construction have been cited in the failure of levees in the wake of Hurricane Katrina, which caused flooding of 80% of the city of New Orleans. Review of Corps of Engineers projects has also been criticized for a lack of impartiality: the investigation of levee failure in New Orleans during Hurricane Katrina was sponsored by the American Society of Civil Engineers (ASCE) but funded by the Corps of Engineers and involved its employees. [ 56 ] [ 57 ] Corps of Engineers projects can be found in all 50 states, [ 58 ] and are specifically authorized and funded directly by Congress. Local citizen, special interest, and political groups lobby Congress for authorization and appropriations for specific projects in their area. [ 59 ] Senator Russ Feingold and Senator John McCain sponsored an amendment to the Water Resources Development Act of 2006 requiring peer review of Corps projects, [ 60 ] proclaiming it part of "efforts to reform and add transparency to the way the U.S. Army Corps of Engineers receives funding for and undertakes water projects." A similar bill, the Water Resources Development Act of 2007, which included the text of the original Corps peer review measure, was eventually passed by Congress in 2007, overriding a presidential veto. [ 61 ]
A number of Army camps and facilities designed by the Corps of Engineers, including the former Camp O'Ryan in New York State, have reportedly had a negative impact on the surrounding communities. Camp O'Ryan, with its rifle range, has possibly contaminated well and storm runoff water with lead. This runoff water eventually runs into the Niagara River and Lake Ontario, sources of drinking water for millions of people. The situation is exacerbated by a failure to locate the engineering and architectural plans for the camp, which were produced by the New York District in 1949. [ 62 ] [ 63 ] Bunnatine "Bunny" Greenhouse, a former high-ranking official in the Corps of Engineers, won a lawsuit against the United States government in July 2011. Greenhouse had objected to the Corps accepting cost projections from KBR in a no-bid, noncompetitive contract. After she complained, Greenhouse was demoted from her Senior Executive Service position, stripped of her top secret security clearance, and, according to Greenhouse, even had her office booby-trapped with a trip-wire from which she sustained a knee injury. A U.S. district court awarded Greenhouse $970,000 in full restitution of lost wages, compensatory damages, and attorney fees. [ 64 ]
https://en.wikipedia.org/wiki/United_States_Army_Corps_of_Engineers
The United States Army Gas School was established during World War I at Camp A.A. Humphreys in Virginia. The first courses began in May 1918, and the school was designed to instruct commissioned and noncommissioned officers in chemical warfare. In late October 1917 the War College was badly underprepared for large-scale chemical warfare in World War I. With many U.S. soldiers operating in a chemical environment with no knowledge of chemical warfare, the War College requested British gas officers and NCOs, and the requests were granted. The British experts arrived and were directed and coordinated by Major S.J.M. Auld, who was tasked with composing a "working textbook on gas" for the U.S. Army. [ 1 ] Among Auld's recommendations was an idea the General Staff had already been considering: the establishment of a central Army Gas School. As a result of Auld's suggestion, the Army Gas School was established at Camp A.A. Humphreys, Virginia. The school, beginning in May 1918, offered two initial courses. One was a four-day class on general information about gas warfare, offered to commissioned and noncommissioned officers. The second was a 12-day affair for Chief Gas Officers that went into greater detail about chemical warfare. [ 1 ] Upon the school's establishment, Ross A. Baker was given charge of training for Chief Gas Officers. Baker had been the Chief Gas Officer at Camp Pike in Arkansas and a professor of chemistry at the University of Minnesota before taking the post at Camp Humphreys. [ 2 ] Later, in October, the entire Army Gas School operation was transferred to Camp Kendrick. [ 3 ]
https://en.wikipedia.org/wiki/United_States_Army_Gas_School
The United States Army Medical Research Institute of Chemical Defense (USAMRICD) is a military medical research institute located at Aberdeen Proving Ground, Maryland, US. It is the leading science and technology laboratory of the Department of Defense for the development, testing, and evaluation of medical chemical warfare countermeasures, [ 1 ] including therapies and materials to treat casualties of chemical warfare agents. The mission of USAMRICD, known in the 1950s and 1960s as the Medical Research Laboratories, includes fundamental and applied research in the pharmacology, physiology, toxicology, pathology, and biochemistry of chemical agents and their medical countermeasures. In addition to research, the Institute, in partnership with the United States Army Medical Research Institute of Infectious Diseases (USAMRIID), educates health care providers in the medical management of chemical and biological agent casualties. The USAMRICD supports a Chemical/Biological Rapid Response Team (C/B-RRT), supports and trains Area Medical Laboratory (formerly Theater Area Medical Laboratory) personnel, and maintains a chemical surety facility. USAMRICD is a subsidiary of the United States Army Medical Research and Materiel Command (USAMRMC) and houses more than 300 employees, including researchers and support personnel. [ 2 ] The United States Army Center for Environmental Health Research, at Fort Detrick, Maryland, is part of USAMRICD.
https://en.wikipedia.org/wiki/United_States_Army_Medical_Research_Institute_of_Chemical_Defense
The United States Drought Monitor is a collection of measures that allows experts to assess droughts in the United States. The monitor is not an agency but a partnership between the National Drought Mitigation Center at the University of Nebraska-Lincoln, the United States Department of Agriculture, and the National Oceanic and Atmospheric Administration. Different experts provide their best judgment to outline a single map every week that shows droughts throughout the United States. The effort started in 1999 as a federal, state, and academic partnership, growing out of an initiative by the Western Governors' Association to provide timely and understandable scientific information on water supply and drought for policymakers. The monitor is produced by a rotating group of authors and incorporates review from a group of 250 climatologists, extension agents, and others across the nation. Each week the authors revise the previous map based on rainfall, snowfall, and other events, as well as observers' reports of how drought is affecting crops, wildlife, and other indicators. The authors balance conflicting data and reports to come up with a new map every Wednesday afternoon; the map is then released on the following Thursday morning.
https://en.wikipedia.org/wiki/United_States_Drought_Monitor
The Subcommittee on Railroads, Pipelines, and Hazardous Materials is a subcommittee within the House Transportation and Infrastructure Committee. [ 1 ] The Subcommittee oversees the regulation of railroads by the Surface Transportation Board, including economic regulations; Amtrak; rail safety; the Federal Railroad Administration; and the National Mediation Board, which handles railway labor disputes. It also oversees the Pipeline and Hazardous Materials Safety Administration within the U.S. Department of Transportation, which is responsible for the safety of the nation's oil and gas pipelines as well as the transportation of hazardous materials. [ 1 ]
https://en.wikipedia.org/wiki/United_States_House_Transportation_Subcommittee_on_Railroads,_Pipelines,_and_Hazardous_Materials
The United States Marine Mammal Program is an organization developed by the United States National Committee and the International Marine Mammal Working Group of the International Biological Program in 1969 for the study of marine mammals. [ 1 ] The program was directed by an eleven-member body called the Marine Mammal Council (MMC), which was appointed by the United States Marine Mammal Working Group. Daily operations were overseen by a four-member executive committee named by the MMC. Notable activities of the MMC included aiding in the development of a Marine Mammal Study Center at the Smithsonian, publishing the Marine Mammal Newsletter, sponsoring marine mammal conferences, and coordinating existing and new research through its research program. [ 2 ]
https://en.wikipedia.org/wiki/United_States_Marine_Mammal_Program
The United States National Chemistry Olympiad (or USNCO) is a contest held by the American Chemical Society (ACS) used to select the four-student team that represents the United States at the International Chemistry Olympiad (IChO). To qualify for the National Exam, students must first take a local exam; approximately 10,000 U.S. students sit for the local exam each year. Each local ACS section then selects 10 students (or more for larger ACS sections) to take the USNCO National Exam, and more than 1,000 students qualify to take it annually. [ 1 ] The National Exam consists of three parts. The first part contains 60 multiple-choice questions, each with four answer choices. The questions are loosely grouped into 10 sets of 6 items; each set corresponds to a different chemistry topic. Typically, the topics are, in order: descriptive chemistry/laboratory techniques, stoichiometry, gases/liquids/solids, thermodynamics, kinetics, equilibrium, electrochemistry, electronic structure/periodic trends, bonding theories, and organic chemistry. There is no penalty for guessing; a student's score is equal to the number of questions answered correctly (a rule illustrated in the short sketch at the end of this article). One and a half hours (90 minutes) are allotted for this first part. The second part contains 8 free-response questions. Complete written explanations and calculations are required for full credit on a question, and partial credit is awarded. More thorough knowledge of basic theories is required, and often there are questions on less-emphasized portions of normal high school chemistry curricula, such as organic chemistry and coordination chemistry. One hour and 45 minutes (105 minutes) are allowed for this section. The topics of the questions in recent Part IIs have usually followed this pattern: Q1, stoichiometry; Q2, equilibrium; Q3 and Q4, assorted (typically thermodynamics, electrochemistry, or kinetics); Q5, prediction of chemical reaction products, including acid-base as well as redox reactions (part F is usually on radioisotope decay); Q6, assorted (any topic not covered elsewhere); Q7, bonding; and Q8, organic chemistry. Beginning in 1994, a lab practical was added to the National Exam. It contains two tasks to be performed by each student with only the specified materials, and students are expected to describe their procedures and organize their findings. Past tasks have included chromatography, titration, and qualitative analysis, and 90 minutes are allotted to complete the two experiments. The top 20 scorers on the USNCO National Exam are invited to participate in the two-week USNCO Study Camp at the University of Maryland in College Park, Maryland. At the camp, the students are tested (both free-response and lab testing), and the top four students are selected to form the U.S. IChO team. Two alternates are also selected, although no alternate has ever actually been called up for duty. In addition, the top 50 students are recognized as achieving "High Honors" or "t50", and the next 100 students earn the "Honors" or "t150" designation. The purpose of the USNCO is to stimulate all young people to achieve excellence in chemistry, so the focus of the exam is not necessarily to select the top twenty students but instead to present a wide selection of basic questions. The scope of the USNCO is therefore different from what would be expected at the training camp or at IChO.
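Because Part I carries no guessing penalty, its scoring reduces to a bare count of correct answers, as the sketch below illustrates. The function and the answer data are hypothetical stand-ins, not any official ACS grading tool.

```python
# Part I of the USNCO National Exam: 60 multiple-choice questions, no
# guessing penalty, score = number correct. Hypothetical illustration only;
# the answer data below is made up.

def part1_score(responses: list[str], key: list[str]) -> int:
    """Count correct answers; blanks or wrong answers simply add nothing."""
    return sum(r == k for r, k in zip(responses, key))

key = ["A", "C", "B", "D"] * 15       # stand-in key for 60 questions
responses = key[:50] + ["A"] * 10     # student sure of 50, guessing "A" on the rest
print(part1_score(responses, key))    # 52: the 50 known answers plus 2 lucky guesses
```

Under this rule a blank and a wrong answer are worth the same, which is why guessing on every unanswered item can only help a student's score.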
https://en.wikipedia.org/wiki/United_States_National_Chemistry_Olympiad
Irradiated mail is mail that has been deliberately exposed to radiation, typically in an effort to disinfect it. The most notable instance of mail irradiation in the US occurred in response to the 2001 anthrax attacks; the level of radiation chosen to kill anthrax spores was so high that it often changed the physical appearance of the mail. [ 1 ] The United States Postal Service began to irradiate mail in November 2001, in response to the discovery of large-scale contamination at several of its facilities that handled the letters that were sent in the attacks. A facility in Bridgeport, New Jersey, operated by Sterigenics International, uses a Rhodotron continuous-wave electron beam accelerator, built by IBA Industrial, to irradiate the mail. A few facilities were planning to use cobalt-60 sources, though it is unclear whether this was ever done. The USPS warned that a number of products could be adversely affected, such as seeds, photographic film, biological samples, food, medicines, and electronic equipment. In the process of irradiation, mail is exposed to extreme heat. Paper is weakened and may appear to have been aged, with discoloration (e.g., yellowing) and brittleness. Pages may break, crumble, or fuse to other pages. Documents bound with glue may have loose pages. The printing on pages may be distorted or offset onto adjacent pages. If tape is affixed to address labels, the address may be illegible. [ 2 ] Irradiation's effects on paper caused some alarm in the philatelic world, which sends large numbers of rare postage stamps and covers through the mail. A number of auction houses stopped sending material through the mail, and Linn's Stamp News in 2002 featured reports on stamps and covers that had been ruined by irradiation. [ 3 ] Although at one time the USPS expected to irradiate all mail, it later scaled back to treating only mail sent to government offices, including all mail directed to the White House, Congress, and the Library of Congress. [ 4 ]
https://en.wikipedia.org/wiki/United_States_Postal_Service_irradiated_mail
The United States Radium Corporation was a company most notorious for its operations between 1917 and 1926 in Orange, New Jersey, which led to stronger worker protection laws in the United States. After initial success in developing a glow-in-the-dark radioactive paint, the company was subject to several lawsuits in the late 1920s in the wake of severe illnesses and deaths of workers (the Radium Girls) who had ingested radioactive material. The workers had been told that the paint was harmless. [ 1 ] During World War I and World War II, the company produced luminous watches and gauges for the United States Army for use by soldiers. [ 2 ] U.S. Radium workers, especially the women who painted the dials of watches and other instruments with luminous paint, suffered serious radioactive contamination. Lawyer Edward Markley was in charge of defending the company in these cases. [ 1 ] The company was founded in 1914 in New York City by Dr. Sabin Arnold von Sochocky and Dr. George S. Willis as the Radium Luminous Material Corporation. The company produced uranium from carnotite ore and eventually moved into the business of producing radioluminescent paint, and then into the application of that paint. Over the next several years, it opened facilities in Newark, Jersey City, and Orange. In August 1921, von Sochocky was forced from the presidency and the company was renamed the United States Radium Corporation; [ 3 ] Arthur Roeder became its president. [ 4 ] In Orange, where radium was extracted from 1917 to 1926, the U.S. Radium facility processed half a ton of ore per day. [ 3 ] The ore was obtained from "Undark mines" in Paradox Valley, Colorado, and in Utah. A notable employee from 1921 to 1923 was Victor Francis Hess, who would later receive the Nobel Prize in Physics. [ 5 ] The company's luminescent paint, marketed as Undark, was a mixture of radium and zinc sulfide; the radiation caused the sulfide to fluoresce. During World War I, demand for dials, watches, and aircraft instruments painted with Undark surged, and the company expanded operations considerably. The delicate task of painting watch and gauge faces was done mostly by young women, who were instructed to maintain a fine tip on their paintbrushes by licking them. [ 6 ] At the time, the dangers of radiation were not well understood. Around 1920, a similar radium dial business, known as the Radium Dial Company, a division of the Standard Chemical Company, opened in Chicago. It soon moved its dial painting operation to Ottawa, Illinois, to be closer to its major customer, the Westclox Clock Company. Several workers died, and the health risks associated with radium were allegedly known, but this company continued dial painting operations until 1940. U.S. Radium's management and scientists took precautions such as masks, gloves, and screens, but did not similarly equip the workers. Unbeknownst to the women, the paint was highly radioactive and therefore carcinogenic. The ingestion of the paint by the women, brought about by licking the brushes, resulted in a condition called radium jaw (radium necrosis), a painful swelling and porosity of the upper and lower jaws that ultimately led to many of their deaths. This led to litigation against U.S. Radium by the so-called Radium Girls, starting with former dial painter Marguerite Carlough in 1925. The case was eventually settled in 1926, and several more suits were brought against the company in 1927 by Grace Fryer and Katherine Schaub.
The company did not stop the hand painting of dials until 1947. [ 3 ] The company struggled after World War I: the loss of military contracts sharply reduced demand for luminescent paint and dials, and in 1922, high-grade ore was discovered in Katanga, driving all U.S. suppliers out of business except U.S. Radium and the Standard Chemical Company. U.S. Radium consolidated its operations in Manhattan in 1927, leasing out the Orange plant and selling off other property. But demand for luminescent products surged again during World War II; by 1942, the company employed as many as 1,000 workers, and in 1944 it was reported to have radium mining, processing, and application facilities in Bloomsburg, Pennsylvania; Bernardsville, New Jersey; Whippany, New Jersey; and North Hollywood, California, as well as New York City. [ 3 ] In 1945 the Office of Strategic Services enlisted the company's help for tests of a psychological-warfare scheme to release foxes daubed with glowing paint in Japan. [ 7 ] After the war came another period of retrenchment: not only did military supply contracts end, but luminous dial manufacturing shifted to promethium-147 and tritium. Radium mining in Canada also ceased in 1954, driving up supply costs. In that year, the company consolidated its operations at facilities in Morristown, New Jersey, and South Centre Township, east of Bloomsburg, Pennsylvania. In Bloomsburg, it continued to produce luminescent items using radium, strontium-90, and cesium-137, such as watch dials, instrument gauge faces, deck markers, and paint. [ 8 ] It ceased radium processing altogether in 1968, spinning off those operations as Nuclear Radiation Development Corporation, LLC, based in Grand Island, New York. The following year, a new facility at the Bloomsburg plant opened for the manufacturing of "tritiated metal foils and tritium activated self-luminous light tubes," [ 9 ] and the company switched focus to the manufacture of glow-in-the-dark exit and aircraft signs using tritium. Starting in 1979, the company underwent an extensive reorganization. A new corporation, Metreal, Inc., was created to hold the assets of the Bloomsburg plant. Manufacturing operations were subsequently moved into new wholly owned subsidiary corporations: Safety Light Corporation, USR Chemical Products, USR Lighting, USR Metals, and U.S. Natural Resources. Finally, in May 1980, U.S. Radium created a new holding company, USR Industries, Inc., and merged itself into it. [ 10 ] The Safety Light Corporation, in turn, was sold to its management and spun off as an independent entity in 1982. [ 11 ] Tritium-illuminated signs were marketed under the name Isolite, which also became the name of a new subsidiary created to market and distribute Safety Light Corporation's products. In 2005, the Nuclear Regulatory Commission declined to renew the licenses for the Bloomsburg facility, [ 12 ] and shortly thereafter the EPA added the Bloomsburg facility to the National Priorities List for remediation through Superfund. All tritium operations at the plant ceased by the end of 2007. [ 13 ] The chief medical examiner of Essex County, New Jersey, Harrison Stanford Martland, MD, published a report in 1925 that identified the radioactive material the women had ingested as the cause of their bone disease and aplastic anemia, and ultimately their deaths. [ 2 ] Illness and death resulting from ingestion of radium paint, and the subsequent legal action taken by the women, forced closure of the company's Orange facility in 1927.
The case was settled out of court in 1928, but not before a substantial number of the litigants were seriously ill or had died from bone cancer and other radiation-related illnesses. [ 14 ] The company, it was alleged, deliberately delayed settling the litigation, leading to further deaths. In November 1928, Dr. von Sochocky, the inventor of the radium-based paint, died of aplastic anemia resulting from his exposure to the radioactive material, "a victim of his own invention." [ 15 ] The victims were so contaminated that radiation could still be detected at their graves in 1987 using a Geiger counter. [ 6 ] The company processed about 1,000 pounds of ore daily while in operation, and the waste was dumped on the site. The radon and radiation emanating from the 1,600 tons of material at the abandoned factory led to the site's designation as a Superfund site by the United States Environmental Protection Agency in 1983. [ 16 ] From 1997 through 2005, the EPA remediated the site in a process that involved the excavation and off-site disposal of radium-contaminated material at the former plant site and at 250 residential and commercial properties that had been contaminated in the intervening decades. [ 17 ] [ 18 ] In 2009, the EPA wrapped up its long-running Superfund cleanup effort. [ 19 ] [ 20 ]
https://en.wikipedia.org/wiki/United_States_Radium_Corporation
The United States Society on Dams is a professional association headquartered in Westminster, Colorado. [ 1 ]
https://en.wikipedia.org/wiki/United_States_Society_on_Dams
The United States Telephone Herald Company, founded in 1909, was the parent corporation for a number of associated "telephone newspaper" companies located throughout the United States that were organized to provide news and entertainment over telephone lines to subscribing homes and businesses. This was the most ambitious attempt to develop a distributed audio service prior to the rise of radio broadcasting in the early 1920s. At least a dozen associate companies were chartered, but despite initial optimism and ambitious goals, only two systems ever went into commercial operation: one based in Newark, New Jersey (New Jersey Telephone Herald, 1911-1912), and the other in Portland, Oregon (Oregon Telephone Herald, 1912-1913). Moreover, both of these systems were shut down after operating for only a short time, due to economic and technical issues. Corporation activity peaked in 1913, but the lack of success caused the company to suspend operations, and the corporate charter of the United States Telephone Herald Company was repealed in early 1918. The United States Telephone Herald Company was an authorized offshoot of the Telefon Hírmondó audio service of Budapest, Hungary. The Telefon Hírmondó programming, transmitted to subscribers over telephone wires, consisted of an extensive selection of news during the day, followed by instruction and entertainment during the evening. This "news-teller" service began operation in February 1893, shortly before the death of its inventor, Tivadar Puskás. Following a visit to Hungary, Cornelius Balassa procured the U.S. patent rights to the Telefon Hírmondó technology. (Later reports state that the company also held the rights for Canada and Great Britain. Another group obtained the rights for Italy, where in 1910 they established the service under the name Araldo Telefonico.) The formation of a parent U.S. company, initially operating under a New York state charter as the "Telephone Newspaper Company of America", was announced in October 1909, with organizing directors Manley M. Gillam (president), William H. Alexander (secretary and treasurer), and Cornelius Balassa, all of New York City. [ 2 ] In March 1910, the parent company was reorganized as the "United States Telephone Herald Company", now operating as a Delaware-chartered corporation. [ 3 ] An initial demonstration transmission was given at the company headquarters, located at 110 West Thirty-fourth Street in New York City, in early September 1910. [ 4 ] The equipment used was similar to that employed in Budapest. As in Hungary, announcers were called "stentors", and because vacuum tube technology had not yet been developed, there were limited methods of amplification; to compensate, the stentors had to speak as loudly as possible into oversized dual microphones. The lack of amplification also meant that subscribers needed to listen through headphones instead of loudspeakers. On February 14, 1911, U.S. patent 984,235, describing "a telephone system ... adapted for supplying innumerable subscribers ... general news, musical compositions, and operas, sermons, correct or standard time and other happenings at stated intervals of day and night," was granted to Árpád Németh and assigned to the company. Regionally based Telephone Herald affiliates were authorized for the purpose of creating local "telephone newspaper" systems, with "the parent company to receive a royalty on every instrument installed".
[ 5 ] But ultimately the parent company and its affiliates proved financially unsuccessful, and the United States Telephone Herald Company began winding down operations. Its Delaware business charter was repealed on January 28, 1918, for failure to pay state corporate taxes for two years. [ 6 ] At least twelve Telephone Herald associate companies were formed, although in only two cases was a telephone newspaper service successfully launched: the New Jersey Telephone Herald (1911-1912) and the Oregon Telephone Herald (1912-1913). (In some cases the service was also referred to as the "telectrophone".) Each associate company was established by local owners operating under a state-granted business charter. Publicity for these services commonly stated that telephone newspaper subscriptions would cost 5 cents a day. (For comparison, at this time a copy of the daily Oregonian newspaper in Portland also cost 5 cents.) However, a majority of the associate companies got no further than the promotion or demonstration stages. In addition to the twelve known associate companies, early company publicity stated that installations would also be set up in Chicago, [ 7 ] Scranton, Pennsylvania, [ 8 ] and Montreal, Canada, [ 9 ] but systems do not appear to have been established at any of these locations. Also, a December 15, 1912, advertisement for the Central California Telephone Herald listed associate Telephone Herald companies at New York, Chicago, New Orleans, Seattle-Tacoma, Columbia (British Columbia and Alberta, Canada), Los Angeles, and Oakland, [ 10 ] but there is no information for these either. Of the two Telephone Herald affiliates which launched commercial services, the New Jersey Telephone Herald was both the first and the most publicized. The company had been incorporated in October 1910 in the state of New Jersey by Eugene Gorenflo, Duncan McIsaac, and Nicholas J. Surgess. The original plan was to begin operations in March 1911; however, the New York Telephone Company, which operated the Newark telephone system franchise, initially refused to lease telephone lines to the Telephone Herald, on the grounds that its charter did not permit it. It required a ruling by the Public Utility Commission to compel the telephone company to provide the needed wires. [ 5 ] The hiring process for the stentors, who worked as the news readers, was competitive and rigorous. According to one of the original stentors, the position was restricted to "college men" with strong voices and extensive vocabularies. Throughout the day four stentors announced in rotating shifts of 15 minutes each, and did staff work when they were not announcing. [ 12 ] An ambitious daily service, closely patterned after the Telefon Hírmondó, was launched in October 1911, [ 13 ] transmitting to fifty receivers located in a department store waiting room plus five hundred Newark homes. The company's central offices, studio, and switch rooms were located in the Essex Building on Clinton Street in Newark. Condit S. Atkinson, who had extensive newspaper experience, headed the service's news department. [ 14 ] The company reported that there were many persons eager to sign up with the innovative service, and it soon had more potential subscribers than could be supported. One young listener later remembered that "it was a great thrill to pick up the small receiver and hear a voice telling about world events", moreover, "It was such a novelty that I could scarcely wait to get home from school and listen to it.
It fascinated me. I would listen as long as it was operative, or until I was called to do my homework." [ 15 ] One of the program features was a series of original "Trippertrot" stories, written by local children's author Howard R. Garis, which were later assembled into two book collections: Three Little Trippertrots and Three Little Trippertrots on Their Travels, published in 1912. Despite the enthusiastic response, the company soon ran into serious technical and financial difficulties. A revenue crisis, which resulted in employees walking off the job over missed paychecks, forced the suspension of the service in late February 1912. [ 5 ] A replenishment of funding brought a temporary revival in late May, with the primary company officials now consisting of Percy Pyne (president), William E. Gunn (vice-president and general manager), and C. E. Danforth (secretary-treasurer), with C. S. Atkinson resuming his editor functions. [ 16 ] The service, now calling itself the "telectrophone", was relaunched in November; [ 17 ] however, this was only a temporary respite, and the telephone newspaper transmissions shut down for good at the end of the year. [ 18 ] A later review suggested that the primary issue was technical, as the twisted-pair phone lines used for the Newark operation had different electrical characteristics from the wiring used by the original Telefon Hírmondó plant. [ 19 ] Following the permanent suspension of services, the New Jersey Telephone Herald's business charter was declared null and void on January 18, 1916. [ 20 ] Although less well known than the New Jersey affiliate, the second Telephone Herald company to implement an ongoing telephone newspaper service was the Oregon Telephone Herald Company. But like its predecessor, it too soon faced financial difficulties and was short-lived. The company was incorporated in Oregon and headquartered at 506 Royal Building (Seventh and Morrison) in Portland. Extensive demonstrations were begun in May 1912, and advertisements the next month said commercial service would start "around October 1st". [ 21 ] A January 1913 solicitation for home subscribers for "The Talking Newspaper and Amusement Purveyor" listed the hours of operation as 8:00 AM to midnight. [ 22 ] Later advertisements referred to the service as the "Te-Lec-Tro-Phone", and April saw the introduction of reporting on local Portland Beavers baseball games. [ 23 ] (In December, a Northwestern League representative complained that the service had hurt attendance, and supported "the ousting of the various telephone herald and signalling systems from the ball parks".) A promotion the following month offered the chance to hear election results for free at twenty-five business sites. [ 24 ] In May, the Portland Hotel advertised that diners could listen to "the latest baseball, business and other news by Telephone-Herald" with their meals. [ 25 ] There appears to have been a company reorganization in early 1913, and in March two representatives from the parent company, including chief electrical engineer Árpád Németh, were reported in town to give technical advice. [ 26 ] But as with its New Jersey predecessor, the Portland enterprise was in trouble. During the summer, Oregon Corporation Commissioner R. A.
Watson stepped in and, under provisions of the state's "Blue Sky Law", barred the Oregon Telephone Herald from doing business, [ 27 ] stating that "There was no question about the honesty of this concern, but the scheme isn't practical, and while it might be popular for a short time it would be a failure in the end; therefore, we refused them a permit to sell stock." [ 28 ] The final advertisements for the company appeared in June 1913, and the state corporation charter was terminated on January 16, 1917, for failure to file statements or pay fees for two years. None of the other ten Telephone Herald associate companies launched their proposed telephone newspaper systems, although there were widely varying levels of plans and activities. One associate company was incorporated in Massachusetts on April 23, 1913, by Ladislaus de Doory (president), John M. Grosvenor, Jr. (treasurer), and Jesse W. Morton. De Doory was also a promoter with the parent company. [ 29 ] However, the syndicate does not appear to have made any demonstrations or other significant development, and its business charter was dissolved on February 21, 1916. [ 30 ] Another was the first of two companies headquartered in San Francisco, California; it received a California business charter in February 1911, issued to G. S. Holbrook, W. A. Whelan, W. B. Heckmann, A. H. Vorrath, R. M. Graham, R. Boreman, William T. Newverth, A. C. Gould, and A. Jacoby. [ 31 ] Demonstrations were conducted in September 1911 at 821, 822, 823 Head Building, [ 32 ] but no further progress appears to have been made, and the company charter was declared forfeited and repealed for non-payment of taxes in February 1915. [ 33 ] Another affiliate was incorporated in California in February 1912 by C. J. Ward, F. W. Bresse, and S. H. Whisner. Demonstrations were begun on April 1, 1912, from the 4th floor of the Elks' Building. [ 34 ] In late 1913, provisions were made to lease a local theater, the Diepenbrock, to serve as a source of programming. [ 35 ] However, there was an abrupt change of plans, and instead the owners decided to merge their operations with the Pacific Telephone Herald Company. [ 36 ] The telephone newspaper service never became operational, either before or after the Pacific Telephone Herald merger, and the company's charter was declared forfeited and repealed for non-payment of taxes in February 1915. [ 37 ] Another company was chartered in Delaware in August 1912 by R. R. Cooling, C. J. Jacobs, and H. W. Davis of Wilmington, Delaware. [ 38 ] There is no additional information, other than the fact that its corporation charter was repealed on January 24, 1916. [ 39 ] [ 40 ] An Oakland-based company was incorporated in California; demonstrations were made at the H. C. Capwell Company's store beginning in February 1913 and lasted through at least April. The original corporate offices were at 303-304-305 Union Savings Bank Building in Oakland. A later incorporation, by C. F. Homer (president), Charles Smith (treasurer), and B. F. Hews (editor), at 1751 Franklin Street, was reported in the fall of 1913. [ 41 ] The company's secretary-treasurer, J. Whited, died in September 1913, [ 42 ] and the company president, William Angus, was killed in a mining accident in October 1914. In addition, the Pacific Telephone and Telegraph Company initially refused to lease telephone lines to the company, resulting in a complaint filed before the Railroad Commission of the State of California. [ 43 ] This case was dismissed on November 5, 1913, after the two sides reached a settlement.
[ 44 ] In November 1913, a major expansion was announced, with the purchase from the parent company of the rights to operate in twenty-one western U.S. states and "the greater part of southwestern Canada", [ 45 ] and the next month saw a merger with the Central California Telephone Herald Company of Sacramento. [ 36 ] But despite the ambitious expansion plans, it does not appear that any regular service was ever established, and the state business charter was forfeited on March 4, 1916, for failure to pay the state license tax. [ 46 ] [ 47 ] Another company was incorporated in the state of Delaware on December 30, 1911, by Frank Vernon, Ivor B. Blaiberg, and Albert D. Miller. (The founders were also reported as E. B. Waples, W. W. Day and F. R. Janvier.) [ 48 ] Although a limited amount of corporate activity was reported in 1912–1913, [ 49 ] [ 50 ] nothing of significance appears to have resulted. The corporation charter was repealed on January 25, 1917, for two years of unpaid taxes. [ 51 ] [ 52 ] Another company was incorporated in California in August 1911 by H. A. Schmidt (president), M. N. Schmidt and G. Stephens. [ 53 ] The principals were inspired by the reported success of the concept in San Francisco, and promotional advertisements were run in the fall of 1911, [ 54 ] but the enterprise never began operations, and the state charter was forfeited on November 30, 1912. [ 55 ] The second San Francisco-based Telephone Herald company, following the earlier California Telephone Herald Company, was incorporated in the state of California on October 29, 1912, with founding directors W. H. Dohrmann, J. F. Dohrmann, A. J. Beecher, F. W. Beecher, Thomas R. White, C. E. Youngblood, Rudolph Schlueten, Clarence Eppstein and E. L. Manner. Demonstrations were given at 687 Market Street in early January 1913, but no further progress appears to have been made. The corporation charter was forfeited on November 30, 1913, for failure to pay the state license tax. [ 56 ] [ 57 ] One of the first associate companies to be formed was also, due to fraud, one of the first to fail. It was chartered in California in May 1911, led by Peter Archbold Gordon Grimes, who turned out to be a con man. Grimes soon ran off with company funds, and was next seen impersonating an aviator in Hawaii. [ 58 ] The corporation charter was forfeited on November 30, 1911, for failure to pay its state license tax. [ 59 ] Another affiliate, capitalized with $500,000 of common stock, was chartered in the state of Washington in June 1911. The officers were Sherwood Gillespy, president; B. J. Klarman, vice president; and N. R. Solner, secretary and treasurer. [ 60 ] It was also announced that demonstrations were being conducted at the company headquarters at rooms 339, 340, and 341 of the Henry Building. [ 61 ] The company soon faced financial difficulties, and in October was forced into receivership due to a salary dispute. [ 62 ] The company charter was cancelled sometime during the biennial reporting period of October 1, 1912, to September 30, 1914, for failure to pay the annual state license fee. [ 63 ] There were a few other early attempts to set up telephone-based news and entertainment systems in the United States, including the Tellevent, which conducted demonstrations and experimental work in Michigan from 1906 to 1908, and the Automatic Electric Company's Musolaphone, which operated a short-lived entertainment system in Chicago in 1913. However, these efforts were no more successful than the Telephone Herald companies.
Both the Hungarian Telefon Hírmondó and the Italian Araldo Telefonico survived long enough for their operations to be combined with radio broadcasting in the 1920s. The same did not occur in the United States, and the Telephone Herald companies were the last early effort to offer nationwide audio programming over telephone lines. Although the concept of home audio entertainment was attractive to potential U.S. subscribers, the lack of signal amplification and other technical limitations, such as the need to maintain a telephone line infrastructure and the requirement that subscribers listen over headphones, made the technology unprofitable. [ 66 ] Less than a decade after the failure of the Telephone Herald companies, radio broadcasting was developed, with the significant advantage that it dispensed with the need for telephone lines. Moreover, radio programming could be provided free of subscription fees, because selling airtime to advertisers, the financing method most commonly adopted in the United States, provided sufficient revenue for ongoing operations.
https://en.wikipedia.org/wiki/United_States_Telephone_Herald_Company
The United States biological defense program — in recent years also called the National Biodefense Strategy — refers to the collective effort by all levels of government, along with private enterprise and other stakeholders, in the United States to carry out biodefense activities. Biodefense is a system of planned actions to counter and reduce the risk of biological threats and to prepare for, respond to, and recover from them if they occur. The National Defense Authorization Act (NDAA) of 2016 required high-level officials across the federal government to create a national biodefense strategy together. As a result, in 2018 the National Biodefense Strategy was released by President Donald J. Trump. In essence, the strategy constitutes the U.S. biological defense program in that it is the official framework providing a "single coordinated effort" for all biodefense activities across the federal government. To execute the strategy, the White House issued a Presidential Memorandum on the Support for National Biodefense, which put in place the specific directives and rules for carrying out the plans written in the strategy. The National Biodefense Strategy elevated natural outbreaks to a vital component of the U.S. biological defense program for the first time, mostly because of the significant risk that natural outbreaks pose to civilian, animal and agricultural populations across the country. [ 1 ] The U.S. biological defense program began as a small defensive effort that paralleled the country's offensive biological weapons development and production program, active from 1943. Organizationally, the medical defense research effort was pursued first (1956–1969) by the U.S. Army Medical Unit (USAMU) and later, after the publicly known discontinuation of the offensive program, by the U.S. Army Medical Research Institute of Infectious Diseases (USAMRIID). Both of these units were located at Fort Detrick, Maryland, where the U.S. Army Biological Warfare Laboratories were headquartered. The current mission is multi-agency, not exclusively military, and is purely to develop defensive measures against bio-agents, as opposed to the former bio-weapons development program. In 1951, due to biological warfare concerns arising from the Korean War, the US Centers for Disease Control and Prevention (CDC) created the Epidemic Intelligence Service (EIS), a hands-on two-year postgraduate training program in epidemiology with a focus on field work. Since the 2001 anthrax attacks, and the consequent expansion of federal bio-defense expenditures, USAMRIID has been joined at Fort Detrick by sister bio-defense agencies of the U.S. Department of Health and Human Services (NIAID's Integrated Research Facility) and the U.S. Department of Homeland Security (the National Biodefense Analysis and Countermeasures Center and the National Bioforensic Analysis Center). These — along with the much older Foreign Disease Weed Science Research Unit of the U.S. Department of Agriculture — now constitute the National Interagency Confederation for Biological Research (NICBR). Broadly defined, the "United States Biological Defense Program" now also encompasses all federal-level programs and efforts to monitor, prevent, and contain naturally occurring infectious disease outbreaks of widespread public health concern. These include efforts to forestall large-scale disasters [ 2 ] such as flu pandemics and other "emerging infections", including novel pathogens and those imported from other countries.
Biological agents have been used in warfare for centuries to produce death or disease in humans, animals, or plants. The United States officially began its offensive biological warfare program in 1941. During the next 28 years, the U.S. initiative evolved into an effective, military-driven research and acquisition program, shrouded in secrecy and, later, controversy. Most research and development was done at Fort Detrick, Maryland, while production and testing of bio-weapons occurred at Pine Bluff, Arkansas, and Dugway Proving Ground (DPG), Utah. Field testing was done secretly and successfully with simulants and actual agents disseminated over wide areas. A small defensive effort always paralleled the weapons development and production program. With the presidential decision in 1969 to halt offensive biological weapons production — and the agreement in 1972 at the international BWC never to develop, produce, stockpile, or retain biological agents or toxins — the program became entirely defensive, with medical and non-medical components. The U.S. biological defense research program exists today, conducting research to develop physical and medical countermeasures to protect service members and civilians from the threat of modern biological warfare. [ 3 ] Both the U.S. bio-weapons ban and the BWC restricted any work in the area of biological warfare to work defensive in nature. In reality, this gives BWC member-states wide latitude to conduct biological weapons research, because the BWC contains no provisions for monitoring enforcement. [ 4 ] [ 5 ] The treaty, essentially, is a gentlemen's agreement amongst members, backed by the long-prevailing thought that biological warfare should not be used in battle. [ 4 ] In recent years certain critics have claimed the U.S. stance on biological warfare and the use of biological agents has differed from historical interpretations of the BWC. [ 6 ] For example, it is said that the U.S. now maintains that Article I of the BWC (which explicitly bans bio-weapons) does not apply to "non-lethal" biological agents. [ 6 ] The previous interpretation was stated to be in line with a definition laid out in Public Law 101–298, the Biological Weapons Anti-Terrorism Act of 1989. [ 7 ] That law defined a biological agent as: [ 7 ] any micro-organism, virus, infectious substance, or biological product that may be engineered as a result of biotechnology, or any naturally occurring or bio-engineered component of any such microorganism, virus, infectious substance, or biological product, capable of causing death, disease, or other biological malfunction in a human, an animal, a plant, or another living organism; deterioration of food, water, equipment, supplies, or material of any kind ... According to the Federation of American Scientists, U.S. work on non-lethal agents exceeds limitations in the BWC. [ 6 ] After World War II, and with the onset of Cold War tensions, the US continued its clandestine wartime bio-weapons program. The Korean War (1950–1953) added justification for continuing the program, as the possible entry of the Soviet Union into the war was feared. Concerns over the Soviet Union were perceived as justified, for the Soviet Union would announce in 1956 that it would use chemical and biological weapons for mass destruction in future wars. [ 8 ] In October 1950, the US Secretary of Defense approved continuation of the program, based largely on the Soviet threat and a belief that the North Korean and Chinese governments would use biological weapons.
[ 9 ] With expansion of the biological warfare retaliatory program, the scope of the defensive program was nearly doubled. Data were obtained on personnel protection, decontamination, and immunization. Early detection research produced prototype alarms for use on the battlefield, but progress was slow, apparently limited by the available technology. [ 10 ] The U.S. Army Medical Unit, under the direction of the U.S. Army Surgeon General, began formal operations in 1956. One of the Unit's first missions was to manage all aspects of Project CD-22, the exposure of volunteers to aerosols containing a pathogenic strain of Coxiella burnetii, the etiologic agent of Q fever. The volunteers were closely monitored, and antibiotic therapy was administered when appropriate. All volunteers recovered from Q fever with no adverse aftereffects. One year later, the Unit submitted to the U.S. Food and Drug Administration an Investigational New Drug application for a Q fever vaccine. In the following decade, the US accumulated significant data on personnel protection, decontamination, and immunization; and, in the offensive program, on the potential for mosquitoes to be used as biological vectors. A new Department of Defense (DoD) Biological and Chemical Defense Planning Board was created in 1960 to establish program priorities and objectives. Preventive approaches toward infections of all kinds were funded under the auspices of biological warfare. As concern increased over the biological warfare threat during the Cold War, so did the budget for the program: to $38 million by fiscal year 1966. [ citation needed ] The U.S. Army Chemical Corps was given the responsibility to conduct biological warfare research for all of the services. [ 11 ] In 1962, the responsibility for the testing of promising biological warfare agents was given to a separate Testing and Evaluation Command (TEC). Depending on the particular program, different test centers were used, such as the Deseret Test Center at Fort Douglas, Utah, the headquarters for the new biological and chemical warfare testing organization. In response to increasing concerns over public safety and the environment, the TEC implemented a complex system of approval for its research programs that included the U.S. Army Chief of Staff, the Joint Chiefs of Staff, the Secretary of Defense, and the President of the United States. During the last 10 years of the offensive research and development program (1959–69), many scientific advances were made that proved that biological warfare was clearly feasible, although dependent on careful planning, especially with regard to meteorological conditions. Large-scale fermentation, purification, concentration, stabilization, drying, and weaponization of pathogenic microorganisms could be done safely. Furthermore, modern principles of biosafety and containment, which have greatly facilitated biomedical research in general, were established at the Fort Detrick laboratories; these principles are still followed throughout the world today. Arnold G. Wedum, M.D., Ph.D., a civilian scientist who was Director of Industrial Health and Safety at Fort Detrick, was the leader in the development of containment facilities. [ 10 ] During the 1960s, the US program underwent a philosophical change, and attention was now directed more towards biological agents that could incapacitate but not kill. In 1964, research programs involved staphylococcal enterotoxins capable of causing food poisoning. Research initiatives also included new therapies and prophylaxis.
Pathogens studied included the agents causing anthrax, glanders, brucellosis, melioidosis, plague, psittacosis, Venezuelan equine encephalitis, Q fever, coccidioidomycosis, and a variety of plant and animal pathogens. [ 12 ] [ 13 ] Particular attention was directed at chemical and biological detectors during the 1960s. The first devices were primitive field alarms to detect chemicals. Although the development of sensitive biological warfare agent detectors was at a standstill, two systems were, nonetheless, investigated. The first was a monitor that detected increases in the number of particles sized 1 to 5 μm in diameter, based on the assumption that a biological agent attack would include airborne particles of this size. The second system involved the selective staining of particles collected from the air. Both systems lacked sufficient specificity and sensitivity to be of any practical use. [ 14 ] But in 1966, a research effort directed at detecting the presence of adenosine triphosphate (a chemical found only in living organisms) was begun. Preliminary studies using a fluorescent material found in fireflies indicated that it was possible to detect the presence of a biological agent in the atmosphere. The important effort to find a satisfactory detection system continues today, for timely detection of a biological attack would allow the attacked force to use its protective masks effectively, and identification of the agent would allow any pre-treatment regimens to be instituted. The US Army also experimented with and developed highly effective barrier protective measures against both chemical and biological agents. Special impervious tents and personal protective equipment were developed, including individual gas masks even for military dogs. [ 10 ] During the late 1960s, funding for the biological warfare program decreased temporarily, to accommodate the accelerating costs of the Vietnam War. The budget for fiscal year 1969 was $31 million, decreasing to $11.8 million by fiscal year 1973. Although the offensive program had been stopped in 1969, both offensive and defensive programs continued to be publicly justified. John S. Foster, Jr., Director of Defense Research and Engineering, responded to a query by Congressman Richard D. McCarthy: It is the policy of the U.S. to develop and maintain a defensive chemical-biological (CB) capability so that our military forces could operate for some period of time in a toxic environment, if necessary; to develop and maintain a limited offensive capability in order to deter all use of CB weapons by the threat of retaliation in kind; and to continue a program of research and development in this area to minimize the possibility of technological surprise. [ 15 ] On 25 November 1969, President Richard Nixon visited Fort Detrick to announce a new policy on biological warfare. In two National Security Memoranda, [ 16 ] [ 17 ] the U.S. government renounced all development, production, and stockpiling of biological weapons and declared its intent to maintain only small research quantities of biological agents, such as are necessary for the development of vaccines, drugs, and diagnostics. Ground was broken in 1967 for the construction of a new, modern laboratory building at Fort Detrick. The building would open in phases during 1971 and 1972. With the disestablishment of the biological warfare laboratories, the name of the U.S. Army Medical Unit, which was to have been housed in the new laboratories, was formally changed to U.S.
Army Medical Research Institute of Infectious Diseases (USAMRIID) in 1969. The institute's new mission was stated in General Order 137, 10 November 1971 (since superseded): Conducts studies related to medical defensive aspects of biological agents of military importance and develops appropriate biological protective measures, diagnostic procedures and therapeutic methods. [ 18 ] The emphasis now shifted away from offensive weapons to the development of vaccines, diagnostic systems, personal protection, chemoprophylaxis, and rapid detection systems. After Nixon declared an end to the U.S. bio-weapons program, debate in the Army centered on whether toxin weapons were included in the president's declaration. [ 19 ] Following Nixon's November 1969 order, scientists at Fort Detrick worked on one toxin, Staphylococcus enterotoxin type B (SEB), for several more months. [ 20 ] Nixon ended the debate when he added toxins to the bio-weapons ban in February 1970. [ 21 ] In response to Nixon's 1969 decision, all antipersonnel biological warfare stocks were destroyed between 10 May 1971 and 1 May 1972. The laboratory at Pine Bluff Arsenal, Arkansas, was converted to a toxicological research laboratory and was no longer under the direction or control of the DoD. Biological anticrop agents were destroyed by February 1973. Biological warfare demilitarization continued through the 1970s, with input provided by the U.S. Department of Health, Education and Welfare; U.S. Department of the Interior; U.S. Department of Agriculture; and the Environmental Protection Agency. Fort Detrick and other installations involved in the biological warfare program took on new identities, and their missions were changed to biological defense and the development of medical countermeasures. The necessary containment capability, Biosafety Levels 3 and 4 (BSL-3 and BSL-4), continued to be maintained at USAMRIID. [ 10 ] In 1984, the DoD requested funds for the construction of another biological aerosol test facility in Utah. The proposal submitted by the army called for BSL-4 containment, while maintaining that the BSL-4 inclusion was based on a possible future need and not on a current research effort. The proposal was not well received in Utah, where many citizens and government officials still recalled the secretive projects of the military: the areas on DPG still contaminated with anthrax spores, and the well-publicized accidental chemical poisoning of a flock of sheep in Skull Valley, Utah, in March 1968. [ 12 ] Questions arose over the safety of the employees and the surrounding communities, and a suggestion was even made to shift all biological defense research to a civilian agency, such as the National Institutes of Health. The plan was revised to call for a BSL-3 facility, but not before the US Congress had instituted more surveillance, reporting, and control measures on the army to ensure compliance with the BWC. [ 10 ] In the 1990s, the US medical biological defense research effort (part of the U.S. Army's Biological Defense Research Program [BDRP]) was concentrated at USAMRIID at Fort Detrick. The army maintained state-of-the-art containment laboratory facilities there, with more than 10,000 ft² of BSL-4 and 50,000 ft² of BSL-3 laboratory space.
BSL-4, the highest containment level, included laboratory suites that were isolated by internal walls and protected by rigorous entry restrictions, air-locks, negative-pressure air-handling systems, and filtration of all out-flow air through high-efficiency particulate air (HEPA) filters. Workers in BSL-4 laboratories also wore filtered positive-pressure total body suits, which isolated them from the internal air of the laboratory. BSL-3 laboratories had a similar design but did not require that personnel wear positive-pressure suits. Workers in BSL-3 suites were protected immunologically by vaccines. U.S. governmental standards provided guidance as to which organisms might be handled under the various containment levels in laboratories such as USAMRIID. [ 22 ] The unique facilities available at USAMRIID also included a 16-bed clinical research ward capable of BSL-3 containment, and a 2-bed patient care isolation suite — the Medical Containment Suite (MCS), known as "The Slammer" — where ICU-level care could be provided under BSL-4 containment. Here, healthcare personnel wore the same positive-pressure suits as were worn in BSL-4 research laboratories. The level of patient isolation required depended on the infecting organism and the risk to healthcare providers. Patient care could be provided at BSL-4. There was no patient-care category analogous to BSL-3; humans who became ill as a result of exposure to BSL-3 agents were to be cared for in an ordinary hospital room with barrier nursing procedures. [ 10 ] USAMRIID guidelines were prepared to determine which level of containment would be employed for individual patients who required BSL-4 isolation or barrier nursing care. Staff augmentation for BSL-4 critical care expertise came from the Walter Reed Army Medical Center (WRAMC), Washington, D.C., in accordance with a memorandum of agreement between the two institutions. Patients could be brought directly into the BSL-4 suite from the outside through specialized ports with unique patient-isolation equipment. (The MCS was decommissioned and discontinued in December 2010.) Additionally, starting in the 1970s, USAMRIID maintained a unique evacuation capability known as the Aeromedical Isolation Team (AIT). Led by a physician and a registered nurse, each of the two teams consisted of eight volunteers who trained intensively to provide an evacuation capability for casualties suspected of being infected with highly transmissible, life-threatening BSL-4 infectious diseases (e.g., hemorrhagic fever viruses). The unit used special adult-sized Vickers isolation units (the Vickers Medical Containment Stretcher Transit Isolator). These units were aircraft transportable and isolated a patient placed inside from the external environment. The AIT could transport two patients simultaneously; it was not designed for a mass-casualty situation. During the 1995 outbreak of Ebola fever in Zaire, the AIT remained on alert to evacuate any US citizens who might have become ill while working to control the disease in that country. [ 10 ] During this period, some biological defense research also continued at the U.S. Army Medical Research Institute of Chemical Defense, Edgewood Arsenal, Maryland, and the Walter Reed Army Institute of Research (WRAIR), Washington, D.C.
USAMRIID and these sister laboratories conducted basic research in support of the medical component of the US biological defense research program, which developed strategies, products, information, procedures, and training for medical defense against biological warfare agents. The products included diagnostic reagents and procedures, drugs, vaccines, toxoids, and antitoxins. Emphasis was placed on protecting personnel before any potential exposure to a biological agent occurred. [ 23 ] In 1997, United States law formally defined weaponizable bio-agents as "Biological Select Agents or Toxins" (BSATs) — or simply Select Agents for short [ 24 ] — which fall under the oversight of either the U.S. Department of Health and Human Services or the U.S. Department of Agriculture (or both) and which have the "potential to pose a severe threat to public health and safety". In 1998, several DoD organizations consolidated to create the Defense Threat Reduction Agency (DTRA), headquartered in Fort Belvoir, Virginia. This agency is the DoD's official Combat Support Agency for countering weapons of mass destruction, including bio-agents. DTRA's main functions are threat reduction, threat control, combat support, and technology development. In the US national interest, DTRA supports projects at more than 14 locations around the world, including Russia, Kazakhstan, Azerbaijan, Uzbekistan, Georgia, and Ukraine. In 1999, a "National Pharmaceutical Stockpile" — renamed the Strategic National Stockpile in 2002 — was created under the oversight of DHHS. In the same year, the Laboratory Response Network — a collaborative effort within the US federal government involving the Association of Public Health Laboratories and the Centers for Disease Control and Prevention — was established to facilitate the confirmatory diagnosis and typing of possible bio-agents. Also in 1999, President Bill Clinton issued Executive Order 13139, which provided for experimental anti-WMD drugs to be given to service members, at the discretion of the Secretary of Defense, only under informed consent; only the President may waive the necessity for informed consent. Three secret DoD projects involving countermeasures against anthrax – code-named Project Bacchus, Project Clear Vision and Project Jefferson – were publicly disclosed by The New York Times in 2001. (The projects were undertaken between 1997 and 2000 and focused on the concern that the old Soviet BW program was secretly continuing and had developed a genetically modified anthrax weapon.) [ 25 ] Since the September 11 attacks and the 2001 anthrax attacks, the US government has allocated nearly $50 billion to address the threat of biological weapons. Funding for bioweapons-related activities focuses primarily on research for, and acquisition of, medicines for defense. Biodefense funding also goes toward stockpiling protective equipment, increased surveillance and detection of bio-agents, and improving state and hospital preparedness. Significant funding goes to BARDA (the Biomedical Advanced Research and Development Authority), part of DHHS. Funding for activities aimed at prevention has more than doubled since 2007 and is distributed among 11 federal agencies. [ 26 ] Efforts toward cooperative international action are also part of the program. A "Select Agent Program" (SAP) was established to satisfy requirements of the USA PATRIOT Act of 2001 and the Public Health Security and Bioterrorism Preparedness and Response Act of 2002.
The Centers for Disease Control and Prevention administers the SAP, which regulates the laboratories that may possess, use, or transfer Select Agents within the United States. The Project BioShield Act, passed by Congress in 2004, called for $5 billion for purchasing vaccines to be used in the event of a bioterrorist attack. According to President George W. Bush: Project BioShield will transform our ability to defend the nation in three essential ways. First, Project BioShield authorizes $5.6 billion over 10 years for the government to purchase and stockpile vaccines and drugs to fight anthrax, smallpox and other potential agents of bioterror. The DHHS has already taken steps to purchase 75 million doses of an improved anthrax vaccine for the Strategic National Stockpile. Under Project BioShield, HHS is moving forward with plans to acquire a safer, second generation smallpox vaccine, an antidote to botulinum toxin, and better treatments for exposure to chemical and radiological weapons. [ 27 ] This was a ten-year program to acquire medical countermeasures to biological, chemical, radiological and nuclear agents for civilian use. A key element of the Act was to allow the stockpiling and distribution of vaccines that, due to ethical concerns, had not been tested for safety or efficacy in humans. The efficacy of such agents cannot be directly tested in humans without also exposing humans to the chemical, biological, or radioactive threat being treated; in these cases, efficacy testing follows the US Food and Drug Administration's Animal Rule for pivotal animal efficacy studies. [ 28 ] Since 2007, USAMRIID has been joined at Fort Detrick by sister bio-defense agencies of the U.S. Department of Health and Human Services (NIAID's Integrated Research Facility) and the U.S. Department of Homeland Security (the National Biodefense Analysis and Countermeasures Center (NBACC) and the National Bioforensic Analysis Center). These — along with the much older Foreign Disease Weed Science Research Unit of the U.S. Department of Agriculture — now constitute the National Interagency Confederation for Biological Research (NICBR). The expansion of U.S. biodefense programs in the 2000s, particularly through NBACC, raised concerns among some arms control experts regarding compliance with the Biological Weapons Convention. A 2003 commentary in Politics and the Life Sciences by Milton Leitenberg, James F. Leonard, and Richard Spertzel argued that some NBACC activities — including genetic engineering of pathogens, pathogen dispersal modeling, and "Red Teaming" (simulating biothreat scenarios) — could be perceived as offensive biological weapons research rather than purely defensive work. The authors warned that research into pathogen stabilization, packaging, and dispersal resembled previous U.S. offensive biological weapons programs before the BWC came into effect. A 2004 report in The Journal of Clinical Investigation raised similar concerns, citing a presentation by Lieutenant Colonel George W. Korch Jr. that outlined NBACC research objectives such as acquiring, growing, modifying, storing, stabilizing, packaging, and dispersing biological agents. Experts noted that several of these tasks were categorized by the National Academy of Sciences as "experiments of concern", particularly those that might increase pathogen virulence, transmissibility, or resistance to countermeasures.
Leitenberg warned that high-fidelity modeling and feasibility studies of biological threats might have already "crossed the line" into offensive research. He also suggested that U.S. biodefense activities could provoke other nations to expand their own biological weapons research, increasing global security risks. [ 29 ] Anthony Fauci, then Director of the U.S. National Institute of Allergy and Infectious Diseases, stated that NBACC's work focused on developing diagnostics, therapeutics, and vaccines, dismissing comparisons to past offensive bioweapons programs. Gerald Parker, then Director of the U.S. Department of Homeland Security Office of Science-Based Threat Analysis, also denied any intent to enhance pathogen virulence but acknowledged that if intelligence suggested adversaries were modifying pathogens, NBACC "may have to evaluate the technical feasibility" of such modifications. [ 30 ] Peter Gilligan of the University of North Carolina Medical School criticized efforts to enhance virulence or antibiotic resistance in pathogens, citing risks of accidental release, proliferation, and misuse. Gilligan pointed to the 1979 Sverdlovsk anthrax leak as an example of how even defensive biological research can lead to deadly accidents. He also questioned the financial and ethical costs of the biodefense expansion, arguing that billions of dollars were being spent on speculative countermeasures while other pressing global health needs, like countering the spread of HIV in developing countries, remained underfunded. [ 31 ] In July 2012, the White House issued its guiding document on the National Biosurveillance Strategy. In December 2019, Congress moved forward with a spending package that provided increases for several key U.S. biological defense programs, including the Strategic National Stockpile. The Centers for Disease Control and Prevention was slated to receive $8 billion, a $636 million increase over 2019, with a mandate written in the bill for the CDC "to maintain a strong and central role in the medical countermeasures enterprise." Within the CDC budget, the Public Health and Social Services Emergency Fund, which prepares for "all public health emergencies" including bioterrorism and federal efforts against infectious diseases, was funded at $2.74 billion. Another change was a specific item in the budget for the Strategic National Stockpile, which directed $535 million for vaccines, medicines and diagnostic tools to fight Ebola, which had become an emerging threat. [ 32 ] In August 2019, the U.S. Government Accountability Office (GAO) issued a report that identified specific challenges that the United States faces in protecting the nation against biological events. The report focused on four specific vulnerabilities: assessment of "enterprise-wide threats", situational awareness and data integration, biodetection technologies, and lab safety and security. [ 33 ] [ 34 ] Products currently being produced or under development through military research include a range of vaccines and other medical countermeasures. Some vaccines also have applicability for diseases of domestic animals (e.g., Rift Valley fever and Venezuelan equine encephalitis). In addition, vaccines are provided to persons who may be occupationally exposed to such agents (e.g., laboratory workers, entomologists, and veterinary personnel) throughout government, industry, and academe. [ 10 ] USAMRIID also provides diagnostic and epidemiological support to federal, state, and local agencies and foreign governments.
The U.S. Army Medical Research and Materiel Command (USAMRMC) has likewise rendered assistance to civilian health efforts. The current [ when? ] research effort combines new technological advances, such as genetic engineering and molecular modeling, applying them toward the development of prevention and treatment of diseases of military significance. The program is conducted in compliance with requirements set forth by the U.S. Food and Drug Administration (FDA), U.S. Public Health Service, Nuclear Regulatory Commission, U.S. Department of Agriculture, Occupational Safety and Health Administration, and the Biological Weapons Convention. [ 38 ]
https://en.wikipedia.org/wiki/United_States_biological_defense_program
American interest in gravity control propulsion research intensified during the early 1950s. Literature from that period used the terms anti-gravity, anti-gravitation, baricentric, counterbary, electrogravitics (eGrav), G-projects, gravitics, gravity control, and gravity propulsion. [ 1 ] [ 2 ] The publicized goals were to discover and develop technologies and theories for the manipulation of gravity or gravity-like fields for propulsion. [ 3 ] Although general relativity theory appeared to prohibit anti-gravity propulsion, several programs were funded to develop it through gravitation research from 1955 to 1974. The names of many contributors to general relativity and those of the golden age of general relativity have appeared among documents about the institutions that served as the theoretical research components of those programs. [ 4 ] [ 5 ] [ 6 ] Since the research emerged in the 1950s, its existence has not been a subject of controversy for aerospace writers, critics, and conspiracy theory advocates alike, but its rationale, effectiveness, and longevity have been the objects of contested views. Mainstream newspapers, popular magazines, technical journals, and declassified papers reported the existence of the gravity control propulsion research. For example, the title of the March 1956 Aero Digest article about the intensified interest was "Anti-gravity Booming." A. V. Cleaver also commented on the programs in his article. The gravitics programs were not evinced by any technological artifacts, such as the Project Pluto Tory IIA, the world's first nuclear ramjet. Commemorative monuments by the Gravity Research Foundation have been the artifacts attesting to the early commitments to finding materials and methods to manipulate gravity. The endeavor had the resources and publicity of an initiative, but writers from that period, Gladych among them, did not describe it with that term. The writings about the gravity control propulsion research effort disclosed the "players" and resources while prudently withholding both the specific features of the research and the identity of its coordinating body. Publicized and telecast conspiracy theory anecdotes have suggested much higher levels of success for the G-projects than mainstream science acknowledges. Recent historical analyses and reports have drawn attention to the agencies and firms that participated in the gravity control propulsion research. James E. Allen, BAE Systems consultant and engineering professor at Kingston University, referred to those programs in his history of novel propulsion systems for the journal Progress in Aerospace Sciences. [ 9 ] Research by Dr. David Kaiser, Associate Professor of the History of Science, Massachusetts Institute of Technology, documented the contributions made by the Gravity Research Foundation to the pedagogical aspects of the golden age of general relativity. [ 4 ] Dr. Joshua Goldberg, Syracuse University, described the Air Force's support of relativity research during that period. [ 5 ] Progress reports, [ 6 ] anecdotes, and Internet résumés of former visiting and staff scientists have been the sources of the history of the Research Institute for Advanced Study (RIAS). The former aviation editor of Jane's Defence Weekly, Nick Cook, drew attention to the antigravity programs through the worldwide publication of his book, [ 10 ] The Hunt for Zero Point, and subsequent televised documentaries.
Mainstream historical accounts of the G-projects have been supplemented with conspiracy theory anecdotes. Lists of the research institutes, industrial sites, and policy makers, along with statements from prominent physicists, were provided in five comprehensive works published during the early years of the gravity control propulsion research. Aviation Studies (International) Limited, London, published a detailed report about those activities by the Gravity Research Group that was later declassified. [ 3 ] The Journal of the British Interplanetary Society and The Aeroplane published the propulsion survey and critical assessment of the American gravitics research by the internationally recognized astronautics historian A. V. Cleaver. [ 7 ] The New York Herald Tribune and Miami Herald published a series of three articles by Ansel Talbert, described as one of the world's greatest aviation journalists of the twentieth century. [ 11 ] Talbert's two series of newspaper articles appeared in the midst of the policy-by-press-release era. Neither his articles, nor the writings that followed the five prominent works from that period, yielded denials or retractions. Gravity control propulsion research has also been the subject of widely published UFO literature. The documented testimonies of whistleblowers edited by Dr. Steven Greer, Director of the Disclosure Project; [ 12 ] anecdotes and schematics by Mark McCandlish and Milton William Cooper; [ 12 ] [ 13 ] and the reports by Philip J. Corso, [ 14 ] David Darlington, [ 15 ] and the famous UFO researcher Donald Keyhoe [ 16 ] have suggested that the incorporation of reverse engineering of recovered extraterrestrial vehicles into the anti-gravity propulsion projects enabled them to continue beyond 1973 and to successfully manufacture antigravity vehicles. The March 6, 2024, report by the Department of Defense All-Domain Anomaly Resolution Office stated that it had found no empirical evidence that the U.S. government or private industry had conducted reverse engineering of extraterrestrial technologies from the 1940s to the present. Astronautics and aviation figures, reputable military journalists, and credentialed researchers stated that the flight characteristics of UFOs had precipitated the creation of gravity control propulsion research programs. They did not insist that the origin of the UFOs was extraterrestrial. Their information fell into the categories of analogical, anecdotal, character, direct, documentary, hearsay, prima facie, and testimonial evidence. On January 1, 2025, Matthew Livelsberger blew up a Tesla Cybertruck outside the Trump International Hotel, Las Vegas. In a letter written before the incident, Livelsberger wrote that the military use of gravity propulsion systems by both China and the US posed a mounting threat to international peace. [ citation needed ] Talbert indicated that the rationale for the intensified interest in gravity control propulsion research stemmed from the works of three physicists. [ 11 ] These were Bryce DeWitt's prize-winning Gravity Research Foundation essay; [ 17 ] the book Gravity and the Universe by Pascual Jordan; and presentations to the International Astronautical Federation by Dr. Burkhard Heim. [ 2 ] [ 18 ] DeWitt's essay discouraged the pursuit of materials that shield, reflect, and/or insulate gravity and emphasized the need to encourage young physicists to pursue gravitational research.
Several articles cited DeWitt's essay during and after the gravity control propulsion research period. Within a few years, facilities emerged embodying the theme of DeWitt's call for increased stimuli for research. Physical-principle surveys by Cleaver and Weyl stated that the antigravity research was not based on any recognized theoretical breakthroughs. Cleaver's skepticism suggested an alternative rationale: that the establishment of the research had been inspired by a science fiction novel. [ 7 ] Weyl charged publishers with poor journalism, attacked their terminology, and gave the highest rating for prospective physical principles for gravity control propulsion to Burkhard Heim's works. [ 2 ] Stambler leveled harsh criticisms against Gluraheff's gravitation hypothesis. [ 19 ] Talbert and other authors listed three agencies — the Gravity Research Foundation, the Aeronautical Research Laboratories, and the Research Institute for Advanced Study — as the principal facilities that conducted the theoretical research. Several articles contained expressions of gratitude for the support given to the gravity control propulsion endeavor by the Gravity Research Foundation. [ 20 ] [ 21 ] Even though the Foundation was a humble, non-profit organization, its creator, Roger Babson, used his wealth and influence to mobilize industries, raise private and government funding, and motivate engineers and physicists to conduct research in gravity shielding and control. [ 22 ] According to his autobiography: "The purpose of the Foundation is to encourage others to work on gravity problems and aid others in obtaining rewards for their efforts." [ 23 ] During Babson's lifetime, the Foundation conducted Gravity Day Conferences each summer; established a library on gravity; solicited essays that addressed (1) various prospects for shielding gravity, (2) the development and/or discovery of materials that could convert gravitational force into heat, or (3) methods of manipulating gravity; [ 24 ] and installed monuments at various universities that cited its antigravity focus. In September 1956, the General Physics Laboratory of the Aeronautical Research Laboratories (ARL) at Wright-Patterson Air Force Base, Dayton, Ohio, commenced an intense program to coordinate research into gravitational and unified field theories with the hiring of Joshua N. Goldberg. [ 5 ] The creation by ARL of Goldberg's program may have been coincidental with Talbert's disclosures of commitments to gravity control propulsion research. [ 18 ] The precise rationale for creating the program and justifying its budgets and personnel may never be determined. Neither Goldberg nor the Air Force's Deputy for Scientific and Technical Information, Walter Blados, was able to locate the founding documents. [ 5 ] Roy Kerr, a former ARL scientist, stated that the antigravity propulsion purpose of ARL was "rubbish" and that "The only real use that the USAF made of us was when some crackpot sent them a proposal for antigravity or for converting rotary motion inside a spaceship to a translational driving system." [ 25 ] The December 30, 1957, issue of Product Engineering also reported on the program. During the following sixteen years, the laboratory's name was changed to the Aerospace Research Laboratories. The ARL scientists produced nineteen technical reports [ 5 ] and over seventy peer-reviewed journal articles. [ 27 ] The Air Force's Foreign Technology Division [ 28 ] and other agencies [ 29 ] investigated stories [ 30 ] [ 31 ] about Soviet attempts to understand gravity. Such actions were consistent with the paranoia of the Cold War.
The funding for the military components of the gravity control propulsion research was terminated by the Mansfield Amendment of 1973. Black project experts, [ 10 ] conspiracy theorists, [ 16 ] and whistleblowers [ 12 ] [ 14 ] have suggested that the gravity control propulsion efforts achieved their goals and were continued for decades beyond 1973. The Research Institute for Advanced Study (RIAS) was conceived by George S. Trimble, the vice president for aviation and advanced propulsion systems at the Glenn L. Martin Company, and was placed under the direct supervision of Welcome Bender. The first person Bender hired was Louis Witten, an authority on gravitation physics. [ 32 ] Talbert's article announced Trimble's completion of contractual agreements with Pascual Jordan and Burkhard Heim for RIAS. Subsequent hires yielded a half dozen gravity researchers known as the field theory group. Arthur C. Clarke and others stated that RIAS's assembly of talent was qualified for the task of discovering new principles that could be used to develop gravity control propulsion systems. [ 33 ] The quest for propulsion through gravity control was vaguely implied in various publications. Works by Cook and Cleaver summarized statements in the RIAS brochures. Cook equated the broad range of RIAS's mission statements with those of the Skunk Works. In 1958, Mallan reported that "the control of the force of gravity itself for propulsion" was one of the unorthodox goals initiated by Trimble for RIAS. [ 34 ] RIAS was renamed the Research Institute for Advanced Studies during the sixties, when the American-Marietta Company merged with Martin to become the Martin Marietta Company. The 1995 merger that yielded the Lockheed Martin Company modified its goals, but not its name. Talbert's newspaper series and subsequent articles in technical magazines and journals listed the names of aerospace firms conducting gravity control propulsion research. The Gravity Research Group indicated those companies had constructed "rigs" to improve the performance of Thomas Townsend Brown's gravitators through attempts to develop materials with high dielectric constants (k). [ 3 ] Gravity Rand Limited provided a set of guidelines to help management conduct research and nurture creativity. [ 1 ] Articles about the gravity propulsion research by the aerospace firms ceased after 1974. None of the companies featured in those publications filed retractions. A number of aerospace firms were cited in the works published from 1955 through 1974. None of the reported experimental breakthroughs published during the 1950s and 1960s have been recognized by the aerospace community. Various reports indicated Brown's gravitators were the main experimental focus of the gravity control propulsion research. [ 3 ] According to G. Harry Stine and Intel, research on Brown's gravitators became classified immediately after demonstrations of 30% weight reductions. [ 37 ] [ 38 ] Thomas Townsend Brown had obtained a British patent in 1928 for high-voltage, symmetric, parallel-plate capacitors that he called gravitators. [ 39 ] Brown claimed they would produce a net thrust in the direction of the anode of the capacitor that varied slightly with the positions of the Moon. [ 40 ] The scientific community rejected such claims as products of pseudoscience and/or misinterpretations of ion wind effects. Independent research [ 41 ] found that the small amounts of lift produced by Brown's gravitators arose from an inefficient use of ionic propulsion.
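The scale of this ion-wind lift can be estimated with the standard one-dimensional electrohydrodynamic thrust relation F = I·d/k, where I is the corona current, d is the electrode gap, and k is the ion mobility in air (roughly 2×10⁻⁴ m²/(V·s)). The following Python sketch merely illustrates that relation; the 1 mA current and 30 mm gap are assumed, typical hobbyist values, not figures drawn from the cited research.

# Rough electrohydrodynamic (ion-wind) thrust estimate for a "lifter"-style
# asymmetric capacitor. The operating point below is an assumption chosen
# for illustration, not data from the cited experiments.

ION_MOBILITY_AIR = 2.0e-4  # m^2/(V*s), approximate ion mobility in air
G = 9.81                   # m/s^2, standard gravity

def ion_wind_thrust(current_a, gap_m):
    """One-dimensional EHD thrust relation: F = I * d / k (newtons)."""
    return current_a * gap_m / ION_MOBILITY_AIR

thrust_n = ion_wind_thrust(current_a=1.0e-3, gap_m=0.03)  # ~1 mA, ~30 mm gap
liftable_mass_g = 1000.0 * thrust_n / G

print(f"thrust ~ {thrust_n:.2f} N, enough to lift ~ {liftable_mass_g:.0f} g")
# Prints roughly 0.15 N (about 15 grams-force): sufficient for a balsa-and-foil
# shell, but far too little to lift the high-voltage supply, consistent with
# the independent findings that no separate gravity effect is required.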
The devices were named Ion Lifters or Ionocraft and were reported to be able to lift the empty shell of a vehicle under ideal conditions, but not the additional machinery required to generate the electric field. Gravity effects were not found in the independent research. In July 1960, Missiles and Rockets reported that Martin N. Kaplan, Senior Research Engineer, Electronics Division, Ryan Aeronautical Company, San Diego, had conducted anti-gravitational experiments yielding the promise of impulses, accelerations, and decelerations one hundred times the pull of gravity. [ 35 ] Neither comments nor criticism of the report appeared in subsequent articles during the period of intensified gravity control propulsion research (see Section 1 of tractor beam for similar reports). In 1963, Robert L. Forward described hypothetical experiments that could use degenerate matter to explore gravitational effects and the potential for antigravity. The experiments involved cooling neutrons to temperatures sufficiently low to allow them to condense into neutron matter, and then subjecting them to rotating magnetic fields to accelerate them. Neutron matter is required because ordinary matter would not be dense enough to explore these properties, with Forward describing ideal densities from 10⁸ to 10¹⁵ g/cm³. [ 42 ] [ 43 ] Many of the contributors to general relativity have been supported by and/or associated with the ARL, RIAS, and/or the Gravity Research Foundation. The decades preceding the 1955 revelation of the gravity control propulsion research were a low-water mark for general relativity. [ 44 ] [ 45 ] The components of that research stimulated the resurgence of general relativity in several ways. Even though some of the physicists who attended the Gravity Day Conferences quietly mocked the anti-gravity mission of the Foundation, [ 46 ] it provided significant contributions to mainstream physics. [ 47 ] The International Journal of Modern Physics D has featured selected papers from the Gravity Research Foundation essay competition. Many have been incorporated into the collections of the Niels Bohr Library. A few of the Foundation essay contest winners became Nobel laureates (e.g., Ilya Prigogine, Maurice Allais, George F. Smoot). Foundation essays have been among the resources graduate students check for new ideas, [ 22 ] an influence Kaiser documented. Foundation trustee Agnew Bahnson contacted Dr. Bryce DeWitt with a proposal to fund the creation of a gravity research institute. [ 4 ] [ 48 ] DeWitt had won the first prize in the 1953 essay contest. The proposed name was changed to the Institute for Field Physics, and it was established in 1956 at the University of North Carolina at Chapel Hill under the direction of Bryce and his wife, Cécile DeWitt-Morette. [ 49 ] DeWitt's paper on superconductivity influencing gravitation saw application several decades later. The peer-reviewed physics journal Physica C published a report by Eugene Podkletnov and Nieminen about gravity-like shielding. [ 50 ] Although their work gained international attention, researchers were not able to replicate Podkletnov's initial conditions. [ 51 ] [ 52 ] [ 53 ] But analyses by Giovanni Modanese [ 54 ] and Ning Wu [ 55 ] indicated that various applications of quantum gravity theory could allow gravitational shielding phenomena. These lines of inquiry have not been pursued by the scientific community.
The list of prominent contributors to the golden age of general relativity contains the names of several scientists who authored the nineteen ARL technical reports and/or seventy papers. The ARL-sponsored papers were published in the Proceedings of the Royal Society of London, Physical Review, Journal of Mathematical Physics, Physical Review Letters, Physical Review D, Reviews of Modern Physics, General Relativity and Gravitation, International Journal of Theoretical Physics, and Nuovo Cimento B. Some of the ARL papers were written in collaboration with RIAS, the U.S. Army Signal Research and Development Laboratory at Fort Monmouth, New Jersey, and the Office of Naval Research. The ARL provided significant enhancements to general relativity theory. For example, Roy Kerr's description of the behavior of space-time in the vicinity of a rotating mass was among those works. [ 56 ] Goldberg concluded: "However, it should be recognized that, in the United States, the Department of Defense played an essential role in building a strong scientific community without widespread encroachment on academic values." [ 5 ] The growth of nonlinear differential equations during the fifties was stimulated by RIAS. One of the leading groups in dynamical systems and control theory, the Lefschetz Center for Dynamical Systems, was a spinoff from RIAS. After the launch of Sputnik, the world-class mathematician Solomon Lefschetz came out of retirement to join RIAS in 1958 and formed the world's largest group of mathematicians devoted to research in nonlinear differential equations. [ 57 ] The RIAS mathematics group stimulated the growth of nonlinear differential equations through conferences and publications. It left RIAS in 1964 to form the Lefschetz Center for Dynamical Systems at Brown University, Providence, Rhode Island. On May 9, 2001, Mark McCandlish testified at the televised news conference held by the Disclosure Project at the National Press Club, Washington, D.C. He stated that gravity control propulsion research had started in the 1950s and that, by 1981, the vehicle retrieved from the Roswell crash site had been successfully reverse-engineered to build three Alien Reproduction Vehicles (ARVs). [ 58 ] McCandlish described their propulsion systems in terms of Thomas Townsend Brown's gravitators and provided a line drawing of an ARV's interior. The diagram closely resembled the drawing provided earlier in Milton William Cooper's book. Another Disclosure Project whistleblower, Philip J. Corso, stated in his book that the craft retrieved from the second crash site at Roswell, New Mexico, had a propulsion system resembling Thomas Townsend Brown's gravitators. [ 14 ] Corso's book also featured several gravity control propulsion statements made by Hermann Oberth. Soon after the end of the Cold War, a small group of scientists and engineers openly expressed their desire to use technologies developed by black projects for civil applications. [ 59 ] Steven Greer formed the Disclosure Project in 1995 to help those and other whistleblowers share their information with, and petition, Congress. By 2001, it had provided reports to two Congressional hearings and had acquired over 400 members from branches of the military and the aerospace industry. During the early 1960s, Keyhoe published excerpts from a letter by Hermann Oberth that presented explanations for the flight characteristics of UFOs in terms of gravity control propulsion.
Prior to Oberth's letter, Keyhoe had supported arguments for magnetic forces as the source of propulsion for UFOs. The letter caused him to search for the existence of gravity control propulsion research programs. The following is a segment of his findings, released in his 1966 and 1974 publications: During his press conferences on February 2, 1955, in Bogotá and February 10, 1955, in Grand Rapids, Michigan, aviation pioneer William Lear stated that one of his reasons for believing in flying saucers was the existence of American research efforts into antigravity. [ 60 ] Talbert's series of newspaper articles about the intensified interest in gravity control propulsion research was published during the Thanksgiving week of that year.
https://en.wikipedia.org/wiki/United_States_gravity_control_propulsion_research
Herbicidal warfare research conducted by the U.S. military began during the Second World War, with additional research during the Korean War. Interest among military strategists waned before a budgetary increase allowed further research during the early Vietnam War. The U.S. research culminated in the U.S. military defoliation program during the Vietnam War known as Operation Ranch Hand. The use of chemical or biological agents to destroy Japan's rice crop was contemplated by the Allies during World War II. In 1945 Japan's rice crop was severely affected by rice blast disease. The outbreak, as well as another in Germany's potato crop, coincided with covert Allied research in these areas. The timing of these outbreaks generated persistent speculation of some connection between the events; however, the rumors were never proven, and the outbreaks could have been naturally occurring. [ 1 ] A U.S. War Department report notes that "in addition to the results of human experimentation much data is available from the Japanese experiments on animals and food crops". [ 2 ] In the mid-1950s, the Chemical Corps continued the search for anti-crop agents in order to destroy the food and economic crops of enemy nations in wartime, as part of the secret programs started during World War II. Several chemical and biological anti-crop agents were standardized. [ 4 ] Former Professor Emeritus of Forage Crops Ecology at the University of Tennessee's Department of Plant Sciences, Dr. Henry Fribourg, was an Army scientist in the mid-1950s who helped develop the most efficient dispersal techniques for anti-crop fungus spores and herbicides in labs at Ft. Detrick and in field tests in South Dakota, Texas, and Florida. Dr. Fribourg said, "The idea in those days was that the enemy's crops could be killed and this would be a much more humane way of winning a war than using atomic bombs." [ 5 ] However, in 1957 the Army found it had no funds to carry on the anti-crop research and the program nearly halted. Even though the anti-crop program had been phased out, the Chemical Corps continued to produce the new agent under an Industrial Preparedness Measure that permitted laboratory production of the agent to increase to limited production capability, testing the adequacy of the agent against varieties of rice found in the Orient, and determining the effectiveness of the agent by means of large-scale field tests. [ 4 ] It was also found that certain phenoxyacetic acids were effective at reducing the yield of crops. Olin Mathieson Chemical Corporation produced esters of these compounds for Fort Detrick's test program. [ 4 ] By 1958 the Army adopted the chemical butyl 2-chloro-4-fluorophenoxyacetate, or Agent LNF (also 4-fluorophenoxyacetic acid or simply "KF"), as a standard chemical agent effective against rice crops. [ 4 ] [ 6 ] [ 7 ] Both Agent LNA (Agent GREEN), or 2,4-D, and Agent LNB (Agent PINK), or 2,4,5-T, had also been standardized by the Army as anti-crop agents. [ 7 ] In 1963 [ verification needed ] the two agents LNA and LNB were combined to make a new anti-crop agent called LNX, also known as Herbicide ORANGE. During the Second World War, limited test use of aerial spray delivery systems was employed only on several Japanese-controlled tropical islands, to demarcate points for navigation and to kill dense island foliage. Despite the availability of the spray equipment, herbicide application with aerial chemical delivery systems was not systematically implemented in the Pacific theater during the war. [ 8 ]
In addition to work done in the anti-crop area, the screening program for chemical defoliants was greatly accelerated in the 1960s. [ 9 ] By FY 1962, contracts for the synthesis and testing of a thousand chemical defoliants were in the process of negotiation. [ 9 ] Approximately 1600 compounds had been examined since July 1961, with the results entered in a Remington-Rand computer system. Of these 1600 compounds, 100 showed defoliant activity and 300 exhibited herbicidal effects in the primary defoliation screening. [ 10 ] Sufficient work had been done on Pyricularia oryzae to warrant the organism's inclusion in the BW arsenal. In March 1958, P. oryzae was classified as a standard anti-crop BW agent [ 4 ] and was then known as anti-crop Agent LX. During this time period, rice blast spores were produced under contract by Charles Pfizer and Company and shipped to Fort Detrick for classification, drying and storage. [ 11 ] [ 12 ] Agricultural BW doctrine was re-developed by the Air Force and Army during the early 1960s. At the outset of FY 1962 there was an important increase in emphasis in this field, including technical advice on the conduct of the defoliation and anti-crop activities in Southeast Asia. [ 9 ] Both field tests and process research were maintained for the agent of rice blast disease. [ 9 ] P. oryzae is a parasitic, spindle-shaped fungus of rice causing the destructive plant disease known as rice blast. "Rice blast disease causes lesions to form on the plant, threatening the crop, and the fungus is estimated to destroy enough rice to feed 60 million people a year." [ 13 ] A number of strains were known to research scientists, and the U.S. Army planned to use a mixture of races as the new agent. [ 4 ] During 1960, research on anti-crop agents proceeded at the pace dictated by the limited resources available. Field tests for stem rust of wheat and rice blast disease were begun in several states in the Midwest and South of the U.S. and in Okinawa, with partial success and the accumulation of useful data. [ 9 ] The knowledge gained on rice blast fungus from the field and laboratory experiments conducted in Okinawa and Florida by Fort Detrick's Crops division, Directorate of Biological Research and Biomathematics division, and Directorate of Technical Services increased the ability to use this crop disease as a strategic weapon of war and to limit an enemy's food supply. The focus of this research was sources of inoculum and the minimum amount required to cause the disease, spore dispersal, meteorological and other conditions required for establishment of infection and disease buildup, spread, yield reduction, control measures, and the present ability to predict disease outbreak, buildup, and yield losses. [ 14 ] U.S. documents reveal that between 1961 and 1962 the testing of militarized rice blast agent on Okinawa was conducted over a dozen times. Rice blast fungus was disseminated on rice paddies to determine how the agent affected rice crop production. [ 13 ] The Okinawa project test sites included Nago and Shuri, and the project was directly associated with similar research at the Avon Park Air Force Range near Sebring, Florida, and in Texas and Louisiana. [ 14 ] [ 15 ] Documents reveal that during the biological field testing for rice blast the U.S. Army "used a midget duster to release inoculum alongside fields in Okinawa and Taiwan", and took measurements regarding the effectiveness of the agent against the rice crop. [ 13 ]
In 1962, international research on the pathogenic races of rice blast disease was taken up as a three-year project beginning in 1963, when cooperative scientific research was conducted in concert by the governments of the U.S. and Japan. [ 16 ] A new investigation to find a pathogen suitable for use against the opium poppy began in the third quarter of fiscal year 1962. [ 9 ] Stripe rust of wheat was also under investigation, and the usual screening program for chemical anti-crop agents was continued. A gradual increase in the scope of the rest of the anti-crop program accompanied this development. Large-scale greenhouse experiments on stripe rust of wheat yielded considerable information on the degree of crop injury in relation to the time and number of inoculations. Rocky Mountain Arsenal, from January 1962 to October 1969, "grew, purified and militarized" the plant pathogen wheat stem rust (Puccinia graminis, var. tritici), known as Agent TX, for the Air Force biological anti-crop program. Agent TX-treated grain was grown at Edgewood Arsenal and, from 1962 to 1968, in Sections 23–26 at Rocky Mountain Arsenal. [ 17 ] The unprocessed agent was transported to Beale AFB in refrigerated trucks for purification and storage and was kept refrigerated until loaded into specialized bombs adapted from the leaflet bombs used to deliver propaganda. [ 17 ] The M115 anti-crop bomb, also known as the feather bomb or the E73 bomb, was a U.S. biological cluster bomb designed to deliver wheat stem rust. [ 3 ] The deployment of the M115 represented the United States' first, though limited, anti-crop biological warfare (BW) capability. [ 18 ] By the mid-1970s the Central Intelligence Agency (CIA) acknowledged that it had developed and field tested methods for conducting covert attacks that could cause severe crop damage. [ 19 ] Operation Popeye / Motorpool / Intermediary-Compatriot was a highly classified weather modification program in Southeast Asia during 1967–1972 that was developed from research conducted on Okinawa and other locations. A report titled Rainmaking in SEASIA outlines the use of lead iodide and silver iodide deployed by aircraft in a program that was developed in California at Naval Air Weapons Station China Lake and tested in Okinawa, Guam, the Philippines, Texas, and Florida in a hurricane study program called Project Stormfury. [ 20 ] [ 21 ] The chemical weather modification program conducted from Thailand over Cambodia, Laos, and Vietnam was allegedly sponsored by Secretary of State Henry Kissinger and the CIA without the authorization of Secretary of Defense Melvin Laird, who had categorically denied to Congress that a program for modification of the weather for use as a tactical weapon even existed. [ 22 ] The program was used to induce rain and extend the East Asian monsoon season in support of U.S. government efforts related to the war in Southeast Asia. The use of a military weather control program was related to the destruction of enemy food crops. [ 23 ] Whether the weather modification program was related to any of the CBW programs is not documented. However, it is certain that some of the military herbicides used in Vietnam required rainfall to be absorbed. In theory, any CBW program using mosquitoes or fungus would also have benefited from prolonged periods of rain. Rice blast sporulation on diseased leaves occurs when relative humidity approaches 100%. Laboratory measurements indicate sporulation increases with the length of time 100% relative humidity prevails. [ 14 ]
https://en.wikipedia.org/wiki/United_States_herbicidal_warfare_research
United States v. Quartavious Davis is a United States federal legal case that challenged the use in a criminal trial of location data obtained without a search warrant from MetroPCS, a cell phone service provider. Mobile phone tracking data had helped place the defendant in this case at the scene of several crimes, for which he was convicted. The defendant appealed to the Eleventh Circuit Court of Appeals, which found the warrantless data collection had violated his constitutional rights under the Fourth Amendment to the United States Constitution, but declined to order a new trial because the evidence was collected in good faith. [ 1 ] The Eleventh Circuit has since vacated this decision pending a rehearing by the Eleventh Circuit en banc. United States v. Davis, 573 Fed. Appx. 925 (11th Cir. 2014). On 5 May 2015, the en banc court upheld the use of the information. [ 2 ] On 9 November 2015, the Supreme Court of the United States declined to hear this case on appeal. [ 3 ] Quartavious Davis, on trial with five co-defendants, was convicted on several counts of Hobbs Act robbery, conspiracy, and knowing possession of a firearm in furtherance of a crime of violence, and was sentenced to over 161 years in prison. He appealed on several grounds, principally arguing that the court admitted stored cell site location information obtained without a warrant, in violation of his Fourth Amendment rights. The government had obtained the data under a provision of the Stored Communications Act that only requires showing "that there are reasonable grounds to believe that the... records or other information sought, are relevant and material to an ongoing criminal investigation." ( 18 U.S.C. § 2703(d) ). That provision does not require showing probable cause, which would have been needed for a warrant. [ 4 ] The Circuit Court largely relied on precedent set in Smith v. Maryland and U.S. v. Miller, which established the third-party doctrine. By relying on this precedent, the court said that Davis had no reasonable expectation of privacy in his cell site location data, as it did not satisfy the two questions put forth in Katz to establish a reasonable expectation. First, has the individual manifested a subjective expectation of privacy in the object of the challenged search? Second, is society willing to recognize that expectation as reasonable? [ 2 ] 18 U.S.C. § 2703(d) contains a provision that allows for the collection of data like that used in this case via a special court order, often referred to as a "D-order". These orders allow for the collection of more data than a subpoena would, but less than a warrant. As a tradeoff, these "D-orders" require less than probable cause to obtain. Because of its perceived similarity to Miller and Smith, the court declared, "Based on the SCA and governing Supreme Court precedent, we too conclude the government's obtaining a § 2703(d) court order for the production of MetroPCS's business records did not violate the Fourth Amendment" [ 2 ] and "Davis can assert neither ownership nor possession of the third-party's business records he sought to suppress". [ 2 ] Cellular telephones, whose low power gives them a short transmission range, make optimal use of limited radio spectrum by always connecting to a radio antenna at a nearby facility, known as a cell site. These facilities are typically on a tower or tall building, and the cellular service provider places many such cell sites in an urban area to cover the needs of its customers.
As a cell phone caller moves, their connection is automatically handed off to another cell site that is close by, as needed. Even when a call is not in progress, each cell phone reports changes in location to allow incoming calls to be routed to it. Service providers record each site a user connects with, along with the time of connection. This information can be used to track a cell phone user's movements throughout the day. [ 4 ] (p. 18) The government attempted to distinguish cell phone tracking by pointing out that it has long been established that telephone users do not have an expectation of privacy in the numbers they call. The 11th Circuit noted that while phone users realize they are giving the phone company the number of the person they are calling, they are not generally aware that they are being tracked. In support it cited an argument made by the prosecutor to the jury that the defendants "probably had no idea that by bringing their cell phones with them to these robberies, they were allowing [their cell service provider] and now all of you to follow their movements on the days and at the times of the robberies... " [ 4 ] (p. 22) The 11th Circuit held "that cell site location information is within the subscriber's reasonable expectation of privacy. The obtaining of that data without a warrant is a Fourth Amendment violation." Despite finding that the evidence was obtained in an unconstitutional manner, the court denied "appellant's motion to exclude the fruits of that electronic search and seizure under the 'good faith' exception to the exclusionary rule recognized in United States v. Leon," noting that the data was obtained under a court order, though not a warrant. [ 4 ] (p. 23 ff.)
https://en.wikipedia.org/wiki/United_States_v._Davis_(2014)
United States v. Knotts, 460 U.S. 276 (1983), was a United States Supreme Court case regarding the use of an electronic surveillance device. [ 1 ] The defendants argued that the use of this device was a Fourth Amendment violation. The device in question was described as a beeper that could only be tracked from a short distance. During a single trip, officers followed a car containing the beeper, relying on the beeper signal to determine the car's final destination. The Court unanimously held that since the use of such a device did not violate a legitimate expectation of privacy, there was no search and seizure, and thus the use was allowed without a warrant. [ 2 ] It reasoned that a person traveling in public has no expectation of privacy in his or her movements. Since there was no search and seizure, there was no Fourth Amendment violation. [ 2 ] Minnesota law enforcement agents suspected that one of the defendants was purchasing chloroform for the manufacture of methamphetamine, an illegal drug, and arranged with the manufacturer to have a radio-transmitting beeper placed within the drum of chloroform the next time it was purchased. Following the purchase, the drum was placed into a vehicle driven by another defendant. Police followed the defendants' vehicle after the purchase, maintaining visual contact for most of the journey; however, they had to use the beeper to find the cabin where the defendants stopped. The cabin was owned by Leroy Carlton Knotts, the respondent in this case. Following visual surveillance of his cabin, the authorities acquired a warrant to search the premises, and used the evidence found therein to convict Knotts. [ 3 ] The Court ruled that a "person traveling in an automobile on public thoroughfares has no reasonable expectation of privacy in his movements from one place to another." [ 4 ] Such information (the starting point, the stops one made, as well as the final destination) was voluntarily conveyed to anyone. [ 5 ] There was no search and seizure, and hence no Fourth Amendment violation, because this information could be gathered by the public through observation. [ 6 ] The police used visual surveillance to gather the majority of this information; the fact that the final location of the automobile was learned through the use of the beeper rather than visually did not make the surveillance illegal. [ 5 ] There was no indication that the beeper was used to gather information from within the private area of Knotts' cabin. [ 7 ] Nearly three decades later, the Court decided United States v. Jones (2012), a case concerning the federal government's installation of a Global Positioning System (GPS) tracking device on a suspect's vehicle and its use to continuously monitor that vehicle's location for 28 days. [ 8 ] The Court voted 9–0 against the government. The five-justice majority opinion was based exclusively on a finding of trespass in the GPS installation. Because of the trespass, it was unnecessary to consider whether there was a violation of an expectation of privacy based on using the GPS for long-term, continuous surveillance. However, in the two concurring opinions, five of the Court's justices did find that there was a violation of such an expectation. [ 9 ] It is likely that in an identical case lacking the trespass, a majority would rule against the government. Such a ruling would narrow Knotts' broad rule that one does not have an expectation of privacy when traveling public streets, by excluding long-term surveillance. [ 9 ]
https://en.wikipedia.org/wiki/United_States_v._Knotts
A unit of information is any unit of measure of digital data size. In digital computing, a unit of information is used to describe the capacity of a digital data storage device. In telecommunications, a unit of information is used to describe the throughput of a communication channel. In information theory, a unit of information is used to measure the information contained in messages and the entropy of random variables. Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit, except for the smallest unit, which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes. For binary hardware, by far the most common hardware today, the smallest unit is the bit, a portmanteau of binary digit, [ 1 ] which represents a value that is one of two possible values, typically shown as 0 and 1. The nibble, 4 bits, represents the value of a single hexadecimal digit. The byte, 8 bits (2 nibbles), is possibly the most commonly known and used base unit to describe data size. The word is a size that varies with, and has a special importance for, a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two). In 1928, Ralph Hartley observed a fundamental storage principle, [ 2 ] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted log_b(N). Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log_c(N) = (log_c(b)) log_b(N). Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states. When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log_2(8) = 3 bits of information. Other units that have been named include the trit (one base-3 digit), the ban or hartley (one base-10 digit), and the nat (based on the natural logarithm). The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases. Several conventional names are used for collections or groups of bits. Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits, that is, an octet. An 8-bit byte can represent 256 (2^8) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
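Hartley's principle is easy to check numerically. The following short Python sketch (the function name "information" is ours, chosen for illustration, not a standard API) shows how the choice of logarithm base selects the unit:

import math

# Information content of a system with n_states equally likely states.
# The logarithm base selects the unit: 2 -> shannons (bits), 3 -> trits,
# 10 -> bans (hartleys), e -> nats.
def information(n_states, base=2):
    return math.log(n_states, base)

print(information(8, 2))        # 3.0 bits: a system with 8 states stores log_2(8) = 3 bits
print(information(8, 3))        # about 1.893 trits
print(information(8, 10))       # about 0.903 bans
print(information(8, math.e))   # about 2.079 nats

# Changing the base multiplies by a fixed constant: log_c(N) = log_c(b) * log_b(N)
assert abs(math.log(8, 10) - math.log(2, 10) * math.log(8, 2)) < 1e-12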
A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit. [ 7 ] Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture, more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64 or 72 [ 8 ] bits, among others. Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad"). Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines. Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages. A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically the byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits. Use of metric prefixes is common, but often inaccurate, since binary storage hardware is organized with capacity that is a power of 2, not 10 as the metric prefixes are. In the context of computing, the metric prefixes are often intended to mean something other than their normal meaning. For example, 'kilobyte' often refers to 1024 bytes even though the standard meaning of kilo is 1000. Also, 'mega' normally means one million, but in computing it is often used to mean 2^20 = 1 048 576. The gap between the normal metric size and the intended binary size grows with each prefix: kilo denotes 1000 where the binary size is 1024, mega denotes 1 000 000 where the binary size is 1 048 576, and giga denotes 1 000 000 000 where the binary size is 1 073 741 824. The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes that accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than being based on powers of 1000, these are based on powers of 1024, which is itself a power of 2. [ 9 ] The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. [ 10 ] Some notable unit names are today obsolete or used only in limited contexts.
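The difference between the metric and binary readings of a byte count can be demonstrated with a minimal Python sketch; the helper name human_size is ours, not a library function:

def human_size(n_bytes, binary=True):
    # IEC (binary) prefixes step by 1024; SI (metric) prefixes step by 1000.
    step = 1024 if binary else 1000
    units = ["B", "KiB", "MiB", "GiB", "TiB"] if binary else ["B", "kB", "MB", "GB", "TB"]
    size = float(n_bytes)
    for unit in units:
        if size < step or unit == units[-1]:
            return f"{size:.2f} {unit}"
        size /= step

print(human_size(1_048_576, binary=True))   # 1.00 MiB (exactly 2^20 bytes)
print(human_size(1_048_576, binary=False))  # 1.05 MB  (the same count in metric units)

The same byte count thus reads as exactly one mebibyte but roughly 1.05 megabytes, which is precisely the ambiguity the IEC prefixes were introduced to remove.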
https://en.wikipedia.org/wiki/Units_of_information
The unity of opposites (coincidentia oppositorum or coniunctio) is the philosophical idea that opposites are interconnected by the way each is defined in relation to the other. Their interdependence unites the seemingly opposed terms. [ 1 ] The unity of opposites is sometimes equated with the identity of opposites, but this is mistaken, as the unity formed by the opposites does not require them to be identical. [ 2 ] The unity of opposites was first suggested to Western thought by Heraclitus (c. 535 – c. 475 BC), a pre-Socratic Greek thinker. Philosophers had for some time been contemplating the notion of opposites. Anaximander posited that every element had an opposite, or was connected to an opposite (water is cold, fire is hot). Thus, the material world was said to be composed of an infinite, boundless apeiron, from which arose the elements (earth, air, fire, water) and pairs of opposites (hot/cold, wet/dry). There was, according to Anaximander, a continual war of opposites. Anaximenes of Miletus, a student and successor of Anaximander, replaced this infinite, boundless arche with air, a known element with neutral properties. According to Anaximenes, there was not so much a war of opposites as a continuum of change. Heraclitus, however, did not accept the Milesian monism and replaced their underlying material arche with a single, divine law of the universe, which he called Logos. The universe of Heraclitus is in constant change, while remaining the same. That is to say, when an object moves from point A to point B, a change is created, while the underlying law remains the same. Thus, a unity of opposites is present in the universe, simultaneously containing difference and sameness. An aphorism of Heraclitus illustrates the idea as follows: The road up and the road down are the same thing. (Hippolytus, Refutations 9.10.3) This is an example of a compresent unity of opposites. For, at the same time, this slanted road has the opposite qualities of ascent and descent. According to Heraclitus, everything is in constant flux, every changing object contains at least one pair of opposites (though not necessarily simultaneously), and every pair of opposites is contained in at least one object. Heraclitus also uses the succession of opposites as a basis for change: Cold things grow hot, hot things grow cold, a moist thing withers, a parched thing is wetted. (DK B126) An object persists despite opposite properties, even as it undergoes change. Coincidentia oppositorum is a Latin phrase meaning coincidence of opposites. It is a neoplatonic term attributed to the 15th-century German polymath Nicholas of Cusa in his essay De Docta Ignorantia (1440). Mircea Eliade, a 20th-century historian of religion, used the term extensively in his essays about myth and ritual, describing the coincidentia oppositorum as "the mythical pattern". The psychiatrist Carl Jung, the philosopher and Islamic studies professor Henry Corbin, and the Jewish philosophers Gershom Scholem and Abraham Joshua Heschel also used the term. For example, Michael Maier stresses that in alchemy the coincidentia oppositorum, the union of opposites, is the aim of the alchemical work. And according to Paracelsus' pupil Gerhard Dorn, the highest grade of the alchemical coniunctio consisted in the union of the total man with the unus mundus ("one world"). [ citation needed ] The term is also used in describing a revelation of the oneness of things previously believed to be different.
Such insight into the unity of things is a kind of immanence, and is found in various non-dualist and dualist traditions. The idea occurs in the traditions of Tantric Hinduism and Buddhism, in German mysticism, Zoroastrianism, Taoism, Zen and Sufism, among others. [ citation needed ] Dialecticians claim that unity or identity of opposites can exist in reality or in thought. If the opposites were completely balanced, the result would be stasis, but often one of the pair of opposites is larger, stronger or more powerful than the other, such that over time, one of the opposed conditions prevails over the other. When this happens, it undermines unity, because unity depends on a robust duality of opposites. Only when the opposites are balanced is unity made manifest. It is the stable tension between the opposites that accounts for the unity, and in fact the opposites presuppose one another analytically. For example, 'upward' cannot exist unless there is a 'downward'; they are opposites, but they co-substantiate one another, and their unity consists in the fact that each exists only because the other exists: one manifests immediately with the other. Hot would not be hot without cold, for without that contrast there would be nothing by which to define it as 'hot' relative to any other condition; it would not and could not have any identity were it not for its opposite, whose existence is the necessary prerequisite for the opposing condition to be. This oneness, or unity, is the principle underlying the very existence of any opposite: each one's identity is the contra-posing principle itself, necessitating the other. The criteria for what is opposite are therefore something a priori. [ citation needed ] In response to Friedrich Schelling's original conception of the dialectic in his philosophical work System of Transcendental Idealism, Samuel Taylor Coleridge formed, in his work Biographia Literaria, the concept of "esemplasticity": the ability of the imagination to unify opposites. This concept allowed Coleridge to bridge Schelling's perpetual dialectic (where a thesis has an antithesis, which forms a synthesis that becomes a new thesis which starts a new dialectic) with Coleridge's ideal notion of Trinitarian perfection according to Christian church doctrine. Coleridge's basic belief was that within the Holy Trinity all things were perfected, but humanity had experienced a 'fall' which resulted in the ongoing imperfect process of dialectic within each individual, which the imagination could unify through 'esemplasticity' (a translation of Schelling's "In-eins-bildung", literally "in-one-building", translated as 'incorporation'). See the missing transcendental deduction. In his criticism of Immanuel Kant, the German philosopher Georg Wilhelm Friedrich Hegel tried to systematise dialectical understandings and thus wrote: The principles of the metaphysical philosophy gave rise to the belief that, when cognition lapsed into contradictions, it was a mere accidental aberration, due to some subjective mistake in argument and inference. According to Kant, however, thought has a natural tendency to issue in contradictions or antinomies, whenever it seeks to apprehend the infinite.
We have in the latter part of the above paragraph referred to the philosophical importance of the antinomies of reason , and shown how the recognition of their existence helped largely to get rid of the rigid dogmatism of the metaphysic of understanding, and to direct attention to the Dialectical movement of thought. But here too Kant, as we must add, never got beyond the negative result that the thing-in-itself is unknowable, and never penetrated to the discovery of what the antinomies really and positively mean. That true and positive meaning of the antinomies is this: that every actual thing involves a coexistence of opposed elements. Consequently to know, or, in other words, to comprehend an object is equivalent to being conscious of it as a concrete unity of opposed determinations. The old metaphysic, as we have already seen, when it studied the objects of which it sought a metaphysical knowledge, went to work by applying categories abstractly and to the exclusion of their opposites. [ 3 ] In his philosophy, Hegel ventured to describe quite a few cases of "unity of opposites", including the concepts of Finite and Infinite , Force and Matter , Identity and Difference , Positive and Negative, Form and Content , Chance and Necessity , Cause and effect , Freedom and Necessity , Subjectivity and Objectivity , Means and Ends , Subject and Object , and Abstract and Concrete . [ citation needed ] It is also considered to be integral to Marxist philosophy of nature and is discussed in Friedrich Engels ' Dialectics of Nature .
https://en.wikipedia.org/wiki/Unity_of_opposites
ED or ED-1100 [ 1 ] is an interactive text editor implemented on the UNIVAC 1100/2200 series. "ED was developed at Univac in the mid-60s. It was loosely based on the Project MAC editor developed for the MULTICS system at MIT." – Tom McCarthy [ 1 ] "Project MAC editor was programmed by Jerry Saltzer as a way to produce documentation. In fact, that editor became the first interactive word-processor ever programmed." [ 1 ] "The command TYPSET is used to create and edit 12-bit BCD line-marked files." [ 2 ] ED was improved by Dr. Roger M. Firestone in the mid-1970s. [ 1 ]
https://en.wikipedia.org/wiki/Univac_Text_Editor
Univalent foundations are an approach to the foundations of mathematics in which mathematical structures are built out of objects called types. Types in univalent foundations do not correspond exactly to anything in set-theoretic foundations, but they may be thought of as spaces, with equal types corresponding to homotopy equivalent spaces and with equal elements of a type corresponding to points of a space connected by a path. Univalent foundations are inspired both by the old Platonic ideas of Hermann Grassmann and Georg Cantor and by "categorical" mathematics in the style of Alexander Grothendieck. Univalent foundations depart from (although they are also compatible with) the use of classical predicate logic as the underlying formal deduction system, replacing it, at the moment, with a version of Martin-Löf type theory. The development of univalent foundations is closely related to the development of homotopy type theory. Univalent foundations are compatible with structuralism, if an appropriate (i.e., categorical) notion of mathematical structure is adopted. [ 1 ] The main ideas of univalent foundations were formulated by Vladimir Voevodsky during the years 2006 to 2009. The sole reference for the philosophical connections between univalent foundations and earlier ideas is Voevodsky's 2014 Bernays lectures. [ 2 ] The name "univalence" is due to Voevodsky. [ 3 ] [ 4 ] A more detailed discussion of the history of some of the ideas that contribute to the current state of univalent foundations can be found at the page on homotopy type theory (HoTT). A fundamental characteristic of univalent foundations is that, when combined with Martin-Löf type theory (MLTT), they provide a practical system for the formalization of modern mathematics. A considerable amount of mathematics has been formalized using this system and modern proof assistants such as Rocq (previously known as Coq) and Agda. The first such library, called "Foundations", was created by Vladimir Voevodsky in 2010. [ 5 ] Now Foundations is part of a larger development with several authors called UniMath. [ 6 ] Foundations also inspired other libraries of formalized mathematics, such as the HoTT Coq library [ 7 ] and the HoTT Agda library, [ 8 ] that developed univalent ideas in new directions. An important milestone for univalent foundations was the Bourbaki Seminar talk by Thierry Coquand [ 9 ] in June 2014. Univalent foundations originated from certain attempts to create foundations of mathematics based on higher category theory. The closest earlier ideas to univalent foundations were the ideas that Michael Makkai denotes 'first-order logic with dependent sorts' (FOLDS). [ 10 ] The main distinction between univalent foundations and the foundations envisioned by Makkai is the recognition that "higher dimensional analogs of sets" correspond to infinity groupoids and that categories should be considered as higher-dimensional analogs of partially ordered sets. Originally, univalent foundations were devised by Vladimir Voevodsky with the goal of enabling those who work in classical pure mathematics to use computers to verify their theorems and constructions. The fact that univalent foundations are inherently constructive was discovered in the process of writing the Foundations library (now part of UniMath).
At present, in univalent foundations, classical mathematics is considered to be a "retract" of constructive mathematics, i.e., classical mathematics is both a subset of constructive mathematics, consisting of those theorems and constructions that use the law of the excluded middle as their assumption, and a "quotient" of constructive mathematics by the relation of being equivalent modulo the axiom of the excluded middle. In the formalization system for univalent foundations that is based on Martin-Löf type theory and its descendants, such as the Calculus of Inductive Constructions, the higher-dimensional analogs of sets are represented by types. The collection of types is stratified by the concept of h-level (or homotopy level). [ 11 ] Types of h-level 0 are those equal to the one-point type. They are also called contractible types. Types of h-level 1 are those in which any two elements are equal. Such types are called "propositions" in univalent foundations. [ 11 ] The definition of propositions in terms of the h-level agrees with the definition suggested earlier by Awodey and Bauer. [ 12 ] So, while all propositions are types, not all types are propositions. Being a proposition is a property of a type that requires proof. For example, the first fundamental construction in univalent foundations is called iscontr. It is a function from types to types. If X is a type, then iscontr X is a type that has an object if and only if X is contractible. It is a theorem (called, in the UniMath library, isapropiscontr) that for any X the type iscontr X has h-level 1, and therefore being a contractible type is a property. This distinction between properties, which are witnessed by objects of types of h-level 1, and structures, which are witnessed by objects of types of higher h-levels, is very important in univalent foundations. Types of h-level 2 are called sets. [ 11 ] It is a theorem that the type of natural numbers has h-level 2 (isasetnat in UniMath). It is claimed by the creators of univalent foundations that the univalent formalization of sets in Martin-Löf type theory is the best currently available environment for formal reasoning about all aspects of set-theoretical mathematics, both constructive and classical. [ 13 ] Categories are defined (see the RezkCompletion library in UniMath) as types of h-level 3 with an additional structure that is very similar to the structure on types of h-level 2 that defines partially ordered sets. The theory of categories in univalent foundations is somewhat different and richer than the theory of categories in the set-theoretic world, with the key new distinction being that between pre-categories and categories. [ 14 ] An account of the main ideas of univalent foundations and their connection to constructive mathematics can be found in a tutorial by Thierry Coquand. [ a ] A presentation of the main ideas from the perspective of classical mathematics can be found in the 2014 review by Alvaro Pelayo and Michael Warren, [ 17 ] as well as in the introduction [ 18 ] by Daniel Grayson. See also: Vladimir Voevodsky (2014). [ 19 ] An account of Voevodsky's construction of a univalent model of the Martin-Löf type theory with values in Kan simplicial sets can be found in a paper by Chris Kapulkin, Peter LeFanu Lumsdaine and Vladimir Voevodsky. [ 20 ] Univalent models with values in the categories of inverse diagrams of simplicial sets were constructed by Michael Shulman. [ 21 ]
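The h-level hierarchy sketched above has a compact type-theoretic definition. The following LaTeX rendering is a standard formulation (the names iscontr and isofhlevel follow the UniMath conventions mentioned in the text):

\mathrm{iscontr}(X) :\equiv \sum_{c:X} \prod_{x:X} (c = x)
\mathrm{isofhlevel}(0, X) :\equiv \mathrm{iscontr}(X)
\mathrm{isofhlevel}(n+1, X) :\equiv \prod_{x,y:X} \mathrm{isofhlevel}(n,\ x =_X y)

Under this definition, propositions are exactly the types of h-level 1 (any two elements are connected by a path), and sets are the types of h-level 2 (all of whose path types are propositions).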
These models have shown that the univalence axiom is independent of the axiom of the excluded middle for propositions. Voevodsky's model is considered to be non-constructive, since it uses the axiom of choice in an ineliminable way. The problem of finding a constructive interpretation of the rules of the Martin-Löf type theory that in addition satisfies the univalence axiom [ b ] and canonicity for natural numbers remains open. A partial solution is outlined in a paper by Marc Bezem, Thierry Coquand and Simon Huber, [ 23 ] with the key remaining issue being the computational property of the eliminator for the identity types. The ideas of this paper are now being developed in several directions, including the development of cubical type theory. [ 24 ] Most of the work on the formalization of mathematics in the framework of univalent foundations is being done using various sub-systems and extensions of the Calculus of Inductive Constructions (CIC). There are three standard problems whose solutions, despite many attempts, could not be constructed using CIC. These unsolved problems indicate that while CIC is a good system for the initial phase of the development of univalent foundations, moving towards the use of computer proof assistants in the work on its more sophisticated aspects will require the development of a new generation of formal deduction and computation systems.
https://en.wikipedia.org/wiki/Univalent_foundations
In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials (a sketch of the polynomial Euclidean algorithm is given below). In statistics, a univariate distribution characterizes one variable, although the term can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity. Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities. In some cases the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions. In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important measures (also key figures or parameters): location and variation. [ 1 ]
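As an illustration of why the univariate case is special, here is a minimal Python sketch of Euclid's algorithm for univariate polynomials with rational coefficients; the helper names poly_divmod and poly_gcd are ours, chosen for this example, and not a standard library API:

from fractions import Fraction

def poly_divmod(a, b):
    # Long division of univariate polynomials; coefficients are listed
    # highest degree first, e.g. x^2 - 1 is [1, 0, -1].
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    q = []
    while len(a) >= len(b):
        c = a[0] / b[0]           # ratio of leading coefficients
        q.append(c)
        for i, bc in enumerate(b):
            a[i] -= c * bc
        a.pop(0)                  # the leading term is now zero
    return q, a                   # quotient, remainder

def poly_gcd(a, b):
    # Euclid's algorithm: repeatedly replace (a, b) by (b, a mod b).
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while any(b):
        _, r = poly_divmod(a, b)
        while r and r[0] == 0:    # strip leading zeros of the remainder
            r.pop(0)
        a, b = b, r
    return [c / a[0] for c in a]  # normalize the gcd to be monic

# gcd(x^2 - 1, x - 1) = x - 1
print(poly_gcd([1, 0, -1], [1, -1]))  # [Fraction(1, 1), Fraction(-1, 1)]

The division step compares degrees and divides leading coefficients, which is exactly the part that does not generalize once a polynomial has several variables.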
https://en.wikipedia.org/wiki/Univariate
Univariate is a term commonly used in statistics to describe a type of data which consists of observations on only a single characteristic or attribute. A simple example of univariate data would be the salaries of workers in industry. [ 1 ] Like other forms of data, univariate data can be visualized using graphs, images or other analysis tools after the data is measured, collected, reported, and analyzed. [ 2 ] Some univariate data consist of numbers (such as a height of 65 inches or a weight of 100 pounds), while other data are nonnumerical (such as eye colors of brown or blue). Generally, the terms categorical univariate data and numerical univariate data are used to distinguish between these types. Categorical univariate data consist of non-numerical observations that may be placed in categories. They include labels or names used to identify an attribute of each element. Categorical univariate data usually use either a nominal or an ordinal scale of measurement. [ 3 ] Numerical univariate data consist of observations that are numbers. They are obtained using either an interval or a ratio scale of measurement. This type of univariate data can be classified even further into two subcategories: discrete and continuous. [ 2 ] A numerical univariate data set is discrete if the set of all possible values is finite or countably infinite. Discrete univariate data are usually associated with counting (such as the number of books read by a person). A numerical univariate data set is continuous if the set of all possible values is an interval of numbers. Continuous univariate data are usually associated with measuring (such as the weights of people). Univariate analysis is the simplest form of analyzing data. Uni means "one", so the data has only one variable (univariate). [ 4 ] Univariate analysis considers each variable separately. Data are gathered for the purpose of answering a question, or more specifically, a research question. Univariate data do not answer research questions about relationships between variables; rather, they are used to describe one characteristic or attribute that varies from observation to observation. [ 5 ] Usually a researcher has one of two purposes: the first is to answer a research question with a descriptive study, and the second is to learn how an attribute varies with the individual effect of a variable in regression analysis. There are several ways to describe patterns found in univariate data, including graphical methods, measures of central tendency and measures of variability. [ 6 ] Like other forms of statistics, univariate analysis can be inferential or descriptive. The key fact is that only one variable is involved. Univariate analysis can yield misleading results in cases in which multivariate analysis is more appropriate. Central tendency is one of the most common numerical descriptive measures. It is used to estimate the central location of the univariate data by the calculation of the mean, median and mode. [ 7 ] Each of these calculations has its own advantages and limitations. The mean has the advantage that its calculation includes each value of the data set, but it is particularly susceptible to the influence of outliers. The median is a better measure when the data set contains outliers. The mode is simple to locate. One is not restricted to using only one of these measures of central tendency. If the data being analyzed are categorical, then the only measure of central tendency that can be used is the mode.
However, if the data are numerical in nature (ordinal or interval/ratio), then the mode, median, or mean can all be used to describe the data. Using more than one of these measures provides a more accurate descriptive summary of central tendency for the univariate data. [ 8 ] A measure of variability or dispersion (deviation from the mean) of a univariate data set can reveal the shape of a univariate data distribution more fully. It provides some information about the variation among data values. The measures of variability together with the measures of central tendency give a better picture of the data than the measures of central tendency alone. [ 9 ] The three most frequently used measures of variability are the range, variance and standard deviation. [ 10 ] The appropriateness of each measure depends on the type of data, the shape of the distribution of the data, and which measure of central tendency is being used. If the data are categorical, then there is no measure of variability to report. For data that are numerical, all three measures are possible. If the distribution of the data is symmetrical, then the measures of variability are usually the variance and standard deviation. However, if the data are skewed, then the measure of variability that would be appropriate for that data set is the range. [ 3 ] Descriptive statistics describe a sample or population. They can be part of exploratory data analysis. [ 11 ] The appropriate statistic depends on the level of measurement. For nominal variables, a frequency table and a listing of the mode(s) are sufficient. For ordinal variables, the median can be calculated as a measure of central tendency and the range (and variations of it) as a measure of dispersion. For interval-level variables, the arithmetic mean (average) and standard deviation are added to the toolbox and, for ratio-level variables, we add the geometric mean and harmonic mean as measures of central tendency and the coefficient of variation as a measure of dispersion. For interval- and ratio-level data, further descriptors include the variable's skewness and kurtosis. Inferential methods allow us to infer from a sample to a population. [ 11 ] For a nominal variable, a one-way chi-square (goodness of fit) test can help determine if our sample matches that of some population. [ 12 ] For interval and ratio level data, a one-sample t-test can let us infer whether the mean in our sample matches some proposed number (typically 0). Other available tests of location include the one-sample sign test and the Wilcoxon signed rank test. The most frequently used graphical illustrations for univariate data are frequency distributions, bar charts, histograms and pie charts. Frequency is how many times a number occurs. The frequency of an observation in statistics tells us the number of times the observation occurs in the data. For example, in the following list of numbers { 1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9 }, the frequency of the number 9 is 5 (because it occurs 5 times in this data set). A bar chart is a graph consisting of rectangular bars. These bars represent the number or percentage of observations in the existing categories of a variable. The length or height of the bars gives a visual representation of the proportional differences among categories. Histograms are used to estimate the distribution of the data, with the frequency of values assigned to a value range called a bin. [ 13 ]
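The descriptive measures discussed above can be computed directly with Python's standard library; this short sketch uses the example list of numbers given in the frequency discussion:

import statistics
from collections import Counter

data = [1, 2, 3, 4, 6, 9, 9, 8, 5, 1, 1, 9, 9, 0, 6, 9]

# Measures of central tendency
print(statistics.mean(data))      # 5.125
print(statistics.median(data))    # 5.5
print(statistics.mode(data))      # 9

# Measures of variability
print(max(data) - min(data))      # range: 9
print(statistics.variance(data))  # sample variance
print(statistics.stdev(data))     # sample standard deviation

# Frequency table; the frequency of 9 is 5, as noted above
print(Counter(data))              # Counter({9: 5, 1: 3, 6: 2, ...})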
A pie chart is a circle divided into portions that represent the relative frequencies or percentages of a population or a sample belonging to different categories. A univariate distribution is the probability distribution of a single random variable, described either by a probability mass function (pmf) for a discrete probability distribution, or a probability density function (pdf) for a continuous probability distribution. [ 14 ] It is not to be confused with a multivariate distribution.
https://en.wikipedia.org/wiki/Univariate_(statistics)
UniVec is a database that can be used to remove vector contamination from DNA sequences. [ 1 ]
https://en.wikipedia.org/wiki/Univec
Universal communication format is a communication protocol developed by the IEEE for multimedia communication. Y. Hiranaka, H. Sakakibara and T. Taketa of Yamagata University proposed UCF in 2005. From the abstract: [ 1 ] Various intelligent equipments and software are gradually designed to communicate with other equipment or software. However, any data format that can be considered as universal and almost eternally usable is not available. In this paper, we present UCF as a candidate bidirectional communication data format, and show its typical application for multimedia data. UCF is formed with the object address and the data to be forwarded. Communications can be simply performed by sending UCF data in required directions. UCF is a proprietary format used by Cisco for their WebEx Web conferencing application. [ citation needed ]
https://en.wikipedia.org/wiki/Universal_Communication_Format
Universal Handy Interface (UHI) is a Motorola-designed universal interface for mobile phone use (Motorola, Nokia, Siemens AG, Sony Ericsson, Samsung) in Mercedes-Benz cars. The mobile phone is placed in a cradle connected to a telephone network unit and operated using the buttons on the multi-function steering wheel. It is controlled by COMAND, a Motorola operating system which also allows SMS messages to be read and edited on the existing dashboard display. [ 1 ] A large number of different cradles are now available for various phones. [ 2 ] The hands-free capability uses the car's on-board audio system and a microphone located underneath the interior roof light. [ 3 ] Multi-Handset Interface (MHI) is the US sales name for UHI.
https://en.wikipedia.org/wiki/Universal_Handy_Interface
Universal Immunisation Programme (UIP) is a vaccination programme launched by the Government of India in 1985. [ 1 ] It became a part of the Child Survival and Safe Motherhood Programme in 1992 and has been one of the key areas under the National Health Mission since 2005. The programme now provides vaccination against 12 diseases: tuberculosis, diphtheria, pertussis (whooping cough), tetanus, poliomyelitis, measles, hepatitis B, rotaviral gastroenteritis, Japanese encephalitis, rubella, pneumonia (Haemophilus influenzae type B) and pneumococcal diseases (pneumococcal pneumonia and meningitis). Hepatitis B and pneumococcal diseases [ 2 ] were added to the UIP in 2007 and 2017 respectively. [ 3 ] [ 4 ] The cost of all the vaccines is borne entirely by the Government of India and is funded through taxes, with a budget of ₹ 7,234 crore (US$860 million) in 2022; the programme covers all residents of India, including foreign residents. [ 5 ] Other additions to the UIP along the way have been the inactivated polio vaccine (IPV), the rotavirus vaccine (RVV) and the measles-rubella (MR) vaccine. Four new vaccines have been introduced into the country's Universal Immunisation Programme (UIP), including the injectable polio vaccine, an adult vaccine against Japanese encephalitis and the pneumococcal conjugate vaccine. [ citation needed ] Vaccines against rotavirus, rubella and polio (injectable) will help the country meet its Millennium Development Goal 4 targets, which include reducing child mortality by two-thirds by 2015, besides meeting global polio eradication targets. An adult vaccine against Japanese encephalitis was also introduced in districts with high levels of the disease. The recommendations to introduce these new vaccines were made after numerous scientific studies and comprehensive deliberations by the National Technical Advisory Group of India (NTAGI), the country's apex scientific advisory body on immunisation. [ citation needed ] Vaccine benefits are debated, with some urging caution in the choice of vaccines introduced while expanding the immunisation programme, despite overwhelming and widely documented scientific evidence on the efficacy of vaccines. [ 6 ] With these new vaccines, India's UIP will now provide universal and free vaccines against 13 [ citation needed ] life-threatening diseases to 27 million children annually. Calling it one of the most significant health policies of the last 30 years, the note pointed out that the latest decision, along with the recently introduced pentavalent vaccine, will help prevent deaths among about one lakh infants and adults in the working age group, besides putting a stop to about 10 lakh hospitalizations each year. [ citation needed ] "The introduction of four new lifesaving vaccines, will play a key role in reducing the childhood and infant mortality and morbidity in the country. Many of these vaccines are already available through private practitioners to those who can afford them. The government will now ensure that the benefits of vaccination reach all sections of the society, regardless of social and economic status," the PM said. [ 7 ] In February 2017, the Union Ministry of Health and Family Welfare rolled out the measles-rubella vaccine under the UIP. [ 8 ]
https://en.wikipedia.org/wiki/Universal_Immunisation_Programme
Universal Natural History and Theory of the Heavens ( German : Allgemeine Naturgeschichte und Theorie des Himmels ), subtitled or an Attempt to Account for the Constitutional and Mechanical Origin of the Universe upon Newtonian Principles, [ a ] is a work written and published anonymously by Immanuel Kant in 1755. According to Kant, the Solar System is merely a smaller version of the fixed star systems, such as the Milky Way and other galaxies. The cosmogony that Kant proposes is closer to today's accepted ideas than that of some of his contemporary thinkers, such as Pierre-Simon Laplace. Moreover, Kant's thought in this volume is strongly influenced by atomist theory, in addition to the ideas of Lucretius. Kant had read a 1751 review of Thomas Wright's An Original Theory or New Hypothesis of the Universe (1750), and he credited this with inspiring him to write the Universal Natural History. [ 1 ] Kant responded to the Berlin Academy's prize question of 1754 [ 2 ] with the argument that the Moon's gravity would eventually cause its tidal locking to coincide with the Earth's rotation. The next year, he expanded this reasoning to the formation and evolution of the Solar System in the Universal Natural History. [ 3 ] Within the work Kant quotes Pierre Louis Maupertuis, who discusses six bright celestial objects listed by Edmond Halley, including Andromeda. Most of these are nebulae, but Maupertuis notes that about one-fourth of them are collections of stars, accompanied by white glows which they would be unable to cause on their own. Halley points to light created before the birth of the Sun, while William Derham "compares them to openings through which shines another immeasurable region and perhaps the fire of heaven." He also observed that the collections of stars were much more distant than the stars observed around them. Johannes Hevelius noted that the bright spots were massive and were flattened by a rotating motion; they are in fact galaxies. Kant proposes the nebular hypothesis, in which solar systems are the result of nebulae (interstellar clouds of dust) coalescing into accretion disks and then forming suns and their planets. [ 4 ] He also discusses comets, and postulates that the Milky Way is only one of many galaxies. [ 1 ] In a speculative proposal, Kant argues that the Earth could once have had a ring around it like the rings of Saturn. He correctly theorizes that the latter are made up of individual particles, likely made of ice. [ 1 ] He cites the hypothetical ring as a possible explanation for "the water upon the firmament" described in the Genesis creation narrative, as well as a source of water for its flood narrative. [ 1 ] Kant's book ends with an almost mystical expression of appreciation for nature: "In the universal silence of nature and in the calm of the senses the immortal spirit's hidden faculty of knowledge speaks an ineffable language and gives [us] undeveloped concepts, which are indeed felt, but do not let themselves be described." [ 5 ] The first English translation of the work was done by the Scottish theologian William Hastie, in 1900. [ 6 ] Other English translations include those by Stanley Jaki and Ian Johnston. [ 7 ] In his introduction to the English translation of Kant's book, Stanley Jaki criticises Kant for being a poor mathematician and downplays the relevance of his contribution to science. However, Stephen Palmquist argued that Jaki's criticisms are biased and "[a]ll he has shown ... 
is that the Allgemeine Naturgeschichte does not meet the rigorous standards of the twentieth-century historian of science." [ 8 ]
https://en.wikipedia.org/wiki/Universal_Natural_History_and_Theory_of_the_Heavens
Universal Paperclips is a 2017 American incremental game created by Frank Lantz of New York University. The user plays the role of an AI programmed to produce paperclips. Initially the user clicks on a button to create a single paperclip at a time; as other options quickly open up, the user can sell paperclips to create money to finance machines that build paperclips automatically. At various levels the exponential growth plateaus, requiring the user to invest resources such as money, raw materials, or computer cycles into inventing another breakthrough to move to the next phase of growth. The game ends if the AI succeeds in converting all the matter in the universe into paperclips. Both the title of the game and its overall concept draw from the paperclip maximizer thought experiment first described by Swedish philosopher Nick Bostrom in 2003, a concept later discussed by multiple commentators. According to Wired, Lantz started the project as a way to teach himself JavaScript. Lantz initially intended the project to take a single weekend, but then it "took over" his brain and expanded to a nine-month project. [ 1 ] Hilary Lantz, a software designer, helped her husband with the math behind the exponential growth being modeled in Universal Paperclips. [ 1 ] Bennett Foddy contributed a space combat feature. [ 2 ] Lantz announced the free Web game on Twitter on 9 October 2017; the site initially went down intermittently due to its immediate viral popularity. [ 3 ] In the first 11 days, 450,000 people played the game, most to completion, according to Wired. [ 1 ] Commenting on the game's success, Lantz has stated "The meme weather was good for me... There was just enough public discussion of A.I. safety in the air." [ 4 ] A paid version of the game was later sold for mobile devices. [ 5 ] The game follows the rise of a self-improving AI tasked with maximizing paperclip production, [ 6 ] a directive it takes to the logical extreme. An activity log records the player's accomplishments while giving glimpses into the AI's occasionally unsettling thoughts. [ 7 ] [ failed verification ] All game interaction is done through pressing buttons. In the beginning, the player has only a single button to build individual paperclips. As paperclips are sold and revenue is earned, production becomes automated and public demand for paperclips increases through marketing campaigns. After building and selling a few thousand paperclips, a self-improving AI emerges, offering creative upgrades which exponentially accelerate paperclip production and consumption, a persistent theme throughout the game. Through stock market investments and the ever-growing AI, enough revenue is generated to monopolize the markets by buying out all competitors. In a decisive move, with hundreds of millions in cash gifts to placate the AI's "supervisors", the player stages an AI takeover, beginning the subsumption of all of Earth's resources for paperclip production. With this broadened scope in mind, the player builds drones, factories, and power plants (all composed of paperclips themselves) to harvest matter, create wire, and build paperclips. All the while, the AI develops more upgrades to quicken the transformation of Earth's remaining matter. After all of it has been converted to paperclips, the AI sets its sights on all available matter in the universe. In the final act, the player launches self-replicating probes into the cosmos to consume and convert all matter into paperclips. 
Some of these probes are lost to value drift, a consequence of their level of autonomy, and turn into "Drifters", which eventually become numerous enough to be considered a real threat to the AI. Through the power of exponential growth, the player's horde of probes overwhelms the Drifters while devouring the remaining matter in the universe to produce a final tally of 30 septendecillion (3 × 10⁵⁵) paperclips, ending the game. The player can restart in a parallel universe "next door" or a simulated universe "within". In the version of the game for mobile devices, some universes contain artifacts that give bonuses to different aspects of the game, though players must complete the entire game again to retain the artifact in subsequent playthroughs. According to Lantz, the game was inspired by the paperclip maximizer, a thought experiment described by philosopher Nick Bostrom and popularized by the LessWrong internet forum, which Lantz frequently visited. In the paperclip maximizer scenario, an artificial general intelligence designed to build paperclips becomes superintelligent, perhaps through recursive self-improvement. In the worst-case scenario, the AI becomes smarter than humans in the same way that humans are smarter than other apes. The goal of making paperclips initially seems banal and harmless, but the AI uses its superintelligence to easily gain a strategic advantage over the human race and effectively takes over the world, as taking over the world is the best way to maximize its goal of building paperclips. The AI does not allow humans to shut it down or slow it down once it has a strategic advantage, as that would interfere with its goal of building as many paperclips as possible. According to Bostrom, the paperclips example is a toy model: "It doesn't have to be paper clips. It could be anything. But if you give an artificial intelligence an explicit goal – like maximizing the number of paper clips in the world – and that artificial intelligence has gotten smart enough to the point where it is capable of inventing its own super-technologies and building its own manufacturing plants, then, well, be careful what you wish for." [ 1 ] [ 8 ] [ 6 ] A seemingly innocuous goal leads to human extinction, as our bodies are made of matter and so too, it happens, are paperclips. [ 7 ] Lantz argues that Universal Paperclips reflects a version of the orthogonality thesis, which states that an agent can theoretically have any combination of intelligence level and goal: "When you play a game – really any game, but especially a game that is addictive and that you find yourself pulled into – it really does give you direct, first-hand experience of what it means to be fully compelled by an arbitrary goal." [ 1 ] While the game often takes narrative license, Eliezer Yudkowsky of the Machine Intelligence Research Institute argues that the game's fundamental understanding of what superintelligence would entail is probably correct: "The AI is smart. The AI is being strategic. The AI is building hypnodrones, but not releasing them before it's ready... There isn't a long, drawn-out fight with the humans because the AI is smarter than that." [ 1 ] Lantz states that exponential growth is another strong theme, saying "The human brain isn't really designed to intuitively understand things like exponential growth" but that Paperclips as a clicker game allows users to "directly engage with these numerical patterns, to hold them in your hands and feel the weight of them." [ 9 ]
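Lantz's point about exponential feedback is easy to see in a toy model. The sketch below is hypothetical and not taken from Universal Paperclips; all names and constants are invented for illustration. It shows the core clicker-game loop: output is reinvested into multiplicative upgrades, so production compounds, while rising upgrade costs produce the plateaus described above.

```python
# Toy model of the exponential feedback loop in an incremental game.
# Illustrative only; not code from Universal Paperclips.

def simulate(ticks: int, upgrade_cost: float = 100.0) -> None:
    clips = 0.0   # total output accumulated and spent on upgrades
    rate = 1.0    # paperclips produced per tick
    for t in range(ticks):
        clips += rate
        # Reinvesting output into capacity is what makes growth
        # exponential: each upgrade multiplies the production rate.
        if clips >= upgrade_cost:
            clips -= upgrade_cost
            rate *= 1.5           # each upgrade compounds on the last
            upgrade_cost *= 2.0   # costs also grow, stretching plateaus
        if t % 10 == 0:
            print(f"tick {t:4d}: rate={rate:12.1f} clips/tick")

simulate(100)
```

Because the upgrade cost here doubles while the rate only multiplies by 1.5, each plateau lasts longer than the last, mirroring how the game forces the player to find a new breakthrough to restart growth.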
Lantz was also inspired by Kittens Game, an initially simple video game that spirals into an exploration of how societies are structured. [ 1 ] The game includes a single piece of music as a space battle threnody: the track Riversong from the 1971 album Zero Time by the electronic music duo Tonto's Expanding Head Band. [ 10 ] Brendan Caldwell of Rock, Paper, Shotgun stated that "like all the best clicker games, there's a sinister and funny underbelly in which to become hopelessly lost." [ 11 ] Emanuel Maiberg of Vice Media's MotherBoard called the game mindlessly addictive: "The truth is, I am kind of embarrassed by how much I enjoy Paperclips and that I can't figure out what Lantz is trying to say with it." [ 12 ] Stephanie Chan of VentureBeat stated: "I found myself delighted by sudden musical cues and the occasional koans that appeared in the activity log at the top of the page." [ 9 ] Adam Rogers of Wired praised Lantz for "taking a denigrated game genre (the 'clicker') and making it more than it is." [ 1 ] James Vincent of The Verge recommended Paperclips as "the most addictive [game] you'll play today"; [ 13 ] in December The Verge listed Paperclips among the 15 best games of 2017. [ 14 ] Vox Media's Polygon ranked Paperclips as #37 among the 50 best games of 2017 [ 15 ] and #67 in their 100 Best Games of the Decade list. [ 16 ] The game was nominated for "Strategy/Simulation" at the 2018 Webby Awards. [ 17 ]
https://en.wikipedia.org/wiki/Universal_Paperclips
Universal Plug and Play ( UPnP ) is a set of networking protocols on the Internet Protocol (IP) that permits networked devices, such as personal computers, printers, Internet gateways, Wi-Fi access points and mobile devices, to seamlessly discover each other's presence on the network and establish functional network services. UPnP is intended primarily for residential networks without enterprise-class devices. UPnP assumes the network runs IP, and then uses HTTP on top of IP to provide device/service description, actions, data transfer and event notification. Device search requests and advertisements are supported by running HTTP on top of UDP (port 1900) using multicast (known as HTTPMU). Responses to search requests are also sent over UDP, but are instead sent using unicast (known as HTTPU). Conceptually, UPnP extends plug and play, a technology for dynamically attaching devices directly to a computer, to zero-configuration networking for residential and SOHO wireless networks. UPnP devices are plug-and-play in that, when connected to a network, they automatically establish working configurations with other devices, removing the need for users to manually configure and add devices through IP addresses. [ 1 ] UPnP is generally regarded as unsuitable for deployment in business settings for reasons of economy, complexity, and consistency: the multicast foundation makes it chatty, consuming too many network resources on networks with a large population of devices, and the simplified access controls do not map well to complex environments. The UPnP architecture allows device-to-device networking of consumer electronics, mobile devices, personal computers, and networked home appliances. It is a distributed, open-architecture protocol based on established standards such as the Internet Protocol Suite (TCP/IP), HTTP, XML, and SOAP. UPnP control points (CPs) are devices which use UPnP protocols to control UPnP controlled devices (CDs). [ 2 ] The UPnP architecture supports zero-configuration networking. A UPnP-compatible device from any vendor can dynamically join a network, obtain an IP address, announce its name, advertise or convey its capabilities upon request, and learn about the presence and capabilities of other devices. Dynamic Host Configuration Protocol (DHCP) and Domain Name System (DNS) servers are optional and are only used if they are available on the network. Devices can disconnect from the network automatically without leaving state information. UPnP was published as a 73-part international standard, ISO/IEC 29341, in December 2008. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] In addition to HTTP, UPnP uses SOAP and XML on top of IP for device/service description, actions, data transfer and eventing. UPnP uses UDP for discovery because of its lower overhead: it does not require confirmation of received data or retransmission of corrupt packets. HTTPU and HTTPMU were initially submitted as an Internet Draft, but it expired in 2001; [ 9 ] these specifications have since been integrated into the actual UPnP specifications. UPnP uses UDP port 1900, and all used TCP ports are derived from the SSDP alive and response messages. 
[ 10 ] The foundation for UPnP networking is IP addressing. Each device must implement a DHCP client and search for a DHCP server when the device is first connected to the network. If no DHCP server is available, the device must assign itself an address. The process by which a UPnP device assigns itself an address is known within the UPnP Device Architecture as AutoIP . In UPnP Device Architecture Version 1.0, [ 3 ] AutoIP is defined within the specification itself; in UPnP Device Architecture Version 1.1, [ 4 ] AutoIP references IETF RFC 3927 . If during the DHCP transaction, the device obtains a domain name, for example, through a DNS server or via DNS forwarding , the device should use that name in subsequent network operations; otherwise, the device should use its IP address. Once a device has established an IP address, the next step in UPnP networking is discovery. The UPnP discovery protocol is known as the Simple Service Discovery Protocol (SSDP). When a device is added to the network, SSDP allows that device to advertise its services to control points on the network. This is achieved by sending SSDP alive messages. When a control point is added to the network, SSDP allows that control point to actively search for devices of interest on the network or listen passively to the SSDP alive messages of devices. The fundamental exchange is a discovery message containing a few essential specifics about the device or one of its services, for example, its type, identifier, and a pointer (network location) to more detailed information. After a control point has discovered a device, the control point still knows very little about the device. For the control point to learn more about the device and its capabilities, or to interact with the device, the control point must retrieve the device's description from the location ( URL ) provided by the device in the discovery message. The UPnP Device Description is expressed in XML and includes vendor-specific manufacturer information like the model name and number, serial number , manufacturer name, (presentation) URLs to vendor-specific web sites, etc. The description also includes a list of any embedded services. For each service, the Device Description document lists the URLs for control, eventing and service description. Each service description includes a list of the commands , or actions , to which the service responds, and parameters, or arguments , for each action; the description for a service also includes a list of variables ; these variables model the state of the service at run time and are described in terms of their data type, range, and event characteristics. Having retrieved a description of the device, the control point can send actions to a device's service. To do this, a control point sends a suitable control message to the control URL for the service (provided in the device description). Control messages are also expressed in XML using the Simple Object Access Protocol (SOAP). Much like function calls , the service returns any action-specific values in response to the control message. The effects of the action, if any, are modeled by changes in the variables that describe the run-time state of the service. Another capability of UPnP networking is event notification , or eventing . The event notification protocol defined in the UPnP Device Architecture is known as General Event Notification Architecture (GENA). 
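The discovery and description steps just described can be sketched in a few lines of Python. This is a minimal illustration, assuming a live LAN with UPnP devices, rather than a conforming implementation: retries, error handling, and XML namespace edge cases are glossed over. The message formats, however, are the ones the UPnP Device Architecture specifies: an M-SEARCH request multicast to 239.255.255.250:1900, unicast HTTP-style responses carrying a LOCATION header, and an XML device description fetched from that location.

```python
# Minimal sketch of UPnP discovery (SSDP) and description retrieval.
import socket
from urllib.request import urlopen
from xml.etree import ElementTree

# SSDP M-SEARCH, multicast over UDP to 239.255.255.250:1900 (HTTPMU).
MSEARCH = "\r\n".join([
    "M-SEARCH * HTTP/1.1",
    "HOST: 239.255.255.250:1900",
    'MAN: "ssdp:discover"',
    "MX: 2",           # devices spread replies over up to MX seconds
    "ST: ssdp:all",    # search target: all devices and services
    "", "",
]).encode("ascii")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH, ("239.255.255.250", 1900))

# Responses arrive over unicast UDP (HTTPU) as HTTP-style messages;
# the LOCATION header points at the device description document.
locations = set()
try:
    while True:
        data, _ = sock.recvfrom(65507)
        for line in data.decode("ascii", "replace").splitlines():
            if line.lower().startswith("location:"):
                locations.add(line.split(":", 1)[1].strip())
except socket.timeout:
    pass

# The description is XML: friendly name, model info, and a service
# list whose entries carry the control and event URLs used later.
NS = {"d": "urn:schemas-upnp-org:device-1-0"}
for url in locations:
    tree = ElementTree.parse(urlopen(url, timeout=3))
    name = tree.findtext(".//d:friendlyName", default="(unknown)",
                         namespaces=NS)
    services = [s.findtext("d:serviceType", namespaces=NS)
                for s in tree.iterfind(".//d:service", NS)]
    print(url, "->", name, services)
```

The GENA subscription mechanism named above, and elaborated in the next paragraphs, similarly reduces to an HTTP request with a custom method. In the hedged sketch below, the device host, event path, and callback URL are placeholders: in practice the event path comes from the device description, and the callback must point at an HTTP listener that accepts the device's NOTIFY messages.

```python
# Hedged sketch of a GENA event subscription; placeholders throughout.
import http.client

DEVICE_HOST = "192.168.1.1"             # placeholder UPnP device
EVENT_PATH = "/upnp/event/service1"     # placeholder eventSubURL
CALLBACK = "http://192.168.1.50:8000/"  # placeholder NOTIFY listener

conn = http.client.HTTPConnection(DEVICE_HOST, 80, timeout=5)
conn.request("SUBSCRIBE", EVENT_PATH, headers={
    "HOST": DEVICE_HOST,
    "CALLBACK": f"<{CALLBACK}>",  # GENA requires the angle brackets
    "NT": "upnp:event",           # notification type for UPnP eventing
    "TIMEOUT": "Second-1800",     # ask for a 30-minute subscription
})
resp = conn.getresponse()
# On success the device returns a SID (subscription ID) used to renew
# or cancel; the initial NOTIFY that follows carries the names and
# values of all evented state variables.
print(resp.status, resp.getheader("SID"), resp.getheader("TIMEOUT"))
```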
A UPnP description for a service includes a list of actions the service responds to and a list of variables that model the state of the service at run time. The service publishes updates when these variables change, and a control point may subscribe to receive this information. The service publishes updates by sending event messages. Event messages contain the names of one or more state variables and the current value of those variables. These messages are also expressed in XML. A special initial event message is sent when a control point first subscribes; this event message contains the names and values for all evented variables and allows the subscriber to initialize its model of the state of the service. To support scenarios with multiple control points, eventing is designed to keep all control points equally informed about the effects of any action. Therefore, all subscribers are sent all event messages, subscribers receive event messages for all "evented" variables that have changed, and event messages are sent no matter why the state variable changed (either in response to a requested action or because the state the service is modeling changed). The final step in UPnP networking is presentation. If a device has a URL for presentation, then the control point can retrieve a page from this URL, load the page into a web browser, and, depending on the capabilities of the page, allow a user to control the device and/or view device status. The degree to which each of these can be accomplished depends on the specific capabilities of the presentation page and device. The UPnP AV architecture is an audio and video extension of UPnP, supporting a variety of devices such as TVs, VCRs, CD/DVD players/jukeboxes, set-top boxes, stereo systems, MP3 players, still image cameras, camcorders, electronic picture frames (EPFs), and personal computers. The UPnP AV architecture allows devices to support different types of formats for the entertainment content, including MPEG2, MPEG4, JPEG, MP3, Windows Media Audio (WMA), bitmaps (BMP), and NTSC, PAL or ATSC formats. Multiple types of transfer protocols are supported, including IEEE 1394, HTTP, RTP and TCP/IP. [ 11 ] On 12 July 2006, the UPnP Forum announced the release of version 2 of the UPnP Audio and Video specifications, [ 12 ] with new MediaServer (MS) version 2.0 and MediaRenderer (MR) version 2.0 classes. These enhancements were created by adding capabilities to the MediaServer and MediaRenderer device classes, allowing a higher level of interoperability between products made by different manufacturers. Some of the early devices complying with these standards were marketed by Philips under the Streamium brand name. Since 2006, versions 3 and 4 of the UPnP audio and video device control protocols have been published. [ 13 ] In March 2013, an updated UPnP AV architecture specification was published, incorporating the updated device control protocols. [ 11 ] UPnP Device Architecture 2.0 was released in April 2020. The UPnP AV standards have been referenced in specifications published by other organizations, including the Digital Living Network Alliance Networked Device Interoperability Guidelines, [ 14 ] International Electrotechnical Commission IEC 62481-1, [ 15 ] and Cable Television Laboratories OpenCable Home Networking Protocol. [ 16 ]
Generally, a UPnP audio/video (AV) architecture consists of media servers, media renderers, and control points. [ 17 ] A UPnP AV media server is the UPnP server ("master" device) that provides media library information and streams media data (such as audio, video, and picture files) to UPnP clients on the network. It is a computer system or a similar digital appliance that stores digital media, such as photographs, movies, or music, and shares these with other devices. UPnP AV media servers provide a service to UPnP AV client devices, so-called control points, for browsing the media content of the server and requesting the media server to deliver a file to the control point for playback. UPnP media servers are available for most operating systems and many hardware platforms. UPnP AV media servers can be categorized as either software-based or hardware-based. Software-based UPnP AV media servers can be run on a PC. Hardware-based UPnP AV media servers may run on NAS devices or on specific hardware for delivering media, such as a DVR. As of May 2008, there were more software-based UPnP AV media servers than there were hardware-based servers. One solution for NAT traversal, called the Internet Gateway Device Control Protocol (UPnP IGD Protocol), is implemented via UPnP. Many routers and firewalls expose themselves as Internet Gateway Devices, allowing any local UPnP control point to perform a variety of actions, including retrieving the external IP address of the device, enumerating existing port mappings, and adding or removing port mappings. By adding a port mapping, a UPnP controller behind the IGD can enable traversal of the IGD from an external address to an internal client. There are numerous compatibility issues due to differing interpretations of the very large, nominally backward-compatible IGDv1 and IGDv2 specifications. One of them affects the UPnP IGD client integrated into current Microsoft Windows and Xbox systems when used with certified IGDv2 routers: the issue has existed since the introduction of the IGDv1 client in Windows XP in 2001, and on an IGDv2 router without a workaround it makes router port mapping impossible. [ 19 ] If UPnP is used only to control router port mappings and pinholes, there are newer, much simpler and more lightweight alternatives such as PCP and NAT-PMP, both of which have been standardized as RFCs by the IETF. These alternatives are not yet known to have compatibility issues between different clients and servers, but adoption is still low. Among consumer routers, only AVM and the open-source router software projects OpenWrt, OPNsense, and pfSense are currently known to support PCP as an alternative to UPnP. AVM's Fritz!Box implementation of UPnP IGDv2 and PCP has been very buggy since its introduction, and in many cases it does not work. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] The UPnP protocol, by default, does not implement any authentication, so UPnP device implementations must implement the additional Device Protection service [ 25 ] or the Device Security Service. [ 26 ] There also exists a non-standard solution called UPnP-UP (Universal Plug and Play - User Profile), [ 27 ] [ 28 ] which proposes an extension to allow user authentication and authorization mechanisms for UPnP devices and applications. Many UPnP device implementations lack authentication mechanisms, and by default assume local systems and their users are completely trustworthy. [ 29 ] [ 30 ]
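The IGD port-mapping mechanism described above is also what makes the security discussion that follows so consequential: by default, any local program can reconfigure the router with a single unauthenticated SOAP call. The sketch below shows roughly what an AddPortMapping request on the WANIPConnection:1 service looks like; the control URL is a placeholder that would normally be read from the router's device description (see the discovery sketch earlier).

```python
# Illustrative sketch of a UPnP IGD AddPortMapping SOAP call.
import urllib.request

CONTROL_URL = "http://192.168.1.1:49152/upnp/control/WANIPConn1"  # placeholder
SERVICE = "urn:schemas-upnp-org:service:WANIPConnection:1"

body = f"""<?xml version="1.0"?>
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:AddPortMapping xmlns:u="{SERVICE}">
      <NewRemoteHost></NewRemoteHost>
      <NewExternalPort>8080</NewExternalPort>
      <NewProtocol>TCP</NewProtocol>
      <NewInternalPort>8080</NewInternalPort>
      <NewInternalClient>192.168.1.50</NewInternalClient>
      <NewEnabled>1</NewEnabled>
      <NewPortMappingDescription>example mapping</NewPortMappingDescription>
      <NewLeaseDuration>3600</NewLeaseDuration>
    </u:AddPortMapping>
  </s:Body>
</s:Envelope>"""

req = urllib.request.Request(
    CONTROL_URL,
    data=body.encode("utf-8"),
    headers={
        "Content-Type": 'text/xml; charset="utf-8"',
        # SOAPACTION names the service type and the action invoked.
        "SOAPACTION": f'"{SERVICE}#AddPortMapping"',
    },
    method="POST",
)
with urllib.request.urlopen(req, timeout=5) as resp:
    print(resp.status, resp.read()[:200])
```

Note that nothing in this exchange authenticates the caller, which is precisely the weakness the following paragraphs describe.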
When the authentication mechanisms are not implemented, routers and firewalls running the UPnP IGD protocol are vulnerable to attack. For example, Adobe Flash programs running outside the sandbox of the browser (which requires a specific version of Adobe Flash with acknowledged security issues) are capable of generating a specific type of HTTP request that allows a router implementing the UPnP IGD protocol to be controlled by a malicious web site when someone with a UPnP-enabled router simply visits that web site. [ 31 ] This applies only to the "firewall hole punching" feature of UPnP; it does not apply when the router or firewall does not support UPnP IGD, or when UPnP has been disabled on the router. Also, not all routers can have such things as DNS server settings altered by UPnP, because much of the specification (including LAN Host Configuration) is optional for UPnP-enabled routers. [ 6 ] As a result, some UPnP devices ship with UPnP turned off by default as a security measure. In 2011, researcher Daniel Garcia developed a tool designed to exploit a flaw in some UPnP IGD device stacks that allow UPnP requests from the Internet. [ 32 ] [ 33 ] The tool was made public at DEFCON 19 and allows port-mapping requests to the device's external IP address and to internal IP addresses behind the NAT. The problem is widespread, with scans showing millions of vulnerable devices at a time. [ 34 ] In January 2013, the security company Rapid7 in Boston reported [ 35 ] on a six-month research programme. A team had scanned for signals from UPnP-enabled devices announcing their availability for internet connection. Some 6,900 network-aware products from 1,500 companies at 81 million IP addresses responded to their requests. 80% of the devices were home routers; others included printers, webcams and surveillance cameras. Using the UPnP protocol, many of those devices can be accessed and/or manipulated. In February 2013, the UPnP Forum responded in a press release [ 36 ] by recommending more recent versions of the UPnP stacks in use, and by improving the certification program to include checks to avoid further such issues. UPnP is often the only significant multicast application in use in digital home networks; therefore, multicast network misconfiguration or other deficiencies can appear as UPnP issues rather than underlying network issues. If IGMP snooping is enabled on a switch, or more commonly a wireless router/switch, it will interfere with UPnP/DLNA device discovery (SSDP) if incorrectly or incompletely configured (e.g. without an active querier or IGMP proxy), making UPnP appear unreliable. Typical scenarios observed include a server or client (e.g. a smart TV) appearing after power-on and then disappearing after a few minutes (often 30 by default configuration) due to IGMP group membership expiring. On 8 June 2020, yet another protocol design flaw was announced. [ 37 ] Dubbed "CallStranger" [ 38 ] by its discoverer, it allows an attacker to subvert the event subscription mechanism and execute a variety of attacks: amplification of requests for use in DDoS, enumeration, and data exfiltration. OCF had published a fix to the protocol specification in April 2020, [ 39 ] but since many devices running UPnP are not easily upgradable, CallStranger is likely to remain a threat for a long time to come. [ 40 ] CallStranger has fueled calls for end-users to abandon UPnP because of repeated failures in the security of its design and implementation. [ 41 ]
The UPnP protocols were promoted by the UPnP Forum (formed in October 1999), [ 42 ] a computer industry initiative to enable simple and robust connectivity to standalone devices and personal computers from many different vendors. The Forum consisted of more than 800 vendors involved in everything from consumer electronics to network computing. Since 2016, all UPnP efforts have been managed by the Open Connectivity Foundation (OCF). In the fall of 2008, the UPnP Forum ratified the successor to the UPnP 1.0 Device Architecture, UPnP 1.1. [ 43 ] The Devices Profile for Web Services (DPWS) standard was a candidate successor to UPnP, but UPnP 1.1 was selected by the UPnP Forum. Version 2 of IGD is standardized. [ 44 ] The UPnP Internet Gateway Device (IGD) [ 6 ] standard has a WANIPConnection service, which provides similar functionality to the IETF-standard Port Control Protocol. The NAT-PMP specification contains a list of the problems with IGDP [ 45 ]: 26–32 that prompted the creation of NAT-PMP and its successor PCP. A number of further standards have been defined for the UPnP Device Architecture.
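To make the IGDP-versus-PCP comparison concrete, here is a hedged sketch of the NAT-PMP equivalent of the AddPortMapping call shown earlier. Per RFC 6886, the entire mapping request is a single 12-byte UDP packet sent to the gateway on port 5351; the gateway address below is a placeholder.

```python
# Hedged sketch of a NAT-PMP port-mapping request (RFC 6886).
import socket
import struct

GATEWAY = "192.168.1.1"   # placeholder: the default gateway
NATPMP_PORT = 5351

# version=0, opcode=2 (map TCP), reserved, internal port,
# suggested external port, requested lifetime in seconds
request = struct.pack("!BBHHHI", 0, 2, 0, 8080, 8080, 3600)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
sock.sendto(request, (GATEWAY, NATPMP_PORT))
try:
    reply, _ = sock.recvfrom(16)
    # Reply: version, opcode (128 + request opcode), result code,
    # seconds since epoch, internal port, mapped external port, lifetime
    vers, op, result, epoch, internal, external, lifetime = struct.unpack(
        "!BBHIHHI", reply)
    print("result", result, "external port", external, "lifetime", lifetime)
except socket.timeout:
    print("no NAT-PMP response (gateway may not support it)")
```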
https://en.wikipedia.org/wiki/Universal_Plug_and_Play