Dataset column summary (from the dataset viewer): id (int64, 39 to 79M), url (string, 32–168 characters), text (string, 7–145k characters), source (string, 2–105 characters), categories (list, 1–6 items), token_count (int64, 3–32.2k), subcategories (list, 0–27 items).
39,134,192
https://en.wikipedia.org/wiki/Furoquinoline%20alkaloid
Furoquinoline alkaloids are a group of alkaloids with a simple structure. The distribution of this group of alkaloids is essentially limited to the plant family Rutaceae. The simplest member of the group is dictamnine and the most widespread member is skimmianine. Dictamnine is very common within the family Rutaceae. It is the main alkaloid in the roots of Dictamnus albus and is responsible for the mutagenicity of the drug derived from crude extracts. Dictamnine has also been reported to be a phototoxic and photomutagenic compound, and it contributes to the severe skin phototoxicity of the plant. Another furoquinoline alkaloid, skimmianine, has strong antiacetylcholinesterase activity. Chemistry Thomas first isolated dictamnine from the Rutaceae in 1923. It is a very weak base; with methyl iodide, dimethyl sulfate or diazomethane it does not form a derivative but instead isomerizes to isodictamnine. Dictamnine has a linear structure, which was confirmed by its oxidative degradation with potassium permanganate to dictamnic acid. Dieckmann cyclization followed by methylation and hydrolysis confirmed the structure of the acid. Skimmianine, another common furoquinoline alkaloid, shows chemistry very similar to that of dictamnine. Skimmianine also has a linear structure, as it gave 3-ethyl-4,7,8-trimethoxy-2-quinolone on hydrolysis. Pharmacological properties Some furoquinoline alkaloids have been found to have in vitro pharmacological properties such as antimicrobial, antiviral, mutagenic and cytotoxic activities. They also show antiplatelet-aggregation activity, inhibition of various enzymes, and antibacterial and antifungal activity. Dictamnine causes smooth muscle contraction. Skimmianine, extracted from Esenbeckia leiocarpa Engl. (Rutaceae), a native tree of Brazil popularly known as guarantã, shows acetylcholinesterase inhibition. Furoquinoline alkaloids extracted from Teclea afzelii (Rutaceae) plants, collected at Elounden in the Centre Province of Cameroon, have antiplasmodial activity. Another study shows that some furoquinoline alkaloids have in vitro activity against Plasmodium falciparum, one of the species of Plasmodium that causes malaria in humans. One furoquinoline alkaloid, 5-(1,1-dimethylallyl)-8-hydroxy-furo[2,3-b]quinolone, shows antifungal properties against Rhizoctonia solani, Sclerotium rolfsi, and Fusarium solani. These fungi cause root-rot and wilt diseases in potato, sugar beet and tomato. Spectral properties In the UV spectrum an intense band is observed at 235 nm and a very broad band in the 290–335 nm region. Compared with the UV spectrum, the IR spectrum is less characteristic: a band appears in the 1090–1110 cm−1 region but does not indicate a particular vibration. NMR spectroscopy is the best way to determine the structure of furoquinoline alkaloids. The C-2 proton resonates in the 7.50–7.60 ppm region and the C-3 proton in the 6.90–7.10 ppm region. Aromatic methoxy groups give signals in the 4.0–4.2 ppm region, but the 4-methoxy group gives a signal at about 4.40 ppm. References External links Furans Quinoline alkaloids
Furoquinoline alkaloid
[ "Chemistry" ]
814
[ "Quinoline alkaloids", "Alkaloids by chemical classification" ]
39,134,574
https://en.wikipedia.org/wiki/V-2%20sounding%20rocket
German V-2 rockets captured by the United States Army at the end of World War II were used as sounding rockets to carry scientific instruments into the Earth's upper atmosphere, and into sub-orbital space, at White Sands Missile Range (WSMR) for a program of atmospheric and solar investigation through the late 1940s. The rocket trajectory was intended to carry the rocket about high and horizontally from WSMR Launch Complex 33. Impact velocity of returning rockets was reduced by inducing structural failure of the rocket airframe upon atmospheric re-entry. More durable recordings and instruments might be recovered from the rockets after ground impact, but telemetry was developed to transmit and record instrument readings during flight. History The first of 300 railroad cars of V-2 rocket components began to arrive at Las Cruces, New Mexico in July 1945 for transfer to WSMR. So much equipment was taken from Germany that the Deutsches Museum later had to obtain a V-2 for an exhibit from the US. In November General Electric (GE) employees began to identify, sort, and reassemble V-2 rocket components in WSMR Building 1538, designated as WSMR Assembly Building 1. The Army completed a blockhouse in WSMR Launch Area 1 in September 1945. WSMR Launch Complex 33 for the captured V-2s was built around this blockhouse. Initial V-2 assembly efforts produced 25 rockets available for launch. The Army assembled an Upper Atmosphere Research Panel of representatives from the Air Materiel Command, Naval Research Laboratory (NRL), Army Signal Corps, Ballistic Research Laboratory, Applied Physics Laboratory, University of Michigan, Harvard University, Princeton University, and General Electric Company. German rocket scientists of Operation Paperclip arrived at Fort Bliss in January 1946 to assist the V-2 rocket testing program. After a static test firing of a V-2 engine on 15 March 1946, the first V-2 rocket launch from Launch Complex 33 was on 16 April 1946. As the possibilities of the program were realized, GE personnel built new control components to replace deteriorated parts and used replacement parts together with salvaged materials to make more than 75 V-2 sounding rockets available for atmospheric and solar investigation at WSMR. Approximately two V-2 launches per month were scheduled from Launch Complex 33 until the supply of V-2 sounding rockets was exhausted. A reduced frequency of V-2 sounding rocket investigations from Launch Complex 33 continued until 1952. See also: Launches of captured V-2 rockets in the United States after 1945 Modifications The explosive warhead in the nose cone was replaced by a package of instrumentation averaging . Instrumentation was sometimes added to the control compartment, in the rear motor section, between the fuel tanks, or on the fins or skin of the rocket. Nose cone instrumentation was typically assembled at participating laboratories and flown to WSMR to be joined to the rocket in Assembly Building 1. Rockets returning to Earth intact created an impact crater about wide and of similar depth which filled with debris to a depth of about . In an effort to preserve instruments, dynamite was strategically placed within the airframe to be detonated at an elevation of during downward flight at the end of the high-altitude scientific observation interval. These explosives weakened the rocket structure so it would be torn apart by aerodynamic forces as it re-entered the denser lower atmosphere. Terminal velocity of tumbling fragments was reduced by an order of magnitude.
Performance V-2 sounding rockets were long and in diameter and weighed with a full load of liquid fuel contributing two-thirds of that weight. The fuel was consumed in the first minute of flight producing a thrust of . Maximum acceleration of 6 Gs was reached at minimum fuel weight just before burnout, and vibrational accelerations were of similar magnitude during powered flight. Velocity at burnout was approximately per second. The rocket would typically have a small, unpredictable angular momentum at burnout causing unpredictable roll with pitch or yaw as it coasted upward approximately . A typical flight provided an observation window of 5 minutes at altitudes above . Instrumentation Servomechanisms were devised to compensate for rocket aspect changes as it tumbled after burnout. These allowed Sun-tracking devices to measure the solar electromagnetic spectrum. Limited success was achieved with parachute recovery of instrumentation, but some of the more durable instruments or recordings within the rocket airframe could withstand impact with the earth at subsonic velocities. NRL developed a telemetry system using 23-channel pulse-time modulation. The voltage presented to the input terminals of a given channel determined the spacing between two adjacent pulses, not entirely unlike the technique of pulse-position modulation. The space between the first and second pulses was determined by channel 1, between the second and third pulses by channel 2, and so forth. The system made 200 samplings per second of 24 pulses. Information was transmitted via high-power frequency modulation. Ground receiving stations translated pulse spacings back into voltages which were applied to a bank of string galvanometers to make an approximately continuous record of each channel on a moving roll of film. Accuracy was within approximately 5 percent. Scientific operations A 1946 Naval Research Laboratory launch took the first photographs of the Sun in the ultraviolet spectrum up to an altitude of . The first night flight of a V-2 sounding rocket began at 10:00 pm (MST) on 17 December 1946 on an Applied Physics Laboratory flight. This rocket carried several explosive charges that generated artificial meteors, which could be observed photographically. The experiment package was installed by James Van Allen. Though the flight itself was photographed by observers as far away as Tucson, Arizona, the charges and expected meteors were not, and it is likely they did not fire. Animal tests The first animals sent into space were fruit flies aboard a U.S.-launched V-2 rocket on 20 February 1947 from White Sands Missile Range, New Mexico. The purpose of the experiment was to explore the effects of radiation exposure at high altitudes. The rocket reached 68 miles (109 km) in 3 minutes and 10 seconds, past both the U.S. Air Force 50-mile and the international 100 km definitions of the boundary of space. The Blossom capsule was ejected and successfully deployed its parachute. The fruit flies were recovered alive. Other V-2 missions carried biological samples, including seeds. Albert II, a rhesus monkey, became the first primate and first mammal in space on 14 June 1949, in a U.S.-launched V-2, after the failure of the original Albert's mission on ascent. Albert I reached only 30–39 miles (48–63 km) altitude; Albert II reached about 83 miles (134 km). Albert II died on impact after a parachute failure. Numerous monkeys of several species were flown by the U.S. in the 1950s and 1960s.
Monkeys were implanted with sensors to measure vital signs, and many were under anesthesia during launch. The death rate among monkeys at this stage was very high: about two-thirds of all monkeys launched in the 1940s and 1950s died on missions or soon after landing. See also Hermes (missile program) RTV-G-4 Bumper V-2 No. 13 Spaceflight before 1951 References White Sands Missile Range Meteorological instrumentation and equipment 1940s in spaceflight Rockets and missiles
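The pulse-time telemetry described in the Instrumentation section above lends itself to a small illustration. The toy sketch below is hypothetical: the channel count, 24-pulse frame and 200 frames per second come from the text, while the timing constants and full-scale voltage are invented purely for the example.

```python
# Toy model of 23-channel pulse-time telemetry: the voltage on channel k sets the
# spacing between pulses k and k+1 in a 24-pulse frame (200 frames per second).
# BASE_GAP_US, SPAN_US and V_FULL_SCALE are invented illustration constants.

BASE_GAP_US = 100.0    # assumed minimum gap between adjacent pulses, microseconds
SPAN_US = 100.0        # assumed additional gap at full-scale input
V_FULL_SCALE = 5.0     # assumed full-scale channel voltage

def encode_frame(voltages):
    """Turn 23 channel voltages into the times of 24 pulses (microseconds)."""
    assert len(voltages) == 23
    times = [0.0]
    for v in voltages:
        v = max(0.0, min(v, V_FULL_SCALE))
        times.append(times[-1] + BASE_GAP_US + SPAN_US * v / V_FULL_SCALE)
    return times

def decode_frame(times):
    """Recover the channel voltages from the spacings between adjacent pulses."""
    return [(t1 - t0 - BASE_GAP_US) / SPAN_US * V_FULL_SCALE
            for t0, t1 in zip(times, times[1:])]

frame = encode_frame([0.5 + 0.1 * k for k in range(23)])
print(decode_frame(frame)[:3])   # recovers the first three channel voltages
```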
V-2 sounding rocket
[ "Technology", "Engineering" ]
1,458
[ "Meteorological instrumentation and equipment", "Measuring instruments" ]
29,638,267
https://en.wikipedia.org/wiki/Constrained%20Delaunay%20triangulation
In computational geometry, a constrained Delaunay triangulation is a generalization of the Delaunay triangulation that forces certain required segments into the triangulation as edges, unlike the Delaunay triangulation itself which is based purely on the position of a given set of vertices without regard to how they should be connected by edges. It can be computed efficiently and has applications in geographic information systems and in mesh generation. Definition The input to the constrained Delaunay triangulation problem is a planar straight-line graph, a set of points and non-crossing line segments in the plane. The constrained Delaunay triangulation of this input is a triangulation of its convex hull, including all of the input segments as edges, and using only the vertices of the input. For every additional edge added to this input to make it into a triangulation, there should exist a circle through the endpoints of , such that any vertex interior to the circle is blocked from visibility from at least one endpoint of by a segment of the input. This generalizes the defining property of two-dimensional Delaunay triangulations of points, that each edge have a circle through its two endpoints containing no other vertices. A triangulation satisfying these properties always exists. Jonathan Shewchuk has generalized this definition to constrained Delaunay triangulations of three-dimensional inputs, systems of points and non-crossing segments and triangles in three-dimensional space; however, not every input of this type has a constrained Delaunay triangulation according to his generalized definition. Algorithms Several algorithms for computing constrained Delaunay triangulations of planar straight-line graphs in time are known. The constrained Delaunay triangulation of a simple polygon can be constructed in linear time. Applications In topographic surveying, one constructs a triangulation from points shot in the field. If an edge of the triangulation crosses a river, the resulting surface does not accurately model the path of the river. So one draws break lines along rivers, edges of roads, mountain ridges, and the like. The break lines are used as constraints when constructing the triangulation. Constrained Delaunay triangulation can also be used in Delaunay refinement methods for mesh generation, as a way to force the mesh to conform with the domain boundaries as it is being refined. References External links Open Source implementation. Geometry processing Triangulation (geometry)
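As a concrete illustration of the definition, the sketch below computes a constrained Delaunay triangulation of a small planar straight-line graph using the Python `triangle` package, a wrapper around Shewchuk's Triangle; this is one possible implementation choice, not the only one, and the sample vertices and forced segment are made up for the example.

```python
# Constrained Delaunay triangulation of a small planar straight-line graph,
# using the 'triangle' package (pip install triangle), a wrapper of Shewchuk's Triangle.
import triangle

pslg = {
    "vertices": [[0, 0], [4, 0], [4, 3], [0, 3], [2, 1], [2, 2]],
    "segments": [[4, 5]],  # this segment is forced to appear as an edge
}

# The 'p' switch treats the input as a planar straight-line graph and keeps
# the listed segments as edges of the resulting triangulation.
cdt = triangle.triangulate(pslg, "p")

print(cdt["triangles"])  # index triples into cdt['vertices'], one per triangle
```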
Constrained Delaunay triangulation
[ "Mathematics" ]
506
[ "Triangulation (geometry)", "Planes (geometry)", "Planar graphs" ]
29,648,687
https://en.wikipedia.org/wiki/Compact%20toroid
A compact toroid (CT) is a type of plasmoid, a class of toroidal plasma configuration that is self-stable and does not need magnet coils running through the center of the toroid. They are studied mainly in the field of fusion power, where a lack of complex magnets and a simple geometry may allow building dramatically simpler and less costly fusion reactors. The two best studied compact toroids are the spheromak and the field-reversed configuration (FRC). A third configuration, the particle ring, lacks attractive performance to date. The concept of a CT containment system, in which the plasma is shaped toroidally by the containment itself, was first introduced by Alfvén. Of the two exemplar types, the first, the field-reversed configuration, has no toroidal field and is generally produced in prolate theta-pinches operated with a reversed bias magnetic field. The second type, the spheromak, carries a toroidal field and is similar in arrangement to a vortex ring such as a smoke ring. The FRC is also toroidal, but extended into a tubular shape or hollow cylinder. The main difference between the two is that the spheromak contains poloidal (vertical rings) and toroidal (horizontal) magnetic fields, while the FRC has only the poloidal fields and requires an external magnet for confinement. In both cases the combination of electrical currents and their associated magnetic fields results in a series of closed magnetic field lines that maintains the ring shape, without the need for magnets in the plasma center, unlike a tokamak. Of the two, the FRC naturally has a higher beta, a measure of fusion economics. However, the spheromak has generated better confinement times and temperatures, and recent work suggests that great advances in performance can be made. Compact toroids are also similar to the spherical tokamak, and many spherical tokamak machines were converted from earlier spheromak reactors. See also High beta fusion reactor References Bibliography "ProtoSphera, General Framework", CR-ENEA Frascati, July 2001 Fusion power
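For context on the beta comparison above: plasma beta is not defined in the text, but in its standard form (a well-known definition, stated here only as background) it is the ratio of plasma pressure to magnetic pressure,

\[ \beta = \frac{p}{B^2 / (2\mu_0)} , \]

so a higher-beta configuration such as the FRC confines a given plasma pressure with a weaker, and therefore cheaper, magnetic field, which is why beta serves as a rough measure of fusion economics.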
Compact toroid
[ "Physics", "Chemistry" ]
445
[ "Nuclear fusion", "Plasma physics stubs", "Fusion power", "Plasma physics" ]
37,662,322
https://en.wikipedia.org/wiki/Brandt%20matrix
In mathematics, Brandt matrices are matrices, introduced by Brandt, that are related to the number of ideals of given norm in an ideal class of a definite quaternion algebra over the rationals, and that give a representation of the Hecke algebra. Eichler calculated the traces of the Brandt matrices. Let O be an order in a quaternion algebra with class number H, and I1,...,IH invertible left O-ideals representing the classes. Fix an integer m. Let ej denote the number of units in the right order of Ij and let Bij denote the number of α in Ij−1Ii with reduced norm N(α) equal to mN(Ii)/N(Ij). The Brandt matrix B(m) is the H×H matrix with entries Bij. Up to conjugation by a permutation matrix it is independent of the choice of representatives Ij; it depends only on the level of the order O. References Number theory Matrices
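Restated as a formula, the definition of the entries given above reads

\[ B_{ij}(m) \;=\; \#\left\{\, \alpha \in I_j^{-1} I_i \;:\; N(\alpha) = \frac{m\, N(I_i)}{N(I_j)} \,\right\}, \qquad B(m) = \bigl(B_{ij}(m)\bigr)_{1 \le i,j \le H}. \]

Note that the unit count e_j is introduced in the text but not used in this count; in a number of standard treatments the entry is normalized by dividing by e_j, so the exact normalization should be checked against the reference being followed.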
Brandt matrix
[ "Mathematics" ]
207
[ "Discrete mathematics", "Mathematical objects", "Matrices (mathematics)", "Matrix stubs", "Number theory" ]
37,662,627
https://en.wikipedia.org/wiki/Greystone%20%28architecture%29
Greystones are a style of residential building most commonly found in Chicago, Illinois, United States. As the name suggests, the buildings are typically grey in color and were most often built with Bedford Limestone quarried from South Central Indiana. In Chicago, there are roughly 30,000 greystones, usually built as a semi- or fully detached townhouse. The term "greystone" is also used to refer to buildings in Montreal, Quebec, Canada (known in French as pierre grise). It refers to the grey limestone facades of many buildings, both residential and institutional, constructed between 1730 and 1920. History and usage The building style first began to appear in the 1890s, initially in neighborhoods like Woodlawn and then North Lawndale and Lake View, and continued through the 1930s with two major approaches in design. The first style, between 1890 and 1905, was Romanesque in nature with arches and cornices. This initial style and the choice of grey limestone occurred as the city rebuilt and grew in economic power after the Great Chicago Fire in 1871, though the buildings were designed for a wide range of socioeconomic classes. The second style was predominately built in a Neoclassical design incorporating smoother limestone blocks featuring columns and bay windows. Greystones were built in a wide variety of sizes to accommodate different residential needs, with most being two to three floors in size and many commonly containing two to three flats but some up to six. Regardless of their size, they were always built with the limestone facade facing the street to take advantage of the limited size of standard Chicago lots. There are an estimated 30,000 greystones still remaining in the city and many citizens, architects and preservationists are working to revive those that remain through the Historic Chicago Greystone Initiative. Many greystones are preserved as the multi-family structures they were designed and built as. Today, greystones often retain original Romanesque or Neoclassical details such as "roughly carved blocks of greystone and intricately carved column capitals," though many were built in other styles. Styles There are many different styles of greystones, with the City of Chicago defining most attributes for the style for landmark status. Romanesque Revival "Heavy, rough-cut stone walls Round arches and squat columns Deeply recessed windows Pressed metal bays and turrets" Queen Anne "Rich but simple ornament Wide variety of materials, including wood, stone and pressed metal Expansive porches Pressed metal bays and turrets Irregular roofline with many dormers and chimneys" Chateauesque "Vertical proportions Massive-looking masonry walls Ornate carved stone ornament High-peaked hipped roofs, elaborate dormers and tall chimneys" Classical Revival/Beaux Arts "Symmetrical facades Minimal use of bays, towers, or other projecting building elements Classical ornament, including columns, cornices and triangular pediments Wide variety of materials, including brick, stone and wood" See also Brownstone References External links Greystone Certification Program American architectural styles Architecture in Illinois Buildings and structures in Chicago Building materials History of Chicago House styles Industrial minerals Limestone
Greystone (architecture)
[ "Physics", "Engineering" ]
607
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
37,663,509
https://en.wikipedia.org/wiki/ArduPilot
ArduPilot is an open-source, uncrewed-vehicle autopilot software suite, capable of controlling: Multirotor drones Fixed-wing and VTOL aircraft Helicopters ROVs Ground rovers Boats Submarines Uncrewed Surface Vessels (USVs) Antenna trackers Blimps ArduPilot was originally developed by hobbyists to control model aircraft and rovers and has evolved into a full-featured and reliable autopilot used by industry, research organisations and amateurs. Software and Hardware Software suite The ArduPilot software suite consists of navigation software (typically referred to as firmware when it is compiled to binary form for microcontroller hardware targets) running on the vehicle (either Copter, Plane, Rover, AntennaTracker, or Sub), along with ground station controlling software including Mission Planner, APM Planner, QGroundControl, MavProxy, Tower and others. ArduPilot source code is stored and managed on GitHub, with over 800 contributors. The software suite is automatically built nightly, with continuous integration and unit testing provided by Travis CI, and a build and compiling environment including the GNU cross-platform compiler and Waf. Pre-compiled binaries running on various hardware platforms are available for user download from ArduPilot's sub-websites. Supported hardware Copter, Plane, Rover, AntennaTracker, or Sub software runs on a wide variety of embedded hardware (including full-blown Linux computers), typically consisting of one or more microcontrollers or microprocessors connected to peripheral sensors used for navigation. These sensors include MEMS gyroscopes and accelerometers at a minimum, necessary for multirotor flight and plane stabilization. Sensors usually also include one or more compasses, a barometric altimeter and a GPS receiver, along with optional additional sensors such as optical flow sensors, airspeed indicators, laser or sonar altimeters or rangefinders, and monocular, stereoscopic or RGB-D cameras. Sensors may be on the same electronic board, or external. Ground station software, used for programming or monitoring vehicle operation, is available for Windows, Linux, macOS, iOS, and Android. ArduPilot runs on a wide variety of hardware platforms, including the following, listed in alphabetical order: Intel Aero (Linux or STM32 base) APM 2.X (Atmel Mega microcontroller Arduino base), designed by Jordi Munoz in 2010. APM, for ArduPilotMega, runs only older versions of ArduPilot. BeagleBone Blue and PXF Mini (BeagleBone Black cape). The Cube, formerly called Pixhawk 2 (ARM Cortex microcontroller base), designed by ProfiCNC in 2015. Edge, a drone controller with video streaming system, designed by Emlid. Erle-Brain (Linux base), designed by Erle Robotics. Intel Minnowboard (Linux base). Navigator Flight Controller by Blue Robotics Navio2 and Navio+ (Raspberry Pi Linux based), designed by Emlid. Parrot Bebop and Parrot C.H.U.C.K., designed by Parrot, S.A. Pixhawk (ARM Cortex microcontroller base), originally designed by Lorenz Meier and ETH Zurich, improved and launched in 2013 by PX4, 3DRobotics, and the ArduPilot development team. PixRacer (ARM Cortex microcontroller base), designed by AUAV. Qualcomm SnapDragon (Linux base). Virtual Robotics VRBrain (ARM Cortex microcontroller base). Xilinx SoC Zynq processor (Linux base, ARM and FPGA processor). In addition to the above base navigation platforms, ArduPilot supports integration and communication with on-vehicle companion, or auxiliary, computers for advanced navigation requiring more powerful processing.
These include NVidia TX1 and TX2 ( Nvidia Jetson architecture), Intel Edison and Intel Joule, HardKernel ODROID, and Raspberry Pi computers. Features Common to all vehicles ArduPilot provides a large set of features, including the following common for all vehicles: Fully autonomous, semi-autonomous and fully manual flight modes, programmable missions with 3D waypoints, optional geofencing. Stabilization options to negate the need for a third party co-pilot. Simulation with a variety of simulators, including ArduPilot Software in the Loop (SITL) Simulator. Large number of navigation sensors supported, including several models of RTK GPSs, traditional L1 GPSs, barometers, magnetometers, laser and sonar rangefinders, optical flow, ADS-B transponder, infrared, airspeed, sensors, and computer vision/motion capture devices. Sensor communication via SPI, I²C, CAN Bus, Serial communication, SMBus. Failsafes for loss of radio contact, GPS and breaching a predefined boundary, minimum battery power level. Support for navigation in GPS denied environments, with vision-based positioning, optical flow, SLAM, Ultra Wide Band positioning. Support for actuators such as parachutes and magnetic grippers. Support for brushless and brushed motors. Photographic and video gimbal support and integration. Integration and communication with powerful secondary, or "companion", computers Rich documentation through ArduPilot wiki. Support and discussion through ArduPilot discourse forum, Gitter chat channels, GitHub, Facebook. Copter-specific Flight modes: Stabilize, Alt Hold, Loiter, RTL (Return-to-Launch), Auto, Acro, AutoTune, Brake, Circle, Drift, Guided, (and Guided_NoGPS), Land, PosHold, Sport, Throw, Follow Me, Simple, Super Simple, Avoid_ADSB. Auto-tuning Wide variety of frame types supported, including tricopters, quadcopters, hexacopters, flat and co-axial octocopters, and custom motor configurations Support for traditional electric and gas helicopters, mono copters, tandem helicopters. Plane-specific Fly By Wire modes, loiter, auto, acrobatic modes. Take-off options: Hand launch, bungee, catapult, vertical transition (for VTOL planes). Landing options: Adjustable glide slope, helical, reverse thrust, net, vertical transition (for VTOL planes). Auto-tuning, simulation with JSBSIM, X-Plane and RealFlight simulators. Support for a large variety of VTOL architectures: Quadplanes, Tilt wings, tilt rotors, tail sitters, ornithopters. Optimization of 3 or 4 channel airplanes. Rover-specific Manual, Learning, Auto, Steering, Hold and Guided operational modes. Support for wheeled and track architectures. Submarine-specific Depth hold: Using pressure-based depth sensors, submarines can maintain depth within a few centimeters. Light Control: Control of subsea lighting through the controller. ArduPilot is fully documented within its wiki, totaling the equivalent of about 700 printed pages and divided in six top sections: The Copter, Plane, Rover, and Submarine vehicle related subsections are aimed at users. A "developer" subsection for advanced uses is aimed primarily at software and hardware engineers, and a "common" section regrouping information common to all vehicle types is shared within the first four sections. ArduPilot use cases Hobbyists and amateurs Drone racing Building and operation of radio control models for recreation Professional Aerial photogrammetry Aerial photography and filmmaking. 
Remote sensing Search and rescue Robotic applications Academic research Package delivery History Early years, 2007–2012 The ArduPilot project earliest roots date back to late 2007 when Jordi Munoz, who later co-founded 3DRobotics with Chris Anderson, wrote an Arduino program (which he called "ArduCopter") to stabilize an RC Helicopter. In 2009 Munoz and Anderson released Ardupilot 1.0 (flight controller software) along with a hardware board it could run on. That same year Munoz, who had built a traditional RC helicopter UAV able to fly autonomously, won the first Sparkfun AVC competition. The project grew further thanks to many members of the DIY Drones community, including Chris Anderson who championed the project and had founded the forum based community earlier in 2007. The first ArduPilot version supported only fixed-wing aircraft and was based on a thermopile sensor, which relies on determining the location of the horizon relative to the aircraft by measuring the difference in temperature between the sky and the ground. Later, the system was improved to replace thermopiles with an Inertial Measurement Unit (IMU) using a combination of accelerometers, gyroscopes and magnetometers. Vehicle support was later expanded to other vehicle types which led to the Copter, Plane, Rover, and Submarine subprojects. The years 2011 and 2012 witnessed an explosive growth in the autopilot functionality and codebase size, thanks in large part to new participation from Andrew "Tridge" Tridgell and HAL author Pat Hickey. Tridge's contributions included automatic testing and simulation capabilities for Ardupilot, along with PyMavlink and Mavproxy. Hickey was instrumental in bringing the AP_ HAL library to the code base: HAL (Hardware Abstraction Layer) greatly simplified and modularized the code base by introducing and confining low-level hardware implementation specifics to a separate hardware library. The year 2012 also saw Randy Mackay taking the role of lead maintainer of Copter, after a request from former maintainer Jason Short, and Tridge taking over the role of lead Plane maintainer, after Doug Weibel who went on to earn a Ph.D. in Aerospace Engineering. Both Randy and Tridge are current lead maintainers to date. The free software approach to ArduPilot code development is similar to that of the Linux Operating system and the GNU Project, and the PX4/Pixhawk and Paparazzi Project, where low cost and availability enabled hobbyists to build autonomous small remotely piloted aircraft, such as micro air vehicles and miniature UAVs. The drone industry, similarly, progressively leveraged ArduPilot code to build professional, high-end autonomous vehicles. Maturity, 2013–2016 While early versions of ArduPilot used the APM flight controller, an AVR CPU running the Arduino open-source programming language (which explains the "Ardu" part of the project name), later years witnessed a significant re-write of the code base in C++ with many supporting utilities written in Python. Between 2013 and 2014 ArduPilot evolved to run on a range of hardware platforms and operating system beyond the original Arduino Atmel based microcontroller architecture, first with the commercial introduction of the Pixhawk hardware flight controller, a collaborative effort between PX4, 3DRobotics and the ArduPilot development team, and later to the Parrot's Bebop2 and the Linux-based flight controllers like Raspberry Pi based NAVIO2 and BeagleBone based ErleBrain. 
A key event within this time period included the first flight of a plane under Linux in mid 2014. Late 2014 saw the formation of DroneCode, formed to bring together the leading open source UAV software projects, and most notably to solidify the relationship and collaboration of the ArduPilot and the PX4 projects. ArduPilot's involvement with DroneCode ended in September 2016. 2015 was also a banner year for 3DRobotics, a heavy sponsor of ArduPilot development, with its introduction of the Solo quadcopter, an off the shelf quadcopter running ArduPilot. Solo's commercial success, however, was not to be. Fall of 2015 again saw a key event in the history of the autopilot, with a swarm of 50 planes running ArduPilot simultaneously flown at the Advanced Robotic Systems Engineering Laboratory (ARSENL) team at the Naval Postgraduate School. Within this time period, ArduPilot's code base was significantly refactored, to the point where it ceased to bear any similarity to its early Arduino years. Current, 2018–present ArduPilot code evolution continues with support for integrating and communicating with powerful companion computers for autonomous navigation, plane support for additional VTOL architectures, integration with ROS, support for gliders, and tighter integration for submarines. The project evolves under the umbrella of ArduPilot.org, a project within the Software in the Public Interest (spi-inc.org) not-for-profit organisation. ArduPilot is sponsored in part by a growing list of corporate partners. UAV Outback Challenge In 2012, the Canberra UAV Team successfully took first place in the prestigious UAV Outback Challenge. The CanberraUAV Team included ArduPlane Developers and the airplane flown was controlled by an APM 2 Autopilot. In 2014 the CanberraUAV Team and ArduPilot took first place again, by successfully delivering a bottle to the "lost" hiker. In 2016 ArduPilot placed first in the technically more challenging competition, ahead of strong competition from international teams. Community ArduPilot is jointly managed by a group of volunteers located around the world, using the Internet (discourse based forum, gitter channel) to communicate, plan, develop and support it. The development team meets weekly in a chat meeting, open to all, using Mumble. In addition, hundreds of users contribute ideas, code and documentation to the project. ArduPilot is licensed under the GPL Version 3 and is free to download and use. Customizability The flexibility of ArduPilot makes it very popular in the DIY field but it has also gained popularity with professional users and companies. 3DRobotics' Solo quadcopter, for instance, uses ArduPilot, as have a large number of professional aerospace companies such as Boeing. The flexibility allows for support of a wide variety of frame types and sizes, different sensors, camera gimbals and RC transmitters depending on the operator's preferences. ArduPilot has been successfully integrated into many airplanes such as the Bixler 2.0. The customizability and ease of installation have allowed the ArduPilot platform to be integrated for a variety of missions. The Mission Planner (Windows) ground control station allows the user to easily configure, program, use, or simulate an ArduPilot board for purposes such as mapping, search and rescue, and surveying areas. 
See also Open-source robotics Other projects for autonomous aircraft control: PX4 autopilot Paparazzi Project Slugs Other projects for ground vehicles & self-driving cars: OpenPilot Tesla Autopilot References Unmanned aerial vehicles Unmanned underwater vehicles Robots Unmanned ground vehicles Software using the GNU General Public License Free software programmed in C++ Free software programmed in Python Cross-platform free software
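The ground stations and SITL simulator mentioned in the Software suite and Features sections above talk to the vehicle over the MAVLink protocol; a minimal sketch of that link using pymavlink (mentioned in the History section) is shown below, assuming a locally running ArduPilot SITL instance sending telemetry to UDP port 14550, a common default that depends on configuration.

```python
# Minimal MAVLink listener for an ArduPilot SITL instance (assumed on udp:14550).
# Requires: pip install pymavlink
from pymavlink import mavutil

# Connect to the vehicle and wait for the first HEARTBEAT so the target ids are known.
conn = mavutil.mavlink_connection("udp:127.0.0.1:14550")
conn.wait_heartbeat()
print(f"Heartbeat from system {conn.target_system}, component {conn.target_component}")

# Print a few attitude messages streamed by the autopilot.
for _ in range(5):
    msg = conn.recv_match(type="ATTITUDE", blocking=True, timeout=10)
    if msg is None:
        break
    print(f"roll={msg.roll:.3f} pitch={msg.pitch:.3f} yaw={msg.yaw:.3f}")
```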
ArduPilot
[ "Physics", "Technology" ]
3,127
[ "Physical systems", "Machines", "Robots" ]
37,663,646
https://en.wikipedia.org/wiki/Null%20hypersurface
In relativity and in pseudo-Riemannian geometry, a null hypersurface is a hypersurface whose normal vector at every point is a null vector (has zero length with respect to the local metric tensor). A light cone is an example. An alternative characterization is that the tangent space at every point of the hypersurface contains a nonzero vector such that the metric applied to such a vector and any vector in the tangent space is zero. Another way of saying this is that the pullback of the metric onto the tangent space is degenerate. For a Lorentzian metric, all the vectors in such a tangent space are space-like except in one direction, in which they are null. Physically, there is exactly one lightlike worldline contained in a null hypersurface through each point that corresponds to the worldline of a particle moving at the speed of light, and no contained worldlines that are time-like. Examples of null hypersurfaces include a light cone, a Killing horizon, and the event horizon of a black hole. References James B. Hartle, Gravity: an Introduction To Einstein's General Relativity. General relativity Lorentzian manifolds
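A standard worked example (included here only to make the definition concrete) is the future light cone of the origin in Minkowski space with signature (−,+,+,+). It is the level set u = 0 of

\[ u = t - \sqrt{x^2 + y^2 + z^2}, \qquad n_\mu = \partial_\mu u, \qquad g^{\mu\nu} n_\mu n_\nu = -(\partial_t u)^2 + |\nabla u|^2 = -1 + 1 = 0 , \]

so the normal is null everywhere on the cone (away from the vertex). The radial light rays t = r lying in the cone are the lightlike worldlines contained in the hypersurface, matching the physical description above.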
Null hypersurface
[ "Physics" ]
244
[ "General relativity", "Relativity stubs", "Theory of relativity" ]
37,663,996
https://en.wikipedia.org/wiki/See-through%20display
A see-through display or transparent display is an electronic display that allows the user to see what is shown on the screen while still being able to see through it. The main applications of this type of display are in head-up displays, augmented reality systems, digital signage, and general large-scale spatial light modulation. They should be distinguished from image-combination systems which achieve visually similar effects by optically combining multiple images in the field of view. Transparent displays embed the active matrix of the display in the field of view, which generally allows them to be more compact than combination-based systems. Broadly, there are two types of underlying transparent display technology, absorptive (chiefly LCDs) and emissive (chiefly electroluminescent, including LEDs and "high-field" emitters). Absorptive devices work by selectively reducing the intensity of the light passing through the display, while emissive devices selectively add to the light passing through the display. Some display systems combine both absorptive and emissive devices to overcome the limitations inherent to either one. Emissive display technologies achieve partial transparency either by interspersing invisibly small opaque emitter elements with transparent areas or by being partially transparent themselves. History The development of practical transparent displays accelerated rapidly around the end of the first decade of the 21st century. An early commercial transparent display was the Sony Ericsson Xperia Pureness released in 2009, although it did not succeed in the market because the screen was not visible outdoors or in brightly lit rooms. Samsung released their first transparent LCD in late 2011, and Planar published a report on a prototype electroluminescent transparent display in 2012. Not long after, UK-based Crystal Display Systems began to sell transparent LCDs remanufactured from conventional LCD displays. LG demonstrated a transparent LCD in 2015. In the later part of the 2010s, transparent OLEDs started to appear. LG, Prodisplay, and taptl, for example, use conventional LCD technology. LG also uses OLED technology. LUMINEQ transparent displays manufactured by Beneq are thin-film electroluminescent displays enabled by atomic layer deposition (ALD). This display technology was used by Valtra in 2017 to develop its SmartGlass head-up display on tractors. Samsung and Planar Systems previously made transparent OLED displays but discontinued them in 2016. Prodisplay used both OLED and LCD technology, but no longer makes transparent OLED displays. How it works There are two major see-through display technologies, LCD and LED. LED technology is the older of the two (early LEDs emitted only red light); OLED is a newer variant that uses an organic emissive substance. Though OLED see-through displays are becoming more widely available, both technologies are largely derived from conventional display systems. In see-through displays, the difference between the absorptive nature of the LCD and the emissive nature of the OLED gives them very different visual appearances. LCD systems impose a pattern of shading and colours on the background seen through the display, while OLED systems impose a glowing image pattern on the background. TASEL displays are essentially transparent thin-film electroluminescent displays with transparent electrodes. Pixel Pitch and Brightness: Pixel Pitch: Different pixel pitches (the distance between pixels) affect image clarity. Smaller pixel pitches offer higher pixel densities, resulting in sharper images.
Brightness: See-through displays have adjustable brightness levels. Higher brightness levels, up to 7500 nits (depending on the technology), ensure visibility in various lighting conditions, including direct sunlight. Partial Reflection A partial reflection display shows an image by reflecting it off a smooth transparent surface such as glass or specialty film. Partial reflection displays are comparatively simple but are limited by the need for the reflected image to be considerably brighter than the light sources beyond the display. A common example of a partial reflection display is the head-up display of a car or fighter jet. The Pepper's ghost illusion is a classic example that uses this technique passively. Head-mounted displays LCD An LCD panel can be made "see-through" without applied voltage when a twisted nematic LCD is fitted with crossed polarizers. Conventional LCDs have relatively low transmission efficiency due to the use of polarizers, so they tend to appear somewhat dim against natural light. Unlike LED see-through displays, LCD see-throughs do not produce their own light but only modulate incoming light. LCDs intended specifically for see-through displays are usually designed to have improved transmission efficiency. Small-scale see-through LCDs have been commercially available for some time, but only recently have vendors begun to offer units with sizes comparable to LCD televisions and displays. Samsung released a 22-inch panel specifically designed for see-through use in 2011. As of 2016, they were being produced by Samsung, LG, and MMT, with a number of vendors offering products based on OEM systems from these manufacturers. An alternative approach to commercializing this technology is to offer conventional back-lit display systems without the backlight system. LCD displays often also require removing a diffuser layer to adapt them for use as transparent displays. The key limitation on see-through LCD efficiency is the linear polarizing filters. An ideal linear polarizer absorbs half of the incoming unpolarized light. In LCDs, light has to pass two linear polarizers, either in the crossed or parallel-aligned configuration. LED LED screens have two layers of glass on both sides of a set of addressable LEDs. Both inorganic and organic (OLED) LEDs have been used for this purpose. The more flexible (literally and figuratively) OLEDs have generated more interest for this application, though as of July 2016 the only commercial manufacturer, Samsung, had announced that the product would be discontinued. OLEDs consist of an emissive layer and a conductive layer. Electrical impulses travel through the conductive layer and produce light at the emissive layer. This is different from LCDs in that OLEDs produce their own light, which produces a markedly different visual effect with a see-through display. The narrow gaps between the pixels of the screen, as well as the clear cathodes within, allow the screen to be transparent. These types of screen have been notoriously difficult and expensive to produce in the past, but are becoming more common as manufacturing methods advance. OLED transparent displays generate their own light, but cannot show black; this can be solved by the addition of a special LCD layer. Passive transparent displays MIT researchers developed an inexpensive, passive transparent display system that uses nanoparticles.
Unlike transparent LCDs and OLEDs, which require integrated electronic modules to process visual signals or emit their own light, a passive transparent display uses a projector as an external light source to project images and videos onto a transparent medium embedded with resonant nanoparticles that selectively scatter the projected light. This approach addresses the deficiencies observed with transparent LCDs and OLEDs, such as high cost, difficulty of scaling in size, and delicate maintenance. The MIT research is being commercialized by a startup company, Lux Labs, Inc. TASEL Displays Lumineq TASEL displays are based on electroluminescent display technology. The TASEL glass panel consists of a luminescent phosphor layer sandwiched between two transparent electrode layers. The display emits light by itself and has a transparency of 80%. Unlike LCDs and LEDs that use organic materials, which are affected by the environment, TASEL displays are inorganic and immune to environmental effects. One of the disadvantages of TASEL displays is that they cannot display more than one colour. Applications See-through displays can be used for: Brick-and-Mortar Store Windows: These displays offer highly effective advertising by transforming storefronts into dynamic visual experiences. This can be particularly advantageous for ad exchange platforms due to the high CPMs resulting from their effectiveness. Billboards: Transparent displays on billboards allow creative advertising agencies to design ads that seamlessly blend video content with the background, creating eye-catching and innovative advertising campaigns. Large Building Facades: Building owners can utilize transparent displays on facades to generate significant revenue through advertisements. These displays do not obstruct the view for residents, as the displays are transparent, maintaining the visibility of the outside scenery from inside the building. They are also used for augmented reality and other applications such as shopping displays and more sophisticated computer screens. See-through displays based on OLED or microLED technology may display black through the addition of an LCD, as they cannot do so on their own. This is because, in OLED and micro-LED, the OFF state corresponds to black (or in this case transparent, since there is no black background) and the ON state corresponds to white; this is because OLED and microLED pixels emit their own light. See-through LCDs cannot display whites because LCD pixels do not emit their own light; rather, they selectively block light from a white backlight, although this could theoretically be fixed through the addition of a transparent monochrome microLED or OLED display. In LCDs, this is because, in the OFF state, the pixels turn off, allowing light from a backlight to pass through, while in the ON state, the pixels turn on, blocking light. MIT researchers were working on creating transparent displays inexpensively using nanoparticles. As of 2019, the MIT research was being commercialized by a startup company, Lux Labs, Inc. Augmented reality See-through screens are an emerging market that has several potential uses. Cell phones, tablets and other devices are starting to use this technology. It has an appealing appearance but, more importantly, it is also effective for augmented reality applications. The device can add its own twist to what is behind the screen. For example, if you look through a tablet with a see-through display at a street, the device could overlay the name of the street onto the screen.
It could be similar to Google Street View, except in real time. For example, Google Translate has a feature that allows the user to point the camera at a sign or writing in another language and it automatically displays the same view, but with the writing in the language of the user's choosing. This could be possible with see-through displays as well. A device using a transparent display will have much higher resolution and will display much more realistic augmented reality than video augmented reality, which takes video, adds its own supplement to it, and then displays the result on the screen. It could be simpler to display the addition on the see-through screen instead. The Microsoft HoloLens is an application of this idea. Retail These displays are also used in shop windows. The shop windows show the product on the inside while also showing text or advertisements on the glass. This type of showcase is becoming more popular as see-through screens become cheaper and more available. Event stage A transparent LED display can be used by stage designers and event producers to realize creative holographic-like visual effects. See also Head-up display Pepper's ghost References Electronic display devices Display technology
See-through display
[ "Engineering" ]
2,260
[ "Electronic engineering", "Display technology" ]
37,667,343
https://en.wikipedia.org/wiki/Types%20of%20mesh
A mesh is a representation of a larger geometric domain by smaller discrete cells. Meshes are commonly used to compute solutions of partial differential equations and render computer graphics, and to analyze geographical and cartographic data. A mesh partitions space into elements (or cells or zones) over which the equations can be solved, which then approximates the solution over the larger domain. Element boundaries may be constrained to lie on internal or external boundaries within a model. Higher-quality (better-shaped) elements have better numerical properties, where what constitutes a "better" element depends on the general governing equations and the particular solution to the model instance. Common cell shapes Two-dimensional There are two types of two-dimensional cell shapes that are commonly used. These are the triangle and the quadrilateral. Computationally poor elements will have sharp internal angles or short edges or both. Triangle This cell shape consists of 3 sides and is one of the simplest types of mesh. A triangular surface mesh is always quick and easy to create. It is most common in unstructured grids. Quadrilateral This cell shape is a basic 4-sided one. It is most common in structured grids. Quadrilateral elements are usually excluded from being or becoming concave. Three-dimensional The basic 3-dimensional elements are the tetrahedron, quadrilateral pyramid, triangular prism, and hexahedron. They all have triangular and quadrilateral faces. Extruded 2-dimensional models may be represented entirely by prisms and hexahedra as extruded triangles and quadrilaterals. In general, quadrilateral faces in 3 dimensions may not be perfectly planar. A nonplanar quadrilateral face can be considered a thin tetrahedral volume that is shared by two neighboring elements. Tetrahedron A tetrahedron has 4 vertices and 6 edges, and is bounded by 4 triangular faces. In most cases a tetrahedral volume mesh can be generated automatically. Pyramid A quadrilaterally-based pyramid has 5 vertices and 8 edges, and is bounded by 4 triangular faces and 1 quadrilateral face. These are effectively used as transition elements between square- and triangular-faced elements and others in hybrid meshes and grids. Triangular prism A triangular prism has 6 vertices and 9 edges, and is bounded by 2 triangular and 3 quadrilateral faces. The advantage of this type of element is that it resolves the boundary layer efficiently. Hexahedron A cuboid, a topological cube, has 8 vertices, 12 edges, and 6 quadrilateral faces, making it a type of hexahedron. In the context of meshes, a cuboid is often called a hexahedron, hex, or brick. For the same cell count, the accuracy of solutions in hexahedral meshes is the highest. The pyramid and triangular prism zones can be considered computationally as degenerate hexahedrons, where some edges have been reduced to zero. Other degenerate forms of a hexahedron may also be represented. Advanced Cells (Polyhedron) A polyhedron (dual) element has any number of vertices, edges and faces. It usually requires more computing operations per cell due to the number of neighbours (typically around 10), though this is made up for by the accuracy of the calculation. Classification of grids Structured grids Structured grids are identified by regular connectivity. The possible element choices are quadrilaterals in 2D and hexahedra in 3D. This model is highly space efficient, since the neighbourhood relationships are defined by the storage arrangement.
Some other advantages of structured grids over unstructured ones are better convergence and higher resolution. Unstructured grids An unstructured grid is identified by irregular connectivity. It cannot easily be expressed as a two-dimensional or three-dimensional array in computer memory. This allows for any possible element that a solver might be able to use. Compared to structured meshes, for which the neighborhood relationships are implicit, this model can be highly space inefficient since it calls for explicit storage of neighborhood relationships. The storage requirements of a structured grid and of an unstructured grid are within a constant factor of each other. These grids typically employ triangles in 2D and tetrahedra in 3D. Hybrid grids A hybrid grid contains a mixture of structured portions and unstructured portions. It integrates the structured meshes and the unstructured meshes in an efficient manner. Those parts of the geometry that are regular can have structured grids and those that are complex can have unstructured grids. These grids can be non-conformal, which means that grid lines don't need to match at block boundaries. Mesh quality A mesh is considered to have higher quality if a more accurate solution is calculated more quickly. Accuracy and speed are in tension. Decreasing the mesh size always increases the accuracy but also increases computational cost. Accuracy depends on both discretization error and solution error. For discretization error, a given mesh is a discrete approximation of the space, and so can only provide an approximate solution, even when equations are solved exactly. (In computer graphics ray tracing, the number of rays fired is another source of discretization error.) For solution error, for PDEs many iterations over the entire mesh are required. The calculation is terminated early, before the equations are solved exactly. The choice of mesh element type affects both discretization and solution error. Accuracy depends on both the total number of elements and the shape of individual elements. The speed of each iteration grows (linearly) with the number of elements, and the number of iterations needed depends on the local solution value and gradient compared to the shape and size of local elements. Solution precision A coarse mesh may provide an accurate solution if the solution is a constant, so the precision depends on the particular problem instance. One can selectively refine the mesh in areas where the solution gradients are high, thus increasing fidelity there. Accuracy, including interpolated values within an element, depends on the element type and shape. Rate of convergence Each iteration reduces the error between the calculated and true solution. A faster rate of convergence means smaller error with fewer iterations. A mesh of inferior quality may leave out important features such as the boundary layer for fluid flow. The discretization error will be large and the rate of convergence will be impaired; the solution may not converge at all. Grid independence A solution is considered grid-independent if the discretization and solution error are small enough given sufficient iterations. This is essential to know for comparative results. A mesh convergence study consists of refining elements and comparing the refined solutions to the coarse solutions. If further refinement (or other changes) does not significantly change the solution, the mesh is an "Independent Grid."
Deciding the type of mesh If accuracy is of the highest concern then a hexahedral mesh is the most preferable. The density of the mesh is required to be sufficiently high in order to capture all the flow features, but on the same note, it should not be so high that it captures unnecessary details of the flow, thus burdening the CPU and wasting more time. Whenever a wall is present, the mesh adjacent to the wall must be fine enough to resolve the boundary layer flow, and there quad, hex and prism cells are generally preferred over triangles, tetrahedra and pyramids. Quad and hex cells can be stretched where the flow is fully developed and one-dimensional. Based on the skewness, smoothness, and aspect ratio, the suitability of the mesh can be decided. Skewness The skewness of a grid is an apt indicator of the mesh quality and suitability. Large skewness compromises the accuracy of the interpolated regions. There are three methods of determining the skewness of a grid. Based on equilateral volume This method is applicable to triangles and tetrahedra only and is the default method. Based on the deviation from normalized equilateral angle This method applies to all cell and face shapes and is almost always used for prisms and pyramids. Equiangular skew Another common measure of quality is based on the equiangular skew: \( \text{skewness} = \max\left( \dfrac{\theta_{\max} - \theta_e}{180^\circ - \theta_e},\ \dfrac{\theta_e - \theta_{\min}}{\theta_e} \right) \) where \( \theta_{\max} \) is the largest angle in a face or cell, \( \theta_{\min} \) is the smallest angle in a face or cell, and \( \theta_e \) is the angle for an equiangular face or cell, i.e. 60° for a triangle and 90° for a square. A skewness of 0 is the best possible and a skewness of 1 is almost never preferred. For hex and quad cells, skewness should not exceed 0.85 to obtain a fairly accurate solution. For triangular cells, skewness should not exceed 0.85 and for quadrilateral cells, skewness should not exceed 0.9. Smoothness The change in size should also be smooth. There should not be sudden jumps in the size of the cell because this may cause erroneous results at nearby nodes. Aspect ratio It is the ratio of the longest to the shortest side in a cell. Ideally it should be equal to 1 to ensure best results. For multidimensional flow, it should be near to one. Also, local variations in cell size should be minimal, i.e. adjacent cell sizes should not vary by more than 20%. Having a large aspect ratio can result in an interpolation error of unacceptable magnitude. Mesh generation and improvement See also mesh generation and principles of grid generation. In two dimensions, flipping and smoothing are powerful tools for adapting a poor mesh into a good mesh. Flipping involves combining two triangles to form a quadrilateral, then splitting the quadrilateral in the other direction to produce two new triangles. Flipping is used to improve quality measures of a triangle such as skewness. Mesh smoothing enhances element shapes and overall mesh quality by adjusting the location of mesh vertices. In mesh smoothing, core features such as the non-zero pattern of the linear system are preserved, as the topology of the mesh remains invariant. Laplacian smoothing is the most commonly used smoothing technique. See also References External links Mesh generation Computational fluid dynamics
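The equiangular skewness defined above is simple to evaluate directly; the sketch below (plain Python, triangle case only, with a made-up sliver triangle as the test case) computes it for a single cell from its vertex coordinates.

```python
# Equiangular skewness of a triangle: max((theta_max - 60)/(180 - 60), (60 - theta_min)/60).
# A value of 0 means an equilateral cell; values approaching 1 indicate a badly distorted cell.
import math

def triangle_skewness(a, b, c):
    """Return the equiangular skewness of the triangle with vertices a, b, c."""
    def angle(p, q, r):  # interior angle at vertex p, in degrees
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

    angles = [angle(a, b, c), angle(b, c, a), angle(c, a, b)]
    theta_e = 60.0  # equiangular reference for a triangle
    return max((max(angles) - theta_e) / (180.0 - theta_e),
               (theta_e - min(angles)) / theta_e)

print(triangle_skewness((0, 0), (1, 0), (0.5, 0.05)))  # thin sliver -> skewness near 1
```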
Types of mesh
[ "Physics", "Chemistry" ]
2,069
[ "Mesh generation", "Tessellation", "Computational fluid dynamics", "Computational physics", "Symmetry", "Fluid dynamics" ]
37,668,443
https://en.wikipedia.org/wiki/Hybrid%20difference%20scheme
The hybrid difference scheme is a method used in the numerical solution for convection–diffusion problems. It was introduced by Spalding (1970). It is a combination of the central difference scheme and the upwind difference scheme, as it exploits the favorable properties of both of these schemes. Introduction The hybrid difference scheme is a method used in the numerical solution for convection-diffusion problems. These problems play important roles in computational fluid dynamics. They can be described by the general partial differential equation ∂(ρφ)/∂t + ∇·(ρφu) = ∇·(Γ∇φ) + Sφ (1) where ρ is the density, u is the velocity vector, Γ is the diffusion coefficient and Sφ is the source term. In this equation the property φ can be temperature, internal energy or a component of the velocity vector in the x, y and z directions. For one-dimensional analysis of the convection-diffusion problem in steady state and without the source term, the equation reduces to d(ρuφ)/dx = d(Γ dφ/dx)/dx (2) with boundary conditions φ = φ0 at x = 0 and φ = φL at x = L, where L is the length and φ0, φL are the given values. Grid generation Integrating equation (2) over the control volume containing the general node P, and using Gauss' theorem, i.e. ∫CV ∇·(a) dV = ∫A a·n dA, yields the following result: (ρuφA)e − (ρuφA)w = (ΓA dφ/dx)e − (ΓA dφ/dx)w (3) where A is the cross-sectional area of the control volume. The equation must also satisfy the continuity equation, i.e. (ρuA)e − (ρuA)w = 0 (4) Now let us define variables F and D to represent the convection mass flux and the diffusion conductance at cell faces, F = ρu and D = Γ/δx (5) Hence, taking the area as uniform, equations (3) and (4) transform into the following equations: Feφe − Fwφw = De(φE − φP) − Dw(φP − φW) (6) Fe − Fw = 0 (7) where the lower case letters denote the values at the faces and the upper case letters denote those at the nodes. We also define a non-dimensional parameter, the Péclet number (Pe), as a measure of the relative strengths of convection and diffusion, Pe = F/D = ρuδx/Γ (8) For a low Peclet number (|Pe|<2) the flow is characterized as dominated by diffusion. For large Peclet numbers the flow is dominated by convection. Central and upwind difference schemes In the above equations (6) and (7), we observe that the values required are at the faces, instead of the nodes. Hence approximations are required to fulfill this. In the central difference scheme we replace the value at the face with the average of the values at the adjacent nodes, φe = (φP + φE)/2 and φw = (φW + φP)/2 (9) By putting these values in equation (6) and rearranging we get the following result, aPφP = aWφW + aEφE (10) where, {| class="wikitable" |- ! scope="col" style="width:200px;"| aW ! scope="col" style="width:200px;"| aE ! scope="col" style="width:200px;"| aP |- | Dw + Fw/2 | De − Fe/2 | aW + aE + (Fe − Fw) |} In the upwind scheme we replace the value at the face with the value at the adjacent upstream node. For example, for flow to the right (Pe > 0) as shown in the diagram, we replace the values as follows: φw = φW and φe = φP (11) And for Pe < 0, we put the values as shown in figure 3, φw = φP and φe = φE (12) By putting these values in equation (6) and rearranging we get the same equation as equation (10), with the following values of the coefficients: {| class="wikitable" |- ! scope="col" style="width:200px;"| aW ! scope="col" style="width:200px;"| aE ! scope="col" style="width:200px;"| aP |- | Dw + max(Fw, 0) | De + max(0, −Fe) | aW + aE + (Fe − Fw) |} Hybrid difference scheme The hybrid difference scheme of Spalding (1970) is a combination of the central difference scheme and the upwind difference scheme. It makes use of the central difference scheme, which is second order accurate, for small Peclet numbers (|Pe| < 2). For large Peclet numbers (|Pe| > 2) it uses the upwind difference scheme, which is first order accurate but takes into account the convection of the fluid. 
As can be seen in figure 4, for Pe = 0 the scheme gives a linear (central) distribution, and for high Pe it takes the upstream value depending on the flow direction. For example, the net flux per unit area at the west (left) face is, in the different regimes, qw = Fw[(1/2 + 1/Pew)φW + (1/2 − 1/Pew)φP] for −2 < Pew < 2, qw = FwφW for Pew ≥ 2, and qw = FwφP for Pew ≤ −2. Substituting these expressions in equation (6) and rearranging, we get the same equation as equation (10) with the values of the coefficients as follows, {| class="wikitable" |- ! scope="col" style="width:200px;"| aW ! scope="col" style="width:200px;"| aE ! scope="col" style="width:200px;"| aP |- | max[Fw, (Dw + Fw/2), 0] | max[−Fe, (De − Fe/2), 0] | aW + aE + (Fe − Fw) |} Advantages and disadvantages It exploits the favourable properties of the central difference and upwind schemes. It switches to the upwind difference scheme when the central difference scheme produces inaccurate results for high Peclet numbers. It produces physically realistic solutions and has proved to be helpful in the prediction of practical flows. The only disadvantage associated with the hybrid difference scheme is that its accuracy in terms of Taylor series truncation error is only first order. See also Upwind differencing scheme for convection References External links http://proceedings.fyper.com/eccomascfd2006/documents/595.pdf http://www.internonlinearscience.org/upload/papers/20110228093510102.pdf http://www.internonlinearscience.org/upload/papers/20110227034844410.pdf Transport phenomena Diffusion
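To make the scheme concrete, the following sketch assembles and solves a one-dimensional steady convection-diffusion problem using the hybrid coefficients from the table above and compares the result with the exact solution. It is a simplified illustration under assumed values (uniform properties, arbitrary ρ, u, Γ and grid size), and the boundary cells are handled in a deliberately simplified way (the boundary value is used directly as the neighbouring node), which differs slightly from the full textbook treatment.

```python
import numpy as np

# Minimal 1D steady convection-diffusion solver with hybrid-scheme coefficients.
rho, u, gamma, L = 1.0, 0.5, 0.1, 1.0   # assumed example properties
phi_0, phi_L = 1.0, 0.0                 # boundary values
n = 25                                  # number of control volumes
dx = L / n
F = rho * u                             # convection mass flux (uniform here)
D = gamma / dx                          # diffusion conductance

def a_hybrid(F_face, D_face, side):
    """Hybrid coefficient for a west ('W') or east ('E') neighbour."""
    if side == "W":
        return max(F_face, D_face + F_face / 2.0, 0.0)
    return max(-F_face, D_face - F_face / 2.0, 0.0)

A = np.zeros((n, n))
b = np.zeros(n)
for i in range(n):
    aW = a_hybrid(F, D, "W")
    aE = a_hybrid(F, D, "E")
    aP = aW + aE                        # Fe - Fw = 0 for a uniform flow
    A[i, i] = aP
    if i > 0:
        A[i, i - 1] = -aW
    else:                               # west boundary (simplified treatment)
        b[i] += aW * phi_0
    if i < n - 1:
        A[i, i + 1] = -aE
    else:                               # east boundary (simplified treatment)
        b[i] += aE * phi_L

phi = np.linalg.solve(A, b)

# Exact solution of the same problem for comparison.
x = (np.arange(n) + 0.5) * dx
pe_L = rho * u * L / gamma
exact = phi_0 + (phi_L - phi_0) * (np.exp(pe_L * x / L) - 1.0) / (np.exp(pe_L) - 1.0)
print(np.max(np.abs(phi - exact)))      # discrepancy between numerical and exact profiles
```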
Hybrid difference scheme
[ "Physics", "Chemistry", "Engineering" ]
1,143
[ "Transport phenomena", "Chemical engineering", "Physical phenomena", "Diffusion" ]
37,669,313
https://en.wikipedia.org/wiki/Pension%20fund%20investment%20in%20infrastructure
Pension fund investment in infrastructure is the investing by pension funds directly in the non traditional asset class of infrastructure assets as part of their investment strategy. Traditionally the preserve of governments and municipal authorities, infrastructure has become an asset class in its own right in the 2010s for private-sector investors, most notably pension funds. History Historically, pension funds have tended to invest mostly in "core assets" (such as money market instruments, government bonds, and large-cap equity) and, to a lesser extent, "alternative assets" (such as real estate, private equity and hedge funds). The average allocation to infrastructure historically represented only 1% of total assets under management by pensions, excluding indirect investment through ownership of stocks of listed utility and infrastructure companies. However, government disengagement from the costly long-term financial commitments required by large infrastructure projects in the wake of the 2008–2012 global recession, combined with the realization that infrastructure could be an ideal asset class providing advantages such as long duration, facilitating cash flow matching with long-term liabilities, protection against inflation, and statistical diversification (i.e., a low correlation with "traditional" listed assets such as equities and fixed income), has prompted an increasing number of pension executives to consider investing in the infrastructure asset class. This macro-financial perspective on pension investment in infrastructure was developed by US, Canadian, and European financial economics and labor law experts, notably from Harvard Law School, the World Pensions Council, and the OECD. Canadian, Californian, and Australian early entrants Pension funds, including superannuation schemes, account for approximately 40% of all investors in the infrastructure asset class, excluding projects directly funded and developed by governments, municipalities, and public authorities. Large Canadian pension funds and sovereign investors have been particularly active in energy assets such as natural gas and natural gas infrastructure, where they have become major players in recent years. Until recently, apart from sophisticated jurisdictions such as Ontario, Quebec, California, and the Netherlands, most North American, European, and UK pensions wishing to gain exposure to infrastructure assets did so indirectly, through investments made in infrastructure funds managed by specialized Canadian, US, or Australian funds. UK Pensions Infrastructure Platform On November 29, 2011, the British government unveiled an unprecedented plan to encourage large-scale pension investments in roads, hospitals, airports, and the like across the UK. The plan was aimed at enticing £20 billion ($30.97 billion) of investment in domestic infrastructure projects over a next decade. On October 18, 2012, HM Treasury announced that the National Association of Pension Funds (NAPF) and the Pension Protection Fund (PPF) had succeeded in "securing a critical mass of Founding Investors needed to move to the next stage of development" and that "several major UK pension funds have signed up to the Pension Infrastructure Platform (PIP). The intention is that the Founding Investors will provide around half of the target £2 billion of investment capital for the fund, before it launches early next year". 
Infrastructure nationalism Some experts have warned against the risk of "infrastructure nationalism", insisting that steady investment flows from foreign pension and sovereign funds were key to the long-term success of the infrastructure asset class, notably in large European jurisdictions such as France and the UK. References Economic policy Public policy Infrastructure investment Actuarial science Pension funds
Pension fund investment in infrastructure
[ "Mathematics" ]
675
[ "Applied mathematics", "Actuarial science" ]
37,670,148
https://en.wikipedia.org/wiki/Laplacian%20of%20the%20indicator
In potential theory, a branch of mathematics, the Laplacian of the indicator of the domain D is a generalisation of the derivative of the Dirac delta function to higher dimensions, and is non-zero only on the surface of D. It can be viewed as the surface delta prime function. It is analogous to the second derivative of the Heaviside step function in one dimension. It can be obtained by letting the Laplace operator work on the indicator function of some domain D. The Laplacian of the indicator can be thought of as having infinitely positive and negative values when evaluated very near the boundary of the domain D. From a mathematical viewpoint, it is not strictly a function but a generalized function or measure. Similarly to the derivative of the Dirac delta function in one dimension, the Laplacian of the indicator only makes sense as a mathematical object when it appears under an integral sign; i.e. it is a distribution. Just as in the formulation of distribution theory, it is in practice regarded as a limit of a sequence of smooth functions; one may meaningfully take the Laplacian of a bump function, which is smooth by definition, and let the bump function approach the indicator in the limit. History Paul Dirac introduced the Dirac δ-function, as it has become known, as early as 1930. The one-dimensional Dirac δ-function is non-zero only at a single point. Likewise, the multidimensional generalisation, as it is usually made, is non-zero only at a single point. In Cartesian coordinates, the d-dimensional Dirac δ-function is a product of d one-dimensional δ-functions; one for each Cartesian coordinate (see e.g. generalizations of the Dirac delta function). However, a different generalisation is possible. The point zero, in one dimension, can be considered as the boundary of the positive halfline. The function 1x>0 equals 1 on the positive halfline and zero otherwise, and is also known as the Heaviside step function. Formally, the Dirac δ-function and its derivative (i.e. the one-dimensional surface delta prime function) can be viewed as the first and second derivative of the Heaviside step function, i.e. ∂x1x>0 and ∂x∂x1x>0. The analogue of the step function in higher dimensions is the indicator function, which can be written as 1x∈D, where D is some domain. The indicator function is also known as the characteristic function. In analogy with the one-dimensional case, the following higher-dimensional generalisations of the Dirac δ-function and its derivative have been proposed: a surface delta function, −nx ⋅ ∇x1x∈D, and a surface delta prime function, ∇x ⋅ ∇x1x∈D (the Laplacian of the indicator). Here n is the outward normal vector. Here the Dirac δ-function is generalised to a surface delta function on the boundary of some domain D in d ≥ 1 dimensions. This definition gives the usual one-dimensional case, when the domain is taken to be the positive halfline. It is zero except on the boundary of the domain D (where it is infinite), and it integrates to the total surface area enclosing D, as shown below. The one-dimensional Dirac δ′-function is generalised to a multidimensional surface delta prime function on the boundary of some domain D in d ≥ 1 dimensions. In one dimension and by taking D equal to the positive halfline, the usual one-dimensional δ′-function can be recovered. Both the normal derivative of the indicator and the Laplacian of the indicator are supported by surfaces rather than points. The generalisation is useful in e.g. quantum mechanics, as surface interactions can lead to boundary conditions in d > 1, while point interactions cannot. Naturally, point and surface interactions coincide for d=1. 
Both surface and point interactions have a long history in quantum mechanics, and there exists a sizeable literature on so-called surface delta potentials or delta-sphere interactions. Surface delta functions use the one-dimensional Dirac -function, but as a function of the radial coordinate r, e.g. δ(r−R) where R is the radius of the sphere. Although seemingly ill-defined, derivatives of the indicator function can formally be defined using the theory of distributions or generalized functions: one can obtain a well-defined prescription by postulating that the Laplacian of the indicator, for example, is defined by two integrations by parts when it appears under an integral sign. Alternatively, the indicator (and its derivatives) can be approximated using a bump function (and its derivatives). The limit, where the (smooth) bump function approaches the indicator function, must then be put outside of the integral. Dirac surface delta prime function This section will prove that the Laplacian of the indicator is a surface delta prime function. The surface delta function will be considered below. First, for a function f in the interval (a,b), recall the fundamental theorem of calculus assuming that f is locally integrable. Now for a < b it follows, by proceeding heuristically, that Here 1a<x<b is the indicator function of the domain a < x < b. The indicator equals one when the condition in its subscript is satisfied, and zero otherwise. In this calculation, two integrations by parts (combined with the fundamental theorem of calculus as shown above) show that the first equality holds; the boundary terms are zero when a and b are finite, or when f vanishes at infinity. The last equality shows a sum of outward normal derivatives, where the sum is over the boundary points a and b, and where the signs follow from the outward direction (i.e. positive for b and negative for a). Although derivatives of the indicator do not formally exist, following the usual rules of partial integration provides the 'correct' result. When considering a finite d-dimensional domain D, the sum over outward normal derivatives is expected to become an integral, which can be confirmed as follows: where the limit is of x approaching surface β from inside domain D, nβ is the unit vector normal to surface β, and ∇x is now the multidimensional gradient operator. As before, the first equality follows by two integrations by parts (in higher dimensions this proceeds by Green's second identity) where the boundary terms disappear as long as the domain D is finite or if f vanishes at infinity; e.g. both 1x∈D and ∇x1x∈D are zero when evaluated at the 'boundary' of Rd when the domain D is finite. The third equality follows by the divergence theorem and shows, again, a sum (or, in this case, an integral) of outward normal derivatives over all boundary locations. The divergence theorem is valid for piecewise smooth domains D, and hence D needs to be piecewise smooth. Thus the surface delta prime function (a.k.a. Dirac -function) exists on a piecewise smooth surface, and is equivalent to the Laplacian of the indicator function of the domain D encompassed by that piecewise smooth surface. Naturally, the difference between a point and a surface disappears in one dimension. In electrostatics, a surface dipole (or Double layer potential) can be modelled by the limiting distribution of the Laplacian of the indicator. The calculation above derives from research on path integrals in quantum physics. 
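The one-dimensional identity derived above — that integrating f against the second derivative of the indicator of (a,b) returns the sum of outward normal derivatives, f′(b) − f′(a) — can be checked numerically by smoothing the indicator, in the spirit of the bump-function approximation discussed later in the article. The snippet below is only an illustration; the smoothing profile (a logistic step written via tanh), the interval, the smoothing width and the grid are arbitrary choices for the example.

```python
import numpy as np

# Numerical check of the 1D identity:
#   ∫ f(x) ∂x∂x 1_{a<x<b}(x) dx  =  f'(b) − f'(a)
# using a smooth approximation of the indicator with width eps.
a, b, eps = -1.0, 2.0, 0.01
x = np.linspace(-4.0, 5.0, 200_001)            # fine grid enclosing [a, b]

sigma = lambda t: 0.5 * (1.0 + np.tanh(t / 2.0))  # logistic step, overflow-safe form
indicator_eps = sigma((x - a) / eps) * sigma((b - x) / eps)

# Second derivative of the smoothed indicator on the grid.
d1 = np.gradient(indicator_eps, x)
d2 = np.gradient(d1, x)

f = np.sin(x)
lhs = np.trapz(f * d2, x)                      # ∫ f ∂x∂x 1_D dx (approximate)
rhs = np.cos(b) - np.cos(a)                    # f'(b) − f'(a)
print(lhs, rhs)                                # the two numbers should be close
```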
Dirac surface delta function This section will prove that the (inward) normal derivative of the indicator is a surface delta function. For a finite domain D or when f vanishes at infinity, it follows by the divergence theorem that By the product rule, it follows that Following from the analysis of the section above, the two terms on the left-hand side are equal, and thus The gradient of the indicator vanishes everywhere, except near the boundary of D, where it points in the normal direction. Therefore, only the component of ∇xf(x) in the normal direction is relevant. Suppose that, near the boundary, ∇xf(x) is equal to nxg(x), where g is some other function. Then it follows that The outward normal nx was originally only defined for x in the surface, but it can be defined to exist for all x; for example by taking the outward normal of the boundary point nearest to x. The foregoing analysis shows that −nx ⋅ ∇x1x∈D can be regarded as the surface generalisation of the one-dimensional Dirac delta function. By setting the function g equal to one, it follows that the inward normal derivative of the indicator integrates to the surface area of D. In electrostatics, surface charge densities (or single boundary layers) can be modelled using the surface delta function as above. The usual Dirac delta function be used in some cases, e.g. when the surface is spherical. In general, the surface delta function discussed here may be used to represent the surface charge density on a surface of any shape. The calculation above derives from research on path integrals in quantum physics. Approximations by bump functions This section shows how derivatives of the indicator can be treated numerically under an integral sign. In principle, the indicator cannot be differentiated numerically, since its derivative is either zero or infinite. But, for practical purposes, the indicator can be approximated by a bump function, indicated by Iε(x) and approaching the indicator for ε → 0. Several options are possible, but it is convenient to let the bump function be non-negative and approach the indicator from below, i.e. This ensures that the family of bump functions is identically zero outside of D. This is convenient, since it is possible that the function f is only defined in the interior of D. For f defined in D, we thus obtain the following: where the interior coordinate α approaches the boundary coordinate β from the interior of D, and where there is no requirement for f to exist outside of D. When f is defined on both sides of the boundary, and is furthermore differentiable across the boundary of D, then it is less crucial how the bump function approaches the indicator. Discontinuous test functions If the test function f is possibly discontinuous across the boundary, then distribution theory for discontinuous functions may be used to make sense of surface distributions, see e.g. section V in . In practice, for the surface delta function this usually means averaging the value of f on both sides of the boundary of D before integrating over the boundary. Likewise, for the surface delta prime function it usually means averaging the outward normal derivative of f on both sides of the boundary of the domain D before integrating over the boundary. Applications Quantum mechanics In quantum mechanics, point interactions are well known and there is a large body of literature on the subject. A well-known example of a one-dimensional singular potential is the Schrödinger equation with a Dirac delta potential. 
The one-dimensional Dirac delta prime potential, on the other hand, has caused controversy. The controversy was seemingly settled by an independent paper, although even this paper attracted later criticism. A lot more attention has been focused on the one-dimensional Dirac delta prime potential recently. A point on the one-dimensional line can be considered both as a point and as surface; as a point marks the boundary between two regions. Two generalisations of the Dirac delta-function to higher dimensions have thus been made: the generalisation to a multidimensional point, as well as the generalisation to a multidimensional surface. The former generalisations are known as point interactions, whereas the latter are known under different names, e.g. "delta-sphere interactions" and "surface delta interactions". The latter generalisations may use derivatives of the indicator, as explained here, or the one-dimensional Dirac -function as a function of the radial coordinate r. Fluid dynamics The Laplacian of the indicator has been used in fluid dynamics, e.g. to model the interfaces between different media. Surface reconstruction The divergence of the indicator and the Laplacian of the indicator (or of the characteristic function, as the indicator is also known) have been used as the sample information from which surfaces can be reconstructed. See also References Mathematics of infinitesimals Generalized functions Measure theory Schwartz distributions
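In the same spirit, the bump-function approximation described in the article can be used to check numerically that the inward normal derivative of the indicator behaves as a surface delta function: with g ≡ 1 it should integrate to the total surface area of the boundary, here the circumference of a disc. The sketch below is an illustration only; the logistic smoothing profile, the smoothing width and the grid resolution are arbitrary choices for the example.

```python
import numpy as np

# Numerical check that −n · ∇ 1_D acts as a surface delta function:
# integrated against g ≡ 1 it should give the perimeter of D.
# D is a disc of radius R; its indicator is smoothed with a logistic profile.
R, eps = 1.0, 0.02
n_pts = 1201
xs = np.linspace(-2.0, 2.0, n_pts)
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")
r = np.sqrt(X**2 + Y**2)

indicator_eps = 1.0 / (1.0 + np.exp((r - R) / eps))   # ~1 inside the disc, ~0 outside

# Gradient of the smoothed indicator and the outward unit normal (radial direction).
gx, gy = np.gradient(indicator_eps, dx, dx)
nx, ny = X / np.maximum(r, 1e-12), Y / np.maximum(r, 1e-12)

surface_delta = -(nx * gx + ny * gy)                   # smoothed version of −n · ∇ 1_D
print(surface_delta.sum() * dx * dx, 2 * np.pi * R)    # both ≈ perimeter 2πR
```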
Laplacian of the indicator
[ "Mathematics" ]
2,511
[ "Mathematics of infinitesimals" ]
37,670,745
https://en.wikipedia.org/wiki/Remmert%E2%80%93Stein%20theorem
In complex analysis, a field in mathematics, the Remmert–Stein theorem, introduced by Reinhold Remmert and Karl Stein, gives conditions for the closure of an analytic set to be analytic. The theorem states that if F is an analytic set of dimension less than k in some complex manifold D, and M is an analytic subset of D – F with all components of dimension at least k, then the closure of M is either analytic or contains F. The condition on the dimensions is necessary: for example, the set of points (1/n,0) in the complex plane is analytic in the complex plane minus the origin, but its closure in the complex plane is not. Relations to other theorems A consequence of the Remmert–Stein theorem (also treated in their paper) is Chow's theorem, stating that any projective complex analytic space is necessarily a projective algebraic variety. The Remmert–Stein theorem is implied by a proper mapping theorem due to , see . References Complex manifolds Theorems in complex analysis
Remmert–Stein theorem
[ "Mathematics" ]
203
[ "Theorems in mathematical analysis", "Mathematical analysis", "Theorems in complex analysis", "Mathematical analysis stubs" ]
40,421,510
https://en.wikipedia.org/wiki/Rhodothermus%20marinus
Rhodothermus marinus is a species of bacteria. It is obligately aerobic, moderately halophilic, thermophilic, Gram-negative and rod-shaped, about 0.5 μm in diameter and 2-2.5 μm long. References Further reading External links LPSN Type strain of Rhodothermus marinus at BacDive - the Bacterial Diversity Metadatabase Bacteria described in 1995 Rhodothermota
Rhodothermus marinus
[ "Biology" ]
99
[ "Bacteria stubs", "Bacteria" ]
40,427,167
https://en.wikipedia.org/wiki/Recombinase%20polymerase%20amplification
Recombinase polymerase amplification (RPA) is a single tube, isothermal alternative to the polymerase chain reaction (PCR). By adding a reverse transcriptase enzyme to an RPA reaction, it can detect RNA as well as DNA, without the need for a separate step to produce cDNA. Because it is isothermal, RPA can use much simpler equipment than PCR, which requires a thermal cycler. Operating best at temperatures of 37–42 °C and still working, albeit more slowly, at room temperature means RPA reactions can in theory be run quickly by simply holding a tube in the hand. This makes RPA an excellent candidate for developing low-cost, rapid, point-of-care molecular tests. An international quality assessment of molecular detection of Rift Valley fever virus performed as well as the best RT-PCR tests, detecting less concentrated samples missed by some PCR tests and an RT-LAMP test. RPA was developed and launched by TwistDx Ltd. (formerly known as ASM Scientific Ltd), a biotechnology company based in Cambridge, UK. Technique The RPA process employs three core enzymes – a recombinase, a single-stranded DNA-binding protein (SSB) and strand-displacing polymerase. Recombinases are capable of pairing oligonucleotide primers with homologous sequence in duplex DNA. SSB bind to displaced strands of DNA and prevent the primers from being displaced. Finally, the strand displacing polymerase begins DNA synthesis where the primer has bound to the target DNA. By using two opposing primers, much like PCR, if the target sequence is indeed present, an exponential DNA amplification reaction is initiated. No other sample manipulation such as thermal or chemical melting is required to initiate amplification. At optimal temperatures (37–42 °C), the reaction progresses rapidly and results in specific DNA amplification from just a few target copies to detectable levels, typically within 10 minutes, for rapid detection of viral genomic DNA or RNA, pathogenic bacterial genomic DNA, as well as short length aptamer DNA. The three core RPA enzymes can be supplemented by further enzymes to provide extra functionality. Addition of exonuclease III allows the use of an exo probe for real-time, fluorescence detection akin to real-time PCR. Addition of endonuclease IV means that an nfo probe can be used for lateral flow strip detection of successful amplification. If a reverse transcriptase that works at 37–42 °C is added then RNA can be reverse transcribed and the cDNA produced amplified all in one step. Currently only the TwistAmp exo version of RPA is available with the reverse transcriptase included, although users can simply supplement other TwistAmp reactions with a reverse transcriptase to produce the same effect. As with PCR, all forms of RPA reactions can be multiplexed by the addition of further primer/probe pairs, allowing the detection of multiple analytes or an internal control in the same tube. Relationship to other amplification techniques RPA is one of several isothermal nucleic acid amplification techniques to be developed as a molecular diagnostic technique, frequently with the objective of simplifying the laboratory instrumentation required relative to PCR. A partial list of other isothermal amplification techniques include LAMP, NASBA, helicase-dependent amplification (HDA), and nicking enzyme amplification reaction (NEAR). The techniques differ in the specifics of primer design and reaction mechanism, and in some cases (like RPA) make use of cocktails of two or more enzymes. 
Like RPA, many of these techniques offer rapid amplification times with the potential for simplified instrumentation, and reported resistance to substances in unpurified samples that are known to inhibit PCR. With respect to amplification time, modern thermocyclers with rapid temperature ramps can reduce PCR amplification times to less than 30 minutes, particularly for short amplicons using dual-temperature cycling rather than the conventional three-temperature protocols. In addition, the demands of sample prep (including lysis and extraction of DNA or RNA, if necessary) should be considered as part of the overall time and complexity inherent to the technique. These requirements vary according to the technique as well as to the specific target and sample type. Compared to PCR, the guidelines for primer and probe design for RPA are less established, and may take a certain degree of trial and error, although recent results indicate that standard PCR primers can work as well. The general principle of a discrete amplicon bounded by a forward and reverse primer with an (optional) internal fluorogenic probe is similar to PCR. PCR primers may be used directly in RPA, but their short length means that recombination rates are low and RPA will not be especially sensitive or fast. Typically 30–38 base primers are needed for efficient recombinase filament formation and RPA performance. This is in contrast to some other techniques such as LAMP which use a larger number of primers subject to additional design constraints. Although the original 2006 report of RPA describes a functional set of reaction components, the current (proprietary) formulation of the TwistAmp kit is "substantially different" and is available only from the TwistDx supplier. This is in comparison to reaction mixtures for PCR which are available from many suppliers, or LAMP or NASBA for which the composition of the reaction mixture is freely published, allowing researchers to create their own customized "kits" from inexpensive ingredients. Published scientific literature generally lacks detailed comparison of the performance of isothermal amplification techniques such as RPA, HDA, and LAMP relative to each other, often rather comparing a single isothermal technique to a "gold standard" PCR assay. This makes it difficult to judge the merits of these techniques independently from the claims of the manufacturers, inventors, or proponents. Furthermore, performance characteristics of any amplification technique are difficult to decouple from primer design: a "good" primer set for one target for RPA may give faster amplification or more sensitive detection than a "poor" LAMP primer set for the same target, but the converse may be true for different primer sets for a different target. An exception is a recent study comparing RT-qPCR, RT-LAMP, and RPA for detection of Schmallenberg virus and bovine viral diarrhea virus, which effectively makes the point that each amplification technique has strengths and weaknesses, which may vary by the target, and that the properties of the available amplification techniques need to be evaluated in combination with the requirements for each application. As with PCR and any other amplification technique, there is obviously a publication bias, with poorly performing primer sets rarely deemed worthy of reporting. References Molecular biology Laboratory techniques DNA profiling techniques Biotechnology Molecular biology techniques
Recombinase polymerase amplification
[ "Chemistry", "Biology" ]
1,442
[ "Genetics techniques", "DNA profiling techniques", "Biotechnology", "Molecular biology techniques", "nan", "Molecular biology", "Biochemistry" ]
40,428,285
https://en.wikipedia.org/wiki/List%20of%20cheminformatics%20toolkits
Cheminformatics toolkits are notable software development kits that allow cheminformaticians to develop custom computer applications for use in virtual screening, chemical database mining, and structure-activity studies. Toolkits are often used for experimentation with new methodologies. Their most important functions deal with the manipulation of chemical structures and comparisons between structures. Programmatic access is provided to properties of individual bonds and atoms. Functionality Toolkits provide the following functionality: Read and save structures in various chemistry file formats. Determine if one structure is a substructure of another (substructure matching). Determine if two structures are equal (exact matching). Identification of substructures common to structures in a set (maximal common substructure, MCS). Disassemble molecules, splitting into fragments. Assemble molecules from elements or submolecules. Apply reactions on input reactant structures, resulting in output of reaction product structures. Generate molecular fingerprints. Fingerprints are bit-vectors where individual bits correspond to the presence or absence of structural features. The most important use of fingerprints is in indexing of chemistry databases. List of notable cheminformatics toolkits References Computational chemistry Cheminformatics Drug discovery Cheminformatics toolkits
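The core operations listed above (reading structures, substructure and exact matching, and fingerprint generation) look broadly similar across toolkits. The snippet below sketches them with RDKit, one widely used open-source toolkit; it is intended only as an illustration of the general workflow, and the molecules, fingerprint radius and bit-vector length are arbitrary example choices rather than recommendations.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

# Read structures from a chemistry exchange format (SMILES here).
aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
benzene = Chem.MolFromSmiles("c1ccccc1")

# Substructure matching: is benzene a substructure of aspirin?
print(aspirin.HasSubstructMatch(benzene))                      # True

# Exact matching via canonical SMILES comparison.
print(Chem.MolToSmiles(aspirin) == Chem.MolToSmiles(benzene))  # False

# Fingerprints: bit vectors encoding structural features, used for
# database indexing and similarity searching.
fp1 = AllChem.GetMorganFingerprintAsBitVect(aspirin, 2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(benzene, 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fp1, fp2))

# Programmatic access to individual atoms and bonds.
print(aspirin.GetNumAtoms(), aspirin.GetNumBonds())
print([atom.GetSymbol() for atom in aspirin.GetAtoms()][:5])
```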
List of cheminformatics toolkits
[ "Chemistry", "Biology" ]
258
[ "Life sciences industry", "Drug discovery", "Theoretical chemistry", "Computational chemistry", "Cheminformatics", "nan", "Medicinal chemistry" ]
40,428,588
https://en.wikipedia.org/wiki/Dynamical%20pictures
In quantum mechanics, dynamical pictures (or representations) are the multiple equivalent ways to mathematically formulate the dynamics of a quantum system. The two most important ones are the Heisenberg picture and the Schrödinger picture. These differ only by a basis change with respect to time-dependency, analogous to the Lagrangian and Eulerian specification of the flow field: in short, time dependence is attached to quantum states in the Schrödinger picture and to operators in the Heisenberg picture. There is also an intermediate formulation known as the interaction picture (or Dirac picture) which is useful for doing computations when a complicated Hamiltonian has a natural decomposition into a simple "free" Hamiltonian and a perturbation. Equations that apply in one picture do not necessarily hold in the others, because time-dependent unitary transformations relate operators in one picture to the analogous operators in the others. Not all textbooks and articles make explicit which picture each operator comes from, which can lead to confusion. Schrödinger picture Background In elementary quantum mechanics, the state of a quantum-mechanical system is represented by a complex-valued wavefunction . More abstractly, the state may be represented as a state vector, or ket, |ψ⟩. This ket is an element of a Hilbert space, a vector space containing all possible states of the system. A quantum-mechanical operator is a function which takes a ket |ψ⟩ and returns some other ket |ψ′⟩. The differences between the Schrödinger and Heiseinberg pictures of quantum mechanics revolve around how to deal with systems that evolve in time: the time-dependent nature of the system must be carried by some combination of the state vectors and the operators. For example, a quantum harmonic oscillator may be in a state |ψ⟩ for which the expectation value of the momentum, , oscillates sinusoidally in time. One can then ask whether this sinusoidal oscillation should be reflected in the state vector |ψ⟩, the momentum operator , or both. All three of these choices are valid; the first gives the Schrödinger picture, the second the Heisenberg picture, and the third the interaction picture. The Schrödinger picture is useful when dealing with a time-independent Hamiltonian , that is, . The time evolution operator Definition The time-evolution operator U(t, t0) is defined as the operator which acts on the ket at time t0 to produce the ket at some other time t: For bras, we instead have Properties Unitarity The time evolution operator must be unitary. This is because we demand that the norm of the state ket must not change with time. That is, Therefore, Identity When t = t0, U is the identity operator, since Closure Time evolution from t0 to t may be viewed as a two-step time evolution, first from t0 to an intermediate time t1, and then from t1 to the final time t. Therefore, Differential equation for time evolution operator We drop the t0 index in the time evolution operator with the convention that and write it as U(t). The Schrödinger equation is where H is the Hamiltonian. Now using the time-evolution operator U to write , we have Since is a constant ket (the state ket at ), and since the above equation is true for any constant ket in the Hilbert space, the time evolution operator must obey the equation If the Hamiltonian is independent of time, the solution to the above equation is Since H is an operator, this exponential expression is to be evaluated via its Taylor series: Therefore, Note that is an arbitrary ket. 
However, if the initial ket is an eigenstate of the Hamiltonian, with eigenvalue E, we get: Thus we see that the eigenstates of the Hamiltonian are stationary states: they only pick up an overall phase factor as they evolve with time. If the Hamiltonian is dependent on time, but the Hamiltonians at different times commute, then the time evolution operator can be written as If the Hamiltonian is dependent on time, but the Hamiltonians at different times do not commute, then the time evolution operator can be written as where T is time-ordering operator, which is sometimes known as the Dyson series, after Freeman Dyson. The alternative to the Schrödinger picture is to switch to a rotating reference frame, which is itself being rotated by the propagator. Since the undulatory rotation is now being assumed by the reference frame itself, an undisturbed state function appears to be truly static. This is the Heisenberg picture (below). Heisenberg picture The Heisenberg picture is a formulation (made by Werner Heisenberg while on Heligoland in the 1920s) of quantum mechanics in which the operators (observables and others) incorporate a dependency on time, but the state vectors are time-independent. Definition In the Heisenberg picture of quantum mechanics the state vector, , does not change with time, and an observable A satisfies where H is the Hamiltonian and [•,•] denotes the commutator of two operators (in this case H and A). Taking expectation values yields the Ehrenfest theorem featured in the correspondence principle. By the Stone–von Neumann theorem, the Heisenberg picture and the Schrödinger picture are unitarily equivalent. In some sense, the Heisenberg picture is more natural and convenient than the equivalent Schrödinger picture, especially for relativistic theories. Lorentz invariance is manifest in the Heisenberg picture. This approach also has a more direct similarity to classical physics: by replacing the commutator above by the Poisson bracket, the Heisenberg equation becomes an equation in Hamiltonian mechanics. Derivation of Heisenberg's equation The expectation value of an observable A, which is a Hermitian linear operator for a given state , is given by In the Schrödinger picture, the state at time t is related to the state at time 0 by a unitary time-evolution operator, : If the Hamiltonian does not vary with time, then the time-evolution operator can be written as where H is the Hamiltonian and ħ is the reduced Planck constant. Therefore, Define, then, It follows that Differentiation was according to the product rule, while ∂A/∂t is the time derivative of the initial A, not the A(t) operator defined. The last equation holds since exp(−iHt/ħ) commutes with H. Thus whence the above Heisenberg equation of motion emerges, since the convective functional dependence on x(0) and p(0) converts to the same dependence on x(t), p(t), so that the last term converts to ∂A(t)/∂t . [X, Y] is the commutator of two operators and is defined as . The equation is solved by the A(t) defined above, as evident by use of the standard operator identity, which implies This relation also holds for classical mechanics, the classical limit of the above, given the correspondence between Poisson brackets and commutators, In classical mechanics, for an A with no explicit time dependence, so, again, the expression for A(t) is the Taylor expansion around t = 0. Commutator relations Commutator relations may look different from in the Schrödinger picture, because of the time dependence of operators. 
For example, consider the operators and . The time evolution of those operators depends on the Hamiltonian of the system. Considering the one-dimensional harmonic oscillator, , the evolution of the position and momentum operators is given by: , . Differentiating both equations once more and solving for them with proper initial conditions, leads to , . Direct computation yields the more general commutator relations, , , . For , one simply recovers the standard canonical commutation relations valid in all pictures. Interaction picture The interaction Picture is most useful when the evolution of the observables can be solved exactly, confining any complications to the evolution of the states. For this reason, the Hamiltonian for the observables is called "free Hamiltonian" and the Hamiltonian for the states is called "interaction Hamiltonian". Definition Operators and state vectors in the interaction picture are related by a change of basis (unitary transformation) to those same operators and state vectors in the Schrödinger picture. To switch into the interaction picture, we divide the Schrödinger picture Hamiltonian into two parts, Any possible choice of parts will yield a valid interaction picture; but in order for the interaction picture to be useful in simplifying the analysis of a problem, the parts will typically be chosen so that is well understood and exactly solvable, while contains some harder-to-analyze perturbation to this system. If the Hamiltonian has explicit time-dependence (for example, if the quantum system interacts with an applied external electric field that varies in time), it will usually be advantageous to include the explicitly time-dependent terms with , leaving time-independent. We proceed assuming that this is the case. If there is a context in which it makes sense to have be time-dependent, then one can proceed by replacing by the corresponding time-evolution operator in the definitions below. State vectors A state vector in the interaction picture is defined as where is the same state vector as in the Schrödinger picture. Operators An operator in the interaction picture is defined as Note that will typically not depend on t, and can be rewritten as just . It only depends on t if the operator has "explicit time dependence", for example due to its dependence on an applied, external, time-varying electric field. Hamiltonian operator For the operator itself, the interaction picture and Schrödinger picture coincide, This is easily seen through the fact that operators commute with differentiable functions of themselves. This particular operator then can be called H0 without ambiguity. For the perturbation Hamiltonian H1,I, however, where the interaction picture perturbation Hamiltonian becomes a time-dependent Hamiltonian—unless [H1,s, H0,s] = 0 . It is possible to obtain the interaction picture for a time-dependent Hamiltonian H0,s(t) as well, but the exponentials need to be replaced by the unitary propagator for the evolution generated by H0,s(t), or more explicitly with a time-ordered exponential integral. Density matrix The density matrix can be shown to transform to the interaction picture in the same way as any other operator. In particular, let and be the density matrix in the interaction picture and the Schrödinger picture, respectively. 
If there is probability to be in the physical state , then Time-evolution equations States Transforming the Schrödinger equation into the interaction picture gives: This equation is referred to as the Schwinger–Tomonaga equation. Operators If the operator is time independent (i.e., does not have "explicit time dependence"; see above), then the corresponding time evolution for is given by: In the interaction picture the operators evolve in time like the operators in the Heisenberg picture with the Hamiltonian . Density matrix Transforming the Schwinger–Tomonaga equation into the language of the density matrix (or equivalently, transforming the von Neumann equation into the interaction picture) gives: Existence The interaction picture does not always exist. In interacting quantum field theories, Haag's theorem states that the interaction picture does not exist. This is because the Hamiltonian cannot be split into a free and an interacting part within a superselection sector. Moreover, even if in the Schrödinger picture the Hamiltonian does not depend on time, e.g. , in the interaction picture it does, at least, if does not commute with , since . Comparison of pictures The Heisenberg picture is closest to classical Hamiltonian mechanics (for example, the commutators appearing in the above equations directly correspond to classical Poisson brackets). The Schrödinger picture, the preferred formulation in introductory texts, is easy to visualize in terms of Hilbert space rotations of state vectors, although it lacks natural generalization to Lorentz invariant systems. The Dirac picture is most useful in nonstationary and covariant perturbation theory, so it is suited to quantum field theory and many-body physics. Summary comparison of evolutions Equivalence It is evident that the expected values of all observables are the same in the Schrödinger, Heisenberg, and Interaction pictures, as they must. See also Hamilton–Jacobi equation Bra-ket notation Notes References Albert Messiah, 1966. Quantum Mechanics (Vol. I), English translation from French by G. M. Temmer. North Holland, John Wiley & Sons. Merzbacher E., Quantum Mechanics (3rd ed., John Wiley 1998) p. 430-1 Online copy R. Shankar (1994); Principles of Quantum Mechanics, Plenum Press, . J. J. Sakurai (1993); Modern Quantum Mechanics (Revised Edition), . External links Pedagogic Aides to Quantum Field Theory Click on the link for Chap. 2 to find an extensive, simplified introduction to the Heisenberg picture. Quantum mechanics
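The equivalence of the pictures summarised above is easy to verify numerically for a small system. The following sketch is an illustration only (the two-level Hamiltonian, its splitting into H0 and H1, the observable and the time are arbitrary example choices); it computes the same expectation value in the Schrödinger, Heisenberg and interaction pictures and checks that the three agree.

```python
import numpy as np
from scipy.linalg import expm

hbar = 1.0
# Example two-level system: H = H0 + H1 (arbitrary Hermitian matrices).
H0 = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
H1 = np.array([[0.0, 0.3], [0.3, 0.0]], dtype=complex)
H = H0 + H1
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)   # observable (sigma_x)
psi0 = np.array([1.0, 0.0], dtype=complex)              # initial state

t = 2.7
U = expm(-1j * H * t / hbar)          # full evolution operator
U0 = expm(-1j * H0 * t / hbar)        # free evolution (defines the interaction picture)

# Schrödinger picture: states evolve, operators fixed.
psi_S = U @ psi0
exp_S = np.vdot(psi_S, A @ psi_S).real

# Heisenberg picture: operators evolve, states fixed.
A_H = U.conj().T @ A @ U
exp_H = np.vdot(psi0, A_H @ psi0).real

# Interaction picture: both the state and the operator carry part of the evolution.
psi_I = U0.conj().T @ U @ psi0        # |psi_I(t)> = U0† U |psi(0)>
A_I = U0.conj().T @ A @ U0
exp_I = np.vdot(psi_I, A_I @ psi_I).real

print(exp_S, exp_H, exp_I)            # all three expectation values agree
```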
Dynamical pictures
[ "Physics" ]
2,784
[ "Theoretical physics", "Quantum mechanics" ]
40,435,447
https://en.wikipedia.org/wiki/ARGO-HYTOS
The ARGO-HYTOS Group produces and develops components and systems for the hydraulic industry. Its headquarters are in Switzerland and production sites are in Germany, Czech Republic, India, China and Brazil. History The present ARGO-HYTOS group has its seeds in the former company Argo GmbH for precision mechanics. It was established in Stuttgart on July 7, 1947, at the request of Herbert Kienzle, the managing director of the Kienzle Apparate, and has developed from the former sales company for Kienzle-Taximeters. Argo originally was a brand name of the predecessor company Kienzle Apparate for manufacturing of taximeters, referring to the Greek legend of Jason and his incredibly fast ship. An autonomous production program for magnetic filters and strainers was developed; in 1952 Argo received a patent for magnetic filters. The product portfolio was expanded to hydraulic filters for the mobile and industrial hydraulics, which were produced in a former cigar factory in Kraichtal/Germany. In 1965 the head office was also transferred to Kraichtal. The company expanded internationally: At the beginning of the 1980s the first own sales company was established abroad in France, followed by more companies in the Netherlands, Great Britain, in the US, Sweden, Hong Kong, Italy, Poland and China. Development In 1990 Christian H. Kienzle took over the position of the managing director and has managed the ARGO-HYTOS group up until now. In 1993 the Argo GmbH for precision mechanics was renamed as Argo GmbH for fluid technology. In the same year Argo purchased shares in the Czech hydraulic manufacturer Hytos, a company already having a 50-year experience in the production of components for Fluid & Motion Control. Since 2003 both companies have traded under the name ARGO-HYTOS. Locations The ARGO-HYTOS group operates globally with numerous sales companies all over the world and more than 100 international distributors. The group is headquartered in Switzerland and employs more than 1,200 staff members. ARGO-HYTOS maintains locations with production plants in Kraichtal (Germany) - approx. 450 employees Ostrava (Czech Republic) Vrchlabí (Czech Republic) Zator (Poland) Coimbatore (India) Yangzhou (China) Portfolio and customers The portfolio of ARGO-HYTOS includes hydraulic filters and filter elements, components for Fluid & Motion Control, Fluid Management Systems, Sensors- and Measurement Technology as well as systems for Wind Energy Plants. The company's customers include manufacturers of agricultural machinery, production machines and machine tools, construction machines, municipal engineering and energy production. Literature Armin Müller: Kienzle. Ein deutsches Industrieunternehmen im 20. Jahrhundert. Franz Steiner Verlag, Stuttgart 2011, S. 221–225. . Bernhard Foitzik: Filtration Technology for Hydraulic Systems: Optimum concepts for filtration systems in fluid power technology. 2nd edition, Verlag Moderne Industrie, Landsberg/Lech 2000, . References External links History brochure with detailed company history (PDF) Hydraulic engineering Manufacturing companies established in 1947 Engineering companies of Switzerland Swiss companies established in 1947
ARGO-HYTOS
[ "Physics", "Engineering", "Environmental_science" ]
659
[ "Hydrology", "Physical systems", "Hydraulics", "Civil engineering", "Hydraulic engineering" ]
48,009,177
https://en.wikipedia.org/wiki/Coprococcus%20eutactus
Coprococcus (ATCC 27759) is a genus of anaerobic cocci which are all part of the human faecal flora, but rarely seen in human clinical specimens. "Coprococcus includes those gram-positive, anaerobic cocci that actively ferment carbohydrates, producing butyric and acetic acids with formic or propionic and/or lactic acids. Fermentable carbohydrates are either required or are highly stimulatory for growth and continued subculture." - Lillian V. Holdeman & W. E. C. Moore. The genus is bio-chemically closely related to Ruminococcus, and phylogenetically to the genus Lachnospira. Coprococcus eutactus is an obligately anaerobic, nonmotile, gram-positive coccus occurring in pairs or chains of pairs. Cells may lose colour readily and acquire a slightly elongate shape in a medium containing a fermentable carbohydrate, but are normally round, and 0.7 to 1.3 μm in diameter. Coprococcus may be used as a microbial biomarker to assess the health of the human gastro-intestinal tract. Gut microorganisms maintain gastro-intestinal health and the mounting evidence of gastro-intestinal problems in autistic children makes a link between autism and intestinal microbiota highly probable, but the paucity of data on intestinal microflora means a definite link has not yet been demonstrated. Early studies overlooked potentially beneficial gut flora missing in autistic children. Coprococcus, specifically Coprococcus eutactus, may impact on the desire to exercise by augmenting dopamine activity during physical activity. Coprococcus species C. catus Holdeman & Moore C. comes Holdeman & Moore Etymology 'kopros' - excrement, faeces; 'kokkos' - berry; 'Coprococcus' - faecal coccus 'eutaktos' - orderly, well-disciplined (referring to the uniform reactions of the different strains) References Gut flora bacteria Lachnospiraceae Bacteria described in 1974
Coprococcus eutactus
[ "Biology" ]
467
[ "Gut flora bacteria", "Bacteria" ]
48,010,091
https://en.wikipedia.org/wiki/Virtual%20machining
Virtual machining is the practice of using computers to simulate and model the use of machine tools for part manufacturing. Such activity replicates the behavior and errors of a real environment in virtual reality systems. This can provide useful ways to manufacture products without physical testing on the shop floor. As a result, time and cost of part production can be decreased. Applications Virtual machining provides various benefits: Simulated machining process in virtual environments reveals errors without wasting materials, damaging machine tools, or putting workers at risk. A computer simulation helps improve accuracy in the produced part. Virtual inspection systems such as surface finish, surface metrology, and waviness can be applied to the simulated parts in virtual environments to increase accuracy. Systems can augment process planning of machining operations with regards to the desired tolerances of part designing. Virtual machining system can be used in process planning of machining operations by considering the most suitable steps of machining operations with regard to the time and cost of part manufacturing. Optimization techniques can be applied to the simulated machining process to increase efficiency of parts production. Finite element method (FEM) can be applied to the simulated machining process in virtual environments to analyze stress and strain of the machine tool, workpiece and cutting tool. Accuracy of mathematical error modeling in prediction of machined surfaces can be analyzed by using the virtual machining systems. Machining operations of flexible materials can be analyzed in virtual environments to increase accuracy of part manufacturing. Vibrations of machine tools as well as possibility of chatter along cutting tool paths in machining operations can be analyzed by using simulated machining operations in virtual environments. Time and cost of accurate production can be decreased by applying rules of production process management to the simulated manufacturing process in the virtual environment. Feed rate scheduling systems based on virtual machining can also be presented to increase accuracy as well as efficiency of part manufacturing. Material removal rate in machining operations of complex surfaces can be simulated in virtual environments for analysis and optimization. Efficiency of part manufacturing can be improved by analyzing and optimizing production methods. Errors in actual machined parts can be simulated in virtual environments for analysis and compensation. Simulated machining centers in virtual environments can be connected by the network and Internet for remote analysis and modification. Elements and structures of machine tools such as spindle, rotation axis, moving axes, ball screw, numerical control unit, electric motors (step motor and servomotor), bed and et al. can be simulated in virtual environments so they can be analyzed and modified. As a result, optimized versions of machine tool elements can boost levels of technology in part manufacturing. Geometry of cutting tools can be analyzed and modified as a result of simulated cutting forces in virtual environments. Thus, machining time as well as surface roughness can be minimized and tool life can be maximized due to decreasing cutting forces by modified geometries of cutting tools. 
Also, the modified versions of cutting tool geometries with regards to minimizing cutting forces can decrease cost of cutting tools by presenting a wider range of acceptable materials for cutting tools such as high-speed steel, carbon tool steels, cemented carbide, ceramic, cermet and et al. The generated heat in engagement areas of cutting tool and workpiece can be simulated, analyzed, and decreased. Tool life can be maximized as a result of decreasing generated heat in engagement areas of cutting tool and workpiece. Machining strategies can be analyzed and modified in virtual environments in terms of collision detection processes. 3D vision of machining operations with errors of actual machined parts and tool deflection error in virtual environments can help designers as well as machining strategists to analyze and modify the process of part production. Virtual machining can augment the experience and training of novice machine tool operators in a virtual machining training system. To increase added value in processes of part production, energy consumption of machine tools can be simulated and analyzed in virtual environments by presenting an efficient energy use machine tool. Machining strategies of freeform surfaces can be analyzed and optimized in virtual environments to increase accuracy of part manufacturing. Future research works Some suggestions for the future studies in virtual machining systems are presented as: Machining operations of new alloy can be simulated in virtual environments for study. As a result, deformation, surface properties and residue stress of new alloy can be analyzed and modified. New material of cutting tool can be simulated and analyzed in virtual environments. Thus, tool deflection error of new cutting tools along machining paths can be studied without the need of actual machining operations. Deformation and deflections of large workpieces can be simulated and analyzed in virtual environments. Machining operations of expensive materials such as gold as well as superalloys can be simulated in virtual environments to predict real machining conditions without the need of shop floor testing. References External links Virtual Machining, Automation World AMGM Institute, Virtual Machining MACHpro: THE VIRTUAL MACHINING SYSTEM The Virtual Machine Shop The 5th International Conference on Virtual Machining Process Technology (VMPT 2016) Eureka Virtual Machining SIMNC Products Overview, Virtual Machining Operating system technology Programming language implementation Virtualization
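Two of the quantities mentioned above, material removal rate and static tool deflection, can be estimated with back-of-the-envelope formulas before any detailed simulation is run. The sketch below is a deliberately crude illustration (a cantilever-beam tool model with invented cutting force, tool dimensions and material values); real virtual machining systems rely on far more detailed models, such as finite element analysis along the simulated tool path.

```python
import numpy as np

# Toy estimates only, not a substitute for a virtual machining system.

# Material removal rate for a milling-type operation (mm^3/min):
#   MRR = axial depth of cut * radial width of cut * feed rate
depth_mm, width_mm, feed_mm_per_min = 2.0, 10.0, 300.0
mrr = depth_mm * width_mm * feed_mm_per_min
print(f"material removal rate ~ {mrr:.0f} mm^3/min")

# Static deflection of the tool modelled as a cantilever of length L and
# diameter d under a lateral cutting force F:  delta = F * L^3 / (3 * E * I)
F_newton = 200.0                  # assumed lateral cutting force
L_mm, d_mm = 40.0, 10.0           # assumed tool overhang and diameter
E_mpa = 600e3                     # assumed Young's modulus (solid carbide, ~600 GPa)
I_mm4 = np.pi * d_mm**4 / 64.0    # second moment of area of a solid circular section
deflection_mm = F_newton * L_mm**3 / (3.0 * E_mpa * I_mm4)
print(f"estimated static tool deflection ~ {deflection_mm * 1000:.1f} micrometres")
```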
Virtual machining
[ "Engineering" ]
1,040
[ "Computer networks engineering", "Virtualization" ]
48,010,172
https://en.wikipedia.org/wiki/VSL%20International
VSL International (for Vorspann System Losinger) is a specialist construction company founded in 1954. VSL contributes to engineering, building, repairing, upgrading and preserving transport infrastructure (bridges, tunnels, retained earth walls for roads), buildings and energy production facilities. Based in Switzerland, VSL is owned by French construction company Bouygues. VSL specialises in post-tensioned concrete, stay-cable systems and heavy lifting, while its subsidiary Intrafor focuses on ground engineering and foundations. The company has also developed its proprietary systems, mostly related to post-tensioning and stay-cable, and has 370 patents. History In 1943, the Swiss construction company Losinger started to build post-tensioned bridges and began the development of its own post-tensioning system. The patent of this wire-based system was registered in 1954 and Vorspann System Losinger was created to help develop this activity. The patent was first applied in 1956 for the construction of the Pont des Cygnes bridge in Yverdon, Switzerland. In 1966, VSL launched its strand post-tensioning system. In 1978, VSL's stay cable system was installed for the first time on the Liebrüti bridge in Kaiseraugst, Switzerland. While the company developed internationally from the 1970s, in 1991, VSL joined the Bouygues group following the purchase of Losinger. In 2001, VSL diversified its activity to ground engineering. Organization VSL is headquartered in Bern in Switzerland, where the company was founded. In 2015, Jean-Yves Mondon was appointed chief executive officer. VSL operates in more than 30 countries, mostly in Asia, Oceania, the Middle East, Europe and South America. Key figures Workforce: 4,000 employees Patents: 370 3 manufacturing plants (in China, Spain and Thailand) 1 technical centre with offices in Switzerland, Singapore, Hong Kong and Spain Activities VSL's activities are organized in 4 business lines to engineer, build, repair, upgrade and preserve transport infrastructure (bridges, tunnels, retained earth walls for roads), buildings and energy production facilities: Systems and technologies: post-tensioning systems, stay cables, damping systems for buildings and civil works, bearings and joints. Construction: bridges, buildings, containment structures, offshore structures, heavy lifting. Ground engineering: foundations, ground improvement, ground investigation, mechanically stabilised earth walls, ground anchors. Repair, strengthening and preservation, including structural diagnostic, upgrade and retrofitting (de-icing, fire protection…) and monitoring. 
Projects Among the most important projects VSL has carried out or taken part in are: Ganter Bridge, in Valais, Switzerland (1980) Tsing Ma Bridge, Hong Kong (1994) Petronas Towers' sky bridge, in Kuala Lumpur, Malaysia (1995) Burj Al Arab hotel (1997) Stadium Australia, in Sydney, Australia (1998) Dubai Metro, UAE (2006) Venetian Macao resort hotel, China (2007) Second Gateway Bridge / Gateway Bridge duplication, in Brisbane, Australia (2010) Marina Bay Sands, Singapore (2010) Hodariyat Bridge, in Abu Dhabi, United Arab Emirates (2012) Baluarte Bridge, Mexico (2012), the highest cable-stayed bridge in the world Newmarket Viaduct replacement, in Auckland, New Zealand (2012) Queensferry Crossing, United Kingdom (2017) Bandra–Worli Sea Link, Mumbai, India (2009) Cable-stayed bridge on the Mumbai Metro, India (2012) HCMC Metro Line 1, Vietnam (2017) Other known projects include: The Dubai Mall, the world's largest shopping mall Incheon Bridge, South Korea Kai Tak Cruise Terminal, Hong Kong Nhật Tân Bridge, Vietnam Rạch Miễu Bridge, Vietnam Stonecutters Bridge, Hong Kong Tarban Creek Bridge, Australia Wadi Leban Bridge, Saudi Arabia VSL also provided heavy lifting (with hydraulic jacks) for fifteen segments of the CMS detector of the Large Hadron Collider. See also :Category:Cable-stayed bridges References External links VSL Swiss companies established in 1954 Bouygues Bridge companies Companies based in Bern Construction and civil engineering companies established in 1954 Concrete pioneers Construction and civil engineering companies of Switzerland Structural steel
VSL International
[ "Engineering" ]
869
[ "Structural engineering", "Structural steel" ]
35,215,206
https://en.wikipedia.org/wiki/Chevalley%E2%80%93Iwahori%E2%80%93Nagata%20theorem
In mathematics, the Chevalley–Iwahori–Nagata theorem states that if a linear algebraic group G is acting linearly on a finite-dimensional vector space V, then the map from V/G to the spectrum of the ring of invariant polynomials is an isomorphism if this ring is finitely generated and all orbits of G on V are closed. It is named after Claude Chevalley, Nagayoshi Iwahori, and Masayoshi Nagata. References Invariant theory Theorems in algebraic geometry
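A minimal LaTeX rendering of the map in question (the notation is assumed here for illustration and is not spelled out in the article):

```latex
% Sketch of the statement, with k[V]^G the ring of invariant polynomials and V/G the orbit space.
\[
  \pi \colon V/G \;\longrightarrow\; \operatorname{Spec}\bigl(k[V]^{G}\bigr),
\]
% the morphism induced by the inclusion of k[V]^G into k[V]; the theorem gives conditions
% (finite generation of k[V]^G and closedness of every G-orbit in V) under which \pi is an isomorphism.
```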
Chevalley–Iwahori–Nagata theorem
[ "Physics", "Mathematics" ]
107
[ "Theorems in algebraic geometry", "Symmetry", "Group actions", "Theorems in geometry", "Invariant theory" ]
35,215,267
https://en.wikipedia.org/wiki/Young%E2%80%93Deruyts%20development
In mathematics, the Young–Deruyts development is a method of writing invariants of an action of a group on an n-dimensional vector space V in terms of invariants depending on at most n–1 vectors. References Invariant theory
Young–Deruyts development
[ "Physics" ]
50
[ "Invariant theory", "Group actions", "Symmetry" ]
35,215,342
https://en.wikipedia.org/wiki/Gram%27s%20theorem
In mathematics, Gram's theorem states that an algebraic set in a finite-dimensional vector space invariant under some linear group can be defined by absolute invariants. It is named after J. P. Gram, who published it in 1874. References Reprinted by Academic Press (1971). Invariant theory Theorems in algebraic geometry
Gram's theorem
[ "Physics", "Mathematics" ]
70
[ "Theorems in algebraic geometry", "Symmetry", "Group actions", "Theorems in geometry", "Invariant theory" ]
35,215,473
https://en.wikipedia.org/wiki/Bracket%20ring
In mathematical invariant theory, the bracket ring is the subring of the ring of polynomials k[x11,...,xdn] generated by the d-by-d minors of a generic d-by-n matrix (xij). The bracket ring may be regarded as the ring of polynomials on the image of a Grassmannian under the Plücker embedding. For given d ≤ n we define as formal variables the brackets [λ1 λ2 ... λd] with the λ taken from {1,...,n}, subject to [λ1 λ2 ... λd] = − [λ2 λ1 ... λd] and similarly for other transpositions. The set Λ(n,d) of these brackets, whose size is the binomial coefficient n!/(d!(n−d)!), generates a polynomial ring K[Λ(n,d)] over a field K. There is a homomorphism Φ(n,d) from K[Λ(n,d)] to the polynomial ring K[xi,j] in nd indeterminates given by mapping [λ1 λ2 ... λd] to the determinant of the d by d matrix consisting of the columns of the xi,j indexed by the λ. The bracket ring B(n,d) is the image of Φ. The kernel I(n,d) of Φ encodes the relations or syzygies that exist between the minors of a generic n by d matrix. The projective variety defined by the ideal I is the (n−d)d dimensional Grassmann variety whose points correspond to d-dimensional subspaces of an n-dimensional space. To compute with brackets it is necessary to determine when an expression lies in the ideal I(n,d). This is achieved by a straightening law due to Young (1928). See also Bracket algebra References Invariant theory Algebraic geometry
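As an illustration of the syzygies lying in the kernel I(n,d), the following Python sketch (using SymPy; variable names are chosen here for illustration) verifies the classical Grassmann–Plücker relation among the six 2-by-2 minors of a generic 2-by-4 matrix, the simplest nontrivial case n = 4, d = 2.

```python
import sympy as sp

# Generic 2-by-4 matrix (d = 2, n = 4) with symbolic entries x_ij.
x = sp.Matrix(2, 4, lambda i, j: sp.Symbol(f"x{i + 1}{j + 1}"))

def bracket(a, b):
    # [a b]: determinant of the 2-by-2 minor formed by columns a and b (1-based).
    return x.extract([0, 1], [a - 1, b - 1]).det()

# The Grassmann-Pluecker relation [12][34] - [13][24] + [14][23] = 0
# is a syzygy between the minors, i.e. an element of the kernel I(4, 2).
relation = (bracket(1, 2) * bracket(3, 4)
            - bracket(1, 3) * bracket(2, 4)
            + bracket(1, 4) * bracket(2, 3))
print(sp.expand(relation))  # prints 0
```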
Bracket ring
[ "Physics", "Mathematics" ]
389
[ "Symmetry", "Group actions", "Fields of abstract algebra", "Algebraic geometry", "Invariant theory" ]
35,215,882
https://en.wikipedia.org/wiki/Isothermal%20microcalorimetry
Isothermal microcalorimetry (IMC) is a laboratory method for real-time monitoring and dynamic analysis of chemical, physical and biological processes. Over a period of hours or days, IMC determines the onset, rate, extent and energetics of such processes for specimens in small ampoules (e.g. 3–20 ml) at a constant set temperature (c. 15 °C–150 °C). IMC accomplishes this dynamic analysis by measuring and recording vs. elapsed time the net rate of heat flow (μJ/s = μW) to or from the specimen ampoule, and the cumulative amount of heat (J) consumed or produced. IMC is a powerful and versatile analytical tool for four closely related reasons: All chemical and physical processes are either exothermic or endothermic—produce or consume heat. The rate of heat flow is proportional to the rate of the process taking place. IMC is sensitive enough to detect and follow either slow processes (reactions proceeding at a few % per year) in a few grams of material, or processes which generate minuscule amounts of heat (e.g. metabolism of a few thousand living cells). IMC instruments generally have a huge dynamic range—heat flows as low as ca. 1 μW and as high as ca. 50,000 μW can be measured by the same instrument. The IMC method of studying rates of processes is thus broadly applicable, provides real-time continuous data, and is sensitive. The measurement is simple to make, takes place unattended and is non-interfering (e.g. no fluorescent or radioactive markers are needed). However, there are two main caveats that must be heeded in use of IMC: Missed data: If externally prepared specimen ampoules are used, it takes ca. 40 minutes to slowly introduce an ampoule into the instrument without significant disturbance of the set temperature in the measurement module. Thus any processes taking place during this time are not monitored. Extraneous data: IMC records the aggregate net heat flow produced or consumed by all processes taking place within an ampoule. Therefore, in order to be sure what process or processes are producing the measured heat flow, great care must be taken in both experimental design and in the initial use of related chemical, physical and biologic assays. In general, possible applications of IMC are only limited by the imagination of the person who chooses to employ it as an analytical tool and the physical constraints of the method. Besides the two general limitations (main caveats) described above, these constraints include specimen and ampoule size, and the temperatures at which measurements can be made. IMC is generally best suited to evaluating processes which take place over hours or days. IMC has been used in an extremely wide range of applications, and many examples are discussed in this article, supported by references to published literature. Applications discussed range from measurement of slow oxidative degradation of polymers and instability of hazardous industrial chemicals to detection of bacteria in urine and evaluation of the effects of drugs on parasitic worms. The present emphasis in this article is applications of the latter type—biology and medicine. Overview Definition, purpose, and scope Calorimetry is the science of measuring the heat of chemical reactions or physical changes. Calorimetry is performed with a calorimeter. 
Isothermal microcalorimetry (IMC) is a laboratory method for real-time, continuous measurement of the heat flow rate (μJ/s = μW) and cumulative amount of heat (J) consumed or produced at essentially constant temperature by a specimen placed in an IMC instrument. Such heat is due to chemical or physical changes taking place in the specimen. The heat flow is proportional to the aggregate rate of changes taking place at a given time. The aggregate heat produced during a given time interval is proportional to the cumulative amount of aggregate changes which have taken place. IMC is thus a means for dynamic, quantitative evaluation of the rates and energetics of a broad range of rate processes, including biological processes. A rate process is defined here as a physical and/or chemical change whose progress over time can be described either empirically or by a mathematical model (Bibliography: Glasstone, et al. 1941 and Johnson, et al. 1974 and rate equation). The simplest use of IMC is detecting that one or more rate processes are taking place in a specimen because heat is being produced or consumed at a rate that is greater than the detection limit of the instrument used. This can be useful, for example, as a general indicator that a solid or liquid material is not inert but instead is changing at a given temperature. In biological specimens containing a growth medium, appearance over time of a detectable and rising heat flow signal is a simple general indicator of the presence of some type of replicating cells. However, for most applications it is paramount to know, by some means, what process or processes are being measured by monitoring heat flow. In general this entails first having detailed physical, chemical and biological knowledge of the items placed in an IMC ampoule before it is placed in an IMC instrument for evaluation of heat flow over time. It is also then necessary to analyze the ampoule contents after IMC measurements of heat flow have been made for one or more periods of time. Also, logic-based variations in ampoule contents can be used to identify the specific source or sources of heat flow. When rate process and heat flow relationships have been established, it is then possible to rely directly on the IMC data. What IMC can measure in practice depends in part on specimen dimensions, which are necessarily constrained by instrument design. A given commercial instrument typically accepts specimens of up to a fixed diameter and height. Instruments accepting specimens with dimensions of up to ca. 1 or 2 cm in diameter x ca. 5 cm in height are typical. In a given instrument larger specimens of a given type usually produce greater heat flow signals, and this can augment detection and precision. Frequently, specimens are simple 3 to 20 ml cylindrical ampoules (Fig. 1) containing materials whose rate processes are of interest—e.g. solids, liquids, cultured cells—or any combination of these or other items expected to result in production or consumption of heat. Many useful IMC measurements can be carried out using simple sealed ampoules, and glass ampoules are common since glass is not prone to undergoing heat-producing chemical or physical changes. However, metal or polymeric ampoules are sometimes employed. Also, instrument/ampoule systems are available which allow injection or controlled through-flow of gases or liquids and/or provide specimen mechanical stirring. Commercial IMC instruments allow heat flow measurements at temperatures ranging from ca. 15 °C – 150 °C. 
The range for a given instrument may be somewhat different. IMC is extremely sensitive – e.g. heat from slow chemical reactions in specimens weighing a few grams, taking place at reactant consumption rates of a few percent per year, can be detected and quantified in a matter of days. Examples include gradual oxidation of polymeric implant materials and shelf life studies of solid pharmaceutical drug formulations (Applications: Solid materials). Also, the rate of metabolic heat production of e.g. a few thousand living cells, microorganisms or protozoa in culture in an IMC ampoule can be measured. The amount of such metabolic heat can be correlated (through experimentation) with the number of cells or organisms present. Thus, IMC data can be used to monitor in real time the number of cells or organisms present and the net rate of growth or decline in this number (Applications: Biology and medicine). Although some non-biological applications of IMC are discussed (Applications: Solid materials), the present emphasis in this article is on the use of IMC in connection with biological processes (Applications: Biology and medicine). Data obtained A graphic display of a common type of IMC data is shown in Fig. 2. At the top is a plot of recorded heat flow (μJ/s = μW) vs. time from a specimen in a sealed ampoule, due to an exothermic rate process which begins, accelerates, reaches a peak heat flow and then subsides. Such data are directly useful (e.g. detection of a process and its duration under fixed conditions) but the data are also easily assessed mathematically to determine process parameters. For example, Fig. 2 also shows an integration of the heat flow data, giving accumulated heat (J) vs. time. As shown, parameters such as the maximum growth (heat generation) rate of the process, and the duration time of the lag phase before the process reaches maximum heat can be calculated from the integrated data. Calculations using heat flow rate data stored as computer files are easily automated. Analyzing IMC data in this manner to determine growth parameters has important applications in the life sciences (Applications: Biology and medicine). Also, heat flow rates obtained at a series of temperatures can be used to obtain the activation energy of the process being evaluated (Hardison et al. 2003). Development history Lavoisier and Laplace are credited with creating and using the first isothermal calorimeter in ca. 1780 (Bibliography: Lavoisier A & Laplace PS 1780). Their instrument employed ice to produce a relatively constant temperature in a confined space. They realized that when they placed a heat-producing specimen on the ice (e.g. a live animal), the mass of liquid water produced by the melting ice was directly proportional to the heat produced by the specimen. Many modern IMC instrument designs stem from work done in Sweden in the late 1960s and early 1970s (Wadsö 1968, Suurkuusk & Wadsö 1974). This work took advantage of the parallel development of solid-state electronic devices—particularly commercial availability of small thermoelectric effect (Peltier-Seebeck) devices for converting heat flow into voltage—and vice versa. In the 1980s, multi-channel designs emerged (Suurkuusk 1982), which allow parallel evaluation of multiple specimens. This greatly increased the power and usefulness of IMC and led to efforts to fine-tune the method (Thorén et al. 1989). Much of the further design and development done in the 1990s was also accomplished in Sweden by Wadsö and Suurkuusk and their colleagues.
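The growth-parameter analysis described under Data obtained above is easy to automate from stored heat flow files. The following Python sketch is a minimal illustration using a synthetic heat flow record (all numbers are hypothetical); it integrates heat flow to cumulative heat and estimates the maximum rate and lag time by the common tangent method, which is one of several possible analysis choices rather than the specific procedure of any cited study.

```python
import numpy as np

# Synthetic heat-flow record: time in hours, heat flow in microwatts (hypothetical numbers).
t = np.linspace(0.0, 24.0, 241)
p = 50.0 * np.exp(-((t - 10.0) / 3.0) ** 2)      # bell-shaped heat flow peak, uW

# Cumulative heat (uJ) by trapezoidal integration; 3600 converts hours to seconds.
q = np.concatenate(([0.0],
                    np.cumsum(0.5 * (p[1:] + p[:-1]) * np.diff(t) * 3600.0)))

rate = np.gradient(q, t)          # rate of heat production, uJ/h
i_max = int(np.argmax(rate))

# Lag time estimate: intercept of the tangent at the steepest point with the time axis.
lag_h = t[i_max] - q[i_max] / rate[i_max]
print(f"max rate ~ {rate[i_max]:.0f} uJ/h at t = {t[i_max]:.1f} h, lag ~ {lag_h:.1f} h")
```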
This work took advantage of the parallel development of personal computer technology which greatly augmented the ability to easily store, process and interpret heat flow vs. time data. Instrument development work since the 1990s has taken further advantage of the continued development of solid-state electronics and personal computer technology. This has created IMC instruments of increasing sensitivity and stability, numbers of parallel channels, and even greater ability to conveniently record, store and rapidly process IMC data. In connection with wider use, substantial attention has been paid to creating standards for describing the performance of IMC instruments (e.g. precision, accuracy, sensitivity) and for methods of calibration (Wadsö and Goldberg 2001). Instruments and measurement principles Instrument configurations Modern IMC instruments are actually semi-adiabatic—i.e. heat transfer between the specimen and its surroundings is not zero (adiabatic), because IMC measurement of heat flow depends on the existence of a small temperature differential—ca. 0.001 °C. However, because the differential is so low, IMC measurements are essentially isothermal. Fig. 3. shows an overview of an IMC instrument which contains 48 separate heat flow measurement modules. One module is shown. The module's measuring unit is typically a Peltier-Seebeck device. The device produces a voltage proportional to the temperature difference between a specimen which is producing or consuming heat and a thermally inactive reference which is at the temperature of the heat sink. The temperature difference is in turn proportional to the rate at which the specimen is producing or consuming heat (see Calibration below). All the modules in an instrument use the same heat sink and thermostat and thus all produce data at the same set temperature. However, it is generally possible to start and stop measurements in each ampoule independently. In a highly parallel (e.g. 48-channel) instrument like the one shown in Fig. 3, this makes it possible to perform (start and stop) several different experiments whenever it is convenient to do so. Alternatively, IMC instruments can be equipped with duplex modules which yield signals proportional to the heat flow difference between two ampoules. One of two such duplex ampoules is often a blank or control—i.e. a specimen which does not contain the material producing the rate process of interest, but whose content is otherwise identical to that which is in the specimen ampoule. This provides a means for eliminating minor heat-producing reactions which are not of interest—for example gradual chemical changes over a period of days in a cell culture medium at the measurement temperature. Many useful IMC measurements can be carried out using simple sealed ampoules. However, as mentioned above, instrument/ampoule systems are available which allow or even control flow of gasses or liquids to and/or from the specimens and/or provide specimen mechanical stirring. Reference inserts Heat flow is usually measured relative to a reference insert, as shown in Fig. 3. This is typically a metal coupon that is chemically and physically stable at any temperature in the instrument's operating range and thus will not produce or consume heat itself. For best performance, the reference should have a heat capacity close to that of the specimen (e.g. IMC ampoule plus contents). 
Modes of operation Heat conduction (hc) mode Commercial IMC instruments are often operated as heat conduction (hc) calorimeters in which heat produced by the specimen (i.e. material in an ampoule) flows to the heat sink, typically an aluminum block contained in a thermostat (e.g. constant temperature bath). As mentioned above, an IMC instrument operating in hc mode is not precisely isothermal because small differences between the set temperature and the specimen temperature necessarily exist—so that there is measurable heat flow. However, small variations in specimen temperature do not significantly affect heat sink temperature because the heat capacity of the heat sink is much higher than the specimen—usually ca. 100×. Heat transfer between the specimen and the heat sink takes place through a Peltier-Seebeck device, allowing dynamic measurement of heat produced or consumed. In research-quality instruments, thermostat/heat sink temperature is typically accurate to < ±0.1 K and maintained within ca. < ±100 μK/24h. The precision with which heat sink temperature is maintained over time is a major determinant of the precision of the heat flow measurements over time. An advantage of hc mode is a large dynamic range. Heat flows of ca. 50,000 μW can be measured with a precision of ca. ±0.2 μW. Thus measuring a heat flow of ca. >0.2 μW above baseline constitutes detection of heat flow, although a more conservative detection of 10× the precision limit is often used. Power compensation (pc) mode Some IMC instruments operate (or can also be operated) as power compensation (pc) calorimeters. In this case, in order to maintain the specimen at the set temperature, heat produced is compensated using a Peltier-Seebeck device. Heat consumed is compensated either by an electric heater or by reversing the polarity of the device (van Herwaarden, 2000). If a given instrument is operated in pc mode rather than hc, the precision of heat flow measurement remains the same (e.g. ca. ±0.2 μW). The advantage of compensation mode is a smaller time constant – i.e. the time needed to detect a given heat flow pulse is ca. 10X shorter than in conduction mode. The disadvantage is a ca. 10X smaller dynamic range compared to hc mode. Calibration For operation in either hc or pc mode, routine calibration in commercial instruments is usually accomplished with built-in electric heaters. The performance of the electrical heaters can in turn be validated using specimens of known heat capacity or which produce chemical reactions whose heat production per unit mass is known from thermodynamics (Wadsö and Goldberg 2001). In either hc or pc mode, the resulting signal is a computer-recordable voltage, calibrated to represent specimen μW-range heat flow vs. time. Specifically, if no significant thermal gradients exist in the specimen, then P = εC [U + τ (dU/dt)], where P is heat flow (i.e. μW), εC is the calibration constant, U the measured potential difference across the thermopile, and τ the time constant. Under steady-state conditions—for example, during the release of a constant electrical calibration current—this simplifies to P = εC U. (Wadsö and Goldberg 2001). Ampoules Many highly useful IMC measurements can be conducted in sealed ampoules (Fig. 1) which offer advantages of simplicity, protection from contamination and (where needed) a substantial margin of bio-safety for persons handling or exposed to the ampoules. A closed ampoule can contain any desired combination of solids, liquids, gases or items of biologic origin.
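The dynamic correction P = εC [U + τ (dU/dt)] described under Calibration above (often called the Tian equation) is straightforward to apply to a sampled thermopile signal. The following Python sketch is illustrative only; the calibration constant and time constant are hypothetical, instrument-specific values, not taken from the article.

```python
import numpy as np

def heat_flow_uw(u_volts, t_seconds, eps_c_uw_per_volt=6.0e5, tau_s=120.0):
    """Apply the dynamic (Tian) correction to a sampled thermopile voltage.

    eps_c_uw_per_volt : calibration constant in uW per volt (assumed value)
    tau_s             : instrument time constant in seconds (assumed value)
    Returns P = eps_c * (U + tau * dU/dt) in microwatts.
    """
    du_dt = np.gradient(u_volts, t_seconds)      # numerical derivative of the signal
    return eps_c_uw_per_volt * (u_volts + tau_s * du_dt)

# Example: a slowly rising signal sampled once per second for 10 minutes.
t = np.arange(0.0, 600.0, 1.0)
u = 5e-6 * (1.0 - np.exp(-t / 200.0))            # volts (synthetic)
print(f"steady-state estimate: {heat_flow_uw(u, t)[-1]:.2f} uW")
```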
Initial gas composition in the ampoule head space can be controlled by sealing the ampoule in the desired gas environment. However, there are also IMC instrument/ampoule designs which permit controlled flow of gas or liquid through the ampoule during measurement and/or mechanical stirring. Also, with proper accessories, some IMC instruments can be operated as ITC (isothermal titration calorimetry) instruments. The topic of ITC is covered elsewhere (see Isothermal titration calorimetry). In addition, some IMC instruments can record heat flow while the temperature is slowly changed (scanned) over time. The scanning rate has to be slow (ca. ) in order to keep IMC-scale specimens (e.g. a few grams) sufficiently close to the heat sink temperature (< ca. 0.1 °C). Fast scanning of temperature is the province of differential scanning calorimetry (DSC) instruments which generally use much smaller specimens. Some DSC instruments can be operated in IMC mode, but the small ampoule (and therefore specimen) size needed for scanning limits the utility and sensitivity of DSC instruments used in IMC mode. Basic methodology Setting a temperature Heat flow rate (μJ/s = μW) measurements are accomplished by first setting an IMC instrument thermostat at a selected temperature and allowing the instrument's heat sink to stabilize at that temperature. If an IMC instrument operating at one temperature is set to a new temperature, re-stabilization at the new temperature setting may take several hours—even a day. As explained above, achievement and maintenance of a precisely stable temperature is fundamental to achieving precise heat flow measurements in the μW range over extended times (e.g. days). Introducing a specimen After temperature stabilization, if an externally prepared ampoule (or some solid specimen of ampoule dimensions) is used, it is slowly introduced (e.g. lowered) into an instrument's measurement module, usually in a staged operation. The purpose is to ensure that by the time the ampoule/specimen is in the measurement position, its temperature is within c. 0.001 °C of the measurement temperature. This is so that any heat flow then measured is due to specimen rate processes rather than due to a continuing process of bringing the specimen to the set temperature. The time for introduction of a specimen in a 3–20 ml IMC ampoule into measurement position is ca. 40 minutes in many instruments. This means that heat flow from any processes which take place within a specimen during the introduction period will not be recorded. If an in-place ampoule is used, and some agent or specimen is injected, this also produces a period of instability, but it is on the order of ca. 1 minute. Fig. 5 provides examples of both the long period needed to stabilize an instrument if an ampoule is introduced directly, and the short period of instability due to injection. Recording data After the introduction process, specimen heat flow can be precisely recorded continuously, for as long as it is of interest. The extreme stability of research-grade instruments (< ±100 μK/24h) means that accurate measurements can be (and often are) made for a period of days. Since the heat flow signal is essentially readable in real time, it serves as a means for deciding whether or not heat flow of interest is still occurring. Also, modern instruments store heat flow vs. time data as computer files, so both real-time and retrospective graphic display and mathematical analysis of data are possible.
Usability As indicated below, IMC has many advantages as a method for analyzing rate processes, but there are also some caveats that must be heeded. Advantages Broadly applicable Any rate process can be studied—if suitable specimens will fit IMC instrument module geometry, and proceed at rates amenable to IMC methodology (see above). As shown under Applications, IMC is in use to quantify an extremely wide range of rate processes in vitro—e.g. from solid-state stability of polymers (Hardison et al. 2003) to efficacy of drug compounds against parasitic worms (Maneck et al. 2011). IMC can also determine the aggregate rate of uncharacterized, complex, or multiple interactions (Lewis & Daniels). This is especially useful for comparative screening—e.g. the effects of different combinations of material composition and/or fabrication processes on overall physico-chemical stability. Real-time and continuous IMC heat flow data are obtained as voltage fluctuations vs. time, stored as computer files and can be displayed essentially in real time—as the rate process is occurring. The heat flow-related voltage is continuous over time, but in modern instruments it is normally sampled digitally. The frequency of digital sampling can be controlled as needed—i.e. frequent sampling of rapid heat flow changes for better time resolution or slower sampling of slow changes in order to limit data file size. Sensitive and fast IMC is sensitive enough to detect and quantify in short times (hours, days) reactions which consume only a few percent of reactants over long times (months). IMC thus avoids long waits often needed until enough reaction product has accumulated for conventional (e.g. chemical) assays. This applies to both physical and biological specimens (see Applications). Direct At each combination of specimen variables and set temperature of interest, IMC provides direct determination of the heat flow kinetics and cumulative heat of rate processes. This avoids any need to assume that a rate process remains the same when temperature or other controlled variables are changed before an IMC measurement. Simple For comparisons of the effect of experimental variables (e.g. initial concentrations) on rate processes, IMC does not require development and use of chemical or other assay methods. If absolute data are required (e.g. quantity of product produced by a process), then assays can be conducted in parallel on specimens identical to those used for IMC (and/or on IMC specimens after IMC runs). The resultant assay data is used to calibrate the rate data obtained by IMC. Non-interfering IMC does not require adding markers (e.g. fluorescent or radioactive substances) to capture rate processes. Unadulterated specimens can be used, and after an IMC run, the specimen is unchanged (except by the processes which have taken place). The post-IMC specimen can be subjected to any kind of physical, chemical, morphological or other evaluation of interest. Caveats Missed data As indicated in the methodology description, when the IMC method of inserting a sealed ampoule is used, it is not possible to capture heat flow during the first ca. 40 minutes while the specimen is slowly being brought to the set temperature. In this mode therefore, IMC is best suited to studying processes which start slowly or occur slowly at a given temperature. This caveat also applies to the time before insertion—i.e. 
time elapsed between preparing a specimen (in which a rate process may then start) and starting the IMC insertion process (Charlebois et al. 2003). This latter effect is usually minimized if the temperature chosen for IMC is substantially higher (e.g. 37 °C) than the temperature at which the specimen is prepared (e.g. 25 °C). Extraneous data IMC captures the aggregate heat production or consumption resulting from all processes taking place within a specimen, including for example Possible changes in the physico-chemical state of the specimen ampoule itself; e.g. stress relaxation in metal components, oxidation of polymeric components. Degradation of a culture medium in which metabolism and growth of living cells is being studied. Thus great care must be taken in experimental planning and design to identify all possible processes which may be taking place. It is often necessary to design and conduct preliminary studies intended to systematically determine if multiple processes are taking place and if so, their contributions to aggregate heat flow. One strategy, in order to eliminate extraneous heat flow data, is to compare heat flow for a specimen in which the rate process of interest is taking place with that from a blank specimen which includes everything in the specimen of interest—except the item which will undergo the rate process of interest. This can be directly accomplished with instruments having duplex IMC modules which report the net heat flow difference between two ampoules. Applications After a discussion of some special sources of IMC application information, several specific categories of IMC analysis of rate processes are covered, and recent examples (with literature references) are discussed in each category. Special sources of IMC application information Handbooks The Bibliography lists the four extensive volumes of the Handbook of Thermal Analysis and Calorimetry: Vol. 1 Principles and Practice (1998), Vol. 2 Applications to Inorganic and Miscellaneous Materials (2003), Vol. 3 Applications to Polymers and Plastics (2002), and Vol. 4 From Macromolecules to Man (1999). These constitute a prime source of information on (and literature references to) IMC applications and examples published prior to ca. 2000. Application notes Some IMC instrument manufacturers have assembled application notes, and make them available to the public. The notes are often (but not always) adaptations of journal papers. An example is the Microcalorimetry Compendium Vol. I and II offered by TA Instruments, Inc. and listed in the Bibliography. "Proteins" the first section of notes in Vol. I, is not of interest here, as it describes studies employing Isothermal titration calorimetry. The subsequent sections of Vol. I, Life & Biological Sciences and Pharmaceuticals contain application notes for both IMC and Differential scanning calorimetry. Vol. II of the compendium is devoted almost entirely to IMC applications. Its sections are entitled Cement, Energetics, Material and Other. A possible drawback to these two specific compendia is that none of the notes are dated. Although the compendia were published in 2009, some of the notes describe IMC instruments which were in use years ago and are no longer available. Thus, some of the notes, while still relevant and instructive, often describe studies done before 2000. 
Examples of applications In general, possible applications of IMC are only limited by the imagination of the person who chooses to employ IMC as an analytical tool—within the previously described constraints presented by existing IMC instruments and methodology. This is because it is a universal means for monitoring any chemical, physical or biological rate process. Below are some IMC application categories with examples in each. In most categories, there are many more published examples than those mentioned and referenced. The categories are somewhat arbitrary and often overlap. A different set of categories might be just as logical, and more categories could be added. Solid materials Formation IMC is widely used for studying the rates of formation of a variety of materials by various processes. It is best suited to study processes which occur slowly—i.e. over hours or days. A prime example is the study of hydration and setting reactions of calcium mineral cement formulations. One paper provides an overview (Gawlicki, et al. 2010) and another describes a simple approach (Evju 2003). Other studies focus on insights into cement hydration provided by IMC combined with IR spectroscopy (Ylmen et al. 2010) and on using IMC to study the influence of compositional variables on cement hydration and setting times (Xu et al. 2011). IMC can also be conveniently used to study the rate and amount of hydration (in air of known humidity) of calcium minerals or other minerals. To provide air of known humidity for such studies, small containers of saturated salt solutions can be placed in an IMC ampoule along with a non-hydrated mineral specimen. The ampoule is then sealed and introduced into an IMC instrument. The saturated salt solution keeps the air in the ampoule at a known rH, and various common salt solutions provide humidities ranging from e.g. 32-100% rH. Such studies have been performed on μm size range calcium hydroxyapatite particles and calcium-containing bioactive glass "nano" particles (Doostmohammadi et al. 2011). Stability IMC is well suited for rapidly quantifying the rates of slow changes in materials (Willson et al. 1995). Such evaluations are variously described as studies of stability, degradation, or shelf life. For example, IMC has been widely used for many years in shelf life studies of solid drug formulations in the pharmaceutical industry (Pikal et al. 1989, Hansen et al. 1990, Konigbauer et al. 1992.) IMC has the ability to detect slow degradation during simulated shelf storage far sooner than conventional analytical methods and without the need to employ chemical assay techniques. IMC is also a rapid, sensitive method for determining the often functionally crucial amorphous content of drugs such as nifedipine (Vivoda et al. 2011). IMC can be used for rapidly determining the rate of slow changes in industrial polymers. For example, gamma radiation sterilization of a material frequently used for surgical implants—ultra-high-molecular-weight polyethylene (UHMWPE)—is known to produce free radicals in the polymer. The result is slow oxidation and gradual undesirable embrittlement of the polymer on the shelf or in vivo. IMC could detect oxidation-related heat and quantified an oxidation rate of ca. 1% per year in irradiated UHMWPE at room temperature in air (Charlebois et al. 2003). In a related study the activation energy was determined from measurements at a series of temperatures (Hardison et al. 2003). 
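Determining an activation energy from heat flow rates measured at a series of temperatures, as in the UHMWPE work cited above, amounts to an Arrhenius fit. The following Python sketch is a minimal illustration with made-up numbers; it is not the data or the exact fitting procedure of the cited study.

```python
import numpy as np

# Hypothetical heat-flow rates (uW) for the same specimen type at several temperatures (deg C).
temp_c = np.array([40.0, 50.0, 60.0, 70.0])
p_uw = np.array([1.2, 3.1, 7.6, 17.5])

r_gas = 8.314                                            # J/(mol K)
inv_t = 1.0 / (temp_c + 273.15)                          # 1/K
slope, intercept = np.polyfit(inv_t, np.log(p_uw), 1)    # ln P = ln A - Ea/(R T)

activation_energy_kj = -slope * r_gas / 1000.0
print(f"apparent activation energy ~ {activation_energy_kj:.0f} kJ/mol")
```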
IMC is also of great utility in evaluating the "runaway potential" of materials which are significant fire or explosion hazards. For example, it has been used to determine autocatalytic kinetics of cumene hydroperoxide (CHP), an intermediate which is used in the chemical industry and whose sudden decomposition has caused a number of fires and explosions. Fig. 4 Shows the IMC data documenting thermal decomposition of CHP at 5 different temperatures (Chen et al. 2008). Biology and medicine The term metabolismics can be used to describe studies of the quantitative measurement of the rate at which heat is produced or consumed vs. time by cells (including microbes) in culture, by tissue specimens, or by small whole organisms. As described subsequently, metabolismics can be useful as a diagnostic tool; especially in either (a) identifying the nature of a specimen from its heat flow vs. time signature under a given set of conditions, or (b) determining the effects of e.g. pharmaceutical compounds on metabolic processes, organic growth or viability. Metabolismics is related to metabolomics. The latter is the systematic study of the unique chemical fingerprints that specific cellular processes leave behind; i.e. the study of their small-molecule metabolite profiles. When IMC is used to determine metabolismics, the products of the metabolic processes studied are subsequently available for metabolomics studies. Since IMC does not employ biochemical or radioactive markers, the post-IMC specimens consist only of metabolic products and remaining culture medium (if any was used). If metabolismics and metabolomics are used together, they can provide a comprehensive record of a metabolic process taking place in vitro: its rate and energetics, and its metabolic products. To determine metabolismics using IMC, there must of course be sufficient cells, tissue or organisms initially present (or present later if replication is taking place during IMC measurements) to generate a heat flow signal above a given instrument's detection limit. A landmark 2002 general paper on the topic of metabolism provides an excellent perspective from which to consider IMC metabolismic studies (see Bibliography, West, Woodruff and Brown 2002). It describes how metabolic rates are related and how they scale over the entire range from "molecules and mitochondria to cells and mammals". Importantly for IMC, the authors also note that while the metabolic rate of a given type of mammalian cell in vivo declines markedly with increasing animal size (mass), the size of the donor animal has no effect on the metabolic rate of the cell when cultured in vitro. Cell and tissue biology Mammalian cells in culture have a metabolic rate of ca. 30×10−12 W/cell (Figs. 2 and 3 in Bibliography: West, Woodruff and Brown 2002). By definition, IMC instruments have a sensitivity of at least 1×10−6 W (i.e. 1 μW). Therefore, the metabolic heat of ca. 33,000 cells is detectable. Based on this sensitivity, IMC was used to perform a large number of pioneering studies of cultured mammalian cell metabolismics in the 1970s and 1980s in Sweden. One paper (Monti 1990) serves as an extensive guide to work done up until 1990. It includes explanatory text and 42 references to IMC studies of heat flow from cultured human erythrocytes, platelets, lymphocytes, lymphoma cells, granulocytes, adipocytes, skeletal muscle, and myocardial tissue. 
The studies were done to determine how and where IMC might be used as a clinical diagnostic method and/or provide insights into metabolic differences between cells from healthy persons and persons with various diseases or health problems. Developments since ca. 2000 in IMC (e.g. massively parallel instruments, real-time, computer-based storage and analysis of heat flow data) have stimulated further use of IMC in cultured cell biology. For example, IMC has been evaluated for assessing antigen-induced lymphocyte proliferation (Murigande et al. 2009) and revealed aspects of proliferation not seen using a conventional non-continuous radioactive marker assay method. IMC has also been applied to the field of tissue engineering. One study (Santoro et al. 2011) demonstrated that IMC could be used to measure the growth (i.e. proliferation) rate in culture of human chondrocytes harvested for tissue engineering use. It showed that IMC can potentially serve to determine the effectiveness of different growth media formulations and also determine whether cells donated by a given individual can be grown efficiently enough to consider using them to produce engineered tissue. IMC has also been used to measure the metabolic response of cultured macrophages to surgical implant wear debris. IMC showed that the response was stronger to μm size range particles of polyethylene than to similarly sized Co alloy particles (Charlebois et al. 2002). A related paper covers the general topic of applying IMC in the field of synthetic solid materials used in surgery and medicine (Lewis and Daniels 2003). At least two studies have suggested IMC can be of substantial use in tumor pathology. In one study (Bäckman 1990), the heat production rate of T-lymphoma cells cultured in suspension was measured. Changes in temperature and pH induced significant variations, but stirring rate and cell concentration did not. A more direct study of possible diagnostic use (Kallerhoff et al. 1996) produced promising results. For the uro-genital tissue biopsy specimens studied, the results showed "it is possible to differentiate between normal and tumorous tissue samples by microcalorimetric measurement based on the distinctly higher metabolic activity of malignant tissue. Furthermore, microcalorimetry allows a differentiation and classification of tissue samples into their histological grading." Toxicology As of 2012, IMC has not become widely used in cultured cell toxicology even though it has been used periodically and successfully since the 1980s. IMC is advantageous in toxicology when it is desirable to observe cultured cell metabolism in real time and to quantify the rate of metabolic decline as a function of the concentration of a possibly toxic agent. One of the earliest reports (Ankerst et al. 1986) of IMC use in toxicology was a study of antibody-dependent cellular toxicity (ADCC) against human melanoma cells of various combinations of antiserum, monoclonal antibodies and also peripheral blood lymphocytes as effector cells. Kinetics of melanoma cell metabolic heat flow vs. time in closed ampoules were measured for 20 hours. The authors concluded that "...microcalorimetry is a sensitive and particularly suitable method for the analysis of cytotoxicity kinetics." IMC is also being used in environmental toxicology. In an early study (Thorén 1992) toxicity against monolayers of alveolar macrophages of particles of MnO2, TiO2 and SiO2 (silica) were evaluated. 
IMC results were in accord with results obtained by fluorescein ester staining and microscopic image analysis—except that IMC showed toxic effects of quartz not discernable by image analysis. This latter observation—in accord with known alveolar effects—indicated to the authors that IMC was a more sensitive technique. Much more recently (Liu et al. 2007), IMC has been shown to provide dynamic metabolic data which assess toxicity against fibroblasts of Cr(VI) from potassium chromate. Fig. 5 shows baseline results determining the metabolic heat flow from cultured fibroblasts prior to assessing the effects of Cr(VI). The authors concluded that "Microcalorimetry appears to be a convenient and easy technique for measuring metabolic processes...in...living cells. As opposed to standard bioassay procedures, this technique allows continuous measurements of the metabolism of living cells. We have thus shown that Cr(VI) impairs metabolic pathways of human fibroblasts and particularly glucose utilization." Simple closed ampoule IMC has also been used and advocated for assessing the cultured cell toxicity of candidate surgical implant materials—and thus serve as a biocompatibility screening method. In one study (Xie et al. 2000) porcine renal tubular cells in culture were exposed to both polymers and titanium metal in the form of "microplates" having known surface areas of a few cm2. The authors concluded that IMC "...is a rapid method, convenient to operate and with good reproducibility. The present method can in most cases replace more time-consuming light and electron microscopic investigations for quantitating of adhered cells." In another implant materials study (Doostmohammadi et al. 2011) both a rapidly growing yeast culture and a human chondrocyte culture were exposed to particles (diam.< 50 μm) of calcium hydroxyapatite (HA) and bioactive (calcium-containing) silica glass. The glass particles slowed or curtailed yeast growth as a function of increasing particle concentration. The HA particles had much less effect and never entirely curtailed yeast growth at the same concentrations. The effects of both particle types on chondrocyte growth were minimal at the concentration employed. The authors concluded that "The cytotoxicity of particulate materials such as bioactive glass and hydroxyapatite particles can be evaluated using the microcalorimetry method. This is a modern method for in vitro study of biomaterials biocompatibility and cytotoxicity which can be used alongside the old conventional assays." Microbiology Publications describing use of IMC in microbiology began in the 1980s (Jesperson 1982). While some IMC microbiology studies have been directed at viruses (Heng et al. 2005) and fungi (Antoci et al. 1997), most have been concerned with bacteria. A recent paper (Braissant et al. 2010) provides a general introduction to IMC metabolismic methods in microbiology and an overview of applications in medical and environmental microbiology. The paper also explains how heat flow vs. time data for bacteria in culture are an exact expression—as they occur over time—of the fluctuations in microorganism metabolic activity and replication rates in a given medium (Fig. 6). In general, bacteria are about 1/10 the size of mammalian cells and produce perhaps 1/10 as much metabolic heat-i.e. ca. 3x10−12 W/cell. Thus, compared to mammalian cells (see above) ca. 10X as many bacteria—ca. 330,000—must be present to produce detectable heat flow—i.e. 1 μW. 
However, many bacteria replicate orders of magnitude more rapidly in culture than mammalian cells, often doubling their number in a matter of minutes (see Bacterial growth). As a result, a small initial number of bacteria in culture and initially undetectable by IMC rapidly produce a detectable number. For example, 100 bacteria doubling every 20 minutes will in less than 4 hours produce >330,000 bacteria and thus an IMC-detectable heat flow. Consequently, IMC can be used for easy, rapid detection of bacteria in the medical field. Examples include detection of bacteria in human blood platelet products (Trampuz et al. 2007) and urine (Bonkat et al. 2011) and rapid detection of tuberculosis (Braissant et al. 2010, Rodriguez et al. 2011). Fig. 7 shows an example of detection times of tuberculosis bacteria as a function of the initial amount of bacteria present in a closed IMC ampoule containing a culture medium. For microbes in growth media in closed ampoules, IMC heat flow data can also be used to closely estimate basic microbial growth parameters; i.e. maximum growth rate and duration time of the lag phase before maximum growth rate is achieved. This is an important special application of the basic analysis of these parameters explained previously (Overview: Data Obtained). Unfortunately, the IMC literature contains some published papers in which the relation between heat flow data and microbial growth in closed ampoules has been misunderstood. However, in 2013 an extensive clarification was published, describing (a) details of the relation between IMC heat flow data and microbial growth, (b) selection of mathematical models which describe microbial growth and (c) determination of microbial growth parameters from IMC data using these models (Braissant et al. 2013). Pharmacodynamics In a logical extension of the ability of IMC to detect and quantify bacterial growth, known concentrations of antibiotics can be added to bacterial culture, and IMC can then be used to quantify their effects on viability and growth. Closed ampoule IMC can easily capture basic pharmacologic information—e.g. minimum inhibitory concentration (MIC) of an antibiotic needed to stop growth of a given organism. In addition it can simultaneously provide dynamic growth parameters—lag time and maximum growth rate (see Fig. 2, Howell et al. 2011, Braissant et al. 2013), which assess mechanisms of action. Bactericidal action (see Bactericide) is indicated by an increased lag time as a function of increasing antibiotic concentration, while bacteriostatic action (see Bacteriostatic agent) is indicated by a decrease in growth rate with concentration. The IMC approach to antibiotic assessment has been demonstrated for a number of a types of bacteria and antibiotics (von Ah et al. 2009). Closed ampoule IMC can also rapidly differentiate between normal and resistant strains of bacteria such as Staphylococcus aureus (von Ah et al. 2008, Baldoni et al. 2009). IMC has also been used to assess the effects of disinfectants on the viability of mouth bacteria adhered to dental implant materials (Astasov-Frauenhoffer et al. 2011). In a related earlier study, IMC was used to measure the heat of adhesion of dental bacteria to glass (Hauser-Gerspach et al. 2008). Analogous successful use of IMC to determine the effects of antitumor drugs on tumor cells in culture within a few hours has been demonstrated (Schön and Wadsö 1988). Rather than the closed-ampoule approach, an IMC setup was used which allowed drug injection into stirred specimens. 
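The detection arithmetic near the start of the microbiology discussion above (a per-cell heat of roughly 3×10−12 W, a ca. 1 μW detection limit, and exponential doubling) can be put in a few lines of Python. This is a minimal sketch of that back-of-the-envelope calculation, not a model taken from any of the cited papers.

```python
import math

heat_per_cell_w = 3e-12      # ~3 pW per bacterium (order of magnitude from the text)
detection_limit_w = 1e-6     # ~1 uW instrument sensitivity (from the text)
doubling_time_min = 20.0     # assumed doubling time
n0 = 100                     # assumed initial number of bacteria in the ampoule

n_detect = detection_limit_w / heat_per_cell_w           # cells needed for a 1 uW signal
doublings = math.log2(n_detect / n0)
time_to_detect_h = doublings * doubling_time_min / 60.0

print(f"cells needed: ~{n_detect:.0f}; time to detection: ~{time_to_detect_h:.1f} h")
```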
As of 2013, IMC has been used less widely in mammalian cell in vitro pharmacodynamic studies than in microbial studies. Multicellular organisms It is possible to use IMC to perform metabolismic studies of living multicellular organisms—if they are small enough to be placed in IMC ampoules (Lamprecht & Becker 1988). IMC studies have been made of insect pupa metabolism during ventilating movements (Harak et al. 1996) and effects of chemical agents on pupal growth (Kuusik et al. 1995). IMC has also proved effective in assessing the effects of aging on nematode worm metabolism (Braekman et al. 2002). IMC has also proved highly useful for in vitro assessments of the effects of pharmaceuticals on tropical parasitic worms (Manneck et al. 2011-1, Maneck et al. 2011-2, Kirchhofer et al. 2011). An interesting feature of these studies is the use of a simple manual injection system for introducing the pharmaceuticals into sealed ampoules containing the worms. Also, IMC not only documents the general metabolic decline over time due to the drugs, but also the overall frequency of worm motor activity and its decline in amplitude over time as reflected in fluctuations in the heat flow data. Environmental biology Because of its versatility, IMC can be an effective tool in the fields of plant and environmental biology. In an early study (Hansen et al. 1989), the metabolic rate of larch tree clone tissue specimens was measured. The rate was predictive of long-term tree growth rates, was consistent for specimens from a given tree and was found to correlate with known variations in the long-term growth of clones from different trees. Bacterial oxalotrophic metabolism is common in the environment, particularly in soils. Oxalotrophic bacteria are capable of using oxalate as a sole carbon and energy source. Closed-ampoule IMC was used to study metabolism of oxalotrophic soil bacteria exposed to both an optimized medium containing potassium oxalate as the sole carbon source and a model soil (Bravo et al. 2011). Using an optimized medium, growth of six different strains of soil bacteria was easily monitored and reproducibly quantified and differentiated over a period days. IMC measurement of bacterial metabolic heat flow in the model soil was more difficult, but a proof of concept was demonstrated. Moonmilk is a white, creamy material found in caves. It is a non-hardening, fine crystalline precipitate from limestone and is composed mainly of calcium and/or magnesium carbonates. Microbes may be involved in its formation. It is difficult to infer microbial activities in moonmilk from standard static chemical and microscopic assays of moonmilk composition and structure. Closed ampoule IMC has been used to solve this problem (Braissant, Bindscheidler et al. 2011). It was possible to determine the growth rates of chemoheterotrophic microbial communities on moonmilk after the addition of various carbon sources simulating mixes that would be brought into contact with moonmilk due to snow melt or rainfall. Metabolic activity was high and comparable to that found in some soils. Harris et al. (2012), studying differing fertilizer input regimes, found that, when expressed as heat output per unit soil microbial biomass, microbial communities under organic fertilizer regimes produced less waste heat than those under inorganic regimes. Food science IMC has been shown to have diverse uses in food science and technology. 
An overview (Wadsö and Galindo 2009) discusses successful applications in assessing vegetable cutting wound respiration, cell death from blanching, milk fermentation, microbiological spoilage prevention, thermal treatment and shelf life. Another publication (Galindo et al. 2005) reviews the successful use of IMC for monitoring and predicting quality changes during storage of minimally processed fruits and vegetables. IMC has also proven effective in accomplishing enzymatic assays for orotic acid in milk (Anastasi et al. 2000) and malic acid in fruits, wines and other beverages and also cosmetic products (Antonelli et al. 2008). IMC has also been used to assess the efficacy of anti-browning agents on fresh-cut potatoes (Rocculi et al. 2007). IMC has also proven effective in assessing the extent to which low-energy pulsed electric fields (PEFs) affect the heat of germination of barley seeds—important in connection with their use in producing malted beverages (Dymek et al. 2012). See also Calorimetry Chemical thermodynamics Differential scanning calorimetry Isothermal titration calorimetry Rate equation Sorption calorimetry Thermal analysis Thermoelectric effect Bibliography Glasstone S, Laidler KJ, Eyring H (1941) The theory of rate processes: the kinetics of chemical reactions, viscosity, diffusion and electrochemical phenomena. McGraw-Hill (New York). 611p. Johnson FH, Eyring H, Stover BJ (1974) The theory of rate processes in biology and medicine. Wiley (New York), , 703p. Lavoisier A & Laplace PS (1780) M´emoire sur la chaleur. Académie des Sciences, Paris. Brown ME, Editor (1998) Vol. 1 Principles and Practice (691p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London). Brown ME and Gallagher PK, Editors (2003) Vol. 2 Applications to Inorganic and Miscellaneous Materials (905p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London). Cheng SZD, Editor (2002) Vol. 3 Applications to Polymers and Plastics (828p.) in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London). Kemp RB, Editor (1999) Vol. 4 From Macromolecules to Man (1032p.), in Handbook of Thermal Analysis and Calorimetry. Gallagher PK (Series Editor). Elsevier (London). Microcalorimetry Compendium Vol. 1: Proteins, Life & Biological Sciences, Pharmaceuticals (2009). TA Instruments, Inc. (New Castle DE, USA). Microcalorimetry Compendium Vol. 2: Cement, Energetics, Material, Other (2009). TA Instruments, Inc. (New Castle DE, USA). References External links Some sources for IMC instruments, accessories, supplies, and software Calmetrix TA Instruments Setaram Symcel Flow Adsorption Microcalorimeter instrument configurations Microscal Ltd (archived 2005) Biological processes Calorimetry Chemical processes Heat transfer Materials science
Isothermal microcalorimetry
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
10,866
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Applied and interdisciplinary physics", "Materials science", "Chemical processes", "Thermodynamics", "nan", "Chemical process engineering" ]
35,226,377
https://en.wikipedia.org/wiki/Rajchman%20measure
In mathematics, a Rajchman measure, studied by , is a regular Borel measure on a locally compact group such as the circle, whose Fourier transform vanishes at infinity. References Measures (measure theory)
Rajchman measure
[ "Physics", "Mathematics" ]
43
[ "Mathematical analysis", "Physical quantities", "Mathematical analysis stubs", "Measures (measure theory)", "Quantity", "Size" ]
35,226,965
https://en.wikipedia.org/wiki/Water%20issues%20in%20developing%20countries
Water issues in developing countries include scarcity of drinking water, poor infrastructure for water and sanitation access, water pollution, and low levels of water security. Over one billion people in developing countries have inadequate access to clean water. The main barriers to addressing water problems in developing nations include poverty, costs of infrastructure, and poor governance. The effects of climate change on the water cycle can make these problems worse. The contamination of water remains a significant issue because of unsanitary social practices that pollute water sources. Almost 80% of disease in developing countries is caused by poor water quality and other water-related issues that cause deadly health conditions such as cholera, malaria, and diarrhea. It is estimated that diarrhea takes the lives of 1.5 million children every year, the majority of them under the age of five. Access to freshwater is unevenly distributed across the globe, with more than two billion people living in countries with significant water stress. According to UN-Water, by 2025, 1.8 billion people will be living in areas across the globe with complete water scarcity. Populations in developing countries attempt to access potable water from a variety of sources, such as groundwater, aquifers, or surface waters, which can be easily contaminated. Freshwater access is also constrained by insufficient wastewater and sewage treatment. Progress has been made over recent decades to improve water access, but billions still live in conditions with very limited access to consistent and clean drinking water. Problems Water scarcity People need fresh water for survival, personal care, agriculture, industry, and commerce. The 2019 UN World Water Development report noted that about four billion people, representing nearly two-thirds of the world population, experience severe water scarcity during at least one month of the year. With rising demand, the quality and supply of water have diminished. Water use has been increasing worldwide by about 1% per year since the 1980s. Global water demand is expected to continue increasing at a similar rate until 2050, accounting for an increase of 20–30% above 2019 usage levels. The steady rise in use has principally been led by surging demand in developing countries and emerging economies. Per capita water use in the majority of these countries remains far below water use in developed countries—they are merely catching up. Agriculture (including irrigation, livestock, and aquaculture) is by far the largest water consumer, accounting for 69% of annual water withdrawals globally. Agriculture's share of total water use is likely to fall in comparison with other sectors, but it will remain the largest user overall in terms of both withdrawal and consumption. Industry (including power generation) accounts for 19% and households for 12%. Water pollution After accounting for availability or access, water quality can reduce the amount of water for consumption, sanitation, agriculture, and industrial purposes. Acceptable water quality depends on its intended purpose: water that is unfit for human consumption could still be used in industrial or agricultural applications. Parts of the world are experiencing extensive deterioration of water quality, rendering the water unfit for agricultural or industrial use. For example, in China, 54% of the Hai River basin surface water is so polluted that it is considered unusable. 
Safe water is defined as potable water that will not harm the consumer. Improving access to it was a target of one of the eight Millennium Development Goals: between 1990 and 2015, to "reduce by half the proportion of the population without sustainable access to safe drinking water and basic sanitation." Even having access to an ‘improved water source’ does not guarantee the water's quality, as it could lack proper treatment and become contaminated during transport or home storage. A study by the World Health Organization (WHO) found that estimates of safe water could be overestimated if accounting for water quality, especially if the water sources were poorly maintained. Polluted drinking water can lead to debilitating or deadly water-borne diseases, such as fever, cholera, dysentery, diarrhea and others. UNICEF cites fecal contamination and high levels of naturally occurring arsenic and fluoride as two of the world's major water quality concerns. Approximately 71% of all illnesses in developing countries are caused by poor water and sanitation conditions. Worldwide, contaminated water leads to 4,000 diarrhea deaths a day in children under 5. However, gaps in wastewater treatment (the amount of wastewater to be treated is greater than the amount that is actually treated) represent the most significant contribution to water pollution and water quality deterioration. In the majority of the developing world, most of the collected wastewater is returned to surface waters directly without treatment, reducing the water's quality. In China, only 38% of urban wastewater is treated, and although 91% of China's industrial wastewater is treated, it still releases extensive toxins into the water supply. The amount of possible wastewater treatment can also be compromised by the networks required to bring the wastewater to the treatment plants. It is estimated that 15% of China's wastewater treatment facilities are not being used to capacity due to a limited pipe network to collect and transport wastewater. In São Paulo, Brazil, a lack of sanitation infrastructure results in the pollution of the majority of its water supply and forces the city to import over 50% of its water from outside watersheds. Polluted water increases a developing country's operating costs, as lower quality water is more expensive to treat. In Brazil, polluted water from the Guarapiranga Reservoir costs $0.43 per m3 to treat to usable quality, compared to only $0.10 per m3 for water coming from the Cantareira Mountains. Water security Managing water safety To address water scarcity, organizations have focused on increasing the supply of fresh water, mitigating its demand, and enabling reuse and recycling. Clean water plans According to the WHO, consistent access to a safe drinking-water supply is attainable by establishing a system of WSPs, or Water Safety Plans, which assess the quality of water supplies to ensure they are safe for consumption. The Water Safety Plan Manual, published in 2009 by the WHO and the International Water Association, offers guidance to water utilities (or similar entities) as they develop WSPs. This manual provides information to help water utilities assess their water system, develop monitoring systems and procedures, manage their plan, carry out periodic reviews of the WSP, and review the WSP following an incident. The WSP manual also includes three case studies drawn from WSP initiatives in three countries/regions. 
Alternative sources Using wastewater from one process in another process where lower-quality water is acceptable is one way to reduce the amount of wastewater pollution and simultaneously increase water supplies. Recycling and reuse techniques can include treating and reusing industrial plant wastewater or treated service water (from mining) in lower-quality applications. Similarly, wastewater can be re-used in commercial buildings (e.g. in toilets) or for industrial applications (e.g. for industrial cooling). Reducing water pollution Despite the clear benefits of improving water sources (a WHO study showed a potential economic benefit of $3–34 USD for every US$1 invested), aid for water improvements has declined from 1998 to 2008 and is generally less than is needed to meet the MDG targets. In addition to increasing funding resources towards water quality, many development plans stress the importance of improving policy, market and governance structures to implement, monitor and enforce water quality improvements. Reducing the amount of pollution emitted from both point and non-point sources represents a direct method to address the source of water quality challenges. Pollution reduction represents a more direct and low-cost method to improve water quality, compared to costly and extensive wastewater treatment improvements. Various policy measures and infrastructure systems could help limit water pollution in developing countries. These include: Improved management, enforcement and regulation for pre-treatment of industrial and agricultural waste, including charges for pollution Policies to reduce agricultural run-off or subsidies to improve the quality and reduce the quantity needed of water-polluting agricultural inputs (e.g. fertilizers) Limiting water abstraction during critical low flow periods to limit the concentration of pollutants Strong and consistent political leadership on water Land planning (e.g. locating industrial sites outside the city) Water treatment Water treatment technologies can convert non-freshwater to freshwater by removing pollutants. Much of water's physical pollution includes organisms, metals, acids, sediment, chemicals, waste, and nutrients. Water can be treated and purified into freshwater with limited or no contaminants through certain processes. The processes involved in removing the contaminants include physical processes such as settling and filtration, chemical processes such as disinfection and coagulation, and biological processes such as slow sand filtration. A variety of innovations exist to effectively treat water at the point of use for human consumption. Studies have shown that treating water at the point of use reduces child mortality from diarrhea by 29%. Home water treatments are also a part of the United Nations' Millennium Development Goals, with the goal of providing both clean water supply and sewage connection in homes. Although these interventions have been evaluated by the United Nations, various challenges may reduce the effectiveness of home treatment solutions, such as low education, low dedication to repair, replacement, and maintenance, or the unavailability of local repair services or parts. 
Current point of use and small scale treatment technologies include: NaDCC, sodium dichloroisocyanurate Boiling water Solar disinfection (SODIS) Chlorine Global programs Central Asia Water and Energy Program Central Asia Water and Energy Program (CAWEP) is a World Bank, European Union, Swiss & UK funded program to organize Central Asian governments on common water resources management through regional organizations, like the International Fund for Saving the Aral Sea (IFAS). The program focuses on three issues: water security, energy security and energy-water linkages. It aims to foster balanced communications between Central Asian countries to achieve a regional goal, water and energy security. To achieve this goal, the program works closely with governments and civil and national organizations. Most recently, the program helped organize The Global Disruptive Tech Challenge: Restoring Landscapes in the Aral Sea Region. This competition was created to encourage bright minds to come up with revolutionary solutions for land degradation and desertification in the Aral Sea Region, which used to be home to one of the largest lakes in the world and has since been reduced to nearly nothing. There were several winning projects that centered around agriculture and land management, sustainable forestry, socio-economic development and globally expanding people's knowledge of and access to information on the issue. Sanitation and Water for All Aimed at achieving the United Nations' Sustainable Development Goal 6, Sanitation and Water for All (SWA) was established as a platform for partnerships between governments, civil society, the private sector, UN agencies, research and learning institutions, and the philanthropic community. SWA encourages partners to prioritize water, sanitation and hygiene along with ensuring sufficient finance and building better governance structures. To ensure that these remain priorities, the SWA holds “High Level Meetings” where partners communicate the recent developments made, measure progress, and continue the discussion on the importance of Sustainable Development Goal 6. The Water Project The Water Project, Inc. is a non-profit international organization that develops and implements sustainable water projects in Sub-Saharan African countries such as Kenya, Rwanda, Sierra Leone, Sudan, and Uganda. The Water Project has funded or completed over 2,500 projects and 1,500 water sources that have helped over 569,000 people improve their access to clean water and sanitation. These projects focus heavily on teaching proper sanitation and hygiene practices, as well as improving water facilities by drilling boreholes, updating well structures, and introducing rainwater harvesting solutions. UN-Water In 2003, the United Nations High Level Committee on Programmes created UN-Water, an inter-agency mechanism, "to add value to UN initiatives by fostering greater co-operation and information-sharing among existing UN agencies and outside partners." UN-Water publishes communication materials for decision-makers who work directly with water issues and provides a platform for discussions regarding global water management. They also sponsor World Water Day on 22 March to focus attention on the importance of freshwater and sustainable freshwater management. Country examples Overview India India's growing population is putting a strain on the country's scarce water resources. According to The World Bank, the population of India as of 2019 was roughly 1,366,417,750 people. 
Although this number has increased since then, India's population count has made it the second-most populated country in the world, following close behind the most populous country, China. The country is classified as "water stressed" with a water availability of 1,000–1,700 m3/person/year. 21% of the country's diseases are related to water. In 2008, 88% of the population had access to and was using improved drinking water sources. However, "Improved drinking water source" is an ambiguous term, ranging in meaning from fully treated and 24-hour availability to merely being piped through the city and sporadically available. This is in part due to large inefficiencies in the water infrastructure in which up to 40% of water leaks out. In UNICEF's 2008 report, only 31% of the population had access to and used improved sanitation facilities. A little more than half of the 16 million residents of New Delhi, the capital city, have access to this service. Every day, 950 million gallons of sewage flows from New Delhi into the Yamuna River without any significant forms of treatment. This river bubbles with methane and was found to have a fecal coliform count 10,000 times the safe limit for bathing. The inequality between urban and rural areas is significant. In rural areas, 84% can access safe water but only 21% can access sanitation. In contrast, 96% of people in urban areas have access to water sources and sanitation which meet satisfactory quality standards. Additionally, there are not enough wastewater treatment facilities to dispose of wastewater discharged from the growing population. By 2050, half of India's population will live in urban areas and will face serious water problems. Surface water contamination, due to lack of sewage treatment and industrial discharge, makes groundwater increasingly exploited in many regions of India. This is aggravated by heavily subsidized energy costs for agriculture practices that make up roughly 80% of India's water resource demand. In India, 80% of the health issues come from waterborne diseases. Part of this challenge includes addressing the pollution of the Ganges (Ganga) river, which is home to about 400 million people. The river receives over 1.3 billion litres of domestic waste, along with 260 million litres of industrial waste, run-off from 6 million tons of fertilizers and 9,000 tons of pesticides used in agriculture, thousands of animal carcasses and several hundred human corpses released into the river every day for spiritual rebirth. Two-thirds of this waste is released into the river untreated. Kenya Kenya, a country of 50 million people, struggles with a staggering population growth rate of 2.28% per year. This high population growth rate pushes Kenya's natural resources to the brink of total depletion. 32% of the population do not have access to improved water sources whereas 48% cannot access basic sanitation systems. Much of the country has a severely arid climate, with a few areas enjoying rain and access to water resources. Deforestation and soil degradation have polluted surface water, and the government does not have the capacity to develop water treatment or distribution systems, leaving the vast majority of the country without access to water. This has exacerbated gender politics, as 74% of women must spend an average of 8 hours per day securing water for their families. Low income has worsened the situation. It is estimated that 66% of the total population lives on less than $3.20 per day. 
Despite its poor quality and unreliability, costs for water in local areas are 9 times higher than those for safe water in urban areas. This regional inequality makes it difficult for people in rural areas to obtain water on a daily basis. Furthermore, even in urban areas, which are equipped with piped water systems, it is hard to maintain a reliable, constant flow of water. Practical solutions are needed in the entire country. The sand dam is one decentralized rainwater-harvesting infrastructure for dealing with this unbalanced water distribution. This low-cost infrastructure has a simple structure, conserving surplus water for later use and improving rural regions' water access by reducing the time people spend travelling long distances to gather water. There are already about 1,800 sand dams in Kitui County. The growing population and stagnant economy have exacerbated urban, suburban, and rural poverty. It has also aggravated the country's lack of access to clean drinking water, which leaves most of the non-elite population suffering from disease. Around 240 million people worldwide suffer from schistosomiasis, which is caused by parasitic worms that may be contracted through contact with infested waters. This leads to the crippling of Kenya's human capital. Private water companies have taken up the slack from Kenya's government, but the Kenyan government prevents them from moving into the poverty-stricken areas to avoid profiteering activities. Unfortunately, since Kenya's government also refuses to provide services, this leaves the disenfranchised with no options for obtaining clean water. Bangladesh Panama See also List of water-related charities WASH – Water, sanitation and hygiene Human right to water and sanitation References External links UN | Water for Life Decade | Water Quality akvo.org | Water and sanitation projects Global environmental issues Water and politics Water supply Sanitation Water Right to health
Water issues in developing countries
[ "Chemistry", "Engineering", "Environmental_science" ]
3,615
[ "Water", "Hydrology", "Water supply", "Environmental engineering" ]
43,344,021
https://en.wikipedia.org/wiki/Southern%20celestial%20hemisphere
The southern celestial hemisphere, also called the Southern Sky, is the southern half of the celestial sphere; that is, it lies south of the celestial equator. This arbitrary sphere, on which seemingly fixed stars form constellations, appears to rotate westward around a polar axis as the Earth rotates. At all times, the entire Southern Sky is visible from the geographic South Pole; less of the Southern Sky is visible the further north the observer is located. The northern counterpart is the northern celestial hemisphere. Astronomy In the context of astronomical discussions or writing about celestial mapping, it may also simply be referred to as the Southern Hemisphere. For the purpose of celestial mapping, the sky is considered by astronomers as the inside of a sphere divided into two halves by the celestial equator. The Southern Sky or Southern Hemisphere is, therefore, that half of the celestial sphere that is south of the celestial equator. Although the celestial equator is the ideal projection of the terrestrial equator onto the imaginary celestial sphere, the Northern and Southern celestial hemispheres should not be confused with descriptions of the terrestrial hemispheres of Earth itself. Observation From the South Pole, in good visibility conditions, the Southern Sky features over 2,000 fixed stars that are easily visible to the naked eye, and about 20,000 to 40,000 that can be seen with optical aid. In large cities, about 300 to 500 stars can be seen depending on the extent of light and air pollution. The farther north, the fewer are visible to the observer. The brightest star in the night sky, Sirius in the constellation of Canis Major, is located in the southern celestial hemisphere and is larger than the Sun; it has an apparent magnitude of −1.46, a radius twice that of the Sun, and is 8.6 light-years away. Canopus and the nearest star system, α Centauri, 4.2 light-years away, are also located in the Southern Sky, having declinations around −60°, too close to the south celestial pole for either to be visible from Central Europe. Of the 88 modern constellations, 45 are visible only from the southern celestial hemisphere, while 15 other constellations lie along the celestial equator and have portions in the northern hemisphere. The southern constellations are: History The first telescopic chart of the Southern Sky was made by the English astronomer Edmond Halley, from the island of St Helena in the South Atlantic Ocean, and published by him in 1678. See also Astronomical coordinate systems Celestial spheres Northern celestial hemisphere References Astronomical coordinate systems Hemispheres Southern celestial hemisphere
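A rough way to see why less of the Southern Sky is visible from northern latitudes is that a star culminates at an altitude of 90° minus the absolute difference between the observer's latitude and the star's declination; if that value is not positive, the star never rises. The Python sketch below (an added illustration ignoring refraction and extinction, with approximate declinations) applies this rule to the stars mentioned above:

# Which of these southern stars ever rise above the horizon at a given latitude?
def ever_visible(declination_deg, latitude_deg):
    # At culmination a star reaches altitude 90 - |latitude - declination| degrees,
    # so it is ever above the horizon only if that value is positive.
    return abs(latitude_deg - declination_deg) < 90.0

stars = {
    "Sirius (dec ~ -17 deg)": -16.7,
    "Canopus (dec ~ -53 deg)": -52.7,
    "Alpha Centauri (dec ~ -61 deg)": -60.8,
}
for latitude in (50.0, 0.0, -90.0):   # Central Europe, equator, South Pole
    visible = [name for name, dec in stars.items() if ever_visible(dec, latitude)]
    print(f"latitude {latitude:+.0f} deg: {visible}")
# From latitude +50 only Sirius qualifies, consistent with Canopus and Alpha
# Centauri being invisible from Central Europe.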
Southern celestial hemisphere
[ "Astronomy", "Mathematics" ]
505
[ "Astronomical coordinate systems", "Coordinate systems" ]
43,344,842
https://en.wikipedia.org/wiki/Principal%20orbit%20type%20theorem
In mathematics, the principal orbit type theorem states that a compact Lie group acting smoothly on a connected differentiable manifold has a principal orbit type. Definitions Suppose G is a compact Lie group acting smoothly on a connected differentiable manifold M. An isotropy group is the subgroup of G fixing some point of M. An isotropy type is a conjugacy class of isotropy groups. The principal orbit type theorem states that there is a unique isotropy type such that the set of points of M with isotropy groups in this isotropy type is open and dense. The principal orbit type is the space G/H, where H is a subgroup in the isotropy type above. References Lie groups Group actions (mathematics)
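A standard concrete example (added for illustration, not taken from the source) is the rotation action of the circle group on the plane, written here as a short LaTeX sketch:

% Example: G = SO(2) acting on M = R^2 by rotations, g . x = R_g x.
% The isotropy group of the origin is all of SO(2); every other point has
% trivial isotropy:
G_x = \begin{cases} SO(2), & x = 0,\\ \{e\}, & x \neq 0. \end{cases}
% The points with trivial isotropy form the open dense set R^2 \setminus \{0\},
% so the principal orbit type is G/\{e\} \cong SO(2): the principal orbits are
% the circles centred at the origin, while the origin is a lower-dimensional
% exceptional orbit.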
Principal orbit type theorem
[ "Physics", "Mathematics" ]
152
[ "Theorems in differential geometry", "Lie groups", "Mathematical structures", "Group actions", "Algebraic structures", "Theorems in geometry", "Symmetry" ]
43,346,375
https://en.wikipedia.org/wiki/Planck%20relation
The Planck relation (referred to as Planck's energy–frequency relation, the Planck–Einstein relation, Planck equation, and Planck formula, though the latter might also refer to Planck's law) is a fundamental equation in quantum mechanics which states that the energy of a photon, known as photon energy, is proportional to its frequency ν: E = hν. The constant of proportionality, h, is known as the Planck constant. Several equivalent forms of the relation exist, including in terms of angular frequency ω: E = ħω, where ħ = h/2π. Written using the symbol f for frequency, the relation is E = hf. The relation accounts for the quantized nature of light and plays a key role in understanding phenomena such as the photoelectric effect and black-body radiation (where the related Planck postulate can be used to derive Planck's law). Spectral forms Light can be characterized using several spectral quantities, such as frequency ν, wavelength λ, wavenumber ṽ = 1/λ, and their angular equivalents (angular frequency ω = 2πν, angular wavelength ƛ = λ/2π, and angular wavenumber k = 2π/λ). These quantities are related through ν = c/λ = cṽ and ω = ck, so the Planck relation can take the following "standard" forms: E = hν = hc/λ = hcṽ, as well as the following "angular" forms: E = ħω = ħc/ƛ = ħck. The standard forms make use of the Planck constant h. The angular forms make use of the reduced Planck constant ħ = h/2π. Here c is the speed of light. de Broglie relation The de Broglie relation, also known as de Broglie's momentum–wavelength relation, generalizes the Planck relation to matter waves. Louis de Broglie argued that if particles had a wave nature, the relation would also apply to them, and postulated that particles would have a wavelength equal to λ = h/p. Combining de Broglie's postulate with the Planck–Einstein relation leads to p = h/λ or p = ħk. The de Broglie relation is also often encountered in vector form p = ħk, where p is the momentum vector, and k is the angular wave vector. Bohr's frequency condition Bohr's frequency condition states that the frequency of a photon absorbed or emitted during an electronic transition is related to the energy difference (ΔE) between the two energy levels involved in the transition: ΔE = hν. This is a direct consequence of the Planck–Einstein relation. See also Compton wavelength References Cited bibliography Cohen-Tannoudji, C., Diu, B., Laloë, F. (1973/1977). Quantum Mechanics, translated from the French by S.R. Hemley, N. Ostrowsky, D. Ostrowsky, second edition, volume 1, Wiley, New York, . French, A.P., Taylor, E.F. (1978). An Introduction to Quantum Physics, Van Nostrand Reinhold, London, . Griffiths, D.J. (1995). Introduction to Quantum Mechanics, Prentice Hall, Upper Saddle River NJ, . Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman & Sons, London. Landsberg, P.T. (1978). Thermodynamics and Statistical Mechanics, Oxford University Press, Oxford UK, . Messiah, A. (1958/1961). Quantum Mechanics, volume 1, translated from the French by G.M. Temmer, North-Holland, Amsterdam. Schwinger, J. (2001). Quantum Mechanics: Symbolism of Atomic Measurements, edited by B.-G. Englert, Springer, Berlin, . van der Waerden, B.L. (1967). Sources of Quantum Mechanics, edited with a historical introduction by B.L. van der Waerden, North-Holland Publishing, Amsterdam. Weinberg, S. (1995). The Quantum Theory of Fields, volume 1, Foundations, Cambridge University Press, Cambridge UK, . Weinberg, S. (2013). Lectures on Quantum Mechanics, Cambridge University Press, Cambridge UK, . Foundational quantum physics Max Planck Old quantum theory
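As a numerical illustration of the relations above (an added example using standard CODATA constant values, not figures from the source):

# Worked example: photon energy from the Planck relation E = h*nu = h*c/lambda,
# and the de Broglie wavelength lambda = h/p for a slow electron.
h = 6.62607015e-34        # Planck constant, J*s (exact by definition)
c = 2.99792458e8          # speed of light, m/s
eV = 1.602176634e-19      # joules per electronvolt
m_e = 9.1093837015e-31    # electron mass, kg

# Green light with wavelength 532 nm:
lam = 532e-9
nu = c / lam                              # frequency, about 5.6e14 Hz
E = h * nu                                # photon energy in joules
print(f"E = {E:.3e} J = {E / eV:.2f} eV") # about 2.33 eV

# de Broglie wavelength of an electron moving at 1% of the speed of light
# (non-relativistic momentum p = m*v is a good approximation here):
v = 0.01 * c
p = m_e * v
print(f"lambda_dB = {h / p:.3e} m")       # about 2.4e-10 m, i.e. ~0.24 nm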
Planck relation
[ "Physics" ]
774
[ "Old quantum theory", "Foundational quantum physics", "Quantum mechanics" ]
43,350,725
https://en.wikipedia.org/wiki/Euclid%E2%80%93Euler%20theorem
The Euclid–Euler theorem is a theorem in number theory that relates perfect numbers to Mersenne primes. It states that an even number is perfect if and only if it has the form 2^(p−1)(2^p − 1), where 2^p − 1 is a prime number. The theorem is named after mathematicians Euclid and Leonhard Euler, who respectively proved the "if" and "only if" aspects of the theorem. It has been conjectured that there are infinitely many Mersenne primes. Although the truth of this conjecture remains unknown, it is equivalent, by the Euclid–Euler theorem, to the conjecture that there are infinitely many even perfect numbers. However, it is also unknown whether there exists even a single odd perfect number. Statement and examples A perfect number is a natural number that equals the sum of its proper divisors, the numbers that are less than it and divide it evenly (with remainder zero). For instance, the proper divisors of 6 are 1, 2, and 3, which sum to 6, so 6 is perfect. A Mersenne prime is a prime number of the form M_p = 2^p − 1, one less than a power of two. For a number of this form to be prime, p itself must also be prime, but not all primes give rise to Mersenne primes in this way. For instance, 2^3 − 1 = 7 is a Mersenne prime, but 2^11 − 1 = 2047 = 23 × 89 is not. The Euclid–Euler theorem states that an even natural number is perfect if and only if it has the form 2^(p−1) M_p, where M_p is a Mersenne prime. The perfect number 6 comes from p = 2 in this way, as 2^(2−1) M_2 = 2 × 3 = 6, and the Mersenne prime 7 corresponds in the same way to the perfect number 28. History Euclid proved that 2^(p−1)(2^p − 1) is an even perfect number whenever 2^p − 1 is prime. This is the final result on number theory in Euclid's Elements; the later books in the Elements instead concern irrational numbers, solid geometry, and the golden ratio. Euclid expresses the result by stating that if a finite geometric series beginning at 1 with ratio 2 has a prime sum q, then this sum multiplied by the last term t in the series is perfect. Expressed in these terms, the sum q of the finite series is the Mersenne prime 2^p − 1 and the last term t in the series is the power of two 2^(p−1). Euclid proves that qt is perfect by observing that the geometric series with ratio 2 starting at q, with the same number of terms, is proportional to the original series; therefore, since the original series sums to q = 2^p − 1, the second series sums to q(2^p − 1), and both series together add to 2^p q = 2qt, two times the supposed perfect number. However, these two series are disjoint from each other and (by the primality of q) exhaust all the divisors of qt, so qt has divisors that sum to 2qt, showing that it is perfect. Over a millennium after Euclid, Alhazen conjectured that every even perfect number is of the form 2^(p−1)(2^p − 1) where 2^p − 1 is prime, but he was not able to prove this result. It was not until the 18th century, over 2000 years after Euclid, that Leonhard Euler proved that the formula 2^(p−1)(2^p − 1) will yield all the even perfect numbers. Thus, there is a one-to-one relationship between even perfect numbers and Mersenne primes; each Mersenne prime generates one even perfect number, and vice versa. After Euler's proof of the Euclid–Euler theorem, other mathematicians have published different proofs, including Victor-Amédée Lebesgue, Robert Daniel Carmichael, Leonard Eugene Dickson, John Knopfmacher, and Wayne L. McDaniel. Dickson's proof, in particular, has been commonly used in textbooks. This theorem was included in a web listing of the "top 100 mathematical theorems", dating from 1999, which was later used by Freek Wiedijk as a benchmark set to test the power of different proof assistants. 
By 2024, the proof of the Euclid–Euler theorem had been formalized in 7 of the 12 proof assistants recorded by Wiedijk. Proof Euler's proof is short and depends on the fact that the sum of divisors function σ is multiplicative; that is, if a and b are any two relatively prime integers, then σ(ab) = σ(a)σ(b). For this formula to be valid, the sum of divisors of a number must include the number itself, not just the proper divisors. A number is perfect if and only if its sum of divisors is twice its value. Sufficiency One direction of the theorem (the part already proved by Euclid) immediately follows from the multiplicative property: every Mersenne prime gives rise to an even perfect number. When 2^p − 1 is prime, σ(2^(p−1)(2^p − 1)) = σ(2^(p−1)) σ(2^p − 1). The divisors of 2^(p−1) are 1, 2, 4, ..., 2^(p−1). The sum of these divisors is a geometric series whose sum is 2^p − 1. Next, since 2^p − 1 is prime, its only divisors are 1 and itself, so the sum of its divisors is 2^p. Combining these, σ(2^(p−1)(2^p − 1)) = (2^p − 1) 2^p = 2 · 2^(p−1)(2^p − 1). Therefore, 2^(p−1)(2^p − 1) is perfect. Necessity In the other direction, suppose that an even perfect number has been given, and partially factor it as 2^k x, where x is odd. For 2^k x to be perfect, the sum of its divisors must be twice its value: 2^(k+1) x = σ(2^k x) = (2^(k+1) − 1) σ(x). (∗) The odd factor 2^(k+1) − 1 on the right side of (∗) is at least 3, and it must divide x, the only odd factor on the left side, so y = x/(2^(k+1) − 1) is a proper divisor of x. Dividing both sides of (∗) by the common factor 2^(k+1) − 1 and taking into account the known divisors x and y of x gives 2^(k+1) y = σ(x) = x + y + (the sum of any other divisors of x). For this equality to be true, there can be no other divisors. Therefore, y must be 1, and x must be a prime of the form 2^(k+1) − 1. References Theorems in number theory Articles containing proofs Leonhard Euler Mersenne primes Perfect numbers Euclid
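Euclid's direction of the theorem can be checked directly by computer. The short Python sketch below (an added illustration, not part of the original article) generates the even perfect numbers arising from the first few Mersenne primes and verifies by brute force that each equals the sum of its proper divisors:

# Generate the even perfect numbers 2**(p-1) * (2**p - 1) arising from small
# Mersenne primes and verify each equals the sum of its proper divisors.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

for p in range(2, 8):
    m = 2**p - 1                            # candidate Mersenne prime
    if is_prime(m):                         # p = 2, 3, 5, 7 all qualify here
        n = 2**(p - 1) * m                  # Euclid's form of a perfect number
        print(p, m, n, proper_divisor_sum(n) == n)
# prints: 2 3 6 True / 3 7 28 True / 5 31 496 True / 7 127 8128 True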
Euclid–Euler theorem
[ "Mathematics" ]
1,176
[ "Perfect numbers", "Theorems in number theory", "Mathematical problems", "Articles containing proofs", "Mathematical theorems", "Number theory" ]
43,351,965
https://en.wikipedia.org/wiki/Toomre%27s%20stability%20criterion
In astrophysics, Toomre's stability criterion (also known as the Safronov–Toomre criterion) is a relationship between parameters of a differentially rotating, gaseous accretion disc which can be used to determine approximately whether the system is stable. In the case of a stationary gas, the Jeans stability criterion can be used to compare the strength of gravity with that of thermal pressure. In the case of a differentially rotating disk, the shear force can provide an additional stabilizing force. The Toomre criterion for a disk to be stable can be expressed as c_s κ / (π G Σ) > 1, where c_s is the speed of sound (and a measure of the thermal pressure), κ is the epicyclic frequency, G is Newton's gravitational constant, and Σ is the surface density of the disk. The Toomre Q parameter is often defined as the left-hand side of this inequality, Q ≡ c_s κ / (π G Σ). The stability criterion can then simply be stated as Q > 1 for a disk to be stable against collapse. The previous discussion was for a gaseous disk, but a similar analysis can be applied to a disk of stars (for example, the disk of a galaxy), yielding a kinematic Q parameter, Q ≡ σ_R κ / (3.36 G Σ), where σ_R is the radial velocity dispersion, and κ is the local epicyclic frequency. Background Many astrophysical objects result from the gravitational collapse of gaseous objects (for example, star formation occurs when molecular clouds collapse under gravity), and thus the stability of gaseous systems is of great interest. In general, a physical system is 'stable' if: 1) It is in equilibrium (there is a balance of forces such that the system is static), and 2) small deviations from equilibrium will tend to damp out, so that the system tends to return to equilibrium. The most basic gravitational stability analysis is the Jeans criterion, which addresses the balance between self-gravity and thermal pressure in a gas. In terms of the two above stability conditions, the system is stable if: i) thermal pressure balances the force of gravity, and ii) if the system is compressed slightly, the outward pressure force must become stronger than the inward gravitational force - to return the system to equilibrium. In the Jeans case, the stability criterion is size dependent, resulting in the concept of a Jeans length and Jeans mass. The Toomre analysis, first studied by Viktor Safronov in the 1960s, considers not only gravity and pressure, but also shear forces from differential rotation. Conceptually, if a fluid is differentially rotating (such as in the Keplerian motion of an astrophysical disk), gravity not only has to overcome the internal pressure of the gas, but also needs to halt the relative motion between two parcels of fluid, allowing them to collapse together. The analysis was expanded upon by Alar Toomre in 1964, and presented in a more general and comprehensive framework. References Astrophysics
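As a back-of-envelope illustration of the criterion (an added sketch; the input values are assumptions of roughly solar-neighbourhood order of magnitude, not figures from the source):

# Evaluate the Toomre Q parameter Q = c_s * kappa / (pi * G * Sigma) for an
# illustrative gas disk. All inputs below are assumed, order-of-magnitude values.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c_s = 8.0e3          # sound speed of the gas, m/s (assumed ~8 km/s)
kappa = 1.2e-15      # epicyclic frequency, 1/s (assumed)
sigma = 0.021        # surface density, kg/m^2 (~10 solar masses per square parsec)

Q = c_s * kappa / (math.pi * G * sigma)
print(f"Q = {Q:.2f} ->", "stable" if Q > 1 else "unstable to axisymmetric collapse")
# With these assumptions Q is roughly 2, i.e. the disk is (marginally) stable;
# lowering the sound speed or raising the surface density drives Q below 1.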
Toomre's stability criterion
[ "Physics", "Astronomy" ]
585
[ "Astronomical sub-disciplines", "Astrophysics" ]
43,354,983
https://en.wikipedia.org/wiki/AHPL
A Hardware Programming Language (AHPL) is a hardware description language developed at the University of Arizona that has been used as a tool for teaching computer organization. It began as a set of notations for representing computer hardware in academic work, and came to be regarded as a hardware description language once a compiler and a simulator were developed for it. The language describes hardware functionality as a flow of data between ports or sub-modules. The notation, syntax, and semantics were based on the APL programming language. References Hardware description languages Educational software
AHPL
[ "Engineering" ]
108
[ "Electronic engineering", "Hardware description languages" ]
49,364,987
https://en.wikipedia.org/wiki/Pentazine
Pentazine is a hypothetical chemical compound that consists of a six-membered aromatic ring containing five nitrogen atoms with the molecular formula CHN5. The name pentazine is used in the nomenclature of derivatives of this compound. Pentazine is predicted to be unstable and to decompose into hydrogen cyanide (HCN) and nitrogen (N2). The activation energy required is predicted to be around 20 kJ/mol. See also 6-membered rings with one nitrogen atom: pyridine 6-membered rings with two nitrogen atoms: diazines 6-membered rings with three nitrogen atoms: triazines 6-membered rings with four nitrogen atoms: tetrazines 6-membered rings with six nitrogen atoms: hexazine References Azines (heterocycles) Hypothetical chemical compounds
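To get a feel for what a barrier of roughly 20 kJ/mol implies, one can make a rough Arrhenius estimate (an added back-of-envelope sketch; the pre-exponential factor is an assumed typical value for a unimolecular reaction, not a computed property of pentazine):

# Rough Arrhenius estimate of the decomposition rate implied by a ~20 kJ/mol
# barrier at room temperature. A is an assumed generic pre-exponential factor.
import math

R = 8.314      # gas constant, J/(mol*K)
T = 298.0      # temperature, K
Ea = 20e3      # activation energy, J/mol (predicted value quoted in the text)
A = 1e13       # assumed pre-exponential factor, 1/s

k = A * math.exp(-Ea / (R * T))
print(f"k ~ {k:.1e} s^-1, lifetime ~ {1 / k:.1e} s")
# With these assumptions k is of order 1e9 s^-1, i.e. a sub-nanosecond lifetime,
# consistent with pentazine being predicted to be too unstable to isolate.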
Pentazine
[ "Chemistry" ]
173
[ "Theoretical chemistry", "Hypotheses in chemistry", "Hypothetical chemical compounds", "Theoretical chemistry stubs" ]
49,365,038
https://en.wikipedia.org/wiki/VIPRE
VIPRE Security Group (also known as VIPRE or VIPRE Security), a brand of Ziff Davis, is a privately held cybersecurity company headquartered in New York. VIPRE develops cybersecurity products focused on endpoint and email security along with advanced threat intelligence applications. VIPRE is based globally with operations in Clearwater, Florida, Washington D.C., Vancouver B.C., Keele, United Kingdom, Dublin, Ireland, Copenhagen, Denmark, Stockholm, Sweden, Amsterdam, Netherlands, and Oslo, Norway. Corporate history The company was originally founded in 1994 as Sunbelt Software, which was acquired in 2010 by GFI Software. In 2013 Sunbelt Software was spun off and renamed ThreatTrack Security. By 2017 the company was concentrating on its VIPRE suite, and it now uses that name. The VIPRE portfolio now encompasses endpoint security, with heritage from the original Sunbelt Software anti-virus products; email security, with heritage from the UK company Fusemail and the Comendo, StaySecure, WeCloud, iCritical and ElectricMail products that had previously been acquired by j2; and security awareness training via the acquisition of Inspired e-Learning. VIPRE was featured in a PC World Magazine article. In February 2018 it was acquired by j2 Global. Acquisitions References Computer network security
VIPRE
[ "Engineering" ]
271
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
49,367,970
https://en.wikipedia.org/wiki/Algebroid%20function
In mathematics, an algebroid function is a solution of an algebraic equation whose coefficients are analytic functions. So y(z) is an algebroid function if it satisfies A_0(z) y^d + A_1(z) y^(d−1) + ... + A_d(z) = 0, where the coefficients A_k(z) are analytic. If this equation is irreducible then the function is d-valued, and can be defined on a Riemann surface having d sheets. References Analytic functions Equations
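A simple concrete example (added here for illustration) is the square root, sketched in LaTeX below:

% Example: the equation  y^2 - z = 0  has analytic coefficients (1 and -z), so
% its solution is an algebroid function with d = 2. The equation is irreducible,
% the function is 2-valued, and it lives on the familiar two-sheeted Riemann
% surface of the square root.
y^{2} - z = 0, \qquad y(z) = \pm\sqrt{z}.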
Algebroid function
[ "Mathematics" ]
77
[ "Algebra stubs", "Equations", "Mathematical objects", "Algebra" ]
49,371,092
https://en.wikipedia.org/wiki/L-Streptose
L-Streptose is a branched monosaccharide similar to apiose in structure. L-Streptose is one of the sugars in streptomycin, an aminoglycoside antibiotic that has toxic effects in the kidney and other side effects. L-Streptose has been prepared from a carbohydrate derivative. The protected monosaccharide was reacted with an organolithium sulfur compound and then catalytically hydrolyzed to produce L-streptose. References Monosaccharides Deoxy sugars
L-Streptose
[ "Chemistry" ]
133
[ "Deoxy sugars", "Carbohydrates", "Monosaccharides" ]
49,374,726
https://en.wikipedia.org/wiki/DAPHNE%20platform
The DAPHNE platform, or DAta-as-a-service Platform for Healthy lifestyle and preventive mediciNE, is an ICT ecosystem that uses software platforms, such as HealthTracker, to track individual health data on users or patients so that health service providers can provide personalised guidance remotely to the users or patients in terms of health, lifestyle, exercise and nutrition. It is led by Treelogic SL and partially funded by the European Union 7th Framework Programme for information and communication technology research. The project is listed in the European Commission CORDIS project listings. Description DAPHNE is an acronym for ‘DAta-as-a-service Platform for Healthy lifestyle and preventive mediciNE’ with the objective of developing and testing methods for utilising personal activity and fitness information. According to a 2016 article in Procedia Computer Science, the DAPHNE platform and project were developed in response to "growing concerns about obesity". DAPHNE's purpose was to develop a digital ecosystem using information and communications technology (ICT), through software such as HealthTracker, to connect patients and their physicians remotely and provide a means for remote personal guidance. Physicians can review a patient or user's data and provide guidance and feedback regarding obesity prevention. Health care providers can monitor their patients' "health parameters, medical history and physical condition" and give guidance related to "health, lifestyle, exercise and nutrition." The goal of DAPHNE is to promote a "combination of a healthy and balanced diet, an active lifestyle and regular exercise". The project aims to develop data analysis platforms for collecting, analysing and delivering information on physical fitness and behaviour. Standardised data platforms are being designed to help hardware and software developers to provide personalised health information to individuals and to their health service providers. Background Initial EU funding covered the period from 2013 to 2016. DAPHNE Project outputs Project outputs will include (i) advanced sensors which can link directly to mobile phones to acquire and store data on lifestyle, behaviour and the surrounding environment; (ii) intelligent data processing for the recognition of behaviour patterns and trends; (iii) software platforms for linking individual health data to health service providers for personalised guidance on healthy lifestyle and disease prevention and to contribute to Big Data services. HealthTracker The HealthTracker can automatically detect a user's physical activity such as "lying down, sitting down, standing up, walking, running and cycling". DAPHNE Project Consortium Treelogic (Coordinator) – Spain IBM Israel ATOS Spain S.A. University of Leeds UK Evalan BV, the Netherlands Ospedale Pediatrico Bambino Gesù, Italy Universidad Politécnica de Madrid, Spain SilverCloud Health Ltd, Ireland World Obesity Federation, UK Nevet Ltd, Israel (division of Maccabi Group Holdings) External links Project official website European Commission CORDIS project listings References Medical software
DAPHNE platform
[ "Biology" ]
565
[ "Medical software", "Medical technology" ]
49,375,299
https://en.wikipedia.org/wiki/Stent-electrode%20recording%20array
Stentrode (Stent-electrode recording array) is a small stent-mounted electrode array permanently implanted into a blood vessel in the brain, without the need for open brain surgery. It is in clinical trials as a brain–computer interface (BCI) for people with paralyzed or missing limbs, who will use their neural signals or thoughts to control external devices, which currently include computer operating systems. The device may ultimately be used to control powered exoskeletons, robotic prosthesis, computers or other devices. The device was conceived by Australian neurologist Thomas Oxley and built by Australian biomedical engineer Nicholas Opie, who have been developing the medical implant since 2010, using sheep for testing. Human trials started in August 2019 with participants who suffer from amyotrophic lateral sclerosis, a type of motor neuron disease. Graeme Felstead was the first person to receive the implant. To date, eight patients have been implanted and are able to wirelessly control an operating system to text, email, shop and bank using direct thought through the Stentrode brain computer interface, marking the first time a brain-computer interface was implanted via the patient's blood vessels, eliminating the need for open brain surgery. The FDA granted breakthrough designation to the device in August 2020. In January 2023, researchers demonstrated that it can record brain activity from a nearby blood vessel and be used to operate a computer with no serious adverse events during the first year in all four patients. Overview Opie began designing the implant in 2010, through Synchron, a company he founded with Oxley and cardiologist Rahul Sharma. The small implant is an electrode array made of platinum electrodes embedded within a nitinol endovascular stent. The device measures about 5 cm long and a maximum of 8 mm in diameter. The implant is capable of two-way communication, meaning it can both sense thoughts and stimulate movement, essentially acting as a feedback loop within the brain, which offers potential applications for helping people with spinal cord injuries and control robotic prosthetic limbs with their thoughts. The Stentrode device, developed by Opie and a team at the Vascular Bionics Laboratory within the Department of Medicine at the University of Melbourne, is implanted via the jugular vein into a blood vessel next to cortical tissue near to the motor cortex and sensory cortex, so open brain surgery is avoided. Insertion via the blood vessel avoids direct penetration and damage of the brain tissue. As for blood clotting concerns, Oxley says neurologists routinely use permanent stents in patients' brains to keep blood vessels open. Once in place, it expands to press the electrodes against the vessel wall close to the brain where it can record neural information and deliver currents directly to targeted areas. The signals are captured and sent to a wireless antenna unit implanted in the chest, which sends them to an external receiver. The patient would need to learn how to control a computer operating system that interacts with assistive technologies. The Stentrode technology has been tested on sheep and humans, with human trials being approved by the St Vincent's Hospital, Melbourne Human Research Ethics Committee, Australia in November 2018. Oxley originally expressed that he expected human clinical trials to help paralyzed people regain movement to operate a motorized wheelchair or even a powered exoskeleton. 
However, he switched focus before beginning clinical trials. Opie and colleagues began evaluating the Stentrode for its ability to restore functional independence in patients with paralysis, by enabling them to engage in activities of daily living. Clinical study results demonstrated the capability of two ALS patients, surgically fitted with a Stentrode, to learn to control texting and typing, through direct thought and the assistance of eye-tracking technology for cursor navigation. They achieved this with at least 92% accuracy within 3 months of use, and continued to maintain that ability up to 9 months (as of November 2020). This study helped to dispel some criticism that data rates may not be as high as systems requiring open brain surgery, and also pointed out the benefits of using well-established neuro-interventional techniques which do not require any automated assistance, dedicated surgical space or expensive machinery. Selected patients are people with paralyzed or missing limbs, including people who have suffered strokes, spinal cord injuries, ALS, muscular dystrophy, and amputations. See also Cortical implant Neuralink Neurorobotics References Prosthetics Biological engineering Biomedical engineering Brain–computer interface Human–computer interaction Implants (medicine) Neuroprosthetics Neural engineering User interface techniques
Stent-electrode recording array
[ "Engineering", "Biology" ]
947
[ "Biological engineering", "Biomedical engineering", "Human–machine interaction", "Human–computer interaction", "Medical technology" ]
45,108,382
https://en.wikipedia.org/wiki/Solid%20stress
Solid stress, one of the physical hallmarks of cancer, is exerted by the solid components of a tissue and accumulates within solid structural components (i.e., cells, collagen, and hyaluronan) during growth and progression. Solid stress in tumors is a residual stress that is elevated because of abnormal tumor growth and resistance to growth from the surrounding normal tissues or from within the tumors. Solid stress, independent of the interstitial fluid pressure, induces hypoxia and impedes drug delivery by compressing blood vessels in tumors. Solid stress is heterogeneous in tumors, with tensile stresses distributed more at the periphery of the tumor, and compressive stresses more at the tumor core. References Jain R.K., "An indirect way to tame cancer", Sci Am 310(2): 46–53, 2014 Jain R.K., J. D. Martin and T. Stylianopoulos, "The role of mechanical forces in tumor progression and therapy", Annual Review of Biomedical Engineering, 16:321-46, 2014. Jain R.K., "Normalizing tumor microenvironment to treat cancer: bench to bedside to biomarkers", J Clin Oncol, 31(17):2205-18, 2013. Stylianopoulos T., J.D. Martin, M. Snuderl, F. Mpekris, S. Jain and R.K. Jain, "Coevolution of solid stress and interstitial fluid pressure in tumor during progression: Implications for vascular collapse", Cancer Research, 73(13): 3833–3841, 2013. Helmlinger G., P.A. Netti, H. C. Lichtenbeld, R. J. Melder, R. K. Jain, "Solid stress inhibits the growth of multicellular tumor spheroids", Nat Biotechnol 15:778-783, 1997. Oncology Pressure
Solid stress
[ "Physics" ]
413
[ "Scalar physical quantities", "Mechanical quantities", "Physical quantities", "Pressure", "Wikipedia categories named after physical quantities" ]
45,111,627
https://en.wikipedia.org/wiki/Starlink
Starlink is a satellite internet constellation operated by Starlink Services, LLC, an international telecommunications provider that is a wholly owned subsidiary of American aerospace company SpaceX, providing coverage to over 100 countries and territories. It also aims to provide global mobile broadband. Starlink has been instrumental to SpaceX's growth. SpaceX started launching Starlink satellites in 2019. As of September 2024, the constellation consists of over 7,000 mass-produced small satellites in low Earth orbit (LEO) that communicate with designated ground transceivers. Nearly 12,000 satellites are planned to be deployed, with a possible later extension to 34,400. SpaceX announced reaching more than 1 million subscribers in December 2022 and 4 million subscribers in September 2024. The SpaceX satellite development facility in Redmond, Washington, houses the Starlink research, development, manufacturing, and orbit control facilities. In May 2018, SpaceX estimated the total cost of designing, building and deploying the constellation would be at least US$10 billion. Revenues from Starlink in 2022 were reportedly $1.4 billion accompanied by a net loss, with a small profit first being reported in 2023. In May 2024, revenue was expected to reach $6.6 billion for the year, but later in 2024 the prediction was raised to $7.7 billion. Revenue is expected to reach $11.8 billion in 2025. Starlink has been extensively used in the Russo-Ukrainian War, a role for which it has been contracted by the United States Department of Defense. Starshield, a military version of Starlink, is designed for government use. Astronomers raised concerns about the effect the constellation may have on ground-based astronomy, and how the satellites will contribute to an already congested orbital environment. SpaceX has attempted to mitigate astronomical interference concerns with measures to reduce the satellites' brightness during operation. The satellites are equipped with Hall-effect thrusters allowing them to raise their orbit, station-keep, and de-orbit at the end of their lives. They are also designed to autonomously and smoothly avoid collisions based on uplinked tracking data. History Background Constellations of low Earth orbit satellites were first conceptualized in the mid-1980s as part of the Strategic Defense Initiative, culminating in Brilliant Pebbles, where weapons were to be staged in low orbits to intercept ballistic missiles at short notice. The potential for low-latency communication was also recognized and development offshoots in the 1990s led to numerous commercial megaconstellations using around 100 satellites such as Celestri, Teledesic, Iridium, and Globalstar. However, all of these ventures had entered bankruptcy by the time of the dot-com bubble burst, due in part to excessive launch costs at the time. In 2004, Larry Williams, SpaceX VP of Strategic Relations and former VP of Teledesic's "Internet in the sky" program, opened the SpaceX Washington DC office. That June, SpaceX acquired a stake in Surrey Satellite Technology (SSTL) as part of a "shared strategic vision". SSTL was at that time working to extend the Internet into space. However, SpaceX's stake was eventually sold back to EADS Astrium in 2008 after the company became more focused on navigation and Earth observation. In early 2014, Elon Musk and Greg Wyler were working together planning a constellation of around 700 satellites called WorldVu, which would be over 10 times the size of the then largest Iridium satellite constellation. 
However, these discussions broke down in June 2014, and SpaceX instead filed an International Telecommunications Union (ITU) application via the Norwegian Communications Authority under the name STEAM. SpaceX confirmed the connection in the 2016 application to license Starlink with the Federal Communications Commission (FCC). SpaceX trademarked the name Starlink in the United States for their satellite broadband network; the name was inspired by the 2012 novel The Fault in Our Stars. Design phase (2015–2016) Starlink was publicly announced in January 2015 with the opening of the SpaceX satellite development facility in Redmond, Washington. During the opening, Musk stated that there was still significant unmet demand worldwide for low-cost broadband capabilities, and that Starlink would target bandwidth to carry up to 50% of all backhaul communications traffic, and up to 10% of local Internet traffic, in high-density cities. Musk further stated that the positive cash flow from selling satellite internet services would be necessary to fund their Mars plans. Furthermore, SpaceX has long-term plans to develop and deploy a version of the satellite communication system to serve Mars. Starting with 60 engineers, the company operated in leased space, and by January 2017 had taken on a second facility, both in Redmond. In August 2018, SpaceX consolidated all their Seattle-area operations with a move to a larger three-building facility at Redmond Ridge Corporate Center to support satellite manufacturing in addition to R&D. In July 2016, SpaceX acquired an additional creative space in Irvine, California (Orange County). The Irvine office would include signal processing, RFIC, and ASIC development for the satellite program. By October 2016, the satellite division was focusing on a significant business challenge of achieving a sufficiently low-cost design for the user equipment. SpaceX President Gwynne Shotwell said then that the project remained in the "design phase as the company seeks to tackle issues related to user-terminal cost". Start of development phase (2016–2019) In November 2016, SpaceX filed an application with the FCC for a "non-geostationary orbit (NGSO) satellite system in the fixed-satellite service using the Ku- and Ka- frequency bands". In September 2017, the FCC ruled that half of the constellation must be in orbit within six years to comply with licensing terms, while the full system should be in orbit within nine years from the date of the license. SpaceX filed documents in late 2017 with the FCC to clarify their space debris mitigation plan, under which the company was to: "...implement an operations plan for the orderly de-orbit of satellites nearing the end of their useful lives (roughly five to seven years) at a rate far faster than is required under international standards. [Satellites] will de-orbit by propulsively moving to a disposal orbit from which they will re-enter the Earth's atmosphere within approximately one year after completion of their mission." In March 2018, the FCC granted SpaceX approval for the initial 4,425 satellites, with some conditions. SpaceX would need to obtain a separate approval from the ITU. The FCC supported a NASA request to ask SpaceX to achieve an even higher level of de-orbiting reliability than the standard that NASA had previously used for itself: reliably de-orbiting 90% of the satellites after their missions are complete. In May 2018, SpaceX expected the total cost of development and buildout of the constellation to approach $10 billion. 
In mid-2018, SpaceX reorganized the satellite development division in Redmond and terminated several members of senior management.
First launches (2019–2020)
After launching two test satellites in February 2018, the first batch of 60 operational Starlink satellites was launched in May 2019. By late 2019, SpaceX was transitioning its satellite efforts from research and development to manufacturing, with the planned first launch of a large group of satellites to orbit, and the clear need to achieve an average launch rate of "44 high-performance, low-cost spacecraft built and launched every month for the next 60 months" to get the 2,200 satellites launched to support their FCC spectrum allocation license assignment. SpaceX said they would meet the deadline of having half the constellation "in orbit within six years of authorization... and the full system in nine years". By July 2020, Starlink's limited beta internet service was opened to invitees from the public. Invitees had to sign non-disclosure agreements, and were only charged $2 per month to test out billing services. In October 2020, a wider public beta was launched, in which beta testers were charged the full monthly cost and could speak freely about their experience. Starlink beta testers reported speeds over 150 Mbit/s, above the range announced for the public beta test.
Commercial service (2021–present)
Pre-orders were first opened to the public in the United States and Canada in early 2021. The FCC had earlier awarded SpaceX $885.5 million worth of federal subsidies to support rural broadband customers in 35 U.S. states through Starlink, but the aid package was revoked in August 2022, with the FCC stating that Starlink "failed to demonstrate" its ability to deliver the promised service. SpaceX later appealed the decision, saying they met or surpassed all RDOF deployment requirements that existed during bidding and that the FCC created "new standards that no bidder could meet today". In December 2023, the FCC formally denied SpaceX's appeal since "Starlink had not shown that it was reasonably capable of fulfilling RDOF's requirements to deploy a network of the scope, scale, and size" required to win the subsidy. In March 2021, SpaceX submitted an application to the FCC for mobile variations of their terminal designed for vehicles, vessels and aircraft, and later in June the company applied to the FCC to use mobile Starlink transceivers on launch vehicles flying to Earth orbit, after having previously tested high-altitude low-velocity mobile use on a rocket prototype in May 2021. In 2022, SpaceX announced the Starlink Business service tier, a higher-performance version of the service. It provides a larger high-performance antenna and listed speeds of between 150 and 500 Mbit/s, with a cost of $2,500 for the antenna and a $500 monthly service fee. The service includes 24/7, prioritized support, and deliveries were advertised to begin in the second quarter of 2022. The FCC also approved the licensing of Starlink services to boats, aircraft, and moving vehicles. Starlink terminal production delays caused by the 2020–2023 global chip shortage limited sign-ups to only 5,000 subscribers in the last two months of 2021, but this was soon resolved. On December 1, 2022, the FCC issued an approval for SpaceX to launch the initial 7,500 satellites for its second-generation (Gen2) constellation, in three low-Earth-orbit orbital shells, at 525, 530, and 535 km (326, 329 and 332 mile) altitude.
Overall, SpaceX had requested approval for as many as 29,988 Gen2 satellites, with approximately 10,000 in the 525–535 km (326 to 332 mile) altitude shells, plus ~20,000 in 340–360 km (210 mile to 220 mile) shells and nearly 500 in 604–614 km (375 to 382 mile) shells. However, the FCC noted that this was not a net increase in approved on-orbit satellites for SpaceX, since SpaceX was no longer planning to deploy the 7,518 V-band satellites that had previously been authorized. In March 2023, the company reported that it was manufacturing six Starlink "v2 mini" satellites per day as well as thousands of user terminals. The v2 mini has Gen2 Starlink satellite features while being built in a smaller form factor than the full-sized Gen2 satellites, which require the 9 meter (29.5 foot) diameter Starship in order to launch. The Starlink business unit had a single cash-flow-positive quarter during 2022 and expected to be profitable in 2023. In May 2018, SpaceX estimated the total cost of designing, building and deploying the constellation would be at least US$10 billion. In January 2017, SpaceX expected annual revenue from Starlink to reach $12 billion by 2022 and exceed $30 billion by 2025. Starlink operated at a loss in 2021. Revenue from Starlink in 2022 was reportedly $1.4 billion, accompanied by a net loss, with a small profit being reported by Musk starting in 2023. Tensions between Brazil and Elon Musk's business ventures escalated in 2024 as the country's telecom regulator Anatel threatened to sanction Starlink after Brazil's top court upheld a ban on X. President Luiz Inácio Lula da Silva supported the decision, citing X's role in allegedly spreading hate and misinformation undermining Brazil's democracy. Judge Alexandre de Moraes had frozen Starlink's accounts, and Starlink refused to comply with an order to block domestic access to X until the freeze was lifted, risking its license to operate. The Wall Street Journal reported in October 2024 that Musk had been in regular contact with Russian President Vladimir Putin and other high-ranking Russian government officials since late 2022, discussing personal topics, business and geopolitical matters. The Journal reported that Putin had asked Musk to avoid activating his Starlink satellite system over Taiwan, to appease Chinese Communist Party general secretary Xi Jinping. The communications were reported to be a closely held secret in government, given Musk's involvement in promoting the presidential candidacy of Donald Trump and his security clearance to access classified government information. One person said no alerts were raised by the U.S. government, noting the dilemma of the government being dependent on Musk's technologies. Musk initially voiced support for Ukraine's defense against Russia's 2022 invasion by donating Starlink terminals, but later decided to limit Ukrainian access to Starlink, a shift that coincided with Russian pressure applied both in public and in private. In a November 2024 call with President Volodymyr Zelenskyy, Musk said he would continue supporting Ukraine through Starlink. SpaceX has asked its numerous Taiwanese suppliers to move production abroad, citing geopolitical risk. The move was questioned by the Taiwanese government and drew significant anger from the Taiwanese public, with citizens pointing out that Starlink was unavailable in Taiwan even though Taiwanese suppliers underpin the technology, and others calling for a boycott of Tesla products.
In November 2024, SpaceX proposed a constellation of Starlink satellites around Mars, referred to as "Marslink." The proposed system would be capable of providing more than 4 Mbit/s of bandwidth between Earth and Mars as well as imaging services. Starting in July 2024, SpaceX began conducting tests on Starlink in cooperation with the Romanian Ministry of National Defense and National Authority for Communications Administration and Regulation (ANCOM). These tests aim to demonstrate that the Equivalent Power Flux Density (EPFD) limit can be safely increased, improving the speed and coverage area of Starlink without affecting classic geostationary satellites. The results of these tests will be used to help change a rule set by the International Telecommunication Union in the 1990s regarding the limits of non-geostationary satellites.
Subscribers
As of December 2024, Starlink reported more than 4.6 million customers worldwide.
Services
Satellite internet
Starlink provides satellite-based internet connectivity to underserved areas of the planet, as well as competitively priced service in more urbanized areas. In the United States, Starlink charged, at launch, a one-time hardware fee of $599 for a user terminal and $120 per month for internet service at a fixed service address. An additional $25 per month allows the user terminal to move beyond a fixed location (Starlink For RVs), but with service speeds deprioritized compared to the fixed users in that area. Fixed users are told to expect typical throughput of "50 to 150 Mbit/s and latency from 20 to 40 ms"; a study found that users averaged download speeds of 90.55 Mbit/s in the first quarter of 2022, dropping to 62.5 Mbit/s in the second quarter. A higher-performance version of the service (Starlink Business) advertises speeds of 150 to 500 Mbit/s in exchange for a more costly $2,500 user terminal and a $500 monthly service fee. Another service, called Starlink Maritime, became available in July 2022, providing internet access on the open ocean with speeds of 350 Mbit/s, requiring purchase of a maritime-grade $10,000 user terminal and a $5,000 monthly service fee. Sales are capped at a few hundred fixed users per 20 km (10 mile) "service cell area" due to limited wireless capacity. Starlink alternatively offers a Best Effort service tier allowing homes in capped areas to receive the current unused bandwidth of their cell while they are on the waiting list for more prioritized service; the price and equipment are the same as the residential service at $110 per month. To improve service quality in densely populated areas, Starlink introduced a monthly 1 TB data cap for all non-business users, which was enforced starting in 2023. In August 2022, SpaceX lowered monthly service costs for users in select countries; for example, users in Brazil and Chile saw monthly fee decreases of about 50%. According to internet analysis company Ookla, Starlink speeds degraded during the first half of 2022 as more customers signed up for the service. SpaceX has said that Starlink speeds will improve as more satellites are deployed. In September 2023, satellite operator SES announced a satellite internet service for cruise lines using both the Starlink satellites in Low Earth Orbit (LEO) and SES' own O3b mPOWER satellite constellation in Medium Earth Orbit (MEO).
Integrated, sold and delivered by SES, the SES Cruise mPOWERED + Starlink service claims to combine the best features of LEO and MEO orbits to provide high-speed, secure connectivity at up to 3 Gbit/s per ship, to cruise ships anywhere in the world. In February 2024, SES announced that Virgin Voyages will be the first cruise line to deploy the service. Satellite cellular service For future service, T-Mobile US and SpaceX are partnering to add satellite cellular service capability to Starlink satellites. It will provide dead-zone cell phone coverage across the US using the existing midband PCS spectrum owned by T-Mobile. Cell coverage will begin with text messaging and expand to include voice and limited data services later, with testing to begin in 2024. T-Mobile plans to connect to Starlink satellites via existing 4G LTE mobile devices, unlike previous generations of satellite phones, which used specialized radios, modems, and antennas to connect to satellites in higher orbits. Bandwidth will be limited to 2 to 4 megabits per second total, split across a very large cell coverage area, which would be limited to thousands of voice calls or millions of text messages simultaneously in a coverage area. The size of a single coverage cell has not yet been publicly released. The first six cell phone capable satellites launched on January 2, 2024. Rogers Communications, in April 2023, signed an agreement with SpaceX for using Starlink for satellite-to-phone services in Canada. Also in April 2023, One NZ (formerly Vodafone New Zealand) announced that they would be partnering with SpaceX's Starlink to provide 100% mobile network coverage over New Zealand. SMS text service is expected to begin in 2024, with voice and data functionality in 2025. In July 2023, Optus in Australia announced a similar partnership. On January 8, 2024, it was confirmed by SpaceX that they had successfully tested text messaging using the new Direct-to-Cell capability on T-Mobile's network. Military applications SpaceX also designs, builds, and launches customized military satellites based on variants of the Starlink satellite bus, with the largest publicly known customer being the Space Development Agency (SDA). SDA accelerates development of missile defense capabilities, primarily via observation platforms, using industry-procured low-cost low Earth orbit satellite platforms. In October 2020, SDA awarded SpaceX an initial $150 million dual-use contract to develop 4 satellites to detect and track ballistic and hypersonic missiles. The first batch of satellites were originally scheduled to launch September 2022 to form part of the Tracking Layer Tranche 0 of the U.S. Space Force's National Defense Space Architecture (NDSA), a network of satellites performing various roles including missile tracking. The launch schedule slipped multiple times but eventually launched in April 2023. In 2020, SpaceX hired retired four-star general Terrence J. O'Shaughnessy who, according to some sources, is associated with Starlink's military satellite development, and according to one source, is listed as a "chief operating officer" at SpaceX. While still on active duty, O'Shaughnessy advocated before the United States Senate Committee on Armed Services for a layered capability with lethal follow-on that incorporates machine learning and artificial intelligence to gather and act upon sensor data quickly. 
SpaceX was not awarded a contract for the larger Tranche 1, with awards going to York Space Systems, Lockheed Martin Space, and Northrop Grumman Space Systems. Starshield In December 2022, SpaceX announced Starshield, a separate Starlink service designed for government entities and military agencies. Starshield enables the U.S. Department of Defense (DoD) to own or lease Starshield satellites for partners and allies. Cybernews remarked that Starshield was first announced in late 2022, when Starlink's presence in Ukraine showed the importance it can have in modern warfare. While Starlink had not been adapted for military use, Starshield has the usual requirements for mobile military systems like encryption and anti-jam capabilities. Elon Musk stated that "Starlink needs to be a civilian network, not a participant to combat. Starshield will be owned by the US government and controlled by DoD Space Force. This is the right order of things." Starshield satellites are advertised as capable of integrating a wide variety of payloads. Starshield satellites will be compatible with, and interconnect to, the existing commercial Starlink satellites via optical inter-satellite links. In January 2022, SpaceX deployed four national security satellites for the U.S. government on their Transporter-3 rideshare mission. In the same year they launched another group of four U.S. satellites with an on-orbit spare Globalstar FM-15 satellite in June. In September 2023, the Starshield program received its first contract from the U.S. Space Force to provide customized satellite communications for the military. This is under the Space Force's new "Proliferated Low Earth Orbit" program for LEO satellites, where Space Force will allocate up to $900 million worth of contracts over the next 10 years. Although 16 vendors are competing for awards, the SpaceX contract is the only one to have been issued to date. The one-year Starshield contract was awarded on September 1, 2023. The contract is expected to support 54 mission partners across the Army, Navy, Air Force, and Coast Guard. Military communications In 2019, tests by the United States Air Force Research Laboratory (AFRL) demonstrated a 610 Mbit/s data link through Starlink to a Beechcraft C-12 Huron aircraft in flight. Additionally, in late 2019, the United States Air Force successfully tested a connection with Starlink on an AC-130 Gunship. In 2020, the Air Force used Starlink in support of its Advanced Battlefield management system during a live-fire exercise. They demonstrated Starlink connected to a "variety of air and terrestrial assets" including the Boeing KC-135 Stratotanker. Expert on battlefield communications Thomas Wellington has argued that Starlink signals, because they use narrow focused beams, are less vulnerable to interference and jamming by the enemy in wartime than satellites flying in higher orbits. In May 2022, Chinese military researchers published an article in a peer-reviewed journal describing a strategy for destroying the Starlink constellation if they threaten national security. The researchers specifically highlight concerns with reported Starlink military capabilities. Musk has declared Starlink is meant for peaceful use and has suggested Starlink could enforce peace by taking strategic initiative. Russian officials including the head of Russia's space agency Dmitry Rogozin, have warned Elon Musk and criticized Starlink, including warning that Starlink could become a legitimate military target in the future. 
Russo-Ukrainian War Starlink was activated during the Russian invasion of Ukraine, after a request from the Ukrainian government. Ukraine's military and government rapidly became dependent on Starlink to maintain Internet access. Starlink is used by Ukraine for communication, such as keeping in touch with the outside world and keeping the energy infrastructure working. The service is also notably used for warfare. Starlink is used for connecting combat drones, naval drones, artillery fire coordination systems and attacks on Russian positions. SpaceX has expressed reservations about the offensive use of Starlink by Ukraine beyond military communications and restricted Starlink communication technology for military use on weapon systems, but has kept most of the service online. Its use in attacking Russian targets has been criticized by the Kremlin. Musk has warned that the service was costing $20 million per month, and a Ukrainian official estimated SpaceX's contributions as over $100 million. In June 2023, the United States Department of Defense signed a contract with SpaceX to finance Starlink use in Ukraine. Israel–Hamas War In October 2023 after the Israel–Hamas conflict started, users shared the hashtag #starlinkforgaza on Elon Musk's social network X (formerly Twitter), demanding he activate Starlink in Gaza after Internet service in the region was lost. Musk answered that Starlink connectivity would be provided for aid groups in Gaza. At the end of November, Musk said the Starlink service would only be provided for Gaza with the approval of the government of Israel. Iran In 2022, the U.S. State Department and U.S. Treasury Department updated rules regarding export of technology to Iran, allowing Starlink to be exported to Iran in support of the Iranian protests against compulsory hijab, which had triggered extensive government censorship. Immediately afterwards, Starlink service was activated in Iran. In 2023, the Iranian government filed a complaint with the ITU against SpaceX for unauthorized Starlink operation in Iran. In October 2023 and March 2024, the ITU ruled in favor of Iran, dismissing a SpaceX assertion that it should not be expected to verify the location of every terminal connecting to its satellites. Iran claimed that SpaceX was capable of determining their user terminal locations by citing a tweet from Musk saying there were 100 Starlink terminals operating within Iran. Internet availability and regulatory approval by country In order to offer satellite services in any nation-state, International Telecommunication Union (ITU) regulations and long-standing international treaties require that landing rights be granted by each country jurisdiction, and within a country, by the national communications regulators. As a result, even though the Starlink network has near-global reach at latitudes below approximately 60°, broadband services can only be provided in 40 countries as of September 2022. SpaceX can also have business operation and economic considerations that may make a difference in which countries Starlink service is offered, in which order, and how soon. For example, SpaceX formally requested authorization for Canada only in June 2020, the Canadian regulatory authority approved it in November 2020, and SpaceX rolled out service two months later, in January 2021. As of September 2022, Starlink services were on offer in 40 countries, with applications pending regulatory approval in many more. 
Canada was the first country outside the United States to approve the service, with Innovation, Science and Economic Development Canada announcing regulatory approval for the Starlink low Earth orbit satellite constellation on November 6, 2020. In May 2022, Starlink entered the Philippine market, the company's first deployment in Asia, enabled by a landmark legislative change (RA 11659, amending the Public Service Act) that allowed full foreign ownership of public-service companies such as internet and telecommunications providers. Starlink received provisional permission from the country's Department of Information and Communications Technology (DICT), National Telecommunications Commission (NTC), and Department of Trade and Industry (DTI) and soon began commercial services aimed at regions with lower internet connectivity. In August 2022, SpaceX secured its first contract for services in the passenger shipping industry: Royal Caribbean Group added Starlink internet to Freedom of the Seas and planned to offer the service on 50 ships under its Royal Caribbean International, Celebrity Cruises, and Silversea Cruises brands by March 2023. Starlink services on private jet charter flights in the U.S. by JSX airline were expected to begin in late 2022, and Hawaiian Airlines had contracted to provide "Starlink services on transpacific flights to and from Hawaii in 2023." In June 2023, a license to offer internet services in Zambia was granted to Starlink by the Zambian Government through its Electronic Government Division – SMART Zambia, after the completion of many trial projects throughout the country; in October 2023, Starlink officially went live in Zambia. In July 2023, the Mongolian government issued two licenses to SpaceX to provide internet access in the country. In July 2023, Bloomberg reported that attempts to sell the service to Taiwan in 2022 fell through when SpaceX insisted on 100% ownership of the Taiwan subsidiary running Starlink in the country; this went against Taiwanese law requiring that internet service providers (ISPs) be at least 51% controlled by local companies, an impracticality when dealing with a globe-spanning ISP. Japan's major mobile provider, KDDI, announced a partnership with SpaceX to begin offering expanded connectivity in 2022 for its rural mobile customers via 1,200 remote mobile towers. On April 25, 2022, Hawaiian Airlines announced an agreement with Starlink to provide free internet access on its aircraft, becoming the first airline to use Starlink. By July 2022, Starlink internet service was available in 36 countries and 41 markets. In May 2022, it was announced that regulatory approval had been granted for Nigeria, Mozambique, and the Philippines; in the Philippines, commercial availability began on February 22, 2023. In September 2022, trials began at McMurdo Station in Antarctica, and from December 2022 on field missions. Antarctica has no ground stations, so polar-orbiting satellites with optical interlinks are used to connect to ground stations in South America, New Zealand, and Australia. In September 2023, the US-based United Against Nuclear Iran started donating subscriptions and terminals to Iranians to allow them to circumvent Iran's internet blackout. In September 2023, some Indian news outlets reported that Starlink would imminently receive its license to operate in India after meeting all regulatory requirements, but that it would still be required to apply for spectrum allocation in order to provide service.
SpaceX had earlier sold 5,000 Starlink preorders in India, and in 2021 had announced that Sanjay Bhargava, who had worked with Musk as part of a team that founded electronic payment firm PayPal, would head Musk's Starlink satellite broadband venture in India. Three months later, Bhargava resigned "for personal reasons" after the Indian government ordered SpaceX to halt selling preorders for Starlink service until SpaceX gained regulatory approval for providing satellite internet services in the country. In April 2024, some Indian news outlets reported that Starlink had received its "in-principle government approval" and that the approval now "lies at the desk of communications minister Ashwini Vaishnaw". In November 2023, Starlink received licenses to operate in Fiji; the service was launched in Fiji in May 2024. In April 2024, it was reported that the company would begin trial service in Indonesia in May, and Starlink received its license to operate in Indonesia in early May. In May 2024, Starlink service was available for pre-order in Sri Lanka, pending regulatory approval; Starlink received its license to operate in Sri Lanka in August of the same year. In August 2024, Starlink received licenses to operate in Yemen, with services to be offered through sales points distributed across most governorates, providing device sales, activation, subscription fee payments, and direct technical support. On 22 October 2024, Qatar Airways launched the first Starlink-equipped Boeing 777 flight, flying from Doha to London. As of November 2024, Morocco is set to give regulatory approval to Starlink by 2025.
Technology
Satellite hardware
The internet communication satellites were expected to be smallsats, in mass, and were intended to be in low Earth orbit (LEO) at an altitude of approximately , according to early public releases of information in 2015. The first significant deployment of 60 satellites was in May 2019, with each satellite weighing . SpaceX decided to place the satellites at a relatively low altitude due to concerns associated with space debris from failures or low fuel in the space environment, as well as letting them use fewer satellites than initially needed. Initial plans were for the constellation to be made up of approximately 4,000 cross-linked satellites, more than twice as many operational satellites as were in orbit in January 2015. The satellites employ optical inter-satellite links and phased array beam-forming and digital processing technologies in the Ku and Ka microwave bands (super high frequency [SHF] to extremely high frequency [EHF]), according to documents filed with the U.S. FCC. While specifics of the phased array technologies have been disclosed as part of the frequency application, SpaceX enforced confidentiality regarding details of the optical inter-satellite links. Early satellites were launched without laser links; the inter-satellite laser links were successfully tested in late 2020. The satellites are mass-produced, at a much lower cost per unit of capability than previously existing satellites. Musk said, "We're going to try and do for satellites what we've done for rockets." He also said, "In order to revolutionize space, we have to address both satellites and rockets," and that "smaller satellites are crucial to lowering the cost of space-based Internet and communications".
In February 2015, SpaceX asked the FCC to consider future innovative uses of the Ka-band spectrum before the FCC committed to 5G communications regulations that would create barriers to entry, since SpaceX was a new entrant to the satellite communications market. The SpaceX non-geostationary orbit communications satellite constellation operates in the high-frequency bands above 24 GHz, "where steerable Earth station transmit antennas would have a wider geographic impact, and significantly lower satellite altitudes magnify the impact of aggregate interference from terrestrial transmissions". Internet traffic via a geostationary satellite has a minimum theoretical round-trip latency of at least 477 milliseconds (ms; between user and ground gateway), but in practice, current satellites have latencies of 600 ms or more. Starlink satellites orbit at a small fraction of the height of geostationary orbits, and thus offer more practical Earth-to-satellite latencies of around 25 to 35 ms, comparable to existing cable and fiber networks. The system uses a peer-to-peer protocol claimed to be "simpler than IPv6"; it also incorporates native end-to-end encryption. Starlink satellites use Hall-effect thrusters with krypton or argon gas as the reaction mass for orbit raising and station keeping. Krypton Hall thrusters tend to exhibit significantly higher erosion of the flow channel compared to a similar electric propulsion system operated with xenon, but krypton is much more abundant and has a lower market price. SpaceX claims that its second-generation thruster using argon has 2.4× the thrust and 1.5× the specific impulse of the krypton-fueled thruster.
User terminals
The Starlink system has multiple modes of connectivity, including direct-to-cell capability as well as broadband satellite internet service. Direct-to-cell provides connectivity to unmodified cellular phones and is being offered globally in partnership with various national cellular service providers. Starlink's broadband internet service is accessed via flat user terminals the size of a pizza box, which have phased array antennas and track the satellites. The terminals can be mounted anywhere, as long as they can see the sky; this includes fast-moving objects like trains. Photographs of the customer antennas were first seen on the internet in June 2020, supporting earlier statements by SpaceX CEO Musk that the terminals would look like a "UFO on a stick" and that the "Starlink Terminal has motors to self-adjust optimal angle to view sky". The antenna is known internally as "Dishy McFlatface". In October 2020, SpaceX launched a paid-for beta service in the U.S. called "Better Than Nothing Beta", charging $499 for a user terminal, with an expected service of "50 to 150 Mbit/s and latency from 20 to 40 ms over the next several months". From January 2021, the paid-for beta service was extended to other continents, starting with the United Kingdom. A larger, high-performance version of the antenna is available for use with the Starlink Business service tier. In September 2020, SpaceX applied for permission to put terminals on 10 of its ships, with the expectation of entering the maritime market in the future.
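The latency figures quoted above follow from simple light-travel-time arithmetic. The sketch below is illustrative only: it assumes a bent-pipe path (user to satellite to gateway and back) and a 550 km shell altitude, a widely reported value that is not stated in the text above.

```python
# Light-travel-time check of the latency figures quoted above.
# The 550 km Starlink shell altitude is an assumption (widely reported, but
# elided in the text); 35,786 km is the geostationary altitude.
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

def bent_pipe_rtt_ms(altitude_km):
    # user -> satellite -> gateway -> satellite -> user: four one-way legs,
    # each at least the satellite altitude (straight-up, best-case geometry)
    return 4 * altitude_km / C_KM_S * 1000

print(f"GEO minimum RTT:  {bent_pipe_rtt_ms(35_786):.0f} ms")   # ~477 ms
print(f"LEO (550 km) RTT: {bent_pipe_rtt_ms(550):.1f} ms")      # ~7 ms of path delay
# Most of the quoted 25-35 ms observed Starlink latency therefore comes largely
# from slant-range geometry, processing, queuing, and terrestrial backhaul
# rather than free-space propagation.
```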
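The flat user terminals described above steer their beams electronically with a phased array rather than a moving dish. The following sketch illustrates the generic per-element phase calculation behind such beam steering; the element count, spacing, frequency, and steering angle are illustrative assumptions, not published Starlink terminal parameters.

```python
# Generic uniform-linear-array beam steering, as used conceptually by
# flat-panel satellite terminals. All numbers are illustrative assumptions.
import numpy as np

freq_hz = 12e9                      # assumed Ku-band-ish frequency (illustrative)
wavelength = 3e8 / freq_hz          # ~2.5 cm
d = wavelength / 2                  # half-wavelength element spacing (a common choice)
n_elements = 16                     # one row of a uniform linear array
steer_deg = 30.0                    # desired beam direction off boresight

k = 2 * np.pi / wavelength
element_idx = np.arange(n_elements)
# Per-element phase shifts that steer the main lobe to steer_deg
phases = -k * d * element_idx * np.sin(np.radians(steer_deg))

# Array factor over look angles: its magnitude should peak at the steering angle
angles = np.radians(np.linspace(-90, 90, 721))
steering_terms = np.exp(1j * (k * d * np.outer(element_idx, np.sin(angles)) + phases[:, None]))
array_factor = np.abs(steering_terms.sum(axis=0))
print(f"Beam peak at {np.degrees(angles[array_factor.argmax()]):.1f} deg (target {steer_deg} deg)")
```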
In August 2022, and in response to an open invitation from SpaceX to have the terminal examined by the security community, security specialist Lennert Wouters presented several technical architecture details about the then-current Starlink terminals: the main control unit of the dish is an STMicroelectronics custom-designed chip code-named Catson, a quad-core ARM Cortex-A53-based control processor running the Linux kernel and booted using U-Boot. The main processor uses several other custom chips, such as a digital beamformer named Shiraz and a front-end module named Pulsarad. The main control unit controls an array of digital beamformers, and each beamformer controls 16 front-end modules. In addition, the terminal has a GPS receiver, motor controllers, synchronous clock generation and Power over Ethernet circuits, all manufactured by STMicroelectronics. In June 2024, a portable user terminal dubbed "Starlink Mini" was announced to be imminently available. The Mini supports 100 Mbit/s of download speed and will fit in a backpack. Initial rollout was in Latin America at a $200 price point.
Ground stations
SpaceX has made applications to the FCC for at least 32 ground stations in the United States, and has approvals for five of them (in five states). Until February 2023, Starlink used the Ka-band to connect with ground stations. With the launch of v2 Mini, frequencies were added in the 71–86 GHz W band (or E band waveguide) range. A typical ground station has nine 2.86 m (9.4 ft) antennas in a 400 m2 (4,306 sq ft) fenced-in area. According to their filing, SpaceX's ground stations would also be installed on-site at Google data centers worldwide.
Satellite revisions
MicroSat
MicroSat-1a and MicroSat-1b were originally slated to be launched into circular orbits at approximately 86.4° inclination, and to include panchromatic video imager cameras to film images of Earth and the satellite. The two satellites were meant to be launched together as secondary payloads on one of the Iridium NEXT flights, but they were instead used for ground-based tests.
Tintin
At the time of the June 2015 announcement, SpaceX had stated plans to launch the first two demonstration satellites in 2016, but the target date was subsequently moved out to 2018. SpaceX began flight testing their satellite technologies in 2018 with the launch of two test satellites. The two identical satellites were called MicroSat-2a and MicroSat-2b during development but were renamed Tintin A and Tintin B upon orbital deployment on February 22, 2018. The satellites were launched by a Falcon 9 rocket as piggyback payloads with the Paz satellite. Tintin A and B were inserted into a orbit. Per FCC filings, they were intended to raise themselves to a orbit, the operational altitude for Starlink LEO satellites per the earliest regulatory filings, but stayed close to their original orbits. SpaceX announced in November 2018 that it would like to operate an initial shell of about 1,600 satellites in the constellation at an orbital altitude similar to the orbits in which Tintin A and B stayed. The satellites orbit in a circular low Earth orbit at about altitude in a high-inclination orbit for a planned six to twelve-month duration. The satellites communicate with three testing ground stations in Washington State and California for short-term experiments of less than ten minutes duration, roughly daily.
v0.9 (test)
The 60 Starlink v0.9 satellites, launched in May 2019, had the following characteristics:
Flat-panel design with multiple high-throughput antennas and a single solar array
Mass:
Hall-effect thrusters using krypton as the reaction mass, for position adjustment on orbit, altitude maintenance, and deorbit
Star tracker navigation system for precision pointing
Able to use U.S. Department of Defense-provided debris data to autonomously avoid collision
Altitude of
95% of "all components of this design will quickly burn in Earth's atmosphere at the end of each satellite's lifecycle"
v1.0 (operational)
The Starlink v1.0 satellites, launched since November 2019, have the following additional characteristics:
100% of all components of this design will completely demise, or burn up, in Earth's atmosphere at the end of each satellite's life
Ka-band added
Mass:
One of them, numbered 1130 and called DarkSat, had its albedo reduced using a special coating, but the method was abandoned due to thermal issues and IR reflectivity. All satellites launched since the ninth launch in August 2020 have visors to block sunlight from reflecting off parts of the satellite, reducing its albedo further.
v1.5 (operational)
The Starlink v1.5 satellites, launched since January 24, 2021, have the following additional characteristics:
Lasers for inter-satellite communication
Mass: ~
Visors that blocked sunlight were removed from satellites launched from September 2021 onwards.
Starshield (operational)
These are satellite buses with two solar arrays, derived from Starlink v1.5 and v2.0, for military use; they can host classified government or military payloads.
v2 (initial deployment)
SpaceX was preparing for the production of Starlink v2 satellites by early 2021. According to Musk, Starlink v2 satellites will be "…an order of magnitude better than Starlink 1" in terms of communications bandwidth. SpaceX hoped to begin launching Starlink v2 in 2022. SpaceX had said publicly that the satellites of the second-generation (Gen2) constellation would need to be launched on Starship, as they are too large to fit inside a Falcon 9 fairing. However, in August 2022, SpaceX made formal regulatory filings with the FCC indicating that it would build satellites of the second-generation (Gen2) constellation in two different, but technically identical, form factors: one with the physical structures tailored to launching on Falcon 9, and one tailored for launching on Starship. Starlink v2 satellites are both larger and heavier than Starlink v1 satellites. Starlink second-generation satellites planned for launch on Starship have the following characteristics:
Lasers for inter-satellite communication
Mass: ~
Length: ~
Further improvements to reduce brightness, including the use of a dielectric mirror film
On 2,016 of the initially licensed 7,500 satellites, Gen2 Starlink satellites will also include an approximately 25-square-meter antenna that would allow T-Mobile subscribers to communicate directly via satellite through their regular mobile devices. It will be implemented via a German-licensed hosted payload developed together with SpaceX's subsidiary Swarm Technologies and T-Mobile. This hardware is supplemental to the existing Ku-band and Ka-band systems, and the inter-satellite laser links, that have been on the first-generation satellites launching as of mid-2022. In October 2022, SpaceX revealed the configuration of early v2s to be launched on Falcon 9.
In May 2023, SpaceX introduced two more form factors with direct-to-cellular (DtC) capability.
Bus F9-1: 303 kg (668 lbs) mass, having roughly the same dimensions and mass as v1.5 satellites. Deployed in Group 5 (see constellation design section).
Bus F9-2 (typically called "v2 mini"): up to 800 kg (1,764 lbs) mass, measuring by with a total array of , and carrying two solar arrays. It could offer around 3–4 times more usable bandwidth per satellite. These satellites are smaller than the full-size v2 design (and so can be launched on Falcon 9) and have four times the capacity to the ground station, increasing speed and capacity; this is due to a more efficient array of antennas and the use of radio frequencies in the W band (E band waveguide) range. They were deployed in Groups 6 and 7 (see constellation design section).
Bus F9-3: F9-2 with direct-to-cellular capability. The bus length increased to , and mass increased to 970 kg (2,152 lbs). Deployed in Group 7 (see constellation design section).
Bus Starship-1 (planned): 2,000 kg (4,409 lbs) mass, measuring by with a total array of .
Bus Starship-2 (planned): Starship-1 with direct-to-cellular capability. The bus length increased to .
The first six F9-3 satellites with direct-to-cellular (DtC) capability were launched on January 2, 2024, in Groups 7–9.
Launches
Between February 2018 and May 2024, SpaceX successfully launched over 6,000 Starlink satellites into orbit, including prototypes and satellites that later failed or were de-orbited before entering operational service. In March 2020, SpaceX reported producing six satellites per day. The deployment of the first 1,440 satellites was planned in 72 orbital planes of 20 satellites each, with a requested lower minimum elevation angle of beams to improve reception: 25° rather than the 40° of the other two orbital shells. SpaceX launched the first 60 satellites of the constellation in May 2019 into a orbit and expected up to six launches in 2019 at that time, with 720 satellites (12 × 60) for continuous coverage in 2020. Starlink satellites are also planned to launch on Starship, a SpaceX rocket under development with a much larger payload capability. The initial announcement included plans to launch 400 Starlink (version 1.0) satellites at a time. Current plans call for Starship to be the only launch vehicle used to launch the much larger Starlink version 2.0.
Constellation design and status
In March 2017, SpaceX filed plans with the FCC to field a second orbital shell of more than 7,500 "V-band satellites in non-geosynchronous orbits to provide communications services" in an electromagnetic spectrum that had not previously been heavily employed for commercial communications services. Called the "Very-low Earth orbit (VLEO) constellation", it was to have comprised 7,518 satellites that were to orbit at just altitude, while the smaller, originally planned group of 4,425 satellites would operate in the Ka- and Ku-bands and orbit at altitude. In November 2018, SpaceX received U.S. regulatory approval to deploy these 7,518 V-band broadband satellites, in addition to the 4,425 approved earlier, but by 2022 SpaceX had withdrawn plans to field the V-band system, superseding it with a more comprehensive design for a second-generation (Gen2) Starlink network. At the same time as the November 2018 approval, SpaceX also made new regulatory filings with the U.S.
FCC to request the ability to alter its previously granted license in order to operate approximately 1,600 of the 4,425 Ka-/Ku-band satellites approved earlier in a "new lower shell of the constellation" at a lower orbital altitude. These satellites would effectively operate in a third orbital shell, while the higher and lower orbits would be used only later, once a considerably larger deployment of satellites became possible in the later years of the deployment process. The FCC approved the request in April 2019, giving approval to place nearly 12,000 satellites in three orbital shells: initially approximately 1,600 satellites in the first shell, and subsequently approximately 2,800 Ku- and Ka-band spectrum satellites and approximately 7,500 V-band satellites in two further shells at different altitudes. In total, nearly 12,000 satellites were planned to be deployed, with (as of 2019) a possible later extension to 42,000. In February 2019, a sister company of SpaceX, SpaceX Services Incorporated, filed a request with the FCC to receive a license for the operation of up to a million fixed satellite Earth stations that would communicate with its non-geostationary orbit (NGSO) satellite Starlink system. In June 2019, SpaceX applied to the FCC for a license to test up to 270 ground terminals – 70 nationwide across the United States and 200 in Washington state at SpaceX employee homes – and aircraft-borne antenna operation from four distributed United States airfields, as well as five ground-to-ground test locations. On October 15, 2019, the United States FCC submitted filings to the International Telecommunication Union (ITU) on SpaceX's behalf to arrange spectrum for 30,000 additional Starlink satellites to supplement the 12,000 Starlink satellites already approved by the FCC. That month, Musk publicly tested the Starlink network by using an Internet connection routed through the network to post a first tweet to social media site Twitter.
First generation
A chart (not reproduced here) lists all v0.9 and first-generation satellites (Tintin A and Tintin B, as test satellites, are not included). Early designs had all phase 1 satellites at altitudes of around . SpaceX initially requested to lower the first 1,584 satellites, and in April 2020 requested to lower all other higher satellite orbits to about . In April 2020, SpaceX modified the architecture of the Starlink network, submitting an application to the FCC proposing to operate more satellites in lower orbits in the first phase than the FCC had previously authorized. The first phase will still include 1,440 satellites in the first shell, orbiting in planes inclined 53.0°, with no change to the first shell of the constellation launched largely in 2020. SpaceX also applied in the United States for use of the E-band in their constellation; the FCC approved the application in April 2021. On January 24, 2021, SpaceX launched a new group of 10 Starlink satellites, the first Starlink satellites in polar orbits. The launch surpassed ISRO's record for the most satellites launched in one mission (143 in total), and took the cumulative number of satellites deployed for Starlink to 1,025. On February 3, 2022, 49 satellites were launched as Starlink Group 4–7. A G2-rated geomagnetic storm occurred on February 4, causing the atmosphere to warm and density at the low deployment altitudes to increase. Predictions were that up to 40 of the 49 satellites might be lost due to drag.
After the event, 38 satellites reentered the atmosphere by February 12, while the remaining 11 were able to raise their orbits and avoid loss due to the storm. In March 2023, SpaceX submitted an application to add a V-band payload to the second-generation satellites rather than fly the phase 2 V-band satellites as originally planned and authorized; the request is subject to FCC approval.
Second generation
With uncertainty over when Starship would be able to launch the second-generation satellites, SpaceX modified the original v2 design into a smaller, more compact one named "v2 mini". This adjustment allowed Falcon 9 to carry these satellites, though fewer of them, into orbit. The first set of 21 of these satellites was launched on February 27, 2023. SpaceX committed to reducing debris by keeping the Starlink tension rods, which hold the v2 mini satellites together, attached to the Falcon 9 second stage; these tension rods were discarded into orbit when launching earlier versions of Starlink satellites. Observations confirm that these v2 mini satellites host two solar panels, like the Starship v2 satellites. SpaceX planned to test the deployment system for a new version of their Starlink satellites: on 16 January 2025, the Starship upper stage S33 was expected to deploy ten Starlink "simulators", which were expected to reenter over the Indian Ocean. Contact with S33 was lost shortly before its engines were scheduled to shut down.
Impact on astronomy
The planned large number of satellites has been met with criticism from the astronomical community because of concerns over light pollution. Astronomers argue that the satellites' brightness at both optical and radio wavelengths will severely impact scientific observations. While astronomers can schedule observations to avoid pointing where satellites currently orbit, it is "getting more difficult" as more satellites come online. The International Astronomical Union (IAU), National Radio Astronomy Observatory (NRAO), and Square Kilometre Array Organization (SKAO) have released official statements expressing concern on the matter. Studies have shown that "unintended electromagnetic radiation" from the satellites affects radio telescopes, creating distortions and excessive noise, and the IAU Centre for the Protection of the Dark and Quiet Sky from Satellite Constellation Interference was created to manage these new man-made obstacles to observation.
Visible optical interference
On November 20, 2019, the four-meter (13 ft) Blanco telescope of the Cerro Tololo Inter-American Observatory (CTIO) recorded strong signal loss and the appearance of 19 white lines on a DECam exposure. This image noise was correlated with the transit of a Starlink satellite train launched a week earlier. SpaceX representatives and Musk have claimed that the satellites will have minimal impact, being easily mitigated by pixel masking and image stacking. However, professional astronomers have disputed these claims based on initial observations of the Starlink v0.9 satellites from the first launch, shortly after their deployment from the launch vehicle. In later statements on Twitter, Musk stated that SpaceX would work on reducing the albedo of the satellites and would provide on-demand orientation adjustments for astronomical experiments, if necessary. One Starlink satellite (Starlink 1130 / DarkSat) was launched with an experimental coating to reduce its albedo. The reduction in g-band magnitude is 0.8 magnitudes (55%).
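The 0.8-magnitude DarkSat reduction quoted just above can be translated into a flux ratio with the standard astronomical magnitude relation. The minimal check below gives roughly a 50% reduction, broadly consistent with the ~55% figure cited in the text.

```python
# Converting the DarkSat result quoted above (a 0.8 magnitude reduction in the
# g band) into a brightness ratio using the standard magnitude-flux relation.
delta_m = 0.8
flux_ratio = 10 ** (-0.4 * delta_m)   # remaining flux relative to an uncoated satellite
print(f"DarkSat retains about {flux_ratio:.0%} of the original flux "
      f"(roughly a {1 - flux_ratio:.0%} reduction)")
```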
Despite these measures, astronomers found that the satellites were still too bright, thus making DarkSat essentially a "dead end". On April 17, 2020, SpaceX wrote in an FCC filing that it would test new methods of mitigating light pollution, and also provide access to satellite tracking data for astronomers to "better coordinate their observations with our satellites". On April 27, 2020, Musk announced that the company would introduce a new sunshade designed to reduce the brightness of Starlink satellites. , over 200 Starlink satellites had a sunshade. An October 2020 analysis found them to be only marginally fainter than DarkSat. A January 2021 study pinned the brightness at 31% of the original design. According to a May 2021 study, "A large number of fast-moving transmitting stations (i.e. satellites) will cause further interference. New analysis methods could mitigate some of these effects, but data loss is inevitable, increasing the time needed for each study and limiting the overall amount of science done". In February 2022, the International Astronomical Union (IAU) established a center to help astronomers deal with the adverse effects of satellite constellations such as Starlink. Work will include the development of software tools for astronomers, advancement of national and international policies, community outreach and work with industry on relevant technologies. In June 2022, the IAU released a website for astronomers to deal with some adverse effects via satellite tracking. This will enable astronomers to be able to track satellites to be able to avoid and time them for minimal impact on current work. The first batch of Generation 2 spacecraft was launched in February 2023. These satellites are referred to as "Mini" because they are smaller than the full-sized Gen 2 spacecraft that will come later. SpaceX uses brightness mitigation for Gen 2 that includes a mirror-like surface which reflects sunlight back into space and they orient the solar panels so that observers on the ground only see the dark sides. The Minis are fainter than Gen 1 spacecraft despite being four times as large according to an observational study published in June 2023. They are 44% as bright as VisorSats, 24% compared to V1.5 and 19% compared to the original design which had no brightness mitigation. Minis appear 12 times brighter before they reach the target orbit. Radio interference In October 2023, research published in "Astronomy and Astrophysics Letters" had reportedly found that Starlink satellites were "leaking radio signals" finding that at the site of the future Square Kilometer Array, radio emissions from Starlink satellites were brighter than any natural source in the sky. The paper concluded that these emissions will be "detrimental to key SKA science goals without future mitigation". Increased risk of satellite collision The large number of satellites employed by Starlink may create the long-term danger of space debris resulting from placing thousands of satellites in orbit and the risk of causing a satellite collision, potentially triggering a cascade phenomenon known as Kessler syndrome. SpaceX has said that most of the satellites are launched at a lower altitude, and failed satellites are expected to deorbit within five years without propulsion. Early in the program, a near-miss occurred when SpaceX did not move a satellite that had a 1 in 1,000 chance of colliding with a European one, ten times higher than the ESA's threshold for avoidance maneuvers. 
SpaceX subsequently fixed an issue with its paging system that had disrupted emails between the ESA and SpaceX. The ESA said it plans to invest in technologies to automate satellite collision avoidance maneuvers. In 2021, Chinese authorities lodged a complaint with the United Nations, saying their space station had performed evasive maneuvers that year to avoid Starlink satellites. In the document, Chinese delegates said that the continuously maneuvering Starlink satellites posed a risk of collision, and two close encounters with the satellites in July and October constituted dangers to the life or health of astronauts aboard the Chinese Tiangong space station. All these reported issues, plus current plans for the extension of the constellation, motivated a formal letter from the National Telecommunications and Information Administration (NTIA) on behalf of NASA and the NSF, submitted to the FCC on February 8, 2022, warning about the potential impact on low Earth orbit, increased collision risk, impact on science missions, rocket launches, International Space Station and radio frequencies. SpaceX satellites will maneuver if the probability of collision is greater than (1 in 100,000 chance of collision), as opposed to the industry standard of (1 in 10,000 chance of collision). SpaceX has budgeted sufficient propellant to accommodate approximately 5,000 propulsive maneuvers over the life of a Gen2 satellite, including a budget of approximately 350 collision avoidance maneuvers per satellite over that time period. As of May 2022, the average Starlink satellite had conducted fewer than three collision-avoidance maneuvers over the 6 preceding months. Over 1,700 out of 6,873 maneuvers were performed to avoid Kosmos 1408 debris. Competition and market effects In addition to the OneWeb constellation, announced nearly concurrently with the SpaceX constellation, a 2015 proposal from Samsung outlined a 4,600-satellite constellation orbiting at that could provide a zettabyte per month capacity worldwide, an equivalent of 200 gigabytes per month for 5 billion users of Internet data, but by 2020, no more public information had been released about the Samsung constellation. Telesat announced a smaller 117 satellite constellation in 2015 with plans to deliver initial service in 2021. Amazon announced a large broadband internet satellite constellation in April 2019, planning to launch 3,236 satellites in the next decade in what the company calls "Project Kuiper", a satellite constellation that will work in concert with Amazon's previously announced large network of twelve satellite ground station facilities (the "AWS ground station unit") announced in November 2018. In February 2015, financial analysts questioned established geosynchronous orbit communications satellite fleet operators as to how they intended to respond to the competitive threat of SpaceX and OneWeb LEO communication satellites. In October 2015, SpaceX President Gwynne Shotwell indicated that while development continues, the business case for the long-term rollout of an operational satellite network was still in an early phase. By October 2017, the expectation for large increases in satellite network capacity from emerging lower-altitude broadband constellations caused market players to cancel some planned investments in new geosynchronous orbit broadband communications satellites. 
SpaceX was challenged regarding Starlink in February 2021 when the National Rural Electric Cooperative Association (NRECA), a political interest group representing traditional rural internet service providers, urged the U.S. Federal Communications Commission (FCC) to "actively, and aggressively, and thoughtfully vet" the subsidy applications of SpaceX and other broadband providers. At the time, SpaceX had provisionally won $886 million for a commitment to provide service to approximately 643,000 locations in 35 states as part of the Rural Digital Opportunity Fund (RDOF). The NRECA's criticisms included that the funding allocated to Starlink would cover service to locations that are not rural, such as Harlem and terminals at Newark Liberty International Airport and Miami International Airport, and that SpaceX was planning to build the infrastructure and serve any customers who requested service with or without the FCC subsidy. Additionally, Jim Matheson, chief executive officer of the NRECA, voiced concern about technologies that had not yet been proven to meet the high speeds required for the award category; Starlink was specifically criticized for still being in beta testing and for relying on unproven technology. While Starlink is deployed worldwide, it has encountered trademark conflicts in some countries, such as Mexico and Ukraine.
Similar or competitive systems
OneWeb satellite constellation – a satellite constellation project that began operational deployment of satellites in 2020.
China national satellite internet project – a planned satellite internet offering for the Chinese market.
Kuiper Systems – a planned 3,276-satellite LEO internet constellation by an Amazon subsidiary.
Hughes Network Systems – a broadband satellite provider offering fixed, cellular backhaul, and airborne antennas.
Viasat, Inc. – a broadband satellite provider offering fixed, ground mobile, and airborne antennas.
O3b and O3b mPOWER – medium Earth orbit constellations that provide maritime, aviation and military connectivity, and cellular backhaul; coverage between latitudes 50°N and 50°S.
See also
Kuiper Systems – Amazon's large internet satellite constellation
AST SpaceMobile – a satellite-to-mobile-phone satellite constellation working with large mobile network operators such as Vodafone, AT&T, Orange, Rakuten, Telstra, Telefónica, etc., with the objective of providing broadband internet coverage to existing unmodified mobile phones
with the objective to provide broadband internet coverage to existing unmodified mobile phones Orbcomm – an operational constellation used to provide global asset monitoring and messaging services from its constellation of 29 LEO communications satellites orbiting at 775 km (480 miles) Globalstar – an operational low Earth orbit (LEO) satellite constellation for satellite phone and low-speed data communications, covering most of the world's landmass Iridium – an operational constellation of 66 cross-linked satellites in a polar orbit, used to provide satellite phone and low-speed data services over the entire surface of Earth Inmarsat – a satellite-based nautical distress network for transmitting telex, fax, and other text messages since 1979 – typically used in nautical scenarios and disaster scenarios Lynk Global – a satellite-to-mobile-phone satellite constellation with the objective to provide coverage to traditional low-cost mobile devices Teledesic – a former (1990s) venture to accomplish broadband satellite internet services Project Loon – former concept to provide internet access via balloons in the stratosphere Satellite Internet Satellite internet constellation Satellite Flare References External links Articles containing video clips Communications satellite constellations Communications satellites in low Earth orbit Communications satellites of the United States Communications satellite operators High throughput satellites Internet service providers Internet service providers of the United States Satellite Internet access SpaceX satellites Spacecraft launched in 2019 Spacecraft launched in 2020 Spacecraft launched in 2021 Spacecraft launched in 2022 Spacecraft launched in 2023 Spacecraft launched in 2024 Wireless networking Telecommunications companies of the United States Technology companies of the United States Space technology
Starlink
[ "Astronomy", "Technology", "Engineering" ]
13,383
[ "Space technology", "Wireless networking", "Computer networks engineering", "Outer space" ]
45,116,986
https://en.wikipedia.org/wiki/Polynomial%20Wigner%E2%80%93Ville%20distribution
In signal processing, the polynomial Wigner–Ville distribution is a quasiprobability distribution that generalizes the Wigner distribution function. It was proposed by Boualem Boashash and Peter O'Shea in 1994. Introduction Many signals in nature and in engineering applications can be modeled as , where is a polynomial phase and . For example, it is important to detect signals of an arbitrary high-order polynomial phase. However, the conventional Wigner–Ville distribution has the limitation of being based on second-order statistics. Hence, the polynomial Wigner–Ville distribution was proposed as a generalized form of the conventional Wigner–Ville distribution, which is able to deal with signals with nonlinear phase. Definition The polynomial Wigner–Ville distribution is defined as where denotes the Fourier transform with respect to , and is the polynomial kernel given by where is the input signal and is an even number. The above expression for the kernel may be rewritten in symmetric form as The discrete-time version of the polynomial Wigner–Ville distribution is given by the discrete Fourier transform of where and is the sampling frequency. The conventional Wigner–Ville distribution is a special case of the polynomial Wigner–Ville distribution with Example One of the simplest generalizations of the usual Wigner–Ville distribution kernel can be achieved by taking . The set of coefficients and must be found to completely specify the new kernel. For example, we set The resulting discrete-time kernel is then given by Design of a Practical Polynomial Kernel Given a signal , where is a polynomial function, its instantaneous frequency (IF) is . For a practical polynomial kernel , the set of coefficients and should be chosen properly such that When , When Applications Nonlinear FM signals are common both in nature and in engineering applications. For example, the sonar systems of some bats use hyperbolic FM and quadratic FM signals for echo location. In radar, certain pulse-compression schemes employ linear FM and quadratic signals. The Wigner–Ville distribution has optimal concentration in the time-frequency plane for linear frequency modulated signals. However, for nonlinear frequency modulated signals, optimal concentration is not obtained, and smeared spectral representations result. The polynomial Wigner–Ville distribution can be designed to cope with such problems. References “Polynomial Wigner–Ville distributions and time-varying higher spectra,” in Proc. Time-Freq. Time-Scale Anal., Victoria, B.C., Canada, Oct. 1992, pp. 31–34. Quantum mechanics Continuous distributions Concepts in physics Mathematical physics Exotic probabilities Polynomials
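As a hedged sketch of the definition discussed above: the notation below is assumed from the broader literature on polynomial Wigner–Ville distributions (following Boashash and O'Shea) rather than reproduced from this article, and the article's own example coefficients are not reconstructed here. The order-q polynomial Wigner–Ville distribution is commonly written along the lines of

    W_z^{q}(t,f) = \int_{-\infty}^{\infty} K_z^{q}(t,\tau)\, e^{-j 2\pi f \tau}\, d\tau ,
    \qquad
    K_z^{q}(t,\tau) = \prod_{k=-q/2}^{q/2} \left[ z\!\left(t + c_{k}\,\tau\right) \right]^{b_{k}} ,

where a negative exponent b_k denotes complex conjugation of the corresponding factor. Under this assumed notation, choosing q = 2 with b_1 = 1, b_{-1} = -1, c_1 = 1/2 and c_{-1} = -1/2 recovers the usual Wigner–Ville kernel z(t + \tau/2)\, z^{*}(t - \tau/2).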
Polynomial Wigner–Ville distribution
[ "Physics", "Mathematics" ]
515
[ "Applied mathematics", "Polynomials", "Theoretical physics", "Quantum mechanics", "nan", "Mathematical physics", "Algebra" ]
28,239,489
https://en.wikipedia.org/wiki/Lime%20softening
Lime softening (also known as lime buttering, lime-soda treatment, or Clark's process) is a type of water treatment used for water softening, which uses the addition of limewater (calcium hydroxide) to remove hardness (deposits of calcium and magnesium salts) by precipitation. The process is also effective at removing a variety of microorganisms and dissolved organic matter by flocculation. History Lime softening was first used in 1841 to treat Thames River water. The process expanded in use as the other benefits of the process were discovered. Lime softening greatly expanded in use during the early 1900s as industrial water use expanded. Lime softening provides soft water that can, in some cases, be used more effectively for heat transfer and various other industrial uses. Chemistry As lime in the form of limewater is added to raw water, the pH is raised and the equilibrium of carbonate species in the water is shifted. Dissolved carbon dioxide (CO2) is changed into bicarbonate (HCO3−) and then carbonate (CO32−). This action causes calcium carbonate to precipitate due to exceeding the solubility product. Additionally, magnesium can be precipitated as magnesium hydroxide in a double displacement reaction. In the process both the calcium (and to an extent magnesium) in the raw water as well as the calcium added with the lime are precipitated. This is in contrast to ion exchange softening where sodium is exchanged for calcium and magnesium ions. In lime softening, there is a substantial reduction in total dissolved solids (TDS) whereas in ion exchange softening (sometimes referred to as zeolite softening), there is no significant change in the level of TDS. Lime softening can also be used to remove iron, manganese, radium and arsenic from water. Future uses Lime softening is now often combined with newer membrane processes to reduce waste streams. Lime softening can be applied to the concentrate (or reject stream) of membrane processes, thereby providing a stream of substantially reduced hardness (and thus TDS), that may be used in the finished stream. Also, in cases with very hard source water (often the case in Midwestern USA ethanol production plants), lime softening can be used to pre-treat the membrane feed water. Waste products Lime softening produces large volumes of a mixture of calcium carbonate and magnesium hydroxide in a very finely divided white precipitate which may also contain some organic matter flocculated out of the raw water. Processing or disposal of this sludge material may be an additional cost to the process. Drying and re-calcining the waste allows the lime to be almost fully re-cycled, but drying and re-calcining is more expensive than producing new lime from limestone. References Water treatment
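As a rough illustration of the precipitation chemistry described in the Chemistry section above, the textbook lime softening reactions are commonly written as follows (a standard sketch, not taken from this article):

    CO2 + Ca(OH)2 → CaCO3↓ + H2O
    Ca(HCO3)2 + Ca(OH)2 → 2 CaCO3↓ + 2 H2O
    Mg(HCO3)2 + 2 Ca(OH)2 → 2 CaCO3↓ + Mg(OH)2↓ + 2 H2O

The first two reactions account for the removal of carbonate (temporary) calcium hardness, while the third shows magnesium being precipitated as magnesium hydroxide, consistent with the double displacement reaction mentioned above.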
Lime softening
[ "Chemistry", "Engineering", "Environmental_science" ]
573
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
32,023,901
https://en.wikipedia.org/wiki/Rectified%2024-cell%20honeycomb
In four-dimensional Euclidean geometry, the rectified 24-cell honeycomb is a uniform space-filling honeycomb. It is constructed by a rectification of the regular 24-cell honeycomb, containing tesseract and rectified 24-cell cells. Alternate names Rectified icositetrachoric tetracomb Rectified icositetrachoric honeycomb Cantellated 16-cell honeycomb Bicantellated tesseractic honeycomb Symmetry constructions There are five different symmetry constructions of this tessellation. Each symmetry can be represented by different arrangements of colored rectified 24-cell and tesseract facets. The tetrahedral prism vertex figure contains 4 rectified 24-cells capped by two opposite tesseracts. See also Regular and uniform honeycombs in 4-space: Tesseractic honeycomb 16-cell honeycomb 24-cell honeycomb Truncated 24-cell honeycomb Snub 24-cell honeycomb 5-cell honeycomb Truncated 5-cell honeycomb Omnitruncated 5-cell honeycomb References Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, p. 296, Table II: Regular honeycombs Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] George Olshevsky, Uniform Panoploid Tetracombs, Manuscript (2006) (Complete list of 11 convex uniform tilings, 28 convex uniform honeycombs, and 143 convex uniform tetracombs) Model 93 , o3o3o4x3o, o4x3o3x4o - ricot - O93 5-polytopes Honeycombs (geometry)
Rectified 24-cell honeycomb
[ "Physics", "Chemistry", "Materials_science" ]
429
[ "Tessellation", "Crystallography", "Honeycombs (geometry)", "Symmetry" ]
32,030,110
https://en.wikipedia.org/wiki/Sampling%20valve
A sampling valve is a type of valve used in process industries that allows taking a representative portion of a fluid (gases, liquids, fluidized solids, or slurries) to test (e.g. by physical measurements, chemical analysis, microbiological examination), typically for the purposes of identification, quality control, or regulatory assessment. It is a valve used for sampling. The sampling valve allows the operator to extract a sample of the product from the production line or reactor and safely store it for transportation to the laboratory where it will be analysed or to the archive room where it can be retrieved for further use. In chemical plants, a sample can be taken during the process to ensure that the output meets specifications (or that the quality is acceptable), before shipping the chemical good or before accepting the chemical product. Sampling problem When sampling chemical products, it is of utmost importance to select representative material to analyse. The sample must be representative of the lot, and the choice of samples is critical to producing a valid analysis. The statistics of the sampling process are also important. In order for a sample to be representative, it must not contain remains of previous batches that might have stayed in the valve's "dead space". For example, a ball valve commonly found in tap valves consists of a ball that controls the flow of the liquid. Once the valve is closed, liquid stays in the ball and the next time the valve is opened, that liquid flows first. If a sample is taken using a ball valve, the valve has to be flushed before the sample is taken. This raises two additional problems: If the liquid is a chemical substance that solidifies at room temperature such as polymer or other plastics, it can block the valve and prevent a second sample from being taken. The residue of the chemical substance that was flushed needs to be recycled or destroyed at a cost both for the chemical plant and the environment. When sampling hazardous chemicals or deadly substances, it is important that the operator is not exposed to toxic fumes or vapours. These can fatally harm people, and should they be released into the atmosphere, they will pollute the environment. Choosing a sample valve When choosing a sampling valve, different factors have to be considered: The material, pressure and temperature rating, gaskets, size and position of the valve according to the pipe specification of the plant. The handle or actuator type according to the position (reach) of the valve and according to the sampled chemical (e.g. whether or not the product contains solid particles) The receptacle to store the sample for transportation (bottles, container, bag, piston injector or syringe) See also Other types of valves Sampling References Valves
Sampling valve
[ "Physics", "Chemistry" ]
555
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
32,030,747
https://en.wikipedia.org/wiki/Proceedings%20of%20the%20Institution%20of%20Mechanical%20Engineers%2C%20Part%20J
The Journal of Engineering Tribology, Part J of the Proceedings of the Institution of Mechanical Engineers (IMechE), is a peer-reviewed academic journal that publishes research on engineering science associated with tribology and its applications. The journal was first published in 1994 and is published by SAGE Publications on behalf of IMechE. Abstracting and indexing The Journal of Engineering Tribology is abstracted and indexed in Scopus and the Science Citation Index. According to the Journal Citation Reports, its 2013 impact factor is 0.631, ranking it 81st out of 126 journals in the category "Engineering, Mechanical". References External links Engineering journals English-language journals Institution of Mechanical Engineers academic journals Monthly journals Academic journals established in 1994 SAGE Publishing academic journals Tribology
Proceedings of the Institution of Mechanical Engineers, Part J
[ "Chemistry", "Materials_science", "Engineering" ]
156
[ "Tribology", "Mechanical engineering", "Materials science", "Surface science" ]
32,031,601
https://en.wikipedia.org/wiki/Data%20generating%20process
In statistics and in empirical sciences, a data generating process is a process in the real world that "generates" the data one is interested in. This process encompasses the underlying mechanisms, factors, and randomness that contribute to the production of observed data. Usually, scholars do not know the real data generating model and instead rely on assumptions, approximations, or inferred models to analyze and interpret the observed data effectively. However, it is assumed that those real models have observable consequences. Those consequences are the distributions of the data in the population. Those distributions or models can be represented via mathematical functions. There are many such distribution functions, for example the normal distribution, the Bernoulli distribution, and the Poisson distribution. References https://stats.stackexchange.com/questions/443320/what-does-a-data-generating-process-dgp-actually-mean Probability distributions
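As a rough illustration of the idea, the short Python sketch below simulates a simple assumed data generating process (a linear relationship with Gaussian noise) and then fits a model to the observed data. The particular process, parameter values, and use of NumPy are illustrative assumptions, not part of the original article.

    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed data generating process: y = 2.0 + 0.5 * x + noise,
    # with noise drawn from a normal distribution (illustrative choice).
    n = 1000
    x = rng.uniform(0.0, 10.0, size=n)
    noise = rng.normal(loc=0.0, scale=1.0, size=n)
    y = 2.0 + 0.5 * x + noise

    # An analyst who does not know the true process can only fit a model
    # to the observed (x, y) pairs and hope it approximates the real DGP.
    slope, intercept = np.polyfit(x, y, deg=1)
    print(f"estimated intercept={intercept:.2f}, slope={slope:.2f}")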
Data generating process
[ "Mathematics" ]
189
[ "Functions and mappings", "Mathematical relations", "Mathematical objects", "Probability distributions" ]
33,605,056
https://en.wikipedia.org/wiki/Bismuth%20selenide
Bismuth selenide () is a gray compound of bismuth and selenium also known as bismuth(III) selenide. Properties Bismuth selenide is a semiconductor and a thermoelectric material. While stoichiometric bismuth selenide should be a semiconductor with a gap of 0.3 eV, naturally occurring selenium vacancies act as electron donors, so Bi2Se3 is intrinsically n-type. Bismuth selenide has a topologically insulating ground-state. Topologically protected Dirac cone surface states have been observed in Bismuth selenide and its insulating derivatives leading to intrinsic topological insulators, which later became the subject of world-wide scientific research. Bismuth selenide is a van der Waals material consisting of covalently bound five-atom layers (quintuple layers) which are held together by van der Waals interactions and spin-orbit coupling effects. Although the (0001) surface is chemically inert (mostly due to the inert-pair effect of Bi), there are metallic surface states, protected by the non-trivial topology of the bulk. For this reason, the Bi2Se3 surface is an interesting candidate for van der Waals epitaxy and subject of scientific research. For instance, different phases of antimony layers can be grown on Bi2Se3, by means of which topological pn-junctions can be realised. More intriguingly, Sb layers undergo topological phase transitions when attached to the Bi2Se3 surface and thus inherit the non-trivial topological properties of the Bi2Se3 substrate. Production Although bismuth selenide occurs naturally (as the mineral guanajuatite) at the Santa Catarina Mine in Guanajuato, Mexico as well as some sites in the United States and Europe, such deposits are rare and contain a significant level of sulfur atoms as an impurity. For this reason, most bismuth selenide used in research into potential commercial applications is synthesized. Commercially-produced samples are available for use in research, but the concentration of selenium vacancies is heavily dependent upon growth conditions, and so bismuth selenide used for research is often synthesized in the laboratory. A stoichiometric mixture of elemental bismuth and selenium, when heated above the melting points of these elements in the absence of air, will become a liquid that freezes to crystalline . Large single crystals of bismuth selenide can be prepared by the Bridgman–Stockbarger method. See also Thermoelectric materials Thermoelectric effect Topological insulators References Bismuth compounds Selenides Semiconductor materials
Bismuth selenide
[ "Chemistry" ]
569
[ "Semiconductor materials" ]
33,605,847
https://en.wikipedia.org/wiki/Vieussens%20valve%20of%20the%20coronary%20sinus
The Vieussens valve of the coronary sinus is a prominent valve at the end of the great cardiac vein, marking the commencement of the coronary sinus. It is often a flimsy valve composed of one to three leaflets. It is present in 80-90% of individuals. It serves as an anatomical landmark. It is clinically important because it is often an obstruction to catheters in 20% of patients. References Valves Anatomy
Vieussens valve of the coronary sinus
[ "Physics", "Chemistry", "Biology" ]
93
[ "Physical systems", "Valves", "Hydraulics", "Piping", "Anatomy" ]
33,607,522
https://en.wikipedia.org/wiki/Ehrenfest%E2%80%93Tolman%20effect
In general relativity, the Ehrenfest–Tolman effect (also known as the Tolman–Ehrenfest effect), created by Richard C. Tolman and Paul Ehrenfest, argues that temperature is not constant in space at thermal equilibrium, but varies with the spacetime curvature. Specifically, it depends on the spacetime metric. In a stationary spacetime with timelike Killing vector field ξ, the temperature T satisfies instead the Tolman–Ehrenfest relation: T ‖ξ‖ = constant, where ‖ξ‖ is the norm of the timelike Killing vector field. This relationship leads to the concept of thermal time which has been considered as a possible basis for a fully general-relativistic thermodynamics. It has been shown that the Tolman–Ehrenfest effect can be derived by applying the equivalence principle to the concept that temperature is the rate of thermal time with respect to proper time. References General relativity Quantum mechanics
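As a hedged illustration (a standard textbook consequence, not text from the original article): in a static spacetime with metric component g_tt, the relation above can be written

    T(x)\,\sqrt{-g_{tt}(x)} = \text{constant},

so that, for example, outside a spherical mass M (Schwarzschild exterior),

    T(r) = \frac{T_{\infty}}{\sqrt{1 - \dfrac{2GM}{r c^{2}}}} ,

meaning the locally measured equilibrium temperature is slightly higher deeper in the gravitational potential, consistent with the statement above that temperature varies with the metric.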
Ehrenfest–Tolman effect
[ "Physics" ]
186
[ "Theoretical physics", "Quantum mechanics", "General relativity", "Relativity stubs", "Theory of relativity" ]
33,611,862
https://en.wikipedia.org/wiki/Eomesodermin
Eomesodermin, also known as T-box brain protein 2 (Tbr2), is a protein that in humans is encoded by the EOMES gene. The Eomesodermin/Tbr2 gene, EOMES, encodes a member of a conserved protein family that shares a common DNA-binding domain, the T-box. T-box genes encode transcription factors, which control gene expression, involved in the regulation of developmental processes. Eomesodermin/Tbr2 itself controls regulation of radial glia, as well as other related cells. Eomesodermin/Tbr2 has also been found to have a role in immune response, and there exists some loose evidence for its connections in other systems. Nervous system development Neurogenesis Eomesodermin/Tbr2 is expressed highly in the intermediate progenitor stage of the developing neuron. Neurons, the primary functional cells of the brain, are developed from radial glia cells. This process of cells developing into other types of cells is called differentiation. Radial glia are present in the ventricular zone of the brain, which lies on the lateral walls of the lateral ventricles. Radial glia divide and migrate towards the surface of the brain, the cerebral cortex. During this migration, there are three stages of cellular development: radial glia, intermediate progenitors, and postmitotic projection neurons. Radial glia express Pax6, while intermediate progenitor cells express Eomesodermin/Tbr2, and postmitotic projection neurons express Tbr1. This process, known as neurogenesis, occurs mainly in the developing cortex before the organism has fully developed, and thus Eomesodermin/Tbr2 has been implicated in neurodevelopment. Tbr2 has been observed in a transcription factor cascade to enable the development of glutamatergic neurons. Pax6, as expressed by radial glia cells, activates the transcription of Neurogenin-2, which then activates the generation of intermediate progenitor cells (IPC) expressing Tbr2. These cells are localized within the subventricular zone. The IPCs then undergo symmetric division to produce NeuroD-expressing cells that can differentiate into TBR1 neurons. Similar mechanisms have been observed in both embryonic and adult neurogenesis. Tbr2 inactivation has also been tied to deficiencies in cortical neurogenesis, further suggesting the importance of the cascade in activating and maintaining neuron production. It has been found experimentally through knockout studies that mice lacking Eomesodermin/Tbr2 during early development have a reduced number of actively dividing cells, called proliferating cells, in the subventricular zone. This may lead to the microcephaly (small head size due to improper brain development) seen in Eomesodermin/Tbr2 deficient mice. Eomesodermin/Tbr2 lacking mice have smaller upper cortical layers and a smaller subventricular zone in the brain, and lack a distinct mitral cell (neurons involved in the olfactory pathway) layer, with mitral cells instead being scattered about. Phenotypically, Eomesodermin/Tbr2 lacking mice show heightened aggression and perform infanticide. Eomesodermin/Tbr2 lacking mice also seem to have problems with long axon connections. Axons are projections from neurons that connect with other cells in what is called a synapse and send neurotransmitters. In this way, they can communicate with other cells, and form the processing that allows our brains to function. 
Eomesodermin/Tbr2 lacking mice seem to lack fully formed commissural fibers, which connect the two hemispheres of the brain, and lack the corpus callosum, another region of the brain involved in hemisphere connections. Role in adult development There are locations within the brain that have been discovered to perform neurogenesis into adulthood, including the ventricular zone. The hippocampus, which is involved in memory formation, shows decreased neurogenesis when Eomesodermin/Tbr2 is removed. It was also found that Eomesodermin/Tbr2 functions by reducing amounts of Sox2, which is associated with radial glia. Another study found that mice without Eomesodermin/Tbr2 lacked long term memory formation, which may relate to Eomesodermin/Tbr2's effects on the hippocampus. Cardiac development Early in development, Eomesodermin/Tbr2 controls early differentiation of the cardiac mesoderm. Lack of Eomesodermin/Tbr2 appears to be correlated with failure to differentiate into cardiomyocytes. Eomesodermin/Tbr2 controls the expression of cardiac specific genes Mesp1, Myl7, Myl2, Myocardin, Nkx2.5 and Mef2c. Immune response Eomesodermin/Tbr2 is highly expressed in CD8+ T cells, but not CD4+ T cells. CD4+ T cells are the helper T cells which detect foreign particles in the body, and call CD8+ T cells to facilitate death of the foreign particles. Eomesodermin/Tbr2 was found to play a role in the anti cancer properties of CD8+ T cells. Lack of Eomesodermin/Tbr2, alongside T bet, another T box protein, caused CD8+ T cells to not penetrate tumors so they could perform their anti cancer duties. Eomesodermin/Tbr2 prevents CD8+ cells from differentiating into other types of T cells, but does not play a role in the production of CD8+ T cells itself. See also T-box family TBR1 References Further reading Transcription factors
Eomesodermin
[ "Chemistry", "Biology" ]
1,225
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
22,032,029
https://en.wikipedia.org/wiki/Magnetic%20separation
Magnetic separation is the process of separating components of mixtures by using a magnet to attract magnetic substances. The process that is used for magnetic separation separates non-magnetic substances from those which are magnetic. This technique is useful for the select few minerals which are ferromagnetic (iron-, nickel-, and cobalt-containing minerals) and paramagnetic. Most metals, including gold, silver and aluminum, are nonmagnetic. A large diversity of mechanical means are used to separate magnetic materials. During magnetic separation, magnets are situated inside two separator drums which bear liquids. Due to the magnets, magnetic particles are carried along by the movement of the drums. This can create a magnetic concentrate (e.g. an ore concentrate). History Michael Faraday discovered that when a substance is put in a magnetic environment, the intensity of the environment is modified by it. With this information, he discovered that different materials can be separated by their magnetic properties. The table below shows the common ferromagnetic and paramagnetic minerals as well as the field intensity that is required in order to separate those minerals. In the 1860s, magnetic separation started to become commercialized. It was used to separate iron from brass. After the 1880s, ferromagnetic materials started to be magnetically separated. Among others, Thomas Edison tried to commercialize the magnetic enrichment of poor iron ores but failed. In the 1900s, high intensity magnetic separation was inaugurated which allowed the separation of paramagnetic materials. After the Second World War, systems that were the most common were electromagnets. The technique was used in scrap yards. Magnetic separation was developed again in the late 1970s with new technologies being inaugurated. The new forms of magnetic separation included magnetic pulleys, overhead magnets and magnetic drums. In mines where wolframite was mixed with cassiterite, such as South Crofty and East Pool mine in Cornwall or with bismuth such as at the Shepherd and Murphy mine in Moina, Tasmania, magnetic separation is used to separate the ores. At these mines, a device called a Wetherill's Magnetic Separator (invented by John Price Wetherill, 1844–1906) was used. In this machine, the raw ore, after calcination, was fed onto a conveyor belt which passed underneath two pairs of electromagnets under which further belts ran at right angles to the feed belt. The first pair of electromagnets was weakly magnetized and served to draw off any iron ore present. The second pair were strongly magnetized and attracted the wolframite, which is weakly magnetic. These machines were capable of treating 10 tons of ore a day. Common applications Magnetic separation can also be used in electromagnetic cranes that separate magnetic material from scraps and unwanted substances. This explains its use for shipping equipment and waste management. Unwanted metals can be removed from goods with this technique. It keeps all materials pure. Recycling centres use magnetic separation often to separate components from recycling, isolate metals, and purify ores. Overhead magnets, magnetic pulleys, and magnetic drums were the methods used in the recycling industry. Magnetic separation is also useful in mining iron as it is attracted to a magnet. Another application, not widely known but very important, is to use magnets in process industries to remove metal contaminants from product streams. 
This is of particular importance in the food and pharmaceutical industries. Magnetic separation is also used in situations where pollution needs to be controlled, in chemical processing, as well as during the beneficiation of nonferrous low-grade ores. Magnetic separation is also used in the following industries: dairy, grain and milling, plastics, food, chemical, oils, textile, and more. N52 magnets are used in magnetic separation for food processing, recycling, and manufacturing. They improve food safety, enhance recycling quality, and protect equipment in manufacturing, ensuring efficiency and high standards across these industries. Magnetic cell separation Magnetic cell separation is on the rise. It is currently being used in clinical therapies, more specifically in cancer and hereditary disease research. Magnetic cell separation took a turn when Zborowski, an Immunomagnetic Cell Separation (IMCS) pioneer, analyzed commercial magnetic cell separation. Zborowski uncovered crucial revelations that were then used, and are still used today, in the human understanding of cell biology. Today, the manufacture of therapeutic products concerning cancers and genetic diseases is being advanced by these discoveries. In microbiology Magnetic separation techniques are also used in microbiology. In this case, binding molecules and antibodies are used in order to isolate specific viable organisms, nucleic acids, or antigens. This technology helps to isolate bacterial species and to identify and diagnose genes targeting certain organisms. When magnetic separation techniques are combined with PCR (polymerase chain reaction), the results increase in sensitivity and specificity. Low-field magnetic separation Low-field magnetic separation is often used in environmental contexts such as water purification and the separation of complex mixtures. Low magnetic field gradients are field gradients that are smaller than one hundred tesla per meter. Monodisperse magnetite (Fe3O4) nanocrystals (NCs) are used for this technique. Magnetic filters are fitted on the boiler's pipework to collect magnetite from the circulating water before it has a chance to build up and lower the efficiency of the heating system. The water circulating around the heating system picks up bits of sludge (or magnetite) which can build up. The magnetic filter attracts all these bits of debris with a strong magnet as the water flows around it, preventing a build-up of sludge in the pipework or in the boiler. Weak magnetic separation Weak magnetic separation is used to create cleaner iron-rich products that can be reused. These products have low levels of impurities and a high iron load. This technique is used as a recycling technology. It is coupled with steelmaking slag fines as well as a selection of particle size screening. Magnetic Separation Force Calculations It can be shown that magnetic force per unit volume on a permeable particle with relative permeability μpr is proportional to the spatial gradient of the square of the magnetic flux density. The formula can be used in magnetic finite element analysis software to compute force densities on a wide variety of practical examples, obtaining results agreeing with Oberteuffer's paper. References Solid-solid separation Magnetic devices Magnetism
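As a hedged sketch of the force calculation described above (a standard magnetophoresis textbook form; the symbols are assumptions, not notation from the original article), the magnetic force per unit volume on a small, weakly magnetic (linear) particle of susceptibility χ_p suspended in a medium of susceptibility χ_m can be written

    f = \frac{\chi_{p} - \chi_{m}}{2\mu_{0}}\,\nabla\!\left(B^{2}\right),

which makes explicit that the force density scales with the spatial gradient of the square of the magnetic flux density B, as stated above; for a particle in air or vacuum the medium term χ_m is negligible.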
Magnetic separation
[ "Chemistry" ]
1,325
[ "Solid-solid separation", "Separation processes by phases" ]
22,034,247
https://en.wikipedia.org/wiki/Nasal%20administration
Nasal administration, popularly known as snorting, is a route of administration in which drugs are insufflated through the nose. It can be a form of either topical administration or systemic administration, as the drugs thus locally delivered can go on to have either purely local or systemic effects. Nasal sprays are locally acting drugs such as decongestants for cold and allergy treatment, whose systemic effects are usually minimal. Examples of systemically active drugs available as nasal sprays are migraine drugs, rescue medications for overdose and seizure emergencies, hormone treatments, nicotine nasal spray, and nasal vaccines such as live attenuated influenza vaccine. Risks Nasal septum perforation A nasal septum perforation is a medical condition in which the nasal septum, the bony/cartilaginous wall dividing the nasal cavities, develops a hole or fissure. Nasal administration may cause nasal septum perforation by gradually injuring and ulcerating the epithelium, causing cartilage exposure and necrosis. Risk factors for shared drug paraphernalia Sharing snorting equipment (nasal spray bottles, straws, banknotes, bullets, etc) has been linked to the transmission of hepatitis C. In one study, the University of Tennessee Medical Center researchers warned that other blood-borne diseases such as HIV could be transmitted as well. Advantages The nasal cavity is covered by a thin mucosa which is well vascularised. Therefore, a drug molecule can be transferred quickly across the single epithelial cell layer directly to the systemic blood circulation without first-pass hepatic and intestinal metabolism. The effect is often reached within 5 minutes for smaller drug molecules. Nasal administration can therefore be used as an alternative to oral administration, by crushing or grinding tablets or capsules and snorting or sniffing the resulting powder, providing a rapid onset of effects if a fast effect is desired or if the drug is extensively degraded in the gut or liver. Large-molecule drugs can also be delivered directly to the brain by the intranasal route, the only practical means of doing so, following the olfactory and trigeminal nerves (see section below), for widespread central distribution throughout the central nervous system with little exposure to the blood. This delivery method to the brain was functionally demonstrated in humans in 2006, using insulin, a large peptide hormone that acts as a nerve growth factor in the brain. Limitations Nasal administration is primarily suitable for potent drugs since only a limited volume can be sprayed into the nasal cavity. Drugs for continuous and frequent administration may be less suitable because of the risk of harmful long-term effects on the nasal epithelium. Nasal administration has also been associated with a high variability in the amount of drug absorbed. Upper airway infections may increase the variability as may the extent of sensory irritation of the nasal mucosa, differences in the amount of liquid spray that is swallowed and not kept in the nasal cavity and differences in the spray actuation process. However, the variability in the amount absorbed after nasal administration should be comparable to that after oral administration. Nasal drugs The area of intranasal medication delivery provides a huge opportunity for research – both for specifically developed pharmaceutical drugs designed for intranasal treatment, as well as for investigating off-label uses of commonly available generic medications. 
Steroids, and a large number of inhalational anaesthetic agents are being used commonly. The recent developments in intranasal drug delivery systems are prodigious. Peptide drugs (hormone treatments) are also available as nasal sprays, in this case to avoid drug degradation after oral administration. The peptide analogue desmopressin is, for example, available for both nasal and oral administration, for the treatment of diabetes insipidus. The bioavailability of the commercial tablet is 0.1% while that of the nasal spray is 3-5% according to the SPC (Summary of Product Characteristics). Intranasal calcitonin, calcitonin-salmon, is used to treat hypercalcaemia arising out of malignancy, Paget's disease of bone, post menopausal and steroid induced osteoporosis, phantom limb pain and other metabolic bone abnormalities, available as Rockbone, Fortical and Miacalcin Nasal Spray. GnRH analogues like nafarelin and busurelin are used for the treatment of anovulatory infertility, hypogonadotropic hypogonadism, delayed puberty and cryptorchidism. Other potential drug candidates for nasal administration include anaesthetics, antihistamines (Azelastine), antiemetics (particularly metoclopramide and ondansetron) and sedatives that all benefit from a fast onset of effect. Intranasal midazolam is found to be highly effective in acute episodes of seizures in children. Recently, the upper part of the nasal cavity, as high as the cribriform plate, has been proposed for drug delivery to the brain. This "transcribrial route", published first in 2014, was suggested by the author for drugs to be given for Primary Meningoencephalitis. Medicines Oxytocin Oxytocin (brand name Syntocinon) nasal spray is used to increase duration and strength of contractions during labour. Intranasal oxytocin is also being actively investigated for many psychiatric conditions including alcohol withdrawal, anorexia nervosa, PTSD, autism, anxiety disorders, pain sensation and schizophrenia. Recreational drugs/entheogens List of substances that have higher bioavailability when administered intranasally compared to oral administration. Cocaine Insufflation of cocaine leads to the longest duration of its effects (60–90 minutes). When insufflating cocaine, absorption through the nasal membranes is approximately 30–60%. Ketamine Among the less invasive routes for ketamine, the intranasal route has the highest bioavailability (45–50%). Snuff Snuff is a type of smokeless tobacco product made from finely ground or pulverized tobacco leaves. It is snorted or "sniffed" (alternatively sometimes written as "snuffed") into the nasal cavity, delivering nicotine and a flavored scent to the user (especially if flavoring has been blended with the tobacco). Traditionally, it is sniffed or inhaled lightly after a pinch of snuff is either placed onto the back surface of the hand, held pinched between thumb and index finger, or held by a specially made "snuffing" device. Yopo Snuff trays and tubes similar to those commonly used for yopo were found in the central Peruvian coast dating back to 1200 BC, suggesting that insufflation of Anadenanthera beans is a more recent method of use. Archaeological evidence of insufflation use within the period 500-1000 AD, in northern Chile, has been reported. Research Olfactory transfer There is about 20 mL capacity in the adult human nasal cavity. The major part of the approximately 150 cm2 surface in the human nasal cavity is covered by respiratory epithelium, across which systemic drug absorption can be achieved. 
The olfactory epithelium is situated in the upper posterior part and covers approximately 10 cm2 of the human nasal cavity. The nerve cells of the olfactory epithelium project into the olfactory bulb of the brain, which provides a direct connection between the brain and the external environment. The transfer of drugs to the brain from the blood circulation is normally hindered by the blood–brain barrier (BBB), which is virtually impermeable to passive diffusion of all but small, lipophilic substances. However, if drug substances can be transferred along the olfactory nerve cells, they can bypass the BBB and enter the brain directly. The olfactory transfer of drugs into the brain is thought to occur by either slow transport inside the olfactory nerve cells to the olfactory bulb or by faster transfer along the perineural space surrounding the olfactory nerve cells into the cerebrospinal fluid surrounding the olfactory bulbs and the brain. Olfactory transfer could theoretically be used to deliver drugs that have a required effect in the central nervous system such as those for Parkinson's or Alzheimer's diseases. Studies have been presented showing that direct transfer of drugs is achievable. References Medical treatments Dosage forms Routes of administration
Nasal administration
[ "Chemistry" ]
1,734
[ "Pharmacology", "Routes of administration" ]
22,034,657
https://en.wikipedia.org/wiki/Gary%20S.%20Grest
Gary S. Grest is an American computational physicist at Sandia National Laboratories. He was awarded a B.Sc in physics (1971), an M.S in physics (1973) and a Ph.D in physics (1974) by the Louisiana State University. His interest is the theory and simulation of nanoscale phenomena. Since 1998 he has been a member of the technical staff of Sandia Laboratories, since 2009 an adjunct professor in department of chemistry, Clemson University and since 2013 a Distinguished Sandia National Laboratories Professor in the department of chemical and biological engineering, University of New Mexico. He was elected a Fellow of the American Physical Society in 1989 "for contributions to the understanding of the kinetics of domain growth, amorphous glasses, disordered magnets, and polymer dynamics" He was elected to the National Academy of Engineering in 2008. He received the Aneesur Rahman Prize for Computational Physics from the American Physical Society in 2008 for his work in computational physics and the American Physical Society Polymer Physics Prize in 2011. References 21st-century American physicists Year of birth missing (living people) Living people Louisiana State University alumni University of New Mexico faculty Members of the United States National Academy of Engineering Computational physicists Sandia National Laboratories people Fellows of the American Physical Society
Gary S. Grest
[ "Physics" ]
257
[ "Computational physicists", "Computational physics" ]
22,037,631
https://en.wikipedia.org/wiki/Hinsberg%20reaction
The Hinsberg reaction is a chemical test for the detection of primary, secondary and tertiary amines. The reaction was first described by Oscar Hinsberg in 1890. In this test, the amine is shaken well with the Hinsberg reagent (benzenesulfonyl chloride) in the presence of aqueous alkali (either KOH or NaOH). A primary amine will form a soluble sulfonamide salt. Acidification of this salt then precipitates the sulfonamide of the primary amine. A secondary amine in the same reaction will directly form an insoluble sulfonamide. A tertiary amine will not react with the original reagent (benzene sulfonyl chloride) and will remain insoluble. After adding dilute acid this insoluble amine is converted to a soluble ammonium salt. In this way the reaction can distinguish between the three types of amines. Tertiary amines are able to react with benzenesulfonyl chloride under a variety of conditions; the test described above is not absolute. The Hinsberg test for amines is valid only when reaction speed, concentration, temperature, and solubility are taken into account. Reactions Amines serve as nucleophiles in attacking the sulfonyl chloride electrophile, displacing chloride. The sulfonamides resulting from primary and secondary amines are poorly soluble and precipitate as solids from solution. For primary amines (R' = H), the initially formed sulfonamide is deprotonated by base to give a water-soluble sulfonamide salt (Na[PhSO2NR]). Tertiary amines promote hydrolysis of the sulfonyl chloride functional group, which affords water-soluble sulfonate salts. References External links Laboratory procedure: science.csustan.edu Chemical tests
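As a rough summary of the chemistry described above (a standard textbook scheme, not reproduced from the original article), the three outcomes can be sketched as:

    Primary amine:   PhSO2Cl + RNH2 → PhSO2NHR; with KOH → K+[PhSO2NR]− (soluble salt); acidification reprecipitates PhSO2NHR
    Secondary amine: PhSO2Cl + R2NH → PhSO2NR2 (no acidic N–H, so insoluble in alkali)
    Tertiary amine:  R3N forms no stable sulfonamide; it remains insoluble in alkali but dissolves in dilute acid as R3NH+

The key distinction is the acidic N–H proton of the primary-amine sulfonamide, which allows salt formation and solubility in base, whereas the secondary-amine sulfonamide has no such proton and precipitates directly.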
Hinsberg reaction
[ "Chemistry" ]
399
[ "Chemical tests" ]
22,038,253
https://en.wikipedia.org/wiki/Plug%20%28sanitation%29
A plug in sanitation is an object that is used to close a drainage outlet firmly. The insertion of a plug into a drainage outlet allows the container to be filled with water or other fluids. In contrast to screw on caps, plugs are pushed into the hole and are not put over the hole. Plugs are most commonly encountered in the bathroom or kitchen, for use in bathtubs, wash basins or sinks. Traditional plugs Typically plugs are made from a soft material, such as rubber, or have a soft outer rim, so that they can be fitted to holes slightly smaller than their diameter; this ensures a tight seal. They are often connected by a ball chain which ensures the plug may be pulled from the drain with relative ease. Pop-up plugs Some modern plugholes dispense with the need for a separate plug, having instead a built-in 'pop-up plug' operated by a handle on the sink, that can move up or down to open or close the plughole. See also Pipe plug References Sewerage Bathrooms Piping
Plug (sanitation)
[ "Chemistry", "Engineering", "Environmental_science" ]
216
[ "Building engineering", "Chemical engineering", "Water pollution", "Sewerage", "Mechanical engineering", "Environmental engineering", "Piping" ]
26,384,081
https://en.wikipedia.org/wiki/Csplit
The csplit command in Unix and Unix-like operating systems is a utility that is used to split a file into two or more smaller files determined by context lines. History csplit is part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX and the Single Unix Specification. It first appeared in PWB UNIX. The version of csplit bundled in GNU coreutils was written by Stuart Kemp and David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. Usage The command-syntax is: csplit [OPTION]... FILE PATTERN... The patterns may be line numbers or regular expressions. The program outputs pieces of the file separated by the patterns into files xx00, xx01, etc., and outputs the size of each piece, in bytes, to standard output. The optional parameters modify the behaviour of the program in various ways. For example, the default prefix string (xx) and number of digits (2) in the output filenames can be changed. As with most Unix utilities, a return code of 0 indicates success, while nonzero values indicate failure. Comparison to split The split command also splits a file into pieces, except that all the pieces are of a fixed size (measured in lines or bytes). See also List of Unix commands split (Unix) References Further reading Ellen Siever, Aaron Weber, Stephen Figgins, Robert Love, Arnold Robbins, et al. Linux in a Nutshell, 5th Edition. O'Reilly Media: July 2005. . External links Standard Unix programs Unix SUS2008 utilities
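For illustration only, the short Python sketch below mimics the basic idea of context splitting described above — cutting a text file into xx00, xx01, ... pieces at lines that match a regular expression. It is an assumed, simplified re-implementation for explanatory purposes, not the actual csplit source and not a drop-in replacement (repeat counts and other options are omitted).

    import re
    import sys

    def context_split(path, pattern, prefix="xx", digits=2):
        """Split `path` into pieces at lines matching `pattern` (simplified csplit-like behaviour)."""
        regex = re.compile(pattern)
        pieces = [[]]
        with open(path) as fh:
            for line in fh:
                # Start a new piece whenever a line matches the pattern,
                # so the matching line begins the next output file.
                if regex.search(line) and pieces[-1]:
                    pieces.append([])
                pieces[-1].append(line)
        for i, piece in enumerate(pieces):
            name = f"{prefix}{i:0{digits}d}"
            with open(name, "w") as out:
                out.writelines(piece)
            # Like csplit, report the size of each piece in bytes.
            print(sum(len(l.encode()) for l in piece))

    if __name__ == "__main__":
        context_split(sys.argv[1], sys.argv[2])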
Csplit
[ "Technology" ]
359
[ "Computing commands", "Standard Unix programs" ]
26,384,121
https://en.wikipedia.org/wiki/Thermocompression%20bonding
Thermocompression bonding describes a wafer bonding technique and is also referred to as diffusion bonding, pressure joining, thermocompression welding or solid-state welding. Two metals, e.g. gold-gold (Au), are brought into atomic contact applying force and heat simultaneously. The diffusion requires atomic contact between the surfaces due to the atomic motion. The atoms migrate from one crystal lattice to the other one based on crystal lattice vibration. This atomic interaction sticks the interface together. The diffusion process is described by the following three processes: surface diffusion grain boundary diffusion bulk diffusion This method enables internal structure protecting device packages and direct electrical interconnect structures without additional steps beside the surface mounting process. Overview The most established materials for thermocompression bonding are copper (Cu), gold (Au) and aluminium (Al) because of their high diffusion rates. In addition, aluminium and copper are relatively soft metals with good ductility. Bonding with Al or Cu requires temperatures ≥ 400 °C to ensure sufficient hermetical sealing. Furthermore, aluminium needs extensive deposition and requires a high applied force to penetrate the surface oxide, as it is not able to penetrate through the oxide. When using gold for diffusion, a temperature around 300 °C is needed to achieve a successful bond. Compared to Al or Cu, it does not form an oxide. This allows to skip a surface cleaning procedure before bonding. Copper has the disadvantage that the damascene process is very extensive. It also immediately forms a surface oxide which can, however, be removed by formic acid vapor cleaning. Oxide removal doubles as surface passivation. The diffusion of these metals requires good knowledge of the CTE differences between the two wafers to prevent resulting stress. Therefore, the temperature of both heaters needs to be matched and center-to-edge uniform for synchronized wafer expansion. Procedural steps Pre-conditioning Oxidation and impurities in the metal films affect the diffusion reactions by reducing the diffusion rates. Therefore, clean deposition practices and bonding with oxide removal and re-oxidation prevention steps are applied. The oxide layer removal can be realized by various oxide etch chemistry methods. Dry etching processes, i.e. formic acid vapor cleaning, are preferred based on the minimization of the immersion in fluids and the resulting etching of the passivation or the adhesion layer. Using the CMP process, which is especially for Cu and Al required, creates a planarized surface with micro roughness around several nanometres and enables the achievement of void-free diffusion bonds. Further, a surface treatment for organic removal, e.g. UV-ozone exposure, is possible. Methods, i.e. plasma surface pretreatment, provide an accelerated diffusion rate based on an increased surface contact. Also the use of an ultra planarization step is considered to improve the bonding due to a reduction of material transport required for the diffusion. This improvement is based on a defined height Cu, Au and Sn. Deposition The metal films can be deposited by evaporation, sputtering or electroplating. Evaporation and sputtering, producing high quality films with limited impurities, are slow and hence used for micrometre and sub-micrometre layer thicknesses. The electroplating is commonly used for thicker films and needs careful monitoring and control of the film roughness and the layer purity. 
The gold film can also be deposited on a diffusion barrier film, i.e. oxide or nitride. Also, an additional nanocrystalline metal film, e.g. Ta, Cr, W, or Ti, can enhance the adhesion strength of the diffusion bond at decreased applied pressure and bonding temperature. Bonding The choice of temperature and applied pressure depends on the diffusion rate. The diffusion occurs between the crystal lattices by lattice vibration. Atoms cannot leap over free space, i.e. contamination or vacancies. Besides the most rapid diffusion process (surface diffusion), grain boundary and bulk diffusion exist. Surface diffusion, also referred to as atomic diffusion, describes the process along the surface interface, when atoms move from surface to surface to lower the free energy. Grain boundary diffusion refers to the free migration of atoms in free atomic lattice spaces. This is based on polycrystalline layers and their boundaries of incomplete matching of the atomic lattice and grains. The diffusion through bulk crystal is the exchange of atoms or vacancies within the lattice that enables the mixing. Bulk diffusion starts at 30 to 50% of the material's melting point, increasing exponentially with temperature. To enable the diffusion process, a high force is applied to plastically deform the surface asperities in the film, i.e. reducing bow and warp of the metal. Further, the applied force and its uniformity are important and depend on the wafer diameter and the metal density features. A high degree of force uniformity diminishes the total force needed and alleviates the stress gradients and sensitivity to fragility. The bonding temperature can be lowered using a higher applied pressure and vice versa, considering that high pressure increases the chances of damage to the structural material or the films. The bonding process itself takes place in a vacuum or forming gas environment, e.g. N2. The pressure atmosphere supports the heat conduction and prevents thermal gradients vertically across the wafer and re-oxidation. Because thermal expansion differences between the two wafers are difficult to control, precision alignment and high quality fixtures are used. The bonding settings for the most established metals are as follows (for 200 mm wafers): Aluminium (Al) bonding temperature can be from 400 to 450 °C with an applied force above 70 kN for 20 to 45 min Gold (Au) bonding temperature is between 260 and 450 °C with an applied force above 40 kN for 20 to 45 min Copper (Cu) bonding temperature lies around 380 to 450 °C with an applied force between 20 and 80 kN for 20 to 60 min Examples 1. Thermocompression bonding is well established in the CMOS industry and realizes vertically integrated devices and production of wafer level packages with smaller form factors. This bonding procedure is used to produce pressure sensors, accelerometers, gyroscopes and RF MEMS. 2. Typically, thermocompression bonds are made by delivering heat and pressure to the mating surface with a hard-faced bonding tool. Compliant bonding is a unique method of forming this type of solid state bond between a gold lead and a gold surface since heat and pressure are transmitted through a compliant or deformable medium. The use of the compliant medium ensures the physical integrity of the lead by controlling the extent of wire deformation. The process also allows one to bond multiple gold wires of various dimensions simultaneously since the compliant medium ensures contact with, and deformation of, all the lead wires. 
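The exponential temperature dependence of diffusion mentioned in the Bonding section above is usually captured by an Arrhenius-type relation; the following is a standard textbook form added here for illustration (the symbols are assumptions, not notation from the original article):

    D(T) = D_{0}\, \exp\!\left(-\frac{E_{a}}{k_{B} T}\right),

where D is the diffusion coefficient, D_0 a material-dependent prefactor, E_a the activation energy of the diffusion mechanism (surface, grain boundary, or bulk), k_B the Boltzmann constant, and T the absolute temperature. Surface diffusion typically has the lowest activation energy and bulk diffusion the highest, consistent with the ordering of the three processes described above.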
Technical specifications See also Compliant bonding Thermosonic bonding References Electronics manufacturing Packaging (microfabrication) Semiconductor technology Wafer bonding
Thermocompression bonding
[ "Materials_science", "Engineering" ]
1,419
[ "Electronics manufacturing", "Microtechnology", "Packaging (microfabrication)", "Electronic engineering", "Semiconductor technology" ]
26,384,549
https://en.wikipedia.org/wiki/Nuclear%20knowledge%20management
Nuclear knowledge management (NKM) is knowledge management as applied in the nuclear technology field. It supports the gathering and sharing of new knowledge and the updating of the existing knowledge base. Knowledge management is of particular importance in the nuclear sector, owing to the rapid development and complexity of nuclear technologies and their hazards and security implications. The International Atomic Energy Agency (IAEA) launched a nuclear knowledge management programme in 2002. Definition of nuclear knowledge management Nuclear knowledge management is defined as knowledge management in the nuclear domain. This simple definition is consistent with the working definition used in the IAEA document "Knowledge Management for Nuclear Industry Operating Organizations" (2006). Knowledge management (KM) itself is defined as an integrated, systematic approach to identifying, acquiring, transforming, developing, disseminating, using, sharing, and preserving knowledge, relevant to achieving specified objectives. Description Knowledge management systems support nuclear organizations in strengthening and aligning their knowledge. Knowledge is the nuclear energy industry’s most valuable asset and resource, without which the industry cannot operate safely and economically. Nuclear knowledge is also very complex, expensive to acquire and maintain, and easily lost. States, suppliers, and operating organizations that deploy nuclear technology are responsible for ensuring that the associated nuclear knowledge is maintained and accessible. In the organizational context, nuclear knowledge management supports the organization's business processes, and involves applying knowledge management practices. These may be applied at any stage of a nuclear facility's life cycle: research and development, design and engineering, construction, commissioning, operations, maintenance, refurbishment and life time extension, waste management, and decommissioning. Nuclear knowledge management issues and priorities are often unique to the particular circumstances of individual Member States and their nuclear industry organizations. Nuclear knowledge management practices enhance and support traditional business functions and goals such as human resource management, training, planning, operations, maintenance, projects, innovation, performance and risk management, information management, process management, organizational learning and information technology support. A nuclear knowledge management strategy, with clearly defined objectives, provides a framework for establishing principles, policy, priorities and plans to apply knowledge management practices in the workplace. Implementation Knowledge management focuses on people and organizational culture to stimulate and nurture the sharing and use of knowledge; on processes or methods to find, create, capture and share knowledge; and on technology to store and assimilate knowledge and to make it readily accessible in a manner which will allow people to work together even if they are not located together. People are the most important component in a KM system and the creation of new knowledge is one of its most valuable byproducts. For a KM system to function properly, the people involved must be willing to share and re-use existing knowledge and to cooperatively generate new knowledge to the advantage of the organization. 
Due to the nature of nuclear power plant operating organizations (high hazard but low risk), a number of knowledge management activities and programmes have been in place throughout the industry to manage and control the knowledge and information related to nuclear power plant design, construction, operation and maintenance. Examples of such existing KM activities employed by NPPs and in most other nuclear technology facilities include the following functions: Plant policies and procedures; Communication techniques; Configuration management; Document control; Work control systems; Quality assurance and quality management; Operating experience programmes; Corrective action systems; Safety analysis; Training and development; Human resource management; Company intranet and other web-based strategies. The implementation of a KM system is not intended to replace any of these systems, but rather should increase the benefits to be derived from these systems in conjunction with the deployment of an integrated management system. Properly implemented KM should increase the benefits to the organization of these existing activities, rather than substituting for them. The lessons learned in the nuclear industry in the past 20 years, moving away from inspection by large quality assurance organizations towards building quality into all facility processes, have considerable relevance for KM implementation. Motivation for nuclear knowledge management programs It is probable that nuclear knowledge will continue to expand and change. Without diligence in managing nuclear knowledge, substantial portions of it could be lost due to personnel retirements and the likelihood that much of it could be disused or discarded as a result of either negligence or changing priorities. It will be as important to identify and properly treat obsolete, superseded knowledge as it will be to gather and share new knowledge. It is therefore necessary to maintain effective and efficient KM systems. NKM has become an increasingly important element of the nuclear sector in recent years, resulting from a number of challenges and trends: Countries with expanding nuclear programmes require skilled and trained human resources to design and operate future nuclear installations. Capacity building through training and education and transferring knowledge from centres of knowledge to centres of growth are key issues. In countries with stagnating nuclear programmes, the challenge is to secure the human resources needed to sustain the safe operation of existing installations, including their decommissioning and related programmes for spent fuel and waste. Replacing retiring staff and attracting the young generation to a career in the nuclear field are key challenges. Non-power applications of nuclear technologies require a stable or even growing base of nuclear knowledge and trained human resources, be it for cancer treatment or for food and agriculture. This need is present in all Member States using nuclear technologies, independent of the use of nuclear power. Nuclear energy Concerns about global climate change and the availability of economically exploitable fossil fuels are driving many countries to reconsider the use of nuclear energy. Yet, the innovations required to design, construct, operate and maintain nuclear power plants consistent with international needs and constraints must derive from a strong foundation of well-sustained nuclear knowledge. 
In contrast to knowledge in other scientific domains, the free sharing and uncontrolled use of nuclear knowledge are severely restricted due to concerns about nuclear security and proliferation. On the other hand, ensuring nuclear safety requires free sharing of information and experience to avoid repetition of accident precursors. Risks to nuclear safety could be very high due to the nature and size of third party liability and the possibility of nuclear security being severely compromised. In managing nuclear knowledge, therefore, an appropriate balance between nuclear safety and security requirements needs to be established. Other sectors The applications of nuclear technology in the non-power areas enumerated above tend to be less controversial than nuclear power. Knowledge in these areas is broadly disseminated and – in many cases – is freely shared. Effective and efficient systems of managing nuclear knowledge form the basis for refining existing applications and developing new, even more widely used applications. IAEA program on nuclear knowledge management The importance of nuclear knowledge management is increasingly being recognized in the industry. The International Atomic Energy Agency (IAEA) has been a repository of knowledge related to peaceful applications of nuclear technology from the time the organization was established in 1957. Nuclear Knowledge Management came to the forefront at the IAEA as a formal programme to address Member States' priorities in the 21st century. Several resolutions adopted at the IAEA General Conference since 2002 include knowledge management topics. The IAEA Secretariat was urged to assist member states, at their request, in fostering and preserving nuclear education and training in all areas of nuclear technology for peaceful purposes; in developing guidance on and methodologies for planning, designing and implementing nuclear knowledge management programmes; in providing Member States with reliable information resources on the peaceful use of nuclear energy; and in continuing to develop tools and methods to capture, retain, share, utilize and preserve nuclear knowledge. The IAEA has organized a number of international meetings, schools and conferences covering a wide range of topics, from general concepts that underpin nuclear knowledge management to specific methods and tools taught at training seminars for practitioners. The IAEA Nuclear Knowledge Management Programme was headed by Yanko Yanev (2002–2012) and by John de Grosbois (since 2012). References External links Managing Nuclear Knowledge, A Pocket Guide, IAEA Vienna 2012. International Journal of Nuclear Knowledge Management, Inderscience Publishers. Knowledge management Nuclear technology
Nuclear knowledge management
[ "Physics" ]
1,598
[ "Nuclear technology", "Nuclear physics" ]
26,390,186
https://en.wikipedia.org/wiki/Catabolite%20Control%20Protein%20A
Catabolite Control Protein A (CcpA) is a master regulator of carbon metabolism in gram-positive bacteria. It is a member of the LacI/GalR transcription regulator family. In contrast to most LacI/GalR proteins, CcpA is allosterically regulated principally by a protein-protein interaction, rather than a protein-small molecule interaction. CcpA interacts with the phosphorylated forms of Hpr and Crh, which are formed when high concentrations of glucose or fructose-1,6-bisphosphate are present in the cell. Interaction with phosphorylated Hpr or Crh modulates the DNA sequence specificity of CcpA, allowing it to bind operator DNA to modulate transcription. The small molecules glucose-6-phosphate and fructose-1,6-bisphosphate are also known allosteric effectors, fine-tuning CcpA function. Structure The DNA-binding functional unit of CcpA consists of a homodimer. The N-terminal region of each monomer forms a DNA-binding site while the C-terminal portion forms a "regulatory" domain. A short linker connects the N-terminal DNA-binding domain and the C-terminal regulatory domain, and partially contacts DNA when CcpA binds its operator. The LacI/GalR family can be functionally subdivided based on the presence or absence of a "YxxPxxxAxxL" motif in the linker sequence; CcpA belongs to the subdivision containing this motif. The regulatory domain is further subdivided into N-terminal and C-terminal subdomains. Small-molecule effector binding occurs in the cleft between these subdomains. Binding to phosphorylated Hpr/Crh occurs along the regulatory domain's N-subdomain. References Proteins
Catabolite Control Protein A
[ "Chemistry" ]
368
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
41,864,024
https://en.wikipedia.org/wiki/Oxygenated%20treatment
Oxygenated treatment (OT) is a technique used to reduce corrosion in a boiler and its associated feedwater system in flow-through boilers. Process With oxygenated treatment, oxygen is injected into the feedwater to keep the oxygen level between 30 and 50 ppb. OT programs are most commonly used in supercritical (i.e. >3,250 psi) power boilers. The ability to change an existing sub-critical boiler over to an OT program is very limited. "Common injection points are just after the condensate polisher and again at the deaerator outlet." This forms a thicker protective layer of hematite (Fe2O3) on top of the magnetite. This is a denser, flatter film (versus the undulating scale formed under all-volatile treatment, AVT), so that there is less resistance to water flow compared to AVT. Also, OT reduces the risk of flow-accelerated corrosion. When OT is used, conductivity after cation exchange (CACE) at the economiser inlet must be maintained below 0.15 μS/cm; this can be achieved by the use of a full-flow condensate polisher. Comparison of AVT to OT See also Heat recovery steam generator Flow-accelerated corrosion Oxygen scavenger References External links Lamella Clarifier Water treatment
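As an illustrative aside (added here, not part of the original article), the control values quoted above — dissolved oxygen held between 30 and 50 ppb and CACE below 0.15 µS/cm — can be expressed as a simple monitoring check. The function name and structure are assumptions for the sketch, not a plant procedure.

```python
def check_ot_limits(dissolved_o2_ppb, cace_uS_cm):
    """Flag departures from the OT control values quoted in the text.

    The limits (30-50 ppb O2, CACE < 0.15 uS/cm) come from the article;
    the check itself is only an illustrative sketch."""
    issues = []
    if not 30.0 <= dissolved_o2_ppb <= 50.0:
        issues.append(f"dissolved O2 out of band: {dissolved_o2_ppb} ppb")
    if cace_uS_cm >= 0.15:
        issues.append(f"CACE too high: {cace_uS_cm} uS/cm")
    return issues or ["within quoted OT limits"]

print(check_ot_limits(42.0, 0.10))   # within limits
print(check_ot_limits(25.0, 0.20))   # two violations flagged
```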
Oxygenated treatment
[ "Chemistry", "Engineering", "Environmental_science" ]
271
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
41,866,267
https://en.wikipedia.org/wiki/Planck%20star
In loop quantum gravity theory, a Planck star is a hypothetical astronomical object, theorized as a compact, exotic star, that exists within a black hole's event horizon, created when the energy density of a collapsing star reaches the Planck energy density. Under these conditions, assuming gravity and spacetime are quantized, a repulsive "force" arises from Heisenberg's uncertainty principle. The accumulation of mass–energy inside the Planck star cannot collapse beyond this limit because it violates the uncertainty principle for spacetime itself. The key feature of this theoretical object is that this repulsion arises from the energy density, not the Planck length, and starts taking effect far earlier than might be expected. This repulsive "force" is strong enough to stop the star's collapse well before a singularity is formed and, indeed, well before the Planck scale for distance. Since a Planck star is calculated to be considerably larger than the Planck scale, there is adequate room for all the information captured inside a black hole to be encoded in the star, thus avoiding information loss. While it might be expected that such a repulsion would act very quickly to reverse the collapse of a star, it turns out that the relativistic effects of the extreme gravity of such an object slow down time for the Planck star to a similarly extreme degree. Seen from outside the star's Schwarzschild radius, the rebound from a Planck star takes approximately fourteen billion years, such that even primordial black holes are only now starting to rebound from an outside perspective. Furthermore, the emission of Hawking radiation can be calculated to correspond to the timescale of gravitational effects on time, such that the event horizon that "forms" a black hole evaporates as the rebound proceeds. Carlo Rovelli and Francesca Vidotto, who first proposed the existence of Planck stars, theorized in 2014 that Planck stars form inside black holes as a solution to the black hole firewall and the black hole information paradox. Confirmation of emissions from rebounding black holes could provide evidence for loop quantum gravity. Recent work demonstrates that Planck stars may exist inside black holes as part of a cycle between black and white holes. A somewhat analogous object theorized under string theory is the fuzzball, which similarly eliminates the singularity within a black hole and accounts for a way to preserve the quantum information that falls into a black hole's event horizon. See also List of nearest black holes Supermassive black hole Intermediate-mass black hole Stellar black hole Micro black hole Neutron star References Star types Hypothetical stars Black holes
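For a rough sense of the scale at which the "Planck energy density" argument above applies, the sketch below computes the Planck mass density from fundamental constants, ρ_P = c⁵/(ħG²), and compares it with nuclear-matter density. This is an added back-of-the-envelope illustration, not a calculation from the original article.

```python
from scipy.constants import c, G, hbar

# Planck (mass) density: rho_P = c^5 / (hbar * G^2), in kg/m^3.
planck_density = c**5 / (hbar * G**2)
nuclear_density = 2.3e17  # kg/m^3, approximate density of nuclear matter

print(f"Planck density        ~ {planck_density:.2e} kg/m^3")
print(f"ratio to nuclear matter ~ {planck_density / nuclear_density:.1e}")
```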
Planck star
[ "Physics", "Astronomy" ]
521
[ "Black holes", "Physical phenomena", "Physical quantities", "Unsolved problems in physics", "Astrophysics", "Density", "Astronomical classification systems", "Stellar phenomena", "Astronomical objects", "Star types" ]
23,491,418
https://en.wikipedia.org/wiki/Transition%20metal%20dioxygen%20complex
Dioxygen complexes are coordination compounds that contain O2 as a ligand. The study of these compounds is inspired by oxygen-carrying proteins such as myoglobin, hemoglobin, hemerythrin, and hemocyanin. Several transition metals form complexes with O2, and many of these complexes form reversibly. The binding of O2 is the first step in many important phenomena, such as cellular respiration, corrosion, and industrial chemistry. The first synthetic oxygen complex was demonstrated in 1938, when a cobalt(II) complex was shown to bind O2 reversibly. Mononuclear complexes of O2 O2 binds to a single metal center either "end-on" (η1-) or "side-on" (η2-). The bonding and structures of these compounds are usually evaluated by single-crystal X-ray crystallography, focusing both on the overall geometry and on the O–O distance, which reveals the bond order of the O2 ligand. Complexes of η1-O2 ligands O2 adducts derived from cobalt(II) and iron(II) complexes of porphyrin (and related anionic macrocyclic ligands) exhibit this bonding mode. Myoglobin and hemoglobin are famous examples, and many synthetic analogues have been described that behave similarly. Binding of O2 is usually described as proceeding by electron transfer from the metal(II) center to give superoxide (O2−) complexes of metal(III) centers. As shown by the mechanisms of cytochrome P450 and alpha-ketoglutarate-dependent hydroxylase, Fe-η1-O2 bonding is conducive to formation of Fe(IV) oxo centers. O2 can bind to one metal of a bimetallic unit via the same modes discussed above for mononuclear complexes. A well-known example is the active site of the protein hemerythrin, which features a diiron carboxylate that binds O2 at one Fe center. Dinuclear complexes can also cooperate in the binding, although the initial attack of O2 probably occurs at a single metal. Complexes of η2-O2 ligands η2-bonding is the most common motif seen in the coordination chemistry of dioxygen. Such complexes can be generated by treating low-valent metal complexes with oxygen. For example, Vaska's complex reversibly binds O2 (Ph = C6H5): IrCl(CO)(PPh3)2 + O2 ⇌ IrCl(CO)(PPh3)2O2. The conversion is described as a 2 e− redox process: Ir(I) converts to Ir(III) as dioxygen converts to peroxide. Since O2 has a triplet ground state and Vaska's complex is a singlet, the reaction is slower than when singlet oxygen is used. The magnetic properties of some η2-O2 complexes show that the ligand, in fact, is superoxide, not peroxide. Most complexes of η2-O2 are generated using hydrogen peroxide, not from O2. Chromate ([CrO4]2−) can, for example, be converted to the tetraperoxo complex [Cr(O2)4]2−. The reaction of hydrogen peroxide with aqueous titanium(IV) gives a brightly colored peroxy complex that is a useful test for titanium as well as for hydrogen peroxide. Binuclear complexes of O2 These binding modes include μ2-η2,η2-, μ2-η1,η1-, and μ2-η1,η2-. Depending on the degree of electron transfer from the dimetal unit, these O2 ligands can again be described as peroxo or superoxo. Hemocyanin is an O2 carrier that utilizes a bridging O2 binding motif. It features a pair of copper centers. Salcomine, the cobalt(II) complex of the salen ligand, was the first synthetic O2 carrier. Solvated derivatives of the solid complex bind 0.5 equivalent of O2: 2 Co(salen) + O2 → [Co(salen)]2O2. Reversible electron transfer reactions are observed in some dinuclear O2 complexes. Relationship to other oxygenic ligands and applications Dioxygen complexes are the precursors to other families of oxygenic ligands.
Metal oxo compounds arise from the cleavage of the O–O bond after complexation. Hydroperoxo complexes are generated in the course of the reduction of dioxygen by metals. The reduction of O2 by metal catalysts is a key half-reaction in fuel cells. Metal-catalyzed oxidations with O2 proceed via the intermediacy of dioxygen complexes, although the actual oxidants are often oxo derivatives. The reversible binding of O2 to metal complexes has been used as a means to purify oxygen from air, but cryogenic distillation of liquid air remains the dominant technology. References Coordination complexes Biochemistry Inorganic chemistry
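The article notes that crystallographic O–O distances reveal the bond order of the bound O2 ligand. As a hedged illustration (added here, not from the original text), the sketch below classifies a measured O–O distance against typical literature ranges — roughly 1.2–1.3 Å for superoxo-like and 1.4–1.5 Å for peroxo-like ligands; the exact cutoffs are assumptions made for the example and real assignments should be corroborated by magnetic and spectroscopic data.

```python
def classify_o2_ligand(d_oo_angstrom):
    """Rough classification of a bound O2 ligand from its O-O distance.

    Cutoffs are approximate literature ranges assumed for illustration:
    superoxo-like ~1.2-1.3 A, peroxo-like ~1.4-1.5 A."""
    if 1.20 <= d_oo_angstrom <= 1.32:
        return "superoxo-like"
    if 1.40 <= d_oo_angstrom <= 1.52:
        return "peroxo-like"
    return "ambiguous - corroborate with magnetic/spectroscopic data"

for d in (1.25, 1.30, 1.36, 1.45):
    print(d, classify_o2_ligand(d))
```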
Transition metal dioxygen complex
[ "Chemistry", "Biology" ]
1,075
[ "Biochemistry", "Coordination chemistry", "nan", "Coordination complexes" ]
23,492,094
https://en.wikipedia.org/wiki/Tixocortol%20pivalate
Tixocortol pivalate is a corticosteroid. It has anti-inflammatory properties similar to hydrocortisone. It is marketed under the brand name Pivalone. It is sometimes used in patch testing in atopic dermatitis. See also Tixocortol References Corticosteroid esters Corticosteroids
Tixocortol pivalate
[ "Chemistry" ]
76
[ "Pharmacology", "Pharmacology stubs", "Medicinal chemistry stubs" ]
23,493,177
https://en.wikipedia.org/wiki/Proofs%20involving%20ordinary%20least%20squares
The purpose of this page is to provide supplementary materials for the ordinary least squares article, reducing the load of the main article with mathematics and improving its accessibility, while at the same time retaining the completeness of exposition. Derivation of the normal equations Define the $i$th residual to be $r_i = y_i - \sum_{j=1}^{n} X_{ij}\beta_j$. Then the objective $S$ can be rewritten $S = \sum_{i=1}^{m} r_i^2$. Given that S is convex, it is minimized when its gradient vector is zero (This follows by definition: if the gradient vector is not zero, there is a direction in which we can move to minimize it further – see maxima and minima.) The elements of the gradient vector are the partial derivatives of S with respect to the parameters: $\frac{\partial S}{\partial \beta_j} = 2\sum_{i=1}^{m} r_i \frac{\partial r_i}{\partial \beta_j} \quad (j = 1, 2, \dots, n).$ The derivatives are $\frac{\partial r_i}{\partial \beta_j} = -X_{ij}.$ Substitution of the expressions for the residuals and the derivatives into the gradient equations gives $\frac{\partial S}{\partial \beta_j} = -2\sum_{i=1}^{m} X_{ij}\Bigl(y_i - \sum_{k=1}^{n} X_{ik}\beta_k\Bigr) \quad (j = 1, 2, \dots, n).$ Thus if $\hat\beta$ minimizes S, we have $\sum_{i=1}^{m} X_{ij}\Bigl(y_i - \sum_{k=1}^{n} X_{ik}\hat\beta_k\Bigr) = 0 \quad (j = 1, 2, \dots, n).$ Upon rearrangement, we obtain the normal equations: $\sum_{i=1}^{m}\sum_{k=1}^{n} X_{ij}X_{ik}\hat\beta_k = \sum_{i=1}^{m} X_{ij}y_i \quad (j = 1, 2, \dots, n).$ The normal equations are written in matrix notation as $(X^{\mathrm T}X)\hat{\boldsymbol\beta} = X^{\mathrm T}\mathbf y$ (where XT is the matrix transpose of X). The solution of the normal equations yields the vector $\hat{\boldsymbol\beta}$ of the optimal parameter values. Derivation directly in terms of matrices The normal equations can be derived directly from a matrix representation of the problem as follows. The objective is to minimize $S(\boldsymbol\beta) = \|\mathbf y - X\boldsymbol\beta\|^2 = (\mathbf y - X\boldsymbol\beta)^{\mathrm T}(\mathbf y - X\boldsymbol\beta) = \mathbf y^{\mathrm T}\mathbf y - \boldsymbol\beta^{\mathrm T}X^{\mathrm T}\mathbf y - \mathbf y^{\mathrm T}X\boldsymbol\beta + \boldsymbol\beta^{\mathrm T}X^{\mathrm T}X\boldsymbol\beta.$ Here $\boldsymbol\beta^{\mathrm T}X^{\mathrm T}\mathbf y$ has the dimension 1x1 (the number of columns of $\mathbf y$), so it is a scalar and equal to its own transpose, hence $\boldsymbol\beta^{\mathrm T}X^{\mathrm T}\mathbf y = \mathbf y^{\mathrm T}X\boldsymbol\beta$ and the quantity to minimize becomes $S(\boldsymbol\beta) = \mathbf y^{\mathrm T}\mathbf y - 2\boldsymbol\beta^{\mathrm T}X^{\mathrm T}\mathbf y + \boldsymbol\beta^{\mathrm T}X^{\mathrm T}X\boldsymbol\beta.$ Differentiating this with respect to $\boldsymbol\beta$ and equating to zero to satisfy the first-order conditions gives $-X^{\mathrm T}\mathbf y + X^{\mathrm T}X\boldsymbol\beta = 0,$ which is equivalent to the above-given normal equations. A sufficient condition for satisfaction of the second-order conditions for a minimum is that $X$ have full column rank, in which case $X^{\mathrm T}X$ is positive definite. Derivation without calculus When $X^{\mathrm T}X$ is positive definite, the formula for the minimizing value of $\boldsymbol\beta$ can be derived without the use of derivatives. The quantity $S(\boldsymbol\beta) = \mathbf y^{\mathrm T}\mathbf y - 2\boldsymbol\beta^{\mathrm T}X^{\mathrm T}\mathbf y + \boldsymbol\beta^{\mathrm T}X^{\mathrm T}X\boldsymbol\beta$ can be written as $S(\boldsymbol\beta) = \langle\boldsymbol\beta,\boldsymbol\beta\rangle - 2\langle\boldsymbol\beta,\ (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y\rangle + \langle(X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y,\ (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y\rangle + C,$ where $C$ depends only on $\mathbf y$ and $X$, and $\langle\cdot,\cdot\rangle$ is the inner product defined by $\langle\mathbf u,\mathbf v\rangle = \mathbf u^{\mathrm T}X^{\mathrm T}X\mathbf v.$ It follows that $S(\boldsymbol\beta)$ is equal to $\bigl\langle\boldsymbol\beta - (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y,\ \boldsymbol\beta - (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y\bigr\rangle + C$ and therefore minimized exactly when $\boldsymbol\beta = (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y.$ Generalization for complex equations In general, the coefficients of the matrices $X$, $\boldsymbol\beta$ and $\mathbf y$ can be complex. By using a Hermitian transpose instead of a simple transpose, it is possible to find a vector $\hat{\boldsymbol\beta}$ which minimizes $S(\boldsymbol\beta)$, just as for the real matrix case. In order to get the normal equations we follow a similar path as in previous derivations: $S(\boldsymbol\beta) = (\mathbf y - X\boldsymbol\beta)^{\dagger}(\mathbf y - X\boldsymbol\beta) = \mathbf y^{\dagger}\mathbf y - \boldsymbol\beta^{\dagger}X^{\dagger}\mathbf y - \mathbf y^{\dagger}X\boldsymbol\beta + \boldsymbol\beta^{\dagger}X^{\dagger}X\boldsymbol\beta,$ where $\dagger$ stands for Hermitian transpose. We should now take derivatives of $S$ with respect to each of the coefficients $\beta_j$, but first we separate real and imaginary parts to deal with the conjugate factors in the above expression. For the $\beta_j$ we have $\beta_j = \beta_j^{R} + i\beta_j^{I},$ and the derivatives change into $\partial S/\partial\beta_j^{R}$ and $\partial S/\partial\beta_j^{I}$. After rewriting $S$ in the summation form and writing $\beta_j$ explicitly, we can calculate both partial derivatives; setting each to zero (the minimization condition for $S$) and combining them yields, in matrix form: $X^{\dagger}X\hat{\boldsymbol\beta} = X^{\dagger}\mathbf y.$ Least squares estimator for β Using matrix notation, the sum of squared residuals is given by $S(b) = (\mathbf y - Xb)^{\mathrm T}(\mathbf y - Xb).$ Since this is a quadratic expression, the vector which gives the global minimum may be found via matrix calculus by differentiating with respect to the vector $b$ (using denominator layout) and setting equal to zero: $0 = \frac{dS}{db}\Big|_{b=\hat\beta} = -2X^{\mathrm T}\mathbf y + 2X^{\mathrm T}X\hat\beta.$ By assumption matrix X has full column rank, and therefore XTX is invertible and the least squares estimator for β is given by $\hat\beta = (X^{\mathrm T}X)^{-1}X^{\mathrm T}\mathbf y.$ Unbiasedness and variance of $\hat\beta$ Plug y = Xβ + ε into the formula for $\hat\beta$ and then use the law of total expectation: $\operatorname E[\hat\beta] = \operatorname E\bigl[(X^{\mathrm T}X)^{-1}X^{\mathrm T}(X\beta + \varepsilon)\bigr] = \beta + \operatorname E\bigl[\operatorname E[(X^{\mathrm T}X)^{-1}X^{\mathrm T}\varepsilon \mid X]\bigr] = \beta + \operatorname E\bigl[(X^{\mathrm T}X)^{-1}X^{\mathrm T}\operatorname E[\varepsilon \mid X]\bigr] = \beta,$ where E[ε|X] = 0 by assumptions of the model. Since the expected value of $\hat\beta$ equals the parameter it estimates, $\beta$, it is an unbiased estimator of $\beta$.
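To make the normal-equation result above concrete, here is a small NumPy sketch (added as an illustration; the synthetic data and names are not from the article) that solves $(X^{\mathrm T}X)\hat\beta = X^{\mathrm T}\mathbf y$ directly and cross-checks it against a library least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design matrix (with an intercept column) and response vector.
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Solve the normal equations (X^T X) beta_hat = X^T y.
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's dedicated least-squares routine.
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_hat)
print(np.allclose(beta_hat, beta_lstsq))  # expected: True
```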
For the variance, let the covariance matrix of $\varepsilon$ be $\operatorname E[\varepsilon\varepsilon'] = \sigma^2 I$ (where $I$ is the identity matrix), and let X be a known constant. Then, $\operatorname{Var}[\hat\beta] = \operatorname{Var}\bigl[(X'X)^{-1}X'\mathbf y\bigr] = (X'X)^{-1}X'\,\operatorname{Var}[\mathbf y]\,X(X'X)^{-1} = (X'X)^{-1}X'\,\sigma^2 I\,X(X'X)^{-1} = \sigma^2 (X'X)^{-1},$ where we used the fact that $\hat\beta$ is just an affine transformation of $\mathbf y$ by the matrix $(X'X)^{-1}X'$. For a simple linear regression model, where $y_i = \alpha + \beta x_i + \varepsilon_i$ ($\alpha$ is the y-intercept and $\beta$ is the slope), one obtains $\operatorname{Var}[\hat\beta] = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar x)^2}$ and $\operatorname{Var}[\hat\alpha] = \frac{\sigma^2 \sum_{i=1}^{n} x_i^2}{n\sum_{i=1}^{n}(x_i - \bar x)^2}.$ Expected value and biasedness of $\hat\sigma^2$ First we will plug in the expression for y into the estimator, and use the fact that X'M = MX = 0 (matrix M projects onto the space orthogonal to X): $\hat\sigma^2 = \tfrac1n(\mathbf y - X\hat\beta)'(\mathbf y - X\hat\beta) = \tfrac1n\mathbf y'M\mathbf y = \tfrac1n(X\beta + \varepsilon)'M(X\beta + \varepsilon) = \tfrac1n\varepsilon'M\varepsilon.$ Now we can recognize ε′Mε as a 1×1 matrix, such matrix is equal to its own trace. This is useful because by properties of trace operator, tr(AB) = tr(BA), and we can use this to separate disturbance ε from matrix M which is a function of regressors X: $\hat\sigma^2 = \tfrac1n\operatorname{tr}(\varepsilon'M\varepsilon) = \tfrac1n\operatorname{tr}(M\varepsilon\varepsilon').$ Using the Law of iterated expectation this can be written as $\operatorname E[\hat\sigma^2] = \tfrac1n\operatorname E\bigl[\operatorname{tr}\bigl(M\operatorname E[\varepsilon\varepsilon' \mid X]\bigr)\bigr] = \tfrac1n\operatorname E[\sigma^2\operatorname{tr}(M)] = \tfrac{\sigma^2}{n}\operatorname{tr}(M).$ Recall that M = I − P where P is the projection onto linear space spanned by columns of matrix X. By properties of a projection matrix, it has p = rank(X) eigenvalues equal to 1, and all other eigenvalues are equal to 0. Trace of a matrix is equal to the sum of its characteristic values, thus tr(P) = p, and tr(M) = n − p. Therefore, $\operatorname E[\hat\sigma^2] = \frac{n-p}{n}\sigma^2.$ Since the expected value of $\hat\sigma^2$ does not equal the parameter it estimates, $\sigma^2$, it is a biased estimator of $\sigma^2$. Note in the later section “Maximum likelihood” we show that under the additional assumption that errors are distributed normally, the estimator $\hat\sigma^2$ is proportional to a chi-squared distribution with n – p degrees of freedom, from which the formula for expected value would immediately follow. However the result we have shown in this section is valid regardless of the distribution of the errors, and thus has importance on its own. Consistency and asymptotic normality of $\hat\beta$ Estimator $\hat\beta$ can be written as $\hat\beta = \beta + \Bigl(\tfrac1n X'X\Bigr)^{-1}\tfrac1n X'\varepsilon.$ We can use the law of large numbers to establish that $\tfrac1n X'X \xrightarrow{p} Q_{xx} \equiv \operatorname E[x_i x_i'], \qquad \tfrac1n X'\varepsilon \xrightarrow{p} \operatorname E[x_i\varepsilon_i] = 0.$ By Slutsky's theorem and continuous mapping theorem these results can be combined to establish consistency of estimator $\hat\beta$: $\hat\beta \xrightarrow{p} \beta.$ The central limit theorem tells us that $\tfrac{1}{\sqrt n}X'\varepsilon \xrightarrow{d} \mathcal N(0, V),$ where $V = \operatorname{Var}[x_i\varepsilon_i] = \sigma^2 Q_{xx}.$ Applying Slutsky's theorem again we'll have $\sqrt n\,(\hat\beta - \beta) \xrightarrow{d} \mathcal N\bigl(0,\ \sigma^2 Q_{xx}^{-1}\bigr).$ Maximum likelihood approach Maximum likelihood estimation is a generic technique for estimating the unknown parameters in a statistical model by constructing a log-likelihood function corresponding to the joint distribution of the data, then maximizing this function over all possible parameter values. In order to apply this method, we have to make an assumption about the distribution of y given X so that the log-likelihood function can be constructed. The connection of maximum likelihood estimation to OLS arises when this distribution is modeled as a multivariate normal. Specifically, assume that the errors ε have multivariate normal distribution with mean 0 and variance matrix σ2I. Then the distribution of y conditionally on X is $\mathbf y \mid X \sim \mathcal N(X\beta,\ \sigma^2 I),$ and the log-likelihood function of the data will be $\mathcal L(\beta,\sigma^2 \mid X) = -\tfrac n2\ln 2\pi - \tfrac n2\ln\sigma^2 - \tfrac{1}{2\sigma^2}(\mathbf y - X\beta)'(\mathbf y - X\beta).$ Differentiating this expression with respect to β and σ2 we'll find the ML estimates of these parameters: $\hat\beta_{\mathrm{ML}} = (X'X)^{-1}X'\mathbf y, \qquad \hat\sigma^2_{\mathrm{ML}} = \tfrac1n(\mathbf y - X\hat\beta)'(\mathbf y - X\hat\beta).$ We can check that this is indeed a maximum by looking at the Hessian matrix of the log-likelihood function. Finite-sample distribution Since we have assumed in this section that the distribution of error terms is known to be normal, it becomes possible to derive the explicit expressions for the distributions of estimators $\hat\beta$ and $\hat\sigma^2$: $\hat\beta = (X'X)^{-1}X'\mathbf y = \beta + (X'X)^{-1}X'\varepsilon,$ so that by the affine transformation properties of multivariate normal distribution $\hat\beta \mid X \sim \mathcal N\bigl(\beta,\ \sigma^2(X'X)^{-1}\bigr).$ Similarly the distribution of $\hat\sigma^2$ follows from $\hat\sigma^2 = \tfrac1n\varepsilon'M\varepsilon,$ where $M = I - X(X'X)^{-1}X'$ is the symmetric projection matrix onto subspace orthogonal to X, and thus MX = X′M = 0.
We have argued before that this matrix has rank n – p, and thus by properties of chi-squared distribution, $\frac{n\hat\sigma^2}{\sigma^2} = \frac{\varepsilon'M\varepsilon}{\sigma^2} \sim \chi^2_{n-p}.$ Moreover, the estimators $\hat\beta$ and $\hat\sigma^2$ turn out to be independent (conditional on X), a fact which is fundamental for construction of the classical t- and F-tests. The independence can be easily seen from following: the estimator $\hat\beta$ represents coefficients of vector decomposition of $\hat{\mathbf y} = X\hat\beta = P\mathbf y = X\beta + P\varepsilon$ by the basis of columns of X, as such $\hat\beta$ is a function of Pε. At the same time, the estimator $\hat\sigma^2$ is a norm of vector Mε divided by n, and thus this estimator is a function of Mε. Now, random variables (Pε, Mε) are jointly normal as a linear transformation of ε, and they are also uncorrelated because PM = 0. By properties of multivariate normal distribution, this means that Pε and Mε are independent, and therefore estimators $\hat\beta$ and $\hat\sigma^2$ will be independent as well. Derivation of simple linear regression estimators We look for $\hat\alpha$ and $\hat\beta$ that minimize the sum of squared errors (SSE): $\min_{\alpha,\beta}\ \mathrm{SSE}(\alpha,\beta) = \min_{\alpha,\beta}\ \sum_{i=1}^{n}\bigl(y_i - \alpha - \beta x_i\bigr)^2.$ To find a minimum take partial derivatives with respect to $\alpha$ and $\beta$: $\frac{\partial}{\partial\alpha}\mathrm{SSE} = -2\sum_{i=1}^{n}\bigl(y_i - \alpha - \beta x_i\bigr) = 0 \;\Rightarrow\; \hat\alpha = \bar y - \beta\bar x.$ Before taking partial derivative with respect to $\beta$, substitute the previous result for $\alpha$: $\min_{\beta}\ \sum_{i=1}^{n}\bigl[(y_i - \bar y) - \beta(x_i - \bar x)\bigr]^2.$ Now, take the derivative with respect to $\beta$: $-2\sum_{i=1}^{n}(x_i - \bar x)\bigl[(y_i - \bar y) - \beta(x_i - \bar x)\bigr] = 0 \;\Rightarrow\; \hat\beta = \frac{\sum_{i=1}^{n}(x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{n}(x_i - \bar x)^2}.$ And finally substitute $\hat\beta$ to determine $\hat\alpha = \bar y - \hat\beta\bar x.$ References Article proofs Least squares
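As an added illustration of the simple-regression formulas just derived, the sketch below evaluates the closed-form estimators on synthetic data and compares them with NumPy's polynomial fit, which minimizes the same sum of squared errors. The data and parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from y = alpha + beta*x + noise.
x = rng.uniform(0, 10, size=100)
y = 3.0 + 1.5 * x + rng.normal(scale=0.5, size=x.size)

# Closed-form simple linear regression estimators derived above.
beta_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
alpha_hat = y.mean() - beta_hat * x.mean()

# np.polyfit minimizes the same SSE, so the results should agree.
slope, intercept = np.polyfit(x, y, deg=1)

print(alpha_hat, beta_hat)
print(np.allclose([beta_hat, alpha_hat], [slope, intercept]))  # expected: True
```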
Proofs involving ordinary least squares
[ "Mathematics" ]
1,698
[ "Article proofs" ]
23,493,530
https://en.wikipedia.org/wiki/It%20Felt%20Like%20a%20Kiss
It Felt Like a Kiss is an immersive theatre production, first performed between 2 and 19 July 2009 as part of the second Manchester International Festival, co-produced with the BBC. Themed on "how power really works in the world", it is a collaboration between film-maker Adam Curtis and theatre company Punchdrunk, with original music composed by Damon Albarn and performed by the Kronos Quartet. Visitors wandered among sets and watched a short film created from archival footage, weaving together multiple stories about American international and cultural influence beginning in the year 1959, touching on the Cold War, Rock Hudson and Doris Day, Lou Reed, Saddam Hussein, Lee Harvey Oswald, and the AIDS epidemic. The title is taken from The Crystals' 1962 song "He Hit Me (And It Felt Like a Kiss)", written by Gerry Goffin and Carole King. Production The production was staged at Quay House, in the disused former offices of the National Probation Service on Quay Street, central Manchester. The production ran between 2 and 19 July 2009, as part of the second Manchester International Festival. It Felt Like A Kiss won Punchdrunk the Manchester Evening News Theatre Award for Best Special Entertainment. The film and event makes extensive use of archive footage. Upon arrival at the event groups of nine visitors are taken to a darkened sixth floor. The 54 minute film (available for a limited time online in the UK) is only a small section ("the film club") of the event. Featured in the story are Eldridge Cleaver, Doris Day, Little Eva, Philip K Dick, Enos (a chimpanzee sent into space), Sidney Gottlieb, Rock Hudson, Saddam Hussein, Richard Nixon, Lee Harvey Oswald, Lou Reed, Mobutu Sese Seko, B F Skinner, Phil Spector, Tina Turner and Frank Wisner. Unlike Curtis' earlier work which prominently feature the Helvetica typeface, Arial is used for titling. Also, Curtis' trademark narration is absent. Sound and Show Control equipment were supplied by Bradford-based The Stage Management Company (Uk) Ltd who have also collaborated with Punchdrunk on their Duchess of Malfi and Dr Who: Crash of the Elysium projects. The production consisted of elaborate walk-through sets depicting first scenes from an idyllic midcentury America and then a series of decrepit offices, hospital wards, and prison cells taken from horror films. They were separated by a theatre screening the Curtis film. The production generally lacked human performers, with notable rare exceptions including a performer as a chainsaw-wielding serial killer. Genesis The production started life as an experimental film by Adam Curtis, commissioned by the BBC. Curtis approached Felix Barrett of the Punchdrunk theatre company, with the proposal that a production could be created "as though the audience were walking through the story of the film.” The film was shown to Damon Albarn, already associated with the Manchester International Festival through the productions Demon Days Live in 2006 and Monkey: Journey to the West in 2007. He agreed to write a score for the production, which was then recorded by the San Francisco-based Kronos Quartet. Themes According to Adam Curtis the production is "the story of an enchanted world that was built by American power as it became supreme...and how those living in that dream world responded to it". He has also said; "it’s trying to show to you that the way you feel about yourself and the way you feel about the world today is a political product of the ideas of that time”. 
According to Curtis: "The politics of our time"..."are deeply embedded in the ideas of individualism...but it's not the be-all-and-end-all...the notion that you only achieve your true self if your dreams, your desires, are satisfied...it's a political idea." Felix Barrett has stated that the production was influenced by his love of ghost trains and haunted houses, and by the idea of blurring fiction with reality: "It takes the idea of the viewer as voyeur and asks at what point are you watching, inside or even starring in the film". The development of new techniques of interrogation by "everyone over Level 7" in the CIA during the 1960s is a theme of the production, and the suggestibility of human beings is something that the production seeks to highlight. Reception The NME described the show as a “pop-art-horror walk through” that left the reviewer “breathless, mesmerised, sick to the stomach with fear and in need of a good lie down.” Cultural historian Brett Nicholls sees the show as part of Curtis’ stance of suspicion towards the “political absurdity” of elites. Music References External links 2009 works Culture in Manchester Immersive entertainment Multimedia works
It Felt Like a Kiss
[ "Technology" ]
990
[ "Multimedia", "Multimedia works" ]
23,497,722
https://en.wikipedia.org/wiki/Free%20streaming
In astronomy, a free streaming particle is one that propagates through a medium without scattering. The particle is often a photon, but it can also refer to neutrinos, cosmic rays, and hypothetical dark matter particles. Use in defining surfaces Defining an exact surface for an object such as the Sun is made difficult by the diffuse nature of matter which constitutes the Sun at distances far from the stellar core. An often used definition for the surface of a star is based on the path that photons take. Inside a star, photons travel by random walk, constantly interacting with matter, and the surface of the star is defined as the point at which photons encounter little resistance from the matter in the stellar atmosphere, or in other words, when photons stream freely. The light which constitutes the cosmic microwave background comes from the surface of last scattering. This is, on average, the surface at which primordial photons last interacted with matter in the universe, or in other words, the point at which photons started free streaming. Similarly, the surface of the cosmic neutrino background, if it could be observed, would mark when neutrinos decoupled and began to stream freely through the rest of the matter in the universe. See also Knudsen gas Radiative transfer References Bibliography Atmospheric radiation Electromagnetic radiation
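As a rough numerical aside (added here, not taken from the article), the contrast between a freely streaming photon and one undergoing a random walk can be illustrated with the standard estimate that covering a distance R by a random walk with mean free path ℓ takes of order (R/ℓ)² scatterings. The numbers below are illustrative placeholders, not measured solar parameters.

```python
# Order-of-magnitude contrast between free streaming and a random walk.
# R and mfp are placeholder values chosen only for illustration.
R = 7.0e8      # region size in metres (roughly a solar radius, for scale)
mfp = 1.0e-2   # assumed photon mean free path in metres
c = 3.0e8      # speed of light, m/s

free_streaming_time = R / c                  # straight-line escape, no scattering
n_scatter = (R / mfp) ** 2                   # random-walk step count ~ (R / l)^2
random_walk_time = n_scatter * mfp / c       # total path length divided by c

print(f"free streaming: {free_streaming_time:.2e} s")
print(f"random walk:    {random_walk_time:.2e} s (~{random_walk_time / 3.15e7:.0f} yr)")
```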
Free streaming
[ "Physics", "Chemistry", "Materials_science" ]
271
[ "Physical phenomena", "Materials science stubs", "Electromagnetic radiation", "Scattering stubs", "Scattering", "Radiation", "Electromagnetism stubs" ]
23,499,524
https://en.wikipedia.org/wiki/Hansen%27s%20problem
In trigonometry, Hansen's problem is a problem in planar surveying, named after the astronomer Peter Andreas Hansen (1795–1874), who worked on the geodetic survey of Denmark. There are two known points , and two unknown points . From and an observer measures the angles made by the lines of sight to each of the other three points. The problem is to find the positions of and . See figure; the angles measured are . Since it involves observations of angles made at unknown points, the problem is an example of resection (as opposed to intersection). Solution method overview Define the following angles: As a first step we will solve for and . The sum of these two unknown angles is equal to the sum of and , yielding the equation A second equation can be found more laboriously, as follows. The law of sines yields Combining these, we get Entirely analogous reasoning on the other side yields Setting these two equal gives Using a known trigonometric identity this ratio of sines can be expressed as the tangent of an angle difference: Where This is the second equation we need. Once we solve the two equations for the two unknowns , we can use either of the two expressions above for to find since is known. We can then find all the other segments using the law of sines. Solution algorithm We are given four angles and the distance . The calculation proceeds as follows: Calculate Calculate Let and then Calculate or equivalently If one of these fractions has a denominator close to zero, use the other one. Solutions via Geometric Algebra In addition to presenting algorithms for solving the problem via Vector Geometric Algebra and Conformal Geometric Algebra, Ventura et al. review previous methods, and compare the various methods' computational speeds and sensitivity to measurement error. See also Solving triangles Snell's problem References Trigonometry Surveying Mathematical problems
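Because the closed-form steps above rely on angle labels defined in the article's figure, here is instead an added numerical sketch that solves Hansen's problem by nonlinear least squares. The angle conventions (each observed angle is taken between the sight line to a known station and the sight line to the other unknown point), the coordinates, and the function names are my own assumptions for this illustration, not the article's notation; it requires NumPy and SciPy.

```python
import numpy as np
from scipy.optimize import least_squares

def angle(p, q, r):
    """Angle at p between the sight lines p->q and p->r, in radians."""
    u, v = q - p, r - p
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

# Known stations A and B (illustrative coordinates).
A = np.array([0.0, 0.0])
B = np.array([100.0, 0.0])

# "True" unknown points, used only to simulate the four observed angles.
P1_true, P2_true = np.array([30.0, 60.0]), np.array([80.0, 70.0])
obs = np.array([
    angle(P1_true, A, P2_true),  # at P1: between A and P2
    angle(P1_true, B, P2_true),  # at P1: between B and P2
    angle(P2_true, A, P1_true),  # at P2: between A and P1
    angle(P2_true, B, P1_true),  # at P2: between B and P1
])

def residuals(params):
    P1, P2 = params[:2], params[2:]
    model = np.array([
        angle(P1, A, P2), angle(P1, B, P2),
        angle(P2, A, P1), angle(P2, B, P1),
    ])
    return model - obs

# Resection admits a mirror-image solution across the baseline AB, so the
# initial guess should lie on the correct side of the line through A and B.
guess = np.array([20.0, 40.0, 70.0, 50.0])
sol = least_squares(residuals, guess)
print(sol.x.round(3))  # should recover approximately (30, 60) and (80, 70)
```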
Hansen's problem
[ "Mathematics", "Engineering" ]
376
[ "Surveying", "Mathematical problems", "Civil engineering" ]
24,982,445
https://en.wikipedia.org/wiki/Clerici%20solution
Clerici solution is an aqueous solution of equal parts of thallium formate (Tl(HCO2)) and thallium malonate (Tl(C3H3O4)). It is free-flowing and odorless. Its color fades from yellowish to colorless when diluted. At 4.25 g/cm3 at , saturated Clerici solution is one of the densest aqueous solutions. The solution was invented in 1907 by the Italian chemist Enrico Clerici (1862–1938). Its value in mineralogy and gemology was reported in 1930s. It allows the separation of minerals by density with a traditional flotation method. Its advantages include transparency and an easily controllable density in the range 1–5 g/cm3 as a result of changes in solubility (and therefore density of the saturated solution) with temperature. Saturated Clerici solution is more dense than spinel, garnet, diamond, and corundum, as well as many other minerals. A saturated Clerici solution at can separate densities up to 4.2 g/cm3, while a saturated solution at can separate densities up to 5.0 g/cm3. The change in density is due to the increased solubility of the heavy thallium salts at the higher temperature. A range of solution densities between 1.0 and 5.0 g/cm3 can be achieved by diluting with water. The refractive index shows significant, linear and well reproducible variation with the density; it changes from 1.44 for 2 g/cm3 to 1.70 for 4.28 g/cm3. Thus the density can be easily measured by optical techniques. The color of the Clerici solution changes significantly upon minor dilution. In particular, at room temperature the concentrated solution with the density of 4.25 g/cm3 is amber-yellow. However, a minor dilution with water to the density of 4.0 g/cm3 makes it as colorless as glass or water (absorption threshold 350 nm). Procedures for determining mineral density using the Clerici solution are available. Two substantial drawbacks of the Clerici solution are its high toxicity and corrosiveness. Today sodium polytungstate has been introduced as a replacement, but its solutions do not reach as high a density as the Clerici solution. Clerici solution was used by the serial killer Tamara Ivanyutina to poison her victims. References Thallium(I) compounds Formates Malonates Solutions
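The reported roughly linear relation between refractive index and density (about 1.44 at 2 g/cm³ and 1.70 at 4.28 g/cm³) suggests a simple interpolation for estimating density optically. The sketch below is an added illustration that interpolates between those two quoted points only; it is not a calibrated laboratory procedure.

```python
def clerici_density_from_n(n_refr):
    """Estimate Clerici solution density (g/cm^3) from refractive index.

    Linear interpolation between the two data points quoted in the article:
    n = 1.44 at 2.00 g/cm^3 and n = 1.70 at 4.28 g/cm^3. Real work should
    use a measured calibration; this is only illustrative."""
    n1, rho1 = 1.44, 2.00
    n2, rho2 = 1.70, 4.28
    slope = (rho2 - rho1) / (n2 - n1)
    return rho1 + slope * (n_refr - n1)

print(round(clerici_density_from_n(1.57), 2))  # midpoint -> about 3.14 g/cm^3
```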
Clerici solution
[ "Chemistry" ]
524
[ "Homogeneous chemical mixtures", "Solutions" ]
24,986,062
https://en.wikipedia.org/wiki/Santiago%20Schnell
Santiago Schnell FRSB FRSC is a scientist and academic leader, currently serving as the William K. Warren Foundation Dean of the College of Science at the University of Notre Dame, as well as a professor in the Department of Biological Sciences, and Department of Applied and Computational Mathematics and Statistics. Early life and education Santiago Schnell was born and raised in Caracas, Venezuela. Growing up in the tropical rainforest, he developed an appreciation for nature and its intricate interactions with humans. His interest in science was sparked by his neighbor, Serafín Mazparrote, a Spanish biologist, and science educator, who exposed him to the natural world. This early exposure to science and nature motivated Schnell to pursue a career in scientific research. Schnell's father, a lawyer with an understanding of the potential of computers, provided him with a Sinclair ZX81 computer when he was just 10 years old. This early access to technology ignited Schnell's interest for using mathematical approaches to solve complex problems and laid the foundation for his future work in scientific research. He earned his undergraduate degree in biology from Universidad Simón Bolívar in Venezuela and later obtained his doctorate in mathematical biology from the University of Oxford in the United Kingdom. He pursued his doctoral and postdoctoral research under the supervision of Philip Maini, FRS in the Wolfson Centre for Mathematical Biology at the University of Oxford. His academic journey and international experience contributed to shaping his multidisciplinary approach to scientific research. Career From 2001 to 2004, he was Junior Research Fellow at Christ Church (a college of the University of Oxford) and a Research Fellow of the Welcome Trust at the Center for Mathematical Biology in the University of Oxford. He was assistant professor of Informatics and associate director of the Biocomplexity Institute at Indiana University, Bloomington between 2004 and 2008. In 2008, he joined the University of Michigan as associate professor of Molecular & Integrative Physiology and a U-M Brehm Investigator in the Brehm Center for Diabetes Research. In 2013, he was jointly appointed as associate professor in the Department of Computational Medicine and Bioinformatics. He was promoted to professor in both departments in 2015, appointed as the John A. Jacquez Collegiate Professor of Physiology in 2016, and served as chair of the Department of Molecular & Integrative Physiology between 2017 and 2021. In 2021, he was appointed the William K. Warren Foundation Dean of the College of Science at the University of Notre Dame. Schnell is Past-President of the Society for Mathematical Biology. He served as the Editor-in-Chief of Mathematical Biosciences, and is a member of the Standards for Reporting Enzymology Data Commission. Scholars contributions to research and education Schnell's research program departs from the premise that there is a continuum between health and disease; if we are capable of measuring this continuum, we will be in the position of detecting disease earlier and understanding it better to intervene more precisely. His research focuses on two broad areas: (i) the development of standard-methods to obtain high quality measurements in the biomedical sciences and scientometrics, and (ii) the development of mathematical models of complex biomedical systems with the goal of identifying the key mechanisms underlying the behavior of the system as a whole. 
Schnell has also focused his research attention on deriving mathematical expressions to estimate enzyme kinetics parameters under different reaction conditions. He has systematically obtained equations to estimate kinetic parameters for the family of Michaelis-Menten reaction mechanisms and determined their region of validity for the initial enzyme and substrate concentrations. Schnell derived a generic expression, known nowadays as the Schnell-Mendoza equation, to determine the Michaelis constant and maximum velocity for enzyme catalyzed reactions following Michaelis-Menten kinetics using time course data. He has also systematically investigated for the first time how the rate laws describing intracellular reactions vary as a function of the physico-chemical conditions of the intracellular environments. His work has focused to resolve the ambiguities in the quantitative analysis and modeling of reactions inside cells. In addition, Schnell has also extensive experience in developing multiscale models of developmental processes and cancer. His work has been highlighted in popular science magazines, such as American Scientist (USA), Investigación y Ciencia (Spain and Latin-America), Spektrum der Wissenschaft (Germany). Honors and awards Santiago has garnered some accolades for his research and teaching endeavors. He received the Faculty Award for Teaching Excellence from the School of Informatics at Indiana University in 2006. In 2013, he was inducted to the League of Educational Excellence in the University of Michigan Medical School, and was awarded the Endowment for Basic Science Teaching Award from the same institution. He was also visiting professor of Excellence, Department of Chemistry, University of Barcelona, Barcelona, Spain. Schnell was recognized with James S. McDonnell Foundation 21st Century Scientist Award in 2010. He is Fellow of the Royal Society of Chemistry, Fellow of the Royal Society of Medicine and Fellow of the Royal Society of Biology. He is a Corresponding Member of the :pt:Academia de Ciências da América Latina. Schnell is an elected Fellow of the American Association for the Advancement of Science for distinguished contributions to the field of mathematical biology, particularly for the theoretical modeling of complex biochemical reactions and optimal estimation of their rates. In 2023, The Society for Mathematical Biology honored him with the Arthur T. Winfree Prize for his outstanding contributions to many areas of biology, and in particular his seminal work on enzyme kinetics. Schnell's theories and mathematical modelling of enzyme catalyzed reactions have been transformative for the fields of catalysis and enzyme kinetics while leading, at the same time, to a resurgence of new mathematical biology research in enzyme kinetics. The Society for Advancement of Chicanos and Native Americans in Science conferred upon him the 2023 SACNAS Distinguished Scientist Award in recognition of his significant contributions to enzyme kinetics and the creation of a fundamental quantitative enzymological model of the Polymerase Chain Reaction. Academic administration Society for mathematical biology As President of the Society for Mathematical Biology, Schnell implemented structural changes that strengthened the organization's foundation and membership. 
To allow members of the Society for Mathematical Biology to meet and interact within more focused areas in smaller groups, Schnell established the SMB Subgroups, which have been truly transformative for the Society, making it more dynamic and representative of all the members of the field. He also made major gains in fundraising for the Society. His efforts resulted in a four-fold increase of the Society's endowment. This led to the establishment of awards to recognize excellence in mathematical biology at different career stages. Thanks to his leadership, the Society has the following awards: The H. D. Landahl Mathematical Biophysics Award for graduate students and postdoctoral fellows, The Leah Edelstein-Keshet Prize for Women in Mathematical Biology, the John Jungck Prize for Excellence in Education, and the Society for Mathematical Biology Fellows Program. University of Michigan Dr. Schnell served as chair of the Department of Molecular & Integrative Physiology in the Medical School from 2017 to 2021. During his time leading the largest basic science department at Michigan Medicine, he helped it maintain its status as the top National Institutes of Health-funded physiology department in the nation. Under his leadership, the department increased its total annual operating revenue from $20.7 to $26.9 million and total cash and investments from $11.2 to $17.2 million. The overall department endowment increased from $5.4 to $8.3 million during his tenure; he successfully completed fundraising for two endowed collegiate professorships, and he established an endowment to support postdoctoral program activities. During his tenure, six faculty were elected fellows of the American Association for the Advancement of Science and one of the Latin American Academy of Science. He stewarded an increase in the diversity of trainees in the department's educational programs, in which underrepresented minorities now make up nearly one-third of trainees. Between 2016 and 2017, in collaboration with Dr. David Brown and the Office for Health Equity and Inclusion, he led the development, coordination and implementation of the strategic plan for diversity, equity and inclusion of the 10 basic science departments/units in the Medical School. As an educator, Schnell co-organized the establishment of a summer fellowship program to attract undergraduate students to gain hands-on research experiences in the department. He also led the independent funding of a summer fellowship program through an NIDDK R25 grant, "Interfacing Computation and Engineering with Digestive and Metabolic Physiology Program." This program served as a template to fund two additional R25 programs, effectively creating an umbrella program which attracts approximately 75 students annually from across the nation to pursue research in the medical school. University of Notre Dame Under his leadership, the College of Science has launched a number of new initiatives. In the fall of 2021, the College of Science launched the first minor program in the country focusing on rare disease patient advocacy. A few months later, the Notre Dame Patient Advocacy Initiative received a founding gift from Horizon Therapeutics and Dyne Therapeutics. During his tenure, the University of Notre Dame received a $20 million gift to endow the newly established Berthiaume Institute for Precision Health. He contributed to the expansion of the University of Notre Dame East Campus Research Complex with the addition of a 200,000-square-foot science and engineering building.
Additionally, Schnell established the Notre Dame Christmas lectures; this event is an annual gift of science to the community, adapted from the Royal Institution Christmas Lectures. To ramp up public engagement efforts, he created a Professorship of Public Understanding of Science; this is among the first professorships of its kind in the United States. Schnell also established the Rev. Joseph Carrier C.S.C. Science Medal. This is the most prestigious award presented by the College of Science, and is given for sustained, outstanding achievements in any field of science. Personal life Schnell is married to Mariana, with whom he shares two children, Andrea and David. A series of ongoing health challenges in Schnell's life has prompted him to channel his research endeavors into the field of biomedical sciences. Sources External links Fast moving fronts 2009 Santiago Schnell Schnell Lab at the University of Notre Dame Theoretical biologists 21st-century Venezuelan biologists Simón Bolívar University (Venezuela) alumni Living people University of Michigan faculty Computational chemists 1971 births
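As a hedged illustration related to the Schnell–Mendoza equation discussed in the article above, the snippet below evaluates the widely cited Lambert-W closed form for the Michaelis–Menten substrate time course, S(t) = K_M · W[(S₀/K_M) exp((S₀ − V_max t)/K_M)]. The parameter values are arbitrary, and the notation here may differ from the published derivation.

```python
import numpy as np
from scipy.special import lambertw

def substrate_time_course(t, s0, vmax, km):
    """Closed-form Michaelis-Menten substrate concentration via the Lambert W
    function (the form usually associated with the Schnell-Mendoza solution).
    Parameter values passed in below are illustrative, not fitted data."""
    arg = (s0 / km) * np.exp((s0 - vmax * t) / km)
    return km * np.real(lambertw(arg))

t = np.linspace(0.0, 50.0, 6)
print(substrate_time_course(t, s0=10.0, vmax=1.0, km=2.5).round(3))
```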
Santiago Schnell
[ "Chemistry", "Biology" ]
2,128
[ "Computational chemists", "Bioinformatics", "Computational chemistry", "Theoretical chemists", "Theoretical biologists" ]
24,987,602
https://en.wikipedia.org/wiki/Waterborne%20Disease%20and%20Outbreak%20Reporting%20System
The Waterborne Disease and Outbreak Surveillance System (WBDOSS) is a national surveillance system maintained by the U.S. Centers for Disease Control and Prevention (CDC). The WBDOSS receives data about waterborne disease outbreaks and single cases of waterborne diseases of public health importance (for example, Primary Amebic Meningoencephalitis (PAM)) in the United States and then disseminates information about these diseases, outbreaks, and their causes. WBDOSS was initiated in 1971 by CDC, the Council of State and Territorial Epidemiologists (CSTE), and the Environmental Protection Agency (EPA). Data are reported by public health departments in individual states, territories, and the Freely Associated States (composed of the Republic of the Marshall Islands, the Federated States of Micronesia and the Republic of Palau; formerly parts of the U.S.-administered Trust Territories of the Pacific Islands). Although initially designed to collect data about drinking water outbreaks in the United States, WBDOSS now includes outbreaks associated with recreational water, as well as outbreaks associated with water that is not intended for drinking (non-recreational) and water for which the intended use is unknown. Definition of a Waterborne Disease Outbreak Waterborne disease outbreaks may be associated with recreational water, water intended for drinking, water not intended for drinking (non-recreational water, for example, from cooling towers or ornamental fountains) and water of unknown intent. In order for a waterborne disease outbreak to be included in WBDOSS there must be an epidemiologic link between two or more persons that includes a location of water exposure, a clearly defined time period for the water exposure, and one or more waterborne illnesses caused by pathogens such as bacteria, parasites and viruses, or by chemicals/toxins. Common routes of exposure to waterborne pathogens include swallowing contaminated water, inhaling water droplets or airborne chemicals from the water, and direct physical contact with contaminated water. Epidemiologic evidence must implicate water or volatile compounds from the water that have entered the air as the probable source of the illness. WBDOSS outbreaks are further evaluated and classified based on the strength of evidence in the outbreak report that implicates water as the source of the outbreak. Waterborne disease outbreaks that have both strong epidemiologic data and comprehensive water-quality testing data are assigned a higher class than outbreaks with weak epidemiologic data and little or no water-quality testing data. Data Sources for WBDOSS Public health departments investigate waterborne disease outbreaks in states, territories, and Freely Associated States and are essential contributors to the WBDOSS. The primary reporting tool for WBDOSS prior to 2009 was the CDC 52.12 waterborne disease outbreak reporting form. Beginning in 2009, this form was replaced by the electronic National Outbreak Reporting System (NORS). Secondary data sources include case reports of water-associated cases of PAM caused by Naegleria fowleri infections, case reports for chemical/toxin poisoning and wound infections (reported sporadically), data about recreational water-associated Vibrio cases from the Cholera and Other Vibrio Surveillance System, and case reports for pool chemical-related health events not associated with recreational water (reported sporadically. Data Use CDC has published WBDOSS surveillance summaries on an annual or biennial basis since 1971. 
Summary statistics and descriptions of waterborne disease outbreaks were published in CDC reports until 1984 and have been published in the Morbidity and Mortality Weekly Report (MMWR) since 1985. Public health researchers and policy makers use the data to understand and reduce waterborne disease and outbreaks. WBDOSS data are available to support EPA efforts to improve drinking water quality and to provide direction for CDC’s recreational water activities, such as the Healthy Swimming program. See also National Outbreak Reporting System (NORS) References External links Waterborne Disease and Outbreak Surveillance System (WBDOSS) Healthy Swimming at the U.S. Centers for Disease Control and Prevention (CDC) about swimming and recreational water-related information Healthy Water at the U.S. Centers for Disease Control and Prevention (CDC) Council of State and Territorial Epidemiologists OutbreakNet Team at the United States Centers for Disease Control and Prevention. Public health in the United States Centers for Disease Control and Prevention Water treatment
Waterborne Disease and Outbreak Reporting System
[ "Chemistry", "Engineering", "Environmental_science" ]
894
[ "Water treatment", "Water pollution", "Water technology", "Environmental engineering" ]
31,040,106
https://en.wikipedia.org/wiki/Wiener%E2%80%93Wintner%20theorem
In mathematics, the Wiener–Wintner theorem, named after Norbert Wiener and Aurel Wintner, is a strengthening of the ergodic theorem, proved by Wiener and Wintner in 1941. Statement Suppose that τ is a measure-preserving transformation of a measure space S with finite measure. If f is a real-valued integrable function on S then the Wiener–Wintner theorem states that there is a measure 0 set E such that the average $\lim_{N\to\infty}\frac{1}{N}\sum_{n=1}^{N} e^{in\lambda}\, f(\tau^n P)$ exists for all real λ and for all P not in E. The special case for λ = 0 is essentially the Birkhoff ergodic theorem, from which the existence of a suitable measure 0 set E for any fixed λ, or any countable set of values λ, immediately follows. The point of the Wiener–Wintner theorem is that one can choose the measure 0 exceptional set E to be independent of λ. This theorem was later generalized considerably by the Return Times Theorem. References Ergodic theory
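As an added numerical illustration (not part of the article), the sketch below approximates the weighted ergodic averages in the theorem's statement for an irrational rotation of the circle, τ(x) = x + θ mod 1, with f(x) = cos(2πx). The choice of map, function, and truncation N are arbitrary assumptions for the demo; the average at the resonant frequency λ = 2πθ stays near 0.5 in magnitude while non-resonant frequencies average toward 0.

```python
import numpy as np

theta = np.sqrt(2) - 1                 # irrational rotation number (example choice)
f = lambda x: np.cos(2 * np.pi * x)

def weighted_average(x0, lam, N=200_000):
    """Approximate (1/N) * sum_{n=1..N} exp(i*lam*n) * f(tau^n x0)
    for the rotation tau(x) = x + theta (mod 1)."""
    n = np.arange(1, N + 1)
    orbit = (x0 + n * theta) % 1.0
    return np.mean(np.exp(1j * lam * n) * f(orbit))

for lam in (0.0, 1.0, 2 * np.pi * theta):
    print(lam, abs(weighted_average(0.3, lam)))
```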
Wiener–Wintner theorem
[ "Mathematics" ]
191
[ "Ergodic theory", "Dynamical systems" ]
31,041,066
https://en.wikipedia.org/wiki/Newton%20polytope
In mathematics, the Newton polytope is an integral polytope associated with a multivariate polynomial. It can be used to analyze the polynomial's behavior when specific variables are considered negligible relative to the others. Specifically, given a vector of variables $\mathbf x = (x_1, \dots, x_n)$ and a finite family $(\mathbf a_k)_k$ of pairwise distinct vectors from $\mathbb N^n$, each encoding the exponents within a monomial, consider the multivariate polynomial $f(\mathbf x) = \sum_k c_k \mathbf x^{\mathbf a_k},$ where we use the shorthand notation $\mathbf x^{\mathbf a_k}$ for the monomial $x_1^{a_{k,1}} x_2^{a_{k,2}} \cdots x_n^{a_{k,n}}$. Then the Newton polytope associated to $f$ is the convex hull of the vectors $\mathbf a_k$; that is $\operatorname{NP}(f) = \operatorname{conv}\bigl(\{\mathbf a_k\}_k\bigr).$ In order to make this well-defined, we assume that all coefficients $c_k$ are non-zero. The Newton polytope satisfies the following homomorphism-type property: $\operatorname{NP}(fg) = \operatorname{NP}(f) + \operatorname{NP}(g),$ where the addition is in the sense of Minkowski. Newton polytopes are the central object of study in tropical geometry and characterize the Gröbner bases for an ideal. See also Toric varieties Hilbert scheme Sources External links Linking Groebner Bases and Toric Varieties Algebraic geometry Polynomial functions Minkowski spacetime Polytopes
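As an added illustration (not from the article), the Newton polytope of a concrete polynomial can be computed as the convex hull of its exponent vectors. The example polynomial below is an arbitrary choice, and the sketch assumes SciPy is available.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Exponent vectors of f(x, y) = 1 + x*y + x^3 + y^3 + x^2*y^2 (example choice).
exponents = np.array([
    [0, 0],
    [1, 1],
    [3, 0],
    [0, 3],
    [2, 2],
])

hull = ConvexHull(exponents)
vertices = exponents[hull.vertices]
# Prints the vertices of the Newton polytope; the interior point (1, 1)
# corresponding to the monomial x*y is not among them.
print(vertices)
```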
Newton polytope
[ "Mathematics" ]
215
[ "Fields of abstract algebra", "Algebraic geometry" ]
31,041,801
https://en.wikipedia.org/wiki/Heck%E2%80%93Matsuda%20reaction
The Heck–Matsuda (HM) reaction is an organic reaction and a type of palladium-catalysed arylation of olefins that uses arenediazonium salts as an alternative to aryl halides and triflates. The use of arenediazonium salts presents some advantages over traditional aryl halide electrophiles; for example, phosphine ligands are not required, which removes the need for anaerobic conditions and makes the reaction more practical and easier to handle. Additionally, the reaction can be performed with or without a base and is often faster than traditional Heck protocols. Allylic alcohols, conjugated alkenes, unsaturated heterocycles and unactivated alkenes are capable of being arylated with arenediazonium salts using simple catalysts such as palladium acetate (Pd(OAc)2) or tris(dibenzylideneacetone)dipalladium(0) (Pd2dba3) at room temperature in air, and in benign and conventional solvents. In addition to the intermolecular variant of the HM reaction, intramolecular cyclization processes have also been developed for the construction of a range of oxygen and nitrogen heterocycles. The catalytic cycle for the Heck–Matsuda arylation reaction has four main steps: oxidative addition, migratory insertion or carbopalladation, syn β-elimination and reductive elimination. The proposed Heck catalytic cycle involving cationic palladium with diazonium salts was reinforced by electrospray ionization mass spectrometry (ESI) studies by Correia and co-workers. These results also show the complex interactions that occur in the coordination sphere of palladium during the Heck reaction with arenediazonium salts. A related reaction is the Meerwein arylation, which predates the Heck reaction. The Meerwein arylation often uses copper salts, but may in some cases be carried out without a transition metal. See also Palladium-catalyzed coupling reactions Meerwein arylation References Organic reactions Name reactions
Heck–Matsuda reaction
[ "Chemistry" ]
452
[ "Name reactions", "Chemical reaction stubs", "Organic reactions" ]
31,046,646
https://en.wikipedia.org/wiki/Non-Archimedean%20ordered%20field
In mathematics, a non-Archimedean ordered field is an ordered field that does not satisfy the Archimedean property. Such fields will contain infinitesimal and infinitely large elements, suitably defined. Definition Suppose K is an ordered field. We say that K satisfies the Archimedean property if, for every two positive elements x and y of K, there exists a natural number n such that nx > y. Here, nx denotes the field element resulting from forming the sum of n copies of the field element x; that is, nx is the sum x + x + ... + x with n terms. An ordered field that does not satisfy the Archimedean property is a non-Archimedean ordered field. Examples The fields of rational numbers and real numbers, with their usual orderings, satisfy the Archimedean property. Examples of non-Archimedean ordered fields are the Levi-Civita field, the hyperreal numbers, the surreal numbers, the Dehn field, and the field of rational functions with real coefficients (where we define f > g to mean that f(t) > g(t) for large enough t). Infinite and infinitesimal elements In a non-Archimedean ordered field, we can find two positive elements x and y such that, for every natural number n, nx ≤ y. This means that the positive element y/x is greater than every natural number (so it is an "infinite element"), and the positive element x/y is smaller than 1/n for every natural number n (so it is an "infinitesimal element"). Conversely, if an ordered field contains an infinite or an infinitesimal element in this sense, then it is a non-Archimedean ordered field. Applications Hyperreal fields, non-Archimedean ordered fields containing the real numbers as a subfield, are used to provide a mathematical foundation for nonstandard analysis. Max Dehn used the Dehn field, an example of a non-Archimedean ordered field, to construct non-Euclidean geometries in which the parallel postulate fails to be true but nevertheless triangles have angles summing to π. The field of rational functions over the reals can be used to construct an ordered field that is Cauchy complete (in the sense of convergence of Cauchy sequences) but is not the real numbers. This completion can be described as the field of formal Laurent series over the reals. It is a non-Archimedean ordered field. Sometimes the term "complete" is used to mean that the least upper bound property holds, i.e. for Dedekind-completeness. There are no Dedekind-complete non-Archimedean ordered fields. The subtle distinction between these two uses of the word complete is occasionally a source of confusion. References Ordered algebraic structures Real algebraic geometry Nonstandard analysis
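As a concrete check of the last example above (an illustration, not a quotation from the article): in the ordered field of rational functions R(t), with f > g meaning f(t) > g(t) for all large enough t, the element t is infinite and 1/t is infinitesimal.

```latex
% In \mathbb{R}(t), ordered by eventual behaviour as t \to \infty:
\[
  t > n \text{ for every } n \in \mathbb{N},
  \qquad \text{since } t - n > 0 \text{ for all } t > n,
\]
\[
  0 < \tfrac{1}{t} < \tfrac{1}{n} \text{ for every } n \in \mathbb{N},
  \qquad \text{since } \tfrac{1}{n} - \tfrac{1}{t} = \tfrac{t - n}{nt} > 0 \text{ for all } t > n.
\]
% Hence \mathbb{R}(t) contains both infinite and infinitesimal elements, so it is non-Archimedean.
```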
Non-Archimedean ordered field
[ "Mathematics" ]
548
[ "Mathematical structures", "Mathematical objects", "Infinity", "Nonstandard analysis", "Mathematics of infinitesimals", "Algebraic structures", "Ordered algebraic structures", "Model theory", "Order theory" ]
31,050,418
https://en.wikipedia.org/wiki/Ultrapure%20water
Ultrapure water (UPW), high-purity water or highly purified water (HPW) is water that has been purified to uncommonly stringent specifications. Ultrapure water is a term commonly used in manufacturing to emphasize the fact that the water is treated to the highest levels of purity for all contaminant types, including: organic and inorganic compounds; dissolved and particulate matter; volatile and non-volatile; reactive, and inert; hydrophilic and hydrophobic; and dissolved gases. UPW and the commonly used term deionized (DI) water are not the same. In addition to the fact that UPW has organic particles and dissolved gases removed, a typical UPW system has three stages: a pretreatment stage to produce purified water, a primary stage to further purify the water, and a polishing stage, the most expensive part of the treatment process. A number of organizations and groups develop and publish standards associated with the production of UPW. For microelectronics and power, they include Semiconductor Equipment and Materials International (SEMI) (microelectronics and photovoltaic), American Society for Testing and Materials International (ASTM International) (semiconductor, power), Electric Power Research Institute (EPRI) (power), American Society of Mechanical Engineers (ASME) (power), and International Association for the Properties of Water and Steam (IAPWS) (power). Pharmaceutical plants follow water quality standards as developed by pharmacopeias, of which three examples are the United States Pharmacopeia, European Pharmacopeia, and Japanese Pharmacopeia. The most widely used requirements for UPW quality are documented by ASTM D5127 "Standard Guide for Ultra-Pure Water Used in the Electronics and Semiconductor Industries" and SEMI F63 "Guide for ultrapure water used in semiconductor processing". Sources and control Bacteria, particles, organic, and inorganic sources of contamination vary depending on a number of factors, including the feed water to make UPW, as well as the selection of the piping materials used to convey it. Bacteria are typically reported in colony-forming units (CFU) per volume of UPW. Particles use number per volume of UPW. Total organic carbon (TOC), metallic contaminants, and anionic contaminants are measured in dimensionless terms of parts per notation, such as ppm, ppb, ppt, and ppq. Bacteria have been referred to as one of the most obstinate in this list to control. Techniques that help to minimize bacterial colony growth within UPW streams include occasional chemical or steam sanitization (which is common in the pharmaceutical industry), ultrafiltration (found in some pharmaceutical, but mostly semiconductor industries), ozonation, and optimization of piping system designs that promote the use of Reynolds Number criteria for minimum flow, along with minimization of dead legs. In modern and advanced UPW systems, positive (higher than zero) bacteria counts are typically observed on newly constructed facilities. This issue is effectively addressed by sanitization using ozone or hydrogen peroxide. With proper design of the polishing and distribution system, no positive bacteria counts are typically detected throughout the life cycle of the UPW system. Particles in UPW are the bane of the semiconductor industry, causing defects in sensitive photolithographic processes that define nanometer-sized features. In other industries, their effects can range from a nuisance to life-threatening defects. Particles can be controlled by filtration and ultrafiltration. 
Sources can include bacterial fragments, the sloughing of the component walls within the conduit's wetted stream, and the cleanliness of the jointing processes used to build the piping system. Total organic carbon in ultra pure water can contribute to bacterial proliferation by providing nutrients, can substitute as a carbide for another chemical species in a sensitive thermal process, react in unwanted ways with biochemical reactions in bioprocessing, and, in severe cases, leave unwanted residues on production parts. TOC can come from the feed water used to produce UPW, from the components used to convey the UPW (additives in the manufacturing piping products or extrusion aides and mold release agents), from subsequent manufacturing and cleaning operations of piping systems, or from dirty pipes, fittings, and valves. Metallic and anionic contamination in UPW systems can shut down enzymatic processes in bioprocessing, corrode equipment in the electrical power generation industry, and result in either short or long-term failure of electronic components in semiconductor chips and photovoltaic cells. Its sources are similar to those of TOC's. Depending on the level of purity needed, detection of these contaminants can range from simple conductivity (electrolytic) readings to sophisticated instrumentation such as ion chromatography (IC), atomic absorption spectroscopy (AA) and inductively coupled plasma mass spectrometry (ICP-MS). Applications Ultrapure water is treated through multiple steps to meet the quality standards for different users. The primary industries using UPW are: semiconductor devices fabrication process solar photovoltaics pharmaceuticals power generation (sub and super critical boilers) specialty applications such as research laboratories. The term "ultrapure water" became popular in the late 1970s and early 1980s to describe the particular quality of water used by these industries. While each industry uses what it calls "ultrapure water", the quality standards vary, meaning that the UPW used by a pharmaceutical plant is different from that used in a semiconductor fab or a power station. The standards are based on the application. For instance, semiconductor plants use UPW as a cleaning agent, so it is important that the water not contain dissolved contaminants that can precipitate or particles that may lodge on circuits and cause microchip failures. The power industry uses UPW to make steam to drive steam turbines; pharmaceutical facilities use UPW as a cleaning agent, as well as an ingredient in products, so they seek water free of endotoxins, microbials, and viruses. Today, ion exchange (IX) and electrodeionization (EDI) are the primary deionization technologies associated with UPW production, in most cases following reverse osmosis (RO). Depending on the required water quality, UPW treatment plants often also feature degasification, microfiltration, ultrafiltration, ultraviolet irradiation, and measurement instruments (e.g., total organic carbon [TOC], resistivity/conductivity, particles, pH, and specialty measurements for specific ions). Early on, softened water produced by technologies like zeolite softening or cold lime softening was a precursor to modern UPW treatment. From there, the term "deionized" water was the next advancement as synthetic IX resins were invented in 1935 and then became commercialized in the 1940s. The earliest "deionized" water systems relied on IX treatment to produce "high-purity" as determined by resistivity or conductivity measurements. 
After commercial RO membranes emerged in the 1960s, RO use with IX treatment eventually became common. EDI was commercialized in the 1980s and this technology has now become commonly associated with UPW treatment. Applications in semiconductor industry UPW is used extensively in the semiconductor industry where the highest grade of purity is required. The amount of electronic-grade or molecular-grade water used by the semiconductor industry is comparable to the water consumption of a small city; a single factory can utilize ultrapure water (UPW) at a rate of 2 MGD, or ~5500 m3/day. The UPW is usually produced on-site. The use of UPW varies; it may be used to rinse the wafer after application of chemicals, to dilute the chemicals themselves, in optics systems for immersion photolithography, or as make-up to cooling fluid in some critical applications. UPW is even sometimes used as a humidification source for the cleanroom environment. The primary, and most critical, application of UPW is in wafer cleaning in and after wet etching step during the FEOL stage. Impurities which can cause product contamination or impact process efficiency (e.g. etch rate) must be removed from the water during cleaning and etching stage. In chemical-mechanical polishing processes, water is used in addition to reagents and abrasive particles. As of 2002 1-2 parts of contaminating molecules per one million of water ones was considered to be an "ultrapure water" (e.g. semiconductor grade). Water quality standards for use in the semiconductor industry It is used in other types of electronics manufacturing in a similar fashion, such as flat-panel displays, discrete components (such as LEDs), hard disk drive platters (HDD) and solid-state drives NAND flash (SSDs), image sensors and image processors/ wafer-level optics (WLO), and crystalline silicon photovoltaics; the cleanliness requirements in the semiconductor industry, however, are currently the most stringent. Applications in pharmaceutical industry A typical use of ultrapure water in pharmaceutical and biotechnology industries is summarized in the table below: Uses of ultrapure water in the pharmaceutical and biotechnology industries In order to be used for pharmaceutical and biotechnology applications for production of licensed human and veterinary health care products it must comply with the specification of the following pharmacopeias monographs: British Pharmacopoeia (BP): Purified water Japanese Pharmacopoeia (JP): Purified water European Pharmacopoeia (Ph Eur): Aqua purificata The United States Pharmacopoeia (USP): Purified water Note: Purified Water is typically a main monograph which references other applications that use Ultrapure water Ultrapure water is often used as a critical utility for cleaning applications (as required). It is also used to generate clean steam for sterilization. The following table summarizes the specifications of two major pharmacopoeias for 'water for injection': Pharmacopoeia specifications for water for injection Ultrapure water and deionized water validation Ultrapure water validation must utilize a risk-based lifecycle approach. This approach consists of three stages – design and development, qualification, and continued verification. One should utilize current regulatory guidance to comply with regulatory expectations. 
Typical guidance documents to consult at the time of writing are: FDA Guide to Inspections of High Purity Water Systems, High Purity Water Systems (7/93), the EMEA CPMP/CVMP Note for Guidance on Quality of Water for Pharmaceutical Use (London, 2002), and USP Monograph <1231> Water For Pharmaceutical Purposes. However, other jurisdictions' documents may exist, and it is a responsibility of practitioners validating water systems to consult those. Currently, the World Health Organization (WHO) as well as the Pharmaceutical Inspection Co-operation Scheme (PIC/S) developed technical documents which outline validation requirements and strategies for water systems. Analytical methods and techniques On-line analytical measurements Conductivity/resistivity In pure water systems, electrolytic conductivity or resistivity measurement is the most common indicator of ionic contamination. The same basic measurement is read out in either conductivity units of microsiemens per centimeter (μS/cm), typical of the pharmaceutical and power industries or in resistivity units of megohm-centimeters (MΩ⋅cm) used in the microelectronics industries. These units are reciprocals of each other. Absolutely pure water has a conductivity of 0.05501 μS/cm and a resistivity of 18.18 MΩ⋅cm at 25 °C, the most common reference temperature to which these measurements are compensated. An example of the sensitivity to contamination of these measurements is that 0.1 ppb of sodium chloride raises the conductivity of pure water to 0.05523 μS/cm and lowers the resistivity to 18.11 MΩ⋅cm. Ultrapure water is easily contaminated by traces of carbon dioxide from the atmosphere passing through tiny leaks or diffusing through thin wall polymer tubing when sample lines are used for measurement. Carbon dioxide forms conductive carbonic acid in water. For this reason, conductivity probes are most often permanently inserted directly into the main ultrapure water system piping to provide real-time continuous monitoring of contamination. These probes contain both conductivity and temperature sensors to enable accurate compensation for the very large temperature influence on the conductivity of pure waters. Conductivity probes have an operating life of many years in pure water systems. They require no maintenance except for periodic verification of measurement accuracy, typically annually. Sodium Sodium is usually the first ion to break through a depleted cation exchanger. Sodium measurement can quickly detect this condition and is widely used as the indicator for cation exchange regeneration. The conductivity of cation exchange effluent is always quite high due to the presence of anions and hydrogen ion and therefore conductivity measurement is not useful for this purpose. Sodium is also measured in power plant water and steam samples because it is a common corrosive contaminant and can be detected at very low concentrations in the presence of higher amounts of ammonia and/or amine treatment which have a relatively high background conductivity. On-line sodium measurement in ultrapure water most commonly uses a glass membrane sodium ion-selective electrode and a reference electrode in an analyzer measuring a small continuously flowing side-stream sample. The voltage measured between the electrodes is proportional to the logarithm of the sodium ion activity or concentration, according to the Nernst equation. Because of the logarithmic response, low concentrations in sub-parts per billion ranges can be measured routinely. 
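Two of the numerical relationships described above — the reciprocal link between conductivity and resistivity readings, and the logarithmic (Nernstian) response of the sodium electrode — can be illustrated with a short sketch. This is illustrative only: it assumes an ideal Nernst slope of about 59.16 mV per decade at 25 °C, treats activity as equal to concentration at trace levels, and the function names are hypothetical.

```python
def resistivity_from_conductivity(cond_uS_per_cm: float) -> float:
    """Resistivity in MΩ·cm from conductivity in μS/cm (the two readings are reciprocals)."""
    return 1.0 / cond_uS_per_cm

def sodium_from_potential(e_mV: float, e_cal_mV: float, c_cal_ppb: float,
                          slope_mV_per_decade: float = 59.16) -> float:
    """Sodium concentration from electrode potential, assuming an ideal Nernstian
    response relative to a single calibration point (e_cal_mV, c_cal_ppb)."""
    return c_cal_ppb * 10.0 ** ((e_mV - e_cal_mV) / slope_mV_per_decade)

# Theoretical pure water at 25 °C: 0.05501 μS/cm corresponds to about 18.18 MΩ·cm
print(round(resistivity_from_conductivity(0.05501), 2))  # 18.18

# A potential 59.16 mV below the calibration point means ten times less sodium
print(sodium_from_potential(-59.16, 0.0, 10.0))          # 1.0 (ppb)
```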
To prevent interference from hydrogen ion, the sample pH is raised by the continuous addition of a pure amine before measurement. Calibration at low concentrations is often done with automated analyzers to save time and to eliminate variables of manual calibration. Dissolved oxygen Advanced microelectronics manufacturing processes require low single digit to 10 ppb dissolved oxygen (DO) concentrations in the ultrapure rinse water to prevent oxidation of wafer films and layers. DO in power plant water and steam must be controlled to ppb levels to minimize corrosion. Copper alloy components in power plants require single digit ppb DO concentrations whereas iron alloys can benefit from the passivation effects of higher concentrations in the 30 to 150 ppb range. Dissolved oxygen is measured by two basic technologies: electrochemical cell or optical fluorescence. Traditional electrochemical measurement uses a sensor with a gas-permeable membrane. Behind the membrane, electrodes immersed in an electrolyte develop an electric current directly proportional to the oxygen partial pressure of the sample. The signal is temperature compensated for the oxygen solubility in water, the electrochemical cell output and the diffusion rate of oxygen through the membrane. Optical fluorescent DO sensors use a light source, a fluorophore and an optical detector. The fluorophore is immersed in the sample. Light is directed at the fluorophore which absorbs energy and then re-emits light at a longer wavelength. The duration and intensity of the re-emitted light is related to the dissolved oxygen partial pressure by the Stern–Volmer relationship. The signal is temperature compensated for the solubility of oxygen in water and the fluorophore characteristics to obtain the DO concentration value. Silica Silica is a contaminant that is detrimental to microelectronics processing and must be maintained at sub-ppb levels. In steam power generation silica can form deposits on heat-exchange surfaces where it reduces thermal efficiency. In high temperature boilers, silica will volatilize and carry over with steam where it can form deposits on turbine blades which lower aerodynamic efficiency. Silica deposits are very difficult to remove. Silica is the first readily measurable species to be released by a spent anion exchange resin and is therefore used as the trigger for anion resin regeneration. Silica is non-conductive and therefore not detectable by conductivity. Silica is measured on side stream samples with colorimetric analyzers. The measurement adds reagents including a molybdate compound and a reducing agent to produce a blue silico-molybdate complex color which is detected optically and is related to concentration according to the Beer–Lambert law. Most silica analyzers operate on an automated semi-continuous basis, isolating a small volume of sample, adding reagents sequentially and allowing enough time for reactions to occur while minimizing consumption of reagents. The display and output signals are updated with each batch measurement result, typically at 10 to 20-minute intervals. Particles Particles in UPW have always presented a major problem for semiconductor manufacture, as any particle landing on a silicon wafer can bridge the gap between the electrical pathways in the semiconductor circuitry. When a pathway is short-circuited the semiconductor device will not work properly; such a failure is called a yield loss, one of the most closely watched parameters in the semiconductor industry. 
The technique of choice to detect these single particles has been to shine a light beam (a laser) through a small volume of UPW and detect the light scattered by any particles (instruments based on this technique are called laser particle counters or LPCs). As semiconductor manufacturers pack more and more transistors into the same physical space, the circuitry line-width has become narrower and narrower. As a result, LPC manufacturers have had to use more and more powerful lasers and very sophisticated scattered light detectors to keep pace. As line-width approaches 10 nm (a human hair is approximately 100,000 nm in diameter) LPC technology is becoming limited by secondary optical effects, and new particle measurement techniques will be required. Recently, one such novel analysis method named NDLS has successfully been brought into use at Electrum Laboratory (Royal Institute of Technology) in Stockholm, Sweden. NDLS is based on Dynamic Light Scattering (DLS) instrumentation. Non-volatile residue Another type of contamination in UPW is dissolved inorganic material, primarily silica. Silica is one of the most abundant minerals on the planet and is found in all water supplies. Any dissolved inorganic material has the potential to remain on the wafer as the UPW dries. Once again this can lead to a significant loss in yield. To detect trace amounts of dissolved inorganic material a measurement of non-volatile residue is commonly used. This technique involves using a nebulizer to create droplets of UPW suspended in a stream of air. These droplets are dried at a high temperature to produce an aerosol of non-volatile residue particles. A measurement device called a condensation particle counter then counts the residue particles to give a reading in parts per trillion (ppt) by weight. TOC Total organic carbon is most commonly measured by oxidizing the organics in the water to CO2, measuring the increase in the CO2 concentration after the oxidation (the delta CO2), and converting the measured delta CO2 amount into "mass of carbon" per volume concentration units. The initial CO2 in the water sample is defined as Inorganic Carbon or IC. The CO2 produced from the oxidized organics and any initial CO2 (IC) together are defined as Total Carbon or TC. The TOC value is then equal to the difference between TC and IC. Organic oxidation methods for TOC analysis Oxidation of organics to CO2 is most commonly achieved in liquid solutions by the creation of the highly oxidizing chemical species, the hydroxyl radical (OH•). Organic oxidation in a combustion environment involves the creation of other energized molecular oxygen species. For the typical TOC levels in UPW systems most methods utilize hydroxyl radicals in the liquid phase. There are multiple methods to create sufficient concentrations of hydroxyl radicals needed to completely oxidize the organics in water to CO2, each method being appropriate for different water purity levels. Typical raw waters feeding into the front end of a UPW purification system can contain TOC levels between 0.7 mg/L and 15 mg/L and require a robust oxidation method that can ensure there is enough oxygen available to completely convert all the carbon atoms in the organic molecules into CO2. Robust oxidation methods that supply sufficient oxygen include ultraviolet light (UV) with persulfate, heated persulfate, combustion, and supercritical oxidation. Typical equations showing persulfate generation of hydroxyl radicals follow.
S2O8^2− + hν (254 nm) → 2 SO4•− and SO4•− + H2O → HSO4− + OH•. When the organic concentration is less than 1 mg/L as TOC and the water is saturated with oxygen, UV light alone is sufficient to oxidize the organics to CO2; this is a simpler oxidation method. The wavelength of the UV light for the lower TOC waters must be less than 200 nm and is typically 184 nm, generated by a low pressure Hg vapor lamp. The 184 nm UV light is energetic enough to break the water molecule into OH and H radicals. The hydrogen radicals quickly react to create H2. The equations follow: H2O + hν (185 nm) → OH• + H• and H• + H• → H2. Different types of UPW TOC analyzers rely on the following relationships and reactions: IC (Inorganic Carbon) = CO2 + HCO3− + CO3^2−; TC (Total Carbon) = Organic Carbon + IC; TOC (Total Organic Carbon) = TC − IC; H2O + hν (185 nm) → OH• + H•; S2O8^2− + hν (254 nm) → 2 SO4•−; SO4•− + H2O → HSO4− + OH•. Offline lab analysis When testing the quality of UPW, consideration is given to where that quality is required and where it is to be measured. The point of distribution or delivery (POD) is the point in the system immediately after the last treatment step and before the distribution loop. It is the standard location for the majority of analytical tests. The point of connection (POC) is another commonly used point for measuring quality of UPW. It is located at the outlet of the submain or lateral take off valve used for UPW supply to the tool. Grab sample UPW analyses are either complementary or alternative to the on-line testing, depending on the availability of the instruments and the level of the UPW quality specifications. Grab sample analysis is typically performed for the following parameters: metals, anions, ammonium, silica (both dissolved and total), particles by SEM (scanning electron microscope), TOC (total organic carbon) and specific organic compounds. Metal analyses are typically performed by ICP-MS (inductively coupled plasma mass spectrometry). The detection level depends on the specific type of the instrument used and the method of the sample preparation and handling. Current state-of-the-art methods allow reaching sub-ppt (parts per trillion) levels (< 1 ppt), typically tested by ICP-MS. The anion analysis for the seven most common inorganic anions (sulfate, chloride, fluoride, phosphate, nitrite, nitrate, and bromide) is performed by ion chromatography (IC), reaching single digit ppt detection limits. IC is also used to analyze ammonium and other cations. However, ICP-MS is the preferred method for metals due to lower detection limits and its ability to detect both dissolved and non-dissolved metals in UPW. IC is also used for the detection of urea in UPW down to the 0.5 ppb level. Urea is one of the more common contaminants in UPW and probably the most difficult to treat. Silica analysis in UPW typically includes determination of reactive and total silica. Due to the complexity of silica chemistry, the form of silica measured is defined by the photometric (colorimetric) method as molybdate-reactive silica. Those forms of silica that are molybdate-reactive include dissolved simple silicates, monomeric silica and silicic acid, and an undetermined fraction of polymeric silica. Total silica determination in water employs high resolution ICP-MS, GFAA (graphite furnace atomic absorption), and the photometric method combined with silica digestion. For many natural waters, a measurement of molybdate-reactive silica by this test method provides a close approximation of total silica, and, in practice, the colorimetric method is frequently substituted for other more time-consuming techniques.
However, total silica analysis becomes more critical in UPW, where the presence of colloidal silica is expected due to silica polymerization in the ion exchange columns. Colloidal silica is considered more critical than dissolved in the electronic industry due to the bigger impact of nano-particles in water on the semiconductor manufacturing process. Sub-ppb (parts per billion) levels of silica make it equally complex for both reactive and total silica analysis, making the choice of total silica test often preferred. Although particles and TOC are usually measured using on-line methods, there is significant value in complementary or alternative off-line lab analysis. The value of the lab analysis has two aspects: cost and speciation. Smaller UPW facilities that cannot afford to purchase on-line instrumentation often choose off-line testing. TOC can be measured in the grab sample at a concentration as low as 5 ppb, using the same technique employed for the on-line analysis (see on-line method description). This detection level covers the majority of needs of less critical electronic and all pharmaceutical applications. When speciation of the organics is required for troubleshooting or design purposes, liquid chromatography-organic carbon detection (LC-OCD) provides an effective analysis. This method allows for identification of biopolymers, humics, low molecular weight acids and neutrals, and more, while characterizing nearly 100% of the organic composition in UPW with sub-ppb level of TOC. Similar to TOC, SEM particle analysis represents a lower cost alternative to the expensive online measurements and therefore it is commonly a method of choice in less critical applications. SEM analysis can provide particle counting for particle size down to 50 nm, which generally is in-line with the capability of online instruments. The test involves installation of the SEM capture filter cartridge on the UPW sampling port for sampling on the membrane disk with the pore size equal or smaller than the target size of the UPW particles. The filter is then transferred to the SEM microscope where its surface is scanned for detection and identification of the particles. The main disadvantage of SEM analysis is long sampling time. Depending on the pore size and the pressure in the UPW system, the sampling time can be between one week and one month. However, typical robustness and stability of the particle filtration systems allow for successful applications of the SEM method. Application of Energy Dispersive X-ray Spectroscopy (SEM-EDS) provides compositional analysis of the particles, making SEM also helpful for systems with on-line particle counters. Bacteria analysis is typically conducted following ASTM method F1094. The test method covers sampling and analysis of high purity water from water purification systems and water transmission systems by the direct sampling tap and filtration of the sample collected in the bag. These test methods cover both the sampling of water lines and the subsequent microbiological analysis of the sample by the culture technique. The microorganisms recovered from the water samples and counted on the filters include both aerobes and facultative anaerobes. The temperature of incubation is controlled at 28 ± 2 °C, and the period of incubation is 48 h or 72 h, if time permits. Longer incubation times are typically recommended for most critical applications. However 48 hrs is typically sufficient to detect water quality upsets. 
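As a simple illustration of how the culture results described above are reported, the colony count on the membrane filter is normalized by the volume of UPW passed through it. The numbers and the function name below are purely illustrative and are not taken from ASTM F1094.

```python
def cfu_per_ml(colonies_counted: int, volume_filtered_ml: float) -> float:
    """Colony-forming units per millilitre from a membrane-filtration culture."""
    return colonies_counted / volume_filtered_ml

# e.g. 3 colonies after filtering 100 mL of sample
print(cfu_per_ml(3, 100.0))  # 0.03 CFU/mL, i.e. 30 CFU per litre
```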
Purification process UPW system design for semiconductor industry Typically, city feed-water (containing all the unwanted contaminants previously mentioned) is taken through a series of purification steps that, depending on the desired quality of UPW, includes gross filtration for large particulates, carbon filtration, water softening, reverse osmosis, exposure to ultraviolet (UV) light for TOC and/or bacterial static control, polishing by ion exchange resins or electrodeionization (EDI), and finally filtration or ultrafiltration. Some systems use direct return, reverse return or serpentine loops that return the water to a storage area, providing continuous re-circulation, while others are single-use systems that run from point of UPW production to point of use. The constant re-circulation action in the former continuously polishes the water with every pass. The latter can be prone to contamination build up if it is left stagnant with no use. For modern UPW systems it is important to consider specific site and process requirements such as environmental constraints (e.g., wastewater discharge limits) and reclaim opportunities (e.g., is there a mandated minimum amount of reclaim required). UPW systems consist of three subsystems: pretreatment, primary, and polishing. Most systems are similar in design but may vary in the pretreatment section depending on the nature of the source water. Pretreatment: Pretreatment produces purified water. Typical pretreatments employed are two pass reverse osmosis, Demineralization plus reverse osmosis or HERO (high efficiency reverse osmosis). In addition, the degree of filtration upstream of these processes will be dictated by the level of suspended solids, turbidity and organics present in the source water. The common types of filtration are multi-media, automatic backwashable filters and ultrafiltration for suspended solids removal and turbidity reduction and Activated Carbon for the reduction of organics. The Activated Carbon may also be used for removal of chlorine upstream of the reverse osmosis of demineralization steps. If activated carbon is not employed then sodium bisulfite is used to de-chlorinate the feed water. Primary: Primary treatment consists of ultraviolet light (UV) for organic reduction, EDI and or mixed bed ion exchange for demineralization. The mixed beds may be non-regenerable (following EDI), in-situ or externally regenerated. The last step in this section may be dissolved oxygen removal utilizing the membrane degasification process or vacuum degasification. Polishing: Polishing consists of UV, heat exchange to control constant temperature in the UPW supply, non-regenerable ion exchange, membrane degasification (to polish to final UPW requirements) and ultrafiltration to achieve the required particle level. Some semiconductor Fabs require hot UPW for some of their processes. In this instance polished UPW is heated in the range of 70 to 80C before being delivered to manufacturing. Most of these systems include heat recovery wherein the excess hot UPW returned from manufacturing goes to a heat recovery unit before being returned to the UPW feed tank to conserve on the use of heating water or the need to cool the hot UPW return flow. Key UPW design criteria for semiconductor fabrication Remove contaminants as far forward in the system as practical and cost effective. Steady state flow in the makeup and primary sections to avoid TOC and conductivity spikes (NO start/stop operation). Recirculate excess flow upstream. 
Minimize the use of chemicals following the reverse osmosis units. Consider EDI and non-regenerable primary mixed beds in lieu of in-situ or externally regenerated primary beds to assure optimum quality UPW makeup and minimize the potential for upset. Select materials that will not contribute TOC and particles to the system particularly in the primary and polishing sections. Minimize stainless steel material in the polishing loop and, if used, electropolishing is recommended. Minimize dead legs in the piping to avoid the potential for bacteria propagation. Maintain minimum scouring velocities in the piping and distribution network to ensure turbulent flow. The recommended minimum is based on a Reynolds number of 3,000 Re or higher. This can range up to 10,000 Re depending on the comfort level of the designer. Use only virgin resin in the polishing mixed beds. Replace every one to two years. Supply UPW to manufacturing at constant flow and constant pressure to avoid system upsets such as particle bursts. Utilize reverse return distribution loop design for hydraulic balance and to avoid backflow (return to supply). Capacity considerations Capacity plays an important role in the engineering decisions about UPW system configuration and sizing. For example, polish systems of older and smaller size electronic systems were designed for minimum flow velocity criteria of up to 60 cm (2 ft) per second at the end of pipe to avoid bacterial contamination. Larger fabs required larger size UPW systems. The figure below illustrates the increasing consumption driven by the larger size of wafer manufactured in newer fabs. However, for larger pipe (driven by higher consumption) the 60 cm (2 ft) per second criteria meant extremely high consumption and an oversized polishing system. The industry responded to this issue and through extensive investigation, choice of higher purity materials, and optimized distribution design was able to reduce the design criteria for minimum flow, using Reynolds number criteria. The figure on the right illustrates an interesting coincidence that the largest diameter of the main supply line of UPW is equal to the size of the wafer in production (this relation is known as Klaiber's law). Growing size of the piping as well as the system overall requires new approaches to space management and process optimization. As a result, newer UPW systems look rather alike, which is in contrast with smaller UPW systems that could have less optimized design due to the lower impact of inefficiency on cost and space management. Another capacity consideration is related to operability of the system. Small lab scale (a dozen liters-per-minute/few gallons-per-minute-capacities) systems do not typically involve operators, while large scale systems are usually operated 24x7 by well-trained operators. As a result, smaller systems are designed with no use of chemicals and lower water and energy efficiency than larger systems. Critical UPW issues Particles control Particles in UPW are critical contaminants, which result in numerous forms of defects on wafer surfaces. With the large volume of UPW, which comes into contact with each wafer, particle deposition on the wafer readily occurs. Once deposited, the particles are not easily removed from the wafer surfaces.
With the increased use of dilute chemistries, particles in UPW are an issue not only with UPW rinse of the wafers, but also due to introduction of the particles during dilute wet cleans and etch, where UPW is a major constituent of the chemistry used. Particle levels must be controlled to nm sizes, and current trends are approaching 10 nm and smaller for particle control in UPW. While filters are used for the main loop, components of the UPW system can contribute additional particle contamination into the water, and at the point of use, additional filtration is recommended. The filters themselves must be constructed of ultraclean and robust materials, which do not contribute organics or cations/anions into the UPW, and must be integrity tested out of the factory to assure reliability and performance. Common materials include nylon, polyethylene, polysulfone, and fluoropolymers. Filters will commonly be constructed of a combination of polymers, and for UPW use are thermally welded without using adhesives or other contaminating additives. The microporous structure of the filter is critical in providing particle control, and this structure can be isotropic or asymmetric. In the former case the pore distribution is uniform through the filter, while in the latter the finer surface provides the particle removal, with the coarser structure giving physical support as well reducing the overall differential pressure. Filters can be cartridge formats where the UPW is flowed through the pleated structure with contaminants collected directly on the filter surface. Common in UPW systems are ultrafilters (UF), composed of hollow fiber membranes. In this configuration, the UPW is flowed across the hollow fiber, sweeping contaminants to a waste stream, known as the retentate stream. The retentate stream is only a small percentage of the total flow, and is sent to waste. The product water, or the permeate stream, is the UPW passing through the skin of the hollow fiber and exiting through the center of the hollow fiber. The UF is a highly efficient filtration product for UPW, and the sweeping of the particles into the retentate stream yield extremely long life with only occasional cleaning needed. Use of the UF in UPW systems provides excellent particle control to single digit nanometer particle sizes. Point of use applications (POU) for UPW filtration include wet etch and clean, rinse prior to IPA vapor or liquid dry, as well as lithography dispense UPW rinse following develop. These applications pose specific challenges for POU UPW filtration. For wet etch and clean, most tools are single wafer processes, which require flow through the filter upon tool demand. The resultant intermittent flow, which will range from full flow through the filter upon initiation of UPW flow through the spray nozzle, and then back to a trickle flow. The trickle flow is typically maintained to prevent a dead leg in the tool. The filter must be robust to withstand the pressure and low cycling, and must continue to retain captured particles throughout the service life of the filter. This requires proper pleat design and geometry, as well as media designed to optimized particle capture and retention. Certain tools may use a fixed filter housing with replaceable filters, whereas other tools may use disposable filter capsules for the POU UPW. For lithography applications, small filter capsules are used. 
Similar to the challenges for wet etch and clean POU UPW applications, for lithography UPW rinse, the flow through the filter is intermittent, though at a low flow and pressure, so the physical robustness is not as critical. Another POU UPW application for lithography is the immersion water used at the lens/wafer interface for 193 nm immersion lithography patterning. The UPW forms a puddle between the lens and the wafer, improving NA, and the UPW must be extremely pure. POU filtration is used on the UPW just prior to the stepper scanner. For POU UPW applications, sub 15 nm filters are currently in use for advanced 2x and 1x nodes. The filters are commonly made of nylon, high-density polyethylene (HDPE), polyarylsulfone (or polysulfone), or polytetrafluoroethylene (PTFE) membranes, with hardware typically consisting of HDPE or PFA. Point of use (POU) treatment for organics Point of use treatment is often applied in critical tool applications such as Immersion lithography and Mask preparation in order to maintain consistent ultrapure water quality. UPW systems located in the central utilities building provide the Fab with quality water but may not provide adequate water purification consistency for these processes. In the case when urea, THM, isopropyl alcohol (IPA) or other difficult to remove (low molecular weight neutral compounds) TOC species may be present, additional treatment is required thru advanced oxidation process (AOP) using systems. This is particularly important when tight TOC specification below 1 ppb is required to be attained. These difficult to control organics have been proven to impact yield and device performance especially at the most demanding process steps. One of the successful examples of the POU organics control down to 0.5 ppb TOC level is AOP combining ammonium persulfate and UV oxidation (refer to the persulfate+UV oxidation chemistry in the TOC measurement section). Available proprietary POU advanced oxidation processes can consistently reduce TOC to 0.5 parts per billion (ppb) in addition to maintaining consistent temperature, oxygen and particles exceeding the SEMI F063 requirements. This is important because the slightest variation can directly affect the manufacturing process, significantly influencing product yields. UPW recycling in the semiconductor industry The semiconductor industry uses a large amount of ultrapure water to rinse contaminants from the surface of the silicon wafers that are later turned into computer chips. The ultrapure water is by definition extremely low in contamination, but once it makes contact with the wafer surface it carries residual chemicals or particles from the surface that then end up in the industrial waste treatment system of the manufacturing facility. The contamination level of the rinse water can vary a great deal depending on the particular process step that is being rinsed at the time. A "first rinse" step may carry a large amount of residual contaminants and particles compared to a last rinse that may carry relatively low amounts of contamination. Typical semiconductor plants have only two drain systems for all of these rinses which are also combined with acid waste and therefore the rinse water is not effectively reused due to risk of contamination causing manufacturing process defects. As noted above, ultrapure water is commonly not recycled in semiconductor applications, but rather reclaimed in other processes. There is one company in the US, Exergy Systems, Inc. 
of Irvine, California, that offers a patented deionized water recycling process. This product has been successfully tested at a number of semiconductor processes. Definitions: The following definitions are used by ITRS: UPW Recycle – Water reuse in the same application after treatment Water Reuse – Use in secondary application Water Reclaim – Extracting water from wastewater Water reclaim and recycle: Some semiconductor manufacturing plants have been using reclaimed water for non-process applications such as chemical aspirators where the discharge water is sent to industrial waste. Water reclamation is also a typical application where spent rinse water from the manufacturing facility may be used in cooling tower supply, exhaust scrubber supply, or point of use abatement systems. UPW Recycling is not as typical and involves collecting the spent manufacturing rinse water, treating it and re-using it back in the wafer rinse process. Some additional water treatment may be required for any of these cases depending on the quality of the spent rinse water and the application of the reclaimed water. These are fairly common practices in many semiconductor facilities worldwide, however there is a limitation to how much water can be reclaimed and recycled if not considering reuse in the manufacturing process. UPW recycling: Recycling rinse water from the semiconductor manufacturing process has been discouraged by many manufacturing engineers for decades because of the risk that the contamination from the chemical residue and particles may end up back in the UPW feed water and result in product defects. Modern Ultrapure Water systems are very effective at removing ionic contamination down to parts per trillion levels (ppt) whereas organic contamination of ultrapure water systems is still in the parts per billion levels (ppb). In any case recycling the process water rinses for UPW makeup has always been a great concern and until recently this was not a common practice. Increasing water and wastewater costs in parts of the US and Asia have pushed some semiconductor companies to investigate the recycling of manufacturing process rinse water in the UPW makeup system. Some companies have incorporated an approach that uses complex large scale treatment designed for worst case conditions of the combined waste water discharge. More recently new approaches have been developed to incorporate a detailed water management plan to try to minimize the treatment system cost and complexity. Water management plan: The key to maximizing water reclaim, recycle, and reuse is having a well thought out water management plan. A successful water management plan includes full understanding of how the rinse waters are used in the manufacturing process including chemicals used and their byproducts. With the development of this critical component, a drain collection system can be designed to segregate concentrated chemicals from moderately contaminated rinse waters, and lightly contaminated rinse waters. Once segregated into separate collection systems the once considered chemical process waste streams can be repurposed or sold as a product stream, and the rinse waters can be reclaimed. A water management plan will also require a significant amount of sample data and analysis to determine proper drain segregation, application of online analytical measurement, diversions control, and final treatment technology. 
Collecting these samples and performing laboratory analysis can help characterize the various waste streams and determine the potential of their respective re-use. In the case of UPW process rinse water the lab analysis data can then be used to profile typical and non-typical levels of contamination which then can be used to design the rinse water treatment system. In general it is most cost effective to design the system to treat the typical level of contamination that may occur 80-90% of the time, then incorporate on-line sensors and controls to divert the rinse water to industrial waste or to non-critical use such as cooling towers when the contamination level exceeds the capability of the treatment system. By incorporating all these aspects of a water management plan in a semiconductor manufacturing site the level of water use can be reduced by as much as 90%. Transport Stainless steel remains a piping material of choice for the pharmaceutical industry. Due to its metallic contribution, most steel was removed from microelectronics UPW systems in the 1980s and replaced with high performance polymers of polyvinylidene fluoride (PVDF), perfluoroalkoxy (PFA), ethylene chlorotrifluoroethylene (ECTFE) and polytetrafluoroethylene (PTFE) in the US and Europe. In Asia, polyvinyl chloride (PVC), chlorinated polyvinyl chloride (CPVC) and polypropylene (PP) are popular, along with the high performance polymers. Methods of joining thermoplastics used for UPW transport Thermoplastics can be joined by different thermofusion techniques. Socket fusion (SF) is a process where the outside diameter of the pipe uses a "close fit" match to the inner diameter of a fitting. Both pipe and fitting are heated on a bushing (outer and inner, respectively) for a prescribed period of time. Then the pipe is pressed into the fitting. Upon cooling the welded parts are removed from the clamp. Conventional butt fusion (CBF) is a process where the two components to be joined have the same inner and outer diameters. The ends are heated by pressing them against the opposite sides of a heater plate for a prescribed period of time. Then the two components are brought together. Upon cooling the welded parts are removed from the clamp. Bead and crevice free (BCF), uses a process of placing two thermoplastic components having the same inner and outer diameters together. Next an inflatable bladder is introduced in the inner bore of the components and placed equidistance within the two components. A heater head clamps the components together and the bladder is inflated. After a prescribed period of time the heater head begins to cool and the bladder deflates. Once completely cooled the bladder is removed and the joined components are taken out of the clamping station. The benefit of the BCF system is that there is no weld bead, meaning that the surface of the weld zone is routinely as smooth as the inner wall of the pipe. Infrared fusion (IR) is a process similar to CBF except that the component ends never touch the heater head. Instead, the energy to melt the thermoplastic is transferred by radiant heat. IR comes in two variations; one uses overlap distance when bringing the two components together while the other uses pressure. The use of overlap in the former reduces the variation seen in bead size, meaning that precise dimensional tolerances needed for industrial installations can be maintained better. References Notes References Water Water treatment Liquid water Semiconductor device fabrication
Ultrapure water
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
9,944
[ "Hydrology", "Water", "Microtechnology", "Water treatment", "Water pollution", "Semiconductor device fabrication", "Environmental engineering", "Water technology" ]
36,234,044
https://en.wikipedia.org/wiki/Trichophaea%20hemisphaerioides
Trichophaea hemisphaerioides is a European species of apothecial fungus belonging to the family Pyronemataceae. They appear as whitish cups with brown hairs on the margin and outer surface, up to 1.5 cm across on recently burned ground, often amongst mosses such as Funaria. References Pyronemataceae Fungi described in 1897 Fungus species
Trichophaea hemisphaerioides
[ "Biology" ]
79
[ "Fungi", "Fungus species" ]
36,241,033
https://en.wikipedia.org/wiki/Shape%20dynamics
In theoretical physics, shape dynamics is a theory of gravity that implements Mach's principle, developed with the specific goal to obviate the problem of time and thereby open a new path toward the resolution of incompatibilities between general relativity and quantum mechanics. Shape dynamics is dynamically equivalent to the canonical formulation of general relativity, known as the ADM formalism. Shape dynamics is not formulated as an implementation of spacetime diffeomorphism invariance, but as an implementation of spatial relationalism based on spatial diffeomorphisms and spatial Weyl symmetry. An important consequence of shape dynamics is the absence of a problem of time in canonical quantum gravity. The replacement of the spacetime picture with a picture of evolving spatial conformal geometry opens the door for a number of new approaches to quantum gravity. An important development in this theory was contributed in 2010 by Henrique Gomes, Sean Gryb and Tim Koslowski, building on an approach initiated by Julian Barbour. Background Mach's principle has been an important inspiration for the construction of general relativity, but the physical interpretation of Einstein's formulation of general relativity still requires external clocks and rods and thus fails to be manifestly relational. Mach's principle would be fully implemented if the predictions of general relativity were independent of the choice of clocks and rods. Barbour and Bertotti conjectured that Jacobi's principle and a mechanism they called "best matching" were construction principles for a fully Machian theory. Barbour implemented these principles in collaboration with Niall Ó Murchadha, Edward Anderson, Brendan Foster and Bryan Kelleher to derive the ADM formalism in constant mean curvature gauge. This did not implement Mach's principle, because the predictions of general relativity in constant mean curvature gauge depend on the choice of clocks and rods. Mach's principle was successfully implemented in 2010 by Henrique Gomes, Sean Gryb and Tim Koslowski who drew on the work of Barbour and his collaborators to describe gravity in a fully relational manner as the evolution of the conformal geometry of space. Relation with general relativity Shape dynamics possesses the same dynamics as general relativity, but has different gauge orbits. The link between general relativity and shape dynamics can be established using the ADM formalism in the following way: Shape dynamics can be gauge fixed in such a way that its initial value problem and its equations of motion coincide with the initial value problem and equations of motion of the ADM formalism in constant mean extrinsic curvature gauge. This equivalence ensures that classical shape dynamics and classical general relativity are locally indistinguishable. However, there is the possibility for global differences. Problem of time in shape dynamics The shape dynamics formulation of gravity possesses a physical Hamiltonian that generates evolution of spatial conformal geometry. This disentangles the problem of time in quantum gravity: The gauge problem (the choice of foliation in the spacetime description) is replaced by the problem of finding spatial conformal geometries, leaving an evolution that is comparable to a system with time dependent Hamiltonian. The problem of time is suggested to be completely solved by restricting oneself to "objective observables," which are those observables that do not depend on any external clock or rod. 
Arrow of time in shape dynamics Recent work by Julian Barbour, Tim Koslowski and Flavio Mercati demonstrates that Shape Dynamics possesses a physical arrow of time given by the growth of complexity and the dynamical storage of locally accessible records of the past. This is a property of the dynamical law and does not require any special initial condition. Further reading Mach's principle References Theoretical physics
Shape dynamics
[ "Physics" ]
745
[ "Theoretical physics" ]
29,662,162
https://en.wikipedia.org/wiki/Phosphatidylethanol
Phosphatidylethanols (PEth) are a group of phospholipids formed only in the presence of ethanol via the action of phospholipase D (PLD). PEth accumulates in blood and is removed slowly, making it a useful biomarker for alcohol consumption. PEth is also thought to contribute to the symptoms of alcohol intoxication. Structure Chemically, phosphatidylethanols are phospholipids carrying two fatty acid chains, which are variable in structure, and one phosphate ethyl ester. Biosynthesis When ethanol is present, PLD substitutes ethanol for water and covalently attaches the alcohol as the head group of the phospholipid; hence the name phosphatidylethanol. Normally PLD incorporates water to generate phosphatidic acid (PA); the process is termed transphosphatidylation. PLD continues to generate PA in the presence of ethanol even while PEth is generated, so the effects of ethanol transphosphatidylation arise through the generation of the unnatural lipid rather than through depletion of PA. Biological effects The lipid accumulates in the human body and competes at agonist sites of lipid-gated ion channels, contributing to alcohol intoxication. The chemical similarity of PEth to phosphatidic acid (PA) and phosphatidylinositol 4,5-bisphosphate (PIP2) suggests a likely broad perturbation to lipid signaling; the exact role of PEth as a competitive lipid ligand has not been studied extensively. Marker in blood Levels of phosphatidylethanols in blood are used as markers of previous alcohol consumption. An increase of alcohol intake by ~20 g ethanol/day will raise the PEth 16:0/18:1 concentration by ~0.10 μmol/L, and vice versa if the alcohol consumption has decreased. However, it has been demonstrated that there can be significant inter-personal variation, leading to potential misclassification between moderate and heavy drinkers. After cessation of alcohol intake, the half-life of PEth is between 4.5 and 10 days in the first week and between 5 and 12 days in the second week. As a blood marker PEth is more sensitive than carbohydrate deficient transferrin (CDT), urinary ethyl glucuronide (EtG) and ethyl sulfate (EtS). Interpretation The Society of PEth Research published a harmonization document (2022 Consensus of Basel) for the interpretation of phosphatidylethanol concentrations in the clinical and forensic setting. This supposed consensus was created by those with a vested interest in phosphatidylethanol research. The consensus defined the target measurand (PEth 16:0/18:1 in whole blood), cutoff concentrations (20 ng/mL and 200 ng/mL), and minimal requirements for the applied analytical method (accuracy and precision within 15%). References Phospholipids Ethyl esters
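The post-cessation decline quoted above can be illustrated with simple first-order (exponential) decay. This is only a sketch: it assumes a single constant half-life, whereas the article notes that the half-life itself ranges from roughly 4.5 to 12 days and varies between individuals; the starting value of 200 ng/mL is chosen only because it matches the upper cutoff mentioned above.

```python
def peth_remaining(c0_ng_per_ml: float, days: float, half_life_days: float) -> float:
    """Concentration remaining after `days` of abstinence, assuming first-order elimination."""
    return c0_ng_per_ml * 0.5 ** (days / half_life_days)

# Starting at 200 ng/mL, bracket the first-week half-life range quoted above
for t_half in (4.5, 10.0):
    print(t_half, round(peth_remaining(200.0, 7.0, t_half), 1))
# about 68.0 ng/mL remains with a 4.5-day half-life, about 123.1 ng/mL with a 10-day half-life
```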
Phosphatidylethanol
[ "Chemistry", "Biology" ]
656
[ "Phospholipids", "Biotechnology stubs", "Signal transduction", "Biochemistry stubs", "Biochemistry" ]
29,662,506
https://en.wikipedia.org/wiki/Rigid%20line%20inclusion
A rigid line inclusion, also called a stiffener, is a mathematical model used in solid mechanics to describe a narrow hard phase dispersed within a matrix material. This inclusion is idealised as an infinitely rigid and thin reinforcement, so that it represents a sort of ‘inverse’ crack, from which the nomenclature ‘anticrack’ derives. From the mechanical point of view, a stiffener introduces a kinematical constraint, imposing that it may only undergo a rigid-body motion along its line. Theoretical model The stiffener model has been used to investigate different mechanical problems in classical elasticity (load diffusion, inclusion at a bimaterial interface). The main characteristics of the theoretical solutions are basically the following. Similarly to a fracture, a square-root singularity in the stress/strain fields is present at the tip of the inclusion. In a homogeneous matrix subject to uniform stress at infinity, such a singularity only arises when a normal stress acts parallel or orthogonal to the inclusion line, while a stiffener parallel to a simple shear does not disturb the ambient field. Experimental validation The characteristics of the elastic solution have been experimentally confirmed through photoelastic transmission experiments. Interaction of rigid line inclusions The interaction of rigid line inclusions in parallel, collinear and radial configurations has been studied using the boundary element method (BEM) and validated using photoelasticity. Shear bands emerging at the stiffener tip Analytical solutions obtained in prestressed elasticity show the possibility of the emergence of shear bands at the tip of the stiffener. References External links Laboratory for Physical Modeling of Structures and Photoelasticity (University of Trento, Italy) Solid mechanics Rigid bodies mechanics
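For readers who want the form of the singularity mentioned above, the LaTeX fragment below writes the generic square-root singular field in the usual crack-tip style; the stiffener intensity factor and angular functions are placeholder symbols, since the article does not give the specific solution.

% Generic square-root singular field near the stiffener tip,
% written by analogy with crack-tip asymptotics (placeholder notation).
\sigma_{ij}(r,\theta) \;\sim\; \frac{K^{(s)}}{\sqrt{2\pi r}}\, f_{ij}(\theta)
\qquad (r \to 0)
% K^{(s)}        : stiffener "stress intensity" factor (placeholder symbol)
% f_{ij}(\theta) : angular functions depending on the loading mode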
Rigid line inclusion
[ "Physics" ]
335
[ "Solid mechanics", "Mechanics" ]
39,138,679
https://en.wikipedia.org/wiki/Radiation%20effects%20on%20optical%20fibers
When optical fibers are exposed to ionizing radiation such as energetic electrons, protons, neutrons, X-rays, γ-radiation, etc., they undergo 'damage'. The term 'damage' primarily refers to added optical absorption, resulting in loss of the propagating optical signal and decreased power at the output end, which could lead to premature failure of the component and/or system. Description In the professional literature, the effect is often named Radiation Induced Attenuation (RIA), or radiation-induced darkening. The loss of power, or 'darkening', occurs because the chemical bonds forming the optical fiber core are disrupted by the impinging high-energy radiation, resulting in the appearance of new electronic transition states that give rise to additional absorption in the wavelength regions of interest. The radiation-induced defects tend to absorb more at shorter wavelengths, and hence radiation-damaged glass appears to yellow. Once the radiation source is removed, the fiber can recover some of its original transparency (a process called recovery or "self-healing"), which occurs due to thermal annealing or photobleaching of the defects. The extent of damage is governed by the balance between defect generation (excess attenuation) on one hand and defect annihilation (recovery) on the other hand. If the dose rate is low, an equilibrium state (between attenuation and recovery) is reached with some degree of darkening. However, if the dose rate is high, the utility of the fiber depends on the overall induced attenuation and the recovery time. Understanding these radiation-induced effects is important particularly for space-based applications, where optical fibers are being considered for use in an increasing number of applications. Defects Intrinsic defects are present in the matrix of even a single-component glass material like pure silica. These include per-oxy linkages, POL (≡Si-O-O-Si≡), which are oxygen interstitials, and oxygen deficient centers, ODC (≡Si-Si≡), which are oxygen vacancies. When exposed to ionizing radiation, these sites trap charge (typically holes) to form per-oxy radicals, POR (≡Si-O-O.), and E’ centers (≡Si.), respectively. These trapped charges interact with the electric field of the electromagnetic wave, causing absorption. In addition, rapidly cooled silica has strained ≡Si-O-Si≡ bonds, which are cleaved upon irradiation to form non-bridging oxygen hole centers (NBOHC), depicted as ≡Si-O., and E’ centers by trapping holes and electrons, respectively. When the glass contains a second network former with the same valence as silicon, such as germanium, the difference in electronegativities favors the dopant as a hole trap. Reducing damage Hence radiation damage occurs in doped silica glass. To improve the radiation resistance of pure silica core fibers, it is necessary to minimize the number density of these intrinsic defects. Minimization of defects is achieved not only by reducing the incorporation of impurities in the glass but also by controlling the input gas composition, optimizing the thermal history of the glass at all stages of fiber manufacturing, and optimizing the stress in the fiber core. Other strategies include incorporation of dopants (such as fluorine) in the core that minimize formation of the defect centers discussed above. Optical fibers All optical fibers undergo some darkening depending on a number of factors that include: ionization type, optical fiber core glass composition, operating wavelength, dose rate, total accumulated dose, temperature and power propagating through the core. 
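The balance between defect generation and recovery described above can be illustrated with a toy rate model. In the sketch below the defect population N grows in proportion to dose rate and anneals at a constant rate, and the induced attenuation is taken proportional to N; the rate constants and the proportionality factor are invented for illustration and are not values for any real fiber.

def ria_vs_time(dose_rate, k_gen=1.0, k_rec=0.1, alpha_per_defect=0.01,
                dt=0.1, t_end=100.0):
    """Toy model of radiation-induced attenuation (RIA):
    dN/dt = k_gen * dose_rate - k_rec * N,   RIA = alpha_per_defect * N.
    All constants are illustrative, not measured fiber parameters."""
    n, t, history = 0.0, 0.0, []
    while t <= t_end:
        history.append((t, alpha_per_defect * n))
        n += (k_gen * dose_rate - k_rec * n) * dt
        t += dt
    return history

# At a steady, low dose rate the RIA approaches the equilibrium value
# alpha_per_defect * k_gen * dose_rate / k_rec, mirroring the equilibrium
# state between attenuation and recovery described above.
curve = ria_vs_time(dose_rate=1.0)
print(curve[-1])  # close to the steady-state attenuation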
Since attenuation is composition dependent, it is observed that fibers having pure silica cores and fluorine down doped claddings are amongst the most radiation hard fibers. The presence of dopants in the core such as germanium, phosphorus, boron, aluminum, erbium, ytterbium, thulium, holmium etc. compromises the radiation hardness of optical fibers. To minimize damage consequences, it is better to use a pure silica core fiber at higher operating wavelength, lower dose rate, lower total accumulated dose, higher temperature (accelerated recovery) and higher signal power (photo-bleaching). In addition to these intrinsic steps, external engineering may be required to shield the fiber from the effects of radiation. Core fibers Germanium-doped core fibers can be radiation hard even at high concentrations of germanium. Such fibers reach saturation, anneal well at higher temperatures and are also responsive to photo-bleaching. In case of phosphorus-doped core fibers, attenuation increases linearly with increasing phosphorus content and these fibers do not reach saturation. Recovery is very difficult even at higher temperatures. Boron, aluminum and all the rare-earth dopants significantly affect fiber loss. Radiation performances of various SM, MM and PM fibers manufactured by different vendors that were tested in wide range of radiation environments have been compiled. References Fiber optics Radiation effects
Radiation effects on optical fibers
[ "Physics", "Materials_science", "Engineering" ]
1,009
[ "Physical phenomena", "Materials science", "Radiation", "Condensed matter physics", "Radiation effects" ]
39,139,120
https://en.wikipedia.org/wiki/Fillet%20weld
Fillet welding refers to the process of joining two pieces of metal together when they are perpendicular or at an angle. These welds are commonly referred to as tee joints, which are two pieces of metal perpendicular to each other, or lap joints, which are two pieces of metal that overlap and are welded at the edges. The weld is triangular in shape and may have a concave, flat or convex surface depending on the welder's technique. Welders use fillet welds when connecting flanges to pipes and welding cross sections of infrastructure, and when bolts are not strong enough and will wear off easily. There are two main types of fillet weld: the transverse fillet weld and the parallel fillet weld. Aspects There are five parts to each fillet weld, known as the root, toe, face, leg and throat. The root of the weld is the part of deepest penetration, which is at the angle opposite the hypotenuse. The toes of the weld are essentially the edges or the end points of the hypotenuse. The face of the weld is the outer visible surface, the hypotenuse that you see when looking at a fillet weld. The legs are the other two sides of the triangular fillet weld. The leg length is usually designated as the size of the weld. The throat of the weld is the distance from the center of the face to the root of the weld. Typically the depth of the throat should be at least as great as the thickness of the metal being welded. Notation Fillet welding notation is important to recognize when reading technical drawings. The use of this notation tells the welder exactly what is expected by the fabricator. The symbol for a fillet weld is in the shape of a triangle. This triangle will lie either below a flat line or above it, with an arrow coming off of the flat line pointing to a joint. The flat line is called the "reference line". The side on which the triangle symbol is placed is important because it gives an indication of which side of the joint is to be intersected by the weld. It is recognized that there are two different approaches in the global market to designate the arrow side and other side on drawings; a description of the two approaches is contained in International Standard ISO 2553. They are called the "A-System" (which is more commonly used in Europe) and the "B-System" (which is basically the ANSI/AWS system used in the US). In the "A-System" two parallel lines are used as the reference line: one is a continuous line, the other is a dashed line. In the "B-System", there is only one reference line, which is a continuous line. If there is a single reference line (B-System) and the triangle is positioned below the line, then the weld is going to be on the arrow side. If there is a single reference line (B-System) and the triangle is positioned above the line, then the weld is going to be on the opposite side of the arrow. When you find an arrow pointing to a joint with two triangles, one sitting below and one sitting above the line even with each other, then there is intended to be a fillet weld on the arrow side of the joint as well as the opposite side of the joint. If the weld is to be continuous around a piece of metal such as a pipe or square, then a small circle will be placed around the point where the flat line and the arrow pointing to the joint are connected. Manufacturers also include the strength that the weld must be. This is indicated by a letter and number combination just before the flat line. An example of this is "E70", meaning the arc electrode must have a minimum tensile strength of 70,000 psi (about 480 MPa). There are also symbols that describe the aesthetics of the weld. 
A gentle curve pointing away from the hypotenuse means a concave weld is required, a straight line parallel with the hypotenuse calls for a flat-faced weld, and a gentle curve towards the hypotenuse calls for a convex weld. The surface of the weld can be manipulated either by welding technique or by use of machining or grinding tools after the weld is completed. When reading a manufacturer's technical drawings, you might also come across weld dimensions. The weld can be sized in many different ways, such as the length of the weld, the measurements of the legs of the weld, and the spaces between welds. Along with the triangle, there will usually be a size for the weld written to the left of the triangle as two leg dimensions; the first gives the size of the vertical leg and the second the size of the horizontal leg. To the right of the triangle, there will be a measurement of exactly how long the weld is supposed to be. If the measurements of the drawing are in mm, the welds are likewise measured in mm; for example, the weld would be marked 3 x 10, the mm being understood automatically. Intermittent fillet welds An intermittent fillet weld is one that is not continuous across a joint. These welds are portrayed as a set of two numbers to the right of the triangle instead of just one. The first number, as mentioned earlier, refers to the length of the weld. The second number, separated from the first by a “-”, refers to the pitch. The pitch is a measurement from midpoint to midpoint of the intermittent welds. Intermittent welding is used when either a continuous weld is not necessary, or when a continuous weld threatens the joint by warping. In some cases intermittent welds are staggered on both sides of the joint. In this case, the two triangles are not placed directly on top of each other. Instead, the side of the joint to receive the first weld will have a triangle further to the left than the following side’s triangle notation. As an end result of alternating intermittent fillet welds at each side, the space between welds on one side of the joint will be the midpoint of the opposite side’s weld. See also Butt welding Fillet (mechanics) Welding joint Notes References Hultenius, D. (2008). Lecture 14 – Welded Connections. Weman, Klas (2003). Welding processes handbook. New York, NY: CRC Press LLC. ISO 2553:2013, Welding and allied processes - Symbolic representation on drawings - Welded joints Cary, Howard B; Scott C. Helzer (2005). Modern Welding Technology. Upper Saddle River, New Jersey: Pearson Education. Haque, M. E. (2010). Weld connections. Informally published manuscript, Department of Construction Science, Texas A&M University, College Station, Retrieved from http://faculty.arch.tamu.edu/mhaque/cosc421/Weld.pdf. Althouse, A. D. (1997). Modern welding. Tinley Park, Ill: Goodheart-Willcox. Welding Metalworking
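Since the leg size is what the drawing calls out but the throat is what carries the load, a quick conversion between the two is often useful. The sketch below uses only the plane geometry of a right-angle fillet (theoretical throat equals leg divided by the square root of two for equal legs, or the altitude of the triangle for unequal legs); it ignores penetration and reinforcement, and the function names are illustrative only.

import math

def throat_from_legs(leg1, leg2):
    """Theoretical throat of a 90-degree fillet weld: the perpendicular
    distance from the root to the face (hypotenuse) of the triangle."""
    return (leg1 * leg2) / math.hypot(leg1, leg2)

def leg_for_required_throat(throat):
    """Equal-leg size needed to achieve a given theoretical throat."""
    return throat * math.sqrt(2)

print(throat_from_legs(6.0, 6.0))        # 6 mm legs -> ~4.24 mm throat
print(leg_for_required_throat(5.0))      # ~7.07 mm legs for a 5 mm throat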
Fillet weld
[ "Engineering" ]
1,473
[ "Welding", "Mechanical engineering" ]
39,139,834
https://en.wikipedia.org/wiki/Relativistic%20angular%20momentum
In physics, relativistic angular momentum refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the three-dimensional quantity in classical mechanics. Angular momentum is an important dynamical quantity derived from position and momentum. It is a measure of an object's rotational motion and resistance to changes in its rotation. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection between symmetries and conservation laws is made by Noether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime, are described by the Lorentz group, or more generally the Poincaré group. Physical quantities that remain separate in classical physics are naturally combined in SR and GR by enforcing the postulates of relativity. Most notably, the space and time coordinates combine into the four-position, and energy and momentum combine into the four-momentum. The components of these four-vectors depend on the frame of reference used, and change under Lorentz transformations to other inertial frames or accelerated frames. Relativistic angular momentum is less obvious. The classical definition of angular momentum is the cross product of position x with momentum p to obtain a pseudovector , or alternatively as the exterior product to obtain a second order antisymmetric tensor . What does this combine with, if anything? There is another vector quantity not often discussed – it is the time-varying moment of mass polar-vector (not the moment of inertia) related to the boost of the centre of mass of the system, and this combines with the classical angular momentum pseudovector to form an antisymmetric tensor of second order, in exactly the same way as the electric field polar-vector combines with the magnetic field pseudovector to form the electromagnetic field antisymmetric tensor. For rotating mass–energy distributions (such as gyroscopes, planets, stars, and black holes) instead of point-like particles, the angular momentum tensor is expressed in terms of the stress–energy tensor of the rotating object. In special relativity alone, in the rest frame of a spinning object, there is an intrinsic angular momentum analogous to the "spin" in quantum mechanics and relativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics, elementary particles have spin and this is an additional contribution to the orbital angular momentum operator, yielding the total angular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of the Pauli–Lubanski pseudovector. Definitions Orbital 3d angular momentum For reference and background, two closely related forms of angular momentum are given. In classical mechanics, the orbital angular momentum of a particle with instantaneous three-dimensional position vector and momentum vector , is defined as the axial vector which has three components, that are systematically given by cyclic permutations of Cartesian directions (e.g. 
change to , to , to , repeat) A related definition is to conceive orbital angular momentum as a plane element. This can be achieved by replacing the cross product by the exterior product in the language of exterior algebra, and angular momentum becomes a contravariant second order antisymmetric tensor or writing and momentum vector , the components can be compactly abbreviated in tensor index notation where the indices and take the values 1, 2, 3. On the other hand, the components can be systematically displayed fully in a 3 × 3 antisymmetric matrix This quantity is additive, and for an isolated system, the total angular momentum of a system is conserved. Dynamic mass moment In classical mechanics, the three-dimensional quantity for a particle of mass m moving with velocity u has the dimensions of mass moment – length multiplied by mass. It is equal to the mass of the particle or system of particles multiplied by the distance from the space origin to the centre of mass (COM) at the time origin (), as measured in the lab frame. There is no universal symbol, nor even a universal name, for this quantity. Different authors may denote it by other symbols if any (for example μ), may designate other names, and may define N to be the negative of what is used here. The above form has the advantage that it resembles the familiar Galilean transformation for position, which in turn is the non-relativistic boost transformation between inertial frames. This vector is also additive: for a system of particles, the vector sum is the resultant where the system's centre of mass position and velocity and total mass are respectively For an isolated system, N is conserved in time, which can be seen by differentiating with respect to time. The angular momentum L is a pseudovector, but N is an "ordinary" (polar) vector, and is therefore invariant under inversion. The resultant Ntot for a multiparticle system has the physical visualization that, whatever the complicated motion of all the particles are, they move in such a way that the system's COM moves in a straight line. This does not necessarily mean all particles "follow" the COM, nor that all particles all move in almost the same direction simultaneously, only that the collective motion of the particles is constrained in relation to the centre of mass. In special relativity, if the particle moves with velocity u relative to the lab frame, then where is the Lorentz factor and m is the mass (i.e. the rest mass) of the particle. The corresponding relativistic mass moment in terms of , , , , in the same lab frame is The Cartesian components are Special relativity Coordinate transformations for a boost in the x direction Consider a coordinate frame which moves with velocity relative to another frame F, along the direction of the coincident axes. The origins of the two coordinate frames coincide at times . The mass–energy and momentum components of an object, as well as position coordinates and time in frame are transformed to , , , and in according to the Lorentz transformations The Lorentz factor here applies to the velocity v, the relative velocity between the frames. This is not necessarily the same as the velocity u of an object. For the orbital 3-angular momentum L as a pseudovector, we have In the second terms of and , the and components of the cross product can be inferred by recognizing cyclic permutations of and with the components of , Now, is parallel to the relative velocity , and the other components and are perpendicular to . 
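The defining expressions that the preceding paragraphs describe in words can be summarised compactly. The LaTeX fragment below gives the standard textbook forms for the orbital angular momentum and the mass moment; the symbols follow the article (x position, p momentum, u velocity, m rest mass, γ the Lorentz factor), and sign and notation conventions vary between authors.

% Orbital 3-angular momentum, as a pseudovector and as an antisymmetric tensor
\mathbf{L} = \mathbf{x} \times \mathbf{p}, \qquad
L^{ij} = x^{i} p^{j} - x^{j} p^{i}.

% Classical dynamic mass moment, and its special-relativistic counterpart
\mathbf{N} = m\left(\mathbf{x} - \mathbf{u}\,t\right), \qquad
\mathbf{N} = \gamma m \left(\mathbf{x} - \mathbf{u}\,t\right)
           = \frac{E}{c^{2}}\,\mathbf{x} - \mathbf{p}\,t .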
The parallel–perpendicular correspondence can be facilitated by splitting the entire 3-angular momentum pseudovector into components parallel (∥) and perpendicular (⊥) to v, in each frame, Then the component equations can be collected into the pseudovector equations Therefore, the components of angular momentum along the direction of motion do not change, while the components perpendicular do change. By contrast to the transformations of space and time, time and the spatial coordinates change along the direction of motion, while those perpendicular do not. These transformations are true for all , not just for motion along the axes. Considering as a tensor, we get a similar result where The boost of the dynamic mass moment along the direction is Collecting parallel and perpendicular components as before Again, the components parallel to the direction of relative motion do not change, those perpendicular do change. Vector transformations for a boost in any direction So far these are only the parallel and perpendicular decompositions of the vectors. The transformations on the full vectors can be constructed from them as follows (throughout here is a pseudovector for concreteness and compatibility with vector algebra). Introduce a unit vector in the direction of , given by . The parallel components are given by the vector projection of or into while the perpendicular component by vector rejection of L or N from n and the transformations are or reinstating , These are very similar to the Lorentz transformations of the electric field and magnetic field , see Classical electromagnetism and special relativity. Alternatively, starting from the vector Lorentz transformations of time, space, energy, and momentum, for a boost with velocity , inserting these into the definitions gives the transformations. 4d angular momentum as a bivector In relativistic mechanics, the COM boost and orbital 3-space angular momentum of a rotating object are combined into a four-dimensional bivector in terms of the four-position X and the four-momentum P of the object In components which are six independent quantities altogether. Since the components of and are frame-dependent, so is . Three components are those of the familiar classical 3-space orbital angular momentum, and the other three are the relativistic mass moment, multiplied by . The tensor is antisymmetric; The components of the tensor can be systematically displayed as a matrix in which the last array is a block matrix formed by treating N as a row vector which matrix transposes to the column vector NT, and as a 3 × 3 antisymmetric matrix. The lines are merely inserted to show where the blocks are. Again, this tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system: Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. The angular momentum tensor M is indeed a tensor, the components change according to a Lorentz transformation matrix Λ, as illustrated in the usual way by tensor index notation where, for a boost (without rotations) with normalized velocity , the Lorentz transformation matrix elements are and the covariant βi and contravariant βi components of β are the same since these are just parameters. In other words, one can Lorentz-transform the four position and four momentum separately, and then antisymmetrize those newly found components to obtain the angular momentum tensor in the new frame. 
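A minimal sketch of the four-dimensional object discussed above, with the block structure that packages L and N together; index placement and overall signs differ between references, so treat this as one common convention rather than a verbatim reproduction of the article's formulas.

% Angular momentum bivector built from four-position X and four-momentum P
M^{\alpha\beta} = X^{\alpha} P^{\beta} - X^{\beta} P^{\alpha},
\qquad M^{\alpha\beta} = -M^{\beta\alpha}.

% Identification of the components (one common sign convention):
M^{ij} = \epsilon^{ijk} L_{k}, \qquad M^{i0} = c\,N^{i}.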
Rigid body rotation For a particle moving in a curve, the cross product of its angular velocity (a pseudovector) and position give its tangential velocity which cannot exceed a magnitude of , since in SR the translational velocity of any massive object cannot exceed the speed of light c. Mathematically this constraint is , the vertical bars denote the magnitude of the vector. If the angle between and is (assumed to be nonzero, otherwise u would be zero corresponding to no motion at all), then and the angular velocity is restricted by The maximum angular velocity of any massive object therefore depends on the size of the object. For a given |x|, the minimum upper limit occurs when and are perpendicular, so that and . For a rotating rigid body rotating with an angular velocity , the is tangential velocity at a point inside the object. For every point in the object, there is a maximum angular velocity. The angular velocity (pseudovector) is related to the angular momentum (pseudovector) through the moment of inertia tensor (the dot denotes tensor contraction on one index). The relativistic angular momentum is also limited by the size of the object. Spin in special relativity Four-spin A particle may have a "built-in" angular momentum independent of its motion, called spin and denoted s. It is a 3d pseudovector like orbital angular momentum L. The spin has a corresponding spin magnetic moment, so if the particle is subject to interactions (like electromagnetic fields or spin-orbit coupling), the direction of the particle's spin vector will change, but its magnitude will be constant. The extension to special relativity is straightforward. For some lab frame F, let F′ be the rest frame of the particle and suppose the particle moves with constant 3-velocity u. Then F′ is boosted with the same velocity and the Lorentz transformations apply as usual; it is more convenient to use . As a four-vector in special relativity, the four-spin S generally takes the usual form of a four-vector with a timelike component st and spatial components s, in the lab frame although in the rest frame of the particle, it is defined so the timelike component is zero and the spatial components are those of particle's actual spin vector, in the notation here s′, so in the particle's frame Equating norms leads to the invariant relation so if the magnitude of spin is given in the rest frame of the particle and lab frame of an observer, the magnitude of the timelike component st is given in the lab frame also. The covariant constraint on the spin is orthogonality to the velocity vector, In 3-vector notation for explicitness, the transformations are The inverse relations are the components of spin the lab frame, calculated from those in the particle's rest frame. Although the spin of the particle is constant for a given particle, it appears to be different in the lab frame. The Pauli–Lubanski pseudovector The Pauli–Lubanski pseudovector applies to both massive and massless particles. Spin–orbital decomposition In general, the total angular momentum tensor splits into an orbital component and a spin component, This applies to a particle, a mass–energy–momentum distribution, or field. Angular momentum of a mass–energy–momentum distribution Angular momentum from the mass–energy–momentum tensor The following is a summary from MTW. Throughout for simplicity, Cartesian coordinates are assumed. In special and general relativity, a distribution of mass–energy–momentum, e.g. 
a fluid, or a star, is described by the stress–energy tensor Tβγ (a second order tensor field depending on space and time). Since T00 is the energy density, Tj0 for j = 1, 2, 3 is the jth component of the object's 3d momentum per unit volume, and Tij form components of the stress tensor including shear and normal stresses, the orbital angular momentum density about the position 4-vector β is given by a 3rd order tensor This is antisymmetric in α and β. In special and general relativity, T is a symmetric tensor, but in other contexts (e.g., quantum field theory), it may not be. Let Ω be a region of 4d spacetime. The boundary is a 3d spacetime hypersurface ("spacetime surface volume" as opposed to "spatial surface area"), denoted ∂Ω where "∂" means "boundary". Integrating the angular momentum density over a 3d spacetime hypersurface yields the angular momentum tensor about , where dΣγ is the volume 1-form playing the role of a unit vector normal to a 2d surface in ordinary 3d Euclidean space. The integral is taken over the coordinates X, not . The integral within a spacelike surface of constant time is which collectively form the angular momentum tensor. Angular momentum about the centre of mass There is an intrinsic angular momentum in the centre-of-mass frame, in other words, the angular momentum about any event on the wordline of the object's center of mass. Since T00 is the energy density of the object, the spatial coordinates of the center of mass are given by Setting Y = XCOM obtains the orbital angular momentum density about the centre-of-mass of the object. Angular momentum conservation The conservation of energy–momentum is given in differential form by the continuity equation where ∂γ is the four-gradient. (In non-Cartesian coordinates and general relativity this would be replaced by the covariant derivative). The total angular momentum conservation is given by another continuity equation The integral equations use Gauss' theorem in spacetime Torque in special relativity The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time: or in tensor components: where F is the 4d force acting on the particle at the event X. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. Angular momentum as the generator of spacetime boosts and rotations The angular momentum tensor is the generator of boosts and rotations for the Lorentz group. Lorentz boosts can be parametrized by rapidity, and a 3d unit vector pointing in the direction of the boost, which combine into the "rapidity vector" where is the speed of the relative motion divided by the speed of light. Spatial rotations can be parametrized by the axis–angle representation, the angle and a unit vector pointing in the direction of the axis, which combine into an "axis-angle vector" Each unit vector only has two independent components, the third is determined from the unit magnitude. Altogether there are six parameters of the Lorentz group; three for rotations and three for boosts. The (homogeneous) Lorentz group is 6-dimensional. 
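For the distributional case, the expressions referred to above take the following standard form (MTW-style conventions; the reference event is Y and the field point is X). Signs and index ordering again vary between texts, so this is a sketch of one common convention.

% Angular momentum density about the event Y, built from the stress-energy tensor
\mathcal{M}^{\alpha\beta\gamma}
  = \left(X^{\alpha}-Y^{\alpha}\right) T^{\beta\gamma}
  - \left(X^{\beta}-Y^{\beta}\right) T^{\alpha\gamma}.

% Local conservation laws (flat spacetime, Cartesian coordinates);
% the second follows from the first because T is symmetric.
\partial_{\gamma} T^{\beta\gamma} = 0, \qquad
\partial_{\gamma} \mathcal{M}^{\alpha\beta\gamma} = 0 .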
The boost generators and rotation generators can be combined into one generator for Lorentz transformations; the antisymmetric angular momentum tensor, with components and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrix , with entries: where the summation convention over the repeated indices i, j, k has been used to prevent clumsy summation signs. The general Lorentz transformation is then given by the matrix exponential and the summation convention has been applied to the repeated matrix indices α and β. The general Lorentz transformation Λ is the transformation law for any four vector A = (A0, A1, A2, A3), giving the components of this same 4-vector in another inertial frame of reference The angular momentum tensor forms 6 of the 10 generators of the Poincaré group, the other four are the components of the four-momentum for spacetime translations. Angular momentum in general relativity The angular momentum of test particles in a gently curved background is more complicated in GR but can be generalized in a straightforward manner. If the Lagrangian is expressed with respect to angular variables as the generalized coordinates, then the angular momenta are the functional derivatives of the Lagrangian with respect to the angular velocities. Referred to Cartesian coordinates, these are typically given by the off-diagonal shear terms of the spacelike part of the stress–energy tensor. If the spacetime supports a Killing vector field tangent to a circle, then the angular momentum about the axis is conserved. One also wishes to study the effect of a compact, rotating mass on its surrounding spacetime. The prototype solution is of the Kerr metric, which describes the spacetime around an axially symmetric black hole. It is obviously impossible to draw a point on the event horizon of a Kerr black hole and watch it circle around. However, the solution does support a constant of the system that acts mathematically similarly to an angular momentum. See also References Further reading Special relativity General relativity External links Angular momentum Dynamics (mechanics) Angular momentum Rotation Angular momentum
Relativistic angular momentum
[ "Physics", "Mathematics" ]
3,893
[ "Physical phenomena", "Physical quantities", "Quantity", "Classical mechanics", "Rotation", "General relativity", "Special relativity", "Motion (physics)", "Dynamics (mechanics)", "Theory of relativity", "Angular momentum", "Momentum", "Moment (physics)" ]
39,141,214
https://en.wikipedia.org/wiki/Standard%20linear%20solid%20Q%20model%20for%20attenuation%20and%20dispersion
A standard linear solid Q model (SLS) for attenuation and dispersion is one of many mathematical Q models that gives a definition of how the earth responds to seismic waves. When a plane wave propagates through a homogeneous viscoelastic medium, the effects of amplitude attenuation and velocity dispersion may be combined conveniently into a single dimensionless parameter, Q, the medium-quality factor. Transmission losses may occur due to friction or fluid movement, and whatever the physical mechanism, they can be conveniently described with an empirical formulation where elastic moduli and propagation velocity are complex functions of frequency. Ursin and Toverud compared different Q models including the above model (SLS-model). In order to compare the different models they considered plane-wave propagation in a homogeneous viscoelastic medium. They used the Kolsky-Futterman model as a reference and studied the SLS model. This model was compared with the Kolsky-Futterman model. The Kolsky-Futterman model was first described in the article ‘Dispersive body waves’ by Futterman (1962). Kolsky's attenuation-dispersion model The Kolsky model assumes the attenuation α(w) to be strictly linear with frequency over the range of measurement: And defines the phase velocity as: SLS model The standard linear solid model is developed from the stress-strain relation. Using a linear combination of springs and dashpots to represent elastic and viscous components, Ursin and Toverud used one relaxation time. The model was first developed by Zener. The attenuation is given by: And defines the phase velocity as: Computations For each of the Q models, Ursin and Toverud computed the attenuation (1)(3) in the frequency band 0–300 Hz. Figure 1. presents the graph for the Kolsky model (blue) with two datasets (left and right) and same data – attenuation with cr=2000 m/s, Qr=100 and wr=2π100 Hz. The SLS model (green) has two different datasets, left c0=1990 m/s, Qc=100 and τr−1=2π100 right c0=1985 m/s, Qc=84.71 and τr−1=6.75x100 Notes References Seismology measurement Geophysics
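A rough numerical illustration of the comparison described in the 'Computations' paragraph can be set up as follows. The Kolsky attenuation is taken in its usual textbook form α(ω) = ω / (2 c_r Q_r), and the SLS attenuation uses the standard single-relaxation-time (Zener) expression for 1/Q(ω) inserted into α = ω / (2 c Q); these functional forms are assumed here and are not necessarily the exact parameterisation used by Ursin and Toverud, while the numerical values (c_r = 2000 m/s, Q_r = 100, ω_r = 2π·100) are those quoted above.

import math

# Parameters quoted in the article for the Kolsky reference model
C_R = 2000.0               # reference velocity, m/s
Q_R = 100.0                # reference quality factor
W_R = 2 * math.pi * 100.0  # reference angular frequency, rad/s

def alpha_kolsky(w, c_r=C_R, q_r=Q_R):
    """Kolsky-Futterman attenuation, textbook form: alpha = w / (2 c Q)."""
    return w / (2.0 * c_r * q_r)

def alpha_sls(w, c0=2000.0, q_peak=100.0, tau=1.0 / W_R):
    """Single-relaxation-time (Zener/SLS) attenuation: 1/Q peaks at w = 1/tau.
    Functional form assumed for illustration; parameters are not from the paper."""
    inv_q = (1.0 / q_peak) * (2.0 * w * tau) / (1.0 + (w * tau) ** 2)
    return w * inv_q / (2.0 * c0)

# Attenuation at a few frequencies in the 0-300 Hz band considered in the article
for f in (10, 100, 300):
    w = 2 * math.pi * f
    print(f, alpha_kolsky(w), alpha_sls(w))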
Standard linear solid Q model for attenuation and dispersion
[ "Physics" ]
504
[ "Applied and interdisciplinary physics", "Geophysics" ]
39,144,241
https://en.wikipedia.org/wiki/Thermodynamic%20operation
A thermodynamic operation is an externally imposed manipulation that affects a thermodynamic system. The change can be either in the connection or wall between a thermodynamic system and its surroundings, or in the value of some variable in the surroundings that is in contact with a wall of the system that allows transfer of the extensive quantity belonging that variable. It is assumed in thermodynamics that the operation is conducted in ignorance of any pertinent microscopic information. A thermodynamic operation requires a contribution from an independent external agency, that does not come from the passive properties of the systems. Perhaps the first expression of the distinction between a thermodynamic operation and a thermodynamic process is in Kelvin's statement of the second law of thermodynamics: "It is impossible, by means of inanimate material agency, to derive mechanical effect from any portion of matter by cooling it below the temperature of the surrounding objects." A sequence of events that occurred other than "by means of inanimate material agency" would entail an action by an animate agency, or at least an independent external agency. Such an agency could impose some thermodynamic operations. For example, those operations might create a heat pump, which of course would comply with the second law. A Maxwell's demon conducts an extremely idealized and naturally unrealizable kind of thermodynamic operation. Another commonly used term that indicates a thermodynamic operation is 'change of constraint', for example referring to the removal of a wall between two otherwise isolated compartments. An ordinary language expression for a thermodynamic operation is used by Edward A. Guggenheim: "tampering" with the bodies. Distinction between thermodynamic operation and thermodynamic process A typical thermodynamic operation is externally imposed change of position of a piston, so as to alter the volume of the system of interest. Another thermodynamic operation is a removal of an initially separating wall, a manipulation that unites two systems into one undivided system. A typical thermodynamic process consists of a redistribution that spreads a conserved quantity between a system and its surroundings across a previously impermeable but newly semi-permeable wall between them. More generally, a process can be considered as a transfer of some quantity that is defined by a change of an extensive state variable of the system, corresponding to a conserved quantity, so that a transfer balance equation can be written. According to Uffink, "... thermodynamic processes only take place after an external intervention on the system (such as: removing a partition, establishing thermal contact with a heat bath, pushing a piston, etc.). They do not correspond to the autonomous behaviour of a free system." For example, for a closed system of interest, a change of internal energy (an extensive state variable of the system) can be occasioned by transfer of energy as heat. In thermodynamics, heat is not an extensive state variable of the system. The quantity of heat transferred, is however, defined by the amount of adiabatic work that would produce the same change of the internal energy as the heat transfer; energy transferred as heat is the conserved quantity. As a matter of history, the distinction, between a thermodynamic operation and a thermodynamic process, is not found in these terms in nineteenth century accounts. 
For example, Kelvin spoke of a "thermodynamic operation" when he meant what present-day terminology calls a thermodynamic operation followed by a thermodynamic process. Again, Planck usually spoke of a "process" when our present-day terminology would speak of a thermodynamic operation followed by a thermodynamic process. Planck's "natural processes" contrasted with actions of Maxwell's demon Planck held that all "natural processes" (meaning, in present-day terminology, a thermodynamic operation followed by a thermodynamic process) are irreversible and proceed in the sense of increase of entropy sum. In these terms, it would be by thermodynamic operations that, if he could exist, Maxwell's demon would conduct unnatural affairs, which include transitions in the sense away from thermodynamic equilibrium. They are physically theoretically conceivable up to a point, but are not natural processes in Planck's sense. The reason is that ordinary thermodynamic operations are conducted in total ignorance of the very kinds of microscopic information that is essential to the efforts of Maxwell's demon. Examples of thermodynamic operations Thermodynamic cycle A thermodynamic cycle is constructed as a sequence of stages or steps. Each stage consists of a thermodynamic operation followed by a thermodynamic process. For example, an initial thermodynamic operation of a cycle of a Carnot heat engine could be taken as the setting of the working body, at a known high temperature, into contact with a thermal reservoir at the same temperature (the hot reservoir), through a wall permeable only to heat, while it remains in mechanical contact with the work reservoir. This thermodynamic operation is followed by a thermodynamic process, in which the expansion of the working body is so slow as to be effectively reversible, while internal energy is transferred as heat from the hot reservoir to the working body and as work from the working body to the work reservoir. Theoretically, the process terminates eventually, and this ends the stage. The engine is then subject to another thermodynamic operation, and the cycle proceeds into another stage. The cycle completes when the thermodynamic variables (the thermodynamic state) of the working body return to their initial values. Virtual thermodynamic operations A refrigeration device passes a working substance through successive stages, overall constituting a cycle. This may be brought about not by moving or changing separating walls around an unmoving body of working substance, but rather by moving a body of working substance to bring about exposure to a cyclic succession of unmoving unchanging walls. The effect is virtually a cycle of thermodynamic operations. The kinetic energy of bulk motion of the working substance is not a significant feature of the device, and the working substance may be practically considered as nearly at rest. Composition of systems For many chains of reasoning in thermodynamics, it is convenient to think of the combination of two systems into one. It is imagined that the two systems, separated from their surroundings, are juxtaposed and (by a shift of viewpoint) regarded as constituting a new, composite system. The composite system is imagined amid its new overall surroundings. This sets up the possibility of interaction between the two subsystems and between the composite system and its overall surroundings, for example by allowing contact through a wall with a particular kind of permeability. 
This conceptual device was introduced into thermodynamics mainly in the work of Carathéodory, and has been widely used since then. Additivity of extensive variables If the thermodynamic operation is entire removal of walls, then extensive state variables of the composed system are the respective sums of those of the component systems. This is called the additivity of extensive variables. Scaling of a system A thermodynamic system consisting of a single phase, in the absence of external forces, in its own state of internal thermodynamic equilibrium, is homogeneous. This means that the material in any region of the system can be interchanged with the material of any congruent and parallel region of the system, and the effect is to leave the system thermodynamically unchanged. The thermodynamic operation of scaling is the creation of a new homogeneous system whose size is a multiple of the old size, and whose intensive variables have the same values. Traditionally the size is stated by the mass of the system, but sometimes it is stated by the entropy, or by the volume. For a given such system , scaled by the real number to yield a new one , a state function, , such that , is said to be extensive. Such a function as is called a homogeneous function of degree 1. There are two different concepts mentioned here, sharing the same name: (a) the mathematical concept of degree-1 homogeneity in the scaling function; and (b) the physical concept of the spatial homogeneity of the system. It happens that the two agree here, but that is not because they are tautologous. It is a contingent fact of thermodynamics. Splitting and recomposition of systems If two systems, and  , have identical intensive variables, a thermodynamic operation of wall removal can compose them into a single system, , with the same intensive variables. If, for example, their internal energies are in the ratio , then the composed system, , has internal energy in the ratio of to that of the system . By the inverse thermodynamic operation, the system can be split into two subsystems in the obvious way. As usual, these thermodynamic operations are conducted in total ignorance of the microscopic states of the systems. More particularly, it is characteristic of macroscopic thermodynamics that the probability vanishes, that the splitting operation occurs at an instant when system is in the kind of extreme transient microscopic state envisaged by the Poincaré recurrence argument. Such splitting and recomposition is in accord with the above defined additivity of extensive variables. Statements of laws Thermodynamic operations appear in the statements of the laws of thermodynamics. For the zeroth law, one considers operations of thermally connecting and disconnecting systems. For the second law, some statements contemplate an operation of connecting two initially unconnected systems. For the third law, one statement is that no finite sequence of thermodynamic operations can bring a system to absolute zero temperature. References Bibliography for citations Bailyn, M. (1994). A Survey of Thermodynamics, American Institute of Physics Press, New York, . Callen, H.B. (1960/1985). Thermodynamics and an Introduction to Thermostatistics, (1st edition 1960) 2nd edition 1985, Wiley, New York, . A translation may be found here. Also a mostly reliable translation is to be found at Kestin, J. (1976). The Second Law of Thermodynamics, Dowden, Hutchinson & Ross, Stroudsburg PA.. Giles, R. (1964). 
Mathematical Foundations of Thermodynamics, Macmillan, New York. Guggenheim, E.A. (1949/1967). Thermodynamics. An Advanced Treatment for Chemists and Physicists, fifth revised edition, North-Holland, Amsterdam. Guggenheim, E.A. (1949). 'Statistical basis of thermodynamics', Research, 2: 450–454. Gyarmati, I. (1967/1970). Non-equilibrium Thermodynamics. Field Theory and Variational Principles, translated from the 1967 Hungarian by E. Gyarmati and W.F. Heinz, Springer-Verlag, New York. Haase, R. (1971). Survey of Fundamental Laws, chapter 1 of Thermodynamics, pages 1–97 of volume 1, ed. W. Jost, of Physical Chemistry. An Advanced Treatise, ed. H. Eyring, D. Henderson, W. Jost, Academic Press, New York, lcn 73–117081. Kelvin, Lord (1857). On the alteration of temperature accompanying changes of pressure in fluids, Proc. Roy. Soc., June. Landsberg, P.T. (1961). Thermodynamics with Quantum Statistical Illustrations, Interscience, New York. Lieb, E.H., Yngvason, J. (1999). The physics and mathematics of the second law of thermodynamics, Physics Reports, 314: 1–96, p. 14. Planck, M. (1887). 'Ueber das Princip der Vermehrung der Entropie', Annalen der Physik und Chemie, new series 30: 562–582. Planck, M., (1897/1903). Treatise on Thermodynamics, translated by A. Ogg, Longmans, Green, & Co., London. Planck, M. (1935). Bemerkungen über Quantitätsparameter, Intenstitätsparameter und stabiles Gleichgewicht, Physica, 2: 1029–1032. Tisza, L. (1966). Generalized Thermodynamics, M.I.T Press, Cambridge MA. Uffink, J. (2001). Bluff your way in the second law of thermodynamics, Stud. Hist. Phil. Mod. Phys., 32(3): 305–394, publisher Elsevier Science. Thermodynamics Dynamical systems
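The scaling and additivity properties described in the 'Scaling of a system' and 'Additivity of extensive variables' subsections above can be stated in one line each; the symbols (S for the system, λ for the scale factor, X for a state function) are introduced here for clarity and are not the article's own notation.

% A state function X is extensive (homogeneous of degree 1) if, for every
% scale factor \lambda > 0 applied to the system S,
X(\lambda S) = \lambda\, X(S).

% Additivity under composition of subsystems by entire removal of walls
% (here "+" denotes the composition of the two subsystems):
X(S_{1} + S_{2}) = X(S_{1}) + X(S_{2}).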
Thermodynamic operation
[ "Physics", "Chemistry", "Mathematics" ]
2,767
[ "Mechanics", "Thermodynamics", "Dynamical systems" ]
39,145,558
https://en.wikipedia.org/wiki/Crack%20tip%20opening%20displacement
Crack tip opening displacement (CTOD), commonly denoted δ, is the distance between the opposite faces of a crack tip at the 90° intercept position. The position behind the crack tip at which the distance is measured is arbitrary, but a commonly used convention is the point where two 45° lines, starting at the crack tip, intersect the crack faces. The parameter is used in fracture mechanics to characterize the loading on a crack and can be related to other crack tip loading parameters such as the stress intensity factor and the elastic-plastic J-integral. For plane stress conditions, the CTOD can be written (from the strip-yield model) as δ = (8 σys a / π E) ln[sec(π σ∞ / 2 σys)], where σys is the yield stress, a is the crack length, E is the Young's modulus, and σ∞ is the remote applied stress. Under fatigue loading, the range of movement of the crack tip during a loading cycle can be used for determining the rate of fatigue growth using a crack growth equation; the crack extension per cycle, Δa, is typically minute, on the order of nanometres to micrometres. History Examination of fractured test specimens led to the observation that the crack faces had moved apart prior to fracture, due to the blunting of an initially sharp crack by plastic deformation. The degree of crack blunting increased in proportion to the toughness of the material. This observation led to considering the opening at the crack tip as a measure of fracture toughness. The COD was originally independently proposed by Alan Cottrell and A. A. Wells. This parameter became known as CTOD. G. R. Irwin later postulated that crack-tip plasticity makes the crack behave as if it were slightly longer. Thus, estimation of CTOD can be done by solving for the displacement at the physical crack tip. Use as a design parameter CTOD is a single parameter that accommodates crack tip plasticity. It is easy to measure when compared with techniques such as the J-integral. It is a fracture parameter with more direct physical meaning than most alternatives. However, the equivalence of CTOD and the J-integral is proven only for non-linear elastic materials, not for plastic materials. It is hard to extend the concept of CTOD to large deformations. It is easier to calculate the J-integral in a design process using finite element method techniques. Relation with other crack tip parameters K and CTOD CTOD can be expressed in terms of the stress intensity factor K as δ = K² / (m σys E), where σys is the yield strength, E is Young's modulus, and m = 1 for plane stress and m = 2 for plane strain. G and CTOD CTOD can be related to the energy release rate G as δ = G / (m σys), with m defined as above. J-integral and CTOD The relationship between the CTOD and J is given by δ = dn (J / σys), where the dimensionless variable dn is typically between 0.3 and 0.8. Testing A CTOD test is usually done on materials that undergo plastic deformation prior to failure. The testing material more or less resembles the original one, although dimensions can be reduced proportionally. Loading is done to resemble the expected load. More than 3 tests are done to minimize any experimental deviations. The dimensions of the testing material must maintain proportionality. The specimen is placed on the work table and a notch is created exactly at the centre. The crack should be generated such that the defect length is about half the depth. The load applied on the specimen is generally a three-point bending load. A type of strain gauge called a crack-mouth clip gage is used to measure the crack opening. The crack tip plastically deforms until a critical point, after which a cleavage crack is initiated that may lead to either partial or complete failure. The critical load and strain gauge measurements at the load are noted and a graph is plotted. 
The crack tip opening can be calculated from the length of the crack and opening at the mouth of the notch. According to the material used, the fracture can be brittle or ductile which can be concluded from the graph. Standards for CTOD testing can be found in the ASTM E1820 - 20a code. Laboratory measurement Early experiments used a flat, paddle-shaped gauge that was inserted into the crack; as the crack opens, the paddle gauge rotates and an electronic signal is sent to an x–y plotter. This method was inaccurate, however, because it was difficult to reach the crack tip with the paddle gauge. Today, the displacement V at the crack mouth is measured and the CTOD is inferred by assuming that the specimen halves are rigid and rotate about a hinge point. References Fracture mechanics
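As a quick numerical check of the standard relation between CTOD and the stress intensity factor, δ = K² / (m σys E), the following sketch evaluates it directly. The material values used in the example (a structural steel with σys = 350 MPa and E = 200 GPa) are illustrative only.

def ctod_from_k(k_mpa_sqrt_m, sigma_ys_mpa, e_mpa, plane_strain=False):
    """CTOD from the stress intensity factor: delta = K^2 / (m * sigma_ys * E),
    with m = 1 for plane stress and m = 2 for plane strain. Result in metres."""
    m = 2.0 if plane_strain else 1.0
    return k_mpa_sqrt_m ** 2 / (m * sigma_ys_mpa * e_mpa)

# Example: K = 60 MPa*sqrt(m), yield strength 350 MPa, E = 200 GPa (2.0e5 MPa)
print(ctod_from_k(60.0, 350.0, 2.0e5))                     # ~5.1e-5 m, plane stress
print(ctod_from_k(60.0, 350.0, 2.0e5, plane_strain=True))  # half of that, plane strain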
Crack tip opening displacement
[ "Materials_science", "Engineering" ]
885
[ "Structural engineering", "Materials degradation", "Materials science", "Fracture mechanics" ]
39,148,004
https://en.wikipedia.org/wiki/Thermal%20fracturing%20in%20glass
Thermal fracturing in glass occurs when a sufficient temperature differential is created within the glass. As a warmed area expands or a cooled area contracts, stresses develop, potentially leading to fracture. A temperature differential may be created in many ways, including solar heating, space heating devices, fire, or hot and cold liquids. Sloping glass surfaces are subject to greater solar radiation than vertical surfaces and so are more prone to solar thermal fracture. In framed window glass, the edges are relatively cooler than the exposed areas, so space heating devices in very close proximity may cause thermal fracture. Factors affecting thermal stress Solar absorption: the temperature of the glass depends on the amount of heat absorbed by the glass, so a high-performance solar control glass will absorb more heat and will be more prone to thermal fracture. Shadow: the presence of shadows results in relatively cooler areas in the glass; the resulting temperature difference may lead to thermal fracture. Edge strength: a crack will form if the tensile stress at the glass edge exceeds the edge strength. A clean-cut edge is the strongest, followed by a polished edge. Artificial heating and cooling: if heating or cooling vents are present, the glass can heat or cool excessively, which may result in thermal stress. Frame type and colour: insulating materials keep the edges cool, while conducting materials are influenced by their colour; dark colours are more absorptive and so cause more heating. Glass type: the edge strength of wired glass is less than that of ordinary glass due to weakening caused by cutting processes. Different types of thermal fracture Low energy: the most common type of fracture. It is caused by damage to the edge of the glass, which weakens the edge so that less stress is required to cause failure. The probability of this type of thermal fracture cannot be determined using thermal assessment processes. High energy: these are rare and require high levels of thermal stress. The probability of this type of thermal fracture can be determined using thermal assessment processes. Prevention of thermal fracture Low energy: the edges of annealed laminated glasses are polished, and inspection is done to find damage in the glass. High energy: glasses are heat strengthened to avoid high-energy thermal failure. References Glass
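A first-order feel for the stress produced by a temperature differential can be had from the fully restrained thermoelastic estimate σ ≈ E·α·ΔT. The sketch below uses typical handbook values for soda-lime glass (E ≈ 70 GPa, α ≈ 9×10⁻⁶ /K), which are ballpark figures rather than values from this article, and it ignores partial restraint and edge condition, both of which matter in practice.

def restrained_thermal_stress(delta_t_kelvin, youngs_modulus_pa=70e9,
                              expansion_coeff_per_k=9e-6):
    """Upper-bound thermoelastic stress for a fully restrained element:
    sigma = E * alpha * delta_T (real glazing is only partly restrained)."""
    return youngs_modulus_pa * expansion_coeff_per_k * delta_t_kelvin

# A 30 K difference between a shaded edge and a sun-heated centre of a pane:
print(restrained_thermal_stress(30) / 1e6, "MPa")   # ~19 MPa

Comparing such an estimate with the edge strength of the glass is, in essence, what the thermal assessment processes mentioned above do, although real assessments also account for restraint, shading pattern and edge quality.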
Thermal fracturing in glass
[ "Physics", "Chemistry" ]
433
[ "Homogeneous chemical mixtures", "Amorphous solids", "Unsolved problems in physics", "Glass" ]
48,029,103
https://en.wikipedia.org/wiki/Plasma%20electrochemistry
Plasma electrochemistry is a new field of research where the interaction of plasma with an electrolyte solution is studied. It uses plasma to drive chemical reactions in liquid. References Electrochemistry electrochemistry
Plasma electrochemistry
[ "Physics", "Chemistry" ]
43
[ "Physical chemistry stubs", "Plasma physics", "Electrochemistry", "Plasma physics stubs", "Plasma theory and modeling", "Electrochemistry stubs" ]
48,037,281
https://en.wikipedia.org/wiki/Eroom%27s%20law
Eroom's law is the observation that drug discovery is becoming slower and more expensive over time, despite improvements in technology (such as high-throughput screening, biotechnology, combinatorial chemistry, and computational drug design), a trend first observed in the 1980s. The inflation-adjusted cost of developing a new drug roughly doubles every nine years. In order to highlight the contrast with the exponential advancements of other forms of technology (such as transistors) over time, the name given to the observation is Moore's law spelled backwards. The term was coined by Jack Scannell and colleagues in 2012 in Nature Reviews Drug Discovery. Causes The article that proposed the law attributes it to four main causes: The 'better than the Beatles' problem: The sense that new drugs only have modest incremental benefit over drugs already widely considered as successful, such as Lipitor, and treatment effects on top of already effective treatments are smaller than treatment effects versus placebo. The smaller size of these treatment effects mandates an increase in clinical trial sizes to show the same level of efficacy. This problem was phrased as "better than the Beatles" to highlight the fact that it would be difficult to come up with new successful pop songs if all new songs had to be better than the Beatles. The 'cautious regulator' problem: The progressive lowering of risk tolerance seen by drug regulatory agencies that makes research and development (R&D) both costlier and harder. After older drugs (such as Thalidomide or Vioxx) are removed from the market due to safety reasons, the bar on safety for new drugs is increased. The 'throw money at it' tendency: The tendency to add human resources and other resources to R&D, which may lead to project overrun. The 'basic research–brute force' bias: The tendency to overestimate the ability of advances in basic research and brute force screening methods to show a molecule as safe and effective in clinical trials. From the 1960s to the 1990s (and later), drug discovery has shifted from whole-animal classical pharmacology testing methods (phenotypic screening) to reverse pharmacology target-approaches that result in the discovery of drugs that may tightly bind with high-affinity to target proteins, but which still often fail in clinical trials due to an under-appreciation of the complexity of the whole organism. Furthermore, drug discovery techniques have shifted from small-molecule and iterative low-throughput search strategies to target-based high-throughput screening (HTS) of large compound libraries. But despite being faster and cheaper, HTS approaches may be less productive. While some suspect a lack of "low-hanging fruit" as a significant contribution to Eroom's law, this may be less important than the four main causes, as there are still many decades' worth of new potential drug targets relative to the number of targets which already have been exploited, even if the industry exploits 4 to 5 new targets per year. There is also space to explore selectively non-selective drugs (or "dirty drugs") that interact with several molecular targets, and which may be particularly effective as central nervous system (CNS) therapeutics, even though few of them have been introduced in the last few decades. As of 2018, academic spinouts and small biotech startups have surpassed Big Pharma with respect to the number of best-selling drugs approved, with 24/30 (80%) originating outside of Big Pharma. 
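The headline doubling statement can be turned into a one-line cost multiplier. The sketch below simply applies the "doubles every nine years" figure quoted above; the time spans used in the example are arbitrary.

def cost_multiplier(years, doubling_period_years=9.0):
    """Relative inflation-adjusted development cost after a given number of years,
    assuming the 'doubles roughly every nine years' trend quoted in the article."""
    return 2.0 ** (years / doubling_period_years)

print(cost_multiplier(18))   # 4.0x after two doubling periods
print(cost_multiplier(45))   # ~32x over 45 years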
Critiques An alternative hypothesis is that the pharmaceutical industry has become cartelized and formed a bureaucratic oligopoly, resulting in reduced innovation and efficiency. As of 2022, approximately 20 Big Pharma companies control the majority of global branded drug sales (on the scale of roughly $1 trillion annually). Critics point out that Big Pharma has reduced investment in R&D while spending roughly twice as much on marketing, and has focused on raising drug prices rather than on risk-taking. References Drug discovery Rules of thumb
Eroom's law
[ "Chemistry", "Biology" ]
814
[ "Life sciences industry", "Medicinal chemistry", "Drug discovery" ]
37,678,417
https://en.wikipedia.org/wiki/MACS%20J0647.7%2B7015
MACS J0647.7+7015 is a galaxy cluster with a redshift z = 0.592, located at J2000.0 right ascension 06h 47.7m and declination +70° 15′ (the approximate coordinates encoded in its designation). It lies between the Big Dipper and Little Dipper in the constellation Camelopardalis. It is part of a sample of 12 extreme galaxy clusters at z > 0.5 discovered by the MAssive Cluster Survey (MACS). In 2012 the galaxy cluster was announced as gravitationally lensing MACS0647-JD, then the most distant galaxy ever imaged (z ≈ 11). References External links Galaxy clusters Camelopardalis
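As a purely illustrative aside on what a redshift of z = 0.592 corresponds to in distance, the sketch below numerically evaluates the standard flat ΛCDM line-of-sight comoving-distance integral. The cosmological parameters (H0 = 70 km/s/Mpc, Ωm = 0.3, ΩΛ = 0.7) are assumed round values, not ones taken from the MACS survey papers.

```python
# Comoving distance D_C(z) = (c / H0) * integral_0^z dz' / E(z') for a flat LCDM
# cosmology, evaluated at the cluster redshift z = 0.592. Parameter values here
# are illustrative round numbers, not the survey's adopted cosmology.
from scipy.integrate import quad

H0 = 70.0                    # Hubble constant, km/s/Mpc (assumed)
OMEGA_M, OMEGA_L = 0.3, 0.7  # assumed flat LCDM density parameters
C_KM_S = 299792.458          # speed of light, km/s

def E(z: float) -> float:
    """Dimensionless Hubble parameter H(z)/H0 for flat LCDM."""
    return (OMEGA_M * (1.0 + z) ** 3 + OMEGA_L) ** 0.5

def comoving_distance_mpc(z: float) -> float:
    integral, _ = quad(lambda zp: 1.0 / E(zp), 0.0, z)
    return (C_KM_S / H0) * integral

print(f"{comoving_distance_mpc(0.592):.0f} Mpc")  # roughly 2200 Mpc with these parameters
```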
MACS J0647.7+7015
[ "Astronomy" ]
134
[ "Camelopardalis", "Galaxy clusters", "Astronomical objects", "Constellations" ]
37,680,847
https://en.wikipedia.org/wiki/Grothendieck%20trace%20formula
In algebraic geometry, the Grothendieck trace formula expresses the number of points of a variety over a finite field in terms of the trace of the Frobenius endomorphism on its cohomology groups. There are several generalizations: the Frobenius endomorphism can be replaced by a more general endomorphism, in which case the points over a finite field are replaced by its fixed points, and there is also a more general version for a sheaf over the variety, where the cohomology groups are replaced by cohomology with coefficients in the sheaf. The Grothendieck trace formula is an analogue in algebraic geometry of the Lefschetz fixed-point theorem in algebraic topology. One application of the Grothendieck trace formula is to express the zeta function of a variety over a finite field, or more generally the L-series of a sheaf, as a sum over traces of Frobenius on cohomology groups. This is one of the steps used in the proof of the Weil conjectures. Behrend's trace formula generalizes the formula to algebraic stacks. Formal statement for L-functions Let k be a finite field, l a prime number invertible in k, X a smooth k-scheme of dimension n, and $\mathcal{F}$ a constructible $\mathbb{Q}_\ell$-sheaf on X. Then the following cohomological expression for the L-function of $\mathcal{F}$ holds: $$L(X, \mathcal{F}, t) \;=\; \prod_{i=0}^{2n} \det\bigl(1 - Ft \mid H^i_c(X_{\bar{k}}, \mathcal{F})\bigr)^{(-1)^{i+1}},$$ where F is everywhere a geometric Frobenius action on l-adic cohomology with compact supports of the sheaf $\mathcal{F}$. Taking logarithmic derivatives of both formal power series produces a statement on sums of traces for each finite field extension E of the base field k: $$\sum_{x \in X(E)} \operatorname{tr}\bigl(F_E \mid \mathcal{F}_{\bar{x}}\bigr) \;=\; \sum_{i=0}^{2n} (-1)^i \operatorname{tr}\bigl(F_E \mid H^i_c(X_{\bar{k}}, \mathcal{F})\bigr).$$ For the constant sheaf $\mathbb{Q}_\ell$ (viewed as an inverse limit of the constant sheaves $\mathbb{Z}/\ell^n$, tensored with $\mathbb{Q}$, to qualify as an l-adic sheaf) the left hand side of this formula is the number of E-points of X. References Theorems in algebraic geometry
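A standard worked instance of the trace formula, included here as a textbook illustration rather than something stated in the article above: for $X = \mathbb{P}^1$ over $k = \mathbb{F}_q$ with the constant sheaf $\mathbb{Q}_\ell$, one has $H^0_c \cong \mathbb{Q}_\ell$ with trivial Frobenius action, $H^1_c = 0$, and $H^2_c \cong \mathbb{Q}_\ell(-1)$ on which Frobenius acts by multiplication by $q$, so the alternating sum of traces recovers the familiar point count.

```latex
% Worked example (standard textbook case): points of P^1 over F_q via the trace formula
\[
  \#\mathbb{P}^1(\mathbb{F}_q)
  \;=\; \sum_{i=0}^{2} (-1)^i \,\operatorname{tr}\!\bigl(F \mid H^i_c(\mathbb{P}^1_{\bar{\mathbb{F}}_q}, \mathbb{Q}_\ell)\bigr)
  \;=\; 1 - 0 + q \;=\; q + 1 .
\]
```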
Grothendieck trace formula
[ "Mathematics" ]
392
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
35,233,688
https://en.wikipedia.org/wiki/Potential%20cultural%20impact%20of%20extraterrestrial%20contact
The cultural impact of extraterrestrial contact is the corpus of changes to terrestrial science, technology, religion, politics, and ecosystems resulting from contact with an extraterrestrial civilization. This concept is closely related to the search for extraterrestrial intelligence (SETI), which attempts to locate intelligent life as opposed to analyzing the implications of contact with that life. The potential changes from extraterrestrial contact could vary greatly in magnitude and type, based on the extraterrestrial civilization's level of technological advancement, degree of benevolence or malevolence, and level of mutual comprehension between itself and humanity. The medium through which humanity is contacted, be it electromagnetic radiation, direct physical interaction, extraterrestrial artifact, or otherwise, may also influence the results of contact. Incorporating these factors, various systems have been created to assess the implications of extraterrestrial contact. The implications of extraterrestrial contact, particularly with a technologically superior civilization, have often been likened to the meeting of two vastly different human cultures on Earth, a historical precedent being the Columbian Exchange. Such meetings have generally led to the destruction of the civilization receiving contact (as opposed to the "contactor", which initiates contact), and therefore destruction of human civilization is a possible outcome. Extraterrestrial contact is also analogous to the numerous encounters between non-human native and invasive species occupying the same ecological niche. However, the absence of verified public contact to date means tragic consequences are still largely speculative. Background Search for extraterrestrial intelligence To detect extraterrestrial civilizations with radio telescopes, one must identify an artificial, coherent signal against a background of various natural phenomena that also produce radio waves. Telescopes capable of this include the Allen Telescope Array in Hat Creek, California, and the Five-hundred-meter Aperture Spherical Telescope (FAST) in China, and formerly included the now demolished Arecibo Observatory in Puerto Rico. Various programs to detect extraterrestrial intelligence have had government funding in the past. Project Cyclops was commissioned by NASA in the 1970s to investigate the most effective way to search for signals from intelligent extraterrestrial sources, but the report's recommendations were set aside in favor of the much more modest approach of Messaging to Extra-Terrestrial Intelligence (METI), the sending of messages that intelligent extraterrestrial beings might intercept. NASA then drastically reduced funding for SETI programs, which have since turned to private donations to continue their search. With the discovery in the late 20th and early 21st centuries of numerous extrasolar planets, some of which may be habitable, governments have once more become interested in funding new programs. In 2006 the European Space Agency launched CoRoT, the first spacecraft dedicated to the search for exoplanets, and in 2009 NASA launched the Kepler space observatory for the same purpose. By February 2013 Kepler had detected 105 confirmed exoplanets, one of which, Kepler-22b, is potentially habitable. After it was discovered, the SETI Institute resumed the search for an intelligent extraterrestrial civilization, focusing on Kepler's candidate planets, with funding from the United States Air Force. 
Newly discovered planets, particularly ones that are potentially habitable, have enabled SETI and METI programs to refocus projects for communication with extraterrestrial intelligence. In 2009 A Message From Earth (AMFE) was sent toward the Gliese 581 planetary system, which contains two potentially habitable planets, the confirmed Gliese 581d and the more habitable but unconfirmed Gliese 581g. In the SETILive project, which began in 2012, human volunteers analyze data from the Allen Telescope Array to search for possible alien signals that computers might miss because of terrestrial radio interference. The data for the study is obtained by observing Kepler target stars with the radio telescope. In addition to radio-based methods, some projects, such as SEVENDIP (Search for Extraterrestrial Visible Emissions from Nearby Developed Intelligent Populations) at the University of California, Berkeley, are using other regions of the electromagnetic spectrum to search for extraterrestrial signals. Various other projects are not searching for coherent signals, but instead use electromagnetic radiation to find other evidence of extraterrestrial intelligence, such as megascale astroengineering projects. Several signals, such as the Wow! signal, have been detected in the history of the search for extraterrestrial intelligence, but none have yet been confirmed as being of intelligent origin. Impact assessment The implications of extraterrestrial contact depend on the method of discovery, the nature of the extraterrestrial beings, and their location relative to the Earth. Considering these factors, the Rio scale has been devised in order to provide a more quantitative picture of the results of extraterrestrial contact. More specifically, the scale gauges whether communication was conducted through radio, the information content of any messages, and whether discovery arose from a deliberately beamed message (and if so, whether the detection was the result of a specialized SETI effort or through general astronomical observations) or by the detection of occurrences such as radiation leakage from astroengineering installations. The question of whether or not a purported extraterrestrial signal has been confirmed as authentic, and with what degree of confidence, will also influence the impact of the contact. The Rio scale was modified in 2011 to include a consideration of whether contact was achieved through an interstellar message or through a physical extraterrestrial artifact, with a suggestion that the definition of artifact be expanded to include "technosignatures", including all indications of intelligent extraterrestrial life other than the interstellar radio messages sought by traditional SETI programs. A study by astronomer Steven J. Dick at the United States Naval Observatory considered the cultural impact of extraterrestrial contact by analyzing events of similar significance in the history of science. The study argues that the impact would be most strongly influenced by the information content of the message received, if any. It distinguishes short-term and long-term impact. 
Seeing radio-based contact as a more plausible scenario than a visit from extraterrestrial spacecraft, the study rejects the commonly stated analogy of European colonization of the Americas as an accurate model for information-only contact, preferring events of profound scientific significance, such as the Copernican and Darwinian revolutions, as more predictive of how humanity might be impacted by extraterrestrial contact. The physical distance between the two civilizations has also been used to assess the cultural impact of extraterrestrial contact. Historical examples show that the greater the distance, the less the contacted civilization perceives a threat to itself and its culture. Therefore, contact occurring within the Solar System, and especially in the immediate vicinity of Earth, is likely to be the most disruptive and negative for humanity. On a smaller scale, people close to the epicenter of contact would experience a greater effect than would those living farther away, and a contact having multiple epicenters would cause a greater shock than one with a single epicenter. Space scientists Martin Dominik and John Zarnecki state that in the absence of any data on the nature of extraterrestrial intelligence, one must predict the cultural impact of extraterrestrial contact on the basis of generalizations encompassing all life and of analogies with history. The beliefs of the general public about the effect of extraterrestrial contact have also been studied. A poll of United States and Chinese university students in 2000 provides factor analysis of responses to questions about, inter alia, the participants' belief that extraterrestrial life exists in the Universe, that such life may be intelligent, and that humans will eventually make contact with it. The study shows significant weighted correlations between participants' belief that extraterrestrial contact may either conflict with or enrich their personal religious beliefs and how conservative such religious beliefs are. The more conservative the respondents, the more harmful they considered extraterrestrial contact to be. Other significant correlation patterns indicate that students took the view that the search for extraterrestrial intelligence may be futile or even harmful. Psychologists Douglas Vakoch and Yuh-shiow Lee conducted a survey to assess people's reactions to receiving a message from extraterrestrials, including their judgments about likelihood that extraterrestrials would be malevolent. "People who view the world as a hostile place are more likely to think extraterrestrials will be hostile," Vakoch told USA Today. Post-detection protocols Various protocols have been drawn up detailing a course of action for scientists and governments after extraterrestrial contact. Post-detection protocols must address three issues: what to do in the first weeks after receiving a message from an extraterrestrial source; whether or not to send a reply; and analyzing the long-term consequences of the message received. No post-detection protocol, however, is binding under national or international law, and Dominik and Zarnecki consider the protocols likely to be ignored if contact occurs. One of the first post-detection protocols, the "Declaration of Principles for Activities Following the Detection of Extraterrestrial Intelligence", was created by the SETI Permanent Committee of the International Academy of Astronautics (IAA). 
It was later approved by the Board of Trustees of the IAA and by the International Institute of Space Law, and still later by the International Astronomical Union (IAU), the Committee on Space Research, the International Union of Radio Science, and others. It was subsequently endorsed by most researchers involved in the search for extraterrestrial intelligence, including the SETI Institute. The Declaration of Principles contains the following broad provisions: Any person or organization detecting a signal should try to verify that it is likely to be of intelligent origin before announcing it. The discoverer of a signal should, for the purposes of independent verification, communicate with other signatories of the Declaration before making a public announcement, and should also inform their national authorities. Once a given astronomical observation has been determined to be a credible extraterrestrial signal, the astronomical community should be informed through the Central Bureau for Astronomical Telegrams of the IAU. The Secretary-General of the United Nations and various other global scientific unions should also be informed. Following confirmation of an observation's extraterrestrial origin, news of the discovery should be made public. The discoverer has the right to make the first public announcement. All data confirming the discovery should be published to the international scientific community and stored in an accessible form as permanently as possible. Should evidence for extraterrestrial intelligence take the form of electromagnetic signals, the Secretary-General of the International Telecommunication Union (ITU) should be contacted, and may request in the next ITU Weekly Circular to minimize terrestrial use of the electromagnetic frequency bands in which the signal was detected. Neither the discoverer nor anyone else should respond to an observed extraterrestrial intelligence; doing so requires international agreement under separate procedures. The SETI Permanent Committee of the IAA and Commission 51 of the IAU should continually review procedures regarding detection of extraterrestrial intelligence and management of data related to such discoveries. A committee comprising members from various international scientific unions, and other bodies designated by the committee, should regulate continued SETI research. A separate "Proposed Agreement on the Sending of Communications to Extraterrestrial Intelligence" was subsequently created. It proposes an international commission, membership of which would be open to all interested nations, to be constituted on detection of extraterrestrial intelligence. This commission would decide whether to send a message to the extraterrestrial intelligence, and if so, would determine the contents of the message on the basis of principles such as justice, respect for cultural diversity, honesty, and respect for property and territory. The draft proposes to forbid the sending of any message by an individual nation or organization without the permission of the commission, and suggests that, if the detected intelligence poses a danger to human civilization, the United Nations Security Council should authorize any message to extraterrestrial intelligence. However, this proposal, like all others, has not been incorporated into national or international law. 
Paul Davies, a member of the SETI Post-Detection Taskgroup, has stated that post-detection protocols, calling for international consultation before taking any major steps regarding the detection, are unlikely to be followed by astronomers, who would put the advancement of their careers over the word of a protocol that is not part of national or international law. Contact scenarios and considerations Scientific literature and science fiction have put forward various models of the ways in which extraterrestrial and human civilizations might interact. Their predictions range widely, from sophisticated civilizations that could advance human civilization in many areas to imperial powers that might draw upon the forces necessary to subjugate humanity. Some theories suggest that an extraterrestrial civilization could be advanced enough to dispense with biology, living instead inside advanced computers. The implications of discovery depend heavily on the level of aggressiveness of the civilization interacting with humanity, its ethics, and how much human and extraterrestrial biologies have in common. These factors may govern the quantity and type of dialogue that can take place. The question of whether contact is via signals from distant places or via probes or extraterrestrials in Earth's vicinity (or both) will also govern the magnitude of the long-term implications of contact. In the case of communication using electromagnetic signals, the long silence between the reception of one message and another would mean that the content of any message would particularly affect the consequences of contact, as would the extent of mutual comprehension. Concerning probes, a study suggested that the first interstellar probe to arrive from a given civilization is not likely to be that civilization's earliest (that is, the one sent first) but a later, more advanced one, since departure speeds are thought likely to keep improving for at least some period of each civilization's development; this may have implications for the type of probes to expect and for the fate of any probes sent earlier. Friendly civilizations Many writers have speculated on the ways in which a friendly civilization might interact with humankind. Albert Harrison, a professor emeritus of psychology at the University of California, Davis, thought that a highly advanced civilization might teach humanity such things as a physical theory of everything, how to use zero-point energy, or how to travel faster than light. He suggested that collaboration with such a civilization could initially be in the arts and humanities before moving to the hard sciences, and even that artists may spearhead collaboration. Seth D. Baum, of the Global Catastrophic Risk Institute, and others consider that the greater longevity of cooperative civilizations in comparison to uncooperative and aggressive ones might render extraterrestrial civilizations in general more likely to aid humanity. In contrast to these views, Paolo Musso, a member of the SETI Permanent Study Group of the International Academy of Astronautics (IAA) and the Pontifical Academy of Sciences, took the view that extraterrestrial civilizations possess, like humans, a morality driven not entirely by altruism but also by individual benefit, thus leaving open the possibility that at least some extraterrestrial civilizations are hostile. 
Futurist Allen Tough suggests that an extremely advanced extraterrestrial civilization, recalling its own past of war and plunder and knowing that it possesses superweapons that could destroy it, would be likely to try to help humans rather than to destroy them. He identifies three approaches that a friendly civilization might take to help humanity: Intervention only to avert catastrophe: this would involve occasional limited intervention to stop events that could destroy human civilization completely, such as nuclear war or asteroid impact. Advice and action with consent: under this approach, the extraterrestrials would be more closely involved in terrestrial affairs, advising world leaders and acting with their consent to protect against danger. Forcible corrective action: the extraterrestrials could require humanity to reduce major risks against its will, intending to help humans advance to the next stage of civilization. Tough considers advising and acting only with consent to be a more likely choice than the forceful option. While coercive aid may be possible, and advanced extraterrestrials would recognize their own practices as superior to those of humanity, it may be unlikely that this method would be used in cultural cooperation. Lemarchand suggests that instruction of a civilization in its "technological adolescence", such as humanity, would probably focus on morality and ethics rather than on science and technology, to ensure that the civilization did not destroy itself with technology it was not yet ready to use. According to Tough, it is unlikely that the avoidance of immediate dangers and prevention of future catastrophes would be conducted through radio, as these tasks would demand constant surveillance and quick action. However, cultural cooperation might take place through radio or a space probe in the Solar System, as radio waves could be used to communicate information about advanced technologies and cultures to humanity. Even if an ancient and advanced extraterrestrial civilization wished to help humanity, humans could suffer from a loss of identity and confidence due to the technological and cultural prowess of the extraterrestrial civilization. However, a friendly civilization may calibrate its contact with humanity in such a way as to minimize unintended consequences. Michael A. G. Michaud suggests that a friendly and advanced extraterrestrial civilization may even avoid all contact with an emerging intelligent species like humanity, to ensure that the less advanced civilization can develop naturally at its own pace; this is known as the zoo hypothesis. Hostile civilizations Science fiction often depicts humans successfully repelling alien invasions, but scientists more often take the view that an extraterrestrial civilization with sufficient power to reach the Earth would be able to destroy human civilization or humanity with minimal effort. Operations that are enormous on a human scale, such as destroying all major population centers on a planet, bombarding a planet with deadly neutron radiation, or even traveling to another planetary system in order to lay waste to it, may be important tools for a hostile civilization. Deardorff speculates that a small proportion of the intelligent life forms in the galaxy may be aggressive, but the actual aggressiveness or benevolence of the civilizations would cover a wide spectrum, with some civilizations "policing" others. Civilizations may not be homogeneous and contain different factions or subgroups. 
According to Harrison and Dick, hostile extraterrestrial life may indeed be rare in the Universe, just as belligerent and autocratic nations on Earth have been the ones that lasted for the shortest periods of time, and humanity is seeing a shift away from these characteristics in its own sociopolitical systems. In addition, the causes of war may be diminished greatly for a civilization with access to the galaxy, as there are prodigious quantities of natural resources in space accessible without resort to violence. SETI researcher Carl Sagan believed that a civilization with the technological prowess needed to reach the stars and come to Earth must have transcended war to be able to avoid self-destruction. Representatives of such a civilization would treat humanity with dignity and respect, and humanity, with its relatively backward technology, would have no choice but to reciprocate. Seth Shostak, an astronomer at the SETI Institute, disagrees, stating that the finite quantity of resources in the galaxy would cultivate aggression in any intelligent species, and that an explorer civilization that would want to contact humanity would be aggressive. Similarly, Ragbir Bhathal claimed that since the laws of evolution would be the same on another habitable planet as they are on Earth, an extremely advanced extraterrestrial civilization may have the motivation to colonize humanity in a similar manner to the European colonization of much of the rest of the world. Disputing these analyses, David Brin states that while an extraterrestrial civilization may have an imperative to act for no benefit to itself, it would be naïve to suggest that such a trait would be prevalent throughout the galaxy. Brin points to the fact that in many moral systems on Earth, such as the Aztec or Carthaginian one, non-military killing has been accepted and even "exalted" by society, and further mentions that such acts are not confined to humans but can be found throughout the animal kingdom. Baum et al. speculate that highly advanced civilizations are unlikely to come to Earth to enslave humans, as the achievement of their level of advancement would have required them to solve the problems of labor and resources by other means, such as creating a sustainable environment and using mechanized labor. Moreover, humans may be an unsuitable food source for extraterrestrials because of marked differences in biochemistry. For example, the chirality of molecules used by terrestrial biota may differ from those used by extraterrestrial beings. Douglas Vakoch argues that transmitting intentional signals does not increase the risk of an alien invasion, contrary to concerns raised by British cosmologist Stephen Hawking, because "any civilization that has the ability to travel between the stars can already pick up our accidental radio and TV leakage" at a distance of several hundred light-years. The easiest or most likely artificial signals from Earth to be detectable are brief pulses transmitted by anti-ballistic missile (ABM) early-warning and space-surveillance radars during the Cold War and later astronomical and military radars. Unlike the earliest and conventional radio- and television-broadcasting which has been claimed to be undetectable at short distances, such signals could be detected also from relatively distant receiver stations in certain regions. Politicians have also commented on the likely human reaction to contact with hostile species. 
In his 1987 speech to the United Nations General Assembly, Ronald Reagan said, "I occasionally think how quickly our differences worldwide would vanish if we were facing an alien threat from outside this world." Equally advanced and more advanced civilizations Robert Freitas speculated in 1978 that the technological advancement and energy usage of a civilization, measured either relative to another civilization or in absolute terms by its rating on the Kardashev scale, may play an important role in the result of extraterrestrial contact. Given the infeasibility of interstellar space flight for civilizations at a technological level similar to that of humanity, interactions between such civilizations would have to take place by radio. Because of the long transit times of radio waves between stars, such interactions would not lead to the establishment of diplomatic relations, nor any significant future interaction at all, between the two civilizations. According to Freitas, direct contact with civilizations significantly more advanced than humanity would have to take place within the Solar System, as only the more advanced society would have the resources and technology to cross interstellar space. Consequently, such contact could only be with civilizations rated as Type II or higher on the Kardashev scale, as Type I civilizations would be incapable of regular interstellar travel. Freitas expected that such interactions would be carefully planned by the more advanced civilization to avoid mass societal shock for humanity. However much planning an extraterrestrial civilization may do before contacting humanity, the humans may experience great shock and terror on their arrival, especially as they would lack any understanding of the contacting civilization. Ben Finney compares the situation to that of the tribespeople of New Guinea, an island that was settled fifty thousand years ago during the last glacial period but saw little contact with the outside world until the arrival of European colonial powers in the late 19th and early 20th centuries. The huge difference between the indigenous stone-age society and the Europeans' technical civilization caused unexpected behaviors among the native populations known as cargo cults: to coax the gods into bringing them the technology that the Europeans possessed, the natives created wooden "radio stations" and "airstrips" as a form of sympathetic magic. Finney argues that humanity may misunderstand the true meaning of an extraterrestrial transmission to Earth, much as the people of New Guinea could not understand the source of modern goods and technologies. He concludes that the results of extraterrestrial contact will become known over the long term with rigorous study, rather than as fast, sharp events briefly making newspaper headlines. Billingham has suggested that a civilization which is far more technologically advanced than humanity is also likely to be culturally and ethically advanced, and would therefore be unlikely to conduct astroengineering projects that would harm human civilization. Such projects could include Dyson spheres, which completely enclose stars and capture all energy coming from them. Even if well within the capability of an advanced civilization and providing an enormous amount of energy, such a project would not be undertaken. For similar reasons, such civilizations would not readily give humanity the knowledge required to build such devices. 
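The Kardashev scale invoked above, and the stellar-scale energy capture that a Dyson sphere represents, can be made quantitative with Carl Sagan's interpolation formula. It is given here as general background rather than as something stated in the article, and the power figures are the commonly quoted rough values.

```latex
% Sagan's continuous interpolation of the Kardashev scale (background, not from the article),
% with P the civilization's power use in watts:
\[
  K \;=\; \frac{\log_{10} P - 6}{10},
\]
% so Type I corresponds to P ~ 10^{16} W (planetary), Type II to P ~ 10^{26} W
% (roughly the output of a Sun-like star, the scale of a Dyson sphere), and
% Type III to P ~ 10^{36} W (galactic); present-day humanity, at P ~ 2 x 10^{13} W,
% sits near K ~ 0.73.
```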
Nevertheless, the existence of such capabilities would at least show that civilizations have survived "technological adolescence". Despite the caution that such an advanced civilization would exercise in dealing with the less mature human civilization, Sagan imagined that an advanced civilization might send those on Earth an Encyclopædia Galactica describing the sciences and cultures of many extraterrestrial societies. Whether an advanced extraterrestrial civilization would send humanity a decipherable message is a matter of debate in itself. Sagan argued that a highly advanced extraterrestrial civilization would bear in mind that they were communicating with a relatively primitive one and therefore would try to ensure that the receiving civilization would be able to understand the message. Marvin Minsky believed that aliens might think similarly to humans because of shared constraints, permitting communication. Arguing against this view, astronomer Guillermo Lemarchand stated that an advanced civilization would probably encrypt a message with high information content, such as an Encyclopædia Galactica, in order to ensure that only other ethically advanced civilizations would be able to understand it. Douglas Vakoch assumes it may take some time to decode any message, telling ABC News that "I don't think we're going to understand immediately what they have to say." "There’s going to be a lot of guesswork in trying to interpret another civilization," he told Science Friday, adding that "in some ways, any message we get from an extraterrestrial will be like a cosmic Rorschach ink blot test." Interstellar groups of civilizations Given the age of the galaxy, Harrison surmises that "galactic clubs" might exist, groupings of civilizations from across the galaxy. Such clubs could begin as loose confederations or alliances, eventually developing into powerful unions of many civilizations. If humanity could enter into a dialogue with one extraterrestrial civilization, it might be able to join such a galactic club. As more extraterrestrial civilizations, or unions thereof, are found, these could also become assimilated into such a club. Sebastian von Hoerner has suggested that entry into a galactic club may be a way for humanity to handle the culture shock arising from contact with an advanced extraterrestrial civilization. Whether a broad spectrum of civilizations from many places in the galaxy would even be able to cooperate is disputed by Michaud, who states that civilizations with huge differences in the technologies and resources at their command "may not consider themselves even remotely equal". It is unlikely that humanity would meet the basic requirements for membership at its current low level of technological advancement. A galactic club may, William Hamilton speculates, set extremely high entrance requirements that are unlikely to be met by less advanced civilizations. When two Canadian astronomers argued that they potentially discovered 234 extraterrestrial civilizations through analysis of the Sloan Digital Sky Survey database, Douglas Vakoch doubted their explanation for their findings, noting that it would be unusual for all of these stars to pulse at exactly the same frequency unless they were part of a coordinated network: "If you take a step back," he said, "that would mean you have 234 independent stars that all decided to transmit the exact same way." 
Michaud suggests that an interstellar grouping of civilizations might take the form of an empire, which need not necessarily be a force for evil, but may provide for peace and security throughout its jurisdiction. Owing to the distances between the stars, such an empire would not necessarily maintain control solely by military force, but may rather tolerate local cultures and institutions to the extent that these would not pose a threat to the central imperial authority. Such tolerance may, as has happened historically on Earth, extend to allowing nominal self-rule of specific regions by existing institutions, while maintaining that area as a puppet or client state to accomplish the aims of the imperial power. However, particularly advanced powers may use methods, including faster-than-light travel, to make centralized administration more effective. In contrast to the belief that an extraterrestrial civilization would want to establish an empire, Ćirković proposes that an extraterrestrial civilization would maintain equilibrium rather than expand outward. In such an equilibrium, a civilization would only colonize a small number of stars, aiming to maximize efficiency rather than to expand massive and unsustainable imperial structures. This contrasts with the classic Kardashev Type III civilization, which has access to the energy output of an entire galaxy and is not subject to any limits on its future expansion. According to this view, advanced civilizations may not resemble the classic examples in science fiction, but might more closely reflect the small, independent Greek city-states, with an emphasis on cultural rather than territorial growth. Extraterrestrial artifacts An extraterrestrial civilization may choose to communicate with humanity by means of artifacts or probes rather than by radio, for various reasons. While probes may take a long time to reach the Solar System, once there they would be able to hold a sustained dialogue that would be impossible using radio from hundreds or thousands of light-years away. Radio would be completely unsuitable for surveillance and continued monitoring of a civilization, and should an extraterrestrial civilization wish to perform these activities on humanity, artifacts may be the only option other than to send large, crewed spacecraft to the Solar System. Although faster-than-light travel has been seriously considered by physicists such as Miguel Alcubierre, Tough speculates that the enormous amount of energy required to achieve such speeds under currently proposed mechanisms means that robotic probes traveling at conventional speeds will still have an advantage for various applications. 2013 research at NASA's Johnson Space Center, however, shows that faster-than-light travel with the Alcubierre drive requires dramatically less energy than previously thought, needing only about 1 tonne of exotic mass-energy to move a spacecraft at 10 times the speed of light, in contrast to previous estimates that stated that only a Jupiter-mass object would contain sufficient energy to power a faster-than-light spacecraft. According to Tough, an extraterrestrial civilization might want to send various types of information to humanity by means of artifacts, such as an Encyclopædia Galactica, containing the wisdom of countless extraterrestrial cultures, or perhaps an invitation to engage in diplomacy with them. 
A civilization that sees itself on the brink of decline might use the abilities it still possesses to send probes throughout the galaxy, with its cultures, values, religions, sciences, technologies, and laws, so that these may not die along with the civilization itself. Freitas finds numerous reasons why interstellar probes may be a preferred method of communication among extraterrestrial civilizations wishing to make contact with Earth. A civilization aiming to learn more about the distribution of life within the galaxy might, he speculates, send probes to a large number of star systems, rather than using radio, as one cannot ensure a response by radio but can (he says) ensure that probes will return to their sender with data on the star systems they survey. Furthermore, probes would enable the surveying of non-intelligent populations, or those not yet capable of space navigation (like humans before the 20th century), as well as intelligent populations that might not wish to provide information about themselves and their planets to extraterrestrial civilizations. In addition, the greater energy required to send living beings rather than a robotic probe would, according to Michaud, be only used for purposes such as a one-way migration. Freitas points out that probes, unlike the interstellar radio waves commonly targeted by SETI searches, could store information for long, perhaps geological, timescales, and could emit strong radio signals unambiguously recognizable as being of intelligent origin, rather than being dismissed as a UFO or a natural phenomenon. Probes could also modify any signal they send to suit the system they were in, which would be impossible for a radio transmission originating from outside the target star system. Moreover, the use of small robotic probes with widely distributed beacons in individual systems, rather than a small number of powerful, centralized beacons, would provide a security advantage to the civilization using them. Rather than revealing the location of a radio beacon powerful enough to signal the whole galaxy and risk such a powerful device being compromised, decentralized beacons installed on robotic probes need not reveal any information that an extraterrestrial civilization prefers others not to have. Given the age of the Milky Way galaxy, an ancient extraterrestrial civilization may have existed and sent probes to the Solar System millions or even billions of years before the evolution of Homo sapiens. Thus, a probe sent may have been nonfunctional for millions of years before humans learn of its existence. Such a "dead" probe would not pose an imminent threat to humanity, but would prove that interstellar flight is possible. However, if an active probe were to be discovered, humans would react much more strongly than they would to the discovery of a probe that has long since ceased to function. Further implications of contact Theological The confirmation of extraterrestrial intelligence could have a profound impact on religious doctrines, potentially causing theologians to reinterpret scriptures to accommodate the new discoveries. However, a survey of people with many different religious beliefs indicated that their faith would not be affected by the discovery of extraterrestrial intelligence, and another study, conducted by Ted Peters of the Pacific Lutheran Theological Seminary, shows that most people would not consider their religious beliefs superseded by it. 
Surveys of religious leaders indicate that only a small percentage are concerned that the existence of extraterrestrial intelligence might fundamentally contradict the views of the adherents of their religion. Gabriel Funes, the chief astronomer of the Vatican Observatory and a papal adviser on science, has stated that the Catholic Church would be likely to welcome extraterrestrial visitors warmly. There are many UFO religions such as Raëlism. Astronomer David Weintraub suggests unambiguous contact would result in more of these kinds of beliefs and communities, saying "There undoubtedly would be people who would find this as an opportunity or an excuse to call attention to themselves for whatever reason and there would be new religions". Contact with extraterrestrial intelligence would not be completely inconsequential for religion. The Peters study showed that most non-religious people, and a significant minority of religious people, believe that the world could face a religious crisis, even if their own beliefs were unaffected. Contact with extraterrestrial intelligence would be most likely to cause a problem for western religions, in particular traditionalist Christianity, because of the geocentric nature of western faiths. The discovery of extraterrestrial life would not contradict basic conceptions of God, however, and seeing that science has challenged established dogma in the past, for example with the theory of evolution, it is likely that existing religions will adapt similarly to the new circumstances. Douglas Vakoch argues that it is not likely that the discovery of extraterrestrial life will impact religious beliefs. In the view of Musso, a global religious crisis would be unlikely even for Abrahamic faiths, as the studies of himself and others on Christianity, the most "anthropocentric" religion, see no conflict between that religion and the existence of extraterrestrial intelligence. In addition, the cultural and religious values of extraterrestrial species would likely be shared over centuries if contact is to occur by radio, meaning that rather than causing a huge shock to humanity, such information would be viewed much as archaeologists and historians view ancient artifacts and texts. Funes speculates that a decipherable message from extraterrestrial intelligence could initiate an interstellar exchange of knowledge in various disciplines, including whatever religions an extraterrestrial civilization may host. Billingham further suggests that an extremely advanced and friendly extraterrestrial civilization might put an end to present-day religious conflicts and lead to greater religious toleration worldwide. On the other hand, Jill Tarter puts forward the view that contact with extraterrestrial intelligence might eliminate religion as we know it and introduce humanity to an all-encompassing faith. Vakoch doubts that humans would be inclined to adopt extraterrestrial religions, telling ABC News "I think religion meets very human needs, and unless extraterrestrials can provide a replacement for it, I don't think religion is going to go away," and adding, "if there are incredibly advanced civilizations with a belief in God, I don't think Richard Dawkins will start believing." Political According to experts such as Niklas Hedman, executive director of UN Office for Outer Space Affairs, there are "no international agreements or mechanisms in place for how humanity would handle an encounter with extraterrestrial intelligence". 
Tim Folger speculates that news of radio contact with an extraterrestrial civilization would prove impossible to suppress and would travel rapidly, though Cold War scientific literature on the subject contradicts this. Media coverage of the discovery would probably die down quickly, though, as scientists began to decipher the message and learn its true impact. Different branches of government (for example legislative, executive, and judiciary) may pursue their own policies, potentially giving rise to power struggles. Even in the event of a single contact with no follow-up, radio contact may prompt fierce disagreements as to which bodies have the authority to represent humanity as a whole. Michaud hypothesizes that the fear arising from direct contact may cause nation-states to put aside their conflicts and work together for the common defense of humanity. Apart from the question of who would represent the Earth as a whole, contact could create other international problems, such as the degree of involvement of governments foreign to the one whose radio astronomers received the signal. The United Nations discussed various issues of foreign relations immediately before the launch of the Voyager probes, which in 2012 left the Solar System carrying a golden record in case they are found by extraterrestrial intelligence. Among the issues discussed were what messages would best represent humanity, what format they should take, how to convey the cultural history of the Earth, and what international groups should be formed to study extraterrestrial intelligence in greater detail. According to Luca Codignola of the University of Genoa, contact with a powerful extraterrestrial civilization is comparable to occasions where one powerful civilization destroyed another, such as the arrival of Christopher Columbus and Hernán Cortés into the Americas and the subsequent destruction of the indigenous civilizations and their ways of life. However, the applicability of such a model to contact with extraterrestrial civilizations, and that specific interpretation of the arrival of the European colonists to the Americas, have been disputed. Even so, any large difference between the power of an extraterrestrial civilization and our own could be demoralizing and potentially cause or accelerate the collapse of human society. Being discovered by a "superior" extraterrestrial civilization, and continued contact with it, might have psychological effects that could destroy a civilization, as is claimed to have happened in the past on Earth. Even in the absence of close contact between humanity and extraterrestrials, high-information messages from an extraterrestrial civilization to humanity have the potential to cause a great cultural shock. Sociologist Donald Tarter has conjectured that knowledge of extraterrestrial culture and theology has the potential to compromise human allegiance to existing organizational structures and institutions. The cultural shock of meeting an extraterrestrial civilization may be spread over decades or even centuries if an extraterrestrial message to humanity is extremely difficult to decipher. A study suggests there may be a threat from the perception by state actors (or their subsequent actions based on this perception) that other state-level actors could seek to gain and achieve an information monopoly on communications with an extraterrestrial intelligence. 
It recommends transparency and data sharing, further development of postdetection protocols , and better education of policymakers in this space. Legal Contact with extraterrestrial civilizations would raise legal questions, such as the rights of the extraterrestrial beings. An extraterrestrial arriving on Earth might only have the protection of animal cruelty statutes. Much as various classes of human being, such as women, children, and indigenous people, were initially denied human rights, so might extraterrestrial beings, who could therefore be legally owned and killed. If such a species were not to be treated as a legal animal, there would arise the challenge of defining the boundary between a legal person and a legal animal, considering the numerous factors that constitute intelligence. Some ethicists are considering "how the rights of a completely unfamiliar alien species would fit into our legal and ethical frameworks" and there is a case for "human rights" to evolve into "sentient rights". Freitas considers that even if an extraterrestrial being were to be afforded legal personhood, problems of nationality and immigration would arise. An extraterrestrial being would not have a legally recognized earthly citizenship, and drastic legal measures might be required in order to account for the technically illegal immigration of extraterrestrial individuals. If contact were to take place through electromagnetic signals, these issues would not arise. Rather, issues relating to patent and copyright law regarding who, if anyone, has rights to the information from the extraterrestrial civilization would be the primary legal problem. Scientific and technological The scientific and technological impact of extraterrestrial contact through electromagnetic waves would probably be quite small, especially at first. However, if the message contains a large amount of information, deciphering it could give humans access to a galactic heritage perhaps predating the formation of the Solar System, which may greatly advance our technology and science. A possible negative effect could be to demoralize research scientists as they come to know that what they are researching may already be known to another civilization. On the other hand, extraterrestrial civilizations with malicious intent could send (unfiltered) information that could enable or facilitate human civilization to destroy itself, such as powerful computer viruses, knowledge to build an advanced artificial intelligence or information on how to make extremely potent weapons that humans would not yet be able to use responsibly. While the motives for such an action are unknown, it may require minimal energy use on the part of the extraterrestrials. It may also be possible that such is sent without malicious intent. According to Musso, however, computer viruses in particular will be nearly impossible unless extraterrestrials possess detailed knowledge of human computer architectures, which would only happen if a human message sent to the stars were protected with little thought to security. Even a virtual machine on which extraterrestrials could run computer programs could be designed specifically for the purpose, bearing little relation to computer systems commonly used on Earth. 
In addition, humans could send messages to extraterrestrials detailing that they do not want access to the Encyclopædia Galactica until they have reached a suitable level of advancement, thus possibly raising chances that harmful impacts of technology from recipient extraterrestrials are mitigated. Extraterrestrial technology could have profound impacts on the nature of human culture and civilization. Just as television provided a new outlet for a wide variety of political, religious, and social groups, and as the printing press made the Bible available to the common people of Europe, allowing them to interpret it for themselves, so an extraterrestrial technology might change humanity in ways not immediately apparent. Harrison speculates that a knowledge of extraterrestrial technologies could increase the gap between scientific and cultural progress, leading to societal shock and an inability to compensate for negative effects of technology. He gives the example of improvements in agricultural technology during the Industrial Revolution, which displaced thousands of farm laborers until society could retrain them for jobs suited to the new social order. Contact with an extraterrestrial civilization far more advanced than humanity could cause a much greater shock than the Industrial Revolution, or anything previously experienced by humanity. Michaud suggests that humanity could be impacted by an influx of extraterrestrial science and technology in the same way that medieval European scholars were impacted by the knowledge of Arab scientists. Humanity might at first revere the knowledge as having the potential to advance the human species, and might even feel inferior to the extraterrestrial species, but would gradually grow in arrogance as it gained more and more intimate knowledge of the science, technology, and other cultural developments of an advanced extraterrestrial civilization. The discovery of extraterrestrial intelligence would have various impacts on biology and astrobiology. The discovery of extraterrestrial life in any form, intelligent or non-intelligent, would give humanity greater insight into the nature of life on Earth and would improve the conception of how the tree of life is organized. Human biologists could possibly learn about extraterrestrial biochemistry and observe how it differs from that found on Earth. This knowledge could help human civilization to learn which aspects of life are common throughout the universe and which are possibly specific to Earth. Worldviews Some have argued that confirmed reliable detection of extraterrestrial intelligence or contact may be one of the biggest moments in human history and would have major implications for humanity including its contemporary prevalent worldviews, not just from implications within the fields of theology and science , similar to the paradigm shift away from geocentrism as a dominant element of human worldviews. Harvard astronomer and lead scientist of The Galileo Project, Avi Loeb, has argued that humanity is not ready to adopt a sense of what he calls "cosmic modesty" and that this could change if the project detects "relics" of more advanced civilizations. Loeb postulates that if we find that we "are not the smartest kid on the cosmic block, it will give us a different perspective" – such as the way we think about our place in the universe, for example with relevance to prevalent religious worldviews, in which humans may often be considered unique or exceptional. According to Major John R. 
King, potential sociological consequences of alien contact may include (1) Initial shock and consternation (2) Loss or reduction of ego (3) Modification of human values (4) Decrease in status of [certain] scientists and (5) Reevaluation of religions. The "mediocrity principle" which claims that "there is nothing special about Earth's status or position in the Universe" could present a great challenge to Abrahamic religions, which "teach that human beings are purposefully created by God and occupy a privileged position in relation to other creatures", albeit some have argued that "discovery of life elsewhere in the Universe would not compromise God's love for Earth life" despite there being no "positive affirmation of alien life" in popular religious texts such as the bible and that other civilisations may be "completely unaware of Jesus' story" and may have no such popular story from their own past. There is widespread belief that religions would adapt to contact. Ethics Astroethics refers to the contemplation and development of ethical standards for a variety of outer space issues, including questions of how to interact remotely or in close encounters and concerns not only humans' ethics but also ethics of non-human intelligences, including whether they all afford us rights (and which each or overall). Ecological and biological-warfare impacts An extraterrestrial civilization might bring to Earth pathogens or invasive life forms that do not harm its own biosphere. Alien pathogens could decimate the human population, which would have no immunity to them, or they might use terrestrial livestock or plants as hosts, causing indirect harm to humans. Invasive organisms brought by extraterrestrial civilizations could cause great ecological harm because of the terrestrial biosphere's lack of defenses against them. On the other hand, pathogens and invasive species of extraterrestrial origin might differ enough from terrestrial organisms in their biology to have no adverse effects. Furthermore, pathogens and parasites on Earth are generally suited to only a small and exclusive set of environments, to which extraterrestrial pathogens would have had no opportunity to adapt. If an extraterrestrial civilization bearing malice towards humanity gained sufficient knowledge of terrestrial biology and weaknesses in the immune systems of terrestrial biota, it might be able to create extremely potent biological weapons. Even a civilization without malicious intent could inadvertently cause harm to humanity by not taking account of all the risks of their actions. According to Baum, even if an extraterrestrial civilization were to communicate using electromagnetic signals alone, it could send humanity information with which humans themselves could create lethal biological weapons. See also Archaeology, Anthropology, and Interstellar Communication Relative species abundance References Notes Further reading External links SETI Institute Cultural Aspects of SETI Introduction to ExtraTerrestrial Intelligence Search for extraterrestrial intelligence Extraterrestrial life Cultural anthropology Religion and science Global culture Extraterrestrial Contact
Potential cultural impact of extraterrestrial contact
[ "Astronomy", "Biology" ]
10,076
[ "Biological hypotheses", "Extraterrestrial life", "Astronomical controversies", "Hypothetical life forms" ]
40,436,928
https://en.wikipedia.org/wiki/Immunogenic%20cell%20death
Immunogenic cell death is any type of cell death eliciting an immune response. Both accidental cell death and regulated cell death can result in immune response. Immunogenic cell death contrasts to forms of cell death (apoptosis, autophagy or others) that do not elicit any response or even mediate immune tolerance. The name 'immunogenic cell death' is also used for one specific type of regulated cell death that initiates an immune response after stress to endoplasmic reticulum. Types Immunogenic cell death types are divided according to molecular mechanisms leading up to, during and following the death event. The immunogenicity of a specific cell death is determined by antigens and adjuvant released during the process. Accidental cell death Accidental cell death is the result of physical, chemical or mechanical damage to a cell, which exceeds its repair capacity. It is an uncontrollable process, leading to loss of membrane integrity. The result is the spilling of intracellular components, which may mediate an immune response. Immunogenic cell death or ICD ICD or immunogenic apoptosis is a form of cell death resulting in a regulated activation of the immune response. This cell death is characterized by apoptotic morphology, maintaining membrane integrity. Endoplasmic reticulum (ER) stress is generally recognised as a causative agent for ICD, with high production of reactive oxygen species (ROS). Two groups of ICD inducers are recognised. Type I inducers cause stress to the ER only as collateral damage, mainly targeting DNA or chromatin maintenance apparatus or membrane components. Type II inducers target the ER specifically. ICD is induced by some cytostatic agents such as anthracyclines, oxaliplatin and bortezomib, or radiotherapy and photodynamic therapy (PDT). Some viruses can be listed among biological causes of ICD. Just as immunogenic death of infected cells induces immune response to the infectious agent, immunogenic death of cancer cells can induce an effective antitumor immune response through activation of dendritic cells (DCs) and consequent activation of specific T cell response. This effect is used in antitumor therapy. ICD is characterized by secretion of damage-associated molecular patterns (DAMPs).There are three most important DAMPs which are exposed to the cell surface during ICD. Calreticulin (CRT), one of the DAMP molecules which is normally in the lumen of the endoplasmic reticulum, is translocated after the induction of immunogenic death to the surface of dying cell. There it functions as an "eat me" signal for professional phagocytes. Other important surface exposed DAMPs are heat-shock proteins (HSPs), namely HSP70 and HSP90, which under stress condition also translocate to the plasma membrane. On the cell surface they have an immunostimulatory effect, based on their interaction with number of antigen-presenting cell (APC) surface receptors like CD91 and CD40 and also facilitate crosspresentation of antigens derived from tumour cells on MHC class I molecule, which then leads to the CD8+ T cell response. Other important DAMPs, characteristic for ICD are secreted HMGB1 and ATP. HMGB1 is considered to be a marker of late ICD and its release to the extracellular space seems to be required for the optimal presentation of antigens by dendritic cells. It binds to several pattern recognition receptors (PRRs) such as Toll-like receptors (TLR) 2 and 4, which are expressed on APCs. 
ATP released during immunogenic cell death functions as a "find-me" signal for phagocytes when secreted and induces their attraction to the site of ICD. Also, binding of ATP to purinergic receptors on target cells has immunostimulatory effect through inflammasome activation. DNA and RNA molecules released during ICD activate TLR3 and cGAS responses, both in the dying cell and in phagocytes. The concept of using ICD in antitumor therapy has started taking shape with the identification of some inducers mentioned above, which have a potential as anti-tumor vaccination strategies. The use of ICD inducers alone or in combination with other anticancer therapies (targeted therapies, immunotherapies) has been effective in mouse models of cancer and is being tested in the clinic. Necroptosis Another type of regulated cell death that induces an immune response is necroptosis. Necroptosis is characterized by necrotic morphology. This type of cell death is induced by extracellular and intracellular microtraumas detected by death or damage receptors. For example, FAS, TNFR1 and pattern recognition receptors may initiate necroptosis. These activation inducers converge on receptor-interacting serine/threonine-protein kinase 3 (RIPK3) and mixed lineage kinase domain like pseudokinase (MLKL). Sequential activation of these proteins leads to membrane permeabilization. Pyroptosis Pyroptosis is a distinct type of regulated cell death, exhibiting a necrotic morphology and cellular content spilling. This type of cell death is induced most commonly in response to microbial pathogen infection, such as infection with Salmonella, Francisella, or Legionella. Host factors, such as those produced during myocardial infarction, may also induce pyroptosis. Cytosolic presence of bacterial metabolites or structures, termed pathogen associated molecular patterns (PAMPs), initiates the pyroptotic response. Detection of such PAMPs by some members of Nod-like receptor family (NLRs), absent in melanoma 2 (AIM2) or pyrin leads to the assembly of an inflammasome structure and caspase 1 activation. So far, the cytosolic PRRs that are known to induce inflammasome formation are NLRP3, NLRP1, NLRC4, AIM2 and Pyrin. These proteins contain oligomerization NACHT domains, CARD domains and some also contain similar pyrin (PYR) domains. Caspase 1, the central activator protease of pyroptosis, attaches to the inflammasome via the CARD domains or a CARD/PYR-containing adaptor protein called apoptosis-associated speck-like protein (ASC). Activation of caspase 1 (CASP1) is central to pyroptosis and when activated mediates the proteolytic activation of other caspases. In humans, other involved caspases are CASP3, CASP4 and CASP5, in mice CASP3 and CASP11. Precursors of IL-1β and IL-18 are among the most important CASP1 substrates, and the secretion of the cleavage products induces the potent immune response to pyroptosis. The release of IL-1β and IL-18 occurs before any morphological changes occur in the cell. The cell dies by spilling its contents, mediating the distribution of further immunogenic molecules. Among these, HMGB1, S100 proteins and IL-1α are important DAMPs. Pyroptosis has some characteristics similar with apoptosis, an immunologically inert cell death. Primarily, both these processes are caspase-dependent, although each process utilizes specific caspases. Chromatin condensation and fragmentation occurs during pyroptosis, but the mechanisms and outcome differ from those during apoptosis. 
In contrast to apoptosis, plasma membrane integrity is not maintained in pyroptosis, while mitochondrial membrane integrity is maintained and no spilling of cytochrome c occurs. Ferroptosis Ferroptosis is also a regulated form of cell death. The process is initiated in response to oxidative stress and lipid peroxidation and is dependent on iron availability. Necrotic morphology is typical of ferroptotic cells. Peroxidation of lipids is catalyzed mainly by lipoxygenases, but also by cyclooxygenases. Lipid peroxidation can be inhibited in the cell by glutathione peroxidase 4 (GPX4), making the balance of these enzymes a central regulator of ferroptosis. Chelation of iron also inhibits ferroptosis, possibly by depleting iron from lipoxygenases. Spilling of cytoplasmic components during cell death mediates the immunogenicity of this process. MPT-driven necrosis Mitochondrial permeability transition (MPT)-driven cell death is also a form of regulated cell death and manifests a necrotic morphology. Oxidative stress or Ca2+ imbalance are important causes of MPT-driven necrosis. The main event in this process is the loss of inner mitochondrial membrane (IMM) impermeability. The precise mechanisms leading to the formation of permeability-transition pore complexes, which assemble between the inner and outer mitochondrial membranes, are still unknown. Peptidylprolyl isomerase F (CYPD) is the only protein known to be required for MPT-driven necrosis. The loss of IMM impermeability is followed by membrane potential dissipation and disintegration of both mitochondrial membranes. Parthanatos Parthanatos is also a regulated form of cell death with necrotic morphology. It is induced under a variety of stressing conditions, but most importantly as a result of long-term alkylating DNA damage, oxidative stress, hypoxia, hypoglycemia and an inflammatory environment. This cell death is initiated by the DNA damage response components, mainly poly(ADP-ribose) polymerase 1 (PARP1). PARP1 hyperactivation leads to ATP depletion, redox and bioenergetic collapse, as well as accumulation of poly(ADP-ribose) polymers and poly(ADP-ribosyl)ated proteins, which bind to apoptosis-inducing factor mitochondria-associated 1 (AIF). The outcome is membrane potential dissipation and mitochondrial outer membrane permeabilization. Chromatin condensation and fragmentation by AIF is characteristic of parthanatos. Interconnection of the parthanatotic process with some members of the necroptotic apparatus has been proposed, as RIPK3 stimulates PARP1 activity. This type of cell death has been linked to some pathologies, such as some cardiovascular and renal disorders, diabetes, cerebral ischemia, and neurodegeneration. Lysosome-dependent cell death Lysosome-dependent cell death is a type of regulated cell death that is dependent on permeabilization of lysosomal membranes. The morphology of cells dying by this pathway is variable, with apoptotic, necrotic or intermediate morphologies observed. It is a type of intracellular pathogen defense, but it is also connected with several pathophysiological processes, such as tissue remodeling or inflammation. Lysosome permeabilization initiates the cell death process, sometimes along with mitochondrial membrane permeabilization. NETotic cell death NETotic cell death is a specific type of cell death typical of neutrophils, but also observed in basophils and eosinophils. The process is characterized by extrusion of chromatin fibers bound into neutrophil extracellular traps (NETs). 
NET formation is generally induced in response to microbial infections, but it also occurs pathologically under sterile conditions in some inflammatory diseases. ROS inside the cell trigger the release of elastase (ELANE) and myeloperoxidase (MPO), their translocation to the nucleus, and cytoskeleton remodeling. Some interaction with the necroptotic apparatus (RIPK and MLKL) has been suggested. References Cell biology
Immunogenic cell death
[ "Biology" ]
2,530
[ "Cell biology" ]
40,439,442
https://en.wikipedia.org/wiki/Molecular%20diagnostics
Molecular diagnostics is a collection of techniques used to analyze biological markers in the genome and proteome, and how cells express their genes as proteins, applying molecular biology to medical testing. In medicine the technique is used to diagnose and monitor disease, detect risk, and decide which therapies will work best for individual patients, and in agricultural biosecurity similarly to monitor crop and livestock disease, estimate risk, and decide what quarantine measures must be taken. By analysing the specifics of the patient and their disease, molecular diagnostics offers the prospect of personalised medicine. These tests are useful in a range of medical specialties, including infectious disease, oncology, human leucocyte antigen typing (which investigates and predicts immune function), coagulation, and pharmacogenomics (the genetic prediction of which drugs will work best). They overlap with clinical chemistry (medical tests on bodily fluids). History The field of molecular biology grew in the late twentieth century, as did its clinical application. In 1980, Yuet Wai Kan et al. suggested a prenatal genetic test for thalassemia that did not rely upon DNA sequencing, then in its infancy, but on restriction enzymes that cut DNA where they recognised specific short sequences, creating different lengths of DNA strand depending on which allele (genetic variation) the fetus possessed. In the 1980s, the phrase was used in the names of companies such as Molecular Diagnostics Incorporated and Bethesda Research Laboratories Molecular Diagnostics. During the 1990s, the identification of newly discovered genes and new techniques for DNA sequencing led to the appearance of a distinct field of molecular and genomic laboratory medicine; in 1995, the Association for Molecular Pathology (AMP) was formed to give it structure. In 1999, the AMP co-founded The Journal of Molecular Diagnostics. Informa Healthcare launched Expert Review of Molecular Diagnostics in 2001. From 2002 onwards, the HapMap Project aggregated information on the one-letter genetic differences that recur in the human population (the single nucleotide polymorphisms) and their relationship with disease. By 2012, molecular diagnostic techniques for thalassemia used genetic hybridization tests to identify the specific single nucleotide polymorphism causing an individual's disease. As the commercial application of molecular diagnostics has become more important, so has the debate about patenting of the genetic discoveries at its heart. In 1998, the European Union's Directive 98/44/EC clarified that patents on DNA sequences were allowable. In 2010 in the US, AMP sued Myriad Genetics to challenge the latter's patents regarding two genes, BRCA1 and BRCA2, which are associated with breast cancer. In 2013, the U.S. Supreme Court partially agreed, ruling that a naturally occurring gene sequence could not be patented. Techniques Development from research tools The industrialisation of molecular biology assay tools has made it practical to use them in clinics. Miniaturisation into a single handheld device can bring medical diagnostics into the clinic and into the office or home. The clinical laboratory requires high standards of reliability; diagnostics may require accreditation or fall under medical device regulations. Nevertheless, some US clinical laboratories have used assays sold for "research use only". 
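The restriction-enzyme approach described in the History section above can be illustrated with a short sketch: an enzyme cuts DNA wherever its recognition sequence occurs, so an allele that gains or loses a recognition site produces a different pattern of fragment lengths. The sequences, the recognition site, and the cutting convention below are simplified, hypothetical choices for illustration only, not the actual assay of Kan et al.

```python
# Illustrative sketch (not the actual 1980 assay): how a restriction-fragment
# approach can distinguish two alleles. A single-base change that destroys a
# recognition site changes the fragment lengths produced by the enzyme.

def fragment_lengths(dna, site):
    """Cut `dna` at every occurrence of the recognition `site` and
    return the resulting fragment lengths (cut placed just before the site)."""
    positions = []
    start = 0
    while True:
        idx = dna.find(site, start)
        if idx == -1:
            break
        positions.append(idx)
        start = idx + 1
    cuts = [0] + positions + [len(dna)]
    return [cuts[i + 1] - cuts[i] for i in range(len(cuts) - 1)]

# Hypothetical 40-bp sequences: allele B carries a single substitution
# that removes the central GAATTC site, so the enzyme no longer cuts there.
ALLELE_A = "ATCGATCGATCGGAATTCATCGATCGATCGATCGATCGAT"
ALLELE_B = "ATCGATCGATCGGAATCCATCGATCGATCGATCGATCGAT"

if __name__ == "__main__":
    for name, seq in [("allele A", ALLELE_A), ("allele B", ALLELE_B)]:
        print(name, fragment_lengths(seq, "GAATTC"))
    # Allele A yields two fragments, allele B a single uncut fragment, so the
    # banding pattern on a gel reveals which allele is present.
```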
Laboratory processes need to adhere to regulations, such as the Clinical Laboratory Improvement Amendments, Health Insurance Portability and Accountability Act, Good Laboratory Practice, and Food and Drug Administration specifications in the United States. Laboratory Information Management Systems help by tracking these processes. Regulation applies to both staff and supplies. Twelve US states require molecular pathologists to be licensed; several boards such as the American Board of Medical Genetics and the American Board of Pathology certify technologists, supervisors, and laboratory directors. Automation and sample barcoding maximise throughput and reduce the possibility of error or contamination during manual handling and results reporting. Single devices to do the assay from beginning to end are now available. Assays Molecular diagnostics uses in vitro biological assays such as PCR-ELISA or fluorescence in situ hybridization. The assay detects a molecule, often in low concentrations, that is a marker of disease or risk in a sample taken from a patient. Preservation of the sample before analysis is critical. Manual handling should be minimised. The fragile RNA molecule poses certain challenges. As part of the cellular process of expressing genes as proteins, it offers a measure of gene expression, but it is vulnerable to hydrolysis and breakdown by ever-present RNAse enzymes. Samples can be snap-frozen in liquid nitrogen or incubated in preservation agents. Because molecular diagnostics methods can detect sensitive markers, these tests are less intrusive than a traditional biopsy. For example, because cell-free nucleic acids exist in human plasma, a simple blood sample can be enough to sample genetic information from tumours, transplants or an unborn fetus. Many, but not all, molecular diagnostics methods based on nucleic acid detection use polymerase chain reaction (PCR) to vastly increase the number of nucleic acid molecules, thereby amplifying the target sequence(s) in the patient sample. PCR is a method in which a template DNA is amplified using synthetic primers, a DNA polymerase, and dNTPs. The mixture is cycled between at least two temperatures: a high temperature for denaturing double-stranded DNA into single-stranded molecules and a low temperature for the primer to hybridize to the template and for the polymerase to extend the primer. Each temperature cycle theoretically doubles the quantity of target sequence. Detection of sequence variations using PCR typically involves the design and use of oligonucleotide reagents that amplify the variant of interest more efficiently than the wildtype sequence. PCR is currently the most widely used method for detection of DNA sequences. The detection of the marker might use real-time PCR, direct sequencing, microarray chips (prefabricated chips that test many markers at once), or MALDI-TOF. The same principle applies to the proteome and the genome. High-throughput protein arrays can use complementary DNA or antibodies to bind and hence can detect many different proteins in parallel. Molecular diagnostic tests vary widely in sensitivity, turnaround time, cost, coverage and regulatory approval. They also vary in the level of validation applied in the laboratories using them. Hence, robust local validation in accordance with the regulatory requirements and use of appropriate controls is required, especially where the result may be used to inform a patient treatment decision. 
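The doubling arithmetic mentioned above (each thermal cycle at most doubles the target) can be sketched in a few lines. This is a simplified, idealized model for illustration; the starting copy number and the 90% efficiency figure are arbitrary assumptions, and real reactions plateau as reagents are consumed.

```python
# Back-of-the-envelope sketch of the idealized doubling described above.
# Real reactions fall short of 100% efficiency, so this is an upper bound.

def pcr_copies(initial_copies, cycles, efficiency=1.0):
    """Expected copy number after `cycles` of PCR.
    `efficiency` is the fraction of templates duplicated per cycle (0-1)."""
    return initial_copies * (1 + efficiency) ** cycles

if __name__ == "__main__":
    start = 100  # hypothetical number of template molecules in the sample
    for cycles in (10, 20, 30):
        ideal = pcr_copies(start, cycles)           # perfect doubling
        realistic = pcr_copies(start, cycles, 0.9)  # assumed 90% efficiency
        print(f"{cycles} cycles: ideal {ideal:.2e}, at 90% efficiency {realistic:.2e}")
    # With perfect doubling, 30 cycles multiply the target about 2^30 (~1e9) fold,
    # which is why PCR can detect markers present at very low concentrations.
```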
Benefits Prenatal Conventional prenatal tests for chromosomal abnormalities such as Down syndrome rely on analysing the number and appearance of the chromosomes (the karyotype). Molecular diagnostic tests such as microarray comparative genomic hybridisation test a sample of DNA instead and, because of cell-free DNA in plasma, could be less invasive, but as of 2013 they were still an adjunct to the conventional tests. Treatment Some of a patient's single nucleotide polymorphisms (slight differences in their DNA) can help predict how quickly they will metabolise particular drugs; this is called pharmacogenomics. For example, the enzyme CYP2C19 metabolises several drugs, such as the anti-clotting agent clopidogrel, into their active forms. Some patients possess polymorphisms in specific places on the 2C19 gene that make them poor metabolisers of those drugs; physicians can test for these polymorphisms and find out whether the drugs will be fully effective for that patient. Advances in molecular biology have helped show that some syndromes that were previously classed as a single disease are actually multiple subtypes with entirely different causes and treatments. Molecular diagnostics can help diagnose the subtype (for example, of infections and cancers) or support the genetic analysis of a disease with an inherited component, such as Silver-Russell syndrome. Infectious disease Molecular diagnostics are used to identify infectious diseases such as chlamydia, influenza virus and tuberculosis; or specific strains such as H1N1 virus or SARS-CoV-2. Genetic identification can be swift; for example, a loop-mediated isothermal amplification test diagnoses the malaria parasite and is rugged enough for developing countries. But despite these advances in genome analysis, in 2013 infections were still more often identified by other means (their proteome, bacteriophage, or chromatographic profile). Molecular diagnostics are also used to understand the specific strain of the pathogen, for example by detecting which drug resistance genes it possesses, and hence which therapies to avoid. In addition, assays based on metagenomic next generation sequencing can be implemented to identify pathogenic organisms without bias. Disease risk management A patient's genome may include an inherited or random mutation which affects the probability of developing a disease in the future. For example, Lynch syndrome is a genetic disease that predisposes patients to colorectal and other cancers; early detection can lead to close monitoring that improves the patient's chances of a good outcome. Cardiovascular risk is indicated by biological markers, and screening can measure the risk that a child will be born with a genetic disease such as cystic fibrosis. Genetic testing is ethically complex: patients may not want the stress of knowing their risk. In countries without universal healthcare, a known risk may raise insurance premiums. Cancer Cancer is a change in the cellular processes that cause a tumour to grow out of control. Cancerous cells sometimes have mutations in oncogenes, such as KRAS and CTNNB1 (β-catenin). Analysing the molecular signature of cancerous cells (the DNA and its levels of expression via messenger RNA) enables physicians to characterise the cancer and to choose the best therapy for their patients. As of 2010, assays that incorporate an array of antibodies against specific protein marker molecules are an emerging technology; there are hopes for these multiplex assays that could measure many markers at once. 
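Returning to the CYP2C19 example in the Treatment paragraph above, genotype-to-phenotype prediction is in essence a lookup from the pair of alleles a patient carries to an expected metaboliser status. The sketch below is a deliberately simplified illustration of that idea; the allele table is an assumption reduced to a handful of well-known star alleles, and it is not clinical guidance.

```python
# Simplified illustration of a genotype-to-phenotype lookup for CYP2C19.
# The allele classifications below are a reduced, illustrative subset only
# and must not be used for clinical decisions.

ALLELE_FUNCTION = {
    "*1": "normal",        # reference allele
    "*2": "no_function",   # common loss-of-function variant
    "*3": "no_function",
    "*17": "increased",    # increased-function promoter variant
}

def metabolizer_status(allele1, allele2):
    funcs = sorted(ALLELE_FUNCTION[a] for a in (allele1, allele2))
    if funcs == ["no_function", "no_function"]:
        return "poor metabolizer"
    if "no_function" in funcs:
        return "intermediate metabolizer"
    if "increased" in funcs:
        return "rapid/ultrarapid metabolizer"
    return "normal metabolizer"

if __name__ == "__main__":
    for genotype in [("*1", "*1"), ("*1", "*2"), ("*2", "*3"), ("*1", "*17")]:
        print(genotype, "->", metabolizer_status(*genotype))
    # A *2/*3 carrier would be flagged as a poor metabolizer, suggesting that a
    # prodrug activated by CYP2C19 (such as clopidogrel) may be less effective.
```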
Other potential future biomarkers include microRNA molecules, which cancerous cells express more of than healthy ones. Cancer is a disease with many molecular causes and constant evolution, and there is heterogeneity of the disease even within an individual. Molecular studies of cancer have proved the significance of driver mutations in the growth and metastasis of tumors. Many technologies for the detection of sequence variations have been developed for cancer research. These technologies can generally be grouped into three approaches: polymerase chain reaction (PCR), hybridization, and next-generation sequencing (NGS). Many PCR and hybridization assays have been approved by the FDA as in vitro diagnostics; NGS assays, however, are still at an early stage in clinical diagnostics. In molecular diagnostic testing for cancer, one of the significant issues is the detection of DNA sequence variants. Tumor biopsy samples used for diagnostics may contain as little as 5% of the target variant compared to the wildtype sequence. Also, for noninvasive applications from peripheral blood or urine, the DNA test must be specific enough to detect mutations at variant allele frequencies of less than 0.1%. The amplification-refractory mutation system (ARMS), an optimization of traditional PCR, is one method for detecting DNA sequence variants in cancer. The principle behind ARMS is that the enzymatic extension activity of DNA polymerases is highly sensitive to mismatches near the 3' end of the primer. Many different companies have developed diagnostic tests based on ARMS PCR primers. For instance, Qiagen therascreen, Roche cobas and Biomerieux THxID have developed FDA-approved PCR tests for detecting lung cancer, colon cancer and metastatic melanoma mutations in the KRAS, EGFR and BRAF genes. Their IVD kits were validated on genomic DNA extracted from FFPE tissue. There are also microarrays that use hybridization to diagnose cancer. More than a million different probes can be synthesized on an array with Affymetrix's GeneChip technology, with a detection limit of one to ten copies of mRNA per well. Optimized microarrays are typically considered to produce repeatable relative quantitation of different targets. The FDA has approved a number of diagnostic assays utilizing microarrays: Agendia's MammaPrint assay can inform the breast cancer recurrence risk by profiling the expression of 70 genes related to breast cancer; the Autogenomics INFINITI CYP2C19 assay can profile genetic polymorphisms that have a major impact on therapeutic response to antidepressants; and Affymetrix's CytoScan Dx can evaluate intellectual disabilities and congenital disorders by analyzing chromosomal mutations. In the future, diagnostic tools for cancer are likely to focus on next-generation sequencing; by utilizing DNA and RNA sequencing for cancer diagnostics, the field of molecular diagnostic tools is expected to keep improving. Although NGS cost has been reduced dramatically over the past 10 years, by roughly 100-fold, sequencing remains at least 6 orders of magnitude away from deep sequencing at a whole-genome level. Currently, Ion Torrent has developed NGS panels based on translational AmpliSeq, for example the Oncomine Comprehensive Assay, which focus on deep sequencing of cancer-related genes to detect rare sequence variants. 
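To give a feel for the 0.1% variant allele frequency and the deep-sequencing requirement mentioned above, a rough binomial calculation shows how many reads must cover a locus before a rare variant is likely to be seen at all. The depths and the five-read calling threshold below are arbitrary assumptions for illustration, and the model ignores sequencing error, which is the dominant practical limit at such frequencies.

```python
# Rough binomial sketch of why low variant allele frequencies demand deep
# sequencing. Sequencing error is ignored here, although in practice it is
# the harder limit at frequencies near 0.1%.

from math import comb

def prob_at_least_k_variant_reads(depth, vaf, k):
    """Probability of seeing >= k variant-supporting reads at a locus
    covered by `depth` reads when the true variant allele frequency is `vaf`."""
    p_less = sum(comb(depth, i) * vaf**i * (1 - vaf)**(depth - i) for i in range(k))
    return 1 - p_less

if __name__ == "__main__":
    vaf = 0.001    # 0.1% variant allele frequency
    min_reads = 5  # hypothetical caller threshold for a confident call
    for depth in (500, 5_000, 20_000):
        p = prob_at_least_k_variant_reads(depth, vaf, min_reads)
        print(f"depth {depth}: P(>= {min_reads} variant reads) = {p:.3f}")
    # At 500x depth the expected number of variant reads is only 0.5, so a
    # 0.1% variant is essentially undetectable; tens of thousands of reads
    # per locus are needed before such calls become statistically plausible.
```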
Molecular diagnostic tools can also be used for cancer risk assessment. For example, the BRCA1/2 test by Myriad Genetics assesses women for lifetime risk of breast cancer. Also, some cancers do not always present with clear symptoms; analysing people before obvious symptoms appear can therefore detect cancer at an early stage. For example, the Cologuard test may be used to screen people over 55 years old for colorectal cancer. Cancer is a disease that develops over a long timescale with various progression steps, and molecular diagnostic tools can be used for prognosis of cancer progression. For example, the OncoType Dx test by Genomic Health can estimate the risk of breast cancer recurrence; by examining RNA expression levels in breast cancer biopsy tissue, it can inform patients whether to seek chemotherapy. With rising government support for DNA molecular diagnostics, an increasing number of clinical DNA detection assays for cancer are expected to become available soon. Research in cancer diagnostics is developing fast, with the goals of lower cost, shorter turnaround times and simpler methods for doctors and patients. See also Molecular medicine (the broader field of the molecular understanding of disease) Molecular pathology Laboratory Developed Test Pathogenesis Pathogenomics Pathology Precision medicine Personalized medicine References Biotechnology Medical tests Medical genetics Pathogen genomics
Molecular diagnostics
[ "Biology" ]
3,021
[ "Biotechnology", "Molecular genetics", "DNA sequencing", "nan", "Pathogen genomics" ]
40,440,002
https://en.wikipedia.org/wiki/Russian%20Journal%20of%20Physical%20Chemistry%20B
The Russian Journal of Physical Chemistry B () is an English-language translation of the eponymous Russian-language peer-reviewed scientific journal published by MAIK Nauka/Interperiodica and Springer Science+Business Media. The journal covers all aspects of chemical physics and combustion. The editor-in-chief is Anatoly L. Buchachenko (Russian Academy of Sciences). Abstracting and indexing Current Contents/Physical, Chemical and Earth Sciences Reaction Citation Index Science Citation Index Expanded Journal Citation Reports/Science Edition Chemical Abstracts Service Scopus Inspec See also Russian Journal of Physical Chemistry A References External links Academic journals established in 1982 Bimonthly journals English-language journals Nauka academic journals Physical chemistry journals Russian Academy of Sciences academic journals Russian-language journals Springer Science+Business Media academic journals
Russian Journal of Physical Chemistry B
[ "Chemistry" ]
163
[ "Physical chemistry journals", "Physical chemistry stubs" ]
40,440,122
https://en.wikipedia.org/wiki/Temperature%20anomaly
Temperature anomaly is the difference, positive or negative, of a temperature from a base or reference value, normally chosen as an average of temperatures over a certain reference or base period. In atmospheric sciences, the average temperature is commonly calculated over a period of at least 30 years over a homogeneous geographic region, or globally over the entire planet. Temperatures are obtained from surface and offshore weather stations or inferred from meteorological satellite data. Temperature anomalies can be calculated based on datasets of near-surface and upper-air atmospheric temperatures or sea surface temperatures. Description Temperature anomalies are a measure of temperature compared to a reference temperature, which is often calculated as an average of temperatures over a reference period, often called a base period. Records of global average surface temperature are usually presented as anomalies rather than as absolute temperatures. Using reference values computed for distinct areas over the same time period establishes a baseline from which anomalies are calculated, so that the normalized data can be used to compare temperature patterns more accurately to what is normal. For example, sub-global datasets may be for land-only, ocean-only, and hemispheric time series. Anomalies provide a frame of reference that allows more meaningful comparisons between locations and more accurate calculations of temperature trends. Using different base periods does not change the shape of time series charts or affect portrayal of the trends within them. For example, World Meteorological Organization (WMO) policy motivates use of a 30-year base period, whereas for conceptual simplicity a century-long base period is sometimes used to track the big-picture evolution of temperatures across the entire global surface. Different meteorological organizations have used respective base periods for global mean surface temperature datasets, such as 1951–1980 (NASA GISS and Berkeley Earth), 1961–1990 (HadCRUT U.K.), 1901–2000 (NCDC/NOAA), and 1991–2020 (Japan Met). Standard deviation Anomalies alone are not sufficient to characterize the exceptionality of temperature values. The standard deviation, symbolized by a lower-case sigma (σ), quantifies the degree of variation of a dataset's values. For example, a variation of +2 °C can be more significant over a region with normally stable temperatures than a variation of +3 °C over a region with normally large variability. For this purpose, anomalies are often shown as 'standardized anomalies', namely the anomaly divided by the standard deviation. To summarize: the choice of reference period determines the vertical placement of a trace on a graph, and the standard deviation determines how much the trace is "stretched" in the vertical direction on the graph. Forecasting Numerical weather prediction provides the temperature forecast for the next few days or weeks. This can be used to calculate anomalies during these forecast periods. There are two types of forecasts, deterministic and probabilistic, which will give different results. Deterministic data are values obtained by running the forecast model once, with initial conditions determined by data assimilation. Probabilistic data come from forecast ensembles, in which the model (or different models) is run several times with slight variations in the initial conditions each time. Deterministic anomalies have a standard deviation which depends only on the bias of the forecast. 
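As a concrete sketch of the definitions above, the anomaly is simply the observed temperature minus the base-period mean, and the standardized anomaly divides that difference by the base-period standard deviation. The numbers below are made-up monthly values for illustration, not data from any real station.

```python
# Minimal sketch of anomaly and standardized-anomaly calculation.
# `base_period` stands in for e.g. 30 years of observations for one month
# at one station; the values here are invented for illustration.

from statistics import mean, stdev

def anomaly(observed, base_period):
    """Observed temperature minus the base-period mean."""
    return observed - mean(base_period)

def standardized_anomaly(observed, base_period):
    """Anomaly expressed in units of the base-period standard deviation."""
    return anomaly(observed, base_period) / stdev(base_period)

if __name__ == "__main__":
    stable_region = [14.9, 15.0, 15.1, 15.0, 14.8, 15.2]    # small variability
    variable_region = [12.0, 18.0, 14.5, 16.5, 13.0, 17.0]  # large variability

    # A +2 degree departure over the stable region ...
    print(round(standardized_anomaly(17.0, stable_region), 1))
    # ... is more exceptional (more standard deviations) than a +3 degree
    # departure over the highly variable region.
    print(round(standardized_anomaly(18.2, variable_region), 1))
```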
The deviation and the probabilistic anomalies, being calculated from several model solutions, are themselves associated with probabilities of occurrence. See also Extreme weather Heat wave Marine heatwave References Temperature Meteorological concepts Climate history
Temperature anomaly
[ "Physics", "Chemistry" ]
724
[ "Scalar physical quantities", "Thermodynamic properties", "Temperature", "Physical quantities", "SI base quantities", "Intensive quantities", "Thermodynamics", "Wikipedia categories named after physical quantities" ]
40,440,471
https://en.wikipedia.org/wiki/Mouse%20Genetics%20Project
The Mouse Genetics Project (MGP) is a large-scale mutant mouse production and phenotyping programme aimed at identifying new model organisms of disease. Based at the Wellcome Trust Sanger Institute, the project uses knockout mice most of which were generated by the International Knockout Mouse Consortium. For each mutant line, groups of seven male and seven female mice move through a standard analysis pipeline aimed at detecting traits that differ from healthy C57BL/6 mice. The pipeline collects many measurements of viability, fertility, body weight, infection, hearing, morphology, haematology, behaviour, blood chemistry and immunity and compares them to wild type controls using a statistical mixed model. These data are immediately shared among the scientific and medical research community through a bespoke open access database, and summaries are displayed in other online resources, including the Mouse Genome Informatics database and the Wikipedia-based Gene Wiki. As of July 2013, the MGP reports having over 900 mutant lines openly available to the international research community, and have "substantively complete" analysis for over 650 mutant lines, of which over 75 per cent have at least one abnormal phenotype. Among these are new discoveries of genes implicated in disease, including finding: Mutation of SLX4 causes a new type of Fanconi anemia. Nine new genes that influence bone strength. Mutation of CENPJ models Seckel syndrome. SPNS2 is important in mammalian immune system function. MYSM1 is important for hematopoiesis and lymphocyte differentiation. See also International Mouse Phenotyping Consortium SHIRPA References External links The Sanger Mouse Portal, containing all MGP data The MGP Biomart, for extracting phenotypic data Model organism databases Genetically modified organisms Laboratory mouse strains Wellcome Trust Research projects Genetic engineering in the United Kingdom Science and technology in Cambridgeshire South Cambridgeshire District
Mouse Genetics Project
[ "Engineering", "Biology" ]
383
[ "Model organism databases", "Model organisms", "Genetic engineering", "Genetically modified organisms" ]
40,440,492
https://en.wikipedia.org/wiki/Evidence-based%20Toxicology%20Collaboration
The non-profit Evidence-based Toxicology Collaboration (EBTC) comprises a group of scientists and experts with ties to governmental and non-governmental agencies, chemical and pharmaceutical companies, and academia who have banded together to promote the use of what are known as "evidence-based approaches" in toxicology. The discipline of evidence-based toxicology (EBT) is a process for transparently, consistently, and objectively assessing available scientific evidence in order to answer questions in toxicology. EBT has the potential to address concerns in the toxicological community about the limitations of current approaches. These include concerns related to transparency in decision making, synthesis of different types of evidence, and the assessment of bias and credibility. The evidence-based methods and approaches now being proposed for toxicology are widely used in medicine, which is the basis for their nomenclature. The need to improve how the performance of toxicological test methods is assessed was the main impetus for translating these tools to toxicology. Goals and benefits The EBTC's overall goals are to bring together the international toxicology community and to facilitate the use of evidence-based toxicology to inform regulatory, environmental and public health decisions. The group aims to improve public health outcomes and reduce human impact on the environment by bringing evidence-based approaches to the safety sciences. The organization's members envision that as these efforts succeed, all interested parties (including stakeholders in government, industry, academia, and the general public) should have confidence and trust in the process by which scientific evidence is assessed when addressing questions related to the safety of chemicals to human health and the environment. All individuals affiliated with the organization are volunteers, except those serving in the organisation's secretariat, which is sponsored by the Johns Hopkins University's Center for Alternatives to Animal Testing (CAAT). The EBTC's members stress that evidence has always been used in toxicology. The evidence-based approaches that the collaboration is championing have been used in medicine for decades. Evidence-based medicine (EBM) is a widely respected discipline and it has strengthened the scientific foundation of decision-making in clinical medicine by providing a structured way of assessing the evidence bearing on healthcare questions. The EBTC foresees that the evidence-based approach will provide similar benefits to toxicology, especially at a time when remarkable advances in biochemistry and molecular biology are enhancing scientists' ability to understand the nature and mechanisms of the adverse effects that can be caused by chemicals. Origins The EBTC builds upon the outcomes of the First International Forum Toward Evidence-based Toxicology, held in Cernobbio/Como, Italy, on October 15–18, 2007. The forum was motivated by increasing concerns in the scientific community about the limitations of toxicological decision-making. EBT was next a major topic of discussion at a 2010 workshop held at Johns Hopkins University on 21st century validation for 21st century methods. The enthusiasm for EBT at this workshop inspired the EBTC's formation with an inaugural conference on March 10, 2011, as a satellite to the 50th annual Society of Toxicology meeting in Washington, DC. 
At the workshop, speakers presented the concept of EBT as it pertains to decision-making about the utility of new toxicity tests and their implementation into the risk assessment process. Tools The EBTC is translating the tools used in evidence-based medicine (EBM) to toxicology, as well as developing new approaches to respond to the challenges presented by the discipline of toxicology. The primary tool of EBM is the systematic review, which includes a variety of steps: framing the question to be addressed and deciding how relevant studies will be identified and retrieved; determining which studies will be excluded from the analysis, and how the included studies will be appraised for quality/potential for bias; and how the data will be synthesized across studies (e.g., meta-analysis). Scientists have made progress in their efforts to apply systematic reviews to evaluate the evidence for associations between environmental toxicants and human health risks. To date, researchers have shown that important elements of the systematic review methodology established in evidence-based medicine can be adopted into EBT, and a limited number of such studies have been attempted. EBTC scientists are promoting and conducting systematic reviews of toxicological test methods. Organization The EBTC is governed by a board of trustees that has the fundamental responsibilities to provide strategic and fiduciary oversight, and direction. In addition, the Scientific Advisory Council has been established to provide the expertise needed to develop the new EBT methods, to conduct specific projects and to advise the Board and the EBTC Director on new areas of research and other scientific issues of relevance to the broader toxicology community. The organization also has working groups charged with producing guidance documents tailored to toxicology on conducting systematic reviews and their components. Working groups are also focused on the application of evidence-based tools to various toxicological practices. Scientists affiliated with the EBTC are conducting pilot studies to demonstrate the value of evidence-based approaches for helping researchers evaluate new laboratory tools and tests for assessing chemical toxicity. See also Evidence-based toxicology References External links Evidence-based medicine Toxicology organizations Non-profit organizations based in the United States
Evidence-based Toxicology Collaboration
[ "Environmental_science" ]
1,057
[ "Toxicology organizations", "Toxicology" ]