https://en.wikipedia.org/wiki/Uninterruptible%20power%20supply
An uninterruptible power supply (UPS) or uninterruptible power source is a type of continual power system that provides automated backup electric power to a load when the input power source or mains power fails. A UPS differs from a traditional auxiliary/emergency power system or standby generator in that it provides near-instantaneous protection from input power interruptions by switching to energy stored in battery packs, supercapacitors or flywheels. The on-battery run-times of most UPSs are relatively short (only a few minutes) but sufficient to "buy time" for initiating a standby power source or properly shutting down the protected equipment. Almost all UPSs also contain integrated surge protection to shield the output appliances from voltage spikes.

A UPS is typically used to protect hardware such as computers, hospital equipment, data centers, telecommunications equipment or other electrical equipment where an unexpected power disruption could cause injuries, fatalities, serious business disruption or data loss. UPS units range in size from ones designed to protect a single computer (around 200 volt-ampere rating) to large units powering entire data centers or buildings.

Common power problems

The primary role of any UPS is to provide short-term power when the input power source fails. However, most UPS units are also capable, in varying degrees, of correcting common utility power problems:

- Voltage spike or sustained overvoltage
- Momentary or sustained reduction in input voltage
- Voltage sag
- Noise, defined as a high-frequency transient or oscillation, usually injected into the line by nearby equipment
- Instability of the mains frequency
- Harmonic distortion, defined as a departure from the ideal sinusoidal waveform expected on the line

Some manufacturers of UPS units categorize their products in accordance with the number of power-related problems they address. A UPS unit may also introduce problems with electric power quality. To prevent this, a UPS should be selected not only by capacity but also by the quality of power required by the equipment being supplied.

Technologies

The three general categories of modern UPS systems are on-line, line-interactive and standby:

- An online UPS uses a "double conversion" method of accepting AC input, rectifying to DC for passing through the rechargeable battery (or battery strings), then inverting back to 120 V/230 V AC for powering the protected equipment.
- A line-interactive UPS maintains the inverter in line and redirects the battery's DC current path from the normal charging mode to supplying current when power is lost.
- In a standby ("off-line") system the load is powered directly by the input power, and the backup power circuitry is only invoked when the utility power fails.

Most UPSs below one kilovolt-ampere (1 kVA) are of the line-interactive or standby variety, which are usually less expensive.

For large power units, dynamic uninterruptible power supplies (DUPS) are sometimes used. A synchronous motor/alternator is connected to the mains via a choke. Energy is stored in a flywheel. When the mains power fails, eddy-current regulation maintains the power on the load as long as the flywheel's energy is not exhausted. DUPS are sometimes combined or integrated with a diesel generator that is turned on after a brief delay, forming a diesel rotary uninterruptible power supply (DRUPS).

Offline/standby

The offline/standby UPS offers only the most basic features, providing surge protection and battery backup.
The protected equipment is normally connected directly to incoming utility power. When the incoming voltage falls below or rises above a predetermined level, the UPS turns on its internal DC-AC inverter circuitry, which is powered from an internal storage battery. The UPS then mechanically switches the connected equipment onto its DC-AC inverter output. The switch-over time can be as long as 25 milliseconds, depending on the amount of time it takes the standby UPS to detect the lost utility voltage. The UPS is designed to power certain equipment, such as a personal computer, without any objectionable dip or brownout to that device.

Line-interactive

The line-interactive UPS is similar in operation to a standby UPS, but with the addition of a multi-tap variable-voltage autotransformer. This is a special type of transformer that can add or subtract powered coils of wire, thereby increasing or decreasing the magnetic field and the output voltage of the transformer. This may also be performed by a buck–boost transformer, which is distinct from an autotransformer in that the former may be wired to provide galvanic isolation.

This type of UPS is able to tolerate continuous undervoltage brownouts and overvoltage surges without consuming the limited reserve battery power. It instead compensates by automatically selecting different power taps on the autotransformer. Depending on the design, changing the autotransformer tap can cause a very brief output power disruption, which may cause UPSs equipped with a power-loss alarm to "chirp" for a moment.

This approach has become popular even in the cheapest UPSs because it takes advantage of components already included. The main 50/60 Hz transformer used to convert between line voltage and battery voltage needs to provide two slightly different turns ratios: one to convert the battery output voltage (typically a multiple of 12 V) to line voltage, and a second one to convert the line voltage to a slightly higher battery-charging voltage (such as a multiple of 14 V). The difference between the two voltages is because charging a battery requires a delta voltage (up to 13–14 V for charging a 12 V battery). Furthermore, it is easier to do the switching on the line-voltage side of the transformer because of the lower currents on that side.

To gain the buck/boost feature, all that is required is two separate switches so that the AC input can be connected to one of the two primary taps while the load is connected to the other, thus using the main transformer's primary windings as an autotransformer. The battery can still be charged while "bucking" an overvoltage, but while "boosting" an undervoltage, the transformer output is too low to charge the batteries.

Autotransformers can be engineered to cover a wide range of varying input voltages, but this requires more taps and increases the complexity and expense of the UPS. It is common for the autotransformer to cover a range only from about 90 V to 140 V for 120 V power, and then switch to battery if the voltage goes much higher or lower than that range.

In low-voltage conditions the UPS will use more current than normal, so it may need a higher-current circuit than a normal device. For example, to power a 1000 W device at 120 V, the UPS will draw 8.33 A. If a brownout occurs and the voltage drops to 100 V, the UPS will draw 10 A to compensate. This also works in reverse, so that in an overvoltage condition the UPS will need less current.
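The current figures above follow directly from I = P / V. Here is a minimal sketch of that calculation in Python; the 1000 W / 120 V numbers come from the text, while the function name is an illustrative choice:

```python
def input_current(power_w: float, line_voltage_v: float) -> float:
    """Current drawn from the mains for a given load power, I = P / V.
    Ignores conversion losses and power factor for simplicity."""
    return power_w / line_voltage_v

load_w = 1000.0
print(input_current(load_w, 120.0))  # nominal line voltage: 8.33 A
print(input_current(load_w, 100.0))  # brownout: 10.0 A, so the supply circuit needs headroom
```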
Online/double-conversion

In an online UPS, the batteries are always connected to the inverter, so no power transfer switches are necessary. When power loss occurs, the rectifier simply drops out of the circuit and the batteries keep the power steady and unchanged. When power is restored, the rectifier resumes carrying most of the load and begins charging the batteries, though the charging current may be limited to prevent the high-power rectifier from damaging the batteries.

The main advantage of an online UPS is its ability to provide an "electrical firewall" between the incoming utility power and sensitive electronic equipment. The online UPS is ideal for environments where electrical isolation is necessary or for equipment that is very sensitive to power fluctuations. Although it was at one time reserved for very large installations of 10 kW or more, advances in technology have now permitted it to be available as a common consumer device, supplying 500 W or less. The online UPS may be necessary when the power environment is "noisy", when utility power sags, outages and other anomalies are frequent, when protection of sensitive IT equipment loads is required, or when operation from an extended-run backup generator is necessary.

The basic technology of the online UPS is the same as in a standby or line-interactive UPS. However, it typically costs much more, because it requires a much higher-current AC-to-DC battery-charger/rectifier, and the rectifier and inverter are designed to run continuously, with improved cooling systems. It is called a double-conversion UPS because the rectifier directly drives the inverter, even when powered from normal AC current. An online UPS typically has a static transfer switch (STS) to increase reliability.

Other designs

Hybrid topology/double conversion on demand

These hybrid rotary UPS designs do not have official designations, although one name used by UTL is "double conversion on demand". This style of UPS is targeted towards high-efficiency applications while still maintaining the features and protection level offered by double conversion.

A hybrid (double conversion on demand) UPS operates as an off-line/standby UPS when power conditions are within a certain preset window. This allows the UPS to achieve very high efficiency ratings. When the power conditions fluctuate outside of the predefined window, the UPS switches to online/double-conversion operation. In double-conversion mode the UPS can adjust for voltage variations without having to use battery power, can filter out line noise, and can control frequency.

Ferroresonant

Ferroresonant units operate in the same way as a standby UPS unit; however, they are online in the sense that a ferroresonant transformer is used to filter the output. This transformer is designed to hold energy long enough to cover the time between switching from line power to battery power, and it effectively eliminates the transfer time. Many ferroresonant UPSs are 82–88% efficient (AC/DC-AC) and offer excellent isolation. The transformer has three windings: one for ordinary mains power, the second for rectified battery power, and the third for output AC power to the load. This once was the dominant type of UPS, though it is limited in the power range it can serve. These units are still mainly used in some industrial settings (oil and gas, petrochemical, chemical, utility, and heavy industry markets) due to the robust nature of the UPS.
Many ferroresonant UPSs utilizing controlled ferro technology may interact with power-factor-correcting equipment. This can result in fluctuating output voltage of the UPS, but may be corrected by reducing the load levels or adding other linear-type loads.

DC power

A UPS designed for powering DC equipment is very similar to an online UPS, except that it does not need an output inverter. Also, if the UPS's battery voltage is matched with the voltage the device needs, the device's power supply is not needed either. Since one or more power-conversion steps are eliminated, this increases efficiency and run time.

Many systems used in telecommunications use an extra-low-voltage "common battery" 48 V DC supply, because extra-low voltage is subject to less restrictive safety regulations, such as not requiring installation in conduit and junction boxes. DC has typically been the dominant power source for telecommunications, and AC has typically been the dominant source for computers and servers.

There has been much experimentation with 48 V DC power for computer servers, in the hope of reducing the likelihood of failure and the cost of equipment. However, to supply the same amount of power, the current must be higher than in an equivalent 115 V or 230 V circuit; greater current requires larger conductors, or more energy is lost as heat. High-voltage DC (380 V) is finding use in some data center applications and allows for smaller power conductors, but is subject to the more complex electrical code rules for safe containment of high voltages. For lower-power devices that run on 5 V, some portable battery banks can work as a UPS.

Rotary

A rotary UPS uses the inertia of a high-mass spinning flywheel (flywheel energy storage) to provide short-term ride-through in the event of power loss. The flywheel also acts as a buffer against power spikes and sags, since such short-term power events are not able to appreciably affect the rotational speed of the high-mass flywheel. It is also one of the oldest designs, predating vacuum tubes and integrated circuits. It can be considered to be online, since it spins continuously under normal conditions. However, unlike a battery-based UPS, flywheel-based UPS systems typically provide 10 to 20 seconds of protection before the flywheel has slowed and power output stops. The rotary UPS is traditionally used in conjunction with standby generators, providing backup power only for the brief period the engine needs to start running and stabilize its output.

The rotary UPS is generally reserved for applications needing more than 10,000 W of protection, to justify the expense and to benefit from the advantages rotary UPS systems bring. A larger flywheel, or multiple flywheels operating in parallel, will increase the reserve running time or capacity.

Because flywheels are a mechanical power source, it is not necessary to use an electric motor or generator as an intermediary between the flywheel and a diesel engine designed to provide emergency power. By using a transmission gearbox, the rotational inertia of the flywheel can be used to directly start up a diesel engine, and once running, the diesel engine can be used to directly spin the flywheel. Multiple flywheels can likewise be connected in parallel through mechanical countershafts, without the need for separate motors and generators for each flywheel.
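The 10 to 20 second ride-through figure can be estimated from the kinetic energy stored in the flywheel, E = ½Iω². A minimal sketch in Python; the inertia, speed, load, and efficiency values are illustrative assumptions, not figures from the text:

```python
import math

def ride_through_s(inertia_kg_m2, rpm_full, rpm_min, load_w, efficiency=0.9):
    """Seconds of ride-through from flywheel kinetic energy, E = 0.5 * I * w^2.
    Only the energy between full speed and the minimum usable speed is available."""
    w_full = rpm_full * 2 * math.pi / 60
    w_min = rpm_min * 2 * math.pi / 60
    usable_j = 0.5 * inertia_kg_m2 * (w_full**2 - w_min**2) * efficiency
    return usable_j / load_w

# Illustrative numbers: a 300 kg.m^2 rotor spinning down from 3000 to 2200 rpm.
print(ride_through_s(300, 3000, 2200, 250_000))  # roughly 25 s at a 250 kW load
```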
Rotary UPSs are normally designed to provide very high current output compared to a purely electronic UPS, and are better able to provide inrush current for inductive loads such as motor startup or compressor loads, as well as medical MRI and cath lab equipment. They are also able to tolerate short-circuit conditions up to 17 times larger than an electronic UPS, permitting one device to blow a fuse and fail while other devices continue to be powered from the rotary UPS.

Their life cycle is usually far greater than that of a purely electronic UPS, up to 30 years or more, but they do require periodic downtime for mechanical maintenance, such as ball-bearing replacement. In larger systems, redundancy ensures the availability of processes during this maintenance. Battery-based designs do not require downtime if the batteries can be hot-swapped, which is usually the case for larger units. Newer rotary units use technologies such as magnetic bearings and air-evacuated enclosures to increase standby efficiency and reduce maintenance to very low levels.

Typically, the high-mass flywheel is used in conjunction with a motor-generator system. These units can be configured as:

1. A motor driving a mechanically connected generator,
2. A combined synchronous motor and generator wound in alternating slots of a single rotor and stator,
3. A hybrid rotary UPS, designed similarly to an online UPS, except that it uses the flywheel in place of batteries. The rectifier drives a motor to spin the flywheel, while a generator uses the flywheel to power the inverter.

In case No. 3, the motor-generator can be synchronous/synchronous or induction/synchronous. The motor side of the unit in case Nos. 2 and 3 can be driven directly by an AC power source (typically when in inverter bypass), a 6-step double-conversion motor drive, or a 6-pulse inverter. Case No. 1 uses an integrated flywheel as a short-term energy source instead of batteries to allow time for external, electrically coupled gensets to start and be brought online. Case Nos. 2 and 3 can use batteries or a free-standing electrically coupled flywheel as the short-term energy source.

Form factors

Smaller UPS systems come in several different forms and sizes, but the two most common forms are tower and rack-mount. Tower models stand upright on the ground or on a desk or shelf, and are typically used in network workstations or desktop computer applications. Rack-mount models can be mounted in standard 19-inch rack enclosures and can require anywhere from 1U to 12U (rack units). They are typically used in server and networking applications. Some devices feature user interfaces that rotate 90°, allowing the devices to be mounted vertically on the ground or horizontally as would be found in a rack.

Applications

N + 1

In large business environments where reliability is of great importance, a single huge UPS can also be a single point of failure that can disrupt many other systems. To provide greater reliability, multiple smaller UPS modules and batteries can be integrated together to provide redundant power protection equivalent to one very large UPS. "N + 1" means that if the load can be supplied by N modules, the installation will contain N + 1 modules. In this way, failure of one module will not impact system operation (see the sizing sketch below).
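The N + 1 sizing rule reduces to a ceiling division plus one spare module. A minimal sketch in Python; the function name and the example load and module ratings are illustrative assumptions:

```python
import math

def modules_required(load_kw: float, module_kw: float) -> int:
    """N + 1 sizing: enough modules to carry the load, plus one redundant spare."""
    n = math.ceil(load_kw / module_kw)
    return n + 1

# A 170 kW load served by 50 kW modules needs N = 4, so install 5.
print(modules_required(170, 50))  # -> 5
```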
Multiple redundancy

Many computer servers offer the option of redundant power supplies, so that in the event of one power supply failing, one or more other power supplies are able to power the load. This is a critical point: each power supply must be able to power the entire server by itself. Redundancy is further enhanced by plugging each power supply into a different circuit (i.e. onto a different circuit breaker). Redundant protection can be extended further yet by connecting each power supply to its own UPS. This provides double protection from both a power-supply failure and a UPS failure, so that continued operation is assured. This configuration is also referred to as 1 + 1 or 2N redundancy. If the budget does not allow for two identical UPS units, it is common practice to plug one power supply into mains power and the other into the UPS.

Outdoor use

When a UPS system is placed outdoors, it should have specific features that guarantee it can tolerate weather without any effect on performance. Factors such as temperature, humidity, rain, and snow, among others, should be considered by the manufacturer when designing an outdoor UPS system. Operating temperature ranges for outdoor UPS systems could be around −40 °C to +55 °C. Outdoor UPS systems can be pole, ground (pedestal), or host mounted. The outdoor environment could mean extreme cold, in which case the outdoor UPS system should include a battery heater mat, or extreme heat, in which case it should include a fan system or an air-conditioning system.

A solar inverter, or PV inverter, or solar converter, converts the variable direct current (DC) output of a photovoltaic (PV) solar panel into a utility-frequency alternating current (AC) that can be fed into a commercial electrical grid or used by a local, off-grid electrical network. It is a critical balance-of-system (BOS) component in a photovoltaic system, allowing the use of ordinary AC-powered equipment. Solar inverters have special functions adapted for use with photovoltaic arrays, including maximum power point tracking and anti-islanding protection.

Harmonic distortion

The output of some electronic UPSs can depart significantly from an ideal sinusoidal waveform. This is especially true of inexpensive consumer-grade single-phase units designed for home and office use. These often utilize simple switching AC power supplies, and the output resembles a square wave rich in harmonics. These harmonics can cause interference with other electronic devices, including radio communication, and some devices (e.g. inductive loads such as AC motors) may perform with reduced efficiency or not at all. More sophisticated (and expensive) UPS units can produce nearly pure sinusoidal AC power.

Power factor

A problem in the combination of a double-conversion UPS and a generator is the voltage distortion created by the UPS. The input of a double-conversion UPS is essentially a big rectifier. The current drawn by the UPS is non-sinusoidal, which can cause the voltage from the AC mains or a generator to also become non-sinusoidal. The voltage distortion can then cause problems in all electrical equipment connected to that power source, including the UPS itself. It will also cause more power to be lost in the wiring supplying power to the UPS, due to the spikes in current flow. This level of "noise" is measured as a percentage of total harmonic distortion of the current (THDI). Classic UPS rectifiers have a THDI level of around 25%–30%; reducing the resulting voltage distortion requires heavier mains wiring or generators more than twice as large as the UPS.
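THDI is the RMS sum of the harmonic current components relative to the fundamental. A minimal sketch of the calculation in Python, assuming the harmonic amplitudes are already known; the example amplitudes are illustrative, not measurements from the text:

```python
import math

def thdi_percent(fundamental: float, harmonics: list[float]) -> float:
    """Total harmonic distortion of the current:
    THDI = sqrt(sum of squared harmonic amplitudes) / fundamental amplitude."""
    return 100 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Illustrative rectifier-like spectrum: strong low-order odd harmonics.
print(thdi_percent(10.0, [2.2, 1.4, 0.8, 0.5]))  # ~28%, in the classic-rectifier range
```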
There are several solutions to reduce the THDI in a double-conversion UPS:

- Classic solutions such as passive filters reduce THDI to 5%–10% at full load. They are reliable, but big, only work at full load, and present their own problems when used in tandem with generators.
- An alternative solution is an active filter. Through the use of such a device, THDI can drop to 5% over the full power range.
- The newest technology in double-conversion UPS units is a rectifier that does not use classic rectifier components (thyristors and diodes) but uses high-frequency components instead. A double-conversion UPS with an insulated-gate bipolar transistor rectifier and inductor can have a THDI as small as 2%. This completely eliminates the need to oversize the generator (and transformers), without additional filters, investment cost, losses, or space.

Communication

Power management (PM) requires:

- The UPS to report its status to the computer it powers via a communications link such as a serial port, Ethernet and Simple Network Management Protocol, GSM/GPRS or USB
- A subsystem in the OS that processes the reports and generates notifications, PM events, or commands an ordered shutdown.

Some UPS manufacturers publish their communication protocols, but other manufacturers (such as APC) use proprietary protocols.

The basic computer-to-UPS control methods are intended for one-to-one signaling from a single source to a single target. For example, a single UPS may connect to a single computer to provide status information about the UPS and allow the computer to control the UPS. Similarly, the USB protocol is intended to connect a single computer to multiple peripheral devices.

In some situations, it is useful for a single large UPS to be able to communicate with several protected devices. For traditional serial or USB control, a signal replication device may be used, which, for example, allows one UPS to connect to five computers using serial or USB connections. However, the splitting is typically only in one direction, from UPS to the devices, to provide status information; return control signals may only be permitted from one of the protected systems to the UPS.

As Ethernet has increased in common use since the 1990s, control signals are now commonly sent between a single UPS and multiple computers using standard Ethernet data communication methods such as TCP/IP. The status and control information is typically encrypted so that, for example, an outside hacker cannot gain control of the UPS and command it to shut down. Distribution of UPS status and control data requires that all intermediary devices, such as Ethernet switches or serial multiplexers, be powered by one or more UPS systems, in order for the UPS alerts to reach the target systems during a power outage. To avoid dependency on Ethernet infrastructure, the UPSs can also be connected directly to the main control server by using a GSM/GPRS channel. The SMS or GPRS data packets sent from UPSs trigger software to shut down the PCs to reduce the load.

Batteries

There are three main types of UPS batteries: valve-regulated lead-acid (VRLA), flooded-cell (VLA) batteries, and lithium-ion batteries. The run-time for a battery-operated UPS depends on the type and size of the batteries, the rate of discharge, and the efficiency of the inverter. The total capacity of a lead–acid battery is a function of the rate at which it is discharged, a relationship described by Peukert's law. Manufacturers supply run-time ratings in minutes for packaged UPS systems.
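Peukert's law, mentioned above, models how effective lead–acid capacity falls at higher discharge currents: t = H · (C / (I·H))^k, where C is the rated capacity at the rated discharge time H, I is the actual discharge current, and k is the Peukert exponent. A minimal sketch in Python; the battery values and exponent are illustrative assumptions:

```python
def peukert_runtime_h(capacity_ah, rated_time_h, current_a, k=1.2):
    """Run time under Peukert's law: t = H * (C / (I * H))**k.
    k is typically 1.1-1.3 for lead-acid; k = 1 would mean ideal capacity."""
    return rated_time_h * (capacity_ah / (current_a * rated_time_h)) ** k

# A 100 Ah battery rated over 20 hours, discharged at 50 A:
print(peukert_runtime_h(100, 20, 50))  # about 1.3 h, well under the naive 2 h
```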
Larger systems (such as for data centers) require detailed calculation of the load, inverter efficiency, and battery characteristics to ensure the required endurance is attained.

Common battery characteristics and load testing

When a lead–acid battery is charged or discharged, this initially affects only the reacting chemicals at the interface between the electrodes and the electrolyte. With time, the charge stored in the chemicals at the interface, often called "interface charge", spreads by diffusion of these chemicals throughout the volume of the active material. If a battery has been completely discharged (e.g. the car lights were left on overnight) and is then given a fast charge for only a few minutes, it develops a charge only near the interface during the short charging time. The battery voltage may rise to be close to the charger voltage, so that the charging current decreases significantly. After a few hours, this interface charge will spread through the volume of the electrode and electrolyte, leaving an interface charge so low that it may be insufficient to start a car.

Due to the interface charge, brief UPS self-test functions lasting only a few seconds may not accurately reflect the true runtime capacity of a UPS; instead, an extended recalibration or rundown test that deeply discharges the battery is needed. Deep-discharge testing is itself damaging to batteries, because the chemicals in the discharged battery start to crystallize into highly stable molecular shapes that will not re-dissolve when the battery is recharged, permanently reducing charge capacity. In lead-acid batteries this is known as sulfation, but deep-discharge damage also affects other types, such as nickel-cadmium and lithium batteries. Therefore, it is commonly recommended that rundown tests be performed only infrequently, such as every six months to a year.

Testing of strings of batteries/cells

Multi-kilowatt commercial UPS systems with large and easily accessible battery banks are capable of isolating and testing individual cells within a battery string, which consists of either combined-cell battery units (such as 12 V lead-acid batteries) or individual chemical cells wired in series. Isolating a single cell and installing a jumper in place of it allows that one battery to be discharge-tested while the rest of the battery string remains charged and available to provide protection.

It is also possible to measure the electrical characteristics of individual cells in a battery string, using intermediate sensor wires installed at every cell-to-cell junction and monitored both individually and collectively (see the monitoring sketch below). Battery strings may also be wired as series-parallel, for example two sets of 20 cells. In such a situation it is also necessary to monitor current flow between parallel strings, as current may circulate between the strings to balance out the effects of weak cells, dead cells with high resistance, or shorted cells. For example, stronger strings can discharge through weaker strings until voltage imbalances are equalized, and this must be factored into the individual inter-cell measurements within each string.
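In software, the per-cell sensing described above reduces to scanning the junction voltages and flagging outliers. A minimal sketch in Python; the nominal voltage, tolerance, and sample readings are illustrative assumptions, since a real battery-monitoring system would use manufacturer limits:

```python
def flag_weak_cells(cell_voltages, nominal=2.0, tolerance=0.1):
    """Return indices of cells whose measured voltage deviates from nominal
    by more than the tolerance - candidates for isolation and discharge testing."""
    return [i for i, v in enumerate(cell_voltages)
            if abs(v - nominal) > tolerance]

# Nominal 2 V lead-acid cells; cell 3 reads low and should be tested.
readings = [2.05, 2.04, 2.06, 1.78, 2.05, 2.03]
print(flag_weak_cells(readings))  # -> [3]
```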
Series-parallel battery interactions

Battery strings wired in series-parallel can develop unusual failure modes due to interactions between the multiple parallel strings. Defective batteries in one string can adversely affect the operation and lifespan of good or new batteries in other strings. These issues also apply to other situations where series-parallel strings are used, not just in UPS systems but also in electric-vehicle applications.

Consider a series-parallel battery arrangement with all good cells, in which one cell becomes shorted or dead:

- The failed cell will reduce the maximum developed voltage for the entire series string it is within.
- Other series strings wired in parallel with the degraded string will now discharge through the degraded string until their voltage matches that of the degraded string, potentially overcharging the remaining good cells in the degraded string and leading to electrolyte boiling and outgassing. These parallel strings can now never be fully recharged, as the increased voltage will bleed off through the string containing the failed battery.
- Charging systems may attempt to gauge battery-string capacity by measuring overall voltage. Because of the string-voltage depletion caused by the dead cell, the charging system may detect this as a state of discharge and will continuously attempt to charge the series-parallel strings, which leads to continuous overcharging and damage to all the cells in the degraded series string containing the damaged battery.
- If lead-acid batteries are used, all cells in the formerly good parallel strings will begin to sulfate due to their inability to be fully recharged, permanently damaging the storage capacity of these cells, even if the damaged cell in the one degraded string is eventually discovered and replaced with a new one.

The only way to prevent these subtle series-parallel string interactions is to avoid parallel strings entirely and use separate charge controllers and inverters for individual series strings.

Series new/old battery interactions

Even a single string of batteries wired in series can have adverse interactions if new batteries are mixed with old batteries. Older batteries tend to have reduced storage capacity, and so will both discharge faster than new batteries and charge to their maximum capacity more rapidly.

As a mixed string of new and old batteries is depleted, the string voltage will drop, and when the old batteries are exhausted the new batteries still have charge available. The newer cells may continue to discharge through the rest of the string, but due to the low voltage this energy flow may not be useful and may be wasted in the old cells as resistance heating. For cells that are supposed to operate within a specific discharge window, new cells with more capacity may cause the old cells in the series string to continue to discharge beyond the safe bottom limit of the discharge window, damaging the old cells.

When recharged, the old cells recharge more rapidly, leading to a rapid rise of voltage to near the fully charged state before the new cells with more capacity have fully recharged. The charge controller detects the high voltage of a nearly fully charged string and reduces current flow. The new cells with more capacity now charge very slowly, so slowly that the chemicals may begin to crystallize before reaching the fully charged state, reducing new-cell capacity over several charge/discharge cycles until their capacity more closely matches that of the old cells in the series string.
For such reasons, some industrial UPS management systems recommend periodic replacement of entire battery arrays, potentially involving hundreds of expensive batteries, because of these damaging interactions between new and old batteries within and across series and parallel strings.

Standards

- IEC 62040-1:2017 Uninterruptible power systems (UPS) – Part 1: General and safety requirements for UPS
- IEC 62040-2:2016 Uninterruptible power systems (UPS) – Part 2: Electromagnetic compatibility (EMC) requirements
- IEC 62040-3:2021 Uninterruptible power systems (UPS) – Part 3: Method of specifying the performance and test requirements
- IEC 62040-4:2013 Uninterruptible power systems (UPS) – Part 4: Environmental aspects – Requirements and reporting

See also

- Battery room
- Emergency power system
- Fuel cell applications
- IT baseline protection
- Power conditioner
- Dynamic voltage restoration
- Net metering system with energy storage
- Surge protector
- Switched-mode power supply (SMPS)
- Switched-mode power supply applications
- Emergency light
https://en.wikipedia.org/wiki/Voice%20frequency
A voice frequency (VF) or voice band is the range of audio frequencies used for the transmission of speech.

Frequency band

In telephony, the usable voice frequency band ranges from approximately 300 to 3400 Hz. It is for this reason that the ultra low frequency band of the electromagnetic spectrum between 300 and 3000 Hz is also referred to as voice frequency, being the electromagnetic energy that represents acoustic energy at baseband. The bandwidth allocated for a single voice-frequency transmission channel is usually 4 kHz, including guard bands, allowing a sampling rate of 8 kHz to be used as the basis of the pulse-code modulation system used for the digital PSTN. Per the Nyquist–Shannon sampling theorem, the sampling frequency (8 kHz) must be at least twice the highest frequency component of the voice signal; appropriate filtering limits that component to 4 kHz prior to sampling at discrete times, so that the voice signal can be effectively reconstructed.

Fundamental frequency

The voiced speech of a typical adult male will have a fundamental frequency from 90 to 155 Hz, and that of a typical adult female from 165 to 255 Hz. Thus, the fundamental frequency of most speech falls below the bottom of the voice frequency band as defined. However, enough of the harmonic series will be present for the missing fundamental to create the impression of hearing the fundamental tone.

Wavelength

The speed of sound at room temperature (20 °C) is 343.15 m/s. Using the formula λ = v/f, typical female voices (165–255 Hz) correspond to wavelengths from about 1.35 m to 2.08 m, and typical male voices (90–155 Hz) to wavelengths from about 2.21 m to 3.81 m.

See also

- Formant
- Hearing (sense)
- Voice call
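As a quick check of the Wavelength section above, the figures follow from λ = v/f using only quantities given in the text (343.15 m/s and the stated fundamental-frequency ranges). A minimal sketch in Python:

```python
SPEED_OF_SOUND = 343.15  # m/s at 20 C, as given above

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength of a sound wave: lambda = v / f."""
    return SPEED_OF_SOUND / frequency_hz

# Fundamental-frequency ranges from the text.
for label, lo, hi in [("female", 165, 255), ("male", 90, 155)]:
    print(label, round(wavelength_m(hi), 2), "to", round(wavelength_m(lo), 2), "m")
# female: 1.35 to 2.08 m; male: 2.21 to 3.81 m
```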
https://en.wikipedia.org/wiki/Post%20and%20lintel
Post and lintel (also called prop and lintel, a trabeated system, or a trilithic system) is a building system where strong horizontal elements are held up by strong vertical elements with large spaces between them. This is usually used to hold up a roof, creating a largely open space beneath, for whatever use the building is designed. The horizontal elements are called by a variety of names including lintel, header, architrave or beam, and the supporting vertical elements may be called posts, columns, or pillars. The use of wider elements at the top of the post, called capitals, to help spread the load, is common to many architectural traditions.

Lintels

In architecture, a post-and-lintel or trabeated system refers to the use of horizontal stone beams or lintels which are borne by columns or posts. The name is from the Latin trabs, beam; influenced by trabeatus, clothed in the trabea, a ritual garment. Post-and-lintel construction is one of four ancient structural methods of building, the others being the corbel, arch-and-vault, and truss. A noteworthy example of a trabeated system is in Volubilis, from the Roman era, where one side of the Decumanus Maximus is lined with trabeated elements, while the opposite side of the roadway is designed in arched style.

History of lintel systems

The trabeated system is a fundamental principle of Neolithic architecture, ancient Indian architecture, ancient Greek architecture and ancient Egyptian architecture. Other trabeated styles are the Persian, Lycian, Japanese, traditional Chinese, and ancient Chinese architecture, especially in northern China, and nearly all the Indian styles. The traditions are represented in North and Central America by Mayan architecture, and in South America by Inca architecture. In all or most of these traditions, certainly in Greece and India, the earliest versions developed using wood, which were later translated into stone for larger and grander buildings. Timber framing, also using trusses, remains common for smaller buildings such as houses to the modern day.

Span limitations

There are two main forces acting upon the post and lintel system: weight-carrying compression at the joint between lintel and post, and tension induced by deformation of self-weight and the load above between the posts. The two posts are under compression from the weight of the lintel (or beam) above. The lintel will deform by sagging in the middle, because the underside is under tension and the upper side is under compression.

The biggest disadvantage of lintel construction is the limited weight that can be held up and the resulting small distances required between the posts. Ancient Roman architecture's development of the arch allowed for much larger structures to be constructed. The arcuated system spreads larger loads more effectively, and replaced the post-and-lintel system in most larger buildings and structures, until the introduction of steel girder beams and steel-reinforced concrete in the industrial era. As with the Roman temple portico front and its descendants in later classical architecture, trabeated features were often retained in parts of buildings as an aesthetic choice. The classical orders of Greek origin were in particular retained in buildings designed to impress, even though they usually had little or no structural role.

Lintel reinforcement

The flexural strength of a stone lintel can be dramatically increased with the use of post-tensioned stone.
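The span limitation can be made concrete with elementary beam theory: a simply supported lintel under a uniform load sees a maximum bending moment M = wL²/8 at midspan, and the resulting tensile stress on the underside is σ = Mc/I. A minimal sketch in Python; the stone density and the comparison tensile strength are illustrative assumptions, not values from the text:

```python
def max_bending_stress_pa(span_m, width_m, depth_m, density_kg_m3=2500, g=9.81):
    """Tensile stress at the underside of a simply supported rectangular lintel:
    sigma = M*c/I with M = w*L^2/8 (self-weight only),
    I = b*h^3/12 and c = h/2 for a rectangular section."""
    w = density_kg_m3 * g * width_m * depth_m   # self-weight per metre (N/m)
    moment = w * span_m**2 / 8                  # max moment at midspan (N.m)
    inertia = width_m * depth_m**3 / 12         # second moment of area (m^4)
    return moment * (depth_m / 2) / inertia     # stress at the soffit (Pa)

# A 3 m stone lintel, 0.4 m wide and 0.5 m deep: ~0.33 MPa from self-weight alone;
# the masonry carried above adds much more, which is what limits the span.
print(max_bending_stress_pa(3.0, 0.4, 0.5) / 1e6, "MPa")
```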
See also

- Architrave – structural lintel or beam resting on columns/pillars
- Atalburu – Basque decorative lintel
- Dolmen – Neolithic megalithic tombs with structural stone lintels
- Dougong – traditional Chinese structural element
- I-beam – steel lintels and beams
- Marriage stone – decorative lintel
- Opus caementicium
- Structural design
- Timber framing – post and beam systems
- Stonehenge
https://en.wikipedia.org/wiki/Electrical%20impedance
In electrical engineering, impedance is the opposition to alternating current presented by the combined effect of resistance and reactance in a circuit. Quantitatively, the impedance of a two-terminal circuit element is the ratio of the complex representation of the sinusoidal voltage between its terminals to the complex representation of the current flowing through it. In general, it depends upon the frequency of the sinusoidal voltage.

Impedance extends the concept of resistance to alternating current (AC) circuits, and possesses both magnitude and phase, unlike resistance, which has only magnitude. Impedance can be represented as a complex number, with the same units as resistance, for which the SI unit is the ohm (Ω). Its symbol is usually Z, and it may be represented by writing its magnitude and phase in the polar form |Z|∠θ. However, the Cartesian complex number representation is often more powerful for circuit analysis purposes.

The notion of impedance is useful for performing AC analysis of electrical networks, because it allows relating sinusoidal voltages and currents by a simple linear law. In multiple-port networks, the two-terminal definition of impedance is inadequate, but the complex voltages at the ports and the currents flowing through them are still linearly related by the impedance matrix.

The reciprocal of impedance is admittance, whose SI unit is the siemens, formerly called the mho. Instruments used to measure electrical impedance are called impedance analyzers.

History

Perhaps the earliest use of complex numbers in circuit analysis was by Johann Victor Wietlisbach in 1879 in analysing the Maxwell bridge. Wietlisbach avoided using differential equations by expressing AC currents and voltages as exponential functions with imaginary exponents (see the Validity of complex representation section below). Wietlisbach found the required voltage was given by multiplying the current by a complex number (impedance), although he did not identify this as a general parameter in its own right.

The term impedance was coined by Oliver Heaviside in July 1886. Heaviside recognised that the "resistance operator" (impedance) in his operational calculus was a complex number. In 1887 he showed that there was an AC equivalent to Ohm's law.

Arthur Kennelly published an influential paper on impedance in 1893. Kennelly arrived at a complex number representation in a rather more direct way than using imaginary exponential functions. Kennelly followed the graphical representation of impedance (showing resistance, reactance, and impedance as the lengths of the sides of a right-angle triangle) developed by John Ambrose Fleming in 1889. Impedances could thus be added vectorially. Kennelly realised that this graphical representation of impedance was directly analogous to the graphical representation of complex numbers (Argand diagram). Problems in impedance calculation could thus be approached algebraically with a complex number representation. Later that same year, Kennelly's work was generalised to all AC circuits by Charles Proteus Steinmetz. Steinmetz not only represented impedances by complex numbers but also voltages and currents. Unlike Kennelly, Steinmetz was thus able to express AC equivalents of DC laws such as Ohm's and Kirchhoff's laws. Steinmetz's work was highly influential in spreading the technique amongst engineers.
Introduction

In addition to resistance as seen in DC circuits, impedance in AC circuits includes the effects of the induction of voltages in conductors by magnetic fields (inductance), and the electrostatic storage of charge induced by voltages between conductors (capacitance). The impedance caused by these two effects is collectively referred to as reactance and forms the imaginary part of complex impedance, whereas resistance forms the real part.

Complex impedance

The impedance of a two-terminal circuit element is represented as a complex quantity Z. The polar form conveniently captures both magnitude and phase characteristics as

Z = |Z| e^(jθ)

where the magnitude |Z| represents the ratio of the voltage difference amplitude to the current amplitude, while the argument (commonly given the symbol θ) gives the phase difference between voltage and current. j is the imaginary unit, and is used instead of i in this context to avoid confusion with the symbol for electric current.

In Cartesian form, impedance is defined as

Z = R + jX

where the real part of impedance is the resistance R and the imaginary part is the reactance X.

Where it is needed to add or subtract impedances, the Cartesian form is more convenient; but when quantities are multiplied or divided, the calculation becomes simpler if the polar form is used. A circuit calculation, such as finding the total impedance of two impedances in parallel, may require conversion between forms several times during the calculation. Conversion between the forms follows the normal conversion rules of complex numbers.

Complex voltage and current

To simplify calculations, sinusoidal voltage and current waves are commonly represented as complex-valued functions of time:

V(t) = |V| e^(j(ωt + φ_V)),  I(t) = |I| e^(j(ωt + φ_I))

The impedance of a bipolar circuit is defined as the ratio of these quantities:

Z = V(t) / I(t)

Hence, denoting Z = |Z| e^(jθ), we have

|V| = |I| |Z|  and  θ = φ_V − φ_I

The magnitude equation is the familiar Ohm's law applied to the voltage and current amplitudes, while the second equation defines the phase relationship.

Validity of complex representation

This representation using complex exponentials may be justified by noting that (by Euler's formula):

cos(ωt + φ) = ½ [ e^(j(ωt + φ)) + e^(−j(ωt + φ)) ]

The real-valued sinusoidal function representing either voltage or current may be broken into two complex-valued functions. By the principle of superposition, we may analyse the behaviour of the sinusoid on the left-hand side by analysing the behaviour of the two complex terms on the right-hand side. Given the symmetry, we only need to perform the analysis for one right-hand term; the results are identical for the other. At the end of any calculation, we may return to real-valued sinusoids by further noting that

cos(ωt + φ) = Re{ e^(j(ωt + φ)) }

Ohm's law

The meaning of electrical impedance can be understood by substituting it into Ohm's law. Assuming a two-terminal circuit element with impedance Z is driven by a sinusoidal voltage or current as above, there holds

V = I Z = I |Z| e^(jθ)

The magnitude of the impedance |Z| acts just like resistance, giving the drop in voltage amplitude across an impedance Z for a given current I. The phase factor tells us that the current lags the voltage by a phase θ (i.e., in the time domain, the current signal is shifted later with respect to the voltage signal).

Just as impedance extends Ohm's law to cover AC circuits, other results from DC circuit analysis, such as voltage division, current division, Thévenin's theorem and Norton's theorem, can also be extended to AC circuits by replacing resistance with impedance.
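Python's built-in complex type handles the polar/Cartesian conversions described above directly. A minimal sketch; the example impedance value is an illustrative assumption:

```python
import cmath

z = 3 + 4j  # an impedance in Cartesian form: R = 3 ohm, X = 4 ohm

magnitude, phase = cmath.polar(z)       # polar form: |Z| = 5, theta ~ 0.927 rad
print(magnitude, phase)

z_back = cmath.rect(magnitude, phase)   # and back to Cartesian, R + jX
print(z_back)                           # (3+4j), up to floating-point rounding
```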
Phasors

A phasor is represented by a constant complex number, usually expressed in exponential form, representing the complex amplitude (magnitude and phase) of a sinusoidal function of time. Phasors are used by electrical engineers to simplify computations involving sinusoids (such as in AC circuits), where they can often reduce a differential equation problem to an algebraic one.

The impedance of a circuit element can be defined as the ratio of the phasor voltage across the element to the phasor current through the element, as determined by the relative amplitudes and phases of the voltage and current. This is identical to the definition from Ohm's law given above, recognising that the factors of e^(jωt) cancel.

Device examples

Resistor

The impedance of an ideal resistor is purely real and is called resistive impedance:

Z_R = R

In this case, the voltage and current waveforms are proportional and in phase.

Inductor and capacitor

Ideal inductors and capacitors have a purely imaginary reactive impedance:

Z_L = jωL (the impedance of inductors increases as frequency increases)

Z_C = 1/(jωC) (the impedance of capacitors decreases as frequency increases)

In both cases, for an applied sinusoidal voltage, the resulting current is also sinusoidal, but in quadrature, 90 degrees out of phase with the voltage. However, the phases have opposite signs: in an inductor, the current is lagging; in a capacitor, the current is leading.

Note the following identities for the imaginary unit and its reciprocal:

j = e^(jπ/2),  1/j = −j = e^(−jπ/2)

Thus the inductor and capacitor impedance equations can be rewritten in polar form:

Z_L = ωL e^(jπ/2),  Z_C = (1/ωC) e^(−jπ/2)

The magnitude gives the change in voltage amplitude for a given current amplitude through the impedance, while the exponential factors give the phase relationship.

Deriving the device-specific impedances

What follows below is a derivation of impedance for each of the three basic circuit elements: the resistor, the capacitor, and the inductor. Although the idea can be extended to define the relationship between the voltage and current of any arbitrary signal, these derivations assume sinusoidal signals. In fact, this applies to any arbitrary periodic signals, because these can be approximated as a sum of sinusoids through Fourier analysis.

Resistor

For a resistor, there is the relation

v(t) = i(t) R

which is Ohm's law. Considering the voltage signal to be

v(t) = V_p e^(jωt)

it follows that

v(t) / i(t) = R

This says that the ratio of AC voltage amplitude to alternating current (AC) amplitude across a resistor is R, and that the AC voltage leads the current across a resistor by 0 degrees. This result is commonly expressed as

Z_resistor = R

Capacitor

For a capacitor, there is the relation:

i(t) = C dv(t)/dt

Considering the voltage signal to be

v(t) = V_p e^(jωt)

it follows that

i(t) = jωC V_p e^(jωt)

and thus, as previously,

Z_capacitor = v(t)/i(t) = 1/(jωC)

Conversely, if the current through the circuit is assumed to be sinusoidal, its complex representation being

i(t) = I_p e^(jωt)

then integrating the differential equation leads to

v(t) = (1/(jωC)) I_p e^(jωt) + Const

The Const term represents a fixed potential bias superimposed on the AC sinusoidal potential; it plays no role in AC analysis. For this purpose, this term can be assumed to be 0, hence again the impedance

Z_capacitor = 1/(jωC)

Inductor

For the inductor, we have the relation (from Faraday's law):

v(t) = L di(t)/dt

This time, considering the current signal to be

i(t) = I_p e^(jωt)

it follows that

v(t) = jωL I_p e^(jωt)

This result is commonly expressed in polar form as

Z_inductor = ωL e^(jπ/2)

or, using Euler's formula, as

Z_inductor = jωL

As in the case of capacitors, it is also possible to derive this formula directly from the complex representations of the voltages and currents, or by assuming a sinusoidal voltage between the two poles of the inductor. In the latter case, integrating the differential equation above leads to a constant term for the current, which represents a fixed DC bias flowing through the inductor. This is set to zero because AC analysis using frequency-domain impedance considers one frequency at a time, and DC represents a separate frequency of zero hertz in this context.
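The three device impedances above translate directly into code. A minimal sketch in Python; the component values are illustrative:

```python
import math

def z_resistor(r_ohm: float) -> complex:
    return complex(r_ohm)                           # Z_R = R, purely real

def z_inductor(l_henry: float, f_hz: float) -> complex:
    return 1j * 2 * math.pi * f_hz * l_henry        # Z_L = jwL

def z_capacitor(c_farad: float, f_hz: float) -> complex:
    return 1 / (1j * 2 * math.pi * f_hz * c_farad)  # Z_C = 1/(jwC)

# At 50 Hz: a 100 mH inductor and a 10 uF capacitor.
print(z_inductor(0.1, 50))     # ~+31.4j ohm: current lags the voltage
print(z_capacitor(10e-6, 50))  # ~-318j ohm: current leads the voltage
```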
Generalised s-plane impedance

Impedance defined in terms of jω can strictly be applied only to circuits that are driven with a steady-state AC signal. The concept of impedance can be extended to a circuit energised with any arbitrary signal by using complex frequency instead of jω. Complex frequency is given the symbol s and is, in general, a complex number. Signals are expressed in terms of complex frequency by taking the Laplace transform of the time-domain expression of the signal. The impedance of the basic circuit elements in this more general notation is as follows:

Resistor: Z(s) = R
Inductor: Z(s) = sL
Capacitor: Z(s) = 1/(sC)

For a DC circuit, this simplifies to s = 0. For a steady-state sinusoidal AC signal, s = jω.

Formal derivation

The impedance of an electrical component is defined as the ratio between the Laplace transforms of the voltage over it and the current through it, i.e.

Z(s) = V(s) / I(s)

where s is the complex Laplace parameter. As an example, according to the I-V law of a capacitor, i(t) = C dv(t)/dt, from which it follows that Z(s) = 1/(sC).

In the phasor regime (steady-state AC, meaning all signals are represented mathematically as simple complex exponentials oscillating at a common frequency ω), impedance can simply be calculated as the voltage-to-current ratio, in which the common time-dependent factor e^(jωt) cancels out:

Z = v(t) / i(t) = V / I

Again, for a capacitor, one gets that i(t) = jωC v(t), and hence Z = 1/(jωC). The phasor domain is sometimes dubbed the frequency domain, although it lacks one of the dimensions of the Laplace parameter. For steady-state AC, the polar form of the complex impedance relates the amplitude and phase of the voltage and current. In particular:

- The magnitude of the complex impedance is the ratio of the voltage amplitude to the current amplitude;
- The phase of the complex impedance is the phase shift by which the current lags the voltage.

These two relationships hold even after taking the real part of the complex exponentials (see phasors), which is the part of the signal one actually measures in real-life circuits.

Resistance vs reactance

Resistance and reactance together determine the magnitude and phase of the impedance through the following relations:

|Z| = √(R² + X²),  θ = arctan(X / R)

In many applications, the relative phase of the voltage and current is not critical, so only the magnitude of the impedance is significant.

Resistance

Resistance R is the real part of impedance; a device with a purely resistive impedance exhibits no phase shift between the voltage and current.

Reactance

Reactance X is the imaginary part of the impedance; a component with a finite reactance induces a phase shift between the voltage across it and the current through it. A purely reactive component is distinguished by the sinusoidal voltage across the component being in quadrature with the sinusoidal current through the component. This implies that the component alternately absorbs energy from the circuit and then returns energy to the circuit. A pure reactance does not dissipate any power.

Capacitive reactance

A capacitor has a purely reactive impedance that is inversely proportional to the signal frequency. A capacitor consists of two conductors separated by an insulator, also known as a dielectric:

X_C = −1/(ωC)

The minus sign indicates that the imaginary part of the impedance is negative.
At low frequencies, a capacitor approaches an open circuit, so no current flows through it. A DC voltage applied across a capacitor causes charge to accumulate on one side; the electric field due to the accumulated charge is the source of the opposition to the current. When the potential associated with the charge exactly balances the applied voltage, the current goes to zero. Driven by an AC supply, a capacitor accumulates only a limited charge before the potential difference changes sign and the charge dissipates. The higher the frequency, the less charge accumulates and the smaller the opposition to the current.

Inductive reactance

Inductive reactance X_L is proportional to the signal frequency f and the inductance L:

X_L = ωL = 2πfL

An inductor consists of a coiled conductor. Faraday's law of electromagnetic induction gives the back emf (voltage opposing current) due to a rate of change of magnetic flux density through a current loop. For an inductor consisting of a coil with N loops this gives:

v = −N dΦ_B/dt

The back-emf is the source of the opposition to current flow. A constant direct current has a zero rate of change, and sees an inductor as a short circuit (it is typically made from a material with a low resistivity). An alternating current has a time-averaged rate of change that is proportional to frequency; this causes the increase in inductive reactance with frequency.

Total reactance

The total reactance is given by

X = X_L + X_C

(where X_C is negative), so that the total impedance is

Z = R + jX

Combining impedances

The total impedance of many simple networks of components can be calculated using the rules for combining impedances in series and parallel. The rules are identical to those for combining resistances, except that the numbers in general are complex numbers. The general case, however, requires equivalent impedance transforms in addition to series and parallel.

Series combination

For components connected in series, the current through each circuit element is the same; the total impedance is the sum of the component impedances:

Z_eq = Z_1 + Z_2 + ... + Z_n

Or explicitly in real and imaginary terms:

Z_eq = (R_1 + R_2 + ... + R_n) + j(X_1 + X_2 + ... + X_n)

Parallel combination

For components connected in parallel, the voltage across each circuit element is the same; the ratio of currents through any two elements is the inverse ratio of their impedances. Hence the inverse total impedance is the sum of the inverses of the component impedances:

1/Z_eq = 1/Z_1 + 1/Z_2 + ... + 1/Z_n

or, when n = 2:

Z_eq = Z_1 Z_2 / (Z_1 + Z_2)

The equivalent impedance Z_eq can be calculated in terms of the equivalent series resistance R_eq and reactance X_eq:

Z_eq = R_eq + jX_eq
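The series and parallel rules above carry over verbatim to code, since Python's complex type implements the arithmetic. A minimal sketch, reusing the device formulas from earlier; the component values are illustrative:

```python
import math

def series(*zs: complex) -> complex:
    """Z_eq = Z_1 + Z_2 + ... for impedances in series."""
    return sum(zs)

def parallel(*zs: complex) -> complex:
    """1/Z_eq = 1/Z_1 + 1/Z_2 + ... for impedances in parallel."""
    return 1 / sum(1 / z for z in zs)

f = 50.0  # Hz
z_r = complex(100)                          # 100 ohm resistor
z_l = 1j * 2 * math.pi * f * 0.1            # 100 mH inductor
z_c = 1 / (1j * 2 * math.pi * f * 10e-6)    # 10 uF capacitor

print(series(z_r, z_l))                      # R in series with L
print(parallel(z_r, z_c))                    # R in parallel with C
print(abs(series(z_r, parallel(z_l, z_c))))  # |Z| of a mixed network
```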
Measurement

The measurement of the impedance of devices and transmission lines is a practical problem in radio technology and other fields. Measurements of impedance may be carried out at one frequency, or the variation of device impedance over a range of frequencies may be of interest. The impedance may be measured or displayed directly in ohms, or other values related to impedance may be displayed; for example, in a radio antenna, the standing wave ratio or reflection coefficient may be more useful than the impedance alone. The measurement of impedance requires the measurement of the magnitude of voltage and current, and the phase difference between them. Impedance is often measured by "bridge" methods, similar to the direct-current Wheatstone bridge; a calibrated reference impedance is adjusted to balance off the effect of the impedance of the device under test. Impedance measurement in power electronic devices may require simultaneous measurement and provision of power to the operating device.

The impedance of a device can be calculated by complex division of the voltage and current. It can be measured by applying a sinusoidal voltage to the device in series with a resistor, and measuring the voltage across the resistor and across the device. Performing this measurement by sweeping the frequencies of the applied signal provides the impedance phase and magnitude. An impulse response may be used in combination with the fast Fourier transform (FFT) to rapidly measure the electrical impedance of various electrical devices. The LCR meter (inductance (L), capacitance (C), and resistance (R)) is a device commonly used to measure the inductance, resistance and capacitance of a component; from these values, the impedance at any frequency can be calculated.

Example

Consider an LC tank circuit in which the inductor and capacitor are in series. The complex impedance of the circuit is

Z(ω) = j(ωL − 1/(ωC))

It is immediately seen that the value of |Z| is minimal (actually equal to 0 in this case) whenever

ωL = 1/(ωC)

Therefore, the fundamental resonance angular frequency is

ω₀ = 1/√(LC)

Variable impedance

In general, neither impedance nor admittance can vary with time, since they are defined for complex exponentials extending over all time, −∞ < t < +∞. If the complex-exponential voltage-to-current ratio changes over time or amplitude, the circuit element cannot be described using the frequency domain. However, many components and systems (e.g., varicaps that are used in radio tuners) may exhibit non-linear or time-varying voltage-to-current ratios that seem to be linear time-invariant (LTI) for small signals and over small observation windows, so they can be roughly described as if they had a time-varying impedance. This description is an approximation: over large signal swings or wide observation windows, the voltage-to-current relationship will not be LTI and cannot be described by impedance.

See also

- Transmission line impedance
Electrical impedance
[ "Physics", "Mathematics" ]
3,945
[ "Physical phenomena", "Physical quantities", "Quantity", "Wikipedia categories named after physical quantities", "Physical properties", "Electrical resistance and conductance" ]
42,020
https://en.wikipedia.org/wiki/Synthetic%20radioisotope
A synthetic radioisotope is a radionuclide that is not found in nature: no natural process or mechanism exists which produces it, or it is so unstable that it decays away in a very short period of time. Frédéric Joliot-Curie and Irène Joliot-Curie were the first to produce a synthetic radioisotope, in 1934. Examples include technetium-99 and promethium-146. Many of these are found in, and harvested from, spent nuclear fuel assemblies. Some must be manufactured in particle accelerators. Production Some synthetic radioisotopes are extracted from spent nuclear reactor fuel rods, which contain various fission products. For example, it is estimated that up to 1994, about 49,000 terabecquerels (78 metric tons) of technetium were produced in nuclear reactors; as such, anthropogenic technetium is far more abundant than technetium from natural radioactivity. Some synthetic isotopes are produced in significant quantities by fission but are not yet being reclaimed. Other isotopes are manufactured by neutron irradiation of parent isotopes in a nuclear reactor (for example, technetium-97 can be made by neutron irradiation of ruthenium-96) or by bombarding parent isotopes with high energy particles from a particle accelerator. Many isotopes, including radiopharmaceuticals, are produced in cyclotrons. For example, the synthetic fluorine-18 and oxygen-15 are widely used in positron emission tomography. Uses Most synthetic radioisotopes have a short half-life. Though a health hazard, radioactive materials have many medical and industrial uses. Nuclear medicine The field of nuclear medicine covers use of radioisotopes for diagnosis or treatment. Diagnosis Radioactive tracer compounds, radiopharmaceuticals, are used to observe the function of various organs and body systems. These compounds use a chemical tracer which is attracted to or concentrated by the activity which is being studied. That chemical tracer incorporates a short lived radioactive isotope, usually one which emits a gamma ray which is energetic enough to travel through the body and be captured outside by a gamma camera to map the concentrations. Gamma cameras and other similar detectors are highly efficient, and the tracer compounds are generally very effective at concentrating at the areas of interest, so the total amounts of radioactive material needed are very small. The metastable nuclear isomer technetium-99m is a gamma-ray emitter widely used for medical diagnostics because it has a short half-life of 6 hours, but can be easily made in the hospital using a technetium-99m generator. Weekly global demand for the parent isotope molybdenum-99 was in 2010, overwhelmingly provided by fission of uranium-235. Treatment Several radioisotopes and compounds are used for medical treatment, usually by bringing the radioactive isotope to a high concentration in the body near a particular organ. For example, iodine-131 is used for treating some disorders and tumors of the thyroid gland. Industrial radiation sources Alpha particle, beta particle, and gamma ray radioactive emissions are industrially useful. Most sources of these are synthetic radioisotopes. Areas of use include the petroleum industry, industrial radiography, homeland security, process control, food irradiation and underground detection. Footnotes External links Map of the Nuclides at LANL T-2 Website Radioactivity Radiopharmaceuticals
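The 6-hour half-life quoted above for technetium-99m is what drives the generator-based hospital workflow. A small sketch of the underlying decay law (the half-life is the only input taken from the text; the helper names and time points are illustrative):

```python
def remaining_fraction(t_hours, t_half_hours):
    """Exponential decay law: N(t)/N0 = 2**(-t / t_half)."""
    return 2 ** (-t_hours / t_half_hours)

T_HALF_TC99M = 6.0  # hours (approximate, as stated in the text)

for t in (6, 12, 24):
    f = remaining_fraction(t, T_HALF_TC99M)
    print(f"after {t:2d} h: {f:.3f} of the Tc-99m remains")
# After 24 h only about 6% remains, which is why hospitals elute fresh
# Tc-99m from a molybdenum-99 generator rather than stockpiling the isotope.
```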
Synthetic radioisotope
[ "Physics", "Chemistry" ]
726
[ "Medicinal radiochemistry", "Radiopharmaceuticals", "Nuclear physics", "Chemicals in medicine", "Radioactivity" ]
42,021
https://en.wikipedia.org/wiki/Trace%20radioisotope
A trace radioisotope is a radioisotope that occurs naturally in trace amounts (i.e., in extremely small quantities). Generally speaking, trace radioisotopes have half-lives that are short in comparison with the age of the Earth, since primordial nuclides tend to occur in larger than trace amounts. Trace radioisotopes are therefore present only because they are continually produced on Earth by natural processes. Natural processes which produce trace radioisotopes include cosmic ray bombardment of stable nuclides, ordinary alpha and beta decay of the long-lived heavy nuclides, thorium-232, uranium-238, and uranium-235, spontaneous fission of uranium-238, and nuclear transmutation reactions induced by natural radioactivity, such as the production of plutonium-239 and uranium-236 from neutron capture by natural uranium. Elements The elements that occur on Earth only in traces are technetium, promethium, polonium, astatine, radon, francium, radium, actinium, protactinium, neptunium, and plutonium. Trace radioisotopes of other elements (not exhaustive) include tritium, beryllium-7, beryllium-10, carbon-14, fluorine-18, sodium-22, sodium-24, magnesium-28, silicon-31, silicon-32, phosphorus-32, sulfur-35, sulfur-38, chlorine-34m, chlorine-36, chlorine-38, chlorine-39, argon-39, argon-42, calcium-41, iron-52, cobalt-55, nickel-59, copper-60, germanium-64, selenium-79, krypton-81, strontium-90, and rhodium-105. References Radioactivity
Trace radioisotope
[ "Physics", "Chemistry" ]
318
[ "Nuclear chemistry stubs", "Nuclear and atomic physics stubs", "Radioactivity", "Nuclear physics" ]
42,445
https://en.wikipedia.org/wiki/Dalton%20%28unit%29
The dalton or unified atomic mass unit (symbols: Da or u, respectively) is a unit of mass defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. It is a non-SI unit accepted for use with SI. The atomic mass constant, denoted mu, is defined identically, giving mu = 1 Da ≈ 1.66053907×10⁻²⁷ kg. This unit is commonly used in physics and chemistry to express the mass of atomic-scale objects, such as atoms, molecules, and elementary particles, both for discrete instances and multiple types of ensemble averages. For example, an atom of helium-4 has a mass of 4.0026 Da. This is an intrinsic property of the isotope and all helium-4 atoms have the same mass. Acetylsalicylic acid (aspirin), C9H8O4, has an average mass of about 180.16 Da. However, there are no acetylsalicylic acid molecules with this mass. The two most common masses of individual acetylsalicylic acid molecules are 180.0423 Da, having the most common isotopes, and 181.0456 Da, in which one carbon is carbon-13. The molecular masses of proteins, nucleic acids, and other large polymers are often expressed with the unit kilodalton (kDa) and megadalton (MDa). Titin, one of the largest known proteins, has a molecular mass of between 3 and 3.7 megadaltons. The DNA of chromosome 1 in the human genome has about 249 million base pairs, each with an average mass of about 650 Da, or roughly 1.6×10¹¹ Da in total. The mole is a unit of amount of substance used in chemistry and physics, such that the mass of one mole of a substance expressed in grams is numerically equal to the average mass of one of its particles expressed in daltons. That is, the molar mass of a chemical compound expressed in g/mol or kg/kmol is numerically equal to its average molecular mass expressed in Da. For example, the average mass of one molecule of water is about 18.0153 Da, and the mass of one mole of water is about 18.0153 g. A protein whose molecule has an average mass of, say, 64 kDa would have a molar mass of 64 kg/mol. However, while this equality can be assumed for practical purposes, it is only approximate, because of the 2019 redefinition of the mole. In general, the mass in daltons of an atom is numerically close but not exactly equal to the number of nucleons in its nucleus. It follows that the molar mass of a compound (grams per mole) is numerically close to the average number of nucleons contained in each molecule. By definition, the mass of an atom of carbon-12 is 12 daltons, which corresponds with the number of nucleons that it has (6 protons and 6 neutrons). However, the mass of an atomic-scale object is affected by the binding energy of the nucleons in its atomic nuclei, as well as the mass and binding energy of its electrons. Therefore, this equality holds only for the carbon-12 atom in the stated conditions, and will vary for other substances. For example, the mass of an unbound atom of the common hydrogen isotope (hydrogen-1, protium) is 1.007825 Da, the mass of a proton is 1.007276 Da, the mass of a free neutron is 1.008665 Da, and the mass of a hydrogen-2 (deuterium) atom is 2.014102 Da. In general, the difference (the absolute mass excess divided by the mass number) is less than 0.1%; exceptions include hydrogen-1 (about 0.8%), helium-3 (0.5%), lithium-6 (0.25%) and beryllium (0.14%). The dalton differs from the unit of mass in the system of atomic units, which is the electron rest mass (me). Energy equivalents The atomic mass constant can also be expressed as its energy-equivalent, mu c².
The CODATA recommended values are: mu c² ≈ 1.49241808560×10⁻¹⁰ J ≈ 931.49410242 MeV. The mass-equivalent is commonly used in place of a unit of mass in particle physics, and these values are also important for the practical determination of relative atomic masses. History Origin of the concept The interpretation of the law of definite proportions in terms of the atomic theory of matter implied that the masses of atoms of various elements had definite ratios that depended on the elements. While the actual masses were unknown, the relative masses could be deduced from that law. In 1803 John Dalton proposed to use the (still unknown) atomic mass of the lightest atom, hydrogen, as the natural unit of atomic mass. This was the basis of the atomic weight scale. For technical reasons, in 1898, chemist Wilhelm Ostwald and others proposed to redefine the unit of atomic mass as 1/16 of the mass of an oxygen atom. That proposal was formally adopted by the International Committee on Atomic Weights (ICAW) in 1903. That was approximately the mass of one hydrogen atom, but oxygen was more amenable to experimental determination. This suggestion was made before the discovery of isotopes in 1912. Physicist Jean Perrin had adopted the same definition in 1909 during his experiments to determine the atomic masses and the Avogadro constant. This definition remained unchanged until 1961. Perrin also defined the "mole" as an amount of a compound that contained as many molecules as 32 grams of oxygen (O₂). He called that number the Avogadro number in honor of physicist Amedeo Avogadro. Isotopic variation The discovery of isotopes of oxygen in 1929 required a more precise definition of the unit. Two distinct definitions came into use. Chemists chose to define the AMU as 1/16 of the average mass of an oxygen atom as found in nature; that is, the average of the masses of the known isotopes, weighted by their natural abundance. Physicists, on the other hand, defined it as 1/16 of the mass of an atom of the isotope oxygen-16 (16O). Definition by IUPAC The existence of two distinct units with the same name was confusing, and the difference (about 2.8×10⁻⁴ in relative terms) was large enough to affect high-precision measurements. Moreover, it was discovered that the isotopes of oxygen had different natural abundances in water and in air. For these and other reasons, in 1961 the International Union of Pure and Applied Chemistry (IUPAC), which had absorbed the ICAW, adopted a new definition of the atomic mass unit for use in both physics and chemistry; namely, 1/12 of the mass of a carbon-12 atom. This new value was intermediate between the two earlier definitions, but closer to the one used by chemists (who would be affected the most by the change). The new unit was named the "unified atomic mass unit" and given a new symbol "u", to replace the old "amu" that had been used for the oxygen-based unit. However, the old symbol "amu" has sometimes been used, after 1961, to refer to the new unit, particularly in lay and preparatory contexts. With this new definition, the standard atomic weight of carbon is about 12.011, and that of oxygen is about 15.999. These values, generally used in chemistry, are based on averages of many samples from Earth's crust, its atmosphere, and organic materials. Adoption by BIPM The IUPAC 1961 definition of the unified atomic mass unit, with that name and symbol "u", was adopted by the International Bureau of Weights and Measures (BIPM) in 1971 as a non-SI unit accepted for use with the SI.
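To make the conversions above concrete, here is a short sketch (not from the article) relating daltons, kilograms, and energy equivalents; the constants are CODATA 2018 values, treated here as given inputs:

```python
DA_IN_KG = 1.66053906660e-27   # 1 Da in kg (CODATA 2018)
C = 299792458.0                # speed of light, m/s (exact)
J_PER_MEV = 1.602176634e-13    # joules per MeV (exact since 2019)

def da_to_kg(m_da):
    """Convert a mass in daltons to kilograms."""
    return m_da * DA_IN_KG

def da_to_mev(m_da):
    """Energy equivalent E = m*c^2, expressed in MeV."""
    return m_da * DA_IN_KG * C**2 / J_PER_MEV

print(f"1 Da        = {da_to_kg(1):.6e} kg")
print(f"1 Da * c^2  = {da_to_mev(1):.4f} MeV")   # ~931.494 MeV
print(f"water (18.0153 Da) = {da_to_kg(18.0153):.4e} kg per molecule")
```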
Unit name In 1993, the IUPAC proposed the shorter name "dalton" (with symbol "Da") for the unified atomic mass unit. As with other unit names such as watt and newton, "dalton" is not capitalized in English, but its symbol, "Da", is capitalized. The name was endorsed by the International Union of Pure and Applied Physics (IUPAP) in 2005. In 2003 the name was recommended to the BIPM by the Consultative Committee for Units, part of the CIPM, as it "is shorter and works better with [SI] prefixes". In 2006, the BIPM included the dalton in its 8th edition of the SI brochure of formal definitions as a non-SI unit accepted for use with the SI. The name was also listed as an alternative to "unified atomic mass unit" by the International Organization for Standardization in 2009. It is now recommended by several scientific publishers, and some of them consider "atomic mass unit" and "amu" deprecated. In 2019, the BIPM retained the dalton in its 9th edition of the SI brochure, while dropping the unified atomic mass unit from its table of non-SI units accepted for use with the SI, but secondarily notes that the dalton (Da) and the unified atomic mass unit (u) are alternative names (and symbols) for the same unit. 2019 revision of the SI The definition of the dalton was not affected by the 2019 revision of the SI; that is, 1 Da in the SI is still 1/12 of the mass of a carbon-12 atom, a quantity that must be determined experimentally in terms of SI units. However, the definition of a mole was changed to be the amount of substance consisting of exactly 6.02214076×10²³ entities, and the definition of the kilogram was changed as well. As a consequence, the molar mass constant remains close to but no longer exactly 1 g/mol, meaning that the mass in grams of one mole of any substance remains nearly but no longer exactly numerically equal to its average molecular mass in daltons, although the relative standard uncertainty of 4.5×10⁻¹⁰ at the time of the redefinition is insignificant for all practical purposes. Measurement Though relative atomic masses are defined for neutral atoms, they are measured (by mass spectrometry) for ions: hence, the measured values must be corrected for the mass of the electrons that were removed to form the ions, and also for the mass equivalent of the electron binding energy, Eb/c². The total binding energy of the six electrons in a carbon-12 atom is about 1030.1 eV ≈ 1.6504×10⁻¹⁶ J: Eb/mu c² ≈ 1.106×10⁻⁶, or about one part in 10 million of the mass of the atom. Before the 2019 revision of the SI, experiments were aimed to determine the value of the Avogadro constant for finding the value of the unified atomic mass unit. Josef Loschmidt A reasonably accurate value of the atomic mass unit was first obtained indirectly by Josef Loschmidt in 1865, by estimating the number of particles in a given volume of gas. Jean Perrin Perrin estimated the Avogadro number by a variety of methods, at the turn of the 20th century. He was awarded the 1926 Nobel Prize in Physics, largely for this work. Coulometry The electric charge per mole of elementary charges is a constant called the Faraday constant, F, whose value had been essentially known since 1834 when Michael Faraday published his works on electrolysis. In 1910, Robert Millikan obtained the first measurement of the charge on an electron, −e. The quotient F/e provided an estimate of the Avogadro constant. The classic experiment is that of Bower and Davis at NIST, and relies on dissolving silver metal away from the anode of an electrolysis cell, while passing a constant electric current I for a known time t.
If m is the mass of silver lost from the anode and A the atomic weight of silver, then the Faraday constant is given by: F = A I t / m, where A is here expressed as the molar mass of silver. The NIST scientists devised a method to compensate for silver lost from the anode by mechanical causes, and conducted an isotope analysis of the silver used to determine its atomic weight. Their value for the conventional Faraday constant was F = 96485.39 C/mol, which corresponds to a value for the Avogadro constant of 6.0221449×10²³ mol⁻¹: both values have a relative standard uncertainty of 1.3×10⁻⁶. Electron mass measurement In practice, the atomic mass constant is determined from the electron rest mass me and the electron relative atomic mass Ar(e) (that is, the mass of the electron divided by the atomic mass constant): mu = me/Ar(e). The relative atomic mass of the electron can be measured in cyclotron experiments, while the rest mass of the electron can be derived from other physical constants: me = 2R∞h/(cα²), where c is the speed of light, h is the Planck constant, α is the fine-structure constant, and R∞ is the Rydberg constant. As may be observed from the older values (2014 CODATA), the main limiting factor in the precision of the Avogadro constant was the uncertainty in the value of the Planck constant, as all the other constants that contribute to the calculation were known more precisely. The power of having defined values of universal constants, as is presently the case, can be seen by comparison with the 2018 CODATA values. X-ray crystal density methods Silicon single crystals may be produced today in commercial facilities with extremely high purity and with few lattice defects. This method defined the Avogadro constant as the ratio of the molar volume, Vm, to the atomic volume Vatom: NA = Vm/Vatom, where Vatom = Vcell/n and n is the number of atoms per unit cell of volume Vcell. The unit cell of silicon has a cubic packing arrangement of 8 atoms, and the unit cell volume may be measured by determining a single unit cell parameter, the length a of one of the sides of the cube. The CODATA value of a for silicon is 5.431020511×10⁻¹⁰ m. In practice, measurements are carried out on a distance known as d220(Si), which is the distance between the planes denoted by the Miller indices {220}, and is equal to a/√8 ≈ 192.0155714 pm. The isotope proportional composition of the sample used must be measured and taken into account. Silicon occurs in three stable isotopes (Si-28, Si-29, Si-30), and the natural variation in their proportions is greater than other uncertainties in the measurements. The atomic weight Ar for the sample crystal can be calculated, as the standard atomic weights of the three nuclides are known with great accuracy. This, together with the measured density ρ of the sample, allows the molar volume Vm to be determined: Vm = Ar Mu / ρ, where Mu is the molar mass constant. The CODATA value for the molar volume of silicon is about 12.0588×10⁻⁶ m³ mol⁻¹. See also Mass (mass spectrometry) Kendrick mass Monoisotopic mass Mass-to-charge ratio Notes References External links Metrology Nuclear chemistry Units of chemical measurement Units of mass
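The X-ray crystal density method described above reduces to one line of arithmetic. A sketch follows (rounded, illustrative input values, not the CODATA evaluation itself):

```python
# N_A = n * M / (rho * a**3) for a silicon crystal with n = 8 atoms per
# cubic unit cell; the numbers below are rounded, sample-dependent inputs.
n = 8                    # atoms per unit cell
a = 5.431020511e-10      # lattice parameter, m
rho = 2329.0             # density of silicon, kg/m^3 (approximate)
M = 28.0855e-3           # molar mass of natural silicon, kg/mol

N_A = n * M / (rho * a**3)
print(f"N_A ~= {N_A:.4e} per mole")   # ~6.02e23, as expected
```

This rearranges the density relation rho = n*(M/N_A)/a**3; the hard experimental work is in measuring a, rho, and the isotopic composition, not in the algebra.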
Dalton (unit)
[ "Physics", "Chemistry", "Mathematics" ]
2,914
[ "Units of measurement", "Nuclear chemistry", "Quantity", "Units of mass", "Mass", "Chemical quantities", "nan", "Nuclear physics", "Units of chemical measurement", "Matter" ]
42,453
https://en.wikipedia.org/wiki/Kirkendall%20effect
The Kirkendall effect is the motion of the interface between two metals that occurs due to the difference in diffusion rates of the metal atoms. The effect can be observed, for example, by placing insoluble markers at the interface between a pure metal and an alloy containing that metal, and heating to a temperature at which atomic diffusion occurs at a reasonable rate for the given timescale; the boundary will move relative to the markers. This process was named after Ernest Kirkendall (1914–2005), assistant professor of chemical engineering at Wayne State University from 1941 to 1946. The paper describing the discovery of the effect was published in 1947. The Kirkendall effect has important practical consequences. One of these is the prevention or suppression of voids formed at the boundary interface in various kinds of alloy-to-metal bonding. These are referred to as Kirkendall voids. History The Kirkendall effect was discovered by Ernest Kirkendall and Alice Smigelskas in 1947, in the course of Kirkendall's ongoing research into diffusion in brass. The paper in which he reported the effect was the third in his series of papers on brass diffusion, the first being his thesis. His second paper revealed that zinc diffused more quickly than copper in alpha-brass, an observation that led to the experiments behind his revolutionary theory. Until this point, substitutional and ring mechanisms were the dominant ideas for diffusional motion. Kirkendall's experiment produced evidence of a vacancy diffusion mechanism, which is the accepted mechanism to this day. At the time it was submitted, the paper and Kirkendall's ideas were rejected from publication by Robert Franklin Mehl, director of the Metals Research Laboratory at Carnegie Institute of Technology (now Carnegie Mellon University). Mehl refused to accept Kirkendall's evidence of this new diffusion mechanism and denied publication for over six months, only relenting after a conference was held and several other researchers confirmed Kirkendall's results. Kirkendall's experiment A bar of brass (70% Cu, 30% Zn) was used as a core, with molybdenum wires stretched along its length, and then coated in a layer of pure copper. Molybdenum was chosen as the marker material because it is very insoluble in brass, eliminating any error due to the markers diffusing themselves. Diffusion was allowed to take place at 785 °C over the course of 56 days, with cross-sections taken at six points in time over the span of the experiment. Over time, it was observed that the wire markers moved closer together as the zinc diffused out of the brass and into the copper. A difference in the location of the interface was visible in cross-sections taken at different times. The compositional change of the material from diffusion was confirmed by X-ray diffraction. Diffusion mechanism Early diffusion models postulated that atomic motion in substitutional alloys occurs via a direct exchange mechanism, in which atoms migrate by switching positions with atoms on adjacent lattice sites. Such a mechanism implies that the atomic fluxes of two different materials across an interface must be equal, as each atom moving across the interface causes another atom to move across in the other direction. Another possible diffusion mechanism involves lattice vacancies. An atom can move into a vacant lattice site, effectively causing the atom and the vacancy to switch places. If large-scale diffusion takes place in a material, there will be a flux of atoms in one direction and a flux of vacancies in the other.
The Kirkendall effect arises when two distinct materials are placed next to each other and diffusion is allowed to take place between them. In general, the diffusion coefficients of the two materials in each other are not the same. This is only possible if diffusion occurs by a vacancy mechanism; if the atoms instead diffused by an exchange mechanism, they would cross the interface in pairs, so the diffusion rates would be identical, contrary to observation. By Fick's first law of diffusion, the flux of atoms from the material with the higher diffusion coefficient will be larger, so there will be a net flux of atoms from the material with the higher diffusion coefficient into the material with the lower diffusion coefficient. To balance this flux of atoms, there will be a flux of vacancies in the opposite direction, from the material with the lower diffusion coefficient into the material with the higher diffusion coefficient, resulting in an overall translation of the lattice relative to the environment in the direction of the material with the higher diffusion coefficient. Macroscopic evidence for the Kirkendall effect can be gathered by placing inert markers at the initial interface between the two materials, such as molybdenum markers at an interface between copper and brass. The diffusion coefficient of zinc is higher than the diffusion coefficient of copper in this case. Since zinc atoms leave the brass at a higher rate than copper atoms enter, the size of the brass region decreases as diffusion progresses. Relative to the molybdenum markers, the copper–brass interface moves toward the brass at an experimentally measurable rate. Darken's equations Shortly after the publication of Kirkendall's paper, L. S. Darken published an analysis of diffusion in binary systems much like the one studied by Smigelskas and Kirkendall. By separating the actual diffusive flux of the materials from the movement of the interface relative to the markers, Darken found the marker velocity to be v = (D1 − D2) ∂N1/∂x, where D1 and D2 are the diffusion coefficients of the two materials and N1 is an atomic fraction. One consequence of this equation is that the movement of an interface varies linearly with the square root of time, which is exactly the experimental relationship discovered by Smigelskas and Kirkendall. Darken also developed a second equation that defines a combined chemical diffusion coefficient D̃ in terms of the diffusion coefficients of the two interfacing materials: D̃ = N1 D2 + N2 D1. This chemical diffusion coefficient can be used to mathematically analyze Kirkendall effect diffusion via the Boltzmann–Matano method. Kirkendall porosity One important consideration deriving from Kirkendall's work is the presence of pores formed during diffusion. These voids act as sinks for vacancies, and when enough vacancies accumulate, the pores can become substantial and expand in an attempt to restore equilibrium. Porosity occurs due to the difference in diffusion rate of the two species. Pores in metals have ramifications for mechanical, thermal, and electrical properties, and thus control over their formation is often desired. The equation x = K ΔC √t, where x is the distance moved by a marker, K is a coefficient determined by the intrinsic diffusivities of the materials, and ΔC is the concentration difference between the components, has proven to be an effective model for mitigating Kirkendall porosity. Controlling annealing temperature is another method of reducing or eliminating porosity.
Kirkendall porosity typically occurs at a set temperature in a system, so annealing can be performed at lower temperatures for longer times to avoid formation of pores. Examples In 1972, C. W. Horsting of the RCA Corporation published a paper which reported test results on the reliability of semiconductor devices in which the connections were made using aluminium wires bonded ultrasonically to gold-plated posts. His paper demonstrated the importance of the Kirkendall effect in wire bonding technology, but also showed the significant contribution of any impurities present to the rate at which precipitation occurred at the wire bonds. Two of the important contaminants that produce this effect, known as the Horsting effect (Horsting voids), are fluorine and chlorine. Both Kirkendall voids and Horsting voids are known causes of wire-bond fractures, though historically this cause is often confused with the purple-colored appearance of one of the five different gold–aluminium intermetallics, commonly referred to as "purple plague" and less often "white plague". See also Electromigration References External links Aloke Paul, Tomi Laurila, Vesa Vuorinen and Sergiy Divinski, Thermodynamics, Diffusion and the Kirkendall Effect in Solids, Springer, Heidelberg, Germany, 2014. Kirkendall Effect: Dramatic History of Discovery and Developments by L.N. Paritskaya Interdiffusion and Kirkendall Effect in Cu-Sn Alloys Visual demonstration of the Kirkendall effect Metallurgy
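Darken's two equations defined above are simple enough to evaluate directly. A sketch follows; the diffusivities, gradient, and composition are made-up illustrative numbers (real intrinsic diffusivities depend strongly on temperature), not measured values:

```python
def marker_velocity(d1, d2, dn1_dx):
    """Darken marker velocity v = (D1 - D2) * dN1/dx."""
    return (d1 - d2) * dn1_dx

def interdiffusion_coefficient(d1, d2, n1):
    """Darken interdiffusion coefficient D = N1*D2 + N2*D1, with N2 = 1 - N1."""
    return n1 * d2 + (1.0 - n1) * d1

D_ZN, D_CU = 5.0e-13, 1.0e-13   # m^2/s, hypothetical values for Zn and Cu
n_zn = 0.30                     # atomic fraction of zinc in alpha-brass
grad = 1.0e3                    # dN_Zn/dx in 1/m, hypothetical

print(f"marker velocity  v = {marker_velocity(D_ZN, D_CU, grad):.3e} m/s")
print(f"interdiffusion   D = {interdiffusion_coefficient(D_ZN, D_CU, n_zn):.3e} m^2/s")
```

Because D_Zn > D_Cu, the computed marker velocity is positive toward the zinc-rich side, matching the inward drift of the molybdenum wires in Kirkendall's experiment.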
Kirkendall effect
[ "Chemistry", "Materials_science", "Engineering" ]
1,690
[ "Metallurgy", "Materials science", "nan" ]
42,526
https://en.wikipedia.org/wiki/Etching
Etching is traditionally the process of using strong acid or mordant to cut into the unprotected parts of a metal surface to create a design in intaglio (incised) in the metal. In modern manufacturing, other chemicals may be used on other types of material. As a method of printmaking, it is, along with engraving, the most important technique for old master prints, and remains in wide use today. In a number of modern variants such as microfabrication etching and photochemical milling, it is a crucial technique in modern technology, including circuit boards. In traditional pure etching, a metal plate (usually of copper, zinc or steel) is covered with a waxy ground which is resistant to acid. The artist then scratches off the ground with a pointed etching needle where the artist wants a line to appear in the finished piece, exposing the bare metal. The échoppe, a tool with a slanted oval section, is also used for "swelling" lines. The plate is then dipped in a bath of acid, known as the mordant (French for "biting") or etchant, or has acid washed over it. The acid "bites" into the metal (it undergoes a redox reaction) to a depth depending on time and acid strength, leaving behind the drawing (as carved into the wax) on the metal plate. The remaining ground is then cleaned off the plate. For the first and each renewed use, the plate is inked all over with any chosen non-corrosive ink, and the surface ink is drained and wiped clean, leaving ink in the etched forms. The plate is then put through a high-pressure printing press together with a sheet of paper (often moistened to soften it). The paper picks up the ink from the etched lines, making a print. The process can be repeated many times; typically several hundred impressions (copies) could be printed before the plate shows much sign of wear. The work on the plate can be added to or repaired by re-waxing and further etching; such an etching (plate) may have been used in more than one state. Etching has often been combined with other intaglio techniques such as engraving (e.g., Rembrandt) or aquatint (e.g., Francisco Goya). History Origin Etching in antiquity Etching was already used in antiquity for decorative purposes. Etched carnelian beads are a type of ancient decorative beads made from carnelian with an etched design in white, which were probably manufactured by the Indus Valley civilization during the 3rd millennium BCE. They were made according to a technique of alkaline etching developed by the Harappans, and vast quantities of these beads were found in the archaeological sites of the Indus Valley civilization. They are considered an important marker of ancient trade between the Indus Valley, Mesopotamia and even Ancient Egypt, as these precious and unique manufactured items circulated in great numbers between these geographical areas during the 3rd millennium BCE, and have been found in numerous tomb deposits. Sumerian kings, such as Shulgi, also created etched carnelian beads for dedication purposes. Early etching Etching by goldsmiths and other metal-workers in order to decorate metal items such as guns, armour, cups and plates has been known in Europe since the Middle Ages at least, and may go back to antiquity. The elaborate decoration of armour, in Germany at least, was an art probably imported from Italy around the end of the 15th century, a little earlier than the birth of etching as a printmaking technique.
Printmakers from the German-speaking lands and Central Europe perfected the art and transmitted their skills over the Alps and across Europe. The process as applied to printmaking is believed to have been invented by Daniel Hopfer (c. 1470–1536) of Augsburg, Germany. Hopfer was a craftsman who decorated armour in this way, and applied the method to printmaking, using iron plates (many of which still exist). Apart from his prints, there are two proven examples of his work on armour: a shield from 1536 now in the Real Armeria of Madrid and a sword in the Germanisches Nationalmuseum of Nuremberg. An Augsburg horse armour in the German Historical Museum, Berlin, dating to between 1512 and 1515, is decorated with motifs from Hopfer's etchings and woodcuts, but this is no evidence that Hopfer himself worked on it, as his decorative prints were largely produced as patterns for other craftsmen in various media. The oldest dated etching is by Albrecht Dürer in 1515, although he returned to engraving after six etchings instead of developing the craft. The switch to copper plates was probably made in Italy, and thereafter etching soon came to challenge engraving as the most popular medium for artists in printmaking. Its great advantage was that, unlike engraving, where the difficult technique for using the burin requires special skill in metalworking, the basic technique for creating the image on the plate in etching is relatively easy to learn for an artist trained in drawing. On the other hand, the handling of the ground and acid needs skill and experience, and is not without health and safety risks, as well as the risk of a ruined plate. Callot's innovations: échoppe, hard ground, stopping-out Jacques Callot (1592–1635) from Nancy in Lorraine (now part of France) made important technical advances in etching technique. Callot also appears to have been responsible for an improved, harder recipe for the etching ground, using lute-makers' varnish rather than a wax-based formula. This enabled lines to be more deeply bitten, prolonging the life of the plate in printing, and also greatly reducing the risk of "foul-biting", where acid gets through the ground to the plate where it is not intended to, producing spots or blotches on the image. Previously the risk of foul-biting had always been at the back of an etcher's mind, preventing the investment of too much time in a single plate that risked being ruined in the biting process. Now etchers could do the highly detailed work that was previously the monopoly of engravers, and Callot made full use of the new possibilities. Callot also made more extensive and sophisticated use of multiple "stoppings-out" than previous etchers had done. This is the technique of letting the acid bite lightly over the whole plate, then stopping-out those parts of the work which the artist wishes to keep light in tone by covering them with ground before bathing the plate in acid again. He achieved unprecedented subtlety in effects of distance and light and shade by careful control of this process. Most of his prints were relatively small, up to about six inches (15 cm) on their longest dimension, but packed with detail. One of his followers, the Parisian Abraham Bosse, spread Callot's innovations all over Europe with the first published manual of etching, which was translated into Italian, Dutch, German and English. The 17th century was the great age of etching, with Rembrandt, Giovanni Benedetto Castiglione and many other masters.
In the 18th century, Piranesi, Tiepolo and Daniel Chodowiecki were the best of a smaller number of fine etchers. In the 19th and early 20th century, the Etching revival produced a host of lesser artists, but no really major figures. Etching is still widely practiced today. Variants Aquatint uses acid-resistant resin to achieve tonal effects. Soft-ground etching uses a special softer ground. The artist places a piece of paper (or cloth etc. in modern uses) over the ground and draws on it. The print resembles a drawing. Soft ground can also be used to capture the texture or pattern of fabrics or furs pressed into the soft surface. Other materials that are not manufactured specifically for etching can be used as grounds or resists. Examples include printing ink, paint, spray paint, oil pastels, candle or bees wax, tacky vinyl or stickers, and permanent markers. There are some new non-toxic grounds on the market that work differently than typical hard or soft grounds. Relief etching was invented by William Blake in about 1788, and he has been almost the only artist to use it in its original form. However, from 1880 to 1950 a photo-mechanical ("line-block") variant was the dominant form of commercial printing for images. It is a similar process to etching, but printed as a relief print, so it is the "white" background areas which are exposed to the acid, and the areas to print "black" which are covered with ground. Blake's exact technique remains controversial. He used the technique to print texts and images together, writing the text and drawing lines with an acid-resistant medium. Carborundum etching (sometimes called carbograph printing) was invented in the mid-20th century by American artists who worked for the WPA. In this technique, a metal plate is first covered with silicon carbide grit and run through an etching press; then a design is drawn on the roughened plate using an acid-resistant medium. After immersion in an acid bath, the resulting plate is printed as a relief print. The roughened surface of the relief permits considerable tonal range, and it is possible to attain a high relief that results in strongly embossed prints. Printmaking technique in detail A waxy acid-resist, known as a ground, is applied to a metal plate, most often copper or zinc, but steel plate is another medium with different qualities. There are two common types of ground: hard ground and soft ground. Hard ground can be applied in two ways. Solid hard ground comes in a hard waxy block. To apply hard ground of this variety, the plate to be etched is placed upon a hot-plate (set at 70 °C, 158 °F), a kind of metal worktop that is heated up. The plate heats up and the ground is applied by hand, melting onto the plate as it is applied. The ground is spread over the plate as evenly as possible using a roller. Once applied, the etching plate is removed from the hot-plate and allowed to cool, which hardens the ground. After the ground has hardened, the artist "smokes" the plate, classically with 3 beeswax tapers, applying the flame to the plate to darken the ground and make it easier to see what parts of the plate are exposed. Smoking not only darkens the plate but adds a small amount of wax. Afterwards the artist uses a sharp tool to scratch into the ground, exposing the metal. The second way to apply hard ground is liquid hard ground. This comes in a can and is applied with a brush upon the plate to be etched. Exposed to air, the hard ground will harden.
Some printmakers use oil/tar-based asphaltum or bitumen as hard ground, although often bitumen is used to protect steel plates from rust and copper plates from aging. Soft ground also comes in liquid form and is allowed to dry, but it does not dry hard like hard ground and is impressionable. After the soft ground has dried, the printmaker may apply materials such as leaves, objects, hand prints and so on, which will penetrate the soft ground and expose the plate underneath. The ground can also be applied in a fine mist, using powdered rosin or spraypaint. This process is called aquatint, and allows for the creation of tones, shadows, and solid areas of color. The design is then drawn (in reverse) with an etching-needle or échoppe. An "echoppe" point can be made from an ordinary tempered steel etching needle, by grinding the point back on a carborundum stone, at a 45–60 degree angle. The "echoppe" works on the same principle that makes a fountain pen's line more attractive than a ballpoint's: the slight swelling variation caused by the natural movement of the hand "warms up" the line, and although hardly noticeable in any individual line, has a very attractive overall effect on the finished plate. It can be used for drawing in the same way as an ordinary needle. The plate is then completely submerged in a solution that eats away at the exposed metal. Ferric chloride may be used for etching copper or zinc plates, whereas nitric acid may be used for etching zinc or steel plates. Typical solutions are 1 part FeCl3 to 1 part water, and 1 part nitric acid to 3 parts water. The strength of the acid determines the speed of the etching process. The etching process is known as biting (see also spit-biting below). The waxy resist prevents the acid from biting the parts of the plate which have been covered. The longer the plate remains in the acid, the deeper the "bites" become. During the etching process the printmaker uses a bird feather or similar item to wave away bubbles and detritus produced by the dissolving process from the surface of the plate, or the plate may be periodically lifted from the acid bath. If a bubble is allowed to remain on the plate then it will stop the acid biting into the plate where the bubble touches it. Zinc produces more bubbles much more rapidly than copper and steel, and some artists use this to produce interesting round bubble-like circles within their prints for a Milky Way effect. The detritus is powdery dissolved metal that fills the etched grooves and can also block the acid from biting evenly into the exposed plate surfaces. Another way to remove detritus from a plate is to place the plate to be etched face down within the acid upon plasticine balls or marbles, although the drawback of this technique is the exposure to bubbles and the inability to remove them readily. For aquatinting a printmaker will often use a test strip of metal about a centimetre to three centimetres wide. The strip will be dipped into the acid for a specific number of minutes or seconds. The metal strip will then be removed and the acid washed off with water. Part of the strip will be covered in ground and then the strip is redipped into the acid and the process repeated. The ground will then be removed from the strip and the strip inked up and printed. This will show the printmaker the different degrees or depths of the etch, and therefore the strength of the ink color, based upon how long the plate is left in the acid. The plate is removed from the acid and washed over with water to remove the acid.
The ground is removed with a solvent such as turpentine. Turpentine is often removed from the plate using methylated spirits, since turpentine is greasy and can affect the application of ink and the printing of the plate. Spit-biting is a process whereby the printmaker applies acid to a plate with a brush in certain areas of the plate. The plate may be aquatinted for this purpose or exposed directly to the acid. The process is known as "spit"-biting due to the use of saliva, once employed as a medium to dilute the acid, although gum arabic or water are now commonly used. A piece of matte board, a plastic "card", or a wad of cloth is often used to push the ink into the incised lines. The surface is wiped clean with a piece of stiff fabric known as tarlatan and then wiped with newsprint paper; some printmakers prefer to use the blade part of their hand or the palm at the base of their thumb. The wiping leaves ink in the incisions. A folded piece of organza silk may also be used for the final wipe. If copper or zinc plates are used, then the plate surface is left very clean and therefore white in the print. If steel plate is used, then the plate's natural tooth gives the print a grey background similar to the effects of aquatinting. As a result, steel plates do not need aquatinting, as gradual exposure of the plate via successive dips into acid will produce the same result. A damp piece of paper is placed over the plate and it is run through the press. Nontoxic etching Growing concerns about the health effects of acids and solvents led to the development of less toxic etching methods in the late 20th century. An early innovation was the use of floor wax as a hard ground for coating the plate. Others, such as printmakers Mark Zaffron and Keith Howard, developed systems using acrylic polymers as a ground and ferric chloride for etching. The polymers are removed with sodium carbonate (washing soda) solution, rather than solvents. When used for etching, ferric chloride does not produce a corrosive gas, as acids do, thus eliminating another danger of traditional etching. The traditional aquatint, which uses either powdered rosin or enamel spray paint, is replaced with an airbrush application of the acrylic polymer hard ground. Again, no solvents are needed beyond the soda ash solution, though a ventilation hood is needed due to acrylic particulates from the air brush spray. The traditional soft ground, requiring solvents for removal from the plate, is replaced with water-based relief printing ink. The ink receives impressions like traditional soft ground, resists the ferric chloride etchant, yet can be cleaned up with warm water and either soda ash solution or ammonia. Anodic etching has been used in industrial processes for over a century. The etching power source supplies direct current. The item to be etched (anode) is connected to its positive pole. A receiver plate (cathode) is connected to its negative pole. Both, spaced slightly apart, are immersed in a suitable aqueous electrolyte solution. The current pushes the metal out from the anode into solution and deposits it as metal on the cathode. Shortly before 1990, two groups working independently developed different ways of applying it to the creation of intaglio printing plates.
In the patented Electroetch system, invented by Marion and Omri Behr, in contrast to certain nontoxic etching methods, an etched plate can be reworked as often as the artist desires. The system uses voltages below 2 volts, which exposes the uneven metal crystals in the etched areas, resulting in superior ink retention and a printed image of quality equivalent to traditional acid methods. With the polarity reversed, the low voltage provides a simpler method of making mezzotint plates, as well as of "steel facing" copper plates. Some of the earliest printmaking workshops experimenting with, developing and promoting nontoxic techniques include Grafisk Eksperimentarium, in Copenhagen, Denmark, Edinburgh Printmakers, in Scotland, and New Grounds Print Workshop, in Albuquerque, New Mexico. Photo-etching Light-sensitive polymer plates allow for photorealistic etchings. A photo-sensitive coating is applied to the plate by either the plate supplier or the artist. Light is projected onto the plate as a negative image to expose it. Photopolymer plates are washed either in hot water or in other chemicals, according to the plate manufacturers' instructions. Areas of the photo-etch image may be stopped-out before etching to exclude them from the final image on the plate, or removed or lightened by scraping and burnishing once the plate has been etched. Once the photo-etching process is complete, the plate can be worked further as a normal intaglio plate, using drypoint, further etching, engraving, etc. The final result is an intaglio plate which is printed like any other. Types of metal plates Copper is a traditional metal, and is still preferred, for etching, as it bites evenly, holds texture well, and does not distort the color of the ink when wiped. Zinc is cheaper than copper, so preferable for beginners, but it does not bite as cleanly as copper does, and it alters some colors of ink. Steel is growing in popularity as an etching substrate. Increases in the prices of copper and zinc have made steel an acceptable alternative. The line quality of steel is less fine than copper, but finer than zinc. Steel has a natural and rich aquatint. The type of metal used for the plate impacts the number of prints the plate will produce. The firm pressure of the printing press slowly rubs out the finer details of the image with every pass-through. With relatively soft copper, for example, the etching details will begin to wear very quickly; some copper plates show extreme wear after only ten prints. Steel, on the other hand, is incredibly durable. This wearing out of the image over time is one of the reasons etched prints created early in a numbered series tend to be valued more highly. An artist thus takes the total number of prints he or she wishes to produce into account whenever choosing the metal. Industrial uses Etching is also used in the manufacturing of printed circuit boards and semiconductor devices, and in the preparation of metallic specimens for microscopic observation. Prior to 1100 AD, the New World Hohokam culture independently utilized the technique of acid etching in marine shell designs. The shells were daubed in pitch and then bathed in acid, probably made from fermented cactus juice. Metallographic etching Metallographic etching is a method of preparing samples of metal for analysis.
It can be applied after polishing to further reveal microstructural features (such as grain size, distribution of phases, and inclusions), along with other aspects such as prior mechanical deformation or thermal treatments. Metal can be etched using chemicals, electrolysis, or heat (thermal etching). Controlling the acid's effects There are many ways for the printmaker to control the acid's effects. Hard grounds Most typically, the surface of the plate is covered in a hard, waxy 'ground' that resists acid. The printmaker then scratches through the ground with a sharp point, exposing lines of metal which the mordant acid attacks. Aquatint Aquatint is a variation giving only tone rather than lines when printed. Particulate resin is evenly distributed on all or parts of the plate, then heated to form a screen ground of uniform, but less than perfect, density. After etching, any exposed surface will result in a roughened (i.e., darkened) surface. Areas that are to be light in the final print are protected by varnishing between acid baths. Successive rounds of varnishing and placing the plate in acid create areas of tone difficult or impossible to achieve by drawing through a wax ground. Sugar lift Designs in a syrupy solution of sugar or Camp Coffee are painted onto the metal surface prior to it being coated in a liquid etching ground or 'stop out' varnish. When the plate is placed in hot water the sugar dissolves, leaving the image. The plate can then be etched. Spit bite A mixture of nitric acid and gum arabic (or, very rarely, saliva) can be dripped, spattered or painted onto a metal surface, giving interesting results. A mixture of nitric acid and rosin may also be used. Printing Printing the plate is done by covering the surface with printing ink, then rubbing the ink off the surface with tarlatan cloth or newsprint, leaving ink in the roughened areas and lines. Damp paper is placed on the plate, and both are run through a printing press; the pressure forces the paper into contact with the ink, transferring the image (cf. chine-collé). The pressure subtly degrades the image in the plate, smoothing the roughened areas and closing the lines; a copper plate is good for, at most, a few hundred printings of a strongly etched image before the degradation is considered too great by the artist. At that point, the artist can manually restore the plate by re-etching it, essentially putting ground back on and retracing their lines; alternatively, plates can be electroplated with a harder metal before printing to preserve the surface. Zinc is also used because, as a softer metal, etching times are shorter; however, that softness also leads to faster degradation of the image in the press. Faults Foul-bite or "over-biting" is common in etching, and is the effect of minuscule amounts of acid leaking through the ground to create minor pitting and burning on the surface. This incidental roughening may be removed by smoothing and polishing the surface, but artists often leave foul-bite, or deliberately court it by handling the plate roughly, because it is viewed as a desirable mark of the process. "Etchings" euphemism The phrase "Want to come up and see my etchings?" is a romantic euphemism by which a person entices someone to come back to their place with an offer to look at something artistic, but with ulterior motives. The phrase is a corruption of some phrases in a novel by Horatio Alger Jr. called The Erie Train Boy, which was first published in 1891.
Alger was an immensely popular author in the 19th century—especially with young people—and his books were widely quoted. In chapter XXII of the book, a woman writes to her boyfriend, "I have a new collection of etchings that I want to show you. Won't you name an evening when you will call, as I want to be certain to be at home when you really do come." The boyfriend then writes back "I shall no doubt find pleasure in examining the etchings which you hold out as an inducement to call." This was referenced in a 1929 James Thurber cartoon in which a man tells a woman in a building lobby: "You wait here and I'll bring the etchings down". It was also referenced in Dashiell Hammett's 1934 novel The Thin Man, in which the narrator answers his wife asking him about a lady he had wandered off with by saying: "She just wanted to show me some French etchings." The phrase was given new popularity in 1937: in a well publicized case, violinist David Rubinoff was accused of inviting a young woman to his hotel room to view some French etchings, but instead seducing her. As early as 1895, Hjalmar Söderberg used the reference in his "decadent" début novel Delusions (swe: Förvillelser), when he lets the dandy Johannes Hall lure the main character's younger sister Greta into his room under the pretence that they browse through his etchings and engravings (e.g., Die Sünde by Franz Stuck). See also Acid test (gold) Electroetching List of art techniques List of etchings by Rembrandt List of printmakers Old master prints for the history of the method Photoengraving Photolithography References External links Prints & People: A Social History of Printed Pictures, an exhibition catalog from The Metropolitan Museum of Art, which contains material on etching The Print Australia Reference Library Catalogue Etching from the MMA Timeline of Art History Metropolitan Museum, materials-and-techniques: etching Museum of Modern Art information on printing techniques and examples of prints The Wenceslaus Hollar Collection of digitized books and images at the University of Toronto Carrington, Fitzroy. Prints and their makers: essays on engravers and etchers old and modern. United States: The Century Co., 1911, copyright 1912. Printmaking Relief printing Metalworking Chemical processes
Etching
[ "Chemistry" ]
5,657
[ "Chemical process engineering", "Chemical processes", "nan" ]
42,676
https://en.wikipedia.org/wiki/Mold%20health%20issues
Mold health issues refer to the harmful health effects of molds ("moulds" in British English) and their mycotoxins. Molds are ubiquitous in the biosphere, and mold spores are a common component of household and workplace dust. The vast majority of molds are not hazardous to humans, and reaction to molds can vary between individuals, with relatively minor allergic reactions being the most common. The United States Centers for Disease Control and Prevention (CDC) reported in its June 2006 report, 'Mold Prevention Strategies and Possible Health Effects in the Aftermath of Hurricanes and Major Floods,' that "excessive exposure to mold-contaminated materials can cause adverse health effects in susceptible persons regardless of the type of mold or the extent of contamination." When mold spores are present in abnormally high quantities, they can present especially hazardous health risks to humans after prolonged exposure, including allergic reactions or poisoning by mycotoxins, or causing fungal infection (mycosis). Health effects People who are atopic (sensitive), already have allergies, asthma, or compromised immune systems and occupy damp or moldy buildings are at an increased risk of health problems such as inflammatory responses to mold spores, metabolites such as mycotoxins, and other components. Other problems are respiratory and/or immune system responses including respiratory symptoms, respiratory infections, exacerbation of asthma, and rarely hypersensitivity pneumonitis, allergic alveolitis, chronic rhinosinusitis and allergic fungal sinusitis. A person's reaction to mold depends on their sensitivity and other health conditions, the amount of mold present, length of exposure, and the type of mold or mold products. The five most common genera of indoor molds are Cladosporium, Penicillium, Aspergillus, Alternaria, and Trichoderma. Damp environments that allow mold to grow can also allow the proliferation of bacteria and release volatile organic compounds. Symptoms of mold exposure Symptoms of mold exposure can include: nasal and sinus congestion or runny nose; respiratory problems, such as wheezing, difficulty breathing, and chest tightness; cough; throat irritation; and sneezing. Health effects linked to asthma Adverse respiratory health effects are associated with occupancy in buildings with moisture and mold damage. Infants in homes with mold have a much greater risk of developing asthma and allergic rhinitis. Infants may develop respiratory symptoms due to exposure to a specific type of fungal mold, called Penicillium. Signs that an infant may have mold-related respiratory problems include (but are not limited to) a persistent cough and wheeze. Increased exposure increases the probability of developing respiratory symptoms during their first year of life. As many as 21% of asthma cases may result from exposure to mold. Mold exposures have a variety of health effects depending on the person. Some people are more sensitive to mold than others. Exposure to mold can cause several health issues such as throat irritation, nasal stuffiness, eye irritation, cough, and wheezing, as well as skin irritation in some cases. Exposure to mold may also cause heightened sensitivity depending on the time and nature of exposure. People at higher risk for mold allergies are people with chronic lung illnesses and weak immune systems, which can often result in more severe reactions when exposed to mold.
There is sufficient evidence that damp indoor environments are correlated with upper respiratory tract symptoms such as coughing and wheezing in people with asthma. Flood-specific mold health effects Among children and adolescents, the most common health effect post-flooding was lower respiratory tract symptoms, though there was a lack of association with measurements of total fungi. Another study found that these respiratory symptoms were positively associated with exposure to water-damaged homes, where exposure included being inside without participating in clean-up. Despite lower respiratory effects among all children, there was a significant difference in health outcomes between children with pre-existing conditions and children without. Children with pre-existing conditions were at greater risk, which can likely be attributed to the greater disruption of care in the face of flooding and natural disaster. Although mold is the primary focus post-flooding for residents, the effects of dampness alone must also be considered. According to the Institute of Medicine, there is a significant association between dampness in the home and wheeze, cough, and upper respiratory symptoms. A later analysis determined that 30% to 50% of asthma-related health outcomes are associated not only with mold but also with dampness in buildings. While there is a proven correlation between mold exposure and the development of upper and lower respiratory syndromes, there are still fewer incidences of negative health effects than one might expect. Barbeau and colleagues suggested that studies do not show a greater impact from mold exposure for several reasons: 1) the types of health effects are not severe and are therefore not caught; 2) people whose homes have flooded find alternative housing to prevent exposure; 3) self-selection, in that healthier people participated in mold clean-up and were less likely to get sick; 4) exposures were time-limited as a result of remediation efforts; and 5) the lack of access to health care post-flooding may result in fewer illnesses being discovered and reported for their association with mold. There are also certain notable scientific limitations in studying the exposure effects of dampness and molds on individuals, because there are currently no known biomarkers that can prove that a person was exclusively exposed to molds. Thus, it is currently impossible to prove a causal relationship between mold exposure and specific symptoms. Mold-associated conditions Health problems associated with high levels of airborne mold spores include allergic reactions, asthma episodes, irritation of the eyes, nose and throat, sinus congestion, and other respiratory problems. Several studies and reviews have suggested that childhood exposure to dampness and mold might contribute to the development of asthma. For example, residents of homes with mold are at an elevated risk for both respiratory infections and bronchitis. When mold spores are inhaled by an immunocompromised individual, some mold spores may begin to grow on living tissue, attaching to cells along the respiratory tract and causing further problems. Generally, when this occurs, the illness is an epiphenomenon and not the primary pathology. Also, mold may produce mycotoxins, either before or after exposure to humans, potentially causing toxicity. Fungal infection A serious health threat from mold exposure for immunocompromised individuals is systemic fungal infection (systemic mycosis). 
Immunocompromised individuals exposed to high levels of mold, or individuals with chronic exposure, may become infected. Sinus and digestive tract infections are most common; lung and skin infections are also possible. Mycotoxins may or may not be produced by the invading mold. Dermatophytes are the parasitic fungi that cause skin infections such as athlete's foot and tinea cruris. Most dermatophyte fungi take the form of a mold, as opposed to a yeast, with an appearance (when cultured) that is similar to other molds. Opportunistic infection by molds such as Talaromyces marneffei and Aspergillus fumigatus is a common cause of illness and death among immunocompromised people, including people with AIDS or asthma. Mold-induced hypersensitivity The most common form of hypersensitivity is caused by direct exposure to inhaled mold spores (dead or alive) or hyphal fragments, which can lead to allergic asthma or allergic rhinitis. The most common effects are rhinorrhea (runny nose), watery eyes, coughing and asthma attacks. Another form of hypersensitivity is hypersensitivity pneumonitis. Exposure can occur at home, at work or in other settings. It is estimated that about 5% of people will have some airway symptoms due to allergic reactions to molds in their lifetimes. Hypersensitivity may also be a reaction toward an established fungal infection, as in allergic bronchopulmonary aspergillosis. Mycotoxin toxicity Some molds excrete toxic compounds called mycotoxins, secondary metabolites produced by fungi under certain environmental conditions. These environmental conditions affect the production of mycotoxins at the transcription level: temperature, water activity and pH strongly influence mycotoxin biosynthesis by increasing the level of transcription within the fungal spore. It has also been found that low levels of fungicides can boost mycotoxin synthesis. Certain mycotoxins can be harmful or lethal to humans and animals when exposure is high enough. Extreme exposure to very high levels of mycotoxins can lead to neurological problems and, in some cases, death; fortunately, such exposures rarely if ever occur in normal exposure scenarios, even in residences with serious mold problems. Prolonged exposure, such as daily workplace exposure, can be particularly harmful. It is thought that all molds may produce mycotoxins, and thus all molds may be potentially toxic if large enough quantities are ingested or a person is exposed to extreme quantities of mold. Mycotoxins are not produced all the time, but only under specific growing conditions, and they are harmful or lethal to humans and animals only when exposure is high enough. Mycotoxins can be found on the mold spore and mold fragments, and therefore they can also be found on the substrate upon which the mold grows. Routes of entry for these insults can include ingestion, dermal exposure, and inhalation. Aflatoxin is an example of a mycotoxin. It is a cancer-causing poison produced by certain fungi in or on foods and feeds, especially in field corn and peanuts. Exposure sources and prevention The primary sources of mold exposure are the indoor air of buildings with substantial mold growth and the ingestion of food with mold growth. Air While mold and related microbial agents can be found both indoors and outdoors, specific factors can lead to significantly higher indoor levels of these microbes, creating a potential health hazard. 
Several notable factors are water damage in buildings, the use of building materials that provide a suitable substrate and source of food to amplify mold growth, relative humidity, and energy-efficient building designs, which can prevent proper circulation of outside air and create a unique ecology in the built environment. A common source of mold hazards in the household is the placement of furniture against a wall, which prevents ventilation of the nearby wall surface; the simplest remedy in a home so affected is to move the furniture in question. More than half of adult workers in moldy or humid buildings suffer from nasal or sinus symptoms due to mold exposure. Prevention of mold exposure and its ensuing health issues begins with preventing mold growth in the first place by avoiding a mold-supporting environment. Extensive flooding and water damage can support extensive mold growth. Following hurricanes, homes with greater flood damage, especially those with more than three feet of indoor flooding, demonstrated far higher levels of mold growth compared with homes with little or no flooding. It is useful to perform an assessment of the location and extent of the mold hazard in a structure. Various practices of remediation can be followed to mitigate mold issues in buildings, the most important of which is to reduce moisture levels. Removal of affected materials after the source of moisture has been reduced or eliminated may be necessary, as some materials cannot be remediated. The sequence of mold growth prevention, assessment, and remediation is therefore essential in avoiding health issues arising from the presence of dampness and mold. Molds may excrete liquids or low-volatility gases, but the concentrations are so low that frequently they cannot be detected even with sensitive analytical sampling techniques. Sometimes these by-products are detectable by odor, in which case they are referred to as "ergonomic odors", meaning the odors are noticeable but do not indicate toxicologically significant exposures. Food Molds that are often found on meat and poultry include members of the genera Alternaria, Aspergillus, Botrytis, Cladosporium, Fusarium, Geotrichum, Mortierella, Mucor, Neurospora, Paecilomyces, Penicillium, and Rhizopus. Grain crops in particular incur considerable losses both in the field and in storage due to pathogens, post-harvest spoilage, and insect damage. A number of common microfungi are important agents of post-harvest spoilage, notably members of the genera Aspergillus, Fusarium, and Penicillium. A number of these produce mycotoxins (soluble, non-volatile toxins produced by a range of microfungi that demonstrate specific and potent toxic properties on human and animal cells) that can render foods unfit for consumption. When ingested, inhaled, or absorbed through skin, mycotoxins may cause or contribute to a range of effects, from reduced appetite and general malaise to acute illness or, in rare cases, death. Mycotoxins may also contribute to cancer. Dietary exposure to the mycotoxin aflatoxin B1, commonly produced by growth of the fungus Aspergillus flavus on improperly stored ground nuts in many areas of the developing world, is known to independently (and synergistically with hepatitis B virus) induce liver cancer. Mycotoxin-contaminated grain and other food products have a significant impact on human and animal health globally. According to the World Health Organization, roughly 25% of the world's food may be contaminated by mycotoxins. 
Mold exposure from food is generally prevented by consuming only food that has no mold growth on it. Mold growth on food can also be prevented in the first place by the same approach of moisture control, assessment, and remediation that prevents airborne exposure. It is especially useful to clean the inside of the refrigerator and to ensure dishcloths, towels, sponges, and mops are clean. Ruminants are considered to have increased resistance to some mycotoxins, presumably due to the superior mycotoxin-degrading capabilities of their gut microbiota. The passage of mycotoxins through the food chain may also have important consequences for human health. For example, in China in December 2011, high levels of the carcinogen aflatoxin M1 in Mengniu brand milk were found to be associated with the consumption of mold-contaminated feed by dairy cattle. Bedding Bacteria, fungi, allergens, and particle-bound semi-volatile organic compounds (SVOCs) can all be found in bedding and pillows, with possible consequences for human health given the high amount of exposure each day. Over 47 species of fungi have been identified in pillows, although the typical number of species found in a single pillow varied between four and sixteen. Compared to feather pillows, synthetic pillows typically display a slightly greater variety of fungal species and significantly higher levels of β-(1,3)-glucan, which can cause inflammatory responses. The authors concluded that these and related results suggest feather bedding might be a more appropriate choice for asthmatics than synthetics. Some newer bedding products incorporate silver nanoparticles due to their antibacterial, antifungal, and antiviral properties; however, the long-term safety of this additional exposure to nanoparticles is relatively unknown, and a conservative approach to the use of these products is recommended. Flooding Flooding in houses creates a unique opportunity for mold growth, which may contribute to adverse health effects in people exposed to the mold, especially children and adolescents. In a study on the health effects of mold exposure after hurricanes Katrina and Rita, the predominant types of mold were Aspergillus, Penicillium, and Cladosporium, with indoor spore counts ranging from 6,142 to 735,123 spores per cubic metre. Molds isolated following the flooding differed from the molds previously reported for non-water-damaged homes in the area. Further research found that homes with greater than three feet of indoor flooding demonstrated significantly higher levels of mold than those with little or no flooding. Mitigation Recommended strategies to prevent mold exposure include avoiding mold-contaminated environments; the use of personal protective equipment (PPE), including skin, eye, and respiratory protection; and environmental controls such as ventilation and dust suppression. When mold cannot be prevented, the CDC recommends a clean-up protocol: first, taking emergency action to stop water intrusion; second, determining the extent of water damage and mold contamination; and third, planning remediation activities such as establishing containment and protection for workers and occupants, eliminating water or moisture sources if possible, decontaminating or removing damaged materials and drying any wet materials, evaluating whether the space has been successfully remediated, and reassembling the space to control sources of moisture. 
History In 1698, the physician Sir John Floyer published the first edition of A Treatise of the Asthma, the first English textbook on the malady. In it, he describes how dampness and mold could trigger an asthmatic attack, specifically "damp houses and fenny [boggy] countries". He also writes of an asthmatic "who fell into a violent fit by going into a Wine-Cellar", presumably due to the "fumes" in the air. In the 1930s, mold was identified as the cause behind the mysterious deaths of farm animals in Russia and other countries. Stachybotrys chartarum was found growing on the wet grain used for animal feed. Illness and death also occurred in humans when starving peasants ate large quantities of rotten food grains and cereals heavily overgrown with the Stachybotrys mold. In the 1970s, building construction techniques changed in response to changing economic realities, including the energy crisis. As a result, homes and buildings became more airtight. Also, cheaper materials such as drywall came into common use. The newer building materials reduced the drying potential of the structures, making moisture problems more prevalent. This combination of increased moisture and suitable substrates contributed to increased mold growth inside buildings. Today, the US Food and Drug Administration and the agriculture industry closely monitor mold and mycotoxin levels in grains and foodstuffs to keep the contamination of animal feed and human food supplies below specific levels. In 2005, Diamond Pet Foods, a US pet food manufacturer, experienced a significant rise in the number of corn shipments containing elevated levels of aflatoxin. This mold toxin eventually made it into the pet food supply, and dozens of dogs and cats died before the company was forced to recall affected products. In November 2022, a UK coroner recorded that a two-year-old child, Awaab Ishak from Rochdale, England, died in 2020 of "acute airway oedema with severe granulomatous tracheobronchitis due to environmental mould exposure" in his home. While not specified in the coroner's report or outputs from official proceedings, the death was widely reported as being due specifically to 'toxic' or 'toxic black' mold. The finding led to a 2023 change in UK law, known as Awaab's Law, which will require social housing providers to remedy reported damp and mould within certain time limits. See also Environmental engineering Environmental health Occupational asthma Occupational safety and health References Further reading External links CDC.gov Mold US EPA: Mold Information – U.S. Environmental Protection Agency US EPA: EPA Publication #402-K-02-003 "A Brief Guide to Mold, Moisture, and Your Home" NIBS: Whole Building Design Guide: Air Decontamination NPIC: Mold Pest Control Information – National Pesticide Information Center Mycotoxins in grains and the food supply: indianacrop.org cropwatch.unl.edu agbiopubs.sdstate.edu (PDF) Building biology Fungi and humans Environmental engineering Toxic effects of substances chiefly nonmedicinal as to source Industrial hygiene Building defects Environmental law Product liability Occupational safety and health Indoor air pollution
Mold health issues
[ "Chemistry", "Materials_science", "Engineering", "Biology", "Environmental_science" ]
4,100
[ "Humans and other species", "Fungi", "Toxicology", "Building engineering", "Chemical engineering", "Environmental engineering", "Fungi and humans", "Civil engineering", "Building defects", "Toxic effects of substances chiefly nonmedicinal as to source", "Mechanical failure", "Building biology"...
42,739
https://en.wikipedia.org/wiki/Bubble%20fusion
Bubble fusion is the non-technical name for a nuclear fusion reaction hypothesized to occur inside extraordinarily large collapsing gas bubbles created in a liquid during acoustic cavitation. The more technical name is sonofusion. The term was coined in 2002 with the release of a report by Rusi Taleyarkhan and collaborators that claimed to have observed evidence of sonofusion. The claim was quickly surrounded by controversy, including allegations ranging from experimental error to academic fraud. Subsequent publications claiming independent verification of sonofusion were also highly controversial. Eventually, an investigation by Purdue University found that Taleyarkhan had engaged in falsification of independent verification, and had included a student as an author on a paper when he had not participated in the research. He was subsequently stripped of his professorship. One of his funders, the Office of Naval Research, reviewed the Purdue report and barred him from federal funding for 28 months. Original experiments US patent 4,333,796, filed by Hugh Flynn in 1978, appears to be the earliest documented reference to a sonofusion-type reaction. In the March 8, 2002 issue of the peer-reviewed journal Science, Rusi P. Taleyarkhan and colleagues at the Oak Ridge National Laboratory (ORNL) reported that acoustic cavitation experiments conducted with deuterated acetone (C3D6O) showed measurements of tritium and neutron output consistent with the occurrence of fusion. The neutron emission was also reported to be coincident with the sonoluminescence pulse, a key indicator that its source was fusion caused by the heat and pressure inside the collapsing bubbles. Oak Ridge failed replication The results were so startling that the Oak Ridge National Laboratory asked two independent researchers, D. Shapira and M. J. Saltmarsh, to repeat the experiment using more sophisticated neutron detection equipment. They reported that the neutron release was consistent with random coincidence. A rebuttal by Taleyarkhan and the other authors of the original report argued that the Shapira and Saltmarsh report failed to account for significant differences in experimental setup, including over an inch of shielding between the neutron detector and the sonoluminescing acetone. According to Taleyarkhan et al., when those differences were properly considered, the results were consistent with fusion. As early as 2002, while experimental work was still in progress, Aaron Galonsky of Michigan State University, in a letter to the journal Science, expressed doubts about the claim made by the Taleyarkhan team. In Galonsky's opinion, the observed neutrons were too high in energy to be from a deuterium-deuterium (d-d) fusion reaction. In their response (published on the same page), the Taleyarkhan team provided detailed counter-arguments and concluded that the energy was "reasonably close" to that which was expected from a fusion reaction. In February 2005 the documentary series Horizon commissioned two leading sonoluminescence researchers, Seth Putterman and Kenneth S. Suslick, to reproduce Taleyarkhan's work. Using similar acoustic parameters, deuterated acetone, similar bubble nucleation, and a much more sophisticated neutron detection device, the researchers could find no evidence of a fusion reaction. Subsequent reports of replication In 2004, new reports of bubble fusion were published by the Taleyarkhan group, claiming that the results of previous experiments had been replicated under more stringent experimental conditions. 
These results differed from the original results in that fusion was claimed to occur over longer times than previously reported. The original report claimed neutron emission only from the initial bubble collapse following bubble nucleation, whereas this report claimed neutron emission many acoustic cycles later. In July 2005, two of Taleyarkhan's students at Purdue University published evidence confirming the previous result. They used the same acoustic chamber, the same deuterated acetone fluid and a similar bubble nucleation system. In this report, no neutron-sonoluminescence coincidence was attempted. An article in Nature raised issues about the validity of the research and complaints from his Purdue colleagues (see the full analysis elsewhere on this page). Charges of misconduct were raised, and Purdue University opened an investigation. It concluded in 2008 that Taleyarkhan's name should have appeared in the author list because of his deep involvement in many steps of the research, that he had added one author who had not really participated in the paper just to overcome the criticism of one reviewer, and that this was part of "an effort to falsify the scientific record by assertion of independent confirmation". The investigation did not address the validity of the experimental results. In January 2006, a paper published in the journal Physical Review Letters by Taleyarkhan in collaboration with researchers from Rensselaer Polytechnic Institute reported statistically significant evidence of fusion. In November 2006, in the midst of accusations concerning Taleyarkhan's research standards, two different scientists visited the meta-stable fluids research lab at Purdue University to measure neutrons using Taleyarkhan's equipment. Dr. Edward R. Forringer and undergraduates David Robbins and Jonathan Martin of LeTourneau University presented two papers at the American Nuclear Society Winter Meeting that reported replication of neutron emission. Their experimental setup was similar to previous experiments in that it used a mixture of deuterated acetone, deuterated benzene, tetrachloroethylene and uranyl nitrate. Notably, however, it operated without an external neutron source and used two types of neutron detectors. They claimed that a liquid scintillation detector measured neutron levels at 8 standard deviations above the background level, while plastic detectors measured levels at 3.8 standard deviations above the background. When the same experiment was performed with a non-deuterated control liquid, the measurements were within one standard deviation of background, indicating that neutron production had occurred only during cavitation of the deuterated liquid. William M. Bugg, emeritus physics professor at the University of Tennessee, also traveled to Taleyarkhan's lab to repeat the experiment with his equipment, and also reported neutron emission using plastic neutron detectors. Taleyarkhan claimed these visits counted as independent replications by experts, but Forringer later acknowledged that he was not an expert, and Bugg later said that Taleyarkhan had performed the experiments and he had only watched. 
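The significance figures quoted above come from counting statistics: an excess of detected neutrons is judged by how many standard deviations it lies above the expected background, where the standard deviation of a Poisson-distributed count is the square root of its mean. A minimal sketch of that calculation, using purely illustrative count numbers (the actual counts are not given in the text above):

```python
import math

def excess_sigma(signal_counts, expected_background):
    """How many standard deviations a measured count lies above the
    expected background, assuming Poisson statistics (sigma = sqrt(N))."""
    if expected_background <= 0:
        raise ValueError("expected background must be positive")
    return (signal_counts - expected_background) / math.sqrt(expected_background)

# Illustrative numbers only, not data from the experiments described above:
# with an expected background of 400 counts (sigma = 20), a measurement of
# 560 counts is an 8-sigma excess, while 408 counts is well within 1 sigma.
print(excess_sigma(560, 400))  # -> 8.0
print(excess_sigma(408, 400))  # -> 0.4
```

A "control within one standard deviation of background", as reported for the non-deuterated liquid, corresponds to the second case: a fluctuation of the size expected by chance.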
Nature report In March 2006, Nature published a special report that called into question the validity of the results of the Purdue experiments. The report quotes Brian Naranjo of the University of California, Los Angeles to the effect that the neutron energy spectrum reported in the 2006 paper by Taleyarkhan et al. was statistically inconsistent with neutrons produced by the proposed fusion reaction and instead highly consistent with neutrons produced by the radioactive decay of californium-252, an isotope commonly used as a laboratory neutron source. The response of Taleyarkhan et al., published in Physical Review Letters, attempted to refute Naranjo's hypothesis as to the source of the neutrons detected. Tsoukalas, head of the School of Nuclear Engineering at Purdue, and several of his colleagues had convinced Taleyarkhan to move to Purdue and attempt a joint replication. In the 2006 Nature report they detailed several troubling issues encountered when trying to collaborate with Taleyarkhan. He reported positive results from a certain set of raw data, but his colleagues had also examined that set and found that it contained only negative results. He never showed his colleagues the raw data corresponding to the positive results, despite several requests. He moved the equipment from a shared laboratory to his own laboratory, thus impeding review by his colleagues, and he gave no advance warning or explanation for the move. Taleyarkhan convinced his colleagues that they should not publish a paper with their negative results. Taleyarkhan then insisted that the university's press release present his experiment as "peer-reviewed" and "independent", when the co-authors were working in his laboratory under his supervision and his peers in the faculty were not allowed to review the data. In summary, Taleyarkhan's colleagues at Purdue said he placed obstacles in the way of peer review of his experiments, and they had serious doubts about the validity of the research. Nature also revealed that the process of anonymous peer review had not been followed, and that the journal Nuclear Engineering and Design was not independent of the authors: Taleyarkhan was co-editor of the journal, and the paper was peer-reviewed only by his co-editor, with Taleyarkhan's knowledge. In 2002, Taleyarkhan had filed a patent application on behalf of the United States Department of Energy, while working in Oak Ridge. Nature reported that the patent had been rejected in 2005 by the US Patent Office. The examiner called the experiment a variation of discredited cold fusion, found that there was "no reputable evidence of record to support any allegations or claims that the invention is capable of operating as indicated", and found that there was not enough detail for others to replicate the invention. The field of fusion had suffered from many flawed claims, so the examiner asked for additional proof that the radiation was generated from fusion and not from other sources. An appeal was not filed because the Department of Energy had dropped the claim in December 2005. Doubts prompt investigation Doubts among Purdue University's Nuclear Engineering faculty as to whether the positive results reported from sonofusion experiments conducted there were truthful prompted the university to initiate a review of the research, conducted by Purdue's Office of the Vice President for Research. In a March 9, 2006 article entitled "Evidence for bubble fusion called into question", Nature interviewed several of Taleyarkhan's colleagues who suspected something was amiss. On February 7, 2007, the Purdue University administration determined that "the evidence does not support the allegations of research misconduct and that no further investigation of the allegations is warranted". 
Their report also stated that "vigorous, open debate of the scientific merits of this new technology is the most appropriate focus going forward." In order to verify that the investigation had been properly conducted, House Representative Brad Miller requested full copies of its documents and reports by March 30, 2007. His congressional report concluded that "Purdue deviated from its own procedures in investigating this case and did not conduct a thorough investigation"; in response, Purdue announced that it would re-open its investigation. In June 2008, a multi-institutional team including Taleyarkhan published a paper in Nuclear Engineering and Design to "clear up misconceptions generated by a webposting of UCLA which served as the basis for the Nature article of March 2006", according to a press release. On July 18, 2008, Purdue University announced that a committee with members from five institutions had investigated 12 allegations of research misconduct against Rusi Taleyarkhan. It concluded that two allegations were founded: that Taleyarkhan had claimed independent confirmation of his work when in reality the apparent confirmations were performed by his former students and were not as "independent" as he implied, and that Taleyarkhan had added to one of his papers the name of a colleague who had not actually been involved in the research ("the sole apparent motivation for the addition of Mr. Bugg was a desire to overcome a reviewer's criticism", the report concluded). Taleyarkhan's appeal of the report's conclusions was rejected. He said the two allegations of misconduct were trivial administrative issues that had nothing to do with the discovery of bubble nuclear fusion or the underlying science, and that "all allegations of fraud and fabrication have been dismissed as invalid and without merit — thereby supporting the underlying science and experimental data as being on solid ground". A researcher questioned by the LA Times said that the report had not clarified whether bubble fusion was real or not, but that the low quality of the papers and the doubts cast by the report had destroyed Taleyarkhan's credibility with the scientific community. On August 27, 2008, he was stripped of his named Arden Bement Jr. Professorship and forbidden to be a thesis advisor for graduate students for at least the next three years. Despite the findings against him, Taleyarkhan received a $185,000 grant from the National Science Foundation between September 2008 and August 2009 to investigate bubble fusion. In 2009 the Office of Naval Research debarred him for 28 months, until September 2011, from receiving U.S. federal funding. During that period his name was listed in the 'Excluded Parties List' to prevent him from receiving further grants from any government agency. 
See also Cold fusion List of energy topics Mechanism of sonoluminescence References Further reading "Bubble Fusion Research Under Scrutiny", IEEE Spectrum, May 2006 "Sonofusion Experiment Produces Results Without External Neutron Source" PhysOrg.com January 27, 2006 "Bubble fusion: silencing the hype", Nature online, March 8, 2006 — Nature reveals serious doubts over reports of fusion in collapsing bubbles (subscription required) "Fusion controversy rekindled" BBC News, March 5, 2002 "Fusion experiment disappoints" BBC News, July 2, 2002 What's New, March 10, 2006 – failed replications "Practical Fusion, or Just a Bubble?", Kenneth Chang, The New York Times, February 27, 2007 Cold fusion Bubbles (physics) Scientific misconduct incidents 2002 in science 2006 in science
Bubble fusion
[ "Physics", "Chemistry" ]
2,717
[ "Bubbles (physics)", "Foams", "Cold fusion", "Nuclear physics", "Nuclear fusion", "Fluid dynamics" ]
42,806
https://en.wikipedia.org/wiki/Cyclone
In meteorology, a cyclone is a large air mass that rotates around a strong center of low atmospheric pressure, counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere as viewed from above (opposite to an anticyclone). Cyclones are characterized by inward-spiraling winds that rotate about a zone of low pressure. The largest low-pressure systems are polar vortices and extratropical cyclones of the largest scale (the synoptic scale). Warm-core cyclones such as tropical cyclones and subtropical cyclones also lie within the synoptic scale. Mesocyclones, tornadoes, and dust devils lie within the smaller mesoscale. Upper-level cyclones can exist without the presence of a surface low, and can pinch off from the base of the tropical upper tropospheric trough during the summer months in the Northern Hemisphere. Cyclones have also been seen on extraterrestrial planets, such as Mars, Jupiter, and Neptune. Cyclogenesis is the process of cyclone formation and intensification. Extratropical cyclones begin as waves in large regions of enhanced mid-latitude temperature contrasts called baroclinic zones. These zones contract and form weather fronts as the cyclonic circulation closes and intensifies. Later in their life cycle, extratropical cyclones occlude as cold air masses undercut the warmer air and become cold-core systems. A cyclone's track is guided over the course of its 2-to-6-day life cycle by the steering flow of the subtropical jet stream. Weather fronts mark the boundary between two masses of air of different temperature, humidity, and density, and are associated with the most prominent meteorological phenomena. Strong cold fronts typically feature narrow bands of thunderstorms and severe weather, and may on occasion be preceded by squall lines or dry lines. Such fronts form west of the circulation center and generally move from west to east; warm fronts form east of the cyclone center and are usually preceded by stratiform precipitation and fog. Warm fronts move poleward ahead of the cyclone path. Occluded fronts form late in the cyclone life cycle near the center of the cyclone and often wrap around the storm center. Tropical cyclogenesis describes the process of development of tropical cyclones. Tropical cyclones form due to latent heat driven by significant thunderstorm activity, and are warm core. Cyclones can transition between extratropical, subtropical, and tropical phases. Mesocyclones form as warm-core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. In the Atlantic and the northeastern Pacific oceans, a tropical cyclone is generally referred to as a hurricane (from the name of the ancient Central American deity of wind, Huracan), in the Indian and south Pacific oceans it is called a cyclone, and in the northwestern Pacific it is called a typhoon. The growth of instability in the vortices is not universal: for example, the size, intensity, moist convection, surface evaporation, and the value of potential temperature at each potential height can all affect the nonlinear evolution of a vortex. Nomenclature Henry Piddington published 40 papers dealing with tropical storms from Calcutta between 1836 and 1855 in The Journal of the Asiatic Society. He also coined the term cyclone, meaning the coil of a snake. In 1842, he published his landmark thesis, Laws of the Storms. 
Structure There are a number of structural characteristics common to all cyclones. A cyclone is a low-pressure area, and a cyclone's center (often known in a mature tropical cyclone as the eye) is the area of lowest atmospheric pressure in the region. Near the center, the pressure gradient force (arising from the difference between the pressure at the center of the cyclone and the pressure outside it) and the force from the Coriolis effect must be in approximate balance, or the cyclone would collapse on itself as a result of the difference in pressure. Because of the Coriolis effect, the wind flow around a large cyclone is counterclockwise in the Northern Hemisphere and clockwise in the Southern Hemisphere. In the Northern Hemisphere, the fastest winds relative to the surface of the Earth therefore occur on the eastern side of a northward-moving cyclone and on the northern side of a westward-moving one; the opposite occurs in the Southern Hemisphere. In contrast to low-pressure systems, the wind flow around high-pressure systems is clockwise (anticyclonic) in the Northern Hemisphere and counterclockwise in the Southern Hemisphere. 
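The force balance described above can be written compactly. The relation below is the standard textbook gradient wind balance rather than a formula quoted in this article, with symbols following the usual meteorological conventions. For steady circular flow around a low-pressure center,

```latex
% v     : tangential wind speed
% r     : distance from the cyclone center
% f     : Coriolis parameter, f = 2*Omega*sin(phi) at latitude phi
% rho   : air density
% dp/dr : radial pressure gradient (pressure increases outward from a low)
\[
\underbrace{\frac{v^{2}}{r}}_{\text{centrifugal}}
\;+\;
\underbrace{f\,v}_{\text{Coriolis}}
\;=\;
\underbrace{\frac{1}{\rho}\,\frac{\partial p}{\partial r}}_{\text{pressure gradient force}}
\]
```

Far from the center the centrifugal term is small and the flow is nearly geostrophic; near the core of an intense cyclone it dominates, which is why the balance of forces, rather than any single force, determines the wind around the low.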
Formation Cyclogenesis is the development or strengthening of cyclonic circulation in the atmosphere. It is an umbrella term for several different processes that all result in the development of some sort of cyclone, and it can occur at various scales, from the microscale to the synoptic scale. Extratropical cyclones begin as waves along weather fronts before occluding later in their life cycle as cold-core systems. However, some intense extratropical cyclones can become warm-core systems when a warm seclusion occurs. Tropical cyclones form as a result of significant convective activity, and are warm core. Mesocyclones form as warm-core cyclones over land, and can lead to tornado formation. Waterspouts can also form from mesocyclones, but more often develop from environments of high instability and low vertical wind shear. Cyclolysis, the weakening or dissipation of cyclonic circulation, is the opposite of cyclogenesis; the high-pressure counterpart of cyclogenesis, which deals with the formation of high-pressure areas, is anticyclogenesis. A surface low can form in a variety of ways. Topography can create a surface low, and mesoscale convective systems can spawn surface lows that are initially warm-core. A disturbance can grow into a wave-like formation along a front, with the low positioned at the crest. Around the low, the flow becomes cyclonic. This rotational flow moves polar air towards the equator on the west side of the low, while warm air moves towards the pole on the east side. A cold front appears on the west side, while a warm front forms on the east side. Usually, the cold front moves at a quicker pace than the warm front and "catches up" with it due to the slow erosion of the higher-density air mass out ahead of the cyclone. In addition, the higher-density air mass sweeping in behind the cyclone strengthens the higher-pressure, denser cold air mass. The cold front overtakes the warm front and reduces the length of the warm front. At this point an occluded front forms, where the warm air mass is pushed upwards into a trough of warm air aloft, which is also known as a trowal. Tropical cyclogenesis is the development and strengthening of a tropical cyclone. The mechanisms by which tropical cyclogenesis occurs are distinctly different from those that produce mid-latitude cyclones. Tropical cyclogenesis, the development of a warm-core cyclone, begins with significant convection in a favorable atmospheric environment. There are six main requirements for tropical cyclogenesis: sufficiently warm sea surface temperatures; atmospheric instability; high humidity in the lower to middle levels of the troposphere; enough Coriolis force to develop a low-pressure center; a preexisting low-level focus or disturbance; and low vertical wind shear. An average of 86 tropical cyclones of tropical storm intensity form annually worldwide, with 47 reaching hurricane/typhoon strength, and 20 becoming intense tropical cyclones (at least Category 3 intensity on the Saffir–Simpson hurricane scale). 
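As a rough illustration of how the six ingredients above are screened for, the sketch below tests a set of environmental values against commonly cited approximate thresholds. The numeric thresholds are textbook rules of thumb assumed for illustration, not values taken from this article, and operational genesis indices are considerably more elaborate:

```python
def genesis_screen(sst_c, lat_deg, shear_ms, mid_rh_pct,
                   has_disturbance, is_unstable):
    """Rough check of the six classic tropical-cyclogenesis ingredients.
    Thresholds are illustrative rules of thumb, not operational criteria."""
    checks = {
        "warm sea surface (>= ~26.5 C)":        sst_c >= 26.5,
        "atmospheric instability":              is_unstable,
        "moist mid-troposphere (RH >= ~50%)":   mid_rh_pct >= 50.0,
        "enough Coriolis (|lat| >= ~5 deg)":    abs(lat_deg) >= 5.0,
        "preexisting low-level disturbance":    has_disturbance,
        "low vertical wind shear (<= ~10 m/s)": shear_ms <= 10.0,
    }
    return all(checks.values()), checks

favorable, detail = genesis_screen(sst_c=28.5, lat_deg=12.0, shear_ms=6.0,
                                   mid_rh_pct=65.0, has_disturbance=True,
                                   is_unstable=True)
print("favorable" if favorable else "unfavorable")  # -> favorable
for ingredient, met in detail.items():
    print(f"  {ingredient}: {'met' if met else 'not met'}")
```

Note that all six conditions must hold simultaneously: for instance, a warm, moist, unstable environment right on the equator still fails the screen because the Coriolis force there is too weak to organize rotation.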
Synoptic scale The following types of cyclones are identifiable in synoptic charts. Surface-based types There are three main types of surface-based cyclones: extratropical cyclones, subtropical cyclones, and tropical cyclones. Extratropical cyclone An extratropical cyclone is a synoptic-scale low-pressure weather system that does not have tropical characteristics, as it is connected with fronts and with horizontal (rather than vertical) gradients in temperature and dew point, otherwise known as "baroclinic zones". "Extratropical" is applied to cyclones outside the tropics, in the middle latitudes. These systems may also be described as "mid-latitude cyclones" due to their area of formation, or as "post-tropical cyclones" when a tropical cyclone has moved beyond the tropics (extratropical transition). They are often described as "depressions" or "lows" by weather forecasters and the general public. These are the everyday phenomena that, along with anticyclones, drive weather over much of the Earth. Although extratropical cyclones are almost always classified as baroclinic since they form along zones of temperature and dewpoint gradient within the westerlies, they can sometimes become barotropic late in their life cycle, when the temperature distribution around the cyclone becomes fairly uniform with radius. An extratropical cyclone can transform into a subtropical storm, and from there into a tropical cyclone, if it dwells over waters warm enough to heat its core and, as a result, develops central convection. A particularly intense type of extratropical cyclone that strikes during winter is known colloquially as a nor'easter. Polar low A polar low is a small-scale, short-lived atmospheric low-pressure system (depression) found over ocean areas poleward of the main polar front in both the Northern and Southern Hemispheres. Polar lows were first identified on the meteorological satellite imagery that became available in the 1960s, which revealed many small-scale cloud vortices at high latitudes. The most active polar lows are found over certain ice-free maritime areas in or near the Arctic during the winter, such as the Norwegian Sea, Barents Sea, Labrador Sea and Gulf of Alaska. Polar lows dissipate rapidly when they make landfall. Antarctic systems tend to be weaker than their northern counterparts, since the air-sea temperature differences around the continent are generally smaller. However, vigorous polar lows can be found over the Southern Ocean. During winter, when sufficiently cold cold-core lows in the mid-levels of the troposphere move over open waters, deep convection forms, which makes polar low development possible. The systems are usually small in horizontal extent and exist for no more than a couple of days. They are part of the larger class of mesoscale weather systems. Polar lows can be difficult to detect using conventional weather reports and are a hazard to high-latitude operations, such as shipping and gas and oil platforms. Polar lows have been referred to by many other terms, such as polar mesoscale vortex, Arctic hurricane, Arctic low, and cold air depression. Today the term is usually reserved for the more vigorous systems that have near-surface winds of at least 17 m/s. Subtropical A subtropical cyclone is a weather system that has some characteristics of a tropical cyclone and some characteristics of an extratropical cyclone. They can form between the equator and the 50th parallel. As early as the 1950s, meteorologists were unclear whether they should be characterized as tropical cyclones or extratropical cyclones, and used terms such as quasi-tropical and semi-tropical to describe the cyclone hybrids. By 1972, the National Hurricane Center officially recognized this cyclone category. Subtropical cyclones began to receive names off the official tropical cyclone list in the Atlantic Basin in 2002. They have broad wind patterns with maximum sustained winds located farther from the center than in typical tropical cyclones, and exist in areas of weak to moderate temperature gradient. Since they form from extratropical cyclones, which have colder temperatures aloft than are normally found in the tropics, the sea surface temperature required for their formation is around 23 degrees Celsius (73 °F), which is three degrees Celsius (5 °F) lower than for tropical cyclones. This means that subtropical cyclones are more likely to form outside the traditional bounds of the hurricane season. Although subtropical storms rarely have hurricane-force winds, they may become tropical in nature as their cores warm. Tropical A tropical cyclone is a storm system characterized by a low-pressure center and numerous thunderstorms that produce strong winds and flooding rain. A tropical cyclone feeds on the heat released when moist air rises, resulting in condensation of the water vapour contained in the moist air. Tropical cyclones are fueled by a different heat mechanism than other cyclonic windstorms such as nor'easters, European windstorms, and polar lows, leading to their classification as "warm core" storm systems. The term "tropical" refers both to the geographic origin of these systems, which form almost exclusively in tropical regions of the globe, and to their dependence on maritime tropical air masses for their formation. The term "cyclone" refers to the storms' cyclonic nature, with counterclockwise rotation in the Northern Hemisphere and clockwise rotation in the Southern Hemisphere. Depending on their location and strength, tropical cyclones are referred to by other names, such as hurricane, typhoon, tropical storm, cyclonic storm, tropical depression, or simply as a cyclone. While tropical cyclones can produce extremely powerful winds and torrential rain, they are also able to produce high waves and a damaging storm surge. Their winds increase the wave size, and in so doing they draw more heat and moisture into their system, thereby increasing their strength. They develop over large bodies of warm water, and hence lose their strength if they move over land. This is the reason coastal regions can receive significant damage from a tropical cyclone, while inland regions are relatively safe from strong winds. Heavy rains, however, can produce significant flooding inland. 
Storm surges are rises in sea level caused by the reduced pressure of the core, which in effect "sucks" the water upward, and by winds that in effect "pile" the water up. Storm surges can produce extensive coastal flooding reaching well inland from the coastline. Although their effects on human populations can be devastating, tropical cyclones can also relieve drought conditions. They also carry heat and energy away from the tropics and transport it toward temperate latitudes, which makes them an important part of the global atmospheric circulation mechanism. As a result, tropical cyclones help to maintain equilibrium in the Earth's troposphere. Many tropical cyclones develop when the atmospheric conditions around a weak disturbance in the atmosphere are favorable. Others form when other types of cyclones acquire tropical characteristics. Tropical systems are then moved by steering winds in the troposphere; if the conditions remain favorable, the tropical disturbance intensifies, and can even develop an eye. On the other end of the spectrum, if the conditions around the system deteriorate or the tropical cyclone makes landfall, the system weakens and eventually dissipates. A tropical cyclone can become extratropical as it moves toward higher latitudes if its energy source changes from heat released by condensation to differences in temperature between air masses. A tropical cyclone is usually not considered to become subtropical during its extratropical transition. Upper level types Polar cyclone A polar, sub-polar, or Arctic cyclone (also known as a polar vortex) is a vast area of low pressure that strengthens in the winter and weakens in the summer. A polar cyclone is a low-pressure weather system of very large horizontal extent in which the air circulates in a counterclockwise direction in the Northern Hemisphere, and in a clockwise direction in the Southern Hemisphere. The Coriolis acceleration acting on the air masses moving poleward at high altitude causes a counterclockwise circulation at high altitude. The poleward movement of air originates from the air circulation of the polar cell. The polar cyclone is not driven by convection, as tropical cyclones are, nor by cold and warm air mass interactions, as extratropical cyclones are, but is an artifact of the global air movement of the polar cell. The base of the polar cyclone is in the mid to upper troposphere. In the Northern Hemisphere, the polar cyclone has two centers on average: one lies near Baffin Island and the other over northeast Siberia. In the Southern Hemisphere, it tends to be located near the edge of the Ross Ice Shelf, near 160° west longitude. When the polar vortex is strong, its effect can be felt at the surface as a westerly wind (toward the east). When the polar cyclone is weak, significant cold outbreaks occur. TUTT cell Under specific circumstances, upper-level cold lows can break off from the base of the tropical upper tropospheric trough (TUTT), which is located mid-ocean in the Northern Hemisphere during the summer months. These upper-tropospheric cyclonic vortices, also known as TUTT cells or TUTT lows, usually move slowly from east-northeast to west-southwest, and their bases generally do not extend into the lower troposphere. A weak inverted surface trough within the trade winds is generally found underneath them, and they may also be associated with broad areas of high-level clouds. Downward development results in an increase of cumulus clouds and the appearance of a surface vortex. In rare cases, they become warm-core tropical cyclones. 
Upper cyclones and the upper troughs that trail tropical cyclones can cause additional outflow channels and aid in their intensification. Developing tropical disturbances can help create or deepen upper troughs or upper lows in their wake due to the outflow jet emanating from the developing tropical disturbance/cyclone. Mesoscale The following types of cyclones are not identifiable in synoptic charts. Mesocyclone A mesocyclone is a vortex of air within a convective storm, on the mesoscale of meteorology. Air rises and rotates around a vertical axis, usually in the same direction as low-pressure systems in both the Northern and Southern Hemispheres. Mesocyclones are most often cyclonic, that is, associated with a localized low-pressure region within a supercell. Such storms can feature strong surface winds and severe hail. Mesocyclones often occur together with updrafts in supercells, where tornadoes may form. About 1,700 mesocyclones form annually across the United States, but only half produce tornadoes. Tornado A tornado is a violently rotating column of air that is in contact with both the surface of the earth and a cumulonimbus cloud or, in rare cases, the base of a cumulus cloud. Tornadoes are also referred to as twisters, a colloquial term in America, or as cyclones, although in meteorology the word cyclone is used in a wider sense, to name any closed low-pressure circulation. Dust devil A dust devil is a strong, well-formed, and relatively long-lived whirlwind, ranging from small (half a metre wide and a few metres tall) to large (more than 10 metres wide and more than 1000 metres tall). The primary vertical motion is upward. Dust devils are usually harmless, but can on rare occasions grow large enough to pose a threat to both people and property. Waterspout A waterspout is a columnar vortex forming over water that is, in its most common form, a non-supercell tornado over water that is connected to a cumuliform cloud. While it is often weaker than most of its land counterparts, stronger versions spawned by mesocyclones do occur. Steam devil A steam devil is a gentle vortex over calm water or wet land made visible by rising water vapour. Fire whirl A fire whirl – also colloquially known as a fire devil, fire tornado, firenado, or fire twister – is a whirlwind induced by a fire and often made up of flame or ash. Other planets Cyclones are not unique to Earth. Cyclonic storms are common on giant planets, such as the Small Dark Spot on Neptune, which is about one third the diameter of the Great Dark Spot and received the nickname "Wizard's Eye" because it looks like an eye. This appearance is caused by a white cloud in the middle of the Wizard's Eye. Mars has also exhibited cyclonic storms. Jovian storms like the Great Red Spot are often mistakenly described as giant hurricanes or cyclonic storms; however, this is inaccurate, as the Great Red Spot is in fact the inverse phenomenon, an anticyclone. See also Tropical cyclone Subtropical cyclone Extratropical cyclone Tornado Storm Atlantic hurricane Australian region tropical cyclone Space hurricane Space tornado References External links Current map of global mean sea-level pressure Meteorological phenomena Tropical cyclone meteorology Cyclone Weather hazards Vortices
Cyclone
[ "Physics", "Chemistry", "Mathematics" ]
4,172
[ "Physical phenomena", "Earth phenomena", "Vortices", "Weather hazards", "Weather", "Meteorological phenomena", "Dynamical systems", "Fluid dynamics" ]
42,882
https://en.wikipedia.org/wiki/Cosmogony
Cosmogony is any model concerning the origin of the cosmos or the universe. Overview Scientific theories In astronomy, cosmogony is the study of the origin of particular astrophysical objects or systems, and is most commonly used in reference to the origin of the universe, the Solar System, or the Earth–Moon system. The prevalent cosmological model of the early development of the universe is the Big Bang theory. Sean M. Carroll, who specializes in theoretical cosmology and field theory, explains two competing explanations for the origins of the singularity, which is the center of a space in which a characteristic is limitless (one example is the singularity of a black hole, where gravity is the characteristic that becomes infinite). It is generally thought that the universe began at a point of singularity, but among modern cosmologists and physicists a singularity usually represents a lack of understanding, and in the case of cosmology and cosmogony it requires a theory of quantum gravity to understand. On this view, the expansion of the universe from such an initial state is what is colloquially known as the Big Bang, which marked the beginning of the universe. The other explanation, held by proponents such as Stephen Hawking, asserts that time itself emerged along with the universe. This assertion implies that the universe does not have a beginning, as time did not exist "prior" to the universe. Hence, it is unclear whether properties such as space or time emerged with the singularity and the known universe. Despite the research, there is currently no theoretical model that explains the earliest moments of the universe's existence (during the Planck epoch) due to a lack of a testable theory of quantum gravity. Nevertheless, researchers of string theory, of its extensions (such as M-theory), and of loop quantum cosmology, like Barton Zwiebach and Washington Taylor, have proposed solutions to assist in the explanation of the universe's earliest moments. Cosmogonists have only tentative theories for the early stages of the universe and its beginning. The proposed theoretical scenarios include string theory, M-theory, the Hartle–Hawking initial state, the emergent universe, the string landscape, cosmic inflation, the Big Bang, and the ekpyrotic universe. Some of these proposed scenarios, such as string theory, are mutually compatible, whereas others are not. Mythology In mythology, creation or cosmogonic myths are narratives describing the beginning of the universe or cosmos. Some methods of the creation of the universe in mythology include: the will or action of a supreme being or beings, the process of metamorphosis, the copulation of female and male deities, creation from chaos, or creation via a cosmic egg. Creation myths may be etiological, attempting to provide explanations for the origin of the universe. For instance, Eridu Genesis, the oldest known creation myth, contains an account of the creation of the world in which the universe was created out of a primeval sea (Abzu). Creation myths vary, but they may share similar deities or symbols. For instance, the ruler of the gods in Greek mythology, Zeus, is similar to the ruler of the gods in Roman mythology, Jupiter. Another example is the ruler of the gods in Tagalog mythology, Bathala, who is similar to various rulers of certain pantheons within Philippine mythology, such as the Bisaya's Kaptan. Compared with cosmology In the humanities, the distinction between cosmogony and cosmology is blurred. 
For example, in theology, the cosmological argument for the existence of God (as a pre-cosmic, cosmogonic bearer of personhood) is an appeal to ideas concerning the origin of the universe and is thus cosmogonical. Some religious cosmogonies have an impersonal first cause (for example Taoism). However, in astronomy, cosmogony can be distinguished from cosmology, which studies the universe and its existence but does not necessarily inquire into its origins. There is therefore a scientific distinction between cosmological and cosmogonical ideas. Physical cosmology is the science that attempts to explain all observations relevant to the development and characteristics of the universe on its largest scale. Some questions regarding the behaviour of the universe have been described by physicists and cosmologists as extra-scientific or metaphysical. Attempted solutions to such questions may include the extrapolation of scientific theories to untested regimes (such as the Planck epoch), or the inclusion of philosophical or religious ideas. See also Why there is anything at all References External links Creation myths Greek words and phrases Natural philosophy Origins Physical cosmology Concepts in astronomy
Cosmogony
[ "Physics", "Astronomy" ]
950
[ "Cosmogony", "Astronomical sub-disciplines", "Concepts in astronomy", "Theoretical physics", "Astrophysics", "Creation myths", "Physical cosmology" ]
1,490,017
https://en.wikipedia.org/wiki/Electroactive%20polymer
An electroactive polymer (EAP) is a polymer that exhibits a change in size or shape when stimulated by an electric field. The most common applications of this type of material are in actuators and sensors. A typical characteristic property of an EAP is that it will undergo a large amount of deformation while sustaining large forces. The majority of historic actuators are made of ceramic piezoelectric materials. While these materials are able to withstand large forces, they commonly deform only a fraction of a percent. In the late 1990s, it was demonstrated that some EAPs can exhibit up to 380% strain, which is much more than any ceramic actuator. One of the most common applications for EAPs is in the field of robotics in the development of artificial muscles; thus, an electroactive polymer is often referred to as an artificial muscle. History The field of EAPs emerged in 1880, when Wilhelm Röntgen designed an experiment in which he tested the effect of an electrostatic field on the mechanical properties of a strip of natural rubber. The rubber strip was fixed at one end and was attached to a mass at the other. Electric charges were then sprayed onto the rubber, and it was observed that the length changed. It was in 1925 that the first piezoelectric polymer, electret, was discovered. Electret was formed by combining carnauba wax, rosin and beeswax, and then cooling the solution while it was subject to an applied DC electrical bias. The mixture would then solidify into a polymeric material that exhibited a piezoelectric effect. Polymers that respond to environmental conditions other than an applied electric current have also been a large part of this area of study. In 1949 Katchalsky et al. demonstrated that when collagen filaments are dipped in acid or alkali solutions, they respond with a change in volume. The collagen filaments were found to expand in an acidic solution and contract in an alkali solution. Although other stimuli (such as pH) have been investigated, due to its ease and practicality most research has been devoted to developing polymers that respond to electrical stimuli in order to mimic biological systems. The next major breakthrough in EAPs took place in the late 1960s. In 1969 Kawai demonstrated that polyvinylidene fluoride (PVDF) exhibits a large piezoelectric effect. This sparked research interest in developing other polymers that would show a similar effect. In 1977 the first electrically conducting polymers were discovered by Hideki Shirakawa et al. Shirakawa, along with Alan MacDiarmid and Alan Heeger, demonstrated that polyacetylene was electrically conductive, and that doping it with iodine vapor enhanced its conductivity by 8 orders of magnitude, bringing its conductance close to that of a metal. By the late 1980s a number of other polymers had been shown to exhibit a piezoelectric effect or were demonstrated to be conductive. In the early 1990s, ionic polymer-metal composites (IPMCs) were developed and shown to exhibit electroactive properties far superior to previous EAPs. The major advantage of IPMCs was that they were able to show activation (deformation) at voltages as low as 1 or 2 volts. This is orders of magnitude less than any previous EAP. Not only was the activation energy for these materials much lower, but they could also undergo much larger deformations. IPMCs were shown to exhibit anywhere up to 380% strain, orders of magnitude larger than previously developed EAPs. 
In 1999, Yoseph Bar-Cohen proposed the Armwrestling Match of EAP Robotic Arm Against Human Challenge. This was a challenge in which research groups around the world competed to design a robotic arm consisting of EAP muscles that could defeat a human in an arm wrestling match. The first challenge was held at the Electroactive Polymer Actuators and Devices Conference in 2005. Another major milestone of the field is that the first commercially developed device including EAPs as an artificial muscle was produced in 2002 by Eamex in Japan. This device was a fish that was able to swim on its own, moving its tail using an EAP muscle. But the progress in practical development has not been satisfactory. DARPA-funded research in the 1990s at SRI International and led by Ron Pelrine developed an electroactive polymer using silicone and acrylic polymers; the technology was spun off into the company Artificial Muscle in 2003, with industrial production beginning in 2008. In 2010, Artificial Muscle became a subsidiary of Bayer MaterialScience. Types EAPs can have several configurations, but are generally divided in two principal classes: Dielectric and Ionic. Dielectric Dielectric EAPs are materials in which actuation is caused by electrostatic forces between two electrodes which squeeze the polymer. Dielectric elastomers are capable of very high strains and are fundamentally a capacitor that changes its capacitance when a voltage is applied by allowing the polymer to compress in thickness and expand in area due to the electric field. This type of EAP typically requires a large actuation voltage to produce high electric fields (hundreds to thousands of volts), but very low electrical power consumption. Dielectric EAPs require no power to keep the actuator at a given position. Examples are electrostrictive polymers and dielectric elastomers. Ferroelectric polymers Ferroelectric polymers are a group of crystalline polar polymers that are also ferroelectric, meaning that they maintain a permanent electric polarization that can be reversed, or switched, in an external electric field. Ferroelectric polymers, such as polyvinylidene fluoride (PVDF), are used in acoustic transducers and electromechanical actuators because of their inherent piezoelectric response, and as heat sensors because of their inherent pyroelectric response. Electrostrictive graft polymers Electrostrictive graft polymers consist of flexible backbone chains with branching side chains. The side chains on neighboring backbone polymers cross link and form crystal units. The backbone and side chain crystal units can then form polarized monomers, which contain atoms with partial charges and generate dipole moments, shown in Figure 2. When an electrical field is applied, a force is applied to each partial charge, which causes rotation of the whole polymer unit. This rotation causes electrostrictive strain and deformation of the polymer. Liquid crystalline polymers Main-chain liquid crystalline polymers have mesogenic groups linked to each other by a flexible spacer. The mesogens within a backbone form the mesophase structure, causing the polymer itself to adopt a conformation compatible with the structure of the mesophase. The direct coupling of the liquid crystalline order with the polymer conformation has given main-chain liquid crystalline elastomers a large amount of interest. 
The synthesis of highly oriented elastomers leads to large-strain thermal actuation along the polymer chain direction, with temperature variation resulting in unique mechanical properties and potential applications as mechanical actuators. Ionic Ionic EAPs are polymers in which actuation is caused by the displacement of ions inside the polymer. Only a few volts are needed for actuation, but the ionic flow implies that higher electrical power is needed for actuation, and energy is needed to keep the actuator at a given position. Examples of ionic EAPs are conductive polymers, ionic polymer-metal composites (IPMCs), and responsive gels. Yet another example is the Bucky gel actuator, a polymer-supported layer of polyelectrolyte material consisting of an ionic liquid sandwiched between two electrode layers, each a gel of ionic liquid containing single-wall carbon nanotubes. The name comes from the similarity of the gel to the paper that can be made by filtering carbon nanotubes, the so-called buckypaper. Electrorheological fluid Electrorheological fluids change viscosity when an electric field is applied. The fluid is a suspension of polymers in a low dielectric-constant liquid. With the application of a large electric field the viscosity of the suspension increases. Potential applications of these fluids include shock absorbers, engine mounts and acoustic dampers. Ionic polymer-metal composite Ionic polymer-metal composites consist of a thin ionomeric membrane with noble metal electrodes plated on its surface, along with cations that balance the charge of the anions fixed to the polymer backbone. They are very active actuators that show very high deformation at low applied voltage and low impedance. Ionic polymer-metal composites work through electrostatic attraction between the cationic counter ions and the cathode of the applied electric field; a schematic representation is shown in Figure 3. These types of polymers show the greatest promise for bio-mimetic uses as collagen fibers are essentially composed of natural charged ionic polymers. Nafion and Flemion are commonly used ionic polymer-metal composites. Stimuli-responsive gels Stimuli-responsive gels (hydrogels, when the swelling agent is an aqueous solution) are a special kind of swellable polymer networks with volume phase transition behaviour. These materials reversibly change their volume, optical, mechanical and other properties under very small alterations of certain physical (e.g. electric field, light, temperature) or chemical (concentration) stimuli. The volume change of these materials occurs by swelling/shrinking and is diffusion-based. Gels provide the biggest change in volume of solid-state materials. Combined with an excellent compatibility with micro-fabrication technologies, stimuli-responsive hydrogels in particular are of strongly increasing interest for microsystems with sensors and actuators. Current fields of research and application are chemical sensor systems, microfluidics and multimodal imaging systems. Comparison of dielectric and ionic EAPs Dielectric polymers are able to hold their induced displacement while activated under a DC voltage. This allows dielectric polymers to be considered for robotic applications. These types of materials also have high mechanical energy density and can be operated in air without a major decrease in performance. However, dielectric polymers require very high activation fields (>10 V/μm) that are close to the breakdown level. 
The activation of ionic polymers, on the other hand, requires only 1–2 volts. They do, however, need to remain wet, though some polymers have been developed as self-contained encapsulated activators, allowing their use in dry environments. Ionic polymers also have a low electromechanical coupling. They are, however, ideal for bio-mimetic devices. Characterization While there are many different ways electroactive polymers can be characterized, only three will be addressed here: stress–strain curve, dynamic mechanical thermal analysis, and dielectric thermal analysis. Stress–strain curve Stress–strain curves provide information about the polymer's mechanical properties such as the brittleness, elasticity and yield strength of the polymer. This is done by applying a force to the polymer at a uniform rate and measuring the deformation that results. An example of this deformation is shown in Figure 4. This technique is useful for determining the type of material (brittle, tough, etc.), but it is a destructive technique as the stress is increased until the polymer fractures. Dynamic mechanical thermal analysis (DMTA) Dynamic mechanical analysis is a non-destructive technique that is useful in understanding the mechanism of deformation at a molecular level. In DMTA a sinusoidal stress is applied to the polymer, and based on the polymer's deformation the elastic modulus and damping characteristics are obtained (assuming the polymer is a damped harmonic oscillator). Elastic materials take the mechanical energy of the stress and convert it into potential energy which can later be recovered. An ideal spring will use all the potential energy to regain its original shape (no damping), while a liquid will use all the potential energy to flow, never returning to its original position or shape (high damping). A viscoelastic polymer will exhibit a combination of both types of behavior. Dielectric thermal analysis (DETA) DETA is similar to DMTA, but instead of an alternating mechanical force an alternating electric field is applied. The applied field can lead to polarization of the sample, and if the polymer contains groups that have permanent dipoles (as in Figure 2), they will align with the electric field. The permittivity can be measured from the change in amplitude and resolved into dielectric storage and loss components. The electric displacement field can also be measured by following the current. Once the field is removed, the dipoles will relax back into a random orientation. Applications EAP materials can be easily manufactured in various shapes due to the ease of processing many polymeric materials, making them very versatile. One potential application for EAPs is integration into microelectromechanical systems (MEMS) to produce smart actuators. Artificial muscles As the most promising practical research direction, EAPs have been used in artificial muscles. Their ability to emulate the operation of biological muscles with high fracture toughness, large actuation strain and inherent vibration damping draws the attention of scientists in this field. EAPs have even been used successfully to make a type of hand. Tactile displays In recent years, "electroactive polymers for refreshable Braille displays" has emerged as a concept to aid the visually impaired in fast reading and computer-assisted communication. This concept is based on using an EAP actuator configured in an array form. Rows of electrodes on one side of an EAP film and columns on the other activate individual elements in the array. 
Each element is mounted with a Braille dot and is lowered by applying a voltage across the thickness of the selected element, causing local thickness reduction. Under computer control, dots would be activated to create tactile patterns of highs and lows representing the information to be read. Visual and tactile impressions of a virtual surface are displayed by a high-resolution tactile display, a so-called "artificial skin" (Fig. 6). These monolithic devices consist of an array of thousands of multimodal modulators (actuator pixels) based on stimuli-responsive hydrogels. Each modulator is able to individually change its transmission, height and softness. Besides their possible use as graphic displays for the visually impaired, such displays are interesting as freely programmable keys of touchpads and consoles. Microfluidics EAP materials have huge potential for microfluidics, e.g. as drug delivery systems, microfluidic devices and lab-on-a-chip. A first microfluidic platform technology reported in the literature is based on stimuli-responsive gels. To avoid the electrolysis of water, hydrogel-based microfluidic devices are mainly based on temperature-responsive polymers with lower critical solution temperature (LCST) characteristics, which are controlled by an electrothermic interface. Two types of micropumps are known, a diffusion micropump and a displacement micropump. Microvalves based on stimuli-responsive hydrogels show some advantageous properties such as particle tolerance, no leakage and outstanding pressure resistance. Besides these microfluidic standard components, the hydrogel platform also provides chemical sensors and a novel class of microfluidic components, the chemical transistors (also referred to as chemostat valves). These devices regulate a liquid flow if a threshold concentration of a certain chemical is reached. Chemical transistors form the basis of microchemomechanical fluidic integrated circuits. "Chemical ICs" process exclusively chemical information, are energy-self-powered, operate automatically and are suitable for large-scale integration. Another microfluidic platform is based on ionomeric materials. Pumps made from that material could offer low-voltage (battery) operation, extremely low noise signature, high system efficiency, and highly accurate control of flow rate. Another technology that can benefit from the unique properties of EAP actuators is optical membranes. Due to the low modulus of the actuators, their mechanical impedance is well-matched to common optical membrane materials. Also, a single EAP actuator is capable of generating displacements that range from micrometers to centimeters. For this reason, these materials can be used for static shape correction and jitter suppression. These actuators could also be used to correct for optical aberrations due to atmospheric interference. Since these materials exhibit excellent electroactive character, EAP materials show potential in biomimetic-robot research, stress sensors and acoustics, which will make EAPs a more attractive study topic in the near future. They have been used for various actuators such as face muscles and arm muscles in humanoid robots. Future directions The field of EAPs is far from mature, which leaves several issues that still need to be worked on. The performance and long-term stability of the EAP should be improved by designing a water impermeable surface. 
This will prevent the evaporation of water contained in the EAP and also reduce the potential loss of the positive counter ions when the EAP is operating submerged in an aqueous environment. Improved surface conductivity should be explored using methods to produce a defect-free conductive surface. This could possibly be done using metal vapor deposition or other doping methods. It may also be possible to use conductive polymers to form a thick conductive layer. Heat-resistant EAPs would be desirable to allow operation at higher voltages without damaging the internal structure of the EAP due to the generation of heat in the EAP composite. Development of EAPs in different configurations (e.g., fibers and fiber bundles) would also be beneficial in order to increase the range of possible modes of motion. See also Pneumatic artificial muscles Artificial muscles References Further reading Electroactive polymer (EAP) actuators as artificial muscles – reality, potential and challenges, Electroactive Polymers as Artificial Muscles Reality and Challenges Electroactive polymers for sensing Electrical engineering Polymer material properties Smart materials Transducers
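As a rough numerical illustration of the dielectric actuation mechanism described in this article, the sketch below estimates the thickness strain of a dielectric elastomer film from the effective electrostatic ("Maxwell") pressure p = ε0·εr·E² and a small-strain elastic response. The function name and all parameter values (film thickness, permittivity, modulus, voltage) are illustrative assumptions, not figures from the article.

```python
# Illustrative estimate (not from the article) of dielectric-EAP actuation:
# the electrodes squeeze the film with an effective "Maxwell pressure"
# p = eps0 * eps_r * E^2, and for small strains the thickness strain is
# roughly s = -p / Y for a film of Young's modulus Y.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def thickness_strain(voltage, thickness, eps_r, youngs_modulus):
    """Small-strain thickness compression of a dielectric elastomer film."""
    e_field = voltage / thickness              # field across the film, V/m
    maxwell_pressure = EPS0 * eps_r * e_field ** 2
    return -maxwell_pressure / youngs_modulus

# Assumed numbers: a 50 um acrylic-like film (eps_r ~ 4.8, Y ~ 1 MPa) at 3 kV,
# i.e. a field of 60 V/um -- above the >10 V/um scale quoted in the article.
s = thickness_strain(3000.0, 50e-6, 4.8, 1e6)
print(f"estimated thickness strain: {s:.1%}")  # about -15%
```

With these assumed numbers the film compresses by roughly 15% in thickness, consistent with the article's point that dielectric EAPs need very high fields but draw very little electrical power.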
Electroactive polymer
[ "Chemistry", "Materials_science", "Engineering" ]
3,734
[ "Materials science", "Polymer material properties", "Polymer chemistry", "Electrical engineering", "Smart materials" ]
1,490,148
https://en.wikipedia.org/wiki/Perturbation%20%28astronomy%29
In astronomy, perturbation is the complex motion of a massive body subjected to forces other than the gravitational attraction of a single other massive body. The other forces can include a third (fourth, fifth, etc.) body, resistance, as from an atmosphere, and the off-center attraction of an oblate or otherwise misshapen body. Introduction The study of perturbations began with the first attempts to predict planetary motions in the sky. In ancient times the causes were unknown. Isaac Newton, at the time he formulated his laws of motion and of gravitation, applied them to the first analysis of perturbations, recognizing the complex difficulties of their calculation. Many of the great mathematicians since then have given attention to the various problems involved; throughout the 18th and 19th centuries there was demand for accurate tables of the position of the Moon and planets for marine navigation. The complex motions of gravitational perturbations can be broken down. The hypothetical motion that the body follows under the gravitational effect of one other body only is a conic section, and can be described in geometrical terms. This is called a two-body problem, or an unperturbed Keplerian orbit. The differences between that and the actual motion of the body are perturbations due to the additional gravitational effects of the remaining body or bodies. If there is only one other significant body then the perturbed motion is a three-body problem; if there are multiple other bodies it is an n-body problem. A general analytical solution (a mathematical expression to predict the positions and motions at any future time) exists for the two-body problem; when more than two bodies are considered analytic solutions exist only for special cases. Even the two-body problem becomes insoluble if one of the bodies is irregular in shape. Most systems that involve multiple gravitational attractions present one primary body which is dominant in its effects (for example, a star, in the case of the star and its planet, or a planet, in the case of the planet and its satellite). The gravitational effects of the other bodies can be treated as perturbations of the hypothetical unperturbed motion of the planet or satellite around its primary body. Mathematical analysis General perturbations In methods of general perturbations, general differential equations, either of motion or of change in the orbital elements, are solved analytically, usually by series expansions. The result is usually expressed in terms of algebraic and trigonometric functions of the orbital elements of the body in question and the perturbing bodies. This can be applied generally to many different sets of conditions, and is not specific to any particular set of gravitating objects. Historically, general perturbations were investigated first. The classical methods are known as variation of the elements, variation of parameters or variation of the constants of integration. In these methods, it is considered that the body is always moving in a conic section; however, the conic section is constantly changing due to the perturbations. If all perturbations were to cease at any particular instant, the body would continue in this (now unchanging) conic section indefinitely; this conic is known as the osculating orbit and its orbital elements at any particular time are what are sought by the methods of general perturbations. 
General perturbations takes advantage of the fact that in many problems of celestial mechanics, the two-body orbit changes rather slowly due to the perturbations; the two-body orbit is a good first approximation. General perturbations is applicable only if the perturbing forces are about one order of magnitude smaller, or less, than the gravitational force of the primary body. In the Solar System, this is usually the case; Jupiter, the second largest body, has a mass of about 1/1000 that of the Sun. General perturbation methods are preferred for some types of problems, as the sources of certain observed motions are readily found. This is not necessarily so for special perturbations; the motions would be predicted with similar accuracy, but no information on the configurations of the perturbing bodies (for instance, an orbital resonance) which caused them would be available. Special perturbations In methods of special perturbations, numerical datasets, representing values for the positions, velocities and accelerative forces on the bodies of interest, are made the basis of numerical integration of the differential equations of motion. In effect, the positions and velocities are perturbed directly, and no attempt is made to calculate the curves of the orbits or the orbital elements. Special perturbations can be applied to any problem in celestial mechanics, as the method is not limited to cases where the perturbing forces are small. Once applied only to comets and minor planets, special perturbation methods are now the basis of the most accurate machine-generated planetary ephemerides of the great astronomical almanacs. Special perturbations are also used for modeling an orbit with computers. Cowell's formulation Cowell's formulation (so named for Philip H. Cowell, who, with A. C. D. Crommelin, used a similar method to predict the return of Halley's comet) is perhaps the simplest of the special perturbation methods. In a system of $n$ mutually interacting bodies, this method mathematically solves for the Newtonian forces on body $i$ by summing the individual interactions from the other $j$ bodies: $\ddot{\mathbf{r}}_i = \sum_{j=1,\, j \neq i}^{n} \frac{G m_j (\mathbf{r}_j - \mathbf{r}_i)}{r_{ij}^3}$, where $\ddot{\mathbf{r}}_i$ is the acceleration vector of body $i$, $G$ is the gravitational constant, $m_j$ is the mass of body $j$, $\mathbf{r}_i$ and $\mathbf{r}_j$ are the position vectors of objects $i$ and $j$ respectively, and $r_{ij}$ is the distance from object $i$ to object $j$, all vectors being referred to the barycenter of the system. This equation is resolved into components in $x$, $y$, and $z$, and these are integrated numerically to form the new velocity and position vectors. This process is repeated as many times as necessary. (A minimal numerical sketch of this procedure is given at the end of this article.) The advantage of Cowell's method is ease of application and programming. A disadvantage is that when perturbations become large in magnitude (as when an object makes a close approach to another) the errors of the method also become large. However, for many problems in celestial mechanics, this is never the case. Another disadvantage is that in systems with a dominant central body, such as the Sun, it is necessary to carry many significant digits in the arithmetic because of the large difference in the forces of the central body and the perturbing bodies, although with high precision numbers built into modern computers this is not as much of a limitation as it once was. Encke's method Encke's method begins with the osculating orbit as a reference and integrates numerically to solve for the variation from the reference as a function of time. 
Its advantages are that perturbations are generally small in magnitude, so the integration can proceed in larger steps (with correspondingly smaller errors), and the method is much less affected by extreme perturbations. Its disadvantage is complexity; it cannot be used indefinitely without occasionally updating the osculating orbit and continuing from there, a process known as rectification. Encke's method is similar to the general perturbation method of variation of the elements, except the rectification is performed at discrete intervals rather than continuously. Letting $\boldsymbol{\rho}$ be the radius vector of the osculating orbit, $\mathbf{r}$ the radius vector of the perturbed orbit, and $\delta \mathbf{r} = \mathbf{r} - \boldsymbol{\rho}$ the variation from the osculating orbit, the equation of motion of the variation is (1) $\ddot{\delta \mathbf{r}} = \ddot{\mathbf{r}} - \ddot{\boldsymbol{\rho}}$, where (2) $\ddot{\mathbf{r}} = \mathbf{a}_{\text{per}} - \mu \frac{\mathbf{r}}{r^3}$ and (3) $\ddot{\boldsymbol{\rho}} = -\mu \frac{\boldsymbol{\rho}}{\rho^3}$ are just the equations of motion of $\mathbf{r}$ and $\boldsymbol{\rho}$. Here $\mu = G(M + m)$ is the gravitational parameter with $M$ and $m$ the masses of the central body and the perturbed body, $\mathbf{a}_{\text{per}}$ is the perturbing acceleration, and $r$ and $\rho$ are the magnitudes of $\mathbf{r}$ and $\boldsymbol{\rho}$. Substituting from equations (2) and (3) into equation (1) gives (4) $\ddot{\delta \mathbf{r}} = \mathbf{a}_{\text{per}} + \mu \left( \frac{\boldsymbol{\rho}}{\rho^3} - \frac{\mathbf{r}}{r^3} \right)$, which, in theory, could be integrated twice to find $\delta \mathbf{r}$. Since the osculating orbit is easily calculated by two-body methods, $\boldsymbol{\rho}$ and $\mathbf{a}_{\text{per}}$ are accounted for and $\delta \mathbf{r}$ can be solved. In practice, the quantity in the brackets, $\frac{\boldsymbol{\rho}}{\rho^3} - \frac{\mathbf{r}}{r^3}$, is the difference of two nearly equal vectors, and further manipulation is necessary to avoid the need for extra significant digits. Encke's method was more widely used before the advent of modern computers, when much orbit computation was performed on mechanical calculating machines. Periodic nature In the Solar System, many of the disturbances of one planet by another are periodic, consisting of small impulses each time a planet passes another in its orbit. This causes the bodies to follow motions that are periodic or quasi-periodic – such as the Moon in its strongly perturbed orbit, which is the subject of lunar theory. This periodic nature led to the discovery of Neptune in 1846 as a result of its perturbations of the orbit of Uranus. Ongoing mutual perturbations of the planets cause long-term quasi-periodic variations in their orbital elements, most apparent when two planets' orbital periods are nearly in sync. For instance, five orbits of Jupiter (59.31 years) is nearly equal to two of Saturn (58.91 years). This causes large perturbations of both, with a period of 918 years, the time required for the small difference in their positions at conjunction to make one complete circle, first discovered by Laplace. Venus currently has the orbit with the least eccentricity, i.e. it is the closest to circular, of all the planetary orbits. In 25,000 years' time, Earth will have a more circular (less eccentric) orbit than Venus. It has been shown that long-term periodic disturbances within the Solar System can become chaotic over very long time scales; under some circumstances one or more planets can cross the orbit of another, leading to collisions. The orbits of many of the minor bodies of the Solar System, such as comets, are often heavily perturbed, particularly by the gravitational fields of the gas giants. While many of these perturbations are periodic, others are not, and these in particular may represent aspects of chaotic motion. For example, in April 1996, Jupiter's gravitational influence caused the period of Comet Hale–Bopp's orbit to decrease from 4,206 to 2,380 years, a change that will not revert on any periodic basis. 
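Below is the minimal numerical sketch of Cowell's formulation promised above: accelerations are obtained by direct summation of the Newtonian attractions, and positions and velocities are stepped with a simple leapfrog integrator. The bodies, step size, and integrator choice are illustrative assumptions; a production ephemeris integrator would use far more careful numerics.

```python
# A minimal sketch of Cowell's formulation (illustrative; units, bodies, step
# size and the leapfrog integrator are arbitrary choices, not the article's).
G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def accelerations(masses, positions):
    """Cowell summation: Newtonian acceleration on each body from all others."""
    n = len(masses)
    accs = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = positions[j][0] - positions[i][0]
            dy = positions[j][1] - positions[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            accs[i][0] += G * masses[j] * dx / r3
            accs[i][1] += G * masses[j] * dy / r3
    return accs

def step(masses, positions, velocities, dt):
    """One kick-drift-kick leapfrog step; errors grow during close approaches."""
    a = accelerations(masses, positions)
    for i in range(len(masses)):
        velocities[i][0] += 0.5 * dt * a[i][0]
        velocities[i][1] += 0.5 * dt * a[i][1]
        positions[i][0] += dt * velocities[i][0]
        positions[i][1] += dt * velocities[i][1]
    a = accelerations(masses, positions)
    for i in range(len(masses)):
        velocities[i][0] += 0.5 * dt * a[i][0]
        velocities[i][1] += 0.5 * dt * a[i][1]

# Sun plus an Earth-like planet on a circular orbit (barycentric drift ignored).
masses = [1.989e30, 5.972e24]
positions = [[0.0, 0.0], [1.496e11, 0.0]]
velocities = [[0.0, 0.0], [0.0, 29780.0]]
for _ in range(365 * 24):            # one year in one-hour steps
    step(masses, positions, velocities, 3600.0)
print(positions[1])                  # the planet ends up near its start point
```

The example steps a Sun–planet pair through one year in one-hour increments; adding a third body to the lists is all that is needed to make the planet's motion a perturbed, rather than pure two-body, orbit.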
See also Formation and evolution of the Solar System Frozen orbit Molniya orbit Nereid, one of the outer moons of Neptune, which has a high orbital eccentricity of ~0.75 and is frequently perturbed Osculating orbit Orbit modeling Orbital resonance Perturbation theory Proper orbital elements Stability of the Solar System References Footnotes Citations Bibliography Further reading P.E. El'Yasberg: Introduction to the Theory of Flight of Artificial Earth Satellites External links Solex (by Aldo Vitagliano) predictions for the position/orbit/close approaches of Mars Gravitation, Sir George Biddell Airy's 1884 book on gravitational motion and perturbations, using little or no math (at Google Books) Dynamical systems Dynamics of the Solar System Celestial mechanics
Perturbation (astronomy)
[ "Physics", "Astronomy", "Mathematics" ]
2,232
[ "Dynamics of the Solar System", "Classical mechanics", "Astrophysics", "Mechanics", "Celestial mechanics", "Solar System", "Dynamical systems" ]
1,492,498
https://en.wikipedia.org/wiki/Muscular%20hydrostat
A muscular hydrostat is a biological structure found in animals. It is used to manipulate items (including food) or to move its host about and consists mainly of muscles with no skeletal support. It performs its hydraulic movement without fluid in a separate compartment, as in a hydrostatic skeleton. A muscular hydrostat, like a hydrostatic skeleton, relies on the fact that water is effectively incompressible at physiological pressures. In contrast to a hydrostatic skeleton, where muscle surrounds a fluid-filled cavity, a muscular hydrostat is composed mainly of muscle tissue. Since muscle tissue itself is mainly made of water and is also effectively incompressible, similar principles apply. Muscular anatomy Muscles provide the force to move a muscular hydrostat. Since muscles are only able to produce force by contracting and becoming shorter, different groups of muscles have to work against each other, with one group relaxing and lengthening as the other group provides the force by contracting. Such complementary muscle groups are termed antagonistic pairs. The muscle fibers in a muscular hydrostat are oriented in three directions: parallel to the long axis, perpendicular to the long axis, and wrapped obliquely around the long axis. The muscles parallel to the long axis are arranged in longitudinal bundles. The more peripherally these are located, the more elaborate bending movements are possible. A more peripheral distribution is found in tetrapod tongues, octopus arms, nautilus tentacles, and elephant trunks. Tongues that are adapted for protrusion typically have centrally located longitudinal fibers. These are found in snake tongues, many lizard tongues, and the mammalian anteaters. The muscles perpendicular to the long axis may be arranged in a transverse, circular, or radial pattern. A transverse arrangement involves sheets of muscle fibers running perpendicular to the long axis, usually alternating between horizontal and vertical orientations. This arrangement is found in the arms and tentacles of squid, octopuses, and in most mammalian tongues. A radial arrangement involves fibers radiating out in all directions from the center of the organ. This is found in the tentacles of the chambered nautilus and in the elephant proboscis (trunk). A circular arrangement has rings of contractive fibers around the long axis. This is found in many mammalian and lizard tongues along with squid tentacles. Helical or oblique fibers around the long axis are generally present in two layers with opposite chirality and wrap around the central core of musculature. Mechanism of operation In a muscular hydrostat, the musculature itself both creates movement and provides skeletal support for that movement. It can provide this support because it is composed primarily of an incompressible “liquid" and is thus constant in volume. The most important biomechanical feature of a muscular hydrostat is its constant volume. Muscle is composed primarily of an aqueous liquid that is essentially incompressible at physiological pressures. In a muscular hydrostat or any other structure of constant volume, a decrease in one dimension will cause a compensatory increase in at least one other dimension. The mechanisms of elongation, bending and torsion in muscular hydrostats all depend on constancy of volume to effect shape changes in the absence of stiff skeletal attachments. Since muscular hydrostats are under constant volume when the diameter increases or decreases, the length must also decrease or increase, respectively. 
For a cylinder, the volume is V = πr²l. Differentiating at constant volume gives 2rl·dr + r²·dl = 0, so dr/dl = −r/(2l); equivalently, the length scales as the inverse square of the radius. From this, if the diameter decreases by 25%, the length increases by approximately 78% (1/0.75² ≈ 1.78), which can produce a large amount of force depending on the task the animal is performing. (A numerical check of this relation appears at the end of this article.) Elongation and shortening Elongation in hydrostats is caused by the contraction of transverse or helical musculature arrangements. Given the constant volume of muscular hydrostats, these contractions cause an elongation of the longitudinal muscles. Change in length is proportional to the square of the decrease in diameter. Therefore, contractions of muscles perpendicular to the long axis will cause a decrease in diameter and, because the volume is kept constant, will elongate the organ length-wise. Shortening, on the other hand, can be caused by contraction of the muscles parallel to the long axis, resulting in the organ increasing in diameter as well as shortening in length. The muscles used in elongation and shortening maintain support through the constant volume principle and their antagonistic relationships with each other. These mechanisms are often seen in prey capture by shovelnose frogs and chameleons, as well as in the human tongue and many other examples. In some frogs, the tongue elongates up to 180% of its resting length. Extra-oral tongues show higher length/width ratios than intra-oral tongues, allowing for a greater increase in length (more than 100% of resting length, as compared to intra-oral tongues at only about 50% of resting length increase). Greater elongation lengths trade off with the force produced by the organ; as the length/width ratio increases, elongation increases while force decreases. Squids have been shown to use muscular hydrostat elongation in prey capture and feeding as well. Bending The bending of a muscular hydrostat can occur in two ways, both of which require the use of antagonistic muscles. The unilateral contraction of a longitudinal muscle will produce little or no bending and will serve to increase the diameter of the muscular hydrostat because of the constant volume principle that must be met. To bend the hydrostat structure, the unilateral contraction of longitudinal muscle must be accompanied by contractile activity of transverse, radial, or circular muscles to maintain a constant diameter. Bending of a muscular hydrostat can also occur by the contraction of transverse, radial, or circular muscles, which decreases the diameter; in this case bending is produced by longitudinal muscle activity that maintains a constant length on one side of the structure. The bending of a muscular hydrostat is particularly important in animal tongues. This motion provides the mechanism by which a snake flicks the air with its tongue to sense its surroundings, and it is also responsible for the complexities of human speech. Stiffening The stiffening of a muscular hydrostat is accomplished by the muscle or connective tissue of the hydrostat resisting dimensional changes. Torsion Torsion is the twisting of a muscular hydrostat along its long axis and is produced by helical or oblique arrangements of musculature whose fibers run in opposite directions. For a counter-clockwise torsion it is necessary for a right-hand helix to contract. Contraction of a left-hand helix causes clockwise torsion. The simultaneous contraction of both right and left-hand helices results in an increase in resistance to torsional forces. 
The oblique or helical muscle arrays in the muscular hydrostats are located in the periphery of the structure, wrapping the inner core of musculature, and this peripheral location provides a larger moment through which the torque is applied than a more central location. The effect of helically arranged muscle fibers, which may also contribute to changes in length of a muscular hydrostat, depends on fiber angle—the angle that the helical muscle fibers make with the long axis of the structure. The length of the helical fiber is at a minimum when the fiber angle equals 54°44′ and is at maximum length when the fiber angle approaches 0° and 90°. Summed up, this means that helically arranged muscle fibers with a fiber angle greater than 54°44′ will create force for both torsion and elongation while helically arranged muscle fibers with a fiber angle less than 54°44′ will create force for both torsion and shortening. The fiber angle of the oblique or helical muscle layers must increase during shortening and decrease during lengthening. In addition to creating a torsional force, the oblique muscle layers will therefore create a force for elongation that may aid the transverse musculature in resisting longitudinal compression. Examples Whole bodies of many worms Feet of mollusks (including arms and tentacles in cephalopods) Tongues of mammals and reptiles Trunks of elephants The snout of the West Indian manatee Technological applications A group of engineers and biologists have collaborated to develop robotic arms that are able to manipulate and handle various objects of different size, mass, surface texture and mechanical properties. These robotic arms have many advantages over previous robotic arms that were not based on muscular hydrostats. References Animal anatomy Biomechanics
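The short script below is the numerical check referred to above (an illustration only): it verifies that a constant-volume cylinder whose diameter shrinks by 25% lengthens by about 78%, and that the fiber angle 54°44′ quoted for helical muscle fibers is the "magic angle" arctan(√2).

```python
# Numerical check (illustration only) of the constant-volume kinematics above.
import math

def new_length(l0, r0, r_new):
    """Length of a constant-volume cylinder (V = pi r^2 l) after radius change."""
    return l0 * (r0 / r_new) ** 2

# A 25% decrease in diameter (radius 1.0 -> 0.75) lengthens the cylinder ~78%.
print(new_length(1.0, 1.0, 0.75))               # 1.777...

# The quoted fiber angle, 54 deg 44 min, is arctan(sqrt 2): the angle at which
# a helical fiber wrapped on the cylinder has its minimum length.
print(math.degrees(math.atan(math.sqrt(2.0))))  # 54.7356... = 54 deg 44 min
```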
Muscular hydrostat
[ "Physics" ]
1,754
[ "Biomechanics", "Mechanics" ]
1,493,025
https://en.wikipedia.org/wiki/Glassphalt
Glassphalt or glasphalt (a portmanteau of glass and asphalt) is a variety of asphalt that uses crushed glass. It has been used as an alternative to conventional bituminous asphalt pavement since the early 1970s. Glassphalt must be properly mixed and placed if it is to meet roadway pavement standards, requiring some modifications to generally accepted asphalt procedures. Generally, there is about 10–20% glass by weight in glassphalt. External links Recycled Glass in Asphalt Preparation and Placement of Glassphalt Building materials Glass applications Pavements
Glassphalt
[ "Physics", "Engineering" ]
113
[ "Building engineering", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
1,494,813
https://en.wikipedia.org/wiki/Ramsauer%E2%80%93Townsend%20effect
The Ramsauer–Townsend effect, also sometimes called the Ramsauer effect or the Townsend effect, is a physical phenomenon involving the scattering of low-energy electrons by atoms of a noble gas. This effect is a result of quantum mechanics. The effect is named for Carl Ramsauer and John Sealy Townsend, who each independently studied the collisions between atoms and low-energy electrons in 1921. Definitions When an electron moves through a gas, its interactions with the gas atoms cause scattering to occur. These interactions are classified as inelastic if they cause excitation or ionization of the atom to occur and elastic if they do not. The probability of scattering in such a system is defined as the number of electrons scattered, per unit electron current, per unit path length, per unit pressure at 0 °C, per unit solid angle. The number of collisions equals the total number of electrons scattered elastically and inelastically in all angles, and the probability of collision is the total number of collisions, per unit electron current, per unit path length, per unit pressure at 0 °C. Because noble gas atoms have a relatively high first ionization energy and the electrons do not carry enough energy to cause excited electronic states, ionization and excitation of the atom are unlikely, and the probability of elastic scattering over all angles is approximately equal to the probability of collision. Description If one tries to predict the probability of collision with a classical model that treats the electron and atom as hard spheres, one finds that the probability of collision should be independent of the incident electron energy. However, Ramsauer and Townsend independently observed that for slow-moving electrons in argon, krypton, or xenon, the probability of collision between the electrons and gas atoms reaches a minimum for electrons with a certain kinetic energy (about 1 electronvolt for xenon gas). No good explanation for the phenomenon existed until the introduction of quantum mechanics, which explains that the effect results from the wave-like properties of the electron. A simple model of the collision that makes use of wave theory can predict the existence of the Ramsauer–Townsend minimum (a one-dimensional illustration is sketched at the end of this article). Niels Bohr presented a simple model for the phenomenon that considers the atom as a finite square potential well. Predicting from theory the kinetic energy that will produce a Ramsauer–Townsend minimum is quite complicated since the problem involves understanding the wave nature of particles. However, the problem has been extensively investigated both experimentally and theoretically and is well understood. References Scattering Physical phenomena
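As a hedged illustration of the square-well picture mentioned above, the sketch below computes the quantum transmission of an electron past a one-dimensional finite square well; transmission becomes perfect (scattering vanishes) whenever the internal wavenumber satisfies k2·a = nπ, a textbook 1-D analog of the Ramsauer–Townsend minimum. The well depth and width are arbitrary model parameters chosen so the transmission peak falls near 1 eV; the real effect is three-dimensional and involves partial-wave phase shifts.

```python
# 1-D analog of the Ramsauer-Townsend minimum (a sketch; the real effect is a
# 3-D partial-wave phenomenon). Transmission over a square well of depth V0 and
# width a:  T = 1 / (1 + V0^2 sin^2(k2 a) / (4 E (E + V0))),
# with k2 = sqrt(2 m (E + V0)) / hbar; T = 1 whenever k2*a is a multiple of pi.
import math

HBAR = 1.0545718e-34   # J s
M_E = 9.109e-31        # electron mass, kg
EV = 1.602e-19         # J per eV

def transmission(E_eV, V0_eV=8.0, a=4.0e-10):
    """Transmission past a square well; V0 and a are arbitrary model choices."""
    E, V0 = E_eV * EV, V0_eV * EV
    k2 = math.sqrt(2.0 * M_E * (E + V0)) / HBAR
    s2 = math.sin(k2 * a) ** 2
    return 1.0 / (1.0 + V0 ** 2 * s2 / (4.0 * E * (E + V0)))

for E in (0.3, 0.7, 1.4, 3.0, 6.0):
    print(f"E = {E:3.1f} eV  ->  T = {transmission(E):.3f}")
```

Running it shows the transmission rising to about 1 near 1.4 eV and falling off on either side, mirroring the scattering-probability minimum that Ramsauer and Townsend measured at electronvolt-scale energies.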
Ramsauer–Townsend effect
[ "Physics", "Chemistry", "Materials_science" ]
506
[ "Physical phenomena", "Scattering", "Condensed matter physics", "Particle physics", "Nuclear physics" ]
151,577
https://en.wikipedia.org/wiki/Causality%20%28physics%29
Causality is the relationship between causes and effects. While causality is also a topic studied from the perspectives of philosophy and physics, it is operationalized so that causes of an event must be in the past light cone of the event and ultimately reducible to fundamental interactions. Similarly, a cause cannot have an effect outside its future light cone. Macroscopic vs microscopic causality Causality can be defined macroscopically, at the level of human observers, or microscopically, for fundamental events at the atomic level. The strong causality principle forbids information transfer faster than the speed of light; the weak causality principle operates at the microscopic level and need not lead to information transfer. Physical models can obey the weak principle without obeying the strong version. Macroscopic causality In classical physics, an effect cannot occur before its cause, which is why solutions such as the advanced time solutions of the Liénard–Wiechert potential are discarded as physically meaningless. In Einstein's theories of both special and general relativity, causality means that an effect cannot occur from a cause that is not in the back (past) light cone of that event. Similarly, a cause cannot have an effect outside its front (future) light cone. These restrictions are consistent with the constraint that mass and energy that act as causal influences cannot travel faster than the speed of light and/or backwards in time. In quantum field theory, observables of events with a spacelike relationship, "elsewhere", have to commute, so the order of observations or measurements of such observables does not affect the outcomes. Another requirement of causality is that cause and effect be mediated across space and time (requirement of contiguity). This requirement has been very influential in the past, in the first place as a result of direct observation of causal processes (like pushing a cart), in the second place as a problematic aspect of Newton's theory of gravitation (attraction of the earth by the sun by means of action at a distance) replacing mechanistic proposals like Descartes' vortex theory; in the third place as an incentive to develop dynamic field theories (e.g., Maxwell's electrodynamics and Einstein's general theory of relativity) restoring contiguity in the transmission of influences in a more successful way than in Descartes' theory. Simultaneity In modern physics, the notion of causality had to be clarified. The word simultaneous is observer-dependent in special relativity. The principle is relativity of simultaneity. Consequently, the relativistic principle of causality says that the cause must precede its effect according to all inertial observers. This is equivalent to the statement that the cause and its effect are separated by a timelike interval, and the effect belongs to the future of its cause. If a timelike interval separates the two events, this means that a signal could be sent between them at less than the speed of light. On the other hand, if signals could move faster than the speed of light, this would violate causality because it would allow a signal to be sent across spacelike intervals, which means that at least to some inertial observers the signal would travel backward in time. For this reason, special relativity does not allow communication faster than the speed of light. 
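The notions of timelike and spacelike separation used above can be made concrete in a few lines of code. The following sketch (an illustration, not part of the article) classifies the separation of two events by the sign of the invariant interval s² = (cΔt)² − |Δx|²; only timelike or lightlike separated events can be causally connected.

```python
# Sketch (not from the article): classify the separation of two events by the
# sign of the invariant interval s^2 = (c*dt)^2 - (dx^2 + dy^2 + dz^2).
C = 299_792_458.0  # speed of light, m/s

def separation(dt, dx, dy=0.0, dz=0.0):
    s2 = (C * dt) ** 2 - (dx * dx + dy * dy + dz * dz)
    if s2 > 0:
        return "timelike: a causal connection is possible"
    if s2 < 0:
        return "spacelike: no causal connection; time order is observer-dependent"
    return "lightlike: connectable only by a signal moving at c"

print(separation(dt=1.0, dx=1.0e8))  # events 1 s and 100,000 km apart
print(separation(dt=1.0, dx=1.0e9))  # events 1 s and 1,000,000 km apart
```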
In the theory of general relativity, the concept of causality is generalized in the most straightforward way: the effect must belong to the future light cone of its cause, even if the spacetime is curved. New subtleties must be taken into account when we investigate causality in quantum mechanics and relativistic quantum field theory in particular. In those two theories, causality is closely related to the principle of locality. Bell's theorem shows that "local causality" conditions constrain the correlations attainable in experiments involving quantum entanglement, and that quantum mechanics predicts non-classical correlations which violate these constraints. Despite these subtleties, causality remains an important and valid concept in physical theories. For example, the notion that events can be ordered into causes and effects is necessary to prevent (or at least outline) causality paradoxes such as the grandfather paradox, which asks what happens if a time-traveler kills his own grandfather before he ever meets the time-traveler's grandmother. See also Chronology protection conjecture. Determinism (or, what causality is not) The word causality in this context means that all effects must have specific physical causes due to fundamental interactions. Causality in this context is not associated with definitional principles such as Newton's second law. As such, in the context of causality, a force does not cause a mass to accelerate nor vice versa. Rather, Newton's Second Law can be derived from the conservation of momentum, which itself is a consequence of the spatial homogeneity of physical laws. The empiricists' aversion to metaphysical explanations (like Descartes' vortex theory) meant that scholastic arguments about what caused phenomena were either rejected for being untestable or were just ignored. The complaint that physics does not explain the cause of phenomena has accordingly been dismissed as a problem that is philosophical or metaphysical rather than empirical (e.g., Newton's "Hypotheses non fingo"). According to Ernst Mach the notion of force in Newton's second law was pleonastic, tautological and superfluous and, as indicated above, is not considered a consequence of any principle of causality. Indeed, it is possible to consider the Newtonian equations of motion of the gravitational interaction of two bodies as two coupled equations describing the positions $\mathbf{x}_1(t)$ and $\mathbf{x}_2(t)$ of the two bodies, without interpreting the right hand sides of these equations as forces; the equations just describe a process of interaction, without any necessity to interpret one body as the cause of the motion of the other, and allow one to predict the states of the system at later (as well as earlier) times. The ordinary situations in which humans singled out some factors in a physical interaction as being prior and therefore supplying the "because" of the interaction were often ones in which humans decided to bring about some state of affairs and directed their energies to producing that state of affairs—a process that took time to establish and left a new state of affairs that persisted beyond the time of activity of the actor. It would be difficult and pointless, however, to explain the motions of binary stars with respect to each other in that way; their motions are time-reversible and agnostic to the arrow of time, but once a direction of time is established, the entire evolution of the system can be completely determined. 
The possibility of such a time-independent view is at the basis of the deductive-nomological (D-N) view of scientific explanation, considering an event to be explained if it can be subsumed under a scientific law. In the D-N view, a physical state is considered to be explained if, applying the (deterministic) law, it can be derived from given initial conditions. (Such initial conditions could include the momenta and distance from each other of binary stars at any given moment.) Such 'explanation by determinism' is sometimes referred to as causal determinism. A disadvantage of the D-N view is that causality and determinism are more or less identified. Thus, in classical physics, it was assumed that all events are caused by earlier ones according to the known laws of nature, culminating in Pierre-Simon Laplace's claim that if the current state of the world were known with precision, it could be computed for any time in the future or the past (see Laplace's demon). However, this is usually referred to as Laplace determinism (rather than 'Laplace causality') because it hinges on determinism in mathematical models as dealt with in the mathematical Cauchy problem. Confusion between causality and determinism is particularly acute in quantum mechanics, this theory being acausal in the sense that it is unable in many cases to identify the causes of actually observed effects or to predict the effects of identical causes, but arguably deterministic in some interpretations (e.g. if the wave function is presumed not to actually collapse as in the many-worlds interpretation, or if its collapse is due to hidden variables, or simply redefining determinism as meaning that probabilities rather than specific effects are determined). Distributed causality Theories in physics like the butterfly effect from chaos theory open up the possibility of a type of distributed parameter systems in causality. The butterfly effect theory proposes: "Small variations of the initial condition of a nonlinear dynamical system may produce large variations in the long term behavior of the system." This opens up the opportunity to understand a distributed causality. A related way to interpret the butterfly effect is to see it as highlighting the difference between the application of the notion of causality in physics and a more general use of causality as represented by Mackie's INUS conditions. In classical (Newtonian) physics, in general, only those conditions are (explicitly) taken into account, that are both necessary and sufficient. For instance, when a massive sphere is caused to roll down a slope starting from a point of unstable equilibrium, then its velocity is assumed to be caused by the force of gravity accelerating it; the small push that was needed to set it into motion is not explicitly dealt with as a cause. In order to be a physical cause there must be a certain proportionality with the ensuing effect. A distinction is drawn between triggering and causation of the ball's motion. By the same token the butterfly can be seen as triggering a tornado, its cause being assumed to be seated in the atmospherical energies already present beforehand, rather than in the movements of a butterfly. Causal sets In causal set theory, causality takes an even more prominent place. The basis for this approach to quantum gravity is in a theorem by David Malament. 
This theorem states that the causal structure of a spacetime suffices to reconstruct its conformal class, so knowing the conformal factor and the causal structure is enough to know the spacetime. Based on this, Rafael Sorkin proposed the idea of Causal Set Theory, which is a fundamentally discrete approach to quantum gravity. The causal structure of the spacetime is represented as a poset, while the conformal factor can be reconstructed by identifying each poset element with a unit volume. See also Causality (general) References Further reading Bohm, David. (2005). Causality and Chance in Modern Physics. London: Taylor and Francis. Espinoza, Miguel (2006). Théorie du déterminisme causal. Paris: L'Harmattan. External links Causal Processes, Stanford Encyclopedia of Philosophy Caltech Tutorial on Relativity — A nice discussion of how observers moving relatively to each other see different slices of time. Faster-than-c signals, special relativity, and causality. This article explains that faster than light signals do not necessarily lead to a violation of causality. Causality Concepts in physics Time Philosophy of physics Time travel
Causality (physics)
[ "Physics", "Mathematics" ]
2,271
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Physical quantities", "Time", "Time travel", "Quantity", "nan", "Spacetime", "Wikipedia categories named after physical quantities" ]
152,036
https://en.wikipedia.org/wiki/S-100%20bus
The S-100 bus or Altair bus, IEEE 696-1983 (inactive-withdrawn), is an early computer bus designed in 1974 as a part of the Altair 8800. The bus was the first industry standard expansion bus for the microcomputer industry. S-100 computers, consisting of processor and peripheral cards, were produced by a number of manufacturers. The bus formed the basis for homebrew computers whose builders (e.g., the Homebrew Computer Club) implemented drivers for CP/M and MP/M. These microcomputers ran the gamut from hobbyist toy to small business workstation and were common in early home computers until the advent of the IBM PC. Architecture The bus is a passive backplane of 100-pin printed circuit board edge connectors wired in parallel. Circuit cards serving the functions of CPU, memory, or I/O interface plugged into these connectors. The bus signal definitions closely follow those of an 8080 microprocessor system, since the Intel 8080 microprocessor was the first microprocessor hosted on the bus. The 100 lines of the bus can be grouped into four types: 1) Power, 2) Data, 3) Address, and 4) Clock and control. Power supplied on the bus is bulk unregulated +8 V DC and ±16 V DC, designed to be regulated on the cards: +5 V for TTL ICs, −5 V and +12 V for the Intel 8080 CPU, ±12 V for RS-232 line driver ICs, and +12 V for disk drive motors. The onboard voltage regulation is typically performed by devices of the 78xx family (for example, a 7805 device to produce +5 volts). These are linear regulators which are commonly mounted on heat sinks. The bi-directional 8-bit data bus of the Intel 8080 is split into two unidirectional 8-bit data buses. The processor could use only one of these at a time. The Sol-20 used a variation that had only a single 8-bit bus and used the now-unused pins as signal grounds to reduce electronic noise. The direction of the bus, in or out, was signaled using the otherwise unused DBIN pin. This became universal in the market as well, making the second bus superfluous. Later, these two 8-bit buses would be combined to support a 16-bit data width for more advanced processors, using the Sol's system to signal the direction. The address bus is 16 bits wide in the initial implementation and was later extended to 24 bits. A bus control signal can put these lines in a tri-state condition to allow direct memory access. The Cromemco Dazzler, for example, is an early card that retrieved digital images from memory using direct memory access. Clock and control signals are used to manage the traffic on the bus. For example, the DO Disable line will tristate the address lines during direct memory access. Unassigned lines of the original bus specification were later assigned to support more advanced processors. For example, the Zilog Z-80 processor has a non-maskable interrupt line that the Intel 8080 processor does not. One unassigned line of the bus was then reassigned to support the non-maskable interrupt request. History During the design of the Altair, the hardware required to make a usable machine was not available in time for the January 1975 launch date. The designer, Ed Roberts, also had the problem of the backplane taking up too much room. Attempting to avoid these problems, he placed the existing components in a case with additional "slots", so that the missing components could be plugged in later when they became available. The backplane is split into four separate cards, with the CPU on a fifth. 
He then looked for an inexpensive source of connectors, and came across a supply of military surplus 100-pin edge connectors. The 100-pin bus was created by an anonymous draftsman, who selected the connector from a parts catalog and arbitrarily assigned signal names to groups of connector pins. A burgeoning industry of "clone" machines followed the introduction of the Altair in 1975. Most of these used the same bus layout as the Altair, creating a new industry standard. These companies were forced to refer to the system as the "Altair bus", and wanted another name in order to avoid referring to their competitor when describing their own system. The "S-100" name, short for "Standard 100", was coined by Harry Garland and Roger Melen, co-founders of Cromemco. While on a flight to attend the Atlantic City PC '76 microcomputer conference in August 1976, they shared the cabin with Bob Marsh and Lee Felsenstein of Processor Technology. Melen went over to convince them to adopt the same name. He had a beer in his hand, and when the plane hit a bump, Melen spilled some of the beer on Marsh. Marsh agreed to use the name, which Melen ascribes to Marsh's wanting to get Melen, and his beer, to leave. The term first appeared in print in a Cromemco advertisement in the November 1976 issue of Byte magazine. The first symposium on the bus, moderated by Jim Warren, was held November 20, 1976 at Diablo Valley College with a panel consisting of Harry Garland, George Morrow, and Lee Felsenstein. Just one year later, the S-100 bus would be described as "the most used busing standard ever developed in the computer industry." Cromemco was the largest of the S-100 manufacturers, followed by Vector Graphic and North Star Computers. Other innovators were companies such as Alpha Microsystems, IMS Associates, Inc., Godbout Electronics (later CompuPro), and Ithaca InterSystems. In May 1984, Microsystems published a comprehensive product directory listing over 500 S-100/IEEE-696 products from over 150 companies. The bus signals were simple to create using an 8080 CPU, but increasingly less so when using other processors like the 68000, which required more board space for signal-conversion logic. Nonetheless, by 1984, eleven different processors were hosted on the bus, from the 8-bit Intel 8080 to the 16-bit Zilog Z-8000. In 1986, Cromemco introduced the XXU card, designed by Ed Lupin, utilizing a 32-bit Motorola 68020 processor. 
IEEE-696 Standard 
As the bus gained momentum, there was a need to develop a formal specification of the bus to help assure compatibility of products produced by different manufacturers. There was also a need to extend the bus so that it could support processors more capable than the Intel 8080 used in the original Altair computer. In May 1978, George Morrow and Howard Fullmer published a "Proposed Standard for the S-100 Bus", noting that 150 vendors were already supplying products for the bus. This proposed standard documented the 8-bit data path and 16-bit address path of the bus and stated that consideration was being given to extending the data path to 16 bits and the address path to 24 bits. In July 1979, Kells Elmquist, Howard Fullmer, David Gustavson, and George Morrow published a "Standard Specification for S-100 Bus Interface Devices". In this specification the data path was extended to 16 bits and the address path was extended to 24 bits. 
The IEEE 696 Working Group, chaired by Mark Garetz, continued to develop the specification, which was proposed as an IEEE standard and approved by the IEEE Computer Society on June 10, 1982. The American National Standards Institute (ANSI) approved the IEEE standard on September 8, 1983. The computer bus structure developed by Ed Roberts for the Altair 8800 computer had been extended, rigorously documented, and now designated as the American National Standard IEEE Std 696-1983. 
Retirement 
IBM introduced the IBM Personal Computer in 1981 and followed it with increasingly capable models: the XT in 1983 and the AT in 1984. The success of these computers, which used IBM's own, incompatible bus architecture, cut deeply into the market for S-100 bus products. In May 1984, Sol Libes (who had been a member of the IEEE-696 Working Group) wrote in Microsystems: "there is no doubt that the S-100 market can now be considered a mature industry with only moderate growth potential, compared to the IBM PC-compatible market". As the IBM PC products captured the low end of the market, S-100 machines moved up-scale to more powerful OEM and multiuser systems. Banks of S-100 bus computers were used, for example, to process the trades at the Chicago Mercantile Exchange; the United States Air Force deployed S-100 bus machines for their mission planning systems. However, throughout the 1980s the market for S-100 bus machines for the hobbyist, for personal use, and even for small business was on the decline. The market for S-100 bus products continued to contract through the early 1990s, as IBM-compatible computers became more capable. In 1992, the Chicago Mercantile Exchange, for example, replaced their S-100 bus computers with IBM PS/2 machines. By 1994, the S-100 bus industry had contracted sufficiently that the IEEE did not see a need to continue supporting the IEEE-696 standard. The IEEE-696 standard was retired on June 14, 1994. 
References 
External links 
"S100 Computers", a website containing many photos of S-100 cards, documentation, and history 
"Cromemco-based S-100 micro-computer", Robert Kuhmann's images of several S-100 cards 
"Herb's S-100 Stuff", Herbert Johnson's collection of S-100 history 
"IEEE-696 / S-100 Bus Documentation and Manuals Archive", Howard Harte's manuals collection 
Computer buses S-100 IEEE standards Computer-related introductions in 1974 Cromemco
S-100 bus
[ "Technology" ]
2,015
[ "Computer standards", "IEEE standards" ]
152,440
https://en.wikipedia.org/wiki/Stellar%20nucleosynthesis
In astrophysics, stellar nucleosynthesis is the creation of chemical elements by nuclear fusion reactions within stars. Stellar nucleosynthesis has occurred since the original creation of hydrogen, helium and lithium during the Big Bang. As a predictive theory, it yields accurate estimates of the observed abundances of the elements. It explains why the observed abundances of elements change over time and why some elements and their isotopes are much more abundant than others. The theory was initially proposed by Fred Hoyle in 1946, and he refined it in 1954. Further advances were made, especially to nucleosynthesis by neutron capture of the elements heavier than iron, by Margaret and Geoffrey Burbidge, William Alfred Fowler and Fred Hoyle in their famous 1957 B2FH paper, which became one of the most heavily cited papers in astrophysics history. Stars evolve because of changes in their composition (the abundance of their constituent elements) over their lifespans, first by burning hydrogen (main sequence star), then helium (horizontal branch star), and progressively burning higher elements. However, this does not by itself significantly alter the abundances of elements in the universe as the elements are contained within the star. Later in its life, a low-mass star will slowly eject its atmosphere via stellar wind, forming a planetary nebula, while a higher-mass star will eject mass via a sudden catastrophic event called a supernova. The term supernova nucleosynthesis is used to describe the creation of elements during the explosion of a massive star or white dwarf. The advanced sequence of burning fuels is driven by gravitational collapse and its associated heating, resulting in the subsequent burning of carbon, oxygen and silicon. However, most of the nucleosynthesis in the mass range A = 28–56 (from silicon to nickel) is actually caused by the upper layers of the star collapsing onto the core, creating a compressional shock wave rebounding outward. The shock front briefly raises temperatures by roughly 50%, thereby causing furious burning for about a second. This final burning in massive stars, called explosive nucleosynthesis or supernova nucleosynthesis, is the final epoch of stellar nucleosynthesis. A stimulus to the development of the theory of nucleosynthesis was the discovery of variations in the abundances of elements found in the universe. The need for a physical description was already inspired by the relative abundances of the chemical elements in the solar system. Those abundances, when plotted on a graph as a function of the atomic number of the element, have a jagged sawtooth shape that varies by factors of tens of millions (see history of nucleosynthesis theory). This suggested a natural process that is not random. A second stimulus to understanding the processes of stellar nucleosynthesis occurred during the 20th century, when it was realized that the energy released from nuclear fusion reactions accounted for the longevity of the Sun as a source of heat and light. 
History 
In 1920, Arthur Eddington, on the basis of the precise measurements of atomic masses by F.W. Aston and a preliminary suggestion by Jean Perrin, proposed that stars obtained their energy from nuclear fusion of hydrogen to form helium and raised the possibility that the heavier elements are produced in stars. This was a preliminary step toward the idea of stellar nucleosynthesis. 
In 1928 George Gamow derived what is now called the Gamow factor, a quantum-mechanical formula yielding the probability for two colliding nuclei to overcome the electrostatic Coulomb barrier between them and approach each other closely enough to undergo nuclear reaction due to the strong nuclear force, which is effective only at very short distances. In the following decade the Gamow factor was used by Atkinson and Houtermans and later by Edward Teller and Gamow himself to derive the rate at which nuclear reactions would occur at the high temperatures believed to exist in stellar interiors. In 1939, in a paper entitled "Energy Production in Stars", Hans Bethe analyzed the different possibilities for reactions by which hydrogen is fused into helium. He defined two processes that he believed to be the sources of energy in stars. The first one, the proton–proton chain reaction, is the dominant energy source in stars with masses up to about the mass of the Sun. The second process, the carbon–nitrogen–oxygen cycle, which was also considered by Carl Friedrich von Weizsäcker in 1938, is more important in more massive main-sequence stars. These works concerned the energy generation capable of keeping stars hot. A clear physical description of the proton–proton chain and of the CNO cycle appears in a 1968 textbook. Bethe's two papers did not address the creation of heavier nuclei, however. That theory was begun by Fred Hoyle in 1946 with his argument that a collection of very hot nuclei would assemble thermodynamically into iron. Hoyle followed that in 1954 with a paper describing how advanced fusion stages within massive stars would synthesize the elements from carbon to iron in mass. Hoyle's theory was extended to other processes, beginning with the publication of the 1957 review paper "Synthesis of the Elements in Stars" by Burbidge, Burbidge, Fowler and Hoyle, more commonly referred to as the B2FH paper. This review paper collected and refined earlier research into a heavily cited picture that gave promise of accounting for the observed relative abundances of the elements; but it did not itself enlarge Hoyle's 1954 picture for the origin of primary nuclei as much as many assumed, except in the understanding of nucleosynthesis of those elements heavier than iron by neutron capture. Significant improvements were made by Alastair G. W. Cameron and by Donald D. Clayton. In 1957 Cameron presented his own independent approach to nucleosynthesis, informed by Hoyle's example, and introduced computers into time-dependent calculations of evolution of nuclear systems. Clayton calculated the first time-dependent models of the s-process in 1961 and of the r-process in 1965, as well as of the burning of silicon into the abundant alpha-particle nuclei and iron-group elements in 1968, and discovered radiogenic chronologies for determining the age of the elements. 
Key reactions 
The most important reactions in stellar nucleosynthesis: 
Hydrogen fusion: 
Deuterium fusion 
The proton–proton chain 
The carbon–nitrogen–oxygen cycle 
Helium fusion: 
The triple-alpha process 
The alpha process 
Fusion of heavier elements: 
Lithium burning: a process found most commonly in brown dwarfs 
Carbon-burning process 
Neon-burning process 
Oxygen-burning process 
Silicon-burning process 
Production of elements heavier than iron: 
Neutron capture: the r-process and the s-process 
Proton capture: the rp-process and the p-process 
Photodisintegration 
Hydrogen fusion 
Hydrogen fusion (nuclear fusion of four protons to form a helium-4 nucleus) is the dominant process that generates energy in the cores of main-sequence stars. It is also called "hydrogen burning", which should not be confused with the chemical combustion of hydrogen in an oxidizing atmosphere. There are two predominant processes by which stellar hydrogen fusion occurs: the proton–proton chain and the carbon–nitrogen–oxygen (CNO) cycle. Ninety percent of all stars, with the exception of white dwarfs, are fusing hydrogen by these two processes. In the cores of lower-mass main-sequence stars such as the Sun, the dominant energy production process is the proton–proton chain reaction. This creates a helium-4 nucleus through a sequence of reactions that begin with the fusion of two protons to form a deuterium nucleus (one proton plus one neutron) along with an ejected positron and neutrino. In each complete fusion cycle, the proton–proton chain reaction releases about 26.2 MeV. The proton–proton chain has a temperature dependence of approximately T^4; a 10% rise of temperature would therefore increase energy production by this method by 46%. Hence, this hydrogen fusion process can occur in up to a third of the star's radius and occupy half the star's mass. For stars above 35% of the Sun's mass, the energy flux toward the surface is sufficiently low that energy transfer from the core region occurs by radiative heat transfer, rather than by convective heat transfer. As a result, there is little mixing of fresh hydrogen into the core or of fusion products outward. In higher-mass stars, the dominant energy production process is the CNO cycle, which is a catalytic cycle that uses nuclei of carbon, nitrogen and oxygen as intermediaries and in the end produces a helium nucleus as with the proton–proton chain. During a complete CNO cycle, 25.0 MeV of energy is released. The difference in energy production of this cycle, compared to the proton–proton chain reaction, is accounted for by the energy lost through neutrino emission. The CNO cycle is highly sensitive to temperature, with rates proportional to T^16 to T^20; a 10% rise of temperature would produce a roughly 350% rise in energy production. About 90% of the CNO cycle energy generation occurs within the inner 15% of the star's mass, hence it is strongly concentrated at the core. This results in such an intense outward energy flux that convective energy transfer becomes more important than radiative transfer. As a result, the core region becomes a convection zone, which stirs the hydrogen fusion region and keeps it well mixed with the surrounding proton-rich region. This core convection occurs in stars where the CNO cycle contributes more than 20% of the total energy. As the star ages and the core temperature increases, the region occupied by the convection zone slowly shrinks from 20% of the mass down to the inner 8% of the mass. 
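The temperature sensitivities quoted above are easy to verify numerically; a minimal Python sketch, using only the exponents given in the text:

# Rate proportional to T^n: fractional increase in energy output for a 10% temperature rise.
for label, n in [("p-p chain (n=4)", 4), ("CNO cycle (n=16)", 16), ("CNO cycle (n=20)", 20)]:
    print(f"{label}: +{1.10 ** n - 1.0:.0%}")
# p-p chain (n=4): +46%     -- matches the 46% quoted above
# CNO cycle (n=16): +359%   -- roughly the ~350% figure in the text
# CNO cycle (n=20): +573%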
The Sun produces on the order of 1% of its energy from the CNO cycle. The type of hydrogen fusion process that dominates in a star is determined by the temperature dependency differences between the two reactions. The proton–proton chain reaction starts at temperatures of about 4×10^6 K, making it the dominant fusion mechanism in smaller stars. A self-maintaining CNO chain requires a higher temperature of approximately 15×10^6 K, but thereafter its efficiency increases more rapidly with rising temperature than does that of the proton–proton reaction. Above approximately 17×10^6 K, the CNO cycle becomes the dominant source of energy. This temperature is achieved in the cores of main-sequence stars with at least 1.3 times the mass of the Sun. The Sun itself has a core temperature of about 15.7×10^6 K. As a main-sequence star ages, the core temperature will rise, resulting in a steadily increasing contribution from its CNO cycle. 
Helium fusion 
Main sequence stars accumulate helium in their cores as a result of hydrogen fusion, but the core does not become hot enough to initiate helium fusion. Helium fusion first begins when a star leaves the red giant branch after accumulating sufficient helium in its core to ignite it. In stars around the mass of the Sun, this begins at the tip of the red giant branch with a helium flash from a degenerate helium core, and the star moves to the horizontal branch where it burns helium in its core. More massive stars ignite helium in their core without a flash and execute a blue loop before reaching the asymptotic giant branch. Such a star initially moves away from the AGB toward bluer colours, then loops back again to what is called the Hayashi track. An important consequence of blue loops is that they give rise to classical Cepheid variables, of central importance in determining distances in the Milky Way and to nearby galaxies. Despite the name, stars on a blue loop from the red giant branch are typically not blue in colour but are rather yellow giants, possibly Cepheid variables. They fuse helium until the core is largely carbon and oxygen. The most massive stars become supergiants when they leave the main sequence and quickly start helium fusion as they become red supergiants. After the helium is exhausted in the core of a star, helium fusion will continue in a shell around the carbon–oxygen core. In all cases, helium is fused to carbon via the triple-alpha process, i.e., three helium nuclei are transformed into carbon via 8Be. This can then form oxygen, neon, and heavier elements via the alpha process. In this way, the alpha process preferentially produces elements with even numbers of protons by the capture of helium nuclei. Elements with odd numbers of protons are formed by other fusion pathways. 
Reaction rate 
The reaction rate density between species A and B, having number densities n_A and n_B, is given by r = n_A n_B k, where k is the reaction rate constant of each single elementary binary reaction composing the nuclear fusion process: k = ⟨σ(v) v⟩, where σ(v) is the cross-section at relative velocity v, and averaging is performed over all velocities. Semi-classically, the cross section is proportional to πλ², where λ = h/p is the de Broglie wavelength; thus semi-classically the cross section is proportional to 1/E. However, since the reaction involves quantum tunneling, there is an exponential damping at low energies that depends on the Gamow factor E_G, giving an Arrhenius equation: σ(E) = (S(E)/E) exp(−√(E_G/E)), where the astrophysical S-factor S(E) depends on the details of the nuclear interaction and has the dimension of an energy multiplied by a cross section. 
One then integrates over all energies to get the total reaction rate, using the Maxwell–Boltzmann distribution and the relation k = ⟨σv⟩ = √(8/(π m_R)) (kT)^(−3/2) ∫ S(E) exp(−E/kT − √(E_G/E)) dE, where m_R = m₁m₂/(m₁ + m₂) is the reduced mass. Since this integration has an exponential damping at high energies of the form exp(−E/kT) and at low energies from the Gamow factor exp(−√(E_G/E)), the integral almost vanishes everywhere except around the peak, called the Gamow peak, at E0, where the exponent −E/kT − √(E_G/E) is maximal; setting its derivative with respect to E to zero gives E0 = (√(E_G) kT/2)^(2/3). The exponent can then be approximated around E0 as −E/kT − √(E_G/E) ≈ −3E0/kT − (E − E0)²/(4E0kT/3), and the reaction rate is approximated as k ≈ (4√2/√3) √(E0/m_R) (S(E0)/kT) exp(−3E0/kT). Values of S(E0) vary enormously from reaction to reaction, and are damped by a huge factor when the process involves a beta decay, due to the relation between the intermediate bound state (e.g. diproton) half-life and the beta decay half-life, as in the proton–proton chain reaction. Note that typical core temperatures in main-sequence stars give kT of the order of a keV. Thus, the limiting reaction in the CNO cycle, proton capture by nitrogen-14, has S(E0) ~ S(0) = 3.5 keV·b, while the limiting reaction in the proton–proton chain reaction, the creation of deuterium from two protons, has a much lower S(E0) ~ S(0) = 4×10−22 keV·b. Incidentally, since the former reaction has a much higher Gamow factor, and due to the relative abundance of elements in typical stars, the two reaction rates are equal at a temperature value that is within the core temperature ranges of main-sequence stars. 
References 
Notes 
Citations 
Further reading 
External links 
"How the Sun Shines", by John N. Bahcall (Nobel prize site, accessed 6 January 2020) 
Nucleosynthesis in NASA's Cosmicopia 
Nucleosynthesis Nuclear physics Nucleosynthesis, Stellar Concepts in stellar astronomy Concepts in astronomy
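As a numeric sketch of the Gamow peak formula above, applied to the p + p reaction at solar-core conditions (the values E_G ≈ 493 keV and kT ≈ 1.3 keV are standard assumptions supplied here, not figures from the article):

from math import exp

# Gamow peak E0 = (sqrt(E_G) * kT / 2) ** (2/3); energies in keV.
E_G = 493.0   # assumed Gamow energy for p + p (not from the article)
kT = 1.3      # assumed thermal energy at ~15 million K (not from the article)

E0 = ((E_G ** 0.5) * kT / 2.0) ** (2.0 / 3.0)
print(f"Gamow peak at about {E0:.1f} keV")            # ~5.9 keV
# The peak sits far above kT: the reacting protons come from the sparse
# high-energy tail of the Maxwell-Boltzmann distribution.
print(f"exp(-3*E0/kT) = {exp(-3.0 * E0 / kT):.1e}")   # tiny damping factor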
Stellar nucleosynthesis
[ "Physics", "Chemistry", "Astronomy" ]
3,098
[ "Nuclear fission", "Concepts in astrophysics", "Concepts in astronomy", "Astrophysics", "Nucleosynthesis", "Nuclear physics", "Concepts in stellar astronomy", "Nuclear fusion", "Astronomical sub-disciplines", "Stellar astronomy" ]
152,464
https://en.wikipedia.org/wiki/Nuclide
Nuclides (or nucleides, from nucleus, also known as nuclear species) are a class of atoms characterized by their number of protons, Z, their number of neutrons, N, and their nuclear energy state. The word nuclide was coined by the American nuclear physicist Truman P. Kohman in 1947. Kohman defined nuclide as a "species of atom characterized by the constitution of its nucleus" containing a certain number of neutrons and protons. The term thus originally focused on the nucleus. 
Nuclides vs isotopes 
A nuclide is a species of an atom with a specific number of protons and neutrons in the nucleus, for example carbon-13 with 6 protons and 7 neutrons. The nuclide concept (referring to individual nuclear species) emphasizes nuclear properties over chemical properties, while the isotope concept (grouping all atoms of each element) emphasizes chemical over nuclear. The neutron number has large effects on nuclear properties, but its effect on chemical reactions is negligible for most elements. Even in the case of the very lightest elements, where the ratio of neutron number to atomic number varies the most between isotopes, it usually has only a small effect, but it matters in some circumstances. For hydrogen, the lightest element, the isotope effect is large enough to affect biological systems strongly. In the case of helium, helium-4 obeys Bose–Einstein statistics, while helium-3 obeys Fermi–Dirac statistics. Since isotope is the older term, it is better known than nuclide, and is still occasionally used in contexts in which nuclide might be more appropriate, such as nuclear technology and nuclear medicine. 
Types of nuclides 
Although the words nuclide and isotope are often used interchangeably, being isotopes is actually only one relation between nuclides. Some other relations are named as follows. A set of nuclides with equal proton number (atomic number), i.e., of the same chemical element but different neutron numbers, are called isotopes of the element. Particular nuclides are still often loosely called "isotopes", but the term "nuclide" is the correct one in general (i.e., when Z is not fixed). In a similar manner, a set of nuclides with equal mass number A, but different atomic number, are called isobars (isobar = equal in weight), and isotones are nuclides of equal neutron number but different proton numbers. Likewise, nuclides with the same neutron excess (N − Z) are called isodiaphers. The name isotone was derived from the name isotope to emphasize that in the first group of nuclides it is the number of neutrons (n) that is constant, whereas in the second it is the number of protons (p). See Isotope#Notation for an explanation of the notation used for different nuclide or isotope types. Nuclear isomers are members of a set of nuclides with equal proton number and equal mass number (thus making them by definition the same isotope), but different states of excitation. An example is the two states of the single isotope technetium-99 shown among the decay schemes. Each of these two states (technetium-99m and technetium-99) qualifies as a different nuclide, illustrating one way that nuclides may differ from isotopes (an isotope may consist of several different nuclides of different excitation states). The longest-lived non-ground state nuclear isomer is the nuclide tantalum-180m, which has a half-life in excess of 1,000 trillion years. This nuclide occurs primordially, and has never been observed to decay to the ground state. 
(In contrast, the ground-state nuclide tantalum-180 does not occur primordially, since it decays with a half-life of only 8 hours to 180Hf (86%) or 180W (14%).) There are 251 nuclides in nature that have never been observed to decay. They occur among the 80 different elements that have one or more stable isotopes. See stable nuclide and primordial nuclide. Unstable nuclides are radioactive and are called radionuclides. Their decay products ('daughter' products) are called radiogenic nuclides. 
Origins of naturally occurring radionuclides 
Natural radionuclides may be conveniently subdivided into three types. First, those whose half-lives t1/2 are at least 2% as long as the age of the Earth (for practical purposes, nuclides with half-lives less than 10% of the age of the Earth are difficult to detect). These are remnants of nucleosynthesis that occurred in stars before the formation of the Solar System. For example, the isotope uranium-238 (t1/2 = 4.5×10^9 years) is still fairly abundant in nature, but the shorter-lived isotope uranium-235 (t1/2 = 0.7×10^9 years) is 138 times rarer. About 34 of these nuclides have been discovered (see List of nuclides and Primordial nuclide for details). The second group of radionuclides that exist naturally consists of radiogenic nuclides such as radium-226 (t1/2 = 1,600 years), an isotope of radium, which are formed by radioactive decay. They occur in the decay chains of primordial isotopes of uranium or thorium. Some of these nuclides are very short-lived, such as isotopes of francium. There exist about 51 of these daughter nuclides that have half-lives too short to be primordial, and which exist in nature solely due to decay from longer-lived radioactive primordial nuclides. The third group consists of nuclides that are continuously being made in another fashion that is not simple spontaneous radioactive decay (i.e., only one atom involved with no incoming particle) but instead involves a natural nuclear reaction. These occur when atoms react with natural neutrons (from cosmic rays, spontaneous fission, or other sources), or are bombarded directly with cosmic rays. The latter, if non-primordial, are called cosmogenic nuclides. Other types of natural nuclear reactions produce nuclides that are said to be nucleogenic nuclides. Examples of nuclides made by nuclear reactions are cosmogenic carbon-14 (radiocarbon), which is made by cosmic-ray bombardment of other elements, and nucleogenic plutonium-239, which is still being created by neutron bombardment of natural uranium-238 as a result of natural fission in uranium ores. Cosmogenic nuclides may be either stable or radioactive. If they are stable, their existence must be deduced against a background of stable nuclides, since every known stable nuclide is present on Earth primordially. 
Artificially produced nuclides 
Beyond the naturally occurring nuclides, more than 3000 radionuclides of varying half-lives have been artificially produced and characterized. The known nuclides are shown in Table of nuclides. A list of primordial nuclides is given sorted by element, at List of elements by stability of isotopes. List of nuclides is sorted by half-life, for the 905 nuclides with half-lives longer than one hour. 
Summary table for numbers of each class of nuclides 
This is a summary table for the 905 nuclides with half-lives longer than one hour, given in list of nuclides. Note that numbers are not exact, and may change slightly in the future, if some "stable" nuclides are observed to be radioactive with very long half-lives. 
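Returning to the uranium example above, the half-life arithmetic can be replayed in a few lines. A Python sketch using standard half-life values and Solar System age (assumed here, not given in the article):

from math import exp, log

t_half_235, t_half_238 = 0.704e9, 4.468e9   # half-lives in years (standard values)
age = 4.57e9                                # approximate age of the Solar System
ratio_now = 1.0 / 138.0                     # U-235 : U-238 today, from the text

lam_235 = log(2) / t_half_235               # decay constants, per year
lam_238 = log(2) / t_half_238

# N(t) = N0 * exp(-lam * t); run both decays backwards by `age`.
ratio_then = ratio_now * exp((lam_235 - lam_238) * age)
print(f"primordial U-235/U-238 ratio ~ {ratio_then:.2f}")   # ~0.3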
Nuclear properties and stability 
Atomic nuclei other than that of hydrogen-1 have protons and neutrons bound together by the residual strong force. Because protons are positively charged, they repel each other. Neutrons, which are electrically neutral, stabilize the nucleus in two ways. Their copresence pushes protons slightly apart, reducing the electrostatic repulsion between the protons, and they exert the attractive nuclear force on each other and on protons. For this reason, one or more neutrons are necessary for two or more protons to be bound into a nucleus. As the number of protons increases, so does the ratio of neutrons to protons necessary to ensure a stable nucleus (see graph). For example, although the neutron–proton ratio of helium-3 is 1:2, the neutron–proton ratio of uranium-238 is greater than 3:2. A number of lighter elements have stable nuclides with the ratio 1:1 (Z = N). The nuclide calcium-40 is observationally the heaviest stable nuclide with the same number of neutrons and protons. All stable nuclides heavier than calcium-40 contain more neutrons than protons. 
Even and odd nucleon numbers 
The proton–neutron ratio is not the only factor affecting nuclear stability. Stability depends also on the even or odd parity of the atomic number Z, of the neutron number N and, consequently, of their sum, the mass number A. Oddness of both Z and N tends to lower the nuclear binding energy, making odd nuclei generally less stable. This remarkable difference of nuclear binding energy between neighbouring nuclei, especially of odd-A isobars, has important consequences: unstable isotopes with a nonoptimal number of neutrons or protons decay by beta decay (including positron decay), electron capture or more exotic means, such as spontaneous fission and cluster decay. The majority of stable nuclides are even-proton–even-neutron, where all numbers Z, N, and A are even. The odd-A stable nuclides are divided (roughly evenly) into odd-proton–even-neutron and even-proton–odd-neutron nuclides. Odd-proton–odd-neutron nuclides (and nuclei) are the least common. 
See also 
Isotope (much more information on abundance of stable nuclides) 
List of elements by stability of isotopes 
List of nuclides (sorted by half-life) 
Table of nuclides 
Alpha nuclide 
Monoisotopic element 
Mononuclidic element 
Primordial element 
Radionuclide 
Hypernucleus 
References 
External links 
Livechart - Table of Nuclides at The International Atomic Energy Agency 
Nuclear physics
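As a closing illustration of the definitions in this article, a small Python sketch that names the relation between two nuclides given as (Z, N) pairs and reports the parity class discussed above (the function names are invented for illustration):

def relation(zn1, zn2):
    """Name the relation between two nuclides given as (Z, N) pairs."""
    (z1, n1), (z2, n2) = zn1, zn2
    if (z1, n1) == (z2, n2):
        return "same nuclide (or nuclear isomers, if excitation states differ)"
    if z1 == z2:
        return "isotopes"      # equal proton number Z
    if n1 == n2:
        return "isotones"      # equal neutron number N
    if z1 + n1 == z2 + n2:
        return "isobars"       # equal mass number A
    if n1 - z1 == n2 - z2:
        return "isodiaphers"   # equal neutron excess N - Z
    return "no named relation"

def parity_class(z, n):
    """Even/odd parity classification (even-even nuclides are the most stable)."""
    return f"{'even' if z % 2 == 0 else 'odd'}-Z, {'even' if n % 2 == 0 else 'odd'}-N"

print(relation((6, 7), (6, 8)))   # isotopes (carbon-13 and carbon-14)
print(relation((6, 8), (7, 7)))   # isobars (both have A = 14)
print(parity_class(20, 20))       # even-Z, even-N (calcium-40)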
Nuclide
[ "Physics", "Chemistry" ]
2,140
[ "Isotopes", "Nuclear physics" ]
152,703
https://en.wikipedia.org/wiki/Hilbert%27s%20third%20problem
The third of Hilbert's list of mathematical problems, presented in 1900, was the first to be solved. The problem is related to the following question: given any two polyhedra of equal volume, is it always possible to cut the first into finitely many polyhedral pieces which can be reassembled to yield the second? Based on earlier writings by Carl Friedrich Gauss, David Hilbert conjectured that this is not always possible. This was confirmed within the year by his student Max Dehn, who proved that the answer in general is "no" by producing a counterexample. The answer for the analogous question about polygons in 2 dimensions is "yes" and had been known for a long time; this is the Wallace–Bolyai–Gerwien theorem. Unknown to Hilbert and Dehn, Hilbert's third problem was also proposed independently by Władysław Kretkowski for a math contest of 1882 by the Academy of Arts and Sciences of Kraków, and was solved by Ludwik Antoni Birkenmajer with a different method than Dehn's. Birkenmajer did not publish the result, and the original manuscript containing his solution was rediscovered years later. 
History and motivation 
The formula for the volume of a pyramid, V = (1/3) × base area × height, had been known to Euclid, but all proofs of it involve some form of limiting process or calculus, notably the method of exhaustion or, in more modern form, Cavalieri's principle. Similar formulas in plane geometry can be proven with more elementary means. Gauss regretted this defect in two of his letters to Christian Ludwig Gerling, who proved that two symmetric tetrahedra are equidecomposable. Gauss's letters were the motivation for Hilbert: is it possible to prove the equality of volume using elementary "cut-and-glue" methods? Because if not, then an elementary proof of Euclid's result is also impossible. 
Dehn's proof 
Dehn's proof is an instance in which abstract algebra is used to prove an impossibility result in geometry. Other examples are doubling the cube and trisecting the angle. Two polyhedra are called scissors-congruent if the first can be cut into finitely many polyhedral pieces that can be reassembled to yield the second. Any two scissors-congruent polyhedra have the same volume. Hilbert asks about the converse. For every polyhedron P, Dehn defines a value, now known as the Dehn invariant D(P), with the property that, if P is cut into polyhedral pieces P₁, P₂, ..., Pₙ, then D(P) = D(P₁) + D(P₂) + ... + D(Pₙ). In particular, if two polyhedra are scissors-congruent, then they have the same Dehn invariant. He then shows that every cube has Dehn invariant zero while every regular tetrahedron has non-zero Dehn invariant. Therefore, these two shapes cannot be scissors-congruent. A polyhedron's invariant is defined based on the lengths of its edges and the angles between its faces. If a polyhedron is cut into two, some edges are cut into two, and the corresponding contributions to the Dehn invariants should therefore be additive in the edge lengths. Similarly, if a polyhedron is cut along an edge, the corresponding angle is cut into two. Cutting a polyhedron typically also introduces new edges and angles; their contributions must cancel out. The angles introduced when a cut passes through a face add to π, and the angles introduced around an edge interior to the polyhedron add to 2π. Therefore, the Dehn invariant is defined in such a way that integer multiples of angles of π give a net contribution of zero. 
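A toy numeric sketch of this bookkeeping in Python, ahead of the formal definition that follows: it treats the irrationality of arccos(1/3)/π as an externally supplied fact (Dehn's key lemma), since floating-point arithmetic cannot establish it:

from math import acos, pi

# A polyhedron is summarized here by its (edge_length, dihedral_angle) pairs.
cube = [(1.0, pi / 2)] * 12              # 12 unit edges, all angles pi/2
tetra = [(1.0, acos(1.0 / 3.0))] * 6     # 6 unit edges, angles arccos(1/3)

# Angles known (externally) to be rational multiples of pi vanish in R/(pi*Q).
RATIONAL_PI_MULTIPLES = (pi / 2,)

def toy_dehn_invariant(polyhedron):
    """Sum edge lengths per dihedral angle, dropping angles that are rational
    multiples of pi. Floating point stands in for the exact tensor algebra."""
    total = {}
    for length, angle in polyhedron:
        if any(abs(angle - a) < 1e-12 for a in RATIONAL_PI_MULTIPLES):
            continue   # contributes zero in the quotient
        key = round(angle, 12)
        total[key] = total.get(key, 0.0) + length
    return total

print(toy_dehn_invariant(cube))    # {}  -> invariant zero
print(toy_dehn_invariant(tetra))   # {1.230959417341: 6.0} -> nonzero
# arccos(1/3)/pi is irrational, so the tetrahedron's entry cannot cancel.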
All of the above requirements can be met by defining D(P) as an element of the tensor product of the real numbers ℝ (representing lengths of edges) and the quotient space ℝ/(ℚπ) (representing angles, with all rational multiples of π replaced by zero). For some purposes, this definition can be made using the tensor product of modules over ℤ (or equivalently of abelian groups), while other aspects of this topic make use of a vector space structure on the invariants, obtained by considering the two factors ℝ and ℝ/(ℚπ) to be vector spaces over ℚ and taking the tensor product of vector spaces over ℚ. This choice of structure in the definition does not make a difference in whether two Dehn invariants, defined in either way, are equal or unequal. For any edge e of a polyhedron P, let ℓ(e) be its length and let θ(e) denote the dihedral angle of the two faces of P that meet at e, measured in radians and considered modulo rational multiples of π. The Dehn invariant is then defined as D(P) = Σ ℓ(e) ⊗ θ(e), where the sum is taken over all edges e of the polyhedron P. It is a valuation. 
Further information 
In light of Dehn's theorem above, one might ask "which polyhedra are scissors-congruent?" Sydler (1965) showed that two polyhedra are scissors-congruent if and only if they have the same volume and the same Dehn invariant. Børge Jessen later extended Sydler's results to four dimensions. In 1990, Dupont and Sah provided a simpler proof of Sydler's result by reinterpreting it as a theorem about the homology of certain classical groups. Debrunner showed in 1980 that the Dehn invariant of any polyhedron with which all of three-dimensional space can be tiled periodically is zero. Jessen also posed the question of whether the analogue of his results remained true for spherical geometry and hyperbolic geometry. In these geometries, Dehn's method continues to work, and shows that when two polyhedra are scissors-congruent, their Dehn invariants are equal. However, it remains an open problem whether pairs of polyhedra with the same volume and the same Dehn invariant, in these geometries, are always scissors-congruent. 
Original question 
Hilbert's original question was more complicated: given any two tetrahedra T1 and T2 with equal base area and equal height (and therefore equal volume), is it always possible to find a finite number of tetrahedra, so that when these tetrahedra are glued in some way to T1 and also glued to T2, the resulting polyhedra are scissors-congruent? Dehn's invariant can be used to yield a negative answer also to this stronger question. 
See also 
Hill tetrahedron 
Onorato Nicoletti 
References 
Further reading 
External links 
Proof of Dehn's Theorem at Everything2 
Dehn Invariant at Everything2 
Euclidean solid geometry Geometric dissection Geometry problems
Hilbert's third problem
[ "Physics", "Mathematics" ]
1,348
[ "Geometry problems", "Euclidean solid geometry", "Hilbert's problems", "Space", "Geometry", "Spacetime", "Mathematical problems" ]
153,008
https://en.wikipedia.org/wiki/Knot%20theory
In topology, knot theory is the study of mathematical knots. While inspired by knots which appear in daily life, such as those in shoelaces and rope, a mathematical knot differs in that the ends are joined so it cannot be undone, the simplest knot being a ring (or "unknot"). In mathematical language, a knot is an embedding of a circle in 3-dimensional Euclidean space, ℝ³. Two mathematical knots are equivalent if one can be transformed into the other via a deformation of ℝ³ upon itself (known as an ambient isotopy); these transformations correspond to manipulations of a knotted string that do not involve cutting it or passing it through itself. Knots can be described in various ways. Using different description methods, there may be more than one description of the same knot. For example, a common method of describing a knot is a planar diagram called a knot diagram, in which any knot can be drawn in many different ways. Therefore, a fundamental problem in knot theory is determining when two descriptions represent the same knot. A complete algorithmic solution to this problem exists, which has unknown complexity. In practice, knots are often distinguished using a knot invariant, a "quantity" which is the same when computed from different descriptions of a knot. Important invariants include knot polynomials, knot groups, and hyperbolic invariants. The original motivation for the founders of knot theory was to create a table of knots and links, which are knots of several components entangled with each other. More than six billion knots and links have been tabulated since the beginnings of knot theory in the 19th century. To gain further insight, mathematicians have generalized the knot concept in several ways. Knots can be considered in other three-dimensional spaces and objects other than circles can be used; see knot (mathematics). For example, a higher-dimensional knot is an n-dimensional sphere embedded in (n+2)-dimensional Euclidean space. 
History 
Archaeologists have discovered that knot tying dates back to prehistoric times. Besides their uses such as recording information and tying objects together, knots have interested humans for their aesthetics and spiritual symbolism. Knots appear in various forms of Chinese artwork dating from several centuries BC (see Chinese knotting). The endless knot appears in Tibetan Buddhism, while the Borromean rings have made repeated appearances in different cultures, often representing strength in unity. The Celtic monks who created the Book of Kells lavished entire pages with intricate Celtic knotwork. A mathematical theory of knots was first developed in 1771 by Alexandre-Théophile Vandermonde, who explicitly noted the importance of topological features when discussing the properties of knots related to the geometry of position. Mathematical studies of knots began in the 19th century with Carl Friedrich Gauss, who defined the linking integral. In the 1860s, Lord Kelvin's theory that atoms were knots in the aether led to Peter Guthrie Tait's creation of the first knot tables for complete classification. Tait, in 1885, published a table of knots with up to ten crossings, and what came to be known as the Tait conjectures. This record motivated the early knot theorists, but knot theory eventually became part of the emerging subject of topology. These topologists in the early part of the 20th century—Max Dehn, J. W. Alexander, and others—studied knots from the point of view of the knot group and invariants from homology theory such as the Alexander polynomial. 
This would be the main approach to knot theory until a series of breakthroughs transformed the subject. In the late 1970s, William Thurston introduced hyperbolic geometry into the study of knots with the hyperbolization theorem. Many knots were shown to be hyperbolic knots, enabling the use of geometry in defining new, powerful knot invariants. The discovery of the Jones polynomial by Vaughan Jones in 1984, and subsequent contributions from Edward Witten, Maxim Kontsevich, and others, revealed deep connections between knot theory and mathematical methods in statistical mechanics and quantum field theory. A plethora of knot invariants have been invented since then, utilizing sophisticated tools such as quantum groups and Floer homology. In the last several decades of the 20th century, scientists became interested in studying physical knots in order to understand knotting phenomena in DNA and other polymers. Knot theory can be used to determine if a molecule is chiral (has a "handedness") or not. Tangles, strings with both ends fixed in place, have been effectively used in studying the action of topoisomerase on DNA. Knot theory may be crucial in the construction of quantum computers, through the model of topological quantum computation. 
Knot equivalence 
A knot is created by beginning with a one-dimensional line segment, wrapping it around itself arbitrarily, and then fusing its two free ends together to form a closed loop. Simply, we can say a knot is a "simple closed curve" (see Curve) — that is: a "nearly" injective and continuous function f: [0,1] → ℝ³, with the only "non-injectivity" being f(0) = f(1). Topologists consider knots and other entanglements such as links and braids to be equivalent if the knot can be pushed about smoothly, without intersecting itself, to coincide with another knot. The idea of knot equivalence is to give a precise definition of when two knots should be considered the same even when positioned quite differently in space. A formal mathematical definition is that two knots K₁, K₂ are equivalent if there is an orientation-preserving homeomorphism h: ℝ³ → ℝ³ with h(K₁) = K₂. What this definition of knot equivalence means is that two knots are equivalent when there is a continuous family of homeomorphisms of space onto itself, such that the last one of them carries the first knot onto the second knot. (In detail: Two knots K₁ and K₂ are equivalent if there exists a continuous mapping H: ℝ³ × [0,1] → ℝ³ such that a) for each t ∈ [0,1] the mapping taking x ∈ ℝ³ to H(x,t) is a homeomorphism of ℝ³ onto itself; b) H(x,0) = x for all x ∈ ℝ³; and c) H(K₁,1) = K₂. Such a function H is known as an ambient isotopy.) These two notions of knot equivalence agree exactly about which knots are equivalent: Two knots that are equivalent under the orientation-preserving homeomorphism definition are also equivalent under the ambient isotopy definition, because any orientation-preserving homeomorphism of ℝ³ to itself is the final stage of an ambient isotopy starting from the identity. Conversely, two knots equivalent under the ambient isotopy definition are also equivalent under the orientation-preserving homeomorphism definition, because the (final) stage of the ambient isotopy must be an orientation-preserving homeomorphism carrying one knot to the other. The basic problem of knot theory, the recognition problem, is determining the equivalence of two knots. Algorithms exist to solve this problem, with the first given by Wolfgang Haken in the late 1960s. Nonetheless, these algorithms can be extremely time-consuming, and a major issue in the theory is to understand how hard this problem really is. 
The special case of recognizing the unknot, called the unknotting problem, is of particular interest. In February 2021 Marc Lackenby announced a new unknot recognition algorithm that runs in quasi-polynomial time. 
Knot diagrams 
A useful way to visualise and manipulate knots is to project the knot onto a plane—think of the knot casting a shadow on the wall. A small change in the direction of projection will ensure that it is one-to-one except at the double points, called crossings, where the "shadow" of the knot crosses itself once transversely. At each crossing, to be able to recreate the original knot, the over-strand must be distinguished from the under-strand. This is often done by creating a break in the strand going underneath. The resulting diagram is an immersed plane curve with the additional data of which strand is over and which is under at each crossing. (These diagrams are called knot diagrams when they represent a knot and link diagrams when they represent a link.) Analogously, knotted surfaces in 4-space can be related to immersed surfaces in 3-space. A reduced diagram is a knot diagram in which there are no reducible crossings (also nugatory or removable crossings), or in which all of the reducible crossings have been removed. A petal projection is a type of projection in which, instead of forming double points, all strands of the knot meet at a single crossing point, connected to it by loops forming non-nested "petals". 
Reidemeister moves 
In 1927, working with this diagrammatic form of knots, J. W. Alexander and Garland Baird Briggs, and independently Kurt Reidemeister, demonstrated that two knot diagrams belonging to the same knot can be related by a sequence of three kinds of moves on the diagram. These operations, now called the Reidemeister moves, are: (I) twist and untwist in either direction; (II) move one strand completely over another; and (III) move a strand completely over or under a crossing. The proof that diagrams of equivalent knots are connected by Reidemeister moves relies on an analysis of what happens under the planar projection of the movement taking one knot to another. The movement can be arranged so that almost all of the time the projection will be a knot diagram, except at finitely many times when an "event" or "catastrophe" occurs, such as when more than two strands cross at a point or multiple strands become tangent at a point. A close inspection will show that complicated events can be eliminated, leaving only the simplest events: (1) a "kink" forming or being straightened out; (2) two strands becoming tangent at a point and passing through; and (3) three strands crossing at a point. These are precisely the Reidemeister moves. 
Knot invariants 
A knot invariant is a "quantity" that is the same for equivalent knots. For example, if the invariant is computed from a knot diagram, it should give the same value for two knot diagrams representing equivalent knots. An invariant may take the same value on two different knots, so by itself may be incapable of distinguishing all knots. An elementary invariant is tricolorability. "Classical" knot invariants include the knot group, which is the fundamental group of the knot complement, and the Alexander polynomial, which can be computed from the Alexander invariant, a module constructed from the infinite cyclic cover of the knot complement. In the late 20th century, invariants such as "quantum" knot polynomials, Vassiliev invariants and hyperbolic invariants were discovered. These aforementioned invariants are only the tip of the iceberg of modern knot theory. 
Knot polynomials 
A knot polynomial is a knot invariant that is a polynomial. 
Well-known examples include the Jones polynomial, the Alexander polynomial, and the Kauffman polynomial. A variant of the Alexander polynomial, the Alexander–Conway polynomial, is a polynomial in the variable z with integer coefficients. The Alexander–Conway polynomial is actually defined in terms of links, which consist of one or more knots entangled with each other. The concepts explained above for knots, e.g. diagrams and Reidemeister moves, also hold for links. Consider an oriented link diagram, i.e. one in which every component of the link has a preferred direction indicated by an arrow. For a given crossing of the diagram, let L₊, L₋, and L₀ be the oriented link diagrams resulting from changing the diagram as indicated in the figure: L₊ has the crossing as a positive crossing, L₋ has it switched, and L₀ has it smoothed. The original diagram might be either L₊ or L₋, depending on the chosen crossing's configuration. Then the Alexander–Conway polynomial, C(L), is recursively defined according to the rules: C(O) = 1, where O is any diagram of the unknot; and C(L₊) = C(L₋) + z C(L₀). The second rule is what is often referred to as a skein relation. To check that these rules give an invariant of an oriented link, one should determine that the polynomial does not change under the three Reidemeister moves. Many important knot polynomials can be defined in this way. The following is an example of a typical computation using a skein relation. It computes the Alexander–Conway polynomial of the trefoil knot. Applying the skein relation at one crossing of the trefoil, C(trefoil) = C(unknot) + z C(Hopf link): switching that crossing gives the unknot and smoothing it gives the Hopf link. Applying the relation to the Hopf link where indicated, C(Hopf link) = C(unlink) + z C(unknot): this gives a link deformable to one with 0 crossings (it is actually the unlink of two components) and an unknot. The unlink takes a bit of sneakiness: applying the relation to a one-kink diagram of the unknot gives C(unknot) = C(unknot) + z C(unlink), which implies that C(unlink of two components) = 0, since the first two polynomials are of the unknot and thus equal. Putting all this together shows C(trefoil) = 1 + z(0 + z·1) = 1 + z². Since the Alexander–Conway polynomial is a knot invariant, this shows that the trefoil is not equivalent to the unknot. So the trefoil really is "knotted". Actually, there are two trefoil knots, called the right and left-handed trefoils, which are mirror images of each other (take a diagram of the trefoil given above and change each crossing to the other way to get the mirror image). These are not equivalent to each other, meaning that they are not amphichiral. This was shown by Max Dehn, before the invention of knot polynomials, using group theoretical methods. But the Alexander–Conway polynomial of each kind of trefoil will be the same, as can be seen by going through the computation above with the mirror image. The Jones polynomial can in fact distinguish between the left- and right-handed trefoil knots. 
Hyperbolic invariants 
William Thurston proved many knots are hyperbolic knots, meaning that the knot complement (i.e., the set of points of 3-space not on the knot) admits a geometric structure, in particular that of hyperbolic geometry. The hyperbolic structure depends only on the knot, so any quantity computed from the hyperbolic structure is then a knot invariant. Geometry lets us visualize what the inside of a knot or link complement looks like by imagining light rays as traveling along the geodesics of the geometry. An example is provided by the picture of the complement of the Borromean rings. The inhabitant of this link complement is viewing the space from near the red component. The balls in the picture are views of horoball neighborhoods of the link. 
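Referring back to the trefoil computation above (the hyperbolic-invariant discussion resumes below), the same skein bookkeeping can be replayed symbolically. A small Python sketch using sympy, in which the diagrammatic resolutions are taken as given and only the polynomial algebra is automated:

import sympy

z = sympy.symbols("z")

C_unknot = sympy.Integer(1)   # rule: C(unknot) = 1
# Skein on a one-kink unknot diagram forces C(unlink) = 0:
C_unlink = sympy.Integer(0)

# Hopf link: switching one crossing gives the unlink, smoothing gives the unknot.
C_hopf = C_unlink + z * C_unknot      # = z

# Trefoil: switching one crossing gives the unknot, smoothing gives the Hopf link.
C_trefoil = C_unknot + z * C_hopf     # = 1 + z**2

print(sympy.expand(C_trefoil))        # z**2 + 1, distinct from C(unknot) = 1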
By thickening the link in a standard way, the horoball neighborhoods of the link components are obtained. Even though the boundary of a neighborhood is a torus, when viewed from inside the link complement, it looks like a sphere. Each link component shows up as infinitely many spheres (of one color) as there are infinitely many light rays from the observer to the link component. The fundamental parallelogram (which is indicated in the picture) tiles both vertically and horizontally and shows how to extend the pattern of spheres infinitely. This pattern, the horoball pattern, is itself a useful invariant. Other hyperbolic invariants include the shape of the fundamental parallelogram, the length of the shortest geodesic, and the volume. Modern knot and link tabulation efforts have utilized these invariants effectively. Fast computers and clever methods of obtaining these invariants make calculating these invariants, in practice, a simple task. 
Higher dimensions 
A knot in three dimensions can be untied when placed in four-dimensional space. This is done by changing crossings. Suppose one strand is behind another as seen from a chosen point. Lift it into the fourth dimension, so there is no obstacle (the front strand having no component there); then slide it forward, and drop it back, now in front. Analogies for the plane would be lifting a string up off the surface, or removing a dot from inside a circle. In fact, in four dimensions, any non-intersecting closed loop of one-dimensional string is equivalent to an unknot. First "push" the loop into a three-dimensional subspace, which is always possible, though technical to explain. Four-dimensional space occurs in classical knot theory, however, and an important topic is the study of slice knots and ribbon knots. A notorious open problem asks whether every slice knot is also ribbon. 
Knotting spheres of higher dimension 
Since a knot can be considered topologically a 1-dimensional sphere, the next generalization is to consider a two-dimensional sphere (S²) embedded in 4-dimensional Euclidean space (ℝ⁴). Such an embedding is knotted if there is no homeomorphism of ℝ⁴ onto itself taking the embedded 2-sphere to the standard "round" embedding of the 2-sphere. Suspended knots and spun knots are two typical families of such 2-sphere knots. The mathematical technique called "general position" implies that for a given n-sphere in m-dimensional Euclidean space, if m is large enough (depending on n), the sphere should be unknotted. In general, piecewise-linear n-spheres form knots only in (n + 2)-dimensional space, although this is no longer a requirement for smoothly knotted spheres. In fact, there are smoothly knotted (4k − 1)-spheres in 6k-dimensional space; e.g., there is a smoothly knotted 3-sphere in ℝ⁶. Thus the codimension of a smooth knot can be arbitrarily large when not fixing the dimension of the knotted sphere; however, any smooth k-sphere embedded in ℝⁿ with 2n − 3k − 3 > 0 is unknotted. The notion of a knot has further generalisations in mathematics, see: Knot (mathematics), isotopy classification of embeddings. Every knot in the n-sphere Sⁿ is the link of a real-algebraic set with isolated singularity in ℝⁿ⁺¹. An n-knot is a single n-sphere Sⁿ embedded in ℝᵐ. An n-link consists of k copies of Sⁿ embedded in ℝᵐ, where k is a natural number. Both the m = n + 2 case and the m > n + 2 case are well studied, and so is the case of more than one component (k > 1). 
Adding knots 
Two knots can be added by cutting both knots and joining the pairs of ends. The operation is called the knot sum, or sometimes the connected sum or composition of two knots. 
This can be formally defined as follows: consider a planar projection of each knot and suppose these projections are disjoint. Find a rectangle in the plane where one pair of opposite sides are arcs along each knot while the rest of the rectangle is disjoint from the knots. Form a new knot by deleting the first pair of opposite sides and adjoining the other pair of opposite sides. The resulting knot is a sum of the original knots. Depending on how this is done, two different knots (but no more) may result. This ambiguity in the sum can be eliminated by regarding the knots as oriented, i.e. having a preferred direction of travel along the knot, and requiring that the arcs of the knots in the sum be oriented consistently with the oriented boundary of the rectangle. The knot sum of oriented knots is commutative and associative. A knot is prime if it is non-trivial and cannot be written as the knot sum of two non-trivial knots. A knot that can be written as such a sum is composite. There is a prime decomposition for knots, analogous to prime and composite numbers. For oriented knots, this decomposition is also unique. Higher-dimensional knots can also be added but there are some differences. While you cannot form the unknot in three dimensions by adding two non-trivial knots, you can in higher dimensions, at least when one considers smooth knots in codimension at least 3. Knots can also be constructed using the circuit topology approach. This is done by combining basic units called soft contacts using five operations (Parallel, Series, Cross, Concerted, and Sub). The approach is applicable to open chains as well and can also be extended to include the so-called hard contacts. 
Tabulating knots 
Traditionally, knots have been catalogued in terms of crossing number. Knot tables generally include only prime knots, and only one entry for a knot and its mirror image (even if they are different). The number of nontrivial knots of a given crossing number increases rapidly, making tabulation computationally difficult. Tabulation efforts have succeeded in enumerating over 6 billion knots and links. The sequence of the number of prime knots of a given crossing number, up to crossing number 16, is 0, 0, 1, 1, 2, 3, 7, 21, 49, 165, 552, 2176, 9988, 46972, 253293, 1388705, ... While exponential upper and lower bounds for this sequence are known, it has not been proven that this sequence is strictly increasing. The first knot tables by Tait, Little, and Kirkman used knot diagrams, although Tait also used a precursor to the Dowker notation. Different notations have been invented for knots which allow more efficient tabulation. The early tables attempted to list all knots of at most 10 crossings, and all alternating knots of 11 crossings. The development of knot theory due to Alexander, Reidemeister, Seifert, and others eased the task of verification, and tables of knots up to and including 9 crossings were published by Alexander–Briggs and Reidemeister in the late 1920s. The first major verification of this work was done in the 1960s by John Horton Conway, who not only developed a new notation but also the Alexander–Conway polynomial. This verified the list of knots of at most 11 crossings and a new list of links up to 10 crossings. Conway found a number of omissions but only one duplication in the Tait–Little tables; however he missed the duplicates called the Perko pair, which would only be noticed in 1974 by Kenneth Perko. 
This famous error would propagate when Dale Rolfsen added a knot table in his influential text, based on Conway's work. Conway's 1970 paper on knot theory also contains a typographical duplication on its non-alternating 11-crossing knots page and omits 4 examples — 2 previously listed in D. Lombardero's 1968 Princeton senior thesis and 2 more subsequently discovered by Alain Caudron. [see Perko (1982), Primality of certain knots, Topology Proceedings] Less famous is the duplicate in his 10 crossing link table: 2.-2.-20.20 is the mirror of 8*-20:-20. [See Perko (2016), Historical highlights of non-cyclic knot theory, J. Knot Theory Ramifications] In the late 1990s Hoste, Thistlethwaite, and Weeks tabulated all the knots through 16 crossings. In 2003 Rankin, Flint, and Schermann tabulated the alternating knots through 22 crossings. In 2020 Burton tabulated all prime knots with up to 19 crossings.

Alexander–Briggs notation
This is the most traditional notation, due to the 1927 paper of James W. Alexander and Garland B. Briggs and later extended by Dale Rolfsen in his knot table (see image above and List of prime knots). The notation simply organizes knots by their crossing number. One writes the crossing number with a subscript to denote its order amongst all knots with that crossing number. This order is arbitrary and so has no special significance (though in each number of crossings the twist knot comes after the torus knot). Links are written by the crossing number with a superscript to denote the number of components and a subscript to denote its order within the links with the same number of components and crossings. Thus the trefoil knot is notated 3₁ and the Hopf link is 2²₁. Alexander–Briggs names in the range 10₁₆₂ to 10₁₆₆ are ambiguous, due to the discovery of the Perko pair in Charles Newton Little's original and subsequent knot tables, and differences in approach to correcting this error in knot tables and other publications created after this point.

Dowker–Thistlethwaite notation
The Dowker–Thistlethwaite notation, also called the Dowker notation or code, for a knot is a finite sequence of even integers. The numbers are generated by following the knot and marking the crossings with consecutive integers. Since each crossing is visited twice, this creates a pairing of even integers with odd integers. An appropriate sign is given to indicate over- and undercrossing. For example, in this figure the knot diagram has crossings labelled with the pairs (1,6) (3,−12) (5,2) (7,8) (9,−4) and (11,−10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6, −12, 2, 8, −4, −10. A knot diagram has more than one possible Dowker notation, and there is a well-understood ambiguity when reconstructing a knot from a Dowker–Thistlethwaite notation.

Conway notation
The Conway notation for knots and links, named after John Horton Conway, is based on the theory of tangles. The advantage of this notation is that it reflects some properties of the knot or link. The notation describes how to construct a particular link diagram of the link. Start with a basic polyhedron, a 4-valent connected planar graph with no digon regions. Such a polyhedron is denoted first by the number of vertices then a number of asterisks which determine the polyhedron's position on a list of basic polyhedra. For example, 10** denotes the second 10-vertex polyhedron on Conway's list.
Each vertex then has an algebraic tangle substituted into it (each vertex is oriented so there is no arbitrary choice in substitution). Each such tangle has a notation consisting of numbers and + or − signs. An example is 1*2 −3 2. The 1* denotes the only 1-vertex basic polyhedron. The 2 −3 2 is a sequence describing the continued fraction associated to a rational tangle. One inserts this tangle at the vertex of the basic polyhedron 1*. A more complicated example is 8*3.1.2 0.1.1.1.1.1 Here again 8* refers to a basic polyhedron with 8 vertices. The periods separate the notation for each tangle. Any link admits such a description, and it is clear this is a very compact notation even for very large crossing numbers. There are some further shorthands usually used. The last example is usually written 8*3:2 0, where the 1s are omitted while the number of dots is kept, excepting the dots at the end. For an algebraic knot such as in the first example, 1* is often omitted. Conway's pioneering paper on the subject lists basic polyhedra of up to 10 vertices, which he uses to tabulate links, and which have become standard for those links. For a further listing of higher vertex polyhedra, there are nonstandard choices available.

Gauss code
Gauss code, similar to the Dowker–Thistlethwaite notation, represents a knot with a sequence of integers. However, rather than every crossing being represented by two different numbers, crossings are labeled with only one number. When the crossing is an overcrossing, a positive number is listed. At an undercrossing, a negative number. For example, the trefoil knot in Gauss code can be given as: 1,−2,3,−1,2,−3 Gauss code is limited in its ability to identify knots. This problem is partially addressed by the extended Gauss code.

See also
Arithmetic rope
Circuit topology
Lamp cord trick
Legendrian submanifolds and knots
List of knot theory topics
Molecular knot
Quantum topology
Ribbon theory

References
Sources
Footnotes

Further reading
Introductory textbooks
There are a number of introductions to knot theory. A classical introduction for graduate students or advanced undergraduates is Rolfsen's Knots and Links. Other good texts from the references are Adams and Lickorish. Adams is informal and accessible for the most part to high schoolers. Lickorish is a rigorous introduction for graduate students, covering a nice mix of classical and modern topics. One further text from the references is suitable for undergraduates who know point-set topology; knowledge of algebraic topology is not required.

Surveys
Menasco and Thistlethwaite's handbook surveys a mix of topics relevant to current research trends in a manner accessible to advanced undergraduates but of interest to professional researchers.

External links
"Mathematics and Knots" This is an online version of an exhibition developed for the 1989 Royal Society "PopMath RoadShow". Its aim was to use knots to present methods of mathematics to the general public.
History
Movie of a modern recreation of Tait's smoke ring experiment
History of knot theory (on the home page of Andrew Ranicki)
Knot tables and software
KnotInfo: Table of Knot Invariants and Knot Theory Resources
The Knot Atlas — detailed info on individual knots in knot tables
KnotPlot — software to investigate geometric properties of knots
Knotscape — software to create images of knots
Knoutilus — online database and image generator of knots
KnotData.html — Wolfram Mathematica function for investigating knots
Regina — software for low-dimensional topology with native support for knots and links.
Tables of prime knots with up to 19 crossings
Low-dimensional topology
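The knot sum described under "Adding knots" above can be illustrated concretely in terms of the Gauss codes described in the notation sections. The following is a minimal sketch, not part of the original article: it assumes the standard side-by-side connected-sum diagram, in which a traversal visits all crossings of the first knot and then all crossings of the second, so the codes concatenate with the second knot's labels shifted. The function name is illustrative.

```python
# Sketch: a Gauss code for a connected sum K1 # K2, assuming the standard
# side-by-side diagram where the traversal first visits every crossing of K1
# and then every crossing of K2 (with its labels shifted to stay distinct).

def connected_sum(code1, code2):
    shift = max(abs(x) for x in code1)  # highest crossing label used by K1
    return code1 + [x + shift if x > 0 else x - shift for x in code2]

trefoil = [1, -2, 3, -1, 2, -3]           # Gauss code of the trefoil
granny = connected_sum(trefoil, trefoil)  # sum of two same-handed trefoils
print(granny)  # [1, -2, 3, -1, 2, -3, 4, -5, 6, -4, 5, -6]
```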
Knot theory
[ "Mathematics" ]
5,887
[ "Topology", "Low-dimensional topology" ]
153,099
https://en.wikipedia.org/wiki/Normal%20closure%20%28group%20theory%29
In group theory, the normal closure of a subset S of a group G is the smallest normal subgroup of G containing S.

Properties and description
Formally, if G is a group and S is a subset of G, the normal closure ncl(S) of S is the intersection of all normal subgroups of G containing S:

ncl(S) = ⋂ {N : N ⊴ G, S ⊆ N}.

The normal closure is the smallest normal subgroup of G containing S, in the sense that ncl(S) is a subset of every normal subgroup of G that contains S. The subgroup ncl(S) is generated by the set S^G = {g⁻¹sg : g ∈ G, s ∈ S} of all conjugates of elements of S in G. Therefore one can also write

ncl(S) = ⟨g⁻¹sg : g ∈ G, s ∈ S⟩.

Any normal subgroup is equal to its normal closure. The conjugate closure of the empty set is the trivial subgroup. A variety of other notations are used for the normal closure in the literature, including ⟨S^G⟩, ⟨S⟩^G and ⟨⟨S⟩⟩. Dual to the concept of normal closure is that of normal interior or normal core, defined as the join of all normal subgroups contained in S.

Group presentations
For a group G given by a presentation G = ⟨S ∣ R⟩ with generators S and defining relators R, the presentation notation means that G is the quotient group G = F(S)/ncl(R), where F(S) is a free group on S.

References
Group theory
Closure operators
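As a concrete illustration of the definitions above, the following minimal sketch (not part of the original article) computes the normal closure of a subset of a small permutation group by brute force: it collects all conjugates g⁻¹sg and then closes the set under multiplication, which suffices to generate a subgroup in a finite group (inverses arise as powers). The helper names are illustrative.

```python
from itertools import permutations

# Sketch: normal closure of a subset S of a finite permutation group G.
# Permutations of {0, ..., n-1} are tuples; compose(a, b) means "apply b, then a".

def compose(a, b):
    return tuple(a[b[i]] for i in range(len(b)))

def inverse(a):
    inv = [0] * len(a)
    for i, x in enumerate(a):
        inv[x] = i
    return tuple(inv)

def normal_closure(G, S):
    # All conjugates g^-1 s g of elements of S ...
    conjugates = {compose(inverse(g), compose(s, g)) for g in G for s in S}
    # ... then close under multiplication; in a finite group this yields the
    # subgroup generated by the conjugates.
    closure = set(conjugates)
    while True:
        new = {compose(a, b) for a in closure for b in closure} - closure
        if not new:
            return closure
        closure |= new

G = [tuple(p) for p in permutations(range(3))]  # the symmetric group S3
print(len(normal_closure(G, [(1, 0, 2)])))  # 6: a transposition normally generates S3
print(len(normal_closure(G, [(1, 2, 0)])))  # 3: a 3-cycle gives the alternating group A3
```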
Normal closure (group theory)
[ "Mathematics" ]
200
[ "Group theory", "Fields of abstract algebra", "Order theory", "Closure operators" ]
153,106
https://en.wikipedia.org/wiki/Dedekind%20group
In group theory, a Dedekind group is a group G such that every subgroup of G is normal. All abelian groups are Dedekind groups. A non-abelian Dedekind group is called a Hamiltonian group. The most familiar (and smallest) example of a Hamiltonian group is the quaternion group of order 8, denoted by Q8. Dedekind and Baer have shown (in the finite and respectively infinite order case) that every Hamiltonian group is a direct product of the form G = Q8 × B × D, where B is an elementary abelian 2-group, and D is a torsion abelian group with all elements of odd order. Dedekind groups are named after Richard Dedekind, who investigated them in 1897, proving a form of the above structure theorem (for finite groups). He named the non-abelian ones after William Rowan Hamilton, the discoverer of quaternions. In 1898 George Miller delineated the structure of a Hamiltonian group in terms of its order and that of its subgroups. For instance, he shows "a Hamilton group of order 2^a has quaternion groups as subgroups". In 2005 Horvat et al. used this structure to count the number of Hamiltonian groups of any order n = 2^e·o, where o is an odd integer. When e < 3 there are no Hamiltonian groups of order n; otherwise there are the same number as there are abelian groups of order o.

Notes
References
Baer, R. Situation der Untergruppen und Struktur der Gruppe, Sitz.-Ber. Heidelberg. Akad. Wiss. 2, 12–17, 1933.
Group theory
Properties of groups
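The defining property can be checked directly for Q8. The following minimal sketch (not part of the original article) encodes quaternion multiplication on {±1, ±i, ±j, ±k}, enumerates all subgroups, and verifies that each is normal while the group itself is non-abelian; the encoding and names are illustrative.

```python
from itertools import combinations

# Sketch: verifying that Q8 is Hamiltonian, i.e. non-abelian with every
# subgroup normal. Elements are (sign, unit) pairs with unit in {'1','i','j','k'}.

TABLE = {
    ('1','1'): (1,'1'), ('1','i'): (1,'i'), ('1','j'): (1,'j'), ('1','k'): (1,'k'),
    ('i','1'): (1,'i'), ('i','i'): (-1,'1'), ('i','j'): (1,'k'), ('i','k'): (-1,'j'),
    ('j','1'): (1,'j'), ('j','i'): (-1,'k'), ('j','j'): (-1,'1'), ('j','k'): (1,'i'),
    ('k','1'): (1,'k'), ('k','i'): (1,'j'), ('k','j'): (-1,'i'), ('k','k'): (-1,'1'),
}

def mul(a, b):
    s, u = TABLE[(a[1], b[1])]
    return (a[0] * b[0] * s, u)

def inverse(a):
    # Every element of Q8 satisfies a^4 = 1, so a^3 = a^-1.
    return mul(a, mul(a, a))

Q8 = [(s, u) for s in (1, -1) for u in '1ijk']

def subgroup_generated(gens):
    closure = set(gens) | {(1, '1')}
    while True:
        new = {mul(a, b) for a in closure for b in closure} - closure
        if not new:
            return closure
        closure |= new

# Every subgroup of Q8 needs at most two generators, so this finds them all.
subgroups = {frozenset(subgroup_generated(g))
             for r in (1, 2) for g in combinations(Q8, r)}

def is_normal(H):
    return all(mul(mul(g, h), inverse(g)) in H for g in Q8 for h in H)

print(len(subgroups))                                      # 6: 1, Z2, three Z4's, Q8
print(all(is_normal(H) for H in subgroups))                # True: every subgroup normal
print(any(mul(a, b) != mul(b, a) for a in Q8 for b in Q8)) # True: non-abelian
```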
Dedekind group
[ "Mathematics" ]
351
[ "Mathematical structures", "Properties of groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
153,130
https://en.wikipedia.org/wiki/Quaternion%20group
In group theory, the quaternion group Q8 (sometimes just denoted by Q) is a non-abelian group of order eight, isomorphic to the eight-element subset {1, i, j, k, −1, −i, −j, −k} of the quaternions under multiplication. It is given by the group presentation

Q8 = ⟨e̅, i, j, k ∣ e̅² = e, i² = j² = k² = ijk = e̅⟩,

where e is the identity element and e̅ commutes with the other elements of the group. These relations, discovered by W. R. Hamilton, also generate the quaternions as an algebra over the real numbers. Another presentation of Q8 is

Q8 = ⟨a, b ∣ a⁴ = e, a² = b², b⁻¹ab = a⁻¹⟩.

Like many other finite groups, it can be realized as the Galois group of a certain field of algebraic numbers.

Compared to dihedral group
The quaternion group Q8 has the same order as the dihedral group D4, but a different structure, as shown by their Cayley and cycle graphs: In the diagrams for D4, the group elements are marked with their action on a letter F in the defining representation R². The same cannot be done for Q8, since it has no faithful representation in R² or R³. D4 can be realized as a subset of the split-quaternions in the same way that Q8 can be viewed as a subset of the quaternions.

Cayley table
The Cayley table (multiplication table) for Q8 is given by:

Properties
The elements i, j, and k all have order four in Q8 and any two of them generate the entire group. Another presentation of Q8, based on only two elements to skip this redundancy, is:

Q8 = ⟨x, y ∣ x⁴ = e, x² = y², y⁻¹xy = x⁻¹⟩.

For instance, writing the group elements in lexicographically minimal normal forms, one may identify Q8 = {e, x, x², x³, y, xy, x²y, x³y}. The quaternion group has the unusual property of being Hamiltonian: Q8 is non-abelian, but every subgroup is normal. Every Hamiltonian group contains a copy of Q8. The quaternion group Q8 and the dihedral group D4 are the two smallest examples of a nilpotent non-abelian group. The center and the commutator subgroup of Q8 is the subgroup {e, e̅}. The inner automorphism group of Q8 is given by the group modulo its center, i.e. the factor group Q8/{e, e̅}, which is isomorphic to the Klein four-group V. The full automorphism group of Q8 is isomorphic to S4, the symmetric group on four letters (see Matrix representations below), and the outer automorphism group of Q8 is thus S4/V, which is isomorphic to S3. The quaternion group Q8 has five conjugacy classes, and so five irreducible representations over the complex numbers, with dimensions 1, 1, 1, 1, 2:

Trivial representation.

Sign representations with i, j, k-kernel: Q8 has three maximal normal subgroups: the cyclic subgroups generated by i, j, and k respectively. For each maximal normal subgroup N, we obtain a one-dimensional representation factoring through the 2-element quotient group G/N. The representation sends elements of N to 1, and elements outside N to −1.

2-dimensional representation: Described below in Matrix representations. It is not realizable over the real numbers, but is a complex representation: indeed, it is just the quaternions H considered as an algebra over C, and the action is that of left multiplication by elements of Q8.

The character table of Q8 turns out to be the same as that of D4. Nevertheless, all the irreducible characters in the rows above have real values; this gives a decomposition of the real group algebra R[Q8] into minimal two-sided ideals, with idempotents corresponding to the irreducibles. Each of these irreducible ideals is isomorphic to a real central simple algebra, the first four to the real field R.
The last ideal is isomorphic to the skew field of quaternions H. Furthermore, the projection homomorphism R[Q8] → H has kernel ideal generated by the idempotent (e + e̅)/2, so the quaternions can also be obtained as the quotient ring R[Q8]/(e + e̅). Note that this is irreducible as a real representation of Q8, but splits into two copies of the two-dimensional irreducible when extended to the complex numbers. Indeed, the complex group algebra is C[Q8] ≅ C ⊕ C ⊕ C ⊕ C ⊕ M₂(C), where M₂(C) ≅ C ⊗ H is the algebra of biquaternions.

Matrix representations
The two-dimensional irreducible complex representation described above gives the quaternion group Q8 as a subgroup of the general linear group GL(2, C). The quaternion group is a multiplicative subgroup of the quaternion algebra H, which has a regular representation by left multiplication on itself, considered as a complex vector space with basis {1, j}, so that each quaternion corresponds to the C-linear mapping of left multiplication by it. The resulting matrices all have unit determinant, so this is a representation of Q8 in the special linear group SL(2, C). A variant gives a representation by unitary matrices. It is worth noting that physicists exclusively use a different convention for the matrix representation, chosen to make contact with the usual Pauli matrices; this particular choice is convenient and elegant when one describes spin-1/2 states and considers angular momentum ladder operators. There is also an important action of Q8 on the 2-dimensional vector space over the finite field F₃. A modular representation Q8 → SL(2, 3) can be obtained from the extension field F₉ of F₃, whose multiplicative group has four generators of order 8. The two-dimensional F₃-vector space F₉ admits the linear mappings of multiplication by its elements, together with the Frobenius automorphism x ↦ x³; the representation matrices of the generators of Q8 are built from these mappings. This representation realizes Q8 as a normal subgroup of GL(2, 3). Thus, for each matrix m in GL(2, 3), we have a group automorphism of Q8 given by conjugation by m. In fact, these give the full automorphism group as Aut(Q8) ≅ PGL(2, 3) ≅ S4. This is isomorphic to the symmetric group S4 since the linear mappings permute the four one-dimensional subspaces of (F₃)², i.e., the four points of the projective line over F₃. Also, this representation permutes the eight non-zero vectors of (F₃)², giving an embedding of Q8 in the symmetric group S8, in addition to the embeddings given by the regular representations.

Galois group
Richard Dedekind considered a particular number field in attempting to relate the quaternion group to Galois theory. In 1936 Ernst Witt published his approach to the quaternion group through Galois theory. In 1981, Richard Dean showed the quaternion group can be realized as the Galois group Gal(T/Q), where Q is the field of rational numbers and T is the splitting field of a certain degree-8 rational polynomial. The development uses the fundamental theorem of Galois theory in specifying four intermediate fields between Q and T and their Galois groups, as well as two theorems on cyclic extension of degree four over a field.

Generalized quaternion group
A generalized quaternion group Q4n of order 4n is defined by the presentation

Q4n = ⟨x, y ∣ x²ⁿ = 1, xⁿ = y², y⁻¹xy = x⁻¹⟩

for an integer n ≥ 2, with the usual quaternion group given by n = 2. Coxeter calls Q4n the dicyclic group, a special case of the binary polyhedral groups and related to the polyhedral and dihedral groups. The generalized quaternion group can be realized as the subgroup of GL(2, C) generated by the matrices

(ω 0; 0 ω̄) and (0 −1; 1 0), where ω = e^{iπ/n}.

It can also be realized as the subgroup of unit quaternions generated by x = e^{iπ/n} and y = j.
The generalized quaternion groups have the property that every abelian subgroup is cyclic. It can be shown that a finite p-group with this property (every abelian subgroup is cyclic) is either cyclic or a generalized quaternion group as defined above. Another characterization is that a finite p-group in which there is a unique subgroup of order p is either cyclic or a 2-group isomorphic to a generalized quaternion group. In particular, for a finite field F with odd characteristic, the 2-Sylow subgroup of SL2(F) is non-abelian and has only one subgroup of order 2, so this 2-Sylow subgroup must be a generalized quaternion group. Letting p^r be the size of F, where p is prime, the size of the 2-Sylow subgroup of SL2(F) is 2ⁿ, where 2ⁿ is the largest power of 2 dividing p²ʳ − 1. The Brauer–Suzuki theorem shows that the groups whose Sylow 2-subgroups are generalized quaternion cannot be simple. Another terminology reserves the name "generalized quaternion group" for a dicyclic group of order a power of 2, which admits the presentation given above.

See also
16-cell
Binary tetrahedral group
Clifford algebra
Dicyclic group
Hurwitz integral quaternion
List of small groups

Notes
References
Dean, Richard A. (1981) "A rational polynomial whose group is the quaternions", American Mathematical Monthly 88:42–5.
P.R. Girard (1984) "The quaternion group and modern physics", European Journal of Physics 5:25–32.

External links
Quaternion groups on GroupNames
Quaternion group on GroupProps
Conrad, Keith. "Generalized Quaternions"

Group theory
Finite groups
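The two-dimensional complex representation discussed under Matrix representations can be checked numerically. The following is a minimal sketch, not from the article; the specific matrices are one standard convention for embedding Q8 in SL(2, C), and the check verifies Hamilton's relations i² = j² = k² = ijk = −1.

```python
import numpy as np

# Sketch: a two-dimensional complex matrix representation of Q8, using one
# standard choice of matrices inside SL(2, C).

I2 = np.eye(2)
i = np.array([[1j, 0], [0, -1j]])
j = np.array([[0, -1], [1, 0]], dtype=complex)
k = i @ j  # = [[0, -1j], [-1j, 0]]

for name, m in [('i', i), ('j', j), ('k', k)]:
    assert np.allclose(m @ m, -I2), name       # each generator squares to -1
assert np.allclose(i @ j @ k, -I2)             # ijk = -1
assert np.allclose(np.linalg.det(i), 1)        # unit determinant: lands in SL(2, C)

# The eight matrices {±1, ±i, ±j, ±k} are closed under multiplication,
# giving a faithful copy of Q8.
elements = [s * m for s in (1, -1) for m in (I2, i, j, k)]
print(len(elements))  # 8
```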
Quaternion group
[ "Mathematics" ]
1,899
[ "Mathematical structures", "Finite groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]
153,187
https://en.wikipedia.org/wiki/Intracytoplasmic%20sperm%20injection
Intracytoplasmic sperm injection (ICSI) is an in vitro fertilization (IVF) procedure in which a single sperm cell is injected directly into the cytoplasm of an egg. This technique is used to prepare the gametes so that embryos may be obtained and transferred to a maternal uterus. With this method, the acrosome reaction is skipped. There are several differences between classic IVF and ICSI. However, the steps to be followed before and after insemination are the same. In terms of insemination, ICSI needs only one sperm cell per oocyte, while IVF needs 50,000–100,000. This is because the acrosome reaction has to take place and thousands of sperm cells have to be involved in IVF. Once fertilized, the egg is transformed into a pre-embryo and it has to be transferred to the uterus to continue its development. The first human pregnancy generated by ICSI was carried out in 1991 by Gianpiero Palermo and his team.

Round spermatid injection (ROSI)
Round spermatid injection (ROSI) is a technique of assisted reproduction whereby a round spermatid is injected into oocyte cytoplasm in order to achieve fertilization. This technique can be used to enable genetic fatherhood for some men who have no spermatozoa in the ejaculate (azoospermia) and in whom spermatozoa cannot be obtained surgically from the testicles. This condition is called non-obstructive or secretory azoospermia, as opposed to obstructive azoospermia, in which complete sperm production does occur in the testicles, and potentially fertilizing spermatozoa can be obtained by testicular sperm extraction (TESE) and used for ICSI. In cases of nonobstructive (secretory) azoospermia, on the other hand, testicular sperm production is blocked at different stages of the process of sperm formation (spermatogenesis). In those men in whom spermatogenesis is blocked at the stage of round spermatids, in which meiosis has already been completed, these round cells can successfully fertilize oocytes after being injected into their cytoplasm. Even though many technical aspects of ROSI are similar to those of ICSI, there are also significant differences between the two techniques. In the first place, as compared to spermatozoa, round spermatids do not possess easily perceptible morphological characteristics and are immotile. Consequently, the distinction between round spermatids and other round cells of similar size, such as leukocytes, is not an easy task. Moreover, the distinction between living round spermatids, to be used in ROSI, and dead round spermatids, to be discarded, needs specific methods and skills, not required in the case of ICSI, where sperm cell viability can be easily evaluated on the basis of sperm motility in most cases. The microinjection procedure for ROSI also differs slightly from that of ICSI, since additional stimuli are needed to ensure proper oocyte activation after spermatid injection. If all requirements for round spermatid selection and injection are successfully met, the injected oocytes develop to early embryos and can be transferred to the mother's uterus to produce pregnancy. The first successful pregnancies and births with the use of ROSI were achieved in 1995 by Jan Tesarik and his team. The clinical potential of ROSI in the treatment of male infertility due to the total absence of spermatozoa has been corroborated recently by a publication reporting on the postnatal development of 90 babies born in Japan and 17 in Spain.
Based on the evaluation of the babies born, no abnormalities attributable to the ROSI technique have been identified.

Indications
This procedure is most commonly used to overcome male infertility problems, although it may also be used where eggs cannot easily be penetrated by sperm, and occasionally in addition to sperm donation. It can be used in teratozoospermia, because once the egg is fertilized, abnormal sperm morphology does not appear to influence blastocyst development or blastocyst morphology. Even with severe teratozoospermia, microscopy can still detect the few sperm cells that have a "normal" morphology, allowing for an optimal success rate. Additionally, specialists use ICSI in cases of azoospermia (when no spermatozoa are ejaculated but they can be found in the testis), when valuable spermatozoa are available (such as sperm samples taken to preserve fertility before chemotherapy), or after previous fertilization failures in IVF cycles.

Sperm selection
Before performing ICSI, in vitro sperm selection and capacitation have to be done. Apart from the most common techniques of in vitro sperm capacitation (swim-up, density gradients, filtration and simple wash), some new techniques are useful and have advantages over older methods. One of these new techniques is the use of microfluidic chips, like the Zymot ICSI chip invented by Prof. Utkan Demirci. This chip is a device that helps identify the highest quality spermatozoa for the ICSI technique. It reproduces the conditions of the vagina, resulting in a more natural selection of spermatozoa. One of the main advantages of this method is spermatozoa quality, as the selected ones have better motility, better morphology, little DNA fragmentation and a lower quantity of reactive oxygen species (ROS). Another way to perform the selection is the MACS technique, which uses tiny magnetic particles linked to annexin V, a protein that binds apoptotic spermatozoa and thereby identifies the more viable ones. When the semen sample is passed through a column with a magnetic field, apoptotic spermatozoa are retained in the column while the healthy ones are easily obtained at the bottom of it. PICSI is another method derived from this one; the only difference is the selection process of the spermatozoa. In this case, they are placed on a plate containing drops of a synthetic compound similar to hyaluronic acid. Mature spermatozoa can be identified because they bind to the HA drops. This is because only mature sperm have a receptor for hyaluronic acid, which they need because this acid can be found surrounding the oocytes, and sperm need to be able to bind to this acid and digest it in order to fertilize the oocyte. After the mature spermatozoa have been selected, they can be used for the microinjection of oocytes. Sperm selected by hyaluronic acid binding appear to have little or no effect on whether a live birth results, but may reduce miscarriage.

History
The first child born from gamete micromanipulation (a technique in which special tools and inverted microscopes are used to help embryologists choose and pick up an individual sperm for ICSI IVF) was a Singapore-born child in April 1989. The technique was developed by Gianpiero Palermo at the Vrije Universiteit Brussel, in the Center for Reproductive Medicine headed by Paul Devroey and Andre Van Steirteghem. In fact, the discovery was made by mistake. The procedure itself was first performed in 1987, though it only went to the pronuclear stage.
The first activated embryo by ICSI was produced in 1990, but the first successful birth by ICSI took place on January 14, 1992, after an April 1991 conception. Sharpe et al. comment on the success of ICSI since 1992 saying, "[t]hus, the woman carries the treatment burden for male infertility, a fairly unique scenario in medical practice. ICSI's success has effectively diverted attention from identifying what causes male infertility and focused research onto the female, to optimize the provision of eggs and a receptive endometrium, on which ICSI's success depends."

Procedure
ICSI is generally performed following a transvaginal oocyte retrieval procedure to extract one or several oocytes from a woman. In ICSI IVF, the male partner or a donor provides a sperm sample on the same day when the eggs are collected. The sample is checked in the lab, and if no sperm is present, doctors will extract sperm from the epididymis or testicle. The extraction of sperm from the epididymis is also known as percutaneous epididymal sperm aspiration (PESA), and extraction of sperm from the testicle is also known as testicular sperm aspiration (TESA). Depending on whether the total amount of spermatozoa in the semen sample is low or high, it can be just washed or capacitated via swim-up or gradients, respectively. The procedure is done under a microscope using multiple micromanipulation devices (micromanipulator, microinjectors and micropipettes). A holding pipette stabilizes the mature oocyte with gentle suction applied by a microinjector. From the opposite side a thin, hollow glass micropipette is used to collect a single sperm, having immobilised it by cutting its tail with the point of the micropipette. The oocyte is pierced through the oolemma and the sperm is directed into the inner part of the oocyte (cytoplasm). The sperm is then released into the oocyte. The pictured oocyte has an extruded polar body at about 12 o'clock indicating its maturity. The polar body is positioned at the 12 or 6 o'clock position, to ensure that the inserted micropipette does not disrupt the spindle inside the egg. After the procedure, the oocyte will be placed into cell culture and checked on the following day for signs of fertilization. In contrast, in natural fertilization sperm compete and when the first sperm penetrates the oolemma, the oolemma hardens to block the entry of any other sperm. Concern has been raised that in ICSI this sperm selection process is bypassed and the sperm is selected by the embryologist without any specific testing. However, in mid-2006 the FDA cleared a device that allows embryologists to select mature sperm for ICSI based on sperm binding to hyaluronan, the main constituent of the gel layer (cumulus oophorus) surrounding the oocyte. The device provides microscopic droplets of hyaluronan hydrogel attached to the culture dish. The embryologist places the prepared sperm on the microdot, selects and captures sperm that bind to the dot. Basic research on the maturation of sperm shows that hyaluronan-binding sperm are more mature and show fewer DNA strand breaks and significantly lower levels of aneuploidy than the sperm population from which they were selected. A brand name for one such sperm selection device is PICSI. A recent clinical trial showed a sharp reduction in miscarriage with embryos derived from PICSI sperm selection. 'Washed' or 'unwashed' sperm may be used in the process. Live birth rates are significantly higher when progesterone is used to assist implantation in ICSI cycles.
Also, addition of a GnRH agonist has been estimated to increase success rates. Ultra-high magnification sperm injection (IMSI) has no evidence of increased live birth or miscarriage rates compared to standard ICSI. A new variation of the standard ICSI procedure called Piezo-ICSI uses small axial mechanical pulses (Piezo-pulses) to lower stress to the cytoskeleton during zona pellucida and oolemma breakage. The procedure includes specialized Piezo actuators, microcapillaries, and filling medium to transfer mechanical pulses to the cell membranes. The Piezo technique itself was established, for example, for animal ICSI and animal ES cell transfer.

Assisted zona hatching (AH)
People who have experienced repeated implantation failure, or whose embryos have a thick zona pellucida (covering), are ideal candidates for assisted zona hatching. The procedure involves creating a hole in the zona to improve the chances of normal implantation of the embryo in the uterus.

Preimplantation genetic diagnosis (PGD)
PGD is a process in which one or two cells from an embryo on Day 3 or Day 5 are extracted and the cells genetically analyzed. Couples who are at a high risk of having an abnormal number of chromosomes or who have a history of single gene defects or chromosome defects are ideal candidates for this procedure. It is used to diagnose a large number of genetic defects at present.

Success or failure factors
One of the areas in which sperm injection can be useful is vasectomy reversal. However, potential factors that may influence pregnancy rates (and live birth rates) in ICSI include the level of DNA fragmentation as measured e.g. by comet assay, advanced maternal age and semen quality. It is uncertain whether ICSI improves live birth rates or reduces the risk of miscarriage compared with ultra-high magnification (IMSI) sperm selection. A systematic meta-analysis of 24 estimates of DNA damage based on a variety of techniques concluded that sperm DNA damage negatively affects clinical pregnancy following ICSI. Numerous biochemical markers were shown to be associated with oocyte quality for ICSI. For example, it was shown that after ICSI the follicular fluid of unfertilized oocytes contains high levels of cytotoxicity and oxidative stress markers, such as Cu,Zn-superoxide dismutase, catalase, and the lipoperoxidation product 4-hydroxynonenal (4-HNE)-protein conjugates.

Complications
There is some suggestion that birth defects are increased with the use of IVF in general, and ICSI specifically, though different studies show contradictory results. In a summary position paper, the Practice Committee of the American Society for Reproductive Medicine has said it considers ICSI safe and effective therapy for male factor infertility, but it may carry an increased risk for the transmission of selected genetic abnormalities to offspring, either through the procedure itself or through the increased inherent risk of such abnormalities in parents undergoing the procedure. There is not enough evidence to say that ICSI procedures are safe in females with hepatitis B in regard to vertical transmission to the offspring, since the puncture of the oocyte can potentially allow for vertical transmission to the offspring.

Follow-up on fetus
In addition to regular prenatal care, prenatal aneuploidy screening based on maternal age, nuchal translucency scan and biomarkers is appropriate. However, biomarkers seem to be altered for pregnancies resulting from ICSI, causing a higher false-positive rate.
Correction factors have been developed and should be used when screening for Down syndrome in singleton pregnancies after ICSI, but in twin pregnancies such correction factors have not been fully elucidated. In vanishing twin pregnancies with a second gestational sac with a dead fetus, first trimester screening should be based solely on the maternal age and the nuchal translucency scan as biomarkers are significantly altered in these cases. See also Reproductive technology Ernestine Gwet Bell References External links The Human Fertilisation and Embryology Authority (HFEA) The Epigenome Network of Excellence (NoE) TEST TUBE BABY PROCESS Assisted Zona hatching Assisted reproductive technology Fertility medicine 1991 introductions Semen
Intracytoplasmic sperm injection
[ "Biology" ]
3,227
[ "Assisted reproductive technology", "Medical technology" ]
153,197
https://en.wikipedia.org/wiki/Periodic%20table%20%28electron%20configurations%29
Measured configurations of elements 109 and above are not available; predictions from reliable sources have been used for these elements. Grayed out electron numbers indicate subshells filled to their maximum. Bracketed noble gas symbols on the left represent inner configurations that are the same in each period. Written out, these are:

He, 2, helium: 1s²
Ne, 10, neon: 1s² 2s² 2p⁶
Ar, 18, argon: 1s² 2s² 2p⁶ 3s² 3p⁶
Kr, 36, krypton: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶
Xe, 54, xenon: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶ 5s² 4d¹⁰ 5p⁶
Rn, 86, radon: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶ 5s² 4d¹⁰ 5p⁶ 6s² 4f¹⁴ 5d¹⁰ 6p⁶
Og, 118, oganesson: 1s² 2s² 2p⁶ 3s² 3p⁶ 4s² 3d¹⁰ 4p⁶ 5s² 4d¹⁰ 5p⁶ 6s² 4f¹⁴ 5d¹⁰ 6p⁶ 7s² 5f¹⁴ 6d¹⁰ 7p⁶

Note that these electron configurations are given for neutral atoms in the gas phase, which are not the same as the electron configurations for the same atoms in chemical environments. In many cases, multiple configurations are within a small range of energies and the small irregularities that arise in the d- and f-blocks are quite irrelevant chemically. The construction of the periodic table ignores these irregularities and is based on ideal electron configurations. Note the non-linear shell ordering, which comes about due to the different energies of smaller and larger shells.

References
See list of sources at Electron configurations of the elements (data page).
Electron configurations
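The ideal configurations on which the table is based can be generated mechanically from the Madelung (n + l) rule. The following minimal sketch (not part of the original page) does this; as noted above, real atoms such as chromium or copper deviate from the ideal ordering, so the function reproduces the idealized, not the measured, configurations.

```python
# Sketch: the "ideal" electron configuration of a neutral atom with atomic
# number z, filling subshells in Madelung order (by n + l, then by n).

def ideal_configuration(z):
    letters = 'spdfghik'  # conventional subshell letters for l = 0, 1, 2, ...
    subshells = sorted(((n, l) for n in range(1, 9) for l in range(n)),
                       key=lambda nl: (nl[0] + nl[1], nl[0]))
    config = []
    for n, l in subshells:
        if z <= 0:
            break
        electrons = min(z, 2 * (2 * l + 1))  # subshell capacity is 2(2l + 1)
        config.append(f"{n}{letters[l]}{electrons}")
        z -= electrons
    return ' '.join(config)

print(ideal_configuration(18))  # argon: 1s2 2s2 2p6 3s2 3p6
print(ideal_configuration(26))  # iron, in the ideal ordering: ... 4s2 3d6
```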
Periodic table (electron configurations)
[ "Chemistry" ]
419
[ "Periodic table" ]
153,215
https://en.wikipedia.org/wiki/Working%20mass
Working mass, also referred to as reaction mass, is a mass against which a system operates in order to produce acceleration. In the case of a chemical rocket, for example, the reaction mass is the product of the burned fuel shot backwards to provide propulsion. All acceleration requires an exchange of momentum, which can be thought of as the "unit of movement". Momentum is related to mass and velocity, as given by the formula P = mv, where P is the momentum, m the mass, and v the velocity. The velocity of a body is easily changeable, but in most cases the mass is not, which makes the choice of working mass important.

Rockets and rocket-like reaction engines
In rockets, the total velocity change can be calculated (using the Tsiolkovsky rocket equation) as follows:

v = u ln((M + m) / M)

Where:
v = ship velocity.
u = exhaust velocity.
M = ship mass, not including the working mass.
m = total mass ejected from the ship (working mass).

The term working mass is used primarily in the aerospace field. In more "down to earth" examples, the working mass is typically provided by the Earth, which contains so much momentum in comparison to most vehicles that the amount it gains or loses can be ignored. However, in the case of an aircraft the working mass is the air, and in the case of a rocket, it is the rocket propellant itself. Most rocket engines use light-weight propellants (liquid hydrogen, oxygen, or kerosene) accelerated to supersonic speeds. However, ion engines often use heavier elements like xenon as the reaction mass, accelerated to much higher speeds using electric fields. In many cases, the working mass is separate from the energy used to accelerate it. In a car, the engine provides power to the wheels, which then accelerates the Earth backward to make the car move forward. This is not the case for most rockets, however, where the rocket propellant is the working mass, as well as the energy source. This means that rockets stop accelerating as soon as they run out of fuel, regardless of other power sources they may have. This can be a problem for satellites that need to be repositioned often, as it limits their useful life. In general, the exhaust velocity should be close to the ship velocity for optimum energy efficiency. This limitation of rocket propulsion is one of the main motivations for the ongoing interest in field propulsion technology.

See also
Rocket equation

Aerospace engineering
Mass
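The equation above is straightforward to evaluate numerically. A minimal sketch (not part of the original article), with illustrative numbers only:

```python
import math

# Sketch: total velocity change from the Tsiolkovsky rocket equation as given
# above, v = u * ln((M + m) / M), with M the ship mass excluding working mass
# and m the working (reaction) mass expelled.

def delta_v(exhaust_velocity, ship_mass, working_mass):
    return exhaust_velocity * math.log((ship_mass + working_mass) / ship_mass)

# Illustrative example: a 10 t ship expelling 20 t of propellant at 4.5 km/s,
# roughly the exhaust velocity of a hydrogen/oxygen engine.
print(delta_v(4500.0, 10_000.0, 20_000.0))  # ~4944 m/s
```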
Working mass
[ "Physics", "Mathematics", "Engineering" ]
493
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Size", "Aerospace engineering", "Wikipedia categories named after physical quantities", "Matter" ]
153,221
https://en.wikipedia.org/wiki/Heat%20exchanger
A heat exchanger is a system used to transfer heat between a source and a working fluid. Heat exchangers are used in both cooling and heating processes. The fluids may be separated by a solid wall to prevent mixing or they may be in direct contact. They are widely used in space heating, refrigeration, air conditioning, power stations, chemical plants, petrochemical plants, petroleum refineries, natural-gas processing, and sewage treatment. The classic example of a heat exchanger is found in an internal combustion engine in which a circulating fluid known as engine coolant flows through radiator coils and air flows past the coils, which cools the coolant and heats the incoming air. Another example is the heat sink, which is a passive heat exchanger that transfers the heat generated by an electronic or a mechanical device to a fluid medium, often air or a liquid coolant.

Flow arrangement
There are three primary classifications of heat exchangers according to their flow arrangement. In parallel-flow heat exchangers, the two fluids enter the exchanger at the same end, and travel in parallel to one another to the other side. In counter-flow heat exchangers the fluids enter the exchanger from opposite ends. The counter current design is the most efficient, in that it can transfer the most heat from the heat (transfer) medium per unit mass due to the fact that the average temperature difference along any unit length is higher. See countercurrent exchange. In a cross-flow heat exchanger, the fluids travel roughly perpendicular to one another through the exchanger. For efficiency, heat exchangers are designed to maximize the surface area of the wall between the two fluids, while minimizing resistance to fluid flow through the exchanger. The exchanger's performance can also be affected by the addition of fins or corrugations in one or both directions, which increase surface area and may channel fluid flow or induce turbulence. The driving temperature across the heat transfer surface varies with position, but an appropriate mean temperature can be defined. In most simple systems this is the "log mean temperature difference" (LMTD). Sometimes direct knowledge of the LMTD is not available and the NTU method is used.

Types
By maximum operating temperature, heat exchangers can be divided into low-temperature and high-temperature ones. The former work up to 500–650 °C depending on the industry and generally do not require special design and material considerations. The latter work up to 1000 or even 1400 °C. Double pipe heat exchangers are the simplest exchangers used in industries. On one hand, these heat exchangers are cheap for both design and maintenance, making them a good choice for small industries. On the other hand, their low efficiency, coupled with the large space they occupy at large scales, has led modern industries to use more efficient heat exchangers like shell and tube or plate. However, since double pipe heat exchangers are simple, they are used to teach heat exchanger design basics to students, as the fundamental rules for all heat exchangers are the same.

1. Double-pipe heat exchanger
When one fluid flows through the smaller pipe, the other flows through the annular gap between the two pipes. These flows may be parallel or counter-flows in a double pipe heat exchanger. (a) Parallel flow, where both hot and cold liquids enter the heat exchanger from the same side, flow in the same direction and exit at the same end.
This configuration is preferable when the two fluids are intended to reach exactly the same temperature, as it reduces thermal stress and produces a more uniform rate of heat transfer. (b) Counter-flow, where hot and cold fluids enter opposite sides of the heat exchanger, flow in opposite directions, and exit at opposite ends. This configuration is preferable when the objective is to maximize heat transfer between the fluids, as it creates a larger temperature differential when used under otherwise similar conditions.

2. Shell-and-tube heat exchanger
In a shell-and-tube heat exchanger, two fluids at different temperatures flow through the heat exchanger. One of the fluids flows through the tube side and the other fluid flows outside the tubes, but inside the shell (shell side). Baffles are used to support the tubes, direct the fluid flow across the tubes, and maximize the turbulence of the shell fluid. There are many kinds of baffles, and the choice of baffle form, spacing, and geometry depends on the allowable shell-side pressure drop, the need for tube support, and flow-induced vibrations. There are several variations of shell-and-tube exchangers available; the differences lie in the arrangement of flow configurations and details of construction. In application to cool air with shell-and-tube technology (such as an intercooler / charge air cooler for combustion engines), fins can be added on the tubes to increase the heat transfer area on the air side and create a tubes & fins configuration.

3. Plate heat exchanger
A plate heat exchanger contains a number of thin, shaped heat transfer plates bundled together. The gasket arrangement of each pair of plates provides two separate channel systems. Each pair of plates forms a channel where the fluid can flow through. The pairs are attached by welding and bolting methods. In single channels the configuration of the gaskets enables flow through, allowing the main and secondary media to flow in counter-current. A gasket plate heat exchanger has a heat transfer region made from corrugated plates. The gaskets function as seals between plates, and they are located between the frame and pressure plates. Fluid flows in a counter-current direction throughout the heat exchanger, producing efficient thermal performance. Plates are produced in different depths, sizes and corrugated shapes. There are different types of plates available, including plate and frame, plate and shell and spiral plate heat exchangers. The distribution area guarantees the flow of fluid to the whole heat transfer surface. This helps to prevent stagnant areas that can cause accumulation of unwanted material on solid surfaces. High flow turbulence between plates results in a greater transfer of heat and a decrease in pressure.

4. Condensers and boilers
Heat exchangers using a two-phase heat transfer system are condensers, boilers and evaporators. Condensers are instruments that take hot gas or vapor and cool it to the point of condensation, transforming the gas into a liquid. The point at which liquid transforms to gas is called vaporization, and the reverse is called condensation. The surface condenser is the most common type of condenser, and it includes a water supply device.
The pressure of steam at the turbine outlet is low, the steam density is very low, and the flow rate is very high. To prevent a decrease in pressure in the movement of steam from the turbine to the condenser, the condenser unit is placed underneath and connected to the turbine. Inside the tubes the cooling water runs in a parallel way, while steam moves vertically downward from the wide opening at the top and travels through the tubes. Furthermore, boilers are one of the earliest applications of heat exchangers. The word steam generator was regularly used to describe a boiler unit where a hot liquid stream is the source of heat rather than the combustion products. Boilers are manufactured in a range of dimensions and configurations. Some boilers are only able to produce hot fluid, while others are manufactured for steam production.

Shell and tube
Shell and tube heat exchangers consist of a series of tubes which contain fluid that must be either heated or cooled. A second fluid runs over the tubes that are being heated or cooled so that it can either provide the heat or absorb the heat required. A set of tubes is called the tube bundle and can be made up of several types of tubes: plain, longitudinally finned, etc. Shell and tube heat exchangers are typically used for high-pressure applications (with pressures greater than 30 bar and temperatures greater than 260 °C). This is because shell and tube heat exchangers are robust due to their shape. Several thermal design features must be considered when designing the tubes in shell and tube heat exchangers:

There can be many variations on the shell and tube design. Typically, the ends of each tube are connected to plenums (sometimes called water boxes) through holes in tubesheets. The tubes may be straight or bent in the shape of a U, called U-tubes.

Tube diameter: Using a small tube diameter makes the heat exchanger both economical and compact. However, it is more likely for the heat exchanger to foul up faster and the small size makes mechanical cleaning of the fouling difficult. To prevail over the fouling and cleaning problems, larger tube diameters can be used. Thus to determine the tube diameter, the available space, cost and fouling nature of the fluids must be considered.

Tube thickness: The thickness of the wall of the tubes is usually determined to ensure:
There is enough room for corrosion
That flow-induced vibration has resistance
Axial strength
Availability of spare parts
Hoop strength (to withstand internal tube pressure)
Buckling strength (to withstand overpressure in the shell)

Tube length: heat exchangers are usually cheaper when they have a smaller shell diameter and a long tube length. Thus, typically there is an aim to make the heat exchanger as long as physically possible whilst not exceeding production capabilities. However, there are many limitations for this, including space available at the installation site and the need to ensure tubes are available in lengths that are twice the required length (so they can be withdrawn and replaced). Also, long, thin tubes are difficult to take out and replace.

Tube pitch: when designing the tubes, it is practical to ensure that the tube pitch (i.e., the centre-centre distance of adjoining tubes) is not less than 1.25 times the tubes' outside diameter. A larger tube pitch leads to a larger overall shell diameter, which leads to a more expensive heat exchanger.
Tube corrugation: this type of tube, mainly used for the inner tubes, increases the turbulence of the fluids, and the effect is very important in heat transfer, giving better performance.

Tube layout: refers to how tubes are positioned within the shell. There are four main types of tube layout, which are triangular (30°), rotated triangular (60°), square (90°) and rotated square (45°). The triangular patterns are employed to give greater heat transfer as they force the fluid to flow in a more turbulent fashion around the piping. Square patterns are employed where high fouling is experienced and cleaning is more regular.

Baffle design: baffles are used in shell and tube heat exchangers to direct fluid across the tube bundle. They run perpendicularly to the shell and hold the bundle, preventing the tubes from sagging over a long length. They can also prevent the tubes from vibrating. The most common type of baffle is the segmental baffle. The semicircular segmental baffles are oriented at 180 degrees to the adjacent baffles, forcing the fluid to flow upward and downward between the tube bundle. Baffle spacing is of large thermodynamic concern when designing shell and tube heat exchangers. Baffles must be spaced with consideration for the trade-off between pressure drop and heat transfer. For thermo-economic optimization it is suggested that the baffles be spaced no closer than 20% of the shell's inner diameter. Having baffles spaced too closely causes a greater pressure drop because of flow redirection. Conversely, having the baffles spaced too far apart means that there may be cooler spots in the corners between baffles. It is also important to ensure the baffles are spaced close enough that the tubes do not sag. The other main type of baffle is the disc and doughnut baffle, which consists of two concentric baffles. An outer, wider baffle looks like a doughnut, whilst the inner baffle is shaped like a disk. This type of baffle forces the fluid to pass around each side of the disk then through the doughnut baffle, generating a different type of fluid flow.

Tubes and fins design: in application to cool air with shell-and-tube technology (such as an intercooler / charge air cooler for combustion engines), the difference in heat transfer between air and cold fluid can be such that there is a need to increase the heat transfer area on the air side. For this function fins can be added on the tubes to increase the heat transfer area on the air side and create a tubes & fins configuration.

Fixed tube liquid-cooled heat exchangers especially suitable for marine and harsh applications can be assembled with brass shells, copper tubes, brass baffles, and forged brass integral end hubs. (See: Copper in heat exchangers.)

Plate
Another type of heat exchanger is the plate heat exchanger. These exchangers are composed of many thin, slightly separated plates that have very large surface areas and small fluid flow passages for heat transfer. Advances in gasket and brazing technology have made the plate-type heat exchanger increasingly practical. In HVAC applications, large heat exchangers of this type are called plate-and-frame; when used in open loops, these heat exchangers are normally of the gasket type to allow periodic disassembly, cleaning, and inspection. There are many types of permanently bonded plate heat exchangers, such as dip-brazed, vacuum-brazed, and welded plate varieties, and they are often specified for closed-loop applications such as refrigeration.
Plate heat exchangers also differ in the types of plates that are used, and in the configurations of those plates. Some plates may be stamped with "chevron", dimpled, or other patterns, while others may have machined fins and/or grooves. When compared to shell and tube exchangers, the stacked-plate arrangement typically has lower volume and cost. Another difference between the two is that plate exchangers typically serve low to medium pressure fluids, compared to medium and high pressures of shell and tube. A third and important difference is that plate exchangers employ more countercurrent flow rather than cross current flow, which allows lower approach temperature differences, high temperature changes, and increased efficiencies.

Plate and shell
A third type of heat exchanger is a plate and shell heat exchanger, which combines plate heat exchanger and shell and tube heat exchanger technologies. The heart of the heat exchanger contains a fully welded circular plate pack made by pressing and cutting round plates and welding them together. Nozzles carry flow in and out of the platepack (the 'Plate side' flowpath). The fully welded platepack is assembled into an outer shell that creates a second flowpath (the 'Shell side'). Plate and shell technology offers high heat transfer, high pressure, high operating temperature, compact size, low fouling and close approach temperature. In particular, it does away entirely with gaskets, which provides security against leakage at high pressures and temperatures.

Adiabatic wheel
A fourth type of heat exchanger uses an intermediate fluid or solid store to hold heat, which is then moved to the other side of the heat exchanger to be released. Two examples of this are adiabatic wheels, which consist of a large wheel with fine threads rotating through the hot and cold fluids, and fluid heat exchangers.

Plate fin
This type of heat exchanger uses "sandwiched" passages containing fins to increase the effectiveness of the unit. The designs include crossflow and counterflow coupled with various fin configurations such as straight fins, offset fins and wavy fins. Plate and fin heat exchangers are usually made of aluminum alloys, which provide high heat transfer efficiency. The material enables the system to operate at a lower temperature difference and reduce the weight of the equipment. Plate and fin heat exchangers are mostly used for low temperature services such as natural gas, helium and oxygen liquefaction plants, air separation plants and transport industries such as motor and aircraft engines.

Advantages of plate and fin heat exchangers:
High heat transfer efficiency, especially in gas treatment
Larger heat transfer area
Approximately five times lighter in weight than a comparable shell and tube heat exchanger
Able to withstand high pressure

Disadvantages of plate and fin heat exchangers:
Might cause clogging as the pathways are very narrow
Difficult to clean the pathways
Aluminium alloys are susceptible to mercury liquid embrittlement failure

Finned tube
The usage of fins in a tube-based heat exchanger is common when one of the working fluids is a low-pressure gas, and is typical for heat exchangers that operate using ambient air, such as automotive radiators and HVAC air condensers. Fins dramatically increase the surface area with which heat can be exchanged, which improves the efficiency of conducting heat to a fluid with very low thermal conductivity, such as air.
The fins are typically made from aluminium or copper since they must conduct heat from the tube along the length of the fins, which are usually very thin. The main construction types of finned tube exchangers are:

A stack of evenly-spaced metal plates act as the fins and the tubes are pressed through pre-cut holes in the fins, good thermal contact usually being achieved by deformation of the fins around the tube. This is typical construction for HVAC air coils and large refrigeration condensers.
Fins are spiral-wound onto individual tubes as a continuous strip; the tubes can then be assembled in banks, bent in a serpentine pattern, or wound into large spirals.
Zig-zag metal strips are sandwiched between flat rectangular tubes, often being soldered or brazed together for good thermal and mechanical strength. This is common in low-pressure heat exchangers such as water-cooling radiators. Regular flat tubes will expand and deform if exposed to high pressures, but flat microchannel tubes allow this construction to be used for high pressures.

Stacked-fin or spiral-wound construction can be used for the tubes inside shell-and-tube heat exchangers when high efficiency thermal transfer to a gas is required. In electronics cooling, heat sinks, particularly those using heat pipes, can have a stacked-fin construction.

Pillow plate
A pillow plate heat exchanger is commonly used in the dairy industry for cooling milk in large direct-expansion stainless steel bulk tanks. Nearly the entire surface area of a tank can be integrated with this heat exchanger, without gaps that would occur between pipes welded to the exterior of the tank. Pillow plates can also be constructed as flat plates that are stacked inside a tank. The relatively flat surface of the plates allows easy cleaning, especially in sterile applications. The pillow plate can be constructed using either a thin sheet of metal welded to the thicker surface of a tank or vessel, or two thin sheets welded together. The surface of the plate is welded with a regular pattern of dots or a serpentine pattern of weld lines. After welding the enclosed space is pressurised with sufficient force to cause the thin metal to bulge out around the welds, providing a space for heat exchanger liquids to flow, and creating a characteristic appearance of a swelled pillow formed out of metal.

Waste heat recovery units
A waste heat recovery unit (WHRU) is a heat exchanger that recovers heat from a hot gas stream while transferring it to a working medium, typically water or oils. The hot gas stream can be the exhaust gas from a gas turbine or a diesel engine or a waste gas from industry or refinery. Large systems with high volume and temperature gas streams, typical in industry, can benefit from a steam Rankine cycle (SRC) in a waste heat recovery unit, but these cycles are too expensive for small systems. The recovery of heat from low temperature systems requires different working fluids than steam. An organic Rankine cycle (ORC) waste heat recovery unit can be more efficient in the low temperature range, using refrigerants that boil at lower temperatures than water. Typical organic refrigerants are ammonia, pentafluoropropane (R-245fa and R-245ca), and toluene. The refrigerant is boiled by the heat source in the evaporator to produce super-heated vapor. This fluid is expanded in the turbine to convert thermal energy to kinetic energy, which is converted to electricity in the electrical generator.
This energy transfer process decreases the temperature of the refrigerant that, in turn, condenses. The cycle is closed and completed using a pump to send the fluid back to the evaporator. Dynamic scraped surface Another type of heat exchanger is called the "(dynamic) scraped surface heat exchanger". This is mainly used for heating or cooling with high-viscosity products, crystallization processes, evaporation and high-fouling applications. Long running times are achieved due to the continuous scraping of the surface, thus avoiding fouling and achieving a sustainable heat transfer rate during the process. Phase-change In addition to heating up or cooling down fluids in just a single phase, heat exchangers can be used either to heat a liquid to evaporate (or boil) it or as condensers to cool a vapor and condense it to a liquid. In chemical plants and refineries, reboilers used to heat incoming feed for distillation towers are often heat exchangers. Distillation set-ups typically use condensers to condense distillate vapors back into liquid. Power plants that use steam-driven turbines commonly use heat exchangers to boil water into steam. Heat exchangers or similar units for producing steam from water are often called boilers or steam generators. In the nuclear power plants called pressurized water reactors, special large heat exchangers pass heat from the primary (reactor plant) system to the secondary (steam plant) system, producing steam from water in the process. These are called steam generators. All fossil-fueled and nuclear power plants using steam-driven turbines have surface condensers to convert the exhaust steam from the turbines into condensate (water) for re-use. To conserve energy and cooling capacity in chemical and other plants, regenerative heat exchangers can transfer heat from a stream that must be cooled to another stream that must be heated, such as distillate cooling and reboiler feed pre-heating. This term can also refer to heat exchangers that contain a material within their structure that has a change of phase. This is usually a solid to liquid phase due to the small volume difference between these states. This change of phase effectively acts as a buffer because it occurs at a constant temperature but still allows for the heat exchanger to accept additional heat. One example where this has been investigated is for use in high power aircraft electronics. Heat exchangers functioning in multiphase flow regimes may be subject to the Ledinegg instability. Direct contact Direct contact heat exchangers involve heat transfer between hot and cold streams of two phases in the absence of a separating wall. Thus such heat exchangers can be classified as: Gas–liquid Immiscible liquid–liquid Solid–liquid or solid–gas Most direct contact heat exchangers fall under the gas–liquid category, where heat is transferred between a gas and liquid in the form of drops, films or sprays. Such types of heat exchangers are used predominantly in air conditioning, humidification, industrial hot water heating, water cooling and condensing plants. Microchannel Microchannel heat exchangers are multi-pass parallel flow heat exchangers consisting of three main elements: manifolds (inlet and outlet), multi-port tubes with hydraulic diameters smaller than 1 mm, and fins. All the elements are usually brazed together using a controlled atmosphere brazing process.
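The "hydraulic diameter" that defines these multi-port tubes is $D_h = 4A/P$, where $A$ is the cross-sectional area of a port and $P$ its wetted perimeter. A minimal check in Python (the port dimensions are invented for illustration):

def hydraulic_diameter(width, height):
    """Return D_h = 4*A/P for a rectangular channel, in the units of the inputs."""
    area = width * height
    perimeter = 2 * (width + height)
    return 4 * area / perimeter

# a hypothetical 1.0 mm x 0.6 mm extruded port
d_h = hydraulic_diameter(1.0e-3, 0.6e-3)
print(f"D_h = {d_h * 1e3:.2f} mm")  # -> 0.75 mm, below the 1 mm microchannel threshold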
Microchannel heat exchangers are characterized by high heat transfer rates, low refrigerant charges, compact size, and lower airside pressure drops compared to finned tube heat exchangers. Microchannel heat exchangers are widely used in the automotive industry as car radiators, and as condensers, evaporators, and cooling/heating coils in the HVAC industry. Micro heat exchangers, micro-scale heat exchangers, or microstructured heat exchangers are heat exchangers in which (at least one) fluid flows in lateral confinements with typical dimensions below 1 mm. The most typical such confinements are microchannels, which are channels with a hydraulic diameter below 1 mm. Microchannel heat exchangers can be made from metal or ceramics. Microchannel heat exchangers can be used for many applications including: high-performance aircraft gas turbine engines heat pumps microprocessor and microchip cooling air conditioning HVAC and refrigeration air coils One of the widest uses of heat exchangers is for refrigeration and air conditioning. This class of heat exchangers is commonly called air coils, or just coils due to their often-serpentine internal tubing, or condensers in the case of refrigeration, and they are typically of the finned tube type. Liquid-to-air or air-to-liquid HVAC coils are typically of a modified crossflow arrangement. In vehicles, heat coils are often called heater cores. On the liquid side of these heat exchangers, the common fluids are water, a water-glycol solution, steam, or a refrigerant. For heating coils, hot water and steam are the most common, and this heated fluid is supplied by boilers, for example. For cooling coils, chilled water and refrigerant are most common. Chilled water is supplied from a chiller that is potentially located very far away, but refrigerant must come from a nearby condensing unit. When a refrigerant is used, the cooling coil is the evaporator, and the heating coil is the condenser in the vapor-compression refrigeration cycle. HVAC coils that use this direct-expansion of refrigerants are commonly called DX coils. Some DX coils are "microchannel" type. On the air side of HVAC coils a significant difference exists between those used for heating and those for cooling. Due to psychrometrics, air that is cooled often has moisture condensing out of it, except with extremely dry air flows. Heating some air increases that airflow's capacity to hold water. So heating coils need not consider moisture condensation on their air-side, but cooling coils must be adequately designed and selected to handle their particular latent (moisture) as well as the sensible (cooling) loads. The water that is removed is called condensate. For many climates, water or steam HVAC coils can be exposed to freezing conditions. Because water expands upon freezing, these somewhat expensive and difficult-to-replace thin-walled heat exchangers can easily be damaged or destroyed by just one freeze. As such, freeze protection of coils is a major concern of HVAC designers, installers, and operators. The introduction of indentations placed within the heat exchange fins controlled condensation, allowing water molecules to remain in the cooled air. The heat exchangers in direct-combustion furnaces, typical in many residences, are not 'coils'. They are, instead, gas-to-air heat exchangers that are typically made of stamped steel sheet metal. The combustion products pass on one side of these heat exchangers, and air to heat on the other.
A cracked heat exchanger is therefore a dangerous situation that requires immediate attention because combustion products may enter living space. Helical-coil Although double-pipe heat exchangers are the simplest to design, the better choice in the following cases would be the helical-coil heat exchanger (HCHE): The main advantage of the HCHE, like that for the Spiral heat exchanger (SHE), is its highly efficient use of space, especially when it's limited and not enough straight pipe can be laid. Under conditions of low flowrates (or laminar flow), such that the typical shell-and-tube exchangers have low heat-transfer coefficients and become uneconomical. When there is low pressure in one of the fluids, usually from accumulated pressure drops in other process equipment. When one of the fluids has components in multiple phases (solids, liquids, and gases), which tends to create mechanical problems during operations, such as plugging of small-diameter tubes. Cleaning of helical coils for these multiple-phase fluids can prove to be more difficult than for its shell and tube counterpart; however, the helical coil unit would require cleaning less often. These have been used in the nuclear industry as a method for exchanging heat in a sodium system for large liquid metal fast breeder reactors since the early 1970s, using an HCHE device invented by Charles E. Boardman and John H. Germer. There are several simple methods for designing HCHEs for all types of manufacturing industries, such as using the Ramachandra K. Patil (et al.) method from India and the Scott S. Haraburda method from the United States. However, these are based upon assumptions such as an estimated inside heat transfer coefficient, predicted flow around the outside of the coil, and constant heat flux. Spiral A modification to the perpendicular flow of the typical HCHE involves the replacement of the shell with another coiled tube, allowing the two fluids to flow parallel to one another, and which requires the use of different design calculations. These are the Spiral Heat Exchangers (SHE), which may refer to a helical (coiled) tube configuration; more generally, the term refers to a pair of flat surfaces that are coiled to form the two channels in a counter-flow arrangement. Each of the two channels has one long curved path. A pair of fluid ports is connected tangentially to the outer arms of the spiral; axial ports are common, but optional. The main advantage of the SHE is its highly efficient use of space. This attribute is often leveraged and partially reallocated to gain other improvements in performance, according to well-known tradeoffs in heat exchanger design. (A notable tradeoff is capital cost vs operating cost.) A compact SHE may be used to have a smaller footprint and thus lower all-around capital costs, or an oversized SHE may be used to have less pressure drop, less pumping energy, higher thermal efficiency, and lower energy costs. Construction The distance between the sheets in the spiral channels is maintained by using spacer studs that were welded prior to rolling. Once the main spiral pack has been rolled, alternate top and bottom edges are welded and each end closed by a gasketed flat or conical cover bolted to the body. This ensures no mixing of the two fluids occurs. Any leakage is from the periphery cover to the atmosphere, or to a passage that contains the same fluid.
Self cleaning Spiral heat exchangers are often used in the heating of fluids that contain solids and thus tend to foul the inside of the heat exchanger. The low pressure drop lets the SHE handle fouling more easily. The SHE uses a "self cleaning" mechanism, whereby fouled surfaces cause a localized increase in fluid velocity, increasing the drag (or fluid friction) on the fouled surface and helping to dislodge the blockage and keep the heat exchanger clean. The internal walls that make up the heat transfer surface are often rather thick, which makes the SHE very robust and able to last a long time in demanding environments. They are also easily cleaned, opening out like an oven, where any buildup of foulant can be removed by pressure washing. Self-cleaning water filters are used to keep the system clean and running without the need to shut down or replace cartridges and bags. Flow arrangements There are three main types of flows in a spiral heat exchanger: Counter-current flow: Fluids flow in opposite directions. These are used for liquid-liquid, condensing and gas cooling applications. Units are usually mounted vertically when condensing vapour and mounted horizontally when handling high concentrations of solids. Spiral flow/cross flow: One fluid is in spiral flow and the other in a cross flow. Spiral flow passages are welded at each side for this type of spiral heat exchanger. This type of flow is suitable for handling low-density gas, which passes through the cross flow, avoiding pressure loss. It can be used for liquid-liquid applications if one liquid has a considerably greater flow rate than the other. Distributed vapour/spiral flow: This design is that of a condenser, and is usually mounted vertically. It is designed to cater for the sub-cooling of both condensate and non-condensables. The coolant moves in a spiral and leaves via the top. Hot gases enter and leave as condensate via the bottom outlet. Applications The spiral heat exchanger is well suited to applications such as pasteurization, digester heating, heat recovery, pre-heating (see: recuperator), and effluent cooling. For sludge treatment, SHEs are generally smaller than other types of heat exchangers. Selection Due to the many variables involved, selecting optimal heat exchangers is challenging. Hand calculations are possible, but many iterations are typically needed. As such, heat exchangers are most often selected via computer programs, either by system designers, who are typically engineers, or by equipment vendors. To select an appropriate heat exchanger, the system designers (or equipment vendors) would firstly consider the design limitations for each heat exchanger type. Though cost is often the primary criterion, several other selection criteria are important: High/low pressure limits Thermal performance Temperature ranges Product mix (liquid/liquid, particulates or high-solids liquid) Pressure drops across the exchanger Fluid flow capacity Cleanability, maintenance and repair Materials required for construction Ability and ease of future expansion Material selection, such as copper, aluminium, carbon steel, stainless steel, nickel alloys, ceramic, polymer, and titanium.
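Computer-aided selection is, at its core, a screening of candidate types against limits like those above, followed by ranking. A toy illustration in Python (the candidate table and duty requirements are invented, not vendor data):

# hypothetical design limits per exchanger type (pressure in bar, temperature in deg C)
CANDIDATES = {
    "gasketed plate-and-frame": {"max_p": 25, "max_t": 180, "cleanable": True},
    "plate-and-shell": {"max_p": 100, "max_t": 450, "cleanable": False},
    "shell-and-tube": {"max_p": 300, "max_t": 550, "cleanable": True},
}

def screen(required_p, required_t, must_clean):
    """Return candidate types whose stated limits cover the duty."""
    return [name for name, lim in CANDIDATES.items()
            if lim["max_p"] >= required_p
            and lim["max_t"] >= required_t
            and (lim["cleanable"] or not must_clean)]

print(screen(required_p=40, required_t=200, must_clean=True))  # -> ['shell-and-tube']

In practice the survivors of such a screen would then be ranked on cost, pressure drop and the other criteria listed above, usually over many design iterations.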
Small-diameter coil technologies are becoming more popular in modern air conditioning and refrigeration systems because they have better rates of heat transfer than conventionally sized condenser and evaporator coils with round copper tubes and aluminum or copper fins, which have been the standard in the HVAC industry. Small diameter coils can withstand the higher pressures required by the new generation of environmentally friendlier refrigerants. Two small diameter coil technologies are currently available for air conditioning and refrigeration products: copper microgroove and brazed aluminum microchannel. Choosing the right heat exchanger (HX) requires some knowledge of the different heat exchanger types, as well as the environment where the unit must operate. Typically in the manufacturing industry, several differing types of heat exchangers are used for just one process or system to derive the final product. For example, a kettle HX for pre-heating, a double pipe HX for the 'carrier' fluid and a plate and frame HX for final cooling. With sufficient knowledge of heat exchanger types and operating requirements, an appropriate selection can be made to optimise the process. Monitoring and maintenance Online monitoring of commercial heat exchangers is done by tracking the overall heat transfer coefficient. The overall heat transfer coefficient tends to decline over time due to fouling. By periodically calculating the overall heat transfer coefficient from exchanger flow rates and temperatures, the owner of the heat exchanger can estimate when cleaning the heat exchanger is economically attractive. The integrity of plate and tubular heat exchangers can be tested in situ by the conductivity or helium gas methods. These methods confirm the integrity of the plates or tubes to prevent any cross contamination and the condition of the gaskets. Mechanical integrity monitoring of heat exchanger tubes may be conducted through nondestructive methods such as eddy current testing. Fouling Fouling occurs when impurities deposit on the heat exchange surface. Deposition of these impurities can decrease heat transfer effectiveness significantly over time and is caused by: Low wall shear stress Low fluid velocities High fluid velocities Reaction product solid precipitation Precipitation of dissolved impurities due to elevated wall temperatures The rate of heat exchanger fouling is determined by the rate of particle deposition less re-entrainment/suppression. This model was originally proposed in 1959 by Kern and Seaton. Crude Oil Exchanger Fouling. In commercial crude oil refining, crude oil is heated before entering the distillation column. A series of shell and tube heat exchangers typically exchanges heat between crude oil and other oil streams to preheat the crude before final heating in a furnace. Fouling occurs on the crude side of these exchangers due to asphaltene insolubility. The nature of asphaltene solubility in crude oil was successfully modeled by Wiehe and Kennedy. The precipitation of insoluble asphaltenes in crude preheat trains has been successfully modeled as a first-order reaction by Ebert and Panchal, who expanded on the work of Kern and Seaton. Cooling Water Fouling. Cooling water systems are susceptible to fouling. Cooling water typically has a high total dissolved solids content and suspended colloidal solids. Localized precipitation of dissolved solids occurs at the heat exchange surface due to wall temperatures higher than bulk fluid temperature.
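Whichever deposition mechanism dominates, fouling shows up operationally as the declining overall heat transfer coefficient described under monitoring above: the duty follows from a heat balance on one stream, $Q = \dot{m} c_p \Delta T$, and $U = Q / (A\, \Delta T_{lm})$. A minimal sketch in Python (the readings, area and cleaning threshold are invented):

import math

def lmtd(dt1, dt2):
    """Log-mean temperature difference of the two terminal approaches."""
    return (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1

def overall_u(m_dot, cp, t_in, t_out, area, dt_lm):
    """Estimate U in W/(m^2*K) from one stream's heat balance."""
    q = m_dot * cp * abs(t_out - t_in)  # duty of the measured stream, W
    return q / (area * dt_lm)

# hypothetical readings from a 12 m^2 water/water exchanger, clean vs. now
u_clean = overall_u(4.0, 4186, 30, 50, 12.0, lmtd(40, 20))
u_now = overall_u(4.0, 4186, 30, 44, 12.0, lmtd(46, 28))
print(f"U dropped from {u_clean:.0f} to {u_now:.0f} W/(m^2*K)")
if u_now < 0.7 * u_clean:  # assumed economic cleaning threshold
    print("schedule cleaning")

In practice such estimates would be trended over many readings, not a single pair.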
Low fluid velocities (less than 3 ft/s) allow suspended solids to settle on the heat exchange surface. Cooling water is typically on the tube side of a shell and tube exchanger because the tube side is easier to clean. To prevent fouling, designers typically ensure that the cooling water velocity stays above 3 ft/s (0.9 m/s) and that the bulk fluid temperature is kept low enough to avoid precipitation at the heated wall. Other approaches to fouling control combine the "blind" application of biocides and anti-scale chemicals with periodic lab testing. Maintenance Plate and frame heat exchangers can be disassembled and cleaned periodically. Tubular heat exchangers can be cleaned by such methods as acid cleaning, sandblasting, high-pressure water jet, bullet cleaning, or drill rods. In large-scale cooling water systems for heat exchangers, water treatment such as purification, addition of chemicals, and testing is used to minimize fouling of the heat exchange equipment. Other water treatment is also used in steam systems for power plants, etc., to minimize fouling and corrosion of the heat exchange and other equipment. A variety of companies have started using waterborne oscillation technology to prevent biofouling. Without the use of chemicals, this type of technology has helped maintain a low pressure drop in heat exchangers. Design and manufacturing regulations The design and manufacturing of heat exchangers is subject to numerous regulations, which vary according to the region in which they will be used. Design and manufacturing codes include: ASME Boiler and Pressure Vessel Code (US); PD 5500 (UK); BS 1566 (UK); EN 13445 (EU); CODAP (French); Pressure Equipment Safety Regulations 2016 (PER) (UK); Pressure Equipment Directive (EU); NORSOK (Norwegian); TEMA; API 12; and API 560. In nature Humans The human nasal passages serve as a heat exchanger, with cool air being inhaled and warm air being exhaled. Its effectiveness can be demonstrated by putting the hand in front of the face and exhaling, first through the nose and then through the mouth. Air exhaled through the nose is substantially cooler. This effect can be enhanced with clothing, by, for example, wearing a scarf over the face while breathing in cold weather. In species that have external testes (such as humans), the artery to the testis is surrounded by a mesh of veins called the pampiniform plexus. This cools the blood heading to the testes, while reheating the returning blood. Birds, fish, marine mammals "Countercurrent" heat exchangers occur naturally in the circulatory systems of fish, whales and other marine mammals. Arteries to the skin carrying warm blood are intertwined with veins from the skin carrying cold blood, causing the warm arterial blood to exchange heat with the cold venous blood. This reduces the overall heat loss in cold water. Heat exchangers are also present in the tongues of baleen whales as large volumes of water flow through their mouths. Wading birds use a similar system to limit heat losses from their body through their legs into the water. Carotid rete The carotid rete is a counter-current heat exchanging organ in some ungulates. The blood ascending the carotid arteries on its way to the brain flows via a network of vessels, where heat is discharged to the veins of cooler blood descending from the nasal passages.
The carotid rete allows Thomson's gazelle to maintain its brain almost 3 °C (5.4 °F) cooler than the rest of the body, and therefore aids in tolerating bursts in metabolic heat production such as those associated with outrunning cheetahs (during which the body temperature exceeds the maximum temperature at which the brain could function). Humans, along with other primates, lack a carotid rete. In industry Heat exchangers are widely used in industry both for cooling and heating large scale industrial processes. The type and size of heat exchanger used can be tailored to suit a process depending on the type of fluid, its phase, temperature, density, viscosity, pressures, chemical composition and various other thermodynamic properties. In many industrial processes there is waste of energy or a heat stream that is being exhausted; heat exchangers can be used to recover this heat and put it to use by heating a different stream in the process. This practice saves a lot of money in industry, as the heat supplied to other streams from the heat exchangers would otherwise come from an external source that is more expensive and more harmful to the environment. Heat exchangers are used in many industries, including: Waste water treatment Refrigeration Wine and beer making Petroleum refining Nuclear power In waste water treatment, heat exchangers play a vital role in maintaining optimal temperatures within anaerobic digesters to promote the growth of microbes that remove pollutants. Common types of heat exchangers used in this application are the double pipe heat exchanger as well as the plate and frame heat exchanger. In aircraft In commercial aircraft, heat exchangers are used to take heat from the engine's oil system to heat cold fuel. This improves fuel efficiency and reduces the possibility of water entrapped in the fuel freezing in components. Current market and forecast Estimated at US$17.5 billion in 2021, the global demand for heat exchangers is expected to experience robust growth of about 5% annually in the coming years. The market value is expected to reach US$27 billion by 2030. With an expanding desire for environmentally friendly options and increased development of offices, retail sectors, and public buildings, the market is expected to keep growing. A model of a simple heat exchanger A simple heat exchanger might be thought of as two straight pipes with fluid flow, which are thermally connected. Let the pipes be of equal length $L$, carrying fluids with heat capacity $c_i$ (energy per unit mass per unit change in temperature), and let the mass flow rate of the fluids through the pipes, both in the same direction, be $j_i$ (mass per unit time), where the subscript $i$ applies to pipe 1 or pipe 2. Temperature profiles for the pipes are $T_1(x)$ and $T_2(x)$, where $x$ is the distance along the pipe. Assume a steady state, so that the temperature profiles are not functions of time. Assume also that the only transfer of heat from a small volume of fluid in one pipe is to the fluid element in the other pipe at the same position, i.e., there is no transfer of heat along a pipe due to temperature differences in that pipe.
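Under these assumptions the model is also easy to integrate numerically, which gives a useful check on the closed-form solution derived below. A sketch in Python of the co-current case, with made-up values for the connection constant γ and the thermal mass flow rates $J_i = c_i j_i$:

# March along the pipe: J_i * dT_i/dx = gamma * (T_other - T_i), co-current flow
L, N = 10.0, 10_000    # pipe length (m) and number of integration steps
dx = L / N
gamma = 50.0           # W/(m*K), assumed thermal connection constant
J1, J2 = 400.0, 600.0  # W/K, assumed thermal mass flow rates c_i * j_i
T1, T2 = 80.0, 20.0    # inlet temperatures at x = 0, deg C

for _ in range(N):
    dT1 = gamma * (T2 - T1) / J1 * dx
    dT2 = gamma * (T1 - T2) / J2 * dx
    T1, T2 = T1 + dT1, T2 + dT2

print(f"outlet temperatures: T1 = {T1:.2f}, T2 = {T2:.2f}")
# Both streams relax exponentially toward the same weighted mean,
# (J1*80 + J2*20)/(J1+J2) = 44.0 here, exactly as the closed-form solution
# below predicts; J1*dT1 + J2*dT2 = 0 at every step (energy balance).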
By Newton's law of cooling the rate of change in energy of a small volume of fluid is proportional to the difference in temperatures between it and the corresponding element in the other pipe:

$$\frac{\partial u_1}{\partial t} = \gamma (T_2 - T_1)$$
$$\frac{\partial u_2}{\partial t} = \gamma (T_1 - T_2)$$

(this is for parallel flow in the same direction and opposite temperature gradients, but for counter-flow (countercurrent) heat exchange the sign is opposite in the second equation, in front of $\gamma (T_1 - T_2)$), where $u_i$ is the thermal energy per unit length and γ is the thermal connection constant per unit length between the two pipes. This change in internal energy results in a change in the temperature of the fluid element. The time rate of change for the fluid element being carried along by the flow is:

$$\frac{\partial u_1}{\partial t} = J_1 \frac{\partial T_1}{\partial x}$$
$$\frac{\partial u_2}{\partial t} = J_2 \frac{\partial T_2}{\partial x}$$

where $J_i = c_i j_i$ is the "thermal mass flow rate". The differential equations governing the heat exchanger may now be written as:

$$J_1 \frac{\partial T_1}{\partial x} = \gamma (T_2 - T_1)$$
$$J_2 \frac{\partial T_2}{\partial x} = \gamma (T_1 - T_2)$$

Since the system is in a steady state, there are no partial derivatives of temperature with respect to time, and since there is no heat transfer along the pipe, there are no second derivatives in $x$ as is found in the heat equation. These two coupled first-order differential equations may be solved to yield:

$$T_1 = A - \frac{B k_1}{k} e^{-kx}$$
$$T_2 = A + \frac{B k_2}{k} e^{-kx}$$

where $k_1 = \gamma / J_1$, $k_2 = \gamma / J_2$, $k = k_1 + k_2$ (this is for parallel-flow, but for counter-flow the sign in front of $k_2$ is negative, so that if $k_2 = k_1$, for the same "thermal mass flow rate" in both opposite directions, the gradient of temperature is constant and the temperatures linear in position $x$ with a constant difference along the exchanger, explaining why the countercurrent design is the most efficient), and $A$ and $B$ are two as yet undetermined constants of integration. Let $T_{10}$ and $T_{20}$ be the temperatures at $x = 0$ and let $T_{1L}$ and $T_{2L}$ be the temperatures at the end of the pipe at $x = L$. Define the average temperatures in each pipe as:

$$\overline{T}_1 = \frac{1}{L} \int_0^L T_1(x)\,dx \qquad \overline{T}_2 = \frac{1}{L} \int_0^L T_2(x)\,dx$$

Using the solutions above, these temperatures are:

$$T_{10} = A - \frac{B k_1}{k} \qquad T_{20} = A + \frac{B k_2}{k}$$
$$T_{1L} = A - \frac{B k_1}{k} e^{-kL} \qquad T_{2L} = A + \frac{B k_2}{k} e^{-kL}$$
$$\overline{T}_1 = A - \frac{B k_1}{k^2 L} \left(1 - e^{-kL}\right) \qquad \overline{T}_2 = A + \frac{B k_2}{k^2 L} \left(1 - e^{-kL}\right)$$

Choosing any two of the temperatures above eliminates the constants of integration, letting us find the other four temperatures. We find the total energy transferred by integrating the expressions for the time rate of change of internal energy per unit length:

$$\frac{du_1}{dt} = \int_0^L \frac{\partial u_1}{\partial t}\,dx = J_1 (T_{1L} - T_{10}) = \gamma L (\overline{T}_2 - \overline{T}_1)$$
$$\frac{du_2}{dt} = \int_0^L \frac{\partial u_2}{\partial t}\,dx = J_2 (T_{2L} - T_{20}) = \gamma L (\overline{T}_1 - \overline{T}_2)$$

By the conservation of energy, the sum of the two energies is zero. The quantity $\overline{T}_2 - \overline{T}_1$ is known as the Log mean temperature difference, and is a measure of the effectiveness of the heat exchanger in transferring heat energy. See also Architectural engineering Chemical engineering Cooling tower Copper in heat exchangers Heat pipe Heat pump Heat recovery ventilation Jacketed vessel Log mean temperature difference (LMTD) Marine heat exchangers Mechanical engineering Micro heat exchanger Moving bed heat exchanger Packed bed and in particular Packed columns Pumpable ice technology Reboiler Recuperator, or cross plate heat exchanger Regenerator Run around coil Steam generator (nuclear power) Surface condenser Toroidal expansion joint Thermosiphon Thermal wheel, or rotary heat exchanger (including enthalpy wheel and desiccant wheel) Tube tool Waste heat
External links Shell and Tube Heat Exchanger Design Software for Educational Applications (PDF) EU Pressure Equipment Guideline A Thermal Management Concept For More Electric Aircraft Power System Application (PDF) Heat transfer Gas technologies
Heat exchanger
[ "Physics", "Chemistry", "Engineering" ]
9,768
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Chemical equipment", "Heat exchangers", "Thermodynamics" ]
153,563
https://en.wikipedia.org/wiki/Scilab
Scilab is a free and open-source, cross-platform numerical computational package and a high-level, numerically oriented programming language. It can be used for signal processing, statistical analysis, image enhancement, fluid dynamics simulations, numerical optimization, and modeling and simulation of explicit and implicit dynamical systems and (if the corresponding toolbox is installed) symbolic manipulations. Scilab is one of the two major open-source alternatives to MATLAB, the other one being GNU Octave. Scilab puts less emphasis on syntactic compatibility with MATLAB than Octave does, but it is similar enough that some authors suggest that it is easy to transfer skills between the two systems. Introduction Scilab is a high-level, numerically oriented programming language. The language provides an interpreted programming environment, with matrices as the main data type. By using matrix-based computation, dynamic typing, and automatic memory management, many numerical problems may be expressed in a reduced number of code lines, as compared to similar solutions using traditional languages, such as Fortran, C, or C++. This allows users to rapidly construct models for a range of mathematical problems. While the language provides simple matrix operations such as multiplication, the Scilab package also provides a library of high-level operations such as correlation and complex multidimensional arithmetic. Scilab also includes a free package called Xcos for modeling and simulation of explicit and implicit dynamical systems, including both continuous and discrete sub-systems. Xcos is the open source equivalent to Simulink from the MathWorks. As the syntax of Scilab is similar to MATLAB, Scilab includes a source code translator for assisting the conversion of code from MATLAB to Scilab. Scilab is available free of cost under an open source license. Due to the open source nature of the software, some user contributions have been integrated into the main program. Syntax Scilab syntax is largely based on the MATLAB language. The simplest way to execute Scilab code is to type it in at the prompt, -->, in the graphical command window. In this way, Scilab can be used as an interactive mathematical shell. Hello World! in Scilab:

disp('Hello World');

Plotting a 3D surface function:

// A simple plot of z = f(x,y)
t=[0:0.3:2*%pi]';
z=sin(t)*cos(t');
plot3d(t,t',z)

Determining the equivalent single index corresponding to a given set of subscript values:

function I=sub2ind(dims,varargin)
    //I = sub2ind(dims,i1,i2,..) returns the linear index equivalent to the
    //row, column, ... subscripts in the arrays i1,i2,.. for a matrix of
    //size dims.
    //I = sub2ind(dims,Mi) returns the linear index
    //equivalent to the n subscripts in the columns of the matrix Mi for a matrix
    //of size dims.
    d=[1;cumprod(matrix(dims(1:$-1),-1,1))]
    for i=1:size(varargin)
        if varargin(i)==[] then I=[],return,end
    end
    if size(varargin)==1 then
        //subindices are the columns of the argument
        I=(varargin(1)-1)*d+1
    else
        //subindices are given as separated arguments
        I=1
        for i=1:size(varargin)
            I=I+(varargin(i)-1)*d(i)
        end
    end
endfunction

Toolboxes Scilab has many contributed toolboxes for different tasks, such as Scilab Image Processing Toolbox (SIP) and its variants (such as SIVP) Scilab Wavelet Toolbox Scilab Java and .NET Module Scilab Remote Access Module More are available on ATOMS Portal or the Scilab forge. History Scilab was created in 1990 by researchers from INRIA and École nationale des ponts et chaussées (ENPC).
It was initially named Ψlab (Psilab). The Scilab Consortium was formed in May 2003 to broaden contributions and promote Scilab as worldwide reference software in academia and industry. In July 2008, in order to improve the technology transfer, the Scilab Consortium joined the Digiteo Foundation. Scilab 5.1, the first release compiled for Mac, was available in early 2009, and supported Mac OS X 10.5, a.k.a. Leopard. Thus, OS X 10.4, Tiger, was never supported except by porting from sources. Linux and Windows builds had been released since the beginning, with Solaris support dropped with version 3.1.1, and HP-UX dropped with version 4.1.2 after spotty support. In June 2010, the Consortium announced the creation of Scilab Enterprises. Scilab Enterprises develops and markets, directly or through an international network of affiliated services providers, a comprehensive set of services for Scilab users. Scilab Enterprises also develops and maintains the Scilab software. The ultimate goal of Scilab Enterprises is to help make the use of Scilab more effective and easy. In February 2017, Scilab 6.0.0 was released, which leveraged the latest C++ standards and lifted memory allocation limitations. Since July 2012, Scilab has been developed and published by Scilab Enterprises, and in early 2017 Scilab Enterprises was acquired by Virtual Prototyping pioneer ESI Group. Since 2019 and Scilab 6.0.2, the University of Technology of Compiègne provides resources to build and maintain the macOS version. Since mid-2022 the Scilab team has been part of Dassault Systèmes. Scilab Cloud App & Scilab Cloud API Since 2016, Scilab can be embedded in a browser and be called via an interface written in Scilab or an API. This new deployment method has the notable advantages of masking code and data as well as providing large computational power. These features have not been included in the open source version of Scilab and are still proprietary developments. See also SageMath List of numerical-analysis software Comparison of numerical-analysis software SimulationX External links Scilab website Array programming languages Dassault Group Free educational software Free mathematics software Free software programmed in Fortran Numerical analysis software for Linux Numerical analysis software for macOS Numerical analysis software for Windows Numerical programming languages Science software that uses GTK
Scilab
[ "Mathematics" ]
1,367
[ "Free mathematics software", "Mathematical software" ]
153,783
https://en.wikipedia.org/wiki/Crystal%20optics
Crystal optics is the branch of optics that describes the behaviour of light in anisotropic media, that is, media (such as crystals) in which light behaves differently depending on which direction the light is propagating. The index of refraction depends on both composition and crystal structure and can be calculated using the Gladstone–Dale relation. Crystals are often naturally anisotropic, and in some media (such as liquid crystals) it is possible to induce anisotropy by applying an external electric field. Isotropic media Typical transparent media such as glasses are isotropic, which means that light behaves the same way no matter which direction it is travelling in the medium. In terms of Maxwell's equations in a dielectric, this gives a relationship between the electric displacement field D and the electric field E:

$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}$$

where ε0 is the permittivity of free space and P is the electric polarization (the vector field corresponding to electric dipole moments present in the medium). Physically, the polarization field can be regarded as the response of the medium to the electric field of the light. Electric susceptibility In an isotropic and linear medium, this polarization field P is proportional and parallel to the electric field E:

$$\mathbf{P} = \varepsilon_0 \chi \mathbf{E}$$

where χ is the electric susceptibility of the medium. The relation between D and E is thus:

$$\mathbf{D} = \varepsilon_0 \mathbf{E} + \varepsilon_0 \chi \mathbf{E} = \varepsilon_0 (1 + \chi) \mathbf{E} = \varepsilon \mathbf{E}$$

where $\varepsilon = \varepsilon_0 (1 + \chi)$ is the dielectric constant of the medium. The value 1+χ is called the relative permittivity of the medium, and is related to the refractive index n, for non-magnetic media, by

$$n = \sqrt{1 + \chi}$$

Anisotropic media In an anisotropic medium, such as a crystal, the polarisation field P is not necessarily aligned with the electric field of the light E. In a physical picture, this can be thought of as the dipoles induced in the medium by the electric field having certain preferred directions, related to the physical structure of the crystal. This can be written as:

$$\mathbf{P} = \varepsilon_0 \boldsymbol{\chi} \mathbf{E}$$

Here χ is not a number as before but a tensor of rank 2, the electric susceptibility tensor. In terms of components in 3 dimensions:

$$\begin{pmatrix} P_x \\ P_y \\ P_z \end{pmatrix} = \varepsilon_0 \begin{pmatrix} \chi_{xx} & \chi_{xy} & \chi_{xz} \\ \chi_{yx} & \chi_{yy} & \chi_{yz} \\ \chi_{zx} & \chi_{zy} & \chi_{zz} \end{pmatrix} \begin{pmatrix} E_x \\ E_y \\ E_z \end{pmatrix}$$

or using the summation convention:

$$P_i = \varepsilon_0 \chi_{ij} E_j$$

Since χ is a tensor, P is not necessarily colinear with E. In nonmagnetic and transparent materials, $\chi_{ij} = \chi_{ji}$, i.e. the χ tensor is real and symmetric. In accordance with the spectral theorem, it is thus possible to diagonalise the tensor by choosing the appropriate set of coordinate axes, zeroing all components of the tensor except $\chi_{xx}$, $\chi_{yy}$ and $\chi_{zz}$. This gives the set of relations:

$$P_x = \varepsilon_0 \chi_{xx} E_x \qquad P_y = \varepsilon_0 \chi_{yy} E_y \qquad P_z = \varepsilon_0 \chi_{zz} E_z$$

The directions x, y and z are in this case known as the principal axes of the medium. Note that these axes will be orthogonal if all entries in the χ tensor are real, corresponding to a case in which the refractive index is real in all directions. It follows that D and E are also related by a tensor:

$$\mathbf{D} = \varepsilon_0 (1 + \boldsymbol{\chi}) \mathbf{E} = \varepsilon_0 \boldsymbol{\varepsilon} \mathbf{E}$$

Here ε is known as the relative permittivity tensor or dielectric tensor. Consequently, the refractive index of the medium must also be a tensor. Consider a light wave propagating along the z principal axis polarised such that the electric field of the wave is parallel to the x-axis. The wave experiences a susceptibility $\chi_{xx}$ and a permittivity $\varepsilon_{xx}$. The refractive index is thus:

$$n_x = \sqrt{1 + \chi_{xx}}$$

For a wave polarised in the y direction:

$$n_y = \sqrt{1 + \chi_{yy}}$$

Thus these waves will see two different refractive indices and travel at different speeds. This phenomenon is known as birefringence and occurs in some common crystals such as calcite and quartz. If $\chi_{xx} = \chi_{yy} \neq \chi_{zz}$, the crystal is known as uniaxial. (See Optic axis of a crystal.) If $\chi_{xx} \neq \chi_{yy}$ and $\chi_{yy} \neq \chi_{zz}$ the crystal is called biaxial.
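Numerically, finding the principal axes and indices is just an eigendecomposition of the real, symmetric susceptibility tensor. A sketch in Python with NumPy (the tensor entries are invented):

import numpy as np

# a hypothetical real, symmetric susceptibility tensor in some lab frame
chi = np.array([[1.50, 0.10, 0.00],
                [0.10, 1.50, 0.00],
                [0.00, 0.00, 1.20]])

vals, vecs = np.linalg.eigh(chi)  # principal susceptibilities and principal axes
n = np.sqrt(1 + vals)             # n_i = sqrt(1 + chi_i) along each principal axis
print("principal refractive indices:", np.round(n, 4))

distinct = len(np.unique(np.round(n, 6)))
print("crystal class:", {1: "isotropic", 2: "uniaxial", 3: "biaxial"}[distinct])

For the values above this prints three distinct indices, i.e. a biaxial medium in the classification that follows.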
A uniaxial crystal exhibits two refractive indices, an "ordinary" index ($n_o$) for light polarised in the x or y directions, and an "extraordinary" index ($n_e$) for polarisation in the z direction. A uniaxial crystal is "positive" if $n_e > n_o$ and "negative" if $n_e < n_o$. Light polarised at some angle to the axes will experience a different phase velocity for different polarization components, and cannot be described by a single index of refraction. This is often depicted as an index ellipsoid. Other effects Certain nonlinear optical phenomena such as the electro-optic effect cause a variation of a medium's permittivity tensor when an external electric field is applied, proportional (to lowest order) to the strength of the field. This causes a rotation of the principal axes of the medium and alters the behaviour of light travelling through it; the effect can be used to produce light modulators. In response to a magnetic field, some materials can have a dielectric tensor that is complex-Hermitian; this is called a gyro-magnetic or magneto-optic effect. In this case, the principal axes are complex-valued vectors, corresponding to elliptically polarized light, and time-reversal symmetry can be broken. This can be used to design optical isolators, for example. A dielectric tensor that is not Hermitian gives rise to complex eigenvalues, which corresponds to a material with gain or absorption at a particular frequency. See also Birefringence Index ellipsoid Optical rotation Prism External links A virtual polarization microscope Condensed matter physics Crystallography Nonlinear optics
Crystal optics
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,126
[ "Phases of matter", "Materials science", "Crystallography", "Condensed matter physics", "Matter" ]
153,852
https://en.wikipedia.org/wiki/Chief%20information%20officer
Chief information officer (CIO), chief digital information officer (CDIO) or information technology (IT) director is a job title commonly given to the most senior executive in an enterprise who works with information technology and computer systems, in order to support enterprise goals. Normally, the CIO reports directly to the chief executive officer, but may also report to the chief operating officer or chief financial officer. In military organizations, the CIO reports to the commanding officer. The role of chief information officer was first defined in 1981 by William R. Synnott, former senior vice president of the Bank of Boston, and William H. Gruber, a former professor at the Massachusetts Institute of Technology Sloan School of Management. A CIO will sometimes serve as a member of the board of directors. The need for CIOs CIOs and CDIOs play an important role in businesses that use technology and data because they provide a critical interface between business needs, user needs, and the information and communication technology (ICT) used in the work. In recent years it has become increasingly understood that knowledge limited to just business or just IT is not sufficient for success in this role. Instead, CIOs need both kinds of knowledge to manage IT resources and to manage and plan "ICT, including policy and practice development, planning, budgeting, resourcing and training." Also, CIOs are playing an increasingly important role in helping to control costs and increase profits via the use of ICT, and to limit potential organizational damage by setting up appropriate IT controls and planning for IT recovery from possible disasters. These objectives also demand a combination of personal skills. Computer Weekly magazine highlights that "53% of IT leaders report a shortage of [IT managers] with a high-level of personal skills, such as communication and leadership" in the workplace. Because information technologies and digital tools evolve so quickly, organizations are sometimes challenged to find staff with the necessary combination of skills in the marketplace, and may look to train existing staff to mitigate skill shortages. CIOs are needed to bridge the gap between IT and non-IT professional roles to support effective working relationships. Roles and responsibilities The chief information officer of an organization is responsible for several business functions. First and most importantly, the CIO must fulfill the role of a business leader. The CIO makes executive decisions regarding matters such as the purchase of IT equipment from suppliers or the creation of new IT systems. Also as a business leader, the CIO is responsible for leading and directing the workforce of their specific organization. A CIO is typically "required to have strong organizational skills." This is particularly relevant for the chief information officer of an organization who must balance roles and responsibilities in order to gain a competitive advantage, whilst keeping the best interests of the organization's employees in mind. CIOs also have the responsibility of recruiting, so it is important that they work proactively to source and nurture the best employees possible. CIOs are required to map out both the ICT strategy and ICT policy of an organization. The ICT strategy covers future-proofing, procurement, and the external and internal standards laid out by an organization. Similarly, the CIO must develop the ICT policy, which details how ICT is utilized and applied.
Both are needed for the protection of the organization in the short and long term, and for the process of strategizing for the future. Paul Burfitt, former CIO of AstraZeneca, also outlines the role of the CIO in IT governance, which he refers to as the "clarifying [of] accountability and the role of committees". In recent years, CIOs have become more closely involved in customer-facing products. With the rising awareness in organizations that their customers are expecting digital services as part of their relationship with an organization, CIOs have been tasked with more product-oriented responsibilities. Risks involved The CIO faces a rather high risk of error and failures, as a result of the challenging nature of the role, along with a large number of responsibilities – such as the provision of finance, recruitment of professionals, establishing data protection and development of policy and strategy. The CIO of U.S. company Target was forced to resign in 2014 after the theft of 40 million credit card details and 70 million customer details by hackers. CIOs who are knowledgeable about their industry are able to adapt and thereby reduce their chances of error. With the introduction of legislation such as the General Data Protection Regulation (GDPR), CIOs have become increasingly focused on how their role is regulated and on how failures can lead to financial and reputational damage to a business. However, regulations such as GDPR have also been advantageous to CIOs, enabling them to have the budget and authority in the organization to make significant changes to the way information is managed. Sabah Khan-Carter of Rupert Murdoch's News Corp described GDPR as "a really big opportunity for most organizations". Educational background and technology skills Many candidates have a Master of Business Administration degree or a Master of Science in Management degree. More recently, CIOs' leadership capabilities, business acumen, and strategic perspectives have taken precedence over technical skills. It is now quite common for CIOs to be appointed from the business side of the organization, especially if they have project management skills. Despite the strategic nature of the role, a 2017 survey of 890 CIOs across 23 countries, conducted by Logicalis, found that 62% of CIOs spend 60% or more of their time on day-to-day IT activities. In 2012, Gartner Executive Programs conducted a global CIO survey and received responses from 2,053 CIOs from 41 countries and 36 industries. Gartner reported that survey results indicated that the top ten technology priorities for CIOs for 2013 were analytics and business intelligence, mobile technologies, cloud computing, collaboration technologies, legacy modernization, IT management, customer relationship management, virtualization, security, and enterprise resource planning. CIO magazine's "State of the CIO 2008" survey asked 558 IT leaders whom they report to, and the results were: CEO (41%), CFO (23%), COO (16%), corporate CIO (7%) and other (13%). Typically, the CIO is involved with driving the analysis and re-engineering of existing business processes, identifying and developing the capability to use new tools, reshaping the enterprise's physical infrastructure and network access, and identifying and exploiting the enterprise's knowledge resources. Many CIOs head the enterprise's efforts to integrate the Internet into both its long-term strategy and its immediate business plans.
CIOs are often tasked with either driving or heading up crucial IT projects that are essential to the strategic and operational objectives of an organization. A good example of this would be the implementation of an enterprise resource planning (ERP) system, which typically has wide-ranging implications for most organizations. Another way that the CIO role has changed is an increasing focus on service management. As SaaS, IaaS, BPO and other flexible delivery techniques are brought into organizations, the CIO usually manages these third-party services. In essence, a CIO in the modern organization needs business skills and the ability to relate to the organization as a whole, as opposed to being a technological expert with limited functional business expertise. The CIO position is as much about anticipating technology and usage trends in the market place as it is about ensuring that the business navigates these trends with expert guidance and strategic planning aligned to the corporate strategy. Distinction between CIO, CDO, and CTO The roles of chief information officer, chief digital officer and chief technology officer are often mixed up. It has been stated that CTOs are concerned with technology itself, often customer-facing, whereas CIOs are much more concerned with its applications within the business and how they can be managed. More specifically, CIOs oversee a business's IT systems and functions, create and deliver strategies and policies, and focus on internal customers. In contrast to this, CTOs focus on the customers external to the organization and on how technology can make the company more profitable. The traditional definition of the CTO, focused on using technology as an external competitive advantage, now overlaps with that of the CDO, who uses the power of modern technologies, online design and big data to digitize a business. CIO Councils CIO Councils bring together a number of CIOs from different organizations which aim to work together, for example across healthcare or across government. Examples include the UK public sector's CIO Council, the London CIO Council for the healthcare sector, and the Chief Information Officers Council in the USA. Awards and recognition It is not uncommon for CIOs to be recognized and awarded annually, particularly in the technology space. These awards are commonly dictated by the significance of their contribution to the industry and generally occur in local markets only. Awards are generally judged by industry peers, or senior qualified executives such as the chief executive officer, chief operating officer or chief financial officer. Generally, awards recognize substantial impact on the local technology market. In Australia, the top 50 CIOs are recognized annually under the CIO50 banner. In the United States of America, the United Kingdom and New Zealand, CIOs are recognized under the CIO100 banner. See also Chief information security officer Chief technology officer Chief AI officer Chief digital officer Chief executive officer Chief financial officer Chief operating officer Chief investment officer Chief knowledge officer Chief accessibility officer Public information officer Information systems Management occupations Business occupations
Chief information officer
[ "Technology" ]
1,913
[ "Information systems", "Information technology" ]
153,861
https://en.wikipedia.org/wiki/Moat
A moat is a deep, broad ditch dug around a castle, fortification, building, or town, historically to provide it with a preliminary line of defence. Moats can be dry or filled with water. In some places, moats evolved into more extensive water defences, including natural or artificial lakes, dams and sluices. In older fortifications, such as hillforts, they are usually referred to simply as ditches, although the function is similar. In later periods, moats or water defences may be largely ornamental. They could also act as a sewer. Historical use Ancient Some of the earliest evidence of moats has been uncovered around ancient Egyptian fortresses. One example is at Buhen, a settlement excavated in Nubia. Other evidence of ancient moats is found in the ruins of Babylon, and in reliefs from ancient Egypt, Assyria, and other cultures in the region. Evidence of early moats around settlements has been discovered in many archaeological sites throughout Southeast Asia, including Noen U-Loke, Ban Non Khrua Chut, Ban Makham Thae and Ban Non Wat. The use of the moats could have been either for defensive or agricultural purposes. Medieval Moats were excavated around castles and other fortifications as part of the defensive system as an obstacle immediately outside the walls. In suitable locations, they might be filled with water. A moat made access to the walls difficult for siege weapons such as siege towers and battering rams, which needed to be brought up against a wall to be effective. A water-filled moat made the practice of mining – digging tunnels under the castles in order to effect a collapse of the defences – very difficult as well. Segmented moats have one dry section and one section filled with water. Dry moats that cut across the narrow part of a spur or peninsula are called neck ditches. Moats separating different elements of a castle, such as the inner and outer wards, are cross ditches. The word was adapted in Middle English from the Old French (motte) and was first applied to the central mound on which a castle was erected (see Motte and bailey) and then came to be applied to the excavated ring, a 'dry moat'. The shared derivation implies that the two features were closely related and possibly constructed at the same time. The term moat is also applied to natural formations reminiscent of the artificial structure and to similar modern architectural features. Later western fortification With the introduction of siege artillery, a new style of fortification emerged in the 16th century using low walls and projecting strong points called bastions, which was known as the trace italienne. The walls were further protected from infantry attack by wet or dry moats, sometimes in elaborate systems. When this style of fortification was superseded by lines of polygonal forts in the mid-19th century, moats continued to be used for close protection. Africa The Walls of Benin were a combination of ramparts and moats, called Iya, used as a defence of the capital Benin City in present-day Edo State of Nigeria. It was considered the second-largest man-made structure lengthwise after the Great Wall of China, and the largest earthwork in the world. Recent work by Patrick Darling has established it as the largest man-made structure in the world, larger than Sungbo's Eredo, also in Nigeria. It enclosed 6,500 square kilometres of community lands. Its length was over 16,000 kilometres of earth boundaries. It was estimated that earliest construction began in 800 and continued into the mid-15th century.
The walls are built of a ditch and dike structure, the ditch dug to form an inner moat with the excavated earth used to form the exterior rampart. The Benin Walls were ravaged by the British in 1897. Scattered pieces of the walls remain in Edo, with material being used by the locals for building purposes. The walls continue to be torn down for real-estate developments. The Walls of Benin City were the world's largest man-made structure. Fred Pearce wrote in New Scientist: They extend for some 16,000 kilometres in all, in a mosaic of more than 500 interconnected settlement boundaries. They cover 6,500 square kilometres and were all dug by the Edo people. In all, they are four times longer than the Great Wall of China, and consumed a hundred times more material than the Great Pyramid of Cheops. They took an estimated 150 million hours of digging to construct, and are perhaps the largest single archaeological phenomenon on the planet. Asia Japanese castles often have very elaborate moats, with up to three moats laid out in concentric circles around the castle and a host of different patterns engineered around the landscape. The outer moat of a Japanese castle typically protects other support buildings in addition to the castle. As many Japanese castles have historically been a very central part of their cities, the moats have provided a vital waterway to the city. Even in modern times the moat system of the Tokyo Imperial Palace consists of a very active body of water, hosting everything from rental boats and fishing ponds to restaurants. Most modern Japanese castles have moats filled with water, but castles in the feudal period more commonly had 'dry moats' (karabori), a trench. A tatebori is a dry moat dug into a slope. A unejō tatebori is a series of parallel trenches running up the sides of the excavated mountain, and the earthen wall, which was also called dorui, was an outer wall made of earth dug out from a moat. Even today it is common for Japanese mountain castles to have dry moats. A mizubori is a moat filled with water. Moats were also used in the Forbidden City and Xi'an in China; in Vellore Fort in India; Hsinchu in Taiwan; and in Southeast Asia, such as at Angkor Wat in Cambodia; Mandalay in Myanmar; Chiang Mai in Thailand and Huế in Vietnam. Australia The only moated fort ever built in Australia was Fort Lytton in Brisbane. As Brisbane was much more vulnerable to attack than either Sydney or Melbourne, a series of coastal defences was built throughout Moreton Bay, Fort Lytton being the largest. Built between 1880 and 1881 in response to fear of a Russian invasion, it is a pentagonal fortress concealed behind grassy embankments and surrounded by a water-filled moat. North America Moats were developed independently by North American indigenous people of the Mississippian culture as the outer defence of some fortified villages. The remains of a 16th-century moat are still visible at the Parkin Archeological State Park in eastern Arkansas. The Maya people also used moats, for example in the city of Becan. European colonists in the Americas often built dry ditches surrounding forts built to protect important landmarks, harbours or cities (e.g. Fort Jay on Governors Island in New York Harbor). Modern usage Architectural usage Dry moats were a key element used in French Classicism and Beaux-Arts architecture dwellings, both as decorative designs and to provide discreet access for service.
Excellent examples of these can be found in Newport, Rhode Island at Miramar and The Elms, as well as at Carolands, outside of San Francisco, California, and at Union Station in Toronto, Ontario, Canada. Additionally, a dry moat can allow light and fresh air to reach basement workspaces, as for example at the James Farley Post Office in New York City. Anti-terrorist moats Whilst moats are no longer a significant tool of warfare, modern architectural building design continues to use them as a defence against certain modern threats, such as terrorist attacks from car bombs and improvised fighting vehicles. For example, the new location of the Embassy of the United States in London, opened in 2018, includes a moat among its security features - the first moat built in England for more than a century. Modern moats may also be used for aesthetic or ergonomic purposes. The Catawba Nuclear Station has a concrete moat around the sides of the plant not bordering a lake. The moat is a part of precautions added to such sites after the September 11, 2001 attacks. Safety moats Moats, rather than fences, separate animals from spectators in many modern zoo installations. Moats were first used in this way by Carl Hagenbeck at his Tierpark in Hamburg, Germany. The structure, with a vertical outer retaining wall rising directly from the moat, is an extended usage of the ha-ha of English landscape gardening. Border defence moats In 2004, plans were suggested for a two-mile moat across the southern border of the Gaza Strip to prevent tunnelling from Egyptian territory to the border town of Rafah. In 2008, city officials in Yuma, Arizona planned to dig out a two-mile stretch of a wetland known as Hunters Hole to control immigrants coming from Mexico. Pest control moats Researchers of jumping spiders, which have excellent vision and adaptable tactics, built water-filled miniature moats, too wide for the spiders to jump across. Some specimens were rewarded for jumping then swimming and others for swimming only. Portia fimbriata from Queensland generally succeeded, for whichever method they were rewarded. When specimens from two different populations of Portia labiata were set the same task, members of one population determined which method earned them a reward, whilst members of the other continued to use whichever method they tried first and did not try to adapt. As a basic method of pest control in bonsai, a moat may be used to restrict access of crawling insects to the bonsai. See also Drawbridge Gracht Ha-ha wall Moated settlements Moot hill (sometimes written as Moat Hill) Neck ditch Bullengraben Engineering barrages Castle architecture Masonry Water
Moat
[ "Engineering", "Environmental_science" ]
1,947
[ "Hydrology", "Engineering barrages", "Construction", "Military engineering", "Water", "Masonry" ]
153,911
https://en.wikipedia.org/wiki/Invisibility
Invisibility is the state of an object that cannot be seen. An object in this state is said to be invisible (literally, "not visible"). The phenomenon is studied by physics and perceptual psychology. Since objects can be seen by light from a source reflecting off their surfaces and hitting the viewer's eyes, the most natural form of invisibility (whether real or fictional) is an object that neither reflects nor absorbs light (that is, it allows light to pass through it). This is known as transparency, and is seen in many naturally occurring materials (although no naturally occurring material is 100% transparent). Invisibility perception depends on several optical and visual factors. For example, invisibility depends on the eyes of the observer and/or the instruments used. Thus an object can be classified as "invisible" to a person, animal, instrument, etc. In research on sensorial perception it has been shown that invisibility is perceived in cycles. Invisibility is often considered to be the supreme form of camouflage, as it does not reveal to the viewer any kind of vital signs, visual effects, or any frequencies of the electromagnetic spectrum detectable to the human eye, instead making use of radio, infrared or ultraviolet wavelengths. In illusion optics, invisibility is a special case of illusion effects: the illusion of free space. The term is often used in fantasy and science fiction, where objects cannot be seen by means of magic or hypothetical technology. Practical efforts Technology can be used theoretically or practically to render real-world objects invisible. Making use of a real-time image displayed on a wearable display, it is possible to create a see-through effect. This is known as active camouflage. Though stealth technology is declared to be invisible to radar, all officially disclosed applications of the technology can only reduce the size and/or clarity of the signature detected by radar. In 2003, the Chilean scientist Gunther Uhlmann postulated the first mathematical equations for creating invisible materials. In 2006, a team of researchers from Britain and the US announced the development of a real cloak of invisibility, an artificially made metamaterial that is invisible to the microwave spectrum, though it is only in its first stages. In filmmaking, people, objects, or backgrounds can be made to look invisible on camera through a process known as chroma keying. Engineers and scientists have performed various kinds of research to investigate the possibility of finding ways to create real optical invisibility (cloaks) for objects. Methods are typically based on implementing the theoretical techniques of transformation optics, which have given rise to several theories of cloaking. Currently, a practical cloaking device does not exist. A 2006 theoretical work predicted that the imperfections would be minor, and that metamaterials may make real-life "cloaking devices" practical. The technique was predicted to be applicable to radio waves within five years, with the distortion of visible light an eventual possibility. The theory that light waves can be acted upon the same way as radio waves is now a popular idea among scientists. The cloaked object can be compared to a stone in a river, around which water passes but, slightly downstream, leaves no trace of the stone. Comparing light waves to the water and the "cloaked" object to the stone, the goal is to have light waves pass around that object, leaving no visible sign of it, possibly not even a shadow.
This is the technique depicted in the 2000 television portrayal of The Invisible Man. Two teams of scientists worked separately to create two "invisibility cloaks" from metamaterials engineered at the nanoscale. They demonstrated for the first time the possibility of cloaking three-dimensional (3-D) objects with artificially engineered materials that redirect radar, light or other waves around an object. While one uses a type of fishnet of metal layers to reverse the direction of light, the other uses tiny silver wires. Xiang Zhang, of the University of California, Berkeley, said: "In the case of invisibility cloaks or shields, the material would need to curve light waves completely around the object like a river flowing around a rock. An observer looking at the cloaked object would then see light from behind it, making it seem to disappear." UC Berkeley researcher Jason Valentine's team made a material that affects light near the visible spectrum, in a region used in fibre optics: "Instead of the fish appearing to be slightly ahead of where it is in the water, it would actually appear to be above the water's surface. For a metamaterial to produce negative refraction, it must have a structural array smaller than the wavelength of the electromagnetic radiation being used." Valentine's team created their "fishnet" material by stacking silver and metal-dielectric layers on top of each other and then punching holes through them. The other team used an oxide template and grew silver nanowires inside porous aluminium oxide at tiny distances apart, smaller than the wavelength of visible light. This material refracts visible light. The Imperial College London research team achieved results with microwaves. An invisibility cloak layout for a copper cylinder was produced in May 2008 by the physicist Professor Sir John Pendry; scientists working with him at Duke University in the US put the idea into practice. Pendry, who theorized the invisibility cloak "as a joke" to illustrate the potential of metamaterials, said in an interview in August 2011 that grand, theatrical manifestations of his idea are probably overblown: "I think it's pretty sure that any cloak that Harry Potter would recognize is not on the table. You could dream up some theory, but the very practicality of making it would be so impossible. But can you hide things from light? Yes. Can you hide things which are a few centimeters across? Yes. Is the cloak really flexible and flappy? No. Will it ever be? No. So you can do quite a lot of things, but there are limitations. There are going to be some disappointed kids around, but there might be a few people in industry who are very grateful for it." In 2009, researchers at Bilkent University's nanotechnology research centre in Turkey reported in the New Journal of Physics that they had achieved practical invisibility using a nanotechnological material, rendering an object invisible, with no shadows, against a near-perfectly transparent scene; they suggested the material could even be produced as a suit that anyone could wear. In 2019, Hyperstealth Biotechnology patented the technology behind a material that bends light to make people and objects near-invisible to the naked eye. The material, called Quantum Stealth, is currently still in the prototyping stage, but was developed by the company's CEO Guy Cramer primarily for military purposes, to conceal agents and equipment such as tanks and jets in the field.
Unlike traditional camouflage materials, which are limited to specific conditions such as forests or deserts, according to Cramer this "invisibility cloak" works in any environment or season, at any time of day, although its actual application requires artificial backgrounds made up of horizontal lines. Psychological A person can be described as invisible if others refuse to see them or routinely overlook them. The term was used in this manner in the title of the book Invisible Man, by Ralph Ellison, in reference to the protagonist, likely modeled after the author, being overlooked on account of his status as an African American. This is supported by the quote taken from the Prologue: "I am invisible, understand, simply because people refuse to see me." (Prologue.1) Fictional use In fiction, people or objects can be rendered completely invisible by several means: Magical objects such as rings, cloaks and amulets can be worn to grant the wearer permanent invisibility (or temporary invisibility until the object is taken off). Magical potions can be consumed to grant temporary or permanent invisibility. Magic spells can be cast on people or objects, usually giving temporary invisibility. Some mythical creatures can make themselves invisible at will, such as in some tales in which leprechauns or Chinese dragons can shrink so much that humans cannot see them. In science fiction, invisibility is often achieved with a hypothetical "cloaking device". In some works, the power of magic creates an effective means of invisibility by distracting anyone who might notice the character. But since the character is not truly invisible, the effect could be betrayed by mirrors or other reflective surfaces. Where magical invisibility is concerned, the issue may arise of whether the clothing worn by and any items carried by the invisible being are also rendered invisible. In general they are also regarded as being invisible, but in some instances clothing remains visible and must be removed for the full invisibility effect. See also Ambiguity Covert operation Social invisibility Visibility References External links The Digital Chameleon Principle: Computing Invisibility by Rendering Transparency Physics World special issue on invisibility science - July 2011 Light Fantastic: Flirting With Invisibility - The New York Times Invisibility in the real world Interesting picture of a test tube's bottom half invisible in cooking oil. Brief piece on why visible light is visible - Straight Dope CNN.com - Science reveals secrets of invisibility - Aug 9, 2006 - Next to perfect Invisibility achieved using nanotechnologic material in Turkey - July 2009 Optics
Invisibility
[ "Physics" ]
1,928
[ "Optical phenomena", "Physical phenomena", "Optical illusions", "Invisibility" ]
8,577,896
https://en.wikipedia.org/wiki/Methane%20reformer
A methane reformer is a device based on steam reforming, autothermal reforming or partial oxidation; these are classes of chemical synthesis which can produce pure hydrogen gas from methane using a catalyst. There are multiple types of reformers in development, but the most common in industry are autothermal reforming (ATR) and steam methane reforming (SMR). Most methods work by exposing methane to a catalyst (usually nickel) at high temperature and pressure. Steam reforming Steam reforming (SR), sometimes referred to as steam methane reforming (SMR), uses an external source of hot gas to heat tubes in which a catalytic reaction takes place that converts steam and lighter hydrocarbons such as methane, biogas or refinery feedstock into hydrogen and carbon monoxide (syngas). The syngas then reacts further in the reactor, via the water-gas shift reaction below, to give more hydrogen and carbon dioxide. The carbon oxides are removed before use by means of pressure swing adsorption (PSA) with molecular sieves for the final purification; the PSA works by adsorbing impurities from the syngas stream to leave pure hydrogen gas. CH4 + H2O (steam) → CO + 3 H2 (endothermic) CO + H2O (steam) → CO2 + H2 (exothermic) Autothermal reforming Autothermal reforming (ATR) uses oxygen and carbon dioxide or steam in a reaction with methane to form syngas. The reaction takes place in a single chamber where the methane is partially oxidized; the reaction is exothermic due to the oxidation. When the ATR uses carbon dioxide, the H2:CO ratio produced is 1:1; when the ATR uses steam, the H2:CO ratio produced is 2.5:1. The reactions can be described by the following equations, using CO2: 2 CH4 + O2 + CO2 → 3 H2 + 3 CO + H2O, and using steam: 4 CH4 + O2 + 2 H2O → 10 H2 + 4 CO. The outlet temperature of the syngas is between 950 and 1100 °C, and the outlet pressure can be as high as 100 bar. The main difference between SMR and ATR is that the SMR uses oxygen only indirectly, via air combusted externally as a heat source to create steam, while the ATR combusts oxygen directly in the reformer. The advantage of ATR is that the H2:CO ratio can be varied; this is particularly useful for producing certain second-generation biofuels, such as DME, which requires a 1:1 H2:CO ratio. Partial oxidation Partial oxidation (POX) is a type of chemical reaction. It occurs when a substoichiometric fuel-air mixture is partially combusted in a reformer, creating a hydrogen-rich syngas which can then be put to further use. Advantages and disadvantages The capital cost of steam reforming plants is prohibitive for small to medium size applications because the technology does not scale down well. Conventional steam reforming plants operate at pressures between 200 and 600 psi with outlet temperatures in the range of 815 to 925 °C. However, analyses have shown that even though it is more costly to construct, a well-designed SMR can produce hydrogen more cost-effectively than an ATR for smaller applications. See also Catalytic reforming Industrial gas Reformed methanol fuel cell PROX Partial oxidation Chemical looping reforming and gasification References External links Harvest Energy Technology, Inc. an Air Products and Chemicals Incorporated company Hydrogen production Fuel cells Chemical equipment Industrial gases
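As a quick check on the stoichiometry above, the following Python sketch computes the H2:CO ratios implied by the balanced equations; everything here follows directly from the equations quoted in this article, and nothing else (no kinetics or equilibrium) is modelled.

# Illustrative sketch: H2:CO ratios implied by the balanced equations above.
reactions = {
    "SMR (CH4 + H2O -> CO + 3 H2)": (3.0, 1.0),
    "SMR + water-gas shift (overall CH4 + 2 H2O -> CO2 + 4 H2)": (4.0, 0.0),
    "ATR with CO2 (2 CH4 + O2 + CO2 -> 3 H2 + 3 CO + H2O)": (1.5, 1.5),
    "ATR with steam (4 CH4 + O2 + 2 H2O -> 10 H2 + 4 CO)": (2.5, 1.0),
}  # values: (mol H2, mol CO) produced per mol CH4 consumed

for name, (h2, co) in reactions.items():
    ratio = "all CO shifted to CO2" if co == 0 else f"{h2 / co:.1f}:1"
    print(f"{name}\n    H2:CO = {ratio}")

Running it reproduces the 1:1 ratio for CO2-fed ATR and the 2.5:1 ratio for steam-fed ATR stated above.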
Methane reformer
[ "Chemistry", "Engineering" ]
703
[ "Chemical process engineering", "Chemical equipment", "Industrial gases", "nan" ]
8,578,085
https://en.wikipedia.org/wiki/Electron%20affinity%20%28data%20page%29
This page deals with the electron affinity as a property of isolated atoms or molecules (i.e. in the gas phase). Solid state electron affinities are not listed here. Elements Electron affinity can be defined in two equivalent ways. The first is the energy released by adding an electron to an isolated gaseous atom. The second (reverse) definition is that electron affinity is the energy required to remove an electron from a singly charged gaseous negative ion. The latter can be regarded as the ionization energy of the –1 ion, or the zeroth ionization energy. Either convention can be used. Negative electron affinities can be used in those cases where electron capture requires energy, i.e. when capture can occur only if the impinging electron has a kinetic energy large enough to excite a resonance of the atom-plus-electron system. Conversely, electron removal from the anion formed in this way releases energy, which is carried away by the freed electron as kinetic energy. Negative ions formed in these cases are always unstable. They may have lifetimes of the order of microseconds to milliseconds, and invariably autodetach after some time. Molecules The electron affinities Eea of some molecules have been tabulated, from the lightest to the heaviest; many more are listed in the chemical literature. The electron affinities of the radicals OH and SH are the most precisely known of all molecular electron affinities. Second and third electron affinity Bibliography Updated values can be found in the NIST Chemistry WebBook for around three dozen elements and close to 400 compounds. Specific molecules References See also Atomic physics Chemical properties Chemical element data pages
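As a small illustration of the two equivalent conventions described above, the following Python sketch converts an electron affinity between eV per atom and kJ/mol; the chlorine value used (about 3.61 eV) is an assumed example figure, not taken from this page's tables.

# Sketch: the two equivalent electron affinity conventions, using chlorine
# (Eea ~ 3.61 eV, an assumed example value) and the eV -> kJ/mol conversion.
EV_TO_KJ_PER_MOL = 96.485  # 1 eV per particle = 96.485 kJ/mol

eea_ev = 3.61
# Convention 1: energy released when Cl(g) + e- -> Cl-(g)
released_kj_mol = eea_ev * EV_TO_KJ_PER_MOL
# Convention 2: energy required for Cl-(g) -> Cl(g) + e-
# (the "zeroth ionization energy"); numerically the same quantity.
required_kj_mol = released_kj_mol

print(f"attachment releases ~{released_kj_mol:.0f} kJ/mol")
print(f"detachment requires ~{required_kj_mol:.0f} kJ/mol")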
Electron affinity (data page)
[ "Physics", "Chemistry" ]
343
[ "Chemical data pages", "Quantum mechanics", "Chemical element data pages", "Atomic physics", " molecular", "nan", "Atomic", " and optical physics" ]
8,582,679
https://en.wikipedia.org/wiki/PowerColor
PowerColor is a Taiwanese graphics card brand established in 1997 by TUL Corporation (撼訊科技), based in New Taipei, Taiwan. PowerColor maintains office locations in a number of countries, including Taiwan, the Netherlands and the United States. The United States branch is located in City of Industry, California and serves the North and Latin American markets. TUL also has another brand, VTX3D, which serves the European market and some Asian markets. Products PowerColor is a licensed producer of AMD Radeon video cards. The majority of PowerColor cards are manufactured by Foxconn. PowerColor's AMD video cards range from affordable cards appropriate for low-end workstations to cards for high-end gaming machines, thus catering to a wide range of the market. PowerColor's manufacturing arrangement with Foxconn has given it the ability to change the specifications of cards, allowing it to announce products with higher specifications—overclocked by default—than AMD or its main competitor, Sapphire Technology. PowerColor products have been widely reviewed and have gained a number of awards at computer hardware review sites. Support PowerColor provides a two-year warranty on its products. To return a video card, the end-user must sign in and register the card. The return process is available only to end users in North America, with the customer liable for shipping. See also Diamond Multimedia – for North and South American markets References 1997 establishments in Taiwan Computer companies of Taiwan Computer hardware companies Electronics companies of Taiwan Graphics hardware companies Electronics companies established in 1997 Taiwanese brands Manufacturing companies based in New Taipei
PowerColor
[ "Technology" ]
327
[ "Computer hardware companies", "Computers" ]
6,921,017
https://en.wikipedia.org/wiki/Algebraic%20Geometry%20%28book%29
Algebraic Geometry is an algebraic geometry textbook written by Robin Hartshorne and published by Springer-Verlag in 1977. Importance It was the first extended treatment of scheme theory written as a text intended to be accessible to graduate students. Contents The first chapter, titled "Varieties", deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert's Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references. The second and third chapters, "Schemes" and "Cohomology", form the technical heart of the book. The last two chapters, "Curves" and "Surfaces", respectively explore the geometry of 1- and 2-dimensional objects, using the tools developed in Chapters 2 and 3. Notes References Graduate Texts in Mathematics 1977 non-fiction books Algebraic geometry Mathematics textbooks Monographs
Algebraic Geometry (book)
[ "Mathematics" ]
195
[ "Fields of abstract algebra", "Algebraic geometry" ]
6,926,718
https://en.wikipedia.org/wiki/Batch%20distillation
Batch distillation refers to the use of distillation in batches, meaning that a mixture is distilled to separate it into its component fractions before the distillation still is again charged with more mixture and the process is repeated. This is in contrast with continuous distillation, where the feedstock is added and the distillate drawn off without interruption. Batch distillation has always been an important part of the production of seasonal, or low-capacity and high-purity, chemicals. It is a very frequent separation process in the pharmaceutical industry. Batch rectifier The simplest and most frequently used batch distillation configuration is the batch rectifier, including the alembic and pot still. The batch rectifier consists of a pot (or reboiler), a rectifying column, a condenser, some means of splitting off a portion of the condensed vapour (distillate) as reflux, and one or more receivers. The pot is filled with liquid mixture and heated. Vapour flows upwards in the rectifying column and condenses at the top. Usually, the entire condensate is initially returned to the column as reflux. This contacting of vapour and liquid considerably improves the separation. Generally, this step is known as start-up. The first condensate is the head, and it contains undesirable components. The last condensate is the feints, which is also undesirable, although it adds flavor. In between is the heart, which forms the desired product. The head and feints may be thrown out, refluxed, or added to the next batch of mash or juice, according to the practice of the distiller. After some time, a part of the overhead condensate is withdrawn continuously as distillate and accumulated in the receivers, while the other part is recycled into the column as reflux. Owing to the differing vapour pressures of the components, the composition of the overhead distillate changes with time: early in the batch distillation the distillate contains a high concentration of the component with the higher relative volatility, and as the supply of material is limited and lighter components are removed, the relative fraction of heavier components increases as the distillation progresses. For a simple still, this composition drift is described by the Rayleigh equation, sketched after this section. Batch stripper The other simple batch distillation configuration is the batch stripper. The batch stripper consists of the same parts as the batch rectifier. However, in this case, the charge pot is located above the stripping column. During operation (after charging the pot and starting up the system) the high-boiling constituents are primarily separated from the charge mixture. The liquid in the pot is depleted in the high-boiling constituents and enriched in low-boiling ones. The high-boiling product is routed into the bottom product receivers. The residual low-boiling product is withdrawn from the charge pot. This mode of batch distillation is very seldom applied in industrial processes. Middle vessel column A third feasible batch column configuration is the middle vessel column. The middle vessel column consists of both a rectifying and a stripping section, with the charge pot located at the middle of the column.
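The composition drift mentioned under the batch rectifier can be made concrete with the Rayleigh equation for a simple batch still, dW/W = dx/(y − x). The following Python sketch integrates it numerically; the constant relative volatility (a = 2.5) and the starting and ending compositions are illustrative assumptions, not values from any of the cited studies.

import math

# Rayleigh equation for a simple batch still: dW/W = dx / (y - x), with
# vapour-liquid equilibrium y = a*x / (1 + (a - 1)*x) at constant relative
# volatility a. The value a = 2.5 and the compositions are assumptions.
a = 2.5
x = 0.50        # mole fraction of the light component in the pot, initially
W = 1.0         # moles of charge remaining, normalized to 1 at the start
dx = 1e-4       # integration step (x falls as the light component boils off)
while x > 0.10:
    y = a * x / (1 + (a - 1) * x)   # equilibrium vapour leaving the pot
    W *= math.exp(-dx / (y - x))    # Rayleigh step: dW/W = dx/(y - x)
    x -= dx
print(f"boiling from x = 0.50 down to x = 0.10 leaves W = {W:.3f} of the charge")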
Feasibility studies Generally, feasibility studies of batch distillation are based on analyses of the following maps: residue curve maps, still path maps, distillate path maps, and various column profile maps. During the feasibility studies, the following basic simplifying assumptions are made: an infinite number of equilibrium stages; an infinite reflux ratio; negligible tray hold-up in the two column sections; quasi-steady state in the column; and constant molar overflow. Bernot et al. used the batch distillation regions to determine the sequence of the fractions. According to Ewell and Welch, a batch distillation region gives the same fractions upon rectification of any mixture lying within it. Bernot et al. examined the still and distillate paths for the determination of the region boundaries under a high number of stages and a high reflux ratio, termed maximal separation. Pham and Doherty, in pioneering work, described the structure and properties of residue curve maps for ternary heterogeneous azeotropic mixtures. Their model does not yet take into consideration the possibility of phase separation of the condensed vapour. The singular points of the residue curve maps determined by this method were used to assign batch distillation regions by Rodriguez-Donis et al. and Skouras et al. Modla et al. pointed out that this method may give misleading results for the minimal amount of entrainer. Lang and Modla extended the method of Pham and Doherty and suggested a new, general method for the calculation of residue curves and for the determination of batch distillation regions of heteroazeotropic distillation. Lelkes et al. published a feasibility method for the separation of minimum-boiling azeotropes by batch distillation with continuous entrainer feeding. This method was applied to the use of a light entrainer in the batch rectifier and stripper by Lang et al. (1999), and to maximum-boiling azeotropes by Lang et al. Modla et al. extended this method to batch heteroazeotropic distillation under continuous entrainer feeding. See also Azeotropic distillation Extractive distillation Fractional distillation Heteroazeotrope Steam distillation Vacuum distillation Theoretical plate References Further reading Hilmen, Eva-Katrine, Separation of Azeotropic Mixtures: Tools for Analysis and Studies on Batch Distillation Operation, thesis, Norwegian University of Science and Technology, Department of Chemical Engineering (2000). External links Batch distillation program online Batch distillation of the hydrocarbon compounds. Distillation
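Referring back to the residue curve maps discussed under feasibility studies: a residue curve traces the still composition of a simple distillation in a warped time variable ξ via dx_i/dξ = x_i − y_i. The following Python sketch integrates one such curve for a hypothetical ternary mixture with constant relative volatilities; the alpha values and the starting composition are illustrative assumptions, not data from the cited studies.

import numpy as np

# Residue curve for a hypothetical ternary mixture, dx_i/dxi = x_i - y_i,
# with y_i = alpha_i * x_i / sum_j(alpha_j * x_j) (constant relative
# volatilities). The alphas and the starting composition are assumptions.
alpha = np.array([4.0, 2.0, 1.0])   # light, intermediate, heavy component
x = np.array([0.3, 0.4, 0.3])       # starting still composition
h = 0.01                            # step in the warped time variable xi
for _ in range(2000):
    y = alpha * x / np.dot(alpha, x)   # equilibrium vapour composition
    x = x + h * (x - y)                # forward Euler along the residue curve
    x = np.clip(x, 0.0, None)
    x = x / x.sum()                    # keep x a valid composition vector
print("residue curve terminus:", np.round(x, 3))  # -> the heaviest component

As the theory predicts, the curve terminates at the heaviest component, which is a stable node of the map.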
Batch distillation
[ "Chemistry" ]
1,204
[ "Distillation", "Separation processes" ]
11,684,875
https://en.wikipedia.org/wiki/Semiregular%20polytope
In geometry, by Thorold Gosset's definition a semiregular polytope is usually taken to be a polytope that is vertex-transitive and has all its facets being regular polytopes. E.L. Elte compiled a longer list in 1912 as The Semiregular Polytopes of the Hyperspaces, which included a wider definition. Gosset's list In three-dimensional space and below, the terms semiregular polytope and uniform polytope have identical meanings, because all uniform polygons must be regular. However, since not all uniform polyhedra are regular, the number of semiregular polytopes in dimensions higher than three is much smaller than the number of uniform polytopes in the same number of dimensions. The three convex semiregular 4-polytopes are the rectified 5-cell, snub 24-cell and rectified 600-cell. The only semiregular polytopes in higher dimensions are the k21 polytopes, where the rectified 5-cell is the special case of k = 0. These were all listed by Gosset, but a proof of the completeness of this list was not published until later, first for four dimensions and subsequently for higher dimensions. Gosset's 4-polytopes (with his names in parentheses): Rectified 5-cell (Tetroctahedric), Rectified 600-cell (Octicosahedric), and Snub 24-cell (Tetricosahedric). Semiregular E-polytopes in higher dimensions: 5-demicube (5-ic semi-regular), a 5-polytope; 221 polytope (6-ic semi-regular), a 6-polytope; 321 polytope (7-ic semi-regular), a 7-polytope; 421 polytope (8-ic semi-regular), an 8-polytope. Euclidean honeycombs Semiregular polytopes can be extended to semiregular honeycombs. The semiregular Euclidean honeycombs are the tetrahedral-octahedral honeycomb (3D), gyrated alternated cubic honeycomb (3D) and the 521 honeycomb (8D). Gosset honeycombs: Tetrahedral-octahedral honeycomb or alternated cubic honeycomb (Simple tetroctahedric check), also a quasiregular polytope; Gyrated alternated cubic honeycomb (Complex tetroctahedric check). Semiregular E-honeycomb: 521 honeycomb (9-ic check), an 8D Euclidean honeycomb. Gosset additionally allowed Euclidean honeycombs as facets of higher-dimensional Euclidean honeycombs, giving the following additional figures: Hypercubic honeycomb prism, named by Gosset as the (n – 1)-ic semi-check (analogous to a single rank or file of a chessboard); Alternated hexagonal slab honeycomb (tetroctahedric semi-check). Hyperbolic honeycombs There are also hyperbolic uniform honeycombs composed of only regular cells, including: Hyperbolic uniform honeycombs (3D honeycombs): Alternated order-5 cubic honeycomb (also a quasiregular polytope), Tetrahedral-octahedral honeycomb, Tetrahedron-icosahedron honeycomb. Paracompact uniform honeycombs (3D honeycombs, which include uniform tilings as cells): Rectified order-6 tetrahedral honeycomb, Rectified square tiling honeycomb, Rectified order-4 square tiling honeycomb, Alternated order-6 cubic honeycomb (also quasiregular), Alternated hexagonal tiling honeycomb, Alternated order-4 hexagonal tiling honeycomb, Alternated order-5 hexagonal tiling honeycomb, Alternated order-6 hexagonal tiling honeycomb, Alternated square tiling honeycomb (also quasiregular), Cubic-square tiling honeycomb, Order-4 square tiling honeycomb, Tetrahedral-triangular tiling honeycomb. 9D hyperbolic paracompact honeycomb: 621 honeycomb (10-ic check). See also Semiregular polyhedron References Uniform polytopes
Semiregular polytope
[ "Physics" ]
948
[ "Uniform polytopes", "Symmetry" ]
11,685,115
https://en.wikipedia.org/wiki/Overlap%E2%80%93add%20method
In signal processing, the overlap–add method is an efficient way to evaluate the discrete convolution of a very long signal x[n] with a finite impulse response (FIR) filter h[n]:

y[n] = x[n] * h[n] = sum over m = 1..M of h[m]·x[n − m],

where h[m] = 0 for m outside the region [1, M]. This article uses common abstract notations, such as y(t) = x(t) * h(t), in which it is understood that the functions should be thought of in their totality rather than at specific instants (see Convolution#Notation). The concept is to divide the problem into multiple convolutions of h[n] with short segments of x[n]:

x_k[n] = x[n + kL] for n = 1, 2, ..., L (and 0 otherwise),

where L is an arbitrary segment length. Then:

x[n] = sum over k of x_k[n − kL],

and y[n] can be written as a sum of short convolutions:

y[n] = sum over k of (x_k * h)[n − kL],

where the linear convolution (x_k * h)[n] is zero outside the region [1, L + M − 1]. And for any parameter N ≥ L + M − 1, it is equivalent to the N-point circular convolution of x_k[n] with h[n] in the region [1, N]. The advantage is that the circular convolution can be computed more efficiently than linear convolution, according to the circular convolution theorem:

y_k[n] = IDFT_N(DFT_N(x_k[n]) · DFT_N(h[n])),

where DFT_N and IDFT_N refer to the discrete Fourier transform and its inverse, evaluated over N discrete points, and L is customarily chosen such that N = L + M − 1 is an integer power of 2, so that the transforms can be implemented with the FFT algorithm, for efficiency. Pseudocode The following is a pseudocode of the algorithm:

(Overlap-add algorithm for linear convolution)
h = FIR_filter
M = length(h)
Nx = length(x)
N = 8 × 2^ceiling(log2(M))    (8 times the smallest power of two bigger than filter length M. See next section for a slightly better choice.)
step_size = N − (M − 1)    (L in the text above)
H = DFT(h, N)
position = 0
y(1 : Nx + M − 1) = 0
while position + step_size ≤ Nx do
    y(position + (1:N)) = y(position + (1:N)) + IDFT(DFT(x(position + (1:step_size)), N) × H)
    position = position + step_size
end

Efficiency considerations When the DFT and IDFT are implemented by the FFT algorithm, the pseudocode above requires about N·(log2(N) + 1) complex multiplications for the FFT, the product of the arrays, and the IFFT. Each iteration produces N − M + 1 output samples, so the number of complex multiplications per output sample is about:

N·(log2(N) + 1) / (N − M + 1).

For example, when M = 201 and N = 1024, this equals 13.67, whereas direct evaluation of y[n] would require up to 201 complex multiplications per output sample, the worst case being when both x and h are complex-valued. Also note that for any given M, the expression above has a minimum with respect to N. Figure 2 is a graph of the values of N that minimize it for a range of filter lengths (M). Instead of segmenting, we can also consider applying a single transform to the whole sequence of length Nx samples, in which case the total number of complex multiplications would be about:

Nx·(log2(Nx) + 1).

Comparatively, the number of complex multiplications required by the pseudocode algorithm is:

Nx·(log2(N) + 1)·N / (N − M + 1).

Hence the cost of the overlap–add method scales almost as Nx·log2(N), while the cost of a single, large circular convolution is almost Nx·log2(Nx). The two methods are also compared in Figure 3, created by Matlab simulation. The contours are lines of constant ratio of the times it takes to perform both methods. When the overlap-add method is faster, the ratio exceeds 1, and ratios as high as 3 are seen. See also Overlap–save method Circular_convolution#Example Notes References Further reading Signal processing Transforms Fourier analysis Numerical analysis
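For concreteness, here is a minimal Python/NumPy implementation of the overlap-add procedure described in the pseudocode above; the FFT-size heuristic and the test signals are illustrative choices, and the result is checked against NumPy's direct convolution.

import numpy as np

# Minimal overlap-add convolution following the pseudocode above. The FFT
# size heuristic and the test signals are illustrative choices.
def overlap_add(x, h, N=None):
    M, Nx = len(h), len(x)
    if N is None:
        N = 8 * 2 ** int(np.ceil(np.log2(M)))  # heuristic from the pseudocode
    step = N - (M - 1)                # L: input samples consumed per block
    H = np.fft.fft(h, N)              # filter spectrum, computed once
    y = np.zeros(Nx + M - 1)
    for pos in range(0, Nx, step):
        block = x[pos:pos + step]     # zero-padded to N inside fft()
        yk = np.fft.ifft(np.fft.fft(block, N) * H).real
        end = min(pos + N, len(y))
        y[pos:end] += yk[:end - pos]  # overlap and add the block's tail
    return y

x = np.random.randn(10_000)
h = np.ones(201) / 201                # 201-tap moving-average FIR filter
assert np.allclose(overlap_add(x, h), np.convolve(x, h))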
Overlap–add method
[ "Mathematics", "Technology", "Engineering" ]
712
[ "Functions and mappings", "Telecommunications engineering", "Computer engineering", "Signal processing", "Mathematical objects", "Computational mathematics", "Mathematical relations", "Transforms", "Numerical analysis", "Approximations" ]
11,685,459
https://en.wikipedia.org/wiki/Sound%20speed%20gradient
In acoustics, the sound speed gradient is the rate of change of the speed of sound with distance, for example with depth in the ocean or height in the Earth's atmosphere. A sound speed gradient leads to refraction of sound wavefronts in the direction of lower sound speed, causing the sound rays to follow a curved path. The radius of curvature of the sound path is inversely proportional to the gradient. When the sun warms the Earth's surface, there is a negative temperature gradient in the atmosphere. Since the speed of sound decreases with decreasing temperature, this also creates a negative sound speed gradient. The sound wave front travels faster near the ground, so the sound is refracted upward, away from listeners on the ground, creating an acoustic shadow at some distance from the source. The opposite effect happens when the ground is covered with snow, or in the morning over water, when the sound speed gradient is positive. In this case, sound waves can be refracted from the upper levels down to the surface. In underwater acoustics, the speed of sound depends on the pressure (hence depth), temperature, and salinity of seawater, leading to vertical speed gradients similar to those that exist in atmospheric acoustics. When the sound speed gradient is zero, however, the sound speed is the same ("isospeed") throughout a given water column: there is no change in sound speed with depth. The same effect happens in an isothermal atmosphere under the ideal gas assumption. References See also SOFAR channel Wind gradient Acoustics Spatial gradient
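The statement above that the radius of curvature of the sound path is inversely proportional to the gradient can be illustrated numerically: for a ray launched horizontally, R ≈ c/|dc/dz|. In the Python sketch below, the surface sound speed, the sensitivity of sound speed to temperature, and the assumed lapse rate are all rough illustrative figures.

# Ray curvature from a linear sound speed gradient: R = c / |dc/dz| for a
# horizontally launched ray. All numbers below are rough illustrative values.
c0 = 343.0        # sound speed near the ground, m/s (air at about 20 C)
dc_dT = 0.6       # change in sound speed per kelvin, m/s per K (approx.)
dT_dz = -0.010    # assumed afternoon temperature lapse, K per metre

gradient = dc_dT * dT_dz              # dc/dz, (m/s) per metre
R = c0 / abs(gradient)                # radius of curvature of the ray path
print(f"dc/dz = {gradient:+.4f} (m/s)/m -> ray radius ~ {R / 1000:.0f} km")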
Sound speed gradient
[ "Physics" ]
323
[ "Classical mechanics", "Acoustics" ]
11,686,224
https://en.wikipedia.org/wiki/Chichibabin%20pyridine%20synthesis
The Chichibabin pyridine synthesis is a method for synthesizing pyridine rings. The reaction involves the condensation of aldehydes, ketones, α,β-unsaturated carbonyl compounds, or any combination of the above, with ammonia. It was reported by Aleksei Chichibabin in 1924. Methyl-substituted pyridines, which show widespread use across multiple fields of applied chemistry, are prepared by this methodology. Representative syntheses The syntheses are presently conducted commercially in the presence of oxide catalysts such as modified alumina (Al2O3) or silica (SiO2). The reactants are passed over the catalyst at 350–500 °C. 2-Methylpyridine and 4-methylpyridine are produced as a mixture from acetaldehyde and ammonia. 3-Methylpyridine and pyridine are produced from acrolein and ammonia. Acrolein and propionaldehyde reacting together with ammonia afford mainly 3-methylpyridine. 5-Ethyl-2-methylpyridine is produced from paraldehyde and ammonia. Mechanism and optimizations These syntheses involve many reactions, such as imine synthesis, base-catalyzed aldol condensations, and Michael reactions. Many efforts have been made to improve the method, for example by conducting the reaction in the gas phase in the presence of aluminium(III) oxide, or over zeolite (yield 98.9% at 500 K). From nitriles Of the many variations that have been explored, one approach employs nitriles as the nitrogen source. For example, acrylonitrile and acetone afford 2-methylpyridine uncontaminated with the 4-methyl derivative. In another variation, alkynes and nitriles react in the presence of organocobalt catalysts, a reaction inspired by alkyne trimerization. See also Chichibabin reaction Gattermann–Skita synthesis Hantzsch pyridine synthesis Ciamician–Dennstedt rearrangement References Pyridine forming reactions Heterocycle forming reactions Name reactions Soviet inventions
Chichibabin pyridine synthesis
[ "Chemistry" ]
455
[ "Name reactions", "Ring forming reactions", "Heterocycle forming reactions", "Organic reactions" ]
11,686,976
https://en.wikipedia.org/wiki/Aluminium-conductor%20steel-reinforced%20cable
Aluminium conductor steel-reinforced cable (ACSR) is a type of high-capacity, high-strength stranded conductor typically used in overhead power lines. The outer strands are high-purity aluminium, chosen for its good conductivity, low weight, low cost, resistance to corrosion and decent resistance to mechanical stress. The centre strand is steel, for additional strength to help support the weight of the conductor. Steel is of higher strength than aluminium, which allows increased mechanical tension to be applied to the conductor. Steel also has lower elastic and inelastic deformation (permanent elongation) due to mechanical loading (e.g. wind and ice), as well as a lower coefficient of thermal expansion under current loading. These properties allow ACSR to sag significantly less than all-aluminium conductors. As per the International Electrotechnical Commission (IEC) and CSA Group (formerly the Canadian Standards Association or CSA) naming convention, ACSR is designated A1/S1A. Design The aluminium alloy and temper used for the outer strands is normally 1350-H19 in the United States and Canada and 1370-H19 elsewhere, each with 99.5+% aluminium content. The temper of the aluminium is defined by the suffix of the alloy designation, which in the case of H19 is extra hard. To extend the service life of the steel strands used for the conductor core, they are normally galvanized, i.e. coated with zinc, to prevent corrosion. The diameters of both the aluminium and the steel strands vary between different ACSR conductors. ACSR cable still depends on the tensile strength of the aluminium; it is only reinforced by the steel. Because of this, its continuous operating temperature is limited to the temperature at which aluminium begins to anneal and soften over time. For situations where higher operating temperatures are required, aluminium-conductor steel-supported (ACSS) cable may be used. Steel core The standard steel core used for ACSR is galvanized steel, but steel coated with a zinc alloy of 5% or 10% aluminium and trace mischmetal (sometimes called by the trade names Bezinal or Galfan) and aluminium-clad steel (sometimes called by the trade name Alumoweld) are also available. Higher-strength steel may also be used. In the United States the most commonly used steel is designated GA2, for galvanized steel (G) with class A zinc coating thickness (A) and regular strength (2). Class C zinc coatings are thicker than class A and provide increased corrosion protection at the expense of reduced tensile strength. A regular-strength galvanized steel core with class C coating thickness would be designated GC2. Higher-strength grades of steel are designated high-strength (3), extra-high-strength (4), and ultra-high-strength (5). An ultra-high-strength galvanized steel core with class A coating thickness would be designated GA5. The use of higher-strength steel cores increases the tensile strength of the conductor, allowing for higher tensions, which results in lower sag. Zinc-5% aluminium mischmetal coatings are designated with an "M". These coatings provide increased corrosion protection and heat resistance compared to zinc alone. Regular-strength steel with class A mischmetal coating thickness would be designated MA2. Aluminium-clad steel is designated "AW". Aluminium-clad steel offers increased corrosion protection and conductivity at the expense of reduced tensile strength, and is commonly specified for coastal applications. IEC and CSA use a different naming convention.
The most commonly used steel is S1A, for S1 regular-strength steel with a class A coating. S1 steel has slightly lower tensile strength than the regular-strength steel used in the United States. Per the Canadian CSA standards, the S2A strength grade is classified as high-strength steel; the equivalent material per the ASTM standards is the GA2 strength grade, called regular-strength steel. The CSA S3A strength grade is classified as extra-high-strength steel; the equivalent material per the ASTM standards is the GA3 strength grade, called high-strength. The present-day CSA standards for overhead electrical conductors do not yet officially recognize the ASTM-equivalent GA4 or GA5 grades, nor the ASTM "M" family of zinc-alloy coating materials, although Canadian utilities are using conductors built with the higher-strength steels and the "M" zinc-alloy coating. Lay The lay of a conductor can be checked with four extended fingers; the lay is "right-hand" or "left-hand" depending on whether the strands follow the direction of the fingers of the right hand or the left hand, respectively. Overhead aluminium (AAC, AAAC, ACAR) and ACSR conductors in the USA are always manufactured with the outer conductor layer having a right-hand lay. Going toward the centre, each layer has an alternating lay. Some conductor types (e.g. copper overhead conductor, OPGW, steel EHS) are different and have a left-hand lay on the outer conductor. Some South American countries specify a left-hand lay for the outer conductor layer on their ACSR, so those are wound differently from those used in the USA. Sizing ACSR conductors are available in numerous specific sizes, with single or multiple centre steel wires and generally larger quantities of aluminium strands. Although rarely used, there are some conductors that have more steel strands than aluminium strands. An ACSR conductor can in part be denoted by its stranding; for example, an ACSR conductor with 72 aluminium strands over a core of 7 steel strands is called a 72/7 ACSR conductor. Cables generally range from #6 AWG ("6/1" – six outer aluminium conductors and one steel reinforcing conductor) to 2167 kcmil ("72/7" – seventy-two outer aluminium conductors and seven steel reinforcing conductors). Naming convention To help avoid confusion due to the numerous combinations of stranding of the steel and aluminium strands, code words are used to specify a specific conductor version. In North America bird names are used for the code words, while animal names are used elsewhere. For instance, in North America, Grosbeak is a 636 kcmil ACSR conductor with 26/7 aluminium/steel stranding, whereas Egret is the same total aluminium size (636 kcmil) but with 30/19 aluminium/steel stranding. Although the number of aluminium strands is different between Grosbeak and Egret, differing sizes of aluminium strand are used to offset the change in the number of strands, such that the total amount of aluminium remains the same. Differences in the number of steel strands result in varying weights of the steel portion and also result in different overall conductor diameters. Most utilities standardize on a specific version where several versions with the same amount of aluminium exist, to avoid issues related to different-size hardware (such as splices). Due to the numerous sizes available, utilities often skip over some of the sizes to reduce their inventory. The various stranding versions result in different electrical and mechanical characteristics.
Ampacity ratings Manufacturers of ACSR typically provide ampacity tables for a defined set of assumptions. Individual utilities normally apply different ratings because they use different assumptions (which may result in higher or lower ampacity ratings than those the manufacturers provide). Significant variables include wind speed and direction relative to the conductor, sun intensity, emissivity, ambient temperature, and maximum conductor temperature. Conducting properties In three-phase electrical power distribution, conductors must be designed to have low electrical impedance in order to ensure that the power lost in the distribution of power is minimal. Impedance is a combination of two quantities: resistance and reactance. The resistances of ACSR conductors are tabulated for different conductor designs by the manufacturer, at DC and at AC frequency, assuming specific operating temperatures. The reasons that resistance changes with frequency are largely the skin effect, the proximity effect, and hysteresis loss. Depending on the geometry of the conductor, as differentiated by the conductor name, these phenomena affect the overall resistance at AC versus DC to varying degrees. Often not tabulated with ACSR conductors is the electrical reactance of the conductor, which is due largely to the spacing between the other current-carrying conductors and the conductor radius. The reactance of the conductor contributes significantly to the overall current that needs to travel through the line, and thus contributes to resistive losses in the line. For more information on transmission line inductance and capacitance, see electric power transmission and overhead power line. Skin effect The skin effect decreases the cross-sectional area in which the current travels through the conductor as AC frequency increases. For alternating current, most (63%) of the electric current flows between the surface and the skin depth, δ, which depends on the frequency of the current and the electrical (conductivity) and magnetic properties of the conductor. This decreased area causes the resistance to rise due to the inverse relationship between resistance and conductor cross-sectional area. The skin effect benefits the design, as it causes the current to be concentrated towards the low-resistivity aluminium on the outside of the conductor, as the sketch below illustrates. To illustrate the impact of the skin effect, the American Society for Testing and Materials (ASTM) standard includes the conductivity of the steel core when calculating the DC and AC resistance of the conductor, but the IEC and CSA Group standards do not.
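A minimal Python sketch of the standard skin depth formula δ = sqrt(2ρ/(ωμ)) for aluminium at power frequencies; the resistivity is a textbook approximation, and this is an illustration of the effect, not a conductor rating calculation.

import math

# Skin depth sketch: delta = sqrt(2 * rho / (omega * mu)). The resistivity
# is a textbook value for aluminium; illustration only, not a rating tool.
rho = 2.8e-8                 # resistivity of aluminium, ohm-metre (approx.)
mu = 4 * math.pi * 1e-7      # permeability (aluminium is non-magnetic)
for f_hz in (50.0, 60.0):
    omega = 2 * math.pi * f_hz
    delta = math.sqrt(2 * rho / (omega * mu))
    print(f"{f_hz:.0f} Hz: skin depth ~ {delta * 1000:.1f} mm")

The resulting skin depth of roughly 11 mm at 50–60 Hz is comparable to or larger than the radius of most ACSR conductors, which is why the effect shifts current toward the outer aluminium layers rather than leaving the core entirely unused.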
Proximity effect In a conductor (ACSR and other types) carrying AC current, if currents are flowing through one or more other nearby conductors, the distribution of current within each conductor will be constrained to smaller regions. The resulting current crowding is termed the proximity effect. This crowding gives an increase in the effective AC resistance of the circuit, with the effect at 60 hertz being greater than at 50 hertz. Geometry, conductivity, and frequency are factors in determining the amount of proximity effect. The proximity effect is the result of a changing magnetic field which influences the distribution of an electric current flowing within an electrical conductor, due to electromagnetic induction. When an alternating current (AC) flows through an isolated conductor, it creates an associated alternating magnetic field around it. The alternating magnetic field induces eddy currents in adjacent conductors, altering the overall distribution of current flowing through them. The result is that the current is concentrated in the areas of the conductor furthest away from nearby conductors carrying current in the same direction. Hysteresis loss Hysteresis in an ACSR conductor is due to the atomic dipoles in the steel core changing direction under induction from the 60 or 50 hertz AC current in the conductor. Hysteresis losses in ACSR are undesirable and can be minimized by using an even number of aluminium layers in the conductor. Due to the cancelling effect of the magnetic fields from the opposing lays (right-hand and left-hand) of adjacent layers, for two aluminium layers there is significantly less hysteresis loss in the steel core than there would be for one or three aluminium layers, where the magnetic field does not cancel out. The hysteresis effect is negligible on ACSR conductors with even numbers of aluminium layers, and so is not considered in these cases. For ACSR conductors with an odd number of aluminium layers, however, a magnetization factor is used to accurately calculate the AC resistance. The correction method for single-layer ACSR is different from that used for three-layer conductors. Due to applying the magnetization factor, a conductor with an odd number of layers has an AC resistance slightly higher than an equivalent conductor with an even number of layers. Due to higher hysteresis losses in the steel and associated heating of the core, an odd-layer design will have a lower ampacity rating (up to a 10% de-rate) than an equivalent even-layer design. All standard ACSR conductors smaller than Partridge (266.8 kcmil, 26/7 aluminium/steel) have only one layer due to their small diameters, so the hysteresis losses cannot be avoided. Non-standard designs ACSR is widely used due to its efficient and economical design. Variations of standard (sometimes called traditional or conventional) ACSR are used in some cases due to the special properties they offer, which provide sufficient advantage to justify their added expense. Special conductors may be more economic, offer increased reliability, or provide a unique solution to an otherwise difficult, or impossible, design problem. The main types of special conductors include "trapezoidal wire conductor" (TW), a conductor having aluminium strands with a trapezoidal shape rather than round, and "self-damping" (SD), sometimes called "self-damping conductor" (SDC). A similar, higher-temperature conductor made from annealed aluminium, called "aluminium conductor steel supported" (ACSS), is also available. Trapezoidal wire Trapezoidal-shaped wire (TW) can be used in lieu of round wire in order to "fill in the gaps" and have a 10–15% smaller overall diameter for the same cross-sectional area, or a 20–25% larger cross-sectional area for the same overall diameter. Ontario Hydro (Hydro One) introduced trapezoidal-shaped wire ACSR conductor designs in the 1980s to replace existing round-wire ACSR designs (they called them compact conductors; these conductor types are now called ACSR/TW). Ontario Hydro's trapezoidal-shaped wire (TW) designs used the same steel core but increased the aluminium content of the conductor to match the overall diameter of the former round-wire designs (they could then use the same hardware fittings for both the round and the TW conductors).
Hydro One's designs for their trapezoidal ACSR/TW conductors use only even numbers of aluminium layers (either two or four). They do not use designs with an odd number of layers (three layers), because such designs incur higher hysteresis losses in the steel core. Also in the 1980s, the Bonneville Power Administration (BPA) introduced TW designs in which the size of the steel core was increased to maintain the same aluminium/steel ratio. Self-damping Self-damping conductor (ACSR/SD) is a nearly obsolete conductor technology and is rarely used for new installations. It is a concentric-lay stranded, self-damping conductor designed to control wind-induced (Aeolian-type) vibration in overhead transmission lines by internal damping. Self-damping conductors consist of a central core of one or more round steel wires surrounded by two layers of trapezoidal-shaped aluminium wires. One or more layers of round aluminium wires may be added as required. SD conductor differs from conventional ACSR in that the aluminium wires in the first two layers are trapezoidal-shaped and sized so that each aluminium layer forms a stranded tube which does not collapse onto the layer beneath when under tension, but maintains a small annular gap between layers. The trapezoidal wire layers are separated from each other and from the steel core by the two small annular gaps that permit movement between the layers. The round aluminium wire layers are in tight contact with each other and with the underlying trapezoidal wire layer. Under vibration, the steel core and the aluminium layers vibrate with different frequencies, and impact damping results. This impact damping is sufficient to keep any Aeolian vibration to a low level. The use of trapezoidal strands also results in reduced conductor diameter for a given AC resistance per mile. The major advantages of ACSR/SD are: High self-damping allows the use of higher unloaded tension levels, resulting in reduced maximum sag and thus reduced structure height and/or fewer structures per kilometre (or per mile). Reduced diameter for a given AC resistance yields reduced transverse wind and ice loading on structures. The major disadvantages of ACSR/SD are: There will most likely be increased installation and clipping costs due to special hardware requirements and specialized stringing methods. The conductor design always requires the use of a steel core, even in light loading areas. Aluminium-conductor steel supported Aluminium-conductor steel-supported (ACSS) conductor appears visually similar to standard ACSR, but its aluminium strands are fully annealed. Annealing the aluminium strands reduces the composite conductor strength, but after installation, permanent elongation of the aluminium strands results in a much larger percentage of the conductor tension being carried in the steel core than is true for standard ACSR. This in turn yields reduced composite thermal elongation and increased self-damping. The major advantages of ACSS are: Since the aluminium strands are "dead-soft" to begin with, the conductor may be operated at temperatures well above the limit for standard ACSR without loss of strength. Since the tension in the aluminium strands is normally low, the conductor's self-damping of Aeolian vibration is high, and it may be installed at high unloaded tension levels without the need for separate Stockbridge-type dampers. The major disadvantages of ACSS are: In areas experiencing heavy ice load, the reduced strength of this conductor relative to standard ACSR may make it less desirable.
The softness of the annealed aluminium strands, and the possible need for pre-stressing prior to clipping and sagging, may raise installation costs. Twisted pair Twisted pair (TP) conductor (sometimes called by the trade names T-2 or VR) has two sub-conductors twisted (usually with a left-hand lay) about one another, generally with a lay length of approximately three meters (nine feet). The cross-section of a TP conductor is a rotating "figure-8". The sub-conductors can be any type of standard ACSR conductor, but the two conductors need to match one another to provide mechanical balance. The major advantages of TP conductor are: The use of TP conductor reduces the propensity for ice/wind galloping to start on the line. In an ice storm, when ice deposits start to accumulate along the conductor, the twisted conductor profile prevents a uniform airfoil shape from forming. With a standard round conductor, the airfoil shape results in uplift of the conductor and initiation of the galloping motion; the TP conductor profile, through the absence of this uniform airfoil shape, inhibits the initiation of galloping. The reduction in motion during icing events helps prevent the phase conductors from contacting each other, causing a fault and an associated outage of the electrical circuit. With the reduction in large-amplitude motions, closer phase spacing or longer span lengths can be used, which in turn can result in a lower cost of construction. TP conductor is generally installed only in areas that are normally exposed to the wind speed and freezing temperature conditions associated with ice buildup. The non-round shape of this conductor reduces the amplitude of Aeolian vibration and the accompanying fatigue-inducing strains near splices and conductor attachment clamps. TP conductors can gently rotate to dissipate energy. As a result, TP conductor can be installed at higher tension levels and reduced sags. The major disadvantages of TP conductor are: The non-round cross-section yields wind and ice loadings which are about 11% higher than standard conductor of the same AC resistance per mile. The installation of, and hardware for, this conductor can be somewhat more expensive than standard conductor. Splicing Many electrical circuits are longer than the length of conductor which can be contained on one reel. As a result, splicing is often necessary to join conductors to provide the desired length. It is important that the splice not be the weak link: a splice (joint) must have high physical strength along with a high electrical current rating. Within the limitations of the equipment used to install the conductor from the reels, enough conductor is generally purchased per reel to avoid more splices than are absolutely necessary. Splices are designed to run cooler than the conductor. The temperature of the splice is kept lower by its having a larger cross-sectional area, and thus less electrical resistance, than the conductor. Heat generated at the splice is also dissipated faster due to the larger diameter of the splice. Failures of splices are of concern, as the failure of just one splice can cause an outage that affects a large amount of electrical load. Most splices are compression-type splices (crimps). These splices are inexpensive and have good strength and conductivity characteristics.
Some splices, called automatics, use a jaw-type design that is faster to install (it does not require the heavy compression equipment) and are often used during storm restoration, when speed of installation is more important than the long-term performance of the splice. Causes of splice failures are numerous. Some of the main failure modes are related to installation issues, such as insufficient cleaning (wire brushing) of the conductor to eliminate the aluminium oxide layer (which has a high resistance, i.e. is a poor electrical conductor), improper application of conducting grease, improper compression force, or improper compression locations or number of compressions. Splice failures can also be due to Aeolian vibration damage, as small vibrations of the conductor over time cause damage (breakage) to the aluminium strands near the ends of the splice. Special splices (two-piece splices) are required on SD-type conductors, as the gap between the trapezoidal aluminium layer and the steel core prevents the compression force between the splice and the steel core from being adequate. A two-piece design has a splice for the steel core and a longer, larger-diameter splice for the aluminium portion. The outer splice must be threaded on first and slid along the conductor; the steel splice is compressed first, and then the outer splice is slid back over the smaller splice and compressed in turn. This complicated process can easily result in a poor splice. Splices can also fail partially, exhibiting higher resistance than expected, usually after some time in the field. These can be detected using a thermal camera, thermal probes, or direct resistance measurements, even while the line is energized. Such splices usually require replacement, either on a de-energized line, by installing a temporary bypass while the splice is replaced, or by adding a larger splice over the existing splice without disconnecting. Conductor coatings When ACSR is new, the aluminium has a shiny surface which has a low emissivity for heat radiation and a low absorption of sunlight. As the conductor ages, the colour becomes dull grey due to oxidation of the aluminium strands. In high-pollution environments, the colour may turn almost black after many years of exposure to the elements and chemicals. For aged conductor, the emissivity for heat radiation and the absorption of sunlight both increase. Conductor coatings are available that have a high emissivity for heat radiation and a low absorption of sunlight. These coatings would be applied to new conductor during manufacture, and can potentially increase the current rating of the ACSR conductor: for the same amperage, the temperature of the conductor will be lower due to the better heat dissipation of the higher-emissivity coating. See also ACCC conductor Copper clad steel References Power engineering Electrical wiring
Aluminium-conductor steel-reinforced cable
[ "Physics", "Engineering" ]
4,766
[ "Electrical systems", "Building engineering", "Energy engineering", "Physical systems", "Power engineering", "Electrical engineering", "Electrical wiring" ]
11,688,824
https://en.wikipedia.org/wiki/Fission%20product%20yield
Nuclear fission splits a heavy nucleus such as uranium or plutonium into two lighter nuclei, which are called fission products. Yield refers to the fraction of a fission product produced per fission. Yield can be broken down by: Individual isotope Chemical element spanning several isotopes of different mass number but same atomic number. Nuclei of a given mass number regardless of atomic number. Known as "chain yield" because it represents a decay chain of beta decay. Isotope and element yields will change as the fission products undergo beta decay, while chain yields do not change after completion of neutron emission by a few neutron-rich initial fission products (delayed neutrons), with half-life measured in seconds. A few isotopes can be produced directly by fission, but not by beta decay because the would-be precursor with atomic number one less is stable and does not decay (atomic number grows by 1 during beta decay). Chain yields do not account for these "shadowed" isotopes; however, they have very low yields (less than a millionth as much as common fission products) because they are far less neutron-rich than the original heavy nuclei. Yield is usually stated as percentage per fission, so that the total yield percentages sum to 200%. Less often, it is stated as percentage of all fission products, so that the percentages sum to 100%. Ternary fission, about 0.2–0.4% of fissions, also produces a third light nucleus such as helium-4 (90%) or tritium (7%). Mass vs. yield curve If a graph of the mass or mole yield of fission products against the atomic number of the fragments is drawn then it has two peaks, one in the area zirconium through to palladium and one at xenon through to neodymium. This is because the fission event causes the nucleus to split in an asymmetric manner, as nuclei closer to magic numbers are more stable. Yield vs. Z - This is a typical distribution for the fission of uranium. Note that in the calculations used to make this graph the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times (time after fission). Because of the stability of nuclei with even numbers of protons and/or neutrons the curve of yield against element is not a smooth curve. It tends to alternate. In general, the higher the energy of the state that undergoes nuclear fission, the more likely a symmetric fission is, hence as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow; for instance, the curve of yield against mass for Pu-239 has a more shallow valley than that observed for U-235, when the neutrons are thermal neutrons. The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259Fm, only one peak is seen. Yield is usually expressed relative to number of fissioning nuclei, not the number of fission product nuclei, that is, yields should sum to 200%. The table in the next section ("Ordered by yield") gives yields for notable radioactive (with half-lives greater than one year, plus iodine-131) fission products, and (the few most absorptive) neutron poison fission products, from thermal neutron fission of U-235 (typical of nuclear power reactors), computed from . 
The yields in the table sum to only 45.5522%, including 34.8401% which have half-lives greater than one year. The remainder, and the unlisted 54.4478%, decay with half-lives of less than one year into nonradioactive nuclei. This is before accounting for the effects of any subsequent neutron capture; e.g.: 135Xe capturing a neutron and becoming nearly stable 136Xe, rather than decaying to 135Cs, which is radioactive with a half-life of 2.3 million years; or nonradioactive 133Cs capturing a neutron and becoming 134Cs, which is radioactive with a half-life of 2 years. Many of the fission products with mass 147 or greater, such as 147Pm, 149Sm, 151Sm, and 155Eu, have significant cross sections for neutron capture, so that one heavy fission product atom can undergo multiple successive neutron captures. Besides fission products, the other types of radioactive products are: plutonium containing 238Pu, 239Pu, 240Pu, 241Pu, and 242Pu; minor actinides including 237Np, 241Am, 243Am, curium isotopes, and perhaps californium; reprocessed uranium containing 236U and other isotopes; tritium; and activation products of neutron capture by the reactor or bomb structure or the environment. Fission products from U-235 Cumulative fission yields Cumulative fission yields give the amounts of nuclides produced either directly in the fission or by decay of other nuclides. Ordered by mass number Decays, even if lengthy, are given down to the stable nuclide. Decays with half-lives longer than a century are marked with a single asterisk (*), while decays with a half-life longer than a hundred million years are marked with two asterisks (**). Half-lives, decay modes, and branching fractions Ordered by thermal neutron absorption cross section References External links HANDBOOK OF NUCLEAR DATA FOR SAFEGUARDS: DATABASE EXTENSIONS, AUGUST 2008 The Live Chart of Nuclides - IAEA Color-map of yields, and detailed data by clicking on a nuclide. Nuclear fission Nuclear chemistry
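The bookkeeping conventions above (yields quoted as percent per fission summing to 200%, or as percent of all fission products summing to 100%) are easy to confuse; the sketch below illustrates the conversion. The yield values are approximate figures for thermal U-235 fission included purely for illustration, not evaluated nuclear data.

```python
# Sketch of fission-yield bookkeeping. A complete per-fission inventory
# sums to ~200% because each fission produces two fragments; quoting
# yields as a fraction of all fission products halves the numbers.

yields_percent_per_fission = {  # approximate illustrative values
    "Cs-137": 6.09,
    "Sr-90": 5.73,
    "I-131": 2.88,
}

# Convert "percent per fission" to "percent of all fission products":
yields_percent_of_products = {
    nuclide: y / 2 for nuclide, y in yields_percent_per_fission.items()
}

listed = sum(yields_percent_per_fission.values())
print(f"Listed nuclides account for {listed:.2f}% of the 200% total")
for nuclide, y in yields_percent_of_products.items():
    print(f"{nuclide}: {y:.2f}% of all fission products")
```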
Fission product yield
[ "Physics", "Chemistry" ]
1,148
[ "Nuclear chemistry", "Nuclear fission", "nan", "Nuclear physics" ]
11,689,708
https://en.wikipedia.org/wiki/Fritware
Fritware, also known as stone-paste, is a type of pottery in which ground glass (frit) is added to clay to reduce its fusion temperature. The mixture may include quartz or other siliceous material. An organic compound such as gum or glue may be added for binding. The resulting mixture can be fired at a lower temperature than clay alone. A glaze is then applied on the surface. Fritware was invented to give a strong white body, which, combined with tin-glazing of the surface, allowed it to approximate the result of Chinese porcelain. Porcelain was not manufactured in the Islamic world until modern times, and most fine Islamic pottery was made of fritware. Frit was also a significant component in some early European porcelains. Composition and techniques Fritware was invented in the Medieval Islamic world to give a strong white body, which, combined with tin-glazing of the surface, allowed it to approximate the white colour, translucency, and thin walls of Chinese porcelain. True porcelain was not manufactured in the Islamic world until modern times, and most fine Islamic pottery was made of fritware. Frit was also a significant component in some early European porcelains. Although its production centres may have shifted with time and imperial power, fritware remained in continued use throughout the Islamic world with little significant innovation. The technique was used to create many other significant artistic traditions such as lustreware, Raqqa ware, and Iznik pottery. Raw materials in one contemporary recipe used in Jaipur are quartz powder, glass powder, fuller's earth, borax and tragacanth gum. Raw materials for a glaze are reported to be glass powder, lead oxide, borax, potassium nitrate, zinc oxide and boric acid. The blue decoration is cobalt oxide. History Frit is crushed glass that is used in ceramics. The pottery produced from the manufacture of frit is often called 'fritware' but has also been referred to as "stonepaste" and "faience" among other names. Fritware was innovative because the glaze and the body of the ceramic piece were made of nearly the same materials, allowing them to fuse better, flake less, and be fired at a lower temperature. The manufacture of proto-fritware began in Iraq in the 9th century AD under the Abbasid Caliphate, and with the establishment of Samarra as its capital in 836, there is extensive evidence of ceramics in the court of the Abbasids both in Samarra and Baghdad. A ninth-century corpus of 'proto-stonepaste' from Baghdad has "relict glass fragments" in its fabric. The glass is alkali-lime-lead-silica and, when the paste was fired or cooled, wollastonite and diopside crystals formed within the glass fragments. The lack of "inclusions of crushed pottery" suggests these fragments did not come from a glaze. The reason for their addition would have been to release alkali into the matrix on firing, which would "accelerate vitrification at a relatively low firing temperature, and thus increase the hardness and density of the [ceramic] body." Following the fall of the Abbasid Caliphate, the main centres of manufacture moved to Egypt, where true fritware was invented between the 10th and the 12th centuries under the Fatimids, but the technique then spread throughout the Middle East. There are many variations on designs, colour, and composition, the last often attributed to the differences in mineral compositions of soil and rock used in the production of fritware.
The bodies of the fritware ceramics were always made quite thin to imitate their porcelain counterparts in China, a practice not common before the discovery of the frit technique, which produced stronger ceramics. In the 13th century the town of Kashan in Iran was an important centre for the production of fritware. Abū'l-Qāsim, who came from a family of tilemakers in the city, wrote a treatise in 1301 on precious stones that included a chapter on the manufacture of fritware. His recipe specified a fritware body containing a mixture of 10 parts silica to 1 part glass frit and 1 part clay. The frit was prepared by mixing powdered quartz with soda, which acted as a flux. The mixture was then heated in a kiln. The internal circulation of pottery within the Islamic world was common from its earliest days, and ideas about pottery often travelled even where the wares themselves did not. The movement of fritware into China - whose monopoly on porcelain production had prompted the Islamic world to produce fritware to begin with - influenced Chinese porcelain decoration, which derived its signature cobalt blue colour from Islamic traditions of fritware decoration. The transfer of this artistic idea was likely a consequence of the enhanced connection and trade relations between the Middle and Near East and Far East Asia under the Mongols beginning in the 13th century. The Middle and Near East had an initial monopoly on the cobalt colour due to their own richness in cobalt ore, which was especially abundant in Qamsar and Anarak in Persia. Iznik pottery was produced in Ottoman Turkey beginning in the last quarter of the 15th century AD. It consists of a body, slip, and glaze, where the body and glaze are 'quartz-frit'. The 'frits' in both cases "are unusual in that they contain lead oxide as well as soda"; the lead oxide would help reduce the thermal expansion coefficient of the ceramic. Microscopic analysis reveals that the material that has been labeled 'frit' is 'interstitial glass' which serves to connect the quartz particles. The glass was added as frit and the interstitial glass formed on firing. In 2011, 29 potteries making fritware, employing a total of 300 persons, were identified in Jaipur. Applications Fritware served a wide variety of purposes in the medieval Islamic world. As a porcelain substitute, the fritware technique was used to craft bowls, vases, and pots, not only as symbols of luxury but also to practical ends. It was similarly used by medieval tilemakers to craft strong tiles with a colourless body that provided a suitable base for underglaze and decoration. Fritware was also used to craft objects beyond pottery and tiling; for example, it was used in the twelfth century to make objects like chess sets. There is also a tradition of using fritware to create intricate figurines, with surviving examples from the Seljuk Empire. It was also used as the ceramic body for Islamic lustreware, a technique that puts a lustred ceramic glaze onto pottery. Blue pottery A small manufacturing cluster of fritware exists around Jaipur, Rajasthan in India, where it is known as 'Blue Pottery' due to its most popular glaze. The Blue Pottery of Jaipur technique may have arrived in India with the Mughals, with production in Jaipur dating to at least as early as the 17th century. References Further reading "Technology of Frit Making in Iznik." Okyar F. Euro Ceramics VIII, Part 3. Trans Tech Publications. 2004, p. 2391-2394.
Published for The European Ceramic Society. Pancaroğlu, O. (2007). Perpetual glory: Medieval Islamic ceramics from the Harvey B. Plotnick Collection (M. Bayani, Trans.). Chicago, IL: Art Institute of Chicago. Watson, O. (2004). Ceramics from Islamic lands. New York, NY: Thames & Hudson. Pottery Arabic pottery Iranian pottery Ceramic materials Islamic pottery Arab inventions
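Abū'l-Qāsim's 10:1:1 body recipe quoted in the history section translates directly into batch arithmetic. The sketch below is purely illustrative: the batch size is hypothetical, and historical workshops would of course not have worked this way.

```python
# Sketch: scaling the 10:1:1 fritware body recipe (10 parts silica,
# 1 part glass frit, 1 part clay) to a batch of arbitrary total mass.

RECIPE_PARTS = {"silica": 10, "glass frit": 1, "clay": 1}

def batch_weights(total_kg: float) -> dict:
    """Scale the parts-based recipe to a batch of the given total mass."""
    total_parts = sum(RECIPE_PARTS.values())
    return {name: total_kg * parts / total_parts
            for name, parts in RECIPE_PARTS.items()}

for ingredient, kg in batch_weights(12.0).items():  # hypothetical 12 kg batch
    print(f"{ingredient}: {kg:.2f} kg")
```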
Fritware
[ "Engineering" ]
1,580
[ "Ceramic engineering", "Ceramic materials" ]
11,693,664
https://en.wikipedia.org/wiki/Photothermal%20therapy
Photothermal therapy (PTT) refers to efforts to use electromagnetic radiation (most often in infrared wavelengths) for the treatment of various medical conditions, including cancer. This therapy is an extension of photodynamic therapy, in which a photosensitizer is excited with specific band light. This activation brings the sensitizer to an excited state, where it then releases vibrational energy (heat), which is what kills the targeted cells. Unlike photodynamic therapy, photothermal therapy does not require oxygen to interact with the target cells or tissues. Current studies also show that photothermal therapy is able to use longer-wavelength light, which is less energetic and therefore less harmful to other cells and tissues. Nanoscale materials Most materials of interest currently being investigated for photothermal therapy are on the nanoscale. One of the key reasons behind this is the enhanced permeability and retention effect observed with particles in a certain size range (typically 20–300 nm). Particles in this range have been observed to preferentially accumulate in tumor tissue. When a tumor forms, it requires new blood vessels in order to fuel its growth; these new blood vessels in and near tumors have different properties compared to regular blood vessels, such as poor lymphatic drainage and a disorganized, leaky vasculature. These factors lead to a significantly higher concentration of certain particles in a tumor as compared to the rest of the body. Gold NanoRods (AuNR) Huang et al. investigated the feasibility of using gold nanorods for both cancer cell imaging and photothermal therapy. The authors conjugated antibodies (anti-EGFR monoclonal antibodies) to the surface of gold nanorods, allowing the gold nanorods to bind specifically to certain malignant cancer cells (HSC and HOC malignant cells). After incubating the cells with the gold nanorods, an 800 nm Ti:sapphire laser was used to irradiate the cells at varying powers. The authors reported successful destruction of the malignant cancer cells, while nonmalignant cells were unharmed. When AuNRs are exposed to NIR light, the oscillating electromagnetic field of the light causes the free electrons of the AuNR to oscillate collectively and coherently. Changing the size and shape of AuNRs changes the wavelength that gets absorbed. A desired wavelength is between 700 and 1000 nm, because biological tissue is optically transparent at these wavelengths. While all AuNPs are sensitive to changes in their shape and size, the properties of Au nanorods are extremely sensitive to any change in their length and width, i.e., their aspect ratio. When light is shone on a metal NP, the NP forms a dipole oscillation along the direction of the electric field. When the oscillation reaches its maximum, this frequency is called the surface plasmon resonance (SPR). AuNRs have two SPR spectrum bands: one in the NIR region, caused by the longitudinal oscillation, which tends to be stronger and at a longer wavelength, and one in the visible region, caused by the transverse electronic oscillation, which tends to be weaker and at a shorter wavelength. The SPR characteristics account for the increase in light absorption for the particle. As the AuNR aspect ratio increases, the absorption wavelength is redshifted and the light scattering efficiency is increased.
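The redshift of the longitudinal SPR with aspect ratio described above can be sketched numerically. The linear fit used below (roughly lambda_max ≈ 95·AR + 420 nm for rods in water) is one empirical relation quoted in the nanorod literature, not a formula from this article, and real resonances also depend on the surrounding medium and the particle volume.

```python
# Sketch: tuning the longitudinal SPR of gold nanorods via aspect ratio,
# using an assumed empirical linear fit (coefficients are assumptions).

def longitudinal_spr_nm(aspect_ratio: float) -> float:
    return 95.0 * aspect_ratio + 420.0

# Scan aspect ratios for resonances inside the 700-1000 nm window where
# biological tissue is relatively transparent:
for ar in (2.0, 3.0, 4.0, 5.0, 6.0):
    lam = longitudinal_spr_nm(ar)
    tag = "in NIR window" if 700.0 <= lam <= 1000.0 else "outside window"
    print(f"aspect ratio {ar}: ~{lam:.0f} nm ({tag})")
```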
The electrons excited by the NIR light lose energy quickly after absorption via electron-electron collisions, and as these electrons relax back down, the energy is released as phonons that then heat the environment of the AuNP, which in cancer treatments would be the cancerous cells. This process is observed when a continuous-wave laser is applied to the AuNP. Pulsed laser beams generally result in melting or ablation of the particle. Continuous-wave lasers take minutes to heat, rather than the single pulse duration of a pulsed laser, but they are able to heat larger areas at once. Gold Nanoshells Gold nanoshells, silica nanoparticles coated with a thin layer of gold, have been conjugated to antibodies (anti-HER2 or anti-IgG) via PEG linkers. After incubation of SKBr3 cancer cells with the gold nanoshells, an 820 nm laser was used to irradiate the cells. Only the cells incubated with the gold nanoshells conjugated with the specific antibody (anti-HER2) were damaged by the laser. Another category of gold nanoshells uses a gold layer on liposomes as a soft template. In this case, a drug can also be encapsulated inside and/or in the bilayer, and its release can be triggered by laser light. thermo Nano-Architectures (tNAs) The failure of clinical translation of nanoparticle-mediated PTT is mainly ascribed to concerns about their persistence in the body. Indeed, the optical response of anisotropic nanomaterials can be tuned in the NIR region by increasing their size up to 150 nm. On the other hand, body excretion of non-biodegradable noble metal nanomaterials above 10 nm occurs through the hepatobiliary route in a slow and inefficient manner. A common approach to avoid metal persistence is to reduce the nanoparticle size below the threshold for renal clearance, i.e. ultrasmall nanoparticles (USNPs), while the maximum light-to-heat transduction occurs for nanoparticles below 5 nm. On the other hand, the surface plasmon of excretable gold USNPs is in the UV/visible region (far from the first biological window), severely limiting their potential application in PTT. Excretion of metals has been combined with NIR-triggered PTT by employing ultrasmall-in-nano architectures composed of metal USNPs embedded in biodegradable silica nanocapsules. tNAs are the first reported NIR-absorbing plasmonic ultrasmall-in-nano platforms that jointly combine: i) photothermal conversion efficacy suitable for hyperthermia, ii) multiple photothermal sequences and iii) renal excretion of the building blocks after the therapeutic action. To date, the therapeutic effect of tNAs has been assessed on valuable 3D models of human pancreatic adenocarcinoma. Graphene and graphene oxide Graphene is viable for photothermal therapy. An 808 nm laser at a power density of 2 W/cm2 was used to irradiate the tumor sites on mice for 5 minutes. As noted by the authors, the power densities of lasers used to heat gold nanorods range from 2 to 4 W/cm2. Thus, these nanoscale graphene sheets require a laser power on the lower end of the range used with gold nanoparticles to photothermally ablate tumors. In 2012, Yang et al. incorporated the promising results regarding nanoscale reduced graphene oxide reported by Robinson et al. into another in vivo mice study. The therapeutic treatment used in this study involved the use of nanoscale reduced graphene oxide sheets, nearly identical to the ones used by Robinson et al. (but without any active targeting sequences attached).
Nanoscale reduced graphene oxide sheets were successfully irradiated in order to completely destroy the targeted tumors. Most notably, the required power density of the 808 nm laser was reduced to 0.15 W/cm2, an order of magnitude lower than previously required power densities. This study demonstrates the higher efficacy of nanoscale reduced graphene oxide sheets as compared to both nanoscale graphene sheets and gold nanorods. Conjugated polymers (CPs) PTT utilizes photothermal transduction agents (PTAs), which can transform light energy to heat through the photothermal effect, raising the temperature of the tumor area and thus causing the ablation of tumor cells. Specifically, ideal PTAs should have high photothermal conversion efficiency (PCE), excellent optical stability and biocompatibility, and strong light absorption in the near-infrared (NIR) region (650-1350 nm) due to the deep-tissue penetration and minimal absorption of NIR light in biological tissues. PTAs mainly include inorganic materials and organic materials. Inorganic PTAs, such as noble metal materials, carbon-based nanomaterials, and other 2D materials, have high PCE and excellent photostability, but they are not biodegradable and thus have potential long-term toxicity in vivo. Organic PTAs, including small-molecule dyes and conjugated polymers (CPs), have good biocompatibility and biodegradability, but poor photostability. Among them, small-molecule dyes, such as cyanine, porphyrin, and phthalocyanine, are limited in the field of cancer treatment because of their susceptibility to photobleaching and poor tumor enrichment ability. Conjugated polymers, with a large π−π conjugated skeleton and a highly electron-delocalized structure, show potential for PTT due to their strong NIR absorption, excellent photostability, low cytotoxicity, outstanding PCE, good dispersibility in aqueous media, increased accumulation at the tumor site, and long blood circulation time. Moreover, conjugated polymers can be easily combined with other imaging agents and drugs to construct multifunctional nanomaterials for selective and synergistic cancer therapy. The CPs used for tumor PTT mainly include polyaniline (PANI), polypyrrole (PPy), polythiophene (PTh), polydopamine (PDA), donor−acceptor (D-A) conjugated polymers, and poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS). Photothermal conversion mechanism The nonradiative process for heat generation of organic PTAs is different from that of inorganic PTAs such as metals and semiconductors, which is related to surface plasmon resonance. Conjugated polymers are first activated to the excited state (S1) under light irradiation, and the excited state (S1) then decays back to the ground state (S0) via three processes: (I) emitting a photon (fluorescence), (II) intersystem crossing, and (III) nonradiative relaxation (heat generation). Because these three pathways of the S1 decaying back to the S0 are usually competitive in photosensitive materials, light emission and intersystem crossing must be efficiently reduced in order to increase the heat generation and improve the photothermal conversion efficiency. For conjugated polymers, on the one hand, their unique structures lead to close stacking of the molecular sensitizers with highly frequent intermolecular collisions, which can efficiently quench the fluorescence and intersystem crossing, and thus enhance the yield of nonradiative relaxation.
On the other hand, compared with monomeric phototherapeutic molecules, conjugated polymers possess higher stability in vivo against disassembly and photobleaching, longer blood circulation time, and more accumulation at the tumor site due to the enhanced permeability and retention (EPR) effect. Therefore, conjugated polymers have high photothermal conversion efficiency and a large amount of heat generation. One of the most widely used equations to calculate the photothermal conversion efficiency (η) of organic PTAs is as follows: η = (hAΔTmax − Qs) / [I(1 − 10^(−Aλ))], where h is the heat transfer coefficient, A is the container surface area, ΔTmax is the maximum temperature change in the solution, Aλ is the light absorbance at the laser wavelength, I is the laser power density, and Qs is the heat associated with the light absorbance of the solvent. Furthermore, various efficient methods, especially the donor-acceptor (D-A) strategy, have been designed to enhance the photothermal conversion efficiency and heat generation of conjugated polymers. The D-A assembly system in the conjugated polymers contributes to strong intermolecular electron transfer from the donor to the acceptor, thus bringing efficient quenching of fluorescence and intersystem crossing, and improved heat generation. In addition, the HOMO-LUMO gap of the D−A conjugated polymers can be easily tuned through the selection of electron donor (ED) and electron acceptor (EA) moieties, and thus D−A structured polymers with extremely low band gaps can be developed to improve the NIR absorption and photothermal conversion efficiency of CPs. Polyaniline (PANI) Polyaniline (PANI) is one of the earliest types of conjugated polymers reported for tumor PTT. Polypyrrole (PPy) Polypyrrole (PPy) is suited for PTT applications because of its strong NIR absorbance, large PCE, stability, and biocompatibility. In vivo experiments show that tumors treated with PPy NPs could be effectively eliminated under the irradiation of an 808 nm laser (1 W cm−2, 5 min). PPy nanosheets exhibit promising photothermal ablation ability toward cancer cells in the NIR-II window for deep-tissue PTT. PPy nanoparticles and their derivative nanomaterials can also be combined with imaging contrast agents and diverse drugs to construct multifunctional theranostic applications in imaging-guided PTT and synergistic treatment, including fluorescent imaging, magnetic resonance imaging (MRI), photoacoustic imaging (PA), computed tomography (CT), photodynamic therapy (PDT), chemotherapy, etc. For example, PPy has been used to encapsulate ultrasmall iron oxide nanoparticles (IONPs) and thereby develop IONP@PPy NPs for in vivo MR and PA imaging-guided PTT. Polypyrrole (I-PPy) nanocomposites have been investigated for CT imaging-guided tumor PTT. Polythiophene (PTh) Polythiophene (PTh) and its derivative-based polymers are another kind of conjugated polymer for PTT. Polythiophene-based polymers usually exhibit excellent photostability, large light-harvesting ability, easy synthesis, and facile functionalization with different substituents. A conjugated copolymer (C3) with promising photothermal properties can be prepared by linking 2-N,N′-bis(2-(ethyl)hexyl)-perylene-3,4,9,10-tetra-carboxylic acid bis-imide to a thienylvinylene oligomer. C3 was coprecipitated with PEG-PCL and indocyanine green (ICG) to obtain PEG-PCL-C3-ICG nanoparticles for fluorescence-guided photothermal/photodynamic therapy against oral squamous cell carcinoma (OSCC).
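Plugging representative numbers into the efficiency formula above gives a feel for the quantities involved. All numerical values in this sketch are illustrative assumptions, not measurements from any of the cited studies.

```python
# Sketch: evaluating the photothermal conversion efficiency formula
# eta = (h*A*dT_max - Q_s) / (I * (1 - 10**(-A_lambda))) quoted above.

def photothermal_efficiency(hA: float, dT_max: float, Q_s: float,
                            I: float, A_lambda: float) -> float:
    absorbed_power = I * (1.0 - 10.0 ** (-A_lambda))  # light actually absorbed
    return (hA * dT_max - Q_s) / absorbed_power

eta = photothermal_efficiency(
    hA=0.015,      # heat-transfer coefficient x surface area, W/K (assumed)
    dT_max=25.0,   # maximum temperature rise of the solution, K (assumed)
    Q_s=0.05,      # heating contributed by the solvent alone, W (assumed)
    I=0.8,         # incident laser power in consistent units (assumed)
    A_lambda=1.0,  # absorbance at the laser wavelength (assumed)
)
print(f"eta = {eta:.1%}")  # ~45% with these numbers
```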
A biodegradable PLGA-PEGylated DPPV (poly{2,2′-[(2,5-bis(2-hexyldecyl)-3,6-dioxo-2,3,5,6-tetrahydropyrrolo[3,4-c]-pyrrole-1,4-diyl)-dithiophene]-5,5′-diyl-alt-vinylene}) conjugated polymer has been developed for PA-guided PTT, with a PCE of 71% (at 808 nm, 0.3 W cm−2). The vinylene bonds in the main chain improve the biodegradability, biocompatibility, and photothermal conversion efficiency of the CPs. Polydopamine (PDA) Dopamine is a neurotransmitter that helps cells send impulses. Polydopamine (PDA) is obtained through the self-aggregation of dopamine to form a melanin-like substance under mild alkaline conditions. PDA has strong NIR absorption, good photothermal stability, excellent biocompatibility and biodegradability, and high photothermal conversion efficiency. Furthermore, with its π-conjugated structure and various active groups, PDA can be easily combined with other materials to achieve multifunctionality, such as fluorescence imaging, MRI, CT, PA, targeted therapy, etc. In view of this, PDA and its composite nanomaterials have broad application prospects in the biomedical field. Dopamine-melanin colloidal nanospheres are efficient near-infrared photothermal therapeutic agents for in vivo cancer therapy. PDA can also be coated on the surface of other PTAs, such as gold nanorods and carbon-based materials, to enhance their photothermal stability and efficiency in vivo. For example, PDA-modified spiky gold nanoparticles (SGNP@PDAs) have been investigated for chemo-photothermal therapy. Donor−Acceptor (D−A) CPs Donor−acceptor (D−A) conjugated polymers have been investigated for medicinal purposes. Nano-PCPDTBT CPs have two moieties: 2-ethylhexyl cyclopentadithiophene and 2,1,3-benzothiadiazole. When the PCPDTBT nanoparticle solution (0.115 mg/mL) was exposed to an 808 nm NIR laser (0.6 W/cm2), the temperature could be increased by more than 30 °C. Wang et al. designed four NIR-absorbing D-A structured conjugated polymer dots (Pdots) containing diketopyrrolo-pyrrole (DPP) and thiophene units as effective photothermal materials, with a PCE of up to 65%, for in vivo cancer therapy. Zhang et al. constructed PBIBDF-BT D-A CPs by using an isoindigo derivative (BIBDF) and bithiophene (BT) as EA and ED, respectively. PBIBDF-BT was further modified with poly(ethylene glycol)-block-poly(hexyl ethylene phosphate) (mPEG-b-PHEP) to obtain PBIBDF-BT@NP PPE with a PCE of 46.7% and high stability in physiological environments. Yang's group designed PBTPBF-BT CPs, in which the bis(5-oxothieno[3,2-b]pyrrole-6-ylidene)-benzodifurandione (BTPBF) and 3,3′-didodecyl-2,2′-bithiophene (BT) units act as EA and ED, respectively. These D-A CPs have a maximum absorption peak at 1107 nm and a relatively high photothermal conversion efficiency (66.4%). Pu et al. synthesized PC70BM-PCPDTBT D-A CPs via nanoprecipitation of the EA (6,6)-phenyl-C71-butyric acid methyl ester (PC70BM) and the ED PCPDTBT (SPs) for PA-guided PTT. Wang et al. developed D-A CPs TBDOPV-DT containing thiophene-fused benzodifurandione-based oligo(p-phenylenevinylene) (TBDOPV) as the EA unit and 2,2′-bithiophene (DT) as the ED unit. TBDOPV-DT CPs have a strong absorption at 1093 nm and achieve highly efficient NIR-II photothermal conversion. PEDOT:PSS Poly(3,4-ethylenedioxythiophene):poly(4-styrenesulfonate) (PEDOT:PSS) is often used in organic electronics and has strong NIR absorption. In 2012, Liu's group first reported PEGylated PEDOT:PSS polymeric nanoparticles (PEDOT:PSS-PEG) for near-infrared photothermal therapy of cancer.
PEDOT:PSS-PEG nanoparticles have high stability in vivo and a long blood circulation half-life of 21.4 ± 3.1 h. PTT in animals showed no appreciable side effects at the tested dose and excellent therapeutic efficacy under 808 nm laser irradiation. Kang et al. synthesized magneto-conjugated polymer core−shell MNP@PEDOT:PSS nanoparticles for multimodal imaging-guided PTT. Furthermore, PEDOT:PSS NPs can serve not only as PTAs but also as drug carriers to load various types of drugs, such as SN38, the chemotherapy drug DOX, and the photodynamic agent chlorin e6 (Ce6), thus achieving synergistic cancer therapy. See also Photomedicine Light Therapy Hyperthermia therapy Experimental cancer treatment References Further reading Medical physics Photochemistry Medical treatments Light therapy Experimental cancer treatments Oncothermia
Photothermal therapy
[ "Physics", "Chemistry" ]
4,462
[ "nan", "Applied and interdisciplinary physics", "Medical physics" ]
2,141,003
https://en.wikipedia.org/wiki/Indium%20gallium%20phosphide
Indium gallium phosphide (InGaP), also called gallium indium phosphide (GaInP), is a semiconductor composed of indium, gallium and phosphorus. It is used in high-power and high-frequency electronics because of its superior electron velocity with respect to the more common semiconductors silicon and gallium arsenide. It is used mainly in HEMT and HBT structures, but also for the fabrication of high-efficiency solar cells used for space applications and, in combination with aluminium (the AlGaInP alloy), to make high-brightness LEDs with orange-red, orange, yellow, and green colors. Some semiconductor devices such as EFluor Nanocrystal use InGaP as their core particle. Indium gallium phosphide is a solid solution of indium phosphide and gallium phosphide. Ga0.5In0.5P is a solid solution of special importance, which is almost lattice matched to GaAs. This allows, in combination with (AlxGa1−x)0.5In0.5P, the growth of lattice-matched quantum wells for red-emitting semiconductor lasers, e.g., red-emitting (650 nm) RCLEDs or VCSELs for PMMA plastic optical fibers. Ga0.5In0.5P is used as the high-energy junction on double- and triple-junction photovoltaic cells grown on GaAs. Recent years have shown GaInP/GaAs tandem solar cells with AM0 (sunlight incidence in space = 1.35 kW/m2) efficiencies in excess of 25%. A different composition of GaInP, lattice matched to the underlying GaInAs, is utilized as the high-energy junction in GaInP/GaInAs/Ge triple-junction photovoltaic cells. Growth of GaInP by epitaxy can be complicated by the tendency of GaInP to grow as an ordered material, rather than a truly random solid solution (i.e., a mixture). This changes the bandgap and the electronic and optical properties of the material. See also Gallium phosphide Indium(III) phosphide Indium gallium nitride Indium gallium arsenide GaInP/GaAs solar cell References E.F. Schubert "Light emitting diodes", External links EMCORE Solar Cells Spectrolab Solar Cells NSM Archive Phosphides Indium compounds Gallium compounds III-V semiconductors III-V compounds Solar cells Light-emitting diode materials
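The near lattice match of Ga0.5In0.5P to GaAs can be checked with Vegard's law, i.e., linear interpolation of the lattice constant between GaP and InP. The lattice constants below are standard approximate room-temperature values, and treating the interpolation as exactly linear is itself an approximation.

```python
# Sketch: checking the GaAs lattice match of Ga(x)In(1-x)P via
# Vegard's law (linear interpolation of lattice constants).

A_GAP, A_INP, A_GAAS = 5.4505, 5.8687, 5.6533  # angstroms, approximate

def a_gainp(x_ga: float) -> float:
    """Vegard's-law lattice constant of Ga(x)In(1-x)P."""
    return x_ga * A_GAP + (1.0 - x_ga) * A_INP

# Solve a(x) = a(GaAs) for the gallium fraction x:
x_match = (A_INP - A_GAAS) / (A_INP - A_GAP)
print(f"Ga fraction for lattice match to GaAs: x = {x_match:.3f}")
print(f"a = {a_gainp(x_match):.4f} A vs. GaAs {A_GAAS:.4f} A")
# x comes out near 0.5, which is why Ga0.5In0.5P is nearly lattice
# matched to GaAs, as stated in the text.
```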
Indium gallium phosphide
[ "Physics", "Chemistry", "Materials_science" ]
528
[ "Materials science stubs", "Inorganic compounds", "Semiconductor materials", "Condensed matter physics", "III-V semiconductors", "Light-emitting diode materials", "Condensed matter stubs", "III-V compounds" ]
2,142,913
https://en.wikipedia.org/wiki/Diquark
In particle physics, a diquark, or diquark correlation/clustering, is a hypothetical state of two quarks grouped inside a baryon (which consists of three quarks). Corresponding models of baryons are referred to as quark–diquark models. The diquark is often treated as a single subatomic particle with which the third quark interacts via the strong interaction. The existence of diquarks inside nucleons is a disputed issue, but it helps to explain some nucleon properties and to reproduce experimental data sensitive to the nucleon structure. Diquark–antidiquark pairs have also been advanced for anomalous particles such as the X(3872). Formation The force between the two quarks in a diquark is attractive when both the colors and spins are antisymmetric. When both quarks are correlated in this way, they tend to form a very low energy configuration. This low-energy configuration has become known as a diquark. Controversy Many scientists argue that a diquark should not be considered a particle. Even though diquarks contain two quarks, they are not color neutral, and therefore cannot exist as isolated bound states. Instead, they tend to float freely inside hadrons as composite entities; while free-floating they have a size of about 10−15 m, which also happens to be roughly the size of the hadron itself. Uses Diquarks are conceptual building blocks, and as such give scientists an ordering principle for the most important states in the hadronic spectrum. There are many different pieces of evidence suggesting that diquarks play an important role in the structure of hadrons. One of the most compelling pieces of evidence comes from a recent study of baryons. In this study the baryon had one heavy and two light quarks. Since the heavy quark is inert, the scientists were able to discern the properties of the different quark configurations in the hadronic spectrum. Λ and Σ baryon experiment An experiment was conducted using diquarks in an attempt to study the Λ and Σ baryons that are produced in the creation of hadrons by fast-moving quarks. In the experiment the quarks ionized the vacuum area. This produced quark–antiquark pairs, which then converted themselves into mesons. When generating a baryon by assembling quarks, it is helpful if the quarks first form a stable two-quark state. The Λ and the Σ are created from up, down and strange quarks. Scientists found that the Λ contains the [ud] diquark, whereas the Σ does not. From this experiment scientists inferred that Λ baryons are more common than Σ baryons, and indeed they are more common by a factor of 10. References Further reading Quarks Hypothetical elementary particles
Diquark
[ "Physics" ]
602
[ "Hypothetical elementary particles", "Unsolved problems in physics", "Physics beyond the Standard Model" ]
2,144,050
https://en.wikipedia.org/wiki/Hydrion%20paper
Hydrion is a trademarked name for a popular line of compound pH indicators, marketed by Micro Essential Laboratory Inc., exhibiting a series of color changes (typically producing a recognizably different color for each pH unit) over a range of pH values. Although solutions are available, the most common forms of Hydrion are a series of papers impregnated with various mixtures of indicator dyes. It is considered a "universal indicator". See also PHydrion External links Micro Essential Laboratory, Inc. website PH indicators
Hydrion paper
[ "Chemistry", "Materials_science" ]
112
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
2,145,168
https://en.wikipedia.org/wiki/Metric%20tensor%20%28general%20relativity%29
In general relativity, the metric tensor (in this context often abbreviated to simply the metric) is the fundamental object of study. The metric captures all the geometric and causal structure of spacetime, being used to define notions such as time, distance, volume, curvature, angle, and separation of the future and the past. In general relativity, the metric tensor plays the role of the gravitational potential in the classical theory of gravitation, although the physical content of the associated equations is entirely different. Gutfreund and Renn say "that in general relativity the gravitational potential is represented by the metric tensor." Notation and conventions This article works with a metric signature that is mostly positive (); see sign convention. The gravitation constant will be kept explicit. This article employs the Einstein summation convention, where repeated indices are automatically summed over. Definition Mathematically, spacetime is represented by a four-dimensional differentiable manifold and the metric tensor is given as a covariant, second-degree, symmetric tensor on , conventionally denoted by . Moreover, the metric is required to be nondegenerate with signature . A manifold equipped with such a metric is a type of Lorentzian manifold. Explicitly, the metric tensor is a symmetric bilinear form on each tangent space of that varies in a smooth (or differentiable) manner from point to point. Given two tangent vectors and at a point in , the metric can be evaluated on and to give a real number: This is a generalization of the dot product of ordinary Euclidean space. Unlike Euclidean space – where the dot product is positive definite – the metric is indefinite and gives each tangent space the structure of Minkowski space. Local coordinates and matrix representations Physicists usually work in local coordinates (i.e. coordinates defined on some local patch of ). In local coordinates (where is an index that runs from 0 to 3) the metric can be written in the form The factors are one-form gradients of the scalar coordinate fields . The metric is thus a linear combination of tensor products of one-form gradients of coordinates. The coefficients are a set of 16 real-valued functions (since the tensor is a tensor field, which is defined at all points of a spacetime manifold). In order for the metric to be symmetric giving 10 independent coefficients. If the local coordinates are specified, or understood from context, the metric can be written as a symmetric matrix with entries . The nondegeneracy of means that this matrix is non-singular (i.e. has non-vanishing determinant), while the Lorentzian signature of implies that the matrix has one negative and three positive eigenvalues. Physicists often refer to this matrix or the coordinates themselves as the metric (see, however, abstract index notation). With the quantities being regarded as the components of an infinitesimal coordinate displacement four-vector (not to be confused with the one-forms of the same notation above), the metric determines the invariant square of an infinitesimal line element, often referred to as an interval. The interval is often denoted The interval imparts information about the causal structure of spacetime. When , the interval is timelike and the square root of the absolute value of is an incremental proper time. Only timelike intervals can be physically traversed by a massive object. When , the interval is lightlike, and can only be traversed by (massless) things that move at the speed of light. 
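The coordinate expressions referred to in this passage were lost in extraction. In standard notation, consistent with the mostly-plus signature stated above, they conventionally read as follows (a reconstruction of the usual textbook forms, not necessarily the article's exact rendering):

```latex
% Metric as a tensor field in local coordinates x^mu (mu = 0,...,3):
g = g_{\mu\nu}\, dx^{\mu} \otimes dx^{\nu}, \qquad g_{\mu\nu} = g_{\nu\mu}.
% Evaluation on two tangent vectors U, V at a point:
g(U, V) = g_{\mu\nu}\, U^{\mu} V^{\nu}.
% Invariant square of an infinitesimal line element (the interval):
ds^{2} = g_{\mu\nu}\, dx^{\mu} dx^{\nu},
% with ds^2 < 0 timelike, ds^2 = 0 lightlike, and ds^2 > 0 spacelike
% in the mostly-plus signature (- + + +).
```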
When , the interval is spacelike and the square root of acts as an incremental proper length. Spacelike intervals cannot be traversed, since they connect events that are outside each other's light cones. Events can be causally related only if they are within each other's light cones. The components of the metric depend on the choice of local coordinate system. Under a change of coordinates , the metric components transform as Properties The metric tensor plays a key role in index manipulation. In index notation, the coefficients of the metric tensor provide a link between covariant and contravariant components of other tensors. Contracting the contravariant index of a tensor with one of a covariant metric tensor coefficient has the effect of lowering the index and similarly a contravariant metric coefficient raises the index Applying this property of raising and lowering indices to the metric tensor components themselves leads to the property For a diagonal metric (one for which coefficients ; i.e. the basis vectors are orthogonal to each other), this implies that a given covariant coefficient of the metric tensor is the inverse of the corresponding contravariant coefficient , etc. Examples Flat spacetime The simplest example of a Lorentzian manifold is flat spacetime, which can be given as with coordinates and the metric These coordinates actually cover all of . The flat space metric (or Minkowski metric) is often denoted by the symbol and is the metric used in special relativity. In the above coordinates, the matrix representation of is (An alternative convention replaces coordinate by , and defines as in .) In spherical coordinates , the flat space metric takes the form where is the standard metric on the 2-sphere. Black hole metrics The Schwarzschild metric describes an uncharged, non-rotating black hole. There are also metrics that describe rotating and charged black holes. Schwarzschild metric Besides the flat space metric the most important metric in general relativity is the Schwarzschild metric which can be given in one set of local coordinates by where, again, is the standard metric on the 2-sphere. Here, is the gravitation constant and is a constant with the dimensions of mass. Its derivation can be found here. The Schwarzschild metric approaches the Minkowski metric as approaches zero (except at the origin where it is undefined). Similarly, when goes to infinity, the Schwarzschild metric approaches the Minkowski metric. With coordinates the metric can be written as Several other systems of coordinates have been devised for the Schwarzschild metric: Eddington–Finkelstein coordinates, Gullstrand–Painlevé coordinates, Kruskal–Szekeres coordinates, and Lemaître coordinates. Rotating and charged black holes The Schwarzschild solution supposes an object that is not rotating in space and is not charged. To account for charge, the metric must satisfy the Einstein field equations like before, as well as Maxwell's equations in a curved spacetime. A charged, non-rotating mass is described by the Reissner–Nordström metric. Rotating black holes are described by the Kerr metric (uncharged) and the Kerr–Newman metric (charged). Other metrics Other notable metrics are: Alcubierre metric, de Sitter/anti-de Sitter metrics, Friedmann–Lemaître–Robertson–Walker metric, Isotropic coordinates, Lemaître–Tolman metric, Peres metric, Rindler coordinates, Weyl–Lewis–Papapetrou coordinates, Gödel metric. Some of them are without the event horizon or can be without the gravitational singularity. 
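The explicit metrics discussed in this passage were likewise stripped. Their standard forms, reconstructed here with G and c kept explicit in line with the article's stated conventions, are:

```latex
% Minkowski (flat space) metric in coordinates (ct, x, y, z):
\eta_{\mu\nu} = \operatorname{diag}(-1,\, 1,\, 1,\, 1), \qquad
ds^{2} = -c^{2} dt^{2} + dx^{2} + dy^{2} + dz^{2}.
% Flat metric in spherical coordinates, with the standard 2-sphere metric:
ds^{2} = -c^{2} dt^{2} + dr^{2} + r^{2} d\Omega^{2}, \qquad
d\Omega^{2} = d\theta^{2} + \sin^{2}\theta\, d\varphi^{2}.
% Schwarzschild line element:
ds^{2} = -\left(1 - \frac{2GM}{c^{2} r}\right) c^{2} dt^{2}
  + \left(1 - \frac{2GM}{c^{2} r}\right)^{-1} dr^{2}
  + r^{2} d\Omega^{2}.
```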
Volume The metric induces a natural volume form (up to a sign), which can be used to integrate over a region of a manifold. Given local coordinates for the manifold, the volume form can be written where is the determinant of the matrix of components of the metric tensor for the given coordinate system. Curvature The metric completely determines the curvature of spacetime. According to the fundamental theorem of Riemannian geometry, there is a unique connection on any semi-Riemannian manifold that is compatible with the metric and torsion-free. This connection is called the Levi-Civita connection. The Christoffel symbols of this connection are given in terms of partial derivatives of the metric in local coordinates by the formula (where commas indicate partial derivatives). The curvature of spacetime is then given by the Riemann curvature tensor which is defined in terms of the Levi-Civita connection ∇. In local coordinates this tensor is given by: The curvature is then expressible purely in terms of the metric and its derivatives. Einstein's equations One of the core ideas of general relativity is that the metric (and the associated geometry of spacetime) is determined by the matter and energy content of spacetime. Einstein's field equations: where the Ricci curvature tensor and the scalar curvature relate the metric (and the associated curvature tensors) to the stress–energy tensor . This tensor equation is a complicated set of nonlinear partial differential equations for the metric components. Exact solutions of Einstein's field equations are very difficult to find. See also Alternatives to general relativity Introduction to the mathematics of general relativity Mathematics of general relativity Ricci calculus References See general relativity resources for a list of references. Tensors in general relativity Time in physics
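The formulas for the connection, curvature, and field equations referenced in this passage are standard; a reconstruction consistent with the article's conventions is:

```latex
% Christoffel symbols of the Levi-Civita connection:
\Gamma^{\lambda}{}_{\mu\nu}
  = \tfrac{1}{2} g^{\lambda\sigma}
    \left( \partial_{\mu} g_{\nu\sigma} + \partial_{\nu} g_{\mu\sigma}
         - \partial_{\sigma} g_{\mu\nu} \right).
% Riemann curvature tensor in local coordinates:
R^{\rho}{}_{\sigma\mu\nu}
  = \partial_{\mu} \Gamma^{\rho}{}_{\nu\sigma}
  - \partial_{\nu} \Gamma^{\rho}{}_{\mu\sigma}
  + \Gamma^{\rho}{}_{\mu\lambda} \Gamma^{\lambda}{}_{\nu\sigma}
  - \Gamma^{\rho}{}_{\nu\lambda} \Gamma^{\lambda}{}_{\mu\sigma}.
% Einstein field equations relating curvature to stress-energy:
R_{\mu\nu} - \tfrac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}.
```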
Metric tensor (general relativity)
[ "Physics", "Engineering" ]
1,804
[ "Time in physics", "Physical phenomena", "Tensors", "Physical quantities", "Tensor physical quantities", "Tensors in general relativity" ]
10,708,900
https://en.wikipedia.org/wiki/Reversible%20reference%20system%20propagation%20algorithm
Reversible reference system propagation algorithm (r-RESPA) is a time-stepping algorithm used in molecular dynamics. It evolves the system state over time by applying the propagator exp(iLΔt), where L is the Liouville operator. References Molecular dynamics Hamiltonian mechanics
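A minimal sketch of the multiple-time-step idea behind r-RESPA is given below for a single particle in one dimension, splitting the force into a cheap fast part and an expensive slow part and sub-cycling the fast one inside a symmetric (Trotter-factorized) step. The specific forces, step sizes, and sub-step count are illustrative assumptions, not from the article.

```python
# Minimal r-RESPA-style integrator sketch: slow half-kick, several fast
# velocity-Verlet sub-steps, slow half-kick, mirroring the symmetric
# Trotter factorization of exp(iL*dt). All parameters are illustrative.

def f_fast(x: float) -> float:
    return -100.0 * x  # stiff, cheap force (inner loop)

def f_slow(x: float) -> float:
    return -1.0 * x    # soft, "expensive" force (outer loop)

def respa_step(x: float, v: float, dt: float, n_inner: int, m: float = 1.0):
    """One outer r-RESPA step of size dt with n_inner fast sub-steps."""
    v += 0.5 * dt * f_slow(x) / m          # slow half-kick
    h = dt / n_inner
    for _ in range(n_inner):               # fast sub-cycling
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # slow half-kick
    return x, v

x, v = 1.0, 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.01, n_inner=10)
print(f"x = {x:.4f}, v = {v:.4f}")
```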
Reversible reference system propagation algorithm
[ "Physics", "Chemistry", "Mathematics" ]
50
[ "Molecular physics", "Theoretical physics", "Classical mechanics", "Computational physics", "Molecular dynamics", "Hamiltonian mechanics", "Computational chemistry", "Dynamical systems" ]
15,866,439
https://en.wikipedia.org/wiki/Hadamard%20regularization
In mathematics, Hadamard regularization (also called Hadamard finite part or Hadamard's partie finie) is a method of regularizing divergent integrals by dropping some divergent terms and keeping the finite part, introduced by Jacques Hadamard. Marcel Riesz showed that this can be interpreted as taking the meromorphic continuation of a convergent integral. If the Cauchy principal value integral exists, then it may be differentiated with respect to x to obtain the Hadamard finite part integral. Note that the symbols P.V. and F.p. are used here to denote Cauchy principal value and Hadamard finite-part integrals respectively. The Hadamard finite part integral may also be given by equivalent definitions, which may be derived by assuming that the function f is differentiable infinitely many times at x, that is, by assuming that f can be represented by its Taylor series about x. For details, see the references below. (Note that one term in the second equivalent definition is missing in the cited text but is corrected in the errata sheet of the book.) Integral equations containing Hadamard finite part integrals (with the function f unknown) are termed hypersingular integral equations. Hypersingular integral equations arise in the formulation of many problems in mechanics, such as in fracture analysis. Example Consider a divergent integral whose Cauchy principal value also diverges. To assign a finite value to such an integral, one may consider the finite part: the principal value, evaluated at the endpoints, takes the form of an infinite component together with a pair of finite terms, and removing the infinite component leaves the finite-part value. Note that this value does not represent the area under the curve, which for the example considered is clearly always positive. References . . . . . . . Integrals Summability methods
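The formulas in this article were lost in extraction. A plausible reconstruction of the finite-part definition and of the worked example, consistent with the surrounding remarks (in particular, that the resulting value is negative even though the integrand is positive), is:

```latex
% Hadamard finite part as a derivative of a Cauchy principal value:
\operatorname{F.p.} \int_{a}^{b} \frac{f(t)}{(t-x)^{2}}\, dt
  = \frac{d}{dx}\, \operatorname{P.V.} \int_{a}^{b} \frac{f(t)}{t-x}\, dt,
  \qquad a < x < b.
% Standard worked example: the divergent integral of 1/x^2 over [-1, 1].
% Its Cauchy principal value also diverges:
\operatorname{P.V.}\!\int_{-1}^{1}\frac{dx}{x^{2}}
  = \lim_{\varepsilon\to 0^{+}}
    \left[ \int_{-1}^{-\varepsilon} + \int_{\varepsilon}^{1} \right]
    \frac{dx}{x^{2}}
  = \lim_{\varepsilon\to 0^{+}}\left(\frac{2}{\varepsilon} - 2\right)
  = \infty.
% Dropping the infinite component 2/\varepsilon leaves the finite part:
\operatorname{F.p.} \int_{-1}^{1} \frac{dx}{x^{2}} = -2,
% which is negative even though 1/x^2 > 0 everywhere on the interval.
```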
Hadamard regularization
[ "Mathematics" ]
389
[ "Sequences and series", "Summability methods", "Mathematical structures" ]
15,875,500
https://en.wikipedia.org/wiki/Algorithmic%20mechanism%20design
Algorithmic mechanism design (AMD) lies at the intersection of economic game theory, optimization, and computer science. The prototypical problem in mechanism design is to design a system for multiple self-interested participants, such that the participants' self-interested actions at equilibrium lead to good system performance. Typical objectives studied include revenue maximization and social welfare maximization. Algorithmic mechanism design differs from classical economic mechanism design in several respects. It typically employs the analytic tools of theoretical computer science, such as worst case analysis and approximation ratios, in contrast to classical mechanism design in economics which often makes distributional assumptions about the agents. It also considers computational constraints to be of central importance: mechanisms that cannot be efficiently implemented in polynomial time are not considered to be viable solutions to a mechanism design problem. This often, for example, rules out the classic economic mechanism, the Vickrey–Clarke–Groves auction. History Noam Nisan and Amir Ronen first coined "Algorithmic mechanism design" in a research paper published in 1999. See also Algorithmic game theory Computational social choice Metagame Incentive compatible Vickrey–Clarke–Groves mechanism References and notes Further reading . Mechanism design Algorithms
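A toy illustration of the kind of mechanism studied in this field is the single-item Vickrey (second-price) auction, in which truthful bidding is a dominant strategy and which runs in trivial polynomial time. The sketch below is a generic illustration, with hypothetical bidder names and values, not an implementation of any algorithm from the cited paper.

```python
# Sketch: a single-item Vickrey (second-price) sealed-bid auction,
# the textbook truthful mechanism.

def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Winner pays the second-highest bid, which makes truthful bidding
    a dominant strategy for every participant."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = vickrey_auction({"alice": 10.0, "bob": 8.0, "carol": 6.5})
print(f"{winner} wins and pays {price}")  # alice wins and pays 8.0
```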
Algorithmic mechanism design
[ "Mathematics" ]
237
[ "Applied mathematics", "Algorithms", "Mathematical logic", "Game theory", "Mechanism design" ]
9,206,499
https://en.wikipedia.org/wiki/Metal%E2%80%93insulator%20transition
Metal–insulator transitions are transitions of a material from a metal (a material with good electrical conductivity of electric charges) to an insulator (a material where conductivity of charges is quickly suppressed). These transitions can be achieved by tuning various ambient parameters such as temperature, pressure or, in the case of a semiconductor, doping. History The basic distinction between metals and insulators was proposed by Hans Bethe, Arnold Sommerfeld and Felix Bloch in 1928-1929. It distinguished between conducting metals (with partially filled bands) and nonconducting insulators. However, in 1937 Jan Hendrik de Boer and Evert Verwey reported that many transition-metal oxides (such as NiO) with a partially filled d-band were poor conductors, often insulating. In the same year, the importance of the electron-electron correlation was stated by Rudolf Peierls. Since then, these materials, as well as others exhibiting a transition between a metal and an insulator, have been extensively studied, e.g. by Sir Nevill Mott, after whom the insulating state is named the Mott insulator. The first metal-insulator transition to be found was the Verwey transition of magnetite in the 1940s. Theoretical description The classical band structure of solid state physics predicts the Fermi level to lie in a band gap for insulators and in the conduction band for metals, which means metallic behavior is seen for compounds with partially filled bands. However, some compounds have been found which show insulating behavior even for partially filled bands. This is due to the electron-electron correlation, since electrons cannot be seen as noninteracting. Mott considers a lattice model with just one electron per site. Without taking the interaction into account, each site could be occupied by two electrons, one with spin up and one with spin down. Due to the interaction the electrons would then feel a strong Coulomb repulsion, which Mott argued splits the band in two. Having one electron per site fills the lower band while the upper band remains empty, which suggests the system becomes an insulator. This interaction-driven insulating state is referred to as a Mott insulator. The Hubbard model is one simple model commonly used to describe metal-insulator transitions and the formation of a Mott insulator. Elementary mechanisms Metal–insulator transitions (MIT) and models for approximating them can be classified based on the origin of the transition. Mott transition: The most common transition, arising from intense electron-electron correlation. Mott-Hubbard transition: An extension incorporating the Hubbard model, approaching the transition from the correlated paramagnetic state. Brinkman-Rice transition: Approaching the transition from the non-interacting metallic state, where each orbital is half-filled. Dynamical mean-field theory: A theory that accommodates both the Mott-Hubbard and Brinkman-Rice models of the transition. Peierls transition: On some occasions, the lattice itself, through electron-phonon interactions, can give rise to a transition. An example of a Peierls insulator is the blue bronze K0.3MoO3, which undergoes transition at T = 180 K. Anderson transition: When insulating behavior in metals arises from distortions and lattice defects. Polarization catastrophe The polarization catastrophe model describes the transition of a material from an insulator to a metal.
This model considers the electrons in a solid to act as oscillators and the conditions for this transition to occur is determined by the number of oscillators per unit volume of the material. Since every oscillator has a frequency (ω0) we can describe the dielectric function of a solid as, where ε(ω) is the dielectric function, N is the number of oscillators per unit volume, ω0 is the fundamental oscillation frequency, m is the oscillator mass, and ω is the excitation frequency. For a material to be a metal, the excitation frequency (ω) must be zero by definition, which then gives us the static dielectric constant, where εs is the static dielectric constant. If we rearrange equation (1) to isolate the number of oscillators per unit volume we get the critical concentration of oscillators (Nc) at which εs becomes infinite, indicating a metallic solid and the transition from an insulator to a metal. This expression creates a boundary that defines the transition of a material from an insulator to a metal. This phenomenon is known as the polarization catastrophe. The polarization catastrophe model also theorizes that, with a high enough density, and thus a low enough molar volume, any solid could become metallic in character. Predicting whether a material will be metallic or insulating can be done by taking the ratio R/V, where R is the molar refractivity, sometimes represented by A, and V is the molar volume. In cases where R/V is less than 1, the material will have non-metallic, or insulating properties, while an R/V value greater than one yields metallic character. See also References Further reading http://rmp.aps.org/abstract/RMP/v70/i4/p1039_1 Condensed matter physics Phase transitions
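The equations referred to in this passage were stripped in extraction. Assuming the standard Lorentz-oscillator form with the local-field (Clausius–Mossotti) correction, which is the form that actually produces a divergence of the static dielectric constant at a finite oscillator density, a reconstruction reads:

```latex
% Dielectric function of a solid of N oscillators per unit volume
% (assumed local-field-corrected Lorentz form, eq. (1) of the text):
\varepsilon(\omega) = 1 +
  \frac{N e^{2} / (\varepsilon_{0} m)}
       {\omega_{0}^{2} - \omega^{2} - N e^{2} / (3 \varepsilon_{0} m)}.
% Static dielectric constant, obtained by setting \omega = 0:
\varepsilon_{s} = 1 +
  \frac{N e^{2} / (\varepsilon_{0} m)}
       {\omega_{0}^{2} - N e^{2} / (3 \varepsilon_{0} m)}.
% \varepsilon_{s} diverges (the polarization catastrophe) at the
% critical oscillator concentration:
N_{c} = \frac{3 \varepsilon_{0} m \omega_{0}^{2}}{e^{2}}.
```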
Metal–insulator transition
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,100
[ "Physical phenomena", "Phase transitions", "Phases of matter", "Critical phenomena", "Materials science", "Condensed matter physics", "Statistical mechanics", "Matter" ]
9,206,525
https://en.wikipedia.org/wiki/Hydron%20%28chemistry%29
In chemistry, the hydron, informally called proton, is the cationic form of atomic hydrogen, represented with the symbol H+. The general term "hydron", endorsed by IUPAC, encompasses cations of hydrogen regardless of isotope: thus it refers collectively to protons (1H+) for the protium isotope, deuterons (2H+ or D+) for the deuterium isotope, and tritons (3H+ or T+) for the tritium isotope. Unlike most other ions, the hydron consists only of a bare atomic nucleus. The negatively charged counterpart of the hydron is the hydride anion, H−. Properties Solute properties Other things being equal, compounds that readily donate hydrons (Brønsted acids, see below) are generally polar, hydrophilic solutes and are often soluble in solvents with high relative static permittivity (dielectric constants). Examples include organic acids like acetic acid (CH3COOH) or methanesulfonic acid (CH3SO3H). However, large nonpolar portions of the molecule may attenuate these properties. Thus, as a result of its alkyl chain, octanoic acid (C7H15COOH) is considerably less hydrophilic compared to acetic acid. The unsolvated hydron (a completely free or "naked" hydrogen atomic nucleus) does not exist in the condensed (liquid or solid) phase. Because the electric field strength at its surface is inversely related to its radius, a tiny bare nucleus interacts thousands of times more strongly with nearby electrons than any partly ionized atom does. Although superacids are sometimes said to owe their extraordinary hydron-donating power to the presence of "free hydrons", such a statement is misleading: even for a source of "free hydrons" like H2F+, one of the superacidic cations present in the superacid fluoroantimonic acid (HF:SbF5), detachment of a free H+ still comes at an enormous energetic penalty on the order of several hundred kcal/mol. This effectively rules out the possibility of the free hydron being present in solution. For this reason, in liquid strong acids, hydrons are believed to diffuse by sequential transfer from one molecule to the next along a network of hydrogen bonds through what is known as the Grotthuss mechanism. Acidity The hydron ion can incorporate an electron pair from a Lewis base into the molecule by adduction: H+ + :L → HL+ Because of this capture of the Lewis base (L), the hydron ion has Lewis acidic character. In terms of Hard/Soft Acid Base (HSAB) theory, the bare hydron is an infinitely hard Lewis acid. The hydron plays a central role in Brønsted–Lowry acid–base theory: a species that behaves as a hydron donor in a reaction is known as the Brønsted acid, while the species accepting the hydron is known as the Brønsted base. In the generic acid–base reaction shown below, HA is the acid, while B (shown with a lone pair) is the base: HA + :B → HB+ + :A− The hydrated form of the hydrogen cation, the hydronium (hydroxonium) ion H3O+(aq), is a key object of Arrhenius' definition of acid. Other hydrated forms, the Zundel cation H5O2+, which is formed from a proton and two water molecules, and the Eigen cation H9O4+, which is formed from a hydronium ion and three water molecules, are theorized to play an important role in the diffusion of protons through an aqueous solution according to the Grotthuss mechanism.
Although the ion H3O+(aq) is often shown in introductory textbooks to emphasize that the hydron is never present as an unsolvated species in aqueous solution, it is somewhat misleading, as it oversimplifies the notoriously complex speciation of the solvated proton in water; the notation H+(aq) is often preferred, since it conveys aqueous solvation while remaining noncommittal with respect to the number of water molecules involved. Isotopes of hydron The proton, having the symbol p or 1H+, is the +1 ion of protium, 1H. The deuteron, having the symbol 2H+ or D+, is the +1 ion of deuterium, 2H or D. The triton, having the symbol 3H+ or T+, is the +1 ion of tritium, 3H or T. Other isotopes of hydrogen are too unstable to be relevant in chemistry. History of the term The term "hydron" is recommended by IUPAC to be used instead of "proton" if no distinction is made between the isotopes proton, deuteron and triton, all found in naturally occurring isotope mixtures. The name "proton" refers to the isotopically pure 1H+. On the other hand, calling the hydron simply "hydrogen ion" is not recommended because hydrogen anions also exist. The term "hydron" was defined by IUPAC in 1988. Traditionally, the term "proton" was and is used in place of "hydron". The latter term is generally only used in contexts where comparisons between the various isotopes of hydrogen are important (as in the kinetic isotope effect or hydrogen isotopic labeling). Otherwise, referring to hydrons as protons is still considered acceptable, for example in such terms as protonation, deprotonation, proton pump, or proton channel. The transfer of H+ in an acid–base reaction is usually referred to as proton transfer. Acids and bases are referred to as proton donors and acceptors, respectively. 99.9844% of natural hydrons (hydrogen nuclei) are protons, and the remainder (about 156 per million in sea water) are deuterons (see deuterium), except for some very rare natural tritons (see tritium). See also Deprotonation Dihydrogen cation Hydrogen ion cluster Solvated electron Superacid Trihydrogen cation References Cations Hydrogen Proton Deuterium Tritium
Hydron (chemistry)
[ "Physics", "Chemistry" ]
1,255
[ "Cations", "Ions", "Matter" ]
9,208,413
https://en.wikipedia.org/wiki/QuarkNet
QuarkNet is a long-term, research-based teacher professional development program in the United States jointly funded by the National Science Foundation and the US Department of Energy. Since 1999, QuarkNet has established centers at universities and national laboratories conducting research in particle physics (also called high-energy physics) across the United States, and has been bringing such physics to high school classrooms. QuarkNet programs are described in the National Research Council National Science Education Standards report (1995) and support the Next Generation Science Standards (2013). Overview Boot Camp The summer Boot Camp is an annual national activity allowing teachers to see detectors and colliders, as well as form research groups to process experimental data. Teachers have been working in separate groups investigating trigger data released by CMS since early 2011. The groups search the data for evidence of the J/ψ meson and the Z and W bosons. They use Excel to reconstruct the invariant mass of a particle from the four-vectors of that particle's decay products. In addition, participants attend several talks and tours of technical areas. Cosmic ray studies The main QuarkNet student investigations supported at the national level are cosmic ray studies. Working with Fermilab technicians and research physicists, QuarkNet staff have developed a classroom cosmic ray muon detector that uses the same technologies as the largest detectors at Fermilab and CERN. To support inter-school collaboration, QuarkNet collaborates with the Interactions in Understanding the Universe Project (I2U2) to develop and support the Cosmic Ray e-Lab. An e-Lab is a student-led, teacher-guided investigation using experimental data. Students have an opportunity to organize and conduct authentic research and experience the environment of a scientific collaboration. Participating schools set up a detector somewhere at the school. Students collect and upload the data to a central server located at Argonne National Laboratory. Students can access data from detectors in the cluster for use in studies, such as determining the (mean) lifetime of muons, the overall flux of muons in cosmic rays, or a study of extended air showers. Fellowships & programs In summer 2007, QuarkNet inaugurated the QuarkNet Fellows Program to develop the leadership potential of teachers who would work with staff to provide professional development activities and support for centers. Three groups of fellows in the areas of cosmic ray studies, LHC, and teaching and learning share responsibilities for offering workshops and sessions, developing workshop materials, supporting e-Labs and masterclasses, giving presentations at AAPT meetings and more. In 2009, a new group of fellows joined the program. Leadership fellows work with staff to support centers and gather data about center performance. Masterclass Since 2007, QuarkNet has hosted a one-day national program for students called Masterclass, initially studying Large Electron–Positron Collider-era CERN data, and now studying ALICE, ATLAS or CMS data. In addition to analysis of data, the day offers lectures and the opportunity to discuss results. Summer Student Research Program Based on a model at the University of Notre Dame, QuarkNet has offered a summer student research program since 2004. Typically, teams of four high school students supervised by one teacher spend six weeks involved in various physics research projects. 
Some centers choose to modify this model, involving more students and/or less time. The research is associated with ATLAS and CMS, the International Linear Collider R&D, cosmic ray muon detectors, optical fiber R&D and more. Teams are supported at up to 25 centers each summer. Examples of recent research titles include: Search and Identification of ; Comparing the Amount of Muon Events to Daily Weather Changes; Cosmic Ray Signals in Radar Echo; Fibers for Forward Calorimeter; The Effects of Impurities on Radio Signal Detection in Ice; Quartz Plate Calorimetry; Galactic Asymmetry of the Milky Way and RF Magnet Design; and Weak Lensing Mass Estimates of the Elliot Arc Cluster. References External links QuarkNet Fermilab CERN Cosmic Ray e-Lab Interactions in Understanding the Universe More On DataCamp Physics education 1999 establishments in the United States
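The invariant-mass reconstruction mentioned under Boot Camp above amounts to evaluating m² = (ΣE)² − |Σp|² over the decay products' four-vectors. Below is a minimal sketch in Python (rather than the Excel the teachers used); the two dimuon four-vectors are illustrative made-up numbers, not CMS data.

```python
import math

def invariant_mass(particles):
    """Invariant mass of a set of decay products, each a four-vector
    (E, px, py, pz) in GeV with c = 1: m^2 = (sum E)^2 - |sum p|^2."""
    E = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    m2 = E**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))  # clamp tiny negative values from rounding

# Hypothetical dimuon four-vectors (illustrative numbers only);
# each lepton is taken as nearly massless, so E ~ |p|.
mu_plus = (43.9, 20.0, 30.0, 25.0)
mu_minus = (45.0, -18.0, -32.0, -26.0)
print(f"m = {invariant_mass([mu_plus, mu_minus]):.1f} GeV")  # ~88.8 GeV, in the Z region
```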
QuarkNet
[ "Physics" ]
832
[ "Applied and interdisciplinary physics", "Physics education" ]
9,209,693
https://en.wikipedia.org/wiki/Helically%20Symmetric%20Experiment
The Helically Symmetric Experiment (HSX, stylized as Helically Symmetric eXperiment) is an experimental plasma confinement device at the University of Wisconsin–Madison, with design principles that are intended to be incorporated into a fusion reactor. The HSX is a modular coil stellarator, which is a toroid-shaped pressure vessel with external electromagnets that generate a magnetic field for the purpose of containing a plasma. It began operation in 1999. Background A stellarator is a magnetic confinement fusion device that uses external magnetic coils to generate all of the magnetic fields needed to confine the high temperature plasma. In contrast, in tokamaks and reversed field pinches, the magnetic field is created by the interaction of external magnets and an electrical current flowing through the plasma. The lack of this large externally driven plasma current makes stellarators suitable for steady-state fusion power plants. However, due to the non-axisymmetric nature of the fields, older stellarators have a combination of toroidal and helical modulation of the magnetic field lines, which leads to high transport of plasma out of the confinement volume at fusion-relevant conditions. This problem has been addressed in the Wendelstein 7-X, which has better particle confinement than that expected in ITER and has achieved plasma durations of 30 minutes. The large transport in older stellarators can limit their performance as fusion reactors. This problem can be largely reduced by tailoring the magnetic field geometry. The dramatic improvements in computer modeling capability in the last two decades have helped to "optimize" the magnetic geometry to reduce this transport, resulting in a new class of stellarators called "quasi-symmetric stellarators". Odd-looking, computer-designed electromagnets directly produce the needed magnetic field configuration. These devices combine the good confinement properties of tokamaks and the steady-state nature of conventional stellarators. The Helically Symmetric Experiment (HSX) at the University of Wisconsin–Madison is such a quasi-helically symmetric stellarator (helical axis of symmetry). Device The magnetic field in HSX is generated by a set of 48 twisted coils arranged in four field periods. HSX typically operates at a magnetic field of 1 tesla at the center of the plasma column. A set of auxiliary coils is used to deliberately break the symmetry to mimic conventional stellarator properties for comparison. The HSX vacuum vessel is made of stainless steel, and is helically shaped to follow the magnetic geometry. Plasma formation and heating are achieved using 28 GHz, 100 kW electron cyclotron resonance heating (ECRH). A second 100 kW gyrotron has recently been installed on HSX to perform heat pulse modulation studies. Operations Plasmas as high as 3 kiloelectronvolts in temperature and about 8/cc in density are routinely formed for various experiments. Experiments have shown that edge magnetic islands affect particle fueling and exhaust. In HSX, the presence of a magnetic island chain at the plasma edge increases the plasma sourcing-to-exhaust ratio but reduces fueling efficiency by 25%. Moving the island radially inward decreases both the effective and global particle confinement times. This process is effective for controlling plasma fueling and helium exhaust times. Subsystems, diagnostics HSX has a large set of diagnostics to measure properties of plasma and magnetic fields. The following gives a list of major diagnostics and subsystems. 
Thomson scattering Diagnostic neutral beam Electron cyclotron resonance heating system Electron cyclotron emission radiometers Charge exchange recombination spectroscopy Interferometer Motional Stark effect Heavy ion beam probe (coming soon) Laser blow-off Hard and soft-X-ray detectors Mirnov coils Rogowski coils Passive spectroscopy Goals and major achievements HSX has made and continues to make fundamental contributions to the physics of quasisymmetric stellarators that show significant improvement over the conventional stellarator concept. These include: Measuring large ion flows in the direction of quasisymmetry Reduced flow damping in the direction of quasisymmetry Reduced passing particle deviation from a flux surface Reduced direct loss orbits Reduced neoclassical transport Reduced equilibrium parallel currents because of the high effective transform Ongoing experiments A large number of experimental and computational research works are being done in HSX by students, staff and faculties. Some of them are in collaboration with other universities and national laboratories, both in the USA and abroad. Major research projects at present are listed below: Effect of quasi-symmetry on plasma flows Impurity transport Radio frequency heating Supersonic plasma fueling and the neutral population Heat pulse propagation experiments to study thermal transport Interaction of turbulence and flows in HSX and the effects of quasi-symmetry on the determination of the radial electric field Equilibrium reconstruction of the plasma density, pressure and current profiles Effects of viscosity and symmetry on the determination of the flows and the radial electric field Divertor flows, particle edge fluxes Effect of radial electric field on the bootstrap current Effect of quasi-symmetry on fast ion confinement References Additional resources External links Experimental Tests of Quasisymmetry in HSX. Talmadge Slide 4 compares with tokamak Stellarators Plasma physics facilities University of Wisconsin–Madison
Helically Symmetric Experiment
[ "Physics" ]
1,042
[ "Plasma physics facilities", "Plasma physics" ]
9,209,712
https://en.wikipedia.org/wiki/Thermoelectric%20generator
A thermoelectric generator (TEG), also called a Seebeck generator, is a solid-state device that converts heat (driven by temperature differences) directly into electrical energy through a phenomenon called the Seebeck effect (a form of thermoelectric effect). Thermoelectric generators function like heat engines, but are less bulky and have no moving parts. However, TEGs are typically more expensive and less efficient. When the same principle is used in reverse to create a heat gradient from an electric current, it is called a thermoelectric (or Peltier) cooler. Thermoelectric generators could be used in power plants and factories to convert waste heat into additional electrical power and in automobiles as automotive thermoelectric generators (ATGs) to increase fuel efficiency. Radioisotope thermoelectric generators use radioisotopes to generate the required temperature difference to power space probes. Thermoelectric generators can also be used alongside solar panels. History In 1821, Thomas Johann Seebeck discovered that a thermal gradient formed between two different conductors can produce electricity. At the heart of the thermoelectric effect is the fact that a temperature gradient in a conducting material results in heat flow; this results in the diffusion of charge carriers. The flow of charge carriers between the hot and cold regions in turn creates a voltage difference. In 1834, Jean Charles Athanase Peltier discovered the reverse effect, that running an electric current through the junction of two dissimilar conductors could, depending on the direction of the current, cause it to act as a heater or cooler. George Cove accidentally invented a photovoltaic panel in 1909, despite intending to invent a thermoelectric generator with thermocouples. He noted that heat alone didn't produce any power, only incident light, but he had no explanation for how it could be working. The operational principle is now understood to have been a very simple form of Schottky junction. Efficiency The typical efficiency of TEGs is around 5–8%, although it can be higher. Older devices used bimetallic junctions and were bulky. More recent devices use highly doped semiconductors made from bismuth telluride (Bi2Te3), lead telluride (PbTe), calcium manganese oxide (Ca2Mn3O8), or combinations thereof, depending on application temperature. These are solid-state devices and, unlike dynamos, have no moving parts, with the occasional exception of a fan or pump to improve heat transfer. If the hot side is around 1273 K and ZT values of 3–4 are achieved, the efficiency is approximately 33–37%, allowing TEGs to compete with certain heat engine efficiencies. As of 2021, there are materials (some containing widely available and inexpensive arsenic and tin) reaching a ZT value > 3: monolayer AsP3 (ZT = 3.36 on the armchair axis); n-type doped InP3 (ZT = 3.23); p-type doped SnP3 (ZT = 3.46); p-type doped SbP3 (ZT = 3.5). Construction Thermoelectric power generators consist of three major components: thermoelectric materials, thermoelectric modules and thermoelectric systems that interface with the heat source. Thermoelectric materials Thermoelectric materials generate power directly from the heat by converting temperature differences into electric voltage. These materials must have both high electrical conductivity (σ) and low thermal conductivity (κ) to be good thermoelectric materials. 
Having low thermal conductivity ensures that when one side is made hot, the other side stays cold, which helps to generate a large voltage while in a temperature gradient. The measure of the magnitude of electron flow in response to a temperature difference across that material is given by the Seebeck coefficient (S). The efficiency of a given material at producing thermoelectric power is simply estimated by its "figure of merit" zT = S²σT/κ. For many years, the main three semiconductors known to have both low thermal conductivity and high power factor were bismuth telluride (Bi2Te3), lead telluride (PbTe), and silicon germanium (SiGe). Some of these materials contain somewhat rare elements which make them expensive. Today, the thermal conductivity of semiconductors can be lowered without affecting their high electrical properties using nanotechnology. This can be achieved by creating nanoscale features such as particles, wires or interfaces in bulk semiconductor materials. However, the manufacturing processes of nano-materials are still challenging. Thermoelectric advantages Thermoelectric generators are all-solid-state devices that do not require any fluids for fuel or cooling, making them non-orientation dependent and allowing for use in zero-gravity or deep-sea applications. The solid-state design allows for operation in severe environments. Thermoelectric generators have no moving parts, which produces a more reliable device that does not require maintenance for long periods. The durability and environmental stability have made thermoelectrics a favorite for NASA's deep space explorers among other applications. One of the key advantages of thermoelectric generators outside of such specialized applications is that they can potentially be integrated into existing technologies to boost efficiency and reduce environmental impact by producing usable power from waste heat. Thermoelectric module A thermoelectric module is a circuit containing thermoelectric materials which generate electricity from heat directly. A thermoelectric module consists of two dissimilar thermoelectric materials joined at their ends: an n-type (with negative charge carriers), and a p-type (with positive charge carriers) semiconductor. Direct electric current will flow in the circuit when there is a temperature difference between the ends of the materials. Generally, the current magnitude is directly proportional to the temperature difference: J = σS∇T, where σ is the local conductivity, S is the Seebeck coefficient (also known as thermopower), a property of the local material, and ∇T is the temperature gradient. In application, thermoelectric modules in power generation work in very tough mechanical and thermal conditions. Because they operate in a very high-temperature gradient, the modules are subject to large thermally induced stresses and strains for long periods. They also are subject to mechanical fatigue caused by a large number of thermal cycles. Thus, the junctions and materials must be selected so that they survive these tough mechanical and thermal conditions. Also, the module must be designed such that the two thermoelectric materials are thermally in parallel, but electrically in series. The efficiency of a thermoelectric module is greatly affected by the geometry of its design. Thermoelectric design Thermoelectric generators are made of several thermopiles, each consisting of many thermocouples made of a connected n-type and p-type material. 
The arrangement of the thermocouples is typically in three main designs: planar, vertical, and mixed. Planar design involves thermocouples put onto a substrate horizontally between the heat source and cool side, resulting in the ability to create longer and thinner thermocouples, thereby increasing the thermal resistance and temperature gradient and eventually increasing voltage output. Vertical design has thermocouples arranged vertically between the hot and cool plates, leading to high integration of thermocouples as well as a high output voltage, making it the most widely used design commercially. The mixed design has the thermocouples arranged laterally on the substrate while the heat flow is vertical between plates. Microcavities under the hot contacts of the device allow for a temperature gradient, which allows the substrate's thermal conductivity to affect the gradient and the efficiency of the device. For microelectromechanical systems, TEGs can be designed on the scale of handheld devices to use body heat in the form of thin films. Flexible TEGs for wearable electronics can be made with novel polymers through additive manufacturing or thermal spraying processes. Cylindrical TEGs for using heat from vehicle exhaust pipes can also be made using circular thermocouples arranged in a cylinder. Many designs for TEGs can be made for the different devices they are applied to. Thermoelectric systems Using thermoelectric modules, a thermoelectric system generates power by taking in heat from a source such as a hot exhaust flue. To operate, the system needs a large temperature gradient, which is not easy in real-world applications. The cold side must be cooled by air or water. Heat exchangers are used on both sides of the modules to supply this heating and cooling. There are many challenges in designing a reliable TEG system that operates at high temperatures. Achieving high efficiency in the system requires extensive engineering design to balance the heat flow through the modules against maximizing the temperature gradient across them. To do this, designing heat exchanger technologies in the system is one of the most important aspects of TEG engineering. In addition, the system must minimize thermal losses at the interfaces between materials at several places. Another challenging constraint is avoiding large pressure drops between the heating and cooling sources. If AC power is required (such as for powering equipment designed to run from AC mains power), the DC power from the TE modules must be passed through an inverter, which lowers efficiency and adds to the cost and complexity of the system. Materials for TEG Only a few known materials to date are identified as thermoelectric materials. Most thermoelectric materials today have a zT, the figure of merit, value of around 1, such as in bismuth telluride (Bi2Te3) at room temperature and lead telluride (PbTe) at 500–700 K. However, in order to be competitive with other power generation systems, TEG materials should have a zT of 2–3. Most research in thermoelectric materials has focused on increasing the Seebeck coefficient (S) and reducing the thermal conductivity, especially by manipulating the nanostructure of the thermoelectric materials. Because both the thermal and electrical conductivity correlate with the charge carriers, new means must be introduced in order to reconcile the contradiction between high electrical conductivity and low thermal conductivity, as is needed; a short numerical sketch of zT and the resulting device efficiency follows. 
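As a numerical companion to the formulas above, here is a minimal sketch (Python) that computes zT = S²σT/κ, the standard single-stage maximum efficiency η = (1 − Tc/Th)·(√(1+ZT̄) − 1)/(√(1+ZT̄) + Tc/Th) with ZT̄ taken at the mean temperature, and the open-circuit voltage of a series-wired module. All property values are illustrative assumptions, not data for any specific commercial device.

```python
import math

def figure_of_merit(S, sigma, kappa, T):
    """zT = S^2 * sigma * T / kappa (dimensionless)."""
    return S**2 * sigma * T / kappa

def max_efficiency(t_hot, t_cold, zt_mean):
    """Standard single-stage TEG limit: the Carnot factor times a
    zT-dependent reduction factor (constant-property approximation)."""
    carnot = 1.0 - t_cold / t_hot
    root = math.sqrt(1.0 + zt_mean)
    return carnot * (root - 1.0) / (root + t_cold / t_hot)

def module_open_circuit_voltage(n_couples, s_p, s_n, delta_t):
    """V_oc of n p-n couples in series: the leg Seebeck voltages add."""
    return n_couples * (s_p - s_n) * delta_t

# Illustrative Bi2Te3-like leg properties (assumed values):
S, sigma, kappa = 200e-6, 1.0e5, 1.5  # V/K, S/m, W/(m K)
zt = figure_of_merit(S, sigma, kappa, T=300.0)
print(f"zT(300 K) = {zt:.2f}")  # ~0.8, typical of commercial material
print(f"max efficiency (500 K hot / 300 K cold) = {max_efficiency(500.0, 300.0, zt):.1%}")  # ~7%, consistent with the 5-8% quoted earlier
print(f"V_oc = {module_open_circuit_voltage(127, 200e-6, -200e-6, 100.0):.2f} V")  # 127-couple module, 100 K difference
```

Plugging the article's own numbers into max_efficiency (Th = 1273 K, Tc = 300 K, ZT̄ = 3.5) reproduces the 33–37% range quoted in the Efficiency section, which is a useful consistency check on the formula.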
When selecting materials for thermoelectric generation, a number of other factors need to be considered. During operation, ideally, the thermoelectric generator has a large temperature gradient across it. Thermal expansion will then introduce stress in the device, which may cause fracture of the thermoelectric legs or separation from the coupling material. The mechanical properties of the materials must be considered, and the coefficients of thermal expansion of the n- and p-type materials must be matched reasonably well. In segmented thermoelectric generators, the materials' compatibility must also be considered to avoid incompatibility of the relative current, defined as the ratio of electrical current to diffusion heat current, between segment layers. A material's compatibility factor is defined as s = (√(1 + zT) − 1)/(ST). When the compatibility factor from one segment to the next differs by more than a factor of about two, the device will not operate efficiently. The material parameters determining s (as well as zT) are temperature-dependent, so the compatibility factor may change from the hot side to the cold side of the device, even in one segment. This behavior is referred to as self-compatibility and may become important in devices designed for wide-temperature application. In general, thermoelectric materials can be categorized into conventional and new materials: Conventional materials Many TEG materials are employed in commercial applications today. These materials can be divided into three groups based on the temperature range of operation: Low-temperature materials (up to around 450 K): alloys based on bismuth (Bi) in combinations with antimony (Sb), tellurium (Te) or selenium (Se). Intermediate-temperature materials (up to 850 K): materials based on alloys of lead (Pb). High-temperature materials (up to 1300 K): materials fabricated from silicon–germanium (SiGe) alloys. Although these materials still remain the cornerstone for commercial and practical applications in thermoelectric power generation, significant advances have been made in synthesizing new materials and fabricating material structures with improved thermoelectric performance. Recent research has focused on improving the material's figure of merit (zT), and hence the conversion efficiency, by reducing the lattice thermal conductivity. New materials Researchers are trying to develop new thermoelectric materials for power generation by improving the figure of merit zT. One example of these materials is the semiconductor compound β-Zn4Sb3, which possesses an exceptionally low thermal conductivity and exhibits a maximum zT of 1.3 at a temperature of 670 K. This material is also relatively inexpensive and stable up to this temperature in a vacuum, and can be a good alternative in the temperature range between materials based on Bi2Te3 and PbTe. Among the most exciting developments in thermoelectric materials was the development of single-crystal tin selenide, which produced a record zT of 2.6 in one direction. Other new materials of interest include skutterudites, tetrahedrites, and rattling-ion crystals. Besides improving the figure of merit, there is increasing focus on developing new materials by increasing the electrical power output, decreasing cost and developing environmentally friendly materials. For example, when the fuel cost is low or almost free, such as in waste heat recovery, then the cost per watt is only determined by the power per unit area and the operating period. 
As a result, this has initiated a search for materials with high power output rather than conversion efficiency. For example, the rare-earth compound YbAl3 has a low figure of merit, but it has a power output of at least double that of any other material, and can operate over the temperature range of a waste heat source. Novel processing To increase the figure of merit (zT), a material's thermal conductivity should be minimized while its electrical conductivity and Seebeck coefficient are maximized. In most cases, methods to increase or decrease one property result in the same effect on other properties, due to their interdependence. A novel processing technique exploits the scattering of different phonon frequencies to selectively reduce lattice thermal conductivity without the typical negative effects on electrical conductivity from the simultaneous increased scattering of electrons. In a bismuth antimony tellurium ternary system, liquid-phase sintering is used to produce low-energy semicoherent grain boundaries, which do not have a significant scattering effect on electrons. The breakthrough is then applying a pressure to the liquid in the sintering process, which creates a transient flow of the Te-rich liquid and facilitates the formation of dislocations that greatly reduce the lattice conductivity. The ability to selectively decrease the lattice conductivity results in a reported zT value of 1.86, which is a significant improvement over the current commercial thermoelectric generators with zT ~ 0.3–0.6. These improvements highlight the fact that, in addition to the development of novel materials for thermoelectric applications, using different processing techniques to design the microstructure is a viable and worthwhile effort. In fact, it often makes sense to work to optimize both composition and microstructure. Uses Thermoelectric generators (TEG) have a variety of applications. Frequently, thermoelectric generators are used for low-power remote applications or where bulkier but more efficient heat engines such as Stirling engines would not be possible. Unlike heat engines, the solid-state electrical components typically used to perform thermal-to-electric energy conversion have no moving parts. The thermal-to-electric energy conversion can be performed using components that require no maintenance, have inherently high reliability, and can be used to construct generators with long service-free lifetimes. This makes thermoelectric generators well suited for equipment with low to modest power needs in remote uninhabited or inaccessible locations such as mountaintops, the vacuum of space, or the deep ocean. The main uses of thermoelectric generators are: Space probes, including the Mars Curiosity rover, generate electricity using a radioisotope thermoelectric generator whose heat source is a radioactive element. Waste heat recovery. Every human activity, transport and industrial process generates waste heat, making it possible to harvest residual energy from cars, aircraft, ships, industries and the human body. From cars, the main source of energy is the exhaust gas. Harvesting that heat energy using a thermoelectric generator can increase the fuel efficiency of the car. Thermoelectric generators have been investigated to replace the alternators in cars, demonstrating a 3.45% reduction in fuel consumption. Projections for future improvements are up to a 10% increase in mileage for hybrid vehicles. 
It has been stated that the potential energy savings could be higher for gasoline engines than for diesel engines. For more details, see the article: Automotive thermoelectric generator. For aircraft, engine nozzles have been identified as the best place to recover energy from, but heat from engine bearings and the temperature gradient existing in the aircraft skin have also been proposed. Solar cells use only the high-frequency part of the radiation, while the low-frequency heat energy is wasted. Several patents about the use of thermoelectric devices in parallel or cascade configuration with solar cells have been filed. The idea is to increase the efficiency of the combined solar/thermoelectric system to convert solar radiation into useful electricity. Thermoelectric generators are primarily used as remote and off-grid power generators for unmanned sites. They are the most reliable power generator in such situations, as they do not have moving parts (and are thus virtually maintenance-free), work day and night, perform under all weather conditions and can work without battery backup. Although solar photovoltaic systems are also implemented in remote sites, solar PV may not be a suitable solution where solar radiation is low, i.e. areas at higher latitudes with snow or no sunshine, areas with much cloud or tree canopy cover, dusty deserts, forests, etc. Thermoelectric generators are commonly used on gas pipelines, for example, for cathodic protection, radio communication, and telemetry. On gas pipelines with power consumption of up to 5 kW, thermal generators are preferable to other power sources. The manufacturers of generators for gas pipelines are Global Power Technologies (formerly Global Thermoelectric) (Calgary, Canada) and TELGEN (Russia). Microprocessors generate waste heat. Researchers have considered whether some of that energy could be recycled. (However, see below for problems that can arise.) Thermoelectric generators have also been investigated as standalone solar-thermal cells; thermoelectric generators have been directly integrated into a solar thermal cell with an efficiency of 4.6%. The Maritime Applied Physics Corporation in Baltimore, Maryland is developing a thermoelectric generator to produce electric power on the deep-ocean offshore seabed using the temperature difference between cold seawater and hot fluids released by hydrothermal vents, hot seeps, or from drilled geothermal wells. A high-reliability source of seafloor electric power is needed for ocean observatories and sensors used in the geological, environmental, and ocean sciences, by seafloor mineral and energy resource developers, and by the military. Recent studies have found that deep-sea thermoelectric generators for large-scale energy plants are also economically viable. Ann Makosinski from British Columbia, Canada has developed several devices using Peltier tiles to harvest heat (from a human hand, the forehead, and a hot beverage) that she claims generate enough electricity to power an LED light or charge a mobile device, although the inventor admits that the brightness of the LED light is not competitive with those on the market. Thermoelectric generators are used in stove fans. They are put on top of a wood- or coal-burning stove. The TEG is sandwiched between two heat sinks, and the difference in temperature will power a slow-moving fan that helps circulate the stove's heat into the room. 
Practical limitations Besides low efficiency and relatively high cost, practical problems exist in using thermoelectric devices in certain types of applications, resulting from a relatively high electrical output resistance, which increases self-heating, and a relatively low thermal conductivity, which makes them unsuitable for applications where heat removal is critical, as with heat removal from an electrical device such as a microprocessor. High generator output resistance: To get voltage output levels in the range required by digital electrical devices, a common approach is to place many thermoelectric elements in series within a generator module. The elements' voltages add, but so do their output resistances. The maximum power transfer theorem dictates that maximum power is delivered to a load when the source and load resistances are matched. For low-impedance loads near zero ohms, as the generator resistance rises, the power delivered to the load decreases. To lower the output resistance, some commercial devices place more individual elements in parallel and fewer in series and employ a boost regulator to raise the voltage to the voltage needed by the load. Low thermal conductivity: Because a very high thermal conductivity is required to transport thermal energy away from a heat source such as a digital microprocessor, the low thermal conductivity of thermoelectric generators makes them unsuitable to recover the heat. Cold-side heat removal with air: In air-cooled thermoelectric applications, such as when harvesting thermal energy from a motor vehicle's crankcase, the large amount of thermal energy that must be dissipated into ambient air presents a significant challenge. As a thermoelectric generator's cool-side temperature rises, the device's differential working temperature decreases. As the temperature rises, the device's electrical resistance increases, causing greater parasitic generator self-heating. In motor vehicle applications a supplementary radiator is sometimes used for improved heat removal, though the use of an electric water pump to circulate a coolant adds parasitic loss to total generator output power. Water cooling the thermoelectric generator's cold side, as when generating thermoelectric power from the hot crankcase of an inboard boat motor, would not suffer from this disadvantage. Water is a far easier coolant to use effectively than air. Future market While TEG technology has been used in military and aerospace applications for decades, new TE materials and systems are being developed to generate power using low- or high-temperature waste heat, and that could provide a significant opportunity in the near future. These systems can also be scalable to any size and have lower operation and maintenance cost. The global market for thermoelectric generators was estimated to be US$320 million in 2015 and US$472 million in 2021; it is projected to reach US$1.44 billion by 2030 with a CAGR of 11.8%. Today, North America captures 66% of the market share and it will continue to be the biggest market in the near future. However, Asia-Pacific and European countries are projected to grow at relatively higher rates. A study found that the Asia-Pacific market would grow at a compound annual growth rate (CAGR) of 18.3% in the period from 2015 to 2020 due to the high demand for thermoelectric generators by the automotive industries to increase overall fuel efficiency, as well as the growing industrialization in the region. 
Small-scale thermoelectric generators are also in the early stages of investigation in wearable technologies to reduce or replace charging and boost charge duration. Recent studies have focused on the novel development of a flexible inorganic thermoelectric, silver selenide, on a nylon substrate. Thermoelectrics represent a particular synergy with wearables by harvesting energy directly from the human body, creating a self-powered device. One project used n-type silver selenide on a nylon membrane. Silver selenide is a narrow-bandgap semiconductor with high electrical conductivity and low thermal conductivity, making it well suited for thermoelectric applications. The low-power or "sub-watt" TEG market (i.e. generating up to 1 watt peak) is a growing part of the TEG market, capitalizing on the latest technologies. Main applications are sensors, low-power applications and, more globally, Internet of things applications. A specialized market research company indicated that 100,000 units were shipped in 2014 and expects 9 million units per year by 2020. See also Bismuth telluride Electrical generator Energy harvesting devices: Thermoelectrics Gentherm Incorporated Mária Telkes Stirling engine Thermal power station Thermoelectric battery Thermionic converter Thermoelectric cooling or Peltier cooler Thermoelectric effect Thermoelectric materials References External links Small Thermoelectric Generators by G. Jeffrey Snyder Kanellos, M. (2008, November 24). Tapping America's Secret Power Source. Retrieved from Greentech Media, October 30, 2009. Web site: Tapping America's Secret Power Source LT Journal October 2010: Ultralow Voltage Energy Harvester Uses Thermoelectric Generator for Battery-Free Wireless Sensors DIY: How to Build a Thermoelectric Energy Generator With a Cheap Peltier Unit Gentherm Inc. This device harnesses the cold night sky to generate electricity in the dark Electrical generators Energy harvesting Thermoelectricity
Thermoelectric generator
[ "Physics", "Technology" ]
5,398
[ "Physical systems", "Electrical generators", "Machines" ]
9,210,048
https://en.wikipedia.org/wiki/Richard%20Crandall
Richard E. Crandall (December 29, 1947 – December 20, 2012) was an American physicist and computer scientist who made contributions to computational number theory. Background Crandall was born in Ann Arbor, Michigan, and spent two years at Caltech before transferring to Reed College in Portland, Oregon, where he graduated in physics and wrote his undergraduate thesis on randomness. He earned his Ph.D. in theoretical physics from the Massachusetts Institute of Technology. Career In 1978, he became a physics professor at Reed College, where he taught courses in experimental physics and computational physics for many years, ultimately becoming Vollum Professor of Science and director of the Center for Advanced Computation. He was also, at various times, Chief Scientist at NeXT, Inc., Chief Cryptographer and Distinguished Scientist at Apple, and head of Apple's Advanced Computation Group. He was a pioneer in experimental mathematics. He developed the irrational base discrete weighted transform, a method used in finding very large primes. He wrote several books and many scholarly papers on scientific programming and computation. Crandall was awarded numerous patents for his work in the field of cryptography. He also wrote a poker program that could bluff. He owned and operated PSI Press, an online publishing company. Personal life Crandall was part Cherokee and proud of his Native heritage. He fronted a band called the Chameleons in 1981. He was working on an intellectual biography of Steve Jobs when he collapsed at his home in Portland, Oregon, from acute leukemia. He died 10 days later, on December 20, 2012, at the age of 64. Books Pascal Applications for the Sciences. John Wiley & Sons, New York 1983. with M. M. Colgrove: Scientific Programming with Macintosh Pascal. John Wiley & Sons, New York 1986. Mathematica for the Sciences. Addison-Wesley, Reading, Mass., 1991. Projects in Scientific Computation. Springer 1994. Topics in Advanced Scientific Computation. Springer 1996. with M. Levich: A Network Orange. Springer 1997. with C. Pomerance: Prime Numbers: A Computational Perspective. Springer 2001. References External links Professor Richard E. Crandall; many of Crandall's papers can be found here Nicholas Wheeler, Remembering Prof. Crandall Stephen Wolfram, Remembering Richard Crandall (1947-2012) David Bailey and Jonathan Borwein, Mathematician/physicist/inventor Richard Crandall dies at 64 David Broadhurst, A prime puzzle in honor of Richard Crandall 1947 births 2012 deaths Scientists from Ann Arbor, Michigan Scientists from Portland, Oregon 20th-century American inventors 21st-century American inventors American atheists American computer scientists Apple Inc. employees Computational physicists Deaths from leukemia in Oregon Deaths from acute leukemia Reed College faculty Reed College alumni
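Crandall's irrational-base discrete weighted transform speeds up the enormous multiplications inside primality tests for candidates such as Mersenne numbers; the transform itself is too involved for a short sketch, but the Lucas–Lehmer test it accelerates is compact. Below is a plain Python version using built-in big-integer arithmetic rather than Crandall's FFT-based method.

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number M_p = 2^p - 1
    (p must itself be an odd prime). Returns True if M_p is prime."""
    m = (1 << p) - 1           # M_p = 2^p - 1
    s = 4
    for _ in range(p - 2):     # iterate s -> s^2 - 2 (mod M_p), p - 2 times
        s = (s * s - 2) % m    # this squaring is what IBDWT-based code makes fast for huge p
    return s == 0

# Small checks: M_7 = 127 is prime, M_11 = 2047 = 23 * 89 is not.
print(lucas_lehmer(7), lucas_lehmer(11))  # True False
```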
Richard Crandall
[ "Physics" ]
561
[ "Computational physicists", "Computational physics" ]
9,210,114
https://en.wikipedia.org/wiki/Rhind%20Mathematical%20Papyrus
The Rhind Mathematical Papyrus (RMP; also designated as papyrus British Museum 10057, pBM 10058, and Brooklyn Museum 37.1784Ea-b) is one of the best known examples of ancient Egyptian mathematics. It is one of two well-known mathematical papyri, along with the Moscow Mathematical Papyrus. The Rhind Papyrus is the larger, but younger, of the two. In the papyrus' opening paragraphs Ahmes presents the papyrus as giving "Accurate reckoning for inquiring into things, and the knowledge of all things, mysteries ... all secrets". He continues: This book was copied in regnal year 33, month 4 of Akhet, under the majesty of the King of Upper and Lower Egypt, Awserre, given life, from an ancient copy made in the time of the King of Upper and Lower Egypt Nimaatre. The scribe Ahmose writes this copy. Several books and articles about the Rhind Mathematical Papyrus have been published, and a handful of these stand out. The Rhind Papyrus was published in 1923 by the English Egyptologist T. Eric Peet and contains a discussion of the text that followed Francis Llewellyn Griffith's Book I, II and III outline. Chace published a compendium in 1927–29 which included photographs of the text. A more recent overview of the Rhind Papyrus was published in 1987 by Robins and Shute. History The Rhind Mathematical Papyrus dates to the Second Intermediate Period of Egypt. It was copied by the scribe Ahmes (i.e., Ahmose; Ahmes is an older transcription favoured by historians of mathematics) from a now-lost text from the reign of the 12th dynasty king Amenemhat III. It dates to around 1550 BC. The document is dated to Year 33 of the Hyksos king Apophis and also contains a separate later historical note on its verso likely dating from "Year 11" of his successor, Khamudi. Alexander Henry Rhind, a Scottish antiquarian, purchased two parts of the papyrus in 1858 in Luxor, Egypt; it was stated to have been found in "one of the small buildings near the Ramesseum", near Luxor. The British Museum, where the majority of the papyrus is now kept, acquired it in 1865 along with the Egyptian Mathematical Leather Roll, also owned by Henry Rhind. Fragments of the text were independently purchased in Luxor by American Egyptologist Edwin Smith in the mid 1860s, were donated by his daughter in 1906 to the New York Historical Society, and are now held by the Brooklyn Museum. A central section is missing. The papyrus began to be transliterated and mathematically translated in the late 19th century. The mathematical-translation aspect remains incomplete in several respects. Books Book I – Arithmetic and Algebra The first part of the Rhind papyrus consists of reference tables and a collection of 21 arithmetic and 20 algebraic problems. The problems start out with simple fractional expressions, followed by completion (sekem) problems and more involved linear equations (aha problems). The first part of the papyrus is taken up by the 2/n table. The fractions 2/n for odd n ranging from 3 to 101 are expressed as sums of unit fractions. For example, 2/5 = 1/3 + 1/15. The decomposition of 2/n into unit fractions is never more than 4 terms long, as in for example: 2/101 = 1/101 + 1/202 + 1/303 + 1/606. This table is followed by a much smaller table of fractional expressions for the numbers 1 through 9 divided by 10. 
For instance the division of 7 by 10 is recorded as: 7 divided by 10 yields 2/3 + 1/30. After these two tables, the papyrus records 91 problems altogether, which have been designated by moderns as problems (or numbers) 1–87, including four other items which have been designated as problems 7B, 59B, 61B and 82B. Problems 1–7, 7B and 8–40 are concerned with arithmetic and elementary algebra. Problems 1–6 compute divisions of a certain number of loaves of bread by 10 men and record the outcome in unit fractions. Problems 7–20 show how to multiply the expressions 1 + 1/2 + 1/4 = 7/4, and 1 + 2/3 + 1/3 = 2 by different fractions. Problems 21–23 are problems in completion, which in modern notation are simply subtraction problems. Problems 24–34 are aha problems; these are linear equations. Problem 32 for instance corresponds (in modern notation) to solving x + 1/3 x + 1/4 x = 2 for x. Problems 35–38 involve divisions of the heqat, which is an ancient Egyptian unit of volume. Beginning at this point, assorted units of measurement become much more important throughout the remainder of the papyrus, and indeed a major consideration throughout the rest of the papyrus is dimensional analysis. Problems 39 and 40 compute the division of loaves and use arithmetic progressions. Book II – Geometry The second part of the Rhind papyrus, being problems 41–59, 59B and 60, consists of geometry problems. Peet referred to these problems as "mensuration problems". Volumes Problems 41–46 show how to find the volume of both cylindrical and rectangular granaries. In problem 41 Ahmes computes the volume of a cylindrical granary. Given the diameter d and the height h, the volume V is given by: V = [(1 − 1/9) d]² h. In modern mathematical notation (and using d = 2r) this gives V = (64/81) d² h = (256/81) r² h. The fractional term 256/81 approximates the value of π as being 3.1605..., an error of less than one percent; a short script below makes the approximation explicit. Problem 47 is a table with fractional equalities which represent the ten situations where the physical volume quantity of "100 quadruple heqats" is divided by each of the multiples of ten, from ten through one hundred. The quotients are expressed in terms of Horus eye fractions, sometimes also using a much smaller unit of volume known as a "quadruple ro". The quadruple heqat and the quadruple ro are units of volume derived from the simpler heqat and ro, such that these four units of volume satisfy the following relationships: 1 quadruple heqat = 4 heqat = 1280 ro = 320 quadruple ro. Thus, 100/10 quadruple heqat = 10 quadruple heqat 100/20 quadruple heqat = 5 quadruple heqat 100/30 quadruple heqat = (3 + 1/4 + 1/16 + 1/64) quadruple heqat + (1 + 2/3) quadruple ro 100/40 quadruple heqat = (2 + 1/2) quadruple heqat 100/50 quadruple heqat = 2 quadruple heqat 100/60 quadruple heqat = (1 + 1/2 + 1/8 + 1/32) quadruple heqat + (3 + 1/3) quadruple ro 100/70 quadruple heqat = (1 + 1/4 + 1/8 + 1/32 + 1/64) quadruple heqat + (2 + 1/14 + 1/21 + 1/42) quadruple ro 100/80 quadruple heqat = (1 + 1/4) quadruple heqat 100/90 quadruple heqat = (1 + 1/16 + 1/32 + 1/64) quadruple heqat + (1/2 + 1/18) quadruple ro 100/100 quadruple heqat = 1 quadruple heqat Areas Problems 48–55 show how to compute an assortment of areas. Problem 48 is notable in that it succinctly computes the area of a circle by approximating π. Specifically, problem 48 explicitly reinforces the convention (used throughout the geometry section) that "a circle's area stands to that of its circumscribing square in the ratio 64/81." 
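To check the arithmetic of problem 41, here is a minimal sketch (Python) comparing the scribe's rule with the modern cylinder formula; problem 41 uses a diameter of 9 cubits and a height of 10 cubits.

```python
import math

def granary_volume_egyptian(d: float, h: float) -> float:
    """Rhind problem 41 rule: V = ((1 - 1/9) * d)^2 * h = (8d/9)^2 * h."""
    return ((8.0 / 9.0) * d) ** 2 * h

def granary_volume_modern(d: float, h: float) -> float:
    """Modern formula for a cylinder: V = pi * r^2 * h with r = d/2."""
    return math.pi * (d / 2.0) ** 2 * h

d, h = 9.0, 10.0  # cubits, the values of problem 41
v_e = granary_volume_egyptian(d, h)   # 640 cubic cubits
v_m = granary_volume_modern(d, h)     # ~636.2 cubic cubits
print(f"Egyptian: {v_e:.1f}, modern: {v_m:.1f}, implied pi = {256 / 81:.4f}")
print(f"relative error = {abs(v_e - v_m) / v_m:.2%}")  # ~0.60%, under one percent
```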
Equivalently, the papyrus approximates π as 256/81, as was already noted above in the explanation of problem 41. Other problems show how to find the area of rectangles, triangles and trapezoids. Pyramids The final six problems are related to the slopes of pyramids. A seked problem is reported as follows: "If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked?" The solution to the problem is given as the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity found for the seked is the cotangent of the angle between the base of the pyramid and its face. Book III – Miscellany The third part of the Rhind papyrus consists of the remainder of the 91 problems, being 61, 61B, 62–82, 82B, 83–84, and "numbers" 85–87, which are items that are not mathematical in nature. This final section contains more complicated tables of data (which frequently involve Horus eye fractions), several pefsu problems which are elementary algebraic problems concerning food preparation, and even an amusing problem (79) which is suggestive of geometric progressions, geometric series, and certain later problems and riddles in history. Problem 79 explicitly cites "seven houses, 49 cats, 343 mice, 2401 ears of spelt, 16807 hekats." In particular problem 79 concerns a situation in which 7 houses each contain seven cats, which all eat seven mice, each of which would have eaten seven ears of grain, each of which would have produced seven measures of grain. The third part of the Rhind papyrus is therefore a kind of miscellany, building on what has already been presented. Problem 61 is concerned with multiplications of fractions. Problem 61B, meanwhile, gives a general expression for computing 2/3 of 1/n, where n is odd. In modern notation the formula given is 2/(3n) = 1/(2n) + 1/(6n). The technique given in 61B is closely related to the derivation of the 2/n table. Problems 62–68 are general problems of an algebraic nature. Problems 69–78 are all pefsu problems in some form or another. They involve computations regarding the strength of bread and beer, with respect to certain raw materials used in their production. Problem 79 sums five terms in a geometric progression. Its language is strongly suggestive of the more modern riddle and nursery rhyme "As I was going to St Ives". Problems 80 and 81 compute Horus eye fractions of hinu (or heqats). The last four mathematical items, problems 82, 82B and 83–84, compute the amount of feed necessary for various animals, such as fowl and oxen. However, these problems, especially 84, are plagued by pervasive ambiguity, confusion, and simple inaccuracy. The final three items on the Rhind papyrus are designated as "numbers" 85–87, as opposed to "problems", and they are scattered widely across the papyrus's back side, or verso. They are, respectively: a small phrase which ends the document (and has a few possibilities for translation, given below), a piece of scrap paper unrelated to the body of the document, used to hold it together (yet containing words and Egyptian fractions which are by now familiar to a reader of the document), and a small historical note which is thought to have been written some time after the completion of the body of the papyrus's writing. This note is thought to describe events during the "Hyksos domination", a period of external interruption in ancient Egyptian society which is closely associated with its Second Intermediate Period. 
With these non-mathematical yet historically and philologically intriguing errata, the papyrus's writing comes to an end. Unit concordance Much of the Rhind Papyrus's material is concerned with ancient Egyptian units of measurement and especially the dimensional analysis used to convert between them. A concordance of units of measurement used in the papyrus is given in the image. Content This table summarizes the content of the Rhind Papyrus by means of a concise modern paraphrase. It is based upon the two-volume exposition of the papyrus which was published by Arnold Buffum Chace in 1927 and 1929. In general, the papyrus consists of four sections: a title page, the 2/n table, a small "1–9/10 table", and 91 problems, or "numbers". The latter are numbered from 1 through 87 and include four mathematical items which have been designated by moderns as problems 7B, 59B, 61B, and 82B. Numbers 85–87, meanwhile, are not mathematical items forming part of the body of the document, but instead are respectively: a small phrase ending the document, a piece of "scrap-paper" used to hold the document together (having already contained unrelated writing), and a historical note which is thought to describe a time period shortly after the completion of the body of the papyrus. These three latter items are written on disparate areas of the papyrus's verso (back side), far away from the mathematical content. Chace therefore differentiates them by styling them as numbers as opposed to problems, like the other 88 numbered items. See also List of ancient Egyptian papyri Akhmim wooden tablet Ancient Egyptian units of measurement As I was going to St. Ives Berlin Papyrus 6619 History of mathematics Lahun Mathematical Papyri Bibliography References External links British Museum webpage on the first section of the Papyrus British Museum webpage on the second section of the Papyrus Williams, Scott W. Mathematicians of the African Diaspora, containing a page on Egyptian Mathematics Papyri. 16th-century BC literature 1858 archaeological discoveries Egyptian mathematics Egyptian fractions Papyri from ancient Egypt Papyrus Mathematics manuscripts Pi Hyksos Ancient Egyptian objects in the British Museum Luxor Amenemhat III Mathematics literature
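As a modern companion to the 2/n table of Book I: decompositions into distinct unit fractions can be produced by Fibonacci's greedy algorithm, sketched below in Python. This is an illustrative modern method only; the scribe's table entries were chosen by other (still-debated) criteria, so the greedy output will not always match the papyrus.

```python
from fractions import Fraction
import math

def egyptian_fractions(frac: Fraction) -> list:
    """Greedy (Fibonacci) decomposition of a fraction into distinct unit
    fractions: repeatedly take the largest unit fraction <= the remainder."""
    parts = []
    while frac > 0:
        denom = math.ceil(1 / frac)  # exact: Fraction supports ceil()
        parts.append(Fraction(1, denom))
        frac -= Fraction(1, denom)
    return parts

print(egyptian_fractions(Fraction(2, 5)))   # 1/3 + 1/15 -- matches the papyrus table entry
print(egyptian_fractions(Fraction(7, 10)))  # 1/2 + 1/5  -- the scribe instead recorded 2/3 + 1/30
```

The second example shows why the table is interesting: the greedy answer for 7/10 is perfectly valid, yet the papyrus records a different decomposition, evidence that the Egyptians applied selection criteria beyond mere correctness.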
Rhind Mathematical Papyrus
[ "Mathematics" ]
2,938
[ "Pi" ]
9,210,345
https://en.wikipedia.org/wiki/Gaussian%20adaptation
Gaussian adaptation (GA), also called normal or natural adaptation (NA), is an evolutionary algorithm designed for the maximization of manufacturing yield due to statistical deviation of component values of signal processing systems. In short, GA is a stochastic adaptive process where a number of samples of an n-dimensional vector x [xT = (x1, x2, ..., xn)] are taken from a multivariate Gaussian distribution, N(m, M), having mean m and moment matrix M. The samples are tested for fail or pass. The first- and second-order moments of the Gaussian restricted to the pass samples are m* and M*. The outcome of x as a pass sample is determined by a function s(x), 0 < s(x) < q ≤ 1, such that s(x) is the probability that x will be selected as a pass sample. The average probability of finding pass samples (yield) is P(m) = ∫ s(x) N(x − m) dx, where the integral is taken over all x. Then the theorem of GA states: For any s(x) and for any value of P < q, there always exists a Gaussian p.d.f. [probability density function] that is adapted for maximum dispersion. The necessary conditions for a local optimum are m = m* and M proportional to M*. The dual problem is also solved: P is maximized while keeping the dispersion constant (Kjellström, 1991). Proofs of the theorem may be found in the papers by Kjellström, 1970, and Kjellström & Taxén, 1981. Since dispersion is defined as the exponential of entropy/disorder/average information, it immediately follows that the theorem is valid also for those concepts. Altogether, this means that Gaussian adaptation may carry out a simultaneous maximization of yield and average information (without any need for the yield or the average information to be defined as criterion functions). The theorem is valid for all regions of acceptability and all Gaussian distributions. It may be used by cyclic repetition of random variation and selection (as in natural evolution). In every cycle a sufficiently large number of Gaussian distributed points are sampled and tested for membership in the region of acceptability. The centre of gravity of the Gaussian, m, is then moved to the centre of gravity of the approved (selected) points, m*. Thus, the process converges to a state of equilibrium fulfilling the theorem. A solution is always approximate because the centre of gravity is always determined for a limited number of points. It was used for the first time in 1969 as a pure optimization algorithm making the regions of acceptability smaller and smaller (in analogy to simulated annealing, Kirkpatrick 1983). Since 1970 it has been used for both ordinary optimization and yield maximization. Natural evolution and Gaussian adaptation It has also been compared to the natural evolution of populations of living organisms. In this case s(x) is the probability that the individual having an array x of phenotypes will survive by giving offspring to the next generation; a definition of individual fitness given by Hartl 1981. The yield, P, is replaced by the mean fitness determined as a mean over the set of individuals in a large population. Phenotypes are often Gaussian distributed in a large population, and a necessary condition for natural evolution to be able to fulfill the theorem of Gaussian adaptation, with respect to all Gaussian quantitative characters, is that it may push the centre of gravity of the Gaussian to the centre of gravity of the selected individuals. This may be accomplished by the Hardy–Weinberg law. 
This is possible because the theorem of Gaussian adaptation is valid for any region of acceptability independent of its structure (Kjellström, 1996). In this case the rules of genetic variation such as crossover, inversion, transposition, etc. may be seen as random number generators for the phenotypes. So, in this sense Gaussian adaptation may be seen as a genetic algorithm. How to climb a mountain Mean fitness may be calculated provided that the distribution of parameters and the structure of the landscape are known. The real landscape is not known, but the figure below shows a fictitious profile (blue) of a landscape along a line (x) in a space spanned by such parameters. The red curve is the mean based on the red bell curve at the bottom of the figure. It is obtained by letting the bell curve slide along the x-axis, calculating the mean at every location. As can be seen, small peaks and pits are smoothed out. Thus, if evolution is started at A with a relatively small variance (the red bell curve), then climbing will take place on the red curve. The process may get stuck for millions of years at B or C, as long as the hollows to the right of these points remain and the mutation rate is too small. If the mutation rate is sufficiently high, the disorder or variance may increase and the parameter(s) may become distributed like the green bell curve. Then the climbing will take place on the green curve, which is even more smoothed out. Because the hollows to the right of B and C have now disappeared, the process may continue up to the peaks at D. But of course the landscape puts a limit on the disorder or variability. Besides, depending on the landscape, the process may become very jerky, and if the ratio between the time spent by the process at a local peak and the time of transition to the next peak is very high, it may well look like a punctuated equilibrium as suggested by Gould (see Ridley). Computer simulation of Gaussian adaptation Thus far the theory only considers mean values of continuous distributions corresponding to an infinite number of individuals. In reality, however, the number of individuals is always limited, which gives rise to an uncertainty in the estimation of m and M (the moment matrix of the Gaussian). And this may also affect the efficiency of the process. Unfortunately very little is known about this, at least theoretically. The implementation of normal adaptation on a computer is a fairly simple task. The adaptation of m may be done by one sample (individual) at a time, for example m(i + 1) = (1 − a) m(i) + a x, where x is a pass sample and a < 1 is a suitable constant, chosen so that the inverse of a represents the number of individuals in the population. M may in principle be updated after every step y leading to a feasible point x = m + y according to: M(i + 1) = (1 − 2b) M(i) + 2b yy^T, where y^T is the transpose of y and b << 1 is another suitable constant. In order to guarantee a suitable increase of average information, y should be normally distributed with moment matrix μ^2 M, where the scalar μ > 1 is used to increase average information (information entropy, disorder, diversity) at a suitable rate. But M will never be used in the calculations. Instead we use the matrix W, defined by W W^T = M. Thus, we have y = Wg, where g is normally distributed with the moment matrix μ^2 U, and U is the unit matrix. 
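The following is a minimal Python sketch of these update rules (using the simplified rank-one update of W restated at the start of the next paragraph), assuming a toy two-dimensional disc-shaped region of acceptability; the constants a, b, μ and the pass test are illustrative choices, not taken from any published implementation.

import numpy as np

rng = np.random.default_rng(0)

def is_pass(x):
    # Hypothetical region of acceptability: a disc of radius 1 centred at (1, 1).
    return np.linalg.norm(x - np.array([1.0, 1.0])) < 1.0

n = 2
m = np.zeros(n)              # centre of gravity of the Gaussian
W = 0.5 * np.eye(n)          # factor of the moment matrix, M = W W^T
a, b, mu = 0.05, 0.01, 1.05  # smoothing constants and information-increase factor

hits, trials = 0, 20000
for _ in range(trials):
    g = mu * rng.standard_normal(n)   # g ~ N(0, mu^2 U)
    y = W @ g                         # step with moment matrix mu^2 M
    x = m + y
    if is_pass(x):                    # test for membership in the region
        hits += 1
        m = (1 - a) * m + a * x                # m(i+1) = (1 - a) m(i) + a x
        W = (1 - b) * W + b * np.outer(y, g)   # W(i+1) = (1 - b) W(i) + b y g^T

print("estimated yield:", hits / trials, "centre of gravity:", m)

With these settings the centre of gravity drifts toward the centre of the disc while the yield estimate rises, which is the equilibrium behaviour the theorem describes.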
W and W^T may be updated by the formulas W = (1 − b)W + b yg^T and W^T = (1 − b)W^T + b gy^T, because multiplication gives M = (1 − 2b)M + 2b yy^T, where terms including b^2 have been neglected. Thus, M will be indirectly adapted with good approximation. In practice it will suffice to update W only: W(i + 1) = (1 − b)W(i) + b yg^T. This is the formula used in a simple 2-dimensional model of a brain satisfying the Hebbian rule of associative learning; see the next section (Kjellström, 1996 and 1999). The figure below illustrates the effect of increased average information in a Gaussian p.d.f. used to climb a mountain crest (the two lines represent contour lines). Both the red and green clusters have equal mean fitness, about 65%, but the green cluster has a much higher average information, making the green process much more efficient. The effect of this adaptation is not very salient in a 2-dimensional case, but in a high-dimensional case the efficiency of the search process may be increased by many orders of magnitude. The evolution in the brain In the brain the evolution of DNA-messages is supposed to be replaced by an evolution of signal patterns and the phenotypic landscape is replaced by a mental landscape, the complexity of which will hardly be second to the former. The metaphor with the mental landscape is based on the assumption that certain signal patterns give rise to a better well-being or performance. For instance, the control of a group of muscles leads to a better pronunciation of a word or performance of a piece of music. In this simple model it is assumed that the brain consists of interconnected components that may add, multiply and delay signal values. A nerve cell kernel may add signal values, a synapse may multiply with a constant, and an axon may delay values. This is a basis of the theory of digital filters and neural networks consisting of components that may add, multiply and delay signal values, and also of many brain models, Levine 1991. In the figure below the brain stem is supposed to deliver Gaussian distributed signal patterns. This may be possible since certain neurons fire at random (Kandel et al.). The stem also constitutes a disordered structure surrounded by more ordered shells (Bergström, 1969), and according to the central limit theorem the sum of signals from many neurons may be Gaussian distributed. The triangular boxes represent synapses and the boxes with the + sign are cell kernels. In the cortex signals are supposed to be tested for feasibility. When a signal is accepted the contact areas in the synapses are updated according to the formulas below, in agreement with the Hebbian theory. The figure shows a 2-dimensional computer simulation of Gaussian adaptation according to the last formula in the preceding section. m and W are updated according to: m1 = 0.9 m1 + 0.1 x1; m2 = 0.9 m2 + 0.1 x2; w11 = 0.9 w11 + 0.1 y1g1; w12 = 0.9 w12 + 0.1 y1g2; w21 = 0.9 w21 + 0.1 y2g1; w22 = 0.9 w22 + 0.1 y2g2; As can be seen, this is very much like a small brain ruled by the theory of Hebbian learning (Kjellström, 1996, 1999 and 2002). Gaussian adaptation and free will Gaussian adaptation as an evolutionary model of the brain obeying the Hebbian theory of associative learning offers an alternative view of free will, due to the ability of the process to maximize the mean fitness of signal patterns in the brain by climbing a mental landscape in analogy with phenotypic evolution. Such a random process gives us much freedom of choice, but hardly any will. 
An illusion of will may, however, emanate from the ability of the process to maximize mean fitness, making the process goal seeking. That is, it prefers higher peaks in the landscape over lower ones, and better alternatives over worse. In this way an illusory will may appear. A similar view has been given by Zohar 1990. See also Kjellström 1999. A theorem of efficiency for random search The efficiency of Gaussian adaptation relies on the theory of information due to Claude E. Shannon (see information content). When an event occurs with probability P, the information −log(P) may be gained. For instance, if the mean fitness is P, the information gained for each individual selected for survival will be −log(P) on average, and the work/time needed to get the information is proportional to 1/P. Thus, if efficiency, E, is defined as information divided by the work/time needed to get it, we have: E = −P log(P). This function attains its maximum when P = 1/e ≈ 0.37. The same result has been obtained by Gaines with a different method. E = 0 if P = 0, for a process with infinite mutation rate, and if P = 1, for a process with mutation rate = 0 (provided that the process is alive). This measure of efficiency is valid for a large class of random search processes provided that certain conditions are at hand. 1. The search should be statistically independent and equally efficient in different parameter directions. This condition may be approximately fulfilled when the moment matrix of the Gaussian has been adapted for maximum average information to some region of acceptability, because linear transformations of the whole process do not affect efficiency. 2. All individuals have equal cost and the derivative at P = 1 is < 0. Then, the following theorem may be proved: All measures of efficiency that satisfy the conditions above are asymptotically proportional to −P log(P/q) when the number of dimensions increases, and are maximized by P = q exp(−1) (Kjellström, 1996 and 1999). The figure above shows a possible efficiency function for a random search process such as Gaussian adaptation. To the left the process is most chaotic when P = 0, while there is perfect order to the right where P = 1. In an example by Rechenberg, 1971, 1973, a random walk is pushed through a corridor maximizing the parameter x1. In this case the region of acceptability is defined as a (n − 1)-dimensional interval in the parameters x2, x3, ..., xn, but an x1-value below the last accepted will never be accepted. Since P can never exceed 0.5 in this case, the maximum speed towards higher x1-values is reached for P = 0.5/e ≈ 0.18, in agreement with the findings of Rechenberg. A point of view that also may be of interest in this context is that no definition of information (other than that sampled points inside some region of acceptability give information about the extension of the region) is needed for the proof of the theorem. Then, because the formula may be interpreted as information divided by the work needed to get the information, this is also an indication that −log(P) is a good candidate for being a measure of information. The Stauffer and Grimson algorithm Gaussian adaptation has also been used for other purposes, for instance shadow removal by "The Stauffer–Grimson algorithm", which is equivalent to Gaussian adaptation as used in the section "Computer simulation of Gaussian adaptation" above. In both cases the maximum likelihood method is used for estimation of mean values by adaptation one sample at a time. 
But there are differences. In the Stauffer–Grimson case the information is not used for the control of a random number generator for centering, maximization of mean fitness, average information or manufacturing yield. The adaptation of the moment matrix also differs considerably from that used in "the evolution in the brain" above. See also Entropy in thermodynamics and information theory Fisher's fundamental theorem of natural selection Free will Genetic algorithm Hebbian learning Information content Simulated annealing Stochastic optimization Covariance matrix adaptation evolution strategy (CMA-ES) Unit of selection References Bergström, R. M. An Entropy Model of the Developing Brain. Developmental Psychobiology, 2(3): 139–152, 1969. Brooks, D. R. & Wiley, E. O. Evolution as Entropy, Towards a Unified Theory of Biology. The University of Chicago Press, 1986. Brooks, D. R. Evolution in the Information Age: Rediscovering the Nature of the Organism. Semiosis, Evolution, Energy, Development, Volume 1, Number 1, March 2001. Gaines, Brian R. Knowledge Management in Societies of Intelligent Adaptive Agents. Journal of Intelligent Information Systems 9, 277–298 (1997). Hartl, D. L. A Primer of Population Genetics. Sinauer, Sunderland, Massachusetts, 1981. Hamilton, WD. 1963. The evolution of altruistic behavior. American Naturalist 97:354–356. Kandel, E. R., Schwartz, J. H., Jessel, T. M. Essentials of Neural Science and Behavior. Prentice Hall International, London, 1995. S. Kirkpatrick and C. D. Gelatt and M. P. Vecchi, Optimization by Simulated Annealing, Science, Vol 220, Number 4598, pages 671–680, 1983. Kjellström, G. Network Optimization by Random Variation of Component Values. Ericsson Technics, vol. 25, no. 3, pp. 133–151, 1969. Kjellström, G. Optimization of Electrical Networks with Respect to Tolerance Costs. Ericsson Technics, no. 3, pp. 157–175, 1970. Kjellström, G. & Taxén, L. Stochastic Optimization in System Design. IEEE Trans. on Circ. and Syst., vol. CAS-28, no. 7, July 1981. Kjellström, G., Taxén, L. and Lindberg, P. O. Discrete Optimization of Digital Filters Using Gaussian Adaptation and Quadratic Function Minimization. IEEE Trans. on Circ. and Syst., vol. CAS-34, no. 10, October 1987. Kjellström, G. On the Efficiency of Gaussian Adaptation. Journal of Optimization Theory and Applications, vol. 71, no. 3, December 1991. Kjellström, G. & Taxén, L. Gaussian Adaptation, an evolution-based efficient global optimizer; Computational and Applied Mathematics, in C. Brezinski & U. Kulish (Editors), Elsevier Science Publishers B. V., pp. 267–276, 1992. Kjellström, G. Evolution as a statistical optimization algorithm. Evolutionary Theory 11:105–117 (January, 1996). Kjellström, G. The evolution in the brain. Applied Mathematics and Computation, 98(2–3):293–300, February, 1999. Kjellström, G. Evolution in a nutshell and some consequences concerning valuations. EVOLVE, Stockholm, 2002. Levine, D. S. Introduction to Neural & Cognitive Modeling. Laurence Erlbaum Associates, Inc., Publishers, 1991. MacLean, P. D. A Triune Concept of the Brain and Behavior. Toronto, Univ. Toronto Press, 1973. Maynard Smith, J. 1964. Group Selection and Kin Selection, Nature 201:1145–1147. Maynard Smith, J. Evolutionary Genetics. Oxford University Press, 1998. Mayr, E. What Evolution Is. Basic Books, New York, 2001. Müller, Christian L. and Sbalzarini, Ivo F. Gaussian Adaptation Revisited - an Entropic View on Covariance Matrix Adaptation. 
Institute of Theoretical Computer Science and Swiss Institute of Bioinformatics, ETH Zurich, CH-8092 Zurich, Switzerland. Pinel, J. F. and Singhal, K. Statistical Design Centering and Tolerancing Using Parametric Sampling. IEEE Transactions on Circuits and Systems, Vol. CAS-28, No. 7, July 1981. Rechenberg, I. (1971): Evolutionsstrategie — Optimierung technischer Systeme nach Prinzipien der biologischen Evolution (PhD thesis). Reprinted by Frommann-Holzboog (1973). Ridley, M. Evolution. Blackwell Science, 1996. Stauffer, C. & Grimson, W.E.L. Learning Patterns of Activity Using Real-Time Tracking, IEEE Trans. on PAMI, 22(8), 2000. Stehr, G. On the Performance Space Exploration of Analog Integrated Circuits. Technische Universität München, Dissertation, 2005. Taxén, L. A Framework for the Coordination of Complex Systems' Development. Institute of Technology, Linköping University, Dissertation, 2003. Zohar, D. The Quantum Self: a Revolutionary View of Human Nature and Consciousness Rooted in the New Physics. London, Bloomsbury, 1990. Evolutionary algorithms Creationism Free will
Gaussian adaptation
[ "Biology" ]
4,264
[ "Creationism", "Biology theories", "Obsolete biology theories" ]
9,210,858
https://en.wikipedia.org/wiki/Marine%20architecture
Marine architecture is the design of architectural and engineering structures which support coastal design, near-shore and off-shore or deep-water planning for many projects such as shipyards, ship transport, coastal management or other marine and/or hydroscape activities. These structures include harbors, lighthouses, marinas, oil platforms, offshore drilling rigs, accommodation platforms and offshore wind farms, floating engineering structures and building architectures or civil seascape developments. Floating structures in deep water may use suction caissons for anchoring. See also Cofferdam, a temporary water-excluding structure built in place, sometimes surrounding a working area as does an open caisson. Photo gallery References External links Water and the environment Offshore engineering
Marine architecture
[ "Engineering" ]
143
[ "Construction", "Marine architecture", "Architecture", "Offshore engineering" ]
13,192,026
https://en.wikipedia.org/wiki/Serviceability%20%28computer%29
In software engineering and hardware engineering, serviceability (also known as supportability) is one of the -ilities or aspects (from IBM's RAS(U): Reliability, Availability, Serviceability, and Usability). It refers to the ability of technical support personnel to install, configure, and monitor computer products, identify exceptions or faults, debug or isolate faults to their root cause, and provide hardware or software maintenance in pursuit of solving a problem and restoring the product into service. Incorporating serviceability-facilitating features typically results in more efficient product maintenance, reduces operational costs, and helps maintain business continuity. Examples of features that facilitate serviceability include: Help desk notification of exceptional events (e.g., by electronic mail or by sending text to a pager) Network monitoring Documentation Event logging / Tracing (software) Logging of program state, such as Execution path and/or local and global variables Procedure entry and exit, optionally with incoming and return variable values (see: subroutine) Exception block entry, optionally with local state (see: exception handling) Software upgrade Graceful degradation, where the product is designed to allow recovery from exceptional events without intervention by technical support staff Hardware replacement or upgrade planning, where the product is designed to allow efficient hardware upgrades with minimal computer system downtime (e.g., hot-swap components) Serviceability engineering may also incorporate some routine system maintenance related features (see: Operations, Administration and Maintenance (OA&M)). A service tool is defined as a facility or feature, closely tied to a product, that provides capabilities and data so as to service (analyze, monitor, debug, repair, etc.) that product. Service tools can provide broad ranges of capabilities. Regarding diagnosis, a proposed taxonomy of service tools is as follows: Level 1: Service tool that indicates if a product is functional or not functional. Describing computer servers, the states are often referred to as ‘up’ or ‘down’. This is a binary value. Level 2: Service tool that provides some detailed diagnostic data. Often the diagnostic data is referred to as a problem ‘signature’, a representation of key values such as system environment, running program name, etc. This level of data is used to compare one problem’s signature to another problem’s signature: the ability to match the new problem to an old one allows one to use the solution already created for the prior problem. The ability to screen problems is valuable when a problem does match a pre-existing problem, but it is not sufficient to debug a new problem. Level 3: Provides detailed diagnostic data sufficient to debug a new and unique problem. As a rough rule of thumb for these taxonomies, there are multiple ‘orders of magnitude’ of diagnostic data in level 1 vs. level 2 vs. level 3 service tools. Additional characteristics and capabilities that have been observed in service tools: Time of data collection: some tools can collect data immediately, as soon as a problem occurs; others are delayed in collecting data. Pre-analyzed, or not-yet-analyzed data: some tools collect ‘external’ data, while others collect ‘internal’ data. This is seen when comparing system messages (natural language-like statements in the user’s native language) vs. ‘binary’ storage dumps. Partial or full set of system state data: some tools collect a complete system state vs. 
a partial system state (user or partial ‘binary’ storage dump vs. complete system dump). Raw or analyzed data: some tools display raw data, while others analyze it (for example, storage dump formatters that merely format data vs. ‘intelligent’ data formatters (“ANALYZE” is a common verb) that combine product knowledge with analysis of state variables to indicate the ‘meaning’ of the data). Programmable tools vs. ‘fixed function’ tools: some tools can be altered to get varying amounts of data, at varying times, while others have only a fixed function. Automatic or manual? Some tools are built into a product, to automatically collect data when a fault or failure occurs. Other tools have to be specifically invoked to start the data collection process. Repair or non-repair? Some tools collect data as a forerunner to an automatic repair process (self-healing/fault tolerant). These tools have the challenge of quickly obtaining unaltered data before the desired repair process starts. See also FURPS Maintainability External links Example of serviceability feature requirements: Sun Gathering Debug Data (Sun GDD), a set of tools developed by Sun's support organization to provide the right approach to problem resolution by leveraging proactive actions and best practices to gather the debug data needed for further analysis. "Carrier Grade Linux Serviceability Requirements Definition Version 4," Copyright (c) 2005-2007 by Open Source Development Labs, Inc. Beaverton, OR 97005 USA Design for X
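To make the "procedure entry and exit" tracing feature listed above concrete, here is a minimal Python sketch of a tracing decorator; the logger name and the traced function are hypothetical examples, not tied to any particular product.

import functools
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("serviceability-demo")  # hypothetical logger name

def traced(func):
    """Log procedure entry/exit with argument and return values."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("enter %s args=%r kwargs=%r", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
        except Exception:
            # Exception-block logging, the "exception block entry" feature
            log.exception("exception in %s", func.__name__)
            raise
        log.debug("exit %s -> %r", func.__name__, result)
        return result
    return wrapper

@traced
def divide(a, b):   # hypothetical example function
    return a / b

divide(10, 2)       # entry and exit are logged, giving Level 2-style signatures

Such instrumentation is what lets support staff match a new problem's "signature" to an old one without reproducing the fault.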
Serviceability (computer)
[ "Engineering" ]
1,008
[ "Design", "Design for X" ]
13,192,028
https://en.wikipedia.org/wiki/Serviceability%20%28structure%29
In civil engineering and structural engineering, serviceability refers to the conditions under which a building is still considered useful. Should these limit states be exceeded, a structure that may still be structurally sound would nevertheless be considered unfit. Serviceability refers to conditions other than the building strength that render the building unusable. Serviceability limit state design of structures includes factors such as durability, overall stability, fire resistance, deflection, cracking and excessive vibration. For example, a skyscraper could sway severely and cause the occupants to be sick (much like sea-sickness), yet be perfectly sound structurally. Such a building is in no danger of collapsing, yet since it is obviously no longer fit for human occupation, it is considered to have exceeded its serviceability limit state. Serviceability limit A serviceability limit defines the performance criterion for serviceability and corresponds to a condition beyond which specified service requirements resulting from the planned use are no longer met. In limit state design, a structure fails its serviceability if the criteria of the serviceability limit state are not met during the specified service life and with the required reliability. Hence, the serviceability limit state identifies a civil engineering structure which fails to meet technical requirements for use even though it may be strong enough to remain standing. A structure that fails serviceability has exceeded a defined limit for one of the following properties: Excessive deflection (a deflection check is sketched below) Vibration Local deformation Serviceability limits are not always defined by building code developers, governments or regulatory agencies. Building codes tend to be restricted to ultimate limits related to public and occupant safety. Global geopolitical variations are likely to exist. Structural engineering Building engineering
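Below is a minimal Python sketch of a deflection serviceability check for a simply supported beam under uniform load, assuming the commonly used span/360 deflection limit; the limit ratio and numerical inputs are illustrative assumptions, not requirements from any particular code.

def passes_deflection_limit(w, L, E, I, limit_ratio=360.0):
    """Serviceability check: midspan deflection of a simply supported beam
    under uniform load w (N/m), span L (m), elastic modulus E (Pa) and
    second moment of area I (m^4), against an assumed span/limit_ratio limit."""
    delta_max = 5.0 * w * L**4 / (384.0 * E * I)  # classical beam deflection formula
    return delta_max <= L / limit_ratio

# Illustrative numbers: a 6 m steel beam under 5 kN/m deflects about 5.3 mm,
# within the assumed limit of 6000/360 = 16.7 mm, so the check passes.
print(passes_deflection_limit(w=5e3, L=6.0, E=200e9, I=8e-5))  # True

Note that this is purely a serviceability criterion: the beam could pass an ultimate (strength) check yet fail this deflection limit, or vice versa.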
Serviceability (structure)
[ "Engineering" ]
331
[ "Structural engineering", "Building engineering", "Construction", "Civil engineering", "Architecture" ]
13,193,455
https://en.wikipedia.org/wiki/Irina%20Beletskaya
Irina Petrovna Beletskaya (born 10 March 1933) is a Soviet and Russian professor of chemistry at Moscow State University. She specializes in organometallic chemistry and its application to problems in organic chemistry. She is best known for her studies on aromatic reaction mechanisms, as well as work on carbanion acidity and reactivity. She developed some of the first methods for carbon-carbon bond formation using palladium or nickel catalysts, and extended these reactions to work in aqueous media. She also helped to open up the chemistry of organolanthanides. Academic career Beletskaya was born in Leningrad (St. Petersburg, Russia) in 1933. She graduated from the Department of Chemistry of Lomonosov Moscow State University in 1955, where she focused her undergraduate research on organoarsenic chemistry. She obtained the Candidate of Chemistry (analogous to Ph.D.) degree in 1958. For this degree she investigated electrophilic substitution reactions; more specifically, she explored the influence of ammonia on α-bromomercurophenylacetic acid reactions. In 1963 she received her Dr.Sci. degree from the same institution. In 1970 she became a Full Professor of Chemistry at Moscow State University, where she currently serves as head of the Organoelement Chemistry Laboratory. Beletskaya was elected a corresponding member of the Academy of Sciences of the USSR in 1974. In 1992 she became a full member (academician) of the Russian Academy of Sciences. Between 1991 and 1993 she served as president of the Division of Organic Chemistry of IUPAC. Until 2001 she served on the IUPAC Committee on Chemical Weapons Destruction Technology (CWDT). She is editor-in-chief of the Russian Journal of Organic Chemistry. Beletskaya initially researched the reaction mechanisms of organic reactions, focusing on compounds with metal-carbon bonds. Her research included Grignard-like reactions and lanthanide complexes in the context of catalysis. She and Prof. O. Reutov worked on electrophilic reactions at saturated carbon. She also investigated the reaction mechanisms of organometallic compounds and researched carbanion reactivity, emphasizing the reactivity and structure of ion pairs. Later in her career, Beletskaya focused more on transition metal catalysts and on developing economically favorable catalysts. Currently, she serves as the head of the Laboratory of Organoelement Compounds within the Department of Chemistry at Moscow State University, where she has concentrated her research on carbon dioxide utilization and its utility in renewable energy and reactions with epoxides. Research contributions Beletskaya is known for her foundational contributions to organometallic chemistry and as one of the first prominent female chemists in her field. Her work helped pave the way for women in Russia to participate in the scientific community, and her pioneering role in organometallic synthesis laid an essential foundation for later organic chemists. Her advocacy for rare-earth elements in organic chemistry led to the publication of many new textbooks, influencing how organic chemistry is taught. Because organic chemistry is nominally carbon-based, organic chemists have not always seen the need to involve other elements; Beletskaya's work helped expand the use of precious metals in organic reactions. 
External links Publications Protolysis mechanism of cis- and trans-β-chlorovinylmercury chlorides when acted upon by HCl and DCl Pd-Catalyzed amination of dibromobiphenyls in the synthesis of macrocycles comprising two biphenyl and two polyamine moieties The influence of the substituents in the electrophilic bimolecular reaction New trends in the cross-coupling and other catalytic reactions Honors and awards Lomonosov Prize, 1974. Mendeleev Prize, 1979. Nesmeyanov Prize, 1991. Demidov Prize, 2003. State Prize, 2004. IUPAC 2013 Distinguished Women in Chemistry or Chemical Engineering Award, 2013. References 1933 births Living people 20th-century Russian inventors 20th-century Russian women Corresponding Members of the USSR Academy of Sciences Full Members of the Russian Academy of Sciences Academic staff of Moscow State University Demidov Prize laureates Honoured Scientists of the Russian Federation Recipients of the Order of the Red Banner of Labour State Prize of the Russian Federation laureates Russian women chemists 20th-century Russian chemists Soviet women chemists Women inventors Organometallic chemistry
Irina Beletskaya
[ "Chemistry" ]
909
[ "Organometallic chemistry" ]
13,195,175
https://en.wikipedia.org/wiki/Ceramic%20tile%20cutter
Ceramic tile cutters are used to cut ceramic tiles to a required size or shape. They come in a number of different forms, from basic manual devices to complex attachments for power tools. Hand tools Beam score cutters, cutter boards The ceramic tile cutter works by first scratching a straight line across the surface of the tile with a hardened metal wheel and then applying pressure directly below the line and on each side of the line on top. Snapping force varies widely, with some mass-produced models exerting over 750 kg of force. The cutting wheel and breaking jig are combined in a carriage that travels along one or two beams to keep the carriage angled correctly and the cut straight. The beam(s) may be height adjustable to handle different thicknesses of tiles. The base of the tool may have adjustable fences for angled cuts and square cuts and fence stops for multiple cuts of exactly the same size. The scoring wheel is easily replaceable. History The first tile cutter was designed to facilitate the work and solve the problems that masons had when cutting a mosaic of encaustic tiles (a type of decorative pigmented tile widely used in the 1950s), whose hardness and thickness demanded high cutting force. Over time the tool evolved, incorporating elements that made it more accurate and productive. The first cutter had an iron point to scratch the tiles; it was later replaced by the current tungsten carbide scratching wheel. Another built-in device, introduced in 1960, was the snapping element. It allowed users to snap the tiles easily, rather than using the bench, the cutter handle, or a knee strike as had been done before. This was a revolution in ceramic tile cutting. Tile nippers Tile nippers are similar to small pairs of pincers, with part of the width of the tool removed so that they can be fit into small holes. They can be used to break off small edges of tiles that have been scored, or to nibble away small chips, for example to enlarge holes. Glass cutter A simple hand-held glass cutter is capable of scoring a smooth ceramic glaze surface, allowing the tile to be snapped. Power tools The harder grades of ceramic tiles, like fully vitrified porcelain tiles, stone tiles, and some clay tiles with textured surfaces, have to be cut with a diamond blade. The diamond blades are mounted in: Angle grinders An angle grinder can be used for short, sometimes curved cuts. It can also be used for L-shaped cuts and for making holes. It can be used dry and, more rarely, wet. Tile saws Dedicated tile saws are designed to be used with water as a coolant for the diamond blade. They are available in different sizes, with adjustable fences for angled cuts and square cuts, and fence stops for multiple cuts of exactly the same size. Gallery References See also Hand tool Power tool Diamond tool Encaustic tile Porcelain tile Dimension stone Glass tiles Quarry tile Mosaic Mechanical hand tools Hand-held power tools Cutting tools Grinding machines
Ceramic tile cutter
[ "Physics" ]
610
[ "Mechanics", "Mechanical hand tools" ]
13,196,068
https://en.wikipedia.org/wiki/Fractional%20vortices
In a standard superconductor, described by a complex fermionic condensate wave function (denoted Ψ), vortices carry quantized magnetic fields because the condensate wave function is invariant to increments of the phase by 2π. There, a winding of the phase by 2π creates a vortex which carries one flux quantum. See quantum vortex. The term fractional vortex is used for two kinds of very different quantum vortices which occur when: (i) A physical system allows phase windings different from 2π, i.e. a non-integer or fractional phase winding. Quantum mechanics prohibits it in a uniform ordinary superconductor, but it becomes possible in an inhomogeneous system, for example, if a vortex is placed on a boundary between two superconductors which are connected only by an extremely weak link (also called a Josephson junction); such a situation also occurs on grain boundaries etc. At such superconducting boundaries the phase can have a discontinuous jump. Correspondingly, a vortex placed onto such a boundary acquires a fractional phase winding, hence the term fractional vortex. A similar situation occurs in a spin-1 Bose condensate, where a vortex with a π phase winding can exist if it is combined with a domain of overturned spins. (ii) A different situation occurs in uniform multicomponent superconductors, which allow stable vortex solutions with integer phase winding, which however carry arbitrarily fractionally quantized magnetic flux. Observation of fractional-flux vortices was reported in a multiband iron-based superconductor. (i) Vortices with non-integer phase winding Josephson vortices Fractional vortices at phase discontinuities Josephson phase discontinuities may appear in specially designed long Josephson junctions (LJJ). For example, so-called 0-π LJJ have a discontinuity of the Josephson phase at the point where the 0 and π parts join. Physically, such LJJ can be fabricated using a tailored ferromagnetic barrier or using d-wave superconductors. The Josephson phase discontinuities can also be introduced using artificial tricks, e.g., a pair of tiny current injectors attached to one of the superconducting electrodes of the LJJ. The value of the phase discontinuity is denoted by κ and, without losing generality, it is assumed that 0 < κ < 2π, because the phase is 2π periodic. An LJJ reacts to the phase discontinuity by bending the Josephson phase in the vicinity of the discontinuity point, so that far away there are no traces of this perturbation. The bending of the Josephson phase inevitably results in the appearance of a local magnetic field localized around the discontinuity (the 0-π boundary). It also results in the appearance of a supercurrent circulating around the discontinuity. The total magnetic flux Φ carried by the localized magnetic field is proportional to the value of the discontinuity, namely Φ = (κ/2π)Φ0, where Φ0 is the magnetic flux quantum. For a π-discontinuity, Φ = Φ0/2, and the vortex of the supercurrent is called a semifluxon. When κ ≠ π, one speaks about arbitrary fractional Josephson vortices. This type of vortex is pinned at the phase discontinuity point, but may have two polarities, positive and negative, distinguished by the direction of the fractional flux and the direction of the supercurrent (clockwise or counterclockwise) circulating around its center (discontinuity point). The semifluxon is a particular case of such a fractional vortex pinned at the phase discontinuity point. 
Although such fractional Josephson vortices are pinned, if perturbed they may perform small oscillations around the phase discontinuity point with an eigenfrequency that depends on the value of κ. Splintered vortices (double sine-Gordon solitons) In the context of d-wave superconductivity, a fractional vortex (also known as a splintered vortex) is a vortex of supercurrent carrying unquantized magnetic flux, the value of which depends on the parameters of the system. Physically, such vortices may appear at the grain boundary between two d-wave superconductors, which often looks like a regular or irregular sequence of 0 and π facets. One can also construct an artificial array of short 0 and π facets to achieve the same effect. These splintered vortices are solitons. They are able to move and preserve their shape, similar to conventional integer Josephson vortices (fluxons). This is in contrast to the fractional vortices pinned at a phase discontinuity, e.g. semifluxons, which are pinned at the discontinuity and cannot move far from it. Theoretically, one can describe a grain boundary between d-wave superconductors (or an array of tiny 0 and π facets) by an effective equation for a large-scale phase ψ. Large scale means that the scale is much larger than the facet size. This equation is a double sine-Gordon equation, which in normalized units contains, in addition to the usual sin ψ term, a term proportional to sin 2ψ with a dimensionless coefficient g resulting from averaging over tiny facets. The detailed mathematical procedure of averaging is similar to the one done for a parametrically driven pendulum, and can be extended to time-dependent phenomena. In essence, this equation describes an extended φ Josephson junction. For suitable values of g the equation has two stable equilibrium values (in each 2π interval), −φ and +φ, corresponding to two energy minima. Correspondingly, there are two fractional vortices (topological solitons): one with the phase going from −φ to +φ, while the other has the phase changing from +φ to 2π − φ. The first vortex has a topological change of 2φ and carries the magnetic flux Φ0 φ/π. The second vortex has a topological change of 2π − 2φ and carries the flux Φ0 (π − φ)/π. Splintered vortices were first observed at the asymmetric 45° grain boundaries between two d-wave superconductors YBa2Cu3O7−δ. Spin-triplet superfluidity In certain states of spin-1 superfluids or Bose condensates, the condensate wavefunction is invariant if the superfluid phase changes by π, along with a π rotation of the spin angle. This is in contrast to the invariance of the condensate wavefunction in a spin-0 superfluid. A vortex resulting from such phase windings is called a fractional or half-quantum vortex, in contrast to a one-quantum vortex, where the phase changes by 2π. (ii) Vortices with integer phase winding and fractional flux in multicomponent superconductivity Different kinds of "fractional vortices" appear in a different context in multi-component superconductivity, where several independent charged condensates or superconducting components interact with each other electromagnetically. Such a situation occurs for example in the theories of the projected quantum states of liquid metallic hydrogen, where two order parameters originate from the theoretically anticipated coexistence of electronic and protonic Cooper pairs. There, topological defects with a 2π (i.e. "integer") phase winding in only the electronic or only the protonic condensate carry fractionally quantized magnetic flux: a consequence of electromagnetic interaction with the second condensate. 
Also, these fractional vortices carry a superfluid momentum which does not obey Onsager–Feynman quantization. Despite the integer phase winding, the basic properties of these kinds of fractional vortices are very different from the Abrikosov vortex solutions. For example, in contrast to the Abrikosov vortex, their magnetic field generically is not exponentially localized in space. Also, in some cases the magnetic flux inverts its direction at a certain distance from the vortex center. See also Josephson junction π Josephson junction Magnetic flux quantum Semifluxon Quantum vortex References Josephson effect Superfluidity
Fractional vortices
[ "Physics", "Chemistry", "Materials_science" ]
1,662
[ "Physical phenomena", "Phase transitions", "Josephson effect", "Superconductivity", "Phases of matter", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
7,083,690
https://en.wikipedia.org/wiki/Omnitruncation
In geometry, an omnitruncation of a convex polytope is a simple polytope of the same dimension, having a vertex for each flag of the original polytope and a facet for each face of any dimension of the original polytope. Omnitruncation is the dual operation to barycentric subdivision. Because the barycentric subdivision of any polytope can be realized as another polytope, the same is true for the omnitruncation of any polytope. When omnitruncation is applied to a regular polytope (or honeycomb) it can be described geometrically as a Wythoff construction that creates a maximum number of facets. It is represented in a Coxeter–Dynkin diagram with all nodes ringed. It is a shortcut term which has a different meaning in progressively higher-dimensional polytopes: Uniform polytope truncation operators For regular polygons: An ordinary truncation, t0,1{p}. For uniform polyhedra (3-polytopes): A cantitruncation, t0,1,2{p,q}. (Application of both cantellation and truncation operations) For uniform polychora: A runcicantitruncation, t0,1,2,3{p,q,r}. (Application of runcination, cantellation, and truncation operations) For uniform polytera (5-polytopes): A steriruncicantitruncation, t0,1,2,3,4{p,q,r,s}. (Application of sterication, runcination, cantellation, and truncation operations) For uniform n-polytopes: t0,1,...,n−1{p1, p2, ..., pn−1}. See also Expansion (geometry) Omnitruncated polyhedron References Further reading Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition, (pp. 145–154 Chapter 8: Truncation, p. 210 Expansion) Norman Johnson Uniform Polytopes, Manuscript (1991) N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966 External links Polyhedra Uniform polyhedra
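Since the omnitruncation has one vertex for each flag, the omnitruncate of a regular polytope has as many vertices as the original has flags, which equals the order of its Coxeter group. As a worked example (standard flag counting, not drawn from the sources above), for the cube {4,3}:

\[
\mathrm{flags}(\{4,3\}) = 6\ \text{faces} \times 4\ \tfrac{\text{edges}}{\text{face}} \times 2\ \tfrac{\text{vertices}}{\text{edge}} = |W(B_3)| = 48,
\]

so the omnitruncated cube t_{0,1,2}\{4,3\}, the truncated cuboctahedron, has 48 vertices.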
Omnitruncation
[ "Physics" ]
491
[ "Symmetry", "Uniform polytopes", "Truncated tilings", "Tessellation", "Uniform polyhedra" ]
7,084,895
https://en.wikipedia.org/wiki/Hot-carrier%20injection
Hot carrier injection (HCI) is a phenomenon in solid-state electronic devices where an electron or a “hole” gains sufficient kinetic energy to overcome the potential barrier necessary to break an interface state. The term "hot" refers to the effective temperature used to model carrier density, not to the overall temperature of the device. Since the charge carriers can become trapped in the gate dielectric of a MOS transistor, the switching characteristics of the transistor can be permanently changed. Hot-carrier injection is one of the mechanisms that adversely affects the reliability of solid-state devices. Physics The term “hot carrier injection” usually refers to the effect in MOSFETs, where a carrier is injected from the conducting channel in the silicon substrate to the gate dielectric, which usually is made of silicon dioxide (SiO2). To become “hot” and enter the conduction band of SiO2, an electron must gain a kinetic energy of ~3.2 eV. For holes, the valence band offset in this case dictates they must have a kinetic energy of 4.6 eV. The term "hot electron" comes from the effective temperature term used when modelling carrier density (i.e., with a Fermi–Dirac function) and does not refer to the bulk temperature of the semiconductor (which can be physically cold, although the warmer it is, the higher the population of hot electrons it will contain, all else being equal). The term “hot electron” was originally introduced to describe non-equilibrium electrons (or holes) in semiconductors. More broadly, the term describes electron distributions describable by the Fermi function, but with an elevated effective temperature. This greater energy affects the mobility of charge carriers and as a consequence affects how they travel through a semiconductor device. Hot electrons can tunnel out of the semiconductor material, instead of recombining with a hole or being conducted through the material to a collector. Consequent effects include increased leakage current and possible damage to the encasing dielectric material if the hot carrier disrupts the atomic structure of the dielectric. Hot electrons can be created when a high-energy photon of electromagnetic radiation (such as light) strikes a semiconductor. The energy from the photon can be transferred to an electron, exciting the electron out of the valence band, and forming an electron-hole pair. If the electron receives enough energy to leave the valence band and to surpass the conduction band edge, it becomes a hot electron. Such electrons are characterized by high effective temperatures. Because of the high effective temperatures, hot electrons are very mobile, and likely to leave the semiconductor and travel into other surrounding materials. In some semiconductor devices, the energy dissipated by hot electrons as phonons represents an inefficiency, as energy is lost as heat. For instance, some solar cells rely on the photovoltaic properties of semiconductors to convert light to electricity. In such cells, the hot electron effect is the reason that a portion of the light energy is lost to heat rather than converted to electricity. Hot electrons arise generically at low temperatures even in degenerate semiconductors or metals. There are a number of models to describe the hot-electron effect. The simplest predicts an electron-phonon (e-p) interaction based on a clean three-dimensional free-electron model. Hot electron effect models illustrate a correlation between power dissipated, the electron gas temperature and overheating. 
Effects on transistors In MOSFETs, hot electrons have sufficient energy to tunnel through the thin gate oxide to show up as gate current, or as substrate leakage current. In a MOSFET, when a gate is positive and the switch is on, the device is designed with the intent that electrons will flow laterally through the conductive channel, from the source to the drain. Hot electrons may jump from the channel region or from the drain, for instance, and enter the gate or the substrate. These hot electrons do not contribute to the amount of current flowing through the channel as intended and instead are a leakage current. Attempts to correct or compensate for the hot electron effect in a MOSFET may involve locating a diode in reverse bias at the gate terminal or other manipulations of the device (such as lightly doped drains or double-doped drains). When electrons are accelerated in the channel, they gain energy along the mean free path. This energy is lost in two different ways: The carrier hits an atom in the substrate. The collision then creates a cold carrier and an additional electron-hole pair. In the case of nMOS transistors, additional electrons are collected by the channel and additional holes are evacuated by the substrate. The carrier hits a Si-H bond and breaks the bond. An interface state is created and the hydrogen atom is released into the substrate. The probability of hitting either an atom or a Si-H bond is random, and the average energy involved in each process is the same in both cases. This is the reason why the substrate current is monitored during HCI stress. A high substrate current means a large number of created electron-hole pairs and thus an efficient Si-H bond-breakage mechanism. When interface states are created, the threshold voltage is modified and the subthreshold slope is degraded. This leads to lower current, and degrades the operating frequency of the integrated circuit. Scaling Advances in semiconductor manufacturing techniques and ever-increasing demand for faster and more complex integrated circuits (ICs) have driven the associated metal-oxide-semiconductor field-effect transistor (MOSFET) to scale to smaller dimensions. However, it has not been possible to scale the supply voltage used to operate these ICs proportionately, due to factors such as compatibility with previous generation circuits, noise margin, power and delay requirements, and non-scaling of threshold voltage, subthreshold slope, and parasitic capacitance. As a result, internal electric fields increase in aggressively scaled MOSFETs, which comes with the additional benefit of increased carrier velocities (up to velocity saturation), and hence increased switching speed, but also presents a major reliability problem for the long-term operation of these devices, as high fields induce hot carrier injection which affects device reliability. Large electric fields in MOSFETs imply the presence of high-energy carriers, referred to as “hot carriers”. These hot carriers have sufficiently high energies and momenta to allow them to be injected from the semiconductor into the surrounding dielectric films, such as the gate and sidewall oxides as well as the buried oxide in the case of silicon-on-insulator (SOI) MOSFETs. Reliability impact The presence of such mobile carriers in the oxides triggers numerous physical damage processes that can drastically change the device characteristics over prolonged periods. 
The accumulation of damage can eventually cause the circuit to fail as key parameters such as the threshold voltage shift due to such damage. The degradation in device behavior caused by this accumulated damage from hot carrier injection is called “hot carrier degradation”. The useful lifetime of circuits and integrated circuits based on such a MOS device is thus affected by the lifetime of the MOS device itself. To assure that integrated circuits manufactured with minimal-geometry devices will not have their useful life impaired, the HCI degradation of the component MOS devices must be well understood. Failure to accurately characterize HCI lifetime effects can ultimately affect business costs such as warranty and support costs and impact marketing and sales promises for a foundry or IC manufacturer. Relationship to radiation effects Hot carrier degradation is fundamentally the same as the ionization radiation effect known as total dose damage to semiconductors, as experienced in space systems due to solar proton, electron, X-ray and gamma ray exposure. HCI and NOR flash memory cells HCI is the basis of operation for a number of non-volatile memory technologies such as EPROM cells. As soon as the potentially detrimental influence of HC injection on circuit reliability was recognized, several fabrication strategies were devised to reduce it without compromising circuit performance. NOR flash memory exploits the principle of hot carrier injection by deliberately injecting carriers across the gate oxide to charge the floating gate. This charge alters the MOS transistor threshold voltage to represent a logic '0' state. An uncharged floating gate represents a '1' state. Erasing the NOR flash memory cell removes stored charge through the process of Fowler–Nordheim tunneling. Because of the damage to the oxide caused by normal NOR flash operation, HCI damage is one of the factors that limit the number of write-erase cycles. Because the ability to hold charge and the formation of damage traps in the oxide affect the ability to have distinct '1' and '0' charge states, HCI damage results in the closing of the non-volatile memory logic margin window over time. The number of write-erase cycles at which '1' and '0' can no longer be distinguished defines the endurance of a non-volatile memory. See also Time-dependent gate oxide breakdown (also time-dependent dielectric breakdown, TDDB) Electromigration (EM) Negative bias temperature instability (NBTI) Stress migration Lattice scattering References External links An article about hot carriers at www.siliconfareast.com IEEE International Reliability Physics Symposium, the primary academic and technical conference for semiconductor reliability involving HCI and other reliability phenomena Integrated circuits Semiconductors Semiconductor device defects Charge carriers Electric and magnetic fields in matter
Hot-carrier injection
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,940
[ "Physical phenomena", "Matter", "Integrated circuits", "Physical quantities", "Charge carriers", "Computer engineering", "Semiconductors", "Technological failures", "Semiconductor device defects", "Electric and magnetic fields in matter", "Materials science", "Materials", "Electrical phenome...
7,086,534
https://en.wikipedia.org/wiki/Kelvin%27s%20circulation%20theorem
In fluid mechanics, Kelvin's circulation theorem states: In a barotropic, ideal fluid with conservative body forces, the circulation around a closed curve (which encloses the same fluid elements) moving with the fluid remains constant with time. The theorem is named after William Thomson, 1st Baron Kelvin, who published it in 1869. Stated mathematically: DΓ/Dt = 0, where Γ is the circulation around a material moving contour C(t) as a function of time t. The differential operator D/Dt is a substantial (material) derivative moving with the fluid particles. Stated more simply, this theorem says that if one observes a closed contour at one instant, and follows the contour over time (by following the motion of all of its fluid elements), the circulation over the two locations of this contour remains constant. This theorem does not hold in cases with viscous stresses, nonconservative body forces (for example the Coriolis force) or non-barotropic pressure-density relations. Mathematical proof The circulation Γ around a closed material contour C(t) is defined by: Γ(t) = ∮_C u · ds, where u is the velocity vector, and ds is an element along the closed contour. The governing equation for an inviscid fluid with a conservative body force is Du/Dt = −(1/ρ)∇p − ∇Φ, where D/Dt is the convective derivative, ρ is the fluid density, p is the pressure and Φ is the potential for the body force. These are the Euler equations with a body force. The condition of barotropicity implies that the density is a function only of the pressure, i.e. ρ = ρ(p). Taking the convective derivative of circulation gives DΓ/Dt = ∮_C (Du/Dt) · ds + ∮_C u · D(ds)/Dt. For the first term, we substitute from the governing equation, and then apply Stokes' theorem, thus: ∮_C (Du/Dt) · ds = −∫_A ∇ × [(1/ρ)∇p + ∇Φ] · n dA = ∫_A (1/ρ²)(∇ρ × ∇p) · n dA = 0. The final equality arises since ∇ρ × ∇p = 0 owing to barotropicity (when ρ = ρ(p), the gradients ∇ρ and ∇p are parallel). We have also made use of the fact that the curl of any gradient is necessarily 0, or ∇ × ∇f = 0 for any function f. For the second term, we note that evolution of the material line element is given by D(ds)/Dt = (ds · ∇)u. Hence ∮_C u · (ds · ∇)u = (1/2)∮_C ∇(u · u) · ds = 0. The last equality is obtained by applying the gradient theorem. Since both terms are zero, we obtain the result DΓ/Dt = 0. Poincaré–Bjerknes circulation theorem A similar principle which conserves a quantity can be obtained for the rotating frame also, known as the Poincaré–Bjerknes theorem, named after Henri Poincaré and Vilhelm Bjerknes, who derived the invariant in 1893 and 1898. The theorem can be applied to a rotating frame which is rotating at a constant angular velocity given by the vector Ω, for the modified circulation Γ(t) = ∮_C (u + Ω × r) · ds. Here r is the position of the area of fluid. From Stokes' theorem, this is: Γ(t) = ∫_A (∇ × u + 2Ω) · n dA. The vorticity ω of a velocity field u in fluid dynamics is defined by: ω = ∇ × u. Then: Γ(t) = ∫_A (ω + 2Ω) · n dA. See also Bernoulli's principle Euler equations (fluid dynamics) Helmholtz's theorems Thermomagnetic convection Notes Equations of fluid dynamics Fluid dynamics Equations Circulation theorem
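As a quick numerical illustration of the definition Γ = ∮ u · ds (of the circulation itself, not of the theorem's time evolution), the following Python sketch evaluates the circulation of a solid-body rotation around a circle and compares it with the value 2ΩπR² given by Stokes' theorem; the velocity field and contour are illustrative choices.

import numpy as np

# Circulation Gamma = loop integral of u . ds for solid-body rotation
# u = (-Omega*y, Omega*x) around a circle of radius R;
# Stokes' theorem predicts Gamma = 2 * Omega * pi * R^2.
Omega, R = 1.5, 2.0
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
x, y = R * np.cos(theta), R * np.sin(theta)
u, v = -Omega * y, Omega * x

# Riemann-sum quadrature of u dx + v dy along the closed contour
gamma = np.sum(u[:-1] * np.diff(x) + v[:-1] * np.diff(y))
print(gamma, 2.0 * Omega * np.pi * R**2)   # both approximately 37.699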
Kelvin's circulation theorem
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
579
[ "Equations of fluid dynamics", "Equations of physics", "Chemical engineering", "Mathematical objects", "Equations", "Piping", "Fluid dynamics" ]
2,951,017
https://en.wikipedia.org/wiki/RealMagic
RealMagic (or ReelMagic), from Sigma Designs, was one of the first fully compliant MPEG playback boards on the market in the mid-1990s. RealMagic is a hardware-accelerated MPEG decoder that mixes its video stream into a computer video card's output through the video card's feature connector. It is also a SoundBlaster-compatible sound card. Successors Sigma Designs' RealMagic was superseded by the RealMagic Hollywood+, the RealMagic XCard, and the RealMagic NetStream2000-4000. Several software companies in 1993 promised to support the card, including Access, Interplay, and Sierra. Software written for RealMagic includes: Under a Killing Moon - Access Software Gabriel Knight Escape from Cybercity King's Quest VI - Sierra Online Dragon's Lair Police Quest IV - Sierra Online Return to Zork - Infocom Lord of the Rings - Interplay Entertainment Note: the above titles were on a ReelMagic demo CD that came with the hardware. The CD also contained corporate promotion videos, training videos, and news footage of John F. Kennedy and the Apollo Moon mission. Also included in the bundle was a complete version of The Horde, published by Crystal Dynamics (1994). Other software includes: The Psychotron (an interactive mystery movie) - Merit Software References Graphics cards
RealMagic
[ "Technology" ]
267
[ "Computing stubs", "Computer hardware stubs" ]
2,951,323
https://en.wikipedia.org/wiki/Cram%C3%A9r%E2%80%93Wold%20theorem
In mathematics, the Cramér–Wold theorem or the Cramér–Wold device is a theorem in measure theory which states that a Borel probability measure on R^k is uniquely determined by the totality of its one-dimensional projections. It is used as a method for proving joint convergence results. The theorem is named after Harald Cramér and Herman Ole Andreas Wold, who published the result in 1936. Let X_n = (X_n1, ..., X_nk) and X = (X_1, ..., X_k) be random vectors of dimension k. Then X_n converges in distribution to X if and only if: t_1 X_n1 + t_2 X_n2 + ... + t_k X_nk converges in distribution to t_1 X_1 + t_2 X_2 + ... + t_k X_k for each fixed (t_1, ..., t_k) ∈ R^k, that is, if every fixed linear combination of the coordinates of X_n converges in distribution to the corresponding linear combination of coordinates of X. If X_n takes values in R_+^k, then the statement is also true with (t_1, ..., t_k) ∈ R_+^k. References Theorems in measure theory Probability theorems Convergence (mathematics)
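In practice the device reduces a k-dimensional convergence question to one-dimensional ones. The following Python sketch is a Monte Carlo illustration under assumed, illustrative choices (the direction t, the covariance, and the sample sizes are arbitrary); it compares one fixed linear combination of standardized sample means with its one-dimensional normal limit.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Cramer-Wold in action: joint convergence of the standardized sample-mean
# vector is probed through a single fixed linear combination t . X_n, which
# must converge to the matching 1-D normal law.
n, reps = 500, 2000
t = np.array([0.7, -1.3])                 # an arbitrary fixed direction
cov = np.array([[1.0, 0.4], [0.4, 2.0]])

draws = rng.multivariate_normal([0.0, 0.0], cov, size=(reps, n))
Xn = np.sqrt(n) * draws.mean(axis=1)      # standardized sample means ~ N(0, cov)
proj = Xn @ t                             # 1-D projections t . X_n
sigma = np.sqrt(t @ cov @ t)              # std. dev. of the 1-D limit law

# A large p-value indicates the projection matches N(0, sigma^2), as the
# theorem requires for every direction t.
print(stats.kstest(proj, "norm", args=(0.0, sigma)))

To actually establish joint convergence, the same comparison must hold for every direction t, not just one; the sketch illustrates a single instance of the criterion.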
Cramér–Wold theorem
[ "Mathematics" ]
155
[ "Sequences and series", "Theorems in mathematical analysis", "Mathematical analysis", "Mathematical theorems", "Functions and mappings", "Convergence (mathematics)", "Mathematical analysis stubs", "Mathematical structures", "Theorems in measure theory", "Mathematical objects", "Theorems in proba...
2,951,653
https://en.wikipedia.org/wiki/Supercritical%20carbon%20dioxide
Supercritical carbon dioxide (sCO2) is a fluid state of carbon dioxide where it is held at or above its critical temperature and critical pressure. Carbon dioxide usually behaves as a gas in air at standard temperature and pressure (STP), or as a solid called dry ice when cooled and/or pressurised sufficiently. If the temperature and pressure are both increased from STP to be at or above the critical point for carbon dioxide, it can adopt properties midway between a gas and a liquid. More specifically, it behaves as a supercritical fluid above its critical temperature (about 31 °C) and critical pressure (about 7.38 MPa), expanding to fill its container like a gas but with a density like that of a liquid. Supercritical CO2 is becoming an important commercial and industrial solvent due to its role in chemical extraction, in addition to its relatively low toxicity and environmental impact. The relatively low temperature of the process and the stability of CO2 also allow compounds to be extracted with little damage or denaturing. In addition, the solubility of many extracted compounds in sCO2 varies with pressure, permitting selective extractions. Applications Solvent Carbon dioxide is gaining popularity among coffee manufacturers looking to move away from classic decaffeinating solvents. sCO2 is forced through green coffee beans, which are then sprayed with water at high pressure to remove the caffeine. The caffeine can then be isolated for resale (e.g., to pharmaceutical or beverage manufacturers) by passing the water through activated charcoal filters or by distillation, crystallization or reverse osmosis. Supercritical carbon dioxide is used to remove organochloride pesticides and metals from agricultural crops without adulterating the desired constituents from the plant matter in the herbal supplement industry. Supercritical carbon dioxide can be used as a solvent in dry cleaning. Supercritical carbon dioxide is used as the extraction solvent for creation of essential oils and other herbal distillates. Its main advantages over solvents such as hexane and acetone in this process are that it is non-flammable and does not leave toxic residue. Furthermore, separation of the reaction components from the starting material is much simpler than with traditional organic solvents. The CO2 can evaporate into the air or be recycled by condensation into a recovery vessel. Its advantage over steam distillation is that it operates at a lower temperature, which can separate the plant waxes from the oils. In laboratories, sCO2 is used as an extraction solvent, for example for determining total recoverable hydrocarbons from soils, sediments, fly-ash, and other media, and determination of polycyclic aromatic hydrocarbons in soil and solid wastes. Supercritical fluid extraction has been used in determining hydrocarbon components in water. Processes that use sCO2 to produce micro and nano scale particles, often for pharmaceutical uses, are under development. The gas antisolvent process, rapid expansion of supercritical solutions, and supercritical antisolvent precipitation (as well as several related methods) process a variety of substances into particles. Due to its ability to selectively dissolve organic compounds and assist enzyme functioning, sCO2 has been suggested as a potential solvent to support biological activity on Venus- or super-Earth-type planets. Manufactured products Environmentally beneficial, low-cost substitutes for rigid thermoplastic and fired ceramic are made using sCO2 as a chemical reagent.
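As a rough numerical illustration of the "gas-like behaviour at liquid-like density" point above (not from the article), the sketch below queries an equation-of-state library for the density of CO2 just above the critical point. It assumes the third-party CoolProp package is installed; the state points chosen are arbitrary.

```python
# pip install CoolProp   (third-party property library; availability assumed)
from CoolProp.CoolProp import PropsSI

Tc = PropsSI("Tcrit", "CO2")   # critical temperature, K (~304 K / 31 C)
Pc = PropsSI("pcrit", "CO2")   # critical pressure, Pa (~7.38 MPa)
print(f"critical point: {Tc:.2f} K, {Pc / 1e6:.2f} MPa")

# Density a little above the critical point (supercritical state) versus
# CO2 gas at ambient conditions: liquid-like vs. gas-like densities.
rho_sc = PropsSI("D", "T", Tc + 10.0, "P", 1.2 * Pc, "CO2")
rho_gas = PropsSI("D", "T", 298.15, "P", 101325.0, "CO2")
print(f"supercritical: {rho_sc:.0f} kg/m^3, ambient gas: {rho_gas:.2f} kg/m^3")
```

The supercritical density comes out hundreds of times higher than the ambient gas density, despite the fluid still filling its container like a gas.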
The sCO2 in these processes is reacted with the alkaline components of fully hardened hydraulic cement or gypsum plaster to form various carbonates. The primary byproduct is water. sCO2 is used in the foaming of polymers. Supercritical carbon dioxide can saturate the polymer with solvent. Upon depressurization and heating, the carbon dioxide rapidly expands, causing voids within the polymer matrix, i.e., creating a foam. Research is ongoing on microcellular foams. An electrochemical carboxylation of a para-isobutylbenzyl chloride to ibuprofen is promoted under sCO2. Working fluid sCO2 is chemically stable, reliable, low-cost, non-flammable and readily available, making it a desirable candidate working fluid for transcritical cycles. Supercritical CO2 is used as the working fluid in domestic water heat pumps. Such heat pumps are manufactured and widely used, and are available for domestic and business heating and cooling. While some of the more common domestic water heat pumps remove heat from the space in which they are located, such as a basement or garage, CO2 heat pump water heaters are typically located outside, where they remove heat from the outside air. Power generation The unique properties of sCO2 present advantages for closed-loop power generation and can be applied to power generation applications. Power generation systems that use traditional air Brayton and steam Rankine cycles can use sCO2 to increase efficiency and power output. The relatively new Allam power cycle uses sCO2 as the working fluid in combination with fuel and pure oxygen. The CO2 produced by combustion mixes with the sCO2 working fluid; a corresponding amount of pure CO2 must be removed from the process (for industrial use or sequestration). This process reduces atmospheric emissions to zero. sCO2 promises substantial efficiency improvements. Due to its high fluid density, sCO2 enables compact and efficient turbomachinery. It can use simpler, single casing body designs, while steam turbines require multiple turbine stages and associated casings, as well as additional inlet and outlet piping. The high density allows more compact, microchannel-based heat exchanger technology. For concentrated solar power, the critical temperature of carbon dioxide is not high enough to obtain the maximum energy conversion efficiency. Solar thermal plants are usually located in arid areas, so it is impossible to cool down the heat sink to sub-critical temperatures. Therefore, supercritical carbon dioxide blends, with higher critical temperatures, are in development to improve concentrated solar power electricity production. Further, due to its superior thermal stability and non-flammability, direct heat exchange from high temperature sources is possible, permitting higher working fluid temperatures and therefore higher cycle efficiency. Unlike two-phase flow, the single-phase nature of sCO2 eliminates the necessity of the heat input for phase change that is required for the water-to-steam conversion, thereby also eliminating associated thermal fatigue and corrosion. The use of sCO2 presents corrosion engineering, material selection and design issues. Materials in power generation components must display resistance to damage caused by high temperature, oxidation and creep. Candidate materials that meet these property and performance goals include incumbent alloys in power generation, such as nickel-based superalloys for turbomachinery components and austenitic stainless steels for piping.
Components within sCO2 Brayton loops suffer from corrosion and erosion, specifically erosion in turbomachinery and recuperative heat exchanger components, and intergranular corrosion and pitting in the piping. Testing has been conducted on candidate Ni-based alloys, austenitic steels, ferritic steels and ceramics for corrosion resistance in sCO2 cycles. The interest in these materials derives from their formation of protective surface oxide layers in the presence of carbon dioxide; however, in most cases further evaluation of the reaction mechanics and corrosion/erosion kinetics and mechanisms is required, as none of the materials meet the necessary goals. In 2016, General Electric announced an sCO2-based turbine that enabled a 50% efficiency of converting heat energy to electrical energy. In it, the CO2 is heated to 700 °C. It requires less compression and allows heat transfer. It reaches full power in 2 minutes, whereas steam turbines need at least 30 minutes. The prototype generated 10 MW and is approximately 10% the size of a comparable steam turbine. The 10 MW US$155-million Supercritical Transformational Electric Power (STEP) pilot plant was completed in 2023 in San Antonio. Its turbine is the size of a desk, yet the plant can power around 10,000 homes. Other Work is underway to develop an sCO2 closed-cycle gas turbine to operate at temperatures near 550 °C. This would have implications for bulk thermal and nuclear generation of electricity, because the supercritical properties of carbon dioxide above 500 °C and 20 MPa enable thermal efficiencies approaching 45 percent. This could increase the electrical power produced per unit of fuel required by 40 percent or more. Given the volume of carbon fuels used in producing electricity, the environmental impact of cycle efficiency increases would be significant. Supercritical CO2 is an emerging natural refrigerant, used in new, low carbon solutions for domestic heat pumps. Supercritical CO2 heat pumps are commercially marketed in Asia. EcoCute systems from Japan, developed by Mayekawa, produce high temperature domestic water with small inputs of electric power by moving heat into the system from the surroundings. Supercritical CO2 has been used since the 1980s to enhance recovery in mature oil fields. "Clean coal" technologies are emerging that could combine such enhanced recovery methods with carbon sequestration. Using gasifiers instead of conventional furnaces, coal and water are reduced to hydrogen gas, carbon dioxide and ash. This hydrogen gas can be used to produce electrical power in combined cycle gas turbines, while the CO2 is captured, compressed to the supercritical state and injected into geological storage, possibly into existing oil fields to improve yields. Supercritical CO2 can be used as a working fluid for geothermal electricity generation in both enhanced geothermal systems (EGS) and sedimentary geothermal systems (so-called CO2 Plume Geothermal, CPG). EGS systems utilize an artificially fractured reservoir in basement rock, while CPG systems utilize shallower naturally-permeable sedimentary reservoirs. Possible advantages of using CO2 in a geologic reservoir, compared to water, include higher energy yield resulting from its lower viscosity, better chemical interaction, and permanent CO2 storage, as the reservoir must be filled with large masses of CO2. As of 2011, the concept had not been tested in the field. Aerogel production Supercritical carbon dioxide is used in the production of silica, carbon and metal based aerogels. For example, silicon dioxide gel is formed and then exposed to sCO2.
When the CO2 goes supercritical, all surface tension is removed, allowing the liquid to leave the aerogel and produce nanometer-sized pores. Sterilization of biomedical materials Supercritical CO2 is an alternative to thermal sterilization of biological materials and medical devices, used in combination with the additive peracetic acid (PAA). Supercritical CO2 alone does not sterilize the media, because it does not kill the spores of microorganisms. Moreover, this process is gentle, as the morphology, ultrastructure and protein profiles of inactivated microbes are preserved. Cleaning Supercritical CO2 is used in certain industrial cleaning processes. See also Caffeine Dry cleaning Perfume Supercritical fluid Atmosphere of Venus, nearly all carbon dioxide, supercritical at the surface References Further reading Mukhopadhyay M. (2000). Natural extracts using supercritical carbon dioxide. United States: CRC Press, LLC. Free preview at Google Books. Carbon dioxide Gas technologies Industrial gases Inorganic solvents
Supercritical carbon dioxide
[ "Chemistry" ]
2,212
[ "Greenhouse gases", "Carbon dioxide", "Industrial gases", "Chemical process engineering" ]
2,951,670
https://en.wikipedia.org/wiki/Matthew%20Krok
Matthew Krok (born 8 March 1982) is a former Australian child actor best known for playing the role of schoolboy Arthur McArthur on the Australian sitcom Hey Dad...! from 1991 to 1994. He also appeared in a popular Sorbent toilet paper advertising campaign at around the same time. Career During the peak of his stardom in the early 1990s, Krok appeared as a celebrity guest on Wheel of Fortune and was also frequently referred to as "the little fat kid from Hey Dad...!". In an infamous The Late Show skit, "Arnold Schwarzenegger" (played by Tony Martin in heavy prosthetic makeup) jokingly revealed that the plot of the then yet-to-be-released Terminator 3 revolved around killing him and said, "Hasta la vista, little fat kid!". Krok's other credits include the children's films Paws and Joey. His last credited acting appearance was in the 2001 children's television series Outriders. Personal life In 2003, The Sydney Morning Herald revealed that the actor began a two-year stint as a Mormon missionary by the name of Elder Krok. Prior to this, he commenced studying for a degree in civil engineering at the University of Western Sydney and has since transferred to the University of New South Wales where he is undertaking a double degree in civil and environmental engineering. Matthew married Jade Bennallack on 5 July 2008 at the Sydney Australia Temple in Carlingford. Filmography References External links 1982 births Living people Australian Latter Day Saints Male actors from Sydney Australian civil engineers Australian male child actors Australian male television actors Environmental engineers
Matthew Krok
[ "Chemistry", "Engineering" ]
332
[ "Environmental engineers", "Environmental engineering" ]
2,951,818
https://en.wikipedia.org/wiki/Mitomycins
The mitomycins are a family of aziridine-containing natural products isolated from Streptomyces caespitosus or Streptomyces lavendulae. They include mitomycin A, mitomycin B, and mitomycin C. When the name mitomycin occurs alone, it usually refers to mitomycin C, its international nonproprietary name. Mitomycin C is used as a medicine for treating various disorders associated with the growth and spread of cells. Biosynthesis In general, the biosynthesis of all mitomycins proceeds via combination of 3-amino-5-hydroxybenzoic acid (AHBA), D-glucosamine, and carbamoyl phosphate, to form the mitosane core, followed by specific tailoring steps. The key intermediate, AHBA, is a common precursor to other anticancer drugs, such as rifamycin and ansamycin. Specifically, the biosynthesis begins with the addition of phosphoenolpyruvate (PEP) to erythrose-4-phosphate (E4P) with a yet undiscovered enzyme, which is then ammoniated to give 4-amino-3-deoxy-D-arabino heptulosonic acid-7-phosphate (aminoDHAP). Next, DHQ synthase catalyzes a ring closure to give 4-amino-3-dehydroquinate (aminoDHQ), which then undergoes a double oxidation via aminoDHQ dehydratase to give 4-amino-dehydroshikimate (aminoDHS). The key intermediate, 3-amino-5-hydroxybenzoic acid (AHBA), is made via aromatization by AHBA synthase. Synthesis of the key intermediate, 3-amino-5-hydroxybenzoic acid. The mitosane core is synthesized via condensation of AHBA and D-glucosamine, although no specific enzyme has been characterized that mediates this transformation. Once this condensation has occurred, the mitosane core is tailored by a variety of enzymes. Both the sequence and the identity of these steps are yet to be determined. Complete reduction of C-6 – Likely via F420-dependent tetrahydromethanopterin (H4MPT) reductase and H4MPT:CoM methyltransferase Hydroxylation of C-5, C-7 (followed by transamination), and C-9a – Likely via cytochrome P450 monooxygenase or benzoate hydroxylase O-Methylation at C-9a – Likely via SAM-dependent methyltransferase Oxidation at C-5 and C-8 – Unknown Intramolecular amination to form the aziridine – Unknown Carbamoylation at C-10 – Carbamoyl transferase, with carbamoyl phosphate (C4P) being derived from L-citrulline or L-arginine Biological effects In the bacterium Legionella pneumophila, mitomycin C induces competence for transformation. Natural transformation is a process of DNA transfer between cells, and is regarded as a form of bacterial sexual interaction. In the fruit fly Drosophila melanogaster, exposure to mitomycin C increases recombination during meiosis, a key stage of the sexual cycle. In the plant Arabidopsis thaliana, mutant strains defective in genes necessary for recombination during meiosis and mitosis are hypersensitive to killing by mitomycin C. Medicinal uses and research Mitomycin C has been shown to have activity against stationary phase persisters caused by Borrelia burgdorferi, a factor in Lyme disease. Mitomycin C is used to treat pancreatic and stomach cancer, and is under clinical research for its potential to treat gastrointestinal strictures, wound healing from glaucoma surgery, corneal excimer laser surgery and endoscopic dacryocystorhinostomy. References DNA replication inhibitors IARC Group 2B carcinogens Quinones Carbamates Ethers Aziridines Nitrogen heterocycles Heterocyclic compounds with 4 rings Enones Methoxy compounds
Mitomycins
[ "Chemistry" ]
910
[ "Organic compounds", "Functional groups", "Ethers" ]
2,951,953
https://en.wikipedia.org/wiki/Argo%20%28ROV%29
Argo is an unmanned deep-towed undersea video camera sled developed by Dr. Robert Ballard through the Woods Hole Oceanographic Institution's Deep Submergence Laboratory. Argo is most famous for its role in the discovery of the wreck of the RMS Titanic in 1985. Argo also played the key role in Ballard's discovery of the wreck of the battleship Bismarck in 1989. The towed sled, capable of operating at depths of 6,000 meters (20,000 feet), meant 98% of the ocean floor was within its reach. The original Argo, used to find Titanic, was long, tall, and wide and weighed about in air. It had an array of cameras looking forward and down, as well as strobes and incandescent lighting to illuminate the ocean floor. It could acquire wide-angle film and television pictures while flying above the sea floor, towed from a surface vessel, and could also zoom in for detailed views. See also Acoustically Navigated Geological Underwater Survey (ANGUS) References Oceanographic instrumentation Unmanned underwater vehicles
Argo (ROV)
[ "Technology", "Engineering" ]
216
[ "Oceanographic instrumentation", "Measuring instruments" ]
2,952,019
https://en.wikipedia.org/wiki/Acoustically%20Navigated%20Geological%20Underwater%20Survey
The Acoustically Navigated Geological Underwater Survey (ANGUS) was a deep-towed still-camera sled operated by the Woods Hole Oceanographic Institution (WHOI) in the early 1970s. It was the first unmanned research vehicle made by WHOI. ANGUS was encased in a large steel frame designed to explore rugged volcanic terrain and able to withstand high impact collisions. It was fitted with three 35 mm color cameras with of film. Together, its three cameras were able to photograph a strip of the sea floor with a width up to . Each camera was equipped with strobe lights allowing them to photograph the ocean floor from above. On the bottom of the body was a downward-facing sonar system to monitor the sled's height above the ocean floor. It was capable of working in depths up to and could therefore reach roughly 98% of the sea floor. ANGUS could remain in the deep ocean for work sessions of 12 to 14 hours at a time, taking up to 16,000 photographs in one session. ANGUS was often used to scout locations of interest that would later be explored and sampled by other vehicles such as Argo or Alvin. ANGUS was used to search for and photograph underwater geysers and the creatures living near them, and it was equipped with a heat sensor to alert the tether-ship when it passed over one. It was used on expeditions such as Project FAMOUS (French-American Mid Ocean Undersea Study, 1973–1974), the discovery expedition with Argo to survey the wreckage of the Titanic (1985), and the return mission to the Titanic (1986). ANGUS was the only ROV used on both dives to the Titanic. On Project FAMOUS, ANGUS helped change scientists' views of the ocean floor. It showed them how different geological formations and chemical compositions of sediments can be, disproving previous assumptions of ocean floor uniformity. The project also provided new insight into the theory of seafloor spreading by observing and sampling the rock formations around ridges and the horizontal formation of layers parallel to the ridge. In another expedition with ANGUS, in 1977, scientists monitored temperatures over the ocean floor for any fluctuation. It was not until late at night that the crew noticed temperatures rise drastically. They then reviewed the photographic footage taken during the vehicle's session. ANGUS provided the first photographic evidence for hydrothermal vents and black smokers. It had returned with over 3,000 color photos showing both vents as well as colonies of clams and other organisms. The team later returned with Alvin to take samples. Scientists nicknamed ANGUS "dope on a rope" due to its durability and lack of fragile sensors. It was also given the motto "takes a lickin' but it keeps on clickin'". ANGUS was retired in the late 1980s, having completed over 250 voyages. References External links Project FAMOUS: Exploring the Mid-Atlantic Ridge Oceanography
Acoustically Navigated Geological Underwater Survey
[ "Physics", "Environmental_science" ]
578
[ "Oceanography", "Hydrology", "Applied and interdisciplinary physics" ]
2,952,350
https://en.wikipedia.org/wiki/Cotton%20effect
The Cotton effect, in physics, is the characteristic change in optical rotatory dispersion and/or circular dichroism in the vicinity of an absorption band of a substance. In a wavelength region where the light is absorbed, the absolute magnitude of the optical rotation at first varies rapidly with wavelength, crosses zero at the absorption maximum and then again varies rapidly with wavelength but in the opposite direction. This phenomenon was discovered in 1895 by the French physicist Aimé Cotton (1869–1951). The Cotton effect is called positive if the optical rotation first increases as the wavelength decreases (as first observed by Cotton), and negative if the rotation first decreases. A protein structure such as a beta sheet shows a negative Cotton effect. See also Cotton–Mouton effect References Polarization (waves) Atomic, molecular, and optical physics
Cotton effect
[ "Physics", "Chemistry" ]
167
[ " and optical physics stubs", "Astrophysics", " molecular", "Atomic", "Polarization (waves)", "Physical chemistry stubs", " and optical physics" ]
2,952,363
https://en.wikipedia.org/wiki/Optical%20rotatory%20dispersion
In optics, optical rotatory dispersion is the variation of the specific rotation of a medium with respect to the wavelength of light. It is usually described by the German physicist Paul Drude's empirical relation: $[\alpha]_\lambda^T = \frac{k}{\lambda^2 - \lambda_0^2}$, where $[\alpha]_\lambda^T$ is the specific rotation at temperature $T$ and wavelength $\lambda$, and $k$ and $\lambda_0$ are constants that depend on the properties of the medium. Optical rotatory dispersion has applications in organic chemistry regarding determining the structure of organic compounds. Principles of operation When polarized white light passes through an optically active medium, the extent of rotation of the light depends on its wavelength. Short wavelengths are rotated more than longer wavelengths, per unit of distance. Because the wavelength of light determines its color, a variation of color with distance through the tube is observed. This dependence of specific rotation on wavelength is called optical rotatory dispersion. In all materials the rotation varies with wavelength. The variation is caused by two quite different phenomena. The first accounts in most cases for the majority of the variation in rotation and should not strictly be termed rotatory dispersion. It depends on the fact that optical activity is actually circular birefringence. In other words, a substance which is optically active transmits right circularly polarized light with a different velocity from left circularly polarized light. In addition to this pseudodispersion which depends on the material thickness, there is a true rotatory dispersion which depends on the variation with wavelength of the indices of refraction for right and left circularly polarized light. For wavelengths that are absorbed by the optically active sample, the two circularly polarized components will be absorbed to differing extents. This unequal absorption is known as circular dichroism. Circular dichroism causes incident linearly polarized light to become elliptically polarized. The two phenomena are closely related, just as are ordinary absorption and dispersion. If the entire optical rotatory dispersion spectrum is known, the circular dichroism spectrum can be calculated, and vice versa. Chirality In order for a molecule (or crystal) to exhibit circular birefringence and circular dichroism, it must be distinguishable from its mirror image. An object that cannot be superimposed on its mirror image is said to be chiral, and optical rotatory dispersion and circular dichroism are known as chiroptical properties. Most biological molecules have one or more chiral centers and undergo enzyme-catalyzed transformations that either maintain or invert the chirality at one or more of these centers. Still other enzymes produce new chiral centers, always with a high specificity. These properties account for the fact that optical rotatory dispersion and circular dichroism are widely used in organic and inorganic chemistry and in biochemistry. In the absence of magnetic fields, only chiral substances exhibit optical rotatory dispersion and circular dichroism. In a magnetic field, even substances that lack chirality rotate the plane of polarized light, as shown by Michael Faraday. Magnetic optical rotation is known as the Faraday effect, and its wavelength dependence is known as magnetic optical rotatory dispersion. In regions of absorption, magnetic circular dichroism is observable. See also Absorption Circular dichroism Enzyme Magnetic circular dichroism Polarimetry Polarography Hyper–Rayleigh scattering optical activity Raman optical activity (ROA) Stereochemistry References Polarization (waves) Stereochemistry
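The Drude relation above is straightforward to evaluate numerically. In the sketch below the constants k and λ0 are illustrative placeholders rather than measured values for any real medium; the point is only the qualitative behaviour, namely that the rotation grows rapidly as the wavelength approaches λ0.

```python
def drude_rotation(lam_nm, k=1.0e7, lam0_nm=250.0):
    """Drude relation [alpha] = k / (lam^2 - lam0^2); k, lam0 are placeholders."""
    return k / (lam_nm**2 - lam0_nm**2)

# Rotation (arbitrary units) rises steeply as lam approaches lam0 = 250 nm.
for lam in (700, 589, 450, 350, 300):
    print(f"{lam} nm: {drude_rotation(lam):8.2f}")
```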
Optical rotatory dispersion
[ "Physics", "Chemistry" ]
704
[ "Stereochemistry", "Astrophysics", "Space", "nan", "Spacetime", "Polarization (waves)" ]
2,952,442
https://en.wikipedia.org/wiki/Beijing%E2%80%93Shanghai%20high-speed%20railway
The Beijing–Shanghai high-speed railway (or Jinghu high-speed railway) is a high-speed railway that connects two major economic zones in the People's Republic of China: the Bohai Economic Rim and the Yangtze River Delta. Construction began on April 18, 2008, with the line opened to the public for commercial service on June 30, 2011. The 1,318-kilometre (819 mi) high-speed line is the world's longest high-speed line ever constructed in a single phase. The line is one of the busiest high-speed railways in the world, transporting over 210 million passengers in 2019, more than the annual ridership of the entire TGV or Intercity Express network. It is also China's most profitable high-speed rail line, reporting a ¥11.9 billion ($1.86 billion) net profit in 2019. The non-stop train from Beijing South station to Shanghai Hongqiao station was expected to take 3 hours and 58 minutes, making it the fastest scheduled train in the world, compared to 9 hours and 49 minutes on the fastest trains running on the parallel conventional railway. At first, trains were limited to a maximum speed of 300 km/h (186 mph), with the fastest train taking 4 hours and 48 minutes to travel from Beijing South to Shanghai Hongqiao, with one stop at Nanjing South. On September 21, 2017, 350 km/h (217 mph) operation was restored with the introduction of the China Standardized EMU. This reduced travel times between Beijing and Shanghai to about 4 hours 18 minutes on the fastest scheduled trains, attaining an average speed of over 300 km/h (186 mph) and making those services the fastest in the world. The Beijing–Shanghai high-speed railway went public on the Shanghai Stock Exchange in 2020. Specifications The Beijing–Shanghai High-Speed Railway Co., Ltd. was in charge of construction. The project was expected to cost 220 billion yuan (about $32 billion). An estimated 220,000 passengers are expected to use the trains each day, which is double the current capacity. During peak hours, trains should run every five minutes. About 87% of the railway is elevated. There are 244 bridges along the line. The 164.8-kilometre (102.4 mi) Danyang–Kunshan Grand Bridge is the longest bridge in the world, the 113.7-kilometre (70.6 mi) viaduct bridge between Langfang and Qingxian is the second longest in the world, and the Cangde Grand Bridge between Beijing's 4th Ring Road and Langfang is the fifth longest. The line also includes 22 tunnels. Most of the length is laid with ballastless track. According to Zhang Shuguang, then deputy chief designer of China's high-speed railway network, the designed continuous operating speed is 350 km/h (217 mph), with a maximum speed of up to 380 km/h (236 mph). The planned average commercial speed from Beijing to Shanghai would have cut the train travel time from 10 hours to 4 hours. The rolling stock used on this line consists mainly of CRH380 trains. The CTCS-3 based train control system is used on the line, to allow for the line's maximum running speed and a minimum train interval of 3 minutes. With a capacity of about 1,050 passengers per train, the energy consumption per passenger from Beijing to Shanghai should be less than 80 kWh. History Beijing and Shanghai were not linked by rail until 1912, when the Jinpu railway was completed between Tianjin and Pukou. Together with the existing railway between Beijing and Tianjin, completed in 1900, and the Huning railway between Nanjing and Shanghai, opened in 1908, this completed the route, interrupted only by a ferry between Pukou and Nanjing across the Yangtze River. A weekly Beijing–Shanghai direct train was first introduced in 1913.
In 1933, a train ride from Beijing to Shanghai took around 44 hours. Passengers had to get off in Pukou with their luggage, board a ferry named "Kuaijie" across the Yangtze, and get on another connecting train in Xiaguan on the other side of the river. In 1933, the Nanjing Train Ferry was opened for service. The new train ferry, "Changjiang" (Yangtze), built by a British company, was able to carry 21 freight cars or 12 passenger cars. Passengers could remain on the train when crossing the river, and the travel time was thus cut to around 36 hours. The train service was suspended during the Japanese invasion. In 1949, the trip from Shanghai's North railway station toward Beijing (then Beiping) took 36 hours, 50 minutes. In 1956, the trip time was cut to 28 hours, 17 minutes. In the early 1960s, the travel time was further cut to 23 hours, 39 minutes. In October 1968, the Nanjing Yangtze River Bridge was opened, and the travel time was cut to 21 hours, 34 minutes. As new diesel locomotives were introduced in the 1970s, the speed was increased further. In 1986, the travel time was 16 hours, 59 minutes. China introduced six rounds of speed-ups on the line from 1997 to 2007. In October 2001, train T13/T14 took about 14 hours from Beijing to Shanghai. On April 18, 2004, Z-series trains were introduced, and the trip time was cut to 11 hours, 58 minutes. There were five trains departing around 7 pm every day, each 7 minutes apart, arriving at their destination the next morning. The railway was completely electrified in 2006. On April 18, 2007, the new CRH bullet train was introduced on the upgraded railway as part of the Sixth Railway Speed-Up Campaign. A day-time train, D31, served the route, departing from Beijing at 10:50 every morning and arriving at Shanghai at 20:49 in the evening, travelling mostly at 200 km/h (up to 250 km/h in a very short section between Anting and Shanghai West). In 2008, overnight sleeper CRH trains were introduced, replacing the locomotive-hauled Z sleeper trains. With a new high-speed intercity line opening between Nanjing and Shanghai in the summer of 2010, the sleeper trains made use of the high-speed line in the Shanghai–Nanjing section, travelling at higher speed for a longer distance. The fastest sleeper trains took 9 hours, 49 minutes, with four intermediate stops. As the Nanjing Yangtze Bridge connected the two sections of the railway into a continuous line, the entire railway between Beijing and Shanghai was renamed the Jinghu Railway, with Jing (京) being the standard Chinese abbreviation for Beijing, and Hu (沪) short for Shanghai. The Jinghu Railway has served as China's busiest railway for nearly a century. Due to rapid growth in passenger and freight traffic in the last 20 years, this line has reached and surpassed capacity. Dedicated high-speed rail proposal The Jinghu high-speed railway was proposed in the early 1990s, because one quarter of the country's population lived along the existing Beijing–Shanghai rail line. In December 1990, the Ministry of Railways submitted to the National People's Congress a proposal to build the Beijing–Shanghai high-speed railway parallel to the existing Beijing–Shanghai railway line. In 1995, Premier Li Peng announced that work on the Beijing–Shanghai high-speed railway would begin in the 9th Five-Year Plan (1996–2000). The Ministry's initial design for the high-speed rail line was completed, and a report was submitted for state approval in June 1998.
The construction plan was set in 2004, after a five-year debate on whether to use steel-on-steel rail track or maglev technology. Maglev was not chosen due to its incompatibility with China's existing rail-and-track technology and its high price, roughly twice that of conventional rail technology. Technology debate Although engineers originally said construction could take until 2015, China's Ministry of Railways initially promised a 2010 opening date for the new line. However, the Ministry did not anticipate an ensuing debate over the possible use of maglev technology. Although more traditional steel-on-steel rail technology was chosen for the railway, the technology debate resulted in a substantial delay of the railway's feasibility studies, which were completed in March 2006. The current rolling stock is the CRH380AL, a Chinese electric high-speed train developed by China South Locomotive & Rolling Stock Corporation Limited (CSR). The CRH380A is one of the four Chinese train series designed for the new standard operating speed of 380 km/h (236 mph) on newly constructed Chinese high-speed main lines; the other three are the CRH380B, CRH380C and CRH380D. Engineering challenges Testing began shortly thereafter on the main line section between Shanghai and Nanjing. This section of the line sits on the soft soil of the Yangtze Delta, giving engineers an example of the more difficult challenges they would face in later construction. Beyond these civil engineering challenges, the high-speed trains themselves use extensive amounts of aluminium alloy, with specially designed windscreen glass capable of withstanding bird strikes. Construction Construction work began on April 18, 2008. Track-laying was started on July 19, 2010, and completed on November 15, 2010. On December 3, 2010, a 16-car CRH380AL trainset set a speed record of 486.1 km/h (302.1 mph) on the Zaozhuang West to Bengbu section of the line during a test run. On January 10, 2011, another 16-car modified CRH380BL train set a speed record of 487.3 km/h (302.8 mph) during a test run. The overhead catenary work was completed on February 4, 2011 for the entire line. According to CCTV, more than 130,000 construction workers and engineers were at work at the peak of the construction phase. According to the Ministry of Railways, construction used twice as much concrete as the Three Gorges Dam, and 120 times the amount of steel in the Beijing National Stadium. There are 244 bridges and 22 tunnels built to standardized designs, and the route is monitored by 321 seismic, 167 windspeed and 50 rainfall sensors. Start of service Tickets were put on sale at 09:00 on June 24, 2011, and sold out within an hour. To compete with the new train service, airlines slashed the cost of flights between Beijing and Shanghai by up to 65%. Economy air fares between Beijing and Shanghai fell by 52%. Sleeper bullet trains on the upgraded railway were cancelled at the beginning, but later resumed. The new line will increase the freight capacity of the old line between Beijing and Shanghai by 50 million tons per year. In its second week in service, the system experienced three malfunctions in four days. On July 10, 2011, trains were delayed after heavy winds and a thunderstorm caused power supply problems in Shandong. On July 12, 2011, trains were delayed again when another power failure occurred in Suzhou. On July 13, 2011, a transformer malfunction in Changzhou forced a train to halve its top speed, and passengers had to transfer to a backup train.
Within two weeks after opening, airline prices had rebounded due to frequent malfunctions on the line. Airline ticket sales were only down 5% in July 2011 compared to June 2011, after the opening of the line. On August 12, 2011, after several delays caused by equipment problems, 54 CRH380BL trains running on this line were recalled by their manufacturer. They returned to regular service on November 16, 2011. A spokesman for the Ministry of Railways apologized for the glitches and delays, stating that in the two weeks since service had begun only 85.6% of trains had arrived on time. Finances In 2006, it was estimated that the line would cost between CN¥130 billion (US$16.25 billion) and ¥170 billion ($21.25 billion). The following year, the estimated cost had been revised to ¥200 billion ($25 billion), or ¥150 million per kilometer. Due to rapid rises in the costs of labor, construction materials and land acquisitions over the previous years, by July 2008 the estimated cost had increased to ¥220 billion ($32 billion). By then, the state-owned company Beijing–Shanghai High-Speed Railway, established to raise funds for the project, had raised ¥110 billion, with the remainder to be sourced from local governments, share offerings, bank loans and, for the first time for a railway project, foreign investment. In the end, investment in the project totaled ¥217.6 billion ($34.7 billion). In 2016, it was revealed that in the previous year the Beijing–Shanghai High-Speed Railway Company (BSHSRC) had total assets of ¥181.54 billion ($28 billion), revenue of ¥23.42 billion ($3.6 billion) and a net profit of ¥6.58 billion (US$1 billion), making it the most profitable railway line in the world. In 2019, the Jinghu Express Railway Company submitted an application for an IPO. The company announced that the Jinghu HSR recorded a net profit of ¥9.5 billion (US$1.35 billion) in the first nine months of 2019. In 2020, BSHSRC went public, as the first high-speed rail operator in China. The proceeds of the IPO will be used to purchase a 65% stake in the Beijing Fuzhou Railway Passenger Dedicated Line Anhui Company, which operates the Hefei–Bengbu high-speed railway, Hefei–Fuzhou high-speed railway (Anhui section), Shangqiu–Hangzhou high-speed railway (Anhui section, still under construction) and Zhengzhou–Fuyang high-speed railway (Anhui section). Rolling stock Services use the CR400AF, CR400BF, CRH380A, CRH380B, and CRH380C trainsets; prior to 2014, slower services used CRH2 and CRH5 trainsets. First and Second Class coaches are available on all trains. On the shorter trains, a six-person Premier Class compartment is available. Available on the longer trains are up to 28 Business Class seats and a full-length dining car. Operation and ridership More than 90 trains a day run between Beijing South and Shanghai Hongqiao from 07:00 until 18:00. The line's average ridership in its initial two weeks of operation was 165,000 passengers daily, while 80,000 passengers every day continued to ride on the slower and less expensive old railway. The figure of 165,000 daily riders was three-quarters of the forecast of 220,000 daily riders. After the opening, passenger numbers continued to grow, with 230,000 passengers using the line each day by 2013. By March 2013, the line had carried 100 million passengers. By 2015, ridership grew to 489,000 passengers per day. By 2017, average ridership reached over 500,000 passengers per day.
The line has gradually gained popularity over the years and is reaching its capacity at weekends and holidays. With the introduction of the China Standardized EMU, the highest operating speed of the line was raised to 350 km/h (217 mph) on September 21, 2017. The fastest train (G7) completes the journey in 4 hours 18 minutes, making two stops along the trip, at Jinan and Nanjing. In 2019, in response to high passenger demand, 17-car-long Fuxing trains started operating on the line. Fares On June 13, 2011, the list of fares was announced at a Ministry of Railways press conference. The fares from Beijing South to Shanghai Hongqiao in RMB yuan are listed below: Note: *Only available on services using the CRH380AL, CRH380BL and CRH380CL trains Online ticketing service Passengers can buy tickets online. If the passenger uses a 2nd-generation PRC ID Card or an international passport, they can use this card directly as the ticket to enter the station and pass the ticketing gates. Components Stations and service There are 24 stations on the line. Cruising speeds vary depending on the service. Fares are calculated based on distance traveled, regardless of speed and travel time. More than 40 pairs of daily scheduled train services travel end-to-end along this route, and hundreds more use only a segment of it. Note: * – Lines in italic text are under construction or planned The travel time column in the following table lists only the shortest possible time to get to a certain station from Beijing. Different services make different stops along the way, and there is no service that stops at every station. Bridges The railway line has some of the longest bridges in the world. They include: Danyang–Kunshan Grand Bridge – longest bridge in the world. Tianjin Grand Bridge – fourth longest bridge in the world. Beijing Grand Bridge Cangzhou–Dezhou Grand Bridge Nanjing Qinhuai River Bridge Zhenjiang Beijing–Hangzhou Canal Bridge Notes From its native Mandarin name. References External links High-speed railway lines in China Standard gauge railways in China 2011 establishments in China Rail transport in Shanghai Railway lines opened in 2011 25 kV AC railway electrification Companies in the CSI 100 Index 2011 in Shanghai 2011 in Beijing
Beijing–Shanghai high-speed railway
[ "Engineering" ]
3,466
[ "Megaprojects" ]
2,952,577
https://en.wikipedia.org/wiki/Safety%20integrity%20level
In functional safety, safety integrity level (SIL) is defined as the relative level of risk reduction provided by a safety instrumented function (SIF), i.e. the measurement of the performance required of the SIF. In the functional safety standards based on the IEC 61508 standard, four SILs are defined, with SIL 4 being the most dependable and SIL 1 the least. The applicable SIL is determined based on a number of quantitative factors in combination with qualitative factors, such as risk assessments and safety lifecycle management. Other standards, however, may have different SIL number definitions. SIL allocation Assignment, or allocation, of SIL is an exercise in risk analysis where the risk associated with a specific hazard, which is intended to be protected against by a SIF, is calculated without the beneficial risk reduction effect of the SIF. That unmitigated risk is then compared against a tolerable risk target. The difference between the unmitigated risk and the tolerable risk, if the unmitigated risk is higher than tolerable, must be addressed through risk reduction provided by the SIF. This amount of required risk reduction is correlated with the SIL target. In essence, each order of magnitude of risk reduction that is required correlates with an increase in SIL, up to a maximum of SIL 4. Should the risk assessment establish that the required SIL cannot be achieved even by a SIL 4 SIF, then alternative arrangements must be designed, such as non-instrumented safeguards (e.g., a pressure relief valve). There are several methods used to assign a SIL. These are normally used in combination, and may include: Risk matrices Risk graphs Layer of protection analysis (LOPA) Of the methods presented above, LOPA is by far the most commonly used in large industrial facilities, such as chemical process plants. The assignment may be tested using both pragmatic and controllability approaches, applying industry guidance such as that published by the UK HSE. SIL assignment processes that use the HSE guidance to ratify assignments developed from risk matrices have been certified to meet IEC 61508. Problems There are several problems inherent in the use of safety integrity levels. These can be summarized as follows: Poor harmonization of definition across the different standards bodies which utilize SIL. Process-oriented metrics for derivation of SIL. Estimation of SIL based on reliability estimates. System complexity, particularly in software systems, making SIL estimation difficult to impossible. These lead to such erroneous statements as the tautology "This system is a SIL N system because the process adopted during its development was the standard process for the development of a SIL N system", or use of the SIL concept out of context such as "This is a SIL 3 heat exchanger" or "This software is SIL 2". According to IEC 61508, the SIL concept must be related to the dangerous failure rate of a system, not just its failure rate or the failure rate of a component part, such as the software. Definition of the dangerous failure modes by safety analysis is intrinsic to the proper determination of the failure rate. SIL types and certification The International Electrotechnical Commission's (IEC) standard IEC 61508 defines SIL using requirements grouped into two broad categories: hardware safety integrity and systematic safety integrity. A device or system must meet the requirements for both categories to achieve a given SIL.
The SIL requirements for hardware safety integrity are based on a probabilistic analysis of the device. In order to achieve a given SIL, the device must meet targets for the maximum probability of dangerous failure and a minimum safe failure fraction. The concept of 'dangerous failure' must be rigorously defined for the system in question, normally in the form of requirement constraints whose integrity is verified throughout system development. The actual targets required vary depending on the likelihood of a demand, the complexity of the device(s), and the types of redundancy used. PFD (probability of dangerous failure on demand) and RRF (risk reduction factor) of low demand operation for different SILs as defined in IEC EN 61508 are as follows: SIL 1: PFD ≥ 10^-2 to < 10^-1, RRF 10 to 100; SIL 2: PFD ≥ 10^-3 to < 10^-2, RRF 100 to 1,000; SIL 3: PFD ≥ 10^-4 to < 10^-3, RRF 1,000 to 10,000; SIL 4: PFD ≥ 10^-5 to < 10^-4, RRF 10,000 to 100,000. For continuous operation, these change to the following, where PFH is the probability of dangerous failure per hour: SIL 1: PFH ≥ 10^-6 to < 10^-5; SIL 2: PFH ≥ 10^-7 to < 10^-6; SIL 3: PFH ≥ 10^-8 to < 10^-7; SIL 4: PFH ≥ 10^-9 to < 10^-8. Hazards of a control system must be identified, then analysed through risk analysis. Mitigation of these risks continues until their overall contribution to the hazard is considered acceptable. The tolerable level of these risks is specified as a safety requirement in the form of a target 'probability of a dangerous failure' in a given period of time, stated as a discrete SIL. Certification schemes, such as the CASS Scheme (Conformity Assessment of Safety-related Systems), are used to establish whether a device meets a particular SIL. Third parties that can provide certification include Bureau Veritas, CSA Group, TÜV Rheinland, TÜV SÜD and UL, among others. Self-certification is also possible. The requirements of these schemes can be met either by establishing a rigorous development process, or by establishing that the device has sufficient operating history to argue that it has been proven in use. Certification is achieved by proving the functional safety capability (FSC) of the organization, usually by assessment of its functional safety management (FSM) program, and the assessment of the design and life-cycle activities of the product to be certified, which is conducted based on specifications, design documents, test specifications and results, failure rate predictions, FMEAs, etc. Electric and electronic devices can be certified for use in functional safety applications according to IEC 61508. There are a number of application-specific standards based on or adapted from IEC 61508, such as IEC 61511 for the process industry sector. This standard is used in the petrochemical and hazardous chemical industries, among others. Standards The following standards use SIL as a measure of reliability and/or risk reduction.
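A sketch of the LOPA-style arithmetic described under SIL allocation, using the low-demand PFD bands above (a simplified illustration, not a certified tool; the function name and example frequencies are assumptions):

```python
# Low-demand PFD bands per SIL, as in the IEC 61508 table above.
SIL_BANDS = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2), 3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}

def required_sil(unmitigated_freq, tolerable_freq):
    """Map a hazard's required risk reduction to a SIL target (low-demand mode)."""
    rrf = unmitigated_freq / tolerable_freq   # required risk reduction factor
    if rrf <= 1.0:
        return 0                              # already tolerable; no SIF needed
    pfd_required = 1.0 / rrf
    if pfd_required >= 1e-1:
        return 1                              # under one order of magnitude
    for sil, (lo, hi) in SIL_BANDS.items():
        if lo <= pfd_required < hi:
            return sil
    raise ValueError("required risk reduction exceeds SIL 4; redesign the safeguards")

# Example: a hazard at 5e-2 /yr against a tolerable target of 1e-5 /yr
# needs RRF = 5000, i.e. PFD <= 2e-4, which falls in the SIL 3 band.
print(required_sil(5e-2, 1e-5))   # -> 3
```

Note how the raise branch mirrors the text above: a demand beyond the SIL 4 band cannot be met by a single SIF and calls for non-instrumented safeguards instead.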
ANSI/ISA S84 (functional safety of safety instrumented systems for the process industry sector) IEC 61508 (functional safety of electrical/electronic/programmable electronic safety related systems) IEC 61511 (implementing IEC 61508 in the process industry sector) IEC 61513 (implementing IEC 61508 in the nuclear industry) IEC 62061 (implementing IEC 61508 in the domain of machinery safety) EN 50128 (railway applications – software for railway control and protection) EN 50129 (railway applications – safety related electronic systems for signalling) EN 50657 (railway applications – software on board of rolling stock) EN 50402 (fixed gas detection systems) ISO 26262 (automotive industry) MISRA (guidelines for safety analysis, modelling, and programming in automotive applications) See also As low as reasonably practicable (ALARP) High-integrity pressure protection system (HIPPS) Reliability engineering Spurious trip level (STL) References Further reading Hartmann, H.; Thomas, H.; Scharpf, E. (2022). Practical SIL Target Selection – Risk Analysis per the IEC 61511 Safety Lifecycle. Exida. Houtermans, M.J.M. (2014). SIL and Functional Safety in a Nutshell (2nd ed.). Prime Intelligence. ASIN B00MTWSBG2 Medoff, M.; Faller, R. (2014). Functional Safety – An IEC 61508 SIL 3 Compliant Development Process (3rd ed.). Exida. Punch, Marcus (2013). Functional Safety for the Mining and Machinery-based Industries (2nd ed.). Tenambit, N.S.W.: Marcus Punch. External links 61508.org - The 61508 Association Functional Safety, A Basic Guide IEC Safety and functional safety - The IEC functional safety site Safety Integrity Level Manual (Archived) - Pepperl+Fuchs SIL Manual Process safety Safety
Safety integrity level
[ "Chemistry", "Engineering" ]
1,641
[ "Chemical process engineering", "Safety engineering", "Process safety" ]
2,952,636
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28pressure%29
This is a tabulated listing of the orders of magnitude in relation to pressure expressed in pascals. psi values, prefixed with + and -, denote values relative to Earth's sea level standard atmospheric pressure (psig); otherwise, psia is assumed. References Units of pressure Pressure
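A small helper illustrating the psig/psia convention used by the listing (a sketch; the conversion constants are the usual standard-atmosphere and psi-to-pascal values):

```python
ATM_PSI = 14.696        # one standard atmosphere, in psi
PA_PER_PSI = 6894.757   # pascals per psi

def psia_from_psig(psig):
    """Gauge pressure (relative to sea-level atmosphere) to absolute psi."""
    return psig + ATM_PSI

def pa_from_psia(psia):
    """Absolute psi to pascals."""
    return psia * PA_PER_PSI

print(psia_from_psig(+30.0))                # 44.696 psia
print(pa_from_psia(psia_from_psig(+30.0)))  # ~3.08e5 Pa
```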
Orders of magnitude (pressure)
[ "Mathematics" ]
61
[ "Quantity", "Orders of magnitude", "Units of measurement", "Units of pressure" ]
2,953,344
https://en.wikipedia.org/wiki/Stagnation%20pressure
In fluid dynamics, stagnation pressure, also referred to as total pressure, is what the pressure would be if all the kinetic energy of the fluid were converted into pressure in a reversible manner; it is defined as the sum of the free-stream static pressure and the free-stream dynamic pressure. The Bernoulli equation applicable to incompressible flow shows that the stagnation pressure is equal to the static pressure and the dynamic pressure combined. In compressible flows, stagnation pressure is also equal to total pressure, provided that the fluid entering the stagnation point is brought to rest isentropically. Stagnation pressure is sometimes referred to as pitot pressure because the two pressures are equal. Magnitude The magnitude of stagnation pressure can be derived from the Bernoulli equation for incompressible flow with no height changes. For any two points 1 and 2: $p_1 + \tfrac{1}{2}\rho v_1^2 = p_2 + \tfrac{1}{2}\rho v_2^2$. The two points of interest are 1) in the freestream flow at relative speed $v$ where the pressure is called the "static" pressure (for example well away from an airplane moving at speed $v$); and 2) at a "stagnation" point where the fluid is at rest with respect to the measuring apparatus (for example at the end of a pitot tube in an airplane). Then $p_0 = p_s + \tfrac{1}{2}\rho v^2$, where: $p_0$ is the stagnation pressure, $\rho$ is the fluid density, $v$ is the speed of the fluid, and $p_s$ is the static pressure. So the stagnation pressure is increased over the static pressure by the amount $\tfrac{1}{2}\rho v^2$, which is called the "dynamic" or "ram" pressure because it results from fluid motion. In our airplane example, the stagnation pressure would be atmospheric pressure plus the dynamic pressure. In compressible flow, however, the fluid density is higher at the stagnation point than at the static point. Therefore, $\tfrac{1}{2}\rho v^2$ can't be used for the dynamic pressure. For many purposes in compressible flow, the stagnation enthalpy or stagnation temperature plays a role similar to the stagnation pressure in incompressible flow. Compressible flow Stagnation pressure is the static pressure a gas retains when brought to rest isentropically from Mach number M: $p_0 = p\left(1 + \frac{\gamma - 1}{2} M^2\right)^{\frac{\gamma}{\gamma - 1}}$, or, assuming an isentropic process, the stagnation pressure can be calculated from the ratio of stagnation temperature to static temperature: $p_0 = p\left(\frac{T_0}{T}\right)^{\frac{\gamma}{\gamma - 1}}$, where: $p_0$ is the stagnation pressure, $p$ is the static pressure, $T_0$ is the stagnation temperature, $T$ is the static temperature, and $\gamma$ is the ratio of specific heats. The above derivation holds only for the case when the gas is assumed to be calorically perfect (specific heats and the ratio of the specific heats are assumed to be constant with temperature). See also Hydraulic ram Stagnation temperature Notes References L. J. Clancy (1975), Aerodynamics, Pitman Publishing Limited, London. Cengel, Boles, "Thermodynamics, an engineering approach", McGraw Hill. External links Pitot-Statics and the Standard Atmosphere F. L. Thompson (1937) The Measurement of Air Speed in Airplanes, NACA Technical note #616, from SpaceAge Control. Fluid dynamics
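A short numerical comparison of the two formulas above (illustrative sea-level air values are assumed): at low Mach number the incompressible and compressible results agree closely, and they diverge as the flow approaches Mach 1.

```python
def p0_incompressible(p_static, rho, v):
    """p0 = p + (1/2) rho v^2, from Bernoulli."""
    return p_static + 0.5 * rho * v**2

def p0_compressible(p_static, mach, gamma=1.4):
    """p0 = p (1 + (gamma-1)/2 M^2)^(gamma/(gamma-1)), isentropic perfect gas."""
    return p_static * (1.0 + 0.5 * (gamma - 1.0) * mach**2) ** (gamma / (gamma - 1.0))

# Sea-level air (assumed): p = 101325 Pa, rho = 1.225 kg/m^3, speed of sound ~340 m/s.
p, rho, a = 101325.0, 1.225, 340.0
for v in (50.0, 170.0, 300.0):
    print(f"v = {v:5.1f} m/s: "
          f"incompressible {p0_incompressible(p, rho, v):9.0f} Pa, "
          f"compressible {p0_compressible(p, v / a):9.0f} Pa")
```

At 50 m/s the two values differ by a fraction of a percent; at 300 m/s (Mach ~0.88) the compressible result is several percent higher, which is why the simple dynamic-pressure term cannot be used at high speed.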
Stagnation pressure
[ "Chemistry", "Engineering" ]
635
[ "Piping", "Chemical engineering", "Fluid dynamics" ]
2,953,922
https://en.wikipedia.org/wiki/Microcrystalline%20wax
Microcrystalline waxes are a type of wax produced by de-oiling petrolatum, as part of the petroleum refining process. In contrast to the more familiar paraffin wax, which contains mostly unbranched alkanes, microcrystalline wax contains a higher percentage of isoparaffinic (branched) hydrocarbons and naphthenic hydrocarbons. It is characterized by the fineness of its crystals, in contrast to the larger crystals of paraffin wax. It consists of high molecular weight saturated aliphatic hydrocarbons. It is generally darker, more viscous, denser, tackier and more elastic than paraffin waxes, and has a higher molecular weight and melting point. The elastic and adhesive characteristics of microcrystalline waxes are related to the non-straight chain components which they contain. Typical microcrystalline wax crystals are small and thin, making the wax more flexible than paraffin wax. It is commonly used in cosmetic formulations. Microcrystalline waxes, when produced by wax refiners, are typically produced to meet a number of ASTM specifications. These include congeal point (ASTM D938), needle penetration (ASTM D1321), color (ASTM D6045), and viscosity (ASTM D445). Microcrystalline waxes can generally be put into two categories: "laminating" grades and "hardening" grades. The laminating grades typically have a melting point of 140–175 °F (60–80 °C) and a needle penetration of 25 or above. The hardening grades range from about 175–200 °F (80–93 °C), with a needle penetration of 25 or below. Color in both grades can range from brown to white, depending on the degree of processing done at the refinery level. Microcrystalline waxes are derived from the refining of the heavy distillates from lubricant oil production. This by-product must then be de-oiled at a wax refinery. Depending on the end use and desired specification, the product may then have its odor and color removed (the color typically starts as a brown or dark yellow). This is usually done by means of a filtration method or by hydro-treating the wax material. Industries and applications Microcrystalline wax is often used in industries such as tire and rubber, candles, adhesives, corrugated board, cosmetics, castings, and others. Refineries may use blending facilities to combine paraffin and microcrystalline waxes; this is prevalent in the tire and rubber industries. Microcrystalline waxes have considerable application in the custom making of jewelry and small sculptures. Different formulations produce waxes from those soft enough to be molded by hand to those hard enough to be carved with rotary tools. The melted wax can be cast to make multiple copies that are further carved with details. Jewelry suppliers sell wax molded into the basic forms of rings, as well as details that can be heat welded together and tubes and sheets for cutting and building the wax models. Rings may be attached to a wax "tree" so that many can be cast in one pouring. A brand of microcrystalline wax, Renaissance Wax, is also used extensively in museum and conservation settings for protection and polishing of antique woods, ivory, gemstones, and metal objects. It was developed by The British Museum in the 1950s to replace the potentially unstable natural waxes that were previously used, such as beeswax and carnauba. Microcrystalline waxes are excellent materials to use when modifying the crystalline properties of paraffin wax. The microcrystalline wax has significantly more branching of the carbon chains that are the backbone of paraffin wax.
This is useful when some desired functional changes in the paraffin are needed, such as flexibility, higher melt point, and increased opacity. They are also used as slip agents in printing ink. Microcrystalline wax is used in such sports as ice hockey, skiing and snowboarding. It is applied to the friction tape of an ice hockey stick to prevent degradation of the tape due to water destroying the glue on the tape and also to increase control of the hockey puck due to the wax’s adhesive quality. It is also applied to the underside of skis and snowboards as glide wax to reduce friction and increase the gliding ability of the board, making it easier to control; stickier grades of kick or grip wax are also used on cross-country skis to allow the ski to alternately grip the snow and slip across it as the skier shifts their weight while striding. Microcrystalline wax was used in the final phases of the restoration of the Cosmatesque pavement, Westminster Abbey, London. Use in petrolatum Microcrystalline wax is also a key component in the manufacture of petrolatum. The branched structure of the carbon chain backbone allows oil molecules to be incorporated into the crystal lattice structure. The desired properties of the petrolatum can be modified by using microcrystalline wax bases of different congeal points (ASTM D938) and needle penetration (ASTM D1321). However, key industries that utilize petrolatum, such as the personal care, cosmetic, and candle industries, have pushed for more materials that are considered "green" and based on renewable resources. As an alternative, hybrid petrolatum can be used. Hybrid petrolatum utilizes a complex mixture of vegetable oils and waxes and combines them with petroleum and micro wax-based technologies. This allows a formulator to incorporate higher percentages of renewable resources while maintaining the beneficial properties of the petrolatum. References External links ASTM official website: wax tests Cosmetics chemicals Waxes Petroleum products Sculpture materials
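The two grade windows quoted earlier can be expressed as a small decision rule. The following sketch, including its handling of the 175 °F boundary and the function name, is illustrative only and not part of any ASTM specification:

```python
def classify_microwax(congeal_point_f: float, needle_penetration: float) -> str:
    """Rough grade classification from congeal point (ASTM D938, in deg F)
    and needle penetration (ASTM D1321), following the laminating and
    hardening ranges quoted above."""
    if 140 <= congeal_point_f < 175 and needle_penetration >= 25:
        return "laminating grade"
    if 175 <= congeal_point_f <= 200 and needle_penetration <= 25:
        return "hardening grade"
    return "outside the two common grade windows"

print(classify_microwax(160, 30))  # -> laminating grade
print(classify_microwax(190, 15))  # -> hardening grade
```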
Microcrystalline wax
[ "Physics", "Chemistry" ]
1,196
[ "Petroleum products", "Petroleum", "Materials", "Matter", "Waxes" ]
2,954,049
https://en.wikipedia.org/wiki/Iterative%20learning%20control
Iterative Learning Control (ILC) is an open-loop control approach of tracking control for systems that work in a repetitive mode. Examples of systems that operate in a repetitive manner include robot arm manipulators, chemical batch processes and reliability testing rigs. In each of these tasks the system is required to perform the same action over and over again with high precision. This action is represented by the objective of accurately tracking a chosen reference signal on a finite time interval. Repetition allows the system to sequentially improve tracking accuracy, in effect learning the required input needed to track the reference as closely as possible. The learning process uses information from previous repetitions to improve the control signal, ultimately enabling a suitable control action to be found iteratively. The internal model principle yields conditions under which perfect tracking can be achieved, but the design of the control algorithm still leaves many decisions to be made to suit the application. A typical, simple control law is of the form $u_{p+1}(t) = u_p(t) + K\,e_p(t)$, where $u_p(t)$ is the input to the system during the $p$th repetition, $e_p(t)$ is the tracking error during the $p$th repetition and $K$ is a design parameter representing operations on $e_p(t)$. Achieving perfect tracking through iteration is represented by the mathematical requirement of convergence of the input signals as $p$ becomes large, whilst the rate of this convergence represents the desirable practical need for the learning process to be rapid. There is also the need to ensure good algorithm performance even in the presence of uncertainty about the details of process dynamics. The operation $K$ is crucial to achieving design objectives (i.e. trading off fast convergence and robust performance) and ranges from simple scalar gains to sophisticated optimization computations. In many cases a low-pass filter is added to the input to improve performance. The control law then takes the form $u_{p+1}(t) = Q\left(u_p(t) + K\,e_p(t)\right)$, where $Q$ is a low-pass filtering matrix. This removes high-frequency disturbances which may otherwise be amplified during the learning process. References Control theory
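A minimal Python sketch of the learning loop described above; the first-order plant, learning gain and reference below are illustrative assumptions, not from the article:

```python
import numpy as np

# Hypothetical first-order plant y[t+1] = a*y[t] + b*u[t], run repeatedly
# over a finite trial of length T, with the ILC update u_{p+1} = u_p + K*e_p.
a, b, T = 0.9, 0.5, 50
t = np.arange(T)
r = np.sin(2 * np.pi * t / T)          # reference trajectory to track

def run_trial(u):
    """Simulate one repetition and return the output trajectory."""
    y = np.zeros(T)
    for k in range(T - 1):
        y[k + 1] = a * y[k] + b * u[k]
    return y

u = np.zeros(T)                        # input for the first repetition
K = 0.8                                # learning gain (design parameter)
for p in range(20):
    e = r - run_trial(u)               # tracking error e_p
    # The error at t+1 is caused by the input at t (relative degree one),
    # so the error signal is shifted by one step before the update.
    u = u + K * np.append(e[1:], 0.0)  # u_{p+1}(t) = u_p(t) + K e_p(t+1)
    print(f"trial {p}: RMS error = {np.sqrt(np.mean(e ** 2)):.4f}")
```

With the contraction factor |1 − Kb| < 1 the error shrinks from trial to trial, illustrating the convergence requirement discussed above.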
Iterative learning control
[ "Mathematics" ]
377
[ "Applied mathematics", "Control theory", "Dynamical systems" ]
2,954,685
https://en.wikipedia.org/wiki/Povarov%20reaction
The Povarov reaction is an organic reaction described as a formal cycloaddition between an aromatic imine and an alkene. The imine in this organic reaction is a condensation reaction product from an aniline type compound and a benzaldehyde type compound. The alkene must be electron-rich, which means that functional groups attached to the alkene must be able to donate electrons. Such alkenes are enol ethers and enamines. The reaction product in the original Povarov reaction is a quinoline. Because the reactions can be carried out with the three components premixed in one reactor, it is an example of a multi-component reaction. Reaction mechanism The reaction mechanism for the Povarov reaction to the quinoline is outlined in Scheme 1. In step one, aniline and benzaldehyde react to form the Schiff base in a condensation reaction. The Povarov reaction requires a Lewis acid such as boron trifluoride to activate the imine for an electrophilic addition of the activated alkene. This reaction step forms an oxonium ion which then reacts with the aromatic ring in a classical electrophilic aromatic substitution. Two additional elimination reactions create the quinoline ring structure. The reaction is also classified as a subset of aza Diels-Alder reactions; however, it occurs by a step-wise rather than concerted mechanism. Examples The reaction depicted in Scheme 2 illustrates the Povarov reaction with an imine and an enamine in the presence of yttrium triflate as the Lewis acid. This reaction is regioselective because the iminium ion preferentially attacks the ring position ortho to the nitro group and not the para position. The nitro group is a meta directing substituent, but since this position is blocked, the most electron-rich ring position is now ortho and not para. The reaction is also stereoselective because the enamine addition occurs with a diastereomeric preference for trans addition, without formation of the cis isomer. This is in contrast to traditional Diels–Alder reactions, which are stereospecific based on the alkene geometry. In 2013, Doyle and coworkers reported a Povarov-type, formal [4+2]-cycloaddition reaction between donor-acceptor cyclopropenes and imines (Scheme 3). In the first step, a dirhodium catalyst effects diazo decomposition from a silyl enol ether diazo compound to yield a donor/acceptor cyclopropene. The donor/acceptor cyclopropene is then reacted with an aryl imine under scandium(III) triflate-catalyzed conditions to yield cyclopropane-fused tetrahydroquinolines in good yields and diastereoselectivities. Treatment of these compounds with TBAF induces a ring expansion that provides the corresponding benzazepines. Variations One variation of the Povarov reaction is a four-component reaction. Whereas in the traditional Povarov reaction the intermediate carbocation undergoes an intramolecular reaction with the aryl group, this intermediate can also be terminated by an additional nucleophile such as an alcohol. Scheme 4 depicts this four-component reaction with the ethyl ester of glyoxylic acid, 3,4-dihydro-2H-pyran, aniline and ethanol, with the Lewis acid scandium(III) triflate and molecular sieves. References See also Doebner reaction Doebner-Miller reaction Grieco three-component condensation Cycloadditions Multiple component reactions Quinoline forming reactions Name reactions
Povarov reaction
[ "Chemistry" ]
774
[ "Name reactions" ]
4,061,767
https://en.wikipedia.org/wiki/Heaviside%E2%80%93Lorentz%20units
Heaviside–Lorentz units (or Lorentz–Heaviside units) constitute a system of units and quantities that extends the CGS with a particular set of equations that defines electromagnetic quantities, named for Oliver Heaviside and Hendrik Antoon Lorentz. They share with the CGS-Gaussian system the property that the electric constant $\varepsilon_0$ and magnetic constant $\mu_0$ do not appear in the defining equations for electromagnetism, having been incorporated implicitly into the electromagnetic quantities. Heaviside–Lorentz units may be thought of as normalizing $\varepsilon_0 = 1$ and $\mu_0 = 1$, while at the same time revising Maxwell's equations to use the speed of light $c$ instead. The Heaviside–Lorentz unit system, like the International System of Quantities upon which the SI system is based, but unlike the CGS-Gaussian system, is rationalized, with the result that there are no factors of $4\pi$ appearing explicitly in Maxwell's equations. That this system is rationalized partly explains its appeal in quantum field theory: the Lagrangian underlying the theory does not have any factors of $4\pi$ when this system is used. Consequently, electromagnetic quantities in the Heaviside–Lorentz system differ by factors of $\sqrt{4\pi}$ in the definitions of the electric and magnetic fields and of electric charge. Heaviside–Lorentz units are often used in relativistic calculations and in particle physics. They are particularly convenient when performing calculations in spatial dimensions greater than three, such as in string theory. Motivation In the mid-late 19th century, electromagnetic measurements were frequently made in either the so-named electrostatic (ESU) or electromagnetic (EMU) systems of units. These were based respectively on Coulomb's and Ampère's law. Use of these systems, as with the subsequently developed Gaussian CGS units, resulted in many factors of $4\pi$ appearing in formulas for electromagnetic results, including those without any circular or spherical symmetry. For example, in the CGS-Gaussian system, the capacitance of a sphere of radius $r$ is $r$, while that of a parallel plate capacitor is $A/(4\pi d)$, where $A$ is the area of the smaller plate and $d$ is their separation. Heaviside, who was an important, though somewhat isolated, early theorist of electromagnetism, suggested in 1882 that the irrational appearance of $4\pi$ in these sorts of relations could be removed by redefining the units for charges and fields. In the introduction to his 1893 book Electromagnetic Theory, Heaviside argued for this rationalization. Length–mass–time framework As in the Gaussian system, the Heaviside–Lorentz system uses the length–mass–time dimensions. This means that all of the units of electric and magnetic quantities are expressible in terms of the units of the base quantities length, time and mass. Coulomb's equation, used to define charge in these systems, is $F = q_1^{\mathrm G} q_2^{\mathrm G}/r^2$ in the Gaussian system, and $F = q_1^{\mathrm{HL}} q_2^{\mathrm{HL}}/(4\pi r^2)$ in the HL system. The unit of charge then connects to $\mathrm{dyn}^{1/2}\,\mathrm{cm}$, where 'HLC' is the HL unit of charge. The HL quantity describing a charge is then $\sqrt{4\pi}$ times larger than the corresponding Gaussian quantity. There are comparable relationships for the other electromagnetic quantities (see below). The commonly used set of units is called the SI, which defines two constants, the vacuum permittivity ($\varepsilon_0$) and the vacuum permeability ($\mu_0$). These can be used to convert SI units to their corresponding Heaviside–Lorentz values, as detailed below. For example, SI charge is $\sqrt{\varepsilon_0}$ times the HL charge expressed in CGS base units. When one puts $\varepsilon_0 \approx 8.854\times10^{-12}\,\mathrm{F/m}$ and converts the CGS base units to SI, this evaluates to approximately $9.4\times10^{-11}\,\mathrm{C}$, the SI-equivalent of the Heaviside–Lorentz unit of charge. 
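The $\sqrt{4\pi}$ rescaling can be checked directly from the two forms of Coulomb's law (a standard manipulation, shown here for concreteness):

```latex
F=\frac{q^{\mathrm G}_1 q^{\mathrm G}_2}{r^2}
 =\frac{q^{\mathrm{HL}}_1 q^{\mathrm{HL}}_2}{4\pi r^2}
\;\Longrightarrow\;
q^{\mathrm{HL}}=\sqrt{4\pi}\,q^{\mathrm G},
\qquad
\mathbf{E}^{\mathrm{HL}}=\frac{\mathbf{E}^{\mathrm G}}{\sqrt{4\pi}},
```

so that products such as $q\mathbf{E}$ (a force) are the same in both systems.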
Comparison of Heaviside–Lorentz with other systems of units This section has a list of the basic formulas of electromagnetism, given in the SI, Heaviside–Lorentz, and Gaussian systems. Here $\mathbf{E}$ and $\mathbf{D}$ are the electric field and displacement field, respectively, $\mathbf{B}$ and $\mathbf{H}$ are the magnetic fields, $\mathbf{P}$ is the polarization density, $\mathbf{M}$ is the magnetization, $\rho$ is charge density, $\mathbf{J}$ is current density, $c$ is the speed of light in vacuum, $\phi$ is the electric potential, $\mathbf{A}$ is the magnetic vector potential, $\mathbf{F}$ is the Lorentz force acting on a body of charge $q$ and velocity $\mathbf{v}$, $\varepsilon$ is the permittivity, $\chi_{\mathrm e}$ is the electric susceptibility, $\mu$ is the magnetic permeability, and $\chi_{\mathrm m}$ is the magnetic susceptibility. Maxwell's equations The electric and magnetic fields can be written in terms of the potentials $\phi$ and $\mathbf{A}$. The definition of the magnetic field in terms of $\mathbf{A}$, $\mathbf{B} = \nabla\times\mathbf{A}$, is the same in all systems of units, but the electric field is $\mathbf{E} = -\nabla\phi - \partial\mathbf{A}/\partial t$ in the SI system, but $\mathbf{E} = -\nabla\phi - \frac{1}{c}\,\partial\mathbf{A}/\partial t$ in the HL or Gaussian systems. Other basic laws Dielectric and magnetic materials Below are the expressions for the macroscopic fields $\mathbf{D}$, $\mathbf{P}$, $\mathbf{H}$ and $\mathbf{M}$ in a material medium. It is assumed here for simplicity that the medium is homogeneous, linear, isotropic, and nondispersive, so that the susceptibilities are constants. Note that the quantities $\varepsilon^{\mathrm{SI}}/\varepsilon_0$, $\varepsilon^{\mathrm{HL}}$ and $\varepsilon^{\mathrm G}$ are dimensionless, and they have the same numeric value. By contrast, the electric susceptibility is dimensionless in all the systems, but has different numeric values for the same material: $\chi_{\mathrm e}^{\mathrm{SI}} = \chi_{\mathrm e}^{\mathrm{HL}} = 4\pi\chi_{\mathrm e}^{\mathrm G}$. The same statements apply for the corresponding magnetic quantities. Advantages and disadvantages of Heaviside–Lorentz units Advantages The formulas above are clearly simpler in HL units compared to either SI or Gaussian units. As Heaviside proposed, removing the $4\pi$ from the Gauss law and putting it in the force law considerably reduces the number of places the $4\pi$ appears compared to Gaussian CGS units. Removing the explicit $4\pi$ from the Gauss law makes it clear that the inverse-square force law arises by the field spreading out over the surface of a sphere. This allows a straightforward extension to other dimensions. For example, the case of long, parallel wires extending straight in the $z$ direction can be considered a two-dimensional system. Another example is in string theory, where more than three spatial dimensions often need to be considered. The equations are free of the constants $\varepsilon_0$ and $\mu_0$ that are present in the SI system. (In addition, $\varepsilon_0$ and $\mu_0$ are overdetermined, because $\varepsilon_0\mu_0 = 1/c^2$.) The below points are true in both Heaviside–Lorentz and Gaussian systems, but not SI. The electric and magnetic fields $\mathbf{E}$ and $\mathbf{B}$ have the same dimensions in the Heaviside–Lorentz system, meaning it is easy to recall where factors of $c$ go in the Maxwell equations. Every time derivative comes with a $1/c$, which makes it dimensionally the same as a space derivative. In contrast, in SI units the dimension of $\mathbf{E}$ is $c$ times that of $\mathbf{B}$. Giving the $\mathbf{E}$ and $\mathbf{B}$ fields the same dimension makes the assembly into the electromagnetic tensor more transparent. There are no factors of $c$ that need to be inserted when assembling the tensor out of the three-dimensional fields. Similarly, $\phi$ and $\mathbf{A}$ have the same dimensions and are the four components of the 4-potential. The fields $\mathbf{D}$, $\mathbf{H}$, $\mathbf{P}$, and $\mathbf{M}$ also have the same dimensions as $\mathbf{E}$ and $\mathbf{B}$. For vacuum, any expression involving $\mathbf{D}$ can simply be recast as the same expression with $\mathbf{E}$ (and likewise $\mathbf{H}$ with $\mathbf{B}$). In SI units, $\mathbf{D}$ and $\mathbf{P}$ have the same units, as do $\mathbf{H}$ and $\mathbf{M}$, but they have different units from each other and from $\mathbf{E}$ and $\mathbf{B}$. Disadvantages Despite Heaviside's urgings, it proved difficult to persuade people to switch from the established units. 
He believed that if the units were changed, "[o]ld style instruments would very soon be in a minority, and then disappear ...". Persuading people to switch was already difficult in 1893, and in the meanwhile there have been more than a century's worth of additional textbooks printed and voltmeters built. Heaviside–Lorentz units, like the Gaussian CGS units from which they generally differ by a factor of about 3.5 (that is, $\sqrt{4\pi}$), are frequently of rather inconvenient sizes. The ampere (coulomb/second) is a reasonable unit for measuring currents commonly encountered, but the ESU/s, as demonstrated above, is far too small. The Gaussian CGS unit of electric potential is named a statvolt. It is about $300\,\mathrm V$, a value which is larger than most commonly encountered potentials. The henry, the SI unit for inductance, is already on the large side compared to most inductors; the Gaussian unit is 12 orders of magnitude larger. A few of the Gaussian CGS units have names; none of the Heaviside–Lorentz units do. Textbooks in theoretical physics use Heaviside–Lorentz units nearly exclusively, frequently in their natural form (see below), where the system's conceptual simplicity and compactness significantly clarify the discussions, and it is possible if necessary to convert the resulting answers to appropriate units after the fact by inserting appropriate factors of $\hbar$ and $c$. Some textbooks on classical electricity and magnetism have been written using Gaussian CGS units, but recently some of them have been rewritten to use SI units. Outside of these contexts, including for example magazine articles on electric circuits, Heaviside–Lorentz and Gaussian CGS units are rarely encountered. Translating formulas between systems To convert any formula between the SI, Heaviside–Lorentz system or Gaussian system, the corresponding expressions shown in the table below can be equated and hence substituted for each other. Replace each quantity by its counterpart in the target system, or vice versa. This will reproduce any of the specific formulas given in the list above. As an example, starting with the Gaussian equation $\nabla\cdot\mathbf{E}^{\mathrm G} = 4\pi\rho^{\mathrm G}$ and the equations from the table $\mathbf{E}^{\mathrm G} = \sqrt{4\pi}\,\mathbf{E}^{\mathrm{HL}}$ and $\rho^{\mathrm G} = \rho^{\mathrm{HL}}/\sqrt{4\pi}$: moving the factor $\sqrt{4\pi}$ across in the latter identities and substituting, the result is $\sqrt{4\pi}\,\nabla\cdot\mathbf{E}^{\mathrm{HL}} = 4\pi\,\rho^{\mathrm{HL}}/\sqrt{4\pi}$, which then simplifies to $\nabla\cdot\mathbf{E}^{\mathrm{HL}} = \rho^{\mathrm{HL}}$. Notes References Special relativity Electromagnetism Hendrik Lorentz
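For concreteness, the Gauss and Ampère–Maxwell laws take the following microscopic (vacuum) forms in the three systems; these are standard results, with $\nabla\cdot\mathbf{B}=0$ common to all three:

```latex
\begin{aligned}
&\text{SI:} && \nabla\cdot\mathbf E=\rho/\varepsilon_0, &&
\nabla\times\mathbf B-\frac{1}{c^2}\frac{\partial\mathbf E}{\partial t}=\mu_0\mathbf J,\\
&\text{Heaviside--Lorentz:} && \nabla\cdot\mathbf E=\rho, &&
\nabla\times\mathbf B-\frac{1}{c}\frac{\partial\mathbf E}{\partial t}=\frac{1}{c}\,\mathbf J,\\
&\text{Gaussian:} && \nabla\cdot\mathbf E=4\pi\rho, &&
\nabla\times\mathbf B-\frac{1}{c}\frac{\partial\mathbf E}{\partial t}=\frac{4\pi}{c}\,\mathbf J.
\end{aligned}
```

The Faraday law is $\nabla\times\mathbf{E}+\frac{1}{c}\frac{\partial\mathbf{B}}{\partial t}=0$ in the HL and Gaussian systems and $\nabla\times\mathbf{E}+\frac{\partial\mathbf{B}}{\partial t}=0$ in SI, identical in form up to the factor of $c$.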
Heaviside–Lorentz units
[ "Physics" ]
1,939
[ "Electromagnetism", "Physical phenomena", "Special relativity", "Fundamental interactions", "Theory of relativity" ]
4,062,370
https://en.wikipedia.org/wiki/American%20Institute%20of%20Mining%2C%20Metallurgical%2C%20and%20Petroleum%20Engineers
The American Institute of Mining, Metallurgical, and Petroleum Engineers (AIME) is a professional association for mining and metallurgy, with over 145,000 members. The association was founded in 1871 by 22 mining engineers in Wilkes-Barre, Pennsylvania, and was one of the first national engineering societies in the country. The association's charter is to "advance and disseminate, through the programs of the Member Societies, knowledge of engineering and the arts and sciences involved in the production and use of minerals, metals, energy sources and materials for the benefit of humankind." It is the parent organization of four Member Societies: the Society for Mining, Metallurgy, and Exploration (SME), The Minerals, Metals & Materials Society (TMS), the Association for Iron and Steel Technology (AIST), and the Society of Petroleum Engineers (SPE). The organization is currently based in San Ramon, California. History Founded as the American Institute of Mining Engineers (AIME), the institute had a membership of over 5,000 at the beginning of 1915, made up of honorary, elected, and associate members. The annual meeting of the institute was held in February, with other meetings during the year as authorized by the council. The institute published three volumes of Transactions annually and a monthly Bulletin which appeared on the first of each month. The headquarters of the institute was in the Engineering Building in New York City. Following the creation of the Petroleum Division in 1922, the Iron and Steel Division in 1928, and the Institute of Metals Division in 1933, the name of the society was changed in 1957 to the American Institute of Mining, Metallurgical and Petroleum Engineers. Three of the current member societies were then created from the divisions, increasing to four in 1974 when the Iron and Steel Society (ISS) was formed. In 2004 ISS merged with the Association of Iron and Steel Engineers (AISE) to form the Association for Iron and Steel Technology (AIST), whilst remaining a member society of AIME. Awards The society presents some 25 awards every year at the annual conference. In addition, the member societies also disburse their own awards, including the Percy Nicholls Award, awarded by SME jointly with the American Society of Mechanical Engineers. Presidents The following individuals have held the position of President of this organization. 1871: David Thomas 1872–1874: Rossiter Worthington Raymond 1875: Alexander Lyman Holley 1876: Abram Stevens Hewitt 1877: Thomas Sterry Hunt 1878–1879: Eckley Brinton Coxe 1880: William Powell Shinn 1881: William Metcalf 1882: Richard Pennefather Rothwell 1883: Robert Woolston Hunt 1884–1885: James Cooper Bayles 1886: Robert Hallowell Richards 1887: Thomas Egleston 1888: William Bleeker Potter 1889: Richard Pearce 1890: Abram Stevens Hewitt 1891–1892: John Birkinbine 1893: Henry Marion Howe 1894: John Fritz 1895: Joseph D. 
Weeks 1896: Edmund Gybbon Spilsbury 1897: Thomas Messinger Drown 1898: Charles Kirchhoff 1899–1900: James Douglas 1901–1902: Eben Erskine Olcott 1903: Albert Reid Ledoux 1904–1905: James Gayley 1906: Robert Woolston Hunt 1907–1908: John Hays Hammond 1909–1910: David William Brunton 1911: Charles Kirchhoff 1912: James Furman Kemp 1913: Charles Frederic Rand 1914: Benjamin Bowditch Thayer 1915: William Lawrence Saunders 1916: Louis Davidson Ricketts 1917: Philip North Moore 1918: Sidney Johnston Jennings 1919: Horace Vaughn Winchell 1920: Herbert Hoover 1921: Edwin Ludlow 1922: Arthur Smith Dwight 1923: Edward Payson Mathewson 1924: William Kelly 1925: John van Wicheren Reynders 1926: Samuel A. Taylor 1927: Everette Lee DeGolyer 1928: George Otis Smith 1929: Frederick Worthen Bradley 1930: William Hastings Bassett 1931: Robert Emmet Tally 1932: Scott Turner 1933: Frederick Mark Becket 1934: Howard Nicholas Eavenson 1935: Henry Andrew Buehler 1936: John Meston Lovejoy 1937: Rolland Craten Allen 1938: Daniel Cowan Jackling 1939: Donald Burton Gillies 1940: Herbert George Moulton 1941: John Robert Suman 1942: Eugene McAuliffe 1943: Champion Herbert Mathewson 1944: Chester Alan Fulton 1945: Harvey Seeley Mudd 1946: Louis S. Cates 1947: Clyde Williams 1948: William Embry Wrather 1949: Lewis Emanuel Young 1950: Donald Hamilton McLaughlin 1951: Willis McGerald Peirce 1952: Michael Lawrence Haider 1953: Andrew Fletcher 1954: Leo Frederick Reinartz 1955: Henry DeWitt Smith 1956: Carl Ernest Reistle Jr. 1957: Grover Justine Holt 1958: Augustus Braun Kinzel 1959: Howard Carter Pyle 1960: Joseph Lincoln Gillson 1961: Ronald Russel McNaughton 1962: Lloyd E. Elkins 1963: Roger Vern Pierce 1964: Karl Leroy Fetters 1965: Thomas Corwin Frick 1966: William Bishop Stephenson 1967: Walter R. Hibbard Jr. 1968: John Robertson McMillan 1969: James Boyd 1970: John C. Kinnear 1971: John Smith Bell 1972: Dennis L. McElroy 1973: James B. Austin 1974: Wayne E. Glenn 1975: James D. Reilly 1976: Julius J. Harwood 1977: H. Arthur Nedom 1978: Wayne L. Dowdey 1979: William H. Wise 1980: M. Scott Kraemer 1981: Robert H. Merrill 1982: Harold W. Paxton 1983: Edward E. Runyan 1984: Nelson Severinghaus, Jr. 1985: Norman T. Mills 1986: Arlen L. Edgar 1987: Alan Lawley 1988: Thomas V. Falkie 1989: Howard N. Hubbard, Jr. 1990: Donald G. Russell 1991: Milton E. Wadsworth 1992: Roshan B. Bhappu 1993: G. Hugh Walker 1994: Noel D. Rietman 1995: Frank V. Nolfi, Jr. 1996: Donald W. Gentry 1997: Leonard G. Nelson 1998: Roy H. Koerner 1999: Paul G. Campbell, Jr. 2000: Robert E. Murray 2001: Grant P. Schneider 2002: George H. Sawyer 2003: Robert H. Wagoner 2004: Robert C. Freas 2005: Alan W. Cramb 2006: James R. Jorden 2007: Dan J. Thoma 2008: Michael Karmis 2009: Ian Sadler 2010: DeAnn Craig 2011: Brajendra Mishra 2012: George W. Luxbacher 2013: Dale Heinz 2014: Behrooz Fattahi 2015: Garry W. Warren 2016: Nikhil Trivedi 2017: John G. Speer Vice presidents 1893–1894: Robert Gilmour Leckie Member Societies In addition to individual members, AIME's membership includes the following societies: Association for Iron and Steel Technology (AIST) The Society for Mining, Metallurgy & Exploration (SME) Society of Petroleum Engineers (SPE) The Minerals, Metals & Materials Society (TMS) Mining Engineering magazine The Society for Mining, Metallurgy & Exploration has published the monthly magazine Mining Engineering since 1949. 
References External links Organizations based in Colorado Organizations established in 1871 1871 establishments in Pennsylvania Engineering societies based in the United States
American Institute of Mining, Metallurgical, and Petroleum Engineers
[ "Chemistry", "Materials_science", "Engineering" ]
1,480
[ "Mining engineering", "Metallurgy", "Petroleum engineering", " and Petroleum Engineers", "American Institute of Mining", " Metallurgical" ]
4,062,960
https://en.wikipedia.org/wiki/Particle%20aggregation
Particle agglomeration refers to the formation of assemblages in a suspension and represents a mechanism leading to the functional destabilization of colloidal systems. During this process, particles dispersed in the liquid phase stick to each other, and spontaneously form irregular particle assemblages, flocs, or agglomerates. This phenomenon is also referred to as coagulation or flocculation and such a suspension is also called unstable. Particle agglomeration can be induced by adding salts or other chemicals referred to as coagulants or flocculants. Particle agglomeration can be a reversible or irreversible process. Particle agglomerates defined as "hard agglomerates" are more difficult to redisperse to the initial single particles. In the course of agglomeration, the agglomerates will grow in size, and as a consequence they may settle to the bottom of the container, which is referred to as sedimentation. Alternatively, a colloidal gel may form in concentrated suspensions, which changes the rheological properties of the suspension. The reverse process, whereby particle agglomerates are re-dispersed as individual particles, referred to as peptization, hardly occurs spontaneously, but may occur under stirring or shear. Colloidal particles may also remain dispersed in liquids for long periods of time (days to years). This phenomenon is referred to as colloidal stability and such a suspension is said to be functionally stable. Stable suspensions are often obtained at low salt concentrations or by addition of chemicals referred to as stabilizers or stabilizing agents. The stability of particles, colloidal or otherwise, is most commonly evaluated in terms of zeta potential. This parameter provides a readily quantifiable measure of interparticle repulsion, which is the key inhibitor of particle aggregation. Similar agglomeration processes occur in other dispersed systems too. In emulsions, they may also be coupled to droplet coalescence, and not only lead to sedimentation but also to creaming. In aerosols, airborne particles may equally aggregate and form larger clusters (e.g., soot). Early stages A well dispersed colloidal suspension consists of individual, separated particles and is stabilized by repulsive inter-particle forces. When the repulsive forces weaken or become attractive through the addition of a coagulant, particles start to aggregate. Initially, particle doublets A₂ will form from singlets A₁ according to the scheme A₁ + A₁ → A₂. In the early stage of the aggregation process, the suspension mainly contains individual particles. The rate of this phenomenon is characterized by the aggregation rate coefficient $k$. Since doublet formation is a second-order rate process, the units of this coefficient are m³·s⁻¹, since particle concentrations are expressed as particle number per unit volume (m⁻³). Since absolute aggregation rates are difficult to measure, one often refers to the dimensionless stability ratio $W$, defined as $W = k_{\text{fast}}/k$, where $k_{\text{fast}}$ is the aggregation rate coefficient in the fast regime, and $k$ the coefficient at the conditions of interest. The stability ratio is close to unity in the fast regime, increases in the slow regime, and becomes very large when the suspension is stable. Often, colloidal particles are suspended in water. In this case, they accumulate a surface charge and an electrical double layer forms around each particle. The overlap between the diffuse layers of two approaching particles results in a repulsive double layer interaction potential, which leads to particle stabilization. 
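A small numerical sketch of the early-stage kinetics above; the Smoluchowski coefficient k_fast = 8kT/(3η) for diffusion-limited aggregation in water is a standard result, while the particle concentration and stability ratio chosen below are illustrative assumptions:

```python
kB, T, eta = 1.381e-23, 298.0, 0.89e-3   # J/K, K, Pa*s (water at 25 C)
k_fast = 8 * kB * T / (3 * eta)          # Smoluchowski rate coefficient, m^3/s

N0 = 1e17        # initial singlet concentration in m^-3 (illustrative)
W = 100.0        # stability ratio in the slow regime (illustrative)
k = k_fast / W   # rate coefficient at the conditions of interest

# Second-order loss of singlets, dN/dt = -k N^2, gives a characteristic
# aggregation half-time of order 1/(k N0) (up to convention factors of 2).
t_half = 1.0 / (k * N0)
print(f"k_fast = {k_fast:.2e} m^3/s")    # about 1.2e-17 m^3/s
print(f"half-time of order {t_half:.0f} s at W = {W:g}")
```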
When salt is added to the suspension, the electrical double layer repulsion is screened, and van der Waals attraction becomes dominant and induces fast aggregation. The figure on the right shows the typical dependence of the stability ratio versus the electrolyte concentration, whereby the regimes of slow and fast aggregation are indicated. The table below summarizes the critical coagulation concentration (CCC) ranges for different net charges of the counter ion. The charge is expressed in units of elementary charge. This dependence reflects the Schulze–Hardy rule, which states that the CCC varies as the inverse sixth power of the counter ion charge. The CCC also depends somewhat on the type of ion, even if ions carry the same charge. This dependence may reflect different particle properties or different ion affinities to the particle surface. Since particles are frequently negatively charged, multivalent metal cations thus represent highly effective coagulants. Adsorption of oppositely charged species (e.g., protons, specifically adsorbing ions, surfactants, or polyelectrolytes) may destabilize a particle suspension by charge neutralization or stabilize it by buildup of charge, leading to fast aggregation near the charge neutralization point, and slow aggregation away from it. Quantitative interpretation of colloidal stability was first formulated within the DLVO theory. This theory confirms the existence of slow and fast aggregation regimes, even though in the slow regime the dependence on the salt concentration is often predicted to be much stronger than observed experimentally. The Schulze–Hardy rule can be derived from DLVO theory as well. Other mechanisms of colloid stabilization are equally possible, particularly ones involving polymers. Adsorbed or grafted polymers may form a protective layer around the particles, induce steric repulsive forces, and lead to steric stabilization, as is the case with polycarboxylate ether (PCE), the latest generation of chemically tailored superplasticizers, specifically designed to increase the workability of concrete while reducing its water content to improve its properties and durability. When polymer chains adsorb loosely to particles, a polymer chain may bridge two particles and induce bridging forces. This situation is referred to as bridging flocculation. When particle aggregation is solely driven by diffusion, one refers to perikinetic aggregation. Aggregation can be enhanced through shear stress (e.g., stirring). The latter case is called orthokinetic aggregation. Later stages As the aggregation process continues, larger clusters form. The growth occurs mainly through encounters between different clusters, and therefore one refers to a cluster–cluster aggregation process. The resulting clusters are irregular, but statistically self-similar. They are examples of mass fractals, whereby their mass $M$ grows with their typical size, characterized by the radius of gyration $R_g$, as a power-law $M \propto R_g^{\,d}$, where $d$ is the mass fractal dimension. Depending on whether the aggregation is fast or slow, one refers to diffusion-limited cluster aggregation (DLCA) or reaction-limited cluster aggregation (RLCA). The clusters have different characteristics in each regime. DLCA clusters are loose and ramified (d ≈ 1.8), while RLCA clusters are more compact (d ≈ 2.1). The cluster size distribution is also different in these two regimes. DLCA clusters are relatively monodisperse, while the size distribution of RLCA clusters is very broad. 
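A short sketch of the mass–size scaling just described; the primary-particle radius and the order-unity prefactor k0 are illustrative assumptions:

```python
def cluster_mass(Rg, a=100e-9, d=1.8, k0=1.0):
    """Number of primary particles (radius a) in an aggregate of radius
    of gyration Rg, from the mass-fractal scaling M = k0 * (Rg/a)**d."""
    return k0 * (Rg / a) ** d

for Rg_um in (0.5, 1.0, 2.0):
    Rg = Rg_um * 1e-6
    print(f"Rg = {Rg_um} um: "
          f"DLCA (d=1.8) ~ {cluster_mass(Rg, d=1.8):.0f} particles, "
          f"RLCA (d=2.1) ~ {cluster_mass(Rg, d=2.1):.0f} particles")
```

At equal size, the more compact RLCA clusters contain more primary particles, consistent with their larger fractal dimension.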
The larger the cluster size, the faster the settling velocity. Therefore, aggregating particles sediment, and this mechanism provides a way for separating them from suspension. At higher particle concentrations, the growing clusters may interlink and form a particle gel. Such a gel is an elastic solid body, but differs from ordinary solids by having a very low elastic modulus. Homoaggregation versus heteroaggregation When aggregation occurs in a suspension composed of similar monodisperse colloidal particles, the process is called homoaggregation (or homocoagulation). When aggregation occurs in a suspension composed of dissimilar colloidal particles, one refers to heteroaggregation (or heterocoagulation). The simplest heteroaggregation process occurs when two types of monodisperse colloidal particles, A and B, are mixed. In the early stages, three types of doublets may form: A + A → A₂, B + B → B₂, and A + B → AB. While the first two processes correspond to homoaggregation in pure suspensions containing particles A or B, the last reaction represents the actual heteroaggregation process. Each of these reactions is characterized by the respective aggregation coefficients $k_{AA}$, $k_{BB}$, and $k_{AB}$. For example, when particles A and B bear positive and negative charge, respectively, the homoaggregation rates may be slow, while the heteroaggregation rate is fast. In contrast to homoaggregation, the heteroaggregation rate accelerates with decreasing salt concentration. Clusters formed at later stages of such heteroaggregation processes are even more ramified than those obtained during DLCA (d ≈ 1.4). An important special case of a heteroaggregation process is the deposition of particles on a substrate. Early stages of the process correspond to the attachment of individual particles to the substrate, which can be pictured as another, much larger particle. Later stages may reflect blocking of the substrate through repulsive interactions between the particles, while attractive interactions may lead to multilayer growth, which is also referred to as ripening. These phenomena are relevant in membrane or filter fouling. Experimental techniques Numerous experimental techniques have been developed to study particle aggregation. Most frequently used are time-resolved optical techniques that are based on the transmittance or scattering of light. Light transmission. The variation of transmitted light through an aggregating suspension can be studied with a regular spectrophotometer in the visible region. As aggregation proceeds, the medium becomes more turbid, and its absorbance increases. The increase of the absorbance can be related to the aggregation rate constant $k$, and the stability ratio can be estimated from such measurements. The advantage of this technique is its simplicity. Light scattering. These techniques are based on probing the scattered light from an aggregating suspension in a time-resolved fashion. Static light scattering yields the change in the scattering intensity, while dynamic light scattering yields the variation in the apparent hydrodynamic radius. At early stages of aggregation, the variation of each of these quantities is directly proportional to the aggregation rate constant $k$. At later stages, one can obtain information on the clusters formed (e.g., fractal dimension). Light scattering works well for a wide range of particle sizes. Multiple scattering effects may have to be considered, since scattering becomes increasingly important for larger particles or larger aggregates. Such effects can be neglected in weakly turbid suspensions. 
Aggregation processes in strongly scattering systems have been studied with transmittance, backscattering techniques or diffusing-wave spectroscopy. Single particle counting. This technique offers excellent resolution, whereby clusters made out of tens of particles can be resolved individually. The aggregating suspension is forced through a narrow capillary particle counter and the size of each aggregate is analyzed by light scattering. From the scattering intensity, one can deduce the size of each aggregate, and construct a detailed aggregate size distribution. If the suspensions contain high amounts of salt, one could equally use a Coulter counter. As time proceeds, the size distribution shifts towards larger aggregates, and from this variation aggregation and breakup rates involving different clusters can be deduced. The disadvantage of the technique is that the aggregates are forced through a narrow capillary under high shear, and the aggregates may disrupt under these conditions. Indirect techniques. As many properties of colloidal suspensions depend on the state of aggregation of the suspended particles, various indirect techniques have been used to monitor particle aggregation too. While it can be difficult to obtain quantitative information on aggregation rates or cluster properties from such experiments, they can be most valuable for practical applications. Among these techniques, settling tests are most relevant. When one inspects a series of test tubes with suspensions prepared at different concentrations of the flocculant, stable suspensions often remain dispersed, while the unstable ones settle. Automated instruments based on light scattering/transmittance to monitor suspension settling have been developed, and they can be used to probe particle aggregation. One must realize, however, that these techniques may not always reflect the actual aggregation state of a suspension correctly. For example, larger primary particles may settle even in the absence of aggregation, or aggregates that have formed a colloidal gel will remain in suspension. Other indirect techniques capable of monitoring the state of aggregation include, for example, filtration, rheology, absorption of ultrasonic waves, or dielectric properties. Relevance Particle aggregation is a widespread phenomenon, which occurs spontaneously in nature but is also widely exploited in manufacturing. Some examples include: Formation of river deltas. When river water carrying suspended sediment particles reaches salty water, particle aggregation may be one of the factors responsible for river delta formation. Charged particles are stable in the river's fresh water containing low levels of salt, but they become unstable in sea water containing high levels of salt. In the latter medium, the particles aggregate, the larger aggregates sediment, and thus create the river delta. Papermaking. Retention aids are added to the pulp to accelerate paper formation. These aids are coagulating aids, which accelerate the aggregation between the cellulose fibers and filler particles. Frequently, cationic polyelectrolytes are used for that purpose. Water treatment. Treatment of municipal waste water normally includes a phase where fine solid particles are removed. This separation is achieved by addition of a flocculating or coagulating agent, which induces the aggregation of the suspended solids. The aggregates are normally separated by sedimentation, leading to sewage sludge. 
Commonly used flocculating agents in water treatment include multivalent metal ions (e.g., Fe3+ or Al3+), polyelectrolytes, or both. Cheese making. The key step in cheese production is the separation of the milk into solid curds and liquid whey. This separation is achieved by inducing aggregation of the casein micelles by acidifying the milk or adding rennet. The acidification neutralizes the carboxylate groups on the micelles and induces the aggregation. See also Aerosol Colloid Clarifying agent Double layer forces DLVO theory (stability of colloids) Electrical double layer Emulsion Flocculation Gel Nanoparticle Particle deposition Peptization Reaction rate Settling Smoluchowski coagulation equation Sol-gel Surface charge Suspension (chemistry) References External links Chemistry Materials science Colloidal chemistry
Particle aggregation
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
2,903
[ "Colloidal chemistry", "Applied and interdisciplinary physics", "Materials science", "Colloids", "Surface science", "nan" ]
4,063,091
https://en.wikipedia.org/wiki/Thermal%20physics
Thermal physics is the combined study of thermodynamics, statistical mechanics, and the kinetic theory of gases. This umbrella subject is typically designed for physics students and functions to provide a general introduction to each of the three core heat-related subjects. Other authors, however, define thermal physics more loosely as the summation of only thermodynamics and statistical mechanics. Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics with statistical mechanics. Overview Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and the second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. The electromagnetic nature of photons and phonons is studied, showing that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. See also Heat transfer physics Information theory Philosophy of thermal and statistical physics Thermodynamic instruments References Further reading External links Thermal Physics Links on the Web Physics education Thermodynamics
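As a minimal illustration of the canonical probability distribution mentioned above, the following sketch computes Boltzmann populations for a hypothetical three-level system; the level energies are arbitrary choices:

```python
import numpy as np

# Canonical probabilities p_i = exp(-E_i/kT) / Z; energies in units of kT.
E = np.array([0.0, 1.0, 2.0])   # level energies / kT (illustrative)
w = np.exp(-E)                  # Boltzmann factors
Z = w.sum()                     # canonical partition function
p = w / Z                       # canonical probabilities
print(f"Z = {Z:.4f}")
print(f"populations = {np.round(p, 4)}")
print(f"<E>/kT = {(p * E).sum():.4f}")
```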
Thermal physics
[ "Physics", "Chemistry", "Mathematics" ]
371
[ "Applied and interdisciplinary physics", "Thermodynamics", "Physics education", "Dynamical systems" ]
4,064,439
https://en.wikipedia.org/wiki/Free%20entropy
A thermodynamic free entropy is an entropic thermodynamic potential analogous to the free energy. It is also known as a Massieu, Planck, or Massieu–Planck potential (or function), or (rarely) as free information. In statistical mechanics, free entropies frequently appear as the logarithm of a partition function. The Onsager reciprocal relations, in particular, are developed in terms of entropic potentials. In mathematics, free entropy means something quite different: it is a generalization of entropy defined in the subject of free probability. A free entropy is generated by a Legendre transformation of the entropy. The different potentials correspond to different constraints to which the system may be subjected. Examples The most common examples are the Massieu potential $\Phi = S - \frac{1}{T}U = -\frac{F}{T}$ and the Planck potential $\Xi = \Phi - \frac{P}{T}V = -\frac{G}{T}$, where $S$ is entropy, $\Phi$ is the Massieu potential, $\Xi$ is the Planck potential, $U$ is internal energy, $T$ is temperature, $P$ is pressure, $V$ is volume, $F$ is the Helmholtz free energy, $G$ is the Gibbs free energy, $N_i$ is the number of particles (or number of moles) composing the $i$-th chemical component, $\mu_i$ is the chemical potential of the $i$-th chemical component, $s$ is the total number of components, and $i$ labels the components. Note that the use of the terms "Massieu" and "Planck" for explicit Massieu–Planck potentials is somewhat obscure and ambiguous. In particular "Planck potential" has alternative meanings. The most standard notation for an entropic potential is $\psi$, used by both Planck and Schrödinger. (Note that Gibbs used $\psi$ to denote the free energy.) Free entropies were invented by French engineer François Massieu in 1869, and actually predate Gibbs's free energy (1875). Dependence of the potentials on the natural variables Entropy By the definition of a total differential, $dS = \frac{\partial S}{\partial U}dU + \frac{\partial S}{\partial V}dV + \sum_{i=1}^{s}\frac{\partial S}{\partial N_i}dN_i$. From the equations of state, $dS = \frac{1}{T}dU + \frac{P}{T}dV + \sum_{i=1}^{s}\left(-\frac{\mu_i}{T}\right)dN_i$. The differentials in the above equation are all of extensive variables, so they may be integrated to yield $S = \frac{U}{T} + \frac{PV}{T} + \sum_{i=1}^{s}\left(-\frac{\mu_i N_i}{T}\right)$. Massieu potential / Helmholtz free entropy Starting over at the definition of $\Phi = S - \frac{1}{T}U$ and taking the total differential, we have via a Legendre transform (and the chain rule) $d\Phi = dS - \frac{1}{T}dU - U\,d\frac{1}{T} = -U\,d\frac{1}{T} + \frac{P}{T}dV + \sum_{i=1}^{s}\left(-\frac{\mu_i}{T}\right)dN_i$. The above differentials are not all of extensive variables, so the equation may not be directly integrated. From $d\Phi$ we see that $\Phi = \Phi\left(\frac{1}{T}, V, \{N_i\}\right)$. If reciprocal variables are not desired, $d\Phi = \frac{U}{T^2}dT + \frac{P}{T}dV + \sum_{i=1}^{s}\left(-\frac{\mu_i}{T}\right)dN_i$. Planck potential / Gibbs free entropy Starting over at the definition of $\Xi = \Phi - \frac{P}{T}V$ and taking the total differential, we have via a Legendre transform (and the chain rule) $d\Xi = d\Phi - \frac{P}{T}dV - V\,d\frac{P}{T} = -U\,d\frac{1}{T} - V\,d\frac{P}{T} + \sum_{i=1}^{s}\left(-\frac{\mu_i}{T}\right)dN_i$. The above differentials are not all of extensive variables, so the equation may not be directly integrated. From $d\Xi$ we see that $\Xi = \Xi\left(\frac{1}{T}, \frac{P}{T}, \{N_i\}\right)$. If reciprocal variables are not desired, $d\Xi = \frac{U + PV}{T^2}dT - \frac{V}{T}dP + \sum_{i=1}^{s}\left(-\frac{\mu_i}{T}\right)dN_i$. References Bibliography Thermodynamic entropy
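A one-line check of the identities above, using $F = U - TS$ and $G = U + PV - TS$:

```latex
\Phi = S - \frac{U}{T} = -\frac{U - TS}{T} = -\frac{F}{T},
\qquad
\Xi = \Phi - \frac{PV}{T} = -\frac{U - TS + PV}{T} = -\frac{G}{T}.
```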
Free entropy
[ "Physics" ]
509
[ "Statistical mechanics", "Entropy", "Physical quantities", "Thermodynamic entropy" ]
4,066,199
https://en.wikipedia.org/wiki/Frenkel%20defect
In crystallography, a Frenkel defect is a type of point defect in crystalline solids, named after its discoverer Yakov Frenkel. The defect forms when an atom or smaller ion (usually a cation) leaves its place in the structure, creating a vacancy, and becomes an interstitial by lodging in a nearby location. In elemental systems, Frenkel defects are primarily generated during particle irradiation, as their formation enthalpy is typically much higher than for other point defects, such as vacancies, and thus their equilibrium concentration according to the Boltzmann distribution is below the detection limit. In ionic crystals, which usually possess a low coordination number or a considerable disparity in the sizes of the ions, this defect can also be generated spontaneously, where the smaller ion (usually the cation) is dislocated. Similar to a Schottky defect, the Frenkel defect is a stoichiometric defect (it does not change the overall stoichiometry of the compound). In ionic compounds, the vacancy and interstitial defect involved are oppositely charged, and one might expect them to be located close to each other due to electrostatic attraction. However, this is not likely the case in real materials, due to the smaller entropy of such a coupled defect, or because the two defects might collapse into each other. Also, because such coupled complex defects are stoichiometric, their concentration will be independent of chemical conditions. Effect on density Even though Frenkel defects involve only the migration of the ions within the crystal, the total volume and thus the density do not necessarily remain unchanged: in particular for close-packed systems, the structural expansion due to the strains induced by the interstitial atom typically dominates over the structural contraction due to the vacancy, leading to a decrease of density. Examples Frenkel defects are exhibited in ionic solids with a large size difference between the anion and cation (with the cation usually smaller due to an increased effective nuclear charge). Some examples of solids which exhibit Frenkel defects: zinc sulfide, silver(I) chloride, silver(I) bromide (which also shows Schottky defects), and silver(I) iodide. These are due to the comparatively smaller size of the $\mathrm{Zn^{2+}}$ and $\mathrm{Ag^{+}}$ ions. For example, consider a structure formed by $\mathrm{X}^{n-}$ and $\mathrm{M}^{n+}$ ions. Suppose an M ion leaves the M sublattice, leaving the X sublattice unchanged. The number of interstitials formed will equal the number of vacancies formed. One form of a Frenkel defect reaction in MgO, with the oxide anion leaving the structure and going into the interstitial site, written in Kröger–Vink notation: $\mathrm{Mg_{Mg}^{\times} + O_{O}^{\times} \rightarrow O_{i}'' + v_{O}^{\bullet\bullet} + Mg_{Mg}^{\times}}$. This can be illustrated with the example of the sodium chloride crystal structure. The diagrams below are schematic two-dimensional representations. See also Deep-level transient spectroscopy (DLTS) Schottky defect Wigner effect Crystallographic defect References Further reading Crystallographic defects
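The equilibrium Frenkel-pair concentration alluded to above follows from the law of mass action; the standard textbook expression (not spelled out in the article itself) is:

```latex
n_{\mathrm F} \;\approx\; \sqrt{N\,N_i}\;
\exp\!\left(-\frac{\Delta H_{\mathrm F}}{2 k_{\mathrm B} T}\right),
```

where $N$ is the density of lattice sites, $N_i$ the density of available interstitial sites, and $\Delta H_{\mathrm F}$ the Frenkel-pair formation enthalpy; the factor of 2 arises because each pair comprises two defects.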
Frenkel defect
[ "Chemistry", "Materials_science", "Engineering" ]
627
[ "Crystallographic defects", "Crystallography", "Materials degradation", "Materials science" ]
4,066,812
https://en.wikipedia.org/wiki/Boiler%20water
Boiler water is liquid water within a boiler, or in associated piping, pumps and other equipment, that is intended for evaporation into steam. The term may also be applied to raw water intended for use in boilers, treated boiler feedwater, steam condensate being returned to a boiler, or boiler blowdown being removed from a boiler. Early practice Impurities in water will leave solid deposits as steam evaporates. These solid deposits thermally insulate heat exchange surfaces, initially decreasing the rate of steam generation and potentially causing boiler metals to reach failure temperatures. Boiler explosions were not uncommon until surviving boiler operators learned how to periodically clean their boilers. Some solids could be removed by cooling the boiler so differential thermal expansion caused brittle crystalline solids to crack and flake off metal boiler surfaces. Other solids were removed by acid washing or mechanical scouring. Various rates of boiler blowdown could reduce the frequency of cleaning, but efficient operation and maintenance of individual boilers were determined by trial and error until chemists devised means of measuring and adjusting water quality to minimize cleaning requirements. Boiler water treatment Boiler water treatment is a type of industrial water treatment focused on the removal or chemical modification of substances potentially damaging to the boiler. Varying types of treatment are used at different locations to avoid scale, corrosion, or foaming. External treatment of raw water supplies intended for use within a boiler is focused on the removal of impurities before they reach the boiler. Internal treatment within the boiler is focused on limiting the tendency of water to dissolve the boiler, and maintaining impurities in forms least likely to cause trouble before they can be removed from the boiler in boiler blowdown. Within the boiler At the elevated temperatures and pressures within a boiler, water exhibits different physical and chemical properties than those observed at room temperature and atmospheric pressure. Chemicals may be added to maintain pH levels minimizing water solubility of boiler materials while allowing efficient action of other chemicals added to prevent foaming, to consume oxygen before it corrodes the boiler, to precipitate dissolved solids before they form scale on steam-generating surfaces, and to remove those precipitates from the vicinity of the steam-generating surfaces. Oxygen scavengers Sodium sulphite or hydrazine may be used to maintain reducing conditions within the boiler. Sulphite is less desirable in boilers operating at pressures above , because sulfates formed by combination with oxygen may form sulfate scale or decompose into corrosive sulfur dioxide or hydrogen sulfide at elevated temperatures. Excess hydrazine may evaporate with steam to provide corrosion protection by neutralizing carbon dioxide in the steam condensate system; but it may also decompose into ammonia, which will attack copper alloys. Products based on filming amines such as Helamin may be preferred for corrosion protection of condensate systems with copper alloys. Coagulation Boilers operating at pressures less than may use unsoftened feedwater with the addition of sodium carbonate or sodium hydroxide to maintain alkaline conditions to precipitate calcium carbonate, magnesium hydroxide and magnesium silicate. 
Hard water treated this way causes a fairly high concentration of suspended solid particles within the boiler to serve as precipitation nuclei, preventing later deposition of calcium sulfate scale. Natural organic materials like starches, tannins and lignins may be added to control crystal growth and disperse precipitates. The soft sludge of precipitates and organic materials accumulates in quiescent portions of the boiler to be removed during bottom blowdown. Phosphates Boiler sludge concentrations created by coagulation treatment may be avoided by sodium phosphate treatment when water hardness is less than 60 mg/L. With adequate alkalinity, addition of sodium phosphate produces an insoluble precipitate of hydroxyapatite with magnesium hydroxide and magnesium and calcium silicates. Lignin may be processed for high temperature stability to control calcium phosphate scale and magnetic iron oxide deposits. Acceptable phosphate concentrations decrease from 140 mg/L in low pressure boilers to less than 40 mg/L at pressures above . Recommended alkalinity similarly decreases from 700 mg/L to 200 mg/L over the same pressure range. Foaming problems are more common with high alkalinity. Coordinated control of pH and phosphates attempts to limit caustic corrosion occurring from concentrations of hydroxyl ions under porous scale on steam generating surfaces within the boiler. High pressure boilers using demineralized water are most vulnerable to caustic corrosion. Hydrolysis of trisodium phosphate provides a pH buffer in equilibrium with disodium phosphate and sodium hydroxide. Chelants Chelants like ethylenediaminetetraacetic acid (EDTA) or nitrilotriacetic acid (NTA) form complex ions with calcium and magnesium. Solubility of these complex ions may reduce blowdown requirements if anionic carboxylate polymers are added to control scale formation. Potential decomposition at high temperatures limits chelant use to boilers operating at pressures less than . Decomposition products may cause metal corrosion in areas of stress and high temperature. Feedwater Many large boilers, including those used in thermal power stations, recycle condensed steam for re-use within the boiler. Steam condensate is distilled water, but it may contain dissolved gases. A deaerator is often used to convert condensate to feedwater by removing potentially damaging gases including oxygen, carbon dioxide, ammonia and hydrogen sulfide. Inclusion of a polisher (an ion-exchange vessel) helps to maintain water purity, and in particular protects the boiler from a condenser tube leak. Make-up water All boilers lose some water in steam leaks, and some is intentionally wasted as boiler blowdown to remove impurities accumulating within the boiler. Steam locomotives and boilers generating steam for use in direct contact with contaminating materials may not recycle condensed steam. Replacement water is required to continue steam production. Make-up water is initially treated to remove floating and suspended materials. Hard water intended for low-pressure boilers may be softened by substituting sodium for divalent cations of dissolved calcium and magnesium most likely to cause carbonate and sulfate scale. High-pressure boilers typically require water demineralized by reverse osmosis, distillation or ion-exchange. See also Dealkalization of water Sources References Boilers Water
Boiler water
[ "Chemistry", "Environmental_science" ]
1,285
[ "Water", "Boilers", "Hydrology", "Pressure vessels" ]
5,424,160
https://en.wikipedia.org/wiki/Chevalley%20basis
In mathematics, a Chevalley basis for a simple complex Lie algebra is a basis constructed by Claude Chevalley with the property that all structure constants are integers. Chevalley used these bases to construct analogues of Lie groups over finite fields, called Chevalley groups. The Chevalley basis is the Cartan-Weyl basis, but with a different normalization. The generators of a Lie group are split into the generators $H$ and $E_\alpha$ indexed by simple roots $\alpha_i$ and their negatives $-\alpha_i$. The Cartan-Weyl basis may be written as $[H, H'] = 0$, $[H, E_\alpha] = \alpha(H)\,E_\alpha$. Defining the dual root or coroot of $\alpha$ as $\alpha^\vee = \frac{2\alpha}{(\alpha,\alpha)}$, where $(\cdot,\cdot)$ is the euclidean inner product, one may perform a change of basis to define $H_{\alpha_i} = (\alpha_i^\vee, H)$. The Cartan integers are $A_{ij} = (\alpha_i, \alpha_j^\vee)$. The resulting relations among the generators are the following: $[H_{\alpha_i}, H_{\alpha_j}] = 0$, $[H_{\alpha_i}, E_{\alpha_j}] = A_{ji}\,E_{\alpha_j}$, $[E_{\alpha_i}, E_{-\alpha_i}] = H_{\alpha_i}$, $[E_\beta, E_\gamma] = \pm(p+1)\,E_{\beta+\gamma}$, where in the last relation $p$ is the greatest positive integer such that $\gamma - p\beta$ is a root and we consider $[E_\beta, E_\gamma] = 0$ if $\beta + \gamma$ is not a root. For determining the sign in the last relation one fixes an ordering of roots which respects addition, i.e., if $\beta \leq \gamma$ then $\beta + \alpha \leq \gamma + \alpha$ provided that all four are roots. We then call $(\beta, \gamma)$ an extraspecial pair of roots if they are both positive and $\beta$ is minimal among all $\beta'$ that occur in pairs of positive roots $(\beta', \gamma')$ satisfying $\beta' + \gamma' = \beta + \gamma$. The sign in the last relation can be chosen arbitrarily whenever $(\beta, \gamma)$ is an extraspecial pair of roots. This then determines the signs for all remaining pairs of roots. References Lie groups Lie algebras
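A quick sanity check in the smallest case: for $\mathfrak{sl}_2(\mathbb{C})$ the Chevalley generators can be taken as the standard matrices

```latex
h=\begin{pmatrix}1&0\\0&-1\end{pmatrix},\qquad
e=\begin{pmatrix}0&1\\0&0\end{pmatrix},\qquad
f=\begin{pmatrix}0&0\\1&0\end{pmatrix},
```

with $[h,e]=2e$, $[h,f]=-2f$, $[e,f]=h$, so all structure constants are visibly integers.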
Chevalley basis
[ "Mathematics" ]
282
[ "Algebra stubs", "Mathematical structures", "Lie groups", "Algebraic structures", "Algebra" ]
5,424,364
https://en.wikipedia.org/wiki/Voigt%20effect
The Voigt effect is a magneto-optical phenomenon which rotates and elliptizes linearly polarised light sent into an optically active medium. The effect is named after the German scientist Woldemar Voigt who discovered it in vapors. Unlike many other magneto-optical effects such as the Kerr or Faraday effect, which are linearly proportional to the magnetization (or to the applied magnetic field for a non-magnetized material), the Voigt effect is proportional to the square of the magnetization (or square of the magnetic field) and can be seen experimentally at normal incidence. There are also other denominations for this effect, used interchangeably in the modern scientific literature: the Cotton–Mouton effect (in reference to French scientists Aimé Cotton and Henri Mouton who discovered the same effect in liquids a few years later) and magnetic-linear birefringence, with the latter reflecting the physical meaning of the effect. For an incident electromagnetic wave linearly polarized and an in-plane polarized sample, the rotation in reflection geometry and in transmission geometry can both be expressed in terms of $\Delta n$, the difference of refraction indices depending on the Voigt parameter $Q$ (the same as for the Kerr effect), the material refraction indices, and the parameter $B_1$ responsible for the Voigt effect, which is proportional to the square of the magnetization (or of the field in the case of a paramagnetic material). Detailed calculation and an illustration are given in the sections below. Theory As with the other magneto-optical effects, the theory is developed in a standard way with the use of an effective dielectric tensor from which one calculates the system's eigenvalues and eigenvectors. As usual, from this tensor, magneto-optical phenomena are described mainly by the off-diagonal elements. Here, one considers an incident polarisation propagating in the z direction, with electric field $\mathbf{E}$, and a homogeneously in-plane magnetized sample whose magnetization direction is counted from the [100] crystallographic direction. The aim is to calculate $\delta\beta$, the rotation of polarization due to the coupling of the light with the magnetization. Experimentally, $\delta\beta$ is a small quantity of the order of mrad. $\mathbf{m}$ is the reduced magnetization vector defined by $\mathbf{m} = \mathbf{M}/M_s$, with $M_s$ the magnetization at saturation. We emphasize that it is because the light propagation vector is perpendicular to the magnetization plane that it is possible to see the Voigt effect. Dielectric tensor Following the notation of Hubert, the generalized dielectric cubic tensor takes a form set by $\varepsilon_1$, the material dielectric constant, the Voigt parameter $Q$, and two cubic constants $B_1$ and $B_2$ describing the magneto-optical effect and depending on $\mathbf{m}$, the reduced magnetization. The calculation is made in the spherical approximation, with $B_1 = B_2$. At the present moment, there is no evidence that this approximation is not valid, as the observation of the Voigt effect is rare because it is extremely small with respect to the Kerr effect. Eigenvalues and eigenvectors To calculate the eigenvalues and eigenvectors, we consider the propagation equation derived from the Maxwell equations, with the usual plane-wave convention: $\mathbf{k}\times(\mathbf{k}\times\mathbf{E}) + \frac{\omega^2}{c^2}\,\varepsilon\,\mathbf{E} = 0$. When the magnetization is perpendicular to the propagation wavevector, contrary to the Kerr effect, the electric field may have all three of its components different from zero, making calculations rather more complicated and making the Fresnel equations no longer valid. A way to simplify the problem consists in using the electric displacement vector $\mathbf{D}$. Since $\nabla\cdot\mathbf{D} = 0$ and $\mathbf{D} = \varepsilon\mathbf{E}$, we have $\mathbf{E} = \varepsilon^{-1}\mathbf{D}$. 
The inverse dielectric tensor may seem complicated to handle, but here the calculation was made for the general case; the demonstration can be followed more easily in simplifying limits. Eigenvalues and eigenvectors are found by solving the propagation equation on $\vec{D}$, which gives a linear system involving the elements $\varepsilon^{-1}_{ij}$ of the inverse dielectric tensor. After a straightforward calculation of the determinant of this system, one makes an expansion to second order in $Q$ and to first order in the cubic constants. This leads to two eigenvalues, corresponding to two refraction indices $n_\parallel$ and $n_\perp$, whose eigenvectors are polarized, to leading order, respectively parallel and perpendicular to the in-plane magnetization.
Reflection geometry
Continuity relation
Knowing the eigenvectors and eigenvalues inside the material, one has to calculate the reflected electromagnetic field, which is what is usually detected in experiments. We use the continuity equations for the tangential components of $\vec{E}$ and $\vec{H}$, where $\vec{H}$ is the magnetic field related to the induction through the Maxwell equations. Inside the medium, the electromagnetic field is decomposed on the two eigenmodes derived above, and the resulting system of equations is solved for the reflected field components.
Calculation of rotation angle
The rotation angle $\delta\beta$ and the ellipticity angle $\psi$ are defined from the ratio $\chi$ of the two reflected field components; for small angles, $\delta\beta = \operatorname{Re}(\chi)$ and $\psi = \operatorname{Im}(\chi)$, where $\operatorname{Re}$ and $\operatorname{Im}$ represent the real and imaginary parts. Using the two previously calculated components, one obtains the Voigt rotation, which, when $Q$ and the refraction indices are taken real, is proportional to $\Delta n \sin 2\phi$, where $\Delta n = n_\parallel - n_\perp$ is the difference of refraction indices. Consequently, the rotation depends on the orientation of the incident linear polarisation with respect to the magnetization: for $\phi$ a multiple of $\pi/2$, no Voigt rotation can be observed. The rotation is proportional to the square of the magnetization, since $\Delta n$ is itself quadratic in the magnetization components.
Transmission geometry
The calculation of the rotation of the Voigt effect in transmission is in principle equivalent to that of the Faraday effect. In practice, this configuration is generally not used for ferromagnetic samples, since the absorption length is short in this kind of material. However, the transmission geometry is more common for paramagnetic liquids or crystals, where the light can travel easily inside the material. The calculation for a paramagnetic material is exactly the same as for a ferromagnetic one, except that the magnetization is replaced by the applied field; for convenience, the field is reinstated at the end of the calculation in the magneto-optical parameters. Consider transmitted electromagnetic waves propagating in a medium of length $L$. At the position $z = L$, the transmitted field is a superposition of the two eigenmodes calculated previously, each carrying a phase set by its own refraction index; the relative phase between them is governed by $\Delta n$. The rotation is then calculated from the ratio of the transmitted field components, with an expansion to the same orders as before. Again one obtains a rotation proportional to $\sin 2\phi$ and to $L$, the light propagation length, and $\Delta n$ is proportional to the square of the magnetization in the same way as in the reflection geometry. In order to extract the Voigt rotation, the material parameters are taken real and the real part of the ratio is computed; in the approximation of no absorption, the Voigt rotation in transmission geometry takes the form $\delta\beta \simeq \frac{\omega \Delta n L}{2c}\sin 2\phi$.
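The transmission geometry can also be checked numerically with elementary Jones calculus, without any expansion. The sketch below propagates an x-polarized wave through a slab whose eigenmodes lie along and perpendicular to the magnetization; the wavelength, thickness and (complex) index difference are illustrative numbers, not parameters of any real material, and a small imaginary part of $\Delta n$ is included so that both rotation and ellipticity appear.

```python
import numpy as np

# Jones-calculus sketch of the transmission geometry.  The two eigenmodes
# (along and perpendicular to the magnetization, at angle phi from the
# incident x polarization) accumulate phases exp(+-i k0 dn L / 2);
# the common phase is dropped.  All numbers are illustrative.
lam = 800e-9                    # vacuum wavelength (m)
L   = 1e-6                      # sample thickness (m)
dn  = 1e-3 + 1e-4j              # n_par - n_perp, with a small dichroism
phi = np.deg2rad(30.0)

k0 = 2*np.pi/lam
R  = np.array([[np.cos(phi), -np.sin(phi)],
               [np.sin(phi),  np.cos(phi)]])
P  = np.diag([np.exp( 1j*k0*dn*L/2),
              np.exp(-1j*k0*dn*L/2)])
J  = R @ P @ R.T                # Jones matrix rotated back to the lab frame

E_out = J @ np.array([1.0, 0.0])
chi   = E_out[1] / E_out[0]     # small angles: rotation = Re, ellipticity = Im
print("rotation  (rad):", chi.real)
print("ellipticity    :", chi.imag)
```

Sweeping phi confirms the $\sin 2\phi$ dependence, and replacing phi by phi + pi (i.e. $\vec{m} \to -\vec{m}$) leaves the output unchanged, as expected for an effect quadratic in the magnetization.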
Illustration of the Voigt effect in (Ga,Mn)As
As an illustration of the application of the Voigt effect, we give an example in the magnetic semiconductor (Ga,Mn)As, where a large Voigt effect was observed. At low temperatures (in general well below the Curie temperature), (Ga,Mn)As with an in-plane magnetization exhibits a biaxial anisotropy, with the magnetization aligned along (or close to) the <100> directions. A typical hysteresis cycle containing the Voigt effect is shown in figure 1. This cycle was obtained by sending linearly polarized light along the [110] direction with an incident angle of approximately 3°, and measuring the rotation of the reflected light beam due to magneto-optical effects. In contrast to the common longitudinal/polar Kerr effect, the hysteresis cycle is even with respect to the magnetization, which is a signature of the Voigt effect. This cycle was obtained with a light incidence very close to normal, and it also exhibits a small odd part; a correct treatment has to be carried out in order to extract the symmetric part of the hysteresis corresponding to the Voigt effect, and the asymmetric part corresponding to the longitudinal Kerr effect. In the case of the hysteresis presented here, the field was applied along the [1-10] direction. The switching mechanism is as follows:
We start with a high negative field, and the magnetization is close to the [-1-10] direction at position 1.
The magnetic field decreases, leading to a coherent magnetization rotation from 1 to 2.
At positive field, the magnetization switches abruptly from 2 to 3 by nucleation and propagation of magnetic domains, giving a first coercive field, denoted here $H_1$.
The magnetization stays close to state 3 while rotating coherently to state 4, closer to the applied field direction.
The magnetization again switches abruptly from 4 to 5 by nucleation and propagation of magnetic domains. This switching occurs because the final equilibrium position is closer to state 5 than to state 4 (and so its magnetic energy is lower). This gives another coercive field, denoted $H_2$.
Finally, the magnetization rotates coherently from state 5 to state 6.
The simulation of this scenario is given in figure 2 (a minimal single-domain sketch of the same scenario is given in the code below). As one can see, the simulated hysteresis is qualitatively the same as the experimental one. Notice that the amplitude at $H_1$ or $H_2$ is approximately twice the amplitude elsewhere in the cycle.
See also: Atomic line filter, Cotton–Mouton effect, Faraday effect.
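As referenced above, the switching scenario can be mimicked with a toy single-domain model: an in-plane biaxial anisotropy energy plus a Zeeman term, with the local energy minimum followed as the field is swept. The anisotropy constant, the sweep range and the relaxation scheme below are all illustrative choices, and the model ignores domain nucleation, so the coercive fields it yields are not the experimental ones; it does, however, reproduce coherent rotation, abrupt jumps, and a hysteresis cycle that is even in the magnetization.

```python
import numpy as np

# Toy single-domain model: E(phi) = (K/4) sin^2(2 phi) - H cos(phi - phi_H),
# biaxial easy axes along <100>, field swept along [1-10].  K, the sweep
# range and the gradient-descent relaxation are illustrative only.
K     = 1.0
phi_H = np.deg2rad(-45.0)                     # [1-10], measured from [100]
sweep = np.concatenate([np.linspace(-2, 2, 401),
                        np.linspace(2, -2, 401)])

def relax(phi, H, eta=0.05, steps=2000):
    # follow the local energy minimum reached from the previous state
    for _ in range(steps):
        phi -= eta * (0.5*K*np.sin(4*phi) + H*np.sin(phi - phi_H))
    return phi

phi, signal = phi_H + np.pi, []  # antiparallel to the field at large negative H
for H in sweep:
    phi = relax(phi, H)
    signal.append(np.sin(2*phi))  # Voigt-like signal, even under m -> -m

# the up and down branches differ between the coercive fields:
# an even hysteresis cycle, as in the experiment
up, down = signal[:401], signal[401:]
print("max branch difference:", max(abs(u - d) for u, d in zip(up, down[::-1])))
```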
Voigt effect
[ "Physics", "Chemistry", "Materials_science" ]
1,924
[ "Physical phenomena", "Electric and magnetic fields in matter", "Astrophysics", "Optical phenomena", "Magneto-optic effects", "Polarization (waves)" ]
962,739
https://en.wikipedia.org/wiki/Institute%20of%20Chemistry%2C%20Slovak%20Academy%20of%20Sciences
The research activities of the Institute of Chemistry of the Slovak Academy of Sciences are aimed at the chemistry and biochemistry of saccharides. The main fields of interest may be classified into the following directions:
Synthesis and structure of biologically important mono- and oligosaccharides and their derivatives
Structure and functional properties of polysaccharides, their derivatives, and conjugates with other polymers
Structure, function, and mechanism of action of glycanases
Development of physicochemical methods for structural analysis of carbohydrates
Gene engineering and nutritional and biologically active proteins
Glycobiotechnology
Ecology, taxonomy, and phylogenesis of yeasts and yeast-like fungi
Development of technologies for isolation of natural compounds and preparation of saccharides and their derivatives for commercial purposes
Institute of Chemistry, Slovak Academy of Sciences
[ "Chemistry" ]
174
[ "Biochemistry research institutes", "Chemistry organization stubs", "Biochemistry organizations" ]
963,042
https://en.wikipedia.org/wiki/Finitely%20generated%20group
In algebra, a finitely generated group is a group G that has some finite generating set S, so that every element of G can be written as a combination (under the group operation) of finitely many elements of S and of inverses of such elements. By definition, every finite group is finitely generated, since S can be taken to be G itself. Every infinite finitely generated group must be countable, but countable groups need not be finitely generated: the additive group of rational numbers Q is an example of a countable group that is not finitely generated.
Examples
Every quotient of a finitely generated group G is finitely generated; the quotient group is generated by the images of the generators of G under the canonical projection.
A group that is generated by a single element is called cyclic. Every infinite cyclic group is isomorphic to the additive group of the integers Z. A locally cyclic group is a group in which every finitely generated subgroup is cyclic.
The free group on a finite set is finitely generated by the elements of that set. A fortiori, every finitely presented group is finitely generated.
Finitely generated abelian groups
Every abelian group can be seen as a module over the ring of integers Z, and in a finitely generated abelian group with generators x1, ..., xn, every group element x can be written as a linear combination of these generators, x = α1⋅x1 + α2⋅x2 + ... + αn⋅xn, with integers α1, ..., αn. Subgroups of a finitely generated abelian group are themselves finitely generated. The fundamental theorem of finitely generated abelian groups states that a finitely generated abelian group is the direct sum of a free abelian group of finite rank and a finite abelian group, each of which is unique up to isomorphism.
Subgroups
A subgroup of a finitely generated group need not be finitely generated: the commutator subgroup of the free group on two generators is an example of a subgroup of a finitely generated group that is not finitely generated. On the other hand, all subgroups of a finitely generated abelian group are finitely generated. A subgroup of finite index in a finitely generated group is always finitely generated, and the Schreier index formula gives a bound on the number of generators required. In 1954, Albert G. Howson showed that the intersection of two finitely generated subgroups of a free group is again finitely generated; furthermore, if m and n are the numbers of generators of the two finitely generated subgroups, then their intersection is generated by at most 2mn − m − n + 1 generators. This upper bound was then significantly improved by Hanna Neumann to 2(m − 1)(n − 1) + 1; see Hanna Neumann conjecture. The lattice of subgroups of a group satisfies the ascending chain condition if and only if all subgroups of the group are finitely generated. A group all of whose subgroups are finitely generated is called Noetherian. A group in which every finitely generated subgroup is finite is called locally finite. Every locally finite group is periodic, i.e., every element has finite order. Conversely, every periodic abelian group is locally finite.
Applications
Finitely generated groups arise in diverse mathematical and scientific contexts. A frequent way they do so is by the Švarc–Milnor lemma, or more generally thanks to an action through which a group inherits some finiteness property of a space.
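Before turning to these applications, the definition itself can be made concrete computationally. The sketch below enumerates a finite group from a finite generating set by breadth-first search over products of generators and their inverses; the choice of S4 and of these two particular generators is just an example.

```python
from collections import deque

# Illustration of finite generation: every element is a product of finitely
# many generators and inverses, so BFS over such products enumerates the
# whole group.  Permutations are tuples giving the images of 0..3.
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(q)))   # (p*q)(i) = p(q(i))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

gens = [(1, 0, 2, 3),        # the transposition (0 1)
        (1, 2, 3, 0)]        # the 4-cycle (0 1 2 3)
gens += [inverse(g) for g in gens]

identity = (0, 1, 2, 3)
seen, frontier = {identity}, deque([identity])
while frontier:              # BFS reaches each element by a shortest word
    g = frontier.popleft()
    for s in gens:
        h = compose(s, g)
        if h not in seen:
            seen.add(h)
            frontier.append(h)
print(len(seen))             # 24 = |S_4|: two elements generate the group
```

For an infinite finitely generated group such as Z², the same search enumerates the ball of a given radius in the word metric, which is precisely the geometric object studied in the applications below.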
Geometric group theory studies the connections between algebraic properties of finitely generated groups and topological and geometric properties of spaces on which these groups act.
Differential geometry and topology
Fundamental groups of compact manifolds are finitely generated. Their geometry coarsely reflects the possible geometries of the manifold: for instance, non-positively curved compact manifolds have CAT(0) fundamental groups, whereas uniformly positively curved manifolds have finite fundamental group (see Myers' theorem).
Mostow's rigidity theorem: for compact hyperbolic manifolds of dimension at least 3, an isomorphism between their fundamental groups extends to a Riemannian isometry.
Mapping class groups of surfaces are also important finitely generated groups in low-dimensional topology.
Algebraic geometry and number theory
Lattices in Lie groups, in p-adic groups...
Superrigidity, Margulis' arithmeticity theorem
Combinatorics, algorithmics and cryptography
Infinite families of expander graphs can be constructed thanks to finitely generated groups with property (T)
Algorithmic problems in combinatorial group theory
Group-based cryptography attempts to make use of hard algorithmic problems related to group presentations in order to construct quantum-resilient cryptographic protocols
Analysis and probability theory
Random walks on Cayley graphs of finitely generated groups provide approachable examples of random walks on graphs
Percolation on Cayley graphs
Physics and chemistry
Crystallographic groups
Mapping class groups appear in topological quantum field theories
Biology
Knot groups are used to study molecular knots
Related notions
The word problem for a finitely generated group is the decision problem of whether two words in the generators of the group represent the same element (a sketch of the special case of free groups is given below). The word problem for a given finitely generated group is solvable if and only if the group can be embedded in every algebraically closed group. The rank of a group is often defined to be the smallest cardinality of a generating set for the group; by definition, the rank of a finitely generated group is finite.
See also: Finitely generated module, Presentation of a group.
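As mentioned under "Related notions" above, the word problem is decidable for free groups by free reduction: two words represent the same element if and only if their freely reduced forms coincide. Below is a minimal sketch of that special case; the data layout and names are choices of this illustration, and the method does not extend to arbitrary finitely presented groups, where the word problem can be undecidable.

```python
# Word problem in a free group: words are sequences of (generator, +-1);
# free reduction cancels adjacent inverse pairs with a stack.  Two words
# represent the same group element iff their reduced forms are equal.
def reduce_word(word):
    stack = []
    for letter in word:                     # letter = (symbol, +1 or -1)
        if stack and stack[-1][0] == letter[0] and stack[-1][1] == -letter[1]:
            stack.pop()                     # cancel x x^{-1} or x^{-1} x
        else:
            stack.append(letter)
    return stack

w1 = [('a', 1), ('b', 1), ('b', -1), ('a', -1), ('a', 1)]   # a b b^-1 a^-1 a
w2 = [('a', 1)]                                             # a
print(reduce_word(w1) == reduce_word(w2))   # True: both reduce to "a"
```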
Finitely generated group
[ "Mathematics" ]
1,135
[ "Mathematical structures", "Properties of groups", "Group theory", "Fields of abstract algebra", "Algebraic structures" ]