id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
28,135,846 | https://en.wikipedia.org/wiki/Ring%20a%20Ding%20Dong | "Ring a Ding Dong" is Japanese musician Kaela Kimura's 16th physical single, released on June 9, 2010. The song was used in a wide-scale commercial campaign for NTT DoCoMo, which featured Kimura in the commercials.
Background
This single is the first to be released after Kimura's hit song "Butterfly," and the first after the announcement of her marriage to Japanese actor Eita.
Music video
The music video was shot by director Takeshi Nakamura. It features Kimura in a brightly painted room with many doors, dancing to the song along with four dancers dressed in maid outfits and suits and holding umbrellas. Throughout the video, more and more exotically dressed dancers enter the room and begin dancing.
Track listing
*Tracks 2–9 are live recordings from Kimura's Live Tour 2010 "5 Years" at Nippon Budōkan (March 27–28, 2010).
Chart rankings
Reported sales and certifications
See also
List of Oricon number-one singles of 2010
List of number-one digital singles of 2010 (Japan)
References
External links
Columbia "Ring a Ding Dong" profile
Kaela Kimura songs
2010 singles
RIAJ Digital Track Chart number-one singles
Oricon Weekly number-one singles
Songs about telephones
Songs in Japanese
Songs written by Kaela Kimura
NTT Docomo
2010 songs | Ring a Ding Dong | [
"Technology"
] | 271 | [
"Members of the Conexus Mobile Alliance",
"NTT Docomo"
] |
44,969,442 | https://en.wikipedia.org/wiki/Archimedean%20graph | In the mathematical field of graph theory, an Archimedean graph is a graph that forms the skeleton of one of the Archimedean solids. There are 13 Archimedean graphs, and all of them are regular, polyhedral (and therefore by necessity also 3-vertex-connected planar graphs), and also Hamiltonian graphs.
Along with the 13, the infinite sets of prism graphs and antiprism graphs can also be considered Archimedean graphs.
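These defining properties can be spot-checked computationally. The following is a minimal sketch assuming the third-party networkx library; the generator and test functions used are networkx's own, not anything taken from the reference by Read and Wilson:

```python
import networkx as nx

# Truncated tetrahedron: one of the 13 Archimedean graphs.
G = nx.truncated_tetrahedron_graph()

# Every Archimedean graph is regular: all vertices share the same degree (here 3).
degrees = {d for _, d in G.degree()}
print("regular:", len(degrees) == 1, "degree:", degrees)

# Polyhedral graphs are planar and 3-vertex-connected.
is_planar, _ = nx.check_planarity(G)
print("planar:", is_planar)
print("3-connected:", nx.node_connectivity(G) == 3)

# Prism graphs (circular ladders) extend the family to an infinite set.
prism = nx.circular_ladder_graph(6)  # hexagonal prism
print("prism is 3-regular:", all(d == 3 for _, d in prism.degree()))
```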
See also
Platonic graph
Wheel graph
References
Read, R. C. and Wilson, R. J. An Atlas of Graphs, Oxford, England: Oxford University Press, 2004 reprint, Chapter 6 special graphs pp. 261, 267–269.
External links
Regular graphs
Planar graphs | Archimedean graph | [
"Mathematics"
] | 155 | [
"Graph theory stubs",
"Planar graphs",
"Graph theory",
"Mathematical relations",
"Planes (geometry)"
] |
44,969,812 | https://en.wikipedia.org/wiki/Dextroscope | The Dextroscope is a medical equipment system that creates a virtual reality (VR) environment in which surgeons can plan neurosurgical and other surgical procedures.
The Dextroscope is designed to show a patient's 3D anatomical relationships and pathology in great detail. Although its main purpose is planning surgery, the Dextroscope has also proven useful in research in cardiology, radiology and medical education.
History
The Dextroscope started as a research project in the mid-1990s at the Kent Ridge Digital Labs research institute (part of Singapore's Agency for Science, Technology and Research (A*STAR)). It was initially named the Virtual Workbench and was commercialized in 2000 by the company Volume Interactions Pte Ltd under the name Dextroscope. In 2021, A*STAR selected the Dextroscope as one of the 30 innovations and inventions that pushed scientific boundaries, made an economic impact or improved lives over its 30-year history (A*STAR@30: 30 Innovations and Inventions Over Three Decades).
The Dextroscope was designed as a practical variation of virtual reality, offering an alternative to the trend towards full immersion that prevailed in the 1990s: instead of immersing the whole user in a virtual world, it immersed just the neurosurgeon in the patient data.
Description
The Dextroscope allows its user to interact intuitively with a Virtual Patient. This Virtual Patient is composed of computer-generated 3D multi-modal images obtained from any DICOM tomographic data including CT, MRI, MRA, MRV, functional MRI and CTA, PET, SPECT and Tractography. The Dextroscope can work with any multi-modality combination, supporting polygonal meshes as well.
The surgeon sits at the Dextroscope 3D interaction console and manipulates the Virtual Patient using both hands, similar to real life. Using stereoscopic visualisations displayed via a mirror, the surgeon sees the Virtual Patient floating behind the mirror but within easy reach of the hands. The surgeon uses flexible 3D hand movements to rotate and manipulate the object of interest. The Dextroscope allows virtual segmentation of organs and structures, making accurate 3D measurements, etc.
In one hand the surgeon holds a handle with a switch that, when pressed, allows the 3D image to be moved freely as if it were an object held in real space. The other hand holds a pencil shaped stylus that the surgeon uses to select tools from a virtual control panel and perform detailed manipulations on the 3D image.
The surgeon does not see the stylus, handle or his/her hands directly, as they are hidden behind the surface of the mirror. Instead he/she sees a virtual handle and stylus calibrated to appear in exactly the same position as the real handle and stylus. The virtual handle can serve as a drill tool, measurement tool, cutter, etc.
The Dextroscope allows surgeons to interact with and manipulate the Virtual Patient, such as simulating inter-operative viewpoints or the removal of bone and soft tissue. The surgeon is able to reach inside and manipulate the image interior.
Virtual tools
The Dextroscope provides virtual tools to manipulate the 3D image. The surgeon can use them within the virtual patient to extract surgically relevant structures such as the cortex, a tumor or blood vessels, or to adjust the color and transparency of displayed structures to see deep inside the patient. The surgeon can simulate the removal of bone using a simulated skull-drilling tool.
Typical structures that can be segmented are tumors, blood vessels, aneurysms, parts of the skull base, and organs. Segmentation is done either automatically (when the structures are demarcated clearly by their outstanding image intensity - such as the cortex) or through user interaction (using for example an outlining tool to define the extent of the structure manually).
A virtual ‘pick’ tool allows the user to pick a segmented object and uncouple it from its surroundings for closer inspection. A measurement tool provides accurate measurement of straight and curving 3D structures such as the scalp, and measures angles, such as those between vessels or bony structures (for example, when planning the insertion of a screw into the spine).
Neurosurgery planning – case studies and evaluations
The use of the Dextroscope has been reported for several neurosurgical clinical scenarios:
- cerebral arteriovenous malformations
- aneurysms
- cranial nerve decompression (in cases of trigeminal neuralgia and hemifacial spasm)
- meningiomas (convexity, falcine or parasagittal)
- ependymomas or subependymomas
- craniopagus twin separation
- transnasal approaches
- key-hole approaches
- epilepsy
- and a great variety of deep-brain and skull base tumors (pituitary adenomas, craniopharyngiomas, arachnoid cysts, colloid cysts, cavernomas, hemangioblastomas, chordomas, epidermoids, gliomas, jugular schwannomas, aqueductal stenosis, stenosis of Monro foramen, hippocampal sclerosis).
Brain and spine pathologies, such as cervical fractures of the spine, syringomyelia, and sacral nerve root neurinomas, have also been evaluated.
Further uses of the Dextroscope in neurosurgery have also been reported.
Other surgical specialties
The Dextroscope has also been applied outside of neurosurgery to benefit patients presenting a surgical challenge: an anatomical or structural complexity that requires planning of the surgical (or interventional) approach. Examples include ENT, orthopedic, trauma and cranio-facial surgery, cardiac surgery and liver resection.
Dextroscope and diagnostic imaging
The Dextroscope is not just for surgeons; radiologists use it too. The rapid growth in the multi-modal diagnostic imaging data routinely available has increased their workload tremendously. Using the Dextroscope, radiologists can reconstruct multimodal models from high volumes of 2D slices, facilitating a better understanding of the 3D anatomical structures and helping with diagnosis.
Furthermore, the Dextroscope virtual reality environment helps bridge the gap between radiology and surgery by allowing the radiologist to easily demonstrate important 3D structures to surgeons in a way that surgeons are familiar with. This demonstration capability also makes it useful as a platform for medical educators to convey 3D information to students. To reach a larger group of people in a classroom or auditorium, a version called the Dextrobeam was manufactured.
The Dextroscope was installed at a number of medical and research institutions around the world.
Dextroscope in the operating room: DEX-Ray
The Dextroscope was a pre-operative planning system which created 3D patient-specific virtual models. To bring the patient data into the operating room, in particular for neurosurgery, the DEX-Ray augmented reality neurosurgical navigation system was developed in 2006-2008. DEX-Ray overlaid 3D virtual patient information on a video stream obtained from a proprietary handheld tracked video probe designed by the company. This allowed image guidance by displaying co-registered planning data over the real images of the patient seen by the video camera, so that the clinician had 'see-through' visualization of the patient's head, helping to plan the craniotomy and providing guidance during the intervention. The DEX-Ray was clinically tested at the Singapore National Neuroscience Institute (Singapore) and at the Hospital Clinic Barcelona (Spain). It was not released as a commercial product.
Commercialization
The Dextroscope and Dextrobeam were products of Volume Interactions Pte Ltd (a member of the Bracco Group), a company spun off from the Kent Ridge Digital Labs research institute in Singapore. They received USA FDA 510(k) class II clearance (2002), CE Marking class I (2002), China SFDA registration class II (2004) and Taiwan registration type P (Radiology) (2007). For a comprehensive overview of the Dextroscope refer to the Springer International Publishing book chapter.
References
Human–computer interaction
Virtual reality
Neuroimaging software
Medical education | Dextroscope | [
"Engineering"
] | 1,690 | [
"Human–computer interaction",
"Human–machine interaction"
] |
44,970,184 | https://en.wikipedia.org/wiki/National%20GPS%20Network | The British National GPS Network, known as OS Net, is a network of global navigation satellite system GNSS base stations covering Great Britain. It is managed by Ordnance Survey.
It provides access to a stable, national coordinate reference system (through downloaded GNSS data) that allows highly accurate location to be determined using suitable equipment, and is used in the surveying, construction and precision agriculture industries, among other uses. The use of ground-based reference stations makes this system more accurate than standalone satellite positioning.
Using a single receiver, without any additional corrections, a civilian user can achieve a positional accuracy of 5–10 m 95% of the time, and a height accuracy of 15–20 m 95% of the time. Combined with data or corrections from a service such as OS Net, a positional accuracy of 1–2 cm is achievable, depending on the equipment used and environmental factors.
References
Global Positioning System
Ordnance Survey | National GPS Network | [
"Technology",
"Engineering"
] | 193 | [
"Global Positioning System",
"Aerospace engineering",
"Wireless locating",
"Aircraft instruments"
] |
44,970,610 | https://en.wikipedia.org/wiki/Journal%20of%20Materials%20Chemistry%20C | The Journal of Materials Chemistry C is a weekly peer-reviewed scientific journal covering the properties, applications, and synthesis of new materials related to optical, magnetic and electronic devices. It is one of the three journals created from the splitting of Journal of Materials Chemistry at the end of 2012. Its first issue was published in January 2013. The journal is published by the Royal Society of Chemistry and has two sister journals, Journal of Materials Chemistry A and Journal of Materials Chemistry B. The editor-in-chief for the Journal of Materials Chemistry family of journals is currently Nazario Martin. The deputy editor-in-chief for Journal of Materials Chemistry C is Natalie Stingelin.
Abstracting and indexing
The journal is abstracted and indexed in the Science Citation Index.
See also
List of scientific journals in chemistry
Materials Horizons
Journal of Materials Chemistry A
Journal of Materials Chemistry B
References
External links
Chemistry journals
Materials science journals
Royal Society of Chemistry academic journals
Weekly journals
Academic journals established in 2013
English-language journals | Journal of Materials Chemistry C | [
"Materials_science",
"Engineering"
] | 198 | [
"Materials science journals",
"Materials science"
] |
44,973,140 | https://en.wikipedia.org/wiki/Black%20and%20burst | Black and burst, also known as bi-level sync and black burst, is an analogue signal used in broadcasting. It is a composite video signal with a black picture. It is a reference signal used to synchronise video equipment, in order to have them output video signals with the same timing. This allows seamless switching between two video signals.
Black and burst can also be used to synchronise colour phase and provides timing accuracy on the order of tens of nanoseconds, which is necessary to perform e.g. analogue video mixing.
Black and burst exists for various colour TV standards, such as PAL, NTSC and SECAM. Because the black and burst signal is a normal video signal, it is transportable via normal video cables and through video distribution equipment.
History
Before colour TV existed, the reference signal was likewise a black video signal. Timing inaccuracies merely shifted the video picture.
With the introduction of colour, the reference had to be much more accurate. In every composite video signal a reference colour burst is present in the horizontal blanking interval, so all equipment in the chain is synchronised roughly 16000 times per second. This regular synchronisation is necessary because the colour information is transmitted via quadrature amplitude modulation of the high-frequency colour subcarrier. Incorrect synchronisation means the phase will be off, and consequently the colour will be wrong. Creating broadcast television usually involves mixing video signals. When doing this in an analogue way, it is essential that all signals have the same colour phase, which was achieved by synchronising all cameras with a black and burst signal. Because of cable length differences, every camera required a (often only slightly) different timing. This could be tuned at the reference source and/or at the camera.
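As a rough worked example of why cable lengths matter (the propagation speed in coaxial cable is assumed here to be about two-thirds of the speed of light; the cable lengths are illustrative):

```python
# Rough illustration of how cable-length differences translate into colour-phase error.
F_SC = 4.43361875e6   # PAL colour subcarrier frequency, Hz
V_CABLE = 2e8         # assumed signal speed in coaxial cable (~0.66 c), m/s

def phase_error_deg(extra_cable_m: float) -> float:
    """Subcarrier phase shift caused by an extra length of cable."""
    delay_s = extra_cable_m / V_CABLE
    return 360.0 * F_SC * delay_s

for metres in (1, 5, 10):
    print(f"{metres:>2} m extra cable -> "
          f"{metres / V_CABLE * 1e9:5.1f} ns delay, "
          f"{phase_error_deg(metres):5.1f} deg of subcarrier phase")
# ~10 m of extra cable already shifts the colour phase by tens of degrees,
# which is why each camera's timing had to be trimmed individually.
```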
Black and burst is being replaced by tri-level sync, but as of 2020 it is still quite common. Because signal chains are now digital, which allows buffering, the timing requirements are no longer as strict.
Waveform
Its waveform is a negative sync pulse with a level of −40 IRE, followed by 10 cycles of the colour subcarrier (the burst). For most variants of PAL video, the frequency of the subcarrier is 4.43361875 MHz.
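From the figures above, the burst duration and the repetition rate of the reference can be worked out directly (taking the PAL raster of 625 lines at 25 frames per second):

```python
F_SC = 4.43361875e6        # PAL colour subcarrier, Hz
CYCLES_IN_BURST = 10

burst_duration_us = CYCLES_IN_BURST / F_SC * 1e6
line_rate_hz = 625 * 25    # PAL: 625 lines per frame, 25 frames per second

print(f"burst duration : {burst_duration_us:.2f} us")   # ~2.26 us
print(f"line rate      : {line_rate_hz} Hz")            # 15625 lines/s, i.e. the
                                                         # "roughly 16000 times per second" above
```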
See also
Genlock
SMPTE 2059
References
Synchronization
Film and video technology
Broadcast engineering
Television terminology | Black and burst | [
"Engineering"
] | 484 | [
"Broadcast engineering",
"Electronic engineering",
"Telecommunications engineering",
"Synchronization"
] |
44,974,942 | https://en.wikipedia.org/wiki/Alkynol | In organic chemistry, alkynols (hydroxyalkynes) are organic compounds that contain both alkyne and alcohol functional groups. Thus, as structural features, they have a C≡C triple bond and a hydroxyl group. Some alkynols play a role as intermediates in the chemical industry.
The shortened term ynol typically refers to alkynols with the hydroxyl group affixed to one of the two carbon atoms composing the triple bond (R–C≡C–OH); they are the triple-bond analogues of enols. Ynols can tautomerize to ketenes.
The deprotonated anions of ynols are known as ynolates, the triple-bond analogues to enolates.
Synthesis
Alkynols may be formed by the alkynylation of carbonyl compounds, usually in liquid ammonia.
Ynolates
Ynolates are chemical compounds with a negatively charged oxygen atom attached to an alkyne functionality. They were first synthesized in 1975 by Schöllkopf and Hoppe via the n-butyllithium fragmentation of 3,4-diphenylisoxazole.
Synthetically, they behave as ketene precursors or synthons.
Ynol–ketene tautomerism
Ynols can interconvert with ketenes, much like enols can with aldehydes and ketones. The ynol tautomer is usually unstable, does not survive long, and changes into the ketene. This is because oxygen is more electronegative than carbon and thus forms stronger bonds. For instance, ethynol quickly interconverts with ethenone: HC≡C–OH ⇌ H2C=C=O.
Literature
Allinger, Cava, de Jongh, Johnson, Lebel, Stevens: Organische Chemie, 1st edition, Walter de Gruyter, Berlin 1980, p. 749.
Beyer / Walter: Lehrbuch der Organischen Chemie, 19th edition, S. Hirzel Verlag, Stuttgart 1981, pp. 98–99, 122.
K. Peter C. Vollhardt, Neil E. Schore: Organische Chemie, 4th edition, Wiley-VCH, Weinheim 2005, p. 632.
See also
Thioynol
References
Functional groups | Alkynol | [
"Chemistry"
] | 480 | [
"Functional groups"
] |
29,500,273 | https://en.wikipedia.org/wiki/Toroidal%20expansion%20joint | A Toroidal expansion joint is a metallic assembly that consists of a series of toroidal convolutions which are circular tubes wrapped around pipe ends or weld ends and have a gap at the inside diameter to allow for axial stroke while absorbing changes in expansion or contraction of the pipe line. Convolutions are the portion of the bellows that allow it to be flexible. The convolutions are formed around reinforcing bands so that only the concave portion of the torus allows for flexibility. Toroidal expansion joints are typically used in high pressure applications, where little movement is required, and generally used for heat exchangers. Usually, they are hydraulically formed, but others are free formed. These expansion joints are also referred to as "Omega" bellows due to their shape resembling the Greek letter, Omega.
References
External links
Expansion Joint Manufacturers Association EJMA http://www.ejma.org
Structural connectors | Toroidal expansion joint | [
"Physics",
"Engineering"
] | 193 | [
"Structural engineering",
"Refractory materials",
"Materials",
"Structural connectors",
"Matter"
] |
29,503,316 | https://en.wikipedia.org/wiki/Centrifugal%20micro-fluidic%20biochip | The centrifugal micro-fluidic biochip or centrifugal micro-fluidic biodisk is a type of lab-on-a-chip technology, also known as lab-on-a-disc, that can be used to integrate processes such as separating, mixing, reaction and detecting molecules of nano-size in a single piece of platform, including a compact disk or DVD. This type of micro-fluidic biochip is based upon the principle of microfluidics to take advantage of non-inertial pumping; for lab-on-a-chip devices using non-inertial valves and switches under centrifugal force and Coriolis effect, this is in order to distribute fluids about the disks in a highly parallel order.
This biodisk is an integration of multiple technologies from different areas. The designer must be familiar with the biological testing process before designing the complex micro-structures in the compact disk. Basic components such as valves, mixing units and separating units are all needed to complete the full testing process. The basic principles applied in such micro-fluidic structures are centrifugal force, the Coriolis effect and surface tension. Micromachining techniques, including patterning, photolithography and etching, are used once the design is verified. Once the testing process is successful on the biodisk, detection can begin. Many detection methods have been proposed by scientists in this area; the most popular is the immunoassay, which is widely used in biological testing. The final step is reading data from the biodisk by means of a CD drive, with software or hardware modified to achieve this function. One example is reading data from the biodisk using a common CD drive with dedicated software, which has the advantage of low cost.
Once the centrifugal micro-fluidic biochip is developed well enough to be manufactured on a large scale, it is expected to have a broad impact on industry as well as on medical care, especially in developing countries where high-precision equipment is not available. People in developed countries who wish to perform such regular home-care tests can also benefit from this technology.
History
The centrifugal microfluidic platform, including the chip and the device, has been a focus of academic and industrial research efforts for almost 40 years. Primarily targeting biomedical applications, a range of assays have been adapted to the system. The platform has found success as a research and clinical tool and has recently been further commercialized. This micro-fluidic lab-on-a-chip technology has experienced a rapid surge over the last 10–15 years, and new developments in centrifugal microfluidic technologies have the potential to establish widespread utilization of the platform. Different liquid-handling platforms have been developed to implement unit operations such as sample take-up, sample preconditioning, reagent supply, metering, aliquoting, valving, routing, mixing, incubation, washing, as well as analytical or preparative separations. The integration of sample preparation, incubation and analysis on a self-contained disc, in a device that controls the spinning automatically, enables sample-to-answer diagnosis on a point-of-care biomedical platform.
Dr. Marc Madou at UC Irvine is one of the leaders in centrifugal micro-fluidic biochip research. He has carried out several research projects in this area, with successes such as pneumatic pumping in centrifugal microfluidic platforms, integration of 3D carbon-electrode dielectrophoresis, and serial siphon valving. His group members are working on projects including cell lysis, PCR card, DNA hybridization, anthrax diagnostics and respiratory virus detection (see external links). Dr. Hua-Zhong Yu at Simon Fraser University has also made great progress in this area, proposing a new digitized molecular diagnostic reading method and a new DNA detection method on plastic CDs (see external links). Dr. Gang Logan Liu at UIUC is currently also focusing on this area (see external links).
Structure design
The structure design is based on the principles of microfluidics, and typical components are combined on the platform. Many structures for centrifugal microfluidic biochips have been developed, with more under development. Madou's group invented the valve-chamber structure in 2004. In recent years, Saki Kondo released a vertical liquid transportation structure, which extended the design into a three-dimensional concept. Madou's group also invented a serial siphon valving structure which makes flow control much easier. Hong Chen created a spiral microchannel which allows parallel testing with more steps.
Principle
The operating principle of the centrifugal micro-fluidic biochip involves the basic forces acting on a particle as well as the principles of flow control.
For a particle in the flow the basic forces are centrifugal force, Coriolis force, Euler force and viscous force.
The centrifugal force acts as the pump in the fluid flow: it provides the basic driving force that transfers fluid from the inner radius of the CD to the outer radius. Its magnitude is determined by the radial position of the particle and the rotational speed. The formula for the centrifugal force density is f_ω = ρω²r, where ρ is the mass density of the liquid, ω the angular frequency and r the (radial) distance between the particle and the center of the disk.
The formula for the Coriolis force density is f_C = −2ρ(ω × u), where u is the flow velocity.
The Coriolis force arises when the liquid has a velocity component along the radial direction. This force is generally smaller than the centrifugal force when the rotating speed is not high. At high angular frequencies, however, the Coriolis force significantly affects the flow of liquid, and it is often used to split the fluid flow in the separation unit.
Another basic force is the Euler force, which arises from the angular acceleration of the disk; when the CD rotates at a constant speed, the Euler force vanishes. The formula for the Euler force density is f_E = −ρ(dω/dt) × r.
For a particle moving relative to the fluid there is also a viscous (Stokes) drag force, F_v = 6πηa·Δu, where η is the dynamic viscosity of the liquid, a the particle radius and Δu the velocity of the particle relative to the surrounding flow.
As for the entire fluid flow, surface tension plays an important role in flow control. When the flow encounters a varying cross section, the surface tension can balance the centrifugal force and thereby block the flow of liquid. A higher rotation speed is then necessary for the liquid to enter the next chamber. In this way, thanks to surface tension, the flow process is divided into several steps, which makes flow control simpler to realize.
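Using the force densities defined above, a short calculation shows how the different terms compare at a typical spin speed (all numbers below are illustrative, not taken from a specific device):

```python
import math

# Compare centrifugal and Coriolis force densities for water on a spinning disc.
rho = 1000.0                     # density of water, kg/m^3
rpm = 1200.0                     # disc rotation speed (illustrative)
omega = 2 * math.pi * rpm / 60   # angular frequency, rad/s
r = 0.03                         # radial position, m
u = 0.01                         # radial flow velocity, m/s

f_centrifugal = rho * omega**2 * r   # rho * omega^2 * r
f_coriolis = 2 * rho * omega * u     # |2 rho (omega x u)| for u perpendicular to omega

print(f"centrifugal force density   : {f_centrifugal:10.1f} N/m^3")
print(f"Coriolis force density      : {f_coriolis:10.1f} N/m^3")
print(f"ratio Coriolis/centrifugal  : {f_coriolis / f_centrifugal:.4f}")
# At this spin speed the Coriolis term is well below one percent of the
# centrifugal term; it only becomes significant at higher omega or larger u.
```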
Typical component
There are various typical units in a centrifugal microfluidic structure, including valves, volume metering, mixing and flow switching. These types of units can make up structures that can be used in a variety of ways.
Valves
The principle of valves is the balance between centrifugal force and surface tension. When the centrifugal force is smaller than the surface tension, the liquid is held in the original chamber; when the centrifugal force overcomes the surface tension at a higher rotating speed, the liquid breaks through the valve and flows into the next chamber. This can be used to control the flow process simply by controlling the rotating speed of the disk.
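This balance can be put into numbers with a simplified capillary burst-valve model (all geometric and fluid values below are illustrative assumptions, not data from a specific device):

```python
import math

# Simplified capillary burst-valve model: the liquid breaks through when the
# centrifugally generated pressure exceeds the capillary pressure barrier.
rho = 1000.0                  # liquid density, kg/m^3
sigma = 0.072                 # surface tension of water, N/m
theta = math.radians(120.0)   # contact angle on a hydrophobic patch (assumed)
d_h = 100e-6                  # hydraulic diameter of the constriction, m (assumed)
r_mean = 0.03                 # mean radial position of the liquid plug, m
dr = 0.005                    # radial extent of the liquid plug, m

p_capillary = 4 * sigma * abs(math.cos(theta)) / d_h       # pressure barrier, Pa
omega_crit = math.sqrt(p_capillary / (rho * r_mean * dr))   # rho*omega^2*r*dr = p_cap
rpm_crit = omega_crit * 60 / (2 * math.pi)

print(f"capillary barrier : {p_capillary:.0f} Pa")
print(f"burst speed       : {rpm_crit:.0f} rpm")   # on the order of ~1000 rpm here
```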
The most commonly used valves include the hydrophilic valve, the hydrophobic valve, the siphon valve and the sacrificial valve.
For hydrophilic and hydrophobic valves, the origin of the surface tension barrier is almost the same: it is the sudden change in the cross section of the channel that generates it. In a hydrophilic channel the liquid is held where the cross section suddenly widens, while in a hydrophobic channel the flow is held where the cross section suddenly narrows.
The siphon valve is based on the siphon phenomenon. When the cross-section of the channel is small enough, the liquid in the chamber can flow along the channel due to surface tension. Unlike hydrophilic or hydrophobic valves, surface tension acts as a pump in this model while centrifugal force acts as resistance.
The sacrificial valve is a technique that is controlled by laser irradiation. These sacrificial valves are composed of iron oxide nanoparticles dispersed in paraffin wax. Upon excitation with a laser diode, iron oxide nanoparticles within the wax act as integrated nanoscale heating elements, causing the wax to quickly melt at relatively low intensities of laser diode excitation. The valve operation is independent of the spin speed or the location of the valves and therefore allows for more complex biological assays integrated on the disk.
Volume metering
Volume metering is a typical function of centrifugal fluidics used to obtain a defined amount of liquid reagent. It can be achieved by simply connecting an overflow channel to the chamber. Once the liquid reaches the level of the overflow channel, the excess liquid is routed into a waste chamber connected to the overflow channel.
Mixing
Mixing is an important function in microfluidics, which combines various reagents for downstream analysis. As the fluid is confined in the micro-scale domain, mixing becomes difficult: the Reynolds number is low and the flow is laminar, so there is no convective mixing and transport relies on diffusion, which limits the mixing process. This problem can be solved using several methods; a typical one is to rotate the disk alternately in different directions, i.e. clockwise and anticlockwise.
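A quick estimate makes the point, using water properties and an assumed 100 μm channel with 1 cm/s flow (illustrative values):

```python
# Reynolds number for a typical microchannel, showing why flow stays laminar
# and mixing is diffusion-limited.
rho = 1000.0      # water density, kg/m^3
mu = 1.0e-3       # dynamic viscosity of water, Pa*s
u = 0.01          # flow velocity, m/s (assumed)
d_h = 100e-6      # hydraulic diameter of the channel, m (assumed)

reynolds = rho * u * d_h / mu
print(f"Re = {reynolds:.2f}")   # ~1, far below the ~2000 where turbulence sets in
```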
Flow switching
Flow switching is necessary when routing reagents into different chambers. A common method for flow switching in a centrifugal device is to utilize the Coriolis force within a Y-shaped structure. When the rotating speed is low, the liquid follows the original path; when the rotating speed is high enough that the Coriolis force becomes comparable to the centrifugal force, the liquid is routed into the other chamber.
Others
Other functions such as sedimentation are also used in microfluidic platforms when necessary. Because particles differ in mass and radius, they sediment at different velocities through the viscous liquid and can thus be separated.
Materials
Many structures can be formed using the most common rapid-prototyping technology, soft lithography with polydimethylsiloxane (PDMS). PDMS is an inexpensive, clear elastomeric polymer with rubbery mechanical properties at room temperature. In the laboratory, PDMS is mixed in small batches, poured onto moulds, for example poly(methyl methacrylate) (PMMA) moulds with micro-scale features, and cured at moderate temperatures for minutes to hours. Open PDMS channels are closed by adhering the channel-bearing component to a glass slide or a second, flat piece of PDMS. Inlets and outlets can be formed easily using punch tools. Although many surface modifications are not permanent on PDMS due to its relatively high chain mobility compared with other polymers, PDMS remains relevant as a material for microfluidic applications.
Thermoplastics are also coming into use. The use of engineering thermoplastics has many advantages, although most of these advantages have not yet been realized. A few commodity plastics have emerged as suitable for medical microfluidic applications. These include PMMA, polystyrene, polycarbonate, and a variety of cyclic polyolefin materials. PMMA has good optical properties for fluorescence and UV detection modes, and is relatively easy to seal to itself. These materials are available in grades suitable for both injection and compression molding. Polystyrene is a material well known from assay development. Polycarbonates have a high glass transition temperature but poor optical properties for fluorescent detection. The cyclic polyolefins appear to have the best combination of optical and mechanical properties.
Detection
Signal sending
Sample preparation
Before the molecules react with the reagents, they must be prepared for the reactions. The most typical preparation step is separation by centrifugal force. In the case of blood, for example, the sedimentation of blood cells from plasma can be achieved by rotating the biodisk for some time. After separation, all molecular diagnostic assays require a step of cell/viral lysis in order to release genomic and proteomic material for downstream processing. Typical lysis methods are chemical and physical. The chemical lysis method, which is the simplest, uses chemical detergents or enzymes to break down membranes. Physical lysis can be achieved by using a bead-beating system on a disk: lysis occurs due to collisions and shearing between the beads and the cells, and through frictional shearing along the lysis chamber walls.
ELISA/FIA
ELISA (enzyme-linked immunosorbent assays) and FIA (fluorescent immunoassays) are two types of immunoassay. Immunoassays are standard tools used in clinical diagnostics. These tests rely on the specific detection of either the antibody or the antigen and are commonly performed by labeling the antibody or antigen of interest through various means such as fluorescent or enzymatic labels. However, washing, mixing, and incubation normally take a great deal of time. When integrated in microfluidic biodisks, the detection times become much shorter, making such tests well suited to this platform.
In the ELISA method, enzymes are used to produce a detectable signal from an antibody–antigen complex. In the first step, any antigen present binds to capture antibodies coated on the channel's surface. Detecting antibodies are then added and bind to the antigen. An enzyme-linked secondary antibody follows the detecting antibodies and binds to them. Finally, when substrate is added, it is converted by the enzyme to a detectable form. Based on this principle, Sergi Morais achieved multiplexed microimmunoassays on a digital versatile disk. This multiplexed assay could achieve detection limits (IC10) of 0.06 μg/L and sensitivities (IC50) of 0.54 μg/L.
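To make the quoted IC50 and IC10 figures concrete, the sketch below shows how they are read off a four-parameter logistic (4PL) calibration curve of the kind routinely fitted to competitive immunoassay data; the curve parameters are made-up placeholders, not the published assay data:

```python
# Four-parameter logistic (4PL) model commonly used for immunoassay calibration:
#   y(c) = d + (a - d) / (1 + (c / c50) ** b)
# a = signal at zero analyte, d = signal at saturation, c50 = IC50, b = slope.
a, d, c50, b = 1.0, 0.0, 0.54, 1.2   # placeholder parameters (c in ug/L)

def signal(c: float) -> float:
    return d + (a - d) / (1 + (c / c50) ** b)

def ic(percent_inhibition: float) -> float:
    """Concentration giving the requested percent inhibition of the zero-dose signal."""
    target = a - (a - d) * percent_inhibition / 100.0
    return c50 * ((a - d) / (target - d) - 1) ** (1 / b)   # analytic inversion of the 4PL

print(f"IC50 = {ic(50):.2f} ug/L")    # equals c50 by construction
print(f"IC10 = {ic(10):.3f} ug/L")    # often reported as the detection limit
```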
In addition to typical ELISA assays, fluorescent immunoassays (FIA) have also been implemented on centrifugal microfluidic devices. The principle of FIA is almost the same as ELISA; the most significant difference is that fluorescent labels are used instead of enzymes.
Nucleic acid analysis
Nucleic acid sensing, using gene-specific nucleic acid amplification with a fluorescence dye or probe, and nucleic acid microarrays, such as DNA microarrays, have become important tools for genetic analysis, gene expression profiling, and genetic-based diagnostics. In gene-specific nucleic acid amplification, standard PCR or isothermal amplification, such as loop-mediated isothermal amplification (LAMP), is used to amplify the target genetic marker, and a DNA-binding fluorescence dye or a sequence-specific probe is applied for signal generation. The fluorescence can be detected in a modified CD/DVD drive or a dedicated disc device.
In the nucleic acid microarrays, the process of probe immobilization and signal amplification can be separated into five steps. The surface of the micro-channel is first irradiated with UV light in the presence of ozone to produce a hydrophilic surface with a high density of carboxylic acid groups (step 1). Then, the probe molecules (biotin, DNA, or human plasma IgG) are covalently attached to the polycarbonate surface via amide coupling (step 2). Later, the target molecules are labeled with fluorescent tags, and this biotin-labeled target DNA is hybridized with the probe DNA immobilized on the disk (step 3). Subsequently, gold nanoparticles are bound to the target via a streptavidin conjugate (step 4). Silver is then deposited onto the gold “seed” (step 5) to increase the particle size from a few to several hundred nanometers. The amplified signal is then detected by the detection system in the CD drive.
Signal receiving
The detection system is completed by the signal-receiving component. There are roughly three types of systems which can be used for detection. The first is hardware and software modification, which means the CD/DVD drive is modified and dedicated software is developed at the same time. This approach requires extra labor and expense and may not be practical in developing countries or remote areas. The second type is software modification with standard hardware, which means that the detection can be achieved by developing dedicated interpretation software, for example in C++, without making any changes to the hardware. The third is standard hardware with existing software, which means that the detection can be realized simply by using existing equipment. Manu Pallapa successfully described a protocol to read and quantify biotin–streptavidin binding assays with a standard optical drive using existing CD-data analysis software (IsoBuster). The latter two types are both viable options, depending on the situation.
No matter which type of detection system one uses, the reading method is an important factor. There are mainly two reading methods, which are AAS (acquired analog signals) and ERD (error reading detection). In the AAS method, to determine multianalytes on a DVD, the analog signals acquired directly from the photodiode of a CD/DVD drive correlate well with the optical density of the reaction products. The ERD method is based on the analysis of reading errors. It can use the same digital versatile disk and a standard DVD drive without any supplementary hardware.
ERD
In the ERD method, the position and level of the resulting reading error correspond to the physical location and the intensity of the bioassay signal, respectively. The errors are then compared with a perfectly recorded CD to identify the time at which a given error was read out. There are several free CD-quality diagnostic programs, such as PlexTools Professional, Kprobe, and CD-DVD Speed, which can be used to access the error-statistic information in a CD/DVD drive and to generate a plot displaying the variation of the block error rate as a function of playtime. In a typical 700-MB CD-R containing 79.7 minutes of audio data, for example, the radius r at which an error occurs can be calculated from the reading time t, because the data spiral is written at a constant linear velocity.
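A small sketch of that playtime-to-radius mapping, assuming generic Red Book CD geometry (program area starting near r = 25 mm, roughly 1.5 μm track pitch and 1.2 m/s scanning velocity for an 80-minute disc); these constants are standard CD parameters, not values from the cited work:

```python
import math

# Map CD playtime to radial position on the disc. The data spiral is written at
# constant linear velocity, so track length grows linearly with playtime while
# the swept area grows with r^2:  pi * (r^2 - R0^2) / PITCH = V_LIN * t.
R0 = 0.025        # radius where the program area starts, m
PITCH = 1.5e-6    # track pitch for an ~80-minute disc, m (assumed)
V_LIN = 1.2       # linear scanning velocity, m/s (assumed)

def radius_at_playtime(t_seconds: float) -> float:
    return math.sqrt(R0**2 + PITCH * V_LIN * t_seconds / math.pi)

for minutes in (0, 20, 40, 60, 79.7):
    r_mm = 1000 * radius_at_playtime(minutes * 60)
    print(f"{minutes:5.1f} min -> r = {r_mm:4.1f} mm")
# A bioassay spot placed at a known radius therefore shows up as a burst of
# reading errors at a predictable playtime.
```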
AAS
In the AAS method, the set of servo systems (focus, tracking, sled, and spindle servos) keeps the laser beam focused on the spiral track and allows disc rotation and laser head motion during the scanning. The amplification/detection board (DAB) is integrated into the CD/DVD drive unit and incorporates a photosensor and electronic circuitry to amplify the RF signal extracted from the photodiode transducer. The photosensor generates a trigger signal when detecting the trigger mark. Both signals are brought to the USB2.0 data acquisition board (DAQ) for digitization and quantification.
See also
lab on a chip
point-of-care testing
diagnostic testing
MEMS
Immunoassay
Notes
References
External links
Marc Madou's BioMEMS lab in UC Irvine
YKCho's FRUITS lab in UNIST
Hua-Zhong Yu's webpage in SFU
Gang Logan Liu's research webpage on biodisk
Lab-on-a-chip publishing
Cell culture techniques
Microfluidics | Centrifugal micro-fluidic biochip | [
"Chemistry",
"Materials_science",
"Biology"
] | 4,112 | [
"Biochemistry methods",
"Microfluidics",
"Cell culture techniques",
"Microtechnology"
] |
29,508,192 | https://en.wikipedia.org/wiki/Tris%28tert-butoxy%29silanethiol | Tris(tert-butoxy)silanethiol is a silicon compound containing three tert-butoxy groups and a rare Si–S–H functional group. This colourless compound serves as an hydrogen donor in radical chain reactions. It was first prepared by alcoholysis of silicon disulfide and purified by distillation:
3 (CH3)3COH + SiS2 → [(CH3)3CO]3SiSH + H2S
Since 1962 it has been thoroughly studied, including its acid–base properties and its coordination chemistry with metal ions. It coordinates to metal ions via the sulfur and oxygen donor atoms.
References
Hydrogen compounds
Silicon compounds
Tert-butyl compounds
Reagents for organic chemistry | Tris(tert-butoxy)silanethiol | [
"Chemistry"
] | 150 | [
"Reagents for organic chemistry"
] |
29,509,313 | https://en.wikipedia.org/wiki/Viscous%20liquid | In condensed matter physics and physical chemistry, the terms viscous liquid, supercooled liquid, and glass forming liquid are often used interchangeably to designate liquids that are at the same time highly viscous (see Viscosity of amorphous materials), can be or are supercooled, and able to form a glass.
Working points in glass processing
The mechanical properties of glass-forming liquids depend primarily on the viscosity. Therefore, the working points used in glass processing (such as the working, softening, annealing and strain points) are defined in terms of viscosity, and the corresponding temperatures are usually quoted for industrial soda lime glass.
Fragile-strong classification
In a widespread classification, due to chemist Austen Angell, a glass-forming liquid is called strong if its viscosity approximately obeys an Arrhenius law (log η is linear in 1/T ). In the opposite case of clearly non-Arrhenius behaviour the liquid is called fragile. This classification has no direct relation with the common usage of the word "fragility" to mean brittleness.
Viscous flow in amorphous materials is characterised by deviations from Arrhenius-type behaviour: the activation energy of viscosity Q changes from a high value QH at low temperatures (in the glassy state) to a low value QL at high temperatures (in the liquid state). Amorphous materials are classified according to the deviation of their viscosities from Arrhenius-type behaviour as either strong, when QH − QL < QL, or fragile, when QH − QL ≥ QL. The fragility of amorphous materials is numerically characterized by the Doremus fragility ratio RD = QH/QL. Strong melts are those with (RD − 1) < 1, whereas fragile melts are those with (RD − 1) ≥ 1. Fragility is related to bond-breaking processes in the material caused by thermal fluctuations. Bond breaking modifies the properties of an amorphous material, so that the higher the concentration of broken bonds, termed configurons, the lower the viscosity. Materials with a higher enthalpy of configuron formation compared with their enthalpy of motion have a higher Doremus fragility ratio; conversely, melts with a relatively lower enthalpy of configuron formation have a lower fragility.
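In code, the classification reads as follows (a minimal sketch; the activation energies are illustrative placeholders, not measured values):

```python
# Classify a melt as "strong" or "fragile" from its low- and high-temperature
# activation energies of viscous flow (values below in kJ/mol, illustrative only).
def doremus_ratio(q_high: float, q_low: float) -> float:
    """Doremus fragility ratio RD = QH / QL."""
    return q_high / q_low

def classify(q_high: float, q_low: float) -> str:
    rd = doremus_ratio(q_high, q_low)
    return "strong" if (rd - 1) < 1 else "fragile"

# Silica-like melt: activation energy changes little between glass and liquid.
print(classify(q_high=750, q_low=500))   # strong  (RD = 1.5)
# A typical fragile melt: large change in activation energy across Tg.
print(classify(q_high=400, q_low=100))   # fragile (RD = 4.0)
```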
More recently, the fragility has been quantitatively related to the details of the interatomic or intermolecular potential, and it has been shown that steeper interatomic potentials lead to more fragile liquids.
Mode-coupling theory
The microscopic dynamics at low to moderate viscosities is addressed by a mode-coupling theory, developed by Wolfgang Götze and collaborators since the 1980s. This theory describes a slowing down of structural relaxation on cooling towards a critical temperature Tc, typically located 20% above Tg.
Notes and sources
Textbooks
Götze,W (2009): Complex Dynamics of glass forming liquids. A mode-coupling theory. Oxford: Oxford University Press.
Zarzycki,J (1982): Les Verres et l'état vitreux. Paris: Masson. Also available in English translations.
References
Glass physics
Glassforming liquids and melts | Viscous liquid | [
"Physics",
"Materials_science",
"Engineering"
] | 651 | [
"Glass engineering and science",
"Glass physics",
"Condensed matter physics"
] |
40,309,522 | https://en.wikipedia.org/wiki/Thermomechanical%20cuttings%20cleaner | The thermomechanical cuttings cleaner (TCC) is a patented technology mainly used by service providers in the oil and gas industry to separate and recover the components of oil-contaminated drilling waste. A TCC converts kinetic energy to thermal energy in a thermal desorption process which efficiently transforms drilling waste into re-usable products. Using kinetic energy instead of indirect heating allows for very short retention times and as a consequence the quality of the separated components is not affected by the treatment. Thus the recovered water, base oil and solids can be re-used after the treatment process.
References
Waste treatment technology
Industrial machinery
Mechanical engineering
Waste management | Thermomechanical cuttings cleaner | [
"Physics",
"Chemistry",
"Engineering"
] | 129 | [
"Applied and interdisciplinary physics",
"Water treatment",
"Mechanical engineering",
"Environmental engineering",
"Waste treatment technology",
"Industrial machinery"
] |
40,311,909 | https://en.wikipedia.org/wiki/Sinusoidal%20pump | A sinusoidal pump is a type of pump featuring a sine wave-shaped rotor that creates four moving chambers, which gently convey the duty fluid from the inlet port to the higher pressure discharge port.
Typical applications
Ready meals
Soups
Sauce
Frozen foods
Salads
Meat mixes
Juice concentrate
Chocolate
Paint
Advantages
Superior solids handling
Powerful suction with low shear
Little damage to product
See also
Swashplate engine
References
Pumps | Sinusoidal pump | [
"Physics",
"Chemistry",
"Engineering"
] | 82 | [
"Pumps",
"Turbomachinery",
"Physical systems",
"Hydraulics",
"Mechanical engineering",
"Mechanical engineering stubs"
] |
40,313,828 | https://en.wikipedia.org/wiki/Time-resolved%20mass%20spectrometry | Time-resolved mass spectrometry (TRMS) is a strategy in analytical chemistry that uses mass spectrometry platform to collect data with temporal resolution. Implementation of TRMS builds on the ability of mass spectrometers to process ions within sub-second duty cycles. It often requires the use of customized experimental setups. However, they can normally incorporate commercial mass spectrometers. As a concept in analytical chemistry, TRMS encompasses instrumental developments (e.g. interfaces, ion sources, mass analyzers), methodological developments, and applications.
Applications
An early application of TRMS was the observation of flash photolysis processes, taking advantage of a time-of-flight mass analyzer.
TRMS currently finds applications in the monitoring of organic reactions, formation of reactive intermediates, enzyme-catalyzed reactions, convection, protein folding, extraction, and other chemical and physical processes.
Temporal resolution
TRMS is typically implemented to monitor processes that occur on second to millisecond time scale. However, there exist reports from studies in which sub-millisecond resolutions were achieved.
References
Analytical chemistry
Biochemistry
Laboratory techniques
Mass spectrometry
Scientific techniques | Time-resolved mass spectrometry | [
"Physics",
"Chemistry",
"Biology"
] | 241 | [
"Spectrum (physical sciences)",
"Instrumental analysis",
"Mass",
"Mass spectrometry",
"nan",
"Biochemistry",
"Matter"
] |
31,918,680 | https://en.wikipedia.org/wiki/AGATA%20%28gamma-ray%20detector%29 | AGATA, for Advanced GAmma Tracking Array, is a High-Purity Germanium (HPGe) semiconductor detector array for γ-ray spectroscopy that is based on the novel γ-ray tracking concept. It offers excellent position resolution thanks to high segmentation of individual HPGe crystals and refined pulse-shape analysis algorithms, and high detection efficiency and peak-to-total ratio thanks to elimination of Compton-suppression shielding in favour of tracking the path of γ rays through the spectrometer as they are scattered from one HPGe crystal to another. AGATA is being built and operated by a collaboration including 40 research institutions from thirteen countries in Europe. The first Memorandum of Understanding for the construction of AGATA has been signed in 2003 by the participating institutions; the updated Memorandum of Understanding, signed in 2021, foresees the extension of the array to a 3π configuration by 2030. Over the years, AGATA has been steadily growing, and currently is operated in a 1π configuration at Legnaro National Laboratories after campaigns at GANIL (2014-2021), GSI Helmholtz Centre for Heavy Ion Research (2012-2014) and Legnaro National Laboratories (2010-2011). AGATA can be coupled with ancillary detectors, such as magnetic spectrometers, fast-timing detectors, charged particles or neutron detectors.
High-fold segmented high-purity Ge detectors
The AGATA detectors are based on encapsulated and electrically segmented n-type high-purity Ge crystals. They are 36-fold segmented with six-fold azimuthal and six-fold longitudinal segmentation. Each detector is 9 cm long and is circular at the rear side with a diameter of 8 cm, and hexagonal at the front face. The common inner electrode and 36 segments are read out via individual preamplifiers. Three detector shapes exist, making it possible to tightly pack the AGATA crystals in triple cryostats.
The parameters of the detectors are:
Maximum cylinder size: 90.0 mm length, 40.0 mm radius.
Coaxial hole size: 10.0 mm diameter, extension to 13.0 mm from the front face.
Passivation layers: 1.0 mm at the back of the detector, 0.6 mm around the coaxial hole.
Encapsulation: 0.8 mm thickness with a 4.0 mm crystal-can distance
Cryostat: 1.0 mm thickness with a 2.0 mm capsule-cryostat distance.
Operation principle
Gamma rays interact with the detector's material mainly via the Compton effect, the photoelectric effect and pair production, transferring their energy to electrons or positrons. They, in turn, generate a cloud of charge carriers (electrons and holes) which induces image charges on the detector electrodes. As the charge carriers drift toward the electrodes, the change of the image charge causes a flow of currents into or out of the electrodes. The evolution of induced charges on the electrodes continues until the primary charge reaches its destination electrode and neutralizes the image.
For a multi-segmented detector, the induced charge can be distributed over several electrodes. By analysing these signals using a pulse-shape analysis it is possible to localize the point where the γ-ray interaction took place with a precision better than the segment size.
Digital signal processing electronics
The interaction positions of gamma rays within the detector are determined from digital pulse-shape analysis. The pre-amplified detector signal is digitized with 14-bit resolution at a rate of 100 MS/s. The digitized pulses are subsequently compared with a database of calculated pulse shapes in order to obtain, for each interaction point, the deposited energy, the interaction time and the three spatial coordinates.
Pulse-shape analysis
To determine the interaction point of a γ ray in a segmented HPGe detector, the shape of the signal induced on the charge-collecting electrode (corresponding to the segment in which the interaction took place) and those of the transient signals measured on the neighbouring segments are analysed. By analysing the rise time of the signal induced on the charge-collecting electrode, the radial coordinate of the interaction point can be determined. The mirror charges appearing on the neighbouring segments' electrodes are sensitive to the longitudinal and azimuthal coordinates of the interaction point.
In the implementation of the pulse-shape analysis technique for AGATA, the measured pulse shapes are compared, in real time, to a database of signals calculated on a fine (2 mm) grid for each type of AGATA HPGe crystal. The calculations are performed with the MGS (Multi Geometry Simulation) code and have been validated by comparisons with pulses measured using tightly collimated γ-ray sources. Effects such as the anisotropic carrier drift velocity with respect to the crystallographic axis directions of the Ge crystal are taken into account.
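The comparison step can be illustrated with a toy grid search. The sketch below is only a schematic stand-in for the real pulse-shape analysis: the "database" is random filler rather than electric-field simulations, and the array shapes are arbitrary:

```python
import numpy as np

# Toy pulse-shape analysis: compare a measured pulse (all electrode traces
# concatenated into one vector) against a database of simulated pulses computed
# on a grid of candidate interaction positions, and pick the best match.
rng = np.random.default_rng(0)

n_grid, n_samples = 500, 60                  # 500 grid points, 60 samples per trace
database = np.cumsum(rng.standard_normal((n_grid, n_samples)), axis=1)  # stand-in pulses
grid_xyz = rng.uniform(0, 90, (n_grid, 3))   # candidate positions, mm

true_index = 123
measured = database[true_index] + 0.05 * rng.standard_normal(n_samples)  # noisy "measurement"

# Least-squares (chi-square with unit errors) figure of merit for every grid point.
chi2 = np.sum((database - measured) ** 2, axis=1)
best = int(np.argmin(chi2))

print("best-matching grid point:", best, "true:", true_index)
print("estimated interaction position (mm):", np.round(grid_xyz[best], 1))
```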
Gamma-ray tracking
Tracking algorithms can be applied to information from the pulse-shape analysis (positions of the interaction points together with the energy deposited at each point, and signal timing) in order to reconstruct the path of each gamma ray through the AGATA spectrometer, including possible scattering from one crystal to another. There are two categories of algorithms used for this task: forward-tracking algorithms, which start from the known position of the source and reconstruct the track of photons as they interact in the detector, and back-tracking algorithms, which start from a potential point of the last interaction in the spectrometer's volume and reconstruct the track backwards to the source. The forward-tracking algorithms have been shown to be more efficient and therefore they have been implemented in the AGATA data-acquisition software.
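A stripped-down illustration of the forward-tracking idea follows; it is a toy figure of merit only, not the actual AGATA tracking code, and all names and values are placeholders. For each candidate ordering of interaction points it compares the scattering angle implied by the Compton formula with the angle given by the geometry of the points:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_cos_theta(e_in: float, e_dep: float) -> float:
    """Scattering-angle cosine from the Compton formula for an incoming gamma ray
    of energy e_in that deposits e_dep at an interaction point (energies in keV)."""
    e_out = e_in - e_dep
    return 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in)

def geometric_cos_theta(a, b, c) -> float:
    """Cosine of the scattering angle at point b along the path a -> b -> c."""
    v1 = [b[i] - a[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    return dot / (math.dist(a, b) * math.dist(b, c))

def figure_of_merit(source, points, deposits, e_gamma) -> float:
    """Sum of squared differences between Compton and geometric scattering angles
    for one candidate ordering of interaction points (smaller = more plausible)."""
    path = [source] + list(points)
    e_in, fom = e_gamma, 0.0
    for i in range(1, len(path) - 1):
        cos_c = compton_cos_theta(e_in, deposits[i - 1])
        cos_g = geometric_cos_theta(path[i - 1], path[i], path[i + 1])
        fom += (cos_c - cos_g) ** 2
        e_in -= deposits[i - 1]
    return fom

# A forward-tracking algorithm would evaluate this for every permutation (and
# clustering) of the measured interaction points and keep the best-scoring one.
```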
References
External links
Detectors
Spectrometers | AGATA (gamma-ray detector) | [
"Physics",
"Chemistry"
] | 1,173 | [
"Spectrometers",
"Spectroscopy",
"Spectrum (physical sciences)"
] |
31,924,450 | https://en.wikipedia.org/wiki/Belevitch%27s%20theorem | Belevitch's theorem is a theorem in electrical network analysis due to the Russo-Belgian mathematician Vitold Belevitch (1921–1999). The theorem provides a test for a given S-matrix to determine whether or not it can be constructed as a lossless rational two-port network.
Lossless implies that the network contains only inductances and capacitances – no resistances. Rational (meaning the driving point impedance Z(p) is a rational function of p) implies that the network consists solely of discrete elements (inductors and capacitors only – no distributed elements).
The theorem
For a given S-matrix S(p) of degree d:
S(p) = [[s11(p), s12(p)], [s21(p), s22(p)]]
where,
p is the complex frequency variable and may be replaced by iω in the case of steady state sine wave signals, that is, where only a Fourier analysis is required
d will equate to the number of elements (inductors and capacitors) in the network, if such a network exists.
Belevitch's theorem states that S(p) represents a lossless rational network if and only if it can be written in the form
S(p) = (1/g(p)) [[h(p), f(p)], [±f(−p), ∓h(−p)]]
where,
f(p), g(p) and h(p) are real polynomials
g(p) is a strict Hurwitz polynomial of degree not exceeding d
h(p)h(−p) + f(p)f(−p) = g(p)g(−p) for all p.
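As a quick sanity check of these conditions, here is a minimal sketch assuming Python with the sympy library; the example network (a single series inductor between two ports with a 1 Ω reference impedance) is chosen here for illustration and is not taken from the references below:

```python
import sympy as sp

# Lossless two-port: a single series inductor L between port 1 and port 2,
# with 1-ohm reference impedance. Its scattering matrix is
#   S(p) = 1/(p*L + 2) * [[p*L, 2], [2, p*L]],
# i.e. g = p*L + 2, h = p*L, f = 2, and the element count is d = 1.
p, L = sp.symbols('p L', positive=True)

g = p * L + 2
h = p * L
f = sp.Integer(2)

lhs = sp.expand(g * g.subs(p, -p))                      # g(p) g(-p)
rhs = sp.expand(h * h.subs(p, -p) + f * f.subs(p, -p))  # h(p) h(-p) + f(p) f(-p)
print("Belevitch condition holds:", sp.simplify(lhs - rhs) == 0)  # True

# g(p) = p*L + 2 is strictly Hurwitz (its only root, p = -2/L, lies in the open
# left half-plane) and its degree, 1, does not exceed d = 1.
```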
References
Bibliography
Belevitch, Vitold Classical Network Theory, San Francisco: Holden-Day, 1968 .
Rockmore, Daniel Nahum; Healy, Dennis M. Modern Signal Processing, Cambridge: Cambridge University Press, 2004 .
Circuit theorems
Two-port networks | Belevitch's theorem | [
"Physics",
"Engineering"
] | 294 | [
"Equations of physics",
"Two-port networks",
"Electronic engineering",
"Circuit theorems",
"Physics theorems"
] |
31,926,045 | https://en.wikipedia.org/wiki/Orbital%20magnetization | In quantum mechanics, orbital magnetization, Morb, refers to the magnetization induced by orbital motion of charged particles, usually electrons in solids. The term "orbital" distinguishes it from the contribution of spin degrees of freedom, Mspin, to the total magnetization. A nonzero orbital magnetization requires broken time-reversal symmetry, which can occur spontaneously in ferromagnetic and ferrimagnetic materials, or can be induced in a non-magnetic material by an applied magnetic field.
Definitions
The orbital magnetic moment of a finite system, such as a molecule, is given classically by
m_orb = (1/2) ∫ r × J(r) d³r,
where J(r) is the current density at point r. (Here SI units are used; in Gaussian units, the prefactor would be 1/2c instead, where c is the speed of light.) In a quantum-mechanical context, this can also be written as
m_orb = −(e/2m_e) ⟨Ψ| L |Ψ⟩,
where −e and m_e are the charge and mass of the electron, Ψ is the ground-state wave function, and L is the angular momentum operator. The total magnetic moment is
m = m_orb + m_spin,
where the spin contribution is intrinsically quantum-mechanical and is given by
m_spin = −g_s μ_B ⟨Ψ| S |Ψ⟩ / ħ,
where g_s is the electron spin g-factor, μ_B is the Bohr magneton, ħ is the reduced Planck constant, and S is the electron spin operator.
The orbital magnetization M is defined as the orbital moment density; i.e., orbital moment per unit volume. For a crystal of volume V composed of isolated entities (e.g., molecules) labelled by an index j having magnetic moments m_orb,j, this is
M_orb = (1/V) Σ_j m_orb,j.
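As a quick numerical sanity check of the classical definition above, one can discretize a thin circular current loop and verify that the formula reproduces the familiar result m = I·A (current times loop area); the current and radius below are arbitrary illustrative values:

```python
import numpy as np

# Numerical check of m_orb = (1/2) * integral of r x J over a thin circular loop.
I = 2.0          # current, A (illustrative)
R = 0.01         # loop radius, m (illustrative)
n = 10_000       # number of discretization segments

phi = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
r = np.stack([R * np.cos(phi), R * np.sin(phi), np.zeros(n)], axis=1)                 # positions
dl = np.stack([-R * np.sin(phi), R * np.cos(phi), np.zeros(n)], axis=1) * (2 * np.pi / n)  # line elements

m = 0.5 * np.sum(np.cross(r, I * dl), axis=0)   # (1/2) * sum of r x (I dl)
print("numerical m_z :", m[2])
print("I * pi * R^2  :", I * np.pi * R**2)       # the two agree to high precision
```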
However, real crystals are made up out of atomic or molecular constituents whose charge clouds overlap, so that the above formula cannot be taken as a fundamental definition of orbital magnetization. Only recently have theoretical developments led to a proper theory of orbital magnetization in crystals, as explained below.
Theory
Difficulties in the definition of orbital magnetization
For a magnetic crystal, it is tempting to try to define
M_orb = lim_{V→∞} (1/2V) ∫_V r × J(r) d³r,
where the limit is taken as the volume V of the system becomes large. However, because of the factor of r in the integrand, the integral has contributions from surface currents that cannot be neglected, and as a result the above equation does not lead to a bulk definition of orbital magnetization.
Another way to see that there is a difficulty is to try to write down the quantum-mechanical expression for the orbital magnetization in terms of the occupied single-particle Bloch functions ψ_nk of band n and crystal momentum k:
M_orb "=" −(e/2m_e) Σ_n^occ ∫_BZ [d³k/(2π)³] ⟨ψ_nk| L |ψ_nk⟩,
where p is the momentum operator, L = r × p, and the integral is evaluated over the Brillouin zone (BZ). However, because the Bloch functions are extended, the matrix element of a quantity containing the r operator is ill-defined, and this formula is actually ill-defined.
Atomic sphere approximation
In practice, orbital magnetization is often computed by decomposing space into non-overlapping spheres centered on atoms (similar in spirit to the muffin-tin approximation), computing the integral of r × J(r) inside each sphere, and summing the contributions. This approximation neglects the contributions from currents in the interstitial regions between the atomic spheres. Nevertheless, it is often a good approximation because the orbital currents associated with partially filled d and f shells are typically strongly localized inside these atomic spheres. It remains, however, an approximate approach.
Modern theory of orbital magnetization
A general and exact formulation of the theory of orbital magnetization was developed in the mid-2000s by several authors, first based on a semiclassical approach, then on a derivation from the Wannier representation, and finally from a long-wavelength expansion. The resulting formula for the orbital magnetization, specialized to zero temperature, is
M_orb = (e/2ħ) Im Σ_n ∫_BZ [d³k/(2π)³] f_nk ⟨∂_k u_nk| × (H_k + E_nk − 2μ) |∂_k u_nk⟩,
where fn k is 0 or 1 respectively as the band energy En k falls above or below the Fermi energy μ,
is the effective Hamiltonian at wavevector k, and
is the cell-periodic Bloch function satisfying
A generalization to finite temperature is also available. Note that the term involving the band energy En k in this formula is really just an integral of the band energy times the Berry curvature. Results computed using the above formula have appeared in the literature. A recent review summarizes these developments.
Experiments
The orbital magnetization of a material can be determined accurately by measuring the gyromagnetic ratio γ, i.e., the ratio between the magnetic dipole moment of a body and its
angular momentum. The gyromagnetic ratio is related to the spin and orbital magnetization according to
The two main experimental techniques are based either on the Barnett effect or the Einstein–de Haas effect. Experimental data for Fe, Co, Ni, and their alloys have been compiled.
References
Magnetism
Electromagnetism
Quantum mechanics
Electronic structure methods | Orbital magnetization | [
"Physics",
"Chemistry"
] | 970 | [
"Physical phenomena",
"Electromagnetism",
"Quantum chemistry",
"Theoretical physics",
"Quantum mechanics",
"Computational physics",
"Electronic structure methods",
"Computational chemistry",
"Fundamental interactions"
] |
31,931,538 | https://en.wikipedia.org/wiki/Autophagosome | An autophagosome is a spherical structure with double layer membranes. It is the key structure in macroautophagy, the intracellular degradation system for cytoplasmic contents (e.g., abnormal intracellular proteins, excess or damaged organelles, invading microorganisms). After formation, autophagosomes deliver cytoplasmic components to the lysosomes. The outer membrane of an autophagosome fuses with a lysosome to form an autolysosome. The lysosome's hydrolases degrade the autophagosome-delivered contents and its inner membrane.
The formation of autophagosomes is regulated by genes that are well-conserved from yeast to higher eukaryotes. The nomenclature of these genes has differed from paper to paper, but it has been simplified in recent years. The gene families formerly known as APG, AUT, CVT, GSA, PAZ, and PDD are now unified as the ATG (AuTophaGy related) family.
The size of autophagosomes varies between mammals and yeast. Yeast autophagosomes are about 500-900 nm, while mammalian autophagosomes are larger (500-1500 nm). In some cell types, such as embryonic stem cells, embryonic fibroblasts, and hepatocytes, autophagosomes are visible with light microscopy and can be seen as ring-shaped structures.
Autophagosome formation
The initial step of autophagosome formation is the formation of an omegasome on the endoplasmic reticulum, followed by the elongation of structures called phagophores.
The formation of autophagosomes is controlled by Atg genes through Atg12-Atg5 and LC3 complexes. The conjugate of Atg12-Atg5 also interacts with Atg16 to form larger complexes. Modification of Atg5 by Atg12 is essential for the elongation of the initial membrane.
After the formation of the spherical structure, the complex of ATG12-ATG5:ATG16L1 dissociates from the autophagosome. LC3 is cleaved by ATG4 protease to generate cytosolic LC3. LC3 cleavage is required for the terminal fusion of an autophagosome with its target membrane. LC3 is commonly used as a marker of autophagosomes in immunocytochemistry, because it is an essential part of the vesicle and stays associated until the last moment before its fusion. At first, autophagosomes fuse with endosomes or endosome-derived vesicles. These structures are then called amphisomes or intermediate autophagic vacuoles. Nonetheless, these structures contain endocytic markers and even small lysosomal proteins such as cathepsin D.
The process is similar in yeast; however, the gene names differ. For example, LC3 in mammals is Atg8 in yeast, and autophagosomes are generated from the pre-autophagosomal structure (PAS), which is distinct from the precursor structures in mammalian cells. The pre-autophagosomal structure in yeast is described as a complex localized near the vacuole, although the significance of this localization is not known. Mature yeast autophagosomes fuse directly with vacuoles or lysosomes and do not form amphisomes as in mammals.
In yeast autophagosome maturation there are also other known players, such as Atg1, Atg13 and Atg17. Atg1 is a kinase upregulated upon induction of autophagy. Atg13 regulates Atg1, and together they form a complex called Atg13:Atg1, which receives signals from the master nutrient sensor Tor. Atg1 is also important in the late stages of autophagosome formation.
Function in neurons
In neurons, autophagosomes are generated at the neurite tip and mature (acidify) as they travel towards the cell body along the axon. This axonal transport is disrupted if huntingtin or its interacting partner HAP1, which colocalize with autophagosomes in neurons, are depleted.
References
Cell biology | Autophagosome | [
"Biology"
] | 896 | [
"Cell biology"
] |
36,126,758 | https://en.wikipedia.org/wiki/Gate%20driver | A gate driver is a power amplifier that accepts a low-power input from a controller IC and produces a high-current drive input for the gate of a high-power transistor such as an IGBT or power MOSFET. Gate drivers can be provided either on-chip or as a discrete module. In essence, a gate driver consists of a level shifter in combination with an amplifier. A gate driver IC serves as the interface between control signals (digital or analog controllers) and power switches (IGBTs, MOSFETs, SiC MOSFETs, and GaN HEMTs). An integrated gate-driver solution reduces design complexity, development time, bill of materials (BOM), and board space while improving reliability over discretely-implemented gate-drive solutions.
History
In 1989, International Rectifier (IR) introduced the first monolithic HVIC gate driver product. The high-voltage integrated circuit (HVIC) technology uses patented and proprietary monolithic structures integrating bipolar, CMOS, and lateral DMOS devices with breakdown voltages above 700 V and 1400 V for operating offset voltages of 600 V and 1200 V.
Using this mixed-signal HVIC technology, both high-voltage level-shifting circuits and low-voltage analog and digital circuits can be implemented. With the ability to place high-voltage circuitry (in a ‘well’ formed by polysilicon rings) that can ‘float’ at 600 V or 1200 V on the same silicon, away from the rest of the low-voltage circuitry, high-side power MOSFETs or IGBTs can be driven in many popular off-line circuit topologies such as buck, synchronous boost, half-bridge, full-bridge and three-phase. The HVIC gate drivers with floating switches are well-suited for topologies requiring high-side, half-bridge, and three-phase configurations.
Purpose
In contrast to bipolar transistors, MOSFETs do not require constant power input, as long as they are not being switched on or off. The isolated gate-electrode of the MOSFET forms a capacitor (gate capacitor), which must be charged or discharged each time the MOSFET is switched on or off. As a transistor requires a particular gate voltage in order to switch on, the gate capacitor must be charged to at least the required gate voltage for the transistor to be switched on. Similarly, to switch the transistor off, this charge must be dissipated, i.e. the gate capacitor must be discharged.
When a transistor is switched on or off, it does not immediately switch from a non-conducting to a conducting state, and may transiently support both a high voltage and conduct a high current. Consequently, when gate current is applied to a transistor to cause it to switch, a certain amount of heat is generated which can, in some cases, be enough to destroy the transistor. Therefore, it is necessary to keep the switching time as short as possible, so as to minimize switching losses. Typical switching times are in the range of microseconds. The switching time of a transistor is inversely proportional to the amount of current used to charge the gate. Therefore, switching currents are often required in the range of several hundred milliamperes, or even in the range of amperes. For typical gate voltages of approximately 10-15V, several watts of power may be required to drive the switch. When large currents are switched at high frequencies, e.g. in DC-to-DC converters or large electric motors, multiple transistors are sometimes provided in parallel, so as to provide sufficiently high switching currents and switching power.
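As a rough, illustrative sizing sketch (the gate charge, drive voltage, switching time and frequency below are assumed example values, not taken from any datasheet): the average gate current needed to achieve a given switching time is roughly the total gate charge divided by that time, and the average gate-drive power scales as gate charge times gate voltage swing times switching frequency.

```python
# Rough gate-drive sizing sketch; all numbers are illustrative assumptions,
# not taken from any particular device datasheet.
Q_g  = 120e-9    # total gate charge (C)
V_gs = 12.0      # gate drive voltage swing (V)
t_sw = 100e-9    # desired switching (charge/discharge) time (s)
f_sw = 100e3     # switching frequency (Hz)

I_gate  = Q_g / t_sw            # average gate current during each transition
P_drive = Q_g * V_gs * f_sw     # average power dissipated in the gate-drive loop

print(f"average gate current during switching ~ {I_gate:.2f} A")   # ~1.2 A
print(f"gate-drive power ~ {P_drive * 1e3:.1f} mW")                # ~144 mW
```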
The switching signal for a transistor is usually generated by a logic circuit or a microcontroller, which provides an output signal that typically is limited to a few milliamperes of current. Consequently, a transistor which is directly driven by such a signal would switch very slowly, with correspondingly high power loss. During switching, the gate capacitor of the transistor may draw current so quickly that it causes a current overdraw in the logic circuit or microcontroller, causing overheating which leads to permanent damage or even complete destruction of the chip. To prevent this from happening, a gate driver is provided between the microcontroller output signal and the power transistor.
Charge pumps are often used in H-bridges in high-side drivers for driving the gates of the high-side n-channel power MOSFETs and IGBTs. These devices are used because of their good performance, but require a gate drive voltage a few volts above the power rail. When the centre of a half bridge goes low, the capacitor is charged via a diode, and this charge is later used to drive the gate of the high-side FET a few volts above the source or emitter pin's voltage so as to switch it on. This strategy works well provided the bridge is regularly switched; it avoids the complexity of having to run a separate power supply and permits the more efficient n-channel devices to be used for both the high-side and low-side switches.
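A common rule-of-thumb sizing for the bootstrap (charge-pump) capacitor in such a high-side driver is sketched below: the capacitor must supply the high-side gate charge, plus driver leakage over the longest on-time, without its voltage drooping more than an allowed amount. All numbers are illustrative assumptions.

```python
# Illustrative bootstrap-capacitor sizing for a high-side n-channel switch.
# Rule of thumb: C_boot >= Q_required / dV_allowed; values below are assumptions.
Q_g        = 120e-9   # high-side gate charge (C)
I_leak     = 100e-6   # driver quiescent + leakage current while high side is on (A)
t_on_max   = 20e-6    # longest high-side on-time between capacitor refreshes (s)
dV_allowed = 0.5      # allowed droop of the bootstrap voltage (V)

Q_required = Q_g + I_leak * t_on_max
C_boot_min = Q_required / dV_allowed
print(f"C_boot >= {C_boot_min * 1e9:.0f} nF")   # ~244 nF; a larger value gives margin
```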
References
External links
Gate Driver ICs for IGBT and MOSFET
Power MOSFET Gate Drivers
Transistor amplifiers
Power electronics | Gate driver | [
"Engineering"
] | 1,093 | [
"Electronic engineering",
"Power electronics"
] |
36,127,004 | https://en.wikipedia.org/wiki/C14H21ClN2O2 | {{DISPLAYTITLE:C14H21ClN2O2}}
The molecular formula C14H21ClN2O2 (molar mass: 284.78 g/mol, exact mass: 284.1292 u) may refer to:
Clofexamide (or amichlophene)
Clovoxamine
Molecular formulas | C14H21ClN2O2 | [
"Physics",
"Chemistry"
] | 76 | [
"Molecules",
"Set index articles on molecular formulas",
"Isomerism",
"Molecular formulas",
"Matter"
] |
36,128,950 | https://en.wikipedia.org/wiki/Macromolecular%20assembly | In molecular biology, the term macromolecular assembly (MA) refers to massive chemical structures such as viruses and non-biologic nanoparticles, cellular organelles and membranes and ribosomes, etc. that are complex mixtures of polypeptide, polynucleotide, polysaccharide or other polymeric macromolecules. They are generally of more than one of these types, and the mixtures are defined spatially (i.e., with regard to their chemical shape), and with regard to their underlying chemical composition and structure. Macromolecules are found in living and nonliving things, and are composed of many hundreds or thousands of atoms held together by covalent bonds; they are often characterized by repeating units (i.e., they are polymers). Assemblies of these can likewise be biologic or non-biologic, though the MA term is more commonly applied in biology, and the term supramolecular assembly is more often applied in non-biologic contexts (e.g., in supramolecular chemistry and nanotechnology). MAs of macromolecules are held in their defined forms by non-covalent intermolecular interactions (rather than covalent bonds), and can be in either non-repeating structures (e.g., as in the ribosome (image) and cell membrane architectures), or in repeating linear, circular, spiral, or other patterns (e.g., as in actin filaments and the flagellar motor, image). The process by which MAs are formed has been termed molecular self-assembly, a term especially applied in non-biologic contexts. A wide variety of physical/biophysical, chemical/biochemical, and computational methods exist for the study of MA; given the scale (molecular dimensions) of MAs, efforts to elaborate their composition and structure and discern mechanisms underlying their functions are at the forefront of modern structure science.
Biomolecular complex
A biomolecular complex, also called a biomacromolecular complex, is any biological complex made of more than one biopolymer (protein, RNA, DNA,
carbohydrate) or large non-polymeric biomolecules (lipid). The interactions between these biomolecules are non-covalent.
Examples:
Protein complexes, some of which are multienzyme complexes: proteasome, DNA polymerase III holoenzyme, RNA polymerase II holoenzyme, symmetric viral capsids, chaperonin complex GroEL-GroES, photosystem I, ATP synthase, ferritin.
RNA-protein complexes: ribosome, spliceosome, vault, SnRNP. Such complexes in cell nucleus are called ribonucleoproteins (RNPs).
DNA-protein complexes: nucleosome.
Protein-lipid complexes: lipoprotein.
The biomacromolecular complexes are studied structurally by X-ray crystallography, NMR spectroscopy of proteins, cryo-electron microscopy and successive single particle analysis, and electron tomography.
The atomic structure models obtained by X-ray crystallography and biomolecular NMR spectroscopy can be docked into the much larger structures of biomolecular complexes obtained by lower resolution techniques like electron microscopy, electron tomography, and small-angle X-ray scattering.
Complexes of macromolecules occur ubiquitously in nature, where they are involved in the construction of viruses and all living cells. In addition, they play fundamental roles in all basic life processes (protein translation, cell division, vesicle trafficking, intra- and inter-cellular exchange of material between compartments, etc.). In each of these roles, complex mixtures of macromolecules become organized in specific structural and spatial ways. While the individual macromolecules are held together by a combination of covalent bonds and intramolecular non-covalent forces (i.e., associations between parts within each molecule, via charge-charge interactions, van der Waals forces, and dipole–dipole interactions such as hydrogen bonds), by definition MAs themselves are held together solely via the noncovalent forces, except now exerted between molecules (i.e., intermolecular interactions).
MA scales and examples
The images above give an indication of the compositions and scale (dimensions) associated with MAs, though these just begin to touch on the complexity of the structures; in principle, each living cell is composed of MAs, but is itself an MA as well. In the examples and other such complexes and assemblies, MAs are each often millions of daltons in molecular weight (megadaltons, i.e., millions of times the weight of a single, simple atom), though still having measurable component ratios (stoichiometries) at some level of precision. As alluded to in the image legends, when properly prepared, MAs or component subcomplexes of MAs can often be crystallized for study by protein crystallography and related methods, or studied by other physical methods (e.g., spectroscopy, microscopy).
Virus structures were among the first studied MAs; other biologic examples include ribosomes (partial image above), proteasomes, and translation complexes (with protein and nucleic acid components), procaryotic and eukaryotic transcription complexes, and nuclear and other biological pores that allow material passage between cells and cellular compartments. Biomembranes are also generally considered MAs, though the requirement for structural and spatial definition is modified to accommodate the inherent molecular dynamics of membrane lipids, and of proteins within lipid bilayers.
Virus assembly
During assembly of the bacteriophage (phage) T4 virion, the morphogenetic proteins encoded by the phage genes interact with each other in a characteristic sequence. Maintaining an appropriate balance in the amounts of each of these proteins produced during viral infection appears to be critical for normal phage T4 morphogenesis. Phage T4 encoded proteins that determine virion structure include major structural components, minor structural components and non-structural proteins that catalyze specific steps in the morphogenesis sequence.
Research into MAs
The study of MA structure and function is challenging, in particular because of their megadalton size, but also because of their complex compositions and varying dynamic natures. Most have had standard chemical and biochemical methods applied (methods of protein purification and centrifugation, chemical and electrochemical characterization, etc.). In addition, their methods of study include modern proteomic approaches, computational and atomic-resolution structural methods (e.g., X-ray crystallography), small-angle X-ray scattering (SAXS) and small-angle neutron scattering (SANS), force spectroscopy, and transmission electron microscopy and cryo-electron microscopy. Aaron Klug was recognized with the 1982 Nobel Prize in Chemistry for his work on structural elucidation using electron microscopy, in particular for protein-nucleic acid MAs including the tobacco mosaic virus (a structure containing a 6400 base ssRNA molecule and >2000 coat protein molecules). The crystallization and structure solution for the ribosome, MW ~ 2.5 MDa, an example of part of the protein synthetic 'machinery' of living cells, was object of the 2009 Nobel Prize in Chemistry awarded to Venkatraman Ramakrishnan, Thomas A. Steitz, and Ada E. Yonath.
Non-biologic counterparts
Finally, biology is not the sole domain of MAs. The fields of supramolecular chemistry and nanotechnology each have areas that have developed to elaborate and extend the principles first demonstrated in biologic MAs. Of particular interest in these areas has been elaborating the fundamental processes of molecular machines, and extending known machine designs to new types and processes.
See also
Multi-state modeling of biomolecules
Quaternary structure
Multiprotein complex
Organelle: the broadest definition of "organelle" includes not only membrane bound cellular structures, but also very large biomolecular complexes.
References
Further reading
General reviews
Reviews on particular MAs
Primary sources
Other sources
Nobel Prizes in Chemistry (2012), The Nobel Prize in Chemistry 2009, Venkatraman Ramakrishnan, Thomas A. Steitz, Ada E. Yonath, The Nobel Prize in Chemistry 2009, accessed 13 June 2011.
Nobel Prizes in Chemistry (2012), The Nobel Prize in Chemistry 1982, Aaron Klug, The Nobel Prize in Chemistry 1982, accessed 13 June 2011.
External links
Beck Group (2019), Structure and function of large macromolecular assemblies (Beck group home page), Beck Group - Structure and function of large molecular assemblies - EMBL, accessed 13 June 2011.
DMA Group (2019), Dynamics of macromolecular assembly (DMA Group home page), Dynamics of Macromolecular Assembly Section | National Institute of Biomedical Imaging and Bioengineering, accessed 13 June 2011.
Molecular biology
Biochemistry | Macromolecular assembly | [
"Chemistry",
"Biology"
] | 1,885 | [
"Biochemistry",
"nan",
"Molecular biology"
] |
41,740,230 | https://en.wikipedia.org/wiki/Ramsey%20interferometry | Ramsey interferometry, also known as the separated oscillating fields method, is a form of particle interferometry that uses the phenomenon of magnetic resonance to measure transition frequencies of particles. It was developed in 1949 by Norman Ramsey, who built upon the ideas of his mentor, Isidor Isaac Rabi, who initially developed a technique for measuring particle transition frequencies. Ramsey's method is used today in atomic clocks and in the SI definition of the second. Most precision atomic measurements, such as modern atom interferometers and quantum logic gates, have a Ramsey-type configuration. A more modern method, known as Ramsey–Bordé interferometry uses a Ramsey configuration and was developed by French physicist Christian Bordé and is known as the Ramsey–Bordé interferometer. Bordé's main idea was to use atomic recoil to create a beam splitter of different geometries for an atom-wave. The Ramsey–Bordé interferometer specifically uses two pairs of counter-propagating interaction waves, and another method named the "photon-echo" uses two co-propagating pairs of interaction waves.
Introduction
A main goal of precision spectroscopy of a two-level atom is to measure the absorption frequency between the ground state and excited state of the atom. One way to accomplish this measurement is to apply an external oscillating electromagnetic field at frequency and then find the difference (also known as the detuning) between and by measuring the probability to transfer to . This probability can be maximized when , when the driving field is in resonance with the transition frequency of the atom. Looking at this probability of transition as a function of the detuning , the narrower the peak around , the more precision there is. If the peak were very broad about , then it would be difficult to distinguish precisely where is located due to many values of having close to the same probability.
Physical principles
The Rabi method
A simplified version of the Rabi method consists of a beam of atoms, all having the same speed and the same direction, sent through one interaction zone of length . The atoms are two-level atoms with a transition energy of (this is defined by applying a field in an excitation direction , and thus , the Larmor frequency), and with an interaction time of in the interaction zone. In the interaction zone, a monochromatic oscillating magnetic field is applied perpendicular to the excitation direction, and this will lead to Rabi oscillations between and at a frequency of .
The Hamiltonian in the rotating frame (including the rotating-wave approximation) is
The probability of transition from and can be found from this Hamiltonian and is
This probability will be at its maximum when . The line width of this vs. determines the precision of the measurement. Because , by increasing or , and correspondingly decreasing so that their product is , the precision of the measurement increases; i.e. the peak of the graph becomes narrower.
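As an illustration of the lineshape described above, the sketch below evaluates the textbook Rabi transition probability for a square pulse, P(δ) = [Ω²/(Ω² + δ²)]·sin²(√(Ω² + δ²)·τ/2); this standard form and all parameter values are assumptions for illustration, not necessarily the exact conventions used in this article.

```python
import numpy as np

def rabi_probability(delta, omega, tau):
    """Textbook two-level transition probability for a square pulse:
    P = (omega^2 / (omega^2 + delta^2)) * sin^2(sqrt(omega^2 + delta^2) * tau / 2)."""
    omega_eff = np.sqrt(omega**2 + delta**2)        # generalized Rabi frequency
    return (omega / omega_eff) ** 2 * np.sin(omega_eff * tau / 2.0) ** 2

omega = 2 * np.pi * 1e3           # Rabi frequency (rad/s), illustrative
tau   = np.pi / omega             # a pi-pulse: full population transfer on resonance
delta = np.linspace(-10 * omega, 10 * omega, 2001)   # detuning grid (rad/s)

P = rabi_probability(delta, omega, tau)
print(P[len(P) // 2])             # ~1.0 at delta = 0 (resonance)
```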
In reality, however, inhomogeneities such as the atoms having a distribution of velocities or there being an inhomogeneous will cause the line shape to broaden and lead to decreased precision. Having a distribution of velocities means having a distribution of interaction times, and therefore there would be many angles through which state vectors would flip on the Bloch sphere. There would be an optimal length in the Rabi setup that would give the greatest precision, but it would not be possible to increase the length infinitely and expect ever increasing precision, as was the case in the perfect, simple Rabi model.
The Ramsey method
Ramsey improved upon Rabi's method by splitting the one interaction zone into two very short interaction zones, each applying a pulse. The two interaction zones are separated by a much longer non-interaction zone. By making the two interaction zones very short, the atoms spend a much shorter time in the presence of the external electromagnetic fields than they would in the Rabi model. This is advantageous because the longer the atoms are in the interaction zone, the more inhomogeneities (such as an inhomogeneous field) lead to reduced precision in determining . The non-interaction zone in Ramsey's model can be made much longer than the one interaction zone in Rabi's method because there is no perpendicular field being applied in the non-interaction zone (although there is still the static field, about which the spins precess).
The primary improvement of the Ramsey method is that the main resonance peak represents an average over the frequencies (and inhomogeneities) in the non-interaction region between the cavities, whereas with the Rabi method the inhomogeneities in the interaction region lead to line broadening. An additional advantage of the Ramsey method for microwave or optical transitions is that the non-interaction region can be made much longer than an interaction region with the Rabi method, resulting in narrower lines.
The Hamiltonian in the rotating frame for the two interaction zones is the same for that of the Rabi method, and in the non-interaction zone the Hamiltonian is only the term. First a pulse is applied to atoms in the ground state, whereupon the atoms reach the non-interaction zone, and the spins precess about the z axis for time . Another pulse is applied, and the probability measured—practically this experiment must be done many times, because one measurement will not be enough to determine the probability of measuring any value (see the Bloch sphere description below). By applying this evolution to atoms of one velocity, the probability to find the atom in the excited state as a function of the detuning and time of flight in the non-interaction zone is (taking here)
This probability function describes the well-known Ramsey fringes.
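Assuming ideal, infinitely short π/2 pulses (a common idealization that may differ from the exact convention above), the excited-state probability after a free-precession time T reduces to P(δ) = cos²(δT/2) = ½[1 + cos(δT)]; the short sketch below evaluates these fringes for an arbitrary illustrative T.

```python
import numpy as np

def ramsey_probability(delta, T):
    """Ideal Ramsey fringes for two infinitely short pi/2 pulses separated by
    free-precession time T: P = cos^2(delta * T / 2) = (1 + cos(delta * T)) / 2."""
    return np.cos(delta * T / 2.0) ** 2

T = 10e-3                                           # free-precession time (s), illustrative
delta = 2 * np.pi * np.linspace(-500, 500, 4001)    # detuning grid (rad/s)

P = ramsey_probability(delta, T)
fringe_period = 2 * np.pi / T                       # fringe spacing in angular frequency
print(f"fringe period ~ {fringe_period / (2 * np.pi):.0f} Hz")   # 1/T = 100 Hz
```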
If there is a distribution of velocities and a "hard pulse" is applied in the interaction zones so that all of the spins of the atoms are rotated on the Bloch sphere regardless of whether or not they all were excited to exactly the same resonance frequency, the Ramsey fringes will look very similar to those discussed above. If a hard pulse is not applied, then the variation in interaction times must be taken into account. What results are Ramsey fringes in an envelope in the shape of the Rabi method probability for atoms of one velocity. The line width of the fringes in this case is what determines the precision with which can be determined and is
By increasing the time of flight in the non-interaction zone, or equivalently increasing the length of the non-interaction zone, the line width can be substantially improved, by a factor of 10 or more, over that of other methods.
Because Ramsey's model allows a longer observation time, one can more precisely determine . This is a statement of the time-energy uncertainty principle: the larger the uncertainty in the time domain, the smaller the uncertainty in the energy domain, or equivalently the frequency domain. Thought of another way, if two waves of almost exactly the same frequency are superimposed upon each other, then it will be impossible to distinguish them if the resolution of our eyes is larger than the difference between the two waves. Only after a long period of time will the difference between two waves become large enough to differentiate the two.
Early Ramsey interferometers used two interaction zones separated in space, but it is also possible to use two pulses separated in time, as long as the pulses are coherent. In the case of time-separated pulses, the longer the time between pulses, the more precise the measurement.
Applications of the Ramsey interferometer
Atomic clocks and the SI definition of the second
An atomic clock is fundamentally an oscillator whose frequency is matched to that of an atomic transition of a two-level atom. The oscillator is the parallel external electromagnetic field in the non-interaction zone of the Ramsey–Bordé interferometer. By measuring the transition rate from the excited to the ground state, one can tune the oscillator by finding the frequency that yields the maximum transition rate. Once the oscillator is tuned, the number of oscillations of the oscillator can be counted electronically to give a certain time interval (e.g. the SI second, which is 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the cesium-133 atom).
Experiments of Serge Haroche
Serge Haroche won the 2012 Nobel Prize in physics (with David J. Wineland) for work involving cavity quantum electrodynamics (QED) in which the research group used microwave-frequency photons to verify the quantum description of electromagnetic fields. Essential to their experiments was the Ramsey interferometer, which they used to demonstrate the transfer of quantum coherence from one atom to another through interaction with a quantum mode in a cavity. The setup is similar to a regular Ramsey interferometer, with key differences being there is a quantum cavity in the non-interaction zone and the second interaction zone has its field phase shifted by some constant relative to the first interaction zone.
If one atom is sent into the setup in its ground state and passed through the first interaction zone, the state would become a superposition of ground and excited states , just as it would with a regular Ramsey interferometer. It then passes through the quantum cavity, which initially contains only a vacuum, and then is measured to be or . A second atom initially in is then sent through the cavity and then through the phase-shifted second Ramsey interaction zone. If the first atom is measured to be in , then the probability that the second atom is in depends on the amount of time between sending in the first and the second atoms. The fundamental reason for this is that if the first atom is measured to be in , then there is a single mode of the electromagnetic field within the cavity that will subsequently affect the measurement outcome of the second atom.
The Ramsey–Bordé interferometer
Early interpretations of atom interferometers, including those of Ramsey, used a classical description of the motion of the atoms, but Bordé introduced an interpretation that used a quantum description of the motion of the atoms. Strictly speaking, the Ramsey interferometer is not an interferometer in real space because the fringe patterns develop due to changes of the pseudo-spin of the atom in the internal atomic space. However, an argument could be made for the Ramsey interferometer to be an interferometer in real space by thinking about the atomic movement quantumly—the fringes can be thought of as the result of the momentum kick imparted to the atoms by the detuning .
Four traveling-wave interaction geometry
The problem that Bordé et al. were trying to solve in 1984 was the averaging-out of Ramsey fringes of atoms whose transition frequencies were in the optical range. When this was the case, first-order Doppler shifts caused the Ramsey fringes to vanish because of the introduced spread in frequencies. Their solution was to have four Ramsey interaction zones instead of two, each zone consisting of a traveling wave but still applying a pulse. The first two waves both travel in the same direction, and the second two both travel in the direction opposite that of the first and second. There are two populations that result from the interaction of the atoms first with the first two zones and subsequently with the second two. The first population consists of atoms whose Doppler-induced de-phasing has cancelled, resulting in the familiar Ramsey fringes. The second consists of atoms whose Doppler-induced de-phasing has doubled and whose Ramsey fringes have completely disappeared (this is known as the "backward-stimulated photon echo", and its signal goes to zero after integrating over all velocities).
The interaction geometry of two pairs of counter-propagating waves that Bordé et al. introduced allows improved resolution of spectroscopy of frequencies in the optical range, such as those of Ca and I2.
Interferometer
Specifically, however, the Ramsey–Bordé interferometer is an atom interferometer that uses this four-traveling-wave geometry and the phenomenon of atomic recoil. In Bordé's notation, is the ground state and is the excited state. When an atom enters any of the four interaction zones, the wavefunction of the atom is divided into a superposition of two states, where each state is described by a specific energy and a specific momentum: , where α is either a or b. The quantum number mα is the number of light momentum quanta that have been exchanged from the initial momentum, where is the wavevector of the laser. This superposition is due to the energy and momentum exchanged between the laser and the atom in the interaction zones during the absorption/emission processes. Because there is initially one atom-wave, after the atom has passed through three zones it is in a superposition of eight different states before it reaches the final interaction zone.
Looking at the probability to transition to after the atom has passed through the fourth interaction zone, one would find dependence on the detuning in the form of Ramsey fringes, but due to the difference in two quantum mechanical paths. After integrating over all velocities, there are only two closed circuit quantum mechanical paths that do not integrate to zero, and those are the and path and the and path, which are the two paths that lead to intersections of the diagram at the fourth interaction zone. The atom-wave interferometer formed by either of these two paths leads to a phase difference that is dependent on both internal and external parameters, i.e. it is dependent on the physical distances by which the interaction zones are separated and on the internal state of the atom, as well as external applied fields. Another way to think about these interferometers in the traditional sense is that for each path there are two arms, each of which is denoted by the atomic state.
If an external field is applied to either rotate or accelerate the atoms, there will be a phase shift due to the induced de Broglie phase in each arm of the interferometer, and this will translate to a shift in the Ramsey fringes. In other words, the external field will change the momentum states, which will lead to a shift in the fringe pattern, which can be detected. As an example, apply the following Hamiltonian of an external field to rotate the atoms in the interferometer:
This Hamiltonian leads to a time evolution operator to first order in :
If is perpendicular to , then the round trip phase factor for one oscillation is given by , where is the length of the entire apparatus from the first interaction zone to the final interaction zone. This will yield a probability such that
where is the wavelength of the atomic two-level transition. This probability represents a shift from by a factor of
For a calcium atom on the Earth's surface that rotates at , using and looking at the transition, the shift in the fringes would be , which is a measurable effect.
A similar effect can be calculated for the shift in the Ramsey fringes caused by the acceleration of gravity. The shifts in the fringes will reverse direction if the directions of the lasers in the interaction zones are reversed, and the shift will cancel if standing waves are used.
The Ramsey–Bordé interferometer provides the potential for improved frequency measurements in the presence of external fields or rotations.
References
Interferometers | Ramsey interferometry | [
"Technology",
"Engineering"
] | 3,126 | [
"Interferometers",
"Measuring instruments"
] |
41,745,711 | https://en.wikipedia.org/wiki/3-3%20duoprism | In the geometry of 4 dimensions, the 3-3 duoprism or triangular duoprism is a four-dimensional convex polytope.
Descriptions
The duoprism is a 4-polytope that can be constructed as the Cartesian product of two polygons. The 3-3 duoprism is the simplest among them, and it can be constructed as the Cartesian product of two triangles. The resulting duoprism has 9 vertices, 18 edges, and 15 faces—which include 9 squares and 6 triangles. Its cells are 6 triangular prisms. It has Coxeter diagram , and symmetry , order 72.
The hypervolume of a uniform 3-3 duoprism with edge length a is
V_4 = \left(\frac{\sqrt{3}}{4}a^2\right)^2 = \frac{3}{16}a^4 .
This is the square of the area of an equilateral triangle, A = \frac{\sqrt{3}}{4}a^2.
The 3-3 duoprism can be represented as a graph with the same number of vertices and edges. Like the Berlekamp–van Lint–Seidel graph and the unknown solution to Conway's 99-graph problem, every edge is part of a unique triangle and every non-adjacent pair of vertices is the diagonal of a unique square. It is a toroidal graph, a locally linear graph, a strongly regular graph with parameters (9,4,1,2), the rook's graph, and the Paley graph of order 9. This graph is also the Cayley graph of the group with generating set .
The minimal distance graph of a 3-3 duoprism can be obtained as the Cartesian product of two identical complete graphs K3.
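A short check of this description, using the networkx library (assumed to be available), builds the Cartesian product K3 □ K3 (the 3×3 rook's graph mentioned above) and verifies that it is 4-regular on 9 vertices with 18 edges and strongly regular with parameters (9, 4, 1, 2).

```python
import itertools
import networkx as nx

# Skeleton of the 3-3 duoprism as the Cartesian product K3 x K3 (3x3 rook's graph).
G = nx.cartesian_product(nx.complete_graph(3), nx.complete_graph(3))

assert G.number_of_nodes() == 9 and G.number_of_edges() == 18
assert all(d == 4 for _, d in G.degree())          # 4-regular

# Strongly regular (9, 4, 1, 2): adjacent pairs share 1 neighbour, non-adjacent share 2.
for u, v in itertools.combinations(G.nodes, 2):
    common = len(set(G[u]) & set(G[v]))
    assert common == (1 if G.has_edge(u, v) else 2)
print("K3 x K3 is strongly regular with parameters (9, 4, 1, 2)")
```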
3-3 duopyramid
The dual polyhedron of a 3-3 duoprism is called a 3-3 duopyramid or triangular duopyramid. It has 9 tetragonal disphenoid cells, 18 triangular faces, 15 edges, and 6 vertices. It can be seen in orthogonal projection as a 6-gon circle of vertices, and edges connecting all pairs, just like a 5-simplex seen in projection.
The regular complex polygon 2{4}3, also 3{ }+3{ }, has 6 vertices in with a real representation in matching the same vertex arrangement of the 3-3 duopyramid. It has 9 2-edges corresponding to the connecting edges of the 3-3 duopyramid, while the 6 edges connecting the two triangles are not included. It can be seen in a hexagonal projection with 3 sets of colored edges. This arrangement of vertices and edges makes a complete bipartite graph, with each vertex of one triangle connected to every vertex of the other. It is also called a Thomsen graph or 4-cage.
See also
3-4 duoprism
Tesseract (4-4 duoprism)
Duocylinder
References
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues)
Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33-62, 1937.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
External links
The Fourth Dimension Simply Explained—describes duoprisms as "double prisms" and duocylinders as "double cylinders"
Polygloss – glossary of higher-dimensional terms
Exploring Hyperspace with the Geometric Product
Uniform 4-polytopes | 3-3 duoprism | [
"Physics"
] | 804 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
41,745,759 | https://en.wikipedia.org/wiki/3-4%20duoprism | In geometry of 4 dimensions, a 3-4 duoprism, the second smallest p-q duoprism, is a 4-polytope resulting from the Cartesian product of a triangle and a square.
The 3-4 duoprism exists in some of the uniform 5-polytopes in the B5 family.
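As a concrete illustration of this construction, the sketch below generates the 12 vertices of a unit-edge 3-4 duoprism as the Cartesian product of an equilateral triangle and a square (both with edge length 1) and counts the unit-length edges; the particular coordinate placement is just one convenient convention.

```python
import itertools
import numpy as np

# Vertices of a unit-edge equilateral triangle (circumradius 1/sqrt(3)) in the xy-plane.
tri = [(np.cos(2 * np.pi * k / 3) / np.sqrt(3),
        np.sin(2 * np.pi * k / 3) / np.sqrt(3)) for k in range(3)]
# Vertices of a unit-edge square in the zw-plane.
sq = [(x, y) for x in (-0.5, 0.5) for y in (-0.5, 0.5)]

# 3-4 duoprism vertices: Cartesian product, 3 * 4 = 12 points in R^4.
verts = np.array([t + s for t in tri for s in sq])
print(verts.shape)        # (12, 4)

# Edges are the vertex pairs at distance exactly 1 (the common edge length).
edges = [(i, j) for i, j in itertools.combinations(range(12), 2)
         if abs(np.linalg.norm(verts[i] - verts[j]) - 1.0) < 1e-9]
print(len(edges))         # 24: 4 copies of the triangle (12) + 3 copies of the square (12)
```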
Images
Related complex polygons
The quasiregular complex polytope 3{}×4{}, , in has a real representation as a 3-4 duoprism in 4-dimensional space. It has 12 vertices, and 4 3-edges and 3 4-edges. Its symmetry is 3[2]4, order 12.
Related polytopes
The birectified 5-cube, has a uniform 3-4 duoprism vertex figure:
3-4 duopyramid
The dual of a 3-4 duoprism is called a 3-4 duopyramid. It has 12 digonal disphenoid cells, 24 isosceles triangular faces, 12 edges, and 7 vertices.
See also
Polytope and polychoron
Convex regular polychoron
Duocylinder
Tesseract
Notes
References
Regular Polytopes, H. S. M. Coxeter, Dover Publications, Inc., 1973, New York, p. 124.
Coxeter, The Beauty of Geometry: Twelve Essays, Dover Publications, 1999, (Chapter 5: Regular Skew Polyhedra in three and four dimensions and their topological analogues)
Coxeter, H. S. M. Regular Skew Polyhedra in Three and Four Dimensions. Proc. London Math. Soc. 43, 33–62, 1937.
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26)
Norman Johnson Uniform Polytopes, Manuscript (1991)
N.W. Johnson: The Theory of Uniform Polytopes and Honeycombs, Ph.D. Dissertation, University of Toronto, 1966
External links
The Fourth Dimension Simply Explained—describes duoprisms as "double prisms" and duocylinders as "double cylinders"
Polygloss - glossary of higher-dimensional terms
Exploring Hyperspace with the Geometric Product
Uniform 4-polytopes | 3-4 duoprism | [
"Physics"
] | 465 | [
"Uniform 4-polytopes",
"Uniform polytopes",
"Symmetry"
] |
43,245,211 | https://en.wikipedia.org/wiki/Time%20base%20generator | A time base generator (also timebase or time base) is a special type of function generator, an electronic circuit that generates a varying voltage to produce a particular waveform. Time base generators produce very high frequency sawtooth waves specifically designed to deflect the beam of a cathode ray tube (CRT) smoothly across the face of the tube and then return it to its starting position.
Time bases are used by radar systems to determine range to a target, by comparing the current location along the time base to the time of arrival of radio echoes. Analog television systems using CRTs had two time bases, one for deflecting the beam horizontally in a rapid movement, and another pulling it down the screen 60 times per second. Oscilloscopes often have several time bases, but these may be more flexible function generators able to produce many waveforms as well as a simple time base.
Description
A cathode ray tube (CRT) consists of three primary parts, the electron gun that provides a stream of accelerated electrons, the phosphor-covered screen that lights up when the electrons hit it, and the deflection plates that use magnetic or electric fields to deflect the electrons in-flight and allows them to be directed around the screen. It is the ability for the electron stream to be rapidly moved using the deflection plates that allows the CRT to be used to display very rapid signals, like those of a television signal or to be used for radio direction finding (see huff-duff).
Many signals of interest vary over time at a very rapid rate, but have an underlying periodic nature. Radio signals, for instance, have a base frequency, the carrier, which forms the basis for the signal. Sounds are modulated into the carrier by modifying the signal, either in amplitude (AM), frequency (FM) or similar techniques. To display such a signal on an oscilloscope for examination, it is desirable to have the electron beam sweep across the screen so that the electron beam cycles at the same frequency as the carrier, or some multiple of that base frequency.
This is the purpose of the time base generator, which is attached to one of the set of deflection plates, normally the X axis, while the amplified output of the radio signal is sent to the other axis, normally Y. The result is a visual re-creation of the original waveform.
Use in radar
A typical radar system broadcasts a short pulse of radio signal and then listens for echoes from distant objects. As the signal travels at the speed of light and has to travel to the target object and back, the distance to the target can be determined by measuring the delay between the broadcast and reception, multiplying the speed of light by that time, and then dividing by two (there and back again). As this process occurs very rapidly, a CRT is used to display the signal and look for the echoes.
In the simplest version of a radar display, today known as an "A-scope", a time base generator sweeps the display across the screen so that it reaches one side at the time when the signal has travelled the radar's maximum effective distance. For instance, an early warning radar like Chain Home (CH) might have a maximum range of , a distance that light will travel out and back in 1 millisecond. This would be used with a time base generator that pulls the beam across the CRT once every millisecond, starting the sweep when the broadcast signal ends. Any echoes cause the beam to deflect down (in the case of CH) as it moves across the display.
By measuring the physical location of the "blip" on the CRT, one can determine the range to the target. For instance, if a particular radar has a time base of 1 millisecond, then its maximum range is 150 km. If this is displayed on a four-inch CRT and the blip is measured to be 2 inches from the left side, then the target is 0.5 milliseconds away, or about .
To ensure the blips would line up properly with a mechanical scale, the time base could be adjusted to start its sweep at a certain time. This could be adjusted manually, or automatically trigged by another signal, normally a greatly attenuated version of the broadcast signal.
Later systems modified the time base to include a second signal that periodically produced blips on the display, providing a clock signal that varied with the time base and thus did not need to be aligned. In UK terminology, these were known as strobes.
Use in television
Television signals consist of a series of still images broadcast in sequence, in the NTSC standard such a "frame" is broadcast 30 times a second. Each frame is itself broken down into a series of "lines", 525 in the NTSC standard. If one examines a television broadcast on an oscilloscope, it will appear to be a continual sequence of modulated signals broken up by short periods of "empty" signal. Each modulated portion carries the analog image for a single line.
To display the signal, two time bases are used. One sweeps the beam horizontally from left to right at 15,750 times a second, the time it takes for one line to be sent. A second time base causes the beam to scan down the screen 60 times a second, so that each line appears below the last one drawn and then returns to the top. This causes the entire signal of 525 lines to be drawn down the screen, re-creating a 2-dimensional image.
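The horizontal and vertical rates quoted above follow directly from the NTSC frame structure; a trivial arithmetic check:

```python
# Line and field rates implied by the (monochrome) NTSC frame structure above.
lines_per_frame = 525
frames_per_second = 30

line_rate = lines_per_frame * frames_per_second   # horizontal time-base rate
field_rate = 2 * frames_per_second                # vertical time-base rate (two fields per frame)
print(line_rate, field_rate)                      # 15750 lines/s, 60 fields/s
```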
To ensure the time base began its sweep of the screen at the right time, the signal included several special modulations. With each line there was a brief period, the "front porch" and "back porch" that caused the signal to go negative briefly. This triggered the horizontal time base to start its sweep across the screen, ensuring that the lines started on the left of the display. A much longer but otherwise similar signal, the vertical blanking interval caused the vertical time base to start, with any lengthy delay causing the time base to trigger.
References
Anand Kumar, "Time-Base Generators", Pulse And Digital Circuits, PHI Learning, 2008
"NTSC SIGNAL SPECIFICATIONS", National Television System Committee, 1953
Electronic test equipment
Cathode ray tube
"Technology",
"Engineering"
] | 1,299 | [
"Electronic test equipment",
"Measuring instruments"
] |
43,245,546 | https://en.wikipedia.org/wiki/Institute%20for%20Chemistry%20and%20Biology%20of%20the%20Marine%20Environment | The Institute for Chemistry and Biology of the Marine Environment of the Carl von Ossietzky University of Oldenburg (, abbreviated ICBM)
is one of the marine science institutes at the German coast and the only university-based marine research institute in Lower Saxony, Germany.
The ICBM is located on the Wechloy campus in Oldenburg, with further sites in Wilhelmshaven and on the island of Spiekeroog (associated with the national park centre Wittbülten on the grounds of the Hermann Lietz School). The ICBM operates the Wadden Sea time series station Spiekeroog (WSS) and several research vessels.
Mission statement and research
The ICBM carries out fundamental and applied research in marine and environmental sciences. Its interdisciplinary research aims to provide an understanding of the various interactions within marine environmental systems.
The research focuses on marine biogeochemical cycles and energy fluxes, as well as on the functional role of marine biodiversity, especially in coastal zones worldwide, and in the oceans.
The mathematic modelling of different environmental systems is complemented by modern, high-resolution analytics and in-house marine sensor developments.
The institute is composed of three sections covering altogether 18 research groups:
Section Geochemistry and Analytics
Section Biology and Ecology
Section Physics and Modelling
For a better understanding of the complex relations the ICBM aims to foster interdisciplinary research.
Academic training
The courses of study are closely related to the research activities to provide interdisciplinary and research-oriented training.
The institute offers a broad bachelor programme covering marine, environmental and landscape-ecological sciences, as well as four master programmes: Marine Environmental Sciences, Microbiology (taught in English), Environmental Modelling and Marine Sensors. In cooperation with the ICBM, the Jade University of Applied Sciences offers a bachelor programme providing the fundamentals of marine engineering. The ICBM is an ERASMUS exchange partner for students.
History
In July 1987, Lower Saxony’s minister of Science and Art approved the establishment of the ICBM as a cooperation of the university departments of mathematics, biology, physics and chemistry. In 1991, the ICBM was approved as a central organisation of the University of Oldenburg. The registered association „Centre for Research on Shallow seas, Coastal Zones and the Marine Environment – Research Centre Terramare” (Zentrum für Flachmeer-, Küsten- und Meeresumweltforschung e.V. – Forschungszentrum Terramare) which was founded in 1990 in Wilhelmshaven and financed through federal state resources was incorporated into the ICBM in 2008.
The former directors of the ICBM are Wolfgang Krumbein, Ulrich Kattmann, Hans Joachim Schellnhuber, Bruno Eckhardt, Wolfgang Ebenhöh, Heribert Cypionka, Meinhard Simon, Hans-Jürgen Brumsack, Ulrike Feudel, Jürgen Rullkötter, Helmut Hillebrand, Bernd Blasius and Oliver Zielinski. At present, the institute is headed by Heinz Wilkes.
Cooperations and memberships
The ICBM cooperates closely with the Max Planck Institute for Marine Microbiology und MARUM, both located in Bremen; with the Alfred Wegener Institute in Bremerhaven, as well as with the Senckenberg Institute by the Sea and the Jade University of Applied Sciences, both located in Wilhelmshaven. The ICBM is a member of the German Marine Research Consortium (KDM) and of the Northwest Marine Research Association (NWMV).
References
External links
Homepage
Biogeochemistry
Coastal geography
Research institutes in Lower Saxony
Oceanographic organizations
University of Oldenburg | Institute for Chemistry and Biology of the Marine Environment | [
"Chemistry",
"Environmental_science"
] | 735 | [
"Chemical oceanography",
"Biogeochemistry",
"Environmental chemistry"
] |
39,011,104 | https://en.wikipedia.org/wiki/Water-pouring%20algorithm | The water-pouring algorithm is a technique used in digital communications systems for allocating power among different channels in multicarrier schemes. It was described by R. C. Gallager in 1968 along with the water-pouring theorem which proves its optimality for channels having Additive White Gaussian Noise (AWGN) and intersymbol interference (ISI).
For this reason, it is a standard baseline algorithm for various digital communications systems.
The intuition that gives the algorithm its name is to think of the communication medium as if it was some kind of water container with an uneven bottom. Each of the available channels is then a section of the container having its own depth, given by the reciprocal of the frequency-dependent SNR for the channel.
To allocate power, imagine pouring water into this container (the amount depends on the desired maximum average transmit power). After the water level settles, the largest amount of water is in the deepest sections of the container. This implies allocating more power to the channels with the most favourable SNR. Note, however, that the allocation to each channel is not a fixed proportion but varies nonlinearly with the maximum average transmit power.
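A minimal sketch of the idea, with the "container bottom" of each channel represented by a noise-to-gain level: bisection finds the common water level μ at which the allocated powers max(0, μ − level) sum to the power budget. The levels and budget below are arbitrary illustration values.

```python
def water_filling(levels, total_power, iters=100):
    """Allocate total_power over channels whose 'bottom' heights are `levels`
    (e.g. noise power divided by channel gain). Returns the per-channel powers."""
    lo, hi = min(levels), max(levels) + total_power   # the water level lies in this range
    for _ in range(iters):                            # bisection on the water level mu
        mu = (lo + hi) / 2.0
        used = sum(max(0.0, mu - b) for b in levels)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2.0
    return [max(0.0, mu - b) for b in levels]

# Example: the "deepest" channels (lowest levels, best SNR) receive the most power.
print(water_filling([0.1, 0.5, 1.0, 2.5], total_power=2.0))   # ~[1.1, 0.7, 0.2, 0.0]
```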
References
Telecommunications
Information theory | Water-pouring algorithm | [
"Mathematics",
"Technology",
"Engineering"
] | 245 | [
"Information and communications technology",
"Telecommunications engineering",
"Applied mathematics",
"Telecommunications",
"Computer science",
"Information theory"
] |
39,011,211 | https://en.wikipedia.org/wiki/ISTTOK | The ISTTOK Tokamak ("Instituto Superior Técnico TOKamak") is a research fusion reactor (tokamak) of the Instituto Superior Técnico. It has a circular cross-section due to a poloidal graphite limiter and an iron core transformer. Its particularity is that it is one of the few tokamaks operating in AC (alternating plasma current) regime, as well in DC regime. In 2013, the AC operation allowed the standard discharges to extend from 35 ms to more than 1s.
Characteristics
Minor radius: 0.085 metre
Major radius: 0.46 metre
Plasma current: ~7 kA (kiloamperes)
Plasma life span: 30 milliseconds (DC) / 1000 milliseconds (AC)
Maximum toroidal magnetic field: 2.8 Tesla
Nominal toroidal magnetic field: 0.3-0.6 Tesla
See also
Small Tight Aspect Ratio Tokamak
Ball-pen probe
External links
ISTTOK official site
References
Tokamaks
Nuclear research institutes
University of Lisbon | ISTTOK | [
"Physics",
"Engineering"
] | 210 | [
"Nuclear research institutes",
"Plasma physics stubs",
"Nuclear organizations",
"Plasma physics"
] |
39,013,900 | https://en.wikipedia.org/wiki/Non-linear%20coherent%20states | Coherent states are quasi-classical states that may be defined in different ways, for instance as eigenstates of the annihilation operator
,
or as a displacement from the vacuum
,
where is the Sudarshan-Glauber displacement operator.
One may think of a non-linear coherent state by generalizing the annihilation operator:
\hat{A} = \hat{a}\, f(\hat{n}) , \qquad \hat{n} = \hat{a}^{\dagger}\hat{a} ,
where f is a function of the number operator, and then using any of the above definitions with \hat{a} exchanged for \hat{A}. The above definition is also known as an f-deformed annihilation operator.
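A small numerical sketch in a truncated Fock space illustrates these operators: it builds the ordinary annihilation operator a, an f-deformed operator A = a·f(n̂) for an arbitrary illustrative choice of f, and checks that an ordinary truncated coherent state is an approximate eigenstate of a. The truncation size, α and the form of f are all assumptions for illustration.

```python
import numpy as np
from math import factorial

N = 40                                   # Fock-space truncation, illustration only
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator: a|n> = sqrt(n)|n-1>

f = lambda x: 1.0 / np.sqrt(1.0 + 0.1 * x)   # an arbitrary illustrative nonlinearity f(n)
A = a @ np.diag(f(n))                        # f-deformed operator A = a f(n_hat)

# Matrix elements of A satisfy <n-1|A|n> = sqrt(n) f(n).
print(np.allclose(np.diag(A, k=1), np.sqrt(n[1:]) * f(n[1:])))        # True

# Sanity check on the ordinary coherent state: a|alpha> = alpha|alpha> (up to truncation).
alpha = 1.5
ket = np.exp(-abs(alpha) ** 2 / 2) * alpha ** n / np.sqrt([float(factorial(k)) for k in n])
print(np.allclose(a @ ket, alpha * ket, atol=1e-6))                   # True
```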
References
Quantum mechanics | Non-linear coherent states | [
"Physics"
] | 104 | [
"Quantum states",
"Quantum mechanics",
"Quantum physics stubs"
] |
39,020,149 | https://en.wikipedia.org/wiki/Shibasaki%20catalyst | Shibasaki catalysts constitute a class of hetero-bimetallic complexes with the general formula [Ln(binol)3(M)3] (M = alkali metal, Ln = lanthanide). They are named after Masakatsu Shibasaki, whose group first developed them, and are used as asymmetric catalysts.
Development
The Shibasaki group produced the first chiral lanthanide-binaphtholate complex in 1992, which was used to catalyse nitroaldol reactions. The complex was not characterised but was the first to perform the reaction enantioselectively.
This success led to further research which resulted in the development of heterometallic complexes with the formula [Ln(binol)3(M)3], the structure of which was elucidated by X-ray crystallography.
Scope
Shibasaki catalysts are effective for a wide range of enantioselective reactions including nitroaldol, Michael, Diels-Alder and hydrophosphonylation reactions.
Their effectiveness arises in part from their ability to act as both a Brønsted base, by virtue of the metal alkoxide, and a Lewis acid, via the lanthanide ion. Enantioselectivity has been found to be sensitive to both Ln and M, with the nitroaldol reaction being most effective when Ln = Eu and M = Li,
whereas the Michael reaction requires Ln = La and M = Na. It was observed that alterations of Ln and M caused predictable changes in the bite angle of the binaphthol backbone.
References
Catalysts | Shibasaki catalyst | [
"Chemistry"
] | 345 | [
"Catalysis",
"Catalysts",
"Chemical kinetics"
] |
44,731,194 | https://en.wikipedia.org/wiki/Human%20Factors%20in%20Engineering%20and%20Design | Human Factors in Engineering and Design is an engineering textbook, currently in its seventh edition. First published in 1957 by Ernest J. McCormick, the book is considered a classic in human factors and ergonomics, and one of the best-established texts in the field. It is frequently taught in upper-level and graduate courses in the U.S., and is relied on by practicing human factors and ergonomics professionals.
The text is divided into six sections: Introduction; Information Input; Human Output and Control; Work Space and Arrangement; Environment; and Human Factors: Selected Topics.
Contents
The text is divided into six sections:
Introduction: Provides an overview of the field of human factors and ergonomics, including its history, goals, and methods.
Information Input: Discusses how humans perceive and process information from the environment, including vision, hearing, and other senses.
Human Output and Control: Examines human physical and cognitive capabilities and limitations in controlling systems and performing tasks.
Work Space and Arrangement: Covers the design of workspaces and equipment to optimize human performance and comfort, including anthropometry and workplace layout.
Environment: Explores the effects of environmental factors on human performance, such as lighting, noise, temperature, and vibration.
Human Factors: Selected Topics: Addresses specialized topics such as human-computer interaction, automation, and safety.
Editions
Since its first publication, the book has been updated and expanded several times to reflect advances in the field. The seventh edition, published in 2018 by Mark S. Sanders and Ernest J. McCormick, includes emerging topics such as digital technology, automation, and artificial intelligence.
Impact
Human Factors in Engineering and Design has had a significant impact on the field of human factors and ergonomics. The book has helped shape the development of the field and provided a framework for designing human-centered systems. It continues to be a valuable resource for students, researchers, and practicing professionals.
See also
Anthropometry
Industrial and organizational psychology
References
Engineering textbooks
McGraw-Hill books
Ergonomics
Occupational safety and health
Industrial engineering
Systems psychology | Human Factors in Engineering and Design | [
"Engineering"
] | 416 | [
"Industrial engineering"
] |
44,733,611 | https://en.wikipedia.org/wiki/Enhancer-FACS-seq | Enhancer-FACS-seq (eFS), developed by the Bulyk lab at Brigham and Women’s Hospital and Harvard Medical School, is a highly parallel enhancer assay that aims for the identification of active, tissue-specific transcriptional enhancers, in the context of whole Drosophila melanogaster embryos. This technology replaces the use of microscopy to screen for tissue-specific enhancers with fluorescence activated cell sorting (FACS) of dissociated cells from whole embryos, combined with identification by high-throughput Illumina sequencing.
Introduction
Transcriptional regulation
In metazoans, in order to respond to environmental stress, differentiate properly, and progress normally through the cell cycle, a eukaryotic cell needs a specific and coordinated gene expression program, which involves the highly regulated transcription of thousands of genes. This gene regulation is in large part controlled, in a tissue-specific manner, by the binding of transcription factors to noncoding genomic regions referred to as cis-regulatory modules (CRMs), which activate or repress gene expression by modulating chromatin structure. CRMs activating gene expression are often referred to as transcriptional enhancers, whereas those repressing gene expression are referred to as transcriptional silencers.
Enhancer detection in Drosophila melanogaster
Despite Drosophila melanogaster being a powerful model organism for biology and the study of transcriptional enhancers, the tissue-specific activities of fewer than 5% of its estimated 50,000 transcriptional enhancers have been discovered. Over the past decade, the main method for detecting tissue- or cell-type-specific activities of enhancers in Drosophila melanogaster was to test candidate enhancers by traditional reporter assays, which are low-throughput and costly. Over the past few years, even though enhancer discovery has improved and other parallel reporter assays have been developed, none so far has allowed the direct identification of enhancer activity in a genomic context in cell types of interest in a whole embryo.
Methodology
Each candidate CRM (cCRM) is cloned upstream of a reporter gene. Compared to traditional reporter assays, the main innovation is the use of fluorescence activated cell sorting (FACS) of dissociated cells, instead of microscopy, to screen for tissue-specific enhancers. This approach utilizes a two-marker system: in each embryo, one marker (here, the rat CD2 cell surface protein) is used to label cells of a specific tissue for being sorted by FACS, and the other marker (here, green fluorescent protein GFP) is used as a reporter of CRM activity.
Cells are sorted according to their tissue type and then by GFP fluorescence, and the cCRMs are recovered by PCR from double-positive sorted cells, and from total input cells. High-throughput sequencing of both populations then allows measuring the relative abundance of each cCRM in input and sorted populations; one can then assess the enrichment or depletion of each cCRM in double-positive cells versus input as a measure of activity in the CD2-positive cell type being tested.
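The enrichment step described above is, at its core, a comparison of normalized read counts between the sorted and input populations. The sketch below shows one plausible way to compute a log2 enrichment score per candidate CRM; the function, cCRM names, and counts are invented for illustration and are not the statistical analysis used in the published study.

```python
import math

def enrichment_scores(sorted_counts, input_counts, pseudocount=1.0):
    """Log2 enrichment of each candidate CRM in sorted (GFP+/CD2+) cells versus input.

    Counts are normalized to library size; the pseudocount avoids division by zero.
    Positive scores indicate enrichment (putative activity), negative scores depletion.
    """
    sorted_total = sum(sorted_counts.values())
    input_total = sum(input_counts.values())
    scores = {}
    for crm, input_n in input_counts.items():
        s = (sorted_counts.get(crm, 0) + pseudocount) / (sorted_total + pseudocount)
        i = (input_n + pseudocount) / (input_total + pseudocount)
        scores[crm] = math.log2(s / i)
    return scores

# Hypothetical read counts for three candidate CRMs.
input_counts = {"cCRM_A": 900, "cCRM_B": 1100, "cCRM_C": 1000}
sorted_counts = {"cCRM_A": 2400, "cCRM_B": 150, "cCRM_C": 950}
for crm, score in sorted(enrichment_scores(sorted_counts, input_counts).items()):
    print(crm, round(score, 2))
```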
Significant results
In the initial report on this method, a library of ~500 cCRMs was amplified by PCR from genomic DNA, with candidates drawn from a variety of genomic data sources (e.g., TF-bound regions, coactivator-bound regions, DNase I hypersensitive sites, and predictions from the Bulyk lab's PhylCRM algorithm), and then screened for activity in embryonic mesoderm and in specific mesodermal cell types. The results were validated by traditional reporter gene assay in Drosophila melanogaster embryos for 68 cCRMs tested by eFS. The specificity of eFS was excellent among significantly enriched cCRMs, while sensitivity was good for enhancers driving GFP expression in the majority of CD2-positive cells. It was found that the known enhancer-associated chromatin marks H3K27ac, H3K4me1, and Pol II are significantly enriched among the enhancers found to be active in mesoderm.
Advantages and future applications
Advantages of eFS
Highly parallel identification of active, tissue-specific transcriptional enhancers in whole embryos
Candidate enhancers activity assayed in a genomic context
High specificity of detected enhancers
Future applications
The eFS assay could be used to analyze other cell or tissue types. By assessing enrichment in GFP-expressing CD2-negative as well as CD2-positive cells, and by crossing a common pool of reporter transformant male flies to females expressing CD2 in different cell types, it is possible to assay specificity as well as activity. Accelerating the annotation of the regulatory genome in Drosophila should in principle generate the kind of large-scale regulatory interaction data that would allow exploring the network properties of transcriptional regulation.
References
Gene expression | Enhancer-FACS-seq | [
"Chemistry",
"Biology"
] | 1,038 | [
"Gene expression",
"Molecular genetics",
"Cellular processes",
"Molecular biology",
"Biochemistry"
] |
44,734,124 | https://en.wikipedia.org/wiki/Replacement%20product | In graph theory, the replacement product of two graphs is a graph product that can be used to reduce the degree of a graph while maintaining its connectivity.
Suppose G is a d-regular graph and H is an e-regular graph with vertex set {1, ..., d}. Let R denote the replacement product of G and H. The vertex set of R is the Cartesian product V(G) × V(H). For each vertex u in V(G) and for each edge (i, j) in E(H), the vertex (u, i) is adjacent to (u, j) in R. Furthermore, for each edge (u, v) in E(G), if v is the ith neighbor of u and u is the jth neighbor of v, the vertex (u, i) is adjacent to (v, j) in R.
If H is an e-regular graph, then R is an (e + 1)-regular graph.
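The definition translates directly into an algorithm: place a copy of H at every vertex of G, then add one edge per edge of G according to the chosen neighbor ordering (rotation map). The Python sketch below illustrates this on a small example; the particular graphs and neighbor orderings are arbitrary illustrative choices.

```python
from itertools import combinations

def replacement_product(G_adj, H_edges):
    """Replacement product of a d-regular graph G and a graph H on vertices 0..d-1.

    G_adj: dict mapping each vertex u of G to an ordered list of its d neighbors
           (the ordering defines the "i-th neighbor" rotation map).
    H_edges: list of edges (i, j) on the vertex set {0, ..., d-1}.
    Returns the edge set of the replacement product on vertices (u, i).
    """
    edges = set()
    # A copy of H placed at every vertex of G ("cloud" edges).
    for u in G_adj:
        for (i, j) in H_edges:
            edges.add(frozenset({(u, i), (u, j)}))
    # One matching edge per edge of G, connecting the two clouds.
    for u, nbrs in G_adj.items():
        for i, v in enumerate(nbrs):
            j = G_adj[v].index(u)          # u is the j-th neighbor of v
            edges.add(frozenset({(u, i), (v, j)}))
    return edges

# Toy example: G = complete graph K4 (3-regular), H = triangle on {0, 1, 2} (2-regular).
G_adj = {u: [v for v in range(4) if v != u] for u in range(4)}
H_edges = list(combinations(range(3), 2))
R = replacement_product(G_adj, H_edges)
degrees = {}
for e in R:
    for x in e:
        degrees[x] = degrees.get(x, 0) + 1
print(len(R), set(degrees.values()))
```

For this toy example the output confirms 18 edges and that every vertex of the product has degree e + 1 = 3, consistent with the regularity statement above.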
References
External links
Graph products | Replacement product | [
"Mathematics"
] | 128 | [
"Graph theory stubs",
"Mathematical relations",
"Graph theory"
] |
44,736,812 | https://en.wikipedia.org/wiki/Guard%20theory | Guard theory is a branch of immunology which concerns the innate sensing of stereotypical consequences of a virulence factor or pathogen. This is in contrast to the classical understanding of recognition by the innate immune system, which involves recognition of distinct microbial structures- pathogen-associated molecular patterns (PAMPs)- with pattern recognition receptors (PRRs). Some of these stereotypical consequences of virulence factors and pathogens may include altered endosomal trafficking and changes in the cytoskeleton. These recognition mechanisms would work to complement classical pattern recognition mechanisms.
Mechanism
In plants
In plants, guard theory is also known as indirect recognition. This is because rather than direct recognition of a virulence factor or pathogen, there is instead recognition of the result of a process mediated by a virulence factor or pathogen. In these cases, the virulence factor appears to target an accessory protein that is either a target or a structural mimic of the target of that virulence factor, allowing plant defences to respond to a specific strategy of pathogenesis rather than to structures that may evolve and change over time at a faster rate than the plant can adapt to. The interaction between pathogen and accessory protein results in some modification of the accessory protein, which allows for recognition by plant NBS-LRR proteins, which monitor for infection. This model is best illustrated by the RIN4 protein in A. thaliana. RIN4 forms a complex with the NB-LRR proteins RPM1 and RPS2. The protease effector AvrRpt2 is able to degrade RIN4, causing de-repression of RPS2. On the other hand, AvrB- or AvrRPM1-mediated phosphorylation of RIN4 results in activation of RPM1. In short, this example elucidates how one NBS-LRR protein is able to recognize the effects of more than one virulence factor or effector.
Guard defences in humans and relationship with allergies
Little is known concerning guard receptors in humans. One example currently under speculation involves recognition of cysteine proteases secreted by helminths during infection. It has been speculated that some allergies develop as a result of structural similarities between the allergen and high-activity cysteine proteases secreted by helminths during their infectious cycle. One proposed mechanism by which this may take place is that proteases secreted by the helminths cleave proteins which act as detectors, and these detectors in turn activate sensors to alert the immune system.
References
Microbiology
Plant pathogens and diseases
Immunology | Guard theory | [
"Chemistry",
"Biology"
] | 535 | [
"Plant pathogens and diseases",
"Plants",
"Microbiology",
"Immunology",
"Microscopy"
] |
44,737,697 | https://en.wikipedia.org/wiki/WaferCatalyst | WaferCatalyst is a Multi-Project Wafer (MPW) consolidation service by King Abdulaziz City for Science and Technology (KACST), Saudi Arabia. WaferCatalyst is a concept to silicon service and provides a number of tools for community building in the field of integrated circuit (IC) design. These include Multi-project wafer service fabrication, multi-layer mask (MLM), design support, consultancy services and fabrication support.
History
The Microsystems Infrastructure Development Initiative (MIDI) was launched by the Micro-Sensors Division (MSD) under the National Center of Electronics, Communications and Photonics in early 2012 and was chartered to create and develop an integrated circuit ecosystem in the Kingdom of Saudi Arabia. The service developed as a result of this initiative is called 'WaferCatalyst', which has been chartered to serve Saudi Arabia, the greater Middle East and North Africa region, as well as the global community in semiconductor design. It was formally launched on 28 April 2013 by H.H. Prince Dr. Turki bin Saud bin Mohammad Al Saud, Vice President for Research Institutes of the Kingdom of Saudi Arabia.
Programs
WaferCatalyst has a number of programs to enhance the ecosystem in the IC development area. These include support services through its Support and Design Portal, support for process development kits, support for universities in taping out ICs, project titles for undergraduate and postgraduate students, and a partners program.
WaferCatalyst has been working to develop close coordination and partnerships among the various institutions, research and commercial organizations to add value for themselves while also contributing to the development of a virtual cluster ecosystem.
Related links
Multi-project wafer service
External links
WaferCatalyst Website www.wafercat.com
WaferCatalyst Portal http://portal-wafercat.com
King Abdulaziz City for Science and Technology www.kacst.edu.sa
References
2013 establishments in Saudi Arabia
Science and technology in Saudi Arabia
Semiconductor device fabrication | WaferCatalyst | [
"Materials_science"
] | 407 | [
"Semiconductor device fabrication",
"Microtechnology"
] |
44,739,087 | https://en.wikipedia.org/wiki/Automotive%20head-up%20display | An automotive head-up display or automotive heads-up display — also known as an auto-HUD — is any transparent display that presents data in the automobile without requiring users to look away from their usual viewpoints. The origin of the name stems from a pilot being able to view information with the head positioned "up" and looking forward, instead of angled down looking at lower instruments. At this time, there are three different approaches to OEM HUDs in automobiles. The first is to treat the back of the windshield in such a way that an image projected onto it will reflect to the driver. The second is to have a small combiner that is separate from the windshield. Combiners can be retracted. The third is to laminate a transparent display in between layers of the windshield glass.
Timeline
1988: Nissan was the first manufacturer to offer a HUD in the JDM market with the 1988 Nissan Silvia S13.
1988: General Motors began using head-up displays. Their first HUD units were installed on Oldsmobile Cutlass Supreme Indy Pace Cars and replicas. Optional HUD units were subsequently offered on the Cutlass Supreme and Pontiac Grand Prix before being more widely available.
1989–1994: Nissan offered a head-up display in the Nissan 240SX.
1991: Toyota, for the Japanese market only, released a HUD system for the Toyota Crown Majesta.
1998: The first High Content Reconfigurable display appeared on the Chevrolet Corvette (C5). (1999 Model Year)
1999: Cadillac DTS with night-vision via Head-up Display. (Model Year 2000)
2003: Cadillac introduced a HUD system for the Cadillac XLR.
2003: BMW was involved in large developments for automotive HUD systems for the 2003 E60 5 Series.
2012: Pioneer Corporation introduced a navigation system that projects a HUD in place of the driver's visor that presents animations of conditions ahead, a form of augmented reality (AR).
These displays are becoming increasingly available in production cars, and usually offer speedometer, tachometer, and navigation system displays.
Night vision information is also displayed via HUD on certain General Motors, Honda, Toyota and Lexus vehicles. Other manufacturers such as Audi, BMW, Citroën, Nissan, Mazda, Kia, Mercedes and Volvo currently offer some form of HUD system.
Motorcycle helmet HUDs are also commercially available.
Add-on HUD systems also exist, projecting the display onto a glass combiner mounted on the windshield. These systems have been marketed to police agencies for use with in-vehicle computers.
Eyes-on-the-Road-Benefit
The Eyes-on-the-Road-Benefit (ERB), also known as the Head-Up-Display-Advantage, is the term given to the purported advantages provided to motorists when driving using a head-up display (HUD). This can also be referred to as a heads-up device or heads-up design, as compared to traditional dashboard designs, which are referred to as Head-Down-Design (HDD). The benefit of Eyes-on-the-Road systems stems from increased situational awareness and elimination of the need to look away from the road whilst driving, thereby shortening reaction time to external hazards, such as pedestrians. There is some evidence to suggest that the scope of the ERB is limited to low cognitive load situations in which the driving task is not particularly complex.
Aetiology
Research into the ERB primarily utilizes virtual reality driving simulators to mimic real-life driving scenarios while eliminating situational variability. In order to examine HUDs and HDDs, studies often compare hazard reaction time, situational awareness, and quality of driving (such as speed consistency) using both systems. The extent of the ERB across different demographics, particularly age and experience level, is of particular interest. The interaction between workload and the influence of the ERB is also frequently examined.
Exogenous saccadic gaze
Saccadic gaze is the perceptual mechanism through which the eye is inadvertently drawn to external stimuli without the individual's conscious action. An involuntary gaze is most easily drawn by movement or distinct changes in illumination in an individual's visual field. These external stimuli can be beneficial in situations such as the movement of a pedestrian about to walk out onto the road, in turn allowing the driver to take evasive action. Exogenous cues can also be irrelevant, and often dangerous, leading to distraction from goal behaviours, such as the flashing of a cellphone taking one's eyes off the road. By superimposing vital driving information onto the horizon in a driver's direct line of sight, HUDs allow important exogenous cues, like the movements of other vehicles, to draw the gaze of a driver whilst they monitor vital vehicle feedback such as speed or revolution count. It is theorized that this can facilitate faster reaction times to hazards and improve situational awareness. A collaborative project between Faurecia Groupe and the Indian Institute of Science developed an eye-gaze- and finger-controlled head-up display for cars that can also automatically estimate drivers' cognitive load and distraction.
Ideal visual field
The ideal visual field is the area in which stimuli are most accurately, rapidly, and efficiently processed by the eye. In humans, this field is thought to be within 20 degrees above or below the vertical meridian of an individual's gaze and 60 degrees either side of the horizontal meridian. If an object is beyond these boundaries, eye movement is required to bring the stimulus out of the periphery. By including feedback instruments in the primary field of vision, HUDs allow the horizon and all associated stimuli to stay in the primary field of vision, where the information may still be processed and acknowledged by a motorist.
Manifestation
Reaction time
Reaction time, and more specifically delayed reaction, is widely cited as a key contributor to vehicular accidents. Reaction time in relation to the ERB is defined as the time it takes for a motorist to react to an external hazard or stimuli and then carry out the appropriate reaction, or evasive maneuver such as braking when a vehicle in front stops. The feedback offered by a HUD is projected onto the windshield of a vehicle with the aim of integrating outside stimuli and the instrumental feedback; thus removing the need to remove a driver's eyes from the road. Studies of reaction time to hazards in HUD vs HDD designs have found that the average reaction times for HUD are faster. This trend appears to continue across demographics, including both categories of experience level and age.
Speed maintenance and driving quality
Speed maintenance is the extent to which a driver maintains a speed and adjusts their speed to suit traffic laws and environmental conditions. The use of HUDs appears to produce better speed maintenance in drivers under experimental conditions when compared to HDDs. It is theorized that this is because having the speedometer at the eye level of the vehicle operator allows for continuous monitoring of the vehicle's speed. HUD use also appears to increase general driving quality, including staying within road markings, and increased smoothness of driving and navigation abilities. Drivers’ capacity to focus on external cues, such as road texture, road demarcations and street signs is increased by using a seamless interface where focus on the road isn't interrupted to assess speed and other information.
Limitations
Work load
The influence of ERB on drivers is not universal. There is evidence that as the complexity of driving tasks increases, the benefits of using a HUD are decreased, and in some circumstances, they are no longer statistically significant. The ERB is diminished, for example, when individuals are driving cognitively demanding vehicles, such as industrial vehicles, or when they are asked to multitask while driving. One study has shown that when placed in a cognitively demanding condition, individuals shift their focus from the road alone to focus on other tasks such as shifting gears or talking to others. Subsequently, a driver's ability to process HUD feedback requires diversion of attention, much akin to that which occurs whilst using a HDD.
Placement
There are limitations on where a HUD can be placed or projected in a vehicle before it begins to diminish the ERB and becomes more of a distraction. HUDs can be constructed so that the instrumental feedback appears to be projected out into the horizon, rather than displayed directly on the windshield. In test situations, a projected HUD which appears near the nose of the vehicle is said to result in the most rapid response times and best situational awareness on the part of the driver, as well as facilitating better driving quality. For an in-glass laminated HUD, the display glass is integrated into the windshield, while the electronics are placed and hidden inside the vehicle body. The information is displayed directly on the windshield.
See also
References
External links
Virtual widescreen
Jaguar is making ghost cars real
Vehicle technology
Automotive technologies
Advanced driver assistance systems
Optical devices
Multimodal interaction
Mixed reality
British inventions
Augmented reality applications | Automotive head-up display | [
"Materials_science",
"Engineering"
] | 1,817 | [
"Glass engineering and science",
"Vehicle technology",
"Mechanical engineering by discipline",
"Optical devices"
] |
44,740,826 | https://en.wikipedia.org/wiki/Process%20performance%20qualification%20protocol | A process performance qualification (PPQ) protocol is a component of the process qualification stage of process validation. This step is vital in maintaining ongoing production quality by recording, and making available for review, the essential conditions, controls, testing, and expected manufacturing outcomes of a production process. The Food and Drug Administration recommends the following criteria be included in a PPQ protocol:
Manufacturing conditions: Operating parameters, equipment limits, and component inputs
What data should be recorded and analyzed
What tests should be performed to ensure quality at each production step
A sampling plan to outline sampling methods both during and between production batches
Analysis methodology that allows for scientific and risk-based decision making based on statistical data. Variability limits should be defined, and contingencies in the event of non-conforming data established (see the sketch after this list)
Approval of PPQ protocol from relevant departments
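As a purely illustrative sketch of the statistical side of such a protocol, the snippet below derives variability limits from baseline batch data and flags non-conforming results; the batch names, assay values, and the simple mean-plus-or-minus-three-standard-deviations rule are assumptions for demonstration, not FDA-prescribed methodology.

```python
import statistics

def control_limits(measurements, sigma_multiplier=3.0):
    """Return (lower, upper) variability limits as mean +/- k * standard deviation."""
    mean = statistics.fmean(measurements)
    stdev = statistics.stdev(measurements)
    return mean - sigma_multiplier * stdev, mean + sigma_multiplier * stdev

# Hypothetical assay values (e.g., % of label claim) from qualified PPQ batches.
baseline = [99.2, 100.1, 99.8, 100.4, 99.6, 100.0, 99.9, 100.3]
lower, upper = control_limits(baseline)

# Flag new batch results that fall outside the predefined limits.
new_batches = {"Batch-101": 99.7, "Batch-102": 101.9}
for batch, value in new_batches.items():
    status = "conforming" if lower <= value <= upper else "non-conforming"
    print(f"{batch}: {value} ({status}, limits {lower:.2f}-{upper:.2f})")
```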
Deviations from the standard operating procedures should be made within the framework of the protocol and with the approval of the relevant quality control departments. The FDA further recommends that documentation of the protocol be published internally. The report should include:
A summation of relevant data and analysis from the protocol
An explanation of unexpected data and any other results not mandated by the protocol and its effects on production quality
Identify correlating effects and suggest changes to existing processes
Conclude whether the process performance is adequately qualified to meet performance standards. Should production standards not be met, appropriate changes should be outlined
References
External links
casss.org
Process Validation Guidance
Formal methods
Enterprise modelling
Business process management
Validity (statistics)
Drug manufacturing | Process performance qualification protocol | [
"Engineering"
] | 287 | [
"Software engineering",
"Systems engineering",
"Enterprise modelling",
"Formal methods"
] |
44,742,521 | https://en.wikipedia.org/wiki/CACNA2D3 | Calcium channel, voltage-dependent, alpha 2/delta subunit 3 is a protein that in humans is encoded by the CACNA2D3 gene on chromosome 3 (locus 3p21.1).
Function
This gene encodes a member of the alpha-2/delta subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane depolarization and consist of a complex of alpha-1, alpha-2/delta, beta, and gamma subunits in a 1:1:1:1 ratio. Various versions of each of these subunits exist, either expressed from similar genes or the result of alternative splicing. Research on a highly similar protein in rabbit suggests that the encoded protein is cleaved into alpha-2 and delta subunits. Alternate transcriptional splice variants of this gene have been observed but have not been thoroughly characterized.
Clinical significance
A number of studies have reported an association between methylation of the CACNA2D3 gene and cancer.
Breast cancer
Methylation-dependent transcriptional silencing of CACNA2D3 gene may contribute to the metastatic phenotype of breast cancer. Analysis of methylation in the CACNA2D3 CpG island may have potential as a biomarker for risk of development of metastatic disease.
Gastric cancer
The loss of CACNA2D3 gene expression through aberrant promoter hypermethylation may contribute to gastric carcinogenesis, and CACNA2D3 gene methylation is a useful prognostic marker for patients with advanced gastric cancer. Physical exercise was correlated with a lower methylation frequency of CACNA2D3.
References
External links
Further reading
Ion channels | CACNA2D3 | [
"Chemistry"
] | 361 | [
"Neurochemistry",
"Ion channels"
] |
21,893,074 | https://en.wikipedia.org/wiki/Fogbank | Fogbank (stylized as FOGBANK) is a code name given to a secret material used in the W76, W78 and W88 nuclear warheads that are part of the United States nuclear arsenal. The process to create Fogbank was lost by 2000, when it was needed for the refurbishment of old warheads. Fogbank was then reverse engineered by the National Nuclear Security Administration (NNSA) over five years and at the cost of tens of millions of dollars.
Fogbank's precise nature is classified; in the words of former Oak Ridge National Laboratory general manager Dennis Ruddy, "The material is classified. Its composition is classified. Its use in the weapon is classified, and the process itself is classified." Department of Energy Nuclear Explosive Safety documents simply describe it as a material "used in nuclear weapons and nuclear explosives" along with lithium hydride (LiH) and lithium deuteride (LiD), beryllium (Be), uranium hydride (UH3), and plutonium hydride.
However, NNSA Administrator Tom D'Agostino disclosed the role of Fogbank in the weapon: "There's another material in the—it's called interstage material, also known as Fogbank", and arms experts believe that Fogbank is an aerogel material which acts as an interstage material in a nuclear warhead; i.e., a material designed to become a superheated plasma following the detonation of the weapon's fission stage, the plasma then triggering the fusion-stage detonation.
History
It has been revealed by unclassified official sources that Fogbank was originally manufactured in Facility 9404-11 of the Y-12 National Security Complex in Oak Ridge, Tennessee, from 1975 until 1989, when the final batch of W76 warheads was completed. After that, the facility was deactivated and finally slated for decommissioning by 1993. Only a small pilot plant was left, which had been used to produce small batches of Fogbank for testing purposes.
In 1996, the US government decided to replace, refurbish, or decommission large numbers of its nuclear weapons. Accordingly, the Department of Energy established a refurbishment program to extend the service lives of older nuclear weapons. In 2000, the NNSA specified a life-extension program for W76 warheads that would enable them to remain in service until at least 2040.
It was soon realized that the Fogbank material was a potential source of problems for the program, as few records of its manufacturing process had been retained when it was originally manufactured in the 1980s, and nearly all staff members who had expertise in its production had either retired or left the agency. The NNSA briefly investigated sourcing a substitute for Fogbank but eventually decided that since Fogbank had been produced previously, they would be able to repeat it. Additionally, "Los Alamos computer simulations at that time were not sophisticated enough to determine conclusively that an alternate material would function as effectively as Fogbank," according to a Los Alamos publication.
With Facility 9404-11 long since decommissioned, a new production facility was required. Delays arose during its construction. Engineers repeatedly encountered failure in their efforts to produce Fogbank. Manufacture involves the moderately toxic, highly volatile solvent acetonitrile, which presents a hazard for workers (causing three evacuations in March 2006 alone). As multiple deadlines expired, and the schedule was pushed back repeatedly, the NNSA eventually invested $23 million to find an alternative to Fogbank.
In March 2007, engineers devised a manufacturing process for Fogbank. The material turned out to have problems when tested, and in September 2007 the Fogbank project was upgraded to "Code Blue" status by the NNSA, making it a major priority. In 2008, following the expenditure of a further $69 million, the NNSA managed to manufacture Fogbank, and 7 months later the first refurbished warhead was provided to the U.S. Navy, nearly a decade after the commencement of the refurbishment program. In May 2009 a U.S. Navy spokesman said that they had not received any refurbished weapons. The Energy Department stated that the current plan was to begin shipping refurbished weapons in late 2009, two years behind schedule.
The experience of reverse engineering Fogbank produced some improvements in scientific knowledge of the process. The new production scientists noticed that certain problems in production resembled those noted by the original team. These problems were traced to a particular impurity in the final product that turned out to be required for it to meet quality standards. A root cause investigation showed that input materials were subject to cleaning processes that had not existed during the original production run. This cleaning removed a substance that generated the required impurity. With the implicit role of this substance finally understood, the production scientists could control output quality better than during the original run.
The W76 life-extension project was completed in December 2018, when 800 W76s were upgraded to the W76-1 design. It is unclear whether the new W76-2 uses Fogbank.
References
Nuclear weapons of the United States
Foams
Plastics
Classified information in the United States
Nuclear weapon design
Aerogels | Fogbank | [
"Physics",
"Chemistry"
] | 1,069 | [
"Foams",
"Unsolved problems in physics",
"Aerogels",
"Amorphous solids",
"Plastics"
] |
21,898,118 | https://en.wikipedia.org/wiki/Underwater%20acoustic%20positioning%20system | An underwater acoustic positioning system is a system for the tracking and navigation of underwater vehicles or divers by means of acoustic distance and/or direction measurements, and subsequent position triangulation. Underwater acoustic positioning systems are commonly used in a wide variety of underwater work, including oil and gas exploration, ocean sciences, salvage operations, marine archaeology, law enforcement and military activities.
Method of operation
Figure 1 describes the general method of operation of an acoustic positioning system; the example shown is a long baseline (LBL) positioning system for a remotely operated vehicle (ROV).
Baseline station deployment and survey
Acoustic positioning systems measure positions relative to a framework of baseline stations, which must be deployed prior to operations. In the case of a long-baseline (LBL) system, a set of three or more baseline transponders are deployed on the sea floor. The location of the baseline transponders either relative to each other or in global coordinates must then be measured precisely. Some systems assist this task with an automated acoustic self-survey, and in other cases GPS is used to establish the position of each baseline transponder as it is deployed or after deployment.
Tracking or navigation operations
Following the baseline deployment and survey, the acoustic positioning system is ready for operations. In the long baseline example (see figure 1), an interrogator (A) is mounted on the ROV that is to be tracked. The interrogator transmits an acoustic signal that is received by the baseline transponders (B, C, D, E). The reply of the baseline transponders is received again at the ROV. The signal time-of-flight or the corresponding distances A-B, A-C, A-D and A-E are transmitted via the ROV umbilical (F) to the surface, where the ROV position is computed and displayed on a tracking screen. The acoustic distance measurements may be augmented by depth sensor data to obtain better positioning accuracy in the three-dimensional underwater space.
Acoustic positioning systems can yield an accuracy of a few centimeters to tens of meters and can be used over operating distances from tens of meters to tens of kilometers. Performance depends strongly on the type and model of the positioning system, its configuration for a particular job, and the characteristics of the underwater acoustic environment at the work site.
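At the heart of the LBL workflow described above is a distance-based position fix. The sketch below shows a minimal least-squares solution of the range equations; the transponder layout, ranges, and the assumption of a uniform sound speed (and therefore exact ranges) are illustrative simplifications, since real systems also correct for sound-speed profiles and fuse depth sensor data.

```python
import numpy as np

def lbl_position(beacons, ranges):
    """Least-squares position fix from ranges to fixed baseline transponders.

    Subtracting the range equation of the first transponder from the others
    removes the quadratic terms, leaving a linear system in the unknown position.
    beacons: (N, 3) transponder coordinates; ranges: (N,) measured distances (N >= 4).
    """
    beacons = np.asarray(beacons, dtype=float)
    ranges = np.asarray(ranges, dtype=float)
    p0, r0 = beacons[0], ranges[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (r0**2 - ranges[1:]**2
         + np.sum(beacons[1:]**2, axis=1) - np.sum(p0**2))
    position, *_ = np.linalg.lstsq(A, b, rcond=None)
    return position

# Hypothetical sea-floor transponder layout (x, y, depth in metres) and exact ranges.
beacons = [(0, 0, 1000), (500, 0, 980), (0, 500, 1020), (500, 500, 990)]
true_pos = np.array([220.0, 310.0, 900.0])
ranges = [np.linalg.norm(true_pos - np.array(b)) for b in beacons]
print(lbl_position(beacons, ranges))   # recovers approximately [220. 310. 900.]
```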
Classes
Underwater acoustic positioning systems are generally categorized into three broad types or classes:
Long-baseline (LBL) systems, as in figure 1 above, use a sea-floor baseline transponder network. The transponders are typically mounted in the corners of the operations site. LBL systems yield very high accuracy, generally better than 1 m and sometimes as good as 0.01 m, along with very robust position fixes. This is because the transponders are installed in the reference frame of the work site itself (i.e. on the sea floor), the wide transponder spacing results in an ideal geometry for position computations, and the LBL system operates without an acoustic path to the (potentially distant) sea surface.
Ultra-short-baseline (USBL) systems and the related super-short-baseline (SSBL) systems rely on a small (e.g. 230 mm across), tightly integrated transducer array that is typically mounted on the bottom end of a strong, rigid transducer pole which is installed either on the side or, in some cases, on the bottom of a surface vessel. Unlike LBL and SBL systems, which determine position by measuring multiple distances, the USBL transducer array is used to measure the target distance from the transducer pole by using signal run time, and the target direction by measuring the phase shift of the reply signal as seen by the individual elements of the transducer array. The combination of distance and direction fixes the position of the tracked target relative to the surface vessel. Additional sensors including GPS, a gyro or electronic compass and a vertical reference unit are then used to compensate for the changing position and orientation (pitch, roll, bearing) of the surface vessel and its transducer pole. (A simplified sketch of this range-and-direction computation is given after the class descriptions below.) USBL systems offer the advantage of not requiring a sea floor transponder array. The disadvantage is that positioning accuracy and robustness are not as good as for LBL systems. The reason is that the fixed angle resolved by a USBL system translates to a larger position error at greater distance. Also, the multiple sensors needed for the USBL transducer pole position and orientation compensation each introduce additional errors. Finally, the non-uniformity of the underwater acoustic environment causes signal refractions and reflections that have a greater impact on USBL positioning than is the case for the LBL geometry.
Short-baseline (SBL) systems use a baseline consisting of three or more individual sonar transducers that are connected by wire to a central control box. Accuracy depends on transducer spacing and mounting method. When a wider spacing is employed, as when working from a large working barge or when operating from a dock or other fixed platform, the performance can be similar to that of LBL systems. When operating from a small boat where transducer spacing is tight, accuracy is reduced. Like USBL systems, SBL systems are frequently mounted on boats and ships, but specialized modes of deployment are common too. For example, the Woods Hole Oceanographic Institution uses an SBL system to position the Jason deep-ocean ROV relative to its associated MEDEA depressor weight with a reported accuracy of 9 cm.
GPS intelligent buoys (GIB) systems are inverted LBL devices where the transponders are replaced by floating buoys, self-positioned by GPS. The tracked position is calculated in real time at the surface from the times of arrival (TOAs) of the acoustic signals sent by the underwater device and acquired by the buoys. Such a configuration allows fast, calibration-free deployment with an accuracy similar to LBL systems. Unlike LBL, SBL or USBL systems, GIB systems use one-way acoustic signals from the emitter to the buoys, making them less sensitive to surface or wall reflections. GIB systems are used to track AUVs, torpedoes, or divers, may be used to localize airplane black boxes, and may be used to determine the impact coordinates of inert or live weapons for weapon testing and training purposes (references: Sharm-El-Sheih, 2004; Sotchi, 2006; Kayers, 2005; Kayser, 2006; Cardoza, 2006 and others).
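As referenced in the USBL description above, a single USBL fix combines a slant range with a direction derived from inter-element phase differences. The sketch below is a simplified illustration of that computation; the signal frequency, element spacing, phase values, and the heading-only attitude compensation are invented assumptions (real systems also compensate pitch and roll using a vertical reference unit, as noted above).

```python
import numpy as np

def usbl_fix(slant_range, phase_x, phase_y, wavelength, spacing):
    """Relative target position from one USBL ping.

    phase_x / phase_y: measured phase differences (radians) between element pairs
    aligned with the array's x and y axes; spacing is the element separation.
    Direction cosines follow from delta_phi = 2*pi*spacing*u/wavelength.
    """
    ux = phase_x * wavelength / (2.0 * np.pi * spacing)   # direction cosine along x
    uy = phase_y * wavelength / (2.0 * np.pi * spacing)   # direction cosine along y
    uz = np.sqrt(max(0.0, 1.0 - ux**2 - uy**2))           # downward component
    return slant_range * np.array([ux, uy, uz])

# Hypothetical ping: 12 kHz signal in ~1500 m/s water, 0.05 m element spacing.
wavelength = 1500.0 / 12000.0
rel = usbl_fix(slant_range=850.0, phase_x=0.9, phase_y=-0.4,
               wavelength=wavelength, spacing=0.05)

# Rotate by the vessel heading to express the fix in a north-east-down frame.
heading = np.deg2rad(30.0)
R = np.array([[np.cos(heading), -np.sin(heading), 0.0],
              [np.sin(heading),  np.cos(heading), 0.0],
              [0.0,              0.0,             1.0]])
print(R @ rel)
```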
History and examples of use
An early use of underwater acoustic positioning systems, credited with initiating the modern day development of these systems, involved the loss of the American nuclear submarine USS Thresher on 10 April 1963 in a water depth of 2560m. An acoustic short baseline (SBL) positioning system was installed on the oceanographic vessel USNS Mizar. This system was used to guide the bathyscaphe Trieste 1 to the wreck site. Yet, the state of the technology was still so poor that out of ten search dives by Trieste 1, visual contact was only made once with the wreckage. Acoustic positioning was again used in 1966, to aid in the search and subsequent recovery of a nuclear bomb lost during the crash of a B-52 bomber at sea off the coast of Spain.
In the 1970s, oil and gas exploration in deeper waters required improved underwater positioning accuracy to place drill strings into the exact position referenced earlier thorough seismic instrumentation and to perform other underwater construction tasks.
But, the technology also started to be used in other applications. In 1998, salvager Paul Tidwell and his company Cape Verde Explorations led an expedition to the wreck site of the World War 2 Japanese cargo submarine I-52 in the mid-Atlantic. Resting at a depth of 5240 meters, it had been located and then identified using side scan sonar and an underwater tow sled in 1995. War-time records indicated the I-52 was bound for Germany, with a cargo including 146 gold bars in 49 metal boxes. This time, Mr. Tidwell's company had hired the Russian oceanographic vessel, the Akademik Mstislav Keldysh with its two manned deep-ocean submersibles MIR-1 and MIR-2 (figure 3). In order to facilitate precise navigation across the debris field and assure a thorough search, MIR-1 deployed a long baseline transponder network on the first dive. Over a series of seven dives by each submersible, the debris field was progressively searched. The LBL positioning record indicated the broadening search coverage after each dive, allowing the team to concentrate on yet unsearched areas during the following dive. No gold was found, but the positioning system had documented the extent of the search.
In recent years, several trends in underwater acoustic positioning have emerged. One is the introduction of compound systems such as the combination of LBL and USBL in a so-called LUSBL configuration to enhance performance. These systems are generally used in the offshore oil & gas sector and other high-end applications. Another trend is the introduction of compact, task-optimized systems for a variety of specialized purposes. For example, the California Department of Fish and Game commissioned a system (figure 4), which continually measures the opening area and geometry of a fish sampling net during a trawl. That information helps the department improve the accuracy of their fish stock assessments in the Sacramento River Delta.
Water-proof smart devices like Apple Watch Ultra and Garmin Descent have been introduced to function as dive computers. These devices have a depth gauge sensor, provide a dive profile, and issue safety alerts for fast ascents and mandatory safety stops using the depth data. In 2023, University of Washington researchers demonstrated a fourth class of 3D underwater positioning for these smart devices that does not require infrastructure support like buoys. Instead, they use distributed localization techniques, computing the pairwise distances between a network of diver devices to determine the shape of the resulting network topology. Combining this with depth sensor data from these devices, the lead diver can then compute the relative 3D positions of all the other diver devices.
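One generic way to turn pairwise ranges plus depth readings into relative 3D positions is classical multidimensional scaling (MDS). The sketch below is a simplified illustration under the assumption of noise-free distances, not the algorithm published by the researchers; positions are recovered only up to horizontal rotation, reflection, and translation.

```python
import numpy as np

def relative_positions_from_distances(D, depths):
    """Recover relative 3D positions from pairwise distances plus depth readings.

    Classical MDS on the horizontal part of the squared-distance matrix gives
    x-y coordinates up to rotation/reflection; the depth sensor fixes the z axis.
    """
    D = np.asarray(D, dtype=float)
    depths = np.asarray(depths, dtype=float)
    n = D.shape[0]
    # Remove the vertical contribution so MDS only has to solve the 2D problem.
    D2_horiz = D**2 - (depths[:, None] - depths[None, :])**2
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ D2_horiz @ J                  # double-centred Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    top = np.argsort(eigvals)[::-1][:2]          # two largest eigenvalues -> x, y
    xy = eigvecs[:, top] * np.sqrt(np.maximum(eigvals[top], 0.0))
    return np.column_stack([xy, depths])

# Hypothetical dive team: true positions are used only to simulate the measured ranges.
true = np.array([[0.0, 0.0, 5.0], [20.0, 0.0, 8.0], [10.0, 15.0, 12.0], [25.0, 18.0, 6.0]])
D = np.linalg.norm(true[:, None, :] - true[None, :, :], axis=-1)
print(relative_positions_from_distances(D, true[:, 2]))
```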
References
External links
Safran Electronics & Defense Naval solutions
IXBLUE Subsea navigation and positioning
Underwater GPS at ACSA Underwater GPS
Navigation
Surveying
Oceanography
Shipbuilding
Watercraft components
Geopositioning | Underwater acoustic positioning system | [
"Physics",
"Engineering",
"Environmental_science"
] | 2,105 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Oceanography",
"Shipbuilding",
"Surveying",
"Civil engineering",
"Marine engineering"
] |
21,898,316 | https://en.wikipedia.org/wiki/Meteorological%20intelligence | Meteorological intelligence is information measured, gathered, compiled, exploited, analyzed and disseminated by meteorologists, climatologists and hydrologists to characterize the current state and/or predict the future state of the atmosphere at a given location and time. Meteorological intelligence is a subset of environmental intelligence and is synonymous with the term weather intelligence.
The earliest known use of the term "meteorological intelligence" in a written document dates to 1854 on pg. 168 of the Eighth Annual Report of the Board of Regents of the Smithsonian Institution. This report discusses the Smithsonian Institution's initiative to transmit meteorological intelligence via telegraph lines. An early reference to "meteorological intelligence" in England dates an 1866 issue of The Edinburgh Review which was a prominent Scottish journal during the 19th century (Reeve 1866, pg. 75).
Another documented, early use of the term dates to 1874 in a historical compilation entitled, "The American Historical Record" (Lossing 1874, pg. 125). In this book, Lossing uses the term to refer to weather observations transmitted over telegraph lines for the purpose of studying the nature of storms with the ultimate goal of enhancing public safety through the issuance of storm warnings. This mission was carried out by the Army Signal Service starting in the 1870s who was responsible for communication (via telegraph) of technical intelligence for the army as well as "meteorological intelligence" for the general welfare of the country (Ingersoll 1879, pg. 156).
From the viewpoint of the intelligence community, the term meteorological intelligence is more limited in its use referring to the use of clandestine or technical means to learn about environmental conditions over enemy territory (Shulsky and Schmitt 2002) as in the North Atlantic weather war. In the military intelligence context, weather information is often referred to as meteorological or environmental intelligence (Hinsley 1990, pg. 420; Platt 1957, pg. 14; U.S. Congress, pg. 164).
With regard to private sector meteorology, the term meteorological intelligence is a broad term of art that is primarily associated with observed and forecast weather information provided to decision makers in one of a number of weather sensitive business areas including: Energy, forestry, agriculture, telecommunications, transportation, aviation, entertainment, retail and construction (CMOS 2001, pg. 23) . It is considered a key aspect of weather risk management for the legal and insurance industries.
Notes
See also
Military intelligence
Business intelligence
Intelligence (information gathering)
Weather risk management
References
Canadian Meteorological and Oceanographic Society (CMOS), 2001: "Baseline Status of Private Meteorological Services Sector in Canada", prepared by Global Change Strategies International
Dear, I.C.B. and Foot, M.R.D.: "meteorological intelligence." The Oxford Companion to World War II. Oxford University Press. 2001. Encyclopedia.com. (March 10, 2009).
Hinsley, Francis F., 1990: "British Intelligence in the Second World War: Its Influence on Strategy and Operations". Cambridge University Press
Ingersoll, Lurton D., 1879: "A History of the War Department of the United States", published by Francis D. Alohun, 613 pages
Lossing, Benson J., ed., 1874: "The American Historical Record”, Vol. III
Platt, Washington, 1957: "Strategic Intelligence Production: Basic Principles", published by P.A. Praeger, 302 pages
Reeve, Henry, ed., 1866: "The Edinburgh Review", Vol CXXIV, published by Archibald Constable, London, 600 pages
Shulsky, Abram N. and Schmitt, Gary J., 2002: "Silent Warfare: Understanding the World of Intelligence", 3rd ed., 285 pages
Smithsonian Institution, 1854: "Eighth Annual Report of the Board of Regents of the Smithsonian Institution", published by The Institution, U.S. Gov't Print Off., 310 pages
U.S. Congress, Office of Technology Assessment, New Technology for NATO: Implementing Follow-On Force Attack, OTA-ISC-309 (Washington, D.C.: US Government Printing Office, June 1987)
Yokoyama, K., 1993: Studies on the utilization of the mesh meteorological intelligence, Bulletin of the Yamagata Prefectural Agricultural Experiment Station (Japan), 31-37
External links
http://www.cmos.ca/Privatesector/metstrategyappB.pdf
http://www.cdef.terre.defense.gouv.fr/publications/doctrine/doctrine03/US/doctrine/art8.pdf
http://www.scotsatwar.co.uk/AZ/dday.htm
https://books.google.com/books?id=gIzUGFtsExAC&dq=meteorological+intelligence&ei=8my2Sa_jKIHqkwTd3pn9Bg
http://www.encyclopedia.com/doc/1O129-meteorologicalintelligenc.html
Branches of meteorology
Intelligence gathering disciplines
Business intelligence
Weather hazards
Warning systems | Meteorological intelligence | [
"Physics",
"Technology",
"Engineering"
] | 1,038 | [
"Physical phenomena",
"Weather hazards",
"Weather",
"Safety engineering",
"Measuring instruments",
"Warning systems"
] |
21,902,464 | https://en.wikipedia.org/wiki/Algenol | Algenol is an industrial biotechnology company that is commercializing patented algae technology for production of ethanol and other fuels. The company was founded in 2006 and is headquartered in Fort Myers, Florida. The company uses proprietary technologies to produce various products, including personal care products, food supplements, and industrial products, from a patented strain of cyanobacteria and a proprietary photobioreactor system.
History
Algenol was founded in 2006 by Paul Woods, Craig Smith, and Ed Legere. In 2008 the company announced it would begin commercial production of ethanol by 2009 in the Sonoran Desert in northwest Mexico. However, the company was still not in commercial production by 2015. In October 2015, founder Paul Woods resigned and the company announced that it was laying off 25% of its staff and shifting focus to "water treatment and carbon capture", with a possible return to fuels in the future.
In 2016, their name changed to Algenol Biotech LLC and the company added other algae-based sustainable products to its portfolio.
Locations
Algenol has a large facility in Southwest Florida, just north of Florida Gulf Coast University in Fort Myers which opened in October 2010. The aim of the facility is to produce commercially viable fuel from algae. The site features research labs including engineering facilities, advanced molecular biology, management, separations, and green chemistry advanced labs and an outdoor process development production unit on 40 acres. In October 2011, Algenol began construction on a pilot-scale Integrated Biorefinery, allowing the company to work with algae from a single strain in the lab all the way to commercial-scale production.
Algenol also has subsidiaries located in Berlin, Germany and Zug, Switzerland.
Research and projects
One of Algenol's primary research goals has been to produce four fuels—ethanol, gasoline, jet, and diesel fuel—at commercial scale from marine cyanobacteria using patented bioreactors. They received $22 million in funding from the U.S. Department of Energy from 2010 through 2013 for a project to prove the viability of algal-produced ethanol at commercial scales using carbon dioxide captured from industrial sources. From this project, they built a 2-acre system with over 6,000 photobioreactors and were able to operate 4,000 of these for over 500 days. While this project made significant progress towards large-scale algal biofuel production, the 2015 report from the Department of Energy notes that it is "unclear whether closed photobioreactors will ever be a viable commercial option".
Algenol licenses the DIRECT TO ETHANOL® technology. One of these licenses is with BioFields SAPI de CV in Mexico, which has access to over 42,000 acres of non-arable land in the Sonoran Desert in Mexico. Algenol has stated that they are discussing commercial "Direct to Ethanol" projects with several partners in the United States, South America, Israel, and Africa.
A 2017 report from Biofuelwatch has criticized Algenol as a case-study of many failed algae biofuel ventures.
Partnerships and funding
Algenol has a number of partners including the United States National Renewable Energy Laboratory, BioFields in Mexico, Reliance Industries Ltd. in Mumbai, India, and Membrane Technology and Research. Algenol also is partnered with Lee County, Florida, the U.S. Department of Energy, and multiple universities including Florida Gulf Coast University, Georgia Tech, and Humboldt University of Berlin.
In December 2009, Algenol received a $25 million United States Department of Energy grant to help build the Integrated Biorefinery Direct to Ethanol project in Lee County, Florida. Algenol also received a $10M grant from Lee County to employ people in Lee County and also build the Integrated Biorefinery Direct to Ethanol project. In 2016, the Office of Energy Efficiency and Renewable Energy (part of the U.S. Department of Energy) announced another $15 million grant to be split between Algenol and two other companies to continue research on commercial-scale biofuel production.
References
External links
Algenol company website
Algenol company website (archived version)
Marketwatch article with video of the Algenol process from March 2010
Algal fuel producers
Alcohol fuel producers
Algae biomass producers
Biofuel in the United States | Algenol | [
"Engineering",
"Biology"
] | 871 | [
"Synthetic biology",
"Algae biomass producers",
"Genetic engineering"
] |
21,902,856 | https://en.wikipedia.org/wiki/Air%20well%20%28condenser%29 | An air well or aerial well is a structure or device that collects water by promoting the condensation of moisture from air. Designs for air wells are many and varied, but the simplest designs are completely passive, require no external energy source and have few, if any, moving parts.
Three principal designs are used for air wells, designated as high mass, radiative, and active:
High-mass air wells: used in the early 20th century, but the approach failed.
Low-mass, radiative collectors: Developed in the late 20th century onwards, proved to be much more successful.
Active collectors: these collect water in the same way as a dehumidifier; although the designs work well, they require an energy source, making them uneconomical except in special circumstances. New designs seek to minimise the energy requirements of active condensers or make use of sustainable and renewable energy resources.
Background
All air well designs incorporate a substrate with a temperature sufficiently low that dew forms. Dew is a form of precipitation that occurs naturally when atmospheric water vapour condenses onto a substrate. It is distinct from fog, in that fog is made of droplets of water that condense around particles in the air. Condensation releases latent heat which must be dissipated in order for water collection to continue.
An air well requires moisture from the air. Everywhere on Earth, even in deserts, the surrounding atmosphere contains at least some water. According to Beysens and Milimouk: "The atmosphere contains of fresh water, composed of 98 percent water vapour and 2 percent condensed water (clouds): a figure comparable to the renewable liquid water resources of inhabited lands ." The quantity of water vapour contained within the air is commonly reported as a relative humidity; this depends on temperature, since warmer air can contain more water vapour than cooler air. When air is cooled to the dew point, it becomes saturated, and moisture will condense on a suitable surface. For instance, the dew point temperature of air at and 80 percent relative humidity is . The dew point temperature falls to if the relative humidity is 50 percent.
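The relationship between air temperature, relative humidity, and dew point can be approximated with the Magnus formula. The sketch below illustrates it; since the article's specific figures were lost, the 20 degree Celsius air temperature used here is only an assumed example value.

```python
import math

def dew_point_celsius(temp_c, relative_humidity):
    """Approximate dew point via the Magnus formula (reasonable for roughly 0-60 C)."""
    a, b = 17.62, 243.12          # Magnus coefficients; b is in degrees Celsius
    gamma = (a * temp_c) / (b + temp_c) + math.log(relative_humidity / 100.0)
    return (b * gamma) / (a - gamma)

# Drier air must be cooled further before condensation begins.
for rh in (80, 50):
    print(f"Air at 20 C and {rh}% RH condenses below ~{dew_point_celsius(20.0, rh):.1f} C")
```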
A related, but quite distinct, technique of obtaining atmospheric moisture is the fog fence.
An air well should not be confused with a dew pond. A dew pond is an artificial pond intended for watering livestock. The name dew pond (sometimes cloud pond or mist pond) derives from the widely held belief that the pond was filled by moisture from the air. In fact, dew ponds are primarily filled by rainwater.
A stone mulch can significantly increase crop yields in arid areas. This is most notably the case in the Canary Islands: on the island of Lanzarote there is about of rain each year and no permanent rivers. Despite this, substantial crops can be grown by using a mulch of volcanic stones, a trick discovered after volcanic eruptions in 1730. Some credit the stone mulch with promoting dew; although the idea has inspired some thinkers, it seems unlikely that the effect is significant. Rather, plants are able to absorb dew directly from their leaves, and the main benefit of a stone mulch is to reduce water loss from the soil and to eliminate competition from weeds.
History
Beginning in the early 20th century, a number of inventors experimented with high-mass collectors. Notable investigators were the Russian engineer Friedrich Zibold (sometimes given as Friedrich Siebold), the French bioclimatologist Leon Chaptal, the German-Australian researcher Wolf Klaphake, and the Belgian inventor Achille Knapen.
Zibold's collector
In 1900, near the site of the ancient Byzantine city of Theodosia, thirteen large piles of stones were discovered by Zibold, who was a forester and engineer in charge of the area. Each stone pile covered just over , and was about tall. The finds were associated with the remains of terracotta pipes that apparently led to wells and fountains in the city. Zibold concluded that the stacks of stone were condensers that supplied Theodosia with water and he calculated that each air well produced more than each day.
To verify his hypothesis, Zibold constructed a stone-pile condenser at an altitude of on mount Tepe-Oba near the ancient site of Theodosia. Zibold's condenser was surrounded by a wall high, wide, around a bowl-shaped collection area with drainage. He used sea stones in diameter piled high in a truncated cone that was in diameter across the top. The shape of the stone pile allowed a good air flow with only minimal thermal contact between the stones.
Zibold's condenser began to operate in 1912 with a maximum daily production that was later estimated to have been – Zibold made no public record of his results at the time. The base developed leaks that forced the experiment to end in 1915 and the site was partially dismantled before being abandoned. (The site was rediscovered in 1993 and cleaned up.) Zibold's condenser was approximately the same size as the ancient stone piles that had been found, and although the yield was very much less than the yield Zibold had calculated for the original structures, the experiment was an inspiration for later developers.
Chaptal's collector
Inspired by Zibold's work, Chaptal built a small air well near Montpellier in 1929. Chaptal's condenser was a pyramidal concrete structure square and high; it was filled with of limestone pieces about in diameter. Small vent holes ringed the top and bottom of the pyramid. These holes could be closed or opened as required to control the flow of air. The structure was allowed to cool during the night, and then warm moist air was let in during the day. Dew formed on the limestone pieces and collected in a reservoir below ground level. The amount of water obtained varied from to per day depending on the atmospheric conditions.
Chaptal did not consider his experiment a success. When he retired in 1946, he put the condenser out of order, possibly because he did not want to leave an improper installation to mislead those who might later continue studies on air wells.
Klaphake's collectors
Wolf Klaphake was a successful chemist working in Berlin during the 1920s and 1930s. During that time, he tested several forms of air wells in Yugoslavia and on Vis Island in the Adriatic Sea. Klaphake's work was inspired by Zibold and by the works of Maimonides, a noted Jewish scholar who wrote in Arabic about 1,000 years ago and who mentioned the use of water condensers in Palestine.
Klaphake experimented with a very simple design: an area of mountain slope was cleared and smoothed with a watertight surface. It was shaded by a simple canopy supported by pillars or ridges. The sides of the structure were closed, but the top and bottom edges were left open. At night the mountain slope would cool, and in the day moisture would collect on and run down the smoothed surface. Although the system apparently worked, it was expensive, and Klaphake finally adopted a more compact design based on a masonry structure. This design was a sugarloaf-shaped building, about high, with walls at least thick, with holes on the top and at the bottom. The outer wall was made of concrete to give a high thermal capacity, and the inner surface was made of a porous material such as sandstone. According to Klaphake:
Traces of Klaphake's condensers have been tentatively identified.
In 1935, Wolf Klaphake and his wife Maria emigrated to Australia. The Klaphakes' decision to emigrate was probably primarily the result of Maria's encounters with Nazi authorities; their decision to settle in Australia (rather than, say, in Britain) was influenced by Wolf's desire to develop a dew condenser. As a dry continent, Australia was likely to need alternative sources of fresh water, and the Premier of South Australia, whom he had met in London, had expressed an interest. Klaphake made a specific proposal for a condenser at the small town of Cook, where there was no supply of potable water. At Cook, the railway company had previously installed a large coal-powered active condenser, but it was prohibitively expensive to run, and it was cheaper to simply transport water. However, the Australian government turned down Klaphake's proposal, and he lost interest in the project.
Knapen's aerial well
Knapen, who had previously worked on systems for removing moisture from buildings, was in turn inspired by Chaptal's work and he set about building an ambitiously large puits aerien (aerial well) on a high hill at Trans-en-Provence in France. Beginning in 1930, Knapen's dew tower took 18 months to build; it still stands today, albeit in dilapidated condition. At the time of its construction, the condenser excited some public interest.
The tower is high and has massive masonry walls about thick with a number of apertures to let in air. Inside there is a massive column made of concrete. At night, the whole structure is allowed to cool, and during the day warm moist air enters the structure via the high apertures, cools, descends, and leaves the building by the lower apertures. Knapen's intention was that water should condense on the cool inner column. In keeping with Chaptal's finding that the condensing surface must be rough and the surface tension must be sufficiently low that the condensed water can drip, the central column's outer surface was studded with projecting plates of slate. The slates were placed nearly vertically to encourage dripping down to a collecting basin at the bottom of the structure. Unfortunately, the aerial well never achieved anything like its hoped-for performance and produced no more than a few litres of water each day.
International Organisation for Dew Utilization
By the end of the twentieth century, the mechanics of how dew condenses were much better understood. The key insight was that low-mass collectors which rapidly lose heat by radiation perform best. A number of researchers worked on this method. In the early 1960s, dew condensers made from sheets of polyethylene supported on a simple frame resembling a ridge tent were used in Israel to irrigate plants. Saplings supplied with dew and very slight rainfall from these collectors survived much better than the control group planted without such aids – they all dried up over the summer. In 1986 in New Mexico condensers made of a special foil produced sufficient water to supply young saplings.
In 1992 a party of French academics attended a condensed matter conference in Ukraine where physicist Daniel Beysens introduced them to the story of how ancient Theodosia was supplied with water from dew condensers. They were sufficiently intrigued that in 1993 they went to see for themselves. They concluded that the mounds that Zibold identified as dew condensers were in fact ancient burial mounds (a part of the necropolis of ancient Theodosia) and that the pipes were medieval in origin and not associated with the construction of the mounds. They found the remains of Zibold's condenser, which they tidied up and examined closely. Zibold's condenser had apparently performed reasonably well, but in fact his exact results are not at all clear, and it is possible that the collector was intercepting fog, which added significantly to the yield. If Zibold's condenser worked at all, this was probably due to the fact that a few stones near the surface of the mound were able to lose heat at night while being thermally isolated from the ground; however, it could never have produced the yield that Zibold envisaged.
Fired with enthusiasm, the party returned to France and set up the International Organisation for Dew Utilization (OPUR), with the specific objective of making dew available as an alternative source of water.
OPUR began a study of dew condensation under laboratory conditions; they developed a special hydrophobic film and experimented with trial installations, including a collector in Corsica. Vital insights included the idea that the mass of the condensing surface should be as low as possible so that it cannot easily retain heat, that it should be protected from unwanted thermal radiation by a layer of insulation, and that it should be hydrophobic, so as to shed condensed moisture readily.
By the time they were ready for their first practical installation, they heard that one of their members, Girja Sharan, had obtained a grant to construct a dew condenser in Kothara, India. In April 2001, Sharan had incidentally noticed substantial condensation on the roof of a cottage at Toran Beach Resort in the arid coastal region of Kutch, where he was briefly staying. The following year, he investigated the phenomenon more closely and interviewed local people. Financed by the Gujarat Energy Development Agency and the World Bank, Sharan and his team went on to develop passive, radiative condensers for use in the arid coastal region of Kutch. Active commercialisation began in 2006.
Sharan tested a wide range of materials and got good results from galvanised iron and aluminium sheets, but found that sheets of the special plastic developed by the OPUR just thick generally worked even better than the metal sheets and were less expensive. The plastic film, known as OPUR foil, is hydrophilic and is made from polyethylene mixed with titanium dioxide and barium sulphate.
Types
There are three principal approaches to the design of the heat sinks that collect the moisture in air wells: high mass, radiative, and active. Early in the twentieth century, there was interest in high-mass air wells, but despite much experimentation including the construction of massive structures, this approach proved to be a failure.
From the late twentieth century onwards, there has been much investigation of low-mass, radiative collectors; these have proved to be much more successful.
High-mass
The high-mass air well design attempts to cool a large mass of masonry with cool nighttime air entering the structure due to breezes or natural convection. In the day, the warmth of the sun results in increased atmospheric humidity. When moist daytime air enters the air well, it condenses on the presumably cool masonry. None of the high-mass collectors performed well, Knapen's aerial well being a particularly conspicuous example.
The problem with the high-mass collectors was that they could not get rid of sufficient heat during the night – despite design features intended to ensure that this would happen. While some thinkers have believed that Zibold might have been correct after all, an article in Journal of Arid Environments discusses why high-mass condenser designs of this type cannot yield useful amounts of water:
Although ancient air wells are mentioned in some sources, there is scant evidence for them, and persistent belief in their existence has the character of a modern myth.
Radiative
A radiative air well is designed to cool a substrate by radiating heat to the night sky. The substrate has a low mass so that it cannot hold onto heat, and it is thermally isolated from any mass, including the ground. A typical radiative collector presents a condensing surface at an angle of 30° from the horizontal. The condensing surface is backed by a thick layer of insulating material such as polystyrene foam and supported above ground level. Such condensers may be conveniently installed on the ridge roofs of low buildings or supported by a simple frame. Although other heights do not typically work quite so well, it may be less expensive or more convenient to mount a collector near to ground level or on a two-story building.
One such radiative condenser is built near the ground. In the area of northwest India where it is installed, dew occurs for 8 months a year, and the installation collects about of dew water over the season with nearly 100 dew-nights. In a year it provides a total of about of potable water for the school which owns and operates the site.
Although flat designs have the benefit of simplicity, other designs such as inverted pyramids and cones can be significantly more effective. This is probably because the designs shield the condensing surfaces from unwanted heat radiated by the lower atmosphere, and, being symmetrical, they are not sensitive to wind direction.
New materials may make even better collectors. One such material is inspired by the Namib Desert beetle, which survives only on the moisture it extracts from the atmosphere. It has been found that its back is coated with microscopic projections: the peaks are hydrophilic and the troughs are hydrophobic. Researchers at the Massachusetts Institute of Technology have emulated this capability by creating a textured surface that combines alternating hydrophobic and hydrophilic materials.
Active
Active atmospheric water collectors have been in use since the commercialisation of mechanical refrigeration. Essentially, all that is required is to cool a heat exchanger below the dew point, and water will be produced. Such water production may take place as a by-product, possibly unwanted, of dehumidification. The air conditioning system of the Burj Khalifa in Dubai, for example, produces an estimated of water each year that is used for irrigating the tower's landscape plantings.
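The dew point itself can be estimated from air temperature and relative humidity. The sketch below uses the Magnus approximation, a standard formula rather than anything specified in the sources above, and its coefficients are conventional values chosen here for illustration.

```python
import math

def dew_point_c(temp_c, rel_humidity_pct, a=17.62, b=243.12):
    """Approximate dew point (deg C) from air temperature and relative humidity
    using the Magnus formula; a and b are commonly used empirical coefficients."""
    gamma = math.log(rel_humidity_pct / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# Example: warm coastal air at 30 deg C and 70 % relative humidity.
# An active collector must chill its heat exchanger below this temperature
# before liquid water starts to condense.
print(round(dew_point_c(30.0, 70.0), 1))  # roughly 24 deg C
```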
Because mechanical refrigeration is energy intensive, active collectors are typically restricted to places where there is no supply of water that can be desalinated or purified at a lower cost and that are sufficiently far from a supply of fresh water to make transport uneconomical. Such circumstances are uncommon, and even then large installations such as that tried in the 1930s at Cook, South Australia failed because of the cost of running the installation – it was cheaper to transport water over large distances.
In the case of small installations, convenience may outweigh cost. There is a wide range of small machines designed to be used in offices that produce a few litres of drinking water from the atmosphere. However, there are circumstances where there really is no source of water other than the atmosphere. For example, in the 1930s, American designers added condenser systems to airships – in this case the air was that emitted by the exhaust of the engines, and so it contained additional water as a product of combustion. The moisture was collected and used as additional ballast to compensate for the loss of weight as fuel was consumed. By collecting ballast in this way, the airship's buoyancy could be kept relatively constant without having to release helium gas, which was both expensive and in limited supply.
More recently, on the International Space Station, the Zvezda module includes a humidity control system. The water it collects is usually used to supply the Elektron system that electrolyses water into hydrogen and oxygen, but it can be used for drinking in an emergency.
There are a number of designs that minimise the energy requirements of active condensers:
One method is to use the ground as a heat sink by drawing air through underground pipes. This is often done to provide a source of cool air for a building by means of a ground-coupled heat exchanger (also known as Earth tubes), wherein condensation is typically regarded as a significant problem. A major problem with such designs is that the underground tubes are subject to contamination and difficult to keep clean. Designs of this type require air to be drawn through the pipes by a fan, but the power required may be provided (or supplemented) by a wind turbine.
Cold seawater is used in the Seawater Greenhouse to both cool and humidify the interior of greenhouse-like structure. The cooling can be so effective that not only do the plants inside benefit from reduced transpiration, but dew collects on the outside of the structure and can easily be collected by gutters.
Another type of atmospheric water collector makes use of desiccants which adsorb atmospheric water at ambient temperature, this makes it possible to extract moisture even when the relative humidity is as low as 14 percent. Systems of this sort have proved to be very useful as emergency supplies of safe drinking water. For regeneration, the desiccant needs to be heated. In some designs regeneration energy is supplied by the sun; air is ventilated at night over a bed of desiccants that adsorb the water vapour. During the day, the premises are closed, the greenhouse effect increases the temperature, and, as in solar desalination pools, the water vapour is partially desorbed, condenses on a cold part and is collected. Nanotechnology is improving these types of collectors, as well. One such adsorption-based device collected 0.25 L of water per kg of a metal-organic framework in an exceptionally arid climate with sub-zero dew points (Tempe, Arizona, USA).
A French company has recently designed a small wind turbine that uses a 30 kW electric generator to power an onboard mechanical refrigeration system to condense water.
See also
Atmospheric water generator
Condensation trap (Solar still)
Dew pond
Fog collection
Fog drip
Groasis Waterboxx
Passive cooling, including fluorescent cooling, which can cool to below the ambient air temperature
Rainwater harvesting
Solar chimney
Water potential
Watermaker
References
Notes
Sources
External links
Precipitation
Water supply
Hydrology
Appropriate technology
Drinking water
Water and the environment | Air well (condenser) | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 4,361 | [
"Hydrology",
"Water supply",
"Environmental engineering"
] |
21,903,944 | https://en.wikipedia.org/wiki/WolframAlpha | WolframAlpha ( ) is an answer engine developed by Wolfram Research. It is offered as an online service that answers factual queries by computing answers from externally sourced data.
WolframAlpha was released on May 18, 2009, and is based on Wolfram's earlier product Wolfram Mathematica, a technical computing platform. WolframAlpha gathers data from academic and commercial websites such as the CIA's The World Factbook, the United States Geological Survey, a Cornell University Library publication called All About Birds, Chambers Biographical Dictionary, Dow Jones, the Catalogue of Life, CrunchBase, Best Buy, and the FAA to answer queries. A Spanish language version was launched in 2022.
Technology
Overview
Users submit queries and computation requests via a text field. WolframAlpha then computes answers and relevant visualizations from a knowledge base of curated, structured data that come from other sites and books. It can respond to particularly phrased natural language fact-based questions. It displays its "Input interpretation" of such a question, using standardized phrases. It can also parse mathematical symbolism and respond with numerical and statistical results.
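Queries can also be submitted programmatically. The sketch below is illustrative only: it assumes the publicly documented v2/query endpoint of the WolframAlpha Full Results API, a placeholder application ID ("YOUR_APP_ID"), and the JSON pod structure described in that API's documentation.

```python
import requests

APP_ID = "YOUR_APP_ID"  # placeholder; a real key is issued by the developer portal

def ask(query):
    # Assumed endpoint and parameter names follow the Full Results API (v2/query).
    resp = requests.get(
        "https://api.wolframalpha.com/v2/query",
        params={"appid": APP_ID, "input": query, "output": "JSON"},
        timeout=30,
    )
    resp.raise_for_status()
    pods = resp.json()["queryresult"].get("pods", [])
    # Each "pod" corresponds to one block of the answer page,
    # e.g. "Input interpretation" or "Result".
    return {p["title"]: p["subpods"][0].get("plaintext", "") for p in pods}

if __name__ == "__main__":
    for title, text in ask("distance from Earth to Mars").items():
        print(f"{title}: {text}")
```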
Development
WolframAlpha is written in the Wolfram Language, a multi-paradigm programming language, and implemented in Mathematica. Wolfram language is proprietary and is not commonly used by developers.
Usage
WolframAlpha was used to power some searches in the Microsoft Bing and DuckDuckGo search engines but is no longer used to provide search results. For factual question answering, WolframAlpha was used by Apple's Siri from October 2011 and by Amazon Alexa from December 2018 for math and science queries. The Wolfram integration for Siri was changed in June 2013, with iOS 7, so that certain results were queried through Bing. Starting with iOS 17, several users reported that Wolfram for Siri would no longer answer mathematical equations, instead defaulting entirely to web search queries with no notable explanation. WolframAlpha , sets of curated information and formulas that assist in creating, categorizing, and filling spreadsheet information, became available in July 2020 within Microsoft Excel. The Microsoft–Wolfram partnership ended nearly two years later, in 2022, in favor of Microsoft Power Query data types. WolframAlpha functionality in Microsoft Excel ended in June 2023.
History
Launch preparations for WolframAlpha began on May 15, 2009, at 7:00 pm CDT and were broadcast live on Justin.tv. The plan was to publicly launch the service a few hours later. However, there were issues due to extreme load. The service officially launched on May 18, 2009, receiving mixed reviews. In 2009, WolframAlpha advocates pointed to its , some stating that how it determines results is more important than current usefulness. WolframAlpha was free at launch, but later Wolfram Research attempted to monetize the service by launching an iOS application with a cost of $50, while the website itself was free. That plan was abandoned after criticism.
On February 8, 2012, WolframAlpha Pro was released, offering users additional features for a monthly subscription fee.
Some high-school and college students use WolframAlpha to cheat on math homework, though Wolfram Research says the service helps students understand math with its problem-solving capabilities.
Copyright claims
InfoWorld published an article warning readers of the potential implications of giving an automated website proprietary rights to the data it generates. Free software advocate Richard Stallman also opposes recognizing the site as a copyright holder and suspects that Wolfram Research would not be able to make this case under existing copyright law.
See also
Commonsense knowledge problem
Artificial general intelligence
Strong AI
Watson (computer)
References
External links
Agent-based software
Educational math software
Information retrieval systems
Internet properties established in 2009
Mathematics education
Natural language processing software
Open educational resources
Physics education
Semantic Web
Software calculators
Virtual assistants
Web analytics
Websites which mirror Wikipedia
Wolfram Research
Computer algebra systems | WolframAlpha | [
"Physics",
"Mathematics",
"Technology"
] | 802 | [
"Computer algebra systems",
"Software calculators",
"Applied and interdisciplinary physics",
"Information retrieval systems",
"Physics education",
"Information technology",
"Educational math software",
"Mathematical software"
] |
37,562,074 | https://en.wikipedia.org/wiki/High-temperature%20operating%20life | High-temperature operating life (HTOL) is a reliability test applied to integrated circuits (ICs) to determine their intrinsic reliability. This test stresses the IC at an elevated temperature, high voltage and dynamic operation for a predefined period of time. The IC is usually monitored under stress and tested at intermediate intervals. This reliability stress test is sometimes referred to as a lifetime test, device life test or extended burn in test and is used to trigger potential failure modes and assess IC lifetime.
There are several types of HTOL:
AEC Documents.
JEDEC Standards.
Mil standards.
Design considerations
The main aim of the HTOL is to age the device such that a short experiment will allow the lifetime of the IC to be predicted (e.g. 1,000 HTOL hours shall predict a minimum of "X" years of operation). A good HTOL process avoids relaxed HTOL operation and also prevents overstressing the IC. This method ages all of the IC's building blocks so that relevant failure modes can be triggered and observed in a short reliability experiment. A precise multiplier, known as the Acceleration Factor (AF), simulates long lifetime operation.
The AF represents the accelerated aging factor relative to the useful life application conditions.
For effective HTOL stress testing, several variables should be considered:
Digital toggling factor
Analog modules operation
I/O ring activity
Monitor design
Ambient temperature (Ta)
Junction temperature (Tj)
Voltage stress (Vstrs)
Acceleration factor (AF)
Test duration (t)
Sample size (SS)
A detailed description of the above variables, using a hypothetical, simplified IC with several RAMs, digital logic, an analog voltage regulator module and I/O ring, together with the HTOL design considerations for each are provided below.
Digital toggling factor
The digital toggling factor (DTF) represents the number of transistors that change their state during the stress test, relative to the total number of gates in the digital portion of the IC. In effect, the DTF is the percentage of transistors toggling in one time unit. The time unit is relative to the toggling frequency, and is usually limited by the HTOL setup to be in the range of 10–20Mhz.
Reliability engineers strive to toggle as many as possible transistors for each time unit of measure. The RAMs (and other memory types) are usually activated using the BIST function, while the logic is usually activated with the SCAN function, LFSR or logic BIST.
The power and the self-heating of the digital portion of the IC are evaluated and the device's aging estimated. These two measures are aligned so that they are similar to the aging of other elements of the IC. The degrees of freedom for aligning these measures are the voltage stress and/or the time period during which the HTOL program loops these blocks relative to other IC blocks.
Analog modules operation
The recent trend of integrating as many electronic components as possible into a single chip is known as system on a chip (SoC).
This trend complicates reliability engineers' work because (usually) the analog portion of the chip dissipates higher power relative to the other IC elements.
This higher power may generate hot spots and areas of accelerated aging. Reliability engineers must understand the power distribution on the chip and align the aging so that it is similar for all elements of an IC.
In our hypothetical SoC the analog module only includes a voltage regulator. In reality, there may be additional analog modules e.g. PMIC, oscillators, or charge pumps. To perform efficient stress tests on the analog elements, reliability engineers must identify the worst-case scenario for the relevant analog blocks in the IC. For example, the worst-case scenario for voltage regulators may be the maximum regulation voltage and maximum load current; for charge pumps it may be the minimum supply voltage and maximum load current.
Good engineering practice calls for the use of external loads (external R, L, C) to force the necessary currents. This practice avoids loading differences due to the chip's different operational schemes and operation trimming of its analog parts.
Statistical methods are used to check statistical tolerances, variation and temperature stability of the loads used, and to define the right confidence bands for the loads to avoid over/under stress at HTOL operating range. The degrees of freedom for aligning the aging magnitude of analog parts is usually the duty-cycle, external load values and voltage stress.
I/O ring activity
The interface between the "outside world" and the IC is made via the input/output (I/O) ring. This ring contains power I/O ports, digital I/O ports and analog I/O ports. The I/Os are (usually) wired via the IC package to the "outside world" and each I/O executes its own specific command instructions, e.g. JTAG ports, IC power supply ports etc. Reliability engineering aims to age all I/Os in the same way as the other IC elements. This can be achieved by using a Boundary scan operation.
Monitor design
As previously mentioned, the main aim of the HTOL is aging the samples by dynamic stress at elevated voltage and/or temperature. During the HTOL operation, we need to assure that the IC is active, toggling and constantly functioning.
At the same time, we need to know at what point the IC stops responding; these data are important for calculating precise reliability indices and for facilitating the FA. This is done by monitoring the device via one or more vital IC parameter signals that are communicated to and logged by the HTOL machine, providing continuous indication of the IC's functionality throughout the HTOL run time. Examples of commonly used monitors include the BIST "done" flag signal, the SCAN output chain or the analog module output.
There are three types of monitoring:
Pattern matching: The actual output signal is compared to the expected one and alerts about any deviation. The main disadvantage of this monitor type is its sensitivity to any minor deviation from the expected signal. During the HTOL, the IC runs at a temperature and/or voltages that occasionally fall outside its specification, which may cause artificial sensitivity and/or a malfunction that fails the matching but is not a real failure.
Activity: Counts the number of toggles and, if the result is higher than a predefined threshold, the monitor indicates OK. The main disadvantage of this type of monitoring is the chance that unexpected noise or signals could be wrongly interpreted. This issue arises mainly in the case of a low-count toggling monitor.
Activity within a predefined range: Checks that the monitor responds within a predefined limit, for example when the number of toggles is within a predefined limit or the output of the voltage regulator is within a predefined range.
Ambient temperature (Ta)
According to JEDEC standards, the environmental chamber should be capable of maintaining the specified temperature within a tolerance of ±5 °C throughout while parts are loaded and unpowered. Today's environmental chambers have better capabilities and can exhibit temperature stability within a range of ±3 °C throughout.
Junction temperature (Tj)
Low-power ICs can be stressed without major attention to self-heating effects. However, due to technology scaling and manufacturing variations, power dissipation within a single production lot of devices can vary by as much as 40%. This variation, in addition to high-power ICs, makes advanced contact temperature controls necessary, facilitating individual control systems for each IC.
Voltage stress (Vstrs)
The operating voltage should be at least the maximum specified for the device. In some cases a higher voltage is applied to obtain lifetime acceleration from voltage as well as temperature.
To define the maximum permitted voltage stress, the following methods can be considered:
Force 80% of breakdown voltage;
Force six-sigma less than the breakdown voltage;
Set the overvoltage to be higher than the maximum specified voltage. An overvoltage level of 140% of the maximum voltage is occasionally used for MIL and automotive applications.
Reliability engineers must check that Vstress does not exceed the maximum rated voltage for the relevant technology, as specified by the FAB.
Acceleration factor (AF)
The acceleration factor (AF) is a multiplier that relates a product's life at an accelerated stress level to the life at the use stress level.
An AF of 20 means 1 hour at stress condition is equivalent to 20 hours at useful condition.
The voltage acceleration factor is represented by AFv. Usually the stress voltage is equal to or higher than the maximum voltage. An elevated voltage provides additional acceleration and can be used to increase effective device hours or achieve an equivalent life point.
There are several AFv models:
E model or the constant field/voltage acceleration exponential model;
1/E model or, equivalently, the anode hole injection model;
V model, where the failure rate is exponential to voltage
Anode hydrogen release for the power-law model
AFtemp is the acceleration factor due to changes in temperature and is usually based on the Arrhenius equation. The total acceleration factor is the product of AFv and AFtemp
Test duration (t)
The reliability test duration assures the device's adequate lifetime requirement.
For example, with an activation energy of 0.7 eV, 125 °C stress temperature and 55 °C use temperature, the acceleration factor (Arrhenius equation) is 78.6. This means that 1,000 hours' stress duration is equivalent to 9 years of use. The reliability engineer decides on the qualification test duration. Industry good practice calls for 1,000 hours at a junction temperature of 125 °C.
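The figures above can be reproduced with a few lines of code. The sketch below assumes the standard Arrhenius form of the thermal acceleration factor and a Boltzmann constant of 8.617 × 10^−5 eV/K; the small difference from the quoted 78.6 comes from rounding of constants.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    """Thermal acceleration factor between use and stress junction temperatures."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1.0 / t_use - 1.0 / t_stress))

af = arrhenius_af(0.7, 55.0, 125.0)
print(round(af, 1))                # ~78, close to the 78.6 quoted above
print(round(1000 * af / 8760, 1))  # 1,000 stress hours expressed in years of use (~9)
```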
Sample size (SS)
The challenge for new reliability assessment and qualification systems is determining the relevant failure mechanisms to optimize sample size.
Sample plans are statistically derived from manufacturer risk, consumer risk, and the expected failure rate. The commonly used sampling plan of zero rejects out of 230 samples is equal to three rejects out of 668 samples assuming LTPD = 1 and a 90% confidence interval.
HTOL policy
Sample selection
Samples shall include representative samples from at least three nonconsecutive lots to represent manufacturing variability. All test samples shall be fabricated, handled, screened and assembled in the same way as during the production phase.
Sample preparation
Samples shall be tested prior to stress and at predefined checkpoints. It is good engineering practice to test samples at maximum and minimum rating temperatures as well as at room temperature. Data logs of all functional and parametric tests shall be collated for further analysis.
Test duration
Assuming Tj = 125 °C, commonly used checkpoints are after 48, 168, 500 and 1,000 hours.
Different checkpoints for different temperatures can be calculated by using the Arrhenius equation. For example, with an activation energy of 0.7 eV, Tj of 135 °C and Tuse of 55 °C the equivalent checkpoints will be at 29, 102, 303 and 606 hours.
Electrical testing should be completed as soon as possible after the samples are removed. If the samples cannot be tested soon after their removal, additional stress time should be applied. The JEDEC standard requires samples be tested within 168 hours of removal.
If testing exceeds the recommended time window, additional stress should be applied according to the table below:
Merit numbers
The merit number is the outcome of statistical sampling plans.
Sampling plans are inputted to SENTENCE, an audit tool, to ensure that the output of a process meets the requirements. SENTENCE simply accepts or rejects the tested lots. The reliability engineer implements statistical sampling plans based on predefined Acceptance Quality Limits, LTPD, manufacturer risk and customer risk. For example, the commonly used sampling plan of 0 rejects out of 230 samples is equal to 3 rejects out of 668 samples assuming LTPD = 1.
HTOL in various industries
The aging process of an IC is relative to its standard use conditions. The tables below provide reference to various commonly used products and the conditions under which they are used.
Reliability engineers are tasked with verifying the adequate stress duration. For example, for an activation energy of 0.7 eV, a stress temperature of 125 °C and a use temperature of 55 °C, an expected operational life of five years is represented by a 557-hour HTOL experiment.
Commercial use
Automotive use
Example Automotive Use Conditions
Telecommunication use
Example European Telecom use Conditions definition
Example US Telecom use conditions definition
Military use
Example military use conditions
Example
Number of Failures = r
Number of Devices = D
Test Hours per Device = H
Celsius + 273 = T (Calculation Temperature in Kelvin)
Test Temperature (HTRB or other burn-in temperature) = T_test
Use Temperature (standardized at 55 °C or 328 K) = T_use
Activation Energy (eV) = E_a
Chi Squared/2 is the probability estimation for number of failures at α and ν
Confidence Level for X^2 distribution; reliability calculations use α=60% or 0.60 = α (alpha)
Degrees of Freedom for distribution; reliability calculations use ν=2r + 2.0 = ν (nu)
Acceleration Factor from the Arrhenius equation = A_F
Boltzmann constant (k) = 8.617 × 10^−5 eV/K
Device Hours (DH) = D × H
Equivalent Device Hours (EDH) = D × H × A_F
Failure Rate per hour = λ
Failures in Time = Failure Rate per billion hours = FIT = λ × 10^9
Mean Time to Failure = MTTF
Where the Acceleration Factor from the Arrhenius equation is:
A_F = exp[(E_a / k) × (1 / T_use − 1 / T_test)]
Failure Rate per hour = λ = χ²(α, ν) / (2 × EDH) = χ²(α, 2r + 2) / (2 × D × H × A_F)
Failures in Time = Failure Rate per billion hours = FIT = λ × 10^9
Mean Time to Failure in hours = MTTF = 1 / λ
Mean Time to Failure in years = MTTF = 1 / (λ × 8760)
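Putting the definitions above together, the following sketch computes the failure rate, FIT and MTTF for an assumed data set (0 failures on 230 devices stressed for 1,000 hours with an acceleration factor of 78.6); the numbers are illustrative, not taken from a real qualification.

```python
from scipy.stats import chi2

# Assumed example inputs: r failures, D devices, H hours each, acceleration
# factor AF, and a 60 % confidence level for the chi-squared estimate.
r, D, H, AF, CL = 0, 230, 1000.0, 78.6, 0.60

edh = D * H * AF                           # equivalent device hours at use conditions
lam = chi2.ppf(CL, 2 * r + 2) / (2 * edh)  # failure rate per hour (upper bound at CL)
fit = lam * 1e9                            # failures in time (per billion device hours)
mttf_years = 1 / lam / 8760

print(f"lambda = {lam:.3e}/h, FIT = {fit:.1f}, MTTF = {mttf_years:,.0f} years")
```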
In case you want to calculate the acceleration factor including humidity — the so-called highly accelerated stress test (HAST) — then:
the Acceleration Factor from the Arrhenius equation would be: A_F = exp[(E_a / k) × (1 / T_use − 1 / T_test)] × exp[b × (RH_stress − RH_use)]
where RH_stress is the stress-test relative humidity (in percent). Typically RH_stress is 85%.
where RH_use is the typical use relative humidity (in percent). Typically this is measured at the chip surface, ca. 10–20%.
where b is the failure mechanism scale factor, which is a value between 0.1 and 0.15.
In case you want to calculate the acceleration factor including humidity (HAST) and voltage stress, then:
the Acceleration Factor from the Arrhenius equation would be: A_F = exp[(E_a / k) × (1 / T_use − 1 / T_test)] × exp[b × (RH_stress − RH_use)] × exp[γ × (V_stress − V_use)]
where V_stress is the stress voltage (in volts). Typically V_stress is VCC × 1.4, e.g. 1.8 × 1.4 = 2.52 volts.
where V_use is the typical usage voltage, or VCC (in volts). Typically VCC is 1.8 V, depending on the design.
where γ is the failure mechanism scale factor, which is a value between 0 and 3.0, typically 0.5 for a silicon junction defect.
See also
Transistor aging
Arrhenius equation
Stress migration
Reliability (semiconductor)
Failure modes of electronics
Bathtub curve
References
External links
siliconfareast
Comparing the Effectiveness of Stress-based Reliability Qualification Stress Conditions
Reliability Hotwire eMagazine
SEMATECH Handbook
Semiconductors
Semiconductor analysis
Environmental testing | High-temperature operating life | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 3,072 | [
"Electrical resistance and conductance",
"Physical quantities",
"Reliability engineering",
"Semiconductors",
"Materials",
"Electronic engineering",
"Condensed matter physics",
"Environmental testing",
"Solid state engineering",
"Matter"
] |
37,564,581 | https://en.wikipedia.org/wiki/SX%20Leonis%20Minoris | SX Leonis Minoris is a dwarf nova of the SU Ursae Majoris type that was first discovered as a 16th magnitude blue star in 1957, before its identity was confirmed as a dwarf nova in 1994. The system consists of a white dwarf and a donor star which orbit around a common centre of gravity every 97 minutes. The white dwarf sucks matter from the other star via its Roche lobe onto an accretion disc which is heated to between 6000 and 10000 K and periodically erupts every 34 to 64 days, reaching magnitude 13.4 in these outbursts and remaining at magnitude 16.8 when quiet. These outbursts can be split into frequent eruptions and less frequent supereruptions. The former are smooth, while the latter exhibit short "superhumps" of heightened activity and last 2.6% longer.
SX Leonis Minoris has been calculated as lying 489 to 688 light years (150 to 211 parsecs) distant from the Solar System via extrapolation of the, or more recently as 360 parsecs (1174 light years) via calculation using orbital period and absolute magnitude. The donor star has been calculated to have only 11% the mass of the white dwarf. It is thought to be a red dwarf of spectral type M5 to M7, but infrared observation or spectroscopy is obscured by the white dwarf's accretion disc.
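The parsec and light-year figures quoted above are consistent with the standard conversion of 1 parsec ≈ 3.26 light years, as the following check shows.

```python
LY_PER_PARSEC = 3.2616  # one parsec expressed in light years

for pc in (150, 211, 360):
    print(pc, "pc =", round(pc * LY_PER_PARSEC), "ly")
# 150 pc = 489 ly, 211 pc = 688 ly, 360 pc = 1174 ly
```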
References
Citations
Sources
External links
Leo Minor
Dwarf novae
Leonis Minoris, SX | SX Leonis Minoris | [
"Astronomy"
] | 302 | [
"Leo Minor",
"Constellations"
] |
37,568,071 | https://en.wikipedia.org/wiki/Hypnozygote | A hypnozygote is a resting cyst resulting from sexual fusion; it is commonly thick-walled. A synonym of zygotic cyst. Hypnozygotes have the ability to remain dormant in mud and other sediments until conditions become more favorable for growth.
References
Cysts
Microbiology | Hypnozygote | [
"Chemistry",
"Biology"
] | 66 | [
"Microbiology",
"Microscopy"
] |
23,406,255 | https://en.wikipedia.org/wiki/Cryogenic%20engineering | Cryogenic engineering is a sub stream of mechanical engineering dealing with cryogenics, and related very low temperature processes such as air liquefaction, cryogenic engines (for rocket propulsion), cryosurgery. Generally, temperatures below cold come under the purview of cryogenic engineering.
Cryogenics may be considered a recent advancement in the field of refrigeration. Though there is no fixed demarcation as to where refrigeration ends and cryogenics begins, for general reference, temperatures below −150 °C (about 123 K) are considered cryogenic temperatures. The four gases which mainly contribute to cryogenic applications and research are oxygen (O2, boiling point 90 K), nitrogen (N2, boiling point 77 K), helium (He, boiling point 4.2 K) and hydrogen (H2, boiling point 20 K).
The word "cryogenic" is derived from Greek κρύο (cryo) - "icy cold" + γονική (genic) – "having to do with production".
References
Cryogenics
Mechanical engineering | Cryogenic engineering | [
"Physics",
"Chemistry",
"Engineering"
] | 234 | [
"Thermodynamics stubs",
"Applied and interdisciplinary physics",
"Cryogenics",
"Thermodynamics",
"Mechanical engineering",
"Physical chemistry stubs"
] |
24,875,728 | https://en.wikipedia.org/wiki/Jason-3 | Jason-3 is a satellite altimeter created by a partnership of the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) and National Aeronautic and Space Administration (NASA), and is an international cooperative mission in which National Oceanic and Atmospheric Administration (NOAA) is partnering with the Centre National d'Études Spatiales (CNES, French space agency). The satellite's mission is to supply data for scientific, commercial, and practical applications to sea level rise, sea surface temperature, ocean temperature circulation, and climate change.
Mission objectives
Jason-3 makes precise measurements related to global sea-surface height. Because sea surface height is measured via altimetry, mesoscale ocean features are better simulated since the Jason-3 radar altimeter can measure global sea-level variations with very high accuracy. The scientific goal is to produce global sea-surface height measurements every 10 days to an accuracy of less than 4 cm. In order to calibrate the radar altimeter, a microwave radiometer measures signal delay caused by atmospheric vapors, ultimately correcting the altimeter's accuracy to 3.3 cm. This data is important to collect and analyze because it is a critical factor in understanding the changes in Earth's climate brought on by global warming as well as ocean circulation. NOAA's National Weather Service uses Jason-3's data to more accurately forecast tropical cyclones.
Scientific applications
The primary users of Jason-3 data are people who are dependent on marine and weather forecasts for public safety, commerce and environmental purposes. Other users include scientists and people who are concerned with global warming and its relation to the ocean. National Oceanic and Atmospheric Administration (NOAA) and European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) are using the data primarily for monitoring wind and waves on the high seas, hurricane intensity, ocean surface currents, El Niño and La Niña forecasts, water levels of lakes and rivers. Jason-3 also reports on environmental issues such as algae blooms and oil spills. NASA and CNES are more interested in the research aspect, in terms of understanding and planning for climate change. Jason-3 can measure climate change via sea surface height because sea surface rise, averaged over annual time scales, is accelerated by warming global temperatures. Ultimately, the benefits of Jason-3 data will transfer to people and to the economy.
Orbit
Jason-3 flies in the same 9.9-day repeat-track orbit as its predecessor, which means the satellite makes observations over the same ocean point every 9.9 days. The orbital parameters are: 66.05° inclination, 1,380 km apogee, 1,328 km perigee, and 112 minutes per revolution around Earth. It was set to fly 1 minute behind the now-decommissioned Jason-2. The 1-minute time delay was applied in order not to miss any data collection between missions.
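The quoted period is consistent with Kepler's third law for the given apogee and perigee; the sketch below assumes a mean Earth radius of 6,371 km and Earth's standard gravitational parameter.

```python
import math

MU_EARTH = 398_600.4418   # km^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_371.0         # km, assumed mean Earth radius for the altitude reference

apogee_alt, perigee_alt = 1_380.0, 1_328.0       # km, altitudes quoted above
a = R_EARTH + (apogee_alt + perigee_alt) / 2.0   # semi-major axis in km
period_min = 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0
print(round(period_min, 1))  # ~112.6 minutes, consistent with the quoted 112 min
```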
Orbit determination instruments
In order to detect sea level change, we need to know the orbit height of the satellite as it revolves around Earth, to within . This is achieved by combining instruments from three different techniques: GPS, DORIS and LRA. The GPS receiver on Jason-3 uses data from the constellation of GPS satellites in orbit to constantly determine its position in orbit. Similarly, DORIS is another system that helps determine orbit positioning. Designed by CNES in France, DORIS is based on the Doppler effect, the shift in the frequency of a wave between source and observer. Thirdly, the LRA (Laser Retroreflector Array), an instance of satellite laser ranging (SLR), uses corner reflectors on board the satellite: ground tracking stations measure the time it takes for laser pulses fired from Earth to reach the satellite and be reflected back, which can then be analyzed to determine the orbital position of Jason-3. These three techniques (GPS, DORIS, LRA) all aid in determining orbit height and positioning.
Launch
Appearing on the SpaceX manifest as early as July 2013, Jason-3 was originally scheduled for launch on 22 July 2015. However, this date was pushed back to 19 August 2015 following the discovery of contamination in one of the satellite's thrusters, requiring the thruster to be replaced and further inspected. The launch was further delayed by several months due to the loss of a Falcon 9 rocket with the CRS-7 mission on 28 June 2015.
After SpaceX conducted their return-to-flight mission in December 2015 with the upgraded Falcon 9 Full Thrust, Jason-3 was assigned to the final previous-generation Falcon 9 v1.1 rocket, although some parts of the rocket body had been reworked following the findings of the failure investigation.
A 7-second static fire test of the rocket was completed on 11 January 2016. The Launch Readiness Review was signed off by all parties on 15 January 2016, and the launch proceeded successfully on 17 January 2016, at 18:42 UTC. The Jason-3 payload was deployed into its target orbit at altitude after an orbital insertion burn about 56 minutes into the flight. It was the 21st Falcon 9 flight overall and the second into a high-inclination orbit from Vandenberg Air Force Base Space Launch Complex 4E in California.
Post-mission landing test
Following paperwork filed with US regulatory authorities in 2015, SpaceX confirmed in January 2016 that they would attempt a controlled-descent flight test and vertical landing of the rocket's first stage on their west-coast floating platform Just Read the Instructions, located about out in the Pacific Ocean.
This attempt followed the first successful landing and booster recovery on the previous launch in December 2015. The controlled descent through the atmosphere and landing attempt for each booster is an arrangement that is not used on other orbital launch vehicles.
Approximately nine minutes into the flight, the live video feed from the drone ship went down because the ship lost its lock on the uplink satellite. Elon Musk later reported that the first stage did touch down smoothly on the ship, but a lockout on one of the four landing legs failed to latch, so that the booster fell over and was destroyed.
Debris from the fire, including several rocket engines attached to the octaweb assembly, arrived back to shore on board the floating landing platform on 18 January 2016.
See also
French space program
TOPEX/Poseidon
Jason-1
OSTM/Jason-2
Sentinel-6 Michael Freilich (Jason-CS A)
List of Falcon 9 and Falcon Heavy launches
References
External links
About the satellite
Jason-3 website by NASA JPL
Jason-3 website by NASA JPL's Ocean Surface Topography program
Jason-3 website by NOAA
Jason-3 website by CNES
Jason-3 website by EUMETSAT
Jason-3 website by ESA's eoPortal
About the flight
Jason-3 press kit by SpaceX
NASA satellites orbiting Earth
Satellites of France
Earth observation satellites
Satellites orbiting Earth
Spacecraft launched in 2016
SpaceX payloads contracted by NASA
Physical oceanography
Jason satellite series
CNES | Jason-3 | [
"Physics"
] | 1,402 | [
"Applied and interdisciplinary physics",
"Physical oceanography"
] |
24,876,073 | https://en.wikipedia.org/wiki/Critical%20Manufacturing | Critical Manufacturing is a subsidiary of ASMPT. It was founded in 2009 and is focused on providing automation and manufacturing software for high-tech industries, such as semiconductor, electronics, medical devices and industrial equipment. It has offices in Portugal, China, Germany, Malaysia, Mexico and USA. In 2018, it became a subsidiary of ASM Pacific Technology Limited.
Products
The company flagship product is Critical Manufacturing MES, a next-generation manufacturing operations management system.
Critical Manufacturing MES uses technologies from Microsoft, providing an Internet application user experience.
References
External links
Official Website
Critical Software
Software companies of Portugal
Industrial automation | Critical Manufacturing | [
"Engineering"
] | 122 | [
"Industrial automation",
"Industrial engineering",
"Automation"
] |
24,876,353 | https://en.wikipedia.org/wiki/Neuropeptidergic | Neuropeptidergic means "related to neuropeptides".
A neuropeptidergic agent (or drug) is a chemical which functions to directly modulate the neuropeptide systems in the body or brain. Examples include opioidergics.
See also
Adenosinergic
Cannabinoidergic
Cholinergic
GABAergic
Glutamatergic
Glycinergic
Histaminergic
Monoaminergic
Opioidergic
References
Neurochemistry
Neurotransmitters | Neuropeptidergic | [
"Chemistry",
"Biology"
] | 113 | [
"Biochemistry",
"Neurochemistry",
"Neurotransmitters"
] |
24,876,489 | https://en.wikipedia.org/wiki/Opioidergic | An opioidergic agent (or drug) is a chemical which directly or indirectly modulate the function of opioid receptors. Opioidergics comprise opioids, as well as allosteric modulators and enzyme affecting agents like enkephalinase inhibitors.
Allosteric modulators
BMS-986121: μ-PAM
BMS-986122: μ-PAM
BPRMU191: confers agonistic properties to small-molecule morphinan antagonists
Ignavine
Oxytocin: μ-PAM
δ-PAM (see reference)
Cannabidiol
Tetrahydrocannabinol
Sodium (Na+)
See also
List of opioids
Adenosinergic
Adrenergic
Cannabinoidergic
Cholinergic
Dopaminergic
GABAergic
Glycinergic
Histaminergic
Melatonergic
Monoaminergic
Serotonergic
References
Neurochemistry
Neurotransmitters
Opioids | Opioidergic | [
"Chemistry",
"Biology"
] | 211 | [
"Biochemistry",
"Neurochemistry",
"Neurotransmitters"
] |
24,880,495 | https://en.wikipedia.org/wiki/Interface%20segregation%20principle | In the field of software engineering, the interface segregation principle (ISP) states that no code should be forced to depend on methods it does not use. ISP splits interfaces that are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. Such shrunken interfaces are also called role interfaces. ISP is intended to keep a system decoupled and thus easier to refactor, change, and redeploy. ISP is one of the five SOLID principles of object-oriented design, similar to the High Cohesion Principle of GRASP. Beyond object-oriented design, ISP is also a key principle in the design of distributed systems in general and one of the six IDEALS principles for microservice design.
Importance in object-oriented design
Within object-oriented design, interfaces provide layers of abstraction that simplify code and create a barrier preventing coupling to dependencies. A system may become so coupled at multiple levels that it is no longer possible to make a change in one place without necessitating many additional changes. Using an interface or an abstract class can prevent this side effect.
Origin
The ISP was first used and formulated by Robert C. Martin while consulting for Xerox. Xerox had created a new printer system that could perform a variety of tasks such as stapling and faxing. The software for this system was created from the ground up. As the software grew, making modifications became more and more difficult so that even the smallest change would take a redeployment cycle of an hour, which made development nearly impossible.
The design problem was that a single Job class was used by almost all of the tasks. Whenever a print job or a stapling job needed to be performed, a call was made to the Job class. This resulted in a 'fat' class with multitudes of methods specific to a variety of different clients. Because of this design, a staple job would know about all the methods of the print job, even though there was no use for them.
The solution suggested by Martin utilized what is today called the Interface Segregation Principle. Applied to the Xerox software, an interface layer between the Job class and its clients was added using the Dependency Inversion Principle. Instead of having one large Job class, a Staple Job interface and a Print Job interface were created, to be used by the Staple and Print classes respectively, which in turn call methods of the Job class. Therefore, one interface was created for each job type, all of which were implemented by the Job class.
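The idea can be sketched in a few lines of code. The example below is hypothetical — the class and method names are illustrative, not taken from the Xerox system — but it shows how a client can depend on a narrow role interface while one concrete class still implements every role.

```python
from abc import ABC, abstractmethod

class PrintJob(ABC):
    @abstractmethod
    def print_document(self, document: str) -> None: ...

class StapleJob(ABC):
    @abstractmethod
    def staple(self, document: str) -> None: ...

class Job(PrintJob, StapleJob):
    """The single concrete class may still implement every role interface."""
    def print_document(self, document: str) -> None:
        print(f"printing {document}")

    def staple(self, document: str) -> None:
        print(f"stapling {document}")

def run_staple_task(job: StapleJob, document: str) -> None:
    # This client depends only on StapleJob; it never sees print_document().
    job.staple(document)

run_staple_task(Job(), "report.pdf")
```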
Typical violation
A typical violation of the Interface Segregation Principle is given in Agile Software Development: Principles, Patterns, and Practices in 'ATM Transaction example' and in an article also written by Robert C. Martin specifically about the ISP. This example discusses the User Interface for an ATM, which handles all requests such as a deposit request, or a withdrawal request, and how this interface needs to be segregated into individual and more specific interfaces.
See also
SOLID – the "I" in SOLID stands for Interface segregation principle
References
External links
Principles Of OOD – Description and links to detailed articles on SOLID.
Software design
Programming principles
Object-oriented programming | Interface segregation principle | [
"Engineering"
] | 649 | [
"Design",
"Software design"
] |
35,063,935 | https://en.wikipedia.org/wiki/C23H24N2O4 | {{DISPLAYTITLE:C23H24N2O4}}
The molecular formula C23H24N2O4 may refer to:
25N-NBPh
PD-102,807 | C23H24N2O4 | [
"Chemistry"
] | 45 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
35,066,763 | https://en.wikipedia.org/wiki/Proof%20compression | In proof theory, an area of mathematical logic, proof compression is the problem of algorithmically compressing formal proofs. The developed algorithms can be used to improve the proofs generated by automated theorem proving tools such as SAT solvers, SMT-solvers, first-order theorem provers and proof assistants.
Problem Representation
In propositional logic a resolution proof of a clause from a set of clauses C is a directed acyclic graph (DAG): the input nodes are axiom inferences (without premises) whose conclusions are elements of C, the resolvent nodes are resolution inferences, and the proof has a node with conclusion .
The DAG contains an edge from a node to a node if and only if a premise of is the conclusion of . In this case, is a child of , and is a parent of . A node with no children is a root.
A proof compression algorithm will try to create a new DAG with fewer nodes that represents a valid proof of or, in some cases, a valid proof of a subset of .
A simple example
Let's take a resolution proof for the clause from the set of clauses
Here we can see:
and are input nodes.
The node has a pivot ,
left resolved literal
right resolved literal
conclusion is the clause
premises are the conclusion of nodes and (its parents)
The DAG would be
and are parents of
is a child of and
is a root of the proof
A (resolution) refutation of C is a resolution proof of from C. It is common, given a node , to refer to the clause or 's clause, meaning the conclusion clause of , and to the (sub)proof , meaning the (sub)proof having as its only root.
In some works, an algebraic representation of resolution inferences can be found. The resolvent of and with pivot can be denoted as . When the pivot is uniquely defined or irrelevant, we omit it and write simply . In this way, the set of clauses can be seen as an algebra with a commutative operator, and terms in the corresponding term algebra denote resolution proofs in a notation style that is more compact and more convenient for describing resolution proofs than the usual graph notation.
In our last example the notation of the DAG would be or simply
We can identify .
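The resolution step itself is easy to state in code. The sketch below is generic — it does not reproduce the clauses of the worked example — and models clauses as sets of integer literals, with a negative integer standing for a negated atom.

```python
def resolve(clause_a, clause_b, pivot):
    """Return the resolvent of two clauses on `pivot`, assuming `pivot` occurs
    positively in clause_a and negatively in clause_b."""
    assert pivot in clause_a and -pivot in clause_b
    return frozenset((clause_a - {pivot}) | (clause_b - {-pivot}))

# Resolving {p, q} with {~p, r} on pivot p yields {q, r}.
c1 = frozenset({1, 2})    # p  or q
c2 = frozenset({-1, 3})   # ~p or r
print(sorted(resolve(c1, c2, 1)))   # [2, 3]
```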
Compression algorithms
Algorithms for compression of sequent calculus proofs include cut introduction and cut elimination.
Algorithms for compression of propositional resolution proofs include
RecycleUnits,
RecyclePivots,
RecyclePivotsWithIntersection,
LowerUnits,
LowerUnivalents,
Split,
Reduce&Reconstruct, and Subsumption.
Notes
Proof theory | Proof compression | [
"Mathematics"
] | 542 | [
"Mathematical logic",
"Proof theory"
] |
35,069,405 | https://en.wikipedia.org/wiki/Authoritarian%20leadership%20style | An authoritarian leadership style is described as being as "leaders' behavior that asserts absolute authority and control over subordinates and [that] demands unquestionable obedience from subordinates." Such a leader has full control of the team, leaving low autonomy within the group. The group is expected to complete the tasks under very close supervision, while unlimited authority is self-bestowed by the leader. Subordinates' responses to the orders given are either punished or rewarded. A way that those that have authoritarian leadership behaviors tend to lean more on "...unilateral decision-making through the leader and strive to maintain the distance between the leader and his or her followers."
Background
Authoritarian leaders are commonly referred to as "autocratic" leaders. They sometimes, but not always, provide clear expectations for what needs to be done, when it should be done, and how it should be done. There is also a clear divide between the leader and the followers. Bob Altemeyer conducted research on what he labeled right-wing authoritarianism (RWA), and presented an analysis of the personality-types of both the authoritarian leaders and the authoritarian followers.
Authoritarian leaders make decisions independently with little or no input from others.
They uphold stringent control over their followers by directly regulating rules, methodologies, and actions.
Authoritarian leaders construct gaps and build distance between themselves and their followers with the intention of stressing role-distinctions. This type of leadership dates back to the earliest tribes and even empires. It is often used in present-day when there is little room for error, such as construction jobs or manufacturing jobs.
Authoritarian leadership typically fosters little creativity in decision-making. Lewin also found that it is more difficult to move from an authoritarian style to a democratic style than vice versa. Abuse of this style is usually viewed as controlling, bossy and dictatorial. Authoritarian leadership is best applied to situations where there is little time for group discussion.
Some approaches to leading make a virtue of limiting or eschewing authoritarian traits.
Views of authoritarian leaders
A common belief of many authoritarian leaders is that followers require direct supervision at all times, or else they would not operate effectively. This belief is in accordance with one of Douglas McGregor's philosophical views of humankind, Theory X. This concept proposes that it is a leader's role to coerce and control followers because people have an inherent aversion to work and will abstain from it whenever possible. Theory X also postulates that people must be compelled through force, intimidation, or authority, and controlled, directed, or threatened with punishment in order to get them to accomplish the organizational needs.
In the minds of authoritarian leaders, people who are left to work autonomously will ultimately be unproductive. “Examples of authoritarian communicative behavior include a police officer directing traffic, a teacher ordering a student to do his or her assignment, and a supervisor instructing a subordinate to clean a workstation.” However, studies show that having an authoritarian leader can produce some improvement in performance across fields of work and in daily tasks. The article "How Authoritarian Leadership Affects Employee's Helping Behavior? The Mediating Role of Rumination and Moderating Role of Psychological Ownership" states that this form of leadership can help, but only when followers reciprocate the effort: when they show positive behaviors, are committed, are willing to work, and respect the leadership above them, both leader and followers are positioned to see growth and achievement in the relationship.
Communication patterns
Downward, one-way communication (i.e. leaders to followers, or supervisors to subordinates)
Controls discussion with followers
Dominates interaction
Independently/unilaterally sets policy and procedures
Individually directs the completion of tasks
Does not offer constant feedback
Rewards acquiescent obedient behavior and punishes erroneous actions
Poor listener
Uses conflict for individual gain
Limited communication and control
Can increase perceptions of abusive behavior.
Cultural background affects how authoritarian leadership is viewed.
Ways to properly incorporate authoritarian leadership
Always explain rules: it allows your subordinates to complete the task you want to be done efficiently.
Be consistent: if you are to enforce rules and regulations, make sure to do so regularly so your subordinates take you seriously. This will form a stronger level of trust.
Respect your subordinates: always recognize your subordinates' efforts and achievements.
Educate your subordinates by enforcing rules: do not present them with any surprises. This can lead to problems in the future due to false communication.
Listen to suggestions from your subordinates, even if you do not incorporate them.
Effects of authoritarian leadership communication styles
Increase in productivity when the leader is present
Produces more accurate solutions when the leader is knowledgeable
Is more positively accepted in larger groups
Enhances performance on simple tasks and decreases performance on complex tasks
Increases aggression levels among followers
Increases turnover rates
Successful when there is a time urgency for completion of projects
Improves the future work of those subordinates whose skills are not very applicable or helpful without the demands of another
Downfalls
Long-term use can cause resentment from subordinates.
It has been found by researchers that these types of leaders lack creative problem solving skills
Without proper instruction and understanding from subordinates, confusion may arise
Authoritarian leadership can create considerable emotional strain and stress for workers. When leaders use a lot of pressure and control, it can lead to burnout and possibly lower job satisfaction. Workers may feel stressed when they think their efforts aren't being recognized or are unfairly criticized. Over time, this can affect their motivation and overall well-being.
Authoritarian leadership is connected to higher employee turnover over time. Research suggests that this leadership style can reduce job satisfaction and engagement, making employees feel less connected to the organization and more likely to leave. While it may effectively achieve short-term goals, the lack of collaboration can create a workplace environment where employees feel undervalued or overly controlled. This can contribute to more employees leaving, especially when workers seek roles with more autonomy or supportive leadership styles.
Examples of authoritarian leaders
Engelbert Dollfuss, chancellor of Austria from 1932 to 1934, destroyed the Austrian Republic and established an authoritarian regime based on conservative Roman Catholic and Italian Fascist principles. In May 1932 when he became chancellor, Dollfuss headed a conservative coalition led by the Christian Social Party. When faced with a severe economic crisis caused by the Great Depression, Dollfuss decided against joining Germany in a customs union, a course advocated by many Austrians. Severely criticized by Social Democrats, Pan-German nationalists, and Austrian Nazis, he countered by drifting toward an increasingly authoritarian regime.
The Italian leader Benito Mussolini became Dollfuss' principal foreign ally. Italy guaranteed Austrian independence at Riccione (August 1933), but in return Austria had to abolish all political parties and reform its constitution on the Fascist model. Dollfuss' attacks on Parliament began in March 1933 and culminated that September in the permanent abolition of the legislature and the formation of a corporate state based on his Vaterländische Front (“Fatherland Front”), with which he expected to replace Austria's political parties. In foreign affairs, he steered a course that converted Austria virtually into an Italian satellite state. Hoping therewith to prevent Austria's incorporation into Nazi Germany, he fought his domestic political opponents along fascist-authoritarian lines.
In February 1934 paramilitary formations loyal to the chancellor crushed Austria’s Social Democrats. With a new constitution of May 1934, his regime became completely dictatorial. In June, however, Germany incited the Austrian Nazis to civil war. Dollfuss was assassinated by the Nazis in a raid on the chancellery.
References
Leadership studies
Authoritarianism
Harassment and bullying
Coercion | Authoritarian leadership style | [
"Biology"
] | 1,574 | [
"Harassment and bullying",
"Behavior",
"Aggression"
] |
35,076,618 | https://en.wikipedia.org/wiki/Maltoside | A maltoside is a glycoside with maltose as the glycone (sugar) functional group. Among the most common are alkyl maltosides, which contain hydrophobic alkyl chains as the aglycone. Given their amphiphilic properties, these comprise a class of detergents, where variation in the alkyl chain confers a range of detergent properties including CMC and solubility. Maltosides are most often used for the solubilization and purification of membrane proteins.
History
In 1980 Ferguson-Miller et al. at Michigan State developed n-dodecyl-β-D-maltopyranoside (DDM) as part of a successful effort to purify an active, stable, monodisperse form of cytochrome c oxidase. Maltosides have been used extensively to stabilize membrane proteins for biophysical and structural studies.
Table of detergent properties
References
Disaccharides
Reagents for biochemistry
Non-ionic surfactants | Maltoside | [
"Chemistry",
"Biology"
] | 215 | [
"Biochemistry methods",
"Reagents for biochemistry",
"Biochemistry"
] |
35,079,505 | https://en.wikipedia.org/wiki/Nomenclator%20Botanicus%20Hortensis | Nomenclator Botanicus Hortensis aka Nomencl. Bot. Hort. is an 1840-46 alphabetic index of cultivated plants from the gardens of Europe by the German botanist Gustav Heynhold. It includes synonyms, botanical authors, countries of origin and cultivation. It was published in Dresden and Leipzig by the firm Arnoldischen Buchhandlung with an introduction by Ludwig Reichenbach.
'Nomenclator' is one who bestows names or draws up a classified system of names, and by extension the term has become used for a book setting out such a system. The word 'nomenclature' is derived from 'nomenclator'. Well-known nomenclators have been the 1821 Nomenclator Botanicus of Ernst Gottlieb von Steudel, the 1826 Hortus Britannicus of Robert Sweet and John Claudius Loudon's 1830 Hortus Britannicus. Sweet's Hortus Britannicus is described as a 'Catalogue of Plants cultivated in the Gardens of Great Britain' and 'Arranged in Natural Orders with the addition of the Linnean Classes and Orders to which they belong, Reference to the Books where they are described, their Native Places of Growth, when introduced, Time of Flowering, Duration, and Reference to Figures with numerous synonyms'.
External links
Nomenclator Botanicus Hortensis online
Notes
Botany books
Taxonomy (biology) books | Nomenclator Botanicus Hortensis | [
"Biology"
] | 293 | [
"Taxonomy (biology)",
"Taxonomy (biology) books"
] |
40,319,910 | https://en.wikipedia.org/wiki/Institute%20for%20Collaborative%20Biotechnologies | The Institute for Collaborative Biotechnologies (ICB) is a University Affiliated Research Center (UARC) primarily funded by the United States Army. Headquartered at the University of California, Santa Barbara (UCSB) and in collaboration with MIT, Caltech and industry partners, ICB's interdisciplinary approach to research aims to enhance military technology by transforming biological systems into technological applications.
Founding
The ICB's development as a UARC was proposed in January 2003, and the institute came to fruition by August 22, 2003, when the US Army announced a $50 million grant for military research. Since that time, the ICB has remained intact and expanded to include 60 faculty members and 150 researchers who have completed over 135 research projects.
Research
The ICB's research aim is to model biological mechanisms for use in military materials and tools. Quoting Army Research Office program manager Robert Campbell, "The inspiration for the ICB comes from the fact that biology uses different mechanisms to produce materials and integrated circuits for high-performance sensing, computing and information processing, and actuation than are presently used in human manufacturing." Much research is focused on evaluating biomolecular sensors, bio-inspired materials and energy, biodiscovery tools, bio-inspired network science, and cognitive neuroscience through the disciplines of cellular and molecular biology, materials science, chemical engineering, mechanical engineering, and psychology.
Leadership
Present
Francis J. Doyle III, ICB Director
Scott Grafton, ICB Associate Director
David Gay, ICB Director of Technology
Robert Kokoska, ICB Army Program Manager
Past
Daniel Morse, Founding Director
Affiliates
The ICB is affiliated with the following:
Army Partners
Aviation & Missile Research, Development & Engineering Center (AMRDEC)
Army Research Laboratory (ARL)
Army Research Office (ARO)
Communication-Electronics Research Development & Engineering Center (CERDEC)
Edgewood Chemical Biological Center (ECBC)
Engineer Research & Development Center (ERDC)
Medical Research & Materiel Command (MRMC)
Natick Soldier Research, Development & Engineering Center (NSRDEC)
Tank Automotive Research, Development & Engineering Center (TARDEC)
Training and Doctrine Command (TRADOC ARCIC)
Industry Partners
The Aerospace Corporation
Cynvenio Biosystems, LLC
CytomX, Inc.
General Atomics
Innovative Micro Technologies
Integrated Diagnostics
Lockheed Martin
Raytheon Vision Systems
SAIC
Sirigen
Spectrafluidics
Teledyne Scientific Company
Toyon Research Corporation
United Technologies Research Center
Controversy
In 2008, S.B. Antiwar protested ICB's annual military conference by blocking UCSB's Pardall Tunnel, the main path and bikeway between the campus and city of Isla Vista. Between 200 and 300 students and community members participated and a total of three arrests were made. The conference area was then secured by campus police and the event continued as planned. The ICB has continued to hold conferences at UCSB each year without incident.
References
Biotechnology organizations
University of California, Santa Barbara
2003 establishments in the United States
2003 establishments in California
Organizations established in 2003 | Institute for Collaborative Biotechnologies | [
"Engineering",
"Biology"
] | 611 | [
"Biotechnology organizations"
] |
40,325,028 | https://en.wikipedia.org/wiki/Quantum%20jump%20method | The quantum jump method, also known as the Monte Carlo wave function (MCWF) is a technique in computational physics used for simulating open quantum systems and quantum dissipation. The quantum jump method was developed by Dalibard, Castin and Mølmer at a similar time to the similar method known as Quantum Trajectory Theory developed by Carmichael. Other contemporaneous works on wave-function-based Monte Carlo approaches to open quantum systems include those of Dum, Zoller and Ritsch and Hegerfeldt and Wilser.
Method
The quantum jump method is an approach which is much like the master-equation treatment except that it operates on the wave function rather than using a density matrix approach. The main component of this method is evolving the system's wave function in time with a pseudo-Hamiltonian, where at each time step a quantum jump (discontinuous change) may take place with some probability. The calculated system state as a function of time is known as a quantum trajectory, and the desired density matrix as a function of time may be calculated by averaging over many simulated trajectories. For a Hilbert space of dimension N, the number of wave function components is equal to N while the number of density matrix components is equal to N². Consequently, for certain problems the quantum jump method offers a performance advantage over direct master-equation approaches.
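The following Python sketch (an illustrative toy implementation using NumPy, not code from the cited references; the decay rate, step size, and trajectory count are arbitrary choices) applies the scheme to a single decaying two-level atom with one jump operator. Averaging the excited-state population over the trajectories approximates the exponential decay predicted by the master equation.

```python
import numpy as np

# Decaying two-level atom: |e> = [1, 0], |g> = [0, 1], jump operator C = sqrt(gamma) * sigma_minus.
gamma, dt, n_steps, n_traj = 1.0, 0.001, 5000, 500
C = np.sqrt(gamma) * np.array([[0.0, 0.0], [1.0, 0.0]])  # sigma_minus = |g><e|, scaled
H = np.zeros((2, 2))                                      # no coherent driving in this toy example
H_eff = H - 0.5j * (C.conj().T @ C)                       # non-Hermitian effective Hamiltonian

rng = np.random.default_rng(0)
pop_e = np.zeros(n_steps)                                 # trajectory-averaged excited-state population

for _ in range(n_traj):
    psi = np.array([1.0 + 0j, 0.0 + 0j])                  # start each trajectory in the excited state
    for k in range(n_steps):
        pop_e[k] += abs(psi[0]) ** 2 / n_traj
        # Jump probability during dt, to first order in dt.
        dp = dt * np.real(psi.conj() @ (C.conj().T @ C) @ psi)
        if rng.random() < dp:
            psi = C @ psi                                  # quantum jump ...
            psi /= np.linalg.norm(psi)                     # ... followed by renormalisation
        else:
            psi = psi - 1j * dt * (H_eff @ psi)            # deterministic non-Hermitian step
            psi /= np.linalg.norm(psi)

# pop_e now approximates exp(-gamma * t) on the grid t = dt * np.arange(n_steps).
```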
References
Further reading
External links
mcsolve Quantum jump (Monte Carlo) solver from QuTiP for Python.
QuantumOptics.jl the quantum optics toolbox in Julia.
Quantum Optics Toolbox for Matlab
Quantum mechanics
Computational physics
Monte Carlo methods | Quantum jump method | [
"Physics"
] | 333 | [
"Monte Carlo methods",
"Theoretical physics",
"Quantum mechanics",
"Computational physics",
"Quantum physics stubs"
] |
40,328,133 | https://en.wikipedia.org/wiki/Detyrosination | Detyrosination is a form of posttranslational modification that occurs on alpha-tubulin. It consists of the removal of the C-terminal tyrosine to expose a glutamate at the newly formed C-terminus. Tubulin polymers, called microtubules, that contain detyrosinated alpha-tubulin are usually referred to as Glu-microtubules while unmodified polymers are called Tyr-microtubules.
The detyrosinating activity was first identified in the late 1970s. It is a slow-acting enzyme that uses polymeric tubulin as a substrate. As a result, only stabilized microtubules accumulate this particular modification. Tubulin detyrosination is reversed by tubulin-tyrosine ligase, which acts only on the alpha-tubulin monomer. Since the majority of microtubules are very dynamic, they do not contain much detyrosinated tubulin.
See also
Polyglutamylation
Polyglycylation
Acetylation
References
Aillaud C, Bosc C, Peris L, Bosson A, Heemeryck P, Van Dijk J, Le Friec J, Boulan B, Vossier F, Sanman LE, Syed S, Amara N, Couté Y, Lafanechère L, Denarier E, Delphin C, Pelletier L, Humbert S, Bogyo M, Andrieux A, Rogowski K, Moutin MJ (2017). "Vasohibins/SVBP are tubulin carboxypeptidases (TCPs) that regulate neuron differentiation". Science 358 (6369): 1448–1453. doi:10.1126/science.aao4165. PMID 29146868.
Post-translational modification
Protein structure | Detyrosination | [
"Chemistry"
] | 391 | [
"Gene expression",
"Biochemical reactions",
"Post-translational modification",
"Structural biology",
"Protein structure"
] |
40,328,469 | https://en.wikipedia.org/wiki/Quarter%20hypercubic%20honeycomb | In geometry, the quarter hypercubic honeycomb (or quarter n-cubic honeycomb) is a dimensional infinite series of honeycombs, based on the hypercube honeycomb. It is given a Schläfli symbol q{4,3...3,4} or Coxeter symbol qδ4 representing the regular form with three quarters of the vertices removed and containing the symmetry of Coxeter group for n ≥ 5, with = and for quarter n-cubic honeycombs = .
See also
Hypercubic honeycomb
Alternated hypercubic honeycomb
Simplectic honeycomb
Truncated simplectic honeycomb
Omnitruncated simplectic honeycomb
References
Coxeter, H.S.M. Regular Polytopes, (3rd edition, 1973), Dover edition,
pp. 122–123, 1973. (The lattice of hypercubes γn form the cubic honeycombs, δn+1)
pp. 154–156: Partial truncation or alternation, represented by q prefix
p. 296, Table II: Regular honeycombs, δn+1
Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995,
(Paper 22) H.S.M. Coxeter, Regular and Semi Regular Polytopes I, [Math. Zeit. 46 (1940) 380-407, MR 2,10] (1.9 Uniform space-fillings)
(Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318
Honeycombs (geometry)
Uniform polytopes | Quarter hypercubic honeycomb | [
"Physics",
"Chemistry",
"Materials_science"
] | 377 | [
"Uniform polytopes",
"Honeycombs (geometry)",
"Tessellation",
"Crystallography",
"Symmetry"
] |
29,517,643 | https://en.wikipedia.org/wiki/Induction%20period | An induction period in chemical kinetics is an initial slow stage of a chemical reaction; after the induction period, the reaction accelerates. Ignoring induction periods can lead to runaway reactions.
In some catalytic reactions, a pre-catalyst needs to undergo a transformation to form the active catalyst, before the catalyst can take effect. Time is required for this transformation, hence the induction period. For example, with Wilkinson's catalyst, one triphenylphosphine ligand must dissociate to give the coordinatively unsaturated 14-electron species which can participate in the catalytic cycle:
RhCl(PPh3)3 ⇌ RhCl(PPh3)2 + PPh3
Similarly, for an autocatalytic reaction, where one of the reaction products catalyzes the reaction itself, the rate of reaction is low initially until sufficient products have formed to catalyze the reaction.
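This behaviour can be illustrated with a simple numerical model (an illustrative sketch only; the rate constant, concentrations, and step size below are arbitrary and not taken from a specific system), in which the product X catalyzes its own formation according to dx/dt = k·x·(a − x):

```python
import numpy as np

# Autocatalytic model A + X -> 2X with rate law dx/dt = k * x * (a - x).
k, a, x0 = 1.0, 1.0, 1e-4     # rate constant, total concentration, tiny initial seed of product
dt, n = 0.01, 3000            # time step and number of steps

x = np.empty(n)
x[0] = x0
for i in range(1, n):
    x[i] = x[i - 1] + dt * k * x[i - 1] * (a - x[i - 1])   # explicit Euler step

t = dt * np.arange(n)
# The reaction is slow while x is small (the induction period), accelerates around the
# inflection point x = a/2 (here near t ≈ ln(a/x0)/(k*a) ≈ 9.2), and then levels off
# as the reactant is consumed.
```

The product concentration stays low for a long initial period and then rises sharply once enough autocatalyst has accumulated, reproducing the induction period followed by acceleration described above.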
Reactions generally accelerate when heat is applied. Where a reaction is exothermic, the rate of the reaction may initially be low. As the reaction proceeds, heat is generated, and the rate of reaction increases. This type of reaction often exhibits an induction period as well.
The reactions to form Grignard reagents are notorious for having induction periods. This is usually due to two reasons: firstly, the thin film of oxide on the magnesium reagent must be removed before the bulk magnesium can react. Secondly, Grignard reactions, while exothermic, are typically conducted at low temperature for better selectivity. For these two reasons, Grignard reactions can often have a long induction period, followed by a thermal runaway, even causing the reaction solvent to boil off.
References
Catalysis
Chemical kinetics | Induction period | [
"Chemistry"
] | 331 | [
"Catalysis",
"Chemical reaction engineering",
"Chemical kinetics"
] |
29,524,151 | https://en.wikipedia.org/wiki/Deuterium-depleted%20water | Deuterium-depleted water (DDW) is water which has a lower concentration of deuterium than occurs naturally at sea level on Earth.
DDW is sometimes known as light water or protium water, although "light water" has long referred to ordinary water, specifically in nuclear reactors.
Chemistry
Deuterium-depleted water has less deuterium (²H) than occurs in nature at sea level. Deuterium is a naturally-occurring, stable (non-radioactive) isotope of hydrogen with a nucleus consisting of one proton and one neutron. A nucleus of normal hydrogen (protium, ¹H) consists of one proton only, and no neutron. Deuterium thus has about twice the atomic mass of ¹H. Heavy water molecules contain two deuterium atoms instead of two ¹H atoms. The hydrogen in normal water is about 99.97% ¹H (by weight).
Production of heavy water involves isolating and removing deuterium-containing isotopologues within natural water. The by-product of this process is DDW.
Due to the heterogeneity of hydrological conditions, the isotopic composition of natural water varies around the Earth. Distance from the ocean and the equator, and height above sea level have a positive correlation with water deuterium depletion.
In Vienna Standard Mean Ocean Water (VSMOW) that defines the isotopic composition of seawater, deuterium occurs at a concentration of 155.76 ppm. For the SLAP (Standard Light Antarctic Precipitation) standard that determines the isotopic composition of natural water from the Antarctic, the concentration of deuterium is 89.02 ppm.
Snow water, especially from glacial mountain meltwater, is significantly lighter than ocean water. Glacier analysis at 22,000-24,000 on Mount Everest has shown levels as low as 43 ppm. The weight quantities of isotopologues in natural water are calculated based on data collected using molecular spectroscopy:
According to the table above, the weight concentration of heavy isotopologues in natural water can reach 2.97 g/kg, which is mostly due to H2¹⁸O, i.e. water with light hydrogen and heavy oxygen. Also, there are ~300 mg of deuterium-containing isotopologues per liter of water. This is a significant amount, comparable, for example, with the content of mineral salts.
Biological properties of the deuterium content in water
Gilbert N. Lewis was the first to discover that heavy water inhibits (retards) seed growth (1933). His experiments with tobacco seeds showed that cultivation of cells on heavy water dramatically accelerates the aging process and leads to lethal results.
Production
Deuterium-depleted water can be produced in laboratories and factories. Various technologies are used for its production, such as electrolysis, distillation (low-temperature vacuum rectification), desalination from seawater, Girdler sulfide process, and catalytic exchange.
Health claims and criticism
Harriet Hall investigated health claims being attributed to drinking DDW, which has been sold for as much as $20 per liter. In a July 2020 article published at Skeptical Inquirer online, she reported that the overwhelming majority of DDW studies, despite showing positive outcomes, did not involve humans, and the few that did, did not verify any human efficacy.
See also
Kinetic isotope effect
Light water (disambiguation)
Properties of water
References
Water
Water chemistry
Medical physics
Alternative cancer treatments
Isotopes
Deuterium | Deuterium-depleted water | [
"Physics",
"Chemistry",
"Environmental_science"
] | 732 | [
"Hydrology",
"Applied and interdisciplinary physics",
"Nuclear physics",
"Isotopes",
"nan",
"Medical physics",
"Water"
] |
29,525,748 | https://en.wikipedia.org/wiki/Stormwater%20harvesting | Stormwater harvesting or stormwater reuse is the collection, accumulation, treatment or purification, and storage of stormwater for its eventual reuse. While rainwater harvesting collects precipitation primarily from rooftops, stormwater harvesting deals with collection of runoff from creeks, gullies, ephemeral streams and underground conveyance. It can also include catchment areas from developed surfaces, such as roads or parking lots, or other urban environments such as parks, gardens and playing fields.
Water that comes into contact with impervious surfaces, or saturated surfaces incapable of absorbing more water, is termed surface runoff. As the surface runoff travels greater distance over impervious surfaces it often becomes contaminated and collects an increasing amount of pollutants. A main challenge of stormwater harvesting is the removal of pollutants in order to make this water available for reuse.
Stormwater harvesting projects often have multiple objectives, such as reducing contaminated runoff to sensitive waters, promoting groundwater recharge, and non-potable applications such as toilet flushing and irrigation. Stormwater harvesting is also practiced in areas of the United States as a way to address rising water demands as population rises. Internationally, Australia is notable in its active pursuit of stormwater harvesting.
Systems
Ground catchments systems channel water from a prepared catchment area into storage. These systems are often considered in areas where rainwater is scarce and other sources of water are not available. If properly designed, ground catchment systems can collect large quantities of rainwater. In arid ranch land, a catchwater or cattle tank can be constructed across shallow ephemeral washes to impound and collect what little stormwater does generate there. This untreated water is easily accessed and utilized by livestock. More intricate collection and processing systems are necessary for stormwater harvest to be reused for human uses.
Stormwater capture
Five Core Steps: End Use, Collection, Treatment, Storage, and Distribution
End Use
Water resources become more scarce as the human population grows. Populations need to create systems and methods to minimize water consumption at all levels, while simultaneously engineering new methods of water reuse. For non-potable water purposes with lower water quality needs, people can use stormwater for toilet flushing, gardening, fire fighting, irrigation, etc. For potable water use of higher water quality, stormwater needs to be highly treated before final use. The latter has rarely been used around the world. Some stormwater collection systems aim to simply reduce the amount of stormwater runoff that flows to a nearby waterway. The benefits of these systems include reducing pollution in streams, lakes, and nearshore coastal environments, as well as promoting groundwater recharge. The intended end use of a system will determine the level of treatment and processing of collected stormwater.
Collections
Stormwater collection is a process of directing water into storage from stormwater gathering, such as urban runoff. Generally, there are two types, online storages and offline storages. Online storages are a conventional way of acquiring stormwater directly from waterways or drains. For instance, the urban drainage system of channels and pipes conduct stormwater into storage facilities, often with treatment at or just prior to storage. One drawback of this collection design is the required maintenance that systems may require for structural integrity to prevent conduit failure resulting in stormwater leakage. Water Sensitive Urban Design, or WSUD, is one comprehensive planning and design process that incorporates online stormwater storage into urban development models. Offline storages require additional facilities to conduct water from waterways indirectly, and can serve as storage for stormwater prior to treatment. For instance, weirs divert flows into stormwater containment and contribute to a large part of stormwater catchment for a city, where it is then stored for future treatment and distribution. Stormwater collection is widely practiced for purposes of urban runoff and flood mitigation as well.
Treatment
Stormwater treatment is the greatest challenge for stormwater harvesting. Water treatment processes depend on the intended end use and the catchment equipment, which determines the level of pollutants to be filtered and removed. For instance, construction uses may only require non-potable water where the water processing includes only filtration and disinfection. However, for potable uses of higher water quality, the treatment process requires screening, coagulation, filtration, carbon adsorption, and disinfection.
Storage
There are three factors to consider in terms of storage: function, location, and capacity. The planner is responsible for determining the end use of the stored stormwater, such as fire fighting, industrial water supply, farming and irrigation, recreation, flood mitigation, groundwater recharge, etc. Regarding the location of a system and its storage, a water tank in proximity to the water's end use may be the best design. If the collection system is intended to slow runoff and/or recharge an aquifer, an on-site, below-ground infiltration system may be considered. Choices between online and offline storages can affect the surrounding natural aquatic systems and yield different maintenance costs and flood mitigation effectiveness. The capacity of a storage system will be determined by the type of end use in a particular climate or period of time.
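As a simplified, purely illustrative sizing sketch (the runoff coefficient, rainfall depth, catchment area, and demand figures below are assumed values; real designs use far more detailed hydrological and water-balance models), a first-pass estimate of harvestable volume can be compared against the demand of the intended end use:

```python
# First-pass storage sizing: volume = runoff coefficient * rainfall depth * catchment area.
runoff_coefficient = 0.85        # assumed value for a mostly impervious urban catchment
rainfall_m = 0.030               # assumed design storm depth: 30 mm
catchment_area_m2 = 12_000       # assumed contributing area: 1.2 hectares

harvestable_m3 = runoff_coefficient * rainfall_m * catchment_area_m2   # ~306 m^3 per event

irrigation_demand_m3_per_week = 150   # assumed non-potable demand for park irrigation
weeks_of_supply = harvestable_m3 / irrigation_demand_m3_per_week
print(f"Harvestable volume: {harvestable_m3:.0f} m^3 (~{weeks_of_supply:.1f} weeks of irrigation demand)")
```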
Distribution
Generally, there are two types of stormwater distribution systems. The first is open space irrigation systems. This application uses treated stormwater to irrigate open spaces such as parks, municipal green spaces, golf courses, etc., and can be implemented at a hyper-local scale (ie catchment and reuse occurs at the same park). Another system is a non-potable distribution system which distributes treated stormwater to be used for things like toilet flushing, fire fighting, and some industrial uses. This system may require additional infrastructure such as a third-pipe network for distribution.
Concerns
Major concerns for stormwater harvesting projects include cost effectiveness as well as the quality, quantity, and reliability of the reclamation, along with existing water management infrastructure and soil characteristics. Some projects have estimated stormwater harvesting to be twice as expensive per unit (when including operating costs) versus other potable water alternatives. New construction of third-pipe networks in urban settings can be prohibitively expensive; therefore the ideal project will produce recycled stormwater of potable quality in order to take advantage of existing distribution infrastructure. Attaining quality as well as useful quantities of water from stormwater harvesting presents challenges of filtration efficacy as well as source reliability and predictability. However, other valuable (and hard-to-calculate) benefits include reducing soil erosion by slowing flow rates and reducing demands on local aquifers, as well as reduction of pollution into local waterways.
See also
Rainwater Hog
Stormwater detention vault
References
Stormwater management
Water supply
Water conservation
Hydrology and urban planning
Appropriate technology | Stormwater harvesting | [
"Chemistry",
"Engineering",
"Environmental_science"
] | 1,364 | [
"Hydrology",
"Water treatment",
"Stormwater management",
"Water pollution",
"Hydrology and urban planning",
"Environmental engineering",
"Water supply"
] |
46,380,626 | https://en.wikipedia.org/wiki/NGC%20134 | NGC 134 is a barred spiral galaxy that resembles the Milky Way with its spiral arms loosely wrapped around a bright, bar-shaped central region. Its loosely bound spiral arms categorize it as Hubble-type Sbc. It is 60 million light years away, and part of the Sculptor constellation.
The VLT image of the galaxy (shown right) reveals the following. A prominent feature of NGC 134 is its warped disc, i.e., when viewed sideways it does not appear flat. A trail of gas is stripped from the top edge of the disc. Together, these features suggest that it interacted with another galaxy, but that remains unproven. The galaxy has an abundance of ionized hydrogen regions along its spiral arms where stars are forming. These regions appear red in the picture. It also has many dark lanes of dust that obscure part of its starlight.
The discovery of NGC 134 is often attributed to Sir John Herschel at the Cape of Good Hope, but he did note that it might have been the 590th object discovered by James Dunlop in his 1828 publication, six years prior to Herschel's own observations. O'Meara has suggested NGC 134 might be named the Giant Squid Galaxy.
One supernova has been observed in NGC 134: SN 2009gj (type IIb, mag. 15.9) was discovered in 2009 by the amateur astronomer Stuart Parker in New Zealand.
See also
List of NGC objects (1–1000)
References
External links
Barred spiral galaxies
Interacting galaxies
0134
001851
Sculptor (constellation)
Discoveries by James Dunlop | NGC 134 | [
"Astronomy"
] | 327 | [
"Constellations",
"Sculptor (constellation)"
] |
46,380,712 | https://en.wikipedia.org/wiki/Ramaria%20gracilis | Ramaria gracilis is a species of coral fungus in the family Gomphaceae.
Taxonomy
The species was originally described in Christiaan Hendrik Persoon's 1797 Commentatio de Fungis Clavaeformibus as Clavaria gracilis. It was subsequently reclassified by Kurt Polycarp Joachim Sprengel as Merisma gracile in 1826, by William Nylander as Ramalina gracilis in 1860, by Petter Adolf Karsten as Clavariella gracilis in 1881. It was then described as Ramaria gracilis in Lucien Quélet's 1888 Flore mycologique de la France et des pays limitrophes, and this name was sanctioned by Elias Magnus Fries. The subsequently described Clavaria fragrantissima (G.F. Atk., 1908) is now considered a synonym. Within Ramaria, R. gracilis is a part of the subgenus Lentoramaria.
Description
Ramaria gracilis fruit bodies (basidiocarps), which are made up of a dense cluster of branches, measure up to in height and in width. The individual branches, which have fairly thin bases, are typically forked and sometimes entangled with one another. In colour, the basidiocarps vary from a pale brown to white to pink-beige. The smell of anise can be used to distinguish the species from the otherwise similar Ramariopsis kunzei and Clavulina cristata.
Ramaria gracilis produces spores which measure from 5 to 7 by 3 to 4.5 micrometres (μm). The spores are elliptic with small warts which can be thin enough to look like spines. They vary in colour from yellow to brown. The cylindrical to club-shaped basidia measure from 25 to 45 by 5 to 7 μm. The hyphae are from 2 to 10 μm thick.
Distribution and habitat
Ramaria gracilis is found in European coniferous woodland, where it grows on leaf litter. It has an uneven distribution, and is very rare. Basidiocarps are most often encountered between August and December. R. gracilis has been reported in Australia, but a 2014 study suggests that such reports were likely misidentifications of R. filicicola.
References
Gomphaceae
Fungi of Europe
Fungi described in 1797
Taxa named by Christiaan Hendrik Persoon
Fungus species | Ramaria gracilis | [
"Biology"
] | 505 | [
"Fungi",
"Fungus species"
] |
46,381,111 | https://en.wikipedia.org/wiki/NASA%20research | Since its establishment in 1958, NASA has conducted research on a range of topics. Because of its unique structure, work happens at various field centers and different research areas are concentrated in those centers. Depending on the technology, hardware and expertise needed, research may be conducted across a range of centers.
Aeronautics
Medicine in space
A variety of large-scale medical studies are being conducted in space by the National Space Biomedical Research Institute (NSBRI). Prominent among these is the Advanced Diagnostic Ultrasound in Microgravity Study, in which astronauts (including former ISS Commanders Leroy Chiao and Gennady Padalka) perform ultrasound scans under the guidance of remote experts to diagnose and potentially treat hundreds of medical conditions in space. Usually there is no physician on board the International Space Station, and diagnosis of medical conditions is challenging. Astronauts are susceptible to a variety of health risks including decompression sickness, barotrauma, immunodeficiencies, loss of bone and muscle, orthostatic intolerance due to volume loss, sleep disturbances, and radiation injury. Ultrasound offers a unique opportunity to monitor these conditions in space. This study's techniques are now being applied to cover professional and Olympic sports injuries as well as ultrasound performed by non-expert operators in populations such as medical and high school students. It is anticipated that remote guided ultrasound will have application on Earth in emergency and rural care situations, where access to a trained physician is often rare.
Salt evaporation and energy management
In one of the nation's largest restoration projects, NASA technology helps state and federal government reclaim of salt evaporation ponds in South San Francisco Bay. Satellite sensors are used by scientists to study the effect of salt evaporation on local ecology.
NASA has started Energy Efficiency and Water Conservation Program as an agency-wide program directed to prevent pollution and reduce energy and water utilization. It helps to ensure that NASA meets its federal stewardship responsibilities for the environment.
Earth science
Main page: NASA Earth Science
Understanding of natural and human-induced changes on the global environment (such as global warming) is the main objective of NASA's Earth science. NASA currently has more than a dozen Earth science spacecraft/instruments in orbit studying all aspects of the Earth system (oceans, land, atmosphere, biosphere, cryosphere), with several more planned for launch in the next few years. The earth science research program was created and funded in the 1980s under the administrations of Ronald Reagan and George H. W. Bush.
NASA is working in cooperation with the National Renewable Energy Laboratory (NREL). The goal is to produce worldwide solar resource maps with great local detail. NASA was also one of the main participants in the evaluation of innovative technologies for the cleanup of the sources of dense non-aqueous phase liquids (DNAPLs). On April 6, 1999, the agency signed a Memorandum of Agreement (MOA) along with the United States Environmental Protection Agency, DOE, and USAF authorizing all the above organizations to conduct necessary tests at the John F. Kennedy Space Center. The main purpose was to evaluate two innovative in-situ remediation technologies, thermal removal and oxidation destruction of DNAPLs. NASA also formed a partnership with the Military Services and the Defense Contract Management Agency named the "Joint Group on Pollution Prevention". The group is working on the reduction or elimination of hazardous materials or processes.
On May 8, 2003, Environmental Protection Agency recognized NASA as the first federal agency to directly use landfill gas to produce energy at one of its facilities—the Goddard Space Flight Center, Greenbelt, Maryland.
Ozone depletion
In 1975, NASA was directed by legislation to research and monitor the upper atmosphere. This led to Upper Atmosphere Research Program and later the Earth Observing System (EOS) satellites in the 1990s to monitor ozone depletion. The first comprehensive worldwide measurements were obtained in 1978 with the Nimbus 7 satellite and NASA scientists at the Goddard Institute for Space Studies.
Climate study
Within the Earth science program, NASA researches and publishes on climate issues. Its statements concur with the interpretation that the global climate is heating. Bob Walker, who has advised president-elect Donald Trump on space issues, has advocated that NASA shut down its climate study operations. The Washington Post reported that NASA scientists copied data on climate change held on U.S. government computers, out of a fear that a Trump administration would end access to data on climate change.
References
NASA
Government research
Environmental research
Space medicine
Astronomy projects
Research in the United States
Aeronautics
Space research
Human spaceflight programs | NASA research | [
"Astronomy",
"Engineering",
"Environmental_science"
] | 913 | [
"Astronomy projects",
"Space programs",
"Human spaceflight programs",
"Environmental research"
] |
46,381,123 | https://en.wikipedia.org/wiki/Lightweighting | Lightweighting is a concept in the auto industry about building cars and trucks that are less heavy as a way to achieve better fuel efficiency, battery range, acceleration, braking and handling. In addition, lighter vehicles can tow and haul larger loads because the engine is not carrying unnecessary weight. Excessive vehicle weight is also a contributing factor to particulate emissions from tyre and brake wear.
Carmakers make body structure parts from aluminium sheet, aluminium extrusions, press-hardening steel, and carbon fibers; windshields from plastic; and bumpers out of aluminum foam, as ways to lessen vehicle load. Replacing car parts with lighter materials does not lessen overall safety for drivers, according to one view, since many grades of aluminium and plastics have a high strength-to-weight ratio, and aluminum has high energy absorption properties for its weight.
The search to replace car parts with lighter ones is not limited to any one type of part; according to a spokesman for Ford Motor Company, engineers strive for lightweighting "anywhere we can." Using lightweight materials such as plastics, high strength steels and aluminium can mean less strain on the engine and better gas mileage as well as improved handling. One material sometimes used to reduce weight for structures that can accept the cost premium is carbon fiber. The auto industry has used the term for many years, as the effort to keep making cars lighter is ongoing.
Another common material used for lightweighting is aluminum. Incorporating aluminum has grown continuously, not only to meet CAFE standards but also to improve automotive performance. A lightweighting magazine finds: "Even though aluminum is light, it does not sacrifice strength. Aluminum body structure is equal in strength to steel and can absorb twice as much crash-induced energy." The use of aluminium for lightweighting can be limited for the higher-strength grades by their low formability, and in response to this forming challenge new techniques such as roll forming and hot forming (Hot Form Quench) have been introduced in recent years.
Many other materials are used to meet lightweighting goals. Cost of lightweighting, and increasingly sustainability of materials, is becoming an issue in solution selection - with the viable cost increase of a part per kilogram saved being between $5 and $15, depending on the price point and performance needs of the vehicle.
References
Automotive technologies
Materials science
Environmental engineering | Lightweighting | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 467 | [
"Applied and interdisciplinary physics",
"Chemical engineering",
"Materials science",
"Civil engineering",
"nan",
"Environmental engineering"
] |
26,276,910 | https://en.wikipedia.org/wiki/HD%20196761 | HD 196761 is the Henry Draper Catalogue designation for a G-type main-sequence star in the constellation Capricornus. With an apparent magnitude of 6.37 it is near the limit of what can be seen with the naked eye, but according to the Bortle Scale it may be possible to view it at night from rural skies. Based upon parallax measurements by the Hipparcos spacecraft, it is located about 47 light years from the Solar System.
It has a stellar classification of G8V with about 88% of the radius of the Sun and 81% of the Sun's mass. Compared to the Sun, this star has about half the proportion of elements other than hydrogen and helium. The projected rotational velocity of the star's equator is a relatively leisurely 3.50 km/s. This star has been examined for an infrared excess that could indicate the presence of a circumstellar disk of dust, but as of 2015 none has been detected.
The space velocity components of this star are U = −59, V = 20 and W = 4 km/s. It is presently following an orbit through the Milky Way that has an eccentricity of 0.18, bringing it as close as and as distant as from the Galactic Center. The inclination of this orbit will carry HD 196761 no more than from the plane of the galactic disk.
References
Capricornus
G-type main-sequence stars
HD, 196761
196761
7898
0796
101997
Durchmusterung objects | HD 196761 | [
"Astronomy"
] | 319 | [
"Capricornus",
"Constellations"
] |
44,979,247 | https://en.wikipedia.org/wiki/High-entropy%20alloy | High-entropy alloys (HEAs) are alloys that are formed by mixing equal or relatively large proportions of (usually) five or more elements. Prior to the synthesis of these substances, typical metal alloys comprised one or two major components with smaller amounts of other elements. For example, additional elements can be added to iron to improve its properties, thereby creating an iron-based alloy, but typically in fairly low proportions, such as the proportions of carbon, manganese, and others in various steels. Hence, high-entropy alloys are a novel class of materials. The term "high-entropy alloys" was coined by Taiwanese scientist Jien-Wei Yeh because the entropy increase of mixing is substantially higher when there is a larger number of elements in the mix, and their proportions are more nearly equal. Some alternative names, such as multi-component alloys, compositionally complex alloys and multi-principal-element alloys are also suggested by other researchers.
These alloys are currently the focus of significant attention in materials science and engineering because they have potentially desirable properties.
Furthermore, research indicates that some HEAs have considerably better strength-to-weight ratios, with a higher degree of fracture resistance, tensile strength, and corrosion and oxidation resistance than conventional alloys. Although HEAs have been studied since the 1980s, research substantially accelerated in the 2010s.
Development
Although HEAs were considered from a theoretical standpoint as early as 1981 and 1996, and throughout the 1980s, in 1995 Taiwanese scientist Jien-Wei Yeh came up with his idea for ways of actually creating high-entropy alloys, while driving through the Hsinchu, Taiwan, countryside. Soon after, he decided to begin creating these special alloys in his lab, which was in the only region researching these alloys for over a decade. Most countries in Europe, the United States, and other parts of the world lagged behind in the development of HEAs. Significant research interest from other countries did not develop until after 2004, when Yeh and his team of scientists built the world's first high-entropy alloys to withstand extremely high temperatures and pressures. Potential applications include use in state-of-the-art race cars, spacecraft, submarines, nuclear reactors, jet aircraft, nuclear weapons, long-range hypersonic missiles, and so on.
A few months later, after the publication of Yeh's paper, another independent paper on high-entropy alloys was published by a team from the United Kingdom composed of Brian Cantor, I. T. H. Chang, P. Knight, and A. J. B. Vincent. Yeh was also the first to coin the term "high-entropy alloy" when he attributed the high configurational entropy as the mechanism stabilizing the solid solution phase. Cantor did the first work in the field in the late 1970s and early 1980s, though he did not publish until 2004. Unaware of Yeh's work, he did not describe his new materials as "high-entropy" alloys, preferring the term "multicomponent alloys". The base alloy he developed, equiatomic CrMnFeCoNi, has been the subject of considerable work in the field, and is known as the "Cantor alloy", with similar derivatives known as Cantor alloys. It was one of the first HEAs to be reported to form a single-phase FCC (face-centred cubic crystal structure) solid solution.
Before the classification of high-entropy alloys and multi-component systems as a separate class of materials, nuclear scientists had already studied a system that can now be classified as a high-entropy alloy: within nuclear fuels Mo-Pd-Rh-Ru-Tc particles form at grain boundaries and at fission gas bubbles. Understanding the behavior of these "five-metal particles" was of specific interest to the medical industry because Tc-99m is an important medical imaging isotope.
Definition
There is no universally agreed-upon definition of a HEA. HEAs were originally defined as alloys containing at least 5 elements with concentrations between 5 and 35 atomic percent. Later research, however, suggested that this definition could be expanded. Otto et al. suggested that only alloys that form a solid solution with no intermetallic phases should be considered true high-entropy alloys, because the formation of ordered phases decreases the entropy of the system. Some authors have described four-component alloys as high-entropy alloys, while others have suggested that alloys meeting the other requirements of HEAs, but with only 2–4 elements or a mixing entropy between R and 1.5R, should be considered "medium-entropy" alloys.
The four core effects of HEAs
Due to their multi-component composition, HEAs exhibit different basic effects than other traditional alloys that are based only on one or two elements. Those different effects are called "the four core effects of HEAs" and are behind a lot of the particular microstructure and properties of HEAs. The four core effects are high entropy, severe lattice distortion, sluggish diffusion, and cocktail effects.
High entropy effect
The high entropy effect is the most important effect because it can enhance the formation of solid solutions and makes the microstructure much simpler than expected. Prior knowledge expected multi-component alloys to have many different interactions among elements and thus form many different kinds of binary, ternary, and quaternary compounds and/or segregated phases. Thus, such alloys would possess complicated structures, brittle by nature. This expectation in fact neglects the effect of high entropy. Indeed, according to the second law of thermodynamics, the state having the lowest mixing Gibbs free energy among all possible states would be the equilibrium state. Elemental phases based on one major element have a small enthalpy of mixing (ΔHmix) and a small entropy of mixing (ΔSmix), and compound phases have a large (negative) ΔHmix but small ΔSmix; on the other hand, solid-solution phases containing multiple elements have a medium ΔHmix and high ΔSmix. As a result, solid-solution phases become highly competitive for the equilibrium state and more stable, especially at high temperatures.
Severe lattice distortion effect
Because solid solution phases with multi-principal elements are usually found in HEAs, the conventional crystal structure concept is thus extended from a one or two element basis to a multi-element basis. Every atom is surrounded by different kinds of atoms and thus suffers lattice strain and stress mainly due to atomic size difference. Besides the atomic size difference, both different bonding energy and crystal structure tendency among constituent elements are also believed to cause even higher lattice distortion because non-symmetrical bindings and electronic structure exist between an atom and its first neighbours. This distortion is believed to be the source of some of the mechanical, thermal, electrical, optical, and chemical behaviour of HEAs. Thus, overall lattice distortion would be more severe than that in traditional alloys in which most matrix atoms (or solvent atoms) have the same kind of atoms as their surroundings.
Sluggish diffusion effect
As explained in the last section, an HEA mainly contains a random solid solution and/or an ordered solid solution. Their matrices could be regarded as whole-solute matrices. In HEAs, those whole-solute matrices' diffusion vacancies are surrounded by different element atoms, and thus have a specific lattice potential energy (LPE). This large fluctuation of LPE between lattice sites leads to low-LPE sites, serving as traps and hindering atomic diffusion. This leads to the sluggish diffusion effect.
Cocktail effect
The cocktail effect is used to emphasise the enhancement of the alloy's properties by at least five major elements. Because HEAs might have one or more phases, the whole properties are from the overall contribution of the constituent phases. Besides, each phase is a solid solution and can be viewed as a composite with properties coming not only from the basic properties of the constituent, but by the mixture rule also from the interactions among all the constituents and from severe lattice distortion. The cocktail effect takes into account the effect from the atomic-scale multicomponent phases and from the multiple composite phases at the micro scale.
Alloy design
In conventional alloy design, one primary element such as iron, copper, or aluminum is chosen for its properties. Then, small amounts of additional elements are added to improve or add properties. Even among binary alloy systems, there are few common cases of both elements being used in nearly-equal proportions such as Pb-Sn solders. Therefore, much is known from experimental results about phases near the edges of binary phase diagrams and the corners of ternary phase diagrams and much less is known about phases near the centers. In higher-order (4+ components) systems that cannot be easily represented on a two-dimensional phase diagram, virtually nothing is known.
Early research of HEA was focussed on forming single-phased solid solution, which could maximize the major features of high entropy alloy: high entropy, sluggish diffusion, severe lattice distortion, and cocktail effects. It has been pointed out that most successful materials need some secondary phase to strengthen the material, and that any HEA used in application will have a multiphase microstructure. However, it is still important to form single-phased material since a single-phased sample is essential for understanding the underlying mechanism of HEAs and testing specific microstructures to find structures producing special properties.
Phase formation
Gibbs' phase rule, F = C - P + 2, can be used to determine an upper bound on the number of phases that will form in an equilibrium system. In his 2004 paper, Cantor created a 20-component alloy containing 5% of Mn, Cr, Fe, Co, Ni, Cu, Ag, W, Mo, Nb, Al, Cd, Sn, Pb, Bi, Zn, Ge, Si, Sb, and Mg. At constant pressure, the phase rule would allow for up to 21 phases at equilibrium, but far fewer actually formed. The predominant phase was a face-centered cubic solid-solution phase, containing mainly Cr, Mn, Fe, Co, and Ni. From that result, the CrMnFeCoNi alloy, which forms only a solid-solution phase, was developed.
The Hume-Rothery rules have historically been applied to determine whether a mixture will form a solid solution. Research into high-entropy alloys has found that in multi-component systems, these rules tend to be relaxed slightly. In particular, the rule that solvent and solute elements must have the same crystal structure does not seem to apply, as Cr, Mn, Fe, Co, and Ni have three different crystal structures as pure elements (and when the elements are present in equal concentrations, there can be no meaningful distinction between "solvent" and "solute" elements).
Thermodynamic mechanisms
Phase formation of HEA is determined by thermodynamics and geometry. When phase formation is controlled by thermodynamics and kinetics are ignored, the Gibbs free energy of mixing is defined as:
ΔGmix = ΔHmix - TΔSmix
where ΔHmix is defined as the enthalpy of mixing, T is the temperature, and ΔSmix is the entropy of mixing, respectively. ΔHmix and TΔSmix continuously compete to determine the phase of the HEA material. Other important factors include the atomic size of each element within the HEA, where the Hume-Rothery rules and Inoue's three empirical rules for bulk metallic glass play a role.
Disordered solids form when the atomic size difference is small and ΔHmix is not negative enough. This is because every atom is about the same size and can easily substitute for another, and ΔHmix is not low enough to form a compound. More-ordered HEAs form as the size difference between the elements gets larger and ΔHmix gets more negative. When the size difference of each individual element becomes too large, bulk metallic glasses form instead of HEAs. High temperature and high ΔSmix also promote the formation of HEA because they significantly lower ΔGmix, making the HEA easier to form because it is more stable than other phases such as intermetallics.
The multi-component alloys that Yeh developed also consisted mostly or entirely of solid-solution phases, contrary to what had been expected from earlier work in multi-component systems, primarily in the field of metallic glasses. Yeh attributed this result to the high configurational, or mixing, entropy of a random solid solution containing numerous elements. The mixing entropy for a random ideal solid solution can be calculated by:

ΔSmix = −R Σ ci ln ci (summed over the N components)

where R is the ideal gas constant, N is the number of components, and ci is the atomic fraction of component i. From this it can be seen that alloys in which the components are present in equal proportions will have the highest entropy, and adding additional elements will increase the entropy. A five-component, equiatomic alloy will have a mixing entropy of 1.61R.
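A minimal Python sketch of this formula (illustrative only) reproduces the 1.61R value quoted above for a five-component equiatomic alloy:

```python
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Ideal configurational entropy of mixing, -R * sum(c_i * ln c_i)."""
    return -R * sum(c * math.log(c) for c in fractions if c > 0)

equiatomic_5 = [0.2] * 5
print(mixing_entropy(equiatomic_5) / R)  # ~1.609, i.e. about 1.61R
```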
However, entropy alone is not sufficient to stabilize the solid-solution phase in every system. The enthalpy of mixing (ΔHmix) must also be taken into account. This can be calculated using:

ΔHmix = Σ 4ΔHABmix ci cj (summed over all element pairs i ≠ j)

where ΔHABmix is the binary enthalpy of mixing for elements A and B. Zhang et al. found, empirically, that in order to form a complete solid solution, ΔHmix should be between −10 and 5 kJ/mol. In addition, Otto et al. found that if the alloy contains any pair of elements that tend to form ordered compounds in their binary system, a multi-component alloy containing them is also likely to form ordered compounds.
Both of these thermodynamic parameters can be combined into a single, unitless parameter Ω:

Ω = Tm ΔSmix / |ΔHmix|

where Tm is the average melting point of the elements in the alloy. Ω should be greater than or equal to 1.0 (or 1.1 in practice), meaning that entropy dominates over enthalpy at the point of solidification, which promotes solid-solution formation.
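The following sketch combines the two quantities into Ω as defined above; the numerical inputs are placeholders rather than measured values for any particular alloy:

```python
def omega(avg_melting_point_K, delta_S_mix, delta_H_mix):
    """Omega = Tm * dS_mix / |dH_mix| (dS_mix in J/(mol K), dH_mix in J/mol)."""
    return avg_melting_point_K * delta_S_mix / abs(delta_H_mix)

# Placeholder inputs: Tm = 1800 K, dS_mix = 1.61*R, dH_mix = -8 kJ/mol
print(omega(1800, 1.61 * 8.314, -8000) >= 1.1)  # True -> solid solution favored
```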
Ω can be optimized by adjusting the elemental composition. Waite J. C. has proposed an optimisation algorithm to maximize Ω and demonstrated that a slight change in composition can cause a large increase in Ω.
Kinetic mechanisms
The atomic radii of the components must also be similar in order to form a solid solution. Zhang et al. proposed a parameter δ, the average lattice mismatch, representing the difference in atomic radii:

δ = √( Σ ci (1 − ri/r̄)² ),  with  r̄ = Σ ci ri

where ri is the atomic radius of element i and r̄ is the composition-weighted average atomic radius; δ is usually expressed as a percentage. Formation of a solid-solution phase requires δ ≤ 6.6%, which is an empirical number based on experiments on bulk metallic glasses (BMGs). Exceptions are found on both sides of 6.6%: some alloys with 4% < δ ≤ 6.6% do form intermetallics, and solid-solution phases do appear in alloys with δ > 9%.
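A short sketch of the δ calculation is shown below; the atomic radii are approximate values in picometres, included only to illustrate the formula rather than as authoritative data:

```python
import math

def delta_percent(fractions, radii):
    """Average lattice mismatch: 100 * sqrt(sum c_i * (1 - r_i / r_bar)^2)."""
    r_bar = sum(c * r for c, r in zip(fractions, radii))
    return 100 * math.sqrt(sum(c * (1 - r / r_bar) ** 2
                               for c, r in zip(fractions, radii)))

# Approximate metallic radii (pm) for Cr, Mn, Fe, Co, Ni -- illustrative only
radii = [128, 127, 126, 125, 124]
fractions = [0.2] * 5
print(delta_percent(fractions, radii))  # small value, well below the ~6.6% threshold
```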
The multi-element lattice in HEAs is highly distorted because all elements are solute atoms and their atomic radii differ. δ helps evaluate the lattice strain caused by this distorted crystal structure. When the atomic size difference (δ) is sufficiently large, the distorted lattice collapses and a new phase, such as an amorphous structure, forms instead. The lattice distortion effect can also result in solid-solution hardening.
Other properties
For those alloys that do form solid solutions, an additional empirical parameter has been proposed to predict the crystal structure that will form. HEAs are usually FCC (face-centred cubic), BCC (body-centred cubic), HCP (hexagonal close-packed), or a mixture of these structures, and each structure has its own advantages and disadvantages in terms of mechanical properties. There are many methods to predict the structure of an HEA. The valence electron concentration (VEC) can be used to predict the stability of the HEA structure. The stability of the physical properties of the HEA is closely associated with the electron concentration (this is associated with the electron concentration rule from the Hume-Rothery rules).
When HEAs are made by casting, only FCC structures form when the VEC is larger than 8; when the VEC is between 6.87 and 8, the alloy is a mixture of BCC and FCC; and when the VEC is below 6.87, the material is BCC. In order to produce a particular crystal structure, phase-stabilizing elements can be added: experimentally, adding elements such as Al and Cr promotes the formation of BCC HEAs, while Ni and Co promote FCC HEAs.
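A simple classifier based on the VEC thresholds quoted above might look like the following sketch; the per-element VEC values are standard valence electron counts, the composition is purely illustrative, and the treatment of the boundary values is a choice made here for simplicity:

```python
def alloy_vec(fractions, vec_values):
    """Composition-weighted valence electron concentration."""
    return sum(c * v for c, v in zip(fractions, vec_values))

def predict_structure(vec):
    """Empirical VEC rule for as-cast HEAs (boundary values treated as inclusive)."""
    if vec >= 8:
        return "FCC"
    if vec >= 6.87:
        return "FCC + BCC"
    return "BCC"

# Equiatomic CrMnFeCoNi: VEC(Cr)=6, Mn=7, Fe=8, Co=9, Ni=10
vec = alloy_vec([0.2] * 5, [6, 7, 8, 9, 10])
print(vec, predict_structure(vec))  # 8.0 -> "FCC"
```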
Synthesis
High-entropy alloys are difficult to manufacture using extant techniques, and typically require both expensive materials and specialty processing techniques.
High-entropy alloys are mostly produced using methods classified by the phase of the starting metals – that is, whether the metals are combined while in a liquid, solid, or gas state.
Most HEAs have been produced using liquid-phase methods, which include arc melting, induction melting, and Bridgman solidification.
Solid-state processing is generally done by mechanical alloying using a high-energy ball mill. This method produces powders that can then be processed using conventional powder metallurgy methods or spark plasma sintering. This method allows for alloys to be produced that would be difficult or impossible to produce using casting, such as LiMgAlScTi. These powders usually have an irregular shape and can be transformed into a spherical shape via powder spheroidization for use in various additive manufacturing processes.
The conventional method of mechanical alloying mixes all required elements in one step, where elements A, B, C, and D are milled together to form ABCD directly. Vaidya et al. proposed a new method of creating HEAs with mechanical alloying called sequential alloying, where elements are added step by step. In order to create an AlCrFeCoNi HEA, Vaidya's team first formed a binary CoNi alloy, then added Fe to form ternary FeCoNi, Cr to form CrFeCoNi, and Al to form AlCrFeCoNi. The same alloy composition can be produced through different sequences, and different sequences lead to different proportions of BCC and FCC phases, showing that the resulting microstructure is path-dependent. For example, one sequence of AlCrFeCoNi milling for 70 hours in total produces an alloy with 100% BCC phase, while another produces an alloy with 80% BCC phase.
Gas-phase processing includes processes such as sputtering or molecular beam epitaxy (MBE), which can be used to carefully control different elemental compositions to get high-entropy metallic or ceramic films.
Additive manufacturing can produce alloys with a different microstructure, potentially increasing strength (to 1.3 gigapascals) as well as increasing ductility.
Other techniques include thermal spray, laser cladding, and electrodeposition.
Modeling and simulation
The atomic-scale complexity presents additional challenges to computational modelling of high-entropy alloys. Thermodynamic modeling using the CALPHAD method requires extrapolating from binary and ternary systems. Most commercial thermodynamic databases are designed for, and may only be valid for, alloys consisting primarily of a single element. Thus, they require experimental verification or additional ab initio calculations such as density functional theory (DFT). However, DFT modeling of complex, random alloys has its own challenges, as the method requires defining a fixed-size cell, which can introduce non-random periodicity. This is commonly overcome using the method of "special quasirandom structures", designed to most closely approximate the radial distribution function of a random system, combined with the Vienna Ab initio Simulation Package. Using this method, it has been shown that the results for a four-component equiatomic alloy begin to converge with a cell as small as 24 atoms. The exact muffin-tin orbital method with the coherent potential approximation (CPA) has also been employed to model HEAs.
Another approach based on the KKR-CPA formulation of DFT is a theory for multicomponent alloys that evaluates the two-point correlation function, an atomic short-range order parameter, ab initio. The theory has been used with success to study the Cantor alloy CrMnFeCoNi and its derivatives, the refractory HEAs, as well as to examine the influence of a material's magnetic state on atomic ordering tendencies.
Other techniques include the 'multiple randomly populated supercell' approach, which better describes the random population of a true solid solution (although this is far more computationally demanding). This method has also been used to model glassy and amorphous systems without a crystal lattice (including bulk metallic glasses).
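As a rough illustration of the "randomly populated supercell" idea (not the actual implementation used in the cited studies), the sketch below randomly assigns the five Cantor-alloy elements to the sites of a small FCC supercell; a real calculation would generate many such cells and average the resulting DFT energies.

```python
import itertools
import random

def fcc_sites(n):
    """Fractional coordinates of an n x n x n FCC supercell (4 atoms per cubic cell)."""
    basis = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
    return [((i + b[0]) / n, (j + b[1]) / n, (k + b[2]) / n)
            for i, j, k in itertools.product(range(n), repeat=3) for b in basis]

def random_population(sites, elements):
    """Assign elements to sites in near-equiatomic proportions, randomly shuffled."""
    labels = [elements[i % len(elements)] for i in range(len(sites))]
    random.shuffle(labels)
    return list(zip(labels, sites))

sites = fcc_sites(2)                      # 32-site supercell
cell = random_population(sites, ["Cr", "Mn", "Fe", "Co", "Ni"])
print(len(cell), cell[0])
```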
Further, modeling techniques are being used to suggest new HEAs for targeted applications. The use of modeling techniques in this 'combinatorial explosion' is necessary for targeted and rapid HEA discovery and application.
Simulations have highlighted the preference for local ordering in some high-entropy alloys and, when the enthalpies of formation are combined with terms for configurational entropy, transition temperatures between order and disorder can be estimated, allowing one to understand when effects like age hardening and degradation of an alloy's mechanical properties may be an issue.
The transition temperature to reach the solid solution (miscibility gap) was recently addressed with the Lederer-Toher-Vecchio-Curtarolo thermodynamic model.
Phase diagram generation
CALPHAD (CALculation of PHAse Diagrams) is a method for creating reliable thermodynamic databases that can be an effective tool when searching for single-phase HEAs. However, this method can be limited since it needs to extrapolate from known binary or ternary phase diagrams. It also does not take into account the process of material synthesis and can only predict equilibrium phases. The phase diagrams of HEAs can be explored experimentally via high-throughput experimentation (HTE). This method rapidly produces hundreds of samples, allowing the researcher to explore a region of composition in one step, and thus can be used to quickly map out the phase diagram of the HEA. Another way to predict the phase of the HEA is via enthalpy concentration. This method accounts for specific combinations of single-phase HEAs and rejects similar combinations that have been shown not to be single phase. It uses first-principles, high-throughput density functional theory to calculate the enthalpies, thus requiring no experimental input, and it has shown excellent agreement with reported experimental results.
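As a toy illustration of how the empirical parameters introduced earlier (Ω, δ, and the ΔHmix window) can be combined into a quick screening step before more expensive CALPHAD or DFT calculations, consider the following sketch; the thresholds are the empirical values quoted above, and the inputs are placeholders rather than data for a real alloy:

```python
def likely_single_phase(omega, delta_percent, dH_mix_kj_per_mol):
    """Rough empirical screen for single-phase solid-solution formation."""
    return (omega >= 1.1
            and delta_percent <= 6.6
            and -10 <= dH_mix_kj_per_mol <= 5)

# Placeholder candidate: Omega = 3.0, delta = 3.2%, dH_mix = -6 kJ/mol
print(likely_single_phase(3.0, 3.2, -6))  # True -> worth further modeling
```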
Properties and potential uses
Mechanical
The crystal structure of HEAs has been found to be the dominant factor in determining the mechanical properties. BCC HEAs typically have high yield strength and low ductility, and vice versa for FCC HEAs. Some alloys have been particularly noted for their exceptional mechanical properties. A refractory alloy, VNbMoTaW, maintains a high yield strength even at very high temperatures, significantly outperforming conventional superalloys such as Inconel 718. However, its room-temperature ductility is poor, less is known about other important high-temperature properties such as creep resistance, and the density of the alloy is higher than that of conventional nickel-based superalloys.
CrMnFeCoNi has been found to have exceptional low-temperature mechanical properties and high fracture toughness, with both ductility and yield strength increasing as the test temperature was reduced from room temperature to cryogenic temperatures. This was attributed to the onset of nanoscale twin boundary formation, an additional deformation mechanism that was not in effect at higher temperatures. At ultralow temperatures, inhomogeneous deformation by serrations has been reported. As such, it may have applications as a structural material in low-temperature applications or, because of its high toughness, as an energy-absorbing material. However, later research showed that lower-entropy alloys with fewer elements or non-equiatomic compositions may have higher strength or higher toughness. No ductile-to-brittle transition was observed in the BCC AlCrFeCoNi alloy in tests as low as 77 K.
Al0.5CrFeCoNiCu was found to have a high fatigue life and endurance limit, possibly exceeding some conventional steel and titanium alloys, but there was significant variability in the results. This suggests the material is very sensitive to defects introduced during manufacturing such as aluminum oxide particles and microcracks.
A single-phase nanocrystalline Al20Li20Mg10Sc20Ti30 alloy was developed with a density of 2.67 g cm−3 and microhardness of 4.9 – 5.8 GPa, which would give it an estimated strength-to-weight ratio comparable to ceramic materials such as silicon carbide, though the high cost of scandium limits the possible uses.
Rather than bulk HEAs, small-scale HEA samples (e.g. NbMoTaW micro-pillars) exhibit extraordinarily high yield strengths of 4–10 GPa (one order of magnitude higher than that of the bulk form), and their ductility is considerably improved. Additionally, such HEA films show substantially enhanced stability under high-temperature, long-duration conditions (at 1,100 °C for 3 days). Small-scale HEAs combining these properties represent a new class of materials for small-dimension devices, potentially for high-stress and high-temperature applications.
In 2018, new types of HEAs based on the careful placement of ordered oxygen complexes, a type of ordered interstitial complex, have been produced. In particular, alloys of titanium, hafnium, and zirconium have been shown to have enhanced work hardening and ductility characteristics.
Bala et al. studied the effects of high-temperature exposure on the microstructure and mechanical properties of the Al5Ti5Co35Ni35Fe20 high-entropy alloy. After hot rolling and air-quenching, the alloy was exposed to a temperature range of 650–900 °C for 7 days. The air-quenching caused γ′ precipitation distributed uniformly throughout the microstructure. The high-temperature exposure resulted in growth of the γ′ particles, and at temperatures higher than 700 °C, additional precipitation of γ′ was observed. The best mechanical properties were obtained after exposure to 650 °C, with a yield strength of 1050 MPa and an ultimate tensile strength of 1370 MPa. Increasing the temperature further decreased the mechanical properties.
Liu et al. studied a series of quaternary non-equimolar high-entropy alloys AlxCr15Co15Ni70−x with x ranging from 0 to 35%. The lattice structure transitioned from FCC to BCC as the Al content increased, and with Al content in the range of 12.5 to 19.3 at%, the γ′ phase formed and strengthened the alloy at both room and elevated temperatures. With the Al content at 19.3 at%, a lamellar eutectic structure formed, composed of γ′ and B2 phases. Due to the high γ′ phase fraction of 70 vol%, the alloy had a compressive yield strength of 925 MPa and a fracture strain of 29% at room temperature, and it retained high yield strength at elevated temperatures, with values of 789, 546, and 129 MPa at 973, 1123, and 1273 K.
In general, refractory high-entropy alloys have exceptional strength at elevated temperatures but are brittle at room temperature. The TiZrNbHfTa alloy is an exception, with plasticity of over 50% at room temperature. However, its strength at high temperature is insufficient. With the aim of increasing high-temperature strength, Chien-Chuang et al. modified the composition of TiZrNbHfTa and studied the mechanical properties of the refractory high-entropy alloys TiZrMoHfTa and TiZrNbMoHfTa. Both alloys have a simple BCC structure. Their experiments showed that TiZrNbMoHfTa had a yield strength 6 times greater than that of TiZrMoHfTa at 1200 °C, while retaining a fracture strain of 12% at room temperature.
Electrical and magnetic
CrFeCoNiCu is an FCC alloy that was found to be paramagnetic, but upon adding titanium it forms a complex microstructure consisting of an FCC solid solution, amorphous regions, and nanoparticles of Laves phase, resulting in superparamagnetic behavior. High magnetic coercivity has been measured in a FeMnNiCoBi alloy. Several magnetic high-entropy alloys exhibit promising soft magnetic behavior together with strong mechanical properties. Superconductivity was observed in TiZrNbHfTa alloys, with transition temperatures between 5.0 and 7.3 K.
High-entropy alloys are promising for electronics due to their thermal stability and electrical conductivity. They are being used for high-performance applications like power electronics, heat spreaders, sensors, and inductors, and show potential for efficient conductive materials in advanced components.
Thermal stability
Since high-entropy alloys are likely to be used in high-temperature environments, thermal stability is very important when designing HEAs. It is especially critical for nanocrystalline HEAs, where an extra driving force exists for grain growth. Two aspects need to be considered for nanocrystalline HEAs: the stability of the phases formed, which is dominated by the thermodynamic mechanisms (see alloy design), and the retention of nanocrystallinity. The stability of nanocrystalline HEAs is controlled by many factors, including grain boundary diffusion and the presence of oxides.
Other
The high concentrations of multiple elements lead to slow diffusion. The activation energy for diffusion was found to be higher for several elements in CrMnFeCoNi than in pure metals and stainless steels, leading to lower diffusion coefficients.
Some equiatomic multicomponent alloys have also been reported to show good resistance to damage by energetic radiation. High-entropy alloys are being investigated for hydrogen storage applications. Some high-entropy alloys such as TiZrCrMnFeNi show fast and reversible hydrogen storage at room temperature with good storage capacity for commercial applications. High-entropy materials have high potential for a wider range of energy applications, particularly in the form of high-entropy ceramics. The development of high-entropy photocatalysts, initiated in 2020, is one such application; it has been employed for hydrogen production, oxygen production, carbon dioxide conversion, and plastic waste conversion.
High-entropy alloy films (HEAFs)
Introduction
Most HEAs are prepared by vacuum arc melting, which yields relatively large, μm-scale grain sizes. As a result, studies of high-performance high-entropy alloy films (HEAFs) have attracted growing attention from materials scientists. Compared to the preparation methods for bulk HEAs, HEAFs are easily achieved by rapid solidification, with cooling rates as fast as 10⁹ K/s. Such a rapid cooling rate can limit the diffusion of the constituent elements, inhibit phase separation, favor the formation of a single solid-solution phase or even an amorphous structure, and produce a much smaller grain size (nm) than that of bulk HEAs (μm). So far, many techniques have been used to fabricate HEAFs, such as spraying, laser cladding, electrodeposition, and magnetron sputtering.

Magnetron sputtering is the most widely used method for fabricating HEAFs. An inert gas (Ar) is introduced into a vacuum chamber and accelerated by a high voltage applied between the substrate and the target. The target is bombarded by the energetic ions, atoms are ejected from the target surface, and these atoms then reach the substrate and condense to form a thin film. The composition of each constituent element in HEAFs can be controlled by the choice of target and by operational parameters such as power, gas flow, bias, and the working distance between substrate and target during film deposition. Oxide, nitride, and carbide films can also be readily prepared by introducing reactive gases such as O2, N2, and C2H2.

To date, three routes have been investigated for preparing HEAFs via magnetron sputtering. First, a single HEA target can be used. With the help of a pre-sputtering step, the composition of the as-deposited films is approximately equal to that of the original target alloy, even though each element has a different sputtering yield. However, preparing a single HEA target is time-consuming and difficult. For example, it is hard to produce an equiatomic CoCrFeMnNi alloy target because of the high evaporation rate of Mn, so the additional amount of Mn needed to keep the composition equiatomic is difficult to estimate. Second, HEAFs can be synthesized by co-sputtering deposition with several metal targets. A wide range of chemical compositions can be accessed by varying the processing conditions such as power, bias, and gas flow. Many researchers have doped different amounts of elements such as Al, Mo, V, Nb, Ti, and Nd into the CrMnFeCoNi system, which modifies the chemical composition and structure of the alloy and improves the mechanical properties; these HEAFs were prepared by co-sputtering deposition with a single CrMnFeCoNi alloy target and Al/Ti/V/Mo/Nb targets. However, trial and error is needed to obtain the desired composition. Taking AlxCrMnFeCoNi films as an example, the crystalline structure changed from a single FCC phase for x = 0.07 to duplex FCC + BCC phases for x = 0.3, and eventually to a single BCC phase for x = 1.0; the whole process was controlled by varying the powers applied to the CoCrFeMnNi and Al targets to obtain the desired compositions, showing a phase transition from FCC to BCC with increasing Al content. The third route uses powder targets, in which the composition of the target is adjusted simply by altering the weight fractions of the individual powders, although these powders must be well mixed to ensure homogeneity.
AlCrFeCoNiCu films, for example, were successfully deposited by sputtering pressed powder targets.
Recently, more researchers have been investigating the mechanical properties of HEAFs with nitrogen incorporation, owing to superior properties such as high hardness. As mentioned above, nitride-based HEAFs can be synthesized via magnetron sputtering by introducing N2 and Ar gases into the vacuum chamber. Adjusting the nitrogen flow ratio, RN = N2/(Ar + N2), controls the amount of incorporated nitrogen. Most studies vary the nitrogen flow ratio to examine the correlation between phase transformation and mechanical properties.
Hardness and related modulus values
Both the hardness and the related moduli, such as the reduced modulus (Er) or elastic modulus (E), increase significantly in films prepared by magnetron sputtering. This is because the rapid cooling rate limits grain growth, i.e., HEAFs have smaller grain sizes than their bulk counterparts, which inhibits dislocation motion and thus leads to an increase in mechanical properties such as hardness and elastic modulus. For instance, CoCrFeMnNiAlx films were successfully prepared by the co-sputtering method. The as-deposited CoCrFeMnNi film (Al0) exhibited a single FCC structure with a relatively low hardness of around 5.71 GPa, and the addition of a small amount of Al resulted in an increase to 5.91 GPa within the FCC structure of Al0.07. With further addition of Al, the hardness increased drastically to 8.36 GPa in the duplex FCC + BCC region. When the phase transformed to a single BCC structure, the Al1.3 film reached a maximum hardness of 8.74 GPa. As a result, the structural transition from FCC to BCC led to hardness enhancements with increasing Al content. It is worth noting that Al-doped CoCrFeMnNi HEAs have been processed and their mechanical properties characterized by Xian et al., and the measured hardness values are included in the work of Hsu et al. for comparison. Compared to Al-doped CoCrFeMnNi bulk HEAs, Al-doped CoCrFeMnNi HEAFs had a much higher hardness, which could be attributed to the much smaller grain size of the HEAFs (nm vs. μm). The reduced moduli of Al0 and Al1.3 are 172.84 and 167.19 GPa, respectively.
In addition, the RF-sputtering technique has been used to deposit CoCrFeMnNiTix HEAFs by co-sputtering of CoCrFeMnNi alloy and Ti targets. The hardness increased drastically to 8.61 GPa for Ti0.2 upon adding Ti atoms to the CoCrFeMnNi alloy system, suggesting good solid-solution strengthening effects. With further addition of Ti, the Ti0.8 film reached a maximum hardness of 8.99 GPa. The increase in hardness was due to both the lattice distortion effect and the presence of an amorphous phase, which was attributed to the addition of the larger Ti atoms to the CoCrFeMnNi alloy system. This behavior differs from that of bulk CoCrFeMnNiTix HEAs, in which intermetallic precipitates form in the matrix: bulk preparation methods have slower cooling rates, so intermetallic compounds appear, whereas HEAFs experience higher cooling rates that limit diffusion and therefore seldom contain intermetallic phases. The reduced moduli of Ti0.2 and Ti0.8 are 157.81 and 151.42 GPa, respectively. Other HEAFs have been successfully fabricated by magnetron sputtering, and their hardness and related modulus values are listed in Table 1.
For nitride-based HEAFs, Huang et al. prepared (AlCrNbSiTiV)N films and investigated the effect of nitrogen content on structure and mechanical properties. They found that both the hardness (41 GPa) and the elastic modulus (360 GPa) reached a maximum at RN = 28%. The (AlCrMoTaTiZr)Nx film deposited at RN = 40% showed the highest hardness of 40.2 GPa and an elastic modulus of 420 GPa. Chang et al. fabricated (TiVCrAlZr)N on silicon substrates at nitrogen flow ratios of RN = 0 to 66.7%; at RN = 50%, the hardness and elastic modulus of the films reached maximum values of 11 and 151 GPa. Liu et al. studied (FeCoNiCuVZrAl)N HEAFs with the RN ratio increased from 0 to 50% and observed that both the hardness and the elastic modulus exhibited maxima of 12 and 166 GPa, with an amorphous structure, at RN = 30%. Other related nitride-based HEAFs are summarized in Table 2. Compared to purely metallic HEAFs (Table 1), most nitride-based films have higher hardness and elastic modulus due to the formation of binary nitride compounds. However, some films still possess relatively low hardness, below 20 GPa, because of the inclusion of non-nitride-forming elements.
Many studies of HEAFs have explored different compositions and deposition techniques. The grain size, phase transformation, structure, densification, residual stress, and the content of nitrogen, carbon, and oxygen can all affect the hardness and elastic modulus. Researchers therefore continue to investigate the correlation between the microstructures, the mechanical properties, and the related applications.
Table 1. Published pure metallic HEAFs prepared by magnetron sputtering, with their phases, hardness, and related modulus values.
Table 2. Current publications on nitride-based HEAFs, with their structures and the related hardness and elastic modulus values.
High-entropy ultra-high temperature ceramics
A subset of ultra-high temperature ceramics (UHTCs) comprises high-entropy ultra-high temperature ceramics, also referred to as compositionally complex ceramics (CCCs). This class of materials is a leading choice for applications that experience extreme conditions, such as hypersonic flight, which involves very high temperatures, corrosion, and high strain rates. In general, UHTCs possess desirable properties including high melting temperature, high thermal conductivity, high stiffness and hardness, and high corrosion resistance. CCCs exemplify the tunability of UHTC systems by adding more elements to the overall composition in approximately equimolar proportions. These high-entropy materials have displayed enhanced mechanical properties and performance compared to traditional UHTC systems.
Because this is an emerging field, a fully comprehensive relationship between composition, microstructure, processing, and properties has not yet been developed. Therefore, there is a great deal of ongoing research aimed at better understanding these systems and their ability to scale to implementation in extreme-environment applications. A multitude of factors contribute to the elevated mechanical properties of CCCs. Notably, the complex microstructure and particular processing parameters enable these systems to display improved properties such as higher hardness. A plausible reason why CCCs may exhibit even higher hardness than traditional UHTCs is the integration of various transition metals of different sizes into the CCC high-entropy lattice, rather than just a single repeating element of the same size on the metallic sites. Plastic deformation in materials is due to the movement of dislocations. Generally speaking, increased movement of dislocations throughout the lattice leads to deformation, while inhibition of dislocation motion leads to less deformation and a harder material. In ceramics, dislocation motion is extremely limited due to the constraints of the ceramic bonding structure, which explains their higher hardness compared to metals. Since the CCC structure has a wider variety of elemental sizes, it becomes even more difficult for dislocations to move in these systems, increasing the strain energy needed to move them. This phenomenon may explain the further improved hardness that is observed. In addition to the direct effects that the microstructure has on enhancing properties, optimizing processing parameters for CCCs is crucial. For instance, powders may be processed using high-energy ball milling (HEBM), which relies on the principle of mechanical alloying. Mechanical alloying balances competing mechanisms of deformation and recovery, including micro-forging, cold welding, and fracturing. With the proper balance achieved, this processing step yields a refined and homogeneous powder, which subsequently facilitates proper densification of the final part and desirable mechanical properties. Incomplete densification or an unacceptable fraction of voids diminishes the overall mechanical properties, as it would lead to premature failure. To conclude, high-entropy UHTCs, or CCCs, are extremely promising candidates for applications in extreme environments, as evidenced so far by their enhanced properties.
See also
Amorphous metal
High-entropy-alloy nanoparticles
Nanocrystalline material
Hume-Rothery rules
References
Alloys
Chemical physics
Materials science
Statistical mechanics
Taiwanese inventions
Thermodynamic entropy | High-entropy alloy | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 8,803 | [
"Applied and interdisciplinary physics",
"Physical quantities",
"Materials science",
"Thermodynamic entropy",
"Entropy",
"Alloys",
"Chemical mixtures",
"nan",
"Statistical mechanics",
"Chemical physics"
] |
44,983,434 | https://en.wikipedia.org/wiki/18%20Tauri | 18 Tauri is a single star in the zodiac constellation of Taurus, located 444 light years away from the Sun. It is visible to the naked eye as a faint, blue-white hued star with an apparent visual magnitude of 5.66. The star is moving further from the Earth with a heliocentric radial velocity of +4.8. It is a member of the Pleiades open cluster, which is positioned near the ecliptic and thus is subject to lunar occultations.
This is a B-type main-sequence star with a stellar classification of B8 V, and is about halfway through its main sequence lifetime. It displays an infrared excess, suggesting the presence of an orbiting debris disk with a black body temperature of 75 K at some distance from the host star. The star has 3.34 times the mass of the Sun and 2.89 times the Sun's radius. It is radiating 160 times the Sun's luminosity from its photosphere at an effective temperature of 13,748 K. 18 Tauri has a high rate of spin, showing a projected rotational velocity of 212 km/s.
References
B-type main-sequence stars
Pleiades
Taurus (constellation)
Durchmusterung objects
Tauri, 018
023324
017527
1144 | 18 Tauri | [
"Astronomy"
] | 271 | [
"Taurus (constellation)",
"Constellations"
] |
44,983,571 | https://en.wikipedia.org/wiki/HD%2028375 | HD 28375 is a single star in the equatorial constellation of Taurus, near the southern constellation border with Eridanus. It was previously known by the Flamsteed designation 44 Eridani, although the name has fallen out of use because constellations were redrawn, placing the star out of Eridanus and into Taurus. The star is blue-white in hue and is dimly visible to the naked eye with an apparent visual magnitude of 5.53. The distance to this star is approximately 480 light-years based on parallax. It is drifting further away with a radial velocity of 18 km/s, after having come to within an estimated some 3.7 million years ago.
Cowley (1972) and later Bragança et al. (2012) found a stellar classification of B3V for this object, matching a B-type main-sequence star. Houk and Swift assigned it a class of B5 III/IV, suggesting it is a more evolved star that is entering the giant stage. It has five times the mass of the Sun and is around three million years old, with a projected rotational velocity of just 13 km/s. The star is radiating 127 times the luminosity of the Sun from its photosphere at an effective temperature of about 13,000 K.
An infrared excess has been detected, indicating the presence of a circumstellar disk. The dust has a temperature of about 119 K and orbits at some distance from the star.
References
B-type main-sequence stars
Circumstellar disks
Taurus (constellation)
Durchmusterung objects
Eridani, 44
028375
020884
1415 | HD 28375 | [
"Astronomy"
] | 346 | [
"Taurus (constellation)",
"Constellations"
] |
44,986,732 | https://en.wikipedia.org/wiki/Topological%20monoid | In topology, a branch of mathematics, a topological monoid is a monoid object in the category of topological spaces. In other words, it is a monoid with a topology with respect to which the monoid's binary operation is continuous. Every topological group is a topological monoid.
See also
H-space
References
External links
topological monoid from symmetric monoidal category
Topological spaces
Algebraic topology | Topological monoid | [
"Mathematics"
] | 81 | [
"Mathematical structures",
"Algebraic topology",
"Space (mathematics)",
"Topology stubs",
"Topological spaces",
"Topology",
"Fields of abstract algebra"
] |
44,989,402 | https://en.wikipedia.org/wiki/Procurement%20G6 | The Procurement G6 is an informal group of six national central purchasing bodies. It is also known as the Multilateral Meeting on Government Procurement (MMGP).
Members
Members of the Procurement G6 are:
: Public Services and Procurement Canada
: ChileCompra
: Consip
: Public Procurement Service
: Crown Commercial Service
: General Services Administration
Scope
Each country shares experiences about:
e–procurement systems
challenges, opportunities and actions for small and medium enterprises (SMEs)
their qualification systems for enterprises
instruments and indicators for the performance measurement of the Central Purchasing Bodies and their impact on the economic system, on the public sector and on the enterprises
actions to minimize the risk of corruption
the green procurement scenarios
Past meetings of the Procurement G6 have included:
June 15–16, 2009 — San Antonio,
June 10–12, 2010 – Rome,
September 24–26, 2013 — Seoul,
May 24–25, 2016 – Rome,
October 10–11, 2018 — Vancouver
See also
Agreement on Government Procurement
Auction
E–procurement
Expediting
Global sourcing
Group purchasing organization
Purchasing
Strategic sourcing
Notes and references
External links
Consip
CC – Direcciòn ChileCompra
GSA – General Services Administration
OGC – Office of Government Commerce
PPS – Public Procurement Service
PWGSC – Public Works and Government Services Canada
Systems engineering
Public eProcurement | Procurement G6 | [
"Engineering"
] | 270 | [
"Systems engineering"
] |
44,990,183 | https://en.wikipedia.org/wiki/Worst-case%20distance | In fabrication, the yield (Y = number of good samples/total number of samples) is one of the most important metrics. During the design phase, engineers aim to maximize yield by using simulation techniques and statistical models. Often, data follows a normal distribution, and for such distributions, there is a direct relationship between the design margin (relative to a given specification limit) and the yield. By expressing the specification margin in terms of standard deviation (sigma), yield (Y) can be calculated according to this specification. The concept of worst-case distance (WCD) extends this idea to more complex problems, such as non-normal distributions and multiple specifications.
WCD is a metric originally applied in electronic design for yield optimization and design centering, and it is now used as a metric for quantifying the robustness of electronic systems and devices.
In yield optimization for electronic circuit design, WCD relates the following yield-influencing factors:
Statistical distribution of design parameters, typically based on the technology process
Operating range of conditions under which the design will function
Performance specification for performance parameters
Although the strict mathematical formalism is complex, in a simplified interpretation, WCD is the maximum of all possible performance variances (i.e., within specification limits) divided by the distance to the performance specification, with the performance variances evaluated within the space defined by the operating range.
References
External links
Survey on several statistical analysis approaches
Electronic design | Worst-case distance | [
"Engineering"
] | 290 | [
"Electronic design",
"Electronic engineering",
"Design"
] |
43,254,635 | https://en.wikipedia.org/wiki/C19H28N4O2 | {{DISPLAYTITLE:C19H28N4O2}}
The molecular formula C19H28N4O2 (molar mass: 344.459 g/mol) may refer to:
ADB-PINACA
ADB-P7AICA | C19H28N4O2 | [
"Chemistry"
] | 59 | [
"Isomerism",
"Set index articles on molecular formulas"
] |
36,131,613 | https://en.wikipedia.org/wiki/Spherical%20nucleic%20acid | Spherical nucleic acids (SNAs) are nanostructures that consist of a densely packed, highly oriented arrangement of linear nucleic acids in a three-dimensional, spherical geometry. This novel three-dimensional architecture is responsible for many of the SNA's novel chemical, biological, and physical properties that make it useful in biomedicine and materials synthesis. SNAs were first introduced in 1996 by Chad Mirkin’s group at Northwestern University.
Structure and function
The SNA structure typically consists of two components: a nanoparticle core and a nucleic acid shell. The nucleic acid shell is made up of short, synthetic oligonucleotides terminated with a functional group that can be utilized to attach them to the nanoparticle core. The dense loading of nucleic acids on the particle surface results in a characteristic radial orientation around the nanoparticle core, which minimizes repulsion between the negatively charged oligonucleotides.
The first SNA consisted of a gold nanoparticle core with a dense shell of 3’ alkanethiol-terminated DNA strands. Repeated additions of salt counterions were used to reduce the electrostatic repulsion between DNA strands and enable more efficient DNA packing on the nanoparticle surface. Since then, silver, iron oxide, silica, and semiconductor materials have also been used as inorganic cores for SNAs. Other core materials with increased biocompatibility, such FDA-approved PLGA polymer nanoparticles, micelles, liposomes, and proteins have also been used to prepare SNAs. Single-stranded and double-stranded versions of these materials have been created using, for example, DNA, LNA, and RNA.
One- and two-dimensional forms of nucleic acids (e.g., single strands, linear duplexes, and plasmids) (Fig. 1) are important biological machinery for the storage and transmission of genetic information. The specificity of DNA interactions through Watson–Crick base pairing provides the foundation for these functions. Scientists and engineers have been synthesizing and, in certain cases, mass-producing nucleic acids for decades to understand and exploit this elegant chemical recognition motif. The recognition abilities of nucleic acids can be enhanced when arranged in a spherical geometry, which allows for polyvalent interactions to occur. This polyvalency, along with the high density and degree of orientation described above, helps explain why SNAs exhibit different properties than their lower-dimensional constituents (Fig. 2).
Over two decades of research has revealed that the properties of a SNA conjugate are a synergistic combination of those of the core and the shell. The core serves two purposes: 1) it imparts upon the conjugate novel physical and chemical properties (e.g., plasmonic, catalytic, magnetic, luminescent), and 2) it acts as a scaffold for the assembly and orientation of the nucleic acids. The nucleic acid shell imparts chemical and biological recognition abilities that include a greater binding strength, cooperative melting behavior, higher stability, and enhanced cellular uptake without the use of transfection agents (compared to the same sequence of linear DNA). It has been shown that one can crosslink the DNA strands at their base, and subsequently dissolve the inorganic core with KCN or I2 to create a core-less (hollow) form of SNA (Fig. 3, right), which exhibits many of the same properties as the original polyvalent DNA gold nanoparticle conjugate (Fig. 3, left).
Due to their structure and function, SNAs occupy a materials space distinct from DNA nanotechnology and DNA origami (although both are important to the field of nucleic acid–guided programmable materials). With DNA origami, such structures are synthesized via DNA hybridization events. In contrast, the SNA structure can be synthesized independent of nucleic acid sequence and hybridization; instead, their synthesis relies upon chemical bond formation between nanoparticles and DNA ligands. Furthermore, DNA origami uses DNA hybridization interactions to realize a final structure, whereas SNAs and other forms of three-dimensional nucleic acids (anisotropic structures templated with triangular prism, rod, octahedra, or rhombic dodecahedra-shaped nanoparticles) utilize the nanoparticle core to arrange the linear nucleic acid components into functional forms. It is the particle core that dictates the shape of the SNA. SNAs should also not be confused with their monovalent analogues – individual particles coupled to a single DNA strand. Such single strand-nanoparticle conjugate structures have led to interesting advances in their own right, but do not exhibit the unique properties of SNAs.
Proposed applications
Intracellular gene regulation
SNAs are being proposed as therapeutic materials. Despite their high negative charge, they are taken up by cells (also negatively charged) in high quantities without the need for positively charged co-carriers, and they are effective as gene regulation agents in both antisense and RNAi pathways (Fig. 4). The proposed mechanism is that, unlike their linear counterparts, SNAs have the ability to complex scavenger receptor proteins to facilitate endocytosis.
SNAs were shown to deliver small interfering RNA (siRNA) to treat glioblastoma multiforme in a proof-of-concept study using a mouse model. The SNAs target Bcl2Like12, a gene overexpressed in glioblastoma tumors, and silences the oncogene. The SNAs injected intravenously cross the blood–brain barrier and find their target in the brain. In the animal model, the treatment resulted in a 20% increase in survival rate and 3 to 4-fold reduction in tumor size. This SNA-based therapeutic approach establishes a platform for treating a wide range of diseases with a genetic basis via digital drug design (where a new drug is made by changing the sequence of nucleic acid on a SNA).
Immunotherapy agents
SNA properties, such as enhanced cellular uptake, multivalent binding, and endosomal delivery, are desirable for the delivery of immunomodulatory nucleic acids. In particular, SNAs have been used to deliver nucleic acids that agonize or antagonize toll-like receptors (proteins involved in innate immune signaling). The use of immunostimulatory SNAs has been shown to result in an 80-fold increase in potency, 700-fold higher antibody titers, 400-fold higher cellular responses to a model antigen, and improved treatment of mice with lymphomas compared to free oligonucleotides (not in SNA form). SNAs have also been used by Mirkin to introduce the concept of "rational vaccinology": that the chemical structure of an immunotherapy, as opposed to just the components alone, dictates its efficacy. This concept has put a new structural focus on engineering vaccines for a wide range of diseases. This finding opens the possibility that, with previous treatments, researchers had the right components in the wrong structural arrangement – a particularly important lesson, especially in the context of COVID-19.
Intracellular probes
NanoFlares utilize the SNA architecture for intracellular mRNA detection. In this design, alkanethiol-terminated antisense DNA strands (complementary to a target mRNA strand within cells) are attached to the surface of a gold nanoparticle. Fluorophore-labeled “reporter strands” are then hybridized to the SNA construct to form the NanoFlare. When the fluorophore labels are brought in close proximity of the gold surface, as controlled by programmable nucleic acid hybridization, their fluorescence is quenched (Fig. 6). After the cellular uptake of NanoFlares, the reporter strands can dehybridize from the NanoFlare when they are replaced by a longer, target mRNA sequence. Note that mRNA binding is thermodynamically favored since the strands holding the reporter sequence have greater overlap of their nucleotide sequence with the target mRNA. Upon reporter strand release, the dye fluorescence is no longer quenched by the gold nanoparticle core and increased fluorescence is observed. This method for RNA detection provides the only way to sort live cells based upon genetic content.
One publication questions the correlation between fluorescence intensities of SmartFlare probes and the levels of corresponding RNAs assessed by RT-qPCR. Another paper has discussed SmartFlare applicability in early equine conceptuses, equine dermal fibroblast cells, and trophoblastic vesicles, finding that SmartFlares may only be applicable for certain uses. Aptamer nanoflares have also been developed to bind to molecular targets other than intracellular mRNA. Aptamers, or oligonucleotide sequences that bind targets with high specificity and sensitivity, were first combined with the NanoFlare architecture in 2009. The arrangement of aptamers in an SNA geometry resulted in increased cellular uptake and detection of physiologically relevant changes in adenosine triphosphate (ATP) levels.
Materials synthesis
SNAs have been utilized to develop an entire new field of materials science – one that focuses on using SNAs as synthetically programmable building blocks for the construction of colloidal crystals (Fig. 7). In 2011, a landmark paper was published in Science that defines a set of design rules for making superlattice structures of tailorable crystallographic symmetry and lattice parameters with sub-nm precision. The complementary contact model (CCM) proposed in this work can be used to predict the thermodynamically favorable structure, which will maximize the number of hybridized DNA strands (contacts) between nanoparticles.
Design rules for colloidal crystals engineered with DNA are analogous to Pauling's Rules for ionic crystals, but ultimately more powerful. For example, when using atomic or ionic building blocks in the construction of materials, the crystal structure, symmetry, and spacing are fixed by atomic radii and electronegativity. However, in the nanoparticle-based system, crystal structure can be tuned independent of the nanoparticle size and composition by simply adjusting the length and sequence of the attached DNA. As a result, nanoparticle building blocks with the SNA geometry are often referred to as “programmable atom equivalents” (PAEs). This strategy has enabled the construction of novel crystal structures for several materials systems and even crystal structures with no mineral equivalents. To date, over 50 different crystal symmetries have been achieved using colloidal crystal engineering with DNA.
Lessons from atomic crystallization about macroscale structural features like crystal habit also translate to colloidal crystal engineering with DNA. The Wulff construction bound by the lowest surface energy facets can be achieved for certain nanoparticle symmetries by using a slow-cooling crystallization method. This concept was first demonstrated with a body-centered cubic symmetry, where the densest-packed planes were exposed on the surface, resulting in a rhombic dodecahedron crystal habit. Other habits such as octahedra, cubes, or hexagonal prisms have been realized using anisotropic nanoparticles or non-cubic unit cells. Colloidal crystals have also been grown through heterogeneous growth on DNA-functionalized substrates, where lithography can be used to define templates or specific crystal orientations.
Introducing anisotropy to the underlying nanoparticle core has also expanded the scope of structures that can be programmed using DNA. When shorter DNA designs are used with anisotropic nanoparticle cores, directional bonding interactions between DNA on particle facets can drive the formation of specific lattice symmetries and crystal habits. Localizing DNA to specific parts of a particle building block can also be achieved using biological cores, such as proteins with chemically anisotropic surfaces. Directional interactions and valency have been used to direct the formation of new lattice symmetries with protein cores that are difficult to access with inorganic particles. DNA origami frameworks borrowed from the structural DNA nanotechnology community have also been applied as cages for inorganic nanoparticle cores to impart valency and direct the formation of new lattice symmetries.
Colloidal crystals engineered using DNA often form crystal structures similar to ionic compounds, but a new method to access colloidal crystals with metallic-like bonding was recently reported in Science. Particle analogs of electrons in colloidal crystals can be made using gold nanoparticles with greatly reduced size and numbers of attached DNA strands. When combined with typical PAEs, these “electron equivalents” (EEs) roam through the lattice like electrons do in metals. This discovery can be used to access new alloy or intermetallic structures in colloidal crystals.
The ability to place nanoparticles of any composition and shape at any location in a well-defined crystalline lattice with nm-scale precision should have far-reaching implications in areas ranging from catalysis to photonics to energy. Catalytically active and porous materials have been assembled using DNA, and colloidal crystals engineered with DNA can also function as plasmonic photonic crystals with applications in nanoscale optical devices. Chemical stimuli, such as salt concentration, pH, or solvent, and physical stimuli like light have been harnessed to design stimuli-responsive colloidal crystals using DNA-mediated assembly.
Economic impact
The economic impact of SNA technology is substantial. Three companies have been founded based on SNA technology – Nanosphere in 2000, AuraSense in 2009, and AuraSense Therapeutics (now Exicure, Inc.) in 2011. Hundreds of millions of dollars have been invested in these companies and they have employed hundreds of people. The SmartFlares were commercialised by Merck Millipore between 2013 and 2018 for the detection of mRNAs in live cells before being withdrawn, as they in fact do not detect mRNAs in live cells. Nanosphere was one of the first nanotechnology-based biotechnology firms to go public in late 2007. It had burnt through over $412.5 million since inception before being sold for $58M in 2016 to Luminex. The FDA-cleared Verigene system is now sold by Luminex with accompanying FDA-cleared panel assays for bloodstream, respiratory tract, and gastrointestinal tract infections. It is being used for COVID-19 surveillance. Exicure went public in 2018 and is listed on the Nasdaq (XCUR). At the end of 2022, it was on its "death spiral".
References
Nucleic acids
Nanoparticles by surface chemistry | Spherical nucleic acid | [
"Chemistry"
] | 3,071 | [
"Biomolecules by chemical classification",
"Nucleic acids"
] |
36,135,685 | https://en.wikipedia.org/wiki/PLCopen | PLCopen is an independent organisation providing efficiency in industrial automation based on the needs of users. PLCopen members have concentrated on technical specifications around IEC 61131-3, creating specifications and implementations in order to reduce cost in industrial engineering. The outcome for example is standardized libraries for different application fields, harmonized language conformity levels and engineering interfaces for exchange. Experts of the PLCopen members are organized in technical committees and together with end users define such open standards.
PLCopen was founded in 1992, just after the worldwide programming standard IEC 61131-3 was published. The controls market at that time was very heterogeneous, with different programming methods for many different PLCs. IEC 61131-3 is a standard defining the programming languages for PLCs, embedded controls, and industrial PCs, harmonizing applications independently of specific dialects, but still based on known methods such as the textual programming languages Instruction List and Structured Text, the graphical programming languages Function Block Diagram and Ladder Diagram (a.k.a. Ladder logic), and the structuring tool Sequential Function Chart.
Today, IEC 61131-3 is a highly accepted programming standard and many industrial software and hardware companies offer products based on this standard, which in the end are used in many different machinery and other application fields.
Current topics are:
Motion control and
Safety functionality
XML data exchange format standardizing the base data of IEC projects in software systems, as used for instance by AutomationML
Benchmarking projects, in order to establish a sophisticated benchmark standard.
And in the field of communication, PLCopen has developed, together with the OPC Foundation, the mapping of the IEC 61131-3 software model to the OPC Unified Architecture information model.
External links
Industrial automation
Programmable logic controllers
Organizations established in 1992 | PLCopen | [
"Technology",
"Engineering"
] | 363 | [
"Industrial computing",
"Industrial engineering",
"Automation",
"Programmable logic controllers",
"Industrial automation"
] |
36,135,842 | https://en.wikipedia.org/wiki/NOTT-202 | NOTT-202 is a two-part chemical compound that is capable of selectively absorbing carbon dioxide. It is a metal–organic framework (MOF) that functions like a sponge, adsorbing selected gases at high pressures. Its creation was announced by scientists in 2012. The researchers claimed this structure was an entirely new class of porous material.
References
Carbon capture and storage
Metal-organic frameworks
Indium compounds | NOTT-202 | [
"Chemistry",
"Materials_science",
"Engineering"
] | 86 | [
"Porous polymers",
"Materials science stubs",
"Geoengineering",
"Materials science",
"Metal-organic frameworks",
"Organic compounds",
"Carbon capture and storage",
"Organic compound stubs",
"Organic chemistry stubs"
] |
36,136,100 | https://en.wikipedia.org/wiki/CS-ROSETTA | CS-ROSETTA is a framework for structure calculation of biological macromolecules on the basis of conformational information from NMR, which is built on top of the biomolecular modeling and design software called ROSETTA. The name CS-ROSETTA for this branch of ROSETTA stems from its origin in combining NMR chemical shift (CS) data with ROSETTA structure prediction protocols. The software package was later extended to include additional NMR conformational parameters, such as Residual Dipolar Couplings (RDC), NOE distance restraints, pseudocontact chemical shifts (PCS) and restraints derived from homologous proteins. This software can be used together with other molecular modeling protocols, such as docking to model protein oligomers. In addition, CS-ROSETTA can be combined with chemical shift resonance assignment algorithms to create a fully automated NMR structure determination pipeline. The CS-ROSETTA software is freely available for academic use and can be licensed for commercial use (installation guide). A software manual and tutorials are provided on the supporting website https://csrosetta.chemistry.ucsc.edu/.
The ROSETTA software is written in C++. CS-ROSETTA is distributed together with a toolbox written in Python that facilitates preparation of input files, setting up of large-scale calculations and post-processing of simulation output. CS-ROSETTA calculations require a substantial computational effort and are usually carried out with 200-2000 parallel processes on computer clusters using the Message Passing Interface (MPI) for communication.
References
External links
CS-ROSETTA
ROSETTA
Molecular modelling software | CS-ROSETTA | [
"Chemistry"
] | 331 | [
"Molecular modelling",
"Molecular modelling software",
"Computational chemistry software"
] |
36,139,328 | https://en.wikipedia.org/wiki/Electron%20precipitation | Electron precipitation (also called energetic electron precipitation or EEP) is an atmospheric phenomenon that occurs when previously trapped electrons enter the Earth's atmosphere, thus creating communications interference and other disturbances. Electrons trapped by Earth's magnetic field spiral around field lines to form the Van Allen radiation belt. The electrons come from the solar wind and may remain trapped above Earth for an indefinite period of time (in some cases years). When broadband very low frequency (VLF) waves propagate through the radiation belts, the electrons exit the radiation belt and "precipitate" (or travel) into the ionosphere (a region of Earth's atmosphere), where they collide with ions. Electron precipitation is regularly linked to ozone depletion. It is often caused by lightning strikes.
Process
An electron's gyrofrequency is the number of times per second it revolves around a magnetic field line. VLF waves traveling through the magnetosphere, caused by lightning or powerful transmitters, propagate through the radiation belt. When a VLF wave's frequency matches an electron's gyrofrequency, the resonant interaction scatters the electron out of the radiation belt and it "precipitates" (because it will not be able to re-enter the radiation belt) into the Earth's atmosphere and ionosphere.
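For a non-relativistic electron the gyrofrequency depends only on the strength B of the local magnetic field; the field value in the numerical illustration below is an assumed order-of-magnitude figure for the radiation-belt region rather than a number taken from this article:

f_{ce} = \frac{eB}{2\pi m_e} \approx 2.8\times 10^{10}\,\mathrm{Hz}\times B\,[\mathrm{T}],

so that, for B \sim 10^{-6}\,\mathrm{T}, f_{ce} \approx 28\,\mathrm{kHz}, which falls within the VLF band (3–30 kHz).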
Often, as an electron precipitates, it is directed into the upper atmosphere where it may collide with neutral particles, thus depleting the electron's energy. If an electron makes it through the upper atmosphere, it will continue into the ionosphere. Groups of precipitated electrons can change the shape and conductivity of the ionosphere by colliding with atoms or molecules (usually oxygen- or nitrogen-based particles) in the region. When it collides with an atom, the precipitating electron strips electrons from the atom, creating an ion. Collisions with air molecules also release photons, which produce a dim "aurora" effect. Because this occurs at such a high altitude, humans in aircraft are not affected by the radiation.
The ionization caused by electron precipitation in the ionosphere increases its electrical conductivity, which in turn brings the bottom of the ionosphere to a lower altitude. When this happens, ozone depletion occurs and certain communications may be disrupted. The lowered altitude of the ionosphere is temporary (unless electron precipitation is sustained), as the ions and electrons rapidly recombine into neutral particles.
Ozone depletion
Electron precipitation can lead to a substantial, short-term loss of ozone (of up to around 90%). The phenomenon also correlates with some long-term ozone depletion. Studies have revealed that 60 major electron precipitation events occurred from 2002 to 2012. Different measurement tools (see below) reported different average ozone depletions, ranging from 5% to 90%. However, some of the tools (specifically the ones that reported lower averages) did not take accurate readings or missed a couple of years. Typically, ozone depletion resulting from electron precipitation is more common during the winter season. The largest EEP event recorded in the studies covering 2002 to 2012 occurred in October 2003. This event caused an ozone depletion of up to 92%. It lasted for 15 days, and the ozone layer was fully restored a couple of days afterwards. EEP ozone depletion studies are important for monitoring the safety of Earth's environment and variations in the solar cycle.
Types
Electron precipitation can be caused by VLF waves from powerful transmitter based communications and lightning storms.
Lightning-induced Electron Precipitation (LEP)
Lightning-induced electron precipitation (also referred to as LEP) occurs when lightning strikes the Earth. When a bolt of lightning strikes the ground, an electromagnetic pulse (EMP) is released which can hit the trapped electrons in the radiation belt. The electrons are then dislodged and "precipitate" into the Earth's atmosphere. Because the EMP caused by lightning strikes is so powerful and spans a broad range of frequencies, it is known to cause more electron precipitation than transmitter-induced precipitation.
Transmitter-induced Precipitation of Electron Radiation (TIPER)
In order to cause electron precipitation, transmitters must produce very powerful waves with wavelengths from 10 to 100 km. Naval communication arrays often cause transmitter-induced precipitation of electron radiation (TIPER) because powerful waves are needed to communicate through water. These powerful transmitters are operating at almost all times of the day. Occasionally, these waves will have the exact heading and frequency needed to cause an electron to precipitate from the radiation belt.
Measurement Methods
Electron precipitation can be studied by using various tools and methods to calculate its effects on the atmosphere. Scientists use superposed epoch analysis to take into account the strengths and weaknesses of a large set of different measurement methods. They then use that collected data to calculate when an EEP event is taking place and its effects on the atmosphere.
Satellite measurements
In most cases, satellite measurements of electron precipitation are actually measurements of ozone depletion that is then linked to EEP events. The different instruments use a wide variety of methods to calculate ozone levels. While some of the methods may provide significantly inaccurate data, the average of all of the data combined is widely accepted as accurate.
GOMOS
The Global Ozone Monitoring by Occultation of Stars (GOMOS) is a measurement instrument aboard the European satellite Envisat. It measures ozone amounts by using the emitted electromagnetic spectrum from surrounding stars combined with trigonometric calculations in a process called stellar occultation.
SABER
The Sounding of the Atmosphere using Broadband Emission Radiometry (SABER) is a measurement instrument aboard NASA's Thermosphere Ionosphere Mesosphere Energetics Dynamics (TIMED) satellite. The instrument measures ozone (and other atmospheric conditions) with an infrared radiometer (spectral range 1.27 μm to 17 μm).
MLS
The Microwave Limb Sounder (MLS), an instrument aboard the Aura satellite, measures microwave emission from the Earth's upper atmosphere. This data can help researchers find the levels of ozone depletion to an accuracy of 35%.
MEPED
The Medium Energy Proton Electron Detector (MEPED) measures electrons in the Earth's radiation belt and can estimate the amount of precipitating electrons in the ionosphere.
Sub-ionospheric Detection
With sub-ionospheric detection, a signal is sent from a VLF transmitter through the radiation belt to a VLF receiver on the other end. The VLF signal causes some electrons to precipitate, thus disturbing the VLF signal before it reaches the receiver. The VLF receiver measures these disturbances, and the data are used to estimate the amount of precipitated electrons.
PIPER
PIPER is a Stanford-made photometer specifically designed for capturing the photons emitted when ionization occurs in the ionosphere. Researchers can use this data to detect EEP events and measure the amount of precipitated electrons.
X-rays
X-ray equipment can be used in conjunction with other equipment to measure electron precipitation. Because x-rays are emitted during electron collisions, x-rays found in the ionosphere can be correlated to EEP events.
VLF Remote Sensing
VLF remote sensing is a technique for monitoring electron precipitation by watching VLF transmissions from the U.S. Navy for "Trimpi events" (large changes in the phase and amplitude of the waves). Although this method can monitor electron precipitation, it cannot monitor the ionization caused by those electrons.
History
James Van Allen and his group at the State University of Iowa were the first to use vehicles carrying sensors to study electron fluxes precipitating into the atmosphere, using rockoon rockets. The rockets reached a maximum height of about 50 km. The soft radiation detected was later named after Van Allen in 1957.
The next advance in research on electron precipitation was made by Winckler and his group at the University of Minnesota, who used balloons to carry detectors into the atmosphere.
References
Electron | Electron precipitation | [
"Chemistry"
] | 1,622 | [
"Electron",
"Molecular physics"
] |
36,140,262 | https://en.wikipedia.org/wiki/%28Z%29-9-Tricosene | (Z)-9-Tricosene, known as muscalure, is an insect pheromone found in dipteran flies such as the housefly. Females produce it to attract males to mate. It is used as a pesticide, as in Maxforce Quickbayt by Bayer, luring males to traps to prevent them from reproducing.
Biological functions
(Z)-9-Tricosene is a sex pheromone produced by female house flies (Musca domestica) to attract males. In bees, it is one of the communication pheromones released during the waggle dance.
Uses
As a pesticide, (Z)-9-tricosene is used in fly paper and other traps to lure male flies, trap them, and prevent them from reproducing.
Biosynthesis
(Z)-9-Tricosene is biosynthesized in house flies from nervonic acid. The acid is converted into its acyl-CoA derivative and then reduced to the aldehyde (Z)-15-tetracosenal. The aldehyde is then converted to (Z)-9-tricosene with the loss of one carbon. The process is mediated by a cytochrome P450 enzyme and requires oxygen (O2) and nicotinamide adenine dinucleotide phosphate (NADPH).
Safety
Products containing (Z)-9-tricosene are considered safe for humans, wildlife, and the environment.
References
Insect pheromones
Pesticides
Alkenes
Hydrocarbons | (Z)-9-Tricosene | [
"Chemistry",
"Biology",
"Environmental_science"
] | 331 | [
"Insect pheromones",
"Hydrocarbons",
"Pesticides",
"Chemical ecology",
"Toxicology",
"Organic compounds",
"Alkenes",
"Biocides"
] |
36,142,343 | https://en.wikipedia.org/wiki/Gilbert%20tessellation | In applied mathematics, a Gilbert tessellation or random crack network is a mathematical model for the formation of mudcracks, needle-like crystals, and similar structures. It is named after Edgar Gilbert, who studied this model in 1967.
In Gilbert's model, cracks begin to form at a set of points randomly spread throughout the plane according to a Poisson distribution. Then, each crack spreads in two opposite directions along a line through the initiation point, with the slope of the line chosen uniformly at random. The cracks continue spreading at uniform speed until they reach another crack, at which point they stop, forming a T-junction. The result is a tessellation of the plane by irregular convex polygons.
A variant of the model that has also been studied restricts the orientations of the cracks to be axis-parallel, resulting in a random tessellation of the plane by rectangles.
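The growth process described above can be approximated numerically. The sketch below (a coarse, time-stepped approximation with arbitrarily chosen domain size, seed density and step length, rather than exact event-driven collision handling) grows each crack from a Poisson-distributed set of seeds until its tips meet another crack or leave the domain:

import numpy as np

def segments_cross(p1, p2, p3, p4):
    # True if segment p1-p2 properly crosses segment p3-p4.
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1, d2 = cross(p3, p4, p1), cross(p3, p4, p2)
    d3, d4 = cross(p1, p2, p3), cross(p1, p2, p4)
    return d1 * d2 < 0 and d3 * d4 < 0

def gilbert_tessellation(size=5.0, density=1.0, step=0.02, n_steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = rng.poisson(density * size * size)    # Poisson-distributed number of initiation points
    pts = rng.uniform(0.0, size, (n, 2))      # initiation points, uniform in the square
    ang = rng.uniform(0.0, np.pi, n)          # crack orientation, uniform at random
    dirs = np.stack([np.cos(ang), np.sin(ang)], axis=1)
    ext = np.zeros((n, 2))                    # how far each of the two tips has grown
    alive = np.ones((n, 2), dtype=bool)       # whether each tip is still growing
    for _ in range(n_steps):
        if not alive.any():
            break
        # Current full segment of every crack (tip in +direction to tip in -direction).
        ends = [(pts[i] + ext[i, 0] * dirs[i], pts[i] - ext[i, 1] * dirs[i]) for i in range(n)]
        for i in range(n):
            for s, sign in ((0, 1.0), (1, -1.0)):
                if not alive[i, s]:
                    continue
                tip = pts[i] + sign * ext[i, s] * dirs[i]
                new_tip = tip + sign * step * dirs[i]
                stopped = not (0.0 <= new_tip[0] <= size and 0.0 <= new_tip[1] <= size)
                for j in range(n):
                    if j != i and segments_cross(tip, new_tip, ends[j][0], ends[j][1]):
                        stopped = True        # the advancing tip has hit another crack
                        break
                if stopped:
                    alive[i, s] = False       # a T-junction: this tip stops for good
                else:
                    ext[i, s] += step
    return pts, dirs, ext

# Each crack i is the segment from pts[i] - ext[i, 1] * dirs[i] to pts[i] + ext[i, 0] * dirs[i].
pts, dirs, ext = gilbert_tessellation()

Restricting the random orientations to 0 and π/2 instead of the uniform choice gives the axis-parallel variant mentioned above.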
Researchers comparing crack-formation models write that, in comparison to alternative models in which cracks may cross each other or in which cracks are formed one at a time rather than simultaneously, "most mudcrack patterns in nature topologically resemble" the Gilbert model.
References
Tessellation
Statistical models
Patterned grounds
Spatial processes | Gilbert tessellation | [
"Physics",
"Mathematics"
] | 242 | [
"Tessellation",
"Planes (geometry)",
"Euclidean plane geometry",
"Symmetry"
] |
31,933,021 | https://en.wikipedia.org/wiki/Lucideon | Lucideon (formerly Ceram) is an independent materials development, testing and assurance company based in Stoke-on-Trent in the UK. Lucideon owns testing facilities around the world.
History
The British Refractories Research Association was formed in 1920. The pottery industry was required by the Import Duties Advisory Committee in 1937 to create a research association, so the British Pottery Research Association was formed in 1937. The two combined in April 1948 as the British Ceramic Research Association.
The original main building on Queens Road in Penkhull was opened by the Duke of Edinburgh in December 1951. In May 1986 it changed its name to British Ceramic Research Ltd, having been incorporated as a company on 18 November 1985.
From the late 1990s the company traded under the abbreviated name Ceram. On 1 February 2014 the company name changed to Lucideon Limited.
Structure
Lucideon is situated south of the University Hospital of North Staffordshire.
Lucideon incorporates:
UK Headquarters - (formerly CERAM Research Ltd)
Assurance Services - Lucideon CICS Limited
US Laboratories (formerly M+P Labs) - based in Schenectady, New York, and Greenville, South Carolina, with offices in Raleigh, North Carolina
Lucideon's laboratories and techniques are accredited by the United Kingdom Accreditation Service (UKAS).
Function
Lucideon provides materials development, technologies, consultancy and testing and analysis to a diverse range of industries; principally healthcare, construction, ceramics, aerospace, nuclear and power generation.
References
External links
Lucideon Website
Assurance Division
Company History
Ceramic engineering
Ceramics manufacturers of England
Companies based in Stoke-on-Trent
1948 establishments in England
Materials science institutes
Materials testing
Organizations established in 1948
Science and technology in Staffordshire
Scientific organisations based in the United Kingdom | Lucideon | [
"Materials_science",
"Engineering"
] | 350 | [
"Materials science",
"Materials testing",
"Materials science organizations",
"Materials science institutes",
"Ceramic engineering"
] |
31,938,492 | https://en.wikipedia.org/wiki/AccuRev%20SCM | AccuRev is a software configuration management application developed by AccuRev, Inc. and was first released in 1999. In December 2013 AccuRev was acquired by Micro Focus.
Overview
AccuRev is a centralized version control system which uses a client–server model. Communication is performed via TCP/IP using a proprietary protocol. Servers function as team servers, continuous integration servers, or build servers. AccuRev is built around a stream-based architecture in which streams form a hierarchical structure of code changes where parent streams pass on certain properties to child streams. Developers make changes using command line functions, the Java GUI, the web interface, or one of the IDE plug-ins (Eclipse, Visual Studio, IntelliJ IDEA).
Characteristics
Streams and parallel development
AccuRev captures and controls the relationships between code bases in parallel efforts using a stream-based architecture. This allows teams to safely store work and test it before sharing it with others; code is automatically merged, or "inherited", between streams and teams once code changes are shared.
Private developer history
AccuRev has a two-step check-in process. Users can check in code privately to their workspace before sharing it with the rest of the group.
Change packages
AccuRev integrates with various ITS and project management tools. History from check-ins and promotions is tied to issues. Most SCM functions can be done at the issue level instead of by file and directory.
Distributed development
AccuRev enables remote stream structures and replication for distributed teams. Replica servers function as a local cache, with write operations passed on afterwards.
Automated merging
Streams in AccuRev automatically share and merge code with each other. This is a main distinction between streams and branches.
See also
List of revision control software
Comparison of revision control software
References
Configuration management
Proprietary version control systems
Micro Focus International | AccuRev SCM | [
"Engineering"
] | 370 | [
"Systems engineering",
"Configuration management"
] |
31,940,112 | https://en.wikipedia.org/wiki/Inogatran | Inogatran (INN) is a low molecular weight peptidomimetic thrombin inhibitor. Inogatran was developed for the potential treatment of arterial and venous thrombotic diseases.
References
Direct thrombin inhibitors
Guanidines
Cyclohexyl compounds | Inogatran | [
"Chemistry"
] | 61 | [
"Guanidines",
"Functional groups"
] |
41,754,493 | https://en.wikipedia.org/wiki/Mike%20Little | Mike Little (born 12 May 1962) is an English web developer and writer. He is the co-founder of the free and open source web publishing software WordPress.
Biography
Mike Little was born in Manchester, England in 1962 to a Nigerian father, who was a mathematics lecturer and musician, and an English mother who worked as a primary school teacher. Little was placed into foster care when he was four months of age, and was later adopted by the same family. He grew up on a council estate in Brinnington, Stockport, and was educated at Stockport School.
In 2003, Little and Matt Mullenweg started working on a project in which they built on b2/cafelog and later named it WordPress, releasing the first version on 27 May 2003.
Little states that, despite not being invited to join his co-founder's for-profit business Automattic, he and Mullenweg remain on good terms. He clarified: "I don’t want it to sound like he cheated me out of something or ripped me off in some way. He didn’t."
In June 2013, Little was awarded the SAScon's "Outstanding Contribution to Digital" award for his part in co-founding and developing WordPress.
Little has been described as "modest" and living in "virtual anonymity". He has one daughter. He identifies as a follower of Stoicism and a humanist, and in 2021, he became a patron of charity Humanists UK.
Bibliography
Douglass, Robert T.; Little, Mike; Smith, Jared W. (2006). Building Online Communities With Drupal, PhpBB, and WordPress
References
External links
Living people
21st-century Black British people
English people of Nigerian descent
Social entrepreneurs
Web development
Computer programmers
Web developers
1962 births
WordPress
Free software programmers
People from Stockport
British computer programmers
English humanists
21st-century English male writers
English adoptees | Mike Little | [
"Technology",
"Engineering"
] | 389 | [
"Software engineering",
"Computing stubs",
"Computer specialist stubs",
"Web development"
] |
39,022,062 | https://en.wikipedia.org/wiki/Ermakov%E2%80%93Lewis%20invariant | Many quantum mechanical Hamiltonians are time dependent. How to solve problems with an explicit time dependence remains an open subject. It is therefore important to look for constants of motion, or invariants, for problems of this kind. For the (time-dependent) harmonic oscillator it is possible to write several invariants, among them the Ermakov–Lewis invariant, which is developed below.
The time-dependent harmonic oscillator Hamiltonian reads (with ħ = 1 and unit mass)

H(t) = \frac{1}{2}\left[\hat{p}^2 + \Omega^2(t)\,\hat{q}^2\right].

It is well known that an invariant for this type of interaction has the form

I = \frac{1}{2}\left[\left(\frac{\hat{q}}{\rho}\right)^2 + \left(\rho\hat{p} - \dot{\rho}\hat{q}\right)^2\right],

where \rho obeys the Ermakov equation

\ddot{\rho} + \Omega^2(t)\,\rho = \frac{1}{\rho^3}.

The above invariant is the so-called Ermakov–Lewis invariant. It is easy to show that I may be related to the time-independent harmonic oscillator Hamiltonian via a unitary transformation of the form

\hat{T} = e^{\frac{i}{2}\ln\rho\,(\hat{q}\hat{p} + \hat{p}\hat{q})}\, e^{-\frac{i\dot{\rho}}{2\rho}\hat{q}^2}

as

I = \hat{T}^{\dagger}\,\frac{\hat{p}^2 + \hat{q}^2}{2}\,\hat{T}.

This allows an easy way to express the solution of the Schrödinger equation for the time-dependent Hamiltonian.
The first exponential in the transformation is the so-called squeeze operator.
This approach may make it simpler to treat problems such as the quadrupole ion trap, where an ion is trapped in a harmonic potential with time-dependent frequency. The transformation presented here is then useful for taking such effects into account.
The geometric meaning of this invariant can be realized within the quantum phase space.
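As an illustrative numerical check (a sketch only; the frequency profile Ω²(t) = 1 + 0.5 cos(0.3 t) and the initial conditions are arbitrary choices for this example), the constancy of the classical analogue of I can be verified by integrating the equations of motion together with the Ermakov equation:

import numpy as np
from scipy.integrate import solve_ivp

def omega_sq(t):
    # Arbitrary smooth time-dependent frequency, chosen for illustration only.
    return 1.0 + 0.5 * np.cos(0.3 * t)

def rhs(t, y):
    # State vector: q, p = dq/dt, rho, rhodot = drho/dt.
    q, p, rho, rhodot = y
    return [p,
            -omega_sq(t) * q,                    # q'' + Omega^2(t) q = 0
            rhodot,
            1.0 / rho**3 - omega_sq(t) * rho]    # Ermakov equation

def ermakov_lewis(y):
    # Classical Ermakov-Lewis invariant I = (1/2)[(q/rho)^2 + (rho*p - rhodot*q)^2].
    q, p, rho, rhodot = y
    return 0.5 * ((q / rho) ** 2 + (rho * p - rhodot * q) ** 2)

sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 1.0, 0.0],
                dense_output=True, rtol=1e-10, atol=1e-12)
I_values = np.array([ermakov_lewis(sol.sol(t)) for t in np.linspace(0.0, 50.0, 500)])
print("relative drift of I:", np.ptp(I_values) / I_values[0])  # should be tiny

The printed drift reflects only integration error; analytically dI/dt = 0 whenever q and ρ satisfy the oscillator and Ermakov equations.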
History
It was proposed in 1880 by Vasilij Petrovich Ermakov (1845–1922). An English translation of the paper has been published.
In 1966, Ralph Lewis rediscovered the invariant using Kruskal's asymptotic method. He published the solution in 1967.
References
Quantum mechanics | Ermakov–Lewis invariant | [
"Physics"
] | 314 | [
"Theoretical physics",
"Quantum mechanics"
] |
39,022,230 | https://en.wikipedia.org/wiki/Buccal%20administration | Buccal administration is a topical route of administration by which drugs held or applied in the buccal () area (in the cheek) diffuse through the oral mucosa (tissues which line the mouth) and enter directly into the bloodstream. Buccal administration may provide better bioavailability of some drugs and a more rapid onset of action compared to oral administration because the medication does not pass through the digestive system and thereby avoids first pass metabolism. Drug forms for buccal administration include tablets and thin films.
As of May 2014, the psychiatric drug asenapine; the opioid drugs buprenorphine, naloxone, and fentanyl; the cardiovascular drug nitroglycerin; the nausea medication prochlorperazine; the hormone replacement therapy testosterone; and nicotine as a smoking cessation aid were commercially available in buccal forms, as was midazolam, an anticonvulsant, used to treat acute epileptic seizures.
Buccal administration of vaccines has been studied, but there are challenges to this approach due to immune tolerance mechanisms that prevent the body from overreacting to immunogens encountered in the course of daily life.
Tablets
Buccal tablets are a type of solid dosage form administered orally in between the gums and the inner linings of the cheek. These tablets, held within the buccal pouch, either act on the oral mucosa or are rapidly absorbed through the buccal mucosal membrane. Since drugs "absorbed through the buccal mucosa bypass gastrointestinal enzymatic degradation and hepatic first-pass effect", prescribing buccal tablets is increasingly common among healthcare professionals.
Buccal tablets serve as an alternative drug delivery in patients where compliance is a known issue, including those who are unconscious, nauseated, or having difficulty in swallowing (i.e. dysphagia). A wide variety of these drugs are available on the market to be prescribed in hospitals and other healthcare settings, including common examples like Corlan, Fentora, and Buccastem.
The most common route for drug transport through the buccal mucosa is the paracellular pathway. Most hydrophilic drugs permeate the cheek linings via the paracellular pathway through the mechanism of passive diffusion, and hydrophobic drugs are transported through the transcellular pathway. This route of administration is beneficial for mucosal administration and transmucosal administration. Buccal tablets are typically formulated through the direct compression of drug, powder mixture, swollen polymer, and other agents that assist in processing.
Buccal tablets offer many advantages in terms of accessibility, ease of administration and withdrawal, and hence may improve patient compliance. Notable drawbacks of buccal tablets include the hazard of choking by involuntarily swallowing the tablet and irritation of the gums. Caution should be exercised along with counselling from medical practitioners before use of these tablets.
Clinical uses and common drug examples
With recent advances on buccal tablets and in conditions where the conventional oral route (i.e. swallowing of tablet) cannot be delivered effectively, some commonly prescribed buccal tablets available in healthcare settings are listed below as examples.
Hydrocortisone
Hydrocortisone is a corticosteroid that is clinically used to relieve the pain and discomfort of mouth ulcers and functions to speed the healing of mouth ulcers. Common side effects include: oral thrush, visual disturbances (e.g. blurry vision), worsening of diabetes, worsening of mouth infections, and allergic reactions (e.g. skin rash). Hydrocortisone is contraindicated in patients hypersensitive to hydrocortisone and those with mouth ulcers caused by dentures or infection as it can worsen the severity of mouth ulcers.
Some cautions and remarks include needing to gargle and spit water once tablet is fully dissolved to minimise risk of oral thrush, prolonged use may lead to withdrawal symptoms, chewing and swallowing of the tablet may limit its efficacy and give rise to additional side effects, and caution with CYP3A4 inhibitors.
Fentanyl
Fentanyl is an opioid analgesic used for the treatment of breakthrough pain in cancer patients who are already receiving and/or are tolerant to maintenance opioid therapy for chronic cancer pain. Common side effects include: nausea, vomiting, headache, constipation and drowsiness. Fentanyl is contraindicated in patients hypersensitive to fentanyl, opioid non-tolerant patients, management of acute or postoperative pain, and those with severe hypotension or severe obstructive airway diseases (e.g. COPD).
Some cautions include the need to keep tablets out of the sight and reach of children; the tablets must not be sucked, chewed or swallowed. Other remarks include the need for caution when the drug is administered to patients with hepatic or renal impairment, drug interactions with CYP3A4 inducers and inhibitors, and the fact that co-administration with CNS sedative agents (e.g. antihistamines) will increase CNS side effects.
Prochlorperazine maleate
Prochlorperazine maleate belongs to the classes of antiemetics and antipsychotics. These buccal tablets are administered for the treatment of severe nausea and vomiting associated with migraine, as well as for the management of symptoms of schizophrenia. Side effects typically seen in patients using prochlorperazine maleate tablets include drowsiness, blurred vision, dry mouth, and headache. In rare cases, these tablets may cause serious allergic reactions (i.e. anaphylaxis). Prochlorperazine maleate is contraindicated in certain patient groups, including those with hypersensitivity to prochlorperazine maleate and those with certain diseases such as glaucoma, epilepsy and Parkinson's disease. The tablets are also avoided in those with hepatic and prostate gland problems.
Special caution is taken in patients at high risk of blood clots and stroke, along with associated risk factors (e.g. high blood pressure and high cholesterol levels). Those taking prochlorperazine maleate should avoid exposure to direct sunlight because of photosensitivity, and caution is needed with certain drugs that are sedative or cause dry mouth (e.g. anticholinergics) or that act on the heart (e.g. antihypertensives and anticoagulants). Other remarks include that the tablets are most effective when taken after food and that withdrawal symptoms are possible if they are stopped abruptly.
Mechanism of action
The buccal mucosa, along with the gingival and sublingual mucosa, is part of the oral mucosa. It is composed of non-keratinised tissue. Unlike intestinal and nasal mucosae, it lacks tight junctions and is instead equipped with loose intercellular links of desmosomes, gap junctions and hemidesmosomes. While it is less permeable than the sublingual route, buccal administration is still capable of creating local or systemic effects following drug administration. From the oral cavity, buccal tablets exert their effect by entering the bloodstream directly through the internal jugular vein into the superior vena cava, avoiding acidic hydrolysis in the gastrointestinal tract.
There are two major routes for drug transportation through the buccal mucosa: transcellular and paracellular pathways.
Small hydrophobic molecules and other lipophilic compounds mostly move across the buccal mucosa via the transcellular pathway. Drugs are transferred via the transcellular pathway through facilitated diffusion for polar or ionic compounds, passive diffusion for low-molecular-weight molecules, or transcytosis and endocytosis for macromolecules. The physicochemical properties of the drug (for example, its oil/water partition coefficient, molecular weight and structural conformation) determine whether the molecules are transported through the transcellular pathway.
As the cell membrane is lipophilic, it is more difficult for drugs that are hydrophilic to permeate the membrane. Hence, the excipients of the formulation and the phospholipid bilayer assist in enhancing the diffusion of hydrophilic compounds (i.e. peptides, proteins, macromolecules).
Generally, small low-molecular-weight hydrophilic compounds diffuse across the buccal epithelium through the paracellular pathway via passive diffusion. The extracellular amphiphilic lipid matrix proves to be a major barrier for macromolecular hydrophilic compounds. After the administration of the buccal tablet, it must transport either through the epithelial layers to achieve its effect on the systemic circulation (systemic effect) or remain at a target site to elicit a local effect.
Benefits and limitations
Benefits
Buccal tablets offer many advantages over other solid dosage forms also intended for oral administration (e.g. enteric-coated tablets, chewable tablets, and capsules).
Buccal tablets can be considered in patients who experience difficulty in swallowing, since these tablets are absorbed into the blood stream between the gum and cheek. Difficulty in swallowing can occur in all age groups, especially in young infants and the elderly community. Buccal tablets are also used in unconscious patients. Additionally, in the case of accidental swallowing of a buccal tablet, adverse effects are minimal as most buccal drugs cannot survive hepatic first-pass metabolism.
Compared to orally ingested capsules and tablets, buccal tablets provide a more rapid onset of action because the oral mucosa is highly vascularised. Buccal tablets are also used in emergency situations because they can exert their effects quickly.
Buccal tablets directly enter the systemic circulation, bypassing the gastrointestinal tract and first-pass metabolism in the liver. As such, patients can take a reduced overall dose to minimise symptoms. In addition, buccal tablets can be removed if adverse reactions appear.
Limitations
In general, many drugs are not suitable to be delivered via the buccal mucosa due to the small dose criteria. Buccal tablets are rarely used in healthcare settings due to unwanted properties that may limit patient compliance, for example, unpleasant taste and irritation of the oral mucosa. These undesired characteristics may lead to accidental swallowing or involuntary expulsion of the buccal tablet. Buccal tablets are also not preferred for drugs that require extended-release.
Absorption of drugs via the buccal membrane may not be suitable for all patients. Due to possible undesirable side effects and loss of drug effectiveness, buccal tablets must not be crushed, chewed, or swallowed under any circumstances. As such, buccal tablets are not always appropriate for patients (e.g. individuals on enteral tube feeding). It is also noted that eating, drinking or smoking should be avoided until the buccal tablet is fully dissolved to prevent drug efficacy changes and concerns of choking.
Formulation and manufacturing
Buccal tablets are dry formulations that attain bioadhesion through dehydrating local mucosal surfaces. Many bioadhesive buccal tablet formulations are created through the direct compression method with a release retardant and swollen polymer, and are designed to either release the drug in a unidirectional or multidirectional manner into the saliva.
Conventional dosage forms are unable to ensure therapeutic drug levels in the circulation and the mucosa for mucosal and transmucosal administration because of the washing effect of saliva, and the mechanical stress of the oral cavity. These two mechanisms act as a physiological removal system that removes the formulation from the mucosa, resulting in a decreased exposure time and unpredictable pharmacological profile of the drug's distribution.
This effect can be countered by prolonging the contact between the active substance from the buccal tablet and the mucosa, the tablet should contain: mucoadhesive agents, penetration enhancers, enzyme inhibitors and solubility modifiers.
The mucoadhesive agents assist in the maintenance of prolonged contact between the drug with the absorption site. Penetration enhancers improve the ability of the drug to permeate the mucosa for transmucosal delivery or penetrate into the layers of the epithelium for mucosal delivery. Enzyme inhibitors partake in the protection of the drug from mucosal enzyme degradation, and solubility modifiers increase the solubility of drugs that are poorly absorbed.
See also
Sublabial administration
Sublingual administration
Route of administration
Pharmacology
References
External links
Generex Buccal Morphine and Fentanyl research
Mouth
Routes of administration | Buccal administration | [
"Chemistry"
] | 2,645 | [
"Pharmacology",
"Routes of administration"
] |
39,023,855 | https://en.wikipedia.org/wiki/Metal%20powder | Metal powder is a metal that has been broken down into a powder form. Metals that can be found in powder form include aluminium powder, nickel powder, iron powder and many more. There are four different ways metals can be broken down into this powder form:
Direct reduction
Gas atomization
Liquid atomization
Centrifugal atomization
Processes
The following processes can be used to produce metal powder:
Direct reduction is the result of blending carbon with iron oxide ore, heating the mixture, removing the sponge iron from the carbon, grinding it, annealing it, and regrinding to make the powder form usable for manufacturing.
Gas atomization occurs when a molten metal is passed through a passageway to a gas-filled chamber that cools the metal. As it falls, it is collected and annealed into a powder.
Liquid atomization is similar to gas atomization, but instead the metal is sprayed with high-pressure liquid which solidifies the droplets more rapidly. This results in the powder being more porous, smaller, and cleaner.
Centrifugal atomization occurs when a metal is put into a chamber as a rod and electrically melted, at the end of the rod, to produce melted droplets that fall into another chamber and then solidify.
Types and Uses
In the early 1900s, metal powder was used as currency in the United States of America. Depending on the market, metal powder can be more valuable than gold. The following are the types and uses of metal powder:
Aluminum powder: Fireworks, metallic paints, manufacturing in solar cells in the green energy sector
Bismuth powder: Production of batteries, welding rods, creating alloys
Cadmium powder: Glazed used on ceramics, transparent conductors, nickel-cadmium batteries
Iron powder: Magnetic products, printing, brake pads, certain types of dyes and stains
Nickel powder: used for corrosion resistance, such as in the marine industry
Raney nickel: used as a catalyst
Platinum black: used as a catalyst
Titanium powder: used in aerospace applications, medical implants and sporting goods.
See also
Metal swarf
Powder metallurgy
Pressing
Sintering
References
Metals
Metal | Metal powder | [
"Physics",
"Chemistry"
] | 426 | [
"Metals",
"Materials stubs",
"Materials",
"Powders",
"Matter"
] |
39,027,004 | https://en.wikipedia.org/wiki/NASA%20Human%20Exploration%20Rover%20Challenge | The NASA Human Exploration Rover Challenge, prior to 2014 referred to as the Great Moonbuggy Race, is an annual competition for high school and college students to design, build, and race human-powered, collapsible vehicles over simulated lunar/Martian terrain. NASA sponsors the competition, first held in 1994, and, since 1996, the U.S. Space & Rocket Center hosts.
Students created vehicles dubbed "moonbuggies" to face challenges similar to those engineers at NASA's Marshall Space Flight Center addressed in preparation for Apollo 15. On that mission, on July 31, 1971, the first Lunar Roving Vehicle extended the range of astronauts on the Moon to allow for further exploration than was otherwise possible. Two other rovers were sent to the Moon on subsequent missions.
With the 2014 changes in the contest, the motivation changed to mimicking design challenges faced by engineers designing rovers for future exploration missions to a variety of celestial bodies.
The first race, in 1994, was held on July 16, the 25th anniversary of the Apollo 11 launch. It featured six college teams who competed on the same course as had been used to test the lunar rovers previously. The University of New Hampshire finished first, in 18 minutes 55 seconds for the course with twelve obstacles. The prize was a trip for six team members to see a Space Shuttle launch. Other teams from the University of Puerto Rico at Humacao, Texas A & M University, the University of Alabama in Huntsville, Georgia Institute of Technology and Indiana University/Purdue University at Indianapolis participated.
Subsequent races have been held in April. In 1996, the competition was moved to a course at the U.S. Space & Rocket Center; high school teams also began competing.
Rules
The rules change year by year, but are largely summarized thus:
A team of at most six people designs, builds, and races the same vehicle.
Two of the six must ride and propel the vehicle through the course.
Riders must be one male and one female.
The moonbuggy (pre-2014) had to fit into a cube of a specified size and be no more than 4 ft wide; beginning in 2014, the rover instead had to fit within a specified cube.
The vehicle needs to carry a simulated high-gain antenna, camera, and other instrumentation, which together must take up at least a specified minimum volume.
Various other dimensional and safety criteria apply.
Time penalties are assessed for touching the ground, avoiding obstacles, and other rule violations.
Since 2016, the teams have to design and fabricate their own non-pneumatic tires/wheels. Purchasing a commercially available product will lead to disqualification.
Course
The course is designed to test rovers for stability over varying simulated lunar or extraterrestrial terrain—bumpy, sloped, and rocky—including some tight turns. The first course was the actual track used by Mobility Test Articles, auditioning versions of Lunar Roving Vehicles that were used on the Moon. For the third race the course was moved a few miles, to the U.S. Space & Rocket Center. There, the track has taken varying paths through the rocket park and around the permanent lunar crater feature at the museum. Each year, the obstacles change slightly.
The obstacles are constructed of discarded tires, plywood, some 20 tons of gravel and five tons of sand, all to simulate lunar craters, basins, and rilles. The contest is challenging: in 2009, 29 of 68 teams competing did not complete the race. Sometimes the placement of the obstacle is an issue, with some teams hitting obstacles too fast after a downhill stint.
Before students tackle the race course, their vehicles must pass inspection. At the team's start time, the two riding students must carry the vehicle, collapsed to fit within the specified cube, a short distance to the course, then expand the rover and ride it across the obstacles and along the track, avoiding cones marking the edges of the course, bales of hay, and other obstructions, while successfully navigating the modest hills of the terrain and obstacles. After the race, another inspection assesses the condition of the vehicle, with time penalties if parts are missing.
Contestants
Contestants are high school and university students largely from the United States, including Puerto Rico. Teams have also come from Canada, Mexico, India, Germany, and Romania to participate.
Awards
Numerous awards are offered each year, some with significant prizes. First place college winners have received trips to Shuttle launches and cash prizes, while others have received weekends at Space Camp. In 2009, there were 11 categories for special recognition with 19 recipients thereof. Consistent from the beginning have been awards for fastest time and for best design. Other awards acknowledge simplicity of design, safety, tenacity, team spirit, improvement over previous years' entries, and exceptional new entries.
Winners
This list gives winners for time(t) and design(d) awards which have been consistently offered since the start. Awards were often also given for other categories but they are not included here in the interest of readability.
Key: t - first place for time; d - best design award; f - featherweight award.
References
External links
NASA Great Moonbuggy Race official site
Public Affairs site
Flickr images
Science competitions
U.S. Space & Rocket Center
Vehicle design
Competitions in the United States
Engineering competitions
Combination events | NASA Human Exploration Rover Challenge | [
"Technology",
"Engineering"
] | 1,054 | [
"Engineering competitions",
"Vehicle design",
"Science competitions",
"Science and technology awards",
"Design"
] |
39,032,561 | https://en.wikipedia.org/wiki/Design-Oriented%20Programming | Design-oriented programming is a way to author computer applications using a combination of text, graphics, and style elements in a unified code-space. The goal is to improve the experience of program writing for software developers, boost accessibility, and reduce eye-strain. Good design helps computer programmers to quickly locate sections of code using visual cues typically found in documents and web page authoring.
User interface design and graphical user interface builder research are the conceptual precursors to design-oriented programming languages. The former focus on the software experience for end users of the application and keep editing of the user interface separate from the code-space. The important distinction is that design-oriented programming concerns the user experience of programmers themselves and fully merges all elements into a single unified code-space.
See also
User interface design
Graphical user interface builder
Elements of graphical user interfaces
Visual programming language
Experience design
User experience design
Usability
References
Visual programming
Intro to Design-Oriented Programming Languages
Computer programming | Design-Oriented Programming | [
"Technology",
"Engineering"
] | 191 | [
"Software engineering",
"Computer programming",
"Computers"
] |
30,950,031 | https://en.wikipedia.org/wiki/Hygrophorus%20nemoreus | Hygrophorus nemoreus is an edible species of fungus in the genus Hygrophorus.
References
External links
Fungi described in 1801
nemoreus
Taxa named by Christiaan Hendrik Persoon
Fungus species | Hygrophorus nemoreus | [
"Biology"
] | 49 | [
"Fungi",
"Fungus species"
] |
30,950,412 | https://en.wikipedia.org/wiki/Chronostasis | Chronostasis (from Greek , , 'time' and , , 'standing') is a type of temporal illusion in which the first impression following the introduction of a new event or task-demand to the brain can appear to be extended in time. For example, chronostasis temporarily occurs when fixating on a target stimulus, immediately following a saccade (i.e., quick eye movement). This elicits an overestimation in the temporal duration for which that target stimulus (i.e., postsaccadic stimulus) was perceived. This effect can extend apparent durations by up to half a second and is consistent with the idea that the visual system models events prior to perception.
A common occurrence of this illusion is known as the stopped-clock illusion, where the second hand of an analog clock appears to stay still for longer than normal when looking at it for the first time.
This illusion can also occur in the auditory and tactile domain. For instance, a study suggests that when someone listens to a ringing tone through a telephone, while repetitively switching the receiver from one ear to the other, it causes the caller to overestimate the temporal duration between rings.
Mechanism of action
Overall, chronostasis occurs as a result of a disconnection in the communication between visual sensation and perception. Sensation, information collected from our eyes, is usually directly interpreted to create our perception. This perception is the collection of information that we consciously interpret from visual information. However, quick eye movements known as saccades disrupt this flow of information. Because research into the neurology associated with visual processing is ongoing, there is renewed debate regarding the exact timing of changes in perception that lead to chronostasis. However, below is a description of the general series of events that lead to chronostasis, using the example of a student looking up from his desk toward a clock in the classroom.
The eyes receive information from the environment regarding one particular focus. This sensory input is sent directly to the visual cortex to be processed. After visual processing, we consciously perceive this object of focus. In the context of a student in a classroom, the student's eyes focus on a paper on his desk. After his eyes collect light reflected off the paper and this information is processed in his visual cortex, the student consciously perceives the paper in front of him.
Following either a conscious decision or an involuntary perception of a stimulus in the periphery of the visual field, the eyes intend to move to a second target of interest. For the student described above, this may occur as he decides that he wishes to check the clock at the front of the classroom.
The muscles of the eye contract and it begins to quickly move towards the second object of interest through an action known as a saccade. As soon as this saccade begins, a signal is sent from the eye back to the brain. This signal, known as an efferent cortical trigger or efference copy, communicates to the brain that a saccade is about to begin. During saccades, the sensitivity of visual information collected by the eyes is greatly reduced and, thus, any image collected during this saccade is very blurry. In order to prevent the visual cortex from processing blurred sensory information, visual information collected by the eyes during a saccade is suppressed through a process known as saccadic masking. This is also the same mechanism used to prevent the experience of motion blur.
Following the completion of the saccade, the eyes now focus on the second object of interest. As soon as the saccade concludes, another efferent cortical trigger is sent from the eyes back to the brain. This signal communicates to the brain that the saccade has concluded. Prompted by this signal, the visual cortex once again resumes processing visual information. For the student, his eyes have now reached the clock and his brain's visual cortex begins to process information from his eyes. However, this second efferent trigger also communicates to the brain that a period of time has been missing from perception. To fill this gap in perception, visual information is processed in a manner known as neural antedating or backdating. In this visual processing, the gap in perception is "filled in" with information gathered after the saccade. For the student, the gap of time that occurred during the saccade is substituted with the processed image of the clock. Thus, immediately following the saccade, the second hand of the clock appears to stop in place before moving.
In studying chronostasis and its underlying causes, there is potential bias in the experimental setting. In many experiments, participants are asked to perform some sort of task corresponding to sensory stimuli. This could cause the participants to anticipate stimuli, thus leading to bias. Also, many mechanisms involved in chronostasis are complex and difficult to measure. It is difficult for experimenters to observe the perceptive experiences of participants without "being inside their mind." Furthermore, experimenters normally do not have access to the neural circuitry and neurotransmitters located inside the braincases of their subjects.
Modulating factors
Because of its complexity, there are various characteristics of stimuli and physiological actions that can alter the way one experiences chronostasis.
Saccadic amplitude
The greater the amplitude (or duration) of a saccade, the more severe the resulting overestimation. The further the student in the above example's eyes must travel in order to reach the clock, the more dramatic his perception of chronostasis. This connection supports the assertion that overestimation occurs in order to fill in the length of time omitted by saccadic masking. This would mean that, if the saccade lasted for a longer period of time, there would be more time that needed to be filled in with overestimation.
Attention redirection
When shifting focus from one object to a second object, the saccadic movement of one's eyes is also accompanied by a conscious shift of attention. In the context of the stopped clock illusion, not only do your eyes move, but you also shift your attention to the clock. This led researchers to question whether the movement of the eyes or simply the shift of the observer's attention towards the second stimulus initiated saccadic masking. Experiments in which subjects diverted only their attention without moving their eyes revealed that the redirection of attention alone was not enough to initiate chronostasis. This suggests that attention is not the time marker used when perception is filled back in. Rather, the physical movement of the eyes themselves serves as this critical marker. However, this relationship between attention and perception in the context of chronostasis is often difficult to measure and may be biased in a laboratory setting. Because subjects may be biased as they are instructed to perform actions or to redirect their attention, the concept of attention serving as a critical time marker for chronostasis may not be entirely dismissed.
Spatial continuity
Following investigation, one may wonder if chronostasis still occurs if the saccadic target is moving. In other words, would you still experience chronostasis if the clock you looked at were moving? Through experimentation, researchers found that the occurrence of chronostasis in the presence of a moving stimulus was dependent on the awareness of the subject. If the subject were aware that the saccadic target was moving, they would not experience chronostasis. Conversely, if the subject were not aware of the saccadic target's movement, they did experience chronostasis. This is likely because antedating does not occur in the case of a consciously moving target. If, after the saccade, the eye correctly falls on the target, the brain assumes this target has been at this location throughout the saccade. If the target changes position during the saccade, the interruption of spatial continuity makes the target appear novel.
Stimulus properties
Properties of stimuli themselves have shown to have significant effects on the occurrence of chronostasis. In particular, the frequency and pattern of stimuli affect the observer's perception of chronostasis. In regard to frequency, the occurrence of many, similar events can exaggerate duration overestimation and makes the effects of chronostasis more severe. In regard to repetition, repetitive stimuli appear to be of shorter subjective duration than novel stimuli. This is due to neural suppression within the cortex. Investigation using various imaging techniques has shown that repetitive firing of the same cortical neurons cause them to be suppressed over time. This occurs as a form of neural adaptation.
Sensory domain
The occurrence of chronostasis extends beyond the visual domain into the auditory and tactile domains. In the auditory domain, chronostasis and duration overestimation occur when observing auditory stimuli. One common example is a frequent occurrence when making telephone calls. If, while listening to the phone's ring tone, research subjects move the phone from one ear to the other, the length of time between rings appears longer. In the tactile domain, chronostasis has persisted in research subjects as they reach for and grasp objects. After grasping a new object, subjects overestimate the time in which their hand has been in contact with this object. In other experiments, subjects turning a light on with a button were conditioned to experience the light before the button press. This suggests that, much in the same way subjects overestimate the duration of the second hand as they watch it, they may also overestimate the duration of auditory and tactile stimuli. This has led researchers to investigate the possibility that a common timing mechanism or temporal duration scheme is used for temporal perception of stimuli across a variety of sensory domains.
See also
References
External links
Michael Stevens provides a brief overview of the stopped clock illusion
A brief overview of temporal perception by Laci Green
Dan Lewis of 'Now I Know' describes saccadic masking and other visual illusions
Greek words and phrases
Illusions
Measurement
Perception
Vision | Chronostasis | [
"Physics",
"Mathematics"
] | 2,041 | [
"Quantity",
"Physical quantities",
"Measurement",
"Size"
] |
30,950,746 | https://en.wikipedia.org/wiki/Thermoporometry%20and%20cryoporometry | Thermoporometry and cryoporometry are methods for measuring porosity and pore-size distributions. A small region of solid melts at a lower temperature than the bulk solid, as given by the Gibbs–Thomson equation. Thus, if a liquid is imbibed into a porous material, and then frozen, the melting temperature will provide information on the pore-size distribution. The detection of the melting can be done by sensing the transient heat flows during phase transitions using differential scanning calorimetry – DSC thermoporometry, measuring the quantity of mobile liquid using nuclear magnetic resonance – NMR cryoporometry (NMRC) or measuring the amplitude of neutron scattering from the imbibed crystalline or liquid phases – ND cryoporometry (NDC).
To make a thermoporometry / cryoporometry measurement, a liquid is imbibed into the porous sample, the sample cooled until all the liquid is frozen, and then warmed until all the liquid is again melted. Measurements are made of the phase changes or of the quantity of the liquid that is crystalline / liquid (depending on the measurement technique used).
The techniques make use of the Gibbs–Thomson effect: small crystals of a liquid in the pores melt at a lower temperature than the bulk liquid : The melting point depression is inversely proportional to the pore size. The technique is closely related to that of use of gas adsorption to measure pore sizes but uses the Gibbs–Thomson equation rather than the Kelvin equation. They are both particular cases of the Gibbs Equations (Josiah Willard Gibbs): the Kelvin equation is the constant temperature case, and the Gibbs–Thomson equation is the constant pressure case.
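One commonly used form of the Gibbs–Thomson relation for a small crystal of diameter x confined in a pore is given below; lumping the physical constants into a single calibration constant k_{GT}, determined empirically for each probe liquid, is standard practice, and the exact numerical prefactor depends on the assumed crystal/pore geometry:

\Delta T_m(x) = T_m^{\mathrm{bulk}} - T_m(x) = \frac{4\,\sigma_{sl}\,T_m^{\mathrm{bulk}}}{x\,\Delta H_f\,\rho_s} \equiv \frac{k_{GT}}{x},

where \sigma_{sl} is the solid–liquid interfacial energy, \Delta H_f the bulk specific enthalpy of fusion and \rho_s the density of the solid.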
Technique variants
DSC Thermoporometry
This technique uses differential scanning calorimetry (DSC) to detect the phase changes. The signal detection relies on transient heat flows of latent heat of fusion at the phase changes, and thus the measurement can not be made arbitrarily slowly, limiting the resolution in pore size. There are also difficulties in obtaining measurements of pore volume.
Nuclear magnetic resonance cryoporometry
NMRC is a recent technique (originated in 1993) for measuring total porosity and pore size distributions. It makes use of the Gibbs–Thomson effect described above: small crystals of a liquid in the pores melt at a lower temperature than the bulk liquid, and the melting point depression is inversely proportional to the pore size.
Nuclear magnetic resonance (NMR) may be used as a convenient method of measuring the quantity of liquid that has melted, as a function of temperature, making use of the fact that the relaxation time in a frozen material is usually much shorter than that in a mobile liquid. To make the measurement it is common to just measure the amplitude of an NMR echo at a few milliseconds delay, to ensure that all the signal from the solid has decayed. The technique was developed at the University of Kent in the UK, by Prof. John H. Strange.
NMRC is based on two equations, the Gibbs–Thomson equation, that maps the melting point depression to pore size, and the Strange–Rahman–Smith equation that maps the melted signal amplitude at a particular temperature to pore volume.
To make an NMR cryoporometry measurement, a liquid is imbibed into the porous sample, the sample cooled until all the liquid is frozen, and then warmed slowly, while measuring the quantity of the liquid that is liquid.
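As an illustrative sketch (not taken from the article), the melting-point depression and signal-amplitude relations can be combined to turn a measured melting curve v(T) (melted liquid volume against temperature) into a pore-size distribution dv/dx; the calibration constant and the synthetic data below are placeholder values chosen only for the example:

import numpy as np

def melting_curve_to_psd(T, v, T_bulk, k_gt):
    # T      : temperatures (K) at which the melted volume was measured
    # v      : cumulative melted liquid volume at each temperature
    # T_bulk : bulk melting point of the probe liquid (K)
    # k_gt   : Gibbs-Thomson calibration constant (K*nm) for the probe liquid
    dT = T_bulk - T                    # melting-point depression
    x = k_gt / dT                      # pore diameter from the Gibbs-Thomson relation
    dv_dT = np.gradient(v, T)          # slope of the melting curve
    dv_dx = dv_dT * k_gt / x**2        # chain rule: dT/dx = k_gt / x^2
    return x, dv_dx

# Synthetic example: a smooth step-like melting curve for pores of roughly 10 nm
# (placeholder numbers only, not measured data).
T = np.linspace(248.0, 272.0, 200)
v = 1.0 / (1.0 + np.exp(-(T - 270.0) / 0.5))
x, psd = melting_curve_to_psd(T, v, T_bulk=273.15, k_gt=30.0)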
Thus NMRC cryoporometry is similar to DSC thermoporosimetry, but has higher resolution, as the signal detection does not rely on transient heat flows, and the measurement can be made arbitrarily slowly. Volume calibration of the total porosity and pore-size can be good, just involving ratioing the NMR signal amplitude at a particular pore diameter to the amplitude when all the liquid (of known mass) is melted. NMRC is suitable for measuring pore diameters in the range 1 nm to about 2 μm. Instrumentation to make NMR Cryoporometric measurements is commercially available.
Note: the Gibbs–Thomson equation contains a geometric term relating to the curvature of the ice–liquid interface. This curvature may be different in different pore geometries; thus using a sol–gel calibration (~spheres) gives about a factor of two error when used with SBA-15 (cylindrical pores). Similarly, the freezing and melting curvatures (typically spherical on ice intrusion and cylindrical on ice melting) result in a difference between the freezing and melting temperatures even in cylindrical pores where there is no "ink-bottle" effect.
It is also possible to adapt the basic NMRC experiment to provide structural resolution in spatially dependent pore size distributions, by combining NMRC with standard Magnetic resonance imaging protocols, or to provide behavioural information about the confined liquid.
Neutron diffraction cryoporometry
Modern neutron diffractometers have the capability to measure complete scattering spectra in a couple of minutes, as the temperature is ramped, enabling cryoporometry experiments to be performed.
ND cryoporometry has the unique distinction of being able to monitor, as a function of temperature, the quantities of different crystalline phases (such as hexagonal ice and cubic ice) as well as the liquid phase, and thus can give pore-phase structural information as a function of temperature.
Pore size measurements using both melting and freezing events
The Gibbs–Thomson effect acts to lower both the melting and freezing points, and also to raise the boiling point. However, simple cooling of an all-liquid sample usually leads to a state of non-equilibrium supercooling and only eventual non-equilibrium freezing. To obtain a measurement of the equilibrium freezing event, it is necessary first to cool enough to freeze a sample with excess liquid outside the pores, then to warm the sample until the liquid in the pores has all melted but the bulk material is still frozen. On re-cooling, the equilibrium freezing event can then be measured, as the external ice will grow into the pores.
This is in effect an "ice intrusion" measurement (cf. mercury intrusion porosimetry), and as such may in part provide information on pore-throat properties. The melting event was previously expected to provide more accurate information on the pore body. However, a new melting mechanism has been proposed which means that the melting event does not provide accurate information on the pore body. This mechanism has been termed advanced melting and is described below.
The advanced melting mechanism
The melting process for the frozen phase is initiated from existing molten phase, such as the liquid-like layer that is retained at the pore wall. This is shown in Figure 1 for a through ink-bottle pore model (position A); the arrows show how the liquid-like layer initiates the melting process, and this melting mechanism is said to occur via sleeve-shaped menisci. For such a melting mechanism, the smaller necks melt first and, as the temperature is raised, the larger pore body then melts. Therefore, the melting event would give an accurate description of both the necks and the body.
However, in cylindrical pores, melting would occur at a lower temperature via a hemispherical meniscus (between solid and molten phases) than it would via a sleeve-shaped meniscus. Scanning curves and loops have been used to show that cryoporometry melting curves are prone to pore–pore cooperative effects, and this is demonstrated by position B in Figure 1. For the through ink-bottle pore, melting is initiated in the outer necks from the thin cylindrical sleeve of permanently unfrozen, liquid-like fluid that exists at the pore wall. Once the necks have become molten via the cylindrical-sleeve meniscus mechanism, a hemispherical meniscus is formed at both ends of the larger pore body. The hemispherical menisci can then initiate the melting process in the large pore. Moreover, if the larger pore radius is smaller than the critical size for melting via a hemispherical meniscus at the current temperature, then the larger pore will melt at the same temperature as the smaller pore. Therefore, the melting event will not give accurate information on the pore body. If the incorrect melting mechanism is assumed when deriving a pore size distribution (PSD), there will be at least a 100% error in the PSD. Moreover, it has been shown that advanced melting effects can lead to a dramatic skew towards smaller pores in PSDs for mesoporous sol–gel silicas determined from cryoporometry melting curves.
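The geometric argument can be made concrete with a small sketch. Under the common idealisation that the melting point depression scales with meniscus curvature (1/r for a cylindrical sleeve, 2/r for a hemisphere), the following code compares the melting temperatures of the necks and body of a hypothetical through ink-bottle pore; the constant k_c, the pore radii and the bulk melting point are assumptions for illustration only, not values from the text above.

```python
# Illustrative sketch of the "advanced melting" argument for a through
# ink-bottle pore (two narrow necks of radius r_neck joined to a wider body
# of radius r_body). The factor-of-two difference between melting via a
# cylindrical-sleeve meniscus (curvature 1/r) and a hemispherical meniscus
# (curvature 2/r) is the assumed idealisation; K_C is an assumed constant.
T_BULK = 273.15   # bulk melting point of the probe liquid, K (water assumed)
K_C = 25.0        # assumed constant such that dT_sleeve = K_C / r, in K*nm

def melt_temp_sleeve(r_nm):
    """Melting temperature via a cylindrical-sleeve meniscus of radius r."""
    return T_BULK - K_C / r_nm

def melt_temp_hemisphere(r_nm):
    """Melting temperature via a hemispherical meniscus (twice the curvature)."""
    return T_BULK - 2.0 * K_C / r_nm

r_neck, r_body = 3.0, 5.0   # nm, hypothetical ink-bottle geometry

t_neck = melt_temp_sleeve(r_neck)          # necks melt from the liquid-like layer
t_body_hemi = melt_temp_hemisphere(r_body) # body can then melt from the new menisci

if t_body_hemi <= t_neck:
    # The body is already above its hemispherical melting temperature once the
    # necks melt, so it melts at the same temperature ("advanced melting").
    print(f"necks and body both melt at ~{t_neck:.2f} K (advanced melting)")
else:
    print(f"necks melt at ~{t_neck:.2f} K, body later at ~{t_body_hemi:.2f} K")
```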
Applications
NMR cryoporometry is a metrology technique on the nanometre to micrometre scale that has been used to study many materials. It has particularly been used to study porous rocks (e.g. sandstone, shale and chalk/carbonate rocks), with a view to improving oil extraction, shale gas extraction and water abstraction. It is also very useful for studying porous building materials such as wood, cement and concrete. A current application of NMR cryoporometry is the measurement of porosity and pore-size distributions in the study of carbon, charcoal and biochar. Biochar is regarded as an important soil enhancer (used since prehistory) and offers possibilities for carbon dioxide removal from the biosphere.
Materials studied by NMR cryoporometry include the porous rocks, building materials and carbons mentioned above.
Possible future applications include measuring porosity and pore-size distributions in porous medical implants.
References
External links
cryoporometry.com
nano-science.co.uk/nano-metrology/
Materials testing | Thermoporometry and cryoporometry | [
"Materials_science",
"Engineering"
] | 2,087 | [
"Materials testing",
"Materials science"
] |
30,951,719 | https://en.wikipedia.org/wiki/Gibbs%E2%80%93Thomson%20equation | The Gibbs–Thomson effect, in common physics usage, refers to variations in vapor pressure or chemical potential across a curved surface or interface. The existence of a positive interfacial energy will increase the energy required to form small particles with high curvature, and these particles will exhibit an increased vapor pressure. See Ostwald–Freundlich equation.
More specifically, the Gibbs–Thomson effect refers to the observation that small crystals that are in equilibrium with their liquid, melt at a lower temperature than large crystals. In cases of confined geometry, such as liquids contained within porous media, this leads to a depression in the freezing point / melting point that is inversely proportional to the pore size, as given by the Gibbs–Thomson equation.
Introduction
The technique is closely related to using gas adsorption to measure pore sizes, but uses the Gibbs–Thomson equation rather than the Kelvin equation. They are both particular cases of the Gibbs Equations of Josiah Willard Gibbs: the Kelvin equation is the constant temperature case, and the Gibbs–Thomson equation is the constant pressure case.
This behaviour is closely related to the capillary effect and both are due to the change in bulk free energy caused by the curvature of an interfacial surface under tension.
The original equation only applies to isolated particles, but with the addition of surface interaction terms (usually expressed in terms of the contact wetting angle) it can be modified to apply to liquids and their crystals in porous media. As such it has given rise to various related techniques for measuring pore size distributions. (See Thermoporometry and cryoporometry.)
The Gibbs–Thomson effect lowers both the melting and freezing points, and also raises the boiling point. However, simple cooling of an all-liquid sample usually leads to a state of non-equilibrium supercooling and only eventual non-equilibrium freezing. To obtain a measurement of the equilibrium freezing event, it is necessary to first cool enough to freeze a sample with excess liquid outside the pores, then warm the sample until the liquid in the pores is all melted, but the bulk material is still frozen. Then, on re-cooling the equilibrium freezing event can be measured, as the external ice will then grow into the pores.
This is in effect an "ice intrusion" measurement (cf. mercury intrusion), and as such in part may provide information on pore throat properties. The melting event can be expected to provide more accurate information on the pore body.
For particles
For an isolated spherical solid particle of diameter d in its own liquid, the Gibbs–Thomson equation for the structural melting point depression can be written:

\Delta T_m(d) = T_{mB} - T_m(d) = \frac{4 \sigma_{sl} T_{mB}}{H_f \rho_s d}

where:
T_mB = bulk melting temperature
σ_sl = solid–liquid interface energy (per unit area)
H_f = bulk enthalpy of fusion (per gram of material)
ρ_s = density of the solid
d = nanoparticle diameter
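As a rough illustration of the size of the effect, the sketch below evaluates this expression for small ice crystals in water. The material constants are approximate literature values chosen for illustration and the crystal sizes are arbitrary; none of these numbers come from the article above.

```python
# Rough, illustrative estimate of the Gibbs-Thomson melting point depression
# for small ice crystals; the constants below are approximate literature
# values used only for illustration (SI units throughout).
sigma_sl = 0.030    # ice-water interfacial energy, J/m^2 (approximate)
h_f      = 334e3    # bulk enthalpy of fusion of ice, J/kg
rho_s    = 917.0    # density of ice, kg/m^3
t_bulk   = 273.15   # bulk melting temperature, K

for d_nm in (5.0, 10.0, 50.0):
    d = d_nm * 1e-9                                        # diameter in metres
    delta_t = 4.0 * sigma_sl * t_bulk / (h_f * rho_s * d)  # depression, K
    print(f"{d_nm:5.1f} nm crystal: melting point lowered by ~{delta_t:4.1f} K")
```

With these assumed numbers a 10 nm crystal melts roughly 10 K below the bulk melting point, which is the order of magnitude exploited by cryoporometry.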
For liquids in pores
Very similar equations may be applied to the growth and melting of crystals in the confined geometry of porous systems. However, the geometry term for the crystal–liquid interface may be different, and there may be additional surface energy terms to consider, which can be written as a wetting-angle term cos θ. The angle θ is usually considered to be near 180°. In cylindrical pores there is some evidence that the freezing interface may be spherical while the melting interface may be cylindrical, based on preliminary measurements of the ratio of the freezing- and melting-point depressions in cylindrical pores.
Thus for a spherical interface between a non-wetting crystal and its own liquid, in an infinite cylindrical pore of diameter x, the structural melting point depression is given by:

\Delta T_m(x) = T_{mB} - T_m(x) = -\frac{4 \sigma_{sl} T_{mB} \cos\theta}{H_f \rho_s x}
Simplified equation
The Gibbs–Thomson equation may be written in a compact form:

\Delta T_m(x) = \frac{k_{GT}}{x}

where the Gibbs–Thomson coefficient k_GT assumes different values for different liquids and different interfacial geometries (spherical/cylindrical/planar).
In more detail:

k_{GT} = k_g \, k_s \, k_i

where:
k_g is a geometric constant dependent on the interfacial shape,
k_s is a constant involving parameters specific to the crystalline solid of the solid–liquid system, and
k_i is an interfacial energy term.
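In practice k_GT is often treated as an empirical calibration constant. The minimal sketch below determines k_GT from a reference material of known pore diameter and then applies it to an unknown sample; all numbers are invented for illustration.

```python
# Minimal sketch of using the compact form dT_m = k_GT / x for calibration:
# k_GT is determined on a reference material of known pore diameter and then
# applied to an unknown sample. All numbers below are invented.
x_reference = 10.0        # known pore diameter of the calibration material, nm
dT_reference = 5.2        # measured melting point depression for it, K

k_gt = dT_reference * x_reference     # empirical Gibbs-Thomson coefficient, K*nm

dT_unknown = 2.6                      # measured depression for the unknown sample, K
x_unknown = k_gt / dT_unknown         # inferred pore diameter, nm
print(f"k_GT = {k_gt:.1f} K*nm  ->  unknown pore diameter ~ {x_unknown:.1f} nm")
```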
History
As early as 1886, Robert von Helmholtz (son of the German physicist Hermann von Helmholtz) had observed that finely dispersed liquids have a higher vapor pressure. By 1906, the German physical chemist Friedrich Wilhelm Küster (1861–1917) had predicted that since the vapor pressure of a finely pulverized volatile solid is greater than the vapor pressure of the bulk solid, the melting point of the fine powder should be lower than that of the bulk solid. Investigators such as the Russian physical chemists Pavel Nikolaevich Pavlov (or Pawlow (in German), 1872–1953) and Peter Petrovich von Weymarn (1879–1935), among others, searched for and eventually observed such melting point depression. By 1932, Czech investigator Paul Kubelka (1900–1956) had observed that the melting point of iodine in activated charcoal is depressed by as much as 100 °C. Investigators recognized that the melting point depression occurred when the change in surface energy was significant compared to the latent heat of the phase transition, a condition that holds for very small particles.
Neither Josiah Willard Gibbs nor William Thomson (Lord Kelvin) derived the Gibbs–Thomson equation. Also, although many sources claim that British physicist J. J. Thomson derived the Gibbs–Thomson equation in 1888, he did not. Early in the 20th century, investigators derived precursors of the Gibbs–Thomson equation. However, in 1920, the Gibbs–Thomson equation was first derived in its modern form by two researchers working independently: Friedrich Meissner, a student of the Estonian-German physical chemist Gustav Tammann, and Ernst Rie (1896–1921), an Austrian physicist at the University of Vienna. These early investigators did not call the relation the "Gibbs–Thomson" equation. That name was in use by 1910 or earlier; it originally referred to equations concerning the adsorption of solutes by interfaces between two phases — equations that Gibbs and then J. J. Thomson derived. Hence, in the name "Gibbs–Thomson" equation, "Thomson" refers to J. J. Thomson, not William Thomson (Lord Kelvin).
In 1871, William Thomson published an equation describing capillary action and relating the curvature of a liquid–vapor interface to the vapor pressure:

p(r_1, r_2) = P - \frac{\sigma \, \rho_{vapor}}{(\rho_{liquid} - \rho_{vapor})} \left( \frac{1}{r_1} + \frac{1}{r_2} \right)

where:
p(r_1, r_2) = vapor pressure at a curved interface with principal radii of curvature r_1 and r_2
P = vapor pressure at a flat interface (r_1 = r_2 = ∞)
σ = surface tension
ρ_vapor = density of the vapor
ρ_liquid = density of the liquid
r_1, r_2 = radii of curvature along the principal sections of the curved interface.
In his dissertation of 1885, Robert von Helmholtz (son of German physicist Hermann von Helmholtz) showed how the Ostwald–Freundlich equation could be derived from Kelvin's equation. The Gibbs–Thomson equation can then be derived from the Ostwald–Freundlich equation via a simple substitution using the integrated form of the Clausius–Clapeyron relation.
The Gibbs–Thomson equation can also be derived directly from Gibbs' equation for the energy of an interface between phases.
It should be mentioned that in the literature there is still no agreement about the specific equation to which the name "Gibbs–Thomson equation" refers. For example, for some authors it is another name for the "Ostwald–Freundlich equation" (which, in turn, is often called the "Kelvin equation"), whereas for other authors the "Gibbs–Thomson relation" is the Gibbs free energy that is required to expand the interface, and so forth.
References
Thermodynamic equations
Surface science | Gibbs–Thomson equation | [
"Physics",
"Chemistry",
"Materials_science"
] | 1,543 | [
"Thermodynamic equations",
"Equations of physics",
"Surface science",
"Thermodynamics",
"Condensed matter physics"
] |
30,953,451 | https://en.wikipedia.org/wiki/Carbene%20dimerization | Carbene dimerization is a type of organic reaction in which two carbene or carbenoid precursors react in a formal dimerization to an alkene. This reaction is often considered an unwanted side-reaction, but it is also investigated as a synthetic tool. In this reaction type either the two carbenic intermediates react with each other or a carbenic intermediate reacts with a carbene precursor. An early pioneer was Christoph Grundmann, who reported a carbene dimerization in 1938. In the domain of persistent carbenes the Wanzlick equilibrium describes an equilibrium between a carbene and its alkene dimer.
A reoccurring substrate is a diazo compound and more specifically an alpha-carbonyl diazo compound. For example, ethyl diazoacetate is converted to diethyl maleate using the ruthenium catalyst chloro(cyclopentadienyl)bis(triphenylphosphine)ruthenium:
Grubbs' catalyst is also effective. In this reaction type the active intermediate is a transition metal carbene complex. A diazo cross-coupling reaction has also been reported between ethyl diazoacetate and methyl phenyldiazoacetate using the rhodium catalyst [Rh2(OPiv)4].
A direct metal carbene dimerization has been used in the synthesis of novel polyalkynylethenes.
References
Carbenes
Organic reactions | Carbene dimerization | [
"Chemistry"
] | 297 | [
"Organic compounds",
"Inorganic compounds",
"Carbenes",
"Organic reactions"
] |
30,953,521 | https://en.wikipedia.org/wiki/Neutron%20moisture%20gauge | A neutron moisture meter is a moisture meter utilizing neutron scattering. The meters are most frequently used to measure the water content in soil or rock. The technique is non-destructive, and is sensitive to moisture in the bulk of the target material, not just at the surface.
Water, due to its hydrogen content, is an effective neutron moderator, slowing high-energy neutrons. With a source of high-energy neutrons and a detector sensitive to low-energy neutrons (thermal neutrons), the detection rate will be governed by the water content of the soil between the source and the detector. The neutron source typically contains a small amount of a radionuclide. Sources may emit neutrons during spontaneous fission, as with californium; alternatively, an alpha emitter may be mixed with a light element for a nuclear reaction yielding excess neutrons, as with americium in a beryllium matrix.
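In field use, gauge readings are usually converted to water content through an empirical calibration against samples of known moisture. The sketch below assumes a simple linear relationship between the neutron count ratio and volumetric water content; the calibration points and the fitted model are invented for illustration and are not taken from any particular instrument.

```python
# Illustrative calibration sketch for a neutron moisture gauge (assumed linear
# model between count ratio and volumetric water content; real calibrations
# are site- and instrument-specific). The data below are made up.
import numpy as np

# Field calibration points: neutron count ratio (counts / standard count)
# paired with independently measured volumetric water content (m^3/m^3).
count_ratio   = np.array([0.45, 0.80, 1.10, 1.45, 1.80])
water_content = np.array([0.05, 0.12, 0.18, 0.25, 0.32])

# Least-squares fit of theta = a + b * count_ratio
b, a = np.polyfit(count_ratio, water_content, 1)
print(f"calibration: theta = {a:.3f} + {b:.3f} * count_ratio")

# Convert a new reading with the fitted calibration
new_ratio = 1.25
print(f"count ratio {new_ratio} -> volumetric water content ~{a + b * new_ratio:.3f}")
```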
References
Californium
Moisture gauge
Radioactivity
Scattering | Neutron moisture gauge | [
"Physics",
"Chemistry",
"Materials_science"
] | 201 | [
"Scattering",
"Condensed matter physics",
"Particle physics",
"Nuclear physics",
"Radioactivity"
] |
30,957,264 | https://en.wikipedia.org/wiki/Mountain%20Wave%20Project | The Mountain Wave Project (MWP) pursues global scientific research of gravity waves and associated turbulence. MWP seeks to develop new scientific insights and knowledge through high altitude and record seeking glider flights with the goal of increasing overall flight safety and improving pilot training.
Corporate history
Motivation
Wind movement over terrain and ground obstacles can create wavelike wind formations which can reach up to the stratosphere. In 1998 the pilots René Heise and Klaus Ohlmann founded the MWP, a project for global classification, research, and analysis of orographically created wind structures (e.g. Chinook, Foehn, Mistral, Zonda). The MWP is an independent non-profit-project of the Scientific and Meteorological Section of the Organisation Scientifique et Technique du Vol à Voile (OSTIV) and is supported by the Fédération Aéronautique Internationale (FAI).
The MWP was originally focused on achieving a better understanding of the complex thermal and dynamic air movements in the atmosphere, and on using that knowledge to achieve ever greater long-distance soaring flights. As MWP gained greater awareness of the power inherent in mountain-wave structures in the atmosphere, and of their strong vertical airflows, it became obvious that they present significant dangers to civil aviation in multiple ways. Therefore, the focus of the MWP shifted to a more scientific approach to these airflow phenomena, with the goal of discovering new ways to increase overall aviation safety. Through the support of other scientists and cooperation partners, the core group broadened its expertise. The integration of Joerg Hacker from Airborne Research Australia (ARA) into the core group significantly enhanced its overall depth of knowledge.
Airborne measurements
In order to learn more about the relevant physical process in the atmosphere, the MWP Team launched two expeditions in the Argentinean Andes in 1999 and 2006. For high altitude flights a modified Stemme S10 VT motorglider was used as a platform for airborne data acquisition and measurement. The pilots were assisted with life support equipment and physiological preparations by the renowned flight physicians of the German Aerospace Center (DLR) and by astronaut Ulf Merbold.
With the help of qualified scientists and state-of-the-art sensor technology, the MWP achieved its goal of gathering and analyzing wave-structure data during the operation in Mendoza in October 2006. Research flights and operations were completed in the region between Tupungato (5,700 m) and Aconcagua (6,900 m), which is well known for its treacherous turbulence.
Record flights
Between 2000 and 2004 MWP team member Klaus Ohlmann further developed and expanded the knowledge about wave systems gained in the Andes in 1999, and accumulated a wealth of experience. This experience allowed him to win the OSTIV Kuettner Prize for the first 2,000 km straight-out wave flight, as well as to complete the world's longest recorded soaring flight of 3,008 km. He was supported via internet communications by his MWP teammates in Germany, who provided specific weather predictions using a new forecasting tool. In these flights he provided crucial in-flight data, which in turn helped to improve subsequent weather predictions by the team in Germany.
Two MWP members participated in the 2006 field research campaign of the Terrain Induced Rotor Experiment (T-REX) which took place in the Sierra Nevada (U.S.A.). René Heise served as scientific reviewer for the National Science Foundation and contributed MWP wave forecasts to the data archive. Wolf-Dietrich Herold documented activities in Boulder/CO and Bishop/CA and produced a TV-report of the project for the German TV station RBB.
Programming objectives
Detection and determination of physical processes in the atmosphere, and their associated synoptic characteristics, which play the primary role in the generation and development of mountain waves.
Investigation of rotor bands: determination of their location, spatial extension and classification of associated turbulence
High resolution measurement of relevant meteorological variables (e.g., potential temperature, turbulence parameters, vertical and horizontal wind, humidity, etc.)
Visualisation of the rotors/regions of turbulence with a geographic information system (GIS).
Statistical analysis of wave flights (IGC-files of GPS flight loggers) to develop an empirical GIS-based representation of wave and rotor locations
Verification of mesoscale forecast models and fine tuning of the applied parameterisations
Application of the acquired data, scientific results, and prediction tools to enhance the safety and effectiveness of air traffic route planning, and improve pilot training. Furthermore, assisting in the development and creation of focused training methodology, tools and simulator scenarios.
Expeditions
Argentina '99: Base San Martín de los Andes (Argentina); several flights above 1,000 km, including a record flight (1,550 km) by Klaus Ohlmann to Tierra del Fuego (Rio Grande), the southernmost glider flight in the world
Serres (France) & Jaca (Spain) 2003: Measurement flights in southerly wave conditions in Provence, plus wave flights under stormy weather conditions in the lee of the Pyrenees
Operation Mendoza 2006: Base Plumerillo (Argentina); measurement campaign at the invitation of the Argentine Air Force; flights with the BATprobe up to 12,500 m over the cordillera of the Tupungato-Aconcagua region.
Tibet 2010 site visit: Presentation of the MWP field campaign in Lhasa; exploration of emergency landing strips along the route Shigatse - Tingri
Project results
Development of an operational lee-wave forecast in cooperation with the Bundeswehr Geo Information Service and the German Weather Service.
Global and Regional Assessment Tool for wave activities and the risk of turbulence. This experimental forecast tool is used especially in combination with relocatable mesoscale models for the regions of Antarctica, Hindu Kush/Tian Shan, Kamchatka, Sierra Nevada and Tibet.
First scientific measurement flights of turbulence over the Andes. Validation of airborne measurements of the parameters wind, temperature, moisture and pressure in combination with soundings and vertical satellite measurements (radio occultation, remote sensing with GPS)
Cataloging of over 200 global positions of rotor–wave systems and their visualization in a geographic information system (GIS); analysis of accidents and incidents due to mountain-wave turbulence in commercial and general aviation
Development of mathematical and statistical algorithms to filter wave climbs from GNSS flight-recorder data, used to optimize record flights (a simplified illustration follows this list)
Aviation highlights: record flight to Rio Grande in Tierra del Fuego (MWP Argentina '99); world record flight (FAI category Free Distance) of 2,120 km (OSTIV Kuettner Prize)
High altitude physiology preparations and recommendations for pilots (Human Factors)
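A greatly simplified illustration of the wave-climb filtering mentioned above is sketched below: it scans an IGC-style altitude trace for sustained climb segments. It is not the MWP's actual algorithm; the thresholds and the synthetic flight data are assumptions for illustration only.

```python
# Illustrative sketch (not the MWP's actual algorithm): flag sustained climb
# segments in an IGC-style altitude trace, as a first step towards filtering
# wave climbs from GNSS flight-recorder data. Thresholds and data are invented.
def find_climb_segments(times_s, altitudes_m, min_rate=1.0, min_duration=120):
    """Return (start, end) index pairs of segments climbing faster than
    min_rate (m/s) for at least min_duration (s)."""
    segments, start = [], None
    for i in range(1, len(times_s)):
        dt = times_s[i] - times_s[i - 1]
        rate = (altitudes_m[i] - altitudes_m[i - 1]) / dt if dt > 0 else 0.0
        if rate >= min_rate:
            start = i - 1 if start is None else start
        else:
            if start is not None and times_s[i - 1] - times_s[start] >= min_duration:
                segments.append((start, i - 1))
            start = None
    if start is not None and times_s[-1] - times_s[start] >= min_duration:
        segments.append((start, len(times_s) - 1))
    return segments

# Tiny synthetic example: 10 minutes of data at 10 s intervals with one climb.
times = list(range(0, 600, 10))
alts = [2000 + (5 * (t - 120) if 120 <= t <= 360 else (1200 if t > 360 else 0))
        for t in times]
for s, e in find_climb_segments(times, alts):
    print(f"climb from t={times[s]}s to t={times[e]}s, gain {alts[e] - alts[s]} m")
```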
Awards
2003 OSTIV Kuettner Prize for the first 2000 km straight out wave flight for MWP chief pilot (Klaus Ohlmann)
2007 Second awardee of the "Lilienthal-Preis" (Lilienthal Prize)
2011 Finalist, Aerospace Medical Association Jeff Myers Young Investigator Award (Rene Heise)
2011 Outreach & Communication Award of the European Meteorological Society (EMS)
GEO-TV features
2003 Berlin-Brandenburg Broadcasting (RBB) - Rodeo in the Sky - Research for greater flight safety/Rodeo am Himmel - Forschung für mehr Flugsicherheit (45 min; German/English)
2007 ARTE 360° GEO- documentation - The Waveriders of the Andes/Die Windreiter der Anden/Les Enragés du vol à voile (45min; German/French)
2011 3sat TV feature in connection with the 6th Severe Weather Congress Hamburg 2011, Wellengang in der Luft - hinter Bergen entstehen gefährliche Luftwirbel ("Waves in the air - dangerous air vortices form behind mountains") (6 min; German)
References
External links
Mountain Wave Project- official website
Website at MetPanel of OSTIV
FAI World Records - Class D (Gliders)
FAA-Seminar: Where Wild Winds Rule - Mountain Wave Flying Training
Mountain Wave Project - Website of the MWP at MetPanel of OSTIV
Scientific TV-feature about MWP site visit in Tibet 2010 at German Channel 3sat retrieved 2011-04-18
Gliding & Motorgliding International Dec 12, 2000
Frankfurter Zeitung vom 7. April 2010: Auf der perfekten Welle (On the perfect Wave)
Tagesspiegel vom 14. Mai 2007 Tödliche Turbulenzen (Deadly Turbulence)
Spiegel vom 16. Oktober 2006 Expressaufwind aus Sturmwinden (Express-Lift from Storm Winds)
SpektrumDirekt - Wissenschaft online retrieved 2002-11-07
Mountain meteorology
Atmospheric dynamics
Mesoscale meteorology
Waves
Gliding meteorology | Mountain Wave Project | [
"Physics",
"Chemistry"
] | 1,778 | [
"Physical phenomena",
"Atmospheric dynamics",
"Waves",
"Motion (physics)",
"Fluid dynamics"
] |