Dataset columns: id (int64, 39 to 79M); url (string, 32 to 168 chars); text (string, 7 to 145k chars); source (string, 2 to 105 chars); categories (list, 1 to 6 items); token_count (int64, 3 to 32.2k); subcategories (list, 0 to 27 items).
52,559,168
https://en.wikipedia.org/wiki/Self-driving%20car%20liability
Increases in the use of autonomous car technologies (e.g., advanced driver-assistance systems) are causing incremental shifts in the control of driving. Liability for incidents involving self-driving cars is a developing area of law and policy that will determine who is liable when a car causes physical damage to persons or property. As autonomous cars shift the control of driving from humans to autonomous car technology, there is a need for existing liability laws to evolve to reasonably identify the appropriate remedies for damage and injury. As higher levels of autonomy are commercially introduced (SAE automation levels 3 and 4), the insurance industry may see higher proportions of commercial and product liability lines of business, while the personal automobile insurance line of business shrinks. Self-driving car liability may also be affected by the regulation of self-driving vehicles being developed in some countries. Overview The primary motivation of autonomous car technologies is reducing the frequency of traffic collisions. Self-driving car liability is a developing area of law and policy that will determine who is liable when an automated car causes physical damage to persons or breaks road rules. Similar considerations may also be raised with other automated vehicles and with damages other than damage to persons. When automated cars shift the control of driving from humans to automated car technology, the driver will need to consent to share operational responsibility, which will require a legal framework. Driving liability There may be a need for existing liability laws to evolve in order to fairly identify the parties responsible for damage and injury, and to address the potential for conflicts of interest between human occupants, system operator, insurers, and the public purse. Increases in the use of automated car technologies (e.g. advanced driver-assistance systems) may prompt incremental shifts in this responsibility for driving. Proponents claim that automation has the potential to affect the frequency of road accidents, although this claim is difficult to assess in the absence of data from substantial real-world use. If there were a dramatic improvement in safety, operators might seek to shift their liability for the remaining accidents onto others as part of their reward for the improvement. However, there is no obvious reason why they should escape liability if any such effects were found to be modest or nonexistent, since part of the purpose of such liability is to give the party in control an incentive to do whatever is necessary to avoid causing harm. Potential users may be reluctant to trust an operator if it seeks to pass its normal liability on to others. Transition and driver liability In any case, a well-advised person who is not controlling a car at all (Level 5) would be understandably reluctant to accept liability for something out of their control. And where some degree of shared control is possible (Level 3 or 4), a well-advised person would be concerned that the vehicle might try to hand back control in the final seconds before an accident, passing responsibility and liability back as well, but in circumstances where the potential driver has no better prospect of avoiding the crash than the vehicle, since they have not necessarily been paying close attention, and if the situation is too hard for the very smart car it may well be too hard for a human. 
Since operators, especially those accustomed to trying to ignore existing legal obligations (under a motto like 'seek forgiveness, not permission'), such as Waymo or Uber, can normally be expected to try to avoid responsibility to the maximum degree possible, there is a risk that operators will attempt to evade being held liable for accidents that occur while they are in control. As higher levels of automation are commercially introduced (Level 3 and 4), the insurance industry may see a greater proportion of commercial and product liability lines while personal automobile insurance shrinks. Fully autonomous driving liability When it comes to fully autonomous car liability, torts cannot be ignored. In any car accident the issue of negligence usually arises. In the situation of autonomous cars, negligence would most likely fall on the manufacturer, because it would be hard to pin a breach of duty of care on the user, who isn't in control of the vehicle. The only time negligence has been raised in an autonomous car lawsuit, the case ended in a settlement between the person struck by the autonomous vehicle and the manufacturer (General Motors). Next, product liability would most likely cause liability to fall on the manufacturer. For an accident to fall under product liability, there needs to be either a defect, a failure to provide adequate warnings, or foreseeability by the manufacturer. Third is strict liability, which in this case is similar to product liability based on a design defect. Based on a Nevada Supreme Court ruling (Ford v. Trejo), the plaintiff needs to prove failure of the manufacturer to pass the consumer expectation test. That is potentially how the three major torts could function when it comes to autonomous car liability. Current liability frameworks Existing tort liability for drivers and insurers and product liability for manufacturers provide the current basis for governing crashes. Tort liability There are three basic theories of tort liability: traditional negligence, no-fault liability and strict liability. According to a National Motor Vehicle Crash Causation Survey, over 90% of crashes (representing an estimated 2 million crashes in the USA) involved the driver as the critical reason for the crash. Meanwhile, research from the Insurance Institute for Highway Safety (IIHS) shows that advanced driver-assistance systems, which are seen as stepping stones to Level 3 and 4 autonomy, have helped reduce collisions by employing forward-collision warnings and automatic braking. Given these trends, the increased use of autonomous vehicle technology could reduce the number of collisions and prevent crash-related deaths. Consequently, cases of traditional negligence will likely fall, and this will, in turn, reduce automobile-insurance costs. With the onset of fully autonomous cars, it is possible that the need for specialized automobile insurance disappears and that health insurance and homeowner's liability insurance instead cover automobile crashes, much in the same way that they cover bicycle collisions. Moreover, as cases of traditional negligence decrease, no-fault insurance systems appear attractive given their benefits. Such systems would provide compensation to victims relatively quickly, and the compensation would not depend on the identification of a party at fault. In such systems, individual drivers would be well protected, which would encourage the adoption of autonomous cars for their safety and cost-related benefits. 
Negligence was the basis for the lawsuit Nilsson v. General Motors. Nilsson was knocked off his motorcycle when a Chevy Bolt switched into his lane while it was in self-driving mode. Nilsson sued for negligence on the grounds that the self-driving car had (1) a duty to follow traffic laws and regulations, (2) breached that duty by switching lanes while he was passing, and (3) injured his neck. The case was settled before it went to court, so the answer to the question "was the error of the self-driving car foreseeable?" remains unclear. Product liability Product liability governs the liability of manufacturers in terms of negligence and strict liability. Autonomous car manufacturers are incentivized by possible product liability tort lawsuits to reduce the danger of their products as much as they can within a reasonable cost structure. Strict liability covers an expansive range of potential harms that manufacturers may find challenging to protect against; instead of reducing less cost-effective risks, manufacturers may, to some degree, pass on potential costs of liability to consumers through higher prices. Furthermore, product liability cases distinguish among various types of defects. Under manufacturing defects, a plaintiff needs to show that the autonomous car failed to work as specified by the manufacturer. In the case of autonomous cars, however, this presents a significant hurdle because no court has applied manufacturing defects to software, which is not something tangible that is manufactured. Incorrect performance of the technology system is called a “malfunction”, meaning that a coding error within the system caused the collision. When there is a coding error, the controlling software may not have functioned as its authors originally intended. If a crash stems from a software error, then the traditional product liability law on manufacturing defects may not suffice. A greater understanding of how software will be treated under this liability law, particularly when a software error causes physical parts to malfunction, needs to be developed. Historically, courts have used two tests for the defectiveness of design: consumer-expectations and cost-benefit. Consumer-expectations: "A product is defective in design or formulation when it is more dangerous than an ordinary consumer would expect when used in an intended or reasonably foreseeable manner. Moreover, the question of what an ordinary consumer expects in terms of the risks posed by the product is generally one for the trier of fact." The cost-benefit test, on the other hand, weighs the benefits against the costs of a product in determining whether a design is defective. With autonomous cars, the plaintiff could argue that a different design, whether in the physical features of the vehicle or in the software that controls its movements, could have made the vehicle safer. For plaintiffs, this creates a high burden of proof and also makes it challenging to find qualified experts. Imposing liability In asking "who do I sue," a plaintiff in a traditional car crash would assign blame to the driver or the car manufacturer, depending on the cause of the crash. In a crash involving an autonomous car, a plaintiff may have four options to pursue. Operator of the vehicle: in Florida and Nevada, an operator is defined as a person who causes the autonomous technology to engage, regardless of whether the person is physically in the vehicle. 
California, on the other hand, defines an operator as “the person who is seated in the driver’s seat, or, if there is no person in the driver’s seat, causes the autonomous technology to engage.” The viability of a claim against the operator will be determined by the level of autonomy. For instance, if the autonomous technology allows the passenger to cede full control of the vehicle, then the passenger will likely not be found at fault for a crash caused by the technology. Car manufacturer: with this option, a plaintiff will need to determine whether the manufacturer had a part in installing autonomous technology into the vehicle. States such as Florida, however, are providing protection by limiting product liability for manufacturers. Company that created the finished autonomous car: Volvo is an example of a manufacturer that has pledged to take full responsibility for collisions caused by its self-driving technology. Company that created the autonomous car technology: companies under this option could include those developing the software behind the autonomous car and those manufacturing the sensor systems that allow a vehicle to detect its surroundings. Possible defenses In defense against such liabilities, autonomous vehicle manufacturers could make the arguments of comparative negligence, product misuse, and state of the art. With comparative negligence, driver or passenger interference is seen as part of the cause of harm and injury. With product misuse, the driver or passenger may be at fault for disregarding directions or altering the vehicle in a way that affects its proper performance. With state of the art, manufacturers could argue that there were no safe alternative designs available at the time of manufacturing. Cyber liability As cars become more interconnected and autonomous, the potential for hacking a car system to acquire data and cause harm poses a serious risk. For manufacturers and developers of autonomous technology, liability exposures arise from the collection and storage of data and personal information in the vehicle and the cloud. Currently, manufacturers require indemnification from vendors and subcontractors (dealerships, repair/installation facilities, etc.), and this practice will likely be extended to autonomous technology developers. Transportation systems are vital for the autonomous vehicle, as they serve as its commander, and with multiple autonomous vehicle systems used to increase efficiency, the risk of exposure to malicious attacks increases dramatically. To protect these systems, a cyber-physical system must be implemented with autonomous dynamical subsystems to ensure decision-making, interaction, and control. British law In 2018, the British Automated and Electric Vehicles Act of Parliament defined the rules for: the listing of automated vehicles by the Secretary of State; the liability of insurers etc. where an accident is caused by an automated vehicle; contributory negligence etc.; accidents resulting from unauthorized software alterations or failure to update software; and the right of the insurer etc. to claim against the person responsible for an accident. The law defines some cases of basic automated vehicle liability, including liability limited by software. The UK could also have a regulator: when there is no user in charge (NUIC), the police contact the regulator, and the regulator may sanction the automated driving system entity, up to possible withdrawal of authorization. 
French law On 14 April 2021, a French text defined the rules for level 3 (véhicule à délégation de conduite, i.e. vehicles with delegated driving) and level 5 (transport routier de marchandises, lorsqu'il est effectué au moyen d'un système de transport routier automatisé, i.e. road freight transport carried out by means of an automated road transport system). This text is titled: ordonnance n° 2021-443 du 14 avril 2021 relative au régime de responsabilité pénale applicable en cas de circulation d'un véhicule à délégation de conduite et à ses conditions d'utilisation. On 1 July 2021, France became the first European country to update its code de la route for automated cars. This update clarifies the respective roles and responsibilities of the driver and the car, and plans for application after the Vienna Convention update and before September 2022. German law With Mercedes launching its Drive Pilot in Germany in mid-2021, it was expected that Daimler would have to assume insurance liability, depending on the jurisdiction. In 2021, a proposed German law stipulated that a "self-driving vehicle" would require an operating permit to be used as an automated vehicle. Policy considerations (US) Manufacturers bearing excessive costs As argued in the article “The Coming Collision Between Autonomous Vehicles and the Liability System” by Gary Marchant and Rachel Lindor, a manufacturer cannot anticipate all possible scenarios that an autonomous car will encounter. While the manufacturer will design the system to minimize the risks of situations that it does anticipate, the collisions that are most damaging and costly will be those that the manufacturer fails to anticipate. This leaves the manufacturer highly vulnerable to design defect claims, in particular under the cost-benefit test. In light of this, Marchant and Lindor argue that “the technology is potentially doomed...because the liability burden on the manufacturer may be prohibitive of further development. Thus, even though an autonomous vehicle may be safer overall than a conventional vehicle, it will shift the responsibility for collisions, and hence liability, from drivers to manufacturers. The shift will push the manufacturer away from the socially-optimal outcome—to develop the autonomous vehicle.” Consequently, policymakers need to be mindful of manufacturers bearing excessive liability costs and the potential consequences that may result, such as higher consumer costs and delays in introducing autonomous car technology. In the report “Autonomous Vehicle Technology” by the RAND Corporation, the authors recommend that policymakers consider approaches such as tort preemption, a federal insurance backstop, and long-term cost-benefit analysis of the legal standard for reasonableness. These approaches attempt to align the private and public costs of autonomous car technology such that adoption is not unnecessarily delayed and no single party bears a disproportionate share of the costs. NHTSA guidelines In September 2016, the National Highway Traffic Safety Administration released a policy report to accelerate the adoption of autonomous car technology (or HAVs, highly automated vehicles) and provide guidelines for an initial regulatory framework. The key points are: States are responsible for determining liability rules for HAVs. States should consider how to allocate liability among HAV owners, operators, passengers, manufacturers, and others when a crash occurs. Determination of who or what is the “driver” of an HAV in a given circumstance does not necessarily determine liability for crashes involving that HAV. 
Rules and laws allocating tort liability could have a significant effect on both consumer acceptance of HAVs and their rate of deployment. Such rules also could have a substantial effect on the level and incidence of automobile liability insurance costs in jurisdictions in which HAVs operate. In the future, the States may identify additional liability issues and seek to develop consistent solutions. It may be desirable to create a commission to study liability and insurance issues and make recommendations to the States. H.R. 3388, the SELF DRIVE Act of 2017 The House of Representatives on September 6, 2017, unanimously passed H.R. 3388, the SELF DRIVE Act of 2017 Advance safety by prioritizing the protection of consumers. Reaffirm the role and responsibilities of federal and state governments. Update the Federal Motor Vehicle Safety Standards to account for advances in technology and the evolution of highly automated vehicles, The Federal Government, with the passing of the SELF DRIVE Act, is limiting the role of States, and this could signal a change in the future of liability laws. With the Federal Government also asserting that consumers will be protected, manufacturers may be at a liability disadvantage and stand to lose surplus. Updating the Federal Motor Vehicle Safety Standards will affect liability law. These laws will continue to protect the consumer while placing stricter standards on producers. The Federal Government has yet to announce any specific autonomous vehicular manslaughter liability laws. Artificial intelligence and liability More broadly, any software with access to the real world, including autonomous vehicles and robots, can cause property damage, injury, and death. This raises questions about civil liability or criminal responsibility. In 2018, The University of Brighton researcher John Kingston analyzed three legal theories of criminal liability that could apply to an entity controlled by artificial intelligence. Perpetrator via another - the programmer (software designer) or the user could be held liable for directly instructing the AI entity to commit the crime. This is used in common law when a person instructs or directly causes an animal or person incapable of criminal responsibility (such as a young child or a person with a severe mental disability) to commit a crime. Natural and probable consequence - the programmer or the user could be held liable for causing the AI entity to commit a crime as a consequence of its natural operation. For example, if a human obstructs the work of a factory robot and the AI decides to squash the human as the easiest way to clear the obstruction to continue working, if this outcome was likely and the programmer knew or should have known that, the programmer could be held criminally liable. Direct liability - the AI system has demonstrated the criminal elements of the recognized theory of liability in criminal law. Strict liability offenses (like speeding) require an action (actus reus), but "conventional" offenses (like murder) require an intention (a type of mens rea). Criminal negligence involves non-performance of duty in the face of evidence of possible harm. Legally, courts may be capable under existing laws of assigning criminal liability to the AI system of an existing self-driving car for speeding; however, it is not clear that this would be a useful thing for a court to do. 
Possible defenses include unexpected malfunction or infection with malware, the latter of which has been successfully used as a defense in the United Kingdom in the case of a denial-of-service attack. Kingston identifies two areas of law, depending on the type of entity: For products, product liability laws apply, including the enforcement of warranties. For services, the tort of negligence may apply if the system failed to perform up to its duty of care. The NHTSA investigation of a fatal 2016 crash involving Tesla Autopilot proceeded as an automobile product safety inquiry and determined that despite the crash, there were no defects that required a recall (though Tesla is working to improve the software to avoid similar crashes). Autopilot gives cars only limited autonomy, and human drivers are expected to maintain situational awareness and take over as needed. With fully autonomous vehicles, the software and vehicle manufacturers are expected to be liable for any at-fault collisions (under existing automobile products liability laws), rather than the human occupants, the owner, or the owner's insurance company. Volvo has already announced that it will pay for any injuries or damage caused by its fully autonomous car, which it expects to start selling in 2020. Starting in 2012, some U.S. states have passed laws or regulations specifically addressing autonomous car testing, certification, and sales, with some issuing special driver's licenses; this remains an active area of lawmaking. Human occupants would still be liable for actions they directed, such as choosing where to park (and thus for parking tickets). University of South Carolina law professor Bryant Walker Smith points out that with automated systems, considerably more data will typically be available than with human-driver crashes, allowing more reliable and detailed assessment of liability. He also predicted that comparisons between how an automated system responds and how a human would have or should have responded would be used to help determine fault. State level legislation in the United States According to the NHTSA, states retain their responsibility for motor vehicle insurance and liability regimes, among other traditional responsibilities such as vehicle licensing and registration and traffic laws and enforcement. Several jurisdictions, such as Michigan, Nevada, and Washington, D.C., have explicitly written provisions for how liability will be treated. Enacted autonomous vehicle legislation Arizona's Republican Gov. Doug Ducey's new rules, implemented March 1, lay out a specific list of licensing and registration requirements for autonomous car operators. Notably, Ducey's order specifies that a “person” subject to the laws includes any corporation incorporated in Arizona. Shift in auto insurance marketplace In a white paper titled “Marketplace of Change: Automobile Insurance in the Era of Autonomous Vehicles,” KPMG estimated that personal auto accounted for 87% of loss insurance, while commercial auto accounted for 13% in 2013. By 2040, personal auto is projected to fall to 58%, while commercial auto rises to 28%, and product liability gains 14%. This reflects the view that personal liability will fall as the responsibility for driving shifts to the vehicle and that mobility on demand will take greater hold. In addition, the overall pie representing losses covered by liability policies is expected to shrink as autonomous cars cause fewer collisions. 
Although KPMG cautions that this elimination of excess capacity will bring about significant changes to the insurance industry, 32% of insurance firm leaders expect that driverless vehicles will have no material effect on the insurance industry over the next 10 years. Inaction by the large players has opened up opportunities for new entrants. For example, Metromile, an insurance provider start-up founded in 2011, has started to offer usage-based insurance for low-mileage drivers and designed a policy to complement the commercial coverage of Uber drivers. Some states have specific laws for autonomous car insurance. Public statements from car manufacturers In 2015, Volvo issued a press release claiming that Volvo would accept full liability whenever its cars are in autonomous mode. President and Chief Executive of Volvo Cars Håkan Samuelsson went further, urging "regulators to work closely with car makers to solve controversial outstanding issues such as questions over legal liability in the event that a self-driving car is involved in a crash or hacked by a criminal third party." In an IEEE article, the senior technical leader for safety and driver support technologies at Volvo echoed a similar sentiment, saying, “if we made a mistake in designing the brakes or writing the software, it is not reasonable to put the liability on the customer...we say to the customer, you can spend time on something else, we take responsibility.” Starting in September 2023 in the United States, Mercedes-Benz accepts liability for its Level 3 Drive Pilot as long as the "user operates Drive Pilot as designed," but "the driver must be ready to take control of the vehicle at all times when prompted to intervene by the vehicle." Specific cases In August 2023, a General Motors Cruise self-driving car drove into wet concrete in a road construction zone in San Francisco, California, and got stuck. The company agreed to pay to repave the road. See also History of self-driving cars Self-driving car Self-driving truck Death of Elaine Herzberg Regulation of self-driving cars References
Self-driving car liability
[ "Engineering" ]
4,916
[ "Automotive engineering", "Self-driving cars" ]
52,562,388
https://en.wikipedia.org/wiki/N4O
The molecular formula N4O (molar mass: 72.03 g/mol, exact mass: 72.0072 u) may refer to: Nitrosylazide Oxatetrazole See also Dinitrogen tetroxide Inorganic molecular formulas
N4O
[ "Chemistry" ]
68
[ "Isomerism", "Set index articles on molecular formulas", "Inorganic molecular formulas" ]
42,413,466
https://en.wikipedia.org/wiki/Nanosphere%20lithography
Nanosphere lithography (NSL) is an economical technique for generating single-layer hexagonally close packed or similar patterns of nanoscale features. Generally, NSL applies planar ordered arrays of nanometer-sized latex or silica spheres as lithography masks to fabricate nanoparticle arrays. NSL uses self-assembled monolayers of spheres (typically made of polystyrene, often available commercially as an aqueous suspension) as evaporation masks. These spheres can be deposited using multiple methods including Langmuir-Blodgett, dip coating, spin coating, solvent evaporation, force-assembly, and air-water interface. This method has been used to fabricate arrays of various nanopatterns, including gold nanodots with precisely controlled spacings. Nanosphere monolayer preparation Monolayers of nanospheres, to be used as lithography masks can be created using multiple methods: Langmuir-Blodgett is a deposition method in which the nanoparticles are placed in a Langmuir-Blodgett Trough floating on an aqueous solution, forming a monolayer. With the help of barriers and surface pressure sensor, the particles are compressed into the desired packing density automatically. The coating is done in this packing density with the help of a motorized dipper while the barriers maintain the desired particle packing density. The benefits of the Langmuir-Blodgett method include a strict control over the particle packing density and coating thickness (mono or multilayers can be created) as well as the ability to coat large homogeneous areas. Mask preparation with the Langmuir-Blodgett method has been demonstrated for example using SiO2 particles and polystyrene particles. Dip-coating is a simplified version of the Langmuir-Blodgett. In dip coating, the nanosphere packing density isn't controlled but the dipping is performed directly on a colloidal particle solution. Dip coating is an effective method for applications where a precise control over the particle distribution isn't required. Spin Coating and solvent evaporation methods are capable of producing large areas of particles, but with limited control over the layer homogeneity or thickness. Solvent evaporation is accomplished via drop coating, and is arguably the simplest method to produce a monolayer of nanospheres, as the spheres are simply dropped onto the substrate and allowed to dry, self-assembling into a monolayer. Sometimes the substrate is placed at an angle or moved in circular motions to help the suspension of spheres spread and wet the entire surface. Force-assembled monolayers are formed from a dry nanosphere powder, which can typically be obtained by centrifugation of a nanosphere suspension. The powder is then rubbed between two substrates to force them into a monolayer. The substrates are typically coated in a polymer such as polydimethylsiloxane (PDMS) to promote adhesion and spreading of the nanospheres. The air-water interface method relies on the formation of a monolayer of nanospheres on the surface of a water bath, at the air-water interface. In this method, the substrate is held below the surface of the water, and water is then pumped out to gradually lower the surface. Eventually, the water surface is lowered below the level of the substrates, and the monolayer at the air-water interface is deposited onto the substrate surface. Lithography Method with Colloidal Mask NSL is an easily scalable, high-throughput, and low-cost technique that allows nanoscopic precision in an arbitrarily large area. 
A lithographic mask can be promptly achieved via particle self-assembly, as previously described, whose pattern resolution is entirely dependent on the colloidal size that can be deposited in high-quality monolayer arrays. The best achieved resolutions in the literature range between 50 nm and 200 nm, which is comparable to that of state-of-the-art conventional-lithography systems. Moreover, the fabricated structures can be produced with a high accuracy on a large scale, as the method is not limited in terms of the deposition area, meaning that it offers the possibility to be adapted to mass production techniques such as roll-to-roll. NSL can also be used with a large range of materials as it uses low-temperature steps (<100 °C), making it ideal for usage with temperature-sensitive materials (e.g., polymeric-based flexible substrates). The NSL method generally starts by the preparation of the patterning mask, comprising a self-assembled monolayer array of colloidal nano/micro-particles, followed by the nano/micro-structure production. The method usually involves four main steps, as depicted in the sketch, allowing the formation of different geometries. The variety of techniques that can be used for the colloidal array formation, as well as for the subsequent structure production, shows the high versatility of this method for implementation in various applications. For instance, it is a preferential soft-lithography technique to micro-pattern photovoltaic devices, to produce structures allowing light-management and/or self-cleaning. See also Nanolithography Nanoparticle deposition References External links Fabricating highly organized nanoparticle films Lithography (microfabrication)
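To make the mask geometry above concrete, the following sketch estimates the feature size and spacing produced by a single-layer hexagonally close-packed sphere mask. It assumes the commonly quoted geometric relations for single-layer NSL (in-plane feature size of roughly 0.23 D and interparticle spacing of D/√3, with D the sphere diameter); the function name and the example diameters are illustrative and are not taken from the article.

```python
import math

def single_layer_nsl_geometry(sphere_diameter_nm: float):
    """Estimate feature sizes for a single-layer hexagonally close packed
    nanosphere mask of sphere diameter D (all values in nm).

    Assumed relations (commonly quoted for single-layer NSL masks):
      in-plane particle size  a    = (3/2) * (sqrt(3) - 1 - 1/sqrt(3)) * D  (~0.23 D)
      interparticle spacing   d_ip = D / sqrt(3)                            (~0.58 D)
    """
    d = sphere_diameter_nm
    a = 1.5 * (math.sqrt(3) - 1 - 1 / math.sqrt(3)) * d   # triangular feature size
    d_ip = d / math.sqrt(3)                               # center-to-center spacing
    return a, d_ip

if __name__ == "__main__":
    for d in (200, 500, 1000):  # illustrative sphere diameters in nm
        a, d_ip = single_layer_nsl_geometry(d)
        print(f"D = {d:5.0f} nm -> feature ~ {a:6.1f} nm, spacing ~ {d_ip:6.1f} nm")
```

Under these assumptions, 500 nm spheres give features of roughly 115 nm, which sits comfortably within the 50 nm to 200 nm resolution range quoted above.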
Nanosphere lithography
[ "Materials_science" ]
1,106
[ "Nanotechnology", "Microtechnology", "Lithography (microfabrication)" ]
42,414,727
https://en.wikipedia.org/wiki/Photoinjector
A photoinjector is a type of source for intense electron beams which relies on the photoelectric effect. A laser pulse incident on the cathode of a photoinjector drives electrons out of it and into the accelerating field of the electron gun. In comparison with the widespread thermionic electron gun, photoinjectors produce electron beams of higher brightness, which means more particles packed into a smaller volume of phase space (beam emittance). Photoinjectors serve as the main electron source for single-pass synchrotron light sources, such as free-electron lasers, and for ultrafast electron diffraction setups. The first RF photoinjector was developed in 1985 at Los Alamos National Laboratory and used as the source for a free-electron-laser experiment. High-brightness electron beams produced by photoinjectors are used directly or indirectly to probe the molecular, atomic and nuclear structure of matter for fundamental research, as well as material characterization. A photoinjector comprises a photocathode, an electron gun (RF or DC), power supplies, a driving laser system, a timing and synchronization system, and emittance compensation magnets. It can include a vacuum system and a cathode fabrication or transport system. It is usually followed by beam diagnostics and higher-energy accelerators. The key component of a photoinjector is the photocathode, which is located inside the cavity of the electron gun (usually a 0.6-fractional cell for optimal distribution of the accelerating field). The extracted electron beam suffers from its own space-charge fields, which deteriorate the beam brightness. For that reason, photoelectron guns often have one or more full-size booster cells to increase the beam energy and reduce the space-charge effect. The gun's accelerating field is an RF (radio-frequency) wave provided by a klystron or other RF power source. For low-energy beams, such as the ones used in electron diffraction and microscopy, electrostatic (DC) acceleration is suitable. The photoemission on the cathode is initiated by an incident pulse from the driving laser. Depending on the material of the photocathode, the laser wavelength can vary from 1700 nm (infrared) down to 100-200 nm (ultraviolet). Emission from the cavity wall is possible with a laser wavelength of about 250 nm for copper walls or cathodes. Semiconductor cathodes are often sensitive to ambient conditions and might require a clean preparation chamber located behind the photoelectron gun. The optical system of the driving laser is often designed to control the pulse structure, and consequently the distribution of electrons in the extracted bunch. For example, a fs-scale laser pulse with an elliptical transverse profile creates a thin "pancake" electron bunch that evolves into a uniformly filled ellipsoid under its own space-charge fields. A more sophisticated laser pulse with a comb-like longitudinal profile generates a similarly shaped, comb electron beam. Notes Particle accelerators Accelerator physics Applications of photovoltaics
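As a rough illustration of how the driving laser and photocathode together set the extracted bunch, the sketch below estimates the charge per pulse from the pulse energy, wavelength, and quantum efficiency. The 1 microjoule pulse energy, 257 nm wavelength, and 1e-5 quantum efficiency are assumed order-of-magnitude values for a metal cathode driven in the UV, not figures from the article.

```python
# Rough estimate of extracted bunch charge from a photocathode:
#   N_photons = E_pulse / (h c / lambda)
#   Q         = QE * N_photons * e
# The numerical inputs below are illustrative assumptions.

H = 6.62607015e-34          # Planck constant, J*s
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def bunch_charge(pulse_energy_j: float, wavelength_m: float, quantum_efficiency: float) -> float:
    """Return the extracted charge in coulombs for one driving-laser pulse."""
    photon_energy = H * C / wavelength_m
    n_photons = pulse_energy_j / photon_energy
    return quantum_efficiency * n_photons * E_CHARGE

if __name__ == "__main__":
    q = bunch_charge(pulse_energy_j=1e-6,      # 1 uJ UV pulse (assumed)
                     wavelength_m=257e-9,      # 257 nm, within the UV range noted above
                     quantum_efficiency=1e-5)  # typical order for a metal cathode (assumed)
    print(f"Extracted bunch charge ~ {q*1e12:.1f} pC")
```

With these assumed inputs the estimate is on the order of a few picocoulombs per pulse, which is why semiconductor cathodes with much higher quantum efficiency are attractive despite their sensitivity to ambient conditions.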
Photoinjector
[ "Physics" ]
628
[ "Accelerator physics", "Applied and interdisciplinary physics", "Experimental physics" ]
42,419,995
https://en.wikipedia.org/wiki/UrbanGlass
UrbanGlass, located on Fulton Street in the historic 1918 Strand Theatre in the Downtown Brooklyn Cultural District is the New York metropolitan area's leading glass-blowing facility. UrbanGlass was founded in 1977 by three artists and was originally known as the New York Experimental Glass Workshop. It is now the primary studio for more than 200 artists and hosts more than 500 art students for regular classes. UrbanGlass shares the Strand Theatre with BRIC Arts Media, which also reopened in October 2013. New to UrbanGlass upon its reopening in Fort Greene is the Agnes Varis Art Center, which is home to changing exhibits featuring the work of artists who work at UrbanGlass and others. References External links Glass museums and galleries Companies based in Brooklyn American companies established in 1977
UrbanGlass
[ "Materials_science", "Engineering" ]
152
[ "Glass engineering and science", "Glass museums and galleries" ]
42,420,984
https://en.wikipedia.org/wiki/Brillouin%20spectroscopy
Brillouin spectroscopy is an empirical spectroscopy technique which allows the determination of elastic moduli of materials. The technique uses inelastic scattering of light when it encounters acoustic phonons in a crystal, a process known as Brillouin scattering, to determine phonon energies and therefore interatomic potentials of a material. The scattering occurs when an electromagnetic wave interacts with a density wave, i.e. photon-phonon scattering. This technique is commonly used to determine the elastic properties of materials in mineral physics and materials science. Brillouin spectroscopy can be used to determine the complete elastic tensor of a given material, which is required in order to understand its bulk elastic properties. Comparison with Raman spectroscopy Brillouin spectroscopy is similar to Raman spectroscopy in many ways; in fact the physical scattering processes involved are identical. However, the type of information gained is significantly different. The process observed in Raman spectroscopy, Raman scattering, primarily involves high-frequency molecular vibrational modes. Information relating to modes of vibration, such as the six normal modes of vibration of the carbonate ion, (CO3)2−, can be obtained through a Raman spectroscopy study, shedding light on structure and chemical composition, whereas Brillouin scattering involves the scattering of photons by low-frequency phonons, providing information regarding elastic properties. Optical phonons and molecular vibrations measured in Raman spectroscopy typically have wavenumbers between 10 and 4000 cm−1, while phonons involved in Brillouin scattering are on the order of 0.1–6 cm−1. This roughly two-order-of-magnitude difference becomes obvious when attempting to perform Raman spectroscopy versus Brillouin spectroscopy experiments. In Brillouin scattering, and similarly in Raman scattering, both energy and momentum are conserved in the relations $\omega_s = \omega_i \pm \Omega$ and $\mathbf{k}_s = \mathbf{k}_i \pm \mathbf{q}$, where $\omega$ and $\mathbf{k}$ are the angular frequency and wavevector of the photon, respectively, while the phonon angular frequency and wavevector are $\Omega$ and $\mathbf{q}$. The subscripts $i$ and $s$ denote the incident and scattered waves. The first equation is the result of the application of the conservation of energy to the system of the incident photon, the scattered photon, and the interacting phonon. Applying conservation of energy also sheds light upon the frequency regime in which Brillouin scattering occurs. The energy imparted to an incident photon by a phonon is very small, generally on the order of 10−5 of the photon's energy (roughly the ratio of the acoustic velocity to the speed of light in the medium). Given an approximate frequency of visible light, ~1014 Hz, it is easy to see that Brillouin scattering generally lies in the GHz regime. The second equation describes the application of conservation of momentum to the system. The phonon, which is either generated or annihilated, has a wavevector which is a linear combination of the incident and scattered wavevectors. This orientation will become more apparent and important when the orientation of the experimental setup is discussed. The equations describe both the constructive (Stokes) and destructive (anti-Stokes) interactions between a photon and a phonon. Stokes scattering describes the interaction scenario in which the material absorbs the photon, creating a phonon and inelastically emitting a photon with a lower energy than that of the absorbed photon. 
Anti-Stokes scattering describes the interaction scenario in which the incoming photon absorbs a phonon, phonon annihilation, and a photon with a higher energy than that of the absorbed photon is emitted. The figure illustrates the differences between Raman scattering and Brillouin scattering along with the Stokes and anti-Stokes interactions as seen in experimental data. The figure depicts three important details. The first is the Rayleigh line, the peak which has been suppressed at 0 cm−1. This peak is a result of Rayleigh scattering, a form of elastic scattering from the incident photons and the sample. Rayleigh scattering occurs when the induced polarization of the atoms, resulting from the incident photons, does not couple with possible vibrational modes of the atoms. The resulting emitted radiation has the same energy as the incident radiation, meaning no frequency shift is observed. This peak is generally quite intense and is not of direct interest for Brillouin spectroscopy. In an experiment, the incident light is most often a high-power laser. This results in a very intense Rayleigh peak which has the ability to wash out the Brillouin peaks of interest. In order to adjust for this, most spectra are plotted with the Rayleigh peak either filtered out or suppressed. The second noteworthy aspect of the figure is the distinction between Brillouin and Raman peaks. As previously mentioned, Brillouin peaks range from 0.1 cm−1 to approximately 6 cm−1 while Raman scattering wavenumbers range from 10–10000 cm−1. As Brillouin and Raman spectroscopy probe two fundamentally different interaction regimes, this is not too large of an inconvenience. The fact that Brillouin interactions are such low frequency, however, creates technical challenges when performing experiments, which a Fabry–Pérot interferometer is usually used to overcome. A Raman spectroscopy system is generally less technically complicated and can be performed with a diffraction grating–based spectrometer. In some cases a single grating–based spectrometer has been used to collect both Brillouin and Raman spectra from a sample. The figure also highlights the difference between Stokes and anti-Stokes scattering. Stokes scattering (phonon creation) is displayed as a positive shift in wavenumber. Anti-Stokes scattering (phonon annihilation) is displayed as a negative shift in wavenumber. The locations of the peaks are symmetric about the Rayleigh line because they correspond to the same energy-level transition but of a different sign. In practice, six Brillouin lines of interest are generally seen in a Brillouin spectrum. Acoustic waves have three polarization directions: one longitudinal and two transverse directions, each being orthogonal to the others. Solids can be considered nearly incompressible within an appropriate pressure regime; as a result, longitudinal waves, which are transmitted via compression parallel to the propagation direction, can transmit their energy through the material easily and thus travel quickly. The motion of transverse waves, on the other hand, is perpendicular to the propagation direction and is thus less easily propagated through the medium. As a result, longitudinal waves travel more quickly through solids than transverse waves. An example of this can be seen in quartz, with an approximate acoustic longitudinal wave velocity of 5965 m/s and transverse wave velocity of 3750 m/s. Fluids cannot support transverse waves. 
As a result, transverse wave signals are not found in Brillouin spectra of fluids. The relation $V = \Omega / q$ connects the acoustic wave velocity, $V$, the angular frequency, $\Omega$, and the phonon wavenumber, $q$. According to this equation, acoustic waves with varying speeds will appear in the Brillouin spectra at varying wavenumbers: faster waves with higher-magnitude wavenumbers and slower waves with smaller wavenumbers. Therefore, three distinct Brillouin lines will be observable. In isotropic solids, the two transverse waves will be degenerate, as they will be traveling along elastically identical crystallographic planes. In non-isotropic solids the two transverse waves will be distinguishable from one another, but not distinguishable as being horizontally or vertically polarized without a deeper understanding of the material being studied. They are then generically labeled transverse 1 and transverse 2. Applications Brillouin spectroscopy is a valuable tool for determining the complete elastic tensor, $C_{ijkl}$, of solids. The elastic tensor is an 81-component 3×3×3×3 tensor which, through Hooke's law, relates stress and strain within a given material. The number of independent elastic constants found within the elastic tensor can be reduced through symmetry operations and depends on the symmetry of a given material, ranging from 2 for non-crystalline substances or 3 for cubic crystals to 21 for systems with triclinic symmetry. The tensor is unique to a given material and thus must be independently determined for each material in order to understand its elastic properties. The elastic tensor is especially important to mineral physicists and seismologists looking to understand the bulk, polycrystalline properties of deep-Earth minerals. It is possible to determine elastic properties of materials such as the adiabatic bulk modulus, $K_S$, without first finding the complete elastic tensor, through techniques such as the determination of an equation of state through a compression study. Elastic properties found in this way, however, do not scale well to bulk systems such as those found within rock assemblages in the Earth's mantle. In order to calculate the elastic properties of bulk material with randomly oriented crystals, the elastic tensor is needed. Using Equation 3, it is possible to determine the sound velocity through a material. In order to obtain the elastic tensor, the Christoffel equation needs to be applied: $\rho V^2 u_i = \Gamma_{ik} u_k$, with $\Gamma_{ik} = C_{ijkl}\, n_j n_l$. The Christoffel equation is essentially an eigenvalue problem which relates the elastic tensor, $C_{ijkl}$, and the propagation direction, $\mathbf{n}$ (set by the crystal orientation and the orientation of the incident light), to a matrix, $\Gamma_{ik}$, whose eigenvalues are equal to $\rho V^2$, where $\rho$ is density and $V$ is acoustic velocity. The polarization matrix, $\mathbf{u}$, contains the corresponding polarizations of the propagating waves. Using this equation, with $\mathbf{n}$ known from the experimental setup and $V$ determined from the Brillouin spectra, it is possible to determine $C_{ijkl}$, given the density of the material. For specific symmetries, the relationships between particular combinations of elastic constants, $C_{ij}$, and acoustic wave velocities, $\rho V^2$, have been determined and tabulated. For example, in a cubic system $C_{ijkl}$ reduces to 3 independent components. Equation 5 shows the complete elastic tensor for a cubic material, which contains only the independent constants $C_{11}$, $C_{12}$, and $C_{44}$. The relations between the elastic constants and $\rho V^2$ can be found in Table 1. In a cubic material it is possible to determine the complete elastic tensor from pure longitudinal and pure transverse phonon velocities. 
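The following sketch shows what solving the Christoffel eigenvalue problem looks like in practice for the cubic case discussed above: it builds the matrix Γ_ik = C_ijkl n_j n_l from the three independent cubic constants and returns the three acoustic velocities for a chosen propagation direction. The silicon elastic constants and density are approximate literature values used purely as an illustration, and the function name is a placeholder, not something from the article.

```python
import numpy as np

def christoffel_velocities_cubic(c11, c12, c44, density, n):
    """Solve the Christoffel equation  Gamma_ik u_k = rho V^2 u_i  for a cubic
    crystal.  Returns the three acoustic velocities (m/s) for propagation
    direction n.  Elastic constants in Pa, density in kg/m^3."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    gamma = np.empty((3, 3))
    for i in range(3):
        for k in range(3):
            if i == k:
                # Diagonal terms: C11 along n_i, C44 for the transverse components
                gamma[i, k] = c11 * n[i] ** 2 + c44 * (1.0 - n[i] ** 2)
            else:
                gamma[i, k] = (c12 + c44) * n[i] * n[k]
    eigvals = np.linalg.eigvalsh(gamma)                 # eigenvalues are rho * V^2
    return np.sort(np.sqrt(eigvals / density))[::-1]    # fastest (longitudinal) first

if __name__ == "__main__":
    # Approximate literature values for silicon (assumed here for illustration only).
    v = christoffel_velocities_cubic(c11=165.7e9, c12=63.9e9, c44=79.6e9,
                                     density=2329.0, n=[1, 0, 0])
    print("Acoustic velocities along [100] (m/s):", np.round(v, 0))
```

Along [100] the longitudinal velocity reduces to sqrt(C11/ρ) and the two degenerate transverse velocities to sqrt(C44/ρ), which is exactly the kind of tabulated relation the text refers to.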
In order to make the above calculations, the phonon wavevector, $\mathbf{q}$, must be pre-determined from the geometry of the experiment. There are three main Brillouin spectroscopy geometries: 90-degree scattering, backscattering, and platelet geometry. Frequency shift The frequency shift of the incident laser light due to Brillouin scattering is given by $\Delta\omega = \pm\, \omega_i \frac{2 n V}{c} \sin(\theta/2)$, where $\omega_i$ is the angular frequency of the light, $V$ is the velocity of acoustic waves (the speed of sound in the medium), $n$ is the index of refraction, $c$ is the vacuum speed of light, and $\theta$ is the angle between the incident and scattered light. See also Brillouin scattering Raman spectroscopy References Vibrational spectroscopy
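As a worked example of the frequency-shift expression, the sketch below evaluates the equivalent form in ordinary frequency, Δν = 2nV sin(θ/2)/λ0, using the quartz acoustic velocities quoted earlier in the article. The 532 nm laser wavelength, refractive index of 1.55, and backscattering geometry are assumptions chosen only for illustration.

```python
import math

C = 2.99792458e8  # vacuum speed of light, m/s

def brillouin_shift_hz(n, v_acoustic, wavelength_m, theta_deg):
    """Brillouin frequency shift  delta_nu = 2 n V sin(theta/2) / lambda0  in Hz."""
    return 2.0 * n * v_acoustic * math.sin(math.radians(theta_deg) / 2.0) / wavelength_m

def hz_to_wavenumber_cm(freq_hz):
    """Convert a frequency shift in Hz to wavenumbers (cm^-1)."""
    return freq_hz / (C * 100.0)

if __name__ == "__main__":
    # Assumed setup: 532 nm laser, backscattering (theta = 180 deg), n ~ 1.55 for quartz.
    for label, v in (("longitudinal", 5965.0), ("transverse", 3750.0)):  # quartz values from the text
        shift = brillouin_shift_hz(n=1.55, v_acoustic=v, wavelength_m=532e-9, theta_deg=180.0)
        print(f"{label:12s}: {shift/1e9:5.1f} GHz  (~{hz_to_wavenumber_cm(shift):.2f} cm^-1)")
```

Under these assumptions both shifts land in the tens-of-GHz range, roughly 0.7 to 1.2 cm^-1, consistent with the 0.1–6 cm^-1 window given above.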
Brillouin spectroscopy
[ "Physics", "Chemistry" ]
2,136
[ "Vibrational spectroscopy", "Spectroscopy", "Spectrum (physical sciences)" ]
42,421,874
https://en.wikipedia.org/wiki/Laser%20ignition
Laser ignition is an alternative method for igniting mixtures of fuel and oxidiser. The phase of the mixture can be gaseous or liquid. The method is based on laser ignition devices that produce short but powerful flashes regardless of the pressure in the combustion chamber. Usually, high-voltage spark plugs are good enough for automotive use, as the typical compression ratio of an Otto-cycle internal combustion engine is around 10:1 and in some rare cases reaches 14:1. However, fuels such as natural gas or methanol can withstand high compression without autoignition. This allows higher compression ratios, which is economically attractive because the fuel efficiency of such engines is higher. Using a high compression ratio and high pressure requires special spark plugs, which are expensive and whose electrodes still wear out. Thus, even expensive laser ignition systems could be economical, because they would last longer. Further applications of laser ignition Laser ignition is considered a potential ignition system for non-hypergolic liquid rocket engines, reaction control systems and firearms which need an ignition system. Conventional ignition technologies like torch igniters are more complex in sequencing and need additional components like propellant feed lines and valves. Therefore, they are heavy compared to a laser ignition system. Pyrotechnical devices allow only one ignition per unit and imply increased launch pad precautions as they are made of explosives. See also Electronic firing References Internal combustion engine Laser applications Ignition systems Engine components Applications of control engineering Pyrotechnic initiators Firearm actions
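To illustrate why the higher compression ratios enabled by knock-resistant fuels are economically attractive, the sketch below evaluates the ideal air-standard Otto-cycle efficiency, η = 1 − r^(1−γ). This textbook formula and the value γ = 1.4 are assumptions used only for illustration; real engine efficiencies are considerably lower than the ideal values printed here.

```python
def otto_efficiency(compression_ratio: float, gamma: float = 1.4) -> float:
    """Ideal (air-standard) Otto cycle thermal efficiency: eta = 1 - r**(1 - gamma)."""
    return 1.0 - compression_ratio ** (1.0 - gamma)

if __name__ == "__main__":
    # Compare the typical spark-ignition ratio (~10:1) with the higher ratios
    # that knock-resistant fuels such as natural gas or methanol can tolerate.
    for r in (10, 14, 16):
        print(f"r = {r:2d}:1  ->  ideal efficiency ~ {otto_efficiency(r) * 100:.1f} %")
```

Going from a 10:1 to a 14:1 compression ratio raises the ideal efficiency from about 60% to about 65%, which is the gain that motivates ignition systems able to work reliably at the resulting higher cylinder pressures.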
Laser ignition
[ "Technology", "Engineering" ]
303
[ "Internal combustion engine", "Engines", "Combustion engineering", "Control engineering", "Engine components", "Applications of control engineering" ]
42,422,280
https://en.wikipedia.org/wiki/Bullying%20of%20students%20in%20higher%20education
Bullying in higher education refers to the bullying of students as well as faculty and staff taking place at institutions of higher education such as colleges and universities. It is believed to be common although it has not received as much attention from researchers as bullying in some other contexts. This article focuses on bullying of students; see Bullying in academia regarding faculty and staff. In a higher education environment bullying and similar behaviors may include hazing, harassment or stalking. 18.5% of college undergraduates have reported being bullied once or twice, while 22% report being the victim of cyberbullying. All students, regardless of race, weight, gender, ethnicity, etc., can be targeted as victims of bullying. Two research articles have examined bullying at the post-secondary level in great detail. These articles both appeared in the journal Adolescence in 2004 and 2006. It is estimated that 100,000 students drop out of college each year due to bullying. Bullying in academia Bullying of scholars and staff in academia, especially institutions of higher education such as colleges and universities has been known to exist, although has not received as much attention from researchers as bullying in some other contexts. Hazing Hazing is the practice of rituals and other activities involving harassment, abuse or humiliation as a way of initiating a person into a group. Hazing is seen in many different types of social groups, including gangs, sports teams, schools, military units, fraternities and sororities. Hazing is often prohibited by law and may comprise either physical or psychological abuse. It may also include nudity or sexually-oriented offenses. More than half of hazing incidents on college campuses result in pictures publicly posted on the Internet. Students have reported that they are not adequately exposed to hazing prevention programs on campuses. Two out of every five college students acknowledge incidents of hazing on their campus according to RA Magazine. 55% of college students who are involved in campus clubs, teams and other organizations have reported being hazed in some form. Cyberbullying Cyberbullying is the use of electronic communication to bully a person, typically by sending messages of an intimidating or threatening nature. This form of bullying can easily go undetected because of lack of parental/authoritative supervision. Because bullies can pose as someone else, it is the most anonymous form of bullying. Cyberbullying includes, but is not limited to, abuse using email, instant messaging, text messaging, websites, social networking sites, etc. In a study performed at Indiana State University, it was determined that electronic media such as social networking and text messaging are more common outlets for cyberbullying, while chat rooms and other websites are less likely to be used in cyberbullying. Once a young adult enters college, there is little to no computer monitoring, leading to the misuse of technology and the added probability of cyberbullying. There have been occasions where the bullying was not intentional, but still occurred. Even if the bullying was not consciously intended it can still have awful impacts. One study concluded that people had been ostracised online as a way of protecting another group of people. One woman had been listed as a political opponent on a pro-trans website due to misunderstandings and alternating views, which resulted in targeted messages and harassment. 
The listed person in question did not actually hold any animosity toward the trans community, and so they ended up being the one bullied. While the motivations of the website's authors are unknown, it can be assumed that they did not specifically aim to target the person affected, and thus the bullying was an unintended result. Techniques According to an article in the Chronicle of Higher Education, academic bullies employ a variety of covert behaviors to target their victims. These subtle actions include interruptions during group meetings, eye-rolling, undermining credibility, and exclusion from social interactions. Because of these techniques, bullying in academia is considered to be of a lower intensity. Reasons NoBullying.com lists a variety of reasons that bullying in college occurs. The first reason is that there are new targets available at the bully's disposal. The bully has said goodbye to the people he or she previously socialized with and/or bullied, so there is a perceived need to find new outlets for such behaviors. Another reason is that there is less direct authority. Leaving for college introduces many students to their first time on their own without the interference of parents and guardians. Faculty and staff are less interested in interpersonal relationships between their students and thus pay less attention to classroom dynamics than a high school teacher might. College faculty and staff follow research that encourages them to take a backseat to bullying and allow the students to overcome adversity on their own. Students at most universities and colleges are not afforded the luxury of leaving after the school day as they would in high school. Most have to spend time outside of school with their classmates whether they choose to or not. In college, a majority of the campuses are residential, and thus students may see much more of their potential bullies and/or victims. Roommate conflicts inside the residential dorms can lead to active bullying. In fall 2010, a Rutgers University student committed suicide after his roommate had been filming him and his boyfriend engaging in sexual activities and posted the video online for all to see. The roommate said he did not want him dead, but wanted his friends to know he was disgusted by his behavior. Locations Lynne McDougall uncovered, in her study of bullying in higher education, that the majority of the locations where bullying occurs in colleges were quite conventional. A majority of the bullying is reported as occurring in the same corridor or department, suggesting that students within the same groups, divisions or under the same faculty are responsible for the bullying of their peers. Entranceways of buildings are another prime location for bullying to occur. Entrances and exits are common areas where students have the opportunity to smoke and socialize in between their classes. The library was deemed an area of bullying in McDougall's study as well, hinting that bullying occurs in places where little to no supervision or control is present. The advancement of technology in the classroom has allowed for cyberbullying to occur while students are gathered for the purpose of education. Social media websites such as Twitter allow students to actively post content bashing their classmates. 
College-specific accounts have been created where members of the student body can send posts and messages to an administrator who then retweets or posts the content for all the account’s followers to read. Legality Colleges are not mandated to produce strategies or policies regarding anti-bullying, however some have codes of conduct that encourage students to exhibit appropriate behavior at all times. In most codes of conduct the word bullying is never cited in the physical text. “Both the perpetrators and the victims are adults, so the legal framework is very, very different,” said Charlie Rose, the U.S. Department of Education’s general counsel. The difference between bullying and sexual harassment is the added context of sexuality. Sexual harassment is defined as unwelcome conduct of a sexual nature that interferes with a student’s ability to learn, work, achieve or participate in activities. This can include unwanted sexual advances, sexual touching, requests for sexual favors, or other verbal, nonverbal and physical actions of a sexual nature. This includes spreading sexual rumors, making sexual comments, jokes, gestures, vandalism, pictures, written materials, rating students sexually and circulating Web content of a sexual nature. Human resource departments may be used to address bullying among faculty and staff, while judicial review committees apply sanctions and regulations to students charged with harassment of their peers. See also Bullying Cyberbullying Bullying in academia Hazing School violence Social exclusion Social isolation References Harassment and bullying
Bullying of students in higher education
[ "Biology" ]
1,578
[ "Harassment and bullying", "Behavior", "Aggression" ]
42,425,338
https://en.wikipedia.org/wiki/H-Soz-Kult
H-Soz-u-Kult (Humanities – Sozial und Kulturgeschichte) is an online information and communication platform for historians which disseminates academic news and publications. The project is committed to the principles of open access and community network. Since its founding in 1996 the central editorial office is located at the History Department of the Humboldt University of Berlin. H-Soz-u-Kult is part of H-Net and one of the most important online communication and information services for Historians in the German-speaking world. It is read by more than 20,000 email subscribers in over 70 countries. In 2012, around one million page views by up to 210,000 unique visitors were registered per month on the website. H-Soz-u-Kult publishes a wide range of book reviews, conference reports, job offers, scholarships, tables of contents of academic journals, literature reports and other news from the historical science community. Most publications are in German but the number of English publications continually increases. The book reviews are the main emphasis of H-Soz-u-Kult – more than 12,000 reviews were accessible on its website in 2013. H-Soz-u-Kult’s main editorial office at the Humboldt University of Berlin is supported by a pro bono editorial staff which consists of over 40 researchers from almost all fields of historical science. H-Soz-u-Kult is a part of Clio-online, a partner in a wide range of other academic projects, and was supported by the German Research Foundation for many years. The editorial range has been augmented with contributions from the complementary forums history.transnational and zeitgeschichte-online since 2004 and infoclio.ch since 2009. Current articles from the academic world can be accessed via H-Soz-u-Kult’s website, email and RSS-feeds. H-Soz-u-Kult is the official media partner of the German Union of Historians. References External links H-Soz-u-Kult (english web page) 1996 establishments in Germany Academic publishing Open access journals History organisations based in Germany Digital humanities Social sciences organizations Electronic mailing lists German-language journals Internet properties established in 1996
H-Soz-Kult
[ "Technology" ]
464
[ "Digital humanities", "Computing and society" ]
40,960,096
https://en.wikipedia.org/wiki/C30H46NO7P
The molecular formula C30H46NO7P (molar mass: 563.66 g/mol, exact mass: 563.3012 u) may refer to: Ceronapril, a phosphonate ACE inhibitor that was never marketed Fosinopril, an angiotensin converting enzyme (ACE) inhibitor
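The molar masses quoted on these formula index pages follow directly from standard atomic weights. The short Python check below is illustrative only; the composition dictionary and the rounded atomic weights are assumptions of this sketch, not part of the source.

```python
# Sanity-check the quoted molar mass of C30H46NO7P from standard atomic weights.
ATOMIC_WEIGHTS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "P": 30.974}
composition = {"C": 30, "H": 46, "N": 1, "O": 7, "P": 1}

molar_mass = sum(ATOMIC_WEIGHTS[element] * count for element, count in composition.items())
print(f"{molar_mass:.2f} g/mol")  # ~563.67, matching the 563.66 g/mol quoted above
```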
C30H46NO7P
[ "Chemistry" ]
88
[ "Isomerism", "Set index articles on molecular formulas" ]
40,960,388
https://en.wikipedia.org/wiki/C20H27N3O6
The molecular formula C20H27N3O6 (molar mass: 405.44 g/mol, exact mass: 405.1900 u) may refer to: Imidapril Febarbamate, or phenobamate Molecular formulas
C20H27N3O6
[ "Physics", "Chemistry" ]
70
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
55,461,893
https://en.wikipedia.org/wiki/R.%20J.%20Dwayne%20Miller
R. J. Dwayne Miller is a Canadian chemist and a professor at the University of Toronto. His focus is in physical chemistry and biophysics. He is most widely known for his work in ultrafast laser science, time-resolved spectroscopy, and the development of new femtosecond electron sources. His research has enabled real-time observation of atomic motions in materials during chemical processes and has shed light on the structure-function correlation that underlies biology. Early life and education Miller was born and raised in Winnipeg, Manitoba. In 1978, he received a B.Sc. in chemistry and immunology at the University of Manitoba where Bryan R. Henry was his advisor. He completed his Ph.D. in chemistry at Stanford University in 1983 under the supervision of Michael D. Fayer. His thesis work focused on energy transport in model systems of photosynthesis and is titled Part I, Electronic excited state transport and trapping in disordered systems; Part II, Laser induced ultrasonics. Academic career Following graduation, Miller gained a faculty position at the University of Rochester and immediately took a 12-month leave to do postdoctoral research in solid state physics as a NATO science fellow at the Laboratoire de Spectrometrie Physique (renamed to Laboratoire Interdisciplinaire de Physique in 2011) at the Université Joseph Fourier in Grenoble, France under the direction of Hans Peter Trommsdorff and Robert Romenstain. He returned to University of Rochester in 1984 as an assistant professor of chemistry. He was promoted to associate professor in 1988 and then full professor of chemistry and optics in 1992. In 1995, he moved back to Canada and relocated his research group to the departments of chemistry and physics at the University of Toronto. In 2006, he was appointed as a University Professor and later as a Distinguished Faculty Research Chair. From 2010-2014, R. J. D. Miller was the director of the Max Planck Group, Centre for Free Electron Laser Science/DESY, University of Hamburg. From 2014-2020, he was the co-founding director of Max Planck Institute for the Structure and Dynamics of Matter (MPSD) in Hamburg, Germany. In 2023, he was inducted as a fellow of the Royal Society and has been a fellow of Royal Society of Canada and the Royal Society of Chemistry since 1999 and 2016, respectively. He is also a member of the Chemical Institute of Canada, Canadian Association of Physicists, American Physical Society, and Optical Society of America. Science outreach Beyond his scientific work, Miller is dedicated to the promotion of science education through outreach to school children. He founded and is a board member of Science Rendezvous, an annual science festival that aims to expose general public to science and technology. Bibliography Selected papers Duan HG, Jha A, Chen L, Tiwari V, Cogdell RJ, Ashraf K, Prokhorenko VI, Thorwart M, Miller RJD. Quantum coherent energy transport in the Fenna-Matthews-Olson complex at low temperature. PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA, 119 (49) :e2212630119, 2022. https://doi.org/10.1063/4.0000159 Zhipeng Huang; Meghanad Kayanattil; Stuart A. Hayes; R. J. Dwayne Miller. Picosecond infrared laser driven sample delivery for simultaneous liquid-phase and gas-phase electron diffraction studies. STRUCTURAL DYNAMICS, 9 :054301, 2022. https://doi.org/10.1063/4.0000159 Gourab Chatterjee, Ajay Jha, Alejandro Blanco-Gonzalez, Vandana Tiwari, Madushanka Manathunga, Hong-Guang Duan, Friedjof Tellkamp, Valentyn I. 
Prokhorenko, Nicolas Ferré, Jyotishman Dasgupta, Massimo Olivucci and R. J. Dwayne Miller. Torsionally broken symmetry assists infrared excitation of biomimetic charge-coupled nuclear motions in the electronic ground state. CHEMICAL SCIENCE, 13 :9392-9400, 2022. https://doi.org/10.1039/D2SC02133A Chiwon Lee, Alexander Marx, Günther H. Kassier & R. J. Dwayne Miller. Disentangling surface atomic motions from surface field effects in ultrafast low-energy electron diffraction. COMMUNICATIONS MATERIALS, 3 (10), 2022. https://doi.org/10.1038/s43246-022-00231-9 Zhang, M., Zhang, S., Xiong, Y., Zhang, H., Ischenko, A., Vendrell, O., Dong, X., Mu, X., Centurion, M., Xu, H., Miller, RJD., Li, Z.. Quantum state tomography of molecules by ultrafast diffraction. NATURE COMMUNICATIONS, 12 :5441, 2021. https://doi.org/10.1038/s41467-021-25770-6 P. Mehrabi, R. Bücker, G. Bourenkov, H.M. Ginn, D. von Stetten, H.M. Müller-Werkmeister, A. Kuo, T. Morizumi, B.T. Eger, W.-L. Ou, S. Oghbaey, A. Sarracini, J.E. Besaw, O. Paré-Labrosse, S. Meier, H. Schikora, F. Tellkamp, A. Marx, D.A. Sherrell, D. Ax. Serial femtosecond and serial synchrotron crystallography can yield data of equivalent quality: A systematic comparison. SCIENCE ADVANCES, 7 (12) :eabf1380, 2021. https://www.science.org/doi/10.1126/sciadv.abf1380 Michiel de Kock, Sana Azim, Gunther Kassier, and R. J. Dwayne Miller. Determining the radial distribution function of water using electron scattering: A key to solution phase chemistry. THE JOURNAL OF CHEMICAL PHYSICS, 153 :194504, 2020. https://doi.org/10.1063/5.0024127 Hong-Guang Duan, Ajay Jha, Xin Li, Vandana Tiwari, Hanyang Ye, Pabitra K. Nayak, Xiao-Lei Zhu, Zheng Li, Todd J. Martinez, Michael Thorwart, R. J. Dwayne Miller. Intermolecular vibrations mediate ultrafast singlet fission. SCIENCE ADVANCES, 6 (38) :eabb0052, 2020. https://advances.sciencemag.org/content/6/38/eabb0052 Cheng, S., Chatterjee, G., Tellkamp, F., Lang, T., Ruehl, A., Hartl, I., Miller, R.J.D.. Compact Ho:YLF-pumped ZnGeP2-based optical parametric amplifiers tunable in the molecular fingerprint regime. OPTICS LETTERS, 45 (8) :2255–2258, 2020. https://doi.org/10.1364/OL.389535 Krawczyk, K.M., Sarracini, A., Green, P.B., Hasham, M., Tang, K., Paré-Labrosse, O., Voznyy, O., Wilson, M.W.B., Miller, R.J.D.. Anisotropic, Nonthermal Lattice Disordering Observed in Photoexcited PbS Quantum Dots. THE JOURNAL OF PHYSICAL CHEMISTRY C, 125 (40) :22120-22132, 2021. https://doi.org/10.1021/acs.jpcc.1c07064 Books See also John Polanyi Arthur Nozik Ultrafast electron diffraction Time resolved crystallography Two-dimensional infrared spectroscopy Two-dimensional electronic spectroscopy Laser surgery References External links Miller group at the University of Toronto Miller group (former) at MPI for the Structure and Dynamics of Matter in Hamburg Science Rendezvous 1956 births 20th-century Canadian scientists 21st-century Canadian scientists Canadian physical chemists Fellows of the Royal Society of Canada Fellows of the Royal Society Living people People from Winnipeg Scientists from Manitoba Stanford University alumni Spectroscopists Sloan Research Fellows University of Manitoba alumni University of Rochester faculty Academic staff of the University of Toronto Chemical physicists Condensed matter physicists Max Planck Institute directors
R. J. Dwayne Miller
[ "Physics", "Chemistry", "Materials_science" ]
1,847
[ "Condensed matter physicists", "Spectrum (physical sciences)", "Physical chemists", "Analytical chemists", "Spectroscopists", "Condensed matter physics", "Chemical physicists", "Spectroscopy" ]
55,464,594
https://en.wikipedia.org/wiki/ABACABA%20pattern
The ABACABA pattern is a recursive fractal pattern that shows up in many places in the real world (such as in geometry, art, music, poetry, number systems, literature and higher dimensions). Patterns often show a DABACABA type subset. AA, ABBA, and ABAABA type forms are also considered. Generating the pattern In order to generate the next sequence, first take the previous pattern, add the next letter from the alphabet, and then repeat the previous pattern. The first few steps are A, ABA, ABACABA and ABACABADABACABA. ABACABA is a "quickly growing word", often described as chiastic or "symmetrically organized around a central axis" (see: Chiastic structure and Χ). The number of members in the nth iteration is 2^n − 1, the Mersenne numbers (1, 3, 7, 15, 31, ...). Gallery See also Arch form Farey sequence Rondo Sesquipower Notes References External links Naylor, Mike: abacaba.org Fractals
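The generation rule above is simple enough to express directly in code. The following minimal Python sketch (the function name and printout are illustrative, not from the source) builds the first few iterations and confirms that the word length at step n is 2^n − 1.

```python
def abacaba(n):
    """Return the nth ABACABA-type word, e.g. abacaba(3) == "ABACABA".

    The recursion mirrors the rule in the text: take the previous word,
    append the next letter of the alphabet, then repeat the previous word.
    """
    if n == 0:
        return ""
    previous = abacaba(n - 1)
    next_letter = chr(ord("A") + n - 1)  # A, B, C, D, ...
    return previous + next_letter + previous

if __name__ == "__main__":
    for n in range(1, 5):
        word = abacaba(n)
        print(n, len(word), word)  # lengths 1, 3, 7, 15 = 2**n - 1
```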
ABACABA pattern
[ "Mathematics" ]
196
[ "Mathematical analysis", "Functions and mappings", "Mathematical analysis stubs", "Mathematical objects", "Fractals", "Mathematical relations" ]
55,467,476
https://en.wikipedia.org/wiki/Superpedestrian
Superpedestrian Inc., is a transportation robotics company based in Cambridge, Massachusetts, that developed electrified and AI technologies for micro mobility vehicles. The company ran the LINK e-scooter sharing program, which was active in 57 cities across the US and Europe. In December 2023 Tech Crunch reported that the business would close by December 31, 2023, with all scooters recalled into warehousing. Superpedestrian was acquired in February 2024 by the Norwegian SURF Beyond group led by the two founders Julian Alexander Hahn and Mats André Breesth. History Superpedestrian's first product, The Copenhagen Wheel, was developed at MIT's Senseable City Lab in 2009 in partnership with the city of Copenhagen, and unveiled at the 2009 United Nations Climate Change Conference. In December 2012, Assaf Biderman, a co-inventor of the Wheel and associate director of the MIT Senseable City Lab, founded Superpedestrian to commercialize the Wheel. After several years of engineering, testing, and validation, the Copenhagen Wheel officially launched in April 2017. In December 2018, the Boston Business Journal reported the company would shift its focus to start building electric scooters and supplying the scooter sharing fleet operators such as Bird and Lime. In 2020, the company instead began operating shared scooter fleets itself in the US, before expanding to Italy and Spain. In 2021, the company opened sharing services in Austria, Sweden and Portugal. Products VIS The Vehicle Intelligent Safety system (VIS) is a network of sensors, micro-computers and AI that enable micromobility vehicles to self-detect problems and respond. Copenhagen Wheel The Copenhagen Wheel converted bicycles into e-bikes. It replaced the existing rear wheel. The Copenhagen Wheel contained a custom brushless motor, advanced sensors, control systems, and a lithium-ion battery, all contained within the hub of a single rear bicycle wheel. Combining actual torque, power, cadence, pedal position, and acceleration sensing with high-speed controllers and actuators, the Wheel generated power that seamlessly synchronized with a rider's pedalling. Bluetooth connectivity enabled riders to personalize their cycling experience from their smartphone. A self-diagnostic safety system (VIS) monitored components and proactively responded to events within milliseconds, protecting both rider and Wheel. The Copenhagen Wheel is no longer available for sale. The company no longer responds to emailed requests for assistance, and login to utilize the wheel via the app no longer functions, stranding purchasers with a non-functioning wheel. LINK LINK is a shared electric scooter designed, engineered, manufactured and operated by Superpedestrian. During prototype road-testing, it was called "the Volvo of e-scooters" for its robust build quality. The LINK scooter features an operating system that can be updated wirelessly, over-the-air. The second version of the operating system was released in March 2021 and is called "Briggs". On board the scooter, VIS runs 1,000 health checks every second during rides and monitors 140 safety-critical conditions. VIS also allows the scooter to enforce onboard geofence commands in 0.7s. LINK has a 61-mile battery range. According to Superpedestrian's VP (EMEA), Haya Verwoord Douidi, the LINK scooter cost $75 million in R&D. Scooter sharing In June 2020, Zagster was bought by Superpedestrian to create an in-house shared mobility division using LINK scooters. 
Superpedestrian beat Bird in a competitive tender process in September 2020 to offer e-scooters in Seattle, US. By April 2021, the company was operating shared e-scooters in 21 cities and announced plans to begin e-scooter sharing services in Ireland. In May 2021, LINK scooters were available in 30 cities in the US, Spain, Italy and Austria. Superpedestrian partnered with ACI in May 2021 to launch a safety course for e-scooter riders in Italy. In December 2021, Superpedestrian acquired the UK subsidiary of Wind Mobility, which operates the e-scooter trial zone in Nottingham. The company then replaced the scooter fleet with its proprietary model. Silicon Republic reported in February 2022 that Superpedestrian was operating shared mobility fleets in 57 cities in the US and Europe. Labor Superpedestrian states that it has never used gig workers in its history, in contrast to the early shared e-scooter sector. Paul Steely White, a long-time active travel advocate in New York City, is Public Affairs Director at Superpedestrian. Haya Verwoord Douidri left Bird to join Superpedestrian on 1 July 2020, heading up the company's scooter expansion in Europe. Funding In December 2020, the company secured $60 million in funding to scale-up e-scooter sharing fleets. Investors included Citi Impact Fund. In February 2022, the Boston Globe revealed that Superpedestrian had secured new investment of $125 million. Investors included the Sony Investment Fund. In December 2023 TechCrunch reported that the business would close by December 31, 2023, with all scooters recalled into warehousing. Awards European Product Design Awards – Platinum – Urban Sustainable Design 2017 European Product Design Awards – Gold – Bicycling and Bicycle Accessories 2017 European Product Design Awards – Gold – Design for Sustainability 2017 European Product Design Awards – Silver – Robotics 2017 CNBC – Top 25 Startup 2017 TIME Magazine - Best Tech of 2017 Red Dot: Design Concept Awards – Luminary 2014 Red Dot: Design Concept Awards – “Best of the Best” 2014 TIME Magazine – The 25 Best Inventions of 2014 Deutscher Werkbund Label – Werkbund Label for design 2014 Green Dot Awards – Winner 2011 Living Labs Global Awards – Shortlisted 2011 Index Award – Finalist 2011 Edison Awards – Silver in Personal Transportation – 2011 U.S. James Dyson Award – Winner – 2010 World Technology Summit and Awards – Winner – 2011 The Grand Challenge Stories Award of the US National Academy of Engineers – Winner – 2010 See also List of electric bicycle brands and manufacturers Outline of cycling References External links Electric cycle manufacturers Cycle manufacturers of the United States Manufacturing companies based in Massachusetts Companies based in Cambridge, Massachusetts Hybrid vehicles Robotics organizations Electric scooters Cycle types Bicycle History of cycling Micromobility Electric Road cycles Vehicles
Superpedestrian
[ "Physics" ]
1,323
[ "Physical systems", "Transport", "Vehicles" ]
50,098,306
https://en.wikipedia.org/wiki/Ethics%20of%20bioprinting
Ethics of bioprinting is a sub-field of ethics concerning bioprinting. Some of the ethical issues surrounding bioprinting include equal access to treatment, clinical safety complications, and the enhancement of the human body (Dodds 2015). 3D printing was invented by Charles Hull in the mid-1980s. 3D printing is an additive manufacturing process that uses a digital design to produce a physical copy. This process is carried out by a specialized printer, which builds the design up layer by layer. Bioprinting applies the methods of 3D printing to create organs, tissues, cells, blood vessels, prosthetics and a broad range of other items that can be used in the medical field. The ethics of bioprinting have been a topic of discussion for as long as bioprinting has been in use. Ethics are moral principles that govern production, behavior, etc. Equal access to treatment Bioprinting focuses on individual care rather than developing a universal treatment plan for all patients. Personalized medicine is expensive and increases the disparity between the rich and poor. Since 3D printing is an individual treatment, the general public assumes that it may prevent people with financial difficulties from receiving care. However, bioprinting improves universal access to healthcare because it will eventually "bring down the time and cost" of treatment. For example, prosthetic limbs and orthopedic surgery can be provided in an efficient and inexpensive manner. People would not have to wait months for their prosthetics, which will ultimately decrease the medical expense. The bioprinter may be used to manufacture bone replacements and produce customized prosthetic limbs quickly. Printed human organs and tissues could also become available in decreased time, taking only a few weeks to produce rather than the long wait for a conventional transplant. Currently in the United States, approximately 115,000 people are awaiting a transplant, which can take nearly two years to obtain, while nearly 2 million people have lost a limb. Those who were previously excluded from these medical advancements will now have access to them. Safety Any new treatment involving 3D printers is risky and patients must be well informed of the health implications. Doctors hope in the future to print organs in order to replace dysfunctional bio-structures. Similar to organ donations, the cells must match genetically; otherwise, the recipient's body will reject the organ. The patient would then have an autoimmune response and destroy the donated tissue. The individual's stem cells must be used to manufacture the organ for the specific patient. In order to advance this technology, the medical field must find a way to test and standardize organ production. Human enhancement Bioprinting may be used to increase human performance, strength, speed, or endurance. For instance, bioprinting may be used to manufacture enhanced bones, stronger and more flexible than the regular human bones they replace. The 3D printer could also be used to increase muscle performance by making muscles more "resilient and less likely to become fatigued". Lung capacity could also be improved by replacing a natural lung with an artificial one that increases oxygen efficiency in the blood. Human enhancement would have a dangerous but incredible impact on society; bioprinting could create a culture without disease or imperfection. Legality and policies Bioprinted items would require regulation to ensure safety and effectiveness. In the United States, this is the job of the FDA. 
The FDA must make sure that printed organs are handled somewhat differently than donated human organs because bioprinting is a new and developing treatment, and therefore little is known about its interactions with the human body. Bioprinting faces trade-offs between restricted use and open use. Restricted use would allow bioprinting to be done only by trained professionals, whereas open use is more of a free-for-all. There is also debate over whether it is ethical to mass-produce organs and whether doing so could worsen existing problems in organ transplantation. Selling bioprinted organs may be illegal under existing laws meant to stop the black-market trade of human organs. References Bioethics
Ethics of bioprinting
[ "Technology" ]
817
[ "Bioethics", "Ethics of science and technology" ]
50,101,235
https://en.wikipedia.org/wiki/C43H66O14
The molecular formula C43H66O14 (molar mass: 806.98 g/mol, exact mass: 806.4453 u) may refer to: Acetyldigitoxin Gymnemic acid Molecular formulas
C43H66O14
[ "Physics", "Chemistry" ]
65
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
50,105,159
https://en.wikipedia.org/wiki/Ryanodine-Inositol%201%2C4%2C5-triphosphate%20receptor%20calcium%20channels
The ryanodine-inositol 1,4,5-triphosphate receptor Ca2+ channel (RIR-CaC) family includes Ryanodine receptors and Inositol trisphosphate receptors. Members of this family are large proteins, some exceeding 5000 amino acyl residues in length. This family belongs to the Voltage-gated ion channel (VIC) superfamily. Ry receptors occur primarily in muscle cell sarcoplasmic reticular (SR) membranes, and IP3 receptors occur primarily in brain cell endoplasmic reticular (ER) membranes, where they effect release of Ca2+ into the cytoplasm upon activation (opening) of the channel. They are redox sensors, possibly providing a partial explanation for how they control cytoplasmic Ca2+. Ry receptors have been identified in heart mitochondria, where they provide the main pathway for Ca2+ entry. Sun et al. (2011) have demonstrated oxygen-coupled redox regulation of the skeletal muscle ryanodine receptor-Ca2+ release channel (RyR1; TC# 1.A.3.1.2) by NADPH oxidase 4. Function Ryanodine (Ry)-sensitive and inositol 1,4,5-triphosphate (IP3)-sensitive Ca2+-release channels function in the release of Ca2+ from intracellular storage sites in animal cells and thereby regulate various Ca2+-dependent physiological processes. The Ry receptors are activated as a result of the activity of dihydropyridine-sensitive Ca2+ channels. Ry receptors, IP3 receptors, and dihydropyridine-sensitive Ca2+ channels (TC# 1.A.1.11.2) are members of the voltage-sensitive ion channel (VIC) superfamily (TC# 1.A.1). Dihydropyridine-sensitive channels are present in the T-tubular systems of muscle tissues. Ry receptor 2 dysfunction leads to arrhythmias, altered myocyte contraction during the process of EC (excitation-contraction) coupling, and sudden cardiac death. Neomycin is a RyR blocker which serves as a pore plug and a competitive antagonist at a cytoplasmic Ca2+ binding site that causes allosteric inhibition. The generalized transport reaction catalyzed by members of the RIR-CaC family following channel activation is: Ca2+ (out, or sequestered in the ER or SR) → Ca2+ (cell cytoplasm). Structure Ry and IP3 receptors consist of (1) an N-terminal ligand binding domain, (2) a central modulatory domain and (3) a C-terminal channel-forming domain. The 3-D structure (2.2 Å) of the inositol 1,4,5-triphosphate-binding domain of an IP3 receptor has been solved. Structural and functional conservation of key domains in IP3 and ryanodine receptors has been reviewed by Seo et al. (2012). Members of the VIC (TC# 1.A.1), RIR-CaC (TC# 1.A.3) and TRP-CC (TC# 1.A.4) families have similar transmembrane domain structures, but very different cytosolic domain structures. The channel domains of the Ry and IP3 receptors comprise a coherent family that shows apparent structural similarities as well as sequence similarity with proteins of the VIC family (TC# 1.A.1). The Ry receptors and the IP3 receptors cluster separately on the RIR-CaC family tree. They both have homologues in Drosophila. Based on the phylogenetic tree for the family, the family probably evolved in the following sequence: A gene duplication event occurred that gave rise to Ry and IP3 receptors in invertebrates. Vertebrates evolved from invertebrates. The three isoforms of each receptor arose as a result of two distinct gene duplication events. These isoforms were transmitted to mammals before divergence of the mammalian species. 
They possess C-terminal domains with six putative transmembrane α-helical spanners (TMSs). Putative pore-forming sequences occur between the fifth and sixth TMSs as suggested for members of the VIC family. Recently an 8 TMS topology with four hairpin loops has been suggested. The large N-terminal hydrophilic domains and the small C-terminal hydrophilic domains are localized to the cytoplasm. Mammals possess at least three isoforms which probably arose by gene duplication and divergence before divergence of the mammalian species. Homologues are present in Drosophila melanogaster and Caenorabditis elegans. Tetrameric cardiac and skeletal muscle sarcoplasmic reticular ryanodine receptors (RyR) are large (~2.3 MDa). The complexes include signaling proteins such as 4 FKBP12 molecules, protein kinases, phosphatases, etc. They modulate the activity of and the binding of immunophilin to the channel. FKBP12 is required for normal gating as well as coupled gating between neighboring channels. PKA phosphorylation of RyR dissociates FKBP12 yielding increased Ca2+ sensitivity for activation, part of the excitation-contraction (fight or flight) response. IP3 receptors IP3 receptors resemble Ry receptors in many respects. They are homotetrameric complexes with each subunit exhibiting a molecular size of over 300,000 daltons (about 2,700 amino acyl residues). They possess C-terminal channel domains that are homologous to those of the Ry receptors. The channel domains possess six putative TMSs and a putative channel lining region between TMSs 5 and 6. Both the large N-terminal domains and the smaller C-terminal tails face the cytoplasm. They possess covalently linked carbohydrate on extracytoplasmic loops of the channel domains. They have three currently recognized isoforms (types 1, 2, and 3) in mammals which are subject to differential regulation and have different tissue distributions. They co-localize with Orai channels (TC# 1.A.52) in pancreatic acinar cells. IP3 receptors possess three domains: N-terminal IP3-binding domains, central coupling or regulatory domains and C-terminal channel domains. Channels are activated by IP3 binding, and like the Ry receptors, the activities of the IP3 receptor channels are regulated by phosphorylation of the regulatory domains, catalyzed by various protein kinases. They predominate in the endoplasmic reticular membranes of various cell types in the brain but have also been found in the plasma membranes of some nerve cells derived from a variety of tissues. Specific residues in the putative pore helix, selectivity filter and S6 transmembrane helix of the IP3 receptor, have been mutated in order to examine their effects on channel function. Mutation of 5 of 8 highly conserved residues in the pore helix/selectivity filter region inactivated the channel. Channel function was also inactivated by G2586P and F2592D mutations. These studies defined the pore-forming segment in IP3. See also IP3 receptor Ryanodine receptor Voltage-gated ion channel Ion channel Receptor (biochemistry) References Protein families Membrane proteins Transmembrane proteins Transmembrane transporters Transport proteins Integral membrane proteins
Ryanodine-Inositol 1,4,5-triphosphate receptor calcium channels
[ "Biology" ]
1,609
[ "Protein families", "Protein classification", "Membrane proteins" ]
50,107,126
https://en.wikipedia.org/wiki/XIPS-25
The XIPS-25, or 25-cm Xenon Ion Propulsion System, is a gridded ion thruster manufactured by L-3 Communications. XIPS-25 engine is used on Boeing 702 class satellites for station-keeping as well as orbit-raising. Specifications References Ion engines
XIPS-25
[ "Physics", "Chemistry" ]
61
[ "Ions", "Ion engines", "Matter" ]
50,110,311
https://en.wikipedia.org/wiki/Stefan%20tube
In chemical engineering, a Stefan tube is a device that was devised by Josef Stefan in 1874. It is often used for measuring diffusion coefficients. It comprises a vertical tube, over the top of which a gas flows and at the bottom of which is a pool of volatile liquid that is maintained in a constant-temperature bath. The liquid in the pool evaporates, diffuses through the gas above it in the tube, and is carried away by the gas flow over the tube mouth at the top. One then measures the fall in the level of the liquid in the tube. The tube conventionally has a narrow diameter, in order to suppress convection. The way that a Stefan tube is modelled, mathematically, is very similar to how one can model the diffusion of perfume fragrance molecules from (say) a drop of perfume on skin or clothes, evaporating up through the air to a person's nose. There are some differences between the models. However, they turn out to have little effect on results at highly dilute vapour concentrations. Analysis In the analysis of the system, various assumptions are made. The liquid, conventionally denoted A, is neither soluble in the gas in the tube, conventionally denoted B, nor reacts with it. The decrease in volume of the liquid A and increase in volume of the gas B over time can be ignored for the purposes of solving the equations that describe the behaviour, and an assumption can be made that the instantaneous flux at any time is the steady state value. There are no radial or circumferential components to the concentration gradients, resulting from convection or turbulence caused by excessively vigorous flow at the upper mouth of the tube, and the diffusion can thus be treated as a simple one-dimensional flow in the vertical direction. The mole fraction of A at the upper mouth of the tube is zero, as a consequence of the gas flow. At the interface between A and B the flux of B is zero (because it is insoluble in A) and the mole fraction is the equilibrium value. The flux of B, denoted NB, is thus zero throughout the tube, its diffusive flux downward (along its concentration gradient) is balanced by its convective flux upward caused by A. Applying these assumptions, the system can be modelled using Fick's laws of diffusion or as Maxwell–Stefan diffusion. References Cross-index Sources Further reading Chemical engineering Diffusion
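Under the assumptions listed above, the steady-state analysis collapses to a single flux expression. The LaTeX fragment below is a standard textbook form of that result, given here for illustration; the symbols c (total molar concentration), L (diffusion path length), y_A (mole fraction of A) and D_AB (diffusion coefficient) are introduced for this sketch and are not defined in the article itself.

```latex
% Fick's law for species A with the convective (drift) term retained, and N_B = 0:
\[
  N_A = -c\,D_{AB}\,\frac{\mathrm{d}y_A}{\mathrm{d}z} + y_A\,(N_A + N_B),
  \qquad N_B = 0 .
\]
% Separating variables and integrating from the liquid surface (z = 0, y_A = y_{A,0},
% the equilibrium value) to the tube mouth (z = L, y_A = y_{A,L} \approx 0) gives
\[
  N_A = \frac{c\,D_{AB}}{L}\,\ln\!\left(\frac{1 - y_{A,L}}{1 - y_{A,0}}\right).
\]
```

Relating N_A to the measured rate of fall of the liquid level (through the molar density of the liquid) is what allows the diffusion coefficient D_AB to be extracted from the experiment.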
Stefan tube
[ "Physics", "Chemistry", "Engineering" ]
493
[ "Transport phenomena", "Physical phenomena", "Diffusion", "Chemical engineering", "nan" ]
53,922,438
https://en.wikipedia.org/wiki/Quantum%20materials
Quantum materials is an umbrella term in condensed matter physics that encompasses all materials whose essential properties cannot be described in terms of semiclassical particles and low-level quantum mechanics. These are materials that present strong electronic correlations or some type of electronic order, such as superconducting or magnetic orders, or materials whose electronic properties are linked to non-generic quantum effects – topological insulators, Dirac electron systems such as graphene, as well as systems whose collective properties are governed by genuinely quantum behavior, such as ultra-cold atoms, cold excitons, polaritons, and so forth. On the microscopic level, four fundamental degrees of freedom – those of charge, spin, orbit and lattice – become intertwined, resulting in complex electronic states; the concept of emergence is a common thread in the study of quantum materials. Quantum materials exhibit puzzling properties with no counterpart in the macroscopic world: quantum entanglement, quantum fluctuations, robust boundary states dependent on the topology of the materials' bulk wave functions, etc. Quantum anomalies such as the chiral magnetic effect link some quantum materials with processes in the high-energy physics of quark-gluon plasmas. History In 2012, Joseph Orenstein published an article in Physics Today about "ultrafast spectroscopy of quantum materials". As a paradigmatic example, Orenstein refers to the breakdown of Landau Fermi liquid theory due to strong correlations. The use of the term "quantum materials" has been extended and applied to other systems, such as topological insulators and Dirac electron materials. The term has gained momentum since the article "The rise of quantum materials" was published in Nature Physics in 2016. References Condensed matter physics Materials science Quantum mechanics
Quantum materials
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
357
[ "Applied and interdisciplinary physics", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "nan", "Matter" ]
53,926,298
https://en.wikipedia.org/wiki/Antibody-vaccine%20engineered%20construct
Antibody-vaccine engineered construct, abbreviated AVEC, is an anti-cancer drug in clinical trials that enables the immune system to detect and naturally eliminate malignant cells. It is a biomolecularly engineered molecule consisting of two main components: (1) antibody; (2) vaccine. (1) The antibody component binds AVEC to the targeted molecule, e.g., to human epidermal growth factor receptor 2 (HER2) on breast cancer cells. (2) The vaccine component elicits an immune response directed against the cancer cells bound by the antibody component, e.g., the immune response to the HBV vaccine mounted through the prophylactic immunity gained by vaccination. It has the potential to be used to treat a variety of types of cancer, including breast cancer, ovarian cancer, colorectal cancer and leukemias. Mechanism of action The mechanism of action of AVEC relies upon the immunity acquired through vaccination or natural illness against microbes (e.g., viruses, bacteria, etc.) being redirected, amplified, and accelerated against the cancer cells. For example, a person vaccinated against hepatitis B virus (HBV) but suffering from HER2+ breast cancer receives AVEC consisting of the antibody against HER2 and the vaccine against HBV (AVEC: anti-HER2 - HBV). Since this person's immune system has already prepared a response to the HBV virus, it will instantly attack any cell tagged by the construct, in this case the breast cancer cells overexpressing HER2. As such, AVEC attracts the components of the immune response much as a lightning rod attracts thunderbolts during storms. Research A 2016 study compared AVEC's ability to induce apoptosis and necrosis in HER2+ breast cancer with that of the commonly used drug Trastuzumab. It found that the cells treated with AVEC showed a statistically significantly higher amount of both apoptosis and necrosis. The percentage of necrotic cells after treatment with AVEC was more than triple that after treatment with Trastuzumab. References Biomolecules
Antibody-vaccine engineered construct
[ "Chemistry", "Biology" ]
447
[ "Natural products", "Organic compounds", "Biomolecules", "Structural biology", "Biochemistry", "Molecular biology" ]
58,763,725
https://en.wikipedia.org/wiki/Olivooides
Olivooides is an extinct, sphere-shaped microfossil from Cambrian strata. Fossils are currently known only from China. Olivooides was approximately 600‐870 μm in diameter. It was an egg with a large yolk content. Fossils from Shaanxi, China can be found in the cleavage, gastrulation, organogenesis, cuticularization, pre‐hatching, post‐hatching and subsequent growth stages of development. This fossil is a result of soft-bodied preservation. Olivooides has pentaradial symmetry and is usually preserved by calcium phosphate endocast. The internal structure is rarely preserved. It has no larval stage, so it likely had a quick and direct development. Little is known about which organisms Olivooides is related to. It has similarities to priapulid worms in the embryonic stages. Pentaradial symmetry can be seen in parts of the priapulid worms as well. However, Olivooides has one orifice that is assumed to have been both a mouth and anus. The priapulid worm has a complete gut. Olivooides has also been compared to echinoderms based on its pentaradial symmetry, but this comparison is a bit far-fetched since echinoderms are not the only organisms to have pentaradial symmetry. It can also be seen in both priapulids and cnidarians. Olivooides does not have a calcite skeleton with a mesh structure, either. Olivooides is most likely affiliated with cnidarians. Both have "an annulated conical test, fine longitudinal sculpture and a bluntly tapering apex with radial folds." References Cubozoa Microfossils Prehistoric animals of China Cambrian animals of Asia Cambrian genus extinctions
Olivooides
[ "Chemistry" ]
364
[ "Microfossils", "Microscopy" ]
58,767,194
https://en.wikipedia.org/wiki/Electrochemical%20aptamer-based%20biosensors
Aptamers, single-stranded RNA and DNA sequences, bind to an analyte and change their conformation. They function as nucleic acids that selectively bind molecules such as proteins, bacterial cells, metal ions, etc. Aptamers can be developed with precise specificity to bind a desired target. Aptamers change conformation upon binding, altering electrochemical properties that can be measured. The Systematic Evolution of Ligands by Exponential Enrichment (SELEX) process generates aptamers. An electrochemical aptamer-based (E-AB) biosensor is a device that takes advantage of the electrochemical and biological properties of aptamers to take real-time, in vivo measurements. It generates an electrochemical signal in response to specific target binding in vivo. The signal is measured by a change in Faradaic current passed through an electrode. E-AB sensors are advantageous over previously reported aptamer-based sensors, such as fluorescence-generating aptamers, due to their ability to detect target binding in vivo with real-time measurements. An E-AB sensor is composed of a three-electrode cell: an interrogating (or working) electrode, a reference electrode, and a counter electrode. A signal is generated within the electrochemical cell, then measured and analyzed by a potentiostat. Several biochemical and electrochemical parameters optimize signal gain for E-AB biosensors. The packing density of DNA or RNA aptamers, the ACV frequency administered by the potentiostat, and the chemistry of the self-assembling monolayer (SAM) are all factors that determine signal gain as well as the signal-to-noise ratio of target binding. E-AB biosensors provide a promising mechanism for in-situ sensing, feedback-controlled drug administration, and detection of cancer biomarkers. Signal generation The DNA or RNA aptamers are fixed on the interrogating electrode, where a redox reaction is reported by a redox tag. Gold is often used as the probe surface for interrogating electrodes. The surface of the gold electrode is packed with redox-tagged DNA or RNA aptamers. The redox reporter is often methylene blue. Upon target binding, the aptamer changes structure by folding, bringing the redox reporter closer to the gold electrode. This increased proximity of the redox reporter to the electrode enables faster electron transfer from the redox tag to the gold electrode. The increase in speed of electron transfer contributes to a change in Faradaic current that is detected by the potentiostat. The reference electrode is the site of a known chemical reaction that has a known redox potential. For example, a reference electrode that harbors the silver-silver chloride (Ag/AgCl) reaction has a fixed redox potential and is the measuring point for the redox potential of the interrogating electrode. The counter electrode (or auxiliary electrode) acts as a cathode or anode to the interrogating electrode. The applied voltage is not passed through the reference electrode due to an impedance supplied by the potentiostat. Therefore, the potential generated within the electrochemical cell is attributed to the interrogating electrode. Current is measured as the potential of the interrogating electrode versus the fixed potential of the reference electrode. The difference in potential is what produces the current in the external circuit and generates a signal. The signal quantifies target binding because the electron transfer is stoichiometrically proportional to the amount of target bound. 
A four-electrode method has also been demonstrated in an electrochemical nanoporous alumina membrane sensor, where the aptamer was grafted onto the membrane and not onto the electrode. The binding of the aptamer with the target protein produces a change in the impedance of the membrane, which is picked up by the electrochemical sensor using an impedance spectroscopy analyzer. This approach could be beneficial in cases where the electric field of the electrode can change the aptamer structure or the biointerface, which may decrease the sensing ability. Signal optimization There are several parameters to consider for optimization of binding-induced electrochemical signal gain. The aptamer probe packing density, the nature of the self-assembling monolayer, and the ACV frequency are factors that affect detection and measurement of the signal. Two main factors are considered when setting the packing density on the probe surface: the concentration of aptamer and the surface chemistry of the self-assembling monolayer (SAM). Aptamer packing density The density of aptamer packing on the electrode surface is an important parameter for optimizing signal. Depending on the size and nature of the target molecule, different aptamer packing densities favor signal gain. Studies have shown that small target molecules enable a greater signal gain at low aptamer packing densities, while larger protein targets generate the greatest signal at intermediate probe packing densities. Signal gain decreases as packing density increases above the range of optimal signal gain, due to steric hindrance. When the probe surface neighboring an aptamer is blocked by an adjacent aptamer, the redox tag on the target-bound aptamer will not have room to come into contact with the electrode, and therefore fails to report target binding. The concentration of aptamer in the solution that incubates a clean probe is found to be proportional to the density of aptamers that are immobilized on the probe. Studies suggest that sensors for small targets, such as cocaine E-AB sensors, generate the most signal at the lowest probe packing densities. Conversely, larger protein targets such as thrombin generate the most signal at intermediate probe packing densities. SAM nature and surface chemistry Subsequently, the probe is incubated to form a SAM that renders the unoccupied probe surface unreactive to target or further aptamer binding. The optimal SAM is thick enough for the surface to be passivated against target binding and thin enough to transfer electrons from the redox reporter to the electrode. SAM thickness can be expressed as the carbon chain length of the monolayer molecules. It has been reported that cocaine E-AB sensors generate more signal when the SAM is thinner and therefore more conductive. However, reducing the SAM from 6 carbons to 2 carbons decreases signal, and peak current is generated using a 6-carbon SAM. ACV frequency The ACV frequency is used to monitor the Faradaic current, which quantifies target binding. Signal generation has been reported to be insensitive to ACV frequency as long as the frequency is in a sensible range, neither too low to be detected nor too fast. An alternating voltage is used instead of a single-directional current to protect the electrodes from degradation. Square wave voltammetry is applied to analyze the change in current as the voltage is swept across the electrode. 
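In practice, the signal gain discussed above is usually reported as the percent change in the voltammetric peak current relative to the no-target baseline. The following minimal Python sketch shows that calculation; the function name, currents, and concentrations are hypothetical values chosen for illustration and do not come from the article.

```python
def percent_signal_gain(i_baseline, i_target):
    """Percent change in peak current upon target binding.

    i_baseline -- peak current (amperes) with no target present
    i_target   -- peak current (amperes) at a given target concentration
    A positive value indicates a signal-on response, i.e. faster electron
    transfer from the redox tag after the binding-induced fold.
    """
    return 100.0 * (i_target - i_baseline) / i_baseline

if __name__ == "__main__":
    baseline = 1.20e-7                     # hypothetical no-target peak current
    titration = {"10 nM": 1.32e-7,         # hypothetical concentration -> peak current
                 "100 nM": 1.71e-7,
                 "1 uM": 2.05e-7}
    for concentration, current in titration.items():
        print(f"{concentration}: {percent_signal_gain(baseline, current):+.1f}% signal gain")
```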
Aptamer generation Design and fabrication of E-AB aptamers is consistent with methods used for previously reported aptamers. SELEX is a well-known method for the fabrication and selection of nucleotide aptamers, introduced by scientists in the 1990s. In SELEX, aptamers are chosen based on their in vitro ability to recognize specific targets. This method involves three key steps: First, single-stranded nucleic acids are bound to the target. Next, the bound nucleic acids are separated from unbound ones. Finally, polymerase chain reaction (PCR) amplifies the nucleic acids that have an affinity for the target, allowing for further screening or functional analysis. Following SELEX, high-throughput sequencing is used to identify sequences that have been enriched due to their target-binding abilities. SELEX is relatively limited by the amount of enrichment that can be achieved in a single round. A less-reported screening method for aptamer fabrication that overcomes this limitation is affinity-based library enrichment, termed Particle Display. Particle Display Particle Display produces higher yields of higher-affinity aptamers in fewer rounds than conventional selection methods. In this method, libraries of aptamers are converted into aptamer particles and sorted by fluorescence-activated cell sorting based on affinity. Only the highest-affinity aptamer particles are isolated and sequenced into aptamers. This is an affinity-based selection process that is more efficient than selection methods such as SELEX. Researchers tackled the challenge of isolating high-affinity aptamers in conventional SELEX by introducing the Particle Display System (PDS). Using parallel single-molecule emulsion polymerase chain reaction (PCR) for monoclonal aptamer screening, PDS employs emulsion PCR and droplet digital PCR to prevent by-product propagation and preserve rare high-affinity sequences. The one-particle-one-sequence nature of PDS transforms the DNA-target interaction into a particle-target interaction, enabling swift confirmation of aptamer candidate affinities through fluorescence-activated cell sorting or flow cytometry assays. Unlike conventional SELEX, PDS efficiently segregates aptamers, providing a streamlined and effective method for identifying and isolating high-affinity binders, and it significantly enhances the efficiency of enriching high-affinity aptamers, achieving this in a single round of screening. Particle Display may be a reliable aptamer generation method for E-AB sensors due to the high affinity and specificity of target binding. 
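As an illustration of the iterative bind-partition-amplify loop described above, here is a toy Python sketch of SELEX-style enrichment. Everything in it, including the affinity function, pool size, and keep fraction, is a made-up stand-in for illustration; it is not an implementation of SELEX or of Particle Display as actually practiced.

```python
import random

def selex_rounds(library, affinity, rounds=8, keep_fraction=0.05, pool_size=10_000):
    """Toy model of the selection loop: bind, partition, amplify.

    library  -- list of candidate sequences (strings)
    affinity -- function mapping a sequence to a binding score (higher = tighter)
    Each round keeps the top-scoring fraction of the pool (the "bound" sequences)
    and resamples them back up to pool_size, standing in for PCR amplification.
    """
    pool = list(library)
    for _ in range(rounds):
        pool.sort(key=affinity, reverse=True)                  # partition bound from unbound
        bound = pool[: max(1, int(len(pool) * keep_fraction))]
        pool = [random.choice(bound) for _ in range(pool_size)]  # "PCR" amplification
    return pool

if __name__ == "__main__":
    random.seed(0)
    library = ["".join(random.choice("ACGT") for _ in range(20)) for _ in range(10_000)]
    # Toy affinity model: GC-rich sequences "bind" the target better (purely illustrative).
    enriched = selex_rounds(library, affinity=lambda s: s.count("G") + s.count("C"))
    print("Most common surviving sequence:", max(set(enriched), key=enriched.count))
```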
Advantages Aptamers, referred to as "chemical antibodies," are used in therapeutics and biosensing due to their specific recognition and binding capabilities toward target molecules. They offer advantages over classical antibodies as they are significantly lighter, easily penetrate intracellular targets, can be synthetically produced, are non-immunogenic, and exhibit stability. Aptamers excel in discerning proteins, demonstrating precision in diagnostics and therapeutics, and have applications in laboratory assays and separations, particularly in biomolecule purification, chiral separation, and biochemical assays. The ability of aptamers to undergo conformational changes makes them ideal for developing quenching-based biosensors, showcasing flexibility that antibodies lack. Unlike antibodies, which are prone to cross-reactivity and batch variations, aptamers offer customizable selectivity and stability. This is particularly evident in biosensor applications targeting low-molecular-weight entities such as small molecules. Limitations In E-AB sensors, the difference in electrochemical response between the presence and absence of target can be small. The aptamer can be reengineered to undergo a large-scale conformational change. Long flexible loops or complementary strands can also force a change in the aptamer's conformation. These techniques to modify aptamers increase the signal ratio, but do not guarantee that it is sufficient to be measured. E-AB sensors are only as sensitive as the aptamer deployed. The selectivity of the aptamer can be a concern when there are similar compounds in the blood or other bodily fluids. Cross-reactivity causes interference in in-vivo monitoring and requires an understanding of how the aptamer reacts with similar compounds that may be in the sample. Promising applications E-AB biosensors may serve as the basis for controlled drug delivery: feedback-controlled drug delivery would allow continuous drug administration, with dosage levels based on integrating E-AB signal calculations into a drug-administering medical device. E-AB biosensors do not require reagents, are inexpensive compared to antibody detection methods, can be used in blood or other fluids with a high abundance of non-target molecules, and are reusable. These are all factors that make E-AB biosensors a promising method for feedback-controlled drug delivery based on such integrated, programmed calculations. 
Research Applications EAB sensors possess the potential to significantly advance our comprehension of metabolism, endocrinology, pharmacokinetics, and neurochemistry as valuable research tools. Specifically, these sensors offer improved resolution and more quantitative measurements of phenomena such as drug delivery, clearance, and the maintenance of metabolic homeostasis. Due to their capability for feedback control, E-AB sensors also present unprecedented opportunities to elucidate the correlation between, for instance, plasma drug levels and subsequent clinical or behavioral responses. The simultaneous measurements performed by E-AB sensors in multiple body locations can enhance our understanding of drug and metabolite transport within and between bodily compartments. Beyond in-body measurements, E-AB sensors could be beneficial for real-time monitoring in cell culture applications, ranging from small-scale (e.g., "organ on a chip") to industrial scale (e.g., monitoring industrial bioreactors). They have already demonstrated utility in applications such as monitoring ATP release in astrocytes and detecting serotonin in cell culture using glass nanopipettes. Clinical Applications E-AB sensors can be adapted into wearable devices that monitor health of patients in real time. E-AB sensors are capable of monitoring specific biomarkers that can aid in detection of diseases in early stages. For example, the measurement of C-reactive protein can aid in detection of heart attacks on a wearable device. E-AB sensors offer groundbreaking possibilities for monitoring molecules within the intricate in-vivo environment, with transformative applications in clinical settings. Envisioning the integration of the E-AB sensing platform into a wearable device, comparable to continuous glucose monitors, holds promise for real-time measurements of drugs and biomarkers reflective of health and disease. Notably, exploring E-AB sensors in the interstitial skin region shows potential in this regard. In instances where sepsis is suspected, the monitoring of infection biomarkers, such as C-reactive protein, stands out as a potentially life-saving approach, providing critical insights into disease prognosis and severity. Similarly, for individuals at high cardiac risk, the deployment of a convenient wearable device could facilitate early detection of heart attacks, considering the association of specific biomarkers like troponin with the onset of cardiac events. The exceptional capability of E-AB sensors to measure picomolar concentrations of specific proteins in real-time within complex sample matrices positions the platform as a well-suited tool for such clinical monitoring applications. Expanding beyond disease detection, E-AB sensors hold the promise of revolutionizing drug dosing practices, particularly in the realm of precision medicine. The prevalent approach to pharmaceutical dosing, grounded in assumptions about the average individual's drug absorption and response, falls short for drugs with narrow therapeutic windows relative to patient variability. Current dosing methodologies, relying on slow and infrequent blood draws or waiting for observable side effects, entail potential risks of underdosing or overdosing. E-AB sensors, with their capability to provide real-time insights into plasma drug levels, present an avenue for significantly enhancing the safety and efficacy of pharmacological treatments through improved therapeutic drug monitoring. References Biosensors Electrochemistry
Electrochemical aptamer-based biosensors
[ "Chemistry", "Biology" ]
3,426
[ "Electrochemistry", "Biosensors" ]
58,773,310
https://en.wikipedia.org/wiki/Olefin%20conversion%20technology
Olefin Conversion Technology, also called the Phillips Triolefin Process, is the industrial process that interconverts propylene with ethylene and 2-butenes. The process is also called the ethylene to propylene (ETP) process. In ETP, ethylene is dimerized to 1-butene, which is isomerized to 2-butenes. The 2-butenes are then subjected to metathesis with ethylene. Rhenium- and molybdenum-containing heterogeneous catalysts are used. Nowadays, only the "reverse" reaction is practiced, i.e., the conversion of ethylene and 2-butene to propylene: CH2=CH2 + CH3CH=CHCH3 → 2 CH2=CHCH3 The technology is founded on an olefin metathesis reaction discovered at Phillips Petroleum Company. The originally described process employed the catalysts molybdenum hexacarbonyl, tungsten hexacarbonyl, and molybdenum oxide supported on alumina. References Carbon-carbon bond forming reactions Industrial processes
Olefin conversion technology
[ "Chemistry" ]
238
[ "Carbon-carbon bond forming reactions", "Organic reactions" ]
45,703,004
https://en.wikipedia.org/wiki/Helium%20cryogenics
In the field of cryogenics, helium [He] is utilized for a variety of reasons. The combination of helium's extremely low molecular weight and weak interatomic interactions yields interesting properties when helium is cooled below its critical temperature of 5.2 K to form a liquid. Even at absolute zero (0 K), helium does not condense to form a solid under ambient pressure. In this state, the zero-point vibrational energies of helium are comparable to very weak interatomic binding interactions, thus preventing lattice formation and giving helium its fluid characteristics. Within this liquid state, helium has two phases referred to as helium I and helium II. Helium I displays thermodynamic and hydrodynamic properties of classical fluids, along with quantum characteristics. However, below its lambda point of 2.17 K, helium transitions to He II and becomes a quantum superfluid with zero viscosity. Under extreme conditions, such as when cooled below Tλ, helium has the ability to form a new state of matter, known as a Bose–Einstein condensate (BEC), in which the atoms virtually lose all their energy. Without energy to transfer between molecules, the atoms begin to aggregate, creating a volume of equivalent density and energy. From observations, liquid helium only exhibits superfluidity because it contains isolated islands of BECs, which have well-defined magnitude and phase, as well as well-defined phonon–roton (P-R) modes. A phonon refers to a quantum of energy associated with a compressional wave such as the vibration of a crystal lattice, while a roton refers to an elementary excitation in superfluid helium. In the BECs, the P-R modes have the same energy, which explains the role of the zero-point vibrational energies of helium in preventing lattice formation. When helium is below Tλ, the surface of the liquid becomes smoother, indicating the transition from liquid to superfluid. Experiments involving neutron bombardment correlate with the existence of BECs, thereby confirming the source of liquid helium's unique properties such as superfluidity and heat transfer. Though seemingly paradoxical, cryogenic helium systems can move heat from a volume of relatively low temperature to a volume of relatively high temperature. Though this phenomenon appears to violate the second law of thermodynamics, experiments have shown this to prevail in systems where the volume of low temperature is constantly heated, and the volume of high temperature is constantly cooled. It is believed this phenomenon is related to the heat associated with the phase change between liquid and gaseous helium. Applications Superconductors Liquid helium is used as a coolant for various superconducting applications. Notable are particle accelerators where magnets are used for steering charged particles. If large magnetic fields are required then superconducting magnets are used. In order for superconductors to be efficient, they must be kept below their respective critical temperature. This requires very efficient heat transfer. Because of the reasons discussed previously, superfluid helium can be used to effectively transfer heat away from superconductors. Quantum computing One proposed use for superfluid helium is in quantum computing. Quantum computers utilize the quantum states of matter, such as the electron spin, as individual quantum bits (qubits), a quantum analogue of the bit used in traditional computers to store information and perform processing tasks. 
The spin states of the electrons present on the surface of superfluid helium in a vacuum show promise as excellent qubits. In order to be considered a usable qubit, a closed system of individual quantum objects must be created that interact with each other, but whose interaction with the outside world is minimal. In addition, the quantum objects must be able to be manipulated by the computer, and the quantum system’s properties must be readable by the computer to signal the termination of a computational function. It is believed that in vacuum, superfluid helium satisfies many of these criteria since a closed system of its electrons can be read and easily manipulated by the computer in a similar fashion as electrostatically manipulated electrons in semiconductor heterostructures. Another beneficial aspect of the liquid helium quantum system is that application of an electrical potential to liquid helium in a vacuum can move qubits with little decoherence. In other words, voltage can manipulate qubits with little effect on the ordering of the phase angles in the wave functions between the components of the liquid helium quantum system. X-ray crystallography The advent of high-flux X-rays provides a useful tool for developing high-resolution structures of proteins. However, higher energy crystallography incurs radiation damage to the proteins studied. Cryogenic helium systems can be used with greater efficacy than nitrogen cryogenic systems to prevent radical damage to protein crystals. See also Dilution refrigerator References Cryogenics Helium Superfluidity
Helium cryogenics
[ "Physics", "Chemistry", "Materials_science" ]
996
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Phases of matter", "Cryogenics", "Superfluidity", "Condensed matter physics", "Exotic matter", "Matter", "Fluid dynamics" ]
45,704,063
https://en.wikipedia.org/wiki/Low-temperature%20polycrystalline%20silicon
Low-temperature polycrystalline silicon (LTPS) is polycrystalline silicon that has been synthesized at relatively low temperatures (~650 °C and lower), compared with traditional methods that require temperatures above 900 °C. LTPS is important for display industries, since the use of large glass panels prohibits exposure to deformative high temperatures. More specifically, the use of polycrystalline silicon in thin-film transistors (LTPS-TFT) has high potential for large-scale production of electronic devices like flat panel LCD displays or image sensors. Development of polycrystalline silicon Polycrystalline silicon (p-Si) is a pure and conductive form of the element composed of many crystallites, or grains of highly ordered crystal lattice. In 1984, studies showed that amorphous silicon (a-Si) is an excellent precursor for forming p-Si films with stable structures and low surface roughness. Silicon film is synthesized by low-pressure chemical vapor deposition (LPCVD) to minimize surface roughness. First, amorphous silicon is deposited at 560–640 °C. Then it is thermally annealed (recrystallized) at 950–1000 °C. Starting with the amorphous film, rather than directly depositing crystals, produces a product with a superior structure and a desired smoothness. In 1988, researchers discovered that further lowering the temperature during annealing, together with advanced plasma-enhanced chemical vapor deposition (PECVD), could facilitate even higher degrees of conductivity. These techniques have profoundly impacted the microelectronics, photovoltaic, and display enhancement industries. Use in liquid-crystal displays Amorphous silicon TFTs have been widely used in liquid-crystal display (LCD) flat panels because they can be assembled into complex high-current driver circuits. Amorphous Si-TFT electrodes drive the alignment of crystals in LCDs. The evolution of LTPS-TFTs can bring many benefits such as higher device resolution, lower synthesis temperature, and reduced price of essential substrates. However, LTPS-TFTs also have several drawbacks. For example, the area of TFTs in traditional a-Si devices is large, resulting in a small aperture ratio (the amount of area which is not blocked by the opaque TFT and thus admits light). The incompatibility of different aperture ratios prevents LTPS-based complex circuits and drivers from being integrated into a-Si material. Additionally, the quality of LTPS decreases over time due to an increase in temperature upon turning on the transistor, which degrades the film by breaking the Si-H bonds in the material. This can cause the device to suffer from drain breakdown and current leakage, most notably in small and thin transistors, which dissipate heat poorly. Processing by laser annealing XeCl Excimer-Laser Annealing (ELA) is the first key method to produce p-Si by melting a-Si material through laser irradiation. The counterpart of a-Si, polycrystalline silicon, which can be synthesized from amorphous silicon by certain procedures, has several advantages over widely used a-Si TFTs: high electron mobility; high resolution and aperture ratio; suitability for high integration of circuits. XeCl-ELA succeeds in crystallizing a-Si (thickness ranging from 500–10,000 Å) into p-Si without heating the substrates. The polycrystalline form has larger grains that yield better mobility for TFTs due to reduced scattering from grain boundaries. 
This technique leads to the successful integration of complicated circuits in LCD displays. Development of LTPS-TFT devices Apart from the improvement of the TFTs themselves, the successful application of LTPS to graphical displays also depends on innovative circuits. One recent technique involves a pixel circuit in which the outgoing current from the transistor is independent of the threshold voltage, thus producing uniform brightness. LTPS-TFT is commonly used to drive OLED displays because it has high resolution and accommodation for large panels. However, variations in LTPS structure would result in non-uniform threshold voltage for signals and non-uniform brightness using traditional circuits. The new pixel circuit includes four n-type TFTs, one p-type TFT, a capacitor, and a control element to control the image resolution. Enhancing the performance and microlithography for TFTs is important for advancing LTPS active-matrix OLEDs. These many important techniques have allowed the mobility of crystalline film to reach up to 13 cm2/Vs, and they have helped to mass-produce LEDs and LCDs over 500 ppi in resolution. LTPO Low-temperature polycrystalline oxide (LTPO) is a type of OLED display backplane technology developed by Apple that combines LTPS TFTs and oxide TFTs (indium gallium zinc oxide, or IGZO). In LTPO, the switching circuits use LTPS while the driving TFTs use IGZO materials. LTPO allows for more efficient use of power by dynamically adjusting the refresh rate of the screen based on the content being displayed. This means that the screen can operate at a low refresh rate when displaying static images or text, but can ramp up to a higher refresh rate when displaying dynamic content like videos or games. LTPO displays are known for their improved battery life and can be found in some smartphones, smartwatches, and other mobile devices. Although the core technology in LTPO is developed by Apple, Samsung also has its proprietary technology for LTPO AMOLED panels using a combination of LTPS TFTs and hybrid-oxide and polycrystalline silicon (HOP). See also Indium gallium zinc oxide Light-emitting diode Monocrystalline silicon Photovoltaics Wafer (electronics) References Electronics manufacturing Liquid crystal displays Silicon, Polycrystalline Crystals Silicon solar cells Allotropes of silicon
Low-temperature polycrystalline silicon
[ "Chemistry", "Materials_science", "Engineering" ]
1,262
[ "Allotropes", "Semiconductor materials", "Group IV semiconductors", "Allotropes of silicon", "Crystallography", "Crystals", "Electronic engineering", "Electronics manufacturing" ]
45,704,480
https://en.wikipedia.org/wiki/Industrial%20dye%20degradation
Industrial dye degradation is any of a number of processes by which dyes are broken down, ideally into innocuous products. Many dyes used in the textile industry, such as methylene blue or methyl red, are released into ecosystems through wastewater. Many of these dyes can be carcinogenic. In paper recycling, dyes can be removed from fibres during a deinking stage prior to degradation. Methods Heterogeneous photocatalysis is one approach to the degradation of dyes. As applied to dye-containing effluents from the textile industry, several approaches are standardized for removal or degradation of dyes. These include oxidation, e.g. using air or hydrogen peroxide, ozone, or Fenton chemistry. One challenge is that oxidants can be indiscriminate, such that large amounts of reagents can be required (see Chemical oxygen demand). One promising approach combines oxidation with photocatalysis. Reduction is also employed, a standard reagent being dithionite, which traditionally affords leuco dyes. Precipitation, often coupled with flocculation, is yet another approach, although it can produce substantial quantities of solids. References Dyes Pollution control technologies
Industrial dye degradation
[ "Chemistry", "Engineering" ]
252
[ "Pollution control technologies", "Environmental engineering" ]
45,710,371
https://en.wikipedia.org/wiki/Victor%20V%C3%A2lcovici
Victor Vâlcovici ( – 21 June 1970) was a Romanian mechanician and mathematician. Biography Born into a modest family in Galați, he graduated first in his class in 1904 from Nicolae Bălcescu High School in Brăila. Entering the University of Bucharest on a scholarship, he attended its faculty of sciences, where he had as teachers Spiru Haret and Gheorghe Țițeica. After graduating in 1907 with a degree in mathematics, he taught high school for two years before leaving for the University of Göttingen on another scholarship to pursue a doctorate in mathematics. He wrote his thesis under the direction of Ludwig Prandtl and defended it in 1913; the thesis, titled Ueber die diskontinuierliche Flussigkeitsbewegungen mit zwei freien Strahlen (On discontinuous fluid motions with two free jets), amplified upon the work of Bernhard Riemann. He was subsequently named assistant professor of mechanics at the University of Iași, rising to full professor in 1918. In 1921, he became rector of the Polytechnic School of Timișoara. There, he was also professor of rational mechanics and founded a laboratory dedicated to the field. During his nine years as rector, he worked to place the recently founded university on a solid foundation. From 1930 until retiring in 1962, he taught experimental mechanics at the University of Bucharest. In the government of Nicolae Iorga, he served as Minister of Public Works from 1931 to 1932. During this time, he introduced a modern road network that featured paved highways. In 1936 he gave an invited talk at the International Congress of Mathematicians in Oslo, titled Sur le sillage derrière un obstacle circulaire (On the wake behind a circular obstacle). Elected a corresponding member of the Romanian Academy in 1936, he was stripped of his membership by the new communist regime in 1948, but made a titular member of the Romanian Academy in 1965. His numerous articles on theoretical and applied mechanics covered topics such as the principles of variational mechanics, the mechanics of ideal fluid flow, the theory of elasticity, and astronomy. He died in 1970 in Bucharest, and was buried in the city's Bellu Cemetery. Streets have been named after Victor Vâlcovici in Brăila, Galați, and Timișoara; a school in Galați also bears his name. Books Notes References Willi Hager, Hydraulicians in Europe (1800–2000), vol. 2. CRC Press, Boca Raton, Florida, 2009. Eufrosina Otlăcan, "Victor Vâlcovici (1885–1970) – savant și desăvârșit pedagog", NOEMA, vol. VI, 2007, pp. 124–29 External links 1885 births 1970 deaths People from Galați University of Bucharest alumni Mechanical engineers 20th-century Romanian mathematicians Romanian schoolteachers Academic staff of the University of Bucharest Academic staff of Alexandru Ioan Cuza University Academic staff of the Politehnica University of Timișoara Rectors of Politehnica University of Timișoara Titular members of the Romanian Academy Ministers of justice of Romania Ministers of public works of Romania Ministers of communications of Romania Ministers of transport of Romania Members of the Romanian Academy of Sciences Aerodynamicists Burials at Bellu Cemetery Fluid dynamicists
Victor Vâlcovici
[ "Chemistry", "Engineering" ]
674
[ "Mechanical engineers", "Fluid dynamicists", "Mechanical engineering", "Fluid dynamics" ]
43,834,204
https://en.wikipedia.org/wiki/K-space%20%28functional%20analysis%29
In mathematics, more specifically in functional analysis, a K-space is an F-space X such that every extension of F-spaces (or twisted sum) of the form 0 → ℝ → Y → X → 0 is equivalent to the trivial one 0 → ℝ → ℝ × X → X → 0, where ℝ is the real line. Examples The spaces for are K-spaces, as are all finite dimensional Banach spaces. N. J. Kalton and N. P. Roberts proved that the Banach space is not a K-space. See also References Functional analysis F-spaces Topological vector spaces
K-space (functional analysis)
[ "Mathematics" ]
102
[ "Functions and mappings", "Functional analysis", "Vector spaces", "Mathematical objects", "Topological vector spaces", "Space (mathematics)", "Mathematical relations" ]
43,834,296
https://en.wikipedia.org/wiki/Extension%20of%20a%20topological%20group
In mathematics, more specifically in topological groups, an extension of topological groups, or a topological extension, is a short exact sequence 0 → H → X → G → 0 (with maps i : H → X and π : X → G) where H, X and G are topological groups and i and π are continuous homomorphisms which are also open onto their images. Every extension of topological groups is therefore a group extension. Classification of extensions of topological groups We say that the topological extensions 0 → H → X → G → 0 and 0 → H → X' → G → 0 are equivalent (or congruent) if there exists a topological isomorphism T : X → X' making commutative the diagram of Figure 1. We say that the topological extension 0 → H → X → G → 0 is a split extension (or splits) if it is equivalent to the trivial extension 0 → H → H × G → G → 0 where H → H × G is the natural inclusion over the first factor and H × G → G is the natural projection over the second factor. It is easy to prove that the topological extension splits if and only if there is a continuous homomorphism R : X → H such that R ∘ i is the identity map on H. Note that the topological extension splits if and only if the subgroup i(H) is a topological direct summand of X. Examples Take ℝ the real numbers and ℤ the integer numbers. Take i the natural inclusion and π the natural projection. Then 0 → ℤ → ℝ → ℝ/ℤ → 0 is an extension of topological abelian groups. Indeed it is an example of a non-splitting extension. Extensions of locally compact abelian groups (LCA) An extension of topological abelian groups will be a short exact sequence 0 → H → X → G → 0 where H, X and G are locally compact abelian groups and i and π are relatively open continuous homomorphisms. Let 0 → H → X → G → 0 be an extension of locally compact abelian groups. Take H^, X^ and G^ the Pontryagin duals of H, X and G and take i^ and π^ the dual maps of i and π. Then the sequence 0 → G^ → X^ → H^ → 0 is an extension of locally compact abelian groups. Extensions of topological abelian groups by the unit circle A very special kind of topological extensions are the ones of the form 0 → T → X → G → 0 where T is the unit circle and X and G are topological abelian groups. The class S(T) A topological abelian group G belongs to the class S(T) if and only if every topological extension of the form 0 → T → X → G → 0 splits. Every locally compact abelian group belongs to S(T). In other words, every topological extension 0 → T → X → G → 0 where G is a locally compact abelian group, splits. Every locally precompact abelian group belongs to S(T). The Banach space (and in particular topological abelian group) does not belong to S(T). References Topological groups Topology
Extension of a topological group
[ "Physics", "Mathematics" ]
431
[ "Space (mathematics)", "Topological spaces", "Topology", "Space", "Geometry", "Topological groups", "Spacetime" ]
48,704,598
https://en.wikipedia.org/wiki/Tianjin%20animal%20cloning%20center
The Tianjin animal cloning center was planned in 2015 and "to be put into use in the first half of 2016" in the Tianjin Economic-Technological Development Area of Tianjin, China, but as of 2022, no opening has been reported. Development The factory was announced to be developed by Sinica, a subsidiary of the Chinese company Boyalife, along with the Institute of Molecular Medicine at Peking University, the Tianjin International Joint Academy of Biomedicine, and the Sooam Bioengineering Research Institute in South Korea. Facility and operations The 14,000-square-metre facility would have hosted a laboratory, a cloning center, a gene bank, and educational exhibits for the public. The consortium planned to spend 200 million RMB (US$31 million) to produce 100,000 cloned cattle per year for China's rapidly growing beef market, and then expand to one million cattle per year (China planned to buy one million head of cattle from Australia in 2016 at a cost of US$2 billion). In addition to cows, the factory had planned to clone many different types of animals, including dogs, horses, and endangered and extinct animals. See also Beijing Genomics Institute (BGI) Stem cell laws and policy in China References Meat industry Intensive farming Ethically disputed business practices towards animals Cloning
Tianjin animal cloning center
[ "Chemistry", "Engineering", "Biology" ]
269
[ "Cloning", "Eutrophication", "Intensive farming", "Genetic engineering" ]
48,707,385
https://en.wikipedia.org/wiki/Proton%20tunneling
Proton tunneling is a type of quantum tunneling involving the instantaneous disappearance of a proton in one site and the appearance of the same proton at an adjacent site separated by a potential barrier. The two available sites are bounded by a double well potential whose shape, width and height are determined by a set of boundary conditions. According to the WKB approximation, the probability for a particle to tunnel falls off exponentially with the width of the potential barrier and with the square root of the particle's mass. Electron tunneling is well known. A proton is about 2000 times more massive than an electron, so it has a much lower probability of tunneling; nevertheless, proton tunneling still occurs, especially at low temperatures and high pressures where the width of the potential barrier is decreased. Proton tunneling is usually associated with hydrogen bonds. In many molecules that contain hydrogen, the hydrogen atoms are linked to two non-hydrogen atoms via a hydrogen bond at one end and a covalent bond at the other. A hydrogen atom without its electron is reduced to being a proton. Since the electron is no longer bound to the hydrogen atom in a hydrogen bond, this is equivalent to a proton resting in one of the wells of a double well potential as described above. When proton tunneling occurs, the hydrogen bond and covalent bonds are switched. Once proton tunneling occurs, the same proton has the same probability of tunneling back to its original site provided the double well potential is symmetrical. The base pairs of a DNA strand are connected by hydrogen bonds. In essence, the genetic code is contained by a unique arrangement of hydrogen bonds. It is believed that upon the replication of a DNA strand there is a probability for proton tunneling to occur which changes the hydrogen bond configuration; this leads to a slight alteration of the hereditary code which is the basis of mutations. Likewise, proton tunneling is also believed to be responsible for the occurrence of the dysfunction of cells (tumors and cancer) and ageing. Proton tunneling occurs in many hydrogen-based molecular crystals such as ice. It is believed that the phase transition between the hexagonal (ice Ih) and orthorhombic (ice XI) phases of ice is enabled by proton tunneling. The occurrence of correlated proton tunneling in clusters of ice has also been reported recently. See also Quantum tunneling Hydrogen bond References Quantum mechanics Solid state engineering
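The mass dependence described above can be illustrated with a leading-order WKB estimate for a rectangular barrier. The barrier height (0.5 eV), particle energy (0.1 eV), and width (1 Å) below are arbitrary illustrative values, not figures from the source.

```python
import math

HBAR = 1.054571817e-34          # J*s
EV = 1.602176634e-19            # J per eV
M_ELECTRON = 9.1093837015e-31   # kg
M_PROTON = 1.67262192369e-27    # kg

def wkb_transmission(mass, barrier_ev, energy_ev, width_m):
    """Leading-order WKB transmission T ~ exp(-2*kappa*L) through a rectangular barrier."""
    v, e = barrier_ev * EV, energy_ev * EV
    kappa = math.sqrt(2.0 * mass * (v - e)) / HBAR   # decay constant inside the barrier
    return math.exp(-2.0 * kappa * width_m)

for name, m in [("electron", M_ELECTRON), ("proton", M_PROTON)]:
    print(name, wkb_transmission(m, barrier_ev=0.5, energy_ev=0.1, width_m=1e-10))
```

For these parameters the electron tunnels with a probability of order one half, while the proton's probability drops by roughly twelve orders of magnitude, which is why proton tunneling only becomes prominent when the barrier is narrowed, for example at high pressure.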
Proton tunneling
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
474
[ "Theoretical physics", "Quantum mechanics", "Electronic engineering", "Condensed matter physics", "Solid state engineering" ]
48,709,684
https://en.wikipedia.org/wiki/Comammox
Comammox (COMplete AMMonia OXidation) is the name attributed to an organism that can convert ammonia into nitrite and then into nitrate through the process of nitrification. Nitrification has traditionally been thought to be a two-step process, where ammonia-oxidizing bacteria and archaea oxidize ammonia to nitrite and then nitrite-oxidizing bacteria convert the nitrite to nitrate. Complete conversion of ammonia into nitrate by a single microorganism was first predicted in 2006. In 2015 the presence of microorganisms that could carry out both conversion processes was discovered within the genus Nitrospira, and the nitrogen cycle was updated. Comammox organisms of the genus Nitrospira are primarily found in natural aquifers and engineered ecosystems. Complete nitrification yields more energy (∆G°′ = −349 kJ mol−1 NH3) than either single oxidation step alone (∆G°′ = −275 kJ mol−1 NH3 for ammonia oxidation to nitrite and ∆G°′ = −74 kJ mol−1 NO2− for nitrite oxidation to nitrate). Comammox Nitrospira bacteria Complete nitrification of ammonia to nitrate is thus energetically advantageous for Nitrospira. Based on previous research on Nitrospira, it was thought that all Nitrospira use nitrite as their energy source; as a result, comammox Nitrospira were not discovered until 2015. All comammox organisms discovered so far belong to sublineage II of the genus Nitrospira. The genome of the nitrifying chemolithoautotrophic bacterium from the genus Nitrospira encodes the machinery for both ammonia and nitrite oxidation. The genes associated with growth by ammonia oxidation to nitrate are the ammonia monooxygenase and hydroxylamine dehydrogenase genes (e.g. the amoA gene and the hao cluster). This shows that completely nitrifying Nitrospira serve as cornerstones of the nitrogen-cycling microbial communities found in the environment. Nearly two years after the discovery of comammox organisms, Nitrospira inopinata was the first complete nitrifier to be isolated in pure culture. Kinetic and physiological analysis of Nitrospira inopinata demonstrated that this complete nitrifier has a high affinity for ammonia, a slow growth rate, a low maximum rate of ammonia oxidation, and a high yield. The discovery of comammox Nitrospira provides a view into the modular evolution of the nitrogen cycle and expands upon the complexity of the evolutionary history of nitrification. Ecosystem of comammox Comammox have been identified in many ecosystems, including natural freshwater and terrestrial ecosystems. Notably, comammox genes were not found to be abundant in the oceans. Additionally, comammox organisms in engineered ecosystems could be exploited for ammonium removal during water and wastewater treatment. Comammox have been found in many engineered systems including aquaculture biofiltration units, drinking water treatment and distribution systems, and wastewater treatment plants. The growth of comammox in these engineered ecosystems co-occurs with ammonia-oxidizing bacteria and/or archaea, and in some cases comammox outnumber other ammonia-oxidizing prokaryotes. The biogeography of comammox, including their distribution and abundance, is still poorly characterized, partly because of the influences of process configuration and the chemical composition of the treated wastewater. 
Following these findings, it was determined that comammox may out-select canonical nitrite oxidizing bacteria in the genus Nitrospira in some engineered environments, suggesting the potentially important role for comammox in efficient biological nitrogen removal in wastewater treatment processes. See also Anammox References Nitrogen cycle
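The free-energy figures quoted earlier in this article can be checked for internal consistency: the value for complete nitrification should equal the sum of the two partial oxidations. A few lines of Python, using only the numbers given above, confirm this.

```python
# Standard free energies from the article, in kJ per mol of substrate
dg_ammonia_to_nitrite = -275.0   # NH3 -> NO2-
dg_nitrite_to_nitrate = -74.0    # NO2- -> NO3-
dg_complete = -349.0             # NH3 -> NO3- (comammox)

two_step_total = dg_ammonia_to_nitrite + dg_nitrite_to_nitrate
print(two_step_total == dg_complete)  # True: -275 + -74 = -349 kJ/mol NH3
```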
Comammox
[ "Chemistry" ]
786
[ "Nitrogen cycle", "Metabolism" ]
48,712,402
https://en.wikipedia.org/wiki/Parthanatos
Parthanatos (derived from the Greek Θάνατος, "Death") is a form of programmed cell death that is distinct from other cell death processes such as necrosis and apoptosis. While necrosis is caused by acute cell injury resulting in traumatic cell death and apoptosis is a highly controlled process signalled by apoptotic intracellular signals, parthanatos is caused by the accumulation of Poly(ADP ribose) (PAR) and the nuclear translocation of apoptosis-inducing factor (AIF) from mitochondria. Parthanatos is also known as PARP-1 dependent cell death. PARP-1 mediates parthanatos when it is over-activated in response to extreme genomic stress and synthesizes PAR which causes nuclear translocation of AIF. Parthanatos is involved in diseases that afflict hundreds of millions of people worldwide. Well known diseases involving parthanatos include Parkinson's disease, stroke, heart attack, and diabetes. It also has potential use as a treatment for ameliorating disease and various medical conditions such as diabetes and obesity. History Name The term parthanatos was not coined until a review in 2009. The word parthanatos is derived from Thanatos, the personification of death in Greek mythology. Discovery Parthanatos was first discovered in a 2006 paper by Yu et al. studying the increased production of mitochondrial reactive oxygen species (ROS) by hyperglycemia. This phenomenon is linked with negative effects arising from clinical complications of diabetes and obesity. Researchers noticed that high glucose concentrations led to overproduction of reactive oxygen species and rapid fragmentation of mitochondria. Inhibition of mitochondrial pyruvate uptake blocked the increase of ROS, but did not prevent mitochondrial fragmentation. After incubating cells with the non-metabolizable stereoisomer L-glucose, neither reactive oxygen species increase nor mitochondrial fragmentation were observed. Ultimately, the researchers found that mitochondrial fragmentation mediated by the fission process is a necessary component for high glucose-induced respiration increase and ROS overproduction. Extended exposure to high glucose conditions are similar to untreated diabetic conditions, and so the effects mirror each other. In this condition, the exposure creates a periodic and prolonged increase in ROS production along with mitochondrial morphology change. If mitochondrial fission was inhibited, the periodic fluctuation of ROS production in a high glucose environment was prevented. This research shows that when cell damage to the ROS is too great, PARP-1 will initiate cell death. Morphology Structure of PARP-1 Poly(ADP-ribose) polymerase-1 (PARP-1) is a nuclear enzyme that is found universally in all eukaryotes and is encoded by the PARP-1 gene. It belongs to the PARP family, which is a group of catalysts that transfer ADP-ribose units from NAD (nicotinamide dinucleotide) to protein targets, thus creating branched or linear polymers. The major domains of PARP-1 impart the ability to fulfill its functions. These protein sections include the DNA-binding domain on the N-terminus (allows PARP-1 to detect DNA breaks), the automodification domain (has a BRCA1 C terminus motif which is key for protein-protein interactions), and a catalytic site with the NAD+-fold (characteristic of mono-ADP ribosylating toxins). Role of PARP-1 Normally, PARP-1 is involved in a variety of functions that are important for cell homeostasis such as mitosis. 
Another of these roles is DNA repair, including the repair of base lesions and single-strand breaks. PARP-1 interacts with a wide variety of substrates including histones, DNA helicases, high mobility group proteins, topoisomerases I and II, single-strand break repair factors, base-excision repair factors, and several transcription factors. Role of PAR PARP-1 accomplishes many of its roles through regulating poly(ADP-ribose) (PAR). PAR is a polymer that varies in length and can be either linear or branched. It is negatively charged which allows it to alter the function of the proteins it binds to either covalently or non-covalently. PAR binding affinity is strongest for branched polymers, weaker for long linear polymers and weakest for short linear polymers. PAR also binds selectively with differing strengths to the different histones. It is suspected that PARP-1 modulates processes (such as DNA repair, DNA transcription, and mitosis) through the binding of PAR to its target proteins. Pathway The parthanatos pathway is activated by DNA damage caused by genotoxic stress or excitotoxicity. This damage is recognized by the PARP-1 enzyme which causes an upregulation in PAR. PAR causes translocation of apoptosis-inducing factor (AIF) from the mitochondria to the nucleus where it induces DNA fragmentation and ultimately cell death. This general pathway has been outlined now for almost a decade. While considerable success has been made in understanding the molecular events in parthanatos, efforts are still ongoing to completely identify all of the major players within the pathway, as well how spatial and temporal relationships between mediators affect them. Pathway activation Extreme damage of DNA causing breaks and changes in chromatin structure have been shown to induce the parthanatos pathway. Stimuli that causes the DNA damage can come from a variety of different sources. Methylnitronitrosoguanidine, an alkylating agent, has been widely used in several studies to induce the parthanatos pathway. A noted number of other stimuli or toxic conditions have also been used to cause DNA damage such as H2O2, NO, and ONOO− generation (oxygenglucose deprivation). The magnitude, length of exposure, type of cell used, and purity of the culture, are all factors that can influence the activation of the pathway. The damage must be extreme enough for the chromatin structure to be altered. This change in structure is recognized by the N-terminal zinc-finger domain on the PARP-1 protein. The protein can recognize both single and double strand DNA breaks. Cell death initiation Once the PARP-1 protein recognizes the DNA damage, it catalyzes post-transcriptional modification of PAR. PAR will be formed either as a branched or linear molecule. Branching and long-chain polymers will be more toxic to the cell than simple short polymers. The more extreme the DNA damage, the more PAR accumulates in the nucleus. Once enough PAR has accumulated, it will translocate from the nucleus into the cytosol. One study has suggested that PAR can translocate as a free polymer, however translocation of a protein-conjugated PAR cannot be ruled out and is in fact a topic of active research. PAR moves through the cytosol and enters the mitochondria through depolarization. Within the mitochondria, PAR binds directly to the AIF which has a PAR polymer binding site, causing the AIF to dissociate from the mitochondria. AIF is then translocated to the nucleus where it induces chromatin condensation and large scale (50Kb) DNA fragmentation. 
How AIF induces these effects is still unknown. It is thought that a currently unidentified AIF-associated nuclease (PAAN) may be involved. Human AIF has a DNA-binding site, which would suggest that AIF binds directly to DNA in the nucleus and directly causes these changes. However, as mouse AIF lacks this binding domain yet mouse cells are still able to undergo parthanatos, it is evident that there must be another mechanism involved. PARG PAR, which is responsible for the activation of AIF, is regulated in the cell by the enzyme poly(ADP-ribose) glycohydrolase (PARG). After PAR is synthesized by PARP-1, it is degraded through a process catalyzed by PARG. PARG has been found to protect against PAR-mediated cell death, while its deletion increases toxicity through the accumulation of PAR. Other proposed mechanisms Before the discovery of the PAR and AIF pathway, it was thought that the overactivation of PARP-1 led to overconsumption of NAD+. As a result of NAD+ depletion, a decrease in ATP production would occur, and the resulting loss of energy would kill the cell. However, it is now known that this loss of energy would not be enough to account for cell death. In cells lacking PARG, activation of PARP-1 leads to cell death in the presence of ample NAD+. Differences between cell death pathways Parthanatos is considered a cell death pathway distinct from apoptosis for a few key reasons. Primarily, apoptosis is dependent on the caspase pathway activated by cytochrome c release, while the parthanatos pathway is able to act independently of caspases. Furthermore, unlike apoptosis, parthanatos causes large-scale DNA fragmentation (apoptosis only produces small-scale fragmentation) and does not form apoptotic bodies. While parthanatos does share similarities with necrosis, it also has several differences. Necrosis is not a regulated pathway and does not undergo any controlled nuclear fragmentation. While parthanatos does involve loss of cell membrane integrity like necrosis, it is not accompanied by cell swelling. Comparison of cell death types Pathology and treatment Neurotoxicity The PAR enzyme was originally connected to neural degradation pathways in 1993. Elevated levels of nitric oxide (NO) have been shown to cause neurotoxicity in samples of rat hippocampal neurons. A deeper look into the effects of NO on neurons showed that nitric oxides cause damage to DNA strands; the damage in turn elicits PAR enzyme activity that leads to further degradation and neuronal death. PAR blockers halted the cell death mechanisms in the presence of elevated NO levels. PARP activity has also been linked to the neurodegenerative properties of toxin-induced parkinsonism. 1-Methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) is a neurotoxin that has been linked to neurodegeneration and the development of Parkinson disease-like symptoms in patients since 1983. The MPTP toxin's effects were discovered when four people were intravenously injecting the toxin, which they had inadvertently produced when trying to street-synthesise the merpyridine (MPPP) drug. The link between MPTP and PARP was found later when research showed that the MPTP effects on neurons were reduced in mutated cells lacking the PARP gene. The same research also showed highly increased PARP activation in dopamine-producing cells in the presence of MPTP. Alpha-synuclein is a protein that binds to DNA and modulates DNA repair. A key feature of Parkinson's disease is the pathologic accumulation and aggregation of alpha-synuclein. 
In the neurons of individuals with Parkinson's disease, alpha-synuclein is deposited as fibrils in intracytoplasmic structures referred to as Lewy bodies. Formation of pathologic alpha-synuclein is associated with activation of PARP1, increased poly(ADP) ribose generation and further acceleration of pathologic alpha-synuclein formation. This process can lead to cell death by parthanatos. Multisystem involvement Parthanatos, as a cell death pathway, is being increasingly linked to several syndromes connected with specific tissue damage outside of the nervous system. This is highlighted in the mechanism of streptozotocin (STZ) induced diabetes. STZ is a chemical that is naturally produced by the human body. However, in high doses, STZ has been shown to produce diabetic symptoms by damaging pancreatic β cells, which are insulin-producing. The degradation of β cells by STZ was linked to PARP in 1980 when studies showed that a PAR synthesis inhibitor reduced STZ's effects on insulin synthesis. Inhibition of PARP causes pancreatic tissue to sustain insulin synthesis levels, and reduce β cell degradation even with elevated STZ toxin levels. PARP activation has also been preliminarily connected with arthritis, colitis, and liver toxicity. Therapy The multi-step nature of the parthanatos pathway allows for chemical manipulation of its activation and inhibition for use in therapy. This rapidly developing field seems to be currently focused on the use of PARP blockers as treatments for chronically degenerative illnesses. This culminated in 3rd generation inhibitors such as midazoquinolinone and isoquinolindione currently going to clinical trials. Another path for treatments is to recruit the parthanatos pathway to induce apoptosis into cancer cells, however no treatments have passed the theoretical stage. See also Apoptosis inducing factor Programmed cell death PARP1 References Cellular processes Programmed cell death Medical aspects of death
Parthanatos
[ "Chemistry", "Biology" ]
2,701
[ "Senescence", "Programmed cell death", "Cellular processes", "Signal transduction" ]
48,713,014
https://en.wikipedia.org/wiki/Transcription%20%28journal%29
Transcription is a scientific journal published by Taylor & Francis focusing on the subject of the transcription of DNA. Its stated aim is to publish "high-quality articles that provide novel insights, provocative questions, and new hypotheses into the expanding field of gene transcription". External links References Molecular and cellular biology journals Taylor & Francis academic journals 5 times per year journals
Transcription (journal)
[ "Chemistry" ]
74
[ "Molecular and cellular biology journals", "Molecular biology" ]
34,220,486
https://en.wikipedia.org/wiki/IEC%2062325
IEC 62325 is a set of standards related to deregulated energy market communications, based on the Common Information Model. IEC 62325 is a part of the International Electrotechnical Commission's (IEC) Technical Committee 57 (TC57) reference architecture for electric power systems, and is the responsibility of Working Group 16 (WG16). Standard documents IEC 62325 consists of the following parts, detailed in separate IEC 62325 standard documents: IEC 62325-301: Common information model (CIM) extensions for markets IEC 62325-351: CIM European market model exchange profile IEC 62325-450: Profile and context modelling rules IEC 62325-451-1: Acknowledgement business process and contextual model for CIM European market IEC 62325-451-2: Scheduling business process and contextual model for CIM European market IEC 62325-451-3: Transmission capacity allocation business process and contextual models for European market IEC 62325-451-4: Settlement and reconciliation business process, contextual and assembly models for European market IEC 62325-451-5: Problem statement and status request business processes, contextual and assembly models for European market IEC 62325-451-6: Publication of information on market, contextual and assembly models for European style market IEC 62325-452: North American style market profiles IEC 62325-502: Profile of ebXML IEC 62325-503: Market data exchanges guidelines for the IEC 62325-351 profile IEC 62325-504: Utilization of web services for electronic data interchanges on the European energy market for electricity IEC 62325-550-2: Common dynamic data structures for North American style markets IEC 62325-552-1: Dynamic data structures for day ahead markets (DAM) See also IEC TC 57 IEC 61968 IEC 61970 References External links IEC Website for IEC 62325 standards Electronic Data Interchange (EDI) Library IEC 62325-504 open source implementation. 62325 Electric power Smart grid
IEC 62325
[ "Physics", "Technology", "Engineering" ]
436
[ "Physical quantities", "Computer standards", "IEC standards", "Power (physics)", "Electric power", "Electrical engineering" ]
39,648,645
https://en.wikipedia.org/wiki/Industrial%20dryer
Industrial dryers are used to efficiently process large quantities of bulk materials that need reduced moisture levels. Depending on the amount and the makeup of material needing to be dried, industrial dryers come in many different models constructed specifically for the type and quantity of material to be processed. The most common types of industrial dryers are fluidized bed dryers, rotary dryers, rolling bed dryers, conduction dryers, convection dryers, pharmaceutical dryers, suspension/paste dryers, toroidal bed or TORBED dryers and dispersion dryers. Various factors are considered in determining the correct type of dryer for any given application, including the material to be dried, drying process requirements, production requirements, final product quality requirements and available facility space. See also Fluidized bed Rotary dryer Rolling bed dryer Toroidal Bed or TORBED dryer References Industrial equipment Dryers
Industrial dryer
[ "Chemistry", "Engineering" ]
179
[ "Dryers", "Chemical equipment", "nan" ]
39,649,787
https://en.wikipedia.org/wiki/Labor%20burden
Labor burden is the actual cost of a company to have an employee, aside from the salary the employee earns. Labor burden costs include benefits that a company must, or chooses to, pay for employees included on their payroll. These costs include but are not limited to payroll taxes, pension costs, health insurance, dental insurance, and any other benefits that a company provides an employee. Fully-burdened costs for individual employees can be expressed as a yearly total to provide an estimate of how much the company will spend that year on an employee. It can also be expressed as an hourly cost by dividing the total yearly cost by the number of hours the employee will work. See also Direct labor cost Overhead (business) Wage References Construction
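The hourly conversion described above is a straightforward division, shown in the short sketch below; the dollar amounts and the 2,080 annual hours are hypothetical examples, not figures from the source.

```python
def burdened_hourly_cost(salary, burden_costs, hours_worked):
    """Total yearly cost of an employee (salary plus burden) expressed per hour worked."""
    return (salary + sum(burden_costs.values())) / hours_worked

# Hypothetical figures for illustration only
burden = {"payroll_taxes": 6000, "pension": 4000, "health_insurance": 9000, "dental": 800}
rate = burdened_hourly_cost(salary=50000, burden_costs=burden, hours_worked=2080)
print(f"Fully burdened hourly cost: ${rate:.2f}")  # 69800 / 2080 is about $33.56
```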
Labor burden
[ "Engineering" ]
147
[ "Construction" ]
39,654,121
https://en.wikipedia.org/wiki/DNA-directed%20RNA%20interference
DNA-directed RNA interference (ddRNAi) is a gene-silencing technique that utilizes DNA constructs to activate a cell's endogenous RNA interference (RNAi) pathways. DNA constructs are designed to express self-complementary double-stranded RNAs, typically short-hairpin RNAs (shRNA), that bring about the silencing of a target gene or genes once processed. Any RNA, including endogenous messenger RNA (mRNAs) or viral RNAs, can be silenced by designing constructs to express double-stranded RNA complementary to the desired mRNA target. This mechanism has recently been demonstrated to work therapeutically to silence disease-causing genes across a range of disease models, including viral diseases such as HIV, hepatitis B or hepatitis C, and diseases associated with altered expression of endogenous genes such as drug-resistant lung cancer, neuropathic pain, advanced cancer, and retinitis pigmentosa. ddRNAi mechanism Unlike small interfering RNAs (siRNA) that turn over within a cell and silence genes transiently, DNA constructs are continually transcribed, replenishing the cellular supply of siRNAs. This allows for the long-term silencing of targeted genes, which has the potential for ongoing clinical benefit with reduced medical intervention. Organization of ddRNAi constructs Figure 1 illustrates the most common type of ddRNAi DNA construct, which is designed to express a shRNA. The construct consists of a promoter sequence driving the expression of sense and antisense sequences separated by a loop sequence, followed by a transcriptional terminator. The antisense sequence processed from the shRNA can bind to the target RNA and specify its degradation. shRNA constructs typically encode sense and antisense sequences of 20–30 nucleotides. Flexibility in construct design is possible; for example, the positions of sense and antisense sequences can be reversed, and other modifications and additions can alter intracellular shRNA processing. Moreover, a variety of promoter, loop and terminator sequences can be used. A variant of this is the multi-cassette (Figure 2b). Designed to express two or more shRNAs, it can target multiple sequences for degradation simultaneously, which can be beneficial in circumstances such as when targeting viruses. Natural sequence variations can render a single shRNA-target site unrecognizable, preventing RNA degradation. Multi-cassette constructs that target multiple sites within the same viral RNA circumvent this issue. Delivery Delivery of ddRNAi DNA constructs is a major challenge for RNAi-based therapy. There are a number of clinically approved gene therapy vectors developed for therapeutic use. Two broad strategies to facilitate the delivery of DNA constructs to the desired cells are available: these use either viral vectors or one of several classes of transfection reagents. In vivo delivery of ddRNAi constructs has been demonstrated using a range of vectors and reagents with different routes of administration (ROA). ddRNAi constructs have also been successfully delivered into host cells ex vivo, and then transplanted back into the host. For example, in a phase I clinical trial at the City of Hope National Medical Center, California, four HIV-positive patients with non-Hodgkin's lymphoma were successfully treated with autologous hematopoietic progenitor cells pre-transduced ex vivo with ddRNAi constructs using lentiviral vectors. 
This construct was designed to express three therapeutic RNAs, one of which was a shRNA, thereby combating HIV replication in three different ways: shRNA, silencing the tat and rev genes of the HIV genome; CCR5 ribozyme, inhibiting viral cell entry; TAR decoy RNA, inhibiting initiation of viral transcription. Ongoing expression of the shRNA has been confirmed in T cells, monocytes, and B cells more than one year after transplantation. Therapeutic applications Neuropathic pain Nervana is an investigational V construct that knocks down the expression of protein kinase C gamma (PKCγ) known to be associated with neuropathic pain and morphine tolerance. Two conserved PKCγ sequences across all key model species and humans have been identified, and both single and double DNA cassettes are designed. In vitro, the expression of PKCγ was silenced by 80%. When similar ddRNAi constructs were delivered intrathecally using a lentiviral vector, pain relief in a neuropathic-rat model was demonstrated. Drug-resistant non-small-cell lung cancer The development of resistance to chemotherapies such as paclitaxel and cisplatin in non-small-cell lung cancer (NSCLC) is strongly associated with overexpression of beta III tubulin. Investigations by the Children's Cancer Institute Australia (University of New South Wales, Lowy Cancer Research Centre) demonstrated that beta III-tubulin knockdown by ddRNAi delayed tumor growth and increased chemosensitivity in mouse models. Tributarna is a triple DNA cassette expressing three shRNA molecules each of which separately targets beta III tubulin and strongly inhibits its expression. Studies in an orthotopic-mouse model, where the construct is delivered by a modified polyethylenimine vector, jetPEI, that targets lung tissue are in progress. Hepatitis B viral infection The hepatitis B virus (HBV) genome encodes its own DNA polymerase for replication. Biomics Biotechnologies has evaluated around 5000 siRNA sequences of this gene for effective knockdown; five sequences were chosen for further investigation and were shown to have silencing activity when converted into shRNA expression cassettes. A multi-cassette construct, Hepbarna, is under preclinical development for delivery by an adeno-associated virus 8 (AAV-8) liver-targeting vector. Oculopharyngeal muscular dystrophy Classified as an orphan disease, there is currently no therapy for oculopharyngeal muscular dystrophy (OPMD), as it is caused by a mutation in the poly(A) binding protein nuclear 1 (PABPN1) gene. Silencing the mutant gene using ddRNAi offers a potential therapeutic approach. HIV/AIDS Besides the ex vivo approach discussed above, the Center for Infection and Immunity Amsterdam (CINIMA) of the University of Amsterdam, Netherlands, is extensively researching the therapeutic potential of multi-cassette DNA constructs for HIV. Concerns As with all gene therapies, a number of safety and toxicity issues must be evaluated during the development of ddRNAi therapeutic techniques. Oncogene activation by viral insertion Some gene therapy vectors integrate into the host genome, thereby acting as insertional mutagens. This was a particular issue with early retroviral vectors, where insertions adjacent to oncogenes resulted in the development of lymphoid tumors. Adeno-associated virus (AAV) vectors are considered low-risk for host-genome integration, as AAV infection has not been associated with the induction of cancers in humans despite widespread prevalence across the general population. 
Moreover, extensive clinical use of AAV vectors has provided no evidence of carcinogenicity. While lentiviral vectors do integrate into the genome, they do not appear to show a propensity to activate oncogene expression. Immune response to gene therapy vectors An immunological response to an adenoviral vector resulted in the death of a patient in an early human trial. Careful monitoring of potential toxicities in preclinical testing and analysis of any pre-existing antibodies to gene therapy vectors in patients minimizes such risks. Innate immune response siRNAs have been shown to activate immune responses through interaction with toll-like receptors (TLRs), leading to interferon responses. These TLRs reside on the cell's outer surface, so ddRNAi constructs, which are delivered directly into the intracellular space, are not expected to induce such a response. Toxic effects due to over-expression of shRNAs High-level expression of shRNAs has been shown to be toxic. Strategies to minimize levels of shRNA expression or promote precise processing of shRNAs can mitigate this issue. Off-target effects The unintended silencing of genes that share sequence homology with expressed shRNAs could theoretically occur. Careful selection of shRNA sequences and thorough preclinical testing of constructs can mitigate this risk. Further reading References RNA DNA RNA interference Gene expression Molecular genetics Medical genetics Genetic engineering Stem cells Biotechnology products
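As a concrete illustration of the construct organization described earlier in this article (sense, loop, antisense, terminator), the sketch below assembles a shRNA-encoding insert from a target sequence. The specific loop and terminator sequences and the example 21-nt target are illustrative assumptions, not sequences taken from the source.

```python
def reverse_complement(seq):
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

def shrna_insert(target_sense, loop="TTCAAGAGA", terminator="TTTTTT"):
    """Assemble sense-loop-antisense-terminator for a shRNA expression cassette."""
    return target_sense + loop + reverse_complement(target_sense) + terminator

# Hypothetical 21-nt target site, for illustration only
print(shrna_insert("GCTGACCCTGAAGTTCATCTG"))
```

When transcribed, the sense and antisense segments base-pair to form the hairpin stem, with the loop at the turn, mirroring the layout described in the text.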
DNA-directed RNA interference
[ "Chemistry", "Engineering", "Biology" ]
1,772
[ "Biological engineering", "Biotechnology products", "Gene expression", "Genetic engineering", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
39,654,522
https://en.wikipedia.org/wiki/Tensor%20operator
In pure and applied mathematics, quantum mechanics and computer graphics, a tensor operator generalizes the notion of operators which are scalars and vectors. A special class of these are spherical tensor operators which apply the notion of the spherical basis and spherical harmonics. The spherical basis closely relates to the description of angular momentum in quantum mechanics and spherical harmonic functions. The coordinate-free generalization of a tensor operator is known as a representation operator. The general notion of scalar, vector, and tensor operators In quantum mechanics, physical observables that are scalars, vectors, and tensors, must be represented by scalar, vector, and tensor operators, respectively. Whether something is a scalar, vector, or tensor depends on how it is viewed by two observers whose coordinate frames are related to each other by a rotation. Alternatively, one may ask how, for a single observer, a physical quantity transforms if the state of the system is rotated. Consider, for example, a system consisting of a molecule of mass , traveling with a definite center of mass momentum, , in the direction. If we rotate the system by about the axis, the momentum will change to , which is in the direction. The center-of-mass kinetic energy of the molecule will, however, be unchanged at . The kinetic energy is a scalar and the momentum is a vector, and these two quantities must be represented by a scalar and a vector operator, respectively. By the latter in particular, we mean an operator whose expected values in the initial and the rotated states are and . The kinetic energy on the other hand must be represented by a scalar operator, whose expected value must be the same in the initial and the rotated states. In the same way, tensor quantities must be represented by tensor operators. An example of a tensor quantity (of rank two) is the electrical quadrupole moment of the above molecule. Likewise, the octupole and hexadecapole moments would be tensors of rank three and four, respectively. Other examples of scalar operators are the total energy operator (more commonly called the Hamiltonian), the potential energy, and the dipole-dipole interaction energy of two atoms. Examples of vector operators are the momentum, the position, the orbital angular momentum, , and the spin angular momentum, . (Fine print: Angular momentum is a vector as far as rotations are concerned, but unlike position or momentum it does not change sign under space inversion, and when one wishes to provide this information, it is said to be a pseudovector.) Scalar, vector and tensor operators can also be formed by products of operators. For example, the scalar product of the two vector operators, and , is a scalar operator, which figures prominently in discussions of the spin–orbit interaction. Similarly, the quadrupole moment tensor of our example molecule has the nine components Here, the indices and can independently take on the values 1, 2, and 3 (or , , and ) corresponding to the three Cartesian axes, the index runs over all particles (electrons and nuclei) in the molecule, is the charge on particle , and is the -th component of the position of this particle. Each term in the sum is a tensor operator. In particular, the nine products together form a second rank tensor, formed by taking the outer product of the vector operator with itself. 
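The claim that the nine products of two vector components form a rank-two tensor can be checked numerically: rotating the two vectors and then taking the outer product gives the same result as conjugating the original outer product by the rotation matrix. The snippet below is an illustrative numerical check, not part of the source.

```python
import numpy as np

def rotation_z(theta):
    """3x3 matrix for a rotation by theta about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

rng = np.random.default_rng(0)
a, b = rng.normal(size=3), rng.normal(size=3)   # two arbitrary vectors
R = rotation_z(0.7)

T = np.outer(a, b)                       # components T_ij = a_i * b_j
T_from_rotated = np.outer(R @ a, R @ b)  # outer product of the rotated vectors
print(np.allclose(T_from_rotated, R @ T @ R.T))  # True: T_ij transforms as a rank-2 tensor
```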
Rotations of quantum states Quantum rotation operator The rotation operator about the unit vector n (defining the axis of rotation) through angle θ is where are the rotation generators (also the angular momentum matrices): and let be a rotation matrix. According to the Rodrigues' rotation formula, the rotation operator then amounts to An operator is invariant under a unitary transformation U if in this case for the rotation , Angular momentum eigenkets The orthonormal basis set for total angular momentum is , where j is the total angular momentum quantum number and m is the magnetic angular momentum quantum number, which takes values −j, −j + 1, ..., j − 1, j. A general state within the j subspace rotates to a new state by: Using the completeness condition: we have Introducing the Wigner D matrix elements: gives the matrix multiplication: For one basis ket: For the case of orbital angular momentum, the eigenstates of the orbital angular momentum operator L and solutions of Laplace's equation on a 3d sphere are spherical harmonics: where Pm is an associated Legendre polynomial, is the orbital angular momentum quantum number, and m is the orbital magnetic quantum number which takes the values −, − + 1, ... − 1, The formalism of spherical harmonics have wide applications in applied mathematics, and are closely related to the formalism of spherical tensors, as shown below. Spherical harmonics are functions of the polar and azimuthal angles, ϕ and θ respectively, which can be conveniently collected into a unit vector n(θ, ϕ) pointing in the direction of those angles, in the Cartesian basis it is: So a spherical harmonic can also be written . Spherical harmonic states rotate according to the inverse rotation matrix , while rotates by the initial rotation matrix . Rotation of tensor operators We define the Rotation of an operator by requiring that the expectation value of the original operator with respect to the initial state be equal to the expectation value of the rotated operator with respect to the rotated state, Now as, we have, since, is arbitrary, Scalar operators A scalar operator is invariant under rotations: This is equivalent to saying a scalar operator commutes with the rotation generators: Examples of scalar operators include the energy operator: potential energy V (in the case of a central potential only) kinetic energy T: the spin–orbit coupling: Vector operators Vector operators (as well as pseudovector operators) are a set of 3 operators that can be rotated according to: Any observable vector quantity of a quantum mechanical system should be invariant of the choice of frame of reference. The transformation of expectation value vector which applies for any wavefunction, ensures the above equality. In Dirac notation:where the RHS is due to the rotation transformation acting on the vector formed by expectation values. Since is any quantum state, the same result follows:Note that here, the term "vector" is used two different ways: kets such as are elements of abstract Hilbert spaces, while the vector operator is defined as a quantity whose components transform in a certain way under rotations. From the above relation for infinitesimal rotations and the Baker Hausdorff lemma, by equating coefficients of order , one can derive the commutation relation with the rotation generator: where εijk is the Levi-Civita symbol, which all vector operators must satisfy, by construction. 
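In the standard notation assumed here, with rotation operator U(R) = exp(−iθ n·J/ħ) and generators J_i, the rotation rule for a vector operator and the commutation relation just described read:

```latex
U(R)^{\dagger}\, V_i\, U(R) \;=\; \sum_{j} R_{ij}\, V_j,
\qquad
[J_i,\, V_j] \;=\; i\hbar\, \varepsilon_{ijk}\, V_k .
```

The sign conventions depend on whether U or its adjoint acts on the left; the forms above are one common choice, consistent with the infinitesimal expansion described in the text.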
The above commutator rule can also be used as an alternative definition for vector operators, as can be shown by using the Baker–Hausdorff lemma. As the symbol εijk is a pseudotensor, pseudovector operators are invariant up to a sign: +1 for proper rotations and −1 for improper rotations. Since operators can be shown to form a vector operator by their commutation relation with the angular momentum components (which are generators of rotation), examples include: the position operator: the momentum operator: and pseudovector operators include the orbital angular momentum operator: as well as the spin operator S, and hence the total angular momentum Scalar operators from vector operators If and are two vector operators, the dot product between the two vector operators can be defined as: Under rotation of coordinates, the newly defined operator transforms as: Rearranging terms and using the property that the transpose of a rotation matrix is its inverse: where the RHS is the operator originally defined. Since the dot product defined is invariant under rotation transformation, it is said to be a scalar operator. Spherical vector operators A vector operator in the spherical basis is where the components are: and the various commutators with the rotation generators and ladder operators are: which are of a form similar to In the spherical basis, the generators of rotation are: From the transformation of operators and the Baker–Hausdorff lemma: compared to it can be argued that, for transformations of operators as compared with those of states, the commutator with the operator replaces the action of the operator on the state: The rotation transformation in the spherical basis (originally written in the Cartesian basis) is then, due to the similarity of commutation and operator action shown above: One can generalize the vector operator concept easily to tensorial operators, shown next. Tensor operators In general, a tensor operator is one that transforms according to a tensor: where the basis vectors are transformed by or the vector components transform by . In the subsequent discussion surrounding tensor operators, the index notation regarding covariant/contravariant behavior is ignored entirely. Instead, contravariance of components is implied by context. Hence, for an n-times contravariant tensor: Examples of tensor operators The quadrupole moment operator, Components of two vector operators can be multiplied to give another tensor operator. In general, the product of n tensor operators will also give another tensor operator, or Note: In general, a tensor operator cannot be written as the tensor product of other tensor operators as given in the above example. Tensor operator from vector operators If and are two three-dimensional vector operators, then a rank-2 Cartesian dyadic tensor can be formed from the nine operators of the form , Rearranging terms, we get: The RHS of the equation is the change-of-basis equation for twice contravariant tensors, where the basis vectors are transformed by or the vector components transform by , which matches the transformation of vector operator components. Hence the operator tensor described forms a rank-2 tensor; in tensor representation, Similarly, an n-times contravariant tensor operator can be formed from n vector operators. We observe that the subspace spanned by linear combinations of the rank-two tensor components forms an invariant subspace, i.e. the subspace does not change under rotation, since each transformed component is itself a linear combination of the tensor components. However, this subspace is not irreducible, i.e. 
it can be further divided into invariant subspaces under rotation. A subspace that can be so divided is called reducible. In other words, there exist specific sets of different linear combinations of the components such that they transform into linear combinations of the same set under rotation. In the above example, we will show that the 9 independent tensor components can be divided into sets of 1, 3 and 5 combinations of operators that each form irreducible invariant subspaces. Irreducible tensor operators The subspace spanned by can be divided into two subspaces: three independent antisymmetric components and six independent symmetric components , defined as and . Using the transformation under rotation formula, it can be shown that both and are each transformed into a linear combination of members of their own sets. Although is irreducible, the same cannot be said about . The set of six independent symmetric components can be divided into five independent traceless symmetric components, while the invariant trace forms its own subspace. Hence, the invariant subspaces of are formed respectively by: One invariant trace of the tensor, Three linearly independent antisymmetric components from: Five linearly independent traceless symmetric components from If , the invariant subspaces of formed are represented by: One invariant scalar operator Three linearly independent components from Five linearly independent components from From the above examples, the nine components are split into subspaces formed by one, three and five components. These numbers add up to the number of components of the original tensor, in a manner similar to the dimensions of vector subspaces adding up to the dimension of the space that is their direct sum. Similarly, every element of can be expressed in terms of a linear combination of components from its invariant subspaces: or where: In general, Cartesian tensors of rank greater than 1 are reducible. In quantum mechanics, this particular example bears a resemblance to the addition of two spin-one particles: each space is 3-dimensional, hence the total 9-dimensional space can be formed by spin 0, spin 1 and spin 2 systems with 1-dimensional, 3-dimensional and 5-dimensional spaces respectively. These three terms are irreducible, which means they cannot be decomposed further and still be tensors satisfying the defining transformation laws under which they must be invariant. Each of the irreducible representations T(0), T(1), T(2) ... transforms like angular momentum eigenstates according to the number of independent components. It is possible that a given tensor may have one or more of these components vanish. For example, the quadrupole moment tensor is already symmetric and traceless, and hence has only 5 independent components to begin with. Spherical tensor operators Spherical tensor operators are generally defined as operators with the following transformation rule, under rotation of the coordinate system: The commutation relations can be found by expanding the LHS and RHS as: Simplifying and keeping only first-order terms, we get: For choices of or , we get: Note the similarity of the above to: Since and are linear combinations of , they share the same similarity due to linearity. If only the commutation relations hold, then, using the following relation, we find, due to the similarity between the action of on the wavefunction and the commutation relations on , that: where the exponential form is given by the Baker–Hausdorff lemma. 
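In the standard notation assumed here, the decomposition of a rank-2 Cartesian tensor into its invariant parts, and the commutation relations that characterise a spherical tensor operator T^{(k)}_q of rank k (the relations obtained above for the choices J_z or J_±), can be written as:

```latex
T_{ij} \;=\; \underbrace{\tfrac{1}{3}\,\delta_{ij}\,T_{kk}}_{\text{trace (1)}}
\;+\; \underbrace{\tfrac{1}{2}\left(T_{ij}-T_{ji}\right)}_{\text{antisymmetric (3)}}
\;+\; \underbrace{\tfrac{1}{2}\left(T_{ij}+T_{ji}\right)-\tfrac{1}{3}\,\delta_{ij}\,T_{kk}}_{\text{traceless symmetric (5)}},
```

```latex
[J_z,\, T^{(k)}_q] \;=\; \hbar\, q\, T^{(k)}_q,
\qquad
[J_{\pm},\, T^{(k)}_q] \;=\; \hbar\,\sqrt{k(k+1)-q(q\pm 1)}\; T^{(k)}_{q\pm 1}.
```

The counts 1, 3 and 5 in the decomposition match the invariant subspaces described above.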
Hence, the above commutation relations and the transformation property are equivalent definitions of spherical tensor operators. It can also be shown that transform like a vector due to their commutation relation. In the following section, the construction of spherical tensors will be discussed. For example, since an example of spherical vector operators has already been given, it can be used to construct higher-order spherical tensor operators. In general, spherical tensor operators can be constructed from two perspectives. One way is to specify how spherical tensors transform under a physical rotation - a group theoretical definition. A rotated angular momentum eigenstate can be decomposed into a linear combination of the initial eigenstates: the coefficients in the linear combination consist of Wigner rotation matrix entries. Alternatively, continuing the previous example of the second-order dyadic tensor T = a ⊗ b, casting each of a and b into the spherical basis and substituting into T gives the spherical tensor operators of the second order. Construction using Clebsch–Gordan coefficients Combining two spherical tensors and in the following manner, involving the Clebsch–Gordan coefficients, can be proved to give another spherical tensor of the form: This equation can be used to construct higher-order spherical tensor operators, for example second-order spherical tensor operators from two first-order spherical tensor operators, say A and B, discussed previously: Using the infinitesimal rotation operator and its Hermitian conjugate, one can derive the commutation relation in the spherical basis: and the finite rotation transformation in the spherical basis can be verified: Using spherical harmonics Define an operator by its spectrum: Since for spherical harmonics under rotation: It can also be shown that: Then , where is a vector operator, also transforms in the same manner, i.e., is a spherical tensor operator. The process involves expressing in terms of x, y and z and replacing x, y and z with the operators Vx, Vy and Vz, which form a vector operator. The resultant operator is hence a spherical tensor operator . This may include a constant due to the normalization of the spherical harmonics, which is meaningless in the context of operators. The Hermitian adjoint of a spherical tensor may be defined as There is some arbitrariness in the choice of the phase factor: any factor containing will satisfy the commutation relations. The above choice of phase has the advantages of being real and of ensuring that the tensor product of two commuting Hermitian operators is still Hermitian. Some authors define it with a different sign on , without the , or use only the floor of . Angular momentum and spherical harmonics Orbital angular momentum and spherical harmonics Orbital angular momentum operators have the ladder operators: which raise or lower the orbital magnetic quantum number m by one unit. This has almost exactly the same form as the spherical basis, aside from constant multiplicative factors. Spherical tensor operators and quantum spin Spherical tensors can also be formed from algebraic combinations of the spin operators Sx, Sy, Sz, as matrices, for a spin system with total quantum number j = + s (and = 0). Spin operators have the ladder operators: which raise or lower the spin magnetic quantum number ms by one unit. Applications Spherical bases have broad applications in pure and applied mathematics and physical sciences where spherical geometries occur. 
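In the standard notation assumed here, the Clebsch–Gordan construction described above, and the spherical components of the position operator that appear in the dipole-transition application below, read:

```latex
T^{(k)}_q \;=\; \sum_{q_1,\,q_2} \langle k_1\, q_1;\, k_2\, q_2 \,|\, k\, q \rangle\, A^{(k_1)}_{q_1}\, B^{(k_2)}_{q_2},
\qquad
r_{\pm 1} = \mp\frac{x \pm i y}{\sqrt{2}}, \quad r_0 = z,
\quad\text{i.e.}\quad
r_q = r\,\sqrt{\tfrac{4\pi}{3}}\, Y_{1q}(\theta,\phi).
```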
Dipole radiative transitions in a single-electron atom (alkali) The transition amplitude is proportional to matrix elements of the dipole operator between the initial and final states. We use an electrostatic, spinless model for the atom and we consider the transition from the initial energy level Enℓ to final level En′ℓ′. These levels are degenerate, since the energy does not depend on the magnetic quantum number m or m′. The wave functions have the form, The dipole operator is proportional to the position operator of the electron, so we must evaluate matrix elements of the form, where, the initial state is on the right and the final one on the left. The position operator r has three components, and the initial and final levels consist of 2ℓ + 1 and 2ℓ′ + 1 degenerate states, respectively. Therefore if we wish to evaluate the intensity of a spectral line as it would be observed, we really have to evaluate 3(2ℓ′+ 1)(2ℓ+ 1) matrix elements, for example, 3×3×5 = 45 in a 3d → 2p transition. This is actually an exaggeration, as we shall see, because many of the matrix elements vanish, but there are still many non-vanishing matrix elements to be calculated. A great simplification can be achieved by expressing the components of r, not with respect to the Cartesian basis, but with respect to the spherical basis. First we define, Next, by inspecting a table of the Yℓm′s, we find that for ℓ = 1 we have, where, we have multiplied each Y1m by the radius r. On the right hand side we see the spherical components rq of the position vector r. The results can be summarized by, for q = 1, 0, −1, where q appears explicitly as a magnetic quantum number. This equation reveals a relationship between vector operators and the angular momentum value ℓ = 1, something we will have more to say about presently. Now the matrix elements become a product of a radial integral times an angular integral, We see that all the dependence on the three magnetic quantum numbers (m′,q,m) is contained in the angular part of the integral. Moreover, the angular integral can be evaluated by the three-Yℓm formula, whereupon it becomes proportional to the Clebsch-Gordan coefficient, The radial integral is independent of the three magnetic quantum numbers (m′, q, m), and the trick we have just used does not help us to evaluate it. But it is only one integral, and after it has been done, all the other integrals can be evaluated just by computing or looking up Clebsch–Gordan coefficients. The selection rule m′ = q + m in the Clebsch–Gordan coefficient means that many of the integrals vanish, so we have exaggerated the total number of integrals that need to be done. But had we worked with the Cartesian components ri of r, this selection rule might not have been obvious. In any case, even with the selection rule, there may still be many nonzero integrals to be done (nine, in the case 3d → 2p). The example we have just given of simplifying the calculation of matrix elements for a dipole transition is really an application of the Wigner–Eckart theorem, which we take up later in these notes. Magnetic resonance The spherical tensor formalism provides a common platform for treating coherence and relaxation in nuclear magnetic resonance. In NMR and EPR, spherical tensor operators are employed to express the quantum dynamics of particle spin, by means of an equation of motion for the density matrix entries, or to formulate dynamics in terms of an equation of motion in Liouville space. 
The Liouville space equation of motion governs the observable averages of spin variables. When relaxation is formulated using a spherical tensor basis in Liouville space, insight is gained because the relaxation matrix exhibits the cross-relaxation of spin observables directly. Image processing and computer graphics See also Wigner–Eckart theorem Structure tensor Clebsch–Gordan coefficients for SU(3) References Notes Sources Further reading Spherical harmonics Angular momentum and spin Condensed matter physics Magnetic resonance Image processing External links (2012) Clebsch-Gordon (sic) coefficients and the tensor spherical harmonics The tensor spherical harmonics (2010) Irreducible Tensor Operators and the Wigner-Eckart Theorem Tensor operators M. Fowler (2008), Tensor Operators Tensor_Operators (2009) Tensor Operators and the Wigner Eckart Theorem The Wigner-Eckart theorem (2004) Rotational Transformations and Spherical Tensor Operators Tensor operators Evaluation of the matrix elements for radiative transitions D.K. Ghosh, (2013) Angular Momentum - III : Wigner- Eckart Theorem B. Baragiola (2002) Tensor Operators Spherical Tensors Image processing Quantum mechanics Condensed matter physics Linear algebra Tensors Spherical geometry
Tensor operator
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
4,440
[ "Tensors", "Algebra", "Theoretical physics", "Phases of matter", "Quantum mechanics", "Materials science", "Condensed matter physics", "Linear algebra", "Matter" ]
60,270,031
https://en.wikipedia.org/wiki/Niobium%20disulfide
Niobium disulfide is the chemical compound with the formula NbS2. It is a black layered solid that can be exfoliated into ultrathin grayish sheets similar to other transition metal dichalcogenides. These layers exhibit superconductivity, where the transition temperature increases from ca. 2 to 6 K with the layer thickness increasing from 6 to 12 nm, and then saturates with thickness. References Niobium(IV) compounds Disulfides Transition metal dichalcogenides Monolayers
Niobium disulfide
[ "Physics" ]
111
[ "Monolayers", "Atoms", "Matter" ]
60,271,117
https://en.wikipedia.org/wiki/Neurosexism
Neurosexism is an alleged bias in the neuroscience of sex differences towards reinforcing harmful gender stereotypes. The term was coined by feminist scholar Cordelia Fine in a 2008 article and popularised by her 2010 book Delusions of Gender. The concept is now widely used by critics of the neuroscience of sex differences in neuroscience, neuroethics and philosophy. Definition Neuroscientist Gina Rippon defines neurosexism as follows: Neurosexism' is the practice of claiming that there are fixed differences between female and male brains, which can explain women's inferiority or unsuitability for certain roles." For example, "this includes things such as men being more logical and women being better at languages or nurturing." Fine and Rippon, along with Daphna Joel, state that "the point of critical enquiry is not to deny differences between the sexes, but to ensure a full understanding of the findings and meaning of any particular report." Many of the issues they discuss to support their position are "serious issues for all areas of behavioral research", but they argue that "in sex/gender differences research... they are often particularly acute." Nonetheless, the common factor influencing logical maturity between males and females is the maturity of the frontal cortex, which matures at the age of 25, at the earliest. The topic of neurosexism is thus closely tied to wider debates about scientific methodology, especially in the behavioral sciences. History The history of science contains many examples of scientists and philosophers drawing conclusions about the mental inferiority of women, or their lack of aptitude for certain tasks, on the basis of alleged anatomical differences between male and female brains. In the late 19th century, George J. Romanes used the difference in average brain weight between men and women to explain the "marked inferiority of intellectual power" of the latter. Absent a sexist background assumption about male superiority, there would be nothing to explain here. Despite these historical pseudo-scientific studies, Becker et al. argue that "for decades" the scientific community has abstained from studying sex-differences. Larry Cahill asserts that today there is a widely held belief in the scientific community that sex-differences do not matter to large parts of biology and neuroscience, apart from explaining reproduction and the workings of reproduction hormones. Although overtly sexist statements may no longer have a place within the scientific community, Cordelia Fine, Gina Rippon and Daphna Joel contend that similar patterns of reasoning still exist. They claim that many researchers who make claims about gendered brain differences fail to provide sufficient warrant for their position. Philosophers of science who believe in a value-free normative standard for science find the practice of neurosexism particularly problematic. They hold that science should be free from values and biases, and argue that only epistemic values have a legitimate role to play in scientific inquiry. However, contrary to the value-free ideal view, Heather Douglas argues that 'value-free science is inadequate science' Examples in science Prenatal hormone theory Contemporary research continues in a more subtle vein through Prenatal Hormone Theory. 
According to the Prenatal Hormone Theory, "male and female foetuses differ in testosterone concentrations beginning as early as week 8 of gestation [and] the early hormone difference exerts permanent influences on brain development and behaviour." Charges of neurosexism may then be levelled against the PHT if these alleged hormonal differences are interpreted as causing the male/female brain distinction and in turn are used to reinforce stereotypical behaviours and gender roles. Empathising–systemising theory The notion that there are hard-wired differences between male and female brains is particularly explicit in Simon Baron-Cohen's empathising–systemising (E-S) theory. Empathy is defined as the drive to identify and respond appropriately to emotions and thoughts in others, and systemising is defined as the drive to analyse and explore a system, isolate the underlying rules that govern the behaviour of that system, and build new systems. These two characteristics can be seen amongst young girls and boys. Girls have a tendency to play with baby dolls when they are young, enacting their social and emotional skills. Boys tend to play with plastic cars, illustrating a more mechanical, system-driven mind. This may, of course, be due simply to the environment and to social norms. However, the empathising–systemising theory posits three broad brain types, or organisation structures: type E, the empathiser; type S, the systemiser; type B, the 'balanced brain'. Given that females are twice as likely to display brain type E, and males are twice as likely to display brain type S, he labels these brain 'types' the 'female brain' and the 'male brain', respectively. This type of analysis therefore suggests that most (or at least some) differences in skills and occupation between males and females can be explained by virtue of their having different brain structures. Baron-Cohen's theory has been criticised because it presents a clear-cut dichotomy between male and female brains, whilst this is not necessarily the case: there are females with 'male brains', and males with 'female brains'. Using the gendered labels makes it significantly more likely that evidence of gendered brain differences will be over-stated in the media, in a way that might actively shape gender norms within society. Neuroimaging In Delusions of Gender, Cordelia Fine criticises work by Ruben and Raquel Gur and collaborators. In the context of explaining the under-representation of women in science and mathematics, she quotes them as claiming that "the greater facility of women with interhemispheric communications may attract them to disciplines that require integration rather than detailed scrutiny of narrowly characterized processes." This contention is, however, corroborated by a 2014 study about the structural connectome. The study used 949 youths and reported the key difference that male brains are optimised for intrahemispheric communication, while female brains are optimised for interhemispheric communication. Furthermore, the developmental timeframes of male and female brains are vastly different. However, this study used youths aged 8 to 22, a period when the brain is still developing, so the results may not be conclusive. In a 1999 study, Gur et al. found a link between the amount of white matter in a person's brain and their performance on spatial tasks. 
Fine points out that the sample of ten people is small, and that the researchers tested for thirty-six different relationships in this sample. Fine argues that results like these should be treated with caution, because, given the sample size and the number of relationships tested, the correlation found between white matter volume and performance in the tasks could be a false positive. Fine accuses the researchers of downplaying the risk of a false positive after conducting many statistical analyses of past research projects; she argues that using their results as the basis of an explanation for why women are under-represented in scientific fields is inadequate here. Fine also discusses a 2004 neuroimaging study by neuroscientist Sandra Witelson and collaborators. This study was taken to support sex differences in emotional processing by Allan and Barbara Pease in their book Why Men Don't Listen and Women Can't Read Maps and by Susan Pinker in her book The Sexual Paradox. Fine argues that, with a sample size of just 16, the results could easily have been false positives. She compares the study to a famous 2009 study in which, to illustrate the risk of false positives in neuroimaging research, researchers showed increased brain activity in a dead salmon during a perspective-taking task. Dispute between Fine and Baron-Cohen A notable dispute in 2010 between Fine and neuroscientist Simon Baron-Cohen in The Psychologist magazine centred on a study into sex differences in the responses of newborn babies to human faces and mechanical mobiles. The research took babies under 24 hours old and showed them a human face, or a mechanical mobile. If they were shown the human face first, they were then shown the mechanical mobile and vice versa. The babies' responses were recorded, and judges coded the eye movements of the babies to discern which, if either, of the stimuli the babies looked at for longer. The study concluded that female babies looked at human faces for longer, and male babies looked at mechanical mobiles for longer. From this, it was concluded that female brains are programmed towards empathy whereas male brains are more inclined towards practicality and building systems. This theory suggested that an individual can be characterised as having a certain "brain type", where empathising was called brain type E, and systemising was called brain type S. However, some individuals can be equally strong at empathising and systemising and therefore possess a "balanced brain", labelled brain type B. Fine criticised the study, arguing that because the babies were shown one stimulus first and then the other, they may have become fatigued, affecting the results of the study. Furthermore, Fine also argued that the panel of judges watching the babies' eye movements may have been able to guess the sex of the baby, for example if the baby was dressed in certain clothing or had particular congratulatory cards present, giving rise to confirmation-type biases. Baron-Cohen has countered these criticisms. Baron-Cohen replied to the fatigue argument by explaining that the stimuli were shown in randomised order, so as to avoid the problem of stimulus-specific fatigue in either sex. In response to the claim about bias, he argued that the judges were only able to assess the babies' eye movements by watching a video of the eye area of the baby, through which it would have been almost impossible to derive the sex of the baby. 
Notwithstanding this, Fine argued that the effort to conceal the babies' sex from the experimenters in the room with the babies was "minimal", thereby allowing room for implicit bias and rendering the results unreliable. Congenital adrenal hyperplasia Rebecca Jordan-Young provides a good case study of neurosexism in studies of those with congenital adrenal hyperplasia (CAH). Because Prenatal Hormone Theory posits early steroid hormones during fetal development as conducive to sex-typical behaviours, studies of genetic females with CAH are important to test the feasibility of this hypothesis. Jordan-Young conducts a comprehensive review of these studies, finding them to neglect four broad categories of variables that plausibly affect psychosexual development: "(1) physiological effects of CAH, including complex disruption of steroid hormones from early development onwards; (2) intensive medical intervention and surveillance, which many women with CAH describe as traumatic; (3) direct effects of genital morphology on sexuality; and (4) expectations of masculinisation that likely affect both the development and evaluation of gender and sexuality in CAH." Complex and continuous interactions between biological factors, medical intervention, and social pressures suggest a more holistic explanation for atypicalities in the psychological make-up and behaviour of those with CAH than the conventional explanation that prenatal hormones "masculinise" the brain. Neglecting these four categories in the methodology of studies of those with CAH then favours the sex difference hypothesis, providing a clear example of neurosexism in scientific research. However, studies of CAH fail to account for unusual childhood experiences, parental expectations or reporting bias. Examples in scientific communication The media reporting of the neuroscience of sex differences has also attracted criticism. A high-profile example was the reporting of a 2014 neuroimaging study on sex differences in the structural connectome of the human brain. The study used diffusion tensor imaging to investigate white matter connections in the brains of 949 participants ranging from 8 to 22 years old. The authors claimed to have discovered "fundamental sex differences in the structural architecture of the human brain". The study was widely reported by media organizations around the world. A content analysis of the media coverage investigated the claims made in the original scientific article and in several different types of media reporting. The analysis showed that information from the scientific article was given "increasingly diversified, personalized, and politicized meaning" in media outlets and was widely seen to have vindicated traditional gender stereotypes, even though the neuroimaging technique used could only detect structural differences, not functional differences, between the sexes. The changing media environment The way scientific information is passed from the scientific community into the public consciousness has changed with the development of technology, social media, and news platforms. The traditional route from study, to media, to public consciousness no longer holds. The advent of the "blogosphere" and other forms of social media means that audiences now actively produce and critique scientific content alongside scientists; whether this is a benefit or a hindrance to the scientific community remains to be seen, given the infancy of these channels. 
There is, however, a need to remain alert to the problems arising from greater public involvement in scientific communication, particularly for the understanding of findings. Cliodhna O'Connor and Helene Joffe examine how traditional media, blogs, and their comment sections autonomously project prevailing understandings of sex differences (emotion-rationality dualism and traditional role divisions) onto mute findings, construing men as purely rational and women as highly emotional, noting how both social representation theory and system justification theory may be causing bias in the interpretation of these findings. The findings of their study showed significant scope for parties to apply their own personal and cultural agendas to the findings, and share these through blogs and comments. The projection of prevailing stereotypes onto mute findings is a prime example of how neurosexism can arise at stages outside the domain of science. This raises further concerns for feminist critics: whilst the necessary checks and balances can be applied within scientific method, once the information is in the public consciousness audiences can manipulate and construe research however they see fit. Communication and neurological discoveries The interest and coverage generated by neurological studies on sex differences is an instance of a wider phenomenon. It is possible to see the "neuro-" prefix being widely used: "neuromarketing", "neuroeconomics", "neurodrinks". One study documented in the Journal of Cognitive Neuroscience tested the hypothesis that irrelevant neuroscience explanations accompanying descriptions of psychological phenomena cause people to rate the descriptions as higher quality. Results showed that irrelevant neuroscience information does indeed cause people to rate explanations as more satisfying than they otherwise would, even in cases where neuroscience was not useful to explain the phenomenon. Methodological issues According to Cordelia Fine and Gina Rippon, there are systematic methodological issues in the neuroscience of sex differences that increase the chances of neurosexism. In other words, questions of neurosexism are not entirely independent of questions about scientific methodology. Reverse inferences A reverse inference infers the presence of a particular mental process from activation in a particular brain region. Fine argues that such inferences are routine in the neuroscience of sex differences, yet "the absence of neat one-to-one mapping between brain regions and mental processes renders reverse inferences logically invalid". She emphasises that mental processes arise from complex interactions between a multiplicity of brain regions; the inference from correlation to causation is invalid, because the interactions between brain regions and mental processes are vastly complex. The invalidity stems from brain region activation being multiply realisable. For example, the mental processes of experiencing visual art and experiencing the taste of food both activate the nucleus accumbens; activation of the nucleus accumbens therefore does not necessarily indicate the mental process of tasting food, since the activation could reflect another mental process (e.g. experiencing visual art). Plasticity Plasticity refers to the brain's ability to change as a result of experiences in one's life. Because of the brain's plasticity, it is possible in principle for social phenomena related to gender to influence the organization of a person's brain. 
Fine has argued that the neuroscience of sex differences does not do enough to take plasticity into account. In Fine's view, neuroscientists tend to take a snap-shot comparison (looking at current neural differences) and describe the results as "hard-wired", without considering that the observed patterns could change over time. To examine one possible example of this, consider the 2014 Ingalhalikar et al. study, which used diffusion tensor imaging to find relatively greater within-hemisphere neural connectivity in males' brains, and relatively greater across-hemisphere connectivity in females' brains. This was then employed to naturalise sex-specific cognitive differences, which then naturalised their suitability for divergent skill-sets. However, given the aforementioned concept of brain plasticity, the notion that these connectivity differences are exclusively a result of natural biology can be challenged. This is because plasticity introduces the alternative possibility that individuals' sex-specific learned behaviours could have also impacted their brain connectome. Thus, the concept of brain plasticity raises the question of whether the observed brain differences from the study are caused by nature or nurture. Sample sizes Fine has criticised the small sample sizes that are typical of Functional Neuroimaging (FNI) studies reporting sex differences in the brain. She supports this claim with a meta-analysis. She takes a sample of thirty-nine studies from Medline, Web of Science, and PsycINFO databases, published between 2009 and 2010, in which sex differences were referred to in the article title. Fine reports that over the entire sample, the mean number of males was 19, and the mean number of females was 18.5. Disregarding the studies making sex-by-age and sex-by-group comparisons (which require larger sample sizes), the average sample sizes were even smaller, with a mean of 13.5 males, and a mean of 13.8 females. She also points out that the second largest study in the group reported a null finding. Small sample sizes are problematic because they increase the risk of False positives. Not only do false positives misinform, but they also "tend to persist because failures to replicate are inconclusive and unappealing both to attempt by researchers and to publish by journals". Criticism Simon Baron-Cohen has defended the neuroscience of sex differences against the charge of neurosexism. In a review of Delusions of Gender, he said that "Ultimately, for me, the biggest weakness of Fine's neurosexism allegation is the mistaken blurring of science with politics", saying that "You can be a scientist interested in the nature of sex differences while being a clear supporter of equal opportunities and a firm opponent of all forms of discrimination in society." See also Cissexism Cisnormativity Heterosexism Monosexism Allonormativity Intersexism Transmedicalism References External links Cordelia Fine, "Delusions of Gender", the book's official site Neuroscience Sexism Gender Gender-related stereotypes Sex differences in humans 2010 neologisms
Neurosexism
[ "Biology" ]
3,930
[ "Neuroscience", "Behavior", "Gender", "Human behavior" ]
60,275,283
https://en.wikipedia.org/wiki/Cell%20Transplantation
Cell Transplantation is a monthly peer-reviewed medical journal covering regenerative medicine. It was established in 1992 and was originally published by Cognizant Communication Corporation until 2017, when it was acquired by SAGE Publications. The editors-in-chief are Paul R. Sanberg (University of South Florida College of Medicine) and Shinn-zong Lin (Tzu Chi University). According to the Journal Citation Reports, the journal has a 2017 impact factor of 2.885, ranking it 62nd out of 133 journals in the category "Medicine, Research & Experimental". References External links Regenerative medicine journals SAGE Publishing academic journals Academic journals established in 1992 Monthly journals English-language journals
Cell Transplantation
[ "Biology" ]
143
[ "Regenerative medicine journals", "Stem cell research" ]
60,282,751
https://en.wikipedia.org/wiki/Sodium%20trifluoroacetate
Sodium trifluoroacetate is a chemical compound with the formula CF3CO2Na. It is the sodium salt of trifluoroacetic acid. It is used as a reagent for trifluoromethylation. Basicity With a pKa of 0.23 for trifluoroacetic acid, the trifluoroacetate ion is an extremely weak base compared to the acetate ion (acetic acid has a pKa of 4.76). This is due to the electron-withdrawing effect of the three fluorine atoms adjacent to the carboxylate group. Strong acids such as hydrochloric acid or sulfuric acid can protonate the trifluoroacetate ion to trifluoroacetic acid: In general, trifluoroacetate reacts in equilibrium with hydronium cations to form trifluoroacetic acid: CF3CO2− + H3O+ ⇌ CF3CO2H + H2O The general reaction with hydronium is in equilibrium due to the similarity in pKa between trifluoroacetic acid and the hydronium ion. Preparation One convenient method is by dissolving an equivalent amount of sodium carbonate in a 50% aqueous solution of trifluoroacetic acid. The solution is filtered and the water removed by vacuum evaporation (with special care to avoid decomposition of the salt by overheating). The solid obtained is dried under vacuum at 100 °C. Uses Sodium trifluoroacetate is a useful reagent for trifluoromethylation. See also Sodium fluoroacetate Trifluoroacetic acid Sodium acetate References Organic sodium salts Trifluoroacetates
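As a minimal worked example of the equilibrium discussed in the Basicity section, and assuming the common convention pKa(H3O+) ≈ 0 that the remark about "similarity in pKa" implies:

```latex
K \;=\; \frac{K_a(\mathrm{H_3O^+})}{K_a(\mathrm{CF_3CO_2H})}
\;=\; 10^{\,\mathrm{p}K_a(\mathrm{CF_3CO_2H}) \,-\, \mathrm{p}K_a(\mathrm{H_3O^+})}
\;\approx\; 10^{\,0.23 \,-\, 0} \;\approx\; 1.7,
```

so neither side of the equilibrium is strongly favoured, consistent with the statement that protonation of trifluoroacetate by hydronium sits close to equilibrium.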
Sodium trifluoroacetate
[ "Chemistry" ]
340
[ "Organic sodium salts", "Salts" ]
47,013,690
https://en.wikipedia.org/wiki/Automated%20synthesis
Automated synthesis or automatic synthesis is a set of techniques that use robotic equipment to perform chemical synthesis in an automated way. Automating processes allows for higher efficiency and product quality, although automation technology can be cost-prohibitive and there are concerns regarding overdependence and job displacement. Chemical processes were automated throughout the 19th and 20th centuries, with major developments in the past thirty years as technology has advanced. Tasks that are performed may include synthesis under a variety of conditions, sample preparation, purification, and extraction. Applications of automated synthesis are found on research and industrial scales in a wide variety of fields including polymers, personal care, and radiosynthesis. Process An automated synthesis is very similar in procedure to performing a manual synthesis. The overseeing chemist decides on a target molecule, then formulates the experimental plan, which is a sequential series of steps. Then, they collect the required equipment and execute the plan. The automated synthesis follows the same pathway, except that the computer devises and executes the experimental plan. However, human revision is usually still required to ensure the automated route is practical and there are no implicit steps or conditions missing from the proposed procedure. In organic synthesis, organic synthesis software is used to automate the process of identifying sequences of reactions or routes that can be used to synthesize organic compounds. Benefits of automated synthesis Automation of synthesis has three main benefits: increased efficiency, quality (yields and purity), and safety, all resulting from decreased human involvement. As machines work faster than humans and are not prone to human error, throughput and reproducibility increase. Additionally, as humans spend less time in the lab, exposure to dangerous chemicals is significantly decreased. This allows chemists additional time for theory and collaborative discussions. Additional benefits include: multitasking, performing tasks beyond the scope of human precision or ability, exhaustive analysis, etc. Concerns with automated synthesis The primary concern with automated synthesis is job displacement. Other concerns are high initial investment and maintenance costs, privacy concerns, and an over-dependence on technology. There are also ethical concerns regarding the use of artificial intelligence and robotics. See Ethics of artificial intelligence, Robot ethics, Machine ethics. History Parts of procedures and techniques were automated throughout the 19th and 20th centuries, using simple circuit boards. The first fully automatic synthesis was a peptide synthesis by Robert Merrifield and John Stewart in 1966. Applications of artificial intelligence to organic synthesis also started in the 1960s with the Dendral Project, which helped organic chemists characterize and identify molecules using mass spectrometry. True computer-assisted organic synthesis software (CAOS) such as LHASA became feasible as artificial intelligence and machine learning developed in the 1980s. Important developments in automated radiosynthetic modules were also made in the 1980s. In the late 1990s, the main challenge of automation was overcoming phase-separation issues and increasing system integration. 
At this time there were only specialised systems that belonged to one of four designs: a flow reactor; a batch reactor connected by flow lines; a single robot; two robots (one for synthesis and one for analysis); and larger systems that combined the aforementioned designs. The 2000s and 2010s saw significant development in industrial automation of molecular synthesis as well as the emergence of general synthesis systems that could synthesise a wide variety of molecules on-demand, whose operation Melanie Trobe and Martin D. Burke compared to that of a 3D printer. Applications Automated synthesis systems find new applications with the development of new robotic platforms. Possible applications include uncontrolled synthesis, time-dependent synthesis, radiosynthesis, synthesis under demanding conditions (low temperatures, a specific atmosphere such as CO, H2 or N2, high pressure or vacuum), or whenever the same or a similar workflow needs to be applied multiple times, for example to optimize reactions, synthesize many derivatives on a small scale, or perform iterative homologation reactions or radiosynthesis. Automated synthesis workflows are needed both in academic research and in a wide array of industrial R&D settings (pharmaceuticals, agrochemicals, fine & specialty chemicals, renewables & energy research, catalysts, polymers, ceramics & abrasives, porous materials, nanomaterials, biomaterials, lubricants, paints & coatings, home care, personal care, nutrition, forensics). Polymers Parallel synthesis Overall, automated synthesis has improved the efficiency of parallel synthesis and combinatorial methods for polymers. These techniques aim to design new materials, in addition to studying the relationships between their structure and properties. However, while screening for polymers enables this investigation, it becomes increasingly demanding for researchers to create the libraries for these synthetic compositions. In addition, preparation requires a large number of repetitive reactions to be completed, leading to an immense burden of planning and labor. Polycondensation Polycondensation involves the formation of polymers through condensation reactions between different species, creating condensation polymers. With automated synthesis, General Electric developed an approach for melt polymerizations of bisphenol A (BPA) and diphenyl carbonate (DPC), using sodium hydroxide (NaOH) as the catalyst. Once the results were analyzed, it was shown that, by using an automated method of polymerization, the effect of varying the catalyst amount became more distinct and the reproducibility of the reaction improved. Furthermore, it demonstrated an increase in the homogeneity of the polymers in the microreactors. Free-radical polymerization In addition to polycondensation, automated synthesis has been applied to the various methods of radical polymerization, as well as to ring-opening and polyolefin polymerization. This includes free-radical polymerization, such as the development of an automated process to synthesize and evaluate molecularly imprinted polymers (MIPs). Through thermal initiation, around sixty polymers could be prepared in parallel and evaluated through their binding constants to the imprinted analytes. Furthermore, adding another approach to the repertoire, Long et al. 
demonstrated the abilities of robotic systems and their use in varying the monomer for the synthesis of poly(styrene-co-methyl methacrylate) and poly(styrene-co-butyl methacrylate). After automatic precipitation, the products were characterized with standard analytics and added to the polymer library. Another example is the method described by Symyx Technologies Inc., in which an ink-jet printer delivered different ratios of styrene and acrylonitrile, the latter being used as the terminator. While these are examples of suspension polymerization, the first instance of automated parallel emulsion polymerization was reported by Voorn et al. with five parallel reactors containing well-defined systems of styrene and vinyl acetate. After optimizing the vortex speed, the results of automated synthesis and of classical stirring for emulsion polymerization were compared, and the products were found to be comparable. Controlled radical polymerization In contrast to free-radical polymerization, automated synthesis can also be applied to controlled radical polymerization. These methods have been used within reversible addition–fragmentation chain transfer (RAFT), atom-transfer radical (ATRP), and nitroxide-mediated polymerizations, demonstrating the ability of robots to improve efficiency and reduce the hardship of performing reactions. For example, with the automatic dispensation of reagents, Symyx Technologies Inc. was able to polymerize styrene and butyl acrylate through ATRP. In addition, this functionality was supported by the research of Zhang et al., who found reproducibility and comparability equivalent to classical ATRP. Ring-opening polymerization With ring-opening polymerization, automated synthesis has been used for rapid screening and optimization, including of catalyst/initiator systems and their polymerization conditions. For example, Hoogenboom et al. determined the optimal temperature for the polymerization of 2-ethyl-2-oxazoline in dimethylacetamide (DMAc), allowing for individual heating of the parallel reactors, which shortened the time needed for preparation and analysis. Polyolefins To aid with the catalyst research for polyolefins, Symyx Technologies Inc. used automated synthesis to create a library of palladium and nickel catalysts, which were screened for ethylene polymerization. This process found that the largest polyethylene polymers were created by the complexes with the highest steric hindrance at the ortho-positions of the aryl rings, while electronic factors did not influence yield or molecular weight. In addition, Tuchbreiter and Mülhaupt used automated synthesis to demonstrate the advantages of minireactors for the polymerization of olefins, with quality improving compared to simple arrays. Supramolecular polymerization Within the field of supramolecular polymerization, Schmatloch et al. used automated synthesis to create main-chain supramolecular coordination polymers, reacting bis(2,2′:6′,2″-terpyridine)-functionalized poly(ethylene oxide) with various metal(II) acetates. From this, it was revealed that classical laboratory approaches could be transferred to automated synthesis, optimizing the processes to increase efficiency and aid with reproducibility. 
Recent developments Over the years, multiple synthesizers have been developed to assist with automated synthesis, including the Chemspeed Accelerator (SLT106, SLT II, ASW2000, SwingSLT, Autoplant A100, and SLT100), the Symyx system, and Freeslate ScPPR. Recently, researchers have investigated the optimization of these methods for controlled/living radical polymerization (CLRP), which faces issues with oxygen intolerance. This research has led to the development of oxygen-tolerant CLRP, including enzyme-degassed RAFT (Enz-RAFT), air-tolerant atom-transfer radical polymerization (ATRP), and photoinduced electron/energy transfer–RAFT (PET–RAFT) polymerization. Through the use of liquid-handling robots, Tamasi et al. demonstrated the use of automated synthesis to execute multi-step procedures, enabling more elaborate reaction schemes to be investigated at larger scale and greater complexity. Robotic platforms Automated synthesis systems are laboratory robots that combine software and hardware. As a synthesis is a linear sequence of steps, the individual steps can be modularized into hardware that accomplishes the specific step (mixing, heating or cooling, product analysis, etc.). Such hardware includes robotic arms that use dispensers and grippers to transfer materials, shakers that adjust the stirring speed, and Cartesian coordinate robots that operate along the X, Y and Z axes and can move items and perform synthesis within designated bounds. Conditions of reactions (atmosphere, temperature, pressure) are controlled with the help of peripherals such as gas cylinders, vacuum pumps, reflux systems and cryostats. Modular platforms use a variety of tools in order to perform all operations needed in synthesis. There are many commercial modular hardware solutions available to execute synthesis. New software programs are available that can compile an automated synthesis procedure into executable code directly from existing literature. There are also software programs that can retro-synthetically generate a procedure at the level of proficiency of a graduate student. References Chemical synthesis
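To illustrate the modular view described in the Robotic platforms section above, where a synthesis plan is a sequence of steps each dispatched to a hardware module, here is a minimal Python sketch; the SynthesisStep and Platform classes, module names and parameters are hypothetical illustrations only, not the interface of any commercial system.

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical representation of one step in an automated synthesis plan.
@dataclass
class SynthesisStep:
    module: str                 # e.g. "dispenser", "reactor", "shaker"
    action: str                 # e.g. "add_reagent", "heat", "stir"
    params: dict = field(default_factory=dict)

# Hypothetical modular platform: maps module names to handler functions.
class Platform:
    def __init__(self):
        self.modules: dict[str, Callable[[str, dict], None]] = {}

    def register(self, name: str, handler: Callable[[str, dict], None]) -> None:
        self.modules[name] = handler

    def run(self, plan: list) -> None:
        # Execute the plan sequentially, one module call per step.
        for step in plan:
            handler = self.modules[step.module]
            handler(step.action, step.params)

def log_handler(action: str, params: dict) -> None:
    # Stand-in for real hardware drivers: just record what would be done.
    print(f"{action}: {params}")

platform = Platform()
for module in ("dispenser", "reactor", "shaker"):
    platform.register(module, log_handler)

# A toy three-step plan: dispense, stir, heat.
plan = [
    SynthesisStep("dispenser", "add_reagent", {"name": "monomer", "volume_ml": 5.0}),
    SynthesisStep("shaker", "stir", {"rpm": 600, "minutes": 10}),
    SynthesisStep("reactor", "heat", {"celsius": 80, "minutes": 120}),
]
platform.run(plan)
```

Registering a handler per module keeps the plan declarative, so in principle the same plan could be replayed on different hardware, which mirrors the modular design described above.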
Automated synthesis
[ "Chemistry" ]
2,357
[ "nan", "Chemical synthesis" ]
47,013,764
https://en.wikipedia.org/wiki/Radiosynthesis
Radiosynthesis is a fully automated synthesis method in which radioactive compounds are produced. Radiosynthesis is generally carried out by several nuclear interface modules, which are protected by lead shielding and controlled semi-automatically by a computer. The set-up of the modules differs depending on the type of product and synthesis process. Consequently, the modules should be adapted to the synthesis stages. In some cases, such stages of synthesis are carried out manually in order to optimize the radiochemical yield or due to the incompatibility or lack of a suitable module. Module Radiosynthesis modules consist of the following standard components: Reservoir: To store reactants Pipes: To link all components together Valves: To manage the flow of liquids Reactor: To carry out the reaction(s) Temperature and radioactivity sensors: To adjust and measure the temperature and radioactivity Preparative HPLC: To purify the product There are also components which are added based on the synthesis set-up, such as stirrers, sterile filters, Sep-Paks™, vials, bottles, detectors, etc. Before every synthesis, the modules should be washed. It should also be mentioned that, depending on the half-life of the radionuclides, a relaxation time is needed between syntheses in each module. Fig. 1 shows the schematic of a sample module. Radiosynthesis modules are often combined with a cyclotron or other radionuclide generator. See also Hot cell Medicinal radiocompound References Chemical synthesis
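The relaxation time mentioned in the Module section above follows from simple exponential decay of the residual activity. A minimal Python sketch, using purely illustrative numbers (a half-life of 110 minutes, similar to fluorine-18, and an arbitrary activity threshold):

```python
import math

def remaining_activity(a0: float, half_life_min: float, t_min: float) -> float:
    # Exponential decay: A(t) = A0 * 2^(-t / half_life)
    return a0 * 2.0 ** (-t_min / half_life_min)

def relaxation_time(a0: float, half_life_min: float, threshold: float) -> float:
    # Time for the activity to fall below the given threshold.
    return half_life_min * math.log2(a0 / threshold)

# Illustrative values only: 10 GBq starting activity, wait until below 0.1 GBq.
print(remaining_activity(10.0, 110.0, 220.0))   # two half-lives -> 2.5 GBq
print(relaxation_time(10.0, 110.0, 0.1))        # ~730 minutes
```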
Radiosynthesis
[ "Chemistry" ]
307
[ "Chemical reaction stubs", "nan", "Chemical synthesis" ]
47,015,915
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Aerospace%20and%20Electronic%20Systems
IEEE Transactions on Aerospace and Electronic Systems is a bimonthly peer-reviewed scientific journal published by the IEEE Aerospace and Electronic Systems Society. It covers the organization, design, development, integration, and operation of complex systems for space, air, ocean, or ground environment. The editor-in-chief is Gokhan Inalhan. According to the Journal Citation Reports, the journal has a 2020 impact factor of 4.102. Publication History The origins of IEEE Transactions on Aerospace and Electronic Systems are found in the Institute of Radio Engineers (IRE). In 1948 the IRE formed a number of "Professional Groups" to accommodate the post-war growth in its membership. Professional groups were designed to meet the needs of specialized groups within the larger IRE membership by holding meetings, sponsoring conferences, publishing specialized journals. Three journals, sponsored and published in parallel by three professional groups, merged to form IEEE Transactions on Aerospace and Electronic Systems in 1965. The first was the Transactions of the IRE Professional Group on Airborne Electronics (1951–1952) published by the Professional Group on Airborne Electronics beginning in 1951. In response to the expanding scope of the professional group, the group changed its name and the group's journal became Transactions of the IRE Professional Group on Aeronautical and Navigational Electronics (1953–1954). The journal name was updated to IRE Transactions on Aeronautical and Navigational Electronics (1955–1960) in 1955. As the scope the professional group continued to evolve, the professional group name and its journal became the IRE Transactions on Aerospace and Navigational Electronics (1961–1962). Accompanying the merger of the IRE and the American Institute of Electrical Engineers (AIEE) to form the Institute of Electrical and Electronics Engineers (IEEE) in 1963, the journal changed its name to IEEE Transactions on Aerospace and Navigational Electronics (1963–1965). The second journal was first published by the Professional Group on Radio Telemetry and Remote Control in 1954 and was called Transactions of the IRE Professional Group on Radio Telemetry and Remote Control (1954). In 1955 the journal name was updated to the IRE Transactions on Telemetry and Remote Control (1955–1958). As the scope of the professional group evolved with the US space program, the professional group changed its name and the journal was renamed IRE Transactions on Space Electronics and Telemetry (1959–1962). With the IRE and AIEE merger to form IEEE, the journal name was updated to IEEE Transactions on Space Electronics and Telemetry (1963–1965). The third journal began with the newly formed Professional Group on Military Electronics in 1957: IRE Transactions on Military Electronics (1957–1962). The journal changed its name in the wake of the IRE and AIEE merger to form IEEE to IEEE Transactions on Military Electronics in (1963–1965). In 1965 four groups (the Aerospace Group, the Aerospace and Navigational Electronics Group, the Military Electronics Group, and the Space Electronics and Telemetry Group) merged to form the Aerospace and Electronic Systems Group. The last three of the four groups published separate journals. These three journals were combined to form IEEE Transactions on Aerospace and Electronic Systems (1965–present). In 1973, the Aerospace and Electronic Systems group became the Aerospace and Electronic Systems Society. 
Since its founding in 1965, the following have served as Editor-in-Chief: Harry Mimno 1965-1975 William Brown 1976-1988 Jack Harris 1988-1995 Cary Spitzer 1996-1999 Dale Blair 1999-2005 Peter Willett 2006-2011 Lance Kaplan 2012-2017 Michael Rice 2018-2022 Gokhan Inalhan 2023-present M. Barry Carlton Award Each year since 1962, the M. Barry Carlton Award has been given to the author(s) of the best paper to appear in the journal. The award was established in 1957 by the Professional Group on Military Electronics and was initially given to the best paper to appear in the IRE Transactions on Military Electronics (1957–1962). The award was named after M. Barry Carlton, former Assistant Secretary, Research and Development in the United States Department of Defense, who died in the 1956 Grand Canyon mid-air collision. The first award was given in 1962 to David Barton for the paper "The Future of Pulse Radar for Missile and Space Range Instrumentation", which appeared in the October 1961 issue of the IRE Transactions on Military Electronics. After the 1965 merger that formed the Aerospace and Electronic Systems Group, the first award for a paper published in IEEE Transactions on Aerospace and Electronic Systems went to Raymond Robbiani for "High Performance Weather Radar", which appeared in the April 1965 issue. A list of all recipients of the award is available on the AESS M. Barry Carlton Award website. Notes References External links Transactions on Aerospace and Electronic Systems, IEEE Bimonthly journals Academic journals established in 1951 English-language journals Engineering journals Aerospace engineering journals
IEEE Transactions on Aerospace and Electronic Systems
[ "Engineering" ]
984
[ "Aerospace engineering journals", "Aerospace engineering" ]
47,016,776
https://en.wikipedia.org/wiki/Extrafarma
Extrafarma is a drugstore chain owned by Ultrapar. The company is among the top 10 largest pharmacy chains in Brazil, with stores located throughout the north, northeast and southern regions of the country. The company has more than 400 stores in 10 states and more than 7,000 employees. History Pedro de Castro Lazera founded the company Imifarma on 2 December 1960. It was initially focused on the drug distribution market. In the 1990s, Imifarma started to operate in the retail market through its own network of pharmacies under the name Extrafarma. The store chain began in the city of Belém and expanded to other areas in the state of Pará and in the neighboring state of Amapá. Later, it expanded to the states of Maranhão, Ceará, Piauí and Rio Grande do Norte. In 2013, Extrafarma was acquired by Ultra. With the acquisition, Ultra entered the pharmaceutical retail industry and made Extrafarma its third distribution business and specialty retail chain, along with Ipiranga and Ultragaz. Following the acquisition, Extrafarma planned to expand by opening new pharmacy stores inside Ipiranga gas stations and at Ultragaz resellers. Awards ADVB-PA Award – Association of Sales and Marketing Managers from Brazil: Top Environmental Company Award, 2012. References Brazilian brands Companies based in Belém (Pará) Medicinal chemistry Pharmacies of Brazil Ultrapar
Extrafarma
[ "Chemistry", "Biology" ]
287
[ "Biochemistry", "nan", "Medicinal chemistry" ]
47,021,524
https://en.wikipedia.org/wiki/H1821%2B643
H1821+643 is an extraordinarily luminous, radio-quiet quasar in the constellation of Draco. The associated Active Galactic Nucleus (AGN) is situated in the Brightest Central Galaxy (BCG) of a massive (), strong cooling flow cluster, CL 1821+64. Russell et al (2010) spatially isolated its X-ray signal from the surrounding cluster in Chandra X-ray observatory observations and computed from the observed X-ray luminosity. Supermassive Black Hole The SMBH centred in CL 1821+64 is believed to be among the most massive in the known Universe. A variety of techniques have found different values for the mass. Five studies found values . Kim et al (2004) and Floyd et al (2008) used galactic bulge luminosity fits derived from Hubble data to find and respectively. Russell et al (2010) provided a rough estimate of . This was an underestimate with . Kolman et al (1991) and Shapovalova (2016) independently modelled the quasar UV spectrum to find . Capellupo et al (2017) found using line emissions. Two independent X-ray studies found significantly higher values. Reynolds et al (2014) found by modelling reflection from the accretion disc and Walker et al found by modelling the interaction of the black hole with the Intracluster medium (ICM) as a Compton-cooled feeding cycle. is in the range . The Schwarzschild diameter of this black hole is between and , which is about 16 times the diameter of Pluto's orbit. If the black hole were a Euclidean sphere, the average density would be 18 g/m3, well below the density of air at sea level on Earth (roughly 1.2 kg/m3). Footnotes References External links Simbad, SIMBAD NED, NED, NASA/IPAC Extragalactic Database Draco (constellation) Quasars Supermassive black holes
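The specific mass and size figures in this entry were lost in extraction. As a hedged illustration of the arithmetic behind the Schwarzschild diameter and mean density quoted above, the sketch below assumes a placeholder mass of about 3×10^10 solar masses (not taken from the source); it reproduces numbers of the same order as the "about 16 times the diameter of Pluto's orbit" and "18 g/m3" statements.

```python
# Minimal sketch (not from the article): Schwarzschild radius and mean density
# of a supermassive black hole. The mass is an assumed placeholder; the
# article's own values were lost in extraction.
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_SUN = 1.989e30    # solar mass, kg
AU = 1.496e11       # astronomical unit, m

def schwarzschild_radius(mass_kg: float) -> float:
    """r_s = 2GM/c^2, in metres."""
    return 2 * G * mass_kg / c**2

def mean_density(mass_kg: float) -> float:
    """Mass divided by the volume of a Euclidean sphere of radius r_s, in kg/m^3."""
    r = schwarzschild_radius(mass_kg)
    return mass_kg / (4.0 / 3.0 * math.pi * r**3)

mass = 3e10 * M_SUN                      # assumed placeholder mass
diameter = 2 * schwarzschild_radius(mass)
pluto_orbit_diameter = 2 * 39.5 * AU     # Pluto's mean orbital diameter
print(f"Schwarzschild diameter: {diameter:.2e} m "
      f"(~{diameter / pluto_orbit_diameter:.0f}x Pluto's orbital diameter)")
print(f"Mean density: {mean_density(mass) * 1000:.0f} g/m^3")
```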
H1821+643
[ "Physics", "Astronomy" ]
391
[ "Black holes", "Unsolved problems in physics", "Supermassive black holes", "Constellations", "Draco (constellation)" ]
47,021,614
https://en.wikipedia.org/wiki/Dualizing%20sheaf
In algebraic geometry, the dualizing sheaf on a proper scheme X of dimension n over a field k is a coherent sheaf together with a linear functional that induces a natural isomorphism of vector spaces for each coherent sheaf F on X (the superscript * refers to a dual vector space). The linear functional is called a trace morphism. A pair , if it exists, is unique up to a natural isomorphism. In fact, in the language of category theory, is an object representing the contravariant functor from the category of coherent sheaves on X to the category of k-vector spaces. For a normal projective variety X, the dualizing sheaf exists and it is in fact the canonical sheaf: where is a canonical divisor. More generally, the dualizing sheaf exists for any projective scheme. There is the following variant of Serre's duality theorem: for a projective scheme X of pure dimension n and a Cohen–Macaulay sheaf F on X such that is of pure dimension n, there is a natural isomorphism . In particular, if X itself is a Cohen–Macaulay scheme, then the above duality holds for any locally free sheaf. Relative dualizing sheaf Given a proper finitely presented morphism of schemes , one defines the relative dualizing sheaf or as the sheaf such that for each open subset and a quasi-coherent sheaf on , there is a canonical isomorphism , which is functorial in and commutes with open restrictions. Example: If is a local complete intersection morphism between schemes of finite type over a field, then (by definition) each point of has an open neighborhood and a factorization , a regular embedding of codimension followed by a smooth morphism of relative dimension . Then where is the sheaf of relative Kähler differentials and is the normal bundle to . Examples Dualizing sheaf of a nodal curve For a smooth curve C, its dualizing sheaf can be given by the canonical sheaf . For a nodal curve C with a node p, we may consider the normalization with two points x, y identified. Let be the sheaf of rational 1-forms on with possible simple poles at x and y, and let be the subsheaf consisting of rational 1-forms with the sum of residues at x and y equal to zero. Then the direct image defines a dualizing sheaf for the nodal curve C. The construction can be easily generalized to nodal curves with multiple nodes. This is used in the construction of the Hodge bundle on the compactified moduli space of curves: it allows us to extend the relative canonical sheaf over the boundary which parametrizes nodal curves. The Hodge bundle is then defined as the direct image of a relative dualizing sheaf. Dualizing sheaf of projective schemes As mentioned above, the dualizing sheaf exists for all projective schemes. For X a closed subscheme of Pn of codimension r, its dualizing sheaf can be given as . In other words, one uses the dualizing sheaf on the ambient Pn to construct the dualizing sheaf on X. See also coherent duality reflexive sheaf Gorenstein ring Dualizing module Note References External links Relative dualizing sheaf (reference, behavior) Algebraic geometry
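The displayed formulas in this entry were stripped during extraction. As a hedged reconstruction using standard notation (not recovered from the source itself), the defining property of the dualizing sheaf can be written as follows.

```latex
% Standard defining property of the dualizing sheaf \omega_X on a proper
% n-dimensional scheme X over k, with trace map t (standard notation; the
% article's own formulas were lost in extraction).
\[
  t : H^n(X, \omega_X) \longrightarrow k,
  \qquad
  \operatorname{Hom}_{\mathcal{O}_X}(F, \omega_X)
    \;\xrightarrow{\ \sim\ }\; H^n(X, F)^{*},
  \quad
  \varphi \longmapsto t \circ H^n(\varphi),
\]
% and, for a normal projective variety,
\[
  \omega_X \;\simeq\; \mathcal{O}_X(K_X), \qquad K_X \text{ a canonical divisor.}
\]
```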
Dualizing sheaf
[ "Mathematics" ]
693
[ "Fields of abstract algebra", "Algebraic geometry" ]
36,780,732
https://en.wikipedia.org/wiki/Australian%20Institute%20of%20Petroleum
The Australian Institute of Petroleum (AIP) is a representative body for Australia's petroleum industry. It was established in 1976 and its headquarters are located in Canberra. The formation of the AIP aimed to foster industry self-regulation and facilitate productive communication among the oil industry, government, and the community. It served as a platform for promoting effective dialogue and cooperation. Among other entities, the AIP took over the role of the Petroleum Information Bureau, an organization that had been active in Australia since the early 1950s. The body is managed by a board composed of chief executives, senior representatives and an executive director. The organisation aims to support the development of a sustainable, internationally competitive petroleum products industry. The four core members are Ampol, BP, Mobil and Shell. Its members own and operate all oil refineries in Australia. The AIP owns the Australian Marine Oil Spill Centre. It is a member of the Australian Industry Greenhouse Network. The AIP produces a weekly fuel report detailing average prices for transport fuels by location. See also Energy in Australia Road transport in Australia References External links Business organisations based in Australia Organizations established in 1976 Petroleum industry in Australia Petroleum organizations 1976 establishments in Australia
Australian Institute of Petroleum
[ "Chemistry", "Engineering" ]
238
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
36,781,827
https://en.wikipedia.org/wiki/Materials%20Chemistry%20and%20Physics
Materials Chemistry and Physics (including Materials Science Communications) is a peer-reviewed scientific journal published 18 times per year by Elsevier. The focus of the journal is interrelationships among structure, properties, processing and performance of materials. It covers conventional and advanced materials. Publishing formats are short communications, full-length papers and feature articles. The editor-in-chief is Jenq-Gong Duh (National Tsing Hua University). According to the Journal Citation Reports, the journal has a 2022 impact factor of 4.6, ranking it 57th out of 423 in the category of Condensed Matter Physics. Abstracting and indexing This journal is abstracted and indexed by: References External links Materials science journals English-language journals Academic journals established in 1983 Elsevier academic journals
Materials Chemistry and Physics
[ "Materials_science", "Engineering" ]
159
[ "Materials science journals", "Materials science" ]
36,791,923
https://en.wikipedia.org/wiki/Gunpowder%20engine
A gunpowder engine, also known as an explosion engine or Huygens' engine, is a type of internal combustion engine using gunpowder as its fuel. The concept was first explored during the 1600s, most notably by the Dutch polymath Christiaan Huygens. George Cayley also experimented with the design in the early 1800s as an aircraft engine, and claimed to have made models that worked for a short time. There is also a persistent claim that a conventional carburetted gasoline engine can be run on gunpowder, but no examples of a successful conversion can be documented. Earliest mentions The gunpowder engine is based on many previous ideas and scientific discoveries, developed by multiple people independently. Early devices just aimed at lifting and/or holding weight (usually to study and demonstrate the physics), while engines aim at doing work continuously (usually with the intention of doing something useful). Vacuum devices to lift/hold weight Leonardo da Vinci described in 1508 a device to "lift heavy weight with fire" using a cannon barrel and gunpowder. Galileo Galilei conducted thorough experiments on lifting weight using a vacuum. Otto von Guericke researched vacuum practically, but used pumps to create the vacuum. Robert Hooke hid a phrase in his 1676 book Description of Helioscopes and Other Instruments that translates to "The 'vacuum' left by fire lifts a weight." Early engines The earliest references to a gunpowder engine appear to be those of Samuel Morland in 1661. This consists solely of a letter of patent written by King Charles the Second that was received at Whitehall on 11 December 1661. No other information about this "engine" remains, but the description involves the use of vacuum and powder to draw water. The next known reference is by Jean de Hautefeuille in 1678, suggested as a solution to the problem of raising water from the Seine to supply Versailles. He presented two ideas, one using the vacuum like Morland's idea, and a second that used a U-shaped tube with water in one side and air in the other. When the gunpowder was lit in the air-filled side, the rise in pressure would drive the water up the other side. Like early steam engine designs, these engines used the air or vacuum created by gunpowder to directly lift the water. There were no mechanical parts in the manner of modern engines, which translate the power in the gas pressure into any needed mechanical form. Huygens and Papin In 1671, Denis Papin was given a job at the Academy of the Royal Library in Paris, where he worked under the Curator of Experiments, Christiaan Huygens. Huygens set Papin to the task of carrying out a research effort on air and vacuum, at that time a matter of widespread international study. As part of the experiments, Papin measured the force of a small amount of gunpowder lit in small iron and copper vessels. Papin published an account of all of these experiments in 1674 in New experiments on the vacuum, with a description of the machines used for making them. Papin moved to London shortly after publication, and from then on was more involved in the development of steam. Although his developments pointed the way towards the early steam engine, Papin himself became more interested in the latent heat of steam and developed the "steam digester", the first pressure cooker. He also conceived of a number of devices using air pressure as a working fluid, including a series of fountains, pumps, and similar devices.
In spite of there being no further examples of practical work on the part of Papin, he did carry on a continued correspondence with Gottfried Wilhelm Leibniz on this and other topics. Leibniz tried to interest Papin in further development throughout, at one point noting "Yet I would well counsel [you], Monsieur, to undertake more considerable things which would force everyone to give their approbation and would truly change the state of things. The two items of binding together the pneumatic machine and gunpowder and applying the force of fire to vehicles would truly be of this nature". Papin replied that he had constructed a small model of a paddle-wheel boat, but the type of engine is not stated. Huygens' engine Huygens, however, became interested in the mechanical power of the vacuum, and the possibility of using gunpowder to produce one. In 1678 he outlined a gunpowder engine consisting of a vertical tube containing a piston. Gunpowder was inserted into the tube and lit through a small hole at the base, like a cannon. The expanding gases would drive the piston up the tube until it reached a point near the top. Here, the piston uncovered holes in the tube that allowed any remaining hot gases to escape. The weight of the piston and the vacuum formed by the cooling gases in the now-closed cylinder drew the piston back into the tube, lifting a test mass to provide power. According to sources, a single example of this sort of engine was built in 1678 or 1679 using a cannon as the cylinder. The cylinder was held down to a base where the gunpowder sat, making it a breech loading design. The gases escaped via two leather tubes attached at the top of the barrel. When the piston reached them the gases blew the tubes open, and when the pressure fell, gravity pulled the leather down, causing the tubes to droop against the side of the cylinder, sealing the holes. Huygens presented a paper on his invention in 1680, A New Motive Power by Means of Gunpowder and Air. By 1682, the device had successfully shown that a dram (1/16th of an ounce) of gunpowder, in a cylinder seven or eight feet high and fifteen or eighteen inches in diameter, could raise seven or eight boys (about 1,100 pounds), who held the end of the rope, into the air. However, there is considerable debate in modern sources as to whether or not the engine could have been built. Sealing the piston within the cylinder proved to be a very difficult problem in modern recreations. From that point, few mentions of early gunpowder engines are found. The use of steam, especially after the introduction of the atmospheric engine in 1712, captured all further development effort. Cayley As part of his investigations of powered flight, George Cayley was concerned about the low power-to-weight ratio of steam engines, complaining that "the steam engine has hitherto proved too weighty and cumbrous for most purposes of locomotion." He took up development of a new engine design starting in 1807, and quickly settled on a gunpowder engine as the preferred solution, noting "Being in want of a simple & light first mover on a small scale for the purpose of some preparatory experiments on aerial navigation, I constructed one in which the force of gunpowder & the heat evolved by its explosion, acting upon a quantity of common air, was employed." His notebooks show a design of considerable improvement over those of Huygens and similar.
In Cayley's design, two cylinders were arranged one over the other, the lower acting as a combustion chamber, and the upper containing a piston. A small charge of gunpowder was introduced into the bottom of the lower cylinder and lit by a hot rod heated by candles. The expanded gases pushed the piston up, and this energy was captured in a large bow, in effect drawing the bow back as if readying to fire an arrow. The bowstring pushed the piston rod back down as the gases escaped and cooled, completing the cycle. In a later version, Cayley attempted to solve the problem of continual cycling. In this version, the combustion chamber was removed to a separate cylinder placed to the side of the power cylinder. Gunpowder was stored in the upper portion of this chamber, and small amounts were metered out to fall into the combustion area below. The hot gases were then piped out of the combustion area into the power cylinder. This consisted of two pistons on a common piston rod, with the gases flowing into alternate sides of the cylinder to form a double-acting engine. In a letter, Cayley stated that he had constructed one of these designs (although which one is not mentioned), but also stated that it did not work very well. Over time he designed several flying machines using the engine, but no larger working model appears to have been attempted. Paine and others Thomas Paine introduced an entirely new type of engine design, one that bore more resemblance to a water wheel than a conventional engine. In Paine's engine, a series of cup-like combustion chambers were arranged around a wheel. As the wheel turned, each cup received a small amount of gunpowder from a central container and was then lit. The literature contains numerous other mentions of gunpowder engines, but it does not appear any were used operationally. In modern engines The idea that a conventional gasoline engine can be run on gunpowder is a persistent topic of discussion. It was taken up by MythBusters on Episode 63, and after a number of attempts it was considered "busted". References Citations Bibliography Internal combustion engine
Gunpowder engine
[ "Technology", "Engineering" ]
1,844
[ "Internal combustion engine", "Combustion engineering", "Engines" ]
57,276,769
https://en.wikipedia.org/wiki/Physiology%20of%20marathons
The physiology of marathons is typically associated with high demands on a marathon runner's cardiovascular system and their locomotor system. The marathon was conceived centuries ago and has recently been gaining popularity among many populations around the world. The 42.195 km (26.2 mile) distance is a physical challenge that entails distinct features of an individual's energy metabolism. Marathon runners finish at different times because of individual physiological characteristics. The interaction between different energy systems captures the essence of why certain physiological characteristics of marathon runners exist. The differing efficiency of certain physiological features in marathon runners helps explain the variety of finishing times among elite marathon runners who share similarities in many physiological characteristics. Aside from large aerobic capacities and other biochemical mechanisms, external factors such as the environment and proper nourishment of a marathon runner can further explain why marathon performance is variable despite ideal physiological characteristics obtained by a runner. History The first marathon was perhaps a 25 mile run by Pheidippides, a Greek soldier who ran to Athens from the town of Marathon, Greece to deliver news of a battle victory over the Persians in 490 B.C. According to this account, he dropped dead of exhaustion shortly after arriving in Athens. Thousands of years later, marathon running became part of world sports, starting with the inaugural marathon at the 1896 modern Olympic Games. After around 40 years of various distances, the 42.195 kilometer (26.2 mile) trek became standard. The number of marathons in the United States has grown over 45 times in this period. With an increase in popularity, the scientific field has a large basis to analyze some of the physiological characteristics and the factors influencing these traits that led to Pheidippides's death. The high physical and biochemical demands of marathon running and variation across finishing times make for an intricate field of study that entangles multiple facets of human capacities. Energy pathways during exercise Humans metabolize food to transfer potential energy from food to adenosine triphosphate (ATP). This molecule provides the human body's instantly accessible form of energy for all functions of cells within the body. During exercise, the human body places a high demand on ATP production to supply itself with enough energy to support all the corresponding changes in the body at work. The 3 energy systems involved in exercise are the Phosphogenic, Anaerobic and Aerobic energy pathways. The simultaneous action of these three energy pathways prioritizes one specific pathway over the others depending on the type of exercise an individual is partaking in. This differential prioritization is based on the duration and intensity of the particular exercise. Variable use of these energy pathways is central to the mechanisms that support long, sustained exercise—such as running a marathon. Phosphogenic The phosphogenic (ATP-PC) anaerobic energy pathway restores ATP after its breakdown via creatine phosphate stored in skeletal muscle. This pathway is anaerobic because it does not require oxygen to synthesize or use ATP. ATP restoration only lasts for approximately the first 30 seconds of exercise. This rapid rate of ATP production is essential at the onset of exercise.
The amount of creatine phosphate and ATP stored in the muscle is small and readily available, and it is used quickly as a result. Weight lifting or running sprints are examples of exercises that use this energy pathway. Anaerobic The anaerobic glycolytic energy pathway is the source of human energy after the first 30 seconds of an exercise until 3 minutes into that exercise. The first 30 seconds of exercise are most heavily reliant on the phosphogenic pathway for energy production. Through glycolysis, the breakdown of carbohydrates from blood glucose or muscle glycogen stores yields ATP for the body without the need for oxygen. This energy pathway is often thought of as the transitional pathway between the phosphogenic energy pathway and the aerobic energy pathway due to the point in exercise at which this pathway onsets and terminates. A 300-800 meter run is an example of an exercise that uses this pathway—as it is typically higher intensity than endurance exercise, and only sustained for 30–180 seconds, depending on training. Aerobic (Oxidative) The aerobic energy pathway is the third and slowest ATP producing pathway and is oxygen dependent. This energy pathway typically supplies the bulk of the body's energy during exercise—after three minutes from the onset of exercise until the end, or when the individual experiences fatigue. The body uses this energy pathway for lower intensity exercise that lasts longer than three minutes, which corresponds to the rate at which the body produces ATP using oxygen. This energy system is essential to endurance athletes such as marathon runners, triathletes, cross-country skiers, etc. The Aerobic Energy Pathway is able to produce the largest amount of ATP of these three systems. This is largely because of this energy system's ability to convert fats, carbohydrates, and protein into a state that can enter the mitochondria, the site of aerobic ATP production. Physiological characteristics of marathon runners Aerobic capacity (VO2Max) Marathon runners typically have above-average aerobic capacities, oftentimes up to 50% larger than those of normally active individuals. Aerobic capacity or VO2Max is an individual's ability to maximally take up and consume oxygen in all bodily tissue during exhaustive exercise. Aerobic capacity serves as a good measure of exercise intensity as it is the upper limit of one's physical performance. An individual cannot perform any exercise at 100% VO2Max for extended periods of time. The marathon is generally run at about 70-90% of VO2Max and the fractional use of one's aerobic capacity serves as a key component of marathon performance. The physiological mechanisms that make up aerobic capacity, or VO2Max, are the transport and distribution of blood and the use of this oxygen within muscle cells. VO2Max is one of the most salient indicators of endurance exercise performance. The VO2Max of an elite runner at maximal exercise is almost two times the value of a fit or trained adult at maximal exercise. Marathon runners demonstrate physiological characteristics that enable them to deal with the high demands of a 26.2 mile (42.195 km) run. Components of aerobic capacity The primary components of an individual's VO2Max are the properties of aerobic capacity that influence the fractional use (%VO2Max) of this ability to take up and consume oxygen during exhaustive exercise. The transportation of large amounts of blood to and from the lungs to reach all bodily tissues depends on a high cardiac output and sufficient levels of total body hemoglobin.
Hemoglobin is the oxygen carrying protein within blood cells that transports oxygen from the lungs to other bodily tissues via the circulatory system. For effective transportation of oxygen in blood during a marathon, distribution of blood must be efficient. The mechanism that allows for this distribution of oxygen to the muscle cells is muscle blood flow. A 20 fold increase of local blood flow within skeletal muscle is necessary for endurance athletes, like marathon runners, to meet their muscles' oxygen demands at maximal exercise, which are up to 50 times greater than at rest. Once oxygen has been successfully transported and distributed in the blood, its extraction and use within skeletal muscle are what give rise to a marathoner's increased aerobic capacity and to the overall improvement of an individual's marathon performance. Extraction of oxygen from the blood is performed by myoglobin within the skeletal muscle cells that accept and store oxygen. These components of aerobic capacity help define the maximal uptake and consumption of oxygen in bodily tissues during exhaustive exercise. Limitations to aerobic capacity (VO2Max) Cardiac Marathon runners often present enlarged dimensions of the heart and decreased resting heart rates that enable them to achieve greater aerobic capacities. Although these morphological and functional changes in a marathon runner's heart aid in maximizing their aerobic capacity, these factors are also what set the limit for an individual to maximally take up and consume oxygen in their bodily tissues during endurance exercise. Increased dimensions of the heart enable an individual to achieve a greater stroke volume. A concomitant decrease in stroke volume occurs with the initial increase in heart rate at the onset of exercise. The highest heart rate an individual can achieve is limited and decreases with age (Estimated Maximum Heart Rate = 220 - age in years). Despite an increase in cardiac dimensions, a marathoner's aerobic capacity is confined to this capped and ever-decreasing heart rate. An athlete's aerobic capacity cannot increase indefinitely because, at maximum heart rate, the heart can only pump a specific volume of blood. Oxygen carrying capacity An individual running a marathon experiences a redistribution of blood toward the skeletal muscles. This distribution of blood maximizes oxygen extraction by the skeletal muscles to aerobically produce as much ATP as is needed to meet demand. To achieve this, blood volume increases. The initial increase in blood volume during marathon running can later lead to decreased blood volume as a result of increased core body temperature, pH changes in skeletal muscles, and the increased dehydration associated with cooling during such exercise. Oxygen affinity of the blood depends on blood plasma volume and an overall decrease in blood volume. Dehydration, temperature and pH differences between the lungs and the muscle capillaries can limit one's ability to fractionally use their aerobic capacity (%VO2Max). Secondary limitations Other limitations affecting a marathon runner's VO2Max include pulmonary diffusion, mitochondrial enzyme activity, and capillary density. These features of a marathon runner can be enlarged compared to those of an untrained individual but have upper limits determined by the body. Improved mitochondrial enzyme activity and increased capillary density likely accommodate more aerobically produced ATP. These increases only occur to a certain point and help to determine peak aerobic capacity.
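As a quick aside, the age-predicted maximum heart rate formula quoted above (Estimated Maximum Heart Rate = 220 − age) can be evaluated directly; the short sketch below uses made-up ages purely for illustration and the formula itself is only an estimate.

```python
# Illustration only: the age-predicted maximum heart rate formula quoted in
# the text above. The ages are made-up inputs; the formula is an estimate.
def estimated_max_hr(age_years: float) -> float:
    """Estimated maximum heart rate in beats per minute: 220 - age."""
    return 220 - age_years

for age in (25, 40, 55):
    print(f"age {age}: estimated max HR = {estimated_max_hr(age):.0f} bpm")
```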
In fit individuals especially, pulmonary diffusion correlates strongly with VO2Max and can become a limitation, leaving these individuals unable to efficiently saturate hemoglobin with oxygen due to their large cardiac output. This insufficient oxygen saturation, often seen in well-trained athletes such as marathoners, can be attributed to the shorter transit time that results from larger amounts of blood being pumped per unit time. Not all inspired air and its components make it into the pulmonary system due to the human body's anatomical dead space, which, in terms of exercise, is a source of wasted oxygen. Running economy Although it is one of the most salient predictors, a large VO2Max is only one of the factors that may affect marathon performance. A marathoner's running economy is their submaximal requirement for oxygen at specific speeds. This concept of running economy helps explain different marathon times for runners with similar aerobic capacities. The steady state oxygen consumption used to define running economy demonstrates the energy cost of running at submaximal speeds. This is often measured by the volume of oxygen consumed, either in liters or milliliters, per kilogram of body weight per minute (L/kg/min or mL/kg/min). Discrepancies in the winning times of various marathon runners with almost identical VO2Max and %VO2Max values can be explained by different levels of oxygen consumption per minute at the same speeds. This explains, for example, how Jim McDonagh has run faster winning marathon times than Ted Corbitt despite their similar aerobic capacities. Corbitt's greater submaximal oxygen requirement (3.3 L of oxygen per minute versus 3.0 L of oxygen per minute for McDonagh) is positively correlated with a greater level of energy expenditure while running at the same speed. Running economy (efficiency) can be credited with being an important factor in elite marathon performance, as energy expenditure is only weakly correlated with an increase in a runner's mean velocity. A disparity in running economy determined differences in marathon performance, and the efficiency of these runners exemplifies the marginal differences in total energy expenditure when running at greater velocities than recreational athletes. Lactate threshold A marathon runner's velocity at lactate threshold is strongly correlated to their performance. Lactate threshold or anaerobic threshold is considered a good indicator of the body's ability to efficiently process and transfer chemical energy into mechanical energy. A marathon is considered an aerobic dominant exercise, but higher intensities associated with elite performance use a larger percentage of anaerobic energy. The lactate threshold is the cross over point between predominantly aerobic energy usage and anaerobic energy usage. This cross over is associated with the anaerobic energy system's inability to efficiently produce energy, leading to the buildup of blood lactate often associated with muscle fatigue. In endurance trained athletes, the increase in blood lactate concentration appears at about 75%-90% of VO2Max, which directly corresponds to the fraction of VO2Max at which marathons are run. With such a high intensity sustained for over two hours, a marathon runner's performance requires more energy production than that solely supplied by mitochondrial activity. This causes a higher anaerobic to aerobic energy ratio during a marathon.
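To make the per-kilogram running economy convention concrete, the sketch below converts the absolute oxygen uptakes quoted above (3.3 L/min for Corbitt versus 3.0 L/min for McDonagh at the same speed) into mL/kg/min. The body masses used are assumed placeholders, not figures from the source.

```python
# Sketch of the running-economy comparison described above. The VO2 figures
# (3.3 vs 3.0 L/min) come from the text; the body masses are assumed
# placeholders purely to illustrate the mL/kg/min convention.
def running_economy_ml_kg_min(vo2_l_per_min: float, body_mass_kg: float) -> float:
    """Oxygen cost per kilogram of body mass per minute."""
    return vo2_l_per_min * 1000 / body_mass_kg

runners = {"Corbitt": (3.3, 60.0), "McDonagh": (3.0, 60.0)}  # (L/min, assumed kg)
for name, (vo2, mass) in runners.items():
    economy = running_economy_ml_kg_min(vo2, mass)
    print(f"{name}: {economy:.1f} mL/kg/min at the same running speed")
```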
The higher the velocity and fractional use of aerobic capacity an individual has at their lactate threshold, the better their overall performance. Uncertainty exists about how lactate threshold affects endurance performance. The accumulation of blood lactate is attributed to potential skeletal muscle hypoxemia, but also to the production of more glucose that can be used as energy. The inability to establish a singular set of physiological contributions to blood lactate accumulation's effect on the exercising individual creates a correlative role for lactate threshold in marathon performance as opposed to a causal role. Alternative factors contributing to marathon performance Fuel To sustain high intensity running, a marathon runner must obtain sufficient glycogen stores. Glycogen can be found in the skeletal muscles or liver. With low levels of glycogen stores at the onset of the marathon, premature depletion of these stores can reduce performance or even prevent completion of the race. ATP production via aerobic pathways can further be limited by glycogen depletion. Free fatty acids serve as a sparing mechanism for glycogen stores. The artificial elevation of these fatty acids, along with endurance training, demonstrates a marathon runner's ability to sustain higher intensities for longer periods of time. The prolonged sustenance of running intensity is attributed to a high turnover rate of fatty acids that allows the runner to preserve glycogen stores later into the race. Some suggest that ingesting monosaccharides at low concentrations during the race could delay glycogen depletion. This lower concentration, as opposed to a high concentration of monosaccharides, is proposed as a means to maintain more efficient gastric emptying and faster intestinal uptake of this energy source. Carbohydrates may be the most efficient source of energy for ATP. Pasta parties and the consumption of carbohydrates in the days leading up to a marathon are common practice among marathon runners at all levels. Thermo-regulation and body fluid loss Maintaining internal core body temperature is crucial to a marathon runner's performance and health. An inability to reduce rising core body temperature can lead to hyperthermia. To reduce body heat, the body must remove metabolically produced heat by sweating (also known as evaporative cooling). Heat dissipation by sweat evaporation can lead to significant bodily water loss. A marathon runner can lose water amounting to about 8% of body weight. Fluid replacement is limited, but can help keep internal temperatures cooler. Fluid replacement is physiologically challenging during exercise of this intensity due to the inefficient emptying of the stomach. Partial fluid replacement can help keep a marathon runner's body from overheating, but it is not enough to keep pace with the loss of fluid via sweat evaporation. Environmental factors Environmental factors such as air resistance, rain, terrain, and heat contribute to a marathon runner's ability to perform at their full physiological ability. Air resistance or wind, and the marathon course terrain (hilly or flat), are factors. Rain can affect performance by adding weight to the runner's attire. Temperature, in particular heat, is the strongest environmental impediment to marathon performance. An increase in air temperature affects all runners equally. This negative effect of increased temperature on race performance is associated with marathon runners' hospitalizations and exercise-induced hyperthermia.
There are other factors less directly associated with marathon performance, such as pollutants in the air and even the prize money associated with a specific marathon. References Human physiology Sports medicine Marathons
Physiology of marathons
[ "Chemistry", "Biology" ]
3,404
[ "Biochemistry", "Exercise biochemistry" ]
52,566,024
https://en.wikipedia.org/wiki/Retinol-binding%20protein
Retinol-binding proteins (RBP) are a family of proteins with diverse functions. They are carrier proteins that bind retinol. Assessment of retinol-binding protein is used to determine visceral protein mass in health-related nutritional studies. Retinol and retinoic acid play crucial roles in the modulation of gene expression and overall development of an embryo. However, a deficit or excess of either one of these substances can cause early embryo mortality or developmental malformations. Regulation of the transport and metabolism of retinol necessary for a successful pregnancy is accomplished via RBP. Retinol-binding proteins have been identified within the uterus, embryo, and extraembryonic tissue of the bovine, ovine, and porcine, clearly indicating that RBP plays a role in proper retinol exposure to the embryo and successful transport at the maternal-fetal interface. Further research is necessary to determine the exact effects of poor RBP expression on pregnancy and threshold levels for said expression. Genes Cellular: RBP1, RBP2, RBP5, RBP7 Interstitial: RBP3 Plasma: RBP4 RBP in pregnancy Retinol plays a crucial role in the growth and differentiation of various body tissues, and it has been previously characterized that embryos are extremely sensitive to alterations in retinol concentration, which can lead to spontaneous abortion and malformations occurring during development. Within a mature animal, retinol is transported from the liver to the desired target tissue via the circulatory system while bound to RBP. RBP is also bound to a carrier protein, transthyretin. The process by which RBP releases retinol for cellular availability is still unknown and has not been conclusively determined. Sites of synthesis Traditionally, RBP is synthesized within the liver, with secretion being dependent upon retinol concentrations. However, the concentration levels do not appear to have an effect upon transcription of RBP messenger RNA (mRNA), which remains constant. The literature reveals that the bovine endometrium has also been identified as a location of RBP synthesis, as well as the conceptus and extraembryonic tissues of various livestock species. Types Plasma retinol-binding protein, the retinol transport vehicle in serum. CRBP I/II, cellular retinol-binding proteins involved in transport of retinol and metabolites into retinyl esters for storage or into retinoic acid. CRABPs, cellular retinoic acid–binding proteins capable of binding retinol and retinoic acid with high affinity. It has also been characterized that CRABPs are involved in many aspects of the retinoic acid signaling pathway such as the regulation and availability of retinoic acid to nuclear receptors. Presence in livestock species during gestation Bovine/Ovine RBP, identical to that found in plasma, has been identified in the placental tissues of both the ovine and the bovine, suggesting that RBP may be highly involved in retinol transport and metabolism during pregnancy. However, the exact timing of expression had yet to be identified. An antiserum specific for bovine conceptus RBP and immunohistochemistry have been utilized to identify the presence of RBP at different stages of early pregnancy. Strong immunostaining and hybridization were observed in the trophectoderm of tubular, but not spherical, blastocysts at day 13. RBP mRNA was localized to epithelial cells of the chorion, allantois, and amnion at day 45 of pregnancy.
Lastly, RBP mRNA was detected in the cotyledons, the fetal contribution to the placenta and the site of attachment to the uterine epithelium for fetal/maternal exchange. Expression of RBP in developing conceptuses, extraembryonic membranes, and at the fetal-maternal interface indicates that retinol transport and metabolism may be regulated by the extraembryonic membranes through RBP. Within the uterus of pregnant bovines, it has been found that RBP synthesis in the luminal and glandular epithelium is quite similar to that of a cyclic animal; however, upon reaching day 17 of the estrous cycle, levels of RBP remain constant and then gradually rise throughout gestation. It has also been suggested that ovarian steroids may play a role in regulating uterine RBP expression. Porcine All three previously mentioned types of retinol-binding proteins (RBP, CRBP, CRABP) have been identified within the porcine placenta during pregnancy via immunohistochemistry. As previously mentioned, retinol and retinoic acid are modulators of gene expression and are necessary for the proper development and growth of a conceptus. Pigs exhibit a diffuse-type placenta with areolar-gland subunits, which allows for the transport of larger molecules between dam and fetus. RBP and CRBP have been identified in the endometrial glands and areolar trophoblasts, suggesting that RBP is crucial in the transport of retinol from the gland to the trophectoderm of the conceptus. RBP expression has also been identified within the yolk sac, myometrium, oviduct, and numerous other fetal tissues. See also STRA6 (Vitamin A receptor) References Further reading External links Proteins Peripheral membrane proteins
Retinol-binding protein
[ "Chemistry" ]
1,120
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
52,570,425
https://en.wikipedia.org/wiki/Zeitschrift%20f%C3%BCr%20Kristallographie%20%E2%80%93%20Crystalline%20Materials
Zeitschrift für Kristallographie – Crystalline Materials is a monthly peer-reviewed scientific journal published in English. The journal publishes theoretical and experimental studies in crystallography of both organic and inorganic substances. The editor-in-chief of the journal is from the University of Münster. The journal was founded in 1877 under the title Zeitschrift für Krystallographie und Mineralogie by crystallographer and mineralogist Paul Heinrich von Groth, who served as the editor for 44 years. It has used several titles over its history, with the present title having been adopted in 2010. The journal is indexed in a variety of databases and has a 2020 impact factor of 1.616. History The journal was established in 1877 by Paul von Groth as a German-language publication under the title Zeitschrift für Krystallographie und Mineralogie, and he served as its editor until the end of 1920. Groth was appointed as the inaugural Professor of Mineralogy at the University of Strasbourg in 1872 and made great contributions to the disciplines of mineralogy and crystallography both there and, from 1883, as the curator at the Deutsches Museum in Munich. Groth was the first to classify minerals according to their chemical composition and contributed to the understanding of isomorphism and morphotropy in crystalline systems. Using the data from 55 volumes of the journal covering 39 years of publications (1877–1915) plus other sources, Groth produced the five volume work Chemische Krystallographie between 1906 and 1919. This work catalogued the chemical and physical properties of the between 9,000 and 10,000 crystalline substances known at the time. It has used a series of names over its history (see table below), finally becoming Zeitschrift für Kristallographie – Crystalline Materials in 2010, a name distinguishing it from the 1987 spin-off journal Zeitschrift für Kristallographie – New Crystal Structures. Special issues Beginning in December 2002, the journal has produced special issues with articles grouped around a single theme. Topics covered include the analysis of complex materials using pair distribution function methods, borates (double issue), hydrogen storage, in situ crystallisation, mathematical crystallography, mineral structures, nanocrystallography, phononic crystals, photocrystallography, the application of precession electron diffraction methods, twinned crystals, and zeolites (double issue). On four occasions, one or two issues of the journal have been dedicated to the memory of a crystallographer or mineralogist, usually with a theme associated with the individual's work and a description of their contribution to the field. These are summarised in the table below: Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Current Contents/Physical, Chemical and Earth Sciences EBSCO databases Inspec Science Citation Index Expanded Scopus According to the Journal Citation Reports, the journal has a 2015 impact factor of 2.560, and it is ranked 8th amongst the 26 crystallography journals. References External links Chemistry journals Crystallography journals Monthly journals Publications established in 1877 Multilingual journals De Gruyter academic journals 1877 establishments in Germany
Zeitschrift für Kristallographie – Crystalline Materials
[ "Chemistry", "Materials_science" ]
666
[ "Crystallography journals", "Crystallography" ]
52,570,518
https://en.wikipedia.org/wiki/Zeitschrift%20f%C3%BCr%20Kristallographie%20%E2%80%93%20New%20Crystal%20Structures
Zeitschrift für Kristallographie – New Crystal Structures is a bimonthly peer-reviewed scientific journal published in English. Its first issue was published in December 1997 and bore the subtitle "International journal for structural, physical, and chemical aspects of crystalline materials." Created as a spin-off of Zeitschrift für Kristallographie for reporting novel and refined crystal structures, it began at volume 212 in order to remain aligned with the numbering of the parent journal. Paul von Groth, Professor of Mineralogy at the University of Strasbourg, established Zeitschrift für Krystallographie und Mineralogie in 1877; after several name changes, the journal adopted its present name, Zeitschrift für Kristallographie – Crystalline Materials, in 2010. The inaugural editors-in-chief were Hans Georg von Schnering of the Max Planck Institute for Solid State Research in Stuttgart and Heinz Hermann Schulz of the Ludwig-Maximilians-Universität München. In 2016, the editor-in-chief was Hubert Huppertz (Universität Innsbruck). In recent years, the journal has sharpened its profile as a journal providing new crystal structure determinations (and redeterminations) together with a short description of the source of the material and the most important features of each structure. Editorial Board Christian Hübschle, Bayreuth University, Germany; Oliver Janka, Münster University, Germany; Andreas Lemmerer, Johannesburg University, South Africa; Guido J. Reiss, Düsseldorf University, Germany; Edward R. T. Tiekink, Sunway University, Malaysia The journal is indexed in various databases and, according to the Journal Citation Reports, had a 2020 impact factor of 0.451. Abstracting and indexing The journal is abstracted and indexed in: Chemical Abstracts Service Current Contents/Physical, Chemical and Earth Sciences EBSCO databases Inspec Science Citation Index Expanded Scopus publons Web of Science - Current Contents/Physical, Chemical and Earth Science Reaxys References External links Chemistry journals Crystallography journals Quarterly journals Academic journals established in 1987 English-language journals De Gruyter academic journals
Zeitschrift für Kristallographie – New Crystal Structures
[ "Chemistry", "Materials_science" ]
446
[ "Crystallography journals", "Crystallography" ]
40,971,137
https://en.wikipedia.org/wiki/NIRSpec
The NIRSpec (Near-Infrared Spectrograph) is one of the four scientific instruments flown on the James Webb Space Telescope (JWST). The JWST is the follow-on mission to the Hubble Space Telescope (HST) and was developed to gather more information about the origins of the universe by observing infrared light from the first stars and galaxies. In comparison to HST, its instruments allow it to look further back in time and to study the so-called Dark Ages during which the universe was opaque, about 150 to 800 million years after the Big Bang. The NIRSpec instrument is a multi-object spectrograph and is capable of simultaneously measuring the near-infrared spectra of up to 100 objects such as stars or galaxies at low, medium and high spectral resolutions. The observations are performed in a 3 arcmin × 3 arcmin field of view over the wavelength range from 0.6 μm to 5.0 μm. It also features a set of slits and an aperture for high contrast spectroscopy of individual sources, as well as an integral-field unit (IFU) for 3D spectroscopy. The instrument is a contribution of the European Space Agency (ESA) and was built by Astrium together with a group of European subcontractors. Overview The James Webb Space Telescope's main science themes are: first light and reionization, the assembly of galaxies, the birth of stars and protoplanetary systems, and the birth of planetary systems and the origins of life. The NIRSpec instrument operates at −235 °C and is passively cooled by cold space radiators which are mounted on the JWST Integrated Science Instrument Module (ISIM). The radiators are connected to NIRSpec using thermally conductive heat straps. The mirror mounts and the optical bench base plate are all manufactured out of the silicon carbide ceramic SiC100. The instrument size is approximately and weighs including 100 kg of silicon carbide. The operation of the instrument is performed with three electronic boxes. NIRSpec includes 4 mechanisms which are: the Filter Wheel Assembly (FWA) – 8 positions, carrying 4 long pass filters for science, 2 broadband filters for target acquisition, one closed and one open position the Refocus Mechanism Assembly (RMA) – carrying 2 mirrors for instrument refocusing the Micro Shutter Assembly (MSA) – for multi-object spectroscopy but also carrying the fixed slits and IFU aperture the Grating Wheel Assembly (GWA) – 8 positions, carrying 6 gratings and one prism for science and one mirror for target acquisition Furthermore, NIRSpec includes two electro-optical assemblies: Calibration Assembly (CAA) – carrying 11 illumination sources and an integrating sphere; for instrument internal spectral and flat-field calibration Focal Plane Assembly (FPA) – includes the focal plane which consists of 2 sensor chip assemblies And finally the Integral Field Unit (IFU) image slicer, used in the instrument IFU mode.
The optical path is represented by the following silicon carbide mirror assemblies: the Coupling Optics Assembly – which couples the light from the JWST telescope into NIRSpec the Fore Optics TMA (FOR) – which provides the intermediate focal plane for the MSA the Collimator Optics TMA (COL) – collimating the light onto the Grating Wheel dispersive element the Camera Optics TMA (CAM) – which finally images the spectra on the detector Science objectives The end of the Dark Ages – first light and re-ionization: Near-infrared spectroscopy (NIRS) at spectral resolutions around 100 and 1000 for studying the first light sources (stars, galaxies and active nuclei) that mark the beginning of the phase of re-ionization of the Universe, which is believed to take place between redshifts 15–14 and 6. The assembly of galaxies: Near-infrared multi-object spectroscopic observations (redshift range typically from 1 to 7) at spectral resolutions around 1000 of a large number of galaxies, and spatially-resolved NIRS at spectral resolutions around 1000 and 3000 in order to conduct detailed studies of a smaller number of objects. The birth of stars and planetary systems: Near-infrared high-contrast slit spectroscopy at spectral resolutions ranging from 100 to several thousand in order to gain a more complete view of the formation and evolution of the stars and their planetary systems. Planetary systems and the origin of life: In order to observe various components of the Solar System (from planets and satellites to comets and Kuiper belt objects) as well as of extra-solar planetary systems, high-contrast and spatially resolved NIRS at medium to high spectral resolution, while maintaining high relative spectro-photometric stability, is required. Operational modes In order to achieve the scientific objectives, NIRSpec has four operational modes: Multi-Object Spectroscopy (MOS) In MOS the total instrument field of view of 3 × 3 arcminutes is covered using 4 arrays of programmable slit masks. These programmable slit masks consist of 250 000 micro shutters, each of which can individually be programmed to be 'open' or 'closed'. The contrast between an 'open' and a 'closed' shutter is better than 1:2000. If an object such as a galaxy is placed in an 'open' shutter, the spectrum of the light emitted by the object can be dispersed and imaged onto the detector plane. In this mode up to 100 objects can be observed simultaneously and their spectra measured. Integral Field Unit Mode (IFU) The integral field spectrometry will primarily be used for large, extended objects like galaxies. In this mode a 3 × 3 arcsecond field of view is sliced into 0.1 arcsecond bands which are thereafter re-arranged into a long slit. This allows spatially resolved spectra of extended scenes to be obtained and can be used to measure the speed and direction of motion within an extended object. Since spectra measured in the IFU mode would overlap with spectra of the MOS mode, the two modes cannot be used in parallel. High-Contrast Slit Spectroscopy (SLIT) A set of 5 fixed slits is available in order to perform high contrast spectroscopic observations, which are required, for example, for spectroscopic observations of transiting extra-solar planets. Of the five fixed slits, three are 0.2 arcseconds wide, one is 0.4 arcseconds wide and one is a square aperture of 1.6 arcseconds. The SLIT mode can be used simultaneously with the MOS or IFU modes. Imaging Mode (IMA) The imaging mode is used for target acquisition only.
In this mode no dispersive element is placed in the optical path and any objects are directly imaged on the detector. Since the microshutter array, which sits in an intermediate focal plane of the instrument, is imaged in parallel, it is possible to orient the JWST observatory such that the objects to be observed fall directly into the center of open shutters (MOS mode), the IFU aperture (IFU mode) or the slits (SLIT mode). Performance parameters The NIRSpec key performance parameters are: . Industrial partners NIRSpec was built by Astrium Germany with subcontractors and partners spread across Europe, and with the contribution of NASA from the US, which provided the Detector Subsystem and the Micro-shutter Assembly. The individual subcontractors and their corresponding contributions were: APCO Technologies SA – Mechanical Ground Support Equipment and Kinematic Mounts Astrium CASA Espacio – Optical Instrument Harness Astrium CRISA – Instrument Control Electronic and Software Astrium SAS – Silicon Carbide (SiC) Engineering Support (AIP) – Instrument Quick Look, Analysis and Calibration Software Contribution Boostec – SiC Mirrors and Structures Manufacturing Cassidian Optronics: Filter Wheel Assembly Grating Wheel Assembly (CRAL) – Instrument Performance Simulator European Space Agency (ESA) – NIRSpec Customer Iberespacio – Optical Assembly Cover (IABG) – Instrument Test Facilities Mullard Space Science Laboratory (MSSL): Calibration Assembly Optical Ground Support Equipment (Shack-Hartmann Sensor, Calibration Light Source) National Aeronautics and Space Administration (NASA) – Customer Furnished Items: Detector Subsystem Microshutter Subsystem Sagem – Mirror Polishing and Mirror Assembly, Integration and Testing Selex Galileo – Refocus Mechanism Surrey Satellite Technology Ltd (SSTL) – Integral Field Unit Terma – Electrical Ground Support Equipment (Data Handling System) Images Multi-Object Spectroscopy (MOS) Integral Field Unit See also Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph MIRI (Mid-Infrared Instrument) (James Webb Space Telescope's 5–28 micron camera/spectrograph) NIRCam (NIR camera for JWST up to 5 micron wavelength light) Integrated Science Instrument Module (ISIM, houses NIRSpec and the other JWST instruments) References External links James Webb Space Telescope on NASA James Webb Space Telescope on ESA NIRSpec on YouTube James Webb Space Telescope instruments Infrared telescopes Spectrographs
NIRSpec
[ "Physics", "Chemistry" ]
1,877
[ "Spectrographs", "Spectroscopy", "Spectrum (physical sciences)" ]
40,971,412
https://en.wikipedia.org/wiki/C23H28N2O4
{{DISPLAYTITLE:C23H28N2O4}} The molecular formula C23H28N2O4 may refer to: Pacrinolol, a beta adrenergic receptor antagonist Pleiocarpine, an anticholinergic alkaloid Molecular formulas
C23H28N2O4
[ "Physics", "Chemistry" ]
62
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
40,973,765
https://en.wikipedia.org/wiki/Bayesian%20optimization
Bayesian optimization is a sequential design strategy for global optimization of black-box functions that does not assume any functional form. It is usually employed to optimize expensive-to-evaluate functions. With the rise of artificial intelligence innovation in the 21st century, Bayesian optimization has found prominent use in machine learning problems for optimizing hyperparameter values. History The term is generally attributed to Jonas Mockus and was coined in his work in a series of publications on global optimization in the 1970s and 1980s. Strategy Bayesian optimization is typically used on problems of the form $\max_{x \in A} f(x)$, where $A$ is a set of points, $x$, which rely upon less than (or equal to) 20 dimensions ($d \le 20$), and whose membership can easily be evaluated. Bayesian optimization is particularly advantageous for problems where $f(x)$ is difficult to evaluate due to its computational cost. The objective function, $f$, is continuous and takes the form of some unknown structure, referred to as a "black box". Upon its evaluation, only $f(x)$ is observed and its derivatives are not evaluated. Since the objective function is unknown, the Bayesian strategy is to treat it as a random function and place a prior over it. The prior captures beliefs about the behavior of the function. After gathering the function evaluations, which are treated as data, the prior is updated to form the posterior distribution over the objective function. The posterior distribution, in turn, is used to construct an acquisition function (often also referred to as the infill sampling criterion) that determines the next query point. There are several methods used to define the prior/posterior distribution over the objective function. The two most common methods use Gaussian processes in a method called kriging. Another, less expensive method uses the Tree-structured Parzen Estimator to construct two distributions for 'high' and 'low' points, and then finds the location that maximizes the expected improvement. Standard Bayesian optimization relies upon each $x \in A$ being easy to evaluate, and problems that deviate from this assumption are known as exotic Bayesian optimization problems. Optimization problems can become exotic if it is known that there is noise, if the evaluations are being done in parallel, if the quality of evaluations relies upon a tradeoff between difficulty and accuracy, if random environmental conditions are present, or if the evaluation involves derivatives. Acquisition functions Examples of acquisition functions include probability of improvement, expected improvement, Bayesian expected losses, upper confidence bounds (UCB) or lower confidence bounds, Thompson sampling, and hybrids of these. They all trade off exploration and exploitation so as to minimize the number of function queries. As such, Bayesian optimization is well suited for functions that are expensive to evaluate. Solution methods The maximum of the acquisition function is typically found by resorting to discretization or by means of an auxiliary optimizer. Acquisition functions are maximized using a numerical optimization technique, such as Newton's method or quasi-Newton methods like the Broyden–Fletcher–Goldfarb–Shanno algorithm.
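The loop described above — fit a surrogate to the evaluations gathered so far, maximize an acquisition function, then evaluate the objective at the proposed point — can be illustrated with a short sketch. This is a minimal illustration under stated assumptions, not a production implementation: it uses scikit-learn's Gaussian process regressor as the surrogate, expected improvement as the acquisition function, and maximizes the acquisition by simple discretization over a dense grid (one of the solution methods mentioned above). The toy objective function is an arbitrary placeholder.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Toy "expensive" black-box function (1-D); stands in for the real objective.
    return np.sin(3 * x) + x**2 - 0.7 * x

def expected_improvement(candidates, gp, best_y, xi=0.01):
    # Expected improvement for minimization at each candidate point.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    imp = best_y - mu - xi
    z = imp / sigma
    return imp * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
bounds = (-1.0, 2.0)
grid = np.linspace(*bounds, 1000).reshape(-1, 1)   # discretized search space

# A few initial random evaluations of the objective.
X = rng.uniform(*bounds, size=(3, 1))
y = objective(X).ravel()

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for _ in range(15):                      # sequential design loop
    gp.fit(X, y)                         # update the surrogate (posterior)
    ei = expected_improvement(grid, gp, best_y=y.min())
    x_next = grid[np.argmax(ei)]         # maximize the acquisition function
    y_next = objective(x_next)
    X = np.vstack([X, [x_next]])
    y = np.append(y, y_next)

print("best x:", X[np.argmin(y)].item(), "best f(x):", y.min())
```

In practice the grid search over the acquisition function would be replaced by an auxiliary optimizer (for example a quasi-Newton method restarted from several points), as the section above notes.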
Applications The approach has been applied to solve a wide range of problems, including learning to rank, computer graphics and visual design, robotics, sensor networks, automatic algorithm configuration, automatic machine learning toolboxes, reinforcement learning, planning, visual attention, architecture configuration in deep learning, static program analysis, experimental particle physics, quality-diversity optimization, chemistry, material design, and drug development. Bayesian optimization has also been applied in the field of facial recognition. The performance of the Histogram of Oriented Gradients (HOG) algorithm, a popular feature extraction method, depends heavily on its parameter settings, and optimizing these parameters can be challenging but crucial for achieving high accuracy. An approach using Tree-structured Parzen Estimator (TPE) based Bayesian optimization has been proposed to optimize the HOG algorithm parameters and image size for facial recognition; the same approach can potentially be adapted to other computer vision applications that rely on hand-crafted, parameter-based feature extraction. See also Multi-armed bandit Kriging Thompson sampling Global optimization Bayesian experimental design Probabilistic numerics Pareto optimum Active learning (machine learning) Multi-objective optimization References Sequential methods Sequential experiments Stochastic optimization Machine learning
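To make TPE-based hyperparameter tuning concrete, the sketch below uses Optuna, one freely available library whose TPE sampler implements the Tree-structured Parzen Estimator. The objective and its two parameters are hypothetical stand-ins for whatever pipeline (for example, a HOG-based recognizer) is actually being tuned; they are not taken from the study described above.

```python
import optuna

def objective(trial):
    # Hypothetical stand-ins for real pipeline parameters (e.g. HOG cell size,
    # number of orientation bins); replace with a real accuracy measurement.
    cell_size = trial.suggest_int("cell_size", 4, 16)
    orientations = trial.suggest_int("orientations", 4, 12)
    # Toy score: pretend the best settings are cell_size=8, orientations=9.
    return (cell_size - 8) ** 2 + (orientations - 9) ** 2

study = optuna.create_study(
    direction="minimize",
    sampler=optuna.samplers.TPESampler(seed=42),
)
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```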
Bayesian optimization
[ "Engineering" ]
853
[ "Artificial intelligence engineering", "Machine learning" ]
40,975,991
https://en.wikipedia.org/wiki/Pupil%20function
The pupil function or aperture function describes how a light wave is affected upon transmission through an optical imaging system such as a camera, microscope, or the human eye. More specifically, it is a complex function of the position in the pupil or aperture (often an iris) that indicates the relative change in amplitude and phase of the light wave. Sometimes this function is referred to as the generalized pupil function, in which case pupil function only indicates whether light is transmitted or not. Imperfections in the optics typically have a direct effect on the pupil function; it is therefore an important tool to study optical imaging systems and their performance. Relationship with other functions in optics The complex pupil function can be written in polar coordinates using two real functions: $P(x,y) = A(x,y)\,e^{i\theta(x,y)}$, where $\theta(x,y)$ is the phase change (in radians) introduced by the optics or the surrounding medium. It captures all optical aberrations that occur between the image plane and the focal plane in the scene or sample. The light may also be attenuated differently at different positions in the pupil, sometimes deliberately for the purpose of apodization. Such a change in amplitude of the light wave is described by the factor $A(x,y)$. The pupil function is also directly related to the point spread function by its Fourier transform. As such, the effect of aberrations on the point spread function can be described mathematically using the concept of the pupil function. Since the (incoherent) point spread function is also related to the optical transfer function via a Fourier transform, a direct relationship exists between the pupil function and the optical transfer function. In the case of an incoherent optical imaging system, the optical transfer function is the autocorrelation of the pupil function. Examples In focus In a homogeneous medium, a point source emits light with spherical wave fronts. A lens that is focused onto the point source will have optics that change the spherical wave front into a planar wave before it passes through the pupil or aperture stop. Often, additional lens elements refocus the light onto a sensor or photographic film by converting the planar wave front to a spherical wave front centered onto the image plane. The pupil function of such an ideal system is equal to one at every point within the pupil and zero outside it. In the case of a circular pupil, this can be written mathematically as $P(x,y) = 1$ for $x^2 + y^2 \le R^2$ and $P(x,y) = 0$ otherwise, where $R$ is the pupil radius. Out of focus When the point source is out of focus, the spherical wave will not be completely made planar by the optics, but will have an approximately parabolic wave front, i.e. an optical path difference that grows quadratically with the distance from the center of the pupil. Such a variation in optical path length corresponds to a radial variation in the complex argument of the pupil function: $P(x,y) = e^{i\alpha(x^2 + y^2)}$ for $x^2 + y^2 \le R^2$ and $P(x,y) = 0$ otherwise, where the constant $\alpha$ is proportional to the amount of defocus. It is thus possible to deduce the point-spread function of the out-of-focus point source as the Fourier transform of the pupil function. Aberrated optics The spherical wave could also be deformed by imperfect optics to an approximately cylindrical wave front, with an optical path difference that varies quadratically along only one direction in the pupil: $P(x,y) = e^{i\alpha x^2}$ for $x^2 + y^2 \le R^2$ and $P(x,y) = 0$ otherwise. Such a variation in optical path length will create an image that is blurred only in one dimension, as is typical of systems with astigmatism. See also Fourier optics Point spread function Optical transfer function References Optics
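The Fourier-transform relationship between the pupil function and the point spread function lends itself to a short numerical illustration. The sketch below is a minimal, assumption-laden example: it samples a circular pupil on a grid, applies a hypothetical quadratic defocus phase, and takes the squared magnitude of the FFT as the incoherent point spread function; sampling, scaling, and physical units are deliberately ignored.

```python
import numpy as np

n = 256                                   # samples per side of the pupil grid
x = np.linspace(-1.0, 1.0, n)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2

aperture = (r2 <= 0.5**2).astype(float)   # circular pupil of radius 0.5 (arbitrary units)
defocus = 6.0                             # hypothetical defocus strength (radians at the pupil edge scale)
pupil = aperture * np.exp(1j * defocus * r2)   # generalized pupil function A * exp(i*theta)

# Incoherent PSF: squared magnitude of the Fourier transform of the pupil function.
field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
psf = np.abs(field)**2
psf /= psf.sum()                          # normalize total energy to 1

# Compare the normalized peak of the defocused PSF with that of the in-focus pupil.
in_focus = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(aperture))))**2
print("in-focus peak fraction:", in_focus.max() / in_focus.sum())
print("defocused peak fraction:", psf.max())
```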
Pupil function
[ "Physics", "Chemistry" ]
630
[ "Applied and interdisciplinary physics", "Optics", " molecular", "Atomic", " and optical physics" ]
40,977,477
https://en.wikipedia.org/wiki/Cross-species%20transmission
Cross-species transmission (CST), also called interspecies transmission, host jump, or spillover, is the transmission of an infectious pathogen, such as a virus, between hosts belonging to different species. Once introduced into an individual of a new host species, the pathogen may cause disease for the new host and/or acquire the ability to infect other individuals of the same species, allowing it to spread through the new host population. The phenomenon is most commonly studied in virology, but cross-species transmission may also occur with bacterial pathogens or other types of microorganisms. Steps involved in the transfer of pathogens to new hosts include contact between the pathogen and the host; the successful infection of an initial individual host, which may lead to amplification and an outbreak; and the adaptation of the pathogen, within either the original or new host, which may render it capable of spreading efficiently between individuals in populations of the new host. The concept is important in understanding and controlling emerging infectious diseases in humans, especially those caused by viruses. Most viral diseases of humans are zoonotic in origin, having been historically transmitted to human populations from various animal species; examples include SARS, Ebola, swine flu, rabies, and avian influenza. The exact mechanisms which facilitate cross-species transmission vary by pathogen, and even for common diseases are often poorly understood. It is believed that viruses with high mutation rates are able to rapidly adapt to new hosts and thereby overcome host-specific immunological defenses, allowing their continued transmission. A host shifting event occurs when a strain that was previously zoonotic begins to circulate exclusively among the new host species. Pathogen transfer is most likely to occur between species which are frequently in close contact with each other. It can also occur indirectly between species with less frequent contact if facilitated by an intermediary species; for example, a reservoir species may transfer the virus to a vector species, which in turn transfers the virus to humans. The degree of phylogenetic relatedness between host species also influences the likelihood that a pathogen is transmitted between them, likely because of the similarity of the hosts' immunological defenses; for example, most human zoonotic transmissions come from other species of mammals. Pathogens of more distantly related species, on the other hand, such as plant viruses, may not be capable of infecting humans at all. Other factors influencing transmission rates include geographic proximity and intraspecies behaviors. Due to climate change and habitat loss owing to land use expansion, the risk of viral spillover is predicted to significantly increase. Prevalence and control Cross-species transmission is the most significant cause of disease emergence in humans and other species. Wildlife zoonotic diseases of microbial origin are also the most common group of human emerging diseases, and CST between wildlife and livestock has appreciable economic impacts in agriculture by reducing livestock productivity and imposing export restrictions. This makes CST of major concern for public health, agriculture, and wildlife management. The authors of a study on the bubonic plague in Oran stress that the disease "is primarily a bacterial zoonosis affecting rodents. It is caused by Yersinia pestis and is transmitted from animal to animal by fleas. 
Humans usually become infected through the bite of an infected rodent flea." The sanitary control measure instituted by the public health authority was chemical in nature: "Intra- and peridomestic spraying with permethrin was conducted. Deltamethrin was dusted on the tracks and around the burrows of rodents located in a radius of 10 km around the dwelling of the patients. Uncontrolled killing of rats was prohibited." A large proportion of viral pathogens that have emerged recently in humans are considered to have originated from various animal species. This is shown by several recent epidemics such as avian flu, Ebola, monkeypox, and hantaviruses. There is evidence to suggest that some diseases can potentially be re-introduced to human populations through animal hosts after they have been eradicated in humans. There is a risk of this phenomenon occurring with morbilliviruses as they can readily cross species barriers. CST can also have a significant effect on produce industries. Genotype VI-Avian paramyxovirus serotype 1 (GVI-PMV1) is a virus that arose through cross-species transmission events from Galliformes (e.g. chickens) to Columbiformes, and has become prevalent in the poultry industry. CST of rabies virus variants between many different species populations is a major concern of wildlife management. Introduction of these variants into non-reservoir animals increases the risk of human exposures and threatens current advances toward rabies control. Many pathogens are thought to have host specialization, which explains the maintenance of distinct strains in host species. Pathogens would have to overcome their host specificity to cross to a new host species. Some studies have argued that host specializations may be exaggerated, and pathogens are more likely to exhibit CST than previously thought. Original hosts usually have low death rates when infected with a pathogen, with fatality rates tending to be much higher in new hosts. Between non-human primates and humans Due to the close relation of nonhuman primates (NHPs) and humans, disease transmission between NHPs and humans is relatively common and can become a major public health concern. Diseases such as HIV and human adenoviruses have been associated with NHP interactions. In places where contact between humans and NHPs is frequent, precautions are often taken to prevent disease transmission. Simian foamy virus (SFV) is an enzootic retrovirus that has high rates of cross-species transmission and has been known to affect humans bitten by infected NHPs. It has caused health concerns in places like Indonesia, where visitors at monkey temples can contract SFV from temple macaques (Macaca fascicularis). TMAdV (titi monkey adenovirus) is a highly divergent NHP virus, sharing less than 57% pairwise nucleotide identity with other adenoviruses, that had a high fatality rate (83%) in monkeys and is capable of spreading through human hosts. Predicting and preventing transmission between species Prediction and monitoring are important for the study of CSTs and their effects. However, factors that determine the origin and fate of cross-species transmission events remain unclear for the majority of human pathogens. This has resulted in the use of different statistical models for the analysis of CST. Some of these include risk-analysis models, single rate dated tip (SRDT) models, and phylogenetic diffusion models. The study of the genomes of pathogens involved in CST events is very useful in determining their origin and fate.
This is because a pathogen's genetic diversity and mutation rate are key factors in determining if it can transmit across multiple hosts. This makes it important for the genomes of the species involved in transmission to be partially or completely sequenced. A change in genomic structure could cause a pathogen that has a narrow host range to become capable of exploiting a wider host range. Genetic distance between different species, geographical range, and other interaction barriers will also influence cross-species transmission. One approach to risk assessment analysis of CST is to develop risk-analysis models that break the "process" of disease transmission into parts. Processes and interactions that could lead to cross-species disease transmission are explicitly described as a hypothetical infection chain. Data from laboratory and field experiments are used to estimate the probability of each component, the expected natural variation, and margins of error. Different types of CST research would require different analysis pathways to meet their needs. A study on the identification of viruses in bats that could spread to other mammals used the following workflow: sequencing of genomic samples → "cleaning" of raw reads → elimination of host reads and eukaryotic contaminants → de novo assembly of the remaining reads → annotation of viral contigs → molecular detection of specific viruses → phylogenetic analysis → interpretation of data. Detecting CST and estimating its rate based on prevalence data is challenging. Due to these difficulties, computational methods are used to analyse CST events and the pathogens associated with them. The explosive development of molecular techniques has opened new possibilities for using phylogenetic analysis of pathogen genetics to infer epidemiological parameters. This provides some insight into the origins of these events and how they could be addressed. Methods of CST prevention currently use both biological and computational data. An example of this is using both cellular assays and phylogenetic comparisons to support a role for TRIM5α, the product of the TRIM5 gene, in suppressing interspecies transmission and emergence of retroviruses in nature. Analysis Phylogeny The comparison of genomic data is very important for the study of cross-species transmission. Phylogenetic analysis is used to compare genetic variation both in pathogens associated with CST and in the host species that they infect. Taken together, it is possible to infer what allowed a pathogen to cross over to a new host (i.e. a mutation in the pathogen or a change in host susceptibility) and how this can be prevented in the future. If the mechanisms a pathogen uses to initially enter a new species are well characterized and understood, a certain level of risk control and prevention can be obtained. In contrast, a poor understanding of pathogens and their associated diseases makes it harder for preventive measures to be taken. Alternative hosts can also potentially have a critical role in the evolution and diffusion of a pathogen. When a pathogen crosses species it often acquires new characteristics that allow it to breach host barriers. Different pathogen variants can have very different effects on host species. Thus it can be beneficial to CST analysis to compare the same pathogens occurring in different host species. Phylogenetic analysis can be used to track a pathogen's history through different species populations. Even if a pathogen is new and highly divergent, phylogenetic comparison can be very insightful.
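The risk-analysis approach described earlier in this section — decomposing transmission into a hypothetical infection chain and estimating the probability of each component together with its natural variation — can be illustrated with a small Monte Carlo sketch. All of the component names and probability ranges below are hypothetical placeholders, not values from any published model; a real analysis would estimate them from laboratory and field data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 100_000

# Hypothetical infection chain: contact -> exposure -> infection of the first
# individual -> onward spread in the new host population. Each component
# probability is drawn from a range reflecting estimated natural variation.
components = {
    "contact_with_reservoir":        (0.05, 0.20),
    "exposure_given_contact":        (0.10, 0.40),
    "infection_given_exposure":      (0.01, 0.10),
    "onward_spread_given_infection": (0.02, 0.15),
}

# Sample each component independently and multiply along the chain.
chain = np.ones(n_draws)
for low, high in components.values():
    chain *= rng.uniform(low, high, n_draws)

print(f"median spillover probability per contact opportunity: {np.median(chain):.2e}")
print(f"95% interval: {np.percentile(chain, 2.5):.2e} – {np.percentile(chain, 97.5):.2e}")
```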
A useful strategy for investigating the history of epidemics caused by pathogen transmission combines molecular clock analysis, to estimate the timescale of the epidemic, with coalescent theory, to infer the demographic history of the pathogen. When constructing phylogenies, computer databases and tools are often used. Programs such as BLAST are used to annotate pathogen sequences, while databases like GenBank provide information about functions based on the pathogen's genomic structure. Trees are constructed using computational methods such as MPR or Bayesian inference, and models are created depending on the needs of the study. Single rate dated tip (SRDT) models, for example, allow estimates of the timescale under a phylogenetic tree. Models for CST prediction will vary depending on what parameters need to be accounted for when constructing the model. Most parsimonious reconstruction (MPR) Parsimony is the principle by which one chooses the simplest scientific explanation that fits the evidence. In terms of building phylogenetic trees, the best hypothesis is the one that requires the fewest evolutionary changes. Using parsimony to reconstruct ancestral character states on a phylogenetic tree is a method for testing ecological and evolutionary hypotheses. This method can be used in CST studies to estimate the number of character changes that exist between pathogens in relation to their hosts. This makes MPR useful for tracking a CST pathogen to its origins. MPR can also be used to compare the traits of host species populations. Traits and behaviours within a population could make it more susceptible to CST. For example, species which migrate regionally are important for spreading viruses through population networks. Despite the success of parsimony reconstructions, research suggests they are often sensitive and can sometimes be prone to bias in complex models. This can cause problems for CST models that have to consider many variables. Other methods, such as maximum likelihood, have been developed as alternatives to parsimony reconstruction. Using genetic markers Two methods of measuring genetic variation, variable number tandem repeats (VNTRs) and single nucleotide polymorphisms (SNPs), have been very beneficial to the study of bacterial transmission. VNTRs, due to their low cost and high mutation rates, are particularly useful for detecting genetic differences in recent outbreaks, and while SNPs have a lower mutation rate per locus than VNTRs, they deliver more stable and reliable genetic relationships between isolates. Both methods are used to construct phylogenies for genetic analysis; however, SNPs are more suitable for studies of phylogeny construction. However, it can be difficult for these methods to accurately simulate CST events. Estimates of CST based on phylogenies made using the VNTR marker can be biased towards detecting CST events across a wide range of the parameters. SNPs tend to be less biased and less variable in estimates of CST when estimated CST rates are low and a low number of SNPs is used. In general, CST rate estimates using these methods are most reliable in systems with more mutations, more markers, and high genetic differences between introduced strains. CST is very complex and models need to account for many parameters to accurately represent the phenomenon. Models that oversimplify reality can result in biased data.
Multiple parameters such as number of mutations accumulated since introduction, stochasticity, the genetic difference of strains introduced, and the sampling effort can make unbiased estimates of CST difficult even with whole-genome sequences, especially if sampling is limited, mutation rates are low, or if pathogens were recently introduced. The process of using genetic markers to estimate CST rates should take into account several important factors to reduce bias. One is that the phylogenetic tree constructed in the analysis needs to capture the underlying epidemiological process generating the tree. The models need to account for how the genetic variability of a pathogen influences a disease in a species, not just general differences in genomic structure. Two, the strength of the analysis will depend on the amount of mutation accumulated since the pathogen was introduced in the system. This is due to many models using the number of mutations as an indicator of CST frequency. Therefore, efforts are focused on estimating either time since the introduction or the substitution rate of the marker (from laboratory experiments or genomic comparative analysis). This is important not only when using the MPR method but also for Likelihood approaches that require an estimation of the mutation rate. Three, CST will also affect disease prevalence in the potential host, so combining both epidemiological time series data with genetic data may be an excellent approach to CST study Bayesian analysis Bayesian frameworks are a form of maximum likelihood-based analyses and can be very effective in cross-species transmission studies. Bayesian inference of character evolution methods can account for phylogenetic tree uncertainty and more complex scenarios, with models such as the character diffusion model currently being developed for the study of CST in RNA viruses. A Bayesian statistical approach presents advantages over other analyses for tracking CST origins. Computational techniques allow integration over an unknown phylogeny, which cannot be directly observed, and unknown migration process, which is usually poorly understood. The Bayesian frameworks are also well suited to bring together different kinds of information. The BEAST software, which has a strong focus on calibrated phylogenies and genealogies, illustrates this by offering a large number of complementary evolutionary models including substitution models, demographic and relaxed clock models that can be combined into a full probabilistic model. By adding spatial reconstruction, these models create the probability of biogeographical history reconstruction from genetic data. This could be useful for determining the origins of cross-species transmissions. The high effectiveness of Bayesian statistical methods has made them instrumental in evolutionary studies. Bayesian ancestral host reconstruction under discrete diffusion models can be used to infer the origin and effects of pathogens associated with CST. One study on Human adenoviruses using Bayesian supported a gorilla and chimpanzee origin for the viral species, aiding prevention efforts. Despite presumably rare direct contact between sympatric populations of the two species, CST events can occur between them. The study also determined that two independent HAdV-B transmission events to humans occurred and that the HAdV-Bs circulating in humans are of zoonotic origin and have probably affected global health for most of our species lifetime. 
Phylogenetic diffusion models are frequently used for phylogeographic analyses, with the inference of host jumping becoming of increasing interest. The Bayesian inference approach enables model averaging over several potential diffusion predictors and estimates the support and contribution of each predictor while marginalizing over phylogenetic history. For studying viral CST, the flexibility of the Bayesian statistical framework allows for the reconstruction of virus transmission between different host species while simultaneously testing and quantifying the contribution of multiple ecological and evolutionary influences of both CST spillover and host shifting. One study on rabies in bats showed geographical range overlap is a modest predictor for CST, but not for host shifts. This highlights how Bayesian inferences in models can be used for CST analysis. See also Mathematical modelling of infectious disease Reverse zoonosis Spillover infection Vector Zoonosis Feline zoonosis References External links Bayesian modeling book and examples available for downloading. Bayesian statistics at Wikiversity Epidemiology Viruses Zoonoses
Cross-species transmission
[ "Biology", "Environmental_science" ]
3,538
[ "Viruses", "Tree of life (biology)", "Epidemiology", "Microorganisms", "Environmental social science" ]
38,203,359
https://en.wikipedia.org/wiki/Reverse%20genetics
Reverse genetics is a method in molecular genetics that is used to help understand the function(s) of a gene by analysing the phenotypic effects caused by genetically engineering specific nucleic acid sequences within the gene. The process proceeds in the opposite direction to forward genetic screens of classical genetics. While forward genetics seeks to find the genetic basis of a phenotype or trait, reverse genetics seeks to find what phenotypes are controlled by particular genetic sequences. Automated DNA sequencing generates large volumes of genomic sequence data relatively rapidly. Many genetic sequences are discovered in advance of other, less easily obtained, biological information. Reverse genetics attempts to connect a given genetic sequence with specific effects on the organism. Reverse genetics systems can also allow the recovery and generation of infectious or defective viruses with desired mutations. This allows the ability to study the virus in vitro and in vivo. Techniques used In order to learn the influence a sequence has on phenotype, or to discover its biological function, researchers can engineer a change or disrupt the DNA. After this change has been made a researcher can look for the effect of such alterations in the whole organism. There are several different methods of reverse genetics: Directed deletions and point mutations Site-directed mutagenesis is a sophisticated technique that can either change regulatory regions in the promoter of a gene or make subtle codon changes in the open reading frame to identify important amino residues for protein function. Alternatively, the technique can be used to create null alleles so that the gene is not functional. For example, deletion of a gene by gene targeting (gene knockout) can be done in some organisms, such as yeast, mice and moss. Unique among plants, in Physcomitrella patens, gene knockout via homologous recombination to create knockout moss (see figure) is nearly as efficient as in yeast. In the case of the yeast model system directed deletions have been created in every non-essential gene in the yeast genome. In the case of the plant model system huge mutant libraries have been created based on gene disruption constructs. In gene knock-in, the endogenous exon is replaced by an altered sequence of interest. In some cases conditional alleles can be used so that the gene has normal function until the conditional allele is activated. This might entail 'knocking in' recombinase sites (such as lox or frt sites) that will cause a deletion at the gene of interest when a specific recombinase (such as CRE, FLP) is induced. Cre or Flp recombinases can be induced with chemical treatments, heat shock treatments or be restricted to a specific subset of tissues. Another technique that can be used is TILLING. This is a method that combines a standard and efficient technique of mutagenesis with a chemical mutagen such as ethyl methanesulfonate (EMS) with a sensitive DNA-screening technique that identifies point mutations in a target gene. In the field of virology, reverse-genetics techniques can be used to recover full-length infectious viruses with desired mutations or insertions in the viral genomes or in specific virus genes. Technologies that allow these manipulations include circular polymerase extension reaction (CPER) which was first used to generate infectious cDNA for Kunjin virus a close relative of West Nile virus. 
CPER has also been successfully utilised to generate a range of positive-sense RNA viruses such as SARS-CoV-2, the causative agent of COVID-19. Gene silencing The discovery of gene silencing using double-stranded RNA, also known as RNA interference (RNAi), and the development of gene knockdown using Morpholino oligos, have made disrupting gene expression an accessible technique for many more investigators. This method is often referred to as a gene knockdown since the effects of these reagents are generally temporary, in contrast to gene knockouts which are permanent. RNAi creates a specific knockout effect without actually mutating the DNA of interest. In C. elegans, RNAi has been used to systematically interfere with the expression of most genes in the genome. RNAi acts by directing cellular systems to degrade target messenger RNA (mRNA). RNA interference, specifically gene silencing, has become a useful tool to silence the expression of genes and identify and analyze their loss-of-function phenotype. When a mutation occurs in an allele, the function it represents and encodes is also mutated and lost; this is generally called a loss-of-function mutation. The ability to analyze the loss-of-function phenotype allows analysis of gene function when there is no access to mutant alleles. While RNA interference relies on cellular components for efficacy (e.g. the Dicer proteins, the RISC complex), a simple alternative for gene knockdown is Morpholino antisense oligos. Morpholinos bind and block access to the target mRNA without requiring the activity of cellular proteins and without necessarily accelerating mRNA degradation. Morpholinos are effective in systems ranging in complexity from cell-free translation in a test tube to in vivo studies in large animal models. Interference using transgenes A molecular genetic approach is the creation of transgenic organisms that overexpress a normal gene of interest. The resulting phenotype may reflect the normal function of the gene. Alternatively it is possible to overexpress mutant forms of a gene that interfere with the normal (wildtype) gene's function. For example, over-expression of a mutant gene may result in high levels of a non-functional protein, resulting in a dominant negative interaction with the wildtype protein. In this case the mutant version will outcompete the wildtype protein for its partners, resulting in a mutant phenotype. Other mutant forms can result in a protein that is abnormally regulated and constitutively active ('on' all the time). This might be due to removing a regulatory domain or mutating a specific amino acid residue that is reversibly modified (by phosphorylation, methylation, or ubiquitination). Either change is critical for modulating protein function and often results in informative phenotypes. Vaccine synthesis Reverse genetics plays a large role in vaccine synthesis. Vaccines can be created by engineering novel genotypes of infectious viral strains which diminish their pathogenic potency enough to facilitate immunity in a host. The reverse genetics approach to vaccine synthesis utilizes known viral genetic sequences to create a desired phenotype: a virus with both a weakened pathological potency and a similarity to the current circulating virus strain. Reverse genetics provides a convenient alternative to the traditional method of creating inactivated vaccines, viruses which have been killed using heat or chemical methods.
Vaccines created through reverse genetics methods are known as attenuated vaccines, named because they contain weakened (attenuated) live viruses. Attenuated vaccines are created by combining genes from a novel or current virus strain with previously attenuated viruses of the same species. Attenuated viruses are created by propagating a live virus under novel conditions, such as a chicken's egg. This produces a viral strain that is still live, but not pathogenic to humans, as these viruses are rendered defective in that they cannot replicate their genome enough to propagate and sufficiently infect a host. However, the viral genes are still expressed in the host's cell through a single replication cycle, allowing for the development of an immunity. Influenza vaccine A common way to create a vaccine using reverse genetic techniques is to utilize plasmids to synthesize attenuated viruses. This technique is most commonly used in the yearly production of influenza vaccines, where an eight plasmid system can rapidly produce an effective vaccine. The entire genome of the influenza A virus consists of eight RNA segments, so the combination of six attenuated viral cDNA plasmids with two wild-type plasmids allow for an attenuated vaccine strain to be constructed. For the development of influenza vaccines, the fourth and sixth RNA segments, encoding for the hemagglutinin and neuraminidase proteins respectively, are taken from the circulating virus, while the other six segments are derived from a previously attenuated master strain. The HA and NA proteins exhibit high antigen variety, and therefore are taken from the current strain for which the vaccine is being produced to create a well matching vaccine. The plasmid used in this eight-plasmid system contains three major components that allow for vaccine development. Firstly, the plasmid contains restriction sites that will enable the incorporation of influenza genes into the plasmid. Secondly, the plasmid contains an antibiotic resistance gene, allowing the selection of merely plasmids containing the correct gene. Lastly, the plasmid contains two promotors, human pol 1 and pol 2 promotor that transcribe genes in opposite directions. cDNA sequences of viral RNA are synthesized from attenuated master strains by using RT-PCR. This cDNA can then be inserted between an RNA polymerase I (Pol I) promoter and terminator sequence through restriction enzyme digestion. The cDNA and pol I sequence is then, in turn, surrounded by an RNA polymerase II (Pol II) promoter and a polyadenylation site. This entire sequence is then inserted into a plasmid. Six plasmids derived from attenuated master strain cDNA are cotransfected into a target cell, often a chicken egg, alongside two plasmids of the currently circulating wild-type influenza strain. Inside the target cell, the two "stacked" Pol I and Pol II enzymes transcribe the viral cDNA to synthesize both negative-sense viral RNA and positive-sense mRNA, effectively creating an attenuated virus. The result is a defective vaccine strain that is similar to the current virus strain, allowing a host to build immunity. This synthesized vaccine strain can then be used as a seed virus to create further vaccines. Advantages and disadvantages Vaccines engineered from reverse genetics carry several advantages over traditional vaccine designs. Most notable is speed of production. Due to the high antigenic variation in the HA and NA glycoproteins, a reverse-genetic approach allows for the necessary genotype (i.e. 
one containing HA and NA proteins taken from currently circulating virus strains) to be formulated rapidly. Additionally, since the final product of reverse-genetics attenuated vaccine production is a live virus, a higher immunogenicity is exhibited than in traditional inactivated vaccines, which must be killed using chemical procedures before being transferred as a vaccine. However, due to the live nature of attenuated viruses, complications may arise in immunodeficient patients. There is also the possibility that a mutation in the virus could result in the vaccine reverting to a live, unattenuated virus. See also Forward genetics References Further reading External links Reassortment vs. Reverse Genetics Reverse Genetics: Building Flu Vaccines Piece by Piece Genetic engineering Molecular genetics
Reverse genetics
[ "Chemistry", "Engineering", "Biology" ]
2,277
[ "Biological engineering", "Molecular genetics", "Genetic engineering", "Molecular biology" ]
38,208,829
https://en.wikipedia.org/wiki/Varignon%27s%20theorem%20%28mechanics%29
Varignon's theorem is a theorem of French mathematician Pierre Varignon (1654–1722), published in 1687 in his book Projet d'une nouvelle mécanique. The theorem states that the torque of a resultant of two concurrent forces about any point is equal to the algebraic sum of the torques of its components about the same point. In other words, "If many concurrent forces are acting on a body, then the algebraic sum of torques of all the forces about a point in the plane of the forces is equal to the torque of their resultant about the same point." Proof Consider a set of force vectors $\mathbf{F}_1, \mathbf{F}_2, \dots, \mathbf{F}_n$ that concur at a point $P$ in space. Their resultant is $\mathbf{F} = \sum_{i=1}^{n} \mathbf{F}_i$. The torque of each vector with respect to some other point $O$ is $\mathbf{T}_i = \mathbf{r} \times \mathbf{F}_i$, where $\mathbf{r}$ is the position vector of $P$ relative to $O$. Adding up the torques and pulling out the common factor $\mathbf{r}$, one sees that the result may be expressed solely in terms of $\mathbf{F}$, and is in fact the torque of $\mathbf{F}$ with respect to the point $O$: $\sum_{i=1}^{n} \mathbf{T}_i = \sum_{i=1}^{n} \mathbf{r} \times \mathbf{F}_i = \mathbf{r} \times \sum_{i=1}^{n} \mathbf{F}_i = \mathbf{r} \times \mathbf{F}$. This proves the theorem, i.e. that the sum of the torques about $O$ is the same as the torque of the sum of the forces about the same point. References External links Varignon's Theorem at TheFreeDictionary.com Eponymous theorems of physics Mechanics Moment (physics)
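A quick numerical check of the identity above can be done with NumPy; the force and position vectors used here are arbitrary illustrative values, not part of the original statement.

```python
import numpy as np

r = np.array([1.0, 2.0, 0.5])          # position of the point of concurrence relative to O
forces = np.array([
    [3.0, -1.0, 2.0],
    [0.5,  4.0, -2.5],
    [-2.0, 1.5, 1.0],
])

sum_of_torques = sum(np.cross(r, f) for f in forces)      # sum of individual torques about O
torque_of_resultant = np.cross(r, forces.sum(axis=0))     # torque of the resultant about O

print(sum_of_torques, torque_of_resultant)
assert np.allclose(sum_of_torques, torque_of_resultant)   # Varignon's theorem holds
```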
Varignon's theorem (mechanics)
[ "Physics", "Mathematics", "Engineering" ]
254
[ "Equations of physics", "Physical quantities", "Quantity", "Classical mechanics stubs", "Classical mechanics", "Eponymous theorems of physics", "Mechanics", "Mechanical engineering", "Moment (physics)", "Physics theorems" ]
42,430,141
https://en.wikipedia.org/wiki/Clay%20mineral%20X-ray%20diffraction
Clay minerals are among the more diverse groups of minerals, but all share crystal or grain sizes below 2 μm. Chemically, clays are defined by crystal structure and chemical composition, often determined by clay mineral X-ray diffraction. Sometimes fine-grained sediments are mistakenly described as clays; this is actually a description of the "clay-size fraction" rather than the mineralogy of the sediment. There are three crystallographic clay groups: platy clays (phyllosilicates), fibrous clay minerals, and amorphous clay. Phyllosilicates are the more abundant clays and are categorized based on the layering of a tetrahedral and octahedral layer. For most clays, the octahedral layer is centered with Al3+, Fe3+, or Mg(OH)2, but sometimes Zn2+, Li+, and Cr3+ can substitute as well. Si4+ is normally the center of the tetrahedral layer but Al3+ will often partially substitute and create a charge imbalance. Two-layer clays are composed of a tetrahedral layer and an octahedral layer (T-O) while three-layer clays contain an octahedral layer sandwiched by two tetrahedral layers (T-O-T). When substitution of Al3+ for Si4+ creates a charge imbalance, an interlayer cation will fill in between tetrahedral layers to balance the charge of the clay. X-ray diffraction method X-rays are used to determine the crystal structure of materials. It is an experimental method in which a beam of X-rays is made to pass through a sample of the material being tested. Since the atoms are arranged in some order in crystals, they tend to diffract the beam at certain angles and at certain intensities. The angles and intensities of the diffracted X-rays are measured, and from these the crystal structure of the material is calculated. It is also possible to determine if the material is not crystalline. X-ray diffraction of clays Typically, powder X-ray diffraction (XRD) is an average of randomly oriented microcrystals that should equally represent all crystal orientations if a large enough sample is present. X-rays are directed at the sample while it is slowly rotated, producing a diffraction pattern that shows the intensity of X-rays collected at different angles. Randomly oriented XRD samples are not as useful for clay minerals because clays typically have similar X and Y dimensions. The Z dimension differs from clay to clay and is most diagnostic because the Z dimension represents the height of the tetrahedral-octahedral (T-O) or tetrahedral-octahedral-tetrahedral (T-O-T) layer. The Z dimension can increase or decrease because of substitution of the central cation in both the tetrahedral and octahedral layers. The presence and size of a charge-balancing cation in the inter-layer of T-O-T clays will also affect the Z dimension. Because of this, clay minerals are typically identified by preparing samples so that they are oriented to enhance the basal (00l) reflections. Peak positions (d-spacings) are calculated using Bragg's law (nλ = 2d sin θ), but because clay mineral analysis is one-dimensional, l can substitute for n, making the equation lλ = 2d sin θ. When measuring the X-ray diffraction of clays, d is constant and λ is the known wavelength from the X-ray source, so the spacing between successive 00l peaks is regular. Identification of clays using XRD Basal reflections give the d-spacing of the basal layer, which represents the thickness of the silicate layers; the unit cell often contains multiple layers. Clay mineral peaks can generally be distinguished by the width halfway up the peak (i.e. the full width at half maximum, FWHM).
Well-defined crystalline minerals have sharp peaks, while clays, which range from crystalline to noncrystalline, produce broad peaks with noticeable width on both sides. These broad peaks make it easy to pick out which peaks are contributed by clays. These peaks can be compared to known diffraction patterns for better identification, but if some peaks are broader than others, it is likely that multiple clays are present. The Clay Mineral Society maintains a collection of clays for the purpose of comparison to unknown clays. Because the majority of the clays available from the Clay Mineral Society are naturally formed, they can contain minerals other than the desired clay. Diffraction patterns calculated using theoretical methods do not generally match experimental diffraction patterns, so using diffraction patterns from known samples to help identify a clay is preferable to calculation. Some minerals can be eliminated from identification using background information or prior analysis. Well-crystallized and pure samples are ideal for X-ray diffraction, but this is rarely the case for clay. Clay minerals are almost always mixed with very small amounts of nonclay minerals, which can produce intense peaks even when only a very small fraction of the sample is nonclay. If additional minerals are known to be present, attempts should be made to separate clays from nonclays; otherwise additional peaks should be expected. Some common minerals associated with naturally occurring clays are quartz, feldspars, zeolites, and carbonates; organic matter is sometimes present. Synthesis of clays can reduce the presence of some of these associated materials but does not guarantee pure samples, as quartz or other associated materials are still commonly produced alongside synthetic clays. Mixed-layered clay minerals Mixed-layering, interlayering, and interstratification are all terms that refer to clay minerals that form with two or more types of clays in intergrown layers. Mixed-layering does not refer to clays that have been physically mixed. Mixed-layering in clays adds difficulty to interpretation, so multiple analyses are usually necessary. Two-component clays are most common; multicomponent clays containing more than two components are very rare. The entire diffraction pattern contributes to the identification, and peaks should be considered as a whole rather than individually. Mixed-layer clays with two equal components (50% of each clay) are the easiest to identify. These clays can be treated as a single, non-mixed clay with a 001 spacing equal to the sum of the 001 spacings of the two components. Some commonly occurring 50/50 mixed-layer clays are even given unique names, such as dozyite, a serpentine/chlorite. Mixed clays that have unequal components with random stacking produce aperiodic 00l diffraction patterns known as irrational patterns. The coefficient of variation (CV) is the percent standard deviation of the average of d(001) calculated from various reflections. If CV is less than 0.75% then the mineral is given a unique name. If CV is greater than 0.75% then mixed-layered nomenclature is used. Preparation for clay mineral X-ray diffraction Clays should be separated from the nonclay minerals to reduce interference with the 00l peaks. Nonclay minerals can usually be separated by sieving samples at a small enough mesh. Samples should be lightly crushed but not pulverized, because nonclay minerals will be reduced to clay sizes and become impossible to separate from the sample.
Light crushing breaks apart the soft clays while keeping harder nonclays intact for easier removal. Samples should be as homogeneous as possible, in both grain size and composition, before mounting them for X-ray diffraction; long, flat, and thick samples are ideal. Four methods are commonly used for sample preparation and vary in difficulty and appropriateness of use. Glass slide method This is the easiest and fastest of the four commonly used methods but also the least accurate. A glass microscope slide is covered with a suspension of sample in water, then placed in an oven at 90 °C and left to dry. For some samples, drying at temperatures this high can damage the clays. In that case, drying at room temperature is an option but will require more time. Orientation is usually fair and particles are segregated, with the finest particles toward the top. This method produces thin films which provide inaccurate diffraction intensities at moderate and high angles. Smear method This is a quick method that is good for identifying bulk sample constituents. The sample is crushed with a mortar and pestle until the powder is able to be brushed onto a glass slide. The powder is then mixed with a few drops of a dispersant solution, usually ethanol but others are available, and spread evenly over the slide. Both large and small grain-size fractions can utilize this method. Filter membrane peel technique This technique prevents size segregation by using either quick filtration or rapid stirring to overcome settling velocities. The sample is poured into a vacuum filter apparatus and filtered quickly, but some liquid is left so that air is not drawn through the sample; the remaining liquid is then decanted. The damp sample is then inverted onto a glass slide and the filter paper is removed. Fast filtration allows a representative particle size to collect on the filter paper, which is then inverted and exposed when mounted on a slide. Centrifuged porous plate This method produces the best diffraction patterns of the four most common methods but requires the most skill and is the most time-consuming. Upon completion, samples have thick aggregates and preferred orientation. A special apparatus designed to hold a porous ceramic plate is placed into a centrifuge container and filled with suspended sample. Centrifuging forces the liquid through the porous plate, leaving the sample behind to be dried below 100 °C. An advantage of this method is that exchangeable cations can be removed by passing a chloride solution through the plate once the sample has been dried. Exchanging cations can be useful when establishing peaks for standards with variable interlayer cations. For example, nontronite has an interlayer which can contain both calcium and sodium. If an unknown sample were suspected to contain only one of these cations, a more accurate standard could be prepared by exchanging the undesired cation. References X-ray crystallography
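Since identification rests on the basal (00l) reflections described above, a short calculation of where those peaks fall can be useful when reading a pattern. The sketch below applies Bragg's law with the reflection order l substituting for n, as in the earlier section; the 10 Å basal spacing and the Cu Kα wavelength are illustrative choices (10 Å is typical of an illite-like clay), not values taken from the article.

```python
import math

wavelength = 1.5406      # Cu K-alpha X-ray wavelength in angstroms (a common laboratory source)
d001 = 10.0              # assumed basal spacing in angstroms (illite-like clay, illustrative only)

# Bragg's law with the reflection order l in place of n:  l * lambda = 2 * d(001) * sin(theta)
for l in range(1, 6):
    sin_theta = l * wavelength / (2 * d001)
    if sin_theta > 1:
        break            # this reflection order is not reachable with this wavelength
    two_theta = 2 * math.degrees(math.asin(sin_theta))
    print(f"00{l} reflection: d = {d001 / l:.2f} A, 2-theta = {two_theta:.2f} deg")
```

The printed 2θ positions are evenly spaced in sin θ, which is why successive 00l peaks of a single, non-mixed clay form the regular series mentioned above, and why mixed-layer clays show up as aperiodic (irrational) series.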
Clay mineral X-ray diffraction
[ "Chemistry", "Materials_science" ]
2,094
[ "X-ray crystallography", "Crystallography" ]
42,435,396
https://en.wikipedia.org/wiki/Phase%20transformation%20crystallography
Phase transformation crystallography describes the orientation relationship and interface orientation after a phase transformation (such as martensitic transformation or precipitation). References Software to calculate transformation crystallography—PTCLab, http://sourceforge.net/projects/tclab/ Crystallography
Phase transformation crystallography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
60
[ "Materials science stubs", "Materials science", "Crystallography stubs", "Crystallography", "Condensed matter physics" ]
32,611,770
https://en.wikipedia.org/wiki/DmpG-like%20communication%20domain
In molecular biology, the DmpG-like communication domain is a protein domain found towards the C-terminal region of various aldolase enzymes. It consists of five alpha-helices, four of which form an antiparallel helical bundle that plugs the C terminus of the N-terminal TIM barrel domain. The communication domain is thought to play an important role in the heterodimerisation of the enzyme. This domain heterodimerises with acetaldehyde dehydrogenases to form a bifunctional aldolase-dehydrogenase. References Protein domains
DmpG-like communication domain
[ "Biology" ]
124
[ "Protein domains", "Protein classification" ]
32,612,385
https://en.wikipedia.org/wiki/Hindley%E2%80%93Milner%20type%20system
A Hindley–Milner (HM) type system is a classical type system for the lambda calculus with parametric polymorphism. It is also known as Damas–Milner or Damas–Hindley–Milner. It was first described by J. Roger Hindley and later rediscovered by Robin Milner. Luis Damas contributed a close formal analysis and proof of the method in his PhD thesis. Among HM's more notable properties are its completeness and its ability to infer the most general type of a given program without programmer-supplied type annotations or other hints. Algorithm W is an efficient type inference method in practice and has been successfully applied on large code bases, although it has a high theoretical complexity. HM is preferably used for functional languages. It was first implemented as part of the type system of the programming language ML. Since then, HM has been extended in various ways, most notably with type class constraints like those in Haskell. Introduction As a type inference method, Hindley–Milner is able to deduce the types of variables, expressions and functions from programs written in an entirely untyped style. Being scope sensitive, it is not limited to deriving the types only from a small portion of source code, but rather from complete programs or modules. Being able to cope with parametric types, too, it is core to the type systems of many functional programming languages. It was first applied in this manner in the ML programming language. The origin is the type inference algorithm for the simply typed lambda calculus that was devised by Haskell Curry and Robert Feys in 1958. In 1969, J. Roger Hindley extended this work and proved that their algorithm always inferred the most general type. In 1978, Robin Milner, independently of Hindley's work, provided an equivalent algorithm, Algorithm W. In 1982, Luis Damas finally proved that Milner's algorithm is complete and extended it to support systems with polymorphic references. Monomorphism vs. polymorphism In the simply typed lambda calculus, types are either atomic type constants or function types of form . Such types are monomorphic. Typical examples are the types used in arithmetic values: 3 : Number add 3 4 : Number add : Number -> Number -> Number Contrary to this, the untyped lambda calculus is neutral to typing at all, and many of its functions can be meaningfully applied to all type of arguments. The trivial example is the identity function id ≡ λ x . x which simply returns whatever value it is applied to. Less trivial examples include parametric types like lists. While polymorphism in general means that operations accept values of more than one type, the polymorphism used here is parametric. One finds the notation of type schemes in the literature, too, emphasizing the parametric nature of the polymorphism. Additionally, constants may be typed with (quantified) type variables. E.g.: cons : forall a . a -> List a -> List a nil : forall a . List a id : forall a . a -> a Polymorphic types can become monomorphic by consistent substitution of their variables. Examples of monomorphic instances are: id' : String -> String nil' : List Number More generally, types are polymorphic when they contain type variables, while types without them are monomorphic. Contrary to the type systems used for example in Pascal (1970) or C (1972), which only support monomorphic types, HM is designed with emphasis on parametric polymorphism. 
The successors of the languages mentioned, like C++ (1985), focused on different types of polymorphism, namely subtyping in connection with object-oriented programming and overloading. While subtyping is incompatible with HM, a variant of systematic overloading is available in the HM-based type system of Haskell. Let-polymorphism When extending the type inference for the simply-typed lambda calculus towards polymorphism, one has to decide whether assigning a polymorphic type not only as type of an expression, but also as the type of a λ-bound variable is admissible. This would allow the generic identity type to be assigned to the variable 'id' in: (λ id . ... (id 3) ... (id "text") ... ) (λ x . x) Allowing this gives rise to the polymorphic lambda calculus; however, unfortunately, type inference in this system is not decidable. Instead, HM distinguishes variables that are immediately bound to an expression from more general λ-bound variables, calling the former let-bound variables, and allows polymorphic types to be assigned only to these. This leads to let-polymorphism where the above example takes the form let id = λ x . x in ... (id 3) ... (id "text") ... which can be typed with a polymorphic type for 'id'. As indicated, the expression syntax is extended to make the let-bound variables explicit, and by restricting the type system to allow only let-bound variable to have polymorphic types, while the parameters in lambda-abstractions must get a monomorphic type, type inference becomes decidable. Overview The remainder of this article proceeds as follows: The HM type system is defined. This is done by describing a deduction system that makes precise what expressions have what type, if any. From there, it works towards an implementation of the type inference method. After introducing a syntax-driven variant of the above deductive system, it sketches an efficient implementation (algorithm J), appealing mostly to the reader's metalogical intuition. Because it remains open whether algorithm J indeed realises the initial deduction system, a less efficient implementation (algorithm W), is introduced and its use in a proof is hinted. Finally, further topics related to the algorithm are discussed. The same description of the deduction system is used throughout, even for the two algorithms, to make the various forms in which the HM method is presented directly comparable. The Hindley–Milner type system The type system can be formally described by syntax rules that fix a language for the expressions, types, etc. The presentation here of such a syntax is not too formal, in that it is written down not to study the surface grammar, but rather the depth grammar, and leaves some syntactical details open. This form of presentation is usual. Building on this, typing rules are used to define how expressions and types are related. As before, the form used is a bit liberal. Syntax The expressions to be typed are exactly those of the lambda calculus extended with a let-expression as shown in the adjacent table. Parentheses can be used to disambiguate an expression. The application is left-binding and binds stronger than abstraction or the let-in construct. Types are syntactically split into two groups, monotypes and polytypes. Monotypes Monotypes always designate a particular type. Monotypes are syntactically represented as terms. Examples of monotypes include type constants like or , and parametric types like . 
The latter types are examples of applications of type functions, for example, from the set , where the superscript indicates the number of type parameters. The complete set of type functions is arbitrary in HM, except that it must contain at least , the type of functions. It is often written in infix notation for convenience. For example, a function mapping integers to strings has type . Again, parentheses can be used to disambiguate a type expression. The application binds stronger than the infix arrow, which is right-binding. Type variables are admitted as monotypes. Monotypes are not to be confused with monomorphic types, which exclude variables and allow only ground terms. Two monotypes are equal if they have identical terms. Polytypes Polytypes (or type schemes) are types containing variables bound by zero or more for-all quantifiers, e.g. . A function with polytype can map any value of the same type to itself, and the identity function is a value for this type. As another example, is the type of a function mapping all finite sets to integers. A function which returns the cardinality of a set would be a value of this type. Quantifiers can only appear top level. For instance, a type is excluded by the syntax of types. Also monotypes are included in the polytypes, thus a type has the general form , where and is a monotype. Equality of polytypes is up to reordering the quantification and renaming the quantified variables (-conversion). Further, quantified variables not occurring in the monotype can be dropped. Context and typing To meaningfully bring together the still disjoint parts (syntax expressions and types) a third part is needed: context. Syntactically, a context is a list of pairs , called assignments, assumptions or bindings, each pair stating that value variable has type All three parts combined give a typing judgment of the form , stating that under assumptions , the expression has type . Free type variables In a type , the symbol is the quantifier binding the type variables in the monotype . The variables are called quantified and any occurrence of a quantified type variable in is called bound and all unbound type variables in are called free. Additionally to the quantification in polytypes, type variables can also be bound by occurring in the context, but with the inverse effect on the right hand side of the . Such variables then behave like type constants there. Finally, a type variable may legally occur unbound in a typing, in which case they are implicitly all-quantified. The presence of both bound and unbound type variables is a bit uncommon in programming languages. Often, all type variables are implicitly treated all-quantified. For instance, one does not have clauses with free variables in Prolog. Likewise in Haskell, where all type variables implicitly occur quantified, i.e. a Haskell type a -> a means here. Related and also very uncommon is the binding effect of the right hand side of the assignments. Typically, the mixture of both bound and unbound type variables originate from the use of free variables in an expression. The constant function K = provides an example. It has the monotype . One can force polymorphism by . Herein, has the type . The free monotype variable originates from the type of the variable bound in the surrounding scope. has the type . One could imagine the free type variable in the type of be bound by the in the type of . But such a scoping cannot be expressed in HM. Rather, the binding is realized by the context. 
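The syntactic categories just introduced can be written down as ordinary data types. The following Haskell declarations are a minimal sketch; the type and constructor names (Mono, Poly, TypeFunc, Context) are illustrative and not taken from the literature:

type TypeVariable = String

data Mono = TypeVar TypeVariable         -- a type variable, admitted as a monotype
          | TypeFunc String [Mono]       -- application of a type function such as "->" or "List"
          deriving (Eq, Show)

data Poly = Forall [TypeVariable] Mono   -- zero or more top-level quantifiers over a monotype
          deriving Show

-- The type of cons, forall a . a -> List a -> List a:
consType :: Poly
consType = Forall ["a"]
  (TypeFunc "->" [ TypeVar "a"
                 , TypeFunc "->" [ TypeFunc "List" [TypeVar "a"]
                                 , TypeFunc "List" [TypeVar "a"] ] ])

-- A context is a list of assignments of polytypes to term variables:
type Context = [(String, Poly)]

Note that quantifiers can occur only at the outermost position of a Poly value, mirroring the restriction on polytypes described above.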
Type order Polymorphism means that one and the same expression can have (perhaps infinitely) many types. But in this type system, these types are not completely unrelated, but rather orchestrated by the parametric polymorphism. As an example, the identity can have as its type as well as or and many others, but not . The most general type for this function is , while the others are more specific and can be derived from the general one by consistently replacing another type for the type parameter, i.e. the quantified variable . The counter-example fails because the replacement is not consistent. The consistent replacement can be made formal by applying a substitution to the term of a type , written . As the example suggests, substitution is not only strongly related to an order, that expresses that a type is more or less special, but also with the all-quantification which allows the substitution to be applied. Formally, in HM, a type is more general than , formally , if some quantified variable in is consistently substituted such that one gains as shown in the side bar. This order is part of the type definition of the type system. In our previous example, applying the substitution would result in . While substituting a monomorphic (ground) type for a quantified variable is straight forward, substituting a polytype has some pitfalls caused by the presence of free variables. Most particularly, unbound variables must not be replaced. They are treated as constants here. Additionally, quantifications can only occur top-level. Substituting a parametric type, one has to lift its quantifiers. The table on the right makes the rule precise. Alternatively, consider an equivalent notation for the polytypes without quantifiers in which quantified variables are represented by a different set of symbols. In such a notation, the specialization reduces to plain consistent replacement of such variables. The relation is a partial order and is its smallest element. Principal type While specialization of a type scheme is one use of the order, it plays a crucial second role in the type system. Type inference with polymorphism faces the challenge of summarizing all possible types an expression may have. The order guarantees that such a summary exists as the most general type of the expression. Substitution in typings The type order defined above can be extended to typings because the implied all-quantification of typings enables consistent replacement: Contrary to the specialisation rule, this is not part of the definition, but like the implicit all-quantification rather a consequence of the type rules defined next. Free type variables in a typing serve as placeholders for possible refinement. The binding effect of the environment to free type variables on the right hand side of that prohibits their substitution in the specialisation rule is again that a replacement has to be consistent and would need to include the whole typing. This article will discuss four different rule sets: declarative system syntactical system algorithm J algorithm W Deductive system The syntax of HM is carried forward to the syntax of the inference rules that form the body of the formal system, by using the typings as judgments. Each of the rules define what conclusion could be drawn from what premises. Additionally to the judgments, some extra conditions introduced above might be used as premises, too. A proof using the rules is a sequence of judgments such that all premises are listed before a conclusion. 
The examples below show a possible format of proofs. From left to right, each line shows the conclusion, the of the rule applied and the premises, either by referring to an earlier line (number) if the premise is a judgment or by making the predicate explicit. Typing rules See also Typing rules The side box shows the deduction rules of the HM type system. One can roughly divide the rules into two groups: The first four rules (variable or function access), (application, i.e. function call with one parameter), (abstraction, i.e. function declaration) and (variable declaration) are centered around the syntax, presenting one rule for each of the expression forms. Their meaning is obvious at the first glance, as they decompose each expression, prove their sub-expressions and finally combine the individual types found in the premises to the type in the conclusion. The second group is formed by the remaining two rules and . They handle specialization and generalization of types. While the rule should be clear from the section on specialization above, complements the former, working in the opposite direction. It allows generalization, i.e. to quantify monotype variables not bound in the context. The following two examples exercise the rule system in action. Since both the expression and the type are given, they are a type-checking use of the rules. Example: A proof for where , could be written Example: To demonstrate generalization, is shown below: Let-polymorphism Not visible immediately, the rule set encodes a regulation under which circumstances a type might be generalized or not by a slightly varying use of mono- and polytypes in the rules and . Remember that and denote poly- and monotypes respectively. In rule , the value variable of the parameter of the function is added to the context with a monomorphic type through the premise , while in the rule , the variable enters the environment in polymorphic form . Though in both cases the presence of in the context prevents the use of the generalisation rule for any free variable in the assignment, this regulation forces the type of parameter in a -expression to remain monomorphic, while in a let-expression, the variable could be introduced polymorphic, making specializations possible. As a consequence of this regulation, cannot be typed, since the parameter is in a monomorphic position, while has type , because has been introduced in a let-expression and is treated polymorphic therefore. Generalization rule The generalisation rule is also worth a closer look. Here, the all-quantification implicit in the premise is simply moved to the right hand side of in the conclusion, bound by an explicit universal quantifier. This is possible, since does not occur free in the context. Again, while this makes the generalization rule plausible, it is not really a consequence. On the contrary, the generalization rule is part of the definition of HM's type system and the implicit all-quantification a consequence. An inference algorithm Now that the deduction system of HM is at hand, one could present an algorithm and validate it with respect to the rules. Alternatively, it might be possible to derive it by taking a closer look on how the rules interact and proof are formed. This is done in the remainder of this article focusing on the possible decisions one can make while proving a typing. 
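Before turning to those decisions, the let/λ regulation encoded by the rules can be observed directly in Haskell, whose type system is HM-based. The following sketch is illustrative and not part of the formal development:

-- Accepted: f is let-bound, so it is generalized to forall a . a -> a
-- and may be used at two different types in the body.
ok :: (Integer, String)
ok = let f = \x -> x
     in (f 3, f "text")

-- Rejected by HM inference: g is a lambda-bound parameter and therefore stays
-- monomorphic, so it cannot be applied to both a number and a string.
-- bad = (\g -> (g 3, g "text")) (\x -> x)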
Degrees of freedom choosing the rules Isolating the points in a proof, where no decision is possible at all, the first group of rules centered around the syntax leaves no choice since to each syntactical rule corresponds a unique typing rule, which determines a part of the proof, while between the conclusion and the premises of these fixed parts chains of and could occur. Such a chain could also exist between the conclusion of the proof and the rule for topmost expression. All proofs must have the so sketched shape. Because the only choice in a proof with respect of rule selection are the and chains, the form of the proof suggests the question whether it can be made more precise, where these chains might not be needed. This is in fact possible and leads to a variant of the rules system with no such rules. Syntax-directed rule system A contemporary treatment of HM uses a purely syntax-directed rule system due to Clement as an intermediate step. In this system, the specialization is located directly after the original rule and merged into it, while the generalization becomes part of the rule. There the generalization is also determined to always produce the most general type by introducing the function , which quantifies all monotype variables not bound in . Formally, to validate that this new rule system is equivalent to the original , one has to show that , which decomposes into two sub-proofs: (Consistency) (Completeness) While consistency can be seen by decomposing the rules and of into proofs in , it is likely visible that is incomplete, as one cannot show in , for instance, but only . An only slightly weaker version of completeness is provable though, namely implying, one can derive the principal type for an expression in allowing us to generalize the proof in the end. Comparing and , now only monotypes appear in the judgments of all rules. Additionally, the shape of any possible proof with the deduction system is now identical to the shape of the expression (both seen as trees). Thus the expression fully determines the shape of the proof. In the shape would likely be determined with respect to all rules except and , which allow building arbitrarily long branches (chains) between the other nodes. Degrees of freedom instantiating the rules Now that the shape of the proof is known, one is already close to formulating a type inference algorithm. Because any proof for a given expression must have the same shape, one can assume the monotypes in the proof's judgements to be undetermined and consider how to determine them. Here, the substitution (specialisation) order comes into play. Although at the first glance one cannot determine the types locally, the hope is that it is possible to refine them with the help of the order while traversing the proof tree, additionally assuming, because the resulting algorithm is to become an inference method, that the type in any premise will be determined as the best possible. And in fact, one can, as looking at the rules of suggests: : The critical choice is . At this point, nothing is known about , so one can only assume the most general type, which is . The plan is to specialize the type if it should become necessary. Unfortunately, a polytype is not permitted in this place, so some has to do for the moment. To avoid unwanted captures, a type variable not yet in the proof is a safe choice. Additionally, one has to keep in mind that this monotype is not yet fixed, but might be further refined. : The choice is how to refine . 
Because any choice of a type here depends on the usage of the variable, which is not locally known, the safest bet is the most general one. Using the same method as above one can instantiate all quantified variables in with fresh monotype variables, again keeping them open to further refinement. : The rule does not leave any choice. Done. : Only the application rule might force a refinement to the variables "opened" so far, as required by both premises. The first premise forces the outcome of the inference to be of the form . If it is, then fine. One can later pick its for the result. If not, it might be an open variable. Then this can be refined to the required form with two new variables as before. Otherwise, the type checking fails because the first premise inferred a type which is not and cannot be made into a function type. The second premise requires that the inferred type is equal to of the first premise. Now there are two possibly different types, perhaps with open type variables, at hand to compare and to make equal if it is possible. If it is, a refinement is found, and if not, a type error is detected again. An effective method is known to "make two terms equal" by substitution, Robinson's Unification in combination with the so-called Union-Find algorithm. To briefly summarize the union-find algorithm, given the set of all types in a proof, it allows one to group them together into equivalence classes by means of a procedure and to pick a representative for each such class using a procedure. Emphasizing the word procedure in the sense of side effect, we're clearly leaving the realm of logic in order to prepare an effective algorithm. The representative of a is determined such that, if both and are type variables then the representative is arbitrarily one of them, but while uniting a variable and a term, the term becomes the representative. Assuming an implementation of union-find at hand, one can formulate the unification of two monotypes as follows: unify(ta, tb): ta = find(ta) tb = find(tb) if both ta,tb are terms of the form D p1..pn with identical D,n then unify(ta[i], tb[i]) for each corresponding ith parameter else if at least one of ta,tb is a type variable then union(ta, tb) else error 'types do not match' Now having a sketch of an inference algorithm at hand, a more formal presentation is given in the next section. It is described in Milner P. 370 ff. as algorithm J. Algorithm J The presentation of Algorithm J is a misuse of the notation of logical rules, since it includes side effects but allows a direct comparison with while expressing an efficient implementation at the same time. The rules now specify a procedure with parameters yielding in the conclusion where the execution of the premises proceeds from left to right. The procedure specializes the polytype by copying the term and replacing the bound type variables consistently by new monotype variables. '' produces a new monotype variable. Likely, has to copy the type introducing new variables for the quantification to avoid unwanted captures. Overall, the algorithm now proceeds by always making the most general choice leaving the specialization to the unification, which by itself produces the most general result. As noted above, the final result has to be generalized to in the end, to gain the most general type for a given expression. 
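The following Haskell sketch makes the find/union/unify procedures described above executable. Mutable STRefs stand in for the union-find structure: a type variable is either unbound or points at its representative, and binding a variable to a term makes the term the representative, matching the convention above. The constructor names are illustrative, and the occurs check discussed below is omitted:

import Control.Monad (zipWithM_)
import Control.Monad.ST
import Data.STRef

data Type s = TVar (STRef s (Maybe (Type s)))   -- a variable, possibly already bound
            | TCon String [Type s]              -- application of a type function

-- A fresh, unbound monotype variable.
newVar :: ST s (Type s)
newVar = TVar <$> newSTRef Nothing

-- Follow variable bindings until an unbound variable or a term is reached.
find :: Type s -> ST s (Type s)
find t@(TVar r) = do
  b <- readSTRef r
  case b of
    Nothing -> return t
    Just t' -> find t'
find t = return t

unify :: Type s -> Type s -> ST s ()
unify a b = do
  ta <- find a
  tb <- find b
  case (ta, tb) of
    (TCon d ps, TCon d' ps')
      | d == d' && length ps == length ps' -> zipWithM_ unify ps ps'
    (TVar r, TVar r') | r == r' -> return ()       -- already the same variable
    (TVar r, _) -> writeSTRef r (Just tb)          -- union: bind the variable; a term becomes the representative
    (_, TVar r) -> writeSTRef r (Just ta)
    _           -> error "types do not match"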
Because the procedures used in the algorithm have nearly O(1) cost, the overall cost of the algorithm is close to linear in the size of the expression for which a type is to be inferred. This is in strong contrast to many other attempts to derive type inference algorithms, which often came out to be NP-hard, if not undecidable with respect to termination. Thus the HM performs as well as the best fully informed type-checking algorithms can. Type-checking here means that an algorithm does not have to find a proof, but only to validate a given one. Efficiency is slightly reduced because the binding of type variables in the context has to be maintained to allow computation of and enable an occurs check to prevent the building of recursive types during . An example of such a case is , for which no type can be derived using HM. Practically, types are only small terms and do not build up expanding structures. Thus, in complexity analysis, one can treat comparing them as a constant, retaining O(1) costs. Proving the algorithm In the previous section, while sketching the algorithm its proof was hinted at with metalogical argumentation. While this leads to an efficient algorithm J, it is not clear whether the algorithm properly reflects the deduction systems D or S which serve as a semantic base line. The most critical point in the above argumentation is the refinement of monotype variables bound by the context. For instance, the algorithm boldly changes the context while inferring e.g. , because the monotype variable added to the context for the parameter later needs to be refined to when handling application. The problem is that the deduction rules do not allow such a refinement. Arguing that the refined type could have been added earlier instead of the monotype variable is an expedient at best. The key to reaching a formally satisfying argument is to properly include the context within the refinement. Formally, typing is compatible with substitution of free type variables. To refine the free variables thus means to refine the whole typing. Algorithm W From there, a proof of algorithm J leads to algorithm W, which only makes the side effects imposed by the procedure explicit by expressing its serial composition by means of the substitutions . The presentation of algorithm W in the sidebar still makes use of side effects in the operations set in italic, but these are now limited to generating fresh symbols. The form of judgement is , denoting a function with a context and expression as parameter producing a monotype together with a substitution. is a side-effect free version of producing a substitution which is the most general unifier. While algorithm W is normally considered to be the HM algorithm and is often directly presented after the rule system in literature, its purpose is described by Milner on P. 369 as follows: As it stands, W is hardly an efficient algorithm; substitutions are applied too often. It was formulated to aid the proof of soundness. We now present a simpler algorithm J which simulates W in a precise sense.'' While he considered W more complicated and less efficient, he presented it in his publication before J. It has its merits when side effects are unavailable or unwanted. W is also needed to prove completeness, which is factored by him into the soundness proof. Proof obligations Before formulating the proof obligations, a deviation between the rules systems D and S and the algorithms presented needs to be emphasized. 
While the development above sort of misused the monotypes as "open" proof variables, the possibility that proper monotype variables might be harmed was sidestepped by introducing fresh variables and hoping for the best. But there's a catch: One of the promises made was that these fresh variables would be "kept in mind" as such. This promise is not fulfilled by the algorithm. Having a context , the expression cannot be typed in either or , but the algorithms come up with the type , where W additionally delivers the substitution , meaning that the algorithm fails to detect all type errors. This omission can easily be fixed by more carefully distinguishing proof variables and monotype variables. The authors were well aware of the problem but decided not to fix it. One might assume a pragmatic reason behind this. While more properly implementing the type inference would have enabled the algorithm to deal with abstract monotypes, they were not needed for the intended application where none of the items in a preexisting context have free variables. In this light, the unneeded complication was dropped in favor of a simpler algorithm. The remaining downside is that the proof of the algorithm with respect to the rule system is less general and can only be made for contexts with as a side condition. The side condition in the completeness obligation addresses how the deduction may give many types, while the algorithm always produces one. At the same time, the side condition demands that the type inferred is actually the most general. To properly prove the obligations one needs to strengthen them first to allow activating the substitution lemma threading the substitution through and . From there, the proofs are by induction over the expression. Another proof obligation is the substitution lemma itself, i.e. the substitution of the typing, which finally establishes the all-quantification. The later cannot formally be proven, since no such syntax is at hand. Extensions Recursive definitions To make programming practical recursive functions are needed. A central property of the lambda calculus is that recursive definitions are not directly available, but can instead be expressed with a fixed point combinator. But unfortunately, the fixpoint combinator cannot be formulated in a typed version of the lambda calculus without having a disastrous effect on the system as outlined below. Typing rule The original paper shows recursion can be realized by a combinator . A possible recursive definition could thus be formulated as . Alternatively an extension of the expression syntax and an extra typing rule is possible: where basically merging and while including the recursively defined variables in monotype positions where they occur to the left of the but as polytypes to the right of it. Consequences While the above is straightforward it does come at a price. Type theory connects lambda calculus with computation and logic. The easy modification above has effects on both: The strong normalisation property is invalidated, because non-terminating terms can be formulated. The logic collapses because the type becomes inhabited. Overloading Overloading means that different functions can be defined and used with the same name. Most programming languages at least provide overloading with the built-in arithmetic operations (+, <, etc.), to allow the programmer to write arithmetic expressions in the same form, even for different numerical types like int or real. 
Because a mixture of these different types within the same expression also demands for implicit conversion, overloading especially for these operations is often built into the programming language itself. In some languages, this feature is generalized and made available to the user, e.g. in C++. While ad hoc overloading has been avoided in functional programming for the computation costs both in type checking and inference, a means to systematise overloading has been introduced that resembles both in form and naming to object oriented programming, but works one level upwards. "Instances" in this systematic are not objects (i.e. on value level), but rather types. The quicksort example mentioned in the introduction uses the overloading in the orders, having the following type annotation in Haskell: quickSort :: Ord a => [a] -> [a] Herein, the type a is not only polymorphic, but also restricted to be an instance of some type class Ord, that provides the order predicates < and >= used in the functions body. The proper implementations of these predicates are then passed to quicksorts as additional parameters, as soon as quicksort is used on more concrete types providing a single implementation of the overloaded function quickSort. Because the "classes" only allow a single type as their argument, the resulting type system can still provide inference. Additionally, the type classes can then be equipped with some kind of overloading order allowing one to arrange the classes as a lattice. Higher-order types Parametric polymorphism implies that types themselves are passed as parameters as if they were proper values. Passed as arguments to a proper functions, but also into "type functions" as in the "parametric" type constants, leads to the question how to more properly type types themselves. Higher-order types are used to create an even more expressive type system. Unfortunately, unification is no longer decidable in the presence of meta types, rendering type inference impossible in this extend of generality. Additionally, assuming a type of all types that includes itself as type leads into a paradox, as in the set of all sets, so one must proceed in steps of levels of abstraction. Research in second order lambda calculus, one step upwards, showed that type inference is undecidable in this generality. Haskell introduces one higher level named kind. In standard Haskell, kinds are inferred and used for little more than to describe the arity of type constructors. e.g. a list type constructor is thought of as mapping a type (the type of its elements) to another type (the type of the list containing said elements); notationally this is expressed as . Language extensions are available which extend kinds to emulate features of a dependent type system. Subtyping Attempts to combine subtyping and type inference have caused quite some frustration. It is straightforward to accumulate and propagate subtyping constraints (as opposed to type equality constraints), making the resulting constraints part of the inferred typing schemes, for example , where is a constraint on the type variable . However, because type variables are no longer unified eagerly in this approach, it tends to generate large and unwieldy typing schemes containing many useless type variables and constraints, making them hard to read and understand. 
Therefore, considerable effort was put into simplifying such typing schemes and their constraints, using techniques similar to those of nondeterministic finite automaton (NFA) simplification (useful in the presence of inferred recursive types). More recently, Dolan and Mycroft formalized the relationship between typing scheme simplification and NFA simplification and showed that an algebraic take on the formalization of subtyping allowed generating compact principal typing schemes for an ML-like language (called MLsub). Notably, their proposed typing scheme used a restricted form of union and intersection types instead of explicit constraints. Parreaux later claimed that this algebraic formulation was equivalent to a relatively simple algorithm resembling Algorithm W, and that the use of union and intersection types was not essential. On the other hand, type inference has proven more difficult in the context of object-oriented programming languages, because object methods tend to require first-class polymorphism in the style of System F (where type inference is undecidable) and because of features like F-bounded polymorphism. Consequently, type systems with subtyping enabling object-oriented programming, such as Cardelli's system , do not support HM-style type inference. Row polymorphism can be used as an alternative to subtyping for supporting language features like structural records. While this style of polymorphism is less flexible than subtyping in some ways, notably requiring more polymorphism than strictly necessary to cope with the lack of directionality in type constraints, it has the advantage that it can be integrated with the standard HM algorithms quite easily. Notes References External links A literate Haskell implementation of Algorithm W along with its source code on GitHub. A simple implementation of Hindley-Milner algorithm in Python. Type systems Type theory Type inference Lambda calculus Theoretical computer science Formal methods 1969 in computing 1978 in computing 1985 in computing Algorithms
Hindley–Milner type system
[ "Mathematics", "Engineering" ]
7,484
[ "Mathematical structures", "Type inference", "Applied mathematics", "Algorithms", "Mathematical logic", "Mathematical objects", "Theoretical computer science", "Type systems", "Type theory", "Software engineering", "Formal methods" ]
32,622,165
https://en.wikipedia.org/wiki/Ambiguity%20resolution
Ambiguity resolution is used to find the value of a measurement that requires modulo sampling. This is required for pulse-Doppler radar signal processing. Measurements Some types of measurements introduce an unavoidable modulo operation in the measurement process. This happens with all radar systems. Radar aliasing happens when: Pulse repetition frequency (PRF) is too low to sample Doppler frequency directly PRF is too high to sample range directly Pulse Doppler sonar uses similar principles to measure position and velocity involving liquids. Radar Systems Radar systems operating at a PRF below about 3 kHz pulse rate produce true range, but produce ambiguous target speed. Radar systems operating at a PRF above 30 kHz produce true target speed, but produce ambiguous target range. Medium PRF systems produce both ambiguous range measurement and ambiguous radial speed measurement using PRF from 3 kHz to 30 kHz. Ambiguity resolution finds true range and true speed by using ambiguous range and ambiguous speed measurements with multiple PRF. Doppler Measurements Doppler systems involve velocity measurements similar to the kind of measurements made using a strobe light. For example, a strobe light can be used as a tachometer to measure rotational velocity for rotating machinery. Strobe light measurements can be inaccurate because the light may be flashing 2 or 3 times faster than shaft rotation speed. The user can only produce an accurate measurement by increasing the pulse rate starting near zero until pulses are fast enough to make the rotating object appear stationary. Radar and sonar systems use the same phenomenon to detect target speed. Operation The ambiguity region is shown graphically in this image. The x axis is range (left-right). The y axis is radial speed. The z axis is amplitude (up-down). The shape of the rectangles changes when the PRF changes. The unambiguous zone is in the lower left corner. All of the other blocks have ambiguous range or ambiguous radial velocity. Pulse Doppler radar relies on medium pulse repetition frequency (PRF) from about 3 kHz to 30 kHz. Each transmit pulse is separated by between 5 km and 50 km of distance. Range Ambiguity Resolution The received signals from multiple PRF are compared using the range ambiguity resolution process. Each range sample is converted from time domain I/Q samples into frequency domain. Older systems use individual filters for frequency filtering. Newer systems use digital sampling and a Fast Fourier transform or Discrete Fourier transform instead of physical filters. Each filter converts time samples into a frequency spectrum. Each spectrum frequency corresponds with a different speed. These samples are thresholded to obtain ambiguous range for several different PRF. Frequency Ambiguity Resolution The received signals are also compared using the frequency ambiguity resolution process. A blind velocity occurs when Doppler frequency falls close to the PRF. This folds the return signal into the same filter as stationary clutter reflections. Rapidly alternating different PRF while scanning eliminates blind frequencies. Further reading References Radar Doppler effects Electromagnetism
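To make the multiple-PRF process described above concrete, the following toy Haskell sketch (not from the article) recovers a true range from ambiguous measurements, assuming idealized, noise-free observations expressed in whole range bins:

import Data.List (find)

-- For a PRF whose unambiguous interval is n range bins, a target in bin r is
-- observed in bin r `mod` n. Given observations (n, measured) at several PRFs,
-- take the smallest candidate bin consistent with all of them.
resolveRange :: Int -> [(Int, Int)] -> Maybe Int
resolveRange maxBin observations = find consistent [0 .. maxBin]
  where
    consistent r = all (\(n, m) -> r `mod` n == m) observations

-- Example: a target in bin 73 seen with unambiguous intervals of 20 and 27 bins
-- appears in bins 13 and 19; resolveRange 500 [(20, 13), (27, 19)] returns Just 73.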
Ambiguity resolution
[ "Physics" ]
599
[ "Electromagnetism", "Physical phenomena", "Astrophysics", "Fundamental interactions", "Doppler effects" ]
45,714,927
https://en.wikipedia.org/wiki/Mercury%20nano-trap%20water%20filtration
Mercury nano-trap water filtration is a method of decontaminating water of mercury. Mercury is considered one of the most notorious metal pollutants present in food, water, air and soil, but options for eliminating it are limited. Heavy metals such as mercury occur in the Earth's crust and dissolve into groundwater through natural processes and pH changes in the soil. Traditional methods used to extract mercury from natural water sources and industrial wastewater include chemical precipitation, amalgamation, reverse osmosis, membrane filtration and photochemical methods. However, these methods are expensive, time-consuming, and inefficient, hence the need for a nanofiltration technology that overcomes these issues. Nanofiltration technology is very efficient at removing mercury species because nanomaterials have a high surface-area-to-volume ratio and are easily chemically functionalized. Additionally, the Brownian motion of nanomaterials allows them to scan large volumes of solvent in short times. Many nanoparticles (NPs) can be used as scavengers to eliminate mercury species via redox reactions, such as selenium NPs, manganese dioxide nanowhiskers, carbon nanotube–silver NP composites, silver NPs, silver NP-decorated silica spheres, and gold NP-based materials. Among these adsorbents, citrate-capped gold NP-based materials have been used intensively to capture mercury species from natural water. Instrumentation High-resolution transmission electron microscopy (HRTEM) is used to obtain the HRTEM images (FEI Tecnai G2 F20 S-Twin working at 200 kV), and a double-beam UV-visible spectrophotometer is used to record the extinction spectra of the gold NPs. A scanning electron microscope combined with an energy-dispersive X-ray detector (EDX) provides EDX spectra, while a submicrometer particle size analyzer and a Delsa Nano zeta potential analyzer measure the size and zeta potential of the gold NPs. Inductively coupled plasma mass spectrometry (ICP-MS) quantifies the mercury species captured by the gold NPs; the linear range for Hg2+ should be between 2.5 and 50 nM. Powder X-ray diffraction patterns are measured using a diffractometer with Cu Kα radiation. Magnetometry is performed with a superconducting quantum interference device, and the Fourier-transform infrared spectrum is measured with a Nicolet 6700 FT-IR spectrometer. Removal of mercury species The application of an external magnetic field to the Fe3O4 NP composites can be used to remove the citrate-capped gold NPs or Tween 20-Au NPs from the water. The quantity of Hg2+ is determined by ICP-MS. Elimination efficiency is calculated as: Elimination Efficiency (%) = [(Co – C)/Co] x 100 The equilibrium absorption capacity is determined by: qe = (Co – Ce) x (V/W) where qe is the equilibrium absorption capacity, Co is the initial and C the residual Hg2+ concentration, Ce is the equilibrium Hg2+ concentration, V is the volume of the solution and W is the weight of Tween 20-Au NPs. Other metals in place of mercury can be used to determine the reusability of the nanofiltration method. A simple nanofiltration process is illustrated in Figure 2. When purifying mercury from sea water, a decrease in the volume-to-surface-area ratio leads to a decline in elimination efficiency. The cost efficiency of the nano-trap process is reflected in the fact that the materials can be reused to treat more water. 
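The two formulas above translate directly into code. A minimal Haskell sketch, assuming all concentrations are given in the same units and V and W in whatever consistent units the experiment uses:

-- Elimination Efficiency (%) = [(Co - C)/Co] x 100
eliminationEfficiency :: Double -> Double -> Double
eliminationEfficiency c0 c = (c0 - c) / c0 * 100

-- qe = (Co - Ce) x (V/W)
equilibriumCapacity :: Double -> Double -> Double -> Double -> Double
equilibriumCapacity c0 ce v w = (c0 - ce) * v / w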
Tween 20-Au NPs are rapid, efficient and selective in capturing Hg2+ from water with high salt concentrations, because the gold NPs catalyze the citrate-ion-induced reduction of Hg2+ to Hg0. ICP-MS is used to quantify the remaining Hg2+ concentration and thus the elimination efficiency. Figure 3 illustrates the nano-trap filtration process. Nanomaterials are advantageous for removing mercury from water because of their high surface-area-to-volume ratio and because they are easily chemically functionalized. Nanomaterials capture five times more mercury than the maximum predicted for previous mercury filtration systems. Compared to other NP-based methods for Hg2+ removal from high-salt matrices, the Tween 20-Au NP approach is rapid, selective and highly efficient. For purification of drinking water, it is advisable to use nano-tablets with the filters because they are easily accessible and cost-efficient. See also Aquatic toxicology American Water Works Association Ecological sanitation List of waste water treatment technologies Microfiltration National Rural Water Association Sewage treatment Water conservation Water treatment References Water technology Water pollution Water treatment Nanotechnology and the environment
Mercury nano-trap water filtration
[ "Chemistry", "Materials_science", "Engineering", "Environmental_science" ]
1,036
[ "Water treatment", "Water pollution", "Nanotechnology and the environment", "Environmental engineering", "Water technology", "Nanotechnology" ]
45,715,577
https://en.wikipedia.org/wiki/Canned%20cycle
A canned cycle is a way of conveniently performing repetitive CNC machine operations. Canned cycles automate certain machining functions such as drilling, boring, threading, pocketing, etc... Canned cycles are so called because they allow a concise way to program a machine to produce a feature of a part. A canned cycle is also known as a fixed cycle. A canned cycle is usually permanently stored as a pre-program in the machine's controller and cannot be altered by the user. Programming format The operation of a CNC machine tool is typically controlled by a "part program" written a language known as G-code. Canned cycles are similar in concept to functions in a traditional computer language, and can be compared also to G-code macros. The format for a canned cycle consists of a series of parameters specified with a letter and a numerical value. The letter is referred to as an "address". (This use of the term "address" may be unfamiliar to programmers of conventional computers. It arises because in early and primitive machine controllers, the binary representation of the letter formed a physical address at which the controller would store the value following.) N.. G.. G.. X.. Y.. R.. P.. Q.. I.. J.. Z.. F.. H.. S.. L.. A.. B.. C.. D.. These addresses and values tell the machine where and how to move. The syntax of a canned cycle may vary depending on the brand of the control. In general, the following "words" will be in a canned cycle "block". N= Block number G98 or G99= Tool retract to R-plane or prior position G73, G74, G76, G81-89= The function to perform, for example, G84 specifies a right-hand tapping cycle. X= Position of hole or pocket in X axis Y= Position of hole or pocket in Y axis R= Z axis start position, also known as the retract plane or "R-plane". P= Dwell time (in milliseconds, where applicable) Q= Depth of each peck (G73, G83) or amount of shift for boring (G76, G87) I= Shift amount in X direction J= Shift amount in Y direction Z= Shift amount in Z direction (Negative because cutting is done in negative Z direction) F= Feed rate H= Feed rate for finishing cut S= Spindle speed L= Number of cycle repetitions M= Miscellaneous functions A, B, C and D are used for Rectangular pocket machining. A= Machining allowance B= Step over C= Step depth D= Additional depth of cut for first pass G80 is used for cancelling the currently selected canned cycle as G-codes for canned cycles are modal. If the machine control supports it, the user may create their own custom canned cycles. As there are numbers not already used for G-codes, new canned cycle programs can be stored at these vacant locations. This may be done on the popular Fanuc control with a technique referred to as "macro programming", after the Fanuc Macro-B language. (The term "Macro programming" in this sense is distinctly different from its more common use to refer to the action of programming a macro in G-code.) Fanuc controllers (and most others, because Fanuc compatibility is a de facto standard) support the following fixed cycles: Source: Smid 2008 These are examples used on a mill. Some of them have different functions on a lathe. Advantages The conciseness of canned cycles allows for quicker and easier development of programs at the machine. As canned cycles reduce the number of blocks in a program, the storage space occupied by the program is less and the programmer escapes the tedium of writing the same instructions again and again. 
This reduces the potential for errors, and locating any errors that do exist is easier in a shorter program. Job setup is also facilitated by canned cycles. Some canned cycles are designed for use by machine tool operators for simple job set-up and measuring tasks. See also CNC Drilling Boring Threading Part Program G-code References Bibliography Computer-aided engineering
Canned cycle
[ "Engineering" ]
895
[ "Construction", "Industrial engineering", "Computer-aided engineering" ]
45,716,143
https://en.wikipedia.org/wiki/Chrysocolla%20%28gold-solder%29
Chrysocolla (gold-solder, Greek ; Latin chrȳsocolla, oerugo, santerna; Syriac "tankar" (Bar Bahlul), alchemical symbol 🜸), also known as "goldsmith's solder" and "solder of Macedonia" (Pseudo-Democritus), denotes: The soldering of gold. The materials used for soldering gold, as well as certain gold alloys, still used by goldsmiths. Martin Ruland (Lexicon alchemiae) explains chrysocolla as molybdochalkos, a copper-lead alloy. In Leyden papyrus X recipe 31 chrysocolla is an alloy composed of 4 parts copper, 2 parts asem (a kind of tin-copper alloy) and 1 part gold. Argyrochrysocolla appears to designate an alloy of gold and silver. A mix of copper and iron salts, produced by the dissolution of a metallic vein by water, either spontaneously or by introducing water into a mine from winter to summer, and letting the mass dry during summer, which results in a yellow product. Malachite (green carbonate of copper), and other alkaline copper salts of green colour. Azurite, the blue congener of malachite, was known as armenion, as it was mined in Armenia. On heating, malachite decomposes to carbon dioxide and copper, the latter inducing the soldering effect. According to an older opinion, chrysocolla was borax, which had been found in ancient gold foundries and is still used for soldering gold. Aristoteles (De mirabilibus) mentions that the Chalcedonian island Demonesus has a mine of cyan () and chrysocolla. Theophrastus (De lapidibus) describes chrysocolla as a kind of "false emerald" found in gold and copper mines, used for soldering gold. Pliny (Historia Naturalis) and Celsus mention that chrysocolla is extracted along with gold, and is used as a pigment and medicament. Dioscorides (De materia medica) describes the purification of the ore and its use in healing wounds, also noting its poisonous effect. Greenish copper salts obtained by boiling infant's urine and natron in copper vessels. The resulting copper salts were scraped off and used for soldering gold. Infant's urine (Greek , Latin ) appears in many ancient recipes (Dioscorides, Pliny, Celsus, etc.) as a source of phosphates and ammonia. A particular copper hydrosilicate is named chrysocolla by modern mineralogists. See also Chrysoberyl Chrysolite Chrysoprase Chrysotile Sarcocolla References History of metallurgy Alchemical substances
Chrysocolla (gold-solder)
[ "Chemistry", "Materials_science" ]
594
[ "Metallurgy", "History of metallurgy", "Alchemical substances" ]
45,717,790
https://en.wikipedia.org/wiki/Design-to-cost
Design-to-Cost (DTC), as part of cost management techniques, describes a systematic approach to controlling the costs of product development and manufacturing. The basic idea is that costs are designed "into the product", even from the earliest concept decisions on and are difficult to remove later. These costs are seen as an equally important parameter besides feature scope and schedule, the three taken together yielding the well-known project triangle. By taking the right design decisions as early as during the initiation and concept phase of the product life-cycle, unnecessary costs at later stages can be avoided. But DTC also tries to capture the necessary measures for cost control during the complete development cycle. In DTC, cost considerations also become part of extended requirements specifications. In contrast to the closely related target costing, DTC does not mean a product will exactly reach a defined cost, rather, it is about "considering cost as a design parameter in your product development activities". DTC can also be contrasted with Design-to-value which emphasizes the value that can be delivered to the customer, instead of the production costs for the manufacturer or company. See also Cost reduction Total cost of ownership References Manufacturing Product development Cost engineering
Design-to-cost
[ "Engineering" ]
243
[ "Cost engineering", "Manufacturing", "Mechanical engineering" ]
45,765,974
https://en.wikipedia.org/wiki/Robert%20E.%20Sheriff
Robert E. Sheriff (19 April 1922 – 19 November 2014) was an American geophysicist best known for writing the comprehensive geophysical reference, Encyclopedic Dictionary of Exploration Geophysics. His main research interests included the seismic detailing of reservoirs, in 3-D seismic interpretation and seismic stratigraphy, and practical applications of geophysical (especially seismic) methods. Hua-Wei Zhou, Department Chair of the Department of Earth and Atmospheric Sciences, said about Sheriff: “…a giant figure in the world of exploration geophysics… When I think about Bob, a number of key words pop up in my mind: kindness, honesty, hardworking, seeking perfection, generosity and wisdom.” Career Sheriff worked on uranium isotope separation for the World War II Manhattan Project in Oak Ridge, Tennessee. He worked on this project from 1943-1946. After receiving his masters and PhD in physics, Sheriff accepted a job at Standard of California (Chevron) to work in their new geophysical research lab. Serving in a variety of functions, including managing geophysical crews and drilling activity overseas, Sheriff worked at Chevron for 25 years. Sheriff went on to work 5 years as Senior Vice-President of Development with Seiscom-Delta Corporation before moving to academia at the University of Houston. He served as a tenured professor in the Department of Earth and Atmospheric Sciences for 23 years before retiring. He served as Professor Emeritus after his retirement. Sheriff was one of the originators of the geophysical topic, attributes, and is coauthor of what some consider the seminal article in the field, Complex trace analysis (GEOPHYSICS, 1979). Society of Exploration Geophysicists Sheriff served as First Vice President for Society of Exploration Geophysicists (SEG) from 1972-73. In 1969, Sheriff received the SEG Virgil Kauffman Medal for his initial publication of the Encyclopedic Dictionary of Exploration Geophysics. Sheriff received SEG's highest award in 1998, the Maurice Ewing Award, for his lifetime achievements in geophysics. In 2006, SEG members voted the 1973 dictionary as the top geophysical book ever published for the industry, citing a copy could be found in every working exploration office. Lee Lawyer said in Sheriff’s citation for the Maurice Ewing Medal that Sheriff was in the forefront of such major trends in geophysical theory as hydrocarbon indicators, sequence stratigraphy, and reservoir geophysics. In addition, he was responsible for the first poster session at an SEG Annual Meeting, arranged the SEG technical presentations at the first two Offshore Technology Conferences, and co-organized industry-academic seminars to expedite transfer of knowledge between campus and industry. Encyclopedic Dictionary of Applied Geophysics Sheriff created a 30-page pamphlet to support his training classes as well as help train employees on the latest concepts in geophysics. After a past SEG president received the pamphlet, the president recommended to the SEG membership that the document be expanded. This document served as the foundation for what would become the Encyclopedic Dictionary. It transformed from a 30-page glossary to 429 pages in its 4th edition. Each subsequent edition of the dictionary saw significant increases in the number of terms. The third edition (published in 1991) contained 20% more entries than the second (published in 1984). The fourth (published in 2002) had 61 more pages of definitions than the third. 
For over four decades Sheriff updated the dictionary to reflect the latest technology and research in geophysics. Honors 1998: Maurice Ewing medal of SEG for lifetime work in geophysics 1997: Quest for Excellence Award, University of Houston College of Natural Sciences and Mathematics 1996: Special Commendation Award, Society of Exploration Geophysicists 1993: Hayden Williams Fellow, Curtin Univ. of Tech., Perth, W.Australia 1993: Distinguished lecturer, Australian Society of Exploration Geophy. 1980: Honorary Membership in Geophysical Society of Houston 1979: Honorary Membership in Society of Exploration Geophysicists 1977: Distinguished lecturer, Society of Exploration Geophysicists 1969: Kauffman Gold Medal of SEG for outstanding contribution to geophysics Principle books Sheriff, Robert E., and Lloyd P. Geldart. Problems in Exploration Seismology and their solutions. Tulsa, OK: Society of Exploration Geophysicists, 2004. Sheriff, Robert E. Encyclopedic Dictionary of Applied Geophysics. Tulsa, OK: Society of Exploration Geophysicists, 2002. Print. Sheriff, Robert E., and Alistair R. Brown. Reservoir Geophysics. Tulsa, OK: Society of Exploration Geophysicists, 1992. Print. . Sheriff, Robert E., W.M. Telford, W.M., and Lloyd P. Geldart. Applied Geophysics. Cambridge: Cambridge UP, 1990. Print. Sheriff, Robert E. Geophysical Methods. Englewood Cliffs, NJ: Prentice Hall, 1989. Print. . Sheriff, Robert E., and Lloyd P. Geldart. Exploration Seismology. Cambridge: Cambridge UP, 1982. Print. . Sheriff, Robert E. Seismic Stratigraphy. Boston: International Human Resources Development, 1980. Print. . See also Dolores Proubasta, Associate Editor (1995). ”Bob Sheriff — Getting a better picture.” Bob Sheriff — Getting a better picture, 14(9), 941-945. Robert E. Sheriff (1991). ”How in the world I came to write the Encyclopedic Dictionary.” How in the world I came to write the Encyclopedic Dictionary, 10(4), 41-43. Robert E. Sheriff (1985). ”History of geophysical technology through advertisements in GEOPHYSICS.” History of geophysical technology through advertisements in GEOPHYSICS, 50(12), 2299-2408. References 1922 births 2014 deaths American physicists Geophysics American geophysicists People from Mansfield, Ohio People from Oak Ridge, Tennessee People from Missouri City, Texas Wittenberg University alumni Ohio State University alumni Manhattan Project people
Robert E. Sheriff
[ "Physics" ]
1,238
[ "Applied and interdisciplinary physics", "Geophysics" ]
46,062,204
https://en.wikipedia.org/wiki/Wetzel%27s%20problem
In mathematics, Wetzel's problem concerns bounds on the cardinality of a set of analytic functions that, for each of their arguments, take on few distinct values. It is named after John Wetzel, a mathematician at the University of Illinois at Urbana–Champaign. Let F be a family of distinct analytic functions on a given domain with the property that, for each x in the domain, the functions in F map x to a countable set of values. In his doctoral dissertation, Wetzel asked whether this assumption implies that F is necessarily itself countable. Paul Erdős in turn learned about the problem at the University of Michigan, likely via Lee Albert Rubel. In his paper on the problem, Erdős credited an anonymous mathematician with the observation that, when each x is mapped to a finite set of values, F is necessarily finite. However, as Erdős showed, the situation for countable sets is more complicated: the answer to Wetzel's question is yes if and only if the continuum hypothesis is false. That is, the existence of an uncountable set of functions that maps each argument x to a countable set of values is equivalent to the nonexistence of an uncountable set of real numbers whose cardinality is less than the cardinality of the set of all real numbers. One direction of this equivalence was also proven independently, but not published, by another UIUC mathematician, Robert Dan Dixon. It follows from the independence of the continuum hypothesis, proved in 1963 by Paul Cohen, that the answer to Wetzel's problem is independent of ZFC set theory. Erdős' proof is so short and elegant that it is considered to be one of the Proofs from THE BOOK. In the case that the continuum hypothesis is false, Erdős asked whether there is a family of analytic functions, with the cardinality of the continuum, such that each complex number has a smaller-than-continuum set of images. As Ashutosh Kumar and Saharon Shelah later proved, both positive and negative answers to this question are consistent. References Functional analysis Independence results Analytic functions
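Stated compactly (this symbolic form is a paraphrase for clarity, not a formula taken from the cited sources), with D the common domain and F ranging over families of pairwise distinct analytic functions on D, Erdős's theorem reads:

\Bigl( \forall F :\ \bigl( \forall z \in D :\ |\{\, f(z) : f \in F \,\}| \le \aleph_0 \bigr) \Rightarrow |F| \le \aleph_0 \Bigr) \;\Longleftrightarrow\; 2^{\aleph_0} > \aleph_1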
Wetzel's problem
[ "Mathematics" ]
430
[ "Independence results", "Functions and mappings", "Functional analysis", "Mathematical logic", "Mathematical objects", "Mathematical relations" ]
46,088,805
https://en.wikipedia.org/wiki/Dr.%20Young%27s%20Ideal%20Rectal%20Dilators
Dr. Young's Ideal Rectal Dilators were medical devices sold in the United States from the late nineteenth century until at least the 1940s, part of the burgeoning market for patent and proprietary medicines and devices at the time. They came in sets of four "torpedolike" hard rubber (later, plastic) instruments varying in diameter from to and in length from , and according to a retrospective article in The American Journal of Gastroenterology, were no different from modern rectal dilators. Early claims and criticism An 1893 Medical News editorial noted that "Dr. Young" himself, writing in another journal of which he was the editor, praised rectal dilation as a cure for insanity, claiming that at least "three-fourths of all the howling maniacs of the world" were curable "in a few weeks' time by the application of orificial methods". The Medical News asked, A 1905 advertisement by F.E. Young and Co. of Chicago promised that "The best results may be obtained by the use of Young's self-retaining rectal dilators", the use of which "accomplishes for the invalid just what nature does daily for the healthy individual". Doctors were advised that "If you will prescribe a set of these dilators in some of your obstinate cases of Chronic Constipation you will find them necessary in every case of this kind". The price of a set "to the profession" was $2.50 (). Young admitted that some patients panicked at the sight of the devices. Condemnation by Food and Drug Administration In 1940 the United States Attorney for the Southern District of New York seized a shipment of the devices as misbranded. According to the U.S. Food and Drug Administration's subsequent Drugs and Devices Court Case Notice of Judgment (captioned "U.S. v. 67 Sets of Dr. Young's Rectal Dilators and 83 Packages of Dr. Young's Piloment") the product's labeling claimed it corrected constipation, promoted more refreshing sleep, and could relieve foul breath, bad taste in the mouth, sallow skin, acne, anemia, lassitude, mental hebetude, insomnia, anorexia, headaches, diarrhea, hemorrhoids, flatulence, indigestion, nervousness, irritability, cold extremities, and numerous other ailments. The instructions warned, "Do not neglect to use your Dilators... It is advisable to use [them] occasionally as a precautionary measure. You need have no fear of using them too much." The devices were held to be "dangerous to health when used with the frequency and duration prescribed, recommended or suggested in the labeling", and the shipment was ordered to be destroyed. See also Butt plug References External links US Design Patent 21,551 "Design for a rectal dilator" (Frank E. Young, inventor) Medical devices Products introduced in the 1890s
Dr. Young's Ideal Rectal Dilators
[ "Biology" ]
627
[ "Medical devices", "Medical technology" ]
46,176,581
https://en.wikipedia.org/wiki/Cobalt%20oxide%20nanoparticle
In materials and electric battery research, the term cobalt oxide nanoparticles usually refers to particles of cobalt(II,III) oxide of nanometer size, with various shapes and crystal structures. Cobalt oxide nanoparticles have potential applications in lithium-ion batteries and electronic gas sensors. Applications Lithium-ion Battery The cathodes of lithium-ion batteries are often made of lithiated oxides of cobalt, nickel, or manganese, which can readily and reversibly incorporate lithium ions in their molecular structure. Cobalt oxide nanomaterials, such as nanotubes, offer high surface-to-volume ratio and short path lengths for lithium cation transport, leading to fast charging capabilities. However, capacity, coulombic efficiency, and cycle life may suffer due to excessive formation of the solid electrolyte interphase (SEI). The nanowires may incorporate other substances, for example, diphenylalanine. Cobalt oxide particles may be anchored on substrates such as graphene to improve the dimensional stability of the anode and to prevent particle aggregation during lithium charge and discharge processes. Gas Sensor Hollow nanospheres of cobalt oxide have been investigated as materials for gas sensor electrodes, for the detection of toluene, acetone, and other organic vapors. Cobalt oxide nanoparticles anchored on single-walled carbon nanotubes have been investigated for sensing nitrogen oxides and hydrogen. This application takes advantage of the reactivity between the gas and the oxide, as well as the electrical connection with the substrate (both being p-type semiconductors). Nitrogen oxides react with the oxide as electron acceptors, reducing the electrode's resistance; whereas hydrogen acts as an electron donor, increasing the resistance. Medicine Cobalt oxide nanoparticles have been observed to readily enter cells, a property that conceivably could lead to applications in hyperthermic treatment, gene therapy and drug delivery. However, their toxicity is an obstacle that would have to be overcome. Synthesis Hydrothermal Cobalt oxide is often obtained by hydrothermal synthesis in an autoclave. One-pot hydrothermal synthesis of metal oxide hollow spheres starts with carbohydrates and metal salts dissolved in water at 100-200 °C. The reaction produces carbon spheres, with metal ions integrated into the hydrophobic shell. The carbon cores are removed by calcination, leaving hollow metal oxide spheres. Surface area and thickness of the shell can be manipulated by varying the carbohydrate to metal salt concentration, as well as the temperature, pressure, and pH of the reaction medium, and the cations of the starting salts. The completion time for the procedure varies from hours to days. A drawback of this approach is its smaller yield compared to other methods. Thermal decomposition Another route to the synthesis of cobalt oxide nanoparticles is the thermal decomposition of organometallic compounds. For example, the oxide nanoparticles can be obtained by heating the metal salen complex bis(salicylaldehyde)ethylenediiminecobalt(II) ("Co-salen") in air to 500 °C. The precursor Co-salen can be obtained by reacting cobalt(II) acetate tetrahydrate in propanol at 50 °C under nitrogen atmosphere with the salen ligand (bis(salicylaldehyde)ethylenediimine). From anchored precursors Cobalt oxide/graphene composites are synthesized by first forming cobalt(II) hydroxide on the graphene sheet from a cobalt(II) salt and ammonium hydroxide, which is then heated to 450 °C for two hours to yield the oxide.
Safety Like most cobalt compounds, cobalt oxide nanoparticles are toxic to humans and also aquatic life. References Nanoparticles by composition Cobalt compounds Semiconductor materials Transition metal oxides
Cobalt oxide nanoparticle
[ "Chemistry" ]
761
[ "Semiconductor materials" ]
51,067,850
https://en.wikipedia.org/wiki/C9H6O
{{DISPLAYTITLE:C9H6O}} The molecular formula C9H6O may refer to: Indenone Isoindenone Ethynylbenzaldehyde 2,3-Di(1-propynyl)-2-cyclopropen-1-one Phenylpropynal Molecular formulas
C9H6O
[ "Physics", "Chemistry" ]
74
[ "Molecules", "Set index articles on molecular formulas", "Isomerism", "Molecular formulas", "Matter" ]
51,069,037
https://en.wikipedia.org/wiki/Synthesis%20of%20carbon%20nanotubes
Techniques have been developed to produce carbon nanotubes (CNTs) in sizable quantities, including arc discharge, laser ablation, high-pressure carbon monoxide disproportionation, and chemical vapor deposition (CVD). Most of these processes take place in a vacuum or with process gases. CVD growth of CNTs can occur in a vacuum or at atmospheric pressure. Large quantities of nanotubes can be synthesized by these methods; advances in catalysis and continuous growth are making CNTs more commercially viable. Types Arc discharge Nanotubes were observed in 1991 in the carbon soot of graphite electrodes during an arc discharge, by using a current of 100 amps, that was intended to produce fullerenes. However the first macroscopic production of carbon nanotubes was made in 1992 by two researchers at NEC's Fundamental Research Laboratory. The method used was the same as in 1991. During this process, the carbon contained in the negative electrode sublimates because of the high-discharge temperatures. The yield for this method is up to 30% by weight and it produces both single- and multi-walled nanotubes with lengths of up to 50 micrometers with few structural defects. Arc-discharge technique uses higher temperatures (above 1,700 °C) for CNT synthesis which typically causes the expansion of CNTs with fewer structural defects in comparison with other methods. Laser ablation In laser ablation, a pulsed laser vaporizes a graphite target in a high-temperature reactor while an inert gas is led into the chamber. Nanotubes develop on the cooler surfaces of the reactor as the vaporized carbon condenses. A water-cooled surface may be included in the system to collect the nanotubes. This process was developed by Richard Smalley and co-workers at Rice University, who at the time of the discovery of carbon nanotubes, were blasting metals with a laser to produce various metal molecules. When they heard of the existence of nanotubes they replaced the metals with graphite to create multi-walled carbon nanotubes. Later that year the team used a composite of graphite and metal catalyst particles (the best yield was from a cobalt and nickel mixture) to synthesize single-walled carbon nanotubes. The laser ablation method yields around 70% and produces primarily single-walled carbon nanotubes with a controllable diameter determined by the reaction temperature. However, it is more expensive than either arc discharge or chemical vapor deposition. Plasma torch Single-walled carbon nanotubes can also be synthesized by a thermal plasma method, first invented in 2000 at INRS (Institut national de la recherche scientifique) in Varennes, Canada, by Olivier Smiljanic. In this method, the aim is to reproduce the conditions prevailing in the arc discharge and laser ablation approaches, but a carbon-containing gas is used instead of graphite vapors to supply the necessary carbon. Doing so, the growth of SWNT is more efficient (decomposing the gas can be 10 times less energy-consuming than graphite vaporization). The process is also continuous and low-cost. A gaseous mixture of argon, ethylene and ferrocene is introduced into a microwave plasma torch, where it is atomized by the atmospheric pressure plasma, which has the form of an intense 'flame'. The fumes created by the flame contain SWNT, metallic and carbon nanoparticles and amorphous carbon. 
Another way to produce single-walled carbon nanotubes with a plasma torch is to use the induction thermal plasma method, implemented in 2005 by groups from the Université de Sherbrooke and the National Research Council of Canada. The method is similar to arc discharge in that both use ionized gas to reach the high temperature necessary to vaporize carbon-containing substances and the metal catalysts necessary for the ensuing nanotube growth. The thermal plasma is induced by high-frequency oscillating currents in a coil, and is maintained in flowing inert gas. Typically, a feedstock of carbon black and metal catalyst particles is fed into the plasma, and then cooled down to form single-walled carbon nanotubes. Different single-wall carbon nanotube diameter distributions can be synthesized. The induction thermal plasma method can produce up to 2 grams of nanotube material per minute, which is higher than the arc discharge or the laser ablation methods. Chemical vapor deposition (CVD) The catalytic vapor phase deposition of carbon was reported in 1952 and 1959, but it was not until 1993 that carbon nanotubes were formed by this process. In 2007, researchers at the University of Cincinnati (UC) developed a process to grow aligned carbon nanotube arrays of length 18 mm on a FirstNano ET3000 carbon nanotube growth system. During CVD, a substrate is prepared with a layer of metal catalyst particles, most commonly nickel, cobalt, iron, or a combination. The metal nanoparticles can also be produced by other ways, including reduction of oxides or oxides solid solutions. The diameters of the nanotubes that are to be grown are related to the size of the metal particles. This can be controlled by patterned (or masked) deposition of the metal, annealing, or by plasma etching of a metal layer. The substrate is heated to approximately 700 °C. To initiate the growth of nanotubes, two gases are bled into the reactor: a process gas (such as ammonia, nitrogen or hydrogen) and a carbon-containing gas (such as acetylene, ethylene, ethanol or methane). Nanotubes grow at the sites of the metal catalyst; the carbon-containing gas is broken apart at the surface of the catalyst particle, and the carbon is transported to the edges of the particle, where it forms the nanotubes. This mechanism is still being studied. The catalyst particles can stay at the tips of the growing nanotube during growth, or remain at the nanotube base, depending on the adhesion between the catalyst particle and the substrate. Thermal catalytic decomposition of hydrocarbon has become an active area of research and can be a promising route for the bulk production of CNTs. Fluidized bed reactor is the most widely used reactor for CNT preparation. Scale-up of the reactor is the major challenge. CVD is the most widely used method for the production of carbon nanotubes. For this purpose, the metal nanoparticles are mixed with a catalyst support such as MgO or Al2O3 to increase the surface area for higher yield of the catalytic reaction of the carbon feedstock with the metal particles. One issue in this synthesis route is the removal of the catalyst support via an acid treatment, which sometimes could destroy the original structure of the carbon nanotubes. However, alternative catalyst supports that are soluble in water have proven effective for nanotube growth. 
If a plasma is generated by the application of a strong electric field during growth (plasma-enhanced chemical vapor deposition), then the nanotube growth will follow the direction of the electric field. By adjusting the geometry of the reactor it is possible to synthesize vertically aligned carbon nanotubes (i.e., perpendicular to the substrate), a morphology that has been of interest to researchers interested in electron emission from nanotubes. Without the plasma, the resulting nanotubes are often randomly oriented. Under certain reaction conditions, even in the absence of a plasma, closely spaced nanotubes will maintain a vertical growth direction resulting in a dense array of tubes resembling a carpet or forest. Of the various means for nanotube synthesis, CVD shows the most promise for industrial-scale deposition, because of its price/unit ratio, and because CVD is capable of growing nanotubes directly on a desired substrate, whereas the nanotubes must be collected in the other growth techniques. The growth sites are controllable by careful deposition of the catalyst. In 2007, a team from Meijo University demonstrated a high-efficiency CVD technique for growing carbon nanotubes from camphor. Researchers at Rice University, until recently led by the late Richard Smalley, have concentrated on finding methods to produce large, pure amounts of particular types of nanotubes. Their approach grows long fibers from many small seeds cut from a single nanotube; all of the resulting fibers were found to be of the same diameter as the original nanotube and are expected to be of the same type as the original nanotube. Super-growth CVD Super-growth CVD (water-assisted chemical vapor deposition) was developed by Kenji Hata, Sumio Iijima and co-workers at AIST, Japan. In this process, the activity and lifetime of the catalyst are enhanced by the addition of water into the CVD reactor. Dense millimeter-tall vertically aligned nanotube arrays (VANTAs) or "forests", aligned normal to the substrate, were produced. The height of the forests could be expressed as H(t) = βτ0(1 − exp(−t/τ0)), where β is the initial growth rate and τ0 is the characteristic catalyst lifetime. Their specific surface area exceeds 1,000 m2/g (capped) or 2,200 m2/g (uncapped), surpassing the value of 400–1,000 m2/g for HiPco samples. The synthesis efficiency is about 100 times higher than for the laser ablation method. The time required to make SWNT forests with a height of 2.5 mm by this method was 10 minutes in 2004. Those SWNT forests can be easily separated from the catalyst, yielding clean SWNT material (purity >99.98%) without further purification. For comparison, the as-grown HiPco CNTs contain about 5–35% of metal impurities; they are therefore purified through dispersion and centrifugation, which damages the nanotubes. Super-growth avoids this problem. Patterned highly organized single-walled nanotube structures were successfully fabricated using the super-growth technique. The super-growth method is essentially a variation of CVD. Therefore, it is possible to grow material containing SWNT, DWNTs and MWNTs, and to alter their ratios by tuning the growth conditions. Their ratios change with the thickness of the catalyst layer; when many MWNTs are included, the tube diameters are wide. The vertically aligned nanotube forests originate from a "zipping effect" when they are immersed in a solvent and dried. The zipping effect is caused by the surface tension of the solvent and the van der Waals forces between the carbon nanotubes.
It aligns the nanotubes into a dense material, which can be formed in various shapes, such as sheets and bars, by applying weak compression during the process. Densification increases the Vickers hardness by about 70 times and the density is 0.55 g/cm3. The packed carbon nanotubes are more than 1 mm long and have a carbon purity of 99.9% or higher; they also retain the desirable alignment properties of the nanotube forest. Liquid electrolysis method In 2015, researchers at the George Washington University discovered a new pathway to synthesize MWCNTs by electrolysis of molten carbonates. The mechanism is similar to CVD. Some metal ions were reduced to metallic form and attached to the cathode as nucleation points for the growth of CNTs. The reaction on the cathode is Li2CO3 -> Li2O + CNTs + O2. The lithium oxide formed can absorb carbon dioxide in situ (if present) and form lithium carbonate, as shown in the equation Li2O + CO2 -> Li2CO3. Thus the net reaction is CO2 -> CNTs + O2. In other words, the only reactant is the greenhouse gas carbon dioxide, while the product is high-value CNTs. This discovery was highlighted as a possible technology for carbon dioxide capture and conversion. Later, non-lithium molten carbonate electrolytes were demonstrated, as were electrolytes consisting of lithium carbonate plus some other carbonate and/or additives. Additionally, by changing electrolysis conditions such as the electrolyte, electrode, temperature, and/or current density, a wide range of carbon nanotubes can be grown through this process, including helical, thin, thick, bulbous, and nitrogen-, boron-, sulfur-, or phosphorus-doped nanotubes, with multiple macrostructures being produced, some quite porous, with potential uses as sponges or electrodes. This method can also utilize a non-gas source of carbon, such as calcium carbonate (CaCO3), in which case it produces lime/cement (CaO) free of CO2, as that CO2 is converted into CNTs and oxygen. Natural, incidental, and controlled flame environments Fullerenes and carbon nanotubes are not necessarily products of high-tech laboratories; they are commonly formed in such mundane places as ordinary flames, produced by burning methane, ethylene, and benzene, and they have been found in soot from both indoor and outdoor air. However, these naturally occurring varieties can be highly irregular in size and quality because the environment in which they are produced is often highly uncontrolled. Thus, although they can be used in some applications, they can lack the high degree of uniformity necessary to satisfy the many needs of both research and industry. Recent efforts have focused on producing more uniform carbon nanotubes in controlled flame environments. Such methods have promise for large-scale, low-cost nanotube synthesis based on theoretical models, though they must compete with rapidly developing large-scale CVD production. Purification Removal of catalysts Nanoscale metal catalysts are important ingredients for fixed- and fluidized-bed CVD synthesis of CNTs. They increase the growth efficiency of CNTs and may give control over their structure and chirality. During synthesis, catalysts can convert carbon precursors into tubular carbon structures but can also form encapsulating carbon overcoats. Together with metal oxide supports they may therefore attach to or become incorporated into the CNT product. The presence of metal impurities can be problematic for many applications.
Especially catalyst metals like nickel, cobalt or yttrium may be of toxicological concern. While unencapsulated catalyst metals may be readily removable by acid washing, encapsulated ones require oxidative treatment for opening their carbon shell. The effective removal of catalysts, especially of encapsulated ones, while preserving the CNT structure is a challenge and has been addressed in many studies. A new approach to break carbonaceous catalyst encapsulations is based on rapid thermal annealing. Application-related issues Many electronic applications of carbon nanotubes crucially rely on techniques of selectively producing either semiconducting or metallic CNTs, preferably of a certain chirality. Several methods of separating semiconducting and metallic CNTs are known, but most of them are not yet suitable for large-scale technological processes. The most efficient method relies on density-gradient ultracentrifugation, which separates surfactant-wrapped nanotubes by the minute difference in their density. This density difference often translates into a difference in the nanotube diameter and (semi)conducting properties. Another method of separation uses a sequence of freezing, thawing, and compression of SWNTs embedded in agarose gel. This process results in a solution containing 70% metallic SWNTs and leaves a gel containing 95% semiconducting SWNTs. The diluted solutions separated by this method show various colors. The separated carbon nanotubes using this method have been applied to electrodes, e.g. electric double-layer capacitor. Moreover, SWNTs can be separated by the column chromatography method. Yield is 95% in semiconductor type SWNT and 90% in metallic type SWNT. In addition to the separation of semiconducting and metallic SWNTs, it is possible to sort SWNTs by length, diameter, and chirality. The highest resolution length sorting, with length variation of <10%, has thus far been achieved by size-exclusion chromatography (SEC) of DNA-dispersed carbon nanotubes (DNA-SWNT). SWNT diameter separation has been achieved by density-gradient ultracentrifugation (DGU) using surfactant-dispersed SWNTs and by ion-exchange chromatography (IEC) for DNA-SWNT. Purification of individual chiralities has also been demonstrated with IEC of DNA-SWNT: specific short DNA oligomers can be used to isolate individual SWNT chiralities. Thus far, 12 chiralities have been isolated at purities ranging from 70% for (8,3) and (9,5) SWNTs to 90% for (6,5), (7,5) and (10,5) SWNTs. Alternatively, carbon nanotubes have been successfully sorted by chirality using the aqueous two-phase extraction method. There have been successful efforts to integrate these purified nanotubes into electronic devices, such as field-effect transistors. An alternative to separation is the development of a selective growth of semiconducting or metallic CNTs. This can be achieved by CVD that involves a combination of ethanol and methanol gases on a quartz substrate, resulting in horizontally aligned arrays of 95–98% semiconducting nanotubes. Nanotubes are usually grown on nanoparticles of magnetic metal (Fe, Co), which facilitates the production of electronic (spintronic) devices. In particular, control of current through a field-effect transistor by magnetic field has been demonstrated in such a single-tube nanostructure. References Carbon nanotubes Chemical synthesis
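As a numerical illustration of the forest-height expression quoted in the super-growth CVD section above, the short Python sketch below evaluates H(t) = βτ0(1 − exp(−t/τ0)) for a few growth times. The function name and the parameter values are assumptions chosen only so that the forest reaches roughly 2.5 mm after about 10 minutes, in line with the figure mentioned in that section; they are not measured values from the article.

```python
import math

def forest_height_um(t_min: float, beta_um_per_min: float, tau_min: float) -> float:
    """Super-growth CVD forest height H(t) = beta * tau * (1 - exp(-t / tau)).

    beta is the initial growth rate and tau the characteristic catalyst
    lifetime; the values passed below are illustrative assumptions.
    """
    return beta_um_per_min * tau_min * (1.0 - math.exp(-t_min / tau_min))

# Assumed parameters, chosen so the forest reaches ~2.5 mm after ~10 minutes.
beta = 300.0  # initial growth rate, um/min (assumed)
tau = 25.0    # characteristic catalyst lifetime, min (assumed)

for t in (1, 5, 10, 20):
    print(f"t = {t:2d} min -> H = {forest_height_um(t, beta, tau):6.0f} um")
```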
Synthesis of carbon nanotubes
[ "Chemistry" ]
3,644
[ "nan", "Chemical synthesis" ]
51,070,578
https://en.wikipedia.org/wiki/V883%20Orionis
V883 Orionis is a protostar in the constellation of Orion. It is associated with IC 430 (Haro 13A), a peculiar Hα object surveyed by Guillermo Haro in 1952. It is assumed to be a member of the Orion Nebula cluster. V883 Orionis, like most protostars, is surrounded by a circumstellar disc of dust. The dust has a water snow-line, a certain distance beyond which the irradiance from the star is low enough that water can freeze to snow. The water snow-line was directly imaged by ALMA, when a stellar outburst increased the amount of irradiation and pushed the line out farther. In 2023, it was announced that signs of water vapor had been detected in V883 Orionis' disc. Gallery References Orion (constellation) Orion molecular cloud complex FU Orionis stars Orionis, V883
V883 Orionis
[ "Astronomy" ]
187
[ "Constellations", "Orion (constellation)" ]
51,073,880
https://en.wikipedia.org/wiki/Plaza%20del%20Sol%20%28Gresham%2C%20Oregon%29
Plaza del Sol is a plaza located at Southeast Stark Street and 187th Avenue in Gresham, Oregon's Rockwood neighborhood, in the United States. It features an art installation and model of the solar system, created in 2009. The site was formerly occupied by a Fred Meyer store, and purchased by the city in 2005. Events In 2013, the plaza hosted a Cinco de Mayo festival and the Feast of All Nations. The plaza has hosted Rock the Block, which attracted thousands of people in 2015. References Gresham, Oregon Squares in Oregon
Plaza del Sol (Gresham, Oregon)
[ "Astronomy" ]
114
[ "Astronomy stubs" ]
51,074,510
https://en.wikipedia.org/wiki/International%20Fiberglass
International Fiberglass was a fiberglass molding company founded in Venice, California in about 1963, best known for their large molded fiberglass roadside advertising sculptures commonly called "Muffler Men". The company was formed when Steve Dashew purchased Prewitt Fiberglass Animals and acquired all of the molds created by Bob Prewitt. One of the molds which Dashew acquired in the transaction was a 20-foot human figure, which Prewitt had used in 1962 to create an oversized statue for the Paul Bunyan Cafe in Flagstaff, Arizona. The company had made fiberglass boats, but Dashew decided to use the mold to create some business during slow boat-building periods. He began advertising his outsize figure-making capability, and began selling his giant figures in 1964. The outsize figures eventually included a female, who could be fitted with either a bikini swimsuit or a dress. In 10 years of production, International Fiberglass sold hundreds of oversized figures, including cowboys, Indians, astronauts, giant chickens, dinosaurs, Yogi Bears, and tigers, selling each for $1,800 to $2,800 (or as low as $1,000 when ordered in bulk, as when Texaco ordered a batch of 300). Dashew ceased production in 1974, and sold the company's assets in 1976. The outsized molds were destroyed after the sale. References Fiberglass Design companies established in 1963 Manufacturing companies based in California Manufacturing companies established in 1963 1963 establishments in California Design companies disestablished in 1974 Manufacturing companies disestablished in 1974 1974 disestablishments in California
International Fiberglass
[ "Chemistry", "Materials_science" ]
329
[ "Fiberglass", "Polymer chemistry" ]
53,928,318
https://en.wikipedia.org/wiki/Materials%20Science%20and%20Engineering%20C
Materials Science and Engineering: C was a peer-reviewed scientific journal that has since been renamed to Biomaterials Advances. According to the Journal Citation Reports, the journal had a 2020 impact factor of 7.328. References External links Physics review journals Materials science journals Elsevier academic journals Academic journals established in 1993 English-language journals Monthly journals
Materials Science and Engineering C
[ "Materials_science", "Engineering" ]
70
[ "Materials science stubs", "Materials science journals", "Materials science journal stubs", "Materials science" ]
53,930,029
https://en.wikipedia.org/wiki/Nicola%20Pugno
Nicola Maria Pugno (born 4 January 1972) is an Italian scientist, mechanical engineer, and astrophysicist, with PhDs in fracture mechanics and in biology. He is a full professor of solid and structural mechanics at the University of Trento (previously at the Polytechnic University of Turin) and of materials science at the Queen Mary University of London (part-time), and a visiting professor at the University of Oxford. He has been selected as a member of several committees, such as the technical and scientific committee of the Italian Space Agency, and as a plenary speaker at several international workshops, events and conferences, such as Falling Walls, the World Economic Forum and the European Parliament (invited by the European Research Council), as well as, as opening plenary speaker, the International Conference of Theoretical and Applied Mechanics. He is an editorial board member of several international journals and has been appointed as the first field chief editor of Frontiers in Materials. He has published about 500 papers in international journals and, for his scientific contributions in nanomechanics, bioinspiration, fracture mechanics and adhesion, he received, among other prizes (such as the first edition, in 2012, of the GiovedìScienza prize for both scientific research and popularization), the A. A. Griffith Medal and Prize in 2017 and the Humboldt Prize in 2022. Since 2011, he has also received several grants from the European Union within the Excellent Science pillar, for both fundamental science and high-tech transfer, which he is developing for several high-tech industries. References External links Solid mechanics Italian materials scientists Nanophysicists 1972 births Living people
Nicola Pugno
[ "Physics" ]
322
[ "Solid mechanics", "Mechanics" ]
53,937,915
https://en.wikipedia.org/wiki/Ljungstr%C3%B6m%20air%20preheater
The Ljungström air preheater is an air preheater invented by the Swedish engineer Fredrik Ljungström (1875–1964). The patent was granted in 1930. The factory and workshop were in Lidingö throughout the 1920s, with about 70 employees. In the 1930s, the facilities were used as a film studio, and they were demolished in the 1970s to make space for new development. In 1995, the Ljungström air preheater was designated the 185th International Historic Mechanical Engineering Landmark by the American Society of Mechanical Engineers. Ljungström's air preheater technology is implemented in a vast number of modern power stations around the world, with total attributed worldwide fuel savings estimated at 4,960,000,000 tons of oil; it has been remarked that "few inventions have been as successful in saving fuel as the Ljungström Air Preheater". In modern boilers, the preheater can provide up to 20% of the total heat transfer in the boiler process, while only representing 2% of the investment. References External links History of the Ljungström Air Preheater LJUNGSTRÖM Air Preheater (APH) & Gas-gas Heater (GGH) Power Plant Overview Chemical equipment Mechanical engineering Power station technology Engineering thermodynamics Ljungström
Ljungström air preheater
[ "Physics", "Chemistry", "Engineering" ]
264
[ "Applied and interdisciplinary physics", "Chemical equipment", "Engineering thermodynamics", "Thermodynamics", "nan", "Mechanical engineering" ]
56,853,150
https://en.wikipedia.org/wiki/Bonding%20molecular%20orbital
In theoretical chemistry, the bonding orbital is used in molecular orbital (MO) theory to describe the attractive interactions between the atomic orbitals of two or more atoms in a molecule. In MO theory, electrons are portrayed as moving in waves. When more than one of these waves come close together, the in-phase combination of these waves produces an interaction that leads to a species that is greatly stabilized. The result of the waves' constructive interference causes the density of the electrons to be found within the binding region, creating a stable bond between the two species. Diatomic molecules In the classic example of the H2 MO, the two separate H atoms have identical atomic orbitals. When creating the molecule dihydrogen, the individual valence 1s orbitals either merge in phase to give bonding orbitals, where the electron density lies between the nuclei of the atoms, or merge out of phase to give antibonding orbitals, where the electron density is everywhere around the atoms except for the space between the two nuclei. Bonding orbitals lead to a more stable species than when the two hydrogens are monatomic. Antibonding orbitals are less stable because, with very little to no electron density in the middle, the two nuclei (holding the same charge) repel each other. Therefore, it would require more energy to hold the two atoms together through the antibonding orbital. The electrons in the valence 1s shells of the two hydrogen atoms come together to fill the stabilizing bonding orbital. So, hydrogen prefers to exist as a diatomic, and not monatomic, molecule. In helium, each atom holds two electrons in its valence 1s shell. When the two atomic orbitals come together, they first fill the bonding orbital with two electrons, but unlike hydrogen, there are two electrons left, which must then go into the antibonding orbital. The instability of the antibonding orbital cancels out the stabilizing effect provided by the bonding orbital; therefore, dihelium's bond order is 0. This is why helium would prefer to be monatomic over diatomic. Polyatomic molecules Bonding MOs of pi bonds Pi bonds are created by the "side-on" interactions of the orbitals. Once again, in molecular orbitals, bonding pi (π) electrons occur when the interaction of the two π atomic orbitals is in phase. In this case, the electron density of the π orbitals needs to be symmetric along the mirror plane in order to create the bonding interaction. Asymmetry along the mirror plane will lead to a node in that plane and is described in the antibonding orbital, π*. An example of an MO of a simple conjugated π system is butadiene. To create the MO for butadiene, the resulting π and π* orbitals of the previously described system will interact with each other. This mixing will result in the creation of 4 group orbitals (which can also be used to describe the π MO of any diene): π1 contains no vertical nodes and π2 contains one, and both are considered bonding orbitals; π3 contains 2 vertical nodes and π4 contains 3, and both are considered antibonding orbitals. Localized molecular orbitals The spherical 3D shape of s orbitals has no directionality in space, and the px, py, and pz orbitals are all at 90° with respect to each other. Therefore, in order to obtain orbitals corresponding to chemical bonds to describe chemical reactions, Edmiston and Ruedenberg pioneered the development of localization procedures.
For example, in CH4, the four electrons from the 1s orbitals of the hydrogen atoms and the valence electrons from the carbon atom (2 in s and 2 in p) occupy the bonding molecular orbitals, σ and π. The delocalized MOs of the carbon atom in the molecule of methane can then be localized to give four sp3 hybrid orbitals. Applications Molecular orbitals and, more specifically, the bonding orbital is a theory that is taught in all different areas of chemistry, from organic to physical and even analytical, because it is widely applicable. Organic chemists use molecular orbital theory in their thought rationale for reactions; analytical chemists use it in different spectroscopy methods; physical chemists use it in calculations; it is even seen in materials chemistry through band theory—an extension of molecular orbital theory. References Chemical bonding
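A minimal sketch of the bond-order bookkeeping behind the dihydrogen and dihelium discussion above, using the standard MO-theory definition of bond order as half the difference between bonding and antibonding electron counts; the function name and the explicit electron counts are illustrative rather than taken from the article.

```python
def bond_order(n_bonding: int, n_antibonding: int) -> float:
    """Bond order = (bonding electrons - antibonding electrons) / 2."""
    return (n_bonding - n_antibonding) / 2

# H2: both 1s electrons occupy the bonding sigma orbital -> bond order 1, a stable diatomic.
print("H2 :", bond_order(n_bonding=2, n_antibonding=0))

# He2: two electrons are forced into the antibonding orbital -> bond order 0,
# consistent with helium preferring to remain monatomic.
print("He2:", bond_order(n_bonding=2, n_antibonding=2))
```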
Bonding molecular orbital
[ "Physics", "Chemistry", "Materials_science" ]
913
[ "Chemical bonding", "Condensed matter physics", "nan" ]
56,855,846
https://en.wikipedia.org/wiki/List%20of%20human%20transcription%20factors
This list of manually curated human transcription factors is taken from Lambert, Jolma, Campitelli et al. It was assembled by manual curation. More detailed information is found in the manuscript and the web site accompanying the paper (Human Transcription Factors) List of human transcription factors (1639) References Transcription factors Biology-related lists
List of human transcription factors
[ "Chemistry", "Biology" ]
68
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
56,859,187
https://en.wikipedia.org/wiki/Plasma%20Physics%20and%20Controlled%20Fusion
Plasma Physics and Controlled Fusion is a monthly peer-reviewed scientific journal covering plasma physics. It is published by the Institute of Physics and the Editor-in-Chief is Jonathan P Graves (EPFL/University of York). The journal was established in 1960 as Plasma Physics, obtaining its current title in 1984. Abstracting and indexing The journal is abstracted and indexed in several bibliographic databases. According to the Journal Citation Reports, the journal has a 2023 impact factor of 2.1. References External links Plasma science journals Monthly journals English-language journals Academic journals established in 1960 IOP Publishing academic journals
Plasma Physics and Controlled Fusion
[ "Physics" ]
120
[ "Plasma science journals", "Plasma physics stubs", "Plasma physics" ]
56,859,193
https://en.wikipedia.org/wiki/Solvated%20metal%20atom%20dispersion
Solvated Metal Atom Dispersion is a method of producing highly reactive solvated nanoparticles. Samples of a metal (or ceramic) are heated to evaporate free atoms (or species), as in PVD evaporation. This vapor is then co-deposited with a suitable organic solvent (e.g. toluene) at very low temperatures (on the order of 70K) to form a solid mixture of the two. This is then warmed towards room temperature, producing solvated metal atoms or (over time) larger clusters. Sometimes, catalyst supports (such as SiO2 or Al2O3) are added to improve nucleation, as the process can more readily take place on surface OH groups. References Chemical processes Nanotechnology Solutions
Solvated metal atom dispersion
[ "Chemistry", "Materials_science", "Engineering" ]
162
[ "Materials science stubs", "Materials science", "Homogeneous chemical mixtures", "Chemical processes", "Nanotechnology stubs", "nan", "Chemical process engineering", "Solutions", "Nanotechnology" ]
60,287,650
https://en.wikipedia.org/wiki/Vicki%20Wysocki
Vicki Hopper Wysocki is an American scientist. She is a professor and the current chair of the School of Chemistry and Biochemistry at Georgia Tech Education Vicki Wysocki received a BS in chemistry from Western Kentucky University in 1982. She received a PhD in chemistry at Purdue University in 1987, under the supervision of R. Graham Cooks. She did post-doctoral work at Purdue University and at the US Naval Research Laboratory as a National Research Council Fellow. Career Wysocki became an assistant professor at Virginia Commonwealth University in 1990, and an associate professor in 1994. In 1996, she continued her career at University of Arizona, and she was promoted to professor in 2000. She was the chair of the department of chemistry and biochemistry at University of Arizona. She is a professor and an Ohio Eminent Scholar at Ohio State University, and the director of the Campus Chemical Instrument Center. She was the treasurer (1998–2000), vice president for programs (2014–2016), president (2016–2018) and past president (2018–2020) of the American Society for Mass Spectrometry. She served on the editorial board of Analyst, and she served as an associate editor of Analytical Chemistry. She serves as the editor-in-chief of the Journal of the American Society for Mass Spectrometry, she serves on the Honorary Board of International Journal of Mass Spectrometry. Awards 2023 The Analytical Scientist The Power List - Leaders and Advocates 2022 International Mass Spectrometry Foundation Thomson Medal Award 2022 American Chemical Society Division of Analytical Chemistry Award in Chemical Instrumentation 2021 The Analytical Scientist The Power List 2019 The Analytical Scientist The Power List 2019 German Mass Spectrometry Society (Deutsche Gesellschaft für Massenspektrometrie, DGMS) Wolfgang Paul Lecture 2017 American Chemical Society Frank H. Field and Joe L. Franklin Award for Outstanding Achievement in Mass Spectrometry 2013 Purdue University Department of Chemistry Outstanding Alumni Award 2009 American Society for Mass Spectrometry John B. Fenn Award for a Distinguished Contribution in Mass Spectrometry 1992 American Society for Mass Spectrometry Research Award References External links Purdue University alumni Living people Mass spectrometrists Ohio State University faculty Year of birth missing (living people) American women chemists American women academics 21st-century American women
Vicki Wysocki
[ "Physics", "Chemistry" ]
466
[ "Biochemists", "Mass spectrometry", "Spectrum (physical sciences)", "Mass spectrometrists" ]
60,290,317
https://en.wikipedia.org/wiki/Neural%20dust
Neural dust is a hypothetical class of nanometer-sized devices operated as wirelessly powered nerve sensors; it is a type of brain–computer interface. The sensors may be used to study, monitor, or control the nerves and muscles and to remotely monitor neural activity. In practice, a medical treatment could introduce thousands of neural dust devices into human brains. The term is derived from "smart dust", as the sensors used as neural dust may also be defined by this concept. Background The design for neural dust was first proposed in a 2011 presentation by Jan Rabaey from the University of California, Berkeley Wireless Research Center and was subsequently demonstrated by graduate students in his lab. While the history of BCI begins with the invention of the electroencephalogram by Hans Berger in 1924, the term did not appear in scientific literature until the 1970s. The hallmark research of the field came from the University of California, Los Angeles (UCLA), following a research grant from the National Science Foundation. While neural dust does fall under the category of BCI, it also could be used in the field of neuroprosthetics (also known as neural prosthetics). While the terms can sometimes be used interchangeably, the main difference is that while BCI generally interface neural activity directly to a computer, neuroprosthetics tend to connect activity in the central nervous system to a device meant to replace the function of a missing or impaired body part. Function Components The principal components of the neural dust system include the sensor nodes (neural dust), which aim to be in the 10-100 μm3 scale, and a sub-cranial interrogator, which would sit below the dura mater and would provide both power and a communication link to the neural dust. Neural dust sensors can use a multitude of mechanisms for powering and communication, including traditional RF as well as ultrasonics. An ultrasound based neural dust motes consist of a pair of recording electrodes, a custom transistor, and a piezoelectric sensor. The piezoelectric crystal is capable of recording brain activity from the extracellular space, and converting it into an electrical signal. Data and Power Transfer While many forms of BCI exist, neural dust is in a class of its own due to its size and wireless capability. While electromagnetic waves (such as radio frequencies) can be used to interact with neural dust or other wireless neural sensors, the use of ultrasound offers reduced attenuation in the tissue. This results in higher implantation depths (and therefore easier communication with the sub-cranial communicator), as well as a reduction of energy being distributed into the body's tissues due to scattering or absorption. This excess energy would take the form of heat, which could cause damage to the surrounding tissue if temperatures rose too high. Theoretically, ultrasound would allow smaller sensor nodes, allowing for sizes less than 100 μm, however, many practical and scalability challenges remain. Backscatter Communication Due to the extremely small size of the neural dust sensors, it would be impractical and nearly impossible to create a functional transmitter in the sensor itself. Thus backscatter communication, adopted from radio frequency identification (RFID) technologies, is employed. In RFID passive, battery-less tags are capable of absorbing and reflecting radio frequency (RF) energy when in close proximity to a RF interrogator, which is a device that transmits RF energy. 
As they reflect the RF energy back to the interrogator, they are capable of modulating the frequency, and in doing so, encoding information. Neural dust employs this method by having the sub-dural communicator send out a pulse (either RF or ultrasound) that is then reflected by the neural dust sensors. While neural dust can use a traditional amplifier to sense action potentials, in the case of an ultrasound-based neural dust sensor, a piezoelectric crystal can also be used to take measurements from its location in the extracellular space. The ultrasound energy reflected back to the interrogator would be modulated in a way that would communicate the recorded activity. In one proposed model of the neural dust sensor, the transistor model allowed for a method of distinguishing between local field potentials and action potential "spikes", which would greatly diversify the wealth of data that can be acquired from the recordings. Clinical and health applications Neural prosthetics Some examples of neural prostheses include cochlear implants that can aid in restoring hearing, artificial silicon retina microchips that have been shown to be effective in treating retinal degeneration from retinitis pigmentosa, and even motor prostheses that could offer the capability for movement in those affected with quadriplegia or disorders like amyotrophic lateral sclerosis. The use of neural dust in conjunction with motor prostheses could allow for a much finer control of movement. Electrostimulation While methods of electrical stimulation of nerves and brain tissue have already been employed for some time, the size and wireless nature of neural dust allow for advancement in clinical applications of the technique. Importantly, because traditional methods of neurostimulation and certain forms of nerve stimulation such as spinal cord stimulation use implanted electrodes that remain connected to wires, the risk of infection and scarring is high. While these risks are not a factor in the use of neural dust, the challenge of applying sufficient electrical current to the sensor node is still present. Sleep Apnea Electrostimulation devices have already shown some efficacy in treating Obstructive Sleep Apnea (OSA). Researchers who used a surgically implanted electrostimulation device on patients with severe OSA found significant improvement over a 12-month period of treatment with the device. Stimulation of the phrenic nerve has also been shown to be effective in reducing central sleep apnea. Bladder Control in Paraplegics Electrical stimulation devices have been effective in allowing spinal cord injury patients to have improved ability to urinate and defecate by using radio-linked implants to stimulate the sacral anterior root area of the spine. Epilepsy Electrical stimulation therapy in patients with epilepsy has been a well-established procedure for some time, being traced back to as early as the 1950s. A paramount goal of the American Epilepsy Society is the continued development of automated brain electrical stimulation (also known as contingent, or closed loop stimulation), which provides seizure-halting electrical stimulation based on brain patterns that indicate a seizure is about to happen. This provides a much better treatment of the disorder than stimulation that is based on an estimate of when the seizure might occur. While the vagus nerve is often a target for stimulation in the treatment of epileptic seizures, there has been research into the efficacy of stimulation in the hippocampus, thalamus, and subthalamic nucleus.
Closed-loop cortical neuromodulation has also been investigated as a treatment technique for Parkinson's disease. References Brain–computer interface Human–computer interaction Neural engineering Neuroprosthetics
Neural dust
[ "Engineering" ]
1,443
[ "Human–computer interaction", "Human–machine interaction" ]
60,290,687
https://en.wikipedia.org/wiki/Liu%27s%20stain
Liu's stain (劉氏染色法) is a staining technique used to stain animal cells. It is an improved stain based on the Romanowsky stain, and was introduced by professor Chen-Hui Liu (劉禎輝), a faculty member of National Taiwan University, in 1953. The method is widely used in Taiwan. Compared with other staining methods, Liu's stain is relatively fast, taking no more than 3 minutes to complete. In pathology, Liu's stain is primarily used to distinguish blood cells, but it can also be applied to vaginal discharge, sputum, and pus as a simple stain. Components Liu's stain is composed of two dyes, Liu A and Liu B. Liu A is the anionic dye; it contains eosin Y, which stains the cytoplasm as well as hemoglobin red. Liu B, on the other hand, is the cationic dye; it contains azur I and methylene azure, which stain the nucleus and basophilic granules blue. To apply the stain to a fixed smear, first add Liu A for about 45 seconds, then add Liu B for about 90 seconds. Then wash off the excess dye by gently flushing the back of the smear. The staining is done once the water on the smear has dried. See also Romanowsky stain Anatomical pathology Staining Staining dyes
Liu's stain
[ "Chemistry", "Biology" ]
294
[ "Staining", "Microbiology techniques", "Cell imaging", "Microscopy" ]
43,844,298
https://en.wikipedia.org/wiki/Script%20lichen
A script lichen, or graphid lichen, is a member of a group of lichens which have spore producing structures that look like writing on the lichen body. The structures are elongated and narrow apothecia called lirellae, which look like short scribbles on the thallus. "Graphid" is derived from Greek for "writing". An example is Graphis mucronata. References Lichenology Fungus common names
Script lichen
[ "Biology" ]
97
[ "Lichenology", "Fungus common names", "Fungi", "Common names of organisms" ]
43,845,714
https://en.wikipedia.org/wiki/Perovskite%20solar%20cell
A perovskite solar cell (PSC) is a type of solar cell that includes a perovskite-structured compound, most commonly a hybrid organic–inorganic lead or tin halide-based material as the light-harvesting active layer. Perovskite materials, such as methylammonium lead halides and all-inorganic cesium lead halide, are cheap to produce and simple to manufacture. Solar-cell efficiencies of laboratory-scale devices using these materials have increased from 3.8% in 2009 to 25.7% in 2021 in single-junction architectures, and, in silicon-based tandem cells, to 29.8%, exceeding the maximum efficiency achieved in single-junction silicon solar cells. Perovskite solar cells have therefore been the fastest-advancing solar technology . With the potential of achieving even higher efficiencies and very low production costs, perovskite solar cells have become commercially attractive. Core problems and research subjects include their short- and long-term stability. Advantages The raw materials used and the possible fabrication methods (such as various printing techniques) are both low cost. Their high absorption coefficient enables ultrathin films of around 500 nm to absorb the complete visible solar spectrum. These features combined result in the ability to create low cost, high efficiency, thin, lightweight, and flexible solar modules. Perovskite solar cells have found use in powering prototypes of low-power wireless electronics for ambient-powered Internet of things applications, and may help mitigate climate change. Perovskite cells also possess many optoelectrical properties that benefit their use in solar cells. For example, the exciton binding energy is small. This allows electron holes and electrons to be easily separated upon the absorption of a photon. Moreover, the long diffusion distance of the charge carrier and the high diffusivity - the rate of diffusion - allow the charge carriers to travel long distances within the perovskite solar cell, which improves the chance of it to be absorbed and converted to power. Lastly, perovskite cells are characterized by wide absorption ranges and high absorption coefficients, which further increase the power efficiency of the solar cell by increasing the range of photon energies that are absorbed Materials used The name "perovskite solar cell" is derived from the ABX3 crystal structure of the absorber materials, referred to as perovskite structure, where A and B are cations and X is an anion. A cations with radii between 1.60 Å and 2.50 Å have been found to form perovskite structures. The most commonly studied perovskite absorber is methylammonium lead trihalide (CH3NH3PbX3, where X is a halogen ion such as iodide, bromide, or chloride), which has an optical bandgap between ~1.55 and 2.3 eV, depending on halide content. Formamidinium lead trihalide (H2NCHNH2PbX3) has also shown promise, with bandgaps between 1.48 and 2.2 eV. Its minimum bandgap is closer to the optimal for a single-junction cell than methylammonium lead trihalide, so it should be capable of higher efficiencies. The first use of perovskite in a solid-state solar cell was in a dye-sensitized cell using CsSnI3 as a p-type hole transport layer and absorber. A common concern is the inclusion of lead as a component of perovskite materials; solar cells composed from tin-based perovskite absorbers such as CH3NH3SnI3 have also been reported, though with lower power-conversion efficiencies. Shockley–Queisser limit Solar cell efficiency is limited by the Shockley–Queisser limit. 
This calculated limit sets the maximum theoretical efficiency of a solar cell using a single junction with no other loss aside from radiative recombination in the solar cell. Based on the AM1.5G global solar spectrum, the maximum power conversion efficiency is correlated to a respective bandgap, forming a parabolic relationship. This limit can be expressed as the product η = u·v·m, where u is the ultimate efficiency factor, v is the ratio of the open-circuit voltage Vop to the band-gap voltage Vg, and m is the impedance matching factor; the factors v and m also depend on Vc, the thermal voltage, and Vs, the voltage equivalent of the temperature of the Sun. The most efficient bandgap is found to be at 1.34 eV, with a maximum power conversion efficiency (PCE) of 33.7%. Reaching this ideal bandgap energy can be difficult, but utilizing tunable perovskite solar cells allows for the flexibility to match this value. Further experimenting with multijunction solar cells allows for the Shockley–Queisser limit to be surpassed, expanding to allow photons of a broader wavelength range to be absorbed and converted, without increasing thermalisation loss. The actual band gap for formamidinium (FA) lead trihalide can be tuned as low as 1.48 eV, which is closer to the ideal bandgap energy of 1.34 eV for maximum power-conversion efficiency single junction solar cells, predicted by the Shockley–Queisser limit. The 1.3 eV bandgap energy has been successfully achieved with the hybrid cell, which has a tunable bandgap energy (Eg) from 1.24–1.41 eV. Multi-junction solar cells Multi-junction solar cells are capable of a higher power conversion efficiency (PCE), increasing the threshold beyond the thermodynamic maximum set by the Shockley–Queisser limit for single junction cells. By having multiple bandgaps in a single cell, the loss of photons above or below the band gap energy of a single-junction solar cell is prevented. In tandem (double) junction solar cells, a PCE of 31.1% has been recorded, increasing to 37.9% for triple junction and 38.8% for quadruple junction solar cells. However, the metal organic chemical vapor deposition (MOCVD) process needed to synthesise lattice-matched and crystalline solar cells with more than one junction is very expensive, making it a less than ideal candidate for widespread use. Perovskite semiconductors offer an option that has the potential to rival the efficiency of multi-junction solar cells but can be synthesised under more common conditions at a greatly reduced cost. Rivalling the double, triple, and quadruple junction solar cells mentioned above are all-perovskite tandem cells with a maximum PCE of 31.9%, all-perovskite triple-junction cells reaching 33.1%, and the perovskite-Si triple-junction cell, reaching an efficiency of 35.3%. These multi-junction perovskite solar cells, in addition to being available for cost-effective synthesis, also maintain high PCE under varying weather extremes, making them usable worldwide. Chiral ligands Organic chiral ligands show promise for increasing the maximum power conversion efficiency of halide perovskite solar cells when used correctly. Chirality can be produced in inorganic semiconductors by enantiomeric distortions near the surface of the lattice, electronic coupling between the substrate and a chiral ligand, assembly into a chiral secondary structure, or chiral surface defects. By attaching a chiral phenylethylamine ligand to an achiral lead bromide perovskite nanoplatelet, a chiral inorganic-organic perovskite is formed.
Inspection of the inorganic-organic perovskite via Circular Dichroism (CD) spectroscopy reveals two regions. One represents the charge transfer between the ligand and the nanoplatelet (300-350 nm), and the other represents the excitonic absorption maximum of the perovskite. Evidence of charge transfer in these systems shows promise for increasing power conversion efficiency in perovskite solar cells. Inorganic perovskites The highest-performing perovskite solar cells suffer from chemical instability. The organic components such as methylammonium or formamidinium are the basis of the weakness. Encapsulation to prevent this decay is expensive. Fully inorganic perovskites could minimize these problems. Fully inorganic perovskites have PCEs over 17%. These high-performing fully inorganic perovskite cells are created using CsPbI3, which has a band gap similar to that of high-performing organic-inorganic hybrid perovskites (OIHPs) (~1.7 eV), as well as excellent optoelectrical properties. Although chemically stable, these perovskite materials face significant issues with phase stability that prevent their broad industrial application. In high efficiency CsPbI3, for example, the photoactive black α-phase is prone to transform into the inactive yellow δ-phase, seriously inhibiting the performance, especially when exposed to moisture. This also made them difficult to synthesize at ambient temperatures, as the black α-phase is thermodynamically unstable with respect to the yellow δ-phase, although this has recently been addressed by Hei Ming Lai's group. The challenge of stabilizing the photoactive black α-phase of inorganic perovskite materials has been tackled through a variety of strategies, including octahedral anchoring and secondary crystal growth. 2D hybrid organic-inorganic perovskites 2D perovskites are characterized by improved stability and excitonic confinement properties compared with 3D perovskites, while maintaining the charge transport properties of 3D perovskite materials. Furthermore, the 2D hybrid organic-inorganic perovskite (HOIP) structure also eases the steric restrictions on the "B" cations, as outlined by Goldschmidt's tolerance factor in 3D HOIPs, providing a much larger compositional space to engineer new materials with tailored properties. Structure HOIPs follow the same ABX3 stoichiometry as their 3D counterparts. In this case, B is a metal cation, X is a halogen anion (Cl−, Br−, I−) and A represents an organic molecular cation. The A-site cations are caged in a network of corner-sharing BX6 octahedra via N-H-X hydrogen bonds between the ammonium group of the A-site cation and the halogens of the octahedra. As the length of the 2D organic ion increases, the spacing between the corner-sharing octahedra does as well, forming a 2D or quasi-2D structure. The organic and inorganic layers are held together by van der Waals forces. A formula of R2An−1BnX3n+1 is used to characterize the 2D and quasi-2D structures. Here, R is the large organic spacer cation that separates the inorganic layers and "n" refers to the number of inorganic layers between the organic spacers. Mechanical properties To achieve mechanically durable devices, a top priority is to understand the inherent mechanical properties of the materials. As for other 2D materials, mechanical properties are analyzed using computational methods and verified experimentally. Nanoindentation is a common technique to measure mechanical properties of 2D materials.
Nanoindentation results in 2D HOIPs reveal anisotropy in the Young’s modulus along different plane directions (100, 001, and 110). Gao et al. showed single-crystal (C6H5CH2NH3)2PbCl4 had mid-range anisotropy in these directions because of the corner sharing inherent to the crystal structure. The strongest direction was the [100] direction, which is perpendicular to the inorganic layers. Generally, across many 2D HOIPs, there is a dominant correlation between increased Pb-X bond strength (Pb being the most common metal cation) and Young’s moduli. Similarly, another nanoindentation study found that changing the A ion from organic CH3NH3+ to inorganic Cs+ has negligible effects on the Young’s modulus, whereas the Pb–X bond strength has the dominating effect. Due to the increased mechanical stability of the inorganic layers, nanoindentation finds that 2D HOIP structures with thicker and more densely packed inorganic layers have increased Young’s moduli and increased stability. A study by Tu et al. performed mechanical properties testing on a simple lead iodide system to investigate the role of the number and the length of subunits (organic layer) on the out-of-plane Young’s modulus utilizing nanoindentation. This study found that 2D HOIPs are softer than their 3D counterparts due to a shift from covalent/ionic bonding to van der Waals bonding. Furthermore, increasing the number of subunits “n” from 1 to 5 increases the Young’s modulus and hardness until reaching standard 3D values. Increasing the length of the organic chain decreases the Young’s modulus until it plateaus. These factors can be tailored when designing perovskite solar cells for unique applications. 2D HOIPs are also susceptible to the negative Poisson's ratio phenomenon, in which a material contracts laterally when stretched and expands laterally when compressed. This phenomenon is observed commonly in 2D materials and the Poisson's ratio can be modulated by changing the "X" halide in the 2D HOIP chemistry. Halides with weaker electronegativity form weaker bonds with the “B” cation, resulting in an increased (in magnitude) negative Poisson's ratio. This provides a lever for tunable flexibility of 2D HOIPs and for applications in microelectromechanical and nanoelectronic devices. Other research Solar cells based on transition metal oxide perovskites and heterostructures thereof such as LaVO3/SrTiO3 have been studied. Rice University scientists discovered a novel phenomenon of light-induced lattice expansion in perovskite materials. Perovskite quantum dot solar cell technology may extend cell durability, which remains a critical limitation. In order to overcome the instability issues with lead-based organic perovskite materials in ambient air and reduce the use of lead, perovskite derivatives, such as Cs2SnI6 double perovskite, have been investigated. Processing Perovskite solar cells hold an advantage over traditional silicon solar cells in the simplicity of their processing and their tolerance to internal defects. Traditional silicon cells require expensive, multi-step processes, conducted at high temperatures (>1000 °C) under high vacuum in special cleanroom facilities. Meanwhile, the hybrid organic-inorganic perovskite material can be manufactured with simpler wet chemistry techniques in a traditional lab environment.
Most notably, methylammonium and formamidinium lead trihalides, also known as hybrid perovskites, have been created using a variety of solution deposition techniques, such as spin coating, slot-die coating, blade coating, spray coating, inkjet printing, screen printing, electrodeposition, and vapor deposition techniques, all of which have the potential to be scaled up with relative ease except spin coating. Deposition methods The solution-based processing method can be classified into one-step solution deposition and two-step solution deposition. In one-step deposition, a perovskite precursor solution that is prepared by mixing lead halide and organic halide together, is directly deposited through various coating methods, such as spin coating, spraying, blade coating, and slot-die coating, to form perovskite film. One-step deposition is simple, fast, and inexpensive but it's also more challenging to control the perovskite film uniformity and quality. In the two-step deposition, the lead halide film is first deposited then reacts with organic halide to form perovskite film. The reaction takes time to complete but it can be facilitated by adding Lewis-bases or partial organic halide into lead halide precursors. In two-step deposition method, the volume expansion during the conversion of lead halide to perovskite can fill any pinholes to realize a better film quality. The vapor phase deposition processes can be categorized into physical vapor deposition (PVD) and chemical vapor deposition (CVD). PVD refers to the evaporation of a perovskite or its precursor to form a thin perovskite film on the substrate, which is free of solvent. While CVD involves the reaction of organic halide vapor with the lead halide thin film to convert it into the perovskite film. A solution-based CVD, aerosol-assisted CVD (AACVD) was also introduced to fabricate halide perovskite films, such as CH3NH3PbI3, CH3NH3PbBr3, and Cs2SnI6. One-step solution deposition In one-step solution processing, a lead halide and a methylammonium halide can be dissolved in a solvent and spin coated onto a substrate. Subsequent evaporation and convective self-assembly during spinning results in dense layers of well crystallized perovskite material, due to the strong ionic interactions within the material (The organic component also contributes to a lower crystallization temperature). However, simple spin-coating does not yield homogenous layers, instead requiring the addition of other chemicals such as GBL, DMSO, and toluene drips. Simple solution processing results in the presence of voids, platelets, and other defects in the layer, which would hinder the efficiency of a solar cell. Another technique using room temperature solvent-solvent extraction produces high-quality crystalline films with precise control over thickness down to 20 nanometers across areas several centimeters square without generating pinholes. In this method "perovskite precursors are dissolved in a solvent called NMP and coated onto a substrate. Then, instead of heating, the substrate is bathed in diethyl ether, a second solvent that selectively grabs the NMP solvent and whisks it away. What's left is an ultra-smooth film of perovskite crystals." In another solution processed method, the mixture of lead iodide and methylammonium halide dissolved in DMF is preheated. Then the mixture is spin coated on a substrate maintained at higher temperature. This method produces uniform films of up to 1 mm grain size. 
Pb halide perovskites can be fabricated from a PbI2 precursor, or non-PbI2 precursors, such as PbCl2, Pb(Ac)2, and Pb(SCN)2, giving films different properties. Two-step solution deposition In 2015, a new approach for forming the PbI2 nanostructure and the use of high CH3NH3I concentration have been adopted to form high quality (large crystal size and smooth) perovskite film with better photovoltaic performances. On one hand, self-assembled porous PbI2 is formed by incorporating small amounts of rationally chosen additives into the PbI2 precursor solutions, which significantly facilitate the conversion of perovskite without any PbI2 residue. On the other hand, through employing a relatively high CH3NH3I concentration, a firmly crystallized and uniform CH3NH3PbI3 film is formed. Furthermore, this is an inexpensive approach. Vapor deposition In vapor assisted techniques, spin coated or exfoliated lead halide is annealed in the presence of methylammonium iodide vapor at a temperature of around 150 °C. This technique holds an advantage over solution processing, as it opens up the possibility for multi-stacked thin films over larger areas. This could be applicable for the production of multi-junction cells. Additionally, vapor deposited techniques result in less thickness variation than simple solution processed layers. However, both techniques can result in planar thin film layers or for use in mesoscopic designs, such as coatings on a metal oxide scaffold. Such a design is common for current perovskite or dye-sensitized solar cells. Scalability Scalability includes not only scaling up the perovskite absorber layer, but also scaling up charge-transport layers and electrode. Both solution and vapor processes hold promise in terms of scalability. Process cost and complexity is significantly less than that of silicon solar cells. Vapor deposition or vapor assisted techniques reduce the need for use of further solvents, which reduces the risk of solvent remnants. Solution processing is cheaper. Current issues with perovskite solar cells revolve around stability, as the material is observed to degrade in standard environmental conditions, suffering drops in efficiency (See also Stability). In 2014, Olga Malinkiewicz presented her inkjet printing manufacturing process for perovskite sheets in Boston (US) during the MRS fall meeting – for which she received MIT Technology review's innovators under 35 award. The University of Toronto also claims to have developed a low-cost Inkjet solar cell in which the perovskite raw materials are blended into a Nanosolar ‘ink’ which can be applied by an inkjet printer onto glass, plastic or other substrate materials. Scaling up the absorber layer In order to scale up the perovskite layer while maintaining high efficiency, various techniques have been developed to coat the perovskite film more uniformly. For example, some physical approaches are developed to promote supersaturation through rapid solvent removal, thus getting more nucleations and reducing grain growth time and solute migration. Heating, gas flow, vacuum, and anti-solvent can all assist solvent removal. And chemical additives, such as chloride additives, Lewis base additives, surfactant additive, and surface modification, can influence the crystal growth to control the film morphology. 
For example, a recent report of surfactant additive, such as L-α-phosphatidylcholine (LP), demonstrated the suppression of solution flow by surfactants to eliminate gaps between islands and meanwhile the surface wetting improvement of perovskite ink on the hydrophobic substrate to ensure a full coverage. Besides, LP can also passivate charge traps to further enhance the device performance, which can be used in blade coating to get a high-throughput of PSCs with minimal efficiency loss. Scaling up the charge-transport layer Scaling up the charge-transport layer is also necessary for the scalability of PSCs. Common electron transport layer (ETL) in n-i-p PSCs are TiO2, SnO2 and ZnO. Currently, to make TiO2 layer deposition be compatible with flexible polymer substrate, low-temperature techniques, such as atomic layer deposition, molecular layer deposition, hydrothermal reaction, and electrodeposition, are developed to deposit compact TiO2 layer in large area. Same methods also apply to SnO2 deposition. As for hole transport layer (HTL), instead of commonly used PEDOT:PSS, NiOx is used as an alternative due to the water absorption of PEDOT, which can be deposited through room-temperature solution processing. CuSCN and NiO are alternative HTL materials which can be deposited by spray coating, blade coating, and electrodeposition, which are potentially scalable. Researchers also report a molecular doping method for scalable blading to make HTL-free PSCs. Scaling up the back electrode Evaporation deposition of back electrode is mature and scalable but it requires vacuum. Vacuum-free deposition of back electrode is important for full solution processibility of PSCs. Silver electrodes can be screen-printed, and silver nanowire network can be spray-coated as back electrode. Carbon is also a potential candidate as scalable PSCs electrode, such as graphite, carbon nanotubes, and graphene. Toxicity Toxicity issues associated with the lead content in perovskite solar cells strains the public perception and acceptance of the technology. The health and environmental impact of toxic heavy metals has been much debated in the case of CdTe solar cells, whose efficiency became industrially relevant in the 1990s. Although CdTe is a thermally and chemically very stable compound with a low solubility product (Ksp, of 10−34) and, accordingly, its toxicity was revealed to be extremely low, rigorous industrial hygiene programmes and recycling commitment programmes have been implemented. In contrast to CdTe, hybrid perovskites are very unstable and easily degrade to rather soluble compounds of Pb or Sn with KSP=4.4×10−9, which significantly increases their potential bioavailability and hazard for human health, as confirmed by recent toxicological studies. Although the 50% lethal dose of lead [LD50(Pb)] is less than 5 mg per kg of body weight, health issues arise at much lower exposure levels. Young children absorb 4–5 times as much lead as adults and are most susceptible to the adverse effects of lead. In 2003, a maximum blood Pb level (BLL) of 5 μg/dL was imposed by the World Health Organization, which corresponds to the amount of Pb contained in only 25 mm2 of the perovskite solar module. Furthermore, the BLL of 5 μg/dL was revoked in 2010 after the discovery of decreased intelligence and behavioral difficulties in children exposed to even lower values. Recently, Hong Zhang et al. 
reported a universal co-solvent dilution strategy to significantly reduce toxic lead waste production, the usage of perovskite materials, and the fabrication cost by 70%, while also delivering PCEs of over 24% and 18.45% in laboratory cells and modules, respectively. Reducing lead toxicity Replacing lead in perovskites Various studies have been performed to analyze promising alternatives to lead perovskite for use in PSCs. Good candidates for replacement, which ideally have low toxicity, narrow direct bandgaps, high optical absorption coefficients, high carrier mobility, and good charge transport properties, include tin/germanium-halide perovskites, double perovskites, and bismuth/antimony-halides with perovskite-like structures. Research done on tin halide-based PSCs shows that they have a lower power conversion efficiency (PCE), with those fabricated experimentally achieving a PCE of 9.6%. This relatively low PCE is in part due to the oxidation of Sn2+ to Sn4+, which will act as a p-type dopant in the structure and result in higher dark carrier concentration and increased carrier recombination rates. Germanium halide perovskites have proven similarly unsuccessful due to low efficiencies and issues with oxidising tendencies, with one experimental solar cell displaying a PCE of only 0.11%. Higher PCEs have been reported from some germanium tin alloy-based perovskites, however, with an all-inorganic CsSn0.5Ge0.5I3 film having a reported PCE of 7.11%. In addition to this higher efficiency, the germanium tin alloy perovskites have also been found to have high photostability. Apart from the tin- and germanium-based perovskites, there has also been research on the viability of double perovskites with the formula A2M+M3+X6. While these double perovskites have a favorable bandgap of approximately 2 eV and exhibit good stability, several issues including high electron/hole effective masses and the presence of indirect bandgaps result in lowered carrier mobility and charge transport. Research exploring the viability of bismuth/antimony halides in replacing lead perovskites has also been done, particularly with Cs3Sb2I9 and Cs3Bi2I9, which also have bandgaps of approximately 2 eV. Experimental results have also shown that, while antimony and bismuth halide-based PSCs have good stability, their low carrier mobilities and poor charge transport properties restrict their viability in replacing lead-based perovskites. Encapsulation to reduce lead leakage Recent research into the usage of encapsulation as a method for reducing lead leakage has been conducted, particularly focusing on the utilization of self-healing polymers. Research has been done on two promising polymers, Surlyn and a thermal crosslinking epoxy-resin, diglycidyl ether bisphenol A:n-octylamine:m-xylylenediamine = 4:2:1. Experiments showed a substantial reduction in lead leakage from PSCs using these self-healing polymers under simulated sunny weather conditions and after simulated hail damage had cracked the outer glass encapsulation. Notably, the epoxy-resin encapsulation was able to reduce lead leakage by a factor of 375 when heated by simulated sunlight. Coatings to adsorb lead leakage Chemically lead-binding coatings have also been employed experimentally to reduce lead leakage from PSCs. In particular, Cation Exchange Resins (CERs) and P,P′-di(2-ethylhexyl)methanediphosphonic acid (DMDP) have been employed experimentally in this effort.
Both coatings work similarly, chemically sequestering lead that might leak from a PSC module after weather damage occurs. Research into CERs has shown that, through diffusion-controlled processes, Pb2+ lead is effectively adsorbed and bonded onto the surface of CERs, even in the presence of competing divalent ions such as Mg2+ and Ca2+ that might also occupy binding sites on the CER surface. To test the efficacy of CER-based coatings in adsorbing lead in practical conditions, researchers dripped slightly acidic water, meant to simulate rainwater, onto a PSC module cracked by simulated hail damage. Researchers found that by applying a CER coating onto the copper electrodes of damaged PSC modules, lead leakage was reduced by 84%. When the CER was integrated into a carbon-based electrode paste applied to PSC and on the top of the encapsulating glass, the lead leakage decreased by 98%. A similar test was also performed on a PSC module with DMDP coated on both the top and bottom of the module to study the efficacy of DMDP in reducing lead leakage. In this test, the module was cracked by simulated hail damage, and placed in a solution of acidic water containing aqueous Ca2+ ions, meant to simulate acidic rain with low levels of aqueous Calcium present. The lead concentration of acidic water was tracked, and researchers found that the lead sequestration efficiency of the DMDP coating at room temperature 96.1%. Reducing the usage of lead materials during device fabrication A co-solvent dilution strategy has been reported to obtain high-quality perovskite films with very low concentration precursor solutions. This strategy substantially reduces the quantity of expensive raw materials in the perovskite precursor ink and reduces the toxic waste production by spin coating through two key routes: minimizing precursor loss during the processing of perovskite films and enhancing the lifetime and shelf-life of the inks by suppressing aggregation of precursor colloids. A PCE of over 24% for laboratory PSCs could be achieved with a co-solvent dilution to a level as low as 0.5 M. In addition, scalability of the co-solvent dilution strategy is tested via fabrication of perovskite solar modules (PSMs) with different sizes using industrial spin coating. The modules fabricated by co-solvent dilution strategy show higher PCEs and far better uniformity and reproducibility than modules prepared with conventional perovskite inks, whilst using a fraction of the precursor. Importantly, more than 70% toxic waste/solvent, perovskite raw material, and fabrication cost are projected to be reduced for module fabrication compared to the same modules made using conventional inks by industrial spin coating, and in doing so make spin coating a sustainable technique for medium scale manufacturing, for instance, for standalone modules or Si wafer-scale integration. This work shows that through judicious selection of a greener co-solvent, we can significantly reduce the usage and waste of toxic solvents and perovskite raw materials, while also simplifying fabrication and cutting costs of PSCs. Physics An important characteristic of the most commonly used perovskite system, the methylammonium lead halides, is a bandgap controllable by the halide content. The materials also display a diffusion length for both holes and electrons of over one micron. 
The long diffusion length means that these materials can function effectively in a thin-film architecture, and that charges can be transported in the perovskite itself over long distances. It has recently been reported that charges in the perovskite material are predominantly present as free electrons and holes, rather than as bound excitons, since the exciton binding energy is low enough to enable charge separation at room temperature. Efficiency limits Perovskite solar cell bandgaps are tunable and can be optimised for the solar spectrum by altering the halide content in the film (i.e., by mixing I and Br). The Shockley–Queisser radiative efficiency limit, also known as the detailed balance limit, is about 31% under an AM1.5G solar spectrum at 1000 W/m2, for a perovskite bandgap of 1.55 eV. This is slightly smaller than the radiative limit of gallium arsenide (bandgap 1.42 eV), which can reach a radiative efficiency of 33%. Values of the detailed balance limit are available in tabulated form and a MATLAB program for implementing the detailed balance model has been written. In the meantime, the drift-diffusion model has been found to successfully predict the efficiency limit of perovskite solar cells, which enables an in-depth understanding of the device physics, especially the radiative recombination limit and the effect of selective contacts on device performance. There are two prerequisites for predicting and approaching the perovskite efficiency limit. First, the intrinsic radiative recombination needs to be corrected after adopting optical designs which will significantly affect the open-circuit voltage at its Shockley–Queisser limit. Second, the contact characteristics of the electrodes need to be carefully engineered to eliminate charge accumulation and surface recombination at the electrodes. With these two procedures, the accurate prediction of the efficiency limit and precise evaluation of efficiency degradation for perovskite solar cells are attainable by the drift-diffusion model. Along with detailed balance analysis and drift-diffusion calculations, there have been many first-principles studies to find the characteristics of the perovskite material numerically. These include but are not limited to the bandgap, effective mass, and defect levels for different perovskite materials. There have also been efforts to shed light on the device mechanism through simulations; Agrawal et al. suggest a modeling framework, present an analysis of near-ideal efficiency, and discuss the importance of the interfaces between the perovskite and the hole/electron transport layers. Additionally, a circuit model has been developed for describing the current density-voltage characteristics of perovskite solar cells. Sun et al. proposed a compact model for different perovskite structures based on experimental transport data. Minshen Lin et al. proposed a modified diode model to quantify the efficiency loss of perovskite solar cells. Architectures Perovskite solar cells function efficiently in a number of somewhat different architectures depending either on the role of the perovskite material in the device or on the nature of the top and bottom electrodes. Devices in which positive charges are extracted by the transparent bottom electrode (cathode) can predominantly be divided into 'sensitized', where the perovskite functions mainly as a light absorber and charge transport occurs in other materials, or 'thin-film', where most electron or hole transport occurs in the bulk of the perovskite itself.
Similar to the sensitization in dye-sensitized solar cells, the perovskite material is coated onto a charge-conducting mesoporous scaffold – most commonly TiO2 – as light-absorber. The photogenerated electrons are transferred from the perovskite layer to the mesoporous sensitized layer through which they are transported to the electrode and extracted into the circuit. The thin film solar cell architecture is based on the finding that perovskite materials can also act as highly efficient, ambipolar charge-conductor. After light absorption and the subsequent charge-generation, both negative and positive charge carrier are transported through the perovskite to charge selective contacts. Perovskite solar cells emerged from the field of dye-sensitized solar cells, so the sensitized architecture was that initially used, but over time it has become apparent that they function well, if not ultimately better, in a thin-film architecture. More recently, some researchers also successfully demonstrated the possibility of fabricating flexible devices with perovskites, which makes it more promising for flexible energy demand. Certainly, the aspect of UV-induced degradation in the sensitized architecture may be detrimental for the important aspect of long-term stability. There is another different class of architectures, in which the transparent electrode at the bottom acts as cathode by collecting the photogenerated p-type charge carriers. Research and development tools and methods The Perovskite Database is a database and analysis tool of perovskite solar cells research data which systematically integrates over 15,000 publications, in particular device-data about "over 42,400" perovskite devices. Authors described the FAIR open database site – which as of January 2022 requires signing up to access the data and uses software that is partly open source but not marked as having a free software license on GitHub – as a participative "Wikipedia for perovskite solar cell research". It allows data to be filtered and displayed according to various criteria such as material compositions or component type and could thereby support the development of optimal architecture designs (including the materials used). High-throughput screening of mixtures and contact layers is one development mechanism that has been used to develop relatively stable perovskite solar cells. History Perovskite materials have been well known for many years, but the first incorporation into a solar cell was reported by Tsutomu Miyasaka et al. in 2009. This was based on a dye-sensitized solar cell architecture, and generated only 3.8% power conversion efficiency (PCE) with a thin layer of perovskite on mesoporous TiO2 as electron-collector. Moreover, because a liquid corrosive electrolyte was used, the cell was only stable for a few minutes. Nam-Gyu Park et al. improved upon this in 2011, using the same dye-sensitized concept, achieving 6.5% PCE. A breakthrough came in 2012, when Mike Lee and Henry Snaith from the University of Oxford realised that the perovskite was stable if contacted with a solid-state hole transporter such as spiro-OMeTAD and did not require the mesoporous TiO2 layer in order to transport electrons. They showed that efficiencies of almost 10% were achievable using the 'sensitized' TiO2 architecture with the solid-state hole transporter, but higher efficiencies, above 10%, were attained by replacing it with an inert scaffold. 
Further experiments in replacing the mesoporous TiO2 with Al2O3 resulted in increased open-circuit voltage and a relative improvement in efficiency of 3–5% over those with TiO2 scaffolds. This led to the hypothesis that a scaffold is not needed for electron extraction, which was later proved correct. This realisation was then closely followed by a demonstration that the perovskite itself could also transport holes, as well as electrons. A thin-film perovskite solar cell, with no mesoporous scaffold, of > 10% efficiency was achieved. In 2013 both the planar and sensitized architectures saw a number of developments. Burschka et al. demonstrated a deposition technique for the sensitized architecture exceeding 15% efficiency by two-step solution processing. At a similar time, Olga Malinkiewicz et al. and Liu et al. showed that it was possible to fabricate planar solar cells by thermal co-evaporation, achieving more than 12% and 15% efficiency in a p-i-n and an n-i-p architecture respectively. Docampo et al. also showed that it was possible to fabricate perovskite solar cells in the typical 'organic solar cell' architecture, an 'inverted' configuration with the hole transporter below and the electron collector above the perovskite planar film. A range of new deposition techniques and even higher efficiencies were reported in 2014. A reverse-scan efficiency of 19.3% was claimed by Yang Yang at UCLA using the planar thin-film architecture. In November 2014, a device by researchers from KRICT achieved a record with the certification of a non-stabilized efficiency of 20.1%. Continuing the trend, a new efficiency record for single-junction perovskite solar cells has been set each year since 2015, with the most frequent record-breakers coming from KRICT and UNIST. The latest record-holders are researchers from UNIST who achieved 25.7% efficiency. There are also efforts focused on reducing energy cost, including the Apolo project consortium at CEA laboratories, which aims to bring the module cost below €0.40/Wp (Watt peak). At least since 2016, the records for perovskite-silicon tandem solar cells have consistently remained higher than the ones for single-junction cells. Since 2018 the records were interchangeably broken by Oxford Photovoltaics and researchers from Helmholtz-Zentrum Berlin. In 2021, the latter achieved the best efficiency so far: 29.8%. Stability One big challenge for perovskite solar cells (PSCs) is the aspect of short-term and long-term stability. The traditional silicon-wafer solar cell in a power plant can last 20–25 years, setting that timeframe as the standard for solar cell stability. PSCs have great difficulty lasting that long. The instability of PSCs is mainly related to environmental influences (moisture and oxygen), thermal stress, the intrinsic stability of methylammonium-based and formamidinium-based perovskites, heating under applied voltage, photo-induced effects (ultraviolet and visible light), and mechanical fragility. Several studies of PSC stability have been performed, and some factors have been shown to be important to it. However, there is no standard "operational" stability protocol for PSCs, although a method to quantify the intrinsic chemical stability of hybrid halide perovskites has recently been proposed. The water-solubility of the organic constituent of the absorber material makes devices highly prone to rapid degradation in moist environments.
The degradation which is caused by moisture can be reduced by optimizing the constituent materials, the architecture of the cell, the interfaces and the environment conditions during the fabrication steps. Encapsulating the perovskite absorber with a composite of carbon nanotubes and an inert polymer matrix can prevent the immediate degradation of the material by moist air at elevated temperatures. However, no long-term studies and comprehensive encapsulation techniques have yet been demonstrated for perovskite solar cells. Devices with a mesoporous TiO2 layer sensitized with the perovskite absorber, are also UV-unstable, due to the interaction between photogenerated holes inside the TiO2 and oxygen radicals on the surface of TiO2. The measured ultra low thermal conductivity of 0.5 W/(Km) at room temperature in CH3NH3PbI3 can prevent fast propagation of the light deposited heat, and keep the cell resistive on thermal stresses that can reduce its life time. The PbI2 residue in perovskite film has been experimentally demonstrated to have a negative effect on the long-term stability of devices. The stabilization problem is claimed to be solved by replacing the organic transport layer with a metal oxide layer, allowing the cell to retain 90% capacity after 60 days. Besides, the two instabilities issues can be solved by using multifunctional fluorinated photopolymer coatings that confer luminescent and easy-cleaning features on the front side of the devices, while concurrently forming a strongly hydrophobic barrier toward environmental moisture on the back contact side. The front coating can prevent the UV light of the whole incident solar spectrum from negatively interacting with the PSC stack by converting it into visible light, and the back layer can prevent water from permeation within the solar cell stack. The resulting devices demonstrated excellent stability in terms of power conversion efficiencies during a 180-day aging test in the lab and a real outdoor condition test for more than 3 months. In July 2015, major hurdles were that the largest perovskite solar cell was only the size of a fingernail and that they degraded quickly in moist environments. However, researchers from EPFL published in June 2017, a work successfully demonstrating large scale perovskite solar modules with no observed degradation over one year (short circuit conditions). Now, together with other organizations, the research team aims to develop a fully printable perovskite solar cell with 22% efficiency and with 90% of performance after ageing tests. Early in 2019, the longest stability test reported to date showed a steady power output during at least 4000 h of continuous operation at Maximum power point tracking (MPPT) under 1 sun illumination from a xenon lamp based solar simulator without UV light filtering. Remarkably, the light harvester used during the stability test is classical methylammonium (MA) based perovskite, MAPbI3, but devices are built up with neither organic based selective layer nor metal back contact. Under these conditions, only thermal stress was found to be the major factor contributing to the loss of operational stability in encapsulated devices. The intrinsic fragility of the perovskite material requires extrinsic reinforcement to shield this crucial layer from mechanical stresses. 
Insertion of mechanically reinforcing scaffolds directly into the active layers of perovskite solar cells resulted in the compound solar cell formed exhibiting a 30-fold increase in fracture resistance, repositioning the fracture properties of perovskite solar cells into the same domain as conventional c-Si, CIGS and CdTe solar cells. Several approaches have been developed to improve perovskite solar cell stability. For instance, in 2021 researchers reported that the stability and long-term reliability of perovskite solar cells was improved with a new kind of "molecular glue". As of 2021, the existing stability tests for solar panels and solar cell systems are designed solely for those containing silicon wafers. As such, these tests, produced by the International Electrotechnical Commission (IEC), have been re-evaluated for their lack of suitability. At the International Summit on Organic PV Stability (ISOS), stability checks for in-lab development of all solar cells were created, but these were not adopted by the IEC. These tests are not pass/fail criteria, rather they evaluate the various causes of solar cell stability issues to root out the problems. They are grouped into five categories: dark storage testing, outdoor testing, light soaking testing, thermal cycling testing, and light-humidity-thermal cycling testing. In these tests, the PCE and J-V data graphs of the PSCs were calculated among varying physical conditions to determine the various causes of PSC degradation. Overall, these ISOS tests helped determine the causes of PSC degradation, which were found to include extended exposure to visible and UV light, environmental contamination, high temperatures, and electrical biases. After 200 temperature cycles, the 2020 PSCs still retained 90% of their power, indicating that they are capable of short-term stability. Now, what remains to be researched is long-term stability, and what material advances could be applied to boost these 200 temperature cycles (days) to 20–25 years. Methods to improve performance and stability The introduction of the Al2O3/NiO interfacial layer not only improves the crystalline quality of perovskite films with large grain size and enhances charge transport, but also effectively restricts the carrier recombination, but PSCs using this interface still have instability problem due to ion-migration and instability of perovskite crystals. To solve the problem, perovskite/Ag-rGO composites in active layer can be used to enhance the stability of PSCs and achieve high performance simultaneously. The Ag-rGO layer can act as a surface passivation layer, reducing defects and trap states at the perovskite layer's surface, which minimizes non-radiative recombination and improves performance and stability. In addition, the perovskite/Ag-rGO composite layer can act as a barrier, preventing moisture entering the perovskite layer and protecting it from degradation due to environmental effects. In the light harvesting measurements, perovskite/Ag-graphene PSCs show a higher Incident monochromatic photon-electron conversion efficiency (IPCE) value than traditional PSCs in the range of visible light. The current-voltage curve of the PSCs also shows the absence of hysteresis effect which is common in traditional PSCs. Perovskite/Ag-graphene PSCs also exhibit better thermal-stability aging at 90 degree Celsius and better photo-stability under continuous light illumination. However, the open-circuit voltage Voc and fill factor (FF) decreases as a trade-off. 
To address the loss in Voc and FF, a SrTiO3/TiO2 composite layer is chosen to overcome this low Voc problem. By choosing SrTiO3/TiO2 as the light-harvesting material, high stability as well as high Voc is expected. Recycling Another core problem in the development, production and use of perovskite solar cells is their recyclability. Perovskite recycling is an absolute necessity due to the presence of lead in perovskites. The use of this element means that simply disposing of perovskite solar cells into landfills would be a major health hazard due to lead runoff and toxicity to both bodies of water and human health. Designs and processes or protocols for efficient recycling would reduce negative environmental impacts, exploitation of critical materials, health impacts and energy requirements beyond what can be achieved with increases in device lifetime. In a review, scientists concluded that "recycle and recovery technologies of perovskite solar cells should be researched and developed proactively". Some aspects of recyclability and recycling rates depend on the design of the disseminated products. Scientific research and development is not always oriented toward designing for recyclability – instead most scientists mainly "look at performance" – "energy conversion efficiency and stability" – and often "neglect designing for recycling". As of 2021, many solar cells installed around the year 2000 are nearing their end-of-life stage. As such, research into perovskite recycling is crucial. One tricky component of perovskites to recycle is lead. Currently, producing 1 GW of energy using the most efficient perovskite solar cell would result in 3.5 tons of lead waste. The main strategy currently used to mitigate lead contamination acts while the solar cell is in operation. Lead-absorbing P,P′-di(2-ethylhexyl)methanediphosphonic acid and sulfonic acid cation-exchange resins are used to prevent lead leaking from any damage the solar panels may incur during use. Research is ongoing to discover means to reduce lead's impact beyond simply lead leakage prevention. Carboxylic acid cation-exchange resin has been found to adsorb lead ions via ion-exchange with hydrogen, and these ions can be easily released via recrystallization after adding sodium iodide to the aqueous solution. This process was found to be low-cost compared to other existing lead recycling techniques, and could theoretically be implemented commercially. Since the efficiency of the best perovskite solar cell has recently reached 25.5%, comparable to the best PV cells made of single-crystal silicon, there is optimism that perovskite PV cells can be commercialized in the future. Therefore, the recycling of the lead and transparent conductors is essential for the development of perovskite PV cells, since the former reduces harmful environmental impact and the latter reduces costs. In this research, an organic solvent such as dimethylformamide (DMF) is used to dissolve the Pb and separate the ITO/glass; a carboxylic acid cation-exchange resin (WAC-gel, chosen because of its best performance) is then used to absorb the Pb ions from the DMF and release them in the form of Pb(NO3)2. By adding NaI into the solution, PbI2 can precipitate and be recycled. Analysis of the recycled materials showed that both the PbI2 and the ITO/glass perform as well as new materials, and the recycling efficiency reached 99.2%.
Moreover, the cost analysis shows that solar modules based on recycling cost around $12 per square meter, whereas those based on new materials cost around $24.80 per square meter. From both an environmental and an economic perspective, it is beneficial to recycle the perovskite PV cells. Hysteretic current-voltage behavior Another major challenge for perovskite solar cells is the observation that current-voltage scans yield ambiguous efficiency values. The power conversion efficiency of a solar cell is usually determined by characterizing its current-voltage (IV) behavior under simulated solar illumination. In contrast to other solar cells, however, it has been observed that the IV-curves of perovskite solar cells show a hysteretic behavior: depending on scanning conditions – such as scan direction, scan speed, light soaking, biasing – there is a discrepancy between the scan from forward-bias to short-circuit (FB-SC) and the scan from short-circuit to forward bias (SC-FB). Various causes have been proposed, such as ion movement, polarization, ferroelectric effects, and filling of trap states; however, the exact origin of the hysteretic behavior is yet to be determined. It does appear that determining the solar cell efficiency from IV-curves risks producing inflated values if the scanning parameters exceed the time-scale which the perovskite system requires in order to reach an electronic steady-state. Two possible solutions have been proposed: Unger et al. show that extremely slow voltage-scans allow the system to settle into steady-state conditions at every measurement point, which thus eliminates any discrepancy between the FB-SC and the SC-FB scan. The steady-state conditions with extremely slow voltage-scans can be simulated by the drift-diffusion solvers SolarDesign and IonMonger. Henry Snaith et al. have proposed 'stabilized power output' as a metric for the efficiency of a solar cell. This value is determined by holding the tested device at a constant voltage around the maximum power-point (where the product of voltage and photocurrent reaches its maximum value) and tracking the power output until it reaches a constant value. Both methods have been demonstrated to yield lower efficiency values when compared to efficiencies determined by fast IV-scans. However, initial studies have been published that show that surface passivation of the perovskite absorber is an avenue with which efficiency values can be stabilized very close to fast-scan efficiencies. In such devices, no obvious hysteresis of the photocurrent was observed when changing the sweep rates or the sweep direction. This indicates that the origin of hysteresis in photocurrent is more likely due to trap formation in some non-optimized films and device fabrication processes. The ultimate way to examine the efficiency of a solar cell device is to measure its power output at the load point. If there is a large density of traps in the devices, or photocurrent hysteresis for other reasons, the photocurrent would rise slowly upon turning on the illumination. This suggests that the interfaces might play a crucial role with regard to the hysteretic IV behavior, since the major difference between the inverted architecture and the regular architectures is that an organic n-type contact is used instead of a metal oxide. The ambiguity in determining the solar cell efficiency from current-voltage characteristics due to the observed hysteresis has also affected the certification process done by accredited laboratories such as NREL.
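A toy numerical model can make the scan-rate dependence and the 'stabilized power output' protocol described above concrete. In the Python sketch below, a single slow internal state s stands in for ion migration or trap filling; its equilibrium law, time constant, and the diode parameters are invented for illustration and are not fitted to any real device.

```python
# Toy model of scan-rate-dependent J-V hysteresis and the stabilized-power-output protocol.
# All parameter values are illustrative placeholders, not measurements of a real cell.
import numpy as np

J_SC, J_0, V_T = 22e-3, 1e-20, 0.02585   # A/cm^2, A/cm^2, V (ideal-diode parameters)
S_MAX, TAU = 0.25, 5.0                   # slow-loss amplitude and relaxation time (s)
P_IN = 100e-3                            # 1-sun input power density, W/cm^2

def s_eq(v):
    """Equilibrium of the slow state: smaller loss when the cell sits at forward bias."""
    return S_MAX * np.clip(1.0 - v / 1.1, 0.0, 1.0)

def current(v, s):
    """Illuminated ideal diode, with the slow state eating part of the photocurrent."""
    return J_SC * (1.0 - s) - J_0 * np.expm1(v / V_T)

def scan(v_start, v_stop, scan_rate, dt=0.01):
    """Sweep the voltage linearly while the slow state relaxes toward s_eq(v)."""
    s = s_eq(v_start)                         # device pre-conditioned at the start bias
    volts = np.arange(v_start, v_stop, np.sign(v_stop - v_start) * scan_rate * dt)
    power = []
    for v in volts:
        s += (s_eq(v) - s) * dt / TAU         # first-order relaxation of the slow state
        power.append(current(v, s) * v)
    return max(power) / P_IN                  # apparent efficiency from this scan

def stabilized_output(v_hold, settle_time=60.0, dt=0.01):
    """Hold near the maximum power point and report power once it has stopped drifting."""
    s = S_MAX                                 # start from the short-circuit condition
    for _ in range(int(settle_time / dt)):
        s += (s_eq(v_hold) - s) * dt / TAU
    return current(v_hold, s) * v_hold / P_IN

print(f"fast FB->SC scan : {scan(1.1, 0.0, scan_rate=1.0):.1%}")
print(f"fast SC->FB scan : {scan(0.0, 1.1, scan_rate=1.0):.1%}")
print(f"slow scans       : {scan(1.1, 0.0, 0.005):.1%} vs {scan(0.0, 1.1, 0.005):.1%}")
print(f"stabilized (MPP) : {stabilized_output(v_hold=0.90):.1%}")
```

With these made-up numbers, the fast FB-SC scan reports a higher apparent efficiency than the fast SC-FB scan, while the slow scans in both directions and the held-at-MPP measurement converge toward the same, lower, steady-state value, mirroring the behavior the section describes.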
The record efficiency of 20.1% for perovskite solar cells accepted as certified value by NREL in November 2014, has been classified as 'not stabilized'. To be able to compare results from different institution, it is necessary to agree on a reliable measurement protocol, as proposed by Zimmermann et al. with corresponding Matlab code on GitHub. As of 2021, the recorded peak power conversion efficiency has been found to be 25.6% efficiency. This was done using a formamidinium lead iodide metal-halide perovskite. Anions were pumped into existing highly efficient perovskites, and functioned to fill in gaps caused by trapped holes in the PV cell. Furthermore, this cell was found to be stable up to 450 hours, which is considered long-term stability. Finally, this device served to prove that anions other than iodine and bromine ions are capable of being bombarded into gaps in PV cells, breaking a trend that was evidently hindering prior research [198]. Perovskites for tandem applications A perovskite cell combined with a bottom cell such as Si or copper indium gallium selenide (CIGS) as a tandem design can suppress individual cell bottlenecks and take advantage of their complementary characteristics to enhance efficiency. These types of cells have higher efficiency potential, and therefore have attracted attention from academic researchers. 4-terminal tandems Using a four terminal configuration in which the two sub-cells are electrically isolated, Bailie et al. obtained a 17% to 18.6% efficient tandem cell with mc-Si (η ~ 11%) and copper indium gallium selenide (CIGS, η ~ 17%) bottom cells, respectively. A 13.4% efficient tandem cell with a highly efficient a-Si:H/c-Si heterojunction bottom cell using the same configuration has also been obtained. The application of TCO-based transparent electrodes to perovskite cells allowed fabricating near-infrared transparent devices with improved efficiency and lower parasitic absorption losses. The application of these cells in 4-terminal tandems allowed improved efficiencies up to 26.7% when using a silicon bottom cell and up to 23.9% with a CIGS bottom cell. In 2020, KAUST-University of Toronto teams reported 28.2% efficient four terminal perovskite/silicon tandem solar cells. To achieve these results, the team used Zr-doped In2O3 transparent electrodes on semitransparent perovskite top cells, previously introduced by Aydin et al., which improved the near infrared response of the silicon bottom cells by utilizing broadband transparent H-doped In2O3 electrodes. The team also enhanced the electron-diffusion length (up to 2.3 μm) thanks to Lewis base passivation via urea. The record efficiency for perovskite/silicon tandems currently stands at 28.2%. 2-terminal tandems Mailoa et al. started the efficiency race for monolithic 2-terminal tandems using an homojunction c-Si bottom cell, demonstrating a 13.7% efficiency cell, largely limited by parasitic absorption losses. Then, Albrecht et al. developed low-temperature processed perovskite cells using a SnO2 electron transport layer. This allowed the use of silicon heterojunction solar cells as bottom cells, with tandem cell efficiency up to 18.1%. Werner et al. then improved this performance by replacing the SnO2 layer with PCBM and introducing a sequential hybrid deposition method for the perovskite absorber, leading to a tandem cell with 21.2% efficiency. Important parasitic absorption losses due to the use of Spiro-OMeTAD were still limiting the overall performance. 
An important change was demonstrated by Bush et al., who inverted the polarity of the top cell (n-i-p to p-i-n). They used a bilayer of SnO2 and zinc tin oxide (ZTO) processed by ALD to work as a sputtering buffer layer, which deposited a transparent top of indium tin oxide (ITO) electrode. This change helped to improve the environmental and thermal stability of the perovskite cell and was crucial to further improve the perovskite/silicon tandem performance to 23.6%. In the meantime, using a p-i-n perovskite top cell, Sahli et al. demonstrated in June 2018 a fully textured monolithic tandem cell with 25.2% efficiency, independently certified by Fraunhofer ISE CalLab. This improved efficiency can largely be attributed to the massively reduced reflection losses (below 2% in the range 360 nm-1000 nm, excluding metallization) and reduced parasitic absorption losses, leading to certified short-circuit currents of 19.5 mA/cm2. Also in June 2018 the company Oxford Photovoltaics presented a cell with 27.3% efficiency. In March 2020, KAUST-University of Toronto teams reported in Science Magazine regarding tandem devices with spin-cast perovskite films on fully textured bottom cells with 25.7% efficiency. The research teams show effort to utilize more solution-based scalable techniques on textured bottom cells. Accordingly, blade-coated perovskite based tandems were reported by a collaborative team of University of North Carolina and Arizona State University. Following this, in August 2020 KAUST team demonstrated first slot-die coated perovskite based tandems, which was an important step for accelerated processing of tandems. In September 2020, Aydin et al. showed the highest certified short-circuit currents of 19.8 mA/cm2 on fully textured silicon bottom cells. Also, Aydin et al. showed the first outdoor performance results for perovskite/silicon tandem solar cells, which was an important hurdle for the reliability tests of such devices. In December 2021, KAUST team updated the champion certified PCE to 28.2%. The record efficiency for perovskite/silicon tandems currently stands at 29.8% as of December 2021. Simulation modeling To investigate possible all-tandem perovskite candidates in an efficient and economical way, simulation software has been implemented. Shankar et al. published a paper in 2022 detailing their use of the Solar Cell Capacitance Simulator – One Dimensional software. This software allows the user to vary device parameters and properties to optimize performance. Results from this simulation research have exhibited efficiencies as high as 30% for a band gap of 1.4 eV, which resulted from increasing the external quantum efficiency to 95% via doping the transport layer. Shankar et al simulated an efficiency of 32.3% by altering the material and thickness of the electron transport and hole transport layers. This simulated efficiency represents a 37% increase in simulated work so far and was obtained upon optimization of work done by Zhao et al. in two-terminal all-perovskite tandem solar cells. Up-scaling In May 2016, IMEC and its partner Solliance announced a tandem structure with a semi-transparent perovskite cell stacked on top of a back-contacted silicon cell. A combined power conversion efficiency of 20.2% was reported, with the potential claimed to exceed 30%. 
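Because the sub-cells of a monolithic (2-terminal) tandem are connected in series, they must carry the same current, which is why current matching and the certified short-circuit currents quoted above matter so much. The following Python sketch adds the voltages of two ideal-diode sub-cells at equal current density; the diode parameters are placeholders chosen only to be of the same order as the 19–20 mA/cm2 figures mentioned in this section, not measured values for any reported device.

```python
# Current matching in a 2-terminal tandem: both sub-cells carry the same current density,
# so their voltages add at each J and the lower-J_sc sub-cell limits the pair.
import numpy as np

V_T = 0.02585   # thermal voltage at 300 K, volts

def v_of_j(j, j_sc, j_0):
    """Invert the ideal single-diode law J = J_sc - J_0*(exp(V/V_T) - 1) for V(J)."""
    return V_T * np.log((j_sc - j) / j_0 + 1.0)

def tandem_mpp(top, bottom, p_in=100e-3):
    """Maximum power point of two series-connected sub-cells (currents in A/cm^2)."""
    j_limit = min(top["j_sc"], bottom["j_sc"])       # series connection: shared current
    j = np.linspace(0.0, 0.999 * j_limit, 2000)
    v = v_of_j(j, **top) + v_of_j(j, **bottom)       # voltages add at equal current
    p = j * v
    i = np.argmax(p)
    return p[i] / p_in, j[i], v[i]

perovskite_top = {"j_sc": 19.8e-3, "j_0": 1e-21}     # wide-gap top cell (placeholder)
silicon_bottom = {"j_sc": 19.5e-3, "j_0": 1e-13}     # narrow-gap bottom cell (placeholder)

eta, j_mpp, v_mpp = tandem_mpp(perovskite_top, silicon_bottom)
print(f"tandem efficiency ~ {eta:.1%} at J = {j_mpp*1e3:.1f} mA/cm^2, V = {v_mpp:.2f} V")
```

Because the model contains no resistive or optical losses, the resulting number is optimistic; its purpose is only to show how the lower of the two short-circuit currents caps the operating current while the open-circuit voltages of the sub-cells stack.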
All-perovskite tandems In 2016, the development of efficient low-bandgap (1.2–1.3 eV) perovskite materials and the fabrication of efficient devices based on these enabled a new concept: all-perovskite tandem solar cells, where two perovskite compounds with different bandgaps are stacked on top of each other. The first two- and four-terminal devices with this architecture reported in the literature achieved efficiencies of 17% and 20.3% respectively. In addition, making formamidinium cesium lead iodide bromide perovskite into four-terminal tandem cells could achieve efficiencies ranging from 19.8% to 25.2%, depending on the parameters of the measurements. All-perovskite tandem cells offer the prospect of being the first fully solution-processable architecture that has a clear route to exceeding not only the efficiencies of silicon, but also GaAs and other expensive III-V semiconductor solar cells. In 2017, Dewei Zhao et al. fabricated low-bandgap (~1.25 eV) mixed Sn-Pb perovskite solar cells (PVSCs) with a thickness of 620 nm, which enabled larger grains and higher crystallinity, extending the carrier lifetimes to more than 250 ns and reaching a maximum power conversion efficiency (PCE) of 17.6%. Furthermore, this low-bandgap PVSC reached an external quantum efficiency (EQE) of more than 70% in the wavelength range of 700–900 nm, the essential infrared spectral region where sunlight is transmitted to the bottom cell. They also combined the bottom cell with a ~1.58 eV bandgap perovskite top cell to create an all-perovskite tandem solar cell with four terminals, obtaining a steady-state PCE of 21.0%, suggesting the possibility of fabricating high-efficiency all-perovskite tandem solar cells. A study in 2020 shows that all-perovskite tandems have much lower carbon footprints than silicon-perovskite tandems. Additionally, in 2020, all-perovskite tandem efficiencies hit a new peak of 24.2% for 1 cm2 solar cells. This value was measured and recorded by Japan Electrical Safety and Environment Technology Laboratories, and was reached by passivating defects at grain boundaries of the traditional lead-tin perovskite using zwitterionic molecules. These inhibit tin ion oxidation, a process which lowers the efficiency of the solar cell by increasing trap density and preventing diffusion. The introduction of zwitterionic antioxidants greatly boosts the efficiency of these devices while only permitting an additional 2% degradation. The addition of zwitterionic substances also requires using an environment rich with formamidine sulfinic acid, catalyzing the necessary reactions to permit charge to be transported between the solar cells. In November 2022, the all-perovskite tandem efficiency reached a new record of 27.4%. This breaks the 2020 record for 1 cm2 solar cells, and was achieved by a joint team from Northwestern University, University of Toronto, and the University of Toledo. This cell additionally broke the previous record for Voc for all-perovskite tandems. This same cell was certified by NREL with a PCE of 26.3% and a Voc of 2.13 V. This marks the “first certified all-perovskite tandem to surpass the record PCE (25.7%) of single-junction perovskite solar cells”. The authors have identified areas for improvement in the Jsc values that could put 30% efficiency within reach in the near future. Commercialization A factory producing perovskite solar cells was opened in May 2021 in Wrocław by Saule Technologies.
There was a little manufacturing in Poland and China, but large-scale deployment was held back by instability and shorter lifespans. Oxford PV opened a factory in Brandenburg, Germany, in 2022. However, companies hope to have perovskite-on-silicon tandem products on the market with a 25-year warranty sometime in the mid-2020s. They may help to meet the high targets for new solar power in India. Building integrated photovoltaics is a possible area of commercialisation, and while there are still stability-related concerns, in 2021 a building in Lublin became the first to be clad with perovskite solar panels, which marked the first commercial use of perovskite. The U.S. Department of Energy Solar Energy Technologies Office (SETO) is a government organization that is investing in the research and development of perovskite solar technologies. It has identified several key areas of improvement if perovskite solar cells are to play a part in the future of photovoltaic technologies. The four target areas for improvement are stability and durability, power conversion efficiency at scale, manufacturability, and technology validation and bankability. The first and third points are addressed above in the Processing and Scalability sections. Power conversion efficiency at scale remains a problem because laboratory efficiencies for small-area devices have not been reproduced in larger-scale devices. Current small-scale devices may find use in mobile and disaster response technologies due to their light weight, flexibility, and power-to-weight ratios, but large-scale testing will be necessary before the power industry adopts this technology at the grid level. The technology validation and bankability area of development points to the willingness of financial institutions to back these technologies. This will require a standardization of testing protocols and an increase in the available field data. The degradation of perovskite solar cells makes current PV testing methods unrealistic in predicting performance in real-world applications. To address these concerns in the adoption of perovskite technology, SETO has funded the Perovskite Photovoltaic Accelerator for Commercializing Technologies (PACT) Validation and Bankability Center. PACT will set standardized field and lab testing and conduct bankability studies to ensure that perovskite technology is ready for commercialization. SETO also published performance targets to direct research and verify that projects are relevant to the path to commercialization. See also Dye-sensitized solar cell Emerging photovoltaics Hybrid solar cell List of types of solar cells Methylammonium lead halide Nanocrystal solar cell Perovskite (mineral) Polymer solar cell Thin film solar cell Third generation photovoltaic cell References Further reading Solar cells Thin-film cells Dye-sensitized solar cells Perovskites Japanese inventions 2009 introductions
Perovskite solar cell
[ "Materials_science", "Mathematics" ]
14,749
[ "Planes (geometry)", "Thin films", "Thin-film cells" ]
43,848,028
https://en.wikipedia.org/wiki/Dendriscocaulon
Dendriscocaulon is a taxonomic name that has been used for a genus of fruticose lichen (shrubby form lichen) with a cyanobacterium as the photobiont partner of the fungus. Dendriscocaulon is considered a taxonomic synonym of the genus Sticta, a foliose lichen (leafy form lichen), which generally has a green alga as the photobiont partner. Lichens that have been called Dendriscocaulon or Sticta involve the same fungal species. They show dramatically different morphology, may grow side by side, and mixed forms exist in which different algae grow within different portions of the same fungal thallus. The biochemistry of the two forms is very different, and the DNA sequences of the fungus and the photobiont can be distinguished by using different primers for DNA sequencing. Use of the name Taxonomists now write the genus name with scare quotes around it, to make clear that they are not accepting the name as correct. The name can be convenient, however, because of the visible morphological difference and because it was used in older literature. The International Code of Botanical Nomenclature states that "For nomenclatural purposes names given to lichens apply to their fungal component." Names of genera and species are based on type specimens. The type of Dendriscocaulon Nyl. from New Zealand, published in 1888, is not the same as the type of Sticta (Schreb.) Ach., published in 1803. The fungal partner in the type specimen of Dendriscocaulon is therefore unlikely to be the same strain as the fungal partner in the type specimen of Sticta, which raises the possibility that at some future time the two genera may be separated once more on the basis of genetic differences in the fungal partner, rather than by the photobiont or the morphology of the lichen. As with all lichens, the conventions of nomenclature give only a partial picture of how the concepts of genus and species apply to the association of two or more species that is known as a lichen. References Cyanobacteria Lichen genera Obsolete fungus taxa Taxa described in 1885 Taxa named by William Nylander (botanist)
Dendriscocaulon
[ "Biology" ]
474
[ "Algae", "Cyanobacteria" ]
48,717,005
https://en.wikipedia.org/wiki/Bearing%20pressure
Bearing pressure is a particular case of contact mechanics often occurring in cases where a convex surface (male cylinder or sphere) contacts a concave surface (female cylinder or sphere: bore or hemispherical cup). Excessive contact pressure can lead to a typical bearing failure such as a plastic deformation similar to peening. This problem is also referred to as bearing resistance. Hypotheses A contact between a male part (convex) and a female part (concave) is considered when the radii of curvature are close to one another. There is no tightening and the joint slides with no friction; therefore, the contact forces are normal to the contact surface. Moreover, bearing pressure is restricted to the case where the load can be described by a radial force pointing towards the center of the joint. Case of a cylinder-cylinder contact In the case of a revolute joint or of a hinge joint, there is a contact between a male cylinder and a female cylinder. The complexity depends on the situation, and three cases are distinguished: when the clearance is negligible, a) the parts are rigid bodies, or b) the parts are elastic bodies; c) the clearance cannot be ignored and the parts are elastic bodies. By "negligible clearance", an H7/g6 fit is typically meant. The axes of the cylinders are along the z-axis, and two external forces apply to the male cylinder: a radial force F along the y-axis, the load; and the action of the bore (contact pressure). The main concern is the contact pressure with the bore, which is uniformly distributed along the z-axis. Notation: D is the nominal diameter of both male and female cylinders; L the guiding length. Negligible clearance and rigid bodies In this first modeling, the pressure is uniform. It is equal to: P = F/(D⋅L). Negligible clearance and elastic bodies If it is considered that the parts deform elastically, then the contact pressure is no longer uniform and transforms to a sinusoidal repartition: P(θ) = Pmax⋅cos θ with Pmax = 4F/(π⋅D⋅L). This is a particular case of the following section (θ0 = π/2). The maximum pressure is 4/π ≃ 1.27 times bigger than in the case of uniform pressure. Clearance and elastic bodies In cases where the clearance can not be neglected, the contact zone between the male part and the bore is no longer the whole half-cylinder surface but is limited to an angular sector 2θ0. The pressure follows Hooke's law: P(θ) = K⋅δ(θ)^α where K is a positive real number that represents the rigidity of the materials; δ(θ) is the radial displacement of the contact point at the angle θ; α is a coefficient that represents the behaviour of the material: α = 1 for metals (purely elastic behaviour), α > 1 for polymers (viscoelastic or viscoplastic behaviour). The pressure varies as: A⋅cos θ - B where A and B are positive real numbers. For this cosine distribution the maximum pressure is: Pmax = 4F⋅(1 − cos θ0)/(D⋅L⋅(2θ0 − sin 2θ0)), where the angle θ0 is expressed in radians. The rigidity coefficient K and the half contact angle θ0 can not be derived from the theory; they must be measured. For a given system — given diameters and materials, thus given K and clearance j values — it is possible to obtain a curve θ0 = ƒ(F/(DL)). Case of a sphere-sphere contact A sphere-sphere contact corresponds to a spherical joint (socket/ball), such as a ball jointed cylinder saddle. It can also describe the situation of bearing balls. Case of uniform pressure The case is similar to the cylindrical case above: when the parts are considered as rigid bodies and the clearance can be neglected, the pressure is supposed to be uniform. It can be calculated from the projected area (a disc of diameter D): P = 4F/(π⋅D²). 
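The following minimal sketch evaluates the three cylinder-cylinder expressions above. It is not taken from the cited references; the function names and the numerical values are purely illustrative, SI units are assumed throughout, and the clearance case requires a measured half contact angle θ0.

```python
import math

def uniform_pressure(F, D, L):
    """Case (a): rigid bodies, negligible clearance; uniform pressure P = F / (D*L)."""
    return F / (D * L)

def sinusoidal_peak_pressure(F, D, L):
    """Case (b): elastic bodies, negligible clearance; Pmax = 4F / (pi*D*L)."""
    return 4 * F / (math.pi * D * L)

def clearance_peak_pressure(F, D, L, theta0):
    """Case (c): elastic bodies with clearance and a measured half contact angle theta0 (radians).

    Pmax = 4*F*(1 - cos(theta0)) / (D*L*(2*theta0 - sin(2*theta0)));
    this reduces to the sinusoidal case when theta0 = pi/2.
    """
    return 4 * F * (1 - math.cos(theta0)) / (D * L * (2 * theta0 - math.sin(2 * theta0)))

# Illustrative numbers only: a 20 mm pin, 25 mm guiding length, 5 kN radial load.
F, D, L = 5e3, 0.020, 0.025                                # N, m, m
print(uniform_pressure(F, D, L) / 1e6, "MPa")              # uniform model, about 10 MPa
print(sinusoidal_peak_pressure(F, D, L) / 1e6, "MPa")      # about 4/pi times higher
print(clearance_peak_pressure(F, D, L, math.pi / 2) / 1e6, "MPa")  # matches the sinusoidal case
```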
Case of a sinusoidal repartition of pressure As in the case of cylinder-cylinder contact, when the parts are modeled as elastic bodies with a negligible clearance, the pressure can be modeled with a sinusoidal repartition: P(θ, φ) = Pmax⋅cos θ with Pmax = 6F/(π⋅D²); the maximum pressure is 1.5 times bigger than in the case of uniform pressure. Hertz contact stress When the clearance can not be neglected, it is then necessary to know the value of the half contact angle θ0, which can not be determined in a simple way and must be measured. When this value is not available, the Hertz contact theory can be used. The Hertz theory is normally only valid when the surfaces can not conform, or in other terms, can not fit each other by elastic deformation; one surface must be convex and the other must be convex or flat. This is not the case here, as the outer cylinder is concave, so the results must be considered with great care. The approximation is only valid when the inner radius of the container R1 is far greater than the outer radius of the content R2, in which case the surface of the container is seen as flat by the content. However, in all cases, the pressure that is calculated with the Hertz theory is greater than the actual pressure (because the contact surface of the model is smaller than the real contact surface), which gives designers a safety margin for their design. In this theory, the radius of the female part (concave) is negative. A relative diameter of curvature is defined by 1/d* = 1/d1 + 1/d2, where d1 is the diameter of the female part (negative) and d2 is the diameter of the male part (positive). An equivalent modulus of elasticity is also defined by 1/E* = (1 − ν1²)/E1 + (1 − ν2²)/E2, where νi is the Poisson's ratio of the material of the part i and Ei its Young's modulus. For a cylinder-cylinder contact, the half-width of the contact strip is a = √(2F⋅d*/(π⋅L⋅E*)) (the full width being b = 2a), and the maximal pressure is in the middle: Pmax = 2F/(π⋅a⋅L) = √(2F⋅E*/(π⋅L⋅d*)). In the case of a sphere-sphere contact, the contact surface is a disk whose radius is a = (3F⋅d*/(8E*))^(1/3), and the maximal pressure is in the middle: Pmax = 3F/(2π⋅a²). Applications Bolt used as a stop In a bolted connection, the role of the bolts is normally to press one part onto the other; the adherence (friction) opposes the tangent forces and prevents the parts from sliding apart. In some cases, however, the adherence is not sufficient. The bolts then play the role of stops: the screws endure shear stress whereas the holes endure bearing pressure. To reduce the bearing pressure for a given load, the most effective measure is to increase the contact area, so that the load is distributed over a larger surface. In good design practice, the threaded part of the screw should be small and only the smooth part should be in contact with the plates; in the case of a shoulder screw, the clearance between the screw and the hole is very small (a case of rigid bodies with negligible clearance). If the acceptable pressure limit Plim of the material, the thickness t of the part and the diameter d of the screw are known, then the maximum acceptable tangent force for one bolt Fb,Rd (design bearing resistance per bolt) is: Fb,Rd = Plim × d × t. In this case, the acceptable pressure limit is calculated from the ultimate tensile stress fu and factors of safety, according to the Eurocode 3 standard. In the case of two plates with a single overlap and one row of bolts, the formula is: Plim = 1.5 × fu/γM2 where γM2 = 1.25 is a partial safety factor. 
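As a sketch of the Hertz estimate described above, the snippet below computes the contact half-width and maximal pressure for a pin in a bore, using the definitions of d* and E* given in this section. It assumes the standard Hertz line-contact result; the dimensions and material constants are illustrative only, and, as noted above, the result overestimates the real pressure for conforming surfaces.

```python
import math

def hertz_cylinder(F, L, d_male, d_female, E1, nu1, E2, nu2):
    """Hertz line contact of a male cylinder in a female cylinder (bore).

    d_female is passed as a positive bore diameter and treated as negative
    (concave surface) in the relative diameter of curvature.
    Returns (contact half-width a, maximal pressure Pmax).
    """
    d_star = 1.0 / (1.0 / d_male - 1.0 / d_female)             # relative diameter of curvature
    E_star = 1.0 / ((1 - nu1**2) / E1 + (1 - nu2**2) / E2)      # equivalent modulus of elasticity
    a = math.sqrt(2 * F * d_star / (math.pi * L * E_star))      # contact half-width
    p_max = math.sqrt(2 * F * E_star / (math.pi * L * d_star))  # peak pressure at the centre
    return a, p_max

# Illustrative numbers only: 20 mm steel pin in a 20.1 mm steel bore, 25 mm long, 5 kN load.
a, p = hertz_cylinder(F=5e3, L=0.025, d_male=0.020, d_female=0.0201,
                      E1=210e9, nu1=0.3, E2=210e9, nu2=0.3)
print(a * 1e3, "mm half-width,", p / 1e6, "MPa peak pressure")
```

For such a closely conforming pair the computed pressure is several times higher than the uniform or sinusoidal estimates, illustrating the conservative character of the Hertz approximation mentioned above.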
In more complex situations, the formula is: Plim = k1 × α × fu/γM2 where k1 and α are factors that take into account failure modes other than the bearing pressure overload; k1 takes into account the effects perpendicular to the tangent force, and α the effects along the force; k1 = min{2.8⋅e2/d0 ; 2.5} for end bolts, k1 = min{1.4⋅p2/d0 ; 2.5} for inner bolts, e2: edge distance from the centre of a fastener hole to the adjacent edge of the part, measured at right angles to the direction of load transfer, p2: spacing measured perpendicular to the load transfer direction between adjacent lines of fasteners, d0: diameter of the bolt hole; α = min{e1/(3d0) ; p1/(3d0) − 1/4 ; fub/fu ; 1}, with e1: end distance from the center of a fastener hole to the adjacent end of the part, measured in the direction of load transfer, p1: spacing between centers of fasteners in the direction of load transfer, fub: specified ultimate tensile strength of the bolt. When the parts are made of wood, the acceptable limit pressure is about 4 to 8.5 MPa. Plain bearing In plain bearings, the shaft is usually in contact with a bushing (a sleeve or a flanged bushing) to reduce friction. When the rotation is slow and the load is radial, the model of uniform pressure can be used (small deformations and small clearance). The product of the bearing pressure and the circumferential sliding speed, called the load factor PV, gives an estimate of the material's capacity to resist frictional heating. References Bibliography [Aublin 1992] [Chevalier 2004] [Fanchon 2001] [Fanchon 2011] [GCM 2000] [SG 2003] Bearings (mechanical) Mechanical engineering Solid mechanics
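The sketch below transcribes the bolt bearing-resistance expressions quoted above into a single calculation. It follows the formulas exactly as stated in this article (it is not a substitute for the full Eurocode 3 rules), and the bolt, hole and plate dimensions are illustrative assumptions only.

```python
def bearing_resistance(d, t, d0, e1, e2, p1, p2, fu, fub, end_bolt=True, gamma_M2=1.25):
    """Design bearing resistance per bolt Fb,Rd = Plim * d * t, with Plim = k1 * alpha * fu / gamma_M2.

    Dimensions in mm, strengths in MPa (N/mm^2), so the result is in N.
    """
    k1 = min(2.8 * e2 / d0, 2.5) if end_bolt else min(1.4 * p2 / d0, 2.5)
    alpha = min(e1 / (3 * d0), p1 / (3 * d0) - 0.25, fub / fu, 1.0)
    P_lim = k1 * alpha * fu / gamma_M2
    return P_lim * d * t

# Illustrative numbers only: M16 bolt (d = 16 mm) in an 18 mm hole, 10 mm plate,
# S235 plate (fu = 360 MPa), class 8.8 bolt (fub = 800 MPa).
F_bRd = bearing_resistance(d=16, t=10, d0=18, e1=40, e2=30, p1=60, p2=60,
                           fu=360, fub=800, end_bolt=True)
print(F_bRd / 1e3, "kN")
```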
Bearing pressure
[ "Physics", "Engineering" ]
1,954
[ "Applied and interdisciplinary physics", "Solid mechanics", "Mechanics", "Mechanical engineering" ]
48,718,678
https://en.wikipedia.org/wiki/Doppler%20tracking
Doppler tracking. The Doppler effect allows the measurement of the distance between a transmitter in space and a receiver on the ground by observing how the frequency received on the ground changes as the transmitter approaches the receiver, passes overhead, and moves away. While the transmitter is approaching, the received frequency appears to be higher than the transmitted frequency; as the transmitter moves away, the received frequency appears to be lower. When the transmitter is directly overhead, the transmitted frequency and the received frequency are the same. References Doppler effects
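A minimal sketch of the frequency history described above is given below for a straight, constant-velocity overhead pass. The first-order Doppler relation f_rx = f_tx⋅(1 − v_radial/c) is used, and the transmit frequency, speed and altitude are illustrative values, not taken from any particular mission.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def received_frequency(f_tx, speed, altitude, t):
    """Received frequency for a transmitter flying along x at `speed`,
    passing directly over the receiver (at the origin) at t = 0 and height `altitude`."""
    x = speed * t                           # along-track position of the transmitter
    slant_range = math.hypot(x, altitude)   # distance from receiver to transmitter
    v_radial = speed * x / slant_range      # range rate (positive when receding)
    return f_tx * (1.0 - v_radial / C)

f_tx = 2.2e9  # 2.2 GHz carrier, an S-band-like value
for t in (-300.0, -60.0, 0.0, 60.0, 300.0):   # seconds before/after the overhead point
    print(t, received_frequency(f_tx, speed=7600.0, altitude=500e3, t=t))
# Approaching (t < 0): the received frequency is above f_tx; at t = 0 they match;
# receding (t > 0): the received frequency falls below f_tx.
```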
Doppler tracking
[ "Physics", "Astronomy" ]
97
[ "Physical phenomena", "Outer space", "Astronomy stubs", "Astrophysics", "Doppler effects", "Outer space stubs" ]
48,719,490
https://en.wikipedia.org/wiki/R%20%26%20J%20Beck
R & J Beck was an optical manufacturing enterprise established in 1843 by brothers Richard Beck (1827–1866) and Joseph Beck FRAS, FRMS (June 1828 – 18 April 1891). The firm operated from its headquarters at 69 Mortimer Street in London. James Smith worked with the company for a time, until his retirement in 1865, and is recognised for his instrumental role in raising the profile of the microscope in scientific research. History Richard Beck (1827–1866) and Joseph Beck FRAS, FRMS (June 1828 – 18 April 1891) (nephews of J. J. Lister) formed the optical manufacturing firm of R and J Beck in 1843, based at 69 Mortimer Street, London. James Smith worked with the company under the name Smith and Beck, renamed Smith, Beck and Beck in 1854, which reverted to R and J Beck when Smith retired in 1865. Smith is credited with helping to raise the status of the use of microscopes within scientific research. Exhibitions and trade shows 1851 Great Exhibition Notable equipment Camera lenses of R and J Beck are known as Beck Ensign, and the Frena camera, which used celluloid films, was developed in the 1890s. A catalogue of work by R & J Beck from 1900 has been digitised as part of the Internet Archive; it features the terms of business and pricing from 1900, simplex microscopes, No. 10 London Microscope, No. 22 London Microscope, No. 29 London Microscope, Beck Pathological Microscope, No. 3201 Massive Microscope, Radial Research Microscope, Angular Model Microscope, Beck Combined Binocular and Monocular Microscope, Baby London Microscope, No. 3755 Portable Microscope, Pathological Microscope, Binomax magnifier, Greenough Binocular Microscope, Crescent Dissecting Microscope, Cornex Dissecting Microscope, Beck Ultra Violet Microscope made for J. E. Barnard F.R.S., Beck Object Glasses, Eyepieces, Beck-Chapman Opaque Illuminator, Photomicrographic Cameras, Optical Benches, Microtomes, University Micro-projector and Folding Pocket Magnifiers. Museums and Collections holding R and J Beck equipment Coats Observatory, Paisley, Scotland (contains a large collection of scientific and astronomical materials, including equipment by R and J Beck) National Museums of Scotland (microscopes by R and J Beck) National Science and Media Museum, Bradford, England (Beck field cameras) Science Museum, London (compound molecular microscope, acquired 2012; periscope for trench use) Museum of Technology, Lincolnshire (microscopes) Museum of the History of Science, Oxford (microscopes) Warren Anatomical Museum, Harvard (classroom demonstration microscopes) Hunterian Museum, Glasgow, Scotland (microscopes) Surgeons' Hall Museum, Edinburgh, Scotland (microscope) Scientific Instrument Collection, Macleay Museum, Sydney University (vertical illuminator) London School of Hygiene & Tropical Medicine Archives, London (crescent dissecting microscope, c.1900) Queen Victoria Museum and Art Gallery, Launceston, Tasmania, Australia (binocular microscope, 1865) Slideshow: Images of R and J Beck equipment from the Coats Observatory collection References Photography equipment Optics manufacturing companies Microscopes
R & J Beck
[ "Chemistry", "Technology", "Engineering" ]
648
[ "Microscopes", "Measuring instruments", "Microscopy" ]
48,723,288
https://en.wikipedia.org/wiki/Cloaca%20Circi%20Maximi
The Cloaca Circi Maximi or Cloaca Circi was one of the three main sewers in ancient Rome, alongside the Cloaca Maxima and the Chiavicone dell'Olmo. History The Cloaca Circi Maximi was built in the Augustan period to clear Rome of unhealthy bodies of water. It was originally a small stream fed by various sources from around the Porta Capena, running through the valley between the Palatine Hill and the Aventine Hill and down to the river Tiber. According to tradition, games and horse races were held in this valley from shortly after the founding of Rome in the 8th century BC. Over the centuries the Circus Maximus was built over the stream, with a channel named the Euripus running across it halfway and two bridges carrying the track over it. This sewer would drain the area around the Circus Maximus; the channel also served as the spina down the middle of the track. Under Julius Caesar and Augustus the circus and its surroundings were greatly enlarged, covering over the channel, which became a sewer. It was connected to a tunnel modelled on that of the Cloaca Maxima and thereafter terminated on the Tiber upstream of the Cloaca Maxima. Later in the sewer's history it was connected to culverts around the Colosseum and possibly the Baths of Caracalla. The Torre della Moletta, or Tower of Moletta, was built upon the ruins of the Cloaca Circi Maximi. References Bibliography Evolution of Water Supply Through the Millennia, p. 446. L. Richardson, jr, A New Topographical Dictionary of Ancient Rome, Baltimore – London, 1992, p. 84. Ancient Roman sewers in Rome Sewerage History of water supply and sanitation
Cloaca Circi Maximi
[ "Chemistry", "Engineering", "Environmental_science" ]
357
[ "Sewerage", "Environmental engineering", "Water pollution" ]
48,727,130
https://en.wikipedia.org/wiki/Nano%20neuro%20knitting
Nano neuro knitting is an emerging technology for repairing nervous system tissues via nano scaffolding techniques. Currently being explored in numerous research endeavors, nano neuro knitting has been shown to allow partial reinnervation in damaged areas of the nervous system through the interactions between potentially regenerative axons and peptide scaffolds. This interaction has been shown to restore sufficient axon density for functionality to return. While nano neuro knitting shows promise, the uncertainty of its effects in human subjects warrants further investigation before clinical trials begin. Mechanism The process of nano neuro knitting for nervous system tissue repair is carried out by engineering nanostructures for use as neural prosthetics and scaffolding in the brain. The nano neuro knitting process is twofold. Firstly, the nanostructure is constructed. This entails creating electrospun nanofibers that are combined with self-assembling peptides, molecules made up of amino acids that spontaneously form into nanostructures. Electrospun nanofibers are commonly used in tissue grafts as they resemble natural tissue and are easy to fabricate. Peptide-based nanomaterials are used due to their highly permissive nature, which creates a surface that nerve cells can readily attach to. Scaffolds using a silk fibroin peptide (SF16) have also shown promise in nerve repair due to silk's biocompatible composition and mechanical properties. Secondly, these nanostructures are transplanted into the area where tissue damage has occurred. Repairing damaged tissue in the nervous system using engineered nanofibers is a way of knitting damaged tissue back together. The main goal is to create a supplemental structure that imitates the body's natural connective tissue. This synthetic extracellular matrix works to fill in the gaps between damaged tissue sites, promoting axon regrowth and the return of normal neurological function. Potential applications Scaffolds produced using nanotechnology have enabled researchers to investigate clinically relevant applications that involve the promotion of tissue regeneration at sites of acute damage. In nano neuro knitting, these methods are applied specifically to the repair of tissues of the nervous system. Research and outlook Ophthalmic applications Nano neuro knitting has been researched for ophthalmic applications. The Massachusetts Institute of Technology (MIT) has tested a self-assembling peptide nanofiber scaffold (SAPNS) on hamsters to repair optic tract damage. Following injection, axon regeneration repaired the hamsters' transected superior colliculi and restored vision in the tested animals. The mechanism behind the regeneration observed in these hamster models has been proposed to involve local axons with the potential to regenerate, the surrounding extracellular matrix (ECM), and the peptides of the nano scaffold. It has been shown that nano scaffolds can be carefully constructed to promote axonal growth and prevent scar formation at lesion sites. Using alternating positive and negative L-amino acids to form β-sheet ionic self-complementary peptides, nanofibers of the SAPNSs mimic the environment of the ECM and have been shown to serve as effective scaffolds in both in vitro and in vivo studies, appearing to be immunologically inert, feasibly excreted, and nontoxic to biological systems. 
Whereas previous research has attempted to graft nerve tissue to the optic tract and resulted in complications (leg disabilities in the case of sciatic nerve grafts, for instance), nano neuro knitting has been shown to promote the regeneration of these tissues without such drawbacks. While more research is required in order to understand how this technology works, scientists propose that SAPNSs facilitate this neuroregeneration either by promoting cell migration into the lesion area or by bringing the lesion edges into closer proximity via contraction. Central nervous system obstacles One obstacle for drug delivery to the brain is the blood-brain barrier (BBB). The small size of nanomaterials, however, allows nanotechnologies to pass through. The scaffolds that enable nano neuro knitting are therefore able to bypass this boundary without disrupting the BBB, which serves the essential role of managing what can and cannot enter and leave the central nervous system (CNS). While nano neuro knitting and other nanotechnologies may eventually replace procedures currently used to repair damage to the CNS through their improved biodistribution and pharmacokinetics, the toxicity and long-term impacts of nanomaterial exposure in humans have yet to be sufficiently assessed. While some studies demonstrate no immediate toxicity or immune responses, it has yet to be determined whether this holds true for the human CNS (with particular concern for the retention of these materials in the brain and their capacity to form neurotoxic plaques) and the rest of the body's systems over time. Fortunately, SAPNSs may break down naturally through peptidase activity. In addition, promising methods are being explored to monitor axon regeneration in vivo, such as manganese-enhanced magnetic resonance imaging (MEMRI), which would provide patients with real-time feedback. In this way, these potential therapies could be monitored efficiently. Spinal cord injury repair Spinal cord injuries (SCIs) cause damage to the nervous system, which can result in neurological dysfunction. The main barrier to recovery from an SCI arises from the limited capacity of the tissue to regenerate, specifically when the portion of the nerve cell called the axon is damaged. Damage to the spinal cord can result in irreversible deficiencies including paralysis and loss of sensation. As in ophthalmic applications, research has demonstrated that nano scaffolds may be an effective tool for repairing SCIs. Such studies have utilized rat models to show that electrospun nanofibers and SAPNSs can effectively serve as guidance channels for regeneration of neural tissue lost at sites of SCI. Using these scaffolds with integrated, slowly released proregenerative cytokines showed that SCI rat models could repair contused spinal cord tissue. After six months, the spinal cord cysts were shown to be replaced by bundles of myelinated axons, ECM, and vascularization. In addition, the rat models have been shown to regain motor control after this treatment. Studies also suggest that culturing Schwann cells (SCs) and neural progenitor cells (NPCs) in SAPNSs prior to transplantation can significantly improve SCI repair by promoting axon and blood vessel development in the scaffold, which has been shown to knit the damaged tissue back together. References Nanomedicine Neuroscience
Nano neuro knitting
[ "Materials_science", "Biology" ]
1,372
[ "Nanomedicine", "Nanotechnology", "Neuroscience" ]